Quora: Why do people say things on the internet which they wouldn’t say in the real world?

Way back in 2004 (only 13 years ago but several lifetimes in internet years), a Professor of Psychology at Rider University named John Suler wrote a paper called The Online Disinhibition Effect. In it, he identifies the two kinds of online disinhibition:

Benign disinhibition. We’re more likely to open up, show vulnerability, and share our deepest fears. We help others, and we give willingly to strangers on sites like GoFundMe and Kickstarter.

Toxic disinhibition. We’re more likely to harass, abuse, and threaten others when we can’t see their face. We indulge our darkest desires. We hurt people because it’s easy.

Suler identified six factors through which the internet facilitates both benign and toxic disinhibition. Let’s look at three of them:

Anonymity. Have you ever visited an unfamiliar city and been intoxicated by the fact that no one knew you? You could become anyone you wanted; you could do anything. That kind of anonymity is rarely available in our real lives. Think about how you’re perceived by your family, friends, and co-workers. How often do you have the opportunity to indulge in unexpected — and potentially unwanted — thoughts, opinions, and activities?

Anonymity is a cloak. It allows us to become someone else (for better or worse), if only for the brief time that we’re online. If we’re unkind in our real lives, sometimes we’ll indulge in a bit of kindness online. And if we typically keep our opinions to ourselves, we often shout them all the louder on the internet.

Invisibility. Anonymity is a cloak that renders us—and the people we interact with—invisible. And when we don’t have to look someone in the eye it’s much, much easier to indulge our worst instincts.

“…the opportunity to be physically invisible amplifies the disinhibition effect… Seeing a frown, a shaking head, a sigh, a bored expression, and many other subtle and not so subtle signs of disapproval or indifference can inhibit what people are willing to express…”

Solipsistic Introjection & Dissociative Imagination. When we’re online, it feels like we exist only in our imagination, and the people we talk to are simply voices in our heads. And where do we feel most comfortable saying the kinds of things that we’re too scared to normally say? That’s right—in our heads, where it’s safe.

Just like retreating into our imagination, visiting the internet can be an escape from the overwhelming responsibilities of the real world. Once we’ve associated the internet with the “non-real” world, it’s much easier to say those things we wouldn’t say in real life.

“Online text communication can evolve into an introjected psychological tapestry in which a person’s mind weaves these fantasy role plays, usually unconsciously and with considerable disinhibition.”

The internet has enriched our lives in so many ways. We’re smarter (every single piece of information ever recorded can be accessed on your phone — think about that) and more connected (how many social networks do you belong to?) than ever.

We’re also dumber (how often do you mindlessly scroll through Facebook without actually reading anything?) and more isolated (we’re connected, but how well do we really know each other?).

Given that dichotomy, it makes sense that the internet brings out both the best and the worst in us. Benign disinhibition brings us together — and toxic disinhibition rips us apart.

Originally published on Quora



Quora: How can you reinforce and reward positive behavior in an online community?

Online communities have unlimited potential to be forces for positive change.

Too often we focus on the negative aspects of online communities. How many articles have been written about online toxicity and rampant trolling? It’s an important topic — and one we should never shy away from discussing — but for all the toxicity in the online world, there are many acts of kindness and generosity that go unnoticed.

There are a few steps that Community Managers can take to reinforce and reward positive behavior in their communities:

Promote and reinforce community guidelines. Before you can begin to champion positive behavior, ensure that it’s clearly outlined in your code of conduct. It’s not enough to say that you don’t allow harassment; if you want to prevent abuse, you have to provide a clear definition of what abuse actually entails.

A study was conducted to measure the effects of boundaries on children’s play. In one playground, students were provided with a vast play area, but no fences. They remained clustered around their teacher, unsure how far they could roam, uncertain of appropriate behavior. In another playground, children were given the same amount of space to play in, but with one key difference—a fence was placed around the perimeter. In the fenced playground, the children confidently spread out to the edges of the space, free to play and explore within the allotted space.

The conclusion? We need boundaries. Limitations provide us with a sense of security. If we know how far we can roam, we’ll stride right up to that fence.

Online communities are the playgrounds of the 21st century—even adult communities. Place fences around your playground, and watch your community thrive.

The flipside of providing boundaries/building fences is that some people will not only stride right up to the fence, they’ll kick it until it falls over. (Something tells us this metaphor is getting out of our control… ) When community members choose not to follow community guidelines and engage in dangerous behavior like harassment, abuse, and threats, it’s imperative that you take action. Taking action doesn’t have to be Draconian. There are innovative techniques that go beyond just banning users.

Some communities have experimented with displaying warning messages to users who are about to post harmful content. Riot Games has conducted fascinating research on this topic. They found that positive in-game messaging reduced offensive language by 62%.
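
Here’s a rough sketch of how that kind of pre-post warning could work. The `looks_harmful` check and the wording of the prompt are hypothetical stand-ins, not Riot’s or anyone else’s actual implementation:

```python
# Minimal sketch of a pre-post warning step.
# looks_harmful() stands in for whatever classifier or filter a community uses;
# the term list here is a toy assumption for illustration only.

def looks_harmful(message: str) -> bool:
    """Placeholder risk check; a real system would call a trained model or filter service."""
    flagged_terms = {"idiot", "loser"}  # toy example list
    return any(term in message.lower() for term in flagged_terms)

def submit_message(message: str, confirm) -> bool:
    """Show a warning before posting flagged content; confirm() asks the user whether to proceed."""
    if looks_harmful(message) and not confirm(
        "This message may break the community guidelines. Post it anyway?"
    ):
        return False  # user chose to edit or cancel instead
    # ...hand the message to the normal posting pipeline here...
    return True
```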

For users who repeatedly publish dangerous content, an escalated ban system can be useful. On their first offense, send them a warning message. On their second, mute them. On their third, temporarily ban their account, and so on.

Every community has to design a moderation flow that works best for them.
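
As a purely illustrative sketch, here’s what that kind of escalation ladder might look like in code. The rungs mirror the example above; the durations and the final permanent-ban rung are assumptions, not a prescribed policy:

```python
# Sketch of an escalating consequence ladder keyed to a user's offense count.
# Rungs, durations, and the final permanent ban are illustrative assumptions.

from datetime import timedelta

ESCALATION_LADDER = [
    ("warn", None),                    # first offense: warning message
    ("mute", timedelta(hours=24)),     # second offense: 24-hour mute
    ("temp_ban", timedelta(days=7)),   # third offense: temporary ban
    ("perma_ban", None),               # further offenses: permanent ban
]

def next_action(offense_count: int):
    """Return the (action, duration) for a user's Nth offense (1-indexed)."""
    index = min(offense_count, len(ESCALATION_LADDER)) - 1
    return ESCALATION_LADDER[index]
```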

Harness the power of user reputation and behavior-based triggers. These techniques rely on features that are unique to Community Sift, but the underlying ideas are valuable for any community.

Toxic users tend to leave signatures behind. They may have their good days, but most days are bad—and they’re pretty consistently bad. On the whole, these users tend to use the same language and indulge in the same antisocial behavior from one session to the next.

The same goes for positive users. They might have a bad day now and then; maybe they drop the stray F-bomb. But all in all, most sessions are positive, healthy, and in line with your community guidelines.

What if you could easily identify your most negative and most positive users in real time? And what if you could measure their behavior over time, instead of a single play session? With Community Sift, all players start out neutral, since we haven’t identified their consistent behavior yet. Over time, the more they post low-risk content, the more “trusted” they become. Trusted users are subject to a less restrictive content filter, allowing them more expressivity and freedom. Untrusted users are given a more restrictive content filter, limiting their ability to manipulate the system.

You can choose to let users know if their chat permissions have been opened up or restricted, thereby letting your most positive users know that their behavior will be rewarded.
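
To make the idea concrete, here’s a rough sketch of reputation-based filtering. This is not Community Sift’s actual model; the score range, thresholds, and filter levels are invented for illustration:

```python
# Illustrative sketch of reputation-based filtering: users drift toward "trusted"
# as they post low-risk content and toward "untrusted" as they post high-risk content.
# Scores, thresholds, and filter levels are invented for this example.

from dataclasses import dataclass

@dataclass
class UserReputation:
    score: float = 0.0  # everyone starts out neutral

    def record_post(self, risk: float) -> None:
        """risk is in [0, 1]; low-risk posts nudge the score up, high-risk posts push it down."""
        self.score += (0.5 - risk) * 0.1
        self.score = max(-1.0, min(1.0, self.score))  # keep the score bounded

    def filter_level(self) -> str:
        """Map the running score to a content-filter setting."""
        if self.score > 0.5:
            return "relaxed"      # trusted users get a less restrictive filter
        if self.score < -0.5:
            return "restrictive"  # untrusted users get a tighter filter
        return "standard"
```

A notification hook that fires whenever `filter_level()` changes would be one simple way to let your most positive users know their chat permissions have opened up.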

Publicly celebrate positive users. Community managers and moderators should go out of their way to call out users who exhibit positive behavior. For a forum or comments section, that could mean upvoting posts or commenting on posts. In a chat game, that could look like publicly thanking positive users, or even providing in-game rewards like items or currency for players who follow guidelines.

We believe that everyone should be free to share without fear of harassment or abuse. We think that most people tend to agree. But there’s more to stopping online threats than just identifying the most dangerous content and taking action on the most negative users. We have to recognize and reward positive users as well.

Originally published on Quora 
