Quora: Why do people say things on the internet which they wouldn’t say in the real world?

Way back in 2004 (only 13 years ago but several lifetimes in internet years), a Professor of Psychology at Rider University named John Suler wrote a paper called The Online Disinhibition Effect. In it, he identifies the two kinds of online disinhibition:

Benign disinhibition. We’re more likely to open up, show vulnerability, and share our deepest fears. We help others, and we give willingly to strangers on sites like GoFundMe and Kickstarter.

Toxic disinhibition. We’re more likely to harass, abuse, and threaten others when we can’t see their face. We indulge our darkest desires. We hurt people because it’s easy.

Suler identified eight ways in which the internet facilitates both benign and toxic disinhibition. Let’s look at three of them:

Anonymity. Have you ever visited an unfamiliar city and been intoxicated by the fact that no one knew you? You could become anyone you wanted; you could do anything. That kind of anonymity is rarely available in our real lives. Think about how you’re perceived by your family, friends, and co-workers. How often do you have the opportunity to indulge in unexpected — and potentially unwanted — thoughts, opinions, and activities?

Anonymity is a cloak. It allows us to become someone else (for better or worse), if only for the brief time that we’re online. If we’re unkind in our real lives, sometimes we’ll indulge in a bit of kindness online. And if we typically keep our opinions to ourselves, we often shout them all the louder on the internet.

Invisibility. Anonymity is a cloak that renders us—and the people we interact with—invisible. And when we don’t have to look someone in the eye it’s much, much easier to indulge our worst instincts.

“…the opportunity to be physically invisible amplifies the disinhibition effect… Seeing a frown, a shaking head, a sigh, a bored expression, and many other subtle and not so subtle signs of disapproval or indifference can inhibit what people are willing to express…”

Solipsistic Introjection & Dissociative Imagination. When we’re online, it feels like we exist only in our imagination, and the people we talk to are simply voices in our heads. And where do we feel most comfortable saying the kinds of things that we’re too scared to normally say? That’s right—in our heads, where it’s safe.

Just like retreating into our imagination, visiting the internet can be an escape from the overwhelming responsibilities of the real world. Once we’ve associated the internet with the “non-real” world, it’s much easier to say those things we wouldn’t say in real life.

“Online text communication can evolve into an introjected psychological tapestry in which a person’s mind weaves these fantasy role plays, usually unconsciously and with considerable disinhibition.”

The internet has enriched our lives in so many ways. We’re smarter (every single piece of information ever recorded can be accessed on your phone — think about that) and more connected (how many social networks do you belong to?) than ever.

We’re also dumber (how often do you mindlessly scroll through Facebook without actually reading anything?) and more isolated (we’re connected, but how well do we really know each other?).

Given that dichotomy, it makes sense that the internet brings out both the best and the worst in us. Benign disinhibition brings us together — and toxic disinhibition rips us apart.

Originally published on Quora



Quora: How can you reinforce and reward positive behavior in an online community?

Online communities have unlimited potential to be forces for positive change.

Too often we focus on the negative aspects of online communities. How many articles have been written about online toxicity and rampant trolling? It’s an important topic — and one we should never shy away from discussing — but for all the toxicity in the online world, there are many acts of kindness and generosity that go overlooked.

There are a few steps that Community Managers can take to reinforce and reward positive behavior in their communities:

Promote and reinforce community guidelines. Before you can begin to champion positive behavior, ensure that it’s clearly outlined in your code of conduct. It’s not enough to say that you don’t allow harassment; if you want to prevent abuse, you have to provide a clear definition of what abuse actually entails.

A study was conducted to measure the effects of boundaries on children’s play. In one playground, students were provided with a vast play area, but no fences. They remained clustered around their teacher, unsure how far they could roam, uncertain of appropriate behavior. In another playground, children were given the same amount of space to play in, but with one key difference—a fence was placed around the perimeter. In the fenced playground, the children confidently spread out to the edges of the space, free to play and explore within the allotted space.

The conclusion? We need boundaries. Limitations provide us with a sense of security. If we know how far we can roam, we’ll stride right up to that fence.

Online communities are the playgrounds of the 21st century—even adult communities. Place fences around your playground, and watch your community thrive.

The flipside of providing boundaries/building fences is that some people will not only stride right up to the fence, they’ll kick it until it falls over. (Something tells us this metaphor is getting out of our control…) When community members choose not to follow community guidelines and engage in dangerous behavior like harassment, abuse, and threats, it’s imperative that you take action. Taking action doesn’t have to be draconian. There are innovative techniques that go beyond just banning users.

Some communities have experimented with displaying warning messages to users who are about to post harmful content. Riot Games has conducted fascinating research on this topic. They found that positive in-game messaging reduced offensive language by 62%.

For users who repeatedly publish dangerous content, an escalated ban system can be useful. On their first offence, send them a warning message. On their second, mute them. On their third, temporarily ban their account, and so on.
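As a rough illustration, an escalation ladder like that can be expressed in a few lines of code. This is a minimal sketch with made-up thresholds and action names, not a prescription for any particular platform:

```python
# Minimal sketch of an escalated ban ladder. The thresholds and action
# names are illustrative assumptions, not recommendations.
from dataclasses import dataclass

@dataclass
class OffenceRecord:
    user_id: str
    offences: int = 0

    def next_action(self) -> str:
        """Return the moderation action for this user's latest offence."""
        self.offences += 1
        if self.offences == 1:
            return "warn"           # first offence: send a warning message
        if self.offences == 2:
            return "mute_24h"       # second offence: mute the account
        if self.offences == 3:
            return "temp_ban_7d"    # third offence: temporary ban
        return "permanent_ban"      # repeat offenders: escalate further

record = OffenceRecord("player_42")
print(record.next_action())  # "warn"
print(record.next_action())  # "mute_24h"
```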

Every community has to design a moderation flow that works best for them.

Harness the power of user reputation and behavior-based triggers. These techniques rely on features that are unique to Community Sift, but the underlying ideas are valuable tools for any community.

Toxic users tend to leave signatures behind. They may have their good days, but most days are bad—and they’re pretty consistently bad. On the whole, these users tend to use the same language and indulge in the same antisocial behavior from one session to the next.

The same goes for positive users. They might have a bad day now and then; maybe they drop the stray F-bomb. But all in all, most sessions are positive, healthy, and in line with your community guidelines.

What if you could easily identify your most negative and most positive users in real time? And what if you could measure their behavior over time, instead of a single play session? With Community Sift, all players start out neutral, since we haven’t identified their consistent behavior yet. Over time, the more they post low-risk content, the more “trusted” they become. Trusted users are subject to a less restrictive content filter, allowing them more expressivity and freedom. Untrusted users are given a more restrictive content filter, limiting their ability to manipulate the system.

You can choose to let users know if their chat permissions have been opened up or restricted, thereby letting your most positive users know that their behavior will be rewarded.
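The reputation mechanics above are specific to Community Sift, but the general pattern of a behaviour-based trust score that loosens or tightens the filter can be sketched roughly as follows. The update rule and thresholds here are invented for illustration and are not the product’s actual algorithm:

```python
# Rough sketch of a behaviour-based trust score. The update rule and
# thresholds are illustrative assumptions, not Community Sift's algorithm.
class UserTrust:
    def __init__(self):
        self.score = 0.0  # every user starts out neutral

    def record_message(self, risk: float) -> None:
        """risk ranges from 0.0 (harmless) to 1.0 (high risk), as judged by the content classifier."""
        if risk < 0.5:
            self.score += 0.5 - risk        # low-risk content slowly builds trust
        else:
            self.score -= 2 * (risk - 0.5)  # high-risk content erodes trust faster

    def filter_level(self) -> str:
        if self.score > 20:
            return "relaxed"      # trusted users get a less restrictive filter
        if self.score < -10:
            return "restricted"   # untrusted users get a more restrictive filter
        return "default"

trust = UserTrust()
for _ in range(60):
    trust.record_message(risk=0.1)
print(trust.filter_level())  # "relaxed" after a long run of low-risk messages
```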

Publicly celebrate positive users. Community managers and moderators should go out of their way to call out users who exhibit positive behavior. For a forum or comments section, that could mean upvoting posts or commenting on posts. In a chat game, that could look like publicly thanking positive users, or even providing in-game rewards like items or currency for players who follow guidelines.

We believe that everyone should be free to share without fear of harassment or abuse. We think that most people tend to agree. But there’s more to stopping online threats than just identifying the most dangerous content and taking action on the most negative users. We have to recognize and reward positive users as well.

Originally published on Quora 



Quora: What can social networks do to provide safer spaces for women?

For many women, logging onto social media is inherently dangerous. Online communities are notoriously hostile towards women, with women in the public eye—journalists, bloggers, and performers—often facing the worst abuse. But abuse is not just the province of the famous. Nearly every woman who has ever expressed an opinion online has had these experiences: Rape threats. Death threats. Harassment. Sometimes, even their children are targeted.

In the last few years, we’ve seen many well-documented cases of ongoing, targeted harassment of women online. Lindy West. Anita Sarkeesian. Leslie Jones. These women were once famous for their talent and success. Now their names are synonymous with online abuse of the worst kind.

And today we add a new woman to the list: Allie Rose-Marie Leost. Leost, an animator for EA Labs, had her social media accounts targeted this weekend in a campaign of online harassment. A blog post misidentified her as the lead animator for Mass Effect: Andromeda, and blamed her for the main character’s awkward facial animations. Turns out, Leost never even worked on Mass Effect: Andromeda. And yet she was forced to spend a weekend defending herself against baseless, crude, and sexually violent attacks from strangers.

Clearly, social media has a problem. It has been building for years, and it isn’t going away anytime soon.

A 2014 report by the Pew Research Center found that:

Young women, those 18-24, experience certain severe types of harassment at disproportionately high levels: 26% of these young women have been stalked online, and 25% were the target of online sexual harassment.

Young Women Experience Particularly Severe Forms of Harassment

We don’t want to discount the harassment and abuse that men experience online, particularly in gaming communities. This issue affects all genders. However, there is an additional level of violence and vitriol directed at women. And it almost always includes threats of sexual violence. Women are also more likely to be doxxed: to have their personal information shared online without their consent.

So, what can social networks do to provide safer spaces for women?

First, they need to make clear in their community guidelines that harassment, abuse, and threats are unacceptable, regardless of whether they’re directed at a man or a woman. For too long, social networks have adopted a “free speech at all costs” approach to community building. If open communities want to flourish, they have to define where free speech ends and accountability begins.

Then, social networks need to employ moderation strategies that:

Prevent abuse in real time. Social networks cannot depend only on moderators or users to find and remove harassment as it happens. Not only does that put undue stress on the community to police itself, it also ignores the fundamental problem—when a woman receives a rape threat, the damage is already done, regardless of how quickly it’s removed from her feed.

The best option is to stop abuse in real time, which means finding the right content filter. Text classification is faster and more accurate than it’s ever been, thanks to recent advances in artificial intelligence, machine learning, and Natural Language Processing (NLP).

Our expert system uses a cutting-edge blend of human ingenuity and automation to identify and filter the worst content in real time. People make the rules, and the system implements them.

When it comes to dangerous content like abuse and rape threats, we decided that traditional NLP wasn’t accurate enough. Community Sift uses Unnatural Language Processing (uNLP) to find the hidden, “unnatural” meaning. Any system can identify the word “rape,” but a determined user will always find a way around the obvious. The system also needs to identify the l337 5p34k version of r4p3, the backwards variant, and the threat hidden in a string of random text.
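As a toy example of the idea (and nothing more), a first pass at catching obvious obfuscations might normalize the text before matching it against a list. Real systems, uNLP included, go far beyond simple substitution maps; the character map and word list below are assumptions for illustration only:

```python
# Toy normalization pass for catching obvious obfuscations. The character
# map and word list are illustrative assumptions; real systems go far beyond this.
LEET_MAP = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "5": "s", "7": "t", "@": "a", "$": "s"})
HIGH_RISK_WORDS = {"rape"}  # placeholder list

def normalize(text: str) -> str:
    cleaned = text.lower().translate(LEET_MAP)
    # drop punctuation and symbols often used as padding between letters
    return "".join(ch for ch in cleaned if ch.isalpha() or ch.isspace())

def contains_high_risk(text: str) -> bool:
    words = set(normalize(text).split())
    reversed_words = {w[::-1] for w in words}   # catch the backwards variant too
    return bool(HIGH_RISK_WORDS & (words | reversed_words))

print(contains_high_risk("I will r4p3 you"))  # True
print(contains_high_risk("grape juice"))      # False
```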

Take action on bad actors in real time. It’s critical that community guidelines are reinforced. Most people will change their behavior once they know it’s unacceptable. And if they don’t, social networks can take more severe action, including temporary or permanent bans. Again, automation is critical here. Companies can use the same content filter tool to automatically warn, mute, or suspend accounts as soon as they post abusive content.

Encourage users to report offensive content. Content filters are great at finding the worst stuff and allowing the best. Automation does the easy work. But there will always be content in between that requires human review. It’s essential that social networks provide accessible, user-friendly reporting tools for objectionable content. Reported content should be funnelled into prioritised queues based on content type. Moderators can then review the most potentially dangerous content and take appropriate action.
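One simple way to implement that kind of prioritised queue is to rank reports by content type as they come in. The severity ordering below is an assumption made for the sake of the example:

```python
# Sketch of routing user reports into prioritised review queues.
# The severity ranking is an illustrative assumption.
import heapq

SEVERITY = {"threat": 0, "hate_speech": 1, "bullying": 2, "sexual_content": 3, "spam": 4}
review_queue = []
counter = 0  # tie-breaker so equal-severity reports keep their arrival order

def report(content_type, text):
    global counter
    priority = SEVERITY.get(content_type, len(SEVERITY))
    heapq.heappush(review_queue, (priority, counter, {"type": content_type, "text": text}))
    counter += 1

def next_for_review():
    return heapq.heappop(review_queue)[2] if review_queue else None

report("spam", "BUY GOLD NOW")
report("threat", "reported message text")
print(next_for_review()["type"])  # "threat" is reviewed before "spam"
```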

Social networks will probably never stop users from attempting to harass women with rape or death threats. It’s built into our culture, although we can hope for a change in the future. But they can do something right now—leverage the latest, smartest technology to identify abusive language in real time.

Originally published on Quora



Quora: What moderation techniques work best for social networks?

Moderation is a delicate art. It can take some real finesse to get it right. Every community is different and requires different techniques. But there are a few guiding principles that work for just about every product, from social networks to online games to forums.

Something to consider as you build your moderation strategy:

  • You have the power to shape the community.
  • Words have real consequences.

They may seem unconnected, but they’re profoundly linked. When creating a set of community guidelines and deciding how you will communicate and support them, you’re acknowledging that your community deserves the best experience possible, free of abuse, threats, and harassment. There is an old assumption that trolls and toxicity are inevitable by-products of the great social experiment that is the Internet, but that doesn’t have to be true. With the right techniques—and technology—you can build a healthy, thriving community.

First, it’s crucial that you set your community guidelines and display them in an area of your app or website that is readily available.

Some things to consider when setting guidelines:

  • The age/demographic of your community. If you’re in the US, and your community is marketed towards users under 13, by law you have to abide by the Children’s Online Privacy Protection Act (COPPA). The EU has similar regulations under the new General Data Protection Regulation (GDPR). In addition to regulating how you store Personally Identifiable Information (PII) on your platform, these laws also affect what kinds of information users can share with each other.
  • Know exactly where you stand on topics like profanity and sexting. It’s easy to take a stand on the really bad stuff like rape threats and hate speech. The trickier part is deciding where you draw the line with less dangerous subjects like swearing. Again, the age and demographic of your community will play into this. What is your community’s resilience level? Young audiences will likely need stricter policies, while mature audiences might be able to handle a more permissive atmosphere.
  • Ensure that your moderation team has an extensive policy guide to refer to. This will help avoid misunderstandings and errors when taking action on users’ accounts. If your moderators don’t know your guidelines, how can you expect the community to follow them?

Then, decide how you are going to moderate content. Your best option is to leverage software that combines AI (Artificial Intelligence) with HI (Human Intelligence). Machine learning has taken AI to a new level in the last few years, so it just makes sense to take advantage of recent advances in technology. But you always need human moderators as well. The complex algorithms powering AI are excellent at some things, like identifying high-risk content (hate speech, bullying, abuse, and threats). Humans are uniquely suited to more subtle tasks, like reviewing nuanced content and reaching out to users who have posted cries for help.

Many companies decide to build content moderation software in-house, but it can be expensive, complex, and time-consuming to design and maintain. Luckily, there are existing moderation tools on the market.

Full disclosure: My company Two Hat Security makes two AI-powered content moderation tools that were built to identify and remove high-risk content. Sift Ninja is ideal for startups and new products that are just establishing an audience. Community Sift is an enterprise-level solution for bigger products.

Once you’ve chosen a tool that meets your needs, you can build out the appropriate workflows for your moderators.

Start with these basic techniques:

  • Automatically filter content that doesn’t meet your guidelines. Why force your users to see content that you don’t allow? With AI-powered automation, you can filter the riskiest content in real time.
  • Automatically escalate dangerous content (excessive bullying, cries for help, and grooming) to queues for your moderators to review.
  • Automatically take action on users based on their behavior. Warn, mute, or ban users who don’t follow the guidelines. It’s not about punishment—Riot Games found that users who are given immediate feedback are far less likely to re-offend:

When players were informed only of what kind of behavior had landed them in trouble, 50% did not misbehave in a way that would warrant another punishment over the next three months.

  • Give users a tool to report objectionable content. Moderators can then review the content and determine if further action is required.

Building community is the fun part of launching a new social product. What kind of community do you want? Once you know the answer, you can get started. Draft your community guidelines, know how you will reinforce them, and invest in a moderation system that uses a blend of artificial and human intelligence. And once the hard stuff is out of the way—have fun, and enjoy the ride.  : )

Originally published on Quora



Four Moderation Strategies To Keep the Trolls Away

To paraphrase the immortal Charles Dickens:

It was the : ) of times, it was the : ( of times…

Today, our tale of two communities continues.

Yesterday, we tested our theory that toxicity can put a dent in your profits. We used our two fictional games AI Warzone and Trials of Serathian as an A/B test, and ran their theoretical financials through our mathematical formula to see how they performed.

And what were the results? The AI Warzone community flourished. With a little help from a powerful moderation strategy, they curbed toxicity and kept the trolls at bay. The community was healthy, and users stuck around.

Trials of Serathian paid the cost of doing nothing. As toxicity spread, user churn went up, and the company had to spend more and more on advertising to attract new users just to meet their growth target.

Today, we move from the hypothetical to the real. Do traditional techniques like crowdsourcing and muting actually work? Are there more effective strategies? And what does it mean to engineer a healthy community?

Charles Kettering famously said that “A problem well stated is a problem half-solved”; so let’s start by defining a word that gets used a lot in the industry, but can mean very different things to different people: trolls.

What is a Troll?

We’re big fans of the Glove and Boots video Levels of Trolling.

Technically these are goblins, but still. These guys again!

The crux of the video is that trolling can be silly and ultimately harmless — like (most) pranks — or it can be malicious and abusive, especially when combined with anonymity.

When we talk about trolls, we refer to users who maliciously and persistently seek to ruin other users’ experiences.

Trolls are persistent. Their goal is to hurt the community. And unfortunately, traditional moderation techniques have inadvertently created a culture where trolls are empowered to become the loudest voices in the room.

Strategies That Aren’t Working

Many social networks and gaming companies — including Trials of Serathian — take a traditional approach to moderation. It follows a simple pattern: depend on your users to report everything, give users the power to mute, and let the trolls control the conversation.

Let’s take a look at each strategy to see where it falls short.

Crowdsourcing Everything

Crowdsourcing — depending on users to report toxic chat — is the most common moderation technique in the industry. As we’ll discover later, crowdsourcing is a valuable tool in your moderation arsenal. But it can’t be your only tool.

Let’s get real — chat happens in real time. So by relying on users to report abusive chat, aren’t you in effect allowing that abuse to continue? The damage is already done by the time the abusive player is finally banned, or the chat is removed. It’s already affected its intended victim.

Imagine if you approached software bugs the same way. You have QA testers for a reason — to find the big bugs. Would you release a game that was plagued with bugs? Would you expect your users to do the heavy lifting? Of course not.

Community is no different. There will always be bugs in our software, just as there will always be users who have a bad day, say something to get a rise out of a rival, or just plain forget the guidelines. And there will always be users who want to watch the world burn — the ones we call trolls. If you find and remove trolls yourself, without depending on the community to do it for you, you go a long way towards creating a healthier atmosphere.

You earn your audience’s trust — and by extension their loyalty — pretty quickly when you ship a solid, polished product. That’s as true of community as it is of gameplay.

If you’ve already decided that you won’t tolerate harassment, abuse, and hate speech in your community, why let it happen in the first place?

Muting Annoying Players

Muting is similar to crowdsourcing. Again, you’ve put all of the responsibility on your users to police abuse. In a healthy community, only about 1% of users are true trolls — players who are determined to upset the status quo and hurt the community. When left unmoderated, that number can rise to as much as 20%.

That means that the vast majority of users are impacted by the behavior of the few. So why would you ask good players to press mute every time they encounter toxic behavior? It’s a band-aid solution and doesn’t address the root of the problem.

It’s important that users have tools to report and mute other players. But they cannot be the only line of defense in the war on toxicity. It has to start with you.

Letting The Trolls Win

We’ve heard this argument a lot. “Why would I get rid of trolls? They’re our best users!” If trolls make up only 1% of your user base, why are you catering to a tiny minority?

Good users — the kind who spend money and spread the word among their friends — don’t put up with trolls. They leave, and they don’t come back.

Simon Fraser University’s Reddit study found that a rise in toxicity is linked to slower community growth. Remember our formula in yesterday’s post? The more users you lose, the more you need to acquire, and the smaller your profits.

Trust us — there is a better way.

Strategies That Work

Our fictional game AI Warzone took a new approach to community. They proactively moderated chat with the intention of shaping a thriving, safe, and healthy community using cutting-edge techniques and the latest in artificial and human intelligence.

The following four strategies worked for AI Warzone — and luckily, they work in the real world too.

Knowing Community Resilience

One of the hardest things to achieve in games is balance. Developers spend tremendous amounts of time, money, and resources ensuring that no one dominant strategy defines gameplay. Both Trials of Serathian and AI Warzone spent a hefty chunk of development time preventing imbalance in their games.

The same concept can be applied to community dynamics. In products where tension and conflict are built into gameplay, doesn’t it make sense to ensure that your community members aren’t constantly at each other’s throats? Some tension is good, but a community that is always at war can hardly sustain itself.

It all comes down to resilience — how much negativity can a community take before it collapses?

Without moderation, players in battle games like AI Warzone and Trials of Serathian are naturally inclined to acts — and words — of aggression. Unfortunately, that’s also true of social networks, comment sections, and forums.

The first step to building an effective moderation strategy is determining your community’s unique resilience level. Dividing content into quadrants can help:

  • High Risk, High Frequency
  • High Risk, Low Frequency
  • Low Risk, High Frequency
  • Low Risk, Low Frequency

 

Where does your community draw the line?

Younger communities will always have a lower threshold for high-risk chat. That means stricter community guidelines with a low tolerance for swearing, bullying, and other potentially dangerous activity.

The older the community gets, the stronger its resilience. An adult audience might be fine with swearing, as long as it isn’t directed at other users.

Once you know what your community can handle, it’s time to look closely at your userbase.

Dividing Users Based on Behavior

It’s tempting to think of users as just a collection of usernames and avatars, devoid of personality or human quirks. But the truth is that your community is made up of individuals, all with different behavior patterns.

You can divide this complex community into four categories based on behavior.

 

The four categories of user behavior.

Let’s take a closer look at each risk group:

  • Boundary testers: High risk, low frequency offenders. These players will log in and instantly see what they can get away with. They don’t start out as trolls — but they will upset your community balance if you let them get away with it.
  • Trolls: High risk, high frequency offenders. As we’ve discussed, these players represent a real threat to your community’s health. They exist only to harass good players and drive them away.
  • Average users (don’t worry): Low risk, low frequency offenders. These players usually follow community guidelines, but they have a bad day now and then. They might take their mood out on the rest of the community, usually in high-stress situations.
  • Spammers: Low risk, high frequency offenders. Annoying and tenacious, but they pose a minor threat to the community.

Once you’ve divided your users into four groups, you can start figuring out how best to deal with them.

Taking Action Based on Behavior

Each of the four user groups should be treated differently. Spammers aren’t trolls. And players who drop an f-bomb during a heated argument aren’t as dangerous as players who frequently harass new users.

 

How to deal with different kinds of behavior.

Filter and Ban Trolls

Your best option is to deal with trolls swiftly and surely. Filter their abusive chat, and ban their accounts if they don’t stop. Set up escalation queues for potentially dangerous content like rape threats, excessive bullying, and threats, then let your moderation team review them and take action.

Warn Boundary Testers

A combination of artificial intelligence and human intelligence works great for these users. Set up computer automation to warn and/or mute them in real time. If you show them that you’re serious about community guidelines early on, they are unlikely to re-offend.

Crowdsource Average Users

Crowdsourcing is ideal for this group. Content here is low risk and low frequency, so if a few users see it, it’s unlikely that the community will be harmed. Well-trained moderators can review reported content and take action on users if necessary.

Mute Spammers

There are a couple of options here. You can mute spammers and let them know they’ve been muted. Or, for a bit of fun, try a stealth ban. Let them post away, blissfully unaware that no one in the room can see what they’re saying.
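Putting the two ideas together (classifying users by the risk and frequency of their behaviour, then applying the matching response) might look something like the sketch below. The cut-offs are invented for illustration:

```python
# Sketch of classifying users by risk and frequency, then picking the
# matching response. The cut-offs are illustrative assumptions.

def classify(avg_risk: float, offences_per_week: float) -> str:
    high_risk = avg_risk >= 0.5          # assumed cut-off
    high_freq = offences_per_week >= 3   # assumed cut-off
    if high_risk and high_freq:
        return "troll"
    if high_risk:
        return "boundary_tester"
    if high_freq:
        return "spammer"
    return "average_user"

RESPONSES = {
    "troll": "filter chat, escalate to moderators, ban if it continues",
    "boundary_tester": "automated warning or real-time mute",
    "average_user": "rely on crowdsourced reports and moderator review",
    "spammer": "mute, or stealth-ban so only they see their own posts",
}

print(RESPONSES[classify(avg_risk=0.8, offences_per_week=10)])   # the troll treatment
print(RESPONSES[classify(avg_risk=0.1, offences_per_week=0.2)])  # the average-user treatment
```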

Combining Artificial and Human Intelligence

The final winning strategy? Artificial intelligence (AI) and computer automation are smarter, more advanced, and more powerful than they’ve ever been. Combine that with well-trained and thoughtful human teams, and you have the opportunity to bring moderation and community health to the next level.

A great real-world example of this is Twitch. In December 2016, they introduced a new tool called AutoMod.

It allows individual streamers to select a unique resilience level for their own channel. On a scale of 1–4, streamers set their tolerance level for hate speech, bullying, sexual language, and profanity. AutoMod reviews and labels each message for the above topics. Based on the streamer’s chosen tolerance level, AutoMod holds the message back for moderators to review, then approve or reject.
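The hold-for-review pattern itself is easy to picture. The sketch below is not Twitch’s implementation; the categories, the 0 to 4 severity labels, and the strictness scale are assumptions used only to illustrate the idea:

```python
# Conceptual sketch of a hold-for-review gate. Not Twitch's implementation:
# the categories, severity labels, and strictness scale are assumptions.

def should_hold(message_scores: dict, strictness: int) -> bool:
    """
    message_scores: per-category severity from a classifier, 0 (clean) to 4 (severe).
    strictness: the channel's chosen level, 1 (most permissive) to 4 (strictest).
    Hold the message for moderator review when any category crosses the threshold.
    """
    threshold = 5 - strictness  # strictness 4 holds anything scored 1+, strictness 1 only the worst
    return any(score >= threshold for score in message_scores.values())

scores = {"hate_speech": 0, "bullying": 3, "sexual_language": 0, "profanity": 2}
print(should_hold(scores, strictness=4))  # True: a strict channel holds this for review
print(should_hold(scores, strictness=1))  # False: a permissive channel lets it through
```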

Reactions to AutoMod were resoundingly positive.

Positive user responses and great press? We hope the industry is watching.

The Cost of Doing Nothing

So, what have Trials of Serathian and AI Warzone taught us? First, we really, really need someone to make these games. Like seriously. We’ll wait…

 

This is as far as we got.

 

We learned that toxicity increases user churn, that traditional moderation techniques don’t work, and that community resilience is essential. We learned that trolls can impact profits in surprising ways.

In the end, there are three costs of doing nothing:

  • Financial. Money matters.
  • Brand. Reputation matters.
  • Community. People matter.

Our fictional friends at AI Warzone found a way to keep the trolls away — and keep profits up. They carefully considered how to achieve community balance, and how to build resilience. They constructed a moderation strategy that divided users into four distinct groups and dealt with each group differently. They consistently reinforced community guidelines in real-time. And in the process, they proved to their community that a troll-free environment doesn’t diminish tension or competition. Quite the opposite — it keeps it alive and thriving.

Any community can use the four moderation strategies outlined here, whether it’s an online game, social sharing app, or comments section, and regardless of demographic. And as we’ve seen with Twitch’s AutoMod, communities are welcoming these strategies with open arms and open minds.

One final thought:

Think of toxicity as a computer virus. We know that online games and social networks attract trolls. And we know that if we go online without virus protection, we’re going to get a virus. It’s the nature of social products, and the reality of the internet. Would you deliberately put a virus on your computer, knowing what’s out there? Of course not. You would do everything in your power to protect your computer from infection.

By the same token, shouldn’t you do everything in your power to protect your community from infection?

Want more? Check out the rest of the series.

At Two Hat Security, we use Artificial Intelligence to protect online communities from high-risk content. Visit our website to learn more.

Just getting started? Growing communities deserve to be troll-free, too.

Originally published on Medium



Doing The Math: Does Moderation Matter?

Welcome back to our series about the cost of doing nothing. Feeling lost? Take a minute to read the first two posts, The Other Reason You Should Care About Online Toxicity and A Tale of Two Online Communities.

Today we test our theory: when social products do nothing about toxicity, they lose money. Using AI Warzone and Trials of Serathian (two totally-made-up-but-awesome online games) as examples, we’ll run their theoretical financials through our mathematical formula to see how they perform.

Remember — despite being slightly different games, AI Warzone and Trials of Serathian have similar communities. They’re both competitive MMOs, are targeted to a 13+ audience, and are predominantly male.

But they differ in one key way. Our post-apocalyptic robot battle game AI Warzone proactively moderates the community, and our epic Medieval fantasy Trials of Serathian does nothing.

Let’s take a look at the math.

The Math of Toxicity

In 2014, Jeffrey Lin from Riot Games presented a stat at GDC that turned the gaming world on its head. According to their research, users who experience toxicity are 320% more likely to quit. That’s huge. To put that number in further perspective, consider this statistic from a 2015 study:

52% of MMORPG players reported that they had been cyber-victimized, and 35% said they had committed cyberbullying themselves.

A majority of players have experienced toxicity. And a surprising number of them admit to engaging in toxic behavior.

We’ll take those numbers as our starting point. Now, let’s add a few key facts — based on real data — about our two fictional games to fill in the blanks:

  • Each community has 1 million users
  • Each community generates $13.51 in revenue from each user
  • The base monthly churn rate for an MMO is 5%, regardless of moderation
  • According to the latest Fiksu score, it costs $2.78 to acquire a new user
  • They’ve set a 10% Month over Month growth target

So far, so good — they’re even.

Now let’s add toxicity into the mix.

Even with a proactive moderation strategy in place, we expect AI Warzone users to experience about 10% toxicity. It’s a complex battle game where tension is built into the game mechanic, so there will be conflict. Users in Trials of Serathian — our community that does nothing to mitigate that tension— experience a much higher rate of toxicity, at 30%.

Using a weighted average, we’ll raise AI Warzone’s churn rate from 5% to 6.6%. And we’ll raise Trials of Serathian to 9.8%.

Taking all of these numbers into account, we can calculate the cost of doing nothing using a fairly simple formula, where U is total users, and U¹ is next month’s total users:

U¹ = U − (U × Loss Rate) + Acquired through Advertising

Using our formula to calculate user churn and acquisition costs, let’s watch what happens in their first quarter.

Increased User Churn = Increased Acquisition Costs

In their first quarter, AI Warzone loses 218,460 users. And to meet their 10% growth rate target, they spend $1,527,498 to acquire more.

Trials of Serathian, however, loses 324,380 users (remember, their toxicity rate is much higher). And they have to spend $1,821,956 to acquire more users to meet the same growth target.

Let’s imagine that AI Warzone spends an additional $60,000 in that first quarter on moderation costs. Even with the added costs, they’ve still saved $234,457 in profits.

That’s a lot. Not enough to break a company, but enough to make executives nervous.
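For anyone who wants to check the arithmetic, the quarter can be reproduced with a few lines of code. The figures come straight from the assumptions above: the 320% statistic applied to the share of users who experience toxicity, a 5% base churn rate, $2.78 per acquired user, and a 10% monthly growth target.

```python
# Reproducing the first-quarter figures from the assumptions above.
BASE_CHURN = 0.05          # monthly churn rate for an MMO
QUIT_MULTIPLIER = 4.2      # "320% more likely to quit" = 4.2x the base rate
COST_PER_USER = 2.78       # acquisition cost per new user
GROWTH_TARGET = 1.10       # 10% month-over-month growth

def churn_rate(toxicity_exposure):
    # weighted average of unexposed and exposed users
    return (1 - toxicity_exposure) * BASE_CHURN + toxicity_exposure * BASE_CHURN * QUIT_MULTIPLIER

def first_quarter(toxicity_exposure, users=1_000_000):
    rate, lost, acquired = churn_rate(toxicity_exposure), 0.0, 0.0
    for _ in range(3):
        churned = users * rate
        target = users * GROWTH_TARGET
        lost += churned
        acquired += target - (users - churned)  # users bought to hit the growth target
        users = target
    return round(lost), round(acquired * COST_PER_USER)

print(churn_rate(0.10), churn_rate(0.30))  # about 0.066 and 0.098
print(first_quarter(0.10))                 # roughly (218460, 1527499) for AI Warzone
print(first_quarter(0.30))                 # roughly (324380, 1821956) for Trials of Serathian
```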

Let’s check back in at the end of the year.

The Seven Million Dollar Difference

We gathered a few key stats from our two communities.

When Trials of Serathian does nothing, their EOY results are:

  • Churn rate: 9.8%
  • User Attrition: -8,672,738
  • Total Profits (after acquisition costs): $39,784,858

And when AI Warzone proactively moderates, their EOY results are:

  • Churn rate: 6.6%
  • User Attrition: -5,840,824
  • Total Profits (after acquisition costs): $47,177,580

AI Warzone deals with toxicity in real time and loses fewer users in the process — by nearly 3 million. They can devote more of their advertising budget to acquiring new users, and their userbase grows exponentially. The end result? They collect $7,392,722 more in profits than Trials of Serathian, who does nothing.

Userbase growth with constant 30% revenue devoted to advertising.

And what does AI Warzone do with $7 million more in revenue? Well, they develop and ship new features, fix bugs, and even start working on their next game. AI Warzone: Aftermath, anyone?

These communities don’t actually exist, of course. And there are a multitude of factors that can affect userbase growth and churn rate. But it’s telling, nonetheless.

And there are real-world examples, too.

Sticks and Stones

Remember the human cost that we talked about earlier? Money matters — but so do people.

We mentioned Twitter in The Other Reason You Should Care About Online Toxicity. Twitter is an easy target right now, so it’s tempting to forget how important the social network is, and how powerful it can be.

Twitter is a vital platform for sharing new ideas and forging connections around the globe. Crucially, it’s a place where activists and grassroots organizers can assemble and connect with like-minded citizens to incite real political change. The Arab Spring in 2011 and the Women’s March in January of this year are only two examples out of thousands.

But it’s become known for the kind of abuse that Lily Allen experienced recently — and for failing to deal with it adequately. Twitter is starting to do something — over the last two years, they’ve released new features that make it easier to report and block abusive accounts. And earlier this week even more new features were introduced. The question is, how long can a community go without doing something before the consequences catch up to them?

Twitter’s user base is dwindling, and their stock is plummeting, in large part due to their inability to address toxicity. Can they turn it around? We hope so. And we have some ideas about how they can do it (stay tuned for tomorrow’s post).

What Reddit Teaches us About Toxicity and Churn

Reddit is another real-world example of the cost of doing nothing.

In collaboration with Simon Fraser University, we provided the technology to conduct an independent study of 180 subreddits, using a public Reddit data set. In their academic paper “The Impact of Toxic Language on the Health of Reddit Communities,” SFU analyzes the link between toxicity and community growth.

They found a correlation between an increase in toxic posts and a decrease in community growth. Here is just one example:

The blue line shows high-risk posts decreasing; the red line shows the corresponding increase in community growth.

It’s a comprehensive study and well worth your time. You can download the whitepaper here.

What Now?

Using our formula, we can predict how a proactive moderation strategy can impact your bottom line. And using our two fictional games as a model, we can see how a real-world community might be affected by toxicity.

AI Warzone chose to engineer a healthy community — and Trials of Serathian chose to do nothing.

But what does it mean to “engineer a healthy community”? And what strategies can you leverage in the real world to shape a troll-free community?

In tomorrow’s post, we examine the moderation techniques that AI Warzone used to succeed.

Spoiler alert: They work in real games, too.

Originally published on Medium



A Tale of Two Online Communities

What happens when two games with similar communities take two very different approaches to chat?

Welcome to the end of the world. We have robots!

Picture this:

It’s dark. The faint green glow of a computer screen lights your field of vision. You swipe left, right, up, down, tracing the outline of a floating brain, refining a neural network, making connections. Now, an LED counter flashes red to your right, counting down from ten. You hear clanking machinery and grinding cogs in the distance. To your left, a new screen appears: a scrap yard, miles of twisted, rusty metal. The metal begins to move, slowly. It shakes itself like a wet dog. The counter is closer to zero. Urgent voices, behind, below, above you:

“NOW.”

“YOUR TURN.”

“DON’T MESS IT UP!”

“LET’S DO THIS!”

“YOU GOT THIS!”

Welcome to AI Warzone, a highly immersive, choice-driven game in which players create machines that slowly gain self-awareness, based on the player’s key moral decisions. Set in 3030, the game pits machines against each other in the industrial ruins of Earth. You create and join factions with other users that can help or hinder your progress, leading to — as we see above — a tense atmosphere rife with competition. A complex game with a steep learning curve, AI Warzone is not for the faint of heart.

Welcome to the past. We have dragons!

Now, imagine this:

You stand atop a great rocky crag, looking down on a small village consisting of a few thatch-roofed cottages. A motley crew stands behind you: several slope-browed goblins, the towering figure of a hooded female Mage, and two small dragons outfitted with rough-hewn leather saddles.

You hold a gleaming silver sword in your hand. A group of black-robed men and women, accompanied by trolls and Mages, approach the village, some on dragon-back, others atop snarling wolves. Some of them shout, their voices ringing across the bleak landscape. Almost time, you whisper, lifting your broadsword in the air and swinging it, so it shines in the pale sun. Almost time…

“FUCK YOU FAGGOT,” you hear from far below.

“kill yurself,” a goblin behind you says.

“Show us yr tits!” yells one of the black-robed warriors in the village.

“Oh fuck this,” says the hooded female Mage. She disappears abruptly.

This is life in Trials of Serathian, an MMO set in the Medieval world of Haean. Users can play on the Dawn or Dusk side. On the Dawn side, they can choose to be descendants of the famed warrior Serathian, Sun Mages, or goblins; on the Dusk side, they can play as descendants of the infamous warrior Lord Warelind, Moon Mages, or trolls. Dawn and Dusk clans battle for the ultimate goal — control of Haean.

Two Communities, Two Approaches to Chat

Spoiler alert: AI Warzone and Trials of Serathian aren’t real games. We cobbled together elements from existing games to create two typical gaming communities.

Like most products with social components, both AI Warzone and Trials of Serathian struggle with trolls. And not the mythical, Tolkien-esque kind — the humans-behaving-badly-online kind.

In both games, players create intense bonds with their clan or faction, since they are dependent on fellow players to complete challenges. When players make mistakes, both games have seen incidents of ongoing harassment in retaliation. Challenges are complex, and new users are subject to intense harassment if they don’t catch on immediately.

Second spoiler alert: Only one of these games avoids excessive user churn. Only one of these games has to spend more and more out of their advertising budget to attract new users. And only one of these games nurtures a healthy, growing community that is willing to follow the creators — that’s you — to their next game. The difference? One of these games took steps to deal with toxicity, and the other did nothing.

In tomorrow’s post, we take a deep dive into the math. Remember our “math magic” from The Other Reason You Should Care About Online Toxicity? We’re going to put it to the test.

Originally published on Medium



Tackling Toxicity in Online Gaming Communities

The gaming industry is making a breakthrough.

For most of its history, internet gaming has been one big free-for-all. Users have seen little reprieve from pervasive hostility, particularly in anonymous environments.

A sustained lack of maintenance in any system results in faults, so it should come as no surprise that many industry leaders are finally ready to stop ignoring the issue and embrace innovative approaches.

As product and game designers, we create social experiences to enrich people’s lives. We believe social connections can have a profound transformational effect on humanity by giving people the ability to connect with anyone from anywhere. When we take a look around at the most popular web products to date — social media, social games, instant messaging — the greatest common denominator becomes apparent: each other. The online world now offers us a whole new way of coming together.

There is, however, a problem created when the social environment we are used to operating within is pared down to bare language alone. In the physical world, social conventions and body language guide us through everyday human interaction. Much of our communication happens non-verbally, offering our brains a wider range of data to interpret. Our reactions to potentially misleading messages follow a similar pattern of logic, primarily driven by the rich database of the unconscious mind.

Online, these cues disappear, placing developers who wish to discourage toxic discourse in an awkward position. Should we act quickly and risk misinterpretation, or give users the benefit of the doubt until a moderator can take a closer look? The second option comes with the equally unsavoury proposition of leaving abusive speech unattended for hours at a time, by which point others will have already seen it. With reports showing that users who experience toxicity in an online community are 320% more likely to quit, developers concerned with user retention can no longer afford to look the other way. So what are our options?

Methods for tackling community management generally fall into one of two categories: penalty or reward. Typical responses to bad behaviour include warning messages, partial restrictions from game features and, as a final measure, temporary or permanent bans. On the flipside, rewards for exemplary behaviour seem to offer more room for creativity. Multiplayer online battle arena game Defense of the Ancients has a commendation system whereby users can give out up to 6 commendations per week, based on four options: Friendly, Forgiving, Teaching, or Leadership. Commendable users receive no other tangible reward beyond prestige.

“Personally, [DotA’s commendation system] always incentivized me to try and be helpful in future games simply because leaving a game and feeling like you had a positive impact despite losing feels way better than raging at people and having them threaten to report you,” explains one Reddit user in a discussion thread centering around commendations in online games.

Another notable example is League of Legends’ recent move to give exclusive skins to users with no history of bans in the last year. A Pavlovian model of positive reinforcement seems to be gaining fast traction in the gaming industry.

Still, a complex problem requires a complex solution, and toxicity continues to persist in both these communities. With all the work that goes into creating a successful game, few studios have the time or resources left over to build, perfect, and localize intricate systems of penalty and reward.

The first step is acknowledging two inconvenient truths: context is everything, and our words exist in shades of gray. Even foul language can play a positive role in a community depending on the context. An online world for kids has different needs from a social network for adults, so there’s no one-size-fits-all solution.

Competing with the ever-expanding database of the human mind is no easy task, and when it comes to distinguishing between subtle shifts in tone and meaning, machines have historically fallen short. The nuances of human communication make the supervision of online communities a notoriously difficult process to automate. Of course, with greater scale comes a greater need for automation — so what’s a Product Manager to do?

Empowering Young Adults While Managing Online Risk

I recall being a young boy living in orchard country in the beautiful Okanagan Valley. By the age of 8, I had the run of my 37-acre orchard and its surrounding gullies and fields. I’d run, bike, hike, and explore with a German Shepherd as my co-conspirator and a backpack filled with trail mix. Occasionally, I’d wipe out and return home with some tears in my eyes and a wound on my leg, but it always healed and I was all the more diligent the next time.

Surrounding the orchard were various homes of people my family knew, and I knew I could visit them if I ever needed help. I was aware that talking to strangers could be dangerous, and I knew well enough to stay away from the dangerous bits of landscape (not that there were any cliffs or raging rivers; had there been, my radius of freedom might have been a little smaller).

Was there some risk? Yes. Was the risk of death or serious harm significant? No. Had it been, I wouldn’t have been allowed to travel so far and wide. Also, my life had been guided by my parents to ensure I knew how to make good decisions (which, for the most part, I did). This ability to assume an appropriate amount of risk helped guide me to be the person I now am. In truth, I’m a bit of an experience junkie, but I’m also a little risk averse. However, when thrust into difficult situations I don’t shy away from them.

My company provides filter and moderation tools for online communities. We do it very well. In years past, filters for online communities (that is to say, the bit of technology that blocks certain words and phrases) had to be either a blacklist filter or a whitelist filter. Blacklist filters make sure that nothing on the list is said. The problem with blacklist filtering is that you’re constantly trying to figure out the new ways of saying bad things. Whitelist filters are the opposite: they only allow users to say things that are on the whitelist, which proves to be a very restrictive way to communicate.

We decided to do it differently: we look at words and phrases and assign a risk level to each. We can then gauge how a word is used and look at the context in which it’s being used (is the user trusted, has the user demonstrated negative behaviour in the past, is the environment for older or younger users, etc.). We can then filter uniquely by user and context, eliminating the blunt approach of declaring every word either good or bad (yes, some words are just bad and others are just good).
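As a rough sketch of the difference between a flat blacklist and this risk-plus-context approach, consider the toy example below. The risk scores, thresholds, and trust adjustment are invented for illustration and are not our production rules:

```python
# Toy contrast between a flat blacklist and risk-plus-context filtering.
# Risk scores, thresholds, and the trust bonus are illustrative assumptions.

WORD_RISK = {"damn": 3, "hell": 3, "kill": 5, "<slur>": 9}  # hypothetical 0-10 scores

def allowed(word: str, audience: str, trusted_user: bool) -> bool:
    risk = WORD_RISK.get(word.lower(), 0)
    threshold = 3 if audience == "child" else 7   # younger communities tolerate less
    if trusted_user:
        threshold += 1                            # consistent good behaviour earns some leeway
    return risk < threshold

# The same word can be fine in one context and filtered in another:
print(allowed("damn", audience="adult", trusted_user=False))   # True
print(allowed("damn", audience="child", trusted_user=False))   # False
print(allowed("<slur>", audience="adult", trusted_user=True))  # False: some words are just bad
```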

An area we’re keenly interested in is how we can help replicate a healthy amount of risk in an online community without putting users in danger. Most parents accept that a child might fall and scrape their knee while playing on a playground. We also accept the risk that when a child plays with other children they might be on the receiving end of some not-nice behaviour. We hope this won’t happen, but when it does we comfort them and teach them about character and how they should react to such people. They will meet bullies throughout their entire life. In the online arena though, we’ve become quite scared of anything that might cause risk to a child, possibly with good reason. When we think about the effects of this, we are concerned that children are no longer learning important life lessons.

I love how Tanya Byron said that we must “use our understanding of how [children] develop to empower them to manage risks and make the digital world safer.”

Recently we’ve been asking ourselves, ‘How can we allow for a safe amount of risk to be present while providing tools that mimic real life?’ For example, in real life, a bully has to look into the eye of his or her victim. Although we can’t mimic that, we can deliver specific and timely responses to a bully that encourage them, at the moment of their bullying, to picture how others might receive what they’re saying. Another example might be the way an adult can engage in a situation that is beginning to get more serious. Even though we start to filter the sort of words that become more abusive, how can we then get this information to an adult or moderator as quickly and efficiently as possible so the adult can intervene? This is the subject of our current development, as we believe deeply that in order for kids to be truly safe online, they need to grow and develop skills that cause them to make smarter decisions and show greater amounts of empathy. This includes the need to look at what’s an appropriate amount of risk for children at all ages.

The internet is providing an unprecedented amount of access to people of all ages and backgrounds. Perhaps, as we progress in our understanding of its impact, more and more companies will start to realize the role they must take in helping it develop well. We must be willing to challenge assumptions and work through our own discomforts so that we can engage in a healthy discussion. As parents, we must challenge ourselves to see how technology has changed the way our children interact and how the risks we’re well aware of from when we were children are experienced in the digital world. How can we learn to help our kids fall gracefully and stand up again more confidently?

 

Originally published on the Family Online Safety Institute website