The Other Reason You Should Care About Online Toxicity


In these divisive and partisan times, there seems to be one thing we can all agree on, regardless of party lines — online toxicity sucks.

Earlier this week Lily Allen announced that she was leaving Twitter. After reading her recent thread about her devastating early labor in 2010, it's not hard to see why.

Does anyone want their social feeds to be peppered with hate speech or threats? Does anyone like logging into their favorite game and being greeted with a barrage of insults? And does anyone want to hear another story about cyberbullying gone tragically, fatally wrong? And yet we allow it to happen, time and time again.

The human cost of online abuse is obvious. But there’s another hidden cost when you allow trolls and toxicity to flourish in your product.

Toxicity is poison — and it will eat away at your profits.

Every company faces a critical decision when creating a social network or online game. Do you take steps to deal with toxicity from the very beginning? Do you proactively moderate the community to ensure that everyone plays nice?

Or — do you do nothing? Do you launch your product and hope for the best? Maybe you build a Report feature so users can report abuse or harassment. Maybe you build a Mute button so players can ignore other players who post offensive content. Sure, it’s a traditional approach to moderation, but does it really work?

If you're not sure which to choose, you're not alone. The industry has grappled with these questions for years.

We want to make it an easy choice. We want it to be a no-brainer. We want doing something to be the industry standard. We believe that chat is a game mechanic like any other, and that community balance is as important as game balance.

When you choose to do something, not only do you build the framework for a healthy, growing, loyal community — you’ll also save yourself a bunch of money in the process.

In this series of posts, we’ll introduce two fictional online games, AI Warzone and Trials of Serathian. We’ll people them with communities, each a million users strong. One game will choose to proactively moderate the community, and the other will do nothing. Think of it as an A/B test.

Then, armed with real-world statistics, our own research, and a few brilliant data scientists, we’ll perform a bit of math magic. We’ll toss them all into a hat (minus the data scientists; they get cranky when we try to put them in hats), say the magic words, wave our wands, and — tada! — pull out a formula. We’ll run both games’ profits, user churn, and acquisition costs through our formula to determine, once and for all, the cost of doing nothing.

But first, let’s have a bit of fun and delve into our fictional communities. Who is Serathian and why is he on trial? And what kind of virtual battles can one expect in an AI Warzone?

Join us tomorrow for our second installment in this four-part series: A Tale of Two Online Communities.


Originally published on Medium
