Does Your Online Community Need a Chat Filter?

“Chatting is a major step in our funnel towards creating engaged, paying users. And so, it’s really in Twitch’s best interests — and in the interest of most game dev companies and other social media companies — to make being social on our platform as pleasant and safe as possible.”

– Ruth Toner, Data Scientist at Twitch, UX Game Summit 2017

You’ve probably noticed a lot of talk in the industry these days about chat filters and proactive moderation. Heck, we talk about filters and moderation techniques all the time.

We’ve previously examined whether you should buy moderation software or build it yourself and shared five of the best moderation workflows we’ve found to increase productivity.

Today, let’s take a step back and uncover why filtering matters in the first place.

Filtering: First things first

Using a chat, username, or image filter is an essential technique for any product with social features. Ever find yourself buried under a mountain of user reports, unable to dig yourself out? That’s what happens when you don’t use a filter.

Every game, virtual world, social app, or forum has its own set of community guidelines. Guidelines and standards are crucial in setting the tone of an online community, but they can’t guarantee good behavior.

Forget about draconian measures. Gone are the days of the inflexible blacklist/whitelist, where every word is marked as either good or bad with no room for the nuances and complexity of language. Contextual filters designed around the concept of varying risk levels let you make choices based on your community’s unique needs.
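To make the contrast concrete, here is a minimal sketch of the risk-level idea. The names, topics, and phrase list are all hypothetical illustrations; a real contextual filter uses trained models that score whole messages in context, not a keyword lookup.

```python
# Sketch of a risk-level filter (hypothetical names and lexicon;
# production systems use trained classifiers, not keyword lookups).
from enum import IntEnum

class Risk(IntEnum):
    SAFE = 0
    MILD = 1      # e.g. mild vulgarity or taunting
    HIGH = 2      # e.g. sexual content
    SEVERE = 3    # e.g. hate speech

# Toy "classifier": maps phrases to (topic, risk).
LEXICON = {
    "gg ez": ("taunt", Risk.MILD),
    "i'll destroy you": ("violence", Risk.MILD),  # fine in a combat game
}

def classify(message: str):
    """Return a (topic, risk) pair for a message."""
    return LEXICON.get(message.lower(), ("general", Risk.SAFE))

def allowed(message: str, max_risk: Risk) -> bool:
    """Each community sets its own tolerance instead of a global blacklist."""
    _, risk = classify(message)
    return risk <= max_risk

# A combat game may tolerate taunts; a kids' platform may not.
allowed("GG EZ", max_risk=Risk.MILD)   # permitted for the combat game
allowed("GG EZ", max_risk=Risk.SAFE)   # blocked for the kids' platform
```

The point is that the same message gets different treatment depending on the community's configured tolerance, rather than being globally "good" or "bad."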

Plenty of online games contain violence and allow players to talk about killing each other, and that’s fine — in context. And some online communities may allow users to engage in highly sexual conversations in one-on-one chat.

For many communities, their biggest concern is hate speech. Users can swear, sext, and taunt each other to their heart’s content, but racial and religious epithets are forbidden. With a filter that can distinguish between vulgarity and hate speech, you can grant your users the expressivity they deserve while still protecting them from abuse.

With a smart filter that leverages an AI/human hybrid approach, you can give each of those communities the freedom to express themselves on their own terms.

Key takeaway #1: A content filter and automated moderation system are business-critical for products with user-generated content and user interactions.

Social is king

“A user who experiences toxicity is 320% more likely to quit.” – Jeffrey Lin, Riot Games

Back in 2014, a Riot Games study found a correlation between abusive player behavior and user churn. In fact, the study suggested that players who experience abuse in their first game session are potentially more than three times as likely to quit and never return.

In 2016, we conducted a study with an anonymous battle game and, interestingly, found that users who engaged in chat were three times more likely to keep playing after the first day.

While there is still a lot of work to be done in this field, these two preliminary studies suggest that social matters. Further studies may show slightly different numbers, but it’s well understood that negative behavior leads to churn in online communities.

When users chat, they form connections. And when they form connections, they return to the game.

But the flip side is also true: when users chat and their experience is negative, they leave. We all know it’s far more expensive to acquire a new user than to keep an existing one, so it’s critical that gaming and social platforms do everything in their power to retain new users. The first step? Ensure that their social experience is free from abuse and harassment.


Key takeaway #2: The benefits of chatting can be canceled out by user churn driven by exposure to high-risk content.


Reduce moderation workload and increase user retention

Proactive filtering and smart moderation don’t just protect your brand and your community from abusive content. They also have a major impact on your bottom line.

Habbo is a virtual hotel where users from around the world can design rooms, roleplay in organizations, and even open their own trade shops and cafes. When the company was ready to scale up and expand into other languages, they realized that they needed a smarter way to filter content without sacrificing users’ ability to express themselves.

By utilizing a more contextual filter than their previous blacklist/whitelist, Habbo managed to reduce their moderation workload by a whopping 70%. Without reportable content, there just isn’t much for users to report.

Friendbase is a virtual world where teens can chat, create, and play as friendly avatars. Similar to Habbo, the company launched their chat features with a simple blacklist/whitelist filter technology. However, chat on the platform was quickly filled with sexism, racism, and bullying behavior. For Friendbase, this behavior led to high user churn.

By leveraging a smarter, more contextual chat filter, they were able to manage their users’ first experiences and create a healthier, more positive environment. Within six months of implementing the new filtering and moderation technology, day-30 user retention had doubled. And just like Habbo, user reports decreased significantly. That means fewer moderators are needed, and the work they do is far more meaningful.

Key takeaway #3: Build a business case and invest in solid moderation software to make the most of healthy interactions in your online community.

Does your online community need a chat filter?

[postcallout title=”Thinking of Building Your Own Chat Filter?” body=”If you’re thinking about building your own chat filter, here are 5 important things to consider.” buttontext=”Read More” buttonlink=””]

Ultimately, you will decide what’s best for your community. But the answer is almost always a resounding “yes.”

Many products with social features launch without a filter (or with a rudimentary blacklist/whitelist) and find that the community stays healthy and positive — until it scales. More users means more moderation, more reports, and more potential for abuse.

Luckily, you can prevent those growing pains by launching with a smart, efficient moderation solution that blends AI and human review, ensuring that your community is protected from abuse and that users are using your platform for its intended purpose: connection, interaction, and engagement.
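The AI/human blend is often implemented as a triage step: the model acts on content it is confident about and routes only ambiguous cases to moderators. The sketch below is a hedged illustration of that idea; the thresholds and the `score` stand-in are hypothetical, and a real pipeline would call a trained abuse classifier.

```python
# Hedged sketch of an AI/human hybrid moderation pipeline.
# Thresholds and the score() stand-in are hypothetical illustrations.

AUTO_REMOVE = 0.95   # model is confident the content is abusive
AUTO_ALLOW = 0.10    # model is confident the content is fine

def score(message: str) -> float:
    """Stand-in for a trained model's abuse probability."""
    return 0.99 if "slur" in message else 0.05

def triage(message: str) -> str:
    s = score(message)
    if s >= AUTO_REMOVE:
        return "remove"        # filtered automatically, no report needed
    if s <= AUTO_ALLOW:
        return "allow"         # published without human involvement
    return "human_review"      # ambiguous cases go to moderators
```

Because only the uncertain middle band reaches the review queue, moderator time is spent on judgment calls rather than on clear-cut violations, which is where the workload reductions described above come from.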



Want more articles like this? Subscribe to our newsletter and never miss a blog!
