4 Musts for Safe In-Game Chat in any Language

A good in-game chat makes for more play.

Users engage more deeply and return more often, which improves important metrics such as lifetime value (LTV).

Two Hat proved all this about a year ago in our whitepaper for the gaming industry, An Opportunity to Chat.

For chat experiences to be considered “good” by users in the first place, though, you have to make sure that no one is excluded, bullied, or harassed away from your chat community and game before they ever get a chance to fall in love with it.

That said, it’s hard to deliver a consistently positive chat experience fluently and with nuance in one language, let alone in the world’s 20 most popular languages. Add in leet (aka 1337) and other ever-evolving unnatural-language hacks, and the task of scaling content moderation for global chat can be daunting.

With this shifting landscape in mind, Two Hat offers these 4 Musts for Safe In-Game Chat in any Language.

#1. Set expectations with clear guidelines
We change our language and behavior based on our environment. The very act of being online loosens some behavioral norms and often grants anonymity, so it’s important that users understand the guidelines for behavior in your community. As you ponder how to establish these guidelines, remember that cultural norms around the world are very different.

In other words, what is a reasonable chat policy in one language or culture may be inappropriate in another.

#2. Develop unique policies for each culture
French is spoken fluently in Canada, Africa, and the Caribbean, but the cultural experience in each of those places is entirely different.

Why?

Culture.

Native speakers know these nuances; translation engines do not. Two Hat can provide accurate and customizable chat filters built and supported by our in-house team of native speakers of over 20 languages.

These filters must be on every gaming site and inside every mobile gaming app.

#3. Let user reputation be your guide
Users with a good reputation should be rewarded. Positive users are aligned with the purpose of your product, as well as your business interests, and they’re the ones who keep others coming back.

For those few who harass others – in any language – set policies that automate appropriate measures.

For example: set a policy requiring human review of any message sent by a user with two negative incidents in the last seven days. In this way, user reputation shapes the in-game experience, and moderation effort is focused on the small minority of users who actually need it.
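To make that concrete, here is a minimal sketch of such a reputation rule in Python. It assumes a hypothetical per-user log of negative-incident timestamps; the threshold names and routing actions are illustrative, not Two Hat’s implementation.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds; tune them to your community's risk tolerance.
NEGATIVE_INCIDENT_LIMIT = 2
LOOKBACK_DAYS = 7

def requires_human_review(incident_times, now=None):
    """Return True when a user's recent negative incidents hit the policy limit.

    incident_times: datetimes at which negative incidents were recorded.
    """
    now = now or datetime.utcnow()
    window_start = now - timedelta(days=LOOKBACK_DAYS)
    recent = [t for t in incident_times if t >= window_start]
    return len(recent) >= NEGATIVE_INCIDENT_LIMIT

def route_message(message, incident_times):
    """Queue a message for moderator review or publish it immediately."""
    if requires_human_review(incident_times):
        return {"message": message, "action": "queue_for_review"}
    return {"message": message, "action": "publish"}
```

In practice the incident log would come from your moderation backend, and the “queue_for_review” action would feed your moderators’ review queue.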

#4. Tap your natural resources
In every language and in every culture the key to building opportunity is engaging your most committed players. The key to building safer and more inclusive in-game communities is the same.

Engaged, positive users empowered to flag and report negative experiences are the glue that binds in every language and culture.

Make sure each of them has a voice if they feel threatened or bullied, or if they witness others being harassed. Provide the community leaders who emerge with the tools and the voice to be a positive influence, and build a chat experience that’s as cool and inclusive as your game works to be.



From Censorship to Diligence: How Chat Moderation is Evolving

In the past, once you figured out there was a problem in your online community, it was probably too late to do much about it. That’s because chat and other social features were often home-grown or off-the-shelf solutions tacked on to games and other communities after the fact, rather than baked into product strategy.

So the go-to solution when there was a problem in a chat community was simply to disallow (formerly known as blacklisting) ‘offensive’ users. But blocking alone (words, people, etc.) doesn’t really solve the underlying issues, and it invites accusations of censorship against community managers (i.e. your brand). It was (and still is, for some) an unsustainable approach, and a new way of thinking was needed.

Today, chat and chat moderation are considered and strategized for at the design stage. Community guidelines, policies and the means to educate users as to acceptable chat conduct are established before products ever go to market. There are many reasons for the change, but the biggest may be the global shift to prioritizing a user’s experience with your product, rather than the product itself.

Experience first
The broader shift to experience-first practices has opened the door for brands to leverage chat communities as revenue drivers (see our whitepaper, An Opportunity to Chat, available on VentureBeat, for more on that).

At the same time though, prioritizing chat moderation means brands and community managers need to ask themselves some very tough, complex questions that they didn’t have to ask before.

Will new community members who may not know the rules be subject to the same policies as veteran users? How will you account for variance in ages (8-year-olds communicate differently than 38-year-olds)? What are your moderators going to do if a user threatens someone, or starts talking suicide? Should people be able to report one another? Are bans permanent? Do they carry over to other products, brands or communities?

Answering these questions takes a lot of research, discussion, and forethought. More than anything, it’s essential to be sure that the answers you arrive at, and the community experience you build, move your brand away from being perceived by users as a censor of discussion, and towards being perceived as their diligent partner in creating a great experience.

Diligence is the opposite of censorship
One conversation we often have when discussing chat moderation policies with clients pivots on turning the concern that moderation takes freedom of expression away from users into an opportunity: a discussion about how chat moderation, product, brand, and business strategy are often misaligned. In fact, it is essential for brands to move away from thinking of chat moderation as just a tool for managing risk, and towards the realization that it’s also a way to identify your most influential and profitable users. Why?

Because chat and chat moderation drive clear business improvements in user engagement, retention, and lifetime value. We also know that positive chat experiences contribute to ‘K Factor’, or virality, i.e. the better the chat experience, the more likely a user is to share their satisfaction with a friend.

So then, far from fearing the label of limiting user expression, the discussion your team needs to have about chat moderation is, “How can we encourage and scale the types of chat experiences shared by our most valuable users?”

Instead of just muting those who use bad words, pick out the positive things influential users chat about and see how they inspire others to engage and stick around. Discover what your most valuable, long-term users are chatting about and figure out how to surface those conversations for new and prospective users, sooner, and to greater effect.

Don’t fear the specter of censorship. Embrace the role of chat moderation as a powerful instrument of diligence, a productive business tool, and the backbone for a great user experience.



Will This New AI Model Change How the Industry Moderates User Reports Forever?

Picture this:

You’re a moderator for a popular MMO. You spend hours slumped in front of your computer reviewing a seemingly endless stream of user-generated reports. You close most of them — people like to report their friends as a prank or just to test the report feature. After the 500th junk report, your eyes blur over and you accidentally close two reports containing violent hate speech — and you don’t even realize it. Soon enough, you’re reviewing reports that are weeks old — and what’s the point in taking action after so long? There are so many reports to review, and never enough time…

Doesn’t speak to you? Imagine this instead:

You’ve been playing a popular MMO for months now. You’re a loyal player, committed to the game and your fellow players. Several times a month, you purchase new items for your avatar. Recently, another player has been harassing you and your guild, using racial slurs, and generally disrupting your gameplay. You keep reporting them, but it seems like nothing ever happens – when you log back in the next day, they’re still there. You start to think that the game creators don’t care about you – are they even looking at your reports? You see other players talking about reports on the forum: “No wonder the community is so bad. Reporting doesn’t do anything.” You log on less often; you stop spending money on items. You find a new game with a healthier community. After a few months, you stop logging on entirely.

Still doesn’t resonate? One last try:

You’re the General Manager at a studio that makes a high-performing MMO. Every month your Head of Community delivers reports about player engagement and retention, operating costs, and social media mentions. You notice that operating costs go up while the lifetime value of a user is going down. Your Head of Community wants to hire three new moderators. A story in Wired is being shared on social media — players complain about rampant hate speech and homophobic slurs in the game that appear to go unnoticed. You’re losing money and your brand reputation is suffering — and you’re not happy about it.

The problem with reports
Most social platforms give users the ability to report offensive content. User-generated reports are a critical tool in your moderation arsenal. They surface high-risk content that you would otherwise miss, and they give players a sense of ownership over and engagement in their community.

They’re also one of the biggest time-wasters in content moderation.

Some platforms receive thousands of user reports a day. Up to 70% of those reports don’t require any action from a moderator, yet moderators have to review them all. And the reports that do require action often contain content so obviously offensive that a computer algorithm should be able to detect it automatically. In the end, reports that genuinely require human eyes to make a fair, nuanced decision often get passed over.

Predictive Moderation
For the last two years, we’ve been developing and refining a unique AI model to label and action user reports automatically, mimicking a human moderator’s workflow. We call it Predictive Moderation.

Predictive Moderation is all about efficiency. We want moderation teams to focus on the work that matters — reports that require human review, and retention and engagement-boosting activities with the community.

Two Hat’s technology is built around the philosophy that humans should do human work, and computers should do computer work. With Predictive Moderation, you can train our innovative AI to do just that: ignore reports that a human would ignore, action reports that a human would action, and send reports that require human review directly to a moderator.
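As a rough illustration of that triage (not Two Hat’s production model or API), the workflow might be sketched in Python like this; the classifier interface, label names, and confidence threshold are all assumptions for the sake of the example.

```python
# Illustrative triage for user reports. The classifier, labels, and threshold
# are placeholder assumptions, not Two Hat's production model or API.

IGNORE, ACTION, HUMAN_REVIEW = "ignore", "action", "human_review"

def triage_report(report_text, classifier, confidence_threshold=0.9):
    """Label a report the way a trained moderator would, or escalate it.

    classifier(text) is assumed to return (label, confidence), where label is
    IGNORE or ACTION and confidence is a probability between 0 and 1.
    """
    label, confidence = classifier(report_text)
    if confidence < confidence_threshold:
        # The model is unsure -- exactly the reports that deserve human eyes.
        return HUMAN_REVIEW
    return label

def process_queue(reports, classifier):
    """Split a report queue into auto-closed, auto-actioned, and escalated buckets."""
    routed = {IGNORE: [], ACTION: [], HUMAN_REVIEW: []}
    for report in reports:
        routed[triage_report(report, classifier)].append(report)
    return routed
```

The design choice that matters here is the low-confidence branch: anything the model can’t label decisively goes to a human, so moderators spend their time on the nuanced calls rather than the obvious ones.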

What does this mean for you? A reduced workload, moderators who are protected from having to read high-risk content, and an increase in user loyalty and trust.

Getting started
We recently completed a sleek redesign of our moderation layout (check out the sneak peek below!). Clients begin training the AI on their datasets in January. Luckily, training the model is easy: moderators simply review user reports in the new layout, closing reports that don’t require action and actioning the ones that do.

[Image: chat moderation workflow for user-generated reports. Layout subject to change.]

“User reports are essential to our game, but they take a lot of time to review,” says one of our beta clients. “We are highly interested in smarter ways to work with user reports which could allow us to spend more time on the challenging reports and let the AI take care of the rest.”

Want to save time, money, and resources? 
As we roll out Predictive Moderation to everyone in the new year, expect to see more information including a brand-new feature page, webinars, and blog posts!

In the meantime, do you:

  • Have an in-house user report system?
  • Want to increase engagement and trust on your platform?
  • Want to prevent moderator burnout and turnover?

If you answered yes to all three, you might be the perfect candidate for Predictive Moderation.

Contact us at hello@twohat.com to start the conversation.


Two Hat CEO and founder Chris hosts a webinar on Wednesday, February 20th where he’ll share Two Hat’s vision for the future of content moderation, including a look at how Predictive Moderation is about to change the landscape of chat moderation. Don’t miss it — the first 25 attendees will receive a free Two Hat gift bag!



Adding Chat to Your Online Platform? First Ask Yourself These 4 Critical Questions

Want to retain users and lower the cost of acquisition on your platform? In 2018, social features including chat, private messaging, usernames, and user profiles are all must-haves in an overstuffed market where user retention is critical to long-term success. Nothing draws a crowd like a crowd — and a crowd of happy, loyal, and welcoming users will always bring in more happy, loyal, and welcoming users.

But there will always be risks involved when adding social features to your platform. A small percentage of users will post unwanted content like hate speech, NSFW images, or abusive language, all of which can cause serious damage to your brand’s reputation.

So while social features are must-haves in 2018, understanding (and mitigating) the risks inherent in adding those features is equally important.

If you’re just getting started with chat moderation (and even if you’ve been doing it for a while), here are four key questions to ask.

1. How much risk is my platform/brand willing to accept?
Every brand is different. Community demographics will usually be a major factor in determining your risk tolerance.

For instance, communities with users under 13 in the US have to be COPPA compliant, so preventing users from sharing PII (personally identifiable information) is essential. Edtech platforms have to mitigate risk by ensuring that they’re CIPA and FERPA compliant.

With legal ramifications to consider, platforms designed for young people will always be far more risk-averse than brands marketed towards more mature audiences.

However, many older, more established brands — even if they cater to an older audience — will likely be less tolerant of risk than small or new organizations.

Consider your brand’s tone and history. Review your corporate guidelines to understand what your brand stands for. This is a great opportunity to define exactly what kind of an online community you want to create.

2. What kind of content is most dangerous to my platform/brand?
Try this exercise: Imagine that one item (say, a forum post or profile pic) containing pornography was posted on your platform. How would it affect the brand? How would your audience react to seeing pornography on your platform? How would your executive team respond? What would happen if the media/press found out?

Same with PII — for a brand associated with children or teens, this could be monumental. (And if it happens on a platform aimed at users under 13 in the US, a COPPA violation can lead to potentially millions of dollars in fines.)

What about hate speech? Sexual harassment? What is your platform/brand’s definition of abuse or harassment? The better you can define these terms in relation to your brand, the better you will understand what kind of content you need to moderate.

3. How will I communicate my expectations to the community?
Don’t expect your users to automatically know what is and isn’t acceptable on your platform. Post your community guidelines where users can see them. Make sure users have to agree to your guidelines before they can post.

In a recent blog for CMX, Two Hat Director of Community Trust & Safety Carlos Figueiredo explores writing community guidelines you can stick to. In it, he provides an engaging framework for everything from creating effective guidelines from the ground up, to collaborating with your production team to create products that encourage healthy interactions.

4. What tools can I leverage to manage risk and enforce guidelines in my community?
We recommend taking a proactive instead of a reactive approach to managing risk. What does that mean for chat moderation? First, let’s look at the different kinds of chat moderation:

  • Live moderation: Moderators follow live chat in real time and take action as needed. High risk, very expensive, and not a scalable solution.
  • Pre-moderation: Moderators review, then approve or reject all content before it’s posted. Low risk, but slow, expensive, and not scalable.
  • Post-moderation: Moderators review, then approve or reject all content after it’s posted. High-risk option.
  • User reports: Moderators depend on users to report content, then review and approve or reject. High-risk option.

On top of these techniques, there are also different tools you can use to take a proactive approach, including in-house filters (read about the build internally vs buy externally debate), and content moderation solutions like Two Hat’s Community Sift (learn about the difference between a simple profanity filter and a content moderation tool).

So what’s the best option?

Regardless of your risk tolerance, always use a proactive filter. Content moderation solutions like Two Hat’s Community Sift can be tuned to match your risk profile. Younger communities can employ a more restrictive filter, and more mature communities can be more permissive. You can even filter just the topics that matter most. For example, mature communities can allow sexual content while still blocking hate speech.
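As an illustration of how that tuning might be expressed, here is a hedged Python sketch of topic-level policies keyed to a risk profile. It assumes an upstream classifier has already labeled each message with topics; the topic names and policy schema are hypothetical, not Community Sift’s actual configuration.

```python
# Hypothetical topic-level filter policies keyed to a community's risk profile.
# Topic names and settings are illustrative, not Community Sift's real schema.

POLICIES = {
    "under_13": {
        "sexual_content": "block",
        "hate_speech": "block",
        "profanity": "block",
        "personal_info": "block",   # PII is critical for COPPA-age communities
    },
    "mature": {
        "sexual_content": "allow",  # more permissive where the audience allows it
        "hate_speech": "block",     # while still blocking the topics that matter
        "profanity": "allow",
        "personal_info": "flag",    # send to pre- or post-moderation instead
    },
}

def apply_policy(detected_topics, risk_profile):
    """Block, flag, or allow a message based on the topics detected in it."""
    policy = POLICIES[risk_profile]
    decisions = [policy.get(topic, "allow") for topic in detected_topics]
    if "block" in decisions:
        return "block"
    if "flag" in decisions:
        return "flag"
    return "allow"
```

A younger community simply swaps in the stricter profile; the message pipeline itself stays the same.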

By using a proactive filter, you’ve already applied the first layer of risk mitigation. After that, we recommend using a blend of all four kinds of moderation, based on your brand’s unique risk tolerance. Brands that are less concerned about risk can depend mostly on user reports, while more risk-averse platforms can pre- or post-moderate content they deem potentially risky, but not risky enough to filter automatically.

Once you understand and can articulate your platform/brand’s risk tolerance, you can start to build Terms of Use and community guidelines around it. Display your expectations front and center, use proven tools and techniques to manage risk, and you’ll be well on your way to building a healthy, thriving, and engaged community of users — all without putting your brand’s reputation at risk.

Now, with your brand protected, you can focus on user retention and revenue growth.