PubNub & Two Hat: Stronger Together at Removing Negative, Unwanted Behaviors from Your Communities

Recently, there has been a seismic shift in the way the world looks at internet safety and how user-generated content like chat, comments, images, and videos is shared. The COVID-19 pandemic has reshaped our offline and online worlds in record time. More than ever before, people are turning to online communities to find entertainment, solace, and connection.

At Two Hat, we believe that everyone should be free to share online without fear of harassment or abuse. We also believe that making this vision a reality requires shared responsibility among many organizations, governments, and non-profits.

That’s why we have allied ourselves with PubNub and forged an even stronger relationship aimed at reducing negative or disruptive online behavior, improving user experiences, and protecting users on your platform, regardless of your industry, company size, or traffic volume.

Today, we announced you can now access Two Hat’s AI-powered real-time content moderation platform via PubNub Integrations. At the core of this partnership are integrations with Two Hat’s Community Sift and Sift Ninja, offering two ways to start removing negative, unwanted behavior, and bad actors from your communities.

Community Sift is an AI-powered, all-in-one content moderation platform that filters, classifies, reports, and escalates online harms in real time for enterprise-sized businesses. Meanwhile, Sift Ninja is a simple yet powerful profanity filter designed for smaller services, apps, and games.

For those building chat on PubNub, profanity filtering is a common and important feature. It’s vital for community health, especially as user traffic climbs and manual moderation becomes difficult or impossible.

Regulations and app store policies require profanity filtering, and it’s table stakes for any product or service used by children or families. For eLearning, high-occupancy live events, and games, content moderation and filtering play a key role in ensuring happy users and healthy communities.

“We couldn’t be more excited about this renewed partnership with PubNub,” said Mike Curliss, VP of Sales & Marketing at Two Hat. “PubNub provides great flexibility in letting you work with the tools via their PubNub Functions. Their commitment to building healthy communities is very much in alignment with Two Hat’s vision and mission.”

Both organizations firmly believe it is vital that their customers, and their end-users, have easy access to best-in-class tools to protect their communities. That’s where Two Hat comes in.

“Two Hat is the leading expert in AI-powered content moderation and profanity filtering, and we are thrilled to be partnered with them,” said Jonas Gray, VP of Business Development, PubNub. “By using API integrations between Two Hat and PubNub, developers can now easily enhance their chat applications with robust tools to intelligently remove harmful content before it reaches end-users.”

PubNub’s unwavering commitment to providing its clients with powerful, easy-to-implement solutions for in-app chat was another reason for this renewed partnership.

With these newest integrations, you can easily augment your in-app chat, whether it’s 1:1, group, or event chat for thousands of users. And, with the tools offered by Community Sift and Sift Ninja, you’ll have full control over how to handle filtered content and bad actors.

For more details, visit PubNub’s integrations pages for Community Sift and Sift Ninja. Additionally, each page contains sample code and documentation to get set up with these integrations in your PubNub application.
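To give a feel for how a Functions-based integration fits together, here is a rough sketch of a “before publish” PubNub Function that passes each chat message to a moderation service before other users see it. The endpoint URL, request shape, and response fields are placeholders for illustration only; the integration pages linked above contain the actual sample code and API contract.

```typescript
// Rough sketch of a "before publish" PubNub Function that sends each chat
// message to a moderation endpoint before it reaches other users.
// The URL, payload shape, and response fields below are placeholders,
// not Two Hat's actual API -- see the integration pages for the real contract.
export default (request) => {
  const xhr = require('xhr');

  const payload = {
    text: request.message.text,     // the chat text to classify
    user: request.message.sender,   // useful if the service tracks user reputation
  };

  return xhr
    .fetch('https://moderation.example.com/v1/classify', {  // placeholder URL
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    })
    .then((response) => {
      const result = JSON.parse(response.body);
      if (result.blocked) {
        // Mask disallowed text rather than dropping the message entirely.
        request.message.text = result.filteredText || '***';
      }
      return request.ok();          // publish the (possibly filtered) message
    })
    .catch(() => request.ok());     // fail open so chat keeps flowing if the call errors
};
```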

4 Musts for Safe In-Game Chat in Any Language

A good in-game chat makes for more play.

Users engage more deeply and return more often, which improves important metrics such as lifetime value (LTV).

Two Hat proved all this about a year ago in our whitepaper for the gaming industry, An Opportunity to Chat.

For chat experiences to be considered “good” by users in the first place, though, you have to make sure that no one is excluded, bullied, or harassed out of your chat community and game before they ever get a chance to fall in love with it.

That said, it’s hard to deliver a consistently positive chat experience fluently and with nuance in even one language, let alone in the world’s 20 most popular languages. Add in leet (aka 1337) speak and other ever-evolving unnatural-language hacks, and the task of scaling content moderation for global chat can be daunting.

With this shifting landscape in mind, Two Hat offers these 4 Musts for Safe In-Game Chat in Any Language.

#1. Set expectations with clear guidelines
Humans change their language and behavior based on their environment. The very act of being online loosens some behavioral norms and often provides anonymity, so it’s important that users understand the guidelines for behavior in your community. As you consider how to establish these guidelines, remember that cultural norms around the world are very different.

In other words, what is a reasonable chat policy in one language or culture may be inappropriate in another.

#2. Develop unique policies for each culture
French is spoken fluently in Canada, Africa, and the Caribbean, but the experience in each of those places is entirely different.

Why?

Culture.

Native speakers know these nuances; translation engines do not. Two Hat can provide accurate and customizable chat filters built and supported by our in-house team of native speakers of over 20 languages.

These filters must be on every gaming site and inside every mobile gaming app.

#3. Let user reputation be your guide
Users with a good reputation should be rewarded. Positive users are aligned with the purpose of your product, as well as your business interests, and they’re the ones who keep others coming back.

For those few who harass others – in any language – set policies that automate appropriate measures.

For example, set a policy requiring human review of any message sent by a user with two negative incidents in the last seven days. In this way, user reputation shapes each user’s in-game experience, rewarding positive behavior and containing negative behavior.
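As a rough illustration of what such a policy can look like in code (the incident type, field names, and thresholds below are invented for the example, not Two Hat’s actual model):

```typescript
// Simplified illustration of a reputation-driven review policy:
// any message from a user with 2+ negative incidents in the last 7 days
// is routed to a human moderation queue instead of being posted directly.
interface Incident {
  userId: string;
  timestamp: number; // epoch milliseconds
}

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function needsHumanReview(userId: string, incidents: Incident[], now = Date.now()): boolean {
  const recent = incidents.filter(
    (i) => i.userId === userId && now - i.timestamp <= SEVEN_DAYS_MS
  );
  return recent.length >= 2;
}

// Usage: route each message based on the sender's recent history.
// if (needsHumanReview(message.senderId, incidentLog)) { sendToReviewQueue(message); }
```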

#4. Tap your natural resources
In every language and in every culture, the key to building opportunity is engaging your most committed players. The key to building safer and more inclusive in-game communities is the same.

Engaged, positive users empowered to flag and report negative experiences are the glue that binds in every language and culture.

Make sure each of them has a voice if they feel threatened or bullied, or witness others being harassed. Provide the community leaders who emerge with the tools and standing to be a positive influence, and build a chat experience that’s as welcoming and inclusive as your game strives to be.



What Is the Difference Between a Profanity Filter and a Content Moderation Tool?

Profanity filter, content moderation, automated moderation tool, oh my! Ever noticed that these terms are often used interchangeably in the industry? The thing is, the many subtle (and not so subtle) differences between them can affect your long-term growth plans, and leave you stuck in a lengthy service contract with a solution that doesn’t fit your community.

Selecting the right software for content moderation is an important step if you want to build a healthy, engaged online community. To make things easier for you, let’s explore the main points of confusion between profanity filters and automated moderation tools.

Profanity filters catch, well, profanity

Profanity filters are pretty straightforward. They work by using a fixed blacklist/whitelist to allow or deny certain words. They’re great at finding your typical four-letter words, especially when they’re spelled correctly. Be aware, though: the minute you implement a blacklist/whitelist, your users are likely to start using language subversions to get around the filter. Even a simple manipulation like adding punctuation in the middle of an offensive word can cause a profanity filter to miss it, letting it slip through the cracks.
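A toy example makes the weakness concrete. This sketch (with a placeholder blacklist entry) shows how a word-list filter catches an exact match but misses the same word once punctuation or spacing is added:

```typescript
// Toy blacklist filter: flags a message only if it contains an exact
// blacklisted word. Illustrative only; real filters are far more involved.
const blacklist = ['badword'];

function naiveFilter(message: string): boolean {
  const words = message.toLowerCase().split(/\s+/);
  return words.some((w) => blacklist.includes(w));
}

naiveFilter('badword');       // true  -- caught
naiveFilter('bad.word');      // false -- a single period defeats the exact match
naiveFilter('b a d w o r d'); // false -- spacing defeats word-by-word matching
```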

Be prepared to work overtime adding words to your allow and deny lists, based on community trends and new manipulations.

A typical example of escalating filter subversion.

Profanity filters can be set up fast

One benefit of profanity filters, at least at first glance? They’re easy to set up. Many profanity filters allow you to enter your credit card and integrate in just a few minutes, and they often offer freemium versions or free trials to boot.

While this is great news for pre-revenue platforms and one-person shows, trading accuracy for speed can come back to bite you in the end. If you’re planning for growth and expect your community to scale, it’s in your best interest to implement the most effective and scalable moderation tools at launch. Remember that service contract we mentioned earlier? This is where you don’t want to get stuck with the wrong software for your community.

So, what are your other options? Let’s take a look at content moderation tools.

Content moderation tools filter more than just profanity

Online communities are made up of real people, not avatars. That means they behave like real people and use language like real people. Disruptive behavior (what we used to call “toxicity”) comes in many forms, and it’s not always profanity.

Some users will post abusive content in other languages. Some will harass other community members in more subtle ways — urging them to harm themselves or even commit suicide, using racial slurs, engaging in bullying behavior without using profanity, or doxxing (sharing personal information without consent). Still others will manipulate language with l337 5p34k, ÙniÇode ÇharaÇters, or kreative mizzpellingzz.
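Handling those manipulations usually starts with normalizing the text before any matching happens. The sketch below is a deliberately simplified illustration of that idea, not how any particular moderation product works internally:

```typescript
// Simplified pre-filter text normalization: fold accented Unicode characters
// and common leetspeak substitutions back to plain letters before matching.
// Real moderation pipelines go much further; this only shows the idea.
const leetMap: Record<string, string> = { '1': 'i', '3': 'e', '4': 'a', '5': 's', '7': 't', '0': 'o' };

function normalize(text: string): string {
  return text
    .normalize('NFKD')                        // split characters from their accents
    .replace(/[\u0300-\u036f]/g, '')          // drop the combining accent marks
    .toLowerCase()
    .replace(/[0-9]/g, (d) => leetMap[d] ?? d)
    .replace(/(.)\1{2,}/g, '$1$1');           // collapse long repeated letters ("zzz" -> "zz")
}

normalize('l337 5p34k');           // "leet speak"
normalize('ÙniÇode ÇharaÇters');   // "unicode characters"
```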

Accuracy is key here — and a profanity filter that only finds four-letter words cannot provide that same level of fine-tuned detection.

A context-based moderation tool can even make a distinction between words that are perfectly innocent in one context… but whose meaning changes based on the conversation (“balls” or “sausage” are two very obvious examples).

What else should you look for?

Vertical Chat

Also known as “dictionary dancing”. Those same savvy users who leverage creative misspellings to bypass community guidelines will also use multiple lines of chat to get their message across:

Vertical chat in action.
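One simple way to think about catching vertical chat is to keep a short rolling window of each user’s recent lines and check the stitched-together text as well as each individual line. The sketch below is purely illustrative; the window size and the `isDisallowed` helper are assumptions for the example:

```typescript
// Sketch of catching "vertical chat": remember a few recent lines per user
// and also check the joined text, so a phrase split across several messages
// can still be flagged.
const recentLines = new Map<string, string[]>();
const WINDOW = 5; // how many recent lines to remember per user

function checkVerticalChat(
  userId: string,
  line: string,
  isDisallowed: (text: string) => boolean
): boolean {
  const lines = recentLines.get(userId) ?? [];
  lines.push(line);
  if (lines.length > WINDOW) lines.shift();
  recentLines.set(userId, lines);

  // Flag if the line itself, or the last few lines stitched together, is disallowed.
  return isDisallowed(line) || isDisallowed(lines.join(' '));
}
```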

Usernames

Most platforms allow users to create a unique username for their profile. But don’t assume that a simple profanity filter will detect and flag offensive language in usernames. Unlike other user-generated content like chat, messages, comments, and forum posts, usernames rarely consist of “natural” language. Instead, they’re made up of long strings of letters and numbers — “unnatural” language. Most profanity filters lack the complex technology to filter usernames accurately, but some moderation tools are designed to adapt to all kinds of different content.

Language & Culture

Can you think of many online communities where users only chat in English? Technology has brought people of different cultures, languages, and backgrounds together in ways that were unheard of in the past. If scaling into the global market is part of your business plan, choose a moderation tool that can support multiple languages. Accuracy and context are key here. Look for moderation software that supports languages built in-house by native speakers with a deep understanding of cultural and contextual nuances.

User Reputation

There’s one final difference that we should call out here. Profanity filters treat everyone in the community the same. But anyone who has worked in online community management or moderation knows that human behavior is complex. Some users will never post a risky piece of content in their lifetime; some will break your community guidelines occasionally; some will consistently post content that needs to be filtered.

Profanity filters apply the same settings to all of these users, while some content moderation tools will actually look at the user’s reputation over time, and apply a more permissive or restrictive filter based on behavior. Pretty sophisticated stuff.

Content moderation tools can be adapted to fit your community

A “set it and forget it” approach might work for a static, unchanging community with no plans for growth. If that’s the case for you, a profanity filter might be your best option. But if you plan to scale up, adding new users while keeping your current userbase healthy, loyal, and engaged, a content moderation tool with a more robust feature set is a much better long-term option.

Luckily, in today’s world, most content moderation technology is just a simple RESTful API call away.

Not only that, content moderation tools allow you to moderate your community much more efficiently and effectively than a simple profanity filter. With automated workflows in place, you can escalate alarming content (suicide threats, child exploitation, extreme harassment) to queues for your team to review, as well as take automatic action on accounts that post disruptive content.
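As a rough sketch of that kind of workflow (the labels, severity scale, and thresholds below are invented for illustration, not any vendor’s actual schema), the routing logic can be as simple as:

```typescript
// Sketch of an automated moderation workflow: take a classification result
// and decide whether to publish, filter, or escalate to a human review queue.
type Classification = {
  label: 'clean' | 'profanity' | 'harassment' | 'self_harm' | 'child_safety';
  severity: number; // 0 (benign) .. 10 (most severe), invented scale
};

function routeContent(c: Classification): 'publish' | 'filter' | 'escalate' {
  if (c.label === 'self_harm' || c.label === 'child_safety') {
    return 'escalate';                     // always goes to a human review queue
  }
  if (c.severity >= 8) return 'escalate';  // e.g. extreme harassment
  if (c.severity >= 4) return 'filter';    // block or mask automatically
  return 'publish';
}
```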

Selecting a moderation solution for your platform is no easy task. When it’s time to decide, we hope you’ll use the information outlined above to make the right choice for your online community.