Chat, Comments, Forums, and more

Users who participate in positive social features are three times more likely to return on day two. Participation in healthy, moderated chat increases daily sessions more than fourfold and session length by 60%.

Whether it’s chat, comments, or forum posts, Two Hat’s context-based text filter and classification platform detects and removes harmful content before it reaches end-users.

Amplify positive user engagement and encourage community growth with:

  • 19 topics including cyberbullying, sexual harassment, hate speech, violent threats, suicide/self-harm, PII, and more, with all language classified on a sliding scale of risk
  • The best subversion detection in the industry
  • Filter manipulation attempts detected across multiple lines
  • Flexible policy guides mapped to your community guidelines for different use cases including public chat, private chat, usernames, and more
  • Customizable workflows and escalations based on your moderators' needs
  • The ability to make changes in real time without submitting a ticket

What else sets us apart?

Unnatural Language Processing. Our CEO and founder developed this new form of AI to normalize text in milliseconds and decipher manipulations like 1337 5P34k, vertical chat, misspellings, and more.
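
For illustration only, here is a toy sketch of what normalizing manipulated text before classification can look like; the substitution table and collapsing rule below are invented placeholders, not Two Hat's proprietary technology:

    # Toy normalizer: maps common leetspeak substitutions back to letters and
    # collapses spaced-out "vertical" text so a filter can classify the result.
    # The mapping and rules are illustrative, not Two Hat's actual system.
    LEET_MAP = str.maketrans({
        "1": "l", "3": "e", "4": "a", "5": "s",
        "7": "t", "0": "o", "@": "a", "$": "s",
    })

    def normalize(text: str) -> str:
        """Lowercase, undo simple leetspeak, and collapse all whitespace."""
        text = text.lower().translate(LEET_MAP)
        return "".join(text.split())

    print(normalize("1337 5P34k"))  # -> "leetspeak"
    print(normalize("h 4 t 3"))     # -> "hate"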

Innovative blend of linguistic templates and human-built rules, augmented with artificial intelligence. This approach classifies language with a level of context and nuance that simple word lists can't match. The Two Hat Language & Culture department and your community team can update the system in real time as new trends develop.

User Reputation

For years, social products have relied on simple deny/allow lists to filter text, which means that words are either good or bad. We know that’s not true, as some words and phrases exist in a grey area – it all depends on the context and who is saying it.

When a user is consistently abusive or offensive, our patented User Reputation technology blocks grey-area words where it matters and allows freer human interaction everywhere else.

And since all users have good and bad days, User Reputation automatically moves them between states, restricting and expanding permissions where appropriate.
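
As a rough illustration (not the patented implementation), the logic can be pictured as a simple state machine: a user's score moves them between trust states, and grey-area words are only blocked in the lower states. The state names, thresholds, and scoring below are invented for the example:

    # Hypothetical reputation state machine; states, thresholds, and scoring
    # are placeholders used only to illustrate the concept.
    class UserReputation:
        def __init__(self) -> None:
            self.score = 0            # rises with positive behaviour, falls with violations
            self.state = "default"

        def record_event(self, delta: int) -> None:
            """Update the score and move the user between trust states."""
            self.score += delta
            if self.score <= -5:
                self.state = "not_trusted"
            elif self.score >= 5:
                self.state = "trusted"
            else:
                self.state = "default"

        def blocks_grey_words(self) -> bool:
            # Ambiguous ("grey") words are blocked only for users with a poor
            # track record; trusted users get more latitude.
            return self.state == "not_trusted"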

User Reputation technology can also reduce manual moderation by automatically applying sanctions or rewards based on Trust level changes.

In addition, User Reputation can be used to identify users who attempt to manipulate the filter with unnatural language. Users in a Not-Trusted state are subject to stricter filter settings that target the common ways people try to break the filter.
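
A minimal sketch of how that automation might look, assuming invented policy thresholds and sanction names rather than Two Hat's actual configuration:

    # Hypothetical trust-based policy: stricter risk thresholds and automatic
    # actions for users in a Not-Trusted state. All values are illustrative.
    RISK_THRESHOLD_BY_STATE = {
        "trusted": 7,        # only the highest-risk content is filtered
        "default": 5,
        "not_trusted": 3,    # also filters lower-risk, grey-area content
    }

    def should_filter(message_risk: int, state: str) -> bool:
        """Filter a message whose risk meets the threshold for the user's state."""
        return message_risk >= RISK_THRESHOLD_BY_STATE[state]

    def on_trust_change(user_id: str, old_state: str, new_state: str) -> list:
        """Return automatic actions to apply when a user's trust state changes."""
        actions = []
        if new_state == "not_trusted":
            actions.append(("mute_24h", user_id))          # automatic sanction
            actions.append(("queue_for_review", user_id))  # escalate to a moderator
        elif old_state == "not_trusted":
            actions.append(("restore_permissions", user_id))  # automatic reward
        return actions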

Request Demo