Users who participate in positive social features are three times more likely to return on day two. Participation in healthy, moderated chat increases daily sessions by more than four times and session length by 60%.
Whether it’s chat, comments, or forum posts, Two Hat’s context-based text filter and classification platform detects and removes harmful content before it reaches end-users.
Amplify positive user engagement and encourage community growth.
What else sets us apart?
Unnatural Language Processing. Our CEO and founder developed this new form of AI to normalize text in milliseconds and decipher manipulations like 1337 5P34k, vertical chat, misspellings, and more.
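To make the idea concrete, here is a minimal, hypothetical sketch of this kind of normalization. The character map and function are illustrative assumptions, not Two Hat's proprietary technology, which handles far more manipulation patterns than simple character substitution.

```python
# Hypothetical sketch of leetspeak normalization. The substitution map is
# illustrative only; a production system handles many more manipulations
# (vertical chat, spacing tricks, Unicode look-alikes, misspellings).
LEET_MAP = str.maketrans({
    "1": "l", "3": "e", "4": "a", "5": "s",
    "7": "t", "0": "o", "@": "a", "$": "s",
})

def normalize(text: str) -> str:
    """Lowercase the text and map common character substitutions back to letters."""
    return text.lower().translate(LEET_MAP)

print(normalize("1337 5P34k"))  # -> "leet speak"
```

Normalizing first means the downstream classifier only has to recognize ordinary words, rather than every disguised variant a user might invent.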
Innovative blend of linguistic templates and human-built rules augmented with artificial intelligence. This approach allows for unprecedented context and nuance. The Two Hat Language & Culture department and your community team can update the system in real time as new trends develop.
For years, social products have relied on simple deny/allow lists to filter text, which treats every word as either good or bad. We know that's not true: some words and phrases exist in a grey area, and it all depends on the context and who is saying it.
When a user is consistently abusive or offensive, our patented User Reputation technology blocks grey-area words where it matters while allowing freer human interaction everywhere else.
And since all users have good and bad days, User Reputation automatically moves them between states, restricting and expanding permissions where appropriate.
User Reputation technology can also be leveraged to decrease manual moderation by automatically applying sanctions or rewards based on Trust level changes.
In addition, User Reputation can be used to identify users who attempt to manipulate the filter with unnatural language. In a Not-Trusted state, users face stricter filter settings designed to catch the tactics people commonly use to break the filter.
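The reputation flow described above can be sketched as a simple state machine. Everything here, including the state names, score thresholds, and update rules, is an illustrative assumption rather than Two Hat's actual design; the point is only to show how a score that drifts with behaviour can move users between permission levels automatically.

```python
# Hypothetical sketch of reputation-driven filtering. Thresholds and
# scoring rules are invented for illustration, not Two Hat's values.
from enum import Enum

class Trust(Enum):
    TRUSTED = "trusted"
    DEFAULT = "default"
    NOT_TRUSTED = "not_trusted"

class UserReputation:
    def __init__(self) -> None:
        self.score = 0

    def record(self, offensive: bool) -> None:
        # Score drops on offenses and recovers on clean messages, so users
        # with good and bad days move between states automatically.
        self.score += -3 if offensive else 1
        self.score = max(-10, min(10, self.score))

    @property
    def state(self) -> Trust:
        if self.score >= 5:
            return Trust.TRUSTED
        if self.score <= -5:
            return Trust.NOT_TRUSTED
        return Trust.DEFAULT

def allow_grey_word(rep: UserReputation) -> bool:
    # Grey-area words pass for everyone except users in a Not-Trusted state.
    return rep.state is not Trust.NOT_TRUSTED

rep = UserReputation()
for _ in range(3):
    rep.record(offensive=True)
print(rep.state, allow_grey_word(rep))  # restricted after repeated offenses
```

Because state transitions and the accompanying sanctions or rewards fire automatically on score changes, a system like this reduces the volume of decisions that reach human moderators.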