Since 2012, we have been developing industry-leading content moderation practices that help social networks build safe and healthy online communities. We believe that by removing negative social interactions we can make room for positive human connections.

We do this by applying our Five Layers of Community Protection for online safety.

Community Guidelines

First, align our policy guides to your community guidelines (e.g., no hate speech, violence, or sexual content) to reinforce your Terms of Use. Two Hat provides more than just a technical solution: we work closely with you, offering a white-glove, consultative service as you integrate. Let our Trust & Safety experts guide you through creating policy guides that reinforce your Terms of Use and help you scale your community safely.

LEARN MORE

Classify & Filter

Call our API before user-generated content goes live. We’ll return a moderation decision and surface emerging trends so you don’t have to. We use a unique blend of linguistic templates and human-built rules, augmented with artificial intelligence, for unprecedented context and nuance. This approach enables our language experts and your community team to update the AI in real time as new trends develop.
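A pre-moderation integration typically looks like the sketch below: classify content before publishing, then act on the result. This is illustrative only; the endpoint URL, request parameters, response fields, and risk thresholds are assumptions, not Two Hat's actual API.

```python
import json
from urllib import request

# Hypothetical endpoint -- stands in for a real moderation API.
MODERATION_ENDPOINT = "https://api.example.com/v1/classify"

def classify(text: str, api_key: str) -> dict:
    """POST user-generated content for classification before it goes live."""
    req = request.Request(
        MODERATION_ENDPOINT,
        data=json.dumps({"content": text}).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def decide(verdict: dict) -> str:
    """Map a verdict to an action. Assumed response shape: {"risk": 0-10}."""
    risk = verdict.get("risk", 0)
    if risk >= 8:
        return "block"      # never publish
    if risk >= 4:
        return "hold"       # queue for human review
    return "publish"        # goes live immediately
```

In a pre-moderation flow, only content that comes back below the risk threshold is published; borderline content is held for a moderator.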

LEARN MORE

User Reputation

If context is king, then reputation is queen. Some words and phrases exist in a grey area. When a user is consistently abusive or offensive, our patented User Reputation technology blocks grey-area words where it matters and allows freer human interaction everywhere else. And since all users have good and bad days, the system automatically moves them between states, restricting and expanding permissions as appropriate.
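The idea can be sketched as a simple state machine: behaviour moves a score, the score determines a state, and the state decides whether grey-area language is allowed. The states, scores, and thresholds below are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

# Illustrative states and thresholds only.
TRUSTED, DEFAULT, RESTRICTED = "trusted", "default", "restricted"

@dataclass
class UserReputation:
    score: float = 0.0  # rises with good behaviour, falls with abuse

    @property
    def state(self) -> str:
        if self.score <= -3:
            return RESTRICTED   # grey-area words are blocked
        if self.score >= 3:
            return TRUSTED      # maximum freedom of expression
        return DEFAULT

    def record(self, was_abusive: bool) -> None:
        # Users have good and bad days, so the score (and state) moves both ways.
        self.score += -1.0 if was_abusive else 0.5

def allow_grey_word(user: UserReputation) -> bool:
    """Block grey-area language only where it matters: for restricted users."""
    return user.state != RESTRICTED
```

Because the score moves in both directions, a user who cleans up their behaviour automatically regains permissions without moderator intervention.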

LEARN MORE

User Report Automation

International regulations like NetzDG in Germany and the Online Harms framework in the UK require you to deal with user-reported content in a timely manner. You can’t always hire 10,000 moderators, or immediately triage a time-sensitive report such as one flagging a terrorist attack. With our fourth layer of protection, you can leverage a custom neural network and train AI to consistently take the same actions your moderators take, reducing manual review by up to 70% so you can focus on the things that matter.
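The triage pattern can be sketched as follows: a model trained on past moderator decisions proposes an action, and only high-confidence predictions are actioned automatically while the rest go to a human. The predictor, threshold, and action names here are hypothetical stand-ins.

```python
# Illustrative triage loop: auto-action high-confidence reports, queue the rest.

def triage(report: str, predict) -> tuple[str, bool]:
    """Return (action, handled_automatically)."""
    action, confidence = predict(report)
    if confidence >= 0.9:          # assumed automation threshold
        return action, True        # same action a moderator would take
    return "human_review", False   # low confidence -> escalate to a person

# Stub standing in for a neural network trained on moderator decisions.
def stub_predict(report: str) -> tuple[str, float]:
    if "terror" in report.lower():
        return "escalate_urgent", 0.99
    return "dismiss", 0.6
```

Tuning the confidence threshold trades automation rate against error rate; the claimed "up to 70%" reduction corresponds to the share of reports the model handles above threshold.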

LEARN MORE

Transparency Reports

With new legislation being introduced worldwide, social networks will soon be expected to issue transparency reports outlining how they keep users safe. Surfacing online harms is no easy task, especially at scale. Our automation is already trained to find harms, so Two Hat will be able to produce transparency reports for you.

Request Demo