For years, social networks have relied on users to report abuse, hate speech, and other online harms. Reports are sent to moderation teams, who review each one individually. Many platforms receive thousands of reports daily, most of which can be closed without taking action.

Meanwhile, reports containing time-sensitive content — suicide threats, violence, terrorism, and child abuse — risk going unseen or not being reviewed until it’s too late.

There are legal implications as well. Germany's Network Enforcement Act (NetzDG) requires platforms to remove reported hate speech and other manifestly illegal content within 24 hours or face fines of up to 50 million euros. Similar laws governing reported content are being introduced in France, Australia, the UK, and across the globe.

With Two Hat's reported content product, Predictive Moderation, platforms can train a custom AI model on their moderation team's consistent decisions. The model automates the most time-consuming part of the moderation process: it closes false reports, takes action on obviously abusive reports, and triages the reports that require human eyes into queues for priority review.
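To make the idea concrete, here is a minimal, hypothetical sketch of that kind of triage logic in Python. The thresholds, category names, and routing rules below are illustrative assumptions only, not Two Hat's actual implementation; they simply show how a classifier score trained on past moderation decisions could route a report.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch only: thresholds, categories, and routing rules are
# assumptions for demonstration, not the Predictive Moderation product itself.

class Decision(Enum):
    CLOSE = "close"               # likely a false report, no action needed
    ACTION = "action"             # obviously abusive, sanction automatically
    PRIORITY_QUEUE = "priority"   # time-sensitive, route to humans first
    REVIEW_QUEUE = "review"       # ambiguous, route to standard human review

@dataclass
class Report:
    report_id: str
    text: str
    category: str  # e.g. "hate_speech", "self_harm", "spam"

def triage(report: Report, abuse_probability: float) -> Decision:
    """Route a user report using a model's predicted probability of abuse.

    `abuse_probability` would come from a classifier trained on the
    moderation team's past decisions (hypothetical here).
    """
    # Time-sensitive categories always go to humans with priority,
    # regardless of model confidence.
    urgent_categories = {"self_harm", "violence", "terrorism", "child_abuse"}
    if report.category in urgent_categories:
        return Decision.PRIORITY_QUEUE

    # High confidence the report is false: close it automatically.
    if abuse_probability < 0.05:
        return Decision.CLOSE

    # High confidence the content is abusive: action it automatically.
    if abuse_probability > 0.95:
        return Decision.ACTION

    # Everything in between needs human eyes.
    return Decision.REVIEW_QUEUE

if __name__ == "__main__":
    example = Report("r-1042", "example reported message", "hate_speech")
    print(triage(example, abuse_probability=0.40))  # Decision.REVIEW_QUEUE
```

In this sketch, only the high-confidence ends of the score distribution are automated, which mirrors the workflow described above: the model handles the clear-cut cases, and anything urgent or uncertain is escalated to human moderators.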

Predictive Moderation is all about efficiency. We want moderation teams to streamline their workflows and focus on the work that matters: reports that require urgent review, and community activities that boost retention and engagement.

Download our Predictive Moderation Overview to learn more:


Request Demo