Will This New AI Model Change How the Industry Moderates User Reports Forever?
You’re a moderator for a popular MMO. You spend hours slumped in front of your computer reviewing a seemingly endless stream of user-generated reports. You close most of them — people like to report their friends as a prank or just to test the report feature. After the 500th junk report, your eyes glaze over and you accidentally close two reports containing violent hate speech — and you don’t even realize it. Soon enough, you’re reviewing reports that are weeks old — and what’s the point in taking action after so long? There are so many reports to review, and never enough time…
That doesn’t speak to you? Imagine this instead:
You’ve been playing a popular MMO for months now. You’re a loyal player, committed to the game and your fellow players. Several times a month, you purchase new items for your avatar. Recently, another player has been harassing you and your guild, using racial slurs, and generally disrupting your gameplay. You keep reporting them, but it seems like nothing ever happens – when you log back in the next day, they’re still there. You start to think that the game creators don’t care about you – are they even looking at your reports? You see other players talking about reports on the forum: “No wonder the community is so bad. Reporting doesn’t do anything.” You log on less often; you stop spending money on items. You find a new game with a healthier community. After a few months, you stop logging on entirely.
Still doesn’t resonate? One last try:
You’re the General Manager at a studio that makes a high-performing MMO. Every month your Head of Community delivers reports about player engagement and retention, operating costs, and social media mentions. You notice that operating costs are going up while the lifetime value of a user is going down. Your Head of Community wants to hire three new moderators. A story in Wired is being shared on social media — players complain about rampant hate speech and homophobic slurs in the game that appear to go unnoticed. You’re losing money and your brand reputation is suffering — and you’re not happy about it.
The problem with reports
Most social platforms give users the ability to report offensive content. User-generated reports are a critical tool in your moderation arsenal. They surface high-risk content that you would otherwise miss, and they give players a sense of ownership over and engagement in their community.
They’re also one of the biggest time-wasters in content moderation.
Some platforms receive thousands of user reports a day. Up to 70% of those reports don’t require any action — yet moderators have to review them all. And the reports that do require action often contain content so obviously offensive that a computer algorithm should be able to detect it automatically. In the end, the reports that genuinely need human eyes to make a fair, nuanced decision often get passed over.
For the last two years, we’ve been developing and refining a unique AI model to label and action user reports automatically, mimicking a human moderator’s workflow. We call it Predictive Moderation.
Predictive Moderation is all about efficiency. We want moderation teams to focus on the work that matters — reports that require human review, and retention and engagement-boosting activities with the community.
Two Hat’s technology is built around the philosophy that humans should do human work, and computers should do computer work. With Predictive Moderation, you can train our AI to do just that — ignore reports that a human would ignore, action reports that a human would action, and send reports that require nuanced judgment directly to a moderator.
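The three-way routing described above can be sketched in a few lines of code. This is purely an illustrative sketch — the `classify` stub, label names, and confidence thresholds are assumptions for the example, not Two Hat’s actual model or API:

```python
def classify(report_text):
    """Stand-in for a trained model; returns a (label, confidence) pair.
    A real system would use a model trained on past moderator decisions."""
    if "slur" in report_text.lower():
        return ("action", 0.95)   # high-confidence abusive content
    if report_text.strip() == "":
        return ("ignore", 0.99)   # empty/junk report
    return ("review", 0.50)       # model is unsure

def triage(report_text, action_threshold=0.9, ignore_threshold=0.9):
    """Route a report: auto-action, auto-close, or escalate to a human."""
    label, confidence = classify(report_text)
    if label == "action" and confidence >= action_threshold:
        return "auto-action"      # clearly abusive: act without human review
    if label == "ignore" and confidence >= ignore_threshold:
        return "auto-close"       # prank or junk report: close automatically
    return "moderator-queue"      # uncertain: only these reach a person
```

The key design point is that only the uncertain middle band ever reaches a moderator, which is where the workload savings come from.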
What does this mean for you? A reduced workload, moderators who are protected from having to read high-risk content, and an increase in user loyalty and trust.
Predictive Moderation is currently in beta with several clients spanning the industry, from a social network for tweens to a gaming forum site, and everything in between.
We’ve just completed a sleek redesign of our moderation layout (check out the sneak peek!). Clients begin training the AI on their dataset in January. Luckily, training the model is easy — moderators simply review user reports in the new layout, closing reports that don’t require action and actioning those that do.
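In other words, everyday moderation decisions double as training labels. A minimal sketch of that idea, assuming each reviewed report is a record with hypothetical `text` and `actioned` fields (the field names are our own, not Two Hat’s schema):

```python
def to_training_examples(reviewed_reports):
    """Turn past moderator decisions into (text, label) pairs
    suitable for training a supervised classifier."""
    examples = []
    for report in reviewed_reports:
        # An actioned report teaches the model "action";
        # a closed report teaches it "ignore".
        label = "action" if report["actioned"] else "ignore"
        examples.append((report["text"], label))
    return examples
```

Because the labels fall out of work moderators are already doing, no separate annotation step is needed.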
“User reports are essential to our game, but they take a lot of time to review,” says one of our beta clients. “We are highly interested in smarter ways to work with user reports which could allow us to spend more time on the challenging reports and let the AI take care of the rest.”
Want to save time, money, and resources?
As we roll out Predictive Moderation to everyone in the new year, expect to see more information including a brand-new feature page, webinars, and blog posts!
In the meantime, do you:
- Have an in-house user report system?
- Want to increase engagement and trust on your platform?
- Want to prevent moderator burnout and turnover?
If you answered yes to all three, you might be the perfect candidate for Predictive Moderation.
Contact us at email@example.com to start the conversation.