The Changing Landscape of Automated Content Moderation in 2019

Is 2019 the year that content moderation goes mainstream? We think so.

Things have changed a lot since 1990 when Tim Berners-Lee invented the World Wide Web. A few short years later, the world started to surf the information highway – and we’ve barely stopped to catch our collective breath since.

Learn about the past, present, and future of online content moderation in an upcoming webinar

The internet has given us many wonderful things over the last 30 years – access to all of recorded history, an instant global connection that bypasses country, religious, and racial lines, Grumpy Cat – but it’s also had unprecedented and largely unexpected consequences.

Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard – it’s all adding up. Now that more than half of Earth’s population is online (4 billion people as of January 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

The change started two years ago.

Mark Zuckerberg’s 2017 manifesto hinted at what was to come:

“There are billions of posts, comments, and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

In 2018, the industry finally realized that it was time to find solutions to the problems outlined in Facebook’s manifesto. The question was no longer, “Should we moderate content on our platforms?” and instead became, “How can we better moderate content on our platforms?”

Learn how you can leverage the latest advances in content moderation in an upcoming webinar

The good news is that in 2019, we have access to the tools, technology, and years of best practices to make the dream of a safer internet a reality. At Two Hat, we’ve been working behind the scenes for nearly seven years now (alongside some of the biggest games and social networks in the industry) to build content moderation technology so accurate that we’re on the path to “invisible AI” – filters that work so well you don’t even notice them running in the background.

On February 20th, we invite you to join us for a very special webinar, “Invisible AI: The Future of Content Moderation”. Two Hat CEO and founder Chris Priebe will share his groundbreaking vision of artificial intelligence in this new age of chat, image, and video moderation.

In it, he’ll discuss the past, present, and future of content moderation, expanding on why the industry shifted its attitude towards moderation in 2018, with a special focus on the trends of 2019.

He’ll also share exclusive, advance details about:

We hope you can make it. Give us 30 minutes of your time, and we’ll give you all the information you need to make 2019 the year of content moderation.

PS: Another reason you don’t want to miss this – the first 25 attendees will receive a free gift! ; )


Read about Two Hat’s big announcements:

Two Hat Is Changing the Landscape of Content Moderation With New Image Recognition Technology

Two Hat Leads the Charge in the Fight Against Child Sexual Abuse Images on the Internet

Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

 

Will This New AI Model Change How the Industry Moderates User Reports Forever?

Picture this:

You’re a moderator for a popular MMO. You spend hours slumped in front of your computer reviewing a seemingly endless stream of user-generated reports. You close most of them — people like to report their friends as a prank or just to test the report feature. After the 500th junk report, your eyes blur over and you accidentally close two reports containing violent hate speech — and you don’t even realize it. Soon enough, you’re reviewing reports that are weeks old — and what’s the point in taking action after so long? There are so many reports to review, and never enough time…

Doesn’t speak to you? Imagine this instead:

You’ve been playing a popular MMO for months now. You’re a loyal player, committed to the game and your fellow players. Several times a month, you purchase new items for your avatar. Recently, another player has been harassing you and your guild, using racial slurs, and generally disrupting your gameplay. You keep reporting them, but it seems like nothing ever happens – when you log back in the next day, they’re still there. You start to think that the game creators don’t care about you – are they even looking at your reports? You see other players talking about reports on the forum: “No wonder the community is so bad. Reporting doesn’t do anything.” You log on less often; you stop spending money on items. You find a new game with a healthier community. After a few months, you stop logging on entirely.

Still doesn’t resonate? One last try:

You’re the General Manager at a studio that makes a high-performing MMO. Every month your Head of Community delivers reports about player engagement and retention, operating costs, and social media mentions. You notice that operating costs are going up while the lifetime value of a user is going down. Your Head of Community wants to hire three new moderators. A story in Wired is being shared on social media — players complain about rampant hate speech and homophobic slurs in the game that appear to go unnoticed. You’re losing money and your brand reputation is suffering — and you’re not happy about it.

The problem with reports

Most social platforms give users the ability to report offensive content. User-generated reports are a critical tool in your moderation arsenal. They surface high-risk content that you would otherwise miss, and they give players a sense of ownership and engagement in their community.

They’re also one of the biggest time-wasters in content moderation.

Some platforms receive thousands of user reports a day. Up to 70% of those reports don’t require any action — yet a moderator still has to review every one. And the reports that do require action often contain content so obviously offensive that an algorithm should be able to detect it automatically. In the end, the reports that genuinely need human eyes to make a fair, nuanced decision often get passed over.

Predictive Moderation

For the last two years, we’ve been developing and refining a unique AI model to label and action user reports automatically, mimicking a human moderator’s workflow. We call it Predictive Moderation.

Predictive Moderation is all about efficiency. We want moderation teams to focus on the work that matters — reports that require human review, and retention- and engagement-boosting activities with the community.

Two Hat’s technology is built around the philosophy that humans should do human work, and computers should do computer work. With Predictive Moderation, you can train our AI to do just that — ignore reports that a human would ignore, take action on reports that a human would action, and send reports that require human review directly to a moderator.

What does this mean for you? A reduced workload, moderators who are protected from having to read high-risk content, and an increase in user loyalty and trust.
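To make “mimicking a human moderator’s workflow” concrete, here is a minimal sketch (not Two Hat’s actual model or API, and with invented sample data) of how a report classifier can be trained on past moderator decisions and then used to route new reports:

```python
# Toy sketch only: train a text classifier on historical moderator decisions,
# then route incoming reports the same way. The labels mirror the three
# outcomes described above: close, action, escalate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: reported chat lines plus the decision a human
# moderator actually made on each report.
reports = [
    "he keeps spamming slurs at me and my guild",
    "this player is too good, must be cheating lol",
    "reported my friend as a joke, please ignore",
    "<explicit violent threat against another player>",
]
decisions = ["action", "close", "close", "escalate"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, decisions)

def route(report_text: str) -> str:
    """Predict which queue a new report belongs in."""
    return model.predict([report_text])[0]

print(route("another player keeps using slurs in chat"))  # routes the report automatically
```

Any model like this is only as good as the decisions it learns from, which is why the training step described below is simply moderators doing their normal review work in the new layout.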

Getting started 

We recently completed a sleek redesign of our moderation layout (check out the sneak peek below!). Clients begin training the AI on their own datasets in January. Luckily, training the model is easy — moderators simply review user reports in the new layout, closing reports that don’t require action and actioning those that do.

[Image: chat moderation workflow for user-generated reports. Layout subject to change.]

“User reports are essential to our game, but they take a lot of time to review,” says one of our beta clients. “We are highly interested in smarter ways to work with user reports which could allow us to spend more time on the challenging reports and let the AI take care of the rest.”

Want to save time, money, and resources? 

As we roll out Predictive Moderation to everyone in the new year, expect to see more information including a brand-new feature page, webinars, and blog posts!

In the meantime, do you:

  • Have an in-house user report system?
  • Want to increase engagement and trust on your platform?
  • Want to prevent moderator burnout and turnover?

If you answered yes to all three, you might be the perfect candidate for Predictive Moderation.

Contact us at hello@twohat.com to start the conversation.


Two Hat CEO and founder Chris Priebe hosts a webinar on Wednesday, February 20th, where he’ll share Two Hat’s vision for the future of content moderation, including a look at how Predictive Moderation is about to change the landscape of chat moderation. Don’t miss it — the first 25 attendees will receive a free Two Hat gift bag!

Managing Online Communities: How to Avoid Content Moderation Burnout

Community managers and online moderators know this all too well — moderating user-generated content can be draining.

You spend hours looking at text, usernames, and images. Content ranges from the mind-numbingly dull (false or accidental reports) to the emotionally devastating (discussions about suicide or abuse). Often, with a mountain of reports to sift through and a seemingly endless supply of content, it feels like you’ll never catch up.

The industry has long depended on user reports to moderate online content, without leveraging any safety layers in between. This approach has made for long, tedious workdays, and the inevitable emotional burnout that goes with it.

There has to be a better way.

Here are a few ideas you can leverage to keep your moderation team — and yourself — sane this year.

Something to keep in mind — we mostly talk about online games in this piece, but every technique is just as applicable to virtual worlds, social sharing apps, forums, and more.


1. Triage reports

Figuring out what is and isn’t a priority is one of the biggest challenges faced by moderation teams. Some companies receive up to 60,000 reports a day. In that sea of content, how can you possibly know what to review first? Inevitably, you’ll fall behind. And the longer you wait to review reported content and take action where needed, the less impact that action will have on future behavior.

But here’s the thing: Humans no longer need to do this kind of work. In the last few years, artificial intelligence has gotten a lot more… well, intelligent. You can now use an algorithm to analyze reports as they come in and move them into different queues based on risk level and priority. Even better, the algorithm can be trained on your community and your moderators’ actions. It’s not a one-size-fits-all approach.

Reports come in three varieties: No action needed, questionable, and you-better-deal-with-this.

Computers are great at identifying the easy stuff — the good (no action needed) — and the obviously bad (you-better-deal-with-this). Computers haven’t yet figured out how to make complex, nuanced decisions, which is where humans come in (the questionable). Human moderators belong in this grey, middle ground where our natural understanding of context and human behavior gives us an upper hand when making tough decisions.

Remember when Facebook removed the harrowing yet iconic photo of 9-year-old Kim Phuc fleeing a napalm attack in Vietnam? We can all understand how an algorithm would misunderstand the image and automatically remove it — and why a human reviewing the same picture would instead understand and consider the historical importance of an otherwise disturbing image.

That’s why it’s crucial that you leverage a twofold approach — humans and machines, working together, each playing to its strengths.

Players will report other players for no reason; it’s just human nature. So you will always have reports that don’t require action or human review. Whether you punish or restrict players for false reports is up to you (stay tuned for an upcoming blog where we explore this topic in further detail), but the end result is always the same — close the report and move on to the next one.

That’s wasted time that you and your team will never recover, regardless of how quickly you review, close, and take action on reports.

Want that time back? Try this approach (sketched in code after the list):

  • Let the machine identify and automatically close reports that don’t require moderator review.
  • Identify and automatically take action on reports that clearly require moderator action. This can be done with a combination of automation and human review; many companies leverage auto-sanctions but still review high-risk content to ensure that no further action needs to be taken.
  • Identify content in the grey zone and escalate it to a queue for priority moderator review.
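Here is a bare-bones version of that routing logic. The risk scores, thresholds, and queue names are illustrative assumptions for this sketch, not any particular vendor’s API:

```python
# Illustrative triage sketch: route each incoming report based on a risk score
# from your filter or classifier. Tune the thresholds against the decisions
# your own moderators actually make.
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str
    reported_text: str
    risk_score: float  # 0.0 (harmless) to 1.0 (severe), from your filter/model

AUTO_CLOSE_BELOW = 0.2   # clearly no action needed
AUTO_ACTION_ABOVE = 0.9  # clearly violates community guidelines

def triage(report: Report) -> str:
    if report.risk_score < AUTO_CLOSE_BELOW:
        return "auto_close"           # junk and prank reports never reach a human
    if report.risk_score > AUTO_ACTION_ABOVE:
        return "auto_action_and_log"  # sanction automatically, keep for audit
    return "human_review_queue"       # the grey zone goes to moderators first

print(triage(Report("p1", "gg ez noob", 0.05)))                  # auto_close
print(triage(Report("p2", "<violent threat>", 0.97)))            # auto_action_and_log
print(triage(Report("p3", "borderline personal insult", 0.55)))  # human_review_queue
```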

Of course, it doesn’t matter how smart AI is — building, testing, and tuning an algorithm takes precious time and resources — both of which are usually in short supply. That’s why we advocate for a “buy instead of build” approach (see Thinking of Building Your Own Chat Filter? 4 Reasons You’re Wasting Your Time! for more on this). Shop around to find the option that’s the best fit for your company.


2. Give moderators a break

Ever repeated a word so many times it lost all meaning? Spend too much time reviewing reports, and that’s exactly what can happen.

Even the most experienced and diligent moderators will eventually fall prey to one of two things:

  1. Everything starts to look bad, or
  2. Nothing looks bad

The more time you spend scanning chat or images, the more likely you are to see something that isn’t there or miss something important. The end result? Poor moderation and an unhappy community. A few ways to keep your team fresh:

  • Break up the day with live moderation and community engagement (see below for more about these proactive techniques).
  • Have your team take turns working on reports so the queue is reviewed continuously (if you want players to learn from their mistakes and change their behavior, it’s critical that you apply sanctions as close to the time of the offense as possible).
  • Ensure that moderators switch tasks every two hours to stay fresh, focused, and diligent.

3. Be proactive

Ultimately, what’s the best way to avoid having to review tens of thousands of reports? It’s simple: Don’t give players a reason to create reports in the first place. We don’t mean turn off the reporting function; it’s a critical piece in the moderation puzzle, and one of the best ways to empower your users. However, you can use a variety of proactive approaches that will curb the need to report.

Here are a few:

Use a chat/image filter

Until recently, content filters had a bad rap in the industry. Many companies scoffed at the idea of blocking certain words or images because “the community can handle itself,” or they saw filters as promoting censorship.

Today, the conversation has changed — and we’re finally talking seriously about online abuse, harassment, and the very real damage certain words and phrases can have.

Not only that, companies have started to realize that unmoderated and unfiltered communities that turn toxic aren’t just unpleasant — they actually lose money. (Check out our case study about the connection between proactive moderation and user retention.)

Like we said earlier, computers are great at finding the best and worst content, as defined by your community guidelines. So why not use a chat filter to deal with the best and worst content in real time?

Think about it — how many fewer reports would be created if you simply didn’t allow players to use the words or phrases you’re already taking action on? If there’s nothing to report… there are no reports. Of course, you’ll never get rid of reports completely, nor should you. Players should always be encouraged to report things that make them uncomfortable.

But if you automatically prevent the worst of the worst from ever being posted, you can ensure a much healthier community for your users — and spare yourself and your hard-working team the headache of reviewing thousands of threatening chat lines and images in the first place.

You can also use your moderation software (and remember, we always recommend that you buy instead of build) to automatically sanction users who post abusive content (whether it’s seen by the community or not), as defined by your guidelines.
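As a rough illustration of the idea, here is a toy real-time filter. The phrase list, function names, and sanction hook are invented for this sketch, not a real filter’s API; a production filter also has to handle misspellings, l33tspeak, context, and multiple languages:

```python
# Toy real-time filter sketch: block the worst content before it is ever posted
# and record a violation so your sanction workflow can follow up.
BLOCKED_PHRASES = {"<slur>", "<explicit threat>"}  # placeholder entries only

def handle_chat_message(user_id: str, message: str) -> bool:
    """Return True if the message may be posted, False if it was blocked."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        record_violation(user_id, message)  # feeds the sanction ladder below
        return False                        # the community never sees it
    return True

def record_violation(user_id: str, message: str) -> None:
    # In a real system this would write to your moderation backend.
    print(f"violation logged for {user_id}: {message!r}")
```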

Speaking of automatic sanctions…

Warn players

We wrote about progressive sanctions in Five Moderation Workflows Proven to Decrease Workload. They’re a key component of any moderation strategy and will do wonders for decreasing reports. 

From the same blog:

“Riot Games found that players who were clearly informed why their account was suspended — and provided with chat logs as backup — were 70% less likely to misbehave again.”

Consider this: What if you warned players — in real time — that if their behavior continued, they would be suspended? How many players would rethink their language? How many would keep posting abusive content, despite the warning?

The social networking site Kidzworld found that the Community Sift user reputation feature — which restricts and expands chat permissions based on player behavior — has encouraged positive behavior in their users.
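To show how progressive sanctions and a simple reputation signal might fit together, here is a bare-bones sketch. The tiers, durations, and function names are examples only, not a recommendation or anyone’s production system:

```python
# Progressive-sanction sketch: escalate the consequence with each confirmed
# violation, and always tell the player why. Tiers and durations are examples.
from collections import defaultdict

SANCTION_LADDER = [
    "warning",           # 1st offense: real-time warning, with the offending line
    "24_hour_mute",      # 2nd offense
    "7_day_suspension",  # 3rd offense
    "permanent_ban",     # 4th offense and beyond
]

violation_counts = defaultdict(int)  # in production, persisted per account

def apply_sanction(user_id: str, offending_line: str) -> str:
    violation_counts[user_id] += 1
    tier = min(violation_counts[user_id], len(SANCTION_LADDER)) - 1
    sanction = SANCTION_LADDER[tier]
    # Explaining the "why" is the part the Riot Games finding above suggests
    # actually changes behavior.
    notify_player(user_id, sanction, offending_line)
    return sanction

def notify_player(user_id: str, sanction: str, offending_line: str) -> None:
    print(f"{user_id}: {sanction} for {offending_line!r}")
```

A reputation feature like the one described above can key off the same violation history, restricting or expanding chat permissions as a player’s behavior changes.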

Set aside time for live moderation

Even the smartest, most finely-tuned algorithm cannot compare to live moderation.

The more time spent following live chat and experiencing how the community interacts with each other in real time, the better and more effective your moderation practices will become.

Confused about a specific word or phrase you’re seeing in user reports? Monitor chat in real time for context. Concerned that a specific player may be targeting new players for abuse but haven’t been able to collect enough proof to take action? Take an hour or so to watch their behavior as it happens.

Live moderation is also an effective way to review the triggers you’ve set up to automatically close, sanction, or escalate content for review.

Engage with the community

What’s more fun than hanging out with your community — aside from playing the game, of course? ; )

There’s a reason you got into community management and moderation. You care about people. You’re passionate about your community and your product. You and your team of moderators are at your best when you’re interacting with players, experiencing their joy (and their frustration), and generally understanding what makes them tick.

So, in addition to live moderation, it’s critical that you and your team interact with the community and actively inspire positive, healthy communication. Sanctions work, but positive reinforcement works even better.

When you spend time with the community, you demonstrate that your moderation team is connected with players, engaged in the game, and above all, human.


Final thoughts

You’ll never eliminate all user reports. They’re a fundamental element of any moderation strategy and a key method of earning player trust.

But there’s no reason you should be forced to wade through thousands or even hundreds of reports a day.

There are techniques you and your team can leverage to mitigate the impact, including giving moderators a variety of tasks to prevent burnout and keep their minds sharp, using a proactive chat/image filter, and engaging with the community on a regular basis.

Want more articles like this? Subscribe to our mailing list and never miss an update!




Two Hat empowers gaming and social platforms to foster healthy, engaged online communities.

Uniting cutting-edge AI with expert human review, our user-generated content filter and automated moderation software Community Sift has helped some of the biggest names in the industry protect their communities and brand, inspire user engagement, and decrease moderation workload.