The Changing Landscape of Automated Content Moderation in 2019

Is 2019 the year that content moderation goes mainstream? We think so.

Things have changed a lot since 1990 when Tim Berners-Lee invented the World Wide Web. A few short years later, the world started to surf the information highway – and we’ve barely stopped to catch our collective breath since.

Learn about the past, present, and future of online content moderation in an upcoming webinar

The internet has given us many wonderful things over the last 30 years – access to all of recorded history, an instant global connection that bypasses country, religious, and racial lines, Grumpy Cat – but it’s also had unprecedented and largely unexpected consequences.

Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard – it’s all adding up. Now that more than half of Earth’s population is online (4 billion people as of January 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

The change started two years ago.

Mark Zuckerberg’s 2017 manifesto hinted at what was to come:

“There are billions of posts, comments, and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

In 2018, the industry finally realized that it was time to find solutions to the problems outlined in Facebook’s manifesto. The question was no longer, “Should we moderate content on our platforms?” and instead became, “How can we better moderate content on our platforms?”

Learn how you can leverage the latest advances in content moderation in an upcoming webinar

The good news is that in 2019, we have access to the tools, technology, and years of best practices to make the dream of a safer internet a reality. At Two Hat, we’ve been working behind the scenes for nearly seven years now (alongside some of the biggest games and social networks in the industry) to create technology that auto-moderates content so accurately we’re on the path to “invisible AI” – filters so good you don’t even notice them working in the background.

On February 20th, we invite you to join us for a very special webinar, “Invisible AI: The Future of Content Moderation”. Two Hat CEO and founder Chris Priebe will share his groundbreaking vision of artificial intelligence in this new age of chat, image, and video moderation.

In it, he’ll discuss the past, present, and future of content moderation, expanding on why the industry shifted its attitude towards moderation in 2018, with a special focus on the trends of 2019.

He’ll also share exclusive, advance details about Two Hat’s big announcements (linked at the end of this post).

We hope you can make it. Give us 30 minutes of your time, and we’ll give you all the information you need to make 2019 the year of content moderation.

PS: Another reason you don’t want to miss this – the first 25 attendees will receive a free gift! ; )


Read about Two Hat’s big announcements:

Two Hat Is Changing the Landscape of Content Moderation With New Image Recognition Technology

Two Hat Leads the Charge in the Fight Against Child Sexual Abuse Images on the Internet

Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

 

Top 6 Reasons You Should Combine Automation and Manual Review in Your Image Moderation Strategy

When you’re putting together an image moderation strategy for your social platform, you have three options:

  1. Automate everything with AI;
  2. Do everything manually with human moderators; or
  3. Combine both approaches for Maximum Moderation Awesomeness™

When we consult with clients and industry partners like PopJam, we (unsurprisingly) advocate for option number three.

Here are our top six reasons why:

Human beings are, well… human (Part 1)

We get tired, we take breaks, and we don’t work 24/7. Luckily, AI hasn’t gained sentience (yet), so we don’t have to worry (yet) about an algorithm troubling our conscience when we make it work without rest.

[Image: a close-up of Sophia the robot]
Um, NO THANK YOU.

Human beings are, well… human (Part 2)

In this case, that’s a good thing. Humans are great at making judgments based on context and cultural understanding. An algorithm can find a swastika, but only a human can say with certainty whether it was posted by a troll propagating hate speech or appears in a photo from World War II with historical significance.

[Image: a child at a computer giving a thumbs-up]
Thumbs up for people!

We’re in a golden age of AI

Artificial intelligence is really, really good at detecting offensive images with near-perfect accuracy. For context, this wasn’t always the case. Even 10 years ago, image scanning technology was overly reliant on “skin tone” analysis, leading to some… interesting false positives.

Babies, being (sometimes) pink, round, and strangely out of proportion, would often trigger false positives. And while some babies may not be especially adorable, it was a bit cruel to label them “offensive.” Equally inoffensive, but often the cause of false positives, were light oak-colored desks, chair legs, marathon runners, some (but not all) brick walls, and, even more bizarrely, balloons.

Today, the technology has advanced so far that it can distinguish between bikinis, shorts, beach shots, scantily-clad “glamour” photography, and explicit adult material.

[Image: cartoon of Porky Pig slamming into a brick wall]
Pictured: not pornography.

Human beings are, well… human (Part 3)

Like we said, AI doesn’t yet have the capacity for shock, horror, or emotional distress of any kind.

(This is still true, right? We would know if a robot uprising was in the works, right? RIGHT?)

Until our sudden inevitable overthrow by the machines, go ahead and let AI automatically reject images with a high probability of containing pornography, gore, or anything that could have a lasting effect on your users and your staff.

That way, human mods can focus on human stuff like reviewing user reports and interacting with the community.
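
If you’re wondering what that looks like in practice, it can be as simple as a per-category confidence threshold. Here’s a minimal sketch in Python; the category names and threshold values are made up for illustration (real systems tune these per community), not anyone’s production settings:

```python
# Hypothetical per-category auto-reject thresholds (illustrative values only).
# Scores are assumed to be probabilities in [0, 1] returned by an image classifier.
AUTO_REJECT_THRESHOLDS = {
    "pornography": 0.98,
    "gore": 0.98,
}

def should_auto_reject(scores: dict) -> bool:
    """Reject without human review only when the model is highly confident."""
    return any(
        scores.get(category, 0.0) >= threshold
        for category, threshold in AUTO_REJECT_THRESHOLDS.items()
    )

# A clearly violating image is rejected automatically...
print(should_auto_reject({"pornography": 0.99}))  # True
# ...while anything the model is less sure about stays in the human queue.
print(should_auto_reject({"pornography": 0.60}))  # False
```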

[Image: a black and white cat looking surprised]
Protect the cats! Er, humans.

It’s the easiest way to give your users an unforgettable experience

The social app market is already overcrowded. “The next Instagram” is released every day. In a market where platforms vie to retain users, it’s critical that you ensure a positive user experience.

With AI, you can approve and reject posts in real time, meaning your users will never have to wait for their images to be reviewed.

And with human moderators engaging with the community — liking posts, upvoting images, and promptly reviewing and actioning user reports — your users will feel supported, safe, and heard.

You can’t put a price on that… no wait, you can. It’s called customer acquisition cost (CAC), and it can make or break a business that struggles to retain users.

You’re leveraging the best of both worlds

AI is crazy fast, scanning millions of images a day. By contrast, a human moderator can review roughly 2,500 images a day before their eyes start to cross and mistakes creep in. AI is more accurate than ever, but humans add a layer of precision by understanding context.
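
To put rough numbers on that gap: at about 2,500 images per person per day, a platform receiving one million images daily would need roughly 400 moderator-days of review every single day. If automation confidently handles even 95% of that volume (an illustrative figure, not a benchmark), the manual queue shrinks to about 20 moderator-days.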

A solid image moderation process supported by cutting-edge tech and a bright, well-trained staff? You’re well on your way to Maximum Moderation Awesomeness™.

[Image: Kip from Napoleon Dynamite celebrates a victory]

Want to learn how one social app combines automation with manual review to reduce their workload and increase user engagement? Sign up for our webinar featuring the community team from PopJam!

The Role of Image Filtering in Shaping a Healthy Online Community

Digital citizenship, online etiquette, and user behavior involve many different tools of expression, from texting to photo sharing, and from voice chat to video streaming. In my last article, I wrote about who is responsible for the well-being of players/users online. Many of the points discussed relate directly to the challenges posed by chat communication.

However, those considerations apply just as much to image sharing on our social platforms, and to the intent behind what’s shared.

Picture this

Online communities that allow users to share images face several risks and challenges that come with the very nature of the beast: creating and sharing images is a popular form of online expression, there’s no shortage of images, and they come in all shapes, flavors, and forms.

Unsurprisingly, you’re bound to encounter images that will challenge your community guidelines (think racy pictures without obvious nudity), while others will simply be unacceptable (for example, pornography, gore, or drug-related imagery).

Fortunately, artificial intelligence has advanced to a point where it can do things that humans cannot; namely, handle incredibly high volumes while maintaining high precision and accuracy.

This is not to say that humans are dispensable. Far from it. We still need human eyes to make the difficult, nuanced decisions that machines alone can’t yet make.

For example, let’s say a user is discussing history with another user and wants to share a historical picture related to hate speech. Without the appropriate context, a machine could simply identify a hateful symbol on a flag and automatically block the image, stopping them from sharing it.

Costs and consequences

Without an automated artificial intelligence system for image filtering, a company is looking at two liabilities:

  • An unsustainable, unscalable model that incurs manual costs tied to human moderation hours;
  • An increased psychological toll on moderators exposed to excessive amounts of harmful images

The power of artificial intelligence

Automated image moderation can identify innocuous images and automate their approval. It can also identify key topics (like pornographic content and hateful imagery) with great accuracy and block them in real time, or hold them for manual review.

By using automation, you can remove two things from your moderators’ plates:

  • Context-appropriate images (most images: fun pictures with friends smiling, silly pictures, pets, scenic locations, etc.)
  • Images that are obviously against your community guidelines (think pornography or extremely gory content)

Also, a smart system can serve up grey-area images to your moderators for manual review, which means far less content to review than in either scenario above. By leveraging automation you get less manual work (a reduced workload, and therefore reduced costs) and less negative impact on your moderation team.
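
In code, that triage is little more than two thresholds. Here’s a minimal sketch, assuming a classifier that returns a single risk score per image; the threshold values and names are illustrative, not a description of any particular product:

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"  # innocuous: published automatically
    REJECT = "reject"    # obvious violation: blocked automatically
    REVIEW = "review"    # grey area: queued for a human moderator

# Illustrative thresholds; in practice these are tuned per category and per community.
REJECT_THRESHOLD = 0.95   # model is confident the image violates guidelines
APPROVE_THRESHOLD = 0.05  # model is confident the image is innocuous

def triage(risk_score: float) -> Decision:
    """Route an image using a single risk score in [0, 1] from an image classifier."""
    if risk_score >= REJECT_THRESHOLD:
        return Decision.REJECT
    if risk_score <= APPROVE_THRESHOLD:
        return Decision.APPROVE
    return Decision.REVIEW

# Only the REVIEW slice ever reaches your moderation team.
queue = [score for score in (0.01, 0.40, 0.97, 0.02) if triage(score) is Decision.REVIEW]
print(len(queue))  # 1
```

Only that grey-area slice lands in front of a human, which is exactly where the workload and cost savings come from.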

Give humans a break

Automated image moderation can also take the emotional burden off of your human moderators. Imagine yourself sitting in front of a computer for hours and hours, reviewing hundreds or even thousands of images, never knowing when your eyes (and mind) will be assaulted by a pornographic or very graphic violent image. Now consider the impact this has week after week.

What if a big part of that work could be handled by an automated system, drastically reducing the workload, and with it the emotional impact of reviewing offensive content? Why wouldn’t we seek to improve our team’s working conditions and reduce employee burnout and turnover?

It’s not only crucial for the business; it also means taking better care of your people and supporting them, which is key to company culture.

An invitation

Normally, I talk and write about digital citizenship as it relates to chat and text. Now, I’m excited to be venturing into the world of images and sharing as much valuable insight as I can with all of you. After all, image sharing is an important form of communication and expression in many online communities.

It would be great if you could join me for a short, 20-minute webinar we are offering on Wednesday, August 22nd. I’ll be talking about actionable best practices you can put to good use as well as considering what the future may hold for this space. You can sign up here.

I’m looking forward to seeing you there!

Originally published on LinkedIn by Carlos Figueiredo, Two Hat Director of Community Trust & Safety