Top 6 Reasons You Should Combine Automation and Manual Review in Your Image Moderation Strategy

When you’re putting together an image moderation strategy for your social platform, you have three options:

  1. Automate everything with AI;
  2. Do everything manually with human moderators; or
  3. Combine both approaches for Maximum Moderation Awesomeness™

When we consult with clients and industry partners like PopJam, we (unsurprisingly) advocate for option number three.

Here are our top six reasons why:

Human beings are, well… human (Part 1)

We get tired, we take breaks, and we don’t work 24/7. Luckily, AI hasn’t gained sentience (yet), so we don’t have to worry (yet) about an algorithm troubling our conscience when we make it work without rest.

Close up of Sophia the robot
Um, NO THANK YOU.

Human beings are, well… human (Part 2)

In this case, that’s a good thing. Humans are great at making judgments based on context and cultural understanding. An algorithm can find a swastika, but only a human can say with certainty if it’s posted by a troll propagating hate speech or is instead a photo from World War II with historical significance.

Child at computer giving thumbs up sign
Thumbs up for people!

We’re in a golden age of AI

Artificial intelligence is really, really good at detecting offensive images with near-perfect accuracy. For context, this wasn’t always the case. Even 10 years ago, image scanning technology was overly reliant on “skin tone” analysis, leading to some… interesting false positives.

Babies, being (sometimes) pink, round, and strangely out of proportion, would often trigger false positives. And while some babies may not be especially adorable, it was a bit cruel to label them “offensive.” Equally inoffensive but often the cause of false positives were light oak-colored desks, chair legs, marathon runners, some (but not all) brick walls, and even more bizarrely — balloons.

Today, the technology has advanced so far that it can distinguish between bikinis, shorts, beach shots, scantily-clad “glamour” photography, and explicit adult material.

Cartoon of Porky Pig slamming into a brick wall
Pictured: not pornography.

Human beings are, well… human (Part 3)

Like we said, AI doesn’t yet have the capacity for shock, horror, or emotional distress of any kind.

(This is still true, right? We would know if a robot uprising was in the works, right? RIGHT?)

Until our sudden inevitable overthrow by the machines, go ahead and let AI automatically reject images with a high probability of containing pornography, gore, or anything that could have a lasting effect on your users and your staff.

That way, human mods can focus on human stuff like reviewing user reports and interacting with the community.
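To make that concrete, here’s a minimal sketch of what that kind of threshold-based routing could look like. The classifier, topic names, and thresholds are illustrative assumptions, not any particular vendor’s API.

```python
# A minimal sketch of threshold-based image routing. The topic names and
# thresholds below are illustrative assumptions, not a real vendor's API.

AUTO_REJECT_TOPICS = {"pornography", "gore"}
REJECT_THRESHOLD = 0.95   # very likely harmful: reject without human eyes on it
APPROVE_THRESHOLD = 0.05  # very likely innocuous: approve in real time

def route_image(scores: dict) -> str:
    """Decide what happens to an uploaded image based on classifier scores (0.0-1.0)."""
    worst = max(scores.get(topic, 0.0) for topic in AUTO_REJECT_TOPICS)
    if worst >= REJECT_THRESHOLD:
        return "auto_reject"      # the AI absorbs the disturbing content
    if worst <= APPROVE_THRESHOLD:
        return "auto_approve"     # the user sees their post immediately
    return "manual_review"        # grey areas go to a human moderator

# Example scores from a hypothetical classify(image) call:
print(route_image({"pornography": 0.98, "gore": 0.01}))  # auto_reject
print(route_image({"pornography": 0.02, "gore": 0.01}))  # auto_approve
print(route_image({"pornography": 0.40, "gore": 0.10}))  # manual_review
```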

Black and white cat looking surprised
Protect the cats! Er, humans.

It’s the easiest way to give your users an unforgettable experience

The social app market is already overcrowded. “The next Instagram” is released every day. In a market where platforms vie to retain users, it’s critical that you ensure a positive user experience.

With AI, you can approve and reject posts in real time, meaning your users will never have to wait for their images to be reviewed.

And with human moderators engaging with the community — liking posts, upvoting images, and promptly reviewing and actioning user reports — your users will feel supported, safe, and heard.

You can’t put a price on that… no wait, you can. It’s called Customer Acquisition Cost (CAC), and it can make or break a business that struggles to retain users.

You’re leveraging the best of both worlds

AI is crazy fast, scanning millions of images a day. By contrast, humans can review about 2,500 images daily before their eyes start to cross and mistakes creep in. AI is more accurate than ever, but humans provide the extra precision that comes from understanding context.

A solid image moderation process supported by cutting-edge tech and a bright, well-trained staff? You’re well on your way to Maximum Moderation Awesomeness™.

Kip from Napoleon Dynamite celebrates a victory

Want to learn how one social app combines automation with manual review to reduce their workload and increase user engagement? Sign up for our webinar featuring the community team from PopJam!

Optimize Your Image Moderation Process With These Five Best Practices

If you run or moderate a social sharing site or app where users can upload their own images, you know how complex image moderation can be. We’ve compiled five best practices that will make you and your moderation team’s lives a lot easier:

1. Create robust internal moderation guidelines

While you’ll probably rely on AI to automatically approve and reject the bulk of submitted images, there will be images that an algorithm misses, or that users have reported as being inappropriate. In those cases, it’s crucial that your moderators are well-trained and have the resources at their disposal to make what can sometimes be difficult decisions.

Remember the controversy surrounding Facebook earlier this year when they released their moderation guidelines to the public? Turns out, their guidelines were so convoluted and thorny that it was near-impossible to follow them with any consistency. (To be fair, Facebook faces unprecedented challenges when it comes to image moderation, including incredibly high volumes and billions of users from all around the world.) There’s a lesson to be learned here, though — internal guidelines should be clear and concise.

Consider — you probably don’t allow pornography on your platform, but how do you feel about bathing suits or lingerie? And what about drugs — where do you draw the line? Do you allow images of pills? Alcohol?

Moderation isn’t a perfect science; there will always be grey areas. That’s why it’s important that you also…

2. Consider context

When you’re deciding whether to approve or reject an image that falls into the grey area, remember to look at everything surrounding the image. What is the user’s intent in posting the image? Is their intention to offend? Look at image tags, comments, and previous posts.

While context matters, it’s also key that you remember to…

3. Be consistent when approving/rejecting images and sanctioning users

Your internal guidelines should ensure that you and your team make consistent, replicable moderation decisions. Consistency is so important because it signals to the community that 1) you’re serious about their health and safety, and 2) you’ve put real thought and attention into your guidelines.

A few suggestions for maintaining consistency:

  • Notify the community publicly if you ever change your moderation guidelines
  • Consider publishing your internal guidelines
  • Host moderator debates over challenging images and ask for as many viewpoints as possible; this will help avoid biased decision-making
  • When rejecting an image (even if it’s done automatically by the algorithm), automate a warning message to the user that includes your community guidelines (see the sketch after this list)
  • If a user complains about an image rejection or account sanction, take the time to investigate and fully explain why action was taken
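As a rough illustration of that automated warning, here’s a small sketch. The notify_user helper, the message text, and the guidelines URL are hypothetical placeholders for whatever your platform actually uses.

```python
# A small sketch of an automated rejection warning. notify_user() and the
# guidelines URL are hypothetical placeholders.

GUIDELINES_URL = "https://example.com/community-guidelines"

def send_rejection_notice(notify_user, user_id: str, reason: str) -> None:
    """Send the same consistent warning every time an image is rejected."""
    message = (
        "Your image wasn't published because it appears to contain "
        f"{reason}, which our community guidelines don't allow. "
        f"You can read the guidelines here: {GUIDELINES_URL}"
    )
    notify_user(user_id, message)
```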

Another effective way to ensure consistency is to…

4. Map out moderation workflows

Take the time to actually sketch out your moderation workflows on a whiteboard. By mapping out your workflows, you’ll notice any holes in your process.

 

Image Moderation Workflow for new users
Example of a possible image moderation workflow

Here are just a few scenarios to consider:

  • What do you do when a user submits an image that breaks your guidelines? Do you notify them? Sanction their account? Do nothing and let them submit a new image?
  • Do you treat new users differently than returning users (see the example workflow for details, and the sketch after this list)?
  • How do you deal with images containing CSAM (child sexual abuse material, formerly referred to as child pornography)?
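To show how the new-versus-returning branch of a workflow might look in practice, here’s a rough sketch. The post-count threshold and queue names are illustrative assumptions, not a prescription.

```python
# A rough sketch of routing grey-area images differently for new and
# returning users. The threshold and queue names are illustrative only.

TRUSTED_POST_COUNT = 10  # assumption: 10 approved posts = an established user

def queue_for_grey_area(approved_post_count: int) -> str:
    """Pick a review queue for a grey-area image based on the uploader's history."""
    if approved_post_count < TRUSTED_POST_COUNT:
        return "pre_moderation"   # new users: hold the image until a human approves it
    return "post_moderation"      # returning users: publish now, review shortly after
```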

Coming across an image that contains illegal content can be deeply disturbing. That’s why you should…

5. Have a process to escalate illegal images

The heartbreaking reality of the internet is that it’s easier today for predators to share images than it has ever been. It’s hard to believe that your community members would ever upload CSAM, but it can happen, and you should be prepared.

If you have a Trust & Safety specialist, Compliance Officer, or legal counsel at your company, we recommend that you consult them for their best practices when dealing with illegal imagery. One option to consider is using Microsoft’s PhotoDNA, a free image scanning service that can automatically identify and escalate known child sexual abuse images to the authorities.

You may never find illegal content on your platform, but having an escalation process will ensure that you’re prepared for the worst-case scenario.

On a related note, make sure you’ve also created a wellness plan for your moderators. We’ll be discussing individual wellness plans — and other best practices — in more depth in our Image Moderation 101 webinar on August 22nd. Register today to save your seat for this short, 20-minute chat.

 

Photo by Leah Kelley from Pexels

The Role of Image Filtering in Shaping a Healthy Online Community

Digital citizenship, online etiquette, and user behavior involve many different tools of expression, from texting to photo sharing, and from voice chat to video streaming. In my last article, I wrote about who is responsible for the well-being of players/users online. Many of the points discussed relate directly to the challenges posed by chat communication.

However, those considerations also apply to image sharing on our social platforms, and to the intent behind it.

Picture this

Online communities that allow users to share images have to deal with several risks and challenges that come with the very nature of the beast: creating and/or sharing images is a popular form of online expression, there’s no shortage of images, and they come in all shapes, flavors, and forms.

Unsurprisingly, you’re bound to encounter images that will challenge your community guidelines (think racy pictures without obvious nudity), while others will simply be unacceptable (for example, pornography, gore, or drug-related imagery).

Fortunately, artificial intelligence has advanced to a point where it can do things that humans cannot; namely, handle incredibly high volumes while maintaining high precision and accuracy.

This is not to say that humans are dispensable. Far from it. We still need human eyes to make the difficult, nuanced decisions that machines alone can’t yet make.

For example, let’s say a user is discussing history with another user and wants to share a historical picture related to hate speech. Without the appropriate context, a machine could simply identify a hateful symbol on a flag and automatically block the image, stopping them from sharing it.

Costs and consequences

Without an automated artificial intelligence system for image filtering, a company is looking at two liabilities:

  • An unsustainable, unscalable model that will incur a manual cost connected to human moderation hours;
  • Increased psychological impact of exposing moderators to excessive amounts of harmful images

The power of artificial intelligence

Automated image moderation can identify innocuous images and automate their approval. It can also identify key topics (like pornographic content and hateful imagery) with great accuracy and block them in real time, or hold them for manual review.

By using automation, you can remove two things from your moderators’ plates:

  • Context-appropriate images (most images: fun pictures with friends smiling, silly pictures, pets, scenic locations, etc.)
  • Images that are obviously against your community guidelines (think pornography or extremely gory content)

A smart system can also serve up grey-area images to your moderators for manual review, which leaves them far less content to work through than the two categories above. By leveraging automation, you’ll have less manual work (a reduced workload, and therefore reduced costs) and less negative impact on your moderation team.

Give humans a break

Automated image moderation can also take the emotional burden off of your human moderators. Imagine yourself sitting in front of a computer for hours and hours, reviewing hundreds or even thousands of images, never knowing when your eyes (and mind) will be assaulted by a pornographic or graphically violent image. Now consider the impact this has week after week.

What if a big part of that work could be handled by an automated system, drastically reducing the workload and, with it, the emotional impact of reviewing offensive content? Why wouldn’t we seek to improve our team’s working situation and reduce employee burnout and turnover?

It’s not only a business-critical thing to do. It also means taking better care of your people and supporting them, which is key to a healthy company culture.

An invitation

Normally, I talk and write about digital citizenship as it relates to chat and text. Now, I’m excited to be venturing into the world of images and sharing as much valuable insight as I can with all of you. After all, image sharing is an important form of communication and expression in many online communities.

It would be great if you could join me for a short, 20-minute webinar we are offering on Wednesday, August 22nd. I’ll be talking about actionable best practices you can put to good use as well as considering what the future may hold for this space. You can sign up here.

I’m looking forward to seeing you there!

Originally published on LinkedIn by Carlos Figueiredo, Two Hat Director of Community Trust & Safety

Webinar: Image Moderation 101

Wondering about the latest industry trends in image moderation? Need to keep offensive and unwanted images out of your community — but no idea where to start?

Join us for 20 minutes on Wednesday, August 22 for an intimate chat with Carlos Figueiredo, Two Hat Director of Community Trust & Safety.

Register Now
In this 20-minute chat, we’ll cover:

  • Why image moderation is business-critical for sharing sites in 2018
  • An exclusive look at our industry-proven best practices
  • A sneak peek at the future of image moderation… will there be robots?

Sign up today to save your seat!