When you’re putting together an image moderation strategy for your social platform, you have three options:
- Automate everything with AI,
- Do everything manually with human moderators, or
- Combine both approaches for Maximum Moderation Awesomeness™
When we consult with clients and industry partners like PopJam, we unsurprisingly advocate for option number three.
Here are our top six reasons why:
Human beings are, well… human (Part 1)
We get tired, we take breaks, and we don’t work 24/7. Luckily, AI hasn’t gained sentience (yet), so we don’t have to worry (yet) about an algorithm troubling our conscience when we make it work without rest.
Human beings are, well… human (Part 2)
In this case, that’s a good thing. Humans are great at making judgments based on context and cultural understanding. An algorithm can find a swastika, but only a human can say with certainty if it’s posted by a troll propagating hate speech or is instead a photo from World War II with historical significance.
We’re in a golden age of AI
Artificial intelligence can now detect offensive images with near-perfect accuracy. For context, this wasn’t always the case. Even 10 years ago, image scanning technology was overly reliant on “skin tone” analysis, leading to some… interesting false positives.
Babies, being (sometimes) pink, round, and strangely out of proportion, would often trigger false positives. And while some babies may not be especially adorable, it was a bit cruel to label them “offensive.” Equally inoffensive but often flagged were light oak-colored desks, chair legs, marathon runners, some (but not all) brick walls, and, even more bizarrely, balloons.
Today, the technology has advanced so far that it can distinguish between bikinis, shorts, beach shots, scantily-clad “glamour” photography, and explicit adult material.
Human beings are, well… human (Part 3)
Like we said, AI doesn’t yet have the capacity for shock, horror, or emotional distress of any kind.
(This is still true, right? We would know if a robot uprising was in the works, right? RIGHT?)
Until our sudden inevitable overthrow by the machines, go ahead and let AI automatically reject images with a high probability of containing pornography, gore, or anything that could have a lasting effect on your users and your staff.
That way, human mods can focus on human stuff like reviewing user reports and interacting with the community.
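The split described above can be sketched as a simple confidence-threshold router. The thresholds, function name, and labels here are illustrative assumptions, not any specific product’s API:

```python
# A minimal sketch of hybrid moderation routing. Thresholds and labels
# are assumptions for illustration, not a real moderation API.

AUTO_REJECT_THRESHOLD = 0.95   # near-certain harmful content
AUTO_APPROVE_THRESHOLD = 0.05  # near-certain safe content

def route_image(harm_probability: float) -> str:
    """Route an image based on an AI model's estimated probability
    that it contains harmful content (pornography, gore, etc.)."""
    if harm_probability >= AUTO_REJECT_THRESHOLD:
        return "reject"        # AI rejects automatically; no human ever sees it
    if harm_probability <= AUTO_APPROVE_THRESHOLD:
        return "approve"       # AI approves instantly; the user never waits
    return "human_review"      # ambiguous: queue for a human moderator

# Only the ambiguous middle band reaches human moderators.
print([route_image(s) for s in (0.99, 0.50, 0.01)])
# → ['reject', 'human_review', 'approve']
```

Tuning the two thresholds is the key design choice: widen the middle band and humans see more context-dependent edge cases; narrow it and AI handles more volume at the cost of occasional misjudged context.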
It’s the easiest way to give your users an unforgettable experience
The social app market is already overcrowded. “The next Instagram” is released every day. In a market where platforms vie to retain users, it’s critical that you ensure a positive user experience.
With AI, you can approve and reject posts in real time, meaning your users will never have to wait for their images to be reviewed.
And with human moderators engaging with the community — liking posts, upvoting images, and promptly reviewing and actioning user reports — your users will feel supported, safe, and heard.
You can’t put a price on that… no wait, you can. It’s called Customer Acquisition Cost (CAC), and it can make or break a business that struggles to retain users.
You’re leveraging the best of both worlds
AI is blindingly fast, scanning millions of images a day. Humans, by contrast, can review roughly 2,500 images a day before fatigue sets in and error rates climb. AI is more accurate than ever, but humans add precision where context matters.
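The arithmetic behind “best of both worlds” is a quick back-of-envelope calculation. Using the ~2,500 images/day per human figure above, and assuming (hypothetically) a platform volume and an AI escalation rate, you can estimate staffing:

```python
import math

# Back-of-envelope staffing sketch. The per-human capacity comes from
# the article; daily volume and escalation rate are assumed inputs.
HUMAN_CAPACITY_PER_DAY = 2500

def moderators_needed(daily_images: int, escalation_rate: float) -> int:
    """Humans needed to review the share of images AI escalates."""
    return math.ceil(daily_images * escalation_rate / HUMAN_CAPACITY_PER_DAY)

# E.g., 1M images/day with AI escalating 10% to human review:
print(moderators_needed(1_000_000, 0.10))  # → 40
```

Without the AI filter, the same hypothetical 1M-image day would need 400 moderators; letting AI clear the unambiguous 90% is where the combined approach pays for itself.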
A solid image moderation process supported by cutting-edge tech and a bright, well-trained staff? You’re well on your way to Maximum Moderation Awesomeness™.
Want to learn how one social app combines automation with manual review to reduce their workload and increase user engagement? Sign up for our webinar featuring the community team from PopJam!