We get it. When you built your online game, virtual world, forum for Moomin-enthusiasts (you get the idea), you probably didn’t have content queues, workflow escalations, and account bans at the front of your mind. But now that you’ve launched and are acquiring users, it’s time to make sure you’re getting the most out of your content moderation team.
Smart moderation has been shown to increase user retention, decrease workload, and protect your brand. And that means more money in your company pocket for cool things like new game features, faster bug fixes… and maybe even a slammin’ espresso machine for your hard-working devs.
Based on our experience at Two Hat, and with our clients across the industry — which include some of the biggest online games, virtual worlds, and social apps out there — we’ve prepared a list of five crucial moderation workflows.
Each workflow leverages AI-powered automation to enhance your mods’ efficiency. This gives them the time to do what humans do best — make tough decisions, engage with users, and ultimately build a healthy, thriving community.
Use Progressive Sanctions
At Two Hat, we are big believers in second chances. We all have bad days, and sometimes we bring those bad days online. According to research conducted by Riot Games, the majority of bad behavior doesn’t come from “trolls” — it comes from average users lashing out. In the same study, Riot Games found that players who were clearly informed why their account was suspended — and provided with chat logs as backup — were 70% less likely to misbehave again.
The truth is, users will always make mistakes and break your community guidelines, but the odds are that it’s a one-time thing and they won’t offend again.
We all know those parents who constantly threaten their children with repercussions — “If you don’t stop pulling the cat’s tail, I’ll take your Lego away!” but never follow through. Those are the kids who run screaming like banshees down the aisles at Whole Foods. They’ve never been given boundaries. And without boundaries and consequences, we can’t be expected to learn or to change our behavior.
That’s why we highly endorse progressive sanctions. Warnings and temporary muting followed by short-term suspensions that get progressively longer (1 hour, 6 hours, 12 hours, 24 hours, etc) are effective techniques — as long as they’re paired with an explanation.
And you can be gentle at first — sometimes all a user needs is a reminder that someone is watching in order to correct their behavior. Sanctioning doesn’t necessarily mean removing a user from the community — warning and muting can be just as effective as a ban. You can always temporarily turn off chat for bad-tempered users while still allowing them to engage with your platform.
And if that doesn’t work, and users continue to post content that disturbs the community, that’s when progressive suspensions can be useful. As always, ban messages should be paired with clear communication:
“You wrote [X], and as per our Community Guidelines and Terms of Use, your account is suspended for [X amount of time]. Please review the Community Guidelines.”
You can make it fun, too.
“Having a bad day? You wrote [X], which is against the Community Guidelines. How about taking a short break (try watching that video of cats being scared by cucumbers, zoning out to Bob Ross painting happy little trees, or, if you’re so inclined, taking a lavender-scented bubble bath), then joining the community again? We’ll see you in [X amount of time].”
If your system is smart enough, you can set up accurate behavioral triggers to automatically warn, mute, and suspend accounts in real time.
The workflow will vary based on your community and the time limits you set, but it will look something like this:
Warn → Mute → 1 hr suspension → 6 hr suspension → 12 hr suspension → 24 hr suspension → 48 hr suspension → Permanent ban
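To make the behavioral-trigger idea concrete, here is a minimal Python sketch of that ladder. Everything in it is illustrative: the `SANCTION_LADDER` table, the `next_sanction` and `sanction_message` helpers, and the mute length are assumptions for this sketch, not Community Sift features, and the wording simply echoes the sample messages above.

```python
from datetime import timedelta

# Escalation ladder mirroring the example flow above. The mute length is an
# assumption; the article only specifies durations for the suspensions.
SANCTION_LADDER = [
    ("warning", None),
    ("mute", timedelta(hours=1)),
    ("suspension", timedelta(hours=1)),
    ("suspension", timedelta(hours=6)),
    ("suspension", timedelta(hours=12)),
    ("suspension", timedelta(hours=24)),
    ("suspension", timedelta(hours=48)),
    ("permanent ban", None),
]

def next_sanction(prior_sanction_count: int):
    """Pick the next rung of the ladder based on how many sanctions the user
    has already received; repeat offenders cap out at a permanent ban."""
    step = min(prior_sanction_count, len(SANCTION_LADDER) - 1)
    return SANCTION_LADDER[step]

def sanction_message(offending_text: str, action: str, duration) -> str:
    """Pair every sanction with a clear explanation, per the advice above.
    (Production code would format the duration more nicely.)"""
    if action == "warning":
        return (f"You wrote '{offending_text}', which is against our "
                "Community Guidelines. Consider this a friendly reminder.")
    if action == "permanent ban":
        return (f"You wrote '{offending_text}'. As per our Terms of Use, "
                "your account has been permanently banned.")
    return (f"You wrote '{offending_text}', and as per our Community Guidelines "
            f"and Terms of Use, your account is under a {action} for {duration}. "
            "Please review the Community Guidelines.")

# Example: a user with two prior sanctions gets their first 1-hour suspension.
action, duration = next_sanction(2)
print(sanction_message("<offending chat line>", action, duration))
```

The ladder itself is the easy part; the real design work is deciding how you store each user’s prior sanction count, and whether you let it decay over time so that one bad week doesn’t follow a user forever, in keeping with the second-chances philosophy above.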
Use AI to Automate Image Approvals
Every community team knows that reviewing Every. Single. Uploaded. Image. is a royal pain. 99% of images are mind-numbingly innocent (and probably contain cats, because the internet), while the remaining 1% are, well, shocking. After a while, everything blurs together, and the chances of missing that shocking 1% get higher and higher… until your eyes roll back into your head and you slump forward onto your keyboard, brain matter leaking out of your ears.
OK, so maybe it’s not that bad.
But scanning image after image manually does take a crazy amount of time, and the emotional labor can be overwhelming and potentially devastating. Imagine scrolling through pic after pic of kittens, and then stumbling over full-frontal nudity. Or worse: unexpected violence and gore. Or the unthinkable: images of child or animal abuse.
All this can lead to stress, burnout, and even PTSD.
It’s in your best interests to automate some of the process. AI today is smarter than it’s ever been. The best algorithms can detect pornography with nearly 100% accuracy, not to mention images containing violence and gore, drugs, and even terrorism.
If you use AI to pre-moderate images, you can tune the dial based on your community’s resilience. Set the system to automatically approve any image with, say, a low risk of being pornography (or gore, drugs, terrorism, etc), while automatically rejecting images with a high risk of being pornography. Then, send anything in the ‘grey zone’ to a pre-moderation queue for your mods to review.
Or, if your user base is older, automatically approve images in the grey zone, and let your users report anything they think is inappropriate. You can also send those borderline images to an optional post-moderation queue for manual review.
This way, you take the responsibility for finding the worst content off both your moderators and your community.
What the flow looks like:
User submits image → AI returns risk probability → If safe, automatically approve and post → If unsafe, automatically reject → If borderline, either hold and send to a queue for manual pre-moderation (for younger communities) or publish and send to a queue for optional post-moderation (for older communities).
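As a rough illustration, that routing logic fits in a few lines of Python. The thresholds, the `route_image` function, and the `YOUNGER_COMMUNITY` flag are all assumptions made for this sketch; the risk score is simply whatever probability your image-classification service returns.

```python
# Illustrative thresholds only; tune them to your community's resilience.
REJECT_THRESHOLD = 0.90    # high risk of pornography/gore/etc.: auto-reject
APPROVE_THRESHOLD = 0.10   # low risk: auto-approve and post
YOUNGER_COMMUNITY = True   # pre-moderate the grey zone for younger audiences

def route_image(risk: float) -> str:
    """Route an uploaded image based on the risk probability returned by
    whatever image-classification model you use."""
    if risk >= REJECT_THRESHOLD:
        return "auto-reject"
    if risk <= APPROVE_THRESHOLD:
        return "auto-approve"
    # Grey zone: hold for pre-moderation, or publish and post-moderate.
    if YOUNGER_COMMUNITY:
        return "hold: send to pre-moderation queue"
    return "publish: send to optional post-moderation queue"

# Example: a 0.42 risk score lands in the grey zone.
print(route_image(0.42))
```

The two thresholds are the “dial” mentioned above: widen the gap between them and more images land in the grey zone for humans to review; narrow it and more decisions are fully automated.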
Suicide/Self-Harm Support
For many people, online communities are the safest spaces to share their deepest, darkest feelings. Depending on your community, you may or may not allow users to discuss their struggles with suicidal thoughts and self-injury openly.
Regardless, users who discuss suicide and self-harm are vulnerable and deserve extra attention. Sometimes, just knowing that someone else is listening can be enough.
We recommend that you provide at-risk users with phone or text support lines where they can get help. Ideally, this should be done through an automated messaging system to ensure that users get help in real time. However, you can also send manual messages to establish a dialogue with the user.
Worldwide, there are a few resources that we recommend:
- US: Call the National Suicide Prevention Lifeline: 1-800-273-TALK (8255), or Crisis Text Line
- Canada: Call the Kids Help Phone (for youth under 20): 1-800-668-6868; or contact a local crisis center
- UK: Go to samaritans.org
If your community is outside of the US, Canada, or the UK, your local law enforcement agency should have phone numbers or websites that you can reference. In fact, it’s a good idea to build a relationship with local law enforcement; you may need to contact them if you ever need to escalate high-risk scenarios, like a user credibly threatening to harm themselves or others.
We don’t recommend punishing users who discuss their struggles by banning or suspending their accounts. Instead, a gentle warning message can go a long way:
“We noticed that you’ve posted an alarming message. We want you to know that we care, and we’re listening. If you’re feeling sad, considering suicide, or have harmed yourself, please know that there are people out there who can help. Please call [X] or text [X] to talk to a professional.”
When setting up a workflow, keep in mind that a user who mentions suicide or self-harm just once probably doesn’t need an automated message. Instead, tune your workflow to send a message after repeated references to suicide and self-harm. Your definition of “repeated” will vary based on your community, so it’s key that you monitor the workflow closely after setting it up. You will likely need to retune it over time.
Of course, users who encourage other users to kill themselves should receive a different kind of message. Look out for phrases like “kys” (kill yourself) and “go drink bleach,” among others. In these cases, use progressive sanctions to enforce your community guidelines and protect vulnerable users.
What the flow looks like:
User posts content about suicide/self-harm X number of times → System automatically displays a message suggesting the user contact a support line → If the user posts suicide/self-harm content another X number of times, send it to a queue for a moderator to manually review for potential escalation
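If it helps to see the shape of such a workflow, here is a hedged Python sketch. The thresholds, the in-memory counter, and the `send_message` / `add_to_review_queue` placeholders are all hypothetical; swap in your own classifier signal, messaging system, moderation queue, and the support lines appropriate for your region, and expect to retune the numbers as described above.

```python
from collections import defaultdict

# Hypothetical in-memory counters; in production these would live in a datastore.
self_harm_mentions = defaultdict(int)

SUPPORT_THRESHOLD = 3    # send the support message after repeated references
ESCALATE_THRESHOLD = 6   # after continued references, queue for human review

SUPPORT_MESSAGE = (
    "We noticed that you've posted an alarming message. We want you to know "
    "that we care, and we're listening. If you're feeling sad, considering "
    "suicide, or have harmed yourself, please know that there are people out "
    "there who can help. Please call or text a local support line to talk to "
    "a professional."
)

def send_message(user_id: str, text: str) -> None:
    """Placeholder: deliver an automated, real-time message to the user."""
    print(f"[message to {user_id}] {text}")

def add_to_review_queue(user_id: str, reason: str) -> None:
    """Placeholder: surface the user's recent posts to a moderator queue."""
    print(f"[queued {user_id}] {reason}")

def handle_self_harm_signal(user_id: str) -> None:
    """Called each time your classifier flags a post as suicide/self-harm.
    The user is never sanctioned here; they are supported, then escalated."""
    self_harm_mentions[user_id] += 1
    count = self_harm_mentions[user_id]
    if count == SUPPORT_THRESHOLD:
        send_message(user_id, SUPPORT_MESSAGE)
    elif count >= ESCALATE_THRESHOLD:
        add_to_review_queue(user_id, reason="repeated self-harm references")
```

Note that the sketch never punishes the user; it only offers support and, if the pattern continues, puts a human in the loop.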
Prepare for Breaking News & Trending Topics
We examined this underused moderation flow in a recent webinar. Never underestimate how deeply the latest news and emerging internet trends can affect your community. If you don’t have a process for dealing with conversations surrounding the next natural disaster, political scandal, or even another “covfefe,” you run the risk of alienating your community.
Consider Charlottesville. On August 11th, marchers from the far right, including white nationalists, neo-Nazis, and members of the KKK, gathered to protest the removal of Confederate monuments in the city. The rally soon turned violent, and on August 12th a car plowed into a group of counter-protesters, killing a young woman.
The incident immediately began trending on social media and in news outlets and remained a trending topic for several weeks afterward.
How did your online community react to this news? Was your moderation team prepared to handle conversations about neo-Nazis on your platform?
While not a traditional moderation workflow, we have come up with a “Breaking News & Trending Topics” protocol that can help you and your team stay on top of the latest trends — and ensure that your community remains expressive but civil, even in the face of difficult or controversial topics.
- Compile vocabulary: When an incident occurs, compile the relevant vocabulary immediately.
- Evaluate: Review how your community is using the vocabulary. If you wouldn’t normally allow users to discuss the KKK, would it be appropriate to allow it based on what’s happening in the world at that moment?
- Adjust: Make changes to your chat filter based on your evaluation above (one way to encode temporary overrides is sketched after this list).
- Validate: Watch live chat to confirm that your assumptions were correct.
- Stats & trends: Compile reports about how often or how quickly users use certain language. This can help you prepare for the next incident.
- Re-evaluate vocabulary over time: Always review and reassess. Language changes quickly. For example, the terms Googles, Skypes, and Yahoos were used as stand-ins for racist and anti-Semitic slurs on Twitter in 2016. Now, in late 2017, they’ve disappeared — what have they been replaced with?
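To show how the “Adjust” and “Re-evaluate” steps might be wired into a filter, here is a small Python sketch of a temporary vocabulary override with an expiry date. The `trending_overrides` table and its helpers are invented for illustration; they are not a description of Community Sift’s or any other filter’s API.

```python
from datetime import datetime, timedelta

# Hypothetical override table: terms that are normally restricted but
# temporarily allowed (and watched) while a news event is unfolding.
trending_overrides = {}

def allow_temporarily(term: str, days: int = 7) -> None:
    """Relax the filter for a term while a breaking-news window is open."""
    trending_overrides[term.lower()] = datetime.now() + timedelta(days=days)

def is_temporarily_allowed(term: str) -> bool:
    """Check whether a normally-restricted term is inside its news window.
    Expired overrides fall back to the filter's usual behavior."""
    expiry = trending_overrides.get(term.lower())
    return expiry is not None and datetime.now() < expiry

# Example: allow news-related discussion of an otherwise-restricted term
# for one week, then re-evaluate.
allow_temporarily("kkk", days=7)
print(is_temporarily_allowed("KKK"))  # True until the window expires
```

The expiry date is the important part: it forces the team to come back and re-evaluate the term once the news cycle moves on, rather than leaving a one-off exception in place forever.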
Stay vigilant, and stay informed. Twitter is your team’s secret weapon. Have your team monitor trending hashtags and follow reputable news sites so you don’t miss anything your community may be talking about.
Provide Positive Feedback
Ever noticed that human beings are really good at punishing bad behavior but often forget to reward positive behavior? It’s a uniquely human trait.
If you’ve implemented the workflows above and are using smart moderation tools that blend automation with human review, your moderation team should have a lot more time on their hands. That means they can do what humans do best — engage with the community.
Positive moderation is a game changer. Not only does it help foster a healthier community, it can also have a huge impact on retention.
Some suggestions:
- Set aside time every day for moderators to watch live chat to see what the community is talking about and how users are interacting.
- Engage in purposeful community building — have moderators spend time online interacting in real time with real users.
- Forget auto-sanctions: Try auto-rewards! Use AI to find key phrases indicating that a user is helping another user, and send them a message thanking them, or even inviting them to collect a reward.
- Give your users the option to nominate a helpful user, instead of just reporting bad behavior.
- Create a queue that populates with users who have displayed consistent positive behavior (no recent sanctions, daily logins, no reports, etc) and reach out to them directly in private or public chat to thank them for their contributions (a sketch of such a queue follows this list).
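That last queue is straightforward to prototype. Below is a Python sketch of one way to populate it; the `UserStats` fields and the cut-offs (30 days without a sanction, a 7-day login streak, zero recent reports) are assumptions about data your platform probably already tracks, so adjust them to fit your community.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class UserStats:
    # Hypothetical per-user stats your platform likely already tracks.
    user_id: str
    last_sanction_at: Optional[datetime]
    login_streak_days: int
    reports_received_last_30_days: int

def is_positive_contributor(u: UserStats, now: Optional[datetime] = None) -> bool:
    """Criteria from the list above: no recent sanctions, daily logins,
    and no reports against the user. The exact cut-offs are assumptions."""
    now = now or datetime.now()
    no_recent_sanctions = (u.last_sanction_at is None
                           or now - u.last_sanction_at > timedelta(days=30))
    return (no_recent_sanctions
            and u.login_streak_days >= 7
            and u.reports_received_last_30_days == 0)

def build_positive_queue(users: List[UserStats]) -> List[UserStats]:
    """Populate a queue of users that moderators can thank directly."""
    return [u for u in users if is_positive_contributor(u)]

# Example: one long-time helpful user, one recently sanctioned user.
queue = build_positive_queue([
    UserStats("helpful_hanna", None, 14, 0),
    UserStats("grumpy_gus", datetime.now() - timedelta(days=2), 3, 1),
])
print([u.user_id for u in queue])  # ['helpful_hanna']
```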
Any one of these workflows will go a long way towards building a healthy, engaged, loyal community on your platform. Try them all, or just start out with one. Your community (and your team) will thank you.
With our chat filter and moderation software Community Sift, Two Hat has helped companies like Supercell, Roblox, Habbo, Friendbase, and more implement similar workflows and foster healthy, thriving communities.
Interested in learning how we can help your gaming or social platform thrive? Get in touch today!