There are five different approaches to User-Generated Content (UGC) moderation:
- Pre-moderate all content
- Post-moderate all content
- Crowdsourced (user reports)
- 100% computer-automated
- 100% human review
Each option has its merits and its drawbacks. But as with most things, the most effective method is a mixture of all five techniques.
Let’s take a look at the pros and cons of your different options.
Pre-moderate all content
- Pro: You can be fairly certain that nothing inappropriate will end up in your community; you know you have human eyes on all content.
- Con: Time- and resource-intensive; subject to human error; doesn't happen in real time, which can frustrate users who expect to see their posts appear immediately.
Post-moderate all content
- Pro: Users can post and experience content in real-time.
- Con: Once risky content is posted, the damage is done; puts the burden on the community as it usually involves a lot of crowdsourcing and user reports.
Crowdsourcing/user reports
- Pro: Gives your community a sense of ownership; people are good at catching subtle or coded language that automated filters miss.
- Con: Similar to post-moderating all content, once threatening content is posted, it’s already had its desired effect, regardless of whether it’s removed; forces the community to police itself.
100% computer-automated
- Pro: Computers are great at identifying the worst and best content; automation frees up your moderation team to engage with the community.
- Con: Computers aren’t great at identifying gray areas and making tough decisions.
100% human review
- Pro: Humans are good at making tough decisions about nuanced topics; moderators become highly attuned to community sentiment.
- Con: Humans burn out easily; not a scalable solution; reviewing disturbing content can have an adverse effect on moderators’ health and wellness.
So, if all five options have valid pros and cons, what’s the solution? In our experience, the most effective technique blends pre- and post-moderation, human review, and user reports, in tandem with some level of automation.
The first step is to nail down your community guidelines. Social products that don’t clearly define their standards from the very beginning have a hard time enforcing them as they scale up. Twitter is a cautionary tale for all of us, as we witness its current struggles with moderation. The company launched the platform without the tools to enforce its (admittedly fuzzy) guidelines and is now facing a very public backlash as a result.
Consider your stance on the following:
- Bullying: How do you define bullying? What behavior constitutes bullying in your community?
- Profanity: Do you block all swear words or only the worst obscenities? Do you allow acronyms like WTF?
- Hate speech: How do you define hate speech? Do you allow racial epithets if they’re used in a historical context? Do you allow discussions about religion or politics?
- Suicide/Self-harm: Do you filter language related to suicide or self-harm, or do you allow it? Is there a difference between a user saying “I want to kill myself,” “You should kill yourself,” and “Please don’t kill yourself”?
- PII (Personally Identifiable Information): Do you encourage users to use their real names, or does your community prefer anonymity? Can users share email addresses, phone numbers, and links to their profiles on other social networks? If your community serves children under 13 and operates in the US, you may be subject to COPPA (the Children’s Online Privacy Protection Act).
Different factors will determine your guidelines, but the most important things to consider are:
- The nature of your product. Is it a battle game? A forum to share family recipes? A messaging app?
- Your target demographic. Are users over or under 13? Are portions of the experience age-gated? Is it marketed to adults only?
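To make these decisions enforceable, it helps to capture them in a form your tooling can read. Below is a minimal sketch of guidelines expressed as configuration; the category names, thresholds, and age settings are illustrative assumptions, not the schema of any particular moderation product.

```python
# A minimal sketch of community guidelines captured as configuration.
# Category names, severity levels, and the age gate are illustrative
# assumptions; adapt them to your own product and tooling.

COMMUNITY_GUIDELINES = {
    "audience": {"min_age": 13, "age_gated_features": ["open_chat"]},
    "bullying": {"action": "block", "severity_threshold": "medium"},
    "profanity": {"action": "filter", "allow_mild_acronyms": True},  # e.g. WTF
    "hate_speech": {"action": "block", "severity_threshold": "low"},
    "self_harm": {
        # Distinguish a call for help from a threat aimed at another user.
        "self_directed": "escalate_to_moderator",
        "directed_at_others": "block",
        "supportive": "allow",
    },
    "pii": {"action": "filter", "block_items": ["email", "phone", "social_links"]},
}
```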
Once you’ve decided on community guidelines, you can start to build your moderation workflow. First, you’ll need to find the right software. There are plenty of content filters and moderation tools on the market, but in our experience, Community Sift is the best.
A high-risk content detection system designed specifically for social products, Community Sift works alongside moderation teams to automatically identify threatening UGC in real time. It’s built to detect and block the worst of the worst (as defined by your community guidelines), so your users and moderators don’t ever have to see it. There’s no need to force your moderation team to review disturbing content that a computer algorithm can be trained to recognize in a fraction of a second. Community Sift also allows you to move content into queues for human review, and automate actions (like player bans) based on triggers.
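The general pattern such a system follows can be sketched in a few lines. This is not Community Sift’s actual API; the classifier, risk levels, review queue, and sanction hook are hypothetical stand-ins for whatever your tooling provides.

```python
from enum import Enum


class Risk(Enum):
    LOW = 1
    GRAY_AREA = 2
    HIGH = 3


def handle_message(message, classify, review_queue, sanction):
    """Route one piece of UGC; returns True if the message is shown.

    `classify`, `review_queue`, and `sanction` are hypothetical stand-ins
    for your content filter, moderation queue, and enforcement hooks.
    """
    risk = classify(message)
    if risk is Risk.HIGH:
        sanction(message)             # e.g. automatically warn or mute the author
        return False                  # the worst content is never displayed
    if risk is Risk.GRAY_AREA:
        review_queue.append(message)  # a human makes the nuanced call later
    return True                       # low-risk and gray-area content is shown
```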
Once you’ve tuned the system to meet your community’s unique needs, you can create your workflows.
You may want to pre-moderate some content, even with a content filter running in the background. If your product is targeted at under-13 users, you might pre-moderate anything that the filter doesn’t classify as high-risk, as an added layer of human protection. Or maybe you route all content flagged as high-risk (extreme bullying, hate speech, rape threats, etc.) into queues for moderators to review. For older communities, you may not require any pre-moderation and instead depend on user reports for any post-moderation work.
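Here is a sketch of how those two workflows might differ. The risk labels and the returned actions are illustrative assumptions, not the behavior of any specific filter.

```python
def route(message: str, risk: str, under_13_community: bool) -> str:
    """Pre- vs. post-moderation routing, sketched with illustrative labels.

    `risk` is assumed to be 'high', 'gray', or 'low' from your content filter;
    the return values stand in for whatever actions your tooling exposes.
    """
    if risk == "high":
        return "block"    # extreme bullying, hate speech, threats: never shown
    if under_13_community:
        return "queue"    # younger audience: humans check what the filter passes
    return "publish"      # older audience: post now, rely on user reports
```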
With an automated content detection system in place, you give your moderators their time back to do the tough, human stuff, like dealing with calls for help and reviewing user reports.
Another piece of the moderation puzzle is addressing negative user behavior. We recommend automating these responses, with severity escalating with each offense. Techniques include warning users when they’ve posted high-risk content, and muting or banning their accounts for a short period. Users who persist can eventually lose their accounts. Again, the process and severity here will vary based on your product and demographic. The key is to have a consistent, well-thought-out process from the very beginning.
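One way to express that escalation is as a simple ladder keyed to a user’s offense count. The steps and durations below are assumptions you would tune to your own product and demographic.

```python
from datetime import timedelta

# Illustrative escalation ladder; the steps and durations are assumptions.
ESCALATION_LADDER = [
    ("warn", None),                   # 1st offense: warning message
    ("mute", timedelta(hours=1)),     # 2nd offense: short mute
    ("suspend", timedelta(days=1)),   # 3rd offense: temporary ban
    ("terminate", None),              # repeat offenders lose the account
]


def sanction_for(offense_count: int) -> tuple:
    """Return the (action, duration) for a user's nth high-risk post."""
    index = min(offense_count - 1, len(ESCALATION_LADDER) - 1)
    return ESCALATION_LADDER[index]
```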
You will also want to ensure that you have a straightforward and accessible process for users to report offensive behavior. Don’t bury the report option, and make sure that you provide a variety of report tags to select from, like bullying, hate speech, sharing PII, etc. This will make it much easier for your moderation team to prioritize which reports they review first.
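Report tags also give you something concrete to sort on. The sketch below orders a report queue by tag severity so the most urgent reports surface first; the tags, weights, and report shape are illustrative assumptions.

```python
# Illustrative severity weights per report tag; tune these to your guidelines.
TAG_PRIORITY = {
    "self_harm": 0,     # lowest number = reviewed first
    "hate_speech": 1,
    "bullying": 2,
    "pii": 3,
    "profanity": 4,
    "other": 5,
}


def triage(reports):
    """Order user reports so moderators see the most urgent tags first.

    Each report is assumed to look like {"tag": "bullying", "content": "..."}.
    """
    return sorted(reports, key=lambda r: TAG_PRIORITY.get(r["tag"], len(TAG_PRIORITY)))
```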
Ok, so moderation is a lot of work. It requires patience, dedication, and a strong passion for community-building. But it doesn’t have to be overwhelming if you leverage the right tools and the right techniques. And it’s highly rewarding in the end. After all, what’s better than shaping a positive, healthy, creative, and engaged community in your social product? It’s the ultimate goal, and when you do it right, it’s an attainable one.
Originally published on Quora