Moderation is a delicate art. It can take some real finesse to get it right. Every community is different and requires different techniques. But there are a few guiding principles that work for just about every product, from social networks to online games to forums.

Two things to consider as you build your moderation strategy:

  • You have the power to shape the community.
  • Words have real consequences.

They may seem unconnected, but they’re profoundly linked. When creating a set of community guidelines and deciding how you will communicate and support them, you’re acknowledging that your community deserves the best experience possible, free of abuse, threats, and harassment. There is an old assumption that trolls and toxicity are inevitable by-products of the great social experiment that is the Internet, but that doesn’t have to be true. With the right techniques—and technology—you can build a healthy, thriving community.

First, it’s crucial that you set your community guidelines and display them somewhere in your app or website that users can easily find.

Some things to consider when setting guidelines:

  • The age/demographic of your community. If you’re in the US and your community is marketed towards users under 13, by law you have to abide by the Children’s Online Privacy Protection Act (COPPA). The EU has similar regulations under the new General Data Protection Regulation (GDPR). In addition to regulating how you store Personally Identifiable Information (PII) on your platform, these laws also affect what kinds of information users can share with each other.
  • Know exactly where you stand on topics like profanity and sexting. It’s easy to take a stand on the really bad stuff like rape threats and hate speech. The trickier part is deciding where you draw the line with less dangerous subjects like swearing. Again, the age and demographic of your community will play into this. What is your community’s resilience level? Young audiences will likely need stricter policies, while mature audiences might be able to handle a more permissive atmosphere.
  • Ensure that your moderation team has an extensive policy guide to refer to. This will help avoid misunderstandings and errors when taking action on users’ accounts. If your moderators don’t know your guidelines, how can you expect the community to follow them?

Then, decide how you are going to moderate content. Your best option is to leverage software that combines AI (Artificial Intelligence) with HI (Human Intelligence). Machine learning has taken AI to a new level in the last few years, so it makes sense to take advantage of those recent advances. But you always need human moderators as well. The complex algorithms powering AI are excellent at some things, like identifying high-risk content (hate speech, bullying, abuse, and threats). Humans are uniquely suited to more subtle tasks, like reviewing nuanced content and reaching out to users who have posted cries for help.
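As a rough illustration of that division of labor, here is a minimal Python sketch of confidence-based routing. Everything in it is hypothetical (the classifier, the labels, the thresholds); it is not Two Hat’s API, just one way to let the AI handle clear-cut cases while handing nuanced or sensitive content to human moderators.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_REMOVE = "auto_remove"          # AI is confident the content breaks the rules
    HUMAN_REVIEW = "human_review"        # borderline or nuanced: a person decides
    ESCALATE_URGENT = "escalate_urgent"  # cries for help, grooming, credible threats
    ALLOW = "allow"


@dataclass
class Classification:
    """Hypothetical output of an AI content classifier."""
    label: str         # e.g. "hate_speech", "bullying", "self_harm", "clean"
    confidence: float  # 0.0 to 1.0


# Illustrative thresholds: tune them to your community's resilience level.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60
URGENT_LABELS = {"self_harm", "grooming", "credible_threat"}


def route(result: Classification) -> Route:
    """Decide whether the AI acts on its own or hands off to a human."""
    if result.label in URGENT_LABELS:
        # Sensitive cases always go to people, regardless of confidence.
        return Route.ESCALATE_URGENT
    if result.label == "clean":
        return Route.ALLOW
    if result.confidence >= AUTO_REMOVE_THRESHOLD:
        return Route.AUTO_REMOVE
    if result.confidence >= HUMAN_REVIEW_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.ALLOW


# A borderline insult goes to a person, not the algorithm.
print(route(Classification(label="bullying", confidence=0.70)))  # Route.HUMAN_REVIEW
```

The important design choice is that the automated path only fires on high-confidence, high-risk content; anything sensitive or borderline always lands in front of a person.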

Many companies decide to build content moderation software in-house, but it can be expensive, complex, and time-consuming to design and maintain. Luckily, there are existing moderation tools on the market.

Full disclosure: My company Two Hat Security makes two AI-powered content moderation tools that were built to identify and remove high-risk content. Sift Ninja is ideal for startups and new products that are just establishing an audience. Community Sift is an enterprise-level solution for bigger products.

Once you’ve chosen a tool that meets your needs, you can build out the appropriate workflows for your moderators.

Start with these basic techniques (a rough code sketch of how they fit together follows the list):

  • Automatically filter content that doesn’t meet your guidelines. Why force your users to see content that you don’t allow? With AI-powered automation, you can filter the riskiest content in real time.
  • Automatically escalate dangerous content (excessive bullying, cries for help, and grooming) to queues for your moderators to review.
  • Automatically take action on users based on their behavior. Warn, mute, or ban users who don’t follow the guidelines. It’s not about punishment—Riot Games found that users who are given immediate feedback are far less likely to re-offend:

When players were informed only of what kind of behavior had landed them in trouble, 50% did not misbehave in a way that would warrant another punishment over the next three months.

  • Give users a tool to report objectionable content. Moderators can then review the content and determine if further action is required.
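To make those four techniques concrete, here is a minimal sketch of how they might fit together: a real-time filter, an escalation queue for dangerous content, a progressive warn/mute/ban ladder with immediate feedback, and a user-report queue. All of the names, labels, and thresholds are hypothetical, and the classify function is only a keyword-matching stand-in so the example runs; a real system would call the AI model from whatever moderation tool you choose.

```python
from collections import defaultdict, deque

# Queues that human moderators work through.
escalation_queue = deque()  # dangerous content: excessive bullying, cries for help, grooming
report_queue = deque()      # content reported by other users

# Progressive sanctions, keyed by how many times a user has broken the guidelines.
SANCTIONS = ["warn", "mute_24h", "ban"]
violation_counts = defaultdict(int)

DANGEROUS_LABELS = {"self_harm", "grooming", "excessive_bullying"}
FILTER_THRESHOLD = 0.95  # only auto-filter when the classifier is very confident


def classify(text: str):
    """Stand-in for the AI classifier supplied by your moderation tool.
    A real system would call a trained model; keyword matching is used here
    only so the sketch runs end to end."""
    lowered = text.lower()
    if "loser" in lowered or "idiot" in lowered:
        return "bullying", 0.97
    return "clean", 0.99


def moderate(user_id: str, text: str) -> bool:
    """Return True if the message may be shown, False if it was filtered."""
    label, confidence = classify(text)

    if label in DANGEROUS_LABELS:
        # Never rely on automation alone here: queue it for a human.
        escalation_queue.append((user_id, text, label))

    if label != "clean" and confidence >= FILTER_THRESHOLD:
        # Filter in real time and give the user immediate feedback.
        apply_sanction(user_id, reason=label)
        return False
    return True


def apply_sanction(user_id: str, reason: str) -> None:
    """Warn first, then mute, then ban: feedback rather than pure punishment."""
    step = min(violation_counts[user_id], len(SANCTIONS) - 1)
    violation_counts[user_id] += 1
    notify(user_id, f"Your message was removed ({reason}). Action taken: {SANCTIONS[step]}.")


def report(reporter_id: str, target_user_id: str, text: str) -> None:
    """User-facing report button: queue the content for moderator review."""
    report_queue.append((reporter_id, target_user_id, text))


def notify(user_id: str, message: str) -> None:
    """Stand-in for however your product messages users."""
    print(f"[to {user_id}] {message}")


if __name__ == "__main__":
    shown = moderate("user_42", "you are such a loser")
    print("shown to community:", shown)  # False: filtered, and the user was warned
```

Note how the sanction ladder starts with a warning and immediate feedback, in line with the Riot Games finding quoted above.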

Building community is the fun part of launching a new social product. What kind of community do you want? Once you know the answer, you can get started. Draft your community guidelines, know how you will reinforce them, and invest in a moderation system that uses a blend of artificial and human intelligence. And once the hard stuff is out of the way—have fun, and enjoy the ride.  : )

Originally published on Quora
