We believe that a holistic approach to health, wellness, and safety is one of the best ways to help foster a positive and healthy community. To achieve this, platforms need five layers of community protection in place. This article is the second in a five-part series that explores the different layers digital platforms need to best protect their online communities.

One of these layers is classifying and filtering content using software, like Two Hat’s Community Sift, to detect harmful and illegal content such as cyberbullying, abuse, hate speech, violent threats, and child exploitation. By following these community management tips, platforms can weed out content that harms their users and encourage positive interactions.


Layer Two: Classify and Filter Content for a Safe, Optimal User Experience

A key part of Layer 1: Community Guidelines was to define and declare the behaviors that go against the very purpose and values of your platform. Now enter Layer 2: Classify & Filter. Specifically, identify and filter out the worst of the worst: content that violates your non-negotiable community standards. By knowing the online harm patterns that should never be allowed in your community, you can filter them before they infect your platform like a virus.

Malicious or inappropriate content in chats, comments, forum posts, and other online settings can devastate users, destroy trust, empower online predators, and put companies in reputational or legal jeopardy. Conversely, moderated and participative community chat increases user retention and makes room for positive human connection.

By using a tool that harmonizes artificial intelligence with unique, multi-faceted human insight, digital communities can remove harmful content more efficiently, before it reaches the end user. Human review ensures that context and nuance are taken into consideration when assessing flagged content, and it helps platforms improve user engagement, retention, and lifetime value.
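To make that workflow concrete, here is a minimal sketch of how an automated classifier and a human review queue might fit together. The thresholds, the `classify` callable, and the function names are illustrative assumptions, not Two Hat’s actual implementation.

```python
# A minimal sketch of the classify-then-review flow described above.
# The classifier, thresholds, and queue are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str          # "allow", "human_review", or "remove"
    risk_score: float    # 0.0 (benign) to 1.0 (clearly harmful)

def moderate_message(text: str, classify, review_queue) -> Verdict:
    """Route a message based on an automated risk score.

    `classify` is any callable returning a 0-1 risk score;
    `review_queue` collects borderline items for human moderators.
    """
    score = classify(text)
    if score >= 0.9:                  # clear violation: block before delivery
        return Verdict("remove", score)
    if score >= 0.5:                  # ambiguous: a human judges context and nuance
        review_queue.append(text)
        return Verdict("human_review", score)
    return Verdict("allow", score)    # low risk: deliver normally
```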

Activate robust and customizable filtering

It’s important for filters to be customizable so they can meet the unique needs of different online communities. Two Hat’s filtering is built on a blend of linguistic templates and rules developed by our Language and Culture specialists, augmented by artificial intelligence (AI). This two-pronged approach helps identify new trends and topics, so our language experts can update the AI in real time and eliminate new types of harmful content as they appear across different online spaces.
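As a rough illustration of how expert-written rules and a learned model score could be blended in code (the rule, weights, and `model_score` callable below are hypothetical stand-ins, not Community Sift’s actual template format):

```python
# A toy illustration of combining hand-written rule templates with a model score.
import re

RULES = [
    # expert-written template: classic scam phrasing scores high regardless of the model
    (re.compile(r"\bfree\s+gift\s*cards?\b", re.IGNORECASE), 0.8),
]

def blended_risk(text: str, model_score) -> float:
    """Return the stronger of the expert-rule signal and the model's score."""
    rule_risk = max(
        (risk for pattern, risk in RULES if pattern.search(text)),
        default=0.0,
    )
    return max(rule_risk, model_score(text))
```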

This flexibility allows digital platforms to adjust risk levels to reflect their different types of content and classes of users. Flexible Policy Guides are also mapped to a platform’s specific community guidelines for different use cases, such as public chat, private chat, and usernames, as illustrated in the sketch below.
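Here is a hypothetical sketch of what per-use-case Policy Guides might look like as configuration; the topic names and numeric risk levels are invented for the example, not Community Sift’s actual schema.

```python
# Hypothetical per-use-case policy guides: each surface gets its own
# maximum tolerated risk level per topic (0 = strictest).
POLICY_GUIDES = {
    # usernames are public and permanent, so almost nothing risky is allowed
    "usernames":    {"vulgarity": 0, "hate_speech": 0, "pii": 0},
    # public chat tolerates mild vulgarity but nothing targeted or unsafe
    "public_chat":  {"vulgarity": 2, "hate_speech": 0, "pii": 1},
    # private chat between adults can be configured more loosely
    "private_chat": {"vulgarity": 4, "hate_speech": 0, "pii": 2},
}

def allowed(use_case: str, topic: str, risk_level: int) -> bool:
    """Return True if content at `risk_level` for `topic` is within policy."""
    return risk_level <= POLICY_GUIDES[use_case].get(topic, 0)
```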

Screen for multiple topics and issues

Harmful and illegal behaviors take many shapes. That is why it’s crucial for digital platforms to use a content moderation tool that can identify and flag content across a wide array of topics. This includes:

  • Bullying
  • Violence
  • Hate Speech
  • Personally Identifiable Information (PII)
  • Vulgarity
  • Child Grooming
  • Drugs & Alcohol
  • Fraud
  • Extremism
  • Public Threat
  • Pornography
  • Gore
  • Weapons
  • And more

Monitoring for these topics ensures that community managers are alerted when a situation needs to be escalated and helps them better protect their communities. In some cases, it may be appropriate to warn or suspend a user for inappropriate language. In others, a community manager may need to ban a user or contact the authorities when there is an imminent threat of harm to an individual.
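As an illustration, a very simplified escalation mapping might look like the following; the topic-to-action assignments are assumptions for the example, and real workflows would also weigh user history, age gates, and local legal requirements.

```python
# A simplified escalation sketch under assumed severity labels.
ESCALATION = {
    "vulgarity":      "warn",                 # inappropriate language: warn or mute
    "bullying":       "suspend",              # repeated targeting: temporary suspension
    "hate_speech":    "ban",                  # zero-tolerance violation: remove the account
    "public_threat":  "notify_authorities",   # imminent threat of harm: escalate off-platform
    "child_grooming": "notify_authorities",
}

def escalate(topic: str) -> str:
    """Map a flagged topic to the community manager action suggested above."""
    return ESCALATION.get(topic, "human_review")
```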

Decrypt and filter “unnatural” language

Chat and other forms of messaging increasingly rely on abbreviations, acronyms, emoticons, and symbols rather than standard grammar and spelling. These communication variants, known as subversions, are always evolving in unexpected ways. It’s important for online communities to be able to decode the actual meaning of subversions, especially the most sophisticated attempts to obscure questionable or malicious content.

Two Hat’s AI is engineered to detect manipulation such as Leetspeak (1337 SP34K) and other substitutions of symbols and numerals for letters, as well as Unicode content, invisible characters, vertical chat, misspellings, upside-down text, emojis, mixed upper and lower case, and other variants. It can even catch manipulation attempts spread across multiple lines of content.
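To give a sense of how such subversions can be unwound before filtering, here is a minimal normalization sketch; real systems cover far more variants (Unicode confusables, vertical chat, multi-line splits) than this example, and the character mappings below are only illustrative.

```python
# A minimal normalization sketch for the kinds of subversion named above.
import unicodedata

LEET_MAP = str.maketrans("013457@$", "oieastas")   # common symbol/number swaps

def normalize(text: str) -> str:
    """Reduce a message to a canonical form before filtering."""
    text = unicodedata.normalize("NFKC", text)                # fold compatibility characters (e.g. full-width letters)
    text = "".join(ch for ch in text if ch.isprintable())     # drop invisible/control characters
    text = text.lower()                                       # neutralize mixed case
    return text.translate(LEET_MAP)                           # undo basic Leetspeak

# e.g. normalize("1D10T") -> "idiot" after the swaps above
```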

Customize your approach to each community audience

Digital platforms that use a content moderation tool adaptable to the needs of the community they serve are best positioned to protect that community and maximize engagement. We recommend customizing your Policy Guides to match your audience, monitoring for a broad range of topics, and detecting threatening or suspicious “unnatural” language.

Learn how Two Hat can help take a holistic approach to protecting your community by reading the rest of our Five Layers of Community Protection blog series. You can also request a live demo to see how our unique approach can help you classify and filter content to create a safe community and improved experience for your users.
