Social Media Slang Every Community Manager Should Know in 2018

We all know how quickly news travels online. But what about new slang? Just like news stories, words and phrases can go viral in the blink of an eye (or the post of a Tweet, if you will).

No one is more aware of the ever-evolving language of social media than online community managers. Moderators and community managers who review user-generated chat, comments, and usernames every day have to stay in the loop when it comes to new online slang.

Here are eight new words that our language and culture experts identified this month:

hundo p

Short for “one hundred percent”; absolutely, with total certainty. “This coffee is hundo p giving me life.”

trill

A combination of “true” and “real”. “To keep it trill, I need a break from reviewing usernames. I can’t look at another variation of #1ShawnMendesFan.”

otp

One True Pairing; the perfect couple you ship in fanfiction. “Link and Zelda are always and forever the otp. Don’t @ me.”

distractivated

Distracted in a way that motivates/inspires. “I was so distractivated today looking at Twitter for new slang, I mentally rearranged my entire apartment.”

JOMO

Joy of Missing Out; the opposite of FOMO. “I missed the catered lunch and Fortnite battle yesterday, but it’s okay because I was JOMOing in the park.”

ngl; tache

Not gonna lie; mustache. “I’m ngl, that new moderator who just started today has a serious Magnum PI tache going on.”

sus

Suspect; suspicious. “These cat pics are pretty sus; no way does that cat have anime-size eyes.”

What’s an effective community management strategy to ensure that new phrases are added regularly? We recommend using a content moderation tool that automatically identifies trending terms and can be updated in real time.
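To illustrate (very roughly) what “automatically identifies trending terms” can mean in practice, here is a minimal sketch in Python. The vocabulary, threshold, and tokenization are placeholder assumptions we made up for this post, not a description of any particular product:

```python
from collections import Counter

# Stand-in "known" vocabulary; a real tool would use a large, curated lexicon.
KNOWN_VOCAB = {"the", "a", "is", "this", "coffee", "me", "giving", "life"}

def trending_terms(messages, min_count=25):
    """Flag tokens that aren't in the known vocabulary but suddenly appear often."""
    counts = Counter(
        token
        for message in messages
        for token in message.lower().split()
        if token not in KNOWN_VOCAB
    )
    return sorted(term for term, count in counts.items() if count >= min_count)
```

A community manager could review the flagged terms regularly and decide which ones belong on an allow list, a watch list, or a deny list.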

Not sure how to choose the right solution for your community? Check out “What Is the Difference Between a Profanity Filter and a Content Moderation Tool?”

In the meantime, happy moderating (and try not to get too distractivated).

Affected by the Smyte Closure? Two Hat Security Protects Communities From Abusive Comments and Hate Speech

Statement from CEO and founder Chris Priebe: 

As many of you know, Smyte was recently acquired by Twitter and its services are no longer available, affecting many companies in the industry.

As CEO and founder of Two Hat Security, creators of the chat filter and content moderation solution Community Sift, I would like to assure both our valued customers and the industry at large that we are, and will always remain, committed to user protection and safety. For six years we have worked with many of the largest gaming and social platforms in the world to protect their communities from abuse, harassment, and hate speech.

We will continue to serve our existing clients and welcome the opportunity to work with anyone affected by this unfortunate situation. Our mandate is, and always will be, to protect users on behalf of all sites. We are committed to providing uninterrupted service to those who rely on us.

If you’re in need of a filter to protect your community, we can be reached at hello@twohat.com.

Three Techniques to Protect Users From Cyberbullying

CEO Chris Priebe founded Two Hat Security back in 2012, with a big goal: To protect people of all ages from online bullying. Over the last six years, we’ve been given the opportunity to help some of the largest online games, virtual worlds, and messaging apps in the world grow healthy, engaged communities on their platforms.

Organizations like The Cybersmile Foundation provide crucial services, including educational resources and 24-hour global support, to victims of cyberbullying and online abuse.

But what about the platforms themselves? What can online games and social networks do to prevent cyberbullying from happening in the first place? And how can community managers play their part?

In honor of #StopCyberbullyingDay 2018 and our official support of the event, today we are sharing our top three techniques that community managers can implement to stop cyberbullying and abuse in their communities.

1. Share community guidelines

Clear community standards are the building blocks of a healthy community. Sure, they won’t automatically prevent users from engaging in toxic or disruptive behavior, but they do set language and behavior expectations up front.

Post guidelines where every community member can see them. For a forum, pin a “Forum Rules, Read Before Posting” post at the top of the page. For comment sections, include a link or popup next to the comment box. Online games can even embed a code of conduct reminder within their reporting feature. Include consequences — what can users expect to happen if policies are broken?

Don’t just include what not to do — include what to do! Want the community to encourage and support each other? Tell them!

2. Use proactive moderation

Once community standards are clearly communicated, community managers need a method to filter, escalate, and review abusive content.

Often, that involves choosing the right moderation software. Proactive moderation means filtering cyberbullying and abuse before it ever reaches the community. Most community managers use either a simple profanity filter or a content moderation tool. Profanity filters use a strict blacklist/whitelist to detect harassment, but they aren’t sophisticated or accurate enough to understand context or nuance, and some only work in English.

Instead, find a content moderation tool that can accurately identify cyberbullying, remove it in real time — and ultimately prevent users from experiencing abuse.

Of course, platforms should still always have a reporting system. But proactive moderation means that users only have to report questionable, “grey-area” content or false positives, instead of truly damaging content like extreme bullying and hate speech.
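As a concrete (and deliberately simplified) sketch of proactive moderation in code, the snippet below routes each incoming message to one of three outcomes before it is shown to the community. The `classify()` scorer and the thresholds are placeholders we invented for illustration:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"        # broadcast to the community
    FILTER = "filter"      # block before anyone sees it
    ESCALATE = "escalate"  # send to a human review queue

def classify(message: str) -> int:
    """Placeholder severity score from 0 (benign) to 10 (extreme).
    A real moderation tool would use far more sophisticated detection."""
    deny_terms = {"badword", "slur"}  # stand-in terms
    return 9 if any(term in message.lower() for term in deny_terms) else 0

def moderate(message: str) -> Action:
    """Proactive moderation: decide before the message reaches the community."""
    severity = classify(message)
    if severity >= 8:      # e.g. extreme bullying or hate speech
        return Action.FILTER
    if severity >= 5:      # grey-area content: let a human decide
        return Action.ESCALATE
    return Action.ALLOW
```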

3. Reward positive users

A positive user experience leads to increased engagement, loyalty, and profits.

Part of a good experience involves supporting the community’s code of conduct. Sanctioning users who post abusive comments or attack other community members is an essential technique in proactive moderation.

But with so much attention paid to disruptive behavior, positive community members can start to feel like their voices aren’t heard.

That’s why we encourage community managers to reinforce positive behavior by rewarding power users.

Emotional rewards add a lot of value, cost nothing, and take very little time. Forum moderators can upvote posts that embody community standards. Community managers can comment publicly on encouraging or supportive posts. Mods and community managers can even send private messages to users who contribute to community health and well-being.

Social rewards like granting access to exclusive content and achievement badges work, too. Never underestimate the power of popularity and peer recognition when it comes to encouraging healthy behavior!

When choosing a content moderation tool to aid in proactive moderation, look for software that measures user reputation based on behavior. This added technology takes the guesswork and manual review out of identifying positive users.

#StopCyberbullyingDay 2018, organized by the Cybersmile Foundation.

The official #StopCyberbullyingDay takes place once every year, on the third Friday in June. But for community managers, moderators, and anyone who works with online communities (including those of us at Two Hat Security), protecting users from bullying and harassment is a daily task. Today, start out by choosing one of our three healthy community building recommendations — and watch your community thrive.

After all, doesn’t everyone deserve to share online without fear of harassment or abuse?

Two Hat Security Announced as Official Supporter of Stop Cyberbullying Day 2018

Two Hat Security has been announced as an Official Supporter of Stop Cyberbullying Day 2018, helping to promote a positive and inclusive internet — free from fear, personal threats, and abuse.

Thanks to a generous donation by Two Hat Security, The Cybersmile Foundation can continue to help victims of online abuse around the world while raising awareness of the important issues surrounding the growing problem of harassment and cyberbullying in all its forms.

“We are delighted to receive this generous donation from Two Hat Security to help us continue our work supporting victims of cyberbullying and delivering educational programs to help people avoid cyberbullying related issues in the future,” says Iain Alexander, Head of Engagement at The Cybersmile Foundation.

Two Hat Security is the creator of Community Sift, a content filter and automated moderation tool that allows gaming and social platforms to proactively protect their communities from cyberbullying, abuse, profanity, and more.

“Stop Cyberbullying Day is such an important initiative,” says Carlos Figueiredo, Director of Community Trust and Safety at Two Hat Security. “We believe that digital citizenship and sportsmanship are the keys to understanding disruptive player behavior. The work that the Cybersmile Foundation does to support victims perfectly lines up with our mission to protect online communities from abuse and harassment.”

Stop Cyberbullying Day regularly features a host of global corporations, celebrities, influencers, educational institutions and governments who come together and make the internet a brighter, more positive place. The day has previously been supported by celebrities and brands including One Direction, Fifth Harmony, MTV, Twitter and many more.

To get involved with the Stop Cyberbullying Day 2018 activities, participants can share positive messages on social media using the hashtag #STOPCYBERBULLYINGDAY.

About Two Hat Security

Founded in 2012, Two Hat Security empowers gaming and social platforms to foster healthier online communities. With their flagship product Community Sift, an enterprise-level content filter and automated moderation tool, online communities can proactively filter abuse, harassment, hate speech, and other disruptive behavior.

Community Sift currently processes over 22 billion messages a month in 20 different languages, across a variety of communities and demographics, including Roblox, Animal Jam, Kabam, Habbo, and more.

For sales or media enquiries, please contact hello@twohat.com.

About The Cybersmile Foundation

The Cybersmile Foundation is a multi-award winning anti-cyberbullying nonprofit organization. Committed to tackling all forms of digital abuse, harassment and bullying online, Cybersmile work to promote diversity and inclusion by building a safer, more positive digital community.

Through education, innovative awareness campaigns, and the promotion of positive digital citizenship, Cybersmile reduce incidents of cyberbullying, and through their professional help and support services they empower victims and their families to regain control of their lives.

For media enquiries contact pressoffice@cybersmile.org.

About Stop Cyberbullying Day

Stop Cyberbullying Day is an internationally recognized day of awareness and activities, both on and offline, founded and launched by The Cybersmile Foundation on June 17, 2012. Held annually on the third Friday in June, Stop Cyberbullying Day encourages people around the world to show their commitment to a truly inclusive and diverse online environment for all, without fear of personal threats, harassment or abuse. Users of social media include the hashtag #STOPCYBERBULLYINGDAY to show their support for inclusion, diversity, self-empowerment and free speech.

What Is the Difference Between a Profanity Filter and a Content Moderation Tool?

Profanity filter, content moderation, automated moderation tool, oh my! Ever noticed that these terms are often used interchangeably in the industry? The thing is, the many subtle (and not so subtle) differences between them can affect your long-term growth plans, and leave you stuck in a lengthy service contract with a solution that doesn’t fit your community.

Selecting the right software for content moderation is an important step if you want to build a healthy, engaged online community. To make things easier for you, let’s explore the main points of confusion between profanity filters and automated moderation tools.

Profanity filters catch, well, profanity

Profanity filters are pretty straightforward. They work from a fixed blacklist/whitelist to allow or deny certain words. They’re great at finding your typical four-letter words, especially when they’re spelled correctly. Be aware, though: the minute you implement a blacklist/whitelist, your users are likely to start using language subversions to get around the filter. Even a simple manipulation like adding punctuation in the middle of an offensive word can cause a profanity filter to misread it and let it slip through the cracks.
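To see why, here’s a minimal blacklist check in Python, along with the kind of trivial subversion that slips past it. The deny list and examples are placeholders, not a real product’s word list:

```python
BLACKLIST = {"badword"}  # stand-in for a real deny list

def naive_profanity_filter(message: str) -> bool:
    """Return True if any blacklisted word appears as an exact token."""
    return any(token in BLACKLIST for token in message.lower().split())

print(naive_profanity_filter("badword"))        # True  -- exact match is caught
print(naive_profanity_filter("b.a.d.w.o.r.d"))  # False -- punctuation slips through
print(naive_profanity_filter("baadword"))       # False -- a stretched spelling slips through
```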

Be prepared to work overtime adding words to your allow and deny list, based on community trends and new manipulations.

A typical example of escalating filter subversion.

Profanity filters can be set up fast

One benefit of profanity filters, at least at first glance? They’re easy to set up. Many profanity filters allow you to enter your credit card and integrate in just a few minutes, and they often offer freemium versions or free trials to boot.

While this is great news for pre-revenue platforms and one-person shows, trading accuracy for speed can come back to bite you in the end. If you’re in a growth mindset and expect your community to scale, it’s in your best interest to implement the most effective and scalable moderation tools at launch. Remember that service contract we mentioned earlier? This is where you don’t want to get stuck with the wrong software for your community.

So, what are your other options? Let’s take a look at content moderation tools.

Content moderation tools filter more than just profanity

Online communities are made up of real people, not avatars. That means they behave like real people and use language like real people. Disruptive behavior (what we used to call “toxicity”) comes in many forms, and it’s not always profanity.

Some users will post abusive content in other languages. Some will harass other community members in more subtle ways — urging them to harm themselves or even commit suicide, using racial slurs, engaging in bullying behavior without using profanity, or doxxing (sharing personal information without consent). Still others will manipulate language with l337 5p34k, ÙniÇode ÇharaÇters, or kreative mizzpellingzz.

Accuracy is key here — and a profanity filter that only finds four-letter words cannot provide that same level of fine-tuned detection.
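One small piece of that fine-tuned detection is normalizing manipulated text before matching it. Here’s a rough sketch of the idea in Python; it’s our simplified illustration, not a description of Community Sift’s internals:

```python
import unicodedata

# A tiny character-substitution map; a real tool maintains much larger,
# language-aware mappings.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(text: str) -> str:
    """Fold Unicode lookalikes, map common leet substitutions, collapse repeats."""
    folded = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    mapped = folded.lower().translate(LEET_MAP)
    collapsed = []
    for ch in mapped:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)  # "baaad" becomes "bad" (lossy, but easier to match)
    return "".join(collapsed)

print(normalize("l337 5p34k"))          # "let speak"
print(normalize("ÙniÇode ÇharaÇters"))  # "unicode characters"
print(normalize("mizzpellingzz"))       # "mizpelingz"
```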

A context-based moderation tool can even recognize when a word that’s perfectly innocent in one context takes on a very different meaning in another conversation (“balls” and “sausage” are two obvious examples).

What else should you look for?

Vertical Chat

Also known as “dictionary dancing”. Those same savvy users who leverage creative misspellings to bypass community guidelines will also use multiple lines of chat to get their message across:

Vertical chat in action.
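A hedged sketch of one way a tool might handle this: keep a short rolling buffer of each user’s recent lines and evaluate the combined text as well as each single line. The `contains_abuse()` check and window size below are placeholders for illustration:

```python
from collections import defaultdict, deque

WINDOW = 4  # how many recent lines to combine per user
recent_lines = defaultdict(lambda: deque(maxlen=WINDOW))

def contains_abuse(text: str) -> bool:
    """Placeholder check; stands in for a real filter or classifier."""
    return "badphrase" in text.lower().replace(" ", "")

def check_vertical_chat(user_id: str, line: str) -> bool:
    """Catch abuse spread across multiple lines ("dictionary dancing")."""
    recent_lines[user_id].append(line)
    combined = " ".join(recent_lines[user_id])
    # Check both the single line and the rolling window of recent lines.
    return contains_abuse(line) or contains_abuse(combined)

print(check_vertical_chat("user42", "bad"))     # False -- harmless on its own
print(check_vertical_chat("user42", "phrase"))  # True  -- the combined lines are caught
```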

Usernames

Most platforms allow users to create a unique username for their profile. But don’t assume that a simple profanity filter will detect and flag offensive language in usernames. Unlike other user-generated content like chat, messages, comments, and forum posts, usernames rarely consist of “natural” language. Instead, they’re made up of long strings of letters and numbers — “unnatural” language. Most profanity filters lack the complex technology to filter usernames accurately, but some moderation tools are designed to adapt to all kinds of different content.
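As a rough illustration of what adapting to usernames can involve, the sketch below strips digits and separators before scanning for embedded deny-list terms. It’s a simplification we made up for this post, not any vendor’s actual approach:

```python
import re

DENY_TERMS = {"badword"}  # stand-in deny list

def check_username(username: str) -> bool:
    """Usernames are "unnatural" strings, so instead of splitting on whitespace,
    strip everything but letters and scan for embedded deny-list terms."""
    letters_only = re.sub(r"[^a-z]", "", username.lower())
    return any(term in letters_only for term in DENY_TERMS)

print(check_username("xX_BadWord_99"))   # True  -- embedded term found
print(check_username("SunnyGamer2018"))  # False -- nothing flagged
```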

Language & Culture

Can you think of many online communities where users only chat in English? Technology has brought people of different cultures, languages, and backgrounds together in ways that were unheard of in the past. If scaling into the global market is part of your business plan, choose a moderation tool that can support multiple languages. Accuracy and context are key here. Look for moderation software that supports languages built in-house by native speakers with a deep understanding of cultural and contextual nuances.

User Reputation

There’s one final difference we should call out here. Profanity filters treat everyone in the community the same. But anyone who has worked in online community management or moderation knows that human behavior is complex. Some users will never post a risky piece of content in their lifetime; some will break your community guidelines occasionally; some will consistently post content that needs to be filtered.

Profanity filters apply the same settings to all of these users, while some content moderation tools will actually look at the user’s reputation over time, and apply a more permissive or restrictive filter based on behavior. Pretty sophisticated stuff.
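Here’s a minimal sketch of the reputation idea, with thresholds we invented purely for illustration: trusted users get a more permissive filter, while repeat offenders get a more restrictive one.

```python
def filter_threshold(reputation: float) -> int:
    """Map a reputation score (0.0 = consistently disruptive, 1.0 = consistently
    positive) to a severity threshold; content at or above it gets filtered."""
    if reputation >= 0.8:
        return 8   # trusted user: only filter clearly severe content
    if reputation >= 0.4:
        return 5   # typical user: the community's default setting
    return 2       # repeat offender: restrictive filtering

def should_filter(message_severity: int, reputation: float) -> bool:
    return message_severity >= filter_threshold(reputation)

print(should_filter(6, reputation=0.9))  # False -- borderline content allowed for a trusted user
print(should_filter(6, reputation=0.2))  # True  -- the same content is filtered for a repeat offender
```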

Content moderation tools can be adapted to fit your community

A “set it and forget it” approach might work for a static, unchanging community with no plans for growth. If that’s the case for you, a profanity filter might be your best option. But if you plan to scale up, adding new users while keeping your current userbase healthy, loyal, and engaged, a content moderation tool with a more robust feature set is a much better long-term option.

Luckily, in today’s world, most content moderation technology is just a simple RESTful API call away.

Not only that, content moderation tools allow you to moderate your community much more efficiently and effectively than a simple profanity filter. With automated workflows in place, you can escalate alarming content (suicide threats, child exploitation, extreme harassment) to queues for your team to review, as well as take automatic action on accounts that post disruptive content.
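Here’s a hedged sketch of what that integration pattern often looks like: one HTTP call to a moderation endpoint, then routing based on the response. The endpoint URL, request fields, and severity scale below are hypothetical, not Community Sift’s actual API:

```python
import requests  # third-party HTTP client (pip install requests)

MODERATION_URL = "https://moderation.example.com/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def moderate_comment(user_id: str, text: str) -> str:
    """Send a comment to a (hypothetical) moderation API and route the result."""
    response = requests.post(
        MODERATION_URL,
        json={"user_id": user_id, "text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"severity": 0-10, "topics": [...]}

    if result["severity"] >= 8:
        # Alarming content (suicide threats, child exploitation, extreme harassment):
        # block it and escalate to a human review queue.
        return "escalated"
    if result["severity"] >= 5:
        return "filtered"   # blocked automatically, no human review needed
    return "published"
```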

Selecting a moderation solution for your platform is no easy task. When it’s time to decide, we hope you’ll use the information outlined above to choose the right option for your online community.