Five Wellness Tips for Community Managers

Building healthy and safe digital spaces begins with healthy community managers and moderators. We need to help community managers be mindful and take care of their mental health, because they are often exposed to some of the worst of the internet on a daily basis.

Occupational burnout is an all-too-common result that we, as an industry, must highlight and focus on changing. Identifying job stress and giving employees flexibility to prioritize their wellbeing improves our communities.

We suggest that companies encourage community managers to follow these 5 tips to prioritize their wellness and resilience:

1 – Create a wellness plan

Community managers are often repeatedly exposed to the worst online behaviors and are left feeling emotionally drained at the end of the workday. A wellness plan helps them manage their stress and mentally recharge. This actionable set of activities helps community managers to take wellness breaks throughout the day and to create a buffer between work and their personal lives. Whether it’s taking a walk outside, listening to music, meditating, talking to family or friends, a wellness plan can help community managers decompress before transitioning to the next moment of their day.

2 – Leverage AI Plus

Community managers monitor for hate speech, graphic images, and other types of high-risk content. Prolonged exposure to traumatic content can severely impact an individual’s mental health and wellbeing. Content moderators can develop symptoms of PTSD, including insomnia, nightmares, anxiety, and auditory hallucinations, as a result of consistent exposure to traumatic content.

By proactively leveraging technology to filter content and reduce what human moderators are exposed to, our partners have reduced the workload of their community managers by as much as 88%*. This gives community managers more time to focus on other aspects of their job and protects their wellbeing by minimizing the amount of time they spend exposed to high-risk content.

3 – Be mindful of the types of content you’re moderating for

Rotating the types of content each team member monitors can help alleviate the negative impact of constant exposure to a single focus area. Threats of harm and self-harm, racism, sexism, predatory behavior, and child grooming are just a few of the types of content community managers monitor for and are exposed to daily.

4 – Focus on the positive

Most chat, images, and videos in online communities are aligned with the intended experiences of those products. In our experience, about 85% of user-generated content across different verticals is what we classify as low-risk, positive behavior: think community members discussing matters pertinent to the community, their hobbies and passions, or sharing pictures of their pets. Focusing on the positive side of your community will help you keep this reality in mind and remember why you do what you do every day.

One way to focus on the positive aspects of your community is to spend time in your product and see how community members are engaging, along with the creativity and passion they bring. Make a point of doing that at least once a week with the intent of focusing on the positive side of the community. Similarly, if you leverage a classification and filtering system like Community Sift, dedicate time to looking at chat logs that are positive. After either of these activities, write down and reflect on three to five things that were meaningful to you.

5 – Remember you’re making an impact

Monitoring an endless stream of high-risk content can make community managers feel like their work isn’t making an impact. That couldn’t be further from the truth. Their work is directly contributing to the health and safety of social and online play communities. When community managers identify a self-harm threat or protect children from predators, they are immediately making an impact in the life of that individual. In addition to monitoring content, community managers help to ensure that users have a positive and happy experience when engaging with their platform.

According to a 2020 survey conducted by the Anti-Defamation League, 81% of U.S. adults aged 18-45 who played online multiplayer games experienced some form of harassment. Approximately 22% of those community members went on to quit an online platform because of the harassment they experienced. Harassment is an issue actively driving community members away from engaging with their favorite platforms. By helping to create a safe and healthy space, community managers are creating an environment where individuals can make friends, feel like they belong to a community, and have overall positive social interactions without the fear of harassment, while also helping drive the success of the community and overall acquisition and retention metrics. A true win-win.

Help protect the wellbeing of your community managers. Request a demo today to see how Two Hat’s content moderation platform can reduce your community managers’ workload and exposure to harmful content.

Source:
* Two Hat Customer analysis, 2020

Community Manager Academy: Resources for Growing Healthy Online Communities

As an online Community Manager, you’re responsible for a seemingly endless list of tasks and projects. From managing a team of moderators to reporting on community engagement metrics, your responsibilities never end.

You don’t have time to seek out the latest trends in community management – you’re too busy compiling your weekly “time to resolution for tickets” report!

We want community managers and moderators to thrive in their jobs – after all, protecting users from abusive content and fostering healthy communities is our passion. We’re always looking for the best way to share tips and tricks, best practices, and walkthroughs.

To save you time – and keep you up to date on the latest news in the business – we’ve created Community Manager Academy, our version of school (minus exams, grades, and deadlines, so you know… fun school).

Our Community Manager resource center consists of on-demand webinars and downloadable content that can be accessed anytime, anywhere. You’ll find UGC moderation best practices, community health checklists, COPPA compliance guides, and more.

Take a minute to check out the page, and let us know what you think. What do you like? What do you not like? What topics would you like us to cover in the future? It’s your page — we would love to hear from you!

Social Media Slang Every Community Manager Should Know in 2018

We all know how quickly news travels online. But what about new slang? Just like news stories, words and phrases can go viral in the blink of an eye (or the post of a Tweet, if you will).

No one is more aware of the ever-evolving language of social media than online community managers. Moderators and community managers who review user-generated chat, comments, and usernames every day have to stay in the loop when it comes to new online slang.

Here are eight new words that our language and culture experts identified this month:

hundo p

With 100% certainty; absolutely. “This coffee is hundo p giving me life.”

trill

A combination of “true” and “real”. “To keep it trill, I need a break from reviewing usernames. I can’t look at another variation of #1ShawnMendesFan.”

otp

One True Pairing; the perfect couple you ship in fanfiction. “Link and Zelda are always and forever the otp. Don’t @ me.”

distractivated

Distracted in a way that motivates/inspires. “I was so distractivated today looking at Twitter for new slang, I mentally rearranged my entire apartment.”

JOMO

Joy of Missing Out; the opposite of FOMO. “I missed the catered lunch and Fortnite battle yesterday but it’s okay because I was JOMOing in the park.”

ngl; tache

Not gonna lie; mustache. “I’m ngl, that new moderator who just started today has a serious Magnum PI tache going on.”

sus

Suspect. “These cat pics are pretty sus, no way does it have anime-size eyes.”

What’s an effective community management strategy to ensure that new phrases are added regularly? We recommend using a content moderation tool that automatically identifies trending terms and can be updated in real time.
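As a rough illustration only, here is one way trend detection could work under a very simple assumption: count unfamiliar terms as chat flows through, and surface any term that spikes so a language expert can review it and update the filter. The vocabulary, threshold, and function names below are hypothetical stand-ins, not how any particular moderation tool is implemented.

    from collections import Counter

    KNOWN_VOCABULARY = {"hundo", "trill", "otp", "sus"}   # terms the filter already classifies
    candidate_counts = Counter()                          # rolling tally of unfamiliar terms

    def observe(message: str, threshold: int = 2):
        """Tally unknown words; flag any that spike so a language expert can review them."""
        for word in message.lower().split():
            if word.isalpha() and word not in KNOWN_VOCABULARY:
                candidate_counts[word] += 1
                if candidate_counts[word] == threshold:
                    print(f"Trending term for review: {word!r}")

    # Demo: "jomo" appears twice, crosses the (deliberately low) threshold, and gets flagged.
    for msg in ("jomo is the new fomo", "pure jomo energy today"):
        observe(msg)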

Not sure how to choose the right solution for your community? Check out What is the difference between a profanity filter and a content moderation tool?

In the meantime, happy moderating (and try not to get too distractivated).

Quora: What are the different ways to moderate content?

There are five different approaches to User-Generated Content (UGC) moderation:

  • Pre-moderate all content
  • Post-moderate all content
  • Crowdsourced (user reports)
  • 100% computer-automated
  • 100% human review

Each option has its merits and its drawbacks. But as with most things, the best method lies somewhere in between — a mixture of all five techniques.

Let’s take a look at the pros and cons of your different options.

Pre-moderate all content

  • Pro: You can be fairly certain that nothing inappropriate will end up in your community; you know you have human eyes on all content.
  • Con: Time- and resource-consuming; subject to human error; does not happen in real time, and can be frustrating for users who expect to see their posts immediately.

Post-moderate all content

  • Pro: Users can post and experience content in real time.
  • Con: Once risky content is posted, the damage is done; puts the burden on the community as it usually involves a lot of crowdsourcing and user reports.

Crowdsourcing/user reports

  • Pro: Gives your community a sense of ownership; people are good at finding subtle language.
  • Con: Similar to post-moderating all content, once threatening content is posted, it’s already had its desired effect, regardless of whether it’s removed; forces the community to police itself.

100% computer-automated

  • Pro: Computers are great at identifying the worst and best content; automation frees up your moderation team to engage with the community.
  • Con: Computers aren’t great at identifying gray areas and making tough decisions.

100% human review

  • Pro: Humans are good at making tough decisions about nuanced topics; moderators become highly attuned to community sentiment.
  • Con: Humans burn out easily; not a scalable solution; reviewing disturbing content can have an adverse effect on moderators’ health and wellness.

So, if all five options have valid pros and cons, what’s the solution? In our experience, the most effective technique uses a blend of both pre- and post-moderation, human review, and user reports, in tandem with some level of automation.

The first step is to nail down your community guidelines. Social products that don’t clearly define their standards from the very beginning have a hard time enforcing them as they scale up. Twitter is a cautionary tale for all of us, as we witness their current struggles with moderation. They launched the platform without the tools to enforce their (admittedly fuzzy) guidelines, and the company is facing a very public backlash because of it.

Consider your stance on the following:

  • Bullying: How do you define bullying? What behavior constitutes bullying in your community?
  • Profanity: Do you block all swear words or only the worst obscenities? Do you allow acronyms like WTF?
  • Hate speech: How do you define hate speech? Do you allow racial epithets if they’re used in a historical context? Do you allow discussions about religion or politics?
  • Suicide/Self-harm: Do you filter language related to suicide or self-harm, or do you allow it? Is there a difference between a user saying “I want to kill myself,” “You should kill yourself,” and “Please don’t kill yourself”?
  • PII (Personally Identifiable Information): Do you encourage users to use their real names, or does your community prefer anonymity? Can users share email addresses, phone numbers, and links to their profiles on other social networks? If your community is under-13 and in the US, you may be subject to COPPA.

Different factors will determine your guidelines, but the most important things to consider are:

  • The nature of your product. Is it a battle game? A forum to share family recipes? A messaging app?
  • Your target demographic. Are users over or under 13? Are portions of the experience age-gated? Is it marketed towards adults only?

Once you’ve decided on community guidelines, you can start to build your moderation workflow. First, you’ll need to find the right software. There are plenty of content filters and moderation tools on the market, but in our experience, Community Sift is the best.

A high-risk content detection system designed specifically for social products, Community Sift works alongside moderation teams to automatically identify threatening UGC in real time. It’s built to detect and block the worst of the worst (as defined by your community guidelines), so your users and moderators don’t ever have to see it. There’s no need to force your moderation team to review disturbing content that a computer algorithm can be trained to recognize in a fraction of a second. Community Sift also allows you to move content into queues for human review, and automate actions (like player bans) based on triggers.

Once you’ve tuned the system to meet your community’s unique needs, you can create your workflows.

You may want to pre-moderate some content, even with a content filter running in the background. If your product is targeted at under-13 users, as an added layer of human protection, you might pre-moderate anything that the filter doesn’t classify as high-risk. Or maybe you route all content flagged as high-risk (extreme bullying, hate speech, rape threats, etc.) into queues for moderators to review. For older communities, you may not require any pre-moderation and instead depend on user reports for any post-moderation work.
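To make that kind of routing concrete, here is a minimal sketch that assumes a hypothetical classifier returning a risk level. The risk levels, queue names, and placeholder vocabulary are illustrative stand-ins, not Community Sift’s actual API.

    from enum import Enum

    class Risk(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    def classify(text: str) -> Risk:
        """Hypothetical stand-in for a real classifier; a production system is far more nuanced."""
        high_risk_terms = {"threat-phrase", "slur-word"}   # placeholder vocabulary for the sketch
        return Risk.HIGH if any(t in text.lower() for t in high_risk_terms) else Risk.LOW

    def route(text: str, under_13_community: bool) -> str:
        risk = classify(text)
        if risk is Risk.HIGH:
            return "blocked"                # the worst content never reaches users or moderators
        if under_13_community or risk is Risk.MEDIUM:
            return "pre-moderation queue"   # human eyes on it before the post goes live
        return "published"                  # low-risk content appears in real time

    print(route("check out my new puppy!", under_13_community=True))   # -> pre-moderation queue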

With an automated content detection system in place, you give your moderators their time back to do the tough, human stuff, like dealing with calls for help and reviewing user reports.

Another piece of the moderation puzzle is addressing negative user behavior. We recommend using automation, with the severity increasing with each offense. Techniques include warning users when they’ve posted high-risk content, and muting or banning their accounts for a short period. Users who persist can eventually lose their accounts. Again, the process and severity here will vary based on your product and demographic. The key is to have a consistent, well-thought-out process from the very beginning.
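As a sketch of what an escalation ladder could look like in practice, the offense thresholds and penalty lengths below are placeholders you would tune to your own product and demographic; they are not a prescribed policy.

    # Hypothetical escalation ladder; thresholds and penalty lengths are examples only.
    SANCTIONS = [
        (1, "warning"),           # first offense: warn the user
        (3, "24-hour mute"),      # repeated offenses: temporary mute
        (5, "7-day suspension"),  # persistent offenders: short ban
        (8, "permanent ban"),     # users who keep offending lose the account
    ]

    def sanction_for(offense_count: int) -> str:
        """Return the strongest penalty whose threshold the user has reached."""
        action = "no action"
        for threshold, penalty in SANCTIONS:
            if offense_count >= threshold:
                action = penalty
        return action

    for count in (1, 3, 6, 9):
        print(count, "->", sanction_for(count))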

You will also want to ensure that you have a straightforward and accessible process for users to report offensive behavior. Don’t bury the report option, and make sure that you provide a variety of report tags to select from, like bullying, hate speech, sharing PII, etc. This will make it much easier for your moderation team to prioritize which reports they review first.
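For example, a report queue might simply be ordered by tag severity so moderators see the most urgent reports first. The tags and ranking below are illustrative assumptions, not a recommended scheme; adjust them to your own guidelines.

    # Illustrative severity ranking for report tags.
    TAG_PRIORITY = {
        "self-harm": 0,
        "predatory behavior": 1,
        "hate speech": 2,
        "sharing PII": 3,
        "bullying": 4,
        "profanity": 5,
    }

    def prioritize(reports):
        """Order user reports so the highest-risk tags reach moderators first."""
        return sorted(reports, key=lambda r: TAG_PRIORITY.get(r["tag"], len(TAG_PRIORITY)))

    queue = [
        {"id": 101, "tag": "profanity"},
        {"id": 102, "tag": "self-harm"},
        {"id": 103, "tag": "bullying"},
    ]
    for report in prioritize(queue):
        print(report["id"], report["tag"])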

Ok, so moderation is a lot of work. It requires patience and dedication and a strong passion for community-building. But it doesn’t have to be hard if you leverage the right tools and the right techniques. And it’s highly rewarding, in the end. After all, what’s better than shaping a positive, healthy, creative, and engaged community in your social product? It’s the ultimate goal, and ultimately, it’s an attainable one — when you do it right.

 

Originally published on Quora

Want more articles like this? Subscribe to our newsletter and never miss an update!



Quora: Does it make sense for media companies to disallow comments on articles?

It’s not hard to understand why more and more media companies are inclined to turn off comments. If you’ve spent any time reading the comments section on many websites, you’re bound to run into hate speech, vitriol, and abuse. It can be overwhelming and highly unpleasant. But the thing is, even though it feels like they’re everywhere, hate speech, vitriol, and abuse are only present in a tiny percentage of comments. Do the math, and you find that thoughtful, reasonable comments are the norm. Unfortunately, toxic voices almost always drown out healthy voices.

But it doesn’t have to be that way.

The path of least resistance is tempting. It’s easy to turn off comments — it’s a quick fix, and it always works. But there is a hidden cost. When companies remove comments, they send a powerful message to their best users: Your voice doesn’t matter. After all, users who post comments are engaged, they’re interested, and they’re active. If they feel compelled to leave a comment, they will probably also feel compelled to return, read more articles, and leave more comments. Shouldn’t media companies cater to those users, instead of the minority?

Traditionally, most companies approach comment moderation in one of two ways, both of which are ineffective and inefficient:

  • Pre-moderation. Costly and time-consuming, pre-moderating everything requires a large team of moderators. As companies scale up, it can become impossible to review every comment before it’s posted.
  • Crowdsourcing. A band-aid solution that doesn’t address the bigger problem. When companies depend on users to report the worst content, they force their best users to become de facto moderators. Engaged and enthusiastic users shouldn’t have to see hate speech and harassment. They should be protected from it.

I’ve written before about techniques to help build a community of users who give high-quality comments. The most important technique? Proactive moderation.

My company Two Hat Security has been training and tuning AI since 2012 using multiple unique data sets, including comments sections, online games, and social networks. In our experience, proactive moderation uses a blend of AI-powered automation, human review, real-time user feedback, and crowdsourcing.

It’s a balancing act that combines what computers do best (finding harmful content and taking action on users in real-time) and what humans do best (reviewing and reporting complex content). Skim the dangerous content — things like hate speech, harassment, and rape threats — off the top using a finely-tuned filter that identifies and removes it in real-time. That way no one has to see the worst comments. You can even customize the system to warn users when they’re about to post dangerous content. Then, your (much smaller and more efficient) team of moderators can review reported comments, and even monitor comments as they’re posted for anything objectionable that slips through the cracks.
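One way to picture the “warn before posting” piece is a pre-submit hook that gives the user real-time feedback and only holds a comment for human review if they post dangerous content anyway. This is a simplified sketch with a placeholder term list and invented function names, not how any particular filter works.

    DANGEROUS_TERMS = {"threat-phrase", "slur-word"}   # placeholders for a real filter vocabulary

    def looks_dangerous(text: str) -> bool:
        return any(term in text.lower() for term in DANGEROUS_TERMS)

    def on_submit(text: str, already_warned: bool):
        """Warn first; if the user posts dangerous content again, hold it for human review."""
        if not looks_dangerous(text):
            return "published", already_warned
        if not already_warned:
            return "warning shown to user", True     # real-time feedback before the comment goes live
        return "held for moderator review", True

    warned = False
    for attempt in ("great article!", "threat-phrase here", "threat-phrase again"):
        outcome, warned = on_submit(attempt, warned)
        print(outcome)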

Comments sections don’t have to be the darkest places on the internet. Media companies have a choice: they can continue to let the angriest, loudest, and most hateful voices drown out the majority, or they can give their best users a platform for discussion and debate.

Originally published on Quora



Quora: How do you build a community of users that give high-quality comments on a website?

There are a few key steps you can take to build a community that encourages high-quality comments:

  1. Know your community. What kind of articles will you be publishing? Will you cover controversial topics that are likely to elicit passionate responses? What demographic are you targeting? Once you know who will be posting (and what topics they will be posting about), you can start to…
  2. Think long and hard about community guidelines. If you know your community, you can create guidelines to protect it. Be clear about your policies. If you allow profanity but not bullying, define bullying for your audience. If you allow racial slurs within the context of a historical article but not when they’re directed at another user, make sure it’s explained in your policy guide.
  3. Build a comprehensive moderation strategy. Visit the comments section of most websites, and you’re bound to walk away with a skewed — and highly unpleasant — view of humanity. Toxic voices will always drown out healthy voices. But it doesn’t have to be that way. If you’re using a blend of smart computer automation and human review to moderate comments, you can build a process that works for your unique community.
  4. Engage with your best users. Who doesn’t appreciate a good shout-out? Encouraging high-quality content can go a long way towards fostering a healthy community. Give your moderators the time to upvote, call out, or comment on quality posts. If you’ve done the first three steps, your moderators will have time to do what people do best, and what computers will likely never do—interact with users and engage them emotionally.

This is by no means an exhaustive list. Your community will grow and change over time, so you may have to adjust your policies as your audience changes. You will probably make mistakes and have to course-correct your moderation strategy. But if you start with a solid baseline, you will serve your audience well.

Originally published on Quora
