Want to retain users and lower the cost of acquisition on your platform? In 2018, social features including chat, private messaging, usernames, and user profiles are all must-haves in an overstuffed market where user retention is critical to long-term success. Nothing draws a crowd like a crowd — and a crowd of happy, loyal, and welcoming users will always bring in more happy, loyal, and welcoming users.
But there will always be risks involved when adding social features to your platform. A small percentage of users will post unwanted content like hate speech, NSFW images, or abusive language, all of which can cause serious damage to your brand’s reputation.
So while social features are must-haves in 2018, understanding and mitigating the risks that come with them is equally important.
If you’re just getting started with chat moderation (and even if you’ve been doing it for a while), here are four key questions to ask.
1. How much risk is my platform/brand willing to accept?
Every brand is different. Your community's demographics will usually be a major factor in determining your risk tolerance.
For instance, communities with users under 13 in the US have to be COPPA compliant, so preventing users from sharing PII (personally identifiable information) is essential. Edtech platforms have to mitigate risk by ensuring that they’re CIPA and FERPA compliant.
With legal ramifications to consider, platforms designed for young people will always be far more risk-averse than brands marketed towards more mature audiences.
However, many older, more established brands — even if they cater to an older audience — will likely be less tolerant of risk than small or new organizations.
Consider your brand’s tone and history. Review your corporate guidelines to understand what your brand stands for. This is a great opportunity to define exactly what kind of an online community you want to create.
2. What kind of content is most dangerous to my platform/brand?
Try this exercise: Imagine that one item (say, a forum post or profile pic) containing pornography was posted on your platform. How would it affect the brand? How would your audience react to seeing pornography on your platform? How would your executive team respond? What would happen if the media/press found out?
The same goes for PII: for a brand associated with children or teens, the fallout could be monumental. (And if it happens on a platform aimed at users under 13 in the US, a COPPA violation can lead to potentially millions of dollars in fines.)
What about hate speech? Sexual harassment? What is your platform/brand’s definition of abuse or harassment? The better you can define these terms in relation to your brand, the better you will understand what kind of content you need to moderate.
3. How will I communicate my expectations to the community?
Don’t expect your users to automatically know what is and isn’t acceptable on your platform. Post your community guidelines where users can see them. Make sure users have to agree to your guidelines before they can post.
In a recent blog post for CMX, Two Hat Director of Community Trust & Safety Carlos Figueiredo explores how to write community guidelines you can stick to. He provides an engaging framework for everything from creating effective guidelines from the ground up to collaborating with your production team on products that encourage healthy interactions.
4. What tools can I leverage to manage risk and enforce guidelines in my community?
We recommend taking a proactive instead of a reactive approach to managing risk. What does that mean for chat moderation? First, let’s look at the different kinds of chat moderation:
- Live moderation: Moderators follow live chat in real time and take action as needed. High risk, very expensive, and not scalable.
- Pre-moderation: Moderators review, then approve or reject all content before it’s posted. Low risk, but slow, expensive, and not scalable.
- Post-moderation: Moderators review, then approve or reject all content after it's posted. High risk, since unwanted content stays visible until a moderator removes it.
- User reports: Moderators depend on users to report content, then review and approve or reject it. High risk, since it relies on users flagging content they've already seen.
On top of these techniques, there are also tools you can use to take a proactive approach, including in-house filters (read about the build internally vs. buy externally debate) and content moderation solutions like Two Hat's Community Sift (learn about the difference between a simple profanity filter and a content moderation tool).
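To make that last distinction concrete, here's a rough Python sketch (purely illustrative; this is not how Community Sift or any specific product works). A simple profanity filter can only match words against a denylist, while a content moderation tool classifies each message by topic and severity, which is what lets policy vary by the kind of risk involved.

```python
# Purely illustrative: denylist matching vs. topic/severity classification.

DENYLIST = {"examplebadword", "anotherbadword"}  # placeholder terms

def profanity_filter(message: str) -> bool:
    """Simple profanity filter: block if any word is on the denylist."""
    return any(word in DENYLIST for word in message.lower().split())

def classify(message: str) -> dict:
    """What a moderation tool returns instead: a topic label plus a severity
    score that downstream policy can act on per community.
    (Stand-in for a trained model or vendor API call.)"""
    return {"topic": "profanity", "severity": 3}
```

The yes/no filter is easy to build, but the topic-and-severity output is what makes the risk-profile tuning described below possible.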
So what’s the best option?
Regardless of your risk tolerance, always use a proactive filter. Content moderation solutions like Two Hat’s Community Sift can be tuned to match your risk profile. Younger communities can employ a more restrictive filter, and more mature communities can be more permissive. You can even filter just the topics that matter most. For example, mature communities can allow sexual content while still blocking hate speech.
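To picture what "tuned to match your risk profile" could look like in practice, here's a hypothetical policy table (the topics, 0-10 scale, and numbers are assumptions for illustration, not real product settings): each community sets the maximum severity it will tolerate per topic, and anything above that threshold gets filtered.

```python
# Hypothetical per-community policy: the highest severity (assumed 0-10 scale)
# tolerated for each topic before the filter blocks a message outright.
RISK_POLICIES = {
    "under_13": {"hate_speech": 0, "sexual": 0, "pii": 0, "profanity": 1},
    "teen":     {"hate_speech": 0, "sexual": 2, "pii": 0, "profanity": 4},
    "mature":   {"hate_speech": 0, "sexual": 8, "pii": 3, "profanity": 8},
}

def exceeds_policy(community: str, topic: str, severity: int) -> bool:
    """True if a message of this topic/severity should be filtered for this community."""
    return severity > RISK_POLICIES[community].get(topic, 0)

# A mature community tolerates moderately sexual content but no hate speech.
assert exceeds_policy("mature", "sexual", 5) is False
assert exceeds_policy("mature", "hate_speech", 3) is True
```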
By using a proactive filter, you've already applied the first layer of risk mitigation. After that, we recommend using a blend of all four kinds of moderation, based on your brand's unique risk tolerance. Brands that are less concerned about risk can rely mostly on user reports, while more risk-averse platforms can pre- or post-moderate content they deem potentially risky, but not risky enough to filter automatically.
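Here's a minimal sketch of how that blend might be wired together (the thresholds, queue names, and scale are assumptions, not features of any particular product): the proactive filter blocks clearly unacceptable content, borderline content goes to a human review queue, and everything else is published and left to user reports.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    topic: str      # e.g. "hate_speech", "sexual", "pii"
    severity: int   # assumed 0-10 score from a classifier or vendor API

def route(verdict: Verdict, block_at: int = 7, review_at: int = 4) -> str:
    """Decide what happens to a message after the proactive filter has scored it.

    The thresholds are per-brand knobs: a risk-averse platform lowers them,
    a more permissive one raises them.
    """
    if verdict.severity >= block_at:
        return "block"         # first layer: the proactive filter
    if verdict.severity >= review_at:
        return "review_queue"  # pre-moderate (or flag for post-moderation) instead of filtering
    return "publish"           # low risk: post now, rely on user reports

# The same message can be auto-blocked on one platform and merely queued
# for review on a more risk-tolerant one.
print(route(Verdict("sexual", 6)))                            # -> "review_queue"
print(route(Verdict("sexual", 6), block_at=10, review_at=8))  # -> "publish"
```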
Once you understand and can articulate your platform/brand’s risk tolerance, you can start to build Terms of Use and community guidelines around it. Display your expectations front and center, use proven tools and techniques to manage risk, and you’ll be well on your way to building a healthy, thriving, and engaged community of users — all without putting your brand’s reputation at risk.
Now, with your brand protected, you can focus on user retention and revenue growth.