How Do You Calculate the ROI of Proactive Moderation in Chat?

On Tuesday, October 30th, I’m excited to be talking to Steve Parkis, a senior tech and entertainment executive who drove amazing growth in key products at Disney and Zynga, about how chat has a positive effect on user retention and overall revenue. It would be great to have you join us — you can sign up here.

Until then, I would like to get the conversation started here.

There is a fundamental understanding across online industries that encouraging prosocial, productive interactions and curbing antisocial, disruptive behavior in our online communities both matter.

The question I’ve been asking myself lately is this — do we have the numbers to prove that proactive moderation and other approaches are business-critical?

In my experience, our industries (games, apps, social networks, etc.) lack the studies and numbers to prove that encouraging productive interactions and tackling negative ones has a measurable impact on user engagement, retention, and growth.

This is why I’m on a mission this quarter to create new resources, including a white paper, that will shed light on this question and hopefully help as many people as possible articulate the connection.

First steps and big questions
We already know that chat and social features are good for business — we have lots of metrics around this — but the key info that we’re missing is the ROI of proactive moderation and other community measures. Here’s where I need your help, please:

  • How have you measured the success of filtering and other approaches to tackling disruptive behavior (think spam, fraud, hate speech, griefing, etc.) as it relates to increased user retention and growth in your communities?
  • Have you measured the effects of implementing human and/or automated moderation in your platforms, be it related to usernames, user reports, live chat, forum comments, and more?
  • Why have you measured this?

I believe the way we are currently operating is self-sabotage. By not measuring and surfacing the business benefits of proactive moderation and other measures to tackle antisocial and disruptive behavior, our departments are usually seen as cost centers rather than key contributors to revenue.

I believe that our efforts are crucial to removing blockers to growth on our platforms and to fostering stronger user engagement and retention.

Starting the conversation
I’ve talked to many of you and I’m convinced we feel the same way about this and see similar gaps. I invite you to email your comments and thoughts to carlos.figueiredo@twohat.com.

Your feedback will help inform my next article as well as my next steps. So what’s in it for you? First, I’ll give you a shoutout (if you want) in the next piece on this topic, and you’ll get exclusive access to the resources once they’re ready, with credit where it’s due. You will also have my deepest gratitude : ) And you know you can count on me for help with any of your projects!

To recap, I would love to hear from you about how you and your company are measuring the return on investment of the measures (human- and/or technology-driven) you’ve implemented to curb negative, antisocial behavior on your platforms.

How are you thinking about this, what are you tracking, and how are you analyzing this data?
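
To make the ask more concrete, here is the kind of back-of-the-envelope calculation I have in mind. It’s a minimal sketch in Python; every cohort size, dollar figure, and name below is invented for illustration, not data from any real platform.

```python
# Hypothetical example: compare a cohort exposed to proactive moderation
# against a baseline cohort, then weigh the incremental revenue from better
# retention against what the moderation program cost.

def moderation_roi(
    retained_users_with_moderation: int,
    retained_users_baseline: int,
    revenue_per_retained_user: float,
    moderation_cost: float,
) -> float:
    """Return ROI as a ratio of net gain to moderation spend."""
    incremental_users = retained_users_with_moderation - retained_users_baseline
    incremental_revenue = incremental_users * revenue_per_retained_user
    return (incremental_revenue - moderation_cost) / moderation_cost

# Invented numbers: the moderated cohort retained 1,200 more users over the
# quarter, each worth $4.50, against $3,000 of moderation spend.
print(moderation_roi(11_200, 10_000, 4.50, 3_000.0))  # 0.8 -> an 80% return
```

Of course, the hard part is isolating that retention lift in the first place, which is exactly the kind of measurement I’m hoping you can tell me about.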

Thanks in advance for your input. I look forward to reading it!



Adding Chat to Your Online Platform? First Ask Yourself These 4 Critical Questions

Want to retain users and lower the cost of acquisition on your platform? In 2018, social features including chat, private messaging, usernames, and user profiles are all must-haves in an overstuffed market where user retention is critical to long-term success. Nothing draws a crowd like a crowd — and a crowd of happy, loyal, and welcoming users will always bring in more happy, loyal, and welcoming users.

But there will always be risks involved when adding social features to your platform. A small percentage of users will post unwanted content like hate speech, NSFW images, or abusive language, all of which can cause serious damage to your brand’s reputation.

So while social features are must-haves in 2018, understanding — and mitigating — the risks inherent in adding those features is equally important.

If you’re just getting started with chat moderation (and even if you’ve been doing it for a while), here are four key questions to ask.

1. How much risk is my platform/brand willing to accept?
Every brand is different. Your community’s demographics will usually be a major factor in determining your risk tolerance.

For instance, communities with users under 13 in the US have to be COPPA compliant, so preventing users from sharing PII (personally identifiable information) is essential. Edtech platforms have to mitigate risk by ensuring that they’re CIPA and FERPA compliant.

With legal ramifications to consider, platforms designed for young people will always be far more risk-averse than brands marketed to more mature audiences.

However, many older, more established brands — even if they cater to an older audience — will likely be less tolerant of risk than small or new organizations.

Consider your brand’s tone and history. Review your corporate guidelines to understand what your brand stands for. This is a great opportunity to define exactly what kind of an online community you want to create.

2. What kind of content is most dangerous to my platform/brand?
Try this exercise: Imagine that one item (say, a forum post or profile pic) containing pornography was posted on your platform. How would it affect the brand? How would your audience react to seeing pornography on your platform? How would your executive team respond? What would happen if the media/press found out?

Same with PII — for a brand associated with children or teens, this could be monumental. (And if it happens on a platform aimed at users under 13 in the US, a COPPA violation can lead to potentially millions of dollars in fines.)

What about hate speech? Sexual harassment? What is your platform/brand’s definition of abuse or harassment? The better you can define these terms in relation to your brand, the better you will understand what kind of content you need to moderate.

3. How will I communicate my expectations to the community?
Don’t expect your users to automatically know what is and isn’t acceptable on your platform. Post your community guidelines where users can see them. Make sure users have to agree to your guidelines before they can post.

In a recent blog for CMX, Two Hat Director of Community Trust & Safety Carlos Figueiredo explores how to write community guidelines you can stick to. In it, he provides an engaging framework for everything from creating effective guidelines from the ground up to collaborating with your production team on products that encourage healthy interactions.

4. What tools can I leverage to manage risk and enforce guidelines in my community?
We recommend taking a proactive rather than a reactive approach to managing risk. What does that mean for chat moderation? First, let’s look at the different kinds of moderation:

  • Live moderation: Moderators follow live chat in real time and take action as needed. High risk, very expensive, and not scalable.
  • Pre-moderation: Moderators review, then approve or reject all content before it’s posted. Low risk, but slow, expensive, and not scalable.
  • Post-moderation: Moderators review, then approve or reject all content after it’s posted. High-risk option.
  • User reports: Moderators depend on users to report content, then review and approve or reject it. High-risk option.

On top of these techniques, there are also different tools you can use to take a proactive approach, including in-house filters (read about the build internally vs. buy externally debate) and content moderation solutions like Two Hat’s Community Sift (learn about the difference between a simple profanity filter and a content moderation tool).

So what’s the best option?

Regardless of your risk tolerance, always use a proactive filter. Content moderation solutions like Two Hat’s Community Sift can be tuned to match your risk profile. Younger communities can employ a more restrictive filter, and more mature communities can be more permissive. You can even filter just the topics that matter most. For example, mature communities can allow sexual content while still blocking hate speech.
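
To picture what “filter just the topics that matter most” could look like, here’s a simplified sketch. The topic labels, actions, and policy shapes below are hypothetical stand-ins, not Community Sift’s actual configuration or API, but they illustrate tuning by audience.

```python
# Hypothetical per-topic policies for two very different communities.
# "block" stops the message, "escalate" queues it for a human moderator,
# and "allow" lets it through.

MATURE_COMMUNITY_POLICY = {
    "hate_speech": "block",     # blocked regardless of audience
    "sexual_content": "allow",  # permitted in an adults-only community
    "pii": "block",
    "bullying": "escalate",
}

UNDER_13_POLICY = {
    "hate_speech": "block",
    "sexual_content": "block",
    "pii": "block",             # essential for COPPA compliance
    "bullying": "block",
}

def apply_policy(detected_topics: set, policy: dict) -> str:
    """Return the strictest action triggered by any detected topic."""
    actions = {policy.get(topic, "allow") for topic in detected_topics}
    for action in ("block", "escalate", "allow"):  # strictest first
        if action in actions:
            return action
    return "allow"

# The same message can be allowed in one community and blocked in another.
print(apply_policy({"sexual_content"}, MATURE_COMMUNITY_POLICY))  # allow
print(apply_policy({"sexual_content"}, UNDER_13_POLICY))          # block
```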

By using a proactive filter, you’ve already applied the first layer of risk mitigation. After that, we recommend using a blend of all four kinds of moderation, based on your brand’s unique risk tolerance. Brands that are less concerned about risk can depend mostly on user reports, while more risk-averse platforms can pre- or post-moderate content that they deem potentially risky, but not risky enough to filter automatically.
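
One way to picture that blend is to route each piece of content based on the proactive filter’s risk score and your own risk tolerance. Again, this is a hypothetical sketch: the score thresholds and queue names are invented, not a prescribed workflow.

```python
# Hypothetical routing around a proactive filter. Scores run from 0.0 (benign)
# to 1.0 (clearly unacceptable); thresholds would be tuned per community.

def route_content(risk_score: float, risk_tolerance: str) -> str:
    """Decide what happens to a message once the filter has scored it."""
    if risk_score >= 0.9:
        return "blocked"               # the proactive filter: never posted
    if risk_score >= 0.5:
        if risk_tolerance == "low":
            return "pre_moderation"    # held until a moderator approves it
        return "post_moderation"       # posted now, reviewed shortly after
    return "published"                 # everything else relies on user reports

print(route_content(0.95, "low"))   # blocked
print(route_content(0.60, "low"))   # pre_moderation
print(route_content(0.60, "high"))  # post_moderation
print(route_content(0.20, "high"))  # published
```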

Once you understand and can articulate your platform/brand’s risk tolerance, you can start to build Terms of Use and community guidelines around it. Display your expectations front and center, use proven tools and techniques to manage risk, and you’ll be well on your way to building a healthy, thriving, and engaged community of users — all without putting your brand’s reputation at risk.

Now, with your brand protected, you can focus on user retention and revenue growth.