In the past, once you figured out there was a problem in your online community, it was probably too late to do much about it. That’s because chat and other social features were often home-grown or off-the-shelf solutions tacked on to games and other communities after the fact, rather than baked into product strategy.
So, the go-to solution when there was a problem in a chat community was simply to ban (formerly known as blacklisting) ‘offensive’ users. But blacklisting alone (of words, people, etc.) doesn’t really do anything to solve the underlying issues, and it invites accusations of censorship against community managers (i.e., your brand). It was, and for some still is, an unsustainable approach, and a new way of thinking was needed.
Today, chat and chat moderation are planned and strategized at the design stage. Community guidelines, policies, and the means to educate users about acceptable chat conduct are established before products ever go to market. There are many reasons for the change, but the biggest may be the global shift to prioritizing a user’s experience with your product, rather than the product itself.
The broader shift to experience-first practices has opened the door for brands to leverage chat communities as revenue drivers (see our whitepaper, An Opportunity to Chat, available on VentureBeat, for more on that).
At the same time though, prioritizing chat moderation means brands and community managers need to ask themselves some very tough, complex questions that they didn’t have to ask before.
Will new community members who may not know the rules be subject to the same policies as veteran users? How will you account for variance in ages (8-year-olds communicate differently than 38-year-olds)? What are your moderators going to do if a user threatens someone, or starts talking about suicide? Should people be able to report one another? Are bans permanent? Do they carry over to other products, brands, or communities?
Answering these questions takes a lot of research, discussion, and forethought. More than anything, it’s essential to be sure that the answers you arrive at, and the community experience you build, move your brand away from being perceived by users as a censor of discussion, and toward being perceived as their diligent partner in creating a great experience.
Diligence is the opposite of censorship
One conversation that comes up often when we discuss chat moderation policies with clients is how to reframe the fear of taking freedom of expression away from users as an opportunity: a discussion about how chat moderation fits into product, brand, and business strategy, which are often misaligned. In fact, it is essential for brands to move away from thinking of chat moderation as just a tool for managing risk, and toward the realization that it’s also a way to identify your most influential and profitable users. Why?
Because chat and chat moderation drive clear business improvements in user engagement, retention, and lifetime value. We also know that positive chat experiences contribute to ‘K Factor’, or virality, i.e. the better the chat experience, the more likely a user is to share their satisfaction with a friend.
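For readers unfamiliar with the term, K Factor is usually computed as the average number of invites each user sends multiplied by the rate at which those invites convert into new users. The sketch below illustrates the arithmetic; all functions, names, and figures are hypothetical examples, not data from any real community.

```python
# Back-of-napkin K Factor sketch. All numbers here are made up for illustration.
def k_factor(invites_per_user: float, conversion_rate: float) -> float:
    """K = average invites sent per user x fraction of invites that convert."""
    return invites_per_user * conversion_rate

def users_after_cycles(initial_users: int, k: float, cycles: int) -> float:
    """Each viral cycle, every newly arrived user brings in k more users on average."""
    total = float(initial_users)
    newcomers = float(initial_users)
    for _ in range(cycles):
        newcomers *= k      # users recruited this cycle
        total += newcomers
    return total

# A better chat experience that nudges conversion from 20% to 30%
# moves K from below 1 (growth tapers off) to above 1 (growth compounds):
print(k_factor(4, 0.2))                      # K below 1
print(k_factor(4, 0.3))                      # K above 1
print(users_after_cycles(1000, 0.8, 5))      # tapering community
print(users_after_cycles(1000, 1.2, 5))      # compounding community
```

The threshold K = 1 is what makes the “share their satisfaction with a friend” point concrete: below it, each wave of invited users is smaller than the last; above it, the community grows on its own.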
So then, far from fearing the label of limiting user expression, the discussion your team needs to have about chat moderation is, “How can we encourage and scale the types of chat experiences shared by our most valuable users?”
Instead of just muting those who use bad words, pick out the positive things influential users chat about and see how they inspire others to engage and stick around. Discover what your most valuable, long-term users are chatting about and figure out how to surface those conversations for new and prospective users, sooner, and to greater effect.
Don’t fear the specter of censorship. Embrace the role of chat moderation as a powerful instrument of diligence, a productive business tool, and the backbone for a great user experience.