Freedom of Speech ≠ Freedom From Accountability

We believe freedom of speech can be a positive force, especially when used with a level of care and respect for others. Realistically, we don’t live in a world where people will always be sweet and happy like Teletubbies. People are not always going to be kind to each other, and everyone is bound to have a bad day…

Here’s where the true challenge comes in for product owners — what do you do to protect free speech while also protecting your community from hate speech and online harassment? Do you allow users to threaten to rape other users in the name of freedom of expression? How will you discern the difference between someone having a bad day versus repeat offenders in need of correction?

Context is everything

The knee-jerk reaction might be to implement an Orwellian censorship strategy. In some cases, this may be the correct approach. Strict filtration is the right strategy for a child-directed product, where there are topics that are never acceptable from a legal perspective. However, filtering out certain words or phrases may not be the solution for a 17+ gaming community or a social media platform for adults. The context of a conversation between two strangers is much different from a conversation between a group of old friends, or a public chatroom where many voices are ‘shouting’ at each other.
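As a rough illustration of that context dependence, the sketch below checks the same message against different severity thresholds per community type. This is not Community Sift’s actual implementation; the community names, severity scores, and thresholds are invented for the example.

```python
from dataclasses import dataclass

# Severity a message must reach before it is filtered, per community type.
# A lower threshold means stricter filtering. All values here are hypothetical.
THRESHOLDS = {
    "child_directed": 1,   # filter almost anything questionable
    "teen_gaming": 3,      # allow mild profanity, block harassment
    "adult_social": 5,     # block only the most severe content
}

@dataclass
class Message:
    text: str
    severity: int  # assumed to be scored upstream by a classifier or word list

def should_filter(message: Message, community: str) -> bool:
    """Return True if the message is too severe for this community's context."""
    return message.severity >= THRESHOLDS[community]

# The same words can be acceptable in one context and unacceptable in another.
msg = Message(text="that play was damn sloppy", severity=2)
print(should_filter(msg, "child_directed"))  # True: filtered for kids
print(should_filter(msg, "adult_social"))    # False: allowed for adults
```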

Every community has different rules of engagement — each company has a philosophy about what they deem to be an appropriate conversation within their social product. What Flickr considers acceptable will differ significantly from what’s socially accepted on 4chan, or from within a professional Slack channel. Every product is different and unique, and that is one of the challenges we have in providing a protection service to such a wide variety of social apps and games.

Each product has a set of goals and guidelines that govern what they believe is acceptable or unacceptable within their community. Similarly, online collectives tend to have expectations about what they think is appropriate or inappropriate behaviour within their tribe. A moderator or community manager should act as a facilitator, reconciling any differences of expectation between the product owners and the community.

Respect each other as humans

With anonymity, it is much easier to divorce oneself from the reality that there’s a real human on the receiving end of cruel comments or so-called rape ‘jokes’.

Writer Lindy West shared part of her story about confronting a ‘troll’ who had been harassing her online in an excellent episode of “This American Life”. The writer and the man engage in a civil conversation, acknowledging the tension directly and eventually coming to something of an understanding of each other.

People forget that the victims of these ‘trolls’ are real people, but they also forget that ‘trolls’ are real people, too. As Lindy West describes, “empathy, boldness, and kindness” are some practical ways to bridge differences between two humans. There is a substantial difference between a virus and a corrupted file, just as there is a difference between a real troll and someone who’s having a bad day. With respect comes an opportunity to see each other as human beings rather than avatars on the other side of a screen.

Freedom of speech does not equal freedom from accountability

Some have described the internet as a haven for freedom of expression, where there is less pressure to be “politically correct”. While this may be partially true, there is still an inherent accountability that comes with that freedom. When someone chooses to exploit their freedom to publish hate speech, they will likely face natural consequences, such as damage to their personal reputation (or, in some extreme cases, legal repercussions).

Freedom of speech is not always sweet. It can be ugly without crossing the line into toxic behavior. It can also be amazing and transformative. The democratization of thought enabled by modern social platforms has had a profound effect on society, empowering millions to share and exchange ideas and information.

One of our goals with Community Sift is to create safety without censorship, empowering product owners to preserve user freedom while also protecting their social apps and games. There are so many issues that plague online communities, including spam, radicalization, and illegal content. Businesses work with us because we use a combination of machine learning, artificial intelligence, and human community professionals to protect their products and services.

Moreover, while we respect the need for freedom of speech, we cannot support any activity that results in someone taking their own life. That is why we do what we do. If we can protect a single life through automated escalations and improved call-for-help workflows, we will have made the world a better place. While this may sound overly altruistic, we believe this is a challenge that is worth tackling head-on, regardless of the perspective about “freedom of speech.”


Originally published on Medium

Photo by Cory Doctorow. Source: Flickr

Tackling Toxicity in Online Gaming Communities

The gaming industry is making a breakthrough.

For most of its history, internet gaming has been one big free-for-all. Users have seen little reprieve from pervasive hostility, particularly within anonymous environments.

A sustained lack of maintenance in any system results in faults, so it should come as no surprise that many industry leaders are finally ready to stop ignoring the issue and embrace innovative approaches.

As product and game designers, we create social experiences to enrich people’s lives. We believe social connections can have a profound transformational effect on humanity by giving people the ability to connect with anyone from anywhere. When we look at the most popular web products to date — social media, social games, instant messaging — the common denominator becomes apparent: each other. The online world now offers us a whole new way of coming together.

A problem arises, however, when the social environment we are used to operating within is pared down to bare language alone. In the physical world, social conventions and body language guide us through everyday human interaction. Much of our communication happens non-verbally, offering our brains a wider range of data to interpret. Our reactions to potentially misleading messages follow a similar pattern of logic, primarily driven by the rich database of the unconscious mind.

Online, these cues disappear, placing developers who wish to discourage toxic discourse in an awkward position. Should we act quickly and risk misinterpretation, or give users the benefit of the doubt until a moderator can take a closer look? The second option comes with the equally unsavoury proposition of leaving abusive speech unattended for hours at a time, by which point others will have already seen it. With reports showing that users who experience toxicity in an online community are 320% more likely to quit, developers concerned with user retention can no longer afford to look the other way. So what are our options?
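One way to frame that trade-off is a simple triage rule: act automatically only when an upstream classifier is nearly certain, and queue the ambiguous middle ground for human review. The confidence scores and thresholds in this sketch are hypothetical rather than taken from any particular product.

```python
def triage(message: str, abuse_confidence: float) -> str:
    """Route a message based on how confident an automated classifier is.

    The 0.95 and 0.60 cut-offs are illustrative assumptions, not product values.
    """
    if abuse_confidence >= 0.95:
        return "hide_now"         # act quickly: near-certain abuse never reaches others
    if abuse_confidence >= 0.60:
        return "moderator_queue"  # benefit of the doubt, but a human reviews it soon
    return "allow"                # most likely just someone having a bad day

print(triage("you are worthless, quit the game", 0.97))  # hide_now
print(triage("ugh, you people are the worst", 0.72))     # moderator_queue
print(triage("gg, tough loss", 0.05))                    # allow
```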

Methods for tackling community management generally fall into one of two categories: penalty or reward. Typical responses to bad behaviour include warning messages, partial restrictions from game features and, as a final measure, temporary or permanent bans. On the flip side, rewards for exemplary behaviour seem to offer more room for creativity. Multiplayer online battle arena game Defense of the Ancients has a commendation system whereby users can give out up to 6 commendations per week, based on four options: Friendly, Forgiving, Teaching, or Leadership. Commended users receive no tangible reward beyond prestige.

“Personally, [DotA’s commendation system] always incentivized me to try and be helpful in future games simply because leaving a game and feeling like you had a positive impact despite losing feels way better than raging at people and having them threaten to report you,” explains one Reddit user in a discussion thread centering around commendations in online games.
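To make the mechanic concrete, here is a minimal sketch of a DotA-style commendation counter, assuming the weekly cap of six and the four categories mentioned above. The storage scheme and identifiers are invented for illustration.

```python
from collections import defaultdict

CATEGORIES = {"Friendly", "Forgiving", "Teaching", "Leadership"}
WEEKLY_LIMIT = 6  # commendations a single user may give per week

given_this_week = defaultdict(int)  # commendations given, keyed by giver id
prestige = defaultdict(int)         # commendations received, keyed by receiver id

def commend(giver: str, receiver: str, category: str) -> bool:
    """Record a commendation if the category is valid and the giver has allowance left."""
    if category not in CATEGORIES or given_this_week[giver] >= WEEKLY_LIMIT:
        return False
    given_this_week[giver] += 1
    prestige[receiver] += 1  # the only 'reward' is prestige, as in the example above
    return True

commend("player_a", "player_b", "Teaching")
print(prestige["player_b"])  # 1
```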

Another notable example is League of Legends’ recent move to give exclusive skins to users with no history of bans in the last year. This model of positive reinforcement seems to be quickly gaining traction in the gaming industry.

Still, a complex problem requires a complex solution, and toxicity continues to persist in both these communities. With all the work that goes into creating a successful game, few studios have the time or resources left over to build, perfect, and localize intricate systems of penalty and reward.

The first step is acknowledging two inconvenient truths: context is everything, and our words exist in shades of gray. Even foul language can play a positive role in a community depending on the context. An online world for kids has different needs from a social network for adults, so there’s no one-size-fits-all solution.

Competing with the ever-expanding database of the human mind is no easy task, and when it comes to distinguishing between subtle shifts in tone and meaning, machines have historically fallen short. The nuances of human communication make the supervision of online communities a notoriously difficult process to automate. Of course, with greater scale comes a greater need for automation — so what’s a Product Manager to do?