How League of Legends Is Teaching High School Teens the Values of Sportsmanship

Ivan Davies of Riot Games has one of the coolest job descriptions ever.

“My job is to try and make a difference to the League of Legends player and wider community,” he says. “I work in a publishing office in Oceania, where I’m not told what to do by my Manager. I’m simply entrusted to make a difference; it’s then up to the local team to decide what direction we should take.”

For Ivan and his team, making a difference means tackling one of the biggest issues facing the gaming world today: How do you educate young players about good online behavior?

Following the Summoner’s Code

Riot Games has long been a proponent of sportsmanship. With 100 million monthly players across the globe, League of Legends is the biggest game in the industry. Because of its intensely competitive nature, it has become known for its sometimes heated atmosphere. Players are expected to abide by the Summoner’s Code, a comprehensive guide to being a good team player.

League of Legends in action

Despite encouraging the Summoner’s Code and being at the forefront of player behavior studies, Ivan notes that “At times, it’s felt like we could do more. Video games are a fundamental reflection of humanity: how we learn, how we interact, how we come to understand our world. We all “play” throughout our lives in some capacity or another. Video games just provide a particular sandbox… the reason they work so well is because of these parallels. The social and competitive nature of League of Legends taps into human fundamentals.”

Last year, Ivan and his team started to wonder what they could do outside of the in-game experience to positively shape player behavior. They realized that it’s not just the gaming industry that isn’t doing enough — it’s also the education sector. Students are online every day, at school and at home, and yet schools are doing very little to teach students about acceptable online behavior.

“Some schools don’t do enough to set students up for an online future. I’ve heard of a number of schools that hire an external speaker to talk to their students about cyberbullying. This talk may happen once a year purely to tick a box; a curriculum standard has been met, and online etiquette is not considered a priority for another year,” Ivan says.

“Teachers and the education sector have been slow to respond to this online world and setting students up for a future of online activity. The education sector is meant to set you up for life and at the moment not enough is being done to ensure online educational needs are being met.”

It’s all about sportsmanship

In 2016 Ivan and his team created League of Legends High School Clubs — an initiative that is now spreading across Australia and New Zealand. Like other after-school clubs (think AV, drama, or Model UN), League of Legends clubs are led by a dedicated teacher. Under the teacher’s supervision, students play League of Legends in groups at school and even participate in championship tournaments against other schools.

To help students understand and follow the Summoner’s Code, Ivan and his team have outlined six aspects of sportsmanship, which teachers and students discuss before, during, and after a game.

The six aspects of sportsmanship studied in LoL High School Clubs.

“A League of Legends High School Club is intended to promote authentic, relatable learning experiences,” Ivan says. “It provides an opportunity for students to explore and model the key values that exist in schools and in the curriculum. We’ve chosen to focus on sportsmanship and have provided a code of acceptable behavior for players to abide by in their pursuit of fair play.”

Helping teachers and students

Ivan and his team haven’t just worked diligently to promote the clubs — they’ve also built a remarkable set of teaching materials structured around the “Assessment for Learning” framework. Popular in the UK and Australia, “Assessment for Learning” emphasizes ongoing review and adjustment based on each student’s unique needs. Teaching materials include everything from discussion cards and self-evaluation sheets to essential information for school IT departments.

“We need to meet students where they are, and the more the education sector supports what we’re doing, the more likely we can collectively make a difference.”

This connection to the tenets of education is no accident — it’s a particularly brilliant choice on the part of Ivan and his team. As he says, “The resources align with the national curriculum and Positive Behavior for Learning, an initiative in Oceania which many schools are looking to roll out. League of Legends High School Clubs is one way of implementing these initiatives.”

Online changes, offline improvements

The exciting news is that the clubs have a real effect on kids — and not just on their online behavior.

“A year ago, we had this hypothesis that League of Legends could teach right from wrong,” he says. “A club led by a dedicated teacher can definitely provide those opportunities. Not only have teachers seen students adopting sportsmanlike characteristics, which has led to outcomes like effective communication and leadership, but some teachers are now starting to see this transfer out of the League of Legends High School Clubs and into the wider school curriculum.”

In addition to the 30 schools already participating in clubs, Ivan delivered a professional development session to 26 teachers in Perth last year. As of July 2017, he has spoken to 130 different teachers across Oceania, and he’s eager to meet with more.

“A League of Legends High School Club is intended to promote authentic, relatable learning experiences.”

In the future, he hopes to expand the program throughout Oceania, adding more schools, teachers, and students to the already-growing list of participants. Not only that, he hopes that the education departments in Australia and New Zealand will soon recognize the benefits of the program — and potentially change the way they teach online etiquette to kids.

Why early digital education is crucial

“This is the place to teach online behavior,” Ivan says of high school. “I’ve always seen the education sector as a critical evolution point for young people. As teens begin to explore and experiment with the online world, we must think about how we can best support them on this journey. Let’s not shift the responsibility onto someone else or hope that they will learn online skills themselves.”

He hopes that the success of the project will send a strong signal to the world — that it’s time we tackle the problem of toxic online behavior. “This whole notion of ‘We’re going to wrap kids up in cotton wool. We’re going to remove them from the internet,’ is not an effective solution,” he cautions.

“Our children and our students look to us to set expectations of what good behavior looks like.”

“What we have to do is meet them on their chosen journey and be prepared to walk alongside them, side by side, step by step. As parents and teachers, we need to allow students to inevitably trip up or fall, and as they do we should be prepared and able to provide support and guidance. We should help them to make sense of what happened and why, and then encourage them to continue walking until they are skilled enough to walk on their own.”

It’s clear that the time for early education is now. The Pew Research Center’s latest study reports that 40% of Americans have experienced online harassment, while 62% consider harassment a major problem. As Ivan points out, these numbers highlight just how serious the problem is. The clubs are only the first step.

The future is now

“We, as adults, educators, and teachers have to be prepared to act,” Ivan says. “Our children and our students look to us to set expectations of what good behavior looks like, and if we can’t find the courage, time or dedication to step up and make a difference — what hope does the next generation have? Now is the time for change. The future we hope for won’t exist unless we do something about the now.”

“This is the place to teach online behavior,” Ivan says of high school. “I’ve always seen the education sector as a critical evolution point for young people.”

He’s hopeful for the future. “This is a hot topic of conversation. I spoke to three teachers yesterday, and I’m speaking to two more today.”

He adds, “I believe in a broad and balanced education system which embraces diversity and new opportunities that enhance understanding and student learning. We spend time on the things we care about, and the same goes for today’s students, many of whom are already invested in a digital world.

We need to meet students where they are, and the more the education sector supports what we’re doing, the more likely we can collectively make a difference.”

Find out more about sportsmanship and League of Legends High School Clubs on their site. Don’t forget to download their fantastic Teacher’s Resources here.

Interested in starting a club at your school? Find out how.

Questions for Ivan and his team? Get in touch at OCE-Highschool@riotgames.com.



Is Online Behavior Changing (For the Better) in 2017?

This year, it seems like every second article you read is about online behavior. From Mark Zuckerberg’s manifesto to Twitter’s ongoing attempts to address abuse, toxicity is a hot topic.

However, forward-thinking companies like Riot Games have been (not so quietly) researching online toxicity for years now. And one of their biggest takeaways is that when it comes to online behavior, as a society we’re still in the discovery stages… and we have a long way to go.

Luckily, we have experts like Riot’s brilliant Senior Technical Designer Kimberly Voll to help guide us on the journey.

A long-time gamer with a background in computer science, artificial intelligence, and cognitive science (told you she was brilliant), Kim believes passionately in the power of player experience to shape game design. She also happens to be an expert in player behavior and online communication.

We sat down with her recently to discuss the current state of online discourse, the psychology of player behavior, and how game designers can promote sportsmanship in their games.

You say you want a revolution

Two Hat: As an industry, it seems like 2017 is the year we start to talk about online behavior, honestly and with an eye to finding solutions.

Kim: We’re on the cusp of a pretty significant shift in how we think of online digital play. Step by step, it’s starting to mature into a real industry. We’re at that awkward teenage phase where all hell keeps breaking loose sometimes. The internet is the fastest-spreading technology that human beings have ever faced. You blink, it went global, and now suddenly everybody’s online.

“How do you teach your kids to behave online when we don’t even know how to behave online?”

It hasn’t been culturally appropriated yet. It’s here, we like it, and we’re using it. There’s not enough of us stepping back and looking at it critically.

The fanciest of etiquette!

TH: Is it something about the nature of the internet that makes us behave this way?

Kim: The way we normally handle etiquette is tied to actual social settings. When you go to a kid’s club, you use kid-friendly language. When you go to a nightclub, you use nightclub-friendly language. We solve for that pretty easily. Most of us are good at reading a room, knowing how to read our peers, knowing what’s okay to say at work, versus elsewhere, knowing what it’s okay to say when you’re on the player behavior team and you’re exposed to all manner of language [laughs]. We’ve been doing this since we moved out of caves.

But we don’t have that on the internet. You can’t reliably look around and trust that space. And you find with kids that they go into all of the spaces trusting. Or they do what kids do and push the limits. Both are not great. We want kids to push the limits so they can learn the limits, but we don’t want them to build up these terrible habits that propagate these ways of talking.

On the internet, you don’t get the gesticulations, you don’t get the presence that is being in the room with another person. There are certain channels that right now are completely cut off. So right now we’re hyper-focusing on other channels — for a long time that’s just been chat. These limitations mean that you end up trying to amplify and bring out your humanity in different ways.

The nature of things

TH: As a gamer and a cognitive scientist, what is your take on toxic player behavior?

Kim: I think the first step is understanding the nature of the problem.

There are different ways to look at toxicity and unsportsmanlike behavior. We can’t paint it all with the same brush.

“Are there people who just want to watch the world burn? They’re out there, but in our experience, they’re really, really rare.”

Not everyone else is being a saint, but not everyone is the same.

MOBAs [multiplayer online battle arena games] are frustrating because they’re super intense. If something goes wrong you’re particularly susceptible to losing your temper. That creates a tinderbox that gives rise to other things. Couple that with bad habits and socio-norms that have developed on the internet, and have been honed somewhat for a gaming audience, and they’re just that — they’re norms. Doesn’t make them necessarily right or wrong, and it doesn’t mean that players like them. We find that players don’t like them, overwhelmingly. And they’re becoming incredibly vocal, saying “We don’t want this.”

But there’s a second vocal group that’s saying “Suck it up. It’s the internet, it’s the way we talk.” And the balance is somewhere in the middle.

It’s always a balancing act

TH: How can game designers decide what tactic they should use to promote better behavior in their game?

Kim: There is obviously a line, but it shifts a bit. Where that line falls will depend largely on your community, your content. It’s the same way the line shifts dramatically when you’re out with friends drinking, versus at home with the family playing card games with your kid cousins.

Bandaids help, but they’re not the full solution.

There has to be flexibility. The first thing to do is understand your community, and try to gain a broader perspective of the motivation and underlying things that drive these behaviors. And also understand that there is no “one size fits all” approach. As a producer of interactive content, you need to figure out where your comfort level is. Then draw that line, and stick by that line. It’s your game; you can set those standards.

There is understanding the community, understanding it within the context of your game, and then there’s the work that Community Sift does, which is to shield. I think that shielding remains ever-important. But there has to be balance. The shield is the band-aid, but if we only ever do that, we’re missing an opportunity to learn from what that band-aid is blocking.

There’s a nice tension there where we can begin to explore things.

You don’t need to fundamentally alter your core experience. But if you have that awareness it forces you to ask questions like, “Do I want to have chat in this part of the game, or do I want to have voice chat immediately after a match when tempers are the most heated?”

Change is good

TH: Do you have an example of a time when Riot made a change to gameplay based on player behavior?

Kim: Recently we added the ability to select your role before you go into the queue, with some exceptions. Before, it used to be that you would pop into chat and the war would start to ensure you got the role you wanted, because there are some roles that people tend to like more. Whoever could type “mid” fastest ideally got the role, assuming people were even willing to accept precedence, which sometimes they weren’t. And if you lagged for any reason, you could miss your chance at your role.

We realized we were starting the game out on the wrong foot with these mini-wars. What was supposed to be a cooperative team game — one team vs another — now included this intra-team fighting because we started off with that kind of atmosphere.

Being able to choose your role gives players agency in a meaningful way, and removes these pre-game arguments. It’s not perfect, but it’s made the game significantly better.

Trigger warnings, road rage, and language norms… oh my!

TH: What kinds of things trigger bad behavior?

Kim: There is a mix of things that trigger toxicity and unsportsmanlike behavior. Obviously, frustration is one. But let’s break that down: What do you want to do when you’re frustrated? You want to kick and scream. You want the world to know. And if somebody is there with you, you need them to know, even if they had nothing to do with it.

“Put yourself in a situation where you’re locked behind a keyboard, your frustration is bubbling over, and you’re quite likely alone in a room playing a game. How do you yell at the person on the other side of the screen? Well, you can use all caps, but that’s not very satisfying. So how do you get more volume into your words? You keep amping up what you’re saying. And what’s the top of that chain? Hate speech.”

It’s very similar to road rage. I remember my mom told me a story about some dude who was upset that she didn’t run a yellow light. He actually got out of the car and started pounding on her hood. And I bet he went home afterward, pulled into his driveway, greeted his kid, and was a normal person for the rest of the day.

You’re not an actual monster; you’re in a particular set of circumstances, in that situation, that have funneled you through the keyboard into typing things you might not otherwise type. So that’s one big bucket.

Sometimes, you Hulk out.

In the 70s and 80s, we used to say things like “You’re such a retard.” Now, we’re like “I can’t believe we used to say that.” There are certain phrases that were normal at the time. We had zero ill intent — it was just a way of saying “You’re a goofball.” That sort of normalcy that you get with language, no matter how severe, when you’re exposed to it regularly, becomes ingrained in you, and you carry that through your life and don’t even realize it.

We’ve sent people their chat logs, and I truly believe that when they look at them, they have no idea what the problem is. Other people see the problem, but they just think, “Suck it up.” But there is a third group of people who look at it and think, “This is the way everybody talks, I don’t understand.” They’re caught in a weird spot where they don’t know how to move forward. And that can trigger defensiveness.

The thought process is roughly “So, you’re asking me to change, but I don’t quite get it, I don’t want to change, because I’m me, and I like talking this way, and when I say things like this, my friends acknowledge me and laugh, and that’s my bonding mechanism so you can’t take that away from me.”

Typically, no one thinks all those things consciously. But they do get angry, and now we’ve lost all productive discourse.

There is a full spectrum here. It’s a big tapestry of really interesting things that are going on when people behave this way on the internet. All of that feeds into the question: how do we shield it?

“Shielding is great, but can we also give feedback in a way that increases the likelihood that people who are getting the feedback are receptive to it?”

Can we draw a line between what’s so bad that the cost of the pain caused to people is far more than the time it would take to try to help this person?

Can we actually prevent them from getting into this state by understanding what’s triggering it, whether it’s the game, human nature, or current socio-norms?

Let’s talk about toxicity

TH: What can we do to ensure that these conversations continue?

Kim: I think we need to steer away from accusations. We’re all in this together; we’re all on the internet. There’s a certain level of individual responsibility in how we conduct ourselves online.

I’ve had these conversations when people are like “Yes, let’s clean up the internet, let’s do everything we have to do to make this happen.” And the flipside is people who say “Just suck it up. People are far too sensitive.”

And what I often find is that the first group are just naturally well-behaved online, while the second group is more likely to lose it. So when we have these conversations, what we don’t realize is that our perspective can unconsciously become an affront to who they are.

If we don’t take that into account in the conversation, then we end up inadvertently pointing fingers again.

We have to get to a point where we can talk about it without getting defensive.

Redefining our approach to player behavior

TH: Your empathetic approach is refreshing. Many of us have gotten into the habit of assuming the worst of people and being unwilling to see the other person’s perspective. And of course, that isn’t productive.

Kim: Despite our tendency to make flippant, sweeping comments — most people are not jerks. They’re a product of their own situation. And those journeys that have got each of us to where we are today are different, and they’re often dramatically different. And when we put people on the internet, we’ve got a mix of folks for whom the only thing connecting them is this game, and they come into the game with a bunch of bad experiences, or just generally feeling like “Everyone else is going to let me down.”

Then somebody makes an innocent mistake, or not even a mistake — maybe they took a direction you didn’t expect — and that just reinforces their worldview. “See, everyone is an idiot!”

When expectations aren’t met it leads to a lot of frustration, and players head into games with a lot of expectations.

I believe very viscerally that we have to listen before we try to aggressively push things out. But also we have to realize that the folks we are trying to understand may not be ready to talk. So we may have to go to them. And that applies to a lot of human tragedy, from racism to sexism.

We come in wagging our fingers, and our natural human defense is “Walls up, defenses up — this is the only way I will solve the cognitive dissonance that is you telling me that I should change who I am. Because I am who I am, and I don’t want to change who I am. Because who else would I be?” And that’s scary.

TH: It sounds like we need to take a step back and show a bit of grace. Like we said before, the conversation is finally starting to happen, so let’s give people time to adjust.

Kim: Think about the average company. You’re trying to make a buck to put food on the table and maybe make a few great games. That doesn’t leave a lot of room to do a lot of extra stuff. You may want to, but you may also think, “I have no idea what to do, and I tried a few things and it didn’t work, so what now? What do I do, stop making games?”

“At Riot, we’re lucky to have had the success that we’ve had to make it possible to fund these efforts, and that’s why we want to share. Let’s talk, let’s share. I never thought I’d have this job in my life. We’re very lucky to fund our team and try to make a difference in a little corner of the internet.”

It’s harder for games that have been out for a long time. Because it’s harder to shift normative behavior and break those habits. But we’re trying.

 

Want to know more about Kim? Follow @zanytomato on Twitter



Quora: Why do people say things on the internet which they wouldn’t say in the real world?

Way back in 2004 (only 13 years ago but several lifetimes in internet years), a Professor of Psychology at Rider University named John Suler wrote a paper called The Online Disinhibition Effect. In it, he identifies the two kinds of online disinhibition:

Benign disinhibition. We’re more likely to open up, show vulnerability, and share our deepest fears. We help others, and we give willingly to strangers on sites like GoFundMe and Kickstarter.

Toxic disinhibition. We’re more likely to harass, abuse, and threaten others when we can’t see their face. We indulge our darkest desires. We hurt people because it’s easy.

Suler identified eight ways in which the internet facilitates both benign and toxic disinhibition. Let’s look at three of them:

Anonymity. Have you ever visited an unfamiliar city and been intoxicated by the fact that no one knew you? You could become anyone you wanted; you could do anything. That kind of anonymity is rarely available in our real lives. Think about how you’re perceived by your family, friends, and co-workers. How often do you have the opportunity to indulge in unexpected — and potentially unwanted — thoughts, opinions, and activities?

Anonymity is a cloak. It allows us to become someone else (for better or worse), if only for the brief time that we’re online. If we’re unkind in our real lives, sometimes we’ll indulge in a bit of kindness online. And if we typically keep our opinions to ourselves, we often shout them all the louder on the internet.

Invisibility. Anonymity is a cloak that renders us—and the people we interact with—invisible. And when we don’t have to look someone in the eye it’s much, much easier to indulge our worst instincts.

“…the opportunity to be physically invisible amplifies the disinhibition effect… Seeing a frown, a shaking head, a sigh, a bored expression, and many other subtle and not so subtle signs of disapproval or indifference can inhibit what people are willing to express…”

Solipsistic Introjection & Dissociative Imagination. When we’re online, it feels like we exist only in our imagination, and the people we talk to are simply voices in our heads. And where do we feel most comfortable saying the kinds of things that we’re too scared to normally say? That’s right—in our heads, where it’s safe.

Just like retreating into our imagination, visiting the internet can be an escape from the overwhelming responsibilities of the real world. Once we’ve associated the internet with the “non-real” world, it’s much easier to say those things we wouldn’t say in real life.

“Online text communication can evolve into an introjected psychological tapestry in which a person’s mind weaves these fantasy role plays, usually unconsciously and with considerable disinhibition.”

The internet has enriched our lives in so many ways. We’re smarter (every single piece of information ever recorded can be accessed on your phone — think about that) and more connected (how many social networks do you belong to?) than ever.

We’re also dumber (how often do you mindlessly scroll through Facebook without actually reading anything?) and more isolated (we’re connected, but how well do we really know each other?)

Given that dichotomy, it makes sense that the internet brings out both the best and the worst in us. Benign disinhibition brings us together — and toxic disinhibition rips us apart.

Originally published on Quora



Four Moderation Strategies To Keep the Trolls Away

To paraphrase the immortal Charles Dickens:

It was the : ) of times, it was the : ( of times…

Today, our tale of two communities continues.

Yesterday, we tested our theory that toxicity can put a dent in your profits. We used our two fictional games AI Warzone and Trials of Serathian as an A/B test, and ran their theoretical financials through our mathematical formula to see how they performed.

And what were the results? The AI Warzone community flourished. With a little help from a powerful moderation strategy, they curbed toxicity and kept the trolls at bay. The community was healthy, and users stuck around.

Trials of Serathian paid the cost of doing nothing. As toxicity spread, user churn went up, and the company had to spend more and more on advertising to attract new users just to meet their growth target.

Today, we move from the hypothetical to the real. Do traditional techniques like crowdsourcing and muting actually work? Are there more effective strategies? And what does it mean to engineer a healthy community?

Charles Kettering famously said that “A problem well stated is a problem half-solved”; so let’s start by defining a word that gets used a lot in the industry, but can mean very different things to different people: trolls.

What is a Troll?

We’re big fans of the Glove and Boots video Levels of Trolling.

Technically these are goblins, but still. These guys again!

The crux of the video is that trolling can be silly and ultimately harmless — like (most) pranks — or it can be malicious and abusive, especially when combined with anonymity.

When we talk about trolls, we refer to users who maliciously and persistently seek to ruin other users’ experiences.

Trolls are persistent. Their goal is to hurt the community. And unfortunately, traditional moderation techniques have inadvertently created a culture where trolls are empowered to become the loudest voices in the room.

Strategies That Aren’t Working

Many social networks and gaming companies — including Trials of Serathian — take a traditional approach to moderation. It follows a simple pattern: depend on your users to report everything, give users the power to mute, and let the trolls control the conversation.

Let’s take a look at each strategy to see where it falls short.

Crowdsourcing Everything

Crowdsourcing — depending on users to report toxic chat — is the most common moderation technique in the industry. As we’ll discover later, crowdsourcing is a valuable tool in your moderation arsenal. But it can’t be your only tool.

Let’s get real — chat happens in real time. So by relying on users to report abusive chat, aren’t you in effect allowing that abuse to continue? The damage is already done by the time the abusive player is finally banned, or the chat is removed. It’s already affected its intended victim.

Imagine if you approached software bugs the same way. You have QA testers for a reason — to find the big bugs. Would you release a game that was plagued with bugs? Would you expect your users to do the heavy lifting? Of course not.

Community is no different. There will always be bugs in our software, just as there will always be users who have a bad day, say something to get a rise out of a rival, or just plain forget the guidelines. Just like there will always be users who want to watch the world burn — the ones we call trolls. If you find and remove trolls without depending on the community to do it for you, you go a long way towards creating a healthier atmosphere.

You earn your audience’s trust — and by extension their loyalty — pretty quickly when you ship a solid, polished product. That’s as true of community as it is of gameplay.

If you’ve already decided that you won’t tolerate harassment, abuse, and hate speech in your community, why let it happen in the first place?

Muting Annoying Players

Muting is similar to crowdsourcing. Again, you’ve put all of the responsibility on your users to police abuse. In a healthy community, only about 1% of users are true trolls — players who are determined to upset the status quo and hurt the community. When left unmoderated, that number can rise to as much as 20%.

That means that the vast majority of users are impacted by the behavior of the few. So why would you ask good players to press mute every time they encounter toxic behavior? It’s a band-aid solution and doesn’t address the root of the problem.

It’s important that users have tools to report and mute other players. But they cannot be the only line of defense in the war on toxicity. It has to start with you.

Letting The Trolls Win

We’ve heard this argument a lot. “Why would I get rid of trolls? They’re our best users!” If trolls make up only 1% of your user base, why are you catering to a tiny minority?

Good users — the kind who spend money and spread the word among their friends — don’t put up with trolls. They leave, and they don’t come back.

Simon Fraser University’s Reddit study found that a rise in toxicity corresponds with slower community growth. Remember our formula in yesterday’s post? The more users you lose, the more you need to acquire, and the smaller your profits.

Trust us — there is a better way.

Strategies That Work

Our fictional game AI Warzone took a new approach to community. They proactively moderated chat with the intention to shape a thriving, safe, and healthy community using cutting-edge techniques and the latest in artificial and human intelligence.

The following four strategies worked for AI Warzone — and luckily, they work in the real world too.

Knowing Community Resilience

One of the hardest things to achieve in games is balance. Developers spend tremendous amounts of time, money, and resources ensuring that no one dominant strategy defines gameplay. Both Trials of Serathian and AI Warzone spent a hefty chunk of development time preventing imbalance in their games.

The same concept can be applied to community dynamics. In products where tension and conflict are built into gameplay, doesn’t it make sense to ensure that your community isn’t constantly at each other’s throats? Some tension is good, but a community that is always at war can hardly sustain itself.

It all comes down to resilience — how much negativity can a community take before it collapses?

Without moderation, players in battle games like AI Warzone and Trials of Serathian are naturally inclined to acts — and words — of aggression. Unfortunately, that’s also true of social networks, comment sections, and forums.

The first step to building an effective moderation strategy is determining your community’s unique resilience level. Dividing content into quadrants can help:

  • High Risk, High Frequency
  • High Risk, Low Frequency
  • Low Risk, High Frequency
  • Low Risk, Low Frequency

 

Where does your community draw the line?

Younger communities will always have a lower threshold for high-risk chat. That means stricter community guidelines with a low tolerance for swearing, bullying, and other potentially dangerous activity.

The older the community gets, the stronger its resilience. An adult audience might be fine with swearing, as long as it isn’t directed at other users.

Once you know what your community can handle, it’s time to look closely at your userbase.

Dividing Users Based on Behavior

It’s tempting to think of users as just a collection of usernames and avatars, devoid of personality or human quirks. But the truth is that your community is made up of individuals, all with different behavior patterns.

You can divide this complex community into four categories based on behavior.

 

The four categories of user behavior.

Let’s take a closer look at each risk group:

  • Boundary testers: High risk, low frequency offenders. These players will log in and instantly see what they can get away with. They don’t start out as trolls — but they will upset your community balance if you let them get away with it.
  • Trolls: High risk, high frequency offenders. As we’ve discussed, these players represent a real threat to your community’s health. They exist only to harass good players and drive them away.
  • Average users/don’t worry: Low risk, low frequency offenders. These players usually follow community guidelines, but they have a bad day now and then. They might take their mood out on the rest of the community, mostly in a high-stress situation.
  • Spammers: Low risk, high frequency offenders. Annoying and tenacious, but they pose a minor threat to the community.

Once you’ve divided your users into four groups, you can start figuring out how best to deal with them.

Taking Action Based on Behavior

Each of the four user groups should be treated differently. Spammers aren’t trolls. And players who drop an f-bomb during a heated argument aren’t as dangerous as players who frequently harass new users.

 

How to deal with different kinds of behavior.

Filter and Ban Trolls

Your best option is to deal with trolls swiftly and surely. Filter their abusive chat, and ban their accounts if they don’t stop. Set up escalation queues for potentially dangerous content like rape threats, excessive bullying, and other serious threats, then let your moderation team review them and take action.

Warn Boundary Testers

A combination of artificial intelligence and human intelligence works great for these users. Set up computer automation to warn and/or mute them in real time. If you show them that you’re serious about community guidelines early on, they are unlikely to re-offend.

Crowdsource Average Users

Crowdsourcing is ideal for this group. Content here is low risk and low frequency, so if a few users see it, it’s unlikely that the community will be harmed. Well-trained moderators can review reported content and take action on users if necessary.

Mute Spammers

There are a couple of options here. You can mute spammers and let them know they’ve been muted. Or, for a bit of fun, try a stealth ban. Let them post away, blissfully unaware that no one in the room can see what they’re saying.
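To make that mapping concrete, here is a small illustrative sketch in Python. The four categories mirror the groups above, but the function names and action strings are placeholders of our own, not any product’s actual policy engine.

```python
# Illustrative only: the category names mirror the four groups above; the
# action descriptions are placeholders, not any product's actual policy.

def categorize(risk: str, frequency: str) -> str:
    """Map a user's (risk, frequency) profile to one of the four groups."""
    if risk == "high":
        return "troll" if frequency == "high" else "boundary_tester"
    return "spammer" if frequency == "high" else "average_user"

ACTIONS = {
    "troll": "filter chat, escalate serious content, ban repeat offenders",
    "boundary_tester": "automated real-time warning and/or mute",
    "average_user": "rely on user reports, reviewed by moderators",
    "spammer": "mute, or stealth-ban so only they see their own posts",
}

def moderation_action(risk: str, frequency: str) -> str:
    """Return the recommended response for a user's behavior profile."""
    return ACTIONS[categorize(risk, frequency)]

print(moderation_action("high", "high"))  # troll: filter, escalate, ban
print(moderation_action("low", "low"))    # average user: crowdsourced reports
```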

Combining Artificial and Human Intelligence

The final winning strategy? Artificial intelligence (AI) and computer automation are smarter, more advanced, and more powerful than they’ve ever been. Combine that with well-trained and thoughtful human teams, and you have the opportunity to bring moderation and community health to the next level.

A great real-world example of this is Twitch. In December 2016 they introduced a new tool called AutoMod.

It allows individual streamers to select a unique resilience level for their own channel. On a scale of 1–4, streamers set their tolerance level for hate speech, bullying, sexual language, and profanity. AutoMod reviews and labels each message for the above topics. Based on the streamer’s chosen tolerance level, AutoMod holds the message back for moderators to review, then approve or reject.
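The underlying pattern (score each message per topic, compare against the channel’s chosen tolerance, and hold anything over the line for human review) can be sketched roughly as follows. The classifier, topic list, and threshold values here are stand-ins of our own, not Twitch’s implementation.

```python
# A rough sketch of the score-compare-hold pattern; the classifier, topic
# list, and thresholds are stand-ins, not Twitch's actual implementation.

TOPICS = ("hate_speech", "bullying", "sexual_language", "profanity")

def classify_message(text: str) -> dict:
    """Placeholder: a real model would return a 0.0-1.0 severity per topic."""
    return {topic: 0.0 for topic in TOPICS}

def should_hold(text: str, channel_threshold: float) -> bool:
    """Hold the message for moderator review if any topic score crosses the
    channel's chosen tolerance (stricter channels pick a lower threshold)."""
    scores = classify_message(text)
    return any(score >= channel_threshold for score in scores.values())

# Held messages go into a queue for the streamer's moderators to approve
# or reject instead of being posted straight to chat.
```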

Reactions to AutoMod were resoundingly positive:

Positive user responses and great press? We hope the industry is watching.

The Cost of Doing Nothing

So, what have Trials of Serathian and AI Warzone taught us? First, we really, really need someone to make these games. Like seriously. We’ll wait…

 

This is as far as we got.

 

We learned that toxicity increases user churn, that traditional moderation techniques don’t work, and that community resilience is essential. We learned that trolls can impact profits in surprising ways.

In the end, there are three costs of doing nothing:

  • Financial. Money matters.
  • Brand. Reputation matters.
  • Community. People matter.

Our fictional friends at AI Warzone found a way to keep the trolls away — and keep profits up. They carefully considered how to achieve community balance, and how to build resilience. They constructed a moderation strategy that divided users into four distinct groups and dealt with each group differently. They consistently reinforced community guidelines in real-time. And in the process, they proved to their community that a troll-free environment doesn’t diminish tension or competition. Quite the opposite — it keeps it alive and thriving.

Any community can use the four moderation strategies outlined here, whether it’s an online game, social sharing app, or comments section, and regardless of demographic. And as we’ve seen with Twitch’s AutoMod, communities are welcoming these strategies with open arms and open minds.

One final thought:

Think of toxicity as a computer virus. We know that online games and social networks attract trolls. And we know that if we go online without virus protection, we’re going to get a virus. It’s the nature of social products, and the reality of the internet. Would you deliberately put a virus on your computer, knowing what’s out there? Of course not. You would do everything in your power to protect your computer from infection.

By the same token, shouldn’t you do everything in your power to protect your community from infection?

Want more? Check out the rest of the series:

At Two Hat Security, we use Artificial Intelligence to protect online communities from high-risk content. Visit our website to learn more.

Just getting started? Growing communities deserve to be troll-free, too.

Originally published on Medium



Doing The Math: Does Moderation Matter?

Welcome back to our series about the cost of doing nothing. Feeling lost? Take a minute to read the first two posts, The Other Reason You Should Care About Online Toxicity and A Tale of Two Online Communities.

Today we test our theory: when social products do nothing about toxicity, they lose money. Using AI Warzone and Trials of Serathian (two totally-made-up-but-awesome online games) as examples, we’ll run their theoretical financials through our mathematical formula to see how they perform.

Remember — despite being slightly different games, AI Warzone and Trials of Serathian have similar communities. They’re both competitive MMOs, are targeted to a 13+ audience, and are predominantly male.

But they differ in one key way. Our post-apocalyptic robot battle game AI Warzone proactively moderates the community, and our epic Medieval fantasy Trials of Serathian does nothing.

Let’s take a look at the math.

The Math of Toxicity

In 2014, Jeffrey Lin from Riot Games presented a stat at GDC that turned the gaming world on its head. According to their research, users who experience toxicity are 320% more likely to quit. That’s huge. To put that number in further perspective, consider this statistic from a 2015 study:

52% of MMORPG players reported that they had been cyber-victimized, and 35% said they had committed cyberbullying themselves.

A majority of players have experienced toxicity. And a surprising number of them admit to engaging in toxic behavior.

We’ll take those numbers as our starting point. Now, let’s add a few key facts — based on real data — about our two fictional games to fill in the blanks:

  • Each community has 1 million users
  • Each community generates $13.51 in revenue from each user
  • The base monthly churn rate for an MMO is 5%, regardless of moderation
  • According to the latest Fiksu score, it costs $2.78 to acquire a new user
  • They’ve set a 10% Month over Month growth target

So far, so good — they’re even.

Now let’s add toxicity into the mix.

Even with a proactive moderation strategy in place, we expect AI Warzone users to experience about 10% toxicity. It’s a complex battle game where tension is built into the game mechanic, so there will be conflict. Users in Trials of Serathian — our community that does nothing to mitigate that tension — experience a much higher rate of toxicity, at 30%.

Using a weighted average, we’ll raise AI Warzone’s churn rate from 5% to 6.6%, and Trials of Serathian’s to 9.8%. (Players who experience toxicity churn at 320% above the 5% base rate, or 21% per month; weighting 10% of users at 21% and 90% at 5% gives 6.6%, while 30% at 21% and 70% at 5% gives 9.8%.)

Taking all of these numbers into account, we can calculate the cost of doing nothing using a fairly simple formula, where U is total users, and U¹ is next month’s total users:

U¹ = U − (U × Loss Rate) + Users Acquired through Advertising
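For the curious, here is a minimal Python sketch of that model. The inputs (a 5% base churn rate, the 320% quit multiplier, 10% and 30% toxicity exposure, a 10% month-over-month growth target, and $2.78 per acquired user) come straight from the figures above; the monthly compounding loop and the function name are our own shorthand, but running it for three months reproduces the first-quarter user losses quoted below and comes within a dollar of the acquisition-spend figures.

```python
# A rough sketch of the churn-and-acquisition model, not production code.
# The inputs come from the article; the compounding loop is our shorthand.

def simulate_quarter(users, base_churn, exposure, growth_target=0.10, cpa=2.78):
    """Run three months of churn plus paid acquisition to hit the growth target."""
    # Players who experience toxicity are 320% more likely to quit.
    exposed_churn = base_churn * (1 + 3.2)                     # 5% -> 21%
    loss_rate = exposure * exposed_churn + (1 - exposure) * base_churn

    total_lost = total_ad_spend = 0.0
    for _ in range(3):
        lost = users * loss_rate
        next_month = users * (1 + growth_target)               # 10% MoM target
        acquired = next_month - (users - lost)                 # U1 = U - (U * loss rate) + acquired
        total_lost += lost
        total_ad_spend += acquired * cpa                       # $2.78 per new user
        users = next_month
    return round(total_lost), round(total_ad_spend)

print(simulate_quarter(1_000_000, 0.05, 0.10))  # AI Warzone: 6.6% effective churn
print(simulate_quarter(1_000_000, 0.05, 0.30))  # Trials of Serathian: 9.8%
```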

Using our formula to calculate user churn and acquisition costs, let’s watch what happens in their first quarter.

Increased User Churn = Increased Acquisition Costs

In their first quarter, AI Warzone loses 218,460 users. And to meet their 10% growth rate target, they spend $1,527,498 to acquire more.

Trials of Serathian, however, loses 324,380 users (remember, their toxicity rate is much higher). And they have to spend $1,821,956 to acquire more users to meet the same growth target.

Let’s imagine that AI Warzone spends an additional $60,000 in that first quarter on moderation costs. Even with the added costs, they’ve still saved $234,457 in profits.

That’s a lot. Not enough to break a company, but enough to make executives nervous.

Let’s check back in at the end of the year.

The Seven Million Dollar Difference

We gathered a few key stats from our two communities.

When Trials of Serathian does nothing, their EOY results are:

  • Churn rate: 9.8%
  • User Attrition: -8,672,738
  • Total Profits (after acquisition costs): $39,784,858

And when AI Warzone proactively moderates, their EOY results are:

  • Churn rate: 6.6%
  • User Attrition: -5,840,824
  • Total Profits (after acquisition costs): $47,177,580

AI Warzone deals with toxicity in real time and loses fewer users in the process — by nearly 3 million. They can devote more of their advertising budget to acquiring new users, and their userbase grows exponentially. The end result? They collect $7,392,722 more in profits than Trials of Serathian, who does nothing.

Userbase growth with constant 30% revenue devoted to advertising.

And what does AI Warzone do with $7 million more in revenue? Well, they develop and ship new features, fix bugs, and even start working on their next game. AI Warzone: Aftermath, anyone?

These communities don’t actually exist, of course. And there are a multitude of factors that can affect userbase growth and churn rate. But it’s telling, nonetheless.

And there are real-world examples, too.

Sticks and Stones

Remember the human cost that we talked about earlier? Money matters — but so do people.

We mentioned Twitter in The Other Reason You Should Care About Online Toxicity. Twitter is an easy target right now, so it’s tempting to forget how important the social network is, and how powerful it can be.

Twitter is a vital platform for sharing new ideas and forging connections around the globe. Crucially, it’s a place where activists and grassroots organizers can assemble and connect with like-minded citizens to incite real political change. The Arab Spring in 2011 and the Women’s March in January of this year are only two examples out of thousands.

But it’s become known for the kind of abuse that Lily Allen experienced recently — and for failing to deal with it adequately. Twitter is starting to do something — over the last two years, they’ve released new features that make it easier to report and block abusive accounts. And earlier this week even more new features were introduced. The question is, how long can a community go without doing something before the consequences catch up to them?

Twitter’s user base is dwindling, and their stock is plummeting, in large part due to their inability to address toxicity. Can they turn it around? We hope so. And we have some ideas about how they can do it (stay tuned for tomorrow’s post).

What Reddit Teaches us About Toxicity and Churn

Reddit is another real-world example of the cost of doing nothing.

In collaboration with Simon Fraser University, we provided the technology to conduct an independent study of 180 subreddits, using a public Reddit data set. In their academic paper “The Impact of Toxic Language on the Health of Reddit Communities,” SFU analyzes the link between toxicity and community growth.

They found a correlation between an increase in toxic posts and a decrease in community growth. Here is just one example:

The blue line shows high-risk posts decreasing; the red line shows the corresponding increase in community growth.

It’s a comprehensive study and well worth your time. You can download the whitepaper here.

What Now?

Using our formula, we can predict how a proactive moderation strategy can impact your bottom line. And using our two fictional games as a model, we can see how a real-world community might be affected by toxicity.

AI Warzone chose to engineer a healthy community — and Trials of Serathian chose to do nothing.

But what does it mean to “engineer a healthy community”? And what strategies can you leverage in the real world to shape a troll-free community?

In tomorrow’s post, we examine the moderation techniques that AI Warzone used to succeed.

Spoiler alert: They work in real games, too.

Originally published on Medium



A Tale of Two Online Communities

What happens when two games with similar communities take two very different approaches to chat?

Welcome to the end of the world. We have robots!

Picture this:

It’s dark. The faint green glow of a computer screen lights your field of vision. You swipe left, right, up, down, tracing the outline of a floating brain, refining a neural network, making connections. Now, an LED counter flashes red to your right, counting down from ten. You hear clanking machinery and grinding cogs in the distance. To your left, a new screen appears: a scrap yard, miles of twisted, rusty metal. The metal begins to move, slowly. It shakes itself like a wet dog. The counter is closer to zero. Urgent voices, behind, below, above you:

“NOW.”

“YOUR TURN.”

“DON’T MESS IT UP!”

“LET’S DO THIS!”

“YOU GOT THIS!”

Welcome to AI Warzone, a highly immersive, choice-driven game in which players create machines that slowly gain self-awareness based on the user’s key moral decisions. Set in 3030, machines battle each other in the industrial ruins of Earth. You create and join factions with other users that can help or hinder your progress, leading to — as we see above — a tense atmosphere rife with competition. A complex game with a steep learning curve, AI Warzone is not for the faint of heart.

Welcome to the past. We have dragons!

Now, imagine this:

You stand atop a great rocky crag, looking down on a small village consisting of a few thatch-roofed cottages. A motley crew stands behind you: several slope-browed goblins, the towering figure of a hooded female Mage, and two small dragons outfitted with rough-hewn leather saddles.

You hold a gleaming silver sword in your hand. A group of black-robed men and women, accompanied by trolls and Mages, approach the village, some on dragon-back, others atop snarling wolves. Some of them shout, their voices ringing across the bleak landscape. Almost time, you whisper, lifting your broadsword in the air and swinging it so it shines in the pale sun. Almost time…

“FUCK YOU FAGGOT,” you hear from far below.

“kill yurself,” a goblin behind you says.

“Show us yr tits!” yells one of the black-robed warriors in the village.

“Oh fuck this,” says the hooded female Mage. She disappears abruptly.

This is life in Trials of Serathian, an MMO set in the Medieval world of Haean. Users can play on the Dawn or Dusk side. On the Dawn side, they can choose to be descendants of the famed warrior Serathian, Sun Mages, or goblins; on the Dusk side, they can play as descendants of the infamous warrior Lord Warelind, Moon Mages, or trolls. Dawn and Dusk clans battle for the ultimate goal — control of Haean.

Two Communities, Two Approaches to Chat

Spoiler alert: AI Warzone and Trials of Serathian aren’t real games. We cobbled together elements from existing games to create two typical gaming communities.

Like most products with social components, both AI Warzone and Trials of Serathian struggle with trolls. And not the mythical, Tolkien-esque kind — the humans-behaving-badly-online kind.

In both games, players create intense bonds with their clan or faction, since they are dependent on fellow players to complete challenges. When players make mistakes, both games have seen incidents of ongoing harassment in retaliation. Challenges are complex, and new users are subject to intense harassment if they don’t catch on immediately.

Second spoiler alert: Only one of these games avoids excessive user churn. Only one of these games has to spend more and more out of their advertising budget to attract new users. And only one of these games nurtures a healthy, growing community that is willing to follow the creators — that’s you — to their next game. The difference? One of these games took steps to deal with toxicity, and the other did nothing.

In tomorrow’s post, we take a deep dive into the math. Remember our “math magic” from The Other Reason You Should Care About Online Toxicity? We’re going to put it to the test.

Originally published on Medium



The Other Reason You Should Care About Online Toxicity

In these divisive and partisan times, there seems to be one thing we can all agree on, regardless of party lines — online toxicity sucks.

Earlier this week Lily Allen announced that she was leaving Twitter. When you read this recent thread about her devastating early labor in 2010, it’s not hard to see why:

Does anyone want their social feeds to be peppered with hate speech or threats? Does anyone like logging into their favorite game and being greeted with a barrage of insults? And does anyone want to hear another story about cyberbullying gone tragically, fatally wrong? And yet we allow it to happen, time and time again.

The human cost of online abuse is obvious. But there’s another hidden cost when you allow trolls and toxicity to flourish in your product.

Toxicity is poison — and it will eat away at your profits.

Every company faces a critical decision when creating a social network or online game. Do you take steps to deal with toxicity from the very beginning? Do you proactively moderate the community to ensure that everyone plays nice?

Or — do you do nothing? Do you launch your product and hope for the best? Maybe you build a Report feature so users can report abuse or harassment. Maybe you build a Mute button so players can ignore other players who post offensive content. Sure, it’s a traditional approach to moderation, but does it really work?

If you’re not sure what to choose, you’re not alone. The industry has grappled with these questions for years now.

We want to make it an easy choice. We want it to be a no-brainer. We want doing something to be the industry standard. We believe that chat is a game mechanic like any other, and that community balance is as important as game balance.

When you choose to do something, not only do you build the framework for a healthy, growing, loyal community — you’ll also save yourself a bunch of money in the process.

In this series of posts, we’ll introduce two fictional online games, AI Warzone and Trials of Serathian. We’ll people them with communities, each a million users strong. One game will choose to proactively moderate the community, and the other will do nothing. Think of it as an A/B test.

Then, armed with real-world statistics, our own research, and a few brilliant data scientists, we’ll perform a bit of math magic. We’ll toss them all into a hat (minus the data scientists; they get cranky when we try to put them in hats), say the magic words, wave our wands, and — tada! — pull out a formula. We’ll run both games’ profits, user churn, and acquisition costs through our formula to determine, once and for all, the cost of doing nothing.

But first, let’s have a bit of fun and delve into our fictional communities. Who is Serathian and why is he on trial? And what kind of virtual battles can one expect in an AI Warzone?

Join us tomorrow for our second installment in this four-part series: A Tale of Two Online Communities.

 

Originally published on Medium



Tackling Toxicity in Online Gaming Communities

The gaming industry is making a breakthrough.

For most of its history, internet gaming has been one big free-for-all. Users have seen little reprieve from pervasive hostility, particularly in anonymous environments.

A sustained lack of maintenance to any system results in faults, so it should come as no surprise that many industry leaders are finally ready to stop ignoring the issue and embrace innovative approaches.

As product and game designers, we create social experiences to enrich people’s lives. We believe social connections can have a profound transformational effect on humanity by giving people the ability to connect with anyone from anywhere. When we take a look around at the most popular web products to date — social media, social games, instant messaging — the greatest common denominator becomes apparent: each other. The online world now offers us a whole new way of coming together.

There is, however, a problem created when the social environment we are used to operating within is pared down to bare language alone. In the physical world, social conventions and body language guide us through everyday human interaction. Much of our communication happens non-verbally, offering our brains a wider range of data to interpret. Our reactions to potentially misleading messages follow a similar pattern of logic, primarily driven by the rich database of the unconscious mind.

Online, these cues disappear, placing developers who wish to discourage toxic discourse in an awkward position. Should we act quickly and risk misinterpretation, or give users the benefit of the doubt until a moderator can take a closer look? The second option comes with the equally unsavoury proposition of leaving abusive speech unattended for hours at a time, by which point others will have already seen it. With reports showing that users who experience toxicity in an online community are 320% more likely to quit, developers concerned with user retention can no longer afford to look the other way. So what are our options?

Methods for tackling community management generally fall into one of two categories: penalty or reward. Typical responses to bad behaviour include warning messages, partial restrictions from game features and, as a final measure, temporary or permanent bans. On the flipside, rewards for exemplary behaviour seem to offer more room for creativity. The multiplayer online battle arena game Defense of the Ancients has a commendation system whereby users can give out up to 6 commendations per week, based on four options: Friendly, Forgiving, Teaching, or Leadership. Commendable users receive no other tangible reward beyond prestige.

“Personally, [DotA’s commendation system] always incentivized me to try and be helpful in future games simply because leaving a game and feeling like you had a positive impact despite losing feels way better than raging at people and having them threaten to report you,” explains one Reddit user in a discussion thread centering around commendations in online games.

Another notable example is League of Legends’ recent move to give exclusive skins to users with no history of bans in the last year. A positive-reinforcement model seems to be quickly gaining traction in the gaming industry.

Still, a complex problem requires a complex solution, and toxicity continues to persist in both these communities. With all the work that goes into creating a successful game, few studios have the time or resources left over to build, perfect, and localize intricate systems of penalty and reward.

The first step is acknowledging two inconvenient truths: context is everything, and our words exist in shades of gray. Even foul language can play a positive role in a community depending on the context. An online world for kids has different needs from a social network for adults, so there’s no one-size-fits-all solution.

Competing with the ever-expanding database of the human mind is no easy task, and when it comes to distinguishing between subtle shifts in tone and meaning, machines have historically fallen short. The nuances of human communication make the supervision of online communities a notoriously difficult process to automate. Of course, with greater scale comes a greater need for automation — so what’s a Product Manager to do?

How We Manage Toxicity for Social Apps and Websites

At Two Hat, we believe the social internet is a positive place with unlimited potential. We also believe bullying and toxicity are causing harm to real people and causing irreparable damage to social products. That’s why we made Community Sift.

We work with leading game studios and social platforms to find and manage toxic behaviours in their communities. We do this in real-time, and (at the time of writing) process over 1 billion messages a month.

Some interesting facts about toxicity in online communities:

  • According to the Fiksu Index, the cost of acquiring a loyal user is now $4.23, making user acquisition one of the biggest costs to a game.
  • Player Behavior in Online Games research published by Riot Games indicates that “players are 320% more likely to quit, the more toxicity they experience.”

Toxicity hurts everyone:

  • An estimated 1% of a new community is toxic. If that is ignored, the best community members leave and toxicity can grow as high as 20%.
  • If a studio spends $1 million launching its game and a handful of toxic users send destructive messages, their investment is at risk.
  • Addressing the problem early will model what the community is for, and what is expected of future members, thus reducing future costs.
  • Behaviour does change. That’s why we’ve created responsive tools that adapt to changing trends and user behaviours. We believe people are coachable and have built our technology with this assumption.
  • Even existing communities see an immediate drop in toxicity with the addition of strong tools.

Here’s a little bit about what Community Sift can do to help:

  • More than a Filter: Unlike products that only look for profanity, we have over 1 million human-validated rules and multiple AI systems to seek out bullying, toxicity, racism, fraud, and more.
  • Emphasis on Reputation: Every user has a bad day. The real problem is users who are consistently damaging the community.
  • Reusable Common Sense: Instead of simple regexes or blacklists and whitelists, we measure severity on a spectrum, from extreme good to extreme bad. You can use the same rules but a different permission level for group chat vs. private chat and for one game vs. another.
  • Industry Veterans: Our team has made games with over 300 million users and managed a wide variety of communities across multiple languages. We are live and battle-tested on top titles, processing over 1 billion messages a month at the time of writing.

To install Community Sift, you have your backend servers make one simple API call for each message, and we handle all the complexity in our cloud.
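As a rough illustration of that integration pattern, the call might look something like the Python sketch below. The endpoint URL, request fields, and response shape are hypothetical placeholders invented for this example, not the actual Community Sift API.

```python
# Hypothetical sketch only: the endpoint URL, request fields, and response
# shape are invented for illustration, not the actual Community Sift API.
import requests

CHAT_FILTER_ENDPOINT = "https://api.example.com/v1/messages"  # placeholder URL

def check_message(user_id: str, channel: str, text: str) -> dict:
    """Send one chat message for classification before broadcasting it."""
    response = requests.post(
        CHAT_FILTER_ENDPOINT,
        json={"user": user_id, "channel": channel, "text": text},
        timeout=1.0,  # keep chat latency low; choose your own fail-open/closed policy
    )
    response.raise_for_status()
    return response.json()  # e.g. a severity verdict plus a filtered version of the text

# The game server then acts on the verdict: post the message as-is, post the
# hashed-out version, warn or mute the sender, or escalate for human review.
```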

When toxic behaviour is found, we can:

  • Hash out the negative parts of a message, e.g. “####ed out message”
  • Educate the user
  • Reward positive users who are consistently helping others
  • Automatically trigger a temporary mute for regular offenders
  • Escalate for internal review when certain conditions like “past history of toxicity” are met
  • Group toxic users on a server together to help protect new users
  • Provide daily stats, BI reports, and analytics

We’d love to show you how we can help protect your social product. Feel free to book a demo anytime.