How Maslow’s Hierarchy of Needs Explains the Internet

Online comments.

Anonymous egg accounts.

Political posts.

… feeling nauseous?

Chances are, you shuddered slightly at the words “online comments.”

Presenting Exhibit A, from a Daily Mail article about puppies:

It gets worse. Presenting Exhibit B, from Twitter:


The internet has so much potential. It connects us across borders, cultural divides, and even languages. And oftentimes that potential is fulfilled. Remember the Arab Spring in 2011? It probably wouldn’t have happened without Twitter connecting activists across the Middle East.

Writers, musicians, and artists can share their art with fans across the globe on platforms like Medium and YouTube.

After the terror attacks in Manchester and London in May, many Facebook users used the Safety Check feature to reassure family and friends that they were safe from danger.

Every byte of knowledge that has ever existed is only a few taps away, stored, improbably, inside a device that fits in the palm of a hand. The internet is a powerful tool for making connections, for sharing knowledge, and for conversing with people across the globe.

And yet… virtual conversations are so often reduced to emojis and cat memes. Because who wants to start a real conversation when it’s likely to dissolve into insults and vitriol?

A rich, fulfilling, and enlightened life requires a lot more.

So what’s missing?

Maslow was onto something…

Remember Maslow’s hierarchy of needs? It probably sounds vaguely familiar, but here’s a quick refresher if you’ve forgotten.

Abraham Maslow, who later taught psychology at Brandeis University in Massachusetts, published his groundbreaking paper “A Theory of Human Motivation” in 1943. In this seminal paper, he identifies and describes five basic levels of human needs, each one forming a solid base for the next and, together, creating a pyramid. He expanded on the hierarchy years later in his 1954 book Motivation and Personality.

The hierarchy looks like this:

  • Physiological: The basic physical requirements for human survival, including air, water, and food; then clothing, shelter, and sex.
  • Safety: Once our physical needs are met, we require safety and security. Safety needs include economic security as well as health and well-being.
  • Love/belonging: Human beings require a sense of belonging and acceptance from family and social groups.
  • Esteem: We need to be respected and valued by others.
  • Self-actualization: The ultimate. When we self-actualize, we become who we truly are.

According to Maslow, our supporting needs must be met before we can become who we truly are — before we reach self-actualization.

So what does it mean to become yourself? When we self-actualize, we’re more than just animals playing dress-up — we are fulfilling the promise of consciousness. We are human.

Sorry, what does this have to do with the internet?

We don’t stop being human when we go online. The internet is just a new kind of community: the logical evolution of the offline communities we started forming when the first modern humans emerged in Africa about 200,000 years ago. We’ve had many chances to reassess, reevaluate, and modify our offline community etiquette since then, which means that offline communities have a distinct advantage over the internet.

Merriam-Webster’s various definitions of “community” are telling:

  • people with common interests living in a particular area
  • an interacting population of various kinds of individuals (such as species) in a common location
  • a group of people with a common characteristic or interest living together within a larger society

Community is all about interaction and common interests. We gather together in groups, in public and private spaces, to share our passions and express our feelings. So, of course, we expect to experience that same comfort and kinship in our online communities. After all, we’ve already spent nearly a quarter of a million years cultivating strong, resilient communities — and achieving self-actualization.

But the internet has failed us, because too many people are afraid to share and express themselves online. Those of us who aspire to online self-actualization are too often drowned out by trolls. Which leaves us with emojis and cat memes: communication without connection.

So how do we bridge that gap between conversation and real connection? How do we reach the pinnacle of Maslow’s hierarchy of needs in the virtual space?

Conversations have needs, too

What if there were a hierarchy of conversation needs, using Maslow’s theory as a framework?

On the internet, our basic physical needs are already taken care of, so this pyramid starts with safety.

So what do our levels mean?

  • Safety: Offline, we expect to encounter bullies from time to time. And we can’t get upset when someone drops the occasional f-bomb in public. But we do expect to be safe from targeted harassment, from repeated racial, ethnic, or religious slurs, and from threats against our bodies and our lives. We should expect the same when we’re online.
  • Social: Once we are safe from harm, we require places where we feel a sense of belonging and acceptance. Social networks, forums, messaging apps, online games — these are all communities where we gather and share.
  • Esteem: We need to be heard, and we need our voices to be respected.
  • Self-actualization: The ultimate. When we self-actualize online, we blend the power of community with the blessing of esteem, and we achieve something bigger and better. This is where great conversation happens. This is where user-generated content turns into art. This is where real social change happens.

Problem is, online communities are far too often missing that first level. And without safety, we cannot possibly move on to social.

The problem with self-censorship

In the 2016 study Online Harassment, Digital Abuse, and Cyberstalking in America, researchers found that nearly half (47%) of Americans have experienced online harassment. That’s big — but it’s not entirely shocking. We hear plenty of stories about online harassment and abuse in the news.

The real kicker? Over a quarter (27%) of Americans reported that they had self-censored their posts out of fear of harassment.

If we feel so unsafe in our online communities that we stop sharing what matters to us most, we’ve lost the whole point of building communities. We’ve forgotten why they matter.

How did we get here?

There are a few reasons. No one planned the internet; it just happened, site by site and network by network. And because we never planned for it, we never created a set of rules.

And the internet is still so young. Think about it: Communities have been around since we started to walk on two feet. The first written language emerged in Sumer about 5,000 years ago. The printing press was invented roughly 600 years ago. The telegraph has been around for nearly 200 years. Even the telephone, one of the greatest modern advances in communication, has a solid 140 years of etiquette development behind it.

The internet as we know it today — with its complex web of disparate communities and user-generated content — is only about 20 years old. And with all due respect to 20-year-olds, it’s still a baby.

We’ve been stumbling around in this virtual space with only a dim light to guide us, which has led to the standardization of some… less-than-desirable behaviors. Kids who grew up playing MOBAs (multiplayer online battle arena games) have come to accept that toxicity is a byproduct of online competition. Those of us who use social media have come to expect unimaginably vile hate speech when we scroll through our feeds.

And, of course, we all know to avoid the comments section.

Can self-actualization and online communities co-exist?

Yes. Because why not? We built this thing — so we can fix it.

Three things need to happen if we’re going to move from social to esteem to self-actualization.

Industry-wide paradigm shift

The good news? It’s already happening. Every day there’s a new article about the dangers of cyberbullying and online abuse. More and more social products realize that they can’t allow harassment to run free on their platforms. The German parliament recently backed a plan to fine social networks up to €50 million if they don’t remove hate speech within 24 hours.

Even the Obama Foundation has a new initiative centered around digital citizenship.

As our friend David Ryan Polgar, Chief of Trust & Safety at Friendbase, says:

“Digital citizenship is the safe, savvy, ethical use of social media and technology.”

Safe, savvy, and ethical: As a society, we can do this. We’ve figured out how to do it in our offline communities, so we can do it in our online communities, too.

A big part of the shift includes a newfound focus on bringing empathy back into online interactions. To quote David again:

“There is a person behind that avatar and we often forget that.”

Thoughtful content moderation

The problem with moderation is that it’s no fun. No one wants to comb through thousands of user reports, review millions of potentially horrifying images, or monitor a mind-numbingly long live-chat stream in real time.

Too much noise + no way to prioritize = unhappy and inefficient moderators.

Thoughtful, intentional moderation is all about focus. It’s about giving community managers and moderators the right techniques to sift through content and ensure that the worst stuff — the targeted bullying, the cries for help, the rape threats — is dealt with first.

Automation is a crucial part of that solution. With artificial intelligence getting more powerful every day, social products can let computers do the heavy lifting first instead of forcing their moderation teams to review every post by hand.
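To make that concrete, here’s a minimal sketch of what machine-first triage might look like. The classify function, the risk thresholds, and the sample posts are all hypothetical placeholders rather than Community Sift’s actual pipeline; the point is simply that software can auto-action the obvious cases and push the riskiest grey-area content to the front of the human review queue.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedItem:
    priority: float                  # lower value = reviewed sooner
    text: str = field(compare=False)


def classify(text: str) -> float:
    """Stand-in for a real model: return a risk score between 0 and 1."""
    risky_terms = {"threat", "kill", "slur"}
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / 2)


def triage(posts, auto_remove_at=0.9, auto_approve_below=0.2):
    """Auto-action the obvious cases; queue the rest by severity for humans."""
    review_queue = []
    for post in posts:
        score = classify(post)
        if score >= auto_remove_at:
            print(f"auto-removed:  {post!r}")
        elif score < auto_approve_below:
            print(f"auto-approved: {post!r}")
        else:
            # Negate the score so the highest-risk posts pop first.
            heapq.heappush(review_queue, QueuedItem(-score, post))
    return review_queue


queue = triage([
    "look at this adorable puppy",        # obviously fine: auto-approved
    "this is a threat, I will kill you",  # obviously awful: auto-removed
    "you absolute slur",                  # grey area: queued for a human
])
while queue:
    print("human review:", heapq.heappop(queue).text)
```

In a real system the keyword check would be replaced by a trained model, but the triage logic stays the same: the worst stuff gets dealt with first, and moderators spend their time where it matters.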

The content moderation strategy will be slightly different for every community. But there are a few best practices that every community can adopt:

  • Know your community resilience. This is a step that too many social products forget to take. Every community has a tolerance level for certain behaviors. Can your community handle the occasional swear word — but not if it’s repeated 10 times? Resilience will tell you where to draw the line.
  • Use reputation to treat users differently. Behavior tends to repeat itself. If you know that a user posts things that break your community guidelines, you can place tighter restrictions on them. Conversely, you can give engaged users the ability to post more freely. But don’t forget that users are human; everyone deserves the opportunity to learn from their mistakes. Which leads us to our next point…
  • Use behavior-changing techniques. Strategies include auto-messaging users before they hit “send” on posts that breach community guidelines, and publicly honoring users for their positive behavior.
  • Let your users choose what they see. The ESRB has the right idea. We all know what “Rated E for Everyone” means; we’ve heard it a million times. So what if we designed systems that allowed users to choose their experience based on a rating? If you have a smart enough system in the background classifying and labeling content, then you can serve users only the content that they’re comfortable seeing (see the sketch after this list).
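Here’s a small sketch of how the reputation and rating ideas above could fit together in code. The 0-to-3 rating scale, the reputation numbers, and the User fields are invented for illustration, not an actual Community Sift data model.

```python
from dataclasses import dataclass

# Hypothetical rating scale, loosely inspired by the ESRB idea above.
RATING_LABELS = {0: "everyone", 1: "mild", 2: "mature", 3: "unsafe"}


@dataclass
class User:
    name: str
    reputation: float = 0.0  # drops when posts break community guidelines
    max_rating: int = 1      # highest rating this user has opted in to see


def can_post_freely(user: User) -> bool:
    """Engaged, trusted users post instantly; repeat offenders go to pre-moderation."""
    return user.reputation >= 0.0


def visible_to(user: User, content_rating: int) -> bool:
    """Serve users only content at or below the rating they chose."""
    return content_rating <= user.max_rating


alice = User("alice", reputation=2.0, max_rating=0)  # trusted, wants a gentle feed
bob = User("bob", reputation=-1.5, max_rating=2)     # rule-breaker, thicker skin

print(can_post_freely(alice), can_post_freely(bob))  # True False
print(visible_to(alice, 2), visible_to(bob, 2))      # False True
print("bob sees up to:", RATING_LABELS[bob.max_rating])
```

The design choice worth noting is that both checks are per-user: trust is earned post by post, and every user decides for themselves how much they are comfortable seeing.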

It all comes back to our hierarchy of conversation needs. If we can provide that first level of safety, we can move beyond emojis and cats, and on to the next level.

Early digital education

The biggest task ahead of us is also the most important — education. We didn’t have the benefit of 20 years of internet culture, behavior, and standards when we first started to go online. We have those 20 years of mistakes and missteps behind us now.

Which means that we have an opportunity with the next generation of digital citizens to reshape the culture of the internet. In fact, strides are already being made.

Riot Games (the studio that makes the hugely popular MOBA League of Legends) has started an initiative in Australia and New Zealand that’s gaining traction. Spearheaded by Rioter Ivan Davies, the League of Legends High School Clubs teaches students about good sportsmanship through actual gameplay.

It’s a smart move — kids are already engaged when they’re playing a game they love, so it’s a lot easier to slip some education in there. Ivan and his team have even created impressive teaching resources for teachers who lead the clubs.

Google recently launched Be Internet Awesome, a program that teaches young children how to be good digital citizens and explore the internet safely. In the browser game Interland, kids learn how to protect their personal information, be kind to other users, and spot phishing scams and fake sites. And similar to Riot, Google has created a curriculum for educators to use in the classroom.

In addition, non-profits like the Cybersmile Foundation and the UK Safer Internet Centre use social media to reach kids and teens directly.

Things are changing. Our kids will likely grow up to be better digital citizens than we ever were. And it’s unlikely that they will tolerate the bullying, harassment, and abuse that we’ve put up with for the last 20 years.

Along with a paradigm shift, thoughtful moderation, and education, if we want change to happen, we have to celebrate our communities. We have to talk about our wins, our successes… and especially our failures. Let’s not beat ourselves up if we don’t get it right the first time. We’re figuring this out.

We’re self-actualizing.

It’s time for the internet to grow up

Is this the year the internet achieves its full potential? From where most of us in the industry sit, it’s already happening. People are fed up, and they’re ready for a change.

This year, social products have an opportunity to decide what they really want to be. They can be the Wild West, where too many conversations end with a (metaphorical) bullet. Or they can be something better. They can be spaces that nurture humanity — real communities, the kind we’ve been building for the last 200,000 years.

This year, let’s build online communities that honor the potential of the internet.

That meet every level in our hierarchy of needs.

That promote digital citizenship.

That encourage self-actualization.

This year, let’s start the conversation.

***

At Two Hat Security, we empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content.

Want to increase user retention, reduce moderation, and protect your brand?

Get in touch today to see how our chat filter and moderation software Community Sift can help you build a community that promotes good digital citizenship — and gives your users a safe space to connect.

Want more articles like this? Subscribe to our newsletter and never miss an update!



Three Powerful Lessons We Learned at the Protecting Innocence Hackathon

You rarely hear about them, but every day brave investigators across the globe review the most horrific stories and images you could ever imagine. It’s called child sexual abuse material (known as CSAM in the industry), and it hides in the dark corners of the internet, waiting to be found.

The scope is dizzying. The RCMP-led National Child Exploitation Coordination Centre (NCECC) alone received 27,000 cases in 2016. And right now, it’s nearly impossible for officers to review those cases fast enough to prioritize the ones that require their immediate attention.

That’s why, on July 6th and 7th, volunteers from law enforcement, academia, and the tech industry came together to collaborate on solving this problem, perhaps the biggest problem of our time — how do we quickly, accurately, and efficiently detect online CSAM? Artificial intelligence gets smarter and more refined every day. How can we leverage those breakthroughs to save victimized children and apprehend their abusers?

Along with event co-sponsors the RCMP, Microsoft, and Magnet Forensics, we had a simple goal at the Protecting Innocence Hackathon: to bring together the brightest minds in our respective industries to answer these questions.

We ended up learning a few valuable lessons along the way.

It starts with education

Participants across all three disciplines learned from each other. Attendees from the tech industry and academia were given a crash course in grooming and luring techniques (as well as the psychology behind them) from law enforcement, the people who study them every day.

Make no mistake, these were tough lessons to learn — but with a deeper understanding of how predators attract their victims, we can build smarter, more efficient systems to catch them.

Law enforcement studied the techniques of machine learning and artificial intelligence, which in turn gave them a deeper understanding of the challenges facing data scientists, not to mention the need for robust and permanent datasets.

It’s crucial that we learn from each other. But that’s just the first step.

Nothing important happens without collaboration

Too often our industries are siloed, with every company, university, and agency working on a different project. Bringing professionals together from across these disciplines and encouraging them to share their diverse expertise, without reservations or fear, was a huge accomplishment, and an important lesson.

This isn’t a problem that can be solved alone. This is a 25-million-images-a-year problem. This is a problem that crosses industry, cultural, and country lines.

If we want to protect the innocence of children, we have a responsibility to be transparent and collaborative.

Just do it

Education and collaboration are commendable and necessary — but they don’t add up to much without actual results. Once you have the blueprints, you have no excuse not to build.

The great news? The five teams and 60+ participants made real, tangible progress.

Collectively, the teams built the following:

  • A proposed international standard for classifying and annotating child sexual exploitation images and text
  • A machine learning aggregation blueprint for both text and image classification
  • Machine learning models to detect sexploitation conversations, as well as image detection for age, anime, indoor and outdoor, nudity, and CSAM

We cannot overstate the importance of these achievements. They are the first steps towards building the most comprehensive and accurate CSAM detection system the world has seen.

Not only that, the proposed global standard for classifying text and images, if accepted, will lead to even more accurate detection.

The future of CSAM detection is now

We actually learned a fourth lesson at the hackathon, perhaps the most powerful of them all: Everyone wants to protect and save children from predators. And they’re willing to work together, despite their differences, to make that happen.

At Two Hat Security, we’re using the knowledge shared by our collaborators to further train our artificial intelligence model CEASE and to refine our grooming and luring detection in Community Sift. And we’ll continue to work alongside our partners and friends in law enforcement, academia, and the tech industry to find smart solutions to big problems.

There are challenges ahead, but if everyone continues to educate, collaborate, and create, projects like CEASE and events like Protecting Innocence can and will make great strides. We hope that the lessons we learned will be applied by any agency, company, or university that hopes to tackle this issue.

Thank you again to our co-sponsors the RCMP, Microsoft, and Magnet Forensics. And to the Chief Enforcers, Code Warriors, and Data Mages who gave their time, their expertise, and their fearlessness to this event: your contributions are invaluable. You’re changing the world.

And to anyone who labors every day, despite the heartbreak, to protect children — thank you. You may work quietly, you may work undercover, and we may never know your names, but we see you. And we promise to support you, in every way we can.

Want more articles like this? Subscribe to our newsletter and never miss an update!



Is Online Behavior Changing (For the Better) in 2017?

This year, it seems like every second article you read is about online behavior. From Mark Zuckerberg’s manifesto to Twitter’s ongoing attempts to address abuse, toxicity is a hot topic.

However, forward-thinking companies like Riot Games have been (not so quietly) researching online toxicity for years now. And one of their biggest takeaways is that when it comes to online behavior, as a society we’re still in the discovery stages… and we have a long way to go.

Luckily, we have experts like Riot’s brilliant Senior Technical Designer Kimberly Voll to help guide us on the journey.

A long-time gamer with a background in computer science, artificial intelligence, and cognitive science (told you she was brilliant), Kim believes passionately in the power of player experience to shape game design. She also happens to be an expert in player behavior and online communication.

We sat down with her recently to discuss the current state of online discourse, the psychology of player behavior, and how game designers can promote sportsmanship in their games.

You say you want a revolution

Two Hat: As an industry, it seems like 2017 is the year we start to talk about online behavior, honestly and with an eye to finding solutions.

Kim: We’re on the cusp of a pretty significant shift in how we think of online digital play. Step by step, it’s starting to mature into a real industry. We’re at that awkward teenage phase where all hell keeps breaking loose sometimes. The internet is the fastest-spreading technology that human beings have ever faced. You blink, it went global, and now suddenly everybody’s online.

“How do you teach your kids to behave online when we don’t even know how to behave online?”

It hasn’t been culturally appropriated yet. It’s here, we like it, and we’re using it. There’s not enough of us stepping back and looking at it critically.

The fanciest of etiquette!

TH: Is it something about the nature of the internet that makes us behave this way?

Kim: The way we normally handle etiquette is with actual social settings. When you go to a kid’s club, you use kid-friendly language. When you go to a nightclub, you use nightclub-friendly language. We solve for that pretty easily. Most of us are good at reading a room, knowing how to read our peers, knowing what’s okay to say at work, versus elsewhere, knowing what it’s okay to say when you’re on the player behavior team and you’re exposed to all manner of language [laughs]. We’ve been doing this since we moved out of caves.

But we don’t have that on the internet. You can’t reliably look around and trust that space. And you find with kids that they go into all of the spaces trusting. Or they do what kids do and push the limits. Both are not great. We want kids to push the limits so they can learn the limits, but we don’t want them to build up these terrible habits that propagate these ways of talking.

On the internet, you don’t get the gesticulations, you don’t get the presence that comes with being in the room with another person. There are certain channels that right now are completely cut off. So right now we’re hyper-focusing on other channels; for a long time that’s just been chat. These limitations mean that you end up trying to amplify and bring out your humanity in different ways.

The nature of things

TH: As a gamer and a cognitive scientist, what is your take on toxic player behavior?

Kim: I think the first step is understanding the nature of the problem.

There are different ways to look at toxicity and unsportsmanlike behavior. We can’t paint it all with the same brush.

“Are there people who just want to watch the world burn? They’re out there, but in our experience, they’re really, really rare.”

Not everyone else is being a saint, but not everyone is the same.

MOBAs [multiplayer online battle arena games] are frustrating because they’re super intense. If something goes wrong you’re particularly susceptible to losing your temper. That creates a tinderbox that gives rise to other things. Couple that with bad habits and socio-norms that have developed on the internet, and have been honed somewhat for a gaming audience, and they’re just that: they’re norms. Doesn’t make them necessarily right or wrong, and it doesn’t mean that players like them. We find that players don’t like them, overwhelmingly. And they’re becoming incredibly vocal, saying “We don’t want this.”

But there’s a second vocal group that’s saying “Suck it up. It’s the internet, it’s the way we talk.” And the balance is somewhere in the middle.

It’s always a balancing act

TH: How can game designers decide what tactic they should use to promote better behavior in their game?

Kim: There is obviously a line, but it shifts a bit. Where that line falls will depend largely on your community, your content. It’s the same way the line shifts dramatically when you’re out with friends drinking, versus at home with the family playing card games with your kid cousins.

Band-aids help, but they’re not the full solution.

There has to be flexibility. The first thing to do is understand your community, and try to gain a broader perspective of the motivation and underlying things that drive these behaviors. And also understand that there is no “one size fits all” approach. As a producer of interactive content, you need to figure out where your comfort level is. Then draw that line, and stick by that line. It’s your game; you can set those standards.

There is understanding the community, understanding it within the context of your game, and then there’s the work that Community Sift does, which is shielding. I think that shielding remains ever-important. But there has to be balance. The shield is the band-aid, but if we only ever do that, we’re missing an opportunity to learn from what that band-aid is blocking.

There’s a nice tension there where we can begin to explore things.

You don’t need to fundamentally alter your core experience. But if you have that awareness it forces you to ask questions like, “Do I want to have chat in this part of the game? Do I want to have voice chat immediately after a match, when tempers are the most heated?”

Change is good

TH: Do you have an example of a time when Riot made a change to gameplay based on player behavior?

Kim: Recently we added the ability to select your role before you go into the queue, with some exceptions.

Before, you would pop into chat and the war would start to ensure you got the role you wanted, because there are some roles that people tend to like more. Whoever could type “mid” fastest ideally got the role, assuming people were even willing to accept precedence, which sometimes they weren’t. And if you lagged for any reason, you could miss your chance at your role.

We realized we were starting the game out on the wrong foot with these mini-wars. What was supposed to be a cooperative team game — one team vs another — now included this intra-team fighting because we started off with that kind of atmosphere.

Being able to choose your role gives players agency in a meaningful way, and removes these pre-game arguments. It’s not perfect, but it’s made the game significantly better.

Trigger warnings, road rage, and language norms… oh my!

TH: What kinds of things trigger bad behavior?

Kim: There is a mix of things that trigger toxicity and unsportsmanlike behavior. Obviously, frustration is one. But let’s break that down: What do you want to do when you’re frustrated? You want to kick and scream. You want the world to know. And if somebody is there with you, you need them to know, even if they had nothing to do with it.

“Put yourself in a situation where you’re locked behind a keyboard, your frustration is bubbling over, and you’re quite likely alone in a room playing a game. How do you yell at the person on the other side of the screen? Well, you can use all caps, but that’s not very satisfying. So how do you get more volume into your words? You keep amping up what you’re saying. And what’s the top of that chain? Hate speech.”

It’s very similar to road rage. I remember my mom told me a story about some dude who was upset that she didn’t run a yellow light. He actually got out of the car and started pounding on her hood. And I bet he went home afterward, pulled into his driveway, greeted his kid, and was a normal person for the rest of the day.

You’re not an actual monster; you’re in a particular set of circumstances that have funneled you, through the keyboard, into typing things you might not otherwise type. So that’s one big bucket.

Sometimes, you Hulk out.

In the 70s and 80s, we used to say things like “You’re such a retard.” Now, we’re like “I can’t believe we used to say that.” There are certain phrases that were normal at the time. We had zero ill intent — it was just a way of saying “You’re a goofball.” That sort of normalcy that you get with language, no matter how severe, when you’re exposed to it regularly, becomes ingrained in you, and you carry that through your life and don’t even realize it.

We’ve sent people their chat logs, and I truly believe that when they look at them, they have no idea what the problem is. Other people see the problem, but they just think, “Suck it up.” But there is a third group of people who look at it and think, “This is the way everybody talks, I don’t understand.” They’re caught in a weird spot where they don’t know how to move forward. And that can trigger defensiveness.

The thought process is roughly “So, you’re asking me to change, but I don’t quite get it, I don’t want to change, because I’m me, and I like talking this way, and when I say things like this, my friends acknowledge me and laugh, and that’s my bonding mechanism so you can’t take that away from me.”

Typically, no one thinks all those things consciously. But they do get angry, and now we’ve lost all productive discourse.

There is a full spectrum here. It’s a big tapestry of really interesting things that are going on when people behave this way on the internet. All of that feeds into the question: how do we shield it?

“Shielding is great, but can we also give feedback in a way that increases the likelihood that people who are getting the feedback are receptive to it?”

Can we draw a line where something is so bad that the pain it causes others far outweighs the time it would take to try to help this person?

Can we actually prevent them from getting into this state by understanding what’s triggering it, whether it’s the game, human nature, or current socio-norms?

Let’s talk about toxicity

TH: What can we do to ensure that these conversations continue?

Kim: I think we need to steer away from accusations. We’re all in this together; we’re all on the internet. There’s a certain level of individual responsibility in how we conduct ourselves online.

I’ve had these conversations when people are like “Yes, let’s clean up the internet, let’s do everything we have to do to make this happen.” And the flipside is people who say “Just suck it up. People are far too sensitive.”

And what I often find is that the first group is just naturally well-behaved online, while the second group is more likely to lose it. So when we have these conversations, what we don’t realize is that our perspective can unconsciously become an affront to who they are.

If we don’t take that into account in the conversation, then we end up inadvertently pointing fingers again.

We have to get to a point where we can talk about it without getting defensive.

Redefining our approach to player behavior

TH: Your empathetic approach is refreshing. Many of us have gotten into the habit of assuming the worst of people and being unwilling to see the other person’s perspective. And of course, that isn’t productive.

Kim: Despite our tendency to make flippant, sweeping comments — most people are not jerks. They’re a product of their own situation. And those journeys that have got each of us to where we are today are different, and they’re often dramatically different. And when we put people on the internet, we’ve got a mix of folks for whom the only thing connecting them is this game, and they come into the game with a bunch of bad experiences, or just generally feeling like “Everyone else is going to let me down.”

Then somebody makes an innocent mistake, or not even a mistake — maybe they took a direction you didn’t expect — and that just reinforces their worldview. “See, everyone is an idiot!”

When expectations aren’t met it leads to a lot of frustration, and players head into games with a lot of expectations.

I believe very viscerally that we have to listen before we try to aggressively push things out. But also we have to realize that the folks we are trying to understand may not be ready to talk. So we may have to go to them. And that applies to a lot of human tragedy, from racism to sexism.

We come in wagging our fingers, and our natural human defense is “Walls up, defenses up — this is the only way I will solve the cognitive dissonance that is you telling me that I should change who I am. Because I am who I am, and I don’t want to change who I am. Because who else would I be?” And that’s scary.

TH: It sounds like we need to take a step back and show a bit of grace. Like we said before, the conversation is finally starting to happen, so let’s give people time to adjust.

Kim: Think about the average company. You’re trying to make a buck to put food on the table and maybe make a few great games. That doesn’t leave a lot of room to do a lot of extra stuff. You may want to, but you may also think, “I have no idea what to do, and I tried a few things and it didn’t work, so what now? What do I do, stop making games?”

“At Riot, we’re lucky to have had the success that we’ve had to make it possible to fund these efforts, and that’s why we want to share. Let’s talk, let’s share. I never thought I’d have this job in my life. We’re very lucky to fund our team and try to make a difference in a little corner of the internet.”

It’s harder for games that have been out for a long time. Because it’s harder to shift normative behavior and break those habits. But we’re trying.


Want to know more about Kim? Follow @zanytomato on Twitter

Want more articles like this? Subscribe to our newsletter and never miss an update!
