Gamers Unite to End Online Harassment

Southern New Hampshire University sophomore Abbey Sager has accomplished more in her eighteen years than most of us do in a lifetime.

In addition to studying Business Administration and Nonprofit Management at university, Abbey is the founder and president of Diverse Gaming Coalition, a 501(c)(3) non-profit organization dedicated to ending online bullying and harassment.

Abbey Sager, founder of Diverse Gaming Coalition

Bullied so badly as a teen that she dropped out of high school, Abbey later pursued her GED and completed her high school education. Determined not to let the same thing happen to other bullied teens, she founded Diverse Gaming Coalition. The coalition distinguishes itself from other anti-bullying organizations by making fun an essential pillar of its initiatives — no mind-numbing PowerPoint presentations or bland speeches allowed.

We spoke to Abbey about her experiences with online harassment, how she thinks online games can promote healthy interactions, and the Diverse Gaming Coalition’s current initiatives.

Tell us about your experiences with harassment in online games.

As a female gamer, I experience online harassment almost daily. Plus, harassment can take more than one form. Sometimes, people don’t care at all and will spew obscenities over their microphone. Other times, people choose to send mean, hurtful messages. That includes adding me, finding my other personal accounts, digging up information about me, threatening me, and doing things people wouldn’t normally do over voice chat, let alone to my face.

When I speak in games that involve or promote voice chat, questions like, “Are you a girl?” or “How old are you?” are common. I would escape real-life bullying to find solace in video games with my friends, but sometimes it just made matters worse.

On one occasion, I was playing a game that involved voice chat in the game itself. It didn’t take too long for some random person to find my address, the names of my parents, and announce it to the entire game. I felt extremely unsafe, and it even made me not want to play games for months.

We’ve seen major changes in the industry this year. For example, Twitch released AutoMod, which allows broadcasters to moderate their own channels. Overwatch has been updating their reporting system. And companies like Riot Games are pioneering innovative initiatives like the League of Legends High School Clubs in Australia and New Zealand. As a gamer, what do you think games can do to further promote fair play & digital citizenship in their products? What can players do?

Both the games themselves and the players play a big role in promoting fair play. For one, gamers have control over what they say and do. For instance, if a teammate is flaming or harassing them, ignoring the bully gets them to stop most of the time. If you don’t react, there’s nothing to fuel the flames.

Plus, games can take more action by treating reports more seriously and finding ways to get these offenders to change their ways.

What initiatives is the Coalition working on?

Currently, Diverse Gaming Coalition is working on various initiatives, each catering to people with an interest in a specific topic.

Our main project is our Comic Project, which includes two parts. The first part is a full 16-page comic focusing on a story of bullying, friendship, and differences within others.

A sneak peek at the next Diverse Gaming Coalition comic book

The second part includes monthly webcomics that focus on different topics each month to cater to today’s prominent social issues. You can read some of our online comics on our blog.

Our other initiatives focus on the online world: social media, the internet, etc. We want to spread inclusivity and safe spaces across all of the platforms that we’re present on. That’s why we created our “Diverse Gamers” groups on platforms such as Discord, Twitch, Steam, and League of Legends. By doing this, we intend to create an environment that caters to everyone, while promoting those who do good on their platforms.

We’re always actively working on other projects. Everyone can follow our social media and keep up to date by subscribing to our mailing list on our site.

Explain the significance of diverse gaming. What does diversity mean to you? Why is it important?

At Diverse Gaming Coalition, our focus is to end bullying while collaborating with other causes to support people from all walks of life. We do this by incorporating youth into everything we do, including events, workshops, streaming, gaming, and anything of interest! We understand how dull and repetitive anti-bullying organizations can be. The Diverse Gaming team is fueled by Millennials, so we know how bland most anti-bullying campaigns feel, and we strive to be different. Our main goal is to relay all our information in a lively and fun way.

With this in mind, a lot of people don’t get the justice they deserve simply because they have a diverse background. In video games, LGBTQ+ people don’t get the representation that they deserve, women are overly sexualized, and black people have little to no representation. Diversity is what fuels creativity, compassion, and overall kindness.

The Coalition at work — and play. ; )

What do you hope to achieve with the coalition?

I hope that our organization can promote peace, love, and positivity in the world through our work. We want people who would never have gotten involved with anti-bullying initiatives in the past to join us and realize, “That’s actually a pretty big issue that we should work on ending.”

How can gamers and non-gamers get involved with Diverse Gaming Coalition?

You can find out more about how to get involved here.

We’re always looking for blog writers, and anyone passionate about ending bullying, on and offline. Feel free to send us an email at contact@diversegaming.co!


At Two Hat Security, we believe that everyone has the right to share online without fear of harassment or abuse.

With our chat filter and moderation software Community Sift, we empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content.

Contact us today to discuss how we can help you grow a thriving community of users in a safe, welcoming, diverse environment.

Want more articles like this? Subscribe to our newsletter and never miss a blog!


Two Hat Headed to Slush 2017!

“Nothing normal ever changed a damn thing.” (Slush 2017)

Now that’s a slogan.

It resonates deeply with us here in Canada. While sisu may be a uniquely Finnish trait, we’re convinced we have some of that grit and determination in Canada too. Maybe it’s the shared northern climate; cold weather and short, dark days tend to do that to a nation. 

Regardless, it caught our eye. We like to go against the grain, too. And we’re certainly far from normal.

How could we resist?

On Thursday, November 30th and Friday, December 1st, we’re attending Slush 2017 in Helsinki, Finland. It’s our first time at Slush (and our first time visiting Finland), and we couldn’t be more excited.

It’s a chance to meet with gaming and social companies from all over the world — not to mention our Finnish friends at Sulake (you know them as Habbo) and Supercell.

At Two Hat Security, our goal is to empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content. Slush’s goal is to empower innovative thinkers to create technology that changes the world.

So, it’s kind of a perfect match.

We’re loving the two themes of Slush 2017:

#1 – Technology will not shape our future — we do.

Technology is no different from any other tool. A hammer can be used to harm, but it can also be used to build a home. In the same way, online chat can be used to spread hate speech, but it can also be used to make connections that enrich and empower us. 

We have a chance to use technology as a force for change, not a weapon. This is our chance to embrace the fundamental values of fair play, sportsmanship, and digital citizenship and reshape gaming and social communities for the better.

The tide is turning in the industry. Companies realize that an old-fashioned, hands-off approach to in-game chat and community building just doesn’t work. That smart, purposeful moderation increases user retention. That a blend of artificial intelligence and human review can significantly reduce moderation costs. And that you can protect your brand and your community without sacrificing freedom of expression.

#2 – Entrepreneurs are problem-solvers.

Everyone says the internet is a mess.

So let’s clean it up.

Let’s use state-of-the-art technology and pair it with state-of-the-heart humanity to make digital communities better. Safer. Stronger. And hey, let’s be honest — more profitable. Better for business. (Profitable-er? That’s a word, right?)

Sharon and Mike will be hanging out at the Elisa booth, showing off our chat filter and moderation software, Community Sift.

You can even test it out. This is your chance to type all the naughty words you can think of… for business reasons, of course.

We’ll see you there, in cold, slushy Helsinki, at the end of November. As Canadians, we’re not bothered by the cold. (The cold never bothered us anyway.)

(Sorry not sorry.)

Let’s solve some problems together.

***

Two Hat empowers gaming and social platforms to foster healthy, engaged online communities. Want to see how we can protect your brand and your community from high-risk content? Get in touch today! 

Want more articles like this? Subscribe to our newsletter and never miss an update!



Five Moderation Workflows Proven to Decrease Workload

We get it. When you built your online game, virtual world, or forum for Moomin enthusiasts (you get the idea), you probably didn’t have content queues, workflow escalations, and account bans at the front of your mind. But now that you’ve launched and are acquiring users, it’s time to make sure you get the most out of your content moderation team.

It’s been proven that smart moderation can increase user retention, decrease workload, and protect your brand. And that means more money in your company pocket for cool things like new game features, faster bug fixes… and maybe even a slammin’ espresso machine for your hardworking devs.

Based on our experience at Two Hat, and with our clients across the industry — which include some of the biggest online games, virtual worlds, and social apps out there — we’ve prepared a list of five crucial moderation workflows.

Each workflow leverages AI-powered automation to enhance your mods’ efficiency. This gives them the time to do what humans do best — make tough decisions, engage with users, and ultimately build a healthy, thriving community.

Use Progressive Sanctions

At Two Hat, we are big believers in second chances. We all have bad days, and sometimes we bring those bad days online. According to research conducted by Riot Games, the majority of bad behavior doesn’t come from “trolls” — it comes from average users lashing out. In the same study, Riot Games found that players who were clearly informed why their account was suspended — and provided with chat logs as backup — were 70% less likely to misbehave again.

The truth is, users will always make mistakes and break your community guidelines, but the odds are that it’s a one-time thing and they probably won’t offend again.

We all know those parents who constantly threaten their children with repercussions — “If you don’t stop pulling the cat’s tail, I’ll take your Lego away!” but never follow through. Those are the kids who run screaming like banshees down the aisles at Whole Foods. They’ve never been given boundaries. And without boundaries and consequences, we can’t be expected to learn or to change our behavior.

That’s why we highly endorse progressive sanctions. Warnings and temporary muting followed by short-term suspensions that get progressively longer (1 hour, 6 hours, 12 hours, 24 hours, etc.) are effective techniques — as long as they’re paired with an explanation.

And you can be gentle at first — sometimes all a user needs is a reminder that someone is watching in order to correct their behavior. Sanctioning doesn’t necessarily mean removing a user from the community — warning and muting can be just as effective as a ban. You can always temporarily turn off chat for bad-tempered users while still allowing them to engage with your platform.

And if that doesn’t work, and users continue to post content that disturbs the community, that’s when progressive suspensions can be useful. As always, ban messages should be paired with clear communication:

“You wrote [X], and as per our Community Guidelines and Terms of Use, your account is suspended for [X amount of time]. Please review the Community Guidelines.”

You can make it fun, too.

“Having a bad day? You wrote [X], which is against the Community Guidelines. How about taking a short break (try watching that video of cats being scared by cucumbers, zoning out to Bob Ross painting happy little trees, or, if you’re so inclined, taking a lavender-scented bubble bath), then joining the community again? We’ll see you in [X amount of time].”

If your system is smart enough, you can set up accurate behavioral triggers to automatically warn, mute, and suspend accounts in real time.

The workflow will vary based on your community and the time limits you set, but it will look something like this:

Warn → Mute → 1 hr suspension → 6 hr suspension → 12 hr suspension → 24 hr suspension → 48 hr suspension → Permanent ban
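
To make this concrete, here’s a minimal sketch (in Python, purely illustrative) of how a progressive-sanction ladder could be wired up. The class name, the in-memory strike counter, and the message text are hypothetical placeholders for whatever your moderation backend actually provides.

```python
from dataclasses import dataclass, field

# Hypothetical sanction ladder mirroring the flow above:
# warn -> mute -> 1h -> 6h -> 12h -> 24h -> 48h -> permanent ban.
LADDER = [
    ("warn", None),
    ("mute", None),
    ("suspend", 1),
    ("suspend", 6),
    ("suspend", 12),
    ("suspend", 24),
    ("suspend", 48),
    ("ban", None),
]

@dataclass
class SanctionTracker:
    # Maps user id -> number of prior violations. In-memory for the sketch;
    # a real system would persist this and expire old strikes over time.
    strikes: dict = field(default_factory=dict)

    def next_sanction(self, user_id: str, offending_text: str) -> str:
        step = min(self.strikes.get(user_id, 0), len(LADDER) - 1)
        self.strikes[user_id] = step + 1
        action, hours = LADDER[step]
        # Always pair the sanction with a clear explanation, per the guidance above.
        reason = (f'You wrote "{offending_text}", which is against our '
                  'Community Guidelines.')
        if action == "warn":
            return f"WARNING: {reason} Please review the guidelines."
        if action == "mute":
            return f"MUTE: {reason} Chat is temporarily disabled."
        if action == "suspend":
            return f"SUSPENSION ({hours}h): {reason} See you in {hours} hours."
        return f"PERMANENT BAN: {reason}"

tracker = SanctionTracker()
print(tracker.next_sanction("player42", "some offending phrase"))    # warning first
print(tracker.next_sanction("player42", "another offending phrase")) # then a mute
```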

Use AI to Automate Image Approvals

Every community team knows that reviewing Every. Single. Uploaded. Image. Is a royal pain. 99% of images are mind-numbingly innocent (and probably contain cats, because the internet), while the other 1% are, well, shocking. After a while, everything blurs together, and the chances of actually missing that shocking 1% get higher and higher… until your eyes roll back into your head and you slump forward on your keyboard, brain matter leaking out of your ears.

OK, so maybe it’s not that bad.

But scanning image after image manually does take a crazy amount of time, and the emotional labor can be overwhelming and potentially devastating. Imagine scrolling through pic after pic of kittens, and then stumbling over full-frontal nudity. Or worse: unexpected violence and gore. Or the unthinkable: images of child or animal abuse.

All this can lead to stress, burnout, and even PTSD.

It’s in your best interests to automate some of the process. AI today is smarter than it’s ever been. The best algorithms can detect pornography with nearly 100% accuracy, not to mention images containing violence and gore, drugs, and even terrorism.

If you use AI to pre-moderate images, you can tune the dial based on your community’s resilience. Set the system to automatically approve any image with, say, a low risk of being pornography (or gore, drugs, terrorism, etc), while automatically rejecting images with a high risk of being pornography. Then, send anything in the ‘grey zone’ to a pre-moderation queue for your mods to review.

Or, if your user base is older, automatically approve images in the grey zone, and let your users report anything they think is inappropriate. You can also send those borderline images to an optional post-moderation queue for manual review.

This way, you take the responsibility off of both your moderators and your community to find the worst content.

What the flow looks like:

User submits image → AI returns risk probability →
  • If safe, automatically approve and post
  • If unsafe, automatically reject
  • If borderline, hold and send to a queue for manual pre-moderation (for younger communities), or publish and send to a queue for optional post-moderation (for older communities)
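
In code, “tuning the dial” comes down to two thresholds on the classifier’s risk score. The sketch below is a hypothetical routing function; it assumes your image classifier returns a 0-to-1 risk probability, and the threshold values are illustrative rather than a reference to any particular product’s API.

```python
from enum import Enum

class Route(Enum):
    APPROVE = "approve and post automatically"
    REJECT = "reject automatically"
    PRE_MODERATE = "hold and queue for manual pre-moderation"
    POST_MODERATE = "publish and queue for optional post-moderation"

def route_image(risk_score: float, younger_community: bool,
                approve_below: float = 0.2, reject_above: float = 0.8) -> Route:
    """Route an uploaded image based on the classifier's risk probability.

    risk_score is assumed to be a 0..1 probability that the image contains
    pornography, gore, etc., returned by whatever model you use. The
    thresholds are illustrative; tune them to your community's resilience.
    """
    if risk_score < approve_below:
        return Route.APPROVE
    if risk_score > reject_above:
        return Route.REJECT
    # Borderline "grey zone": stricter handling for younger communities.
    return Route.PRE_MODERATE if younger_community else Route.POST_MODERATE

print(route_image(0.05, younger_community=True))   # Route.APPROVE
print(route_image(0.50, younger_community=True))   # Route.PRE_MODERATE
print(route_image(0.50, younger_community=False))  # Route.POST_MODERATE
```

Widening or narrowing the grey zone (by adjusting the two thresholds) is the main lever; in this sketch, the younger/older flag only decides whether borderline images are held for pre-moderation or published and post-moderated.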

Suicide/Self-Harm Support

For many people, online communities are the safest spaces to share their deepest, darkest feelings. Depending on your community, you may or may not allow users to discuss their struggles with suicidal thoughts and self-injury openly.

Regardless, users who discuss suicide and self-harm are vulnerable and deserve extra attention. Sometimes, just knowing that someone else is listening can be enough.

We recommend that you provide at-risk users with phone or text support lines where they can get help. Ideally, this should be done through an automated messaging system to ensure that users get help in real time. However, you can also send manual messages to establish a dialogue with the user.

There are a few resources that we recommend, such as the National Suicide Prevention Lifeline in the US, Kids Help Phone in Canada, and the Samaritans in the UK.

If your community is outside of the US, Canada, or the UK, your local law enforcement agency should have phone numbers or websites that you can reference. In fact, it’s a good idea to build a relationship with local law enforcement; you may need to contact them if you ever need to escalate high-risk scenarios, like a user credibly threatening to harm themselves or others.

We don’t recommend punishing users who discuss their struggles by banning or suspending their accounts. Instead, a gentle warning message can go a long way:

“We noticed that you’ve posted an alarming message. We want you to know that we care, and we’re listening. If you’re feeling sad, considering suicide, or have harmed yourself, please know that there are people out there who can help. Please call [X] or text [X] to talk to a professional.”

When setting up a workflow, keep in mind that a user who mentions suicide or self-harm just once probably doesn’t need an automated message. Instead, tune your workflow to send a message after repeated references to suicide and self-harm. Your definition of “repeated” will vary based on your community, so it’s key that you monitor the workflow closely after setting it up. You will likely need to retune it over time.

Of course, users who encourage other users to kill themselves should receive a different kind of message. Look out for phrases like “kys” (kill yourself) and “go drink bleach,” among others. In these cases, use progressive sanctions to enforce your community guidelines and protect vulnerable users.

What the flow looks like:

User posts content about suicide/self-harm X number of times → System automatically displays a message suggesting they contact a support line → If the user continues to post content about suicide/self-harm, send the content to a queue for a moderator to manually review for potential escalation
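
Here’s a minimal sketch of that trigger logic, assuming a simple per-user counter of self-harm references and configurable thresholds (the “X” values in the flow above). The thresholds, message text, and action names are placeholders you’d tune and re-tune for your own community.

```python
from collections import defaultdict

SUPPORT_MESSAGE = (
    "We noticed that you've posted an alarming message. We want you to know "
    "that we care, and we're listening. If you're feeling sad, considering "
    "suicide, or have harmed yourself, please call [X] or text [X] to talk "
    "to a professional."
)

class SelfHarmWorkflow:
    def __init__(self, message_after: int = 2, escalate_after: int = 4):
        # "X amount of times" from the flow above; these values are
        # placeholders and should be re-tuned after monitoring real traffic.
        self.message_after = message_after
        self.escalate_after = escalate_after
        self.counts = defaultdict(int)  # user id -> self-harm references seen

    def on_self_harm_reference(self, user_id: str) -> list:
        """Return the actions to take after a flagged message from this user."""
        self.counts[user_id] += 1
        actions = []
        if self.counts[user_id] == self.message_after:
            actions.append("send_support_message: " + SUPPORT_MESSAGE)
        if self.counts[user_id] >= self.escalate_after:
            actions.append("queue_for_moderator_review")  # potential escalation
        return actions

workflow = SelfHarmWorkflow()
for _ in range(4):
    print(workflow.on_self_harm_reference("user_17"))
```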

Prepare for Breaking News & Trending Topics

We examined this underused moderation flow in a recent webinar. Never underestimate how deeply the latest news and emerging internet trends will affect your community. If you don’t have a process for dealing with conversations surrounding the next natural disaster, political scandal, or even another “covfefe,” you run the risk of alienating your community.

Consider Charlottesville. On August 11th, marchers from the far right, including white nationalists, neo-Nazis, and members of the KKK, gathered to protest the removal of Confederate monuments throughout the city. The rally soon turned violent, and on August 12th a car plowed into a group of counter-protesters, killing a young woman.

The incident immediately began trending on social media and in news outlets and remained a trending topic for several weeks afterward.

How did your online community react to this news? Was your moderation team prepared to handle conversations about neo-Nazis on your platform?

While not a traditional moderation workflow, we have come up with a “Breaking News & Trending Topics” protocol that can help you and your team stay on top of the latest trends — and ensure that your community remains expressive but civil, even in the face of difficult or controversial topics.

  1. Compile vocabulary: When an incident occurs, compile the relevant vocabulary immediately.
  2. Evaluate: Review how your community is using the vocabulary. If you wouldn’t normally allow users to discuss the KKK, would it be appropriate to allow it based on what’s happening in the world at that moment?
  3. Adjust: Make changes to your chat filter based on your evaluation above.
  4. Validate: Watch live chat to confirm that your assumptions were correct.
  5. Stats & trends: Compile reports about how often or how quickly users use certain language. This can help you prepare for the next incident.
  6. Re-evaluate vocabulary over time: Always review and reassess. Language changes quickly. For example, the terms Googles, Skypes, and Yahoos were used in place of racist and anti-Semitic slurs on Twitter in 2016. Now, in late 2017, they’ve disappeared — what have they been replaced with?

Stay diligent, and stay informed. Twitter is your team’s secret weapon. Have your team monitor trending hashtags and follow reputable news sites so you don’t miss anything your community may be talking about.
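
To support steps 1, 4, and 5 of that protocol, a simple watchlist that counts how often the new vocabulary shows up in live chat can help your team validate assumptions and pull stats for the next incident. The sketch below is a hypothetical helper written for illustration, not a feature of any particular moderation product.

```python
import re
from collections import Counter
from datetime import datetime

class TrendingVocabularyWatch:
    """Track how often breaking-news vocabulary appears in live chat."""

    def __init__(self, terms):
        # Step 1: compile the relevant vocabulary for the incident.
        self.patterns = {t: re.compile(rf"\b{re.escape(t)}\b", re.IGNORECASE)
                         for t in terms}
        self.hits = Counter()
        self.first_seen = {}

    def observe(self, chat_line: str) -> None:
        # Steps 4-5: validate assumptions against live chat and collect stats.
        for term, pattern in self.patterns.items():
            if pattern.search(chat_line):
                self.hits[term] += 1
                self.first_seen.setdefault(term, datetime.utcnow())

    def report(self):
        """Most-used terms first, for the post-incident review."""
        return self.hits.most_common()

# Hypothetical usage: the terms come from your own step-1 vocabulary list.
watch = TrendingVocabularyWatch(["example term", "another term"])
watch.observe("a chat line mentioning an example term")
print(watch.report())
```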

Provide Positive Feedback

Ever noticed that human beings are really good at punishing bad behavior but often forget to reward positive behavior? It’s a uniquely human trait.

If you’ve implemented the workflows above and are using smart moderation tools that blend automation with human review, your moderation team should have a lot more time on their hands. That means they can do what humans do best — engage with the community.

Positive moderation is a game changer. Not only does it help foster a healthier community, it can also have a huge impact on retention.

Some suggestions:

  • Set aside time every day for moderators to watch live chat to see what the community is talking about and how users are interacting.
  • Engage in purposeful community building — have moderators spend time online interacting in real time with real users.
  • Forget auto-sanctions: Try auto-rewards! Use AI to find key phrases indicating that a user is helping another user, and send them a message thanking them, or even inviting them to collect a reward (see the sketch after this list).
  • Give your users the option to nominate a helpful user, instead of just reporting bad behavior.
  • Create a queue that populates with users who have displayed consistent positive behavior (no recent sanctions, daily logins, no reports, etc) and reach out to them directly in private or public chat to thank them for their contributions.
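
As a rough sketch of the auto-reward and positive-queue ideas above: scan chat for helpful-sounding phrases, and collect consistently positive users for a manual thank-you. The phrase list, the user fields, and the reward hook are all hypothetical placeholders; a real system would use a trained classifier and your own platform’s data.

```python
HELPFUL_PHRASES = [
    "happy to help", "hope that helps", "here's how you do it",
    "you can find it in", "let me show you",
]

def looks_helpful(message: str) -> bool:
    """Very rough heuristic; a real system would use a trained classifier."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in HELPFUL_PHRASES)

def positive_user_queue(users, min_daily_logins: int = 7):
    """Collect users with consistent positive behavior for a thank-you queue.

    Each user dict is assumed to carry 'id', 'recent_sanctions', 'reports',
    and 'daily_logins' fields; adapt these to whatever your platform tracks.
    """
    return [u["id"] for u in users
            if u["recent_sanctions"] == 0
            and u["reports"] == 0
            and u["daily_logins"] >= min_daily_logins]

if looks_helpful("Hope that helps! You can find it in the settings menu."):
    print("send_thank_you_or_reward")  # hypothetical reward hook

print(positive_user_queue([
    {"id": "helper_1", "recent_sanctions": 0, "reports": 0, "daily_logins": 10},
    {"id": "grump_2", "recent_sanctions": 2, "reports": 1, "daily_logins": 3},
]))
```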

Any one of these workflows will go a long way towards building a healthy, engaged, loyal community on your platform. Try them all, or just start out with one. Your community (and your team) will thank you.

With our chat filter and moderation software Community Sift, Two Hat has helped companies like Supercell, Roblox, Habbo, Friendbase, and more implement similar workflows and foster healthy, thriving communities.

Interested in learning how we can help your gaming or social platform thrive? Get in touch today!

Want more articles like this? Subscribe to our newsletter and never miss an update!
