Managing Online Communities: How to Avoid Content Moderation Burnout

Community managers and online moderators know this all too well — moderating user-generated content can be draining.

You spend hours looking at text, usernames, and images. Content ranges from the mind-numbingly dull (false or accidental reports) to the emotionally devastating (discussions about suicide or abuse). Often, with a mountain of reports to sift through and a seemingly endless supply of content, it feels like you’ll never catch up.

The industry has long depended on user reports to moderate online content, without leveraging any safety layers in between. This approach has made for long, tedious workdays, and the inevitable emotional burnout that goes with it.

There has to be a better way.

Here are a few ideas you can leverage to keep your moderation team — and yourself — sane this year.

Something to keep in mind — we mostly talk about online games in this piece, but every technique is just as applicable to virtual worlds, social sharing apps, forums, and more.


1. Triage reports

Figuring out what is and isn’t a priority is one of the biggest challenges faced by moderation teams. Some companies receive up to 60,000 reports a day. In that sea of content, how can you possibly know what to review first? Inevitably, you’ll fall behind. And the longer you wait to review reported content and take action where needed, the less impact that action will have on future behavior.

But here’s the thing: Humans no longer need to do this kind of work. In the last few years, artificial intelligence has gotten a lot more… well, intelligent. You can now start with an algorithm that analyzes reports as they come in and moves them into different queues based on risk level and priority. The great part is that algorithms can be trained on your community and your moderators’ actions. It’s not a one-size-fits-all approach.

Reports come in three varieties: No action needed, questionable, and you-better-deal-with-this.

Computers are great at identifying the easy stuff — the good (no action needed) — and the obviously bad (you-better-deal-with-this). Computers haven’t yet figured out how to make complex, nuanced decisions, which is where humans come in (the questionable). Human moderators belong in this grey, middle ground where our natural understanding of context and human behavior gives us an upper hand when making tough decisions.

Remember when Facebook removed the harrowing yet iconic photo of 9-year-old Kim Phuc fleeing a napalm attack in Vietnam? We can all understand how an algorithm would misunderstand the image and automatically remove it — and why a human reviewing the same picture would instead understand and consider the historical importance of an otherwise disturbing image.

That’s why it’s crucial that you leverage a twofold approach — humans and machines, working together, each playing to its strengths.

Players will report other players for no reason; it’s just human nature. So you will always have reports that don’t require action or human review. Whether you punish or restrict players for false reports is up to you (stay tuned for an upcoming blog where we explore this topic in further detail), but the end result is always the same — close the report and move on to the next one.

That’s wasted time that you and your team will never recover, regardless of how quickly you review, close, and take action on reports.

Want that time back? Try this approach:

  • Let the machine identify and automatically close reports that don’t require moderator review.
  • Identify and automatically take action on reports that clearly require it. This can be done with a combination of automation and human review; many companies leverage auto-sanctions but still review high-risk content to ensure that no further action needs to be taken.
  • Identify content in the grey zone and escalate it to a queue for priority moderator review.
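To make that flow concrete, here’s a minimal sketch of the three-queue triage in Python. The risk_score function, the thresholds, and the queue names are all illustrative stand-ins for whatever classifier and tooling you actually use:

```python
from dataclasses import dataclass

# Thresholds are illustrative; in practice you would tune them against your
# own community's data and your moderators' past decisions.
AUTO_CLOSE_BELOW = 0.2   # "no action needed"
AUTO_ACTION_ABOVE = 0.9  # "you-better-deal-with-this"

@dataclass
class Report:
    report_id: str
    reported_text: str

def risk_score(report: Report) -> float:
    """Stand-in for a trained classifier or a vendor moderation API.

    A real system returns a calibrated probability; this toy heuristic just
    flags a couple of obviously high-risk phrases.
    """
    high_risk_phrases = ("kill yourself", "kys")
    text = report.reported_text.lower()
    return 0.95 if any(phrase in text for phrase in high_risk_phrases) else 0.1

def triage(report: Report) -> str:
    """Route an incoming report to one of three queues."""
    score = risk_score(report)
    if score < AUTO_CLOSE_BELOW:
        return "auto_close"        # no moderator time spent
    if score > AUTO_ACTION_ABOVE:
        return "auto_action"       # sanction automatically, spot-check later
    return "priority_review"       # grey zone: humans make the call
```

The hard part, of course, is the classifier behind risk_score, which is exactly the piece that takes time to build and tune.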

Of course, it doesn’t matter how smart AI is — building, testing, and tuning an algorithm takes precious time and resources — both of which are usually in short supply. That’s why we advocate for a “buy instead of build” approach (see Thinking of Building Your Own Chat Filter? 4 Reasons You’re Wasting Your Time! for more on this). Shop around to find the option that’s the best fit for your company.


2. Give moderators a break

Ever repeated a word so many times it lost all meaning? Spend too much time reviewing reports, and that’s exactly what can happen.

Even the most experienced and diligent moderators will eventually fall prey to one of two things:

  1. Everything starts to look bad, or
  2. Nothing looks bad

The more time you spend scanning chat or images, the more likely you are to see something that isn’t there or miss something important. The end result? Poor moderation and an unhappy community. To keep your team fresh, try a few of these tactics:

  • Break up the day with live moderation and community engagement (see below for more about these proactive techniques).
  • Have your team take turns working on reports so they are constantly being reviewed (if you want players to learn from their mistakes and change their behavior, it’s critical that you apply sanctions as close to the time of the offense as possible).
  • Ensure that moderators switch tasks every two hours to stay fresh, focused, and diligent.

3. Be proactive

Ultimately, what’s the best way to avoid having to review tens of thousands of reports? It’s simple: Don’t give players a reason to create reports in the first place. We don’t mean turn off the reporting function; it’s a critical piece in the moderation puzzle, and one of the best ways to empower your users. However, you can use a variety of proactive approaches that will curb the need to report.

Here are a few:

Use a chat/image filter

Until recently, content filters have had a bad rap in the industry. Many companies scoffed at the idea of blocking certain words or images because “the community can handle itself,” or they saw filters as promoting censorship.

Today, the conversation has changed — and we’re finally talking seriously about online abuse, harassment, and the very real damage certain words and phrases can have.

Not only that, companies have started to realize that unmoderated and unfiltered communities that turn toxic aren’t just unpleasant — they actually lose money. (Check out our case study about the connection between proactive moderation and user retention.)

Like we said earlier, computers are great at finding the best and worst content, as defined by your community guidelines. So why not use a chat filter to deal with the best and worst content in real time?

Think about it — how many fewer reports would be created if you simply didn’t allow players to use the words or phrases you’re taking action on already? If there’s nothing to report… there are no reports. Of course, you’ll never get rid of reports completely, nor should you. Players should always be encouraged to report things that make them uncomfortable.

But if you can automatically prevent the worst of the worst from ever being posted, you ensure a much healthier community for your users — and spare yourself and your hard-working team the headache of reviewing thousands of threatening chat lines and images in the first place.

You can also use your moderation software (and remember, we always recommend that you buy instead of build) to automatically sanction users who post abusive content (whether it’s seen by the community or not), as defined by your guidelines.

Speaking of automatic sanctions…

Warn players

We wrote about progressive sanctions in Five Moderation Workflows Proven to Decrease Workload. They’re a key component of any moderation strategy and will do wonders for decreasing reports. 

From the same blog:

“Riot Games found that players who were clearly informed why their account was suspended — and provided with chat logs as backup — were 70% less likely to misbehave again.”

Consider this: What if you warned players — in real time — that if their behavior continued, they would be suspended? How many players would rethink their language? How many would keep posting abusive content, despite the warning?

The social networking site Kidzworld found that the Community Sift user reputation feature — which restricts and expands chat permissions based on player behavior — has encouraged positive behavior in their users.

Set aside time for live moderation

Even the smartest, most finely-tuned algorithm cannot compare to live moderation.

The more time spent following live chat and experiencing how the community interacts with each other in real time, the better and more effective your moderation practices will become.

Confused about a specific word or phrase you’re seeing in user reports? Monitor chat in real time for context. Concerned that a specific player may be targeting new players for abuse but haven’t been able to collect enough proof to take action? Take an hour or so to watch their behavior as it happens.

Live moderation is also an effective way to review the triggers you’ve set up to automatically close, sanction, or escalate content for review.

Engage with the community

What’s more fun than hanging out with your community — aside from playing the game, of course? ; )

There’s a reason you got into community management and moderation. You care about people. You’re passionate about your community and your product. You and your team of moderators are at your best when you’re interacting with players, experiencing their joy (and their frustration), and generally understanding what makes them tick.

So, in addition to live moderation, it’s critical that you and your team interact with the community and actively inspire positive, healthy communication. Sanctions work, but positive reinforcement works even better.

When you spend time with the community, you demonstrate that your moderation team is connected with players, engaged in the game, and above all, human.


Final thoughts

You’ll never eliminate all user reports. They’re a fundamental element of any moderation strategy and a key method of earning player trust.

But there’s no reason you should be forced to wade through thousands or even hundreds of reports a day.

There are techniques you and your team can leverage to mitigate the impact, including giving moderators a variety of tasks to prevent burnout and keep their minds sharp, using a proactive chat/image filter, and engaging with the community on a regular basis.

Want more articles like this? Subscribe to our mailing list and never miss an update!




Two Hat empowers gaming and social platforms to foster healthy, engaged online communities.

Uniting cutting-edge AI with expert human review, our user-generated content filter and automated moderation software Community Sift has helped some of the biggest names in the industry protect their communities and brand, inspire user engagement, and decrease moderation workload.

Upcoming Webinar: Yes, Your Online Game Needs a Chat Filter

Are you unconvinced that you need a chat filter in your online game, virtual world, or social app? Undecided if purchasing moderation software should be on your product roadmap in 2018? Unsure if you should build it yourself?

You’re not alone. Many in the gaming and social industries are still uncertain if chat moderation is a necessity.

On Wednesday, January 31st at 10:00 am PST, Two Hat Community Trust & Safety Director Carlos Figueiredo shares data-driven evidence proving that you must make chat filtering and automated moderation a business priority in 2018.

In this quick 30-minute session, you’ll learn:

  • Why proactive moderation is critical to building a thriving, profitable game
  • How chat filtering combined with automation can double user retention
  • How to convince stakeholders that moderation software is the best investment they’ll make all year


Two Hat Headed to Slush 2017!

“Nothing normal ever changed a damn thing.” Slush, 2017

Now that’s a slogan.

It resonates deeply with us here in Canada. While sisu may be a uniquely Finnish trait, we’re convinced we have some of that grit and determination in Canada too. Maybe it’s the shared northern climate; cold weather and short, dark days tend to do that to a nation. 

Regardless, it caught our eye. We like to go against the grain, too. And we’re certainly far from normal.

How could we resist?

On Thursday, November 30th and Friday, December 1st, we’re attending Slush 2017 in Helsinki, Finland. It’s our first time at Slush (and our first time visiting Finland), and we couldn’t be more excited.

It’s a chance to meet with gaming and social companies from all over the world — not to mention our Finnish friends at Sulake (you know them as Habbo) and Supercell.

At Two Hat Security, our goal is to empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content. Slush’s goal is to empower innovative thinkers to create technology that changes the world.

So, it’s kind of a perfect match.

We’re loving the two themes of Slush 2017:

#1 – Technology will not shape our future — we do.

Technology is no different from any other tool. A hammer can be used to harm, but it can also be used to build a home. In the same way, online chat can be used to spread hate speech, but it can also be used to make connections that enrich and empower us. 

We have a chance to use technology as a force for change, not a weapon. This is our chance to embrace the fundamental values of fair play, sportsmanship, and digital citizenship and reshape gaming and social communities for the better.

The tide is turning in the industry. Companies realize that an old-fashioned, hands-off approach to in-game chat and community building just doesn’t work. That smart, purposeful moderation increases user retention. That a blend of artificial intelligence and human review can significantly reduce moderation costs. And that you can protect your brand and your community without sacrificing freedom of expression.

#2 – Entrepreneurs are problem-solvers.

Everyone says the internet is a mess.

So let’s clean it up.

Let’s use state-of-the-art technology and pair it with state-of-the-heart humanity to make digital communities better. Safer. Stronger. And hey, let’s be honest — more profitable. Better for business. (Profitable-er? That’s a word, right?)

Sharon and Mike will be hanging out at the Elisa booth, showing off our chat filter and moderation software tool Community Sift.

You can even test it out. This is your chance to type all the naughty words you can think of… for business reasons, of course.

We’ll see you there, in cold, slushy Helsinki, at the end of November. As Canadians, we’re not bothered by the cold. (The cold never bothered us anyway.)

(Sorry not sorry.)

Let’s solve some problems together.

***

Two Hat empowers gaming and social platforms to foster healthy, engaged online communities. Want to see how we can protect your brand and your community from high-risk content? Get in touch today! 

Want more articles like this? Subscribe to our newsletter and never miss an update!



Five Moderation Workflows Proven to Decrease Workload

We get it. When you built your online game, virtual world, or forum for Moomin-enthusiasts (you get the idea), you probably didn’t have content queues, workflow escalations, and account bans at the front of your mind. But now that you’ve launched and are acquiring users, it’s time to get the most out of your content moderation team.

It’s been proven that smart moderation can increase user retention, decrease workload, and protect your brand. And that means more money in your company pocket for cool things like new game features, faster bug fixes… and maybe even a slammin’ espresso machine for your hard-working devs.

Based on our experience at Two Hat, and with our clients across the industry — which include some of the biggest online games, virtual worlds, and social apps out there — we’ve prepared a list of five crucial moderation workflows.

Each workflow leverages AI-powered automation to enhance your mods’ efficiency. This gives them the time to do what humans do best — make tough decisions, engage with users, and ultimately build a healthy, thriving community.

Use Progressive Sanctions

At Two Hat, we are big believers in second chances. We all have bad days, and sometimes we bring those bad days online. According to research conducted by Riot Games, the majority of bad behavior doesn’t come from “trolls” — it comes from average users lashing out. In the same study, Riot Games found that players who were clearly informed why their account was suspended — and provided with chat logs as backup — were 70% less likely to misbehave again.

The truth is, users will always make mistakes and break your community guidelines, but odds are it’s a one-time thing and they won’t offend again.

We all know those parents who constantly threaten their children with repercussions (“If you don’t stop pulling the cat’s tail, I’ll take your Lego away!”) but never follow through. Those are the kids who run screaming like banshees down the aisles at Whole Foods. They’ve never been given boundaries. And without boundaries and consequences, we can’t be expected to learn or to change our behavior.

That’s why we highly endorse progressive sanctions. Warnings and temporary muting followed by short-term suspensions that get progressively longer (1 hour, 6 hours, 12 hours, 24 hours, etc) are effective techniques — as long as they’re paired with an explanation.

And you can be gentle at first — sometimes all a user needs is a reminder that someone is watching in order to correct their behavior. Sanctioning doesn’t necessarily mean removing a user from the community — warning and muting can be just as effective as a ban. You can always temporarily turn off chat for bad-tempered users while still allowing them to engage with your platform.

And if that doesn’t work, and users continue to post content that disturbs the community, that’s when progressive suspensions can be useful. As always, ban messages should be paired with clear communication:

“You wrote [X], and as per our Community Guidelines and Terms of Use, your account is suspended for [X amount of time]. Please review the Community Guidelines.”

You can make it fun, too.

“Having a bad day? You wrote [X], which is against the Community Guidelines. How about taking a short break (try watching that video of cats being scared by cucumbers, zoning out to Bob Ross painting happy little trees, or, if you’re so inclined, taking a lavender-scented bubble bath), then joining the community again? We’ll see you in [X amount of time].”

If your system is smart enough, you can set up accurate behavioral triggers to automatically warn, mute, and suspend accounts in real time.

The workflow will vary based on your community and the time limits you set, but it will look something like this:

Warn → Mute → 1 hr suspension → 6 hr suspension → 12 hr suspension → 24 hr suspension → 48 hr suspension → Permanent ban
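A minimal sketch of that ladder, assuming you track a per-user count of confirmed offenses; the mute duration and step order here are illustrative and easy to retune:

```python
from datetime import timedelta

# Mirrors the ladder above; steps and durations are illustrative.
SANCTION_LADDER = [
    ("warn", None),
    ("mute", timedelta(minutes=15)),   # assumed mute length; tune to taste
    ("suspend", timedelta(hours=1)),
    ("suspend", timedelta(hours=6)),
    ("suspend", timedelta(hours=12)),
    ("suspend", timedelta(hours=24)),
    ("suspend", timedelta(hours=48)),
    ("permanent_ban", None),
]

def next_sanction(confirmed_offenses: int):
    """Return (action, duration) for a user's Nth confirmed offense (1-indexed)."""
    index = min(confirmed_offenses - 1, len(SANCTION_LADDER) - 1)
    return SANCTION_LADDER[index]

# Example: a third confirmed offense earns a 1-hour suspension.
action, duration = next_sanction(3)
```

Whatever the implementation, pair each step with the clear, specific explanation described above.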

Use AI to Automate Image Approvals

Every community team knows that reviewing Every. Single. Uploaded. Image. Is a royal pain. 99% of images are mind-numbingly innocent (and probably contain cats, because the internet), while the other 1% are, well, shocking. After a while, everything blurs together, and the chances of actually missing that shocking 1% get higher and higher… until your eyes roll back into your head and you slump forward on your keyboard, brain matter leaking out of your ears.

OK, so maybe it’s not that bad.

But scanning image after image manually does take a crazy amount of time, and the emotional labor can be overwhelming and potentially devastating. Imagine scrolling through pic after pic of kittens, and then stumbling over full-frontal nudity. Or worse: unexpected violence and gore. Or the unthinkable: images of child or animal abuse.

All this can lead to stress, burnout, and even PTSD.

It’s in your best interests to automate some of the process. AI today is smarter than it’s ever been. The best algorithms can detect pornography with nearly 100% accuracy, not to mention images containing violence and gore, drugs, and even terrorism.

If you use AI to pre-moderate images, you can tune the dial based on your community’s resilience. Set the system to automatically approve any image with, say, a low risk of being pornography (or gore, drugs, terrorism, etc), while automatically rejecting images with a high risk of being pornography. Then, send anything in the ‘grey zone’ to a pre-moderation queue for your mods to review.

Or, if your user base is older, automatically approve images in the grey zone, and let your users report anything they think is inappropriate. You can also send those borderline images to an optional post-moderation queue for manual review.

This way, you take the responsibility for finding the worst content off both your moderators and your community.

What the flow looks like:

User submits image → AI returns a risk probability
  • If safe, automatically approve and post
  • If unsafe, automatically reject
  • If borderline, hold and send to a queue for manual pre-moderation (for younger communities), or publish and send to a queue for optional post-moderation (for older communities)
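Here’s a minimal sketch of that routing logic, assuming a hypothetical image classifier that returns a 0–1 risk probability; the thresholds are placeholders you would tune to your community’s resilience:

```python
def route_image(risk: float, community_is_young: bool,
                approve_below: float = 0.05, reject_above: float = 0.95) -> str:
    """Route an uploaded image based on the classifier's risk probability.

    `risk` is the probability (0-1) that the image is unsafe (pornography,
    gore, etc.). Thresholds are illustrative, not recommendations.
    """
    if risk < approve_below:
        return "auto_approve"      # publish immediately
    if risk > reject_above:
        return "auto_reject"       # never published
    # Borderline: younger communities hold for pre-moderation,
    # older communities publish and queue for optional post-moderation.
    return "pre_moderation_queue" if community_is_young else "post_moderation_queue"
```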

Suicide/Self-Harm Support

For many people, online communities are the safest spaces to share their deepest, darkest feelings. Depending on your community, you may or may not allow users to discuss their struggles with suicidal thoughts and self-injury openly.

Regardless, users who discuss suicide and self-harm are vulnerable and deserve extra attention. Sometimes, just knowing that someone else is listening can be enough.

We recommend that you provide at-risk users with phone or text support lines where they can get help. Ideally, this should be done through an automated messaging system to ensure that users get help in real time. However, you can also send manual messages to establish a dialogue with the user.

There are a few resources that we recommend worldwide; in the US, Canada, and the UK, national suicide prevention and crisis lines are a good starting point.

If your community is outside of the US, Canada, or the UK, your local law enforcement agency should have phone numbers or websites that you can reference. In fact, it’s a good idea to build a relationship with local law enforcement; you may need to contact them if you ever need to escalate high-risk scenarios, like a user credibly threatening to harm themselves or others.

We don’t recommend punishing users who discuss their struggles by banning or suspending their accounts. Instead, a gentle warning message can go a long way:

“We noticed that you’ve posted an alarming message. We want you to know that we care, and we’re listening. If you’re feeling sad, considering suicide, or have harmed yourself, please know that there are people out there who can help. Please call [X] or text [X] to talk to a professional.”

When setting up a workflow, keep in mind that a user who mentions suicide or self-harm just once probably doesn’t need an automated message. Instead, tune your workflow to send a message after repeated references to suicide and self-harm. Your definition of “repeated” will vary based on your community, so it’s key that you monitor the workflow closely after setting it up. You will likely need to retune it over time.

Of course, users who encourage other users to kill themselves should receive a different kind of message. Look out for phrases like “kys” (kill yourself) and “go drink bleach,” among others. In these cases, use progressive sanctions to enforce your community guidelines and protect vulnerable users.

What the flow looks like:

User posts content about suicide/self-harm X number of times → System automatically displays a message to the user suggesting they contact a support line → If the user continues to post content about suicide/self-harm, send the content to a queue for a moderator to manually review for potential escalation
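As a rough illustration, here’s what that workflow might look like in code; the mention thresholds, queue names, and message names are assumptions you would tune for your own community:

```python
from collections import defaultdict

SUPPORT_MESSAGE_AFTER = 3   # send the support message after this many mentions
ESCALATE_AFTER = 6          # then hand continued mentions to a moderator

self_harm_mentions = defaultdict(int)  # user_id -> running count of flagged posts

def handle_self_harm_mention(user_id: str) -> str:
    """Decide the next step after a post is flagged as discussing suicide/self-harm."""
    self_harm_mentions[user_id] += 1
    count = self_harm_mentions[user_id]
    if count >= ESCALATE_AFTER:
        return "escalate_to_moderator"   # human review for potential escalation
    if count >= SUPPORT_MESSAGE_AFTER:
        return "send_support_message"    # automated, real-time help resources
    return "no_action"                   # a single mention probably needs nothing
```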

Prepare for Breaking News & Trending Topics

We examined this underused moderation flow in a recent webinar. Never underestimate how deeply the latest news and emerging internet trends will affect your community. If you don’t have a process for dealing with conversations surrounding the next natural disaster, political scandal, or even another “covfefe,” you run the risk of alienating your community.

Consider Charlottesville. On August 11th, marchers from the far right, including white nationalists, neo-Nazis, and members of the KKK, gathered to protest the removal of Confederate monuments throughout the city. The rally soon turned violent, and on August 12th a car plowed into a group of counter-protesters, killing a young woman.

The incident immediately began trending on social media and in news outlets and remained a trending topic for several weeks afterward.

How did your online community react to this news? Was your moderation team prepared to handle conversations about neo-Nazis on your platform?

While not a traditional moderation workflow, we have come up with a “Breaking News & Trending Topics” protocol that can help you and your team stay on top of the latest trends — and ensure that your community remains expressive but civil, even in the face of difficult or controversial topics.

  1. Compile vocabulary: When an incident occurs, compile the relevant vocabulary immediately.
  2. Evaluate: Review how your community is using the vocabulary. If you wouldn’t normally allow users to discuss the KKK, would it be appropriate to allow it based on what’s happening in the world at that moment?
  3. Adjust: Make changes to your chat filter based on your evaluation above.
  4. Validate: Watch live chat to confirm that your assumptions were correct.
  5. Stats & trends: Compile reports about how often or how quickly users use certain language. This can help you prepare for the next incident.
  6. Re-evaluate vocabulary over time: Always review and reassess. Language changes quickly. For example, the terms Googles, Skypes, and Yahoos were used in place of anti-Semitic slurs on Twitter in 2016. Now, in late 2017, they’ve disappeared — what have they been replaced with?
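To support step 5 above, even a simple daily count of watched terms can tell you how quickly a topic is spreading through your community. Here’s a minimal sketch; the term list and chat data are made up for illustration:

```python
from collections import Counter
from datetime import date

def count_term_mentions(chat_lines, watch_terms):
    """Count daily mentions of watched terms in a stream of (date, text) chat lines."""
    daily_counts = {term: Counter() for term in watch_terms}
    for day, text in chat_lines:
        lowered = text.lower()
        for term in watch_terms:
            if term in lowered:
                daily_counts[term][day] += 1
    return daily_counts

# Example with made-up chat lines:
lines = [
    (date(2017, 8, 12), "Did you see the news about Charlottesville?"),
    (date(2017, 8, 13), "Charlottesville is all over my feed today"),
]
counts = count_term_mentions(lines, watch_terms=["charlottesville"])
```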

Stay diligent, and stay informed. Twitter is your team’s secret weapon. Have your team monitor trending hashtags and follow reputable news sites so you don’t miss anything your community may be talking about.

Provide Positive Feedback

Ever noticed that human beings are really good at punishing bad behavior but often forget to reward positive behavior? It’s a uniquely human trait.

If you’ve implemented the workflows above and are using smart moderation tools that blend automation with human review, your moderation team should have a lot more time on their hands. That means they can do what humans do best — engage with the community.

Positive moderation is a game changer. Not only does it help foster a healthier community, it can also have a huge impact on retention.

Some suggestions:

  • Set aside time every day for moderators to watch live chat to see what the community is talking about and how users are interacting.
  • Engage in purposeful community building — have moderators spend time online interacting in real time with real users.
  • Forget auto-sanctions: Try auto-rewards! Use AI to find key phrases indicating that a user is helping another user, and send them a message thanking them, or even inviting them to collect a reward.
  • Give your users the option to nominate a helpful user, instead of just reporting bad behavior.
  • Create a queue that populates with users who have displayed consistent positive behavior (no recent sanctions, daily logins, no reports, etc) and reach out to them directly in private or public chat to thank them for their contributions.
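To make the last two ideas concrete, here’s a minimal sketch of a positive-behavior queue; the criteria and field names are illustrative, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    user_id: str
    days_since_last_sanction: int
    consecutive_login_days: int
    reports_received_last_30_days: int

def qualifies_for_thanks(stats: UserStats) -> bool:
    """Illustrative criteria: no recent sanctions, steady logins, no reports."""
    return (stats.days_since_last_sanction >= 30
            and stats.consecutive_login_days >= 7
            and stats.reports_received_last_30_days == 0)

def build_positive_queue(all_stats) -> list:
    """Return the users a moderator should reach out to and thank."""
    return [s.user_id for s in all_stats if qualifies_for_thanks(s)]
```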

Any one of these workflows will go a long way towards building a healthy, engaged, loyal community on your platform. Try them all, or just start out with one. Your community (and your team) will thank you.

With our chat filter and moderation software Community Sift, Two Hat has helped companies like Supercell, Roblox, Habbo, Friendbase, and more implement similar workflows and foster healthy, thriving communities.

Interested in learning how we can help your gaming or social platform thrive? Get in touch today!

Want more articles like this? Subscribe to our newsletter and never miss an update!



The Best Little Community on the Internet

 

Don’t believe the hype. Not every online community is crawling with harassment, abuse, and hate speech.

There are plenty of good places online. You just have to know where to look.

This is a story about one of those good places.

Art-inspired writing

Storybird is a global community of writers, readers, and artists of all ages. Users select from a variety of original artwork, then create picture books (short stories, heavy on visuals), longform stories (narrative-driven books), and poetry (you get the idea). Since launching in 2009, over 7 million Storybird members have created over 25 million stories. Not only that, 600,000 educators use Storybird in over 200,000 schools to help facilitate creative writing.

The stories are original and often strikingly effective, the poetry in particular. The artwork is gorgeous. Lovingly curated, the images are as beautiful as anything you’ll see online.

A selection of artwork on Storybird. Users select an image and build their story from there.

Users can “heart” content, share it on social media, and leave comments. Positivity is encouraged, especially since most users are teens and pre-teens. Cruelty, profanity, hate speech, and bullying are not tolerated. As the first layer of defense against dangerous content, Storybird uses a chat filter and automated moderation software.

Behind the curtain

But Storybird doesn’t just rely on automation to manage disruptive users and bad behavior. Their secret weapon? Moderator, mom, and community builder extraordinaire Suz Holden (“skybluepurple” in the Storybird community).

A no-nonsense dynamo with purple hair and six (!) kids at home, Suz is a veteran of the moderation scene. She once worked for AOL as a volunteer moderator, responsible for the “working moms vs stay at home moms” board — aka, the toughest message board on AOL.

Eventually, AOL switched to an outsourced moderation service called LiveWorld. They hired all of the volunteers who had previously moderated for free, including Suz. In 2012, she joined Storybird as a moderator.

A different kind of moderation

Suz is heavily involved in the Storybird community. While some companies keep their moderators and moderation practices largely anonymous and often use stealth bans (in which users’ comments are blocked without notification), Storybird makes moderation a key ingredient in community interactions.

Suz and “Storyspotter” (Storybird-ese for volunteer moderator) Figment68 are active members of the community, posting comments, leaving hearts, and generally encouraging users to keep it positive.

“We are co-moms,” Suz says. “[We] offer support, advice, and all of the other ‘cool’ mom type comments.”

“We have the coolest community,” says Suz. It’s not hard to see why.

When users break community guidelines, they know — there are no stealth bans here.

Empowered by Storybird’s executive team to engage closely with the community, Suz will reach out to users directly to let them know when they’ve broken the rules.

“We don’t allow ugly,” she says. “We just don’t. We’re big on encouragement. We’re real big on positive reinforcement.”

Most of the time, when given an explanation for why their book or comment was removed, users will change their behavior. There is always the chance for redemption.

Suz explains: “We’ll say, ‘Okay, look. You think about this, figure out what you did wrong. And holler back at me in a week and let me know what you’re going to do to fix this. And then we’ll let you back. We’re gonna watch you like a hawk — but we’re gonna let you back.’ And you know what, those become some of my best kids.”

And some of those kids go on to change the community in ways no one could expect.

Time for a story.

A not-so pointless task

Once upon a time, there was a user named cookie54lover. A self-proclaimed misfit, cookie54lover was, according to Suz, “one of our earliest, most… um… interesting (read: ornery!) Storybirders. She enjoyed making waves, and she would tell you that. Cookie and I went nose to nose a lot, for a while.”

Despite this, cookie54lover was a smart kid, and she genuinely loved Storybird. She was a good writer; she wrote popular books. But she was constantly in trouble due to, as Suz calls it, her “sassy” comments.

At one point, cookie54lover published a book that she called A pointless task!

“Under this book,” she wrote, “I was thinking about the most comments ever on a book on Storybird. This is a totally pointless task, but still it will be fun to see what you guys come out with. 🙂”

Simply put — she wanted to see how many comments she could get on one book.

Cover of the original A pointless task! Since then, 57 books have been written, some with over 50,000 comments.

“We had been talking about chat rooms or message boards where the kids could have general conversations,” says Suz. Of course, they could always comment on books, but it was encouraged that comments be related to the book. The idea was put on hold as other priorities took precedence, and in truth, Storybird “[was] actually… a reading and writing website, not a chatting website.”

But the kids started commenting on A pointless task! And commenting. And sharing. And as the community rallied together, the book quickly amassed 15,000 comments, then 20,000.

And the number kept going up.

Let’s make a deal

The Storybird team watched as A pointless task! (soon abbreviated to APT) accumulated more and more comments — and more interest from the kids. Finally, when there were so many comments that the pages took three and a half minutes to load, Suz had an idea. She left a comment for cookie54lover.

“Let’s you and me make a deal, hon,” she wrote. “How about every 10,000 comments we just make a new book?”

Cookie said yes. She created A pointless task 2. And kids being kids, the race was on — how quickly could the community reach 10,000 comments? It didn’t take long, as the community rallied together again.

What kind of comments did they leave? “You can talk about anything,” Suz says, “but you have to keep it ‘Storybird’ safe, meaning appropriate for even our younger members. There is endless talk about singers and YouTubers and all the normal kid ridiculousness. It’s an anything-goes kind of place.”

The kids also used APT to chat about heavier topics.

“We’re real big on ‘We’re here for you, we listen to you.’ Which means a lot to kids,” says Suz. “Our kids are writers, which means they’re often on the outside looking in. Misfits, outcasts, rebels, and upstarts. Many post that they don’t have strong offline connections. It’s almost like a peer counseling session, especially APT.”

Eventually, cookie54lover created a new APT every two weeks. And the community rallied, and the race to hit 10,000 comments continued.

Inevitably, as kids are wont to do, she grew up. She went to high school; she joined the Drama Club.

As is the way with all things, it was time for cookie54lover to move on.

cookie54lover created the first A pointless task! She still visits the community sometimes.

Suz met with Storybird co-founders Mark Ury and Kaye Puhlmann and proposed a solution: She would create all APTs going forward, as long as it was okay with cookie. Cookie was, as Suz says, “more than happy to hand it off to me. She still checks in from time to time.”

Suz is currently at work on APT 58. “If you think about that, each book has a minimum of 10,000 comments. And some have 50,000 comments. Take a moment to wrap your brain around that.”

She’s especially proud of APT 50. As a community milestone and a genuinely touching tribute to kids by a company that clearly cares, it’s worth a read.

APT 50 — a labor of love celebrating the community.
Storybird co-founders Mark Ury, Kaye Puhlmann, and Adam Endicott all contributed to APT 50, as well as Suz’s moderator “co-mom” Figment68.

Digital citizens of the future

When asked how she shapes Storybird and the APT community, Suz is frank about her process — or lack thereof.

“I do what I think is right in the moment. More often than not, if something crosses a line, I’ll delete it. I might reach out to a kid on a private book and let them know that I pulled it down. Other times, if things are getting out of hand, I’ll just say ‘Knock it off.’ And I’ll tell them that — ‘I’m putting my mean mom voice on, y’all need to chill.’ And they do. Because they know that I will shut them down if they don’t. It’s how I run my house, it’s how I run my job.”

In fact, the kids take as much responsibility for the community’s health as Suz and her mods. “More often than not,” she says, “the kids monitor themselves really well. Because we’ve created this little circle of kids who want to be good kids. They want to be community leaders. Whatever the highest level is, these kids aspire to that.”

For many, Storybird — and APT in particular — is home.

Storybirders don’t just keep their own community positive and welcoming. They’re also inspired to bring that healthy dynamic to other communities.

“We send our little darlings out into the world, and they’ll tell people ‘This isn’t how we do it,’” says Suz, laughing. “As [co-founder] Kaye said the other day, ‘We’re creating good internet citizens.’”

Our poem inspired by the Storybird community : )

***

Visit the Storybird site and start creating.

Storybird uses Two Hat Security’s chat filter and automated moderation software Community Sift as a first layer of defense against high-risk content and behavior.

Want to learn how Two Hat Security can help protect your community? Get in touch today!

Want more articles like this? Subscribe to our newsletter and never miss an update!



Top Three Reasons You Should Meet us at Gamescom

Heading to Gamescom or devcom this year? It’s a huge conference, and you have endless sessions, speakers, exhibits, and meetings to choose from. Your time is precious — and limited. How do you decide where you go, and who you talk to?

Here are three reasons we think you should meet with us while you’re in Cologne.

You need practical community-building tips.

Got trolls?

Our CEO & founder Chris Priebe is giving an awesome talk at devcom. He’ll be talking about the connection between trolls, community toxicity, and increased user churn. The struggle is real, and we’ve got the numbers to prove it.

Hope to build a thriving, engaged community in your game? Want to increase retention? Need to reduce your moderation workload so you can focus on fun stuff like shipping new features?

Chris has been in the online safety and security space for 20 years now and learned a few lessons along the way. He’ll be sharing practical, time-and-industry-proven moderation strategies that actually work.

Check out Chris’s talk on Monday, August 21st, from 14:30 – 15:00.

You don’t want to get left behind in a changing industry.

This is the year the industry gets serious about user-generated content (UGC) moderation.

With recent Facebook Live incidents (remember this and this?), new hate speech legislation in Germany, and the latest online harassment numbers from the Pew Research Center, online behavior is a hot topic.

We’ve been studying online behavior for years now. We even sat down with Kimberly Voll and Ivan Davies of Riot Games recently to talk about the challenges facing the industry in 2017.

Oh, and we have a kinda crazy theory about how the internet ended up this way. All we’ll say is that it involves Maslow’s hierarchy of needs…

So, it’s encouraging to see that more and more companies are acknowledging the importance of smart, thoughtful, and intentional content moderation.

If you’re working on a game/social network/app in 2017, you have to consider how you’ll handle UGC (whether it’s chat, usernames, or images). Luckily, you don’t have to figure it out all by yourself.

Because…

You deserve success.

And we love this stuff.

Everyone says it, but it’s true: We really, really care about your success. And smart moderation is key to any social product’s success in a crowded and highly competitive market.

Increasing user retention, reducing moderation workload, keeping communities healthy — these are big deals to us. We’ve been fortunate enough to work with hugely successful companies like Roblox, Supercell, Kabam, and more, and we would love to share the lessons we’ve learned and best practices with you.

We’re sending three of our very best Two Hatters/Community Sifters to Germany. Sharon has a wicked sense of humor (and the biggest heart around), Mike has an encyclopedic knowledge of Bruce Springsteen lore, and Chris — well, he’s the brilliant, free-wheeling brain behind the entire operation.

So, if you’d like to meet up and chat at Gamescom, Sharon, Mike, and Chris will be in Cologne from Monday, August 21st to Friday, August 25th. Send us a message at hello@twohat.com, and one of them will be in touch.

Want more articles like this? Subscribe to our newsletter and never miss an update!



How Maslow’s Hierarchy of Needs Explains the Internet

Online comments.

Anonymous egg accounts.

Political posts.

… feeling nauseous?

Chances are, you shuddered slightly at the words “online comments.”

Presenting Exhibit A, from a Daily Mail article about puppies:

It gets worse. Presenting Exhibit B, from Twitter:

 

The internet has so much potential. It connects us across borders, cultural divides, and even languages. And oftentimes that potential is fulfilled. Remember the Arab Spring in 2011? It probably wouldn’t have happened without Twitter connecting activists across the Middle East.

Writers, musicians, and artists can share their art with fans across the globe on platforms like Medium and YouTube.

After the terror attacks in Manchester and London earlier this year, many Facebook users used the Safety Check feature to reassure family and friends that they were safe from danger.

Every byte of knowledge that has ever existed is only a few taps away, stored, improbably, inside a device that fits in the palm of a hand. The internet is a powerful tool for making connections, for sharing knowledge, and for conversing with people across the globe.

And yet… virtual conversations are so often reduced to emojis and cat memes. Because who wants to start a real conversation when it’s likely to dissolve into insults and vitriol?

A rich, fulfilling, and enlightened life requires a lot more.

So what’s missing?

Maslow was onto something…

Remember Maslow’s hierarchy of needs? It probably sounds vaguely familiar, but here’s a quick refresher if you’ve forgotten.

Abraham Maslow, a psychology professor who spent much of his career at Brandeis University in Massachusetts, published his groundbreaking paper “A Theory of Human Motivation” in 1943. In this seminal paper, he identifies and describes the five basic levels of human needs. Each need forms the base for the next; once it is met, we can move up a level, creating a pyramid. Years later, he expanded on this hierarchy of human needs in the 1954 book Motivation and Personality.

The hierarchy looks like this:

  • Physiological: The basic physical requirements for human survival, including air, water, and food; then clothing, shelter, and sex.
  • Safety: Once our physical needs are met, we require safety and security. Safety needs include economic security as well as health and well-being.
  • Love/belonging: Human beings require a sense of belonging and acceptance from family and social groups.
  • Esteem: We need to be respected and valued by others.
  • Self-actualization: The ultimate. When we self-actualize, we become who we truly are.

According to Maslow, our supporting needs must be met before we can become who we truly are — before we reach self-actualization.

So what does it mean to become yourself? When we self-actualize, we’re more than just animals playing dress-up — we are fulfilling the promise of consciousness. We are human.

Sorry, what does this have to do with the internet?

We don’t stop being human when we go online. The internet is just a new kind of community — the logical evolution of the offline communities that we started forming when the first modern humans emerged about 200,000 years ago in Africa. We’ve had many chances to reassess, reevaluate, and modify our offline community etiquette since then, which means that offline communities have a distinct advantage over the internet.

Merriam-Webster’s various definitions of “community” are telling:

people with common interests living in a particular area;
an interacting population of various kinds of individuals (such as species) in a common location;
a group of people with a common characteristic or interest living together within a larger society

Community is all about interaction and common interests. We gather together in groups, in public and private spaces, to share our passions and express our feelings. So, of course, we expect to experience that same comfort and kinship in our online communities. After all, we’ve already spent nearly a quarter of a million years cultivating strong, resilient communities — and achieving self-actualization.

But the internet has failed us because people are afraid to do just that. Those of us who aspire to online self-actualization are too often drowned out by trolls. Which leaves us with emojis and cat memes — communication without connection.

So how do we bridge that gap between conversation and real connection? How do we reach the pinnacle of Maslow’s hierarchy of needs in the virtual space?

Conversations have needs, too

What if there was a hierarchy of conversation needs using Maslow’s theory as a framework?

On the internet, our basic physical needs are already taken care of so this pyramid starts with safety.

So what do our levels mean?

  • Safety: Offline, we expect to encounter bullies from time to time. And we can’t get upset when someone drops the occasional f-bomb in public. But we do expect to be safe from targeted harassment, from repeated racial, ethnic, or religious slurs, and from threats against our bodies and our lives. We should expect the same when we’re online.
  • Social: Once we are safe from harm, we require places where we feel a sense of belonging and acceptance. Social networks, forums, messaging apps, online games — these are all communities where we gather and share.
  • Esteem: We need to be heard, and we need our voices to be respected.
  • Self-actualization: The ultimate. When we self-actualize online, we blend the power of community with the blessing of esteem, and we achieve something bigger and better. This is where great conversation happens. This is where user-generated content turns into art. This is where real social change happens.

Problem is, online communities are far too often missing that first level. And without safety, we cannot possibly move on to social.

The problem with self-censorship

In the 2016 study Online Harassment, Digital Abuse, and Cyberstalking in America, researchers found that nearly half (47%) of Americans have experienced online harassment. That’s big — but it’s not entirely shocking. We hear plenty of stories about online harassment and abuse in the news.

The real kicker? Over a quarter (27%) of Americans reported that they had self-censored their posts out of fear of harassment.

If we feel so unsafe in our online communities that we stop sharing what matters to us most, we’ve lost the whole point of building communities. We’ve forgotten why they matter.

How did we get here?

There are a few reasons. No one planned the internet; it just happened, site by site and network by network. We didn’t plan for it, so we never created a set of rules.

And the internet is still so young. Think about it: Communities have been around since we started to walk on two feet. The first written language emerged in Sumer about 5,000 years ago. The printing press was invented 600 years ago. The telegraph has been around for nearly 200 years. Even the telephone — one of the greatest modern advances in communication — has a solid 140 years of etiquette development behind it.

The internet as we know it today — with its complex web of disparate communities and user-generated content — is only about 20 years old. And with all due respect to 20-year-olds, it’s still a baby.

We’ve been stumbling around in this virtual space with only a dim light to guide us, which has led to the standardization of some… less-than-desirable behaviors. Kids who grew up playing MOBAs (multiplayer online battle arenas) have come to accept that toxicity is a byproduct of online competition. Those of us who use social media expect to encounter hate speech that would once have been unimaginable when we scroll through our feeds.

And, of course, we all know to avoid the comments section.

Can self-actualization and online communities co-exist?

Yes. Because why not? We built this thing — so we can fix it.

Three things need to happen if we’re going to move from social to esteem to self-actualization.

Industry-wide paradigm shift

The good news? It’s already happening. Every day there’s a new article about the dangers of cyberbullying and online abuse. More and more social products realize that they can’t allow harassment to run free on their platforms. The German parliament recently backed a plan to fine social networks up to €50 million if they don’t remove hate speech within 24 hours.

Even the Obama Foundation has a new initiative centered around digital citizenship.

As our friend David Ryan Polgar, Chief of Trust & Safety at Friendbase says:

“Digital citizenship is the safe, savvy, ethical use of social media and technology.”

Safe, savvy, and ethical: As a society, we can do this. We’ve figured out how to do it in our offline communities, so we can do it in our online communities, too.

A big part of the shift includes a newfound focus on bringing empathy back into online interactions. To quote David again:

“There is a person behind that avatar and we often forget that.”

Thoughtful content moderation

The problem with moderation is that it’s no fun. No one wants to comb through thousands of user reports, review millions of potentially horrifying images, or monitor a mind-numbingly long live-chat stream in real time.

Too much noise + no way to prioritize = unhappy and inefficient moderators.

Thoughtful, intentional moderation is all about focus. It’s about giving community managers and moderators the right techniques to sift through content and ensure that the worst stuff — the targeted bullying, the cries for help, the rape threats — is dealt with first.

Automation is a crucial part of that solution. With artificial intelligence getting more powerful every day, instead of forcing their moderation team to review posts unnecessarily, social products can let computers do the heavy lifting first.

The content moderation strategy will be slightly different for every community. But there are a few best practices that every community can adopt:

  • Know your community resilience. This is a step that too many social products forget to take. Every community has a tolerance level for certain behaviors. Can your community handle the occasional swear word — but not if it’s repeated 10 times? Resilience will tell you where to draw the line.
  • Use reputation to treat users differently. Behavior tends to repeat itself. If you know that a user posts things that break your community guidelines, you can place tighter restrictions on them. Conversely, you can give engaged users the ability to post more freely. But don’t forget that users are human; everyone deserves the opportunity to learn from their mistakes. Which leads us to our next point…
  • Use behavior-changing techniques. Strategies include auto-messaging users before they hit “send” on posts that breach community guidelines, and publicly honoring users for their positive behavior.
  • Let your users choose what they see. The ESRB has the right idea. We all know what “Rated E for Everyone” means — we’ve heard it a million times. So what if we designed systems that allowed users to choose their experience based on a rating? If you have a smart enough system in the background classifying and labeling content, then you can serve users only the content that they’re comfortable seeing.
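Returning to the reputation point above, here’s a minimal sketch of what reputation-based permissions might look like; the trust score, thresholds, and permission names are all assumptions for illustration:

```python
def chat_permissions(trust_score: float) -> dict:
    """Map a user's reputation score (0-1, hypothetical) to chat permissions.

    Low-trust users get stricter filtering and tighter rate limits;
    high-trust users post more freely. Thresholds are illustrative.
    """
    if trust_score < 0.3:
        return {"filter_level": "strict", "max_messages_per_minute": 3,
                "links_allowed": False}
    if trust_score < 0.7:
        return {"filter_level": "standard", "max_messages_per_minute": 10,
                "links_allowed": False}
    return {"filter_level": "relaxed", "max_messages_per_minute": 30,
            "links_allowed": True}
```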

It all comes back to our hierarchy of conversation needs. If we can provide that first level of safety, we can move beyond emojis and cats — and move on to the next level.

Early digital education

The biggest task ahead of us is also the most important — education. We didn’t have the benefit of 20 years of internet culture, behavior, and standards when we first started to go online. We have those 20 years of mistakes and missteps behind us now.

Which means that we have an opportunity with the next generation of digital citizens to reshape the culture of the internet. In fact, strides are already being made.

Riot Games (the studio that makes the hugely popular MOBA League of Legends) has started an initiative in Australia and New Zealand that’s gaining traction. Spearheaded by Rioter Ivan Davies, the League of Legends High School Clubs teaches students about good sportsmanship through actual gameplay.

It’s a smart move — kids are already engaged when they’re playing a game they love, so it’s a lot easier to slip some education in there. Ivan and his team have even created impressive teaching resources for teachers who lead the clubs.

Google recently launched Be Internet Awesome, a program that teaches young children how to be good digital citizens and explore the internet safely. In the browser game Interland, kids learn how to protect their personal information, be kind to other users, and spot phishing scams and fake sites. And similar to Riot, Google has created a curriculum for educators to use in the classroom.

In addition, non-profits like the Cybersmile Foundation, UK Safer Internet Center, and more use social media to reach kids and teens directly.

Things are changing. Our kids will likely grow up to be better digital citizens than we ever were. And it’s unlikely that they will tolerate the bullying, harassment, and abuse that we’ve put up with for the last 20 years.

Along with a paradigm shift, thoughtful moderation, and education, if we want change to happen, we have to celebrate our communities. We have to talk about our wins, our successes… and especially our failures. Let’s not beat ourselves up if we don’t get it right the first time. We’re figuring this out.

We’re self-actualizing.

It’s time for the internet to grow up

Is this the year the internet achieves its full potential? From where most of us in the industry sit, it’s already happening. People are fed up, and they’re ready for a change.

This year, social products have an opportunity to decide what they really want to be. They can be the Wild West, where too many conversations end with a (metaphorical) bullet. Or they can be something better. They can be spaces that nurture humanity — real communities, the kind we’ve been building for the last 200,000 years.

This year, let’s build online communities that honor the potential of the internet.

That meet every level in our hierarchy of needs.

That promote digital citizenship.

That encourage self-actualization.

This year, let’s start the conversation.

***

At Two Hat Security, we empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content.

Want to increase user retention, reduce moderation, and protect your brand?

Get in touch today to see how our chat filter and moderation software Community Sift can help you build a community that promotes good digital citizenship — and gives your users a safe space to connect.

Want more articles like this? Subscribe to our newsletter and never miss an update!



Quora: What are the different ways to moderate content?

There are five different approaches to User-Generated Content (UGC) moderation:

  • Pre-moderate all content
  • Post-moderate all content
  • Crowdsourced (user reports)
  • 100% computer-automated
  • 100% human review

Each option has its merits and its drawbacks. But as with most things, the best method lies somewhere in between — a mixture of all five techniques.

Let’s take a look at the pros and cons of your different options.

Pre-moderate all content

  • Pro: You can be fairly certain that nothing inappropriate will end up in your community; you know you have human eyes on all content.
  • Con: Time- and resource-consuming; subject to human error; does not happen in real time, and can be frustrating for users who expect to see their posts immediately.

Post-moderate all content

  • Pro: Users can post and experience content in real-time.
  • Con: Once risky content is posted, the damage is done; puts the burden on the community as it usually involves a lot of crowdsourcing and user reports.

Crowdsourcing/user reports

  • Pro: Gives your community a sense of ownership; people are good at finding subtle language.
  • Con: Similar to post-moderating all content, once threatening content is posted, it’s already had its desired effect, regardless of whether it’s removed; forces the community to police itself.

100% computer-automated

  • Pro: Computers are great at identifying the worst and best content; automation frees up your moderation team to engage with the community.
  • Con: Computers aren’t great at identifying gray areas and making tough decisions.

100% human review

  • Pro: Humans are good at making tough decisions about nuanced topics; moderators become highly attuned to community sentiment.
  • Con: Humans burn out easily; not a scalable solution; reviewing disturbing content can have an adverse effect on moderators’ health and wellness.

So, if all five options have valid pros and cons, what’s the solution? In our experience, the most effective technique uses a blend of pre- and post-moderation, human review, and user reports, in tandem with some level of automation.

The first step is to nail down your community guidelines. Social products that don’t clearly define their standards from the very beginning have a hard time enforcing them as they scale up. Twitter is a cautionary tale for all of us, as we witness their current struggles with moderation. They launched the platform without the tools to enforce their (admittedly fuzzy) guidelines, and the company is facing a very public backlash because of it.

Consider your stance on the following:

  • Bullying: How do you define bullying? What behavior constitutes bullying in your community?
  • Profanity: Do you block all swear words or only the worst obscenities? Do you allow acronyms like WTF?
  • Hate speech: How do you define hate speech? Do you allow racial epithets if they’re used in a historical context? Do you allow discussions about religion or politics?
  • Suicide/Self-harm: Do you filter language related to suicide or self-harm, or do you allow it? Is there a difference between a user saying “I want to kill myself,” “You should kill yourself,” and “Please don’t kill yourself”?
  • PII (Personally Identifiable Information): Do you encourage users to use their real names, or does your community prefer anonymity? Can users share email addresses, phone numbers, and links to their profiles on other social networks? If your community is under-13 and in the US, you may be subject to COPPA.
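
Before you configure any tooling, it can help to capture these decisions in a machine-readable form. Here is a minimal sketch in Python; the field names, defaults, and categories are illustrative assumptions, not settings from any particular product:

```python
from dataclasses import dataclass

@dataclass
class CommunityPolicy:
    """Illustrative community-guideline settings; every field name here is hypothetical."""
    block_all_profanity: bool = False       # False = allow mild words and acronyms like "WTF"
    allow_historical_slurs: bool = False    # e.g. epithets quoted in a historical context
    filter_self_harm_language: bool = True  # treat "you should kill yourself" as high-risk
    allow_pii_sharing: bool = False         # emails, phone numbers, links to other profiles
    under_13_audience: bool = True          # stricter defaults; COPPA may apply in the US
    bullying_definition: str = "repeated, targeted attacks on another user"

# Example: an adult forum might loosen profanity while still blocking PII sharing.
adult_forum_policy = CommunityPolicy(block_all_profanity=False, under_13_audience=False)
```

Writing the policy down this way forces the team to agree on the grey areas before the first report ever arrives.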

Different factors will determine your guidelines, but the most important things to consider are:

  • The nature of your product. Is it a battle game? A forum to share family recipes? A messaging app?
  • Your target demographic. Are users over or under 13? Are portions of the experience age-gated? Is it marketed exclusively to adults?

Once you’ve decided on community guidelines, you can start to build your moderation workflow. First, you’ll need to find the right software. There are plenty of content filters and moderation tools on the market, but in our experience, Community Sift is the best.

A high-risk content detection system designed specifically for social products, Community Sift works alongside moderation teams to automatically identify threatening UGC in real time. It’s built to detect and block the worst of the worst (as defined by your community guidelines), so your users and moderators don’t ever have to see it. There’s no need to force your moderation team to review disturbing content that a computer algorithm can be trained to recognize in a fraction of a second. Community Sift also allows you to move content into queues for human review, and automate actions (like player bans) based on triggers.

Once you’ve tuned the system to meet your community’s unique needs, you can create your workflows.

You may want to pre-moderate some content, even with a content filter running in the background. If your product is targeted at under-13 users, you might pre-moderate anything that the filter doesn’t classify as high-risk, as an added layer of human protection. Or maybe you route all content flagged as high-risk (extreme bullying, hate speech, rape threats, etc.) into queues for moderators to review. For older communities, you may not require any pre-moderation and instead depend on user reports for any post-moderation work.
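
As a rough illustration of that kind of workflow, here is a sketch of how the routing rules might look in code. The risk labels, queue names, and the routing function are hypothetical stand-ins for whatever your filter and tooling actually expose; this is not Community Sift’s API.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1        # benign chat, no action needed
    UNCERTAIN = 2  # grey-area content a human should look at
    HIGH = 3       # extreme bullying, hate speech, threats, etc.

def route_message(risk: Risk, under_13_community: bool) -> str:
    """Decide where a message goes once the filter has assigned a risk level.

    Returns the name of a hypothetical queue or action; real tooling would
    enqueue the message or publish it directly.
    """
    if risk is Risk.HIGH:
        # Route flagged high-risk content to a moderator queue
        # (or block it outright, depending on your guidelines).
        return "high_risk_review_queue"
    if under_13_community:
        # Added layer of human protection: pre-moderate anything
        # the filter didn't classify as high-risk.
        return "pre_moderation_queue"
    # Older communities: publish in real time and rely on user reports
    # for any post-moderation work.
    return "publish"
```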

With an automated content detection system in place, you give your moderators their time back to do the tough, human stuff, like dealing with calls for help and reviewing user reports.

Another piece of the moderation puzzle is addressing negative user behavior. We recommend using automation, with the severity increasing with each offense. Techniques include warning users when they’ve posted high-risk content, and muting or banning their accounts for a short period. Users who persist can eventually lose their accounts. Again, the process and severity here will vary based on your product and demographic. The key is to have a consistent, well-thought-out process from the very beginning.
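
Here is a minimal sketch of that escalation ladder, assuming a simple per-user offense counter. The thresholds and sanction names are made up for illustration; tune them to your own product and demographic.

```python
def sanction_for(offense_count: int) -> str:
    """Map a user's running offense count to an escalating sanction."""
    if offense_count <= 1:
        return "warn"            # remind the user of the community guidelines
    if offense_count <= 3:
        return "mute_24h"        # temporarily remove chat privileges
    if offense_count <= 5:
        return "ban_7d"          # short account suspension
    return "permanent_ban"       # persistent offenders eventually lose the account
```

Whatever the exact steps, the point is that the same offense count always produces the same consequence, so users learn that the rules are real.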

You will also want to ensure that you have a straightforward and accessible process for users to report offensive behavior. Don’t bury the report option, and make sure that you provide a variety of report tags to select from, like bullying, hate speech, sharing PII, etc. This will make it much easier for your moderation team to prioritize which reports they review first.
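
Those report tags can then double as a prioritization signal. Below is a rough sketch that sorts open reports by a severity weight per tag; the tag names and weights are assumptions, not a standard taxonomy.

```python
# Hypothetical severity weights per report tag; higher = reviewed sooner.
TAG_SEVERITY = {
    "self_harm": 100,
    "hate_speech": 80,
    "bullying": 60,
    "sharing_pii": 50,
    "profanity": 20,
    "other": 10,
}

def sort_reports(reports: list[dict]) -> list[dict]:
    """Order open reports so moderators see the highest-risk tags first."""
    return sorted(reports, key=lambda r: TAG_SEVERITY.get(r["tag"], 0), reverse=True)

# Example: a self-harm report jumps ahead of a profanity report filed earlier.
queue = sort_reports([{"tag": "profanity"}, {"tag": "self_harm"}])
```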

Ok, so moderation is a lot of work. It requires patience, dedication, and a strong passion for community-building. But it doesn’t have to be hard if you leverage the right tools and the right techniques. And it’s highly rewarding, in the end. After all, what’s better than shaping a positive, healthy, creative, and engaged community in your social product? It’s the ultimate goal, and it’s an attainable one — when you do it right.

 

Originally published on Quora




Quora: What is the single biggest problem on the internet?

It has to be the proliferation of dangerous content. For good or for evil, many social networks and online communities are built around the concept of total anonymity — the separation of our (socially, ethically, and legally) accountable offline identities from our (too often hedonistic, id-driven, and highly manufactured) online identities.

People have always behaved badly. That’s not pessimism or fatalism; it’s just the truth. We are not perfect; often we are good, but just as often we indulge our darkest desires, even if they hurt other people.

And so with the advent of a virtual space where accountability is all too often non-existent, the darkest parts of the real world — harassment, rape threats, child abuse — all moved onto the internet. In the “real world” (an increasingly amorphous concept, but that’s a topic for another day), we are generally held accountable for our behavior, whereas online we are responsible only to ourselves. And sometimes, we cannot be trusted.

Facebook Live is a recent example. When used to share, engage, connect, and tell stories, it’s a beautiful tool. It’s benign online disinhibition at its best. But when it’s used to live stream murder and sexual assault — that’s toxic online disinhibition at its worst. And in the case of that sexual assault, at least 40 people watched it happen in real time, and not one of them reported it.

How did this happen?

It started with cyberbullying. We associate bullying with the playground, and since those of us who make the rules — adults — are far removed from the playground, we forget just how much schoolyard bullying can hurt. So, from the beginning, social networks have allowed bullying to flourish. Bullying became harassment, which became threats, which became hate speech, and so on, and so forth. We’ve tolerated and normalized bad behavior for so long that it’s built into the framework of the internet. It’s no surprise that 40 people watched a live video of a 15-year-old girl being assaulted, and did nothing. It’s not difficult to trace a direct line from consequence-free rape threats to actual, live rape.

When social networks operate without a safety net, everyone gets hurt.

The good thing is, more and more sites are realizing that they have a social, ethical, and (potentially) legal obligation to moderate content. It won’t be easy — as Facebook has discovered, live streaming videos are a huge challenge for moderators — but it’s necessary. There are products out there — like Community Sift — that are designed specifically to detect and remove high-risk content in real-time.

In 2017, we have an opportunity to reshape the internet. The conversation has already begun. Hopefully, we’ll get it right this time.

Originally published on Quora

