Welcome to the new Two Hat site!

Today’s the day! We’ve been working hard on our new website for months (or has it been years? It feels like years), and it’s finally ready.

The biggest change you’ll notice is that we’ve merged the Two Hat and Community Sift websites. Before, if you wanted to learn about the company you’d have to start on the Two Hat site, then hop over to the Community Sift site to learn about the product. Not anymore! Now they’re together — our vision, our story, and our work, all on one site.

We’ve also added more information about our latest project, CEASE. An artificial intelligence model built to find new child sexual abuse material (CSAM), CEASE is our most important project yet, and we’re thrilled to share our progress with the world.

We encourage you to head over to the CEASE homepage and learn more. It’s not an easy topic to face, but it’s crucial that we tackle it head-on. It’s not a problem we can solve alone, so please get involved if you can. Find out more here.

We’ve also updated the Community Sift product page with lots of new content, including case studies (stay tuned for more!) and an all-new FAQ section.

We believe in a world free of online bullying, harassment, and child exploitation. And with this new site, we hope to share that vision and that dream with the world. Thank you for joining us on this journey.

In 2017, let’s build a better internet, together.

 

Quora: What are the different ways to moderate content?

There are five different approaches to User-Generated Content (UGC) moderation:

  • Pre-moderate all content
  • Post-moderate all content
  • Crowdsourced (user reports)
  • 100% computer-automated
  • 100% human review

Each option has its merits and its drawbacks. But as with most things, the best method lies somewhere in between — a mixture of all five techniques.

Let’s take a look at the pros and cons of each option.

Pre-moderate all content

  • Pro: You can be fairly certain that nothing inappropriate will end up in your community; you know you have human eyes on all content.
  • Con: Time- and resource-consuming; subject to human error; doesn’t happen in real time, which can frustrate users who expect to see their posts immediately.

Post-moderate all content

  • Pro: Users can post and experience content in real time.
  • Con: Once risky content is posted, the damage is done; puts the burden on the community as it usually involves a lot of crowdsourcing and user reports.

Crowdsourcing/user reports

  • Pro: Gives your community a sense of ownership; people are good at spotting subtle or coded language that automated filters miss.
  • Con: Similar to post-moderating all content, once threatening content is posted, it’s already had its desired effect, regardless of whether it’s removed; forces the community to police itself.

100% computer-automated

  • Pro: Computers are great at identifying the worst and best content; automation frees up your moderation team to engage with the community.
  • Con: Computers aren’t great at identifying gray areas and making tough decisions.

100% human review

  • Pro: Humans are good at making tough decisions about nuanced topics; moderators become highly attuned to community sentiment.
  • Con: Humans burn out easily; not a scalable solution; reviewing disturbing content can have an adverse effect on moderators’ health and wellness.

So, if all five options have valid pros and cons, what’s the solution? In our experience, the most effective technique uses a blend of both pre- and post-moderation, human review, and user reports, in tandem with some level of automation.

The first step is to nail down your community guidelines. Social products that don’t clearly define their standards from the very beginning have a hard time enforcing them as they scale up. Twitter is a cautionary tale for all of us, as we witness their current struggles with moderation. They launched the platform without the tools to enforce their (admittedly fuzzy) guidelines, and the company is facing a very public backlash because of it.

Consider your stance on the following (a configuration sketch follows the list):

  • Bullying: How do you define bullying? What behavior constitutes bullying in your community?
  • Profanity: Do you block all swear words or only the worst obscenities? Do you allow acronyms like WTF?
  • Hate speech: How do you define hate speech? Do you allow racial epithets if they’re used in a historical context? Do you allow discussions about religion or politics?
  • Suicide/Self-harm: Do you filter language related to suicide or self-harm, or do you allow it? Is there a difference between a user saying “I want to kill myself,” “You should kill yourself,” and “Please don’t kill yourself”?
  • PII (Personally Identifiable Information): Do you encourage users to use their real names, or does your community prefer anonymity? Can users share email addresses, phone numbers, and links to their profiles on other social networks? If your community is under-13 and in the US, you may be subject to COPPA (the Children’s Online Privacy Protection Act).
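
Whatever you decide, it helps to write those decisions down in a form your tooling can act on. Below is a purely illustrative sketch; the category names, actions, and the policy_for helper are invented for this example rather than taken from any particular product. It shows how answers to the questions above might be captured for an under-13 community:

```python
# Hypothetical guideline configuration for an under-13 community.
# Category names and action values are invented for illustration only.
COMMUNITY_POLICY = {
    "bullying":    {"allowed": False, "action": "block_and_queue"},
    "profanity":   {"allowed": False, "action": "block"},             # includes acronyms like WTF
    "hate_speech": {"allowed": False, "action": "block_and_queue"},
    "self_harm":   {"allowed": False, "action": "queue_for_review"},  # "I want to kill myself" needs a human
    "pii":         {"allowed": False, "action": "block"},             # COPPA: no emails, phone numbers, links
}

def policy_for(category: str) -> dict:
    """Look up the community's stance on a content category."""
    return COMMUNITY_POLICY.get(category, {"allowed": True, "action": "allow"})
```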

Different factors will determine your guidelines, but the most important things to consider are:

  • The nature of your product. Is it a battle game? A forum to share family recipes? A messaging app?
  • Your target demographic. Are users over or under 13? Are portions of the experience age-gated? Is it marketed to adults only?

Once you’ve decided on community guidelines, you can start to build your moderation workflow. First, you’ll need to find the right software. There are plenty of content filters and moderation tools on the market, but in our experience, Community Sift is the best.

A high-risk content detection system designed specifically for social products, Community Sift works alongside moderation teams to automatically identify threatening UGC in real time. It’s built to detect and block the worst of the worst (as defined by your community guidelines), so your users and moderators don’t ever have to see it. There’s no need to force your moderation team to review disturbing content that a computer algorithm can be trained to recognize in a fraction of a second. Community Sift also allows you to move content into queues for human review, and automate actions (like player bans) based on triggers.
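
To make that workflow concrete, here is a minimal sketch of the detect, block, queue, and automate pattern described above. It is not Community Sift’s actual API; classify_risk stands in for whatever content filter you use, and the score thresholds are invented for illustration:

```python
# A minimal sketch of the detect / block / queue / automate pattern.
# classify_risk is a stand-in for your content filter; thresholds are examples only.

def classify_risk(text: str) -> int:
    """Placeholder scorer: return 0 (benign) to 10 (worst of the worst)."""
    worst = ("kill yourself",)  # stand-in; a real filter covers far more than literal phrases
    return 10 if any(phrase in text.lower() for phrase in worst) else 2

def moderate(user_id: str, text: str, review_queue: list, sanction) -> bool:
    """Return True if the message may be shown to the community."""
    risk = classify_risk(text)
    if risk >= 8:                       # worst of the worst: never shown, never human-reviewed
        sanction(user_id, reason="high_risk_content")
        return False
    if risk >= 5:                       # gray area: hold for human review
        review_queue.append((user_id, text))
        return False
    return True                         # low risk: post in real time

# Usage: queue = []; moderate("player42", "gg well played", queue, sanction=lambda u, reason: None)
```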

Once you’ve tuned the system to meet your community’s unique needs, you can create your workflows.

You may want to pre-moderate some content, even with a content filter running in the background. If your product is targeted at under-13 users, you might pre-moderate anything that the filter doesn’t classify as high-risk, as an added layer of human protection. Or maybe you route all content flagged as high-risk (extreme bullying, hate speech, rape threats, etc.) into queues for moderators to review. For older communities, you may not require any pre-moderation and instead depend on user reports for any post-moderation work.
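
As a rough illustration of how those choices might differ by audience, here is one way to capture them as settings. The flag names and values are invented for this example, not defaults from any tool:

```python
# Illustrative workflow settings per audience; every value is an example, not a recommendation.
WORKFLOWS = {
    "under_13": {"pre_moderate_unclassified": True,  "queue_high_risk": True,  "rely_on_user_reports": False},
    "teen":     {"pre_moderate_unclassified": False, "queue_high_risk": True,  "rely_on_user_reports": True},
    "adult":    {"pre_moderate_unclassified": False, "queue_high_risk": False, "rely_on_user_reports": True},
}
```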

With an automated content detection system in place, you give your moderators their time back to do the tough, human stuff, like dealing with calls for help and reviewing user reports.

Another piece of the moderation puzzle is addressing negative user behavior. We recommend automating these responses, with severity escalating at each offense. Techniques include warning users when they’ve posted high-risk content, and muting or banning their accounts for a short period. Users who persist can eventually lose their accounts. Again, the process and severity here will vary based on your product and demographic. The key is to have a consistent, well-thought-out process from the very beginning.
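
Here is a sketch of what that escalation might look like in code. The offense counts and durations are placeholders; tune them to your product and demographic:

```python
# Hypothetical escalation ladder; counts and durations are placeholders.
ESCALATION = [
    {"offenses": 1, "action": "warn"},
    {"offenses": 2, "action": "mute",    "duration_hours": 1},
    {"offenses": 3, "action": "suspend", "duration_hours": 24},
    {"offenses": 5, "action": "permanent_ban"},
]

def next_action(offense_count: int) -> dict:
    """Return the most severe step the user has reached so far."""
    reached = [step for step in ESCALATION if offense_count >= step["offenses"]]
    return reached[-1] if reached else {"action": "none"}

# next_action(1) -> warn, next_action(4) -> 24-hour suspension, next_action(7) -> permanent ban
```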

You will also want to ensure that you have a straightforward and accessible process for users to report offensive behavior. Don’t bury the report option, and make sure that you provide a variety of report tags to select from, like bullying, hate speech, sharing PII, etc. This will make it much easier for your moderation team to prioritize which reports they review first.
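
One way to make those tags pay off is to sort the review queue by tag severity so the most urgent reports surface first. A small sketch, with invented tag names and weights:

```python
# Sketch: tagged user reports sorted for review. Tag names and weights are invented.
from dataclasses import dataclass
from datetime import datetime

TAG_PRIORITY = {"child_safety": 0, "threats": 1, "hate_speech": 2,
                "bullying": 3, "sharing_pii": 4, "spam": 5}

@dataclass
class Report:
    reporter_id: str
    reported_user_id: str
    tag: str
    created_at: datetime

def review_order(reports):
    """Most urgent tags first; oldest reports first within a tag."""
    return sorted(reports, key=lambda r: (TAG_PRIORITY.get(r.tag, 99), r.created_at))
```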

Ok, so moderation is a lot of work. It requires patience and dedication and a strong passion for community-building. But it doesn’t have to be hard if you leverage the right tools and the right techniques. And it’s highly rewarding, in the end. After all, what’s better than shaping a positive, healthy, creative, and engaged community in your social product? It’s the ultimate goal, and ultimately, it’s an attainable one — when you do it right.

 

Originally published on Quora

Want more articles like this? Subscribe to our newsletter and never miss an update!



Quora: What is the single biggest problem on the internet?

It has to be the proliferation of dangerous content. For good or for evil, many social networks and online communities are built around the concept of total anonymity — the separation of our (socially, ethically, and legally) accountable offline identities from our (too often hedonistic, id-driven, and highly manufactured) online identities.

People have always behaved badly. That’s not pessimism or fatalism; it’s just the truth. We are not perfect; often we are good, but just as often we indulge our darkest desires, even if they hurt other people.

And so with the advent of a virtual space where accountability is all too often non-existent, the darkest parts of the real world — harassment, rape threats, child abuse — all moved onto the internet. In the “real world” (an increasingly amorphous concept, but that’s a topic for another day), we are generally held accountable for our behavior, whereas online we are responsible only to ourselves. And sometimes, we cannot be trusted.

Facebook Live is a recent example. When used to share, engage, connect, and tell stories, it’s a beautiful tool. It’s benign online disinhibition at its best. But when it’s used to live stream murder and sexual assault — that’s toxic online disinhibition at its worst. And in the case of that sexual assault, at least 40 people watched it happen in real time, and not one of them reported it.

How did this happen?

It started with cyberbullying. We associate bullying with the playground, and since those of us who make the rules — adults — are far removed from the playground, we forget just how much schoolyard bullying can hurt. So from the beginning social networks have allowed bullying to flourish. Bullying became harassment, which became threats, which became hate speech, and so on, and so forth. We’ve tolerated and normalized bad behavior so long that it’s built into the framework of the internet. It’s no surprise that 40 people watched a live video of a 15-year-old girl being assaulted, and did nothing. It’s not difficult to trace a direct line from consequence-free rape threats to actual, live rape.

When social networks operate without a safety net, everyone gets hurt.

The good thing is, more and more sites are realizing that they have a social, ethical, and (potentially) legal obligation to moderate content. It won’t be easy — as Facebook has discovered, live streaming videos are a huge challenge for moderators — but it’s necessary. There are products out there — like Community Sift — that are designed specifically to detect and remove high-risk content in real time.

In 2017, we have an opportunity to reshape the internet. The conversation has already begun. Hopefully, we’ll get it right this time.

Originally published on Quora

Want more articles like this? Subscribe to our newsletter and never miss an update!
