Two Hat’s CEASE.ai Technology Integrates with Griffeye Analyze to Help Investigators Rescue Child Sexual Abuse Victims Faster

With this technical integration, law enforcement agencies worldwide can now easily access cutting-edge artificial intelligence to aid in child sexual abuse investigations

KELOWNA, British Columbia, August 12, 2019: Technology company Two Hat Security announced today that CEASE.ai, an artificial intelligence model that can detect, sort, and prioritize new, previously uncatalogued child sexual abuse material (CSAM) for investigators, is now available for law enforcement agencies using the Griffeye Analyze platform.

“A technology partnership between CEASE.ai and Griffeye has been a goal for us since the beginning,” said Two Hat CEO and founder Chris Priebe. “The aim is to provide this technology to law enforcement agencies worldwide that already use Griffeye Analyze in their investigations. CEASE.ai is designed to not only protect investigators’ mental health, which can be severely affected by viewing these horrific images, but also to help them find and rescue innocent victims faster.”

Built in collaboration with Canadian law enforcement, with support from Canada’s Build in Canada Innovation Program, and through Mitacs partnerships with top Canadian universities, CEASE.ai uses multiple artificial intelligence models to detect and prioritize new images containing child abuse. After investigators run their caseload against a hash list of known images, they can then rescan the remaining items through the CEASE.ai plugin to flag new and uncatalogued images.

“We’re thrilled to integrate CEASE.ai with the Analyze platform,” said Griffeye CEO Johann Hofmann. “We strongly believe that artificial intelligence is the future of technology to fight child sexual abuse, and this is an opportunity for us to work with a company that builds state-of-the-art artificial intelligence and get it into the hands of our law enforcement community. This will help them speed up investigations and free up time to prioritize investigative work such as victim identification.”

The growing volume of child sexual abuse material has put investigators under enormous pressure. According to its 2018 annual report, analysts at the Internet Watch Foundation processed 229,328 reports in 2018, a 73% increase over the 2017 figure of 132,636. With increasingly large caseloads containing anywhere from hundreds of thousands to 1-2 million images, investigators struggle to sort and manually review all of the material. The CEASE.ai technology aims to reduce that workload significantly.

“If we seize a hard drive that has 28 million photos, investigators need to go through all of them,” said Sgt. Arnold Guerin, who works in the technology section of the Canadian Police Centre for Missing and Exploited Children (CPCMEC). “But how many are related to children? Can we narrow it down? That’s where this project comes in: we can train the algorithm to recognize child exploitation.”

Two Hat has also made CEASE.ai available for social platforms to prevent illegal images from being uploaded and shared on social networks. Learn more about how CEASE.ai is assisting law enforcement in detecting and prioritizing new child sexual abuse material on the Two Hat website.

About Two Hat Security

Two Hat’s AI-powered content moderation platform classifies, filters, and escalates more than 30 billion human interactions a month, including messages, usernames, images, and videos, all in real time. With an emphasis on surfacing online harms including cyberbullying, abuse, hate speech, violent threats, and child exploitation, the company enables clients across a variety of social networks to foster safe and healthy user experiences.

In addition, they believe that removing illegal content is a shared responsibility among social platforms, technology companies, and law enforcement. To that end, Two Hat works with law enforcement to train AI to detect new child exploitative material.

www.twohat.com

About Griffeye

Griffeye provides one of the world’s premier software platforms for digital media investigations. Used by law enforcement, defense and national security agencies across the globe, the platform gives investigators and intelligence professionals a leg up on ever-increasing volumes of image and video files.

www.griffeye.com

Witnessing the Dawn of the Internet’s Duty of Care

As I write this, we are a little more than two months removed from the terrorist attacks in Christchurch. Among many things, Christchurch will be remembered as the incident that galvanized world opinion, and more importantly global action, around online safety.

In the last two months, there has been a seismic shift in how we look at internet safety and how content is shared. Governments in London, Sydney, Washington, DC, Paris and Ottawa are considering or introducing new laws, financial penalties and even prison time for those who fail to remove harmful content quickly. Others will follow, and that’s a good thing — securing the internet’s future requires the world’s governments to collectively raise the bar on safety, and cooperate across boundaries.

In order to reach this shared goal, it is essential that technology companies engage fully as partners. We witnessed a huge step forward just last week when Facebook, Amazon, and other tech leaders came out in strong support of the Christchurch Call to Action. Two Hat stands proudly with them.

Clear terms of use, timely actions by social platforms on user reports of extremist content, and transparent public reporting are the building blocks of a safer internet. Two Hat also believes every website should have baseline filtering for cyberbullying, images of sexual abuse, extremist content, and encouragement of self-harm or suicide.

Crisis protocols for service providers and regulators are essential, as well — we have to get better at managing incidents when they happen. Two Hat also echoes the need for bilateral education initiatives with the goal of helping people become better informed and safer internet users.

In all cases, open collaboration between technology companies, government, not-for-profit organizations, and both public and private researchers will be essential to create an internet of the future that is Safe by Design. AI + HI (artificial intelligence plus human intelligence) is the formula we talk about that can make it happen.

AI+HI is the perfect marriage of machines, which excel at processing billions of units of data quickly, guided by humans, who provide empathy, compassion and critical thinking. Add a shared global understanding of what harmful content is and how we define and categorize it, and we are starting to address online safety in a coordinated way.

New laws and technology solutions to moderate internet content are necessary instruments to help prevent the incitement of violence and the spread of online hate, terror and abuse. Implementing duty of care measures in the UK and around the world requires a purposeful, collective effort to create a healthier and safer internet for everyone.

Our vision of that safer internet will be realized when exposure to hate, abuse, violence and exploitation no longer feels like the price of admission for being online.

The United Kingdom’s new duty of care legislation, the Christchurch Call to Action, and the rise of the world’s collective will all move us closer to that day.

Two Hat is currently offering no cost, no obligation community audits for anyone who could benefit from a second look at their moderation techniques.

Our Director of Community Trust & Safety will examine your community, locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize user engagement. This is a unique opportunity to gain insight into your community from an industry expert.

Book your audit today.

Two Hat Named One of 2019 “Ready to Rocket” Growth Companies in British Columbia’s Technology Sectors

List Profiles B.C.’s Tech Companies Best-Positioned to Capitalize on Current Sector Trends

VANCOUVER, B.C. (March 20, 2019) – Rocket Builders announced its seventeenth (17th) annual “Ready to Rocket” lists, naming leading automated content moderation company Two Hat as one of the “Ready to Rocket” companies in the Information and Communication Technology category. The list profiles British Columbia technology companies that are best positioned to capitalize on the technology sector trends that will lead them to faster growth than their peers. Two Hat was highlighted for their leading Community Sift chat filter.

The annual 2019 “Ready to Rocket” lists provide accurate predictions of private companies that will likely experience significant growth, venture capital investment or acquisition by a major player in the coming year. Two Hat is one of 85 companies named to this year’s list in the Information and Communication Technology category.

“We’ve experienced incredible growth over the last year, and we expect it to only get better in 2019,” said Chris Priebe, Two Hat CEO and founder. “We’ve been working with the biggest gaming companies in the world for several years now. But last year social platforms went through a major paradigm shift, which opened the door for content moderation solutions like ours to break into new and emerging industries like edtech, fintech, travel and hospitality, and more.”

Two Hat is the creator of Community Sift, a powerful risk-based chat filter and content moderation software that protects online communities, brands, and bottom lines. Community Sift is the industry leader in high-risk content detection and moderation, protecting some of the biggest online games, virtual worlds, and social products on the internet. With the number of child pornography incidents in Canada on the rise, Two Hat collaborated with Canadian law enforcement and leading academic partners to train a groundbreaking new AI model, CEASE.ai, to detect and remove child sexual abuse material (CSAM) for investigators and social platforms.

“Over the 17 years of the program, the B.C. technology sector has steadily grown each year, and presents a growing challenge to select and identify the most likely to succeed for our Ready to Rocket lists,” said Geoffrey Hansen, Managing Partner at Rocket Builders.

“In recent years, a startup economy has blossomed, yielding a rich field of companies for our consideration, with over 450 companies reviewed to make our selections of 203 winners. Our Emerging Rocket lists enable us to profile those earlier-stage companies that are well positioned for investment.”

The average growth rate on the list was over 40 percent, with 32 companies exceeding double-digit growth and six companies exceeding 100 percent growth.

Two Hat has been named a “Ready to Rocket” company for four consecutive years. This year’s award follows Two Hat’s recent acquisition of ImageVision, an image recognition and visual search company, and the launch of CEASE.ai.

About Two Hat
Founded in 2012, Two Hat is an AI-based technology company that empowers gaming and social platforms to grow and protect their online communities. With their flagship product Community Sift, an enterprise-level content filter and automated chat, image, and video moderation tool, online communities can proactively filter abuse, harassment, hate speech, adult content, and other disruptive behavior.

About Rocket Builders
Rocket Builders produces the “Ready to Rocket” list, which profiles information technology companies with the greatest potential for revenue growth in the coming year. The lists are predictive of future success, making them unique in approach and unique in value for our business audience. The “Ready to Rocket” lists are the only predictive lists of their kind in North America, requiring many months of sector and company analysis. The 2019 list features 85 “Ready to Rocket” technology growth companies and 118 “Emerging Rocket” early stage startups.

Contact
GreenSmith PR
Mike Smith, 703.623.3834
mike@greensmithpr.com

The Changing Landscape of Automated Content Moderation in 2019

Is 2019 the year that content moderation goes mainstream? We think so.

Things have changed a lot since 1990 when Tim Berners-Lee invented the World Wide Web. A few short years later, the world started to surf the information highway – and we’ve barely stopped to catch our collective breath since.

Learn about the past, present, and future of online content moderation in an upcoming webinar

The internet has given us many wonderful things over the last 30 years – access to all of recorded history, an instant global connection that bypasses country, religious, and racial lines, Grumpy Cat – but it’s also had unprecedented and largely unexpected consequences.

Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard – it’s all adding up. Now that well over half of Earth’s population is online (4 billion people as of January 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

The change started two years ago.

Mark Zuckerberg’s 2017 manifesto hinted at what was to come:

“There are billions of posts, comments, and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

In 2018, the industry finally realized that it was time to find solutions to the problems outlined in Facebook’s manifesto. The question was no longer, “Should we moderate content on our platforms?” and instead became, “How can we better moderate content on our platforms?”

Learn how you can leverage the latest advances in content moderation in an upcoming webinar

The good news is that in 2019, we have access to the tools, technology, and years of best practices to make the dream of a safer internet a reality. At Two Hat, we’ve been working behind the scenes for nearly seven years now (alongside some of the biggest games and social networks in the industry) to create technology to auto-moderate content so accurately that we’re on the path to “invisible AI” – filters that are so good you don’t even know they’re in the background.

On February 20th, we invite you to join us for a very special webinar, “Invisible AI: The Future of Content Moderation”. Two Hat CEO and founder Chris Priebe will share his groundbreaking vision of artificial intelligence in this new age of chat, image, and video moderation.

In it, he’ll discuss the past, present, and future of content moderation, expanding on why the industry shifted its attitude towards moderation in 2018, with a special focus on the trends of 2019.

He’ll also share exclusive, advance details about:

We hope you can make it. Give us 30 minutes of your time, and we’ll give you all the information you need to make 2019 the year of content moderation.

PS: Another reason you don’t want to miss this – the first 25 attendees will receive a free gift! ; )


Read about Two Hat’s big announcements:

Two Hat Is Changing the Landscape of Content Moderation With New Image Recognition Technology

Two Hat Leads the Charge in the Fight Against Child Sexual Abuse Images on the Internet

Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

 

The Future of Image Moderation: Why We’re Creating Invisible AI (Part Two)

Yesterday, we announced that Two Hat has acquired image moderation service ImageVision. With the addition of ImageVision’s technology to our existing image recognition tech stack, we’ve boosted our filter accuracy — and are determined to push image moderation to the next level.

Today, Two Hat CEO and founder Chris Priebe discusses why ImageVision was the ideal choice for a technology acquisition, and how he hopes to change the landscape of image moderation in 2019.

We were approached by ImageVision over a year ago. Their founder Steven White has a powerful story that led him to found the company (it’s his to tell, so I won’t share it). His story resonated with me and my own journey of why I founded Two Hat. He spent over 10 years perfecting his art. He had clients including Facebook, Yahoo, Flickr, and Apple. That is 10 years of experience and over $10 million in investment put toward the problem of accurately detecting pornographic images.

Of course, 10 years ago we all did things differently. Neural networks weren’t popular yet. Back then, you would look at how much skin tone was in an image. You looked at angles and curves and how they related to each other. ImageVision built 185 of these hand-coded features.

Later they moved on to neural networks, but ImageVision did something amazing. They took their manually coded features and fed both them and the pixels into the neural network. And they got a result different from what everyone else was doing at the time.
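To make that idea concrete, here is a rough sketch of what feeding hand-coded features and raw pixels into the same network can look like. This is an illustration only: the layer sizes, the 185-feature vector, and the toy usage at the end are assumptions for the example, not ImageVision’s actual architecture.

```python
# Illustrative sketch: a hybrid classifier that combines hand-engineered image
# features (e.g., a skin-tone ratio) with features learned directly from pixels.
# Architecture and sizes are assumptions, not ImageVision's real design.
import torch
import torch.nn as nn

class HybridNSFWClassifier(nn.Module):
    def __init__(self, num_hand_features: int = 185):
        super().__init__()
        # Small convolutional stack that learns features from the raw pixels.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The classifier head sees both the learned and the hand-coded features.
        self.head = nn.Sequential(
            nn.Linear(32 + num_hand_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, pixels: torch.Tensor, hand_features: torch.Tensor) -> torch.Tensor:
        learned = self.cnn(pixels)                           # (batch, 32)
        combined = torch.cat([learned, hand_features], dim=1)
        return torch.sigmoid(self.head(combined))            # probability of NSFW

model = HybridNSFWClassifier()
dummy_pixels = torch.randn(2, 3, 224, 224)
dummy_hand = torch.randn(2, 185)
print(model(dummy_pixels, dummy_hand).shape)  # torch.Size([2, 1])
```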

Now here is the reality — there is no way I’m going to hire people to write nearly 200 manually coded features in this modern age. And yet the problem of child sexual abuse imagery is so important that we need to throw every resource we can at it. It’s not good enough to only prevent 90% of exploitation — we need all the resources we can get.

Like describing an elephant

So we did a study. We asked, “What would happen if we took several image detectors and mixed them together? Would they give a better answer than any alone?”

It’s like the story of several blind men describing an elephant. One describes a tail, another a trunk, another a leg. They each think they know what an elephant looks like, but until they start listening to each other they’ll never actually “see” the real elephant. Likewise in AI, some systems are good at finding one kind of problem and another at another problem. What if we trained another model (called an ensemble) to figure out when each of them is right?
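As a rough illustration of that ensembling idea, the sketch below trains a simple meta-model on top of the scores that several detectors produce for the same images. The detector scores, labels, and the choice of logistic regression are made up for the example; this is not the production ensemble.

```python
# A minimal stacking sketch, assuming each detector already returns a
# probability that an image contains NSFW/abusive content.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_detectors(train_scores: np.ndarray, train_labels: np.ndarray) -> LogisticRegression:
    """train_scores: (n_images, n_detectors) matrix of per-detector scores."""
    # The meta-model learns when to trust which detector.
    meta = LogisticRegression()
    meta.fit(train_scores, train_labels)
    return meta

# Example: three detectors scoring four images (values are made up).
scores = np.array([
    [0.91, 0.40, 0.88],
    [0.10, 0.05, 0.22],
    [0.75, 0.80, 0.30],
    [0.02, 0.15, 0.08],
])
labels = np.array([1, 0, 1, 0])  # 1 = flagged content, 0 = clean
meta_model = stack_detectors(scores, labels)
print(meta_model.predict_proba(scores)[:, 1])  # blended probabilities
```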

For our study, we took 30,000 pornographic images and 55,000 clean images. We used ImageVision images since they are full of really hard cases, the kind of images you might actually see in real life and not just in a lab experiment. The big cloud providers found between 89% and 98% of the 30,000 pornographic images, while precision was around 95-98% for all of them (precision refers to the proportion of positive identifications that are correct).

We were excited that our current system found most of the images, but we wanted to do better.

For the CEASE.ai project, we had to create a bunch of weak learners to find CSAM. Detecting CSAM is such a huge problem that we needed to throw everything we could at it. So we ensembled the weak learners all together to see what would happen — and we got another 1% of accuracy, which is huge because the gap from 97% to 100% is the hardest to close.

But how do you close the last 2%? This is where millions of dollars and decades of experience are critical. This is where we must acquire and merge every trick in the book. When we took ImageVision’s work and merged it with our own, we squeezed out another 1%. And that’s why we bought them.

We’re working on a white paper where we’ll present our findings in further detail. Stay tuned for that soon.

The final result

So if we bought ImageVision, not only would we gain 10 years of experience, multiple patents, and over $10 million in technology, but we would also have the best NSFW detector in the industry. And if we added that into our CSAM detector (along with age detection, face detection, body part detection, and abuse detection), then we could push that accuracy even higher and hopefully save more kids from the horrors of abuse. Spending money to solve this problem was a no-brainer for us.

Today, we’re on the path to making AI invisible.


Learn more about Priebe’s groundbreaking vision of artificial intelligence in an on-demand webinar. He shares more details about the acquisition, CEASE.ai, and the content moderation trends that will dominate 2019. Register to watch the webinar here.

Further reading:

Part One of The Future of Image Moderation: Why We’re Creating Invisible AI
Official ImageVision acquisition announcement
Learn about CSAM detection with CEASE.ai on our site

The Future of Image Moderation: Why We’re Creating Invisible AI (Part One)

In December and early January, we teased exciting Two Hat news coming your way in the new year. Today, we’re pleased to share our first announcement of 2019 — we have officially acquired ImageVision, an image recognition and visual search company. With the addition of ImageVision’s groundbreaking technology, we are now poised to provide the most accurate NSFW image moderation service in the industry.

We asked Two Hat CEO and founder Chris Priebe to discuss the ambitious technology goals that led to the acquisition. Here is part one of that discussion:

The future of AI is all about quality. Right now the study of images is still young. Anyone can download TensorFlow or PyTorch, feed it a few thousand images, and get a model that gets things right 80-90% of the time. People are excited about that because it seems magical – “They fed a bunch of images into a box and it gave an answer that is surprisingly right most of the time!” But even if you get 90% right, you are still getting 10% wrong.

Think of it this way: if you process 10 million images a day, that 10% error rate is a million mistakes. A million times someone tried to upload a picture that was innocent and meaningful to them, and they had to wait for a human to review it. That is one million images humans need to review. We call those false positives.

Worse than false positives are false negatives, where someone uploads an NSFW (not safe for work) picture or video and it isn’t detected. Hopefully, it was a mature adult who saw it. Even if it was an adult, they weren’t expecting to see adult content, so their trust in the site is in jeopardy. They’re probably less likely to encourage a friend to join them on the site or app.

Worse if it was a child who saw it. Worst of all if it was a graphic depiction of a child being abused.

Protecting children is the goal

That last point is closest to our heart. A few years ago we realized that what really keeps our clients awake at night is the possibility someone will upload child sexual abuse material (CSAM; also known as child exploitive imagery, or CEI, and formerly called child pornography) to their platform. We began a long journey to solve that problem. It began with a hackathon where we gathered some of the largest social networks in the world with international law enforcement and academia all in the same room and attempted to build a solution together.

So AI must mature. We need to get beyond a magical box that’s “good enough” and push it until AI becomes invisible. What do I mean by invisible? For us, that means you don’t even notice that there is a filter because it gets it right every time.

Today, everyone is basically doing the same thing, like what I described earlier — label some NSFW images and throw them at the black box. Some of us are opening up the black box and changing the network design to hotrod the engine, but for the most part it’s a world of “good enough”.

Invisible AI

But in the future, “good enough” will no longer be tolerated. The bar of expectation will rise and people will expect it to just work. From that, we expect companies to hyper-specialize. Models will be trained that do one thing really, really well. Instead of a single model that answers all questions, there will be groups of hyper-specialists with a final arbiter over them, deciding how to best blend all their opinions together to make AI invisible.

We want to be at the top of the list for those models. We want to be the best at detecting child abuse, bullying, sextortion, grooming, and racism. We are already top of the market in several of those fields and trusted by many of the largest games and social sharing platforms. But we can do more.

Solving the biggest problems on the internet

That’s why we’ve turned our attention to acquiring. These problems are too big, too important to have a “not built here, not interested” attitude. If someone else has created a model that brings new experience to our answers, then we owe it to our future to embrace every advantage we can get.

Success for me means that one day my children will take for granted all the hard work we’re doing today. That our technology will be invisible.

In part two, Chris discusses why ImageVision was the ideal choice for a technology acquisition, and how he hopes to change the landscape of image moderation in 2019.

Sneak peek:

“It’s like the story of several blind men describing an elephant. One describes a tail, another a trunk, another a leg. They each think they know what an elephant looks like, but until they start listening to each other they’ll never actually “see” the real elephant. Likewise in AI, some systems are good at finding one kind of problem and another at another problem. Could we train another model (called an ensemble) to figure out when each of them is right?”

 

Read the official ImageVision acquisition announcement
Learn about CSAM detection with CEASE.ai on our site

New Research Suggests Sentiment Analysis is Critical in Content Moderation

At Two Hat, research is the foundation of everything we do. We love to ask big questions and seek even bigger answers. And thanks to a generous grant from Mitacs, we’ve partnered with leading Canadian universities to conduct research into the subjects that we’re most passionate about — from protecting children by detecting child sexual abuse material to developing new and innovative advances in chat moderation.

Most recently, Université Laval student researcher Éloi Brassard-Gourdeau and professor Richard Khoury asked the question “What is the most accurate and effective way to detect toxic (also known as disruptive) behavior in online communities?” Specifically, their hypothesis was:

“While modifying toxic content and keywords to fool filters can be easy, hiding sentiment is harder.”

They wanted to see if sentiment analysis was more effective than keyword detection when identifying disruptive content like abuse and hate speech in online communities.

In Impact of Sentiment Detection to Recognize Toxic and Subversive Online Comments, Brassard-Gourdeau and Khoury analyzed over a million online comments using one Reddit and two Wikipedia datasets. The results show that sentiment information helps improve toxicity detection in all cases. In other words, the general sentiment of a comment, whether positive or negative, is a more effective measure of toxicity than keyword analysis alone.

But the real boost came when they used sentiment analysis on subversive language; that is, when users attempted to mask toxic language using L337 5p33k, deliberate misspellings, and word substitutions. According to the study, “The introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone… using sentiment information improved toxicity detection by as much as 3%.”
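As a simplified sketch of the underlying idea, the example below appends a sentiment score to ordinary word-level features before training a toxicity classifier. The tiny dataset, the stand-in sentiment function, and the TF-IDF plus logistic regression pipeline are assumptions for illustration; the paper itself uses its own models and the Reddit and Wikipedia datasets.

```python
# Sketch: combine keyword features with a sentiment signal for toxicity detection.
# All data and the sentiment stand-in below are illustrative assumptions.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you are the worst player ever",
    "i hate you, uninstall the game",
    "great game last night, well played",
    "thanks for the help with that quest",
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = not toxic (toy data)

def sentiment_score(text: str) -> float:
    # Stand-in for any sentiment model; returns a value in [-1, 1].
    negative_words = {"worst", "hate", "stupid"}
    return -1.0 if any(w in text.lower() for w in negative_words) else 0.0

vectorizer = TfidfVectorizer()
word_features = vectorizer.fit_transform(comments)
sentiment_features = csr_matrix([[sentiment_score(c)] for c in comments])

# Concatenate keyword features with the sentiment signal, then train.
combined = hstack([word_features, sentiment_features])
clf = LogisticRegression().fit(combined, labels)
```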

You may be asking yourself, why does this matter? With chat moderation becoming more common in games and social apps, more users will find creative ways to subvert filters. Even the smartest content moderation tools on the market (like Two Hat’s Community Sift, which uses a unique AI called Unnatural Language Processing to detect complex manipulations) will find it increasingly difficult to flag disruptive content. As an industry, it’s time we started looking for innovative solutions to a problem that will only get harder over time.

In addition to asking big questions and seeking even bigger answers, we have several foundational philosophies at Two Hat that inform our technology. We believe that computers should do computer work and humans should do human work, and that an ensemble approach is key to exceptional AI.

This study validates our assumption that using multiple data points and multiple models in automated moderation algorithms is critical to boosting accuracy and ensuring a better user experience.

“We are in an exciting time in AI and content moderation,” says Two Hat CEO and founder Chris Priebe. “I am so proud of our students and the hard work they are doing. Every term they are pushing the boundaries of what is possible. Together, we are unlocking more and more pieces to the recipe that will one day make an Internet where people can share without fear of harassment or abuse.”

To learn more, check out the full paper here.

Keep watching this space for more cutting-edge research. And stay tuned for major product updates and product launches from Two Hat in 2019!

Will This New AI Model Change How the Industry Moderates User Reports Forever?

Picture this:

You’re a moderator for a popular MMO. You spend hours slumped in front of your computer reviewing a seemingly endless stream of user-generated reports. You close most of them — people like to report their friends as a prank or just to test the report feature. After the 500th junk report, your eyes blur over and you accidentally close two reports containing violent hate speech — and you don’t even realize it. Soon enough, you’re reviewing reports that are weeks old — and what’s the point in taking action after so long? There are so many reports to review, and never enough time…

Doesn’t speak to you? Imagine this instead:

You’ve been playing a popular MMO for months now. You’re a loyal player, committed to the game and your fellow players. Several times a month, you purchase new items for your avatar. Recently, another player has been harassing you and your guild, using racial slurs, and generally disrupting your gameplay. You keep reporting them, but it seems like nothing ever happens – when you log back in the next day, they’re still there. You start to think that the game creators don’t care about you – are they even looking at your reports? You see other players talking about reports on the forum: “No wonder the community is so bad. Reporting doesn’t do anything.” You log on less often; you stop spending money on items. You find a new game with a healthier community. After a few months, you stop logging on entirely.

Still doesn’t resonate? One last try:

You’re the General Manager at a studio that makes a high-performing MMO. Every month your Head of Community delivers reports about player engagement and retention, operating costs, and social media mentions. You notice that operating costs go up while the lifetime value of a user is going down. Your Head of Community wants to hire three new moderators. A story in Wired is being shared on social media — players complain about rampant hate speech and homophobic slurs in the game that appear to go unnoticed. You’re losing money and your brand reputation is suffering — and you’re not happy about it.

The problem with reports

Most social platforms give users the ability to report offensive content. User-generated reports are a critical tool in your moderation arsenal. They surface high-risk content that you would otherwise miss, and they give players a sense of ownership over and engagement in their community.

They’re also one of the biggest time-wasters in content moderation.

Some platforms receive thousands of user reports a day. Up to 70% of those reports don’t require any action from a moderator — yet moderators have to review them all. And those reports that do require action often contain content that is so obviously offensive that a computer algorithm should be able to detect it automatically. In the end, reports that do require human eyes to make a fair, nuanced decision often get passed over.

Predictive Moderation

For the last two years, we’ve been developing and refining a unique AI model to label and action user reports automatically, mimicking a human moderator’s workflow. We call it Predictive Moderation.

Predictive Moderation is all about efficiency. We want moderation teams to focus on the work that matters — reports that require human review, and retention and engagement-boosting activities with the community.

Two Hat’s technology is built around the philosophy that humans should do human work, and computers should do computer work. With Predictive Moderation, you can train our innovative AI to do just that — ignore reports that a human would ignore, action on reports that a human would action on, and send reports that require human review directly to a moderator.
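A minimal sketch of that workflow might look like the following, assuming a backlog of past reports labeled with the decision a moderator made (close, action, or escalate). The toy data, the label names, and the TF-IDF plus logistic regression pipeline are illustrative assumptions, not Two Hat’s production model.

```python
# Sketch: train a triage model on historical moderator decisions so new
# reports can be routed automatically. Data and model choices are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: reported chat lines and the moderator's past decision.
reports = [
    "reported for fun lol",
    "he keeps spamming racial slurs at our guild",
    "not sure, this message seems borderline",
    "testing the report button",
]
decisions = ["close", "action", "escalate", "close"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reports, decisions)

# New reports get a predicted decision; only "escalate" goes to a human.
print(model.predict(["stream of slurs again from the same player"]))
```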

What does this mean for you? A reduced workload, moderators who are protected from having to read high-risk content, and an increase in user loyalty and trust.

Getting started 

We recently completed a sleek redesign of our moderation layout (check out the sneak peek!). Clients begin training the AI on their dataset in January. Luckily, training the model is easy — moderators simply review user reports in the new layout, closing reports that don’t require action and actioning on the reports that require it.

Image: chat moderation workflow for user-generated reports (layout subject to change)

“User reports are essential to our game, but they take a lot of time to review,” says one of our beta clients. “We are highly interested in smarter ways to work with user reports which could allow us to spend more time on the challenging reports and let the AI take care of the rest.”

Want to save time, money, and resources? 

As we roll out Predictive Moderation to everyone in the new year, expect to see more information including a brand-new feature page, webinars, and blog posts!

In the meantime, do you:

  • Have an in-house user report system?
  • Want to increase engagement and trust on your platform?
  • Want to prevent moderator burnout and turnover?

If you answered yes to all three, you might be the perfect candidate for Predictive Moderation.

Contact us at hello@twohat.com to start the conversation.


Two Hat CEO and founder Chris Priebe hosts a webinar on Wednesday, February 20th, where he’ll share Two Hat’s vision for the future of content moderation, including a look at how Predictive Moderation is about to change the landscape of chat moderation. Don’t miss it — the first 25 attendees will receive a free Two Hat gift bag!