The Changing Landscape of Automated Content Moderation in 2019


Is 2019 the year that content moderation goes mainstream? We think so.

Things have changed a lot since 1989, when Tim Berners-Lee invented the World Wide Web. A few short years later, the world started to surf the information superhighway – and we’ve barely stopped to catch our collective breath since.

Learn about the past, present, and future of online content moderation in an upcoming webinar

The internet has given us many wonderful things over the last 30 years – access to all of recorded history, an instant global connection that bypasses country, religious, and racial lines, Grumpy Cat – but it’s also had unprecedented and largely unexpected consequences.

Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard – it all adds up. Now that more than half of Earth’s population is online (4 billion people as of January 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

The change started two years ago.

Mark Zuckerberg’s 2017 manifesto hinted at what was to come:

“There are billions of posts, comments, and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

In 2018, the industry finally realized that it was time to find solutions to the problems outlined in Facebook’s manifesto. The question was no longer, “Should we moderate content on our platforms?” and instead became, “How can we better moderate content on our platforms?”

Learn how you can leverage the latest advances in content moderation in an upcoming webinar

The good news is that in 2019, we have access to the tools, technology, and years of best practices needed to make the dream of a safer internet a reality. At Two Hat, we’ve been working behind the scenes for nearly seven years (alongside some of the biggest games and social networks in the industry) to build technology that auto-moderates content so accurately that we’re on the path to “invisible AI” – filters so effective you don’t even notice them working in the background.

On February 20th, we invite you to join us for a very special webinar, “Invisible AI: The Future of Content Moderation”. Two Hat CEO and founder Chris Priebe will share his groundbreaking vision of artificial intelligence in this new age of chat, image, and video moderation.

In it, he’ll discuss the past, present, and future of content moderation, explaining why the industry’s attitude toward moderation shifted in 2018 and highlighting the trends to watch in 2019.

He’ll also share exclusive, advance details about Two Hat’s big announcements (see the links at the end of this post).

We hope you can make it. Give us 30 minutes of your time, and we’ll give you all the information you need to make 2019 the year of content moderation.

P.S. Another reason you don’t want to miss this – the first 25 attendees will receive a free gift! ;)


Read about Two Hat’s big announcements:

Two Hat Is Changing the Landscape of Content Moderation With New Image Recognition Technology

Two Hat Leads the Charge in the Fight Against Child Sexual Abuse Images on the Internet

Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

 
