Three Powerful Lessons We Learned at the Protecting Innocence Hackathon

You rarely hear about them, but every day brave investigators across the globe review the most horrific stories and images you could ever imagine. This content is called child sexual abuse material (known as CSAM in the industry), and it hides in the dark corners of the internet, waiting to be found.

The scope is dizzying. The RCMP-led National Child Exploitation Coordination Centre (NCECC) alone received 27,000 cases in 2016. And right now, it’s nearly impossible for officers to review those cases fast enough to prioritize the ones that require their immediate attention.

That’s why, on July 6th and 7th, volunteers from law enforcement, academia, and the tech industry came together to collaborate on solving this problem, perhaps the biggest problem of our time — how do we quickly, accurately, and efficiently detect online CSAM? Artificial intelligence gets smarter and more refined every day. How can we leverage those breakthroughs to save victimized children and apprehend their abusers?

Along with event co-sponsors the RCMP, Microsoft, and Magnet Forensics, we had a simple goal at the Protecting Innocence Hackathon: to bring together the brightest minds in our respective industries to answer these questions.

We ended up learning a few valuable lessons along the way.

It starts with education

Participants across all three disciplines learned from each other. Attendees from the tech industry and academia were given a crash course in grooming and luring techniques (as well as the psychology behind them) from law enforcement, the people who study them every day.

Make no mistake, these were tough lessons to learn — but with a deeper understanding of how predators attract their victims, we can build smarter, more efficient systems to catch them.

Law enforcement, in turn, studied the techniques of machine learning and artificial intelligence, gaining a deeper understanding of the challenges facing data scientists, not to mention the need for robust and permanent datasets.

It’s crucial that we learn from each other. But that’s just the first step.

Nothing important happens without collaboration

Too often our industries are siloed, with every company, university, and agency working on a different project. Bringing professionals together from across these disciplines and encouraging them to share their diverse expertise, without reservations or fear, was a huge accomplishment, and an important lesson.

This isn’t a problem that can be solved alone. This is a 25,000,000-images-a-year problem. This is a problem that crosses industry, cultural, and country lines.

If we want to protect the innocence of children, we have a responsibility to be transparent and collaborative.

Just do it

Education and collaboration are commendable and necessary — but they don’t add up to much without actual results. Once you have the blueprints, you have no excuse not to build.

The great news? The five teams and 60+ participants made real, tangible progress.

Collectively, the teams built the following:

  • A proposed standard for internationally classifying and annotating child sexual exploitation images and text
  • A machine learning aggregation blueprint for both text and image classification
  • Machine learning models to detect sexploitation conversations, as well as image detection models for age, anime, indoor and outdoor settings, nudity, and CSAM (a simplified sketch of how such signals might be combined appears after this list)
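
As a purely illustrative sketch (not the blueprint or models the teams actually produced), the Python snippet below shows one way signals from several specialised image classifiers, here hypothetical age, nudity, anime, and scene models, could be aggregated into a single triage score for investigators. The signal names, weights, and threshold handling are all assumptions made for illustration.

```python
# Illustrative only: hypothetical signal names and hand-picked weights,
# not the hackathon's aggregation blueprint or trained models.
from dataclasses import dataclass


@dataclass
class ImageSignals:
    """Scores in [0, 1] from independent, hypothetical classifiers."""
    minor_age: float     # likelihood the person depicted is a minor
    nudity: float        # likelihood of nudity
    anime: float         # likelihood the image is drawn or animated
    indoor_scene: float  # likelihood of an indoor setting


def aggregate_risk(signals: ImageSignals) -> float:
    """Combine individual signals into one triage score.

    A real system would learn these weights from labelled data;
    the values here only show the shape of the aggregation.
    """
    weights = {
        "minor_age": 0.45,
        "nudity": 0.35,
        "anime": 0.05,
        "indoor_scene": 0.15,
    }
    score = (
        weights["minor_age"] * signals.minor_age
        + weights["nudity"] * signals.nudity
        + weights["anime"] * signals.anime
        + weights["indoor_scene"] * signals.indoor_scene
    )
    return min(max(score, 0.0), 1.0)


if __name__ == "__main__":
    example = ImageSignals(minor_age=0.9, nudity=0.8, anime=0.1, indoor_scene=0.7)
    print(f"Triage score: {aggregate_risk(example):.2f}")
```

In practice the weighting would itself be learned, for example by a meta-classifier trained on the individual model outputs, which is the kind of design an aggregation blueprint aims to standardize.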

We cannot overstate the importance of these achievements. They are the first steps towards building the most comprehensive and accurate CSAM detection system the world has seen.

Not only that, the proposed global standard for classifying text and images, if accepted, will lead to even more accurate detection.

The future of CSAM detection is now

We actually learned a fourth lesson at the hackathon, perhaps the most powerful of them all: Everyone wants to protect and save children from predators. And they’re willing to work together, despite their differences, to make that happen.

At Two Hat Security, we’re using the knowledge shared by our collaborators to further train our artificial intelligence model CEASE and to refine our grooming and luring detection in Community Sift. And we’ll continue to work alongside our partners and friends in law enforcement, academia, and the tech industry to find smart solutions to big problems.

There are challenges ahead, but if everyone continues to educate, collaborate, and create, projects like CEASE and events like Protecting Innocence can and will make great strides. We hope that the lessons we learned will be applied by any agency, company, or university that hopes to tackle this issue.

Thank you again to our co-sponsors the RCMP, Microsoft, and Magnet Forensics. And to the Chief Enforcers, Code Warriors, and Data Mages who gave their time, their expertise, and their fearlessness to this event — your contributions are invaluable. You’re changing the world.

And to anyone who labors every day, despite the heartbreak, to protect children — thank you. You may work quietly, you may work undercover, and we may never know your names, but we see you. And we promise to support you, in every way we can.

Want more articles like this? Subscribe to our newsletter and never miss an update!



How a Hackathon Hopes to Stop Online Child Exploitation

Every year, the National Center for Missing & Exploited Children (NCMEC) reviews 25,000,000 images containing child sexual abuse material (CSAM).

How do you conceptualize a number like 25,000,000? It’s unthinkable.

For perspective, there are just over 24,000,000 people in Australia. The population of a large country — that’s how many vile, horrific, and disturbing images NCMEC had to review in a year.

In 2016, the Internet Watch Foundation found 57,335 URLs containing confirmed child sexual abuse imagery. 57,335 — that’s about the population of a mid-sized city like Watertown, NY.

Still not convinced this is an epidemic?

How about this? Over half of the children depicted on those 57,335 URLs were aged 10 or younger.

We’ve all been ten years old. Many of us have ten-year-old children, or nieces, or nephews.

Now that’s unthinkable.

Protecting the helpless

These images aren’t going away.

That’s why we’ve spearheaded a hackathon taking place on July 6th and 7th in Vancouver, British Columbia, Canada. Sponsored by the RCMP, Microsoft, Magnet, and Two Hat Security, the Protecting Innocence Hackathon is an attempt to build a bridge between three diverse disciplines — law enforcement, academia, and the technology sector — for the greater good.

The goal is to work together to build technology and global policy that helps stop online child exploitation.

Teams from across all three industries will gather to work on a variety of projects, including:

  • designing a text classification system to identify child luring conversations (a bare-bones sketch of this kind of classifier appears after this list)
  • training an image classifier to identify child exploitation media
  • coordinating on a global protocol for sharing CSAM evidence between agencies
  • and more…
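
To make the first of those projects concrete, here is a minimal sketch of what a text classification pipeline for flagging luring conversations might look like. It assumes scikit-learn is available and uses harmless placeholder phrases; it is not the system any hackathon team built, and a production classifier would need far richer training data, features, and human review.

```python
# Minimal illustration of a text classifier, not the hackathon's actual system.
# Assumes scikit-learn is installed; the example phrases are harmless placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = flag for human review, 0 = benign.
messages = [
    "don't tell your parents about our chats",
    "you can trust me, this is our secret",
    "good luck on your math test tomorrow",
    "what time is soccer practice tonight?",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new snippet; a higher probability means "route to a human reviewer".
new_message = ["this will be our little secret, okay?"]
print(model.predict_proba(new_message)[0][1])
```

A snippet like this only shows the basic shape of the pipeline; the hard part, and the point of the hackathon, is the shared data, annotation standards, and review processes around it.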

We are hopeful that by encouraging teamwork and partnerships across these three vital industries, we will come closer to ridding the internet of online child exploitation.

The beauty of a hackathon is that it’s a tried and true method for hacking away at tough problems in a short period. The time box encourages creativity, resourcefulness, critical thinking — and above all, collaboration.

We’re honored to be working on this. And we’re indebted to the RCMP, Microsoft, Magnet, and all the hackers attending for their selfless contributions.

Protecting the innocent

Forget incomprehensible numbers like 25,000,000 or 57,335. We’re doing this for all the ten-year-olds who’ve been robbed of their innocence.

Today, it’s easier than ever for predators to create and share pictures, videos, and stories. And every time those pictures, videos, and stories are shared, the victim is re-victimized.

It gets worse every year. The Internet Watch Foundation found that reports of child sexual abuse imagery rose by 417% between 2013 and 2015.

At Two Hat Security, we’re doing our part to fight the spread of illegal and immoral content. In collaboration with the RCMP and universities across the country, and with a generous grant from Mitacs, we’re building CEASE, an artificial intelligence model that can detect new CSAM.

But we can’t solve this problem alone.

So this July 6th and 7th, we salute the code warriors, chief enforcers, and data mages who are coming together to make a real difference in the world.

We hope you will too.


***

Want to know more about CEASE? Read about the project here.

We believe in a world free of online bullying, harassment, and child exploitation. Find out how we’re making that vision a reality with our high-risk content detection system Community Sift.

We work with companies like ROBLOX, Animal Jam, and more to protect their communities from dangerous content.

Want more articles like this? Subscribe to our newsletter and never miss an update!



Welcome to the new Two Hat site!

Today’s the day! We’ve been working hard on our new website for months (or has it been years? It feels like years), and it’s finally ready.

The biggest change you’ll notice is that we’ve merged the Two Hat and Community Sift websites. Before, if you wanted to learn about the company you’d have to start on the Two Hat site, then hop over to the Community Sift site to learn about the product. Not anymore! Now they’re together — our vision, our story, and our work, all on one site.

We’ve also added more information about our latest project, CEASE. An artificial intelligence model built to find new child sexual abuse material (CSAM), CEASE is our most important project yet, and we’re thrilled to share our progress with the world.

We encourage you to head over to the CEASE homepage and learn more. It’s not an easy topic to face, but it’s crucial that we tackle it head-on. It’s not a problem we can solve alone, so we encourage you to get involved if you can. Find out more here.

We’ve also updated the Community Sift product page with lots of new content, including case studies (stay tuned for more!) and an all-new FAQ section.

We believe in a world free of online bullying, harassment, and child exploitation. And with this new site, we hope to share that vision and that dream with the world. Thank you for joining us on this journey.

In 2017, let’s build a better internet, together.

 

The Most Important Use of Artificial Intelligence in Human History?

Can you think of a better use of artificial intelligence than the elimination of child exploitation?

The spread of online child sexual abuse material (CSAM) has reached alarming proportions. As technology has evolved, the frightening reality is that online child sexual abuse has evolved along with it.

According to the RCMP, the number of child sexual abuse cases in Canada grew from 14,951 in 2015 to 27,361 in 2016. Current research indicates that as many as 22% of teenage girls have shared semi-nude photos of themselves online. The magnitude of this problem is enormous.

In honor of Safer Internet Day, we are proud to announce that we are developing the world’s first artificial intelligence software to detect and prevent the spread of child sexual abuse material online – CEASE.ai.

In collaboration with the RCMP, we are training a computer vision algorithm to uncover new, uncatalogued CSAM, with the goal of stopping those images from ever being posted online.

This cutting-edge artificial intelligence system will be the first in the world to accurately identify child sexual abuse images and stop them from being posted online.

We will be partnering with PhD students from leading Canadian universities to develop this computer vision technology. Student researchers from the University of Manitoba, Simon Fraser University, and Laval University will be working with us as part of a five-year program coordinated by Mitacs, a government-funded agency working to bridge the gap between research and business.

This $3 million collaboration between Two Hat Security and Mitacs will support the development of the cutting-edge security software, with up to 200 people working on the project over the next five years.

“Of all the issues we are solving to keep the Internet safe, this is probably the most important,” said Two Hat CEO Chris Priebe, noting that stopping CSAM is a challenge every child exploitation unit faces. “Everyone would like to solve it, but nobody wants to touch it,” he said.

“It would be impossible to do this without the support of Mitacs,” said Priebe. “We are working in the darkest corner of the Internet that nobody wants to touch. By connecting with student interns, we are tapping into courageous researchers at the top of their respective fields who are not afraid to tackle the impossible.”

“Existing software tools search the Internet for known images previously reported to authorities as CSAM. Our product, CEASE.ai, will sit on the Internet and accurately scan for images that exploit children as they are uploaded, with the ultimate goal of stopping them from being posted – which is why global law enforcement and security agencies are watching closely,” said Two Hat Head of Product Development Brad Leitch.
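
As context for that distinction, the sketch below shows the "known image" approach in its simplest possible form: compare an upload's hash against a set of hashes already catalogued by authorities. This is a deliberately simplified assumption; real tools use perceptual hashing (for example Microsoft's PhotoDNA) so that resized or re-encoded copies still match, and the classification of previously unseen images that CEASE.ai targets is not represented here.

```python
# Simplified illustration of known-image matching. Real systems use perceptual
# hashes (e.g. PhotoDNA), not the exact SHA-256 matching shown here.
import hashlib
from pathlib import Path

# Placeholder standing in for a database of hashes reported to authorities.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large uploads are not read fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_image(path: Path) -> bool:
    """True if the uploaded file matches an already-catalogued image."""
    return sha256_of(path) in KNOWN_HASHES
```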

“This is a rampant global problem,” said Sergeant Arnold Guerin of the RCMP. “The ability to successfully detect and categorize newly distributed child sexual materials will be a game-changer in our fight against the online victimization of children.”

We can think of no better use of artificial intelligence than to protect the innocence of youth.

 

Quick Facts:

  • Mitacs is a national, not-for-profit organization that has designed and delivered research and training programs in Canada for 16 years.
  • Working with 60 universities, thousands of companies, and both federal and provincial governments, Mitacs builds partnerships that support industrial and social innovation in Canada.
  • Open to all disciplines and all industry sectors, projects can span a wide range of areas, including manufacturing, business processes, IT, social sciences, design, and more.

Learn more:

For information about Mitacs and its programs, visit http://mitacs.ca/newsroom.

For media information and to set up interviews:

Gail Bergman or Elizabeth Glassen
Gail Bergman PR
Tel: (905) 886-1340 or (905) 886-4091
Email: info@gailbergmanpr.com