CEASE is the newest project from Two Hat Security, creators of Community Sift, a high-risk content detection system designed for social products. CEASE takes content detection to the next level: a cutting-edge artificial intelligence model that uses multiple artificial neural networks (ANNs) to detect child sexual abuse material (CSAM) with unprecedented accuracy.
CEASE is currently being used by the RCMP in British Columbia. While we don’t have a firm date, we aim to make a beta available to law enforcement agencies by August 31, 2017.
We would love to discuss collaborating with you to ease your team’s burden. Please contact us at email@example.com for more information.
The second iteration of CEASE will be available for social networks. With this version, our goal isn’t just detecting CSAM — we want to stop it before it’s shared across online communities.
We cannot solve this problem alone. We’re currently working with major law enforcement agencies and leading universities, and we welcome collaboration across all disciplines. Are you a research student, PhD candidate, machine learning expert, or simply passionate about protecting children’s innocence? We would love to hear from you. We can be reached at firstname.lastname@example.org.
Our system uses an ensemble of text and image classification models with a carefully calibrated ontology to achieve high detection accuracy. Leveraging the latest breakthroughs in artificial intelligence, our model has been trained on millions of verified CSAM images in the RCMP database. Most companies are required by law to delete such images after 90 days, but we have access to a permanent dataset, which allows us to train our model to an unprecedented level of accuracy.
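To illustrate the ensemble idea in the abstract, here is a minimal sketch of how scores from separate text and image classifiers might be combined into one decision. The model names, weights, and threshold below are illustrative assumptions only, not details of the actual CEASE system:

```python
# Hypothetical sketch of ensemble scoring. The weights and threshold are
# illustrative assumptions; a real system would calibrate them on data.
from dataclasses import dataclass

@dataclass
class ModelScore:
    name: str      # which classifier produced the score (hypothetical names)
    score: float   # probability in [0, 1] that the content is abusive
    weight: float  # calibrated weight assigned to this model

def ensemble_score(scores: list[ModelScore]) -> float:
    """Weighted average of the individual classifier probabilities."""
    total_weight = sum(m.weight for m in scores)
    return sum(m.score * m.weight for m in scores) / total_weight

def flag_for_review(scores: list[ModelScore], threshold: float = 0.8) -> bool:
    """Flag content when the combined score crosses the threshold."""
    return ensemble_score(scores) >= threshold

# Example: an image classifier and a text classifier vote on one item.
item = [
    ModelScore("image_ann", score=0.92, weight=0.6),
    ModelScore("text_ann", score=0.75, weight=0.4),
]
print(flag_for_review(item))  # combined score 0.852 >= 0.8, so True
```

Weighted soft voting like this lets a strong signal from one modality (e.g. the image model) carry a borderline signal from the other, rather than requiring both classifiers to agree independently.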
Absolutely. Child abuse is a global issue, so CEASE is a global project.