Two Hat CEO Chris Priebe discusses acquiring visual search pioneers ImageVision, and his plans to change the landscape of automated image moderation.
Researchers from Université Laval and Two Hat Security have determined that sentiment detection increases accuracy when moderating user-generated online content.
Up to 70% of user-generated reports require no action from a moderator, yet moderators must review every one, wasting time and resources. Find out how Predictive Moderation changes everything.
Wondering if you should use AI or humans to moderate images in your app or site? Find out how to retain users and protect the community using both techniques.
Two Hat Security is presenting two workshops this year, both providing investigators with a deeper understanding of the vital role artificial intelligence plays in the future of abuse investigations.
The brightest minds in law enforcement, academia, and the tech industry came together to build technology that can identify child sexual abuse material (CSAM).
On July 6th and 7th, experts from law enforcement, academia, and the tech sector will gather in Vancouver, BC to build technology that will detect and stop child sexual abuse material (CSAM).
Can you think of a better use of artificial intelligence?
Offensive images are a lot like computer viruses. Instead of maintaining your own set of threat signatures, why not use a third-party service and reduce the effort required to keep those images at bay?