The Other Reason You Should Care About Online Toxicity

In these divisive and partisan times, there seems to be one thing we can all agree on, regardless of party lines — online toxicity sucks.

Earlier this week Lily Allen announced that she was leaving Twitter. When you read her recent thread about her devastating early labor in 2010, it’s not hard to see why.

Does anyone want their social feeds to be peppered with hate speech or threats? Does anyone like logging into their favorite game and being greeted with a barrage of insults? And does anyone want to hear another story about cyberbullying gone tragically, fatally wrong? And yet we allow it to happen, time and time again.

The human cost of online abuse is obvious. But there’s another hidden cost when you allow trolls and toxicity to flourish in your product.

Toxicity is poison — and it will eat away at your profits.

Every company faces a critical decision when creating a social network or online game. Do you take steps to deal with toxicity from the very beginning? Do you proactively moderate the community to ensure that everyone plays nice?

Or — do you do nothing? Do you launch your product and hope for the best? Maybe you build a Report feature so users can report abuse or harassment. Maybe you build a Mute button so players can ignore other players who post offensive content. Sure, it’s a traditional approach to moderation, but does it really work?

If you’re not sure what to choose, you’re not alone. The industry has grappled with these questions for years now.

We want to make it an easy choice. We want it to be a no-brainer. We want doing something to be the industry standard. We believe that chat is a game mechanic like any other, and that community balance is as important as game balance.

When you choose to do something, not only do you build the framework for a healthy, growing, loyal community — you’ll also save yourself a bunch of money in the process.

In this series of posts, we’ll introduce two fictional online games, AI Warzone and Trials of Serathian. We’ll people them with communities, each a million users strong. One game will choose to proactively moderate the community, and the other will do nothing. Think of it as an A/B test.

Then, armed with real-world statistics, our own research, and a few brilliant data scientists, we’ll perform a bit of math magic. We’ll toss them all into a hat (minus the data scientists; they get cranky when we try to put them in hats), say the magic words, wave our wands, and — tada! — pull out a formula. We’ll run both games’ profits, user churn, and acquisition costs through our formula to determine, once and for all, the cost of doing nothing.

But first, let’s have a bit of fun and delve into our fictional communities. Who is Serathian and why is he on trial? And what kind of virtual battles can one expect in an AI Warzone?

Join us tomorrow for our second installment in this four-part series: A Tale of Two Online Communities.

 

Originally published on Medium



Baking Goodies and Wearing Pink to Support Anti-Bullying

Photo from the original Pink Shirt Day, started by Nova Scotia high school students

Here in Canada, we have a decade-long annual tradition of wearing pink shirts as a sign of solidarity against bullying. The tradition was started by two teenagers in Nova Scotia named Travis Price and David Sheppard, who heard that a younger student was being bullied for wearing a pink shirt to school.

Bullying isn’t just an issue in Canada (a nation mistakenly known as an overly polite and apologetic country). Young people around the world face overly aggressive bullies who attack their self-esteem. There are numerous reports of young people harming themselves when they don’t know how to cope with the endless bombardment from bullies. Teachers and parents don’t always know how to deal with the situation, offering ‘quick fixes’ like forcing the two participants (the bully and the bullied) to just ‘hug it out’.

Spoiler alert: bullying isn’t exclusive to young people, and it isn’t exclusive to schools, either. It’s a major problem at home, in the office, and across the internet (which is why we built Community Sift). Teaching young people how to become more resilient has never been more important.

The team that bakes together stays together!

Today, our team at Two Hat Security hosted a bake sale to raise funds to support anti-bullying initiatives across Canada. We raised over $400 at our little sale, and our team is chipping in to bring the total to a nice big $1,000.

Hooray!

All the proceeds will be donated to the CKNW Orphans’ Fund, which disburses the funds to different child and youth programs. We’re excited about this, as these programs support healthy self-esteem for children and their peers. They teach empathy, compassion, and kindness – three things close to any loving parent’s heart.

Every bake sale worth its weight in sugar needs to carry vegan and gluten-free options, of course!

 

Check out the interview courtesy of the Capital News:

VIDEO: Spread some kindness, enjoy a sweet treat

The theme of this year’s Pink Shirt Day is the “Pink Shirt Promise”, which encourages people to share kindness with others. We used up some precious whiteboard space to encourage each other in the office.

And our UX aficionado Jesse even whipped up this fun Pink Shirt Day virtual t-shirt maker to help spread the word. Fun!

Here’s to another successful Pink Shirt Day. In the meantime, we’re going to get back to work, since we spend all year working to end bullying online! There’s still so much to do, after all. Perhaps we should consider wearing pink shirts as our team uniform every day?

To Mark Zuckerberg

Re: Building Global Communities

“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.” — Mark Zuckerberg

This is hard.

I built a company (Two Hat Security) that’s also contracted to process 4 billion chat messages, comments, and photos a day. We specifically look for high-risk content in real-time, such as bullying, harassment, threats of self-harm, and hate speech. It is not easy.

“There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

I must ask — why wait until cases get reported?

If you wait for a report to be filed, hasn’t someone already been hurt? Some things that are reported can never be unseen. Some people, like Amanda Todd, can never have that image retracted. Others post when they are enraged or drunk, and those words, once released, cannot be taken back. The saying goes, “What happens in Vegas stays in Vegas, and on Facebook, Twitter, and Instagram, forever,” so maybe some things should never go live. What if you could proactively create a safe global community for people by preventing (or pausing) personal attacks in real-time instead?

This, it appears, is key to achieving the next point in your vision.

“How do we help people build an informed community that exposes us to new ideas and builds common understanding in a world where every person has a voice?”

One of the biggest challenges to free speech online in 2017 is that we allow a small group of toxic trolls the ‘right’ to silence a much larger group of people. Ironically, these users’ claim to free speech often ends up becoming hate speech and harassment, destroying the opportunity for anyone else to speak up, much like bullies in the lunchroom. Why would someone share their deepest thoughts if others would just attack them? The dream of real conversations gets lost beneath a blanket of fear. Instead, we get puppy pictures, non-committal thumbs up, and posts that are ‘safe.’ If we want to create an inclusive community, people need to be able to share ideas and information online without fear of abuse from toxic bullies. I applaud your manifesto, as it calls this out and calls on us all to work together to achieve this.

But how?

Fourteen years ago, we both set out to change the social networks of our world. We were both entrepreneurial engineers, hacking together experiments using the power of code. It was back in the days of MySpace, Friendster, and, later, Orkut. We had to browse to every single friend we had on MySpace just to see if they had written anything new. To solve this I created myTWU, a social stream of all the latest blogs and photos of fellow students, alumni, and sports teams on our internal social tool. Our office was in charge of building online learning, but we realized that education is not just about ideas; it is about community. It was not enough to dump curriculum online for independent study; people needed places of belonging.

A year later, “The Facebook” came out. You reached beyond the walls of one university and, over time, opened it to the world.

So I pivoted. As part of our community, we had a little chat room where you could waddle around and talk to others. It was a skin of a little experiment my brother was running. He was caught by surprise when it grew to a million users, which showed how much people long for community and places of belonging. In those days chat rooms were the dark part of the web, and it was nearly impossible to keep up with the creative ways users tried to hurt each other.

So I helped my brother code the safety mechanisms for his little social game. That little social game grew to become a global community with over 300 million users, and Disney bought it in 2007. I remember huddling in my brother’s basement, rapidly building the backend to fix the latest trick to get around the filter. Club Penguin was huge.

After a decade of kids breaking the filter and of building tools to moderate the millions upon millions of user reports, I had a breakthrough. By then I was working in security at Disney, with a job to hack everything with a Mouse logo on it. In my training, we learned that if someone DDoSes a network or tries to break the system, you find a signature of what they are doing and turn up the firewall against that.

“What if we did that with social networks and social attacks?” I thought.

I’ve spent the last five years building an AI system that applies those signatures and firewalls to social content. As we process billions of messages with Community Sift, we build reputation scores in real-time. We know who the trolls are — they leave digital signatures everywhere they go. Moreover, I can adjust the AI to turn up the sensitivity only where it counts. In doing so we drastically dropped false positives and opened up communication for the masses, while still detecting the highest-risk content when it matters.
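
To make that concrete, here is a minimal sketch of the reputation idea in Python. Everything in it — the names, the severity levels, the strike counting — is invented for illustration; our production system is far more nuanced.

    from collections import defaultdict

    # Hypothetical severity levels a classifier might assign to a message.
    LOW, MEDIUM, HIGH = 1, 2, 3

    class ReputationFilter:
        """Users who repeatedly post high-risk content get a tighter
        threshold applied to their future messages."""

        def __init__(self, base_threshold=HIGH):
            self.base_threshold = base_threshold
            self.strikes = defaultdict(int)  # crude stand-in for a reputation score

        def allow(self, user_id, severity):
            # Turn up the sensitivity only for users with a bad track record.
            threshold = self.base_threshold - min(self.strikes[user_id], 1)
            if severity >= threshold:
                self.strikes[user_id] += 1
                return False  # hold the message instead of posting it
            return True

    f = ReputationFilter()
    print(f.allow("troll42", HIGH))    # False: blocked, and a strike is recorded
    print(f.allow("troll42", MEDIUM))  # False: the threshold has tightened for this user
    print(f.allow("newbie", MEDIUM))   # True: a first-time borderline message passes

The point is not the twenty lines of Python; it is that reputation lets you loosen the filter for the vast majority while tightening it for the repeat offenders.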

I had to build whole new AI algorithms to do this, since traditional methods only hit 90–95% accuracy. That is great for most AI tasks, but when it comes to cyberbullying, hate speech, and suicide, the stakes are too high for the current state of the art in NLP.

“To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we’ve built infrastructure to give their friends and community tools that could save their life.”

Since Two Hat is a security company, we are uniquely positioned to prevent harm with the largest vault of high-risk signatures, like grooming conversations and CSAM (child sexual abuse material). In collaboration with our partners at the RCMP (Royal Canadian Mounted Police), we are developing a system to predict and prevent child exploitation before it happens, complementing the efforts our friends at Microsoft have made with PhotoDNA. With CEASE.ai, we are training AI models to find CSAM, and we have lined up millions of dollars of Ph.D. research to give students world-class experience working with our team.

“Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

It is incredible what deep learning has accomplished in the last few years. And although we have achieved near-perfect recall in finding pornography with our current work, there is an explosion of new topics we are training on. Further, the subtleties you outline are key.

I am calling for two changes to resolve this:

  1. I call on networks to trust that their users have resilience. It is not imperative to find everything, just the worst. If all content can be sorted from ‘maybe bad’ to ‘absolutely bad’, we can draw a line in the sand: above it are the things that cannot be unseen, and below it are the things the community will find. In so doing we don’t have to wait for technology to reach perfection, nor wait for users to report things we already know are bad. Let computers do what they do well and let humans deal with the rest (see the sketch after this list).
  2. I call on users to be patient. Yes, sometimes, in our ambition to prevent harm, we may mistakenly flag a Holocaust photo. We know this is frustrating, but we ask for your patience. Computer vision is like a child still learning: a child that sees such an image for the first time is deeply impacted and concerned. Join us in reporting these problems and helping train the system to mature and discern.
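
Here is a rough sketch of what that line in the sand could look like. The scores, thresholds, and routing labels below are placeholders I made up for this letter, not our production rules.

    # Hypothetical severity scores on a 0-1 scale.
    AUTO_BLOCK_LINE = 0.9   # "cannot be unseen": computers act immediately
    REVIEW_LINE = 0.6       # ambiguous: queue for human moderators

    def route(severity):
        if severity >= AUTO_BLOCK_LINE:
            return "blocked"        # let computers do what they do well
        if severity >= REVIEW_LINE:
            return "human_review"   # people handle the grey area
        return "published"          # trust the community's resilience and reports

    for text, score in [("vile threat", 0.97), ("borderline insult", 0.7), ("gg wp", 0.05)]:
        print(text, "->", route(score))

Where exactly the two lines sit is a policy decision for each community; the technology only has to be good enough to sort content, not perfect.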

However, you are right that many more strides need to happen to get this to where it needs to be. We need to call on the world’s greatest thinkers. Of all the hard problems to solve, our next one is child pornography (CSAM). Some things cannot be unseen. There are things that, when seen, re-victimize over and over again. We are the first to gain access to hundreds of thousands of CSAM images and to train deep learning models on them with CEASE.ai. We are pouring millions of dollars into this and putting the best minds on the topic. It is a problem that must be solved.

And before I move on, I want to give a shout-out to your incredible team, with whom I have had the chance to volunteer at hack-a-thons and who have helped me think through how to get this done. Your company’s commitment to social good is outstanding, and your people have helped many other companies and not-for-profits.

“The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible. The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them.”

That is cool. I have a couple of the main pieces needed for that already built, if you need them.

“The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity?”

I had the chance to swing by Twitter 18 months ago. I took their sample firehose and have been running it through our system. We label each message against 1.8 million of our signatures, and we put together a quick demo of what it would be like if you could turn off the toxicity on Twitter. It shows low-, medium-, and high-risk content. I would not expect to see anything severe on there, as they have recently tried to clean it up.

My suggestion to Twitter was to give each user the option to choose what they want to see. First, a global policy removes clear infractions against the terms of use: content that can never be unseen, such as gore or CSAM. After the global policy is applied, each user can then choose their own risk tolerance level.
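
A toy version of that two-stage policy might look like the snippet below. The topics, risk labels, and thresholds are assumptions for the example, not Twitter’s or Community Sift’s real categories.

    # Content the global policy removes for everyone, regardless of settings.
    GLOBAL_BLOCK = {"csam", "gore"}

    RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

    def visible_to(message, user_tolerance):
        """Return True if this message should appear in the user's feed."""
        if message["topic"] in GLOBAL_BLOCK:
            return False  # the global policy always wins
        return RISK_LEVELS[message["risk"]] <= RISK_LEVELS[user_tolerance]

    feed = [
        {"text": "mild trash talk", "risk": "low", "topic": "general"},
        {"text": "slur-laden rant", "risk": "high", "topic": "hate_speech"},
    ]
    print([m["text"] for m in feed if visible_to(m, user_tolerance="medium")])

Raise the user’s tolerance to "high" and the rant comes back; add its topic to the global block and it never will, no matter what the user chooses. That is the whole design.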

We are committed to helping you and the Facebook team with your mission to build a safe, supportive, and inclusive community. We are already discussing ways we can help your team, and we are always open to feedback. Good luck on your journey to connect the world, and I hope we cross paths next time I am in the Valley.

Sincerely,
Chris Priebe
CEO, Two Hat Security

 

Originally published on Medium 

The Most Important Use of Artificial Intelligence in Human History?

Can you think of a better use of artificial intelligence than the elimination of child exploitation?

The amount of online child sexual abuse material (CSAM) is reaching alarming proportions. As technology has evolved, the frightening reality is that online child sexual abuse has evolved along with it.

According to the RCMP, the number of child sexual abuse cases in Canada grew from 14,951 in 2015 to over 27,361 in 2016. Current research indicates that as many as 22% of teenage girls have shared semi-nude photos of themselves online. The magnitude of this problem is enormous.

In honor of Safer Internet Day, we are proud to announce that we are developing the world’s first artificial intelligence software to detect and prevent the spread of child sexual abuse material online – CEASE.ai.

In collaboration with the RCMP, we are training a computer vision algorithm to uncover new, uncatalogued CSAM, with the goal of stopping it from ever being posted online.

This cutting-edge artificial intelligence system will be the first in the world to accurately identify child sexual abuse images and stop them from being posted online.

We will be partnering with PhD students from leading Canadian universities to develop this computer vision system. Student researchers from the University of Manitoba, Simon Fraser University, and Laval University will be working with us as part of a five-year program coordinated by Mitacs, a government-funded agency working to bridge the gap between research and business.

This $3 million collaboration between Two Hat Security and Mitacs will support the development of the cutting-edge security software, with up to 200 people working on the project over the next five years.

“Of all the issues we are solving to keep the Internet safe, this is probably the most important,” said Two Hat CEO Chris Priebe, noting that stopping CSAM is a challenge every child exploitation unit faces. “Everyone would like to solve it, but nobody wants to touch it,” he said.

“It would be impossible to do this without the support of Mitacs,” said Priebe. “We are working in the darkest corner of the Internet that nobody wants to touch. By connecting with student interns, we are tapping into courageous researchers at the top of their respective fields who are not afraid to tackle the impossible.”

“Existing software tools search the Internet for known images previously reported to authorities as CSAM. Our product, CEASE.ai, will sit on the Internet and accurately scan for images that exploit children as they are uploaded, with the ultimate goal of stopping them from being posted – which is why global law enforcement and security agencies are watching closely,” said Two Hat Head of Product Development Brad Leitch.

“This is a rampant global problem,” said Arnold Guerin, a sergeant with the RCMP. “The ability to successfully detect and categorize newly distributed child sexual materials will be a game-changer in our fight against the online victimization of children.”

We can think of no better use of artificial intelligence than to protect the innocence of youth.

 

Quick Facts:

  • Mitacs is a national, not-for-profit organization that has designed and delivered research and training programs in Canada for 16 years.
  • Working with 60 universities, thousands of companies, and both federal and provincial governments, Mitacs builds partnerships that support industrial and social innovation in Canada.
  • Open to all disciplines and all industry sectors, projects can span a wide range of areas, including manufacturing, business processes, IT, social sciences, design, and more.

Learn more:

For information about Mitacs and its programs, visit http://mitacs.ca/newsroom.

For media information or to set up interviews:

Gail Bergman or Elizabeth Glassen
Gail Bergman PR
Tel: (905) 886-1340 or (905) 886-4091
Email: info@gailbergmanpr.com