Is 2017 the year we see a kinder, safer web? It’s starting to look like it. On February 16th, Mark Zuckerberg published his mission statement, Building Global Community. Two weeks later, on March 1st, Facebook rolled out new suicide prevention tools.
It’s great to see a major player like Facebook take on such a challenging subject head-on. They understand that creating a safe and thriving community means being proactive, not waiting until something goes wrong. With these new tools, Facebook is demonstrating its commitment to a safe, supportive, and inclusive community, and we expect to see more features like this in the months to come.
Suicide is one of the biggest issues facing social networks today. The internet is full of self-injury and suicidal language, images, and videos. If we want to build communities where users feel safe and find a place they can call home, then we’re also responsible for ensuring that at-risk users are given help and support when they need it most.
Facebook has over 1.86 billion monthly active users, so they have access to data and resources that other companies can only dream of. But every community deserves to be protected from dangerous content. Is there anything smaller companies can do to keep their users safe?
After years in the industry studying high-risk, dangerous content, we have unique insight into this issue.
There are a few things we’ve learned about self-injury and suicidal language:
Using AI to build an automation workflow is crucial. Suicide happens in real time, so we can’t afford mistakes or after-the-fact reactions. If you can identify suicidal language as it happens, you can use automation to push messages of hope, provide suicide and crisis hotline numbers, and suggest other mental health resources. With their new features, Facebook has taken a huge, bold step in this direction.
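To make that concrete, here is a minimal sketch of what a real-time workflow could look like. The classifier is a stub and the messaging and escalation functions are hypothetical placeholders, not Facebook’s or Community Sift’s actual implementation:

```python
# A minimal, illustrative sketch only. The classifier is a stub and the
# messaging/escalation functions are hypothetical placeholders.

def classify_message(text: str) -> str:
    """Toy classifier: returns 'self_harm_risk', 'bullying', or 'ok'."""
    lowered = text.lower()
    if "kill myself" in lowered or "end my life" in lowered:
        return "self_harm_risk"
    if "kill yourself" in lowered:
        return "bullying"
    return "ok"

def send_support_message(user_id: str, message: str) -> None:
    # Placeholder: a real product would call its own messaging API here.
    print(f"[to {user_id}] {message}")

def escalate_for_review(user_id: str, text: str) -> None:
    # Placeholder: a real product would open a priority moderation ticket.
    print(f"[escalation] user={user_id} text={text!r}")

CRISIS_MESSAGE = (
    "You're not alone. If you're struggling, please reach out to a local "
    "suicide prevention or crisis hotline, or talk to someone you trust."
)

def handle_message(user_id: str, text: str) -> None:
    """Act the moment a message arrives, not after the fact."""
    label = classify_message(text)
    if label == "self_harm_risk":
        send_support_message(user_id, CRISIS_MESSAGE)  # push resources immediately
        escalate_for_review(user_id, text)             # queue human follow-up
    elif label == "bullying":
        escalate_for_review(user_id, text)

handle_message("user_42", "I feel like I want to end my life")
```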
Suicidal language is complex. If you want to identify suicidal language, you need a system that recognizes nuance, looks for hidden (unnatural) meaning, and understands context and user reputation. There is a huge difference between a user saying “I am going to kill myself” and one saying “You should go kill yourself.” One is a cry for help, and the other is bullying. It’s vital that your system learns the difference, because the two require very different responses.
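As a rough illustration of why that distinction matters, here is a toy triage function built around just those two example sentences. A production system would weigh far more context, conversation history, and reputation signals than two hard-coded patterns:

```python
import re

# Toy illustration: the same verb phrase flips from a cry for help to
# harassment depending on who it is aimed at.

SELF_HARM_PATTERN = re.compile(r"\bI(?:\s+am|'m)\s+going\s+to\s+kill\s+myself\b", re.IGNORECASE)
BULLYING_PATTERN = re.compile(r"\b(?:go\s+)?kill\s+yourself\b", re.IGNORECASE)

def triage(text: str) -> str:
    if SELF_HARM_PATTERN.search(text):
        return "offer_support"        # respond with crisis resources
    if BULLYING_PATTERN.search(text):
        return "moderate_harassment"  # protect the target, review the sender
    return "no_action"

assert triage("I am going to kill myself") == "offer_support"
assert triage("You should go kill yourself") == "moderate_harassment"
```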
Think about all the different ways someone could spell the word “suicide.” Does your system read l337 5p34k? What if “suicide” is hidden inside a string of random letters?
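The snippet below sketches the kind of normalization involved: fold a few common character substitutions and strip everything that isn’t a letter before matching. The mapping and word list are illustrative only, not our actual ruleset:

```python
# Illustrative normalization for catching obfuscated terms such as
# "5u1c1d3" or "s.u.i.c.i.d.e". The mapping and word list are examples only.

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})
WATCH_WORDS = {"suicide"}

def normalize(text: str) -> str:
    lowered = text.lower().translate(LEET_MAP)
    # Strip anything that isn't a letter so separators and padding disappear.
    return "".join(ch for ch in lowered if ch.isalpha())

def contains_watch_word(text: str) -> bool:
    collapsed = normalize(text)
    return any(word in collapsed for word in WATCH_WORDS)

print(contains_watch_word("thinking about 5u1c1d3"))  # True
print(contains_watch_word("s.u.i.c.i.d.e"))           # True
print(contains_watch_word("see you later"))           # False
```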
Chris Priebe, CEO and founder of Two Hat Security (creator of Community Sift), published a response to Mark’s initial manifesto. In it, he wrote:
When it comes to cyber-bullying, hate-speech, and suicide the stakes are too high for the current state of art in NLP [Natural Language Processing].
At Two Hat Security, we’ve spent five years building a unique expert system that learns new rules through machine learning, aided by human intelligence. We use an automated feedback loop with trending phrases to update rules and respond in real-time. We call this approach Unnatural Language Processing (uNLP).
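At a very high level, that feedback loop can be pictured as a rule store where machine-flagged trending phrases wait for human review before being promoted into live rules. The sketch below is an assumption-laden illustration of that idea, not our actual uNLP implementation:

```python
from collections import Counter

# An assumption-laden sketch, not Two Hat's actual uNLP implementation:
# machine-flagged phrases trend upward until a human moderator reviews
# and promotes them into live rules.

class RuleStore:
    def __init__(self):
        self.rules = {"kill myself"}  # seed rules written by human experts
        self.pending = Counter()      # candidate phrases surfaced by the ML layer

    def flag_candidate(self, phrase):
        """The ML layer flags a phrase it suspects but has no rule for yet."""
        self.pending[phrase] += 1

    def trending(self, min_count=3):
        """Phrases seen often enough to be worth a human moderator's time."""
        return [p for p, n in self.pending.items() if n >= min_count]

    def promote(self, phrase):
        """A human confirms the phrase; it becomes an active rule immediately."""
        self.rules.add(phrase)
        del self.pending[phrase]

store = RuleStore()
for _ in range(3):
    store.flag_candidate("end it all tonight")  # hypothetical trending phrase
for phrase in store.trending():
    store.promote(phrase)                       # in practice, after human review
print(store.rules)
```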
When it comes to suicide and other high-risk topics, we aren’t satisfied with traditional AI algorithms that are only 90-95% accurate. We believe in continual improvement. When lives are at stake, you don’t get to rest on your laurels.
Suicide is connected to bullying and harassment. If you want to keep your community safe, you have to deal with all high-risk content. Community guidelines are great, but you need cutting-edge technology to back them up.
We’ve identified a behavioral flow that shows a direct link between cyberbullying/harassment and self-injury/suicide. When users are bullied, they are more likely to turn to suicidal thoughts and self-injuring behavior. It’s important that you filter cyberbullying in your product to prevent vulnerable users from getting caught in a vicious cycle.
While Facebook is doing its part, we want to ensure that all communities have the tools to protect their most vulnerable users. If you’re concerned about high-risk content in your community, we can help. Our content filter and moderation engine Community Sift is highly tuned to identify sensitive content like suicide and self-injury language.
We believe that everyone should be able to share online without fear of being harassed or threatened. Our goal has always been to remove bullying and other high-risk content from the internet. A big part of that goal involves helping online communities keep their most vulnerable users safe and supported. Suicide is such a sensitive and important issue, so we want to extend our gratitude to Mark and all of the product managers at Facebook for taking a stand.
Here’s hoping that more social networks will follow suit.
Originally published on Quora