Quora: What can social networks do to provide safer spaces for women?
For many women, logging onto social media is inherently dangerous. Online communities are notoriously hostile towards women, with women in the public eye—journalists, bloggers, and performers—often facing the worst abuse. But abuse is not just the province of the famous. Nearly every woman who has ever expressed an opinion online has had these experiences: Rape threats. Death threats. Harassment.
In the last few years, we’ve seen many well-documented cases of ongoing, targeted harassment of women online. These women were once famous for their talent and success. Now their names are synonymous with online abuse of the worst kind.
And today we add a new woman to the list. An animator for EA Labs, Leost had her social media accounts targeted this weekend in a campaign of online harassment. A blog post misidentified her as the lead animator for Mass Effect: Andromeda and blamed her for the main character’s awkward facial animations. Turns out, Leost never even worked on Mass Effect: Andromeda. And yet she was forced to spend a weekend defending herself against baseless, crude, and sexually violent attacks from strangers.
Clearly, social media has a problem. It has been happening for years, and it’s not going away anytime soon.
A study by the Pew Research Center found that:
Young women, those 18-24, experience certain severe types of harassment at disproportionately high levels: 26% of these young women have been stalked online, and 25% were the target of online sexual harassment.
We don’t want to discount the harassment and abuse that men experience online, particularly in gaming communities. This issue affects all genders. However, there is an additional level of violence and vitriol directed at women. And it almost always includes threats of sexual violence. Women are also more likely to be doxxed, that is, to have their personal information shared online without their consent.
So, what can social networks do to provide safer spaces for women?
First, they need to make clear in their community guidelines that harassment, abuse, and threats are unacceptable, regardless of whether they’re directed at a man or a woman. For too long social networks have adopted a “free speech at all costs” approach to community building. If open communities want to flourish, they have to define where free expression ends and abuse begins.
Then, social networks need to employ moderation strategies that:
Prevent abuse in real time. Social networks cannot only depend on moderators or users to find and remove harassment as it happens. Not only does that put undue stress on the community to police itself, it also ignores the fundamental problem—when a woman receives a rape threat, the damage is already done, regardless of how quickly it’s removed from her feed.
The best option is to stop abuse in real time, which means finding the right content filter. Text classification is faster and more accurate than it’s ever been, thanks to recent advances in artificial intelligence, machine learning, and Natural Language Processing (NLP).
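As a rough illustration of the text-classification step, here is a minimal naive Bayes filter written from scratch. This is a toy sketch, not the system described in this post: the training examples are hypothetical, and a production filter would use far larger corpora and modern NLP models.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, whitespace-split tokenizer (deliberately simplistic)."""
    return text.lower().split()

class NaiveBayesFilter:
    """Tiny two-class naive Bayes classifier: 'abusive' vs. 'ok'."""

    def __init__(self):
        self.word_counts = {"abusive": Counter(), "ok": Counter()}
        self.doc_counts = {"abusive": 0, "ok": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # Log prior from class frequencies.
        logp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        vocab = set(self.word_counts["abusive"]) | set(self.word_counts["ok"])
        total = sum(self.word_counts[label].values())
        for word in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score.
            count = self.word_counts[label][word] + 1
            logp += math.log(count / (total + len(vocab)))
        return logp

    def classify(self, text):
        return max(self.doc_counts, key=lambda lbl: self.score(text, lbl))

# Hypothetical toy training data, for illustration only.
f = NaiveBayesFilter()
f.train("i will hurt you", "abusive")
f.train("you deserve to die", "abusive")
f.train("great stream today", "ok")
f.train("love your new video", "ok")
```

With even this toy training set, `f.classify("i will hurt you badly")` comes out `"abusive"` while friendly messages score as `"ok"`; the real gains come from scale, better features, and better models.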
Our expert system uses a cutting-edge blend of human ingenuity and automation to identify and filter the worst content in real time. People make the rules, and the system implements them.
When it comes to dangerous content like abuse and rape threats, we decided that traditional NLP wasn’t accurate enough. Our system uses Unnatural Language Processing (uNLP) to find the hidden, “unnatural” meaning. Any system can identify the word “rape,” but a determined user will always find a way around the obvious. The system also needs to identify the l337 5p34k version of r4p3, the backwards variant, and the threat hidden in a string of random text.
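The actual uNLP system is proprietary, but the kinds of evasions just described can be sketched with simple normalization: map leetspeak characters back to letters, strip noise characters, and also check the reversed string. This toy version only illustrates the idea; the leet map and matching logic here are assumptions, not the real implementation.

```python
# Map common leetspeak substitutions back to plain letters (illustrative subset).
LEET_MAP = str.maketrans({
    "4": "a", "@": "a", "3": "e", "0": "o",
    "1": "i", "!": "i", "$": "s", "5": "s", "7": "t",
})

def normalize(text):
    """Lowercase the text and undo leetspeak substitutions."""
    return text.lower().translate(LEET_MAP)

def variants(text):
    """Forms to scan: normalized, noise-stripped, and both reversed."""
    norm = normalize(text)
    squashed = "".join(ch for ch in norm if ch.isalpha())  # drop separators/noise
    return [norm, squashed, norm[::-1], squashed[::-1]]

def contains_banned(text, banned_words):
    """True if any banned word appears in any variant of the text."""
    return any(word in v for v in variants(text) for word in banned_words)
```

This naive substring check catches `r4p3`, the backwards variant `3p4r`, and a threat buried in noise like `xx r-4-p-3 xx`, but it also shows why real systems are harder: substring matching alone would flag innocent words (e.g. “grape”), so word boundaries and context still matter.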
Take action on bad actors in real time. It’s critical that community guidelines are reinforced. Most people will change their behavior once they know it’s unacceptable. And if they don’t, social networks can take more severe action, including temporary or permanent bans. Again, automation is critical here. Companies can use the same content filter tool to automatically warn, mute, or suspend accounts as soon as they post abusive content.
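An escalation policy like the one described above (warn, then mute, then suspend) is straightforward to automate. The thresholds and action names below are hypothetical; real networks tune these per community.

```python
from collections import defaultdict

# Illustrative escalation ladder: first offense warns, second mutes,
# every offense after that suspends.
ACTIONS = ["warn", "mute", "suspend"]

class Enforcer:
    def __init__(self):
        self.violations = defaultdict(int)  # user_id -> violation count

    def on_abusive_post(self, user_id):
        """Record a violation and return the action to take."""
        self.violations[user_id] += 1
        # Cap at the most severe action once the ladder is exhausted.
        idx = min(self.violations[user_id] - 1, len(ACTIONS) - 1)
        return ACTIONS[idx]
```

Wiring this to the content filter means each flagged post triggers `on_abusive_post` for its author, so consequences arrive the moment the abusive content is posted rather than after a manual review.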
Encourage users to report offensive content. Content filters are great at finding the worst stuff and allowing the best. Automation does the easy work. But there will always be content in between that requires human review. It’s essential that social networks provide accessible, user-friendly reporting tools for objectionable content. Reported content should be funneled into prioritized queues based on content type. Moderators can then review the most potentially dangerous content and take appropriate action.
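The prioritized review queue can be sketched with a standard heap. The content types and severity weights below are invented for illustration; the point is that threats surface ahead of spam regardless of arrival order.

```python
import heapq
import itertools

# Hypothetical severity ranking: lower number = reviewed sooner.
SEVERITY = {"violent_threat": 0, "sexual_harassment": 1, "hate_speech": 2, "spam": 3}

_queue = []
_counter = itertools.count()  # tie-breaker preserves report order within a severity

def report(content_type, report_id):
    """File a user report into the moderation queue."""
    heapq.heappush(_queue, (SEVERITY[content_type], next(_counter), report_id))

def next_report():
    """Give the moderator the most urgent pending report."""
    return heapq.heappop(_queue)[2]
```

With this structure a spam report filed first still waits behind a violent threat filed later, which is exactly the triage behavior moderators need.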
Social networks will probably never stop users from attempting to harass women with rape or death threats. It’s built into our culture, although we can hope for a change in the future. But they can do something right now—leverage the latest, smartest technology to identify abusive language in real time.
Originally published on Quora