How Maslow’s Hierarchy of Needs Explains the Internet

Online comments.

Anonymous egg accounts.

Political posts.

… feeling nauseous?

Chances are, you shuddered slightly at the words “online comments.”

Presenting Exhibit A, from a Daily Mail article about puppies:

It gets worse. Presenting Exhibit B, from Twitter:


The internet has so much potential. It connects us across borders, cultural divides, and even languages. And oftentimes that potential is fulfilled. Remember the Arab Spring in 2011? It probably wouldn’t have happened without Twitter connecting activists across the Middle East.

Writers, musicians, and artists can share their art with fans across the globe on platforms like Medium and YouTube.

After the terror attacks in Manchester and London this spring, many Facebook users used the Safety Check feature to reassure family and friends that they were safe from danger.

Every byte of knowledge that has ever existed is only a few taps away, stored, improbably, inside a device that fits in the palm of a hand. The internet is a powerful tool for making connections, for sharing knowledge, and for conversing with people across the globe.

And yet… virtual conversations are so often reduced to emojis and cat memes. Because who wants to start a real conversation when it’s likely to dissolve into insults and vitriol?

A rich, fulfilling, and enlightened life requires a lot more.

So what’s missing?

Maslow was onto something…

Remember Maslow’s hierarchy of needs? It probably sounds vaguely familiar, but here’s a quick refresher if you’ve forgotten.

Abraham Maslow, a psychologist who would later teach at Brandeis University in Massachusetts, published his groundbreaking paper “A Theory of Human Motivation” in 1943. In this seminal paper, he identifies and describes five basic levels of human needs. Each level forms a solid base for the one above it, creating a pyramid: when one need is satisfied, we can move up to the next. He later expanded on the hierarchy in his 1954 book Motivation and Personality.

The hierarchy looks like this:

  • Physiological: The basic physical requirements for human survival, including air, water, and food; then clothing, shelter, and sex.
  • Safety: Once our physical needs are met, we require safety and security. Safety needs include economic security as well as health and well-being.
  • Love/belonging: Human beings require a sense of belonging and acceptance from family and social groups.
  • Esteem: We need to be respected and valued by others, and to respect ourselves.
  • Self-actualization: The ultimate. When we self-actualize, we become who we truly are.

According to Maslow, our supporting needs must be met before we can become who we truly are — before we reach self-actualization.

So what does it mean to become yourself? When we self-actualize, we’re more than just animals playing dress-up — we are fulfilling the promise of consciousness. We are human.

Sorry, what does this have to do with the internet?

We don’t stop being human when we go online. The internet is just a new kind of community — the logical evolution of the offline communities that we started forming when the first modern humans emerged about 200,000 years ago in Africa. We’ve had many chances to reassess, reevaluate, and modify our offline community etiquette since then, which means that offline communities have a distinct advantage over the internet.

Merriam-Webster’s various definitions of “community” are telling:

people with common interests living in a particular area;
an interacting population of various kinds of individuals (such as species) in a common location;
a group of people with a common characteristic or interest living together within a larger society

Community is all about interaction and common interests. We gather together in groups, in public and private spaces, to share our passions and express our feelings. So, of course, we expect to experience that same comfort and kinship in our online communities. After all, we’ve already spent nearly a quarter of a million years cultivating strong, resilient communities — and achieving self-actualization.

But the internet has failed us, because people are afraid to do just that: to share their passions and express their feelings. Those of us who aspire to online self-actualization are too often drowned out by trolls. Which leaves us with emojis and cat memes — communication without connection.

So how do we bridge that gap between conversation and real connection? How do we reach the pinnacle of Maslow’s hierarchy of needs in the virtual space?

Conversations have needs, too

What if there were a hierarchy of conversation needs, using Maslow’s theory as a framework?

On the internet, our basic physical needs are already taken care of, so this pyramid starts with safety.

So what do our levels mean?

  • Safety: Offline, we expect to encounter bullies from time to time. And we can’t get upset when someone drops the occasional f-bomb in public. But we do expect to be safe from targeted harassment, from repeated racial, ethnic, or religious slurs, and from threats against our bodies and our lives. We should expect the same when we’re online.
  • Social: Once we are safe from harm, we require places where we feel a sense of belonging and acceptance. Social networks, forums, messaging apps, online games — these are all communities where we gather and share.
  • Esteem: We need to be heard, and we need our voices to be respected.
  • Self-actualization: The ultimate. When we self-actualize online, we blend the power of community with the blessing of esteem, and we achieve something bigger and better. This is where great conversation happens. This is where user-generated content turns into art. This is where real social change happens.

Problem is, online communities are far too often missing that first level. And without safety, we cannot possibly move on to social.

The problem with self-censorship

In the 2016 study Online Harassment, Digital Abuse, and Cyberstalking in America, researchers found that nearly half (47%) of American internet users have experienced online harassment or abuse. That’s big — but it’s not entirely shocking. We hear plenty of stories about online harassment and abuse in the news.

The real kicker? Over a quarter (27%) of American internet users reported that they had self-censored their posts out of fear of harassment.

If we feel so unsafe in our online communities that we stop sharing what matters to us most, we’ve lost the whole point of building communities. We’ve forgotten why they matter.

How did we get here?

There are a few reasons. No one planned the internet; it just happened, site by site and network by network. And because no one planned it, no one ever agreed on a set of rules.

And the internet is still so young. Think about it: Communities have been around since we started to walk on two feet. The first written language emerged in Sumer about 5,000 years ago. The printing press was invented roughly 600 years ago. The telegraph has been around for nearly 200 years. Even the telephone — one of the greatest modern advances in communication — has a solid 140 years of etiquette development behind it.

The internet as we know it today — with its complex web of disparate communities and user-generated content — is only about 20 years old. And with all due respect to 20-year-olds, it’s still a baby.

We’ve been stumbling around in this virtual space with only a dim light to guide us, which has led to the standardization of some… less-than-desirable behaviors. Kids who grew up playing MOBAs (multiplayer online battle arenas) have come to accept that toxicity is a byproduct of online competition. Those of us who use social media expect to encounter previously unimaginable levels of vile hate speech when we scroll through our feeds.

And, of course, we all know to avoid the comments section.

Can self-actualization and online communities co-exist?

Yes. Because why not? We built this thing — so we can fix it.

Three things need to happen if we’re going to move from social to esteem to self-actualization.

Industry-wide paradigm shift

The good news? It’s already happening. Every day there’s a new article about the dangers of cyberbullying and online abuse. More and more social products realize that they can’t allow harassment to run free on their platforms. The German parliament recently backed a plan to fine social networks up to €50 million if they don’t remove hate speech within 24 hours.

Even the Obama Foundation has a new initiative centered around digital citizenship.

As our friend David Ryan Polgar, Chief of Trust & Safety at Friendbase, says:

“Digital citizenship is the safe, savvy, ethical use of social media and technology.”

Safe, savvy, and ethical: As a society, we can do this. We’ve figured out how to do it in our offline communities, so we can do it in our online communities, too.

A big part of the shift includes a newfound focus on bringing empathy back into online interactions. To quote David again:

“There is a person behind that avatar and we often forget that.”

Thoughtful content moderation

The problem with moderation is that it’s no fun. No one wants to comb through thousands of user reports, review millions of potentially horrifying images, or monitor a mind-numbingly long live-chat stream in real time.

Too much noise + no way to prioritize = unhappy and inefficient moderators.

Thoughtful, intentional moderation is all about focus. It’s about giving community managers and moderators the right techniques to sift through content and ensure that the worst stuff — the targeted bullying, the cries for help, the rape threats — is dealt with first.

Automation is a crucial part of that solution. With artificial intelligence getting more powerful every day, social products can let computers do the heavy lifting first instead of forcing their moderation teams to review every post unnecessarily.
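Here’s a minimal sketch of that triage idea, assuming a hypothetical `classify_severity` model (not Community Sift’s actual API): the obviously benign reports are resolved automatically, and everything else lands in a queue that moderators review worst-first.

```python
# Triage sketch: an automated classifier clears the clear-cut cases,
# and hands moderators a queue sorted by severity (worst first).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedReport:
    priority: int                       # lower number = reviewed sooner
    report_id: int = field(compare=False)
    text: str = field(compare=False)

def classify_severity(text: str) -> int:
    """Hypothetical classifier: 0 = benign ... 3 = severe (threats, self-harm)."""
    severe = ("kill", "threat")
    mild = ("stupid", "idiot")
    lowered = text.lower()
    if any(word in lowered for word in severe):
        return 3
    if any(word in lowered for word in mild):
        return 1
    return 0

def triage(reports: list[tuple[int, str]]) -> list[QueuedReport]:
    """Auto-resolve the obviously benign, queue the rest worst-first."""
    queue: list[QueuedReport] = []
    for report_id, text in reports:
        severity = classify_severity(text)
        if severity == 0:
            continue  # nothing for a human to see
        # Negate severity so the worst content pops off the heap first.
        heapq.heappush(queue, QueuedReport(-severity, report_id, text))
    return [heapq.heappop(queue) for _ in range(len(queue))]

if __name__ == "__main__":
    sample = [(1, "nice puppy!"), (2, "you're an idiot"), (3, "I will kill you")]
    for item in triage(sample):
        print(item.report_id, item.text)   # report 3 surfaces before report 2
```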

The content moderation strategy will be slightly different for every community. But there are a few best practices that every community can adopt:

  • Know your community resilience. This is a step that too many social products forget to take. Every community has a tolerance level for certain behaviors. Can your community handle the occasional swear word — but not if it’s repeated 10 times? Resilience will tell you where to draw the line (there’s a rough sketch of this idea, combined with reputation, after this list).
  • Use reputation to treat users differently. Behavior tends to repeat itself. If you know that a user posts things that break your community guidelines, you can place tighter restrictions on them. Conversely, you can give engaged users the ability to post more freely. But don’t forget that users are human; everyone deserves the opportunity to learn from their mistakes. Which leads us to our next point…
  • Use behavior-changing techniques. Strategies include auto-messaging users before they hit “send” on posts that breach community guidelines, and publicly honoring users for their positive behavior.
  • Let your users choose what they see. The ESRB has the right idea. We all know what “Rated E for Everyone” means — we’ve heard it a million times. So what if we designed systems that allowed users to choose their experience based on a rating? If you have a smart enough system in the background classifying and labeling content, then you can serve users only the content that they’re comfortable seeing.
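As promised above, here’s a rough sketch of the first two practices working together: a community resilience threshold plus reputation-based posting restrictions. The risk levels, reputation tiers, and `assess_risk` helper are illustrative assumptions, not a real moderation API.

```python
# Resilience + reputation sketch: the community sets how much risk it can
# absorb, and a user's track record tightens or loosens that line for them.

COMMUNITY_RESILIENCE = 2   # highest risk level this community tolerates (0-4)
TRUSTED, DEFAULT, RESTRICTED = "trusted", "default", "restricted"

def assess_risk(text: str) -> int:
    """Hypothetical scorer: 0 = harmless ... 4 = targeted harassment or threats."""
    lowered = text.lower()
    if "kill yourself" in lowered:
        return 4
    if any(w in lowered for w in ("idiot", "loser")):
        return 2
    return 0

def posting_threshold(reputation: str) -> int:
    """Users who repeatedly break guidelines face a stricter threshold;
    consistently positive users get more room to speak freely."""
    return {TRUSTED: COMMUNITY_RESILIENCE + 1,
            DEFAULT: COMMUNITY_RESILIENCE,
            RESTRICTED: COMMUNITY_RESILIENCE - 1}[reputation]

def allow_post(text: str, reputation: str = DEFAULT) -> bool:
    return assess_risk(text) < posting_threshold(reputation)

print(allow_post("you're a loser", reputation=TRUSTED))     # True  - community can absorb it
print(allow_post("you're a loser", reputation=RESTRICTED))  # False - known rule-breaker
```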

It all comes back to our hierarchy of conversation needs. If we can provide that first level of safety, we can move beyond emojis and cats — and move on to the next level.

Early digital education

The biggest task ahead of us is also the most important — education. We didn’t have the benefit of 20 years of internet culture, behavior, and standards when we first started to go online. We have those 20 years of mistakes and missteps behind us now.

Which means that we have an opportunity with the next generation of digital citizens to reshape the culture of the internet. In fact, strides are already being made.

Riot Games (the studio that makes the hugely popular MOBA League of Legends) has started an initiative in Australia and New Zealand that’s gaining traction. Spearheaded by Rioter Ivan Davies, the League of Legends High School Clubs program teaches students about good sportsmanship through actual gameplay.

It’s a smart move — kids are already engaged when they’re playing a game they love, so it’s a lot easier to slip some education in there. Ivan and his team have even created impressive teaching resources for teachers who lead the clubs.

Google recently launched Be Internet Awesome, a program that teaches young children how to be good digital citizens and explore the internet safely. In the browser game Interland, kids learn how to protect their personal information, be kind to other users, and spot phishing scams and fake sites. And similar to Riot, Google has created a curriculum for educators to use in the classroom.

In addition, non-profits like the Cybersmile Foundation, the UK Safer Internet Centre, and others use social media to reach kids and teens directly.

Things are changing. Our kids will likely grow up to be better digital citizens than we ever were. And it’s unlikely that they will tolerate the bullying, harassment, and abuse that we’ve put up with for the last 20 years.

Along with a paradigm shift, thoughtful moderation, and education, if we want change to happen, we have to celebrate our communities. We have to talk about our wins, our successes… and especially our failures. Let’s not beat ourselves up if we don’t get it right the first time. We’re figuring this out.

We’re self-actualizing.

It’s time for the internet to grow up

Is this the year the internet achieves its full potential? From where most of us in the industry sit, it’s already happening. People are fed up, and they’re ready for a change.

This year, social products have an opportunity to decide what they really want to be. They can be the Wild West, where too many conversations end with a (metaphorical) bullet. Or they can be something better. They can be spaces that nurture humanity — real communities, the kind we’ve been building for the last 200,000 years.

This year, let’s build online communities that honor the potential of the internet.

That meet every level in our hierarchy of needs.

That promote digital citizenship.

That encourage self-actualization.

This year, let’s start the conversation.

***

At Two Hat Security, we empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content.

Want to increase user retention, reduce moderation, and protect your brand?

Get in touch today to see how our chat filter and moderation software Community Sift can help you build a community that promotes good digital citizenship — and gives your users a safe space to connect.

Want more articles like this? Subscribe to our newsletter and never miss an update!



Quora: Is Facebook doing enough to prevent suicide on its platform?

Is 2017 the year we see a kinder, safer web? It’s starting to look like it. On February 16th, Mark Zuckerberg published his mission statement, Building Global Community. Two weeks later, on March 1st, Facebook rolled out new suicide prevention tools.

It’s great to see a big player like Facebook take on a challenging subject in such a big way. They understand that to create a safe and thriving community, it’s always better to be proactive than reactive. With these new tools, Facebook is demonstrating its commitment to creating a safe, supportive, and inclusive community. We expect to see more features like this in the months to come.

Suicide is one of the biggest issues facing social networks today. The internet is full of self-injury and suicidal language, images, and videos. If we want to build communities where users feel safe and find a place they can call home, then we’re also responsible for ensuring that at-risk users are given help and support when they need it most.

Facebook has over 1.86 billion monthly active users, so they have access to data and resources that other companies can only dream of. Every community deserves to be protected from dangerous content. Is there anything smaller companies can do to keep their users safe?

After years in the industry studying high-risk, dangerous content, we have unique insight into this issue.

There are a few things we’ve learned about self-injury and suicidal language:

Using AI to build an automation workflow is crucial. Suicide happens in real time, so we can’t afford mistakes or reactions after the fact. If you can identify suicidal language as it happens, you can also use automation to push messages of hope, provide suicide and crisis hotline numbers, and suggest other mental health resources. With their new features, Facebook has taken a huge, bold step in this direction.
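As a rough illustration (this is not Facebook’s implementation, nor our production system), the core of such a workflow can be surprisingly small: detect, respond with resources in real time, and escalate to a human. The `looks_suicidal` check and the messaging helpers below are placeholders.

```python
# Automation sketch: respond to a possible cry for help immediately,
# then make sure a human follows up.
from dataclasses import dataclass

@dataclass
class Message:
    user_id: int
    text: str

def looks_suicidal(text: str) -> bool:
    """Stand-in for a real-time classifier tuned for self-harm language."""
    phrases = ("kill myself", "end my life", "don't want to be here")
    lowered = text.lower()
    return any(p in lowered for p in phrases)

def send_support_message(user_id: int) -> None:
    # In production this would push crisis hotline numbers and mental
    # health resources appropriate to the user's region.
    print(f"[to user {user_id}] You're not alone - here are people who can help.")

def escalate_to_moderators(message: Message) -> None:
    print(f"[escalation] user {message.user_id}: {message.text!r}")

def handle(message: Message) -> None:
    if looks_suicidal(message.text):
        send_support_message(message.user_id)   # respond in real time
        escalate_to_moderators(message)         # a human follows up
    # otherwise the message flows through the normal pipeline

handle(Message(42, "I just want to end my life"))
```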

Suicidal language is complex. If you want to identify suicidal language, you need a system that recognizes nuance, looks for hidden (unnatural) meaning and understands context and user reputation. There is a huge difference between a user saying “I am going to kill myself” versus “You should go kill yourself.” One is a cry for help, and the other is bullying. So it’s vital that your system learns the difference because they require two very different responses.

Think about all the different ways someone could spell the word “suicide.” Does your system read l337 5p34k? What if “suicide” is hidden inside a string of random letters?
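Here’s a deliberately simplistic sketch of the kind of normalization pass a real system needs before classification. A production system goes far beyond this, but it shows why “5u1c1d3” should never slip past a filter.

```python
# Normalization sketch: map common l33t substitutions back to letters and
# strip filler characters so hidden spellings still match a keyword.
import re

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    text = text.lower().translate(LEET_MAP)
    # Collapse punctuation and spacing used to hide a word, e.g. "s.u.i.c.i.d.e".
    # (Crude on purpose: a real system must also manage the false positives
    # this kind of collapsing creates.)
    return re.sub(r"[^a-z]", "", text)

def mentions_suicide(text: str) -> bool:
    return "suicide" in normalize(text)

print(mentions_suicide("5u1c1d3"))          # True
print(mentions_suicide("s.u.i.c.i.d.e"))    # True
print(mentions_suicide("see you Friday"))   # False
```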

Chris Priebe, CEO and founder of Two Hat Security (creator of Community Sift), wrote a response to Mark’s initial manifesto. In it, he wrote:

When it comes to cyber-bullying, hate-speech, and suicide the stakes are too high for the current state of art in NLP [Natural Language Processing].

At Two Hat Security, we’ve spent five years building a unique expert system that learns new rules through machine learning, aided by human intelligence. We use an automated feedback loop with trending phrases to update rules and respond in real-time. We call this approach Unnatural Language Processing (uNLP).

When it comes to suicide and other high-risk topics, we aren’t satisfied with traditional AI algorithms that are only 90-95% accurate. We believe in continual improvement. When lives are at stake, you don’t get to rest on your laurels.

Suicide is connected to bullying and harassment. If you want to keep your community safe, you have to deal with all high-risk content. Community guidelines are great, but you need cutting-edge technology to back them up.

We’ve identified a behavioral flow that shows a direct link between cyberbullying/harassment and self-injury/suicide. When users are bullied, they are more likely to turn to suicidal thoughts and self-injuring behavior. It’s important that you filter cyberbullying in your product to prevent vulnerable users from getting caught in a vicious cycle.

While Facebook is doing its part, we want to ensure that all communities have the tools to protect their most vulnerable users. If you’re concerned about high-risk content in your community, we can help. Our content filter and moderation engine Community Sift is highly tuned to identify sensitive content like suicide and self-injury language.

We believe that everyone should be able to share online without worrying about being harassed or threatened. Our goal has always been to remove bullying and other high-risk content from the internet. A big part of that goal involves helping online communities keep their most vulnerable users safe and supported. Suicide is such a sensitive and important issue, and we want to extend our gratitude to Mark and all of the product managers at Facebook for taking a stand.

Here’s to hoping that more social networks will follow.

Originally published on Quora

Want more articles like this? Subscribe to our newsletter and never miss an update!



To Mark Zuckerberg

Re: Building Global Community

“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.” — Mark Zuckerberg

This is hard.

I built a company (Two Hat Security) that’s also contracted to process 4 billion chat messages, comments, and photos a day. We specifically look for high-risk content in real-time, such as bullying, harassment, threats of self-harm, and hate speech. It is not easy.

“There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

I must ask — why wait until cases get reported?

If you wait for someone to file a report, hasn’t the harm already been done? Some things that are reported can never be unseen. Some people, like Amanda Todd, can never have that image retracted. Others post when they are enraged or drunk, and those words, like air, cannot be taken back. As the saying goes, “What happens in Vegas stays in Vegas… and on Facebook, Twitter, and Instagram forever,” so maybe some things should never go live. What if you could proactively create a safe global community by preventing (or pausing) personal attacks in real time instead?

This, it appears, is key to the next point in your vision:

“How do we help people build an informed community that exposes us to new ideas and builds common understanding in a world where every person has a voice?”

One of the biggest challenges to free speech online in 2017 is that we allow a small group of toxic trolls the ‘right’ to silence a much larger group of people. Ironically, these users’ claim to free speech often ends up becoming hate speech and harassment, destroying the opportunity for anyone else to speak up, much like bullies in the lunchroom. Why would someone share their deepest thoughts if others would just attack them? Instead, the dream of real conversation gets lost beneath a blanket of fear. Instead, we get puppy pictures, non-committal thumbs-up, and posts that are ‘safe.’ If we want to create an inclusive community, people need to be able to share ideas and information online without fear of abuse from toxic bullies. I applaud your manifesto, as it calls this out and calls on us all to work together to achieve this.

But how?

Fourteen years ago, we both set out to change the social networks of our world. We were both entrepreneurial engineers, hacking together experiments using the power of code. It was back in the days of MySpace, Friendster, and, later, Orkut. We had to browse to every single friend we had on MySpace just to see if they had written anything new. To solve this, I created myTWU — a social stream of the latest blogs and photos from fellow students, alumni, and sports teams on our internal social tool. Our office was in charge of building online learning, but we realized that education is not just about ideas; it’s about community. It was not enough to dump curriculum online for independent study; people needed places of belonging.

A year later, “The Facebook” came out. You reached beyond the walls of one university and, over time, opened it to the world.

So I pivoted. As part of our community, we had a little chat room where you could waddle around and talk to others. It was a skin of a little experiment my brother was running. He was caught by surprise when it grew to a million users, which showed just how much people long for community and places of belonging. In those days, chat rooms were the dark part of the web, and it was nearly impossible to keep up with the creative ways users tried to hurt each other.

So I helped my brother code the safety mechanisms for his little social game. That little social game grew to become a global community with over 300 million users, and Disney bought it in 2007. I remember huddling in my brother’s basement, rapidly building backend fixes for the latest trick to get around the filter. Club Penguin was huge.

After a decade of kids breaking the filter, and of building tools to moderate the millions upon millions of user reports, I had a breakthrough. By then I was working in security at Disney, with a job to hack everything with a Mouse logo on it. In my training, we learned that if someone DDoSes a network or tries to break a system, you find a signature of what they are doing and turn up the firewall against it.

“What if we did that with social networks and social attacks?” I thought.

I’ve spent the last five years building an AI system with signatures and firewalls for social content. As we process billions of messages with Community Sift, we build reputation scores in real time. We know who the trolls are — they leave digital signatures everywhere they go. Moreover, I can adjust the AI to turn up the sensitivity only where it counts. In so doing, we drastically dropped false positives and opened up communication for the masses, while still detecting the highest-risk content when it matters most.
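To make that concrete, here’s a toy illustration of turning up the sensitivity only where it counts: the same message faces a stricter threshold when the sender has a history of abuse. The scores, thresholds, and update rule are made up for the example; they are not how Community Sift is actually tuned.

```python
# Reputation-adjusted sensitivity sketch: known trolls get less benefit
# of the doubt, users with a clean history get more.

def toxicity_score(text: str) -> float:
    """Placeholder for a real model; returns a value between 0 and 1."""
    return 0.6 if "trash" in text.lower() else 0.1

def update_reputation(reputation: float, was_flagged: bool) -> float:
    """Simple moving average of past behaviour (0 = clean, 1 = toxic)."""
    return 0.9 * reputation + 0.1 * (1.0 if was_flagged else 0.0)

def should_flag(text: str, reputation: float) -> bool:
    # A toxic history lowers (tightens) the threshold for this sender;
    # a clean history raises it, which cuts false positives.
    threshold = 0.8 - 0.4 * reputation
    return toxicity_score(text) >= threshold

print(should_flag("you're trash", reputation=0.0))  # False - borderline, clean history
print(should_flag("you're trash", reputation=0.9))  # True  - same words, known troll
```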

I had to build whole new AI algorithms to do this, since traditional methods only hit 90–95% accuracy. That is great for most AI tasks, but when it comes to cyber-bullying, hate-speech, and suicide the stakes are too high for the current state of art in NLP.

“To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we’ve built infrastructure to give their friends and community tools that could save their life.”

Since Two Hat is a security company, we are uniquely positioned to prevent harm with the largest vault of high-risk signatures, like grooming conversations and CSAM (child sexual abuse material). In collaboration with our partners at the RCMP (Royal Canadian Mounted Police), we are developing a system to predict and prevent child exploitation before it happens, to complement the efforts our friends at Microsoft have made with PhotoDNA. With CEASE.ai, we are training AI models to find CSAM, and we have lined up millions of dollars of Ph.D. research to give students world-class experience working with our team.

“Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

It is incredible what deep learning has accomplished in the last few years. And although we have achieved near-perfect recall in finding pornography with our current work, there is an explosion of new topics we are training on. Further, the subtleties you outline are key.

I look forward to two changes to resolve this:

  1. I call on networks to trust that their users have resilience. It is not imperative to find everything, just the worst. If all content can be sorted from maybe bad to absolutely bad, we can draw a line in the sand and say: these things cannot be unseen, and those the community will find. In so doing, we don’t have to wait for technology to reach perfection, nor wait for users to report things we already know are bad. Let computers do what they do well, and let humans deal with the rest.
  2. I call on users to be patient. Yes, sometimes in our ambition to prevent harm we may mistakenly flag a Holocaust photo. We know this is terrible, but we ask for your patience. Computer vision is like a child still learning; a child who sees an image like that for the first time is deeply affected and concerned. Join us in reporting these problems and helping to train the system to mature and discern.

However, you are right that many more strides need to happen to get this to where it needs to be. We need to call on the world’s greatest thinkers. Of all the hard problems to solve, our next one is child pornography (CSAM). Some things cannot be unseen. There are things that, when seen, re-victimize over and over again. We are the first to gain access to hundreds of thousands of pieces of CSAM and train deep learning models on them with CEASE.ai. We are pouring millions of dollars into this problem and putting the best minds on it. It is a problem that must be solved.

And before I move on, I want to give a shout-out to your incredible team, whom I have had the chance to volunteer with at hack-a-thons and who have helped me think through how to get this done. Your company’s commitment to social good is outstanding, and your team has helped many other companies and non-profits.

“The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible. The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them.”

That is cool. I’ve already built a couple of the main pieces needed for that, if you need them.

“The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity?”

I had the chance to swing by Twitter 18 months ago. I took their sample firehose and have been running it through our system. We label each message against 1.8 million of our signatures, and we put together a quick demo of what it would be like if you could turn off the toxicity on Twitter. It shows low-, medium-, and high-risk content. I would not expect to see anything severe on there, as they have recently tried to clean it up.

My suggestion to Twitter was to let each user choose what they want to see. A global policy first removes clear infractions against the terms of use: content that can never be unseen, such as gore or CSAM. After the global policy is applied, each user can choose their own risk and tolerance levels.
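In code, that two-stage idea looks something like the sketch below. The risk labels and the `label_risk` helper are illustrative only; the point is the ordering: the global policy runs first, and personal tolerance applies to whatever remains.

```python
# Two-stage filtering sketch: a global policy removes content that can never
# be unseen, then each user's own tolerance decides what they see.

RISK_LEVELS = ("low", "medium", "high", "never_unseen")   # e.g. gore, CSAM

def label_risk(text: str) -> str:
    """Placeholder for a signature-based labeller."""
    lowered = text.lower()
    if "gore" in lowered:
        return "never_unseen"
    if "hate" in lowered:
        return "high"
    if "damn" in lowered:
        return "medium"
    return "low"

def visible_to(text: str, user_tolerance: str) -> bool:
    risk = label_risk(text)
    if risk == "never_unseen":
        return False   # global policy: removed for everyone
    # Otherwise, show it only if it sits at or below the user's chosen level.
    return RISK_LEVELS.index(risk) <= RISK_LEVELS.index(user_tolerance)

feed = ["lovely sunset", "damn that's impressive", "hate-filled rant", "graphic gore clip"]
for tolerance in ("low", "high"):
    print(tolerance, [t for t in feed if visible_to(t, tolerance)])
```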

We are committed to helping you and the Facebook team with your mission to build a safe, supportive, and inclusive community. We are already discussing ways we can help your team, and we are always open to feedback. Good luck on your journey to connect the world, and I hope we cross paths next time I am in the Valley.

Sincerely,
Chris Priebe
CEO, Two Hat Security


Originally published on Medium