Four Must-Haves for the Internet of the Future

To make the internet of the future a safer and more enjoyable place, it is critical to establish a clearly defined minimum standard of Safety by Design across the entire internet. That said, it is important to recognize that Design for Scale and Design for Monetization are currently the embedded norms.

Many websites and apps are built to go live as a first priority; safety is forgotten, or deferred until the product is mired in a situation where retrofitting it is very hard. To that end, it's important that we develop guidelines that help startups and SMEs understand best practices for Safety by Design, and give them access to resources so they can build that way from the start.

Much of the emerging regulation stems from the concept of "Duty of Care." This is an old legal principle: if you create a social space, such as a nightclub, you have a responsibility to ensure it is safe. Likewise, we need to learn from our past mistakes and build shared standards of best practice so users don't get hurt in our online social spaces. We believe there are four layers of protection every site should have:

Establish clear terms of use

Communities don't just happen; we create them. In real life, if you add a swing set to a park, the community expectation is that it is a place for kids. As a society, we change our language and behavior based on that environment. We still have free speech, but we regulate ourselves for the benefit of the kids. The adult equivalent is a nightclub: the environment allows for a loosening of behavioral norms, but step out of line with house rules and the establishment's bouncers deal with you. Likewise, there must be consequences for stepping out of line online.

Embed filters that are situationally appropriate

Many platforms don't add automated filters because they fear the slippery slope of inhibiting free speech. In avoiding that slope, they slide down the other one: doing nothing and allowing harm to continue. For the most part, this is a solved problem. Just as you can buy anti-virus technology, you can buy off-the-shelf solutions that match known signatures in the things users say or share. These filters must be on every social platform, app, and website.
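As a rough illustration, here is a minimal sketch of what a signature-style text filter might look like in Python. The pattern lists and severity tiers are invented for this example; a production rule set would be far larger and professionally curated, not hand-written.

```python
import re

# Illustrative severity tiers; a real rule set would be licensed or
# professionally maintained, not hard-coded like this.
SIGNATURES = {
    "high": [r"\bkill yourself\b"],
    "medium": [r"\bidiot\b", r"\bloser\b"],
}

def classify(message: str) -> str:
    """Return the most severe tier whose signatures match, or 'clean'."""
    text = message.lower()
    for severity in ("high", "medium"):
        if any(re.search(pattern, text) for pattern in SIGNATURES[severity]):
            return severity
    return "clean"

# Check messages before they reach other users.
for msg in ("have a great game!", "you absolute idiot"):
    print(msg, "->", classify(msg))
```

The point of the anti-virus analogy is that matching known signatures is cheap and well understood; a platform can then decide per-context whether a "medium" match is blocked, masked, or simply flagged.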

Use user reputation to make smarter decisions

Reward positive users, and take automated action against those who keep harassing everyone else. Two Hat pioneered a technique that gives all users maximum expression by filtering only the worst abusive content, then incrementally raising the filter level for those who harass others. Predictive Moderation based on user reputation is a must.
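One way such incremental escalation might be modeled is sketched below. The level names, strike counting, and thresholds are illustrative assumptions, not Two Hat's actual Predictive Moderation logic.

```python
from dataclasses import dataclass

# Hypothetical filter levels, most permissive first: everyone starts with
# maximum expression, and the filter tightens for repeat offenders.
FILTER_LEVELS = ("worst_only", "strict", "very_strict")

@dataclass
class UserReputation:
    strikes: int = 0

    def record_violation(self) -> None:
        """Called when a user's message is confirmed abusive."""
        self.strikes += 1

    def filter_level(self) -> str:
        """Map accumulated strikes to an escalating filter level."""
        return FILTER_LEVELS[min(self.strikes, len(FILTER_LEVELS) - 1)]

user = UserReputation()
print(user.filter_level())   # worst_only: maximum expression by default
user.record_violation()
print(user.filter_level())   # strict: tightened after a confirmed violation
```

The design choice here is that the default is permissive: the filter only ratchets up in response to a user's own behavior, so positive users never feel the tightening.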

Let users report bad content

If someone has to report something, harm has already been done. Everything users can create must be reportable. When content is reported, record the moderator decisions (in a pseudonymized, minimized way) and use them to train AI (like our Predictive Moderation) to scale out the easy decision-making and escalate critical issues, as sketched below. Engaging and empowering users to assist in identifying and escalating objectionable content is a must.
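For example, moderator decisions on reported content might be logged like this before being fed to a training pipeline. The record fields, salting scheme, and file format are assumptions for illustration only, not a description of Two Hat's system.

```python
import hashlib
import json
import time

# In production the salt would be a managed secret; hard-coded here
# only to keep the sketch self-contained.
SALT = "example-salt"

def log_decision(user_id: str, content: str, decision: str,
                 path: str = "decisions.jsonl") -> None:
    """Append a pseudonymized, minimized moderation record for training."""
    record = {
        # Pseudonymize: store a salted hash, never the raw user ID.
        "user": hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16],
        "content": content,
        "decision": decision,  # e.g. "approve", "remove", "escalate"
        "ts": int(time.time()),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("player_42", "reported message text here", "remove")
```

Keeping the record minimized (only the content, the decision, and a pseudonymous key) is what lets the labeled data train a model without accumulating unnecessary personal information.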

Why we must create a better internet

In 2019, the best human intentions, paired with the best technology platforms and companies in the world, couldn't stop a terrorist from live-streaming the murder of innocents. We still can't understand why 1.5 million people chose to share it.

What we can do is continue to build and connect datasets and train AI models to get better. We can also find new ways to work together to make the internet a better, safer place.

We’ll know it’s working when exposure to bullying, hate, abuse, and exploitation no longer feels like the price of admission for being online.

To learn more about Two Hat's vision for a better internet that's Safe by Design, download our white paper, By Design: 6 Tenets for a Safer Internet.

How #ICANHELP is Empowering the Next Generation of Digital Citizens

Two Hat believes that everyone should be free to share online without fear of harassment or abuse. We also believe that making this vision a reality is a shared responsibility.

That’s why we have allied ourselves with diverse organizations including non-profits, government agencies, private companies, and industry alliances to share best practices, produce online safety resources, and spread the word of proactive, purposeful content moderation. One of those organizations is the California-based non-profit #ICANHELP.

We recently sat down with Matt Soeth, co-founder and executive director of #ICANHELP, to discuss the organization's upcoming initiatives with the NY Yankees, his thoughts on social media legislation, and #Digital4Good, their annual event celebrating student achievements.

Tell us about your organization, #ICANHELP.

#ICANHELP educates and empowers students to use social media positively.

We train students to be digital first responders. When they see something online, we want them to know how to report content (when necessary), how to respond to negative content, and, in the words of students, how to respond to all the "drama."

At the same time, we work with students to build positive social media campaigns. We train educators and administrators in best practices for modeling and guiding students as they develop their digital identities.

What initiatives are you working on right now?

We are excited to announce our partnership with the NY Yankees as part of their Bronx Education All Star Day. About 9,000 students and educators attended the game on May 29 to be recognized for academic achievement, civic engagement, and leadership. We will be working with Bronx schools in the fall, using our curriculum and resources to help students develop social projects for the 2019-2020 school year.

We also have two new online courses for teachers and administrators that we are excited to share. Our teacher course walks educators through building up their social media presence, managing student social media teams, and the best ways to model and guide students in creating and managing content. Our admin course will help anyone looking for policy examples and guidance on how to respond to and investigate social media incidents. We walk everyone through creating an incident response plan, as well as working with law enforcement, social media companies, and gaming companies to remove content that violates a platform's terms of service.

We are also excited to announce a collaboration with the Well Being Trust, the foundation for Providence/St. Joseph Health, to create a mental health and wellness curriculum around digital wellbeing. So much of what we see in digital wellbeing today focuses heavily on devices, so we are looking to develop tools that help young people and educators talk about stress, anxiety, and online negativity.

How can people get involved?

Check out our website; you can always contact us through it. We are very active on social media: @icanhelp on Twitter, and @icanhelpofficial on Facebook, Instagram, and YouTube. We share content all the time, so if schools are looking for ideas, that's the place to connect with us.

That being said: share! So much of what we do is word of mouth, and we have students all over doing amazing work. We'd love for more people to know those stories.

If you are an educator or parent who cares about this topic, please reach out: share our resources, invite us to your community or school, and help us grow the conversation and keep it going.

What is your take on the social media legislation being introduced around the world – Online Harms and Duty of Care in the UK, Sharing of Abhorrent Violent Material in Australia, the Christchurch Call?

All of these actions are leading to new policy and regulation that hold companies accountable for the content on their sites. The challenge will come in enforcing these laws and regulations, as that part is still unclear. The intent behind them is clear: making the internet a better and safer place for users, particularly youth.

The one challenge I do have with all of this is the emphasis on government regulation and corporate responsibility. Whenever a major social incident happens offline, then goes viral and plays out online, we as users react. In this case, with a landslide of recent incidents, we got the Online Harms white paper, commitments to identify and remove terrorist content, and so on. That addresses the problem now, but I often wonder if it gets to the systemic, underlying issues causing all of these problems. For example, there is so much talk about cyberbullying, but kids are still more likely to be bullied in person than online. Responding to cyberbullying is good and needs to happen, but regulating companies is not enough. If the internet is going to be a better place, it needs to be a collective effort: users, nonprofits, content experts, educational institutions, companies, you name it. It takes all of us.

Can you give us a sneak peek at #Digital4Good 2019?

We are really excited about #Digital4Good 2019. It's being held at Facebook HQ in Menlo Park, CA. We have our winners selected and will be sharing more about them soon. It's a diverse group of students from all over the US, covering a range of topics and projects around bullying, race, homelessness, robotics, leadership development, and cancer research.

These students will share their stories live on September 16, 2019. We've invited a few schools to attend, though seating is limited. You can tune into the live stream and watch the event as it happens (or watch it again later). To get notified about the live stream, fill out the form on the page; we'll let you know when the event is happening and share the schedule of student speakers.