London Calling: A Week of Trust & Safety in the UK

Two weeks ago, the Two Hat team and I packed up our bags and flew to London for a jam-packed week of government meetings, media interviews, and two very special symposiums.

I’ve been traveling a lot recently – first to Australia in mid-September for the great eSafety19 conference, then London, and I’m off to Chicago next month for the International Bullying Prevention Association Conference – so I haven’t had much time to reflect. But now that the dust has settled on the UK visit (and I’m finally solidly back on Pacific Standard Time), I wanted to share a recap of the week as well as my biggest takeaways from the two symposiums I attended.

Talking Moderation

Several esteemed media outlets welcomed us, and we had the opportunity to be interviewed by journalists who asked excellent, productive questions.

Haydn Taylor from GamesIndustry.Biz interviewed Two Hat CEO and founder Chris Priebe, myself, and Cris Pikes, CEO of our partner Image Analyzer, about moderating harmful online content, including live streams.

Rory Cellan-Jones from the BBC talked to us about the challenges of defining online harms (starts at 17:00).

Chris Priebe being interviewed about online harms

I’m looking forward to more interviews being released soon.

We also met with branches of government and other organizations to discuss upcoming legislation. We continue to be encouraged by their openness to different perspectives across industries.

Chris Priebe continues to champion transparency reports. He believes that making them truly transparent – i.e., digitizing them and displaying them in app stores – has the greatest potential to drive significant change in content moderation and online safety practices.

Transparency reports are the rising tide that will float all boats: nobody will want to be the one site or app whose report doesn’t show commitment and progress toward a healthier online community. Sure, everyone wants more users – but in an age of transparency, you will have to do right by them if you expect them to join your platform and stick around.

Content Moderation Symposium – “Ushering in a new age of content moderation”

On Wednesday, October 2nd, Two Hat hosted our first-ever Content Moderation Symposium. Experts from academia, government, non-profits, and industry came together to talk about the biggest content moderation challenges of our time, from defining cyberbullying and child exploitation behaviors in online communities to unpacking why a content moderation strategy is business-critical going into 2020.

Alex Holmes, Deputy CEO of The Diana Award, opened the day with a powerful and emotional keynote about the effects of cyberbullying. For me, the highlight of his talk was this video he shared about the definition of “bullying” – it really drove home the importance of adopting nuanced definitions.

Next up were Dr. Maggie Brennan, a lecturer in clinical and forensic psychology at the University of Plymouth and an academic advisor to Two Hat, and Zeineb Trabelsi, a third-year Ph.D. student in the Information Systems department at Laval University in Quebec and an intern in the Natural Language Processing department at Two Hat.

Dr. Brennan and Zeineb have been working on academic frameworks for defining online child sexual victimization and cyberbullying behavior, respectively. They presented their proposed definitions, and our tables of six discussed them in detail. Discussion points included:

  • Are these definitions complete, and do they make sense?
  • What further information would we require to effectively use these definitions when moderating content?
  • How do we currently define child exploitation and cyberbullying in our organizations?

My key takeaway from the morning sessions? Defining online harms is not going to be easy. It’s a complicated and nuanced task because human behavior is complicated and nuanced. There are no easy answers – but these cross-industry and cross-cultural conversations are a step in the right direction. The biggest challenge will be taking the academic definitions of online child sexual victimization and cyberbullying behaviors and using them to label, moderate, and act on actual online conversations.

I’m looking forward to continuing those collaborations.

Our afternoon keynote was presented by industry veteran David Nixon, who talked about the exponential and unprecedented growth of online communities over the last 20 years, and the need for strong Codes of Conduct and the resources to operationalize good industry practices. This was followed by a panel discussion with industry experts and several Two Hat customers. I was happy to sit on the panel as well.

My key takeaway from David’s session and the panel discussion? If you design your product with safety at the core (Safety by Design), you’re setting yourself up for community success. If not, reforming your community can be an uphill battle. One of our newest customers, Peer Tutor, is implementing Safety by Design in really interesting ways, which CEO Wayne Harrison shared during the panel. You’ll learn more in an upcoming case study.

Man standing in front of a screen that says Transparency Reports

Finally, I presented our 5 Layers of Community Protection (more about that in the future – stay tuned!), and we discussed best practices for each layer of content moderation. The fifth layer of protection is Transparency Reports, which yielded the most challenging conversation. What will Transparency Reports look like? What information will be mandatory? How will we define success benchmarks? What data should we start to collect today? No one knows – but we looked at YouTube’s Transparency Report as an example and guidance on what may be legislated in the future.

My biggest takeaway from this session? Best practices exist – many of us are doing them right now. We just need to talk about them and share them with the industry at large. More on that in an upcoming blog post.

Fair Play Alliance’s First European Symposium

Being a co-founder of the Fair Play Alliance and seeing it grow from a conversation between a few friends to a global organization of over 130 companies and many more professionals has been incredible, to say the least. This was the first time the alliance held an event outside of North America. As a global organization, it was very important to us, and it was a tremendous success! The feedback has been overwhelmingly positive, and we are so happy to see that it provided lots of value to attendees.

Members of the Fair Play Alliance

It was a wonderful two-day event held over October 3rd and 4th, with excellent talks and workshops hosted for FPA members. Chris Priebe, a couple of industry friends and veteran Trust & Safety leaders, and I hosted one of the workshops. We’re all excited to take that work forward and see the results benefit the games industry!

What. A. Week.

As you can tell, it was a whirlwind week and I’m sure I’ve forgotten at least some of it! It was great to connect with old friends and make new friends. All told, my biggest takeaway from the week was this:

Everyone I met cares deeply about online safety, and about finding the smartest, most efficient ways to protect users from online harms while still allowing them the freedom to express themselves. At Two Hat, we believe in an online world where everyone is free to share without fear of harassment or abuse. I’ve heard similar sentiments echoed countless times from other Trust & Safety professionals, and I truly believe that if we continue to collaborate across industries, across governments, and across organizations, we can make that vision a reality.

So let’s keep talking.

I’m still offering free community audits for any organization that wants a second look at their moderation and Trust & Safety practices. Sign up for a free consultation using the form below!



Three Ways Social Networks Can Embrace Safety by Design Today

Earlier this month, the Australian eSafety Office released its Safety by Design (SbD) Principles. As explained on its website, SbD is an “initiative which places the safety and rights of users at the centre of the design, development and deployment of online products and services.” It outlines three simple but comprehensive principles (service provider responsibilities, user empowerment & autonomy, and transparency & accountability) that social networks can follow to embed user safety into their platform from the design phase onwards.

With this ground-breaking initiative, Australia has proven itself to be at the forefront of championing innovative approaches to online safety.

I first connected with the eSafety Office back in November 2018, and later had the opportunity to consult on Safety by Design. I was honored to be part of the consultation process and to bring some of my foundational beliefs around content moderation to the table. At Two Hat, we’ve long advocated for a Safety by Design approach to building social networks.

Many of the points in the Safety by Design Principles and the UK’s recent Online Harms white paper support the Trust & Safety practices we’ve been recommending to clients for years, such as leveraging filters and cutting-edge technology to triage user reports. And we’ve heartily embraced new ideas, like transparency reports, which Australia and the UK both strongly recommend in their respective papers.

As I read the SbD overview, I had a few ideas for clear, actionable measures that social networks across the globe can implement today to embrace Safety by Design. The first two fall under SbD Principle 1, and the third under SbD Principle 3.

Under SbD Principle 1: Service provider responsibilities
“Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.”

Content filters are no longer a “nice to have” for social networks – today, they’re table stakes. When I first started in the industry, many people assumed that only children’s sites required filters. And until recently, only the most innovative and forward-thinking companies were willing to leverage filters in products designed for older audiences.

That’s all changed – and the good news is that you don’t have to compromise freedom of expression for user safety. Today’s chat filters (like Two Hat’s Community Sift) go beyond allow/disallow lists, enabling intelligent, nuanced filtering of online harms that takes into account factors such as user reputation and context. And they can do it well in multiple languages, too. As a Portuguese and English speaker, this is particularly dear to my heart.

All social networks can and should implement chat, username, image, and video filters today. How they use them, and the extent to which they block, flag, or escalate harms will vary based on community guidelines and audience.
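If it helps to picture it, here’s a minimal sketch of a situationally aware filter hook. Everything in it – the classify_text stub, the severity scale, the reputation score, and the thresholds – is an illustrative assumption, not Community Sift’s actual API:

```python
# Illustrative sketch of a context- and reputation-aware chat filter hook.
# classify_text(), the severity scale, and the thresholds are hypothetical
# stand-ins, not Community Sift's actual API.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"          # publish, but queue for moderator review
    BLOCK = "block"        # do not publish
    ESCALATE = "escalate"  # route to a human moderator immediately


@dataclass
class Verdict:
    severity: int  # 0 = benign ... 5 = severe harm (threats, exploitation)


def classify_text(text: str) -> Verdict:
    # Stand-in for a real classifier or vendor API call.
    high_risk_terms = {"kill yourself"}  # placeholder vocabulary
    severity = 5 if any(t in text.lower() for t in high_risk_terms) else 0
    return Verdict(severity=severity)


def moderate(text: str, user_reputation: float, audience: str) -> Action:
    verdict = classify_text(text)

    # Severe harms are escalated regardless of who posted them or where.
    if verdict.severity >= 5:
        return Action.ESCALATE

    # Child-directed spaces get a stricter blocking threshold.
    block_at = 2 if audience == "child" else 4

    if verdict.severity >= block_at:
        return Action.BLOCK
    # In the grey zone, low-reputation users are flagged for review.
    if verdict.severity >= block_at - 1 and user_reputation < 0.3:
        return Action.FLAG
    return Action.ALLOW
```

The point of the sketch is the shape of the decision, not the vocabulary: the same message can warrant different actions depending on the audience and the user posting it.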

Also under SbD Principle 1: Service provider responsibilities
“Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.”

As the first layer of protection and user safety, baseline filters are critical. But users should always be encouraged to report content that slips through the cracks. (Note that when social networks automatically filter the most abusive content, they’ll have fewer reports.)

But what do you do with all of that reported content? Some platforms receive thousands of reports a day. Putting everything – from false reports (users testing the system, reporting their friends, etc.) to serious, time-sensitive content like suicide threats and child abuse – into the same bucket is inefficient and ineffective.

That’s why we recommend implementing a mechanism to classify and triage reports so moderators purposefully review the high-risk ones first, while automatically closing false reports. We’ve developed technology called Predictive Moderation that does just this. With Predictive Moderation, we can train AI to take the same actions moderators take consistently and reduce manual review by up to 70%.
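To illustrate the shape of that triage step (this is my own rough sketch, not Two Hat’s actual Predictive Moderation internals – the risk model and thresholds are assumptions), the idea is to auto-close likely-false reports, escalate likely-critical ones, and queue the rest by predicted risk:

```python
# Rough sketch of report triage: auto-close likely-false reports, escalate
# likely-critical ones, and queue the rest by predicted risk. The model and
# thresholds are assumptions, not Two Hat's Predictive Moderation internals.

import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedReport:
    priority: float                        # lower = reviewed sooner
    report_id: str = field(compare=False)
    reported_text: str = field(compare=False)


def predicted_risk(text: str) -> float:
    """Stand-in for a model trained on past moderator decisions.

    Returns a score in [0, 1]: ~0 means almost certainly a false report,
    ~1 means almost certainly actionable (e.g. a suicide threat)."""
    high_risk_cues = ("kill myself", "hurt myself")
    if any(cue in text.lower() for cue in high_risk_cues):
        return 0.95
    if "idiot" in text.lower():
        return 0.50  # plausible but not urgent
    return 0.05


def triage(reports, review_queue, auto_close_below=0.10, escalate_above=0.90):
    closed, escalated = [], []
    for report_id, text in reports:
        risk = predicted_risk(text)
        if risk <= auto_close_below:
            closed.append(report_id)     # likely false report: close automatically
        elif risk >= escalate_above:
            escalated.append(report_id)  # time-sensitive: route to a human now
        else:
            heapq.heappush(review_queue, QueuedReport(1.0 - risk, report_id, text))
    return closed, escalated
```

In practice, the queue would feed the moderation console, and every action a moderator takes on a queued report becomes another training example for the model.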

I shared some reporting best practices used by my fellow Fair Play Alliance members during the FPA Summit at GDC earlier this year. You can watch the talk here (starting at 37:30).

There’s a final but no less important benefit to filtering the most abusive content and using AI like Predictive Moderation to triage time-sensitive content. As we’ve learned from seemingly countless news stories recently, content moderation is a deeply challenging discipline, and moderators are too often subject to trauma and even PTSD. All of the practices that the Australian eSafety Office outlines, when done properly, can help protect moderator wellbeing.

Under SbD Principle 3: Transparency and accountability
“Publish an annual assessment of reported abuses on the service, accompanied by the open publication of a meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.”

While transparency reports aren’t mandatory yet, I expect they will be in the future. Both the Australian SbD Principles and the UK Online Harms white paper outline the kinds of data these potential reports might contain.

My recommendation is that social networks start building internal practices today to support these inevitable reports. A few ideas include (see the sketch after this list):

  • Track the number of user reports filed and their outcomes (i.e., how many were closed, how many were actioned, how many required human intervention, etc.)
  • Log high-risk escalations and their outcome
  • Leverage technology to generate a percentage breakdown of abusive content posted and filtered
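
For illustration, a minimal version of that bookkeeping could look like the sketch below. The counter names and categories are my own assumptions, not a mandated schema:

```python
# Minimal sketch of internal bookkeeping for a future transparency report.
# The counter names and categories are assumptions, not a mandated schema.

from collections import Counter
from dataclasses import dataclass, field


@dataclass
class TransparencyLedger:
    report_outcomes: Counter = field(default_factory=Counter)  # closed / actioned / human_review
    escalations: Counter = field(default_factory=Counter)      # e.g. self_harm, csam, threat
    content_seen: int = 0
    content_filtered: int = 0

    def record_report(self, outcome: str) -> None:
        self.report_outcomes[outcome] += 1

    def record_escalation(self, category: str) -> None:
        self.escalations[category] += 1

    def record_content(self, filtered: bool) -> None:
        self.content_seen += 1
        self.content_filtered += int(filtered)

    def summary(self) -> dict:
        # Percentage breakdown of abusive content posted and filtered.
        filtered_pct = (100 * self.content_filtered / self.content_seen) if self.content_seen else 0.0
        return {
            "user_reports": dict(self.report_outcomes),
            "high_risk_escalations": dict(self.escalations),
            "percent_content_filtered": round(filtered_pct, 2),
        }
```

Whatever the eventual legislation requires, collecting these counts now means the first mandated report is an export, not a scramble.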

Thank you again to the eSafety Office and Commissioner Julie Inman-Grant for spearheading this pioneering initiative. We look forward to the next iteration of the Safety by Design framework – and can’t wait to join other online professionals at the #eSafety19 conference in September to discuss how we can all work together to make the internet a safe and inclusive space where everyone is free to share without fear of abuse or harassment.

To read more about Two Hat’s vision for a safer internet, download our new white paper By Design: 6 Tenets for a Safer Internet.

And if you, like so many of us, are concerned about community health and user safety, I’m currently offering no-cost, no-obligation Community Audits. I will examine your community (or the community of someone you know!), locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize positive social interactions and user engagement.



Four Must-Haves for the Internet of the Future

To make the internet of the future a safer and more enjoyable place, it is critical to establish a clearly defined minimum standard of Safety by Design across the internet. That said, it is important to recognize that “Design for Scale” and “Design for Monetization” are currently the embedded norms.

Many websites and apps are built to reach live state as a first priority; safety is forgotten, or deferred until the product is mired in a situation where making it safe is very hard. To that end, it’s important that we develop guidelines that help startups and SMEs understand best practices for Safety by Design, and give them access to resources that help them build that way.

Regulation in this space stems from the concept of a “Duty of Care” – an old principle that says if you are going to create a social space, such as a nightclub, you have a responsibility to ensure it is safe. Likewise, we need to learn from our past mistakes and build out shared standards of best practices so users don’t get hurt in our online social spaces.

We believe that there are four layers of protection every site should have:

1. Clear terms of use
Communities don’t just happen; we create them. In real life, if you add a swing set to a park, the community expectation is that it is a place for kids. As a society, we change our language and behaviour based on that environment. We still have free speech, but we regulate ourselves for the benefit of the kids. The adult equivalent of this scenario is a nightclub: the environment allows for a loosening of behavioural norms, but step out of line with house rules and the establishment’s bouncers deal with you. Likewise, step out of line online, and there must be consequences.

2. Embedded filters that are situationally appropriate
Many platforms don’t add automated filters because they are afraid of the slippery slope of inhibiting free speech. In so doing they fall down the other slippery slope – doing nothing and allowing harm to continue. For the most part, this is a solved problem. You can buy off-the-shelf solutions, just as you can buy anti-virus technology, that match known signatures of things users say or share. These filters must be on every social platform, app, and website.

3. Using User Reputation to make smarter decisions
Reward positive users. For those who keep harassing everyone else, take automated action. Two Hat has pioneered a technique that gives all users maximum expression by filtering only the worst abusive content, then incrementally increasing the filter level for those who harass others. Predictive Moderation based on user reputation is a must.
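
Here’s a minimal sketch of the idea, under my own illustrative assumptions (the level scale and relaxation rule are not Two Hat’s actual algorithm): everyone starts with maximum expression, repeat offenders get progressively stricter filtering, and sustained good behavior relaxes it again.

```python
# Sketch of incrementally tightening a per-user filter level. The level
# scale and relaxation rule are illustrative assumptions only.

from dataclasses import dataclass

MAX_LEVEL = 3  # 0 = only severe content filtered ... 3 = strictest filtering


@dataclass
class UserFilterState:
    level: int = 0         # everyone starts with maximum expression
    clean_streak: int = 0  # consecutive sessions without a violation


def on_violation(state: UserFilterState) -> None:
    """Tighten filtering one notch each time the user harasses someone."""
    state.level = min(MAX_LEVEL, state.level + 1)
    state.clean_streak = 0


def on_clean_session(state: UserFilterState, relax_after: int = 10) -> None:
    """Reward sustained good behavior by relaxing the filter again."""
    state.clean_streak += 1
    if state.clean_streak >= relax_after and state.level > 0:
        state.level -= 1
        state.clean_streak = 0
```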

4. Let users report bad content
If someone has to report something, harm has already been done. Still, everything users can create must be reportable. When content is reported, record the moderator decisions (in a pseudonymized, minimized way) and use them to train AI (like our Predictive Moderation) to scale out the easy decision-making and escalate critical issues. Engaging and empowering users to assist in identifying and escalating objectionable content is a must.
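
One way to record those decisions in a pseudonymized, minimized form is sketched below. The keyed-hash scheme, field names, and file format are illustrative assumptions, not a prescribed pipeline:

```python
# Sketch of recording moderator decisions in a pseudonymized, minimized form
# so they can later train a triage model. The hashing scheme, field names,
# and file format are illustrative assumptions.

import hashlib
import hmac
import json
import time

SECRET_KEY = b"rotate-me-regularly"  # placeholder; keep out of source control


def pseudonymize(user_id: str) -> str:
    """One-way keyed hash so decisions can't be tied directly back to a user."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]


def log_decision(reported_user_id: str, content_labels: list, decision: str,
                 path: str = "decisions.jsonl") -> None:
    """Store only what a model needs: labels and the action taken, not raw identity."""
    record = {
        "ts": int(time.time()),
        "subject": pseudonymize(reported_user_id),
        "labels": content_labels,  # e.g. ["harassment"], ["false_report"]
        "decision": decision,      # e.g. "closed", "warned", "banned", "escalated"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```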

Why we must create a better internet
In 2019, the best human intentions, paired with the best technology platforms and companies in the world, couldn’t stop a terrorist from live-streaming the murder of innocents. We still can’t understand why 1.5 million people chose to share it.

What we can do is continue to build and connect datasets and train AI models to get better. We can also find new ways to work together to make the internet a better, safer, place.

We’ll know it’s working when exposure to bullying, hate, abuse, and exploitation no longer feels like the price of admission for being online.

To learn more about Two Hat’s vision for a better internet that’s Safe by Design, download our white paper By Design: 6 Tenets for a Safer Internet.