London Calling: A Week of Trust & Safety in the UK
Two weeks ago, the Two Hat team and I packed up our bags and flew to London for a jam-packed week of government meetings, media interviews, and two very special symposiums.
I’ve been traveling a lot recently – first to Australia in mid-September for the great eSafety19 conference, then London, and I’m off to Chicago next month for the International Bullying Prevention Association Conference – so I haven’t had much time to reflect. But now that the dust has settled on the UK visit (and I’m finally solidly back on Pacific Standard Time), I wanted to share a recap of the week as well as my biggest takeaways from the two symposiums I attended.
Talking Moderation
We were welcomed by several esteemed media companies and had the opportunity to be interviewed by journalists who asked excellent, thought-provoking questions.
Haydn Taylor from GamesIndustry.Biz interviewed Two Hat CEO and founder Chris Priebe, myself, and Cris Pikes, CEO of our partner Image Analyzer, about moderating harmful online content, including live streams.
Rory Cellan-Jones from the BBC talked to us about the challenges of defining online harms (starts at 17:00).

I’m looking forward to more interviews being released soon.
We also met with branches of government and other organizations to discuss upcoming legislation. We continue to be encouraged by their openness to different perspectives across industries.
Chris Priebe continues to champion his position on transparency reports. He believes that making transparency reports truly transparent – i.e., digitizing and displaying them in app stores – has the greatest potential to drive significant change in content moderation and online safety practices.
Transparency reports are the rising tide that will float all boats: nobody will want to be the one site or app whose report doesn't show commitment and progress towards a healthier online community. Sure, everyone wants more users – but in an age of transparency, you will have to do right by them if you expect them to join your platform and stick around.
Content Moderation Symposium – “Ushering in a new age of content moderation”
On Wednesday, October 2nd, Two Hat hosted our first-ever Content Moderation Symposium. Experts from academia, government, non-profits, and industry came together to talk about the biggest content moderation challenges of our time, from tackling complex issues like defining cyberbullying and child exploitation behaviors in online communities to unpacking why a content moderation strategy is business-critical going into 2020.
Alex Holmes, Deputy CEO of The Diana Award, opened the day with a powerful and emotional keynote about the effects of cyberbullying. For me, the highlight of his talk was this video he shared about the definition of "bullying" – it really drove home the importance of adopting nuanced definitions.
Next up were Dr. Maggie Brennan, a lecturer in clinical and forensic psychology at the University of Plymouth and an academic advisor to Two Hat, and Zeineb Trabelsi, a third-year Ph.D. student in the Information Systems department at Laval University in Quebec and an intern in Two Hat's Natural Language Processing department.
Dr. Brennan and Zeineb have been working on academic frameworks for defining online child sexual victimization and cyberbullying behavior, respectively. They presented their proposed definitions, and our tables of six discussed them in detail. Discussion points included:
- Are these definitions complete, and do they make sense?
- What further information would we require to effectively use these definitions when moderating content?
- How do we currently define child exploitation and cyberbullying in our organizations?
My key takeaway from the morning sessions? Defining online harms is not going to be easy. It’s a complicated and nuanced task because human behavior is complicated and nuanced. There are no easy answers – but these cross-industry and cross-cultural conversations are a step in the right direction. The biggest challenge will be taking the academic definitions of online child sexual victimization and cyberbullying behaviors and using them to label, moderate, and act on actual online conversations.
I’m looking forward to continuing those collaborations.
Our afternoon keynote was presented by industry veteran David Nixon, who talked about the exponential and unprecedented growth of online communities over the last 20 years, and the need for strong Codes of Conduct and the resources to operationalize good industry practices. This was followed by a panel discussion with industry experts and several Two Hat customers. I was happy to sit on the panel as well.
My key takeaway from David's session and the panel discussion? If you design your product with safety at the core (Safety by Design), you're setting yourself up for community success. If not, reforming your community can be an uphill battle. One of our newest customers, Peer Tutor, is implementing Safety by Design in really interesting ways, which CEO Wayne Harrison shared during the panel. You'll learn more in an upcoming case study.
Finally, I presented our 5 Layers of Community Protection (more about that in the future – stay tuned!), and we discussed best practices for each layer of content moderation. The fifth layer of protection is Transparency Reports, which yielded the most challenging conversation. What will Transparency Reports look like? What information will be mandatory? How will we define success benchmarks? What data should we start to collect today? No one knows yet – but we looked at YouTube's Transparency Report as an example of, and guidance on, what may be legislated in the future.
My biggest takeaway from this session? Best practices exist – many of us are doing them right now. We just need to talk about them and share them with the industry at large. More on that in an upcoming blog post.
Fair Play Alliance’s First European Symposium
Being a co-founder of the Fair Play Alliance and seeing it grow from a conversation between a few friends to a global organization of over 130 companies and many more professionals has been incredible, to say the least. This was the first time the alliance held an event outside of North America. For a global organization, that mattered a great deal to us, and the event was a tremendous success! The feedback has been overwhelmingly positive, and we are so happy to see that it provided lots of value to attendees.
It was a wonderful two-day event held over October 3rd and 4th, with excellent talks and workshops hosted for FPA members. Chris Priebe, a couple of industry friends and veteran Trust & Safety leaders, and I hosted one of the workshops. We're all excited to take that work forward and see the results benefit the games industry!
What. A. Week.
As you can tell, it was a whirlwind week and I’m sure I’ve forgotten at least some of it! It was great to connect with old friends and make new friends. All told, my biggest takeaway from the week was this:
Everyone I met cares deeply about online safety, and about finding the smartest, most efficient ways to protect users from online harms while still allowing them the freedom to express themselves. At Two Hat, we believe in an online world where everyone is free to share without fear of harassment or abuse. I’ve heard similar sentiments echoed countless times from other Trust & Safety professionals, and I truly believe that if we continue to collaborate across industries, across governments, and across organizations, we can make that vision a reality.
So let’s keep talking.
I’m still offering free community audits for any organization that wants a second look at their moderation and Trust & Safety practices. Sign up for a free consultation using the form below!
Three Ways Social Networks Can Embrace Safety by Design Today
Earlier this month, the Australian eSafety Office released their Safety by Design (SbD) Principles. As explained on their website, SbD is an "initiative which places the safety and rights of users at the centre of the design, development and deployment of online products and services." It outlines three simple but comprehensive principles (service provider responsibilities, user empowerment & autonomy, and transparency & accountability) that social networks can follow to embed user safety into their platform from the design phase and onwards.
With this ground-breaking initiative, Australia has proven itself to be at the forefront of championing innovative approaches to online safety.
I first connected with the eSafety Office back in November 2018, and later had the opportunity to consult on Safety by Design. I was honored to be part of the consultation process and to bring some of my foundational beliefs around content moderation to the table. At Two Hat, we’ve long advocated for a Safety by Design approach to building social networks.
Many of the points in the Safety by Design Principles and the UK's recent Online Harms white paper support the Trust & Safety practices we've been recommending to clients for years, such as leveraging filters and cutting-edge technology to triage user reports. And we've heartily embraced new ideas, like transparency reports, which Australia and the UK both strongly recommend in their respective papers.
As I read the SbD overview, I had a few ideas for clear, actionable measures that social networks across the globe can implement today to embrace Safety by Design. The first two fall under SbD Principle 1, and the third under SbD Principle 3.
Under SbD Principle 1: Service provider responsibilities
“Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.”
Content filters are no longer a “nice to have” for social networks – today, they’re table stakes. When I first started in the industry, many people assumed that only children’s sites required filters. And until recently, only the most innovative and forward-thinking companies were willing to leverage filters in products designed for older audiences.
That’s all changed – and the good news is that you don’t have to compromise freedom of expression for user safety. Today’s chat filters (like Two Hat’s Community Sift) go beyond allow/disallow lists, and instead allow for intelligent, nuanced filtering of online harms that take into account various factors, including user reputation and context. And they can do it well in multiple languages, too. As a Portuguese and English speaker, this is particularly dear to my heart.
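To make that concrete, here is a minimal, illustrative sketch of what a nuanced filter decision could look like: combining a message severity score with user reputation and audience context rather than consulting a simple allow/disallow list. Every name, term, and threshold below is a hypothetical placeholder, not Community Sift's actual implementation.

```python
# Illustrative sketch only: a toy "nuanced" chat filter decision that weighs
# message severity against user reputation and audience context, instead of
# a flat allow/disallow word list. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class UserContext:
    reputation: float        # 0.0 (repeat offender) .. 1.0 (trusted)
    audience_is_minors: bool

def classify_severity(message: str) -> int:
    """Stand-in for a real multilingual classifier; returns 0 (benign) .. 4 (severe)."""
    high_risk_terms = {"placeholder severe phrase"}   # placeholders only
    medium_risk_terms = {"idiot", "loser"}
    text = message.lower()
    if any(term in text for term in high_risk_terms):
        return 4
    if any(term in text for term in medium_risk_terms):
        return 2
    return 0

def moderate(message: str, user: UserContext) -> str:
    """Return one of: 'allow', 'flag_for_review', 'block_and_escalate'."""
    severity = classify_severity(message)
    # Stricter thresholds for younger audiences and low-reputation users.
    threshold = 1 if user.audience_is_minors else 2
    if user.reputation < 0.3:
        threshold -= 1
    if severity >= 4:
        return "block_and_escalate"
    if severity > threshold:
        return "flag_for_review"
    return "allow"

# Example: a mild insult from a low-reputation user in a minors' community gets flagged.
print(moderate("you're such a loser", UserContext(reputation=0.2, audience_is_minors=True)))
```

In a production system, the severity classifier would of course be a trained, multilingual model rather than a keyword lookup; the point of the sketch is simply that the final decision takes more than the words themselves into account.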
All social networks can and should implement chat, username, image, and video filters today. How they use them, and the extent to which they block, flag, or escalate harms will vary based on community guidelines and audience.
Also under SbD Principle 1: Service provider responsibilities
“Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.”
As the first layer of protection and user safety, baseline filters are critical. But users should always be encouraged to report content that slips through the cracks. (Note that when social networks automatically filter the most abusive content, they’ll have fewer reports.)
But what do you do with all of that reported content? Some platforms receive thousands of reports a day. Putting everything, from false reports (users testing the system, reporting their friends, etc.) to serious, time-sensitive content like suicide threats and child abuse, into the same bucket is inefficient and ineffective.
That’s why we recommend implementing a mechanism to classify and triage reports so moderators purposefully review the high-risk ones first, while automatically closing false reports. We’ve developed technology called Predictive Moderation that does just this. With Predictive Moderation, we can train AI to take the same actions moderators take consistently and reduce manual review by up to 70%.
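As a rough illustration of the triage idea (not Two Hat's Predictive Moderation itself), the sketch below prioritizes reports by risk category and automatically closes reports that a model is highly confident are false, while never auto-closing the highest-risk categories. The categories, scores, and thresholds are assumptions made for the example.

```python
# Illustrative sketch only: triage user reports so high-risk items are reviewed
# first and likely false reports are closed automatically. Categories, scores,
# and thresholds are assumptions, not a real product's configuration.
import heapq
from dataclasses import dataclass, field

# Higher number = higher review priority.
PRIORITY = {
    "child_safety": 3,
    "suicide_self_harm": 3,
    "harassment": 2,
    "spam": 1,
    "other": 1,
}

@dataclass(order=True)
class Report:
    sort_key: int = field(init=False)
    category: str = field(compare=False)
    model_false_report_score: float = field(compare=False)  # 0..1, from a trained model
    text: str = field(compare=False)

    def __post_init__(self):
        # heapq pops the smallest item, so negate priority to get a max-priority queue.
        self.sort_key = -PRIORITY.get(self.category, 1)

def triage(reports, auto_close_threshold=0.95):
    queue, auto_closed = [], []
    for r in reports:
        # Confident false reports are closed automatically, but never for top-risk categories.
        if r.model_false_report_score >= auto_close_threshold and PRIORITY.get(r.category, 1) < 3:
            auto_closed.append(r)
        else:
            heapq.heappush(queue, r)
    return queue, auto_closed

queue, closed = triage([
    Report("spam", 0.98, "reported my friend for fun"),
    Report("suicide_self_harm", 0.10, "user says they want to hurt themselves"),
])
print(heapq.heappop(queue).category, "| auto-closed:", len(closed))
```

The design choice worth noting is the asymmetry: false-positive reports cost moderator time, but a missed suicide threat or child-safety report costs far more, so the highest-risk categories always reach a human.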
I shared some reporting best practices used by my fellow Fair Play Alliance members during the FPA Summit at GDC earlier this year. You can watch the talk here (starting at 37:30).
There’s a final but no less important benefit to filtering the most abusive content and using AI like Predictive Moderation to triage time-sensitive content. As we’ve learned from seemingly countless news stories recently, content moderation is a deeply challenging discipline, and moderators are too often subject to trauma and even PTSD. All of the practices that the Australian eSafety Office outlines, when done properly, can help protect moderator wellbeing.
Under SbD Principle 3: Transparency and accountability
“Publish an annual assessment of reported abuses on the service, accompanied by the open publication of a meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.”
While transparency reports aren’t mandatory yet, I expect they will be in the future. Both the Australian SbD Principles and the UK Online Harms white paper outline the kinds of data these potential reports might contain.
My recommendation is that social networks start building internal practices today to support these inevitable reports. A few ideas include (a rough sketch of what this tracking could look like follows the list):
- Track the number of user reports filed and their outcome (i.e., how many were closed, how many were actioned, how many resulted in human intervention, etc.)
- Log high-risk escalations and their outcome
- Leverage technology to generate a percentage breakdown of abusive content posted and filtered
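To show the kind of lightweight bookkeeping this could involve, here is a hypothetical sketch of an internal ledger that records report outcomes, high-risk escalations, and the percentage of content filtered, so a future transparency report is easier to assemble. The field names and categories are assumptions on my part, not a mandated format.

```python
# Illustrative sketch only: simple internal tallies a platform could start keeping
# today to make a future transparency report easy to assemble. Field names and
# categories are assumptions, not a legislated or standard format.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ModerationLedger:
    user_reports: Counter = field(default_factory=Counter)   # outcome -> count
    escalations: Counter = field(default_factory=Counter)    # high-risk type -> count
    content_reviewed: int = 0
    content_filtered: int = 0

    def record_report(self, outcome: str):
        # e.g. "closed_no_action", "actioned_automatically", "actioned_by_moderator"
        self.user_reports[outcome] += 1

    def record_escalation(self, risk_type: str):
        # e.g. "child_safety", "credible_threat", "self_harm"
        self.escalations[risk_type] += 1

    def record_content(self, filtered: bool):
        self.content_reviewed += 1
        if filtered:
            self.content_filtered += 1

    def summary(self) -> dict:
        pct_filtered = (100.0 * self.content_filtered / self.content_reviewed
                        if self.content_reviewed else 0.0)
        return {
            "user_reports_by_outcome": dict(self.user_reports),
            "high_risk_escalations": dict(self.escalations),
            "percent_content_filtered": round(pct_filtered, 2),
        }

ledger = ModerationLedger()
ledger.record_report("closed_no_action")
ledger.record_escalation("self_harm")
ledger.record_content(filtered=True)
ledger.record_content(filtered=False)
print(ledger.summary())
```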
Thank you again to the eSafety Office and Commissioner Julie Inman-Grant for spearheading this pioneering initiative. We look forward to the next iteration of the Safety by Design framework – and can’t wait to join other online professionals at the #eSafety19 conference in September to discuss how we can all work together to make the internet a safe and inclusive space where everyone is free to share without fear of abuse or harassment.
To read more about Two Hat’s vision for a safer internet, download our new white paper By Design: 6 Tenets for a Safer Internet.
And if you, like so many of us, are concerned about community health and user safety, I'm currently offering no-cost, no-obligation Community Audits. I will examine your community (or the community of someone you know!), locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize positive social interactions and user engagement.
Prepare for Online Harms Legislation With a Community Audit
The regulatory landscape is changing rapidly. In the last two months, we have seen huge changes in the UK and Australia, with potentially more countries to follow, including France and Canada. And just this week, 18 countries and 8 major tech companies pledged to eliminate terrorist and violent extremist content online as part of the Christchurch Call.
As part of my job as a Trust and Safety professional, I've been studying the UK Online Harms white paper, which proposes a statutory Duty of Care that would hold companies accountable for online harms on their platforms. Online harms would include anything from illegal activity and content to behaviours that are "harmful but not necessarily illegal."
It's an important read, and I encourage everyone in the industry to spend time reviewing the Department for Digital, Culture, Media & Sport's proposal, because it could very well end up the basis for similar legislation around the world.
All of this has got me thinking – how can platforms be proactive and embed purposeful content moderation into their DNA?
As an industry, none of us want hate speech, extremism, or abuse happening on our platforms – but how prepared are we to comply with changing regulations?
Where are our best practices?
Are we prepared to deal with the increasing challenges to maintain healthy spaces online?
The changes are complex but also deeply important.
The eSafety Commissioner in Australia has identified three Safety by Design principles and is creating a framework for SbD, with a white paper set to be published in the coming months. It's exciting that they are proactively establishing best practices guidance for online safety.
Organizations like the Fair Play Alliance are also taking a proactive path and looking at how the very design of products (online games, in this particular case) can be conducive to productive and positive interactions while mitigating abuse and harassment.
Over the past year, I've been consulted on pioneering initiatives and have participated in roundtables and industry panels to discuss these topics. I also co-founded the FPA along with industry friends and have seen positive changes first hand as more and more companies come together to drive lasting change in this space. Now I want to do something else that can hopefully bring value – something tangible that I can offer my industry friends today.
To that end, I’m offering free community audits to any platform that is interested.
I will examine your community, locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize positive social interactions and user engagement.
Of course, I can’t provide legal advice but I can provide tips and best practices based on my years of experience, first at Disney Online Studios and now at Two Hat, working with social and gaming companies across the globe.
I believe fostering healthy online spaces and protecting users online is a shared responsibility. I'm already talking to many companies and going over the audit process with them, and I look forward to providing as much value as I possibly can.
If you’re concerned about community health, user safety, and compliance, let’s talk.