Digital Safety: Combining the Best of AI Plus Human Insight

As AI and machine learning technologies continue to advance, there is increasing hype – and debate – about what they can and cannot do effectively. On June 29, the World Economic Forum released a pivotal report on Digital Safety. Some of the challenges identified in the report are:

  • The pandemic created challenges for countering vaccine misinformation.
  • January 6 (the storming of the US Capitol) has necessitated a deeper look into the relationship between social platforms and extremist activity.
  • Child sexual exploitation and abuse material (CSEAM) has continued to spread online.

Internationally, the G7 has committed to growing the international safety tech sector. We at Two Hat made several trips to the UK before the pandemic to provide feedback on the UK’s new online harms bill. With attention on solving online harms on the rise, we are excited to see new startups enter the field. Over 500 new jobs were created in the last year, and the industry needs to continue attracting the best technology talent to solve this problem.

AI is a Valuable Component

For many, AI is a critical part of the solution. As the largest digital safety provider, we alone handle 100B human interactions a month. To put that in perspective, that is 6.57 times the volume of Twitter. If a human could review 500 items an hour, you would need 1.15 million humans to review all that data. Asking humans to do that would never scale. Worse, human eyes would gloss over and miss things. Alternatively, if people reviewed only the worst content, we would be subjecting them to days filled with beheadings, rape, child abuse, harassment, and many other harms, leading to PTSD.
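
As a rough sanity check on those numbers, here is the arithmetic behind the 1.15 million figure (the 174 working hours per month for a full-time reviewer is our assumption):

```python
# Back-of-the-envelope check of the scale figures above.
# Assumption: a full-time reviewer works roughly 174 hours per month.
messages_per_month = 100_000_000_000      # 100B human interactions
items_per_reviewer_hour = 500
hours_per_reviewer_month = 174

reviewer_hours_needed = messages_per_month / items_per_reviewer_hour
reviewers_needed = reviewer_hours_needed / hours_per_reviewer_month

print(f"{reviewer_hours_needed:,.0f} person-hours per month")   # 200,000,000
print(f"{reviewers_needed:,.0f} full-time reviewers")            # ~1,149,425
```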

AI Plus Humans is the Solution

One of our mantras at Two Hat is, “Let humans do what humans do well and let computers do what they do well.” Computers are great at scale. Teach a machine a clear signal – that “hello” is good and “hateful badwords” are bad – and it will apply that signal to billions of messages. Humans, however, understand why those “hateful badwords” are bad. They bring empathy and loosely connected context, and they can make exceptions. Humans fit things into a bigger picture while machines (as magical as they may seem) are just following rules. We need both, so a human feedback loop is essential. Humans provide the creativity, teach the nuances, act as the ethics committee, and stay on top of emerging trends in language and culture. According to Pandorabots CEO Lauren Kunze, internet trolls have tried and failed to corrupt Mitsuku, an award-winning chatbot persona, on several occasions because human supervisors are required to approve any knowledge the AI retains globally.
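
To make that feedback loop concrete, here is a minimal sketch of a human approval queue in the spirit of the Mitsuku example; the class and method names are purely illustrative and not Two Hat’s or Pandorabots’ actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class LearnedItem:
    """A piece of knowledge the model proposes to retain globally."""
    text: str
    approved: bool = False

@dataclass
class HumanApprovalQueue:
    """Nothing reaches the global knowledge base until a human signs off."""
    pending: list = field(default_factory=list)
    global_knowledge: list = field(default_factory=list)

    def propose(self, text):
        # The model (or a troll talking to it) can propose anything...
        self.pending.append(LearnedItem(text))

    def review(self, decisions):
        # ...but only human-approved items are retained; rejections are dropped.
        still_pending = []
        for item in self.pending:
            verdict = decisions.get(item.text)
            if verdict is True:
                item.approved = True
                self.global_knowledge.append(item)
            elif verdict is None:
                still_pending.append(item)   # not reviewed yet
        self.pending = still_pending

queue = HumanApprovalQueue()
queue.propose("offensive phrase taught by a troll")
queue.propose("harmless new greeting")
queue.review({"harmless new greeting": True, "offensive phrase taught by a troll": False})
print([item.text for item in queue.global_knowledge])  # only the approved item
```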

We also need multiple forms of AI. “If all you have is a hammer, everything looks like a nail” – modern proverb. A common mistake we see is people relying too much on one form of AI and forgetting the others.

Let’s consider some definitions of several parts of AI:

  • Artificial Intelligence refers to any specialized task done by a machine. This includes machine learning and expert systems.
  • Expert System refers to systems that use databases of expert knowledge to offer advice or make decisions.
  • Machine Learning refers to systems that are coded to learn a task ‘on their own’ from the data they are given; the decisions they make are not explicitly coded.
  • Deep Learning refers to a specific form of machine learning, which is very trendy at the moment. This type of machine learning is based on ‘deep’ artificial neural networks.

To avoid the “everything looks like a nail” trap, we use Expert Systems, Machine Learning, and Deep Learning within our stack. The trick is to use the right tool at the right level and to have a good human feedback loop.
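
As a rough illustration of “the right tool at the right level”, here is a minimal sketch of how the layers might be combined, with an explicit hand-off to humans when the machine is unsure. The rules, thresholds, and placeholder classifier are all hypothetical.

```python
import re

# Layer 1: expert-system rules -- deterministic, human-authored, cheap to run.
ALLOW_RULES = [re.compile(r"^\s*hello\b", re.IGNORECASE)]
DENY_RULES = [re.compile(r"\bhateful badwords\b", re.IGNORECASE)]

def rule_verdict(message):
    if any(rule.search(message) for rule in DENY_RULES):
        return "deny"
    if any(rule.search(message) for rule in ALLOW_RULES):
        return "allow"
    return None  # rules are silent; fall through to the model

def model_risk_score(message):
    """Stand-in for a trained classifier returning a risk score in [0, 1]."""
    return 0.5  # hypothetical placeholder

def moderate(message):
    # Layer 2: machine learning handles what the rules don't cover.
    verdict = rule_verdict(message)
    if verdict is not None:
        return verdict
    score = model_risk_score(message)
    if score >= 0.9:
        return "deny"
    if score <= 0.1:
        return "allow"
    # Layer 3: uncertain cases go to humans, whose decisions feed back into the model.
    return "escalate_to_human"

print(moderate("hello everyone"))       # allow (rule)
print(moderate("something ambiguous"))  # escalate_to_human (model unsure)
```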

And because we view AI as a contributor to the solution rather than The Solution, we can see the screws and other “non-nails” and address them with humans and other systems and methods more effectively than with the AI hammer.

Don’t Leave It To Chance

There was a great article by Neal Lathia where he reminds us that we shouldn’t be afraid to launch a product without machine learning. In our case, if you know a particular offensive phrase is not acceptable in your community, you don’t need to train a giant neural network to find it. An expert system will do. The problem with a neural network in this case is that you’re leaving it to chance. You feed examples of the phrase into a black box, and it begins to see the phrase everywhere, perhaps where you don’t want it to. If you give it mislabelled examples, or simply too many counterexamples, it may ignore the phrase completely.
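
In other words, a known unacceptable phrase needs nothing more than a deterministic lookup. A minimal sketch, with a placeholder phrase list:

```python
# A known unacceptable phrase needs only a deterministic lookup -- no training
# data, no probabilities, nothing left to chance. (Placeholder phrase list.)
KNOWN_UNACCEPTABLE = {"badword", "another known phrase"}

def violates_policy(message):
    lowered = message.lower()
    return any(phrase in lowered for phrase in KNOWN_UNACCEPTABLE)

print(violates_policy("please do not say BADWORD here"))  # True, every time
```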

At this point we borrowed something from antivirus companies that shaped how we’ve modelled our own approach (see the sketch after this list):

  1. Process 100 billion messages a month.
  2. Be aware of new patterns that are harming one community.
  3. Have humans write manual signatures that are well vetted and accurate.
  4. Roll those signatures out proactively to the other communities.
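
A rough sketch of that loop, with illustrative names rather than our production system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    """A human-vetted pattern, analogous to an antivirus signature."""
    pattern: str
    category: str
    vetted_by: str

class SignatureRegistry:
    def __init__(self, communities):
        # One signature set per community we protect.
        self.communities = {name: set() for name in communities}

    def report_emerging_pattern(self, community, pattern):
        # Step 2: a new harmful pattern surfaces in one community.
        print(f"Pattern {pattern!r} flagged in {community}, awaiting human review")

    def publish(self, signature):
        # Steps 3-4: once a human vets the signature, roll it out everywhere.
        for signature_set in self.communities.values():
            signature_set.add(signature)

registry = SignatureRegistry(["game_a", "forum_b", "app_c"])
registry.report_emerging_pattern("game_a", "b4dw0rd")
registry.publish(Signature(pattern="b4dw0rd", category="hate", vetted_by="moderator_1"))
print(len(registry.communities["forum_b"]))  # 1 -- protected proactively
```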

Determined Users will Subvert Your Filter

“The moment you fix a new problem, the solution is obsolete.” Many think the problem is “find badword”, not realizing that the moment they catch “badword”, users change their behaviour and no longer use it. Now they use “ba.dword” and “b4dw0rd”. When you solve that, they move on to “pɹoʍpɐq” and “baᕍw⬡rd” and somehow hide “badword” inside “goodword” or in a phrase. After nine years we have accumulated so many tests for these types of subversions that you would want to give these users an honorary PhD in creative hacking.
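
Catching those variants starts with aggressive normalization before any matching. The sketch below shows the idea with a tiny, illustrative character map; real systems handle far more, including Unicode confusables, flipped text, and phrases hidden inside other words.

```python
import re
import unicodedata

# Illustrative subset of look-alike substitutions; production maps are far
# larger and also handle tricks (like flipped text) that this sketch does not.
LEET_MAP = str.maketrans({"4": "a", "0": "o", "3": "e", "1": "i", "$": "s"})

def normalize(text):
    # Fold compatibility characters toward ASCII where possible.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    text = text.lower().translate(LEET_MAP)
    # Drop punctuation and spacing used to split words ("ba.dword", "b a d word").
    return re.sub(r"[^a-z]", "", text)

print(normalize("b4dw0rd"))     # badword
print(normalize("ba.d word"))   # badword
```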

However, if you rely on logical rules alone to find “badword” in all its many subversive forms, you run the risk of missing similar words. For instance, if you take the phrase “bad word” and feed it into a pre-trained machine learning model to find words that are similar, you get words like “terrible”, “horrible”, and “lousy”. In the antivirus analogy, humans use their imagination to create a manual signature. They might find “badword” is trending, but did they consider “terrible”, “horrible”, or “lousy”? Maybe – maybe not; it depends on their imagination. This is not a good strategy if missing “lousyword” means someone may die by suicide. Obviously we are not really talking about “lousyword”, but about things that really matter.
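
This is where a pre-trained model complements the human-authored signature: given a seed word, embedding similarity proposes related words a human might not have imagined, and a human then vets the suggestions. The sketch below uses a tiny hand-made vector table purely for illustration; a real system would load pre-trained vectors such as GloVe or fastText.

```python
import math

# Toy vectors purely for illustration; a real system would load pre-trained
# embeddings (e.g. GloVe or fastText) with hundreds of dimensions.
EMBEDDINGS = {
    "bad":      [0.90, 0.10, 0.00],
    "terrible": [0.85, 0.15, 0.05],
    "horrible": [0.88, 0.12, 0.02],
    "lousy":    [0.80, 0.20, 0.10],
    "hello":    [0.00, 0.10, 0.95],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(seed, top_n=3):
    seed_vec = EMBEDDINGS[seed]
    scored = [(w, cosine(seed_vec, v)) for w, v in EMBEDDINGS.items() if w != seed]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# A human still vets these suggestions before they become signatures.
print(most_similar("bad"))  # "terrible", "horrible", "lousy" rank far above "hello"
```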

The Holistic 5-Layer Approach to Community Protection

How do you get all your tools to work together? Self-driving cars have a piece of the answer. In that context, if the AI gets it wrong, someone gets run over. To reduce that risk, manufacturers mount as many cameras and sensors as they can. They train multiple AI systems and blend them together. If one system fails, another takes over. My new van can read the lines on the side of the road and “assist” me by steering around corners on the highway. One day I was coming home from skiing with my kids in the back and it flashed a warning telling me a human was required.

To scale to billions of messages we need that multi-layered approach. If one layer is defeated there is another behind it to back us up. If the AI is not confident, it should call in humans and it should learn from them. That is why Community Sift has 5 Layers of Community Protection. Each layer combines AI plus human insight, using the best of both.

  • Community Guidelines: Tell your community what you expect. In this way, you are creating safety via defining the acceptable context for your community. This is incredibly effective, as it solves the problem before it’s even begun. You are creating a place of community so set the tone at the door. This can be as simple as a short page of icons of what the community is about as you sign up. You can learn more about designing for online safety here. Additionally, our Director of Trust & Safety, Carlos Figueiredo, consults clients on setting the right community tone from the ground up and creating community guidelines as the foundational element to community health and safety operations.
  • Classify and Filter: The moment you state in your Community Guidelines that you will not tolerate harassment, abuse, child exploitation and hate, someone will test you to see if you really care. The classify and filter line of defense backs up your promise that you actually care about these things by finding and removing the obviously bad and damaging. Think of this like anti-virus technology but for words and images. This should focus on what are “deal-breakers” to your company; things once seen that cannot be unseen. Things that will violate the trust your community has in you. Just like with anti-virus technology, you use a system that works across the industry so that new trends and signatures can keep you safe in real-time.
  • User Reputation: Some online harm occurs in borderline content over several interactions. You don’t want to over-filter for this because it restricts communication and frustrates normally positive members of your community. In this layer we address those types of harm by building a reputation on each user and each context. There is a difference between a normally positive community member exceptionally sharing something offensive and a bad actor or a bot willfully trying to disrupt normal interactions. For example, it may be okay that someone says “do you want to buy” once. It is not okay if they say it 20 times in a row. In a more advanced sense, everything about buying is marked as borderline spam. For new and long-standing users that may be allowed. But for people or bots that misuse that privilege, it is taken away automatically and automatically re-added when they go back to normal (a minimal sketch of this follows the list). The same principle works for sexual harassment, hate speech, grooming of children, and filter manipulations. All those categories are full of borderline words and counter statements that need context. If context is King then reputation is Queen. Working in concert with the first two layers, user reputation is used to discourage bad actors while only reinforcing the guidelines for the occasional misstep.
  • User Report Automation: Even with the best technology in the above three layers, some things will get through. We need another layer of protection. Anytime you allow users to add content, allow other users to report that content. Feedback from your community is essential to keep the other three layers fresh and relevant. As society is continuously establishing new norms, your community is doing the same and telling you through user reports. Those same reports can also tell you a crisis is emerging. Our custom AI learns to take the same actions your moderators take consistently, reducing manual review by up to 70%, so your human moderators can focus on the things that matter.
  • Transparency Reports: Legislation being introduced worldwide will require transparency from social networks on safety measures; in addition, data insights from the other four layers drive actions to improve your communities. Are the interactions growing over time? Are you filtering too heavily and restricting the flow of communication? Is bullying on the rise in a particular language? How long does it take you to respond to a suicide threat or a public threat? How long do high-reputation members keep contributing to the community? These data insights demonstrate the return on investment of community management, because a community that plays well together stays together. A community that stays together longer builds the foundation and potential for a successful business.
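
As referenced in the User Reputation layer above, here is a minimal sketch of automatically withdrawing and restoring a borderline privilege; the window, threshold, and category names are illustrative, not our production logic.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 600     # look at the last 10 minutes of behaviour
BORDERLINE_LIMIT = 5     # repeated borderline messages before trust is withdrawn

class ReputationTracker:
    """Tracks borderline behaviour per user and per category ("spam", "hate", ...)."""

    def __init__(self):
        self.events = defaultdict(deque)   # (user, category) -> timestamps

    def record_borderline(self, user, category, now=None):
        now = time.time() if now is None else now
        self.events[(user, category)].append(now)

    def is_trusted(self, user, category, now=None):
        now = time.time() if now is None else now
        recent = self.events[(user, category)]
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()               # old behaviour ages out; trust returns
        return len(recent) < BORDERLINE_LIMIT

tracker = ReputationTracker()
for _ in range(20):                        # "do you want to buy" twenty times in a row
    tracker.record_borderline("bot_42", "spam", now=1000.0)
print(tracker.is_trusted("bot_42", "spam", now=1001.0))  # False: privilege withdrawn
print(tracker.is_trusted("bot_42", "spam", now=5000.0))  # True: behaviour normalized
```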

To Achieve Digital Safety, Use A Multi-Layered Approach

Digital safety is a complex problem which is getting increasing attention from international governments and not-for-profit organizations like the World Economic Forum. AI is a critical part of the solution, but AI alone is not enough. To scale to billions of messages we need that multi-layered approach that blends multiple types of AI systems together with human creativity and agility to respond to emerging trends. At the end of the day, digital safety is not just classifying and filtering bad words and phrases. Digital safety is about appropriately handling what really matters.

 

Client Spotlight: Kidzworld

With many schools shut down indefinitely and the summer break approaching soon, it’s more important than ever that children have safe online spaces to share and make new friends.

We recently caught up with Executive Vice President James Achilles and Community Manager Jordan Achilles of Kidzworld to discuss how they are keeping kids connected.

First of its kind
The first truly safe and secure kids’ social network, Kidzworld began life in 2001 as an online magazine for kids, long before kid-friendly content was widely available online. In 2007, Kidzworld recognized that kids wanted more than just online content – they were also looking for safe spaces to chat and make new friends. Ahead of the explosion of kid-oriented social networks, Kidzworld introduced their own moderated chat room, forums, and profiles for young internet users.

Kidzworld Logo

Like many organizations at the time, Kidzworld originally used an in-house blacklist/whitelist to moderate their social features, with moderators manually entering new words and phrases to the lists based on community trends. “At one point, we even manually turned the chat room on and off, based on when our moderators were available to watch the chat in real-time,” says James Achilles, Executive Vice President at Kidzworld Media.

As technology evolved and the needs of a children’s social network changed, Kidzworld looked to new solutions to make their moderation process smarter and more efficient, and to provide their users with a safe platform that still allowed freedom of expression.

Early adopters

James Achilles

An early adopter of Two Hat’s chat filter and content moderation platform Community Sift, Kidzworld met with CEO and founder Chris Priebe in 2012 to help build the chat filter using their data. In 2017 they officially came on board. “We wanted to see how we could evolve as a filter and allow the kids so much more freedom,” says James. “That’s when we came to Community Sift because it allowed the kids to say certain words but only within a certain context.”

What is the biggest change in moderation that Kidzworld has seen over the years? “The freedom and flexibility with words and phrases which didn’t exist when we started,” James says.

The team also uses other techniques to enforce community guidelines.

Jordan Achilles

“We have auto-messaging set up through Community Sift,” says Online Community & Web Content Manager Jordan Achilles. “If a user hits a certain threshold, they get a warning on both the negative and positive. There are messages that say What you’re saying isn’t allowed, review the rules. But on the flip side, we can reward the user, with a message like You’ve been communicating well! You are now a trusted user, which gives them more freedom to chat.”

The community itself is generally positive, adds Jordan. “They come to me online all the time to let me know if they saw a message that just didn’t feel right, or if someone is asking weird questions. That is so beneficial to us. We have a reporting system that is really great when the kids can just report one message and they know that I’m going to look at the rest of the messages and see what the other user’s intention is.”

A virtual playground
Today, thanks to this robust moderation platform, Kidzworld is a bustling online community, made up of kids from across the globe.

“They come here to catch up with each other, go in the chat room and be silly or go in the forums and do different role plays,” says Jordan. “The roleplay forums are where the strongest community of friends exist because they rely on each other to have that communication for the story threads, these fantasy stories that they’ve created. They create these stories with each other each day; one person posts and then they respond to each other, creating a full story.”

The Kidzworld role-playing forums are truly wonderful. Full of interactive, text-only stories set in TV sitcoms, hospitals, the worlds of Marvel and Harry Potter, school, and original worlds, they are places where a child’s imagination can run wild.

“The forums are where we’ve seen a huge change with the flexibility of the filter,” says Jordan. “Some of the stuff that they’re saying, random characters or different personality traits, our previous filter would block and reject.”

The roleplay forums in Kidzworld

It also helps that the Kidzworld team has full access to the moderation platform and can update it in real-time. “If a weird obscure name that they’ve created for the roleplay character is blocked, I can approve it and even add it to the filter so it’s not blocked again,” Jordan says.

“It’s so cool to see their imaginations go. And that’s why we’re so happy to give them this space,” adds Jordan. “They can be these different people that they want to be online and it creates a space for them to let their imagination run wild and write stories and be someone that they can’t necessarily be in real life.”

Looking ahead
Asked what the future holds for Kidzworld, James Achilles says, “We love seeing more and more kids on the site. It is great for them to take advantage of all the opportunities on the site. The kids that are here love it and they’re consistent users. We would like to be more widely known for everything we have to offer kids. We are always looking for ways to improve the site. Right now we are working on some new technology, in partnership with Community Sift, that we know the kids are going to love.”

Gary, the Kidzworld mascot

===

Learn more about Kidzworld’s commitment to safety in the parent and teacher resources section of their website. Read about their safety guidelines here.

And don’t forget to check out the kid-friendly content, from quizzes to movie reviews, and everything in between!



There’s No Single Solution To Keeping Children Safer Online, But There Is Hope

Recently, I have read commentaries in response to the excellent New York Times series on child sexual abuse. One particular point that was raised inspired me to write this article: the claim that existing technologies are not sophisticated enough to stop predators online, and that artificial intelligence systems alone might provide a solution. In desperate times, when the horrid truth of online child sexual abuse (there’s no such thing as child pornography) and the staggering increase in images and videos being shared are crushing our collective spirits, it’s understandable that we will look for a silver bullet.


Conversations About Gaming & Digital Civility With Laura Higgins from Roblox

In November, Laura Higgins, Director of Community Safety & Digital Civility at Roblox shared the fascinating results of a recent survey with International Bullying Prevention Association Conference attendees in Chicago. The results provide a refreshingly honest peek into the world of online gaming and family communication – and should serve as a wake-up call for family-based organizations.

Roblox conducted two separate surveys – one in the UK and one in the US. In the UK, they spoke to over 1,500 parents, and in the US they surveyed more than 3,500 parents and 580 teenagers, with different questions but some similar themes.

Two Hat Director of Community Trust & Safety Carlos Figueiredo was lucky enough to share the stage with Laura during the Keynote Gaming Panel at the same conference. During the panel and in conversations afterward, he and Laura spoke at length about the surprising survey results and how the industry needs to adopt a “Communication by Design” approach when talking to parents.

What follows is a condensed version of their conversations, where Laura shares her biggest takeaways, advice for organizations, and thoughts on the future of digital conversations.

Carlos Figueiredo: Some fascinating and surprising results came out of these surveys. What were your biggest takeaways?

Laura Higgins: In the UK survey, unsurprisingly, 89% of parents told us that they were worried about their kids playing games online. They cited concerns about addiction, strangers contacting their children, and that gaming might lead to difficulties forming real-life friendships or social interactions.

What was really interesting is that nearly the same number of parents said they could see the benefits of gaming, so that’s something we’re going to really unpack over the next year. They recognize improved cognitive skills, they loved the cooperation and teamwork elements that gaming provided, the improved STEM skills. They recognize that playing games can help kids in the future as they will need digital skills as adults, which was really interesting for us to hear about.

The big thing that came out of this that we really need to focus on is that, of those people who said they were worried about gaming, half of them told us that their fears were coming from stories they saw on media and social media, instead of real-life experience. We know there’s a lot of negativity in the press, particularly around grooming and addiction/gambling, so I think we need to be mindful of the way we talk to parents so that whilst we’re educating them about possible risks (and we know that there are risks), we’re also discussing how to raise resilient digital citizens and are giving them the tools to manage risks rather than just giving them bad news. We’re trying to proactively work with media outlets by telling them, if you want to talk about the risks, that’s fine, but let’s share some advice in there as well, empower rather than instill even more fear.

CF: Did you see different results with the US survey?

LH: With the US research we were also able to reach 580 teens and compare the data from them and parents. Some of the most startling stuff for us was the disconnect between what parents think is really happening versus what kids think is happening.

For example, 91% of parents were convinced that their kids would come and talk to them if they were being bullied. But only 26% of kids said they would tell their parents if they had witnessed bullying. In fact, they would tell anyone else but their parents; they would report it to the platform, they would challenge the bully directly, or they would go to another adult instead of their parents.

The gap was echoed throughout the whole survey. We asked if parents talked to their kids about online safety and appropriate online behavior, and 93% of parents said that they were at least occasionally or regularly discussing this topic with their kids, while 60% of teens said that their parents never or rarely talked to them about appropriate online behavior. So, whatever it is that parents are saying — kids aren’t hearing it.

We need to make sure we’re reaching kids. It’s more than just sitting down and talking to them; it’s how it’s being received by kids as well.

CF: It seems like your surveys are uncovering some uncomfortable realities – and the things that the industry needs to focus on. We talk a lot about Safety by Design, but it seems like a focus we’re missing is Communication by Design.

LH: We were surprised with how honest parents were. Over half of UK parents, for example, are still not checking privacy and security settings that are built in. Part of my role at Roblox has been to review how accessible the advice is, how easy to understand it is, and it’s an ongoing process. We appreciate how busy parents are – they don’t have time to go looking for things.

We asked US parents who rarely or never had conversations with their kids about appropriate behavior online, why they didn’t feel like they were necessary, and we got some fascinating quotes back. Parents think they’re out of their depth, they think that their kids know more than them. In some cases that may be true, but not really – digital parenting is still parenting.

We heard quotes like, “If my kid had a problem, they would tell me.” The research tells us that’s not true.

“If my child was having problems, I would know about it.” But if you’re not talking about it, how is that going to happen?

“I brought my kid up right.” Well, it’s not their behavior we always have to look at – it’s their vulnerabilities as well.

We need to talk more broadly than just how to use the settings, so I think there are many layers to these conversations for parents as well.

CF: What are some other things we can do as an industry to help parents?

LH: One is, give them the skills and easy, bite-sized tips: here’s how you check your safety settings, here’s how you set privacy settings, here’s how you report something in-game, practical things they can teach their kids as well.

There’s also a broader conversation that empowers parents to learn how to have conversations. At Roblox, we do lots of work around things like, how to say no to your kids, what is an appropriate amount of screen time for your child, how to manage in-game purchases, and setting boundaries and limits, all advice that parents are grateful for. But if we just had an advice section or FAQ on the website, they would never get to hear those messages.

It’s about amplifying the message, working with the media as much as possible, having some different outlets like our Facebook page that we just launched. So parents who are sitting on the bus on the way to work scrolling through and finding those little reminders is really helpful.

CF: Speaking of your new Facebook page, Roblox has been really innovative in reaching out to parents.

LH: We’re also taking it offline. We have visits to Roblox, for instance, with kids. We’ll be holding an online safety session for parents while kids are off doing other activities. So I’m helping to write that. And working with parents in organizations as well, so they can still get those messages out where people are.

Schools have a key place in all of these conversations. We know that the quality of online safety conversations in schools is poor, it’s often still an assembly once a year and we’re going to scare you silly, not actually talk about practical stuff, rather than delivering these lessons through the curriculum. They should be reminding kids of appropriate online behavior at all times and giving them those digital literacy skills as well.

We’re doing webinars, we’re doing visits, and hopefully, gradually we’ll keep feeding them those messages.

CF: It’s encouraging that you’re so committed to this, trying to change culture. Not every platform is putting in this effort.

LH: I think we have to. I’ve been working in digital safeguarding for years, and I don’t think that we’ve hit that sweet spot yet. We haven’t effected enough change, and we need to move even faster.

Now, with all of these conversations about online harms papers and regulations – we’ve worked with partners in Australia and New Zealand where they have the Harmful Digital Communications Act, but it’s still not really changing. This is just a new approach – that drip-feed, that persistence that hopefully will effect change.

We’re very lucky at Roblox – our community is really lovely. By the way, 40% of Roblox users are female, which is rare in gaming. And they are very young and very supportive of each other. They are happy to learn at that age. And we can help to shape them and mold them, and they can take those attitudes and behaviors through their online digital life as they grow up.

In the survey, we wanted the kids to tell us about the positive and negative experiences that they’ve had online. Actually, what most of them reflected wasn’t necessarily around things like bullying and harassment – they were actually saying that the things that made them feel really bad were when they did badly in a game and they were a bit tough on themselves. And they said they would walk away for 10 minutes, come back, and it was fine. And when people were positive to them in-game, they were thinking about it a few days later. So when we’re looking at how we manage bad behavior in our platform, it’s really important that we have rules, that we have appropriate sanctions in place, and that we can use the positive as an educational tool. I think we really need that balance.

CF: I love that framing. It’s a reminder that most players are having a good time and enjoying the game the way it was meant to be enjoyed. We all have bad days but nasty behavior is not the norm.

LH: It’s in everybody’s interest to make it a positive experience. We have a role to play in that but so do the kids themselves. They self-regulate, they call out bad behaviors, they are very supportive of each other.

We asked them why they play online games and 72% said, “Because it’s fun!”

That should be the starting point. Ultimately, it’s about play and how important that is for all of us.

CF: What is your best advice for gaming organizations, from reinforcing positive behavior to better communicating with parents?

LH: Great question. The first thing is to listen to your community. Their voice is really important. Without our players and their families, we would not have Roblox. Gaming companies can sometimes make decisions that are good for the business, rather than what the players want and what the community needs. And act on it. Take their feedback.

If you’re working with children, have a Duty of Care to make it as safe as possible. That’s a difficult one, because we know that small companies and startups might struggle financially. We’re working with trade bodies on the idea of Safety by Design – what are the bare minimums that must be met before we let anyone communicate on your platform? It doesn’t have to be all of the best equipment, tools, systems in place, but there are some standards that I think we should all have in place.

For example, if you have chat functions, you need to make sure that you’ve got the right filters in place. Make sure it is age-appropriate all the way through.

Ultimately, machine learning and AI are wonderful, but they can never replace humans in certain roles or situations. You need well-trained, good moderators. Moderators have one of the most important roles on gaming platforms, so making sure they’re really well supported is important. They have a tough job. They are dealing with very upsetting things, so make sure that they aren’t just trained to deal with it, but that they have after-care as well.

If you are a family-based platform make sure you reach out to parents. I met with a delegate and she said it was the first time she’s heard a tech company talk about engaging with parents. I think if we could all start doing that a little bit more, it would be better.

CF: You mentioned that in your 20 years in the digital civility industry, the needle has barely moved. Do you think that’s changing?

LH: I’m really hopeful for the future. I had talked with journalists a few months ago who were slightly scoffing at my aspirations of digital civility. If you’re coming from a starting point where you just assume that games are bad and the players are bad and the community is bad – you’re wrong. People are kind. People do have empathy. They want to see other people succeed.

For example, nearly all teens (96%) in our survey said they would likely help a friend they see being bullied online, and the majority of teens confirmed they get help from other players when they need it at least “sometimes,” with 41% saying they get peer help “often” or “always.” Those are all things we see all the time in gaming. And we have this opportunity to spread that out even more and build those really good positive online citizens.

This is much bigger than Roblox.

These kids are the future. The more that we can invest in them, the better.

We all need to enable those conversations, encourage those conversations, and equip parents with the right messages.

***




London Calling: A Week of Trust & Safety in the UK

Two weeks ago, the Two Hat team and I packed up our bags and flew to London for a jam-packed week of government meetings, media interviews, and two very special symposiums.

I’ve been traveling a lot recently – first to Australia in mid-September for the great eSafety19 conference, then London, and I’m off to Chicago next month for the International Bullying Prevention Association Conference – so I haven’t had much time to reflect. But now that the dust has settled on the UK visit (and I’m finally solidly back on Pacific Standard Time), I wanted to share a recap of the week as well as my biggest takeaways from the two symposiums I attended.

Talking Moderation

We were welcomed by several esteemed media companies and had the opportunity to be interviewed by journalists who asked excellent, productive questions.

Haydn Taylor from GamesIndustry.Biz interviewed Two Hat CEO and founder Chris Priebe, myself, and Cris Pikes, CEO of our partner Image Analyzer, about moderating harmful online content, including live streams.

Rory Cellan-Jones from the BBC talked to us about the challenges of defining online harms (starts at 17:00).

Chris Priebe being interviewed about online harms

I’m looking forward to more interviews being released soon.

We also met with branches of government and other organizations to discuss upcoming legislation. We continue to be encouraged by their openness to different perspectives across industries.

Chris Priebe continues to champion his angle regarding transparency reports. He believes that making transparency reports truly transparent – i.e., digitizing them and displaying them in app stores – has the greatest potential to significantly drive change in content moderation and online safety practices.

Transparency reports are the rising tide that will float all boats as nobody will want to be that one site or app with a report that doesn’t show commitment and progress towards a healthier online community. Sure, everyone wants more users – but in an age of transparency, you will have to do right by them if you expect them to join your platform and stick around.

Content Moderation Symposium – “Ushering in a new age of content moderation”

On Wednesday, October 2nd, Two Hat hosted our first-ever Content Moderation Symposium. Experts from academia, government, non-profits, and industry came together to talk about the biggest content moderation challenges of our time, from tackling complex issues like defining cyberbullying and child exploitation behaviors in online communities to unpacking why a content moderation strategy is business-critical going into 2020.

Alex Holmes, Deputy CEO of The Diana Award, opened the day with a powerful and emotional keynote about the effects of cyberbullying. For me, the highlight of his talk was a video he shared about the definition of “bullying” – it really drove home the importance of adopting nuanced definitions.

Next up were Dr. Maggie Brennan, a lecturer in clinical and forensic psychology at the University of Plymouth, and an academic advisor to Two Hat, and Zeineb Trabelsi, a third-year Ph.D. student in the Information System department at Laval University in Quebec, and an intern in the Natural Language Processing department at Two Hat.

Dr. Brennan and Zeineb have been working on academic frameworks for defining online child sexual victimization and cyberbullying behavior, respectively. They presented their proposed definitions, and our tables of six discussed them in detail. Discussion points included:

Are these definitions complete and do they make sense? What further information would we require to effectively use these definitions when moderating content? How do we currently define child exploitation and cyberbullying in our organizations?

My key takeaway from the morning sessions? Defining online harms is not going to be easy. It’s a complicated and nuanced task because human behavior is complicated and nuanced. There are no easy answers – but these cross-industry and cross-cultural conversations are a step in the right direction. The biggest challenge will be taking the academic definitions of online child sexual victimization and cyberbullying behaviors and using them to label, moderate, and act on actual online conversations.

I’m looking forward to continuing those collaborations.

Our afternoon keynote was presented by industry veteran David Nixon, who talked about the exponential and unprecedented growth of online communities over the last 20 years, and the need for strong Codes of Conduct and the resources to operationalize good industry practices. This was followed by a panel discussion with industry experts and several Two Hat customers. I was happy to sit on the panel as well.

My key takeaway from David’s session and the panel discussion? If you design your product with safety at the core (Safety by Design), you’re setting yourself up for community success. If not, reforming your community can be an uphill battle. One of our newest customers, Peer Tutor, is implementing Safety by Design in really interesting ways, which CEO Wayne Harrison shared during the panel. You’ll learn more in an upcoming case study.

Man standing in front of a screen that says Transparency Reports

Finally, I presented our 5 Layers of Community Protection (more about that in the future – stay tuned!), and we discussed best practices for each layer of content moderation. The fifth layer of protection is Transparency Reports, which yielded the most challenging conversation. What will Transparency Reports look like? What information will be mandatory? How will we define success benchmarks? What data should we start to collect today? No one knows – but we looked at YouTube’s Transparency Report as an example and as guidance on what may be legislated in the future.

My biggest takeaway from this session? Best practices exist – many of us are doing them right now. We just need to talk about them and share them with the industry at large. More on that in an upcoming blog post.

Fair Play Alliance’s First European Symposium

Being a co-founder of the Fair Play Alliance and seeing it grow from a conversation between a few friends to a global organization of over 130 companies and many more professionals has been incredible, to say the least. This was the first time the alliance held an event outside of North America. As a global organization, it was very important to us, and it was a tremendous success! The feedback has been overwhelmingly positive, and we are so happy to see that it provided lots of value to attendees.

Members of the Fair Play Alliance

It was a wonderful two-day event held over October 3rd and 4th, with excellent talks and workshops hosted for members of the FPA. Chris Priebe, a couple of industry friends and veteran Trust & Safety leaders, and I hosted one of the workshops. We’re all excited to take that work forward and see the results that will come out of it and benefit the games industry!

What. A. Week.

As you can tell, it was a whirlwind week and I’m sure I’ve forgotten at least some of it! It was great to connect with old friends and make new friends. All told, my biggest takeaway from the week was this:

Everyone I met cares deeply about online safety, and about finding the smartest, most efficient ways to protect users from online harms while still allowing them the freedom to express themselves. At Two Hat, we believe in an online world where everyone is free to share without fear of harassment or abuse. I’ve heard similar sentiments echoed countless times from other Trust & Safety professionals, and I truly believe that if we continue to collaborate across industries, across governments, and across organizations, we can make that vision a reality.

So let’s keep talking.

I’m still offering free community audits for any organization that wants a second look at their moderation and Trust & Safety practices. Sign up for a free consultation using the form below!