BC firm’s AI tool battles online abuse, pornography
Two Hat Security CEO Chris Priebe says online harassment can end up costing businesses serious money in the long run if companies don’t take the right steps to fight the problem. His Kelowna-based tech firm uses artificial-intelligence-powered tools to weed out inappropriate language and abusive content, such as pornographic images, on social networks.
Interview with Chris Priebe of Two Hat on AI and abusive content on social networks
Community Sift Moderation Solution Now Available in Nintendo Switch Developer’s Portal
Do you remember the first time you heard the Super Mario Bros. theme music?
The soundtrack to a million childhoods, that sprightly 8-bit calypso-inspired theme rarely fails to conjure up cherished adolescent memories.
Are you sitting in an office right now? Try singing it. Do-do-do… do-do-do-do… do-do-do-do-do-do-do-do-do-do… Is your officemate singing along yet?
Of course they are.
If you’re reading this, you probably have at least one fond Nintendo-related memory. If you grew up in the 80s, 90s, or 00s, you probably played Super Mario Bros, The Legend of Zelda, Street Fighter, Pokémon … and countless other classic titles. We sure did.
Big announcement
That’s why we’re so pumped to announce that Community Sift, our chat and image filter for social products, is now an approved tool in the Nintendo Switch™ developer portal.
If you’re developing a Nintendo Switch™ game that features UGC (User-Generated Content), Community Sift can help keep your users safe from dangerous content like bullying, harassment, and child exploitation.
Connecting through games
What was so awesome about those Nintendo games that we grew up playing in our bedrooms, our basements, and our best friends’ living rooms? They were created for everyone to enjoy. Our parents didn’t have to worry about content (OK, maybe Street Fighter freaked them out a little bit).
We connected with friends, siblings, cousins, and neighbors. (With siblings, sometimes the controllers connected with our skulls.) And even though we were competing, we still felt a sense of camaraderie and belonging in the kingdoms of Mushroom and Hyrule.
That’s what the best Nintendo games do — they bring us together, across cultures, languages, and economic and social lines. In this newly connected gaming world, it’s more important than ever that we preserve that sense of connection — and do it with safety in mind.
Protect your brand and your users
Now, if you’re building a Nintendo game that connects players through UGC, you can ensure that they are just as safe as we were when we were kids.
Whether your game features chat, usernames, profile pics, private messages, or more, our dream is to help you craft safe, connected experiences. And isn’t that what Nintendo is all about?
We can’t wait to help you inspire another generation of dreamers, creators, and players.
Get in touch
Are you an authorized Nintendo Switch™ developer? Just search for Community Sift in the developer portal, and from there get in touch for more information.
Optimize Your Image Moderation Process With These Five Best Practices
If you run or moderate a social sharing site or app where users can upload their own images, you know how complex image moderation can be.
We’ve compiled five best practices that will make you and your moderation team’s lives a lot easier.
1. Create robust internal moderation guidelines
While you’ll probably rely on AI to automatically approve and reject the bulk of submitted images, there will be images that an algorithm misses, or that users have reported as being inappropriate. In those cases, it’s crucial that your moderators are well-trained and have the resources at their disposal to make what can sometimes be difficult decisions.
Remember the controversy surrounding Facebook earlier this year when they released their moderation guidelines to the public? Turns out, their guidelines were so convoluted and thorny that it was near-impossible to follow them with any consistency. (To be fair, Facebook faces unprecedented challenges when it comes to image moderation, including incredibly high volumes and billions of users from all around the world.) There’s a lesson to be learned here, though, which is that internal guidelines should be clear and concise.
Consider — you probably don’t allow pornography on your platform, but how do you feel about bathing suits or lingerie? And what about drugs — where do you draw the line? Do you allow images of pills? Alcohol?
Moderation isn’t a perfect science; there will always be grey areas.
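
To make that split concrete, here is a minimal sketch of threshold-based triage. The classifier, score range, and cutoff values are assumptions for illustration only; in practice the thresholds would come from your own tuning and guidelines.

```python
# Minimal sketch of threshold-based image triage (illustrative values only).
# Assumes an AI classifier that returns a risk score between 0.0 and 1.0.

APPROVE_BELOW = 0.20   # low-risk images are approved automatically
REJECT_ABOVE = 0.90    # high-risk images are rejected automatically

def triage_image(risk_score: float) -> str:
    """Route an image based on its classifier risk score."""
    if risk_score < APPROVE_BELOW:
        return "approve"
    if risk_score > REJECT_ABOVE:
        return "reject"
    # Grey area: send to a human moderator, who applies the internal guidelines
    return "human_review"

# Example: a borderline image (e.g. swimwear) lands in the review queue
print(triage_image(0.55))  # -> "human_review"
```

Automatic decisions handle the clear-cut cases at either end, and everything in between lands in a human review queue where your internal guidelines do the real work.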
2. Consider context
When you’re deciding whether to approve or reject an image that falls into the grey area, remember to look at everything surrounding the image. What is the user’s intent with posting the image? Is their intention to offend? Look at image tags, comments, and previous posts.
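
As a rough illustration of how those context signals might feed back into a grey-area decision, here is a hypothetical sketch; the signal names and weights are invented for the example and not taken from any real system.

```python
# Hypothetical sketch of combining context signals with an image's risk score.
# Signal names and weights are illustrative only.

def adjust_for_context(base_score: float, caption_flagged: bool,
                       prior_violations: int, reported_by_users: int) -> float:
    """Nudge a grey-area score up or down based on the surrounding context."""
    score = base_score
    if caption_flagged:                         # e.g. slurs or harassment in the caption or tags
        score += 0.15
    score += min(prior_violations, 5) * 0.05    # repeat offenders get less benefit of the doubt
    score += min(reported_by_users, 10) * 0.02  # user reports add weight
    return min(score, 1.0)

# A borderline image with a flagged caption, posted by a repeat offender
print(adjust_for_context(0.55, caption_flagged=True, prior_violations=2, reported_by_users=3))
```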
3. Be consistent when approving/rejecting images and sanctioning users
Your internal guidelines should ensure that you and your team make consistent, replicable moderation decisions. Consistency is so important because it signals to the community that 1) you’re serious about their health and safety, and 2) you’ve put real thought and attention into your guidelines.
A few suggestions for maintaining consistency:
- Notify the community publicly if you ever change your moderation guidelines
- Consider publishing your internal guidelines
- Host moderator debates over challenging images and ask for as many viewpoints as possible; this will help avoid biased decision-making
- When rejecting an image (even if it’s done automatically by the algorithm), automate a warning message to the user that includes community guidelines (see the sketch after this list)
- If a user complains about an image rejection or account sanction, take the time to investigate and fully explain why action was taken
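
To illustrate the automated warning suggested above, here is a minimal sketch of a rejection notice. The template wording, placeholder URL, and function name are hypothetical.

```python
# Minimal sketch of an automated rejection notice (template and URL are hypothetical).

GUIDELINES_URL = "https://example.com/community-guidelines"  # placeholder

REJECTION_TEMPLATE = (
    "Hi {username}, your recent image was removed because it doesn't meet our "
    "community guidelines ({reason}). Please review the guidelines before "
    "uploading again: {url}"
)

def build_rejection_notice(username: str, reason: str) -> str:
    """Fill in the warning message sent whenever an image is rejected,
    whether by the algorithm or by a human moderator."""
    return REJECTION_TEMPLATE.format(username=username, reason=reason, url=GUIDELINES_URL)

print(build_rejection_notice("player123", "nudity"))
```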
4. Map out moderation workflows
Take the time to actually sketch out your moderation workflows on a whiteboard. Mapping them out will expose any holes in your process; a simple code sketch follows the list of scenarios below.
Here are just a few scenarios to consider:
- What do you do when a user submits an image that breaks your guidelines? Do you notify them? Sanction their account? Do nothing and let them submit a new image?
- Do you treat new users differently than returning users (see example workflow for details)?
- How do you deal with images containing CSAM (child sexual abuse material; formally referred to as child pornography)?
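
As a rough picture of what a whiteboard workflow can look like once it is written down, here is a sketch that encodes a few of these scenarios as a single decision function. The field names, thresholds, and action labels are assumptions made for the example, not a prescribed workflow.

```python
# Illustrative sketch of a moderation workflow, written as a decision function.
# Account fields, thresholds, and action names are assumptions for the example.

from dataclasses import dataclass

@dataclass
class Submission:
    user_id: str
    is_new_user: bool      # new users may get stricter treatment
    prior_violations: int  # count of previous guideline breaches
    verdict: str           # "ok", "guideline_breach", or "illegal"

def handle_submission(sub: Submission) -> list[str]:
    """Return the ordered list of actions for one submitted image."""
    if sub.verdict == "illegal":
        # Illegal content (e.g. CSAM) skips normal sanctions and goes straight
        # to the escalation process described in the next section.
        return ["quarantine_image", "escalate_to_authorities", "suspend_account"]
    if sub.verdict == "guideline_breach":
        actions = ["reject_image", "notify_user_with_guidelines"]
        if sub.is_new_user or sub.prior_violations >= 3:
            actions.append("temporary_mute")
        return actions
    return ["approve_image"]

print(handle_submission(Submission("u42", is_new_user=True, prior_violations=0,
                                   verdict="guideline_breach")))
```

Even a toy version like this tends to surface the holes quickly, for example what should happen when an approved image comes from an account that was sanctioned last week.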
Coming across an image that contains illegal content can be deeply disturbing.
5. Have a process to escalate illegal images
The heartbreaking reality of the internet is that it’s easier today for predators to share images than it has ever been. It’s hard to believe that your community members would ever upload CSAM, but it can happen, and you should be prepared.
If you have a Trust & Safety specialist, Compliance Officer, or legal counsel at your company, we recommend that you consult them for their best practices when dealing with illegal imagery. One option to consider is using Microsoft’s PhotoDNA, a free image scanning service that can automatically identify and escalate known child sexual abuse images to the authorities.
You may never find illegal content on your platform, but having an escalation process will ensure that you’re prepared for the worst-case scenario.
On a related note, make sure you’ve also created a wellness plan for your moderators. We’ll be discussing individual wellness plans — and other best practices — in more depth in our Image Moderation 101 webinar on August 22nd. Register today to save your seat for this short, 20-minute chat.
Two Hat Security Announced as Official Supporter of Stop Cyberbullying Day 2018
Two Hat Security has been announced as an Official Supporter of Stop Cyberbullying Day 2018, helping to promote a positive and inclusive internet — free from fear, personal threats, and abuse.
Thanks to a generous donation by Two Hat Security, The Cybersmile Foundation can continue to help victims of online abuse around the world while raising awareness of the important issues surrounding the growing problem of harassment and cyberbullying in all its forms.
“We are delighted to receive this generous donation from Two Hat Security to help us continue our work supporting victims of cyberbullying and delivering educational programs to help people avoid cyberbullying related issues in the future,” says Iain Alexander, Head of Engagement at The Cybersmile Foundation.
Two Hat Security is the creator of Community Sift, a content filter and automated moderation tool that allows gaming and social platforms to proactively protect their communities from cyberbullying, abuse, profanity, and more.
“Stop Cyberbullying Day is such an important initiative,” says Carlos Figueiredo, Director of Community Trust and Safety at Two Hat Security. “We believe that digital citizenship and sportsmanship are the keys to understanding disruptive player behavior. The work that the Cybersmile Foundation does to support victims perfectly lines up with our mission to protect online communities from abuse and harassment.”
Stop Cyberbullying Day regularly features a host of global corporations, celebrities, influencers, educational institutions and governments who come together and make the internet a brighter, more positive place. The day has previously been supported by celebrities and brands including One Direction, Fifth Harmony, MTV, Twitter and many more.
To get involved with the Stop Cyberbullying Day 2018 activities, participants can share positive messages on social media using the hashtag #STOPCYBERBULLYINGDAY.
About Two Hat Security
Founded in 2012, Two Hat Security empowers gaming and social platforms to foster healthier online communities. With their flagship product Community Sift, an enterprise-level content filter and automated moderation tool, online communities can proactively filter abuse, harassment, hate speech, and other disruptive behavior.
Community Sift currently processes over 22 billion messages a month in 20 different languages, across a variety of communities and demographics, including Roblox, Animal Jam, Kabam, Habbo, and more.
For sales or media enquiries, please contact hello@twohat.com.
About The Cybersmile Foundation
The Cybersmile Foundation is a multi-award winning anti-cyberbullying nonprofit organization. Committed to tackling all forms of digital abuse, harassment and bullying online, Cybersmile work to promote diversity and inclusion by building a safer, more positive digital community.
Through education, innovative awareness campaigns, and the promotion of positive digital citizenship, Cybersmile reduce incidents of cyberbullying, and through their professional help and support services they empower victims and their families to regain control of their lives.
For media enquiries contact pressoffice@cybersmile.org.
About Stop Cyberbullying Day
Stop Cyberbullying Day is an internationally recognized day of awareness and activities both on and offline that was founded and launched by The Cybersmile Foundation on June 17th 2012. Held annually on the third Friday in June, Stop Cyberbullying Day encourages people around the world to show their commitment toward a truly inclusive and diverse online environment for all – without fear of personal threats, harassment or abuse. Users of social media include the hashtag #STOPCYBERBULLYINGDAY to show their support for inclusion, diversity, self-empowerment and free speech.
How 50 Cent helped the Club Penguin team learn to moderate better
Free Webinar: Six Essential Pillars of a Healthy Online Community
Updated April 17th, 2018
Watch the recording!
Does your online game, virtual world, or app include social features like chat, usernames, or user-generated images? Are you struggling with abusive content, lack of user engagement, or skyrocketing moderation costs?
Building an engaging, healthy, and profitable community in your product is challenging — that’s why we formulated the Six Essential Pillars of a Healthy Online Community!
In this exclusive talk, industry experts share their six techniques for creating a thriving, engaged, and loyal community in your social product:
- Tech Ethicist David Ryan Polgar (Funny as Tech, Friendbase)
- Social & Community Design Consultant Nate Sawatzky (Disney, Facebook, Tiny Speck)
- Community Trust & Safety Director Carlos Figueiredo (Two Hat Security)
Join us on Wednesday, April 4th at 10:00 AM PDT/1:00 PM EDT!
In this free, one-hour webinar, you’ll learn how to:
- Protect your brand using “safety by design”
- Increase user trust with consistent messaging
- Reduce moderator workload & empower meaningful work
- Improve community health using real-time feedback
Save your seat today for this ultimate guide to community health and engagement!
Upcoming Webinar: Yes, Your Online Game Needs a Chat Filter
Are you unconvinced that you need a chat filter in your online game, virtual world, or social app? Undecided if purchasing moderation software should be on your product roadmap in 2018? Unsure if you should build it yourself?
You’re not alone. Many in the gaming and social industries are still uncertain if chat moderation is a necessity.
On Wednesday, January 31st at 10:00 am PST, Two Hat Community Trust & Safety Director Carlos Figueiredo shares data-driven evidence proving that you must make chat filtering and automated moderation a business priority in 2018.
In this quick 30-minute session, you’ll learn:
- Why proactive moderation is critical to building a thriving, profitable game
- How chat filtering combined with automation can double user retention
- How to convince stakeholders that moderation software is the best investment they’ll make all year
Why Should Social Networks Encourage Digital Citizenship?
“Digital citizenship, and promoting a respectful yet vibrant environment is a multi-pronged effort.” — David Ryan Polgar
If one social network doesn’t prevent us from harassing strangers, then can we be expected to behave any differently when we switch platforms? If an online game is designed to create tension, then can we really be held responsible when we lash out at our teammates?
In other words — do social products have an ethical responsibility to encourage good citizenship?
To unpack this tricky topic, we turn to renowned writer, speaker, commentator and real-life Tech Ethicist David Ryan Polgar. He sat down with Two Hat Security’s Director of Community Trust & Safety Carlos Figueiredo to discuss the complex and sometimes divisive subject of social products and social responsibility.
Press play to listen:
Highlights & key quotes
On responsibility:
If [companies] want to have a sustainable business, they need to consider that this is business-critical. — Carlos Figueiredo
I think what we’re realizing is that the environment and the structures that we create are dramatically influential on human behavior… From a company standpoint, they now have that responsibility to try to prompt us towards the better use of their product. — David Ryan Polgar
On the “attention economy”:
We are giving a lot of time to [social media] companies. Does that create a responsibility because we are giving our time to them? — CF
It’s not a typical business-consumer relationship. Facebook and Twitter and Snapchat operate in this quasi-public space. And that’s very similar in law, what they’ve done with freedom of speech in a mall… Now, we, the general public, are expecting a voice in the way these companies operate. — DRP
On forging industry-wide alliances:
It’s so easy for us to think that each of these companies should take care of their own. Of course I believe that, but I also think that it’s time for the industry to have a wide discussion, and to have coalitions and alliances. If there is a consistency and a coherence amongst different companies, suddenly we can’t just have users and players jump from one platform to the other and bring bad behavior. — CF
Unless you have those coalitions, everybody is reinventing the wheel. You’re spending a lot of time, energy, and research in private endeavors instead of sharing, and having this open environment where we’re saying: as a community, as this collective, and as an industry, this is something we need to combat. — DRP
On the future:
I think the tide is turning in terms of digital citizenship, fair play, and sportsmanship when it comes to eSports and games. It’s financially smart and it’s ethically smart for the industry to talk about this. — CF
Social media is like a knife. It can be used to inflict pain or stab the truth. But it can also be used to carve a future that’s more socially just, more connected, and more intellectually curious. It’s like any tool.
The way to push social media forward, to build a better web, and to capitalize on what we know the internet should be, is to take that collective action, where people, businesses, and organizations come together and say “Here’s what we want — now how can we get there? How can we share the knowledge, how can we use the tools that we have and create new tools to build this better web?” — DRP
About the speakers
David Ryan Polgar
David Ryan Polgar has carved out a unique and pioneering career as a “Tech Ethicist.” With a background as an attorney and college professor, he transitioned in recent years to focus entirely on improving how children, teens, and adults utilize social media & tech. David is a tech writer (Big Think, Quartz, and IBM thinkLeaders), speaker (3-time TEDx, The School of The New York Times), and frequent tech commentator (SiriusXM, AP, Boston Globe, CNN.com, HuffPost). He has experience working with startups and social media companies (ASKfm), and co-founded the global Digital Citizenship Summit (held at Twitter HQ in 2016). Outside of writing and speaking, David currently serves in a Trust & Safety role for the teen virtual world Friendbase. He is also a board member for the non-profit #ICANHELP, which led the first #Digital4Good event at Twitter HQ on September 18th.
His forward-thinking approach to online safety and digital citizenship has been recognized by various organizations and outlets across the globe and was recently singled out online by the Obama Foundation.
Follow David on Twitter and LinkedIn.
Carlos Figueiredo
Carlos Figueiredo leads Two Hat Security’s Trust & Safety efforts, collaborating with clients and partners to challenge our views of healthy online communities.
Born and raised in Brazil, Carlos has been living in Canada for almost 11 years, where he has worked directly in online safety for the last 9 years, helping large digital communities with their mission to stay healthy and engaged. From being a moderator himself to leading a multi-cultural department that was pivotal to the safety of global communities across different languages and cultures, Carlos has experienced the pains and joys of on-screen interactions.
He’s interested in tackling the biggest challenges of our connected times and thrives on collaborating and creating bridges in the industry. Most recently, he moderated the Tech Power Panel at #Digital4Good. On Wednesday, October 18th he’s presenting a free online workshop called Your Must-Have Moderation Strategy: Preparing for Breaking News & Trending Topics.
Follow Carlos on Twitter and LinkedIn.
About Two Hat Security
At Two Hat Security, we empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content. Want to increase user retention, reduce moderation costs, and protect your brand?
Get in touch today to see how our chat filter and moderation software Community Sift can help your product encourage good digital citizenship.