How We Maintain a Chat Filter in a Changing World

Have you ever wondered how Two Hat maintains a dynamic and up-to-date chat filter in an ever-evolving world, in 20 languages?

In light of recent global events, including the COVID-19 pandemic and the #BlackLivesMatter movement, we wanted to provide insight into our internal process.

We spoke to Portuguese Language Specialist Richard Amante, a journalist with 15 years of international experience, about how he and the team stay on top of the latest news.

As Richard explains, the Language & Culture team researches trending topics and new linguistic expressions and adds them to the dictionary in multiple languages. It doesn’t end there.

After the team adds new expressions to the chat filter, they review and adjust how those expressions are used across different communities and in different contexts. That includes finding and fixing false positives (low-risk content incorrectly flagged as high-risk) and false negatives (you guessed it — high-risk content incorrectly flagged as low-risk), detecting new and unexpected linguistic patterns, and upgrading and downgrading the riskiness of a phrase based on cultural shifts.

Especially today, in a world where so many of us are stuck at home, the lines separating our “real life” from our “online life” are blurry, if they exist at all. Whether we’re forming a clan in a multiplayer game or commenting on a friend’s profile pic, we’re having conversations online about everything, from the latest news to pop culture.

You might be wondering — why does this matter? It’s simple. Our clients want to know what their players and users are talking about; and more importantly, they want to keep them safe from abuse, hate speech, and harassment — while still promoting healthy conversations.

Every online community has different standards of what a healthy conversation looks like. An edtech platform designed for 7-12-year-olds will likely have a different standard from a social network catering to millennials. Here is the great power of Two Hat’s Community Sift: While the Language & Culture team provides a baseline by adding new words and phrases to the dictionary, clients can augment those words and phrases in real-time, as needed.

In this way, we work closely with our clients to maintain a living, breathing global chat filter in an ever-changing world.

Want to learn how Two Hat can help you maintain a safe and healthy online community? Request a demo today.

 

4 Musts for Safe In-Game Chat in any Language

A good in-game chat makes for more play.

Users engage more deeply and return more often, which improves important metrics such as lifetime value (LTV).

Two Hat proved all this about a year ago in our whitepaper for the gaming industry, An Opportunity to Chat.

For a chat experience to be considered “good” by users in the first place, though, you have to make sure that no one is excluded, bullied, or harassed out of your chat community and game before they ever get a chance to fall in love with it.

That said, it’s hard to deliver a consistently positive chat experience in one language fluently and with nuance, let alone in the world’s 20 most popular languages. Add in leet (aka 1337) and other ever-evolving unnatural-language hacks, and the task of scaling content moderation for global chat can be daunting.

With this shifting landscape in mind, Two Hat offers these 4 Musts for Safe In-Game Chat in any Language.

#1. Set expectations with clear guidelines
Humans change their language and behavior based on their environment. Being online loosens some behavioral norms and often grants anonymity, so it’s important that users understand the guidelines for behavior in your community. As you ponder how to establish these guidelines, remember that cultural norms around the world vary widely.

In other words, what is a reasonable chat policy in one language or culture may be inappropriate in another.

#2. Develop unique policies for each culture
French is spoken fluently in Canada, Africa, and the Caribbean, but the cultural experience of each place is entirely different.

Why?

Culture.

Native speakers know these nuances; translation engines do not. Two Hat can provide accurate and customizable chat filters built and supported by our in-house team of native speakers of over 20 languages.

These filters must be on every gaming site and inside every mobile gaming app.

#3. Let user reputation be your guide
Users with a good reputation should be rewarded. Positive users are aligned with the purpose of your product, as well as your business interests, and they’re the ones who keep others coming back.

For those few who harass others – in any language – set policies that automate appropriate measures.

For example: set a policy requiring human review of any message sent by a user with two negative incidents in the last seven days. In this way, user reputation becomes the engine behind the in-game experience, democratizing user socialization.
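To make that concrete, here is a minimal sketch of what such a rule might look like in code. The threshold, time window, and function names are our own illustrative assumptions, not a real implementation:

```python
from datetime import datetime, timedelta

# Hypothetical policy: any message from a user with 2 or more negative
# incidents in the last 7 days is held for human review.
NEGATIVE_INCIDENT_THRESHOLD = 2
LOOKBACK = timedelta(days=7)

def requires_human_review(incident_times, now=None):
    """incident_times: datetimes of this user's recent negative incidents."""
    now = now or datetime.now()
    recent = [t for t in incident_times if now - t <= LOOKBACK]
    return len(recent) >= NEGATIVE_INCIDENT_THRESHOLD

def route_message(message, incident_times):
    if requires_human_review(incident_times):
        return ("review_queue", message)  # held until a moderator approves it
    return ("publish", message)           # trusted users post instantly
```

The same pattern extends naturally: three incidents might trigger a temporary mute, five a suspension, and so on.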

#4. Tap your natural resources
In every language and in every culture the key to building opportunity is engaging your most committed players. The key to building safer and more inclusive in-game communities is the same.

Engaged, positive users empowered to flag and report negative experiences are the glue that binds in every language and culture.

Make sure every user has a voice if they feel threatened or bullied or witness others being harassed, give the community leaders who emerge the tools and the voice to be a positive influence, and build a chat experience that’s as cool and inclusive as your game strives to be.



The Changing Landscape of Automated Content Moderation in 2019

Is 2019 the year that content moderation goes mainstream? We think so.

Things have changed a lot since 1990 when Tim Berners-Lee invented the World Wide Web. A few short years later, the world started to surf the information highway – and we’ve barely stopped to catch our collective breath since.

Learn about the past, present, and future of online content moderation in an upcoming webinar

The internet has given us many wonderful things over the last 30 years – access to all of recorded history, an instant global connection that bypasses country, religious, and racial lines, Grumpy Cat – but it’s also had unprecedented and largely unexpected consequences.

Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard – it’s all adding up. Now that well over half of Earth’s population is online (4 billion people as of January 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

The change started two years ago.

Mark Zuckerberg’s 2017 manifesto hinted at what was to come:

“There are billions of posts, comments, and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

In 2018, the industry finally realized that it was time to find solutions to the problems outlined in Facebook’s manifesto. The question was no longer, “Should we moderate content on our platforms?” and instead became, “How can we better moderate content on our platforms?”

Learn how you can leverage the latest advances in content moderation in an upcoming webinar

The good news is that in 2019, we have access to the tools, technology, and years of best practices to make the dream of a safer internet a reality. At Two Hat, we’ve been working behind the scenes for nearly seven years now (alongside some of the biggest games and social networks in the industry) to create technology to auto-moderate content so accurately that we’re on the path to “invisible AI” – filters that are so good you don’t even know they’re in the background.

On February 20th, we invite you to join us for a very special webinar, “Invisible AI: The Future of Content Moderation”. Two Hat CEO and founder Chris Priebe will share his groundbreaking vision of artificial intelligence in this new age of chat, image, and video moderation.

In it, he’ll discuss the past, present, and future of content moderation, expanding on why the industry shifted its attitude towards moderation in 2018, with a special focus on the trends of 2019.

He’ll also share exclusive, advance details about:

We hope you can make it. Give us 30 minutes of your time, and we’ll give you all the information you need to make 2019 the year of content moderation.

PS: Another reason you don’t want to miss this – the first 25 attendees will receive a free gift! ; )


Read about Two Hat’s big announcements:

Two Hat Is Changing the Landscape of Content Moderation With New Image Recognition Technology

Two Hat Leads the Charge in the Fight Against Child Sexual Abuse Images on the Internet

Two Hat Releases New Artificial Intelligence to Moderate and Triage User-Generated Reports in Real Time

 

Affected by the Smyte Closure? Two Hat Security Protects Communities From Abusive Comments and Hate Speech

Statement from CEO and founder Chris Priebe: 

As many of you know, Smyte was recently acquired by Twitter and its services are no longer available, affecting many companies in the industry.

As CEO and founder of Two Hat Security, creators of the chat filter and content moderation solution Community Sift, I would like to assure both our valued customers and the industry at large that we are, and will always remain, committed to user protection and safety. For six years we have worked with many of the largest gaming and social platforms in the world to protect their communities from abuse, harassment, and hate speech.

We will continue to serve our existing clients and welcome the opportunity to work with anyone affected by this unfortunate situation. Our mandate is and will always be to protect the users on behalf of all sites. We are committed to uninterrupted service to those who rely on us.

If you’re in need of a filter to protect your community, we can be reached at hello@twohat.com.

Thinking of Building Your Own Chat Filter? Five Reasons You’re Wasting Your Time!

If you’re building an online community, whether a game or social network, flagging and dealing with abusive language and users is critical to success. Back in 2014, a Riot Games study suggested that users who experience abuse their first time in the game are potentially three times more likely to quit and never return.

“Chatting is a major step in our funnel towards creating engaged, paying users. And so, it’s really in Twitch’s best interests — and in the interest of most game dev companies and other social media companies — to make being social on our platform as pleasant and safe as possible.” – Ruth Toner, Twitch

At Two Hat, we found that smart moderation can potentially double user retention. And we’re starting to experience an industry-wide paradigm shift. Today, gaming and social companies realize that if they want to shape healthy, engaged, and ultimately profitable communities, they must employ some kind of chat filter and moderation software.

But that raises the question: should you build it yourself or use an outside vendor? Like anti-virus software, a chat filter is better left to a team dedicated, day in and day out, to keeping it up to date.

A few things to consider before investing a great deal of time and expense into an in-house chat filter.

1. An allow/disallow list doesn’t work because language isn’t binary
Traditionally, most filters use a binary allow/disallow list. The thing is, language isn’t binary. It’s complex and nuanced.

For instance, in many older gaming communities, some swear words will be acceptable, based on context. You could build a RegEx tool to string match input text, and it would have no problem finding an f-bomb. But can it recognize the critical difference between “Go #$%^ yourself” and “That was #$%^ing awesome”?
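To see why not, here is a toy version of that RegEx approach (a sketch only, using “frak” as a stand-in for the f-bomb; the pattern and behavior are illustrative, not any real product’s filter):

```python
import re

# A naive allow/disallow filter: flag any message containing a banned word.
BANNED = re.compile(r"\bfrak(ing)?\b", re.IGNORECASE)

def naive_filter(message: str) -> str:
    return "flagged" if BANNED.search(message) else "allowed"

# The pattern treats both messages identically, even though only the first
# is directed abuse and the second is harmless enthusiasm:
print(naive_filter("Go frak yourself"))          # flagged
print(naive_filter("That was fraking awesome"))  # flagged
```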

What if your players spell a word incorrectly? What if they use l337 5p34k (and they will)? What if they deliberately try to manipulate the filter?

It’s an endless arms race, and your users have way more time on their hands than you do.

Think about the hundreds of different variations of these phrases:

“You should kill yourself / She deserves to die / He needs to drink bleach / etc”
“You are a [insert racial slur here]”

Imagine the time and effort it would take to enter every single variation. Now add misspellings. Now add l337 mapping. Now add the latest slang. Now add the latest latest slang.

It never ends.
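Even a first pass at fighting back, like mapping common leet substitutions back to letters before matching, only scratches the surface. A minimal sketch, with an assumed (and far from complete) substitution table:

```python
# Map common leet-speak substitutions back to letters before filtering.
# This table is illustrative; real users invent new substitutions constantly.
LEET_MAP = str.maketrans({
    "1": "i", "3": "e", "4": "a", "5": "s",
    "7": "t", "0": "o", "$": "s", "@": "a",
})

def normalize(message: str) -> str:
    return message.lower().translate(LEET_MAP)

print(normalize("k1ll y0ur$3lf"))  # "kill yourself"
```

Single-character swaps are only the beginning; spacing tricks, punctuation, misspellings, and brand-new slang each multiply the variations all over again.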

Now, imagine using a filter that has access to billions of lines of chat across dozens of different platforms. By using a third-party filter, you’ll benefit from the network effect, detecting words and phrases you would likely never find on your own.

2. Keep your team focused on building an awesome product — not chasing a few bad actors around the block

“When I think about being a game developer, it’s because we love creating this cool content and features. I wish we could take the time that we put into putting reporting [features] on console, and put that towards a match history system or a replay system instead. It was the exact same people that had to work on both who got re-routed to work on the other.” – Jeff Kaplan, Blizzard Entertainment

Like anything else built in-house, someone has to maintain the filter as well as identify and resolve specific incidents. If your plan is to scale your community, maintaining your own filter will quickly become unmanageable. The dev and engineering teams will end up spending more time keeping the community safe than actually building the community and features.

Compare that with simply tapping into the RESTful API of a service provider that reliably uses AI and human review to keep abusive language definitions current and quickly process billions of reports per day. Imagine letting community managers identify and effectively deal with the few bad actors while the rest of your team relentlessly improves the community itself.
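In practice, that integration can be as small as one HTTP call per message. Here is a hypothetical sketch; the endpoint, request fields, and response shape are invented for illustration and are not Two Hat’s actual API:

```python
import requests

MODERATION_URL = "https://moderation.example.com/v1/classify"  # hypothetical endpoint
API_KEY = "your-api-key"

def moderate(message: str, user_id: str) -> dict:
    """Send one chat line to a hosted moderation service and return its verdict."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": message, "user_id": user_id, "language": "auto"},
        timeout=2,  # decide whether to fail open or closed if the service is slow
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"risk": 7, "topics": ["bullying"], "action": "filter"}
```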

3. Moderation without triage means drowning in user reports
There is a lot more to moderation than just filtering abusive chat. Filtering — regardless of how strict or permissive your community may be — is only the first layer of defense against antisocial behavior.

You’ll also need a way for users to report abusive behavior, an algorithm that bubbles the worst reports to the top for faster review, an automated process for escalating especially dangerous (and potentially illegal) content for your moderation team to review, various workflows to accurately and progressively message, warn, mute, and sanction accounts and (hopefully) correct user behavior, a moderation tool with content queues for moderators to actually review UGC, a live chat viewer, an engine to generate business intelligence reports…
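The report-triage piece alone is a small system in itself. A bare-bones version of “bubble the worst reports to the top” might look like this, assuming the filter has already assigned each report a severity score:

```python
import heapq

class ReportQueue:
    """Order user reports so the highest-severity ones are reviewed first."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal severities stay first-in, first-out

    def add(self, report, severity: int):
        # heapq is a min-heap, so negate severity to pop the worst report first.
        heapq.heappush(self._heap, (-severity, self._counter, report))
        self._counter += 1

    def next_for_review(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = ReportQueue()
queue.add({"id": 1, "reason": "spam"}, severity=2)
queue.add({"id": 2, "reason": "threat of self-harm"}, severity=9)
print(queue.next_for_review())  # the self-harm report is reviewed first
```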

“Invest in tools so you can focus on building your game with the community.”

That’s Lance Priebe, co-creator of the massively popular kids’ virtual world Club Penguin, sharing one of the biggest lessons he learned as a developer.

Focus on what matters to you, and on what you and your team do best — developing and shipping kickass new game features.

4. It’s obsolete before it ships
The more time and money you can put into your core product — improved game mechanics, new features, world expansions — the better.

Think of it this way. Would you build your own anti-virus software? Of course not. It would be outdated before launch. Researching, reviewing, and fighting the latest malware isn’t your job. Instead, you rely on the experts.

Now, imagine you’ve built your own chat filter and are hosting it locally. Every day, users find new ways around the filter, faster than you can keep up. That means every day you have to spend precious time updating the repository with new expressions. And that means testing and finally deploying the update… and that means an increase in game downtime.


“Build your own chat filter,” they said. “It’ll be fun,” they said.

This all adds up to a significant loss of resources and time — your time, your team’s time, and your players’ time.

5. Users don’t only chat in English
What if your community uses other languages? Consider the work that you’ll have to put into building an English-only filter. Now, double, triple, quadruple that work when you add Spanish, Portuguese, French, German, etc.

Word-for-word translation might work for simple profanity, but as soon as you venture into colloquial expressions (“let’s bang,” “I’m going to pound you,” etc) it gets messy.

In fact, many languages have complicated grammar rules that make direct word-for-word translation effectively impossible. Creating a chat filter in, say, Spanish, would require the expertise of a native speaker with a deep understanding of the language. That means hiring or outsourcing multiple language experts to build an internal multi-language filter.

And anyone who has ever run a company knows — people are awesome but they’re awfully expensive.


How complex are other languages? German has four grammar cases and three genders. Finnish uses 15 noun cases in the singular and 16 in the plural. And the Japanese language uses three independent writing systems (hiragana, katakana, kanji), all three of which can be combined in a single sentence.

TL;DR, because grammar: every language is complex in its own way. Running your English filter through a direct translation service like Google Translate won’t result in a clean, accurate chat filter. In fact, getting it wrong will likely alienate your community.

Engineering time is too valuable to waste
Is there an engineering team on the planet that has the time (not to mention resources) to maintain an internally-hosted solution?

Dev teams are already overtaxed with overflowing sprint cycles, impossible QA workloads, and resource-depleting deployment processes. Do you really want to maintain another internal tool?

If the answer is “no,” luckily there is a solution — instead of building it yourself, rely on the experts.

Think of it as anti-virus software for your online community.

Talk to the experts
Consider Community Sift by Two Hat Security for your community’s chat filter. Specializing in identification and triage of high-risk and illegal content, we are under contract to process 4 billion messages every day. Since 2012 we have been empowering gaming and social platforms to build healthy, engaged communities by providing cost-effective, purposeful automated moderation.

You’ll be in good company with some of the largest online communities by Supercell, Roblox, Kabam, and many more. Simply call our secure RESTful API to moderate text, usernames, and images in over 20 of the most popular IRL and digital languages, all built and maintained by our on-site team of real live native speakers.



Does Your Online Community Need a Chat Filter?

“Chatting is a major step in our funnel towards creating engaged, paying users. And so, it’s really in Twitch’s best interests — and in the interest of most game dev companies and other social media companies — to make being social on our platform as pleasant and safe as possible.”

– Ruth Toner, Data Scientist at Twitch, UX Game Summit 2017

You’ve probably noticed a lot of talk in the industry these days about chat filters and proactive moderation. Heck, we talk about filters and moderation techniques all the time.

We’ve previously examined whether you should buy moderation software or build it yourself and shared five of the best moderation workflows we’ve found to increase productivity.

Today, let’s take a step back and uncover why filtering matters in the first place.

Filtering: First things first

Using a chat, username, or image filter is an essential technique for any product with social features. Ever find yourself buried under a mountain of user reports, unable to dig yourself out? That’s what happens when you don’t use a filter.

Every game, virtual world, social app, or forum has its own set of community guidelines. Guidelines and standards are crucial in setting the tone of an online community, but they can’t guarantee good behavior.

Forget about Draconian measures. Gone are the days of the inflexible blacklist/whitelist, where every word is marked as either good or bad with no room for the nuances and complexity of language. Contextual filters that are designed around the concept of varying risk levels let you make choices based on your community’s unique needs.

Plenty of online games contain violence and allow players to talk about killing each other, and that’s fine — in context. And some online communities may allow users to engage in highly sexual conversations in one-on-one chat.

For many communities, their biggest concern is hate speech. Users can swear, sext, and taunt each other to their heart’s content, but racial and religious epithets are forbidden. With a filter that can distinguish between vulgarity and hate speech, you can grant your users the expressivity they deserve while still protecting them from abuse.

With a smart filter that leverages an AI/human hybrid approach, you can give those different audiences the freedom to express themselves.
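One way to picture that risk-level approach is as a per-community policy rather than a single word list. The topic names and thresholds below are simplified assumptions, not Community Sift’s actual configuration:

```python
# Each community chooses how much risk it tolerates per topic (0 = benign, 5 = severe),
# rather than marking every word simply "good" or "bad".
COMMUNITY_POLICIES = {
    "kids_edtech":      {"vulgarity": 1, "violence": 1, "hate_speech": 0},
    "teen_battle_game": {"vulgarity": 3, "violence": 5, "hate_speech": 0},
    "adult_social":     {"vulgarity": 5, "violence": 5, "hate_speech": 0},
}

def allowed(message_scores: dict, community: str) -> bool:
    """message_scores: per-topic risk levels assigned to a message by the filter."""
    policy = COMMUNITY_POLICIES[community]
    return all(score <= policy.get(topic, 0) for topic, score in message_scores.items())

# The same message can be fine in one community and filtered in another:
scores = {"vulgarity": 4}                # a swear used in celebration, not abuse
print(allowed(scores, "adult_social"))   # True
print(allowed(scores, "kids_edtech"))    # False
```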

Key takeaway #1: A content filter and automated moderation system is business-critical for products with user-generated content and user interactions.

Social is king

“A user who experiences toxicity is 320% more likely to quit.” – Jeffrey Lin, Riot Games

Back in 2014, a Riot Games study found a correlation between abusive player behavior and user churn. In fact, the study suggested that users who experience abuse their first time in the game are potentially three times more likely to quit and never return.

In 2016 we conducted a study with an anonymous battle game, and interestingly, found that users who engaged in chat were three times more likely to keep playing after the first day.

While there is still a lot of work to be done in this field, these two preliminary studies suggest that social matters. Further studies may show slightly different numbers, but it’s well understood that negative behavior leads to churn in online communities.

When users chat, they form connections. And when they form connections, they return to the game.

But the flipside of that is also true: When users chat and their experience is negative, they leave. We all know it’s far more expensive to acquire a new user than to keep an existing one — so it’s critical that gaming and social platforms do everything in their power to retain new users. The first step? Ensure that their social experience is free from abuse and harassment.

 

Key takeaway #2: The benefits of chatting can be canceled out by user churn driven by exposure to high-risk content.

 

Reduce moderation workload and increase user retention

Proactive filtering and smart moderation doesn’t just protect your brand and your community from abusive content. It also has a major impact on your bottom line.

Habbo is a virtual hotel where users from around the world can design rooms, roleplay in organizations, and even open their own trade shops and cafes. When the company was ready to scale up and expand into other languages, they realized that they needed a smarter way to filter content without sacrificing users’ ability to express themselves.

By utilizing a more contextual filter than their previous blacklist/whitelist, Habbo managed to reduce their moderation workload by a whopping 70%. Without reportable content, there just isn’t much for users to report.

Friendbase is a virtual world where teens can chat, create, and play as friendly avatars. Similar to Habbo, the company launched their chat features with a simple blacklist/whitelist filter technology. However, chat on the platform was quickly filled with sexism, racism, and bullying behavior. For Friendbase, this behavior led to high user churn.

By leveraging a smarter, more contextual chat filter they were able to manage their users’ first experiences and create a healthy, more positive environment. Within six months of implementing new filtering and moderation technology, user retention by day 30 had doubled. And just like Habbo, user reports decreased significantly. That means fewer moderators are needed to do less work. And not only that — the work they do is far more meaningful.

Key takeaway #3: Build a business case and invest in solid moderation software to ensure you get the most out of healthy interactions in your online community.

Does your online community need a chat filter?

Ultimately, you will decide what’s best for your community. But the answer is almost always a resounding “yes.”

Many products with social features launch without a filter (or with a rudimentary blacklist/whitelist) and find that the community stays healthy and positive — until it scales. More users means more moderation, more reports, and more potential for abuse.

Luckily, you can prevent those growing pains by launching with a smart, efficient moderation solution that blends AI and human review to ensure that your community is protected from abuse and that users are leveraging your platform for what it was intended — connection, interaction, and engagement.

 

 

Want more articles like this? Subscribe to our newsletter and never miss a blog!



Upcoming Webinar: Yes, Your Online Game Needs a Chat Filter

Are you unconvinced that you need a chat filter in your online game, virtual world, or social app? Undecided if purchasing moderation software should be on your product roadmap in 2018? Unsure if you should build it yourself?

You’re not alone. Many in the gaming and social industries are still uncertain if chat moderation is a necessity.

On Wednesday, January 31st at 10:00 am PST, Two Hat Community Trust & Safety Director Carlos Figueiredo shares data-driven evidence proving that you must make chat filtering and automated moderation a business priority in 2018.

In this quick 30-minute session, you’ll learn:

  • Why proactive moderation is critical to building a thriving, profitable game
  • How chat filtering combined with automation can double user retention
  • How to convince stakeholders that moderation software is the best investment they’ll make all year


Two Hat Headed to Slush 2017!

“Nothing normal ever changed a damn thing.” – Slush 2017

Now that’s a slogan.

It resonates deeply with us here in Canada. While sisu may be a uniquely Finnish trait, we’re convinced we have some of that grit and determination in Canada too. Maybe it’s the shared northern climate; cold weather and short, dark days tend to do that to a nation. 

Regardless, it caught our eye. We like to go against the grain, too. And we’re certainly far from normal.

How could we resist?

On Thursday, November 30th and Friday, December 1st, we’re attending Slush 2017 in Helsinki, Finland. It’s our first time at Slush (and our first time visiting Finland), and we couldn’t be more excited.

It’s a chance to meet with gaming and social companies from all over the world — not to mention our Finnish friends at Sulake (you know them as Habbo) and Supercell.

At Two Hat Security, our goal is to empower social and gaming platforms to build healthy, engaged online communities, all while protecting their brand and their users from high-risk content. Slush’s goal is to empower innovative thinkers to create technology that changes the world.

So, it’s kind of a perfect match.

We’re loving the two themes of Slush 2017:

#1 – Technology will not shape our future — we do.

Technology is no different from any other tool. A hammer can be used to harm, but it can also be used to build a home. In the same way, online chat can be used to spread hate speech, but it can also be used to make connections that enrich and empower us. 

We have a chance to use technology as a force for change, not a weapon. This is our chance to embrace the fundamental values of fair play, sportsmanship, and digital citizenship and reshape gaming and social communities for the better.

The tide is turning in the industry. Companies realize that an old-fashioned, hands-off approach to in-game chat and community building just doesn’t work. That smart, purposeful moderation increases user retention. That a blend of artificial intelligence and human review can significantly reduce moderation costs. And that you can protect your brand and your community without sacrificing freedom of expressivity.

#2 – Entrepreneurs are problem-solvers.

Everyone says the internet is a mess.

So let’s clean it up.

Let’s use state-of-the-art technology and pair it with state-of-the-heart humanity to make digital communities better. Safer. Stronger. And hey, let’s be honest — more profitable. Better for business. (Profitable-er? That’s a word, right?)

Sharon and Mike will be hanging out at the Elisa booth, showing off our chat filter and moderation software tool Community Sift.

You can even test it out. This is your chance to type all the naughty words you can think of… for business reasons, of course.

We’ll see you there, in cold, slushy Helsinki, at the end of November. As Canadians, we’re not bothered by the cold. (The cold never bothered us anyway.)

(Sorry not sorry.)

Let’s solve some problems together.

***

Two Hat empowers gaming and social platforms to foster healthy, engaged online communities. Want to see how we can protect your brand and your community from high-risk content? Get in touch today! 

Want more articles like this? Subscribe to our newsletter and never miss an update!



Top Three Reasons You Should Meet us at Gamescom

Heading to Gamescom or devcom this year? It’s a huge conference, and you have endless sessions, speakers, exhibits, and meetings to choose from. Your time is precious — and limited. How do you decide where you go, and who you talk to?

Here are three reasons we think you should meet with us while you’re in Cologne.

You need practical community-building tips.

Got trolls?

Our CEO & founder Chris Priebe is giving an awesome talk at devcom. He’ll be talking about the connection between trolls, community toxicity, and increased user churn. The struggle is real, and we’ve got the numbers to prove it.

Hope to build a thriving, engaged community in your game? Want to increase retention? Need to reduce your moderation workload so you can focus on fun stuff like shipping new features?

Chris has been in the online safety and security space for 20 years now and learned a few lessons along the way. He’ll be sharing practical, time-and-industry-proven moderation strategies that actually work.

Check out Chris’s talk on Monday, August 21st, from 14:30 – 15:00.

You don’t want to get left behind in a changing industry.

This is the year the industry gets serious about user-generated content (UGC) moderation.

With recent Facebook Live incidents (remember this and this?), new hate speech legislation in Germany, and the latest online harassment numbers from the Pew Research Center, online behavior is a hot topic.

We’ve been studying online behavior for years now. We even sat down with Kimberly Voll and Ivan Davies of Riot Games recently to talk about the challenges facing the industry in 2017.

Oh, and we have a kinda crazy theory about how the internet ended up this way. All we’ll say is that it involves Maslow’s hierarchy of needs…

So, it’s encouraging to see that more and more companies are acknowledging the importance of smart, thoughtful, and intentional content moderation.

If you’re working on a game/social network/app in 2017, you have to consider how you’ll handle UGC (whether it’s chat, usernames, or images). Luckily, you don’t have to figure it out all by yourself.

Because…

You deserve success.

And we love this stuff.

Everyone says it, but it’s true: We really, really care about your success. And smart moderation is key to any social product’s success in a crowded and highly competitive market.

Increasing user retention, reducing moderation workload, keeping communities healthy — these are big deals to us. We’ve been fortunate enough to work with hugely successful companies like Roblox, Supercell, Kabam, and more, and we would love to share the lessons we’ve learned and best practices with you.

We’re sending three of our very best Two Hatters/Community Sifters to Germany. Sharon has a wicked sense of humor (and the biggest heart around), Mike has an encyclopedic knowledge of Bruce Springsteen lore, and Chris — well, he’s the brilliant, free-wheeling brain behind the entire operation.

So, if you’d like to meet up and chat at Gamescom, Sharon, Mike, and Chris will be in Cologne from Monday, August 21st to Friday, August 25th. Send us a message at hello@twohat.com, and one of them will be in touch.

Want more articles like this? Subscribe to our newsletter and never miss an update!
