Habbo is a virtual hotel where millions of people from around the world go to live out their fantasies. It’s a massively multiplayer experience where users design rooms, roleplay in organizations, and even open their own trade shops and cafes. Currently, there are over 120 million user-generated rooms across the 9 Habbo language communities.
Situation
The Massively Multiplayer Online (MMO) industry moves fast. As new apps, websites, and technology are introduced, companies that want to stay on the cutting edge have to keep up with the rapid pace of change. To retain their status as an industry leader and free themselves to focus on creating new features and products, Habbo realized that they needed a more efficient and cost-effective way to moderate their online community. And while it was crucial that they resolved this challenge quickly, after 15 successful years of operation, Habbo knew that they couldn’t compromise their users’ safety.
Habbo remained committed to protecting their growing userbase from high-risk content like cyberbullying, hate speech, Personally Identifiable Information (PII), and grooming. To implement their new approach to UGC (User-Generated Content), they needed a scalable moderation solution that would keep costs low but still maintain community standards.
Obstacle
Moderation teams are expensive. As products scale up and communities grow, more moderators need to be hired to deal with the increased UGC. Habbo was committed to protecting their users from dangerous content but recognized that employing a huge staff of moderators to review reports and live chat was not sustainable. Their existing process — a blacklist, supported by human teams — would have to change.
As well, Habbo is available in nine different languages. The company needed to ensure that the same community standards and moderation practices would be applied across all hotels.
In addition to these concerns, they realized that it could be confusing and often jarring to add a new component like a content filter to an established community. So they needed a way to assure long-time users that the new moderation system was in their best interests.
Action
Habbo used Two Hat Security’s flagship content filter and moderation tool Community Sift to implement their new approach to community moderation. Powered by Artificial Intelligence (AI), and supported by human review, Community Sift replaced their existing blacklist to identify and filter dangerous content in real time.
All UGC now runs through Community Sift, including public chat, private messages, forum discussions, and usernames. Community Sift identifies high-risk content and either filters it or allows it to post, based on Habbo’s unique community standards. As well, Community Sift classifies users based on their reputation — how often do they post high-risk content? How often do they post low-risk, positive content? It then applies a more or less restrictive filter to that user, based on past behavior.
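To make the idea concrete, here is a minimal sketch of reputation-based filtering. It is not Community Sift’s actual API: the class, thresholds, and risk levels are hypothetical, chosen only to show how a filter might tighten for users with a history of high-risk posts and relax for trusted ones.

```python
# Illustrative sketch only -- not Community Sift's real implementation.
# Thresholds and risk levels below are invented for the example.
from dataclasses import dataclass


@dataclass
class UserReputation:
    user_id: str
    high_risk_posts: int = 0   # e.g. bullying, hate speech, PII, grooming signals
    low_risk_posts: int = 0    # positive or neutral contributions

    def risk_ratio(self) -> float:
        total = self.high_risk_posts + self.low_risk_posts
        return self.high_risk_posts / total if total else 0.0


def filter_threshold(rep: UserReputation) -> int:
    """Return the maximum allowed risk level (0 = strictest, 5 = most permissive)."""
    ratio = rep.risk_ratio()
    if ratio > 0.20:
        return 1   # repeat offenders get a tighter filter
    if ratio > 0.05:
        return 3
    return 5       # trusted users get more leeway


def should_publish(message_risk_level: int, rep: UserReputation) -> bool:
    """Filter the message or let it post, based on the user's past behavior."""
    return message_risk_level <= filter_threshold(rep)
```

In this toy version, the same message can post for a long-time positive contributor but be held back for an account that keeps tripping the high-risk classifier, which is the behavior the reputation system described above is aiming for.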
In addition to filtering high-risk content, Habbo set up auto-sanctions to take action on users who repeatedly break the rules. They can now reinforce community standards in real time, giving users the opportunity to correct their behavior and rejoin the community.
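A simple way to picture auto-sanctions is an escalation ladder. The tiers and durations below are hypothetical, shown only to illustrate the pattern of escalating consequences with a path back into the community; they are not Habbo’s actual rules.

```python
# Hypothetical escalation policy for repeat rule-breakers (illustrative only).
from datetime import timedelta

SANCTION_LADDER = [
    ("warning", None),                 # first offence: remind the user of the rules
    ("mute", timedelta(minutes=30)),   # short time-out from chat
    ("mute", timedelta(hours=24)),     # longer time-out for repeat offences
    ("ban", timedelta(days=7)),        # temporary removal from the community
]


def next_sanction(prior_offences: int):
    """Pick the sanction for the latest violation, escalating with each repeat."""
    index = min(prior_offences, len(SANCTION_LADDER) - 1)
    return SANCTION_LADDER[index]


# Example: a user's third violation triggers the 24-hour mute.
action, duration = next_sanction(2)
print(action, duration)  # -> mute 1 day, 0:00:00
```

Because each sanction is time-limited, users get an immediate, consistent consequence while still having the chance to correct their behavior and return.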
The Habbo moderation team also has full access to the Community Sift tool. They can retrain the AI on the spot based on new or reported language trends, ensuring that users are protected at all times.
Results
Since implementing Community Sift, Habbo has built a stronger moderation process while significantly reducing their moderation workload:
With Sift we have effectively reduced moderation: about 70% of the mod workload has been eliminated. The remaining moderation effort goes towards reviewing CFHs (calls for help, aka user-generated reports) that are mostly not actionable, since users aren’t being exposed to bad content in the first place. This is giving time back to our mods, who are able to engage with the community and otherwise focus on the user experience.
Giorgo Paizanis, VP, Habbo
Now, when Habbo builds new in-house tools, they base them on Community Sift’s unique technology:
Community Sift is changing our organization and how we work.