Case Study: Friendbase
Increasing User Retention
Friendbase is a vibrant virtual world for teens to chat, create, and play as friendly avatars. Primarily composed of over-13 users, the Friendbase population enjoys making friends, taking fun quizzes, and playing games together.
The community is predominantly female, with girls aged 13 to 16 making up 60% of users. The current version launched in 2016, and as of 2017 over half of the community is located in Brazil (34%) and the US (20%). Users from India and Turkey make up another significant share of the population. More than 600,000 accounts have been created since the 2016 release.
Situation
The Friendbase community expanded rapidly upon the release of its updated version in 2016. However, chat on the platform was quickly filled with the toxic sexism, racism, and bullying behavior that has become commonplace in many apps and games. For Friendbase, this behavior led to high user churn. Typically, users would create an account, encounter an unpleasant or threatening interaction with another user, then leave the app — never to return.
An associate at Friendbase, aware that toxic behavior was reducing retention, compared the problem to a leaky bucket. To improve retention, the Friendbase team needed to fix the leaks.
To transform this unfriendly environment, the experienced team behind Friendbase decided to follow a path they had first explored back in 2007. At the time, the team was working on a different online community and wanted to build or adopt an automated, AI-based system to moderate a globally scaling platform efficiently. Unfortunately, neither the technology nor the pricing of the day made that possible.
By 2016, times had changed. Rather than attempting to build a state-of-the-art system themselves, with uncertain results, they turned to Community Sift for a solution.
Action
Today, Friendbase uses Community Sift to detect and block hate speech, abusive comments, and player harassment. Alongside chat moderation, they use the user reputation system to identify their most toxic users. Community Sift assigns each user in the community a trust level; based on ongoing behavior, users move between trusted and not-trusted states, and with each change their chat permissions are restricted or expanded accordingly.
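The tiered-trust idea described above can be sketched as a small state machine: users accumulate or shed violations, their trust level shifts, and each level maps to a set of chat permissions. This is a minimal illustrative sketch only; the class names, thresholds, and permission sets are hypothetical and do not reflect Community Sift's actual API or internals.

```python
from enum import Enum


class TrustLevel(Enum):
    NOT_TRUSTED = 0
    DEFAULT = 1
    TRUSTED = 2


# Hypothetical permission sets per trust level.
PERMISSIONS = {
    TrustLevel.NOT_TRUSTED: {"filtered_chat"},
    TrustLevel.DEFAULT: {"filtered_chat", "open_chat"},
    TrustLevel.TRUSTED: {"filtered_chat", "open_chat", "group_chat"},
}


class UserReputation:
    """Tracks a user's trust level from simple behavior signals."""

    def __init__(self):
        self.level = TrustLevel.DEFAULT
        self.violations = 0

    def record_violation(self):
        # Repeated violations demote the user to a restricted state.
        self.violations += 1
        if self.violations >= 3:
            self.level = TrustLevel.NOT_TRUSTED

    def record_good_behavior(self):
        # Sustained good behavior works off past violations and,
        # once the slate is clean, promotes the user to trusted.
        if self.violations > 0:
            self.violations -= 1
        if self.violations == 0:
            self.level = TrustLevel.TRUSTED

    def permissions(self):
        return PERMISSIONS[self.level]
```

For example, a user who racks up three violations drops to `NOT_TRUSTED` and keeps only filtered chat; after enough good behavior they climb back up and regain the fuller permission set.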
In the future, Friendbase plans to use the reputation system to further increase new user retention. They hope to invite trusted users to take part in an ambassador program that welcomes new members to the app.
Results
Since integrating with Community Sift in November 2016, user retention in Friendbase has increased significantly. In that time, the retention rate by day 30 has more than doubled.
And with toxic and threatening content now being filtered, user reports have decreased considerably. The workload can now be managed easily by the small moderation team.
Moreover, Friendbase has found that users who are temporarily banned for vulgar and offensive language change their behavior in response to their lost privileges. In many cases, users simply didn't realize they were breaking community guidelines. When told why they were banned, users often apologize and return to the app, bringing an entirely different approach to community interactions with them.
Friendbase believes that the Community Sift tool provides exceptional value for all online communities, regardless of demographic. They also believe that Community Sift provides a superior experience compared to building an internal moderation tool.