Case Study: Brighten
Inspiring Positive Engagement
About Brighten
Brighten is the first positive social network, built so that users can send compliments to friends. Users send “brightens” to friends, who can reply with a “smile,” a one-second disappearing selfie of their reaction. Compliments are anonymous unless users choose to reveal their identities.
To date, users have sent over 11 million brightens, all of which need to be moderated.
Brighten’s success depends on maintaining a safe community at all times. When users log in, they expect a cheerful and encouraging environment. Bullying, harassment, and abuse cannot be tolerated.
Situation
When Brighten first launched in beta, they used a simple blacklist to block obvious profanity. Unfortunately, the filter did not recognize words and phrases unless they were an exact match for items on the blacklist. Misspellings, slang, and deliberate manipulations all went undetected.
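To see why, here is a minimal sketch of an exact-match filter of that kind; the word list and function names are illustrative, not Brighten’s actual code:

```python
# A naive exact-match blacklist filter, similar in spirit to the one
# Brighten used in beta. The word list and names are illustrative.
BLACKLIST = {"stupid", "loser", "idiot"}

def is_blocked(message: str) -> bool:
    """Block a message only when a token exactly matches the blacklist."""
    return any(token in BLACKLIST for token in message.lower().split())

print(is_blocked("you are stupid"))       # True  -- exact match is caught
print(is_blocked("you are stup1d"))       # False -- one swapped character slips through
print(is_blocked("you are s t u p i d"))  # False -- spacing defeats tokenization
```

As the last two calls show, any misspelling, substitution, or spacing trick sails straight past a filter like this.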
Brighten realized that bullies were ruining the product for everyone else. More importantly, good users were leaving. Word-of-mouth can make or break a new product. If word got around that Brighten — created for the sole purpose of spreading positivity — couldn’t prevent bullying, there was simply no way the product would attract an audience.
Their conclusion? They didn’t have the right tools to keep their users safe. They needed a smarter, more sophisticated way to identify and moderate high-risk content like bullying, hate speech, and abuse.
Action
Brighten replaced their inefficient blacklist with Community Sift’s flexible, context-based text classifier. Powered by an artificial intelligence model, but maintained by expert human pattern-building, Community Sift processes language differently than other filters. It’s easy to find obvious profanity or natural language — even the old Brighten blacklist could do that — but it’s not easy to find unnatural language, which is what most systems miss.
Users who are determined to post malicious content will always look for ways around traditional filters, so Community Sift looks for the hidden, unnatural meanings of words. It detects bullying, harassment, and threats even when they’re deliberately cloaked in misspellings, l337 5p34k, and tricky slang.
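As a rough sketch of the general technique (not Community Sift’s actual model), a filter can normalize common leetspeak substitutions, spacing tricks, and stretched spelling before matching, so cloaked variants map back to the words they hide:

```python
import re

# Illustrative only: a toy normalizer that undoes common leetspeak
# substitutions and stretched spelling before pattern matching.
# Community Sift's real classifier is far more sophisticated.
LEET_MAP = str.maketrans(
    {"1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "0": "o", "@": "a", "$": "s"}
)

def normalize(message: str) -> str:
    text = message.lower().translate(LEET_MAP)
    text = re.sub(r"[^a-z]", "", text)        # drop spacing and punctuation tricks
    text = re.sub(r"(.)\1{2,}", r"\1", text)  # collapse stretched letters ("looooser")
    return text

BULLYING_PATTERNS = [re.compile(p) for p in (r"kys", r"killyourself", r"loser")]

def is_bullying(message: str) -> bool:
    text = normalize(message)
    return any(p.search(text) for p in BULLYING_PATTERNS)

print(is_bullying("k y 5  y0ur53lf"))  # True -- leetspeak and spacing are undone
print(is_bullying("LOOOOSER"))         # True -- stretched spelling is collapsed
```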
This sophisticated pattern detection is ideal for a community like Brighten, where even a single instance of bullying can damage community morale, ultimately resulting in high user churn and potentially negative press.
In addition to using the built-in text classifier, Brighten took advantage of Community Sift’s adaptable system and self-serve tools. They created hundreds of custom language patterns based on community trends and in-app slang. By building custom patterns, they catch all variations of any potential bullying phrase in real time.
They also use Community Sift to take preventative measures that protect their most vulnerable users. Using Community Sift’s pattern detection, they identify users who post messages about suicide and self-harm. Staff are alerted when a user posts potentially alarming content. They can then send the user a message encouraging them to contact a suicide hotline.
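A minimal sketch of what such an escalation flow could look like; the patterns, queue, and flow here are assumptions for illustration, not Brighten’s or Community Sift’s actual implementation:

```python
import re

# Hypothetical escalation flow: flag messages that mention self-harm
# and queue an alert for staff instead of silently filtering them.
SELF_HARM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicide\b", r"\bself[- ]?harm\b")
]

STAFF_ALERT_QUEUE = []  # stand-in for a real alerting system

def review_message(user_id, message):
    """Queue an alert so staff can reach out with hotline information."""
    if any(p.search(message) for p in SELF_HARM_PATTERNS):
        STAFF_ALERT_QUEUE.append({"user": user_id, "message": message})

review_message("user123", "sometimes I think about suicide")
print(len(STAFF_ALERT_QUEUE))  # 1 -- an alert is waiting for staff outreach
```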
Results
Since implementing Community Sift, Brighten has built a healthy, friendly community. When users log into Brighten, they can trust that they will have a positive experience.
Studies show that positively reinforcing good behavior yields better results than just punishing bad behavior. Brighten uses Community Sift to send a clear message to bullies and toxic users.
For example, a new user joined Brighten. Their first two messages were “Kys” (a popular acronym for “kill yourself”) and “Your dumb.” Community Sift flagged both messages as bullying. They were automatically filtered and weren’t seen by their intended victim. Since no one saw them, the user didn’t get their intended reaction. No response = no fun. Their next two messages? Compliments: “Your so amazing” and “dude your hot.” The positive messages resulted in positive engagement. It’s simple — negativity breeds more negativity. And positivity? Well, that’s just plain infectious.
With Community Sift, Brighten has fostered a culture of openness, positivity, and safety. They’ve made a promise to their community — and they live up to that promise every day.