Brighten was born from tragedy. In 2013, founder and CEO Austin Kevitch lost his friend Oliver in a rock climbing accident. As friends shared happy memories of Oliver on his Facebook wall, Austin had a powerful realization: We’re so afraid of appearing vulnerable that, in life, we rarely tell our friends how much they mean to us. Instead, we wait until they’re gone. Austin set out to create a product that would encourage people to share positive thoughts, memories, and compliments every day.
That inspiration led to Brighten — the first positive social network that allows users to send compliments to friends. Users send “brightens” to friends, who can then send a “smile” — a one-second disappearing selfie of their reaction — in return. Compliments are anonymous unless users choose to reveal their identities.
To date, users have sent over 11 million brightens, all of which need to be moderated.
Brighten’s success depends on maintaining a safe community at all times. When users log in, they expect a cheerful and encouraging environment. Bullying, harassment, and abuse cannot be tolerated.
Maintaining and supporting a positive community on Brighten is vital to our success as a company. People come to the app expecting positivity, and one negative comment can ruin their entire experience.
Austin Kevitch, CEO, Brighten
Situation
When Brighten first launched in beta, they used a simple blacklist to block obvious profanity. Unfortunately, the filter did not recognize words and phrases unless they were an exact match for items on the blacklist. Misspellings, slang, and deliberate manipulations all went undetected.
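To illustrate the gap, here is a minimal sketch of how an exact-match blacklist behaves; the word list and messages are hypothetical examples, not Brighten's actual data or filter code.

```python
# Minimal sketch of an exact-match blacklist filter; the word list and
# messages are hypothetical, not Brighten's actual data.
BLACKLIST = {"stupid", "loser", "hate"}

def is_blocked(message: str) -> bool:
    # Flag the message only if a word exactly matches a blacklist entry.
    return any(word in BLACKLIST for word in message.lower().split())

print(is_blocked("you are stupid"))   # True: exact match is caught
print(is_blocked("you are st00pid"))  # False: misspelling slips through
print(is_blocked("ur a l0ser"))       # False: slang and leetspeak slip through
```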
Brighten realized that bullies were ruining the product for everyone else. More importantly, good users were leaving. Word-of-mouth can make or break a new product. If word got around that Brighten — created for the sole purpose of spreading positivity — couldn’t prevent bullying, there was simply no way the product would attract an audience.
Their conclusion? They didn’t have the right tools to keep their users safe. They needed a smarter, more sophisticated way to identify and moderate high-risk content like bullying, hate speech, and abuse.
Action
Brighten replaced their inefficient blacklist with Community Sift’s flexible, context-based text classifier. Powered by an artificial intelligence model but maintained through expert human pattern-building, Community Sift processes language differently from other filters. Obvious profanity and natural language are easy to find — even the old Brighten blacklist could do that — but unnatural language is not, and that is what most systems miss.
Users who are determined to post malicious content will always look for ways around traditional filters, so Community Sift looks for the hidden, unnatural meanings of words. It detects bullying, harassment, and threats even when they’re deliberately cloaked in misspellings, l337 5p34k, and tricky slang.
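The general idea can be sketched in a few lines of Python. This is only an illustration of normalization plus pattern matching, not Community Sift's actual model or rule set; the substitution map and patterns below are assumptions made for the example.

```python
import re

# Illustrative sketch of normalization plus pattern matching (an assumption
# for this example, not Community Sift's actual model or rules).
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

# Hypothetical bullying patterns, written against normalized text.
BULLYING_PATTERNS = [
    re.compile(r"\bk+y+s+\b"),              # "kys" and stretched spellings
    re.compile(r"\bkill\s+your\s*self\b"),
]

def normalize(message: str) -> str:
    # Lowercase, undo common character substitutions, and collapse runs
    # of three or more repeated characters down to two.
    text = message.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1{2,}", r"\1\1", text)

def looks_like_bullying(message: str) -> bool:
    text = normalize(message)
    return any(pattern.search(text) for pattern in BULLYING_PATTERNS)

print(looks_like_bullying("KYS"))              # True
print(looks_like_bullying("kyyyys"))           # True: stretched spelling
print(looks_like_bullying("k1ll your5elf"))    # True: leetspeak substitutions
print(looks_like_bullying("you made my day"))  # False
```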
This sophisticated pattern detection is ideal for a community like Brighten, where even a single instance of bullying can damage community morale, ultimately resulting in high user churn and potentially negative press.
In addition to using the built-in text classifier, Brighten took advantage of Community Sift’s adaptable system and self-serve tools. They created hundreds of custom language patterns based on community trends and in-app slang. By building custom patterns, they catch the many variations of a bullying phrase in real time.
They also use Community Sift to take preventative measures that protect their most vulnerable users. Using Community Sift’s pattern detection, they identify users who post messages about suicide and self-harm. Staff are alerted when a user posts potentially alarming content. They can then send the user a message encouraging them to contact a suicide hotline.
Results
Since implementing Community Sift, Brighten has built a healthy, friendly community. When users log into Brighten, they can trust that they will have a positive experience.
Community Sift has allowed us to deliver on the most important promise we make to our users — a positive community.
Alec Lorraine, Chief Happiness Officer, Brighten
Studies show that positively reinforcing good behavior yields better results than simply punishing bad behavior. Brighten uses Community Sift both to reinforce that positivity and to send a clear message to bullies and toxic users.
Our community has gone from one where bullies and trolls had the power, to one where the nicest, most positive users set the tone. By filtering negative content we are able to make Brighten an extremely positive community.
Alec Lorraine, Chief Happiness Officer, Brighten
For example, a new user joined Brighten. Their first two messages were “Kys” (a popular acronym for “kill yourself”) and “Your dumb.” Community Sift flagged both messages as bullying. They were automatically filtered and weren’t seen by their intended victim. Since no one saw them, the user didn’t get their intended reaction. No response = no fun. Their next two messages? Compliments: “Your so amazing” and “dude your hot.” The positive messages resulted in positive engagement. It’s simple — negativity breeds more negativity. And positivity? Well, that’s just plain infectious.
Instead of just blocking trolls, we are able to rehabilitate them.
Alec Lorraine, Chief Happiness Officer, Brighten
With Community Sift, Brighten has fostered a culture of openness, positivity, and safety. They’ve made a promise to their community — and they live up to that promise every day.
Any company that needs to moderate and/or respond to chat should definitely use Community Sift. We’ve gone from being reactive to proactive and completely changed the culture of our app.