In its first 48 hours, YOLO acquired 1 million users, a plague of cyberbullying, a scalable content moderation solution, and a new vision for the future.
YOLO might never have existed at all if not for a weekend experiment. Gregoire Henrion and his cofounders weren’t really interested in an anonymity app; they were just curious what they could build over an idle couple of days. But when YOLO hit the App Store, it found instant traction and caught a ride on a viral loop via Snapchat.
“We had a million users in two days,” says Henrion. Unfortunately, the anonymous nature of the app was also providing a platform for cyberbullying, which spread like wildfire. “We hadn’t thought of it before, because we’d never dreamed of the scale. But even after one day, we knew it was a big issue.”
“Within a day, we went from having lots of bad behaviors, to being safe as could be.”
YOLO was at this time in the midst of a feeding frenzy of meetings, media and monetization that only the developers of such viral app sensations can truly understand. “We were doing funding calls and everything else – it was crazy – but Two Hat sorted out what we needed, and the implementation was completed within hours.”
YOLO initially went with very strict guidelines before easing settings based on user feedback. “We wanted to fix what was wrong, and we didn’t want to be associated with bad behaviors,” says Henrion. “By experimenting with policies and settings, we find we can deal with 95 to 99 percent of the issues.”
In YOLO’s configuration, inappropriate messages or comments simply do not get shared, but the offending party doesn’t know this. In the content moderation industry, this is known as a false send. But the bully just knows they’re not getting any attention back, which is often enough for them to stop and go away. “Now, we have the app tuned so that the filters are super-efficient,” says Henrion.
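The false-send pattern can be sketched in a few lines. This is a minimal illustration, not YOLO's actual implementation: `is_inappropriate` stands in for a real moderation classifier such as Community Sift, and all names and terms here are assumptions.

```python
def is_inappropriate(text: str) -> bool:
    # Placeholder policy check; a real system would call a moderation API here.
    blocked_terms = {"loser", "idiot"}
    return any(term in text.lower() for term in blocked_terms)

def send_message(sender_feed: list, recipient_feed: list, text: str) -> None:
    # The sender always sees their own message as "sent"...
    sender_feed.append(text)
    # ...but flagged content is silently dropped instead of delivered,
    # so the offender gets no signal that a filter intervened.
    if not is_inappropriate(text):
        recipient_feed.append(text)

mine, theirs = [], []
send_message(mine, theirs, "hey, nice stream!")
send_message(mine, theirs, "you are such a loser")
print(mine)    # both messages appear sent to the sender
print(theirs)  # only the clean message is delivered
```

The key design choice is that rejection is invisible: the bully sees normal send behavior but never receives a reaction, which removes the reward for the behavior.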
“We have a lot of control now; our users are happy, and we are super happy.”
Moving forward, YOLO plans to apply what the company has learned about anonymity and social media to carve a new approach to online safety. “When we look at user behavior now, one of the secrets of YOLO being successful is that even if the user’s name is hidden, you still see the face of the user on their profile pic. This bit of exposure – you don’t know me but you can see my face – is very often enough to make users regulate their behaviors. That’s naturally ‘Safe by Design’ because it’s our normal behavior.”
“If you want to create value you have to make something secure. We’re not naive anymore. We know all the bad things that can happen in social.”
YOLO envisions its community and others as places where Safety by Design has encouraged users to change their behavior: why bully, harass, or cajole in a community if life on mute is the only possible outcome?
“Anonymity alone is not a sustainable approach to managing communities,” says Henrion. “Two Hat’s Community Sift gives us tools to help shift user behavior, the security system to deal with those who cause trouble, and a solution we know scales quickly.”
We’re currently offering no-cost, no-obligation Community Audits for social networks that want an expert consultation on their community moderation practices.
Our Director of Community Trust & Safety, Carlos Figueiredo, will examine your community, locate areas of potential risk and provide you with a personalized community analysis, including recommended best practices and tips to maximize user engagement.
Sign up using the form below to request your community consultation.
Launched in 2016, Yubo is a social network of more than 20 million users from around the world. Yubo lets users meet new people and connect through live video streaming and chat. Developed and operated by Paris-based Twelve App SAS, the Yubo app is available for free on the App Store and Google Play.
Two Hat’s Community Sift platform powers content moderation for Yubo’s Live Titles, Comments, and Usernames, all in multiple languages. Use cases include detection and moderation of bullying, sexting, drugs/alcohol, fraud, racism, and grooming. Recently, Yubo’s COO, Marc-Antoine Durand, sat down with Two Hat to share his thoughts on building and operating a safe social platform for teens, and where future evolutions in content moderation may lead.
Two Hat: Talk about what it’s like to operate a community of young people from around the globe sharing 7 million comments every day on your platform.
Marc-Antoine Durand: It’s like running a city. You need to have rules and boundaries, and importantly you need to educate users about them, and you have to undertake prevention to keep things from getting out of hand in the first place. You’ll deal with all the bad things that exist elsewhere in society – drug dealing, fraud, prostitution, bullying and harassment, thoughts or attempts at suicide – and you will need a framework of policies and law enforcement to keep your city safe. It’s critical that these services are delivered in real-time.
The future safety of the digital world rests upon how willing we are to use behavioral insights to stop the bad from spoiling the good. If a Yubo moderator sees something happening that violates community guidelines or could put someone at risk, they send a warning message to the user. The message might say that their Live feed will be shut down in one minute, or it might warn the user they will be suspended from the app if they don’t change their behavior. We’re the only social video app to do this, and we do it because the best way for young people to learn is in the moment, through real-life experience.
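The escalating in-the-moment intervention described above can be sketched as a simple strike counter. This is a hedged illustration only: the thresholds, messages, and per-user strike tracking are assumptions, not Yubo's actual moderation tooling.

```python
# Illustrative escalation ladder: warn first, then threaten the Live feed,
# then suspend. Thresholds and wording are assumptions for this sketch.
WARNINGS = {
    1: "Warning: this violates community guidelines.",
    2: "Your Live feed will be shut down in one minute.",
    3: "You are suspended from the app.",
}

def intervene(strikes: dict, user_id: str) -> str:
    """Record a guideline violation and return the message sent to the user."""
    strikes[user_id] = min(strikes.get(user_id, 0) + 1, 3)
    return WARNINGS[strikes[user_id]]

strikes = {}
print(intervene(strikes, "u42"))  # first violation: a warning
print(intervene(strikes, "u42"))  # second: stream shutdown notice
print(intervene(strikes, "u42"))  # third: suspension
```

The point of the ladder is pedagogical: each step gives the user a chance to self-correct before the consequence escalates.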
TH: When Yubo first launched in 2016, content moderation was still quite a nascent industry. What solution options were available at the time, and what was the initial learning curve like for you as a platform operator?
MD: There weren’t many options available then. You could hire a local team of moderators to check comments and label them, but that’s expensive and hard to scale. There was no way our little team of four could manage all that and be proficient in Danish, English, French, Norwegian, Spanish, and Swedish all at the same time. So multi-language support was a must-have.
We created our own algorithms to detect images that broke Yubo’s community guidelines and acceptable use policies, but content moderation is a specialized technical competency and a never-ending job; with only four of us, we simply couldn’t do all that was required to do it well. As a result, early on, we were targeted by the press as a ‘bad app.’ To win back trust and establish the app as safe and appropriate for young people, we had to start over. Our strategy was to show that we were working hard and fast to improve, and we set out to establish that a small company with the right safety strategy and tools can be just as good as any large company at content moderation, or better.
I applaud Yubo for extensively reworking its safety features to make its platform safer for teens. Altering its age restrictions, improving its real identity policy, setting clear policies around inappropriate content and cyberbullying, and giving users the ability to turn location data off demonstrates that Yubo is taking user safety seriously.
TH: What are some of the key content moderation issues on your platform and how do you engage users as part of the solution?
MD: One of the issues every service has is fake user profiles. These are a particular problem in cases like grooming or bullying. To address this, we have created a partnership with a company called Yoti that allows users to certify their identity. So, when you’re talking to somebody, you can see that they have a badge signifying that their identity has been certified, indicating they are ‘who they say they are.’ Participation is voluntary, but if we think a particular profile may be suspicious or unsafe, we can force the user to certify their identity, or they will be removed from the platform.
The other issues we deal with are often related to the user’s live stream title, which is customizable, and the comments in real-time chats. Very soon after launching, we saw that users were creating sexualized and ‘attention-seeking’ live stream titles not just for fun, but as a strategy to attract more views, for example, with a title such as: “I’m going to flash at 50 views.” People are very good at finding ways to bypass the system by creating variations of words. We realized immediately that we needed a technology to detect and respond to that subversion.
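Detecting those word variations generally means normalizing text before matching it against a blocklist: undoing character substitutions, stripping separators, and collapsing repeats. The sketch below shows the idea under stated assumptions; the substitution table and blocked terms are illustrative, not how Community Sift or Yubo actually work.

```python
import re

# Common "l33t speak" character substitutions; this mapping is an
# illustrative assumption, real systems use far larger tables.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    text = text.lower().translate(SUBSTITUTIONS)
    # Drop spaces and punctuation used as separators ("f l a s h").
    text = re.sub(r"[^a-z]", "", text)
    # Collapse long character repeats ("flaaaash" -> "flaash").
    return re.sub(r"(.)\1{2,}", r"\1\1", text)

def matches_blocklist(title: str, blocklist: set) -> bool:
    norm = normalize(title)
    return any(term in norm for term in blocklist)

print(matches_blocklist("f l 4 s h at 50 views", {"flash"}))  # True
```

Normalize-then-match catches simple subversions cheaply; production systems layer many more techniques (phonetic matching, context models) on top of this baseline.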
As for engaging users in our content moderation, it’s very important to give users who wish to participate an opportunity to help within the app. Users want and value this. When our users report bad or concerning behavior in the app, they give us a very precise reason and good context. They do this because they are very passionate about the service and want to keep it safe. Our job is to gather this feedback and data so that we may learn from it, but also to take action on what users tell us, and to reward those who help us. That’s how this big city functions.
TH: Yubo was referenced as part of the United Kingdom’s Online Harms white paper and consultation. What’s your take on pending duty-of-care legislation in the UK and elsewhere, and are you concerned that a more restrictive regulatory environment may stifle technical innovation?
MD: I think regulation is good as long as it’s thoughtful and agile enough to adjust to a constantly changing technical environment, and not simply a way to blame apps and social platforms for all the bad things happening in society, because that does not achieve anything. Perhaps most concerning is setting standards that only the Big Tech companies with thousands of moderators and technical infrastructure staff can realistically achieve; this prevents smaller start-ups from innovating and participating in the ecosystem. Certainly, people spend a lot of time on these platforms and they should not be unregulated, but the government can’t just set rules. It needs to help companies get better at providing safer products and services.
It’s an ecosystem, and everyone needs to work together to improve it and keep it as safe as possible, including the wider public and users themselves. Much more is needed in the White Paper about media literacy and about managing offline problems that escalate and are amplified online. Bullying and discrimination, for example, exist in society, and strategies are needed in schools, families, and communities to tackle these issues. Focusing only on the online side will not deter or prevent them.
In France, by comparison to the UK, we’re very far away from this ideal ecosystem. We’ve started to work on moderation, but really the French government just does whatever Facebook says. No matter where you are, the more regulations you have, the more difficult it will be to start and grow a company, so barriers to innovation and market entry will be higher. That’s just where things are today.
It’s in our DNA to take safety features as far as we can to protect our users.
— Marc-Antoine Durand, COO of Yubo
TH: How do you see Yubo’s approach to content moderation evolving in the future?
MD: We want to build a reputation system for users, the idea being to do what I call pre-moderation, or detecting unsafe users by their history. For that, we need to gather as much data as we can from our user’s live streams, titles, and comments. The plan is to create a method where users are rewarded for good behavior. That’s the future of the app, to reward the good stuff and, for the very small minority who are doing bad stuff, like inappropriate comments or pictures or titles, we’ll engage them and let them know it’s not ok and that they need to change their behavior if they want to stay. So, user reputation as a baseline for moderation. That’s where we are going.
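A reputation system like the one Durand describes can be approximated as a score built from a user's history that decides how strictly their new content is screened. This is a hedged sketch only: the signals, weights, thresholds, and field names below are assumptions for illustration, not Yubo's actual system.

```python
def reputation_score(history: dict) -> float:
    """Combine positive and negative signals from a user's history.

    Weights are illustrative assumptions, not a real tuning.
    """
    return (history.get("days_active", 0) * 0.1
            + history.get("helpful_reports", 0) * 1.0
            - history.get("violations", 0) * 5.0)

def moderation_level(history: dict) -> str:
    score = reputation_score(history)
    if score < 0:
        return "pre-moderate"   # hold content for review before it goes live
    if score < 10:
        return "standard"       # normal automated filtering
    return "trusted"            # lighter touch, rewarding sustained good behavior

print(moderation_level({"days_active": 3, "violations": 2}))         # pre-moderate
print(moderation_level({"days_active": 200, "helpful_reports": 4}))  # trusted
```

The design captures both halves of the plan: good behavior earns lighter moderation, while a history of violations routes a user's content through review before it reaches anyone.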