Meet the Mayor in a Town of 20 Million Teens

Launched in 2016, Yubo is a social network with more than 20 million users around the world. Yubo lets users meet new people and connect through live video streaming and chat. Developed and operated by Paris-based Twelve App SAS, the Yubo app is available for free on the App Store and Google Play.

Two Hat’s Community Sift platform powers content moderation for Yubo’s Live Titles, Comments, and Usernames, all in multiple languages. Use cases include detection and moderation of bullying, sexting, drugs/alcohol, fraud, racism, and grooming. Recently, Yubo’s COO, Marc-Antoine Durand, sat down with Two Hat to share his thoughts on building and operating a safe social platform for teens, and where future evolutions in content moderation may lead.

Talk about what it’s like to operate a community of young people from around the globe sharing 7 million comments every day on your platform.

It’s like running a city. You need rules and boundaries, you need to educate users about them, and you have to invest in prevention to keep things from getting out of hand in the first place. You’ll deal with all the bad things that exist elsewhere in society – drug dealing, fraud, prostitution, bullying and harassment, thoughts or attempts of suicide – and you will need a framework of policies and law enforcement to keep your city safe. It’s critical that these services are delivered in real time.

Marc-Antoine Durand, COO of Yubo

The future safety of the digital world rests upon how willing we are to use behavioral insights to stop the bad from spoiling the good. If a Yubo moderator sees something happening that violates community guidelines or could put someone at risk, they send a warning message to the user. The message might say that their Live feed will be shut down in one minute, or it might warn the user they will be suspended from the app if they don’t change their behavior. We’re the only social video app to do this, and we do it because the best way for young people to learn is in the moment, through real-life experience.
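
Yubo has not published how this works under the hood, but as a rough illustration, an escalation ladder like the one Durand describes (warn, then shut down the Live feed, then suspend) might be modeled as in the minimal sketch below. Every name, type, and threshold here is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    WARN = auto()             # send an in-app warning message
    SHUTDOWN_STREAM = auto()  # end the Live feed after a short countdown
    SUSPEND = auto()          # temporarily suspend the account

# Hypothetical escalation ladder: each repeated violation in a
# session triggers a stronger response than the last.
ESCALATION_LADDER = [Action.WARN, Action.SHUTDOWN_STREAM, Action.SUSPEND]


@dataclass
class UserSession:
    user_id: str
    violations: int = 0


def intervene(session: UserSession) -> Action:
    """Pick the next real-time action for a guideline violation."""
    step = min(session.violations, len(ESCALATION_LADDER) - 1)
    session.violations += 1
    return ESCALATION_LADDER[step]
```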

Yubo’s role is always to find a balance between enabling self-expression and freedom of speech and preventing harm. Teenagers are very keen to talk about themselves, are interested in others, and want to share the issues on their minds, such as relationships and sexuality. This is a normal part of growing up and development at this point in teenagers’ lives. But it needs to happen in a context that is healthy and free from pressure and coercion, for example, pressure to share intimate images. Finding the limit or balance between freedom and protection in each case is important to make sure the app appeals to young people and offers them space for expression, but keeps them as safe as possible.

When Yubo first launched in 2016, content moderation was still quite a nascent industry. What solution options were available at the time, and what was your initial learning curve like as a platform operator?

There weren’t many options available then. You could hire a local team of moderators to check comments and label them, but that’s expensive and hard to scale. There was no way our little team of four could manage all that and be proficient in Danish, English, French, Norwegian, Spanish, and Swedish all at the same time. So multi-language support was a must-have.

We created our own algorithms to detect images that broke Yubo’s community guidelines and acceptable use policies, but content moderation is a very specialized technical competency and a never-ending job. There were only four of us, and we simply couldn’t do everything that was required to do it well. As a result, early on, we were targeted by the press as a ‘bad app.’ To win back trust and establish the app as safe and appropriate for young people, we had to start over. Our strategy was to show that we were working hard and fast to improve, and we set out to establish that a small company with the right safety strategy and tools can be just as good at content moderation as any large company, or better.

I applaud Yubo for extensively reworking its safety features to make its platform safer for teens. Altering its age restrictions, improving its real identity policy, setting clear policies around inappropriate content and cyberbullying, and giving users the ability to turn location data off demonstrates that Yubo is taking user safety seriously.

Julie Inman Grant, Australian eSafety Commissioner

What are some of the key content moderation issues on your platform and how do you engage users as part of the solution?

One of the issues every service has is fake user profiles. These are a particular problem in cases like grooming or bullying. To address this, we have created a partnership with a company called Yoti that allows users to certify their identity. So, when you’re talking to somebody, you can see that they have a badge signifying that their identity has been certified, indicating they are ‘who they say they are.’ Participating is voluntary, but if we think a particular profile may be suspicious or unsafe, we can require the user to certify their identity, or they will be removed from the platform.
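
A minimal sketch of that enforcement logic might look like the following, assuming a hypothetical verified flag and a suspicious signal set by moderators; Yoti’s actual API and Yubo’s internal rules are not public, so every name here is illustrative.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    user_id: str
    verified: bool = False    # set True after a Yoti-style identity check
    suspicious: bool = False  # set by moderators or automated signals


def enforce_verification(profile: Profile) -> str:
    """Voluntary verification earns a badge; suspicious profiles must verify or leave."""
    if profile.verified:
        return "show_badge"
    if profile.suspicious:
        # Forced check: the user must certify their identity to stay.
        return "require_verification_or_remove"
    return "no_action"
```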

Real-time intervention by Yubo moderators

The other issues we deal with are often related to the user’s live stream title, which is customizable, and the comments in real-time chats. Very soon after launching, we saw that users were creating sexualized and ‘attention-seeking’ live stream titles not just for fun, but as a strategy to attract more views, for example with a title such as: “I’m going to flash at 50 views.” People are very good at finding ways to bypass the system by creating variations of words. We realized immediately that we needed technology to detect and respond to that subversion.
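
A common generic technique for catching those word variations is to normalize text before matching it against blocked terms: mapping look-alike characters back to letters, stripping separators, and collapsing repeated characters. The sketch below illustrates the idea; it is not Yubo’s or Community Sift’s actual algorithm, and the term list is invented.

```python
import re

# Map common look-alike substitutions back to canonical letters.
LOOKALIKES = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s",
    "7": "t", "@": "a", "$": "s", "!": "i",
})

BLOCKED_TERMS = {"flash"}  # illustrative term list


def normalize(text: str) -> str:
    text = text.lower().translate(LOOKALIKES)
    text = re.sub(r"[^a-z]", "", text)     # drop separators like '.', '-', ' '
    return re.sub(r"(.)\1+", r"\1", text)  # collapse repeats: "flaaash" -> "flash"


def is_subverted(title: str) -> bool:
    """True if the normalized title contains a blocked term."""
    norm = normalize(title)
    return any(term in norm for term in BLOCKED_TERMS)


print(is_subverted("I'm going to fl4$h at 50 views"))  # True
```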

As for engaging users in our content moderation, it’s very important to give users who wish to participate an opportunity to help, something concrete they can do within the app. Users want and value this. When our users report bad or concerning behavior in the app, they give us a very precise reason and good context. They do this because they are very passionate about the service and want to keep it safe. Our job is to gather this feedback and data so that we can learn from it, but also to act on what users tell us and to reward those who help us. That’s how this big city functions.
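
A structured report capturing the precise reason and free-text context a user supplies might look something like this minimal sketch; the reason categories and field names are assumptions, not Yubo’s schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical report categories, loosely based on the use cases
# named earlier in this article.
REPORT_REASONS = {"bullying", "grooming", "sexual_content", "fraud", "self_harm"}


@dataclass
class UserReport:
    reporter_id: str
    reported_id: str
    reason: str           # one of REPORT_REASONS
    context: str          # free-text description from the reporter
    created_at: datetime

    def __post_init__(self):
        if self.reason not in REPORT_REASONS:
            raise ValueError(f"unknown reason: {self.reason}")


report = UserReport(
    reporter_id="u123",
    reported_id="u456",
    reason="bullying",
    context="Repeated insults in the live chat over the last ten minutes.",
    created_at=datetime.now(timezone.utc),
)
```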

Yubo was referenced in the United Kingdom’s Online Harms White Paper and consultation. What’s your take on pending duty-of-care legislation in the UK and elsewhere, and are you concerned that a more restrictive regulatory environment may stifle technical innovation?

I think regulation is good as long as it’s thoughtful, agile enough to adjust to a constantly changing technical environment, and not simply a way to blame apps and social platforms for all the bad things happening in society, because that achieves nothing. Perhaps most concerning is setting standards that only the Big Tech companies, with thousands of moderators and technical infrastructure staff, can realistically achieve; this restricts smaller start-ups from innovating and participating in the ecosystem. Certainly, people spend a lot of time on these platforms and they should not be unregulated, but governments can’t just set rules; they need to help companies get better at providing safer products and services.

It’s an ecosystem, and everyone needs to work together to improve it and keep it as safe as possible, including the wider public and users themselves. Much more is needed in the White Paper about media literacy and about managing offline problems that escalate and are amplified online. Bullying and discrimination, for example, exist in society, and strategies are needed in schools, families, and communities to tackle them; focusing on the online side alone will not deter or prevent these issues.

In France, by comparison with the UK, we’re very far from this ideal ecosystem. We’ve started to work on moderation, but really the French government just does whatever Facebook says. No matter where you are, the more regulations you have, the more difficult it is to start and grow a company, so barriers to innovation and market entry will be higher. That’s just where things are today.

It’s in our DNA to take safety features as far as we can to protect our users.

— Marc-Antoine Durand, COO of Yubo

How do you see Yubo’s approach to content moderation evolving in the future?

We want to build a reputation system for users. The idea is to do what I call pre-moderation: detecting unsafe users by their history. For that, we need to gather as much data as we can from our users’ live streams, titles, and comments. The plan is to create a method where users are rewarded for good behavior. That’s the future of the app: reward the good stuff, and for the very small minority who are doing bad stuff, like inappropriate comments, pictures, or titles, engage them and let them know it’s not okay and that they need to change their behavior if they want to stay. So, user reputation as a baseline for moderation. That’s where we’re going.
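
As an illustration of what such a reputation baseline could look like, here is a minimal sketch: a score nudged up by positive signals and down by violations, with low scores routed to stricter pre-moderation. The signal names, weights, and threshold are invented for illustration; Durand describes the goal, not the math.

```python
from dataclasses import dataclass

# Hypothetical signal weights: positive behavior raises the score,
# violations lower it.
WEIGHTS = {
    "helpful_report": +2.0,
    "clean_session": +0.5,
    "title_violation": -3.0,
    "comment_violation": -2.0,
    "image_violation": -5.0,
}

PRE_MODERATION_THRESHOLD = -5.0  # below this, content is held for review


@dataclass
class Reputation:
    user_id: str
    score: float = 0.0

    def record(self, signal: str) -> None:
        self.score += WEIGHTS[signal]

    def needs_pre_moderation(self) -> bool:
        # "Pre-moderation": review this user's content before it goes
        # live, rather than reacting after the fact.
        return self.score < PRE_MODERATION_THRESHOLD
```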

 


We’re currently offering no-cost, no-obligation Community Audits for social networks that want an expert consultation on their community moderation practices.

Our Director of Community Trust & Safety will examine your community, locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize user engagement.

Sign up today and we’ll be in touch.

 
