Design Around These Edtech Challenges Before They Become Barriers To Success

Educational technology in the classroom is one of the latest trends to sweep the startup and tech industry. Designed to connect teachers, students, and parents, edtech platforms grow in popularity, not to mention profitability, every day. In fact, according to TechCrunch, in the first 10 months of 2017, global investors staked $8.15 billion in edtech ventures.

The most effective edtech platforms include communication features like online forums, message boards, and chat. Why? Because online social features give students and teachers an additional, highly engaging level of interaction, much like they would experience in a traditional classroom setting.

As you can imagine, that opens the door to all sorts of challenges. How do you stop people from using profanity in your forum? Forget profanity – what about abusive language? The gaming industry has been dealing with online bullying and harassment for years – so what can edtech designers learn from them?

Pillar #1: Privacy by Design

GDPR and COPPA. CIPA and FERPA. Privacy regulations are everywhere. With Mark Zuckerberg’s recent testimony before the U.S. Senate and European Parliament, and today’s General Data Protection Regulation deadline heightening industry awareness of data privacy issues, we expect to see new and enhanced privacy regulations introduced over the next few years.

Going forward, any company that collects personal information – whether it’s an email address, birthdate, IP address, or more – will be expected to embed robust and transparent privacy features into their product.

Due to the especially sensitive nature of children’s personal information, kids’ products are already strictly regulated by COPPA, FERPA, and now GDPR.

While it may seem daunting at first, compliance doesn’t have to be scary, and it’s not insurmountable. In our recent webinar “The ROI of COPPA” (watch it on-demand), we explored the surprising benefits of building COPPA-compliant kids’ connected products, including increased user engagement, retention, and LTV.

If you’re getting into the edtech business, it’s critical to make user privacy part of your plan. Need guidance? We recommend that you work with a Safe Harbor company to ensure compliance.

Check out our beginner’s guide to COPPA compliance for a list of reputable companies that can help (they can assist with GDPR, too).

Pillar #2: Safety by Design

When it comes to kids’ online platforms, safety should never be an afterthought.

Any product that allows children to interact online (including games, social networks, messaging apps, and virtual worlds) is responsible for creating a safe environment where users are protected from bullying, harassment, hate speech, and child exploitation.

Regulations like COPPA and CIPA already mandate that under-13 products (and in the case of CIPA, K-12 libraries and schools) protect children from sharing and seeing Personally Identifiable Information (PII) and harmful online content.

Regardless of demographic, edtech products – from learning management systems to tutoring apps – must make user safety a core feature of their offering.

Here are the two fundamental kinds of content that you should address in your platform:

Personally Identifiable Information (PII)

COPPA says that you must take all reasonable measures to prevent users from sharing PII, which can include full name, phone number, email address, home address, and even images of children, like a profile pic. But COPPA and FERPA compliance aren’t the only reasons you should protect kids from sharing PII.

Children don’t always understand the dangers of exposing their personal information online. To prevent safety breaches, a text and image filter that is sophisticated enough to identify PII (notoriously difficult to detect, due to persistent filter manipulation attempts by young, savvy users) is a must-have. Wondering how hard it is to build a smart filter in-house? Check out the “Filter and redact PII” section of our beginner’s guide to COPPA.
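To give a sense of what even the simplest version involves, here is a minimal, hypothetical sketch of rule-based PII detection and redaction for chat text in Python. Every pattern, substitution, and function name is illustrative only; a production filter would also need image analysis, machine learning, and far deeper handling of deliberate obfuscation.

```python
import re

# Minimal sketch of a rule-based PII detector for chat text.
# All patterns and names are illustrative; a real filter needs much more.

# Common spelled-out substitutions users try in order to slip past naive filters.
SUBSTITUTIONS = {
    " at ": "@",
    " dot ": ".",
    "(at)": "@",
    "(dot)": ".",
}

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"(?:\+?\d[\s.-]?){7,15}"),
    "street_address": re.compile(
        r"\b\d{1,5}\s+\w+\s+(?:street|st|avenue|ave|road|rd)\b", re.IGNORECASE
    ),
}


def normalize(text: str) -> str:
    """Undo simple spelled-out obfuscations before pattern matching."""
    lowered = text.lower()
    for spoken, symbol in SUBSTITUTIONS.items():
        lowered = lowered.replace(spoken, symbol)
    return lowered


def find_pii(text: str) -> dict:
    """Return a mapping of PII type to the snippets detected in the text."""
    normalized = normalize(text)
    return {
        label: pattern.findall(normalized)
        for label, pattern in PII_PATTERNS.items()
        if pattern.search(normalized)
    }


def redact(text: str) -> str:
    """Replace anything that looks like PII with a placeholder before display."""
    normalized = normalize(text)
    for pattern in PII_PATTERNS.values():
        normalized = pattern.sub("[removed]", normalized)
    return normalized


if __name__ == "__main__":
    message = "my email is sam dot jones at gmail dot com and my number is 555-123-4567"
    print(find_pii(message))
    print(redact(message))
```

Even this toy version shows why the problem is hard: the moment you normalize “at” and “dot,” determined users will find the next workaround, which is why subversion detection has to keep evolving.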

High-risk content

We’re all familiar with the dangers of the internet. Abusive comments, bullying, harassment, sexual language and images, and illegal content like CSAM (child sexual abuse material) all put children at risk online.

When you include social features and user-generated content like chat, messaging, forums, and images in your edtech product, it’s important that you include a mechanism to proactively filter and moderate dangerous content. And if it’s built into your platform from the design phase onwards, you can avoid the challenges of implementing safety features at the last minute.
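As a rough illustration (not a prescription for building the filter yourself, as the next point explains), here is a hypothetical sketch of where that mechanism might sit in a posting flow: every piece of user-generated text passes through a single moderation hook before anyone else sees it. The classify() placeholder stands in for whichever engine you use, whether built in-house or licensed.

```python
from dataclasses import dataclass

# Sketch of a moderation hook wired into the posting flow from the design phase.
# classify() is a stand-in for a real filtering engine; the names are illustrative.


@dataclass
class Verdict:
    action: str      # "publish", "reject", or "review"
    reason: str = ""


def classify(text: str) -> Verdict:
    """Placeholder classifier; swap in a real filtering service here."""
    blocked_terms = {"example-slur", "example-threat"}  # illustrative only
    if any(term in text.lower() for term in blocked_terms):
        return Verdict("reject", "matched blocked term")
    return Verdict("publish")


def submit_post(user_id: str, text: str, publish, send_to_review) -> None:
    """Single entry point for all user-generated text on the platform."""
    verdict = classify(text)
    if verdict.action == "publish":
        publish(user_id, text)
    elif verdict.action == "review":
        send_to_review(user_id, text, verdict.reason)
    else:
        # Rejected content is never shown; log it for moderator visibility.
        print(f"blocked post from {user_id}: {verdict.reason}")


if __name__ == "__main__":
    submit_post(
        "student-42",
        "great class today!",
        publish=lambda uid, txt: print(f"published for {uid}: {txt}"),
        send_to_review=lambda uid, txt, why: print(f"queued {uid} for review: {why}"),
    )
```

The design point is the single entry point: if all user-generated content already flows through one hook, you can upgrade or replace the engine behind it without retrofitting safety into the product later.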

Much like ensuring privacy and compliance, we don’t recommend that you do it all yourself. Third-party software will save you loads of development time (and money) by providing battle-tested profanity filtering, subversion detection, context-based content triaging, and flexible options for multiple languages and demographics.

After all, you’re the education and technology expert. Your focus should be on creating the best possible experience for your users, and building content that is engaging, sticky, and informative.

Edtech is a growing industry with endless opportunities for new and established brands to make their mark with innovative products. Techniques for engaging students are changing every day, with new approaches like BYOD (Bring Your Own Device) and after-school esports leagues (check out these teaching resources) entering the classroom.

Online games, virtual worlds, and social networks have spent years figuring out how to keep children safe on their platforms. So why not take a page from their playbook, and make Privacy by Design and Safety by Design two of the pillars of your edtech platform?

3 Myths About “Toxic” Gaming Communities: What We Learned at LA Games Conference

Two months after the successful Fair Play Alliance summit at GDC 2018, the LA Games Conference hosted the panel “Fighting Toxicity in Gaming.” Two Fair Play Alliance members were on hand to discuss the role technology plays in addressing disruptive player behavior.

Moderated by Dewey Hammond (Vice President of Games Research at Magid Advisors), the panel featured J Goldberg (Head of Community at Daybreak Game Company), Kat Lo (online moderation researcher and PhD student at the University of California), and Carlos Figueiredo (Director of Community Trust & Safety at Two Hat Security). The panelists tackled a series of challenging questions about growing healthy gaming communities, opening with a frank discussion about misconceptions in the world of gaming.

We spoke with Carlos Figueiredo, co-founder of the Fair Play Alliance and an expert in digital citizenship and moderation. He shared the top three misconceptions about toxicity in video games, beginning with that tricky word, “toxic.”

Myth 1: How we describe toxicity doesn’t need to change

The gaming industry has been using words like “toxicity” and “trolling” for years now. The terms began as a necessity: catchy phrases like “toxic gamer culture” and “don’t feed the trolls” became shorthand for player behavior that anyone in the community could reference.

Language, and our understanding of how the internet shapes behavior, has naturally evolved over time. Terms like “toxic players” and “internet trolls” may be ingrained in our culture, but they are no longer sufficient for describing variable, nuanced human behavior. Humans, in all our complexity, cannot be categorized using such broad terms.

In fact, a 2017 Stanford University study showed that, given the right set of circumstances (including the mood and context of a conversation), anyone can become a “troll.”

“When I say disruptive behavior, I’m referencing what we would normally refer to as toxicity, which is a very broad term,” Carlos says. “Disruptive behavior assumes different shapes. It’s not just language, although abuse and harassment are often the first things we think of.”

As its name suggests, disruptive behavior can also include cheating, griefing, and deliberately leaving a match early.

“Human behavior is complex,” says Carlos. “Think of it — we’re putting people from different cultures, with different expectations, together in games. They’ve never seen each other, and for fifteen minutes in a match, we’re expecting everything to go well. But what are we doing as an industry to facilitate healthy interactions?”

The first step in fostering healthier online gaming communities? Challenge, define, and refine the words we use and their meaning.

From left to right: J Goldberg, Dewey Hammond, Kat Lo, and Carlos Figueiredo. Image credit: Ferriz, V. (2018, May 8). LA Games Conference [Digital image]. Retrieved from https://www.facebook.com/DigitalMediaWire/

Myth 2: Anonymity is the problem

Countless articles have been written about the dangers of anonymous apps. And while it’s true that anonymity can be dangerous — the toxic (that word again!) side of online disinhibition — it can also be benign. As John Suler, Ph.D. writes in The Online Disinhibition Effect, “Sometimes people share very personal things about themselves [online]. They reveal secret emotions, fears, wishes [and] show unusual acts of kindness and generosity, sometimes going out of their way to help others.”

So what’s a bigger cause of disruptive player behavior, if not users hiding behind the mask of anonymity? “The lack of social consequences,” says Carlos.

“There are different social protocols when we are interacting face to face, and we know very well that our actions have tangible consequences,” he explains. “That’s not always the case online. We’re still figuring out the relatively new virtual spaces and how we are socializing within them.”

“Anonymity alone,” he continues, “is not the biggest driver of disruptive behavior.”

Kat Lo and Carlos Figueiredo. Image credit: Ferriz, V. (2018, May 8). LA Games Conference [Digital image]. Retrieved from https://www.facebook.com/DigitalMediaWire/

Myth 3: AI alone will be the savior of player behavior

Disruptive behavior won’t be solved by algorithms or humans alone. Instead, as Carlos says, “A machine/human symbiosis that leverages the best of both can make all the difference.”

Using AI, gaming communities can proactively filter and triage the obviously unhealthy text or images, leaving humans to review the in-between content that requires empathy and human understanding to make a decision.
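Here is a minimal sketch of what that triage might look like in code, assuming a classifier that returns a probability that a message is abusive. The thresholds and names are illustrative; in practice they are tuned per community and per content type.

```python
# Confidence-band triage: the machine handles the obvious cases at either end,
# and everything ambiguous is routed to a human moderator.

AUTO_REMOVE_THRESHOLD = 0.95   # clearly unhealthy: filter automatically
AUTO_ALLOW_THRESHOLD = 0.15    # clearly fine: publish without review


def triage(message: str, abuse_probability: float) -> str:
    """Route a message based on how confident the model is."""
    if abuse_probability >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if abuse_probability <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"      # the in-between content needs human judgment


if __name__ == "__main__":
    examples = [
        ("gg, well played", 0.02),
        ("you are worthless", 0.98),
        ("nobody wants you on this team", 0.60),
    ]
    for text, score in examples:
        print(f"{score:.2f} -> {triage(text, score)}")
```

The middle band is the whole point: clearly healthy and clearly harmful content is handled automatically, and everything ambiguous lands in front of a person.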

He also advocates for having a highly skilled and well-trained team of moderators who are well-versed in the game, understand the community, and have access to the right tools to do their jobs.

Having humans review content periodically is imperative, Carlos says. “You can’t just use automation and expect it to run blind. Models have biases if you don’t adjust and have eyes on them.”

He adds, “It’s important to remember that teams will have biases as well, and will require a lot of conscious effort to consider diverse views and motivations, overcome limitations in their thinking, and apply checks and balances along the way.”
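One hypothetical way to keep those “eyes on” the models is to sample a small fraction of automated decisions into an audit queue for moderators, so bias and drift surface instead of running blind. The sampling rate and names below are illustrative.

```python
import random

# Sketch of periodic human review of automated moderation decisions.

AUDIT_RATE = 0.02   # review roughly 2% of automated decisions (illustrative)


def record_decision(message: str, action: str, audit_queue: list) -> None:
    """Log an automated decision and occasionally flag it for human audit."""
    if random.random() < AUDIT_RATE:
        audit_queue.append({"message": message, "automated_action": action})


if __name__ == "__main__":
    queue = []
    for i in range(1000):
        record_decision(f"message {i}", "auto_allow", queue)
    print(f"{len(queue)} of 1000 automated decisions sampled for human review")
```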

Carlos believes that the answer to unhealthy communities is probably a lot more human than we realize. As he says, making a difference in gaming communities comes down to “People who care about games, care about players, care about the community, and are prepared to make the hard decisions.”

“We won’t always get things right,” he continues, “or have all the answers, but we can work really hard towards better online communities when we truly care about them. The answers are as human as they are technological.”

 

Want to learn more about deterring disruptive player behavior? Sign up for the Two Hat Security newsletter and receive monthly community management tips and tricks, invites to exclusive workshops, moderation best practices, and more!
