
In the fourth installment of our Five Layers of Community Protection blog series, we take a closer look at how Automated User Reporting helps organizations build healthier, safer communities by giving users a way to report content that makes them uncomfortable and by helping moderators prioritize actionable reports.

Layer Four: User Reporting

We previously covered Layer 3 and how to leverage User Reputation to assess context and apply the appropriate filter settings.

Layer 4 focuses on giving community members a way to report content they find harmful or believe should have been blocked. This empowers your community and provides you with valuable feedback on the grey area: the content that requires human expertise, empathy, and insight to be fully assessed.

Act Quickly

Communicating your organization’s zero-tolerance policy on harmful content, and enforcing it quickly, shows your community that you value them and their reports. This builds credibility and trust. Prompt follow-up also discourages violators, and would-be copycats, from acting again. Prompt resolution likewise helps protect a brand’s reputation and increases user engagement and retention.

The Long Backlog

Many community managers and moderators, especially those at large organizations, face a backlog of user reports; smaller organizations may have only one person handling hundreds of them. This backlog can hinder a team’s ability to address the most serious reports in a timely manner. Leveraging AI software alongside human review and insight can help moderators triage priority reports and close out false or non-urgent ones, so they can act quickly on the actionable, most severe reports that rise above the noise.
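
As a rough illustration of what AI-assisted triage could look like, the sketch below auto-closes reports that a classifier scores as very unlikely to be actionable and leaves the rest for human review. The actionabilityScore field, the 0.1 threshold, and the queue shapes are illustrative assumptions, not a description of Community Sift internals.

```typescript
// Illustrative AI-assisted triage sketch: the actionabilityScore field and the
// 0.1 threshold are assumptions, not Community Sift internals.
interface IncomingReport {
  id: string;
  actionabilityScore: number; // 0..1, produced by a hypothetical classifier
}

interface TriageResult {
  forHumanReview: IncomingReport[];
  autoClosed: IncomingReport[];
}

// Auto-close reports the classifier considers very unlikely to be actionable,
// and pass everything else to moderators.
function splitBacklog(reports: IncomingReport[], threshold = 0.1): TriageResult {
  const forHumanReview: IncomingReport[] = [];
  const autoClosed: IncomingReport[] = [];
  for (const report of reports) {
    (report.actionabilityScore < threshold ? autoClosed : forHumanReview).push(report);
  }
  return { forHumanReview, autoClosed };
}
```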

Making it Easy & Accessible for Members to Report Harmful Content

The content that doesn’t quite meet your cut-off threshold for proactive filtering is the harder-to-moderate grey area. This area allows for healthy conflict and debate, and for the development of resiliency, but it is also where critical input from your community helps you better understand their needs. It’s essential to make it easy and straightforward for your community to report content that has made them feel unsafe. That means adding an intuitive reporting flow to your product that lets community members choose from a list of reporting reasons and attach actionable proof to the report they send you.
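
To make that concrete, here is a minimal sketch of what a report submission might look like. The reason categories, field names, and /api/reports endpoint are illustrative assumptions, not the Community Sift API.

```typescript
// Illustrative reporting payload; field names and reason categories are
// assumptions, not an actual Community Sift schema.
type ReportReason =
  | "harassment"
  | "hate_speech"
  | "spam"
  | "self_harm"
  | "other";

interface UserReport {
  reporterId: string;     // who is filing the report
  reportedUserId: string; // who the report is about
  reason: ReportReason;   // chosen from a fixed list, not free text
  messageIds: string[];   // actionable proof: the specific messages in question
  comment?: string;       // optional context from the reporter
  createdAt: string;      // ISO 8601 timestamp
}

// Hypothetical client-side helper that posts the report to your backend.
async function submitReport(report: UserReport): Promise<void> {
  const response = await fetch("/api/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  if (!response.ok) {
    throw new Error(`Report submission failed: ${response.status}`);
  }
}
```

Offering a fixed list of reasons, rather than free text alone, is what makes the reports sortable and triageable downstream.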

Get to the Critical Reports First

A critical insight we consistently see across verticals such as social apps, gaming, and others is that 30% or fewer of all user reports are actionable. Not all reports carry the same importance or urgency, so it’s essential to have a mechanism to sort, triage, and prioritize them.
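
One possible shape for such a mechanism is sketched below: each report gets a priority score so that the most severe and most corroborated reports surface first in the review queue. The severity weights and scoring formula are assumptions for illustration only.

```typescript
// Illustrative prioritization: severity weights and the scoring formula are
// assumptions, not a description of any platform's internal logic.
const SEVERITY_WEIGHT: Record<string, number> = {
  self_harm: 100,
  hate_speech: 80,
  harassment: 60,
  spam: 10,
  other: 5,
};

interface QueuedReport {
  id: string;
  reason: string;
  evidenceCount: number;    // how many messages were attached as proof
  duplicateReports: number; // other users reporting the same content
}

// Higher scores get reviewed first; corroborated reports outrank lone ones.
function priorityScore(report: QueuedReport): number {
  const severity = SEVERITY_WEIGHT[report.reason] ?? 5;
  return severity + 5 * report.duplicateReports + 2 * report.evidenceCount;
}

function triageQueue(reports: QueuedReport[]): QueuedReport[] {
  return [...reports].sort((a, b) => priorityScore(b) - priorityScore(a));
}
```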

Closing The Feedback Loop

Following up on reports lets concerned users know that their reports led to meaningful action and encourages them to continue to report behavior that violates the platform’s community guidelines. This helps to build trust by assuring users that you are doing your due diligence to keep your community a safe and inclusive place for all.

It’s important to thank community members for submitting reports and helping you maintain a healthy and safe community. A simple “thank you” can go a long way in building relationships with the members of your community as it shows that you take violations of community guidelines seriously.
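
One lightweight way to close the loop is to notify the reporter once their report is resolved, as sketched below. The notifyUser helper, resolution types, and message wording are hypothetical, not part of any specific platform API.

```typescript
// Hypothetical follow-up notification; notifyUser and the resolution types
// are assumptions for this sketch.
type Resolution = "action_taken" | "no_violation_found";

async function closeFeedbackLoop(
  reporterId: string,
  resolution: Resolution,
  notifyUser: (userId: string, message: string) => Promise<void>
): Promise<void> {
  const message =
    resolution === "action_taken"
      ? "Thanks for your report. We reviewed it and took action under our community guidelines."
      : "Thanks for your report. We reviewed it and didn't find a guideline violation, but we appreciate you looking out for the community.";
  await notifyUser(reporterId, message);
}
```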

Improving Scalability + Sustainability

Successfully applying Layer 2: Classify & Filter means that the majority of harmful content is filtered out in accordance with a platform’s community guidelines. When the worst of the worst is filtered out, there is less negative behavior for community members to report. This directly impacts Layer 4, leading to up to an 88% reduction* in the number of user-generated reports. That reduction makes the volume of content moderators need to review far more scalable and sustainable, and helps decrease the likelihood of burnout.

Empowering Users

Optimizing User Reporting operations empowers community members to take an active role in their safety and the community’s health and helps to build trust with the platform. Leveraging AI helps community managers reduce their workload and prioritize high-risk and sensitive reports. In turn, responding quickly to urgent reports and closing the feedback loop builds trust and credibility.

To find out how else you can better build healthy and thriving communities, read the rest of our Five Layers of Community Protection blog series. You can also request a live demo of our Community Sift platform now to learn how Automated User Reporting can help you protect your online community, address user reports faster and improve your users’ online experiences.

Learn about Layer 5 here.

*Two Hat Analysis 2020
