Facebook needs to stop relying on us to police its content. It's time its AI took responsibility

Facebook Live crimes might be rare, but they should be an important target of AI research to help law enforcement and prevent sensationalism

It appears we are on an accelerating descent into a Hunger Games-style entertainment dystopia. In the past 12 months, Facebook Live has been used to stream Larossi Abballa’s 12-minute declaration of allegiance to Isis, made after he murdered a French police officer and the officer's partner (filmed while the couple’s three-year-old sat in the background); the gang rape of a 15-year-old girl in Chicago; and the torture of a disabled teenager held captive.

Earlier this month, the service was used by 37-year-old Steve Stephens to publish footage of himself shooting dead 74-year-old grandfather Robert Godwin Sr on a residential street in Cleveland, Ohio. Godwin, a father of ten and grandfather to 14, had been collecting cans from the street when the random attack happened. The video remained on the site for at least two hours before it was removed, Facebook says, and was reportedly shared and watched 1.6 million times. (Stephens has since killed himself following a police chase.) More recently, a Thai man filmed himself killing his baby daughter on Facebook Live before taking his own life.


But it’s okay. All is in hand. Facebook is reviewing its "reporting flows".

“As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible,” VP of Global Operations, Justin Osofsky, said in a blog post. He does add, however, that the video remained on the site for so long because the public didn't report it sooner – seemingly by way of defence against the backlash over Live being used to broadcast the horrific crime.


Facebook’s algorithms like to predict when you may be thinking of getting married or looking for a job, so the company can show appropriate ads. They analyse posts and comments to identify people who may be considering self-harm, to help them find support. They have also been accused of showing bias in trending topics and politics. In one internal poll, Facebook employees chose to ask Mark Zuckerberg at a weekly Q&A: “What responsibility does Facebook have to help prevent President Trump in 2017?”, and Sheryl Sandberg has previously asserted that “Facebook would never try to control elections”.

In fact, the company is quick to stress that its AI-led algorithms are used for commercial purposes, not political, legal or moral ones. This allows Facebook to put the onus on the public to police the site on its behalf – reporting inappropriate, troubling or even blatantly illegal content – while devolving a certain level of responsibility away from its own algorithms.

That's not to say they're not being used at all in the reporting process. AI “prevents videos from being reshared in their entirety”, explained Osofsky. Facebook uses automation to recognise duplicate reports, direct reports to moderators with the appropriate expertise, identify nude or pornographic content that has been removed before, and prevent spam attacks. Elsewhere, Facebook uses PhotoDNA technology to automatically identify known child abuse content against a global shared database overseen by the authorities. It stops short, however, of using its own technology to identify new content.
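To illustrate the general idea – and only the general idea, since PhotoDNA is proprietary and Facebook has not published its matching pipeline – here is a minimal Python sketch of how re-uploads of previously removed material can be caught by comparing content hashes against a shared database. The toy 64-bit average hash, the distance threshold and all function names are hypothetical stand-ins, not the actual technology.

```python
# Illustrative sketch only: matching uploads against a database of hashes of
# previously removed material. Real systems use far more robust perceptual
# hashes (e.g. PhotoDNA, which is proprietary); everything here is a stand-in.

import hashlib
from typing import Iterable


def exact_fingerprint(data: bytes) -> str:
    """Cryptographic hash: catches byte-for-byte re-uploads of removed files."""
    return hashlib.sha256(data).hexdigest()


def toy_perceptual_hash(pixels: Iterable[int]) -> int:
    """Toy 64-bit average hash over an 8x8 grayscale thumbnail.

    Only demonstrates the matching step; a production hash must survive
    resizing, re-encoding and small edits.
    """
    pixels = list(pixels)
    assert len(pixels) == 64, "expects an 8x8 grayscale thumbnail"
    mean = sum(pixels) / 64
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def matches_known_content(candidate: int, known_hashes: set, max_distance: int = 5) -> bool:
    """Flag an upload whose hash is 'close enough' to any previously removed item."""
    return any(hamming_distance(candidate, known) <= max_distance for known in known_hashes)


if __name__ == "__main__":
    # Hypothetical shared database of hashes of material removed before.
    known = {toy_perceptual_hash([10] * 32 + [200] * 32)}

    # A slightly altered re-upload (a few pixels changed) still matches.
    reupload = toy_perceptual_hash([12] * 31 + [10] + [198] * 32)
    print(matches_known_content(reupload, known))  # True -> block or queue for review
```

The point of the sketch is the asymmetry the article describes: matching against a database of known material is relatively cheap and automatable, whereas recognising new violating content requires the kind of judgement Facebook currently leaves to human reporters and reviewers.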

In February, founder Mark Zuckerberg wrote a 5,500-word open letter explaining that the development of AI was a key focus of the business, and that Facebook was "researching systems that can look at photos and videos to flag content our team should review". He continued that the software was "still very early in development", but that it already generates "around one-third of all reports to the team that reviews content for our community".
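Zuckerberg's letter gives no detail on how those systems work, so the following is purely an illustrative sketch of the pattern he describes: a model scores content and anything above a threshold is queued for human review rather than removed automatically. The scoring function, threshold and field names are all invented for the example.

```python
# Illustrative only: Facebook has not published how its flagging systems work.
# A model scores uploaded content; items above a threshold are routed to a
# human review queue, with higher-scoring items reviewed first.

from dataclasses import dataclass, field
from queue import PriorityQueue


@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower value = reviewed sooner
    content_id: str = field(compare=False)
    reason: str = field(compare=False)


def score_content(content_id: str, features: dict) -> float:
    """Stand-in for a trained model; returns probability the post violates policy."""
    # A real system would run image, video and text models here.
    return features.get("violence_signal", 0.0)


def triage(content_id: str, features: dict,
           review_queue: PriorityQueue, threshold: float = 0.7) -> None:
    score = score_content(content_id, features)
    if score >= threshold:
        # Higher scores get lower priority numbers, so humans see them first.
        review_queue.put(ReviewItem(priority=1.0 - score,
                                    content_id=content_id,
                                    reason=f"model score {score:.2f}"))


if __name__ == "__main__":
    queue: PriorityQueue = PriorityQueue()
    triage("video_123", {"violence_signal": 0.93}, queue)
    triage("photo_456", {"violence_signal": 0.12}, queue)  # below threshold, not flagged
    while not queue.empty():
        item = queue.get()
        print(f"Send to human reviewer: {item.content_id} ({item.reason})")
```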


Zuckerberg’s letter and assurances over AI were partly a reaction to the increasing scrutiny Facebook had come under, along with Google-owned YouTube and Twitter, for the continued proliferation of terrorist propaganda on their services. He said at the time that AI would ultimately help Facebook differentiate between news and propaganda, and help discover the identity of terrorists. This is where the waters get a little muddied.

The social network has gone to great pains to make it clear it doesn't, and never will, regard itself as a publisher or media company. This is despite it commissioning content, arguing that offensive material on its site (footage of beheadings, for example) is "newsworthy", and declining to take down posts it believes are in the "public interest" – arguments made by publishers across the globe on an almost daily basis.

In his blog post, Osofsky reiterated this. “People are still able to share portions of the videos in order to condemn them or for public awareness, as many news outlets are doing in reporting the story online and on television.” He directly compares Facebook’s decision and position to that of a news outlet, by way of justifying its handling of the content – six months after Zuckerberg explicitly stated: “We are a tech company, not a media company.” Yet no publisher in the world has the same reach and influence.

Legally speaking, Facebook is not regarded as a publisher in the UK and cannot be held liable for content posted by users. However, the Times recently published an article claiming that Facebook’s refusal to remove illegal content from its site – including child abuse footage and terrorist propaganda – could result in criminal prosecution. Yet the content in question, which included a video of a beheading and one of a sexual assault on a child, reportedly did not breach the site's community standards when reviewed. The BBC uncovered similar recent failings by Facebook to remove child sexual abuse content.


Unless pushed, there is little incentive for Facebook to pour AI research into these areas while it maintains that it is not responsible for what others post. The content it is being called out on – hate speech, terrorist propaganda, child abuse imagery, and even Facebook Live crimes – also represents a small fraction of the whole. The network handles daily reports from a community of more than 1.7 billion users, and the volume of data involved is staggering. Those staggering numbers are also what make Facebook its money, what drive its long-term plans and what ultimately help the social network’s teams decide where to allocate their funds and research time.

To the public, it seems obvious that Facebook’s data analytics and machine learning powers should be targeted at criminal content on the site. Its very presence is sensationalist and potentially damaging. Academics have repeatedly urged the media to curtail the manner in which they report terrorist activity. Michael Jetter, for instance, a professor at the School of Economics and Finance at Universidad EAFIT in Medellín, Colombia, looked at more than 60,000 terrorist attacks between 1970 and 2012 in a 2015 study reported by the Guardian. He found links between the number of articles dedicated to a terror incident and the number of follow-up attacks. At a basic level, he argues, terrorists seek out media attention, and that media attention leads to more attacks.

“What this article is suggesting is that we may need to rethink the sensationalist coverage of terrorism and stop providing terrorists with a free media platform,” he said. “Media coverage of other events that are causing more harm in the world should not be neglected at the expense of media marathons discussing the cruelties of terrorists.”

There is an uncomfortable reality underlying all these issues. Horrific acts, privacy intrusions and revenge porn drive traffic. Facebook and sites like it are not actively encouraging content of this kind, but nor are they actively disrupting it until absolutely necessary. Facebook has taken responsibility for the proliferation of fake news on its site, introducing new tools to weed it out, but as with its battle against hate speech, this came only after mounting pressure from politicians across the globe.

Take the example of the harassment of women by means of publishing private photos. When this happened most recently to Emma Watson, Amanda Seyfried and other high-profile women in March, WIRED came to cover the story the morning after it broke in the US. In that time, the story had become global front-page news and Watson’s legal team had taken action. Yet an incredibly quick Twitter search turned up at least one user posting graphic nude videos and photographs of the targeted individuals – an account multiple news outlets had linked to. I reported the account for harassment – the only option available, since nudity, including pornography, is allowed in Twitter feeds, just not in profiles – and didn't hear back. In prior correspondence, Twitter told me that if anyone makes a report and receives a case number, its team will review the material and accounts against its rules. I’ve never received any correspondence or case number following any report I've made about content on Twitter. In that context, it’s not hard to see why action can be so slow.


The answer is in the companies’ own rulebooks. Like Facebook’s ‘community standards’, Twitter prioritises its own terms, which allow nudity. Unless it is provided with a legal reason to remove stolen content, it is not obliged to act when a report is made. Facebook’s terms stipulate that nudity is banned, so famous artworks, a harrowing Pulitzer Prize-winning wartime photograph, and everyday images of breastfeeding mothers doing what is biologically necessary are removed without discussion. But give it a bloody murder, and it will have to judge the footage for public interest and edit it for the sake of keeping up with the internet Joneses.

There are many authoritative and valid arguments for Facebook prioritising its commercial needs first, sticking to its own internal policies, not agreeing to every government or public request, and continuing its groundbreaking work in AI (which, through various open-source offerings, will benefit many beyond the walls of the internet giant). Nevertheless, someone probably should have stopped Osofsky from saying the murder of Godwin “has no place on Facebook, and goes against our policies and everything we stand for” before adding: “people are still able to share portions of the videos in order to condemn them or for public awareness.” And someone definitely should have stopped Osofsky from arguing that Facebook “only received a report about the second video - containing the shooting - more than an hour and 45 minutes after it was posted”. WIRED contacted Facebook for comment but the company only provided details on background and would not give an official statement.

For a company estimated to have a future earning power of $1 trillion, and one that revealed this week it is working on tech that could one day let users communicate on the internet just by thinking, it all sounds a lot like "the dog ate my homework".

Update 3/5/17: Following months of criticism, Mark Zuckerberg has announced that Facebook is hiring 3,000 more members of its community operations team to review user reports, bringing the team's total to 7,500.

“Over the last few weeks, we've seen people hurting themselves and others on Facebook – either live or in video posted later," Zuckerberg wrote on his Facebook page. "It's heartbreaking, and I've been reflecting on how we can do better for our community.”

The team looks at all types of reports, including hate speech and child exploitation, two issues the social network has been under pressure to dramatically improve upon. Zuckerberg added that Facebook needs to respond to reports faster, and is building new tools to make reporting simpler and the review process quicker.

This article was originally published by WIRED UK