Facebook's moderation task is impossibly large. It needs help

Moderating the posts of more than two billion people is a colossal job. AI doesn't necessarily provide the solution that Facebook wants – only democracy can do that

In a bright, minimally decorated seminar room in California, an infamous painting is projected on to a wall, kicking off a heady debate about art — what it is, and what its purpose in society should be. The painting is Gustave Courbet’s L'Origine du Monde ("The Origin of the World"), a vivid, lifelike representation of a naked woman, which has shocked and thrilled its viewers since its appearance in 1866.

But this time, the discussion is not taking place in a Stanford art history seminar. We are just down the road, at Facebook’s Content Policy headquarters in Menlo Park. The image is being shown to a group of academics and researchers, myself included, who have trekked to the Bay Area to learn more about the way that Facebook moderates the millions of posts, images and videos that are published every day via its service.

Content moderation is a remarkable feature of the modern online experience, one that remains largely out of view of the average user. A person who uses a social network exclusively to communicate with family members, or to watch cathartic cat clips, may never realise that a sprawling infrastructure and a complex set of rules underpin contemporary social-media sites, or that significant human labour is required to police them.

It’s only when things go wrong, or get political — as they tend to do — that we get a glimpse into the parallel world behind the screen: thousands of moderators, working around the clock at sites around the world, images and text flashing at them constantly, with only a few seconds to decide whether what they are seeing constitutes extreme violence, pornography or hate speech; the automated systems incessantly comparing videos with databases of forbidden content in an attempt to prevent users from posting terrorist material or child-abuse imagery.
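Broadly speaking, that automated matching works by comparing a fingerprint of each upload against a database of content that has already been ruled prohibited. The toy sketch below illustrates only the general idea; the function names and the use of an exact cryptographic hash are my own assumptions, not a description of Facebook's actual system, which relies on perceptual hashes designed to survive re-encoding, cropping and other edits.

```python
# A toy, illustrative sketch only — not Facebook's actual system. It shows the
# general idea of checking uploads against a database of known prohibited
# content. The hash value and function names are invented placeholders, and an
# exact SHA-256 match would miss even trivially edited copies; real systems
# use perceptual hashing for that reason.
import hashlib

# Hypothetical digests of files that have already been ruled prohibited.
KNOWN_PROHIBITED_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6f4c2d1a1e0e2f0b5c3a7e9d8c6b5a4f3",  # placeholder
}


def fingerprint(content: bytes) -> str:
    """Compute a digest of the raw bytes of an uploaded file."""
    return hashlib.sha256(content).hexdigest()


def should_block(content: bytes) -> bool:
    """Return True if the upload exactly matches a known prohibited item."""
    return fingerprint(content) in KNOWN_PROHIBITED_HASHES


if __name__ == "__main__":
    upload = b"bytes of a user upload"
    print("blocked" if should_block(upload) else "published (or queued for human review)")
```

Even this trivial check hints at the scale of the problem: every one of the millions of daily uploads has to be fingerprinted and compared before, or moments after, it goes live.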

For years, academics, civil society organisations and activists have been providing critical analyses of moderation in practice, calling on platform companies such as Facebook to be more transparent and accountable. The rules of the road were vague and unclear (what, precisely, constitutes “nudity”?) and if one had content removed, it was not only difficult to understand why, but one also had little recourse.

At the end of April, Facebook took a major step, clarifying the rules (“Community Standards”) that govern what users can post on its service. For the first time, Facebook has instituted a limited appeals process (or, as some have called it, a “retrial” process). It has also published far more information about what exactly it views as hate speech, sexual content and the other things that may be removed from Facebook.

These changes are a significant step in the right direction and should be applauded. But now that the transparency that civil society has been working towards for years is finally here (and we can thank the mounting global “techlash” for that), we cannot forget how difficult and fundamentally problematic moderation is.

The biggest challenge that was apparent from speaking to Facebook’s Content Policy Team in Menlo Park was what I would call its “operationalisation” problem. Why does Facebook care what art is? After all, it’s a question that has been debated for hundreds of years, and defies an easy definition. Leo Tolstoy, searching for one in 1896, reached for the broadest possible brush, defining it memorably as “one of the conditions of human life”.

People post millions of images on Facebook, so unsurprisingly, there is plenty of artistic content there. In 2011, a French art lover posted L'Origine du Monde, only for Facebook to remove it. He sued the company (the case appears to be ongoing). While art is often provocative, and can be explicit, it is also perceived to have broad social value, so Facebook responded by sensibly carving out an exception to its policies that allows people to publish explicit art if they choose.

But only when I was at Menlo Park did I fully come to understand the staggering difficulties involved in implementing this kind of policy. Facebook has not only had to attempt to define what precisely “art” is; it has had to do so in a way that can be easily “operationalised” by its many thousands of moderators, the majority of whom are contractors in the Global South, operating under major time constraints, with little context about the content they are looking at, and often working in a language that is not their first (or even a language they speak at all — there are thousands of languages in the world, and platforms cannot hope to employ staff or contractors who speak them all, so they are forced to rely on translations).

Given the difficulty of this task, it is not surprising that many of the definitions are unsatisfactory. The new Community Standards provide exemptions for images “of paintings, sculptures and other art that depicts nude figures”, operating on the rule of thumb that “art is handmade”. Of course, there are countless other forms of art, such as photography, which are not handmade, and the qualities that make them artistic may be difficult to pin down. Dozens of other exceptions — what Facebook terms “edge cases” — spring to mind, and at Facebook’s scale, edge cases can occur hundreds, if not thousands, of times a day.

It gets even trickier, because these rules are being actively challenged by significant communities of users around the world who are, in effect, displeased with the norms that platforms have handed down from on high. Breastfeeding mothers, for example, have long mounted concerted campaigns against the repeated removal of their breastfeeding photos on Facebook.

That means that, as it stands, Facebook’s content policy team needs not just to set the rules, but also to decide which forms of contestation are legitimate and deserve to be honoured (breastfeeding mothers), which are not (#freethenipple in most contexts other than breastfeeding), and, most crucially, when the sphere of public acceptability has shifted far enough that the rules should change to keep up with the times. Now repeat this for the whole world, for hundreds, if not thousands, of peoples and cultures that you simply cannot ever know enough about.

As bright, hardworking and committed as these employees are, the question should not just be “Who is Facebook to decide what can be said around the world?” but also “How could it ever hope to actually do so?”

From hate speech to terrorism, the challenge re-emerges on virtually every politicised and controversial issue. Facebook has to define a highly disputed concept, create clear “bright line” rules, and then try to enforce them at scale, for 2.2 billion users in a vast array of languages.

It’s a Sisyphean task. The true irony, as the social media scholar Tarleton Gillespie outlines in his new book Custodians of the Internet, is that the social-media companies never wanted or fully embraced this role, which in many cases contradicts their own ideology and values. As they have expanded into global businesses, platforms have found themselves damned if they do moderate (for instance, being called out for censorship), and damned if they don’t (see the longstanding frustration of Twitter’s users with what they perceive to be insufficient moderation efforts, especially around hate speech and misogyny).

This precarious situation goes to show that, in the long term, the current practice of moderation from on high is unsustainable. What, then, should platforms do? Mark Zuckerberg has repeatedly suggested that automated systems will be the way out of the current predicament, but these claims should be viewed sceptically, given both the complexities of moderation and the difficulties demonstrated by Facebook’s past efforts to incorporate algorithmic flagging.

Platforms should start by demonstrating they have the basic decency to ensure that moderators are well paid, trained and taken care of, but even that will not be enough. It’s time to start thinking about real solutions: systems that allow norms to emerge from the ground up, and that let users give meaningful consent that goes beyond ticking a box accepting the terms of service. Facebook once claimed to “democratise”. Now it needs genuine democracy.

Robert Gorwa is Dahrendorf Scholar at St. Antony’s College, Oxford, and researches platforms as a PhD student in Oxford’s Department of Politics and International Relations

This article was originally published by WIRED UK