Facebook's News Feed manipulation was irresponsible

Oh dear, Facebook. This weekend it emerged that the world's largest social network had conducted a study in which the company attempted to manipulate the emotional reaction of users by controlling the content that appeared in their News Feeds.

The public's reaction to the study, which was published in the journal PNAS in conjunction with Cornell University, has been justifiably negative.

It has brought to the fore the question of what we presume our Facebook data is being used for, and what we think it is reasonable for it to be used for. It is not hard to imagine, for example, Facebook conducting tests to fine-tune the way information is presented in our News Feeds in order to improve the Facebook experience. Nor is it hard to imagine Facebook analysing the big data it has access to in order to work out all manner of trends. It is hard to get our heads around the idea, however, that Facebook would purposefully attempt to manipulate our emotional reactions without explicitly telling us, or allowing us to opt out. That is a blatant violation of trust.

The one thing this study serves to remind us all of is the fact that Facebook as a company trades in information, not people. For a data scientist, working for Facebook or having access to the company's vast, rich datasets must be a dream come true. It is easy to imagine how in that situation one might stop being able to distinguish between the data and the people it relates to -- how they might become one and the same. For the most part, this attitude is probably harmless and allows for the formation of complex algorithms and new products that keep the network ticking along nicely. The problem occurs when analysis turns into manipulation; when data scientists start to dabble in psychology.

Playing God with people's emotions is not the same as playing God with data.

In a blog post explaining and apologising for the experiment, Facebook data scientist and one of the co-authors of the study, Adam Kramer, reveals just how out of touch the researchers were. "The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone." Kramer's post explains in better detail than the study itself the measures that were taken when developing the methodology to minimise the impact the research had on users, but it also recognises that "in hindsight, the research benefits of the paper may not have justified all of this anxiety".

Thoughtlessness is no excuse, of course, for the grave and irresponsible mistakes that have been made here. We've all signed up to Facebook's epic data use policy, so it's highly doubtful that the social network has done anything illegal in conducting the study. It will have covered all bases in there somewhere, and as such its legal department can sleep easy. But as this Atlantic article points out, there is a whole different set of rules when it comes to performing psychological experiments, which means that its data scientists should be lying awake at night wrestling with their own consciences.

These rules may not be an entrenched set of laws, but rather an ethical code of conduct that, among other things, clearly explains the ways in which psychologists should demonstrate respect for every person they work with. In particular, these codes -- both in the US and the UK -- lay out explicit guidelines for gaining informed consent.

In the Code of Ethics and Conduct published by the British Psychological Society, it is stated that psychologists should: "Ensure that clients, particularly children and vulnerable adults, are given ample opportunity to understand the nature, purpose, and anticipated consequences of any professional services or research participation, so that they may give informed consent to the extent that their capabilities allow."

They also state that any intentional deception of participants should be avoided unless "additional safeguards required for the preservation of client welfare are specifically considered" and "the nature of the deception is disclosed to clients at the earliest feasible opportunity". Neither in the study nor anywhere else does it appear that welfare safeguards were put in place or that the study was disclosed to its subjects. As far as we know, the participants themselves still don't know they were involved.

The only information in the study that indicates how the unwitting participants were chosen, and gives us a clue as to who they might be, is the statement: "People who viewed Facebook in English were qualified for selection into the experiment." It goes without saying that this equates to a huge pool of people across the globe -- and in each country those participants seem to have been plucked indiscriminately, without any attempt by the social network to identify potential vulnerabilities or anything else about their existing emotional state. To describe this as irresponsible is an understatement.

Others have highlighted some of the general incompetencies of the study's methodology, but they are almost irrelevant when it comes to the question of whether the study should have been allowed to happen at all. In Kramer's blog post he explains: "While we've always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then.

"Those review practices will also incorporate what we've learned from the reaction to this paper."

We can hope that this is the case, and that the internal review process really has improved, but what could really help Facebook -- and what could have helped it avoid ending up in this situation in the first place -- is full disclosure. At a time when Facebook claims to be seeking greater transparency about its relationships with government agencies and law enforcement, there is little doubt that this episode will serve as a backward step in persuading users that it is always being open and honest with them.

When it comes to making ethical decisions about the right and wrong ways to use data, the company needs to do more than review its own internal processes. It needs to look outside itself -- to professional bodies with experience of giving guidance on these matters, and to its own users, who will always have an opinion. Sometimes, as now, that opinion might even amount to a consensus.

This article was originally published by WIRED UK