Should the NHS share patient data with Google's DeepMind?


In giving Google access to the healthcare data of nearly 1.6 million patients, the NHS has relied on a loophole around "implied consent": explicit patient consent is not required when data is shared for direct care. The great unknown is how far Google will stretch the definition of implied consent to fit its purposes.

There's a sense of inevitability when it comes to patient privacy and the use of innovative technologies such as AI in healthcare. We also know that in order to realise the full potential of an information society, personal and confidential data must be used and must also — at times — be shared.

The need to share data across organisational boundaries, whether to help a specific patient or to design and pay for a service, is undeniable. But what is intriguing, and perhaps makes all of us uncomfortable about the relationship between Google-owned DeepMind and the NHS, is the amount of personal data being shared without patients' knowledge.

The data-sharing agreement between DeepMind and the Royal Free NHS Trust gives DeepMind access to a wide range of healthcare data. This includes information about people who are HIV-positive as well as details of drug overdoses and abortions, including up to five years of historical data. DeepMind would also have access to logs of day-to-day hospital activities, such as records of the location and status of patients.

It's not just a question of what could go wrong when AI is used to analyse data from millions of patients. What has not been discussed, or at least made clear, is whether this move will benefit patients more than Google. DeepMind's work is shrouded in secrecy, and that secrecy could give Google the means to build a monopoly over healthcare data analytics.

To me, however, a far more important question has been overlooked in the debate: did the patients sign up for this? The episode has again highlighted flaws in how the NHS obtains "consent". It is fundamental to healthcare that the person receiving care or treatment agrees to receive it; consent is a key concept in the provision of healthcare, ethically, legally and practically.

Right now, medical confidentiality is under serious threat, and it is not a threat that will simply go away if ignored. There is a contradiction here: the NHS promised that we, as patients, would be at the heart of healthcare decisions (it calls this "patient empowerment"), yet this agreement shows that promise to be empty. The opt-out process is unclear, most likely because the NHS doesn't want us to opt out. It is certainly not the choice and control we were promised.

A nurse uses a tablet to order medicine at The Queen Elizabeth Hospital, Birmingham. Christopher Furlong / Getty

The Royal Free NHS Trust says that it "provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only". But is that true? The NHS makes information-sharing decisions on our behalf, and this paternalistic behaviour has long gone unexamined.

There is a big distinction between sharing personal, confidential data for direct care purposes and sharing it for indirect care purposes. Direct care purposes are those of a clinician or care worker with a legitimate objective: treating a particular patient. Sharing personal confidential data for these purposes directly benefits the individual. Indirect care purposes are those that do not serve the primary treatment or care of an individual; they include analysing information for research, commissioning or paying service providers, and auditing services.

Demis Hassabis, co-founder of Google-owned DeepMind, speaks during the company's AI challenge against Go world champion Lee Se-dol in Seoul, South Korea, in March 2016. Jeon Heon-Kyun-Pool/Getty Images

But this distinction has been blurred, in part because of a lack of clarity in the law, and in part because those who currently rely on implied consent for direct care are stretching the boundaries to make indirect care purposes fit the definition. It is this level of secrecy around the project that is raising questions about Google's and DeepMind's motives.

People are happy for their data to be used to improve their own care and the care of others, but less happy for it to be used for commercial purposes. There are also concerns over who gets to see the data, whether patients can understand it, and from where it can be accessed. The Caldicott Report highlighted the illegality of some practices, such as commissioning, which has led to various pieces of legislation (notably s.251 of the NHS Act 2006) but no overarching solution.

We are in this mess because there are too many laws, regulations and professional obligations in the area of medical information sharing. The level of complexity is daunting and confusing. Not only must healthcare professionals make sense of all of them; the rules are not communicated to patients either. The Data Protection Act (DPA) protects the rights of the data subject (the patient) and places a series of obligations on the data controller (the NHS). Yet there is no evidence that these obligations are being met consistently across the health service.

Then there's the Human Rights Act, which protects the privacy of the individual, though how it interacts with the DPA is unclear. Which act would an individual invoke first? How would they know these legal instruments were available to them? Would there need to be a public breach before anyone knew either law could be invoked? Layered on top of this are myriad healthcare-specific laws stipulating that a health professional must not share information. Add to that mix the discretionary legal gateways under which professionals are permitted to share data when they judge it to be in the public interest.

That subjectivity only widens once we layer on the common law duty of confidentiality, which rests on case law and on the doctor-patient relationship of trust, a relationship some might argue is already eroding amid the current confusion.

Consent is not a simple issue. Organisational and individual challenges remain, and confusion over how consent works is a major barrier. We should revisit the concept of consent and how it is obtained for indirect care. Beyond that, a more streamlined regulatory approach is required, considered alongside developments such as the new EU General Data Protection Regulation, which gives explicit, informed consent a more prominent role.

Clearly, the future of healthcare depends in part on information sharing. In principle I am not opposed to the use of innovative technologies in healthcare, but when it is for the benefit of one private company, without the say of patients, we are starting down a worrying road.

Subhajit Basu is associate professor at the School of Law, University of Leeds. His book, Privacy and Healthcare Data: 'Choice of Control' to 'Choice' and 'Control', co-authored with Christina Munns, is published by Ashgate.

This article was originally published by WIRED UK