Theresa May didn’t want to talk about Brexit at the World Economic Forum in Davos. Or, really, it seemed, about anything at all. So, her team decided, she would discuss artificial intelligence.
On the face of it, this is an oddity. Part of the job of prime minister is sounding confident on topics where the entire sum of your knowledge is the briefing you were handed that morning. But AI is a field even experts struggle to comprehend, with dizzying, fraught implications for every facet of political and economic life. Surely it merited a major policy address, not some inconsequential remarks to a two-thirds-full hall.
Of course, these days, you don’t need to be an expert to have an opinion on AI. Everyone has one! Literally, in the case of senior politicians, one opinion, which they pass around, presumably on some sort of timeshare. Three months ago, Tony Blair issued a report saying he had the opinion. Then Nick Clegg decided he had it too. And it was this air of emergent consensus that allowed a weak prime minister to tiptoe onto the territory, with all the faux-decisiveness of a LinkedIn thought leader announcing, “I’m not sure about bitcoin, but the blockchain is an extremely important innovation.”
The opinion – to paraphrase the speech – is this. “AI can do great things, but we must be sure it’s safe and ethical.”
If at this point you’re feeling underwhelmed, that’s understandable. There is, on the face of it, nothing objectionable or even unusual about that statement. It’s vapid, a truism. But while that emptiness may seem to render it harmless, in fact it makes it extremely powerful.
From the outside, politics often resembles a combat sport, in which change happens via a series of gladiatorial trials, until the last person standing is declared the winner. Yet although elections may loosely follow that pattern, when it comes to policy, conflict is more likely to create stagnation than action. If you want to push through lasting reforms, then the best way to do so is not through argument, but under the cover of vacuous statements with apparently unquestioning support.
Thanks to the consensus among centrist politicians, and the Labour Party’s silence, reforms are already taking place in the governance of AI. These mostly concern what the government likes to call “business”. Echoing the chancellor at the 2017 budget, in Davos May trumpeted the fact that “we have seen a new AI startup created in the UK every week for the last three years”, vowing to “make the UK a world leader in innovation-friendly regulation”.
To justify this stance, May adopted the vacant rhetoric of a TED talk, asking her audience to “imagine a world” of self-driving cars and global disease prediction. However, the Davos elite would have understood her true meaning very clearly. In this context, “innovation in AI” stands for a cluster of strategic priorities, which feed what is now an international race to attract and develop talent in AI.
Some geopolitical background is necessary here. In July 2017, China declared its goal to become the global leader in AI by 2030. Two months later, Vladimir Putin announced that the nation that leads in AI “will be the ruler of the world.” This set alarm bells ringing among Western nations: they needed to up their game if they were to compete.
Add to this the firm yet unfounded conviction that AI represents the next big source of economic growth, and you have the perfect recipe for a high-profile government strategy. AI is now a key element of May’s “industrial strategy”, the Keynesian plan to foster growth through central planning.
But what do AI firms require? Data, and lots of it. In China and Russia, there are no limits on the use of personal information: in fact, it’s actively encouraged, as it forms part of the state’s security apparatus. So, to compete, the UK government needs to free up data wherever possible.
Naturally, this creates concerns. So what safeguards are being put in place? In her speech, May echoed the empty phrases of the emergent consensus, noting that “personal data should be respected and used appropriately” and that algorithms should not “perpetuate the human biases of their developers”.
The accompanying Digital Charter (which is not, as its name may suggest, a Charter, but instead “a rolling programme of work to agree norms and rules”) added that “when decisions are made based on data, these [should be] fair and appropriately transparent”.
But algorithms, like all human systems, are bound to repeat not only the biases of their creators, but also their own, inherent biases – such as their inbuilt tendency to favour that which can be measured over that which cannot.
Nor is there any agreement among data scientists about what level of transparency is appropriate for AI systems, given that many of the most effective cannot, by their nature, be interrogated. Even their creators do not understand why they arrive at the answers that they do.
Such is the level of concern over this issue that the AI Now Institute recommended that this technology be barred from use in public services such as criminal justice, health care, welfare and education.
Yet, rather than treading carefully, or even pausing its roll-out, the government is rushing to implement the technology, declaring its intention to “lead by example” and employ AI firms as government contractors. The UK’s Behavioural Insights Team is testing machine learning systems that rate schools and GPs.
So complete is the conviction that everything will turn out happily that the government recently announced the creation of a new body called the Centre for Data Ethics and Innovation, thereby ignoring every lesson of recent history, which tells us that innovation and ethics will inevitably end up being at odds. This is not so much making promises that cannot be kept, as making promises that have already been broken.
Of course, even as it frees up space for AI startups, the government is ramping up its tough talk on messaging firms and social media networks, asking them to provide what The Register aptly labelled “magic crypto backdoors”, so it can access encrypted messages on demand. This is justified by the need to, as May put it, “keep people safe online, especially children”.
So we see the development of a British version of the Chinese model, where content is policed – “No-one wants to be known as ‘the terrorists’ platform’ or the first choice app for paedophiles,” May warned, threateningly – while data and metadata flow freely thanks to increased “data portability”. Individual firms may rise and fall, but the market as a whole grows and expands.
The new Data Protection Bill, designed to mirror the European Union’s General Data Protection Regulation, seems designed to facilitate this brand of surveillance capitalism. As Jeni Tennison, CEO of the Open Data Institute, and a long-time advocate of open data, wrote recently:
“The data portability right will undoubtedly lead to the development of personal data markets. People will be encouraged to port data about themselves into personal data brokers, with the promise of control over use and a financial return when it is sold on. This in turn may lead to a future where access to data is determined by who can pay for it, accelerating knowledge, power and financial inequalities.”
Is this right? Is it even what the Bill intended? If the government knows, it is not saying. Instead, there are only bland pronouncements about the promise of innovation that brush aside difficult questions, elide compromises, and obscure the trade-offs made in the name of the national good.
This article was originally published by WIRED UK