The House of Lords has urged the government to get a grip on algorithmic bias and stop large technology companies from monopolising control of data in its wide-ranging report into the use and development of artificial intelligence in the UK.
After nearly ten months of collecting evidence from more than 200 witnesses, including government officials, academics and companies, the Select Committee on Artificial Intelligence called on the government to use the Competition and Markets Authority to stop large technology companies operating in the UK from monopolising the control of data. “We must make sure that [UK companies] do have access [to datasets] and it isn't all stitched up by the big five, or whoever it might be,” says the chair of the committee, Lord Timothy Clement-Jones – pointing the finger at Amazon, Facebook, Google, Twitter and Microsoft.
The report also places strong emphasis on the UK’s role as an ethical leader in the AI world, calling for the creation of tools that can be used to identify algorithmic bias and make it easier for people to understand how AI systems reach their decisions. From an economic perspective, this makes a lot of sense, says Nick Srnicek, a lecturer in digital economy at King's College London. “There’s a real challenge for the UK to be able to keep up with the US and China in terms of investment in AI,” he says. “Instead, you have to think about cheaper ways to take leadership, and the ethical part could be really useful there.”
In extreme cases, Clement-Jones says regulators should be prepared to reject an algorithm altogether if auditors cannot work out how it reaches its decisions. “We do think there could be circumstances where the decision that is made with the aid of an algorithm is so important there may be circumstances where you may insist on that level of explainability or intelligibility from the outset,” he says. These rules would apply to any algorithm used to make decisions about UK citizens, not just algorithms developed within the UK.
The committee also recommends the responsibility of regulating AI systems should fall to existing regulators such as Ofcom, Ofgem and the Information Commissioner’s Office (ICO). Crucially, however, it doesn’t call for more funding of these bodies or set out how they should be equipped to carry out their new responsibilities. In the wake of the Cambridge Analytica scandal, the ICO was forced to wait four days before it received a court warrant to search the firm’s offices for evidence that it had retained Facebook data improperly acquired from the researcher Alexander Kogan.
The report namechecks a handful of newly-created government bodies, including the Centre for Data Ethics and Innovation, the AI Council and the Government Office of AI, as well as the private sector Alan Turing Institute, but doesn’t detail how each of these organisations will inform and influence government AI strategy. “With those bodies, you wonder if they’re spreading too thinly,” says Michael Veale, a public sector machine learning researcher at University College London.
According to Veale, the report makes plenty of sensible suggestions around AI regulation, but falls short of giving regulators the specific resources they need to enforce the government’s AI policy. The 1998 Data Protection Act, he points out, already gives individuals the right to receive an explanation of the reasoning behind any automated decision made about them, but this is rarely enforced.
“This idea that more bodies will solve it is quite problematic if we don’t have enough power in the regulator to manage the law right now,” Veale says. His preference is for regulators to have access to a pool of AI experts who can work across industries.
Srnicek is also concerned that, once it leaves the EU, the UK will find itself increasingly unable to put in place tough policies that limit the potential abuses of personal data by large corporations. He points to the European General Data Protection Regulation, which will come into force on May 25, as an example of the kind of wide-reaching legislation that the UK will struggle to negotiate on its own. “I don’t think that the UK is going to have nearly as much power at instituting that kind of thing,” he says.
The report also warned that as AI decreases the demand for some jobs, the government and industry would have to shoulder the responsibility of retraining people throughout their careers. It recommended that industry match the funds that the government puts into the National Retraining Scheme, a newly-announced programme designed to address skills shortages in certain sectors.
These funds should be raised by entire industries, and not just those companies that displace workers through automation, says Clement-Jones. He also warned that unless the government issued more visas specifically for the tech industry, the UK will continue to suffer from a lack of skilled AI workers. In November, the government announced that it would double the number of “Exceptional Talent” visas to 2,000, but under the new arrangement these extra visas are shared between arts, sciences and digital technology, so for every AI researcher who receives a visa, a skilled scientist or artist misses out.
But delivering on the aim of making AI systems more explainable and less biased will require an interdisciplinary approach, says Sandra Wachter, a researcher at the University of Oxford and the Alan Turing Institute. Regulators and the newly-created AI institutions will need a mix of ethicists, social scientists and legal experts to decide on a case-by-case basis what fairness looks like. “These are very important societal questions and you won’t just have a technological answer to them,” she says.
This article was originally published by WIRED UK