From ethics to accountability, this is how AI will suck less in 2019

Artificial intelligence promises so much, and in 2019 it will finally deliver on both technical prowess and data transparency


With the help of human analysts, AI systems will get better at understanding and flagging fake news and trolls. Jim Watson/AFP/Getty Images

If 2018 brought artificial intelligence systems into our homes, 2019 will be the year we think about their place in our lives. Next year, AIs will take on an even greater role: predicting how our climate will change, advising us on our health and controlling our spending. Conversational assistants will do more than ever at our command, but only if businesses and nation states become more transparent about their use. Until now, AIs have remained black boxes. In 2019, they will start to open up.

The coming year will also change the way we talk about AI. Whether wide-eyed techno-lust or trembling anticipation of Roko's basilisk, wild speculation about the future of AI will, by the end of next year, give way to hard decisions about ethics and democracy; 2019 will be the year that AI grows up.

Bots, troll farms, and fake news

At least 18 countries have seen their election results affected by fake news. On Facebook alone, an estimated 150 million people were targeted with inflammatory political ads.

“There are biased and inaccurate news sources virtually anywhere there are people,” says Preslav Nakov, research scientist at the Qatar Computing Research Institute, who has studied the impact of fake news on elections. “Studies have shown that 70 per cent of users cannot distinguish real news from fake news,” he adds. His team found that fake news stories spread six times faster on social media than real ones.

Nakov and colleagues at MIT are developing a system that will learn whether a news source is peddling propaganda or not. “Fighting misinformation isn’t easy; malicious actors constantly change their strategies. Yet, when they share news on social media, they typically post a link to a website. This is what we are exploiting: we try to characterise the outlet where the article is hosted.”
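A toy sketch makes the outlet-level idea concrete. This is not Nakov's system, just a minimal illustration in Python, with invented training snippets and labels, of how pooling text from a source and classifying the source itself might look:

```python
# A minimal, hypothetical illustration of source-level classification:
# pool text hosted by each outlet and classify the outlet, not the
# individual article. All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each entry concatenates sample articles from one outlet.
# Label 1 = propaganda-prone source, 0 = generally factual (invented).
outlet_texts = [
    "the elites are hiding the truth wake up before it is too late",
    "shocking miracle cure that doctors refuse to tell you about",
    "officials confirmed the revised figures in a statement on Tuesday",
    "the committee voted to approve the budget after a short debate",
]
outlet_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(outlet_texts, outlet_labels)

# Score a previously unseen outlet by the text it hosts:
# predict_proba returns [P(factual), P(propaganda-prone)].
print(model.predict_proba(["anonymous sources reveal the shocking truth"]))
```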

But machine learning can’t tackle the problem alone. “The most important element of the fight against disinformation is raising user awareness, as propaganda becomes less effective once we’re aware of it. It would also help limit the spread of disinformation as users would be less likely to share it,” says Nakov.


Artificial intelligence meets civil society

Far from being a niche subject, big data and deep learning are affecting us all. “I’m interested to see if the difficult year Facebook has had leads to any kind of cultural change or initiative – I think Facebook’s problems could be the early signs of a gradual change in the way we think about technology, and I hope that continues,” says Jamie Susskind, author of Future Politics and former fellow of Harvard University's Berkman Centre for Internet and Society.

“The digital is political,” Susskind says. “We must be helped to understand how the technologies that govern our lives actually work, the values they encode, and what purpose they serve. More radically, it means that we should have a hand in shaping or customising them so we don’t have to rely on the morality or wisdom of tech firms alone.”

The rise of the digital ethicist

Initiatives like Oxford University’s Future of Humanity Institute and DeepMind’s Ethics and Society project are bringing together specialists in technology and the humanities to try to foresee, and mitigate, the social costs of AI – and to steer research and investment towards projects that benefit society.

In 2018, the Nuffield Foundation launched the Ada Lovelace Institute, a charitable trust with a mission to foster research, inform debate and educate a new generation of digital ethicists. In the year ahead and beyond, expect more and more AI companies to hire professional ethicists into senior roles.

“By this time next year, we want Ada to be recognised as offering a trusted and informed contribution to complex questions,” says Tim Gardam, chief executive of the Nuffield Foundation. The aim, he continues, is to identify issues that need addressing collectively to ensure a data-driven society is socially inclusive.

The real laws of robotics

Suffice to say 2018 hasn’t been a great year for Facebook. From boycotts to governmental grillings, there’s been a lot of theatre but not much action. Slowly, that will change. “2019 is a bit soon for many hard laws to be enacted, but lots of non-binding standards and guidelines are already being released,” says Jacob Turner, international lawyer and author of Robot Rules: Regulating Artificial Intelligence. “It is often the case in regulation that soft laws of this kind are a precursor to binding ones.”

In the coming year, legislators should focus on getting clearer laws on the statute books. “It would be much better to create formal laws for AI rather than leaving things to judges,” says Turner. “Judges don’t have an opportunity to consult the wider public, or to do long-term studies into the impact of their decisions.”

So far, discussion of the future of technology has been rife with nice-sounding but meaningless platitudes. According to Turner, this level of rhetoric is getting us nowhere. In 2019, “governments as well as private industry should stop trying to come up with short statements of vague, high-level principles like, ‘AI should be used for good’ and make a start on the more difficult task of working out more detailed rules and regulations,” he says.

Conversational customer service takes off

Last year we started to talk to our machines. Now it’s finally time for them to talk back without spouting utter gibberish or rote weather updates.

Google recently launched Duplex, a service that can, if flashy technical demos are to be believed, call restaurants and make reservations on your behalf. This technology will soon come to your banking app, your calendar, and your email, with smarter natural language generation. According to Robin Kearon, senior vice president at Kore.ai, systems that initiate conversations with you are about to become the “new normal”.

We’ve been promised this before, but now the technology is almost ready. While there may be some usability (and regulatory) wrinkles to sort out, the technologies of natural language understanding and generation are, finally, good to go.

The biggest challenge that remains to be solved is that of making machines more socially capable. “As deeply social animals, humans tend to treat AI and robots as social as well,” says professor Bertram Malle of Brown University. And that can lead to disappointments. “Real human conversation is too complex for current systems, and there is no social intelligence that would know when it might be appropriate [to say something].”

Think of the difference in how you type a query into Google and how you might ask that same question to a human. You might type ‘bars near me’ into Google. Say this to a human and you’ll look like an idiot. In moving our communications with machines from the typed word to the spoken word, one of the biggest remaining challenges is learning a new way of speaking.

Silicon Valley cops

Smart detection of fraud and money laundering isn’t new, but the trend is increasingly away from automation and towards augmented intelligence.

Take transaction fraud. The technology is great at running thousands of concurrent experiments to predict the likelihood of any specific order being fraudulent, but the real results come when experienced (human) analysts collaborate with machines.

Spotting suspicious behaviour also requires that fraud detection systems and analysts look in new places. “Financial criminals rarely operate as singletons,” says David Nicholson of BAE Systems Applied Intelligence. “Their signal is an abnormal network of connections between individuals, accounts, email addresses, residences, and so on.” In 2019, crime fighting AI will shift to spotting criminals, based on human networks evolving over time, rather than trying to spot one-off crimes.
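As a rough sketch of that network view (not BAE's actual system; the accounts and identifiers below are invented), one might link accounts to the emails and addresses they share and flag unusually dense clusters:

```python
# A hypothetical sketch of network-based fraud detection: build a graph
# linking accounts to shared identifiers (emails, addresses) and flag
# accounts that share identifiers with many others. Data is invented.
import networkx as nx

G = nx.Graph()
# Edges connect accounts to the identifiers they use.
G.add_edges_from([
    ("acct_1", "email_a"), ("acct_2", "email_a"), ("acct_3", "email_a"),
    ("acct_1", "addr_x"), ("acct_2", "addr_x"),
    ("acct_4", "email_b"), ("acct_5", "addr_y"),
])

# For each account, count the distinct other accounts reachable through
# any shared identifier; dense overlap suggests a coordinated ring.
for acct in [n for n in G if n.startswith("acct")]:
    peers = {p for ident in G[acct] for p in G[ident] if p != acct}
    if len(peers) >= 2:  # arbitrary threshold for this sketch
        print(f"{acct}: shares identifiers with {sorted(peers)}")
```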

The dawn of machine explanations

Systems that make predictions but can’t explain them are risky in several ways: a decision based on a discriminatory feature like race or gender can be damaging for society, while a decision influenced by an easily faked bit of data is extremely brittle. Marco Tulio Ribeiro of Microsoft Research recently released Lime (local interpretable model-agnostic explanations), or, in simple terms, a software system that helps make sense of decisions made by algorithms. “Explanations help developers and users assess whether or not they should trust the model before deploying it, and also help pointing out areas for improvement,” he says.
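Lime is available as an open-source Python package. Here is a minimal sketch of using it to explain a toy text classifier; the model and training sentences are invented for illustration, not drawn from Ribeiro's experiments:

```python
# A minimal sketch of using the open-source `lime` package to explain a
# text classifier's prediction. The toy classifier and data are invented.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "shocking secret they don't want you to know",
    "you won't believe this one weird miracle cure",
    "central bank raises interest rates by a quarter point",
    "parliament votes on the proposed budget amendments",
]
labels = [1, 1, 0, 0]  # 1 = unreliable, 0 = reliable (invented labels)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["reliable", "unreliable"])
explanation = explainer.explain_instance(
    "you won't believe this shocking budget secret",
    pipeline.predict_proba,  # Lime perturbs the input and queries the model
    num_features=4,
)

# Each pair is (word, weight): how strongly that word pushed the prediction.
print(explanation.as_list())
```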

“Explanations provide insights into a model's vulnerability,” adds professor Leman Akoglu, who researches explainable AI at Carnegie Mellon University. “If you’re trying to identify terrorist suspects, for example, and an explanation shows that the underlying model is relying on an individual's age, then the model might be vulnerable, since it is easy to lie about one's age.”

Akoglu and Ribeiro see explainable AI as a new tool in its own right, helping people and machines work together. “I always hesitate to make predictions about the future, but I am very interested to see how the area of human-AI collaboration develops,” says Ribeiro. “There are many areas where the human-AI team is potentially more effective than either one taken separately.”

Climate change

“Physics-driven climate simulation models have generated more [data] than all satellite measurements of Earth's weather,” says professor Claire Monteleoni from the department of computer science at the University of Colorado Boulder, who is using smart simulations to help predict and mitigate extreme climate events. “These data-driven technologies are actually the most cost-effective way to unlock insights from the massive amounts of both simulated and observed data that have already been collected.”

Monteleoni runs hackathons to encourage young people to enter the field of climate data science. “I’m thrilled to see a generation of data scientists and AI researchers, especially at the student and early-career levels, take interest in climate informatics. [In 2018] we overflowed the meeting room, and unfortunately were unable to admit everyone on the waiting list,” she says.

And as for her resolutions for 2019? “As researchers and educators in the fields of AI and machine learning, we should strive to expose students to diverse application areas that address major challenges, not only in the field of climate, but also in other areas of societal benefit, such as sustainability, agriculture, health, education, fairness, diversity, and inclusion.”

Learning the lay of the land

As populations urbanise, and food and water security continue to be major concerns, machine learning will help us make the most of the land we have. Take farming.

“The Farm Census in England is now conducted only every ten years, providing data which is then clumped together into 2 km squares,” says professor Ian Bateman, director of the Land, Environment, Economics and Policy Institute, which is using AI to enhance land surveys. “Satellites provide a complete map of the country in less than a week, every week, and in tremendous detail. Machine learning techniques can turn that tsunami of data into clearly interpretable information, discovering messages that would take researchers an unfeasibly long time to discover,” he adds.

“Our hope is that by the end of 2019 we will have used machine learning techniques to bring earth observation data into our understanding of how land use can be changed in ways which allow policymakers to make decisions which are good for farmers, good for society and good for the environment.”
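A simplified sketch of that kind of pipeline (entirely hypothetical data; real systems train on labelled satellite imagery at national scale) might classify land cover from per-pixel spectral band values:

```python
# A hypothetical sketch of land-cover classification from satellite data:
# a supervised model maps per-pixel spectral band values to cover classes.
# The band values and labels below are randomly generated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((300, 4))     # 4 spectral bands per pixel
y_train = rng.integers(0, 3, 300)  # 0=arable, 1=pasture, 2=woodland

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify five new "pixels" and report the predicted cover types.
scene = rng.random((5, 4))
classes = {0: "arable", 1: "pasture", 2: "woodland"}
print([classes[c] for c in clf.predict(scene)])
```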

A healthier AI

The application of machine learning to medicine is helping to diagnose illnesses earlier, unlock promising new avenues for treatment and ensure that patients take the medication they’ve been prescribed.

Since 2013, we’ve known that medical professionals risk burnout when faced with the huge data-wrangling tasks that modern medicine demands of them. In a study of nearly 500 hours of clinical time in a busy emergency department, 43 per cent of time was spent on data entry, compared with only 28 per cent with patients. In a single ten-hour shift, a doctor could expect to make 4,000 mouse clicks.

The problem is so acute that one Stanford academic published a paper this year calling for a change. In 2019 and beyond, doctors will no longer be expected to feed machines data in onerous, specific ways. Instead, the grunt work will shift to the machines, which will have the intelligence to interpret looser, less formatted data. Rather than form-filling and box-ticking, conversation-driven data input will become a reality, allowing virtual assistants to automatically extract important information from conversations between medics and patients.

Updated 03/01/2019: Tim Gardam is the chief executive of the Nuffield Foundation

This article was originally published by WIRED UK