Technology has been steadily blurring which tasks are best performed by a human and which by a machine – from smart home sensors to music composed by generative algorithms to artificial intelligence in places like hospitals and schools. Since the inception of WIRED UK in 2009, this world has grown and shifted in ways that would have been hard to predict, and what were once buzzwords, the offshoots of science fiction, have increasingly become part of our everyday lives.
“We’re in this period of a massive convergence of a number of very high-level trends,” says Jeremy Palmer, CEO of QuantumBlack, an advanced analytics firm and McKinsey company. “The amount and variety of data, computing power, infrastructure like cloud capabilities, along with academic research and papers, are all rapidly advancing. Machine learning and artificial intelligence are enabled by these things, so we’re seeing them embedded into real-world applications more and more. For example, we are increasingly seeing artificial intelligence and machine learning being subsumed into industries like healthcare, education and architecture.”
“Companies are really moving the field forward in a way that we haven’t seen for many, many years, and they’ve typically been playing with what we call toy problems,” says Greg Williams, the editor of WIRED UK. “But what we’re seeing is these developments being applied in ways outside of research facilities.”
The rift between technology and other disciplines – such as art and design – has narrowed too. “Technology is a language – it’s not just an app, it’s the way that we communicate, and so it should never be dominated by one discipline,” says Daan Roosegaarde, an artist and technologist who spoke at the WIRED x Breaking Boundaries event. “The future is about moving away from linear thinking – we have to connect these disciplines together, in a way like during the Renaissance.”
“The really interesting stuff only happens when you get into interdisciplinarity,” says Palmer. “We know now that we can use machine learning to harness things like vision and speech. So, as soon as we know that we can reproduce speech or a computer can understand images, then we start to open up many opportunities to apply these capabilities in business, design and entertainment.”
But even as significant progress is made in these fields, larger questions around how these technologies should be used – and even whether they should be used – still remain. “We know that computer science is rapidly becoming one of the most popular undergraduate courses,” says Palmer. “But how do we get to a point where all of us can have enough of an understanding of these areas that we can feel a part of it?” Even as the capabilities of new technology accelerate – or even as new kinds of technology become more commonplace – there are still huge swathes of people who get left behind. Crucially, there has also been a tech backlash in the last two to three years, as more and more people feel that technology might not be working in their best interests.
The next decade of progress in artificial intelligence could also have other, unpredictable ramifications. “We’re moving towards a world where physical evidence, whether a photograph or a video, could no longer be used in a courtroom, because a lawyer or a defendant can question the authenticity of that piece of evidence,” says Rachel Botsman, a Trust Fellow at Oxford University’s Said Business School, who has been researching the relationship between technology and trust in the modern world for over a decade. “This relationship between trust and truth is really significant, and we’re arriving, within the next decade, at a place where this concept of having a shared sense of reality is coming into question.”
Botsman suggests that we have to approach such questions carefully over the coming years. “We already give our trust away to things which make decisions about us,” she says. “We are going to become more aware that we’re outsourcing decision making to machines, and that’s going to be a huge trust leap. You want to believe that the machine will be empathetic – that it can respect your beliefs even if they’re different from the program.”
“Back in the 20th century, as Kevin Kelly puts it, we electrified everything we could, whether that was fires or kettles or our homes,” says Williams. “It’s a nice analogy for the way that we’re seeing this develop, which is that we’re going to effectively cognify everything. That won’t mean that all of our problems are over, or that we’ve solved everything. It just means that we’ll have artificial intelligence operating in the background, allowing human beings to do the things we’re better at.”
Despite the advances in technological innovation over the last decade, there are a few essential qualities that artificial intelligence does not quite possess – human ideas may remain the crucial factor that drives innovation. “When the camera was invented, it didn’t stop people from making paintings,” says Roosegaarde. “As our world becomes hyper-technological, the human desire for beauty and challenging reality – those human skills will become more important.”
For more information, visit quantumblack.com
This article was originally published by WIRED UK