Your job automated

This article was taken from the January 2015 issue of WIRED magazine.

Earlier this year, Deep Knowledge Ventures, a Hong Kong investment house, announced that it had appointed an algorithm to its board of directors. Given the same powers as the human board members, the piece of software weighs up financial and business decisions to assess investments in biotechnology and regenerative medicine that could be worth millions of dollars. The algorithm's strength, its creators claim, is its ability to automate due diligence and spot historical trends that a mere human would struggle to see.

Because CEOs and senior executives routinely command seven- and eight-figure salaries based on their experience and ability, the appointment seems like a stunt concocted by a marketing department: surely no computer can make decisions superior to those of MBA-schooled executives? In fact, the really surprising thing about the Deep Knowledge Ventures algorithm isn't that it's an outlier, but how typical it is. No matter what line of work you're in, your skills are being undermined by rapidly improving algorithms.

Are you a doctor? Say hello to the medical apps that can provide diagnostic information more accurately than a human health worker.

A lawyer? LegalZoom and Wevorce are legal services that carry out the kind of complex tasks previously performed by a legal professional, for a fraction of the fees. Do you work in manufacturing? 3D printers can now do much of the physical construction work previously carried out by assembly workers, and genetic algorithms (which mimic natural selection inside a computer) are now widely used to carry out design work.
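To make that parenthetical concrete: a genetic algorithm keeps a population of candidate designs, scores each one, and breeds the best. A minimal sketch in Python, where the target "design", population size, mutation rate and fitness function are all arbitrary stand-ins rather than any real design tool:

```python
import random

# Toy genetic algorithm: evolve a list of numbers towards a target "design".
# The target, rates and sizes here are arbitrary illustrations.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def fitness(candidate):
    # Higher is better: negative distance from the target design.
    return -sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly perturb genes, mimicking mutation in natural selection.
    return [g + random.choice([-1, 1]) if random.random() < rate else g
            for g in candidate]

def crossover(a, b):
    # Combine two parents at a random split point.
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    # Selection: keep the fittest half, breed the next generation from it.
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:
        break
    parents = population[:25]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(25)]

print(generation, population[0])
```

Real evolutionary design systems use far richer genomes and fitness functions (aerodynamic drag, material cost, antenna gain), but the select-breed-mutate loop is the same.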

In most sectors, machines are faster, cheaper and more consistent than humans. They don't take sick days or holidays, don't exhibit irrational bias and don't leave for a better job.

As artificial intelligence systems have become more advanced, they have been disrupting more fields -- and they're showing no signs of slowing down. According to a 2013 study by the University of Oxford, around 47 per cent of US employment is at risk of being automated within two decades.

This isn't a contemporary phenomenon: throughout history, technological progress has constantly shifted the market's demand for skills. The invention of the steam engine, for instance, prompted the shift from an agrarian to an industrial economy and created the city as we know it. New technologies have destroyed jobs, but they've also created them, through what economists call the "capitalisation effect": companies enter areas where demand and productivity are high, and historically this has been enough to offset the destructive side of such economic shifts.

What is different now is that the number of jobs being created appears to be lower than the number being destroyed. In the early 20th century, factory work created more than enough jobs to offset those lost in agriculture. That is not so obvious in a world where a company such as Instagram, which employed 13 people in April 2012 when it was acquired by Facebook for $1 billion (£600 million), can displace photography giant Kodak, which employed more than 140,000 at its height. Writing in 1930, the economist John Maynard Keynes noted that widespread technological unemployment is the result of "our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour".

Today it's no longer only jobs perceived as low-skill that are being disrupted: white-collar professions involving a high level of training are just as likely to be displaced by software as assembly workers in a factory. Some of this is because once-untouchable fields such as law and medicine include specialisms that are vulnerable to automation: medical diagnosis, the drafting of contracts and comparison of trademarks can be better carried out by a computer than by human beings. "What we saw happening with physical labour during the industrial revolution we're now seeing happening with cognitive labour," says Michael Osborne, associate professor in machine learning at the University of Oxford. "It's all about taking cognitive tasks and breaking them down into their smallest possible subcomponents."

Surviving this transition will be tough. It has already led to -- and will tend to deepen -- a growing polarisation between rich and poor, as a few "superstars" flourish at the top of the food chain while median incomes fall in the middle. Between 1990 and 2005, the ratio of CEO pay to that of the average worker rose from 70:1 to 300:1.

In the UK some CEOs take home more in three days than lower-paid employees make in an entire year. Although technology isn't the only reason for this, it does play a significant role. For example, it offers new income streams and potential audiences for the rock stars at the top, while removing the jobs of those underneath. The digital age enables JK Rowling to become a billionaire author by giving her more ways to monetise Harry Potter, but it also means that second-bests will find it tougher to compete.

In his book Average is Over, the economist Tyler Cowen writes: "One day soon, we'll look back and see that we have produced two nations -- a fantastically successful nation working in a technologically dynamic sector, and everything else." But speaking to WIRED, Cowen is also optimistic. "These proportions are not fixed," he says. "I think we can get a lot more people into the rich sector. A lot depends on what kind of education and other opportunities are available to them."

This is where technology's disruptive reach can help, whether it's a MOOC (massive open online course) that offers a university-level education for free, or the connectivity that lets a UK entrepreneur set up a production line in China. Technology might be shrinking certain industries, in terms of the numbers of workers they need in order to function, but it's also making all manner of new things possible. "Automation is definitely having a big impact on certain classes of lawyer, but I don't think it's going to do away with them completely," says John Kelly, CEO and founder of Blackstone Discovery, an e-discovery firm that uses software to sort through pre-trial documents, a job that would previously have been done by junior lawyers. "For example, in the area of legal discovery, one of the big revolutions of the digital age is that there's so much more data to comb through. Whereas in the past it might have been a few boxes of paper, today we're talking about emails, texts, documents, voicemails, social-media posts, hard-drive contents. This isn't a case of algorithms shutting down fields; they're opening them up. That presents a huge opportunity for new lawyers with the right skill sets."

Keeping abreast of these required skills will be key to surviving the transition. "On a personal level, people need to make sure they have the kinds of skills that computers are unlikely to be able to substitute for, particularly working with this technology in new and creative ways," says Erik Brynjolfsson, director of the MIT Center for Digital Business and co-author of The Second Machine Age.

What will distinguish the jobs that are likely to last is that they will be complementary to, rather than replaceable by, machines. We need to establish which tasks machines can do better than people and vice versa. This isn't easy. In 2004, in The New Division of Labor, MIT and Harvard economists Frank Levy and Richard Murnane argued that computers were adept at jobs that require the following of rules, but were far less effective at dealing with pattern-recognition tasks involving unstructured data.

An algorithm couldn't, they argued, drive a car, because of the stream of unstructured data the vehicle would need to handle. A decade later, with autonomous cars being tested on British roads from January 2015, that boundary has already shifted, and the algorithms' progress is only going to accelerate.
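An illustrative contrast of what Levy and Murnane meant, in Python; the tax brackets are invented for the example. A rule-following task can be transcribed directly into code, while a pattern-recognition task over unstructured data offers no rulebook to transcribe:

```python
# Rule-following: a task whose rules can be written down is easy to
# hand to a computer. (These bracket thresholds and rates are made up.)
def tax_due(income):
    brackets = [(10_000, 0.0), (40_000, 0.2), (float("inf"), 0.4)]
    tax, lower = 0.0, 0
    for upper, rate in brackets:
        tax += max(0, min(income, upper) - lower) * rate
        lower = upper
        if income <= upper:
            break
    return tax

# Pattern recognition over unstructured data: there is no rulebook to
# transcribe. The "rules" must be learned statistically from labelled
# examples -- which is why Levy and Murnane thought such tasks were
# safe from automation, and why machine learning has eroded that safety.
def looks_like_spam(email_text):
    raise NotImplementedError(
        "No fixed rules exist; a model trained on examples is needed.")
```

The driverless car sits firmly in the second category, which is what made its arrival so surprising.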

For the next few years, humans will continue to be employed in areas where machines are not yet cheap or skilled enough to replace them. Logistics companies such as Amazon, for instance, still rely on human employees to pick and package orders. At some point in the near future, however, those staff will be replaced: the online retailer's purchase of robotics company Kiva Systems was an early step towards increasing efficiency and lowering costs through automated warehouses.

These advances in computation will change the tasks required of humans, but our input will not be mothballed quite yet, merely adapted. To stay safe from the destructive effects of new technologies we must recast the way we think about employment (and, by extension, education), picking areas in which machines will probably never be able to match humans. The industrial revolution was all about turning people into machines: simplifying complex tasks to the point where they could easily be fitted to mass production. The digital age, ironically, demands the reverse: adapting the new possibilities of computation to make work more human.

Lawrence Katz, professor of economics at Harvard University, talks about an "artisan economy" of jobs that emphasise the bespoke, human element of work. As we become accustomed to an age of smart sensors, algorithmic selection and real-time feedback, we will also come to re-evaluate the importance of qualities such as empathy, creativity and, well, humanness.

We're already starting to see evidence of this. There has, for instance, been a recent rise in the number of companies exploring human curation. One example is Beats Music, purchased by Apple for $3 billion (£1.8 billion) in May 2014. Beats stands out from its rivals by offering playlists assembled by human editors rather than machines. In August, former Apple executive Jean-Louis Gassée wrote an open letter to Apple CEO Tim Cook suggesting that he apply the same idea to the App Store, using editorial picks in place of the current recommender system. Similarly, the nutrition app Rise matches users with human coaches rather than relying on an algorithm.

Another example is OOLOO, a Siri-style assistant created by the storage service iDrive. Rather than focus on breakthroughs in natural-language processing, OOLOO's creators believe that they have optimised their service by using humans instead of algorithms. "Life is getting more and more automated, and what we've seen is a real desire on the part of users for a human connection," says iDrive CEO and president Raghu Kulkarni. "There's a warmth to our service, which sets us apart from some of the algorithmic options out there. There might be a bit of humour in your interaction, for example. I certainly don't think human interfaces are the only option, but we're definitely seeing more and more popularity for that way of thinking."

Although automation can replace people in many roles, there may be jobs that we simply don't want to hand over to machines: not because they can't produce a good approximation, but because the result feels a little too mechanised. Algorithms, for example, can analyse the music of bands such as the Beatles, and may very soon be able to generate whole new tracks in that style without the need to pay expensive musicians. But the idea of replacing musicians, radio hosts or novelists with algorithms is one that human beings are likely to find peculiar.

Where to draw the line between computer-driven efficiency and emotionally intelligent response remains to be seen. Is that seemingly bespoke job you're interested in one where the human component is essential, or one where efficiency is valued more highly and the human element could be replaced by an algorithm?

Viv Labs, a startup founded by several of the creators of Siri, is currently working on a virtual assistant able to deal with complex sentences like: "Get me a ticket for the cheapest flight from SFO to Charles de Gaulle on July 2, with a return flight the following Monday" or, "On the way to my brother's house, I need to pick up some cheap wine that goes well with lasagne." At present, such requests can be better dealt with by a person with a deep understanding of language and semantics, which is OOLOO's business model. But judging by the progress made by Viv Labs, the same won't necessarily be true a few years down the line. It depends on how much we prize qualities like humour and human spontaneity in our interactions. "There's no single technological inevitability about the future when it comes to how it will affect employment and incomes," says Brynjolfsson. "There have been some disappointing trends when it comes to inequality and employment, but there have also been some great, positive trends, in terms of increased abundance and the overall pie getting bigger. Going forwards, we need to view technology as a tool -- not an autonomous force that imposes its will on us. We shape our own destiny."

Education

Our approach to learning needs a radical rethink

Your employability will increasingly depend on your ability to develop new skills, again and again. The education system is not meeting this need.

Schools and universities are based on a paradigm rooted in the Industrial Revolution, the so-called "factory model" of schooling, in which standardised lesson plans teach students specific skills for prescribed roles in the workplace.

Much of this thinking no longer applies. The factory model for schools imagined that these skills could be used by learners for the rest of their lives. Today, technological change means that many skills will be obsolete within a decade. New categories of job are constantly being invented. Why should memorisation of facts matter in the age of Wikipedia? "Education is built around the idea that we should teach our children to store large amounts of information," says Carl Benedikt Frey, a James Martin Fellow at the Oxford Martin Programme on the Impacts of Future Technology. "Why? Instead of rote learning, we should be pushing social interactions and getting kids to solve problems in creative ways."

"There are two competing ideologies of education," says David Hursh, author of Twenty-First Century Schools and associate professor of education at the Warner Graduate School of Education at the University of Rochester, New York. "The first of these focuses on so-called objective assessments, standardised teaching, increased efficiency, reducing costs and ensuring that students are constantly learning and doing nothing frivolous. The second -- which I'm a believer in -- talks about using education to create a democratic citizenry. This means few standardised tests, if any, building more directly on teacher-student knowledge, and supporting human development. Rather than using education to train people for the workplace, it imagines that everywhere, including the workplace, should be an educational institution."

In a world of massive open online courses, high-quality education can take place anywhere. This opens up new technology-aided opportunities for people previously denied the chance of such learning. Within the next 20 years it is likely that everyone in the world will be able to learn using online courses.

This will require a change in how we think about educational establishments. "There is a real lock on certification held by a lot of more traditional establishments," says economist Tyler Cowen. "We think about Oxford and Harvard, but saying that you learned something online has nowhere near the same cachet. That really needs to evolve."

New Models

We shouldn't be giving away data for free; we should sell it

It is unlikely that WIRED readers would work for free, even if their boss occasionally let them use her personal gym or swimming pool as a perk. So why don't we behave in the same way when it comes to our data? Instead of one or two companies getting rich from the data generated via digital touchpoints, everyone should gain from the new streams of revenue.

Ironically, today's data-driven artificial-intelligence systems make humans more valuable than ever. Today's AI doesn't rely on rules coded by a few programmers, but on vast data sets of collective human intelligence, which are used to make algorithms smarter. On Google Translate, every word translated draws on the knowledge of millions of anonymous translators; their reward is the use of online services, such as search engines, for free. As white-collar jobs move online, though, should services and their users develop a different relationship? "Data is the new oil," the mathematician Clive Humby argued in 2006. Humby and his wife, Edwina Dunn, created the Tesco Clubcard and sold their company to Tesco for £93 million. But if data really is 21st-century oil, should users be giving away such a valuable resource in exchange for free online spreadsheets and pictures of cats? Steve Jobs showed us through iTunes that people can be persuaded to pay for music online rather than downloading it for free. As public awareness of the power of data grows, the same thing could happen with cognitive capital.

In his book Who Owns the Future? virtual-reality pioneer Jaron Lanier suggests that it would be relatively simple for companies to calculate a micropayment every time a person's action helps a machine to make an intelligent decision. When Amazon sells a book based on your preferences, say, or an online dating service uses your data to provide better matches, you would receive a micropayment.
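A back-of-the-envelope sketch of how such a ledger might tally credits, in Python; the rate, the event and the user names are entirely hypothetical, not a description of any real service:

```python
from collections import defaultdict

# Hypothetical micropayment ledger in the spirit of Lanier's proposal:
# whenever users' data informs an automated decision, the contributors
# are credited a tiny fraction of the value created.
RATE = 0.0001  # fraction of transaction value shared with contributors

ledger = defaultdict(float)

def record_decision(transaction_value, contributors):
    # Split the micropayment among the users whose data informed the decision.
    share = transaction_value * RATE / len(contributors)
    for user in contributors:
        ledger[user] += share

# e.g. a £12 book recommendation that drew on three users' reading histories
record_decision(12.00, ["alice", "bob", "carol"])
print(dict(ledger))  # {'alice': 0.0004, 'bob': 0.0004, 'carol': 0.0004}
```

The individual sums are tiny, which is why Lanier's scheme only adds up if a person's data contributes to many millions of decisions.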

Tyler Cowen has a variation on this idea. "Machines are going to displace more jobs, but we'll be able to produce more for free," Cowen says. "It's not unthinkable that a multi-billionaire could adopt a country and give its people the technologies they need for free. They would have phenomenal healthcare and education. You can already see the beginnings of this online." The concept of a neo-feudal Google-owned nation in which citizens access services in exchange for personal data may seem dystopian, but these are the questions we should be asking as the age of automation advances.

Jobs vs. Employment

Qualities, not qualifications, will be the new currency

There's a difference between job security and employment security. The days of a single job for life may be gone, but that doesn't mean the end of employability.

Computer scientists can move from creating an algorithm to analysing star formations to working on nanotechnology; in a similar way, those who want to thrive will adapt their skills to fit a rapidly changing work environment.

One way this will play out is a move away from a model of specific job training and towards a focus on attributes.

In the age of the algorithm, skills such as creativity and empathy will be prized far more highly than rote knowledge of one particular area that may be disrupted.

Several startups are already exploring this area. Israeli entrepreneur Guy Halfteck's company Knack creates mobile games that isolate and grade qualities such as quick thinking, perceptiveness and spontaneity, making these traits less subjective and more quantifiable for employers. "The kinds of jobs we're going to see continued growth in are the ones that are heavily reliant on social and creative intelligence," says Carl Benedikt Frey, a James Martin Fellow at the Oxford Martin Programme on the Impacts of Future Technology.

Partnering with Michael Osborne, associate professor in machine learning at the University of Oxford, Frey has tried to identify what he calls "bottlenecks": the areas in which machines find it difficult to match humans. "We've found three main ones," Frey says. "Personal intelligence, creativity and fine manipulation, which machines still cannot do well. You've got to find the jobs which require these skills. Jobs which require a high level of creative and social skill are quite difficult to automate. As a result there is a low risk that they will be subject to computerisation."

"Traits such as creativity and emotional intelligence are going to be increasingly prized," says Erik Brynjolfsson, director of the MIT Center for Digital Business. "For now, these are areas where humans have a relative edge compared with machines."

Medicine is an example of a profession that will probably reflect this change. As algorithms are used for more and more diagnostic work -- meaning that doctors no longer need to be repositories for The Physicians' Desk Reference -- their role as empathic caregivers will increase.

This article was originally published by WIRED UK