Will anyone care when a robot wins the Man Booker Prize?

This article was taken from the October 2015 issue of WIRED magazine.

In early July, the nerdier reaches of Twitter freaked out about the supposed "dreams" of neural networks, with multicoloured spirals rendering hybrid pig-snails and camel-fish. The networks in question belong to Google Research, which has been testing ways to teach them to recognise shapes in images; some of that experimentation in image interpretation has led the networks to do something resembling artificial daydreaming - seeing shapes among clouds in the sky.
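For the technically curious, the trick behind those images can be sketched in a handful of lines: take a pretrained image-recognition network, pick one of its layers, and repeatedly nudge the input picture so that the layer's activations grow stronger, so the network exaggerates whatever it already half-sees. The Python sketch below (assuming a recent PyTorch and torchvision) is a rough illustration of that idea only; the VGG16 model, the layer index and the step sizes are arbitrary choices for this example, not Google's actual setup.

```python
# Rough sketch of the "machine daydream" idea: gradient ascent on one layer's
# activations of a pretrained classifier, so patterns the network half-sees
# get amplified. Model, layer index and step size are illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
to_tensor = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])

def dream(image_path, layer=20, steps=30, lr=0.05):
    img = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, module in enumerate(net):   # run the image up to the chosen layer
            x = module(x)
            if i == layer:
                break
        loss = x.norm()                    # "how strongly does this layer fire?"
        net.zero_grad()
        if img.grad is not None:
            img.grad.zero_()
        loss.backward()
        with torch.no_grad():              # climb the gradient: make the layer fire harder
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)
    return img.detach().squeeze(0)         # a slightly more "dreamed" version of the input
```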

As with most futurist-bait on Twitter, the story went viral and sparked counter-analysis. We hear a lot today about "smartness" and "intelligence" - highly elastic terms applied to so wide a range of applications that they approach meaninglessness. And yet, it's hard to deny that various forms of hard and soft machines are increasingly able to autonomously construct narratives that look and sound vaguely familiar to humans. Whether it's a cheeky Markov bot (disclaimer: I've been sharing one with colleagues as a pet for several months), an algorithm generating media art, or a learning experiment in a lab, machines are telling rudimentary stories of their own, albeit guided by human hands at the instruction level. Right now, bots make art they think is attractive (after human tastes), write simple stories for newspapers and wire services (playing something like complex Mad Libs, but with stock prices), and undertake dozens of other authorial tasks.
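To give a sense of how low the bar for that kind of authorship can be, here is a minimal sketch of a Markov text bot of the sort mentioned above: it records which word tends to follow which in a corpus, then walks those statistics to stitch together new "sentences". The function names and the one-line corpus are made up for illustration; a real bot would be fed far more text.

```python
# A toy Markov text bot: learn which word follows which, then remix.
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=30, seed=None):
    """Walk the chain, picking each next word at random from what was seen."""
    state = seed or random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        options = chain.get(state)
        if not options:
            break
        out.append(random.choice(options))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the robot wrote a story and the story wrote the robot a reply"
print(babble(build_chain(corpus, order=1)))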

Hundreds of spambots weave crude narratives across social media every day, creating believable personas and tossing off bon mots and clunky aphorisms that aren't miles away from your average LinkedIn newsfeed. In creative terms, these weak intelligences are skittering across the uncanny valley, appearing human-legible enough that we read sense into their works.

As with many areas of machine learning, the resolution of this creativity is improving by leaps and bounds. From wearable technologies to home sensors, online behaviour, CCTV captures and credit-card purchases, most of us leave a trail of data smog wherever we go - even if we go nowhere at all. Credit agencies have built distorted portraits of us for decades. Lately, they've been joined by dozens of companies' big-data projects, building an understanding of what we read, how we drive, what we eat on Wednesday and how we have sex. Our data doppelgängers enjoy rich, if unrecognisable, lives without us.

Where do we go from here? If machine imaginations, and their ability to craft rich machinic fiction, keep developing at this pace, there is little doubt we will soon be consuming more non-human works. Who would know more about the drama of our daily lives than a Roomba or a Dropcam? Might a self-driving car write the next Kerouac-style novel? A crime procedural show as seen from surveillance cameras' points of view? One could imagine a bored smart home staging a re-enactment from old log files while the family is away, replaying a family dinner through re-runs of lighting, air conditioning and appliance settings, like a domotic puppet show.

It may be hard for us to tell when this is happening. In effect, this evolution to machinic fiction has already begun. In the near future we'll see a media property mainly generated this way (to which a wag might reply: "How is that any different from the ratings-driven, over-tested pabulum we get now?"). The controversy over the first non-human Man Booker Prize or Turner Prize nominee will be notable, but short. Many of today's art-school students already harness code to make art, or let the code make the art they contextualise. Why wouldn't this trickle down to popular media? And will we care? I doubt it.