Governments don't set the political agenda anymore, bots do

2017's key influencers will be non-human accounts on social media

@ilduce2016 was born in November 2015 with only one purpose: to talk to Donald Trump. Every few hours, it sent him a message on Twitter, always signing off with a cheery #MakeAmericaGreatAgain. Months passed and nothing happened, but @ilduce2016 was patient and tireless.

Finally, the following February, it got what it wanted - a retweet from Trump himself. @ilduce2016 was a digital ambush. All its messages were quotes from strutting fascist strongman Benito Mussolini. The trap had sprung, and Trump was promptly hauled on to the TV networks to share his thoughts on fascism.


@ilduce2016 wasn't a human being, of course. It was a robot. "Bots" - often just a few lines of code, a set of programmed instructions - scamper all around us as we journey through the internet. Without them, search engines couldn't function. Hundreds toil across Wikipedia's vast digital estates alone, cleaning away graffiti (ClueBot), categorising articles (Cydebot), and reporting suspicious edits (COIBot). According to research firm Incapsula, 61 per cent of internet traffic in 2013 came from bots, a rise of ten percentage points in just 12 months.

However, @ilduce2016 wasn't just a robot. It was also an activist, one of a growing number fighting in the political battles that are increasingly waged online. Across social media, and especially enlightened platforms such as Twitter that recognise bots as legitimate members of the community, robots are being used not to manufacture cars, but to try to manufacture political consent. We are living through the rise of automated activism.

At their best, these digital placard-holders have a two-fold edge over their fleshy, human counterparts. First, their sheer volume. Bots can shout constantly, tirelessly and indefinitely; in other words, they can be inhumanly loud. In early June 2016, as the UK's EU referendum campaign gathered pace, researchers Philip N Howard and Bence Kollanyi judged that a third of all Twitter traffic concerning Brexit was most likely caused by bots, because "it is difficult for human users to maintain this rapid pace of Twitter activity".

DroptheIBot, for example, was everywhere on Twitter at once, responding to every tweet using the phrase "illegal immigrant" with the snapped reply: "People aren't illegal. Try saying 'undocumented immigrant' or 'unauthorized immigrant' instead."

After the vote for Brexit, a petition for a second referendum on British membership of the EU was hijacked by bots forging digital signatures - more than 77,000 of them, according to the organisers.

Scale can beget scale. Bots can be programmed to make other bots, which means that it is entirely possible to raise an army of them. There are around 26 million on Twitter alone, and these legions often join together into networks called botnets. This ease of production makes them cheap: you can buy tens of thousands on Twitter for a few pounds to faithfully retweet your messages. Tom Feltwell is a British researcher who studies activist bots and runs botmaking workshops. "Making a bot is really quite easy," he says. "With a bit of entry-level tech knowledge you can do something quite powerful politically."
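Feltwell isn't exaggerating. The core of a reply bot in the mould of DroptheIBot fits in a dozen lines of Python. This is an illustrative sketch only: the trigger phrase and reply come from the bot described above, but the `respond` function is a made-up name, and connecting it to a live account would need a Twitter API client, which is omitted here.

```python
# Sketch of an activist reply bot's core logic, in the spirit of
# DroptheIBot. Posting the reply would require a Twitter API client;
# only the trigger-and-respond logic is shown.

TRIGGER = "illegal immigrant"
REPLY = ("People aren't illegal. Try saying 'undocumented immigrant' "
         "or 'unauthorized immigrant' instead.")

def respond(tweet_text):
    """Return the canned reply if the tweet uses the trigger phrase."""
    if TRIGGER in tweet_text.lower():
        return REPLY
    return None  # stay silent otherwise

# The bot fires on any matching tweet and ignores everything else.
print(respond("Another story about an illegal immigrant today"))
print(respond("Nothing to see here"))
```

A real deployment would simply loop this over a stream of incoming tweets - which is exactly why, as Feltwell says, entry-level knowledge is enough to do something politically powerful.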

The second advantage bots have over humans is their ability to harvest data. Data itself is increasingly harnessed for political ends, and bots are much better and faster than humans at scooping up data from one place and pointedly making it visible in another.

Another Twitter bot, @EveryTrumpDonor, hooks into the US Federal Election Commission's database and tweets the name, location and occupation of - you've guessed it - every Trump donor. @stopandfrisk tweets every stop-and-search conducted by the NYPD in 2011. In Panama City, potholes weren't getting fixed - until a local TV station set up small pressure pads that automatically tweeted a complaint to the Department of Public Works every time a car disturbed them.
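The pattern behind bots like @EveryTrumpDonor is simple: take one structured record from a public dataset and surface it as a short message. The field names and format below are illustrative assumptions, not the bot's actual code; a real bot would pull records from the FEC's public filings and post via a Twitter client.

```python
# Sketch of the data-to-tweet pattern: one public record in, one short
# message out. Field names and wording are illustrative assumptions.

def format_donor_tweet(record):
    """Turn one donation record into a tweet-length message."""
    text = "{name} ({occupation}, {city}, {state}) donated ${amount:,}".format(**record)
    return text[:140]  # truncate to the era's 140-character limit

record = {"name": "Jane Doe", "occupation": "Lawyer",
          "city": "Austin", "state": "TX", "amount": 2700}
print(format_donor_tweet(record))
# Jane Doe (Lawyer, Austin, TX) donated $2,700
```

Run on a loop over an entire dataset, a few lines like these become a tireless transparency campaign.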

All this is both exciting and worrying. The power of bots is growing more quickly than our regulation and control of them. Bots break the law, sometimes entirely independently of their creators' intent. Web developer Jeffry van der Goot made a bot to tweet random chunks of his own tweets. A seemingly benign exercise, until - at random - it declared, "I seriously want to kill people." Van der Goot got a knock at the door from the police, who insisted that the bot be taken down. (The kicker here: the death threat was directed at another bot.)

Traditional legal theory holds that to be culpable of a crime, you need intent, or "malice aforethought". Where's the intent in an algorithm, especially a randomising one like van der Goot's? As activism becomes automated, it raises tricky ethical questions we are ill-prepared to deal with.

The more immediate danger comes from "influence bots", deployed to distort online debate by making a position look more popular than it really is. Andriy Gazin from the Ukrainian non-governmental organisation Texty has been tracking how Russian botnets pump out pro-Kremlin propaganda to sway public debate. "Last time I checked", he says, "I had 20,000 bot accounts in my database". The US government's Defense Advanced Research Projects Agency was so worried that it launched a competition in 2016 to use data science to hunt bots down. The winning team, from social-media analytics company SentiMetrix, identified 38 of 39 Twitter bots hidden in a sample of more than 7,000 accounts.
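One crude signal that bot-hunters lean on is posting rate: as Howard and Kollanyi observed, humans struggle to sustain a bot's pace. The toy heuristic below flags accounts whose busiest hour exceeds a plausible human rate. The threshold and the function name are assumptions for illustration, nothing like SentiMetrix's actual multi-feature model.

```python
# Toy bot-detection heuristic: flag accounts that tweet faster than a
# human plausibly could. Real detectors combine many behavioural
# signals; the 30-tweets-per-hour threshold here is an assumption.

from datetime import datetime, timedelta

def looks_automated(timestamps, max_per_hour=30):
    """Return True if any one-hour window holds more than max_per_hour posts."""
    stamps = sorted(timestamps)
    for i, start in enumerate(stamps):
        window = [t for t in stamps[i:] if t - start <= timedelta(hours=1)]
        if len(window) > max_per_hour:
            return True
    return False

base = datetime(2016, 6, 1)
bot_like = [base + timedelta(minutes=i) for i in range(40)]   # 40 tweets in 40 minutes
human_like = [base + timedelta(hours=6 * i) for i in range(5)]  # 5 tweets over a day
print(looks_automated(bot_like))    # True
print(looks_automated(human_like))  # False
```

Rate alone is easy to game, of course - which is precisely why, as the next paragraph notes, detection and evasion keep leapfrogging each other.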

But every time the techniques used to detect bots evolve, so do the bots themselves. Some are becoming more personalised, targeting people on the basis of what they say. Others are learning how to sound more like humans, so that a typical unsuspecting user is likely to be fooled. These bots will push boundaries, change how we look at things, create public outcry, fake public outcry, skewer candidates, even try to become candidates. Their influence on the messy art of winning and exercising political power has only just begun. Res publica ex Machina: politics from the machine.

Carl Miller is the founding research director of the Centre for Analysis of Social Media at Demos, the cross-party think tank.

The WIRED World in 2017 is WIRED's fifth annual trends briefing, predicting what's coming next in the worlds of technology, science and design.

This article was originally published by WIRED UK