Get ready for the robot propaganda machine

Artificial intelligence and learning algorithms will make it almost impossible to tell robots from humans – and real news from fake. So what's on the agenda?

Humanity has been advancing the field of propaganda for as long as we've been at war or had political fights to win. But today, propaganda is undergoing a significant change based on the latest advances in the fields of big data and artificial intelligence.

Over the past decade, billions of dollars have been invested in technologies that customise ads ever more precisely to individuals' preferences. Now this approach is making the jump to the world of politics and the manipulation of ideas.

Some recent military experiments in computational propaganda indicate where this could be taking us. In 2008, the US State Department, through its "foreign assistance" agency USAID, set up a fake social network in Cuba. Ostensibly concerned with public health and civics, the network was used by its operatives to target likely dissidents. It came complete with hashtags, dummy advertisements and a database of users' "political tendencies". For an estimated $1.6m (£1m), USAID was, between 2009 and 2012, able to control a major information platform in Cuba, with the potential to influence the spread of ideas among 40,000 unique profiles.

Building on this project, in 2011 USCENTCOM (United States Central Command) -- the US military force responsible for operations in the broader Middle East region -- awarded a contract to a Californian firm to build an "online persona management service", complete with fake online profiles that have convincing backgrounds and histories. The software allows US service personnel to operate up to ten separate false identities, based all over the world, from their workstations "without fear of being discovered by sophisticated adversaries". These personas allow the military to recruit and spy on people, and to manipulate their behaviour and ideas.

Such projects represent the first wave of computational propaganda, but they are constrained in their scale (and ultimately their effectiveness) by the simple fact that each profile has to be driven by an actual human on the other side. In 2015, we will see the emergence of more automated computational propaganda -- bots using sophisticated artificial intelligence frameworks, removing the need to have humans operate the profiles. Algorithms will not only read the news, but write it.
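
Even with 2015-era techniques, "writing" news at machine speed is straightforward. As a deliberately minimal sketch, the Python below generates new sentences from a first-order Markov chain; the toy corpus and the generate() helper are invented for illustration, and a real system would train on a large feed of news copy or use proper natural-language generation.

```python
import random
from collections import defaultdict

# Toy corpus standing in for a large feed of real news copy.
CORPUS = ("officials said the situation is under control . "
          "witnesses said the situation escalated quickly . "
          "officials said witnesses were mistaken .").split()

def build_chain(tokens):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start="officials", length=12):
    """Random-walk the chain to produce a new 'news' sentence."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate(build_chain(CORPUS)))
```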

These stories will be nearly indistinguishable from those written by humans. They will be algorithmically tailored to each individual and deployed to change their political beliefs or to manipulate their actions. Already, Mexican drug cartels have used propaganda bots to target messages at individual members of the public, to convince them that the death of a journalist in Las Choapas had nothing to do with hit men employed by the gangs. This type of propaganda can be produced at an almost limitless scale using the estimated ten million social-media bots in existence. Such bots are currently available for rent on online hacker forums for between $5 and $200 per thousand, depending on how "human" -- and therefore how effective -- they appear.

The Russian foreign intelligence service has announced a 30-million-ruble (£500,000) contract for the "development of special software for automated information dissemination in major social networks". In 2015 we will also see the first results from field tests of the US IARPA (Intelligence Advanced Research Projects Activity) project to deploy propaganda bots in South America in an attempt to influence local political opinion.

It is still early days -- many of the bots deployed in 2015 will be programmed to use relatively simple heuristic techniques to imitate intelligence. But, powered by rapid advances in artificial intelligence, propaganda bots will soon run on genetic algorithms that let their ideas and messaging evolve, based on the resonance and effectiveness of previous messages. We are likely to see versions of these bots deployed on US audiences as part of the 2016 presidential election campaigns, and not only by the traditionally more tech-savvy Democrats.
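
As a rough illustration of that evolutionary loop, here is a minimal genetic-algorithm sketch in Python. The measure_engagement() function is a hypothetical stand-in: in a live bot its score would come from real likes, shares and replies, so it is simulated here purely to make the example run.

```python
import random

# Hypothetical stand-in for real engagement data (likes, shares, replies).
# A deployed bot would read these numbers off platform analytics; this
# simulated score just makes the sketch runnable end to end.
def measure_engagement(message):
    weights = {"outrage": 2.0, "hidden": 1.5, "truth": 1.2, "they": 1.0}
    score = sum(w for word, w in weights.items() if word in message)
    return score + random.gauss(0, 0.1)  # engagement is never deterministic

VOCAB = ["outrage", "truth", "they", "hidden", "media", "people",
         "lies", "facts", "share", "wake"]

def random_message(length=6):
    # A "message" here is just a bag of words; a real bot would evolve
    # templated sentences instead.
    return random.sample(VOCAB, length)

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(message, rate=0.2):
    return [random.choice(VOCAB) if random.random() < rate else w
            for w in message]

def evolve(generations=20, pop_size=30):
    population = [random_message() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the half of the population that resonated most.
        population.sort(key=lambda m: measure_engagement(" ".join(m)),
                        reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation: breed variants of the survivors.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return " ".join(max(population,
                        key=lambda m: measure_engagement(" ".join(m))))

print(evolve())
```

Selection keeps the messages that resonate, crossover and mutation breed new variants, and over generations the population drifts toward whatever phrasing the audience rewards.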

This technology exploits the simple fact that we are much more impressionable than we think. Facebook's recent experiments to modify users' moods showed that the very language we use to communicate can be manipulated through the stories its algorithm chooses to surface. Furthermore, researchers at MIT have shown that a false upvote cast early on can improve the public response to a story by 25 per cent; a single early downvote can cause an otherwise good story to be perceived as low-quality journalism. In 2015, the propaganda bots will start to use this knowledge to influence news feeds -- automated "friends" will like, retweet and comment on stories that are in line with their propaganda goals.
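
A minimal sketch of that amplification step, assuming an invented GOAL statement and threshold: the bot compares each incoming story against its goal using bag-of-words cosine similarity and engages only when they align. A real bot would wire should_boost() to actual like, retweet or upvote API calls.

```python
import math
from collections import Counter

# Invented goal statement the bot is tasked with amplifying.
GOAL = "the government response was swift effective and proportionate"

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def should_boost(story, goal=GOAL, threshold=0.4):
    # Engage only with aligned stories: an early like or upvote is cheap,
    # and the herding effect means it disproportionately shapes how later
    # readers rate the story.
    return cosine_similarity(bag_of_words(story), bag_of_words(goal)) >= threshold

for story in [
    "officials praise swift and effective government response to the crisis",
    "investigation raises questions about delayed government response",
]:
    print(should_boost(story), "-", story)
```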

We can also employ bots defensively, to detect when propaganda campaigns are underway. Reactions to the downing of Malaysia Airlines flight MH17 over Ukraine show that the Russian and US media want the global audience to view events differently. We can set algorithms to monitor mainstream media messaging out of Russia, compare it with what we are seeing in US outlets, and flag substantive differences in language. We can likewise use bots to monitor the millions of edits made daily on sites such as Wikipedia and uncover attempts to change language from "terrorists" to "Ukrainian soldiers" -- though this won't tell us which version is true. For that, we still need humans to weigh the evidence.
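
As a minimal sketch of the Wikipedia-monitoring idea, the Python below flags a revision in which one watched, loaded term has been swapped for its counterpart. The WATCHED_SWAPS list is invented for illustration, and a real monitor would consume Wikipedia's public recent-changes feed rather than take two revisions as arguments.

```python
# Invented watch list: pairs of loaded terms whose substitution we want to flag.
WATCHED_SWAPS = [("terrorists", "ukrainian soldiers"),
                 ("rebels", "freedom fighters")]

def flag_loaded_edits(old_text, new_text):
    """Flag a revision that swaps one watched term for its counterpart.

    Substring checks keep the sketch short; a production system would
    tokenise the text properly before comparing revisions."""
    old, new = old_text.lower(), new_text.lower()
    flags = []
    for a, b in WATCHED_SWAPS:
        removed_a, added_b = a in old and a not in new, b in new and b not in old
        removed_b, added_a = b in old and b not in new, a in new and a not in old
        if (removed_a and added_b) or (removed_b and added_a):
            flags.append((a, b))
    return flags

old_rev = "The convoy was attacked by terrorists near the border."
new_rev = "The convoy was attacked by Ukrainian soldiers near the border."
print(flag_loaded_edits(old_rev, new_rev))  # [('terrorists', 'ukrainian soldiers')]
```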

Sean Gourley is CTO of Quid, an augmented intelligence company

This article was originally published by WIRED UK