The threat of killer robots may sound a little far-fetched, but this latest 'harmful robot' suggests we may have taken a step closer to that dystopian reality.
Roboticist Alexander Reben from the University of California, Berkeley, has created a bot called "The First Law" that is capable of pricking a finger, and it decides for itself whether or not to do so each time. Ultimately, it chooses whether to inflict pain in a way that not even its creator can predict.
The robot is named after the first law in a set of rules devised by sci-fi author Isaac Asimov, quoted as being from the Handbook of Robotics, 2058 AD, which states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm".
The dilemma Reben's robot plays on is set out in a recent research paper on "safely interruptible agents" by Laurent Orseau of Google DeepMind and Stuart Armstrong of the University of Oxford, which explains why reinforcement learning agents are unlikely to behave optimally all the time.
"If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red [kill switch] button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation," he explained.
The paper adds that if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button, which is an "undesirable outcome".
It goes on to explore how to ensure that a learning agent will not learn to prevent, or seek, being interrupted by its environment or a human operator.
"We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible," the paper reads.
The robot cost $200 (£140) to build and took three days to put together, but Reben isn't looking to market it. Instead, the aim of the pain-inflicting machine is to fuel further discussion about the perception of artificial intelligence (AI), an increasingly heated topic of debate as computers become ever more sophisticated and worryingly intelligent.
"The real concern about AI is that it gets out of control," Reben told the BBC. "[The tech giants] are saying it's way out there, but let's think about it now before it's too late. I am proving that [harmful robots] can exist now. We absolutely have to confront it."
Tech experts and high-profile figures in the industry have made repeated calls to limit the development of deadly AI, even as autonomous systems become central to virtually every other area of technology and industry. Last year, Professor Stephen Hawking, SpaceX founder Elon Musk and many other notable tech personalities, such as Steve Wozniak, signed an open letter urging the United Nations to ban the development and use of autonomous weapons.
While no-one has admitted to developing lethal AI, the potential to build such weapons already exists and is developing fast: a recent report into the future of warfare commissioned by the US military predicts "swarms of robots" will be ubiquitous by 2050.
In response, The Future of Life Institute announced it would use a $10m donation from Elon Musk to fund 37 projects aimed at keeping AI "beneficial", with $1.5m dedicated to a new research centre in the UK run by Oxford and Cambridge universities.
But for the academics and figures who signed the letter, AI weapons are potentially more dangerous than nuclear bombs.
"Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” the academics argued.
“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
This article was originally published by WIRED UK