Hackers are on the brink of launching a wave of AI attacks

Artificial intelligence's ability to learn will be key to both cyberdefence and attack

In the summer of 2016, seven hacking machines travelled to Las Vegas with their human creators. They were there to compete in a global hacking event: the DARPA-sponsored Cyber Grand Challenge, a contest designed for machines that hack other machines. The winner would take home $2 million (£1.5m). The battle was waged over dozens of rounds, with each machine striving to find the most software vulnerabilities, exploit them and patch them before the other machines could use the same tactics to knock it out of the game. Each machine was a cluster of processing power, software-analysis algorithms and exploitation tools purpose-built by its human team.

This was the ultimate (and, so far, the only) all-machine hacking competition. The winner, code-named Mayhem, now sits in the Smithsonian National Museum of American History in Washington DC, as the first "non-human entity" to win the coveted DEFCON black badge - one of the highest honours known to hackers.

Mayhem's next tournament, also in August 2016, was against teams of human hackers - and it didn't win. Although it could keep hacking for 24 hours like its Red Bull-fuelled human counterparts, it lacked the surge of energy and motivation that competing humans feel when, for example, a rival team fails to spot a software flaw. A machine can't yet think outside the box; it doesn't possess the spark of creativity, intuition and audacity that allowed the human hackers to win.

This will change in 2018. Advances in computing power and in theoretical and practical AI research, as well as breakthroughs in cybersecurity, promise that machine-learning algorithms and techniques will be a key part of cyberdefence – and possibly even attack. Human hackers whose machines competed in 2016 and 2017 are now evolving their technology, working in tandem with machines to win other hacking competitions and take on new challenges. (A notable example is Team Shellphish and angr, its open-source binary-analysis framework used to automate vulnerability discovery and exploitation; a sketch of the idea follows below.)
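To give a flavour of what such tooling does, here is a minimal sketch using angr's public Python API. The binary name ("./challenge") and the success marker ("WIN") are hypothetical stand-ins; the pattern - symbolically executing a program until an input that reaches an interesting state is found - is the basic building block behind automated vulnerability discovery.

```python
# Minimal sketch, assuming angr is installed and "./challenge" is a local
# binary that prints "WIN" on its success path (both are hypothetical).
import angr

proj = angr.Project("./challenge", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Symbolically explore execution paths until one writes the success
# marker to stdout (file descriptor 1).
simgr.explore(find=lambda s: b"WIN" in s.posix.dumps(1))

if simgr.found:
    solution = simgr.found[0]
    # Concretise the input (stdin, file descriptor 0) that drives the
    # program down the winning path.
    print("triggering input:", solution.posix.dumps(0))
```

In a competition setting, the same loop runs unattended over many binaries: a state that reaches a crash rather than a "WIN" string becomes a candidate exploit.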

From a defensive point of view, cybersecurity professionals already use a great deal of automation and machine-powered analysis. Yet the offensive use of automated capabilities is also on the rise. The majority of information-security professionals (62 per cent) surveyed by Cylance at Black Hat USA 2017 think that hackers will weaponise AI and begin using it offensively in 2018. And at DEFCON in 2017, a data scientist from Endgame (a US endpoint-security vendor) demonstrated and released a malware-manipulation environment for OpenAI Gym, the popular open-source toolkit for developing reinforcement-learning algorithms. Endgame created an automated tool that learns how to mask a malicious file from anti-virus engines by changing just a few bytes of its code in a way that preserves its malicious functionality. This allows it to evade common security measures, which typically rely on file signatures – much like a fingerprint – to detect a malicious file.
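Endgame's released environment works on real Windows binaries and a trained malware classifier, but the weakness it targets can be shown with a toy example. In the sketch below, a made-up "signature database" of file hashes stands in for an anti-virus engine, and the malware sample and perturbation are invented for illustration; it shows why even a single appended byte defeats an exact-match signature, and hence why small, functionality-preserving edits are enough to slip past fingerprint-style detection.

```python
import hashlib
import random

# Toy stand-ins: a "malicious" file and a signature database holding the
# hashes of known-bad files (the fingerprint-style matching the article
# describes). Real AV engines are far more sophisticated than this.
MALWARE_SAMPLE = b"\x4d\x5a" + b"payload-bytes" * 8
SIGNATURES = {hashlib.sha256(MALWARE_SAMPLE).hexdigest()}

def detected(file_bytes: bytes) -> bool:
    """Flag a file if its hash matches a known-malware signature."""
    return hashlib.sha256(file_bytes).hexdigest() in SIGNATURES

def perturb(file_bytes: bytes) -> bytes:
    """One functionality-preserving-style tweak: append a padding byte.
    (Appending overlay bytes to an executable usually leaves it runnable,
    which is why edits like this featured in Endgame's action set.)"""
    return file_bytes + bytes([random.randrange(256)])

sample = MALWARE_SAMPLE
steps = 0
while detected(sample):
    sample = perturb(sample)
    steps += 1

print(f"evaded hash-based signature after {steps} byte change(s)")
```

A learning agent generalises this loop: instead of one hard-coded tweak, it picks from a menu of file modifications and is rewarded whenever the detector's verdict flips, gradually learning which sequences of edits are most evasive.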

With several such tools in development, and competitions fuelling innovation, it is not hard to imagine that the next few steps up this evolutionary ladder could produce an autonomous system that adapts, learns new environments and identifies flaws it can exploit. That would be a true game changer.

This article was originally published by WIRED UK