Some time in 2017, a casino in North America called in Darktrace, a British cybersecurity company, to investigate a data leak. Most cybersecurity firms promise to block outside attackers from penetrating your organisation, but Darktrace, founded by former MI5 operatives and Cambridge mathematicians, takes a subtler approach. It uses machine learning to get to know a company from the inside – well enough that it can spot any deviation from the normal patterns of daily work. When the system spots something suspicious, it alerts the company.
Darktrace usually tells its customers not to expect much useful information in the first week or so, when its algorithms are busy learning. In this instance, though, the system almost immediately reported something odd: data leaking out of a fish tank. The casino had recently installed the aquarium as an attraction for guests. It had electronic sensors that communicated with the tank-servicing company, so that if the water dropped below a certain temperature, someone would be dispatched to fix the problem. Darktrace noted that more than 10GB of data had been transferred, via the tank, to an external device with which no other company device had communicated. An attacker in Finland had found an entry point to a supposedly well-protected citadel.
Strange as it is, the fish-tank story is reassuringly familiar. We are used to the idea that there are bad actors out there attempting to hack into companies and governments. But in fact, many of the threats that Darktrace uncovers are perpetrated by trusted people inside an organisation. “The incidents that make my jaw drop, the really audacious ones, tend to involve employees,” says Dave Palmer, co-founder and director of technology at Darktrace. He told me about the case of an Italian bank which discovered, after installing Darktrace’s system, that computers in its data centre were engaged in unusual activity. Eventually, they were found underneath the false flooring (data centres have raised floors to allow for air circulation). Members of the bank’s IT team had been siphoning off new computers, hiding them and using them to mine bitcoin. That wasn’t the only incident: at another company, an executive had set up a porn site, complete with billing system, from his office PC. In another case, a senior employee at a retailer was sending customer credit-card details to a site on the dark web.
The notion of the insider threat has become a hot topic among those whose job it is to protect organisations against digital crime. “If I was a chief security officer, my own employees would be what keeps me up at night,” says Justin Fier, Darktrace’s director for cyber intelligence and analytics. Cybersecurity firms are turning their gaze away from the horizon and back to the citadel itself. This represents a huge shift in the way managers think about the integrity of their organisations. Employees need to get used to the idea that they may be one false move away from being deemed human malware.
In 1988, Robert Morris, a graduate student at Cornell University in New York, set out to gauge the size of the internet by writing a program capable of burrowing into different networks. The worm he released from the Massachusetts Institute of Technology (MIT) servers had an impact he did not anticipate. It spread aggressively and rapidly, leaving copies of itself on host computers and overloading systems. Unwittingly, Morris had created a worm that crashed much of the then-nascent internet, impacting hundreds of businesses. In 1989, Morris was indicted under the newly minted Computer Fraud and Abuse Act (he was later appointed an assistant professor at MIT).
The Morris Worm, as it became known, was a prototype computer virus: code capable of spreading from host to host, replicating itself. It was also the first widely publicised example of what became known as a denial-of-service attack, in which the perpetrator, instead of trying to steal data, seeks to make a system impossible to use. The cybersecurity industry was formed in response to the Morris Worm and other nuisance attacks. As companies became more reliant on computers for their day-to-day operations, jobs were created for experts who could stop viruses from entering their networks. The industry’s mantra in those early days was “Prevention is better than cure”.
Today, it is becoming accepted that there is no such thing as prevention. “Organisations are going to get breached. It’s a question of when, not if,” says Fier. Based in Washington DC, he joined Darktrace three years ago, after working for US intelligence agencies on counterterrorism. “Networks have highly porous perimeters. A skilled adversary will always find a way in,” he says.
Conventionally, security software programs have acted as gatekeepers patrolling the company’s perimeter. They man the gates, scanning for features that fit the descriptions they’ve been given of known digital attackers. But today, malware is much easier to create and distribute. Viruses move faster and act more intelligently. Their creators give them baffling, frequently changing disguises which confuse software designed to recognise known threats.
Organisations are also vulnerable at many more points. The internet of things is rapidly expanding what security experts call “the attack surface”. Intruders can now enter an organisation through a vending machine, a smart thermostat or a TV, not to mention one of the many connected devices that employees carry or wear every day. The gatekeepers, outwitted and overrun, have responded like authoritarian leaders attempting to clamp down on crime, introducing increasingly draconian security policies. But when employees subsequently find it harder to work, innovate and experiment, the business suffers.
The human brain has two ways of coping with risk. The first is to spot a threat and instigate the appropriate action. Psychologist Daniel Gilbert describes the brain as “a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get.” A primate on the savannah knows lions are a threat to her safety. When she sees one, a feeling of fear drives her to run or hide. This is our most ancient form of risk-management.
A second capacity to avoid possible dangers was developed much later in human evolution: the ability to anticipate and pre-empt. Hence helmets, insurance and antivirus software. This second approach means we can arrange our lives in such a way as to reduce exposure to threats. But it comes with downsides. For one thing, it hampers our freedom. If you know of a spot in the jungle where lions are frequent visitors, you don’t go there, even if there might be something wonderful to see or eat in that locale. For another, it relies on you having a fair idea of what future dangers might be. After imposing constraints on your jungle-roaming and investing in lion-protective body armour, you end up dying from a snakebite to the foot.
In their book Insider Threats, two of America’s foremost national security academics, Matthew Bunn and Scott D Sagan, explore the possibility that bad actors, including terrorists, might infiltrate nuclear facilities. They note that most people in the security field have backgrounds in engineering and safety, in which the goal is to defend against natural disasters and accidents, rather than against “reactive adversaries”. That can create a mindset of compliance – a belief that once the right system is in place, it will be effective. But insiders can figure out how to exploit its vulnerabilities without raising alarms. Edward Snowden was a computer systems administrator, and part of his job as a contractor for the National Security Agency (NSA) was to look for cybersecurity weaknesses. As James Clapper, former director of US National Intelligence, put it, Snowden was “pretty skilled at staying below the radar”.
Darktrace started as a collaboration between British intelligence operatives and Cambridge academics. Palmer, for instance, worked on cybersecurity at MI5 and GCHQ. “A few of us were kicking this idea around, but we didn’t start in earnest until after the 2012 Olympics in London, which kept us busy,” he says. Their starting point was a conviction that nearly everyone was getting cybersecurity wrong. “The industry is almost entirely focused on being able to recognise the repetition of attacks seen in the past,” he says. For months, they discussed this problem over coffee with AI researchers from Cambridge.
When the company started, AI was still regarded as a theoretical field with little practical application. After initially struggling to describe its technology to potential clients, Darktrace’s founders hit upon a metaphor: the immune system. Like the nervous system, it is a wonder of complex information processing. It learns the body’s normal patterns of life and targets deviants. Once the system has recognised molecules that are foreign to the body, different types of antibodies swarm around the invader, co-ordinating their activity with each other (unlike the nervous system, which has a central controller – the brain – the immune system is self-organising). The system learns: after eliminating the pathogen, it retains a memory of it, so that it is better prepared for future encounters. Crucially, it doesn’t have to know the invader to recognise it as a threat. As viral DNA mutates, the immune system adapts its tactics in order to defeat it.
Cybersecurity has long used biological metaphors for attackers – the virus, the worm – but, until a few years ago, it didn’t have the equivalent of an immune system. Most companies still use security systems that can cope with the familiar but are flummoxed by the unfamiliar. Antivirus software downloads descriptions of previous attacks, then watches what you do and checks whether it matches one of those historically known attacks. That can work fine – most of the time. But unlike the immune system, antivirus software has no way of adapting to, or even noticing, a new threat. When it misses one, it goes on missing it, again and again. As Sony, Yahoo! and the NHS have all recently discovered, an undetected threat can very quickly cause huge disruption. Darktrace doesn’t promise to stop breaches altogether; it promises to stop breaches turning into disasters.
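To make the difference concrete, here is a deliberately simplified sketch of what signature-based scanning amounts to: a file is flagged only if its fingerprint matches a catalogue of past attacks. The hash, names and sample data below are invented for illustration; no vendor’s actual code looks quite like this.

```python
# Toy sketch of signature-based scanning: flag a file only if its
# fingerprint matches a list of previously catalogued attacks.
# The entry below is a made-up placeholder, not a real malware signature.
import hashlib

KNOWN_BAD_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_known_threat(file_bytes: bytes) -> bool:
    """Return True only when the file matches a historically known attack."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A new or freshly disguised piece of malware produces a fingerprint that is
# not on the list, so this check never fires for it -- and keeps not firing.
print(is_known_threat(b"some brand-new malware"))  # False, every time
```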
“The traditional way to do cybersecurity is to exert more and more control over the business,” explains Palmer. “You take a list of worms, viruses and malware and plug it into the email system and laptops, so you know that if one of those attacks happens it will be stopped. Then you write down a list of all the things you don’t want your employees doing – for instance, uploading data to their personal Dropbox or visiting certain websites. Eventually, all those rules and policies harm the agility of the business, because they hinder people’s ability to experiment. Sometimes you walk into a big organisation and you feel your soul drip out. Everyone has the same laptop. They can only visit 600 websites. They can’t take inputs from other sources. It makes it harder to collaborate and create.”
“Another approach would be to write down everything that happens in the business – every action that gets taken on a regular basis – then look at the data, spot any unusual activity and intervene before it becomes dangerous,” he says. “But in a big business, there might be ten million things happening every day.” This is the kind of problem that machine learning is good at solving. It can pick up patterns and detect anomalies, at scale and at speed. “The business says to Darktrace’s immune system: ‘I want you to go and learn what’s normal – and then tell me what’s interesting.’”
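As a rough illustration of what “learn what’s normal, then tell me what’s interesting” might look like in code, the toy example below builds a statistical baseline of each device’s outbound traffic and flags large deviations. Darktrace’s real models are proprietary and far more sophisticated; the device names, figures and threshold here are invented.

```python
# Toy anomaly detection: learn each device's normal outbound data volume,
# then flag observations that deviate wildly from that baseline.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Learn each device's average outbound MB/hour and its variability."""
    return {device: (mean(values), stdev(values)) for device, values in history.items()}

def interesting(device: str, observed_mb: float,
                baseline: dict[str, tuple[float, float]],
                threshold: float = 4.0) -> bool:
    """Flag traffic more than `threshold` standard deviations above normal."""
    avg, spread = baseline[device]
    return observed_mb > avg + threshold * max(spread, 1.0)

history = {
    "reception-pc": [2.1, 1.8, 2.5, 2.0, 1.9],
    "fish-tank-sensor": [0.1, 0.2, 0.1, 0.1, 0.2],  # normally sends almost nothing
}
baseline = build_baseline(history)
print(interesting("fish-tank-sensor", 10_000.0, baseline))  # True: ~10GB is wildly abnormal
```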
Unlike conventional security software, AI-based systems don’t get stumped by technological diversity. One of Darktrace’s first clients was an energy company that deploys the latest software alongside power stations built in the 60s. The data networks of large enterprises are like highly complex living organisms with millions of moving parts, says Fier. “They move with us, grow with us. One business might use the same technology as another, but, like human beings, each is fundamentally different. That makes them hard to monitor.”
It probably never occurred to the casino that there was a computer underneath the fish tank, let alone that someone should be monitoring it. “On paper it looked like all the other desktop devices,” says Fier. “But we could see it was acting differently. It stuck out.”
After Edward Snowden’s data dump from the NSA and Chelsea Manning’s transfer of military intelligence to WikiLeaks, governments and companies woke up to the dangers of sabotage from within. President Obama established the National Insider Threat Task Force and ordered all government agencies to take steps to protect themselves from rogue staff. The possibility that someone within government might seek to undermine its operations is hardly new: Robert Hanssen, an FBI agent, leaked thousands of pages of classified material to Soviet and Russian intelligence services from 1979 to 2001. But as internet-connected computers have become central to most organisations, the population of potential miscreants has vastly expanded. According to a report from Cybersecurity Insiders, two thirds of US companies now consider insider threats more likely than external attacks. Even middle-ranking insiders can access vast amounts of sensitive data and get it out of the building in the blink of an eye. Their motivations? Money, ideology, petty revenge – and the pure rush of breaking rules.
Some cybersecurity experts believe companies need more than just better technology to cope with such threats; they need to get better at understanding people. Shari and Charles P Pfleeger are known as the godparents of cybersecurity. Their comprehensive overview of the field, Security in Computing, was published in 1984 and is now in its fifth edition. “The nature of the threat has changed so much,” Shari says. “Computing used to be an elite activity. Computers were used by a small group of people who were trained in maths and statistics. Nobody imagined that we would all be walking around with a powerful and easy-to-use computer in our pockets. Developing software requires less knowledge than it used to; you can write apps with little or no technical training. What hasn’t changed is that computer experts are not trained in how people perceive and use software.”
Cybersecurity was once regarded as a purely technical problem, the province of computer experts. But in recent years there has been a growing awareness that human beings are central to it. Cybersecurity experts, however, still expend comparatively little effort trying to understand how employees think and behave. Companies that install a system as sophisticated as Darktrace may be tempted to give themselves a pass on the human problem. But although AI-based systems can catch insider threats before they develop into disasters, they can’t yet stop an insider becoming a threat in the first place. Doing so would require a fine-grained understanding of human psychology. “The way social sciences are talked about in cybersecurity is a little like how I remember people talking about AI in the 70s,” says Shari Pfleeger. “They said it was airy-fairy, that it had no real-world application.”
Charles Pfleeger told me that threat-detection technology needs to be better integrated with the study of people. Homeland Security asked him for an algorithm to predict insider attacks. “Sounds simple, but you get a lot of false negatives and false positives, because people are complex,” he says. “The truth is that humans can still make these judgements better than computers.” According to Pfleeger, such algorithms need layers of questions. Once a person is identified as a risk, then more information is required: is there something happening in their private life? Is there something at work they’re not happy about? “AI can get better at this kind of thing, but it won’t if it doesn’t incorporate the science of human behaviour,” he concludes.
Angela Sasse is professor of human-centred security at University College London (UCL). When she first joined its computer science department 20 years ago, she was asked by BT to help it solve a problem: the cost of running the IT help desk that handled password resets had trebled every year for three years. “They wanted us to find out why the stupid users couldn’t remember their passwords,” Sasse says. “So we did a study, and the answer was, the users were not stupid – they had been set impossible rules.” Human brains simply are not optimised to memorise long passwords and six-digit PINs. As a result, a corrosive feedback loop developed. “Security people started issuing threats,” she says. “So the users started to lose their belief in all security, all rules.”
Sasse co-wrote a paper based on these findings, “Users are not the enemy”, which is now regarded as seminal by cybersecurity academics. In the business world, there is still an iron curtain between the IT department and the rest of the company. “The technical and business people don’t talk about how they can work together to be resilient,” she says. “Security people are part of a cabal of international experts who make up rules and then try to ram them down into individual companies. But you can’t just take commandments from the mountain, you need to understand the priorities of people at your company – and you can’t do that if you don’t listen. If all you do is issue sanctions, nobody will listen to you.”
Debi Ashenden, a professor of cybersecurity at the University of Portsmouth, agrees. “It all starts with how you understand risk,” she says. “Traditionally, cybersecurity has used models of risk from maths and the hard sciences. But any time you have human beings interacting with tech, risk is fuzzier.” It’s about social context, she argues: how that person is feeling; what their taste for risk is. This information can’t be assessed by a questionnaire. Typically, people give the minimum information needed, or fudge their answers, so they can get on with their work. “The only way to get to the truth is to have open conversations,” Ashenden says. “A security professional once told me that when you have a relationship of trust with staff, they ’fess up to things they’d never otherwise tell you.”
Nevertheless, attempts are being made to quantify the risk of an employee becoming a threat. Consultancy firm EY advises its clients to develop “comprehensive surveillance programs” that “cast light into [your] company’s dark corners”. Startups, meanwhile, offer data-based tools which assign “risk scores” to employees, based on items such as expense compliance, log-in times and attendance records. Is someone engaging in email conversations a lot less than they were, indicating disengagement from the company? Are they searching the network for information in a way that suggests malicious intent? The principle of hunting for anomalies is similar to Darktrace’s method, but with a focus on the human instead of the machine. Michael Gelles, a forensic psychologist and an expert on insider threats, told Security magazine that when several anomalies come together, the next step is to take a closer look at the individual: “We see an employee downloading a lot of information, and we also notice that their performance is poor, and they’re not coming into work that often. Those irregularities need to be investigated.”
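The scoring such tools perform can be imagined, in a much-reduced form, as a weighted combination of behavioural signals. The signals, weights and cut-off in the sketch below are invented for illustration and are not drawn from EY’s or any other vendor’s product.

```python
# Minimal sketch of an insider "risk score": a weighted sum of a few
# normalised behavioural signals (each scaled 0-1). All values are invented.

def risk_score(signals: dict[str, float]) -> float:
    """Combine behavioural signals into a single score between 0 and 1."""
    weights = {
        "after_hours_logins": 0.3,  # unusual log-in times
        "bulk_downloads": 0.4,      # searching or downloading far more than peers
        "email_drop_off": 0.2,      # sharply reduced email activity (disengagement)
        "expense_flags": 0.1,       # expense-compliance irregularities
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

employee = {"after_hours_logins": 0.8, "bulk_downloads": 0.9, "email_drop_off": 0.7}
score = risk_score(employee)
if score > 0.6:  # arbitrary threshold for a closer look
    print(f"score {score:.2f}: several anomalies together -- review by a human")
```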
Deanna Caputo is a behavioural scientist specialising in cybersecurity at MITRE, a think tank sponsored by the US government. Her aim is to establish a baseline for normal patterns of behaviour that make it easier to assess which deviations are significant. The trouble with most insider-threat programs, she told me, is that they are not based on a sophisticated understanding of human psychology. Consequently, they have a tendency to see threats where there are none: “There is a very high false positive rate with current programs,” she says. Caputo wants to ground the practice of insider-threat detection in behavioural science. She is building data-based models of what evasive computer behaviour looks like, but also of actions which indicate if someone is to be trusted. “IT professionals sometimes talk about humans being the ‘weak link’ in security,” Caputo says. “They’re not the weak link – but they have been the missing link.”
Ian Leslie is an author and journalist
This article was originally published by WIRED UK