Businesses need to understand AI before putting it to work

Far from being a threat to our human workforce, AI should be harnessed to support and empower companies to make better, more creative decisions

Artificial intelligence is a big deal for business – it's the biggest marketing buzzword this side of cryptocurrency and it's set to make a very real difference to the future of work. But what does that really look like? How will AI support employees? And what responsibilities do you have when you start applying AI to customer data?

BT's Adastral Park research facility is at the forefront of the development and application of AI technologies in the UK, from using real-time network analysis to detect and protect against cyberattacks as they happen, to enabling customer service agents to anticipate the future needs of their clients based on customer behaviour trends.

But as with any system that relies on algorithmic decision-making and the handling of customer information, it's incredibly important to implement AI as a secure technology that empowers people and companies to make better, more appropriate and more creative decisions. And the first step is ensuring that people know exactly what they're dealing with.

What AI is... and what it isn't

Simon Thompson, head of practice for big data and customer experience at BT Research, is an advocate for clarity in the world of AI.

"People are getting some very odd perceptions about what AI can do," he says. "AI is still very mindless. It reflects and transforms different aspects of human thought and reflects them back in a different way, but it isn't a mind. There isn't an actor there."

Although certain elements of the press still like to play upon sci-fi visions that have more to do with humanity's concept of itself than the reality of modern machine learning and artificial intelligence, Thompson says it's important to understand that "we don't have any way to make AI understand the world as it is, but we do have a way to build powerful tools that we can use".

What we get out of AI is entirely dependent on what we put in: the datasets being analysed and the algorithms we apply to them.

Maja Pantic, professor of affective and behavioural computing at Imperial College London and research director of Samsung AI Research Centre in Cambridge, highlights the importance of understanding the limitations of AI and the necessity of using it to support human expertise.

In her research applying artificial intelligence tools to medical diagnosis, Pantic has found that an AI can detect cancerous lesions in a photo with accuracy equal to or better than a trained human's. Another application, which she is currently working on, assesses human facial reactions to detect subtle early signs of depression or dementia.

"This is the kind of thing we can use artificial intelligence for, but that doesn't mean that artificial intelligence will replace doctors," she says. AI is set to become another tool for medical practitioners, much like X-rays – making diagnosis and treatment easier, rather than replacing the medic.

AI for security

Meanwhile, at BT's research lab, AI is used to analyse vast amounts of data passing across the telecoms giant's network. Trained on what standard communications transactions look like, it watches for abnormal traffic and flags up anything unusual for further examination by human experts.
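BT has not published the inner workings of that system, but the general technique it describes, training a detector on traffic assumed to be normal and flagging statistical outliers for human review, is well established. Here is a minimal sketch using scikit-learn's IsolationForest and invented per-flow features; the feature choices and thresholds are illustrative assumptions, not BT's.

```python
# Illustrative sketch only -- BT's production system is not public. It shows
# the general idea: fit an anomaly detector on "normal" traffic, then flag
# outliers for escalation to human analysts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: bytes sent, packet count, duration (seconds).
normal_traffic = rng.normal(loc=[5000, 40, 2.0], scale=[800, 6, 0.4],
                            size=(10_000, 3))

# Train the detector on traffic assumed to be benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one typical flow, one short high-volume burst.
new_flows = np.array([
    [5100, 42, 2.1],     # looks like ordinary traffic
    [90_000, 900, 0.2],  # abnormal burst -- worth a closer look
])

# predict() returns 1 for inliers and -1 for outliers to hand to analysts.
for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "flag for human review" if label == -1 else "ok"
    print(flow, "->", status)
```

The real features, scale and models will differ, but the division of labour is the point: the machine does the sifting, and people do the judging.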

Alex Healing, BT's chief researcher in visual analytics and AI for security, explains: "AI already offers unique insights into cyberattack patterns in volumes of data which were previously unfathomable."

It wouldn't be possible for humans to monitor more than a tiny fraction of BT's network infrastructure like this, but with an AI on the team, malicious activity can be spotted early and the source of an attack identified and – using systems such as Project SATURN (Self-organising Adaptive Technology underlying Resilient Networks) – visualised to make it comprehensible to human observers.

Healing says of SATURN's ability to transform complex data into a clear visual representation: "Development of technology to better link AI systems with people is a key to widespread adoption, and interactive visualisation can be an effective tool for two-way communication between human and machine." As in the medical field and the legal profession, where artificial intelligence can help analyse, file and even fill in repetitive documents, these AI tools ultimately exist to support human staff, not to replace them or tell them how to do their jobs.

But it's important that the AI systems that analyse that cyberattack data are themselves kept secure. "AI is a key enabler for security, but also security is a key enabler for sustainable AI," says Healing. A compromised AI system could poison all the data it produces.
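To see why that matters, consider a toy demonstration, entirely separate from BT's systems: an attacker who can tamper with training data can quietly degrade every decision the model makes afterwards. The sketch below flips a fraction of training labels, a simple form of data poisoning, and measures the damage.

```python
# Toy demonstration (not BT's pipeline): label-flipping a slice of the
# training data -- a simple "data poisoning" attack -- degrades everything
# the model subsequently produces.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean data.
clean_acc = LogisticRegression(max_iter=1000).fit(
    X_train, y_train).score(X_test, y_test)

# An attacker with write access to the training set flips 30% of labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_acc = LogisticRegression(max_iter=1000).fit(
    X_train, poisoned).score(X_test, y_test)

print(f"clean accuracy: {clean_acc:.3f}  poisoned accuracy: {poisoned_acc:.3f}")
```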

Checks and balances

Pantic advocates for explainable AI, which calls for the background of every algorithm and dataset to be fully and clearly documented. "You want to have the AI explained to you in logical and readable terms," she tells WIRED. "How the algorithm has been trained, based on which data, and whether this data has biases or not, and how its conclusions have been made. This is something we're still working on."
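Pantic's own tooling isn't described here, but one widely used way to make a model's conclusions readable is permutation importance: shuffle each input feature in turn and see how much the model's accuracy suffers. A rough sketch, assuming scikit-learn and its bundled breast-cancer dataset:

```python
# A minimal sketch of one common explainability technique (not Pantic's own
# method): permutation importance reports, in plain terms, which inputs a
# trained model's conclusions actually depend on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")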

For BT's AI and security researchers, Healing says, a balance has to be struck between engineering AI systems that are resilient to attack and maintaining transparency: "On one hand, making AI more explainable will be hugely valuable in understanding when something goes awry, but on the other, increased understanding in malicious hands may more easily allow for targeted attacks to be crafted."

Where customers are involved, AI ethics specialist Damien Williams, a PhD researcher at Virginia Tech's Department of Science, Technology, and Society, says that, as with any system involving extensive data collection, it's important to take security into account.

"How will data be secured and protected against use by outside bad actors?" he asks. And if AIs are analysing consumer behaviour and making customer recommendations based on that, "how much of the work is being done to specifically push the customers into certain behaviours, rather than providing them with things they have chosen?"

Like Pantic and Thompson, Williams emphasises the importance of keeping both customers and staff informed, making sure they are aware of, and clearly understand, what is collected and known about them.

BT's Thompson says education is needed at all levels to ensure that people whose organisations work with AI – from engineers to CEOs to policymakers – have the understanding required to make decisions about it.

"On an executive and leadership level, there's a need for engagement training and work to direct the policy debate and help form that debate in a rounded way. If we're not careful, one set of voices will be heard at that level, and I think that it's important that a wide set of voices from our society have a say in how the AI policies of the future are created."

-- For more, visit bt.com

This article was originally published by WIRED UK