Move over GUI, now's the time for the Intelligent User Interface

In 2018, an array of technologies will come together to intelligently solve real-world problems

Our human-machine interface (HMI) is broken. Designed in 1973 at Xerox PARC, it waits for a command from the human before responding, then guides them to push buttons in the right order to activate technology inside a computational box. At the time, the graphical user interface (GUI) model worked very well: most interactions with computers involved computation and word processing, tasks most likely to be done seated at a desk or table with a keyboard.

Since then, an accelerating number of material objects and processes have dematerialised into the computer, and we are taking these computers with us to places we never imagined they'd go. Yet we still use an HMI designed for sitting at a table, doing maths and writing on a computational box. It's time for an update.

In 2018, we will see the rise of the intelligent user interface (IUI), built on perceptual computing, a computing platform that brings technology out of the box and into the real world. Its combination of artificial intelligence, machine learning, sensors and robotics enables these technologies to perceive and navigate the real world and act intelligently on our behalf.

This is thanks, in large part, to a recent breakthrough in machine learning. Unlike previous algorithmic-learning models, this new approach uses layered neural networks to learn from examples. As a result, machine learning is surpassing human abilities when given a specific frame of reference. Machine vision, for example, has surpassed humans at image recognition, so much so that Google has created an AI able to detect cancer faster than a human. Wherever we focus machine learning, it learns from our human-curated examples, then rapidly builds on that knowledge with the speed, logic and lack of bias innate to technology. The more contextually specific the data, the faster and more perceptually accurate machine learning becomes. But without context, machine learning is lost.
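"Layered neural networks that learn from examples" is, in essence, supervised deep learning. Here is a minimal sketch of the idea, using a small multi-layer perceptron on scikit-learn's handwritten-digits dataset; the dataset, layer sizes and training settings are illustrative assumptions, not anything described in this article.

```python
# A minimal sketch of "learning from examples" with a layered neural network:
# a small multi-layer perceptron trained on labelled images of handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images, each labelled 0-9 by a human
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Two hidden layers of neurons; the model is never given rules, only examples.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")
```

The point of the sketch is the shape of the process: human-curated examples in, a perceptual capability out, with accuracy that improves as the data becomes more specific to the task.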


Far from just a virtual overlay, augmented reality (AR) is the user input/output layer of perceptual computing, and it lets us point the technology directly at real-world problems that need solving. Rather than guiding the user through the interface, we can instead guide the technology to the problem. For example, we can receive real-time translation from Google Translate simply by pointing the camera at a sign in another language. With Blippar, the "Wikipedia of the real world", we can aim the camera at something and automatically receive information about it. In each case, we are using AR to point perceptual computing at a problem, and the technology is responding intelligently to the context. This new user interface is the intelligent UI (IUI).
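To make "guide the technology to the problem" concrete, here is a schematic sketch of such a loop: the camera frame supplies the context, and the system chooses the action. The recognise() and respond() helpers are hypothetical stand-ins for real vision and translation/lookup services, not Google Translate's or Blippar's actual APIs.

```python
# A schematic intelligent-UI loop: the scene, not a menu, drives what happens next.
from dataclasses import dataclass

@dataclass
class Observation:
    label: str         # what the vision model thinks the camera is looking at
    confidence: float  # how sure it is

def recognise(frame: bytes) -> Observation:
    """Hypothetical stand-in for a machine-vision model classifying the frame."""
    return Observation(label="street_sign", confidence=0.93)

def respond(obs: Observation) -> str:
    """Pick an action from the recognised context rather than from a menu."""
    actions = {
        "street_sign": "translate the text and overlay it on the sign",
        "landmark": "fetch background information and show it in place",
    }
    return actions.get(obs.label, "ask the user what they want to do")

frame = b"...camera pixels..."   # whatever the AR device is pointed at
observation = recognise(frame)
if observation.confidence > 0.8:
    print(respond(observation))  # the context, not the user, drives the interface
```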

AR lets us see the virtual while providing the user-specific contextual frame that machine vision needs to excel. In turn, technology will eventually become capable of understanding the world faster than human perception, anticipating human intent based on what it has learned, and taking action. The more AR proliferates, the better and more intelligent the IUI will become.

Imagine, then, what will happen in 2018 when approximately one billion AR devices come online, all armed with personal AI assistants eager to adapt to individual usage.

This will be the era of rapid learning. There are likely to be missteps along the way as the technology sorts through the dark data of unexpected encounters. But it won't be long before our HMI becomes the ubiquitous IUI we will all rely on.

This article was originally published by WIRED UK