Google is building its very own artificial brain using tens of thousands of computers, hoping to improve services like voice and image search. And Facebook has followed suit, aiming to solve its big data problems with help from the principles of neuroscience. There's even an open source framework for building software applications modeled on the brain.
Although mimicking the structure of the brain was one of the original techniques researchers experimented with when trying to create machine intelligence in the 50s and 60s, the idea -- called neural networking -- eventually fell out of favor. But now it's back with a vengeance, and it might just change the way computer hardware is designed.
Qualcomm is now preparing a line of computer chips that mimic the brain. Eventually, the chips could be used to power Siri or Google Now-style digital assistants, control robotic limbs, or pilot self-driving cars and autonomous drones, says Qualcomm director of product management Samir Kumar.
But don't get too excited yet. The New York Times reported this week that Qualcomm plans to release a version of the chips in the coming year, and though that's true, we won't see any real hardware anytime soon. "We are going to be looking for a very small selection of partners to whom we'd make our hardware architecture available," Kumar explains. "But it will just be an emulation of the architecture, not the chips themselves."
Qualcomm calls the chips, which were first announced back in October, Zeroth, after Isaac Asimov's zeroth law of robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
The Zeroth chips are based on a new architecture that radically departs from the designs that have dominated computing for the past few decades. Rather than executing one long stream of instructions, it mimics the structure of the human brain, which consists of billions of cells called neurons that work in tandem. Kumar explains that although the human brain processes information much more slowly than a digital computer, it can complete certain types of calculations much more quickly and efficiently, because it performs enormous numbers of them in parallel.
Even the world's largest supercomputers, by contrast, can harness "only" one million processing cores at a time.
What's more, Kumar explains, today's supercomputers must be programmed to break complex problems into smaller problems before they can work on them. The human brain can tackle a complex task -- identifying an object, for example -- without those extra steps. It starts work on the problem almost automatically.
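One way to see the parallelism Kumar is describing: in an artificial neural network, an entire layer of simple neuron-like units can be evaluated in a single operation, rather than one instruction at a time. Here's a minimal sketch in Python with NumPy; the layer sizes and random weights are purely illustrative, not anything from Qualcomm's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "layer" of 256 neuron-like units. Each unit weighs all 64
# inputs and fires based on the weighted sum -- and all 256 units
# are computed at once, not as a sequence of separate steps.
inputs = rng.standard_normal(64)          # e.g., a patch of pixel values
weights = rng.standard_normal((256, 64))  # one row of weights per unit

activations = np.maximum(0.0, weights @ inputs)  # every unit "fires" together
print(activations.shape)  # (256,)
```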
Ultimately, the goal of neural networking is to create computers that can learn. Instead of being given a list of instructions to complete a task, computers built around these chips could theoretically learn the task on their own -- given enough environmental cues and feedback. For example, Qualcomm trained a robot to navigate a grid by telling it which squares were the right ones to land on, rather than programming it with a set path to follow to specific squares.
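Qualcomm hasn't published the algorithm behind that demo, but what it describes -- rewarding the robot for landing on the right squares rather than scripting a route -- is the essence of reinforcement learning. Here's a toy Q-learning sketch of the same idea; the grid size, reward layout, and parameters are our own illustrative choices:

```python
import random

# Toy grid world: the agent starts at (0, 0) and is rewarded only
# for reaching the "right" square at (3, 3). It is never given a path.
SIZE = 4
GOAL = (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

# Q-table: estimated future reward for each (state, action) pair.
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    new = (r, c)
    return new, (1.0 if new == GOAL else 0.0)  # feedback, not instructions

for episode in range(500):
    state = (0, 0)
    while state != GOAL:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        new, reward = step(state, action)
        # Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted best value of the next square.
        best_next = max(Q[(new, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = new

# After training, following the highest-valued action from each square
# traces a route to the goal -- even though no route was ever specified.
state, path = (0, 0), [(0, 0)]
for _ in range(20):
    if state == GOAL:
        break
    action = max(ACTIONS, key=lambda a: Q[(state, a)])
    state, _ = step(state, action)
    path.append(state)
print(path)
```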
Qualcomm calls the Zeroth chips "neural processing units," or NPUs. But this isn't the only option for neural networking. Google is building its massive brain using existing graphics processing units, or GPUs, chips originally intended for high-end video gaming.
In fact, Qualcomm expects Zeroth chips to complement, rather than replace, other processors within a device. Just as your computer probably contains both a CPU and a GPU, Kumar believes the computers and smartphones of the future may have all three processors. He says NPUs could take much of the strain off CPUs and GPUs by handling the sorts of calculations that humans or even dogs can do easily but that vex supercomputers.
All of this is just around the bend. And yet it's still so far away.