A new type of thinking machine that could completely change how people interact with computers is being developed at the Department of Energy's Sandia National Laboratories.
Over the past five years, a team led by Sandia cognitive psychologist Chris Forsythe has been working on creating intelligent machines: computers that can accurately infer intent, remember prior experiences with users, and allow users to call upon simulated experts to help them analyze problems and make decisions.
Forsythe's team was originally trying to create a "synthetic human" – software capable of thinking like a person – for use in national defense.
The thinking software was to create profiles of specific political leaders or entire populations. Once programmed, a synthetic human could, in combination with analytic tools, predict potential responses to various hypothetical situations.
But along the way, the experiment took a different turn.
Forsythe needed help with the software, and asked some of the programmers in Sandia's robot lab for assistance. The robotics researchers immediately saw that the technology could be used to develop intelligent machines, and the research's focus quickly morphed from creating computerized people to creating computers that can help people by acting more like them.
Synthetic humans are still a big part of the Sandia cognitive machines project, but researchers have now extended their idea of what the technology can and will ultimately be used for.
"We would like to advance the field of simulation, and particularly simulations involving synthetic humans, to the point that it becomes a practical tool that can be used by anyone to answer a wide range of everyday questions," said Forsythe.
But fear not – this is not a new incarnation of Clippy the paperclip, Microsoft's much-maligned "helper application."
"Clippy is a wonderful example of what not to do," said Forsythe. "Actually, most forms of online help are good examples of what not to do."
When two humans interact, two (hopefully) cognitive entities are communicating. As cognitive entities – thinking beings – each has some sense of what the other knows and does not know. They may have shared past experiences that they can use to put current events in context; they might recognize each other's particular sensitivities.
In contrast, Forsythe said, Clippy illustrates a flawed one-size-fits-all, lowest-common-denominator approach.
Forsythe and his team are trying to mimic real human interaction by embedding within computers a humanlike cognitive model, one that lets the machine interact with its user in a way that more closely resembles communication between two thinking people.
"If you had an aide tasked with watching everything you do, learning everything they could about you and helping you in whatever way they could, it is extremely unlikely that your interactions with that aide would in any way resemble interactions with Clippy," Forsythe said.
Forsythe believes the technology his team is developing will eventually be ubiquitous and allow almost anyone to quickly configure and execute relatively complex computer simulations.
"For instance, sitting in my car at a red light, I should be able to set up and run a simulation that shows me possible effects on traffic of the accident that is ahead of me," Forsythe said.
"Such a tool would not necessarily tell me the answer, but it would augment my own cognitive processes by making me aware of potential realities, as well as the interrelationships between various factors that I may or may not be able to control, influence or avoid."
Computer software often, but not exclusively, relies on programmed rules. If "A" happens, then so does "B." Humans are a bit more complex. Stress, fatigue, anger, hunger, joy and differing levels of ability can change how humans respond to any given stimulus.
"Humans are certainly capable of logical operations, but there is much more to human cognition," said Forsythe.
"We've focused on replicating the processes whereby an individual applies their unique knowledge to interpret ongoing situations or events. This is a pattern recognition process that involves episodic memory and emotional processes but not much of what one would typically consider logical operations."
Sandia's work on cognitive machines took off in 2002 with funding from the Defense Advanced Research Projects Agency to develop a real-time machine that could figure out what its user is thinking.
This capability would open the way to systems that augment their users' mental abilities through "discrepancy detection," in which the machine uses an operator's cognitive model – what the machine knows about its user – to monitor its own state.
When what is happening on or to the machine diverges from the operator's assumed perceptions or typical behavior, the system can signal a discrepancy alert.
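As a rough illustration of that loop (a minimal sketch; the shape of the user model and the alert threshold are invented, not Sandia's), the machine keeps a running profile of the operator's habits and flags actions that fall outside it:

    # Hypothetical discrepancy detector: compare what the operator actually
    # does against a learned model of that operator's typical behavior.

    class OperatorModel:
        def __init__(self):
            self.counts = {}   # how often the user has taken each action
            self.total = 0

        def observe(self, action):
            self.counts[action] = self.counts.get(action, 0) + 1
            self.total += 1

        def typicality(self, action):
            # Fraction of past behavior this action accounts for.
            return self.counts.get(action, 0) / self.total if self.total else 0.0

    def check_action(model, action, threshold=0.05):
        # Raise an alert when an action is rare for this particular user,
        # then fold the observation back into the model.
        if model.total > 20 and model.typicality(action) < threshold:
            print(f"Discrepancy alert: '{action}' is atypical for this operator.")
        model.observe(action)

A real system would track far richer context (task, workload, even physiology), but the shape of the loop is the same: model the user, watch for divergence, speak up.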
The idea is to figure out ways to make humans smarter by improving human-hardware interactions, said John Wagner, manager of Sandia's Computational Initiatives Department.
Early this year, work began on Sandia's Next Generation Intelligent Systems Grand Challenge project. The goal of the Grand Challenge is to significantly improve the human capability to understand and solve national security problems amid exponentially growing information and increasingly complex environments, said Larry Ellis, the principal investigator.
Forsythe believes that cognitive machine technology will be embedded in most computer systems within the next 10 years. His team has completed trial runs of methods that allow the knowledge of a specific expert to be captured in computer models.
They've also worked out methods to give synthetic humans episodic memory (memory of experiences), so that computers can apply knowledge of specific past experiences to new problems, much as people do every day.
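One way to picture episodic memory in software (again a hypothetical sketch, with the episode format and similarity measure invented for illustration) is a store of past experiences that is searched for the closest match to the current situation:

    # Hypothetical episodic-memory store: each episode records the features
    # of a situation, the action taken, and how well it turned out.

    class EpisodicMemory:
        def __init__(self):
            self.episodes = []

        def remember(self, features, action, outcome):
            self.episodes.append((set(features), action, outcome))

        def recall(self, features):
            # Return the action from the most similar past episode that
            # ended well, or None if nothing relevant is remembered.
            current = set(features)
            scored = [(self._overlap(current, feats), action)
                      for feats, action, outcome in self.episodes
                      if outcome > 0]
            return max(scored, key=lambda s: s[0])[1] if scored else None

        @staticmethod
        def _overlap(a, b):
            # Jaccard overlap between two sets of situation features.
            return len(a & b) / max(len(a | b), 1)

Echoing Forsythe's red-light example, such a store could suggest what worked in a similar jam before:

    memory = EpisodicMemory()
    memory.remember(["red_light", "accident_ahead"], "reroute", outcome=1)
    memory.recall(["red_light", "heavy_traffic"])   # -> "reroute"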
"I can think of no better use of available CPU cycles than to have the machine learn and adapt to the individual user," said Forsythe. "It's the old issue of homogeneity vs. heterogeneity."
"Throughout the history of the computer industry, the tendency has been to force users toward a homogeneous model, instead of acknowledging and embracing the individual variability that users bring to computing environments."