This article was first published in the December 2015 issue of WIRED magazine.
Researchers in Germany have just made lurking in the dark much harder. A team of computer scientists at the Karlsruhe Institute of Technology used deep neural networks -- multi-layered systems loosely modelled on the brain's networks of neurons -- to match a well-lit photo with a thermal image of the same face.
Unlike a person's visual appearance, their thermal signature varies widely depending on factors such as air temperature or their level of excitement. The solution was "to use pairs of thermal and visible images to train the neural network", explains lead researcher Saquib Sarfraz.
Feeding the network 4,585 images of 82 people, taken both in full light and in the dark using thermal sensors, resulted in an accuracy rate of more than 80 percent. Granted, Sarfraz explains, that kind of accuracy can only be attained if the network is provided with "four to eight normal pictures of a person".
If the network has just one visible-light picture, its accuracy drops to 55 percent. Still, a handful of mugshots would suffice to identify a suspect captured by an infrared camera. "It has many applications for law enforcement," Sarfraz says.
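For readers curious how this kind of cross-modal matching can work, the sketch below is a generic illustration under assumed details, not the Karlsruhe team's published method: the network (FaceEncoder), the helper functions (train_step, identify), the image size and the contrastive-style loss are all hypothetical. It demonstrates the idea Sarfraz describes: train on pairs of visible and thermal images so that pictures of the same face land close together in a shared representation, then compare a thermal probe against a few ordinary photos per person.

```python
# Minimal sketch (not the KIT team's code): learn a shared embedding for
# visible-light and thermal face crops from paired images, then match a
# thermal probe against a small visible-light gallery.
# Assumes paired 1x64x64 grayscale crops; sizes and loss are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEncoder(nn.Module):
    """Small CNN that maps a 1x64x64 face crop to a 128-d unit vector."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 8 * 8, 128)

    def forward(self, x):
        z = self.fc(self.conv(x).flatten(1))
        return F.normalize(z, dim=1)          # unit-length embeddings

visible_enc, thermal_enc = FaceEncoder(), FaceEncoder()
params = list(visible_enc.parameters()) + list(thermal_enc.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(visible_batch, thermal_batch):
    """One step on a batch of paired crops: row i of each tensor shows the
    same person, and each row is assumed to be a different identity.
    A symmetric cross-entropy over cosine similarities pushes matching
    visible/thermal pairs to score higher than mismatched ones."""
    v = visible_enc(visible_batch)
    t = thermal_enc(thermal_batch)
    logits = v @ t.T / 0.1                    # pairwise cosine similarities
    labels = torch.arange(len(v))
    loss = F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def identify(thermal_probe, gallery_photos, gallery_names):
    """Match one thermal crop (1x64x64) against a gallery of normal photos
    (Nx1x64x64) by nearest neighbour in the shared embedding space."""
    with torch.no_grad():
        probe = thermal_enc(thermal_probe.unsqueeze(0))
        gallery = visible_enc(gallery_photos)
        scores = (probe @ gallery.T).squeeze(0)
    return gallery_names[scores.argmax().item()]
```

In a sketch like this, adding more ordinary photos of each person to the gallery gives the nearest-neighbour comparison more chances to find a close match, which fits the pattern the researchers report: 55 percent accuracy from a single visible-light picture, rising above 80 percent with four to eight.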