Terrapattern finds the parts of cities other maps leave unlabelled. From San Francisco's empty swimming pools to New York's seas of shipping containers, this search engine for satellite imagery uncovers otherwise invisible urban patterns. Click anywhere on a map of one of the seven cities scanned with Terrapattern, and its algorithm finds similar-looking areas in the same city.
"We try to indicate what the world might be like in a few years," says Golan Levin, who created the system with his team of artists and coders at Carnegie Mellon University.
Terrapattern's deep convolutional neural network is trained using hundreds of thousands of satellite images from collaborative mapping wiki OpenStreetMap. Initial layers of algorithmic neurons identify basic visual elements - edges, curves, colours - and pass that information on to higher-level neurons that categorise images into 466 types.
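The layered idea described above, where low layers respond to edges and a higher layer turns those responses into category scores, can be sketched in miniature. This is not Terrapattern's actual network, just a toy illustration with hypothetical names (`conv2d`, `describe_patch`) and untrained random weights; only the figure of 466 categories comes from the article.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# First-layer-style filters: simple horizontal and vertical edge detectors,
# the kind of basic visual elements early CNN layers tend to learn.
EDGE_FILTERS = [
    np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float),
    np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float),
]

def describe_patch(patch, weights):
    """Pool the edge responses, then let a higher layer map them to
    one score per category (Terrapattern sorts images into 466 types)."""
    features = np.array([np.abs(conv2d(patch, f)).mean() for f in EDGE_FILTERS])
    return weights @ features

rng = np.random.default_rng(0)
weights = rng.normal(size=(466, len(EDGE_FILTERS)))  # untrained stand-in weights
patch = rng.random((16, 16))  # stand-in for a small satellite tile
scores = describe_patch(patch, weights)
print(scores.shape)  # (466,)
```

A real system would learn both the filters and the category weights from the labelled training images rather than hard-coding or randomising them.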
This process teaches Terrapattern how to identify similarities. For the big picture above, it was asked to match a query patch containing rows of shipping containers. When it ran out of containers, it moved on to the next-most-similar patterns, such as pedestrian crossings and train yards.
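That fallback behaviour, returning the next-most-similar results once the exact matches are exhausted, is what a nearest-neighbour search over feature vectors gives you for free. Below is a minimal cosine-similarity sketch under assumed conditions (128-dimensional descriptors, a hypothetical `most_similar` helper); it is an illustration of the ranking idea, not Terrapattern's actual search code.

```python
import numpy as np

def most_similar(query_vec, index_vecs, k=5):
    """Rank indexed patch descriptors by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = m @ q
    order = np.argsort(-sims)[:k]  # best matches first
    return order, sims[order]

rng = np.random.default_rng(1)
index = rng.normal(size=(1000, 128))  # descriptors for 1,000 map tiles
query = index[42] + 0.01 * rng.normal(size=128)  # a tile near a known one
ranked, sims = most_similar(query, index)
print(ranked[0])  # the closest tile is the one we perturbed: 42
```

Because the list is sorted by similarity, results simply degrade gracefully: once the true container yards are used up, the crossings and train yards that score next-highest appear.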
Levin's open-source algorithm lets anyone analyse their own map data - and, with luck, create an abstract artwork along the way.
This article was originally published by WIRED UK