Google scientists have developed a new neural network that can identify the geographical location in which a photograph was taken without the need for geotags or GPS. The software, developed by Tobias Weyand and colleagues, is called Google PlaNet, and according to Discovery News it is compact enough to fit on a smartphone.
PlaNet uses an enormous database of geotagged images to identify locations. The software divided the globe into a grid and sorted 91 million photographs in this database into the correct ‘box’ using their tagged coordinates, creating a virtual map of images against which it can compare any photograph it is fed. To test the system, Google scientists gave PlaNet 2.3 million geotagged Flickr photos and asked it to work out where they were taken. PC Magazine reports that the new software rose to the challenge. Although PlaNet is no more capable of identifying the location of a blurry or severely underexposed photo than a human would be, Weyand told MIT Technology Review that PlaNet’s ability to recognize the locations of photographs is superior to even the most well-traveled human:
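The grid-and-box idea described above can be sketched in a few lines. This is an illustrative assumption only: the article does not specify how PlaNet partitions the globe, so the uniform one-degree cells and the `cell_index` helper below are hypothetical, not Google's actual scheme.

```python
# Illustrative sketch, not PlaNet's real partitioning: map a photo's
# latitude/longitude to a cell in a uniform grid over the globe, so that
# geotagged training photos can be allocated to their 'box'.

def cell_index(lat, lon, cell_degrees=1.0):
    """Return a (row, col) grid cell for a coordinate, using
    cell_degrees x cell_degrees boxes over the globe."""
    row = int((lat + 90.0) // cell_degrees)   # row 0 starts at the south pole
    col = int((lon + 180.0) // cell_degrees)  # col 0 starts at -180 longitude
    return row, col

# A geotagged training photo is then filed under its cell; classifying a
# new photo amounts to predicting which cell it belongs to.
photo = {"lat": 48.8584, "lon": 2.2945}  # e.g. a photo tagged near the Eiffel Tower
print(cell_index(photo["lat"], photo["lon"]))  # -> (138, 182)
```

Framing localization as picking a grid cell turns the problem into an ordinary classification task, which is what lets a neural network learn it from tagged examples.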
We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.
While not infallible, PlaNet beat a well-traveled human opponent in recognizing locations in over half the tests. The software correctly identified the continent in 48 percent of cases, the country in 28.4 percent, the city in 10.1 percent, and even the exact street where the photo was taken in 3.6 percent. While the possible future uses of this technology are still unclear, for now it could be useful in pinpointing the locations depicted in old photographs from before the days of geodata-tagging.