Date: Fall 2013, Spring 2014
Disciplines: Image Processing, Computer Vision, Machine Learning, Remote Sensing
Technologies Used: MATLAB, C, ArcGIS, Orfeo Toolbox
Team: Mark Jouppi, Shamn Singh, Jordan Zhang. Advised by Professor Avideh Zakhor.
The goal of this research project, funded by the Intelligence Advanced Research Projects Activity (IARPA), is the following: given a ground-level query photo taken at an unknown location and a large database of reference data, generate a probability distribution over the most likely places in the world where the photo was taken. Traditional approaches in the literature rely on large geotagged ground-level photo reference databases such as Google StreetView. In these schemes, the query photo is matched to its nearest-neighbor reference photo using feature matching and appropriate scaling techniques, and the geotag of that nearest neighbor is transferred to the query. But what about parts of the world that Google StreetView doesn't cover? Lack of coverage is a common problem in smaller towns as well as in large areas of the developing world.
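The nearest-neighbor geotag transfer described above can be sketched in a few lines. This is a minimal illustrative example, not the project's actual code: the global descriptors, distance metric, and coordinates below are all hypothetical stand-ins for whatever features a real system would extract.

```python
import numpy as np

def transfer_geotag(query_desc, ref_descs, ref_geotags):
    """Return the geotag of the reference image whose descriptor
    is nearest (L2 distance) to the query's descriptor."""
    dists = np.linalg.norm(ref_descs - query_desc, axis=1)
    return ref_geotags[int(np.argmin(dists))]

# Toy example: three reference images with 4-D descriptors and (lat, lon) tags.
refs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
tags = [(37.87, -122.27), (48.86, 2.35), (35.68, 139.69)]

query = np.array([0.9, 0.1, 0.0, 0.0])  # closest to the first reference
print(transfer_geotag(query, refs, tags))  # → (37.87, -122.27)
```

The scheme only works when some reference photo was actually taken near the query location, which is exactly the coverage assumption that breaks down outside StreetView's footprint.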
To address this problem, we use overhead reference data, namely satellite imagery and digital elevation maps, for which coverage exists for the entire world (at varying resolutions, of course). Determining where an arbitrary ground photo was taken using only overhead data is a very hard, unsolved, cutting-edge research problem. As a first step, in this project I develop computer vision and image processing techniques for extracting buildings from satellite images. The idea is that buildings extracted from overhead imagery can be matched to buildings visible in a ground-level query photo in order to determine where in the world the photo was taken.
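One of the simplest ways to pull candidate buildings out of an overhead image is to threshold bright rooftops and label the resulting connected components, discarding blobs too small to be a building footprint. The sketch below illustrates only that baseline idea on a toy array; the threshold, minimum footprint size, and synthetic "image" are all invented for illustration and are not the techniques actually developed in this project.

```python
import numpy as np
from scipy import ndimage

# Toy 8x8 "satellite image": bright pixels stand in for rooftops.
img = np.zeros((8, 8))
img[1:3, 1:4] = 1.0   # hypothetical building 1
img[5:7, 5:8] = 1.0   # hypothetical building 2

mask = img > 0.5                 # threshold bright pixels
labels, n = ndimage.label(mask)  # connected-component labeling

# Keep only components larger than a minimum footprint (here, 2 pixels).
sizes = ndimage.sum(mask, labels, range(1, n + 1))
buildings = [i + 1 for i, s in enumerate(sizes) if s > 2]
print(n, buildings)  # → 2 [1, 2]
```

Real satellite imagery is far noisier, so a practical pipeline (e.g., in MATLAB or the Orfeo Toolbox) would add radiometric correction, texture or spectral cues, and shape regularization on top of this basic segmentation step.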
The following paper details this research: