We have discovered that 3D reconstruction can be achieved from a single still photographic capture, thanks to the accidental motions of the photographer while attempting to hold the camera still. Although these motions yield very small baselines, and therefore high depth uncertainty for any single pair of frames, in theory we can combine many such measurements over the duration of the capture process (a few seconds) to achieve usable depth estimates. We present a novel 3D reconstruction system tailored to this problem that produces depth maps from short video sequences captured by standard cameras, without the need for multi-lens optics, active sensors, or intentional motions by the photographer. This result suggests that depth maps of sufficient quality for RGB-D photography applications such as perspective change, simulated aperture, and object segmentation can come "for free" for a significant fraction of still photographs taken under reasonable conditions.
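To build intuition for why many tiny baselines can add up to a usable depth estimate, here is a minimal, self-contained sketch. It is not the paper's actual pipeline: it fuses noisy small-baseline triangulations of a single scene point by inverse-variance weighting of inverse depth, and all scene and noise parameters (true depth, baseline magnitude, disparity noise, frame count) are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's method): fusing many small-baseline
# inverse-depth measurements by inverse-variance weighting.

rng = np.random.default_rng(0)

f = 1781.0          # focal length in pixels (value stated for the dataset below)
z_true = 2.0        # true scene depth in meters (assumed)
sigma_d = 0.5       # disparity measurement noise in pixels (assumed)
n_frames = 100      # frames captured over a few seconds (assumed)

# Accidental hand motion: tiny random baselines on the order of millimeters.
baselines = np.abs(rng.normal(0.0, 0.003, n_frames)) + 1e-4  # meters

# Each frame yields a noisy disparity d = f * b / z + noise.
disparities = f * baselines / z_true + rng.normal(0.0, sigma_d, n_frames)

# Per-frame inverse-depth estimates and their variances:
# 1/z = d / (f * b), so var(1/z) = (sigma_d / (f * b))^2.
inv_depths = disparities / (f * baselines)
variances = (sigma_d / (f * baselines)) ** 2

# A single frame is very uncertain (the estimate can even be wildly wrong,
# since the disparity signal is comparable to the noise).
print("one frame:  z =", 1.0 / inv_depths[0])

# Fused estimate: inverse-variance weighted mean over all frames.
weights = 1.0 / variances
inv_depth_fused = np.sum(weights * inv_depths) / np.sum(weights)
print("fused:      z =", 1.0 / inv_depth_fused)
print("fused std of 1/z:", np.sqrt(1.0 / np.sum(weights)))
```

Running this shows the fused depth landing close to the true value even though each individual small-baseline measurement is highly uncertain; the standard deviation of the fused inverse depth shrinks roughly with the square root of the number of frames.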
Fisher Yu and David Gallup
3D Reconstruction from Accidental Motion
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014
Download [PDF] [Poster] [Video]
@inproceedings{Yu14,
  author    = {Fisher Yu and David Gallup},
  title     = {3D Reconstruction from Accidental Motion},
  booktitle = {27th IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
}
We provide here the data used for the results shown in the paper. The videos were taken with the back camera of a Google Galaxy Nexus; we assume a focal length of 1781 pixels in our experiments. To avoid decompression problems when using the data, and for fair comparison, the decompressed image sequences are provided here (a minimal loading sketch is given below the download links).
Download [Data (1.9GB)]
The raw data for the results in the paper are also provided: [SfM] [Dense]
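The sketch below shows one plausible way to load a decompressed sequence and build the pinhole intrinsics from the focal length stated above. The directory name, file extension, and the principal point at the image center are assumptions, not properties of the actual archive; adjust them to the downloaded contents.

```python
import glob
import cv2
import numpy as np

# Load a decompressed image sequence. "sequence_01/*.png" is a hypothetical
# path; replace it with the actual layout of the downloaded archive.
paths = sorted(glob.glob("sequence_01/*.png"))
frames = [cv2.imread(p) for p in paths]
assert frames and frames[0] is not None, "no frames found at the given path"

h, w = frames[0].shape[:2]
f = 1781.0  # focal length in pixels, as stated above

# Pinhole intrinsics with the principal point at the image center
# (an assumption; the device is not individually calibrated here).
K = np.array([[f, 0.0, w / 2.0],
              [0.0, f, h / 2.0],
              [0.0, 0.0, 1.0]])
print("loaded", len(frames), "frames, K =\n", K)
```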