Scene Reconstruction from High Spatio-Angular Resolution Light Fields

Changil Kim (Disney Research Zurich / ETH Zurich)
Henning Zimmer (Disney Research Zurich / ETH Zurich)
Yael Pritch (Disney Research Zurich)
Alexander Sorkine-Hornung (Disney Research Zurich)
Markus Gross (Disney Research Zurich / ETH Zurich)

[Teaser image]
The images on the left show a 2D slice of a 3D input light field, a so-called epipolar-plane image (EPI), and two of the one hundred 21-megapixel images used to construct the light field. Our method computes 3D depth information for all visible scene points, illustrated by the depth EPI on the right. From this representation, individual depth maps or segmentation masks for any of the input views can be extracted, as well as other representations such as 3D point clouds. The horizontal red lines connect corresponding scanlines in the images to their respective positions in the EPI.


This paper describes a method for scene reconstruction of complex, detailed environments from 3D light fields. Densely sampled light fields on the order of 10^9 light rays allow us to capture the real world in unparalleled detail, but efficiently processing this amount of data to generate an equally detailed reconstruction represents a significant challenge to existing algorithms. We propose an algorithm that leverages coherence in massive light fields by breaking with a number of established practices in image-based reconstruction. Our algorithm first computes reliable depth estimates specifically around object boundaries instead of interior regions, by operating on individual light rays instead of image patches. More homogeneous interior regions are then processed in a fine-to-coarse procedure rather than the standard coarse-to-fine approaches. At no point in our method is any form of global optimization performed. This allows our algorithm to retain precise object contours while still ensuring smooth reconstructions in less detailed areas. While the core reconstruction method handles general unstructured input, we also introduce a sparse representation and a propagation scheme for reliable depth estimates, which make our algorithm particularly effective for 3D input, enabling fast and memory-efficient processing of “Gigaray light fields” on a standard GPU. We show dense 3D reconstructions of highly detailed scenes, enabling applications such as automatic segmentation and image-based rendering, and provide an extensive evaluation and comparison to existing image-based reconstruction techniques.
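The core idea of operating on individual light rays rather than image patches can be illustrated on a single EPI: a scene point traces a line across the EPI whose slope corresponds to its disparity (and hence depth), so depth can be estimated by testing candidate slopes and keeping the one along which the sampled radiance is most consistent. The sketch below is a much-simplified, hypothetical illustration of this principle, not the paper's actual algorithm (it omits the edge-confidence measure, the fine-to-coarse propagation, and the sparse representation); the function name, the grayscale input, and the variance-based consistency score are all our own assumptions for the example.

```python
import numpy as np

def epi_depth(epi, s_ref, disparities):
    """Estimate per-pixel disparity along one EPI scanline.

    epi: (S, U) grayscale epipolar-plane image (S views, U pixels).
    For each pixel u of the reference view s_ref, sample the EPI along
    the candidate line u + d * (s - s_ref) for each disparity d, and
    keep the disparity whose samples are most consistent (lowest
    variance). This is only a simplified sketch of EPI-based depth
    estimation, not the method described in the paper.
    """
    S, U = epi.shape
    s = np.arange(S)
    best_d = np.zeros(U)
    best_score = np.full(U, np.inf)
    for d in disparities:
        # Positions of the candidate line in every view (S, U).
        u_pos = np.arange(U)[None, :] + d * (s - s_ref)[:, None]
        u_idx = np.clip(np.round(u_pos).astype(int), 0, U - 1)
        samples = epi[s[:, None], u_idx]
        # Radiance consistency along the line: lower variance is better.
        score = samples.var(axis=0)
        better = score < best_score
        best_d[better] = d
        best_score[better] = score[better]
    return best_d
```

On a synthetic EPI built from a textured scanline shifted by a constant disparity per view, the estimator recovers that disparity for all interior pixels; in real data, textureless regions leave the score ambiguous, which is one reason the paper's method restricts reliable estimates to object boundaries first.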
[Press Release]


Changil Kim, Henning Zimmer, Yael Pritch, Alexander Sorkine-Hornung, Markus Gross, Scene Reconstruction from High Spatio-Angular Resolution Light Fields, ACM Transactions on Graphics 32(4) (Proceedings of SIGGRAPH 2013)
Paper (PDF, 53.8 MB) | Paper (Low resolution PDF, 2.7 MB) | BibTeX entry
