Wide-FOV Monocentric LF Camera
This work presents the first single-lens wide-field-of-view (FOV) light field (LF) camera. Shown above are two 138° LF panoramas and a depth estimate. These are 2D slices of larger 72-MPix (15 × 15 × 1600 × 200 sample) 4D LFs. The depth estimate is based on a standard local 4D gradient method. The super-resolution, parallax pan, and refocus examples below demonstrate more of the 4D structure of these panoramas.
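For readers unfamiliar with gradient-based LF depth estimation, the sketch below illustrates the general idea: in a 4D LF L[s, t, u, v], a scene point traces a line in each epipolar slice, and the slope of that line encodes depth. The slope follows from ratios of local derivatives. This is a minimal sketch of the standard approach; the function name and the least-squares combination of the two epipolar orientations are our own illustration, not the exact pipeline used for these panoramas.

```python
import numpy as np

def lf_gradient_slope(lf):
    """Estimate per-pixel epipolar slope from a 4D light field lf[s, t, u, v].

    Along an epipolar line with slope m, intensity is constant, so
    L_s + m * L_u = 0 (and L_t + m * L_v = 0 in the other slice),
    giving a least-squares slope from the four partial derivatives.
    """
    Ls = np.gradient(lf, axis=0)  # derivative along angular s axis
    Lt = np.gradient(lf, axis=1)  # derivative along angular t axis
    Lu = np.gradient(lf, axis=2)  # derivative along spatial u axis
    Lv = np.gradient(lf, axis=3)  # derivative along spatial v axis

    eps = 1e-9  # avoids division by zero in textureless regions
    slope = -(Ls * Lu + Lt * Lv) / (Lu**2 + Lv**2 + eps)
    return slope  # monotonically related to depth via the camera geometry
```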
Both wide-FOV and LF capture have been shown to simplify and enhance a range of tasks in computer vision, and we expect their combination to find applications spanning autonomous vehicles, virtual and augmented reality capture, and robotics in general.
(left) The optical prototype employs a novel relay system and a rotating arm to emulate a tiled-sensor camera; (top right) the main lens and lenslet array; (bottom right) the monocentric lens (fore), a Lytro Illum (center), and a conventional lens with similar FOV and resolution (back).
Publications
• G. M. Schuster, D. G. Dansereau, G. Wetzstein, and J. E. Ford, “Panoramic single-aperture multi-sensor light field camera,” Optics Express, vol. 27, no. 26, pp. 37257–37273, 2019.
• D. G. Dansereau, G. M. Schuster, J. E. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in Computer Vision and Pattern Recognition (CVPR), 2017.
• G. M. Schuster, I. P. Agurok, J. E. Ford, D. G. Dansereau, and G. Wetzstein, “Panoramic monocentric light field camera,” in International Optical Design Conference (IODC), 2017.
Collaborators
This work was a collaboration between Donald Dansereau and Gordon Wetzstein from the Stanford Computational Imaging Lab, and Joseph Ford, Glenn Schuster, and Ilya Agurok from the Photonic Systems Integration Laboratory, UC San Diego.
Acknowledgments
We thank Kurt Akeley and Lytro for a hardware donation that enabled this work. This work was supported by the NSF/Intel Partnership on Visual and Experiential Computing (Intel #1539120, NSF #IIS-1539120). The authors thank Google ATAP for providing the Omnivision sensor interface, and Julie Chang and Sreenath Krishnan for their help with early optical prototypes. The monocentric lenses used in this work were fabricated within the DARPA SCENICC research program.
Gallery
Parallax pan: Panning through a 138°, 72-MPix LF captured using the optical prototype. Shifting the virtual camera position over a circular trajectory during the pan reveals the parallax information captured by the LF. There is no complex post-processing or alignment between fields; this is the raw light field as measured by the camera.
Refocus examples, computed using an unmodified shift-and-sum LF refocus algorithm.
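Shift-and-sum refocus is simple enough to sketch in a few lines: each sub-aperture image is translated in proportion to its angular offset from the central view, then all views are averaged, with the shift slope selecting the focal plane. Below is a minimal sketch assuming a 4D array lf[s, t, u, v]; the function name and sign convention are ours.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_and_sum_refocus(lf, slope):
    """Refocus a 4D light field lf[s, t, u, v] by shift-and-sum.

    slope selects the focal plane: objects whose epipolar slope
    matches it come into focus, everything else blurs.
    """
    n_s, n_t, n_u, n_v = lf.shape
    c_s, c_t = (n_s - 1) / 2, (n_t - 1) / 2
    out = np.zeros((n_u, n_v))
    for s in range(n_s):
        for t in range(n_t):
            # Shift this view in proportion to its offset from the center
            dy, dx = slope * (s - c_s), slope * (t - c_t)
            out += nd_shift(lf[s, t], (dy, dx), order=1, mode='nearest')
    return out / (n_s * n_t)
```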
Enhance! LF super-resolution, using a simple linear method.
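As a rough illustration of one linear approach (assumed details; the gallery results may differ), the sketch below performs shift-and-add super-resolution: each view is reprojected onto a grid finer than the input using sub-pixel shifts derived from a slope estimate, then normalized by sample counts. The function name and parameters are illustrative.

```python
import numpy as np

def shift_and_add_superres(lf, slope, factor=2):
    """Linear super-resolution sketch for a 4D light field lf[s, t, u, v].

    Views are scattered onto a grid `factor`x finer than the input,
    with per-view shifts set by `slope` (e.g. from a focal-plane or
    depth estimate), then normalized by per-pixel hit counts.
    """
    n_s, n_t, n_u, n_v = lf.shape
    c_s, c_t = (n_s - 1) / 2, (n_t - 1) / 2
    hi = np.zeros((n_u * factor, n_v * factor))
    weight = np.zeros_like(hi)
    uu, vv = np.meshgrid(np.arange(n_u), np.arange(n_v), indexing='ij')
    for s in range(n_s):
        for t in range(n_t):
            # Sub-pixel sample positions of this view on the fine grid
            y = np.rint((uu + slope * (s - c_s)) * factor).astype(int)
            x = np.rint((vv + slope * (t - c_t)) * factor).astype(int)
            ok = (y >= 0) & (y < hi.shape[0]) & (x >= 0) & (x < hi.shape[1])
            np.add.at(hi, (y[ok], x[ok]), lf[s, t][ok])
            np.add.at(weight, (y[ok], x[ok]), 1.0)
    return hi / np.maximum(weight, 1.0)
```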
Optical layout: A monocentric lens (blue) produces a spherical image (red), which we capture using tiled sensors (gray), lenslet arrays (cyan), and LF processing to correct for field curvature.
Field flattening: the spherical lens yields LFs with distorted depth information, and we derive a simple method for correcting this distortion. The video shows a sequence of before/after images for individual sensors behind the monocentric lens. The corrected versions show near-ideal parallax behavior, while before correction they show noticeable bulging near the center.
The proposed camera-centric relative spherical parameterization, with the monocentric lens at the center of the reference sphere, absolute entry angle θ, and relative exit angle φ.
The spherical parameterization is locally well approximated by a two-plane parameterization. This allows us to employ a rich range of existing LF processing techniques on spherical LFs.
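To make the local approximation concrete: for small angles, arc length on the reference sphere behaves like a planar coordinate, so a ray described by (θ, φ) maps approximately to a conventional two-plane pair (s, u). The sketch below shows this small-angle mapping; the radius R, plane separation D, and function name are our own illustrative choices, not the paper's notation.

```python
import numpy as np

def spherical_to_two_plane(theta, phi, R=1.0, D=1.0):
    """Approximate a relative spherical ray (theta, phi) by local
    two-plane coordinates (s, u).

    theta: absolute entry angle on the reference sphere (radians)
    phi:   exit angle relative to the local radial direction (radians)
    R, D:  sphere radius and s-to-u plane separation (illustrative)
    """
    theta, phi = np.asarray(theta), np.asarray(phi)
    s = R * theta            # small-angle: arc length ~ planar position
    u = s + D * np.tan(phi)  # intersection with a parallel plane at D
    return s, u
```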
Raytracer demonstrating that the sampling pattern of an ideal spherical camera is very close to a rectangular grid in the proposed parameterization; for clarity, rays from only two lenslets are shown in the diagram at right, and one angular dimension is not depicted.