Learning to See with Sparse Light Field Video Cameras
- We generalize unsupervised odometry and depth estimation to operate on sparse 4D light fields
- We introduce an encoding scheme for sparse LFs appropriate for odometry and shape estimation
- We outperform a monocular baseline, yielding more accurate trajectories and depth maps with known scale
We expect our method to work well with other cameras that capture regularly spaced, overlapping views: 1D and 2D camera arrays, sparse cameras like the EPIModule, and lenslet-based plenoptic cameras like the Lytro and Raytrix devices.
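The bullets above describe an unsupervised pipeline in which a depth network and a pose network are trained jointly from light field video, with a photometric reconstruction loss providing the supervision. The minimal PyTorch sketch below illustrates that structure only; the `DepthNet`/`PoseNet` names, layer choices, and channel counts are placeholders for illustration, not the architectures used in the paper.

```python
# Minimal sketch of the two networks in a self-supervised depth/odometry
# pipeline, adapted to take an encoded sparse light field as input.
# All names and layer choices here are illustrative placeholders.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Predicts a per-pixel depth map from an encoded light field."""
    def __init__(self, c_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())  # keep depth positive

    def forward(self, lf):
        return self.net(lf)                      # (B, 1, H, W)

class PoseNet(nn.Module):
    """Predicts a 6-DoF relative pose (3 translation + 3 rotation parameters)
    between two temporally adjacent encoded light fields."""
    def __init__(self, c_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * c_in, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 6))

    def forward(self, lf_t, lf_tp1):
        return self.net(torch.cat([lf_t, lf_tp1], dim=1))  # (B, 6)

# Hypothetical usage, e.g. if 17 RGB views were stacked along channels (c_in = 51):
# depth = DepthNet(51)(lf_t)          # dense depth for the current frame
# pose  = PoseNet(51)(lf_t, lf_tp1)   # relative pose to the next frame
```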
Publications
• S. T. Digumarti, J. Daniel, A. Ravendran, R. Griffiths, and D. G. Dansereau, “Unsupervised learning of depth estimation and visual odometry for sparse light field cameras,” in Intelligent Robots and Systems (IROS), 2021. Preprint here.
Downloads
Data: please request instant access via return email here.
The EPIModule captures 17 overlapping views in the configuration shown at left. We mounted the module on a UR5e robotic arm and captured video over 46 trajectories in a variety of indoor scenes, yielding a total of 8298 LFs. After downsampling as described in the paper, the dataset occupies 13 GBytes.
See the dataset readme file here for further details.
Gallery
We estimate depth over all LF pixels and perform a differentiable 4D warp to relate adjacent LF frames, such that the resulting photometric loss makes use of all measured pixels.
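Below is a minimal PyTorch sketch of how such a warp-based photometric loss can be written. It simplifies the 4D case: the sampling grid is assumed to have already been computed from the predicted depth, the relative pose, and the LF camera geometry, so only the warp-and-compare step is shown. The function name and tensor layout are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(lf_src, lf_tgt, grid):
    """Simplified stand-in for the warp-based photometric loss. The 4D
    projection step (predicted depth + relative pose + LF geometry ->
    sampling coordinates) is assumed to have produced `grid` already.
    Every sub-aperture view is warped and compared, so all measured
    pixels contribute to the loss.

    lf_src, lf_tgt: (B, V, 3, H, W) sub-aperture views of adjacent LF frames
    grid:           (B, V, H, W, 2) sampling coordinates in [-1, 1]
    """
    b, v, c, h, w = lf_src.shape
    warped = F.grid_sample(
        lf_src.reshape(b * v, c, h, w),
        grid.reshape(b * v, h, w, 2),
        align_corners=True)
    return (warped - lf_tgt.reshape(b * v, c, h, w)).abs().mean()
```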
The proposed encoding yields more scene detail compared with monocular imaging, and outperforms LF encoding via focal and volumetric stacking.
Again, the proposed encoding scheme outperforms focal and volumetric stacking.
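The proposed encoding itself is detailed in the paper; for context, the sketch below shows one of the baseline encodings it is compared against, a shift-and-sum focal stack built from the sub-aperture views. The function name, tensor layout, and sign conventions are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def focal_stack(views, offsets, slopes):
    """Shift-and-sum focal stack over the sub-aperture views: each view is
    translated in proportion to its angular offset and a refocus slope,
    then the views are averaged to form one refocused slice per slope.

    views:   (V, 3, H, W) sub-aperture images
    offsets: (V, 2) tensor of angular positions relative to the centre view
    slopes:  list of refocus slopes, one per focal-stack slice
    """
    v, c, h, w = views.shape
    base_y, base_x = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing='ij')
    base = torch.stack([base_x, base_y], dim=-1).expand(v, h, w, 2)
    slices = []
    for s in slopes:
        grid = base.clone()
        grid[..., 0] += 2 * s * offsets[:, 0].view(v, 1, 1) / w   # shift in x
        grid[..., 1] += 2 * s * offsets[:, 1].view(v, 1, 1) / h   # shift in y
        shifted = F.grid_sample(views, grid, align_corners=True)
        slices.append(shifted.mean(dim=0))       # average over views
    return torch.cat(slices, dim=0)              # (3 * len(slopes), H, W)
```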
Citing
@inproceedings{digumarti2021unsupervised,
  title     = {Unsupervised Learning of Depth Estimation and Visual Odometry for Sparse Light Field Cameras},
  author    = {Sundara Tejaswi Digumarti and Joseph Daniel and Ahalya Ravendran and Ryan Griffiths and Donald G. Dansereau},
  booktitle = {Intelligent Robots and Systems ({IROS})},
  year      = {2021},
  publisher = {IEEE}
}