
Robotic Vision with Refractive Objects

(bottom-left) Tracking conventional features like SIFT through transparent objects can yield inconsistent apparent motion and break 3D vision algorithms like structure from motion (SfM). (bottom-right) We propose the Refracted Light Field Feature (RLFF) that shows consistent apparent motion, enabling SfM to operate around refractive objects.

  • We describe a new kind of feature, the RLFF, that exists in the patterns of light refracted through transparent objects
  • We propose efficient methods for detecting and extracting RLFFs from light-field (LF) imagery
  • RLFFs can be used in place of conventional features like SIFT, improving SfM performance in scenes dominated by refractive objects (see the sketch after this list)
  • We show more accurate camera trajectory estimates, better 3D reconstructions, and more robust convergence, even in complex scenes where state-of-the-art methods fail
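
To make the "drop-in replacement for SIFT" idea concrete, here is a minimal Python sketch, not the authors' implementation: a detector-agnostic SfM front end where SIFT is wired up through OpenCV and the RLFF extractor is a hypothetical placeholder, since the real method operates on 4D light fields rather than single images (see the code release under Downloads).

import cv2
import numpy as np


class SIFTExtractor:
    """Conventional 2D features, whose apparent motion can become
    inconsistent when viewed through refractive objects."""

    def __init__(self):
        self._sift = cv2.SIFT_create()

    def extract(self, image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return self._sift.detectAndCompute(gray, None)


class RLFFExtractor:
    """Hypothetical placeholder for the RLFF pipeline: the real detector
    takes a 4D light field rather than a single 2D image."""

    def extract(self, light_field):
        raise NotImplementedError("See the RLFF code release linked under Downloads.")


def build_correspondences(frames, extractor):
    """Detect features in each frame and match consecutive frames; the
    resulting correspondences would normally feed an SfM back end."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    prev_kp, prev_desc, correspondences = None, None, []
    for frame in frames:
        kp, desc = extractor.extract(frame)
        if prev_desc is not None and len(prev_desc) and desc is not None and len(desc):
            matches = matcher.match(prev_desc, desc)
            correspondences.append(
                [(prev_kp[m.queryIdx].pt, kp[m.trainIdx].pt) for m in matches])
        prev_kp, prev_desc = kp, desc
    return correspondences


if __name__ == "__main__":
    # Toy usage with random images; real use would load a captured sequence.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(3)]
    tracks = build_correspondences(frames, SIFTExtractor())
    print(sum(len(t) for t in tracks), "matches across", len(frames), "frames")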

The RLFF advances robotic vision around refractive objects, with applications in manufacturing, quality assurance, pick-and-place, and domestic robotics, where glass and other transparent materials are commonplace.

Publications

[1]  D. Tsai, P. Corke, T. Peynot, and D. G. Dansereau, “Refractive light-field features for curved transparent objects in structure from motion,” IEEE Robotics and Automation Letters (RA-L, IROS), 2021. Available here.

[2]  D. Tsai, D. G. Dansereau, T. Peynot, and P. Corke, “Distinguishing refracted features using light field cameras with application to structure from motion,” IEEE Robotics and Automation Letters (RA-L, ICRA), vol. 4, no. 2, pp. 177–184, Apr. 2019. Available here.

Collaborators

This work was a collaboration between Donald Dansereau from the Robotic Imaging group at the Australian Centre for Field Robotics, and Peter Corke, Thierry Peynot, and Dorian Tsai from QUT's Australian Centre for Robotic Vision.

Acknowledgments

This research was partly supported by the Australian Research Council (ARC) Centre of Excellence for Robotic Vision (CE140100016).

Downloads

The code for the RLFF is on GitHub here.

The dataset used in the RLFF paper is here (16 GB download).

The dataset was captured with a Lytro Illum mounted on a robotic arm, and contains 218 LFs of 20 challenging scenes with a variety of refractive and Lambertian objects. It includes the raw LFs and the known camera trajectories; a minimal sketch of working with decoded LFs follows.
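
As a rough orientation to the data (not part of the official release), the sketch below assumes the raw Lytro Illum captures have already been decoded, e.g. with the Light Field Toolbox, into 5D arrays of sub-aperture views with shape (s, t, v, u, channels); the file layout, array shape, and function names here are assumptions, not the dataset's documented format.

import numpy as np


def load_light_field(path):
    """Load a decoded light field stored as a NumPy array (assumed layout:
    (s, t, v, u, 3), i.e. vertical view, horizontal view, image row, column)."""
    return np.load(path)


def central_view(lf):
    """Return the central sub-aperture image, which looks like an ordinary photo."""
    s, t = lf.shape[0] // 2, lf.shape[1] // 2
    return lf[s, t]


def horizontal_epi(lf, row):
    """Slice an epipolar-plane image: fix the vertical view index and an image
    row, and vary the horizontal view index. Lambertian points show consistent
    apparent motion across views; features seen through refractive objects do
    not, which is the behaviour the RLFF work targets."""
    s = lf.shape[0] // 2
    return lf[s, :, row, :, :]  # shape (T, U, 3)


if __name__ == "__main__":
    # Toy example with a synthetic light field in place of a decoded capture.
    lf = np.zeros((5, 5, 48, 64, 3), dtype=np.uint8)
    print(central_view(lf).shape, horizontal_epi(lf, row=10).shape)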

Citing

If you find this work useful, please cite:
@article{tsai2021refractive,
  title = {Refractive Light-Field Features for Curved Transparent Objects in Structure from Motion},
  author = {Dorian Tsai and Peter Corke and Thierry Peynot and Donald G. Dansereau},  
  journal = {IEEE Robotics and Automation Letters ({RA-L, IROS})},
  year = {2021},
  organization = {IEEE}
}
@article{tsai2019distinguishing,
  title = {Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion},
  author = {Dorian Tsai and Donald G. Dansereau and Thierry Peynot and Peter Corke},
  journal = {IEEE Robotics and Automation Letters ({RA-L, ICRA})},
  year = {2019}, 
  volume = {4}, 
  number = {2}, 
  pages = {177--184},
  month = apr,
  organization = {IEEE}
}