
Light Stage Object Classifier

Distinguishing visually similar objects, such as forged and authentic bills, healthy and unhealthy plants, and real and synthetic fruit like those shown here, is beyond the capabilities of even the most sophisticated classifiers. We propose the use of multiplexed illumination to extend the range of objects that can be reliably discriminated.

Our methodology uses the light stage in two ways: first, we model and synthetically relight training samples, allowing joint pattern selection and classifier training in simulation; then we use the trained patterns and classifier to quickly classify previously unseen samples.
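Because light adds linearly, a capture under any multiplexed pattern can be simulated as a weighted sum of single-illuminant basis images. The Python sketch below illustrates this relighting step; it is a minimal illustration, not the released code, and assumes an (8, H, W) stack of single-site images:

import numpy as np

def relight(basis_images, pattern_weights):
    """Simulate a capture under a multiplexed illumination pattern.

    basis_images:    (8, H, W) array, one image per illumination site
    pattern_weights: (8,) array of per-site intensities in [0, 1]
    """
    # Light adds linearly, so an image under a multiplexed pattern
    # is a weighted sum of the single-illuminant basis images.
    stack = np.asarray(basis_images, dtype=np.float64)
    return np.tensordot(pattern_weights, stack, axes=1)

For example, relight(stack, np.ones(8)) approximates a capture with all eight sites fully on.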

Publications

•  T. Wang and D. G. Dansereau, “Multiplexed illumination for classifying visually similar objects,” Applied Optics, vol. 60, no. 10, pp. B23–B31, Apr. 2021. Available here.

Acknowledgments

We would like to thank the University of Sydney Aerospace, Mechanical and Mechatronic Engineering FabLab for their support.

Dataset and Code

The code is available here.

The dataset contains 16,000 10-bit images of five types of real and synthetic fruit. It is split across three files: the relightable models, and the Greedy and SNR-Optimal inference-time captures, each described below.

Alternative download link here.

Light Stage Prototype

The light stage prototype used to collect this data features a five-leg design with eight illumination sites distributed across four of its legs: four mounted on upper leg segments and four on lower segments.

Each illumination site has four LEDs centered on Red (615 nm), Green (515 nm), Blue (460 nm), and Near-Infrared (NIR, 850 nm) colour bands. We use a Basler acA1920-150um monochrome machine vision camera with an Edmund Optics NIR-VIZ 6 mm infrared-compatible lens.

Relightable Models

We captured each image with a single illuminant active and imaged each sample in 20 different poses. We used a long (120 ms) exposure duration and averaged each image over multiple exposures to obtain high-SNR images.
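As a minimal sketch of the averaging step (not the released code, and assuming the imageio library for reading TIFFs):

import numpy as np
import imageio.v2 as imageio

def average_exposures(paths):
    # Average repeated exposures in floating point; noise falls
    # roughly as the square root of the number of exposures.
    frames = [imageio.imread(p).astype(np.float64) for p in paths]
    return np.mean(frames, axis=0)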

Filenames are of the form
c<i>_p<j>_l<k>.tiff
where:
  • <i> denotes colour channel: 1r = red, 2g = green, 3b = blue, 4n = NIR
  • <j> denotes the sample pose, 0..19
  • <k> denotes the active illumination site, 1..8; shown here are typical images taken with each site active

For example
c1r_p0_l1.tiff
was captured with red LEDs on, in the first of 20 poses, with the first illumination site active.
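A hypothetical parser for these filenames, following the convention above (the field names are our own, not part of the dataset):

import re

NAME_RE = re.compile(r'c(\d)([rgbn])_p(\d+)_l(\d)\.tiff$')

def parse_model_name(fname):
    m = NAME_RE.search(fname)
    if m is None:
        raise ValueError(f'unrecognized filename: {fname}')
    return {
        'channel': m.group(1) + m.group(2),  # e.g. '1r' = red
        'pose': int(m.group(3)),             # 0..19
        'site': int(m.group(4)),             # 1..8
    }

For example, parse_model_name('c1r_p0_l1.tiff') returns {'channel': '1r', 'pose': 0, 'site': 1}.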

Inference-Time Captures

Each of the Greedy and SNR-Optimal sets contains images captured using trained illumination patterns. Shown here are examples of eight spatial patterns in a single colour channel, for a single pose of a single sample.

Filenames are of the form
<tag>_p<j>c<i>l<k>.tiff
where:
  • <tag> is one of "real" or "fake"
  • <j> denotes the sample pose, 0..19
  • <i> denotes colour channel: 0 = red, 1 = green, 2 = blue, 3 = NIR
  • <k> denotes the index of the illumination pattern, 0..7 (0..3 for the greedy method)

For example
real_p0c1l0.tiff
corresponds to a real (not synthetic) fruit sample, in the first of 20 poses, with green LEDs on, and with the first illumination pattern active.
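The same idea applies to the inference-time filenames; again a hypothetical sketch, with field names our own:

import re

CAPTURE_RE = re.compile(r'(real|fake)_p(\d+)c(\d)l(\d)\.tiff$')

def parse_capture_name(fname):
    m = CAPTURE_RE.search(fname)
    if m is None:
        raise ValueError(f'unrecognized filename: {fname}')
    channels = ['red', 'green', 'blue', 'nir']
    return {
        'label': m.group(1),                   # 'real' or 'fake'
        'pose': int(m.group(2)),               # 0..19
        'channel': channels[int(m.group(3))],  # 0..3
        'pattern': int(m.group(4)),            # 0..7
    }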

Citing and Contact

For enquiries, please email twan8752 {at} uni dot sydney dot edu dot au.

If you find this work useful, please cite:
@article{wang2021multiplexed,
  title     = {Multiplexed Illumination for Classifying Visually Similar Objects},
  author    = {Taihua Wang and Donald G. Dansereau},
  journal   = {Applied Optics},
  year      = {2021},
  volume    = {60},
  number    = {10},
  pages     = {B23--B31},
  publisher = {OSA}
}