
4D Light Field Benchmark

ANNOUNCEMENT: We are organizing the 2nd Workshop on Light Fields for Computer Vision at CVPR 2017 in Honolulu, Hawaii.
Please visit the workshop website for further details, the call for papers, and information on the depth estimation challenge.

References

When using our datasets, benchmarks, or visualizations, please cite the following paper:

Katrin Honauer¹, Ole Johannsen², Daniel Kondermann¹, Bastian Goldluecke²
A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields
In Asian Conference on Computer Vision (ACCV), 2016

¹HCI, Heidelberg University, {firstname.lastname}@iwr.uni-heidelberg.de
²University of Konstanz, {firstname.lastname}@uni-konstanz.de

Contact

For any questions or suggestions, please contact us: contact@lightfield-analysis.net.

Copyright

The 4D Light Field Benchmark was jointly created by the University of Konstanz and the HCI at Heidelberg University. The work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit www.creativecommons.org. If you use any part of the benchmark, please cite our paper. Thanks!

We would like to thank Chocofur, the British Museum, and the Stanford 3D Scanning Repository for providing the 3D models used in our benchmark.

Changelog

  • 19.05.2017 Removed all submissions from the benchmark table that were computed on the old dataset version. The old scores and visualizations can be downloaded here.
  • 04.04.2017 The Blender add-on and all light field tools are now available on GitHub: https://github.com/lightfield-analysis.
  • 29.03.2017 Added median scores per metric and runtimes in seconds in addition to log runtimes.
  • 28.03.2017 Added four additional scenes with high disparity range: Antinous, Dishes, Greek, and Tower. Updated existing additional scenes with fixed rendering bug.
  • 19.03.2017 Major update due to fixed-pattern noise in the test and training scenes. All algorithms have to be re-evaluated. For a limited time, the old results will be kept and marked by #. Thanks to Hendrik Schilling for reporting the error!
    Detailed explanation: The Cycles renderer for Blender uses an initial seed to distribute the samples in the scene. If this seed value is the same for all views, all views exhibit the same noise pattern. We were aware of this and randomly set the seed for each view with a signed integer. Unfortunately, Cycles expects an unsigned integer and "silently" clipped all negative seeds to 0. As a result, around 50% of the rendered views had the same noise pattern. This bug is now fixed in the new version of the dataset!
  • 09.02.2017 Fixed depth maps of Dots scene (disparity maps remain unchanged). Thanks to Caner Hazirbas and Soweon Yoon for reporting the error!
  • 03.01.2017 Added interactive 3D point cloud visualization.
  • 31.12.2016 Added radar and scatter charts for easier comparison of algorithm performance.
  • 29.12.2016 Added display of ground truth next to algorithm results on benchmark table.
  • 14.11.2016 Updated evaluation masks for Dino (planes) and Sideboard (fine and smooth).
  • 05.11.2016 The light field benchmark is now open for your submissions!
  • 03.11.2016 The evaluation package with evaluation toolkit, baseline algorithms, and detailed submission instructions is available on the tools page.
  • 03.11.2016 The official version v1.0 of the scene data is available for download. It includes evaluation masks and object segmentations for the stratified and training scenes.
  • 25.10.2016 Fixed upside down flip of Dots ground truth files.
  • 16.10.2016 Added license files and missing view input_Cam057.png of the Sideboard scene. Thanks to Antonin Sulc for reporting the missing file!
  • 05.10.2016 The data of the 24 ACCV scenes and the file IO scripts are available for download.
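The seed-clipping bug described in the 19.03.2017 entry can be illustrated with a small sketch. This is not the actual rendering pipeline; the seed range, view count, and `clip_to_unsigned` helper are hypothetical, chosen only to mimic the behavior described above: seeds drawn from a signed range, with negative values silently clipped to 0.

```python
import random

def clip_to_unsigned(seed: int) -> int:
    # Mimics the clipping behavior described in the changelog:
    # negative seeds are silently mapped to 0 instead of being
    # wrapped or rejected.
    return max(seed, 0)

random.seed(42)  # reproducible demo only

# Hypothetical per-view seeds drawn over the full signed 32-bit range,
# e.g. for a 9x9 = 81 view light field.
view_seeds = [random.randint(-2**31, 2**31 - 1) for _ in range(81)]
effective_seeds = [clip_to_unsigned(s) for s in view_seeds]

# Roughly half of the draws are negative, so roughly half of the views
# collapse to seed 0 and would render with an identical noise pattern.
share_zero = sum(1 for s in effective_seeds if s == 0)
print(f"{share_zero}/81 views share seed 0")
```

Since about half of a uniform signed range is negative, this reproduces the roughly 50% collision rate reported above; the fixed dataset avoids it by keeping every per-view seed in the unsigned range.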

Acknowledgments

We gratefully acknowledge partial financial support for this research by: