4D Light Field Benchmark

ANNOUNCEMENT: The 2nd Workshop on Light Fields for Computer Vision will take place on July 26th in conjunction with CVPR 2017.
Visit our workshop website for a detailed workshop program. We are looking forward to meeting you in Hawaii!


When using our datasets, benchmarks, or visualizations, please cite the following papers:

A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields
Katrin Honauer1, Ole Johannsen2, Daniel Kondermann1, Bastian Goldluecke2
In Asian Conference on Computer Vision (ACCV), 2016

1HCI, Heidelberg University, {firstname.lastname}@iwr.uni-heidelberg.de
2University of Konstanz, {firstname.lastname}@uni-konstanz.de

A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms
Ole Johannsen1, Katrin Honauer2, Bastian Goldluecke1, Anna Alperovich1, Federica Battisti3, Yunsu Bok4, Michele Brizzi3, Marco Carli3, Gyeongmin Choe5, Maximilian Diebold2, Marcel Gutsche2, Hae-Gon Jeon5, In So Kweon5, Jaesik Park6, Jinsun Park5, Hendrik Schilling2, Hao Sheng7, Lipeng Si8, Michael Strecke1, Antonin Sulc1, Yu-Wing Tai9, Qing Wang8, Ting-Chun Wang10, Sven Wanner11, Zhang Xiong7, Jingyi Yu12, Shuo Zhang7, Hao Zhu8
In Conference on Computer Vision and Pattern Recognition - LF4CV Workshop (CVPRW), 2017

1University of Konstanz, {firstname.lastname}@uni-konstanz.de,
2HCI, Heidelberg University, {firstname.lastname}@iwr.uni-heidelberg.de,
3Roma Tre University, Italy, 4ETRI, Republic of Korea, 5KAIST, Republic of Korea, 6Intel Visual Computing Lab, USA, 7Beihang University, China, 8Northwestern Polytechnical University, China, 9Tencent, China, 10UC Berkeley, USA, 11Lumitec, Germany, 12ShanghaiTech University, China


For any questions or suggestions, please contact us: contact@lightfield-analysis.net.


The 4D Light Field Benchmark was jointly created by the University of Konstanz and the HCI at Heidelberg University. The work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit www.creativecommons.org. If you use any part of the benchmark, please cite our papers. Thanks!

We would like to thank Chocofur, the British Museum, and the Stanford 3D Scanning Repository for providing the 3D models used in our benchmark.


  • 11.07.2017 Major update of the evaluation toolkit. The metrics and figures of the survey paper, various converter utilities, and a detailed README with figure demos are now available on GitHub.
  • 10.07.2017 Benchmark update: the new high-accuracy and surface metrics of the survey paper are now part of the benchmark. The plane evaluation mask of the Boxes scene was updated to include the white box. See the survey paper for further metric details.
  • 10.07.2017 A snapshot of submitted disparity maps of all algorithms for the stratified and training scenes is now available on the tools page.
  • 10.07.2017 Raw evaluation scores for all benchmark scenes are now available per algorithm. Click on an algorithm in the benchmark table to see the method details and to download the file.
  • 10.07.2017 Survey paper published for LF4CV workshop at CVPR 2017.
  • 10.07.2017 Updated algorithm FBS. The new version uses the same parameters for all scenes.
  • 19.05.2017 Removed all submissions from the benchmark table that were computed on the old dataset version. The old scores and visualizations can be downloaded here.
  • 04.04.2017 The Blender add-on and all light field tools are now available on GitHub: https://github.com/lightfield-analysis.
  • 29.03.2017 Added median scores per metric and runtimes in seconds in addition to log runtimes.
  • 28.03.2017 Added four additional scenes with a high disparity range: Antinous, Dishes, Greek, and Tower. Updated the existing additional scenes to fix a rendering bug.
  • 19.03.2017 Major update due to fixed pattern noise in the test and training scenes. All algorithms have to be re-evaluated. For a limited time, the old results will be kept and marked with #. Thanks to Hendrik Schilling for reporting the error!
    Detailed explanation: The Cycles renderer in Blender uses an initial seed to distribute the samples in the scene. If this seed value is the same for all views, all views exhibit the same noise pattern. We were aware of this and randomly set the seed for each view using a signed integer. Unfortunately, the Cycles renderer expects unsigned integers and silently clipped all negative seeds to 0. As a result, around 50% of the rendered views shared the same noise pattern. This bug is fixed in the new version of the dataset!
  • 09.02.2017 Fixed depth maps of Dots scene (disparity maps remain unchanged). Thanks to Caner Hazirbas and Soweon Yoon for reporting the error!
  • 03.01.2017 Added interactive 3D point cloud visualization.
  • 31.12.2016 Added radar and scatter charts for easier comparison of algorithm performance.
  • 29.12.2016 Added display of ground truth next to algorithm results on benchmark table.
  • 14.11.2016 Updated evaluation masks for Dino (planes) and Sideboard (fine and smooth).
  • 05.11.2016 The light field benchmark is now open for your submissions!
  • 03.11.2016 The evaluation package with evaluation toolkit, baseline algorithms, and detailed submission instructions is available on the tools page.
  • 03.11.2016 The official version v1.0 of the scene data is available for download. It includes evaluation masks and object segmentations for the stratified and training scenes.
  • 25.10.2016 Fixed the upside-down flip of the Dots ground truth files.
  • 16.10.2016 Added license files and missing view input_Cam057.png of the Sideboard scene. Thanks to Antonin Sulc for reporting the missing file!
  • 05.10.2016 The data of the 24 ACCV scenes and the file IO scripts are available for download.
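The seed bug described in the 19.03.2017 entry above can be illustrated in a few lines of Python. This is a hypothetical sketch, not the benchmark's actual rendering code: it assumes a 9x9 light field (81 views) and mimics a renderer that silently clips negative signed seeds to an unsigned 0, which makes roughly half of the views share the identical noise pattern.

```python
import random

def clip_to_unsigned(seed):
    """Mimic a renderer that silently clips a negative seed to 0."""
    return max(seed, 0)

random.seed(7)   # fixed seed for a reproducible demo
num_views = 81   # a 9x9 light field has 81 views

# Draw one signed 32-bit seed per view, as in the original rendering setup.
signed_seeds = [random.randint(-2**31, 2**31 - 1) for _ in range(num_views)]

# Roughly half of the signed seeds are negative and collapse to 0, so all
# of those views are rendered with the same noise pattern.
effective_seeds = [clip_to_unsigned(s) for s in signed_seeds]
collapsed = sum(1 for s in effective_seeds if s == 0)
print(f"{collapsed} of {num_views} views collapse to seed 0")
```

Drawing only non-negative seeds (or any scheme that gives every view a distinct valid seed) avoids the collapse, which is presumably the kind of fix applied in the new dataset version.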


We gratefully acknowledge partial financial support for this research by: