Article
Peer-Review Record

Accurate Depth Recovery Method Based on the Fusion of Time-of-Flight and Dot-Coded Structured Light

Photonics 2022, 9(5), 333; https://doi.org/10.3390/photonics9050333
by Feifei Gu 1,2, Huazhao Cao 1, Pengju Xie 1 and Zhan Song 1,2,3,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 25 April 2022 / Revised: 9 May 2022 / Accepted: 9 May 2022 / Published: 11 May 2022
(This article belongs to the Special Issue Optical Sensing)

Round 1

Reviewer 1 Report

Minor language issues:

Page 2, line 68: not sure "perplexed" is the right word; to be checked with a native English speaker.
Lines 341-343: the language/sentences need improvement, e.g., "owns the machining accuracy", "Put it at different distances", etc.

Technical content

Nice study, which looks at relevant problems such as multipath interference (MPI) issues in ToF.

Since both are active light systems, do you expect any interference between them?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This fusion of two depth datasets from ToF and structured-light-based depth imaging frameworks is an interesting topic. However, the presentation quality of the work in terms of English language needs improvement. I have a few technical comments:

  1. What input prior knowledge is required to align the two datasets? How are the alignment results evaluated? (A generic illustration of such an alignment step is sketched after this list.)
  2. According to Figure 10 in the authors' manuscript, the depth map improvement is minor compared to the structured-light results. This means the structured-light data makes the more significant contribution to the results of the proposed method. What about taking background interference into account in the depth map reconstruction, since it is the main noise source for a structured-light-based depth imaging framework?
  3. Could the authors elaborate further on the limitations of their method?
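For context on point 1, here is a minimal sketch of what such an alignment step typically involves, assuming pre-calibrated intrinsics (K_tof, K_sl) and extrinsics (R, t) between the two cameras; these names are illustrative placeholders, not the authors' notation, and this is not their implementation. The ToF depth map is back-projected to 3D, rigidly transformed into the structured-light camera frame, and reprojected; alignment quality can then be judged by the residual reprojection error at known correspondences.

    import numpy as np

    def reproject_tof_to_sl(depth_tof, K_tof, K_sl, R, t):
        # Back-project every ToF pixel to 3D, transform it into the
        # structured-light (SL) camera frame, and project it with the SL
        # intrinsics.  All parameter names are illustrative placeholders.
        h, w = depth_tof.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
        pts_tof = np.linalg.inv(K_tof) @ pix * depth_tof.reshape(1, -1)     # 3 x N
        pts_sl = R @ pts_tof + t.reshape(3, 1)
        proj = K_sl @ pts_sl
        uv_sl = (proj[:2] / proj[2:3]).reshape(2, h, w)   # pixel coords in SL image
        depth_sl = pts_sl[2].reshape(h, w)                # depth seen by the SL camera
        return uv_sl, depth_sl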

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper is a solid submission on the topic of depth sensing, with some novel ideas and convincing results. However, the paper needs to be revised according to the points listed below:
1. More research background should be reviewed and discussed with references, e.g., Song Z. et al., High-speed 3D shape measurement with structured light methods: A review, Optics & Lasers in Engineering, 106, 119-131, 2018; B. Fu et al., Single-Shot Colored Speckle Pattern for High Accuracy Depth Sensing, IEEE Sensors Journal, 19(17), 7591-7597, 2019.

2. Page 5, line 192: "the opening operation CL_S(X)". Please check whether it should be "OP_S(CL_S(X))". (The standard definitions of the opening and closing operators are recalled after this list for reference.)

3. During data fusion in continuous regions, how is the DCSL data used to optimize the accuracy of the ToF data? How are the corresponding points handled in the optimization process? (A generic fusion sketch is given below.)
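For reference on point 2, the standard mathematical-morphology definitions (textbook facts, not taken from the manuscript) are, for a structuring element S with erosion $\ominus$ and dilation $\oplus$:

    $\mathrm{OP}_S(X) = (X \ominus S) \oplus S$  (opening: erosion followed by dilation)
    $\mathrm{CL}_S(X) = (X \oplus S) \ominus S$  (closing: dilation followed by erosion)

Hence $\mathrm{OP}_S(\mathrm{CL}_S(X))$ denotes an opening applied to the already-closed image, which is the composite operator the reviewer asks the authors to verify against the wording on page 5.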
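For point 3, a purely illustrative sketch of one generic way DCSL measurements could refine ToF depth in continuous regions (a confidence-weighted blend on already-registered depth maps; the function name and the weight w_dcsl are assumptions, not necessarily the authors' optimization):

    import numpy as np

    def fuse_continuous_region(depth_tof, depth_dcsl, valid_dcsl, w_dcsl=0.8):
        # Where the sparser but more accurate DCSL depth is valid, blend it
        # with the ToF depth; elsewhere keep the ToF measurement unchanged.
        fused = depth_tof.copy()
        fused[valid_dcsl] = (w_dcsl * depth_dcsl[valid_dcsl]
                             + (1.0 - w_dcsl) * depth_tof[valid_dcsl])
        return fused

In practice the weight could come from per-pixel confidence, and the "corresponding points" would be the ToF pixels that reproject onto valid DCSL measurements.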

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
