Article
Peer-Review Record

ERS-HDRI: Event-Based Remote Sensing HDR Imaging

Remote Sens. 2024, 16(3), 437; https://doi.org/10.3390/rs16030437
by Xiaopeng Li, Shuaibo Cheng, Zhaoyuan Zeng, Chen Zhao and Cien Fan *
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 19 December 2023 / Revised: 19 January 2024 / Accepted: 20 January 2024 / Published: 23 January 2024
(This article belongs to the Special Issue Computer Vision and Image Processing in Remote Sensing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper introduces the ERS-HDRI framework for HDR imaging in remote sensing, aiming to enhance LDR images for improved downstream tasks. The framework incorporates an E-DRE network and a G-HDRR network in a coarse-to-fine approach: the E-DRE network extracts dynamic range features from the LDR frames and events, while the G-HDRR network enforces structure on the HDR image.

While the introduction effectively communicates the existing issues with conventional methods and highlights the advantages of event cameras, the novelty of the contributions is somewhat overstated. The paper claims contributions in presenting an event-based HDRI framework, introducing a coarse-to-fine strategy, and providing a hybrid imaging system with a new dataset. However, these contributions may lack significant novelty, as similar strategies have been explored in related works. The specific challenges addressed, such as the domain gap and light attenuation, are not distinctly positioned in the context of prior art, leaving room for a clearer articulation of the paper's unique contributions. Additionally, the emphasis on a hybrid imaging system and dataset, while valuable, should be contextualized within the broader landscape of remote sensing research. Further clarification of how the proposed framework distinguishes itself from existing approaches would enhance the introduction. The authors are therefore strongly recommended to follow the path for generating synthetic images explained in "A hybrid image dataset toward bridging the gap between real and simulation environments for robotics".

The details of the hybrid camera system shown in Figure 5 should be given explicitly, such as the dimensions of the mechanics and the distance between the cameras; this is also very important for spatial calibration. It would be better to see the calculations done for spatial calibration. The authors can further benefit from the geometry provided in "A low-cost UAV framework towards ornamental plant detection and counting in the wild".

The authors state that they employed NVIDIA GeForce RTX 4090 GPUs for training; however, they did not mention the number of GPUs. The authors also did not provide the time performance of their model. The time performance comparison is an important metric that should be given as a comparison table.

Finally, the conclusion lacks a critical reflection on the limitations of the proposed method or potential areas for future improvement. Providing insights into the challenges or aspects where the framework may fall short would contribute to a more balanced conclusion. Additionally, while the claim of outperforming state-of-the-art methods is made, more specific and quantifiable results or metrics are needed to reinforce this assertion. The conclusion would also benefit from a brief discussion of the broader implications of the proposed framework in the field of remote sensing and how it might inspire future research directions.

Author Response

Thank you very much for taking the time to review this manuscript. Your insightful comments and constructive suggestions have greatly improved the quality of our paper. Taking all the comments into account, we have carefully revised the manuscript. Please see the attachment for the point-by-point response.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper addresses the modern topic of event-based remote sensing HDR imaging. There is clear novelty in the proposed approach, and the obtained results are quite impressive. However, the paper can be improved:

Page 1: change "fig. 1" to "Fig. 1" (and in other places).

Change "table" to "Table" in all places.

Table 2: what is meant by ↑ and ↓? I can guess, but it is better to explain.

How can you explain such a large improvement of your method compared to the others, especially in Table 2? What is used (taken into account) in your method that is not used in the others?

What are the noise properties? How have they been estimated and simulated?

An SSIM of 0.88 and a PSNR of 29 dB correspond to noticeable distortions (for 8-bit images, a PSNR of 29 dB corresponds to a root-mean-square error of roughly 9 gray levels), but I do not see them in Figures 6 and 7.

Fig. 6: what does GT stand for?

Author Response

Thank you very much for taking the time to review this manuscript. Your insightful comments and constructive suggestions have greatly improved the quality of our paper. Taking all the comments into account, we have carefully revised the manuscript. Please see the attachment for the point-by-point response.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have presented their work under the title "ERS-HDRI: Event-Based Remote Sensing HDR Imaging". The research contributions of this paper are mainly reflected in the following points:

· The authors have presented an event-based HDRI framework for remote sensing HDR image reconstruction that integrates LDR frames and event streams.

· The authors have also proposed a coarse-to-fine strategy that efficiently achieves dynamic range enhancement and structure enhancement, alleviating both the domain gap problem and the light attenuation problem.

· The authors have constructed a hybrid imaging system with a conventional optical camera and an event camera.

· In the methodology, the authors illustrate the event-based remote sensing HDR imaging framework step by step, clearly enough to be understood even by a layperson.

· The experimental results show that the proposed method outperforms state-of-the-art methods on both synthetic and real-world data, and the results section is also articulated clearly in the paper.

· The conclusion and references are well organized, and all references are cited in the running text.

· There are a few minor corrections/typographical errors to be addressed by the authors. Here are the points:

· Abbreviations of a few terms should be expanded at their first occurrence in the paper.

· The formats of some captions, such as figure and table numbers, differ throughout the paper; please keep them consistent.

· A clear comparison of this work with other work was not found in the paper; good results should only be claimed when the work is compared with other methods. I therefore request the authors to update the paper accordingly.

· The authors have not clearly described the noise properties, i.e., what types of noise were considered in the simulation analysis.

· I request the authors to avoid words such as "I", "we", and "you" in the running text; the paper should be written in the third person.

Comments on the Quality of English Language

The authors have presented their work under the title "ERS-HDRI: Event-Based Remote Sensing HDR Imaging". The research contributions of this paper are mainly reflected in the following points:

· The authors have presented an event-based HDRI framework for remote sensing HDR image reconstruction that integrates LDR frames and event streams.

· The authors have also proposed a coarse-to-fine strategy that efficiently achieves dynamic range enhancement and structure enhancement, alleviating both the domain gap problem and the light attenuation problem.

· The authors have constructed a hybrid imaging system with a conventional optical camera and an event camera.

· The authors have also proposed a novel remote sensing event-based HDRI dataset that contains aligned LDR images, HDR images, and concurrent event streams.

· Experiments show that the proposed method outperforms state-of-the-art methods on both synthetic and real-world data.

· I request the authors to avoid words such as "I", "we", and "you" in the running text; the paper should be written in the third person.

Author Response

Thank you very much for taking the time to review this manuscript. Your insightful comments and constructive suggestions have greatly improved the quality of our paper. Taking all the comments into account, we have carefully revised the manuscript. Please see the attachment for the point-by-point response.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors addressed most of my concerns.

However, there are still some issues that can be easily eliminated.

The citations are done in a mixed format, for example: "Sharif et al. [31]", "ExpandNet [26]", "[29,30] enhance...", "Traditional approaches [21–24] ...". They must be formatted consistently, and the journal format must be used. In addition, I do not recommend starting sentences with bare citations such as "[29,30] enhance...".

Furthermore, the authors state "... the alignment between events and frames is performed by using the transformation matrix estimated by matching SIFT features [59] between the frame captured by the RGB camera and the reconstructed intensity maps." I also recommend employing the study "Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics" to justify this choice in terms of performance, which would make a stronger case (see the sketch after these comments).

The authors provided the dimensions of the hybrid camera system, but it would be better to elaborate on why these sizes were chosen. Is there anything biologically inspired (e.g., the average distance between human eyes)?

There are several typos: "Comparsion with ..." should be "Comparison with ...", "Comparsion of different ..." should be "Comparison of different ...", "The visual comparisons of results on an over-exposed LDR image is presented ..." should be "The visual comparisons of results on an over-exposed LDR image are presented ...", etc. The whole manuscript must be checked to fix typos and other language issues.

There must be a discussion of the efficiency evaluation, because the time performance of DeepHDR is better than that of the proposed method even though its number of parameters is almost five times larger. In addition, the time performance of HDRUnet and KUnet seems very competitive. However, there is a notable discrepancy between the numbers of parameters and the runtimes of the methods given in Table 4, so a strong justification must be provided for these issues.
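As an illustration of the SIFT-based spatial alignment quoted above, a minimal sketch is given below. It assumes OpenCV, uses hypothetical file names for the RGB frame and the event-reconstructed intensity map, and employs typical default parameters (a 0.75 ratio test and a 5-pixel RANSAC threshold); it is not the authors' actual implementation.

```python
import cv2
import numpy as np

# Hypothetical inputs: an LDR frame from the RGB camera and an intensity map
# reconstructed from the event stream (file names are placeholders).
rgb_frame = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)
intensity_map = cv2.imread("event_intensity_map.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp_rgb, des_rgb = sift.detectAndCompute(rgb_frame, None)
kp_evt, des_evt = sift.detectAndCompute(intensity_map, None)

# Match descriptors and keep reliable correspondences via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_rgb, des_evt, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the frame-to-event transformation (homography) with RANSAC.
src = np.float32([kp_rgb[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_evt[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the RGB frame into the event-camera view for pixel-level alignment.
h, w = intensity_map.shape
aligned_rgb = cv2.warpPerspective(rgb_frame, H, (w, h))
```

Whether a single homography is sufficient depends on the scene geometry and the baseline between the two cameras, which is one more reason the spatial calibration details and camera-system dimensions requested above should be reported.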

Comments on the Quality of English Language

There are several typos: "Comparsion with ..." should be "Comparison with ...", "Comparsion of different ..." should be "Comparison of different ...", "The visual comparisons of results on an over-exposed LDR image is presented ..." should be "The visual comparisons of results on an over-exposed LDR image are presented ...", etc. The whole manuscript must be checked to fix typos and other language issues.

Author Response

Thank you for your great effort on our paper. Your insightful comments and constructive suggestions have greatly improved its quality. Taking all the comments into account, we have carefully revised the manuscript. Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

I am satisfied with the corrections made.

Author Response

Thank you for your great effort on our paper.
