Article
Peer-Review Record

Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes

Sensors 2023, 23(17), 7560; https://doi.org/10.3390/s23177560
by Shuguang Li 1,*, Jiafu Yan 2, Haoran Chen 1 and Ke Zheng 1
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 12 August 2023 / Revised: 27 August 2023 / Accepted: 29 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue Research Progress on Intelligent Electric Vehicles-2nd Edition)

Round 1

Reviewer 1 Report

1. Please provide the units of the evaluation metrics MAE and RMSE (their standard definitions are restated after this list for reference).

2. What is the output of the radar, i.e., x in Eq. (1)?

3. Please explain why three decoder branches are used for road, tree, and sky, respectively, rather than for other categories. What would happen if only one decoder were used? (A hypothetical sketch of such a head follows this list.)

4. Please give the hyperparameter values in Eq. (10) used during training.

5. Please discuss whether the achieved accuracy is sufficient for practical application. Are there any remaining problems?
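For reference (comment 1), the standard definitions of the two metrics, written with generic symbols (predicted depth \hat{d}_i, ground-truth depth d_i, over N valid pixels; the symbols are not taken from the manuscript):

\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\lvert \hat{d}_i - d_i \rvert, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{d}_i - d_i\right)^2}

Both inherit the unit of the depth values themselves (metres in driving scenes), which is the unit comment 1 asks to be stated.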
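As context for comment 3, a minimal, hypothetical sketch of a shared-feature decoder head with one branch per semantic region; the class set (road/tree/sky), channel sizes, and mask-based fusion below are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class BranchDecoder(nn.Module):
    """One lightweight depth head; a branch can specialize to the depth
    statistics of its region (e.g. planar road, distant sky)."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # per-pixel depth prediction
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

class MultiBranchDepthHead(nn.Module):
    def __init__(self, in_ch: int = 256, n_branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(BranchDecoder(in_ch) for _ in range(n_branches))

    def forward(self, feats: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # masks: (B, n_branches, H, W), soft assignment of pixels to the
        # road / tree / sky regions; fuse the branch outputs by the masks.
        preds = torch.cat([b(feats) for b in self.branches], dim=1)
        return (preds * masks).sum(dim=1, keepdim=True)

# Setting n_branches = 1 with an all-ones mask collapses this head to the
# ordinary single-decoder baseline that comment 3 asks about.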

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a novel depth estimation method for autonomous driving. Overall, the work is solid and has potential; however, some modifications should be made before publication.

(1)    In the introduction, you should present the importance of the perception system in more detail. The first survey on control system design for autonomous vehicles and connected and automated vehicles (10.1109/JIOT.2023.3307002) elaborates on the significance of the perception system for downstream modules, and it also envisions how perception can improve the accuracy of the estimation system. The introduction should therefore cover this work.

(2)    Multi-sensor fusion is widely used in autonomous driving; for example, it supports vehicle state estimation, as in automated vehicle sideslip angle estimation considering signal measurement characteristics. You should therefore enumerate applications of sensor fusion in autonomous vehicles in the text, such as fusion-based speed estimation.

(3)    For the comparison study, you should report each model's parameter count and inference time (a minimal measurement sketch follows this list).

(4)    In Figure 5, you need to zoom in on a specific area to demonstrate the effectiveness of the proposed method.

(5)    Please state the limitations of this work and directions for future work.
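As context for comment (3), a minimal sketch of how parameter count and per-frame inference latency are commonly measured in PyTorch; the model class and input resolution below are placeholders, not taken from the manuscript:

import time
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def mean_latency_ms(model, dummy, n_warmup: int = 10, n_runs: int = 100) -> float:
    """Average forward-pass time in milliseconds over n_runs."""
    model.eval()
    for _ in range(n_warmup):      # warm-up: allocator / cuDNN autotuning
        model(dummy)
    if dummy.is_cuda:
        torch.cuda.synchronize()   # GPU kernels run asynchronously
    start = time.perf_counter()
    for _ in range(n_runs):
        model(dummy)
    if dummy.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000.0 / n_runs

# Hypothetical usage (placeholder network and input size):
# model = FusionDepthNet().cuda().eval()
# dummy = torch.randn(1, 3, 352, 1216, device="cuda")
# print(count_parameters(model) / 1e6, "M parameters")
# print(mean_latency_ms(model, dummy), "ms per frame")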

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
