A Review of Unmanned Visual Target Detection in Adverse Weather
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The authors clearly present the tasks set for the system discussed in the paper and its research development.
The paper presents an overview of obstacle detection methods for autonomous driving in adverse weather conditions. The authors discussed traditional image enhancement methods, methods based on deep learning, and the similarities and differences between the models used. They presented further solutions with their characteristics and a summary.
They also addressed the types of noise occurring in images, the corresponding correction methods, and their effectiveness in terms of image enhancement and computational efficiency.
Finally, they provided a summary and defined the challenges faced by current technologies and the limitations related to the availability of images.
The article has been thematically formatted correctly and is written in a way that is accessible to the reader.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
<<Review Comments>>
This work addresses an important and timely topic in the field of computer vision and autonomous driving. After reviewing the manuscript thoroughly as a review paper, I offer the following comments and suggestions for improvement.
1. Organization and Clarity
Several figures (e.g., Figs. 1, 3, and 5) are overly text-heavy and do not clearly illustrate the structural differences between model types (e.g., CNN vs. Transformer). Please revise these diagrams to visually highlight architectural components or processing pipelines. For example, a comparative flowchart showing input-output behavior between CNN-based and Transformer-based recovery models would enhance clarity.
Since your review aims to cover the full pipeline (from degradation recovery to detection and evaluation), I strongly recommend adding a single integrated diagram summarizing the overall framework.
Additionally, consider using visual comparison charts (e.g., bar graphs, radar plots) to complement or replace long text-based tables.
2. Lack of Critical Analysis
The paper largely summarizes existing works without critically analyzing or comparing them. Please provide clear evaluations on which methods perform best under specific conditions (e.g., fog vs. rain), and explain why — beyond listing general pros and cons.
3. Unclear Contribution Compared to Prior Reviews
Your review lacks a clear explanation of how it differs from existing review papers on similar topics. Please include a brief statement in both the abstract and early in the introduction explaining your review’s unique scope or perspective.
4. Language and Grammar
The manuscript contains many long, awkward, or repetitive sentences, along with translation-like expressions that reduce readability.
Examples:
- “which show good interpretability and real-time performance...” (redundant phrasing)
- “chapter 1 provides...” → use “section” instead of “chapter” in academic papers.
(*) A thorough proofreading by a native English speaker or a professional editing service is strongly recommended.
5. Suggestions for Improvement
To strengthen this review paper, I suggest the following:
- Provide critical insights and trend analysis, rather than listing summaries.
- Clearly define your novel contribution compared to previous survey papers.
- Enhance visual clarity by improving diagrams and adding summary graphics.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This manuscript presents a comprehensive review of techniques for degraded image restoration and robustness enhancement in autonomous driving vision systems under adverse weather conditions. It covers both traditional physically-driven methods and deep learning-based approaches, referencing 63 representative studies and 11 datasets, demonstrating a wealth of information. However, the manuscript lacks academic sophistication in several key aspects, and revisions are necessary as outlined below:
- While the strengths and weaknesses of each method are described, the critical comparative analysis from the authors’ own perspective is weak. The review often appears to be a mere summary of existing literature. For instance, deeper insights are required regarding the potential breakthroughs offered by Transformer-based methods and the limitations and future prospects of Retinex-based approaches.
- The statement that “hybrid architectures combining physical models and deep learning are promising” is repeated several times, but this argument is commonly found in existing reviews and lacks originality. The authors should delve further into why hybrid approaches are desirable and what fundamental problems they aim to solve.
- The paper would greatly benefit from the inclusion of mapping diagrams that show the temporal evolution or categorical positioning of various methods—such as technology matrices or classification charts—to enhance reader comprehension.
- Numerous grammatical errors and unnatural expressions are present throughout the manuscript. A thorough native-level proofreading or the use of an English academic editing service is strongly recommended.
- While the manuscript touches on Transformers and GANs, it completely omits discussion of several recent and noteworthy techniques, such as diffusion models, self-supervised learning (SSL), and Neural Radiance Fields (NeRF). At the very least, a brief introduction and discussion of their future prospects should be included.
- In Chapter 5, future directions such as “multimodal fusion” and “edge deployment” are mentioned. However, there is a lack of concrete discussion on the technical directions necessary to achieve these, such as design guidelines for sensor fusion algorithms or cross-modal learning techniques.
The manuscript needs to be improved accordingly.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
The aim of this paper is to explore visual target detection methods under severe weather. The article reviews traditional physically-driven models and deep learning methods for visual target detection in adverse weather from two perspectives: degraded image restoration and improvement of detection model robustness. It then summarizes eleven mainstream public datasets and analyzes their differences. Finally, a comprehensive performance evaluation of adverse weather perception algorithms is conducted using multiple metrics.
This systematic review provides valuable theoretical and engineering references for the development of autonomous driving visual detection technologies in adverse weather conditions.
- Keywords are not bolded.
- Wrong formatting of the introduction title.
- Wrong formatting of Author Contributions and its subsequent parts.
- Would it be better to place Table 1 in the 1.1.6 Summary section, and the following Tables 2, 3, and 4 in the same section?
- Inconsistent font size in Table 4.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
Thank you for submitting the revised manuscript. All responses are fine, and I think this manuscript should be accepted.