Article
Peer-Review Record

Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones

by Yongheng Zhang
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 17 January 2025 / Revised: 11 March 2025 / Accepted: 11 March 2025 / Published: 14 March 2025
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents a dual-teacher, single-student knowledge distillation method for image restoration on drones; the experimental results show that its restoration performance is superior to that of the compared methods.

Comments:

1. Teacher A and teacher B have the same structure but are trained for different purposes. If the student model were trained for both purposes without knowledge distillation, what would its performance be?

2. The purpose of the proposed method is to achieve good detection results, as shown in Fig. 1. Why not train an appropriate detection model for the drone, i.e., an end-to-end model?

3. Teacher models A and B are used for the encoder and decoder parts of the student model, respectively; how is the balance between them determined? (A generic sketch of this balance is given after this list.)

4. A test experiment deploying the trained model on a drone should be conducted to validate its applicability.
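For context, a generic form of the encoder/decoder balance questioned in comment 3 might look like the sketch below. It is purely illustrative and uses assumed names (`distillation_loss`, weights `alpha`/`beta`, an L1 task loss, MSE feature matching); it is not the loss formulation actually used in the manuscript.

```python
# Minimal sketch of the encoder/decoder distillation balance raised in comment 3.
# alpha and beta are hypothetical weights; the paper's actual loss is not reproduced here.
import torch
import torch.nn.functional as F

def distillation_loss(restored, target,
                      stu_enc_feat, tea_a_enc_feat,
                      stu_dec_feat, tea_b_dec_feat,
                      alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    """Task loss plus feature-matching terms against teacher A (encoder) and teacher B (decoder)."""
    task = F.l1_loss(restored, target)                  # restoration objective
    kd_enc = F.mse_loss(stu_enc_feat, tea_a_enc_feat)   # teacher A guides the encoder
    kd_dec = F.mse_loss(stu_dec_feat, tea_b_dec_feat)   # teacher B guides the decoder
    return task + alpha * kd_enc + beta * kd_dec        # alpha : beta sets the balance in question
```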

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The main contribution of the paper is image restoration. Image restoration is an important stage in many image processing applications, and the authors state that it is an essential stage for drone applications. In my opinion, the most important task in drone usage is detecting the object, particularly by its edges. Image details may be required only when we want to recognise the object exactly and specify what it is and what is inside it. In the paper, I did not see anything about image recognition or processing aimed at object identification.

Now, the question is why we need to add cost and time to restore the image. If the aim is to collect clear and attractive images by drones for human-vision applications, that is all right.

However, in other computer vision applications we usually need to detect the object of interest, either for immediate treatment or to save for later use. For object detection in computer vision we need fast, low-cost procedures, and an additional restoration step adds time and cost. Look at Figs. 1 and 5: the object of interest can be detected simply with edge detection filters and masks, without the need for image restoration, unless there is another specific reason for it.
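For reference, the kind of "simple edge detection filters and masks" mentioned above can be realized in a few lines with standard OpenCV operators; this sketch is purely illustrative and is not code from the manuscript under review.

```python
# Illustrative sketch of a simple edge-detection baseline (OpenCV Canny);
# not taken from the reviewed paper.
import cv2

def detect_edges(image_path: str, low: int = 100, high: int = 200):
    """Return a binary edge map computed with the Canny detector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before edge extraction
    return cv2.Canny(blurred, low, high)          # low/high are hysteresis thresholds

# Example usage (hypothetical file names):
# edges = detect_edges("drone_frame.png")
# cv2.imwrite("edges.png", edges)
```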

I would appreciate the authors' explanation of the purpose of image restoration in these examples and the exact reason a better detection result is obtained, beyond human-vision applications.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1.    In Table 2, reformat the symbols by using bold and underline, and then re-check the values to convey the correct meaning. Additionally, clearly denote the teacher and student models to avoid ambiguity.
2.    DHPHN and MPRNet are not lightweight models due to their large size and complexity. The authors should avoid labeling comparison models as "lightweight" and ensure consistent notation and presentation in the tables and manuscript. Please review all the models presented and categorize them accurately within the correct scope.
3.    The statement that the model achieves “over 80% reductions in FLOPs and model parameters while maintaining competitive image restoration performance” is ambiguous. In academic papers, authors should specify which models are being compared, provide exact values for the differences, and outline the specific metrics used. The evaluation should be conducted within the same scope and under comparable conditions. A more transparent presentation of the observed environment is necessary for a fair assessment.
4.    Since the proposed method is designed for real-world applications, the authors should assess the model's effectiveness on real-world datasets with unknown ground truth, such as RWBI [a], or test it on real-world degraded images to ensure its applicability in real-world scenarios.
[a] Zhang, K.; Luo, W.; Zhong, Y.; Ma, L.; Stenger, B.; Liu, W.; Li, H. Deblurring by Realistic Blurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2737–2746.

Comments on the Quality of English Language

The English language should be double-checked.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

All the comments have been resolved except for the one on computational complexity. The paper presents reductions in FLOPs and inference time, but a more detailed analysis of the model's performance on different hardware platforms (e.g., drones) would strengthen the practical relevance of the method.

Author Response

Thank you for your valuable feedback. The deployment experiments and analysis on different hardware platforms can be found in Section 4.7 and Table 9. In our previous revision, we added deployment experiments on the NVIDIA Jetson Xavier NX platform. In this revision, we have further included experimental results and analysis on another commonly used UAV hardware platform, the NVIDIA Jetson Orin Nano, to strengthen the practical relevance of our method.
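As a point of reference, inference latency on Jetson-class boards such as the Xavier NX or Orin Nano is typically measured with a simple timing loop like the sketch below. This is an assumed, generic benchmarking routine (PyTorch, with a hypothetical `benchmark` helper and input size), not the authors' actual deployment script.

```python
# Generic latency-benchmark sketch for a CUDA-capable Jetson-class device
# (illustrative only; the paper's benchmarking code is not published here).
import time
import torch

def benchmark(model: torch.nn.Module, input_shape=(1, 3, 256, 256),
              warmup: int = 10, runs: int = 50) -> float:
    """Return mean inference latency in milliseconds."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    with torch.no_grad():
        for _ in range(warmup):           # warm-up passes stabilize clocks and caches
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()      # wait for queued kernels before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    return elapsed / runs * 1000.0

# Example usage (hypothetical): benchmark(student_model) -> mean ms per 256x256 frame
```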

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have responded to almost all of my comments. I appreciate your reply. However, I suggest that you mention more relevant research, such as low-light image enhancement [1], image denoising [2], and image deblurring [3], so that readers can comprehensively understand the image restoration area.

[1] doi: 10.1109/ACCESS.2024.3457514; doi: 10.1109/ACCESS.2022.3197629

[2] doi: 10.3390/s24113608; doi: 10.1109/TMM.2022.3194993

[3] doi: 10.3390/s24206545

Comments on the Quality of English Language

The English writing should be double-checked for grammar, typos, formatting, and general proofreading.

Author Response

Thank you for your valuable feedback. We carefully reviewed the relevant studies you mentioned and found that they play an important role in enhancing readers' understanding of the image restoration field. Therefore, we have incorporated discussions of these works into Section 2.1.2 to provide a more comprehensive overview.
