Article
Peer-Review Record

Aircraft Rotation Detection in Remote Sensing Image Based on Multi-Feature Fusion and Rotation-Aware Anchor

Appl. Sci. 2022, 12(3), 1291; https://doi.org/10.3390/app12031291
by Feifan Tang 1, Wei Wang 1,*, Jian Li 1, Jiang Cao 1, Deli Chen 1, Xin Jiang 1, Huifang Xu 2 and Yanling Du 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 15 November 2021 / Revised: 20 January 2022 / Accepted: 21 January 2022 / Published: 26 January 2022

Round 1

Reviewer 1 Report

A few remarks. In the sentence "It usually constructs rotation-invariant and scale-invariant features based on the object shape, texture, and geometric features and collaborates with the general classifiers, e.g., SVM and neural networks.", "neural networks" is too general a term to point to a particular type of classifier, since an SVM could also be regarded as a neural network with a specific training algorithm/approach.

The corner detector used in the proposed approach may show different performance/results from image to image, or depending on the manually selected detector parameters. It would be worth additionally showing how the internal parameters of the corner detector change its performance, because applying a similar approach to another dataset may produce unexpected results.
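To make this concern concrete, the sketch below computes a minimal Harris-style corner response in NumPy. This is only an illustration of parameter sensitivity, not the detector used in the paper; the test image and the values of the sensitivity parameter k are hypothetical. The same image produces a weaker peak response for a larger k, so any fixed detection threshold yields a different set of corners.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response: R = det(M) - k * trace(M)^2."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):  # 3x3 box-filter window sum via shifted views
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on a black background: corners are at its four vertices.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# A larger k suppresses the response everywhere the gradient is non-zero,
# so a fixed threshold detects fewer corners with k=0.20 than with k=0.04.
r_small = harris_response(img, k=0.04)
r_large = harris_response(img, k=0.20)
print(r_small.max(), r_large.max())
```

The response peaks at the square's vertices and is negative along its edges, which is the standard Harris behavior; the point of the sketch is that the peak height, and hence the detected corner set under a fixed threshold, depends directly on k.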

It would be interesting to find out the impact of the selected IoU threshold of 0.7, or at least to see some comment on why this value was selected and whether changing it would have any impact on the results.
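For context, intersection-over-union for axis-aligned boxes can be computed as follows. This is a generic sketch, not the rotated-box matching used in the paper, and the example boxes are hypothetical; it only illustrates how the choice of threshold flips a borderline detection between "match" and "miss".

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# A detection shifted by half a box width overlaps its ground truth with
# IoU = 50 / 150 ~= 0.33: counted as a match at threshold 0.3,
# but as a miss at the stricter threshold 0.7.
gt, det = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(gt, det))
```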

I do not agree that a speed decrease of 2.35 fps is significant if those additional ~43 aircraft are detected. In addition, it would be interesting to know whether the variation of the F1 score stays below 0.43% (the estimated difference between the proposed and the alternative method) when different examples are selected for training and testing while keeping the same 8:2 proportions.
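The split-variance question can be checked with a simple repeated-split experiment. The sketch below uses entirely synthetic per-image detection outcomes (the counts are invented, not taken from the paper under review); it only shows the mechanics of estimating how much the test-set F1 score moves between different random 8:2 splits.

```python
import random
import statistics

def f1(tp, fp, fn):
    """Micro-averaged F1 from true positives, false positives, false negatives."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Hypothetical per-image outcomes (tp, fp, fn) for 500 images.
rng = random.Random(0)
images = [(rng.randint(3, 8), rng.randint(0, 2), rng.randint(0, 2))
          for _ in range(500)]

# Repeat the 8:2 split with different random test sets and collect
# the test-set F1 score for each split.
scores = []
for seed in range(20):
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)
    test = shuffled[int(0.8 * len(shuffled)):]
    tp = sum(t for t, _, _ in test)
    fp = sum(f for _, f, _ in test)
    fn = sum(n for _, _, n in test)
    scores.append(f1(tp, fp, fn))

print(statistics.mean(scores), statistics.stdev(scores))
```

If the standard deviation across splits were comparable to the 0.43% gap between methods, the reported ranking of the methods would not be robust to the choice of split.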

Since the advantages of the solution proposed in this paper are presented in terms of processing speed, it would be more valuable to compare its performance with SSD MobileNet v2 and YOLOv4.

Author Response

Thank you for your suggestion and please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The authors propose a sound framework of multi-feature fusion with a rotating-anchor generation mechanism for oriented aircraft detection.

The framework is described well. However, no equations were presented for, e.g., the neural networks used. It is suggested to include them and to explain what the innovations were.

The use of binary images in the framework, obtained from the input images, was not adequately explained. Indeed, the algorithms used to obtain those images were not described, and their limitations and robustness were not properly discussed. Moreover, the images shown in Figure 5 are very similar and do not show that the algorithm works for images taken at different heights and under different illumination conditions. Please address this to improve the paper.
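One way to make the illumination concern concrete: a global Otsu threshold (a standard binarization method, used here purely as an illustration; the paper's actual binarization algorithm is not described) that segments a flat-illumination scene perfectly can fail once an illumination gradient is added. The scene and the intensity values below are synthetic.

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold for an 8-bit-range image (maximize between-class variance)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    total = float(hist.sum())
    sum_all = float(np.dot(np.arange(256), hist))
    w0, sum0 = 0.0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Flat illumination: dark background (50) with a bright target (120).
flat = np.full((64, 64), 50.0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:44, 20:44] = True
flat[gt] = 120.0

# Same scene with a strong left-to-right illumination ramp: the brightest
# background now exceeds the darkest part of the target.
ramp = np.clip(flat + np.linspace(0, 150, 64)[None, :], 0, 255)

def seg_error(img):
    mask = img > otsu_threshold(img)
    return float(np.mean(mask != gt))

print(seg_error(flat), seg_error(ramp))
```

The flat image is segmented without error, while under the ramp no single global threshold can separate target from background, so the binary mask fed to the rest of the pipeline degrades; this is exactly why robustness to different heights and illumination conditions should be demonstrated.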

The proposed method was adequately validated on public datasets against state-of-the-art methods. However, improvements are needed, as stated in the previous comments.

In the text, please clearly state what FPN stands for.

Author Response

Thank you for your suggestion and please see the attachment.

Author Response File: Author Response.docx
