Peer-Review Record

High-Accuracy and Low-Latency Tracker for UAVs Monitoring Tibetan Antelopes

Remote Sens. 2023, 15(2), 417; https://doi.org/10.3390/rs15020417
by Wei Luo 1,2,3,4, Xiaofang Li 5, Guoqing Zhang 1, Quanqin Shao 2,6, Yongxiang Zhao 1, Denghua Li 7,*, Yunfeng Zhao 1,2,3, Xuqing Li 1,2,3, Zihui Zhao 1,2,3, Yuyan Liu 1,2,3 and Xiaoliang Li 1
Submission received: 23 November 2022 / Revised: 16 December 2022 / Accepted: 6 January 2023 / Published: 10 January 2023

Round 1

Reviewer 1 Report

The authors propose to use deep learning models with images acquired from UAVs to track the movement of antelopes, which is an interesting topic. This work has some significance for animal-protection applications. However, the manuscript is poorly organized, which makes it hard for readers to follow: some sections are unnecessary, while some words are missing. The authors need to reorganize the work to emphasize their major contributions. Organizing the major contributions point by point is recommended.

 

The major comments are as follows:

 

1.      The title of the manuscript is problematic. The work proposes a tracking method, not a ‘UAV’. Moreover, optical flow is not the cornerstone of the method.

 

2.      In the Introduction Section, how does the method pre-calculate the motions of moving objects without accessing the next image frame? It seems that the method can only calculate the current object's moving direction based on the current image frame. If this is the case, the word ‘pre-calculate’ is problematic.
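To illustrate the point above for readers unfamiliar with optical flow, here is a minimal, self-contained sketch (not taken from the manuscript; a single-patch Lucas-Kanade least-squares step in NumPy). Note that the estimator necessarily consumes *two* consecutive frames: motion between frames cannot be computed from the current frame alone.

```python
import numpy as np

def estimate_motion(frame_t, frame_t1):
    """Estimate a single global (dx, dy) motion between two consecutive
    frames via the Lucas-Kanade least-squares step. The *next* frame
    (frame_t1) is required: optical flow measures displacement between
    two frames, not from one frame alone."""
    # Spatial gradients from the current frame; temporal gradient
    # from the frame difference.
    Ix = np.gradient(frame_t, axis=1)
    Iy = np.gradient(frame_t, axis=0)
    It = frame_t1 - frame_t
    # Normal equations of the least-squares problem:
    # [sum IxIx  sum IxIy] [u]   [-sum IxIt]
    # [sum IxIy  sum IyIy] [v] = [-sum IyIt]
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)
    return u, v
```

Running this on a synthetic smooth blob shifted right by one pixel recovers a displacement close to (1, 0), confirming that the motion estimate comes from the frame pair, not from the current frame in isolation.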

 

3.      More works need to be reviewed in the related work section, especially those works on optical flow.

 

4.      Section 3.2 could be removed, considering its irrelevance to the method.

 

5.      In Figure 4, how does the adaptive search area selection work? What are the circles in the ROI? What do the boxes and the texts (3, 1, N, L) on the right of the image mean? How is the circle in the bottom-right sub-image generated?

 

6.      How are the tracking points in Step 1 generated?

 

7.      The method section should be thoroughly reorganized to clearly describe the workflow of the method. The title of each subsection should be carefully revised.

 

Minor comments:

Considering that some readers are not familiar with object tracking, some terms need to be explained to increase readability.

 

What is backtracking?

What is adjustable low latency? Does it mean that the proposed work adjusts adaptively to any task?

 

What is ‘stale’ CNN?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

1. The article lacks an analysis of the model in complex scenes with many Tibetan antelopes.

2. The article should report the ID-switch metric to better show the tracking performance of the model.

3. The tracking algorithm in this article lacks comparison with other algorithms. It is recommended to compare it with ByteTrack.
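For readers unfamiliar with the ID-switch metric requested in comment 2, a minimal sketch of how switches are counted (hypothetical data; simplified relative to the full MOT-challenge definition, which also involves per-frame detection matching):

```python
def count_id_switches(tracks):
    """Count identity switches: for each ground-truth object, a switch
    is recorded whenever the tracker ID assigned to it differs from
    the ID it carried in the previous frame where it was tracked.

    `tracks` maps a ground-truth object to its list of per-frame
    tracker IDs (None = missed in that frame)."""
    switches = 0
    for ids in tracks.values():
        last = None
        for tid in ids:
            if tid is None:
                continue  # a miss is not an ID switch
            if last is not None and tid != last:
                switches += 1
            last = tid
    return switches
```

For example, an antelope tracked with IDs [1, 1, 2, 2, 1] over five frames contributes two switches (1→2 and 2→1); a lower count indicates more stable identity preservation.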

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This paper presents a high-frame-rate, low-latency intelligent UAV system for antelope tracking based on optical flow. The topic is worthy of investigation, but the manuscript suffers from the following shortcomings:

·         The research question has not been put forward clearly.

·         Both the level and context of the Related Works section should be revised, and more recent and relevant references should be added.

·         It is recommended to cover the Fourth Industrial Revolution and its pillars, including UAV, IoT, AI, etc. Here are some suggested works:

https://ieeexplore.ieee.org/document/9665580

https://www.mdpi.com/2071-1050/14/7/3758

 https://www.sciencedirect.com/science/article/pii/S2666603022000173

https://www.mdpi.com/2504-446X/6/7/177

·         The gap in knowledge is missing! It should be clarified.

·         Comparing the proposed solution with some available models in Table 1 would give more credit to the presented results, which in turn would better specify the motivation of this paper.

·         It is not clear whether the used UAV has an autonomy feature! This should be clarified.

·         The used dataset's reference, training steps, and samples have not been discussed or presented.

·         The mathematical formulation of the proposed YOLOX algorithm should be added.

·         For validation, it is recommended to consider a comparison between the proposed YOLOX algorithm against similar algorithms.

·         Figures could be improved from size and resolution perspectives.

·         Overall, the manuscript suffers from some issues that should be addressed.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

It can be accepted in present form.
