Article
Peer-Review Record

TCM-Net: Mixed Global–Local Learning for Salient Object Detection in Optical Remote Sensing Images

Remote Sens. 2023, 15(20), 4977; https://doi.org/10.3390/rs15204977
by Junkang He 1,2,†, Lin Zhao 1,2,†, Wenjing Hu 1,2, Guoyun Zhang 1,2, Jianhui Wu 1,2 and Xinping Li 1,3,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 19 July 2023 / Revised: 28 September 2023 / Accepted: 2 October 2023 / Published: 16 October 2023

Round 1

Reviewer 1 Report

The paper proposes a Transformer and Convolution Mix Network (TCM-Net) for salient object detection. The work is comprehensive and the experiments are sufficient. However, there are some issues that need to be addressed:

1. Some of the formulas in the paper could be unified and expressed with shared symbols to avoid unnecessary repetition, since the only difference between them is the number of layers applied.

2. The paper highlights the hybrid architecture as an innovative point. However, it would be beneficial to include a brief overview of other hybrid structures in the introduction, such as "Building extraction from remote sensing images with sparse token transformers" and "A hybrid network of CNN and transformer for lightweight image super-resolution."

3. The description of the proposed method is too detailed. It is suggested to condense some of the content to maintain reader interest and highlight the high-level information; in particular, minor design details could be omitted.

I recommend that the authors focus on these issues and revise the paper accordingly.

N/A

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a Transformer and Convolution Mix Network (TCM-Net) with a U-shaped codec architecture for ORSI-SOD. The main points of this review are listed below.

* A 10-fold cross-validation is required to assess potential overfitting issues.

* The reviewer suggests the authors make their code publicly available, which could be very helpful for the community.

* Why did the authors choose 0.2 for the weights in the loss function? Did you perform hyperparameter tuning?

* The publication year of the different models in Table 1 should be added.

* Statistical tests are necessary to support the claim that your results are significantly better than those of others.

* For the model complexity, the number of parameters, FLOPs, and prediction speed should be included.

* The heat maps should be discussed to explain the role of the proposed optimization models in this paper.

* It is recommended to include cross-dataset results, i.e., models trained on one dataset and evaluated on additional datasets.

* Some failure cases should be presented to show the deficiencies of the proposed method.

Moderate editing of English language required

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors propose a network for detecting salient objects in remote sensing images, namely TCM-Net, which has certain research value and offers technical improvements. The description of the principles in this article is clear, and the experimental and comparative work is relatively sufficient. However, a few modifications and explanations are needed before publication:

1) The fusion of global and local features has long been studied. What is the motivation behind this article, and what new ideas does it contribute?

2) Is there a specific constraint on the size range of the salient objects, and can the method solve the problem of detecting small-sized targets?

3) In the conclusion section, the limitations of the method should be explained and specific measures to address them should be provided.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
