Article
Peer-Review Record

Target Tracking from Weak Acoustic Signals in an Underwater Environment Using a Deep Segmentation Network

J. Mar. Sci. Eng. 2023, 11(8), 1584; https://doi.org/10.3390/jmse11081584
by Won Shin 1, Da-Sol Kim 2 and Hyunsuk Ko 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 24 July 2023 / Revised: 6 August 2023 / Accepted: 10 August 2023 / Published: 12 August 2023
(This article belongs to the Special Issue AI for Navigation and Path Planning of Marine Vehicles)

Round 1

Reviewer 1 Report

The manuscript focuses on Bearing-Time Record (BTR) images visualized from the sound signals received by passive SONAR systems, and proposes an effective deep segmentation network to enhance the extraction of a target's bearing information acquired in a challenging underwater environment, which is interesting work. However, the following issues need to be addressed.

1. Subsections 5.2.1 to 5.2.3 are poorly presented, with key information not highlighted, and it is recommended that they be visualized in a table.

2. For the analysis of the results in Table 3, the manuscript presents Subsections 5.4.1 to 5.4.3 in a single paragraph; it is suggested that numbering (a) to (c) or (1) to (3) be used instead.

3. Following up on the previous comment, the analysis of Table 4 should be modified in the same way.

4. Numerous descriptions in this manuscript are poorly presented, such as this sentence in the last paragraph of page 12: 'Additionally, the results of Precision and F1-scores were included for informative purposes to provide a comprehensive understanding of the performance characteristics.', where 'were' would be better replaced by 'are'.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Target Tracking from Weak Acoustic Signals in an Underwater Environment using a Deep Segmentation Network


Minor revision


This article provides valuable insights into the challenges of detecting enemy targets in submarine warfare systems and demonstrates how a deep segmentation network can accurately measure bearing-time information. To enhance the publication's quality, the following refinements are suggested:


(1) In Section 3, four conditions for synthesizing the BTR image dataset are presented, but the methods used for creating these datasets are not provided. It is essential to include a detailed explanation of the methodologies employed to synthesize the datasets, as this information is crucial for the reproducibility and credibility of the study.
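Purely as an illustration of the kind of detail being requested here, the sketch below shows one generic way a synthetic bearing-time track and its ground-truth mask could be generated at a prescribed SNR. This is not the authors' procedure: the sinusoidal track, the unit target power, and every parameter name below are assumptions made for the example only.

```python
import numpy as np

# Hypothetical sketch only -- not the authors' synthesis procedure.
# Builds a bearing-time "image": rows are time frames, columns are bearing bins,
# with one sinusoidal target track embedded in Gaussian noise at a prescribed SNR (dB).
def synth_btr(n_frames=512, n_bearings=360, snr_db=5.0, seed=0):
    rng = np.random.default_rng(seed)
    image = np.zeros((n_frames, n_bearings))
    t = np.arange(n_frames)
    # Assumed target motion: bearing oscillating slowly around 180 degrees.
    track = (180 + 60 * np.sin(2 * np.pi * t / n_frames)).astype(int) % n_bearings
    image[t, track] = 1.0                        # unit-power target line (signal power = 1)
    noise_power = 10 ** (-snr_db / 10)           # so that 10*log10(signal/noise) = snr_db
    image += rng.normal(scale=np.sqrt(noise_power), size=image.shape)
    mask = np.zeros_like(image, dtype=np.uint8)  # ground-truth segmentation label
    mask[t, track] = 1
    return image, mask
```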


(2) In Section 5.5.2, only the comparison with the DLV3+ESC algorithm reflects the superiority of DLV3+MSC. To strengthen the persuasive power of the improved algorithm, it is recommended to add comparisons with other relevant algorithms. Including more data points through comparisons with multiple algorithms will provide a more comprehensive evaluation of the improved algorithm's performance.


(3) The document cites a total of 31 references: 13 published within the last 5 years (42%), 14 published 5-10 years ago (45%), and 4 more than 10 years old (13%), meaning 87% of the references are from the last ten years. While the inclusion of recent references keeps the information up to date, it is advisable to increase the total number of references to further enrich the article's scholarly foundation. Expanding the reference list will enhance the literature review and demonstrate a thorough understanding of the subject matter.


(4) The authors may wish to give some attention to engineering applications based on computer vision algorithms. For visual measurement applications, please refer to articles such as: 'Novel visual crack width measurement based on backbone double-scale features for improved detection automation', Engineering Structures.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper proposes a deep learning segmentation network for BTR images from SONAR signals. This is used for detection in underwater environments.
The authors evaluate their approach on a synthetic dataset and use the precision, recall, F1 and F3 metrics.
The language of the paper is good; however, there are some very minor issues that need to be addressed.


1. In Section 5.3 (Evaluation metrics) you use the F3 score. I would recommend generalizing the equations for F1 and F3 to the Fβ score and setting the parameter β at β = 1 and β = 3, to avoid duplicating what is in essence the Fβ score equation. This change would, of course, also be reflected in the tables of results that follow this section.
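For reference, the generalization the reviewer alludes to is the standard Fβ score, written here with precision P and recall R; F1 and F3 are simply its β = 1 and β = 3 special cases (this is the textbook definition, not a formula taken from the manuscript):

```latex
% Standard F-beta score (P = precision, R = recall); F1 and F3 follow by setting beta = 1 and beta = 3.
\[
F_{\beta} = \frac{(1+\beta^{2})\,PR}{\beta^{2}P + R},
\qquad
F_{1} = \frac{2PR}{P + R},
\qquad
F_{3} = \frac{10\,PR}{9P + R}.
\]
```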

2. The results show that you achieve a high recall but low precision. Is this an expected result, given that the acoustic signals are weak? A short paragraph expanding on that would be helpful to the reader.
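As a purely illustrative aside (the precision/recall values below are hypothetical and not taken from the manuscript's tables), a recall-heavy operating point is penalized far less under F3 than under F1, since β = 3 weights recall nine times more than precision:

```python
# Illustrative only: hypothetical precision/recall values, not results from the manuscript.
def f_beta(precision, recall, beta):
    """Standard F-beta score; beta > 1 weights recall more heavily than precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.40, 0.95  # hypothetical operating point: low precision, high recall
print(f"F1 = {f_beta(p, r, 1):.3f}")  # 0.563 -- pulled down by the low precision
print(f"F3 = {f_beta(p, r, 3):.3f}")  # 0.835 -- dominated by the high recall
```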

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

1. While the proposed DLV3+MSC network with the MTL loss function performs well in most cases, the segmentation results on the most challenging PD50+SNR5 dataset still need further improvement.

2. The qualitative evaluation indicates that the predicted images still exhibit some inaccuracies and thickness discrepancies in target pixels. This aspect needs further attention to achieve more precise segmentation results and better visual quality.

3. In Figures 14 and 15, please add a few lines explaining what is inferred from the images.

Please also include any limitations that are inherent to this approach.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
