Article
Peer-Review Record

MDNet: A Fusion Generative Adversarial Network for Underwater Image Enhancement

J. Mar. Sci. Eng. 2023, 11(6), 1183; https://doi.org/10.3390/jmse11061183
by Song Zhang 1,2,3,4, Shili Zhao 1,2,3,4, Dong An 1,2,3,4, Daoliang Li 1,2,3,4 and Ran Zhao 1,2,3,4,*
Reviewer 2: Anonymous
Submission received: 8 May 2023 / Revised: 29 May 2023 / Accepted: 29 May 2023 / Published: 6 June 2023
(This article belongs to the Special Issue Young Researchers in Ocean Engineering)

Round 1

Reviewer 1 Report

The paper titled "MDNet: A Fusion Generative Adversarial Network for Underwater Image Enhancement" showcases excellent work by the authors, with well-explained methodology and results sections. However, two minor concerns exist regarding the introduction and related work sections. Addressing these limitations in those sections would greatly enhance the overall understanding of the field.

1. The introduction lacks a clear justification for the fusion-based idea, combining traditional methods with deep learning, and fails to explain why this approach is expected to overcome the limitations of individual approaches and is suitable for underwater image enhancement.

2. The related work lacks an explicit discussion on the limitations of deep learning for underwater image enhancement, including the need for large labeled datasets and the scarcity of such data in the underwater domain, necessitating a more thorough examination of these challenges.

Author Response

Response to Reviewers’ Comments

Dear Editor and Reviewers,

We are truly grateful for your kind letter and for the reviewers' critical comments concerning our article (Manuscript ID: jmse-2413653). These comments are valuable and helpful for improving our article. Based on these comments and suggestions, we have made careful modifications to the original manuscript to meet the requirements of the Journal of Marine Science and Engineering. In the revised version, all changes made to the text are highlighted in red. Point-by-point responses to the reviewers are listed below.

Reviewer #1:

The paper titled "MDNet: A Fusion Generative Adversarial Network for Underwater Image Enhancement" showcases excellent work by the authors, with well-explained methodology and results sections. However, two minor concerns exist regarding the introduction and related work sections. Addressing these limitations in those sections would greatly enhance the overall understanding of the field.

Comment 1: The introduction lacks a clear justification for the fusion-based idea, combining traditional methods with deep learning, and fails to explain why this approach is expected to overcome the limitations of individual approaches and is suitable for underwater image enhancement.

Response: Thanks for your comment. The fusion-based idea is a processing strategy that fuses images processed by multiple methods to achieve better results. Combining multiple methods provides multi-scale features and overcomes, to a certain extent, the limitations of any single method, thereby achieving better results. Explanations have been added to lines 64 to 67 in the revised manuscript.
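The fusion strategy described above can be sketched as a weighted blend of two differently enhanced versions of the same image. This is a minimal illustrative toy example, not the authors' MDNet pipeline: the two enhancement stand-ins (a gain step and a gamma step) and the equal weights are assumptions for demonstration only.

```python
import numpy as np

def fuse_enhanced(img_a, img_b, w_a=0.5, w_b=0.5):
    """Pixel-wise weighted fusion of two enhanced versions of one image.

    img_a, img_b: float arrays with values in [0, 1] (e.g., the outputs of
    a white-balance step and a contrast-enhancement step). In practice the
    weights could be per-pixel maps derived from saliency or local contrast.
    """
    fused = w_a * img_a + w_b * img_b
    return np.clip(fused, 0.0, 1.0)

# Toy example: two hypothetical "enhanced" versions of a 4x4 grayscale image.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
enhanced_gain = np.clip(base * 1.2, 0.0, 1.0)   # stand-in for white balance
enhanced_gamma = np.clip(base ** 0.8, 0.0, 1.0)  # stand-in for contrast/gamma
fused = fuse_enhanced(enhanced_gain, enhanced_gamma)
```

The intuition is that each enhancement method recovers a different aspect of the degraded input (color cast, contrast, detail), so a blend can inherit the strengths of both while averaging out their individual artifacts.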

Comment 2: The related work lacks an explicit discussion on the limitations of deep learning for underwater image enhancement, including the need for large labeled datasets and the scarcity of such data in the underwater domain, necessitating a more thorough examination of these challenges.

Response: Deep learning relies heavily on large-scale datasets, and the quality of the dataset determines the upper limit of what deep learning can achieve. Due to the special characteristics of the underwater environment, obtaining ground truth for underwater images has always been a challenge. The usual approach is to enhance underwater images with different methods and select the best result as the ground truth. However, such generated ground truth still does not perform well in some challenging situations. Content has been added to lines 140 to 145 in the revised manuscript.


Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes an underwater image enhancement framework. The proposed approach is compared with a few other methods. However, the following questions need further clarification:
1. Please clarify what new components are in the proposed MDNet model. The use of the GAN framework is not a new idea. The proposed MDNet model seems to be a customized model. Then an ablation study is needed to justify the proposed new components.
2. Please provide the full name of MDNet in the paper title.
3. There appear to be full-width "," symbols at lines 70 and 73.
4. It might be good to use the same test images to show various enhanced results and merge Figure 1 and Figure 2.
5. Figure 3 is a bit confusing.
- "Generate fake image" -> "Generated fake image".
- Please add outputs for the module "Is D correct".
6. What is the purpose of the application test conducted in Section 4.5? Please provide quantitative performance measurement for results in Figures 11 and 12.


Author Response

Reply as attached

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The revision is fine; there are no further comments.

