Article
Peer-Review Record

Unsupervised Image Dedusting via a Cycle-Consistent Generative Adversarial Network

Remote Sens. 2023, 15(5), 1311; https://doi.org/10.3390/rs15051311
by Guxue Gao, Huicheng Lai * and Zhenhong Jia
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 16 December 2022 / Revised: 9 February 2023 / Accepted: 23 February 2023 / Published: 27 February 2023
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

Round 1

Reviewer 1 Report


Comments for author File: Comments.docx

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper proposes a method based on Cycle-GAN for unpaired image dedusting. Although Cycle-GAN is well established, the paper proposes a jointly optimized guided module, which seems to reduce the artifacts generated by the original Cycle-GAN. I have some concerns as follows:

1) “Learning rate decays linearly to 0.” Is this a typo, or is the rate really annealed to zero? Please clarify (a reference sketch of such a schedule is given after this list).

2) How were the parameters of Equation (21) chosen as w_1 = 10, w_2 = 0.5, and w_3 = 1? Sensitivity analysis plots would be welcome to better understand the proposed method.

3) Does the proposed method have failure cases? If so, the authors should discuss them.

4) The proposed method is tested on 256×256 images. Can it be applied to larger images?

5) I really like Section “Other application”. Could the authors report the accuracy of object detection before/after using the proposed method?

6) Could the authors test the proposed method on other image translation tasks that do have paired images, in order to analyze the effect of paired versus unpaired training data on quantitative metrics such as PSNR or SSIM? (A minimal metric-computation sketch is given after this list.)
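
For reference on point 1): in the training recipe popularized by the original CycleGAN implementation, the learning rate is held constant for a number of epochs and then decayed linearly to zero. A minimal PyTorch sketch of such a schedule follows; the epoch counts, base rate, and stand-in model are illustrative assumptions, not the authors' actual settings.

```python
import torch

# Linear "decay to 0" schedule in the style of the original CycleGAN recipe.
n_epochs, n_epochs_decay = 100, 100        # constant phase, then decay phase

model = torch.nn.Linear(4, 4)              # stand-in for a generator
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))

def lr_lambda(epoch):
    # Multiplicative factor on the base rate: 1.0 for the first n_epochs,
    # then falling linearly until it reaches 0 after n_epochs_decay more.
    return 1.0 - max(0, epoch - n_epochs) / float(n_epochs_decay)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(n_epochs + n_epochs_decay):
    # ... one training epoch would run here ...
    scheduler.step()
```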
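For point 6): on a paired task, PSNR and SSIM can be computed directly against the ground truth. Below is a minimal sketch using scikit-image; the `ground_truth` and `restored` arrays are hypothetical stand-ins for a reference image and a network output.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical paired evaluation on a single 256x256 RGB image.
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
noise = rng.integers(-10, 11, ground_truth.shape)   # fake restoration error
restored = np.clip(ground_truth.astype(np.int16) + noise, 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
ssim = structural_similarity(ground_truth, restored, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```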


Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper presents an algorithm for image dedusting, based on the CycleGAN image transform. The results seem to be promising and the subjective evaluation shows excellent performance.

Abstract

“To improve the image quality and enhance the performance of image dedusting, we propose an end-to-end cyclic generative adversarial network (D-CycleGAN) for image dedusting, which does not require sand-dust images and the corresponding ground truth images for training.”

This sentence is not precise. The algorithm, although unsupervised, requires both of these sets, just not in pairs. (Lines 10-12)

“Specifically, we design a jointly optimized guided module (JOGM), including the Sandy Guided Synthesis Module (SGSM) and the Clean Guided Synthesis Module (CGSM)…”

Use “comprised of” instead of “including” (although it isn’t exactly a module). (Lines 14-15)

Introduction

Add some references for lines 42-51.

The rest of the section should be revised (lines 76-104). The sentences from the abstract are essentially repeated, then given again as bullet points (and once more in lines 195-205).

The proposed method

In Figure 2, it looks like you have two completely separate processes (instead of the forward and backward CycleGAN procedures). You could also highlight the main differences between the baseline and the proposed CycleGAN architectures, either with different colours or in two separate images.

In lines 224 and 225, X and Y are sets of images (x ∈ X and y ∈ Y). Formulas (1) and (2) should be rewritten (you used G(x)^1 and G(x)^2 in the rest of the manuscript). The overall objective function given by (5) is highly abstract and not well connected with the following subsections.

Given (6) and (8), expression (13) does not make much sense ((6) is already contained within (11), and (8) within (12)). Also, the index G is misleading ((13) represents the overall, total generator loss, not the loss of generator G alone). The superscript indices 1 and 2 in (11) and (13) are also misleading.

Is there a difference between the colour identity preservation loss and the original identity loss from CycleGAN?
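
For reference, the original CycleGAN identity loss feeds each generator an image that is already in its target domain and penalizes any change with an L1 term. A minimal PyTorch sketch for comparison; the 1×1 convolutions and domain labels are illustrative stand-ins, not the paper's networks.

```python
import torch
import torch.nn.functional as F

G = torch.nn.Conv2d(3, 3, kernel_size=1)      # stand-in for G: X -> Y
F_net = torch.nn.Conv2d(3, 3, kernel_size=1)  # stand-in for F: Y -> X

x = torch.rand(1, 3, 256, 256)   # image from domain X (e.g. sand-dust)
y = torch.rand(1, 3, 256, 256)   # image from domain Y (e.g. clean)

# Fed an image already in its output domain, each generator should act as
# an identity mapping; the L1 penalty discourages unnecessary colour shifts.
loss_idt = F.l1_loss(G(y), y) + F.l1_loss(F_net(x), x)
```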

Experimental results

How did you optimize the coefficients w_1, w_2 and w_3 (line 441)?

Does the baseline algorithm from Table 1 comprise both the colour identity and semantic perceptual losses (that would explain the results presented in Table 2)?

You may also highlight the best results given in Tables 1-4.

How did you obtain the average run time (average per image, per set, multiple runs...)?
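
For comparison, one defensible protocol is a few warm-up passes followed by the mean per-image forward time over the whole test set, as in the sketch below; the model and data are placeholders, not the authors' setup.

```python
import time
import torch

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()  # stand-in network
images = [torch.rand(1, 3, 256, 256) for _ in range(20)]        # stand-in test set

with torch.no_grad():
    for img in images[:3]:
        model(img)                        # warm-up passes, excluded from timing
    if torch.cuda.is_available():
        torch.cuda.synchronize()          # flush any queued GPU kernels
    start = time.perf_counter()
    for img in images:
        model(img)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
elapsed = time.perf_counter() - start
print(f"average: {1000 * elapsed / len(images):.2f} ms per image")
```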


Are the database and the code available for download?


Author Response

Please see the attachment.

Round 2

Reviewer 3 Report

I suggest the authors proofread the paper one more time before submitting the final version. Some language corrections are advisable.
