Article
Peer-Review Record

Soft Generative Adversarial Network: Combating Mode Collapse in Generative Adversarial Network Training via Dynamic Borderline Softening Mechanism

Appl. Sci. 2024, 14(2), 579; https://doi.org/10.3390/app14020579
by Wei Li 1,2,* and Yongchuan Tang 3
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 1 December 2023 / Revised: 4 January 2024 / Accepted: 6 January 2024 / Published: 9 January 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper proposes a novel method for image generation, SoftGAN. SoftGAN addresses the problems of training stability and mode collapse by introducing a novel objective function. The authors tested SoftGAN on four relevant datasets and achieved state-of-the-art results. Moreover, the authors provide an extensive theoretical analysis of the proposed objective function and also explore combining SoftGAN with existing architectures and training tricks.

 

Paper Strengths:

- The main idea of the paper is sound.

- The obtained results are promising.

- The authors provide the code on GitHub.

- The paper is well written and easy to follow.

 

Paper Weaknesses:

- The update mechanism of the fuzzy entropy term needs to be described.

 

Justification of Rating:

The paper proposes a very interesting method that can be combined with existing methods to reach superior results. The proposed method is described in detail, and the code is available on GitHub. The only weakness of the paper I see is the absence of an exact description of the update mechanism of the fuzzy entropy constant. Furthermore, the starting value of the constant should be listed for all the experiments.

 

Suggestions to Authors:

- The authors should provide the equation of the update mechanism of the fuzzy entropy constant.

- The authors should list the starting values of the fuzzy entropy constant for all the experiments.

- Figure 8: It looks like DCGAN could benefit from further training. It would be interesting to train the compared methods longer (for example, 150k training iterations).

- It would be beneficial to test the proposed method on the generation of high-resolution images (>1024x1024 px).

Comments on the Quality of English Language

- Line 54: constraint[3 -> constraint [3

- Line 67-69: section -> Section

- Line 87: However, there are -> There are

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The proposed strategy of using a dynamic boundary softening mechanism based on Fuzzy Concept Modeling to train Generative Adversarial Networks (GANs) is worth considering, and the results obtained are interesting. I agree with the Authors that achieving coherent and stable GAN training remains an ongoing challenge.

However, a basic principle in the presentation of a mathematical model is not respected.

The first step is to introduce the notation and mathematical symbols used in functions, expressions, and formulas. When introducing mathematical formulas, explaining each element of the notation is a necessary condition!

Formula (1) is mathematically incorrect. The domain(s) to which x and z belong are not specified. It is not known what Pd, Pg, and E mean. Pd and Pg appear on the next page but without explaining that they are representation spaces for real and fake samples, respectively.

For this formula, it would be worth defining the generator G as a function.

Authors should not assume that all readers of their article are familiar with Formula 1. Authors can only assume that readers are interested in machine learning.
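For illustration only, assuming Formula (1) is the standard minimax objective of Goodfellow et al. (which the symbols Pd, Pg, and E suggest), a fully specified version could read:

\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim P_d}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z}\left[\log\left(1 - D(G(z))\right)\right],

where x \in \mathcal{X} is a real sample drawn from the data distribution P_d over the sample space \mathcal{X}, z \in \mathcal{Z} is a latent vector drawn from the prior P_z, G \colon \mathcal{Z} \to \mathcal{X} is the generator (so that P_g denotes the distribution of the generated samples G(z)), D \colon \mathcal{X} \to [0, 1] is the discriminator, and \mathbb{E} denotes expectation.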

Moreover, since the model uses Fuzzy Concept Modeling, the basic concept of a fuzzy set, together with the function called the membership function (or grade of membership), should be introduced and explained.
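For example, a standard definition (not taken from the manuscript) that could serve this purpose is: a fuzzy set A on a universe X is the set of pairs

A = \{ (x, \mu_A(x)) : x \in X \},

where \mu_A \colon X \to [0, 1] is the membership function and \mu_A(x) is the grade of membership of x in A.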

I suggest introducing a Section titled: Preliminaries (before the Recent Works section), which recalls some notations and basic concepts necessary in the article.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

I have no further comments. Congratulations on the article.

Author Response

Thanks again for your decision and constructive comments on our manuscript.
