Article
Peer-Review Record

Bidirectional-Feature-Learning-Based Adversarial Domain Adaptation with Generative Network

Appl. Sci. 2023, 13(21), 11825; https://doi.org/10.3390/app132111825
by Chansu Han 1, Hyunseung Choo 2,* and Jongpil Jeong 3,*
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 5 October 2023 / Revised: 26 October 2023 / Accepted: 27 October 2023 / Published: 29 October 2023
(This article belongs to the Special Issue Digital Image Processing: Advanced Technologies and Applications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper addresses the limited generalization of generative models, such as Generative Adversarial Networks (GANs) and autoencoders, to other domains. To solve this issue, the paper proposes a new method that improves the adaptability of these generative models by combining bidirectional feature learning with generative networks. Results on several image datasets show that the model's capacity to adapt to new domains is improved, outperforming two well-known methods.

The paper is well written, creative, and scientifically sound. It adds to the current body of knowledge in computer vision and machine learning. While the paper introduces a promising method, several improvements and clarifications are necessary to enhance its quality and comprehensibility:

 

1. Equation definitions: A "where" clause should be included for all equations to define every variable used. This clarification will improve the understanding of the mathematical aspects of the method.

2. Please elaborate on the input layers for the source (Xs) and target (Xt) domains of the proposed BiFLP-AdvDA, particularly when the images differ in size and type.

3. "Existing studies have tried to prove that our proposed model performs well by comparing it with models such as DANN and ADDA." Which studies? Please revise this sentence.

4. Two experimental environments were used, Linux-5 and Windows. Why were two used, and which of them produced the results reported in Tables 2–5?

5. The captions of Figures 2–5 are misleading; shouldn't they read "a sample of the dataset"?

6. How exactly were the "Source only" results obtained? For example, for MNIST → USPS, I would understand if the first network were applied to MNIST and the classification results were obtained at the other end without the second network being involved. Please elaborate here.

7. Comparing the performance of the proposed BiFLP-AdvDA with a pre-trained model such as ResNet would be insightful. Such a comparison would help the reader understand the benefits of domain adaptation techniques and judge whether the additional complexity of the proposed BiFLP-AdvDA is justified.

8. Do the accuracy metrics reported in Tables 2–5 refer to the source domain, the target domain, or both? The model's performance should be evaluated in both domains to assess the effectiveness of domain adaptation.

9. I would like to see a methodological comparison of the three models in terms of their architectures and the internal processes each uses, to justify the superiority of the proposed method. A summary table showing what each method uses, along with their commonalities and differences, would be beneficial. Moreover, two additional figures showing the other two methods (DANN and ADDA) would support such a discussion.

10. Nothing is mentioned about the time consumed by each method, nor about the limitations of this study.

 

Comments on the Quality of English Language

The paper needs minor English editing.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

To improve the quality of this manuscript, my comments and suggestions are listed as follows.

(1) The abstract is a little long. It should be made more concise.

(2) The authors are advised to list the main contributions of this paper as the penultimate paragraph of the Introduction.

(3) Consider including the word "model" or "algorithm" in the title of this manuscript.

(4) Some equations, such as those for Precision and Recall, are not the authors' contribution; the related references should be cited.

 (5) The heading of Section 5 should be “Conclusions and outlooks”.

(6) Most of the references are very old. The authors should redo the literature survey and update the references.

Comments on the Quality of English Language

There are some typos and grammar errors; minor editing of the English language is required.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
