Review Reports
- Changliang Zheng 1,
- Honglin Fang 2,* and
- Lina Chen 3,*
- et al.
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper proposes a C2FDA framework to address the challenges of negative transfer and unknown emotional class detection. Experiments show that C2FDA outperforms state-of-the-art baselines in three transfer scenarios. The paper is appropriate for the journal provided the following concerns are addressed.
(1) The paper mentions the use of hyperparameters such as α and λ but does not explain how these hyperparameters are selected. It is recommended to add a subsection or supplementary material to detail the hyperparameter tuning process, including the range of candidate values, the validation method, and the impact of different hyperparameter values on model performance. This will enhance the reproducibility of the proposed method.
(2) The paper introduces both C2FDA and C2FDA-G in line 222, and also describes the model as having four components. The relationship between the complex graph encoder and the simpler Feature Extractor H_f shown in Figure 4 is unclear. The terminology should be consolidated to clarify whether the graph encoder is the feature extractor used in C2FDA, or whether C2FDA-G is a distinct variant.
(3) The paper contains a significant amount of repetitive text. For example, the descriptions of the SEED, SEED-IV, and SEED-V datasets repeat the same sentence verbatim, and the experimental setup restates the same information multiple times. This repetition should be edited for conciseness.
(4) Temper the claims regarding 6G and semantic communication. The paper heavily frames itself as a 6G-Oriented Semantic Communication solution. However, the core methodology is an open-set domain adaptation framework, and the experiments are offline analyses on existing datasets. There is no implementation, simulation, or measurement involving a 6G network, bandwidth efficiency, or semantic transmission protocols.
(5) Some grammatical and cross-referencing errors: In line 675, "SED" should be "SEED". In line 698, "Fig 14" should be "Figure 14". On page 7, lines 282 and 288, the text states that the C2FDA framework is illustrated in Fig. 2; however, Figure 2 shows domain adaptation scenarios, and the correct framework diagram is Figure 4. In Section 4.3, the text discussing the SEED → SEED-V task incorrectly references Fig. 9 for the ROC analysis; the ROC plot is Fig. 10. The same paragraph references Fig. 10 for the confusion matrix, but the confusion matrix is in Fig. 11.
Author Response
Please take a look at the attached document for our response to the reviewers' comments.
Author Response File:
Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
- “we propose a Coarse-to-Fine Open-set Domain Adaptation (C2FDA) framework”, “we propose a Coarse-to-Fine Open-set Domain Adaptation method for Emotion Recognition (C2FDA).” Which do you propose, a framework or a method? The term “framework” is not synonymous with the term “method”. Be consistent in your terminology.
- “5G-Advanced/6G networks”. You sometimes separate “5G-Advanced” and “6G”. Is this deliberate or random? It appears to be random, e.g., “which is critical for bandwidth-constrained edge devices in 6G networks”. Such devices are not present in 5G-Advanced networks, are they? Moreover, only 6G appears in the title of the manuscript.
- “into three main paradigms, as illustrated in Fig. 2”. Why three? Five paradigms are shown in Fig. 2.
- “Chang et al. [11]”. The reference list gives “11. H. Chen, Y. Xu, Y. Liu, L. Jiang, H. Tan et al., “ATPL: ”. Chen is not Chang, and the presentation of reference [11] is broken, so the subsequent references are likely misaligned. Indeed they are, e.g., “13. C. Feng, C. Zhong, J. Wang, J. Sun, and Y. Yokota” versus “Zhang et al. [13]”. There is also a broken referencing order overall, e.g., “Tang et al. [16]” versus “29. Tang, L. Tian, and W. Zhang”. The material presented in the manuscript cannot be trusted.
- The abbreviation “C2FDA-G” is used only twice in the manuscript, so introducing it serves no purpose.
- “method framework”. What is this, a method and a framework at the same time?
- The figure caption “Fig 3. Coarse-to-fine sample separation via adaptive threshold selection.” appears as part of the running text, while the caption slot reads: “3 Provides appropriate academic context (open-set domain adaptation)”. The manuscript is unreadable in this state.
- References to baseline methods are not presented. It is not possible to verify the results.
The English could be improved to more clearly express the research.
Author Response
For point-by-point responses, please refer to the attachment.
Author Response File:
Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors have done a good job revising their article, producing a sound paper that is worthy of publication.
Author Response
Thank you very much for your positive feedback and valuable suggestions during the review process.
Reviewer 2 Report
Comments and Suggestions for Authors
- “Xu et al. [10] proposed a Dynamic Adversarial Domain Adaptive Network based on Multiple Kernel Maximum Mean Discrepancy (MK-MMD),”. FALSE. The cited paper states: “In this paper, a dynamic adversarial domain adaptive network based on the multi kernel maximum mean discrepancy (MK_DAAN) is proposed”. The abbreviation is wrong, so we cannot trust the abbreviations presented elsewhere, e.g., “C2FDA consistently outperforms existing approaches including DANN (28.5%, 25.7%, 34.8%), MMD (29.8%, 27.1%, 36.4%),”.
- My comment from the previous review still holds: “References to baseline methods are not presented. It is not possible to verify the results.” Moreover, the abbreviations of these methods appear out of nowhere. If you compare against certain methods, those methods must be reviewed in the Related Work section, since such works are by definition related.
The English could be improved to more clearly express the research.
Author Response
For detailed revisions, please see the attached file.
Author Response File:
Author Response.pdf
Round 3
Reviewer 2 Report
Comments and Suggestions for Authors
Thank you for the revision.