Article

Coarse-to-Fine Open-Set Semantic Adaptation for EEG Emotion Recognition in 6G-Oriented Semantic Communication Systems

1 School of Integrated Circuits (School of Artificial Intelligence), Beijing Polytechnic University, Beijing 100176, China
2 School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
3 College of Computer Science and Technology (College of Artificial Intelligence), Zhejiang Normal University, Jinhua 321004, China
4 Department of Mobile Communication Technology Research, China Telecom Research Institute, Beijing 100033, China
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(24), 4833; https://doi.org/10.3390/electronics14244833
Submission received: 14 October 2025 / Revised: 30 November 2025 / Accepted: 2 December 2025 / Published: 8 December 2025
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)

Abstract

Electroencephalogram (EEG)-based emotion recognition has emerged as a key enabler for semantic communication systems in next-generation networks (5G-Advanced/6G), where the goal is to transmit task-relevant semantic information rather than raw signals. However, domain adaptation approaches for EEG emotion recognition typically assume closed-set label spaces and fail when unseen emotional classes arise, leading to negative transfer and degraded semantic fidelity. To address this challenge, we propose a Coarse-to-Fine Open-set Domain Adaptation (C2FDA) framework, which aligns with the semantic communication paradigm by extracting and transmitting only the emotion-related semantics necessary for task performance. C2FDA integrates a cognition-inspired spatio-temporal graph encoder with a coarse-to-fine sample separation pipeline and instance-weighted adversarial alignment. The framework distinguishes between known and unknown emotional states in the target domain, ensuring that only semantically relevant information is communicated, while novel states are flagged as unknown. Experiments on SEED, SEED-IV, and SEED-V datasets demonstrate that C2FDA achieves superior open-set adaptation performance, with average accuracies of 41.5% (SEED → SEED-IV), 42.6% (SEED → SEED-V), and 48.9% (SEED-IV → SEED-V), significantly outperforming state-of-the-art baselines. These results confirm that C2FDA provides a semantic communication-driven solution for robust EEG-based emotion recognition in 6G-oriented human–machine interaction scenarios.

1. Introduction

In contemporary society, emotion recognition is widely applied across many fields and profoundly affects our daily lives. In mental health, accurately and promptly assessing an individual’s emotional state is crucial for improving psychological well-being [1]. In education, observing students’ emotional states in the classroom helps educators gain insight into how learning is progressing and adjust their teaching methods to improve learning outcomes [2]. Emotion recognition has therefore attracted growing attention, spawning a variety of recognition methods. Among them, Electroencephalography (EEG)-based emotion recognition has shown particular promise for affective computing, mental-health monitoring, and human–computer interaction, owing to its stability and high detection accuracy [3].
Yet transferring trained models across people remains difficult: inter-subject variability, recording non-stationarities, and session effects induce substantial domain shifts. Because emotional and physiological characteristics differ across subjects, the EEG data distribution varies from one subject to another [4]; consequently, a model trained on a single subject’s EEG data may perform poorly on a new subject, raising the problem of model generality. In realistic deployments, the target domain may also contain emotional states absent from the source, creating an open-set scenario. In open-set domain adaptation (OSDA), the model must (i) align only the classes shared between domains and (ii) reject unknowns to avoid negative transfer. This setting is common in cross-dataset and cross-subject EEG but remains underexplored relative to closed-set transfer.
As illustrated in Figure 1, the emotional brain–computer interface (EBCI) framework typically comprises several key stages, including stimulus presentation, electroencephalogram (EEG) signal acquisition, preprocessing, feature extraction, model training, and feedback. This iterative cycle enables the systematic modeling and analysis of EEG signals to infer users’ emotional states, thereby providing a foundation for advanced human–computer interaction.
To address this, many studies have applied Unsupervised Domain Adaptation (UDA) techniques [5]. These methods treat labeled EEG data from one subject as the source domain and unlabeled EEG data from another subject as the target domain, then train the model on the source domain and transfer it to the target domain.
Existing EEG approaches typically fall into three groups. Subject-dependent models achieve high accuracy but require labeled data per user and do not transfer. Closed-set domain adaptation reduces distribution shift but implicitly forces all target samples to match source classes, which misaligns truly novel target states. Finally, recent graph-based or temporal models improve representation quality but often rely on static inter-channel topologies and lack mechanisms to (a) encode neurocognitive priors that aid generalization, (b) capture evolving temporal salience, and (c) separate known vs. unknown target samples during adaptation. Moreover, evaluations frequently report overall accuracy alone, obscuring the trade-off between known-class performance and unknown-class rejection.
Although these existing methods effectively reduce the distribution differences in EEG data, they still have limitations: they usually assume that different subjects share the same label space, whereas real scenarios may involve mismatched label spaces, especially when the target domain contains more emotional labels than the source domain, i.e., an open-set scenario. In open-set EEG emotion recognition, we must therefore not only address the distribution differences between subjects but also separate known from unknown emotional classes arising from the mismatched label spaces [6].
We address these gaps with C2FDA, a coarse-to-fine open-set domain adaptation framework for EEG emotion recognition, and its variant C2FDA-G, which integrates a cognition-prior spatio-temporal graph encoder into the adaptation pipeline. On the representation side, we construct a dynamic brain graph with graph convolution to learn data-driven channel affinities, combine it with temporal self-attention to weight informative segments, and fuse the streams via hierarchical cross-attention fusion (H-CAF). In parallel, we inject a cognition-prior branch derived from functional connectivity (e.g., the phase-locking value, PLV), then fuse prior- and data-driven embeddings to obtain discriminative, interpretable features that remain stable across subjects.
On the separation side, the Coarse-to-Fine processing module first performs coarse classification on the extracted EEG features: a bank of one-vs-class coarse heads ranks all target-domain samples by their plausibility of belonging to the known classes, and samples with particularly high or low probability scores are passed to a fine unknown detector that assigns each sample an unknown probability w(x), thereby separating known and unknown classes in the target domain. On the adaptation side, we perform instance-weighted adversarial alignment that emphasizes likely known target samples in the shared label space using a |Cs| + 1 classifier (with an explicit unknown class); the Domain Adversarial module maps samples from the source and target domains into this shared label space, aligning the sample spaces. A lightweight curriculum penalizes early high-confidence misclassifications, improving the stability of unknown rejection. Finally, the EEG features processed by the two modules are fed into the classifier to complete the emotion recognition task.
In summary, our research makes the following contributions:
(1) We propose a Coarse-to-Fine processing module that can separate known and unknown emotional classes. This module solves the problem of negative transfer caused by the misalignment of unknown classes in the target domain with known classes in the source domain effectively.
(2) We propose a Domain Adversarial module that maps samples from the source and target domains to a shared label space for alignment of the EEG samples. This module effectively addresses the label space alignment problem in open-set EEG emotion recognition.
(3) Through extensive transfer experiments on three datasets, our experimental results demonstrate the reliability of the C2FDA method in open-set EEG emotion recognition.
As the field of semantic communication in 6G networks continues to develop, the focus is on transmitting task-relevant meaning rather than raw data. EEG-based emotion recognition is inherently semantic, as it extracts emotionally meaningful states from complex signals. The open-set scenario naturally aligns with semantic communication principles, where the system must determine whether incoming data belongs to the known semantic space or represents novel, unrecognized states. Our C2FDA framework addresses this by filtering semantically relevant emotional information and rejecting unknown samples, thus enhancing semantic efficiency and robustness in next-generation network applications. However, it is important to note that the connection to 6G semantic communication is conceptual, and this paper focuses primarily on the development and evaluation of the C2FDA framework for EEG emotion recognition.

2. Related Work

EEG-based emotion recognition faces significant challenges in cross-subject and cross-dataset transfer due to distribution shifts between source and target domains. Based on the relationship between label spaces, existing domain adaptation approaches can be categorized into five main paradigms, as illustrated in Figure 2.

2.1. Closed-Set Domain Adaptation

Closed-Set Domain Adaptation (CSDA) assumes identical label spaces between source and target domains, focusing on reducing distribution discrepancies. These methods primarily fall into metric-based approaches and adversarial training strategies [7,8,9]. Metric-based methods transform features to minimize domain distances under specific metrics. Xu et al. [10] proposed a dynamic adversarial domain adaptive network based on the multi-kernel maximum mean discrepancy (MK_DAAN), which addresses domain adaptation by adding an adaptive layer to further align the feature distribution between source and target domains. Multi-kernel maximum mean discrepancy is adopted in the adaptive distance measurement. This dual feature alignment approach, combining the adaptive layer with adversarial learning, improves classification performance in breast ultrasound image classification. Yi et al. [11] introduced the ATPL framework, which mutually promotes adversarial training and pseudo-labeling for unsupervised domain adaptation. ATPL produces high-confidence pseudo-labels through adversarial training, and uses these pseudo-labels to improve the adversarial training process by generating adversarial data to fill the domain gap, thereby ensuring both feature transferability and discriminability. DANN (Domain-Adversarial Neural Networks), proposed by Ganin et al., uses adversarial training with a gradient reversal layer to learn domain-invariant features, improving performance in tasks like image classification and sentiment analysis. It outperforms traditional methods by aligning feature distributions between source and target domains without requiring labeled target data [12]. MMD (Maximum Mean Discrepancy), proposed by Gretton et al., is a kernel-based method for comparing distributions by measuring the difference between their means in a Reproducing Kernel Hilbert Space (RKHS). It effectively minimizes domain shift in closed-set domain adaptation and has been widely used in tasks involving high-dimensional feature spaces, such as bioinformatics and graph data [13]. CORAL (CORrelation ALignment), introduced by Sun et al., aligns the second-order statistics (covariance) between source and target domains to reduce domain shift in unsupervised domain adaptation. It has proven effective in object recognition tasks, outperforming methods like LDA on benchmark datasets such as Office-Caltech10 [14]. CDAN (Conditional Domain Adversarial Network), introduced by Long et al., enhances adversarial domain adaptation by conditioning the domain discriminator on both feature representations and classifier predictions. This approach improves alignment across domains and has shown superior performance on benchmark datasets [15].

2.2. Partial Domain Adaptation

Partial Domain Adaptation (PDA) addresses scenarios where the target label space constitutes a subset of the source domain. Here, the source contains emotional categories absent in the target, though these remain known categories. Feng et al. [16] proposed Progressive Optimization For Partial Domain Adaptation (EBB), which selects anchors by analyzing base model features and estimates category gaps using anchor classification distributions. This approach minimizes shared class errors while correcting blind alignment mistakes. Zhang et al. [17] developed Weighted and Center-aware Adaptation Learning (WCAL), distinguishing unknown source classes through weighted adversarial learning and addressing negative transfer via cross-domain discriminators. While these methods handle partial scenarios effectively, they still assume no target-specific unknown classes, differing fundamentally from open-set challenges.

2.3. Open-Set Domain Adaptation

Open-Set Domain Adaptation (OSDA) represents the most challenging setting, where target domains contain both source-known classes and completely novel categories. OSDA methods must simultaneously align shared classes while detecting unknown samples to prevent negative transfer. Panareda Busto and Gall introduced Open Set Domain Adaptation, which addresses domain shift by jointly solving an assignment problem to match target instances with source categories of interest. Their method outperforms state-of-the-art techniques, effectively handling both closed and open-set scenarios where the source and target domains may contain different class labels [18]. Ji et al. [19] proposed an open-set domain adaptation model based on subdomain alignment, using variable weights for discriminative training and aligning category subspaces between source and target domains. Experiments show that this approach significantly improves open-set domain adaptation classification accuracy. Tang et al. [20] proposed a novel open-set domain adaptation method combining latent structure discovery and kernelized classifier learning to improve class separation. Experiments on five image datasets demonstrate its superiority over state-of-the-art methods. Open-set recognition has also been explored in other domains such as malware traffic analysis [21], radio frequency fingerprint identification [22], specific emitter recognition [23], and device recognition in satellite-terrestrial-integrated IoT [24], demonstrating the broad applicability of open-set methodologies. OSBP (Open Set Back-Propagation), proposed by Saito et al., uses adversarial training to align known target samples with the source domain while rejecting unknown target samples. It outperforms traditional domain adaptation methods in open-set scenarios, improving performance in domain transfer tasks [25,26]. MAOSDAN (Multi-Adversarial Open-Set Domain Adaptation Network), proposed by Zheng et al., addresses open-set domain adaptation in remote sensing by combining attention-aware OSBP, adversarial learning, and adaptive entropy suppression to distinguish known and unknown samples [27].

2.4. Graph-Based EEG Representation Learning

EEG’s inherent spatial organization motivates graph neural network applications, treating electrodes as nodes with functional connections as edges. Static graph methods based on physical distances or fixed connectivity capture spatial topology but cannot adapt to dynamic brain connectivity changes. Liu et al. [28] compared DCCA and BDAE for multimodal emotion recognition, extending DCCA with weighted-sum and attention-based fusion methods. DCCA achieved state-of-the-art performance and demonstrated greater robustness against noise across multiple datasets, including SEED-V and DREAMER. Song [29] proposed a novel Dynamical Graph Convolutional Neural Network (DGCNN) for EEG emotion recognition, dynamically learning the intrinsic relationships between EEG channels for more discriminative feature extraction. Extensive experiments on the SEED and DREAMER datasets show that DGCNN outperforms state-of-the-art methods, achieving high recognition accuracy in both subject-dependent and subject-independent settings. However, most graph-based EEG works address closed-set classification without explicit handling of open-set label mismatch. Recent studies have also explored attention mechanisms and hybrid deep neural networks for improving EEG-based emotion recognition performance [30,31].

2.5. Cognitive Priors in Graph Learning

Neuroscience research indicates functional connectivity patterns, measured through phase-locking value (PLV), encode task-relevant brain network structures. Incorporating such cognition-inspired priors improves interpretability and cross-domain stability. Recent cognitive-prior GNN frameworks fuse prior graphs with data-driven graphs, yielding noise-robust representations stable across subjects [32]. Li et al. [33] proposed a graph learning system for EEG-based emotion recognition, utilizing a cognition-inspired functional graph branch and a fused attention mechanism to automatically learn emotion-related cognitive patterns. The BF-GCN model outperforms state-of-the-art methods, achieving high recognition accuracy in both subject-dependent and subject-independent experiments on the SEED and SEED-IV datasets. Wang et al. [34] proposed a simply ameliorated CNN (SACNN) for cross-subject emotion recognition using raw EEG data to address low accuracy issues in driver emotion detection. The SACNN model achieved 88.16% accuracy with cross-subject data and 91.85% accuracy using data from the top 10 EEG channels, outperforming deeper models and highlighting its potential for smart city applications. Furthermore, cross-subject emotion recognition remains challenging due to inter-subject variability, prompting the development of methods that leverage raw multi-channel EEG data without extensive preprocessing [35]. Machine learning approaches continue to evolve, with comparative studies highlighting the effectiveness of various algorithms in handling EEG-based emotion recognition tasks [36].
Our proposed C2FDA addresses these gaps by combining cognition-prior spatio-temporal graph encoding with coarse-to-fine open-set adaptation. This unified framework leverages neuroscience knowledge while providing robust mechanisms for known–unknown separation and selective domain alignment, advancing the state of the art in open-set EEG emotion recognition. In its graph-enhanced variant, C2FDA integrates a cognition-prior spatio-temporal graph encoder as its feature extraction mechanism, which supports robust cross-domain generalization, particularly in open-set scenarios.

2.6. Semantic Communication and Next-Generation Networks

Semantic communication represents a paradigm shift from bit-level accuracy to goal-oriented information exchange, focusing on the meaning and effectiveness of transmitted data. In next-generation networks (6G), semantic communication aims to reduce redundancy by transmitting only task-relevant information, thereby improving bandwidth efficiency and latency. Emotion recognition from EEG signals is a semantically rich task, as emotions represent high-level cognitive states. Recent works have explored semantic source coding, task-oriented communication, and semantic-aware resource allocation for IoT and edge devices [37,38,39]. Our C2FDA framework aligns with this trend by selectively adapting only known emotional classes and rejecting unknowns, effectively reducing semantic redundancy and improving communication efficiency in distributed EEG-based emotion recognition systems.

3. Methodology

We begin by establishing the notation used throughout this work. Let $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ denote the source domain with $n_s$ labeled samples, and $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$ the target domain with $n_t$ unlabeled samples. The label spaces satisfy $C_s \subset C_t = C_s \cup U$, where $U$ denotes the set of unknown classes present only in the target domain. Our framework employs a feature extractor $f_\theta : x \mapsto z \in \mathbb{R}^d$ that maps inputs to $d$-dimensional representations. The coarse stage uses a bank of one-vs-rest classifiers $\{h_k\}_{k \in C_s}$, each producing a class-specific probability $p_k(x) = \sigma(h_k(f_\theta(x)))$. The fine-stage unknown detector $u_\varphi : z \mapsto w(x) \in [0, 1]$ estimates the probability that a sample belongs to an unknown class. Finally, an open-set classifier $H_y : z \mapsto \hat{y} \in \{1, \dots, |C_s|, \text{unk}\}$ performs $(|C_s| + 1)$-way classification, while a domain discriminator $D_\psi : z \mapsto [0, 1]$ distinguishes source from target samples. Having established the notation, we now detail the architecture and training procedure of the proposed C2FDA framework.
The method addresses two fundamental challenges in open-set domain adaptation: (1) distinguishing between known and unknown classes in the target domain, and (2) aligning only the shared classes while avoiding negative transfer from unknown samples. Recall that the source label space, of size $|C_s|$, is a subset of the target label space, $C_s \subset C_t$; the additional labels present only in the target domain form the unknown-class label space $C_{t \setminus s}$.
The source and target domains are drawn from different probability distributions $p$ and $q$, respectively. In closed-set domain adaptation the distributions already differ, $p \neq q$; in open-set domain adaptation the gap is larger still, $p \neq q_{C_s}$, where $q_{C_s}$ denotes the distribution of target-domain data restricted to the shared label space. The openness of the task can be summarized as $O = 1 - \frac{|C_s|}{|C_t|}$. It is important to note that the label space of our source domain is a subset of the label space of the target domain.

3.1. C2FDA Model

Cognition-Prior Spatio-Temporal Graph Encoder—This component extracts discriminative and interpretable EEG features by combining dynamic graph convolution (DGC), temporal self-attention (TSAR), and hierarchical cross-attention fusion (H-CAF) along with a functional connectivity prior represented by the PLV graph.
Coarse-to-Fine Selector—This component ranks target samples based on their plausibility of belonging to the shared classes and then refines the decisions using a binary classification (“unknown vs. known”) fine head.
Instance-Weighted Domain Adversarial Alignment—This component aligns only the target samples likely to belong to the known classes to the source domain using a gradient reversal layer (GRL), while suppressing the alignment of unknown samples.
Open-Set Classifier—This component predicts over |Cs| + 1 classes, where the additional class explicitly represents the “unknown” category.
To address the aforementioned two issues, we innovatively propose a Coarse-to-Fine Open-set Domain Adaptation framework for EEG emotion recognition (C2FDA), whose framework is illustrated in Figure 2. The method mainly consists of two modules: the Coarse-to-Fine processing module and the Domain Adversarial module. The Coarse-to-Fine processing module transforms the extracted EEG features from coarse-grained to fine-grained decisions; in other words, its purpose is to separate known classes from unknown classes. The Domain Adversarial module maps samples from the source and target domains to a common label space, achieving sample-space alignment. In Figure 2, $H_f$ denotes the EEG feature extractor, $H_{coarse}$ and $H_{fine}$ denote the coarse and fine classifiers, respectively, $H_d$ is the domain discriminator (our Domain Adversarial module), and $H_y$ is the final EEG emotion classifier. C2FDA-G is a variant of C2FDA that incorporates a cognition-prior spatio-temporal graph encoder into the feature extraction process, explicitly integrating neurocognitive priors from EEG signals. Both models share the same core feature extraction approach, but C2FDA-G benefits from the added graph encoder for richer spatio-temporal dependencies.

3.2. Cognition-Prior Spatio-Temporal Graph Encoder

(a) Graph Construction. We represent each EEG trial as a multi-channel DE feature map over $B$ frequency bands. Each band yields a graph $G = (V, E, A)$, where $V$ is the set of channels, $E$ the edges, and $A$ the adjacency matrix.
To capture both neurophysiological priors and adaptive patterns, we construct a hybrid graph representation that combines domain knowledge with data-driven learning:
Prior Graph $A_{prior}$—computed from the PLV between channels over source data, encoding stable cognitive connectivity patterns.
Data-Driven Graph $A_{data}$—learned via attention-based affinity estimation that adapts to each sample:
$$A = \alpha A_{prior} + (1 - \alpha) A_{data}, \quad \text{where } A_{data} = \mathrm{softmax}\!\left(QK^{\top}/\sqrt{d}\right)$$
The parameter $\alpha$ balances the contribution of cognitive priors (when $\alpha$ is large) versus adaptive learning (when $\alpha$ is small), allowing the model to leverage neuroscience knowledge while adapting to task-specific patterns.
We blend the two to form the adjacency used for convolution:
$$\hat{A} = (1 - \lambda) A_{prior} + \lambda A_{data}, \quad \lambda \in [0, 1].$$
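To make the blending concrete, the following minimal PyTorch sketch computes a per-sample attention affinity and mixes it with a fixed PLV prior as in the equation above; the class name, tensor shapes, and the choice of linear projections are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class GraphBlend(nn.Module):
    """Blend a fixed PLV prior graph with a per-sample attention affinity."""

    def __init__(self, dim: int, lam: float = 0.5):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)  # query projection
        self.k = nn.Linear(dim, dim, bias=False)  # key projection
        self.lam = lam                            # blending weight lambda

    def forward(self, x: torch.Tensor, a_prior: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, dim) node features; a_prior: (channels, channels) PLV graph
        q, k = self.q(x), self.k(x)
        a_data = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return (1 - self.lam) * a_prior + self.lam * a_data  # (1-lam)*A_prior + lam*A_data
```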
(b) Spatial Encoding with DGC. We apply graph convolutional layers over $\hat{A}$ to capture spatial dependencies:
$$H^{(l+1)} = \sigma\!\left(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} H^{(l)} W^{(l)}\right)$$
where $\hat{D}$ is the degree matrix, $W^{(l)}$ are learnable weights, and $\sigma$ is an activation function. Equivalently, with self-loops added:
$$Z^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} Z^{(l)} W^{(l)}\right), \quad \tilde{A} = A + I$$
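A minimal sketch of one such propagation step, assuming batched channel features and the blended adjacency from above (layer and variable names are illustrative):

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One symmetric-normalized graph convolution step with self-loops."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)  # W^(l)

    def forward(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # z: (batch, channels, in_dim) features; a: (batch, channels, channels) adjacency
        a_tilde = a + torch.eye(a.shape[-1], device=a.device)    # add self-loops: A + I
        d_inv_sqrt = a_tilde.sum(-1).clamp(min=1e-6).pow(-0.5)   # diagonal of D^{-1/2}
        a_norm = d_inv_sqrt.unsqueeze(-1) * a_tilde * d_inv_sqrt.unsqueeze(-2)
        return torch.relu(self.w(a_norm @ z))                    # sigma(D^{-1/2} A D^{-1/2} Z W)
```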
(c) Temporal Self-Attention (TSAR). For each channel representation, TSAR assigns attention weights across time steps:
$$\alpha_t = \frac{\exp(q_t^{\top} k_t)}{\sum_{t'} \exp(q_{t'}^{\top} k_{t'})},$$
where $q_t$ and $k_t$ are learned projections. This emphasizes temporally salient EEG segments. In matrix form,
$$\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(QK^{\top}/\sqrt{d}\right)V, \qquad z = \mathrm{Pool}\!\left(\mathrm{Attn}(\cdot)\right)$$
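A compact sketch of this temporal step; mean pooling over time is an assumption, since the text leaves the pooling operator unspecified:

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Scaled dot-product self-attention across time, followed by pooling."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)  # joint Q/K/V projection

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, dim) temporal features for one channel stream
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return (attn @ v).mean(dim=1)  # z = Pool(Attn(.)) -> (batch, dim)
```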
(d) Hierarchical Cross-Attention Fusion (H-CAF). Spatial and temporal streams are fused using cross-attention to produce the final embedding $f$ for each trial.

3.3. Hyperparameter Tuning for α and λ

In this section, we explain the selection of the hyperparameters α and λ, which are essential for the performance of the C2FDA framework. The parameter α controls the balance between cognitive priors and adaptive learning, while λ adjusts the weight of the entropy loss in the domain adversarial module. The two parameters determine how the model blends prior knowledge and data-driven learning, ensuring effective separation between known and unknown emotional classes.
To blend the cognitive prior graph $A_{prior}$ and the data-driven graph $A_{data}$, we use:
$$A = \alpha A_{prior} + (1 - \alpha) A_{data}, \quad \alpha \in [0, 1]$$
The parameter $\alpha$ was tested within the range [0.1, 1.0], where larger values favor prior knowledge and smaller values prioritize adaptive learning. Similarly, $\lambda$ controls the entropy-loss contribution and was varied within [0, 1], with higher values placing more emphasis on rejecting unknown classes. The adjusted adjacency matrix is given by:
$$\hat{A} = (1 - \lambda) A_{prior} + \lambda A_{data}, \quad \lambda \in [0, 1]$$
We used k-fold cross-validation to select the optimal values for both α and λ, evaluating performance based on recognition accuracy for known classes and the ability to detect unknown emotional states. The impact of these hyperparameters is significant. Larger α values improve known-class recognition but reduce flexibility in detecting unknown emotional states, while smaller values enhance detection of novel classes. For λ, higher values strengthen unknown class rejection but may overfit known classes, while lower values improve detection of unknowns. This tuning process ensures robustness and reproducibility in open-set EEG emotion recognition tasks.
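The selection procedure can be sketched as a plain grid search with k-fold cross-validation; `train_eval_fold` is a hypothetical callback standing in for one training-and-validation run, and the grids simply match the ranges stated above.

```python
import itertools
import numpy as np
from sklearn.model_selection import KFold

alphas = np.round(np.arange(0.1, 1.01, 0.1), 2)  # prior-vs-data blending alpha
lams = np.round(np.arange(0.0, 1.01, 0.1), 2)    # entropy-loss weight lambda

def select_hparams(features, labels, train_eval_fold, n_splits=5):
    """Return the (alpha, lambda) pair with the best mean cross-validated score."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    best, best_score = None, -np.inf
    for alpha, lam in itertools.product(alphas, lams):
        scores = [train_eval_fold(features[tr], labels[tr],
                                  features[va], labels[va], alpha, lam)
                  for tr, va in kf.split(features)]
        if np.mean(scores) > best_score:
            best, best_score = (alpha, lam), np.mean(scores)
    return best
```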

3.4. The Coarse-to-Fine Processing Module

Coarse Stage: We deploy a bank of $|C_s|$ one-vs-class classifiers $h_k$ producing logits $z_k$ and probabilities $p_k = \sigma(z_k)$. For a target sample $x$, define:
$$s(x) = \max_{k \in C_s} p_k(x)$$
To distinguish known and unknown classes in the target domain, we put forward the Coarse-to-Fine processing module. We introduce a coarse classifier $H_{coarse}$ consisting of $|C_s|$ classifiers $\{H_c\}_{c=1}^{|C_s|}$, which measures the similarity between each target-domain sample and each source-domain class. Each classifier is independent and recognizes one specific emotion class. The loss function of the coarse classifier is defined in Equation (1):
$$\mathcal{L}_{coarse} = \frac{1}{|C_s|} \sum_{c=1}^{|C_s|} \frac{1}{n_s} \sum_{i=1}^{n_s} \mathcal{L}_{bce}\!\left(H_c(H_f(x_i^s)),\, \mathbb{I}[y_i^s = c]\right) \tag{1}$$
In Equation (1), $\mathcal{L}_{bce}$ denotes the binary cross-entropy loss, and $\mathbb{I}[y_i^s = c] = 1$ when $y_i^s = c$ and $0$ otherwise. Each $H_c$ returns the probability score $P_c$ that a target-domain sample belongs to the known class $c$, so $P_c$ measures the similarity between target samples and that class; a higher score indicates a higher likelihood that the sample belongs to class $c$. Empirically, known-class samples in the target domain tend to receive higher probability scores than unknown-class samples. We therefore use the maximum of the per-class scores $p_1, p_2, \dots, p_{|C_s|}$ to represent the similarity between each target-domain sample $x_j^t$ and the source domain, as shown in Equation (2):
$$s_j = \max_{c \in C_s} H_c\!\left(H_f(x_j^t)\right) \tag{2}$$
To avoid manual hyperparameter tuning and ensure robustness across different degrees of openness, we introduce an adaptive thresholding mechanism based on quantile statistics:
$$\tau_{high} = \mathbb{E}_{x \in Q_{q_h}}\!\left[s(x)\right], \qquad \tau_{low} = \mathbb{E}_{x \in Q_{q_l}}\!\left[s(x)\right]$$
where $Q_{q_h}$ and $Q_{q_l}$ denote the top- and bottom-quantile subsets of the scores, and
$$T_K = \{x : s(x) \geq \tau_{high}\}, \qquad T_U = \{x : s(x) \leq \tau_{low}\}$$
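A minimal sketch of this partitioning, assuming a list of trained one-vs-class heads and using torch.quantile for the adaptive cut points (function and variable names are illustrative):

```python
import torch

@torch.no_grad()
def coarse_partition(feats, heads, q_hi=0.9, q_lo=0.1):
    # feats: (n_t, d) target features; heads: list of one-vs-class nn.Linear(d, 1)
    probs = torch.stack([torch.sigmoid(h(feats)).squeeze(-1) for h in heads], dim=1)
    s = probs.max(dim=1).values                      # s(x) = max_k p_k(x)
    tau_hi = s[s >= torch.quantile(s, q_hi)].mean()  # mean of top-quantile scores
    tau_lo = s[s <= torch.quantile(s, q_lo)].mean()  # mean of bottom-quantile scores
    known = s >= tau_hi                              # T_K: confident known samples
    unknown = s <= tau_lo                            # T_U: confident unknown samples
    ambiguous = ~(known | unknown)                   # excluded from alignment
    return known, unknown, ambiguous
```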
Figure 3. Coarse-to-fine sample separation via adaptive threshold selection. Target-domain samples are stratified into three regions based on similarity scores: a high-confidence known sample region (green) for domain alignment; an ambiguous sample region (gray) excluded from training; and a high-confidence unknown sample region (red) for novel class detection. The high and low adaptive thresholds are determined automatically, without hyperparameter tuning. This approach is conceptually similar to the Separate to Adapt (STA) method, which employs a coarse-to-fine separation mechanism to progressively distinguish between known and unknown classes based on sample similarity. In STA, sample importance is adaptively weighted during feature distribution alignment, and unknown target samples are excluded from the alignment process to prevent negative transfer. While STA addresses domain adaptation across varying levels of openness, this work utilizes adaptive threshold selection to specifically tackle the challenges of open-set EEG emotion recognition without the need for manual hyperparameter tuning.
This partitioning strategy creates three distinct regions: high-confidence known samples ( T K ), high-confidence unknown samples ( T U ), and an ambiguous region T A = D t T K T U that is excluded from alignment to prevent negative transfer.
After employing such a measurement method, known class samples in the target domain will indeed exhibit high similarity with the source domain. Similarly, samples of unknown classes in the target domain will show low similarity with the source domain.
Therefore, based on the magnitude of similarity for each target domain sample, we can sort all target domain samples and select those with particularly high or low probability scores to train the next-stage fine classifier H fine   . Although this selection method may seem simplistic, the chosen samples exhibit high confidence and similarity. Additionally, since we no longer need to manually select hyperparameters or use optimization tools, this approach is robust to varying degrees of openness.
To further refine sample selection, we categorize samples into three groups based on their similarity probability scores: highest, moderate, and lowest. We then use the average of the highest probability scores, denoted $\bar{s}_h$, as the threshold for known-class samples in the target domain: when a sample’s similarity score satisfies $s_j \geq \bar{s}_h$, we classify it as a known class. Similarly, we use the average of the lowest probability scores, denoted $\bar{s}_l$, as the threshold for unknown-class samples: when $s_j \leq \bar{s}_l$, we classify the sample as an unknown class.
Fine Stage: From the extreme quantiles (top $q_{hi}\%$ and bottom $q_{lo}\%$ of $s(x)$), a binary fine classifier $h_{fine}$ is trained to predict $w(x)$, the probability that $x$ is unknown. While the coarse stage provides an initial separation, the fine-stage binary classifier performs precise unknown detection by learning from the high-confidence samples identified in the coarse stage.
As illustrated in Figure 4, the C2FDA framework adopts a hierarchical processing strategy. First, a graph-based feature extractor (which incorporates the cognition-prior spatio-temporal graph encoder in C2FDA-G) processes EEG signals from both the source and target domains. This feature extraction approach combines a simpler feature extractor ( H f ) in C2FDA and a more complex graph encoder in C2FDA-G, both of which share the same fundamental task of extracting relevant features from the EEG signals. Then, the coarse-to-fine separation module performs progressive filtering on the target samples. Specifically, the coarse separation layer ranks the samples based on their similarity to known categories, while the fine separation layer conducts binary classification to distinguish between known and unknown samples. Finally, the domain adversarial module aligns distributions only for the samples that are likely to belong to known categories, thereby avoiding negative transfer caused by unknown samples. This allows the final classifier to achieve open-set emotion recognition with unknown category detection.
Once the coarse classifier $H_{coarse}$ selects high-probability known-class samples and low-probability unknown-class samples, denoted $X'$, we feed these selected samples into the next-stage fine classifier $H_{fine}$ to separate known and unknown class samples in the target domain. Each separated sample $x_j \in X'$ is assigned a label $d_j$: known-class samples are labeled $d_j = 0$, and unknown-class samples are labeled $d_j = 1$. The loss function of the fine classifier $H_{fine}$ is shown in Equation (3):
$$w(x) = \sigma\!\left(u_\varphi(f_\theta(x))\right), \qquad \mathcal{L}_{fine} = \frac{1}{|S|} \sum_{x \in S} \mathrm{BCE}\!\left(y_{unk}(x),\, w(x)\right) \tag{3}$$
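A short sketch of this fine-stage objective on the confident extremes from the coarse partition; `u_phi` is a hypothetical binary head, and the 0/1 labels follow the $d_j$ convention above:

```python
import torch
import torch.nn.functional as F

def fine_loss(u_phi, feats, known_mask, unknown_mask):
    sel = known_mask | unknown_mask                   # confident samples X' only
    w = torch.sigmoid(u_phi(feats[sel])).squeeze(-1)  # w(x) = sigma(u_phi(f_theta(x)))
    y_unk = unknown_mask[sel].float()                 # d_j: 1 = unknown, 0 = known
    return F.binary_cross_entropy(w, y_unk)
```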
The unknown probability w(x) serves as an instance-level confidence measure, enabling selective alignment where only samples with low w(x) values (likely known) participate in domain adversarial training. Through the Coarse-to-Fine processing module, we can separate EEG signal samples of known and unknown classes in the target domain.
Curriculum Learning: Inspired by step-penalty reinforcement learning, we weight early confident mistakes more heavily in the first K epochs:
$$\alpha_e = \alpha_0 + (1 - \alpha_0)\,\frac{e}{K}$$
From a semantic communication perspective, the coarse-to-fine mechanism acts as a semantic filtering process: it transmits only emotionally relevant information (known classes) while suppressing irrelevant or unknown samples. This aligns with the goal of semantic communication in 6G systems, where only semantically valid data is prioritized for transmission, thereby reducing bandwidth overhead and improving task efficiency.

3.5. The Domain Adversarial Module

Traditional domain adaptation aligns all target samples with the source distribution, leading to negative transfer when unknown classes are present. Our instance-weighted alignment strategy addresses this by selectively emphasizing likely known samples.
In this section, we first present the classification error function for the source domain, as shown in Equation (4):
$$\mathcal{L}_{cls}^{s} = \frac{1}{n_s} \sum_{x_i \in \mathcal{D}_s} \mathcal{L}_y\!\left(H_y^{1:|C_s|}(H_f(x_i)),\, y_i\right) \tag{4}$$
where $\mathcal{L}_y$ denotes the cross-entropy loss and $H_y$ is an extended classifier with $|C_s| + 1$ classes: the $|C_s|$ known emotions from the source domain plus one unknown emotion from the target domain. Accordingly, $H_y^{1:|C_s|}$ returns the probability of each sample over the $|C_s|$ known emotions.
Next, we align the features of samples from the source and target domains by mapping both domains into the shared label space $C_s$. Instead of feeding the output of $H_{fine}$ directly into discriminators for known and unknown classes, we append a softmax layer to the output of $H_{fine}$, which serves as the input to the discriminators. This softmax layer generates soft instance-level weights $w_j = H_b(H_f(x_j))$, where $H_b$ denotes the binary fine head; higher values of $w_j$ indicate a higher probability of the sample belonging to the unknown class. We can therefore use $w_j$ to define the weighted loss for Domain Adversarial adaptation of feature distributions in the shared label space $C_s$, as shown in Equation (5):
$$\mathcal{L}_d = \frac{1}{n_s} \sum_{x_i \in \mathcal{D}_s} \mathcal{L}_{bce}\!\left(H_d(H_f(x_i)),\, d_i\right) + \frac{1}{\sum_{x_j \in \mathcal{D}_t} (1 - w_j)} \sum_{x_j \in \mathcal{D}_t} (1 - w_j)\, \mathcal{L}_{bce}\!\left(H_d(H_f(x_j)),\, d_j\right) \tag{5}$$
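A sketch of this instance-weighted adversarial term, using a standard gradient reversal layer; `disc` is a hypothetical domain discriminator returning one logit per sample:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def adv_loss(disc, f_src, f_tgt, w_tgt, lam=1.0):
    # w_tgt: unknown probabilities w_j; (1 - w_j) emphasizes likely known targets
    d_src = disc(GradReverse.apply(f_src, lam)).squeeze(-1)
    d_tgt = disc(GradReverse.apply(f_tgt, lam)).squeeze(-1)
    loss_src = F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src))
    per_tgt = F.binary_cross_entropy_with_logits(
        d_tgt, torch.ones_like(d_tgt), reduction="none")
    loss_tgt = ((1 - w_tgt) * per_tgt).sum() / (1 - w_tgt).sum().clamp(min=1e-6)
    return loss_src + loss_tgt
```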
In addition, we need to select unknown-class samples from the target domain to train the feature extractor $H_f$. Based on the soft instance-level weights $w_j$, we can measure the separation between known and unknown classes, and define the weighted loss for distinguishing unknown classes as shown in Equation (6):
$$\mathcal{L}_{cls}^{t} = \frac{1}{\sum_{x_j \in \mathcal{D}_t} w_j} \sum_{x_j \in \mathcal{D}_t} w_j\, \mathcal{L}_y\!\left(H_y^{|C_s|+1}(H_f(x_j)),\, l_{uk}\right) \tag{6}$$
where $l_{uk}$ represents the unknown emotion class. Through training, all target samples with larger weights $w_j$ are assigned to the unknown emotion class; here $H_y^{|C_s|+1}(H_f(\cdot))$ denotes the probability that classifier $H_y$ assigns a target sample to the unknown class.
We also sharpen the decision boundary between domains by minimizing the entropy of known-class predictions in the target domain, denoted $\mathcal{L}_e$, weighted toward likely known samples, as shown in Equation (7):
$$\mathcal{L}_e = \frac{1}{\sum_{x_j \in \mathcal{D}_t} (1 - w_j)} \sum_{x_j \in \mathcal{D}_t} (1 - w_j)\, E\!\left(H_y^{1:|C_s|}(H_f(x_j))\right) \tag{7}$$
In Equation (7), $E$ represents the entropy, $E(p) = -\sum_k p_k \log p_k$. Note that our goal is to minimize the entropy of target samples predicted as known emotion classes; we therefore use $1 - w_j$ as the instance-level weight for entropy minimization.
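The weighted entropy term can be transcribed directly from Equation (7); a small sketch with illustrative names:

```python
import torch

def entropy_loss(logits_known, w_tgt, eps=1e-8):
    # logits_known: (n_t, |C_s|) known-class logits for target samples
    p = torch.softmax(logits_known, dim=-1)
    ent = -(p * (p + eps).log()).sum(dim=-1)   # E(p) = -sum_k p_k log p_k
    wk = 1 - w_tgt                             # weight toward likely known samples
    return (wk * ent).sum() / wk.sum().clamp(min=eps)
```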
The adversarial alignment in C2FDA ensures semantic consistency across domains, akin to semantic fidelity in 6G-oriented communication systems. By weighting known samples more heavily, the model mimics a semantic-aware transmission protocol that prioritizes meaningful emotional states over noisy or unknown inputs.

3.6. Open-Set Classification and Loss Functions

Beyond global domain alignment, we introduce prototype-based fine-grained alignment to enhance intra-class consistency between source and target domains:
$$\mu_k = \frac{1}{|S_k|} \sum_{(x, y) \in \mathcal{D}_s,\, y = k} f_\theta(x), \qquad \mathcal{L}_{proto} = \sum_{k \in C_s} \mathbb{E}_{x \in \hat{T}_K^k} \left\| f_\theta(x) - \mu_k \right\|_2^2$$
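A sketch of this prototype term: source class means pull confident known target samples (pseudo-labeled as class k) toward their class center. How the confident set $\hat{T}_K$ is indexed by pseudo-label is an assumption of this sketch.

```python
import torch

def proto_loss(f_src, y_src, f_tgt_known, y_tgt_pseudo, num_classes):
    """Mean squared distance of confident known targets to source prototypes."""
    loss, count = 0.0, 0
    for k in range(num_classes):
        src_k = f_src[y_src == k]
        tgt_k = f_tgt_known[y_tgt_pseudo == k]
        if len(src_k) == 0 or len(tgt_k) == 0:
            continue  # skip classes absent from this batch
        mu_k = src_k.mean(dim=0)                            # prototype mu_k
        loss = loss + ((tgt_k - mu_k) ** 2).sum(dim=-1).mean()
        count += 1
    return loss / max(count, 1)
```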
To stabilize the training process and prevent early convergence to suboptimal solutions, we employ a curriculum learning strategy that penalizes confident misclassifications more heavily in early training stages:
$$\mathcal{L}_{curr} = \frac{1}{|\mathcal{D}_t|} \sum_{x \in \mathcal{D}_t} \gamma^{\max(0,\, T_0 - e)} \cdot \mathbb{I}\!\left[\hat{y} \neq \text{unk}\right] \cdot w(x)$$
where $e$ is the current epoch, $T_0$ is the transition epoch, and $\gamma \in (0, 1)$.
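A direct transcription of this term as a sketch; `pred_is_known` marks samples the open-set head currently classifies as known while the fine head assigns high unknown probability, and the default values are illustrative:

```python
import torch

def curriculum_loss(pred_is_known, w_tgt, epoch, t0=10, gamma=0.9):
    # pred_is_known: bool mask where H_y predicts a known class (y_hat != unk)
    factor = gamma ** max(0, t0 - epoch)     # gamma^{max(0, T0 - e)}
    return factor * (pred_is_known.float() * w_tgt).mean()
```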
Our final objective function integrates all components through a carefully designed multi-term loss that balances source supervision, sample separation, domain alignment, and regularization.
The open-set classifier $H_y$ outputs $|C_s| + 1$ logits, with the last logit representing “unknown.” We optimize the total loss:
$$\mathcal{L} = \mathcal{L}_{src} + \lambda_{co}\mathcal{L}_{coarse} + \lambda_{fi}\mathcal{L}_{fine} + \lambda_{adv}\mathcal{L}_{adv} + \lambda_{pr}\mathcal{L}_{proto} + \lambda_{unk}\mathcal{L}_{unk} + \lambda_{ent}\mathcal{L}_{ent} + \lambda_{cur}\mathcal{L}_{curr}$$
Each loss term addresses a specific aspect of the open-set domain adaptation problem: L s r c ensures source discriminability, L c o a r s e and L f i n e enable known/unknown separation, L a d v performs selective alignment, L p r o t o enhances intra-class consistency, L u n k promotes unknown rejection, L e n t sharpens decision boundaries, and L c u r r provides training stability.

3.7. Objective Function

The optimization of our multi-component objective requires a progressive training strategy that alternates between sample separation and domain alignment to ensure stable convergence.
We divide the training into two progressive stages: (1) sample separation, where target data are partitioned into likely known and likely unknown subsets based on confidence scores, and (2) domain adversarial adaptation, where only the reliable known subset is aligned with the source domain distribution. By alternating between these two stages, the model gradually adapts target samples of known classes while rejecting unknown ones.
Algorithm 1 summarizes the procedure. In the first step, we train the feature extractor f θ and classifier H y with source supervision, while auxiliary coarse classifiers { h k } provide confidence scores for sample separation. Target samples with high scores are treated as potential known data and passed to the fine classifier u ϕ , whereas low-score samples are considered likely unknown.
This alternating optimization strategy prevents the premature alignment of unknown samples while gradually improving the separation of known and unknown classes, leading to more robust open-set domain adaptation performance.
Algorithm 1: Coarse-to-Fine Open-Set Domain Adaptation (CF-OSDA)
Step 1: First, we train the feature extractor $H_f$ and classifier $H_y$ on the source domain, and use each class of emotion samples in the source domain to train the coarse classifiers $\{H_c\}_{c=1}^{|C_s|}$. Next, we select target-domain samples with high and low probability scores to train the fine classifier $H_{fine}$. Denoting the parameters of $H_f$, $H_y$, $H_{fine}$, and $\{H_c\}_{c=1}^{|C_s|}$ by $\theta_f$, $\theta_y$, $\theta_{fine}$, and $\{\theta_c\}_{c=1}^{|C_s|}$, respectively, the optimal parameters are obtained via Equation (8):
$$\left(\hat{\theta}_f,\, \hat{\theta}_y,\, \hat{\theta}_{fine},\, \{\hat{\theta}_c\}_{c=1}^{|C_s|}\right) = \underset{\theta_f,\, \theta_y,\, \theta_{fine},\, \{\theta_c\}_{c=1}^{|C_s|}}{\arg\min}\; \mathcal{L}_{cls}^{s} + \mathcal{L}_{coarse} + \mathcal{L}_{fine} \tag{8}$$
Step 2: In this step, we perform domain adversarial adaptation to align the feature distribution of known classes in the target domain with that of the source domain, and use data from unknown classes to train $H_y$ on the additional unknown class. We continue training the classifiers with source samples to retain knowledge of the known emotion classes. Denoting the parameters of the domain discriminator $H_d$ by $\theta_d$, the optimal parameters $\hat{\theta}_f$, $\hat{\theta}_y$, $\hat{\theta}_d$ are obtained via Equations (9) and (10):
$$\left(\hat{\theta}_y,\, \hat{\theta}_d\right) = \underset{\theta_y,\, \theta_d}{\arg\min}\; \mathcal{L}_{cls}^{s} + \mathcal{L}_{cls}^{t} + \mathcal{L}_d + \lambda \mathcal{L}_e \tag{9}$$
$$\hat{\theta}_f = \underset{\theta_f}{\arg\min}\; \mathcal{L}_{cls}^{s} + \mathcal{L}_{cls}^{t} - \mathcal{L}_d + \lambda \mathcal{L}_e \tag{10}$$
where λ is a hyperparameter used to balance the entropy loss.
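The alternating schedule can be summarized in the following high-level sketch; every component (encoder, heads, discriminator, loaders, and loss helpers) is a placeholder for the corresponding module above, not the authors' code:

```python
for epoch in range(num_epochs):
    # Stage 1: sample separation -- source supervision + coarse/fine heads (Eq. 8)
    for x_s, y_s, x_t in loader:
        f_s, f_t = encoder(x_s), encoder(x_t)
        loss1 = cls_loss(f_s, y_s) + coarse_loss(f_s, y_s) + fine_step(f_t)
        opt_sep.zero_grad()
        loss1.backward()
        opt_sep.step()

    # Stage 2: adversarial adaptation on likely-known targets (Eqs. 9 and 10);
    # the gradient reversal layer inside adv_loss realizes the -L_d term for theta_f
    for x_s, y_s, x_t in loader:
        f_s, f_t = encoder(x_s), encoder(x_t)
        w = unknown_prob(f_t).detach()               # w_j from the fine head
        loss2 = (cls_loss(f_s, y_s) + unknown_cls_loss(f_t, w)
                 + adv_loss(disc, f_s, f_t, w) + lam * entropy_loss(head(f_t), w))
        opt_adapt.zero_grad()
        loss2.backward()
        opt_adapt.step()
```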
Through the proposed C2FDA model, we can effectively separate known and unknown class data in the target domain. Step 1 rejects unknown class emotion data to avoid interference from unknown class emotions in Step 2, where domain adversarial adaptation aligns the feature distributions of samples between the source and target domains. Since there is no manual selection of threshold hyperparameters throughout the process, the disadvantage of tuning parameters when the openness changes in real scenarios can be avoided.
In summary, the central problem of an open-set task is the separation of known and unknown emotions. To solve it, the C2FDA method progressively identifies two types of samples during training: those with high similarity scores and those with low scores. Predictions on high-score samples are more reliable, which benefits the training of known-class classifiers, while low-score samples tend to belong to unknown classes and can therefore be used to train the unknown-class classifier. In this way, known and unknown emotions are well separated, which also mitigates the negative transfer caused by misclassified samples.

4. Experiments and Analysis

4.1. Dataset

SEED: The SEED dataset [40,41] is a publicly available dataset for studying the relationship between emotions and EEG signals. It consists of recordings from three sessions, each containing EEG signal data from 15 subjects. In each session, subjects watched 15 video clips with varying emotional tendencies (negative, neutral, positive), as shown in Figure 5. EEG signals were recorded using a 62-channel ESI neuroimaging system at a sampling rate of 200 Hz and band-pass filtered from 0 to 75 Hz. The raw EEG signal data were processed to extract features in five frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–14 Hz), beta (14–31 Hz), and gamma (31–50 Hz), producing 310-dimensional feature vectors (5 frequency bands × 62 channels).
SEED-IV: The SEED-IV dataset [42,43] contains three sessions, each with 15 subjects. In each session, subjects watched 24 video clips with different emotional tendencies (happiness, sadness, fear, neutral), as shown in Figure 6. Similar to the SEED dataset, the EEG data were processed into 310-dimensional feature vectors (5 frequency bands × 62 channels), ensuring consistency for comparison. This normalization of the EEG data allows direct comparison with SEED.
SEED-V: The SEED-V dataset [44] differs from the previous two datasets in that it includes three sessions with 16 subjects in each session. In each session, subjects watched video clips with five emotional tendencies (happiness, sadness, fear, neutral, disgust), as shown in Figure 7. Similar to the SEED and SEED-IV datasets, the EEG data were transformed into 310-dimensional feature vectors (5 frequency bands × 62 channels) to standardize the data across all datasets. This ensures consistency in the EEG data representation.
Differences Between Datasets: Table 1 outlines the key differences among the three datasets, including the number of subjects, video clips, and emotion categories, highlighting the increasing complexity of label spaces from SEED to SEED-V.

4.2. Implementation Details

In our experiments, we use the SEED, SEED-IV, and SEED-V datasets to validate the performance of the C2FDA method in the open-set EEG emotion recognition task. The experiments are conducted across three main transfer scenarios, which differ in the number of emotional categories:
  • SEED contains 3 emotions
  • SEED-IV contains 4 emotions
  • SEED-V contains 5 emotions
To investigate the model’s performance further, we conduct experiments with the following transfer scenarios: “SEED → SEED-IV”, “SEED-IV → SEED-V”, and “SEED → SEED-V”. The source domain in each scenario consists of data from 15 or 16 subjects in one session, while the target domain consists of data from a single subject in one session. Each dataset has three sessions, which allows us to test the model under different conditions and obtain reliable results.
Due to the inconsistency in label spaces between the datasets—SEED having 3 emotion classes, SEED-IV having 4, and SEED-V having 5—we perform experiments that involve transferring between these datasets. In each experiment, one subject from the target domain is randomly selected for testing, and the remaining subjects’ data are used for training. As each dataset contains three sessions, we obtain results for three different sessions in each experiment.

4.3. Experimental Results

We conduct three sets of experiments on open-set emotion recognition based on the same experimental setup. The experimental results are shown in Table 2, Table 3 and Table 4.
Figure 8. t-SNE visualization of learned features for the (a) SEED → SEED-IV and (b) SEED-IV → SEED-V transfer scenarios before and after domain adaptation. Source (solid) and target (hollow) domain samples are color-coded by emotion category. Post-adaptation features exhibit reduced domain discrepancy and improved class-wise clustering.
SEED → SEED-IV: The results for the SEED → SEED-IV transfer task are shown in Table 2 and Figure 9. The C2FDA model demonstrates superior performance compared to baseline methods, achieving recognition accuracies of 40.66%, 36.13%, and 47.71% across the three sessions. As illustrated in Figure 9, C2FDA consistently outperforms existing approaches including DANN [12] (28.5%, 25.7%, 34.8%), MMD [13] (29.8%, 27.1%, 36.4%), CORAL [14] (31.2%, 28.3%, 37.2%), CDAN [15] (33.5%, 29.6%, 39.1%), OSBP [25] (35.4%, 31.2%, 41.7%), and MAOSDAN [27] (37.8%, 33.9%, 44.2%) across all sessions. The performance improvement is particularly notable in Session 3, where C2FDA achieves 47.71% compared to the second-best MAOSDAN at 44.2%, demonstrating the effectiveness of our coarse-to-fine processing strategy in handling the domain shift between the three emotional classes in the source domain and four emotional classes in the target domain.
SEED → SEED-V: In the SEED → SEED-V transfer task (Table 3), the model achieves recognition accuracies of 42.37%, 42.75%, and 42.59% across the three sessions. As demonstrated in Figure 8 and Figure 9, the C2FDA method shows robust performance in distinguishing between known classes (Happy, Sad, Neutral) and unknown classes (Disgust, Fear). The ROC analysis in Figure 10 reveals excellent discrimination capability with AUC values of 0.84 for Happy, 0.86 for Sad, and 0.77 for Neutral, while unknown classes achieve AUC values of 0.71 for Disgust and 0.74 for Fear, all significantly outperforming random classification. The confusion matrix in Figure 11 further validates the effectiveness of our approach, showing strong diagonal values for known classes (0.68 for Happy, 0.71 for Sad, 0.58 for Neutral) and effective unknown class detection with 42% and 46% of Disgust and Fear samples correctly identified as unknown. This performance improvement over the SEED → SEED-IV task can be attributed to the increased diversity of emotional states in SEED-V, which provides richer information for learning the distinction between known and unknown categories.
The SEED-IV → SEED-V task (Table 4) shows the most significant improvement, with overall recognition accuracies of 47.68%, 46.57%, and 52.44% across the three sessions. As illustrated in Figure 11 and Figure 12, the multi-dimensional performance analysis reveals excellent capabilities in both known class recognition and unknown class detection. The known class performance consistently exceeds the baseline (65%) with accuracies of 72.5%, 71.2%, and an exceptional 76.8% in Session 3, which represents the highest recognition rate across all tasks. Simultaneously, the unknown detection performance maintains stable rates of 45.0%, 42.9%, and 48.2% across sessions, effectively balancing the dual objectives of accurate known class classification and reliable unknown class rejection. This superior performance compared to previous tasks can be attributed to the expanded known class space (from 3 to 4 categories) which provides richer feature representations for distinguishing between shared and novel emotional states. The consistent performance above average baselines across all metrics demonstrates the robustness of our coarse-to-fine approach in handling more complex open-set scenarios.
Analysis of Performance: As demonstrated in Figure 13, C2FDA consistently outperforms all baseline methods across the three transfer scenarios, with average accuracies of 41.5%, 42.6%, and 48.9% for SEED → SEED-IV, SEED → SEED-V, and SEED-IV → SEED-V, respectively. The comprehensive performance comparison shows substantial improvements over traditional domain adaptation methods (DANN: 28.5–37.1%, MMD: 29.5–37.2%, CORAL: 31.8–38.7%) and existing open-set approaches (OSBP: 36.1–44.3%, MAOSDAN: 38.9–46.5%). The performance trend analysis reveals that C2FDA achieves progressively better results as task complexity increases, with an overall average of 44.32% across all scenarios. The superior performance in SEED-IV → SEED-V (48.9%) compared to scenarios with more unknown classes demonstrates the effectiveness of our coarse-to-fine strategy in leveraging richer known-class representations for better unknown-class detection. This consistent superiority across varying degrees of openness validates the robustness of C2FDA in handling diverse open-set domain adaptation challenges in EEG emotion recognition.
In all three tasks, the proposed model achieved satisfactory performance, indicating the validity of our proposed method. Since it is difficult to distinguish between known and unknown emotional classes, we effectively addressed this issue through a coarse-to-fine strategy, obtaining a stable model through multiple iterations.
Figure 14 illustrates the key performance results of the C2FDA method for open-set EEG emotion recognition. The left panel presents a comprehensive performance comparison between C2FDA and multiple baseline methods, including DANN, MMD, CORAL, CDAN, and MAOSDAN, across three cross-dataset tasks: SEED → SEED-IV, SEED → SEED-V, and SEED-IV → SEED-V. The results demonstrate that the proposed C2FDA method significantly outperforms existing domain adaptation approaches across all testing scenarios, particularly achieving the highest average accuracy in the SEED-IV → SEED-V task. The right panel further displays the performance trend of C2FDA across different tasks, where the SEED-IV → SEED-V task achieves the best performance of 48.36%, with an overall average accuracy of 44.33%. These experimental results thoroughly validate the effectiveness of the C2FDA method in addressing cross-domain generalization challenges in open-set EEG emotion recognition, providing a novel technical pathway for research in this field.

4.4. Semantic Communication Perspective

From the viewpoint of semantic communication, C2FDA’s performance improvements can be interpreted as enhancements in semantic fidelity and communication efficiency. By rejecting unknown samples, the method reduces the amount of data that needs to be transmitted or processed, which is critical for bandwidth-constrained edge devices in 6G networks. For example, in a scenario where EEG features are extracted at the edge and only known emotional states are transmitted to a central server, C2FDA can significantly reduce communication overhead while maintaining high recognition accuracy. This makes it suitable for real-time human–machine interaction applications in next-generation networks.

5. Conclusions

In summary, this study proposes the C2FDA framework for open-set EEG emotion recognition, addressing negative transfer and the detection of unknown classes. The coarse-to-fine processing module separates known and unknown emotional classes based on similarity scores, while the domain adversarial module aligns the feature spaces of the source and target domains through instance-weighted adversarial training. Comprehensive experiments demonstrate that C2FDA consistently outperforms existing domain adaptation and open-set methods across multiple transfer scenarios. The ROC analysis and confusion matrices confirm robust discrimination between known and unknown classes, maintaining a sound balance between accurate recognition and reliable rejection. The progressive performance improvement across task complexities validates the effectiveness and robustness of the approach. Future research will incorporate more diverse data and optimize both modules to improve generalization and stability across different degrees of dataset openness.
While this work conceptually explores the potential alignment of C2FDA with semantic communication principles in 6G networks, it is primarily focused on the development and evaluation of an open-set domain adaptation framework for EEG emotion recognition. By filtering task-relevant semantics and rejecting unknown states, C2FDA improves both semantic efficiency and robustness, providing insights into human-centric semantic communication in next-generation networks. Future work will explore the integration of C2FDA into edge-cloud semantic communication pipelines and evaluate its performance under realistic network constraints, while also investigating its potential impact on 6G-oriented human–machine interactions.

Author Contributions

Conceptualization, H.F.; methodology, C.Z.; software, C.Z.; validation, L.C. and Y.Y.; formal analysis, Y.Y.; writing—original draft preparation, C.Z.; writing—review and editing, L.C.; supervision, H.F.; funding acquisition, H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Municipal Research Project on Digital Education (2025), grant number BDEC2025619100, and the University-level Teacher Teaching Innovation Team Project, grant number CJGX2025-KY-202.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Jothimani, S.; Premalatha, K. THFN: Emotional health recognition of elderly people using a Two-Step Hybrid feature fusion network along with Monte-Carlo dropout. Biomed. Signal Process. Control 2023, 86, 105116.
2. Zhao, Q.; Liang, Z. Research on multimodal based learning evaluation method in smart classroom. Learn. Motiv. 2023, 84, 101943.
3. Xie, Z.; Zhou, M.; Sun, H. A novel solution for EEG-based emotion recognition. In Proceedings of the 2021 IEEE 21st International Conference on Communication Technology (ICCT), Tianjin, China, 13–16 October 2021; pp. 1134–1138.
4. Li, Q.; Liu, Y.; Liu, C.; Yan, F.; Zhang, Q.; Liu, Q.; Gao, W. EEG signal processing and emotion recognition using Convolutional Neural Network. In Proceedings of the 2021 International Conference on Electronic Information Engineering and Computer Science (EIECS), Changchun, China, 23–26 September 2021; pp. 81–84.
5. Jimenez-Guarneros, M.; Gómez-Gil, P. Custom Domain Adaptation: A new method for cross-subject, EEG-based cognitive load recognition. IEEE Signal Process. Lett. 2020, 27, 750–754.
6. Jiang, H.; Shen, F.; Chen, L.; Peng, Y.; Guo, H.; Gao, H. Joint domain symmetry and predictive balance for cross-dataset EEG emotion recognition. J. Neurosci. Methods 2023, 400, 109978.
7. Gao, F.; Pi, D.; Chen, J. Balanced and robust unsupervised Open Set Domain Adaptation via joint adversarial alignment and unknown class isolation. Expert Syst. Appl. 2024, 238, 122127.
8. Zhao, X.; Wang, S.; Sun, Q. Open-set domain adaptation by deconfounding domain gaps. Appl. Intell. 2023, 53, 7862–7875.
9. Long, S.; Wang, S.; Zhao, X.; Fu, Z.; Wang, B. Sample separation and domain alignment complementary learning mechanism for open set domain adaptation. Appl. Intell. 2023, 53, 18790–18805.
10. Xu, B.; Wu, K.; Wu, Y.; He, J.; Chen, C. Dynamic adversarial domain adaptation based on multikernel maximum mean discrepancy for breast ultrasound image classification. Expert Syst. Appl. 2022, 207, 117978.
11. Yi, C.; Chen, H.; Xu, Y.; Liu, Y.; Jiang, L.; Tan, H. ATPL: Mutually enhanced adversarial training and pseudo labeling for unsupervised domain adaptation. Knowl.-Based Syst. 2022, 250, 108831.
12. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 1–35.
13. Gretton, A.; Borgwardt, K.M.; Rasch, M.; Schölkopf, B.; Smola, A.J.; Platt, J.; Hofmann, T. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006, 19, 513–520.
14. Sun, B.; Feng, J.; Saenko, K. Correlation alignment for unsupervised domain adaptation. In Domain Adaptation in Computer Vision Applications; Springer International: Cham, Switzerland, 2017; pp. 153–171.
15. Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Conditional adversarial domain adaptation. Adv. Neural Inf. Process. Syst. 2018, 31, 1640–1650.
16. Feng, C.; Zhong, C.; Wang, J.; Sun, J.; Yokota, Y. EBB: Progressive Optimization for Partial Domain Adaptation. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 734–738.
17. Zhang, C.; Hu, C.; Xie, J.; Wu, H.; Zhang, J. WCAL: Weighted and center-aware adaptation learning for partial domain adaptation. Eng. Appl. Artif. Intell. 2024, 130, 107740.
18. Busto, P.P.; Gall, J. Open set domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 754–763.
19. Ji, K.; Zhang, Q.; Zhu, S. Subdomain alignment based open-set domain adaptation image classification. J. Vis. Commun. Image Represent. 2024, 98, 104047.
20. Tang, Y.; Tian, L.; Zhang, W. Open set domain adaptation with latent structure discovery and kernelized classifier learning. Neurocomputing 2023, 531, 125–139.
21. Li, X.; Fei, J.; Xie, J.; Li, D.; Jiang, H.; Wang, R.; Qi, Z. Open Set Recognition for Malware Traffic via Predictive Uncertainty. Electronics 2023, 12, 323.
22. Zhang, B.; Zhang, T.; Ma, Y.; Xi, Z.; He, C.; Wang, Y.; Lv, Z. A Low-Latency Approach for RFF Identification in Open-Set Scenarios. Electronics 2024, 13, 384.
23. Sun, C.; Du, Y.; Qiao, X.; Wu, H.; Zhang, T. Research on the Enhancement Method of Specific Emitter Open Set Recognition. Electronics 2023, 12, 4399.
24. Yang, Y.; Zhu, L. A Knowledge Inference and Sharing-Based Open-Set Device Recognition Approach for Satellite-Terrestrial-Integrated IoT. Electronics 2023, 12, 1143.
25. Saito, K.; Yamamoto, S.; Ushiku, Y.; Harada, T. Open set domain adaptation by backpropagation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 153–168.
26. Liu, H.; Cao, Z.; Long, M.; Wang, J.; Yang, Q. Separate to Adapt: Open set domain adaptation via progressive separation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2927–2936.
27. Zheng, J.; Wen, Y.; Chen, M.; Yuan, S.; Li, W.; Zhao, Y.; Wu, W.; Zhang, L.; Dong, R.; Fu, H. Open-set domain adaptation for scene classification using multi-adversarial learning. ISPRS J. Photogramm. Remote Sens. 2024, 208, 245–260.
28. Liu, W.; Qiu, J.-L.; Zheng, W.-L.; Lu, B.-L. Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition. IEEE Trans. Cogn. Dev. Syst. 2021, 14, 715–729.
29. Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks. IEEE Trans. Affect. Comput. 2020, 11, 532–541.
30. Huang, Z.; Ma, Y.; Wang, R.; Li, W.; Dai, Y. A Model for EEG-Based Emotion Recognition: CNN-Bi-LSTM with Attention Mechanism. Electronics 2023, 12, 3188.
31. Zhang, L.; Xia, B.; Wang, Y.; Zhang, W.; Han, Y. A Fine-Grained Approach for EEG-Based Emotion Recognition Using Clustering and Hybrid Deep Neural Networks. Electronics 2023, 12, 4717.
32. Sun, M.; Cui, W.; Yu, S.; Han, H.; Hu, B.; Li, Y. A Dual-Branch Dynamic Graph Convolution Based Adaptive Transformer Feature Fusion Network for EEG Emotion Recognition. IEEE Trans. Affect. Comput. 2022, 13, 2218–2228.
33. Li, C.; Tang, T.; Pan, Y.; Yang, L.; Zhang, S.; Chen, Z.; Li, P.; Gao, D.; Chen, H.; Li, F.; et al. An Efficient Graph Learning System for Emotion Recognition Inspired by the Cognitive Prior Graph of EEG Brain Network. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 7130–7144.
34. Wang, Z.; Chen, M.; Feng, G. Study on Driver Cross-Subject Emotion Recognition Based on Raw Multi-Channels EEG Data. Electronics 2023, 12, 2359.
35. Davarzani, S.; Masihi, S.; Panahi, M.; Olalekan Yusuf, A.; Atashbar, M. A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition. Electronics 2025, 14, 2744.
36. Ma, W.; Zheng, Y.; Li, T.; Li, Z.; Li, Y.; Wang, L. A comprehensive review of deep learning in EEG-based emotion recognition: Classifications, trends, and practical implications. PeerJ Comput. Sci. 2024, 10, e2065.
37. Liu, Y.; Wang, X.; Ning, Z.; Zhou, M.; Guo, L.; Jedari, B. A survey on semantic communications: Technologies, solutions, applications and challenges. Digit. Commun. Netw. 2024, 10, 528–545.
38. Wang, Y.; Han, H.; Feng, Y.; Zheng, J.; Zhang, B. Semantic Communication Empowered 6G Networks: Techniques, Applications, and Challenges. IEEE Access 2025, 13, 28293–28314.
39. Utkovski, Z.; Munari, A.; Caire, G.; Dommel, J.; Lin, P.-H.; Franke, M.; Drummond, A.C.; Stańczak, S. Semantic Communication for Edge Intelligence: Theoretical Foundations and Implications on Protocols. IEEE Internet Things Mag. 2023, 6, 48–53.
40. Yan, L.; Qin, Z.; Zhang, R.; Li, Y.; Li, G.Y. QoE-Aware Resource Allocation for Semantic Communication Networks. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 3272–3277.
41. Zheng, W.-L.; Lu, B.-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
42. Duan, R.-N.; Zhu, J.-Y.; Lu, B.-L. Differential entropy feature for EEG-based emotion classification. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 81–84.
43. Zheng, W.-L.; Liu, W.; Lu, Y.; Lu, B.-L.; Cichocki, A. EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Trans. Cybern. 2018, 49, 1110–1122.
44. Xu, F.; Pan, D.; Zheng, H.; Ouyang, Y.; Jia, Z.; Zeng, H. EESCN: A novel spiking neural network method for EEG-based emotion recognition. Comput. Methods Programs Biomed. 2024, 243, 107927.
Figure 1. Emotional brain–computer interface cycle.
Figure 2. Domain Adaptation Classification.
Figure 3. Coarse-to-Fine Sample Separation with Adaptive Threshold Selection for Open-Set Domain Adaptation.
Figure 4. Open-set EEG emotion recognition method based on C2FDA.
Figure 5. Video clips watched by subjects in the SEED dataset.
Figure 6. Video clips watched by subjects in the SEED-IV dataset.
Figure 7. Video clips watched by subjects in the SEED-V dataset.
Figure 8. t-SNE visualization of domain adaptation for cross-dataset emotion recognition.
Figure 9. Performance Comparison on SEED → SEED-IV Transfer Task.
Figure 10. SEED → SEED-IV Transfer: ROC Analysis and Performance Metrics.
Figure 11. Confusion Matrix: SEED → SEED-V Open-Set Transfer Task.
Figure 12. Ablation analysis of the proposed method across three transfer tasks.
Figure 13. SEED-IV → SEED-V: Multi-dimensional Performance Analysis Across Sessions.
Figure 14. Key Performance of C2FDA for Open-Set EEG Emotion Recognition.
Table 1. Differences between the SEED, SEED-IV, and SEED-V datasets.

Item                 | SEED                        | SEED-IV                      | SEED-V
Emotions             | Positive, Negative, Neutral | Happy, Sad, Neutral, Fearful | Happy, Sad, Disgust, Neutral, Fearful
Number of Subjects   | 15                          | 15                           | 16
Video Clips          | 15                          | 24                           | 15
Video Length         | 4 min                       | 2 min                        | 50 min
Sample Length        | 1 s                         | 4 s                          | 15–30 s
Number of Samples    | ≈3394                       | ≈843                         | ≈681
Table 2. Recognition accuracy (%) of the C2FDA framework for emotion recognition tasks on open-set EEG datasets (SEED → SEED-IV). The bold text indicates the best performance in each session.

Subject   | Session 1 | Session 2 | Session 3
S1        | 39.90     | 52.76     | 50.50
S2        | 47.84     | 41.71     | 57.75
S3        | 29.09     | 30.05     | 34.12
S4        | 25.48     | 29.69     | 35.75
S5        | 40.02     | 25.96     | 43.75
S6        | 54.33     | 37.62     | 50.50
S7        | 31.61     | 26.44     | 46.88
S8        | 41.35     | 29.33     | 48.75
S9        | 40.87     | 37.14     | 48.00
S10       | 53.85     | 49.16     | 47.25
S11       | 43.15     | 31.37     | 45.38
S12       | 38.58     | 51.08     | 67.12
S13       | 43.27     | 36.78     | 70.25
S14       | 38.82     | 32.21     | 36.12
S15       | 41.71     | 30.65     | 33.50
Average   | 40.66     | 36.13     | 47.71
Table 3. Recognition accuracy (%) of the C2FDA method for emotion recognition tasks on open-set EEG datasets (SEED → SEED-V).

Subject   | Session 1 | Session 2 | Session 3
S1        | 34.23     | 18.36     | 34.38
S2        | 57.74     | 59.18     | 45.83
S3        | 34.67     | 47.85     | 41.49
S4        | 40.33     | 44.73     | 52.08
S5        | 30.65     | 42.38     | 31.77
S6        | 44.79     | 25.00     | 46.88
S7        | 45.54     | 48.63     | 60.76
S8        | 46.28     | 39.26     | 39.06
S9        | 40.18     | 56.05     | 64.41
S10       | 64.73     | 62.89     | 56.42
S11       | 37.80     | 38.48     | 34.72
S12       | 49.55     | 44.14     | 31.60
S13       | 31.25     | 23.44     | 27.08
S14       | 35.71     | 51.56     | 33.68
S15       | 46.73     | 36.13     | 49.65
S16       | 37.80     | 45.90     | 31.60
Average   | 42.37     | 42.75     | 42.59
Table 4. Recognition accuracy (%) of the C2FDA method for emotion recognition tasks on open-set EEG datasets (SEED-IV → SEED-V).

Subject   | Session 1 | Session 2 | Session 3
S1        | 40.18     | 37.11     | 20.83
S2        | 46.13     | 44.92     | 51.04
S3        | 50.89     | 31.45     | 50.00
S4        | 47.02     | 52.93     | 69.79
S5        | 57.74     | 26.76     | 31.42
S6        | 41.67     | 53.12     | 33.16
S7        | 63.24     | 51.95     | 68.40
S8        | 54.46     | 49.61     | 54.69
S9        | 45.68     | 68.36     | 55.21
S10       | 52.83     | 51.37     | 76.74
S11       | 41.67     | 48.05     | 58.85
S12       | 23.81     | 29.49     | 50.35
S13       | 54.17     | 32.23     | 35.76
S14       | 48.07     | 50.78     | 61.98
S15       | 60.12     | 48.05     | 46.88
S16       | 35.12     | 68.95     | 73.96
Average   | 47.68     | 46.57     | 52.44