Article

Decoding Self-Imagined Emotions from EEG Signals Using Machine Learning for Affective BCI Systems

by Charoenporn Bouyam 1, Nannaphat Siribunyaphat 1,2, Bukhoree Sahoh 1,2 and Yunyong Punsawad 1,2,*

1 School of Informatics, Walailak University, Nakhon Si Thammarat 80160, Thailand
2 Informatics Innovative Center of Excellence, Walailak University, Nakhon Si Thammarat 80160, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(11), 1868; https://doi.org/10.3390/sym17111868
Submission received: 14 October 2025 / Revised: 28 October 2025 / Accepted: 3 November 2025 / Published: 4 November 2025

Abstract

Research on self-imagined emotional imagery supports the development of practical affective brain–computer interface (BCI) systems. This study proposes a hybrid emotion induction approach that combines facial expression image cues with subsequent emotional imagery, involving six positive and six negative emotions across two- or four-class valence and arousal categories. Machine learning (ML) techniques were applied to decode these self-generated emotions from electroencephalogram (EEG) signals. Experiments were conducted to observe brain activity and to validate the proposed features and classification algorithms. The results showed that absolute beta power features computed from the power spectral density (PSD) across EEG channels consistently achieved the highest classification accuracy for all emotion categories with the K-nearest neighbors (KNN) algorithm, while alpha–beta ratio features also contributed. The nonlinear ML models were highly effective; the KNN classifier performed best in detecting neutral states, while the artificial neural network (ANN) achieved balanced accuracy across emotional states. The proposed system supports the use of the hybrid emotion induction paradigm and PSD-derived EEG features to develop reliable, subject-independent affective BCI systems. In future work, we will expand the datasets, employ advanced feature extraction and deep learning models, integrate multi-modal signals, and validate the proposed approaches across broader populations.

1. Introduction

Affective computing is an interdisciplinary field that combines computer science, psychology, neuroscience, and human–computer interaction (HCI) to develop systems that recognize, interpret, and respond to human emotions [1,2,3]. Applications in mental health [4,5], human–artificial intelligence (AI) interaction [6,7], the metaverse [8,9], and digital communication [10,11] rely on affective computing to evaluate users’ emotional states, thereby improving the naturalness, personalization, and effectiveness of interactions. This potential is essential for improving HCI, advancing mental health diagnosis, and developing adaptive learning systems.
Emotions [12] are complex and affect attention, decision-making, memory, and motivation. Detecting these emotions from brain signals involves extracting features from the time or frequency domain and classifying them using machine learning (ML) methods [13]. Brain–computer interface (BCI) technology enables direct communication between the brain and external devices, bypassing the need for peripheral nerves and muscles, helping people with motor impairments regain, enhance, or replace their communication abilities [14,15]. BCIs primarily use electroencephalography (EEG) to identify various affective emotional states, such as stress, happiness, sadness, or excitement [16,17]. EEG-based BCIs can be categorized as active or passive. Active BCIs require users to control signals via external stimuli, while passive BCIs detect emotional and mental states in spontaneous brain activity. Their applications include mental health monitoring [18], adaptive learning [19], emotional communication [20,21], and human–robot interactions [22].
Emotional states can be identified through facial expression, voice, body language, or physiological responses. Biomedical signals, including the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR), have been used and integrated into applications. Research on recognizing EEG-based, externally induced emotions [23,24,25] has shown that emotional states affect EEG patterns and their frequency bands, brain connectivity, and the symmetry or asymmetry of the cerebral hemispheres.
Recent studies on EEG-based emotion recognition have used various approaches, stimuli, and feature extraction techniques to improve classification accuracy. Zaidi et al. [26] proposed a novel EEG framework for detecting guilt by focusing on sex-specific neural patterns, combining neuroscience and machine learning to classify guilt versus a neutral state. They used visual stimuli for emotion elicitation, independent component analysis (ICA) for EEG preprocessing, and support vector machine (SVM) classifiers. Their model achieved an accuracy of more than 83% without feature transformation, obtaining better results for females. Er et al. [27] introduced a new method for emotion recognition using EEG signals from music stimuli to improve accuracy and ecological validity in affective computing. EEG data were recorded as the participants listened to music designed to evoke six emotions: happiness, sadness, fear, anger, surprise, and neutrality. They extracted time–frequency features and used ML classifiers such as SVM, K-nearest neighbors (KNN), and random forests (RF), achieving an accuracy of up to 87.2% in multiclass emotion classification and confirming that music-induced EEG signals provide reliable features for emotion decoding. Huang et al. [28] developed an EEG-based BCI to detect emotional states and evaluate residual emotional processing in patients with disorders of consciousness, combining affective computing with clinical neuroscience. The participants listened to emotional auditory stimuli (positive, neutral, or negative). ML classifiers, such as SVMs, were trained on spectral features and event-related potentials (ERPs) extracted from the EEG, achieving accuracies of up to 86.5% in healthy individuals. However, patients may exhibit different EEG responses, and the approach requires preserved emotional processing. Polo et al. [29] studied sensory modalities such as auditory (music) and visual (film clips), analyzing the emotional responses they triggered based on physiological signals, including autonomic nervous system (ANS) responses via ECG, electrodermal activity (EDA), and respiration, and central nervous system (CNS) responses via EEG signals. Focusing on the dataset for emotion analysis using physiological signals (DEAP) from healthy participants exposed to emotional stimuli, they found that the effectiveness of emotional induction varies by modality. Auditory stimuli elicited greater ANS responses, such as heart rate variability and breathing changes, whereas visual stimuli elicited distinct EEG responses, particularly in the frontal and parietal regions.
Multi-modal approaches can enhance affective computing systems and emotion recognition models used in health technologies. Lian et al. [30] introduced a multi-modal emotion recognition framework that combined EEG signals and facial images (EEG–vision fusion) to enhance emotion classification accuracy using a dual-branch deep learning architecture. Notably, EEG spectrograms subjected to short-time Fourier transform (STFT) and a convolutional neural network (CNN) alone achieved 85% accuracy. Applying the residual network (ResNet50) to facial features increased the accuracy to 90.47%. Their experimental results demonstrated that EEG–vision fusion improves the efficiency of emotion recognition. However, the system relied on patients’ facial expressions. Therefore, EEG-based emotional imagery requires further development in patients with disorders of consciousness.
Brain–computer interfaces can translate internal states into communication, providing support for individuals who cannot physically express themselves (e.g., those with advanced dementia or locked-in syndrome). For example, EEG-based emotion recognition and pain detection provide alternative communication methods for both patients and doctors. EEG can be used to detect discomfort and emotional states in these patients [31]. Pain is typically reflected in EEG signals as reduced alpha and increased gamma activity [32,33]. This approach benefits patients with disorders of consciousness by supplementing traditional assessments to indicate that they may still feel pain [34,35]. An EEG-based affective BCI for emotional state monitoring typically comprises four main components: (1) EEG acquisition, (2) signal preprocessing, (3) feature extraction, and (4) emotion recognition and display, as illustrated in Figure 1. Ongoing research is exploring EEG responses during emotional imagery, with attention to experimental paradigms, feature extraction, and classification methods that could be adapted for future use in palliative care contexts.
For mental affective activity, several studies have examined emotional imagery paradigms based on EEG signals without video induction, as summarized in Table 1. Kothe et al. [36] studied emotion recognition using EEG data collected during self-paced emotional imagery, in which participants internally recalled or imagined emotional experiences without external stimuli. The experiment involved a self-paced emotional imagery task with 15 emotions (8 positive and 7 negative), allowing participants to elicit emotional states voluntarily. They used independent component analysis (ICA) to analyze high-frequency EEG signals for emotion recognition. Additionally, they identified independent modulators (IMs) and valence-specific spatial patterns, in which positive emotions were associated with mid-temporal-cortex IMs and negative emotions with occipital regions. Hsu et al. [37] investigated changes in brain activity during shifts between different imagined emotional states using an open-source 128-channel high-density EEG dataset. They applied a hidden Markov model with multivariate autoregressive parameters (HMM-MAR) to identify common brain states characterized by distinct EEG patterns. Their results demonstrate the potential of subject-independent emotion recognition: (1) some states were linked to specific emotions, whereas others were associated with multiple emotions, and (2) transitions occurred more quickly during fear and anger and more slowly during sadness. By modeling the dynamics of neural activity during imagined emotions, they showed that unsupervised methods can decode emotional states for affective computing. Ji and Dong [38] used deep learning and an EEG dataset to recognize emotions that individuals experience internally. The EEG was recorded while participants imagined an emotional scenario or recalled an experience. They used a hybrid deep learning model that combined convolutional neural networks (CNNs) to capture spatial brain patterns and long short-term memory (LSTM) networks to analyze time-based changes in EEG signals to identify emotions. The results showed that the model achieved a high classification accuracy of 89% while reducing computation time compared with traditional methods. Their findings demonstrated that EEG signals evoked by self-imagined emotions can be recognized using deep learning, supporting the development of passive BCIs for continuous emotion monitoring.
In addition, Proverbio and Pischedda [39] studied brain activity to identify various imagined physiological needs and motivational states, such as feeling cold, hungry, in pain, joyful, or playful. Their ERP analysis showed that N400 in the frontal region increases when imagining positive, appetitive states and intense bodily sensations, and that the ERP pattern shifts based on sensory or emotional context. In a subsequent study [40], pictograms were used to explore imagined emotions, revealing emotion-specific cortical activations: joy involved the orbitofrontal regions, sadness activated the temporal areas, and fear engaged the limbic system, along with increased activity in the frontal areas. These results show that the brain generates different responses when perceiving emotions and sensations versus when imagining or recalling them.
EEG research on self-imagined emotional imagery offers valuable insights into the development of practical affective BCI systems. However, several aspects require further validation, particularly the design of paradigms that accurately evoke emotional imagery and reflect genuine emotional states for reliable emotion recognition. Previous research has mainly relied on auditory cue datasets to prompt participants to imagine emotional scenarios and recall experiences [41,42]. However, presenting an image cue before recall can increase both the intensity and consistency of emotion induction [43,44]. In this study, we introduce an emotional imagination paradigm that uses facial expression images as cues, followed by a brief imagined emotional scenario, to explore feature extraction and classification methods. Experiments were conducted to assess the effectiveness of short-term emotional imagination. The objective of this study was to develop an EEG-based affective BCI system capable of detecting discomfort and emotional states to facilitate emotion monitoring in patients with disorders of consciousness or those receiving palliative care. Experiments were conducted exclusively with healthy participants to validate the proposed emotional imagination paradigm and to explore its potential applications.

2. Materials and Methods

2.1. Self-Imagined Emotion Paradigms

Based on the previous work summarized in Table 1, we designed a self-imagined emotional imagery paradigm with facial expressions of male and female actors serving as cues. The participants were instructed to imagine an emotional scenario for a short duration. The generated emotions included six positive (surprise, excitement, happiness, pleasantness, relaxation, and calmness) and six negative (boredom, depression, sadness, disgust, anger, and fear) categories covering the valence and arousal dimensions, following Russell’s circumplex model [45], as shown in Table 2.
Emotions were labeled according to three classification schemes based on valence and arousal. In a two-class system based on valence, emotions are labeled as high valence (positive, “1”) or low valence (negative, “2”). In a separate two-class system based on arousal, emotions are labeled as high arousal (active, “1”) or low arousal (calm, “2”). In both systems, a neutral emotional state is labeled as “0”. In a four-class system based on the affective circumplex model, emotions are categorized by a combination of valence and arousal: high valence with high arousal (HVHA, “1”), high valence with low arousal (HVLA, “2”), low valence with low arousal (LVLA, “3”), and low valence with high arousal (LVHA, “4”). The neutral state is again labeled as “0”.
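For concreteness, the sketch below shows one minimal way to encode the three labeling schemes in Python. The per-emotion valence and arousal flags themselves come from Table 2 and are passed in as arguments; the helper function name is ours and is not part of the study's software.

```python
def encode_labels(is_neutral: bool, high_valence: bool, high_arousal: bool):
    """Return (valence_label, arousal_label, circumplex_label) as described above.

    Valence:    0 = neutral, 1 = high valence (positive), 2 = low valence (negative)
    Arousal:    0 = neutral, 1 = high arousal (active),   2 = low arousal (calm)
    Circumplex: 0 = neutral, 1 = HVHA, 2 = HVLA, 3 = LVLA, 4 = LVHA
    """
    if is_neutral:
        return 0, 0, 0
    valence = 1 if high_valence else 2
    arousal = 1 if high_arousal else 2
    if high_valence:
        circumplex = 1 if high_arousal else 2
    else:
        circumplex = 4 if high_arousal else 3
    return valence, arousal, circumplex
```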
The proposed paradigm consisted of three levels: sessions, trials, and events, as shown in Figure 2. In the first session, participants started with an instruction phase, followed by a 120 s rest period, during which they focused on a white screen to establish a neutral emotional baseline. Each session included three consecutive trials lasting 420 s each and separated by 120 s rest periods. During each trial, the participants performed a series of randomized emotional imagery tasks involving six positive and six negative emotional states, for a total of 12 events. Each event lasted 35 s, starting with a 15 s focus on a “+” for baseline recording, then presenting a facial expression cue as the target emotion for 5 s, followed immediately by a 15 s fixed-duration emotional imagery period during which participants imagined experiencing the specified emotion. After all events, they returned to a neutral emotional state and rested before the subsequent trial. Each participant completed two sessions, totaling 72 events.

2.2. EEG Acquisition

EEG signals were acquired using a BrainMaster Discovery-24 amplifier (BrainMaster Technologies, Inc., Bedford, OH, USA). The amplifier had 19 channels positioned according to the international 10–20 system (Figure 3a). An EEG cap ensured consistent electrode placement, and signals were sampled at 256 Hz. Preprocessing involved applying a 0.5–40 Hz bandpass filter and a 50 Hz notch filter to remove power-line noise using Discovery software (version 1.6.0).
A total of 20 healthy volunteers (9 males, 11 females; mean age = 22.63 ± 1.25 years) participated in the study. The participants had no history of neurological or psychiatric disorders and were not taking any medications or suffering from conditions that could affect brain function or emotional regulation, such as epilepsy, traumatic brain injury, or anxiety disorders. All procedures involving human participants were approved by the Office of Human Research Ethics Committee of Walailak University (Project No. WU-EC-IN-2-164-67; Approval no. WUEC-24-164-01, 30 April 2024) and were conducted in accordance with the Declaration of Helsinki, the Council for International Organizations of Medical Sciences, and World Health Organization guidelines.
In the experiment, the participants sat approximately 80 cm from the screen in a well-lit room, as shown in Figure 3b. Before recording EEG signals, each participant was fitted with an electrode cap, ensuring the electrodes were accurately placed according to the 10–20 system (Figure 3a). The EEG signals were inspected to ensure good quality. Subsequently, the participants were briefed on the emotional imagery experimental paradigm shown in Figure 2. The randomized emotions are listed in Table 2.

2.3. EEG Signal Preprocessing

The recorded EEG signals were bandpass filtered between 2 and 40 Hz using a finite impulse response (FIR) digital filter implemented in the EEGLAB toolbox (version 2023) [46] for MATLAB (MathWorks, R2021a) to retain neural oscillations relevant to cognitive and emotional processing. This filtering step was intended to remove low-frequency artifacts such as baseline drifts and eye blinks, as well as high-frequency noise typically associated with electromyogram (EMG) activity. In addition, a 50 Hz notch filter was applied to eliminate power-line interference. These preprocessing procedures improved the overall signal quality for subsequent analyses.
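As an illustration of this preprocessing step, the following sketch applies a zero-phase 2–40 Hz FIR band-pass and a 50 Hz notch with SciPy; the filter order and the use of filtfilt are our assumptions rather than the exact EEGLAB settings used in the study.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, iirnotch

FS = 256  # sampling rate in Hz (Section 2.2)

def bandpass_and_notch(eeg: np.ndarray) -> np.ndarray:
    """Filter EEG of shape (n_channels, n_samples): 2-40 Hz band-pass plus 50 Hz notch."""
    # Linear-phase FIR band-pass (513 taps, an illustrative order), applied
    # forward and backward for zero phase distortion.
    taps = firwin(numtaps=513, cutoff=[2.0, 40.0], pass_zero=False, fs=FS)
    filtered = filtfilt(taps, [1.0], eeg, axis=-1)
    # Narrow IIR notch at 50 Hz to suppress residual power-line interference.
    b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)
    return filtfilt(b, a, filtered, axis=-1)
```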
The filtered EEG signals were segmented according to the experimental paradigm to isolate neural activity associated with the neutral and emotional states. For the neutral condition, a 15 s fixation period was used, with the first 5 s removed to avoid perception-related artifacts in both the baseline and emotional periods, leaving 10 s for analysis. For the emotional condition, the 15 s fixed-duration emotional imagery period was divided into overlapping 10 s segments, producing two segments per trial. These segments were then analyzed as described in Section 2.4.
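A minimal segmentation sketch is shown below; it assumes a 50% overlap (0–10 s and 5–15 s windows) for the imagery period, which the description above implies but does not state explicitly.

```python
import numpy as np

FS = 256        # sampling rate in Hz
SEG = 10 * FS   # 10 s analysis window in samples

def neutral_segment(fixation: np.ndarray) -> np.ndarray:
    """Drop the first 5 s of the 15 s fixation period and keep the remaining 10 s."""
    return fixation[:, 5 * FS: 5 * FS + SEG]

def imagery_segments(imagery: np.ndarray) -> list:
    """Split the 15 s imagery period into two overlapping 10 s windows."""
    return [imagery[:, :SEG], imagery[:, 5 * FS: 5 * FS + SEG]]
```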

2.4. Feature Extraction

2.4.1. Feature Parameters

The proposed system focuses on extracting and selecting discriminative features to decode self-imagined emotional states from EEG signals. Table 3 summarizes the proposed EEG analysis and classification algorithms, comprising five key components. The framework starts with a frequency analysis to capture relevant spectral information, then extracts feature parameters that describe the EEG signals. Specific EEG channels are selected to focus on the brain regions involved in emotional processing. Frontal regions are chosen for their role in emotional evaluation and affective regulation, while temporal regions are selected for their involvement in auditory and multi-modal emotional processing, supporting the use of frontal–temporal channel combinations. Consistent with prior studies [47], we also examined other regional configurations to evaluate their contributions to emotion recognition. The extracted features are classified using appropriate machine learning models. The system’s performance was verified using metrics such as accuracy, precision, and recall.
The frequency analysis entailed calculating the FFT [48] and power spectral density (PSD) using the Welch method [49], as detailed in Equations (1)–(3). These techniques were employed to extract the spectral features from the EEG signals, which were subsequently computed as follows:
X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j\frac{2\pi}{N}kn}    (1)

PSD_w(f_k) = \frac{1}{LNU} \sum_{l=1}^{L} \left| \sum_{n=0}^{N-1} w[n] \, x_l[n] \, e^{-j\frac{2\pi}{N}kn} \right|^2    (2)

U = \frac{1}{N} \sum_{n=0}^{N-1} w^{2}[n]    (3)

The FFT is defined in Equation (1), where x[n] is the sampled EEG signal in the time domain; N is the number of samples in the FFT; k is the frequency bin index; and X[k] is the complex-valued Fourier coefficient at frequency bin k. The PSD estimate obtained with the Welch method is given in Equation (2), where PSD_w(f_k) is the estimated PSD of the signal; x_l[n] denotes segment l of the L overlapping signal segments; w[n] is the window function; and U is the window normalization factor defined in Equation (3).
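In practice, Equations (2) and (3) can be computed with a standard Welch routine; the sketch below uses scipy.signal.welch, and the 2 s Hann window with 50% overlap is an illustrative choice rather than the parameter set reported in the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz

def segment_psd(segment: np.ndarray, win_sec: float = 2.0):
    """Welch PSD estimate for one 10 s EEG segment of shape (n_channels, n_samples).

    Returns (freqs, psd), where psd has shape (n_channels, n_freqs).
    """
    nperseg = int(win_sec * FS)
    return welch(segment, fs=FS, window="hann",
                 nperseg=nperseg, noverlap=nperseg // 2, axis=-1)
```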
From the spectral analyses, the feature parameters were derived using the absolute power, relative power, and band ratio methods, as indicated in Table 3. Four common EEG frequency bands were examined: theta (θ, 4–8 Hz), alpha (α, 8–12 Hz), beta (β, 13–25 Hz), and gamma (γ, 25–45 Hz). These ranges follow definitions commonly adopted in EEG-based emotion recognition studies, in which theta and alpha activity are associated with drowsiness, relaxation, and attentional states, beta corresponds to “fast” activity linked to cognitive and attentional processes, and gamma reflects higher-order integration and emotional processing [50,51,52,53]. Both the absolute power (ab(i)) and relative power (re(i)) were calculated using Equations (4) and (5), respectively. In addition, the alpha/beta (α/β) ratio was computed as a discriminative feature for differentiating between various emotional and cognitive states, as expressed in Equation (6).
ab(i) = \sum_{f_k = f_1}^{f_2} PSD_w(f_k)    (4)

re(i) = \frac{\sum_{f_k = f_1}^{f_2} PSD_w(f_k)}{\sum_{f_k = f_{min}}^{f_{max}} PSD_w(f_k)}    (5)

\frac{\alpha}{\beta} = \frac{ab(\alpha)}{ab(\beta)}    (6)

where PSD_w(f_k) represents the power spectral density of the EEG signal at frequency f_k, obtained using the FFT and Welch methods; the index i denotes one of the EEG bands (θ, α, β, or γ); f_1 and f_2 are the lower and upper limits of the target band (e.g., alpha = 8–12 Hz); and f_{min} and f_{max} define the total frequency range considered, 1–45 Hz.
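The band-power features of Equations (4)–(6) then reduce to sums over PSD bins; the helper below is a minimal sketch under that assumption, using the band edges defined above.

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 25), "gamma": (25, 45)}

def band_power_features(freqs: np.ndarray, psd: np.ndarray, total=(1, 45)):
    """Absolute power, relative power, and alpha/beta ratio per channel.

    freqs: 1-D frequency axis from the Welch estimate.
    psd:   array of shape (n_channels, n_freqs).
    """
    total_mask = (freqs >= total[0]) & (freqs <= total[1])
    total_power = psd[:, total_mask].sum(axis=-1)
    ab, re = {}, {}
    for name, (f1, f2) in BANDS.items():
        mask = (freqs >= f1) & (freqs <= f2)
        ab[name] = psd[:, mask].sum(axis=-1)   # Equation (4)
        re[name] = ab[name] / total_power      # Equation (5)
    ratio = ab["alpha"] / ab["beta"]           # Equation (6)
    return ab, re, ratio
```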

2.4.2. EEG Channel Selections

Channel placements followed the international 10–20 system (Figure 3a) and were organized into three channel selection patterns. Whole brain (A) includes all 19 channels (A1: Fp1, F3, F7, Fz, Fp2, F4, F8, T3, T5, T4, T6, C3, C4, Cz, P3, P4, Pz, O1, and O2). Area-based (B) groups channels by brain region to allow for the examination of localized contributions: left hemisphere (B1: Fp1, F3, C3, P3, O1, F7, T3, and T5), right hemisphere (B2: Fp2, F4, C4, P4, O2, F8, T4, and T6), frontal (B3: Fp1, F3, F7, Fz, Fp2, F4, and F8), temporal (B4: T3, T5, T4, and T6), central (B5: C3, C4, and Cz), parietal (B6: P3, P4, and Pz), and occipital (B7: O1 and O2). Area combination (C) evaluates potential interactions by merging subsets: frontal–temporal (C1: B3 + B4), frontal–central (C2: B3 + B5), frontal–parietal (C3: B3 + B6), frontal–occipital (C4: B3 + B7), frontal–central–temporal (C5: B3 + B5 + B4), frontal–central–parietal (C6: B3 + B5 + B6), frontal–temporal–parietal (C7: B3 + B4 + B6), and temporal–central–parietal (C8: B4 + B5 + B6), as summarized in Table 3.
Because of physiological and experimental differences, all extracted features were standardized using z-score normalization [54], adjusting each feature to have a mean of zero and a standard deviation of one. This method allows for comparisons across features measured on different scales, prevents features with larger ranges from biasing the learning process, and reduces inter-subject variability by aligning EEG amplitude differences. The normalized features were used for classification.
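A simple z-score normalization sketch is given below; fitting the mean and standard deviation on the training split only is our assumption, made to keep the normalization consistent with the leakage-free evaluation described later.

```python
import numpy as np

def zscore_normalize(train: np.ndarray, test: np.ndarray):
    """Standardize each feature to zero mean and unit variance.

    train, test: arrays of shape (n_samples, n_features).
    Statistics are estimated from the training data and reused for the test data.
    """
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + 1e-12  # guard against zero-variance features
    return (train - mu) / sigma, (test - mu) / sigma
```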

2.5. Machine Learning Classification

This study systematically explored the decoding of self-imagined emotions from EEG signals by designing experiments to elicit distinct imagined emotions and extract diverse features to train ML models. The objectives were to identify the most informative features and brain regions for emotion decoding and to benchmark different strategies toward more ecologically valid, interpretable, and practical affective BCI systems.
For each proposed feature and channel selection pattern, the dataset used consistent trial labels based on valence, arousal, and a four-class affective scheme, yielding 2160 samples split into a 720-sample neutral set and a 1440-sample emotional set. The data were organized according to three classification schemes:
Two-class valence system: High valence (positive) and low valence (negative), including the neutral class, with 720 samples per class.
Two-class arousal system: High arousal (active) and low arousal (calm), including the neutral class, with 720 samples per class.
Four-class affective circumplex system: High valence–high arousal (HVHA), high valence–low arousal (HVLA), low valence–low arousal (LVLA), and low valence–high arousal (LVHA), with 360 trials per emotion class and 720 trials for the neutral class.
To address class imbalance, the Synthetic Minority Oversampling Technique (SMOTE) was employed. In the revised analysis, SMOTE was applied exclusively to the training data after splitting the dataset into 70% training and 30% testing subsets, ensuring that no synthetic samples contaminated the test set.
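The split-then-oversample order described above can be expressed as in the sketch below, which uses scikit-learn together with the imbalanced-learn implementation of SMOTE; the random seed is illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # from the imbalanced-learn package

def split_and_balance(X: np.ndarray, y: np.ndarray, seed: int = 42):
    """70/30 stratified split, then SMOTE applied to the training portion only."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    X_tr_bal, y_tr_bal = SMOTE(random_state=seed).fit_resample(X_tr, y_tr)
    return X_tr_bal, y_tr_bal, X_te, y_te
```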
The four common EEG classifiers included an SVM with a radial basis function (RBF) kernel, KNN, ANN with a multilayer perceptron (MLP), and Naive Bayes (NB) [55,56]. The SVM with an RBF kernel can handle high-dimensional data and capture nonlinear boundaries. The KNN predicts based on the nearest neighbors. The ANN with an MLP is used for modeling complex nonlinear relationships but requires careful tuning and sufficient data. Naive Bayes is a quick and simple probabilistic classifier that assumes feature independence, making it effective for high-dimensional or categorical data. These classifiers were implemented in Python (version 3.13.0) using Scikit-learn, and the hyperparameters were optimized via Grid Search. The final hyperparameters of each model are summarized in Table 4. The dataset was partitioned into training (70%) and test (30%) subsets using Scikit-learn’s train_test_split function, with parameters selected to preserve class proportions and ensure reproducibility. Robust generalization was achieved through 10-fold stratified cross-validation, ensuring each sample was tested once and performance metrics were averaged across folds to reduce partitioning bias.
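A condensed sketch of this tuning and evaluation loop is shown below; the candidate hyperparameter grids are illustrative placeholders, with the tuned values actually used in the study reported in Table 4.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Illustrative grids only; see Table 4 for the final hyperparameters.
MODELS = {
    "SVM": (SVC(kernel="rbf"), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}),
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7, 9]}),
    "ANN": (MLPClassifier(max_iter=2000), {"hidden_layer_sizes": [(50,), (100,)]}),
    "NB":  (GaussianNB(), {}),
}

def tune_and_evaluate(X_train, y_train, X_test, y_test):
    """Grid search with 10-fold stratified CV on the training set, then test-set accuracy."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
    results = {}
    for name, (estimator, grid) in MODELS.items():
        search = GridSearchCV(estimator, grid, cv=cv, scoring="accuracy", n_jobs=-1)
        search.fit(X_train, y_train)
        results[name] = search.best_estimator_.score(X_test, y_test)
    return results
```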

2.6. Performance Evaluation

The performance of the proposed algorithms for emotion recognition was evaluated using accuracy, precision, recall, and F1-score, as defined in Equations (7)–(10), respectively. These metrics were computed on the test set and averaged across folds to obtain a reliable measure of classification performance.
Accuracy\ (AC) = \frac{TP + TN}{TP + TN + FP + FN}    (7)

Precision\ (PS) = \frac{TP}{TP + FP}    (8)

Recall\ (RC) = \frac{TP}{TP + FN}    (9)

F\text{-}measure\ (F1) = \frac{2 \times Precision \times Recall}{Precision + Recall}    (10)
True positive (TP) is the number of correctly predicted positive samples (expected outcome detected); true negative (TN) is the number of correctly predicted negative samples (non-target outcome rejected); false positive (FP) is the number of negative samples incorrectly predicted as positive (non-target misclassified as target); and false negative (FN) is the number of positive samples incorrectly predicted as negative (target misclassified as non-target).
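These metrics map directly onto scikit-learn's implementations, as in the short sketch below; macro averaging for the multiclass case is our assumption, since the averaging strategy is not stated.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1-score (Equations (7)-(10))."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "recall":    recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1":        f1_score(y_true, y_pred, average="macro", zero_division=0),
    }
```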

3. Results

3.1. Verification of EEG Channel Selection Pattern

To evaluate the effectiveness of channel selection for self-imagined emotion classification, three EEG channel selection patterns were analyzed, as summarized in Table 3. The results are presented in Table 5, showing the accuracy rates for the FFT and PSD methods. The findings demonstrate that EEG classification accuracy varies depending on the channel selection strategies and feature extraction methods used. The FFT method achieved a higher average classification accuracy than the PSD method across all channel groups.
To ensure statistical rigor, 95% confidence intervals (C.I.s) were computed for all EEG channel selection patterns to quantify the reliability of model performance and inter-subject variability, following standard BCI evaluation practices [57,58]. Narrower C.I.s indicate more consistent classification performance across participants, whereas wider C.I.s reflect higher individual variability. This additional statistical reporting enables more precise comparisons between FFT- and PSD-based approaches regarding stability and reproducibility.
All channels (A1) and the selected channels with the area-combination patterns (C1–C8) exhibited notably higher accuracies. Specifically, A1 with the FFT method achieved the highest overall performance, with a maximum classification accuracy of 0.86 and a mean accuracy of 0.66 ± 0.14 (SD), corresponding to a 95% CI = [0.60, 0.72]. In contrast, the same configuration using PSD achieved 0.59 ± 0.10 (SD) with a 95% CI = [0.55, 0.63]. Channels in the area-based patterns (B5–B7) showed lower efficiency and wider confidence intervals, while the area-combination patterns (C1–C8) provided more balanced and stable results.

3.2. Model Classification Evaluation

According to the results in Table 5, all channels (A1) were selected to evaluate the performance of the four machine-learning models (SVM, KNN, ANN, and NB) using the proposed feature parameters and applying FFT and PSD. The results are presented in Table 6 and Table 7.
Table 6 shows the average classification accuracies of four machine learning models across two-class valence, two-class arousal, and four-class valence–arousal tasks, using EEG features derived from FFT across all channels. For the two-class valence classification, KNN consistently achieved the highest accuracy across all feature types (AC = 0.75–0.86), with the strongest performance observed for absolute beta (β) features (AC = 0.86). SVM and ANN achieved moderate performance (AC = 0.64–0.76), while NB performed worst. For the two-class arousal classification, a similar trend was observed. KNN again outperformed other models (AC = 0.74–0.86), with absolute beta (β) and Gamma (γ) features yielding the best results (AC = 0.82–0.86). SVM and ANN achieved moderate performance (AC = 0.69–0.74), and NB remained the weakest (AC = 0.41–0.47). For the four-class valence–arousal classification, the performance of all models decreased, as expected for the more complex task. KNN still provided the best overall results (AC = 0.57–0.76), followed by SVM and ANN (AC = 0.43–0.55). NB consistently showed the lowest performance.
Table 7 shows the average classification accuracies of the machine learning models for the two-class valence, two-class arousal, and four-class valence–arousal tasks, using EEG features derived from PSD across all channels. For two-class valence, ANN ranked highest (AC = 0.66–0.73), especially with gamma features, while SVM and KNN showed moderate results (AC = 0.61–0.68) and NB the lowest (AC = 0.55–0.66). In arousal classification, KNN slightly outperformed the other models (AC = 0.63–0.71), with gamma features performing best, followed by SVM and ANN (AC = 0.60–0.66), while NB remained lowest (AC = 0.40–0.45). For the four-class task, accuracy dropped for all models, with KNN leading (AC = 0.46–0.58), SVM moderate (AC = 0.36–0.48), ANN lower (AC = 0.32–0.42), and NB weakest (AC = 0.25–0.29). Overall, ANN excelled in valence classification and KNN in the arousal and four-class tasks, while SVM performed moderately and NB underperformed.
The comparison of results in Table 6 and Table 7 shows that features obtained through FFT achieve higher emotion classification accuracy than those derived from PSD across all classifiers, especially when using the KNN model with the absolute beta power (ab(β)) feature. However, despite this better performance, the FFT method was found to be vulnerable to information leakage, possibly due to poor separation of information across signal segments during processing or improper cross-validation, which can lead to artificially high accuracy but low reliability [59]. In contrast, PSD analysis using an overlapping-window method reduces the risk of information leakage [60], providing more reliable results, though with slightly lower accuracy than the FFT method. Therefore, for subsequent evaluations in this study, PSD-derived features were chosen because they offer greater methodological reliability and are more suitable for assessing machine learning models in EEG-based emotion classification.
Table 8, Table 9 and Table 10 summarize the performance of machine learning models for valence, arousal, and combined valence–arousal classification using absolute beta (β) power features from all EEG channels. For the binary valence classification shown in Table 8, KNN achieved the best overall performance, especially for neutral states (F1 = 0.99), while ANN showed balanced results across classes (F1 = 0.61–0.66). SVM favored neutral detection (F1 = 0.90), but was weaker for the negative and positive classes, and NB performed the worst across all categories. In the arousal classification, as shown in Table 9, KNN again outperformed other models (neutral F1 = 0.85), with ANN providing moderate results (F1 = 0.55–0.56). SVM exhibited bias toward neutral states, and NB remained the weakest. For the four-class valence–arousal classification in Table 10, which was evaluated on a synthetically balanced dataset, KNN and ANN demonstrated relatively stronger and more balanced performance (F1 = 0.32–0.48), while SVM and NB performed poorly, especially for high-/low-valence–arousal combinations.

3.3. Model Generalization Across Subjects (LOSO Validation)

Figure 4 presents the average classification accuracies of four machine learning models—SVM, KNN, ANN, and NB (as shown in Appendix A)—evaluated using leave-one-subject-out (LOSO) cross-validation for three emotional dimensions: valence, arousal, and valence–arousal, using absolute beta power features extracted from the power spectral density (PSD). The results, based on data from Table A1 in Appendix A, show that KNN achieves the highest overall accuracies (up to 0.59 for arousal), while NB yields the lowest, especially in the valence–arousal task. Models using PSD features also exhibit relatively narrow 95% confidence intervals, indicating consistent performance across subjects. These findings suggest that PSD features, when combined with KNN or SVM, yield robust, reliable performance in subject-independent EEG-based emotion classification.
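The LOSO protocol can be reproduced with scikit-learn's LeaveOneGroupOut splitter, as sketched below; the KNN settings are illustrative, and subject_ids is assumed to hold one subject identifier per feature vector.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier

def loso_accuracies(X: np.ndarray, y: np.ndarray, subject_ids: np.ndarray) -> np.ndarray:
    """Train on 19 subjects and test on the held-out subject, repeated for every subject."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return np.array(scores)
```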

4. Discussion

This study presents a preliminary investigation into a novel “facial cue + self-imagination” paradigm for affective BCI applications in healthy participants. Unlike traditional visual stimuli (e.g., videos and International Affective Picture System (IAPS) images), this method encourages active emotional recall, reducing habituation and potentially offering more ecologically valid responses. Compared with passive paradigms such as extended video viewing or IAPS tasks, which can lead to emotional desensitization over time or yield varied responses due to stimulus complexity, the proposed paradigm is helpful when passive stimuli might not maintain attention or emotional clarity. It may also be appropriate for people with a limited capacity to process complex stimuli.
First, we explored frequency analysis and channel selection. Although features extracted using the FFT method showed slightly higher classification accuracy (Table 5), they were not pursued further owing to concerns about information leakage. Therefore, PSD-based features were selected for subsequent analysis, as they yielded more reliable, though slightly lower, accuracy. Using all EEG channels enabled the models to capture broad spatial patterns, though their ability to classify borderline or mixed emotional states remains limited, likely due to overlapping spectral activity across regions. The strong performance of the all-channel (A1) selection may be explained by its ability to capture distributed neural dynamics associated with emotion processing across multiple cortical regions. By encompassing signals from the entire scalp, the A1 selection can better capture the complex, interconnected nature of affective neural activity than region-based selections, which may omit relevant emotional information.
Second, we verified the performance of various feature parameters. The absolute beta power feature was found to be a strong predictor of emotional state, especially when combined with PSD and overlapping windowing. Neurophysiologically, beta oscillations (13–30 Hz) are linked to active cognitive engagement, attentional control, and emotional regulation, which are likely involved during self-paced emotional imagery. The dominance of beta activity in our findings supports its role in integrating internally generated emotional cues with cognitive control mechanisms, consistent with prior research associating beta power with emotion processing during self-reflective or imagery-based tasks. Consistent with previous findings [61,62] linking beta activity to internally directed cognition and self-referential emotional processing, our results highlight its importance in imagined emotional tasks.
Interestingly, KNN consistently outperformed more complex classifiers across all tasks (Table 6 and Table 7), which may be attributed to the relatively low dimensionality and structured nature of the PSD-derived beta features. KNN can leverage these features effectively without overfitting. The simplicity of KNN also allows it to generalize well on small datasets, as it does not depend on iterative learning or parameter tuning. In contrast, complex models such as ANN and SVM may be more sensitive to noise and feature overlap in small, noisy EEG datasets.
Third, model generalization using LOSO cross-validation emphasized the subject-independent potential of PSD-derived features. Again, KNN was effective in detecting both neutral and non-neutral states, while ANN offered stability across multiple classes. Nonetheless, these findings should be interpreted cautiously, given the limited sample size and inherent variability in self-imagined emotional imagery.
Overall, the findings suggest that short-duration self-imagined emotional imagery guided by facial expression cues can reliably induce emotional states in a healthy population, and that PSD-based EEG features, particularly absolute beta power, are promising for affective state decoding. However, these conclusions remain exploratory and preliminary. This study lays foundational groundwork for future BCI research.
Several limitations can be identified, including a small dataset and inter-subject variability, which may limit generalizability. EEG signals are highly susceptible to noise and physiological artifacts, and relying solely on power spectral features may not fully capture the complexity of the neural dynamics underlying emotions. To address these limitations, future research will aim to (1) incorporate more advanced and diverse EEG features (e.g., wavelet transform and entropy measures), (2) explore deep learning models for automated feature extraction and improved generalization, (3) include multi-modal data (e.g., physiological and behavioral signals), and (4) extend testing to clinical populations. These steps are essential for translating this early-stage research into practical, affective BCI applications.

5. Conclusions

This study presents a preliminary investigation into a self-imagined emotional imagery paradigm, combining facial expression cues with brief emotional imagery of six positive and six negative emotions for binary and multiclass emotion classification using valence and arousal dimensions. Conducted with healthy participants, this proof-of-concept study evaluated the feasibility of using EEG-based PSD features, particularly absolute beta power, for emotion recognition. Although FFT-based features showed slightly higher accuracy, they were excluded from later analyses due to the risk of information leakage, making PSD the primary method owing to its methodological reliability. Among the classifiers, KNN consistently achieved the highest performance, especially for neutral states, while ANN provided balanced accuracy across emotion categories. LOSO validation demonstrated the potential of PSD features for subject-independent emotion classification. The methods proposed offer a framework for EEG-based affective BCI systems that could monitor emotional states and discomfort. However, these applications remain hypothetical and have been tested only in healthy subjects. Future research should expand the dataset, explore advanced feature extraction techniques, and develop hybrid models, highlighting these as directions for further validation in real-world applications.

Author Contributions

Conceptualization, C.B. and Y.P.; methodology, C.B., N.S. and Y.P.; software, C.B., B.S. and Y.P.; validation, B.S. and Y.P.; formal analysis, C.B. and Y.P.; investigation, C.B., B.S. and Y.P.; resources, Y.P.; data curation, C.B. and Y.P.; writing—original draft preparation, C.B., N.S., B.S. and Y.P.; writing—review and editing, C.B., N.S., B.S. and Y.P.; visualization, C.B. and Y.P.; supervision, Y.P.; project administration, Y.P.; funding acquisition, C.B. and Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Walailak University Graduate Research Fund (Contract No. CGS-RF-2025/01).

Institutional Review Board Statement

The study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Office of the Human Research Ethics Committee of Walailak University (Project No. WU-EC-IN-2-164-67; approval no. WUEC-24-164-01, 30 April 2024).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACAccuracy
AIArtificial intelligence
ANNArtificial neural network
ANSAutonomic nervous system
BCIBrain–computer interface
CNNConvolutional neural network
CNSCentral nervous system
DEAPDataset for emotion analysis using physiological signals
ECGElectrocardiogram
EDAElectrodermal activity
EEGElectroencephalogram
ERP Event-related potential
FNFalse negative
FPFalse positive
FFTFast Fourier transform
GSRGalvanic skin response
HMM-MARHidden Markov model with multivariate autoregressive parameters
HCIHuman–computer interaction
IAPSInternational Affective Picture System
ICAIndependent component analysis
IMIndependent modulator
KNNK-nearest neighbor
LOSOLeave-one-subject-out
MLMachine learning
MLPMultilayer perceptron
NBNaive Bayes
PSDPower spectral density
RFRandom forest
STFTShort-time Fourier transform
SMOTESynthetic minority oversampling technique
SVMSupport vector machine
TNTrue negative
TPTrue positive

Appendix A

Subject-Wise Classification Performance Under LOSO Validation

Table A1. Comparative effectiveness of four machine learning models for leave-one-subject-out cross-validation using absolute beta power features obtained from PSD from all channels.
Subject | Valence (SVM / KNN / ANN / NB) | Arousal (SVM / KNN / ANN / NB) | Valence and Arousal (SVM / KNN / ANN / NB)
1 | 0.50 / 0.51 / 0.47 / 0.42 | 0.53 / 0.57 / 0.52 / 0.43 | 0.36 / 0.39 / 0.32 / 0.27
2 | 0.49 / 0.49 / 0.50 / 0.41 | 0.49 / 0.48 / 0.49 / 0.43 | 0.33 / 0.31 / 0.28 / 0.26
3 | 0.50 / 0.49 / 0.45 / 0.25 | 0.50 / 0.55 / 0.55 / 0.31 | 0.35 / 0.41 / 0.38 / 0.17
4 | 0.53 / 0.55 / 0.48 / 0.41 | 0.56 / 0.60 / 0.56 / 0.40 | 0.37 / 0.48 / 0.29 / 0.24
5 | 0.61 / 0.63 / 0.56 / 0.44 | 0.59 / 0.61 / 0.57 / 0.44 | 0.40 / 0.48 / 0.39 / 0.29
6 | 0.53 / 0.57 / 0.52 / 0.32 | 0.53 / 0.64 / 0.54 / 0.32 | 0.35 / 0.48 / 0.31 / 0.22
7 | 0.53 / 0.59 / 0.54 / 0.40 | 0.55 / 0.62 / 0.54 / 0.37 | 0.36 / 0.49 / 0.27 / 0.18
8 | 0.66 / 0.65 / 0.55 / 0.41 | 0.65 / 0.69 / 0.55 / 0.46 | 0.52 / 0.58 / 0.39 / 0.29
9 | 0.62 / 0.59 / 0.56 / 0.40 | 0.59 / 0.62 / 0.59 / 0.44 | 0.45 / 0.52 / 0.39 / 0.26
10 | 0.54 / 0.65 / 0.50 / 0.27 | 0.55 / 0.62 / 0.51 / 0.30 | 0.34 / 0.56 / 0.23 / 0.18
11 | 0.61 / 0.63 / 0.57 / 0.39 | 0.62 / 0.65 / 0.58 / 0.45 | 0.46 / 0.53 / 0.36 / 0.25
12 | 0.50 / 0.57 / 0.46 / 0.42 | 0.55 / 0.62 / 0.56 / 0.46 | 0.36 / 0.50 / 0.30 / 0.26
13 | 0.54 / 0.60 / 0.49 / 0.28 | 0.50 / 0.57 / 0.50 / 0.32 | 0.43 / 0.52 / 0.37 / 0.21
14 | 0.54 / 0.62 / 0.53 / 0.39 | 0.59 / 0.66 / 0.58 / 0.44 | 0.39 / 0.57 / 0.31 / 0.28
15 | 0.54 / 0.62 / 0.51 / 0.41 | 0.59 / 0.59 / 0.54 / 0.41 | 0.41 / 0.48 / 0.38 / 0.22
16 | 0.49 / 0.54 / 0.48 / 0.29 | 0.50 / 0.59 / 0.46 / 0.32 | 0.39 / 0.50 / 0.35 / 0.25
17 | 0.57 / 0.59 / 0.47 / 0.42 | 0.57 / 0.66 / 0.48 / 0.44 | 0.39 / 0.50 / 0.33 / 0.24
18 | 0.54 / 0.52 / 0.46 / 0.34 | 0.50 / 0.50 / 0.47 / 0.41 | 0.35 / 0.38 / 0.30 / 0.21
19 | 0.50 / 0.47 / 0.52 / 0.38 | 0.48 / 0.49 / 0.47 / 0.38 | 0.33 / 0.34 / 0.31 / 0.25
20 | 0.47 / 0.49 / 0.42 / 0.33 | 0.48 / 0.47 / 0.44 / 0.33 | 0.32 / 0.35 / 0.31 / 0.22
Average | 0.54 / 0.57 / 0.50 / 0.37 | 0.55 / 0.59 / 0.52 / 0.39 | 0.38 / 0.47 / 0.33 / 0.24

References

  1. Hamzah, H.A.; Abdalla, K.K. EEG-based emotion recognition systems; Comprehensive study. Heliyon 2024, 10, e31485. [Google Scholar] [CrossRef]
  2. Torres, E.P.; Torres, E.A.; Hernández-Álvarez, M.; Yoo, S.G. EEG-Based BCI Emotion Recognition: A Survey. Sensors 2020, 20, 5083. [Google Scholar] [CrossRef]
  3. Bhise, P.R.; Hulkarni, S.B.; Aldhaheri, T.A. Brain Computer Interface based EEG for Emotion Recognition System: A Systematic Review. In Proceedings of the 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), Bangalore, India, 5–7 March 2020; pp. 327–334. [Google Scholar] [CrossRef]
  4. Samal, P.; Hashmi, M.F. Role of machine learning and deep learning techniques in EEG-based BCI emotion recognition system: A review. Artif. Intell. Rev. 2024, 57, 50. [Google Scholar] [CrossRef]
  5. Perur, S.D.; Kenchannava, H.H. Enhancing Mental Well-Being Through OpenBCI: An Intelligent Approach to Stress Measurement. In Proceedings of the Third International Conference on Cognitive and Intelligent Computing, ICCIC 2023, Hyderabad, India, 8–9 December 2023; Cognitive Science and Technology. Springer: Singapore, 2025; Volume 1. [Google Scholar] [CrossRef]
  6. Ferrada, F.; Camarinha-Matos, L.M. Emotions in Human-AI Collaboration. Navigating Unpredictability: Collaborative Networks in Non-linear Worlds. In Proceedings of the PRO-VE 2024, Albi, France, 28–29 October 2024; IFIP Advances in Information and Communication Technology. Springer: Cham, Switzerland, 2024; Volume 726, pp. 101–117. [Google Scholar] [CrossRef]
  7. Kolomaznik, M.; Petrik, V.; Slama, M.; Jurik, V. The role of socio-emotional attributes in enhancing human-AI collaboration. Front. Psychol. 2024, 15, 1369957. [Google Scholar] [CrossRef]
  8. Pervez, F.; Shoukat, M.; Usama, M.; Sandhu, M.; Latif, S.; Qadir, J. Affective Computing and the Road to an Emotionally Intelligent Metaverse. IEEE Open J. Comput. Soc. 2024, 5, 195–214. [Google Scholar] [CrossRef]
  9. Zhu, H.Y.; Hieu, N.Q.; Hoang, D.T.; Nguyen, D.N.; Lin, C.-T. A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey. IEEE Commun. Surv. Tutorials 2024, 26, 2120–2145. [Google Scholar] [CrossRef]
  10. Lee, C.-H.; Huang, P.-H.; Lee, T.-H.; Chen, P.-H. Affective Communication: Designing Semantic Communication for Affective Computing. In Proceedings of the 33rd Wireless and Optical Communications Conference (WOCC), Hsinchu, Taiwan, 25–26 October 2024; pp. 35–39. [Google Scholar] [CrossRef]
  11. Faria, D.R.; Weinberg, A.I.; Ayrosa, P.P. Multi-modal Affective Communication Analysis: Fusing Speech Emotion and Text Sentiment Using Machine Learning. Appl. Sci. 2024, 14, 6631. [Google Scholar] [CrossRef]
  12. Emanuel, A.; Eldar, E. Emotions as computations. Neurosci. Biobehav. Rev. 2023, 144, 104977. [Google Scholar] [CrossRef]
  13. Alhalaseh, R.; Alasasfeh, S. Machine-Learning-Based Emotion Recognition System Using EEG Signals. Computers 2020, 9, 95. [Google Scholar] [CrossRef]
  14. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over Fifty Years with Brain-Computer Interfaces—A Review. Brain Sci. 2021, 11, 43. [Google Scholar] [CrossRef]
  15. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef]
  16. Bano, K.S.; Bhuyan, P.; Ray, A. EEG-Based Brain Computer Interface for Emotion Recognition. In Proceedings of the 5th International Conference on Computational Intelligence and Networks (CINE), Bhubaneswar, India, 1–3 December 2022; pp. 1–6. [Google Scholar] [CrossRef]
  17. Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Comp. Intell. Neurosci. 2020, 2020, 8875426. [Google Scholar] [CrossRef] [PubMed]
  18. Li, J. Optimal Modeling of College Students’ Mental Health Based on Brain-Computer Interface and Imaging Sensing. In Proceedings of the 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 6–8 May 2021; pp. 772–775. [Google Scholar] [CrossRef]
  19. Beauchemin, N.; Charland, P.; Karran, A.; Boasen, J.; Tadson, B.; Sénécal, S.; Léger, P.M. Enhancing learning experiences: EEG-based passive BCI system adapts learning speed to cognitive load in real-time, with motivation as catalyst. Front. Hum. Neurosci. 2024, 18, 1416683. [Google Scholar] [CrossRef] [PubMed]
  20. Semertzidis, N.; Vranic-Peters, M.; Andres, J.; Dwivedi, B.; Kulwe, Y.C.; Zambetta, F.; Mueller, F.F. Neo-Noumena: Augmenting Emotion Communication. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–13. [Google Scholar] [CrossRef]
  21. Papanastasiou, G.; Drigas, A.; Skianis, C.; Lytras, M. Brain computer interface based applications for training and rehabilitation of students with neurodevelopmental disorder. A literature review. Heliyon 2020, 6, e04250. [Google Scholar] [CrossRef] [PubMed]
  22. Alimardani, M.; Hiraki, K. Passive Brain-Computer Interfaces for Enhanced Human-Robot Interaction. Front. Robot. AI 2020, 7, 125. [Google Scholar] [CrossRef]
  23. Wu, M.; Teng, W.; Fan, C.; Pei, S.; Li, P.; Lv, Z. An Investigation of Olfactory-Enhanced Video on EEG-Based Emotion Recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1602–1613. [Google Scholar] [CrossRef]
  24. Wang, Q.; Wang, M.; Yang, Y.; Zhang, X. Multi-modal emotion recognition using EEG and speech signals. Comput. Biol. Med. 2022, 149, 105907. [Google Scholar] [CrossRef]
  25. Zhou, T.H.; Liang, W.; Liu, H.; Wang, L.; Ryu, K.H.; Nam, K.W. EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare. Int. J. Environ. Res. Public. Health 2022, 20, 378. [Google Scholar] [CrossRef]
  26. Zaidi, S.R.; Khan, N.A.; Hasan, M.A. Bridging Neuroscience and Machine Learning: A Gender-Based Electroencephalogram Framework for Guilt Emotion Identification. Sensors 2025, 25, 1222. [Google Scholar] [CrossRef]
  27. Er, M.B.; Çiğ, H.; Aydilek, İ.B. A new approach to recognition of human emotions using brain signals and music stimuli. Appl. Acoust. 2021, 175, 107840. [Google Scholar] [CrossRef]
  28. Huang, H.; Xie, Q.; Pan, J.; He, Y.; Wen, Z.; Yu, R.; Li, Y. An EEG-Based Brain Computer Interface for Emotion Recognition and Its Application in Patients with Disorder of Consciousness. IEEE Trans. Affect. Comput. 2021, 12, 832–842. [Google Scholar] [CrossRef]
  29. Polo, E.M.; Farabbi, A.; Mollura, M.; Paglialonga, A.; Mainardi, L.; Barbieri, R. Comparative Assessment of Physiological Responses to Emotional Elicitation by Auditory and Visual Stimuli. IEEE J. Transl. Eng. Health Med. 2024, 12, 171–181. [Google Scholar] [CrossRef]
  30. Lian, Y.; Zhu, M.; Sun, Z.; Liu, J.; Hou, Y. Emotion recognition based on EEG signals and face images. Biomed. Signal Process Control 2025, 103, 107462. [Google Scholar] [CrossRef]
  31. Mutawa, A.M.; Hassouneh, A. Multi-modal Real-Time Patient Emotion Recognition System using Facial Expressions and Brain EEG Signals based on Machine Learning and Log-Sync Methods. Biomed. Signal Process Control 2024, 91, 105942. [Google Scholar] [CrossRef]
  32. Chouchou, F.; Perchet, C.; Garcia-Larrea, L. EEG changes reflecting pain: Is alpha suppression better than gamma enhancement? Neurophysiol. Clin. 2021, 51, 209–218. [Google Scholar] [CrossRef] [PubMed]
  33. Mathew, J.; Perez, T.M.; Adhia, D.B.; Ridder, D.D.; Main, R. Is There a Difference in EEG Characteristics in Acute, Chronic, and Experimentally Induced Musculoskeletal Pain States? A Systematic Review. Clin. EEG Neurosci. 2022, 55, 101–120. [Google Scholar] [CrossRef]
  34. Feng, L.; Li, H.; Cui, H.; Xie, X.; Xu, S.; Hu, Y. Low Back Pain Assessment Based on Alpha Oscillation Changes in Spontaneous Electroencephalogram (EEG). Neural Plast. 2021, 2021, 8537437. [Google Scholar] [CrossRef]
  35. Wang, L.; Xiao, Y.; Urman, R.D.; Lin, Y. Cold pressor pain assessment based on EEG power spectrum. SN Appl. Sci. 2020, 2, 1976. [Google Scholar] [CrossRef]
  36. Kothe, C.A.; Makeig, S.; Onton, J.A. Emotion Recognition from EEG during Self-Paced Emotional Imagery. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013, Geneva, Switzerland, 2–5 September 2013; pp. 855–858. [Google Scholar] [CrossRef]
  37. Hsu, S.H.; Lin, Y.; Onton, J.; Jung, T.P.; Makeig, S. Unsupervised learning of brain state dynamics during emotion imagination using high-density EEG. NeuroImage 2022, 249, 118873. [Google Scholar] [CrossRef]
  38. Ji, Y.; Dong, S.Y. Deep learning-based self-induced emotion recognition using EEG. Front. Neurosci. 2022, 16, 985709. [Google Scholar] [CrossRef]
  39. Proverbio, A.M.; Pischedda, F. Measuring brain potentials of imagination linked to physiological needs and motivational states. Front. Hum. Neurosci. 2023, 17, 1146789. [Google Scholar] [CrossRef]
  40. Proverbio, A.M.; Cesati, F. Neural correlates of recalled sadness, joy, and fear states: A source reconstruction EEG study. Front. Psychiatry 2024, 15, 1357770. [Google Scholar] [CrossRef]
  41. Proverbio, A.M.; Tacchini, M.; Jiang, K. What do you have in mind? ERP markers of visual and auditory imagery. Brain Cogn. 2023, 166, 105954. [Google Scholar] [CrossRef]
  42. Piţur, S.; Tufar, I.; Miu, A.C. Auditory imagery and poetry-elicited emotions: A study on the hard of hearing. Front. Psychol. 2025, 16, 1509793. [Google Scholar] [CrossRef]
  43. Kilmarx, J.; Tashev, I.; Millán, J.D.R.; Sulzer, J.; Lewis-Peacock, J. Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain–Computer Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 2209–2219. [Google Scholar] [CrossRef]
  44. Bainbridge, W.A.; Hall, E.H.; Baker, C.I. Distinct Representational Structure and Localization, for Visual Encoding and Recall during Visual Imagery. Cereb. Cortex 2021, 31, 1898–1913. [Google Scholar] [CrossRef] [PubMed]
  45. Russell, J.A. A circumplex model of affect. J. Pers. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  46. Delorme, A.; Makeig, S. EEGLAB: An open-source toolbox for analysis of EEG dynamics. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  47. Qu, G.; Wen, S.; Bi, J.; Liu, J.; Wu, Q.; Han, L. EEG Emotion Recognition of Different Brain Regions Based on 2DCNN-DGRU. In Proceedings of the 2023 IEEE 13th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Qinhuangdao, China, 10–14 July 2023; pp. 692–697. [Google Scholar] [CrossRef]
  48. Duhamel, P.; Vetterli, M. Fast Fourier transforms: A tutorial review and A state of the art. Signal Process 1990, 19, 259–299. [Google Scholar] [CrossRef]
  49. Youngworth, R.N.; Gallagher, B.B.; Stamper, B.L. An overview of Power Spectral Density (PSD) calculations. In Optical Measurement Systems for Industrial Inspection IV; SPIE: Bellingham, WA, USA, 2005; Volume 5869, p. 58690U. [Google Scholar] [CrossRef]
  50. Gannouni, S.; Aledaily, A.; Belwafi, K.; Aboalsamh, H. Emotion detection using electroencephalography signals and a zero-time windowing-based epoch estimation and relevant electrode identification. Sci. Rep. 2021, 11, 7071. [Google Scholar] [CrossRef]
  51. Zheng, W.-L.; Lu, B.-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  52. Mishra, S.; Srinivasan, N.; Tiwary, U.S. Dynamic Functional Connectivity of Emotion Processing in Beta Band with Naturalistic Emotion Stimuli. Brain Sci. 2022, 12, 1106. [Google Scholar] [CrossRef]
  53. Aljribi, K.A. A comparative analysis of frequency bands in EEG-based emotion recognition system. In Proceedings of the 7th International Conference on Engineering & MIS (ICEMIS 2021), Almaty, Kazakhstan, 11–13 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–7. [Google Scholar] [CrossRef]
  54. Ali, P.J.M.; Faraj, R.H. Data Normalization and Standardization: A Technical Report. Mach. Learn. Rep. 2014, 1, 1–6. [Google Scholar] [CrossRef]
  55. Aggarwal, S.; Chugh, N. Review of Machine Learning Techniques for EEG Based Brain Computer Interface. Arch. Comp. Methods Eng. 2022, 29, 3001–3020. [Google Scholar] [CrossRef]
  56. Rasheed, S. A Review of the Role of Machine Learning Techniques towards Brain—Computer Interface Applications. Mach. Learn. Knowl. Extr. 2021, 3, 835–862. [Google Scholar] [CrossRef]
  57. Brookshire, G.; Kasper, J.; Blauch, N.M.; Wu, Y.C.; Glatt, R.; Merrill, D.A.; Gerrol, S.; Yoder, K.J.; Quirk, C.; Lucero, C. Data Leakage in Deep Learning Studies of Translational EEG. Front. Neurosci. 2024, 18, 1373515. [Google Scholar] [CrossRef] [PubMed]
  58. Ou, Y.; Sun, S.; Gan, H.; Zhou, R.; Yang, Z. An improved self-supervised learning for EEG classification. Math. Biosci. Eng. 2022, 19, 6907–6922. [Google Scholar] [CrossRef]
  59. Sujbert, L.; Orosz, G. FFT-Based Spectrum Analysis in the Case of Data Loss. IEEE Trans. Instrum. Meas. 2016, 65, 968–976. [Google Scholar] [CrossRef]
  60. Abdulaal, M.J.; Casson, A.J.; Gaydecki, P. Critical Analysis of Cross-Validation Methods and Their Impact on Neural Networks Performance Inflation in Electroencephalography Analysis. IEEE Can. J. Electr. Comput. Eng. 2021, 44, 75–82. [Google Scholar] [CrossRef]
  61. Knyazev, G.G. EEG Correlates of Self-Referential Processing. Front. Hum. Neurosci. 2013, 7, 264. [Google Scholar] [CrossRef]
  62. Wei, X.; Zhang, J.; Zhang, J.; Li, Z.; Li, Q.; Wu, J.; Yang, J.; Zhang, Z. Investigating the Human Brain’s Integration of Internal and External Reference Frames: The Role of the Alpha and Beta Bands in a Modified Temporal Order Judgment Task. Hum. Brain Mapp. 2025, 46, e70196. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Conceptual diagram of an EEG-based affective BCI for emotional state monitoring.
Figure 2. Experimental paradigm for emotional imagery.
Figure 3. (a) Electrode configuration for 19 channels following the 10–20 system; (b) example experimental scenario.
Figure 4. Leave-one-subject-out (LOSO) cross-validation accuracy of the four machine learning models using absolute beta power features obtained from the PSD across all channels, with 95% confidence intervals (CIs).
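As a concrete illustration of the LOSO protocol behind Figure 4, the sketch below shows one way to run a subject-independent evaluation in Python with scikit-learn. The feature matrix X, label vector y, subject-ID array, KNN pipeline, and normal-approximation confidence interval are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal LOSO sketch (assumed setup): X holds trial-wise absolute beta power
# features, y the emotion labels, and subjects the per-trial participant IDs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

def loso_accuracy(X, y, subjects):
    """Per-subject test accuracy of a KNN pipeline under LOSO cross-validation."""
    model = make_pipeline(
        StandardScaler(),
        KNeighborsClassifier(n_neighbors=3, weights="distance", metric="euclidean"),
    )
    scores = cross_val_score(model, X, y, groups=subjects,
                             cv=LeaveOneGroupOut(), scoring="accuracy")
    # Normal-approximation 95% CI over held-out subjects (an assumption; the
    # paper does not state how its CIs were computed).
    mean, sd = scores.mean(), scores.std(ddof=1)
    half_width = 1.96 * sd / np.sqrt(len(scores))
    return scores, (mean - half_width, mean + half_width)
```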
Table 1. Research on emotional imagery based on EEG signals.

Kothe et al. [36]
  Task: Closed-eye imagination of an emotional scenario
  Trigger: Voice-guided
  Algorithm: ICA + machine learning (ML)
  Outputs:
  • ICA-based EEG features can detect emotions from self-paced imagery.
  • Demonstrated self-paced emotional paradigms for affective BCI applications.

Hsu et al. [37]
  Task: Closed-eye imagination of an emotional scenario or recall of an experience
  Trigger: Voice-guided
  Algorithm: Unsupervised learning on high-density EEG
  Outputs:
  • Identified brain states associated with various types of emotional imagery.
  • Showed that unsupervised methods effectively capture spontaneous emotional responses.

Ji and Dong [38]
  Task: Closed-eye imagination of an emotional scenario or recall of an experience
  Trigger: Voice-guided
  Algorithm: Deep learning
  Outputs:
  • Emotion classification accuracy of up to 89% using a CNN.
  • The SVM achieved approximately 75–80%.
  • The CNN achieved the highest accuracy in both binary and multiclass classification.

Proverbio and Pischedda [39]
  Task: Mental imagery of internal bodily sensations and motivational needs
  Trigger: Pictograms
  Algorithm: ERP analysis (P300, N400 components)
  Outputs:
  • Significant differences in ERP amplitudes and latencies (p < 0.01) between imagined motivational states.
  • P300 and N400 components distinguished internal states.

Proverbio and Cesati [40]
  Task: Silent recall of emotional states
  Trigger: Pictograms
  Algorithm: Source reconstruction (sLORETA)
  Outputs:
  • Observed emotion-specific cortical activity: joy, orbitofrontal cortex; sadness, temporal cortex; fear, limbic system and frontal cortex.
  • Provided neurophysiological evidence supporting emotion recall paradigms.
Table 2. Proposed emotions based on valence and arousal levels for two- and four-class systems.

Emotion | Valence Level | Arousal Level
Surprise | High (Positive) | High (Active)
Excitement | High (Positive) | High (Active)
Happiness | High (Positive) | High (Active)
Pleasantness | High (Positive) | Low (Calm)
Relaxation | High (Positive) | Low (Calm)
Calmness | High (Positive) | Low (Calm)
Boredom | Low (Negative) | Low (Calm)
Depression | Low (Negative) | Low (Calm)
Sadness | Low (Negative) | Low (Calm)
Disgust | Low (Negative) | High (Active)
Anger | Low (Negative) | High (Active)
Fear | Low (Negative) | High (Active)
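The valence–arousal assignments in Table 2 map directly onto class labels; the snippet below is a minimal sketch of that mapping (the dictionary and function names are illustrative, not taken from the paper).

```python
# Map the 12 imagined emotions of Table 2 to two-class valence, two-class
# arousal, and four-class valence–arousal (HVHA/HVLA/LVHA/LVLA) labels.
VALENCE = {  # 1 = high (positive), 0 = low (negative)
    "surprise": 1, "excitement": 1, "happiness": 1,
    "pleasantness": 1, "relaxation": 1, "calmness": 1,
    "boredom": 0, "depression": 0, "sadness": 0,
    "disgust": 0, "anger": 0, "fear": 0,
}
AROUSAL = {  # 1 = high (active), 0 = low (calm)
    "surprise": 1, "excitement": 1, "happiness": 1,
    "pleasantness": 0, "relaxation": 0, "calmness": 0,
    "boredom": 0, "depression": 0, "sadness": 0,
    "disgust": 1, "anger": 1, "fear": 1,
}

def four_class_label(emotion: str) -> str:
    """Combine the two binary labels into a four-class valence-arousal label."""
    return ("HV" if VALENCE[emotion] else "LV") + ("HA" if AROUSAL[emotion] else "LA")

# Example: four_class_label("sadness") returns "LVLA".
```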
Table 3. Proposed methods for emotion recognition.

Frequency Analysis:
  • Fast Fourier Transform (FFT)
  • Power Spectral Density (PSD)

Feature Parameters:
  • Absolute power: theta (θ), alpha (α), beta (β), gamma (γ)
  • Relative power: reTheta (rθ), reAlpha (rα), reBeta (rβ), reGamma (rγ)
  • Band ratio: alpha/beta (α/β)

EEG Channel Selections:
  • A: Whole brain
    - A1 (all channels): Fp1, F3, F7, Fz, Fp2, F4, F8, T3, T5, T4, T6, C3, C4, Cz, P3, P4, Pz, O1, O2
  • B: Area-based
    - B1 (left hemisphere): Fp1, F3, C3, P3, O1, F7, T3, T5
    - B2 (right hemisphere): Fp2, F4, C4, P4, O2, F8, T4, T6
    - B3 (frontal): Fp1, F3, F7, Fz, Fp2, F4, F8
    - B4 (temporal): T3, T5, T4, T6
    - B5 (central): C3, C4, Cz
    - B6 (parietal): P3, P4, Pz
    - B7 (occipital): O1, O2
  • C: Area combinations
    - C1: B3 + B4
    - C2: B3 + B5
    - C3: B3 + B6
    - C4: B3 + B7
    - C5: B3 + B5 + B4
    - C6: B3 + B5 + B6
    - C7: B3 + B4 + B6
    - C8: B4 + B5 + B6

Model Classifiers:
  • Naïve Bayes (NB)
  • Support vector machine (SVM)
  • K-nearest neighbor (KNN)
  • Artificial neural network (ANN)

Evaluation Metrics:
  • Accuracy
  • Precision
  • Recall
  • F1-score
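To make the feature definitions in Table 3 concrete, the sketch below computes absolute band power, relative band power, and the alpha/beta ratio from a Welch PSD for one EEG epoch. The band edges, Welch segment length, and broadband normalization range are assumed values, not parameters reported in the paper.

```python
# Band-power feature sketch for one epoch of shape (n_channels, n_samples).
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epoch, fs):
    freqs, psd = welch(epoch, fs=fs, nperseg=int(2 * fs), axis=-1)
    broad = (freqs >= 4) & (freqs <= 45)                 # normalization range (assumed)
    total = np.trapz(psd[:, broad], freqs[broad], axis=-1)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute = np.trapz(psd[:, mask], freqs[mask], axis=-1)  # one value per channel
        feats[f"ab_{name}"] = absolute                            # absolute power
        feats[f"re_{name}"] = absolute / total                    # relative power
    feats["alpha_beta_ratio"] = feats["ab_alpha"] / feats["ab_beta"]
    return feats

# The channel selection patterns (A1, B1–B7, C1–C8) reduce to indexing the
# channel axis of `epoch` with the electrode subsets listed in Table 3.
```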
Table 4. Hyperparameters of the machine learning models used for EEG-based emotion classification.

Model | Hyperparameter Settings
SVM | C = 10, γ = 0.1, kernel = ‘rbf’
KNN | n_neighbors = 3, weights = ‘distance’, metric = ‘euclidean’
ANN | activation = ‘relu’, alpha = 0.001, hidden_layer_sizes = (50, 50), learning_rate_init = 0.01
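The settings in Table 4 follow scikit-learn parameter names; a minimal sketch of instantiating the four classifiers with those values is shown below. Using GaussianNB for the Naïve Bayes model is an assumption, since Table 4 lists no NB hyperparameters.

```python
# Classifier definitions mirroring Table 4 (scikit-learn assumed).
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

MODELS = {
    "SVM": SVC(C=10, gamma=0.1, kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=3, weights="distance", metric="euclidean"),
    "ANN": MLPClassifier(activation="relu", alpha=0.001,
                         hidden_layer_sizes=(50, 50), learning_rate_init=0.01),
    "NB": GaussianNB(),  # assumed variant; not specified in Table 4
}
```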
Table 5. Average classification accuracy across EEG channel selection patterns using FFT- and PSD-based feature extraction methods, comparing channel selection groups A1–B7 and channel combination groups C1–C8.
EEG Channel Selection Pattern | FFT: Max | FFT: Mean ± SD | FFT: 95% CI | PSD: Max | PSD: Mean ± SD | PSD: 95% CI
A1 | 0.86 | 0.66 ± 0.14 | [0.60–0.72] | 0.73 | 0.59 ± 0.10 | [0.55–0.63]
B1 | 0.79 | 0.59 ± 0.11 | [0.54–0.64] | 0.68 | 0.53 ± 0.09 | [0.49–0.57]
B2 | 0.79 | 0.59 ± 0.11 | [0.54–0.64] | 0.69 | 0.53 ± 0.09 | [0.49–0.57]
B3 | 0.77 | 0.55 ± 0.11 | [0.50–0.60] | 0.65 | 0.51 ± 0.09 | [0.47–0.55]
B4 | 0.72 | 0.51 ± 0.10 | [0.47–0.55] | 0.63 | 0.47 ± 0.08 | [0.43–0.51]
B5 | 0.62 | 0.47 ± 0.08 | [0.44–0.50] | 0.58 | 0.45 ± 0.07 | [0.42–0.48]
B6 | 0.66 | 0.50 ± 0.08 | [0.47–0.53] | 0.61 | 0.47 ± 0.08 | [0.44–0.50]
B7 | 0.59 | 0.45 ± 0.06 | [0.43–0.47] | 0.54 | 0.44 ± 0.06 | [0.42–0.46]
C1 | 0.82 | 0.62 ± 0.13 | [0.56–0.68] | 0.71 | 0.55 ± 0.10 | [0.51–0.59]
C2 | 0.81 | 0.59 ± 0.12 | [0.54–0.64] | 0.68 | 0.53 ± 0.10 | [0.49–0.57]
C3 | 0.79 | 0.59 ± 0.12 | [0.54–0.64] | 0.69 | 0.54 ± 0.09 | [0.50–0.58]
C4 | 0.81 | 0.59 ± 0.12 | [0.54–0.64] | 0.69 | 0.54 ± 0.09 | [0.50–0.58]
C5 | 0.81 | 0.64 ± 0.13 | [0.58–0.70] | 0.72 | 0.57 ± 0.10 | [0.53–0.61]
C6 | 0.82 | 0.62 ± 0.12 | [0.57–0.67] | 0.70 | 0.56 ± 0.09 | [0.52–0.60]
C7 | 0.80 | 0.64 ± 0.13 | [0.58–0.70] | 0.71 | 0.57 ± 0.10 | [0.53–0.61]
C8 | 0.79 | 0.60 ± 0.12 | [0.55–0.65] | 0.69 | 0.54 ± 0.09 | [0.50–0.58]
Table 6. Average classification accuracy rate of machine learning models across two-class valence, two-class arousal, and four-class valence–arousal tasks for emotion classification using EEG features obtained from FFT across all channels.

Features | Two-Class Valence (SVM / KNN / ANN / NB) | Two-Class Arousal (SVM / KNN / ANN / NB) | Four-Class Valence–Arousal (SVM / KNN / ANN / NB)
ab(θ) | 0.65 / 0.75 / 0.64 / 0.43 | 0.63 / 0.74 / 0.63 / 0.42 | 0.45 / 0.57 / 0.43 / 0.27
ab(α) | 0.70 / 0.80 / 0.70 / 0.42 | 0.73 / 0.80 / 0.69 / 0.46 | 0.51 / 0.65 / 0.45 / 0.28
ab(β) | 0.72 / 0.86 / 0.70 / 0.44 | 0.74 / 0.86 / 0.74 / 0.44 | 0.55 / 0.76 / 0.50 / 0.28
ab(γ) | 0.72 / 0.80 / 0.69 / 0.44 | 0.73 / 0.82 / 0.71 / 0.41 | 0.54 / 0.69 / 0.47 / 0.27
re(θ) | 0.72 / 0.79 / 0.72 / 0.42 | 0.72 / 0.78 / 0.70 / 0.43 | 0.52 / 0.65 / 0.46 / 0.28
re(α) | 0.76 / 0.78 / 0.70 / 0.43 | 0.74 / 0.79 / 0.70 / 0.47 | 0.53 / 0.64 / 0.46 / 0.29
re(β) | 0.72 / 0.80 / 0.69 / 0.42 | 0.73 / 0.80 / 0.69 / 0.43 | 0.53 / 0.67 / 0.45 / 0.27
re(γ) | 0.74 / 0.79 / 0.69 / 0.45 | 0.74 / 0.79 / 0.71 / 0.45 | 0.54 / 0.67 / 0.46 / 0.31
α/β | 0.75 / 0.80 / 0.71 / 0.45 | 0.74 / 0.80 / 0.69 / 0.45 | 0.53 / 0.67 / 0.43 / 0.31
Average | 0.72 / 0.80 / 0.69 / 0.43 | 0.72 / 0.80 / 0.70 / 0.44 | 0.52 / 0.66 / 0.46 / 0.28
Table 7. Average classification accuracy rate of machine learning models across two-class valence, two-class arousal, and four-class valence–arousal tasks for emotion classification using EEG features obtained from PSD across all channels.

Features | Two-Class Valence (SVM / KNN / ANN / NB) | Two-Class Arousal (SVM / KNN / ANN / NB) | Four-Class Valence–Arousal (SVM / KNN / ANN / NB)
ab(θ) | 0.56 / 0.63 / 0.67 / 0.55 | 0.53 / 0.63 / 0.50 / 0.40 | 0.36 / 0.46 / 0.32 / 0.27
ab(α) | 0.60 / 0.61 / 0.65 / 0.59 | 0.57 / 0.63 / 0.58 / 0.40 | 0.40 / 0.47 / 0.33 / 0.25
ab(β) | 0.67 / 0.66 / 0.72 / 0.64 | 0.66 / 0.70 / 0.64 / 0.45 | 0.46 / 0.54 / 0.41 / 0.28
ab(γ) | 0.65 / 0.68 / 0.73 / 0.66 | 0.65 / 0.71 / 0.65 / 0.40 | 0.48 / 0.58 / 0.42 / 0.26
re(θ) | 0.66 / 0.62 / 0.69 / 0.61 | 0.63 / 0.66 / 0.61 / 0.42 | 0.45 / 0.48 / 0.35 / 0.26
re(α) | 0.68 / 0.61 / 0.66 / 0.62 | 0.64 / 0.63 / 0.60 / 0.43 | 0.42 / 0.46 / 0.34 / 0.28
re(β) | 0.67 / 0.63 / 0.68 / 0.66 | 0.65 / 0.65 / 0.62 / 0.41 | 0.45 / 0.48 / 0.38 / 0.29
re(γ) | 0.66 / 0.63 / 0.71 / 0.64 | 0.64 / 0.68 / 0.62 / 0.40 | 0.45 / 0.53 / 0.36 / 0.25
α/β | 0.63 / 0.61 / 0.66 / 0.61 | 0.63 / 0.65 / 0.59 / 0.40 | 0.43 / 0.49 / 0.36 / 0.26
Average | 0.64 / 0.63 / 0.69 / 0.62 | 0.62 / 0.66 / 0.60 / 0.41 | 0.43 / 0.50 / 0.36 / 0.27
Table 8. Comparative effectiveness of the four machine learning models for two-class valence-based classification (including neutral) using absolute beta power features from all channels. PS = precision, RC = recall, F1 = F1-score.

Class | SVM (PS / RC / F1) | KNN (PS / RC / F1) | ANN (PS / RC / F1) | NB (PS / RC / F1)
Neutral | 0.80 / 1.00 / 0.90 | 0.92 / 1.00 / 0.99 | 0.84 / 0.99 / 0.91 | 0.47 / 0.62 / 0.54
Negative | 0.68 / 0.56 / 0.62 | 0.85 / 0.83 / 0.84 | 0.58 / 0.50 / 0.54 | 0.41 / 0.33 / 0.37
Positive | 0.69 / 0.60 / 0.64 | 0.90 / 0.76 / 0.82 | 0.62 / 0.60 / 0.61 | 0.40 / 0.35 / 0.38
Table 9. Comparative effectiveness of the four machine learning models for two-class arousal-based classification (including neutral) using absolute beta power features from all channels.

Class | SVM (PS / RC / F1) | KNN (PS / RC / F1) | ANN (PS / RC / F1) | NB (PS / RC / F1)
Neutral | 0.74 / 0.97 / 0.84 | 0.72 / 1.00 / 0.85 | 0.74 / 0.95 / 0.82 | 0.48 / 0.53 / 0.51
Active | 0.59 / 0.49 / 0.54 | 0.66 / 0.58 / 0.61 | 0.56 / 0.42 / 0.48 | 0.37 / 0.31 / 0.34
Calm | 0.61 / 0.52 / 0.56 | 0.71 / 0.48 / 0.57 | 0.55 / 0.57 / 0.56 | 0.44 / 0.47 / 0.45
Table 10. Comparative effectiveness of the four machine learning models for four-class valence–arousal classification (including neutral) using absolute beta power features from all channels on a synthetically balanced dataset.

Class | SVM (PS / RC / F1) | KNN (PS / RC / F1) | ANN (PS / RC / F1) | NB (PS / RC / F1)
Neutral | 0.55 / 0.81 / 0.65 | 0.58 / 0.92 / 0.71 | 0.55 / 0.76 / 0.64 | 0.30 / 0.42 / 0.35
HVHA | 0.33 / 0.34 / 0.33 | 0.44 / 0.51 / 0.48 | 0.32 / 0.36 / 0.34 | 0.25 / 0.19 / 0.21
HVLA | 0.35 / 0.28 / 0.31 | 0.49 / 0.41 / 0.45 | 0.37 / 0.28 / 0.32 | 0.20 / 0.11 / 0.14
LVHA | 0.37 / 0.28 / 0.32 | 0.45 / 0.38 / 0.41 | 0.33 / 0.23 / 0.27 | 0.22 / 0.26 / 0.24
LVLA | 0.38 / 0.36 / 0.37 | 0.46 / 0.27 / 0.34 | 0.37 / 0.39 / 0.38 | 0.28 / 0.31 / 0.29
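The per-class PS, RC, and F1 values in Tables 8–10 are standard precision, recall, and F1 scores; a minimal sketch for computing them from pooled predictions is given below (variable names and the use of scikit-learn are assumptions, not the authors' stated tooling).

```python
# Per-class precision/recall/F1 sketch, given true labels and model predictions.
from sklearn.metrics import precision_recall_fscore_support

def per_class_metrics(y_true, y_pred, labels):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=labels, zero_division=0)
    return {lab: {"PS": round(p, 2), "RC": round(r, 2), "F1": round(f, 2)}
            for lab, p, r, f in zip(labels, precision, recall, f1)}

# Example: per_class_metrics(y_true, y_pred, ["Neutral", "HVHA", "HVLA", "LVHA", "LVLA"])
```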