FREQ-EER: A Novel Frequency-Driven Ensemble Framework for Emotion Recognition and Classification of EEG Signals
Abstract
1. Introduction
- The proposed approach integrates a frequency manipulation-based data augmentation strategy with an ensemble of lightweight machine learning classifiers to improve performance, particularly on small or imbalanced datasets.
- In the initial phase of the framework, five distinct augmentation techniques are explored, designed to synthetically expand the training data and improve classification accuracy without relying on computationally expensive deep learning architectures.
- To further improve model efficiency and reduce dependence on numerous EEG channels, a systematic analysis of specific EEG frequency bands (theta, alpha, beta, and gamma) was performed across diverse brain regions. This analysis enabled identification of the most informative features pertinent to emotional states.
- An ensemble of traditional yet competent machine learning algorithms, namely random forest (RF) [9], CatBoost (CB) [10], and k-nearest neighbors (KNN) [9,11,12], is employed for classification. This ensemble approach not only achieves state-of-the-art accuracy but also retains interpretability and low computational complexity, making it a suitable alternative to deep learning models.
- One of the important contributions of this paper is the region- and band-specific study, which provides deeper insights into the neural basis of emotions and determines how different brain regions and EEG frequency bands relate to different emotional states.
2. Literature Review
3. Materials and Methods
3.1. Proposed Framework
3.2. Dataset Description
3.3. Band Filtration
3.4. Brain Region Segregation and Selection
3.5. Data Augmentation Techniques Used and Their Mathematical Formulations
3.6. Feature Extraction: Band Power Estimation Using Welch's Method
3.7. Classification Using the Ensemble Model
- Comprehensive band–region-specific analysis: while most existing works have treated all EEG channels uniformly or have not explicitly analyzed the influence of EEG frequency bands across brain regions, our model decomposes EEG signals into four frequency bands (theta, alpha, beta, and gamma) and four brain regions (frontal, central, parietal, and occipital).
- To build a generalized model, a range of augmentation techniques, namely Gaussian noise addition, time flipping, time warping, data slicing and shuffling, random channel dropping, and frequency manipulation, were used. These techniques increased the variability of the dataset while preserving the characteristics of the signal (a minimal sketch of three of these transforms is given after this list). Band power features [36,37] were extracted using Welch's method [33,34], a spectral estimation technique that computes the power spectral density [35] of EEG signals.
- Feature extraction was performed on each of the distinguished frequency bands, followed by a detailed investigation into how oscillatory activity in specific brain regions correlated with distinct emotional states. For instance, the analysis revealed that gamma activity correlated strongly with high-arousal positive emotions. Classification performance for each band–region combination was evaluated across the four emotional classes: HAHV, HALV, LALV, and LAHV.
- The three classifiers—RF, CB, and KNN—were trained independently on the same set of features and emotion labels. Table 3 highlights the different parameters considered in our work for various classifiers employed in the ensemble model.
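To make the augmentation step concrete, the following minimal Python sketch illustrates three of the transforms listed above (Gaussian noise addition, time flipping, and frequency manipulation). The noise level and spectral scaling range here are illustrative assumptions, not the paper's exact Equations (1), (2), and (6).

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(x, sigma=0.01):
    """Assumed form of Equation (1): x'(t) = x(t) + n(t), n ~ N(0, sigma^2)."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def time_flip(x):
    """Assumed form of Equation (2): reverse the signal along the time axis."""
    return x[..., ::-1]

def frequency_manipulation(x, low=0.95, high=1.05):
    """Assumed form of Equation (6): randomly rescale the spectrum,
    then invert back to the time domain."""
    spectrum = np.fft.rfft(x, axis=-1)
    scale = rng.uniform(low, high, size=spectrum.shape)
    return np.fft.irfft(spectrum * scale, n=x.shape[-1], axis=-1)

# Example: augment one 32-channel, 8064-sample trial three different ways.
trial = rng.standard_normal((32, 8064))
augmented = [add_gaussian_noise(trial), time_flip(trial), frequency_manipulation(trial)]
```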
Algorithm 1: FREQ-EER Ensemble Emotion Classification
Input: Raw EEG data (DEAP dataset), original emotion labels (valence, arousal)
Output: Predicted emotion classes (HAHV, HALV, LAHV, LALV)
Step 1: Begin
Step 2: Load EEG data and emotion labels
    EEG_data ← load_DEAP_dataset()
    Labels ← extract_valence_arousal_labels()
Step 3: Preprocess EEG data
    Apply bandpass filtering (theta, alpha, beta, gamma)
    Downsample EEG_data to reduce complexity
Step 4: Apply data augmentation
    For each signal in EEG_data:
        Add Gaussian noise using Equation (1)
        Perform time flipping using Equation (2)
        Perform data slicing and shuffling using Equation (3)
        Apply time warping using Equation (4)
        Perform channel dropping using Equation (5)
        Perform frequency manipulation using Equation (6)
Step 5: Segment EEG channels by brain region: frontal, central, parietal, occipital
Step 6: Extract band power features
    For each frequency band (theta, alpha, beta, gamma):
        Compute power using Welch's method, Equations (7) and (8)
Step 7: Relabel emotion labels
    For each sample in Labels:
        If valence ≥ 5 and arousal ≥ 5 → HAHV
        If valence < 5 and arousal ≥ 5 → HALV
        If valence ≥ 5 and arousal < 5 → LAHV
        If valence < 5 and arousal < 5 → LALV
Step 8: Normalize features
    features ← normalize(features)
Step 9: Split data (subject-independent)
    (X_train, y_train), (X_test, y_test) ← train_test_split(features, relabeled_labels)
Step 10: Train base classifiers
    model_RF ← train random forest on (X_train, y_train)
    model_CB ← train CatBoost on (X_train, y_train)
    model_KNN ← train KNN on (X_train, y_train)
Step 11: Perform predictions
    probs_RF ← model_RF.predict_proba(X_test)
    probs_CB ← model_CB.predict_proba(X_test)
    probs_KNN ← model_KNN.predict_proba(X_test)
Step 12: Soft-voting ensemble
    avg_probs ← (probs_RF + probs_CB + probs_KNN) / 3
    y_pred ← argmax(avg_probs)
Step 13: Evaluate model
    Calculate accuracy and ROC AUC
Step 14: End
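As an illustration of Steps 6 and 7 of Algorithm 1, the sketch below computes Welch band-power features with `scipy.signal.welch` and maps valence/arousal ratings to the four quadrant labels. The band edges and `nperseg` value are conventional choices assumed here; the paper's exact estimator is defined by Equations (7) and (8).

```python
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate of the preprocessed DEAP signals (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(x, band, fs=FS):
    """Band power per channel: integrate the Welch PSD over one band (Step 6)."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs, axis=-1)  # 0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].sum(axis=-1) * (freqs[1] - freqs[0])  # rectangle rule

def relabel(valence, arousal):
    """Quadrant label from 1-9 valence/arousal ratings, threshold 5 (Step 7)."""
    if arousal >= 5:
        return "HAHV" if valence >= 5 else "HALV"
    return "LAHV" if valence >= 5 else "LALV"

# Example: band-power features for one trial (channels x samples).
trial = np.random.randn(32, 8064)
features = np.concatenate([band_power(trial, b) for b in BANDS.values()])
print(features.shape, relabel(6.2, 3.1))  # (128,) LAHV
```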
4. Results and Discussion
4.1. Gaussian Noise Addition and Flip
4.2. Data Slicing and Shuffling
4.3. Time Warping
4.4. Random Channel Dropping
4.5. Frequency Manipulation
4.6. Combined Augmentation Techniques
5. Model Evaluation on the SEED Dataset
5.1. Gaussian Noise Addition and Flip
5.2. Data Slicing and Shuffling
5.3. Time Warping
5.4. Random Channel Dropping
5.5. Frequency Manipulation
6. Model Evaluation on the GAMEEMO Dataset
6.1. Gaussian Noise Addition
6.2. Data Slicing and Shuffling
6.3. Time Warping
6.4. Random Channel Dropping
6.5. Frequency Manipulation
7. Comparison with Existing Work
8. Conclusions and Future Work
- In the future, more sophisticated augmentation techniques can be developed to improve data diversity and enhance classification performance.
- The proposed FREQ-EER can be applied to additional EEG datasets, such as DECAF and AMIGOS, to confirm the efficiency of the suggested model and assess generalization across various participants, sensors, and recording conditions.
- Attention mechanisms may be incorporated into machine learning models in the future to enhance interpretability and highlight the most pertinent brain regions and frequency bands for emotion recognition.
- The FREQ-EER framework may be applied to real-world domains such as mental health monitoring, gaming, and adaptive interfaces to test its practical applicability.
- In the future, efforts can be made to optimize and deploy lightweight models on edge devices like mobile phones or wearables for real-time, on-device emotion recognition without cloud dependency.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Alarcao, S.M.; Fonseca, M.J. Emotions recognition using EEG signals: A survey. IEEE Trans. Affect. Comput. 2017, 10, 374–393. [Google Scholar] [CrossRef]
- Oude Bos, D. EEG-based emotion recognition. Influ. Vis. Audit. Stimuli 2006, 56, 1–17. [Google Scholar]
- Liu, Y.; Sourina, O.; Nguyen, M.K. Real-Time EEG-Based Emotion Recognition and Its Applications. In Transactions on Computational Science XII. Lecture Notes in Computer Science; Tan, C.J.K., Sourin, A., Sourina, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6670, pp. 256–277. [Google Scholar] [CrossRef]
- Alhalaseh, R.; Alasasfeh, S. Machine-learning-based emotion recognition system using EEG signals. Computers 2020, 9, 95. [Google Scholar] [CrossRef]
- Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
- Wang, F.; Zhong, S.H.; Peng, J.; Jiang, J.; Liu, Y. Data augmentation for EEG-based emotion recognition with deep convolutional neural networks. In MultiMedia Modeling; Springer International Publishing: Bangkok, Thailand, 2018; Part II 24; pp. 82–93. [Google Scholar] [CrossRef]
- Acharjee, R.; Ahamed, S.R. EEG Data Augmentation Using Generative Adversarial Network for Improved Emotion Recognition. In Pattern Recognition; Springer Nature: Cham, Switzerland, 2025; pp. 238–252. [Google Scholar] [CrossRef]
- Luo, Y.; Zhu, L.Z.; Wan, Z.Y.; Lu, B.L. Data augmentation for enhancing EEG-based emotion recognition with deep generative models. J. Neural Eng. 2020, 17, 056021. [Google Scholar] [CrossRef] [PubMed]
- Adhikari, S.; Choudhury, N.; Bhattacharya, S.; Deb, N.; Das, D.; Ghosh, R.; Ghaderpour, E. Analysis of frequency domain features for the classification of evoked emotions using EEG signals. Exp. Brain Res. 2025, 243, 65. [Google Scholar] [CrossRef] [PubMed]
- Prakash, A.; Poulose, A. Electroencephalogram-Based Emotion Recognition: A Comparative Analysis of Supervised Machine Learning Algorithms. Data Sci. Manag. 2025, 8, 342–360. [Google Scholar] [CrossRef]
- Li, M.; Xu, H.; Liu, X.; Lu, S. Emotion recognition from multichannel EEG signals using K-nearest neighbor classification. Technol. Health Care 2018, 26, 509–519. [Google Scholar] [CrossRef]
- Kumar, A.; Kumar, A. Human emotion recognition using Machine learning techniques based on the physiological signal. Biomed. Signal Process. Control 2025, 100, 107039. [Google Scholar] [CrossRef]
- Alidoost, Y.; Asl, B.M. Entropy-based Emotion Recognition Using EEG Signals. IEEE Access 2025, 13, 51242–51254. [Google Scholar] [CrossRef]
- Cruz-Vazquez, J.A.; Montiel-Pérez, J.Y.; Romero-Herrera, R.; Rubio-Espino, E. Emotion recognition from EEG signals using advanced transformations and deep learning. Mathematics 2025, 13, 254. [Google Scholar] [CrossRef]
- Qiao, W.; Sun, L.; Wu, J.; Wang, P.; Li, J.; Zhao, M. EEG emotion recognition model based on attention and gan. IEEE Access 2024, 12, 32308–32319. [Google Scholar] [CrossRef]
- Zhang, Z.; Zhong, S.; Liu, Y. Beyond Mimicking Under-Represented Emotions: Deep Data Augmentation with Emotional Subspace Constraints for EEG-Based Emotion Recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024. [Google Scholar] [CrossRef]
- Du, X.; Wang, X.; Zhu, L.; Ding, X.; Lv, Y.; Qiu, S.; Liu, Q. Electroencephalographic signal data augmentation based on improved generative adversarial network. Brain Sci. 2024, 14, 367. [Google Scholar] [CrossRef]
- Liao, C.; Zhao, S.; Wang, X.; Zhang, J.; Liao, Y.; Wu, X. EEG Data Augmentation Method Based on the Gaussian Mixture Model. Mathematics 2025, 13, 729. [Google Scholar] [CrossRef]
- Szczakowska, P.; Wosiak, A. Improving Automatic Recognition of Emotional States Using EEG Data Augmentation Techniques. In Proceedings of the Procedia Computer Science, Athens, Greece, 6–8 September 2023. [Google Scholar] [CrossRef]
- Russell, J.A.; Ridgeway, D. Dimensions underlying children’s emotion concepts. Dev. Psychol. 1983, 19, 795. [Google Scholar] [CrossRef]
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef]
- Abadi, M.K.; Subramanian, R.; Kia, S.M.; Avesani, P.; Patras, I.; Sebe, N. DECAF: MEG-based multimodal database for decoding affective physiological responses. IEEE Trans. Affect. Comput. 2015, 6, 209–222. [Google Scholar] [CrossRef]
- Katsigiannis, S.; Ramzan, N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J. Biomed. Health Inform. 2017, 22, 98–107. [Google Scholar] [CrossRef] [PubMed]
- Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef]
- Alakus, T.B.; Gonen, M.; Turkoglu, I. Database for an emotion recognition system based on EEG signals and various computer games–GAMEEMO. Biomed. Signal Process. Control 2020, 60, 101951. [Google Scholar] [CrossRef]
- Rommel, C.; Paillard, J.; Moreau, T.; Gramfort, A. Data augmentation for learning predictive models on EEG: A systematic comparison. J. Neural Eng. 2022, 19, 066020. [Google Scholar] [CrossRef] [PubMed]
- Lashgari, E.; Liang, D.; Maoz, U. Data augmentation for deep-learning-based electroencephalography. J. Neurosci. Methods 2020, 346, 108885. [Google Scholar] [CrossRef] [PubMed]
- Iwana, B.K.; Uchida, S. An empirical survey of data augmentation for time series classification with neural networks. PLoS ONE 2021, 16, e0254841. [Google Scholar] [CrossRef]
- Um, T.T.; Pfister, F.M.; Pichler, D.; Endo, S.; Lang, M.; Hirche, S.; Kulić, D. Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks. In Proceedings of the 19th ACM international Conference On Multimodal Interaction, Glasgow, UK, 13–17 November 2017. [Google Scholar]
- Le Guennec, A.; Malinowski, S.; Tavenard, R. Data augmentation for time series classification using convolutional neural networks. In Proceedings of the ECML/PKDD Workshop on Advanced Analytics and Learning on Temporal Data, Riva Del Garda, Italy, 19 September 2016. [Google Scholar]
- Zhu, Z.; Wang, X.; Xu, Y.; Chen, W.; Zheng, J.; Chen, S.; Chen, H. An emotion recognition method based on frequency-domain features of PPG. Front. Physiol. 2025, 16, 1486763. [Google Scholar] [CrossRef]
- Pillalamarri, R.; Shanmugam, U. A review on EEG-based multimodal learning for emotion recognition. Artif. Intell. Rev. 2025, 58, 131. [Google Scholar] [CrossRef]
- Özçoban, M.A.; Tan, O. Electroencephalographic markers in Major Depressive Disorder: Insights from absolute, relative power, and asymmetry analyses. Front. Psychiatry 2025, 15, 1480228. [Google Scholar] [CrossRef]
- Ikizler, N.; Ekim, G. Investigating the effects of Gaussian noise on epileptic seizure detection: The role of spectral flatness, bandwidth, and entropy. Eng. Sci. Technol. Int. J. 2025, 64, 102005. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, Y. Emotion recognition based on multimodal physiological electrical signals. Front. Neurosci. 2025, 19, 1512799. [Google Scholar] [CrossRef]
- Garg, S.; Patro, R.K.; Behera, S.; Tigga, N.P.; Pandey, R. An overlapping sliding window and combined features based emotion recognition system for EEG signals. Appl. Comput. Inform. 2021, 21, 114–130. [Google Scholar] [CrossRef]
- Yan, F.; Guo, Z.; Iliyasu, A.M.; Hirota, K. Multi-branch convolutional neural network with cross-attention mechanism for emotion recognition. Sci. Rep. 2025, 15, 3976. [Google Scholar] [CrossRef]
Particular | Description
---|---
Data (Original) | 32 subjects × 40 videos × 40 channels × 8064 samples |
Label | 40 videos × 4 labels (Arousal, Dominance, Valence, Liking) |
No. of Channels | 40 |
Sampling Frequency | 512 Hz |
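For reference, a minimal sketch of loading one subject from the DEAP preprocessed Python release, matching the shapes in Table 1 (the file path is illustrative; the latin1 pickle encoding is the standard recipe for this release):

```python
import pickle

# Each s01.dat ... s32.dat file is a Python-2 pickle; decode with "latin1"
# when reading under Python 3.
with open("data_preprocessed_python/s01.dat", "rb") as f:
    subject = pickle.load(f, encoding="latin1")

data = subject["data"]      # (40 videos, 40 channels, 8064 samples)
labels = subject["labels"]  # (40 videos, 4 self-assessment ratings)
```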
Classifier | HAHV Band | HAHV Channel | HAHV Accuracy | LAHV Band | LAHV Channel | LAHV Accuracy | HALV Band | HALV Channel | HALV Accuracy | LALV Band | LALV Channel | LALV Accuracy
---|---|---|---|---|---|---|---|---|---|---|---|---
SVM | Beta | Frontal | 69.38% | Alpha | Frontal | 76.55% | Gamma | Occipital | 81.43% | Alpha | Central | 74.27% |
KNN | Beta | Frontal | 87.62% | Beta | Frontal | 88.60% | Theta | Parietal | 85.99% | Theta | Parietal | 84.69% |
DT | Beta | Occipital | 82.74% | Alpha | Central | 83.06% | Beta | Parietal | 85.02% | Gamma | Central | 83.06% |
RF | Beta | Occipital | 85.34% | Gamma | Frontal | 87.62% | Beta | Parietal | 86.32% | Beta | Occipital | 82.74% |
NB | Beta | Occipital | 67.75% | Beta | Parietal | 76.22% | Theta | Occipital | 80.78% | Alpha | Central | 74.27% |
MLP | Beta | Occipital | 69.38% | Alpha | Parietal | 76.55% | Gamma | Parietal | 81.43% | Gamma | Central | 73.29% |
AB | Theta | Frontal | 70.68% | Alpha | Frontal | 77.52% | Gamma | Occipital | 82.08% | Beta | Central | 73.62%
XG | Gamma | Occipital | 71.99% | Theta | Central | 78.18% | Theta | Occipital | 81.43% | Gamma | Central | 74.27% |
LGBM | Gamma | Central | 78.83% | Gamma | Central | 84.04% | Beta | Central | 83.71% | Theta | Occipital | 80.46% |
GPC | Beta | Frontal | 70.03% | Gamma | Frontal | 76.87% | Beta | Parietal | 81.43% | Beta | Occipital | 73.62% |
PER | Theta | Frontal | 69.71% | Alpha | Frontal | 75.90% | Beta | Frontal | 81.43% | Beta | Parietal | 71.99% |
CB | Beta | Central | 86.97% | Theta | Parietal | 88.60% | Beta | Parietal | 86.97% | Theta | Central | 85.99% |
Ensemble (KNN, RF, CB) | Beta | Frontal | 88.27% | Theta | Occipital | 91% | Beta | Central | 87% | Theta | Central | 86% |
Classifiers | Parameters | Values | Description
---|---|---|---
Random Forest (RF) | n_estimators | 100 | The number of trees in the forest
 | max_depth | 20 | The maximum depth of the tree
 | min_samples_split | 5 | The minimum number of samples required to split an internal node
CatBoost (CB) | iterations | 1000 | The number of boosting iterations (trees) used during training
 | learning_rate | 0.1 | The step size at each iteration while moving toward a minimum of the loss function
 | depth | 6 | The depth of the trees
 | loss_function | logloss | The loss function optimized during training
 | verbose | 0 | The level of logging displayed during training; 0 means minimal or no logging
K-Nearest Neighbors (KNN) | n_neighbors | 5 | The number of neighbors to use
 | weights | distance | The weight function used in prediction
 | algorithm | auto | The algorithm used to compute the nearest neighbors
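The Table 3 settings map directly onto the scikit-learn and CatBoost constructors. The sketch below assembles the ensemble with `VotingClassifier(voting="soft")`, which reproduces the probability averaging of Steps 10-12 of Algorithm 1. Note that CatBoost names its multiclass cross-entropy objective `MultiClass`; this is assumed to correspond to the table's `logloss` for the four-class setup.

```python
from catboost import CatBoostClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

rf = RandomForestClassifier(n_estimators=100, max_depth=20, min_samples_split=5)
cb = CatBoostClassifier(iterations=1000, learning_rate=0.1, depth=6,
                        loss_function="MultiClass",  # CatBoost's multiclass logloss
                        verbose=0)
knn = KNeighborsClassifier(n_neighbors=5, weights="distance", algorithm="auto")

# voting="soft" averages the three predict_proba outputs, as in Step 12.
ensemble = VotingClassifier(estimators=[("rf", rf), ("cb", cb), ("knn", knn)],
                            voting="soft")
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```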
Gaussian Noise Addition and Flip (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 83.71% | 84.69% | 83.39% | 84.36% | 85.34% | 85.67% | 86.97% | 85.99% |
alpha | 85.67% | 85.67% | 86.64% | 86.97% | 85.67% | 86.97% | 87.30% | 85.67% |
beta | 87.62% | 86.97% | 86.97% | 88.60% | 86.32% | 85.34% | 86.64% | 85.67% |
gamma | 86.97% | 85.34% | 84.69% | 87.30% | 85.34% | 84.3% | 85.67% | 84.36% |
Gaussian Noise Addition and Flip (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 81.76% | 85.67% | 83.39% | 84.04% | 87.30% | 88.60% | 89.25% | 89.90% |
alpha | 83.39% | 83.71% | 84.69% | 83.39% | 86.97% | 88.60% | 87.62% | 88.27% |
beta | 83.06% | 82.41% | 82.08% | 83.39% | 87.95% | 87.30% | 87.30% | 87.62% |
gamma | 81.76% | 82.74% | 81.76% | 81.43% | 88.93% | 87.95% | 84.69% | 83.71% |
Data Slicing and Shuffling (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 76.47% | 75.00% | 75.49% | 75.49% | 75.98% | 76.96% | 77.94% | 75.98% |
alpha | 84.31% | 82.35% | 85.78% | 82.35% | 82.84% | 86.27% | 84.80% | 81.86% |
beta | 82.35% | 81.37% | 78.92% | 80.88% | 80.39% | 78.43% | 77.45% | 81.37% |
gamma | 76.96% | 78.92% | 78.92% | 81.86% | 74.51% | 76.96% | 79.90% | 79.41% |
Data Slicing and Shuffling (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 74.51% | 75.00% | 77.45% | 76.47% | 83.33% | 84.31% | 82.84% | 83.33% |
alpha | 85.29% | 88.73% | 85.29% | 83.33% | 89.71% | 90.20% | 88.24% | 86.27% |
beta | 81.37% | 81.86% | 81.37% | 79.90% | 83.82% | 87.75% | 85.29% | 84.80% |
gamma | 75.49% | 78.43% | 79.90% | 74.02% | 79.90% | 83.82% | 82.84% | 81.37% |
Time Warping (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 73.04% | 76.47% | 73.04% | 75.98% | 74.02% | 74.02% | 74.02% | 76.96%
alpha | 71.57% | 74.02% | 71.08% | 74.02% | 76.47% | 76.47% | 75.49% | 75.49% |
beta | 75.98% | 75.49% | 74.02% | 73.04% | 70.10% | 75.00% | 74.02% | 73.53% |
gamma | 76.96% | 73.04% | 75.00% | 71.08% | 74.51% | 72.55% | 72.55% | 70.10% |
Time Warping (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 77.45% | 76.96% | 74.51% | 75.00% | 81.37% | 81.37% | 81.37% | 79.90% |
alpha | 73.04% | 74.51% | 76.47% | 73.04% | 79.90% | 81.37% | 80.88% | 78.43% |
beta | 72.55% | 75.00% | 73.04% | 73.04% | 78.43% | 78.43% | 78.43% | 77.45% |
gamma | 75.49% | 71.57% | 75.00% | 70.10% | 77.94% | 77.45% | 77.45% | 77.45% |
Random Channel Dropping (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 80.39% | 78.43% | 78.43% | 80.39% | 81.86% | 80.39% | 82.84% | 78.92% |
alpha | 82.35% | 79.41% | 78.43% | 79.90% | 81.37% | 82.84% | 84.80% | 80.39% |
beta | 82.84% | 79.41% | 79.90% | 81.86% | 83.82% | 78.92% | 85.78% | 80.88% |
gamma | 82.84% | 81.37% | 82.84% | 84.31% | 80.88% | 81.37% | 84.80% | 76.47% |
Random Channel Dropping (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 83.33% | 81.86% | 83.33% | 82.35% | 84.31% | 87.75% | 84.80% | 83.82% |
alpha | 81.86% | 80.39% | 80.39% | 82.35% | 84.31% | 83.33% | 84.80% | 86.27% |
beta | 86.27% | 85.29% | 84.31% | 82.84% | 85.29% | 86.76% | 83.33% | 83.82% |
gamma | 81.86% | 81.86% | 85.78% | 85.78% | 86.76% | 85.78% | 84.80% | 86.27% |
Frequency Manipulation (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 88.73% | 88.24% | 89.71% | 87.75% | 87.75% | 88.73% | 88.73% | 86.76% |
alpha | 88.24% | 88.24% | 88.24% | 87.75% | 89.71% | 89.22% | 87.75% | 88.73% |
beta | 90.69% | 89.22% | 88.24% | 89.22% | 90.20% | 91.67% | 88.24% | 88.73% |
gamma | 92.16% | 90.69% | 92.65% | 91.67% | 90.20% | 90.18% | 91.67% | 89.22% |
Frequency Manipulation (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 88.73% | 92.16% | 88.24% | 87.25% | 93.63% | 91.67% | 94.12% | 92.16% |
alpha | 90.69% | 92.65% | 91.18% | 86.76% | 93.14% | 93.63% | 92.16% | 91.18% |
beta | 92.65% | 94.61% | 92.16% | 91.67% | 91.18% | 93.63% | 91.18% | 92.65% |
gamma | 92.65% | 91.18% | 95.59% | 93.14% | 91.18% | 92.65% | 94.61% | 95.59% |
Combined Augmentation Techniques (left: Label HAHV; right: Label HALV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 84.69% | 87.30% | 86.48% | 85.50% | 85.18% | 87.62% | 87.46% | 87.46% |
alpha | 87.95% | 88.44% | 88.11% | 87.30% | 87.79% | 88.76% | 89.58% | 89.90% |
beta | 83.55% | 84.36% | 82.57% | 82.57% | 84.04% | 85.18% | 85.50% | 85.02% |
gamma | 83.55% | 81.76% | 82.74% | 82.74% | 84.53% | 82.08% | 85.02% | 82.41% |
Combined Augmentation Techniques (left: Label LALV; right: Label LAHV)
Band | Frontal | Central | Parietal | Occipital | Frontal | Central | Parietal | Occipital
---|---|---|---|---|---|---|---|---
theta | 87.79% | 87.46% | 87.30% | 87.79% | 84.44% | 89.09% | 89.25% | 88.44% |
alpha | 88.76% | 89.25% | 87.62% | 88.60% | 89.25% | 89.41% | 89.41% | 89.58% |
beta | 85.18% | 84.53% | 84.20% | 84.36% | 86.32% | 85.18% | 86.32% | 86.16% |
gamma | 84.36% | 80.94% | 82.25% | 81.76% | 85.02% | 83.71% | 83.39% | 86.32% |
Augmentation Technique Used | HAHV Accuracy | HAHV Region and No. of Electrodes | HALV Accuracy | HALV Region and No. of Electrodes | LALV Accuracy | LALV Region and No. of Electrodes | LAHV Accuracy | LAHV Region and No. of Electrodes | Time Involved (s)
---|---|---|---|---|---|---|---|---|---
Gaussian Noise Addition and Flip | 88.60% | Occipital 5 electrodes | 87.30% | Parietal 5 electrodes | 85.67% | Central 7 electrodes | 89.90% | Occipital 5 electrodes | 864.51 s |
Data Slicing and Shuffling | 85.78% | Parietal 5 electrodes | 86.27% | Central 7 electrodes | 88.73% | Central 7 electrodes | 90.20% | Central 7 electrodes | 590.45 s |
Time Warping | 76.96% | Frontal 5 electrodes | 76.96% | Occipital 5 electrodes | 76.47% | Parietal 5 electrodes | 81.37% | Frontal/Central 5/7 electrodes | 881.62 s |
Random Channel Dropping | 84.31% | Occipital 5 electrodes | 85.78% | Parietal 5 electrodes | 86.27% | Frontal 5 electrodes | 86.76% | Central 7 electrodes | 584.72 s |
Frequency Manipulation | 92.65% | Parietal 5 electrodes | 91.67% | Central 7 electrodes | 95.59% | Parietal 5 electrodes | 95.59% | Occipital 5 electrodes | 583.93 s |
Combined Augmentation Techniques | 88.44% | Central 7 electrodes | 89.90% | Occipital 5 electrodes | 89.25% | Central 7 electrodes | 89.58% | Occipital 5 electrodes | 2207.02 s |
Augmentation Technique Used | Class −1 (Negative Feeling) | Class 0 (Neutral Feeling) | Class 1 (Positive Feeling) | Overall Accuracy
---|---|---|---|---
Gaussian Noise Addition and Flip | 98% | 98% | 100% | 99% |
Data Slicing and Shuffling | 71% | 80% | 91% | 81% |
Time Warping | 92% | 93% | 100% | 95% |
Random Channel Dropping | 94% | 93% | 99% | 96% |
Frequency Manipulation | 97% | 96% | 99% | 97% |
Augmentation Technique Used | Boring | Calm | Horror | Funny | Overall
---|---|---|---|---|---
Gaussian Noise Addition and Flip | 98% | 100% | 100% | 98% | 98.9% |
Data Slicing and Shuffling | 98% | 99% | 99% | 97% | 98.03% |
Time Warping | 89% | 92% | 94% | 92% | 91.2% |
Random Channel Dropping | 93% | 95% | 94% | 92% | 93.8% |
Frequency Manipulation | 97% | 100% | 100% | 98% | 98.6% |
Paper | Dataset Used | Specific Band–Brain Region Analysis | Emotions Classified | Method Used for Augmentation | Accuracy | No. of Electrodes/Channels Used
---|---|---|---|---|---|---
Alidoost, Y., & Asl, B. M. (2025) [13] | DEAP | Not Demonstrated | HAHV, HALV, LALV, LAHV | Synthetic Minority Over-Sampling Technique (SMOTE) | HAHV: 94.44%; HALV: 96.55%; LALV: 98.19%; LAHV: 97.52% | 18 Channels
Cruz-Vazquez et al. (2025) [14] | Self-generated | Not Demonstrated | Happy, Sad, Neutral | Transformation with Fourier Neural Network (FNN); Transformation with Quantum Rotations | 95% overall accuracy using Quantum Rotations | 14 Channels
Qiao, W. et al. (2024) [15] | SEED | Not Demonstrated | Not Specified | GAN-based | 94.87% | 18 Channels
 | DREAMER | Not Demonstrated | Not Specified | GAN-based | 87.26% | 
Zhang, Z. et al. (2024) [16] | DEAP | Not Demonstrated | Valence | GAN-based | 96.33% | 32 Channels
 | | Not Demonstrated | Arousal | GAN-based | 96.68% | 
 | AMIGOS | Not Demonstrated | Valence | GAN-based | 94.4% | 14 Channels
 | | Not Demonstrated | Arousal | GAN-based | 95.23% | 
 | SEED | Not Demonstrated | Not Specified | GAN-based | 97.14% | 62 Channels
Du, X. et al. (2024) [17] | BCI-IV-2a dataset | Not Demonstrated | Not Specified | Improved generative adversarial network model L-C-WGAN-GP | PRD values lowered by 0.05–5.58% when generated data is included in the training set | 16 Channels
Liao, C. et al. (2025) [18] | BCI Competition IV 2a | Not Demonstrated | Not Specified | Gaussian Mixture Model | 82.73% using Deep4Net | 22 Channels
Szczakowska, P., & Wosiak, A. (2023) [19] | MAHNOB | Not Demonstrated | Valence, Arousal | Sliding Window, Overlapping Windows, Gaussian Noise | Arousal: 54.29% using Gaussian Noise Addition; Valence: 56.72% using Gaussian Noise Addition | 32 Channels
Proposed FREQ-EER | DEAP | Demonstrated | HAHV, HALV, LALV, LAHV | Frequency Manipulation Based | HAHV: 92.65%; HALV: 91.67%; LALV: 95.59%; LAHV: 95.59% | HAHV: 5 Channels; HALV: 7 Channels; LALV: 5 Channels; LAHV: 5 Channels
 | SEED | Validation of FREQ-EER | Positive, Neutral, Negative | Frequency Manipulation Based | Positive: 99%; Negative: 96%; Neutral: 97% | 
 | GAMEEMO | Validation of FREQ-EER | Boring, Calm, Horror, Funny | Frequency Manipulation Based | Boring: 97%; Calm: 100%; Horror: 100%; Funny: 98% | 
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).