Review

EEG-Based Biometric Identification and Emotion Recognition: An Overview

by Miguel A. Becerra 1,2,*, Carolina Duque-Mejia 1,2, Andres Castro-Ospina 3, Leonardo Serna-Guarín 3,†, Cristian Mejía 3,† and Eduardo Duque-Grisales 1,2

1 Facultad de Ingeniería, Institución Universitaria Pascual Bravo, Medellín 050034, Colombia
2 Facultad de Estudios Empresariales, Institución Universitaria Esumer, Medellín 520001, Colombia
3 Facultad de Ingeniería, Instituto Tecnológico Metropolitano, Medellín 050013, Colombia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299
Submission received: 27 March 2025 / Revised: 30 June 2025 / Accepted: 16 July 2025 / Published: 23 July 2025
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

Abstract

This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotion recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.

1. Introduction

Human identification systems are usually based on passwords, access cards, and PINs, which are vulnerable to theft, loss, or forgetfulness. These limitations are addressed by developing biometric systems that allow the identification of individuals based on physical characteristics or physiological signals. Commonly used physical characteristics include fingerprints, iris patterns, and facial features, while physiological signals involve data like voice, EEG, EMG, and ECG, among others [1]. Physiological signals, particularly EEG, have garnered significant interest for biometric applications due to their unique characteristics and inherent robustness against impersonation attacks, as they are not visible to the human eye [2].
Biometric identification has gained significant popularity because it is harder to falsify than knowledge- and possession-based methods, which can be forgotten, duplicated, stolen, or lost. It offers greater security, especially in unimodal, bimodal, or multimodal systems, which incorporate one, two, or more physiological characteristics, respectively. EEG-based biometrics has been comparatively little explored for identification, so it is considered an open field of research that seeks to exploit unique brain wave patterns that are difficult to replicate and can be considered highly secure for applications requiring rigorous identification [1]. Moreover, unlike trait-based biometrics, EEG-based systems offer greater resistance to external manipulation, as brainwave patterns are generated internally and are unique to each individual.
EEG signals, generated by the brain’s electrical activity, are commonly used in identifying pathologies such as brain tumors, cerebral dysfunctions, and sleep disorders. In addition, recent studies have demonstrated the utility of EEG signals in various applications, including monitoring student engagement, lie detection, and assessing political attitudes [3,4,5]. These expanding applications reveal the richness of EEG signal features and further support their potential for biometric identification tasks. Their suitability for biometric identification systems stems from their universality, uniqueness, permanence, and measurable properties [6]. These required properties make EEG an attractive candidate for secure identification systems, as it can effectively distinguish between individuals [7]; given these advantages, multiple biometric systems based on EEG have been proposed.
The literature reports multiple studies on biometric identification based on EEG signals [1,8], and it remains an open research field that has explored various approaches for EEG-based biometric systems. In [9], a biometric system based on EEG signals was proposed: the researchers formulated a binary optimization problem for channel selection and employed a support vector machine with a radial basis function kernel (SVM-RBF) on features derived from autoregressive coefficients. The method achieved an accuracy of 94.13% using 23 sensors with five autoregressive coefficients, indicating the potential of this type of signal for identification. In [10], the authors proposed a system that also used an SVM-RBF classifier and achieved an accuracy of 94.13% when all EEG channels were considered; additionally, advanced EEG channel selection methods were introduced, yielding high accuracy with fewer sensors. Multiple techniques for biometric identification have been tested, with artificial neural networks (ANNs) being the most prominent; these have also been widely used for emotion identification from EEG signals. Such networks can learn complex patterns and extract relevant signal features, providing a solid foundation for biometric identification. Support vector machines (SVMs) are highly effective for biometric and emotion identification from features extracted from EEG signals, and their ability to handle high-dimensional datasets makes them a valuable option [11]. Deep architectures such as convolutional neural networks (CNNs) have been applied to EEG signal analysis due to their capability to extract spatial and temporal features, allowing the identification of complex patterns present in EEG signals [11].
Similarly, long short-term memory (LSTM) networks, a variant of recurrent neural networks (RNNs), are effective at modeling temporal sequences in EEG signals [12], achieving accuracy scores above 96% [13]. In addition, EEG has also been applied in single-channel configurations, demonstrating substantial accuracy through signal segmentation and feature extraction techniques [14].
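To make the AR-feature pipeline of [9,10] concrete, the sketch below estimates autoregressive coefficients per channel via the Yule–Walker equations and concatenates them into a feature vector. The channel count (23) and model order (5) mirror the setup reported in [9]; the synthetic data and function names are illustrative assumptions, and the SVM-RBF classifier itself is omitted.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(x, order=5):
    """Estimate autoregressive (AR) coefficients of a 1-D signal
    via the Yule-Walker equations with biased autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation at lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # solve the Toeplitz system built from r[0..order-1]
    return solve_toeplitz(r[:-1], r[1:])

def ar_feature_vector(epoch, order=5):
    """Concatenate per-channel AR coefficients into a single feature
    vector; `epoch` has shape (n_channels, n_samples)."""
    return np.concatenate([ar_coefficients(ch, order) for ch in epoch])

# A 23-channel epoch yields 23 * 5 = 115 features, which could then
# feed an SVM-RBF classifier as in the studies above.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((23, 512))
print(ar_feature_vector(epoch, order=5).shape)  # (115,)
```

The AR order trades off spectral smoothing (too low) against spurious peaks (too high), a limitation discussed later in Section 3.3.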
Biometric systems based on electroencephalographic signals have demonstrated high accuracy. However, these studies have been conducted on databases with a limited number of individuals, so this remains an active area of research that must consider the effects of multiple external and internal factors on EEG signals, such as emotions and diseases, among others. If machine learning models do not account for these factors, they may show significant performance limitations in real-life environments. Additionally, emotion recognition from EEG signals is itself still an active research topic, so it can be inferred that ignoring emotions in biometric identification in real environments can significantly degrade performance [15,16,17].
On the other hand, numerous studies have been reported in the area of emotion recognition using EEG signals. In [18], a portable brain wave analysis system was proposed to recognize positive, negative, and neutral emotions using the DEAP and SEED databases. Among the methods tested, the long short-term memory (LSTM) deep learning approach demonstrated the best performance, achieving 94.12% precision in identifying emotional states. In another study, [19] presented machine learning models, such as a k-nearest neighbor (KNN) regressor with Manhattan distance, which utilized features from the alpha, beta, and gamma bands, as well as the differential asymmetry of the alpha band [20]. This approach showed promising results in predicting valence and arousal, achieving an accuracy of 84.4%. These findings underscore the potential of EEG-based models to reliably infer emotional states and deepen the understanding of affective responses. Further studies on EEG-based biometric recognition using deep learning techniques illustrate how convolutional and recurrent neural networks can extract distinctive features from brain signals, achieving high levels of precision in biometric identification. These advanced approaches open new pathways for the development of more secure and adaptive identification systems that can function effectively under challenging conditions [21].
Building on these advances in EEG signal processing, recent research in EEG-based biometric identification and emotion recognition highlights the importance of multimodal databases and fusion strategies to improve recognition accuracy. A systematic review of studies from 2017 to 2024 identified the DEAP (Dataset for Emotion Analysis using Physiological signals), SEED (Shanghai Jiao Tong University Emotion EEG Dataset), DREAMER (Database for Emotion Recognition through EEG and ECG Signals), and SEED-IV databases as the most widely used. Deep learning models such as TNAS (Transferable Neural Architecture Search), GLFANet (Graph-based Learning Feature Attention Network), ACTNN (Attention-based Convolutional Temporal Neural Network), and ECNN-C (Efficient Convolutional Neural Network with Contrastive Learning) have demonstrated effectiveness in emotion recognition [22]. Additionally, the MED4 database, which integrates EEG signals with photoplethysmography, speech, and facial images, has shown significant accuracy gains in emotion detection. These improvements, achieved through feature- and decision-level fusion, include a 25.92% increase over speech alone and a 1.67% increase over EEG alone in anechoic conditions [23].
Other studies have focused on optimization techniques for EEG channel selection, such as the binary particle swarm optimization (BPSO) algorithm, which reduces the number of signals to be processed by identifying the most informative brain regions, thereby reducing noise and improving feature relevance and emotion recognition accuracy at a lower computational cost [24]. In [25], the M3CV dataset was created to improve the generality of recognition models by including multiple subjects, sessions, and tasks; collecting such data for model building and hypothesis validation while managing intra- and inter-subject variability can be considered one of the most complex tasks. Finally, some studies have analyzed multimodality considering different types of signals, as in the DEAP database, which can be beneficial for accuracy and generality but simultaneously increases costs and decreases feasibility in real applications. Therefore, the need for techniques and methodologies to cover the existing gaps is highlighted [7].
Considering the above, it is evident that the study of biometric identification techniques is a tangible necessity, with a permanent need for improvement in computational performance and precision as well as the mitigation of vulnerability risks. Regarding biometric identification based on EEG signals, this work aims to reveal the importance and influence of emotions on the EEG signals and, consequently, their impact on biometric identification. Thus, developing new approaches and broadening the scope of studies that address the effects of emotions on EEG signals to improve the performance of biometric identification systems is necessary. Unlike previous reviews that have addressed EEG-based biometric identification and emotion recognition as distinct research lines, this article presents a unified perspective that explores how emotional states influence EEG signals and, consequently, affect biometric identification performance. This integrated approach is essential for advancing toward more robust, adaptive, and real-world applicable biometric systems. Additionally, this review considers underexplored contextual variables such as noise, temperature, and fatigue and includes a synthesis of recent advances in deep learning and multimodal strategies that simultaneously address identity and emotional variability. Thus, the present work provides a comprehensive and timely overview of an emerging research frontier that has not been previously systematized.
The conceptual framework underlying this analysis is presented in Figure 1. This framework provides a structured overview of the key elements of EEG-based biometric identification, emotion identification, and biometric identification that considers the effects of emotions. Studies typically begin with the collection of datasets, followed by preprocessing and then feature extraction and selection using techniques in the time, frequency, time–frequency, nonlinear, and spatial domains. Finally, machine learning-based methods for biometric identification and emotion recognition are highlighted, along with the limited studies that perform biometric identification while considering emotions. Additionally, meta-analyses focused on performance metrics such as accuracy (considering database, feature extraction, and classification techniques) are summarized in Table 1, Table 2 and Table 3. The results reveal the challenges at different stages of EEG signal processing. Finally, the framework outlines future prospects, such as adaptive AI models and multimodal data integration, as strategies to improve the generality of biometric systems in emotionally dynamic contexts.
This article is structured as follows: after the introduction, the methods and databases used for EEG-based biometric identification and emotion recognition are presented. The following sections discuss biometric identification studies that consider the analysis of emotions. Finally, conclusions and current challenges highlight promising directions for future research.

2. Literature Review Process

Few studies specifically address biometric identification based on EEG signals in relation to emotional states. This overview summarizes studies on emotion recognition, biometric identification, and the interrelationship between these two fields. To conduct this review, a query was carried out on the Scopus database with specific search criteria: (“emotion recognition” and “biometric identification” and “EEG biometric identification and emotions”). Article selection was guided by availability criteria, article categories, and the relevance of studies linking emotions and biometric identification, as well as experimental articles focused on biometrics and emotions. The selected articles are analyzed and discussed throughout this document.
Figure 2 shows the number of publications in Scopus under the following queries: ‘biometric identification and EEG’, ‘emotion recognition and EEG’, and ‘biometric identification and emotion recognition and EEG’, which reveal the state of current research on integrating EEG signals for biometric identification and emotion recognition. The results show that emotion recognition based on EEG signals has been widely studied compared to biometric identification using EEG signals; furthermore, biometric identification that considers emotional states is an emerging field. This suggests the need to significantly increase efforts in EEG signal-based biometric identification studies to advance the development of more complex and context-aware systems. Figure 2 also shows that emotion identification has experienced steady growth, peaking in 2023 with 539 studies, while biometric identification peaked in 2021 with 34 studies. The integration of both areas has been less explored than either individual discipline, with a growing number of studies over the years and a peak in 2021 of six studies.

3. Electroencephalography (EEG): Foundations and Applications

3.1. Brain Anatomy Relevant to EEG

The human brain, the most complex organ in the central nervous system (CNS), is fundamental to functions such as self-awareness, speech, movement, and memory storage. As noted by [46], ‘almost all organs in the human body are potentially transplantable. However, brain transplantation would be equivalent to transplanting the person,’ highlighting its unique role in defining individual identity. Structurally, the brain is divided into two interconnected hemispheres (see Figure 3), the right and the left, each specializing in distinct functions and maintaining an inverse relationship with the body: the right hemisphere controls the left side, while the left hemisphere controls the right. The right hemisphere is dominant in non-verbal processing, including emotional regulation, memory through images, and the interpretation of sensory inputs such as taste and visual stimuli. Conversely, the left hemisphere excels in verbal and rational functions, such as processing symbols, letters, numbers, and words [47]. These distinct yet complementary roles underscore the brain’s remarkable complexity and its critical role in EEG signal generation.

3.2. EEG Signals and Their Properties

Brain waves are the electrical impulses generated by chains of neurons, and these signals are distinguished by their speed and frequency. There are five types of brain waves: alpha, beta, theta, delta, and gamma. Some have low frequencies, while others have higher ones. All five remain active throughout the day and, depending on the activity being performed, some tend to be stronger than others [48].
Delta waves (0.5 to 4 Hz) are high-amplitude, low-frequency brain waves associated with deep sleep and unconscious states; they are related to unconscious activities such as the heartbeat. They are also observed during states of meditation. The production of delta rhythms coincides with the regeneration and restoration of the central nervous system [49,50].
Theta waves (4 to 8 Hz) are associated with imagination and reflection. These waves also appear during deep meditation. Theta waves are of great importance in learning and are produced between wakefulness and sleep when processing unconscious information, such as nightmares or fears [50,51].
Alpha waves (8 to 13 Hz) appear during states of low brain activity and relaxation. They are waves of greater amplitude compared to beta waves. Generally, alpha waves appear as a reward after a job well done [50,52].
Beta waves (13 to 30 Hz) appear when attention is directed toward external cognitive tasks. They have a high frequency and are associated with intense mental activities [50,53].
Gamma waves (30 to 100 Hz) originate in the thalamus and are related to tasks that require high cognitive processing [50,54].
Electroencephalographic (EEG) signals are generated by the synchronized electrical activity of cortical neurons, primarily pyramidal cells, and are commonly categorized into frequency bands based on their spectral characteristics. These bands reflect different brain states and cognitive processes.
Delta waves (0.5 to 4 Hz): Delta waves are the slowest and highest-amplitude EEG oscillations. They are predominantly observed during deep, non-REM sleep (stages 3 and 4) and are associated with unconscious physiological processes. Abnormal delta activity during wakefulness may indicate brain injury or dysfunction.
Theta waves (4 to 8 Hz): Theta activity is typically associated with drowsiness, meditative or trance-like states, and the early stages of sleep. In children, theta rhythms are more prominent during wakefulness. In adults, they often emerge during memory encoding or emotional processing. Some of the literature extends the lower limit to 3.5 Hz, leaving the 3 to 3.5 Hz range as a transition zone between delta and theta activity.
Alpha waves (8 to 13 Hz): Alpha rhythms are most prominent in the occipital cortex during relaxed wakefulness with eyes closed. They are attenuated by eye opening or mental effort (a phenomenon known as alpha blocking). Alpha activity is often interpreted as an indicator of cortical idling or inhibition.
Beta waves (13 to 30 Hz): Beta waves are associated with active thinking, focused attention, and alert mental states. They are typically recorded over the frontal and central areas of the brain. Beta activity increases during motor activity and decreases during drowsiness or rest.
Gamma waves (30 to 100 Hz): Gamma activity is involved in high-level cognitive functions, such as perception, attention, and memory binding. Although more difficult to measure due to low amplitude and susceptibility to noise, gamma rhythms are thought to reflect information integration across brain regions.
Each frequency band plays a unique role in the brain’s functional architecture and is influenced by age, mental state, and neurological conditions. Understanding these oscillatory patterns is essential for both diagnostic and biometric applications involving EEG signals.
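As an illustration of how these bands are quantified in practice, the sketch below estimates per-band power from Welch's PSD using SciPy. The band edges follow the definitions above; the sampling rate, test signal, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

# Canonical band edges (Hz) as defined above; exact boundaries vary in the literature.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(signal, fs):
    """Approximate absolute power per EEG band from Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    df = freqs[1] - freqs[0]
    # integrate the PSD over each band's frequency range
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz tone should concentrate its power in the alpha band.
fs = 256
t = np.arange(0, 4, 1 / fs)
p = band_powers(np.sin(2 * np.pi * 10 * t), fs)
print(max(p, key=p.get))  # alpha
```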

3.3. Feature Extraction from EEG Signals

Feature extraction techniques include methods in the time domain, frequency domain, time–frequency domain, spatial domain, and nonlinear domain, all of which aim to describe a signal by its characteristics. Some methods used in EEG (electroencephalography) include variance, standard deviation, correlation coefficients, and Hjorth parameters, which have low computational complexity. Others include autoregressive (AR) models, the fast Fourier transform (FFT), the short-time Fourier transform (STFT), spectral power, the wavelet transform, the Hilbert–Huang transform, common spatial patterns, and entropy, among others [55].
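A minimal example of the low-complexity time-domain descriptors mentioned above: the Hjorth parameters (activity, mobility, complexity) can be computed directly from the signal's variance and the variances of its first and second differences. The function name and test signal are illustrative; this is a sketch, not a reference implementation.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth descriptors of a 1-D EEG segment:
    activity = var(x); mobility = sqrt(var(x') / var(x));
    complexity = mobility(x') / mobility(x)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)    # first difference approximates the derivative
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# For a pure sinusoid the complexity is close to 1, since its
# derivative is a sinusoid of the same frequency.
fs = 256
t = np.arange(0, 4, 1 / fs)
print(hjorth_parameters(np.sin(2 * np.pi * 10 * t)))
```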
Table 1, Table 2 and Table 3 summarize the datasets used, the feature extraction and classification techniques applied to EEG signals, and the accuracy achieved in each study for biometric identification and emotion recognition. The presented studies utilize various EEG databases, with DEAP and SEED being the most widely used for biometric and emotion identification, as well as self-collected databases recorded with commercial devices such as the Emotiv Epoc+. In terms of feature extraction, the authors of [56] highlighted the fast Fourier transform (FFT), which achieved an accuracy of 96.81% in classifying EEG signals, after comparing various feature extraction techniques including eigenvector methods (EM), different types of wavelet transform such as the discrete and continuous wavelet transforms, time–frequency distributions (TFDs), and autoregressive methods (ARMs). The literature reports multiple other techniques for feature extraction, such as time- and frequency-domain methods, including autoregressive (AR) models, power spectral density (PSD), and wavelet packet decomposition (WPD); these techniques have been widely used for their ability to capture relevant information from EEG signals. Spatiotemporal-domain methods, such as common spatial patterns (CSPs) and phase locking value (PLV), have shown high effectiveness in discriminating between subjects and emotional states. Other commonly used techniques include multivariate variational mode decomposition (MVMD), Fourier–Bessel series expansion, convolutional neural networks (CNNs), and functional connectivity (FC) analysis.
Regarding classification, models based on deep convolutional neural networks (DCNNs), long short-term memory (LSTM) networks, and hybrid attention-based LSTM-MLP architectures have achieved accuracy scores higher than 97%. However, traditional methods such as k-nearest neighbors (K-NN), support vector machines (SVMs), Random Forest (RF), Hidden Markov Models, and Gaussian Naïve Bayes (GNB) have shown accuracies ranging from 75.8% to 99.9% when combined with feature selection techniques such as Sequential Floating Forward Selection (SFFS) and Random Forest-based binary selection. The best results were obtained by the SVM classifier in most cases, because this technique performs very well with relatively small amounts of data compared to what a neural network requires to achieve comparable results. Hybrid models are also highlighted, achieving 99.96% accuracy and demonstrating artificial intelligence’s potential for optimizing biometric identification and emotion recognition from EEG signals. These studies highlight the diversity of feature extraction and classification methods used to improve the precision of biometric identification. Time-domain methods such as AR models have advantages over the FFT, offering better frequency resolution and improved spectral estimates for short segments of EEG signals. However, they also have limitations: there are no clear guidelines for selecting the parameters of the spectral estimates, and AR models require an optimal order, as a too-low order may smooth out the spectrum, while a too-high order may introduce false peaks. Among the frequency-domain methods, the FFT maps the signal from the time domain to the frequency domain, which facilitates the investigation of amplitude distribution in spectra and reflects different brain tasks. However, it is unsuitable for representing non-stationary signals, whose spectral content varies over time.
The short-time Fourier transform (STFT) is simple and easy to implement, but longer analysis segments violate the quasi-stationarity assumption required by the Fourier transform. The power spectral density (PSD) provides information about the distribution of signal energy across frequencies; however, it provides no timescale information, which matters because EEG signals are non-stationary and nonlinear [57]. Wavelets are a time–frequency technique particularly effective for non-stationary signals. They decompose the signal in both the time and frequency domains, enabling long time intervals to be used for low-frequency information and short time intervals for high-frequency information. However, accurate analysis of EEG signals requires a proper choice of the mother wavelet and an appropriate number of decomposition levels. The Hilbert–Huang transform (HHT) does not require assumptions about the linearity or stationarity of the signal; it allows for adaptive, multi-scale decomposition and does not rely on any predefined basis function. However, one of its limitations is that it is defined by an iterative algorithm rather than a closed-form mathematical formula, so the final results can be influenced by how the algorithm is implemented and by the definition of its variables and control structures [58].
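To illustrate the multi-resolution decomposition described above, the sketch below implements a Haar wavelet transform in plain NumPy. Practical EEG work more often uses Daubechies wavelets (e.g., db4) via a library such as PyWavelets; Haar is chosen here only to keep the example self-contained. Each level halves the frequency range of the retained approximation, which is how EEG sub-bands are isolated.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: returns the
    (approximation, detail) coefficient arrays."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                       # require an even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_decompose(x, levels):
    """Multi-level decomposition: the approximation is re-decomposed
    at each level, halving the retained frequency range every time."""
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        details.append(d)
    return x, details

# At fs = 128 Hz the level-1 detail spans roughly 32-64 Hz, level 2
# spans 16-32 Hz (approximately beta), level 3 spans 8-16 Hz, etc.
```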
Among the spatial techniques, common spatial patterns (CSPs) project EEG signals from multiple channels into a subspace where inter-class differences are emphasized and intra-class similarities are minimized. An alternative method, TDCSP, optimizes CSP filters and effectively reflects changes in the discriminative spatial distribution over time. However, one of its limitations is that it requires training samples and class information to calculate the linear transformation matrix; furthermore, this technique requires a large number of electrodes to be effective [59]. On the other hand, the authors of [37,60] discuss the SSVEP (steady-state visually evoked potential), a brain response generated by observing visual stimuli flashing at a constant frequency; this EEG signal is used in conjunction with deep neural networks and spatial patterns to recognize a person’s identity. Among the nonlinear techniques, entropy is robust in analyzing short data segments, is resistant to outliers, and can deal with noise through appropriate parameter tuning, making it applicable to both stochastic and deterministically chaotic signals. It offers various alternatives to characterize how the complexity of the signal changes over time and to quantify dynamic changes in events related to EEG signals. However, one of its limitations is the lack of clear guidelines for choosing the parameters m (embedding dimension) and r (similarity tolerance) before calculating the approximate or sample entropy; these parameters affect the entropy of each EEG record during different mental tasks and, in turn, the classification accuracy. Lyapunov exponents exploit the chaotic behavior of an EEG signal for classification tasks and, when combined with other linear or nonlinear features, can yield improved results. However, finding optimal parameters for calculating the Lyapunov exponents requires significant effort to enhance performance [61].
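The sketch below illustrates the sample entropy SampEn(m, r) discussed above, making the role of the parameters m (embedding dimension) and r (similarity tolerance) explicit. The default r = 0.2·std is a common heuristic, not a universal rule, and the implementation is a simplified illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series.
    m is the embedding dimension; r the similarity tolerance,
    defaulting to 0.2 * std(x) (a common heuristic)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def matches(dim):
        # count template pairs within Chebyshev distance r
        emb = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(emb) - 1):
            dist = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += int(np.count_nonzero(dist <= r))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular (periodic) signal is more predictable, hence lower entropy,
# than white noise of the same length.
t = np.linspace(0, 8 * np.pi, 400)
print(sample_entropy(np.sin(t)),
      sample_entropy(np.random.default_rng(0).standard_normal(400)))
```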
While this review provides a qualitative comparison of performance across various EEG-based biometric and emotion recognition methods, a comprehensive statistical analysis remains challenging due to the heterogeneity in datasets, preprocessing protocols, feature extraction strategies, and classification metrics reported in the literature. Differences in sample size, electrode configuration, emotional paradigms, and performance reporting standards (e.g., accuracy vs. EER vs. F1-score) make direct comparisons difficult. Future meta-analyses could aim to normalize these variables and apply statistical techniques such as effect size estimation or model ranking to identify the most robust methods under controlled criteria. Such efforts would offer a valuable contribution to guiding the selection of algorithms and pipelines for real-world EEG biometric systems.

4. Emotion Recognition and Biometric Identification Using EEG

Electroencephalography (EEG) has gained increasing attention in emotion recognition and biometric identification due to its ability to capture physiological signals that reveal characteristics unique to each individual, allowing both emotions and identity to be recognized; at the same time, information about emotional states can influence biometric measurements. This section explores three key applications of EEG signals: first, their use in biometric identification by analyzing brain wave patterns; second, their use in emotion recognition, highlighting how neural responses vary with emotional states; and finally, an integrated approach that combines biometric identification and emotion recognition. Together, these applications illustrate the versatility of EEG for creating robust and adaptive systems that leverage identity and emotional information for increased accuracy and security.

4.1. Biometric from EEG Signals

Biometrics is an area that quantifies physiological or behavioral traits to identify individuals. This capability is inherently present in humans, enabling them to recognize others by features such as voice tone, body shape, and facial characteristics. In this context, biometric verification (authentication) confirms whether a person is who they claim to be, while biometric identification determines a person’s identity from among a set of enrolled individuals. With nearly 8 billion people in the world, each distinguished by their unique identity, recognition methods fall into three main categories: (1) knowledge-based identification, which relies on information known only to the person, such as passwords, PINs, or ID numbers; (2) possession-based identification, using unique objects like ID cards, passports, or badges; and (3) biometric identification, based on distinctive physical or behavioral traits, including fingerprints, facial features, and voice patterns [62].
The predominant methodologies for biometric identification from EEG signals typically follow a comprehensive approach encompassing the following steps: (i) Data acquisition and preprocessing: EEG signals are collected using specialized devices, and preprocessing improves data quality by reducing interference from non-neural activity. Noise and irrelevant signals are removed using methods such as digital filters, decomposition techniques such as the wavelet transform, artifact-removal procedures, and independent component analysis (ICA), among others, which allow artifacts to be removed and the data to be normalized, with the aim of improving the accuracy of prediction systems for biometric identification [63,64]. Another strategy applied at this stage is epoching, which divides the EEG data into smaller segments so that specific time intervals of the signals can be analyzed [65]. An important consideration in signal acquisition is that recording EEG is not as easy as taking a fingerprint; however, acquisition equipment has become increasingly comfortable and portable, which makes minimizing the number of channels needed for biometric identification especially relevant. Visual, auditory, and olfactory stimuli are another important acquisition problem, since they affect the dynamics of EEG signals and limit generalization. Cognitive activity must also be considered, since participant fatigue can likewise alter EEG dynamics. For these reasons, most studies have been carried out under controlled conditions, which limits the application of EEG signals for biometric identification in real environments. (ii) Feature extraction: At this stage, relevant features such as amplitudes, frequencies, and temporal patterns are selected and extracted from the EEG signals using advanced signal processing techniques (some of these techniques were discussed in Section 3.3).
(iii) Predictive model construction: In this stage, the predictive model is trained and validated to identify individuals. This process is carried out using independent datasets and employing metrics to measure performance such as accuracy, sensitivity, and specificity. In addition, fine-tuning and optimization are performed in which the model parameters are adjusted to improve predictive capacity and generalization.
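Steps (i) and (ii) above can be sketched in a few lines of code. The sketch below is illustrative only, using synthetic data in place of real EEG; the 0.5–45 Hz Butterworth band-pass, the 2 s window with 50% overlap, and the channel count are hypothetical choices, not a prescribed pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(sig, fs, lo=0.5, hi=45.0, order=4):
    """Zero-phase Butterworth band-pass filter, a common EEG cleaning step."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig, axis=-1)

def epoch(sig, fs, win_s=2.0, step_s=1.0):
    """Split a (channels, samples) array into overlapping fixed-length epochs."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, sig.shape[-1] - win + 1, step)
    return np.stack([sig[:, s:s + win] for s in starts])  # (epochs, channels, win)

fs = 256                                  # hypothetical sampling rate
raw = np.random.randn(32, fs * 10)        # 32 channels, 10 s of synthetic "EEG"
clean = bandpass(raw, fs)
epochs = epoch(clean, fs)                 # 2 s windows, 50% overlap
print(epochs.shape)                       # (9, 32, 512)
```

Each resulting epoch would then be passed to the feature-extraction stage of step (ii).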
EEG signals have been studied in combination with various types of signals or traits, known as multimodal approaches. These approaches aim to improve the reliability and accuracy of biometric systems. However, these systems increase their complexity and implementation cost, which limits their use in some special applications. Some reported multimodal combinations include the following: (i) EEG + face recognition: This technique enhances security by combining brainwave patterns with facial features. An example is hybrid authentication systems designed for high-security access control. (ii) EEG + Electrooculography (EOG): Incorporates eye movement data to improve signal interpretation, particularly for eye-tracking and cognitive load assessment applications [66]. (iii) EEG + ECG (electrocardiography): This technique enhances robustness and accuracy in biometric identification by leveraging the complementary information provided by each type of signal. This approach can be advantageous as acquisition hardware evolves [67]. (iv) EEG + gait analysis: Explored for continuous authentication, especially in mobile and wearable systems [68]. (v) EEG + speech recognition: Investigated in cognitive-state authentication, particularly in stress-aware security systems, where speech characteristics combined with EEG responses improve user verification under varying conditions [1,69]. Compared to EEG-only systems, multimodal biometric approaches consistently show improved performance metrics. For example, the integration of EEG with photoplethysmography, facial images, or speech has led to measurable gains in classification accuracy and reductions in error rates, particularly under emotionally dynamic or noisy conditions. These gains are particularly pronounced in studies using the MED4 and DEAP datasets. Nevertheless, such improvements must be weighed against increased system complexity, longer setup times, and reduced user comfort. 
Therefore, the adoption of multimodal systems is often justified in applications requiring higher security or resilience, whereas EEG-only systems may remain preferable in mobile, low-intrusion contexts.
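One common way such multimodal systems combine evidence is score-level fusion, where each modality produces a match score and the scores are merged before the accept/reject decision. The modalities, weights, and threshold below are purely hypothetical assumptions for illustration:

```python
import numpy as np

def fuse_scores(score_dict, weights):
    """Weighted score-level fusion of per-modality match scores (each in [0, 1])."""
    total = sum(weights.values())
    return sum(weights[m] * s for m, s in score_dict.items()) / total

# Hypothetical match scores for one claimed identity
scores = {"eeg": 0.82, "face": 0.91, "ecg": 0.77}
weights = {"eeg": 0.5, "face": 0.3, "ecg": 0.2}   # assumed reliability weights
fused = fuse_scores(scores, weights)
decision = fused >= 0.80                           # assumed acceptance threshold
print(round(fused, 3), decision)                   # 0.837 True
```

In practice, the weights would be tuned on validation data, and feature-level or decision-level fusion are alternatives with different complexity/accuracy trade-offs.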
Additionally, it is essential to note two promising directions for biometric identification using EEG signals: transformer-based models [70] and federated learning [71]. Transformers capture spatial and temporal dependencies in EEG signals more efficiently than traditional convolutional networks, thereby improving neural pattern classification accuracy. Furthermore, some architectures have demonstrated that self-attention mechanisms can effectively identify critical regions in EEG data, optimizing the extraction of relevant features for biometric identification tasks [72]. In [70], a transformer-based model for person identification from EEG signals was proposed. It combines temporal and spatial attention to extract relationships over time and across channels, eliminating the need for manual feature extraction. This architecture makes it exceptionally robust to variations in mental states, noise, and cognitive tasks, allowing personal identification even under changing conditions. Including spatial positional encoding improves accuracy, while absolute temporal encoding can negatively impact performance, likely because it disrupts temporal invariance. Furthermore, the model benefits from the increased sample size obtained through window overlap, which mitigates the data requirements of transformer architectures. This model achieved up to 97.45% accuracy for cross-condition mental states, outperforming models such as CNN, SVM, and GCNN and demonstrating outstanding generalization capabilities. On the other hand, federated learning (FL) can preserve privacy in EEG models by allowing decentralized training of neural networks without sharing sensitive data (the EEG signals themselves); only the parameters learned by the models trained locally on each node are communicated within a multi-node system. This improves the models' generalization by taking advantage of information distributed across multiple devices.
This approach is particularly advantageous for person identification (PI) based on EEG, facilitating the widespread adoption of EEG in identity verification. Moreover, it enhances model accuracy and generalization across varying mental states and recording conditions [73]. Combining transformers and federated learning in EEG biometrics can offer more accurate, secure, and scalable systems, with applications in continuous authentication and access control in dynamic environments.
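The federated averaging idea described above can be sketched with a toy model: each client trains locally on its private data and only the learned weight vectors are averaged centrally. The logistic-regression "model", client count, and data sizes below are assumptions for illustration, not the architecture of any cited system:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """Plain logistic-regression gradient descent on one client's private features."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fedavg(w, clients, rounds=5):
    """FedAvg: clients train locally; only weight vectors are averaged centrally."""
    for _ in range(rounds):
        w = np.mean([local_update(w.copy(), X, y) for X, y in clients], axis=0)
    return w

rng = np.random.default_rng(0)
# Two hypothetical clients, each holding private (features, labels) data
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(2)]
w = fedavg(np.zeros(4), clients)
print(w.shape)  # (4,)
```

The raw arrays `X` never leave each client; only `w` is exchanged, which is the privacy property motivating FL for EEG data.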
Finally, it is essential to highlight that hybrid approaches combining learning and optimization algorithms have been shown to improve the robustness and accuracy of EEG-based biometric systems. Evolutionary algorithms, such as genetic algorithms (GAs) and particle swarm optimization (PSO), have been successfully applied to optimize convolutional neural network architectures, thereby improving convergence speed and generalization. Bayesian optimization (BO) has also been utilized to tune hyperparameters in EEG classification models, thereby enhancing the efficiency of training deep networks. Hybrid feature engineering (integrating features extracted by deep learning with features obtained by traditional feature extraction, such as power spectral density and wavelet entropy) has demonstrated enhanced discriminative ability. Furthermore, metaheuristic algorithms such as ant colony optimization (ACO) and differential evolution (DE) have been used for feature selection, reducing computational cost while maintaining high accuracy [21,74].
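As a toy illustration of the traditional side of such hybrid feature engineering, the sketch below computes band powers and spectral entropy for one channel with SciPy (spectral entropy stands in here for the wavelet entropy mentioned above, to keep the example dependency-free); the deep-learning embedding that a real hybrid system would concatenate with these features appears only as a commented placeholder:

```python
import numpy as np
from scipy.signal import welch

def handcrafted_features(sig, fs):
    """Band powers plus spectral entropy for one EEG channel (a classical feature set)."""
    f, psd = welch(sig, fs, nperseg=fs)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    powers = [np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
              for lo, hi in bands.values()]
    p = psd / psd.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))      # spectral entropy of the PSD
    return np.array(powers + [entropy])

fs = 128
sig = np.sin(2 * np.pi * 10 * np.arange(fs * 4) / fs)   # synthetic 10 Hz "alpha" rhythm
feats = handcrafted_features(sig, fs)
print(feats.shape)                                       # (4,)
# hybrid = np.concatenate([feats, deep_embedding(sig)])  # deep_embedding is hypothetical
```

For the pure 10 Hz input, the alpha band power dominates the theta and beta bands, as expected.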

4.2. Emotion Recognition from EEG Signals

Emotions are responses to stimuli, accompanied by physiological changes that predispose the body to action. They differ from feelings in that they are more intense and shorter-lived. Emotion recognition remains an open field of research, not only because of the limited volume of available datasets but also because of the diversity and complexity with which some individuals express emotions. Emotions can also be associated with disease states.
Some emotion taxonomies propose four primary emotions, while other authors identify six. A preliminary study on emotions supported the six-emotion theory, identifying sadness, surprise, anger, fear, happiness, and contempt as the primary emotions [75]. Secondary emotions are combinations of primary ones. Emotions can also be represented graphically using valence, arousal, and dominance as the axes of a coordinate space.
Valence is a fundamental dimension in constructing emotional experiences, representing the motivational aspect of emotions, ranging from pleasant to unpleasant states. It originates from different neurobiological structures, where one activates the appetitive motivational system and the other the defensive system [76]. This distinction has been widely observed in humans, primates, and other mammals by functional magnetic resonance imaging (fMRI) [77]. Closely related to valence, arousal reflects the level of energy expenditure during an emotional response and corresponds to sympathetic activation. Studies indicate that arousal is often influenced by valence, as the involvement of the appetitive or defensive system tends to increase arousal levels. Meanwhile, dominance is the most recently conceptualized dimension and refers to perceived control over emotional responses. This function, which involves inhibiting or sustaining behavioral reactions, is associated with more evolved brain structures responsible for response inhibition, contextual evaluation, and planning [78].
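The dimensional model described above can be made concrete with a toy mapping from continuous valence-arousal-dominance (VAD) estimates to discrete emotion labels. The prototype coordinates below are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

# Hypothetical VAD prototypes (valence, arousal, dominance), each in [-1, 1]
prototypes = {
    "happiness": ( 0.8,  0.5,  0.4),
    "fear":      (-0.7,  0.7, -0.6),
    "sadness":   (-0.6, -0.4, -0.3),
    "anger":     (-0.5,  0.8,  0.2),
}

def nearest_emotion(vad):
    """Map a continuous VAD estimate to the closest discrete emotion label."""
    vad = np.array(vad)
    return min(prototypes,
               key=lambda k: np.linalg.norm(np.array(prototypes[k]) - vad))

print(nearest_emotion((0.7, 0.4, 0.3)))   # → happiness
```

Regression models that predict continuous VAD values (rather than discrete classes) can be combined with a mapping like this to report categorical emotions.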
The interaction between emotions and EEG signals is fundamental in biometric identification. Emotional states can cause variations in the patterns of EEG signals, which affects the stability and reliability of biometric systems based on EEG signals. Therefore, recognition from emotional signals is relevant to be treated within biometric identification systems based on EEG signals.
One of the significant limitations in the study of emotions from EEG signals is data collection, since it requires paradigms that induce the evocation of emotions and electrode systems that are usually not easy to install. Although various public databases exist, they remain limited in size and diverse in protocol, which prevents true certainty about the generality of machine learning models designed to recognize emotional states from EEG signals. Building large datasets is therefore one of the major challenges in this field. The databases most frequently reported in the literature are as follows:
(i) DEAP dataset: This database contains EEG signals and peripheral physiological signals from 32 participants. Each participant watched 40 one-minute excerpts of music videos, which were rated for levels of arousal, valence, liking, dominance, and familiarity. Half of the 32 participants were female, and ages ranged from 19 to 37 years, with a mean of 26.9. Peripheral recordings included EOG, four EMG signals (from the zygomaticus major and trapezius muscles), GSR, BVP (blood volume pulse), temperature, and respiration. Additionally, frontal face video was recorded for 22 participants. The EEG signals were recorded from 32 channels at a sampling rate of 512 Hz [79].
(ii) MAHNOB database: This database includes 27 participants (11 males and 16 females). EEG recordings were taken from 32 channels at a sampling rate of 256 Hz. Additionally, videos of the participants' faces and bodies were recorded using six cameras at 60 frames per second (fps), eye-gaze data were recorded at 60 Hz, and audio was recorded at a sampling rate of 44.1 kHz. During the recording, 20 videos were presented to the participants, and for each video, emotional keywords, arousal, valence, dominance, and predictability were assessed on a rating scale ranging from 1 to 9 [80].
(iii) The SEED database consists of EEG signals from seven men and eight women, with a mean age of 23.27 years (standard deviation 2.37). Fifteen clips from Chinese movies were selected as stimulus material to elicit positive, negative, and neutral emotions, and each experiment consists of 15 trials. The EEG signals were captured with a cap following the international 10-20 system for 62 channels, then preprocessed and downsampled to 200 Hz. The label for each signal corresponds to the elicited emotion (−1 for negative, 0 for neutral, and +1 for positive) [81].
(iv) LUMED-2 (Loughborough University Multimodal Emotion Database-2) is a multimodal emotion dataset containing simultaneous multimodal data from 13 participants (6 women and 7 men) presented with audiovisual stimuli. The total duration of all stimuli is 8 min and 50 s, consisting of short video clips selected from the web to provoke specific emotions. After each session, participants were asked to label the clips with the emotional states they experienced while watching them; the labeling yielded three emotions: “sad,” “neutral,” and “happy.” The participants’ facial expressions were captured using a webcam at a resolution of 640 × 480 and 30 frames per second (fps). EEG data were captured using ENOBIO, an 8-channel wireless EEG device, at a temporal resolution of 500 Hz; the data were filtered to the frequency range [0, 75] Hz, and baseline subtraction was applied for each window. Regarding peripheral physiological data, an EMPATICA E4 wristband connected via Bluetooth was used to record the participants’ GSR (galvanic skin response) [82].
The datasets described above have been used in numerous data science studies on emotion identification and, more recently, on biometric identification from EEG signals. They offer a wide variety of physiological signals and emotional stimuli for training and evaluating classification models, and such diversity is essential for the advancement of emotion recognition systems because it reflects real-world scenarios and varied emotional expressions. The limitation noted above regarding true generality remains, however, given the differences between datasets in sampling frequency, acquisition equipment, and experimental paradigm, among other factors. Researchers have nevertheless experimented with these datasets by applying various preprocessing methods, feature extraction techniques, and classification models to improve the accuracy and generality of emotion recognition systems.
Table 4 presents an overview of emotion prediction from EEG signals, listing each study’s preprocessing, feature extraction, and classification techniques. Preprocessing steps include a variety of filtering techniques, such as Laplacian surface filtering, Butterworth filters, and blind source separation, as well as more advanced methods such as artifact subspace reconstruction (ASR). For feature extraction, approaches such as the wavelet transform, principal component analysis (PCA), higher-order spectral analysis (HOSA), and entropy-based methods are used. Classification methods vary, with techniques such as linear discriminant analysis (LDA), support vector machines (SVMs), k-nearest neighbors (k-NN), quadratic discriminant analysis (QDA), and fuzzy cognitive maps (FCMs) commonly applied. The reported methods have demonstrated high accuracy in identifying emotional states from EEG signals, highlighting the effectiveness of various combinations of preprocessing, feature extraction, and classification for emotion recognition. It is also worth highlighting efforts to simplify emotion identification by limiting the number of channels, such as the study in [83], which achieved 91.88% accuracy on the MAHNOB database using support vector machines for classification.
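To make the preprocessing/feature-extraction/classification pipeline concrete, the sketch below runs a minimal end-to-end version on synthetic data, using FFT band powers as features and a k-NN classifier (one of the baselines mentioned above). Every parameter here, including the two-class synthetic "alpha power" contrast, is an illustrative assumption rather than a method from any cited study:

```python
import numpy as np

def band_power_features(epochs, fs):
    """Mean log band power per channel via FFT — a minimal feature extractor."""
    spec = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[-1], 1 / fs)
    bands = [(4, 8), (8, 13), (13, 30)]              # theta, alpha, beta
    feats = [np.log(spec[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1) + 1e-12)
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1).reshape(len(epochs), -1)

def knn_predict(train_X, train_y, test_X, k=3):
    """k-NN classifier by majority vote among the k nearest training samples."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in train_y[nn]])

rng = np.random.default_rng(1)
fs, n = 128, 40
t = np.arange(fs * 2) / fs
def make(amp):  # synthetic single-channel epochs with class-specific alpha amplitude
    return np.stack([amp * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)
                     for _ in range(n)])[:, None, :]
X = np.concatenate([make(0.5), make(3.0)])      # class 0: weak alpha, class 1: strong
y = np.repeat([0, 1], n)
F = band_power_features(X, fs)
pred = knn_predict(F[::2], y[::2], F[1::2])     # simple even/odd train/test split
print((pred == y[1::2]).mean())                 # classification accuracy
```

Real studies replace each stage with the techniques listed in Table 4 and evaluate with subject-independent splits rather than this toy even/odd partition.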
One of the most pressing challenges in EEG-based biometric systems is the effect of emotional variability on signal stability. While many traditional models are trained using emotionally neutral data, emotional states can alter EEG patterns significantly, leading to reduced intrasubject consistency. Emerging approaches have started to incorporate emotion-labeled EEG datasets and emotion-aware preprocessing to improve robustness. Some deep learning architectures, especially those using attention mechanisms or recurrent layers, show promise in extracting features that are less sensitive to affective changes. Nevertheless, current systems still lack full generalizability across emotional states, highlighting the need for adaptive and context-aware biometric frameworks.

4.3. Emotion-Aware Biometric Identification

The EEG signals capture information about brain activity. They are non-stationary, meaning they change over time and are influenced by factors such as emotions, thoughts, and activities [93]. Emotion-aware biometric identification is an emerging and relatively unexplored area that seeks to improve biometric identification systems by incorporating emotional states; for example, in [94], ECG signals were applied to both biometric identification and emotion identification. Although traditional biometric systems focus solely on stable physiological signals, recent studies have investigated how emotions influence these signals, particularly EEG data, to improve the robustness of identification methods [95]. Some approaches in this field have utilized datasets that include conditions such as driving fatigue and various emotional states, as well as artificially induced brain responses such as rapid serial visual presentation (RSVP). However, while these datasets involve fatigue and emotional conditions, these factors are not explicitly analyzed for their impact on biometric identification. In [96], a convolutional neural network model, GSLT-CNN, is presented that directly processes raw EEG data without requiring feature engineering; this model was evaluated on a dataset of 157 subjects across four experiments. In contrast, [97] focuses on using olfactory stimuli, such as specific aromas, to evoke emotional responses that affect brainwave patterns, thereby uncovering unique aspects of individual identity. Nevertheless, some studies have demonstrated the variability in EEG signals evoked by olfactory and tactile stimuli [98,99].
Other research efforts have leveraged emotion-specific datasets designed to capture variations in emotional states for biometric identification purposes. These datasets enable researchers to identify distinct neural responses associated with emotions, providing valuable insights into how emotional context can be integrated into biometric systems [95,100]. Although still in its early stages, emotion-aware biometric identification has the potential to create adaptive, resilient systems that account for the dynamic nature of human emotions. This area of research has been controversial. Zhang et al. [101] investigated the use of emotional EEG for identification purposes and found that emotion did not affect identification accuracy when using 12-second EEG segments. However, as mentioned by Wang et al. [95], the robustness of this method across different emotional states was not verified.

4.4. Practical Limitations and Real-World Considerations in Emotion-Aware EEG Biometric Systems

Although the small number of available studies suggests that emotion-aware EEG biometric systems are promising for identifying individuals across varying affective states, their implementation in real-world settings introduces several practical challenges. First, emotional variability can significantly alter EEG patterns, reducing intrasubject consistency and affecting identification accuracy [70]. Furthermore, wearability and user compliance remain critical issues: most EEG systems require uncomfortable electrodes or headsets, limiting their daily usability [102]. Moreover, EEG signals are susceptible to physiological and environmental noise, such as muscle artifacts, eye movements, and ambient electrical interference, which are harder to control outside laboratory conditions [103]. Session-to-session variability, caused by changes in electrode placement or the user’s mental state, also affects reproducibility [104]. Beyond technical concerns, ethical and privacy considerations arise from the fact that EEG signals can reveal both identity and cognitive or emotional states, raising concerns about consent and potential misuse [105]. In this context, federated learning has emerged as a promising strategy to address data privacy while allowing collaborative training between users or institutions without exposing raw EEG data [106]. However, even this approach requires careful consideration of communication costs, system heterogeneity, and model convergence under diverse emotional and behavioral conditions. All of these issues must be resolved before EEG signals can become a practical alternative for person identification (PI).

4.5. Ethical Considerations

The deployment of EEG-based emotion recognition systems entails significant ethical considerations, particularly concerning the management and privacy of emotional data derived from individuals’ unique and sensitive brain patterns. Such data could be misused to manipulate or influence individuals without their explicit consent, leading to risks of exploitation, discrimination, and mass surveillance. Therefore, it is essential to implement robust data protection measures, including anonymization and encryption, and establish clear regulatory frameworks defining acceptable uses of these technologies [17,107]. Biometric data, including EEG-derived information, is governed by various national and international regulations to protect individual privacy and rights. For instance, the European Union’s General Data Protection Regulation (GDPR) classifies biometric data as sensitive personal information. It imposes strict requirements for its processing, including obtaining explicit consent from individuals and restricting data use to legitimate, specified purposes. Similarly, the California Consumer Privacy Act (CCPA) recognizes neural data as sensitive personal information, subjecting it to stringent data protection obligations. At the international level, frameworks such as the United Nations Guiding Principles on Business and Human Rights provide ethical guidelines to prevent the misuse of biometric technologies. Adhering to these regulations safeguards users and fosters trust in developing innovative EEG-based technologies, promoting their responsible adoption and mitigating negative social consequences, such as emotion-based discrimination or mass surveillance [108,109].

5. Conclusions and Future Work

EEG-based biometric identification and emotion recognition face multiple challenges that impact their effectiveness and practical application. For biometric identification, a primary difficulty lies in the high variability of EEG features across sessions for the same individual, which complicates consistent identification [27]. Additional challenges include limitations related to the number of channels and temporal windows used and the risk of overfitting in deep learning models [35]. The complex data collection and computational requirements inherent in multi-channel EEG setups further complicate the deployment of these systems [31]. Moreover, identifying robust features from non-stationary EEG signals that are sufficiently discriminatory remains a significant hurdle [34], along with fundamental concerns around privacy, user-friendliness, and authentication standards [110]. Similarly, EEG-based emotion recognition encounters unique obstacles, particularly due to the variability of EEG signals between individuals, which challenges model generalization across unseen subjects [111]. Issues in data processing, generalizability, and the integration of these models into human–computer interaction frameworks present further difficulties [112]. The neural complexity of emotions and individual differences add another layer of complexity to emotion recognition models [113]. Furthermore, the use of single features, redundant signals, and the high number of channels required for effective recognition limit the accuracy and portability of these systems [114]. The feature redundancy and computational demands complicate implementation in wearable devices, underscoring the need for efficient, channel-optimized solutions [115]. In this context, emotion-aware biometric identification emerges as an innovative yet challenging approach, aiming to integrate emotional states into biometric systems to enhance robustness and adaptability. 
These systems could achieve more accurate and personalized identification by incorporating emotion recognition, especially in dynamic environments. However, achieving reliable emotion-aware biometric identification requires addressing additional challenges, such as emotional variability across sessions and individual differences in emotional expression. Future research should focus on optimizing feature selection methods to manage both the non-stationary nature of EEG signals and the influence of transient emotional states. Developing lightweight, high-performance models that integrate biometric and emotional data could open new avenues for secure, adaptive identification systems, particularly in applications where user engagement and real-time adaptability are critical.
Additionally, it is important to highlight that emotions are not the only factors significantly influencing biometric identification from EEG signals. Therefore, future work should consider multiple contextual elements that may affect the dynamics of EEG signals, such as the individual’s health status, the presence of neurological or psychiatric disorders, the level of fatigue, substance use, age [45], and even environmental factors such as noise or lighting. Ignoring these sources of variability could compromise the robustness and generalization of identification models, so future research should consider these aspects to develop systems that are more resilient and adaptive to the diversity of human conditions. Finally, future research should further explore the integration of transformer-based models and federated learning in EEG-based biometric systems. These techniques offer considerable advantages in terms of accuracy, robustness to emotional and contextual variability, and data privacy. Transformer architectures are especially well suited for capturing complex temporal and spatial dynamics in EEG signals, while federated learning provides a secure and scalable framework for training models across distributed datasets. Combining these methods could pave the way for highly adaptable and ethically responsible biometric identification systems suitable for real-world deployment.

Author Contributions

M.A.B., C.D.-M., C.M. and A.C.-O.: conceptualization, methodology, validation, formal analysis, investigation, resources, writing—original draft preparation, writing—review and editing; M.A.B., L.S.-G. and E.D.-G.: conceptualization, methodology, investigation, supervision; M.A.B. and C.M.: methodology, investigation, resources, writing—review and editing, supervision, project administration, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Institución Universitaria Pascual Bravo.

Acknowledgments

The authors would like to thank the contributions of the research project titled “Framework de fusión de la información orientado a la protección de fraudes usando estrategias adaptativas de autenticación y validación de identidad,” supported by Institución Universitaria Pascual Bravo.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zapata, J.C.; Duque, C.M.; Gonzalez, M.E.; Becerra, M.A. Data Fusion Applied to Biometric Identification—A Review. Adv. Comput. Data Sci. 2017, 721, 721–733. [Google Scholar] [CrossRef]
  2. Zhong, W.; An, X.; Di, Y.; Zhang, L.; Ming, D. Review on identity feature extraction methods based on electroencephalogram signals. J. Biomed. Eng. 2021, 38, 1203–1210. [Google Scholar] [CrossRef]
  3. Apicella, A.; Arpaia, P.; Frosolone, M.; Improta, G.; Moccaldi, N.; Pollastro, A. EEG-based measurement system for monitoring student engagement in learning 4.0. Sci. Rep. 2022, 12, 5857. [Google Scholar] [CrossRef] [PubMed]
  4. Aslan, M.; Baykara, M.; Alakuş, T.B. LSTMNCP: Lie detection from EEG signals with novel hybrid deep learning method. Multimed. Tools Appl. 2023, 83, 31655–31671. [Google Scholar] [CrossRef]
  5. Galli, G.; Angelucci, D.; Bode, S.; Giorgi, C.D.; Sio, L.D.; Paparo, A.; Lorenzo, G.D.; Betti, V. Early EEG responses to pre-electoral survey items reflect political attitudes and predict voting behavior. Sci. Rep. 2021, 11, 18692. [Google Scholar] [CrossRef] [PubMed]
  6. Belhadj, F. Biometric System for Identification and Authentication. Doctoral Thesis, École Nationale Supérieure d’Informatique, Algiers, Algeria, 2017. Available online: https://hal.science/tel-01456829v1/location (accessed on 15 July 2025).
  7. Moreno-Revelo, M.; Ortega-Adarme, M.; Peluffo-Ordoñez, D.H.; Alvarez-Uribe, K.C.; Becerra, M.A. Comparison Among Physiological Signals for Biometric Identification. In Intelligent Data Engineering and Automated Learning—IDEAL 2017; Yin, H., Gao, Y., Chen, S., Wen, Y., Cai, G., Gu, T., Du, J., Tallón-Ballesteros, A.J., Zhang, M., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 436–443. [Google Scholar]
  8. Balci, F. DM-EEGID: EEG-Based Biometric Authentication System Using Hybrid Attention-Based LSTM and MLP Algorithm. Trait. Du Signal 2023, 40, 1–14. [Google Scholar]
  9. Alyasseri, Z.A.A.; Alomari, O.A.; Makhadmeh, S.N.; Mirjalili, S.; Al-Betar, M.A.; Abdullah, S.; Ali, N.S.; Papa, J.P.; Rodrigues, D.; Abasi, A.K. EEG Channel Selection for Person Identification Using Binary Grey Wolf Optimizer. IEEE Access 2022, 10, 10500–10513. [Google Scholar] [CrossRef]
  10. Abdi Alkareem Alyasseri, Z.; Alomari, O.A.; Al-Betar, M.A.; Awadallah, M.A.; Hameed Abdulkareem, K.; Abed Mohammed, M.; Kadry, S.; Rajinikanth, V.; Rho, S. EEG Channel Selection Using Multiobjective Cuckoo Search for Person Identification as Protection System in Healthcare Applications. Comput. Intell. Neurosci. 2022, 2022, 5974634. [Google Scholar] [CrossRef] [PubMed]
  11. Lai, C.Q.; Ibrahim, H.; Abdullah, M.Z.; Suandi, S.A. EEG-Based Biometric Close-Set Identification Using CNN-ECOC-SVM. In Artificial Intelligence in Data and Big Data Processing; Springer Nature: Berlin, Germany, 2022; pp. 723–732. [Google Scholar] [CrossRef]
  12. Sun, Y.; Lo, P.-W.; Lo, B.L. EEG-based user identification system using 1D-convolutional long short-term memory neural networks. Expert Syst. Appl. 2019, 125, 259–267. [Google Scholar] [CrossRef]
  13. Radwan, S.H.; El-Telbany, M.; Arafa, W.; Ali, R.A. Deep Learning Approaches for Personal Identification Based on EEG Signals. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2021, Cairo, Egypt, 11–13 December 2021; Lecture Notes on Data Engineering and Communications Technologies. Springer Nature: Berlin, Germany, 2022; Volume 100, pp. 30–39. [Google Scholar] [CrossRef]
  14. Hendrawan, M.A.; Saputra, P.Y.; Rahmad, C. Identification of optimum segment in single channel EEG biometric system. Indones. J. Electr. Eng. Comput. Sci. 2021, 23, 1847–1854. [Google Scholar] [CrossRef]
  15. Kulkarni, D.; Dixit, V.V. Hybrid classification model for emotion detection using electroencephalogram signal with improved feature set. Biomed. Signal Process. Control 2025, 100, 106893. [Google Scholar] [CrossRef]
  16. Wang, C.; Li, Y.; Liu, S.; Yang, S. TVRP-based constructing complex network for EEG emotional feature analysis and recognition. Biomed. Signal Process. Control 2024, 96, 106606. [Google Scholar] [CrossRef]
  17. Kumar, A.; Kumar, A. DEEPHER: Human Emotion Recognition Using an EEG-Based DEEP Learning Network Model. Eng. Proc. 2021, 10, 32. [Google Scholar] [CrossRef]
  18. Sakalle, A.; Tomar, P.; Bhardwaj, H.; Acharya, D.; Bhardwaj, A. A LSTM based deep learning network for recognizing emotions using wireless brainwave driven system. Expert Syst. Appl. 2021, 173, 114516. [Google Scholar] [CrossRef]
  19. Galvão, F.; Alarcão, S.M.; Fonseca, M.J. Predicting exact valence and arousal values from EEG. Sensors 2021, 21, 3414. [Google Scholar] [CrossRef] [PubMed]
  20. Özerdem, M.S.; Polat, H. Emotion recognition based on EEG features in movie clips with channel selection. Brain Inform. 2017, 4, 241–252. [Google Scholar] [CrossRef] [PubMed]
  21. Maiorana, E. Deep learning for EEG-based biometric recognition. Neurocomputing 2020, 410, 374–386. [Google Scholar] [CrossRef]
  22. Prabowo, D.W.; Nugroho, H.A.; Setiawan, N.A.; Debayle, J. A systematic literature review of emotion recognition using EEG signals. Cogn. Syst. Res. 2023, 82, 101152. [Google Scholar] [CrossRef]
  23. Wang, Q.; Wang, M.; Yang, Y.; Zhang, X. Multi-modal emotion recognition using EEG and speech signals. Comput. Biol. Med. 2022, 149, 105907. [Google Scholar] [CrossRef] [PubMed]
  24. Kouka, N.; Fourati, R.; Fdhila, R.; Siarry, P.; Alimi, A.M. EEG channel selection-based binary particle swarm optimization with recurrent convolutional autoencoder for emotion recognition. Biomed. Signal Process. Control 2023, 84, 104783. [Google Scholar] [CrossRef]
  25. Huang, G.; Hu, Z.; Chen, W.; Zhang, S.; Liang, Z.; Li, L.; Zhang, L.; Zhang, Z. M3CV: A multi-subject, multi-session, and multi-task database for EEG-based biometrics challenge. NeuroImage 2022, 264, 119666. [Google Scholar] [CrossRef] [PubMed]
  26. Ortega-Rodríguez, J.; Gómez-González, J.F.; Pereda, E. Selection of the Minimum Number of EEG Sensors to Guarantee Biometric Identification of Individuals. Sensors 2023, 23, 4239. [Google Scholar] [CrossRef] [PubMed]
  27. Benomar, M.; Cao, S.; Vishwanath, M.; Vo, K.; Cao, H. Investigation of EEG-Based Biometric Identification Using State-of-the-Art Neural Architectures on a Real-Time Raspberry Pi-Based System. Sensors 2022, 22, 9547. [Google Scholar] [CrossRef] [PubMed]
28. Tian, W.; Li, M.; Hu, D. Multi-band Functional Connectivity Features Fusion Using Multi-stream GCN for EEG Biometric Identification. In Proceedings of the 2022 International Conference on Autonomous Unmanned Systems (ICAUS 2022), Xi’an, China, 23–25 September 2022; Springer Nature: Berlin, Germany, 2023; pp. 3196–3203. [Google Scholar] [CrossRef]
  29. Kralikova, I.; Babusiak, B.; Smondrk, M. EEG-Based Person Identification during Escalating Cognitive Load. Sensors 2022, 22, 7154. [Google Scholar] [CrossRef] [PubMed]
  30. Wibawa, A.D.; Mohammad, B.S.Y.; Fata, M.A.K.; Nuraini, F.A.; Prasetyo, A.; Pamungkas, Y. Comparison of EEG-Based Biometrics System Using Naive Bayes, Neural Network, and Support Vector Machine. In Proceedings of the 2022 International Conference on Electrical and Information Technology (IEIT), Malang, Indonesia, 15–16 September 2022; pp. 408–413. [Google Scholar] [CrossRef]
  31. Hendrawan, M.A.; Rosiani, U.D.; Sumari, A.D.W. Single Channel Electroencephalogram (EEG) Based Biometric System. In Proceedings of the 2022 IEEE 8th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 19–21 October 2022; pp. 307–311. [Google Scholar] [CrossRef]
  32. Jijomon, C.M.; Vinod, A.P. EEG-based Biometric Identification using Frequently Occurring Maximum Power Spectral Features. In Proceedings of the 2018 IEEE Applied Signal Processing Conference (ASPCON), Kolkata, India, 7–9 December 2018; pp. 249–252. [Google Scholar] [CrossRef]
  33. Waili, T.; Johar, M.G.M.; Sidek, K.A.; Nor, N.S.H.M.; Yaacob, H.; Othman, M. EEG Based Biometric Identification Using Correlation and MLPNN Models. Int. J. Online Biomed. Eng. (iJOE) 2019, 15, 77–90. [Google Scholar] [CrossRef]
34. Monsy, J.C.; Vinod, A.P. EEG-based biometric identification using frequency-weighted power feature. IET Biom. 2020, 9, 251–258. [Google Scholar] [CrossRef]
  35. Alsumari, W.; Hussain, M.; Alshehri, L.; Aboalsamh, H.A. EEG-Based Person Identification and Authentication Using Deep Convolutional Neural Network. Axioms 2023, 12, 74. [Google Scholar] [CrossRef]
  36. TajDini, M.; Sokolov, V.; Kuzminykh, I.; Ghita, B. Brainwave-based authentication using features fusion. Comput. Secur. 2023, 129, 103198. [Google Scholar] [CrossRef]
  37. Oikonomou, V.P. Human Recognition Using Deep Neural Networks and Spatial Patterns of SSVEP Signals. Sensors 2023, 23, 2425. [Google Scholar] [CrossRef] [PubMed]
  38. Bak, S.J.; Jeong, J. User Biometric Identification Methodology via EEG-Based Motor Imagery Signals. IEEE Access 2023, 11, 41303–41314. [Google Scholar] [CrossRef]
  39. Ortega-Rodríguez, J.; Martín-Chinea, K.; Gómez-González, J.F.; Pereda, E. Brainprint based on functional connectivity and asymmetry indices of brain regions: A case study of biometric person identification with non-expensive electroencephalogram headsets. IET Biom. 2023, 12, 129–145. [Google Scholar] [CrossRef]
  40. Cui, G.; Li, X.; Touyama, H. Emotion recognition based on group phase locking value using convolutional neural network. Sci. Rep. 2023, 13, 3769. [Google Scholar] [CrossRef] [PubMed]
  41. Khubani, J.; Kulkarni, S. Inventive deep convolutional neural network classifier for emotion identification in accordance with EEG signals. Soc. Netw. Anal. Min. 2023, 13, 34. [Google Scholar] [CrossRef]
  42. Zali-Vargahan, B.; Charmin, A.; Kalbkhani, H.; Barghandan, S. Deep time-frequency features and semi-supervised dimension reduction for subject-independent emotion recognition from multi-channel EEG signals. Biomed. Signal Process. Control 2023, 85, 104806. [Google Scholar] [CrossRef]
  43. Vahid, A.; Arbabi, E. Human identification with EEG signals in different emotional states. In Proceedings of the 2016 23rd Iranian Conference on Biomedical Engineering and 2016 1st International Iranian Conference on Biomedical Engineering, ICBME 2016, Tehran, Iran, 24–25 November 2016; pp. 242–246. [Google Scholar] [CrossRef]
  44. Arnau-González, P.; Arevalillo-Herráez, M.; Katsigiannis, S.; Ramzan, N. On the Influence of Affect in EEG-Based Subject Identification. IEEE Trans. Affect. Comput. 2021, 12, 391–401. [Google Scholar] [CrossRef]
  45. Kaur, B.; Kumar, P.; Roy, P.P.; Singh, D. Impact of Ageing on EEG Based Biometric Systems. In Proceedings of the 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR), Nanjing, China, 26–29 November 2017; pp. 459–464. [Google Scholar] [CrossRef]
46. Diamond, M.C.; Scheibel, A.B.; Elson, L.M. El Cerebro Humano: Libro de Trabajo (The Human Brain: A Workbook); Editorial Ariel: Barcelona, Spain, 2014. Available online: https://books.google.co.th/books/about/El_cerebro_humano_libro_de_trabajo.html?id=n25VCwAAQBAJ&redir_esc=y (accessed on 15 July 2025).
  47. Romeo Urrea, H. El Dominio de los Hemisferios Cerebrales. Cienc. Unemi 2015, 3, 8–15. [Google Scholar] [CrossRef]
  48. Buzsáki, G.; Draguhn, A. Neuronal Oscillations in Cortical Networks. Science 2004, 304, 1926–1929. [Google Scholar] [CrossRef] [PubMed]
  49. Hestermann, E.; Schreve, K.; Vandenheever, D. Enhancing Deep Sleep Induction Through a Wireless In-Ear EEG Device Delivering Binaural Beats and ASMR: A Proof-of-Concept Study. Sensors 2024, 24, 7471. [Google Scholar] [CrossRef] [PubMed]
  50. Candela-Leal, M.O.; Alanis-Espinosa, M.; Murrieta-González, J.; de J. Lozoya-Santos, J.; Ramírez-Moreno, M.A. Neural signatures of STEM learning and interest in youth. Acta Psychol. 2025, 255, 104949. [Google Scholar] [CrossRef] [PubMed]
51. Chen, S.; Tan, Z.; Xia, W.; Gomes, C.A.; Zhang, X.; Zhou, W.; Liang, S.; Axmacher, N.; Wang, L. Theta oscillations synchronize human medial prefrontal cortex and amygdala during fear learning. Sci. Adv. 2021, 7, eabf4198. [Google Scholar] [CrossRef] [PubMed]
  52. Mikicin, M.; Kowalczyk, M. Audio-Visual and Autogenic Relaxation Alter Amplitude of Alpha EEG Band, Causing Improvements in Mental Work Performance in Athletes. Appl. Psychophysiol. Biofeedback 2015, 40, 219–227. [Google Scholar] [CrossRef] [PubMed]
  53. Lundqvist, M.; Herman, P.; Warden, M.R.; Brincat, S.L.; Miller, E.K. Gamma and beta bursts during working memory readout suggest roles in its volitional control. Nat. Commun. 2018, 9, 394. [Google Scholar] [CrossRef] [PubMed]
  54. Hsu, H.H.; Yang, Y.R.; Chou, L.W.; Huang, Y.C.; Wang, R.Y. The Brain Waves During Reaching Tasks in People With Subacute Low Back Pain: A Cross-Sectional Study. IEEE Trans. Neural Syst. Rehabil. Eng. 2025, 33, 183–190. [Google Scholar] [CrossRef] [PubMed]
55. Subha, D.P.; Joseph, P.K.; Acharya U., R.; Lim, C.M. EEG Signal Analysis: A Survey. J. Med. Syst. 2010, 34, 195–212. [Google Scholar] [CrossRef] [PubMed]
  56. Acharya, D.; Lende, M.; Lathia, K.; Shirgurkar, S.; Kumar, N.; Madrecha, S.; Bhardwaj, A. Comparative Analysis of Feature Extraction Technique on EEG-Based Dataset. In Soft Computing for Problem Solving; Springer Nature: Berlin, Germany, 2020; pp. 405–416. [Google Scholar] [CrossRef]
  57. Zhang, H.; Zhou, Q.Q.; Chen, H.; Hu, X.Q.; Li, W.G.; Bai, Y.; Han, J.X.; Wang, Y.; Liang, Z.H.; Chen, D.; et al. The applied principles of EEG analysis methods in neuroscience and clinical neurology. Mil. Med. Res. 2023, 10, 67. [Google Scholar] [CrossRef] [PubMed]
  58. Colominas, M.A.; Schlotthauer, G.; Torres, M.E. Improved complete ensemble EMD: A suitable tool for biomedical signal processing. Biomed. Signal Process. Control 2014, 14, 19–29. [Google Scholar] [CrossRef]
  59. Jaipriya, D.; Sriharipriya, K.C. Brain Computer Interface-Based Signal Processing Techniques for Feature Extraction and Classification of Motor Imagery Using EEG: A Literature Review. Biomed. Mater. Devices 2024, 2, 601–613. [Google Scholar] [CrossRef]
  60. Oikonomou, V.P. A Sparse Representation Classification Framework for Person Identification and Verification Using Neurophysiological Signals. Electronics 2025, 14, 1108. [Google Scholar] [CrossRef]
  61. Egorova, L.; Kazakovtsev, L.; Vaitekunene, E. Nonlinear Features and Hybrid Optimization Algorithm for Automated Electroencephalogram Signal Analysis. In Mathematical Modeling in Physical Sciences; Springer Nature: Berlin, Germany, 2024; pp. 233–243. [Google Scholar] [CrossRef]
  62. Jain, A.K.; Ross, A.; Nandakumar, K. An introduction to biometrics. In Proceedings of the International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; p. 1. [Google Scholar] [CrossRef]
  63. Kaliraman, B.; Nain, S.; Verma, R.; Dhankhar, Y.; Hari, P.B. Pre-processing of EEG signal using Independent Component Analysis. In Proceedings of the 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 13–14 October 2022; pp. 1–5. [Google Scholar] [CrossRef]
  64. Yamashita, M.; Nakazawa, M.; Nishikawa, Y.; Abe, N. Examination and It’s Evaluation of Preprocessing Method for Individual Identification in EEG. J. Inf. Process. 2020, 28, 239–246. [Google Scholar] [CrossRef]
  65. Bhawna, K.; Priyanka; Duhan, M. Electroencephalogram Based Biometric System: A Review. Lect. Notes Electr. Eng. 2021, 668, 57–77. [Google Scholar] [CrossRef]
  66. Mishra, A.; Bhateja, V.; Gupta, A.; Mishra, A.; Satapathy, S.C. Feature Fusion and Classification of EEG/EOG Signals. In Soft Computing and Signal Processing; Springer Nature: Berlin, Germany, 2019; pp. 793–799. [Google Scholar] [CrossRef]
  67. Barra, S.; Casanova, A.; Fraschini, M.; Nappi, M. EEG/ECG Signal Fusion Aimed at Biometric Recognition. In New Trends in Image Analysis and Processing—ICIAP 2015 Workshops; Springer Nature: Berlin, Germany, 2015; pp. 35–42. [Google Scholar] [CrossRef]
  68. Zhang, X.; Yao, L.; Huang, C.; Gu, T.; Yang, Z.; Liu, Y. DeepKey. ACM Trans. Intell. Syst. Technol. 2020, 11, 1–24. [Google Scholar] [CrossRef]
  69. Moreno-Rodriguez, J.C.; Ramirez-Cortes, J.M.; Atenco-Vazquez, J.C.; Arechiga-Martinez, R. EEG and voice bimodal biometric authentication scheme with fusion at signal level. In Proceedings of the 2021 IEEE Mexican Humanitarian Technology Conference (MHTC), Puebla, Mexico, 21–22 April 2021; pp. 52–58. [Google Scholar] [CrossRef]
  70. Du, Y.; Xu, Y.; Wang, X.; Liu, L.; Ma, P. EEG temporal–spatial transformer for person identification. Sci. Rep. 2022, 12, 14378. [Google Scholar] [CrossRef] [PubMed]
  71. Lin, L.; Zhao, Y.; Meng, J.; Zhao, Q. A Federated Attention-Based Multimodal Biometric Recognition Approach in IoT. Sensors 2023, 23, 6006. [Google Scholar] [CrossRef] [PubMed]
  72. Delvigne, V.; Wannous, H.; Vandeborre, J.P.; Ris, L.; Dutoit, T. Spatio-Temporal Analysis of Transformer based Architecture for Attention Estimation from EEG. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022; pp. 1076–1082. [Google Scholar] [CrossRef]
  73. Fidas, C.A.; Lyras, D. A Review of EEG-Based User Authentication: Trends and Future Research Directions. IEEE Access 2023, 11, 22917–22934. [Google Scholar] [CrossRef]
  74. Moctezuma, L.A.; Molinas, M. Towards a minimal EEG channel array for a biometric system using resting-state and a genetic algorithm for channel selection. Sci. Rep. 2020, 10, 14917. [Google Scholar] [CrossRef] [PubMed]
75. Carla, F.; Yanina, W.; Daniel Gustavo, P. ¿Cuántas Son Las Emociones Básicas? (How Many Basic Emotions Are There?). Anu. Investig. 2017, 26, 253–257. [Google Scholar]
  76. LeDoux, J.E. Emotion Circuits in the Brain. Annu. Rev. Neurosci. 2000, 23, 155–184. [Google Scholar] [CrossRef] [PubMed]
  77. Bradley, M.M. Natural selective attention: Orienting and emotion. Psychophysiology 2009, 46, 1–11. [Google Scholar] [CrossRef] [PubMed]
78. Carlos Gantiva, K.C. Características de la respuesta emocional generada por las palabras: Un estudio experimental desde la emoción y la motivación (Characteristics of the emotional response generated by words: An experimental study of emotion and motivation). Psychol. Av. Discip. 2016, 10, 55–62. [Google Scholar]
  79. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  80. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55. [Google Scholar] [CrossRef]
  81. Zheng, W.L.; Guo, H.T.; Lu, B.L. Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network. In Proceedings of the 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), Montpellier, France, 22–24 April 2015; pp. 154–157. [Google Scholar] [CrossRef]
  82. Ekmekcioglu, E.; Cimtay, Y. Loughborough University Multimodal Emotion Dataset-2; Loughborough University: Loughborough, UK, 2020. [Google Scholar] [CrossRef]
  83. Li, J.W.; Lin, D.; Che, Y.; Lv, J.J.; Chen, R.J.; Wang, L.J.; Zeng, X.X.; Ren, J.C.; Zhao, H.M.; Lu, X. An innovative EEG-based emotion recognition using a single channel-specific feature from the brain rhythm code method. Front. Neurosci. 2023, 17, 1221512. [Google Scholar] [CrossRef] [PubMed]
  84. Murugappan, M.; Ramachandran, N.; Sazali, Y. Classification of human emotion from EEG using discrete wavelet transform. J. Biomed. Sci. Eng. 2010, 3, 390–396. [Google Scholar] [CrossRef]
  85. Lee, Y.Y.; Hsieh, S. Classifying Different Emotional States by Means of EEG-Based Functional Connectivity Patterns. PLoS ONE 2014, 9, e95415. [Google Scholar] [CrossRef] [PubMed]
  86. Iacoviello, D.; Petracca, A.; Spezialetti, M.; Placidi, G. A real-time classification algorithm for EEG-based BCI driven by self-induced emotions. Comput. Methods Programs Biomed. 2015, 122, 293–303. [Google Scholar] [CrossRef] [PubMed]
  87. Kumar, N.; Khaund, K.; Hazarika, S.M. Bispectral Analysis of EEG for Emotion Recognition. Procedia Comput. Sci. 2016, 84, 31–35. [Google Scholar] [CrossRef]
  88. Zhang, Y.; Zhang, S.; Ji, X. EEG-based classification of emotions using empirical mode decomposition and autoregressive model. Multimed. Tools Appl. 2018, 77, 26697–26710. [Google Scholar] [CrossRef]
  89. Daşdemir, Y.; Yıldırım, E.; Yıldırım, S. Emotion Analysis using Different Stimuli with EEG Signals in Emotional Space. Nat. Eng. Sci. 2017, 2, 1–10. [Google Scholar] [CrossRef]
  90. Singh, M.I.; Singh, M. Development of low-cost event marker for EEG-based emotion recognition. Trans. Inst. Meas. Control 2017, 39, 642–652. [Google Scholar] [CrossRef]
  91. Nakisa, B.; Rastgoo, M.N.; Rakotonirainy, A.; Maire, F.; Chandran, V. Long Short Term Memory Hyperparameter Optimization for a Neural Network Based Emotion Recognition Framework. IEEE Access 2018, 6, 49325–49338. [Google Scholar] [CrossRef]
92. Sovatzidi, G.; Iakovidis, D.K. Interpretable EEG-Based Emotion Recognition Using Fuzzy Cognitive Maps. In Caring Is Sharing—Exploiting the Value in Data for Health and Innovation; Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2023; Volume 302. [Google Scholar] [CrossRef]
  93. Ong, Z.Y.; Saidatul, A.; Vijean, V.; Ibrahim, Z. Non Linear Features Analysis between Imaginary and Non-imaginary Tasks for Human EEG-based Biometric Identification. IOP Conf. Ser. Mater. Sci. Eng. 2019, 557, 012033. [Google Scholar] [CrossRef]
  94. Brás, S.; Ferreira, J.H.T.; Soares, S.C.; Pinho, A.J. Biometric and Emotion Identification: An ECG Compression Based Method. Front. Psychol. 2018, 9, 467. [Google Scholar] [CrossRef] [PubMed]
  95. Wang, Y.; Wu, Q.; Wang, C.; Ruan, Q. DE-CNN: An Improved Identity Recognition Algorithm Based on the Emotional Electroencephalography. Comput. Math. Methods Med. 2020, 2020, 7574531. [Google Scholar] [CrossRef] [PubMed]
  96. Chen, J.X.; Mao, Z.J.; Yao, W.X.; Huang, Y.F. EEG-based biometric identification with convolutional neural network. Multimed. Tools Appl. 2020, 79, 10655–10675. [Google Scholar] [CrossRef]
  97. Pandharipande, M.; Chakraborty, R.; Kopparapu, S.K. Modeling of Olfactory Brainwaves for Odour Independent Biometric Identification. In Proceedings of the 2023 31st European Signal Processing Conference (EUSIPCO), Helsinki, Finland, 4–8 September 2023; pp. 1140–1144. [Google Scholar] [CrossRef]
  98. Becerra, M.A.; Londoño-Delgado, E.; Pelaez-Becerra, S.M.; Serna-Guarín, L.; Castro-Ospina, A.E.; Marin-Castrillón, D.; Peluffo-Ordóñez, D.H. Odor Pleasantness Classification from Electroencephalographic Signals and Emotional States. In Advances in Computing; Springer: Cham, Switzerland, 2018; pp. 128–138. [Google Scholar] [CrossRef]
  99. Becerra, M.A.; Londoño-Delgado, E.; Pelaez-Becerra, S.M.; Castro-Ospina, A.E.; Mejia-Arboleda, C.; Durango, J.; Peluffo-Ordóñez, D.H. Electroencephalographic Signals and Emotional States for Tactile Pleasantness Classification. In Progress in Artificial Intelligence and Pattern Recognition; Springer Nature: Berlin, Germany, 2018; pp. 309–316. [Google Scholar] [CrossRef]
  100. Duque-Mejía, C.; Castro, A.; Duque, E.; Serna-Guarín, L.; Lorente-Leyva, L.L.; Peluffo-Ordóñez, D.; Becerra, M.A. Methodology for biometric identification based on EEG signals in multiple emotional states; [Metodología para la identificación biométrica a partir de señales EEG en múltiples estados emocionales]. RISTI—Rev. Iber. Sist. E Tecnol. Inf. 2023, 2023, 281–288. [Google Scholar]
  101. Zhang, D.; Yao, L.; Zhang, X.; Wang, S.; Chen, W.; Boots, R. Cascade and Parallel Convolutional Recurrent Neural Networks on EEG-based Intention Recognition for Brain Computer Interface. arXiv 2021, arXiv:1708.06578. [Google Scholar] [CrossRef]
  102. Niso, G.; Romero, E.; Moreau, J.T.; Araujo, A.; Krol, L.R. Wireless EEG: A survey of systems and studies. NeuroImage 2023, 269, 119774. [Google Scholar] [CrossRef] [PubMed]
  103. Lopez-Gordo, M.; Sanchez-Morillo, D.; Valle, F. Dry EEG Electrodes. Sensors 2014, 14, 12847–12870. [Google Scholar] [CrossRef] [PubMed]
  104. Alotaiby, T.; El-Samie, F.E.A.; Alshebeili, S.A.; Ahmad, I. A review of channel selection algorithms for EEG signal processing. EURASIP J. Adv. Signal Process. 2015, 2015, 66. [Google Scholar] [CrossRef]
  105. Lopez, C.A.F.; Li, G.; Zhang, D. Beyond Technologies of Electroencephalography-Based Brain-Computer Interfaces: A Systematic Review From Commercial and Ethical Aspects. Front. Neurosci. 2020, 14, 611130. [Google Scholar] [CrossRef]
  106. Antunes, R.S.; da Costa, C.A.; Küderle, A.; Yari, I.A.; Eskofier, B. Federated Learning for Healthcare: Systematic Review and Architecture Proposal. ACM Trans. Intell. Syst. Technol. 2022, 13, 54. [Google Scholar] [CrossRef]
  107. McCall, I.C.; Wexler, A. Peering into the mind? The ethics of consumer neuromonitoring devices. In Developments in Neuroethics and Bioethics; Elsevier B.V.: Amsterdam, The Netherlands, 2020; pp. 1–22. [Google Scholar] [CrossRef]
  108. Kiran, A.; Ahmed, A.B.G.E.; Khan, M.; Babu, J.C.; Kumar, B.P.S. An efficient method for privacy protection in big data analytics using oppositional fruit fly algorithm. Indones. J. Electr. Eng. Comput. Sci. 2025, 37, 670. [Google Scholar] [CrossRef]
  109. Green, D.J.; Barnes, T.A.; Klein, N.D. Emotion regulation in response to discrimination: Exploring the role of self-control and impression management emotion-regulation goals. Sci. Rep. 2024, 14, 26632. [Google Scholar] [CrossRef] [PubMed]
  110. Jalaly Bidgoly, A.; Jalaly Bidgoly, H.; Arezoumand, Z. A survey on methods and challenges in EEG based authentication. Comput. Secur. 2020, 93, 101788. [Google Scholar] [CrossRef]
  111. Su, J.; Zhu, J.; Song, T.; Chang, H. Subject-Independent EEG Emotion Recognition Based on Genetically Optimized Projection Dictionary Pair Learning. Brain Sci. 2023, 13, 977. [Google Scholar] [CrossRef] [PubMed]
  112. Yuvaraj, R.; Baranwal, A.; Prince, A.A.; Murugappan, M.; Mohammed, J.S. Emotion Recognition from Spatio-Temporal Representation of EEG Signals via 3D-CNN with Ensemble Learning Techniques. Emerg. Trends Biomed. Signal Process. Intell. Emot. Recognit. 2023, 13, 685. [Google Scholar] [CrossRef] [PubMed]
  113. Si, X.; Huang, D.; Sun, Y.; Huang, S.; Huang, H.; Ming, D. Transformer-based ensemble deep learning model for EEG-based emotion recognition. Brain Sci. Adv. 2023, 9, 210–223. [Google Scholar] [CrossRef]
  114. Ji, Y.; Dong, S.Y. Deep learning-based self-induced emotion recognition using EEG. Front. Neurosci. 2022, 16, 985709. [Google Scholar] [CrossRef] [PubMed]
  115. Deng, X.; Lv, X.; Yang, P.; Liu, K.; Sun, K. Emotion Recognition Method Based on EEG in Few Channels. In Proceedings of the 2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS), Chengdu, China, 3–5 August 2022; pp. 1291–1296. [Google Scholar]
Figure 1. Framework for understanding EEG-based biometric methods and emotional influences.
Figure 2. Number of publications in Scopus.
Figure 3. Representation of cerebral hemispheres.
Table 1. Feature extraction from EEG signals.

| Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year |
|---|---|---|---|---|---|
| [26] | Own database (13 subjects) and PhysioNet BCI (109 subjects) | PCA, Wilcoxon test, fast Fourier transform, power spectrum (PS), asymmetry index | RBF-SVM with k-fold cross-validation | 99.9 ± 1.39% | 2023 |
| [27] | BED (Biometric EEG Dataset), 21 subjects | PCA, Wilcoxon test, optimal spatial filtering | Deep learning (DL) | 86.74% | 2022 |
| [28] | PhysioNet EEG Motor Movement/Imagery Dataset (1000 subjects) | Functional connectivity (FC) | Multi-stream GCN (MSGCN) | 98.05% | 2023 |
| [29] | Own database (21 subjects) | 1D-CNN | LDA, SVM, K-NN, DL with 5-fold cross-validation | 99% | 2022 |
| [30] | Own database (43 subjects) | Power spectral density (PSD) | Naive Bayes, neural network, SVM | 97.7% | 2022 |
| [31] | Own database (8 subjects) | PSD from delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–14 Hz), beta (14–30 Hz), and gamma (30–50 Hz) bands; LDA | K-NN, SVM | 80% | 2022 |
| [11] | PhysioBank database (109 subjects) | CNN | CNN-ECOC-SVM | 98.49% | 2022 |
| [32] | PhysioNet database (109 subjects) | Power spectrum, PSD | Mean correlation coefficient (MCC) | Equal error rate of 0.014 | 2018 |
| [33] | Own database (6 subjects) | Daubechies (db8) wavelet, PSD | Multilayer perceptron neural network (MLPNN) | 75.8% | 2019 |
| [34] | PhysioNet database (16 subjects) | Frequency-weighted power (FWP) | Method proposed by the authors | EER of 0.0039 | 2020 |
| [35] | PhysioNet dataset (109 subjects) | Only two EEG channels, measured over a short 5 s temporal window | CNN | 99% identification; 0.187% authentication equal error rate (EER) | 2023 |
| [36] | Data collected from 50 volunteers | (1) Spectral information, (2) coherence, (3) mutual correlation coefficient, and (4) mutual information | SVM | 99.06% classification rate; 0.52% EER | 2023 |
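Power spectral density is the most recurrent feature in Table 1 (e.g., refs. [30,31,32,33]). As a generic illustration of the idea — not a reproduction of any cited author's pipeline — the sketch below estimates per-band power from a synthetic single-channel trace with Welch's method; the sampling rate, test signal, and band edges (taken from the row for ref. [31]) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

# EEG bands (Hz) as listed for ref. [31] in Table 1.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 50)}

def band_powers(signal, fs):
    """Per-band power from a Welch PSD estimate (sum of in-band bins)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s single-channel trace dominated by a 10 Hz (alpha) rhythm.
fs = 128  # illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)

powers = band_powers(x, fs)
print(max(powers, key=powers.get))  # the alpha band dominates for this signal
```

The resulting five-element vector (one power per band) is the kind of compact feature that the PSD-based rows of Table 1 feed into an SVM or K-NN classifier.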
Table 2. Feature extraction from EEG signals.

| Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year |
|---|---|---|---|---|---|
| [37] | Two SSVEP datasets for person identification (PI): the speller dataset and the EPOC dataset | Auto-regressive (AR) modeling, power spectral density (PSD), energy of EEG channels, wavelet packet decomposition (WPD), and phase locking values (PLV) | Common spatial patterns combined with specialized deep-learning neural networks | Recognition rate of 99% | 2023 |
| [8] | PhysioNet (109 subjects) | Random-forest-based binary feature selection to filter out uninformative channels and determine the optimal number of channels | Hybrid attention-based LSTM-MLP | 99.96% (eyes closed) and 99.70% (eyes open) | 2023 |
| [38] | “Big Data of 2-classes MI” dataset and Dataset IVa | CSP, ERD/S, AR, and FFT applied to transform segmented data into features (TDP excluded as better suited to motor execution than motor imagery) | SVM, GNB | SVM: CSP 98.97%, ERD/S 98.94%, AR 98.93%, FFT 97.92%; GNB: CSP 97.47%, ERD/S 94.58%, FFT 53.80%, AR 50.24% | 2023 |
| [39] | Dataset I: self-collected with a low-cost EEG device (primary); Dataset II: PhysioNet BCI, used to evaluate the method on more subjects | FieldTrip toolbox for Matlab: baseline correction relative to the mean voltage, then a 5–40 Hz finite impulse response (FIR) bandpass filter for noise reduction | SVM, neural networks (NN), and discriminant analysis (DA) | Identification accuracy up to 100% with a low-cost EEG device | 2023 |
Table 3. Feature extraction from EEG signals.

| Ref. | Database | Feature Extraction | Classification Method | Accuracy | Year |
|---|---|---|---|---|---|
| [40] | DEAP | Phase locking value (PLV) | CNN | 85% | 2023 |
| [41] | SEED and DEAP | Inventive brain optimization algorithm with frequency features to enhance detection accuracy | Optimized deep convolutional neural network (DCNN), compared against K-NN, SVM, random forest (RF), and deep belief network (DBN) | DCNN: 97.12% at a 90% training split; 96.83% under k-fold analysis | 2023 |
| [42] | SEED | Time-frequency content via the modified Stockwell transform; deep features per channel from a deep CNN; fusion of the reduced features of all channels; semi-supervised dimension reduction | Inception-V3 CNN and SVM classifier | — | 2023 |
| [43] | DEAP | Sequential floating forward selection (SFFS) of the best features; 10-fold cross-validation in all experiments | SVM with radial basis function (RBF) kernel | CCR of 88–99% (vs. an EER of 15–35% in the related work cited therein) | 2017 |
| [44] | DEAP, MAHNOB-HCI, and SEED | Time-domain and frequency-domain features | SVM, random forest (RF), and k-nearest neighbors (k-NN) | — | 2021 |
| [45] | Own dataset (60 users, Emotiv Epoc+) | Savitzky–Golay filtering to attenuate short-term variations | Hidden Markov model (HMM) and SVM | User identification of 97.50% (HMM) and 93.83% (SVM) | 2017 |
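The phase locking value used in ref. [40] (and as a feature in ref. [89] of Table 4) measures how stable the phase difference between two channels stays over time. A minimal sketch of the standard Hilbert-transform formulation follows; the synthetic signals and sampling rate are illustrative assumptions, not data from the cited studies.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value: |mean of unit phasors of the phase difference|."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.exp(1j * phase_diff).mean()))

fs = 128                       # illustrative sampling rate
t = np.arange(0, 4, 1 / fs)
a = np.sin(2 * np.pi * 10 * t)                        # 10 Hz "channel"
b = np.sin(2 * np.pi * 10 * t + 0.5)                  # same rhythm, fixed phase lag
c = np.random.default_rng(1).standard_normal(t.size)  # unrelated noise

print(round(plv(a, b), 3))    # near 1: phases are locked despite the lag
print(plv(a, c) < plv(a, b))  # noise yields a lower PLV
```

Computing PLV for every channel pair yields the connectivity matrix that graph- or CNN-based classifiers such as the one in ref. [40] take as input.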
Table 4. Feature extraction and classification techniques for emotion identification.

| Ref. | Year | Preprocessing | Extraction and Selection | Emotion Classification |
|---|---|---|---|---|
| [84] | 2010 | Laplacian surface filter | Wavelet transform, fuzzy C-means (FCM), and fuzzy K-means (FKM) | Linear discriminant analysis (LDA) and K-nearest neighbor (K-NN) |
| [85] | 2014 | FFT, EEGLAB | Correlation, coherence, and phase synchronization | Quadratic discriminant analysis |
| [86] | 2015 | Wavelet filter | PCA | SVM |
| [87] | 2015 | Blind source separation, 4.0–45.0 Hz bandpass filter | Higher-order spectral analysis (HOSA) | LS-SVM, artificial neural networks (ANN) |
| [87] | 2016 | Butterworth filter | Bispectral analysis with HOSA | SVM |
| [88] | 2016 | Algorithm based on independent component analysis | Sample entropy, quadratic entropy, distribution entropy | SVM |
| [89] | 2017 | MARA, AAR | Phase locking value (PLV) with ANOVA to assess statistical significance | SVM |
| [90] | 2017 | Laplacian surface filter | Wavelet transform | Polynomial-kernel SVM |
| [91] | 2018 | Butterworth notch filter | ACA, SA, GA, and SPO algorithms | SVM |
| [83] | 2023 | DWT, EMD | Smoothed pseudo-Wigner–Ville distribution (RSPWVD) | K-NN, SVM, LDA, and LR |
| [92] | 2023 | Finite impulse response filtering, artifact subspace reconstruction (ASR) | Power spectral density (PSD) | Naïve Bayes (NB), K-NN, SVM, fuzzy cognitive map (FCM) |
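Several of the preprocessing stages in Table 4 are bandpass filters (e.g., the Butterworth filters in refs. [87,91]). The sketch below shows the generic zero-phase form of such a stage; the cutoff frequencies, filter order, and synthetic signal are illustrative assumptions rather than the cited studies' actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, lo=4.0, hi=45.0, order=4):
    """Zero-phase Butterworth bandpass (filtfilt runs the filter forward and backward)."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, signal)

fs = 256  # illustrative sampling rate
t = np.arange(0, 2, 1 / fs)
# A 10 Hz rhythm contaminated by 60 Hz line noise and a slow 0.3 Hz drift.
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 60 * t)
     + 2.0 * np.sin(2 * np.pi * 0.3 * t))
y = bandpass(x, fs)

# After filtering, spectral energy concentrates near the 10 Hz component.
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(y.size, 1 / fs)
print(freqs[spec.argmax()])
```

The zero-phase (forward-backward) form matters for EEG: it removes the out-of-band drift and line noise without shifting the phase of the rhythms that later features, such as PLV, depend on.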

Becerra, M.A.; Duque-Mejia, C.; Castro-Ospina, A.; Serna-Guarín, L.; Mejía, C.; Duque-Grisales, E. EEG-Based Biometric Identification and Emotion Recognition: An Overview. Computers 2025, 14, 299. https://doi.org/10.3390/computers14080299
