Article

Personal Identification Using 3D Topographic Cubes Extracted from EEG Signals by Means of Automated Feature Representation

by Muhammed Esad Oztemel 1 and Ömer Muhammet Soysal 1,2,*
1 Division of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
2 Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
* Author to whom correspondence should be addressed.
Signals 2025, 6(3), 43; https://doi.org/10.3390/signals6030043
Submission received: 1 July 2025 / Revised: 6 August 2025 / Accepted: 14 August 2025 / Published: 21 August 2025

Abstract

Electroencephalogram (EEG)-based identification offers a promising biometric solution by leveraging the uniqueness of individual brain activity patterns. This study proposes a framework based on a convolutional autoencoder (CAE) along with a traditional classifier for identifying individuals using EEG brainprints. The convolutional autoencoder extracts a compact and discriminative representation from topographic data cubes that capture both the spatial and temporal dynamics of neural oscillations. The latent tensor features extracted by the CAE are subsequently classified by a machine learning module utilizing Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbor (KNN), and Artificial Neural Network (ANN) models. EEG data were collected under three conditions (resting state, music stimuli, and cognitive task) to investigate a diverse range of neural responses. Training and testing datasets were extracted from separate sessions to enable a true longitudinal analysis. The performance of the framework was evaluated using the Area Under the Curve (AUC) and accuracy (ACC) metrics. The effect of subject identifiability was also investigated. The proposed framework achieved up to 99.89% AUC and 96.98% ACC. These results demonstrate the effectiveness of the automatically extracted subject-specific patterns in capturing stable EEG brainprints and support the potential of the proposed framework for reliable, session-independent EEG-based biometric identification.

1. Introduction

Biometric systems have become essential tools for secure identification [1]. However, traditional methods such as fingerprint [2], facial recognition [3], and sound recognition systems suffer from various vulnerabilities [4]. In contrast, EEG signals provide a promising alternative due to their inherent complexity, uniqueness, and resistance to external manipulation. As EEG signals reflect internal neural activities, they are extremely difficult to forge or replicate, making them a reliable biometric modality [5]. The intrinsic variability and noise characteristics of EEG signals also make them robust against adversarial attack while offering high discrimination potential for individual identification tasks [6].
Machine learning (ML) techniques have been increasingly applied to EEG signal analysis to extract meaningful patterns and improve classification performance [7]. Traditional EEG feature extraction methods, such as Fast Fourier Transform (FFT) [8], Power Spectral Density (PSD) [9], and Wavelet Transform [10], are commonly used to capture frequency-domain and time–frequency-domain features. However, these techniques rely on hand-crafted parameters and assumptions, limiting their ability to generalize across subjects and tasks. Beyond these traditional feature extraction methods, researchers have studied pattern recognition approaches to identify distinctive characteristics in various forms of EEG signals [11,12].
Convolutional neural networks (CNNs) have emerged as a powerful tool for automated feature extraction from EEG signals, offering significant advantages over traditional manual feature engineering methods [13]. Thanks to their hierarchical learning capabilities, CNNs can capture complex and discriminative patterns directly from EEG data [14], making them particularly valuable for tasks such as brain–computer interfacing [15], neurological disorder diagnosis [16], and biometric identification [17]. Ozdenizci et al. [18] demonstrated the effectiveness of CNNs in analyzing multi-channel EEG data, where the network successfully identified localized features in raw signals, time–frequency representations, and topographic scalp projections. On the other hand, recurrent neural networks (RNNs), especially long short-term memory (LSTM) networks and gated recurrent units (GRUs), have proven highly effective in modeling the temporal dynamics inherent in EEG signals, which often exhibit non-stationary and time-varying properties [19]. Additionally, unsupervised approaches such as autoencoders (AEs) have been explored for EEG feature extraction [20]. As shown in [21], AEs can learn compact, informative representations of EEG signals, enabling efficient personal identification.
Despite progress, most of the EEG-based studies still rely on 1D signal vectors or 2D time–frequency projections, which fail to fully preserve the spatial, temporal, and spectral relationships in the data. As stated by Shah et al., 3D representation of EEG data encompasses rich spectral, spatial, and temporal information [22]. The novelty of our study, therefore, lies in the design and use of a 3D topographic cube representation that stacks time-resolved EEG topographic maps to capture rich spatio-temporal dependencies. This 3D cube bridges raw EEG data and deep learning models in a more biologically and spatially interpretable format. Furthermore, this study leverages a self-referencing convolutional autoencoder (CAE) to automatically extract latent representations from these volumetric EEG cubes.
To the best of our knowledge, this is the first work that combines 3D EEG topographic cubes with a self-supervised CAE framework for personal identification. This combination enables a more discriminative and generalizable feature space without reliance on manual feature engineering. Soysal et al. [23] demonstrated that features extracted using autoencoder-based architectures outperform conventional statistical features in EEG-based classification tasks. This evidence highlights the strength of unsupervised representation learning in capturing subject-specific neural signatures. Building on that foundation, the present work introduces a convolutional autoencoder (CAE) applied to 3D EEG topographic cubes, enabling the model to effectively learn spatio-temporal dependencies under varying stimulus conditions. The significance of this work lies in its proposal of a scalable and transferable framework that enhances EEG-based personal identification through automated feature extraction from rich volumetric representations.
In this study, to ensure diverse neural activation and to enrich the training dataset, EEG signals were collected under three types of stimuli: resting state, cognitive brain activity, and sound stimuli. Resting-state EEG reflects the brain’s baseline activity and offers insights into stable, intrinsic neural patterns unique to individuals [24]. Cognitive tasks stimulate decision-making and motor planning areas, eliciting dynamic brain activity that varies significantly across subjects [25]. Sound stimulates the auditory cortex and emotional processing regions, providing an additional modality of brain engagement [26].
The permanence of EEG signals presents a significant challenge for biometric applications, as neural patterns exhibit considerable variability in response to tasks, mental state, and experimental conditions recorded over time. Arnau-Gonzalez et al. designed the BED dataset, collecting EEG signals across three separate sessions to enable a more realistic cross-session biometric evaluation [27]. Note that models evaluated only on within-session data may overestimate performance [25,26,27].
Topographic map representations of EEG signals offer a spatial view of neural activity by projecting channel-wise data onto a 2D scalp model [28]. This spatial encoding preserves the relationships between electrode positions and supports visualization of region-specific brain activity. By stacking these 2D topographic frames over consecutive time windows, a 3D EEG cube can be constructed. This volumetric representation captures both spectral and temporal dependencies, offering a richer context for feature learning. The 3D EEG cube serves as a bridge between raw EEG signals and machine-learnable input formats, making it ideal for convolutional-based learning models and facilitating the extraction of spatio-temporal patterns.
In this study, various machine learning classifiers were evaluated to classify features extracted from EEG signals. KNN is a simple yet effective distance-based classifier that works well with low-dimensional data [29]. RF, a tree-based ensemble method, provides robustness against overfitting and handles high-dimensional features efficiently [30]. SVMs are known for their ability to create optimal decision boundaries in high-dimensional space using kernels [31]. ANNs offer flexible nonlinear modeling and can capture complex data patterns [32]. These classifiers are assessed for their performance in distinguishing between individuals based on the features derived from the CAE.
This paper aims to explore the effectiveness of a self-referencing convolutional autoencoder as an automated extraction method, along with the utilization of spatio-temporal data cube representation for EEG-based personal identification. The rest of this paper is organized as follows: The ‘Previous Work’ section reviews the relevant literature and previous work in the field. Section 2 describes the proposed methods in detail. Section 3 outlines the experimental setup and the data used in this study. Section 4 presents the results and performance evaluation. Finally, Section 5 discusses outcomes and potential future directions.

Previous Work

EEG-biometric studies often utilize visually evoked potentials (VEPs) and other event-related EEG records. Subjects are presented with stimuli that bring about brain responses, and the resulting EEG waveforms are used as biometric signatures. This is clearly realized in the study carried out by Das et al., as they performed a longitudinal study using multiple visual stimuli and reported a consistent person-specific EEG pattern [33]. Similarly, Koike-Akino et al. showed that P300 components from a rapid serial visual presentation paradigm can yield very high identification accuracy [34].
Convolutional neural networks have also been applied to EEG signals to automatically learn discriminative features. Ozdenizci et al. present the utilization of CNNs across EEG channels for capturing local and spatial dependencies over the EEG data, spectrograms, and topographic projections [18]. Similarly, recurrent neural networks such as long short-term memory and gated recurrent units are used to learn temporal dependencies of EEG signals [19]. Das et al. [21] showed the utilization of AEs for feature extraction in the application of EEG-based personal identification.
Resting-state recordings refer to a period when an individual is awake but not actively engaged in any specific task or mental effort [35]. Resting-state recordings are particularly valuable in EEG-based identification, not only because they are easy to acquire but also because they capture the brain’s intrinsic activity patterns, which are stable and distinctive enough for biometric recognition. Fraschini et al. used the eigenvector centrality of functional connectivity graphs composed from resting EEG as a brainprint signature [36]. Moreover, CNNs have been trained on resting EEG spectrograms or raw signals for identification [18]. They claimed that features extracted from resting state responses provide stable biometric patterns without any need for a specific stimulus.
DelPozo-Banos et al. explored EEG-based biometric identification using both resting-state and task-induced EEG with visual/auditory stimuli. Time–frequency features were extracted via Continuous Wavelet Transform (CWT) across five bands: Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz), and Gamma (30–50 Hz). They reported that the cross-session identification accuracy dropped to ~72–85% from the highest identification accuracy of ~99% obtained when both the training and testing datasets came from the same session. The results highlight the superiority of stimulus-evoked EEG for biometric applications but underscore the challenge of longitudinal consistency [37]. Earlier in Section 1, this problem was also pointed out by several researchers.

2. Methods

This section describes the proposed framework for EEG-based identification. The framework consists of four key modules: preprocessing, EEG data cube generation from topo-maps, feature extraction using CAE, and subject identification using SVM, RF, KNN, and ANN models. Figure 1 illustrates the overall pipeline.

2.1. Preprocessing

In the preprocessing stage, notch filters were employed to suppress power line interference and its harmonics commonly present in EEG signals. A notch filter, also known as a band-stop filter [22], is designed to attenuate a narrow frequency band while allowing frequencies outside that range to pass. Specifically, notch filters were applied at 60 Hz, 120 Hz, 180 Hz, and 240 Hz to target the fundamental frequency of electrical line noise and its first three harmonics. Figure 2 shows an example of the resulting notch-filtered EEG data. In addition to noise removal, EEG signals were band-pass filtered to isolate eight specific frequency bands: Delta (1–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–32 Hz), Delta–Beta (De2Be, 1–32 Hz), Theta–Beta (Th2Be, 4–32 Hz), Gamma (32–125 Hz), and an All band covering frequencies above 1 Hz.
Among these, hybrid bands were specifically investigated, such as Delta–Beta and Theta–Beta, which span broader neural oscillations across multiple adjacent frequency ranges. These hybrid bands enable the capture of more complex and comprehensive brain activity patterns that may be overlooked when analyzing traditional single bands alone. Prior to this step, a high-pass filter with a cutoff frequency of 0.5 Hz was applied to eliminate slow baseline drifts and low-frequency physiological artifacts.
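As a concrete illustration of this filtering chain, the following minimal sketch applies the 0.5 Hz high-pass filter, the notch filters at 60 Hz and its harmonics, and a band-pass filter for a selected band using SciPy; the filter orders, the quality factor, and the SciPy-based design are assumptions for illustration rather than the exact implementation used in this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # amplifier sampling frequency in Hz (see Section 3)

def preprocess(eeg, band=(4.0, 32.0)):
    """eeg: array of shape (n_channels, n_samples); band: e.g., Th2Be = 4-32 Hz."""
    # High-pass filter at 0.5 Hz to remove slow baseline drift and low-frequency artifacts.
    b, a = butter(4, 0.5, btype="highpass", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)
    # Notch filters at the 60 Hz line frequency and its first three harmonics.
    for f0 in (60, 120, 180, 240):
        b, a = iirnotch(f0, Q=30, fs=FS)
        eeg = filtfilt(b, a, eeg, axis=-1)
    # Band-pass filter to isolate the selected frequency band.
    b, a = butter(4, band, btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)
```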

2.2. Data Cube Generation

Following the preprocessing, a data cube was generated from 24-channel EEG streams. A data cube is a stack of topographic maps. The data cube was created in two main steps: preparation of streams and projection of streams onto 2D frames. Figure 3 demonstrates the topographic data cube generation steps.
The length of each stream is 3 s, producing 3000 EEG data points. Due to minor data loss at the end of some trials, only the first 2880 data points were used for further processing. Each stream was down-sampled to reduce the temporal resolution and to smooth out high-frequency noise. The down-sampling was performed by taking the median value of every 10 consecutive data points, resulting in 288 time points per stream. Each down-sampled stream was divided into 9 non-overlapping segments. Each session includes 35, 10, and 10 sets of streams from the 24 channels for the resting state, sound, and cognitive stimuli, respectively. Therefore, 315 (9 × 35), 90 (9 × 10), and 90 (9 × 10) EEG cubes were generated per subject for the resting state, music stimuli, and cognitive task, respectively. Each cube was composed of 32 (= 288/9) frames as shown in Figure 3b.
In creating a topographic map grid for each EEG cube, 3D coordinates of the electrodes were projected onto 2D planes using an azimuthal projection method. Next, frames of EEG cubes were mapped to a 32 × 32 grid using the bilinear interpolation (as shown in Figure 3c), which estimates each pixel value as a weighted average of the four nearest neighbors. The colors in the topographic maps represent brain activity levels, where red indicates areas of high neural response and blue indicates regions with lower or minimal activity.
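The cube-generation step can be sketched as follows; the normalized 2D electrode coordinates (`pos2d`) and the use of SciPy's `griddata` for the bilinear-style interpolation are illustrative assumptions standing in for the azimuthal projection and grid mapping described above.

```python
import numpy as np
from scipy.interpolate import griddata

GRID = 32  # 32 x 32 topographic grid

def make_cubes(stream, pos2d):
    """stream: (24, 2880) filtered EEG samples; pos2d: (24, 2) projected electrode coordinates in [0, 1]."""
    # Down-sample: median of every 10 consecutive samples -> (24, 288) time points.
    ds = np.median(stream.reshape(24, 288, 10), axis=-1)
    # Split into 9 non-overlapping segments of 32 frames each.
    segments = ds.reshape(24, 9, 32)
    # Regular grid covering the area spanned by the projected electrodes.
    xs, ys = np.meshgrid(np.linspace(0, 1, GRID), np.linspace(0, 1, GRID))
    cubes = np.empty((9, 32, GRID, GRID))
    for s in range(9):
        for t in range(32):
            # Interpolate the 24 channel values of this frame onto the 32 x 32 grid.
            cubes[s, t] = griddata(pos2d, segments[:, s, t], (xs, ys),
                                   method="linear", fill_value=0.0)
    return cubes  # nine 32 x 32 x 32 cubes per stream
```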

2.3. Automated Feature Extraction

In this study, a 3D CAE is utilized to process the EEG data cubes. A three-dimensional CAE is well suited for capturing both spatial and temporal dependencies. Formally, an autoencoder consists of an encoder function $f_{\mathrm{enc}}(X)$ (Equation (1)), which maps an input tensor $X \in \mathbb{R}^{32 \times 32 \times 32}$ to a latent feature tensor $Z \in \mathbb{R}^{d \times d \times d}$ (where $d$ denotes the size of the tensor), and a decoder function $f_{\mathrm{dec}}(Z)$ (Equation (2)), which produces an output tensor of the same size as $X$; both functions use non-linear activation functions such as ReLU, sigmoid, or tanh. The encoder and decoder have symmetric network structures, ensuring that the input size and the reconstructed output size remain the same. Training of the CAE is optimized to minimize the reconstruction loss $L(X, \hat{X})$ (Equation (3)), the mean squared error between the input and its reconstruction by the decoder. This optimization yields a distinct and compact representation of the input domain and eliminates the need for manual feature engineering. Figure 4 illustrates the general mechanism of an autoencoder applied to a 3D input space.
$$Z = f_{\mathrm{enc}}(X) \tag{1}$$
$$\hat{X} = f_{\mathrm{dec}}(Z) \tag{2}$$
$$L(X, \hat{X}) = \lVert X - \hat{X} \rVert_2^2 \tag{3}$$
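A minimal Keras sketch of such a 3D CAE is given below; the two-block encoder–decoder, the filter counts, and the 8 × 8 × 8 latent size follow the resting-state configuration reported in Section 4, but should be read as an assumed reconstruction rather than the exact tuned architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cae(input_shape=(32, 32, 32, 1)):
    inp = keras.Input(shape=input_shape)
    # Encoder: two Conv3D blocks, each halving the resolution (32 -> 16 -> 8).
    x = layers.Conv3D(48, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling3D(2, padding="same")(x)
    x = layers.Conv3D(16, 3, activation="relu", padding="same")(x)
    z = layers.MaxPooling3D(2, padding="same", name="latent")(x)  # 8 x 8 x 8 x 16 latent tensor
    # Decoder: mirror of the encoder, reconstructing the input cube.
    x = layers.Conv3D(16, 3, activation="relu", padding="same")(z)
    x = layers.UpSampling3D(2)(x)
    x = layers.Conv3D(48, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling3D(2)(x)
    out = layers.Conv3D(1, 3, activation="linear", padding="same")(x)
    cae = keras.Model(inp, out)
    cae.compile(optimizer="adam", loss="mse")  # minimizes the reconstruction loss in Equation (3)
    encoder = keras.Model(inp, z)              # reused later for feature extraction
    return cae, encoder
```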

2.4. Identification

A 3D CAE extracts compact latent features from EEG data cubes. These features represent the essential spatial and temporal patterns within the EEG signals and serve as input for a subsequent classification task. This two-stage method uses the autoencoder’s unsupervised learning to generate meaningful inputs for supervised classification. Four classification algorithms are employed for subject identification: KNN, ANN, SVM, and RF. These classifiers were selected to evaluate the generalizability and robustness of the learned feature representations across diverse algorithmic paradigms, spanning ensemble learning, instance-based learning, deep learning, and margin-based classification. Moreover, for the longitudinal analysis of the brain signals, each classifier is trained with data from Session 1 and tested with data from Session 2.
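The following sketch shows how this two-stage pipeline can be wired together with scikit-learn; the flattening of the latent tensors and the specific classifier settings are illustrative assumptions (the tuned values are listed in Table 3), and the ANN branch is omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def identify(encoder, cubes_s1, y_s1, cubes_s2, y_s2):
    """Train on Session 1 latent features, test on Session 2 (longitudinal split)."""
    z_train = encoder.predict(cubes_s1).reshape(len(cubes_s1), -1)
    z_test = encoder.predict(cubes_s2).reshape(len(cubes_s2), -1)
    classifiers = {
        "SVM": SVC(kernel="rbf", C=0.98, gamma="scale"),
        "RF": RandomForestClassifier(n_estimators=300, max_depth=10),
        "KNN": KNeighborsClassifier(n_neighbors=3, weights="distance"),
    }
    # Fit each classifier on Session 1 features and score it on Session 2.
    return {name: accuracy_score(y_s2, clf.fit(z_train, y_s1).predict(z_test))
            for name, clf in classifiers.items()}
```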

2.5. Hyperparameter Tuning

To determine the optimal configurations for both the autoencoder and the classification algorithms, Keras Tuner with Bayesian optimization was utilized as a unified strategy for hyperparameter tuning. This approach efficiently explores complex and high-dimensional parameter spaces by balancing exploration and exploitation, leading to the automatic selection of configurations that enhance model performance and generalization. For the autoencoder, the search space included critical architectural parameters such as the number of convolutional layers, the number of units in the layers, kernel sizes, kernel initializer, activation functions, and optimizer. The objective was to minimize the reconstruction loss while preserving a compact and discriminative latent representation of the EEG input.
Similarly, to ensure optimal classification performance, Bayesian optimization was employed to systematically tune each classifier’s hyperparameters with the objective of maximizing validation accuracy. This includes tuning the number of trees and maximum depth for the RF, selecting the appropriate number of neighbors and distance metrics for KNN, and optimizing architectural choices such as the number of hidden layers, network units per layer, weight initializer, and the optimizer for the ANN. In addition, for the SVM, hyperparameters such as the kernel type, the regularization parameter C, and kernel-specific parameters like gamma (in the case of RBF kernels) were fine-tuned.
Thanks to this automated optimization process across both the feature extraction and classification stages, the entire pipeline—from raw EEG cubes to subject identification—was built on robust and well-tuned components. Table 1 provides a comprehensive overview of the hyperparameter search spaces and value ranges used during this process.
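As an example of this unified strategy, the sketch below tunes the ANN classifier with Keras Tuner's Bayesian optimization over the search space of Table 1; the builder function, trial budget, and commented-out search call are assumptions for illustration, not the authors' exact tuner code.

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_ann(hp):
    act = hp.Choice("activation", ["relu", "elu", "sigmoid", "selu"])
    init = hp.Choice("initializer",
                     ["glorot_uniform", "glorot_normal", "he_uniform", "he_normal"])
    model = keras.Sequential()
    # Number of hidden layers and units per layer are drawn from the Table 1 ranges.
    for i in range(hp.Int("num_layers", 1, 3)):
        model.add(layers.Dense(hp.Choice(f"units_{i}", [16, 32, 48, 64]),
                               activation=act, kernel_initializer=init))
    model.add(layers.Dense(7, activation="softmax"))  # one output unit per subject
    opt = keras.optimizers.Adam() if hp.Choice("optimizer", ["adam", "adamw"]) == "adam" \
        else keras.optimizers.AdamW()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_ann, objective="val_accuracy",
                                max_trials=20, overwrite=True)
# tuner.search(z_train, y_train, validation_split=0.2, epochs=100)
```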

3. Experimental Setup and Data

The EEG dataset was collected from 7 college students after receiving institutional review board (IRB) approval from the university. The EEG data were collected in two sessions in spring 2023, set 10 days apart. We used the mBrainTrain amplifier with a 24-channel headcap and Neurobehavioral Systems’ Presentation (version 2.4) software for the collection of brain signals. The sampling frequency of the amplifier was 1 kHz. The headcap electrode locations were designed according to the 10–20 system. Each recording was saved under a designated subject identity. The subject IDs are as follows: sb106, sb328, sb330, sb381, sb455, sb717, and sb768. A longitudinal study requires that the test dataset be collected after the training dataset. To ensure the validity of the longitudinal evaluation, the models were trained exclusively on data from Session 1 and tested on data from Session 2. This approach complies with the standard requirement for temporal separation between training and test datasets in longitudinal studies. Table 2 presents the hardware configuration used to perform all computations in this study, including training and testing of the autoencoder and classification models. It also provides the total training and testing computation time for a specific band and stimulus (Gamma–bk_pic_EC).
Three types of stimuli were used to record EEG responses: resting state, cognitive, and auditory. During the resting-state condition, participants were instructed to close their eyes and remain at rest for 3 s without engaging in any task. In the cognitive task, participants were asked to perform inner speech, repeating the word “evergreen” for 3 s. For the auditory stimulus, participants listened to the sound of a musical instrument, the conga, for a duration of 3 s. Resting-state EEG data were collected over three trials per session, resulting in a total of 35 resting-state signals. Cognitive and auditory responses were collected over 10 trials per session, yielding 10 EEG recordings for each condition. All EEG segments were recorded for a consistent 3 s duration per trial. Note that the dataset utilized in this study will be available upon request.

4. Results and Discussion

The hyperparameter search determined the optimized CAE architecture for each stimulus type, as shown in Figure 5. For resting-state recordings, the optimized CAE generated a set of 16 latent feature tensors of size 8 × 8 × 8 each. In contrast, the size of each latent feature extracted from the sound and cognitive recordings was determined as 16 × 16 × 16, along with 32 network units at the last layer of the encoder module. This finding shows that the representation space obtained from the resting-state patterns is more compact than that of the other stimulus conditions. Comparing the discriminative power of the EEG bands, the highest scores were observed in the Gamma band for the resting state and in the Th2Be [4–32 Hz] range for both the sound and cognitive stimuli. Table 3 provides a summary of the optimal parameters selected for both the autoencoder and classifier networks within specific band ranges.
Regarding computational feasibility, it is important to acknowledge that the training of the convolutional autoencoder (CAE) model required approximately 49 min per configuration on a high-performance GPU (NVIDIA A100 40GB). While this level of computational demand is manageable in research and development settings, it may pose challenges for real-time or resource-constrained applications. However, it is important to note that CAE training is conducted offline, and once the model is trained, the feature extraction and classification processes are relatively lightweight and can be performed efficiently on standard hardware. This makes deployment in real-world applications feasible, especially in scenarios where models are trained centrally and deployed on edge devices for inference. Further optimization—such as model pruning, quantization, or use of more compact architectures—can also be explored to reduce the computational footprint without significantly sacrificing performance.
Figure 6 presents the AUC score distribution across different classifiers and EEG bands for three stimulus conditions: resting state, auditory task, and cognitive task. The violin plots present the performance distribution for four classifiers (KNN, SVM, RF, and ANN) across eight frequency bands: Delta, Theta, Alpha, Beta, De2Be, Th2Be, Gamma, and All bands. The violin plots show the distribution of AUC values across 5-fold cross-validation scores. For the resting-state patterns, the Gamma band (32–125 Hz) demonstrated the highest discriminative power, achieving 90.23% AUC across all classifiers. In contrast, for the sound and cognitive tasks, the Th2Be range (4–32 Hz) emerged as the most discriminative frequency band, consistently showing superior classification performance.
A clear enhancement in classification performance is observed across the broader frequency ranges, particularly in the 13–32 Hz, 8–32 Hz, 4–32 Hz, and 32–125 Hz bands compared to narrower bands like Delta, Theta, and Alpha. Regarding classifier performance, SVM and RF consistently emerge as the leading models across most EEG bands and stimulus conditions, demonstrating superior and more stable AUC distributions compared to KNN and ANN. The violin plots show that these two classifiers not only achieve higher median performance but also exhibit less variability across the cross-validation folds, indicating more reliable and robust classification outcomes across different frequency bands and experimental paradigms.
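For reference, the fold-wise scores behind these violin plots can be computed as in the following sketch; the use of scikit-learn's macro-averaged one-vs-rest AUC and the SVM settings are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def cv_auc(z, y, n_splits=5):
    """Return one multiclass AUC score per cross-validation fold."""
    scores = []
    for tr, va in StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0).split(z, y):
        clf = SVC(kernel="rbf", probability=True).fit(z[tr], y[tr])
        proba = clf.predict_proba(z[va])
        # Macro-averaged one-vs-rest AUC over all subjects in this fold.
        scores.append(roc_auc_score(y[va], proba, multi_class="ovr", average="macro"))
    return np.array(scores)  # distribution plotted as one violin per classifier/band
```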
Figure 7 presents subject identifiability through AUC scores obtained from pairwise SVM-based classification of the Gamma band patterns extracted from the resting state EEG responses. The results showed a significant inter-subject variability in identifiability performance. Most subjects (sb328, sb330, sb381, sb717, and sb768) demonstrated high and consistent identifiability, with AUC distributions concentrated around 0.95. In contrast, subjects sb106 and sb455 exhibited significantly lower identifiability scores, with a wider score range of approximately 0.42 to 0.99, possibly due to intra-class variability and inter-class similarity of these subjects or weaker correlation structures in extracted features, which may reflect inconsistent cognitive or physiological states during data acquisition.
Figure 8 illustrates the performance metrics of the SVM-based classifier applied to Gamma band EEG signals recorded during the resting state. The results show that removing less identifiable subjects leads to a substantial improvement in the overall model performance.
In addition to the seven-subject and five-subject classification tasks, the SVM-based model was also evaluated on pairwise identification scenarios. Figure 9 presents the classification results for all subject pairs. Although the model exhibited lower performance when distinguishing between the less identifiable subject pair (sb106 and sb455), it achieved high accuracy in other pairings involving these subjects. For instance, the pair sb106 and sb330 was classified with an accuracy of 96.83%, indicating that sb106 remains distinguishable when paired with certain other subjects.
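A sketch of this pairwise evaluation is given below; training on Session 1 and testing on Session 2 for every subject pair mirrors the longitudinal protocol, while the SVM settings are illustrative assumptions.

```python
import itertools
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def pairwise_accuracy(z_s1, y_s1, z_s2, y_s2, subjects):
    """Accuracy of a binary SVM for every subject pair (train: Session 1, test: Session 2)."""
    acc = {}
    for a, b in itertools.combinations(subjects, 2):
        tr = np.isin(y_s1, [a, b])
        te = np.isin(y_s2, [a, b])
        clf = SVC(kernel="rbf").fit(z_s1[tr], y_s1[tr])
        acc[(a, b)] = accuracy_score(y_s2[te], clf.predict(z_s2[te]))
    return acc  # e.g., acc[("sb328", "sb330")] for the most distinguishable pair
```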
Figure 10 illustrates the impact of subject variability for the SVM-based classifier on the resting-state Gamma band EEG data cubes. Figure 10a presents the confusion matrix of the seven subjects; the number of misclassified samples from sb106 and sb455 highlights the challenge that the framework faced during identification. We investigated the effect of weakly identifiable (highly similar) subjects by removing them from the datasets; Figure 10b presents the confusion matrix without these subjects. Compared to the seven-subject case, the classification performance is notably improved. Figure 10c,d represent the worst-performing and best-performing subject pairs, respectively. While the framework achieved only 44.6% accuracy (ACC) for the least identifiable subject pair (sb106 and sb455), it attained a significantly higher accuracy of 96.98% when identifying the two most highly distinguishable subjects (sb328 and sb330). Figure 10e compares identification scores across two-, five-, and seven-subject groups in terms of AUC and ACC. The model achieves near-perfect scores for the two-subject case and maintains high performance with five subjects. However, the performance drops significantly in the seven-subject setting, reflecting the challenges introduced by the inclusion of less discriminable subjects.
The concept of signal permanence is referenced here to indicate stable subject-specific patterns; however, a more explicit treatment of intra-subject variability is crucial for practical biometric applications, since temporal fluctuations in EEG signals within the same individual present a significant challenge. The proposed method addresses this variability implicitly through autoencoder-based feature extraction, which aims to capture robust latent representations that generalize across temporal variations. Additionally, the classification models are trained on multiple EEG segments to improve resilience to within-subject fluctuations.
The superior performance of SVM and Random Forest (RF) classifiers compared to Artificial Neural Network (ANN) and k-Nearest Neighbor (KNN) models can be attributed to several factors. First, SVM and RF are well suited for high-dimensional, low-sample-size data scenarios—common in EEG-based biometrics—where they effectively manage overfitting and capture relevant discriminative patterns. SVM excels in finding optimal decision boundaries, especially in cases where class separation is complex but not abundant in training data. RF, as an ensemble method, enhances robustness through feature bagging and decision tree aggregation, reducing variance and improving generalization. In contrast, ANN models typically require larger datasets to fully leverage their learning capacity, and KNN is sensitive to noise and feature scaling, which can degrade performance in high-dimensional EEG feature spaces.

Comparison with the State of the Art

Using the same session data or mixed-session data for model training and testing is a common mistake that results in misleadingly high accuracy scores in EEG-based biometric research [27]. In the assessment of a biometric system, a longitudinal study must be performed, as pointed out by Nakamura et al. [38]. Plucinska et al. highlighted this challenge by demonstrating a 20% drop in accuracy scores depending on cross-session data usage [39]. Similarly, Kostilek et al. reported a 10% drop in accuracy when the test and training data came from separate sessions [40]. In another study, cross-session classification using an SVM classifier on the publicly available SEED database yielded an accuracy of 79.34% [41]. Several pattern classifiers were applied for subject identification, achieving accuracies ranging between 82 and 97% [42]. Das et al. demonstrate a pipeline using event-related potential (ERP) features that achieved 95% accuracy when trained on one session of data and tested on data from different recording sessions [33]. Similar to the above studies, Ref. [11] presented a graph convolutional neural network for EEG-based human identification.
In this study, we report a maximum subject identification accuracy of 97.46% under a strict longitudinal evaluation protocol, with training and testing performed on data collected ten days apart. To the best of our knowledge, this is the first study to employ spatio-temporal autoencoder-based feature learning on 3D EEG topographic data cubes for personal identification while conducting a true longitudinal design. This work, therefore, makes a distinctive contribution to the literature by combining methodological thoroughness with state-of-the-art performance.
Table 4 presents a comparative overview of various EEG-based personal identification methods. Ozdenizci et al. utilized deep learning with quadratic discriminant analysis (QDA) on 10 subjects, achieving 72% accuracy [18]. Kostilek et al. applied autoregressive (AR) features with distance-based classification (DBC) for nine subjects, reaching 77% accuracy [40]. Maiorana et al. reported a notably low equal error rate (EER) of 2% using AR features and hidden Markov models (HMMs) across 45 subjects [43]. Arnau-Gonzalez et al. combined AR, fractal complexity coefficients (FCCs), and Power Spectral Density (PSD) features with multiple classifiers, including SVM, KNN, Adaboost, and MLP, achieving up to 73% accuracy on 15 subjects [41]. The proposed method in this study employs autoencoders (AEs) for feature extraction and various classifiers, demonstrating superior performance with Area Under the Curve (AUC) scores ranging from 90.53% to 99.89% across experiments involving two, five, and seven subjects.

5. Conclusions

This study demonstrated the effectiveness of a 3D convolutional autoencoder (CAE) framework for extracting discriminative features from EEG data cubes that capture both spectral and temporal brain dynamics. The proposed method was evaluated using EEG recordings collected under three different stimulus conditions: resting state, sound stimuli, and cognitive tasks. Features extracted by the CAE were classified using four different machine learning models: KNN, SVM, RF, and ANN. Classification performance was assessed across various subject group sizes, achieving AUC scores ranging from 90.53% to 99.89%, depending on the number of subjects and the presence of highly similar (weakly distinguishable) individuals. The experimental design followed a longitudinal setup, where training and testing data were collected in different sessions, ensuring session independence and enabling robust analysis of temporal stability.
Frequency band analysis revealed that, for resting-state EEG, the Gamma band (32–125 Hz) consistently exhibited the highest discriminative power across all classifiers. In contrast, the Theta-to-Beta (Th2Be) range (4–32 Hz) yielded superior results for the sound and cognitive task conditions. These findings underscore the adaptability of the proposed framework across different mental states and frequency bands.
Another conclusion to be drawn from this study is that the number of subjects should be increased. Although seven subjects provide an indication of the classification capability of the classifiers, the variability in subject identifiability is still higher than expected and requires further analysis. Therefore, the observed performance trends should be interpreted as preliminary (proof of concept). The observed inter-subject variability, particularly among less identifiable individuals, indicates the need for further investigation using larger and more diverse populations before stronger general conclusions can be drawn.
It should be noted that EEG acquisition systems, although non-invasive, are generally bulky and not user-friendly. Reducing the number of EEG channels would be one way to make them feasible for real-world applications [44,45]. However, since this preliminary study specifically focused on analyzing spatio-temporal patterns, we intentionally considered all EEG channels. Maintaining or improving performance while decreasing the number of channels still needs to be studied.
As part of future work, we plan to develop an attention-based mechanism to better address weakly identifiable subjects. We also aim to explore temporal modeling techniques that capture sequential dependencies between consecutive EEG frames, which may further enhance subject-specific feature representations. Additionally, the integration of multi-stimuli fusion systems, combining EEG responses from different tasks, will be investigated to enrich the biometric signature. To further evaluate the generalizability and robustness of the proposed framework, we intend to validate it using publicly available EEG datasets such as BED and SEED. Furthermore, as the subject pool is expanded in future studies, clustering techniques that group subjects based on feature similarity and adapt the model accordingly will be investigated to improve identification performance within subpopulations.

Author Contributions

All authors discussed the methods, results, and commented on the manuscript. Major individual contributions are as follows: M.E.O.: conceptualization, pipeline implementation, debugging, testing the code, data structure design, running the code, and leading the preparation of the manuscript; Ö.M.S.: supervising the project, conceptualization, supervising the data collection, advising in the design and implementation of the code. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Institutes of Health grant number P20GM103424-20.

Institutional Review Board Statement

The protocol to collect data used in this manuscript was reviewed by the Southeastern Louisiana University Institutional Review Board and approved with approval number 1022-149.

Informed Consent Statement

Informed consent was obtained from all participants.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to omer.soysal@selu.edu.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chan, H.-L.; Kuo, P.-C.; Cheng, C.-Y.; Chen, Y.-S. Challenges and Future Perspectives on Electroencephalogram-Based Biometrics in Person Recognition. Front. Neuroinform. 2018, 12, 66. [Google Scholar]
  2. Micheletto, M.; Marcialis, G.L.; Orrù, G.; Roli, F. Fingerprint Recognition With Embedded Presentation Attacks Detection: Are We Ready? IEEE Trans. Inf. Forensics Secur. 2021, 16, 5338–5351. [Google Scholar] [CrossRef]
  3. Pujol, F.A.; Pujol, M.J.; Rizo-Maestro, C.; Pujol, M. Entropy-Based Face Recognition and Spoof Detection for Security Applications. Sustainability 2020, 12, 85. [Google Scholar]
  4. Cai, H.; Zhang, P.; Dong, H.; Xiao, Y.; Koffas, S.; Li, Y. Toward Stealthy Backdoor Attacks Against Speech Recognition via Elements of Sound. IEEE Trans. Inf. Forensics Secur. 2024, 19, 5852–5866. [Google Scholar] [CrossRef]
  5. Campisi, P.; La Rocca, D. Brain waves for automatic biometric-based user recognition. IEEE Trans. Inf. Forensics Secur. 2014, 9, 782–800. [Google Scholar] [CrossRef]
  6. Aissa, N.E.H.S.B.; Kerrache, C.A.; Korichi, A.; Lakas, A.; Belkacem, A.N. Enhancing EEG Signal Classifier Robustness Against Adversarial Attacks Using a Generative Adversarial Network Approach. IEEE Internet Things Mag. 2024, 7, 44–49. [Google Scholar] [CrossRef]
  7. Hosseini, M.P.; Hosseini, A.; Ahi, K. A Review on Machine Learning for EEG Signal Processing in Bioengineering. IEEE Rev. Biomed. Eng. 2020, 14, 204–218. [Google Scholar] [CrossRef]
  8. Rajwal, S.; Aggarwal, S. Convolutional Neural Network-Based EEG Signal Analysis: A Systematic Review. Arch. Comput. Methods Eng. 2023, 30, 3585–3615. [Google Scholar]
  9. Redwan, S.M.; Uddin, M.P.; Ulhaq, A.; Sharif, M.I.; Krishnamoorthy, G. Power spectral density-based resting-state EEG classification of first-episode psychosis. Sci. Rep. 2024, 14, 15154. [Google Scholar]
  10. Gosala, B.; Kapgate, P.D.; Jain, P. Wavelet transform for feature engineering in EEG data processing: An application on Schizophrenia. Biomed. Signal Process. Control 2023, 85, 104811. [Google Scholar]
  11. Tian, W.; Li, M.; Ju, X.; Liu, Y. Applying Multiple Functional Connectivity Features in GCN for EEG-Based Human Identification. Brain Sci. 2022, 12, 1072. [Google Scholar] [CrossRef] [PubMed]
  12. Xiong, H.; Yan, Y.; Chen, Y.; Liu, J. Graph convolution network-based eeg signal analysis: A review. Med. Biol. Eng. Comput. 2025, 63, 1609–1625. [Google Scholar] [PubMed]
  13. Pathak, D.; Kashyap, R.; Rahamatkar, S. A study of deep learning approach for the classification of electroencephalogram (EEG) brain signals. In Artificial Intelligence and Machine Learning for EDGE Computing; ScienceDirect: Amsterdam, The Netherlands, 2022; pp. 133–144. [Google Scholar]
  14. Emanet, F.; Sekeroglu, K. Decoding Visual Stimuli and Visual Imaginary Information from EEG Signals Utilizing Multi-Perspective 3D-CNN Based Hierarchical Deep-Fusion Learning Network. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Rome, Italy, 27–29 February 2024. [Google Scholar]
  15. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain Computer Interfaces, a Review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef]
  16. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [Google Scholar] [CrossRef]
  17. Tatar, A.B. Biometric identification system using EEG signals. Neural Comput. Appl. 2022, 35, 1009–1023. [Google Scholar]
  18. Ozdenizci, O.; Wang, Y.; Koike-Akino, T.; Erdogmus, D. Adversarial Deep Learning in EEG Biometrics. IEEE Signal Process. Lett. 2019, 26, 710–714. [Google Scholar] [CrossRef]
  19. Gao, Z.; Dang, W.; Wang, X.; Hong, X.; Hou, L.; Ma, K.; Perc, M. Complex networks and deep learning for EEG signal analysis. Cogn. Neurodynamics 2020, 15, 369–388. [Google Scholar] [CrossRef] [PubMed]
  20. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  21. Bandana Das, B.; Kumar Ram, S.; Sathya Babu, K.; Mohapatra, R.K.; Mohanty, S.P. Person identification using autoencoder-CNN approach with multitask-based EEG biometric. Multimed. Tools Appl. 2024, 83, 83205–83225. [Google Scholar]
  22. Shah, D.; Gopan, K.G.; Sinha, N. An investigation of the multi-dimensional (1D vs. 2D vs. 3D) analyses of EEG signals using traditional methods and deep learning-based methods. Front. Signal Process. 2022, 2, 936790. [Google Scholar]
  23. Soysal, O.M.; Kelvin, I.E.; Oztemel, M.E. Effectiveness of Cross-Model Learning Through View-Model Ensemble on Detection of Spatiotemporal EEG Patterns. In Proceedings of the Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 26–28 February 2025. [Google Scholar]
  24. Finn, E.S.; Shen, X.; Scheinost, D.; Rosenberg, M.D.; Huang, J.; Chun, M.M.; Papademetris, X.; Constable, R.T. Functional connectome fingerprinting: Identifying individuals using patterns of brain connectivity. Nat. Neurosci. 2015, 18, 1664–1671. [Google Scholar] [CrossRef] [PubMed]
  25. Kam, J.W.Y.; Griffin, S.; Shen, A.; Patel, S.; Hinrichs, H.; Heinze, H.-J.; Deouell, L.Y.; Knight, R.T. Systematic differences between perceptual and cognitive tasks in the EEG signatures they elicit. NeuroImage 2019, 186, 586–594. [Google Scholar]
  26. Koelsch, S. Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 2014, 15, 170–180. [Google Scholar] [CrossRef]
  27. Arnau-Gonzalez, P.; Katsigiannis, S.; Arevalillo-Herraez, M.; Ramzan, N. BED: A new dataset for EEG-based biometrics. IEEE Internet Things J. 2021, 8, 12219–12230. [Google Scholar]
  28. Li, D.; Zeng, Z.; Huang, N.; Wang, Z.; Yang, H. Brain topographic map: A visual feature for multi-view fusion design in EEG-based biometrics. Digit. Signal Process. 2025, 164, 105251. [Google Scholar]
  29. Ghosh, R.; Phadikar, S.; Deb, N.; Sinha, N.; Das, P.; Ghaderpour, E. Automatic Eyeblink and Muscular Artifact Detection and Removal From EEG Signals Using k-Nearest Neighbor Classifier and Long Short-Term Memory Networks. IEEE Sens. J. 2023, 23, 5422–5436. [Google Scholar]
  30. Cheng, J.; Chen, M.; Li, C.; Liu, Y.; Song, R.; Liu, A.; Chen, X. Emotion Recognition From Multi-Channel EEG via Deep Forest. IEEE J. Biomed. Health Inform. 2020, 25, 453–464. [Google Scholar]
  31. Su, Y.; Shi, W.; Hu, L.; Zhuang, S. Implementation of SVM-Based Low Power EEG Signal Classification Chip. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 4048–4052. [Google Scholar]
  32. Leila, F.; Siuly, S.; Kabir, E.; Wang, H. Classification of Alcoholic EEG Signals Using a Deep Learning Method. IEEE Sens. J. 2020, 21, 3552–3560. [Google Scholar]
  33. Das, R.; Maiorana, E.; Campisi, P. EEG Biometrics Using Visual Stimuli: A Longitudinal study. IEEE Signal Process. Lett. 2016, 22, 341–345. [Google Scholar] [CrossRef]
  34. Koike-Akino, T.; Mahajan, R.; Marks, T.; Wang, Y.; Shinji, W.; Tuzel, O.; Philip, O. High-Accuracy User Identification Using EEG Biometrics. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Orlando, FL, USA, 16–20 August 2016. [Google Scholar]
  35. Raichle, M.; Snyder, A. A default mode of brain function: A brief history of an evolving idea. NeuroImage 2007, 37, 1083–1090. [Google Scholar] [CrossRef]
  36. Fraschini, M.; Hillebrand, A.; Demuru, M.; Didaci, L.; Marcialis, G.L. An EEG-Based Biometric System Using Eigenvector Centrality in Resting State Brain Networks. IEEE Signal Process. Lett. 2015, 22, 666–670. [Google Scholar]
  37. DelPozo-Banos, M.; Travieso, C.M.; Weidemann, C.T.; Alonso, J.B. EEG biometric identification: A thorough exploration of the time-frequency domain. J. Neural Eng. 2015, 12, 056019. [Google Scholar] [CrossRef]
  38. Nakamura, T.; Goverdovsky, V.; Mandic, D. In-Ear EEG Biometrics for Feasible and Readily Collectable Real-World Person Authentication. IEEE Trans. Inf. Forensics Secur. 2018, 13, 648–661. [Google Scholar]
  39. Plucinska, R.; Jedrzejewski, K.; Malinowska, U.; Rogala, J. Leveraging Multiple Distinct EEG Training Sessions for Improvement of Spectral-Based Biometric Verification Results. Sensors 2023, 23, 2057. [Google Scholar] [CrossRef] [PubMed]
  40. Kostilek, M.; Stastny, J. EEG biometric identification: Repeatability and influence of movement-related EEG. In Proceedings of the International Conference on Applied Electronics, Pilsen, Czech Republic, 5–7 September 2012. [Google Scholar]
  41. Arnau-Gonzalez, P.; Arevalillo-Herraez, M.; Katsigiannis, S.; Ramzan, N. On the Influence of Affect in EEG-Based Subject Identification. IEEE Trans. Affect. Comput. 2021, 12, 391–401. [Google Scholar]
  42. Armstrong, B.C.; Ruiz-Blondet, M.V.; Khalifian, N.; Kurtz, K.J.; Jin, Z.; Laszlo, S. Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics. Neurocomputing 2015, 166, 59–67. [Google Scholar]
  43. Maiorana, E.; Campisi, P. Longitudinal Evaluation of EEG-Based Biometric Recognition. IEEE Trans. Inf. Forensics Secur. 2018, 13, 1123–1138. [Google Scholar]
  44. Dokare, I.; Gupta, S. Optimized seizure detection leveraging band-specific insights from limited EEG channels. Health Inf. Sci. Syst. 2025, 13, 30. [Google Scholar]
  45. Ferrara, R.; Giaquinto, M.; Percannella, G.; Rundo, L.; Saggese, A. Personalizing Seizure Detection for Individual Patients by Optimal Selection of EEG Signals. Sensors 2025, 25, 2715. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Proposed framework.
Figure 2. Preprocessing, (a) raw EEG data, (b) notch filters applied to EEG data.
Figure 3. Data cube generation, (a) EEG stream, purple vertical line is the segment of interest, (b) topo-maps with individual electrode recordings from frame 0 to 31, (c) interpolated topo-maps from frame 0 to 31, and (d) 3D representation of the EEG signals with respective ranges for a sample data point.
Figure 4. Feature extraction.
Figure 5. Autoencoder network structures.
Figure 6. Classification performance over three stimulus responses.
Figure 7. Subject identifiability.
Figure 8. SVM performance for the 7-subject and 5-subject classification tasks.
Figure 9. SVM performance for pairwise classification.
Figure 10. Effect of weakly identifiable subjects in terms of confusion matrix from the resting-state Gamma SVM, ACC, and AUC. (a) Confusion matrix for 7-subject classification; (b) confusion matrix for 5-subject classification, not including sb106 and sb455; (c) confusion matrix for the lowest identifiable subject pair; (d) confusion matrix for the highest identifiable subject pair; and (e) effect of the number of subjects in terms of ACC and AUC. sb106 and sb455 are not included in the distribution of 2-subject and 5-subject results.
Table 1. Hyperparameters.
Autoencoder
  Activation Functions: [relu, elu, sigmoid, selu]
  Optimizers: [adam, adamW]
  Initializer: [glorot_uniform, glorot_normal, he_uniform, he_normal]
  Kernel Size: [3 × 3 × 3, 5 × 5 × 3]
  Number of Layers: [1, 2, 3]
  Number of Units: [16, 32, 48, 64]
ANN
  Activation Functions: [relu, elu, sigmoid, selu]
  Optimizer: [adam, adamW]
  Initializer: [glorot_uniform, glorot_normal, he_uniform, he_normal]
  Number of Layers: [1, 2, 3]
KNN
  Number of Neighbors: [3, 4, 5]
  Weights: [uniform, distance]
  Algorithm: [auto, ball_tree, kd_tree, brute]
  Leaf Size: [10, 15, 20, 25, 30, 35, 40, 45, 50]
  Distance: [Manhattan, Euclidean]
SVM
  Kernel: [linear, rbf, poly]
  Gamma: [scale, auto]
  C: [min_value = 0.1, max_value = 10, sampling = log]
RF
  Number of Estimators: [100, 200, 300]
  Maximum Depth: [5, 10, 15]
  Minimum Sample Split: [2, 4, 6, 8, 10]
  Minimum Sample Leaf: [1, 2, 3, 4, 5]
  Maximum Features: [sqrt, log2]
Table 2. Hardware configuration and computation time.
Hardware configuration:
  GPU: NVIDIA A100 80GB PCI (Santa Clara, CA, USA)
  System: Linux
  Processor: x86_64
  CPU: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60 GHz (Santa Clara, CA, USA)
  Cores: 64
Computation time (minutes):
  AE: 48.85 | ANN: 25.56 | KNN: 0.56 | SVM: 3.04 | RF: 6.14
Table 3. Selected parameters.
Resting–Gamma
  Autoencoder: NEs: 250; AF: Relu; NLs: 4; KI: Glorot_uniform; KS: 3 × 3 × 3; OPT: Adam; NUs: 48, 16
  ANN: NEs: 400; AF: Relu; NLs: 2; KI: Glorot_normal; OPT: Adam
  KNN: NNs: 3; WGTs: distance; ALGs: auto; LS: 25; DSC: Manhattan
  SVM: KRN: rbf; GMA: scale; C: 0.98
  RF: NESTs: 300; MD: 10; MSS: 6; MSL: 3; MFs: sqrt
Sound–Th2Be
  Autoencoder: NEs: 150; AF: Relu; NLs: 2; KI: Glorot_uniform; KS: 5 × 5 × 3; OPT: Adam; NUs: 32
  ANN: NEs: 100; AF: Sigmoid; NLs: 1; KI: He_normal; OPT: Adam
  KNN: NNs: 3; WGTs: distance; ALGs: auto; LS: 25; DSC: Manhattan
  SVM: KRN: linear; GMA: auto; C: 0.50
  RF: NESTs: 200; MD: 10; MSS: 2; MSL: 3; MFs: sqrt
Cognitive–Th2Be
  Autoencoder: NEs: 150; AF: Relu; NLs: 2; KI: Glorot_uniform; KS: 5 × 5 × 3; OPT: Adam; NUs: 32
  ANN: NEs: 400; AF: Relu; NLs: 2; KI: Glorot_uniform; OPT: AdamW
  KNN: NNs: 4; WGTs: distance; ALGs: auto; LS: 30; DSC: Manhattan
  SVM: KRN: linear; GMA: auto; C: 0.50
  RF: NESTs: 300; MD: 10; MSS: 10; MSL: 1; MFs: log2
NEs: number of epochs, NLs: number of layers, AF: activation function, KI: kernel initializer, KS: kernel size, OPT: optimizer, NUs: number of units, NNs: number of neighbors, WGTs: weights, ALGs: algorithms, LS: leaf size, DSC: distance, KRN: kernel, GMA: Gamma, NESTs: number of estimators, MD: maximum depth, MSS: minimum sample split, MSL: minimum sample leaf, and MFs: maximum features.
Table 4. Comparison table.
Reference | Features | Model | Number of Subjects | Performance
Ozdenizci et al. [18] | DL | QDA | 10 | ACC 72%
Kostilek et al. [40] | AR | DBC | 9 | ACC 77%
Maiorana et al. [43] | AR | HMM | 45 | EER 2%
Arnau-Gonzalez et al. [41] | AR, FCCs, PSD | SVM, KNN, Adaboost, MLP | 15 | ACC 73%
Proposed | CAE | ANN, KNN, SVM, RF | 7, 5, 2 | ACC (59.09–97.46%)
DL: deep learning, QDA: quadratic discriminant analysis, AR: autoregressive, DBC: distance-based classifier, ACC: accuracy, EER: equal error rate, HMM: hidden Markov model, MLP: multi-layer perceptron, FCCs: fractal complexity coefficients, and PSD: Power Spectral Density.
