Search Results (359)

Search Parameters:
Keywords = electroencephalogram (EEG) classification

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions. Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
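The study's own pipeline is not shown in this listing; as a rough numpy sketch of the spectrogram input type it found most effective, the following turns a single EEG channel into a log-power spectrogram image. The sampling rate, window, and hop sizes here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def log_spectrogram(x, fs=256, win=128, hop=64):
    """STFT of a 1-D EEG channel, returned as a log-power image
    (frequency bins x time frames)."""
    window = np.hanning(win)
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i*hop:i*hop + win] * window for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power.T + 1e-10)   # log compresses the dynamic range

# Two seconds of a synthetic 10 Hz alpha rhythm plus a little noise
fs = 256
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
S = log_spectrogram(eeg, fs)
print(S.shape)  # (65 frequency bins, 7 frames)
```

With a 128-sample window the frequency resolution is 2 Hz, so the 10 Hz rhythm shows up as a bright band at bin 5 of the image fed to the network.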

23 pages, 85184 KiB  
Article
MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition
by Cheng Fang, Sitong Liu and Bing Gao
Sensors 2025, 25(15), 4819; https://doi.org/10.3390/s25154819 - 5 Aug 2025
Abstract
Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human–machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band–space–time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition. Full article
(This article belongs to the Section Intelligent Sensors)
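The abstract's "3D tensor to encode band–space–time correlations" can be sketched without the paper's code: below is a minimal numpy construction of a (bands x channels x windows) tensor of spectral band powers. The band boundaries, channel count, and window length are illustrative assumptions.

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_space_time_tensor(eeg, fs=128, win=128):
    """Fold multi-channel EEG (channels x samples) into a
    (bands x channels x windows) tensor of spectral band powers."""
    n_ch, n_s = eeg.shape
    n_win = n_s // win
    freqs = np.fft.rfftfreq(win, 1 / fs)
    out = np.zeros((len(BANDS), n_ch, n_win))
    for w in range(n_win):
        seg = eeg[:, w * win:(w + 1) * win]
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2
        for b, (lo, hi) in enumerate(BANDS.values()):
            out[b, :, w] = psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))                          # 32 channels, 4 s at 128 Hz
x[0] += 10 * np.sin(2 * np.pi * 10 * np.arange(512) / 128)  # alpha rhythm on channel 0
T = band_space_time_tensor(x)
print(T.shape)  # (4 bands, 32 channels, 4 windows)
```

A tensor like this is what the CNN-Inception and BiGRU stages would then consume; the injected 10 Hz rhythm makes channel 0's alpha entry dominate its theta entry.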

23 pages, 3055 KiB  
Article
RDPNet: A Multi-Scale Residual Dilated Pyramid Network with Entropy-Based Feature Fusion for Epileptic EEG Classification
by Tongle Xie, Wei Zhao, Yanyouyou Liu and Shixiao Xiao
Entropy 2025, 27(8), 830; https://doi.org/10.3390/e27080830 - 5 Aug 2025
Abstract
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide. Electroencephalogram (EEG) signals play a vital role in the diagnosis and analysis of epileptic seizures. However, traditional machine learning techniques often rely on handcrafted features, limiting their robustness and generalizability across diverse EEG acquisition settings, seizure types, and patients. To address these limitations, we propose RDPNet, a multi-scale residual dilated pyramid network with entropy-guided feature fusion for automated epileptic EEG classification. RDPNet combines residual convolution modules to extract local features and a dilated convolutional pyramid to capture long-range temporal dependencies. A dual-pathway fusion strategy integrates pooled and entropy-based features from both shallow and deep branches, enabling robust representation of spatial saliency and statistical complexity. We evaluate RDPNet on two benchmark datasets: the University of Bonn and TUSZ. On the Bonn dataset, RDPNet achieves 99.56–100% accuracy in binary classification, 99.29–99.79% in ternary tasks, and 95.10% in five-class classification. On the clinically realistic TUSZ dataset, it reaches a weighted F1-score of 95.72% across seven seizure types. Compared with several baselines, RDPNet consistently outperforms existing approaches, demonstrating superior robustness, generalizability, and clinical potential for epileptic EEG analysis. Full article
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information II)
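Two of RDPNet's ingredients are easy to illustrate in isolation (this is not the authors' code, and the kernel size, dilation schedule, and histogram bin count are assumptions): the context gained by stacking dilated convolutions, and a histogram-based Shannon entropy of the kind fused alongside pooled features.

```python
import numpy as np

def dilated_receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    """Receptive field of stacked dilated convolutions: each layer
    adds (kernel - 1) * dilation samples of temporal context."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

def shannon_entropy(x, bins=16):
    """Histogram-based Shannon entropy of a signal segment, a simple
    stand-in for the statistical-complexity features being fused."""
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(dilated_receptive_field())   # 31 samples of context from 4 cheap layers
rng = np.random.default_rng(1)
noise = rng.standard_normal(1024)  # broadband, EEG-like segment
print(shannon_entropy(noise) > shannon_entropy(np.zeros(1024)))  # True: noise is more complex
```

The point of the pyramid is that four small kernels buy a 31-sample receptive field; entropy then summarizes what pooling alone would miss.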

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. Support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracy of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments. Full article
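The first stage of the paper's two-stage selection, correlation-based filtering, can be sketched in a few lines of numpy (an illustration under assumed dimensions, not the study's implementation; the random-forest ranking stage is omitted):

```python
import numpy as np

def correlation_filter(X, y, k=10):
    """Rank features by absolute Pearson correlation with the label
    and keep the indices of the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:k]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)   # binary attention labels
X = rng.standard_normal((200, 50))          # 50 candidate spectral/connectivity features
X[:, 7] += 2.0 * y                          # make feature 7 class-informative
keep = correlation_filter(X, y, k=5)
print(7 in keep)  # the informative feature survives the filter
```

The surviving subset would then go to the random-forest ranking and finally to the SVM classifier described in the abstract.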

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal cortical potentials reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effectively decoding these MI EEG signals is often overshadowed by flawed methods in signal processing, deep learning methods that are clinically unexplained, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding to address the aforementioned challenges. The BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals. Lastly, a squeeze-and-excitation block (SEB) intelligently combines those identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated upon four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications. Full article
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)
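Of BCINetV1's components, the squeeze-and-excitation block (SEB) is the most standard, so it is the one sketched here; this is a generic numpy rendering of squeeze-and-excitation with made-up weights and dimensions, not the paper's ConvSAT or trained model.

```python
import numpy as np

def squeeze_excitation(feats, W1, W2):
    """Squeeze-and-excitation over a (channels x time) feature map:
    global average pool, a small bottleneck MLP, sigmoid channel gates."""
    z = feats.mean(axis=1)                   # squeeze: one summary per channel
    h = np.maximum(W1 @ z, 0.0)              # excitation: bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))      # per-channel gate in (0, 1)
    return feats * s[:, None]                # reweight the feature channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 100))       # 16 feature channels x 100 time steps
W1 = rng.standard_normal((4, 16)) * 0.1      # reduction ratio 4 (assumed)
W2 = rng.standard_normal((16, 4)) * 0.1
out = squeeze_excitation(feats, W1, W2)
print(out.shape)  # (16, 100)
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is how it "intelligently combines" the temporal and spectral features the attention blocks produce.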

35 pages, 6415 KiB  
Review
Recent Advances in Conductive Hydrogels for Electronic Skin and Healthcare Monitoring
by Yan Zhu, Baojin Chen, Yiming Liu, Tiantian Tan, Bowen Gao, Lijun Lu, Pengcheng Zhu and Yanchao Mao
Biosensors 2025, 15(7), 463; https://doi.org/10.3390/bios15070463 - 18 Jul 2025
Abstract
In recent decades, flexible electronics have witnessed remarkable advancements in multiple fields, encompassing wearable electronics, human–machine interfaces (HMI), clinical diagnosis, and treatment, etc. Nevertheless, conventional rigid electronic devices are fundamentally constrained by their inherent non-stretchability and poor conformability, limitations that substantially impede their practical applications. In contrast, conductive hydrogels (CHs) for electronic skin (E-skin) and healthcare monitoring have attracted substantial interest owing to their outstanding features, including adjustable mechanical properties, intrinsic flexibility, stretchability, transparency, and diverse functional and structural designs. Considerable efforts have focused on developing CHs that incorporate various conductive materials, such as metals, carbon, ionic liquids (ILs), and MXene, to enable multifunctional wearable sensors and flexible electrodes. This review presents a comprehensive summary of the recent advancements in CHs, focusing on their classifications and practical applications. Firstly, CHs are categorized into five groups based on the nature of the conductive materials employed. These categories include polymer-based, carbon-based, metal-based, MXene-based, and ionic CHs. Secondly, the promising applications of CHs for electrophysiological signals and healthcare monitoring are discussed in detail, including electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), respiratory monitoring, and motion monitoring. Finally, this review concludes with a comprehensive summary of current research progress and prospects regarding CHs in the fields of electronic skin and health monitoring applications. Full article

22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, as feature extraction is the core component of the decoding process, traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion, the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatiotemporal feature extraction module (STFE), the frequency feature extraction module (FFE), and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, capable of simultaneously modeling local fine-grained features and global spatiotemporal dependencies, effectively integrating spatiotemporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure by leveraging the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures the global dependencies among frequency domain features, thus improving the model’s ability to represent key frequency information. Finally, the fusion module effectively consolidates the spatiotemporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the BCI Competition IV-2a and IV-2b public datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
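The dual-branch idea, one branch summarizing the signal in time, another via the FFT, fused by concatenation, can be shown with toy stand-ins (assumed channel counts and pooling; the paper's attention and Transformer stages are not reproduced here):

```python
import numpy as np

def frequency_branch(trial):
    """FFE-style stand-in: rFFT magnitudes per channel, pooled into
    coarse frequency bins as crude multi-scale frequency features."""
    mag = np.abs(np.fft.rfft(trial, axis=1))   # channels x frequency bins
    n_pool = 8
    width = mag.shape[1] // n_pool
    pooled = mag[:, :width * n_pool].reshape(mag.shape[0], n_pool, width).mean(axis=2)
    return pooled.ravel()

def temporal_branch(trial):
    """STFE-style stand-in: per-channel mean and variance over time."""
    return np.concatenate([trial.mean(axis=1), trial.var(axis=1)])

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))        # 22 channels, 4 s at 250 Hz
fused = np.concatenate([temporal_branch(trial), frequency_branch(trial)])
print(fused.shape)  # (22*2 + 22*8,) = (220,)
```

The fusion module in the paper plays the role of this final concatenation, after each branch has been refined by its own Transformer encoder.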

16 pages, 1714 KiB  
Article
MCAF-Net: Multi-Channel Temporal Cross-Attention Network with Dynamic Gating for Sleep Stage Classification
by Xuegang Xu, Quan Wang, Changyuan Wang and Yaxin Zhang
Sensors 2025, 25(14), 4251; https://doi.org/10.3390/s25144251 - 8 Jul 2025
Abstract
Automated sleep stage classification is essential for objective sleep evaluation and clinical diagnosis. While numerous algorithms have been developed, the predominant existing methods utilize single-channel electroencephalogram (EEG) signals, neglecting the complementary physiological information available from other channels. Standard polysomnography (PSG) recordings capture multiple concurrent biosignals, where sophisticated integration of these multi-channel data represents a critical factor for enhanced classification accuracy. Conventional multi-channel fusion techniques typically employ elementary concatenation approaches that insufficiently model the intricate cross-channel correlations, consequently limiting classification performance. To overcome these shortcomings, we present MCAF-Net, a novel network architecture that employs temporal convolution modules to extract channel-specific features from each input signal and introduces a dynamic gated multi-head cross-channel attention mechanism (MCAF) to effectively model the interdependencies between different physiological channels. Experimental results show that our proposed method successfully integrates information from multiple channels, achieving significant improvements in sleep stage classification compared to the vast majority of existing methods. Full article
(This article belongs to the Section Sensor Networks)
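A single head of gated cross-channel attention, the flavor of mechanism MCAF-Net describes, can be sketched in numpy; all weights and dimensions below are invented for illustration, and the paper's exact gating is not specified in this abstract.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_channel_attention(feats, Wq, Wk, Wv, wg):
    """Each physiological channel queries every other channel; a sigmoid
    gate decides how much fused context to admit per channel."""
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv             # channels x d
    att = softmax(Q @ K.T / np.sqrt(Q.shape[1]), axis=1)     # channel x channel
    fused = att @ V
    gate = 1.0 / (1.0 + np.exp(-(feats @ wg)))               # per-channel gate
    return feats + gate[:, None] * fused                     # gated residual

rng = np.random.default_rng(0)
d = 32
feats = rng.standard_normal((6, d))   # 6 PSG channels (EEG, EOG, EMG, ...), 32-d each
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
wg = rng.standard_normal(d) * 0.1
out = gated_cross_channel_attention(feats, Wq, Wk, Wv, wg)
print(out.shape)  # (6, 32)
```

The gate is what distinguishes this from plain concatenation: a channel carrying little complementary information can be down-weighted instead of diluting the fused representation.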

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly becoming valuable in embedded emotion-aware systems, particularly for applications in healthcare, wearable electronics, and human–machine interactions. Among various EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance compared to traditional approaches. This advantage stems from their ability to extract complex features—such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns—from raw EEG data, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models—specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—have been benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. This comparative study investigates the effectiveness of deep learning techniques in EEG-based emotion recognition by classifying emotions into four categories based on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted using the DEAP dataset. The results indicate that both the CNN and RNN models achieve high classification performance in EEG-based emotion recognition, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming shallow algorithms (MLP, SVM, kNN). Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
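The four-class scheme the study uses follows directly from thresholding the two DEAP rating axes at the scale midpoint; a minimal sketch (the threshold value 5.0 on the 1–9 scale is the conventional choice, assumed here):

```python
def quadrant_label(valence, arousal, threshold=5.0):
    """Map DEAP-style 1-9 ratings to the four valence-arousal classes
    used in the study (threshold at the scale midpoint)."""
    if arousal >= threshold:
        return "HAPV" if valence >= threshold else "HANV"
    return "LAPV" if valence >= threshold else "LANV"

labels = [quadrant_label(v, a) for v, a in [(8, 8), (8, 2), (2, 8), (2, 2)]]
print(labels)  # ['HAPV', 'LAPV', 'HANV', 'LANV']
```

Every classifier in the comparison (CNN, RNN, MLP, SVM, kNN) is then trained against these four derived labels.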

21 pages, 4118 KiB  
Article
A Novel Deep Learning Model for Motor Imagery Classification in Brain–Computer Interfaces
by Wenhui Chen, Shunwu Xu, Qingqing Hu, Yiran Peng, Hong Zhang, Jian Zhang and Zhaowen Chen
Information 2025, 16(7), 582; https://doi.org/10.3390/info16070582 - 7 Jul 2025
Abstract
Recent advancements in decoding electroencephalogram (EEG) signals for motor imagery tasks have shown significant potential. However, the intricate time–frequency dynamics and inter-channel redundancy of EEG signals remain key challenges, often limiting the effectiveness of single-scale feature extraction methods. To address this issue, we propose the Dual-Branch Blocked-Integration Self-Attention Network (DB-BISAN), a novel deep learning framework for EEG motor imagery classification. The proposed method includes a Dual-Branch Feature Extraction Module designed to capture both temporal features and spatial patterns across different scales. Additionally, a novel Blocked-Integration Self-Attention Mechanism is employed to selectively highlight important features while minimizing the impact of redundant information. The experimental results show that DB-BISAN achieves state-of-the-art performance. Also, ablation studies confirm that the Dual-Branch Feature Extraction and Blocked-Integration Self-Attention Mechanism are critical to the model’s performance. Our approach offers an effective solution for motor imagery decoding, with significant potential for the development of efficient and accurate brain–computer interfaces. Full article
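The abstract does not spell out the Blocked-Integration mechanism, but the general idea of blocked self-attention, restricting attention to within-block positions to suppress redundant mixing and cost, can be illustrated with a simple mask (block size assumed):

```python
import numpy as np

def block_mask(n, block):
    """Block-diagonal attention mask: position i may attend to j only
    when both fall in the same block of the sequence."""
    idx = np.arange(n) // block
    return idx[:, None] == idx[None, :]

M = block_mask(8, block=4)
print(M.sum())  # 2 blocks of 4x4 = 32 allowed pairs (vs 64 unmasked)
```

Applied inside an attention layer, entries where the mask is False are set to -inf before the softmax, so cross-block (potentially redundant) interactions are simply never computed.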

30 pages, 3292 KiB  
Review
Smart and Secure Healthcare with Digital Twins: A Deep Dive into Blockchain, Federated Learning, and Future Innovations
by Ezz El-Din Hemdan and Amged Sayed
Algorithms 2025, 18(7), 401; https://doi.org/10.3390/a18070401 - 30 Jun 2025
Abstract
In recent years, cutting-edge technologies, such as artificial intelligence (AI), blockchain, and digital twin (DT), have revolutionized the healthcare sector by enhancing public health and treatment quality through precise diagnosis, preventive measures, and real-time care capabilities. Despite these advancements, the massive amount of generated biomedical data puts substantial challenges associated with information security, privacy, and scalability. Applying blockchain in healthcare-based digital twins ensures data integrity, immutability, consistency, and security, making it a critical component in addressing these challenges. Federated learning (FL) has also emerged as a promising AI technique to enhance privacy and enable decentralized data processing. This paper investigates the integration of digital twin concepts with blockchain and FL in the healthcare domain, focusing on their architecture and applications. It also explores platforms and solutions that leverage these technologies for secure and scalable medical implementations. A case study on federated learning for electroencephalogram (EEG) signal classification is presented, demonstrating its potential as a diagnostic tool for brain activity analysis and neurological disorder detection. Finally, we highlight the key challenges, emerging opportunities, and future directions in advancing healthcare digital twins with blockchain and federated learning, paving the way for a more intelligent, secure, and privacy-preserving medical ecosystem. Full article
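The aggregation step at the heart of the EEG case study, federated averaging (FedAvg), is compact enough to sketch directly; this is the textbook rule with toy numbers, not the paper's deployment.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model weights, weighted by
    how much local data each client holds, without sharing raw EEG."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals holding local EEG-classifier weights of the same shape
w_clients = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]
sizes = [100, 100, 200]
w_global = fed_avg(w_clients, sizes)
print(w_global)  # 0.25*1 + 0.25*2 + 0.5*4 = [2.75 2.75 2.75 2.75]
```

Only these weight vectors cross institutional boundaries, which is precisely the privacy property the review pairs with blockchain-backed integrity checks.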

17 pages, 879 KiB  
Article
Effect of EEG Electrode Numbers on Source Estimation in Motor Imagery
by Mustafa Yazıcı, Mustafa Ulutaş and Mukadder Okuyan
Brain Sci. 2025, 15(7), 685; https://doi.org/10.3390/brainsci15070685 - 26 Jun 2025
Abstract
The electroencephalogram (EEG) is one of the most popular neurophysiological methods in neuroscience. Scalp EEG measurements are obtained using various numbers of channels for both clinical and research applications. This pilot study explores the effect of EEG channel count on motor imagery classification using source analysis in brain–computer interface (BCI) applications. Different channel configurations are employed to evaluate classification performance. This study focuses on mu band signals, which are sensitive to motor imagery-related EEG changes. Common spatial patterns are utilized as a spatiotemporal filter to extract signal components relevant to the right hand and right foot extremities. Classification accuracies are obtained using configurations with 19, 30, 61, and 118 electrodes to determine the optimal number of electrodes in motor imagery studies. Experiments are conducted on the BCI Competition III Dataset IVa. The 19-channel configuration yields lower classification accuracy when compared to the others. The results from 118 channels are better than those from 19 channels but not as good as those from 30 and 61 channels. The best results are achieved when 61 channels are utilized. The average accuracy values are 83.63% with 19 channels, increasing to 84.70% with 30 channels, 84.73% with 61 channels, and decreasing to 83.95% when 118 channels are used. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
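Common spatial patterns, the filter the study applies at every channel count, reduces to a whitening step plus an eigendecomposition; a minimal numpy sketch with synthetic two-class data (channel counts and variance structure invented for illustration):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """CSP via whitening the composite covariance and diagonalizing
    class A's covariance in the whitened space."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = np.linalg.eigh(Ca + Cb)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T     # whitening transform
    w, V = np.linalg.eigh(P @ Ca @ P.T)           # ascending eigenvalues
    W = V.T @ P
    # Extremal rows maximize variance for one class, minimize for the other
    return np.vstack([W[:n_filters], W[-n_filters:]])

rng = np.random.default_rng(0)
# Class A has extra variance on channel 0, class B on channel 1
a = [rng.standard_normal((4, 200)) * np.array([3, 1, 1, 1])[:, None] for _ in range(20)]
b = [rng.standard_normal((4, 200)) * np.array([1, 3, 1, 1])[:, None] for _ in range(20)]
W = csp_filters(a, b)
print(W.shape)  # (4 spatial filters, 4 channels)
```

The channel-count question the paper studies enters here through the covariance estimates: with 118 channels those matrices are large and noisy, which is one plausible reason 30–61 channels performed best.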

18 pages, 1566 KiB  
Article
Supporting ASD Diagnosis with EEG, ML and Swarm Intelligence: Early Detection of Autism Spectrum Disorder Based on Electroencephalography Analysis by Machine Learning and Swarm Intelligence
by Flávio Secco Fonseca, Adrielly Sayonara de Oliveira Silva, Maria Vitória Soares Muniz, Catarina Victória Nascimento de Oliveira, Arthur Moreira Nogueira de Melo, Maria Luísa Mendes de Siqueira Passos, Ana Beatriz de Souza Sampaio, Thailson Caetano Valdeci da Silva, Alana Elza Fontes da Gama, Ana Cristina de Albuquerque Montenegro, Bianca Arruda Manchester de Queiroga, Marilú Gomes Netto Monte da Silva, Rafaella Asfora Siqueira Campos Lima, Sadi da Silva Seabra Filho, Shirley da Silva Jacinto de Oliveira Cruz, Cecília Cordeiro da Silva, Clarisse Lins de Lima, Giselle Machado Magalhães Moreno, Maíra Araújo de Santana, Juliana Carneiro Gomes and Wellington Pinheiro dos Santos
AI Sens. 2025, 1(1), 3; https://doi.org/10.3390/aisens1010003 - 24 Jun 2025
Abstract
Deficits in social interaction and communication characterize Autism Spectrum Disorder (ASD). Although widely recognized by its symptoms, diagnosing ASD remains challenging due to its wide range of clinical presentations. Methods: In this study, we propose a method to assist in the early diagnosis of autism, which is currently primarily based on clinical assessments. Our approach aims to develop an early differential diagnosis based on electroencephalogram (EEG) signals, seeking to identify patterns associated with ASD. In this study, we used EEG data from 56 participants obtained from the Sheffield dataset, including 28 individuals diagnosed with Autism Spectrum Conditions (ASC) and 28 neurotypical controls, applying numerical techniques to handle missing data. Subsequently, after a detailed analysis of the signals, we applied three different starting approaches: one with the original database and the other two with selection of the most significant attributes using the PSO and evolutionary search methods. In each of these approaches, we applied a series of machine learning models, where relatively high performances for classification were observed. Results: We achieved accuracies of 99.13% ± 0.44 for the dataset with original signals, 99.23% ± 0.38 for the dataset after applying PSO, and 93.91% ± 1.10 for the dataset after the evolutionary search methodology. These results were obtained using classical classifiers, with SVM being the most effective among the first two approaches, while Random Forest with 500 trees proved more efficient in the third approach. Conclusions: Even with all the limitations of the base, the results of the experiments demonstrated promising findings in identifying patterns associated with Autism Spectrum Disorder through the analysis of EEG signals. 
Finally, we emphasize that this work is the starting point for a larger project aimed at supporting and democratizing the diagnosis of ASD, both early in children and later in adults. Full article
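The attribute-selection step described above can be sketched as a binary particle swarm optimization over feature masks, scored by cross-validated SVM accuracy. This is a minimal illustration, not the authors' implementation: the swarm parameters, the RBF kernel, and the three-fold evaluation are assumptions, and scikit-learn's `SVC` stands in for whatever SVM configuration the study used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_feature_selection(X, y, n_particles=10, n_iters=20, seed=0):
    """Binary PSO sketch: each particle holds continuous positions in [0, 1]
    per feature; thresholding at 0.5 yields a feature mask, which is scored
    by cross-validated SVM accuracy."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat))
    vel = np.zeros_like(pos)

    def fitness(mask):
        if mask.sum() == 0:          # empty mask: nothing to classify with
            return 0.0
        return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p > 0.5) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # inertia + cognitive + social terms (standard PSO update)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        fit = np.array([fitness(p > 0.5) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    return gbest > 0.5               # boolean mask of selected features
```

Thresholding continuous positions at 0.5 is one standard way to binarize PSO for feature selection; sigmoid-based binarization is an equally common alternative.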

32 pages, 2830 KiB  
Article
Hybrid Deep Learning Approach for Automated Sleep Cycle Analysis
by Sebastián Urbina Fredes, Ali Dehghan Firoozabadi, Pablo Adasme, David Zabala-Blanco, Pablo Palacios Játiva and Cesar A. Azurdia-Meza
Appl. Sci. 2025, 15(12), 6844; https://doi.org/10.3390/app15126844 - 18 Jun 2025
Abstract
Health and well-being, both mental and physical, depend largely on adequate sleep. Many conditions arise from a disrupted sleep cycle, significantly deteriorating the quality of life of those affected. Analysis of the sleep cycle provides valuable information about sleep stages, which sleep medicine uses in the diagnosis of numerous diseases. The clinical standard for sleep data recording is polysomnography (PSG), which records the electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG), and other signals during sleep. Recently, machine learning approaches have achieved high accuracy in applications such as the classification and prediction of biomedical signals. This study presents a hybrid neural network architecture composed of convolutional neural network (CNN) layers, bidirectional long short-term memory (BiLSTM) layers, and attention mechanism layers to process the large volumes of EEG data in PSG files. The objective is to design a framework for automated feature extraction. To address class imbalance, an epoch-level random undersampling (E-LRUS) method is proposed that discards full epochs from majority classes while preserving the temporal structure, unlike traditional methods that remove individual samples. The method was tested on EEG recordings from the public Sleep-EDF Expanded database, achieving an overall accuracy of 78.67% and an F1-score of 72.10%. The findings show that the method is effective for sleep stage classification. Full article
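The epoch-level undersampling idea can be illustrated with a short numpy sketch: whole epochs are discarded from majority sleep stages until every class holds the same number of epochs, so no epoch is internally truncated and within-epoch temporal structure survives. This is a plausible reading of E-LRUS rather than the paper's exact procedure; balancing down to the minority-class count is an assumption.

```python
import numpy as np

def epoch_level_random_undersample(epochs, labels, seed=0):
    """Drop whole epochs from majority classes so every class keeps the
    same number of epochs as the minority class. `epochs` has shape
    (n_epochs, n_samples_per_epoch); individual samples are never removed."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        keep.append(rng.choice(idx, size=n_min, replace=False))
    keep = np.sort(np.concatenate(keep))   # restore chronological epoch order
    return epochs[keep], labels[keep]
```

Sorting the kept indices preserves the recording's epoch order, which matters when the balanced data is later fed to sequence models such as the BiLSTM layers described above.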

15 pages, 13180 KiB  
Article
Channel-Dependent Multilayer EEG Time-Frequency Representations Combined with Transfer Learning-Based Deep CNN Framework for Few-Channel MI EEG Classification
by Ziang Liu, Kang Fan, Qin Gu and Yaduan Ruan
Bioengineering 2025, 12(6), 645; https://doi.org/10.3390/bioengineering12060645 - 12 Jun 2025
Abstract
The study of electroencephalogram (EEG) signals is crucial for understanding brain function and has extensive applications in clinical diagnosis, neuroscience, and brain–computer interface technology. This paper addresses the challenge of recognizing motor imagery EEG signals with few channels, which is essential for portable and real-time applications. A novel framework is proposed that applies a continuous wavelet transform to convert time-domain EEG signals into two-dimensional time-frequency representations. These images are then concatenated into channel-dependent multilayer EEG time-frequency representations (CDML-EEG-TFR), incorporating multidimensional information of time, frequency, and channels, allowing for a more comprehensive and enriched brain representation under the constraint of few channels. By adopting a deep convolutional neural network with EfficientNet as the backbone and utilizing pre-trained weights from natural image datasets for transfer learning, the framework can simultaneously learn temporal, spatial, and channel features embedded in the CDML-EEG-TFR. Moreover, the transfer learning strategy effectively addresses the issue of data sparsity in the context of a few channels. Our approach enhances the classification accuracy of motor imagery EEG signals in few-channel scenarios. Experimental results on the BCI Competition IV 2b dataset show a significant improvement in classification accuracy, reaching 80.21%. This study highlights the potential of CDML-EEG-TFR and the EfficientNet-based transfer learning strategy in few-channel EEG signal classification, laying a foundation for practical applications and further research in medical and sports fields. Full article
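The construction of the CDML-EEG-TFR input can be sketched with a numpy-only Morlet continuous wavelet transform: each channel's signal becomes a scalogram image, and the per-channel images are stacked along a depth axis to give a (channels, scales, time) tensor suitable for a CNN backbone. The wavelet parameters (`w0`, the scale grid, taking magnitudes rather than complex coefficients) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a (non-normalized) Morlet mother
    wavelet, implemented via direct convolution; returns coefficient
    magnitudes with shape (n_scales, n_times)."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                 # wavelet support
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)                            # scale normalization
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

def cdml_eeg_tfr(eeg, scales):
    """Stack per-channel scalograms into a channel-dependent multilayer
    time-frequency representation: (n_channels, n_scales, n_times)."""
    return np.stack([morlet_cwt(ch, scales) for ch in eeg])
```

The resulting tensor plays the role of a multi-layer image, which is what lets an ImageNet-pretrained backbone such as EfficientNet be fine-tuned on it after an appropriate input-channel adaptation.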
(This article belongs to the Special Issue Artificial Intelligence for Biomedical Signal Processing, 2nd Edition)
