Search Results (622)

Search Parameters:
Keywords = electroencephalogram (EEG) signals

42 pages, 5531 KiB  
Article
Preliminary Analysis and Proof-of-Concept Validation of a Neuronally Controlled Visual Assistive Device Integrating Computer Vision with EEG-Based Binary Control
by Preetam Kumar Khuntia, Prajwal Sanjay Bhide and Pudureddiyur Venkataraman Manivannan
Sensors 2025, 25(16), 5187; https://doi.org/10.3390/s25165187 - 21 Aug 2025
Abstract
Contemporary visual assistive devices often lack an immersive user experience due to passive control systems. This study introduces a neuronally controlled visual assistive device (NCVAD) that aims to assist visually impaired users in performing reach tasks with active, intuitive control. The developed NCVAD integrates computer vision, electroencephalogram (EEG) signal processing, and robotic manipulation to facilitate object detection, selection, and assistive guidance. The monocular vision-based subsystem implements the YOLOv8n algorithm to detect objects of daily use. Then, audio prompting conveys the detected objects’ information to the user, who selects their targeted object using a voluntary trigger decoded through real-time EEG classification. The target’s physical coordinates are extracted using ArUco markers, and a gradient descent-based path optimization algorithm (POA) guides a 3-DoF robotic arm to reach the target. The classification algorithm achieves over 85% precision and recall in decoding EEG data, even with coexisting physiological artifacts. Similarly, the POA achieves approximately 650 ms of actuation time with a 0.001 learning rate and a 0.1 cm² error threshold. Finally, the study validates the preliminary analysis results on a working physical model and benchmarks the robotic arm’s performance against human users, establishing a proof of concept for future assistive technologies integrating EEG and computer vision paradigms.
(This article belongs to the Section Intelligent Sensors)
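The authors' code is not part of this listing; as a rough sketch of the gradient descent-based path optimization idea, using the learning rate (0.001) and error threshold (0.1 cm²) quoted in the abstract on a simple squared-distance cost (an assumption here, not necessarily the paper's actual cost function):

```python
import numpy as np

def descend_to_target(start, target, lr=0.001, err_thresh=0.1, max_iter=100000):
    """Step an end-effector position toward a target by gradient descent
    on the squared-distance cost J(p) = ||p - target||^2 (illustrative only)."""
    p = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(max_iter):
        grad = 2.0 * (p - target)                    # dJ/dp
        p -= lr * grad                               # gradient step
        if np.sum((p - target) ** 2) < err_thresh:   # cm^2 error threshold
            break
    return p

# Hypothetical example: guide from the origin to a detected object at (10, 5) cm
end = descend_to_target([0.0, 0.0], [10.0, 5.0])
```

A real 3-DoF arm would descend in joint space through the arm's kinematics rather than directly in Cartesian coordinates, but the stopping rule and step size play the same roles.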

26 pages, 3497 KiB  
Article
A Multi-Branch Network for Integrating Spatial, Spectral, and Temporal Features in Motor Imagery EEG Classification
by Xiaoqin Lian, Chunquan Liu, Chao Gao, Ziqian Deng, Wenyang Guan and Yonggang Gong
Brain Sci. 2025, 15(8), 877; https://doi.org/10.3390/brainsci15080877 - 18 Aug 2025
Abstract
Background: Efficient decoding of motor imagery (MI) electroencephalogram (EEG) signals is essential for the precise control and practical deployment of brain-computer interface (BCI) systems. Owing to the complex nonlinear characteristics of EEG signals across spatial, spectral, and temporal dimensions, efficiently extracting multidimensional discriminative features remains a key challenge to improving MI-EEG decoding performance. Methods: To address the challenge of capturing complex spatial, spectral, and temporal features in MI-EEG signals, this study proposes a multi-branch deep neural network, which jointly models these dimensions to enhance classification performance. The network takes as inputs both a three-dimensional power spectral density tensor and two-dimensional time-domain EEG signals and incorporates four complementary feature extraction branches to capture spatial, spectral, spatial-spectral joint, and temporal dynamic features, thereby enabling unified multidimensional modeling. The model was comprehensively evaluated on two widely used public MI-EEG datasets: the EEG Motor Movement/Imagery Database (EEGMMIDB) and BCI Competition IV Dataset 2a (BCIIV2A). To further assess interpretability, gradient-weighted class activation mapping (Grad-CAM) was employed to visualize the spatial and spectral features prioritized by the model. Results: On the EEGMMIDB dataset, it achieved an average classification accuracy of 86.34% and a kappa coefficient of 0.829 in the five-class task. On the BCIIV2A dataset, it reached an accuracy of 83.43% and a kappa coefficient of 0.779 in the four-class task. Conclusions: These results demonstrate that the network outperforms existing state-of-the-art methods in classification performance. Furthermore, Grad-CAM visualizations identified the key spatial channels and frequency bands attended to by the model, supporting its neurophysiological interpretability.
(This article belongs to the Section Neurotechnology and Neuroimaging)

45 pages, 9550 KiB  
Article
Wavelet-Based Denoising Strategies for Non-Stationary Signals in Electrical Power Systems: An Optimization Perspective
by Sıtkı Akkaya
Electronics 2025, 14(16), 3190; https://doi.org/10.3390/electronics14163190 - 11 Aug 2025
Abstract
Effective noise elimination is essential for ensuring data reliability in high-accuracy measurement systems. However, selecting the optimal denoising strategy for diverse and non-stationary signal types remains a major challenge. This study presents a wavelet-based denoising optimization framework that systematically identifies and applies the most suitable noise reduction model for each signal segment. By evaluating multiple wavelet types and thresholding strategies, the proposed method enables adaptive and automated selection tailored to the specific characteristics of each signal. The framework was validated using synthetic, open-access, and experimentally acquired signals in both reference-based and reference-free scenarios. Extensive testing covered signals from power quality disturbance (PQD) events, electrocardiogram (ECG) data, and electroencephalogram (EEG) recordings, all of which represent critical applications where signal integrity under noise is essential. The method achieved optimal model selection in 22.15 s (across 4558 iterations) on a standard PC, with an average denoising time of 4.86 ms per signal window. These results highlight its potential for real-time and embedded applications, including smart grid monitoring systems, wearable health devices, and automated biomedical diagnostic platforms, where adaptive, fast, and reliable denoising is vital. The framework’s versatility makes it highly relevant for deployment in smart grid monitoring systems and intelligent energy infrastructures requiring robust signal conditioning.
(This article belongs to the Special Issue Smart Grid Technologies and Energy Conversion Systems)
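A minimal illustration of the wavelet denoising family the paper searches over: a single-level Haar transform with soft thresholding, in plain NumPy. This fixed wavelet/threshold choice is only a sketch; the framework's point is to select among many such combinations automatically.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising with soft thresholding
    (illustrative stand-in for the framework's per-segment model search)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                    # even length for pairing
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)      # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)             # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# Synthetic non-stationary test signal, as in reference-based validation
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_denoise(noisy, threshold=0.3)
```

Shrinking the detail coefficients suppresses broadband noise while the smooth signal content, concentrated in the approximation band, passes through.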

14 pages, 1405 KiB  
Article
Hybrid EEG-EMG Control Scheme for Multiple Degrees of Freedom Upper-Limb Prostheses
by Sorelis Isabel Bandes Rodriguez and Yasuharu Koike
Actuators 2025, 14(8), 397; https://doi.org/10.3390/act14080397 - 11 Aug 2025
Abstract
Upper-limb motor disabilities and amputation pose a significant burden on individuals, hindering their ability to perform daily activities independently. While various research studies aim to enhance the performance of current upper-limb prosthetic devices, electrically activated prostheses still face challenges in achieving optimal functionality. This paper explores the potential of utilizing electromyogram (EMG) and electroencephalogram (EEG) signals to not only decipher movement across multiple degrees of freedom (DOFs) but also offer a more intuitive means of control. In this study, six distinct control schemes for upper-limb prosthetic devices are proposed, each with a different combination of EEG and EMG signals. These schemes were designed to control movements across multiple degrees of freedom, encompassing five different hand and forearm actions (hand-open, hand-close, wrist pronation, wrist supination, and rest-state). Using linear discriminant analysis as the classifier yields classification accuracies of over 85% for the combined EEG-EMG control schemes. The results suggest promising advancements in the field and show the potential for a more effective and user-friendly control interface for upper-limb prosthetic devices.
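The classifier named above, linear discriminant analysis, reduces to a closed-form projection in the two-class case. A self-contained Fisher-LDA sketch on synthetic two-dimensional feature clusters (hypothetical data, not the study's recordings) shows the mechanics:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction w = Sw^{-1} (mu1 - mu0) for two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    return np.linalg.solve(Sw, mu1 - mu0)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((100, 2)) + [0, 0]   # e.g. "rest-state" features
X1 = rng.standard_normal((100, 2)) + [3, 0]   # e.g. "hand-open" features
w = fisher_lda_direction(X0, X1)

# Classify by projecting onto w and thresholding at the projected class midpoint
threshold = 0.5 * (X0.mean(axis=0) + X1.mean(axis=0)) @ w
preds = (np.vstack([X0, X1]) @ w > threshold).astype(int)
labels = np.array([0] * 100 + [1] * 100)
accuracy = np.mean(preds == labels)
```

The five-class problem in the paper would use the multi-class generalization, but the projection-then-threshold idea is the same.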

14 pages, 661 KiB  
Article
Epileptic Seizure Prediction Using a Combination of Deep Learning, Time–Frequency Fusion Methods, and Discrete Wavelet Analysis
by Hadi Sadeghi Khansari, Mostafa Abbaszadeh, Gholamreza Heidary Joonaghany, Hamidreza Mohagerani and Fardin Faraji
Algorithms 2025, 18(8), 492; https://doi.org/10.3390/a18080492 - 7 Aug 2025
Abstract
Epileptic seizure prediction remains a critical challenge in neuroscience and healthcare, with profound implications for enhancing patient safety and quality of life. In this paper, we introduce a novel seizure prediction method that leverages electroencephalogram (EEG) data, combining discrete wavelet transform (DWT)-based time–frequency analysis, advanced feature extraction, and deep learning using Fourier neural networks (FNNs). The proposed approach extracts essential features from EEG signals, including entropy, power, frequency, and amplitude, to effectively capture the brain’s complex and nonstationary dynamics. We evaluate the method on the widely used CHB-MIT EEG dataset, ensuring direct comparability with prior research. Experimental results demonstrate that our DWT-FS-FNN model achieves a prediction accuracy of 98.96% with a zero false-positive rate, outperforming several state-of-the-art methods. These findings underscore the potential of integrating advanced signal processing and deep learning methods for reliable, real-time seizure prediction. Future work will focus on optimizing the model for real-world clinical deployment and expanding it to incorporate multimodal physiological data, further enhancing its applicability in clinical practice.
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
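The abstract names entropy, power, frequency, and amplitude features. A toy extractor for a single EEG window (a hypothetical helper, not the paper's exact feature set) could look like:

```python
import numpy as np

def eeg_window_features(x, fs=256):
    """Toy versions of the feature families named in the abstract:
    Shannon entropy, mean power, dominant frequency, peak amplitude."""
    x = np.asarray(x, dtype=float)
    # Shannon entropy of the normalized amplitude histogram
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    power = np.mean(x ** 2)                        # mean signal power
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dom_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    amplitude = np.max(np.abs(x))
    return entropy, power, dom_freq, amplitude

fs = 256
t = np.arange(fs) / fs                 # one second of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t)         # 10 Hz alpha-band tone
ent, pwr, f0, amp = eeg_window_features(x, fs)
```

In the paper these features are computed on DWT subbands rather than the raw window, which localizes them in both time and frequency.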

15 pages, 440 KiB  
Article
Automated Detection of Epileptic Seizures in EEG Signals via Micro-Capsule Networks
by Baozeng Wang, Jiayue Zhou, Hualiang Zhang, Jin Zhou and Changyong Wang
Brain Sci. 2025, 15(8), 842; https://doi.org/10.3390/brainsci15080842 - 7 Aug 2025
Abstract
Background: Epilepsy is a chronic neurological disorder that affects individuals across all age groups. Early detection and intervention are crucial for minimizing both physical and psychological distress. However, the unpredictable nature of seizures presents considerable challenges for timely detection and accurate diagnosis. Methods: To address the challenge of low recognition accuracy in small-sample, single-channel epileptic electroencephalogram (EEG) signals, this study proposes an automated seizure detection method using a micro-capsule network. First, we propose a dimensionality-increasing transformation technique for single-channel EEG signals to meet the network’s input requirements. Second, a streamlined micro-capsule network is designed by optimizing and simplifying the framework’s architecture. Finally, EEG features are encoded as feature vectors to better represent spatial hierarchical relationships between seizure patterns, enhancing the framework’s adaptability and improving detection accuracy. Results: Compared to existing EEG-based detection methods, our approach achieves higher accuracy on small-sample datasets while reducing computational complexity. Conclusions: By leveraging its micro-capsule network architecture, the framework demonstrates superior classification accuracy when analyzing single-channel epileptiform EEG signals, significantly outperforming both convolutional neural network-based implementations and established machine learning methodologies.
(This article belongs to the Section Neurotechnology and Neuroimaging)

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and a better understanding of complex neurological conditions.
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
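Spectrogram inputs of the kind compared here come from a windowed FFT over sliding frames. A bare-bones NumPy version (illustrative; the study's actual preprocessing parameters are not given in the abstract):

```python
import numpy as np

def spectrogram(x, win_len=64, hop=32):
    """Magnitude spectrogram from a Hann-windowed FFT: one column per
    frame, one row per frequency bin (a stand-in for the time-frequency
    images fed to the DenseNet)."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

fs = 256
t = np.arange(2 * fs) / fs                # two seconds of synthetic signal
x = np.sin(2 * np.pi * 12 * t)            # a 12 Hz component
spec = spectrogram(x)                     # 12 Hz maps to FFT bin 12*64/256 = 3
```

Each channel (here T5 or O1) yields one such image, which is then treated as ordinary image input by the convolutional network.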

23 pages, 85184 KiB  
Article
MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition
by Cheng Fang, Sitong Liu and Bing Gao
Sensors 2025, 25(15), 4819; https://doi.org/10.3390/s25154819 - 5 Aug 2025
Abstract
Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human–machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band–space–time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition.
(This article belongs to the Section Intelligent Sensors)
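The multi-head self-attention module described above builds on the scaled dot-product attention primitive. A single-head NumPy sketch (generic attention, not MB-MSTFNet's implementation) makes the weighting explicit:

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Single attention head: softmax(Q K^T / sqrt(d)) V. Each output row is
    a weighted mix of the value rows, with larger weights on more relevant
    time windows / channels."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))   # 5 time windows, 8-dim features (toy sizes)
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
out, attn = scaled_dot_attention(Q, K, V)
```

A multi-head layer runs several such heads with separately learned Q/K/V projections and concatenates the outputs, which is how the model can attend to different bands and regions simultaneously.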

23 pages, 3055 KiB  
Article
RDPNet: A Multi-Scale Residual Dilated Pyramid Network with Entropy-Based Feature Fusion for Epileptic EEG Classification
by Tongle Xie, Wei Zhao, Yanyouyou Liu and Shixiao Xiao
Entropy 2025, 27(8), 830; https://doi.org/10.3390/e27080830 - 5 Aug 2025
Abstract
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide. Electroencephalogram (EEG) signals play a vital role in the diagnosis and analysis of epileptic seizures. However, traditional machine learning techniques often rely on handcrafted features, limiting their robustness and generalizability across diverse EEG acquisition settings, seizure types, and patients. To address these limitations, we propose RDPNet, a multi-scale residual dilated pyramid network with entropy-guided feature fusion for automated epileptic EEG classification. RDPNet combines residual convolution modules to extract local features and a dilated convolutional pyramid to capture long-range temporal dependencies. A dual-pathway fusion strategy integrates pooled and entropy-based features from both shallow and deep branches, enabling robust representation of spatial saliency and statistical complexity. We evaluate RDPNet on two benchmark datasets: the University of Bonn dataset and the Temple University Hospital EEG Seizure Corpus (TUSZ). On the Bonn dataset, RDPNet achieves 99.56–100% accuracy in binary classification, 99.29–99.79% in ternary tasks, and 95.10% in five-class classification. On the clinically realistic TUSZ dataset, it reaches a weighted F1-score of 95.72% across seven seizure types. Compared with several baselines, RDPNet consistently outperforms existing approaches, demonstrating superior robustness, generalizability, and clinical potential for epileptic EEG analysis.
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information II)

27 pages, 1766 KiB  
Article
A Novel Optimized Hybrid Deep Learning Framework for Mental Stress Detection Using Electroencephalography
by Maithili Shailesh Andhare, T. Vijayan, B. Karthik and Shabana Urooj
Brain Sci. 2025, 15(8), 835; https://doi.org/10.3390/brainsci15080835 - 4 Aug 2025
Abstract
Mental stress is a psychological or emotional strain that typically occurs because of threatening, challenging, or overwhelming conditions and affects human behavior. Various factors, such as professional, environmental, and personal pressures, often trigger it. In recent years, various deep learning (DL)-based schemes using electroencephalograms (EEGs) have been proposed. However, the effectiveness of DL-based schemes is challenged by intricate DL structures, class imbalance problems, poor feature representation, low frequency-resolution problems, and the complexity of multi-channel signal processing. This paper presents a novel hybrid DL framework, BDDNet, which combines a deep convolutional neural network (DCNN), bidirectional long short-term memory (BiLSTM), and a deep belief network (DBN). BDDNet provides superior spectral–temporal feature depiction and better long-term dependency on the local and global features of EEGs. BDDNet accepts multiple EEG features (MEFs) that provide the spectral and time-domain features of EEGs. A novel improved crow search algorithm (ICSA) is presented for channel selection to minimize the computational complexity of multichannel stress detection. Further, a novel employee optimization algorithm (EOA) is utilized for hyper-parameter optimization of the hybrid BDDNet to enhance training performance. The outcomes of the novel BDDNet were assessed using the public DEAP dataset. BDDNet-ICSA offers improved recall of 97.6%, precision of 97.6%, F1-score of 97.6%, selectivity of 96.9%, negative predictive value (NPV) of 96.9%, and accuracy of 97.3% compared to traditional techniques.

13 pages, 1879 KiB  
Article
Dynamic Graph Convolutional Network with Dilated Convolution for Epilepsy Seizure Detection
by Xiaoxiao Zhang, Chenyun Dai and Yao Guo
Bioengineering 2025, 12(8), 832; https://doi.org/10.3390/bioengineering12080832 - 31 Jul 2025
Abstract
The electroencephalogram (EEG), widely used for measuring the brain’s electrophysiological activity, has been extensively applied in the automatic detection of epileptic seizures. However, several challenges remain unaddressed in prior studies on automated seizure detection: (1) Methods based on CNN and LSTM assume that EEG signals follow a Euclidean structure; (2) Algorithms leveraging graph convolutional networks rely on adjacency matrices constructed with fixed edge weights or predefined connection rules. To address these limitations, we propose a novel algorithm: Dynamic Graph Convolutional Network with Dilated Convolution (DGDCN). By leveraging a spatiotemporal attention mechanism, the proposed model dynamically constructs a task-specific adjacency matrix, which guides the graph convolutional network (GCN) in capturing localized spatial and temporal dependencies among adjacent nodes. Furthermore, a dilated convolutional module is incorporated to expand the receptive field, thereby enabling the model to capture long-range temporal dependencies more effectively. The proposed seizure detection system is evaluated on the TUSZ dataset, achieving AUC values of 88.7% and 90.4% on 12-s and 60-s segments, respectively, demonstrating competitive performance compared to current state-of-the-art methods.
(This article belongs to the Section Biosignal Processing)
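A generic graph-convolution step over EEG channels as graph nodes can be sketched as follows. Note the fixed adjacency matrix here is exactly what the paper replaces with a dynamically constructed, attention-derived one:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W),
    mixing each node's features with its neighbors' (generic GCN, not DGDCN)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],                       # 3 EEG channels, chain graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 4))                # per-channel feature vectors
W = rng.standard_normal((4, 2))                # learned weight matrix
H_out = gcn_layer(A, H, W)
```

In DGDCN the entries of `A` would come from the spatiotemporal attention mechanism at each step rather than being hard-wired.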

23 pages, 19710 KiB  
Article
Hybrid EEG Feature Learning Method for Cross-Session Human Mental Attention State Classification
by Xu Chen, Xingtong Bao, Kailun Jitian, Ruihan Li, Li Zhu and Wanzeng Kong
Brain Sci. 2025, 15(8), 805; https://doi.org/10.3390/brainsci15080805 - 28 Jul 2025
Abstract
Background: Decoding mental attention states from electroencephalogram (EEG) signals is crucial for numerous applications such as cognitive monitoring, adaptive human–computer interaction, and brain–computer interfaces (BCIs). However, conventional EEG-based approaches often focus on channel-wise processing and are limited to intra-session or subject-specific scenarios, lacking robustness in cross-session or inter-subject conditions. Methods: In this study, we propose a hybrid feature learning framework for robust classification of mental attention states, including focused, unfocused, and drowsy conditions, across both sessions and individuals. Our method integrates preprocessing, feature extraction, feature selection, and classification in a unified pipeline. We extract channel-wise spectral features using the short-time Fourier transform (STFT) and further incorporate both functional and structural connectivity features to capture inter-regional interactions in the brain. A two-stage feature selection strategy, combining correlation-based filtering and random forest ranking, is adopted to enhance feature relevance and reduce dimensionality. A support vector machine (SVM) is employed for final classification due to its efficiency and generalization capability. Results: Experimental results on two cross-session and inter-subject EEG datasets demonstrate that our approach achieves classification accuracies of 86.27% and 94.01%, respectively, significantly outperforming traditional methods. Conclusions: These findings suggest that integrating connectivity-aware features with spectral analysis can enhance the generalizability of attention decoding models. The proposed framework provides a promising foundation for the development of practical EEG-based systems for continuous mental state monitoring and adaptive BCIs in real-world environments.
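The first stage of the two-stage selection, correlation-based filtering, can be sketched in NumPy. The threshold value here is hypothetical, and the paper follows this stage with random-forest importance ranking:

```python
import numpy as np

def correlation_filter(X, threshold=0.9):
    """Greedy redundancy filter: scan features left to right and drop any
    whose absolute correlation with an already-kept feature exceeds
    `threshold`. Returns the indices of the kept columns."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

# Toy feature matrix: column 1 is a near-copy of column 0, column 2 is independent
rng = np.random.default_rng(1)
a = rng.standard_normal(200)
b = rng.standard_normal(200)
X = np.column_stack([a, a + 0.01 * rng.standard_normal(200), b])
kept = correlation_filter(X)   # drops the redundant near-copy
```

Removing near-duplicate features first keeps the subsequent random-forest ranking from splitting importance across redundant columns.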

29 pages, 2830 KiB  
Article
BCINetV1: Integrating Temporal and Spectral Focus Through a Novel Convolutional Attention Architecture for MI EEG Decoding
by Muhammad Zulkifal Aziz, Xiaojun Yu, Xinran Guo, Xinming He, Binwen Huang and Zeming Fan
Sensors 2025, 25(15), 4657; https://doi.org/10.3390/s25154657 - 27 Jul 2025
Abstract
Motor imagery (MI) electroencephalograms (EEGs) are pivotal signals reflecting cortical activity during imagined motor actions, widely leveraged for brain-computer interface (BCI) system development. However, effective decoding of MI EEG signals is often hindered by flawed signal-processing methods, deep learning models that lack clinical interpretability, and highly inconsistent performance across different datasets. We propose BCINetV1, a new framework for MI EEG decoding that addresses the aforementioned challenges. BCINetV1 utilizes three innovative components: a temporal convolution-based attention block (T-CAB) and a spectral convolution-based attention block (S-CAB), both driven by a new convolutional self-attention (ConvSAT) mechanism to identify key non-stationary temporal and spectral patterns in the EEG signals, and a squeeze-and-excitation block (SEB) that intelligently combines the identified tempo-spectral features for accurate, stable, and contextually aware MI EEG classification. Evaluated on four diverse datasets containing 69 participants, BCINetV1 consistently achieved the highest average accuracies of 98.6% (Dataset 1), 96.6% (Dataset 2), 96.9% (Dataset 3), and 98.4% (Dataset 4). This research demonstrates that BCINetV1 is computationally efficient, extracts clinically vital markers, effectively handles the non-stationarity of EEG data, and shows a clear advantage over existing methods, marking a significant step forward for practical BCI applications.
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

35 pages, 6415 KiB  
Review
Recent Advances in Conductive Hydrogels for Electronic Skin and Healthcare Monitoring
by Yan Zhu, Baojin Chen, Yiming Liu, Tiantian Tan, Bowen Gao, Lijun Lu, Pengcheng Zhu and Yanchao Mao
Biosensors 2025, 15(7), 463; https://doi.org/10.3390/bios15070463 - 18 Jul 2025
Abstract
In recent decades, flexible electronics have witnessed remarkable advancements in multiple fields, encompassing wearable electronics, human–machine interfaces (HMIs), and clinical diagnosis and treatment. Nevertheless, conventional rigid electronic devices are fundamentally constrained by their inherent non-stretchability and poor conformability, limitations that substantially impede their practical applications. In contrast, conductive hydrogels (CHs) for electronic skin (E-skin) and healthcare monitoring have attracted substantial interest owing to their outstanding features, including adjustable mechanical properties, intrinsic flexibility, stretchability, transparency, and diverse functional and structural designs. Considerable efforts have focused on developing CHs that incorporate various conductive materials, such as metals, carbon, ionic liquids (ILs), and MXenes, to enable multifunctional wearable sensors and flexible electrodes. This review presents a comprehensive summary of recent advancements in CHs, focusing on their classifications and practical applications. Firstly, CHs are categorized into five groups based on the nature of the conductive materials employed: polymer-based, carbon-based, metal-based, MXene-based, and ionic CHs. Secondly, the promising applications of CHs for electrophysiological signal and healthcare monitoring are discussed in detail, including electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram (EMG) monitoring, respiratory monitoring, and motion monitoring. Finally, the review concludes with a summary of current research progress and prospects for CHs in electronic skin and health monitoring applications.
22 pages, 4882 KiB  
Article
Dual-Branch Spatio-Temporal-Frequency Fusion Convolutional Network with Transformer for EEG-Based Motor Imagery Classification
by Hao Hu, Zhiyong Zhou, Zihan Zhang and Wenyu Yuan
Electronics 2025, 14(14), 2853; https://doi.org/10.3390/electronics14142853 - 17 Jul 2025
Viewed by 356
Abstract
The decoding of motor imagery (MI) electroencephalogram (EEG) signals is crucial for motor control and rehabilitation. However, feature extraction is the core component of the decoding process, and traditional methods, often limited to single-feature domains or shallow time-frequency fusion, struggle to comprehensively capture the spatio-temporal-frequency characteristics of the signals, thereby limiting decoding accuracy. To address these limitations, this paper proposes a dual-branch neural network architecture with multi-domain feature fusion: the dual-branch spatio-temporal-frequency fusion convolutional network with Transformer (DB-STFFCNet). The DB-STFFCNet model consists of three modules: the spatio-temporal feature extraction (STFE) module, the frequency feature extraction (FFE) module, and the feature fusion and classification module. The STFE module employs a lightweight multi-dimensional attention network combined with a temporal Transformer encoder, simultaneously modeling local fine-grained features and global spatio-temporal dependencies, thereby effectively integrating spatio-temporal information and enhancing feature representation. The FFE module constructs a hierarchical feature refinement structure using the fast Fourier transform (FFT) and multi-scale frequency convolutions, while a frequency-domain Transformer encoder captures global dependencies among frequency-domain features, improving the model's ability to represent key frequency information. Finally, the fusion module consolidates the spatio-temporal and frequency features to achieve accurate classification. To evaluate the feasibility of the proposed method, experiments were conducted on the public BCI Competition IV-2a and IV-2b datasets, achieving accuracies of 83.13% and 89.54%, respectively, outperforming existing methods. This study provides a novel solution for joint time-frequency representation learning in EEG analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Biomedical Data Processing)
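As a rough illustration of the kind of frequency-domain transformation the FFE module builds on (only the FFT step; the multi-scale convolutions and Transformer stages are omitted, and the band choices and sampling rate here are illustrative assumptions, not taken from the paper), a minimal sketch on synthetic EEG data:

```python
import numpy as np

def fft_band_features(epoch, fs=250.0, bands=((8, 12), (13, 30))):
    """Per-channel spectral power in the given bands (e.g. mu and beta rhythms,
    both relevant to motor imagery).

    epoch: (n_channels, n_samples) EEG segment.
    Returns an (n_channels, n_bands) feature matrix.
    """
    n = epoch.shape[-1]
    power = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)                # bin frequencies in Hz
    return np.stack(
        [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands],
        axis=-1,
    )

# Synthetic 3-channel, 4-second epoch: a 10 Hz (mu-band) oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(1000) / 250.0
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal((3, 1000))
features = fft_band_features(epoch)
print(features.shape)  # (3, 2): 3 channels x 2 bands
```

Because the dominant oscillation lies at 10 Hz, the mu-band (8–12 Hz) feature dominates the beta-band one; in DB-STFFCNet, such frequency-domain representations are further refined by convolutional and attention layers rather than averaged per band.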
