Advances in Brain–Computer Interfaces

A special issue of Biomimetics (ISSN 2313-7673).

Deadline for manuscript submissions: closed (20 November 2024) | Viewed by 5810

Special Issue Editor


Guest Editor: Dr. Amin Hekmatmanesh
Mechanical Engineering, LUT School of Energy Systems, LUT University, Lappeenranta, Finland
Interests: brain–computer interface; rehabilitation; neuro-engineering

Special Issue Information

Dear Colleagues,

Brain–computer interface (BCI) technology has been introduced to improve the quality of life of people with disabilities or difficulties in daily living. BCI applications such as driver assistance, sleep detection for drivers, and the control of bionic hands or ankle–foot orthoses are used by healthy people as well as paralyzed patients. Research in the field focuses mainly on developing mathematical methods for brain-controlled ground vehicles, air vehicles, bionic hands, and ankle–foot orthoses using biosignals such as the electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG), and photoplethysmogram (PPG).

The core mathematical components are signal denoising (filtering), feature extraction, and machine-learning algorithms. This collection of articles aims to highlight mathematical innovations as well as novel task designs that induce the brain to generate distinctive neuronal patterns. The ultimate goal of this research topic is the discovery of new methods for BCI applications. We welcome manuscripts on the following subtopics (a minimal processing sketch follows the list):

  • Decoding neuronal brain activity by developing mathematical methods that automatically identify patterns within EEG signals;
  • Automatically identifying EEG patterns related to human actions and decisions;
  • Analyzing the patterns generated in a designed task to determine which method is most beneficial, e.g., wavelet, chaotic, common spatial pattern, or reinforcement-based methods;
  • Developing classifiers to automate identification procedures.
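
As referenced above, the following is a minimal sketch of such a processing pipeline on synthetic data; the 8–30 Hz band, the log-variance features, and the LDA classifier are illustrative assumptions rather than prescribed methods.

```python
# Minimal EEG-decoding pipeline sketch: band-pass filtering (denoising),
# log-variance feature extraction, and a linear classifier on synthetic data.
# The 8-30 Hz band, the features, and the classifier are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                    # sampling rate in Hz (assumed)
X = rng.standard_normal((120, 8, 2 * fs))   # 120 trials, 8 channels, 2 s each (synthetic)
y = rng.integers(0, 2, size=120)            # two classes, e.g., two imagined movements

# 1) Denoising: zero-phase band-pass filter over the mu/beta band (8-30 Hz).
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X, axis=-1)

# 2) Feature extraction: log-variance of each channel per trial.
features = np.log(np.var(X_filt, axis=-1))

# 3) Machine learning: linear discriminant analysis with 5-fold cross-validation.
scores = cross_val_score(LinearDiscriminantAnalysis(), features, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")  # ~0.5 on random synthetic data
```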

Dr. Amin Hekmatmanesh
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biosignal processing
  • pattern recognition
  • machine learning
  • brain–computer interface
  • health monitoring systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

22 pages, 10440 KiB  
Article
Hybrid BCI for Meal-Assist Robot Using Dry-Type EEG and Pupillary Light Reflex
by Jihyeon Ha, Sangin Park, Yaeeun Han and Laehyun Kim
Biomimetics 2025, 10(2), 118; https://doi.org/10.3390/biomimetics10020118 - 18 Feb 2025
Viewed by 302
Abstract
Brain–computer interface (BCI)-based assistive technologies enable intuitive and efficient user interaction, significantly enhancing the independence and quality of life of elderly and disabled individuals. Although existing wet EEG-based systems report high accuracy, they suffer from limited practicality. This study presents a hybrid BCI system combining dry-type EEG-based flash visual-evoked potentials (FVEP) and the pupillary light reflex (PLR), designed to control an LED-based meal-assist robot. The hybrid system integrates dry-type EEG and eyewear-type infrared cameras, addressing the preparation challenges of wet electrodes while maintaining practical usability and high classification performance. Offline experiments demonstrated an average accuracy of 88.59% and an information transfer rate (ITR) of 18.23 bit/min across the four target classes. The real-time implementation uses PLR triggers to initiate the meal cycle and EMG triggers to detect chewing, indicating completion of the cycle. These features allow intuitive and efficient operation of the meal-assist robot. This study advances BCI-based assistive technologies by introducing a hybrid system optimized for real-world applications. The successful integration of FVEP and PLR in a meal-assist robot demonstrates the potential for robust and user-friendly solutions that empower users with autonomy and dignity in their daily activities.
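
For context on the ITR figure above: BCI ITRs are commonly computed with the Wolpaw formula, sketched below. At 88.59% accuracy over four classes this gives about 1.31 bits per selection, so the reported 18.23 bit/min would correspond to a selection time of roughly 4.3 s; that selection time is an assumption, as it is not stated in the abstract.

```python
# Wolpaw ITR formula applied to the accuracy reported above. The 4.3 s
# selection time is an assumption; it is chosen so that the result matches
# the reported 18.23 bit/min for 88.59% accuracy over four classes.
import math

def wolpaw_itr(n_classes: int, accuracy: float, selection_seconds: float) -> float:
    """Information transfer rate in bits per minute under the Wolpaw model."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / selection_seconds

print(f"{wolpaw_itr(4, 0.8859, 4.3):.2f} bit/min")  # ~18.2 bit/min
```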
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)

18 pages, 13888 KiB  
Article
A Personalized Multimodal BCI–Soft Robotics System for Rehabilitating Upper Limb Function in Chronic Stroke Patients
by Brian Premchand, Zhuo Zhang, Kai Keng Ang, Juanhong Yu, Isaac Okumura Tan, Josephine Pei Wen Lam, Anna Xin Yi Choo, Ananda Sidarta, Patrick Wai Hang Kwong and Lau Ha Chloe Chung
Biomimetics 2025, 10(2), 94; https://doi.org/10.3390/biomimetics10020094 - 7 Feb 2025
Viewed by 709
Abstract
Multimodal brain–computer interfaces (BCIs) that combine electrical features from electroencephalography (EEG) and hemodynamic features from functional near-infrared spectroscopy (fNIRS) have the potential to improve performance. In this paper, we propose a multimodal EEG- and fNIRS-based BCI system with soft robotic (BCI-SR) components for personalized stroke rehabilitation. We propose a novel method of personalizing rehabilitation by aligning each patient's specific abilities with the available treatment options. We collected 160 single trials of motor imagery using the multimodal BCI from 10 healthy participants. We identified a confounding effect of respiration in the collected fNIRS signals. Hence, we propose incorporating a breathing sensor to synchronize motor imagery (MI) cues with the participant's respiratory cycle. We found that implementing this respiration synchronization (RS) resulted in less dispersed readings of oxyhemoglobin (HbO). We then conducted a clinical trial of the personalized multimodal BCI-SR for stroke rehabilitation. Four chronic stroke patients were recruited to undergo 6 weeks of rehabilitation, three times per week, and the primary outcome was measured using upper-extremity Fugl-Meyer Motor Assessment (FMA) and Action Research Arm Test (ARAT) scores at weeks 0, 6, and 12. The results showed a striking coherence in the activation patterns of EEG and fNIRS across all patients. In addition, FMA and ARAT scores were significantly improved at week 12 relative to the pre-trial baseline, with mean gains of 8.75 ± 1.84 and 5.25 ± 2.17, respectively (mean ± SEM). These improvements were greater than those of the Standard Arm Therapy and BCI-SR groups in previous clinical trials, compared retrospectively. These results suggest that personalizing the rehabilitation treatment improves BCI performance compared with standard BCI-SR, and that synchronizing motor imagery cues with respiration increases the consistency of HbO levels, leading to better motor imagery performance. The proposed multimodal BCI-SR therefore holds promise for better engaging stroke patients and promoting neuroplasticity, leading to greater motor improvement.
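
A minimal sketch of feature-level EEG–fNIRS fusion of the kind this abstract describes is given below; it is not the authors' pipeline, and all shapes, feature choices, and the classifier are illustrative assumptions.

```python
# Sketch of feature-level EEG+fNIRS fusion (not the authors' pipeline):
# per-trial EEG band-power features and fNIRS HbO features are concatenated
# and classified with a linear model. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 160                                     # matches the 160 MI trials mentioned above
eeg_power = rng.standard_normal((n_trials, 16))    # e.g., mu/beta band power, 16 EEG channels
hbo_features = rng.standard_normal((n_trials, 8))  # e.g., HbO trend per fNIRS channel
y = rng.integers(0, 2, size=n_trials)              # motor imagery vs. rest (synthetic labels)

fused = np.concatenate([eeg_power, hbo_features], axis=1)  # simple feature-level fusion
scores = cross_val_score(LogisticRegression(max_iter=1000), fused, y, cv=5)
print(f"Fusion CV accuracy: {scores.mean():.2f}")  # chance level on synthetic data
```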
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)

19 pages, 5819 KiB  
Article
Plantar Pressure-Based Gait Recognition with and Without Carried Object by Convolutional Neural Network-Autoencoder Architecture
by Chin-Cheng Wu, Cheng-Wei Tsai, Fei-En Wu, Chi-Hsuan Chiang and Jin-Chern Chiou
Biomimetics 2025, 10(2), 79; https://doi.org/10.3390/biomimetics10020079 - 26 Jan 2025
Viewed by 537
Abstract
Convolutional neural networks (CNNs) have been widely and successfully applied to closed-set recognition in gait identification, but they still lack robustness in open-set recognition of unknown classes. To address this limitation, we propose a convolutional neural network autoencoder (CNN-AE) architecture for user classification based on plantar-pressure gait recognition. The model extracts gait features using pressure-sensitive mats, focusing on foot-pressure distribution and foot size during walking. Preprocessing techniques, including region-of-interest (ROI) selection, feature-image extraction, and horizontal flipping of the data, were used to establish a CNN model that assessed gait-recognition accuracy under two conditions: without carried items and while carrying a 500 g object. To extend the CNN to open-set recognition of unauthorized personnel, the proposed CNN-AE architecture compresses the average foot-pressure map into a 64-dimensional feature vector and determines identity based on the distances between these vectors. Among 60 participants, 48 were designated as authorized individuals and 12 as unauthorized. Without a carried object, an accuracy of 91.218%, a precision of 93.676%, a recall of 90.369%, and an F1-score of 91.993% were achieved, indicating that the model successfully identified most actual positives. When carrying a 500 g object, the accuracy was 85.648%, the precision 94.459%, the recall 84.423%, and the F1-score 89.603%.
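
The open-set decision described above can be illustrated with a short, hedged sketch: an embedding (standing in for the CNN-AE's 64-dimensional output) is compared against enrolled-user centroids and rejected as unauthorized when the nearest distance exceeds a threshold. The distance metric and threshold value are assumptions.

```python
# Hedged sketch of an open-set decision rule: a 64-dimensional embedding
# (random here, standing in for the CNN-AE output) is matched to the nearest
# enrolled-user centroid and rejected as unauthorized beyond a distance
# threshold. The Euclidean metric and the threshold value are assumptions.
import numpy as np

rng = np.random.default_rng(2)
EMBED_DIM = 64
enrolled_centroids = rng.standard_normal((48, EMBED_DIM))   # 48 authorized users

def identify(embedding, centroids, threshold=6.0):
    """Return (user_index, distance), or (None, distance) for an open-set reject."""
    dists = np.linalg.norm(centroids - embedding, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > threshold:
        return None, float(dists[nearest])    # unknown walker -> unauthorized
    return nearest, float(dists[nearest])

probe = rng.standard_normal(EMBED_DIM)        # embedding of an unseen gait sample
user, dist = identify(probe, enrolled_centroids)
print("unauthorized" if user is None else f"authorized user {user}", f"(distance {dist:.2f})")
```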
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)

24 pages, 9053 KiB  
Article
An Ensemble Deep Learning Approach for EEG-Based Emotion Recognition Using Multi-Class CSP
by Behzad Yousefipour, Vahid Rajabpour, Hamidreza Abdoljabbari, Sobhan Sheykhivand and Sebelan Danishvar
Biomimetics 2024, 9(12), 761; https://doi.org/10.3390/biomimetics9120761 - 14 Dec 2024
Viewed by 1039
Abstract
In recent years, significant advancements have been made in the field of brain–computer interfaces (BCIs), particularly in emotion recognition using EEG signals. Most earlier research in this field has overlooked the spatial–temporal characteristics of EEG signals, which are critical for accurate emotion recognition. In this study, a novel approach is presented for classifying emotions into three categories (positive, negative, and neutral) using a custom-collected dataset. The dataset was collected specifically for this purpose from 16 participants and comprises EEG recordings corresponding to the three emotional states induced by musical stimuli. A multi-class Common Spatial Pattern (MCCSP) technique was employed to process the EEG signals. The processed signals were then fed into an ensemble model comprising three autoencoders with convolutional neural network (CNN) layers. The proposed method achieved a classification accuracy of 99.44 ± 0.39% for the three emotional classes, surpassing previous studies and demonstrating the effectiveness of the approach. The high accuracy indicates that the method could be a promising candidate for future BCI applications, providing a reliable means of emotion detection.
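
A brief sketch of one-vs-rest multi-class CSP feature extraction, of the kind the abstract refers to, is shown below on synthetic data; it is not the authors' implementation, and the data shapes, component counts, and classifier are assumptions.

```python
# Sketch of one-vs-rest multi-class CSP on synthetic EEG (not the authors'
# implementation): spatial filters are computed per class against the rest,
# and log-variance of the filtered signals is classified with a linear SVM.
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((90, 32, 500))    # 90 trials, 32 channels, 500 samples (synthetic)
y = rng.integers(0, 3, size=90)           # three emotion classes: positive/negative/neutral

def mean_cov(trials):
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]   # normalized spatial covariances
    return np.mean(covs, axis=0)

def ovr_csp_filters(X, y, n_pairs=2):
    filters = []
    for c in np.unique(y):
        Ca, Cb = mean_cov(X[y == c]), mean_cov(X[y != c])
        _, W = eigh(Ca, Ca + Cb)                            # generalized eigenvectors
        filters.append(np.hstack([W[:, :n_pairs], W[:, -n_pairs:]]))  # extreme eigenvalue pairs
    return np.hstack(filters)                               # channels x (classes * 2 * n_pairs)

W = ovr_csp_filters(X, y)
projected = np.einsum("ck,nct->nkt", W, X)                  # apply spatial filters per trial
features = np.log(np.var(projected, axis=-1))               # log-variance per CSP component
print(cross_val_score(SVC(kernel="linear"), features, y, cv=5).mean())  # ~chance on noise
```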
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)

18 pages, 1260 KiB  
Article
Brain-Inspired Architecture for Spiking Neural Networks
by Fengzhen Tang, Junhuai Zhang, Chi Zhang and Lianqing Liu
Biomimetics 2024, 9(10), 646; https://doi.org/10.3390/biomimetics9100646 - 21 Oct 2024
Viewed by 1850
Abstract
Spiking neural networks (SNNs), which use action potentials (spikes) to represent and transmit information, are more biologically plausible than traditional artificial neural networks. However, most existing SNNs require a separate preprocessing step to convert the real-valued input into spikes that are then fed into the network for processing. This dissected spike-coding process may result in information loss, leading to degraded performance. The biological neural system, by contrast, does not perform a separate preprocessing step. Moreover, the nervous system may not rely on a single pathway to respond to and process external stimuli but allows multiple circuits to perceive the same stimulus. Inspired by these advantageous aspects of the biological neural system, we propose a self-adaptive encoding spiking neural network with a parallel architecture. The proposed network integrates the input-encoding process into the spiking neural network architecture via convolutional operations, so that the network can accept real-valued input and automatically transform it into spikes for further processing. Meanwhile, the proposed network contains two identical parallel branches, inspired by the biological nervous system, which processes information both serially and in parallel. Experimental results on multiple image classification tasks show that the proposed network achieves competitive performance, suggesting the effectiveness of the proposed architecture.
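
A minimal forward-pass sketch of this idea is given below: a convolutional layer encodes a real-valued image directly into spike trains via a simple threshold-and-reset rule, and two identical branches process the resulting spike rates in parallel. Layer sizes, time steps, and thresholds are assumptions, and training such a network would additionally require a surrogate gradient for the spiking nonlinearity.

```python
# Forward-pass sketch (not the authors' architecture): a convolutional layer
# encodes a real-valued image into spike trains via a threshold-and-reset
# rule, and two identical parallel branches process the mean spike rates.
# Sizes, time steps, and the threshold are assumptions; real training would
# need a surrogate gradient for the non-differentiable spike function.
import torch
import torch.nn as nn

class ConvSpikeEncoder(nn.Module):
    def __init__(self, threshold=1.0, time_steps=8):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # learnable encoding layer
        self.threshold, self.time_steps = threshold, time_steps

    def forward(self, x):
        current = self.conv(x)                     # constant input current per time step
        membrane = torch.zeros_like(current)
        spikes = []
        for _ in range(self.time_steps):           # integrate, fire, and reset
            membrane = membrane + current
            fired = (membrane >= self.threshold).float()
            membrane = membrane * (1.0 - fired)    # reset units that fired
            spikes.append(fired)
        return torch.stack(spikes, dim=1)          # (batch, T, 8, H, W)

class ParallelSNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = ConvSpikeEncoder()
        self.branch_a = nn.Sequential(nn.Flatten(), nn.Linear(8 * 28 * 28, n_classes))
        self.branch_b = nn.Sequential(nn.Flatten(), nn.Linear(8 * 28 * 28, n_classes))

    def forward(self, x):
        rates = self.encoder(x).mean(dim=1)        # spike rate over time steps
        return self.branch_a(rates) + self.branch_b(rates)   # merge the parallel branches

logits = ParallelSNN()(torch.rand(4, 1, 28, 28))   # e.g., four MNIST-sized images
print(logits.shape)                                # torch.Size([4, 10])
```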
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)
