Search Results (63)

Search Parameters:
Keywords = visual stimuli classification

22 pages, 4200 KiB  
Article
Investigation of Personalized Visual Stimuli via Checkerboard Patterns Using Flickering Circles for SSVEP-Based BCI System
by Nannaphat Siribunyaphat, Natjamee Tohkhwan and Yunyong Punsawad
Sensors 2025, 25(15), 4623; https://doi.org/10.3390/s25154623 - 25 Jul 2025
Viewed by 906
Abstract
In this study, we conducted two steady-state visual evoked potential (SSVEP) studies to develop a practical brain–computer interface (BCI) system for communication and control applications. The first study introduces a novel visual stimulus paradigm that combines checkerboard patterns with flickering circles configured in single-, double-, and triple-layer forms. We tested three flickering frequency conditions: a single fundamental frequency, a combination of the fundamental frequency and its harmonics, and a combination of two fundamental frequencies. The second study utilizes personalized visual stimuli to enhance SSVEP responses. SSVEP detection was performed using power spectral density (PSD) analysis by employing Welch’s method and relative PSD to extract SSVEP features. Command classification was carried out using a proposed decision rule–based algorithm. The results were compared with those of a conventional checkerboard pattern with flickering squares. The experimental findings indicate that single-layer flickering circle patterns exhibit comparable or improved performance when compared with the conventional stimuli, particularly when customized for individual users. Conversely, the multilayer patterns tended to increase visual fatigue. Furthermore, individualized stimuli achieved a classification accuracy of 90.2% in real-time SSVEP-based BCI systems for six-command generation tasks. The personalized visual stimuli can enhance user experience and system performance, thereby supporting the development of a practical SSVEP-based BCI system. Full article
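The detection pipeline described in this abstract (Welch's method plus relative PSD per candidate stimulus frequency) can be sketched as follows. This is an illustrative sketch, not the authors' code: the sampling rate, the stimulus frequencies, and the argmax decision rule are all assumptions.

```python
import numpy as np

FS = 250                     # assumed EEG sampling rate (Hz)
TARGETS = [7.0, 9.0, 11.0]   # hypothetical flicker frequencies

def welch_psd(x, fs, nperseg):
    """Welch's method: average periodograms of Hann-windowed, 50%-overlapping segments."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

def classify_ssvep(eeg, fs=FS, targets=TARGETS, band=0.5):
    """Pick the target whose narrow band holds the largest share of total power (relative PSD)."""
    freqs, psd = welch_psd(eeg, fs, nperseg=2 * fs)
    scores = [psd[(freqs >= f - band) & (freqs <= f + band)].sum() / psd.sum()
              for f in targets]
    return targets[int(np.argmax(scores))]

# Synthetic trial: a 9 Hz SSVEP component buried in noise
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
trial = np.sin(2 * np.pi * 9.0 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(trial))   # → 9.0
```

In a real system the relative-PSD scores would be thresholded per user, which is where the paper's personalization comes in.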

26 pages, 15354 KiB  
Article
Adaptive Neuro-Affective Engagement via Bayesian Feedback Learning in Serious Games for Neurodivergent Children
by Diego Resende Faria and Pedro Paulo da Silva Ayrosa
Appl. Sci. 2025, 15(13), 7532; https://doi.org/10.3390/app15137532 - 4 Jul 2025
Viewed by 498
Abstract
Neuro-Affective Intelligence (NAI) integrates neuroscience, psychology, and artificial intelligence to support neurodivergent children through personalized Child–Machine Interaction (CMI). This paper presents an adaptive neuro-affective system designed to enhance engagement in children with neurodevelopmental disorders through serious games. The proposed framework incorporates real-time biophysical signals—including EEG-based concentration, facial expressions, and in-game performance—to compute a personalized engagement score. We introduce a novel mechanism, Bayesian Immediate Feedback Learning (BIFL), which dynamically selects visual, auditory, or textual stimuli based on real-time neuro-affective feedback. A multimodal CNN-based classifier detects mental states, while a probabilistic ensemble merges affective state classifications derived from facial expressions. A multimodal weighted engagement function continuously updates stimulus–response expectations. The system adapts in real time by selecting the most appropriate cue to support the child’s cognitive and emotional state. Experimental validation with 40 children (ages 6–10) diagnosed with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) demonstrates the system’s effectiveness in sustaining attention, improving emotional regulation, and increasing overall game engagement. The proposed framework—combining neuro-affective state recognition, multimodal engagement scoring, and BIFL—significantly improved cognitive and emotional outcomes: concentration increased by 22.4%, emotional engagement by 24.8%, and game performance by 32.1%. Statistical analysis confirmed the significance of these improvements (p<0.001, Cohen’s d>1.4). These findings demonstrate the feasibility and impact of probabilistic, multimodal, and neuro-adaptive AI systems in therapeutic and educational applications. Full article
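The core of the BIFL mechanism described above is choosing, from real-time feedback, whichever cue (visual, auditory, or textual) is most likely to re-engage the child. A toy Beta–Bernoulli sketch of that idea follows; the paper's actual update rule and engagement score are richer than this greedy posterior-mean selection.

```python
class CueSelector:
    """Toy BIFL-style selector: Beta(1, 1) prior per cue, updated on feedback."""

    def __init__(self, cues=("visual", "auditory", "textual")):
        # ab[c] = [successes + 1, failures + 1] for cue c
        self.ab = {c: [1, 1] for c in cues}

    def select(self):
        # choose the cue with the highest posterior mean re-engagement rate
        return max(self.ab, key=lambda c: self.ab[c][0] / sum(self.ab[c]))

    def update(self, cue, engaged):
        self.ab[cue][0 if engaged else 1] += 1

sel = CueSelector()
for _ in range(20):
    sel.update("auditory", True)    # simulated feedback: auditory cues work
    sel.update("visual", False)     # simulated feedback: visual cues do not
print(sel.select())                 # → auditory
```

A production version would use Thompson sampling or a decaying prior so the system keeps exploring as the child's state drifts.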

17 pages, 2799 KiB  
Article
The Phenomenology of Offline Perception: Multisensory Profiles of Voluntary Mental Imagery and Dream Imagery
by Maren Bilzer and Merlin Monzel
Vision 2025, 9(2), 37; https://doi.org/10.3390/vision9020037 - 21 Apr 2025
Cited by 1 | Viewed by 1407
Abstract
Both voluntary mental imagery and dream imagery involve multisensory representations without externally present stimuli that can be categorized as offline perceptions. Due to common mechanisms, correlations between multisensory dream imagery profiles and multisensory voluntary mental imagery profiles were hypothesized. In a sample of 226 participants, correlations within the respective state of consciousness were significantly larger than those across states, favouring two distinct networks. However, the association between the vividness of voluntary mental imagery and vividness of dream imagery was moderated by the frequency of dream recall and lucid dreaming, suggesting that both networks become increasingly similar when higher metacognition is involved. Additionally, the vividness of emotional and visual imagery was significantly higher for dream imagery than for voluntary mental imagery, reflecting the immersive nature of dreams and the continuity of visual dominance while awake and asleep. In contrast, the vividness of auditory, olfactory, gustatory, and tactile imagery was higher for voluntary mental imagery, probably due to higher cognitive control while awake. Most results were replicated four weeks later, weakening the notion of state influences. Overall, our results indicate similarities between dream imagery and voluntary mental imagery that justify a common classification as offline perception, but also highlight important differences. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)

20 pages, 2133 KiB  
Article
Real-Time Mobile Robot Obstacles Detection and Avoidance Through EEG Signals
by Karameldeen Omer, Francesco Ferracuti, Alessandro Freddi, Sabrina Iarlori, Francesco Vella and Andrea Monteriù
Brain Sci. 2025, 15(4), 359; https://doi.org/10.3390/brainsci15040359 - 30 Mar 2025
Viewed by 1954
Abstract
Background/Objectives: This study explores the integration of human feedback into the control loop of mobile robots for real-time obstacle detection and avoidance using EEG brain–computer interface (BCI) methods. The goal is to assess the paradigms applicable to current navigation systems to enhance safety and interaction between humans and robots. Methods: The research explores passive and active BCI technologies to enhance a wheelchair-mobile robot’s navigation. In the passive approach, error-related potentials (ErrPs), neural signals triggered when users commit or perceive errors, enable automatic correction of the robot’s navigation mistakes without direct input or command from the user. In contrast, the active approach leverages steady-state visually evoked potentials (SSVEPs), where users focus on flickering stimuli to control the robot’s movements directly. This study evaluates both paradigms to determine the most effective method for integrating human feedback into assistive robotic navigation. In the experimental setup, participants control a robot through a simulated environment while their brain signals are recorded and analyzed to measure the system’s responsiveness and the user’s mental workload. Results: The results show that the passive BCI requires lower mental effort but suffers from lower engagement, with a classification accuracy of 72.9%, whereas the active BCI demands more cognitive effort but achieves 84.9% accuracy. Despite this, task achievement accuracy is higher in the passive method (e.g., 71% vs. 43% for subject S2), as a single correct ErrP classification enables autonomous obstacle avoidance, whereas SSVEP requires multiple accurate commands. Conclusions: This research highlights the trade-offs between accuracy, mental load, and engagement in BCI-based robot control. The findings support the development of more intuitive assistive robotics, particularly for disabled and elderly users. Full article
(This article belongs to the Special Issue Multisensory Perception of the Body and Its Movement)

34 pages, 7670 KiB  
Article
A Safe and Efficient Brain–Computer Interface Using Moving Object Trajectories and LED-Controlled Activation
by Sefa Aydin, Mesut Melek and Levent Gökrem
Micromachines 2025, 16(3), 340; https://doi.org/10.3390/mi16030340 - 16 Mar 2025
Viewed by 1017
Abstract
Nowadays, brain–computer interface (BCI) systems are frequently used to connect individuals who have lost their mobility with the outside world. These BCI systems enable individuals to control external devices using brain signals. However, these systems have certain disadvantages for users. This paper proposes a novel approach to minimize the adverse effects of visual stimuli on the eye health of users of BCI systems employing visual evoked potential (VEP) and P300 methods. The approach employs moving objects with different trajectories instead of flickering visual stimuli. It uses a light-emitting diode (LED) with a frequency of 7 Hz as a condition for the BCI system to be active. The LED is assigned to the system to prevent it from being triggered by any involuntary or independent eye movements of the user. Thus, the user can operate a safe BCI system by tracking moving balls, with only a single blinking stimulus at the periphery and no need to focus on any flickering target. Data were recorded in two phases: when the LED was on and when the LED was off. The recorded data were processed using a Butterworth filter and the power spectral density (PSD) method. In the first classification phase, performed so that the system could detect the LED in the background, the highest accuracy rate of 99.57% was achieved with the random forest (RF) classification algorithm. In the second classification phase, which involves classifying moving objects within the proposed approach, the highest accuracy rate of 97.89% and an information transfer rate (ITR) of 36.75 bits/min were achieved using the RF classifier. Full article
(This article belongs to the Special Issue Bioelectronics and Its Limitless Possibilities)

12 pages, 1546 KiB  
Article
Multi-Domain Features and Multi-Task Learning for Steady-State Visual Evoked Potential-Based Brain–Computer Interfaces
by Yeou-Jiunn Chen, Shih-Chung Chen and Chung-Min Wu
Appl. Sci. 2025, 15(4), 2176; https://doi.org/10.3390/app15042176 - 18 Feb 2025
Viewed by 672
Abstract
Brain–computer interfaces (BCIs) enable people to communicate with others or devices, and improving BCI performance is essential for developing real-life applications. In this study, a steady-state visual evoked potential-based BCI (SSVEP-based BCI) with multi-domain features and multi-task learning is developed. To accurately represent the characteristics of an SSVEP signal, SSVEP signals in the time and frequency domains are selected as multi-domain features. Convolutional neural networks are separately used for time and frequency domain signals to extract the embedding features effectively. An element-wise addition operation and batch normalization are applied to fuse the time- and frequency-domain features. A sequence of convolutional neural networks is then adopted to find discriminative embedding features for classification. Finally, multi-task learning-based neural networks are used to detect the corresponding stimuli correctly. The experimental results showed that the proposed approach outperforms EEGNet, multi-task learning-based neural networks, canonical correlation analysis (CCA), and filter bank CCA (FBCCA). Additionally, the proposed approach is more suitable for developing real-time BCIs than a system where an input’s duration is 4 s. In the future, utilizing multi-task learning to learn the properties of the embedding features extracted from FBCCA can further improve the BCI system performance. Full article
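The canonical correlation analysis (CCA) baseline this paper compares against scores each candidate frequency by the canonical correlation between the multichannel EEG and sine/cosine reference templates. A minimal NumPy sketch of that baseline (sampling rate, candidate frequencies, and harmonic count are illustrative assumptions):

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def reference(f0, n_samples, fs=FS, harmonics=2):
    """Sine/cosine templates at f0 and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f0 * t), np.cos(2 * np.pi * h * f0 * t)]
    return np.column_stack(cols)

def max_canon_corr(X, Y):
    """Largest canonical correlation: top singular value of Qx^T Qy (QR bases of centered data)."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(eeg, targets=(8.0, 10.0, 12.0)):
    scores = [max_canon_corr(eeg, reference(f, eeg.shape[0])) for f in targets]
    return targets[int(np.argmax(scores))]

# Two-channel synthetic recording dominated by a 10 Hz component
rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / FS)
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t), np.cos(2 * np.pi * 10 * t)])
eeg += 0.5 * rng.standard_normal(eeg.shape)
print(cca_classify(eeg))   # → 10.0
```

FBCCA extends this by repeating the scoring over several filter-bank sub-bands and combining the weighted correlations.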

24 pages, 1339 KiB  
Article
Bridging Neuroscience and Machine Learning: A Gender-Based Electroencephalogram Framework for Guilt Emotion Identification
by Saima Raza Zaidi, Najeed Ahmed Khan and Muhammad Abul Hasan
Sensors 2025, 25(4), 1222; https://doi.org/10.3390/s25041222 - 17 Feb 2025
Viewed by 1034
Abstract
This study explores the link between the emotion “guilt” and human EEG data, and investigates the influence of gender differences on the expression of guilt and neutral emotions in response to visual stimuli. Additionally, the stimuli used in the study were developed to elicit guilt and neutral emotions. Two emotions, “guilt” and “neutral”, were recorded from 16 participants after these emotions were induced using storyboards as pictorial stimuli. These storyboards were developed based on various guilt-provoking events shared by another group of participants. In the pre-processing step, the collected data were de-noised using bandpass filters and ICA, then segmented into smaller sections for further analysis. Two approaches were used to feed these data to the SVM classifier. In the first, novel approach, the data were fed to the SVM classifier without computing any features. This method provided an average accuracy of 83%. In the second approach, the data were divided into Alpha, Beta, Gamma, Theta and Delta frequency bands using Discrete Wavelet Decomposition. Afterward, the computed features, including entropy, Hjorth parameters and Band Power, were fed to SVM classifiers. This approach achieved an average accuracy of 63%. The findings of both classification methodologies indicate that females are more expressive in response to the depicted stimuli and that their EEG signals exhibit higher feature values. Moreover, females displayed higher accuracy than males in all bands except the Delta band. Full article
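Hjorth parameters, one of the feature families fed to the SVM in the second approach, have a compact closed form: activity is the signal variance, mobility and complexity come from the variances of successive differences. A plain-NumPy sketch on synthetic signals (the test signals are invented, not from the study):

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
slow = np.sin(2 * np.pi * 2 * t)      # smooth 2 Hz oscillation
noise = rng.standard_normal(500)      # broadband noise
# Broadband noise has far higher mobility (dominant frequency) than the slow wave
print(hjorth(noise)[1] > hjorth(slow)[1])   # → True
```

In the band-wise setup of the paper, these three numbers would be computed per frequency band and concatenated with entropy and band power before classification.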
(This article belongs to the Section Intelligent Sensors)

37 pages, 3592 KiB  
Article
The Impact of Visual Elements of Packaging Design on Purchase Intention: Brand Experience as a Mediator in the Tea Bag Product Category
by Chang Liu, Mat Redhuan Samsudin and Yuwen Zou
Behav. Sci. 2025, 15(2), 181; https://doi.org/10.3390/bs15020181 - 9 Feb 2025
Cited by 4 | Viewed by 11005
Abstract
While packaging design plays a vital role in experience-oriented markets, how multiple visual elements influence purchase intention through brand experience remains unclear. This study addresses this gap by employing innovative orthogonal experiments to examine the complex relationship between the visual elements of packaging design and purchase intention for low-involvement products, integrating both design and marketing perspectives. Through orthogonal experimental design, we developed 14 packaging prototypes as stimuli by systematically manipulating five visual elements (Colour, Graphics, Logo, Typography, and Layout). The framework and prototypes were validated through expert evaluation. Data were collected via a cross-sectional survey from 490 tea bag consumers and analysed using SPSS (version 29.0) for preliminary data processing and Mplus (version 8.3) for structural equation modelling. Our results reveal the direct effects of visual packaging elements on consumer purchase intention. Notably, Colour, Graphics, Logo, and Layout significantly influence purchase intention through brand experience mediation. Importantly, our multi-level analysis of visual elements unveils distinct patterns in how different design levels (e.g., colour harmony, graphic types) affect consumer responses. This study provides novel theoretical insights into how consumers make purchase decisions based on packaging design visual elements, addressing a significant gap in existing research. Unlike previous studies focusing on isolated design elements, our systematic classification and multi-level analysis offer both theoretical insights into packaging design mechanisms and practical guidelines for designers and practitioners. Full article

23 pages, 1253 KiB  
Article
EEG Signal Analysis for Numerical Digit Classification: Methodologies and Challenges
by Augoustos Tsamourgelis and Adam Adamopoulos
Stats 2025, 8(1), 14; https://doi.org/10.3390/stats8010014 - 5 Feb 2025
Cited by 1 | Viewed by 1559
Abstract
Electroencephalography (EEG) has existed since the early 20th century. It has proven to be a vital tool for electrophysiological studies of conditions like epilepsy. Recently, it has been revitalized as the field of machine learning has developed, widening its usefulness across a plethora of neurological conditions and in brain–computer interface (BCI) applications. This study delves into the intricate process of classifying EEG signals elicited by visual stimuli: subjects viewing the digits 0 and 1 and a blank screen. We focus on developing a comprehensive workflow for EEG preprocessing, feature extraction and signal classification. We achieve strong differentiation between digit and non-digit stimuli with all classification algorithms. However, our study also highlights the profound neurological challenges encountered in distinguishing between the digit values, as our model, informed by the related literature, was unable to differentiate between the digit values 0 and 1. These findings underscore the complexity of numerical processing in the brain, revealing critical insights into the limitations and potential of EEG-based digit classification and the need for clarity in the bioinformatics community. Full article
(This article belongs to the Section Time Series Analysis)

22 pages, 3141 KiB  
Article
Estimation of Pressure Pain in the Lower Limbs Using Electrodermal Activity, Tissue Oxygen Saturation, and Heart Rate Variability
by Youngho Kim, Seonggeon Pyo, Seunghee Lee, Changeon Park and Sunghyuk Song
Sensors 2025, 25(3), 680; https://doi.org/10.3390/s25030680 - 23 Jan 2025
Viewed by 1301
Abstract
Quantification of pain or discomfort induced by pressure is essential for understanding human responses to physical stimuli and improving user interfaces. Pain research has been conducted to investigate physiological signals associated with discomfort and pain perception. This study analyzed changes in electrodermal activity (EDA), tissue oxygen saturation (StO2), heart rate variability (HRV), and Visual Analog Scale (VAS) under pressures of 10, 20, and 30 kPa applied for 3 min to the thigh, knee, and calf in a seated position. Twenty participants were tested, and relationships between biosignals, pressure intensity, and pain levels were evaluated using Friedman tests and post-hoc analyses. Multiple linear regression models were used to predict VAS and pressure, and five machine learning models (SVM, Logistic Regression, Random Forest, MLP, KNN) were applied to classify pain levels (no pain: VAS 0, low: VAS 1–3, moderate: VAS 4–6, high: VAS 7–10) and pressure intensity. The results showed that higher pressure intensity and pain levels affected sympathetic nervous system responses and tissue oxygen saturation. Most EDA features and StO2 significantly changed according to pressure intensity and pain levels, while NN interval and HF among HRV features showed significant differences based on pressure intensity or pain level. Regression analysis combining biosignal features achieved a maximum R2 of 0.668 in predicting VAS and pressure intensity. The four-level classification model reached an accuracy of 88.2% for pain levels and 81.3% for pressure intensity. These results demonstrated the potential of EDA, StO2, HRV signals, and combinations of biosignal features for pain quantification and prediction. Full article
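The four-level labelling used in this study maps the Visual Analog Scale onto discrete classes (no pain: VAS 0, low: 1–3, moderate: 4–6, high: 7–10). The binning itself is stated in the abstract; the sketch below is simply a direct encoding of it:

```python
def pain_level(vas):
    """Map a VAS score (0-10) to the study's four pain classes."""
    if vas == 0:
        return "none"
    if vas <= 3:
        return "low"
    if vas <= 6:
        return "moderate"
    return "high"

print([pain_level(v) for v in (0, 2, 5, 9)])
# → ['none', 'low', 'moderate', 'high']
```

The reported 88.2% accuracy refers to classifying these four labels from EDA, StO2, and HRV features, not from the VAS score itself.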

27 pages, 27890 KiB  
Article
Optical Methods for Determining the Phagocytic Activity Profile of CD206-Positive Macrophages Extracted from Bronchoalveolar Lavage by Specific Mannosylated Polymeric Ligands
by Igor D. Zlotnikov, Alexander A. Ezhov, Natalia I. Kolganova, Dmitry Yurievich Ovsyannikov, Natalya G. Belogurova and Elena V. Kudryashova
Polymers 2025, 17(1), 65; https://doi.org/10.3390/polym17010065 - 30 Dec 2024
Viewed by 1453
Abstract
Macrophage (Mph) polarization and functional activity play an important role in the development of inflammatory lung conditions. The previously widely used bimodal classification of Mph into M1 and M2 does not adequately reflect the full range of changes in polarization and functional diversity observed in Mph in response to various stimuli and disease states. Here, we have developed a model for the direct assessment of functional alterations, in terms of phagocytosis activity, in Mph from bronchoalveolar lavage fluid (BALF), depending on external stimuli such as exposure to a range of bacteria (E. coli, B. subtilis and L. fermentum). We have employed polymeric mannosylated ligands (the “trapping ligand”) specifically targeting the CD206 receptor to selectively isolate activated Mph from the BALF of patients with pulmonary inflammatory conditions: primary ciliary dyskinesia (PCD), pneumonia and bronchial asthma. An “imaging ligand” allows for the subsequent visualization of the isolated cells using a sandwich technique. Five model strains of E. coli (MH-1, JM109, BL21, W3110 and ATCC25922), as well as B. subtilis and L. fermentum strains, each exhibiting distinct properties and expressing red fluorescent protein (RFP), were used as phagocytosis substrates. Fluorometric, FTIR and confocal laser scanning microscopy (CLSM) assessments of the phagocytic response of Mph to these bacterial cells were performed. Mph absorbed different strains of E. coli with different activities owing to differences in the surface villosity of the bacterial cells (pili and fimbriae, as well as signal patterns). In the presence of competitor cells (such as Lactobacilli), the phagocytic activity of Mph changes by a factor of two to five, depending strongly on the bacterial strain. The relative phagocytic activity indexes obtained for BALF-Mph were compared with those obtained for model human CD206+ Mph in the M1 polarization state (derived from THP-1 monocyte cultures) and considered as a set of parameters to define the Mph polarization profile in patient BALF. Mannan, as a marker determining the selectivity of binding to the CD206 mannose receptor of Mph, significantly inhibited the phagocytosis of E. coli and B. subtilis in cases of pneumonia, suggesting an important role of CD206 overexpression in acute inflammation. Conversely, L. fermentum binding was enhanced in PCD, possibly reflecting altered macrophage responsiveness in chronic lung diseases. Our approach, based on profiling Mph from patient BALF samples in terms of phagocytosis of a range of model bacterial strains, is important for the subsequent detailed study of the factors determining dangerous conditions and resistance to existing therapeutic options. Full article
(This article belongs to the Section Polymer Applications)

17 pages, 2337 KiB  
Article
Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning
by Madiha Rehman, Humaira Anwer, Helena Garay, Josep Alemany-Iturriaga, Isabel De la Torre Díez, Hafeez ur Rehman Siddiqui and Saleem Ullah
Sensors 2024, 24(21), 6965; https://doi.org/10.3390/s24216965 - 30 Oct 2024
Cited by 3 | Viewed by 2948
Abstract
The perception and recognition of objects around us empower environmental interaction. Harnessing the brain’s signals to achieve this objective has consistently posed difficulties. Researchers are exploring whether the poor accuracy in this field is a result of the design of the temporal stimulation (block versus rapid event) or the inherent complexity of electroencephalogram (EEG) signals. Decoding perceptive signal responses in subjects has become increasingly complex due to high noise levels and the complex nature of brain activities. EEG signals have high temporal resolution and are non-stationary signals, i.e., their mean and variance vary over time. This study aims to develop a deep learning model for decoding subjects’ responses to rapid-event visual stimuli and highlights the major factors that contribute to low accuracy in the EEG visual classification task. The proposed multi-class, multi-channel model integrates feature fusion to handle complex, non-stationary signals. This model is applied to the largest publicly available EEG dataset for visual classification, consisting of 40 object classes with 1000 images in each class. Contemporary state-of-the-art studies in this area investigating a large number of object classes have achieved a maximum accuracy of 17.6%. In contrast, our approach, which integrates Multi-Class, Multi-Channel Feature Fusion (MCCFF), achieves a classification accuracy of 33.17% for 40 classes. These results demonstrate the potential of EEG signals in advancing EEG visual classification and offer potential for future applications in visual machine models. Full article

14 pages, 2977 KiB  
Article
The Development of a Multicommand Tactile Event-Related Potential-Based Brain–Computer Interface Utilizing a Low-Cost Wearable Vibrotactile Stimulator
by Manorot Borirakarawin, Nannaphat Siribunyaphat, Si Thu Aung and Yunyong Punsawad
Sensors 2024, 24(19), 6378; https://doi.org/10.3390/s24196378 - 1 Oct 2024
Viewed by 1930
Abstract
A tactile event-related potential (ERP)-based brain–computer interface (BCI) system is an alternative for enhancing the control and communication abilities of quadriplegic patients with visual or auditory impairments. Hence, in this study, we proposed a tactile stimulus pattern using a vibrotactile stimulator for a multicommand BCI system. Additionally, we observed the tactile ERP response to the target among random vibrotactile stimuli placed at the left and right wrist and elbow positions to create commands. An experiment was conducted to explore the placement of the proposed vibrotactile stimuli and to verify the multicommand tactile ERP-based BCI system. Using the proposed features and conventional classification methods, we examined the classification efficiency of the four commands created from the selected EEG channels. The results show that the proposed vibrotactile stimulation with 15 stimulus trials produced a prominent ERP response in the Pz channel. The average classification accuracy ranged from 61.9% to 79.8% over 15 stimulus trials, requiring 36 s per command in offline processing. The P300 response in the parietal area yielded the highest average classification accuracy. The proposed method can guide the development of a BCI system that enables physically disabled people with visual or auditory impairments to control assistive and rehabilitative devices. Full article
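ERP-based BCIs of the kind described here detect the target by averaging EEG epochs time-locked to each stimulus, so that the stimulus-locked P300-like deflection survives while background noise averages out. A minimal synthetic sketch of that averaging step; the sampling rate, epoch layout, and response shape are assumptions for illustration:

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def average_epochs(eeg, onsets, length):
    """Average fixed-length epochs time-locked to the stimulus onsets."""
    return np.mean([eeg[o:o + length] for o in onsets], axis=0)

rng = np.random.default_rng(3)
n = FS * 101
eeg = rng.standard_normal(n)                   # background EEG as noise
onsets = np.arange(0, n - FS + 1, FS)          # one stimulus per second
p300 = np.exp(-0.5 * ((np.arange(FS) - 75) / 10) ** 2)  # bump near 300 ms
for o in onsets:
    eeg[o:o + FS] += p300                      # add the target response
erp = average_epochs(eeg, onsets, FS)
print(int(np.argmax(erp)))                     # peak near sample 75 (~300 ms)
```

This is why the paper's accuracy grows with the number of stimulus trials: each added epoch shrinks the residual noise in the average.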
(This article belongs to the Special Issue Brain Computer Interface for Biomedical Applications)

17 pages, 4616 KiB  
Article
Machine Learning-Based Classification of Body Imbalance and Its Intensity Using Electromyogram and Ground Reaction Force in Immersive Environments
by Jahan Zeb Gul, Muhammad Omar Cheema, Zia Mohy Ud Din, Maryam Khan, Woo Young Kim and Muhammad Muqeet Rehman
Appl. Sci. 2024, 14(18), 8209; https://doi.org/10.3390/app14188209 - 12 Sep 2024
Cited by 3 | Viewed by 1750
Abstract
Body balancing is a complex task that involves coordination among the muscles, tendons, bones, ears, eyes, and brain. Imbalance, or disequilibrium, is the inability to maintain the center of gravity. Maintaining body balance plays an important role in preventing falls and swaying. Biomechanical tests and video analysis can be performed to analyze body imbalance. The musculoskeletal system is one of the fundamental systems by which our balance, or equilibrium, is sustained and our upright posture is maintained. Electromyogram (EMG) and ground reaction force (GRF) monitoring can be utilized in cases where a rapid response to body imbalance is a necessity. Body balance also depends on visual stimuli, which can be either real or virtual. Researchers have used virtual reality (VR) to predict motion sickness and analyze heart rate variability, as well as in rehabilitation. VR can also be used to induce body imbalance in a controlled way. In this research, body imbalance was induced in a controlled way by playing an Oculus game while EMG and GRF were recorded simultaneously. Features were extracted from the EMG and fed to machine learning algorithms. Several machine learning algorithms were tested; upon 10-fold cross-validation, a minimum accuracy of 71% and a maximum accuracy of 98% were achieved by the Gaussian Naïve Bayes and Gradient Boosting classifiers, respectively, in the classification of imbalance and its intensities. This research can be incorporated into various rehabilitative and therapeutic systems. Full article
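The feature-extraction-plus-classifier workflow described above can be sketched as follows. The time-domain EMG features (mean absolute value, RMS, waveform length, zero crossings), the synthetic three-intensity data, and the classifier settings are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def emg_features(window):
    """Common time-domain EMG features: MAV, RMS, waveform length, zero crossings."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, rms, wl, zc])

# Synthetic EMG windows: imbalance intensity scales muscle activation amplitude
n_per_class, win = 80, 200
X, y = [], []
for intensity, scale in enumerate([1.0, 2.0, 3.5]):   # three intensity classes
    for _ in range(n_per_class):
        X.append(emg_features(rng.normal(0, scale, win)))
        y.append(intensity)
X, y = np.array(X), np.array(y)

# Compare classifiers under 10-fold cross-validation, as in the abstract
for clf in (GaussianNB(), GradientBoostingClassifier(random_state=0)):
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(type(clf).__name__, round(acc, 2))
```

On real recordings, the same loop would iterate over windowed EMG/GRF segments labeled by induced imbalance intensity rather than synthetic noise.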

30 pages, 909 KiB  
Article
Emotion Detection from EEG Signals Using Machine Deep Learning Models
by João Vitor Marques Rabelo Fernandes, Auzuir Ripardo de Alexandria, João Alexandre Lobo Marques, Débora Ferreira de Assis, Pedro Crosara Motta and Bruno Riccelli dos Santos Silva
Bioengineering 2024, 11(8), 782; https://doi.org/10.3390/bioengineering11080782 - 2 Aug 2024
Cited by 15 | Viewed by 7280
Abstract
Detecting emotions is a growing field aiming to comprehend and interpret human emotions from various data sources, including text, voice, and physiological signals. Electroencephalogram (EEG) is a unique and promising approach among these sources. EEG is a non-invasive monitoring technique that records the brain’s electrical activity through electrodes placed on the scalp’s surface. It is used in clinical and research contexts to explore how the human brain responds to emotions and cognitive stimuli. Recently, its use has gained interest in real-time emotion detection, offering a direct approach independent of facial expressions or voice. This is particularly useful in resource-limited scenarios, such as brain–computer interfaces supporting mental health. The objective of this work is to evaluate the classification of emotions (positive, negative, and neutral) in EEG signals using machine learning and deep learning, focusing on Graph Convolutional Neural Networks (GCNN), based on the analysis of critical attributes of the EEG signal (Differential Entropy (DE), Power Spectral Density (PSD), Differential Asymmetry (DASM), Rational Asymmetry (RASM), Asymmetry (ASM), Differential Causality (DCAU)). The electroencephalography dataset used in the research was the public SEED dataset (SJTU Emotion EEG Dataset), obtained through auditory and visual stimuli in segments from Chinese emotional movies. The experiment employed to evaluate the model results was “subject-dependent”. In this method, the Deep Neural Network (DNN) achieved an accuracy of 86.08%, surpassing SVM, albeit with significant processing time due to the optimization characteristics inherent to the algorithm. The GCNN algorithm achieved an average accuracy of 89.97% in the subject-dependent experiment. This work contributes to emotion detection in EEG, emphasizing the effectiveness of different models and underscoring the importance of selecting appropriate features and the ethical use of these technologies in practical applications. The GCNN emerges as the most promising methodology for future research. Full article
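The Differential Entropy (DE) feature listed in this abstract has a closed form for approximately Gaussian band-limited signals, DE = ½ ln(2πeσ²), which is why it is commonly computed per frequency band after bandpass filtering. Below is a minimal sketch of per-band DE extraction on synthetic single-channel data; the band boundaries, filter order, and sampling rate are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200  # sampling rate in Hz (assumed)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(signal, fs, band):
    """DE of a band-limited signal, assuming it is approximately Gaussian:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered))

rng = np.random.default_rng(2)
eeg = rng.normal(0, 1, fs * 4)          # 4 s of synthetic single-channel EEG
de = {name: differential_entropy(eeg, fs, b) for name, b in bands.items()}
print({k: round(v, 2) for k, v in de.items()})
```

Asymmetry features such as DASM then follow directly, e.g. as the difference of DE values between symmetric left- and right-hemisphere electrode pairs.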
(This article belongs to the Special Issue Monitoring and Analysis of Human Biosignals, Volume II)
