Search Results (74)

Search Parameters:
Keywords = visual stimuli classification

17 pages, 3773 KB  
Article
Relationship Between Display Pixel Structure and Gloss Perception
by Kosei Aketagawa, Midori Tanaka and Takahiko Horiuchi
J. Imaging 2026, 12(2), 71; https://doi.org/10.3390/jimaging12020071 - 9 Feb 2026
Viewed by 195
Abstract
The demand for accurate representation of gloss perception, which significantly contributes to the impression and evaluation of objects, is increasing owing to recent advancements in display technology enabling high-definition visual reproduction. This study experimentally analyzes the influence of display pixel structure on gloss perception. In a visual evaluation experiment using natural images, gloss perception was assessed across six types of stimuli: three subpixel arrays (RGB, RGBW, and PenTile RGBG) combined with two pixel–aperture ratios (100% and 50%). The experimental results statistically confirmed that regardless of pixel–aperture ratio, the RGB subpixel array was perceived as exhibiting the strongest gloss. Furthermore, cluster analysis of observers revealed individual differences in the effect of pixel structure on gloss perception. Additionally, gloss classification and image feature analysis suggested that the magnitude of pixel structure influence varies depending on the frequency components contained in the images. Moreover, analysis using a generalized linear mixed model supported the superiority of the RGB subpixel array even when accounting for variability across observers and natural images.
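
A minimal sketch of the mixed-model step, not the authors' code: it fits a linear mixed model with observer as a random effect as a simplified stand-in for the generalized linear mixed model described above, using synthetic ratings and hypothetical factor levels.

```python
# Minimal sketch (synthetic data, simplified stand-in for the GLMM): gloss ratings
# modeled with fixed effects for subpixel array and aperture ratio and a random
# intercept per observer, using statsmodels' MixedLM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 360
df = pd.DataFrame({
    "observer": rng.integers(0, 12, n),                        # hypothetical observer IDs
    "subpixel": rng.choice(["RGB", "RGBW", "PenTile_RGBG"], n),
    "aperture": rng.choice([100, 50], n),
})
df["gloss"] = (df["subpixel"] == "RGB") * 0.5 + rng.normal(0, 1, n)   # toy RGB advantage

model = smf.mixedlm("gloss ~ C(subpixel) + C(aperture)", df, groups=df["observer"])
print(model.fit().summary())
```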

28 pages, 3292 KB  
Review
Hydrogels as Promising Carriers for Ophthalmic Disease Treatment: A Comprehensive Review
by Wenxiang Zhu, Mingfang Xia, Yahui He, Qiuling Huang, Zhimin Liao, Xiaobo Wang, Xiaoyu Zhou and Xuanchu Duan
Gels 2026, 12(2), 105; https://doi.org/10.3390/gels12020105 - 27 Jan 2026
Viewed by 734
Abstract
Ocular disorders such as keratitis, glaucoma, age-related macular degeneration (AMD), diabetic retinopathy (DR), and dry eye disease (DED) are highly prevalent worldwide and remain major causes of visual impairment and blindness. Conventional therapeutic approaches for ocular diseases, such as eye drops, surgery, and laser therapy, are frequently hampered by limited drug bioavailability, rapid clearance, and treatment-related complications, primarily due to the eye’s unique anatomical and physiological barriers. Hydrogels, characterized by their three-dimensional network structure, high water content, excellent biocompatibility, and tunable physicochemical properties, have emerged as promising platforms for ophthalmic drug delivery. This review summarizes the classification, fabrication strategies, and essential properties of hydrogels, and highlights recent advances in their application to ocular diseases, including keratitis management, corneal wound repair, intraocular pressure regulation and neuroprotection in glaucoma, sustained drug delivery for AMD and DR, vitreous substitutes for retinal detachment, and therapies for DED. In particular, we highlight recent advances in stimuli-responsive hydrogels that enable spatiotemporally controlled drug release in response to ocular cues such as temperature, pH, redox state, and enzyme activity, thereby enhancing therapeutic precision and efficacy. Furthermore, this review critically evaluates translational aspects, including long-term ocular safety, clinical feasibility, manufacturing scalability, and regulatory challenges, which are often underrepresented in existing reviews. By integrating material science, ocular pathology, and translational considerations, this review aims to provide a comprehensive framework for the rational design of next-generation hydrogel systems and to facilitate their clinical translation in ophthalmic therapy.
(This article belongs to the Special Issue Novel Hydrogels for Drug Delivery and Regenerative Medicine)

27 pages, 12800 KB  
Article
Olfactory Enrichment of Captive Pygmy Hippopotamuses with Applied Machine Learning
by Jonas Nielsen, Frej Gammelgård, Silje Marquardsen Lund, Anja Sofie Banasik Præstekær, Astrid Vinterberg Frandsen, Camilla Strandqvist, Mikkel Haugaard Nielsen, Rasmus Nikolajgaard Olsen, Sussie Pagh, Thea Loumand Faddersbøll and Cino Pertoldi
Animals 2026, 16(3), 385; https://doi.org/10.3390/ani16030385 - 26 Jan 2026
Viewed by 533
Abstract
The pygmy hippopotamus (Choeropsis liberiensis, Morton, 1849) is classified as Endangered by the International Union for the Conservation of Nature (IUCN). Compared to other large, threatened mammals, this species remains relatively understudied, and new findings indicate potential welfare concerns, emphasizing the need for further research on the species’ welfare in zoological institutions. One approach to improving welfare in captivity is through environmental enrichment. This study investigated the effects of olfactory enrichment on three individual pygmy hippopotamuses through behavioral analysis and heat-map visualization. Using continuous focal sampling, we found that several behaviors were influenced by the stimuli, with results showing a general decrease in inactivity and an increase in environmental engagement and interaction, particularly through scenting behavior. To further enhance behavioral quantification, machine learning techniques were applied to video data, comparing manual and automated behavior classification using the pose estimation program SLEAP. Four behaviors (Standing, Locomotion, Feeding/Foraging, and Lying Down) were compared. A confusion matrix, time budgets, and Kendall’s Coefficient of Concordance (W) were used to assess agreement between methods. The results showed strong and moderate agreement between manual and automated annotations for the female and the calf, respectively. This demonstrates the potential of automation to complement behavioral observations in future welfare monitoring.
(This article belongs to the Section Animal System and Management)
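
As a rough illustration of the agreement statistic named above, the sketch below computes Kendall's Coefficient of Concordance (W) for a manual versus an automated time budget over the four behaviors. The numbers are invented and the two-rater setup is an assumption; this is not the study's data or code.

```python
# Illustrative only: Kendall's W (no tie correction) between a manual time budget
# and a SLEAP-derived automated one, treating the two annotation methods as raters.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """ratings: (n_raters, n_items) array of scores; W in [0, 1]."""
    ranks = np.vstack([rankdata(r) for r in ratings])   # rank items within each rater
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

manual = np.array([42.0, 18.0, 25.0, 15.0])       # % time: Standing, Locomotion, Feeding/Foraging, Lying Down
automated = np.array([40.0, 23.0, 20.0, 17.0])    # invented automated estimates
print(f"Kendall's W = {kendalls_w(np.vstack([manual, automated])):.2f}")
```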

23 pages, 1740 KB  
Article
Print Exposure Interaction with Neural Tuning on Letter/Non-Letter Processing During Literacy Acquisition: An ERP Study on Dyslexic and Typically Developing Children
by Elizaveta Galperina, Olga Kruchinina, Polina Boichenkova and Alexander Kornev
Languages 2026, 11(1), 15; https://doi.org/10.3390/languages11010015 - 14 Jan 2026
Viewed by 510
Abstract
Background/Objectives: The first step in learning an alphabetic writing system is to establish letter–sound associations. This process is more difficult for children with dyslexia (DYS) than for typically developing (TD) children. Cerebral mechanisms underlying these associations are not fully understood and are expected to change during the training course. This study aimed to identify the neurophysiological correlates and developmental changes of visual letter processing in children with DYS compared to TD children, using event-related potentials (ERPs) during a letter/non-letter classification task. Methods: A total of 71 Russian-speaking children aged 7–11 years participated in the study, including 38 with dyslexia and 33 TD children. The participants were divided into younger (7–8 y.o.) and older (9–11 y.o.) subgroups. EEG recordings were taken while participants classified letters and non-letter characters. We analyzed ERP components (N/P150, N170, P260, P300, N320, and P600) in left-hemisphere regions of interest related to reading: the ventral occipito-temporal cortex (VWFA ROI) and the inferior frontal cortex (frontal ROI). Results: Behavioral differences, specifically lower accuracy in children with dyslexia, were observed only in the younger subgroup. ERP analysis indicated that both groups displayed common stimulus effects, such as a larger N170 for letters in younger children. However, their developmental trajectories diverged. The DYS group showed an age-related increase in the amplitude of early components (N/P150 in VWFA ROI), which contrasts with the typical decrease observed in TD children. In contrast, the late P600 component in the frontal ROI revealed an age-related decrease in the DYS group, along with overall reduced amplitudes compared to their TD peers. Additionally, the N320 component differentiated stimuli exclusively in the DYS group. Conclusions: The data obtained in this study confirmed that the mechanisms of letter recognition in children with dyslexia differ in some ways from those of their TD peers. This atypical developmental pattern involves a failure to efficiently specialize early visual processing, as evidenced by the increasing N/P150. Additionally, there is a progressive reduction in the cognitive resources available for higher-order reanalysis and control, indicated by the decreasing frontal P600. This disruption in neural specialization and automation ultimately hinders the development of fluent reading.

16 pages, 1543 KB  
Article
Inferring Mental States via Linear and Non-Linear Body Movement Dynamics: A Pilot Study
by Tad T. Brunyé, Kana Okano, James McIntyre, Madelyn K. Sandone, Lisa N. Townsend, Marissa Marko Lee, Marisa Smith and Gregory I. Hughes
Sensors 2025, 25(22), 6990; https://doi.org/10.3390/s25226990 - 15 Nov 2025
Viewed by 776
Abstract
Stress, workload, and uncertainty characterize occupational tasks across sports, healthcare, military, and transportation domains. Emerging theory and empirical research suggest that coordinated whole-body movements may reflect these transient mental states. Wearable sensors and optical motion capture offer opportunities to quantify such movement dynamics and classify mental states that influence occupational performance and human–machine interaction. We tested this possibility in a small pilot study (N = 10) designed to test feasibility and identify preliminary movement features linked to mental states. Participants performed a perceptual decision-making task involving facial emotion recognition (i.e., deciding whether depicted faces were happy versus angry) with variable levels of stress (via a risk of electric shock), workload (via time pressure), and uncertainty (via visual degradation of task stimuli). The time series of movement trajectories was analyzed both holistically (full trajectory) and by phase: lowered (early), raising (middle), aiming (late), and face-to-face (sequential). For each epoch, up to 3844 linear and non-linear features were extracted across temporal, spectral, probability, divergence, and fractal domains. Features were entered into a repeated 10-fold cross-validation procedure using 80/20 train/test splits. Feature selection was conducted with the T-Rex Selector, and selected features were used to train a scikit-learn pipeline with a Robust Scaler and a Logistic Regression classifier. Models achieved mean ROC AUC scores as high as 0.76 for stress classification, with the highest sensitivity during the full movement trajectory and middle (raise) phases. Classification of workload and uncertainty states was less successful. These findings demonstrate the potential of movement-based sensing to infer stress states in applied settings and inform future human–machine interface development.
(This article belongs to the Special Issue Sensors and Data Analysis for Biomechanics and Physical Activity)
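
A minimal sketch of this kind of classification pipeline, under stated assumptions: synthetic features and labels, a generic univariate selector standing in for the T-Rex Selector, and repeated stratified 10-fold cross-validation scored with ROC AUC. It is not the authors' code.

```python
# Hedged sketch: feature selection -> RobustScaler -> LogisticRegression, evaluated
# with repeated stratified 10-fold cross-validation and ROC AUC (synthetic data;
# SelectKBest is only a placeholder for the T-Rex Selector used in the paper).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))            # movement features (epochs x features)
y = rng.integers(0, 2, size=200)           # binary state labels (e.g., high vs. low stress)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),    # placeholder feature-selection step
    ("scale", RobustScaler()),                   # robust to outlying movement samples
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.2f}")
```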

29 pages, 5273 KB  
Article
Intersession Robust Hybrid Brain–Computer Interface: Safe and User-Friendly Approach with LED Activation Mechanism
by Sefa Aydın, Mesut Melek and Levent Gökrem
Micromachines 2025, 16(11), 1264; https://doi.org/10.3390/mi16111264 - 8 Nov 2025
Cited by 1 | Viewed by 1167
Abstract
This study introduces a hybrid Brain–Computer Interface (BCI) system with a robust and secure activation mechanism between sessions, aiming to minimize the negative effects of visual stimulus-based BCI systems on user eye health. The system is based on the integration of Electroencephalography (EEG) signals and Electrooculography (EOG) artefacts, and includes an LED stimulus operating at a frequency of 7 Hz for safe activation and objects moving in different directions. While the LED functions as an activation switch that reduces visual fatigue caused by traditional visual stimuli, moving objects provide command generation depending on the user’s intention. In order to evaluate the stability of the system against physiological and psychological conditions, data were collected from 15 participants in two different sessions. The Correlation Alignment (CORAL) method was applied to the data to reduce the variance between sessions and to increase stability. A Bootstrap Aggregating algorithm was used in the classification processes, and with the CORAL method, the system accuracy rate was increased from 81.54% to 94.29%. Compared to similar BCI approaches, the proposed system offers a safe activation mechanism that effectively adapts to users’ changing cognitive states throughout the day by reducing visual fatigue, despite using a low number of EEG channels, and demonstrates its practicality and effectiveness by performing on a par with or better than other systems in terms of accuracy and stability.
(This article belongs to the Special Issue Bioelectronics and Its Limitless Possibilities)
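
A brief sketch of how CORAL-style alignment followed by bagging could look, assuming simple numpy feature matrices; the feature dimensions, regularization, and class counts are invented, and this is not the authors' implementation.

```python
# Illustrative CORAL sketch: re-colour session-1 (source) features so their covariance
# matches session-2 (target) features, then train a Bootstrap Aggregating classifier.
import numpy as np
from scipy.linalg import fractional_matrix_power
from sklearn.ensemble import BaggingClassifier

def coral(Xs, Xt, eps=1e-3):
    """Align source features to the target covariance (Correlation Alignment)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])   # regularized covariances
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return np.real(Xs @ fractional_matrix_power(Cs, -0.5) @ fractional_matrix_power(Ct, 0.5))

rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(120, 16)), rng.integers(0, 4, 120)    # session-1 features/labels
Xt = rng.normal(loc=0.3, size=(120, 16))                        # session-2 features (unlabeled)

clf = BaggingClassifier(n_estimators=50, random_state=0)        # decision-tree base by default
clf.fit(coral(Xs, Xt), ys)                                      # train on aligned features
```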

23 pages, 8644 KB  
Article
Understanding What the Brain Sees: Semantic Recognition from EEG Responses to Visual Stimuli Using Transformer
by Ahmed Fares
AI 2025, 6(11), 288; https://doi.org/10.3390/ai6110288 - 7 Nov 2025
Cited by 1 | Viewed by 1733
Abstract
Understanding how the human brain processes and interprets multimedia content represents a frontier challenge in neuroscience and artificial intelligence. This study introduces a novel approach to decode semantic information from electroencephalogram (EEG) signals recorded during visual stimulus perception. We present DCT-ViT, a spatial–temporal transformer architecture that pioneers automated semantic recognition from brain activity patterns, advancing beyond conventional brain state classification to interpret higher level cognitive understanding. Our methodology addresses three fundamental innovations: First, we develop a topology-preserving 2D electrode mapping that, combined with temporal indexing, generates 3D spatial–temporal representations capturing both anatomical relationships and dynamic neural correlations. Second, we integrate discrete cosine transform (DCT) embeddings with standard patch and positional embeddings in the transformer architecture, enabling frequency-domain analysis that quantifies activation variability across spectral bands and enhances attention mechanisms. Third, we introduce the Semantics-EEG dataset comprising ten semantic categories extracted from visual stimuli, providing a benchmark for brain-perceived semantic recognition research. The proposed DCT-ViT model achieves 72.28% recognition accuracy on Semantics-EEG, substantially outperforming LSTM-based and attention-augmented recurrent baselines. Ablation studies demonstrate that DCT embeddings contribute meaningfully to model performance, validating their effectiveness in capturing frequency-specific neural signatures. Interpretability analyses reveal neurobiologically plausible attention patterns, with visual semantics activating occipital–parietal regions and abstract concepts engaging frontal–temporal networks, consistent with established cognitive neuroscience models. To address systematic misclassification between perceptually similar categories, we develop a hierarchical classification framework with boundary refinement mechanisms. This approach substantially reduces confusion between overlapping semantic categories, elevating overall accuracy to 76.15%. Robustness evaluations demonstrate superior noise resilience, effective cross-subject generalization, and few-shot transfer capabilities to novel categories. This work establishes the technical foundation for brain–computer interfaces capable of decoding semantic understanding, with implications for assistive technologies, cognitive assessment, and human–AI interaction. Both the Semantics-EEG dataset and DCT-ViT implementation are publicly released to facilitate reproducibility and advance research in neural semantic decoding.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
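
The frequency-domain embedding idea can be sketched in a few lines; this is an illustrative approximation rather than the released DCT-ViT code, and the patch shape, number of retained coefficients, and random projection are assumptions.

```python
# Illustrative sketch: add a DCT-based frequency embedding to an ordinary linear
# patch embedding for 3D spatial-temporal EEG patches (shapes are assumptions).
import numpy as np
from scipy.fft import dctn

def dct_embedding(patches, k=32):
    """patches: (n_patches, h, w, t); keep the first k DCT coefficients (flattened order)."""
    coeffs = dctn(patches, axes=(1, 2, 3), norm="ortho")
    return coeffs.reshape(len(patches), -1)[:, :k]

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 4, 4, 8))        # hypothetical spatial-temporal patches
W = rng.normal(size=(4 * 4 * 8, 32))             # stand-in for a learnable patch projection
tokens = patches.reshape(196, -1) @ W + dct_embedding(patches)   # patch + DCT embeddings
print(tokens.shape)
```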

34 pages, 1172 KB  
Article
Stimulus-Evoked Brain Signals for Parkinson’s Detection: A Comprehensive Benchmark Performance Analysis on Cross-Stimulation and Channel-Wise Experiments
by Krishna Patel, Rajendra Gad, Marissa Lourdes de Ataide, Narayan Vetrekar, Teresa Ferreira and Raghavendra Ramachandra
Bioengineering 2025, 12(11), 1185; https://doi.org/10.3390/bioengineering12111185 - 30 Oct 2025
Viewed by 961
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that affects both motor and cognitive functions, often resulting in misdiagnosis during its early stages. The condition severely impacts daily living, diminishing an individual’s ability to work and carry out routine tasks independently. Consequently, the development of automated methods for reliable PD detection has gained growing research interest. Among the available approaches, Electroencephalography (EEG) has emerged as a promising non-invasive and cost-effective tool. Nevertheless, most existing studies have predominantly focused on resting-state EEG, which constrains the generalizability and robustness of the proposed detection models. This study introduces a cross-stimulation evaluation framework to assess its impact on Parkinson’s disease detection algorithms and conducts channel-wise analysis to identify the most discriminative brain regions for accurate diagnosis. To support this research, we present the newly introduced Parkinson’s disease EEG (ParEEG) database, comprising 203,520 EEG samples from 60 subjects recorded based on Resting-State Visual Evoked Potential (RSVEP) and Steady-State Visually Evoked Potential (SSVEP) stimuli. In this study, we evaluate the performance of individual EEG channels using two handcrafted and two deep learning-based methods, employing a 10-fold cross-validation strategy, to ensure statistical reliability and establish benchmark results. Experimental results show that CRC and LSTM consistently achieved high accuracies (95–100%) with low variability (standard deviation < 2%). The analysis indicates that EEG channels in the frontal, fronto-central, and central–parietal regions consistently yield higher classification accuracy in Parkinson’s disease detection. Our findings offer valuable insights into channel-specific neural alterations for better interpretability in PD, and the cross-stimulation evaluation enhances the generalizability of EEG-based PD detection for practical diagnostic purposes.
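
A hedged sketch of the channel-wise evaluation protocol, assuming per-channel feature matrices and a generic SVM in place of the paper's four methods; it is not the ParEEG benchmark code.

```python
# Illustrative channel-wise benchmark: score each EEG channel separately with
# stratified 10-fold cross-validation to rank discriminative channels (synthetic data).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_channels, n_features = 300, 8, 40
X = rng.normal(size=(n_samples, n_channels, n_features))   # per-channel handcrafted features
y = rng.integers(0, 2, size=n_samples)                     # PD vs. control labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for ch in range(n_channels):
    acc = cross_val_score(SVC(), X[:, ch, :], y, cv=cv).mean()
    print(f"channel {ch}: mean accuracy {acc:.3f}")
```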

14 pages, 920 KB  
Article
AI-Based Facial Emotion Analysis for Early and Differential Diagnosis of Dementia
by Letizia Bergamasco, Anita Coletta, Gabriella Olmo, Aurora Cermelli, Elisa Rubino and Innocenzo Rainero
Bioengineering 2025, 12(10), 1082; https://doi.org/10.3390/bioengineering12101082 - 4 Oct 2025
Cited by 2 | Viewed by 1769
Abstract
Early and differential diagnosis of dementia is essential for timely and targeted care. This study investigated the feasibility of using an artificial intelligence (AI)-based system to discriminate between different stages and etiologies of dementia by analyzing facial emotions. We collected video recordings of 64 participants exposed to standardized audio-visual stimuli. Facial emotion features in terms of valence and arousal were extracted and used to train machine learning models on multiple classification tasks, including distinguishing individuals with mild cognitive impairment (MCI) and overt dementia from healthy controls (HCs) and differentiating Alzheimer’s disease (AD) from other types of cognitive impairment. Nested cross-validation was adopted to evaluate the performance of different tested models (K-Nearest Neighbors, Logistic Regression, and Support Vector Machine models) and optimize their hyperparameters. The system achieved a cross-validation accuracy of 76.0% for MCI vs. HCs, 73.6% for dementia vs. HCs, and 64.1% in the three-class classification (MCI vs. dementia vs. HCs). Among cognitively impaired individuals, a 75.4% accuracy was reached in distinguishing AD from other etiologies. These results demonstrated the potential of AI-driven facial emotion analysis as a non-invasive tool for early detection of cognitive impairment and for supporting differential diagnosis of AD in clinical settings.
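
The nested cross-validation scheme can be illustrated as below, with an inner grid search tuning hyperparameters and an outer loop estimating accuracy; the parameter grid, fold counts, and data are assumptions, not the study's setup.

```python
# Hedged sketch of nested cross-validation: GridSearchCV (inner folds) tunes an SVM,
# cross_val_score (outer folds) estimates generalization accuracy (synthetic data).
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 20))        # per-participant valence/arousal features
y = rng.integers(0, 2, size=64)      # e.g., MCI vs. healthy controls

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
tuned_svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner)
acc = cross_val_score(tuned_svm, X, y, cv=outer).mean()   # outer estimate, tuning kept inside
print(f"nested-CV accuracy: {acc:.3f}")
```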

34 pages, 4605 KB  
Article
Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
by Roberto De Fazio, Şule Esma Yalçınkaya, Ilaria Cascella, Carolina Del-Valle-Soto, Massimo De Vittorio and Paolo Visconti
Sensors 2025, 25(19), 6021; https://doi.org/10.3390/s25196021 - 1 Oct 2025
Viewed by 3021
Abstract
Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results, with overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set, demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, mainly for unobtrusive, home-based sleep monitoring systems.
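
The dimensionality-reduction step can be illustrated as follows; the sketch uses synthetic features, omits the mRMR ranking stage, and keeps just enough principal components to reach the 94% cumulative-variance target quoted above. Shapes and data are assumptions, not the BOAS pipeline.

```python
# Illustrative PCA reduction: keep the smallest number of components whose cumulative
# explained variance reaches a target (e.g., 94%), as in the reduced feature sets above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 120))     # epochs x (time/frequency/non-linear features)

cum_var = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_keep = int(np.searchsorted(cum_var, 0.94)) + 1            # first k reaching 94% variance
X_reduced = PCA(n_components=n_keep).fit_transform(X)
print(f"kept {n_keep} of {X.shape[1]} components ({cum_var[n_keep - 1]:.1%} variance)")
```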

28 pages, 6595 KB  
Article
Identifying Individual Information Processing Styles During Advertisement Viewing Through EEG-Driven Classifiers
by Antiopi Panteli, Eirini Kalaitzi and Christos A. Fidas
Information 2025, 16(9), 757; https://doi.org/10.3390/info16090757 - 1 Sep 2025
Cited by 1 | Viewed by 1280
Abstract
Neuromarketing studies brain function in response to marketing stimuli. Much neuromarketing research uses electroencephalography (EEG) recordings of individuals’ brain responses to marketing stimuli, aiming to identify influences on consumer behaviour that people cannot articulate or are reluctant to reveal. Evidence suggests that individuals’ processing styles affect their reaction to marketing stimuli. In this study, we propose and evaluate a predictive model that classifies consumers as verbalizers or visualizers based on EEG signals recorded during exposure to verbal, visual, and mixed advertisements. Participants (N = 22) were categorized into verbalizers and visualizers using the Style of Processing (SOP) scale and underwent EEG recording while viewing ads. The EEG signals were preprocessed and the five EEG frequency bands were extracted. We employed three classification models for every set of ads: SVM, Decision Tree, and kNN. While all three classifiers performed similarly, with accuracies between 86% and 93%, SVM proved to be the most effective model during cross-validation, with kNN and Decision Tree showing sensitivity to data imbalances. Additionally, we conducted independent t-tests to look for statistically significant differences between the two classes; the t-tests implicated the Theta frequency band. These findings highlight the potential of leveraging EEG-based technology to predict a consumer’s processing style for advertisements and offer practical applications in fields such as interactive content design and user-experience personalization.
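
A hedged sketch of this analysis chain, with invented data and assumed band edges: relative band powers from Welch PSDs feed SVM, Decision Tree, and kNN classifiers, and an independent t-test compares theta power between the two groups. It is not the study's code.

```python
# Illustrative sketch: relative EEG band powers -> three classifiers -> t-test on theta.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch
from scipy.stats import ttest_ind
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

fs = 256
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def relative_band_powers(epoch):
    f, p = welch(epoch, fs=fs, nperseg=fs * 2)
    total = trapezoid(p, f)
    return [trapezoid(p[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)]) / total
            for lo, hi in bands.values()]

rng = np.random.default_rng(0)
epochs = rng.normal(size=(88, fs * 4))     # single-channel EEG epochs during ad viewing
X = np.array([relative_band_powers(e) for e in epochs])
y = rng.integers(0, 2, size=88)            # 0 = verbalizer, 1 = visualizer (SOP label)

for name, clf in [("SVM", SVC()), ("DecisionTree", DecisionTreeClassifier()), ("kNN", KNeighborsClassifier())]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
print(ttest_ind(X[y == 0, 1], X[y == 1, 1]))   # theta power (column 1) between groups
```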

22 pages, 4200 KB  
Article
Investigation of Personalized Visual Stimuli via Checkerboard Patterns Using Flickering Circles for SSVEP-Based BCI System
by Nannaphat Siribunyaphat, Natjamee Tohkhwan and Yunyong Punsawad
Sensors 2025, 25(15), 4623; https://doi.org/10.3390/s25154623 - 25 Jul 2025
Cited by 1 | Viewed by 2645
Abstract
In this study, we conducted two steady-state visual evoked potential (SSVEP) studies to develop a practical brain–computer interface (BCI) system for communication and control applications. The first study introduces a novel visual stimulus paradigm that combines checkerboard patterns with flickering circles configured in single-, double-, and triple-layer forms. We tested three flickering frequency conditions: a single fundamental frequency, a combination of the fundamental frequency and its harmonics, and a combination of two fundamental frequencies. The second study utilizes personalized visual stimuli to enhance SSVEP responses. SSVEP detection was performed using power spectral density (PSD) analysis by employing Welch’s method and relative PSD to extract SSVEP features. Command classification was carried out using a proposed decision rule–based algorithm. The results were compared with those of a conventional checkerboard pattern with flickering squares. The experimental findings indicate that single-layer flickering circle patterns exhibit comparable or improved performance when compared with the conventional stimuli, particularly when customized for individual users. Conversely, the multilayer patterns tended to increase visual fatigue. Furthermore, individualized stimuli achieved a classification accuracy of 90.2% in real-time SSVEP-based BCI systems for six-command generation tasks. The personalized visual stimuli can enhance user experience and system performance, thereby supporting the development of a practical SSVEP-based BCI system.
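
The PSD-based detection step can be sketched briefly, with assumed stimulus frequencies, sampling rate, and a simple argmax in place of the paper's decision-rule algorithm.

```python
# Illustrative SSVEP feature extraction: Welch PSD, relative power around each target
# frequency, and a simple strongest-response rule (parameters are assumptions).
import numpy as np
from scipy.signal import welch

fs = 250
target_freqs = [7.0, 8.0, 9.0, 10.0, 11.0, 12.0]    # assumed stimulus frequencies

def relative_psd_features(eeg, band=1.0):
    f, p = welch(eeg, fs=fs, nperseg=fs * 2)
    feats = []
    for f0 in target_freqs:
        idx = (f >= f0 - band / 2) & (f <= f0 + band / 2)
        feats.append(p[idx].sum() / p.sum())         # relative power near each target
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = rng.normal(size=fs * 4)                        # 4 s occipital EEG segment
feats = relative_psd_features(eeg)
print(f"selected command index: {int(np.argmax(feats))}")
```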

26 pages, 15354 KB  
Article
Adaptive Neuro-Affective Engagement via Bayesian Feedback Learning in Serious Games for Neurodivergent Children
by Diego Resende Faria and Pedro Paulo da Silva Ayrosa
Appl. Sci. 2025, 15(13), 7532; https://doi.org/10.3390/app15137532 - 4 Jul 2025
Cited by 2 | Viewed by 1783
Abstract
Neuro-Affective Intelligence (NAI) integrates neuroscience, psychology, and artificial intelligence to support neurodivergent children through personalized Child–Machine Interaction (CMI). This paper presents an adaptive neuro-affective system designed to enhance engagement in children with neurodevelopmental disorders through serious games. The proposed framework incorporates real-time biophysical signals, including EEG-based concentration, facial expressions, and in-game performance, to compute a personalized engagement score. We introduce a novel mechanism, Bayesian Immediate Feedback Learning (BIFL), which dynamically selects visual, auditory, or textual stimuli based on real-time neuro-affective feedback. A multimodal CNN-based classifier detects mental states, while a probabilistic ensemble merges affective state classifications derived from facial expressions. A multimodal weighted engagement function continuously updates stimulus–response expectations. The system adapts in real time by selecting the most appropriate cue to support the child’s cognitive and emotional state. Experimental validation with 40 children (ages 6–10) diagnosed with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) demonstrates the system’s effectiveness in sustaining attention, improving emotional regulation, and increasing overall game engagement. The proposed framework, combining neuro-affective state recognition, multimodal engagement scoring, and BIFL, significantly improved cognitive and emotional outcomes: concentration increased by 22.4%, emotional engagement by 24.8%, and game performance by 32.1%. Statistical analysis confirmed the significance of these improvements (p < 0.001, Cohen’s d > 1.4). These findings demonstrate the feasibility and impact of probabilistic, multimodal, and neuro-adaptive AI systems in therapeutic and educational applications.
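
The abstract does not spell out the BIFL update rule, so the following is only a generic illustration of a Bayesian feedback loop over cue modalities (Beta-Bernoulli counts with Thompson sampling); the cue names, reward probabilities, and loop length are invented, and this is not the paper's algorithm.

```python
# Generic illustration (not BIFL): keep Beta-Bernoulli counts of whether each cue
# improved engagement and pick the next cue by Thompson sampling.
import numpy as np

rng = np.random.default_rng(0)
cues = ["visual", "auditory", "textual"]
alpha = np.ones(3)     # prior "engagement improved" counts per cue
beta = np.ones(3)      # prior "no improvement" counts per cue

for _ in range(200):
    choice = int(np.argmax(rng.beta(alpha, beta)))           # Thompson sampling
    improved = rng.random() < [0.4, 0.6, 0.3][choice]        # stand-in engagement feedback
    alpha[choice] += improved
    beta[choice] += 1 - improved

print(dict(zip(cues, np.round(alpha / (alpha + beta), 2))))  # learned effectiveness estimates
```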

17 pages, 2799 KB  
Article
The Phenomenology of Offline Perception: Multisensory Profiles of Voluntary Mental Imagery and Dream Imagery
by Maren Bilzer and Merlin Monzel
Vision 2025, 9(2), 37; https://doi.org/10.3390/vision9020037 - 21 Apr 2025
Cited by 2 | Viewed by 3198
Abstract
Both voluntary mental imagery and dream imagery involve multisensory representations without externally present stimuli that can be categorized as offline perceptions. Due to common mechanisms, correlations between multisensory dream imagery profiles and multisensory voluntary mental imagery profiles were hypothesized. In a sample of 226 participants, correlations within the respective state of consciousness were significantly larger than those across states, favouring two distinct networks. However, the association between the vividness of voluntary mental imagery and the vividness of dream imagery was moderated by the frequency of dream recall and lucid dreaming, suggesting that both networks become increasingly similar when higher metacognition is involved. Additionally, the vividness of emotional and visual imagery was significantly higher for dream imagery than for voluntary mental imagery, reflecting the immersive nature of dreams and the continuity of visual dominance across waking and sleep. In contrast, the vividness of auditory, olfactory, gustatory, and tactile imagery was higher for voluntary mental imagery, probably due to higher cognitive control while awake. Most results were replicated four weeks later, weakening the notion of state influences. Overall, our results indicate similarities between dream imagery and voluntary mental imagery that justify a common classification as offline perception, but also highlight important differences.
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)

20 pages, 2133 KB  
Article
Real-Time Mobile Robot Obstacles Detection and Avoidance Through EEG Signals
by Karameldeen Omer, Francesco Ferracuti, Alessandro Freddi, Sabrina Iarlori, Francesco Vella and Andrea Monteriù
Brain Sci. 2025, 15(4), 359; https://doi.org/10.3390/brainsci15040359 - 30 Mar 2025
Cited by 3 | Viewed by 3102
Abstract
Background/Objectives: The study explores the integration of human feedback into the control loop of mobile robots for real-time obstacle detection and avoidance using EEG brain–computer interface (BCI) methods. The goal is to assess which paradigms can be applied to current navigation systems to enhance safety and interaction between humans and robots. Methods: The research explores passive and active brain–computer interface (BCI) technologies to enhance a wheelchair mobile robot’s navigation. In the passive approach, error-related potentials (ErrPs), neural signals triggered when users commit or perceive errors, enable automatic correction of the robot’s navigation mistakes without direct input or command from the user. In contrast, the active approach leverages steady-state visually evoked potentials (SSVEPs), where users focus on flickering stimuli to control the robot’s movements directly. This study evaluates both paradigms to determine the most effective method for integrating human feedback into assistive robotic navigation. It involves experimental setups where participants control a robot through a simulated environment, and their brain signals are recorded and analyzed to measure the system’s responsiveness and the user’s mental workload. Results: The results show that a passive BCI requires lower mental effort but suffers from lower engagement, with a classification accuracy of 72.9%, whereas an active BCI demands more cognitive effort but achieves 84.9% accuracy. Despite this, task achievement accuracy is higher in the passive method (e.g., 71% vs. 43% for subject S2), as a single correct ErrP classification enables autonomous obstacle avoidance, whereas SSVEP requires multiple accurate commands. Conclusions: This research highlights the trade-offs between accuracy, mental load, and engagement in BCI-based robot control. The findings support the development of more intuitive assistive robotics, particularly for disabled and elderly users.
(This article belongs to the Special Issue Multisensory Perception of the Body and Its Movement)