Search Results (1,463)

Search Parameters:
Keywords = Gaze

25 pages, 2766 KB  
Article
Towards Safer Automated Driving: Predicting Drivers with Long Takeover Time Using Random Forest and Human Factors
by Jungsook Kim and Ohyun Jo
Electronics 2026, 15(7), 1390; https://doi.org/10.3390/electronics15071390 - 26 Mar 2026
Abstract
In highly automated driving systems (ADSs), drivers’ ability to resume manual driving remains a road safety issue. However, to the best of our knowledge, there is no existing computational model to predict which drivers require more than the 4 seconds mandated by United Nations Regulation No. 157 to regain manual control. To address this challenge, we developed a Random Forest model that predicts takeover time using measurable human factors. Three controlled driving simulator experiments were conducted in which participants engaged in distinct tasks—texting, drinking, and traffic monitoring—before responding to a takeover request. During the experiments, we collected human factor features, including gaze behavior, age, and scores from the self-reported Driving Behavior Questionnaire (K-DBQ). The Random Forest classifier achieved 77% accuracy. Recursive feature elimination selected 10 dominant predictors; notably, engaging in non-driving-related tasks, reduced on-road gaze, and older age were significantly associated with longer takeover times. Although K-DBQ scores were not directly correlated with takeover time, their inclusion improved model robustness, consistent with ensemble learning from weak yet complementary signals. The proposed model can be integrated into advanced driver assistance systems (ADASs) to proactively identify drivers likely to exceed the 4-second takeover window, support targeted interventions, and enhance human-centered transition safety in ADSs. Full article
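The pipeline this abstract describes, a Random Forest classifier combined with recursive feature elimination, can be sketched on synthetic data as follows. The feature names, simulated labels, and parameter values are illustrative stand-ins, not the study's driving-simulator data.

```python
# Sketch: Random Forest for long (> 4 s) takeover prediction, with recursive
# feature elimination to pick dominant predictors. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
on_road_gaze = rng.uniform(0, 1, n)      # share of gaze on the road (hypothetical)
age = rng.uniform(20, 70, n)             # driver age in years
other = rng.normal(0, 1, (n, 8))         # e.g. K-DBQ scores, task indicators
X = np.column_stack([on_road_gaze, age, other])

# Synthetic ground truth: less on-road gaze and older age -> long takeover.
risk = 0.6 * (1 - on_road_gaze) + 0.4 * (age - 20) / 50 + rng.normal(0, 0.1, n)
y = (risk > 0.5).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=5).fit(Xtr, ytr)
acc = accuracy_score(yte, selector.predict(Xte))
```

RFE repeatedly refits the forest and drops the least important feature, which mirrors how the abstract reports arriving at a reduced set of dominant predictors.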

22 pages, 2650 KB  
Article
Design and Implementation of an Eyewear-Integrated Infrared Eye-Tracking System
by Carlo Pezzoli, Marco Brando Mario Paracchini, Daniele Maria Crafa, Marco Carminati, Luca Merigo, Tommaso Ongarello and Marco Marcon
Sensors 2026, 26(7), 2065; https://doi.org/10.3390/s26072065 - 26 Mar 2026
Viewed by 46
Abstract
Eye-tracking is a key enabling technology for smart eyewear, supporting hands-free interaction, accessibility, and context-aware human–machine interfaces under strict constraints on size, power consumption, and computational complexity. While camera-based solutions provide high accuracy, their integration into lightweight and low-power wearable platforms remains challenging. This paper is a feasibility study for the design, simulation, and experimental evaluation of a photosensor oculography (PSOG) eye-tracking system that is fully integrated into an eyewear frame, based on near-infrared (NIR) emitters and photodiodes. The proposed approach combines simulation-driven optimization of the optical constellation, a multi-frequency modulation and demodulation scheme enabling parallel source discrimination and robust ambient-light rejection, and a resource-efficient signal acquisition pipeline suitable for embedded implementation. Eye rotations in azimuth and elevation are inferred from differential reflectance patterns of ocular regions (sclera, iris, and pupil) using lightweight regression techniques, including shallow neural networks and Gaussian process regression, selected to balance estimation accuracy with computational and power constraints. System performance is evaluated using a controllable artificial-eye platform under defined geometric and illumination conditions, enabling repeatable assessment of gaze-estimation accuracy and algorithmic behavior. Sub-degree errors are achieved in this controlled setting, demonstrating the feasibility and potential effectiveness of the proposed architecture. Practical considerations for translation to real-world smart eyewear, including human-subject validation, anatomical variability, calibration strategies, and embedded deployment, are discussed and identified as directions for future work. 
By detailing the optical design methodology, modulation strategy, and algorithmic trade-offs, this work clarifies the distinct contributions of the proposed PSOG system relative to existing frame-integrated and camera-free eye-tracking approaches, and provides a foundation for further development toward wearable and augmented-reality applications. Full article
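The multi-frequency source-discrimination scheme described in this abstract can be illustrated with a small lock-in demodulation sketch: each NIR emitter is modulated at its own carrier frequency, the photodiode sees the sum of all sources plus ambient light, and synchronous demodulation recovers each source's reflectance while rejecting the unmodulated ambient term. The sample rate, carrier frequencies, and reflectance values below are invented for illustration, not the paper's design values.

```python
# Lock-in demodulation sketch for photosensor oculography (PSOG).
import numpy as np

fs = 10_000.0                      # sample rate, Hz (illustrative)
t = np.arange(0, 0.5, 1 / fs)     # 0.5 s acquisition window
# Carriers chosen so each completes an integer number of cycles in the window.
carriers = {1: 300.0, 2: 460.0}   # Hz, per-emitter modulation frequencies
true_reflectance = {1: 0.8, 2: 0.3}

# Composite photodiode signal: two modulated sources plus DC ambient light.
sig = sum(a * np.sin(2 * np.pi * carriers[k] * t)
          for k, a in true_reflectance.items()) + 5.0

def demodulate(signal, f):
    """Mix with an in-phase reference and average (acts as a low-pass filter)."""
    ref = np.sin(2 * np.pi * f * t)
    return 2.0 * np.mean(signal * ref)   # factor 2 restores the amplitude

estimates = {k: demodulate(sig, f) for k, f in carriers.items()}
```

The constant 5.0 ambient term vanishes in the averaging step, which is the "robust ambient-light rejection" property the abstract claims for the modulation scheme.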

32 pages, 1329 KB  
Review
Deep Learning-Based Gaze Estimation: A Review
by Ahmed A. Abdelrahman, Basheer Al-Tawil and Ayoub Al-Hamadi
Robotics 2026, 15(4), 69; https://doi.org/10.3390/robotics15040069 - 25 Mar 2026
Viewed by 266
Abstract
Gaze estimation, a critical facet of understanding user intent and enhancing human–computer interaction, has seen substantial advancements with the integration of deep learning technologies. Despite the progress, the application of deep learning in gaze estimation presents unique challenges, notably in the adaptation and optimization of these models for precise gaze tracking. This paper conducts a thorough review of recent developments in deep learning-based gaze estimation, with a particular focus on the evolution from traditional methods to sophisticated appearance-based techniques. We examine the key components of successful gaze estimation systems, including input feature processing, neural network architectures, and the importance of data preprocessing in achieving high accuracy. Our analysis extends to a comprehensive comparison of existing methods, shedding light on their effectiveness and limitations within various implementation contexts. Through this systematic review, we aim to consolidate existing knowledge in the field, identify gaps in current research, and suggest directions for future investigation. By providing a clear overview of the state-of-the-art in gaze estimation and discussing ongoing challenges and potential solutions, our work seeks to inspire further innovation and progress in developing more accurate and efficient gaze estimation systems. Full article

25 pages, 1772 KB  
Article
The Impact of Emotion Perception and Gaze Sharing on Collaborative Experience and Performance in Multiplayer Games
by Lu Yin, He Zhang and Renke He
J. Eye Mov. Res. 2026, 19(2), 34; https://doi.org/10.3390/jemr19020034 - 25 Mar 2026
Viewed by 132
Abstract
Compared to traditional offline collaboration, current online collaboration often lacks nonverbal social cues, resulting in lower efficiency and a reduced emotional connection between teammates. To address this issue, this study used a two-player collaborative puzzle game as the experimental setting to explore the impact of two nonverbal social cues, emotion and gaze, on collaborative experience and performance. Specifically, this study designed four collaborative modes: with and without teammates’ facial expressions, and with and without teammates’ gaze points. Sixty-two participants took part in the experiment, and each pair was required to complete these four patterns. Subsequently, we analyzed their collaborative experience through subjective questionnaires, objective facial expressions, and gaze overlap rates. The experimental results revealed that teammates’ gaze could effectively enhance collaborative efficiency, while facial expression is key to optimizing subjective experience. Combining both cues further acquires advantages in cognitive and emotional dimensions, leading to improved performance outcomes. The study also indicated that facial expressions could alleviate the social pressure triggered by shared gaze from teammates. Additionally, the study also examined how personality differences influenced collaborative experiences and performance. The results indicated that individuals with high agreeableness actively seek social cues, leading to more positive collaborative experiences. This study provides empirical evidence for understanding the interactive mechanisms of cognitive and emotional processes during online collaboration, and points the way toward designing adaptive, personalized intelligent collaborative systems. Full article
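One plausible way to compute the gaze overlap rate mentioned in this abstract is the fraction of time-aligned samples in which the two players' gaze points fall within a shared on-screen radius. The radius and coordinates below are hypothetical; the abstract does not specify the exact operationalisation.

```python
# Sketch: gaze overlap rate between two players' time-aligned gaze samples.
import numpy as np

def gaze_overlap_rate(gaze_a, gaze_b, radius_px=100.0):
    """Fraction of samples where the two gaze points lie within radius_px."""
    gaze_a = np.asarray(gaze_a, dtype=float)
    gaze_b = np.asarray(gaze_b, dtype=float)
    dist = np.linalg.norm(gaze_a - gaze_b, axis=1)  # per-sample distance
    return float(np.mean(dist <= radius_px))

# Four synchronized samples; three fall within the shared radius.
a = [(100, 100), (200, 200), (800, 600), (400, 400)]
b = [(120, 110), (500, 500), (810, 590), (405, 395)]
rate = gaze_overlap_rate(a, b)
```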

22 pages, 2787 KB  
Article
Usability Validation of an Integrated Hemodynamic and Pulmonary Monitoring System Using Eye-Tracking Analysis
by Hyunju Jeong, Hyeonkyeong Choi, Hyungmin Kim and Wonseuk Jang
J. Clin. Med. 2026, 15(7), 2474; https://doi.org/10.3390/jcm15072474 - 24 Mar 2026
Viewed by 90
Abstract
Background/Objectives: Hemodynamic monitoring is essential for guiding appropriate treatment by assessing cardiac output and volume status, as well as for preventing complications associated with excessive fluid administration. The EdgeFlow CW10 Plus is a device that extends conventional hemodynamic monitoring by incorporating pulmonary abnormality surveillance through B-line detection. This study aimed to evaluate whether the hemodynamic monitoring and pulmonary monitoring functions are well integrated, and to verify the usability and efficiency of the system. Methods: A usability test was conducted with a panel of 15 medical professionals from diverse specialties and varying levels of clinical experience. Data from satisfaction surveys, heat maps, the System Usability Scale (SUS), and the NASA-TLX were analyzed to determine whether usability differences existed based on the duration of clinical experience. Results: The device demonstrated a high overall task success rate, averaging 93.2%. In the eye-tracking analysis stratified by clinical experience, participants with more years of experience directed their gaze toward task-relevant user interface (UI) elements either less effectively than, or similarly to, those with fewer years of experience. Conclusions: The usability evaluation confirmed that the hemodynamic and pulmonary monitoring functions of the EdgeFlow CW10 Plus are well integrated, with the device demonstrating high usability and satisfaction. This integration is expected to support medical professionals in monitoring cardiac output and fluid status, facilitating timely therapeutic interventions while preventing complications related to fluid overload. Full article
(This article belongs to the Section Intensive Care)

21 pages, 1559 KB  
Article
Material Images and Cultivation: An Iconographical Interpretation of Xingqi 行气 Pattern Bronze Mirrors Along the Middle Reaches of the Yangtze River During the Song Dynasty (960–1279 CE)
by Huijun Li
Religions 2026, 17(3), 403; https://doi.org/10.3390/rel17030403 - 23 Mar 2026
Viewed by 185
Abstract
The Xingqi (行气, breath circulation) pattern bronze mirrors of the Song Dynasty (960–1279 CE) represent a distinctive category of Daoist material culture in southern China. Despite their unique iconography, systematic research on their functions and religious significance has been lacking. This study examines sixteen Xingqi pattern bronze mirrors through iconographic analysis and textual research, integrating evidence from surviving Daoist scriptures and ritual manuals. Two primary types are identified: the “Tortoise-Swallowing and Crane-Breathing Style” and the “Sun and Moon Observing Style”. The former depicts practitioners imitating the breathing techniques of tortoises and cranes, while the latter shows figures gazing upward to ingest the essences of the sun and moon. Both motifs continue earlier health preservation traditions from the Pre-Qin (221–207 BCE) through Han dynasties, adapted within the Northern and Southern Song context. These mirrors were specifically used by Daoists along the middle Yangtze River for inner alchemy cultivation, particularly in visualized Cunsi (存思, contemplation practices). They were predominantly passed down through generations rather than buried, explaining their scarcity in archaeological contexts. These artifacts illuminate how Song Daoism translated abstract philosophical concepts into tangible, operable practices through material imagery. They provide new physical evidence for understanding historical Daoist cultivation methods and the materialization of religious experience. Full article

21 pages, 802 KB  
Systematic Review
Eye Tracking for Rehabilitation and Training in Paediatric Neurodevelopmental Disorders: A Systematic Review
by Guido Catalano, Sara Abbondio, Roberta Nicotra, Valentina Berselli, Marta Guarischi, Valentina Vezzali and Sabrina Signorini
Brain Sci. 2026, 16(3), 337; https://doi.org/10.3390/brainsci16030337 - 21 Mar 2026
Viewed by 223
Abstract
Background: Eye-tracking (ET) devices are gaining attention in technology-based paediatric rehabilitation through their intrinsic ability to assess patients’ engagement and visual attention within motivating, technology-based environments. We conducted a systematic review of available evidence from 2004 to 2025 on the implementation of ET in rehabilitative trainings targeting paediatric populations with neurological and neurodevelopmental disorders. This paper aims to outline the rehabilitative outcomes pursued in the clinical populations considered. Methods: This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Three electronic databases (PubMed, Web of Science, and Scopus) were consulted to summarise the state of the art of the last 20 years. Selected articles were categorised according to the type of treated disorder and the rehabilitated function. Results: ET devices have been increasingly integrated into paediatric rehabilitation with promising results across multiple neurodevelopmental conditions (e.g., ASD, ADHD, cerebral palsy). These systems have proven effective not only in training gaze control, but also in enhancing executive functions, social cognition, communication, and participation. Furthermore, they promote personalised and data-driven solutions and support high levels of engagement, feasibility, and user satisfaction. Conclusions: ET represents a promising frontier for paediatric rehabilitation, addressing various neurodevelopmental disorders. The gaze-contingent protocols employed have demonstrated potential effects in promoting adaptive behaviour across multiple developmental areas. Further research is warranted to provide shared guidance and to strengthen practice recommendations. Full article

18 pages, 2996 KB  
Article
A Multimodal Agentic AI Framework for Intuitive Human–Robot Collaboration
by Xiaoyun Liang and Jiannan Cai
Sensors 2026, 26(6), 1958; https://doi.org/10.3390/s26061958 - 20 Mar 2026
Viewed by 322
Abstract
Widespread acceptance of collaborative robots in human-involved scenarios requires accessible and intuitive interfaces for lay workers and non-expert users. Existing interfaces often rely on users to plan and issue low-level commands, necessitating extensive knowledge of robot control. This study proposes a multimodal agentic AI framework integrating natural user interfaces (NUIs) to foster effortless human-like partnerships in human–robot collaboration (HRC), enhancing intuitiveness and operational efficiency. First, it allows users to instruct robots verbally in plain language, coupled with gaze to indicate target objects precisely. Second, it offloads users’ workload for robot motion planning by understanding context and reasoning about task decomposition. Third, coordinating with AI agents built on large language models (LLMs), the system interprets users’ requests effectively and provides feedback to establish transparent communication. This proof-of-concept study included experiments to demonstrate a practical implementation of the agentic AI framework on a mobile manipulation robot in the collaborative task of human–robot wood assembly. Seven participants were recruited to interact with this AI-integrated agentic robotic system. Task performance and user experience were measured in terms of completion time, intervention rate, and the NASA-TLX workload survey, and valuable insights into practical applications were summarized through a qualitative analysis. This study highlights the potential of NUIs and agentic AI-embodied robots to overcome existing HRC barriers and contributes to improving HRC intuitiveness and efficiency. Full article
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

27 pages, 2930 KB  
Article
Perspicuity, Acuity, and Illuminating Vision: Medieval and Early Modern Optics, Religion, and Literary Reflections of the Gaze in Hrotsvit of Gandersheim, Walter Map, Hartmann von Aue, the Melusine Romances (Jean d’Arras), and Froben Christoph von Zimmern
by Albrecht Classen
Humanities 2026, 15(3), 49; https://doi.org/10.3390/h15030049 - 20 Mar 2026
Viewed by 270
Abstract
Medieval literature often seems to be a remote, irrelevant, incomprehensible world of narrative texts lost in heroic, religious, or courtly themes, limited to stories about King Arthur, courtly lovers, military heroes, and religious martyrs, saints, and prophets. In reality, as any expert can easily confirm, when we turn our full attention to pre-modern literature from across Europe (and also other parts of the world), we can often recognize the true extent to which poets utilized their narratives for spiritual, philosophical, religious, scientific, and medical explorations that have much to tell us today and prove to be deeply meaningful in a timeless manner. One key aspect, which was shared among virtually all medieval artists, poets, and theologians, consisted of the unique experience by an individual who is entitled through a physical opening to see into the depth or the height of all existence and can thus discover a wholly different world. Through this motif of the gaze, an entire epiphanic realization can set in, which thus quickly transforms the purely entertaining narrative medium into a narrative catalyst of profound spiritual experiences, helping the individual to gain inspiration from the Godhead (e.g., mysticism). Indeed, numerous times, medieval poets employed the motif of the visionary gaze, developed in very concrete terms, to trace and explain the process of perspicuity and accompanying acuity which ultimately leads to new intellectual, emotional, and religious understandings and experiences. While many intellectuals already embraced this notion of a visionary concept of spiritual comprehension, it might come as a surprise that secular and religious poets also operated quite intentionally with the concept of a hole in the wall or some other opening as a springboard for intellectual and spiritual experiences, directly drawing from the concepts of the optical sciences as understood at that time. 
Oddly but highly significantly, Christian and pagan notions tend to intersect in those narrative moments, particularly in late medieval literature, merging the visionary experience with the monstrous within human society, associating the gaze with the erotic and religious dimension. Full article

22 pages, 6671 KB  
Article
Evaluating the Influence of Alert Modalities on Driver Attention Transitions Under Visual Distraction: A Sequence Analysis Approach
by Niloufar Shirani, Elena Orlova, Manmohan Joshi, Paul (Young Joun) Ha, Yu Song, Anshu Bamney, Kai Wang and Eric Jackson
Systems 2026, 14(3), 328; https://doi.org/10.3390/systems14030328 - 20 Mar 2026
Viewed by 244
Abstract
This study evaluates how different alert conditions influence driver attention transitions under conditions of visual distraction using sequence analysis. Employing a within-subject experimental design, 13 participants underwent trials in a driving simulator, experiencing three distinct alert conditions: face-tracking auditory alerts, steering wheel auditory torque alerts, and a control scenario without alerts. An eye-tracking system was used to capture drivers’ gaze durations and sequences across three key areas of interest: road, dashboard, and tablet-based infotainment system. Analysis involved computation of transition probabilities, Markov chain modeling for long-term attentional distributions, and entropy analyses to quantify the randomness of gaze transitions. Results showed that face-tracking alerts significantly increased the likelihood of gaze redirection to the road compared to the other conditions, enhancing both immediate and sustained attention. Steering wheel torque alerts demonstrated minimal effectiveness, sometimes performing worse than the no-alert condition due to their passive nature, allowing drivers to bypass attention redirection. Steady-state analyses confirmed that face alerts notably improved sustained driver focus on the road by approximately 3.6%, reinforcing their utility for prolonged attentional control. Entropy analyses further revealed that face alerts provided an optimal balance between structured attention shifts and behavioral flexibility, enhancing attentional predictability. Findings are consistent with previous literature, emphasizing the superior effectiveness of active, gaze-based interventions over passive mechanisms. This research underscores the importance of designing proactive alert systems in vehicle safety technology to effectively mitigate visual distraction-related risks. Full article
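The analysis chain this abstract describes, transition probabilities, Markov chain steady-state distributions, and entropy of gaze transitions, can be sketched in a few lines. The gaze sequence below is fabricated for illustration, not data from the study.

```python
# Sketch: transition matrix, stationary distribution, and per-AOI entropy
# for a gaze sequence over three areas of interest (AOIs).
import numpy as np

AOIS = ["road", "dashboard", "tablet"]
seq = ["road", "road", "tablet", "road", "dashboard", "road",
       "road", "tablet", "tablet", "road", "road", "dashboard", "road"]

idx = {a: i for i, a in enumerate(AOIS)}
counts = np.zeros((3, 3))
for cur, nxt in zip(seq, seq[1:]):
    counts[idx[cur], idx[nxt]] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transitions

# Steady state: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Shannon entropy of each row: how predictable transitions out of each AOI are.
with np.errstate(divide="ignore", invalid="ignore"):
    row_entropy = -np.sum(np.where(P > 0, P * np.log2(P), 0.0), axis=1)
```

The stationary vector `pi` gives the long-run share of gaze in each AOI, which is how a steady-state claim such as "face alerts improved sustained on-road focus by about 3.6%" would be quantified under this kind of model.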
(This article belongs to the Special Issue Safe Systems for Road Safety: A Human Factors Perspective)

17 pages, 229 KB  
Article
Iris Murdoch’s Concept of Imagination and Its Role in Moral Life
by Maria Gallego-Ortiz
Philosophies 2026, 11(2), 43; https://doi.org/10.3390/philosophies11020043 - 19 Mar 2026
Viewed by 219
Abstract
Iris Murdoch situates imagination at the core of moral life, challenging moral philosophy’s preference for abstract universal principles over the particularity of lived experience. This paper reconstructs Murdoch’s concept of imagination by tracing her engagement with Plato’s distinction between eikasia and the Demiurge’s ‘high’ imagination, as well as Kant’s notions of empirical and esthetic imagination. I argue that Murdoch’s imagination is best understood as a hermeneutical capacity essential to moral vision. She distinguishes between egoistic fantasy, which distorts reality, and free and creative imagination, which enables a just and loving gaze upon the world. Through imagination, we can replace obscuring images with truer ones, making moral progress an exercise in vision and attention. Murdoch’s account thus offers an alternative to moral theories that overlook the inner life as a site of ethical transformation. Full article
16 pages, 3543 KB  
Article
AI-Assisted Strabismus Diagnosis Using Eye-Tracking and Machine Learning
by Malrey Lee
Diagnostics 2026, 16(6), 910; https://doi.org/10.3390/diagnostics16060910 - 19 Mar 2026
Viewed by 228
Abstract
Background: Strabismus diagnosis via the Alternate Cover Test (ACT) lacks quantitative standardization. This study proposes an AI-assisted framework using eye-tracking and machine learning for objective screening. Methods: Gaze coordinates were captured using a 60 Hz infrared eye tracker during ACT. Of the 291 individuals initially screened, 50 participants were ultimately included after quality filtering, yielding 335 valid samples. Seven algorithms were evaluated, with the dataset split into 294 training and 41 testing samples. Performance was measured by accuracy, sensitivity, specificity, PPV, and NPV. Results: Random Forest showed the best performance, achieving 97.56% accuracy (40/41) on the test set. It demonstrated a sensitivity of 1.00, specificity of 0.95, PPV of 0.95, and NPV of 1.00. The confusion matrix confirmed minimal false negatives, ensuring reliable clinical screening. Conclusions: The proposed system provides a robust, objective tool for strabismus diagnosis, standardizing ACT interpretation and reducing clinical bias. Full article
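The reported figures can be cross-checked from a confusion matrix. With 41 test samples, 97.56% accuracy (40/41), sensitivity 1.00, and specificity 0.95 are consistent with, for example, 21 true positives, 19 true negatives, 1 false positive, and 0 false negatives; that exact class split is an assumption, since the abstract does not state it.

```python
# Standard screening metrics recomputed from confusion-matrix counts.
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from TP/FP/TN/FN."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),    # recall on the positive (strabismus) class
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),    # positive predictive value
        "npv":         tn / (tn + fn),    # negative predictive value
    }

# Hypothetical split consistent with the reported 40/41 accuracy.
m = screening_metrics(tp=21, fp=1, tn=19, fn=0)
```

Under this split, PPV comes out to 21/22 (about 0.955), matching the reported 0.95 after rounding.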

15 pages, 1686 KB  
Article
A Data-Driven Approach for Comparing Gaze Allocation Across Conditions
by Jack Prosser, Anna Metzger and Matteo Toscani
J. Eye Mov. Res. 2026, 19(2), 33; https://doi.org/10.3390/jemr19020033 - 18 Mar 2026
Viewed by 201
Abstract
Gaze analysis often relies on hypothesised, subjectively defined regions of interest (ROIs) or heatmaps: ROIs enable condition comparisons but reduce objectivity and exploration; while heatmaps avoid this, they require many pixel-wise comparisons, making differences hard to detect. Here, we propose an advanced data-driven approach for analysing gaze behaviour. We use DNNs (adapted versions of AlexNet) to classify conditions from gaze patterns, paired with reverse correlation to show where and how gaze differs between conditions. We test our approach on data from an experiment investigating the effects of object-specific sounds (e.g., church bell ringing) on gaze allocation. ROI-based analysis shows a significant difference between conditions (congruent sound, no sound, phase-scrambled sound and pink noise), with more gaze allocation on sound-associated objects in the congruent sound condition. However, as expected, significance depends on the definition of the ROIs. Heatmaps show some unclear qualitative differences, but none are significant after correcting for pixelwise comparisons. We showed that, for some scenes, the DNNs could classify the task based on individual fixations with accuracy significantly higher than chance. Our approach shows that sound can alter gaze allocation, revealing task-specific, non-trivial strategies: fixations are not always drawn to the sound source but shift away from salient features, sometimes falling between salient features and the sound source. Crucially, such fixation strategies could not be revealed using a traditional hypothesis-driven approach. Overall, the method is objective, data-driven, and enables clear comparisons of conditions. Full article

18 pages, 1581 KB  
Article
Effects of Task-Oriented Circuit Training on Dizziness, Vertigo, Balance, Gait, and Quality of Life in Patients with Peripheral Vestibular Hypofunction: A Single-Blind, Randomized Controlled Trial
by Yasemin Apaydin, Çağla Özkul, Arzu Guclu-Gunduz, Umut Apaydin, Emre Orhan, Burak Kabiş, Ebru Şansal, Hakan Tutar and Bulent Gunduz
Healthcare 2026, 14(6), 762; https://doi.org/10.3390/healthcare14060762 - 18 Mar 2026
Viewed by 205
Abstract
Background/Objectives: Peripheral vestibular hypofunction (PVH) commonly causes dizziness, imbalance, gait disturbances, and reduced quality of life. Task-oriented circuit training (TOCT) is a rehabilitation approach in which patients perform structured, task-specific functional movements repetitively to improve real-life motor performance. TOCT integrates functional, multisensory, and repetitive exercises based on motor learning and neuroplasticity principles, potentially enhancing rehabilitation outcomes. This study aimed to investigate the effects of TOCT on dizziness, vertigo, balance, gait, disability, and quality of life in patients with PVH. Methods: In this single-blind, randomized controlled trial, 28 patients with PVH were randomly allocated to either a task-oriented circuit training (TOCT) group (n = 16) or a control group (n = 12). The control group performed a conventional home-based vestibular exercise program consisting of gaze stabilization and walking exercises. The TOCT group completed 25 task-specific stations, targeting gaze stabilization, balance, and gait, three times per week for four weeks. Outcomes were assessed at baseline and post-intervention using the Visual Analog Scale for dizziness and vertigo, the Sensory Organization Test for balance, spatiotemporal gait analysis, and the Dizziness Handicap Inventory (DHI) for disability and quality of life. Data were analyzed using two-way repeated-measures ANOVA, with the group × time interaction used to determine whether changes over time differed between the TOCT and control groups. Results: Significant time × group interactions favored TOCT for dizziness severity, vertigo severity, vestibular-related balance parameters, cadence during eyes-closed walking, and DHI total scores (p < 0.05). 
Within-group analyses demonstrated moderate-to-large improvements in all measured outcomes for the TOCT group, whereas the control group showed limited improvements in dizziness measures and minimal changes in balance, gait, and DHI scores. Conclusions: Task-oriented circuit training significantly improves dizziness, vertigo, balance, gait, disability, and overall quality of life in patients with PVH compared with conventional home-based vestibular exercises. Incorporating functional, multisensory, and task-specific activities within structured circuits may optimize vestibular rehabilitation outcomes.
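For a 2 (group) × 2 (time) design like this one, the group × time interaction can be illustrated with change scores: testing the interaction is equivalent to comparing pre-to-post change between groups with an independent-samples t-test. The sketch below uses simulated DHI-style scores for illustration only; the effect sizes, means, and SDs are assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores (higher = worse): the TOCT group improves markedly
# from pre to post, the control group barely changes. All values assumed.
toct_pre  = rng.normal(60, 6, 16)
toct_post = rng.normal(35, 6, 16)
ctrl_pre  = rng.normal(60, 6, 12)
ctrl_post = rng.normal(57, 6, 12)

# Pre-to-post change per participant; comparing these between groups
# is the 2x2 group x time interaction.
toct_change = toct_post - toct_pre
ctrl_change = ctrl_post - ctrl_pre

t, p = stats.ttest_ind(toct_change, ctrl_change)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 => interaction favours TOCT
```

A full two-way repeated-measures ANOVA (as the study used) additionally yields main effects of group and time, but for two groups and two time points the interaction term carries the same information as this change-score comparison.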
(This article belongs to the Section Healthcare Quality, Patient Safety, and Self-care Management)
18 pages, 1959 KB  
Article
Predictive and Reactive Control During Interception
by Mario Treviño, Nathaly Martín, Andrea Barrera and Inmaculada Márquez
Brain Sci. 2026, 16(3), 322; https://doi.org/10.3390/brainsci16030322 - 18 Mar 2026
Abstract
Background/Objectives: Successful interception of moving targets requires combining predictive control, which anticipates future target states, and reactive control, which compensates for ongoing sensory discrepancies. How these components evolve over time and are distributed across gaze and manual behavior remains unclear. We aimed to explore the time-resolved dynamics of predictive control during continuous interception and to dissociate eye and hand contributions. Methods: Human participants intercepted a moving target in a two-dimensional arena using a joystick while eye movements were recorded. Target speed was systematically varied, and visual information was selectively reduced by occluding either the target or the user-controlled cursor. Predictive control was assessed using two complementary metrics: a geometric strategy index capturing moment-to-moment spatial lead or lag relative to target motion, applied separately to gaze and manual trajectories, and root mean square error (RMSE) computed relative to current and forward-shifted target positions to quantify predictive alignment. Results: Successful interception was characterized by structured, speed-dependent transitions between predictive and reactive control rather than a fixed strategy. Predictive alignment emerged early and was dynamically reweighted as temporal constraints increased. Gaze and manual behavior showed complementary but partially dissociable predictive signatures. Occluding the target decreased predictive alignment, whereas occluding the user-controlled cursor had comparatively minor effects, indicating strong reliance on internal state estimation rather than continuous visual feedback of the effector. Conclusions: Predictive and reactive control are continuously and dynamically reweighted during interception. Their interaction unfolds within single trials and depends on target dynamics and sensory availability. 
These findings provide quantitative evidence for time-resolved coordination between anticipatory and feedback-driven control mechanisms in goal-directed behavior.
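The RMSE metric described in the abstract, error computed relative to current versus forward-shifted target positions, can be sketched as follows. A lower RMSE at a positive shift indicates that the effector is leading (predicting) the target rather than tracking it reactively. The trajectories and the 5-sample lead below are synthetic assumptions, not the study's recordings.

```python
import numpy as np

def rmse_at_shift(cursor, target, shift):
    """RMSE between cursor and target positions shifted `shift` samples ahead."""
    if shift > 0:
        c, t = cursor[:-shift], target[shift:]
    else:
        c, t = cursor, target
    return float(np.sqrt(np.mean(np.sum((c - t) ** 2, axis=1))))

# Synthetic 2-D target path: constant drift in x, sinusoid in y.
n = 200
ts = np.arange(n)
target = np.stack([ts * 0.5, np.sin(ts * 0.1)], axis=1)

# Simulated predictive behaviour: the cursor leads the target by 5 samples.
lead = 5
cursor = np.stack([(ts + lead) * 0.5, np.sin((ts + lead) * 0.1)], axis=1)

errors = {s: rmse_at_shift(cursor, target, s) for s in (0, 5, 10)}
best = min(errors, key=errors.get)
print(best)  # the shift with minimal RMSE recovers the 5-sample lead
```

Scanning RMSE over a range of forward shifts and taking the minimum, as sketched here, is one simple way to estimate how far ahead of the target the gaze or hand is operating at a given moment.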
(This article belongs to the Special Issue Predictive Processing in Brain and Behavior)