Search Results (1,467)

Search Parameters:
Keywords = gaze

19 pages, 910 KB  
Article
USGaze: Temporal Gaze Estimation via a Unified State-Space Modeling Framework
by Gefan Sun, Zhao Wang and Qinghua Xia
Electronics 2026, 15(7), 1430; https://doi.org/10.3390/electronics15071430 - 30 Mar 2026
Abstract
Existing appearance-based and video-based gaze estimation methods mainly rely on frame-wise prediction or local-window temporal fusion, which limits their ability to model long-range dependencies and to explicitly suppress output-level jitter. This leaves a gap in unified temporal gaze estimation frameworks that jointly address contextual feature aggregation and prediction-level stabilization. To address this limitation, we propose a unified state-space temporal gaze estimation framework to improve both angular accuracy and temporal consistency. Specifically, consecutive eye image sequences are mapped into a shared latent state space, where spatial appearance cues and inter-frame dynamics are jointly modeled. A feature-level temporal aggregation module is further designed to adaptively reweight historical observations for the current estimate, and a prediction-level temporal correction module is introduced to suppress short-term fluctuations while preserving rapid gaze shifts. On the TEyeD dataset after quality screening, the proposed method achieves a 3D gaze MAE of 0.533°, compared with 0.96° for Model-aware and 3.18°–3.47° for the ResNet baselines reported in the original TEyeD paper, while maintaining manageable deployment overhead. These results indicate that the proposed framework provides a favorable balance between estimation accuracy, temporal stability, and practical efficiency. Full article
(This article belongs to the Special Issue AI Models for Human-Centered Computer Vision and Signal Analysis)
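The 0.533° figure above is an angular MAE: for 3D gaze estimation this is conventionally the mean angle between predicted and ground-truth gaze direction vectors. A minimal sketch of that metric, assuming unit-agnostic 3D vectors (function names are illustrative, not from the paper):

```python
import math

def angular_error_deg(pred, true):
    """Angular difference in degrees between two 3D gaze vectors."""
    dot = sum(p * t for p, t in zip(pred, true))
    norm = math.sqrt(sum(p * p for p in pred)) * math.sqrt(sum(t * t for t in true))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos))

def gaze_mae_deg(preds, trues):
    """Mean angular error over a sequence of gaze vectors."""
    return sum(angular_error_deg(p, t) for p, t in zip(preds, trues)) / len(preds)
```

Because the metric normalizes both vectors, only direction matters; magnitude differences do not affect the score.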

18 pages, 2953 KB  
Article
Quantitative Analysis of Real-Time Virtual Reality Sickness During 360° Video Viewing
by Hyun Tak Kim, Su Young Kim and Yoon Sang Kim
Appl. Sci. 2026, 16(7), 3313; https://doi.org/10.3390/app16073313 - 29 Mar 2026
Abstract
Virtual reality (VR) sickness induced by wearing a head-mounted display and viewing 360° videos has primarily been studied using subjective questionnaires administered before and after content viewing. However, this approach is limited to identifying the onset of sickness during content viewing. This study quantitatively addresses the association between objective measures (gaze direction, head pose, electrocardiogram, and optical flow) and VR sickness, adopting an exploratory approach. Real-time sickness during 360° video viewing was measured using the fast motion sickness scale, and overall sickness susceptibility was evaluated using the simulator sickness questionnaire. The results indicated that a higher VR sickness severity was associated with reduced gaze entropy and an increase in the magnitude and entropy of optical flow, suggesting its potential as an objective measure for real-time VR sickness assessment. Furthermore, in the comparison between susceptibility groups, the high-susceptibility group had a nominally significantly lower heart rate variability than the low-susceptibility group, indicating that physiological signals may serve as auxiliary tools for sensing the baseline of VR sickness. The optical flow reflects the visual stimuli of VR content independent of personal susceptibility, suggesting its potential as a content-driven indicator of VR sickness. Full article
(This article belongs to the Special Issue Virtual Reality (VR) in Healthcare)
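Gaze entropy, one of the measures above, is commonly computed as the Shannon entropy of gaze positions binned over the display; lower values indicate gaze concentrated in fewer regions. A small sketch under that common definition (the bin size is an arbitrary illustration, not the study's parameter):

```python
import math
from collections import Counter

def gaze_entropy(points, cell=0.1):
    """Shannon entropy (bits) of gaze positions binned into square cells.

    Lower entropy means gaze concentrated in fewer regions, as reported
    for higher VR-sickness severity in the abstract above.
    """
    bins = Counter((int(x // cell), int(y // cell)) for x, y in points)
    n = len(points)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

A gaze trace split evenly between two cells yields exactly 1 bit; a trace locked to one cell yields 0.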

15 pages, 1131 KB  
Article
The Influence of Forest Landscape Spaces on Psychological and Visual Attention Responses: An Analysis Based on Different Seasons and Sexes
by Soyeon Kim
Int. J. Environ. Res. Public Health 2026, 23(4), 425; https://doi.org/10.3390/ijerph23040425 - 29 Mar 2026
Abstract
This study investigated seasonal and sex-based differences in psychological responses and area-of-interest (AOI)-based visual attention, as well as the associations between these variables, using images of the same forest-healing landscape captured in summer and autumn. A total of 40 adults (20 males and 20 females) participated in an eye-tracking experiment combined with psychological assessments, including the Perceived Restorativeness Scale (PRS-11) and semantic differential (SD) evaluations. Mixed-design ANOVA results indicated that perceived restorativeness remained stable across seasons, whereas emotional evaluations were significantly higher in autumn than in summer. Significant interaction effects between season and sex were observed in selected gaze metrics within the sky AOI, while the forest AOI showed a consistent main effect of sex across seasons. Spearman’s correlation analysis revealed a strong positive association between autumn PRS and SD scores, suggesting that aesthetic appreciation contributes to restorative perception. In addition, a significant negative correlation between forest and pond AOIs in autumn indicated a seasonal redistribution of visual attention. These findings highlight the importance of component-level landscape analysis and demonstrate that seasonal variation and user characteristics jointly influence perceptual and attentional responses in forest-healing environments. The results provide empirical implications for evidence-based forest landscape design and seasonal management strategies. Full article

22 pages, 8847 KB  
Article
DGAGaze: Gaze Estimation with Dual-Stream Differential Attention and Geometry-Aware Temporal Alignment
by Wei Zhang and Pengcheng Li
Appl. Sci. 2026, 16(7), 3298; https://doi.org/10.3390/app16073298 - 29 Mar 2026
Abstract
Gaze estimation plays a crucial role in human-computer interaction and behavior analysis. However, in dynamic scenes, rigid head movements and rapid gaze shifts pose significant challenges to accurate gaze prediction. Most existing methods either process single-frame images independently or rely on long video sequences, making it difficult to simultaneously achieve strong performance and high computational efficiency. To address this issue, we propose DGAGaze, a gaze estimation framework based on a difference-driven spatiotemporal attention mechanism. This framework uses a geometry-aware temporal alignment module to mitigate interference from rigid head movements, compensating for them through pose estimation and affine feature warping, thereby achieving explicit decoupling between global head motion and local eye motion. Based on the aligned features, inter-frame differences are used to adjust spatial and channel attention weights, enhancing motion-sensitive representations without introducing an additional temporal modeling layer. Extensive experiments on the EyeDiap and Gaze360 datasets demonstrate the effectiveness of the proposed approach. DGAGaze achieves improved gaze estimation accuracy while maintaining a lightweight architecture based on a ResNet-18 backbone, outperforming existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)

25 pages, 2766 KB  
Article
Towards Safer Automated Driving: Predicting Drivers with Long Takeover Time Using Random Forest and Human Factors
by Jungsook Kim and Ohyun Jo
Electronics 2026, 15(7), 1390; https://doi.org/10.3390/electronics15071390 - 26 Mar 2026
Abstract
In highly automated driving systems (ADSs), drivers’ ability to resume manual driving remains a road safety issue. However, to the best of our knowledge, there is no existing computational model to predict which drivers require more than the 4 seconds mandated by United Nations Regulation No. 157 to regain manual control. To address this challenge, we developed a Random Forest model that predicts takeover time using measurable human factors. Three controlled driving simulator experiments were conducted in which participants engaged in distinct tasks—texting, drinking, and traffic monitoring—before responding to a takeover request. During the experiments, we collected human factor features, including gaze behavior, age, and scores from the self-reported driving behavior questionnaire (K-DBQ). The Random Forest classifier achieved 77% accuracy. Recursive feature elimination selected 10 dominant predictors; notably, engaging in non-driving-related tasks, reduced on-road gaze, and older age were significantly associated with longer takeover times. Although K-DBQ scores were not directly correlated with takeover time, their inclusion improved model robustness, consistent with ensemble learning from weak yet complementary signals. The proposed model can be integrated into advanced driver assistance systems (ADASs) to proactively identify drivers likely to exceed the 4-second takeover window, support targeted interventions, and enhance human-centered transition safety in ADSs. Full article

22 pages, 2650 KB  
Article
Design and Implementation of an Eyewear-Integrated Infrared Eye-Tracking System
by Carlo Pezzoli, Marco Brando Mario Paracchini, Daniele Maria Crafa, Marco Carminati, Luca Merigo, Tommaso Ongarello and Marco Marcon
Sensors 2026, 26(7), 2065; https://doi.org/10.3390/s26072065 - 26 Mar 2026
Abstract
Eye-tracking is a key enabling technology for smart eyewear, supporting hands-free interaction, accessibility, and context-aware human–machine interfaces under strict constraints on size, power consumption, and computational complexity. While camera-based solutions provide high accuracy, their integration into lightweight and low-power wearable platforms remains challenging. This paper presents a feasibility study of the design, simulation, and experimental evaluation of a photosensor oculography (PSOG) eye-tracking system that is fully integrated into an eyewear frame, based on near-infrared (NIR) emitters and photodiodes. The proposed approach combines simulation-driven optimization of the optical constellation, a multi-frequency modulation and demodulation scheme enabling parallel source discrimination and robust ambient-light rejection, and a resource-efficient signal acquisition pipeline suitable for embedded implementation. Eye rotations in azimuth and elevation are inferred from differential reflectance patterns of ocular regions (sclera, iris, and pupil) using lightweight regression techniques, including shallow neural networks and Gaussian process regression, selected to balance estimation accuracy with computational and power constraints. System performance is evaluated using a controllable artificial-eye platform under defined geometric and illumination conditions, enabling repeatable assessment of gaze-estimation accuracy and algorithmic behavior. Sub-degree errors are achieved in this controlled setting, demonstrating the feasibility and potential effectiveness of the proposed architecture. Practical considerations for translation to real-world smart eyewear, including human-subject validation, anatomical variability, calibration strategies, and embedded deployment, are discussed and identified as directions for future work. By detailing the optical design methodology, modulation strategy, and algorithmic trade-offs, this work clarifies the distinct contributions of the proposed PSOG system relative to existing frame-integrated and camera-free eye-tracking approaches, and provides a foundation for further development toward wearable and augmented-reality applications. Full article
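The multi-frequency source-discrimination idea can be illustrated with a toy lock-in style demodulation: each NIR emitter is modulated at its own frequency, and correlating the photodiode signal with in-phase and quadrature references recovers that source's amplitude while rejecting constant ambient light. All frequencies and amplitudes below are illustrative, not the paper's design values:

```python
import math

def demodulate(signal, freq, fs):
    """Recover the amplitude of one modulated source from a mixed signal
    by correlating with in-phase and quadrature references (lock-in style)."""
    n = len(signal)
    i_sum = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    q_sum = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    # The factor 2/n converts the correlation back to the source amplitude.
    return 2.0 / n * math.hypot(i_sum, q_sum)
```

With an integer number of modulation periods in the window, the references are orthogonal, so each source's amplitude is recovered independently and a DC ambient-light offset contributes nothing.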

32 pages, 1329 KB  
Review
Deep Learning-Based Gaze Estimation: A Review
by Ahmed A. Abdelrahman, Basheer Al-Tawil and Ayoub Al-Hamadi
Robotics 2026, 15(4), 69; https://doi.org/10.3390/robotics15040069 - 25 Mar 2026
Abstract
Gaze estimation, a critical facet of understanding user intent and enhancing human–computer interaction, has seen substantial advancements with the integration of deep learning technologies. Despite the progress, the application of deep learning in gaze estimation presents unique challenges, notably in the adaptation and optimization of these models for precise gaze tracking. This paper conducts a thorough review of recent developments in deep learning-based gaze estimation, with a particular focus on the evolution from traditional methods to sophisticated appearance-based techniques. We examine the key components of successful gaze estimation systems, including input feature processing, neural network architectures, and the importance of data preprocessing in achieving high accuracy. Our analysis extends to a comprehensive comparison of existing methods, shedding light on their effectiveness and limitations within various implementation contexts. Through this systematic review, we aim to consolidate existing knowledge in the field, identify gaps in current research, and suggest directions for future investigation. By providing a clear overview of the state-of-the-art in gaze estimation and discussing ongoing challenges and potential solutions, our work seeks to inspire further innovation and progress in developing more accurate and efficient gaze estimation systems. Full article

25 pages, 1772 KB  
Article
The Impact of Emotion Perception and Gaze Sharing on Collaborative Experience and Performance in Multiplayer Games
by Lu Yin, He Zhang and Renke He
J. Eye Mov. Res. 2026, 19(2), 34; https://doi.org/10.3390/jemr19020034 - 25 Mar 2026
Abstract
Compared to traditional offline collaboration, current online collaboration often lacks nonverbal social cues, resulting in lower efficiency and a reduced emotional connection between teammates. To address this issue, this study used a two-player collaborative puzzle game as the experimental setting to explore the impact of two nonverbal social cues, emotion and gaze, on collaborative experience and performance. Specifically, this study designed four collaborative modes: with and without teammates’ facial expressions, and with and without teammates’ gaze points. Sixty-two participants took part in the experiment, and each pair was required to complete all four modes. Subsequently, we analyzed their collaborative experience through subjective questionnaires, objective facial expressions, and gaze overlap rates. The experimental results revealed that teammates’ gaze could effectively enhance collaborative efficiency, while facial expression is key to optimizing subjective experience. Combining both cues yields further advantages in cognitive and emotional dimensions, leading to improved performance outcomes. The study also indicated that facial expressions could alleviate the social pressure triggered by shared gaze from teammates. Additionally, the study examined how personality differences influenced collaborative experiences and performance. The results indicated that individuals with high agreeableness actively seek social cues, leading to more positive collaborative experiences. This study provides empirical evidence for understanding the interactive mechanisms of cognitive and emotional processes during online collaboration, and points the way toward designing adaptive, personalized intelligent collaborative systems. Full article
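The gaze overlap rate mentioned above is not defined in the abstract; a common distance-threshold formulation counts the fraction of time-aligned samples where the two players look at nearly the same point. A minimal sketch under that assumed definition (the radius is an arbitrary illustration):

```python
import math

def gaze_overlap_rate(gaze_a, gaze_b, radius=0.05):
    """Fraction of time-aligned samples where two players' gaze points
    fall within `radius` of each other (normalized screen coordinates)."""
    hits = sum(
        1 for (ax, ay), (bx, by) in zip(gaze_a, gaze_b)
        if math.hypot(ax - bx, ay - by) <= radius
    )
    return hits / min(len(gaze_a), len(gaze_b))
```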

22 pages, 2787 KB  
Article
Usability Validation of an Integrated Hemodynamic and Pulmonary Monitoring System Using Eye-Tracking Analysis
by Hyunju Jeong, Hyeonkyeong Choi, Hyungmin Kim and Wonseuk Jang
J. Clin. Med. 2026, 15(7), 2474; https://doi.org/10.3390/jcm15072474 - 24 Mar 2026
Abstract
Background/Objectives: Hemodynamic monitoring is essential for guiding appropriate treatment by assessing cardiac output and volume status, as well as for preventing complications associated with excessive fluid administration. The EdgeFlow CW10 Plus is a device that extends conventional hemodynamic monitoring by incorporating pulmonary abnormality surveillance through B-line detection. This study aimed to evaluate whether the hemodynamic monitoring and pulmonary monitoring functions are well integrated, and to verify the usability and efficiency of the system. Methods: A usability test was conducted with a panel of 15 medical professionals from diverse specialties and varying levels of clinical experience. Data from satisfaction surveys, heat maps, the System Usability Scale (SUS), and the NASA-TLX were analyzed to determine whether usability differences existed based on the duration of clinical experience. Results: The device demonstrated a high overall task success rate, averaging 93.2%. Regarding eye-tracking analysis based on clinical experience, it was observed that participants with more years of experience either failed to direct their gaze toward task-relevant user interface (UI) elements as effectively as those with fewer years of experience or showed similar patterns. Conclusions: The usability evaluation confirmed that the hemodynamic and pulmonary monitoring functions of the EdgeFlow CW10 Plus are well integrated, with the device demonstrating high usability and satisfaction. This integration is expected to support medical professionals in monitoring cardiac output and fluid status, facilitating timely therapeutic interventions while preventing complications related to fluid overload. Full article
(This article belongs to the Section Intensive Care)
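The SUS analyzed above follows a fixed scoring rule: odd-numbered (positively worded) items contribute their response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5 onto a 0-100 range. A direct sketch of that standard rule:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

Neutral answers (all 3s) land at 50, the conventional midpoint of the scale.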

21 pages, 1559 KB  
Article
Material Images and Cultivation: An Iconographical Interpretation of Xingqi 行气 Pattern Bronze Mirrors Along the Middle Reaches of the Yangtze River During the Song Dynasty (960–1279 CE)
by Huijun Li
Religions 2026, 17(3), 403; https://doi.org/10.3390/rel17030403 - 23 Mar 2026
Abstract
The Xingqi (行气, breath circulation) pattern bronze mirrors of the Song Dynasty (960–1279 CE) represent a distinctive category of Daoist material culture in southern China. Despite their unique iconography, systematic research on their functions and religious significance has been lacking. This study examines sixteen Xingqi pattern bronze mirrors through iconographic analysis and textual research, integrating evidence from surviving Daoist scriptures and ritual manuals. Two primary types are identified: the “Tortoise-Swallowing and Crane-Breathing Style” and the “Sun and Moon Observing Style”. The former depicts practitioners imitating the breathing techniques of tortoises and cranes, while the latter shows figures gazing upward to ingest the essences of the sun and moon. Both motifs continue earlier health preservation traditions from the Pre-Qin (221–207 BCE) through Han dynasties, adapted within the Northern and Southern Song context. These mirrors were specifically used by Daoists along the middle Yangtze River for inner alchemy cultivation, particularly in visualized Cunsi (存思, contemplation practices). They were predominantly passed down through generations rather than buried, explaining their scarcity in archaeological contexts. These artifacts illuminate how Song Daoism translated abstract philosophical concepts into tangible, operable practices through material imagery. They provide new physical evidence for understanding historical Daoist cultivation methods and the materialization of religious experience. Full article

21 pages, 802 KB  
Systematic Review
Eye Tracking for Rehabilitation and Training in Paediatric Neurodevelopmental Disorders: A Systematic Review
by Guido Catalano, Sara Abbondio, Roberta Nicotra, Valentina Berselli, Marta Guarischi, Valentina Vezzali and Sabrina Signorini
Brain Sci. 2026, 16(3), 337; https://doi.org/10.3390/brainsci16030337 - 21 Mar 2026
Abstract
Background: Eye-tracking (ET) devices are gaining attention in technology-based paediatric rehabilitation through their intrinsic ability to assess patients’ engagement and visual attention within motivating, technology-based environments. We conducted a systematic review of available evidence from 2004 to 2025 on the implementation of ET in rehabilitative trainings targeting paediatric populations with neurological and neurodevelopmental disorders. This paper aims to outline the rehabilitative outcomes pursued in the clinical populations considered. Methods: This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Three electronic databases (PubMed, Web of Science, and Scopus) were consulted to summarise the state of the art of the last 20 years. Selected articles were categorised according to the type of treated disorder and the rehabilitated function. Results: ET devices have been increasingly integrated into paediatric rehabilitation with promising results across multiple neurodevelopmental conditions (e.g., ASD, ADHD, cerebral palsy). These systems have proven effective not only in training gaze control, but also in enhancing executive functions, social cognition, communication, and participation. Furthermore, they promote personalised and data-driven solutions and support high levels of engagement, feasibility, and user satisfaction. Conclusions: ET represents a promising frontier for paediatric rehabilitation, addressing various neurodevelopmental disorders. The gaze-contingent protocols employed have demonstrated potential effects in promoting adaptive behaviour across multiple developmental areas. Further research is warranted to provide shared guidance and to strengthen practice recommendations. Full article

18 pages, 2996 KB  
Article
A Multimodal Agentic AI Framework for Intuitive Human–Robot Collaboration
by Xiaoyun Liang and Jiannan Cai
Sensors 2026, 26(6), 1958; https://doi.org/10.3390/s26061958 - 20 Mar 2026
Abstract
Widespread acceptance of collaborative robots in human-involved scenarios requires accessible and intuitive interfaces for lay workers and non-expert users. Existing interfaces often rely on users to plan and issue low-level commands, necessitating extensive knowledge of robot control. This study proposes a multimodal agentic AI framework integrating natural user interfaces (NUIs) to foster effortless human-like partnerships in human–robot collaboration (HRC), enhancing intuitiveness and operational efficiency. First, it allows users to instruct robots verbally in plain language, coupled with gaze to identify target objects precisely. Second, it offloads users’ workload for robot motion planning by understanding context and reasoning about task decomposition. Third, coordinating with AI agents built on large language models (LLMs), the system interprets users’ requests effectively and provides feedback to establish transparent communication. This proof-of-concept study included experiments demonstrating a practical implementation of the agentic AI framework on a mobile manipulation robot in a collaborative human–robot wood assembly task. Seven participants were recruited to interact with this AI-integrated agentic robotic system. Task performance and user experience were measured in terms of completion time, intervention rate, and the NASA TLX workload survey, and insights for practical application were summarized through a qualitative analysis. This study highlights the potential of NUIs and agentic AI-embodied robots to overcome existing HRC barriers and contributes to improving HRC intuitiveness and efficiency. Full article
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

27 pages, 2930 KB  
Article
Perspicuity, Acuity, and Illuminating Vision: Medieval and Early Modern Optics, Religion, and Literary Reflections of the Gaze in Hrotsvit of Gandersheim, Walter Map, Hartmann von Aue, the Melusine Romances (Jean d’Arras), and Froben Christoph von Zimmern
by Albrecht Classen
Humanities 2026, 15(3), 49; https://doi.org/10.3390/h15030049 - 20 Mar 2026
Abstract
Medieval literature often seems to be a remote, irrelevant, incomprehensible world of narrative texts lost in heroic, religious, or courtly themes, limited to stories about King Arthur, courtly lovers, military heroes, and religious martyrs, saints, and prophets. In reality, as any expert can easily confirm, when we turn our full attention to pre-modern literature from across Europe (and also other parts of the world), we can often recognize the true extent to which poets utilized their narratives for spiritual, philosophical, religious, scientific, and medical explorations that have much to tell us today and prove to be deeply meaningful in a timeless manner. One key aspect, which was shared among virtually all medieval artists, poets, and theologians, consisted of the unique experience by an individual who is entitled through a physical opening to see into the depth or the height of all existence and can thus discover a wholly different world. Through this motif of the gaze, an entire epiphanic realization can set in, which thus quickly transforms the purely entertaining narrative medium into a narrative catalyst of profound spiritual experiences, helping the individual to gain inspiration from the Godhead (e.g., mysticism). Indeed, numerous times, medieval poets employed the motif of the visionary gaze, developed in very concrete terms, to trace and explain the process of perspicuity and accompanying acuity which ultimately leads to new intellectual, emotional, and religious understandings and experiences. While many intellectuals already embraced this notion of a visionary concept of spiritual comprehension, it might come as a surprise that secular and religious poets also operated quite intentionally with the concept of a hole in the wall or some other opening as a springboard for intellectual and spiritual experiences, directly drawing from the concepts of the optical sciences as understood at that time. Oddly but highly significantly, Christian and pagan notions tend to intersect in those narrative moments, particularly in late medieval literature, merging the visionary experience with the monstrous within human society, associating the gaze with the erotic and religious dimension. Full article

22 pages, 6671 KB  
Article
Evaluating the Influence of Alert Modalities on Driver Attention Transitions Under Visual Distraction: A Sequence Analysis Approach
by Niloufar Shirani, Elena Orlova, Manmohan Joshi, Paul (Young Joun) Ha, Yu Song, Anshu Bamney, Kai Wang and Eric Jackson
Systems 2026, 14(3), 328; https://doi.org/10.3390/systems14030328 - 20 Mar 2026
Abstract
This study evaluates how different alert conditions influence driver attention transitions under conditions of visual distraction using sequence analysis. Employing a within-subject experimental design, 13 participants underwent trials in a driving simulator, experiencing three distinct alert conditions: face-tracking auditory alerts, steering wheel auditory torque alerts, and a control scenario without alerts. An eye-tracking system was used to capture drivers’ gaze durations and sequences across three key areas of interest: road, dashboard, and tablet-based infotainment system. Analysis involved computation of transition probabilities, Markov chain modeling for long-term attentional distributions, and entropy analyses to quantify the randomness of gaze transitions. Results showed that face-tracking alerts significantly increased the likelihood of gaze redirection to the road compared to the other conditions, enhancing both immediate and sustained attention. Steering wheel torque alerts demonstrated minimal effectiveness, sometimes performing worse than the no-alert condition due to their passive nature, allowing drivers to bypass attention redirection. Steady-state analyses confirmed that face alerts notably improved sustained driver focus on the road by approximately 3.6%, reinforcing their utility for prolonged attentional control. Entropy analyses further revealed that face alerts provided an optimal balance between structured attention shifts and behavioral flexibility, enhancing attentional predictability. Findings are consistent with previous literature, emphasizing the superior effectiveness of active, gaze-based interventions over passive mechanisms. This research underscores the importance of designing proactive alert systems in vehicle safety technology to effectively mitigate visual distraction-related risks. Full article
(This article belongs to the Special Issue Safe Systems for Road Safety: A Human Factors Perspective)
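The transition-probability, steady-state, and entropy computations described above can be sketched for a symbolic gaze sequence over the areas of interest. State names, the estimation-by-counting approach, and the power-iteration solver are illustrative, not the study's exact pipeline:

```python
import math
from collections import Counter

def transition_matrix(seq, states):
    """First-order transition probabilities estimated by counting
    consecutive pairs in a gaze-state sequence."""
    counts = Counter(zip(seq, seq[1:]))
    rows = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        rows[s] = {t: (counts[(s, t)] / total if total else 0.0) for t in states}
    return rows

def steady_state(P, states, iters=200):
    """Long-run attentional distribution via power iteration on the chain."""
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        pi = {t: sum(pi[s] * P[s][t] for s in states) for t in states}
    return pi

def transition_entropy(P, states):
    """Mean per-state Shannon entropy (bits) of outgoing transitions;
    higher values mean more random gaze switching."""
    h = 0.0
    for s in states:
        h += -sum(p * math.log2(p) for p in P[s].values() if p > 0)
    return h / len(states)
```

A strictly alternating road/tablet sequence gives deterministic transitions (zero entropy) and an even long-run split of attention.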

17 pages, 229 KB  
Article
Iris Murdoch’s Concept of Imagination and Its Role in Moral Life
by Maria Gallego-Ortiz
Philosophies 2026, 11(2), 43; https://doi.org/10.3390/philosophies11020043 - 19 Mar 2026
Abstract
Iris Murdoch situates imagination at the core of moral life, challenging moral philosophy’s preference for abstract universal principles over the particularity of lived experience. This paper reconstructs Murdoch’s concept of imagination by tracing her engagement with Plato’s distinction between eikasia and the Demiurge’s ‘high’ imagination, as well as Kant’s notions of empirical and esthetic imagination. I argue that Murdoch’s imagination is best understood as a hermeneutical capacity essential to moral vision. She distinguishes between egoistic fantasy, which distorts reality, and free and creative imagination, which enables a just and loving gaze upon the world. Through imagination, we can replace obscuring images with truer ones, making moral progress an exercise in vision and attention. Murdoch’s account thus offers an alternative to moral theories that overlook the inner life as a site of ethical transformation. Full article