Search Results (1,757)

Search Parameters:
Keywords = eye-tracking

31 pages, 3210 KiB  
Systematic Review
The Mind-Wandering Phenomenon While Driving: A Systematic Review
by Gheorghe-Daniel Voinea, Florin Gîrbacia, Răzvan Gabriel Boboc and Cristian-Cezar Postelnicu
Information 2025, 16(8), 681; https://doi.org/10.3390/info16080681 - 8 Aug 2025
Viewed by 160
Abstract
Mind wandering (MW) is a significant safety risk in driving, yet research on its scope, underlying mechanisms, and mitigation strategies remains fragmented across disciplines. In this review, guided by the PRISMA framework, we analyze findings from 64 empirical studies to address these factors. The present review quantifies the prevalence of MW in naturalistic and simulated driving environments and shows its impact on driving behaviors. We document its negative effects on braking reaction times and lane-keeping consistency, and we assess recent advancements in objective detection methods, including EEG signatures, eye-tracking metrics, and physiological markers. We also identify key cognitive and contextual risk factors, including high perceived risk, route familiarity, and driver fatigue, which increase MW episodes. We further survey emergent countermeasures, such as haptic steering wheel alerts and adaptive cruise control perturbations, designed to sustain driver engagement. Despite these advancements, MW research faces persistent challenges, including methodological heterogeneity that limits cross-study comparisons, a lack of real-world validation of detection algorithms, and a scarcity of long-term field trials of interventions. Our integrated synthesis, therefore, outlines a research agenda prioritizing harmonized measurement protocols, on-road algorithm deployment, and rigorous evaluation of countermeasures under naturalistic driving conditions. Full article
(This article belongs to the Section Information and Communications Technology)

9 pages, 2776 KiB  
Proceeding Paper
Analysis of Elementary Student Engagement Patterns in Science Class Using Eye Tracking and Object Detection: Attention and Mind Wandering
by Ilho Yang and Daol Park
Eng. Proc. 2025, 103(1), 10; https://doi.org/10.3390/engproc2025103010 - 8 Aug 2025
Viewed by 145
Abstract
This study aims to explore the individual engagement of two elementary students in science class to derive educational implications. Using mobile eye trackers and an object detection model, gaze data were collected to identify educational objects and analyze attention, mind wandering, and off-task periods. The data were analyzed in the context of class and student behaviors. Interviews with the students enabled an understanding of their engagement patterns. The first student demonstrated an average attention ratio of 21.42% and a mind wandering ratio of 21.54%, characterized by inconsistent mind wandering and frequent off-task behaviors, resulting in low attention. In contrast, the second student showed an average attention ratio of 32.35% and a mind wandering ratio of 11.53%, maintaining consistent engagement throughout the class. While the two students exhibited differences in attention, mind wandering, and off-task behaviors, common factors influencing engagement were identified. Both students showed higher attention during active learning activities, such as experiments and inquiry tasks, while group interactions and visual/auditory stimuli supported sustained attention or transitions from mind wandering to attention. However, repetitive or passive tasks were associated with increased mind wandering. Such results highlight differences in individual engagement patterns and emphasize the value of integrating eye tracking and object detection with qualitative data, which provides a reference for tailoring educational strategies and improving learning environments. Full article

17 pages, 886 KiB  
Article
Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
by Paweł Cybulski
J. Eye Mov. Res. 2025, 18(4), 35; https://doi.org/10.3390/jemr18040035 - 7 Aug 2025
Viewed by 91
Abstract
Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, were extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces. Full article
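The classification setup described in the abstract above can be sketched with scikit-learn. This is a minimal illustration, not the study's pipeline: the three synthetic features (mean fixation duration, mean saccade amplitude, vertical gaze dispersion), their distributions, and the labelling rule are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400

# Synthetic eye-movement features (illustrative units):
# mean fixation duration (ms), mean saccade amplitude (deg),
# and vertical gaze dispersion (fixation SD along the y-axis).
X = np.column_stack([
    rng.normal(250, 60, n),
    rng.normal(4.0, 1.5, n),
    rng.normal(1.0, 0.4, n),
])
# Hypothetical label: 1 = peripheral target location. We make vertical
# dispersion mildly predictive, mirroring the reported feature importance.
y = (X[:, 2] + rng.normal(0, 0.2, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(scaler.transform(X_train), y_train)
accuracy = clf.score(scaler.transform(X_test), y_test)
```

In practice the features would be extracted per trial from recorded gaze data, and model comparison would use cross-validated accuracy, F1, and ROC-AUC as in the paper.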

29 pages, 16016 KiB  
Article
An Eye Movement Monitoring Tool: Towards a Non-Invasive Device for Amblyopia Treatment
by Juan Camilo Castro-Rizo, Juan Pablo Moreno-Garzón, Carlos Arturo Narváez Delgado, Nicolas Valencia-Jimenéz, Javier Ferney Castillo García and Alvaro Alexander Ocampo-Gonzalez
Sensors 2025, 25(15), 4823; https://doi.org/10.3390/s25154823 - 6 Aug 2025
Viewed by 313
Abstract
Amblyopia, commonly affecting children aged 0–6 years, results from disrupted visual processing during early development and often leads to reduced visual acuity in one eye. This study presents the development and preliminary usability assessment of a non-invasive ocular monitoring device designed to support oculomotor engagement and therapy adherence in amblyopia management. The system incorporates an interactive maze-navigation task controlled via gaze direction, implemented during monocular and binocular sessions. The device tracks lateral and anteroposterior eye movements and generates visual reports, including displacement metrics and elliptical movement graphs. Usability testing was conducted with a non-probabilistic adult sample (n = 15), including individuals with and without amblyopia. The System Usability Scale (SUS) yielded an average score of 75, indicating good usability. Preliminary tests with two adults diagnosed with amblyopia suggested increased eye displacement during monocular sessions, potentially reflecting enhanced engagement rather than direct therapeutic improvement. This feasibility study demonstrates the device’s potential as a supportive, gaze-controlled platform for visual engagement monitoring in amblyopia rehabilitation. Future clinical studies involving pediatric populations and integration of visual stimuli modulation are recommended to evaluate therapeutic efficacy and adaptability for early intervention. Full article
(This article belongs to the Section Biomedical Sensors)

19 pages, 4722 KiB  
Article
Effect of Dynamic Point Symbol Visual Coding on User Search Performance in Map-Based Visualizations
by Weijia Ge, Jing Zhang, Xingjian Shi, Wenzhe Tang and Longlong Qian
ISPRS Int. J. Geo-Inf. 2025, 14(8), 305; https://doi.org/10.3390/ijgi14080305 - 5 Aug 2025
Viewed by 162
Abstract
As geographic information visualization continues to gain prominence, dynamic symbols are increasingly employed in map-based applications. However, the optimal visual coding for dynamic point symbols—particularly concerning encoding type, animation rate, and modulation area—remains underexplored. This study examines how these factors influence user performance in visual search tasks through two eye-tracking experiments. Experiment 1 investigated the effects of two visual coding factors: encoding types (flashing, pulsation, and lightness modulation) and animation rates (low, medium, and high). Experiment 2 focused on the interaction between encoding types and modulation areas (fill, contour, and entire symbol) under a fixed animation rate condition. The results revealed that search performance deteriorates as the animation rate of the fastest target symbol exceeds 10 fps. Flashing and lightness modulation outperformed pulsation, and modulation areas significantly impacted efficiency and accuracy, with notable interaction effects. Based on the experimental results, three visual coding strategies are recommended for optimal performance in map-based interfaces: contour pulsation, contour flashing, and entire symbol lightness modulation. These findings provide valuable insights for optimizing the design of dynamic point symbols, contributing to improved user engagement and task performance in cartographic and geovisual applications. Full article
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

18 pages, 1268 KiB  
Article
Visual Word Segmentation Cues in Tibetan Reading: Comparing Dictionary-Based and Psychological Word Segmentation
by Dingyi Niu, Zijian Xie, Jiaqi Liu, Chen Wang and Ze Zhang
J. Eye Mov. Res. 2025, 18(4), 33; https://doi.org/10.3390/jemr18040033 - 4 Aug 2025
Viewed by 196
Abstract
This study utilized eye-tracking technology to explore the role of visual word segmentation cues in Tibetan reading, with a particular focus on the effects of dictionary-based and psychological word segmentation on reading and lexical recognition. The experiment employed a 2 × 3 design, comparing six conditions: normal sentences, dictionary word segmentation (spaces), psychological word segmentation (spaces), normal sentences (green), dictionary word segmentation (color alternation), and psychological word segmentation (color alternation). The results revealed that word segmentation with spaces (whether dictionary-based or psychological) significantly improved reading efficiency and lexical recognition, whereas color alternation showed no substantial facilitative effect. Psychological and dictionary word segmentation performed similarly across most metrics, though psychological segmentation slightly outperformed in specific indicators (e.g., sentence reading time and number of fixations), and dictionary word segmentation slightly outperformed in other indicators (e.g., average saccade amplitude and number of regressions). The study further suggests that Tibetan reading may involve cognitive processes at different levels, and the basic units of different levels of cognitive processes may not be consistent. These findings hold significant implications for understanding the cognitive processes involved in Tibetan reading and for optimizing the presentation of Tibetan text. Full article

16 pages, 1047 KiB  
Article
Measuring Adult Heritage Language Lexical Proficiency for Studies on Facilitative Processing of Gender
by Zuzanna Fuchs, Emma Kealey, Esra Eldem-Tunç, Leo Mermelstein, Linh Pham, Anna Runova, Yue Chen, Metehan Oğuz, Seoyoon Hong, Catherine Pan and JK Subramony
Languages 2025, 10(8), 189; https://doi.org/10.3390/languages10080189 - 4 Aug 2025
Viewed by 396
Abstract
The present study analyzes individual differences in the facilitative processing of grammatical gender by heritage speakers of Spanish, asking whether these differences correlate with lexical proficiency. Results from an eye-tracking study in the Visual World Paradigm replicate prior findings that, as a group, heritage speakers of Spanish show facilitative processing of gender. Importantly, in a follow-up within-group analysis, we test whether three measures of lexical proficiency—oral picture-naming, verbal fluency, and LexTALE—predict individual performance. We find that lexical proficiency, as measured by LexTALE, predicts overall word recognition; however, we observe no effects of the other measures and no evidence that lexical proficiency modulates the strength of the facilitative effect. Our results highlight the importance of carefully selecting tools for proficiency assessment in experimental studies involving heritage speakers, underscoring that the absence of evidence for an effect of proficiency based on a single measure should not be taken as evidence of absence. Full article
(This article belongs to the Special Issue Language Processing in Spanish Heritage Speakers)

20 pages, 14619 KiB  
Article
A Cognition–Affect–Behavior Framework for Assessing Street Space Quality in Historic Cultural Districts and Its Impact on Tourist Experience
by Dongsheng Huang, Weitao Gong, Xinyang Wang, Siyuan Liu, Jiaxin Zhang and Yunqin Li
Buildings 2025, 15(15), 2739; https://doi.org/10.3390/buildings15152739 - 3 Aug 2025
Viewed by 496
Abstract
Existing research predominantly focuses on the preservation or renewal models of the physical forms of historic cultural districts, with limited exploration of their roles in stimulating tourists’ cognitive, affective resonance, and behavioral interactions. This study addresses historic cultural districts by evaluating the space quality and its impact on tourist experiences through the “cognition-affect-behavior” framework, integrating GIS, street view semantic segmentation, VR eye-tracking, and web crawling technologies. The findings reveal significant multidimensional differences in how space quality influences tourist experiences: the impact intensities of functional diversity, sky visibility, road network accessibility, green visibility, interface openness, and public facility convenience decrease sequentially, with path coefficients of 0.261, 0.206, 0.205, 0.204, 0.201, and 0.155, respectively. Additionally, space quality exerts an indirect effect on tourist experiences through the mediating roles of cognitive, affective, and behavioral dimensions, with a path coefficient of 0.143. This research provides theoretical support and practical insights for empowering cultural heritage space governance with digital technologies in the context of cultural and tourism integration. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

15 pages, 2879 KiB  
Article
Study on the Eye Movement Transfer Characteristics of Drivers Under Different Road Conditions
by Zhenxiang Hao, Jianping Hu, Xiaohui Sun, Jin Ran, Yuhang Zheng, Binhe Yang and Junyao Tang
Appl. Sci. 2025, 15(15), 8559; https://doi.org/10.3390/app15158559 - 1 Aug 2025
Viewed by 192
Abstract
Given the severe global traffic safety challenges—including threats to human lives and socioeconomic impacts—this study analyzes visual behavior to promote sustainable transportation, improve road safety, and reduce resource waste and pollution caused by accidents. Four typical road sections, namely, turning, straight ahead, uphill, and downhill, were selected, and the eye movement data of 23 drivers in different driving stages were collected by an aSee Glasses eye-tracking device to analyze the visual gaze characteristics of the drivers and their transfer patterns in each road section. Using Markov chain theory, the probability of staying at each gaze point and the transfer probability distribution between gaze points were investigated. The results showed that drivers’ visual behaviors differed significantly across road sections: drivers in the turning section had the largest percentage of fixation on the near front, with a fixation duration and frequency of 29.99% and 28.80%, respectively; in the straight-ahead section, on the other hand, drivers mainly focused on the right side of the road, with 31.57% of fixation duration and 19.45% of fixation frequency; on the uphill section, drivers’ fixation duration on the left and right roads was more balanced, with 24.36% of fixation duration on the left side of the road and 25.51% on the right side of the road; drivers on the downhill section looked more frequently at the distance ahead, with a total fixation frequency of 23.20%, while paying higher attention to the right side of the road environment, with a fixation duration of 27.09%.
In terms of visual fixation, the fixation shift in the turning road section was mainly concentrated between the near and distant parts of the road ahead and frequently turned to the left and right sides; the straight road section mainly showed a shift between the distant parts of the road ahead and the dashboard; the uphill road section was concentrated on the shift between the near parts of the road ahead and the two sides of the road, while the downhill road section mainly occurred between the distant parts of the road ahead and the rearview mirror. Although drivers’ fixations on the front of the road were most concentrated under the four road sections, with an overall fixation stability probability exceeding 67%, there were significant differences in fixation smoothness between different road sections. Through this study, this paper not only reveals the laws of drivers’ visual behavior under different driving environments but also provides theoretical support for behavior-based traffic safety improvement strategies. Full article
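The Markov-chain treatment of gaze transfers described in the abstract above can be sketched as follows, assuming the gaze stream has already been coded into areas of interest (AOIs). The AOI labels and the example sequence are hypothetical, chosen only to show the estimation step.

```python
import numpy as np

# Hypothetical coded gaze sequence: each symbol is an AOI, e.g.
# N = near road ahead, F = far road ahead, L/R = left/right roadside.
gaze_sequence = ["N", "F", "N", "L", "N", "F", "F", "R", "N", "F", "N"]

def transition_matrix(sequence):
    """Estimate a first-order Markov transition matrix from an AOI sequence."""
    states = sorted(set(sequence))
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    # Count consecutive AOI pairs (current fixation -> next fixation).
    for current, nxt in zip(sequence, sequence[1:]):
        counts[index[current], index[nxt]] += 1
    # Normalise each row to probabilities; rows with no outgoing
    # transitions stay all-zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                      where=row_sums > 0)
    return states, probs

states, probs = transition_matrix(gaze_sequence)
```

The diagonal of `probs` corresponds to the "staying" probability at each gaze region, and the off-diagonal entries give the transfer distribution between regions, which is what the study compares across road sections.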

16 pages, 3281 KiB  
Article
A Preprocessing Pipeline for Pupillometry Signal from Multimodal iMotion Data
by Jingxiang Ong, Wenjing He, Princess Maglanque, Xianta Jiang, Lawrence M. Gillman, Ashley Vergis and Krista Hardy
Sensors 2025, 25(15), 4737; https://doi.org/10.3390/s25154737 - 31 Jul 2025
Viewed by 197
Abstract
Pupillometry is commonly used to evaluate cognitive effort, attention, and facial expression response, offering valuable insights into human performance. The combination of eye tracking and facial expression data under the iMotions platform provides great opportunities for multimodal research. However, there is a lack of standardized pipelines for managing pupillometry data on a multimodal platform. Preprocessing pupil data in multimodal platforms poses challenges like timestamp misalignment, missing data, and inconsistencies across multiple data sources. To address these challenges, the authors introduced a systematic preprocessing pipeline for pupil diameter measurements collected using iMotions 10 (version 10.1.38911.4) during an endoscopy simulation task. The pipeline involves artifact removal, outlier detection using advanced methods such as the Median Absolute Deviation (MAD) and Moving Average (MA) algorithm filtering, interpolation of missing data using the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), and mean pupil diameter calculation through linear regression, as well as normalization of mean pupil diameter and integration of the pupil diameter dataset with facial expression data. By following these steps, the pipeline enhances data quality, reduces noise, and facilitates the seamless integration of pupillometry with other multimodal datasets. In conclusion, this pipeline provides a detailed and organized preprocessing method that improves data reliability while preserving important information for further analysis. Full article
(This article belongs to the Section Intelligent Sensors)
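Two of the steps named in the abstract above, MAD-based outlier removal and PCHIP interpolation of the resulting gaps, can be sketched with NumPy/SciPy. The threshold, sampling rate, and signal below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

t = np.arange(0.0, 10.0, 0.1)        # timestamps in seconds (10 Hz, assumed)
pupil = 3.0 + 0.2 * np.sin(t)        # plausible pupil diameters in mm
pupil[25] = 8.0                      # inject blink-like artifacts
pupil[60] = 0.1

def mad_filter(x, threshold=3.5):
    """Mark samples whose robust z-score exceeds the threshold as missing."""
    median = np.median(x)
    mad = np.median(np.abs(x - median))
    robust_z = 0.6745 * (x - median) / mad
    cleaned = x.astype(float).copy()
    cleaned[np.abs(robust_z) > threshold] = np.nan
    return cleaned

cleaned = mad_filter(pupil)
valid = ~np.isnan(cleaned)
# PCHIP is shape-preserving, so it avoids the overshoot a cubic spline
# can produce around the edges of a removed blink.
interp = PchipInterpolator(t[valid], cleaned[valid])
repaired = interp(t)
```

Downstream steps from the pipeline (normalization, alignment with facial expression timestamps) would operate on `repaired`.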

23 pages, 6315 KiB  
Article
A Kansei-Oriented Morphological Design Method for Industrial Cleaning Robots Integrating Extenics-Based Semantic Quantification and Eye-Tracking Analysis
by Qingchen Li, Yiqian Zhao, Yajun Li and Tianyu Wu
Appl. Sci. 2025, 15(15), 8459; https://doi.org/10.3390/app15158459 - 30 Jul 2025
Viewed by 214
Abstract
In the context of Industry 4.0, user demands for industrial robots have shifted toward diversification and experience-orientation. Effectively integrating users’ affective imagery requirements into industrial-robot form design remains a critical challenge. Traditional methods rely heavily on designers’ subjective judgments and lack objective data on user cognition. To address these limitations, this study develops a comprehensive methodology grounded in Kansei engineering that combines Extenics-based semantic analysis, eye-tracking experiments, and user imagery evaluation. First, we used web crawlers to harvest user-generated descriptors for industrial floor-cleaning robots and applied Extenics theory to quantify and filter key perceptual imagery features. Second, eye-tracking experiments captured users’ visual-attention patterns during robot observation, allowing us to identify pivotal design elements and assemble a sample repository. Finally, the semantic differential method collected users’ evaluations of these design elements, and correlation analysis mapped emotional needs onto stylistic features. Our findings reveal strong positive correlations between four core imagery preferences—“dignified,” “technological,” “agile,” and “minimalist”—and their corresponding styling elements. By integrating qualitative semantic data with quantitative eye-tracking metrics, this research provides a scientific foundation and novel insights for emotion-driven design in industrial floor-cleaning robots. Full article
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)

28 pages, 3441 KiB  
Article
Which AI Sees Like Us? Investigating the Cognitive Plausibility of Language and Vision Models via Eye-Tracking in Human-Robot Interaction
by Khashayar Ghamati, Maryam Banitalebi Dehkordi and Abolfazl Zaraki
Sensors 2025, 25(15), 4687; https://doi.org/10.3390/s25154687 - 29 Jul 2025
Viewed by 429
Abstract
As large language models (LLMs) and vision–language models (VLMs) become increasingly used in robotics, a crucial question arises: to what extent do these models replicate human-like cognitive processes, particularly within socially interactive contexts? Whilst these models demonstrate impressive multimodal reasoning and perception capabilities, their cognitive plausibility remains underexplored. In this study, we address this gap by using human visual attention as a behavioural proxy for cognition in a naturalistic human-robot interaction (HRI) scenario. Eye-tracking data were previously collected from participants engaging in social human-human interactions, providing frame-level gaze fixations as a human attentional ground truth. We then prompted a state-of-the-art VLM (LLaVA) to generate scene descriptions, which were processed by four LLMs (DeepSeek-R1-Distill-Qwen-7B, Qwen1.5-7B-Chat, LLaMA-3.1-8b-instruct, and Gemma-7b-it) to infer saliency points. Critically, we evaluated each model in both stateless and memory-augmented (short-term memory, STM) modes to assess the influence of temporal context on saliency prediction. Our results showed that whilst stateless LLaVA most closely replicates human gaze patterns, STM confers measurable benefits only for DeepSeek, whose lexical anchoring mirrors human rehearsal mechanisms. Other models exhibited degraded performance with memory due to prompt interference or limited contextual integration. This work introduces a novel, empirically grounded framework for assessing cognitive plausibility in generative models and underscores the role of short-term memory in shaping human-like visual attention in robotic systems. Full article

14 pages, 1209 KiB  
Article
Visual Attention Patterns Toward Female Bodies in Anorexia Nervosa—An Eye-Tracking Study with Adolescents and Adults
by Valeska Stonawski, Oliver Kratz, Gunther H. Moll, Holmer Graap and Stefanie Horndasch
Behav. Sci. 2025, 15(8), 1027; https://doi.org/10.3390/bs15081027 - 29 Jul 2025
Viewed by 250
Abstract
Attentional biases seem to play an important role in anorexia nervosa (AN). The objective of this study was to measure visual attention patterns toward female bodies in adolescents and adults with and without AN in order to explore developmental and disease-specific aspects. Female adult and adolescent patients with AN (n = 38) and control participants (n = 39) viewed standardized photographic stimuli showing women’s bodies from five BMI categories. The fixation times on the bodies and specific body parts were analyzed. Differences between participants with and without AN did not emerge: All participants showed increased attention toward the body, while adolescents displayed shorter fixation times on specific areas of the body than adults. Increased visual attention toward areas indicative of weight (e.g., hips, thighs, abdomen, buttocks) and a shorter fixation time on unclothed body parts were observed in all participants. There is evidence for the developmental effect of differential viewing patterns when looking at women’s bodies. The attention behavior of patients with AN seems to be similar to that of the control groups, which is partly consistent with, and partly contradictory to, previous studies. Full article
(This article belongs to the Section Cognition)

20 pages, 2901 KiB  
Article
Exploring the Use of Eye Tracking to Evaluate Usability Affordances: A Case Study on Assistive Device Design
by Vicente Bayarri-Porcar, Alba Roda-Sales, Joaquín L. Sancho-Bru and Margarita Vergara
Appl. Sci. 2025, 15(15), 8376; https://doi.org/10.3390/app15158376 - 28 Jul 2025
Viewed by 242
Abstract
This study explores the application of Eye-Tracking technology for the ergonomic evaluation of assistive device usability. Sixty-four participants evaluated six jar-opening devices in a two-phase study. First, the participants’ gaze was recorded while they viewed six rendered pictures of assistive devices, each shown in two different versions: with and without rubber in the grip area. Second, the participants physically interacted with the devices in a hands-on usability task. In both phases, participants rated the devices according to six usability affordances: robustness, comfort, easiness to grip, lid slippery, effort level, and easiness to use. Eye-Tracking metrics (fixation duration, number of fixations, and visit duration) correlated with the on-screen ratings, which aligned with ratings after using the physical devices. High ratings in comfort and effort level correlated with more visual attention to the grip area, where the rubber acted as key signifier. Heatmaps revealed the grip area as important for comfort and easiness to use and the lid area for robustness and slipperiness. These findings demonstrate the potential of Eye Tracking in usability studies, providing valuable insights for the ergonomic evaluation of assistive devices. Moreover, they highlight the suitability of Eye Tracking for early-stage design evaluation, offering objective metrics to guide design decisions and improve user experience. Full article
(This article belongs to the Special Issue Advances in Human–Machine Interaction)

31 pages, 2262 KiB  
Article
Strike a Pose: Relationships Between Infants’ Motor Development and Visuospatial Representations of Bodies
by Emma L. Axelsson, Tayla Britton, Gurmeher K. Gulhati, Chloe Kelly, Helen Copeland, Luca McNamara, Hester Covell and Alyssa A. Quinn
Behav. Sci. 2025, 15(8), 1021; https://doi.org/10.3390/bs15081021 - 28 Jul 2025
Viewed by 659
Abstract
Infants discriminate faces early in the first year, but research on infants’ discrimination of bodies is plagued by mixed findings. Using a familiarisation novelty preference method, we investigated 7- and 9-month-old infants’ discrimination of body postures presented in upright and inverted orientations, and with and without heads, along with relationships with gross and fine motor development. In our initial studies, 7-month-old infants discriminated upright headless postures with forward-facing and about-facing images. Eye tracking revealed that infants looked at the bodies of the upright headless postures the longest and at the heads of upright whole figures for 60–70% of the time regardless of the presence of faces, suggesting that heads detract attention from bodies. In a more stringent test, with similarly complex limb positions between test items, infants could not discriminate postures. With longer trials, the 7-month-olds demonstrated a familiarity preference for the upright whole figures, and the 9-month-olds demonstrated a novelty preference, albeit with a less robust effect. Unlike previous studies, we found that better gross motor skills were related to the 7-month-olds’ better discrimination of upright headless postures compared to inverted postures. The 9-month-old infants’ lower gross and fine motor skills were associated with a stronger preference for inverted compared to upright whole figures. This is further evidence of a configural representation of bodies in infancy, but it is constrained by an upper bias (heads in upright figures, feet in inverted), the test item similarity, and the trial duration. The measure and type of motor development reveals differential relationships with infants’ representations of bodies. Full article
(This article belongs to the Special Issue The Role of Early Sensorimotor Experiences in Cognitive Development)
