Search Results (96)

Search Parameters:
Keywords = pupil movements

22 pages, 3021 KiB  
Article
Uncovering the Characteristics of Pupil Cycle Time (PCT) in Neuropathies and Retinopathies
by Laure Trinquet, Suzon Ajasse, Frédéric Chavane, Richard Legras, Frédéric Matonti, José-Alain Sahel, Catherine Vignal-Clermont and Jean Lorenceau
Vision 2025, 9(3), 51; https://doi.org/10.3390/vision9030051 - 30 Jun 2025
Viewed by 466
Abstract
Pupil cycle time (PCT) estimates the dynamics of a biofeedback loop established between pupil size and stimulus luminance, size, or colour. The PCT is useful for probing the functional integrity of the retino-pupillary circuits and is therefore potentially applicable for assessing the effects of damage due to retinopathies or neuropathies. In previous studies, PCT was estimated by manually counting the number of pupil oscillations during a fixed period. This manual method is laborious, requires considerable expertise, and cannot be used to estimate several PCT parameters, such as the oscillation amplitude or variability. We have developed a computerised, eye-tracking-based setup that expands the possibilities of characterising PCT along several dimensions (oscillation frequency and regularity, amplitude and variability), can be used with a large palette of stimuli (different colours, sizes, shapes, or locations), and further allows measuring blinking frequency and eye movements. We used this method to characterise the PCT in young control participants as well as in patients with several pathologies, including age-related macular degeneration (AMD), diabetic retinopathy (DR), retinitis pigmentosa (RP), Stargardt disease (SD), and Leber hereditary optic neuropathy (LHON). We found that PCT is very regular and stable in young healthy participants, with little inter-individual variability. In contrast, several PCT features are altered in older healthy participants as well as in ocular diseases, including slower dynamics, irregular oscillations, and reduced oscillation amplitude. Discrimination between patients and healthy participants, quantified by the area under the receiver operating characteristic curve (AUC of the ROC), depended on the pathology and the stimuli (0.7 < AUC < 1). PCT nevertheless provides relevant complementary information to assess the physiopathology of ocular diseases and to probe the functioning of retino-pupillary circuits.
(This article belongs to the Section Retinal Function and Disease)
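As a rough illustration of the kind of oscillation analysis such a computerised setup enables, the sketch below extracts a mean cycle period, its variability, and the oscillation amplitude from a pupil-diameter trace using standard signal-processing tools. This is not the authors' pipeline; the sampling rate, peak-detection settings, and the synthetic trace are assumptions.

```python
# Minimal sketch: estimating pupil cycle time (PCT), its variability, and
# oscillation amplitude from a pupil-diameter trace. Not the paper's code;
# the sampling rate and input trace are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

FS = 120.0  # assumed eye-tracker sampling rate (Hz)

def pct_features(pupil_mm: np.ndarray):
    """Return mean cycle period (s), period SD (s), and mean amplitude (mm)."""
    x = pupil_mm - pupil_mm.mean()
    peaks, _ = find_peaks(x, distance=int(0.4 * FS))    # oscillation peaks
    troughs, _ = find_peaks(-x, distance=int(0.4 * FS))
    periods = np.diff(peaks) / FS                        # cycle-to-cycle times
    amplitude = x[peaks].mean() - x[troughs].mean()      # peak-to-trough swing
    return periods.mean(), periods.std(), amplitude

# Toy demo: a ~1.1 Hz oscillation, roughly the order of magnitude of a PCT.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)
trace = 4.0 + 0.3 * np.sin(2 * np.pi * 1.1 * t) + 0.02 * rng.normal(size=t.size)
print(pct_features(trace))  # ~ (0.91 s, small SD, ~0.6 mm)
```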

16 pages, 2054 KiB  
Article
Transformer-Based Detection and Clinical Evaluation System for Torsional Nystagmus
by Ju-Hyuck Han, Yong-Suk Kim, Jong Bin Lee, Hantai Kim, Jong-Yeup Kim and Yongseok Cho
Sensors 2025, 25(13), 4039; https://doi.org/10.3390/s25134039 - 28 Jun 2025
Viewed by 341
Abstract
Motivation: Benign paroxysmal positional vertigo (BPPV) is characterized by torsional nystagmus induced by changes in head position, where accurate quantitative assessment of subtle torsional eye movements is essential for precise diagnosis. Conventional videonystagmography (VNG) techniques face challenges in accurately capturing the rotational components of pupil movements, and existing automated methods typically exhibit limited performance in identifying torsional nystagmus. Methodology: The objective of this study was to develop an automated system capable of accurately and quantitatively detecting torsional nystagmus. We introduce the Torsion Transformer model, designed to estimate torsion angles directly from iris images. This model employs a self-supervised learning framework comprising two main components: a Decoder module, which learns rotational transformations from image data, and a Finder module, which subsequently estimates the torsion angle. The resulting torsion-angle data, represented as a time series, are then analyzed using a one-dimensional convolutional neural network (1D-CNN) classifier to detect the presence of nystagmus. The performance of the proposed method was evaluated using video recordings from 127 patients diagnosed with BPPV. Findings: Our Torsion Transformer model demonstrated robust performance, achieving a sensitivity of 89.99%, a specificity of 86.36%, an F1-score of 88.82%, and an area under the receiver operating characteristic curve (AUROC) of 87.93%. These results indicate that the proposed model effectively quantifies torsional nystagmus, with performance levels comparable to established methods for detecting horizontal and vertical nystagmus. Thus, the Torsion Transformer shows considerable promise as a clinical decision support tool in the diagnosis of BPPV. Key Findings: improved technical performance in torsional nystagmus detection, and a system that supports clinical decision-making by healthcare professionals.
(This article belongs to the Section Biomedical Sensors)
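To make the second stage concrete, here is a minimal 1D-CNN of the kind the abstract describes, classifying a torsion-angle time series as nystagmus or not. The layer sizes and the 300-sample window are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a 1D-CNN over torsion-angle time series (nystagmus vs. none).
# Layer sizes and window length are assumptions, not the paper's values.
import torch
import torch.nn as nn

class TorsionCNN(nn.Module):
    def __init__(self, seq_len: int = 300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        # two pooling stages shrink the sequence by a factor of 4
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (seq_len // 4), 2))

    def forward(self, x):              # x: (batch, 1, seq_len) torsion angles (deg)
        return self.head(self.features(x))

model = TorsionCNN()
logits = model(torch.randn(8, 1, 300))  # batch of 8 windows
print(logits.shape)                     # torch.Size([8, 2])
```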

27 pages, 3422 KiB  
Article
Audiovisual Perception of Sentence Stress in Cochlear Implant Recipients
by Hartmut Meister, Moritz Wächtler, Pascale Sandmann, Ruth Lang-Roth and Khaled H. A. Abdel-Latif
Audiol. Res. 2025, 15(4), 77; https://doi.org/10.3390/audiolres15040077 - 24 Jun 2025
Viewed by 364
Abstract
Background/Objectives: Sentence stress, as part of linguistic prosody, plays an important role in verbal communication. It emphasizes particularly important words in a phrase and is reflected by acoustic cues such as the voice fundamental frequency. However, visual cues, especially facial movements, are also important for sentence stress perception. Since cochlear implant (CI) recipients are limited in their use of acoustic prosody cues, the question arises as to what extent they can exploit visual features. Methods: Virtual characters were used to provide highly realistic but controllable stimuli for investigating sentence stress in groups of experienced CI recipients and typical-hearing (TH) peers. In addition to the proportion of correctly identified stressed words, task load was assessed via reaction times (RTs) and task-evoked pupil dilation (TEPD), and visual attention was estimated via eye tracking. Experiment 1 considered congruent combinations of auditory and visual cues, while Experiment 2 presented incongruent stimuli. Results: In Experiment 1, CI users and TH participants performed similarly in the congruent audiovisual (AV) condition, while the former were better at using visual cues. RTs were generally faster in the AV condition, whereas TEPD revealed a more detailed picture, with TH subjects showing greater pupil dilation in the visual condition. The incongruent stimuli in Experiment 2 showed that modality use varied individually among CI recipients, while TH participants relied primarily on auditory cues. Conclusions: Visual cues are generally useful for perceiving sentence stress. As a group, CI users are better at using facial cues than their TH peers. However, CI users show individual differences in how reliably they use the various cues.
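Task-evoked pupil dilation is conventionally computed by baseline-correcting the pupil trace against a short pre-stimulus window. The sketch below shows that computation under assumed values (60 Hz sampling, a 500 ms baseline), not the study's exact settings.

```python
# Minimal sketch of task-evoked pupil dilation (TEPD): subtractive baseline
# correction against a pre-stimulus window. Sampling rate and baseline length
# are common choices, assumed here rather than taken from the paper.
import numpy as np

FS = 60.0  # assumed sampling rate (Hz)

def tepd(pupil: np.ndarray, stim_onset_s: float, baseline_s: float = 0.5):
    onset = int(stim_onset_s * FS)
    base = pupil[onset - int(baseline_s * FS):onset].mean()
    return pupil[onset:] - base   # dilation relative to pre-stimulus baseline

# Toy trial: 1 s of steady pupil, then a slow dilation after stimulus onset.
trial = np.concatenate([np.full(60, 3.2), np.linspace(3.2, 3.6, 120)])
print(tepd(trial, stim_onset_s=1.0).max())  # peak dilation, ~0.4 mm
```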

23 pages, 9051 KiB  
Article
Predicting User Attention States from Multimodal Eye–Hand Data in VR Selection Tasks
by Xiaoxi Du, Jinchun Wu, Xinyi Tang, Xiaolei Lv, Lesong Jia and Chengqi Xue
Electronics 2025, 14(10), 2052; https://doi.org/10.3390/electronics14102052 - 19 May 2025
Viewed by 776
Abstract
Virtual reality (VR) devices that integrate eye-tracking and hand-tracking technologies can capture users’ natural eye–hand data in real time within a three-dimensional virtual space, providing new opportunities to explore users’ attentional states during natural 3D interactions. This study aims to develop an attention-state prediction model based on the multimodal fusion of eye and hand features, which distinguishes whether users primarily employ goal-directed attention or stimulus-driven attention during the execution of their intentions. In our experiment, we collected three types of data (eye movements, hand movements, and pupil changes) and instructed participants to complete a virtual button selection task. This setup allowed us to establish a binary ground-truth label for attentional state during the execution of selection intentions for model training. To investigate the impact of different time windows on prediction performance, we designed eight time windows ranging from 0 to 4.0 s (in increments of 0.5 s) and compared the performance of eleven algorithms: logistic regression, support vector machine, naïve Bayes, k-nearest neighbors, decision tree, linear discriminant analysis, random forest, AdaBoost, gradient boosting, XGBoost, and neural networks. The results indicate that, within the 3 s window, the gradient boosting model performed best, achieving a weighted F1-score of 0.8835 and an accuracy of 0.8860. Furthermore, the analysis of feature importance demonstrated that the multimodal eye–hand features play a critical role in the prediction. Overall, this study introduces an innovative approach that integrates three types of multimodal eye–hand behavioral and physiological data within a virtual reality interaction context. This framework provides both theoretical and methodological support for predicting users’ attentional states within short time windows and contributes practical guidance for the design of attention-adaptive 3D interfaces. In addition, the proposed multimodal eye–hand data fusion framework demonstrates potential applicability in other three-dimensional interaction domains, such as game experience optimization, rehabilitation training, and driver attention monitoring.
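A minimal sketch of the best-performing configuration (gradient boosting scored with a weighted F1) follows. The fused eye–hand feature matrix here is placeholder data; the feature names, split, and seed are assumptions.

```python
# Sketch: gradient-boosting classifier over fused eye-hand features, scored
# with weighted F1 and accuracy. Placeholder data, not the study's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))    # e.g., gaze velocity, hand speed, pupil delta
y = rng.integers(0, 2, size=500)  # 0 = stimulus-driven, 1 = goal-directed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f1_score(y_te, pred, average="weighted"), accuracy_score(y_te, pred))
```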

17 pages, 2842 KiB  
Article
YOLO Model-Based Eye Movement Detection During Closed-Eye State
by Shigui Zhang, Junhui He and Yuanwen Zou
Appl. Sci. 2025, 15(9), 4981; https://doi.org/10.3390/app15094981 - 30 Apr 2025
Viewed by 771
Abstract
Eye movement detection technology holds significant potential across medicine, psychology, and human–computer interaction. However, traditional methods, which primarily rely on tracking the pupil and cornea during the open-eye state, are ineffective when the eye is closed. To address this limitation, we developed a novel system capable of real-time eye movement detection even in the closed-eye state. Using a micro-camera based on the OV9734 image sensor, our system captures image data to construct a dataset of eyelid images during ocular movements. We performed extensive experiments with multiple versions of the YOLO algorithm, including v5s, v8s, v9s, and v10s, in addition to testing different sizes of the YOLO11 model (n < s < m < l < x), to achieve optimal performance. Ultimately, we selected YOLO11m as the optimal model based on its highest AP@0.5 score of 0.838. Our tracker achieved a mean distance error of 0.77 mm, with 90% of predicted eye positions within 1.67 mm of the ground truth, enabling real-time tracking at 30 frames per second. This study introduces an innovative method for the real-time detection of eye movements during eye closure, enhancing and diversifying the applications of eye-tracking technology.
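For reference, the Ultralytics Python API supports the YOLO11 family used here. The sketch below assumes a hypothetical custom-trained checkpoint (eyelid_yolo11m.pt) and video file, and does not reproduce the paper's training or evaluation.

```python
# Minimal inference sketch with the Ultralytics YOLO API. The weights file
# "eyelid_yolo11m.pt" is a hypothetical custom checkpoint, and the video
# source is assumed; neither comes from the paper.
from ultralytics import YOLO

model = YOLO("eyelid_yolo11m.pt")                      # custom eyelid model
for result in model("eyelid_video.mp4", stream=True):  # frame-by-frame inference
    if len(result.boxes) > 0:
        x, y, w, h = result.boxes.xywh[0].tolist()     # detected eye region
        print(f"eye centre: ({x:.1f}, {y:.1f})")
```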

22 pages, 3634 KiB  
Article
SPEED: A Graphical User Interface Software for Processing Eye Tracking Data
by Daniele Lozzi, Ilaria Di Pompeo, Martina Marcaccio, Matias Ademaj, Simone Migliore and Giuseppe Curcio
NeuroSci 2025, 6(2), 35; https://doi.org/10.3390/neurosci6020035 - 16 Apr 2025
Cited by 1 | Viewed by 961
Abstract
Eye tracking is widely used in scientific research: it enables the acquisition of precise and detailed data on an individual’s eye movements during interaction with visual stimuli, offering a rich source of information on visual perception and associated cognitive processes. In this work, new software called SPEED (labScoc Processing and Extraction of Eye tracking Data) is presented for processing data acquired by Pupil Lab Neon (Pupil Labs, Berlin, Germany). The software is written in Python and helps researchers carry out the feature extraction step without any coding skills. This work also presents a pilot study in which five healthy subjects were included in research investigating oculomotor correlates during a Moral Decision-Making Task (MDMT) and testing possible autonomic predictors of participants’ performance. A statistically significant difference was observed in reaction times and in the number of blinks made during the choice between the personal and impersonal dilemma conditions.
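As an example of one feature such software extracts, the sketch below counts blinks as runs of dropped pupil samples, a common heuristic since eye trackers typically report missing data during a blink. The sampling rate and duration threshold are assumptions, not SPEED's implementation.

```python
# Illustrative blink counter: a blink is a sufficiently long run of missing
# (NaN) pupil samples. Sampling rate and minimum duration are assumptions.
import numpy as np

FS = 200.0  # assumed sampling rate (Hz)

def count_blinks(pupil: np.ndarray, min_dur_s: float = 0.05) -> int:
    missing = np.isnan(pupil).astype(int)
    edges = np.diff(np.concatenate(([0], missing, [0])))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return int(np.sum((ends - starts) / FS >= min_dur_s))

trace = np.full(1000, 3.5)
trace[200:230] = np.nan   # one 150 ms blink
trace[600:605] = np.nan   # 25 ms dropout, too short to count as a blink
print(count_blinks(trace))  # 1
```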

22 pages, 1126 KiB  
Article
A Comparative Study of YOLO, SSD, Faster R-CNN, and More for Optimized Eye-Gaze Writing
by Walid Abdallah Shobaki and Mariofanna Milanova
Sci 2025, 7(2), 47; https://doi.org/10.3390/sci7020047 - 10 Apr 2025
Cited by 3 | Viewed by 2821
Abstract
Eye-gaze writing technology holds significant promise but faces several limitations. Existing eye-gaze-based systems often suffer from slow performance, particularly under challenging conditions such as low-light environments, user fatigue, or excessive head movement and blinking. These factors negatively impact the accuracy and reliability of eye-tracking technology, limiting the user’s ability to control the cursor or make selections. To address these challenges and enhance accessibility, we created a comprehensive dataset by integrating multiple publicly available datasets, including the Eyes Dataset, Dataset-Pupil, the Pupil Detection Computer Vision Project, the Pupils Computer Vision Project, and the MPIIGaze dataset. This combined dataset provides diverse training data for eye images under various conditions, including open and closed eyes and diverse lighting environments. Using this dataset, we evaluated the performance of several computer vision algorithms across three key areas. For object detection, we implemented YOLOv8, SSD, and Faster R-CNN. For image segmentation, we employed DeepLab and U-Net. Finally, for self-supervised learning, we utilized the SimCLR algorithm. Our results indicate that the Haar classifier achieves the highest accuracy (0.85) with a model size of 97.358 KB, while YOLOv8 demonstrates competitive accuracy (0.83) alongside exceptional processing speed and the smallest model size (6.083 KB), making it particularly suitable for cost-effective real-time eye-gaze applications.
(This article belongs to the Special Issue Computational Linguistics and Artificial Intelligence)
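For reference, a Haar-cascade baseline of the kind mentioned above can be run with the eye cascade that ships with OpenCV. The sketch below is a generic detection example with a hypothetical input image, not the paper's evaluation code.

```python
# Sketch of a Haar-cascade eye detector using OpenCV's bundled cascade.
# The input image path is an assumption for illustration.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
frame = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:                           # one box per detected eye
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("eyes_detected.jpg", frame)
```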

15 pages, 3391 KiB  
Article
OKN and Pupillary Response Modulation by Gaze and Attention Shifts
by Kei Kanari and Moe Kikuchi
J. Eye Mov. Res. 2025, 18(2), 11; https://doi.org/10.3390/jemr18020011 - 7 Apr 2025
Viewed by 393
Abstract
Pupil responses and optokinetic nystagmus (OKN) are known to vary with the brightness and direction of motion of attended stimuli, as well as with gaze position. However, whether these processes are controlled by a common mechanism remains unclear. In this study, we investigated how OKN latency relates to pupil response latency under two conditions: gaze shifts (eye movements) and attention shifts (covert attention without eye movement). While OKN showed consistent temporal changes across both gaze and attention conditions, pupillary responses exhibited distinct patterns. Moreover, the results revealed no significant correlation between pupil latency and OKN latency in either condition. These findings suggest that, although OKN and pupillary responses are influenced by similar attentional processes, their underlying mechanisms may differ.
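The latency analysis reduces to a per-condition correlation test across participants; a minimal sketch with placeholder latencies follows (the values are not the study's measurements).

```python
# Sketch of the per-condition latency correlation described above.
# Placeholder per-participant latencies, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
okn_latency = rng.normal(0.35, 0.05, size=20)    # s, one value per participant
pupil_latency = rng.normal(0.60, 0.08, size=20)  # s

r, p = pearsonr(okn_latency, pupil_latency)
print(f"r = {r:.2f}, p = {p:.3f}")  # independent draws: no correlation expected
```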

23 pages, 1202 KiB  
Article
Can Saccade and Vergence Properties Discriminate Stroke Survivors from Individuals with Other Pathologies? A Machine Learning Approach
by Alae Eddine El Hmimdi and Zoï Kapoula
Brain Sci. 2025, 15(3), 230; https://doi.org/10.3390/brainsci15030230 - 22 Feb 2025
Cited by 2 | Viewed by 992
Abstract
Recent studies applying machine learning (ML) to saccade and vergence eye movements have demonstrated the ability to distinguish individuals with dyslexia, learning disorders, or attention disorders from healthy individuals or those with other pathologies. Stroke patients are known to exhibit visual deficits and eye movement disorders. This study focused on saccade and vergence measurements using REMOBI technology V3 and the Pupil Core eye tracker. Eye movement data were automatically analyzed with the AIDEAL V3 (Artificial Intelligence Eye Movement Analysis) cloud software developed by Orasis-Ear. This software computes multiple parameters for each type of eye movement, including the latency, accuracy, velocity, duration, and disconjugacy. Three ML models (logistic regression, support vector machine, and random forest) were applied to the saccade and vergence eye movement features provided by AIDEAL to distinguish stroke patients from other groups: a population of children with learning disorders and a population with a broader spectrum of dysfunctions or pathologies (including children and adults). The different classifiers achieved macro F1 scores of up to 75.9% in identifying stroke patients based on the saccade and vergence parameters. An additional ML analysis using age-matched groups of stroke patients and adults or seniors reduced the influence of large age differences; this analysis resulted in even higher F1 scores across all three ML models, as the comparison group predominantly included healthy individuals, including some with presbycusis. In conclusion, ML applied to saccade and vergence eye movement parameters, as measured by the REMOBI and AIDEAL technology, is a sensitive method for detecting stroke-related sequelae. This approach could be further developed as a clinical tool to evaluate recovery, compensation, and the evolution of neurological deficits in stroke patients.
(This article belongs to the Section Neurorehabilitation)
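A sketch of this classification protocol, with the three named classifiers compared by macro F1 on placeholder features, is shown below; AIDEAL's feature extraction is not reproduced, and the data and cross-validation setup are assumptions.

```python
# Sketch: three standard classifiers over saccade/vergence parameters
# (latency, accuracy, velocity, duration, disconjugacy...), compared by
# macro F1. Placeholder data, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))    # per-trial eye-movement parameters
y = rng.integers(0, 2, size=300)  # 1 = stroke, 0 = other pathology

for clf in (LogisticRegression(max_iter=1000), SVC(), RandomForestClassifier()):
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
    print(type(clf).__name__, scores.mean().round(3))
```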

18 pages, 2601 KiB  
Article
Changes in Pupil Size According to the Color of Cosmetic Packaging: Using Eye-Tracking Techniques
by Eui Suk Ko, Jai Neung Kim, Hyung Jong Na and Seong Tae Kim
Appl. Sci. 2025, 15(1), 73; https://doi.org/10.3390/app15010073 - 26 Dec 2024
Cited by 1 | Viewed by 1506
Abstract
This study examines the relationship between cosmetic packaging color and consumer attention by analyzing changes in pupil size using eye-tracking technology. A controlled experiment with 25 participants (mean age: 24.7 ± 3 years; 14 males and 11 females) was conducted to investigate the impact of eight packaging colors (black, white, blue, yellow, orange, turquoise, pink, and sky blue) on pupil dilation during gaze fixation and movement. Pupil size data were analyzed using SAS 9.4, with t-tests used to determine significant differences across colors. The results revealed that pink packaging elicited significantly larger pupil sizes during fixation, indicating heightened attention, while black, white, blue, and orange led to smaller pupil sizes when fixated, suggesting greater focus on the surrounding environment rather than the packaging. In contrast, yellow and turquoise exhibited no significant differences in pupil size between fixation and movement. Additionally, the study highlights that gaze fixation is a more meaningful indicator of attention than gaze movement, as fixation reflects focused interest in specific stimuli. The findings suggest that pink packaging is most effective in attracting consumer attention, while black, white, blue, and orange are better suited to enhancing focus on the surrounding environment. These insights emphasize the growing importance of packaging design in influencing consumer behavior, particularly through color selection. This study contributes to marketing practice by providing empirical evidence for the visual impact of packaging colors, offering valuable guidance for cosmetic industry practitioners. Future research should expand sample sizes and explore additional packaging attributes, such as shape and material, to derive more comprehensive insights.
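The core statistical comparison is a paired t-test of pupil size during fixation versus movement for a given color; the sketch below illustrates it on placeholder data (the study itself used SAS 9.4, not Python).

```python
# Sketch: paired t-test of pupil size during fixation vs. gaze movement for
# one packaging colour. Placeholder per-participant means, not study data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
pupil_fixation = rng.normal(4.1, 0.2, size=25)  # mm, one mean per participant
pupil_movement = rng.normal(3.9, 0.2, size=25)

t, p = ttest_rel(pupil_fixation, pupil_movement)
print(f"t = {t:.2f}, p = {p:.3f}")
```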

14 pages, 2420 KiB  
Article
Workload Assessment of Operators: Correlation Between NASA-TLX and Pupillary Responses
by Yun Wu, Yao Zhang and Bin Zheng
Appl. Sci. 2024, 14(24), 11975; https://doi.org/10.3390/app142411975 - 20 Dec 2024
Cited by 6 | Viewed by 3274
Abstract
Operators in high-stress environments often face significant cognitive demands that can impair their performance, underscoring the need for comprehensive workload assessment. This study examines the relationship between a subjective self-report measure, the NASA Task Load Index (NASA-TLX), and an objective bio-signal measure, pupillary responses. The participants engaged in either a visual tracking task or a laparoscopic visuomotor task while their eye movements were recorded using a Tobii Pro Nano eye tracker (Tobii Technology Inc., Stockholm, Sweden). Immediately after completing the tasks, participants provided NASA-TLX scores to assess their perceived workload. The study tested three hypotheses: first, whether increased pupil dilation correlates with higher NASA-TLX scores; second, whether task type affects workload; and third, whether task repetition influences workload. The results showed a moderate positive correlation between pupil size and NASA-TLX scores (r = 0.513, p < 0.001). The laparoscopic surgery task, which requires visuomotor coordination, resulted in significantly higher NASA-TLX scores (t = –6.23, p < 0.001), larger original pupil sizes (t = –22.57, p < 0.001), and larger adjusted pupil sizes (t = –22.57, p < 0.001) than the purely visual task. Additionally, task repetition led to a significant reduction in NASA-TLX scores (t = 2.86, p = 0.005), the original mean pupil size (t = 5.50, p < 0.001), and the adjusted pupil size (t = 6.34, p < 0.001). In conclusion, the study confirms a positive correlation between NASA-TLX scores and pupillary responses. Task type and repetition were found to influence workload and pupillary responses. The findings demonstrate the value of using both subjective and objective measures for workload assessment.
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
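The headline result is a Pearson correlation between mean pupil size and NASA-TLX scores across trials; the sketch below shows that computation on placeholder data (the paper's r = 0.513 comes from its own measurements, not from this toy example).

```python
# Sketch: Pearson correlation between pupil size and NASA-TLX workload
# scores. The data are synthetic, loosely coupled for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
workload = rng.uniform(20, 90, size=60)                       # NASA-TLX, 0-100
pupil = 3.0 + 0.01 * workload + rng.normal(0, 0.15, size=60)  # mm

r, p = pearsonr(pupil, workload)
print(f"r = {r:.2f}, p = {p:.4f}")
```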

19 pages, 6356 KiB  
Article
An Objective Handling Qualities Assessment Framework of Electric Vertical Takeoff and Landing
by Yuhan Li, Shuguang Zhang, Yibing Wu, Sharina Kimura, Michael Zintl and Florian Holzapfel
Aerospace 2024, 11(12), 1020; https://doi.org/10.3390/aerospace11121020 - 11 Dec 2024
Cited by 1 | Viewed by 1165
Abstract
Assessing handling qualities is crucial for ensuring the safety and operational efficiency of aircraft control characteristics. The growing interest in Urban Air Mobility (UAM) has increased the focus on electric Vertical Takeoff and Landing (eVTOL) aircraft; however, a comprehensive assessment of eVTOL handling qualities remains a challenge. This paper proposes a framework for assessing eVTOL handling qualities that integrates pilot compensation, task performance, and qualitative comments. An experiment was conducted in which eye-tracking data and subjective ratings were collected from 16 participants as they performed various Mission Task Elements (MTEs) in an eVTOL simulator. The relationship between pilot compensation and task workload was investigated based on eye metrics. Data mining revealed that pilots’ eye movement patterns and workload perception change when performing MTEs that involve aircraft deficiencies. Additionally, pupil size, pupil diameter, iris diameter, interpupillary distance, iris-to-pupil ratio, and gaze entropy were found to correlate with both handling qualities and task workload. Furthermore, a handling qualities and pilot workload recognition model was developed based on Long Short-Term Memory (LSTM) networks, trained and evaluated with the experimental data, achieving an accuracy of 97%. A case study was conducted to validate the effectiveness of the proposed framework. Overall, the proposed framework addresses the limitations of the existing Handling Qualities Rating Method (HQRM), offering a more comprehensive approach to handling qualities assessment.
(This article belongs to the Section Aeronautics)
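A minimal LSTM recognizer of the kind described, mapping a sequence of eye metrics (e.g., pupil diameter, gaze entropy) to a workload or handling-qualities class, might look like the sketch below. The feature count, hidden size, and three-class output are assumptions, not the paper's architecture.

```python
# Sketch of an LSTM classifier over per-frame eye metrics. Dimensions and
# class count are illustrative assumptions.
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    def __init__(self, n_features: int = 6, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        _, (h, _) = self.lstm(x)      # final hidden state summarizes the run
        return self.head(h[-1])

model = WorkloadLSTM()
print(model(torch.randn(4, 200, 6)).shape)  # torch.Size([4, 3])
```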

14 pages, 918 KiB  
Article
An Eye Tracker Study on the Understanding of Implicitness in French Elementary School Children
by Maria Pia Bucci, Aikaterini Premeti and Béatrice Godart-Wendling
Brain Sci. 2024, 14(12), 1195; https://doi.org/10.3390/brainsci14121195 - 27 Nov 2024
Viewed by 864
Abstract
Background: The aim of this study is to use an eye tracker to compare the understanding of three forms of implicitness (i.e., presupposition, conversational implicature, and irony) in 139 pupils from the first to the fifth year of elementary school. Methods: Each child was invited to read short texts composed of a context about some characters and a target sentence conveying one of the three kinds of implicitness. Each text was followed by a yes/no comprehension question to check whether the child had understood the implicit content of the target sentence. At the same time, eye movements were recorded by a remote system (Pro Fusion by Tobii). The number of correct answers and the number and duration of fixations on the texts were measured. Results: We showed that children’s reading time is positively correlated with the accurate comprehension of implicitness, and that children understand the three types of implicitness similarly. Furthermore, the number and duration of fixations depend both on the age of the children and on their good or poor understanding of the implicit contents. This is particularly noticeable for children in the first-grade class, for whom fixations are significantly longer and more frequent when they correctly understand sentences containing implicitness. Conclusion: These results argue in favor of the possibility of teaching the comprehension of some types of implicitness (presupposition, implicature, and irony) from an early age.
(This article belongs to the Section Developmental Neuroscience)
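Fixation counts and durations of the kind analyzed here are typically obtained with a dispersion-threshold (I-DT) detector. The sketch below implements that standard algorithm with conventional, assumed thresholds, not the study's settings.

```python
# Illustrative I-DT fixation detector: a fixation is a window whose gaze
# dispersion stays under a threshold for a minimum duration. Thresholds and
# sampling rate are conventional assumptions.
import numpy as np

FS = 250.0       # assumed sampling rate (Hz)
DISP_MAX = 1.0   # max dispersion (deg) within a fixation
MIN_DUR = 0.1    # minimum fixation duration (s)

def fixations(gx: np.ndarray, gy: np.ndarray):
    def disp(a, b):
        return np.ptp(gx[a:b]) + np.ptp(gy[a:b])
    out, start, n, win = [], 0, len(gx), int(MIN_DUR * FS)
    while start + win <= n:
        end = start + win
        if disp(start, end) <= DISP_MAX:
            while end < n and disp(start, end + 1) <= DISP_MAX:
                end += 1                                  # grow the fixation
            out.append((start / FS, (end - start) / FS))  # (onset s, duration s)
            start = end
        else:
            start += 1
    return out

# Toy demo: gaze holds at one point, then jumps to another.
rng = np.random.default_rng(4)
gx = np.concatenate([np.full(100, 5.0), np.full(100, 10.0)]) + rng.normal(0, 0.05, 200)
gy = np.full(200, 3.0) + rng.normal(0, 0.05, 200)
print(fixations(gx, gy))  # two ~0.4 s fixations
```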

16 pages, 2612 KiB  
Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Viewed by 1234
Abstract
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research examined the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can capture participants’ attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize the continuous manual tracking task, using focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. In addition, the average pupil diameter tends to diminish with visual pre-cues or visual stimuli but expands during auditory or tactile pre-cues or stimuli. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Moreover, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task.
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)
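The pupil-diameter contrast across modalities lends itself to a one-way ANOVA; the sketch below illustrates that test on placeholder data and is not the study's analysis.

```python
# Sketch: one-way ANOVA of mean pupil diameter across stimulus modalities.
# Placeholder per-trial means chosen to mimic the reported direction only.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
visual = rng.normal(3.6, 0.2, size=30)    # mm: smaller under visual stimuli
auditory = rng.normal(3.9, 0.2, size=30)  # mm: larger under auditory stimuli
tactile = rng.normal(3.9, 0.2, size=30)   # mm: larger under tactile stimuli

F, p = f_oneway(visual, auditory, tactile)
print(f"F = {F:.2f}, p = {p:.4f}")
```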

18 pages, 5155 KiB  
Article
Strabismus Detection in Monocular Eye Images for Telemedicine Applications
by Wattanapong Kurdthongmee, Lunla Udomvej, Arsanchai Sukkuea, Piyadhida Kurdthongmee, Chitchanok Sangeamwong and Chayanid Chanakarn
J. Imaging 2024, 10(11), 284; https://doi.org/10.3390/jimaging10110284 - 7 Nov 2024
Viewed by 1492
Abstract
This study presents a novel method for the early detection of strabismus, a common eye misalignment disorder, with an emphasis on its application in telemedicine. The technique leverages the synchronization of eye movements to estimate the pupil location of one eye from the other, which yields close alignment in non-strabismic cases. Regression models for each eye are developed using advanced machine learning algorithms, and significant discrepancies between estimated and actual pupil positions indicate the presence of strabismus. This approach provides a non-invasive, efficient solution for early detection and bridges the gap between basic research and clinical care by offering an accessible, machine learning-based tool that facilitates timely intervention and improved outcomes in diverse healthcare settings. The potential for pediatric screening is discussed as a possible direction for future research.
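A toy version of the core idea (regress one eye's pupil position from the other's, then flag large residuals) might look like the sketch below. The model choice, the 5-pixel threshold, and the synthetic data are all assumptions, not the paper's method.

```python
# Sketch: predict the right eye's pupil position from the left eye's, and
# flag a large prediction error as potential strabismus. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
left = rng.uniform(0, 100, size=(400, 2))          # left-eye pupil (x, y), px
right = left + rng.normal(0, 1.0, size=(400, 2))   # synchronized in healthy eyes

model = RandomForestRegressor().fit(left, right)   # right eye from left eye

sample_left = np.array([[50.0, 40.0]])
predicted_right = model.predict(sample_left)[0]
observed_right = np.array([58.0, 40.0])            # misaligned observation
error_px = np.linalg.norm(predicted_right - observed_right)
print("possible strabismus" if error_px > 5.0 else "aligned", f"({error_px:.1f} px)")
```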