Search Results (60)

Search Parameters:
Keywords = eye tracking performance parameters

27 pages, 13508 KB  
Article
Investigating XR Pilot Training Through Gaze Behavior Analysis Using Sensor Technology
by Aleksandar Knežević, Branimir Krstić, Aleksandar Bukvić, Dalibor Petrović and Boško Rašuo
Aerospace 2026, 13(1), 97; https://doi.org/10.3390/aerospace13010097 - 16 Jan 2026
Viewed by 270
Abstract
This research aims to characterize extended reality flight trainers and to provide a detailed account of the sensors employed to collect data essential for qualitative task performance analysis, with a particular focus on gaze behavior within the extended reality environment. A comparative study was conducted to evaluate the effectiveness of an extended reality environment relative to traditional flight simulators. Eight flight instructor candidates, advanced pilots with comparable flight-hour experience, were divided into four groups based on airplane or helicopter type and cockpit configuration (analog or digital). In the traditional simulator, fixation numbers, dwell time percentages, revisit numbers, and revisit time percentages were recorded, while in the extended reality environment, the following metrics were analyzed: fixation numbers and durations, saccade numbers and durations, smooth pursuits and durations, and number of blinks. These eye-tracking parameters were evaluated alongside flight performance metrics across all trials. Each scenario involved a takeoff and initial climb task within the traffic pattern of a fixed-wing aircraft. Despite the diversity of pilot groups, no statistically significant differences were observed in either flight performance or gaze behavior metrics between the two environments. Moreover, differences identified between certain pilot groups within one scenario were consistently observed in another, indicating the sensitivity of the proposed evaluation procedure. The enhanced realism and validated effectiveness are therefore crucial for establishing standards that support the formal adoption of extended reality technologies in pilot training programs. Integrating this digital space significantly enhances the overall training experience and provides a higher level of simulation fidelity for next-generation cadet training. Full article
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)
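
As a rough illustration of two of the gaze metrics named in this abstract (fixation numbers and dwell time percentages per area of interest), the sketch below computes them from a hypothetical fixation list; the field names and AOI labels are invented, and this is not the authors' analysis code.

```python
# Minimal sketch: fixation count and dwell-time percentage per area of interest (AOI),
# computed from a hypothetical list of fixations. Field names and AOI labels are assumptions.
from collections import defaultdict

def aoi_metrics(fixations, total_duration_ms):
    """fixations: iterable of dicts like {'aoi': 'airspeed', 'duration_ms': 240.0}."""
    counts = defaultdict(int)
    dwell = defaultdict(float)
    for fix in fixations:
        counts[fix['aoi']] += 1
        dwell[fix['aoi']] += fix['duration_ms']
    return {
        aoi: {
            'fixation_count': counts[aoi],
            'dwell_time_pct': 100.0 * dwell[aoi] / total_duration_ms,
        }
        for aoi in counts
    }

# Example: two instrument AOIs over a 10-second trial segment.
fixations = [
    {'aoi': 'airspeed', 'duration_ms': 300},
    {'aoi': 'attitude', 'duration_ms': 450},
    {'aoi': 'airspeed', 'duration_ms': 220},
]
print(aoi_metrics(fixations, total_duration_ms=10_000))
```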

16 pages, 2000 KB  
Article
The Impact of Ophthalmic Lens Power and Treatments on Eye Tracking Performance
by Marta Lacort-Beltrán, Adrián Alejandre, Sara Guillén, Marina Vilella, Xian Pan, Victoria Pueyo, Marta Ortin and Eduardo Esteban-Ibañez
J. Eye Mov. Res. 2026, 19(1), 4; https://doi.org/10.3390/jemr19010004 - 29 Dec 2025
Viewed by 379
Abstract
Eye tracking (ET) technology is increasingly used in both research and clinical practice, but its accuracy may be compromised by the presence of ophthalmic lenses. This study systematically evaluated the influence of different optical prescriptions and lens treatments on ET performance using DIVE (Device for an Integral Visual Examination). Fourteen healthy participants underwent oculomotor control tests under thirteen optical conditions: six with varying dioptric powers and six with optical filters, compared against a no-lens control. Key parameters analysed included angle error, fixation stability (bivariate contour ellipse area, BCEA), saccadic accuracy, number of data gaps, and proportion of valid frames. High-powered spherical lenses (+6.00 D and −6.00 D) significantly increased gaze angle error, and the negative lens also increased data gaps, while cylindrical lenses had a moderate effect. Among filters, the Natural IR coating caused the greatest deterioration in ET performance, reducing valid samples and increasing the number of gaps with data loss, likely due to interference with the infrared-based detection system. The lens with basic anti-reflective treatment (SV Org 1.5 AR) also showed some deterioration in interaction with the ET. Other filters showed minimal or no significant impact. These findings demonstrate that both high-powered prescriptions and certain lens treatments can compromise ET data quality, highlighting the importance of accounting for optical conditions in experimental design and clinical applications. Full article
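
The fixation-stability parameter cited here, the bivariate contour ellipse area (BCEA), has a standard formulation, BCEA(P) = 2kπσxσy√(1 − ρ²) with k = −ln(1 − P). The sketch below is a generic implementation of that formula, not the DIVE device's own code, and the gaze data are simulated.

```python
# Generic BCEA sketch using the standard formula; simulated gaze data, not DIVE output.
import numpy as np

def bcea(x_deg, y_deg, p=0.68):
    """x_deg, y_deg: gaze positions (degrees) recorded during a fixation trial."""
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    k = -np.log(1.0 - p)                      # k = -ln(1 - P)
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho**2)

rng = np.random.default_rng(0)
gx = rng.normal(0.0, 0.3, 500)                # simulated horizontal gaze, degrees
gy = rng.normal(0.0, 0.4, 500)                # simulated vertical gaze, degrees
print(f"BCEA(68%) = {bcea(gx, gy):.3f} deg^2")
```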

20 pages, 1956 KB  
Article
Temporal Capsule Feature Network for Eye-Tracking Emotion Recognition
by Qingfeng Gu, Jiannan Chi, Cong Zhang, Boxiang Cao, Jiahui Liu and Yu Wang
Brain Sci. 2025, 15(12), 1343; https://doi.org/10.3390/brainsci15121343 - 18 Dec 2025
Viewed by 440
Abstract
Eye Tracking (ET) parameters, as physiological signals, are widely applied in emotion recognition and show promising performance. However, emotion recognition relying on ET parameters still faces several challenges: (1) insufficient extraction of temporal dynamic information from the ET parameters; (2) a lack of sophisticated features with strong emotional specificity, which restricts the model’s robustness and individual generalization capability. To address these issues, we propose a novel Temporal Capsule Feature Network (TCFN) for ET parameter-based emotion recognition. The network incorporates a Window Feature Module to extract Eye Movement temporal dynamic information and a specialized Capsule Network Module to mine complementary and collaborative relationships among features. The MLP Classification Module realizes feature-to-category conversion, and a Dual-Loss Mechanism is integrated to optimize overall performance. Experimental results demonstrate the superiority of the proposed model: the average accuracy reaches 83.27% for Arousal and 89.94% for Valence (three-class tasks) on the eSEE-d dataset, and the accuracy rate of four-category across-session emotion recognition is 63.85% on the SEED-IV dataset. Full article
(This article belongs to the Section Behavioral Neuroscience)
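
The Window Feature Module described above extracts temporal dynamics from ET parameter sequences. The sketch below shows one generic way to window a multichannel ET signal and compute per-window summary features; the window length, stride, and feature set are assumptions for illustration, not the TCFN's actual design.

```python
# Illustrative windowed feature extraction over a multichannel eye-tracking signal.
import numpy as np

def window_features(et_signal, win=50, stride=25):
    """et_signal: array of shape (T, C) -- T samples, C ET channels (e.g. pupil size, gaze x/y)."""
    feats = []
    for start in range(0, et_signal.shape[0] - win + 1, stride):
        seg = et_signal[start:start + win]                 # (win, C)
        feats.append(np.concatenate([
            seg.mean(0),                                   # per-channel mean
            seg.std(0),                                    # per-channel variability
            np.abs(np.diff(seg, axis=0)).mean(0),          # mean absolute sample-to-sample change
        ]))
    return np.stack(feats)                                 # (n_windows, 3*C)

x = np.random.default_rng(1).normal(size=(500, 4))         # fake 500-sample, 4-channel ET recording
print(window_features(x).shape)
```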

22 pages, 17233 KB  
Article
From Mechanical Instability to Virtual Precision: Digital Twin Validation for Next-Generation MEMS-Based Eye-Tracking Systems
by Mateusz Pomianek, Marek Piszczek, Paweł Stawarz and Aleksandra Kucharczyk-Drab
Sensors 2025, 25(20), 6460; https://doi.org/10.3390/s25206460 - 18 Oct 2025
Viewed by 2746
Abstract
The development of high-performance MEMS-based eye trackers, crucial for next-generation medical diagnostics and human–computer interfaces, is often hampered by the mechanical instability and time-consuming recalibration of physical prototypes. To address this bottleneck, we present the development and rigorous validation of a high-fidelity digital twin (DT) designed to accelerate the design–test–refine cycle. We conducted a comparative study of a physical MEMS scanning system and its corresponding digital twin using a USAF 1951 test target under both static and dynamic conditions. Our analysis reveals that the DT accurately replicates the physical system’s behavior, showing a geometric discrepancy of <30 µm and a matching feature shift (1 µm error) caused by tracking dynamics. Crucially, the DT effectively removes mechanical vibration artifacts, enabling the precise analysis of system parameters in a controlled virtual environment. The validated model was then used to develop a pupil detection algorithm that achieved an accuracy of 1.80 arc minutes, a result that surpasses the performance of a widely used commercial system in our comparative tests. This work establishes a validated methodology for using digital twins in the rapid prototyping and optimization of complex optical systems, paving the way for faster development of critical healthcare technologies. Full article
(This article belongs to the Section Sensors and Robotics)
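
The accuracy figure quoted above (1.80 arc minutes) corresponds to a small angular offset between an estimated and a reference gaze point. The sketch below shows that generic conversion for a point on a plane at a known viewing distance; the geometry and distances are assumptions, not values from the paper.

```python
# Angular error between estimated and reference gaze points, expressed in arc minutes.
import numpy as np

def angular_error_arcmin(est_xy_mm, true_xy_mm, distance_mm):
    est = np.array([*est_xy_mm, distance_mm], float)
    true = np.array([*true_xy_mm, distance_mm], float)
    cosang = est @ true / (np.linalg.norm(est) * np.linalg.norm(true))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) * 60.0   # degrees -> arc minutes

# Example: a 0.34 mm offset on a target plane 650 mm away is about 1.8 arc minutes.
print(f"{angular_error_arcmin((0.34, 0.0), (0.0, 0.0), 650.0):.2f} arcmin")
```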

12 pages, 507 KB  
Article
Clinical Assessment of a Virtual Reality Perimeter Versus the Humphrey Field Analyzer: Comparative Reliability, Usability, and Prospective Applications
by Marco Zeppieri, Caterina Gagliano, Francesco Cappellani, Federico Visalli, Fabiana D’Esposito, Alessandro Avitabile, Roberta Amato, Alessandra Cuna and Francesco Pellegrini
Vision 2025, 9(4), 86; https://doi.org/10.3390/vision9040086 - 11 Oct 2025
Viewed by 1133
Abstract
Background: This study compared the performance of a Head-mounted Virtual Reality Perimeter (HVRP) with the Humphrey Field Analyzer (HFA). The HFA is the established standard for automated perimetry but is constrained by lengthy testing, bulky equipment, and limited patient comfort. Comparative data on newer head-mounted virtual reality perimeters are limited, leaving uncertainty about their clinical reliability and potential advantages. Aim: The aim was to evaluate parameters such as visual field outcomes, portability, patient comfort, eye tracking, and usability. Methods: Participants underwent testing with both devices, assessing metrics such as mean deviation (MD), pattern standard deviation (PSD), and test duration. Results: The HVRP demonstrated small but statistically significant differences in MD and PSD compared to the HFA, while maintaining a consistent trend across participants. MD values were slightly more negative for the HFA than the HVRP (average difference −0.60 dB, p = 0.0006), while PSD was marginally higher with the HFA (average difference 0.38 dB, p = 0.00018). Although statistically significant, these differences were small in magnitude and do not undermine the clinical utility or reproducibility of the device. Notably, the HVRP showed markedly shorter testing times (7.15 vs. 18.11 min, mean difference 10.96 min, p < 0.0001). Its lightweight, portable design allowed for bedside and home testing, enhancing accessibility for pediatric, geriatric, and mobility-impaired patients. Participants reported greater comfort due to the headset design, which eliminated the need for chin rests. The device also offers potential for AI integration and remote data analysis. Conclusions: The HVRP proved to be a reliable, user-friendly alternative to traditional perimetry. Its advantages in comfort, portability, and test efficiency support its use in clinical practice and expand possibilities for bedside assessment, home monitoring, and remote screening, particularly in populations with limited access to conventional perimetry. Full article
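
The paired device comparison reported above can be reproduced in form with a few lines of SciPy. The abstract does not state which paired test was used, so a paired t-test is shown only as one plausible choice, and the MD values below are invented.

```python
# Hedged sketch of a paired comparison of mean deviation (MD) between two perimeters.
import numpy as np
from scipy import stats

md_hfa = np.array([-2.1, -1.5, -3.0, -0.8, -2.6, -1.9])                  # illustrative MD values (dB)
md_hvrp = md_hfa + np.random.default_rng(2).normal(0.6, 0.3, md_hfa.size)  # fake paired readings

diff = md_hvrp - md_hfa
t, p = stats.ttest_rel(md_hvrp, md_hfa)
print(f"mean difference = {diff.mean():.2f} dB, t = {t:.2f}, p = {p:.4f}")
```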

18 pages, 2040 KB  
Article
Diagnosis of mTBI in an ER Setting Using Eye-Tracking and Virtual Reality Technology: An Exploratory Study
by Felix Sikorski, Claas Güthoff, Ingo Schmehl, Witold Rogge, Jasper Frese, Arndt-Peter Schulz and Andreas Gonschorek
Brain Sci. 2025, 15(10), 1051; https://doi.org/10.3390/brainsci15101051 - 26 Sep 2025
Viewed by 750
Abstract
Background: The aim of this study was to systematically explore point-of-care biomarkers as diagnostic indicators for the detection and exclusion of mild traumatic brain injury (mTBI) in an emergency room (ER) setting using Eye-Tracking and Virtual Reality (ET/VR) technology. The primary target group included patients who had suffered an acute trauma to the head and presented within 24 h to the emergency department. Methods: The BG Unfallkrankenhaus Berlin and the BG Klinikum Hamburg participated in this explorative, prospective, single-arm accuracy study. This study included patients who presented to the emergency department with suspected mTBI and were examined using ET/VR glasses. All further steps corresponded to clinical routine (e.g., decision on hospital admission, imaging diagnostics). After the completion of treatment, the patients were divided into mTBI and non-TBI subgroups by consensus between two independent clinical experts, who were blinded to the results of the index test (examination using ET/VR glasses) in the form of a clinical synopsis. The diagnosis was based on all clinical, neurological, neurofunctional, neuropsychological, and imaging findings. Routine trauma and neurological history, examination, and diagnosis were performed in each case. All statistical analyses were performed with exploratory intent. Results: The use of ET/VR glasses was found to be predominantly unproblematic. Two of the fifty-two analyzed parameters can be statistically distinguished from a random decision. No difference in oculomotor function was found between the two subgroups, and no correlations between the parameters recorded by the VR goggles and the detection of mTBI were found. Conclusions: At present, the use of VR goggles for the diagnosis of mTBI in an ER setting cannot be recommended. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
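
The abstract describes screening 52 recorded parameters for whether each separates mTBI from non-TBI cases better than a random decision. The sketch below shows that kind of per-parameter screening in generic form; the data, parameter count usage, and choice of test are assumptions, not the study's analysis.

```python
# Per-parameter screening: AUC against chance plus a rank-sum test, on fake data.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 80)                    # 0 = non-TBI, 1 = mTBI (fake labels)
params = rng.normal(size=(80, 52))            # 52 fake ET/VR parameters per patient

for j in range(params.shape[1]):
    auc = roc_auc_score(y, params[:, j])
    _, p = mannwhitneyu(params[y == 1, j], params[y == 0, j])
    if p < 0.05:                               # note: no multiple-comparison correction here
        print(f"parameter {j}: AUC = {auc:.2f}, p = {p:.3f}")
```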

49 pages, 2744 KB  
Review
A Comprehensive Framework for Eye Tracking: Methods, Tools, Applications, and Cross-Platform Evaluation
by Govind Ram Chhimpa, Ajay Kumar, Sunita Garhwal, Dhiraj Kumar, Niyaz Ahmad Wani, Mudasir Ahmad Wani and Kashish Ara Shakil
J. Eye Mov. Res. 2025, 18(5), 47; https://doi.org/10.3390/jemr18050047 - 23 Sep 2025
Viewed by 3025
Abstract
Eye tracking, a fundamental process in gaze analysis, involves measuring the point of gaze or eye motion. It is crucial in numerous applications, including human–computer interaction (HCI), education, health care, and virtual reality. This study delves into eye-tracking concepts, terminology, performance parameters, applications, and techniques, focusing on modern and efficient approaches such as video-oculography (VOG)-based systems, deep learning models for gaze estimation, wearable and cost-effective devices, and integration with virtual/augmented reality and assistive technologies. These contemporary methods, prevalent for over two decades, significantly contribute to developing cutting-edge eye-tracking applications. The findings underscore the significance of diverse eye-tracking techniques in advancing eye-tracking applications. They leverage machine learning to glean insights from existing data, enhance decision-making, and minimize the need for manual calibration during tracking. Furthermore, the study explores and recommends strategies to address limitations/challenges inherent in specific eye-tracking methods and applications. Finally, the study outlines future directions for leveraging eye tracking across various developed applications, highlighting its potential to continue evolving and enriching user experiences. Full article
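
Two of the most commonly reported eye-tracking performance parameters in this literature are spatial accuracy (mean angular offset from a known target) and precision (RMS of sample-to-sample angular differences). The textbook-style sketch below illustrates both; it is generic and not taken from the review itself.

```python
# Generic accuracy and precision estimates for an eye tracker, on simulated gaze samples.
import numpy as np

def accuracy_deg(gaze_deg, target_deg):
    """Mean angular offset (degrees) between gaze samples and the known target position."""
    return float(np.linalg.norm(np.asarray(gaze_deg) - np.asarray(target_deg), axis=1).mean())

def precision_rms_deg(gaze_deg):
    """RMS of sample-to-sample angular distances, a standard precision estimate."""
    d = np.diff(np.asarray(gaze_deg, float), axis=0)
    return float(np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean()))

gaze = np.random.default_rng(4).normal([5.0, 0.0], 0.2, size=(300, 2))   # samples around a 5 deg target
print(accuracy_deg(gaze, [5.0, 0.0]), precision_rms_deg(gaze))
```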

14 pages, 785 KB  
Article
Novel Structure–Function Models for Estimating Retinal Ganglion Cell Count Using Pattern Electroretinography in Glaucoma Suspects
by Andrew Tirsi, Isabella Tello, Timothy Foster, Rushil Kumbhani, Nicholas Leung, Samuel Potash, Derek Orshan and Celso Tello
Diagnostics 2025, 15(14), 1756; https://doi.org/10.3390/diagnostics15141756 - 11 Jul 2025
Viewed by 926
Abstract
Background/Objectives: The early detection of retinal ganglion cell (RGC) dysfunction is critical for timely intervention in glaucoma suspects (GSs). The combined structure–function index (CSFI), which uses visual field and optical coherence tomography (OCT) data to estimate RGC counts, may be of limited utility in GSs. This study evaluates whether steady-state pattern electroretinogram (ssPERG)-derived estimates better predict early structural changes in GSs. Methods: Fifty eyes from 25 glaucoma suspects underwent ssPERG and spectral-domain OCT. Estimated RGC counts (eRGCC) were calculated using three parameters: ssPERG-Magnitude (eRGCCMag), ssPERG-MagnitudeD (eRGCCMagD), and CSFI (eRGCCCSFI). Linear regression and multivariable models were used to assess each model’s ability to predict the average retinal nerve fiber layer thickness (AvRNFLT), average ganglion cell layer–inner plexiform layer thickness (AvGCL-IPLT), and rim area. Results: eRGCCMag and eRGCCMagD were significantly correlated with eRGCCCSFI. Both PERG-derived models outperformed eRGCCCSFI in predicting AvRNFLT and AvGCL-IPLT, with eRGCCMagD showing the strongest association with AvGCL-IPLT. Conversely, the rim area was best predicted by eRGCCMag and eRGCCCSFI. These findings support a linear relationship between ssPERG parameters and early RGC structural changes, while the logarithmic nature of visual field loss may limit eRGCCCSFI’s predictive accuracy in GSs. Conclusions: ssPERG-derived estimates, particularly eRGCCMagD, better predict early structural changes in GSs than eRGCCCSFI. eRGCCMagD’s superior performance in predicting GCL-IPLT highlights its potential utility as an early biomarker of glaucomatous damage. ssPERG-based models offer a simpler and more sensitive tool for early glaucoma risk stratification, and may provide a clinical benchmark for tracking recoverable RGC dysfunction and treatment response. Full article
(This article belongs to the Special Issue Imaging and AI Applications in Glaucoma)
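
The structure-function modeling described above boils down to regressing a structural OCT measure on an estimated RGC count. The sketch below mirrors only that modeling pattern with invented numbers; it is not the authors' models or data.

```python
# Rough sketch: linear regression of average RNFL thickness on a fake eRGCC estimate.
import numpy as np

rng = np.random.default_rng(5)
erggc_mag = rng.normal(900_000, 120_000, 50)                  # fake eRGCC from PERG magnitude
av_rnflt = 60 + 3.5e-5 * erggc_mag + rng.normal(0, 3, 50)     # fake RNFL thickness (microns)

slope, intercept = np.polyfit(erggc_mag, av_rnflt, 1)
pred = slope * erggc_mag + intercept
r2 = 1 - ((av_rnflt - pred) ** 2).sum() / ((av_rnflt - av_rnflt.mean()) ** 2).sum()
print(f"AvRNFLT ~ {slope:.2e} * eRGCC + {intercept:.1f}  (R^2 = {r2:.2f})")
```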

13 pages, 692 KB  
Article
Eye-Tracking Algorithm for Early Glaucoma Detection: Analysis of Saccadic Eye Movements in Primary Open-Angle Glaucoma
by Cansu Yuksel Elgin
J. Eye Mov. Res. 2025, 18(3), 18; https://doi.org/10.3390/jemr18030018 - 19 May 2025
Cited by 1 | Viewed by 1264
Abstract
Glaucoma remains a leading cause of irreversible blindness worldwide, with early detection crucial for preventing vision loss. This study developed and validated a novel eye-tracking algorithm to detect oculomotor abnormalities in primary open-angle glaucoma (POAG). We conducted a case–control study (March–June 2021), recruiting 16 patients with moderate POAG, 16 with preperimetric POAG, and 16 age-matched controls. The participants underwent a comprehensive ophthalmic examination and eye movement recording using a high-resolution infrared tracker during two tasks: saccades to static targets and saccades to moving targets. The patients with POAG exhibited a significantly increased saccadic latency and reduced accuracy compared to the controls, with more pronounced differences in the moving target task. Notably, preperimetric POAG patients showed significant abnormalities despite having normal visual fields based on standard perimetry. Our machine learning algorithm incorporating multiple saccadic parameters achieved an excellent discriminative ability between glaucomatous and healthy subjects (AUC = 0.92), with particularly strong performance for moderate POAG (AUC = 0.97) and good performance for preperimetric POAG (AUC = 0.87). These findings suggest that eye movement analysis may serve as a sensitive biomarker for early glaucomatous damage, potentially enabling earlier intervention and improved visual outcomes. Full article
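
Saccadic latency, one of the parameters this study feeds into its classifier, is conventionally measured as the time from target onset until eye velocity first exceeds a threshold (commonly around 30 deg/s). The sketch below shows that generic computation; the sampling rate, threshold, and synthetic trace are assumptions, not the study's settings.

```python
# Saccadic latency via a velocity-threshold crossing, on a synthetic 500 Hz gaze trace.
import numpy as np

def saccade_latency_ms(gaze_deg, fs_hz, target_onset_idx, vel_thresh=30.0):
    vel = np.abs(np.gradient(gaze_deg)) * fs_hz               # deg/s
    after = np.nonzero(vel[target_onset_idx:] > vel_thresh)[0]
    return None if after.size == 0 else 1000.0 * after[0] / fs_hz

fs = 500                                                       # 500 Hz tracker
trace = np.concatenate([np.zeros(150), np.linspace(0, 10, 25), np.full(125, 10.0)])
trace += np.random.default_rng(6).normal(0, 0.01, trace.size)  # small measurement noise
print(f"latency ~ {saccade_latency_ms(trace, fs, target_onset_idx=50):.0f} ms")
```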

20 pages, 4777 KB  
Article
Quality Assurance of the Whole Slide Image Evaluation in Digital Pathology: State of the Art and Development Results
by Miklós Vincze, Béla Molnár and Miklós Kozlovszky
Electronics 2025, 14(10), 1943; https://doi.org/10.3390/electronics14101943 - 10 May 2025
Viewed by 1914
Abstract
One of the key issues in medicine is quality assurance. It is essential to ensure the quality, consistency and validity of the various diagnostic processes performed. Today, the reproducibility and quality assurance of the analysis of digitized image data is an unsolved problem. Our research has focused on the design and development of functionalities that can be used to greatly increase the verifiability of the evaluation of digitized medical image data, thereby reducing the number of misdiagnoses. In addition, our research presents a possible application of eye-tracking to determine the evaluation status of medical samples. At the beginning of our research, we looked at how eye-tracking technology is used in medical fields today and investigated the consistency of medical diagnoses. In our research, we designed and implemented a solution that can determine the evaluation state of a tomogram-type 3D sample by monitoring physiological and software parameters while using the software. In addition, our solution described in this paper is able to capture and reconstruct/replay complete VR diagnoses made in a 3D environment. This allows the diagnoses made in our system to be shared and further evaluated. We set up our own equations to quantify the evaluation status of a given 3D tomogram. At the end of the paper, we summarize our results and compare them with those of other researchers. Full article
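
The abstract mentions custom equations for quantifying how completely a digitized sample has been evaluated. As a loose illustration only, and not the authors' equations, one simple proxy is viewing coverage: the fraction of tiles of the slide or tomogram that received at least one fixation.

```python
# Hypothetical coverage proxy for evaluation status of a tiled digital sample.
def coverage_fraction(viewed_tiles, grid_shape):
    """viewed_tiles: set of (row, col) tiles that were fixated; grid_shape: (rows, cols) of the tiling."""
    rows, cols = grid_shape
    return len(viewed_tiles) / (rows * cols)

viewed = {(0, 0), (0, 1), (1, 1), (2, 3)}
print(f"evaluation coverage: {coverage_fraction(viewed, (4, 4)):.0%}")   # 25%
```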

24 pages, 7986 KB  
Article
Employing Eye Trackers to Reduce Nuisance Alarms
by Katherine Herdt, Michael Hildebrandt, Katya LeBlanc and Nathan Lau
Sensors 2025, 25(9), 2635; https://doi.org/10.3390/s25092635 - 22 Apr 2025
Viewed by 1189
Abstract
When process operators anticipate an alarm prior to its annunciation, that alarm loses information value and becomes a nuisance. This study investigated using eye trackers to measure and adjust the salience of alarms with three methods of gaze-based acknowledgement (GBA) of alarms that estimate operator anticipation. When these methods detected possible alarm anticipation, the alarm’s audio and visual salience was reduced. A total of 24 engineering students (male = 14, female = 10) aged between 18 and 45 were recruited to predict alarms and control a process parameter in three scenario types (parameter near threshold, trending, or fluctuating). The study evaluated whether behaviors of the monitored parameter affected how frequently the three GBA methods were utilized and whether reducing alarm salience improved control task performance. The results did not show significant task improvement with any GBA methods (F(3,69) = 1.357, p = 0.263, partial η2 = 0.056). However, the scenario type affected which GBA method was more utilized (X2 (2, N = 432) = 30.147, p < 0.001). Alarm prediction hits with gaze-based acknowledgements coincided more frequently than alarm prediction hits without gaze-based acknowledgements (X2 (1, N = 432) = 23.802, p < 0.001, OR = 3.877, 95% CI 2.25–6.68, p < 0.05). Participant ratings indicated an overall preference for the three GBA methods over a standard alarm design (F(3,63) = 3.745, p = 0.015, partial η2 = 0.151). This study provides empirical evidence for the potential of eye tracking in alarm management but highlights the need for additional research to increase validity for inferring alarm anticipation. Full article
(This article belongs to the Special Issue New Trends in Biometric Sensing and Information Processing)
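
The core gaze-based acknowledgement (GBA) idea above is: if recent fixations show the operator has already been attending to the parameter that is about to alarm, annunciate the alarm at reduced audio/visual salience. The sketch below illustrates that logic; the 5 s look-back window and 500 ms dwell criterion are invented thresholds, not the study's three GBA methods.

```python
# Hedged sketch of gaze-based acknowledgement: reduce alarm salience if the operator
# has recently dwelled on the alarming parameter's display region.
def alarm_salience(fixations, alarm_aoi, alarm_time_s,
                   lookback_s=5.0, dwell_criterion_s=0.5):
    """fixations: list of (aoi, start_s, end_s). Returns 'reduced' or 'full'."""
    window_start = alarm_time_s - lookback_s
    dwell = sum(min(end, alarm_time_s) - max(start, window_start)
                for aoi, start, end in fixations
                if aoi == alarm_aoi and end > window_start and start < alarm_time_s)
    return "reduced" if dwell >= dwell_criterion_s else "full"

fixes = [("tank_level", 11.2, 12.0), ("flow_rate", 12.1, 12.9), ("tank_level", 13.4, 14.1)]
print(alarm_salience(fixes, alarm_aoi="tank_level", alarm_time_s=15.0))   # -> "reduced"
```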

25 pages, 2408 KB  
Article
Enhancing Spatial Ability Assessment: Integrating Problem-Solving Strategies in Object Assembly Tasks Using Multimodal Joint-Hierarchical Cognitive Diagnosis Modeling
by Jujia Li, Kaiwen Man and Joni M. Lakin
J. Intell. 2025, 13(3), 30; https://doi.org/10.3390/jintelligence13030030 - 5 Mar 2025
Cited by 3 | Viewed by 2235
Abstract
We proposed a novel approach to investigate how problem-solving strategies, identified using response time and eye-tracking data, can impact individuals’ performance on the Object Assembly (OA) task. To conduct an integrated assessment of spatial reasoning ability and problem-solving strategy, we applied the Multimodal Joint-Hierarchical Cognitive Diagnosis Model (MJ-DINA) to analyze the performance of young students (aged 6 to 14) on 17 OA items. The MJ-DINA model consists of three sub-models: a Deterministic Inputs, Noisy “and” Gate (DINA) model for estimating spatial ability, a lognormal RT model for response time, and a Bayesian Negative Binomial (BNF) model for fixation counts. In the DINA model, we estimated five spatial cognitive attributes aligned with problem-solving processes: encoding, falsification, mental rotation, mental displacement, and intractability recognition. Our model fits the data adequately, with Gelman–Rubin convergence statistics near 1.00 and posterior predictive p-values between 0.05 and 0.95 for the DINA, Log RT, and BNF sub-models, indicating reliable parameter estimation. Our findings indicate that individuals with faster processing speeds and fewer fixation counts, which we label Reflective-Scanner, outperformed the other three identified problem-solving strategy groups. Specifically, sufficient eye movement was a key factor contributing to better performance on spatial reasoning tasks. Additionally, the most effective method for improving individuals’ spatial task performance was training them to master the falsification attribute. This research offers valuable implications for developing tailored teaching methods to improve individuals’ spatial ability, depending on various problem-solving strategies. Full article
(This article belongs to the Special Issue Intelligence Testing and Assessment)
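
The DINA ("deterministic inputs, noisy and gate") component referenced above scores an item response as P(correct) = (1 − slip)^η · guess^(1 − η), where η = 1 only if the examinee has mastered every attribute the item requires. A generic sketch follows, with made-up Q-matrix entries and slip/guess values; it is not the authors' estimation code.

```python
# DINA item-response probability for one examinee and one item.
import numpy as np

def dina_p_correct(alpha, q_row, slip, guess):
    """alpha: examinee attribute mastery (0/1 vector); q_row: item's required attributes (0/1)."""
    eta = int(np.all(alpha[q_row == 1] == 1))        # 1 iff all required attributes are mastered
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 1, 0, 1, 1])                    # mastery of the five spatial attributes
q_row = np.array([1, 0, 1, 0, 0])                    # hypothetical item requiring two attributes
print(dina_p_correct(alpha, q_row, slip=0.1, guess=0.2))   # -> 0.2 (one required attribute missing)
```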

12 pages, 605 KB  
Article
Eye Tracking as Biomarker Compared to Neuropsychological Tests in Parkinson Syndromes: An Exploratory Pilot Study Before and After Deep Transcranial Magnetic Stimulation
by Celine Cont, Nathalie Stute, Anastasia Galli, Christina Schulte and Lars Wojtecki
Brain Sci. 2025, 15(2), 180; https://doi.org/10.3390/brainsci15020180 - 11 Feb 2025
Cited by 2 | Viewed by 2611
Abstract
Background/Objectives: Neurodegenerative diseases such as Parkinson’s disease (PD) are becoming increasingly prevalent, necessitating diverse treatment options to manage symptoms. The effectiveness of these treatments depends on accurate and sensitive diagnostic methods. This exploratory pilot study examines the use of eye tracking and compares it with neuropsychological tests in patients treated with deep transcranial magnetic stimulation (dTMS). Methods: We used the HTC Vive Pro Eye VR headset with a Tobii eye tracker to measure eye movements in 10 Parkinson syndrome patients while viewing three 360-degree scenes. Eye movements were recorded pre- and post-dTMS, focusing on Fixation Duration, Longest Fixation Period, Saccade Rate, and Total Fixations. Neuropsychological assessments (MoCA, TUG, BDI) were conducted before and after stimulation. dTMS was performed using the Brainsway device with the H5 helmet, targeting the motor cortex (1 Hz) and the prefrontal cortex (10 Hz) for 7–12 sessions. Results: ROC analysis indicated a moderate ability to differentiate between states using eye movement parameters. Significant correlations were found between changes in the longest fixation period and MoCA scores (r = 0.65, p = 0.025), and between fixation durations and BDI scores (r = −0.55, p = 0.043). Paired t-tests showed no significant differences in eye movement parameters, but BDI scores were significantly reduced post-dTMS (t(5) = 2.57, p = 0.049). Conclusions: Eye-tracking parameters, particularly the Longest Fixation Duration and Saccade Rate, could serve as sensitive and feasible biomarkers for cognitive changes in Parkinson’s Syndrome, offering a quick alternative to traditional methods. Traditional neuropsychological tests showed a significant improvement in depressive symptoms after dTMS. Further research with larger sample sizes is necessary to validate these findings and explore the diagnostic utility of eye tracking. Full article
(This article belongs to the Special Issue Cognition Training: From Classical Methods to Technical Applications)
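
The correlation analysis reported above relates the pre-to-post change in an eye-movement parameter to the change in a neuropsychological score. The sketch below illustrates only that analysis step; the values are invented and do not reproduce the study's results.

```python
# Pearson correlation between change in longest fixation period and change in MoCA score (fake data).
import numpy as np
from scipy.stats import pearsonr

delta_longest_fix = np.array([0.12, -0.05, 0.30, 0.08, 0.22, -0.10, 0.15, 0.05, 0.27, 0.01])
delta_moca = np.array([1, 0, 3, 1, 2, -1, 2, 0, 3, 0])

r, p = pearsonr(delta_longest_fix, delta_moca)
print(f"r = {r:.2f}, p = {p:.3f}")
```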

17 pages, 2175 KB  
Case Report
Neurobehavioral Outcomes Relate to Activation Ratio in Female Carriers of Fragile X Syndrome Full Mutation: Two Pediatric Case Studies
by Elisa Di Giorgio, Silvia Benavides-Varela, Annamaria Porru, Sara Caviola, Marco Lunghi, Paola Rigo, Giovanna Mioni, Giulia Calignano, Martina Annunziata, Eloisa Valenza, Valentina Liani, Federica Beghetti, Fabiola Spolaor, Elisa Bettella, Roberta Polli, Zimi Sawacha and Alessandra Murgia
Int. J. Mol. Sci. 2025, 26(2), 771; https://doi.org/10.3390/ijms26020771 - 17 Jan 2025
Viewed by 2078
Abstract
Fragile X syndrome (FXS) is a genetic neurodevelopmental disorder that causes a range of developmental problems including cognitive and behavioral impairment and learning disabilities. FXS is caused by full mutations (FM) of the FMR1 gene expansions to over 200 repeats, with hypermethylation of the cytosine–guanine–guanine (CGG) tandem repeated region in its promoter, resulting in transcriptional silencing and loss of gene function. Female carriers of FM are typically less impaired than males. The Activation Ratio (AR), the fraction of the normal allele carried on the active X chromosome, is thought to play a crucial modifying role in defining phenotype severity. Here, we compare the cognitive, neuropsychological, adaptive, and behavioral profile of two FXS girls (10 and 11 years old) with seemingly identical FMR1 genotypic profile of FM but distinctive AR levels (70% vs. 30%). A multi-method protocol, combining molecular pathophysiology and phenotypical measures, parent reports, lab-based tasks, gait analyses, and eye-tracking was employed. Results showed that lower AR corresponds to worse performances in most (cognitive, neuropsychological, adaptive, behavioral, social, mathematical skills), but not all the considered areas (i.e., time perception and gait analysis). These observations underscore the importance of AR as a phenotypic modifying parameter in females affected with FXS. Full article

18 pages, 6414 KB  
Article
Stimuli-Induced Equilibrium Point-Based Algorithm for Motion Planning of a Heavy-Load Servo System
by Ziping Wan, Nanbin Zhao and Guang’an Ren
Automation 2025, 6(1), 3; https://doi.org/10.3390/automation6010003 - 7 Jan 2025
Viewed by 1643
Abstract
To tackle the problems of power saturation and high energy consumption of the heavy-load servo system in a servo process, we propose a motion planning algorithm based on the stimuli-induced equilibrium point (SIEP), named the SIEP-MP algorithm. First, we explore the correlation between various modes of the bionic eye system and the heavy-load servo system through head-eye motion control theory and derive the core formula of the SIEP-MP algorithm from psychological field theory. Then, we design a speed loop of the heavy-load servo system by combining a speed controller and a disturbance observer. Furthermore, we create a position loop of the heavy-load servo system by combining a position controller and a feed-forward controller. We verify the low-pass filtering and range-limiting functions of the SIEP-MP algorithm by building the experimental platform, designing the target trajectory, and setting the control parameters. Compared with low-pass filters, experimental results demonstrate similar command filtering together with elimination of power saturation and energy savings, and the algorithm achieves better mode-switching performance. The proposed SIEP-MP algorithm can ensure the optimal tracking performance of the heavy-load servo system in different modes through mode switching. Full article
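
For context on the baseline the SIEP-MP planner is compared against, the sketch below shows a generic first-order low-pass command filter with a rate limit, which provides the command-filtering and range-limiting behavior mentioned in the abstract. It is not the SIEP-MP algorithm itself, and the time constant and rate limit are arbitrary.

```python
# Generic low-pass, rate-limited command shaping (a conventional motion-planning baseline).
import numpy as np

def shape_command(target, dt=0.001, tau=0.05, rate_limit=200.0):
    """target: array of raw position commands; returns a smoothed, rate-limited command."""
    out = np.empty_like(target, dtype=float)
    y = target[0]
    for i, r in enumerate(target):
        dy = (r - y) / tau                            # first-order low-pass response
        dy = np.clip(dy, -rate_limit, rate_limit)     # limit commanded speed (range limiting)
        y += dy * dt
        out[i] = y
    return out

step = np.concatenate([np.zeros(100), np.full(900, 10.0)])   # 10-unit step command
print(shape_command(step)[::200])                            # sampled shaped trajectory
```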
