Search Results (103)

Search Parameters:
Keywords = eye movement metrics

18 pages, 1711 KB  
Article
Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study
by Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič and Ana Fakin
J. Eye Mov. Res. 2025, 18(5), 55; https://doi.org/10.3390/jemr18050055 - 15 Oct 2025
Viewed by 382
Abstract
Real-world navigation depends on coordinated head–eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3–1.5 s later than controls (p ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool. Full article
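Saccade counts and amplitudes like those compared above are typically derived from raw gaze samples with a velocity-threshold (I-VT) algorithm. A minimal pure-Python sketch, assuming one-dimensional gaze angles sampled at a fixed rate; the 30 deg/s threshold is a common illustrative default, not the value used in this study.

```python
def detect_saccades(angles, rate_hz, vel_threshold=30.0):
    """Label samples as saccadic when angular velocity exceeds a
    threshold (deg/s), then merge consecutive samples into events.

    Returns a list of (start_idx, end_idx, amplitude_deg) tuples.
    """
    dt = 1.0 / rate_hz
    # point-to-point angular velocity in deg/s
    velocities = [abs(angles[i + 1] - angles[i]) / dt
                  for i in range(len(angles) - 1)]
    saccades, start = [], None
    for i, v in enumerate(velocities):
        if v > vel_threshold and start is None:
            start = i
        elif v <= vel_threshold and start is not None:
            saccades.append((start, i, abs(angles[i] - angles[start])))
            start = None
    if start is not None:
        saccades.append((start, len(angles) - 1,
                         abs(angles[-1] - angles[start])))
    return saccades
```

A 10-degree step in the gaze trace at 100 Hz is detected as a single saccade of 10 degrees amplitude.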

17 pages, 2603 KB  
Article
The Effect of Visual Attention Dispersion on Cognitive Response Time
by Yejin Lee and Kwangtae Jung
J. Eye Mov. Res. 2025, 18(5), 52; https://doi.org/10.3390/jemr18050052 - 10 Oct 2025
Viewed by 513
Abstract
In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, p < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments. Full article
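The study's key metric, heatmap entropy, can be illustrated as Shannon entropy over a normalized fixation-count grid; a higher value means gaze spread over more of the display. This is a simplified sketch of the general idea, not the authors' exact computation.

```python
import math

def heatmap_entropy(grid):
    """Shannon entropy (bits) of a fixation-count heatmap.

    `grid` is a 2-D list of non-negative fixation counts; higher
    entropy means attention dispersed over more cells.
    """
    counts = [c for row in grid for c in row]
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

A uniform 2×2 grid gives the maximum 2 bits; all fixations in one cell give 0 bits.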

15 pages, 2453 KB  
Article
Assessing REM Sleep as a Biomarker for Depression Using Consumer Wearables
by Roland Stretea, Zaki Milhem, Vadim Fîntînari, Cătălina Angela Crișan, Alexandru Stan, Dumitru Petreuș and Ioana Valentina Micluția
Diagnostics 2025, 15(19), 2498; https://doi.org/10.3390/diagnostics15192498 - 1 Oct 2025
Viewed by 2693
Abstract
Background: Rapid-eye-movement (REM) sleep disinhibition—shorter REM latency and a larger nightly REM fraction—is a well-described laboratory correlate of major depression. Whether the same pattern can be captured efficiently with consumer wearables in everyday settings remains unclear. We therefore quantified REM latency and proportion of REM sleep out of total sleep duration (labeled “REM sleep coefficient”) from Apple Watch recordings and examined their association with depressive symptoms. Methods: 191 adults wore an Apple Watch for 15 consecutive nights while a custom iOS app streamed raw accelerometry and heart-rate data. Sleep stages were scored with a neural-network model previously validated against polysomnography. REM latency and REM sleep coefficient were averaged per participant. Depressive severity was assessed twice with the Beck Depression Inventory and averaged. Descriptive statistics, normality tests, Spearman correlations, and ordinary-least-squares regressions were performed. Results: Mean ± SD values were BDI 13.52 ± 6.79, REM sleep coefficient 24.05 ± 6.52, and REM latency 103.63 ± 15.44 min. REM latency correlated negatively with BDI (Spearman ρ = −0.673, p < 0.001), whereas REM sleep coefficient correlated positively (ρ = 0.678, p < 0.001). Combined in a bivariate model, the two REM metrics explained 62% of variance in depressive severity. Conclusions: Wearable-derived REM latency and REM proportion jointly capture a large share of depressive-symptom variability, indicating their potential utility as accessible digital biomarkers. Larger longitudinal and interventional studies are needed to determine whether modifying REM architecture can alter the course of depression. Full article
(This article belongs to the Special Issue A New Era in Diagnosis: From Biomarkers to Artificial Intelligence)
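The reported associations (ρ = −0.673, ρ = 0.678) are Spearman correlations, i.e. Pearson correlation computed on ranks. A minimal pure-Python sketch with average ranks for ties; in practice one would use a statistics library.

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any strictly monotone relationship yields ρ = ±1, which is why Spearman suits non-normal metrics like REM latency.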

59 pages, 824 KB  
Systematic Review
A Systematic Review of Techniques for Artifact Detection and Artifact Category Identification in Electroencephalography from Wearable Devices
by Pasquale Arpaia, Matteo De Luca, Lucrezia Di Marino, Dunja Duran, Ludovica Gargiulo, Paola Lanteri, Nicola Moccaldi, Marco Nalin, Mauro Picciafuoco, Rachele Robbio and Elisa Visani
Sensors 2025, 25(18), 5770; https://doi.org/10.3390/s25185770 - 16 Sep 2025
Cited by 1 | Viewed by 1767
Abstract
Wearable electroencephalography (EEG) enables brain monitoring in real-world environments beyond clinical settings; however, the relaxed constraints of the acquisition setup often compromise signal quality. This review examines methods for artifact detection and for the identification of artifact categories (e.g., ocular) and specific sources (e.g., eye blink) in wearable EEG. A systematic search was conducted across six databases using the query: (“electroencephalographic” OR “electroencephalography” OR “EEG”) AND (“Artifact detection” OR “Artifact identification” OR “Artifact removal” OR “Artifact rejection”) AND “wearable”. Following PRISMA guidelines, 58 studies were included. Artifacts in wearable EEG exhibit specific features due to dry electrodes, reduced scalp coverage, and subject mobility, yet only a few studies explicitly address these peculiarities. Most pipelines integrate detection and removal phases but rarely separate their impact on performance metrics, mainly accuracy (71%) when the clean signal is the reference and selectivity (63%), assessed with respect to physiological signal. Wavelet transforms and ICA, often using thresholding as a decision rule, are among the most frequently used techniques for managing ocular and muscular artifacts. ASR-based pipelines are widely applied for ocular, movement, and instrumental artifacts. Deep learning approaches are emerging, especially for muscular and motion artifacts, with promising applications in real-time settings. Auxiliary sensors (e.g., IMUs) are still underutilized despite their potential in enhancing artifact detection under ecological conditions. Only two studies addressed artifact category identification. A mapping of validated pipelines per artifact type and a survey of public datasets are provided to support benchmarking and reproducibility. Full article

20 pages, 1239 KB  
Article
Monitoring Visual Fatigue with Eye Tracking in a Pharmaceutical Packing Area
by Carlos Albarrán Morillo, John F. Suárez-Pérez, Micaela Demichela, Mónica Andrea Camargo Salinas and Nasli Yuceti Miranda Arandia
Sensors 2025, 25(18), 5702; https://doi.org/10.3390/s25185702 - 12 Sep 2025
Viewed by 1784
Abstract
This study investigates visual fatigue in a real-world pharmaceutical packaging environment, where operators perform repetitive inspection and packing tasks under frequently suboptimal lighting conditions. A human-centered methodology was adopted, combining adapted self-report questionnaires, high-frequency eye-tracking data collected with Tobii Pro Glasses 3, and lux-level measurements. Key eye-movement metrics—including fixation duration, visit patterns, and pupil diameter—were analyzed within defined work zones (Areas of Interest). To reduce data complexity and uncover latent patterns of visual behavior, Principal Component Analysis was applied. Results revealed a progressive increase in visual fatigue across the workweek and throughout shifts, particularly during night work, and showed a strong association with inadequate lighting. Tasks involving high physical workload under poor illumination emerged as critical risk scenarios. This integrated approach not only confirmed the presence of visual fatigue but also identified high-risk conditions in the workflow, enabling targeted ergonomic interventions. The findings provide a practical framework for improving operator well-being and inspection performance through sensor-based monitoring and environment-specific design enhancements, in alignment with the goals of Industry 5.0. Full article
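The Principal Component Analysis step mentioned above reduces correlated eye metrics to a few latent dimensions. As an illustration only (real analyses use a library such as scikit-learn), the first principal component can be found by power iteration on the sample covariance matrix:

```python
def first_principal_component(rows, iters=200):
    """First PC of row-wise samples via power iteration on the
    sample covariance matrix (pure-Python illustration)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix (d x d)
    cov = [[sum(row[i] * row[j] for row in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d  # initial direction; converges to dominant eigenvector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

For perfectly correlated two-dimensional data the component points along the diagonal, as expected.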

20 pages, 1051 KB  
Article
Managing Consumer Attention to Sustainability Cues in Tourism Advertising: Insights from Eye-Tracking Research
by Marek Jóźwiak
Sustainability 2025, 17(18), 8175; https://doi.org/10.3390/su17188175 - 11 Sep 2025
Viewed by 1021
Abstract
Sustainable tourism requires balancing environmental protection, social equity, and economic viability, yet its effective promotion depends on communication strategies that genuinely capture travelers’ attention. Despite growing emphasis on ecological responsibility in marketing, little is known about how sustainability-related content in tourism advertising is actually perceived. This study addresses this gap by examining visual attention to eco-oriented elements in promotional materials through eye-tracking technology. The research aimed to identify whether ecological certifications, slogans, and related cues attract attention and influence consumer choices, and to assess how these processes are moderated by individual ecological awareness. An experimental design was conducted with 23 young adults (aged 18–22) who viewed three tourism offers differing in their degree of sustainability messaging. Eye movements were recorded with the Gazepoint GP3 HD eye-tracker, focusing on predefined Areas of Interest (AOIs), including ecological certificates, pricing, and imagery. Heatmaps and fixation metrics were complemented by a post-exposure questionnaire. The results indicate that visually dominant components such as destination images and pricing consistently attracted the most attention, while sustainability cues were noticed but rarely prioritized. Participants with higher ecological awareness actively sought and recalled these elements, highlighting the moderating role of intrinsic motivation. The study contributes to both sustainable tourism and neuromarketing research by demonstrating how ecological values interact with perceptual behavior. Practically, it shows that eye-tracking can guide the optimal placement and design of sustainability cues in advertising. The exploratory nature and small, homogeneous sample are acknowledged as limitations, but they provide a valuable foundation for future large-scale studies. Full article
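AOI-based fixation metrics like those used here boil down to assigning each fixation to a region and accumulating dwell time. A minimal sketch with rectangular AOIs; the AOI names and coordinates below are hypothetical, not from the study.

```python
def aoi_dwell_times(fixations, aois):
    """Total fixation duration per Area of Interest.

    fixations: list of (x, y, duration_ms)
    aois: dict name -> (x_min, y_min, x_max, y_max)
    """
    dwell = {name: 0.0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                break  # assign each fixation to at most one AOI
    return dwell
```

Fixations outside every AOI are simply ignored, mirroring how heatmap attention outside defined regions is usually excluded from AOI statistics.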

26 pages, 2810 KB  
Article
Assessment of Postural Stability in Semi-Open Prisoners: A Pilot Study
by Michalina Błażkiewicz, Jacek Wąsik, Justyna Kędziorek, Wiktoria Bandura, Jakub Kacprzak, Kamil Radecki, Karolina Kowalewska and Dariusz Mosler
J. Clin. Med. 2025, 14(18), 6399; https://doi.org/10.3390/jcm14186399 - 10 Sep 2025
Cited by 1 | Viewed by 505
Abstract
Background/Objectives: This study investigated postural stability in male inmates of a semi-open correctional facility, with a specific focus on comparing individuals with and without a history of substance dependence. The aim was to identify how addiction-related neurophysiological changes impact postural control under varying sensory and biomechanical demands. Methods: A total of 47 adult male prisoners (mean age: 24.3 years) participated in this study. Nineteen inmates had a documented history of alcohol or drug dependence (addicted group), while twenty-eight had no such history (non-addicted group). All participants were physically able and free of neurological disorders. Postural control was assessed using a stabilometric platform and wireless IMU across six 30 s standing tasks of varying difficulty (bipedal/unipedal stance and eyes open/closed). Linear (center of pressure path and ellipse area) and nonlinear (sample entropy, fractal dimension, and the Lyapunov exponent) sway metrics were analyzed, along with trunk kinematics from IMU data. This study received institutional ethical approval; trial registration was not required. Results: The addicted group showed greater instability, especially in the eyes-closed and single-leg tasks, with increased sway and irregularity in the anterior–posterior direction. IMU data indicated altered trunk motion, suggesting impaired neuromuscular control. In contrast, non-addicted individuals demonstrated more efficient, targeted postural strategies, while addicted participants relied on broader, less selective movements, possibly reflecting compensatory or neuroadaptive changes from substance use. Conclusions: Substance dependence is associated with compromised postural stability in incarcerated men. Balance assessments may be valuable for detecting functional impairments and guiding rehabilitation within prison healthcare systems. Full article
(This article belongs to the Special Issue Substance and Behavioral Addictions: Prevention and Diagnosis)
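One of the nonlinear sway metrics above, sample entropy, quantifies signal regularity: the negative log of the conditional probability that sequences matching for m points also match for m + 1. A minimal sketch using an absolute tolerance (in practice the tolerance is usually a fraction of the signal's standard deviation):

```python
import math

def sample_entropy(x, m=2, tol=0.2):
    """Sample entropy of series x: lower values = more regular signal."""
    def match_pairs(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        pairs = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                # Chebyshev (max-coordinate) distance between templates
                if max(abs(a - b) for a, b in
                       zip(templates[i], templates[j])) <= tol:
                    pairs += 1
        return pairs
    b, a = match_pairs(m), match_pairs(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)
```

For a constant 20-sample series every template matches, so the value reduces to log of the ratio of pair counts, near zero, reflecting maximal regularity.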

28 pages, 5366 KB  
Article
Interpretable Quantification of Scene-Induced Driver Visual Load: Linking Eye-Tracking Behavior to Road Scene Features via SHAP Analysis
by Jie Ni, Yifu Shao, Yiwen Guo and Yongqi Gu
J. Eye Mov. Res. 2025, 18(5), 40; https://doi.org/10.3390/jemr18050040 - 9 Sep 2025
Viewed by 584
Abstract
Road traffic accidents remain a major global public health concern, where complex urban driving environments significantly elevate drivers’ visual load and accident risks. Unlike existing research that adopts a macro perspective by considering multiple factors such as the driver, vehicle, and road, this study focuses on the driver’s visual load, a key safety factor, and its direct source—the driver’s visual environment. We have developed an interpretable framework combining computer vision and machine learning to quantify how road scene features influence oculomotor behavior and scene-induced visual load, establishing a complete and interpretable link between scene features, eye movement behavior, and visual load. Using the DR(eye)VE dataset, visual attention demand is established through occlusion experiments and confirmed to correlate with eye-tracking metrics. K-means clustering is applied to classify visual load levels based on discriminative oculomotor features, while semantic segmentation extracts quantifiable road scene features such as the Green Visibility Index, Sky Visibility Index and Street Canyon Enclosure. Among multiple machine learning models (Random Forest, Ada-Boost, XGBoost, and SVM), XGBoost demonstrates optimal performance in visual load detection. SHAP analysis reveals critical thresholds: the probability of high visual load increases when pole density exceeds 0.08%, signage surpasses 0.55%, or buildings account for more than 14%; while blink duration/rate decrease when street enclosure exceeds 38% or road congestion goes beyond 25%, indicating elevated visual load. The proposed framework provides actionable insights for urban design and driver assistance systems, advancing traffic safety through data-driven optimization of road environments. Full article

16 pages, 1431 KB  
Article
Assessing Smooth Pursuit Eye Movements Using Eye-Tracking Technology in Patients with Schizophrenia Under Treatment: A Pilot Study
by Luis Benigno Contreras-Chávez, Valdemar Emigdio Arce-Guevara, Luis Fernando Guerrero, Alfonso Alba, Miguel G. Ramírez-Elías, Edgar Roman Arce-Santana, Victor Hugo Mendez-Garcia, Jorge Jimenez-Cruz, Anna Maria Maddalena Bianchi and Martin O. Mendez
Sensors 2025, 25(16), 5212; https://doi.org/10.3390/s25165212 - 21 Aug 2025
Viewed by 2230
Abstract
Schizophrenia is a complex disorder that affects mental organization and cognitive functions, including concentration and memory. One notable manifestation of cognitive change in schizophrenia is a diminished ability to scan and perform tasks related to visual inspection. Of the three evaluable aspects of ocular movement (saccadic, smooth pursuit, and fixation), smooth pursuit eye movement (SPEM) in particular involves the tracking of slowly moving objects and is closely related to attention, visual memory, and processing speed. However, evaluating smooth pursuit in clinical settings is challenging due to the technical complexities of detecting these movements, resulting in limited research and clinical application. This pilot study investigates whether quantitative metrics derived from eye-tracking data can distinguish between patients with schizophrenia under treatment and healthy controls. The study included nine healthy participants and nine individuals receiving treatment for schizophrenia. Gaze trajectories were recorded using an eye tracker during a controlled visual tracking task performed during a clinical visit. Spatiotemporal analysis of the gaze trajectories evaluated three features: polygonal area, colocalities, and direction difference. Subsequently, a support vector machine (SVM) was used to assess the separability of healthy individuals and those with schizophrenia based on the identified gaze-trajectory features. The results show statistically significant differences between controls and subjects with schizophrenia for all computed indexes (p < 0.05) and high separability, achieving around 90% accuracy, sensitivity, and specificity. These results suggest the potential development of a valuable clinical tool for the evaluation of SPEM, offering utility in clinics to assess the efficacy of therapeutic interventions in individuals with schizophrenia. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
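The abstract names "polygonal area" as one gaze-trajectory feature but does not define it; a common choice for such a feature is the shoelace-formula area of the polygon traced by the ordered gaze points. A hedged sketch of that interpretation:

```python
def polygon_area(points):
    """Shoelace formula: area of the polygon whose vertices are the
    ordered gaze points. A larger area suggests wider gaze excursion."""
    n = len(points)
    acc = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]  # wrap around to close the polygon
        acc += x0 * y1 - x1 * y0
    return abs(acc) / 2.0
```

The formula recovers the expected areas for simple shapes (a unit square, a 3-4-5 right triangle), which makes it easy to sanity-check before applying it to gaze data.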

13 pages, 1048 KB  
Article
Driving Behavior of Older and Younger Drivers in Simplified Emergency Scenarios
by Yun Xiao, Mingming Dai and Shouqiang Xue
Sensors 2025, 25(16), 5178; https://doi.org/10.3390/s25165178 - 20 Aug 2025
Viewed by 684
Abstract
This study focuses on exploring the differences in driving abilities in emergency traffic situations between older drivers (aged 60–70) and young drivers (aged 20–35) in a simple traffic environment. Two typical emergency scenarios were designed in the experiment: Scenario A (intrusion of electric bicycles) and Scenario B (pedestrians crossing the road). The experiment employed a driving simulation system to synchronously collect data on eye movement characteristics, driving behavior, and physiological metrics from 30 drivers. Two-factor covariance analysis, correlation analysis, and regression analysis were conducted on the experimental data. The comprehensive study results indicated that the older group exhibited better driving performance in emergency scenarios compared to the younger group. Specifically, in Scenario A, the older group had a faster first fixation time on the AOI compared to the younger group, a faster braking reaction time, a higher maximum brake pedal depth, and a higher skin conductance level. In Scenario B, the older group’s driving performance was similar to that in Scenario A, with better performance than the younger group. The study reveals that in some simple driving tasks, young-old drivers (60–70 years) can compensate for their physiological decline through self-regulation and self-restraint, thereby exhibiting safer driving behaviors. Full article
(This article belongs to the Section Vehicular Sensing)

24 pages, 6883 KB  
Article
A Human-in-the-Loop Study of Eye-Movement-Based Control for Workload Reduction in Delayed Teleoperation of Ground Vehicles
by Qiang Zhang, Aiping Zhao, Feng Zhao and Wangyu Wu
Machines 2025, 13(8), 735; https://doi.org/10.3390/machines13080735 - 18 Aug 2025
Viewed by 1065
Abstract
Teleoperated ground vehicles (TGVs) are widely applied in hazardous and dynamic environments, where communication delay and low transparency increase operator workload and reduce control performance. This study explores the cognitive and physiological workload associated with such conditions and evaluates the effectiveness of an eye-movement-based predicted trajectory guidance control (ePTGC) framework in alleviating operator burden. A human-in-the-loop teleoperation experiment was conducted using a 2 × 2 within-subject design, incorporating subjective ratings (NASA-TLX), objective performance metrics from a dual-task paradigm (one-back memory task), and multimodal physiological indicators (ECG and EDA). Results show that delay and low transparency significantly elevated subjective, objective, and physiological workload levels. Compared to direct control (DC), the ePTGC framework significantly reduced workload across all three dimensions, particularly under high-delay conditions, while maintaining or even improving task performance. Notably, ePTGC enabled even lower workload levels under low-delay conditions than the baseline condition. These findings demonstrate the potential of the ePTGC framework to enhance teleoperation stability and reduce operator burden in delay-prone and low-transparency scenarios. Full article

17 pages, 886 KB  
Article
Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
by Paweł Cybulski
J. Eye Mov. Res. 2025, 18(4), 35; https://doi.org/10.3390/jemr18040035 - 7 Aug 2025
Viewed by 531
Abstract
Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol—central or peripheral—can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from separate studies involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, were extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces. Full article
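The evaluation metrics listed above (accuracy, precision, recall, F1) all derive from the binary confusion matrix. A self-contained sketch of the standard definitions, for a two-class problem such as central vs. peripheral target location:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Reporting all four together matters here because, as the abstract notes, a model can show high recall on one dataset while overall accuracy and class separation remain poor.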

29 pages, 16016 KB  
Article
An Eye Movement Monitoring Tool: Towards a Non-Invasive Device for Amblyopia Treatment
by Juan Camilo Castro-Rizo, Juan Pablo Moreno-Garzón, Carlos Arturo Narváez Delgado, Nicolas Valencia-Jimenéz, Javier Ferney Castillo García and Alvaro Alexander Ocampo-Gonzalez
Sensors 2025, 25(15), 4823; https://doi.org/10.3390/s25154823 - 6 Aug 2025
Viewed by 989
Abstract
Amblyopia, commonly affecting children aged 0–6 years, results from disrupted visual processing during early development and often leads to reduced visual acuity in one eye. This study presents the development and preliminary usability assessment of a non-invasive ocular monitoring device designed to support oculomotor engagement and therapy adherence in amblyopia management. The system incorporates an interactive maze-navigation task controlled via gaze direction, implemented during monocular and binocular sessions. The device tracks lateral and anteroposterior eye movements and generates visual reports, including displacement metrics and elliptical movement graphs. Usability testing was conducted with a non-probabilistic adult sample (n = 15), including individuals with and without amblyopia. The System Usability Scale (SUS) yielded an average score of 75, indicating good usability. Preliminary tests with two adults diagnosed with amblyopia suggested increased eye displacement during monocular sessions, potentially reflecting enhanced engagement rather than direct therapeutic improvement. This feasibility study demonstrates the device’s potential as a supportive, gaze-controlled platform for visual engagement monitoring in amblyopia rehabilitation. Future clinical studies involving pediatric populations and integration of visual stimuli modulation are recommended to evaluate therapeutic efficacy and adaptability for early intervention. Full article
(This article belongs to the Section Biomedical Sensors)
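The SUS average of 75 reported above follows the standard System Usability Scale scoring: odd-numbered items contribute (response − 1), even-numbered items (5 − response), and the sum is scaled by 2.5 onto 0–100. A direct sketch of that formula:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute response - 1;
    even-numbered items (negatively worded) contribute 5 - response;
    the sum is scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

All-neutral answers (3 on every item) score 50; the best possible pattern scores 100, so the device's 75 sits clearly in the "good usability" band.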

18 pages, 1588 KB  
Article
EEG-Based Attention Classification for Enhanced Learning Experience
by Madiha Khalid Syed, Hong Wang, Awais Ahmad Siddiqi, Shahnawaz Qureshi and Mohamed Amin Gouda
Appl. Sci. 2025, 15(15), 8668; https://doi.org/10.3390/app15158668 - 5 Aug 2025
Cited by 1 | Viewed by 2251
Abstract
This paper presents a novel EEG-based learning system designed to enhance the efficiency and effectiveness of studying by dynamically adjusting the difficulty level of learning materials based on real-time attention levels. In the training phase, EEG signals corresponding to high and low concentration levels are recorded while participants engage in quizzes to learn and memorize Chinese characters. The attention levels are determined based on performance metrics derived from the quiz results. Following extensive preprocessing, the EEG data undergoes several feature extraction steps: removal of artifacts due to eye blinks and facial movements, segregation of waves based on their frequencies, similarity indexing with respect to delay, binary thresholding, and principal component analysis (PCA). These extracted features are then fed into a k-NN classifier, which accurately distinguishes between high and low attention brain wave patterns, with the labels derived from the quiz performance indicating high or low attention. During the implementation phase, the system continuously monitors the user’s EEG signals while studying. When low attention levels are detected, the system increases the repetition frequency and reduces the difficulty of the flashcards to refocus the user’s attention. Conversely, when high concentration levels are identified, the system escalates the difficulty level of the flashcards to maximize the learning challenge. This adaptive approach ensures a more effective learning experience by maintaining optimal cognitive engagement, resulting in improved learning rates, reduced stress, and increased overall learning efficiency. Our results indicate that this EEG-based adaptive learning system holds significant potential for personalized education, fostering better retention and understanding of Chinese characters. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
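The classifier at the heart of the system above is k-NN. A minimal sketch of the general algorithm (Euclidean distance, majority vote among the k nearest training points), not the authors' exact pipeline:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, query), label)
        for x, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With two well-separated clusters of feature vectors, queries near each cluster are assigned the corresponding label, which is why k-NN works well once the EEG features are discriminative.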

18 pages, 1047 KB  
Article
Eye Movement Patterns as Indicators of Text Complexity in Arabic: A Comparative Analysis of Classical and Modern Standard Arabic
by Hend Al-Khalifa
J. Eye Mov. Res. 2025, 18(4), 30; https://doi.org/10.3390/jemr18040030 - 16 Jul 2025
Viewed by 704
Abstract
This study investigates eye movement patterns as indicators of text complexity in Arabic, focusing on the comparative analysis of Classical Arabic (CA) and Modern Standard Arabic (MSA) text. Using the AraEyebility corpus, which contains eye-tracking data from readers of both CA and MSA text, we examined differences in fixation patterns, regression rates, and overall reading behavior between these two forms of Arabic. Our analyses revealed significant differences in eye movement metrics between CA and MSA text, with CA text consistently eliciting more fixations, longer fixation durations, and more frequent revisits. Multivariate analysis confirmed that language type has a significant combined effect on eye movement patterns. Additionally, we identified different relationships between text features and eye movements for CA versus MSA text, with sentence-level features emerging as significant predictors across both language types. Notably, we observed an interaction between language type and readability level, with readers showing less sensitivity to readability variations in CA text compared to MSA text. These findings contribute to our understanding of how historical language evolution affects reading behavior and have practical implications for Arabic language education, publishing, and assessment. The study demonstrates the value of eye movement analysis for understanding text complexity in Arabic and highlights the importance of considering language-specific features when studying reading processes. Full article
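The regression rates compared above count backward movements through the text. Under a simplified definition (the fraction of inter-fixation transitions that move to an earlier word position), the metric can be sketched as:

```python
def regression_rate(word_positions):
    """Fraction of inter-fixation transitions that move backward in
    the text (a 'regression'), given an ordered list of the word
    indices fixated during reading."""
    if len(word_positions) < 2:
        return 0.0
    moves = list(zip(word_positions, word_positions[1:]))
    regressions = sum(1 for a, b in moves if b < a)
    return regressions / len(moves)
```

Word indices follow reading order, so the same definition applies to a right-to-left script like Arabic without modification.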
