Search Results (747)

Search Parameters:
Keywords = visual object tracking

18 pages, 4563 KB  
Article
Enhancing Multi Object Tracking with CLIP: A Comparative Study on DeepSORT and StrongSORT
by Khadijah Alkandary, Ahmet Serhat Yildiz and Hongying Meng
Electronics 2026, 15(2), 265; https://doi.org/10.3390/electronics15020265 - 7 Jan 2026
Abstract
Multi-object tracking (MOT) is a crucial task in video analysis but is often hindered by frequent identity (ID) switches, particularly in crowded or occluded scenarios. This study explores the integration of a vision-language model, CLIP, into two tracking-by-detection frameworks, DeepSORT and StrongSORT, to enhance appearance-based re-identification. YOLOv8x is employed as the base detector due to its robust localization performance, while CLIP’s visual features replace the default appearance encoders, providing more discriminative and semantically rich embeddings. We evaluate the CLIP-enhanced DeepSORT and StrongSORT on sequences from two challenging real-world benchmarks, MOT15 and MOT16. Furthermore, we analyze the generalizability of YOLOv8x when trained on the MOT20 benchmark and applied with the chosen trackers to MOT15 and MOT16. Our findings show that both CLIP-enhanced trackers substantially reduce ID switches and improve ID-based tracking metrics, with CLIP-StrongSORT achieving the most consistent gains. In addition, YOLOv8x demonstrates strong generalization to unseen datasets. These results highlight the effectiveness of incorporating vision-language models into MOT frameworks, particularly under visually challenging conditions.
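A minimal sketch of the core idea, swapping a tracker's appearance encoder for CLIP image embeddings. The Hugging Face `transformers` stack and the checkpoint name are assumptions for illustration; the paper does not specify this implementation.

```python
# Sketch: CLIP visual features as re-ID embeddings for a SORT-style tracker.
# The checkpoint and cosine-similarity gating are illustrative assumptions.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

@torch.no_grad()
def clip_embeddings(crops):
    """Map a list of PIL detection crops to L2-normalized CLIP features."""
    inputs = processor(images=crops, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def appearance_similarity(track_emb, det_embs):
    # Both sides are normalized, so the dot product is cosine similarity;
    # a tracker would fuse this with motion (Kalman) costs before matching.
    return det_embs @ track_emb
```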

21 pages, 1489 KB  
Article
A Study on the Color and Glossiness of Polypropylene (PP) Films Based on the Visual Perception of Elderly Users
by Dong Jin, Xu Chen, Wangting Jiang, Zhichang Xu and Xiaoxing Yan
Coatings 2026, 16(1), 68; https://doi.org/10.3390/coatings16010068 - 7 Jan 2026
Abstract
In age-friendly furniture design, the visual characteristics of material surfaces play a crucial role in shaping the usage experience and preferences of elderly users. In response to the application requirements of decorative materials for furniture surfaces, this study focuses on polypropylene (PP) films, with the objective of investigating the visual perceptual preferences of elderly users concerning the surface color and glossiness attributes of these films. An eye-tracking experiment was first conducted to identify color preferences among elderly participants, using indicators including first fixation duration, total fixation duration, and total fixation count. Subsequently, a questionnaire survey was administered to assess user satisfaction with PP films featuring different glossiness levels—high-gloss, semi-gloss, and matte—and to explore whether glossiness significantly influences color preference. The experimental results revealed the following: (1) red hues exhibited stronger initial visual attraction, while colors with low saturation and medium-to-high lightness sustained longer visual engagement; (2) matte finishes received significantly higher satisfaction ratings than both semi-gloss and high-gloss finishes, with this preference remaining consistent across different color conditions; (3) glossiness did not exert a significant influence on color preference. The findings provide consumer-oriented insights for the design of PP films that better accommodate the preferences of elderly users, while also offering guidance for the application and optimization of green and safe furniture materials.
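A minimal sketch of the three eye-tracking indicators named in the abstract, computed per area of interest (AOI) from a list of fixation events. The event layout (AOI label, onset, duration) is an assumed format, not the study's export schema.

```python
# Sketch: first fixation duration, total fixation duration, and fixation
# count for one AOI. Data layout is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str          # area of interest the fixation landed on
    onset_ms: float
    duration_ms: float

def aoi_metrics(fixations, aoi):
    hits = sorted((f for f in fixations if f.aoi == aoi),
                  key=lambda f: f.onset_ms)
    if not hits:
        return {"first_fixation_ms": 0.0, "total_fixation_ms": 0.0, "count": 0}
    return {
        "first_fixation_ms": hits[0].duration_ms,               # initial attraction
        "total_fixation_ms": sum(f.duration_ms for f in hits),  # sustained engagement
        "count": len(hits),
    }

fixations = [Fixation("red_sample", 120, 180), Fixation("red_sample", 900, 240),
             Fixation("blue_sample", 400, 150)]
print(aoi_metrics(fixations, "red_sample"))
```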

13 pages, 2714 KB  
Article
Millimeter-Wave Radar and Mixed Reality Virtual Reality System for Agility Analysis of Table Tennis Players
by Yung-Hoh Sheu, Li-Wei Tai, Li-Chun Chang, Tz-Yun Chen and Sheng-K Wu
Computers 2026, 15(1), 28; https://doi.org/10.3390/computers15010028 - 6 Jan 2026
Abstract
This study proposes an integrated agility assessment system that combines Millimeter-Wave (MMW) radar, Ultra-Wideband (UWB) ranging, and Mixed Reality (MR) technologies to quantitatively evaluate athlete performance with high accuracy. The system utilizes the fine motion-tracking capability of MMW radar and the immersive real-time visualization provided by MR to ensure reliable operation under low-light conditions and multi-object occlusion, thereby enabling precise measurement of mobility, reaction time, and movement distance. To address the challenge of player identification during doubles testing, a one-to-one UWB configuration was adopted, in which each base station was paired with a wearable tag to distinguish individual athletes. UWB identification was not required during single-player tests. The experimental protocol included three specialized agility assessments—Table Tennis Agility Test I (TTAT I), Table Tennis Doubles Agility Test II (TTAT II), and the Agility T-Test (ATT)—conducted with more than 80 table tennis players of different technical levels (80% male and 20% female). Each athlete completed two sets of two trials to ensure measurement consistency and data stability. Experimental results demonstrated that the proposed system effectively captured displacement trajectories, movement speed, and reaction time. The MMW radar achieved an average measurement error of less than 10%, and the overall classification model attained an accuracy of 91%, confirming the reliability and robustness of the integrated sensing pipeline. Beyond local storage and MR-based live visualization, the system also supports cloud-based data uploading for graphical analysis and enables MR content to be mirrored on connected computer displays. This feature allows coaches to monitor performance in real time and provide immediate feedback. By integrating the environmental adaptability of MMW radar, the real-time visualization capability of MR, UWB-assisted athlete identification, and cloud-based data management, the proposed system demonstrates strong potential for professional sports training, technical diagnostics, and tactical optimization. It delivers timely and accurate performance metrics and contributes to the advancement of data-driven sports science applications.
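A hedged sketch of how the reported mobility metrics (movement distance, speed, reaction time) could be derived from a radar track of timestamped positions. The sampling format and the speed-threshold definition of reaction time are assumptions, not the paper's exact pipeline.

```python
# Sketch: agility metrics from a track of (t, x, y) samples.
# The move_thresh_m_s reaction-time rule is an illustrative assumption.
import math

def mobility_metrics(track, stimulus_t, move_thresh_m_s=0.3):
    """track: list of (t_seconds, x_m, y_m) tuples sorted by time."""
    distance = 0.0
    reaction_time = None
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        distance += step
        speed = step / (t1 - t0)
        # first instant after the cue at which speed exceeds the threshold
        if reaction_time is None and t1 >= stimulus_t and speed > move_thresh_m_s:
            reaction_time = t1 - stimulus_t
    duration = track[-1][0] - track[0][0]
    return {"distance_m": distance,
            "avg_speed_m_s": distance / duration if duration else 0.0,
            "reaction_time_s": reaction_time}
```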
(This article belongs to the Section Human–Computer Interactions)

21 pages, 7464 KB  
Article
Enhanced CenterTrack for Robust Underwater Multi-Fish Tracking
by Jinfeng Wang, Mingrun Lin, Zhipeng Cheng, Renyou Yang and Qiong Huang
Animals 2026, 16(2), 156; https://doi.org/10.3390/ani16020156 - 6 Jan 2026
Abstract
Accurate monitoring of fish movement is essential for understanding behavioral patterns and group dynamics in aquaculture systems. Underwater scenes—characterized by dense populations, frequent occlusions, non-rigid body motion, and visually similar appearances—present substantial challenges for conventional multi-object tracking methods. We propose an improved CenterTrack-based framework tailored for multi-fish tracking in such environments. The framework integrates three complementary components: a multi-branch feature extractor that enhances discrimination among visually similar individuals, occlusion-aware output heads that estimate visibility states, and a three-stage cascade association module that improves trajectory continuity under abrupt motion and occlusions. To support systematic evaluation, we introduce a self-built dataset named Multi-Fish 25 (MF25), comprising continuous video sequences of 75 individually annotated fish recorded in aquaculture tanks. The experimental results on MF25 show that the proposed method achieves an IDF1 of 82.5%, MOTA of 85.8%, and IDP of 84.7%. Although this study focuses on tracking performance rather than biological analysis, the produced high-quality trajectories form a solid basis for subsequent behavioral studies. The framework’s modular design and computational efficiency make it suitable for practical, online tracking in aquaculture scenarios.
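A generic sketch of cascaded (multi-stage) track-detection association: confident matches are made first, and leftovers get progressively looser gates. The greedy IoU matcher and thresholds are illustrative assumptions; the paper's three-stage module also incorporates estimated visibility states.

```python
# Sketch: three-stage cascade association with decreasing IoU gates.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def cascade_associate(tracks, dets, gates=(0.7, 0.5, 0.3)):
    """tracks/dets: dicts id -> box (x1, y1, x2, y2). Returns matched id pairs."""
    matches, free_t, free_d = [], dict(tracks), dict(dets)
    for gate in gates:                     # stage 1 strict, stage 3 loose
        for tid, tbox in list(free_t.items()):
            best = max(free_d.items(),
                       key=lambda kv: iou(tbox, kv[1]), default=None)
            if best and iou(tbox, best[1]) >= gate:
                matches.append((tid, best[0]))
                free_t.pop(tid); free_d.pop(best[0])
    return matches, list(free_t), list(free_d)
```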
(This article belongs to the Special Issue Fish Cognition and Behaviour)

32 pages, 2191 KB  
Article
Evaluating Color Perception in Indoor Cultural Display Spaces of Traditional Chinese Floral Arrangements: A Combined Semantic Differential and Eye-Tracking Study
by Kun Yuan, Pingfang Fan, Han Qin and Wei Gong
Buildings 2026, 16(1), 181; https://doi.org/10.3390/buildings16010181 - 31 Dec 2025
Abstract
The color design of architectural interior display spaces directly affects the effectiveness of cultural information communication and the visual cognitive experience of viewers. However, there is currently a lack of combined subjective and objective evaluation regarding how to scientifically translate and apply traditional color systems in modern contexts. This study takes the virtual display space of traditional Chinese floral arrangements as a case and constructs an evaluation framework integrating the semantic differential method and eye-tracking technology to empirically examine how color schemes based on the translation of traditional aesthetics affect the subjective perception and objective visual attention behavior of modern viewers. First, colors were extracted and translated from Song Dynasty paintings and literature to construct five sets of culturally representative color combination samples, which were then applied to standardized virtual exhibition booths. Eye-tracking data from 49 participants during free viewing were recorded, and subjective ratings were collected on four dimensions: cultural color atmosphere perception, color matching comfort, artwork form clarity, and explanatory text clarity. Data analysis employed linear mixed models, non-parametric tests, and Spearman’s rank correlation analysis. The results show that, regarding subjective perception, the color schemes exhibited significant differences in traditional feel, comfort, and text clarity, with Sample 4 and Sample 5 performing better on multiple indicators; a moderate, significant positive correlation was found between traditional cultural atmosphere perception and color matching comfort. Regarding objective eye-tracking behavior, color significantly influenced the overall visual engagement duration and the processing depth of the text area. In particular, the color scheme of Sample 5 better promoted sustained reading of auxiliary textual information, while the total fixation duration for Sample 4 was significantly shorter than for the other schemes. No direct correlation was found between subjective ratings and spontaneous eye-tracking behavior under the experimental conditions of this study; the depth of processing textual information was a key factor driving overall visual engagement. The research provides empirical evidence and design insights for the scientific application of color in spaces such as cultural heritage displays to optimize visual experience.
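A minimal sketch of the Spearman rank correlation used to relate atmosphere perception to comfort ratings. The rating vectors below are placeholders, not the study's data.

```python
# Sketch: Spearman's rank correlation between two subjective rating scales.
from scipy.stats import spearmanr

atmosphere = [5, 3, 4, 2, 5, 4, 3, 1]   # per-participant atmosphere scores
comfort    = [4, 3, 5, 2, 5, 3, 3, 2]   # per-participant comfort scores

rho, p_value = spearmanr(atmosphere, comfort)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")  # positive rho = consistent ranking
```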
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

24 pages, 7208 KB  
Article
Dynamic SLAM by Combining Rigid Feature Point Set Modeling and YOLO
by Pengchao Ding, Weidong Wang, Xian Wu, Kangle Xu, Dongmei Wu and Zhijiang Du
Sensors 2026, 26(1), 235; https://doi.org/10.3390/s26010235 - 30 Dec 2025
Abstract
To obtain accurate location information in dynamic environments, we propose a dynamic visual–inertial SLAM algorithm that operates in real time. In this paper, we combine the YOLO-V5 algorithm with a depth threshold extraction algorithm to achieve real-time pixel-level segmentation of objects. Meanwhile, to address the situation where dynamic targets are occluded by other objects, we design an object depth extraction method based on K-means clustering. We also design a factor graph optimization with rigid and non-rigid dynamic objects based on object category division, in order to better utilize the motion information of dynamic objects. We use the Kalman filter algorithm to achieve object matching and tracking. At the same time, to obtain as many rigid targets as possible, we design an adaptive rigid point set modeling algorithm to further supplement the rigid objects. Finally, we evaluate the algorithm on public datasets and self-built datasets, verifying its ability to handle dynamic environments.
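A hedged sketch of K-means-based depth extraction for a partially occluded detection: cluster the depth pixels inside the box and pick one cluster as the object's depth. The choice of k=2 and the nearest-cluster selection rule are assumptions for illustration, not the paper's exact method.

```python
# Sketch: estimate an object's depth from the depth pixels in its box,
# separating object from occluder/background via K-means. k=2 and the
# "keep the nearest cluster" rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def object_depth(depth_patch, k=2):
    """depth_patch: HxW array of depth values inside the detection box."""
    d = depth_patch.reshape(-1, 1).astype(float)
    valid = np.isfinite(d).ravel() & (d.ravel() > 0)   # drop holes/invalid pixels
    d = d[valid].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(d)
    centers = [d[labels == i].mean() for i in range(k)]
    return min(centers)  # assume the tracked object is the closer cluster
```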
(This article belongs to the Section Sensing and Imaging)

21 pages, 3769 KB  
Article
Benchmarking Robust AI for Microrobot Detection with Ultrasound Imaging
by Ahmed Almaghthawi, Changyan He, Suhuai Luo, Furqan Alam, Majid Roshanfar and Lingbo Cheng
Actuators 2026, 15(1), 16; https://doi.org/10.3390/act15010016 - 29 Dec 2025
Abstract
Microrobots are emerging as transformative tools in minimally invasive medicine, with applications in non-invasive therapy, real-time diagnosis, and targeted drug delivery. Effective use of these systems critically depends on accurate detection and tracking of microrobots within the body. Among commonly used imaging modalities, including MRI, CT, and optical imaging, ultrasound (US) offers an advantageous balance of portability, low cost, non-ionizing safety, and high temporal resolution, making it particularly suitable for real-time microrobot monitoring. This study reviews current detection strategies and presents a comparative evaluation of six advanced AI-based multi-object detectors (ConvNeXt, Res2NeXt-101, ResNeSt-269, U-Net, and the latest YOLO variants, v11 and v12) applied to microrobot detection in US imaging. Performance is assessed using standard metrics (AP50–95, precision, recall, F1-score) and robustness to four visual perturbations: blur, brightness variation, occlusion, and speckle noise. Additionally, feature-level sensitivity analyses are conducted to identify the contributions of different visual cues. Computational efficiency is also measured to assess suitability for real-time deployment. Results show that ResNeSt-269 achieved the highest detection accuracy, followed by Res2NeXt-101 and ConvNeXt, while YOLO-based detectors provided superior computational efficiency. These findings offer actionable insights for developing robust and efficient microrobot tracking systems with strong potential in diagnostic and therapeutic healthcare applications.
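A minimal sketch of the four robustness perturbations (blur, brightness variation, occlusion, speckle noise) applied to an ultrasound frame. Parameter values and the occlusion placement are illustrative assumptions, not the benchmark's settings.

```python
# Sketch: generate the four perturbed variants of an image for a
# robustness sweep. Magnitudes are illustrative assumptions.
import numpy as np
import cv2

def perturbations(img):
    out = {"blur": cv2.GaussianBlur(img, (9, 9), 0),
           "brightness": cv2.convertScaleAbs(img, alpha=1.0, beta=40)}
    occluded = img.copy()
    h, w = img.shape[:2]
    occluded[h // 3: h // 2, w // 3: w // 2] = 0        # black occluding patch
    out["occlusion"] = occluded
    noise = np.random.randn(*img.shape)                  # multiplicative speckle
    out["speckle"] = np.clip(img + img * noise * 0.2, 0, 255).astype(img.dtype)
    return out
```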

19 pages, 2276 KB  
Article
Towards Intelligent Water Safety: Robobuoy, a Deep Learning-Based Drowning Detection and Autonomous Surface Vehicle Rescue System
by Krittakom Srijiranon, Nanmanat Varisthanist, Thanapat Tardtong, Chatchadaporn Pumthurean and Tanatorn Tanantong
Appl. Syst. Innov. 2026, 9(1), 12; https://doi.org/10.3390/asi9010012 - 28 Dec 2025
Abstract
Drowning remains the third leading cause of accidental injury-related deaths worldwide, disproportionately affecting low- and middle-income countries where lifeguard coverage is limited or absent. To address this critical gap, we present Robobuoy, an intelligent real-time rescue system that integrates deep learning-based object detection with an unmanned surface vehicle (USV) for autonomous intervention. The system employs a monitoring station equipped with two specialized object detection models: YOLO12m for recognizing drowning individuals and YOLOv5m for tracking the USV. These models were selected for their balance of accuracy, efficiency, and compatibility with resource-constrained edge devices. A geometric navigation algorithm calculates heading directions from visual detections and guides the USV toward the victim. Experimental evaluations on a combined open-source and custom dataset demonstrated strong performance, with YOLO12m achieving an mAP@0.5 of 0.9284 for drowning detection and YOLOv5m achieving an mAP@0.5 of 0.9848 for USV detection. Hardware validation in a controlled water pool confirmed successful target-reaching behavior in all nine trials, achieving a positioning error within 1 m, with traversal times ranging from 11 to 23 s. By combining state-of-the-art computer vision and low-cost autonomous robotics, Robobuoy offers an affordable and low-latency prototype to enhance water safety in unsupervised aquatic environments, particularly in regions where conventional lifeguard surveillance is impractical.
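A hedged sketch of the geometric navigation step: derive a bearing for the USV from the detected victim and USV box centers in the camera frame. The image-plane approximation is an assumption; the paper's exact geometry may differ.

```python
# Sketch: heading from USV toward victim using detection box centers.
# Pixel-plane bearing is an illustrative simplification.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def heading_to_victim(usv_box, victim_box):
    """Bearing in degrees from the USV box center toward the victim box center."""
    ux, uy = center(usv_box)
    vx, vy = center(victim_box)
    return math.degrees(math.atan2(vy - uy, vx - ux))

print(heading_to_victim((100, 400, 160, 460), (480, 120, 520, 170)))
```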
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)

14 pages, 2792 KB  
Article
Seeing the Flaws? Visual Perception of Faces in Individuals Screening Positive for Body Dysmorphic Disorder: An Eye-Tracking Study
by Łukasz Banaszek, Marta Wojtkiewicz, Monika Rudzińska, Piotr Krysiak, Albert Stachura, Łukasz Mokros and Wiktor Pascal
J. Clin. Med. 2026, 15(1), 236; https://doi.org/10.3390/jcm15010236 - 28 Dec 2025
Abstract
Background: Body dysmorphic disorder (BDD) is a psychiatric condition characterized by a preoccupation with perceived appearance flaws. It is highly prevalent among aesthetic surgery candidates and can negatively impact surgical outcomes. The Body Dysmorphic Disorder Questionnaire (BDDQ) is used for BDD screening, but objective validation is limited. This study aimed to determine whether individuals screening positive for BDD exhibit different visual perception patterns of their own and model faces compared to controls, using eye-tracking technology. Methods: We conducted a cross-sectional study among 79 participants, including psychiatric patients and medical students. Participants completed the BDDQ and underwent eye-tracking while evaluating standardized photographs of models and their own faces. Gaze fixation patterns were recorded across pre-defined facial areas of interest. Differences in perception and aesthetic assessment between the BDDQ-positive and BDDQ-negative groups were studied. Results: Participants focused most frequently on the nose, eyes, and eyebrows. Compared with model faces, participants directed more attention toward the chin and cheeks of their own faces. However, BDDQ screening results did not significantly influence fixation patterns or eye-tracking metrics. Psychiatric patients, regardless of BDDQ status, exhibited more numerous and shorter fixations than students. All participants rated model faces as significantly more attractive (i.e., higher aesthetic rating) than their own, with the largest difference observed in the BDDQ-positive group. Conclusions: While individuals screening positive for BDD reported lower self-attractiveness, their eye-tracking patterns did not differ significantly from those of healthy participants. These findings suggest that the BDDQ remains a useful screening tool for subjective dissatisfaction but may not correspond to objective differences in facial visual processing.
(This article belongs to the Special Issue Facial Plastic and Cosmetic Medicine)

24 pages, 3742 KB  
Article
A Study on the Restorative Effects of Hydrangea Flower Color and Structure on Human Psychology and Physiology
by Qinhan Li, Xueni Ou, Shizhen Cai, Li Guo, Xiangyu Zhou, Xueqian Gong, Yinan Li, Zhigao Zhai, Mohamed Elsadek and Haoyuan Tang
Horticulturae 2026, 12(1), 34; https://doi.org/10.3390/horticulturae12010034 - 27 Dec 2025
Abstract
Amid the growing “nature deficit” associated with urbanization and indoor living, flowering plants are increasingly used to support psychological restoration. Yet evidence on how floral color and structural morphology jointly shape restorative outcomes remains limited. This study employed a within-subjects, repeated-measures design, utilizing physiological instruments and psychological questionnaires to investigate the physiological and psychological restorative benefits of Hydrangea macrophylla and to quantify the differences in restorative effects across five colors (blue, pink, white, mauve, red), two inflorescence types (mophead, lacecap), and two petal structures (single, double). Twenty-eight healthy young adults viewed 15 live hydrangea stimuli under controlled laboratory conditions. Multimodal outcomes combined objective measures—eye-tracking and single-channel EEG—with subjective measures (SD; POMS). Hydrangea exposure significantly reduced negative mood, and color and structure exerted distinct and interactive effects on visual attention and arousal. Red and mauve elicited larger pupil diameters than white and pink, while lacecap inflorescences were associated with lower cognitive load and improved attentional recovery relative to mophead. Double-petaled forms showed greater attentional dispersion than single-petaled forms. Interactions indicated that morphology modulated color effects. The mauve lacecap double-flowered cultivar (M02) showed the strongest observed restorative potential within this sample. These findings highlight the importance of integrating color and structural cues when selecting flowering plants for restorative environments and horticultural therapy, and they motivate field-based replications with broader samples and higher-density physiology.
(This article belongs to the Section Outreach, Extension, and Education)

27 pages, 8689 KB  
Article
Comparative Evaluation of YOLO Models for Human Position Recognition with UAVs During a Flood
by Nataliya Bilous, Vladyslav Malko, Iryna Ahekian, Igor Korobiichuk and Volodymyr Ivanichev
Appl. Syst. Innov. 2026, 9(1), 6; https://doi.org/10.3390/asi9010006 - 25 Dec 2025
Abstract
Reliable recognition of people in water from UAV imagery remains a challenging task due to strong glare, wave-induced distortions, partial submersion, and the small visual scale of targets. This study proposes a hybrid method for human detection and position recognition in aquatic environments by integrating the YOLO12 object detector with optical-flow-based motion analysis, Kalman tracking, and BlazePose skeletal estimation. A combined training dataset was formed from four complementary sources, enabling the detector to generalize across heterogeneous maritime and flood-like scenes. YOLO12 demonstrated superior performance compared to earlier You Only Look Once (YOLO) generations, achieving the highest accuracy (mAP@0.5 = 0.95) and the lowest error rates on the test set. The hybrid configuration further improved recognition robustness by reducing false positives and partial detections under intense reflections and dynamic water motion. Real-time experiments on a Raspberry Pi 5 platform confirmed that the full system operates at 21 FPS, supporting onboard deployment for UAV-based search-and-rescue missions. The presented method improves localization reliability, enhances interpretation of human posture and motion, and facilitates prioritization of rescue actions. These findings highlight the practical applicability of YOLO12-based hybrid pipelines for real-time survivor detection in flood response and maritime safety workflows.
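A minimal sketch of the optical-flow component: estimate dense motion between consecutive frames and report the mean flow magnitude inside a detection box, one plausible way to distinguish actively moving people from drifting objects. The Farneback parameters and the box-averaging rule are illustrative assumptions.

```python
# Sketch: mean dense optical-flow magnitude inside a detection box.
import cv2
import numpy as np

def box_motion(prev_gray, cur_gray, box):
    """Mean flow magnitude (pixels/frame) inside box (x1, y1, x2, y2)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    x1, y1, x2, y2 = box
    patch = flow[y1:y2, x1:x2]                     # (h, w, 2) flow vectors
    return float(np.linalg.norm(patch, axis=2).mean())
```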
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)

17 pages, 5410 KB  
Article
Comparing Eye-Tracking and Verbal Reports in L2 Reading Process Research: Three Qualitative Studies
by Chengsong Yang, Guangwei Hu, Keyu Que and Na Fan
J. Eye Mov. Res. 2026, 19(1), 2; https://doi.org/10.3390/jemr19010002 - 25 Dec 2025
Abstract
This study compares the roles of eye-tracking and verbal reports (think-alouds and retrospective verbal reports, RVRs) in L2 reading process research through three qualitative studies. Findings indicate that eye-tracking provided precise, quantitative data on visual attention and reading patterns (e.g., fixation duration, gaze plots) and on choice-making during gap-filling. Based on our mapping, it was mostly effective in identifying 13 out of 47 reading processing strategies, primarily those involving skimming or scanning that had distinctive eye-movement signatures. Verbal reports, while less exact in measurement, offered direct access to cognitive processes (e.g., strategy use, reasoning) and uncovered content-specific thoughts inaccessible to eye-tracking. Both methods exhibited reactivity: eye-tracking could cause physical discomfort or altered reading behavior, whereas think-alouds could disrupt task flow or enhance reflection. This study reveals the respective strengths and limitations of eye-tracking and verbal reports in L2 reading research. It facilitates a more informed selection and application of these methodological approaches in alignment with specific research objectives, whether employed in isolation or in an integrated manner.

28 pages, 3940 KB  
Article
Visual Quality Assessment of Rural Landscapes Based on Eye-Tracking Analysis and Subjective Perception
by Yu Li, Hao Luo, Siqi Sun, Kun Wang and Qing Zhao
Sustainability 2026, 18(1), 161; https://doi.org/10.3390/su18010161 - 23 Dec 2025
Abstract
Traditional visual quality assessments of rural landscapes rely on subjective methods. This study integrates eye-tracking technology with subjective perception evaluation to construct a visual quality assessment model for rural landscapes, aiming to reveal the intrinsic relationship between objective visual behavior and subjective perception and to provide scientific guidance for rural landscape planning in support of sustainable rural development. Using landscape photographs from nine rural sampling sites in Guangzhou, eye-tracking experiments were conducted to collect participants’ eye movement data, combined with online questionnaires to obtain scenic beauty ratings and landscape characteristic factor evaluations. The findings reveal the following: (1) Eye-tracking experiments and subjective evaluation results showed high consistency, with samples having higher scenic beauty ratings demonstrating more prominent performance on core eye movement indicators such as total fixation duration and count, and total saccade duration, and typically possessing higher landscape characteristic factor values. (2) Urban–suburban-integrated rural landscapes exhibited poorer visual quality, characteristic-preservation rural landscapes elicited more in-depth and sustained visual exploration, and clustered-improvement rural landscapes possessed higher scenic beauty ratings and landscape characteristic factor values. (3) Total saccade duration was the key eye movement indicator for predicting scenic beauty ratings. (4) Multiple landscape characteristic factors significantly influenced eye movement behavior.
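A minimal sketch of finding (3): an ordinary least-squares fit of scenic beauty ratings on total saccade duration. The numbers are placeholders, not the study's measurements.

```python
# Sketch: predict scenic beauty ratings from total saccade duration (OLS).
# All values below are illustrative placeholders.
import numpy as np

saccade_s = np.array([2.1, 3.4, 4.0, 2.8, 5.1, 3.9, 4.6, 2.5])  # per sample
beauty    = np.array([5.2, 6.1, 6.8, 5.6, 7.4, 6.5, 7.0, 5.4])

slope, intercept = np.polyfit(saccade_s, beauty, deg=1)
pred = slope * saccade_s + intercept
r2 = 1 - ((beauty - pred) ** 2).sum() / ((beauty - beauty.mean()) ** 2).sum()
print(f"beauty ~= {slope:.2f} * saccade + {intercept:.2f} (R^2 = {r2:.2f})")
```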

20 pages, 7063 KB  
Article
Effective Brain Connectivity Analysis During Endogenous Selective Attention Based on Granger Causality
by Walter Escalante Puente de la Vega and Alexander N. Pisarchik
Appl. Sci. 2026, 16(1), 101; https://doi.org/10.3390/app16010101 - 22 Dec 2025
Abstract
Endogenous selective attention, the cognitive process of selectively attending to non-literal, ambiguous, or multistable interpretations of sensory input, remains poorly understood at the network level. To address this gap, we applied Granger causality (GC) analysis to electroencephalographic (EEG) recordings to characterize effective connectivity during sustained attention to ambiguous visual stimuli. Participants viewed the Necker cube, whose left and right faces were modulated at 6.67 Hz and 8.57 Hz, respectively, enabling objective tracking of perceptual dominance via steady-state visually evoked potentials (SSVEPs). GC analysis revealed robust directed connectivity between frontal and occipito-parietal areas during sustained perception of a specific cube orientation. We found that the magnitude of the GC-derived F-statistics correlated positively with attention performance indices during the left-face orientation task and negatively during the right-face orientation task, indicating that interregional causal influence scales with cognitive engagement in ambiguous interpretation. These results establish GC as a sensitive and reliable approach for characterizing dynamic, directional neural interactions during perceptual ambiguity, and, most notably, reveal, for the first time, an occipito-frontal effective connectivity architecture specifically recruited in support of endogenous selective attention. The methodology and findings hold translational potential for applications in neuroadaptive interfaces, cognitive diagnostics, and the study of disorders involving impaired symbolic processing.
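A hedged sketch of pairwise Granger causality between two EEG channels using statsmodels. Synthetic signals stand in for the occipital and frontal recordings; the lag order and preprocessing are illustrative assumptions, not the study's configuration.

```python
# Sketch: does the "occipital" signal Granger-cause the "frontal" one?
# Synthetic data; lag order is an illustrative assumption.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
occipital = rng.standard_normal(500)
frontal = np.roll(occipital, 3) + 0.5 * rng.standard_normal(500)  # lagged copy + noise

# Column order matters: tests whether column 2 Granger-causes column 1.
data = np.column_stack([frontal, occipital])
results = grangercausalitytests(data, maxlag=5)
f_stat = results[3][0]["ssr_ftest"][0]   # F-statistic at lag 3
print(f"F-statistic at lag 3: {f_stat:.1f}")
```

The F-statistics returned per lag are the quantities the abstract reports as correlating with attention performance indices.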

19 pages, 2564 KB  
Article
Dynamic Feature Elimination-Based Visual–Inertial Navigation Algorithm
by Jiawei Yu, Hongde Dai, Juan Li, Xin Li and Xueying Liu
Sensors 2026, 26(1), 52; https://doi.org/10.3390/s26010052 - 20 Dec 2025
Abstract
To address the problem of degraded positioning accuracy in traditional visual–inertial navigation systems (VINS) due to interference from moving objects in dynamic scenarios, this paper proposes an improved algorithm based on the VINS-Fusion framework, which resolves this issue through a synergistic combination of multi-scale feature optimization and real-time dynamic feature elimination. First, at the feature extraction front-end, the SuperPoint encoder structure is reconstructed. By integrating dual-branch multi-scale feature fusion and 1 × 1 convolutional channel compression, it simultaneously captures shallow texture details and deep semantic information, enhances the discriminative ability of static background features, and reduces mis-elimination near dynamic–static boundaries. Second, in the dynamic processing module, the ASORT (Adaptive Simple Online and Realtime Tracking) algorithm is designed. This algorithm combines an object detection network, adaptive Kalman filter-based trajectory prediction, and a Hungarian algorithm-based matching mechanism to identify moving objects in images in real time, filter out their associated dynamic feature points from the optimized feature point set, and ensure that only reliable static features are input to the backend optimization, thereby minimizing pose estimation errors caused by dynamic interference. Experiments on the KITTI dataset demonstrate that, compared with the original VINS-Fusion algorithm, the proposed method achieves an average improvement of approximately 14.8% in absolute trajectory accuracy, with an average single-frame processing time of 23.9 milliseconds. This validates that the proposed approach provides an efficient and robust solution for visual–inertial navigation in highly dynamic environments.
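A minimal sketch of the matching step in SORT-style trackers such as ASORT: build an IoU cost matrix between Kalman-predicted track boxes and new detections, then solve it with the Hungarian algorithm. The gating threshold is an illustrative assumption.

```python
# Sketch: Hungarian assignment of Kalman-predicted boxes to detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match(predicted_boxes, detections, min_iou=0.3):
    """Returns (track_index, detection_index) pairs whose IoU clears the gate."""
    cost = np.array([[1.0 - iou(p, d) for d in detections]
                     for p in predicted_boxes])
    rows, cols = linear_sum_assignment(cost)   # minimize total (1 - IoU)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
```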
(This article belongs to the Section Navigation and Positioning)
