Search Results (97)

Search Parameters:
Keywords = binocular coordination

31 pages, 25818 KB  
Article
FishKP-YOLOv11: An Automatic Estimation Model for Fish Size and Mass in Complex Underwater Environments
by Jinfeng Wang, Zhipeng Cheng, Mingrun Lin, Renyou Yang and Qiong Huang
Animals 2025, 15(19), 2862; https://doi.org/10.3390/ani15192862 - 30 Sep 2025
Viewed by 425
Abstract
The size and mass of fish are crucial parameters in aquaculture management. However, existing research primarily estimates fish size and mass under ideal conditions, which limits its application in real aquaculture scenarios with complex water quality and fluctuating lighting. A non-contact size and mass measurement framework is proposed for complex underwater environments, integrating FishKP-YOLOv11 (an improved module based on YOLOv11), stereo vision, and a Random Forest model. The framework fuses the detected 2D key points with binocular stereo matching to reconstruct 3D key point coordinates; fish size is computed from these 3D key points, and the Random Forest model maps size to mass. To validate the framework, a self-constructed grass carp key point detection dataset was established. The experimental results indicate that the mean average precision (mAP) of FishKP-YOLOv11 surpasses that of various versions of YOLOv5 through YOLOv12. The mean absolute errors (MAEs) for length and width estimation are 0.35 cm and 0.10 cm, respectively, and the MAE for mass estimation is 2.7 g. The proposed framework is therefore well suited to real breeding environments.
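To make the stereo-reconstruction step concrete, here is a minimal Python sketch of the pipeline the abstract describes: matched 2D key points from the two views are triangulated into 3D via the disparity, fish length is taken as the distance between 3D key points, and a Random Forest maps size features to mass. All key point coordinates, calibration values, and training pairs below are hypothetical, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def triangulate(pt_left, pt_right, fx, baseline, cx, cy):
    """3D point from a rectified stereo pair, assuming square pixels (fx = fy)."""
    disparity = pt_left[0] - pt_right[0]
    Z = fx * baseline / disparity          # depth from disparity
    X = (pt_left[0] - cx) * Z / fx
    Y = (pt_left[1] - cy) * Z / fx
    return np.array([X, Y, Z])

# Hypothetical snout/tail key points detected in both views (pixels).
snout_L, snout_R = (700.0, 400.0), (630.0, 400.0)
tail_L,  tail_R  = (846.0, 405.0), (776.0, 405.0)
fx, baseline, cx, cy = 1400.0, 0.12, 960.0, 540.0   # assumed calibration

snout_3d = triangulate(snout_L, snout_R, fx, baseline, cx, cy)
tail_3d  = triangulate(tail_L,  tail_R,  fx, baseline, cx, cy)
length_cm = np.linalg.norm(snout_3d - tail_3d) * 100.0   # ~25 cm here

# Map size features (length, width) to mass with a Random Forest (toy data).
X_train = np.array([[20.1, 5.2], [24.8, 6.0], [30.3, 7.1], [35.6, 8.4]])
y_train = np.array([180.0, 300.0, 520.0, 810.0])         # grams
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
mass_g = rf.predict([[length_cm, 6.3]])[0]               # 6.3 = hypothetical width
```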

22 pages, 4598 KB  
Article
A ST-ConvLSTM Network for 3D Human Keypoint Localization Using MmWave Radar
by Siyuan Wei, Huadong Wang, Yi Mo and Dongping Du
Sensors 2025, 25(18), 5857; https://doi.org/10.3390/s25185857 - 19 Sep 2025
Viewed by 420
Abstract
Accurate human keypoint localization in complex environments demands robust sensing and advanced modeling. In this article, we construct an ST-ConvLSTM network for 3D human keypoint estimation from millimeter-wave radar point clouds. The network processes multi-channel radar image inputs, generated from multi-frame fused point clouds, through parallel pathways engineered to extract rich spatiotemporal features from the sequential radar data. The extracted features are then fused and fed into fully connected layers for direct regression of 3D human keypoint coordinates. To achieve better network performance, a mmWave radar 3D human keypoint dataset (MRHKD) is built with a hybrid human motion annotation system (HMAS), in which a binocular camera measures the human keypoint coordinates and a 60 GHz 4T4R radar generates the radar point clouds. Experimental results demonstrate that the proposed ST-ConvLSTM, leveraging its ability to model temporal dependencies and spatial patterns in radar imagery, achieves MAEs of 0.1075 m, 0.0633 m, and 0.1180 m in the horizontal, vertical, and depth directions, respectively, underscoring the model's enhanced posture recognition accuracy and keypoint localization capability in challenging conditions.
(This article belongs to the Special Issue Advances in Multichannel Radar Systems)
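As an illustration of the core recurrence, the sketch below implements a generic convolutional LSTM cell in PyTorch, rolls it over a sequence of radar frames, and regresses 3D keypoints with a fully connected head. This is a minimal generic sketch, not the authors' architecture; the channel counts, 32×32 grid, and 17-keypoint layout are assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class KeypointRegressor(nn.Module):
    """Run the ConvLSTM over T radar frames, then regress K 3D keypoints."""
    def __init__(self, in_ch=3, hid_ch=32, grid=32, n_kp=17):
        super().__init__()
        self.cell = ConvLSTMCell(in_ch, hid_ch)
        self.head = nn.Linear(hid_ch * grid * grid, n_kp * 3)

    def forward(self, frames):                      # (B, T, C, H, W)
        B, T, _, H, W = frames.shape
        h = frames.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):                          # temporal recurrence
            h, c = self.cell(frames[:, t], (h, c))
        return self.head(h.flatten(1)).view(B, -1, 3)

coords = KeypointRegressor()(torch.randn(2, 8, 3, 32, 32))  # -> (2, 17, 3)
```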

26 pages, 14192 KB  
Review
Current Research Status and Development Trends of Key Technologies for Pear Harvesting Robots
by Hongtu Zhang, Binbin Wang, Liyang Su, Zhongyi Yu, Xinchao Liu, Xiangsen Meng, Keyao Zhao and Xiongkui He
Agronomy 2025, 15(9), 2163; https://doi.org/10.3390/agronomy15092163 - 10 Sep 2025
Viewed by 591
Abstract
In response to the global labor shortage in the pear industry, the use of robots for harvesting has become an inevitable trend, and developing pear harvesting robots for orchard operations is of significant importance. This paper systematically reviews the progress of three key technologies in pear harvesting robotics. First, in recognition technology, traditional methods are limited by sensitivity to lighting conditions and occlusion errors, whereas deep learning models, such as the optimized YOLO series and two-stage architectures, significantly enhance robustness in complex scenes and improve the handling of overlapping fruits. Second, positioning technology has advanced from 2D pixel coordinate acquisition to 3D spatial reconstruction, with the integration of pose estimation (binocular vision plus IMU) addressing occlusion issues. Third, end effectors are categorized by harvesting mechanism: gripping–twisting, shearing, and adsorption (vacuum negative pressure). Challenges such as fruit skin damage and positioning bottlenecks remain, and current technologies still face three major problems: low harvesting efficiency, high fruit damage rates, and high equipment costs. In the future, breakthroughs are expected through the integration of agricultural machinery and agronomy (standardized planting), multi-arm collaborative operation, lightweight algorithms, and 5G cloud computing.
(This article belongs to the Section Precision and Digital Agriculture)

19 pages, 10474 KB  
Article
Locations of Non-Cooperative Targets Based on Binocular Vision Intersection and Its Error Analysis
by Kui Shi, Hongtao Yang, Jia Feng, Guangsen Liu and Weining Chen
Appl. Sci. 2025, 15(18), 9867; https://doi.org/10.3390/app15189867 - 9 Sep 2025
Viewed by 352
Abstract
Precise location of unknown non-cooperative targets is a long-standing technical problem that urgently needs to be solved in disaster relief and emergency rescue. An imaging model for photographing a non-cooperative target was established based on binocular vision forward intersection. The collinearity equations representing the spatial relationship between the target and its two images were obtained through coordinate system transformation, and the system of equations for the target's geographic coordinates was derived, realizing geo-location of unknown non-cooperative targets with no control points and no active source. The composition and sources of error in this location method were analyzed, the equation for the total location error was obtained from error synthesis theory, and the location accuracy was predicted: when the elevation difference between the camera and the target is 3 km, the predicted accuracy is 15.5 m. In a verification experiment, the same ground target was imaged by an aerial camera from different positions 3097 m above ground; the computed longitude and latitude were compared with the true geographic values, giving a location error of 16.3 m. This work provides a theoretical basis and methods for the precise location of unknown non-cooperative targets and proposes specific measures to improve location accuracy.
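The geometric core of forward intersection is finding the point closest to two viewing rays. Below is a minimal sketch of that step; the camera stations and unit ray directions are hypothetical stand-ins for the attitude-derived values the paper computes from its collinearity equations.

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Least-squares intersection (midpoint) of two 3D viewing rays.
    Each ray: p(t) = c + t*d, with unit direction d."""
    # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return (p1 + p2) / 2.0        # midpoint of the closest-approach segment

# Hypothetical camera stations (metres, local tangent frame) and unit rays
# toward the same ground target, 3 km below the flight altitude.
c1 = np.array([0.0, 0.0, 3097.0])
c2 = np.array([800.0, 0.0, 3097.0])
d1 = np.array([0.30, 0.10, -1.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([0.04, 0.10, -1.0]); d2 /= np.linalg.norm(d2)
target = intersect_rays(c1, d1, c2, d2)   # local coordinates of the target
```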

27 pages, 4681 KB  
Article
Gecko-Inspired Robots for Underground Cable Inspection: Improved YOLOv8 for Automated Defect Detection
by Dehai Guan and Barmak Honarvar Shakibaei Asli
Electronics 2025, 14(15), 3142; https://doi.org/10.3390/electronics14153142 - 6 Aug 2025
Viewed by 805
Abstract
To enable intelligent inspection of underground cable systems, this study presents a gecko-inspired quadruped robot that integrates multi-degree-of-freedom motion with a deep learning-based visual detection system. Inspired by the gecko's flexible spine and leg structure, the robot adapts well to confined and uneven tunnel environments. The motion system is modeled using the standard Denavit–Hartenberg (D–H) method, with both forward and inverse kinematics derived analytically, and a zero-impact foot trajectory is employed for stable gait planning. For defect detection, the robot incorporates a binocular vision module and an enhanced YOLOv8 framework. The key improvements include a lightweight feature fusion structure (SlimNeck), a multidimensional coordinate attention (MCA) mechanism, and a refined MPDIoU loss function, which collectively improve the detection accuracy of subtle defects such as insulation aging, micro-cracks, and surface contamination. Data augmentation techniques such as brightness adjustment, Gaussian noise, and occlusion simulation are applied to enhance robustness under complex lighting and environmental conditions. The experimental results validate the effectiveness of the proposed system in both kinematic control and vision-based defect recognition, demonstrating the potential of integrating bio-inspired mechanical design with intelligent visual perception for practical, efficient cable inspection in confined underground environments.
(This article belongs to the Special Issue Robotics: From Technologies to Applications)
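On the kinematics side, a standard D–H forward-kinematics sketch shows how per-joint transforms chain into a foot pose for gait planning. The 3-DoF link parameters below are hypothetical, not the robot's actual dimensions.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit–Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-joint transforms to get the foot pose in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Hypothetical 3-DoF leg: (d, a, alpha) per link, lengths in metres.
leg_dh = [(0.0, 0.04, np.pi / 2), (0.0, 0.08, 0.0), (0.0, 0.10, 0.0)]
pose = forward_kinematics(np.radians([10.0, -30.0, 60.0]), leg_dh)
foot_xyz = pose[:3, 3]    # foot position used by the gait planner
```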

26 pages, 15535 KB  
Article
BCA-MVSNet: Integrating BIFPN and CA for Enhanced Detail Texture in Multi-View Stereo Reconstruction
by Ning Long, Zhengxu Duan, Xiao Hu and Mingju Chen
Electronics 2025, 14(15), 2958; https://doi.org/10.3390/electronics14152958 - 24 Jul 2025
Viewed by 431
Abstract
The 3D point cloud generated by MVSNet has good scene integrity but lacks sensitivity to detail, causing holes and non-dense areas in flat and weakly textured regions. To address this problem and enrich the point cloud in weak-texture areas, the BCA-MVSNet network is proposed in this paper. The network integrates a Bidirectional Feature Pyramid Network (BiFPN) into the feature processing of the MVSNet backbone to accurately extract features of weak-texture regions. In the feature map fusion stage, a Coordinate Attention (CA) mechanism is introduced into the 3D U-Net to obtain direction-aware position information along the channel dimension, improving detail feature extraction, optimizing the depth map, and improving depth accuracy. The experimental results show that BCA-MVSNet improves the accuracy of detail texture reconstruction while effectively controlling the computational overhead. On the DTU dataset, the Overall and Comp metrics of BCA-MVSNet are reduced by 10.2% and 2.6%, respectively; on the Tanks and Temples dataset, the Mean metric over the eight scenarios is improved by 6.51%. Three scenes captured with a binocular camera were reconstructed with excellent quality in weak-texture areas by combining the camera parameters with the BCA-MVSNet model.
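The Coordinate Attention mechanism referenced here pools features along the height and width axes separately, so the attention map keeps direction-aware position information. Below is a minimal PyTorch sketch of the generic CA design with assumed channel counts; how BCA-MVSNet wires it into the 3D U-Net is not shown.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Generic coordinate attention: pool along H and W separately, share a
    squeeze convolution, then emit per-direction attention maps."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        mid = max(ch // reduction, 4)
        self.squeeze = nn.Sequential(nn.Conv2d(ch, mid, 1),
                                     nn.BatchNorm2d(mid), nn.ReLU())
        self.attn_h = nn.Conv2d(mid, ch, 1)
        self.attn_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                # (B, C, H, 1)
        pool_w = x.mean(dim=2, keepdim=True)                # (B, C, 1, W)
        y = torch.cat([pool_h, pool_w.transpose(2, 3)], dim=2)
        y = self.squeeze(y)                                 # shared encoding
        y_h, y_w = torch.split(y, [H, W], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))               # (B, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.transpose(2, 3)))  # (B, C, 1, W)
        return x * a_h * a_w                                # position-aware gating

out = CoordinateAttention(32)(torch.randn(1, 32, 64, 80))
```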

30 pages, 9360 KB  
Article
Dynamic Positioning and Optimization of Magnetic Target Based on Binocular Vision
by Jing Li, Yang Wang, Ligang Qu, Guangming Lv and Zhenyu Cao
Machines 2025, 13(7), 592; https://doi.org/10.3390/machines13070592 - 8 Jul 2025
Viewed by 345
Abstract
To address visual occlusion, reduced positioning accuracy, and pose loss during dynamic scanning of large aviation components, this paper proposes a binocular vision dynamic positioning method based on magnetic targets. The method detects the spatial coordinates of the magnetic targets in real time with a binocular camera, extracts the target centers to construct a unified reference frame for the measurement platform, and uses MATLAB simulation to analyze how different target layouts affect scanning stability and positioning accuracy. On this basis, a dual-objective optimization model is established that minimizes the number of targets while maximizing spatial distribution uniformity, and Monte Carlo simulation is used to evaluate robustness under Gaussian noise and random frame-loss interference. Experiments on the C-Track optical tracking platform show that the optimized magnetic target layout reduces the rotation error of dynamic scanning from 0.055° to 0.035° and the translation error from 0.31 mm to 0.162 mm, while scanning efficiency increases by 33%, significantly improving the positioning accuracy and tracking stability of the system under complex working conditions. This method provides an effective solution for high-precision dynamic measurement of large aviation components.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
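The Monte Carlo robustness step can be sketched as follows: perturb the target coordinates with Gaussian noise, drop targets at random to mimic frame loss, re-estimate the rigid pose (here with a Kabsch/SVD solver standing in for the paper's estimator), and accumulate the rotation error. The layout, noise level, and dropout rate below are all hypothetical.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(0)
targets = rng.uniform(-0.5, 0.5, size=(8, 3))   # hypothetical layout (metres)
R_true, t_true = np.eye(3), np.array([0.1, 0.0, 0.02])

rot_errs = []
for _ in range(1000):                           # Monte Carlo trials
    noisy = (targets @ R_true.T + t_true) + rng.normal(0, 0.0005, targets.shape)
    keep = rng.random(len(targets)) > 0.2       # random target dropout
    if keep.sum() < 3:                          # need 3+ points for a pose
        continue
    R_est, _ = kabsch(targets[keep], noisy[keep])
    cos = (np.trace(R_est @ R_true.T) - 1.0) / 2.0
    rot_errs.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
print(f"mean rotation error: {np.mean(rot_errs):.4f} deg")
```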

19 pages, 2465 KB  
Article
The Design and Implementation of a Dynamic Measurement System for a Large Gear Rotation Angle Based on an Extended Visual Field
by Po Du, Zhenyun Duan, Jing Zhang, Wenhui Zhao, Engang Lai and Guozhen Jiang
Sensors 2025, 25(12), 3576; https://doi.org/10.3390/s25123576 - 6 Jun 2025
Cited by 1 | Viewed by 692
Abstract
High-precision measurement of large gear rotation angles is a critical technology in gear meshing-based measurement systems. To address this challenge, this paper proposes a binocular vision method. The methodology consists of the following steps: First, sub-pixel edges of the calibration circles on a 2D dot-matrix calibration board are extracted with edge detection algorithms to obtain the pixel coordinates of the circle centers. Second, high-precision calibration of the measurement reference plate is achieved through a 2D four-parameter coordinate transformation algorithm. Third, binocular cameras capture images of the measurement reference plates attached to the large gear before and after rotation; the coordinates of each camera's field-of-view center in the measurement reference plate coordinate system are calculated via image processing and rotation angle algorithms, thereby determining the rotation angle of the gear. Finally, a binocular vision rotation angle measurement system was developed, and experiments were conducted on a 600 mm diameter gear to validate the feasibility of the method. The results demonstrate a measurement accuracy of 7 arcseconds (7″) and a repeatability of 3 arcseconds (3″) over the 0–30° rotation range, indicating high accuracy and stability. The proposed method and system effectively meet the requirements for high-precision rotation angle measurement of large gears.
(This article belongs to the Section Physical Sensors)
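The 2D four-parameter transformation in the second step is a similarity transform (scale s, rotation φ, shifts tx, ty). It is linear in a = s·cos φ and b = s·sin φ, so it can be solved by ordinary least squares. A self-contained sketch with synthetic circle-center correspondences (not the paper's data):

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2D four-parameter transform: dst = s*R(phi) @ src + t.
    Linear in (a, b, tx, ty) with a = s*cos(phi), b = s*sin(phi)."""
    A, L = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, -y, 1.0, 0.0]); L.append(X)   # X = a*x - b*y + tx
        A.append([y,  x, 0.0, 1.0]); L.append(Y)   # Y = b*x + a*y + ty
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.array(A), np.array(L), rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), tx, ty   # scale, phi, shifts

# Synthetic correspondences: board frame vs. measured image coordinates.
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
phi_true = np.radians(12.0)
R = np.array([[np.cos(phi_true), -np.sin(phi_true)],
              [np.sin(phi_true),  np.cos(phi_true)]])
dst = src @ R.T * 1.5 + np.array([3.0, -2.0])

scale, phi, tx, ty = fit_similarity_2d(src, dst)
print(np.degrees(phi))   # ~12.0: recovered rotation between the two frames
```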

13 pages, 1510 KB  
Article
Binocular Advantage in Established Eye–Hand Coordination Tests in Young and Healthy Adults
by Michael Mendes Wefelnberg, Felix Bargstedt, Marcel Lippert and Freerk T. Baumann
J. Eye Mov. Res. 2025, 18(3), 14; https://doi.org/10.3390/jemr18030014 - 7 May 2025
Viewed by 996
Abstract
Background: Eye–hand coordination (EHC) plays a critical role in daily activities and is affected by monocular vision impairment. This study evaluates existing EHC tests to detect performance decline under monocular conditions, supports the assessment and monitoring of vision rehabilitation, and quantifies the binocular advantage of each test. Methods: A total of 70 healthy sports students (aged 19–30 years) participated in four EHC tests: the Purdue Pegboard Test (PPT), Finger–Nose Test (FNT), Alternate Hand Wall Toss Test (AHWTT), and Loop-Wire Test (LWT). Each participant completed the tests under both binocular and monocular conditions in a randomized order, with assessments conducted by two independent raters. Performance differences, binocular advantage, effect sizes, and interrater reliability were analyzed. Results: Data from 66 participants were included in the final analysis. Significant performance differences between binocular and monocular conditions were observed for the LWT (p < 0.001), AHWTT (p < 0.001), and PPT (p < 0.05), with a clear binocular advantage and large effect sizes (SMD range: 0.583–1.660) for the AHWTT and LWT. Female participants performed better in fine motor tasks, while males demonstrated superior performance in gross motor tasks. Binocular performance averages aligned with published reference values. Conclusions: The findings support the inclusion of the LWT and AHWTT in clinical protocols to assess and assist individuals with monocular vision impairment, particularly following sudden uniocular vision loss. Future research should extend these findings to different age groups and clinically relevant populations.
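As a sketch of the statistics involved, the following computes a paired comparison, a standardized mean difference, and a percentage binocular advantage on hypothetical Loop-Wire Test completion times. The SMD here uses the standard deviation of the paired differences; the paper's exact estimator may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical paired completion times (s) for 66 participants on the LWT.
binocular = rng.normal(45.0, 6.0, 66)
monocular = binocular + rng.normal(8.0, 4.0, 66)   # slower without stereopsis

# Paired t-test and standardized mean difference (Cohen's d for paired data).
t_stat, p_val = stats.ttest_rel(monocular, binocular)
diff = monocular - binocular
smd = diff.mean() / diff.std(ddof=1)

# Binocular advantage as a percentage change from the binocular baseline.
advantage = 100.0 * diff.mean() / binocular.mean()
print(f"p={p_val:.2g}, SMD={smd:.2f}, advantage={advantage:.1f}%")
```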

22 pages, 16339 KB  
Article
MFSM-Net: Multimodal Feature Fusion for the Semantic Segmentation of Urban-Scale Textured 3D Meshes
by Xinjie Hao, Jiahui Wang, Wei Leng, Rongting Zhang and Guangyun Zhang
Remote Sens. 2025, 17(9), 1573; https://doi.org/10.3390/rs17091573 - 28 Apr 2025
Viewed by 943
Abstract
The semantic segmentation of textured 3D meshes is a critical step in constructing city-scale realistic 3D models. Compared to colored point clouds, textured 3D meshes have the advantage of high-resolution texture image patches embedded on each mesh face. However, existing studies predominantly focus on their geometric structures, with limited utilization of these high-resolution textures. Inspired by the binocular perception of humans, this paper proposes a multimodal feature fusion network based on 3D geometric structures and 2D high-resolution texture images for the semantic segmentation of textured 3D meshes. Methodologically, the 3D feature extraction branch computes the centroid coordinates and face normals of mesh faces as initial 3D features, followed by a multi-scale Transformer network to extract high-level 3D features. The 2D feature extraction branch employs orthographic views of city scenes captured from a top-down perspective and uses a U-Net to extract high-level 2D features. To align features across 2D and 3D modalities, a Bridge view-based alignment algorithm is proposed, which visualizes the 3D mesh indices to establish pixel-level associations with orthographic views, achieving the precise alignment of multimodal features. Experimental results demonstrate that the proposed method achieves competitive performance in city-scale textured 3D mesh semantic segmentation, validating the effectiveness and potential of the cross-modal fusion strategy.
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology II)
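The initial 3D features named in the abstract, per-face centroids and normals, are straightforward to compute from a triangle mesh. A minimal sketch on a toy mesh (the feature layout is an assumption; the paper's network consumes these as its 3D branch input):

```python
import numpy as np

def face_features(vertices, faces):
    """Per-face centroid and unit normal as a (F, 6) initial feature matrix."""
    tris = vertices[faces]                       # (F, 3, 3): 3 vertices per face
    centroids = tris.mean(axis=1)                # (F, 3)
    normals = np.cross(tris[:, 1] - tris[:, 0],
                       tris[:, 2] - tris[:, 0])  # (F, 3) face normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return np.hstack([centroids, normals])

vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 1, 3]])
features = face_features(vertices, faces)        # one 6-vector per mesh face
```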

26 pages, 9183 KB  
Article
Water Surface Spherical Buoy Localization Based on Ellipse Fitting Using Monocular Vision
by Shiwen Wu, Jianhua Wang, Xiang Zheng, Xianqiang Zeng and Gongxing Wu
J. Mar. Sci. Eng. 2025, 13(4), 733; https://doi.org/10.3390/jmse13040733 - 6 Apr 2025
Viewed by 670
Abstract
Spherical buoys serve as water surface markers, and their location information can help unmanned surface vessels (USVs) identify navigation channel boundaries, avoid dangerous areas, and improve navigation accuracy. However, because spherical buoys on the water surface are subject to disturbances such as reflections, occlusion by water, and changes in illumination, binocular stereo matching for positioning is difficult. This paper therefore proposes a monocular vision-based localization method for spherical buoys using ellipse fitting. First, the edges of the spherical buoy are extracted through image preprocessing. Then, to address the pseudo-edge points introduced by reflections, which reduce the accuracy of ellipse fitting, a multi-step method for eliminating pseudo-edge points is proposed; this effectively filters out pseudo-edge points and yields accurate ellipse parameters. Finally, based on these parameters, a monocular ranging model is established to solve for the relative position between the USV and the buoy. The USV's satellite-observed position is then fused with this relative position to estimate the buoy's coordinates in the geodetic coordinate system. Simulation experiments analyzed the impact of pixel noise, camera height, focal length, and rotation angle on localization accuracy. The results show that within an area 40 m wide and 80 m long, the coordinates calculated by this method have an average absolute error of less than 1.2 m; field experiments on actual ships show that the average absolute error remains within 2.57 m. The method overcomes the positioning problems caused by reflections, water occlusion, and illumination changes, achieving an accuracy comparable to that of general satellite positioning.
(This article belongs to the Section Ocean Engineering)
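After pseudo-edge removal, the remaining edge points are fitted with an ellipse, and the apparent size yields a monocular range. The sketch below uses OpenCV's least-squares cv2.fitEllipse and a small-angle pinhole approximation; the buoy radius, focal length, and edge points are hypothetical, and the paper's full ranging model may differ.

```python
import cv2
import numpy as np

# Assumed inputs: a buoy of known radius (m), a calibrated focal length (px),
# and edge points that survived the pseudo-edge elimination step.
R_BUOY, FOCAL_PX = 0.30, 1250.0
edge_points = np.array([[412, 300], [440, 288], [468, 300],
                        [478, 330], [440, 352], [402, 330]], dtype=np.float32)

# Least-squares ellipse fit; OpenCV requires at least 5 points.
(cx, cy), (ax1, ax2), angle = cv2.fitEllipse(edge_points)

# Pinhole ranging: a sphere of radius R projecting to a semi-major axis of
# r pixels lies at roughly Z = f * R / r (small-angle approximation).
r_px = max(ax1, ax2) / 2.0
Z = FOCAL_PX * R_BUOY / r_px
print(f"centre=({cx:.1f}, {cy:.1f}) px, range = {Z:.2f} m")
```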

17 pages, 9081 KB  
Article
A Rapid Deployment Method for Real-Time Water Surface Elevation Measurement
by Yun Jiang
Sensors 2025, 25(6), 1850; https://doi.org/10.3390/s25061850 - 17 Mar 2025
Viewed by 727
Abstract
In this research, I introduce a water surface elevation measurement method that combines point cloud processing techniques with stereo vision cameras. While current vision-based water level measurement techniques focus on laboratory settings or rely on auxiliary devices such as water rulers, I investigated the feasibility of measuring elevation directly from images of the water surface. An on-site monitoring system was implemented, comprising a ZED 2i binocular camera (Stereolabs, San Francisco, CA, USA). First, the uncertainty of the camera is evaluated in a real measurement scenario. Then, the water surface images captured by the binocular camera are stereo matched to obtain disparity maps. Next, the binocular camera calibration results are used to obtain the 3D point cloud coordinates of the water surface image. Finally, the horizontal plane equation is solved with the RANSAC algorithm to determine the height of the camera above the water surface. This approach is significant in that it offers a non-contact, shore-based solution that eliminates the need for physical water references, enhancing the adaptability and efficiency of water level monitoring in challenging environments, such as remote or inaccessible areas. Within a measured elevation of 5 m, the water level measurement error is less than 2 cm.
(This article belongs to the Section Environmental Sensing)
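The final step, solving the plane equation with RANSAC and reading off the camera height, can be sketched directly: sample three points, fit a plane, count inliers, and take the distance from the camera origin to the best plane. The synthetic cloud below stands in for the stereo-matched surface points; iteration count and tolerance are assumptions.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.01, rng=np.random.default_rng(0)):
    """RANSAC plane fit; returns unit normal n and offset d (n·p + d = 0)."""
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol   # distance test
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic water-surface cloud 5 m below the camera, plus spray/reflection outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(-2, 2, (2000, 2))
z = -5.0 + rng.normal(0, 0.005, 2000)
cloud = np.column_stack([xy, z])
cloud[:50] = rng.uniform(-2, 2, (50, 3))

(n, d), inliers = ransac_plane(cloud)
camera_height = abs(d)      # n is unit, so |d| is the origin-to-plane distance
print(f"camera height = {camera_height:.3f} m")
```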

23 pages, 10404 KB  
Article
Steel Roll Eye Pose Detection Based on Binocular Vision and Mask R-CNN
by Xuwu Su, Jie Wang, Yifan Wang and Daode Zhang
Sensors 2025, 25(6), 1805; https://doi.org/10.3390/s25061805 - 14 Mar 2025
Viewed by 639
Abstract
To automate the inner corner guard installation station in a steel coil packaging production line and enable automatic docking and installation of the inner corner guard after eye position detection, this paper proposes a deep learning-based binocular vision method for detecting the eye position of steel coil rolls. The core of the method is to use the Mask R-CNN algorithm to identify the target region and obtain a mask image of the steel coil end face. The binarized end-face image is then processed using an RGB vector space image segmentation method, target feature pixel points are extracted with Sobel edge detection, and the parameters are fitted by the least-squares method to obtain the deflection angle and the horizontal and vertical coordinates of the center point in the image coordinate system. In the ellipse parameter extraction experiment, the maximum deviations of the center point in the u and v directions of the pixel coordinate system were 0.49 and 0.47 pixels, respectively, and the maximum error in the deflection angle was 0.45°. In the roll eye pose detection experiments, the maximum deviations of the pitch angle, deflection angle, and centroid coordinates were 2.17°, 2.24°, 3.53 mm, 4.05 mm, and 4.67 mm, respectively, all of which meet the actual installation requirements. The proposed method is highly practicable, and the end-face pose solving approach significantly enhances work efficiency, reduces labor costs, and ensures adequate detection accuracy.
(This article belongs to the Section Industrial Sensors)
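The least-squares fitting step can be illustrated with an algebraic conic fit: solve for the conic coefficients as the smallest singular vector of the design matrix, then recover the center and deflection angle. The edge points below are synthetic, and this generic fit is a stand-in for the paper's exact estimator.

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,
    solved as the smallest singular vector; returns centre and axis angle."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # Centre: gradient = 0  ->  [[2a, b], [b, 2c]] @ (x0, y0) = (-d, -e)
    x0, y0 = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    angle = 0.5 * np.arctan2(b, a - c)     # major-axis orientation
    return (x0, y0), np.degrees(angle)

# Synthetic roll-eye edge: ellipse centred at (640, 360), rotated 0.2 rad.
theta = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([
    640 + 180 * np.cos(theta) * np.cos(0.2) - 90 * np.sin(theta) * np.sin(0.2),
    360 + 180 * np.cos(theta) * np.sin(0.2) + 90 * np.sin(theta) * np.cos(0.2)])
centre, deflection = fit_conic(pts)        # ~(640, 360) and ~11.5 degrees
```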

18 pages, 4036 KB  
Article
High-Accuracy Intermittent Strabismus Screening via Wearable Eye-Tracking and AI-Enhanced Ocular Feature Analysis
by Zihe Zhao, Hongbei Meng, Shangru Li, Shengbo Wang, Jiaqi Wang and Shuo Gao
Biosensors 2025, 15(2), 110; https://doi.org/10.3390/bios15020110 - 14 Feb 2025
Cited by 2 | Viewed by 2302
Abstract
An effective and highly accurate strabismus screening method is expected to identify potential patients and provide timely treatment to prevent further deterioration, such as amblyopia and even permanent vision loss. To meet this need, this work presents a novel strabismus screening method based on a wearable eye-tracking device combined with an artificial intelligence (AI) algorithm. To identify the minor and occasional inconsistencies in the binocular coordination process of strabismus patients, which usually appear in early-stage patients and are rarely recognized in current studies, the system captures temporally and spatially continuous high-definition infrared images of the eye during wide-angle continuous motion, which is effective in inducing intermittent strabismus. From the collected eye motion information, 16 physiologically interpretable features of the oculomotor process, which help biomedical staff understand and evaluate the generated results, are calculated by introducing pupil-canthus vectors; these features can be normalized and reflect individual differences. Processing these features with a random forest (RF) algorithm, the method experimentally yields 97.1% accuracy in strabismus detection across 70 people under diverse indoor testing conditions, validating its accuracy and robustness and implying strong potential for widespread, highly accurate strabismus screening.
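A minimal sketch of the classification stage under stated assumptions: pupil-minus-canthus vectors are summarized into a few per-eye features (the paper uses 16; four toy ones here), an inter-ocular difference term exposes occasional misalignment, and a random forest classifies the result. All traces, noise levels, and labels are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def pupil_canthus_features(pupil, canthus):
    """Summarize one eye's trace of pupil-minus-canthus vectors, shape (T, 2).
    The canthus reference makes features robust to head/device shift."""
    v = pupil - canthus
    dv = np.diff(v, axis=0)                       # frame-to-frame motion
    return np.array([v[:, 0].std(), v[:, 1].std(),
                     np.abs(dv).mean(), np.linalg.norm(dv, axis=1).max()])

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                              # 0 = control, 1 = strabismus
    for _ in range(35):
        base = rng.normal(0, 1.0, (200, 2))       # shared gaze trajectory
        left = pupil_canthus_features(base + rng.normal(0, 0.1, (200, 2)),
                                      np.zeros((200, 2)))
        jitter = 0.6 if label else 0.1            # occasional misalignment
        right = pupil_canthus_features(base + rng.normal(0, jitter, (200, 2)),
                                       np.zeros((200, 2)))
        # Per-eye features plus the inter-ocular inconsistency term.
        X.append(np.concatenate([left, right, np.abs(left - right)]))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                      test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```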

16 pages, 1448 KB  
Article
Interocular Timing Differences in Horizontal Saccades of Ball Game Players
by Masahiro Kokubu, Yoshihiro Komatsu and Takashi Kojima
Vision 2025, 9(1), 9; https://doi.org/10.3390/vision9010009 - 31 Jan 2025
Cited by 1 | Viewed by 1502
Abstract
In ball game sports, binocular visual function is important for accurately perceiving the distance of various objects in visual space. However, the temporal coordination of binocular eye movements during saccades has not been investigated extensively in athletes. The purpose of the present study was to compare the interocular timing differences in horizontal saccades between ball game players. The participants were 32 university baseball players and 54 university soccer players. They were asked to shift their gaze to the onset of light-emitting diodes located at 10° of visual field eccentricity to the left and right, which alternated every 2 s. Horizontal movements of the left and right eyes were recorded separately by electro-oculography. Temporal variables for each eye were calculated by digital differentiation, and timing differences between the left and right eyes were compared between the groups. Overall, there were significant interocular differences between left- and right-eye movements in the temporal variables of binocular saccades. The group comparison revealed that baseball players had smaller interocular differences than soccer players in onset time, time to peak velocity, duration, and peak velocity. These results suggest that baseball players have a higher degree of temporal coordination in binocular eye movements, particularly during the initial phase of horizontal saccades, compared with soccer players. This enhanced coordination might be attributable to the sport-specific visual demands of baseball, where players require precise stereoscopic vision to track a small, high-speed ball within their visual space.
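The temporal variables come from digitally differentiating each eye's position trace and thresholding the speed profile. A sketch on synthetic traces in which the right eye lags the left by 4 ms; the sampling rate, sigmoid saccade shape, and 10%-of-peak onset criterion are assumptions, not the paper's exact settings.

```python
import numpy as np

FS = 1000.0                                     # assumed sampling rate (Hz)

def saccade_timing(position, fs=FS, onset_frac=0.1):
    """Velocity by digital differentiation; onset = first crossing of a
    fraction of peak speed. Returns (onset_s, time_to_peak_s, peak_vel)."""
    vel = np.gradient(position) * fs            # deg/s
    speed = np.abs(vel)
    peak_idx = speed.argmax()
    onset_idx = np.argmax(speed > onset_frac * speed[peak_idx])
    return onset_idx / fs, (peak_idx - onset_idx) / fs, speed[peak_idx]

# Synthetic EOG-like traces (deg): a 20-degree rightward saccade; the right
# eye starts 4 ms after the left.
t = np.arange(0, 0.4, 1 / FS)
def sigmoid_saccade(t0):
    return 20.0 / (1.0 + np.exp(-(t - t0) / 0.01)) - 10.0

left_eye, right_eye = sigmoid_saccade(0.150), sigmoid_saccade(0.154)
onset_L, ttp_L, pv_L = saccade_timing(left_eye)
onset_R, ttp_R, pv_R = saccade_timing(right_eye)
print(f"interocular onset difference: {(onset_R - onset_L) * 1000:.1f} ms")
```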
