Search Results (252)

Search Parameters:
Keywords = binocular system

16 pages, 1546 KB  
Article
Sensor-Based and VR-Assisted Visual Training Enhances Visuomotor Reaction Metrics in Youth Handball Players
by Ricardo Bernárdez-Vilaboa, Juan E. Cedrún-Sánchez, Silvia Burgos-Postigo, Rut González-Jiménez, Carla Otero-Currás and F. Javier Povedano-Montero
Sensors 2026, 26(8), 2555; https://doi.org/10.3390/s26082555 - 21 Apr 2026
Viewed by 171
Abstract
Background: Sensor-based systems and virtual reality (VR) technologies provide new opportunities for the objective, technology-driven assessment and training of visuomotor performance in applied contexts such as sport. Methods: This study examined the effects of an integrated visual training program combining stroboscopic stimulation, VR-based vergence exercises, and instrumented reaction-light tasks in adolescent handball players. Twenty-eight adolescent handball players (under-18 competitive level) completed two baseline assessments separated by six weeks, followed by a six-session training program (approximately 15 min per session) integrated into regular team practice. The intervention targeted visuomotor reaction speed, accommodative dynamics, and peripheral visual responsiveness using sensor-based and virtual reality–assisted stimuli. Results: Compared with both baseline measurements, the intervention produced selective improvements in accommodative facility (cycles per minute, cpm)—particularly near–far focusing speed—and in multiple reaction-time conditions (milliseconds, ms) involving manual and decision-based responses. Specific peripheral-field locations showed increased response scores, whereas binocular alignment, AC/A ratio, near phoria, and stereoscopic acuity remained unchanged. Conclusions: These findings indicate that technology-supported visual training protocols incorporating sensor-based reaction systems and VR stimuli were associated with measurable adaptations in dynamic visuomotor processing while preserving fundamental binocular vision parameters. Full article
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human: 2nd Edition)

15 pages, 3994 KB  
Article
Three-Dimensional Shape Measurement Using Speckle-Assisted Phase-Order Lines Without Phase Unwrapping
by Ziyou Zhang and Weipeng Yang
Sensors 2026, 26(8), 2534; https://doi.org/10.3390/s26082534 - 20 Apr 2026
Viewed by 209
Abstract
Achieving high-accuracy and high-speed 3D shape measurement remains a significant challenge. This paper presents a novel technique using phase-order lines (POLs), which eliminates the need for phase unwrapping in a binocular system. By combining phase-shifting for high resolution and speckle projection for robust features, our method extracts POLs directly from the wrapped phase. The speckle patterns are then used to establish robust POL correspondences between stereo images. These matched POLs serve as reliable seeds to guide dense, sub-pixel matching directly on the wrapped phase, thus bypassing the complex phase unwrapping process. This approach significantly reduces the number of required patterns. The experimental results demonstrate that our method achieves a root-mean-square (RMS) error of 0.058 mm using only five patterns, delivering accuracy comparable to a 12-pattern temporal phase unwrapping (TPU) method while being significantly faster. Full article
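Phase-shifting methods such as this one start from a wrapped phase in (−π, π] recovered from N shifted fringe images via the standard arctangent formula. A minimal sketch of that generic textbook step (not the paper's POL extraction or matching algorithm):

```python
import math

def wrapped_phase(intensities):
    """Standard N-step phase shifting: recover the wrapped phase phi from
    N fringe samples I_n = A + B*cos(phi - 2*pi*n/N) at one pixel."""
    n_steps = len(intensities)
    num = sum(i * math.sin(2 * math.pi * n / n_steps)
              for n, i in enumerate(intensities))
    den = sum(i * math.cos(2 * math.pi * n / n_steps)
              for n, i in enumerate(intensities))
    return math.atan2(num, den)

# Synthetic check: five shifted samples generated from a known phase of 0.7 rad.
samples = [100 + 50 * math.cos(0.7 - 2 * math.pi * n / 5) for n in range(5)]
print(abs(wrapped_phase(samples) - 0.7) < 1e-9)  # True
```

Because the recovered value is wrapped to one 2π period, ambiguity across fringes normally requires phase unwrapping, which is exactly the step the POL technique above avoids.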

17 pages, 442 KB  
Review
Application of Eye-Tracking Technology in Assessing Binocular Vision Function in Paediatric Populations: A Scoping Review
by Ong Huei Koon, Noor Ezailina Badarudin and Byoung-Sun Chu
J. Eye Mov. Res. 2026, 19(2), 40; https://doi.org/10.3390/jemr19020040 - 17 Apr 2026
Viewed by 181
Abstract
Background: This review discusses the application of eye-tracking technology in the detection and monitoring of binocular vision anomalies among children. Methods: A scoping review using PRISMA guidelines was conducted through Scopus, ScienceDirect, and PubMed using the keywords “eye-tracking,” “binocular,” “vision,” “anomalies,” “paediatrics,” and “children” from 2015 to 2025. Studies were excluded if they were not written in English, did not use an eye tracker as a research tool, involved an ineligible population, or involved non-human subjects. Results: The search strategy identified 77 citations, of which only 14 studies met the inclusion criteria. This review revealed a variety of binocular vision anomalies detectable through eye-tracking systems, along with the specific models and parameters employed in these assessments. Application of eye-tracking technology in diagnosing conditions such as strabismus and amblyopia demonstrated potential for improved accuracy and early detection. Discussion: Eye-tracking technology demonstrates considerable potential for the detection and monitoring of binocular vision anomalies in children, particularly as a non-invasive method for early screening, thereby strengthening its clinical applicability. By assessing fixation stability, saccadic movements, and vergence responses, eye-tracking allows for the early detection of subtle visual anomalies, especially in the paediatric population. Conclusions: Eye-tracking technology represents a valuable advancement in paediatric vision care, enabling more objective and earlier detection of binocular vision anomalies in children. Full article
(This article belongs to the Special Issue Digital Advances in Binocular Vision and Eye Movement Assessment)

19 pages, 4757 KB  
Article
SCSANet: Split Convolution Selective Attention Network of Drivable Area Detection for Mobile Robots
by Maozhang Ye, Xiaoli Li, Jidong Dai, Hongyi Li, Zhouyi Xu and Chentao Zhang
Eng 2026, 7(4), 176; https://doi.org/10.3390/eng7040176 - 11 Apr 2026
Viewed by 196
Abstract
Detecting drivable areas is a fundamental task in autonomous driving systems. Although semantic segmentation networks have demonstrated strong performance in segmenting drivable regions, two key challenges persist. First, acquiring sufficient contextual information in complex road scenarios remains difficult, often leading to segmentation errors. Second, the coarseness of extracted features may degrade accuracy even when texture information is available in RGB images. To address these issues, we propose an enhanced DeepLabv3+ algorithm called Split Convolution Selective Attention Network (SCSANet), which incorporates the Adaptive Kernel (AK) and Split Convolution Attention (SCA) modules. AK adaptively adjusts the receptive field to accommodate varying road scenarios, while SCA improves boundary clarity by enhancing channel interaction. In addition, we employ surface normals to provide complementary geometric information, thereby strengthening the ability of the network to recognize drivable areas. To compensate for the lack of publicly available datasets for closed or semi-closed scenarios, we introduce XMUROAD, a new dataset of binocular disparity images. Experiments on the XMUROAD dataset demonstrate that the proposed architectural improvements yield an mIoU gain of 1.63% under the same RGB input, and the full pipeline with surface normal input achieves improvements of 1.55% to 2.59% in mF1 and 2.94% to 4.83% in mIoU over state-of-the-art methods. Experiments on the KITTI dataset further verify the generalization capability of SCSANet, with improvements of 1.58% in mF1 and 2.88% in mIoU over state-of-the-art methods. The proposed method provides a practical approach for accurate drivable area detection in closed and semi-closed mobile-robot scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
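The mIoU figures quoted above follow the standard mean Intersection-over-Union definition for semantic segmentation. A minimal, self-contained sketch over flat label arrays (illustrative data, not the paper's evaluation code):

```python
def mean_iou(pred_labels, gt_labels, num_classes):
    """Mean Intersection-over-Union: average, over classes present in the
    union, of |pred == c AND gt == c| / |pred == c OR gt == c|."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred_labels, gt_labels) if p == c and g == c)
        union = sum(1 for p, g in zip(pred_labels, gt_labels) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 6-pixel image, two classes (0 = background, 1 = drivable).
pred = [0, 0, 1, 1, 1, 0]
gt   = [0, 1, 1, 1, 0, 0]
print(mean_iou(pred, gt, 2))  # 0.5
```

Real pipelines accumulate the same intersection and union counts per class over a whole dataset before dividing, which this per-image sketch omits.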

21 pages, 4038 KB  
Article
Fused Complementary 3D Reconstruction Based on Polarization Binocular Line-Structured Light
by Mingsheng Liu, Hongyuan Zhou, Sisheng Nie, Yan Jiang, Zhong Wu, Dahai Xu, Ling Zhu, Yanliang Zhan and Zhenmin Zhu
Photonics 2026, 13(3), 238; https://doi.org/10.3390/photonics13030238 - 28 Feb 2026
Viewed by 452
Abstract
Line-structured light three-dimensional (3D) measurement is commonly used for 3D contour reconstruction of objects in complex industrial environments, but information is often missing when reconstructing objects with smooth surfaces, uniform texture, and high reflectivity, leaving defects in the reconstructed surface. To address this, this study proposes a fused complementary 3D reconstruction technique based on a polarization-based binocular line-structured light system. First, the reconstructed image of the object is captured using a polarization binocular camera; polarized imaging effectively suppresses strong highlights and recovers more detailed information from the object surface. Then, the calibrated cameras and optical planes are used to acquire the spatial coordinates of the object as reconstructed by the left and right cameras. Finally, the spatial coordinates obtained by the left and right cameras are aligned, and high-precision 3D reconstruction results are generated. The experimental results show that the proposed method effectively improves the accuracy and robustness of 3D reconstruction and meets the technical requirements of industrial 3D measurement, indicating good application prospects. Full article
(This article belongs to the Special Issue New Perspectives in Micro-Nano Optical Design and Manufacturing)

18 pages, 304321 KB  
Article
Two-Stage Pose Estimation for AUV Visual Guidance Using PnP and Binocular Constraints
by Xinyu Wang, Miao Yang, Hao Liu, Yanbing Tang and Perry Xiao
J. Mar. Sci. Eng. 2026, 14(4), 405; https://doi.org/10.3390/jmse14040405 - 23 Feb 2026
Viewed by 515
Abstract
Accurate pose estimation is crucial for reliable docking and recovery of Autonomous Underwater Vehicles (AUVs). Traditional visual-based pose estimation methods face inherent challenges: monocular methods often struggle with depth inference, and conventional Perspective-n-Point (PnP) algorithms exhibit accuracy degradation at large viewing angles and limited noise resistance, while binocular systems involve higher computational complexity. This paper proposes a two-stage algorithm that combines iterative PnP initialization with binocular constraint optimization. By using iterative PnP to establish reliable initial estimates, the approach avoids convergence difficulties of direct binocular optimization, while the subsequent binocular refinement leverages stereo geometric constraints to enhance accuracy. Comprehensive evaluation through simulation, land-based experiments, and underwater validation demonstrates consistent performance improvements over conventional geometric methods. In simulation experiments across −60° to 60° yaw angles, the method achieves 93.2% and 28.6% improvements in translation and rotation accuracy, respectively, compared to iterative PnP. Land-based validation confirms a 32.7% average rotation error reduction, while underwater experiments demonstrate a 76.5% average distance error reduction under real optical conditions including refraction and light attenuation. The method maintains real-time processing capability (2.16 ms per frame), offering a practical solution for AUV pose estimation in docking applications. Full article
(This article belongs to the Section Ocean Engineering)
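The stereo geometric constraints that such a binocular refinement leverages ultimately rest on standard rectified-stereo geometry. As a minimal illustration of the underlying depth-from-disparity relation Z = fB/d (hypothetical numbers, not the paper's two-stage algorithm):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo geometry: a point seen with disparity d (pixels) by
    two cameras with focal length f (pixels) separated by baseline B (meters)
    lies at depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical docking rig: 800 px focal length, 0.25 m baseline,
# target observed with 50 px disparity.
print(depth_from_disparity(800.0, 0.25, 50.0))  # 4.0 (meters)
```

The inverse relationship between depth and disparity is why a PnP initial estimate helps: it places the optimization near the right depth before the stereo constraint refines it.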

21 pages, 7192 KB  
Article
Expectation–Maximization Method for RGB-D Camera Calibration with Motion Capture System
by Jianchu Lin, Guangxiao Du, Yugui Zhang, Yiyan Zhao, Qian Xie, Jian Yao and Ashim Khadka
Photonics 2026, 13(2), 183; https://doi.org/10.3390/photonics13020183 - 12 Feb 2026
Viewed by 484
Abstract
Camera calibration is an essential research direction in photonics and computer vision. It standardizes camera data by estimating intrinsic and extrinsic parameters. Recently, RGB-D cameras have become important devices by supplementing depth information; they commonly rely on one of three mechanisms: binocular stereo, structured light, and Time of Flight (ToF). However, the differing mechanisms make calibration methods complex and hard to unify, and lens distortion, parameter loss, and sensor degradation can even cause calibration to fail. To address these issues, we propose a camera calibration method based on the Expectation–Maximization (EM) algorithm. A unified model with latent variables is established for the different kinds of cameras. In the EM algorithm, the E-step estimates the hidden intrinsic parameters of the cameras, while the M-step learns the distortion parameters of the lens. In addition, the depth values are calculated by a spatial geometric method and calibrated using the least squares method under an optical motion capture system. Experimental results demonstrate that our method can be directly employed in the calibration of monocular and binocular RGB-D cameras, reducing image calibration errors by 0.6–1.2% relative to least squares, Levenberg–Marquardt, Direct Linear Transform, and Trust Region Reflective methods. The depth error is reduced by 16 to 19.3 mm. Therefore, our method can effectively improve the performance of different RGB-D cameras. Full article
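The least-squares depth calibration mentioned here can be illustrated as a closed-form linear fit of camera depths against motion-capture reference depths. The numbers below are made up, and this is the generic textbook fit, not the authors' exact procedure:

```python
def fit_linear_least_squares(xs, ys):
    """Closed-form ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Hypothetical camera depths (mm) vs motion-capture ground truth,
# where the camera has a small scale/offset error.
measured = [505.0, 1012.0, 1519.0, 2026.0]
reference = [500.0, 1000.0, 1500.0, 2000.0]
a, b = fit_linear_least_squares(measured, reference)
corrected = [a * z + b for z in measured]  # now matches the reference depths
```

Once a and b are fitted, every subsequent depth reading from the camera can be corrected with the same affine map.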

31 pages, 22732 KB  
Article
Binocular Rivalry and Fusion-Inspired Hierarchical Complementary Ensemble for No-Reference Stereoscopic Image Quality Assessment
by Yiling Tang, Shunliang Jiang, Shaoping Xu, Jian Xiao and Haiwen Yu
Sensors 2026, 26(3), 883; https://doi.org/10.3390/s26030883 - 29 Jan 2026
Viewed by 434
Abstract
No-reference stereoscopic image quality assessment (NR-SIQA) remains a fundamental challenge due to the complex biological mechanisms of binocular rivalry and fusion, particularly under asymmetric distortions. In this paper, we propose a novel framework termed Multi-Stage Complementary Ensemble (MSCE). The core innovation lies in the Adaptive Selective Propagation (ASP) strategy, embedded within a hierarchical Transformer architecture, which dynamically regulates the fusion of binocular features. Specifically, by simulating the human visual system’s transition from binocular rivalry to fusion, the ASP strategy applies nonlinear gain control to selectively reinforce features from the governing view based on binocular discrepancies. Furthermore, the proposed Hierarchical Complementary Fusion (HCF) module effectively captures and integrates low-level texture integrity, mid-level structural degradation, and high-level semantic consistency within a unified quality-aware manifold, leveraging ensemble learning principles. Experimental results on four benchmark datasets demonstrate that the MSCE framework achieves state-of-the-art performance, particularly in terms of prediction consistency under complex asymmetric distortions. Full article
(This article belongs to the Section Sensing and Imaging)

16 pages, 1594 KB  
Article
Virtual Reality-Based Dichoptic Therapy in Acquired Brain Injury: Functional and Symptom Outcomes
by Carla Otero-Currás, Francisco J. Povedano-Montero, Ricardo Bernárdez-Vilaboa, Pilar Rojas, Rut González-Jiménez, Gema Martínez-Florentín and Juan E. Cedrún-Sánchez
J. Clin. Med. 2026, 15(3), 1004; https://doi.org/10.3390/jcm15031004 - 27 Jan 2026
Viewed by 720
Abstract
Background: Acquired brain injury (ABI) often disrupts binocular vision, causing deviations on the cover test and reduced stereopsis that impair functional visual performance. This study investigated the effects of a dichoptic vision therapy protocol—based on an immersive virtual reality (VR) system—on visual field parameters, oculomotor reaction times, and self-reported visual symptoms in adults with ABI. Methods: In a controlled parallel-group design, adult ABI patients (median age 51 years) were assigned to an experimental group (dichoptic VR therapy) or a control group. Six sessions of visual therapy were performed. Primary outcomes included perimetric visual field indices and oculomotor reaction times; the secondary outcome was the Brain Injury Vision Symptom Survey (BIVSS) score. Etiology (stroke vs. traumatic brain injury) was recorded. Results: No statistically significant improvements were found in perimetric visual field indices (p > 0.05), except for a slight gain in the top-right quadrant in the experimental group. Reaction times did not differ significantly between groups. However, the experimental group reported a greater reduction in visual symptoms as measured by the BIVSS. Patients with traumatic brain injury exhibited better functional improvement, particularly in the top-left quadrant (p = 0.04). Conclusions: Dichoptic VR-based therapy did not restore perimetric field losses in ABI patients but reduced visual symptoms and may enhance functional adaptation of residual vision rather than structural recovery. The therapeutic response varied by etiology, favoring traumatic brain injury. Larger, longer trials integrating objective and subjective measures, including neuroimaging, are warranted. Full article
(This article belongs to the Special Issue Traumatic Brain Injury: Clinical Diagnosis and Management)

16 pages, 3075 KB  
Article
Liner Wear Evaluation of Jaw Crushers Based on Binocular Vision Combined with FoundationStereo
by Chuyu Wen, Zhihong Jiang, Zhaoyu Fu, Quan Liu and Yifeng Zhang
Appl. Sci. 2026, 16(2), 998; https://doi.org/10.3390/app16020998 - 19 Jan 2026
Viewed by 313
Abstract
To address the bottlenecks of traditional jaw crusher liner wear detection—high safety risks, insufficient precision, and limited full-range analysis—this paper proposes a non-contact, high-precision wear analysis method based on binocular vision and deep learning. At its core is the integration of the state-of-the-art FoundationStereo zero-shot stereo matching algorithm, following scenario-specific adaptations, into the 3D reconstruction of industrial liners for wear analysis. A novel wear quantification methodology and corresponding indicator system are also proposed. After calibrating the ZED2 binocular camera and fine-tuning the algorithm, FoundationStereo achieves an Endpoint Error (EPE) of 0.09, significantly outperforming traditional algorithms. To meet on-site efficiency requirements, a “single-view rapid acquisition + CUDA engineering acceleration” strategy is implemented, reducing point cloud generation latency from 165 ms to 120 ms by rewriting kernel functions and optimizing memory access patterns. Geometric accuracy verification shows a Mean Absolute Error (MAE) ≤ 0.128 mm, fully meeting industrial measurement standards. A complete process of “3D reconstruction–model registration–quantitative analysis” is constructed, utilizing three core indicators (maximum wear depth, average wear depth, and wear area ratio) to characterize liner wear. Statistical results—such as an average maximum wear depth of 55.05 mm—are highly consistent with manual inspection data, providing a safe, efficient, and precise digital solution for the predictive maintenance and intelligent operation and maintenance (O&M) of liners. Full article
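The Mean Absolute Error figure quoted above is the standard metric over corresponding samples (the Endpoint Error reported for stereo matching is computed the same way over disparities). A minimal sketch with made-up wear-depth values:

```python
def mean_absolute_error(predicted, ground_truth):
    """MAE over corresponding samples: mean of |pred - truth|."""
    pairs = list(zip(predicted, ground_truth))
    return sum(abs(p - g) for p, g in pairs) / len(pairs)

# Hypothetical wear depths (mm): 3D reconstruction vs manual inspection.
reconstructed = [55.1, 42.0, 30.9, 18.2]
manual = [55.0, 42.0, 31.0, 18.0]
print(mean_absolute_error(reconstructed, manual))  # approximately 0.1 mm
```

In practice the same reduction runs over every point of the registered point cloud rather than a handful of spot measurements.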

13 pages, 5551 KB  
Case Report
Inaugural Sixth Nerve Palsy in a Patient with Neuroborreliosis: A Case Report
by Yasmine Lahrichi, Jean-Marie Rakic and Anne-Catherine Chapelle
J. Clin. Transl. Ophthalmol. 2026, 4(1), 3; https://doi.org/10.3390/jcto4010003 - 17 Jan 2026
Viewed by 579
Abstract
Background: We report an uncommon presentation of Lyme disease and highlight the importance of a detailed history in a patient with new-onset sixth nerve palsy. Methods: Case report and literature review. Results: A 46-year-old man receiving infliximab presented to the ophthalmology emergency department with horizontal binocular diplopia. History revealed a diffuse headache that had begun three weeks earlier. Ophthalmologic examination demonstrated a left sixth cranial nerve palsy. The workup showed positive Borrelia serum IgG, which was interpreted as a likely false-positive result given the limited specificity of serologic testing. At follow-up, the patient reported left-sided peripheral facial palsy, and worsening headache and diplopia. Further history revealed prior erythema migrans treated with doxycycline four months earlier. Considering these new findings, a lumbar puncture was performed and demonstrated intrathecal production of Borrelia antibodies. Neuroborreliosis, a neurologic involvement secondary to systemic infection by the spirochete Borrelia burgdorferi, was diagnosed. The patient was treated with oral doxycycline for 28 days with complete resolution of symptoms. Conclusions: Lyme disease may present with progressive neuro-ophthalmologic symptoms, underscoring the crucial role of ophthalmologists in its diagnosis. Moreover, immunosuppression may delay diagnosis and allow neurological progression, highlighting the need for careful history taking and close follow-up. Full article

24 pages, 310 KB  
Essay
Power and Love in Intimate Partner Violence Theories: A Conceptual Integration
by Roberta Di Pasquale and Andrea Rivolta
Soc. Sci. 2026, 15(1), 45; https://doi.org/10.3390/socsci15010045 - 15 Jan 2026
Viewed by 1007
Abstract
The field of study on intimate partner violence (IPV) has long been characterized by a bitter debate between two opposing theoretical and ideological positions on the nature of the phenomenon: the first, typical of the feminist perspective, considers IPV an expression of gender-based violence; the second, typical—among others—of the attachment-based perspective, maintains that IPV is a gender-neutral form of violence. The aim of this contribution is to show how a more heuristically fruitful comparison between these two antagonistic perspectives is possible by shifting the focus to the conceptual frameworks that underlie them and to their two corresponding key explanatory concepts: on the one hand, gender-based power, on which the feminist perspective hinges, and on the other, love and love-related emotional dynamics, on which the attachment-based perspective focuses. Finally, we argue that these two key explanatory concepts can be held together in a sort of binocular vision and integrated into a more complex “power-and-love” explanatory framework. To this end, we refer to a systemic approach to IPV, in particular to the contribution of Virginia Goldner, who proposes a model based on the close interconnection between power dynamics and love-related dynamics in the genesis and perpetuation of male violence in heterosexual intimate relationships. Full article
(This article belongs to the Special Issue Contemporary Work in Understanding and Reducing Domestic Violence)
9 pages, 816 KB  
Case Report
Dim Flicker: An Endogenous Visual Percept and Its Disease Associations
by Abdullah Amini, Adam Besic, Avery Freund, Yousif Subhi, Oliver Niels Klefter, Jes Olesen, Jette Lautrup Frederiksen and Michael Larsen
J. Clin. Med. 2026, 15(2), 622; https://doi.org/10.3390/jcm15020622 - 13 Jan 2026
Viewed by 724
Abstract
Background/Purpose: Four patients independently reported episodes of seeing a dimly flickering overlay on an otherwise intact part of their binocular visual field. The aim of the study was to describe the clinical characteristics of this episodic phenomenon, which we call dim flicker. Methods: Retrospective chart review and patient evaluation of an animated reference simulation. Results: The patients described repeated episodes of seeing a patch of rhythmically oscillating dim flicker overlaid on a circumscribed patch of their otherwise normal binocular visual field. The flicker was typically seen at low ambient light levels and disappeared in bright light or when one or both eyes were covered. Episodes lasted seconds to minutes. Some flicker patches crossed the vertical midline. The flicker was subjectively experienced as coming from one specific eye. Compared to a 7 Hz flicker simulation, patients reported differences in location, prominence, and frequency, with the latter ranging from 3 to 10 Hz. In three patients, the flicker was sometimes experienced during aerobic exercise, and in two patients sometimes when they rose at night in the dark. In one patient, the flicker corresponded to an area of ischemic macular edema secondary to central retinal vein occlusion. There was no headache during or after the flicker. Associated maladies included retinal venous congestion, central serous chorioretinopathy, arterial hypertension, atrial fibrillation, and migraine with visual aura distinctly different from the dim flicker. Conclusions: Episodes of seeing an endogenous, rhythmically oscillating transparent overlay within a confined, non-expanding part of an otherwise intact binocular visual field appear to be a distinct nosological entity that can be associated with ocular and systemic vascular disease. Full article
(This article belongs to the Section Ophthalmology)

33 pages, 14779 KB  
Article
A Vision-Based Robot System with Grasping-Cutting Strategy for Mango Harvesting
by Qianling Liu and Zhiheng Lu
Agriculture 2026, 16(1), 132; https://doi.org/10.3390/agriculture16010132 - 4 Jan 2026
Viewed by 1364
Abstract
Mango is the second most widely cultivated tropical fruit in the world. Its harvesting mainly relies on manual labor. During the harvest season, the hot weather leads to low working efficiency and high labor costs. Current research on automatic mango harvesting mainly focuses on locating the fruit stem harvesting point, followed by stem clamping and cutting. However, these methods are less effective when the stem is occluded. To address these issues, this study first acquires images of four mango varieties in a mixed cultivation orchard and builds a dataset. Mango detection and occlusion-state classification models are then established based on YOLOv11m and YOLOv8l-cls, respectively. The detection model achieves an AP0.5–0.95 (average precision at IoU = 0.50:0.05:0.95) of 90.21%, and the accuracy of the classification model is 96.9%. Second, based on the mango growth characteristics, detected mango bounding boxes and binocular vision, we propose a spatial localization method for the mango grasping point. Building on this, a mango-grasping and stem-cutting end-effector is designed. Finally, a mango harvesting robot system is developed, and verification experiments are carried out. The experimental results show that the harvesting method and procedure are well-suited for situations where the fruit stem is occluded, as well as for fruits with no occlusion or partial occlusion. The mango grasping success rate reaches 96.74%, the stem cutting success rate is 91.30%, and the fruit injury rate is less than 5%. The average image processing time is 119.4 ms. The results prove the feasibility of the proposed methods. Full article

20 pages, 7461 KB  
Article
A Wall-Climbing Robot with a Mechanical Arm for Weld Inspection of Large Pressure Vessels
by Ming Zhong, Mingjian Pan, Zhengxiong Mao, Ruifei Lyu and Yaxin Liu
Actuators 2025, 14(12), 607; https://doi.org/10.3390/act14120607 - 12 Dec 2025
Viewed by 801
Abstract
Inspecting the inner walls of large pressure vessels requires accurate weld seam recognition, complete coverage, and precise path tracking, particularly in low-feature environments. This paper presents a fully autonomous mobile robotic system that integrates weld seam detection, localization, and tracking to support ultrasonic testing. An improved Differentiable Binarization Network (DBNet) combined with the Spatially Variant Transformer (SVTR) model enhances digital stamp recognition, while weld paths are reconstructed from three-dimensional position data acquired via binocular stereo vision. To ensure complete traversal and accurate tracking, a global–local hierarchical planning strategy is implemented: the A-star (A*) algorithm performs global path planning, the Rapidly Exploring Random Tree Connect (RRT-Connect) algorithm handles local path generation, and point cloud normal–based spherical interpolation produces smooth tracking trajectories for robotic arm motion control. Experimental validation demonstrates a 94.7% digital stamp recognition rate, 95.8% localization success, 1.65 mm average weld tracking error, 2.12° normal fitting error, 98.2% seam coverage, and a tracking speed of 96 mm/s. These results confirm the system’s capability to automate weld seam inspection and provide a reliable foundation for subsequent ultrasonic testing in pressure vessel applications. Full article
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
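The A* global planner named above is the standard textbook algorithm. A compact sketch on a 4-connected occupancy grid (an illustrative implementation, not the robot's actual planner):

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()  # tiebreaker so the heap never compares cell tuples
    open_heap = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, _, cell = heapq.heappop(open_heap)
        if cell == goal:  # reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g_cost[cell] + 1
                if new_g < g_cost.get(step, float("inf")):
                    g_cost[step] = new_g
                    came_from[step] = cell
                    heapq.heappush(open_heap, (new_g + h(step), next(tie), step))
    return None

# Obstacle wall across the middle row forces a detour via the right column.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(len(path))  # 7 cells: across the top, down the right side, and back
```

Because Manhattan distance never overestimates the remaining cost on a 4-connected grid, the first time the goal is popped its path is optimal.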
