Search Results (77)

Search Parameters:
Keywords = monocular distance measurement

9 pages, 350 KiB  
Article
Visual Quality and Symptomatology Following Implantation of a Non-Diffractive Extended Depth-of-Focus Intraocular Lens
by Antonio Cano-Ortiz, Álvaro Sánchez-Ventosa, Timoteo González-Cruces, Marta Villalba-González, Francisco Javier Aguilar-Salazar, Juan J. Prados-Carmona, Carlos Carpena-Torres, Gonzalo Carracedo and Alberto Villarrubia
J. Clin. Med. 2025, 14(13), 4460; https://doi.org/10.3390/jcm14134460 - 23 Jun 2025
Abstract
Background/Objectives: This study aimed to evaluate the visual quality and symptomatology of a non-diffractive extended depth-of-focus (EDoF) intraocular lens (IOL), the Elon 877PEY (Medicontur, Zsámbék, Hungary), three months after implantation. Methods: A cross-sectional case series study was conducted, with measurements taken three months post-implantation of the Elon IOL. A total of 56 implanted eyes from 28 patients (mean age: 64.5 ± 9.5 years) were included in the statistical analysis. The variables analyzed to assess the effectiveness of the Elon IOL included high-contrast visual acuity, contrast sensitivity, the defocus curve, and visual symptoms. Results: Three months after implantation, the mean residual sphere was 0.00 ± 0.33 D, while the mean residual cylinder was −0.25 ± 0.41 D. Without correction, patients achieved monocular decimal visual acuity values of 0.94 ± 0.26 for distance, 0.79 ± 0.17 for intermediate, and 0.58 ± 0.15 for near vision. The mean uncorrected contrast sensitivity was 1.61 ± 0.15 log. The defocus curve showed visual acuity exceeding 0.80 decimal (0.10 logMAR) over a 2.00 D range and above 0.63 decimal (0.20 logMAR) over a 2.50 D range. The most frequently reported symptoms, with mild severity and bothersomeness, were glare, starbursts, halos, and focusing difficulties. Conclusions: Patients implanted with the Elon IOL achieved satisfactory visual quality at all distances, comparable to outcomes reported for other EDoF IOLs in the scientific literature. Full article
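For readers switching between the two notations used in this abstract, decimal visual acuity and logMAR are related by logMAR = -log10(decimal VA); a minimal check of the quoted values:

```python
import math

def decimal_to_logmar(va_decimal: float) -> float:
    """logMAR = -log10(decimal visual acuity)."""
    return -math.log10(va_decimal)

# Values quoted above: 0.80 decimal ~ 0.10 logMAR, 0.63 decimal ~ 0.20 logMAR.
for va in (0.80, 0.63):
    print(f"{va:.2f} decimal -> {decimal_to_logmar(va):.2f} logMAR")
```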

14 pages, 2035 KiB  
Article
Integration of YOLOv9 Segmentation and Monocular Depth Estimation in Thermal Imaging for Prediction of Estrus in Sows Based on Pixel Intensity Analysis
by Iyad Almadani, Aaron L. Robinson and Mohammed Abuhussein
Digital 2025, 5(2), 22; https://doi.org/10.3390/digital5020022 - 13 Jun 2025
Abstract
Many researchers focus on improving reproductive health in sows and ensuring successful breeding by accurately identifying the optimal time of ovulation through estrus detection. One promising non-contact technique involves using computer vision to analyze temperature variations in thermal images of the sow’s vulva. However, variations in camera distance during dataset collection can significantly affect the accuracy of this method, as different distances alter the resolution of the region of interest, causing pixel intensity values to represent varying areas and temperatures. This inconsistency hinders the detection of the subtle temperature differences required to distinguish between estrus and non-estrus states. Moreover, failure to maintain a consistent camera distance, along with external factors such as atmospheric conditions and improper calibration, can distort temperature readings, further compromising data accuracy and reliability. Furthermore, without addressing distance variations, the model’s generalizability diminishes, increasing the likelihood of false positives and negatives and ultimately reducing the effectiveness of estrus detection. In our previously proposed methodology for estrus detection in sows, we utilized YOLOv8 for segmentation and keypoint detection, while monocular depth estimation was used for camera calibration. This calibration helps establish a functional relationship between the measurements in the image (such as distances between labia, the clitoris-to-perineum distance, and vulva perimeter) and the depth distance to the camera, enabling accurate adjustments and calibration for our analysis. Estrus classification is performed by comparing new data points with reference datasets using a three-nearest-neighbor voting system. In this paper, we aim to enhance our previous method by incorporating the mean pixel intensity of the region of interest as an additional factor. We propose a detailed four-step methodology coupled with two stages of evaluation. First, we carefully annotate masks around the vulva to calculate its perimeter precisely. Leveraging the advantages of deep learning, we train a model on these annotated images, enabling segmentation using the cutting-edge YOLOv9 algorithm. This segmentation enables the detection of the sow’s vulva, allowing for analysis of its shape and facilitating the calculation of the mean pixel intensity in the region. Crucially, we use monocular depth estimation from the previous method, establishing a functional link between pixel intensity and the distance to the camera, ensuring accuracy in our analysis. We then introduce a classification approach that differentiates between estrus and non-estrus regions based on the mean pixel intensity of the vulva. This classification method involves calculating Euclidean distances between new data points and reference points from two datasets: one for “estrus” and the other for “non-estrus”. The classification process identifies the five closest neighbors from the datasets and applies a majority voting system to determine the label. A new point is classified as “estrus” if the majority of its nearest neighbors are labeled as estrus; otherwise, it is classified as “non-estrus”. This automated approach offers a robust solution for accurate estrus detection. To validate our method, we propose two evaluation stages: first, a quantitative analysis comparing the performance of our new YOLOv9 segmentation model with the older U-Net and YOLOv8 models. 
Secondly, we assess the classification process by defining a confusion matrix and comparing the results of our previous method, which used the three nearest points, with those of our new model that utilizes five nearest points. This comparison allows us to evaluate the improvements in accuracy and performance achieved with the updated model. The automation of this vital process holds the potential to revolutionize reproductive health management in agriculture, boosting breeding success rates. Through thorough evaluation and experimentation, our research highlights the transformative power of computer vision, pushing forward more advanced practices in the field. Full article
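As an implementation-level illustration of the five-nearest-neighbour vote described above, the sketch below classifies a new feature vector against two labelled reference sets; the feature layout (e.g., depth-corrected mean pixel intensity plus vulva perimeter) and all names are assumptions, not the authors' code.

```python
import numpy as np

def classify_estrus(sample, estrus_refs, non_estrus_refs, k=5):
    """Label a new sample by majority vote among its k nearest reference points.

    sample          : 1-D feature vector (e.g., depth-corrected mean intensity, perimeter)
    estrus_refs     : (n, d) array of reference feature vectors labelled "estrus"
    non_estrus_refs : (m, d) array labelled "non-estrus"
    """
    refs = np.vstack([estrus_refs, non_estrus_refs])
    labels = np.array(["estrus"] * len(estrus_refs) + ["non-estrus"] * len(non_estrus_refs))
    dists = np.linalg.norm(refs - sample, axis=1)     # Euclidean distances
    nearest = labels[np.argsort(dists)[:k]]           # k closest neighbours
    votes = np.sum(nearest == "estrus")
    return "estrus" if votes > k // 2 else "non-estrus"
```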

18 pages, 2718 KiB  
Article
Adaptive Measurement of Space Target Separation Velocity Based on Monocular Vision
by Haifeng Zhang, Han Ai, Zeyu He, Delian Liu, Jianzhong Cao and Chao Mei
Electronics 2025, 14(11), 2137; https://doi.org/10.3390/electronics14112137 - 24 May 2025
Abstract
Spacecraft separation safety is a key aspect of flight safety, and obtaining the velocity and distance curves of the spacecraft and booster at separation is at the core of separation safety analysis. To solve the separation velocity measurement problem, this paper introduces the YOLOv8_n target detection algorithm and a circle fitting algorithm based on random sample consensus (RANSAC) to measure the separation velocity of space targets from space-based video obtained by a monocular camera mounted on the launch vehicle body. First, the MobileNetV3 network is used to replace the backbone network of YOLOv8_n. Then, the RANSAC-based circle fitting algorithm is refined to improve its anti-interference performance and its adaptability to varying lighting conditions. Finally, by analyzing the imaging principle of the monocular camera together with the circle feature detection results, distance information is obtained, from which the velocity measurements are derived. Experimental results on space-based video show that the YOLOv8_n detection algorithm detects the booster target quickly and accurately, and that the improved RANSAC-based circle fitting algorithm measures the separation velocity in real time while maintaining the detection speed. Ground simulation results show that the error of this method is about 1.2%. Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)
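To make the circle-fitting step concrete, here is a generic RANSAC circle fit on 2D edge points; the inlier threshold and iteration count are illustrative, and this is not the paper's improved variant.

```python
import numpy as np

def circle_from_3_points(p1, p2, p3):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 through three points; return centre and radius."""
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    b = np.array([-(p[0]**2 + p[1]**2) for p in (p1, p2, p3)])
    D, E, F = np.linalg.solve(A, b)
    cx, cy = -D / 2, -E / 2
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

def ransac_circle(points, iters=200, tol=2.0, rng=np.random.default_rng(0)):
    """Fit a circle to noisy edge points while ignoring outliers."""
    pts = np.asarray(points, dtype=float)
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        try:
            centre, r = circle_from_3_points(*sample)
        except np.linalg.LinAlgError:          # collinear sample, skip it
            continue
        dist = np.abs(np.linalg.norm(pts - centre, axis=1) - r)
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best, best_inliers = (centre, r), inliers
    return best
```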

23 pages, 6679 KiB  
Article
Fusion Ranging Method of Monocular Camera and Millimeter-Wave Radar Based on Improved Extended Kalman Filtering
by Ye Chen, Qirui Cui and Shungeng Wang
Sensors 2025, 25(10), 3045; https://doi.org/10.3390/s25103045 - 12 May 2025
Abstract
To address the limitations of single-sensor systems in environmental perception, such as the difficulty of comprehensively capturing complex environmental information and insufficient detection accuracy and robustness in dynamic environments, this study proposes a distance measurement method based on the fusion of millimeter-wave (MMW) radar and a monocular camera. Initially, a monocular ranging model was constructed based on object detection algorithms. Subsequently, a pixel-distance joint dual-constraint matching algorithm was employed to accomplish cross-modal matching between the MMW radar and the monocular camera. Furthermore, an adaptive fuzzy extended Kalman filter (AFEKF) algorithm was established to fuse the ranging data acquired from the monocular camera and the MMW radar. Experimental results demonstrate that the AFEKF algorithm achieved an average root mean square error (RMSE) of 0.2131 m across 15 test datasets. Compared with the raw MMW radar data, inverse variance weighting (IVW) filtering, and the traditional extended Kalman filter (EKF), the AFEKF algorithm reduced the average RMSE by 10.54%, 11.10%, and 22.57%, respectively. By integrating an adaptive fuzzy mechanism into the extended Kalman filter, the AFEKF algorithm provides a reliable and effective solution for enhancing localization accuracy and system stability. Full article
(This article belongs to the Section Radar Sensors)
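The abstract does not spell out the filter equations; as background only, a minimal non-adaptive Kalman filter that sequentially fuses a camera range and a radar range could look like the sketch below. The motion model, noise values, and names are assumptions, and the paper's adaptive fuzzy tuning of the measurement noise is deliberately omitted.

```python
import numpy as np

def kf_fuse(ranges_cam, ranges_radar, dt=0.1, r_cam=0.5**2, r_radar=0.2**2):
    """Fuse two range streams with a constant-velocity Kalman filter (non-adaptive sketch)."""
    x = np.array([ranges_radar[0], 0.0])           # state: [distance, range rate]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity motion model
    Q = np.diag([1e-3, 1e-2])                       # process noise (assumed)
    H = np.array([[1.0, 0.0]])                      # both sensors observe distance
    fused = []
    for z_cam, z_radar in zip(ranges_cam, ranges_radar):
        x, P = F @ x, F @ P @ F.T + Q               # predict
        for z, r in ((z_cam, r_cam), (z_radar, r_radar)):   # sequential measurement updates
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return fused
```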

22 pages, 30414 KiB  
Article
Metric Scaling and Extrinsic Calibration of Monocular Neural Network-Derived 3D Point Clouds in Railway Applications
by Daniel Thomanek and Clemens Gühmann
Appl. Sci. 2025, 15(10), 5361; https://doi.org/10.3390/app15105361 - 11 May 2025
Abstract
Three-dimensional reconstruction using monocular camera images is a well-established research topic. While multi-image approaches like Structure from Motion produce sparse point clouds, single-image depth estimation via machine learning promises denser results. However, many models estimate relative depth, and even those providing metric depth often struggle with unseen data due to unfamiliar camera parameters or domain-specific challenges. Accurate metric 3D reconstruction is critical for railway applications, such as ensuring structural gauge clearance from vegetation to meet legal requirements. We propose a novel method to scale 3D point clouds using the track gauge, which takes only a small set of standard values across regions and countries worldwide (e.g., 1.435 m in Europe). Our approach leverages state-of-the-art image segmentation to detect rails and measure the track gauge from a train driver's perspective. Additionally, we extend our method to estimate a reasonable railway-specific extrinsic camera calibration. Evaluations show that our method reduces the average Chamfer distance to LiDAR point clouds from 1.94 m (benchmark UniDepth) to 0.41 m for image-wise calibration and 0.71 m for average calibration. Full article
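Stripped of the rail segmentation and extrinsic calibration steps, the core scaling idea reduces to one global factor; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def scale_point_cloud(points, gauge_measured, gauge_true=1.435):
    """Rescale a relative-depth point cloud so the detected track gauge
    matches its known metric value (1.435 m standard gauge in Europe).

    points         : (N, 3) array in the network's arbitrary units
    gauge_measured : rail-to-rail distance measured in those same units
    """
    s = gauge_true / gauge_measured      # single global scale factor
    return np.asarray(points) * s
```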

9 pages, 700 KiB  
Article
Comparison of Visual Performance Between Two Diffractive Trifocal Intraocular Lenses
by Gloria Segura-Duch, David Oliver-Gutierrez, Mar Arans, Susana Duch-Tuesta, Carlos Carpena-Torres, Gonzalo Carracedo and David Andreu-Andreu
J. Clin. Med. 2025, 14(9), 3128; https://doi.org/10.3390/jcm14093128 - 30 Apr 2025
Abstract
Background/Objectives: This study aimed to compare the visual outcomes of two diffractive trifocal intraocular lenses (IOLs): the Bi-Flex Liberty 677MY (Medicontur; Zsámbék, Hungary) and the FineVision POD F (BVI Medical; Waltham, MA, USA). Methods: A prospective study with a 3-month follow-up was conducted. A total of 62 patients were divided into two groups according to the type of lens implanted: 31 patients with the Liberty lens (61.1 ± 6.4 years) and 31 patients with the FineVision lens (61.9 ± 6.8 years). Three measurement sessions were conducted (baseline, 1 month, and 3 months). These sessions included measurements of subjective refraction, visual acuity, and the defocus curve. Both eyes of each patient were operated on and included in the statistical analysis. Results: Three months after surgery, monocular visual acuity with the Liberty lens was statistically significantly greater than with the FineVision lens at defocus values of −2.00 D (50 cm) and −2.50 D (40 cm) (p < 0.01). In this regard, the near visual acuity results (40–50 cm) with the Liberty lens showed greater variability than those of the FineVision lens. Binocularly, however, the FineVision lens demonstrated statistically significantly better visual acuity than the Liberty lens at a defocus of −1.50 D (67 cm) (p = 0.01). Both IOLs provided visual acuities better than 0.20 logMAR for a defocus range from distance (0.00 D) to near (−3.50 D). Conclusions: Future studies are needed to investigate which patient ocular parameters could predict improved near vision with the Liberty lens or intermediate vision with the FineVision lens. Full article
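The defocus values quoted above correspond to viewing distances via the reciprocal relation distance = 1 / |defocus in diopters|; a quick check of the figures in the abstract:

```python
def defocus_to_distance_m(defocus_diopters: float) -> float:
    """A defocus of -D diopters simulates a target at 1/D metres."""
    return 1.0 / abs(defocus_diopters)

for d in (-1.50, -2.00, -2.50):
    print(f"{d:+.2f} D -> {defocus_to_distance_m(d) * 100:.0f} cm")   # 67, 50, 40 cm
```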

11 pages, 1950 KiB  
Article
Pilot Study Evaluating the Early Clinical Outcomes Obtained with a Novel, Customized, Multifocal Corneo-Scleral Contact Lens for Presbyopia Correction
by Laura Barberán-Bernardos, Daniel Soriano Salcedo, Sergio Díaz-Gómez and David P. Piñero
Life 2025, 15(5), 700; https://doi.org/10.3390/life15050700 - 25 Apr 2025
Abstract
Background: The objective was to preliminarily evaluate the short-term clinical outcomes obtained in presbyopic patients with a novel, multifocal, customized corneo-scleral contact lens (CSCL). Methods: A total of 11 presbyopic subjects (age 45–80 years, corrected-distance visual acuity ≤ 0.1 LogMAR, near addition ≥ +1.00 D) were recruited and fitted with a multifocal corneo-scleral contact lens in this pilot study. Pre-fitting evaluations included stereopsis, contrast sensitivity (CS), and ocular aberrometry, with follow-up assessments conducted at 20 min and 1 month post-fitting. The defocus curve was also measured to assess visual performance across varying distances. Results: Twenty-two eyes from 11 participants (53.9 ± 4.7 years, 10 female) were included in this study. Significant changes were observed post-fitting for primary and secondary spherical aberration, coma, and stereopsis (p ≤ 0.033). No significant changes in Strehl ratio and total root mean square were detected (p ≥ 0.182). Binocular contrast sensitivity was better with spectacles than with the fitted CSCL at all frequencies (p ≤ 0.048), but the monocular difference did not reach statistical significance at 18 cycles per degree (p = 0.109). All patients achieved a visual acuity of 0.0 LogMAR or better at distance, 90.9% did so at intermediate distance, and 91.8% achieved 0.3 LogMAR or better for near vision. Conclusions: The customized CSCL evaluated provided functional recovery of visual quality across distances, with acceptable reductions in CS and stereopsis that are comparable to those reported for other multifocal contact lenses. Full article
(This article belongs to the Special Issue Vision Science and Optometry)

18 pages, 9039 KiB  
Article
An Intelligent Monitoring System for the Driving Environment of Explosives Transport Vehicles Based on Consumer-Grade Cameras
by Jinshan Sun, Jianhui Tang, Ronghuan Zheng, Xuan Liu, Weitao Jiang and Jie Xu
Appl. Sci. 2025, 15(7), 4072; https://doi.org/10.3390/app15074072 - 7 Apr 2025
Abstract
Explosives are an important industrial product that is widely used in production and must be transported. Explosives transport vehicles are exposed to various external factors during driving, which increases transportation risk. New transport vehicles are generally equipped with intelligent driving monitoring systems; however, retrofitting such systems to older vehicles is relatively expensive. To enhance the safety of older explosives transport vehicles, this study proposes a cost-effective intelligent monitoring system using consumer-grade IP cameras and edge computing. The system integrates YOLOv8 for real-time vehicle detection and a novel hybrid ranging strategy combining monocular (fast) and binocular (accurate) techniques to measure distances, ensuring rapid warnings and precise proximity monitoring. An optimized stereo matching workflow reduces processing latency by 23.5%, enabling real-time performance on low-cost devices. Experimental results confirm that the system meets safety requirements, offering a practical, application-specific solution for improving driving safety in resource-limited explosives transport environments. Full article
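The "fast monocular, accurate binocular" strategy rests on two standard pinhole-geometry relations; the sketch below uses purely illustrative numbers, since the paper's calibration values are not given in the abstract.

```python
def monocular_range(f_px, real_height_m, bbox_height_px):
    """Fast, approximate range from a known object height (pinhole model)."""
    return f_px * real_height_m / bbox_height_px

def binocular_range(f_px, baseline_m, disparity_px):
    """More accurate range from stereo disparity: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

# Illustrative numbers only: 1000 px focal length, 1.5 m assumed vehicle height,
# 0.12 m stereo baseline.
print(monocular_range(1000, 1.5, 60))    # ~25 m coarse estimate
print(binocular_range(1000, 0.12, 4.8))  # ~25 m refined estimate
```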

22 pages, 4915 KiB  
Article
A CNN-Based Indoor Positioning Algorithm for Dark Environments: Integrating Local Binary Patterns and Fast Fourier Transform with the MC4L-IMU Device
by Nan Yin, Yuxiang Sun and Jae-Soo Kim
Appl. Sci. 2025, 15(7), 4043; https://doi.org/10.3390/app15074043 - 7 Apr 2025
Abstract
In our previous study, we proposed a vision-based ranging algorithm (LRA) that utilized a monocular camera with four lasers (MC4L) for indoor positioning in dark environments. The LRA achieved a positioning error within 2.4 cm using a logarithmic regression algorithm to establish a linear relationship between the illuminated area and real distance. However, it could not distinguish between obstacles and walls and therefore produced severe errors in complex environments. To address this limitation, we developed an LBP-CNNs model that combines local binary patterns (LBPs) and self-attention mechanisms. The model effectively identifies obstacles based on the laser reflectivity of different material surfaces. It reduces positioning errors to 1.27 cm and achieves an obstacle recognition accuracy of 92.3%. In this paper, we further enhance LBP-CNNs by combining it with the fast Fourier transform (FFT) to create an LBP-FFT-CNNs model that significantly improves the recognition accuracy of obstacles with similar textures to 96.3% and reduces positioning errors to 0.91 cm. In addition, an inertial measurement unit (IMU) is integrated into the MC4L device (MC4L-IMU) to design an inertial-based indoor positioning algorithm. Experimental results show that the LBP-FFT-CNNs model achieves the highest coefficient of determination (R² = 0.9949), outperforming LRA (R² = 0.9867) and LBP-CNN (R² = 0.9934). In addition, all models show strong stability, and the prediction standard index (PSI) values are always below 0.02. To evaluate model robustness and verify that the MC4L-IMU operates reliably under different conditions, experiments were conducted in a controlled indoor environment with different obstacle materials and lighting conditions. Full article
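As background for the LBP component, a textbook 8-neighbour local binary pattern can be computed as follows; this is a generic operator, not the exact descriptor or CNN pipeline used in the paper (borders wrap around for simplicity).

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 local binary pattern: each pixel is encoded by comparing
    its 8 neighbours against the centre value (1 if >= centre, else 0)."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    # neighbour offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        out |= ((shifted >= img).astype(np.uint8) << bit)
    return out  # per-pixel LBP codes; their histogram would feed the classifier
```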

11 pages, 1021 KiB  
Article
Clinical Outcomes of Combined Phacoemulsification, Extended Depth-of-Focus Intraocular Lens Implantation, and Epiretinal Membrane Peeling Surgery
by Ho-Seok Chung, Dabin Lee and Jin-Hyoung Park
J. Clin. Med. 2025, 14(7), 2423; https://doi.org/10.3390/jcm14072423 - 2 Apr 2025
Abstract
Background/Objectives: To evaluate the clinical efficacy and safety of combined phacoemulsification, extended depth-of-focus (EDOF) intraocular lens (IOL) implantation, and epiretinal membrane (ERM) peeling during vitrectomy surgery for treating patients with ERM, cataracts, and presbyopia. Methods: Patients with preexisting low-grade ERM who underwent cataract surgery with the implantation of an EDOF IOL were included. Corrected distance visual acuity (CDVA), uncorrected distance visual acuity (UDVA), uncorrected intermediate visual acuity (UIVA), uncorrected near visual acuity (UNVA), autorefraction and keratometry, manifest refraction, and central foveal thickness (CFT) were measured before surgery and at postoperative months 3 and 6. A monocular defocus curve was measured 6 months postoperatively. Furthermore, patients were instructed to report symptoms of photic phenomena at each visit. Results: In total, 16 eyes of 16 patients (median age, 59.5 years) were included in this study. Compared with those at baseline, the CDVA, UDVA, UIVA, UNVA, and CFT significantly improved at 3 and 6 months postoperatively. The defocus curve revealed that a visual acuity of 0.12 logarithm of the minimal angle of resolution or better was maintained from +0.5 to –1.5 diopters. No patients reported visual disturbances suggestive of photic phenomena, such as glare or halo. Conclusions: EDOF IOL implantation had excellent outcomes, including improved distance and intermediate visual acuity, functional near visual acuity, and absence of visual symptoms in patients who received phacovitrectomy to treat low-grade ERM. Full article

23 pages, 1756 KiB  
Article
IVU-AutoNav: Integrated Visual and UWB Framework for Autonomous Navigation
by Shuhui Bu, Jie Zhang, Xiaohan Li, Kun Li and Boni Hu
Drones 2025, 9(3), 162; https://doi.org/10.3390/drones9030162 - 22 Feb 2025
Cited by 1
Abstract
To address the inherent scale ambiguity and positioning drift in monocular visual Simultaneous Localization and Mapping (SLAM), this paper proposes a novel localization method that integrates monocular visual SLAM with Ultra-Wideband (UWB) ranging information. This method enables high-precision localization for unmanned aerial vehicles (UAVs) in complex environments without global navigation information. The proposed framework, IVU-AutoNav, relies solely on distance measurements between a fixed UWB anchor and the UAV’s UWB device. Initially, it jointly solves for the position of the UWB anchor and the scale factor of the SLAM system using the scale-ambiguous SLAM data and ranging information. Subsequently, a pose optimization equation is formulated, which integrates visual reprojection errors and ranging errors, to achieve precise localization with a metric scale. Furthermore, a global optimization process is applied to enhance the global consistency of the localization map and optimize the positions of the UWB anchors and scale factor. The proposed approach is validated through both simulation and experimental studies, demonstrating its effectiveness. Experimental results show a scale error of less than 1.8% and a root mean square error of 0.23 m, outperforming existing state-of-the-art visual SLAM systems. These findings underscore the potential and efficacy of the monocular visual-UWB coupled SLAM method in advancing UAV navigation and localization capabilities. Full article
(This article belongs to the Special Issue Drones Navigation and Orientation)
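The joint scale-and-anchor initialization can be posed as a small nonlinear least-squares problem over the range measurements; a generic sketch (not the paper's own formulation, and the initial guess is an assumption):

```python
import numpy as np
from scipy.optimize import least_squares

def solve_scale_and_anchor(p_slam, ranges):
    """Jointly estimate the SLAM scale factor s and the UWB anchor position a
    from scale-ambiguous camera positions p_i and ranges r_i = ||s * p_i - a||."""
    p_slam = np.asarray(p_slam, dtype=float)     # (N, 3) up-to-scale positions
    ranges = np.asarray(ranges, dtype=float)     # (N,) UWB range measurements

    def residuals(theta):
        s, a = theta[0], theta[1:4]
        return np.linalg.norm(s * p_slam - a, axis=1) - ranges

    x0 = np.concatenate([[1.0], p_slam.mean(axis=0)])   # crude initial guess
    sol = least_squares(residuals, x0)
    return sol.x[0], sol.x[1:4]                          # scale, anchor position
```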

26 pages, 6921 KiB  
Article
Automated Docking System for LNG Loading Arm Based on Machine Vision and Multi-Sensor Fusion
by Rui Xiang, Wuwei Feng, Songling Song and Hao Zhang
Appl. Sci. 2025, 15(5), 2264; https://doi.org/10.3390/app15052264 - 20 Feb 2025
Cited by 2
Abstract
With the growth of global liquefied natural gas (LNG) demand, automation technology has become a key trend to improve the efficiency and safety of LNG handling. In this study, a novel automatic docking system is proposed which adopts a staged docking strategy based on a monocular camera for positioning and combines ultrasonic sensors to achieve multi-stage optimization in the fine docking stage. In the coarse docking stage, the system acquires flange image data through the monocular camera, calculates 3D coordinates based on geometric feature extraction and coordinate transformation, and completes the preliminary target localization and fast approach; in the fine docking stage, the ultrasonic sensor is used to measure the multidirectional distance deviation, and the fusion of the monocular data is used to make dynamic adjustments to achieve high-precision alignment and localization. Simulation and experimental verification show that the system has good robustness in complex environments, such as wind and waves, and can achieve docking accuracy within 3 mm, which is better than the traditional manual docking method. This study provides a practical solution for automated docking of LNG loading arms, which can significantly improve the efficiency and safety of LNG loading and unloading operations. Full article
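The coarse-docking localization ("geometric feature extraction and coordinate transformation") can be illustrated with standard pinhole back-projection of a circular flange of known diameter; every number and name below is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

def flange_centre_3d(u, v, d_px, fx, fy, cx, cy, flange_diam_m=0.30):
    """Back-project the detected flange centre (u, v) and its apparent diameter
    d_px into camera coordinates using the pinhole model."""
    Z = fx * flange_diam_m / d_px        # depth from the known physical diameter
    X = (u - cx) * Z / fx                # lateral offsets from the principal point
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])
```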

22 pages, 7262 KiB  
Article
Reliability and Validity Examination of a New Gait Motion Analysis System
by Tadamitsu Matsuda, Yuji Fujino, Tomoyuki Morisawa, Tetsuya Takahashi, Kei Kakegawa, Takanari Matsumoto, Takehiko Kiyohara, Hiroshi Fukushima, Makoto Higuchi, Yasuo Torimoto, Masaki Miwa, Toshiyuki Fujiwara and Hiroyuki Daida
Sensors 2025, 25(4), 1076; https://doi.org/10.3390/s25041076 - 11 Feb 2025
Abstract
Recent advancements have made two-dimensional (2D) clinical gait analysis systems more accessible and portable than traditional three-dimensional (3D) clinical systems. This study evaluates the reliability and validity of gait measurements using monocular and composite camera setups with VisionPose, comparing them to the Vicon 3D motion capture system as a reference. Key gait parameters—including hip and knee joint angles, and time and distance factors—were assessed under normal, maximum speed, and tandem gait conditions during level walking. The results show that the intraclass correlation coefficient (ICC(1,k)) for the 2D model exceeded 0.969 for the monocular camera and 0.963 for the composite camera for gait parameters. Time–distance gait parameters demonstrated excellent relative agreement across walking styles, while joint range of motion showed overall strong agreement. However, accuracy was lower for measurements during tandem walking. The Cronbach’s alpha coefficient for time–distance parameters ranged from 0.932 to 0.999 (monocular) and from 0.823 to 0.998 (composite). In contrast, for joint range of motion, the coefficient varied more widely, ranging from 0.826 to 0.985 (monocular) and from 0.314 to 0.974 (composite). The correlation coefficients for spatiotemporal gait parameters were greater than 0.933 (monocular) and 0.837 (composite). However, for joint angle parameters, the coefficients were lower during tandem walking. This study underscores the potential of 2D models in clinical applications and highlights areas for improvement to enhance their reliability and application scope. Full article
(This article belongs to the Section Wearables)
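The ICC(1,k) values reported above can be reproduced from a subjects-by-measurements matrix with the standard Shrout and Fleiss one-way formula; a minimal sketch (not the authors' analysis code):

```python
import numpy as np

def icc_1k(ratings):
    """ICC(1,k) from an (n_subjects, k_measurements) matrix, one-way random model:
    ICC(1,k) = (MS_between - MS_within) / MS_between."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    ms_between = k * np.sum((r.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((r - r.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / ms_between
```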

12 pages, 4282 KiB  
Article
Simplifying the Diagnosis of Pediatric Nystagmus with Fundus Photography
by Noa Cohen-Sinai, Inbal Man Peles, Basel Obied, Noa Netzer, Noa Hadar, Alon Zahavi and Nitza Goldenberg-Cohen
Children 2025, 12(2), 211; https://doi.org/10.3390/children12020211 - 11 Feb 2025
Abstract
Background/Objectives: To simplify diagnosing congenital and acquired nystagmus using fundus photographs. Methods: A retrospective study included patients with congenital or childhood-acquired nystagmus examined at a hospital-based ophthalmology clinic (September 2020–September 2023) with fundus photos taken. Exclusions were for incomplete data or low-quality images. Demographics, aetiology, orthoptic measurements, and ophthalmologic and neurological exams were reviewed. Two independent physicians graded fundus photos based on amplitude (distance between “ghost” images), the number of images visible, and the direction of nystagmus. Severity was rated on a 0–3 scale using qualitative and quantitative methods. Photographic findings were compared to clinical data, and statistical analysis used Mann-Whitney tests. Results: A total of 53 eyes from 29 patients (16 females, 13 males; mean age 12.5 years, range 3–65) were studied: 25 with binocular nystagmus and 3 with monocular nystagmus. Diagnoses included congenital (n = 15), latent-manifest (n = 3), neurologically associated (n = 2), and idiopathic (n = 9). Types observed were vertical (n = 5), horizontal (n = 23), rotatory (n = 10), and multidirectional (n = 15). Visual acuity ranged from 20/20 to no light perception. Fundus photos correlated with clinical diagnoses, aiding qualitative assessment of direction and amplitude and mitigating eye movement effects for clearer retinal detail visualization. Conclusions: Fundus photography effectively captures nystagmus characteristics and retinal details, even in young children, despite continuous eye movements. Integrating fundus cameras into routine practice may enhance nystagmus diagnosis and management, improving patient outcomes. Full article
(This article belongs to the Section Pediatric Ophthalmology)

14 pages, 6076 KiB  
Article
Fast and Slow Response of the Accommodation System in Young and Incipient-Presbyope Adults During Sustained Reading Task
by Ebrahim Safarian Baloujeh, António Queirós, Rafael Navarro and José Manuel González-Méijome
J. Clin. Med. 2025, 14(4), 1107; https://doi.org/10.3390/jcm14041107 - 9 Feb 2025
Abstract
Objectives: To investigate the dynamics of accommodation during and immediately after a sustained reading task on a digital device across various age groups under monocular and binocular conditions. Methods: Seventeen subjects were selected and divided into three age groups: young adults (n = 4, age: 21.3 ± 3.2 years), adults (n = 4, age: 34 ± 3.56 years), and incipient presbyopes (n = 9, age: 45 ± 3.61 years). Dynamic accommodation and disaccommodation were objectively measured using the WAM-5500 open-view autorefractor during 2 min of distance fixation (Maltese cross at 6 m), 5 min of sustained near reading on a teleprompter app at the nearest readable distance, and 2 min of distance vision. Six sequential temporal landmarks were identified. Quantitative metrics for accommodation lag (AL), slope of slow accommodation (SSA), slope of slow disaccommodation (SSD), peak velocity of accommodation (PVA), and peak velocity of disaccommodation (PVD) were obtained as absolute values of spherical equivalent refraction (SER) change. Results: SSA, SSD, and AL were significantly and positively correlated with age (ρ = 0.75, 0.73, 0.51, respectively; p ≤ 0.038). For subjects under 45 years of age, SSA and SSD increased quadratically with age, while for those above 45 years, both SSA and SSD decreased linearly. Linear regression of PVA and PVD with age indicated that the disaccommodation mechanism is faster than accommodation (slope = –0.15 and –0.23, respectively). PVA was faster under monocular than under binocular conditions, although this difference did not reach statistical significance (p = 0.124). Conclusions: Incipient presbyopes demonstrate a complex response in both accommodation and disaccommodation. The accommodation system responds quickly, but there is also a slower response that can provide up to an additional 1 D of accommodative response during sustained near reading tasks. It is hypothesized that the crystalline lens exhibits hysteresis in returning to its unaccommodated state, due to its viscoelastic properties, meaning that it takes time to relax. Full article
