Article

Vision-Based Characterization of Gear Transmission Mechanisms to Improve 3D Laser Scanner Accuracy

by Fernando Lopez-Medina 1,2, José A. Núñez-López 1,2,*, Oleg Sergiyenko 1, Dennis Molina-Quiroz 2, Cesar Sepulveda-Valdez 2, Jesús R. Herrera-García 2, Vera Tyrsa 2 and Ruben Alaniz-Plata 1

1 Instituto de Ingeniería, Universidad Autónoma de Baja California, Blvd. Benito Juárez y Calle de La Normal, s/n, Col. Insurgentes Este, Mexicali C.P. 21280, Mexico
2 Facultad de Ingeniería, Universidad Autónoma de Baja California, Blvd. Benito Juárez y Calle de La Normal, s/n, Col. Insurgentes Este, Mexicali C.P. 21280, Mexico
* Author to whom correspondence should be addressed.
Metrology 2025, 5(4), 58; https://doi.org/10.3390/metrology5040058
Submission received: 24 July 2025 / Revised: 15 September 2025 / Accepted: 23 September 2025 / Published: 25 September 2025
(This article belongs to the Special Issue Advancements in Optical Measurement Devices and Technologies)

Abstract

Some laser scanners use stepper-motor-driven optomechanical assemblies to position the laser beam precisely during triangulation. In laser scanners such as the Technical Vision System (TVS) presented here, gear transmissions are placed between the motor and the optical assembly to enhance motion resolution. However, because the mechanical design is customized, manufacturing errors or insufficient mechanical characterization can introduce deviations in the computed 3D coordinates. In this work, we present a novel method for estimating the degrees-per-step ratio at the output of the laser positioner’s transmission mechanism using a stereovision system. Experimental results demonstrate the effectiveness of the proposed method, which reduces the need for manual metrological instruments and simplifies the calibration procedure through vision-assisted measurements. The method yielded estimated angular resolutions of approximately 0.06° and 0.07° per motor step in the horizontal and vertical axes, respectively; these are key parameters that define the minimal resolvable displacement of the projected beam in dynamic triangulation.

1. Introduction

Three-dimensional scanning systems have proven to be essential tools across a wide range of applications, including obstacle detection in dynamic environments [1], industrial inspection and manufacturing tasks [2,3], and detailed 3D reconstruction for geometric analysis or autonomous navigation [4,5].
Among the most commonly used technologies are camera-based systems, which reconstruct the geometry of a scene from images using techniques such as disparity estimation, followed by epipolar triangulation for 3D reconstruction [6,7,8]. However, a key limitation of these passive systems is their reliance on adequate lighting conditions, which constrains their performance in dark or poorly illuminated environments [9,10]. Beyond passive stereo, photogrammetry and structured-light scanning are widely used in optical metrology: the former reconstructs geometry from multi-view images, while the latter projects coded/phase patterns to achieve high short-range accuracy. Both can be sensitive to illumination and motion [7,8]. In contrast, active laser-based scanning systems can operate reliably even in complete darkness [11,12]. One widely used solution is the LiDAR (Light Detection and Ranging) system, which applies the time-of-flight (ToF) technique by measuring the temporal delay between the emission and reception of a reflected laser pulse [4,12]. Nevertheless, ToF-based systems may suffer from limited angular accuracy, especially in compact or low-cost implementations [13,14,15].
An alternative approach is the Technical Vision System (TVS), which relies on the principle of dynamic triangulation [16,17]. In this system, a laser spot is projected onto the target surface, and its reflection is captured by an optical sensor. Given the precisely known emission and reception angles, the 3D coordinates of the laser spot can be computed using trigonometric relations. A complete surface scan is achieved by systematically sweeping the laser beam across the measurement field. In active triangulation systems, spatial accuracy strongly depends on the fixed angular resolution of the projected laser beam [18,19,20,21].
The TVS includes a positioning subsystem that directs the laser beam toward specific regions of interest using rotational actuators. This subsystem has been implemented using both DC motors and stepper motors, with the latter being preferred due to their ability to perform discrete angular movements, which facilitates accurate estimation of the laser position [16,17]. Although the TVS uses an 808 nm near-infrared laser, the spatial resolution of the system is primarily defined by the angular precision of the mechanical laser positioner, rather than by optical wavelength or power [9,17].
Since the triangulation process critically depends on angular accuracy, it is essential to ensure proper calibration of the mechanical components involved in laser beam steering [22]. In particular, mechanical transmissions can introduce both systematic and random errors if parameters such as the degrees-per-step ratio are not accurately known [23,24,25,26]. Many TVS prototypes are conceived as low-cost alternatives using additive manufacturing [9]. As a result, their mechanical transmissions are not standardized and often exhibit variations arising from the manufacturing and assembly processes. Therefore, experimental characterization of these components becomes necessary to ensure proper system calibration [17].
Numerous studies have focused on the fusion of heterogeneous sensors and their extrinsic calibration, which involves estimating the rigid transformation that spatially aligns the reference frames of each system, enabling coherent data integration [17,27,28,29]. However, very few works explore the potential of such multisensor configurations to estimate internal parameters of one of the constituent systems. In this work, we leverage the data provided by the stereovision system to characterize a key mechanical property of the Technical Vision System: the degrees-per-step ratio of the two gear transmissions responsible for controlling the orientation of the laser beam within the positioner subsystem. This approach not only enhances the accuracy of dynamic triangulation, but also enables the development of autonomous robotic systems with self-calibration capabilities, eliminating the need for external references or human intervention.
Non-contact calibration techniques based on virtual gear artefacts have been proposed for validating vision-based gear measurement systems [30]. However, these approaches remain focused on software-level verification and do not directly recover fundamental transmission parameters (such as the effective degrees-per-step ratio) in fully assembled 3D laser scanners. In contrast, our stereovision-based method performs an in situ kinematic characterization after system assembly, thereby accounting for mounting tolerances and eliminating cumulative mechanical alignment errors. To ensure accurate stereo correspondence and 3D reconstruction, the stereovision system was previously calibrated using Zhang’s method [31], which provides intrinsic and extrinsic camera parameters from a planar target. This camera calibration supports the metrological reliability of the proposed angular estimation.
In this study, the main objective is to propose a method for characterizing the mechanical transmission parameters of the Technical Vision System (TVS) by leveraging a stereovision system as an auxiliary metrological reference. By segmenting the laser spot projected onto a planar surface, we developed an algorithm capable of accurately estimating the angular displacement of the beam, thereby enabling precise quantification of the degrees-per-step ratio in the gear transmission of the laser positioner subsystem. Experimental results are provided to validate the effectiveness of the proposed algorithm. The novelty of the presented method is that it provides a non-invasive and cost-effective alternative for accurate and automated characterization of the mechanical subsystem, reducing reliance on manual measurement tools or direct intervention on the laser scanner—an essential task to ensure the system’s metrological reliability.

2. Materials and Methods

This study employs two 3D scanning systems: a Technical Vision System (TVS) and a two-camera module that constitutes a stereovision system. The stereovision system serves as an external reference for calibrating the gear transmission mechanism of the TVS, with the aim of improving the accuracy of its 3D measurements.
This section describes the operating principles of each system, the hardware configuration, the experimental setup, and the procedures followed during the data acquisition and calibration phases.

2.1. Stereovision System

A stereovision system consists of two cameras separated by a known horizontal baseline and vertically aligned so that corresponding image rows are collinear. After camera calibration, which estimates the intrinsic parameters (such as focal length, principal point, and lens distortion), both cameras can capture synchronized images of the same scene.
When a 3D point is simultaneously captured by both cameras, its projections appear at different horizontal positions in each image. This difference, known as disparity, occurs along the horizontal axis for rectified stereo pairs and, combined with the constraints of epipolar geometry, allows the 3D coordinates of the observed point to be triangulated.
Figure 1 illustrates the typical geometric configuration of a calibrated stereovision system under the pinhole camera model. After calibration, the coordinate system is defined at the optical center of the left camera, (0, 0, 0), while the right camera center is located at (b, 0, 0), with b representing the baseline distance along the x-axis. A 3D point P(x, y, z) is projected onto the left and right image planes as (u_l, v_l) and (u_r, v_r), respectively. The disparity between these horizontal coordinates, together with the known baseline b and focal length f, enables the estimation of the depth z via triangulation.
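To make this triangulation step concrete, the following minimal Python sketch recovers a 3D point from a rectified stereo pair under the pinhole model of Figure 1. The function name and parameter layout are illustrative, not part of the system’s actual software.

```python
import numpy as np

def triangulate_rectified(ul, vl, ur, f, b, cx, cy):
    """Recover P(x, y, z) from a rectified stereo pair (pinhole model).

    ul, vl : pixel coordinates of the point in the left image
    ur     : horizontal pixel coordinate of the same point in the right image
    f      : focal length in pixels; b : baseline (same unit as the output)
    cx, cy : principal point of the left camera
    """
    disparity = ul - ur                 # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f * b / disparity               # depth from similar triangles
    x = (ul - cx) * z / f               # lateral offset in the left-camera frame
    y = (vl - cy) * z / f               # vertical offset in the left-camera frame
    return np.array([x, y, z])
```

The relation z = f·b/disparity also makes explicit why depth resolution degrades at long range: with a fixed baseline such as the 85 mm of the module used here, the disparity shrinks as z grows.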

2.2. Technical Vision System (TVS)

The Technical Vision System (TVS) performs 3D measurements using a technique known as dynamic triangulation [16]. It is composed of two main subsystems: the positioner, which directs the laser beam, and the aperture, which serves as the optical reception module that detects the reflected signal. The geometric alignment between these subsystems is critical for ensuring metrological accuracy.
Figure 2 shows the overall configuration of the TVS and its two principal subsystems.
In the positioner, the laser beam is emitted from the top and redirected by two 45° mirrors. The orientation of these mirrors is controlled by stepper motors, allowing precise redirection of the laser beam. To improve angular resolution, the motors are coupled to gear transmissions. Figure 3 details the components of the positioner.
On the other hand, the aperture subsystem is responsible for acquiring the reflected laser beam. A rotating mirror collects the incoming light and directs it toward a photodiode, which converts the optical signal into an electrical one. This signal is then digitized and processed by a microcontroller. The angular position of the rotating mirror is synchronized through an optical interrupter that detects a reference hole on an encoder disk. Figure 4 illustrates the optical acquisition subsystem, referred to as the aperture.
The 3D coordinates are computed using trigonometric relationships derived from the law of sines. The angles of emission and reception, along with the known baseline distance a, define a triangle from which the x, y, and z coordinates are computed as follows:
$$x_{ij} = a\,\frac{\sin B_{ij}\,\sin C_{ij}\,\cos\!\left(\sum_{j=1}^{j}\beta_{j}\right)}{\sin\left[180^{\circ}-\left(B_{ij}+C_{ij}\right)\right]} \quad (1)$$

$$y_{ij} = \begin{cases} a\left(\dfrac{1}{2}-\dfrac{\sin B_{ij}\,\cos C_{ij}}{\sin\left[180^{\circ}-\left(B_{ij}+C_{ij}\right)\right]}\right), & \text{if } B_{ij}\leq 90^{\circ}\\[2ex] a\left(\dfrac{1}{2}+\dfrac{\sin B_{ij}\,\cos C_{ij}}{\sin\left[180^{\circ}-\left(B_{ij}+C_{ij}\right)\right]}\right), & \text{if } B_{ij}> 90^{\circ} \end{cases} \quad (2)$$

$$z_{ij} = a\,\frac{\sin B_{ij}\,\sin C_{ij}\,\sin\!\left(\sum_{j=1}^{j}\beta_{j}\right)}{\sin\left[180^{\circ}-\left(B_{ij}+C_{ij}\right)\right]} \quad (3)$$
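As a worked illustration of Equations (1)–(3), the sketch below evaluates the spot coordinates from the emission and reception angles. It is our own paraphrase of the published formulas, not the TVS firmware.

```python
import numpy as np

def dynamic_triangulation(a, B, C, beta_sum):
    """Evaluate Equations (1)-(3) for one laser spot.

    a        : baseline between positioner and aperture
    B, C     : emission and reception angles, in degrees
    beta_sum : accumulated vertical scan angle (sum of beta_j), in degrees
    """
    B_r, C_r, beta_r = np.radians([B, C, beta_sum])
    denom = np.sin(np.radians(180) - (B_r + C_r))   # sin[180° - (B + C)]
    common = a * np.sin(B_r) * np.sin(C_r) / denom
    x = common * np.cos(beta_r)                     # Equation (1)
    lateral = np.sin(B_r) * np.cos(C_r) / denom
    # Equation (2): the lateral term changes sign at B = 90°
    y = a * (0.5 - lateral) if B <= 90 else a * (0.5 + lateral)
    z = common * np.sin(beta_r)                     # Equation (3)
    return x, y, z
```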
A two-axis scanning motion is achieved by driving the stepper motors horizontally and vertically, while the rotating mirror in the aperture subsystem enables continuous acquisition. The resulting point cloud accurately captures the scanned surface.

2.3. Prototypes of Scanning Systems and Experimental Setup

The two scanning systems used in this study are shown in Figure 5. The first is a customized prototype of the Technical Vision System (TVS), which features a baseline of 20 cm and employs a near-infrared laser with a wavelength of 808 nm. The second is a low-cost commercial stereovision module (GXIVision) equipped with LSM22100 cameras. The module integrates two 1 MP CMOS sensors (1280 × 720 pixels each) with 2.4 mm M12 lenses and operates with a fixed stereo baseline of 85 mm.
Since the stereovision system was used as a ground-truth reference for the intrinsic calibration of the TVS, it was first calibrated using Zhang’s method [31]. This technique estimates both intrinsic and extrinsic parameters based on the observation of a planar checkerboard with known grid spacing. It is particularly effective when multiple images are acquired from diverse viewpoints, ensuring sufficient geometric variation.
The calibration target is shown in Figure 6. It consists of a 10 × 7 checkerboard, yielding 54 inner corners. A total of twenty images were captured from different positions and orientations to ensure wide field-of-view coverage and geometric richness. These images were used to minimize the reprojection error through a non-linear optimization process. Each square of the checkerboard measured 25 mm per side, which ensured an appropriate scale relative to the 1280 × 720-pixel resolution of the stereo cameras. This configuration provided sufficient spatial detail for reliable corner detection during calibration and guaranteed that the laser spot, whose diameter spanned several pixels, could be adequately sampled for accurate centroid estimation.
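A calibration of this kind can be reproduced with OpenCV’s standard implementation of Zhang’s method. The snippet below is a minimal sketch for one camera (the stereo extrinsics would follow via cv2.stereoCalibrate); the image file pattern is a placeholder.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)     # inner corners of the 10 x 7 checkerboard (9 x 6 = 54)
square = 25.0        # checkerboard square size in mm

# Planar object points for one view: a grid at Z = 0, scaled by the square size
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("calib_left_*.png")):   # placeholder file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Non-linear refinement that minimizes the reprojection error
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (1280, 720), None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```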
Once calibrated, the stereovision system was employed to capture the position of the laser spot projected by the TVS. These 3D reference measurements were used to characterize the two gear transmission mechanisms responsible for the horizontal and vertical movements of the positioner. Accurate characterization of these mechanisms is essential for precise 3D coordinate computation using the TVS.
Figure 7 shows both scanning systems and the planar target, all mounted on an optical breadboard with dimensions of 80 × 50 cm. The flat surface, consisting of plain white bond paper (40 × 40 cm), included no fiducial markers or predefined dimensions, as the proposed method relies exclusively on laser spot segmentation across the surface. Notably, the procedure does not require perfect planarity of the target surface, making it suitable for practical, unprepared environments.
A Teensy 4.1 microcontroller was used to control the stepper motors of the Technical Vision System (TVS). Communication with the host computer (Intel Core i5-10300H, 2.50 GHz, Windows OS) was established via a serial interface. Custom Python scripts executed on the laptop were responsible for motor control, image acquisition synchronization, and 3D coordinate computation.
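A control loop of this kind can be scripted with pyserial. The command strings below are hypothetical placeholders, since the actual firmware protocol of the Teensy 4.1 is not described in the article.

```python
import time
import serial  # pyserial

# Hypothetical ASCII protocol: "<axis> <signed steps>\n", e.g. "H 200" or "V -50".
tvs = serial.Serial("COM5", baudrate=115200, timeout=1)  # port is a placeholder

def move(axis: str, steps: int, settle_s: float = 0.5) -> None:
    """Command a relative move on one positioner axis and let the gears settle."""
    tvs.write(f"{axis} {steps}\n".encode())
    time.sleep(settle_s)   # wait for mechanical vibration to die out before imaging

move("H", 200)   # e.g. the 200 horizontal steps used in the experiments
```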

2.4. Novel Estimation Method for Degrees-per-Step Ratio

This section presents the methodology for estimating the relationship between the number of commanded motor steps and the resulting angular displacement of the laser beam, after transmission through the gear mechanisms coupled to the motors. The approach involves capturing the 3D coordinates of the laser impact point on a surface before and after a controlled angular shift. From these data, the beam direction is computed in each configuration, and the change in inclination is used to determine the angular resolution of the system in units of degrees-per-step.
The methodology is illustrated in Figure 8. The procedure begins by projecting the TVS laser beam onto a planar surface positioned at an arbitrary distance d_1. It is essential that the laser spot remains within the field of view (FoV) of both stereo cameras throughout the process.
Next, stereo image pairs of the laser spot are captured using both cameras of the stereovision system. These images are rectified to satisfy epipolar constraints, and a region of interest is extracted and processed in the HSV color space to segment the laser spot contour. Figure 9 shows an example of the segmentation result and the identification of its centroid. Notably, the laser spot extended across multiple pixels in the images, ensuring reliable centroid estimation relative to the 1280 × 720-pixel resolution of the stereo sensors.
The centroid of the segmented spot, C_1 = (x_c, y_c), is computed from the binarized contour. This point, identified in both the left and right images, is then triangulated to obtain the 3D coordinates of the laser spot, P_1 = (X_1, Y_1, Z_1), with respect to the stereo reference frame.
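A minimal OpenCV sketch of this segmentation and triangulation step is given below. The HSV thresholds are illustrative placeholders that must be tuned to how the camera renders the 808 nm spot, and the projection matrices are assumed to come from the rectification step.

```python
import cv2
import numpy as np

def laser_centroid(bgr, lo=(0, 0, 220), hi=(180, 60, 255)):
    """Segment the bright laser blob in HSV space and return its centroid (u, v)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    spot = max(contours, key=cv2.contourArea)          # keep the largest blob
    m = cv2.moments(spot)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid from image moments

def triangulate_spot(P_left, P_right, c_left, c_right):
    """Triangulate matched centroids using the rectified 3x4 projection matrices."""
    Xh = cv2.triangulatePoints(
        P_left, P_right,
        np.array(c_left, np.float32).reshape(2, 1),
        np.array(c_right, np.float32).reshape(2, 1))
    return (Xh[:3] / Xh[3]).ravel()                    # homogeneous -> (X, Y, Z)
```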
Next, the flat surface is moved to a new arbitrary position d_2, and the same acquisition and processing steps are repeated to obtain a second 3D point P_2 = (X_2, Y_2, Z_2), corresponding to the new impact location.
Horizontal displacement of the laser beam is then performed by commanding a known number of motor steps, denoted as N_steps. While the surface remains fixed at d_2, a third point P_3 is acquired. The surface is then returned to its initial position d_1, and a fourth point P_4 is registered.
This sequence results in two distinct point pairs: P_1, P_2 before the beam is shifted, and P_3, P_4 afterward. Each pair defines a projected beam direction corresponding to a specific orientation of the mirror assembly driven by the stepper motor and gear transmission within the positioner subsystem of the TVS.
Figure 10 illustrates the two projected trajectories of the laser beam in the XY plane, corresponding to the beam directions before and after the commanded displacement of the mirror assembly.
Analyzing a horizontal displacement, the direction of each beam can be modeled in the XY plane, where the X-axis represents depth (i.e., distance from the stereovision system), and the Y-axis corresponds to the lateral shift of the beam. The projected angle is computed as the arctangent of the ratio between the horizontal displacement and the depth variation, as shown in Equation (4):
$$\theta_i = \tan^{-1}\!\left(\frac{Y_2 - Y_1}{X_2 - X_1}\right), \qquad \theta_f = \tan^{-1}\!\left(\frac{Y_3 - Y_4}{X_3 - X_4}\right) \quad (4)$$
The angular resolution of the system, defined as the degrees-per-step ratio, is computed by dividing the difference in beam orientation angles by the number of commanded motor steps:
$$\text{Degrees-per-step ratio} = \frac{\theta_f - \theta_i}{N_{\text{steps}}} \quad (5)$$
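Putting Equations (4) and (5) together, a sketch of the estimator is shown below. The point-pair ordering follows the acquisition sequence P_1 to P_4 described above; the function name and signature are ours.

```python
import numpy as np

def degrees_per_step(P1, P2, P3, P4, n_steps, plane="XY"):
    """Estimate the ratio via Equations (4) and (5).

    P1, P2 : 3D spot positions before the move (surface at d1, then d2)
    P3, P4 : 3D spot positions after the move (surface at d2, then d1)
    plane  : "XY" for horizontal motion, "XZ" for vertical motion
    """
    k = 1 if plane == "XY" else 2       # lateral axis: Y (index 1) or Z (index 2)
    theta_i = np.degrees(np.arctan2(P2[k] - P1[k], P2[0] - P1[0]))  # Equation (4)
    theta_f = np.degrees(np.arctan2(P3[k] - P4[k], P3[0] - P4[0]))
    return (theta_f - theta_i) / n_steps                            # Equation (5)
```

For instance, beam angles of 18.53° and 30.41° separated by 200 commanded steps give approximately 0.0594°/step.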
This procedure is repeated for all four directions of motion (left, right, up, and down) to account for potential asymmetries introduced by the gear transmission, such as fabrication inconsistencies. In the case of vertical displacements, the analysis is performed in the XZ plane, where the X-axis represents depth and the Z-axis corresponds to the height of the laser spot.

3. Results and Discussion

To implement the proposed method for characterizing the degrees-per-step ratio, a total of 24 independent experiments were conducted for each of the four movement directions of the TVS positioner: left, right, up, and down. The horizontal displacements (left and right) correspond to the Y-axis, while the vertical displacements (up and down) are associated with the Z-axis; the X-axis represents the depth direction. The aim was to experimentally evaluate the angular increment induced by each motor step, considering that it is coupled to a gear-based mechanical transmission. This entire process is performed without manual measurements, relying solely on the stereovision system as an auxiliary metrological tool.
Figure 11 illustrates the 3D coordinate measurements obtained during an experiment involving leftward displacement. Points P_1 and P_2 denote the initial position of the laser beam prior to motor actuation, whereas P_3 and P_4 correspond to its location after executing 200 motor steps. Notably, P_2 and P_3 are captured on a second position of the planar surface, which is shifted from an initial depth of approximately d_1 = 17.5 cm to d_2 = 27.5 cm. The method is robust against misalignments or placement inaccuracies of the planar target on the optical bench, because the analysis relies on stereo triangulation of 3D world coordinates, which capture all three spatial components of the laser spot and do not depend on the geometry of the optical breadboard.
By projecting these vectors onto the XY plane, the angular inclination of each trajectory can be computed using the arctangent of the ratio between the lateral displacement and the change in depth, as expressed in Equation (6):
$$\theta_i = \tan^{-1}\!\left(\frac{Y_2 - Y_1}{X_2 - X_1}\right), \qquad \theta_f = \tan^{-1}\!\left(\frac{Y_3 - Y_4}{X_3 - X_4}\right) \quad (6)$$
Figure 12 displays the projected triangles used to estimate the initial and final angles for the previously analyzed experiment.
In this particular example, the initial angle was measured as θ_i = 18.53° and the final angle as θ_f = 30.41°, resulting in an angular difference of Δθ = 11.89°. Given that 200 motor steps were applied during this experiment, the estimated ratio is presented in Equation (7):
$$\text{Degrees-per-step ratio} = \frac{\theta_f - \theta_i}{N_{\text{steps}}} = \frac{30.41^{\circ} - 18.53^{\circ}}{200} = \frac{11.89^{\circ}}{200} \approx 0.0594^{\circ}/\text{step} \quad (7)$$
The described procedure was repeated for the remaining three directions of motion. For rightward displacements, the same analysis in the XY plane was applied, whereas for vertical movements (upward and downward), the analysis was conducted in the XZ plane. In these cases, the depth axis was retained as the adjacent leg, and the vertical component Z was used as the opposite leg for angular estimation, which is computed according to Equation (8):
$$\theta = \tan^{-1}\!\left(\frac{Z_{\text{final}} - Z_{\text{initial}}}{X_{\text{final}} - X_{\text{initial}}}\right) \quad (8)$$
To assess the consistency of the results, the degrees-per-step ratio was graphically represented for each direction of motion. Figure 13, Figure 14, Figure 15 and Figure 16 show the corresponding plots for the 24 experiments conducted per direction, including the overall mean and the tolerance bands corresponding to ±1σ, ±2σ, and ±3σ.
Table 1 provides a statistical summary of the 24 experiments conducted for each direction of motion. It reports the mean and standard deviation of the degrees-per-step ratio, enabling a quantitative description of the system’s behavior in each case. For the horizontal transmission, the leftward and rightward experiments yielded mean values of 0.059°/step and 0.062°/step, with standard deviations of 8.08 × 10⁻⁴ °/step and 1.76 × 10⁻⁴ °/step, respectively. The relatively small dispersion confirms the repeatability of the vision-based estimation, while the difference between leftward and rightward mean values highlights the directional anisotropy of the gear transmission. In the case of vertical motion, the upward and downward directions yielded mean values of 0.069°/step and 0.068°/step, with standard deviations below 3.2 × 10⁻⁴ °/step.
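The per-direction statistics and sigma bands of Figures 13–16 can be reproduced with a few lines of NumPy. The array below is filled with synthetic stand-in values, since the raw experiment logs are not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the 24 leftward estimates (deg/step); the real values
# come from the stereo measurements, not from this generator.
ratios = rng.normal(0.059394, 0.000808, size=24)

mean = ratios.mean()
sigma = ratios.std(ddof=1)                 # sample standard deviation
print(f"mean = {mean:.6f} deg/step, sigma = {sigma:.6f} deg/step")
for k in (1, 2, 3):
    lo, hi = mean - k * sigma, mean + k * sigma
    inside = np.mean((ratios >= lo) & (ratios <= hi)) * 100
    print(f"±{k}σ band: [{lo:.6f}, {hi:.6f}], {inside:.0f}% of runs inside")
```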
Overall, the proposed method offers a non-invasive and automated solution for characterizing the laser positioner, avoiding disassembly or manual measurements. Its main limitation is the need to place a flat reference surface within the field of view, which, although practical in many setups, may constrain certain mobile or irregular scenarios.

4. Conclusions

In this work, a stereovision system previously calibrated using Zhang’s method [31] was implemented as a metrological reference for the characterization of the gear-based transmission mechanisms in a Technical Vision System (TVS). This characterization is critical for accurate 3D reconstruction based on dynamic triangulation, since the identified parameter directly affects the precision of the computed 3D coordinates.
To evaluate the angular behavior of the TVS laser positioner subsystem, an experimental procedure was designed using the stereo system as an auxiliary measurement tool. This strategy enabled a high-precision estimation of the degrees-per-step ratio for the stepper motors in each motion direction of the laser positioner subsystem, including both horizontal and vertical displacements. The integration of the stereo system enhanced robustness against misplacements of the reference surface, thereby increasing the operational tolerance of the method and facilitating its use in mobile or constrained environments.
The experimental results yielded a degrees-per-step ratio of 0.059°/step for leftward motion, 0.062°/step for rightward motion, 0.069°/step for upward motion, and 0.068°/step for downward motion. These variations reveal directional anisotropy, likely caused by manufacturing tolerances or assembly misalignments in the gear transmission. Such anisotropy is particularly relevant when estimating the laser emission angle α in the dynamic triangulation method used to compute 3D coordinates with the TVS. This angle defines the laser beam’s projection direction and is determined exclusively by the control commands sent to the positioner, as no feedback sensors are available to monitor its actual orientation. On the other hand, part of the observed variability may be attributed to the limited specifications of the low-cost stereo module employed in this study; nevertheless, the method proved feasible and could benefit from higher-resolution cameras in future implementations, potentially improving the segmentation reliability of the laser spot.
In addition to these experimental findings, and beyond the specific case of the TVS, the proposed vision-based characterization method for gear transmissions can be extended to other engineering systems that rely on stepper motor actuators. Potential applications include robotic platforms with hand–eye vision systems, CNC machines, and additive manufacturing equipment, where accurate motion transmission is essential and disassembly of the mechanisms is impractical. Autonomous ground vehicles equipped with 3D scanning modules for navigation, as well as agricultural robots and structural inspection systems, can also benefit from such non-invasive characterization. These examples highlight the versatility of the proposed approach as a general solution to maintain the accuracy of gear-based transmission mechanisms across diverse application domains.
Therefore, accurate and automated characterization of the mechanical subsystem is essential to ensure the metrological reliability of the TVS. The proposed method provides a non-invasive and cost-effective solution that eliminates the need for manual measurement tools or direct intervention on the laser scanner. It relies solely on two controlled displacements of a planar surface placed in front of the system, which significantly simplifies implementation and enables seamless integration into self-calibration routines. In future work, this method is envisioned to be deployed on a mobile robot platform, where the required spatial displacements can be performed autonomously by the robot itself, eliminating the need to manipulate the calibration target manually.

Author Contributions

Conceptualization, F.L.-M.; methodology, F.L.-M.; software, R.A.-P.; validation, C.S.-V. and J.R.H.-G.; formal analysis, D.M.-Q. and V.T.; investigation, J.A.N.-L. and J.R.H.-G.; resources, O.S.; data curation, C.S.-V.; writing—original draft preparation, F.L.-M.; writing—review and editing, J.A.N.-L.; visualization, D.M.-Q.; supervision, R.A.-P. and O.S.; project administration, R.A.-P.; funding acquisition, O.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors of this work would like to thank the anonymous reviewers for their valuable support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TVS: Technical Vision System
LiDAR: Light Detection and Ranging
3D: Three-Dimensional
ToF: Time of Flight
FoV: Field of View
HSV: Hue, Saturation, Value

References

  1. Qin, H.; Bi, Y.; Feng, L.; Zhang, Y.F.; Chen, B.M. A 3D Rotating Laser-Based Navigation Solution for Micro Aerial Vehicles in Dynamic Environments. Unmanned Syst. 2018, 6, 297–305. [Google Scholar] [CrossRef]
  2. Hayashi, T.; Mori, N.; Ueno, T. Non-contact imaging of subsurface defects using a scanning laser source. Ultrasonics 2022, 119, 106560. [Google Scholar] [CrossRef]
  3. Stavropoulos, P. Digitization of Manufacturing Processes: From Sensing to Twining. Technologies 2022, 10, 98. [Google Scholar] [CrossRef]
  4. Huang, Z.; Li, D. A 3D reconstruction method based on one-dimensional galvanometer laser scanning system. Opt. Lasers Eng. 2023, 170, 107787. [Google Scholar] [CrossRef]
  5. Tran, T.Q.; Becker, A.; Grzechca, D. Environment Mapping Using Sensor Fusion of 2D Laser Scanner and 3D Ultrasonic Sensor for a Real Mobile Robot. Sensors 2021, 21, 3184. [Google Scholar] [CrossRef]
  6. Lee, M.J.; Park, S.Y. Forward and Backward Propagation of Stereo Matching Cost for Incremental Refinement of Multiview Disparity Maps. IEEE Access 2022, 10, 134074–134085. [Google Scholar] [CrossRef]
  7. Chiang, P.J.; Lin, C.H. Active Stereo Vision System with Rotated Structured Light Patterns and Two-Step Denoising Process for Improved Spatial Resolution. Opt. Lasers Eng. 2022, 152, 106958. [Google Scholar] [CrossRef]
  8. Wu, Z.; Nan, M.; Zhang, H.; Huo, J.; Chen, S.; Chen, G.; Cheng, Z. Photogrammetric system of non-central refractive camera based on two-view 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2025, 222, 112–129. [Google Scholar] [CrossRef]
  9. Lopez-Medina, F.; Alaniz-Plata, R.; Sergiyenko, O.; Núñez-López, J.A.; Sepulveda-Valdez, C.; Meza-García, D.; Villa-Manríquez, J.F.; Andrade-Collazo, H.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; et al. Extrinsic Calibration Method Under Low-Light Conditions for Hybrid Vision System. In Proceedings of the 2025 International Aegean Conference on Electrical Machines and Power Electronics (ACEMP) & 2025 International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), Timisoara, Romania, 14–17 May 2025; pp. 1–7. [Google Scholar] [CrossRef]
  10. Wang, W.; Luo, R.; Yang, W.; Liu, J. Unsupervised Illumination Adaptation for Low-Light Vision. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5951–5966. [Google Scholar] [CrossRef]
  11. Zhang, X.; Guan, Z.; Liu, X.; Zhang, Z. Digital Reconstruction Method for Low-Illumination Road Traffic Accident Scenes Using UAV and Auxiliary Equipment. World Electr. Veh. J. 2025, 16, 171. [Google Scholar] [CrossRef]
  12. Wang, Z.; Wu, Y.; Li, D.; Li, G.; Zhu, P.; Zhang, Z.; Jiang, R. LiDAR-assisted image restoration for extreme low-light conditions. Knowl.-Based Syst. 2025, 316, 113382. [Google Scholar] [CrossRef]
  13. Agishev, R.; Comerón, A.; Bach, J.; Rodriguez, A.; Sicard, M.; Riu, J.; Royo, S. Lidar with SiPM: Some capabilities and limitations in real environment. Opt. Laser Technol. 2013, 49, 86–90. [Google Scholar] [CrossRef]
  14. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  15. Trautmann, T.; Blechschmidt, F.; Friedrich, M.; Mendt, F. Possibilities and Limitations of Object Detection Using Lidar. In Proceedings of the 23. Internationales Stuttgarter Symposium; Kulzer, A.C., Reuss, H.C., Wagner, A., Eds.; Springer Vieweg: Wiesbaden, Germany, 2023; pp. 36–43. [Google Scholar] [CrossRef]
  16. Sergiyenko, O.; Alaniz-Plata, R.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; Miranda-Vega, J.E.; Sepulveda-Valdez, C.; Núñez-López, J.A.; Kolendovska, M.; Kartashov, V.; Tyrsa, V. Multi-view 3D data fusion and patching to reduce Shannon entropy in Robotic Vision. Opt. Lasers Eng. 2024, 177, 108132. [Google Scholar] [CrossRef]
  17. Alaniz-Plata, R.; Lopez-Medina, F.; Sergiyenko, O.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; Sepulveda-Valdez, C.; Núñez-López, J.A.; Meza-García, D.; Villa-Manríquez, J.F.; Andrade-Collazo, H.; et al. Extrinsic calibration of complex machine vision system for mobile robot. Integration 2025, 102, 102370. [Google Scholar] [CrossRef]
  18. Surmann, H.; Nüchter, A.; Hertzberg, J. An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments. Robot. Auton. Syst. 2003, 45, 181–198. [Google Scholar] [CrossRef]
  19. Blais, F. Review of 20 years of range sensor development. J. Electron. Imaging 2004, 13, 231–243. [Google Scholar] [CrossRef]
  20. Dorsch, R.G.; Häusler, G.; Herrmann, J.M. Laser triangulation: Fundamental uncertainty in distance measurement. Appl. Opt. 1994, 33, 1306–1314. [Google Scholar] [CrossRef]
  21. Gerbino, S.; Del Giudice, D.M.; Staiano, G.; Lanzotti, A.; Martorelli, M. On the influence of scanning factors on the laser scanner-based 3D inspection process. Int. J. Adv. Manuf. Technol. 2016, 84, 1787–1799. [Google Scholar] [CrossRef]
  22. Sergiyenko, O.; Núñez-López, J.A.; Tyrsa, V.; Alaniz-Plata, R.; Pérez-Landeros, O.M.; Selpúlveda-Valdez, C.; Flores-Fuentes, W.; Rodríguez-Quiñonez, J.C.; Murrieta-Rico, F.N.; Kartashov, V.; et al. 3D coordinate sensing with nonsmooth friction dynamical discontinuities compensation in laser scanning system. Mechatronics 2025, 110, 103382. [Google Scholar] [CrossRef]
  23. Zhang, M.; Zhang, Z.; Xiong, J.; Chen, X. Accuracy Analysis of Complex Transmission System with Distributed Tooth Profile Errors. Machines 2024, 12, 459. [Google Scholar] [CrossRef]
  24. Zhou, D.; Guo, Y.; Yang, J.; Zhang, Y. Study on the Parameter Influences of Gear Tooth Profile Modification and Transmission Error Analysis. Machines 2024, 12, 316. [Google Scholar] [CrossRef]
  25. Theodossiades, S.; Natsiavas, S. Periodic and chaotic dynamics of motor-driven gear-pair systems with backlash. Chaos Solitons Fractals 2001, 12, 2427–2440. [Google Scholar] [CrossRef]
  26. Xun, C.; Long, X.; Hua, H. Effects of random tooth profile errors on the dynamic behaviors of planetary gears. J. Sound Vib. 2018, 415, 91–110. [Google Scholar] [CrossRef]
  27. Lyu, X.; Liu, S.; Qiao, R.; Jiang, S.; Wang, Y. Camera, LiDAR, and IMU Spatiotemporal Calibration: Methodological Review and Research Perspectives. Sensors 2025, 25, 5409. [Google Scholar] [CrossRef]
  28. Domhof, J.; Kooij, J.F.; Gavrila, D.M. An Extrinsic Calibration Tool for Radar, Camera and Lidar. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8107–8113. [Google Scholar] [CrossRef]
  29. Domhof, J.; Kooij, J.F.P.; Gavrila, D.M. A Joint Extrinsic Calibration Tool for Radar, Camera and Lidar. IEEE Trans. Intell. Veh. 2021, 6, 571–582. [Google Scholar] [CrossRef]
  30. Wang, X.; Yao, T.; Shi, Z. Calibration Method Based on Virtual Gear Artefact for Computer Vision Measuring Instrument of Fine Pitch Gear. Sensors 2024, 24, 2289. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
Figure 1. Geometric configuration of a calibrated stereovision system.
Figure 2. General configuration of the Technical Vision System (TVS).
Figure 3. The positioner subsystem of the Technical Vision System (TVS).
Figure 4. The aperture subsystem of the Technical Vision System (TVS).
Figure 5. Prototypes of the Technical Vision System (TVS) and the stereovision system employed in the experimental procedures.
Figure 6. The checkerboard used for calibrating the stereovision system.
Figure 7. Experimental configuration showing the Technical Vision System (TVS), the stereovision module, and the planar calibration surface mounted on an optical breadboard.
Figure 8. Flowchart of experimental procedure for degrees-per-step ratio estimation.
Figure 9. An example of laser spot segmentation and centroid detection. The green contour outlines the segmented region, while the red dot marks the computed centroid.
Figure 10. Angular trajectories of the laser beam projected onto the XY plane, before and after mirror actuation.
Figure 11. A 3D representation of the triangulated coordinates obtained during an experiment involving leftward laser beam displacement.
Figure 12. Estimation of the laser beam angular displacement in the XY plane, based on the projected trajectories before and after 200 motor steps.
Figure 13. The statistical distribution of the degrees-per-step ratio for leftward displacements.
Figure 14. The statistical distribution of the degrees-per-step ratio for rightward displacements.
Figure 15. The statistical distribution of the degrees-per-step ratio for upward displacements.
Figure 16. The statistical distribution of the degrees-per-step ratio for downward displacements.
Table 1. A statistical summary of the degrees-per-step ratio for each direction of motion.

| Direction  | Mean [°/step] | Standard Deviation [°/step] |
|------------|---------------|-----------------------------|
| Left (Y+)  | 0.059394      | 0.000808                    |
| Right (Y−) | 0.062023      | 0.000176                    |
| Up (Z+)    | 0.068974      | 0.000319                    |
| Down (Z−)  | 0.067865      | 0.000173                    |

