Article

A Hybrid Vision and Optimization Strategy for Accurate 3D Laser Projection Calibration

School of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1733; https://doi.org/10.3390/app16041733
Submission received: 9 January 2026 / Revised: 6 February 2026 / Accepted: 6 February 2026 / Published: 10 February 2026

Abstract

A galvanometer-based laser 3D projection system requires accurate mapping between galvanometer control signals and workpiece coordinates to ensure reliable on-part marking. This study presents a calibration and verification pipeline that uses a color camera and a depth sensor to reconstruct 3D target points and estimate the extrinsic parameters between the projector and the workpiece. Laser spot centers are localized in color images, and corresponding depth values are acquired after color–depth alignment. The resulting 3D points are back-projected and transformed into the workpiece coordinate frame. A hybrid solver is employed: the Whale Optimization Algorithm (WOA) provides a global initial estimate, followed by Levenberg–Marquardt (LM) refinement to enhance convergence stability under noisy and small-sample conditions. Experimental validation on an independent 13-point set demonstrates sub-millimeter accuracy, with a mean error of approximately 0.37 mm and a maximum error of 0.87 mm. A further rectangular contour projection test confirms consistent performance, yielding a mean error of 0.434 mm and a maximum error of 0.879 mm, with all errors remaining below 1 mm.

1. Introduction

Laser-scanning projection and optical three-dimensional marking are crucial for the digital manufacturing of large, complex components. These technologies directly project CAD-based data—such as contours, hole locations, and assembly datums—onto the workpiece surface [1,2]. This offers on-site visual guidance for key processes, including composite layup, drilling/riveting, and assembly alignment, thereby enhancing operational efficiency and positioning accuracy while supporting process digitalization and traceability [3,4]. In such systems, a high-speed two-axis galvanometric scanner steers the laser beam for spatial scanning. The accuracy of the mapping from galvanometer deflection commands to three-dimensional projected coordinates fundamentally limits the marking precision achievable, making stable, repeatable calibration a central research focus [5].
Current calibration approaches in laser projection systems typically fall into two categories: physical-model-based methods and vision-assisted methods. Physical-model methods construct a projection model based on scanner geometry and rigid-body transformations, often reinforced by external geometric measurements. For instance, Hou et al. (2021) incorporated laser ranging as a strong geometric constraint to improve extrinsic estimation and error control [6]. Although such methods offer clear constraints and a well-defined solution procedure, the additional measurement chain tends to increase system integration complexity and on-site operational cost. To reduce this dependency, vision-assisted techniques have been developed. Yeung et al., for example, proposed an in situ scheme that employs a coaxial camera and a dimension-known reference fixture; laser spot positions captured during scanning are used to infer system parameters, effectively integrating calibration within the device itself.
Compared to physical-model methods that rely on external measurements, vision-assisted calibration strategies use camera observations to replace or reduce dependence on costly measurement links, while jointly solving for geometric parameters within the projection model. For instance, Tu et al. (2018) introduced a staged estimation approach to mitigate parameter coupling and enhance convergence stability [7]. Subsequently, Lao et al. (2023) strengthened three-dimensional constraints by incorporating binocular vision in a compact setup, further lowering the reliance on specialized external equipment [3]. Recent efforts have also explored task-specific vision pipelines and lightweight monocular self-calibration frameworks, aiming to improve practical applicability and deployment flexibility in industrial settings [8].
Despite recent progress, key limitations continue to hinder the rapid on-site deployment and reusability of existing calibration methods. Many current approaches depend on extensive data acquisition—such as dense sampling, grid-based lookup tables, or repeated measurements [9,10]—to achieve stable mapping and effective error compensation. This requirement substantially increases field workload and time costs [11]. When only limited data are available, insufficient constraints and measurement noise are exacerbated, leading to reduced accuracy and repeatability in extrinsic estimation and mapping models. Moreover, the optimization process often becomes unstable or highly sensitive to initial guesses under such conditions [12]. To compensate for the lack of constraints in small-sample scenarios, some methods introduce more complex procedures—including staged solving or additional target setups—which further compromise operational simplicity and general usability. Consequently, there remains a need for a calibration framework that can deliver high accuracy and stable repeatability with small sample sizes, without relying on expensive external measurement equipment or large-scale data collection, while maintaining simplicity for on-site implementation.
To address this need, we propose a compact calibration method for an RGB-D camera and a laser projection system, designed to achieve stable and repeatable estimation of galvanometer extrinsic parameters with minimal data acquisition. The method integrates planar coordinates obtained from the color camera with depth measurements from the depth sensor, establishing a unified geometric mapping among the camera, workpiece, and galvanometer coordinate systems. Using only six calibration points with known three-dimensional coordinates (x, y, z) and their corresponding galvanometer commands (H, V), we formulate the extrinsic estimation as a nonlinear optimization problem constrained by the projection model. A hybrid solving strategy is adopted, in which the Whale Optimization Algorithm (WOA) provides a robust global initialization, followed by Levenberg–Marquardt (LM) refinement for precise nonlinear least-squares convergence, thereby reliably solving for the rotation matrix R and translation vector t. In contrast to methods that rely on extensive training datasets, the proposed approach significantly reduces the effort required for target setup and data collection under small-sample conditions, while enhancing convergence stability and repeatability.
In contrast to conventional commercial galvanometer laser projection systems, which often depend on expensive external metrology devices, dense sampling grids, or pre-calibrated lookup tables to achieve high accuracy, the method proposed herein offers a compact and cost-effective alternative suitable for rapid on-site deployment. By replacing dedicated laser rangefinders or coordinate measuring machines with a consumer RGB-D camera, the system acquires both color and depth information in a single capture, simplifying hardware integration and reducing reliance on complex measurement chains. Moreover, a hybrid Whale Optimization Algorithm (WOA) and Levenberg–Marquardt (LM) strategy is introduced to solve the extrinsic parameters robustly with only six calibration points—significantly lowering the data-collection burden while maintaining sub-millimeter accuracy. This represents a clear advance over our prior work, which employed a laser-ranging module and particle-swarm optimization; the current approach not only streamlines the sensor setup but also improves convergence stability under small-sample conditions through WOA’s global search capability. Although designed for scenarios where the region of interest lies within the common field-of-view of the camera and projector (typical of fixture-based assembly guidance), the framework can be extended in future work to multi-view configurations for larger or more geometrically complex workpieces.

2. System Overview and Calibration Principle

This study presents an integrated calibration system that combines color imaging, depth sensing, and two-axis galvanometer laser projection. The objective is to establish an accurate mapping between galvanometer control signals and three-dimensional workpiece coordinates using a practical sensor configuration, while enabling stable estimation and repeatable verification of the projection extrinsic parameters. As illustrated in Figure 1, the system comprises a laser emission and beam-expansion/focusing unit, a beam-splitter and sensing module, a two-axis galvanometric scanning unit, an image acquisition unit, and a measurement and control unit. The controller outputs horizontal and vertical drive signals (H, V) to the galvanometer, steering the laser beam for two-dimensional scanning and projecting calibration targets onto the workpiece or a reference plate. A color camera captures high-resolution images for precise target-center localization, while a depth camera provides registered depth measurements (Z) at the corresponding pixels. By fusing two-dimensional pixel observations with synchronized depth data [13], the system reconstructs the target centers from image coordinates to three-dimensional coordinates (X, Y, Z), forming the foundation for subsequent extrinsic parameter estimation and projection error evaluation.
The system operates within three coordinate frames: the vision (camera) frame, the projection (galvanometer) frame, and the workpiece frame. The core calibration objective is to estimate the extrinsic parameters—specifically the rotation matrix R and translation vector t—of the projection frame relative to the workpiece frame. Using these estimated parameters (R, t), the system computes the inverse mapping from any 3D point to the required galvanometer drive signals and evaluates the resulting projection errors, thereby enabling in situ marking applications such as assembly guidance. To mitigate solution instability arising from parameter coupling in the galvanometer model and on-site measurement noise, a hybrid optimization scheme is employed. This strategy combines global search with local refinement to enhance the stability and repeatability of the estimated extrinsic parameters [14].
To improve the reliability and repeatability of extrinsic parameter estimation under small-sample conditions, we design a closed-loop calibration pipeline comprising four sequential stages: Data Acquisition, 3D Reconstruction, Extrinsic Solving, and Projection Validation. The complete workflow is illustrated in Figure 2, and the key steps are summarized as follows:
(1)
Data Acquisition and Correspondence Establishment
Project a laser spot onto a planar calibration target using a given galvanometer command pair (H, V).
For each command pair, an RGB image and a corresponding depth map of the target are captured synchronously, establishing the initial correspondence between the command input and the sensor observation.
(2)
Spot Center Localization and 3D Reconstruction
Image Processing: The laser spot region in the RGB image is extracted using thresholding and connected-component filtering.
Sub-pixel Localization: A quadratic surface is fitted within a local neighborhood around the spot. The extremum of this fitted surface is calculated to determine the sub-pixel coordinates (uc, vc) of the spot center.
Depth Association and Filtering: The corresponding depth value Z at (uc, vc) is retrieved from the aligned depth map. Median filtering within a small neighborhood is applied to suppress depth noise.
3D Back-Projection: Using the camera intrinsics, the triplet (uc, vc, Z) is back-projected to obtain the 3D point Pc in the camera coordinate frame [15]. This point is then transformed into the workpiece coordinate frame via the pre-calibrated camera-to-workpiece transformation (Rcw, tcw), yielding Pw. This completes the establishment of the core correspondence: (H, V) ↔ Pw.
(3)
Extrinsic Parameter Solving via Hybrid Optimization
Problem Formulation: Using three correspondence pairs as a minimal solving unit, a nonlinear optimization problem is formulated to estimate the rigid transformation (R, t) between the projector (galvanometer) frame and the workpiece frame [16]. The optimization variables are the rotation angles (ω, φ, κ) and the translation vector (Px, Py, Pz). The objective is to minimize the geometric residuals defined by the projection model, i.e., J = F² + G².
Hybrid WOA-LM Strategy:
Global Search (WOA): The Whale Optimization Algorithm performs a broad exploration of the parameter space. This stage generates multiple promising candidate initial solutions, reducing sensitivity to initialization.
Local Refinement (LM): Each candidate solution is refined using the Levenberg–Marquardt algorithm for precise, fast local convergence via nonlinear least-squares minimization.
Solution Selection: All candidate solutions (Ri, ti) are evaluated on the full set of calibration points. The solution with the smallest and most consistent residual is selected as the final extrinsic parameters (R, t).
(4)
Closed-loop Projection Validation
Inverse Mapping and Projection: The calibrated extrinsic parameters (R, t) are used to compute the required galvanometer commands (H, V) for a set of independent 3D test points (e.g., forming rectangle or triangle contours) defined in the workpiece frame. The system then projects the corresponding pattern.
Error Evaluation: The projected pattern is captured. The procedure from Stage 2 (spot localization to 3D reconstruction in workpiece coordinates) is repeated to obtain the measured positions of the projected spots. The in-plane Euclidean distance between these measured positions and the nominal target positions is calculated as the projection error.
Verification Loop: If the evaluated error meets the predefined accuracy requirement (e.g., sub-millimeter), the calibration is validated, and the process ends. Otherwise, the procedure revisits earlier stages for adjustment or recalibration.
This structured pipeline forms a rigorous “Acquisition-Reconstruction-Solving-Validation” closed loop, ensuring reliable and verifiable estimation of the projection extrinsics. The incorporation of the parameterized WOA-LM hybrid strategy significantly improves the robustness and convergence efficiency of the solver in the presence of measurement noise and limited sample size.
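The four-stage closed loop described above can be sketched as a short orchestration routine. This is a minimal, hypothetical sketch: the four stage functions (`capture_fn`, `reconstruct_fn`, `solve_fn`, `validate_fn`) are placeholders standing in for the procedures detailed in the text, and the retry policy is an assumption, not the authors' implementation.

```python
def calibrate(command_pairs, capture_fn, reconstruct_fn, solve_fn,
              validate_fn, tol_mm=1.0, max_rounds=3):
    """Closed-loop 'Acquisition-Reconstruction-Solving-Validation' sketch.

    All four stage functions are placeholders for the procedures described
    in the text; tol_mm is the sub-millimeter acceptance threshold.
    """
    for _ in range(max_rounds):
        # Stage 1: project each (H, V) command and capture RGB + depth.
        frames = [capture_fn(h, v) for h, v in command_pairs]
        # Stage 2: spot localization and 3D reconstruction -> (H, V) <-> Pw.
        correspondences = [reconstruct_fn(f) for f in frames]
        # Stage 3: hybrid WOA-LM extrinsic solving.
        R, t = solve_fn(command_pairs, correspondences)
        # Stage 4: project test points, measure error, accept or retry.
        if validate_fn(R, t) < tol_mm:
            return R, t
    raise RuntimeError("calibration did not meet the accuracy requirement")
```

If validation fails, the loop simply revisits the earlier stages, mirroring the verification loop in Stage 4.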
During the calibration procedure, a laser spot is projected onto the reference board for each given pair of galvanometer control signals (H, V), while synchronized color images and depth data are acquired. The color image provides high-accuracy two-dimensional localization of the target point. First, thresholding and connected-component filtering are applied to isolate the spot region, providing an integer-pixel coarse location (u0, v0). To achieve sub-pixel precision, a quadratic surface is fitted to the intensity values within a local W × W neighborhood (where W is an odd integer) centered at (u0, v0). The extremum of this fitted surface is taken as the refined spot center, yielding sub-pixel coordinates (uc, vc) [17]. The corresponding closed-form solution can be expressed as follows:
$$\begin{bmatrix} x_{\mathrm{center}} \\ y_{\mathrm{center}} \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + \frac{1}{4a_3a_5 - a_4^2} \begin{bmatrix} a_4a_2 - 2a_5a_1 \\ a_4a_1 - 2a_3a_2 \end{bmatrix}$$
where (x0, y0) denotes the local coordinates corresponding to the coarse center (u0, v0), (xcenter, ycenter) is the refined sub-pixel offset relative to (x0, y0), and a1 to a5 are the coefficients obtained from least-squares fitting of the quadratic intensity surface f(x, y) = a0 + a1x + a2y + a3x² + a4xy + a5y² within the local window.
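The sub-pixel refinement can be sketched as follows. This is a minimal NumPy sketch assuming the quadratic model f(x, y) = a0 + a1x + a2y + a3x² + a4xy + a5y² and the standard stationary-point solution; it is an illustration, not the authors' exact implementation.

```python
import numpy as np

def subpixel_center(patch):
    """Refine a spot center by fitting a quadratic intensity surface.

    `patch` is a W x W intensity array (W odd) centered on the coarse peak.
    Returns the sub-pixel offset (dx, dy) relative to the patch center,
    assuming f(x, y) = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2.
    """
    w = patch.shape[0]
    r = w // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    x, y, f = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    # Design matrix for least-squares estimation of coefficients a0..a5.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    a0, a1, a2, a3, a4, a5 = np.linalg.lstsq(A, f, rcond=None)[0]
    # Stationary point of the fitted surface (where the gradient vanishes).
    den = 4 * a3 * a5 - a4**2
    dx = (a4 * a2 - 2 * a5 * a1) / den
    dy = (a4 * a1 - 2 * a3 * a2) / den
    return dx, dy
```

On an exactly quadratic patch the fit is exact, so the recovered offset matches the true peak to machine precision; real laser spots only approximate this model near the peak, which is why the paper restricts the fit to a small W × W window.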
After color–depth alignment, the depth value Z corresponding to the pixel coordinates (uc, vc) is retrieved from the depth map. To suppress depth noise, the median depth within a small neighborhood around this pixel is computed and adopted. Subsequently, using the camera intrinsic parameters, the coordinates (uc, vc, Z) are back-projected into the camera coordinate system to obtain the corresponding three-dimensional point Pc (x, y, z) [18].
$$\tilde{p} = \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix}, \qquad P_c = Z\,K^{-1}\tilde{p}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
where fx, fy are the focal lengths in pixel units, cx, cy are the principal point coordinates, and Pc is expressed in the camera frame. p ˜ represents homogeneous pixel coordinates. Since Z is provided in millimeters, the reconstructed 3D point Pc is also expressed in millimeters.
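The back-projection and the subsequent camera-to-workpiece transform are both one-liners in NumPy. The sketch below uses the standard pinhole relation Pc = Z·K⁻¹·p̃ from the text; the specific intrinsic values in the usage example are illustrative, not the calibrated values of the actual system.

```python
import numpy as np

def back_project(uc, vc, Z, K):
    """Back-project pixel (uc, vc) with depth Z (mm) into the camera frame:
    P_c = Z * K^{-1} * [uc, vc, 1]^T."""
    p = np.array([uc, vc, 1.0])
    # Solving K x = p is numerically preferable to forming K^{-1} explicitly.
    return Z * np.linalg.solve(K, p)

def to_workpiece(Pc, R_cw, t_cw):
    """Rigid transform of a camera-frame point into the workpiece frame:
    P_w = R_cw * P_c + t_cw."""
    return R_cw @ Pc + t_cw
```

For a pixel at the principal point, the reconstructed point lies on the optical axis at distance Z, which gives a quick sanity check of both the intrinsics and the depth alignment.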
To establish a unified reference for subsequent calibration and error evaluation, the three-dimensional points in the camera coordinate system are transformed into the workpiece coordinate system. Using the camera extrinsic parameters relative to the workpiece frame (Rcw, tcw), each point Pc is mapped to the workpiece frame as Pw according to the following relation [19]:
$$P_w = R_{cw} P_c + t_{cw}$$
where Pc and Pw denote 3D points in the camera and workpiece frames (unit: mm), (Rcw, tcw) is the rigid transform from the camera frame to the workpiece frame.
Once the three-dimensional points in the workpiece coordinate system are obtained, the geometric mapping between the workpiece frame and the projector (galvanometer/projection) frame must be established. This mapping is constructed by treating the rigid-body extrinsic parameters—the rotation matrix R and translation vector t relating the two frames—as unknowns. These parameters are then used to transform a workpiece point Pw into the projector coordinate frame as follows:
$$P_p = R\,P_w + t$$
Here, Pp (s, q, w) denotes the point coordinates in the projector frame.
Based on the measurement model described above, the estimation of the extrinsic parameters R and t is formulated as an optimization problem. For a calibration point with known projector-frame coordinates (s, q, w) and corresponding galvanometer deflection angles (H, V), together with the fixed structural parameter e, two scalar residual functions F and G are derived from the geometric projection constraints [20]. In an ideal, noise-free case, the conditions F = 0 and G = 0 would be satisfied exactly. Accordingly, the residual vector for the i-th calibration point is defined as follows:
$$r_i(\theta) = \begin{bmatrix} F_i \\ G_i \end{bmatrix} = \begin{bmatrix} w_i \tan V_i + q_i - \dfrac{e}{\cos V_i} \\[6pt] w_i \tan H_i - \dfrac{s_i}{\cos V_i} \end{bmatrix}, \qquad r(\theta) = \begin{bmatrix} r_1 \\ \vdots \\ r_N \end{bmatrix}$$
Here, θ denotes the extrinsic parameters (R, t). Each calibration point provides two independent scalar constraints (F, G) through the projection model, while the six degrees of freedom in (R, t) imply that, in an ideal noise-free and geometrically non-degenerate case, a minimum of three points is required for a unique solution. To mitigate the effects of measurement noise, geometric degeneracy, and local minima in practice, we employ six well-distributed calibration points. These points are used to generate multiple distinct three-point combinations, each yielding a candidate estimate for (R, t). A residual-based consistency criterion is then applied to select the most reliable candidates, thereby improving initialization robustness and providing stable starting points for subsequent Levenberg–Marquardt (LM) refinement. The overall extrinsic estimation is consequently cast as a nonlinear least-squares minimization problem [21]. In this work, the residual vector $r = [F_1, G_1, \ldots, F_N, G_N]^{\mathsf{T}}$ is constructed by stacking the two residuals from all N points, and the objective is $J(\theta) = \tfrac{1}{2}\|r\|_2^2$.
$$J(\theta) = \frac{1}{2} \sum_{i=1}^{N} \left\| r_i(\theta) \right\|_2^2$$
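The residual stacking can be sketched in Python, the pipeline's stated implementation language. The residual expressions F = w·tan V + q − e/cos V and G = w·tan H − s/cos V and the X-Y-Z Euler-angle convention used here are reconstructions/assumptions for illustration, not the authors' exact code.

```python
import numpy as np

def euler_to_R(omega, phi, kappa):
    """Rotation matrix from Euler angles (omega, phi, kappa);
    an X-Y-Z convention is assumed here."""
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(phi),   np.sin(phi)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(theta, Pw, H, V, e):
    """Stack per-point residuals (F_i, G_i) for
    theta = (omega, phi, kappa, Px, Py, Pz)."""
    R = euler_to_R(*theta[:3])
    t = np.asarray(theta[3:])
    r = []
    for p, h, v in zip(Pw, H, V):
        s, q, w = R @ p + t                      # point in the projector frame
        F = w * np.tan(v) + q - e / np.cos(v)    # vertical-deflection constraint
        G = w * np.tan(h) - s / np.cos(v)        # horizontal-deflection constraint
        r.extend([F, G])
    return np.asarray(r)
```

A point constructed to satisfy both constraints exactly should produce a zero residual vector under the identity transform, which is a convenient self-check of the model plumbing.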
By minimizing the objective function J(θ), the aggregated residuals F and G across all calibration points are reduced, thereby enforcing geometric consistency between the workpiece frame and the projector frame. Since J(θ) is nonlinear and potentially multimodal, local optimization methods are often sensitive to the choice of initial values. To enhance global stability, we first employ the Whale Optimization Algorithm (WOA) to perform a global search in the parameter space [22]. In this step, the residual cost defined in Equation (5) serves as the fitness function, yielding a robust initial estimate θ0. Within the WOA framework, candidate solutions (whales) update their positions through behaviors such as encircling prey and spiral movement, which can be summarized by the following update rules:
$$\theta_{k+1} = \theta^{*}_{k} - A \odot \left| C \odot \theta^{*}_{k} - \theta_k \right|$$
where ⊙ denotes the Hadamard (element-wise) product, |·| denotes the element-wise absolute value, and θ* represents the current best solution vector. A and C are coefficient vectors defined in the WOA formulation.
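The encircling update above can be sketched as follows. This is a minimal sketch of one WOA phase only, using the standard coefficient rules A = 2a·r₁ − a and C = 2r₂ with a decreasing over iterations; a full WOA also includes the spiral-update and random-search phases, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def woa_encircle_step(theta, theta_best, a):
    """One 'encircling prey' update of the Whale Optimization Algorithm:
    theta_new = theta* - A (*) |C (*) theta* - theta|, element-wise.
    `a` decreases linearly from 2 to 0 over the iterations."""
    r1 = rng.random(theta.shape)
    r2 = rng.random(theta.shape)
    A = 2 * a * r1 - a        # |A| < 1 pulls candidates toward the best whale
    C = 2 * r2
    D = np.abs(C * theta_best - theta)   # element-wise distance to the leader
    return theta_best - A * D
```

As `a` shrinks toward zero the step collapses onto the current best solution, which is how WOA trades exploration for exploitation late in the run.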
After the Whale Optimization Algorithm (WOA) provides a robust initial estimate θ0, the Levenberg–Marquardt (LM) method is employed for local refinement. This step enhances the accuracy of the solution and accelerates final convergence. The complete WOA-LM hybrid optimization procedure for solving the extrinsic parameters is illustrated in Figure 3. In the LM stage, the residual functions are linearized via a first-order Taylor expansion around the current parameter estimate θ, expressed as [23]:
$$r(\theta + \Delta\theta) \approx r(\theta) + J\,\Delta\theta$$
Here, J = ∂r/∂θ is the Jacobian matrix, and Δθ is the parameter increment. λ is the damping factor, and I is the identity matrix with the same dimension as JTJ. The LM iterative update equation is given by:
$$\left( J^{\mathsf{T}} J + \lambda I \right) \Delta\theta = -J^{\mathsf{T}} r, \qquad \theta_{k+1} = \theta_k + \Delta\theta$$
In the LM refinement stage, a residual-threshold check is embedded within an iterative control loop. If the residual exceeds the preset tolerance ε, the damping factor is adaptively adjusted, and the iteration continues. When necessary, a new candidate initial value is selected to restart the refinement process, repeating until the residual falls below ε or a predefined stopping condition is satisfied. The final output is the optimal extrinsic parameters (R, t) that satisfy the tolerance ε while minimizing the overall residual, thereby ensuring geometric consistency between the camera and projector coordinate systems.
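The damped iteration with adaptive λ can be sketched as a short loop. This is a generic LM sketch under the update rule and accept/reject damping policy described above (the damping factor of 10 matches Section 3.1); the stopping thresholds and structure are illustrative, not the authors' exact implementation.

```python
import numpy as np

def lm_refine(residual_fn, jac_fn, theta0, lam=1e-2, factor=10.0,
              tol_step=1e-6, tol_res=1e-8, max_iter=200):
    """Minimal Levenberg-Marquardt loop: accept a step if it lowers the cost
    (then reduce lambda), otherwise raise lambda and retry."""
    theta = np.asarray(theta0, float)
    cost = 0.5 * np.sum(residual_fn(theta)**2)
    for _ in range(max_iter):
        r = residual_fn(theta)
        J = jac_fn(theta)
        # Damped normal equations: (J^T J + lambda I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -J.T @ r)
        new_cost = 0.5 * np.sum(residual_fn(theta + step)**2)
        if new_cost < cost:
            theta = theta + step
            lam /= factor                      # good step: trust the model more
            if np.linalg.norm(step) < tol_step or cost - new_cost < tol_res:
                break
            cost = new_cost
        else:
            lam *= factor                      # bad step: damp harder, retry
    return theta
```

Large λ makes the step behave like damped gradient descent (robust far from the optimum), while small λ recovers fast Gauss–Newton convergence near it, which is exactly the behavior the WOA initialization is meant to exploit.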
The experimental design is structured to validate the proposed calibration method under clear and controlled conditions. The use of only the two galvanometer deflection commands (H, V) is appropriate because the calibration objective is to estimate the 6-DOF rigid transformation between the projector and workpiece frames, which is fully defined by this mapping. A dynamic focusing module, while necessary for operation on complex 3D surfaces, is not required for this core extrinsic calibration when using a planar target.
A flat, high-precision reference board was chosen to provide a stable and unambiguous geometric baseline for accuracy evaluation, representing a common industrial use case. The calibration model itself is general: the estimated rigid transformation applies in principle to any workpiece pose within the sensor’s field of view. Performance under extreme orientations or on highly curved surfaces involves additional practical considerations (e.g., spot distortion, depth measurement limits) that constitute valuable future work but do not affect the validity of the calibration methodology established here.

3. Experimental Results and Validation

To assess the system performance and data acquisition reliability in practical settings, we constructed an experimental platform integrating a depth camera and a galvanometer-based laser projection module, and performed three validation experiments. An Azure Kinect DK sensor was employed to simultaneously capture color images and depth data, delivering both 2D pixel coordinates and corresponding depth measurements for 3D reconstruction and subsequent error analysis. A checkerboard target was used for camera calibration [24], color–depth alignment, and lens distortion compensation. A custom-made, high-reflectivity metal reference board was fabricated as a stable projection surface to ensure reliable laser spot detection and center localization, with a machining tolerance maintained within 0.005 mm. Experimental measurements of the board’s surface under laboratory conditions confirmed that its form error remained within 0.02 mm, which was the effective accuracy used as the ground truth in our calibration. To maintain this accuracy during experiments, the board was rigidly mounted in a temperature-controlled laboratory environment to minimize the effects of thermal expansion and gravitational sag. This ensured that the board’s geometric error was not a dominant source of uncertainty in the subsequent measurement pipeline. As illustrated in Figure 4, the working distance between the laser projection system and the reference board is approximately 1560 mm.

3.1. Experimental Setup and Implementation Details

To ensure the reproducibility of the presented results, this subsection details the key hardware specifications, software environment, and algorithm parameters used throughout the calibration and validation experiments.
(1)
Hardware Configuration:
RGB-D Sensor: Azure Kinect DK. Color resolution: 1920 × 1080 pixels; Depth mode: NFOV unbinned, providing depth measurements within 0.5–3.5 m.
Galvanometer Laser Projector: Custom system with a 532 nm green diode laser. The scanning field angle is ±30°.
Controller: A National Instruments PCIe-6343 card was used to generate the analog control signals (H, V) for the galvanometers.
Computing Hardware Platform: All timing comparisons and optimization computations (including WOA and LM) were performed on a desktop computer equipped with an Intel Core i7-12700K CPU (3.6 GHz) and 32 GB of RAM, running the Windows 11 operating system.
(2)
Software and Algorithmic Parameters:
Processing Framework: The calibration pipeline was implemented in Python 3.8. Key libraries included OpenCV (4.8.0) for image processing, NumPy/SciPy for numerical computations and nonlinear least-squares optimization (used for the LM refinement), and the Azure Kinect SDK for sensor control.
Spot Localization: The sub-pixel fitting window size W was set to 7 pixels. The depth denoising filter used a 5 × 5 pixel neighborhood median filter.
Optimization (WOA): Population size N = 40, maximum iterations = 2000, convergence threshold ϵ = 1 × 10−5.
Optimization (LM): The initial damping factor λ was set to 0.01, with an update factor of 10. The refinement stopped when the parameter change was below 1 × 10−6, or when the residual change was below 1 × 10−8.
Calibration Data: A set of six (H, V) command pairs was used for calibration, with their corresponding 3D points reconstructed as described in the previous subsection.
(3)
Data Acquisition Protocol:
For each calibration or test point, the galvanometer was driven to the target (H, V) position and held stable. A trigger signal synchronized the capture of one RGB image and one depth map from the Azure Kinect. This procedure was repeated for all points in the set.

3.2. Point-Wise 3D Measurement Accuracy Validation

This experiment evaluates the 3D measurement accuracy of the Azure Kinect DK under the proposed setup and processing pipeline [25,26], thereby establishing a reliable data foundation for subsequent calibration and projection tasks. Thirteen points on the reference board are selected as an independent test set, with their nominal 3D coordinates, denoted as $P_k^{\mathrm{ref}}$, derived from the board's certified machining parameters. For each point, a synchronized color image and depth frame are captured. The sub-pixel spot center (uk, vk) is extracted from the color image [27,28], and the corresponding depth value Zk is retrieved from the aligned depth map. Using the camera intrinsic parameters, the coordinates (uk, vk, Zk) are back-projected to obtain the 3D point in the camera frame, which is then transformed into the workpiece frame via the known extrinsic parameters, yielding the measured 3D coordinate $P_k^{\mathrm{meas}}$. The 3D Euclidean distance between each measured point and its nominal counterpart serves as the error metric [29], defined as:
$$d_k = \left\| P_k^{\mathrm{meas}} - P_k^{\mathrm{ref}} \right\|_2$$
The above procedure is repeated for all 13 test points, yielding a set of 13 Euclidean errors. The error distribution is then analyzed by calculating the mean and maximum values to evaluate the overall 3D measurement precision of the system.
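The per-point error and its summary statistics reduce to a few NumPy operations. This sketch implements the Euclidean metric d_k and the mean/max summary used for the 13-point evaluation; the function name is illustrative.

```python
import numpy as np

def projection_errors(P_meas, P_ref):
    """Per-point 3D Euclidean errors d_k = ||P_meas_k - P_ref_k||_2 (mm),
    plus the mean/max summary used in the accuracy evaluation."""
    d = np.linalg.norm(np.asarray(P_meas, float) - np.asarray(P_ref, float),
                       axis=1)
    return d, float(d.mean()), float(d.max())
```

Running this over the 13 measured/nominal point pairs directly yields the statistics reported in the text (mean ≈ 0.4 mm at the stated working distance).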
As shown in Figure 5, we statistically analyze the three-dimensional error components (Δx, Δy, Δz) in the workpiece frame, along with the corresponding 3D Euclidean distance errors for the 13 test points. For the component errors, |Δz| displays the smallest interquartile range, indicating that depth-direction errors are more tightly clustered and consistent for most measurement points. Nevertheless, several prominent outliers are still present, suggesting that sporadic error peaks can occur at certain locations, likely due to depth-sensing noise or residual misalignment in the color–depth registration. In contrast,|Δx| and |Δy| exhibit broader dispersion, with |Δy| showing the largest whisker range, which reflects appreciable in-plane variability across different target positions.
For the overall three-dimensional Euclidean error d, both the median and mean values are approximately 0.4 mm, with a compact interquartile range and only a few minor outliers at the lower end. These results indicate that, under the present working distance and imaging conditions, the 3D measurement error remains generally stable and exhibits no obvious systematic drift. Consequently, the acquired 3D data provide a reliable foundation for subsequent extrinsic parameter estimation and projection-accuracy evaluation.

3.3. Extrinsic Accuracy and Solver Comparison

To further validate the accuracy and repeatability of the estimated extrinsic parameters (R, t), an angle-domain consistency check is conducted following the methodology outlined in Section 3.1. This experiment confirms that by introducing a distance constraint variable, the minimal solving unit under planar geometric conditions is reduced from the six points typically required by the conventional Levenberg–Marquardt (LM) method to a three-point solvable case. Moreover, the proposed WOA-LM hybrid strategy enhances solving efficiency without compromising accuracy. A validation set of 19 points is constructed by combining calibration points with independent verification points. For each point, the nominal 3D coordinate in the workpiece frame and the corresponding measured galvanometer signals (Hi, Vi) are known. Using the same projection model and coordinate-transformation chain, two extrinsic-solving strategies are applied to obtain the respective parameter estimates:
(1)
Conventional LM-based approach: The standard Levenberg–Marquardt (LM) method estimates (R, t) via nonlinear least-squares optimization using a set of six calibration points, resulting in the estimate (RLM, tLM).
(2)
Proposed WOA-LM hybrid approach with distance augmentation: The method incorporates distance measurements to augment geometric constraints, enabling a minimal solving unit of only three points. All possible three-point combinations (20 groups) derived from the six calibration points are enumerated. Each combination is processed via the WOA-LM solver to generate a candidate solution (Rk, tk). Every candidate is then evaluated on the full set of six points by computing the total residual cost Jk. The optimal solution (R0, t0) is selected as the candidate that minimizes Jk.
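The enumerate-then-select logic of strategy (2) can be sketched generically. Here `solve_fn` stands in for the WOA-LM solver applied to a three-point group and `cost_fn` for the total residual J_k evaluated on all six points; both are placeholders for the procedures described above.

```python
from itertools import combinations

import numpy as np

def select_extrinsics(points, solve_fn, cost_fn):
    """Enumerate all 3-point subsets of the calibration points
    (C(6,3) = 20 groups for six points), solve each with the WOA-LM
    pipeline, and keep the candidate whose total residual on the full
    point set is smallest."""
    best, best_cost = None, np.inf
    for triple in combinations(range(len(points)), 3):
        candidate = solve_fn([points[i] for i in triple])  # e.g. (R_k, t_k)
        cost = cost_fn(candidate, points)                  # residual J_k on all points
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

Evaluating every candidate on the full point set, rather than only on its own triple, is what filters out solutions fitted to near-degenerate three-point geometries.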
For a fair comparison, the measured galvanometer signals (Hi, Vi) of the 19 validation points are first converted into corresponding deflection angles using the established signal-to-angle calibration model. The angular differences between predicted and reference deflection angles are then adopted as the evaluation metric. As shown in Figure 6, the predictions of the proposed WOA-LM hybrid method align closely with the reference values across the entire horizontal and vertical deflection ranges. In contrast, the conventional LM approach exhibits noticeable deviations in certain regions.
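A minimal sketch of this evaluation metric, assuming an affine signal-to-angle model with illustrative coefficients (the actual coefficients come from the established signal-to-angle calibration):

```python
import numpy as np

# Assumed affine signal-to-angle model theta = k*s + b per axis; the real
# coefficients are obtained from the signal-to-angle calibration step.
K_H, B_H = 2.5e-4, 0.0    # deg per command unit (illustrative values)
K_V, B_V = 2.5e-4, 0.0

def commands_to_angles(H, V):
    """Convert measured galvanometer commands (H, V) to deflection angles (deg)."""
    return K_H * np.asarray(H, float) + B_H, K_V * np.asarray(V, float) + B_V

# Evaluation metric: absolute angular difference between predicted and
# reference deflection angles, in degrees
theta_pred = np.array([10.0192, 10.0178])
theta_ref = np.array([10.0, 10.0])
dtheta = np.abs(theta_pred - theta_ref)
print(dtheta)
```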
Since this study focuses on proposing and validating a WOA + LM hybrid framework for small-sample calibration, the experiments primarily compare the conventional LM method with the proposed hybrid approach. WOA was selected for its balanced performance between exploration capability and convergence speed in high-dimensional, nonlinear optimization problems. While other global optimizers (e.g., PSO, GA) are also applicable to such tasks, a systematic comparison among different global algorithms is beyond the main scope of this paper. Future work will conduct a comprehensive comparative study among PSO, GA, WOA, and other optimizers on a unified experimental platform to further evaluate their performance in calibration tasks.
Figure 6b–e presents detailed views around four representative locations to highlight the positional discrepancies more clearly. The corresponding point-wise angular errors are summarized in Table 1 and Table 2. Overall, the WOA-LM method achieves consistently smaller and less dispersed errors in both horizontal and vertical deflection predictions across all test points. Its stable performance on the independent validation set further confirms the superior accuracy and repeatability of the extrinsic parameters (R, t) estimated by the proposed hybrid approach.
Beyond accuracy and consistency, the computational efficiency of the two solving strategies is also evaluated. Figure 7 summarizes the timing results from ten independent runs under identical hardware and software configurations. On average, the conventional LM method requires 34.33 s per run, while the proposed WOA-LM hybrid approach completes in only 6.08 s, demonstrating a substantial reduction in computation time.
Over multiple repeated runs, the proposed WOA-LM hybrid approach demonstrates both lower average runtime and more stable computation time, underscoring the efficiency and robustness of the “global search followed by local refinement” strategy. Although the inclusion of WOA introduces an additional global initialization step, it significantly improves the quality of the initial guess, thereby reducing ineffective iterations and the need for random restarts. As a result, the overall optimization process becomes more predictable and computationally efficient, without compromising estimation accuracy. In summary, the angle-domain validation on 19 points together with the runtime comparisons confirm that the proposed method achieves stable convergence and high-accuracy extrinsic estimation even with limited samples, offering superior prediction consistency and enhanced engineering applicability compared to the conventional LM-only approach.
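The runtime comparison of Figure 7 follows a simple repeated-measurement protocol. A hedged sketch of such a harness, with a toy workload standing in for the two solvers:

```python
import time
import statistics

def time_solver(solver, runs=10):
    """Wall-clock timing over repeated runs, mirroring the Figure 7 protocol."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solver()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# Toy workload standing in for the LM-only and WOA-LM pipelines
mean_s, std_s = time_solver(lambda: sum(i * i for i in range(10_000)))
print(f"mean {mean_s * 1e3:.3f} ms, std {std_s * 1e3:.3f} ms over 10 runs")
```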
The superior accuracy of the hybrid WOA-LM approach over the standalone LM method can be attributed to its enhanced ability to escape local minima. The LM algorithm, being a gradient-based local optimizer, is highly sensitive to the initial guess of parameters (R, t). When initialized poorly in the complex, non-convex error landscape of the projection model, LM can converge to a sub-optimal local minimum, resulting in larger final errors. In contrast, the WOA first performs a global exploration of the parameter space, identifying a region near the global optimum. This robust initial estimate provided by WOA ensures that the subsequent LM refinement starts from a favorable point, guiding it towards a more accurate and globally consistent solution.
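The "global search followed by local refinement" idea can be illustrated on a toy non-convex least-squares problem. The WOA updates below follow the encircling/random-search/spiral rules of Mirjalili and Lewis [22], and the LM step uses a numerical Jacobian with adaptive damping; this is a didactic sketch, not the paper's actual extrinsic solver:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy non-convex fitting task standing in for the extrinsic model:
# recover (a, b) in y = a*sin(b*x), which has many local minima in b.
x = np.linspace(0, 4, 60)
y = 1.5 * np.sin(2.5 * x)

def residuals(p):
    return p[0] * np.sin(p[1] * x) - y

def cost(p):
    return float(np.sum(residuals(p) ** 2))

def woa(n_whales=30, iters=200, lo=-5.0, hi=5.0):
    """Minimal Whale Optimization Algorithm: encircling, random search,
    and spiral updates with a linearly decreasing coefficient a."""
    X = rng.uniform(lo, hi, size=(n_whales, 2))
    best = min(X, key=cost).copy()
    for it in range(iters):
        a = 2.0 * (1 - it / iters)                  # a decreases from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(2), rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):           # encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                    # spiral update toward best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if cost(X[i]) < cost(best):
                best = X[i].copy()
    return best

def lm_refine(p, iters=50, lam=1e-3):
    """Bare-bones Levenberg-Marquardt with a forward-difference Jacobian."""
    p = p.copy()
    for _ in range(iters):
        r = residuals(p)
        J = np.column_stack([(residuals(p + h) - r) / 1e-6
                             for h in np.eye(2) * 1e-6])
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        if cost(p + step) < cost(p):
            p, lam = p + step, lam * 0.5             # accept step, trust more
        else:
            lam *= 2.0                                # reject step, damp harder
    return p

p0 = woa()             # global stage: coarse estimate near the right basin
p_hat = lm_refine(p0)  # local stage: sharp refinement from a good start
print(p_hat, cost(p_hat))
```

Because the LM stage only accepts cost-decreasing steps, the refined solution is never worse than the WOA initialization, which is the essence of the hybrid design.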

3.4. Continuous-Contour Projection Accuracy Validation

To validate the estimated extrinsic parameters and the corresponding inverse projection model for continuous-trajectory marking, rectangular and triangular contours are projected onto the workpiece. Each contour shape is discretized in the workpiece frame to generate an ordered sequence of 3D target points. For every point, the required galvanometer commands (Hk, Vk) are computed via the inverse mapping and executed sequentially, thereby drawing closed contours on the metal reference board. Dense sampling along the contour boundaries is employed to prevent visible breaks in the projected path and to accurately capture the overall projection performance.
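The discretization and command-generation steps above can be sketched as follows; `inverse_map` is a hypothetical stub standing in for the calibrated inverse projection model, and the contour dimensions and sampling step are illustrative:

```python
import numpy as np

def rectangle_points(w, h, step):
    """Ordered 3D target points along a rectangle in the workpiece XY plane
    (z = 0), densely sampled so the projected path shows no visible breaks."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h], [0, 0]], float)
    pts = []
    for p, q in zip(corners[:-1], corners[1:]):
        n = max(int(np.ceil(np.linalg.norm(q - p) / step)), 1)
        for s in np.linspace(0.0, 1.0, n, endpoint=False):
            pts.append(p + s * (q - p))
    arr = np.array(pts)
    return np.column_stack([arr, np.zeros(len(arr))])

def inverse_map(p):
    """Hypothetical inverse projection: real commands come from the calibrated
    model and the estimated (R, t); this stub only shows the
    point-to-command interface (H_k, V_k)."""
    return 100.0 * p[0], 100.0 * p[1]

targets = rectangle_points(80.0, 50.0, step=1.0)   # contour in mm
commands = [inverse_map(p) for p in targets]        # executed sequentially
print(len(targets), "target points on the closed rectangle")
```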
To mitigate the adverse effects of specular reflections and uneven intensity distribution inherent to the high-reflectance metal reference board, a dedicated image-processing pipeline was implemented for robust laser spot extraction. The pipeline consists of four stages:
(1) Acquisition parameter tuning: The exposure time of the color camera was optimized to prevent over-saturation of the laser spot while preserving sufficient contrast against the bright metal background. The laser output power was concurrently adjusted to maintain a consistent and detectable spot size.
(2) Pre-processing: A Gaussian filter was applied to the raw images to suppress high-frequency noise and reflection speckle. Contrast-limited adaptive histogram equalization (CLAHE) was then applied locally to enhance the visibility of the spot region without globally amplifying background noise.
(3) Spot segmentation: An adaptive thresholding method, based on the local mean intensity within a sliding window, dynamically computes the binarization threshold for each image region, effectively separating the laser spot from varying reflective backgrounds. Morphological opening was subsequently applied to remove small, reflection-induced noise pixels.
(4) Center localization: The centroid of the largest contiguous blob was computed as an initial estimate, and a quadratic surface was then fitted to the intensity values within a predefined neighborhood of this estimate to achieve sub-pixel precision. This fitting is inherently robust to minor, symmetric intensity variations caused by diffuse reflections.
This multi-stage pipeline ensured consistent and accurate laser spot detection across the entire metal surface, forming a reliable basis for the subsequent contour error evaluation.
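The segmentation and sub-pixel localization stages can be sketched in NumPy. The synthetic Gaussian spot, the 5×5 fitting window, and the 50%-of-maximum threshold below are illustrative assumptions; the exposure tuning, Gaussian/CLAHE pre-processing, and sliding-window adaptive thresholding of the real pipeline (typically done with OpenCV) are omitted:

```python
import numpy as np

def subpixel_center(img):
    """Coarse intensity-weighted centroid of the bright blob, refined by
    fitting a quadratic surface z = ax^2 + by^2 + cxy + dx + ey + f to a
    5x5 neighborhood and taking the stationary point of the fit."""
    mask = img > 0.5 * img.max()                    # illustrative threshold
    ys, xs = np.nonzero(mask)
    w = img[ys, xs]
    cy = int(round(float((ys * w).sum() / w.sum())))
    cx = int(round(float((xs * w).sum() / w.sum())))

    # Quadratic surface fit on a 5x5 window around the coarse centroid
    yy, xx = np.mgrid[-2:3, -2:3]
    patch = img[cy - 2:cy + 3, cx - 2:cx + 3].ravel()
    A = np.column_stack([xx.ravel()**2, yy.ravel()**2, (xx * yy).ravel(),
                         xx.ravel(), yy.ravel(), np.ones(25)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, patch, rcond=None)[0]
    # Stationary point of the fitted quadratic: solve grad = 0
    dx, dy = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return cx + dx, cy + dy

# Synthetic Gaussian spot centered at (20.3, 14.7) on a 40x30 image
yy, xx = np.mgrid[0:30, 0:40]
img = np.exp(-((xx - 20.3)**2 + (yy - 14.7)**2) / 8.0)
cx_est, cy_est = subpixel_center(img)
print(f"estimated center: ({cx_est:.2f}, {cy_est:.2f})")
```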
For quantitative evaluation, sixteen white circular markers (5 mm in diameter) are arranged near a precision-machined groove on the reference board. The groove centerline is aligned to pass through the centers of the markers, establishing a stable local coordinate reference. As shown in Figure 8, the projected contours and the marker layout are displayed together, enabling visual assessment of contour closure, edge straightness, and corner continuity.
To evaluate local projection accuracy, the centers of the 16 white circular markers and the corresponding projected laser trajectory are extracted from the captured images. The shortest distance between each marker center and the laser path is computed as the local projection error. The distribution of these 16 errors is summarized in the heatmap shown in Figure 9b. As illustrated in Figure 9a,b, the projected contour closely follows the reference groove centerline, with errors varying smoothly and no conspicuous outliers. The mean projection error is 0.434 mm (standard deviation: 0.225 mm). The maximum and minimum errors are 0.879 mm and 0.039 mm, respectively, with all values remaining below the 1 mm threshold. These results demonstrate consistent sub-millimeter performance in practical contour projection [30], confirming that the calibrated inverse mapping from workpiece coordinates to galvanometer control signals reliably supports both precise point-to-point projection and continuous trajectory marking.
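The local error metric, the shortest distance from each marker center to the projected trajectory, reduces to point-to-segment distances over a polyline. A small sketch with hypothetical marker centers and a rectangular trajectory:

```python
import numpy as np

def point_to_polyline(p, verts):
    """Shortest distance from point p to the polyline given by verts (N, 2),
    used as the local projection error between a marker center and the
    extracted laser trajectory."""
    best = np.inf
    for a, b in zip(verts[:-1], verts[1:]):
        ab, ap = b - a, p - a
        t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)  # project, clamp to segment
        best = min(best, np.linalg.norm(p - (a + t * ab)))
    return best

# Hypothetical marker centers and a closed rectangular trajectory (mm)
traj = np.array([[0, 0], [80, 0], [80, 50], [0, 50], [0, 0]], float)
centers = np.array([[40.0, 0.3], [80.5, 25.0], [40.0, 49.6]])
errors = np.array([point_to_polyline(c, traj) for c in centers])
print(f"mean {errors.mean():.3f} mm, max {errors.max():.3f} mm")
```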
To evaluate the distance-based assessment method on a different trajectory, the projected triangular contour is analyzed using nine white circular markers arranged as shown in Figure 10. Note that in Figure 10 the projected laser intensity appears weaker toward the edges of the plate, most likely because of defocusing as the distance from the scanner increases; uniform projection across the entire plate may therefore require the z-focusing unit. Following the same image-processing procedure described for Figure 9, the marker centers and the laser trajectory are extracted and displayed in Figure 11a. The shortest distance from each marker center to the projected contour is then computed and used as the local projection error.
Figure 11b visualizes the corresponding errors for the nine markers. The projected laser trajectory closely follows the reference centerline, with errors varying smoothly and exhibiting no abrupt local deviations. These results demonstrate that the proposed inverse mapping performs effectively not only for rectangular paths but also for triangular contours with sharp corners, thereby validating its capability to support practical in situ marking of diverse geometric shapes.

4. Conclusions

This study presents a compact and robust calibration framework for galvanometer-based laser 3D projection systems, integrating a complete pipeline of small-sample point acquisition, extrinsic parameter solving, and closed-loop projection validation. By augmenting heterogeneous vision-based 3D points with distance measurements, the geometric constraints are strengthened, allowing the extrinsic parameters (rotation R and translation t) to be solved with a minimal set of only three points. A hybrid WOA-LM optimization strategy is employed, where the Whale Optimization Algorithm provides a reliable global initial estimate, and the Levenberg–Marquardt method subsequently refines the solution through nonlinear least-squares, ensuring stable convergence and high repeatability under limited data conditions. Experimental validation on an independent point set achieved sub-millimeter accuracy (mean ≈ 0.37 mm, max ≈ 0.87 mm). Continuous-contour projection tests further confirmed consistent performance, with a mean error of 0.434 mm and all errors below 1 mm. The proposed method is validated under controlled laboratory conditions on planar, high-reflectance metal surfaces and assumes stable relative mounting of key components. Its performance on non-planar, textured, or colored surfaces, under strong ambient light (which may degrade spot extraction), or in comparison with other global optimizers (e.g., PSO, GA) remains to be investigated, representing the primary limitations for broader industrial application. Future work will focus on enhancing robustness in complex lighting, developing faster automated recalibration, and exploring multi-sensor or learning-based compensation to extend adaptability in dynamic industrial environments.

Author Contributions

C.L., S.T., and T.L. conceived the idea and designed the experiments. C.L. and S.T. developed the methodology and performed the experiments. M.H. collected the data and conducted the formal analysis. T.L. evaluated the proposed method, optimized key parameters, and supervised the study. C.L. wrote the original draft. S.T., M.H., and T.L. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Jilin Provincial Natural Science Foundation (General Program for Free Exploration; No. YDZJ202401587ZYTS) and the Science and Technology Research Project of the Education Department of Jilin Province (No. JJKH20230808KJ).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wang, M.; Xu, J.; Lassiter, H.A.; Li, S. BIM-driven laser spatial augmented reality for in-situ layout and assembly. Autom. Constr. 2025, 178, 106405. [Google Scholar] [CrossRef]
  2. Xu, Z.; Duan, X.; Zhu, Y.; Zhang, D. On galvanometer laser projection positioning to layups of large composite material. Machines 2023, 11, 215. [Google Scholar] [CrossRef]
  3. Lao, D.; Wang, Y.; Wang, F.; Gao, C. Calibrating laser three-dimensional projection systems using binocular vision. Sensors 2023, 23, 1941. [Google Scholar] [CrossRef]
  4. Yeung, H.; Lane, B.M.; Donmez, M.A.; Moylan, S. In-situ calibration of laser/galvo scanning system using dimensional reference artefacts. CIRP Ann.-Manuf. Technol. 2020, 69, 441–444. [Google Scholar] [CrossRef]
  5. Tu, J.; Zhang, L. Effective data-driven calibration for a galvanometric laser scanning system using binocular stereo vision. Sensors 2018, 18, 197. [Google Scholar] [CrossRef] [PubMed]
  6. Hou, M.; Shi, Z.; Liu, J.; Chen, Y.; Li, T. Development of a laser scanning projection system with a dual-diameter fitting method and particle swarm optimization. Appl. Opt. 2021, 60, 1250–1259. [Google Scholar] [CrossRef]
  7. Tu, J.; Zhang, L. Rapid on-site recalibration for binocular vision galvanometric laser scanning system. Opt. Express 2018, 26, 32608–32623. [Google Scholar] [CrossRef]
  8. Mao, Y.; Zeng, L.-C.; Jiang, J.; Yu, C. Plane-constraint-based calibration method for a galvanometric laser scanner. Adv. Mech. Eng. 2018, 10, 1687814018773670. [Google Scholar] [CrossRef]
  9. Godineau, A.; Lavernhe, C.; Tournier, C. Calibration of galvanometric scan heads for additive manufacturing with machine assembly defects consideration. Addit. Manuf. 2019, 26, 250–257. [Google Scholar] [CrossRef]
  10. Pjanic, P.; Willi, S.; Grundhöfer, A. Geometric and photometric consistency in a mixed video and galvanoscopic scanning laser projection mapping system. IEEE Trans. Vis. Comput. Graph. 2017, 23, 2430–2439. [Google Scholar] [CrossRef]
  11. Lane, B.; Moylan, S.; Yeung, H.; Neira, J.; Chavez-Chao, J. Quasi-Static Position Calibration of the Galvanometer Scanner on the Additive Manufacturing Metrology Testbed. In NIST Technical Note 2099; US Department of Commerce, National Institute of Standards and Technology: Gaithersburg, Maryland, 2020. [Google Scholar] [CrossRef]
  12. Ha, V.-T.; Do, V.-P.; Kim, H.-G.; Hong, S.-H.; Lee, J.-Y.; Lee, B.-R. Calibration method for laser-camera scanners with a motorized rotating mechanism using a hand-eye calibration framework. Opt. Eng. 2025, 64, 063103. [Google Scholar] [CrossRef]
  13. Sels, S.; Bogaerts, B.; Vanlanduit, S.; Penne, R. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera. Sensors 2018, 18, 1478. [Google Scholar] [CrossRef]
  14. Niu, Y.B.; Zhou, Y.Q.; Luo, Q.F. Optimize star sensor calibration based on integrated modeling with hybrid WOA-LM algorithm. J. Intell. Fuzzy Syst. 2020, 38, 2691–2693. [Google Scholar] [CrossRef]
  15. Herrera, D.; Kannala, J.; Heikkilä, J. Joint Depth and Color Camera Calibration with Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2058–2064. [Google Scholar] [CrossRef]
  16. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  17. Zhu, Q.; Wu, B.; Wan, N. A sub-pixel location method for interest points by means of the Harris interest strength. Photogramm. Rec. 2007, 22, 321–335. [Google Scholar] [CrossRef]
  18. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  19. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  20. Kaufman, S.P.; Savikovsky, A. Apparatus and Method for Projecting a 3D Image. U.S. Patent 6,547,397 B1, 15 April 2003. [Google Scholar]
  21. Lawson, C.L.; Hanson, R.J. Solving Least Squares Problems; Prentice-Hall: Englewood Cliffs, NJ, USA, 1974. [Google Scholar]
  22. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  23. Gavin, H.P. The Levenberg–Marquardt Algorithm for Nonlinear Least Squares Curve-Fitting Problems; Duke University: Durham, NC, USA, 2024; Available online: https://people.duke.edu/~hpgavin/lm.pdf (accessed on 7 January 2026).
  24. Romeo, A.; Carraro, M.; Fenu, S.; Pinna, S.; Vezzani, R. Microsoft Azure Kinect Calibration for Three-Dimensional Dense Point Clouds and Reliable Skeletons. Sensors 2022, 22, 4986. [Google Scholar] [CrossRef]
  25. Pasinetti, S.; Nuzzi, C.; Luchetti, A.; Zanetti, M.; Lancini, M.; De Cecco, M. Experimental Procedure for the Metrological Characterization of Time-of-Flight Cameras for Human Body 3D Measurements. Sensors 2023, 23, 538. [Google Scholar] [CrossRef]
  26. He, Y.; Liang, B.; Zou, Y.; He, J.; Yang, J. Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras. Sensors 2017, 17, 92. [Google Scholar] [CrossRef] [PubMed]
  27. Zhao, H.; Wang, S.; Shen, W.; Jing, W.; Li, L.; Feng, X.; Zhang, W. Laser Spot Centering Algorithm of Double-Area Shrinking Iteration Based on Baseline Method. Appl. Sci. 2022, 12, 11302. [Google Scholar] [CrossRef]
  28. Sun, J.; Xie, Y. Subpixel Spot Localization Using Multiscale Anisotropic Gaussian Tensor. Measurement 2023, 214, 112756. [Google Scholar] [CrossRef]
  29. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  30. Kobiela, K.; Jedynak, M.; Harmatys, W.; Krawczyk, M.; Sładek, J.A. Assessment of Laser Galvanometer Scanning System Accuracy Using Ball-Bar Standard. Appl. Sci. 2021, 11, 8929. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the RGB-D-measurement-based laser projection system.
Figure 2. Flowchart of the calibration procedure for the laser projection system.
Figure 3. Overall workflow of the WOA–LM hybrid optimization for extrinsic parameter estimation.
Figure 4. Photograph of the experimental setup.
Figure 5. Box plots of absolute errors in X, Y, and Z and the 3D Euclidean distance error (d) for the test points.
Figure 6. Comparison of projection results obtained using the conventional LM and the proposed WOA-LM methods. (a) Positions of all points in the H–V command space (in command units), including 13 verification points and 6 calibration points; (b) point 7; (c) point 10; (d) point 16; (e) point 19.
Figure 7. Runtime comparison between LM and WOA + LM over 10 repeated runs (unit: s).
Figure 8. Overall contour projection results on the reference board and the layout of the rectangle projection marker points.
Figure 9. Local accuracy evaluation for rectangular contour projection: (a) extracted circle centers and laser trajectory; (b) error heatmap (color-coded visualization) of the 16 reference points.
Figure 10. Layout of triangular projection marker points.
Figure 11. Local accuracy evaluation of triangular contour projection: (a) Extracted circle center and laser trajectory; (b) Error heatmap of 9 reference points.
Table 1. Angular errors at the calibration points (unit: °).
| Point ID | ΔθH LM (deg) | ΔθH LM (%) | ΔθV LM (deg) | ΔθV LM (%) | ΔθH WOA+LM (deg) | ΔθH WOA+LM (%) | ΔθV WOA+LM (deg) | ΔθV WOA+LM (%) |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.0192 | 0.032 | 0.0035 | 0.006 | 0.0161 | 0.027 | 0.0001 | 0.000 |
| 2 | 0.0178 | 0.030 | 0.0134 | 0.022 | 0.0146 | 0.024 | 0.0100 | 0.017 |
| 3 | 0.0046 | 0.008 | 0.0128 | 0.021 | 0.0035 | 0.006 | 0.0104 | 0.017 |
| 4 | 0.0295 | 0.049 | 0.0108 | 0.018 | 0.0231 | 0.038 | 0.0079 | 0.013 |
| 5 | 0.0060 | 0.010 | 0.0237 | 0.040 | 0.0024 | 0.004 | 0.0184 | 0.031 |
| 6 | 0.0114 | 0.019 | 0.0299 | 0.050 | 0.0097 | 0.016 | 0.0254 | 0.042 |
| Mean | 0.0148 | 0.025 | 0.0157 | 0.026 | 0.0116 | 0.019 | 0.0120 | 0.020 |
| Std Dev | 0.0086 | 0.014 | 0.0099 | 0.017 | 0.0076 | 0.013 | 0.0093 | 0.015 |

Note: Δθ = |θ_method − θ_std|, where "method" denotes the solving strategy (LM or WOA + LM); absolute errors Δθ are reported in degrees.
Table 2. Angular errors at the verification points (unit: °).
| Point ID | ΔθH LM (deg) | ΔθH LM (%) | ΔθV LM (deg) | ΔθV LM (%) | ΔθH WOA+LM (deg) | ΔθH WOA+LM (%) | ΔθV WOA+LM (deg) | ΔθV WOA+LM (%) |
|---|---|---|---|---|---|---|---|---|
| 7 | 0.1094 | 0.182 | 0.0234 | 0.039 | 0.0097 | 0.016 | 0.0021 | 0.004 |
| 8 | 0.0407 | 0.068 | 0.0104 | 0.017 | 0.0037 | 0.006 | 0.0010 | 0.002 |
| 9 | 0.0418 | 0.070 | 0.0000 | 0.000 | 0.0035 | 0.006 | 0.0030 | 0.005 |
| 10 | 0.0905 | 0.151 | 0.0082 | 0.014 | 0.0078 | 0.013 | 0.0007 | 0.001 |
| 11 | 0.0751 | 0.125 | 0.0560 | 0.093 | 0.0059 | 0.010 | 0.0043 | 0.007 |
| 12 | 0.0900 | 0.150 | 0.1097 | 0.183 | 0.0073 | 0.012 | 0.0089 | 0.015 |
| 13 | 0.0455 | 0.076 | 0.0645 | 0.108 | 0.0039 | 0.006 | 0.0056 | 0.009 |
| 14 | 0.0010 | 0.002 | 0.0828 | 0.138 | 0.0001 | 0.000 | 0.0076 | 0.013 |
| 15 | 0.0502 | 0.084 | 0.1006 | 0.168 | 0.0039 | 0.006 | 0.0079 | 0.013 |
| 16 | 0.1929 | 0.322 | 0.0851 | 0.142 | 0.0166 | 0.028 | 0.0073 | 0.012 |
| 17 | 0.0648 | 0.108 | 0.0369 | 0.062 | 0.0050 | 0.008 | 0.0029 | 0.005 |
| 18 | 0.0594 | 0.099 | 0.0140 | 0.023 | 0.0051 | 0.008 | 0.0012 | 0.002 |
| 19 | 0.2005 | 0.334 | 0.0618 | 0.103 | 0.0161 | 0.027 | 0.0050 | 0.008 |
| Mean | 0.0813 | 0.135 | 0.0536 | 0.089 | 0.0067 | 0.011 | 0.0044 | 0.007 |
| Std Dev | 0.0605 | 0.101 | 0.0364 | 0.061 | 0.0044 | 0.007 | 0.0028 | 0.005 |

Note: Δθ = |θ_method − θ_std|, where "method" denotes the solving strategy (LM or WOA + LM); absolute errors Δθ are reported in degrees.

Share and Cite

MDPI and ACS Style

Liu, C.; Tong, S.; Liu, T.; Hou, M. A Hybrid Vision and Optimization Strategy for Accurate 3D Laser Projection Calibration. Appl. Sci. 2026, 16, 1733. https://doi.org/10.3390/app16041733

