Article

Accuracy-Enhanced Calibration Method for Robot-Assisted Laser Scanning of Key Features on Large-Sized Components

College of Mechanical and Vehicle Engineering, Linyi University, Linyi 276000, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(24), 7518; https://doi.org/10.3390/s25247518
Submission received: 15 November 2025 / Revised: 4 December 2025 / Accepted: 8 December 2025 / Published: 10 December 2025
(This article belongs to the Section Physical Sensors)

Abstract

In advanced manufacturing, accurate and reliable 3D geometry measurement is vital for the quality control of large-sized components with multiple small key local features. To obtain both the geometric form and spatial position of these local features, a hybrid robot-assisted laser scanning strategy is introduced, combining a laser tracker, a fringe-projection 3D scanner, and a mobile robotic unit that integrates an industrial robot with an Automated Guided Vehicle. To improve the overall measurement accuracy, we propose an accuracy-enhanced calibration method that incorporates both error control and compensation strategies. Firstly, an accurate extrinsic parameter calibration method is proposed, which integrates robust target sphere center estimation with distance-constraint-based optimization of local common point coordinates. Subsequently, to construct a high-accuracy, large-scale spatial measurement field, an improved global calibration method is proposed, incorporating coordinate optimization and a hierarchical strategy for error control. Finally, a robot-assisted laser scanning hybrid measurement system is developed, followed by calibration and validation experiments to verify its performance. The experiments confirm its high precision over a 14 m range (maximum error: 0.117 mm; mean error: 0.112 mm) and its strong applicability to large-scale scanning of key geometric features, providing reliable data for the high-quality manufacturing of large-sized components.

1. Introduction

The manufacturing capabilities of high-end equipment used in major projects such as aviation, aerospace, maritime engineering, and energy serve as a critical indicator of a nation’s technological advancement [1,2]. Large-sized components (LSCs) serve as the core load-bearing structure of high-end equipment, featuring large size and complex structure. Among them, the small-scale key local features (KLFs, 1 × 1 mm to 25 × 25 mm) on LSCs, such as feature holes and feature surfaces, are distributed discretely over a wide area (2 to 12 m). Ensuring precise and automated 3D shape measurement of these features plays a vital role in quality assurance, product reliability improvement, and cost reduction in manufacturing processes [3,4]. For instance, in the production process of large-sized components, in order to ensure high-precision integrated manufacturing, it is necessary to conduct online detection of key features and use the detection results to guide the subsequent manufacturing process. Moreover, to ensure the assembly accuracy, cross-scale high-precision inspection of key features is essential to avoid forced assembly caused by geometric interference at connection interfaces. Therefore, the research on high-precision online detection methods for local small-scale key features within a large-scale space is of great significance for ensuring the high-quality and reliable manufacturing of large-sized components.
As both the size and the manufacturing precision of core structural components become increasingly extreme, the measurement requirements for their key features, in terms of both scale and accuracy, have diverged accordingly. Consequently, hybrid measurement methods that combine large-scale spatial coordinate measurement systems with small-range vision-based systems have become essential. According to the measurement principle, the main large-scale spatial coordinate measurement systems include the laser tracker [5], total station [6], indoor global positioning system (i-GPS) [7], laser radar [8], and photogrammetry [9]. Among small-range vision-based measurement systems, those based on structured light are the most widely used. To guarantee reliable measurement of key local features (KLFs), the hybrid measurement system (HMS) undergoes a series of calibration steps prior to data acquisition: (1) camera calibration, (2) intrinsic calibration of the 3D scanner, (3) local extrinsic parameter calibration, and (4) global calibration for constructing the measurement field. Within the HMS, extrinsic parameter calibration establishes the spatial transformation between the 3D scanner's coordinate system and an intermediate reference frame. Within the global measurement domain, global calibration plays a critical role in minimizing cumulative errors in final measurements and maintaining overall accuracy across large-scale workspaces. Because camera and 3D scanner calibration is usually predefined, system setup primarily focuses on performing global and local calibration.
For extrinsic parameter calibration, among global direct pose estimation methods in structured-light vision measurement systems, the extrinsic parameter calibration method based on cooperative targets is the most widely used [10]. Barone et al. [11,12] utilized a two-dimensional planar checkerboard calibration plate, combined with Singular Value Decomposition (SVD), to perform the extrinsic parameter calibration of a hybrid measurement system integrating a 3D structured-light scanner with global stereo vision. Liu et al. [13] adopted an indirect calibration method based on rigid-body transformation to achieve rapid calibration of extrinsic parameters. Du et al. [14] utilized target balls that could be simultaneously identified by both global and local measurement devices as cooperative targets. Based on the single-point multiple-point method and the rotation method, they constructed common feature points with non-coplanar distribution and pose reference coordinate systems, respectively, achieving rapid calculation of the extrinsic parameter matrix of the 3D scanner. To enhance the calibration accuracy of extrinsic parameters, Qu et al. [15] designed a cooperative target and combined it with adjustment optimization techniques to establish a calibration model that links multiple independent reference frame transformations, achieving high-accuracy and rapid calibration of extrinsic parameters. However, this approach did not account for differences in the accuracy levels of the measurement data. To address this issue, Jiang et al. [16,17,18] proposed an extrinsic parameter calibration method that takes the scale factor into account. This method first established a coordinate fusion model for the combined measurement of global binocular vision and local optical scanning; then, based on numerical simulation, it investigated the influence of scale differences on global positioning accuracy, established an extrinsic parameter calibration model considering the scale factor, and combined multi-view adjustment optimization to enhance the calibration accuracy of extrinsic parameters. Garcia-D'Urso et al. [19] proposed a method for estimating multi-camera extrinsic parameters based on structured 3D markers and iterative optimization. By integrating AI-driven deep learning or regression modules, it offers significant advantages in feature extraction, outlier removal, and initial value estimation, thereby enhancing the robustness and automation level of the extrinsic parameter solution. Pan et al. [20] proposed an online extrinsic calibration method that unifies LiDAR and camera data into depth-map representations and leverages depth-edge discontinuities as robust geometric constraints. By incorporating a miscalibration detection module and on-manifold optimization for continuous parameter refinement, the method effectively addresses vibration- and deformation-induced extrinsic drift and enhances the resilience and autonomy of multi-sensor calibration in dynamic environments. This line of work highlights emerging AI- and optimization-driven approaches for achieving resilient, autonomous extrinsic calibration in dynamic environments. In summary, current studies on the extrinsic parameter calibration of terminal measurement systems have achieved significant results in aspects such as the construction of calibration targets and algorithms for solving extrinsic parameters.
However, the critical challenge of severe accumulation of multi-baseline transformation errors, caused by the difficulty of tracing, constraining, and compensating heterogeneous measurement errors, remains effectively unresolved. Moreover, multi-reference-frame transformations involve large-angle rotations, which lead to poor stability in the calculation of transformation parameters, making it difficult to ensure solution accuracy.
For global calibration, constructing a large-scale global measurement field based on a laser tracker is one of the most effective means of achieving global accuracy control. Among its key challenges, controlling the measurement error of global common points (GCPs) is the key research direction for ensuring the accuracy of such field construction [21]. Jin et al. [22] elaborated in detail on the construction of a large-scale measurement field using the station-transfer method based on a laser tracker and established a transfer station error model that takes into account both the global common point configuration and measurement errors. The study revealed the correlation between the layout of common points, the measurement errors of the laser tracker, the parameter errors of the transfer station, and the transfer station errors. Predmore [23] proposed a method for optimizing the coordinate transformation parameters that considers the measurement uncertainty ellipsoid of each GCP, employing a Mahalanobis-distance-based bundle adjustment to solve the orientation problem of the multi-base-station measurement system and to perform weighted optimization of the common point coordinate measurements. To address the issue of unknown and uncontrollable measurement errors in GCPs, Wang et al. [24] proposed a large-scale coordinate unification method based on standard artifacts. This method first introduces standard artifacts with prior geometric knowledge to replace the traditional independent common points and eliminates gross errors by setting allowable error limits, thereby obtaining qualified initial values for common reference point measurements. On this basis, a coordinate value optimization method based on geometric constraints is proposed, further improving the measurement accuracy of common reference points. Additionally, through the Procrustes spatial coordinate system registration method, the coordinate unification of large-scale measurement systems was achieved. However, this method cannot effectively reduce the impact of laser tracker angle measurement errors on the measurement accuracy of spatial points. To improve the global measurement accuracy in the combined measurement of large-sized and complex components, Lin et al. [25] developed a strategy to establish a large-scale spatial measurement field based on laser interferometric distance constraints. The approach initially applied a multi-station measurement scheme to derive the preliminary 3D coordinates of the GCPs, followed by singular value decomposition (SVD) to determine the stations' initial orientation. An error model incorporating high-precision laser interferometric distance constraints is established to mitigate the impact of angular measurement errors from the laser tracker on spatial point accuracy. In addition, by introducing multiple one-dimensional carbon fiber rods to form spatial length references, a precision enhancement strategy based on spatial distance constraints is proposed. Length and angular constraint equations are formulated, and a nonlinear optimization algorithm is applied to calculate the correction values of the reference point coordinates, thereby refining the transfer errors and improving measurement accuracy in localized areas. Fan et al. [26], based on a weighted rank-deficient adjustment model for laser interferometry, introduced a known-length reference ruler, constructed length constraints for a single posture, and established an adjustment model with additional constraints, thereby further improving the overall accuracy of point distribution across the measurement field. However, under complex on-site conditions, the aforementioned methods face significant challenges in simultaneously identifying and controlling outlier measurements, which limits their ability to effectively suppress the influence of angular measurement errors from laser trackers on spatial point accuracy. Recent advances in large-scale metrology have also highlighted the importance of dynamic error compensation and multi-station self-calibration for laser tracker networks. Zou et al. [27] proposed a high-precision construction strategy for multi-station laser tracker measurement networks, incorporating ERS-based weighted optimization, iterative refinement of inter-station transformation parameters, and optimal LTMS configuration planning (including station number and spatial placement) to minimize global network error. This line of research demonstrates the growing emphasis on intelligent, optimization-driven frameworks for improving large-scale automated measurement accuracy. Ma et al. [28] introduced an enhanced registration strategy that integrates Enhanced Reference System (ERS) point-weighted self-calibration with thermal deformation compensation, significantly improving the stability of multi-station global measurements under varying environmental conditions. However, due to the difficulty of constructing multi-domain, multi-pose geometric constraints and the insufficient constraint provided by external high-precision features, it remains difficult to effectively control the global datum accuracy under restricted layouts.
To address the aforementioned limitations, we introduce an accuracy-enhanced calibration method that integrates error control and compensation strategies. Firstly, an accurate extrinsic parameter calibration method is proposed, which integrates robust target sphere center estimation with distance-constrained-based optimization of local common point coordinates. This method establishes a correction model for reference frame transformation parameters, effectively compensating for multi-source heterogeneous measurement errors. Subsequently, to construct a high-accuracy large-scale spatial measurement field, an improved global calibration method is proposed. This method incorporates coordinate measurement value optimization and applies a hierarchical strategy for measurement error control. Finally, a robot-assisted laser scanning hybrid measurement system was developed, followed by calibration and validation experiments to verify its performance.
This paper is organized as follows. Section 2 presents the proposed measurement method along with its fundamental principles. Section 3 describes the procedure for calibrating the extrinsic parameters of the 3D scanner. Section 4 illustrates the method for constructing a large-scale spatial measurement field. Section 5 presents the calibration and validation experiments, whereas Section 6 summarizes the key outcomes and future work of the study.

2. Measurement Method and Principle

2.1. Overview of Robot-Assisted Laser Scanning Hybrid Measurement System

The proposed robot-assisted laser scanning hybrid measurement system (RLSHS) is composed of three essential modules: a laser tracker responsible for global reference acquisition, a structured-light 3D scanner for local geometry capture, and a mobility-enabled robotic platform that integrates an industrial robot with an automated mobile platform. A schematic overview is presented in Figure 1.
Automated and high-accuracy 3D measurement of KLFs plays a pivotal role in ensuring manufacturing quality while reducing inspection time and manual effort. Given the extensive measurement range of LSCs, the multiscale nature of their features, and the need for high accuracy in both shape and position, as well as multi-source data integration, a laser tracker is employed as the global coordinate measurement device. This enables long-range measurements while maintaining consistency between global and local accuracy. In addition, a fringe-projection-based 3D scanner is employed for local terminal measurements, ensuring high accuracy in close-range inspections. The measurement system integrates a mobile robotic unit consisting of an industrial manipulator and a mobile platform, thereby enhancing both operational efficiency and flexibility. System coordination is achieved through industrial Fieldbus networks and dedicated communication software, forming a robust auxiliary control framework. The scanning terminal, integrating a 3D scanner and several spherically mounted retroreflectors (SMRs), is installed on the end-effector of an industrial robot, which serves as the main measurement actuator by combining motion flexibility with high positioning repeatability. This configuration allows accurate positioning of the 3D scanner to acquire KLF information in segmented work areas. Moreover, a mobile platform transports the actuator across larger workspaces, thereby enlarging the effective measurement range of the proposed RLSHS.
In the proposed RLSHS, the laser tracker, mobile platform, and 3D scanner operate in a real-time cooperative manner. The measurement process of the system is as follows: Firstly, before the formal measurement, a large-scale spatial measurement field (spatial measurement network) is constructed by combining laser tracker transfer station technology with several multi-dimensional cooperative calibrators (MDCC), and a high-precision global measurement reference is established. The global measurement network is established only once prior to system operation, functioning as a stable, reusable geometric reference throughout all subsequent measurements; thus, its setup time is not part of the per-task efficiency metric. At the initial stage of formal measurement, the mobile platform first transports the robot to the vicinity of the target region, after which the laser tracker continuously measures the SMRs mounted on the 3D scanner and robot to provide global pose feedback. Using this real-time guidance, the robot performs fine positioning of the scanner along predefined viewpoints. Before each scan, the laser tracker verifies pose stability, ensuring that the scanner captures the target area under a globally referenced coordinate frame. Through this closed-loop interaction, the system maintains coherent coordination of global referencing, robot motion, and high-accuracy 3D data acquisition.

2.2. Measurement Model

The coordinate systems of the proposed RLSHS primarily consist of three frames: the laser tracker coordinate system $O_R X_R Y_R Z_R$ (RCS), the coordinate system of the dynamic 3D scanner $O_S X_S Y_S Z_S$ (SCS), and the intermediate coordinate system $O_I X_I Y_I Z_I$ (ICS), which defines the reference pose of the local 3D scanner. As illustrated in Figure 1, three transformation matrices are involved among these coordinate systems: matrix $H_S^I$ relates the SCS to the ICS, matrix $H_I^R$ relates the ICS to the RCS, and matrix $H_S^R$ relates the SCS to the RCS. Specifically, the ICS serves as an intermediate frame bridging the RCS and the SCS. To establish this reference frame, multiple sophisticated magnetic nests (SMNs) for fixing SMRs are rigidly mounted on the scanner fixture. The relative positions of these reference observation points are pre-calibrated using a high-precision coordinate measuring machine (CMM). During operation, the laser tracker measures the positions of the SMRs to determine the pose of the scanner and to enable accurate real-time tracking.
Let $P_i$ denote a point located on a key geometric feature of a large-sized component, and let $P_i^S = \left(x_i^S, y_i^S, z_i^S\right)$ represent its corresponding 3D coordinates in the SCS. These can be transformed into the ICS through the following formula:
$$P_i^I = R_S^I P_i^S + T_S^I \tag{1}$$
where $R_S^I$ and $T_S^I$ denote the rotation matrix and translation vector of matrix $H_S^I$, respectively. Matrix $H_S^I$, also referred to as the extrinsic parameter matrix of the 3D scanner, is a fixed transformation matrix.
Usually, any station RCS can be designated as the global coordinate system (GCS), and the point coordinates defined in the ICS are transformed into the GCS using the following transformation:
$$P_i^G = R_I^G P_i^I + T_I^G \tag{2}$$
By combining Equations (1) and (2), a cross-scale measurement data transfer model is established as follows:
$$P_i^G = R_I^G R_S^I P_i^S + R_I^G T_S^I + T_I^G \tag{3}$$
where $R_S^G = R_I^G R_S^I$ represents the rotation matrix of the SCS with respect to the GCS and $T_S^G = R_I^G T_S^I + T_I^G$ denotes the corresponding translation vector.
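To make the transfer model concrete, the following minimal NumPy sketch applies Equation (3) by composing the two homogeneous transformations. The function and variable names (to_homogeneous, scanner_point_to_global, H_S_I, H_I_G) are illustrative assumptions, not part of the system's software.

```python
import numpy as np

def to_homogeneous(R, T):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a 4x4 homogeneous matrix."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = T
    return H

def scanner_point_to_global(P_S, H_S_I, H_I_G):
    """Cross-scale transfer of a scanner point into the global frame (Eq. (3)):
    P^G = R_I^G R_S^I P^S + R_I^G T_S^I + T_I^G, i.e. (H_I^G @ H_S^I) applied to P^S."""
    P_h = np.append(np.asarray(P_S, dtype=float), 1.0)   # homogeneous coordinates
    return (H_I_G @ H_S_I @ P_h)[:3]
```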
The measurement of KLFs on LSCs presents several challenges, including uncertain measurement deviations and uncontrollable calibration errors. To enhance the overall measurement accuracy, a series of calibration procedures was conducted prior to deploying the RLSHS for KLF inspection. First, the matrix $H_S^I$ was obtained through extrinsic parameter calibration. Second, global calibration was performed to minimize alignment errors among local datasets and to establish a unified large-scale spatial measurement field. Specifically, the extrinsic parameter calibration is categorized as local calibration (refer to Section 3), while the construction of the large-scale spatial measurement field is considered global calibration (see Section 4).

3. Extrinsic Parameter Calibration

A high-precision coordinate transformation model H S I between the SCS and ICS is constructed through extrinsic parameter calibration. As calibration is required before the RLSHS is applied to large-scale measurement tasks and cannot be repeated once operation begins, the reliability of the entire measurement process critically depends on the calibration accuracy. To suppress transformation errors, we propose a refined extrinsic calibration approach that combines robust estimation of target sphere centers with a distance-constrained optimization of local common point (LCP) coordinates.

3.1. Extrinsic Parameter Calibration Model

During the calibration process, a specially designed multi-source, non-coplanar cooperative target (MNCT), comprising several SMRs and standard ceramic balls (SCBs), was fixed within the measurement workspace. A schematic of the calibration procedure is shown in Figure 2.
Let $P_{si}^S = \left(x_{si}^S, y_{si}^S, z_{si}^S\right)$ denote the 3D coordinates of the observed target point center $Q_{si}$ ($si = s_1, s_2, \ldots, s_n$) on the MNCT in the SCS, and let $P_{gi}^R = \left(x_{gi}^R, y_{gi}^R, z_{gi}^R\right)$ denote the corresponding coordinates of the same target point center $Q_{gi}$ ($gi = g_1, g_2, \ldots, g_n$) in the RCS. Based on the principle of coordinate transformation, the following equation is established:
$$\begin{bmatrix} P_{gi}^R \\ 1 \end{bmatrix} = H_S^R \begin{bmatrix} P_{si}^S \\ 1 \end{bmatrix} = \begin{bmatrix} R_S^R & T_S^R \\ 0 & 1 \end{bmatrix} \begin{bmatrix} P_{si}^S \\ 1 \end{bmatrix} \tag{4}$$
Let $P_i^I = \left(x_i^I, y_i^I, z_i^I\right)$ represent the 3D coordinates of the pose observation points $Q_{gi}^I$ arranged on the surface of the scanner in the ICS, and let $P_{Ii}^R = \left(x_{Ii}^R, y_{Ii}^R, z_{Ii}^R\right)$ be the measured coordinates of the corresponding observation points in the RCS; then the following equation can be established:
$$\begin{bmatrix} P_{Ii}^R \\ 1 \end{bmatrix} = H_I^R \begin{bmatrix} P_i^I \\ 1 \end{bmatrix} = \begin{bmatrix} R_I^R & T_I^R \\ 0 & 1 \end{bmatrix} \begin{bmatrix} P_i^I \\ 1 \end{bmatrix} \tag{5}$$
By combining Equations (4) and (5), the transformation matrix between the SCS and the ICS is obtained:
$$H_S^I = \left(H_I^R\right)^{-1} H_S^R = \begin{bmatrix} \left(R_I^R\right)^{-1} R_S^R & \left(R_I^R\right)^{-1}\left(T_S^R - T_I^R\right) \\ 0 & 1 \end{bmatrix} \tag{6}$$
The coordinate values of the target points obtained by the measurement equipment contain measurement errors, which lead to deviations at each stage of the reference frame transformation. To improve the estimation accuracy of the transformation parameters, it is necessary to reduce the multi-source heterogeneous measurement errors and thereby establish an optimized calibration model that jointly constrains the errors of each independent transformation, that is:
$$\tilde{H}_S^I = \left(\tilde{H}_I^R\right)^{-1} \tilde{H}_S^R \tag{7}$$
where $\tilde{H}_S^I$, $\tilde{H}_I^R$, and $\tilde{H}_S^R$ represent the optimized values of $H_S^I$, $H_I^R$, and $H_S^R$, respectively.

3.2. Robust Target Sphere Center Estimation via Denoised Point Cloud Fitting

As mentioned earlier, the three-dimensional coordinates of the center of the standard ceramic ball (SCB) in the SCS are one of the important inputs for solving the transformation matrix $H_S^R$, and their measurement accuracy directly affects the accuracy of the estimated transformation parameters. During the measurement process, the 3D scanner was used to scan the SCB, and the resulting raw point cloud of the SCB's spherical top surface is shown in Figure 3.
Due to the large volume of point cloud data and the presence of complex background noise, the 3D spatial coordinates of the SCB's center cannot be obtained directly. Therefore, a point cloud preprocessing algorithm is required to process the SCB scan data and accurately fit the center coordinates of the sphere. However, the scanned data contain several types of complex noise, such as high-density redundant points and sparse spatial discrete points, which affect the fitting accuracy of the sphere center. To address this issue, this study investigates an SCB center-fitting method based on point cloud denoising. This method uses the radius filtering method, the random sample consensus (RANSAC) algorithm, and the Euclidean clustering segmentation method to remove noise and accurately fit the coordinates of the sphere center, providing a reliable data basis for the subsequent solution of the reference transformation matrix.
First, to improve the processing efficiency of the point cloud and ensure the accuracy of sphere center fitting, the initial point cloud is downsampled. On this basis, the radius filtering method is applied to eliminate the sparse spatial discrete points, yielding a sphere top-surface point cloud that contains only high-density redundant noise points, as shown in Figure 4a. Further, because the density of these high-density redundant noise points is essentially the same as that of the sphere top-surface point cloud, the RANSAC algorithm is used to fit them to a plane and remove them, yielding the point cloud of the sphere's top surface, as shown in Figure 4b.
Subsequently, the point cloud is segmented into multiple groups of individual target spherical point clouds using the Euclidean clustering segmentation method, as shown in Figure 5. Finally, the RANSAC algorithm is applied to fit the spherical surfaces, yielding the coordinates of the SCB sphere centers.
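The denoising and center-fitting pipeline described above can be sketched as follows. The example assumes the Open3D library for the generic point cloud operations; DBSCAN clustering stands in for Euclidean clustering, an algebraic least-squares sphere fit stands in for the RANSAC sphere fit, and the numeric parameters (voxel size, radii, cluster settings) are illustrative only.

```python
import numpy as np
import open3d as o3d  # assumed point-cloud toolkit; any equivalent library works

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit (simplified stand-in for a RANSAC sphere fit):
    x^2 + y^2 + z^2 = 2a x + 2b y + 2c z + d, with center (a, b, c)."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    f = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    return sol[:3]

def scb_centers_from_scan(pcd, voxel=0.2, radius=1.0, min_neighbors=16):
    """Denoise the raw SCB scan and return one fitted center per target ball."""
    pcd = pcd.voxel_down_sample(voxel)                                           # downsampling
    pcd, _ = pcd.remove_radius_outlier(nb_points=min_neighbors, radius=radius)   # radius filter
    # Remove the high-density planar background with a RANSAC plane fit
    _, plane_idx = pcd.segment_plane(distance_threshold=0.1, ransac_n=3, num_iterations=2000)
    spheres = pcd.select_by_index(plane_idx, invert=True)
    # Separate the individual balls (DBSCAN used here in place of Euclidean clustering)
    labels = np.asarray(spheres.cluster_dbscan(eps=2.0, min_points=50))
    pts = np.asarray(spheres.points)
    return [fit_sphere_center(pts[labels == k]) for k in range(labels.max() + 1)]
```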

3.3. Distance-Constraint-Based Optimization Method for Local Common Point Coordinates

To address the difficulties in tracing, constraining, and compensating for the multi-source heterogeneous measurement errors during extrinsic parameter calibration, a coordinate optimization method based on multiple distance constraints is proposed. This method uses the prior distances between target points on the MNCT as reference values. First, a constraint model is formulated using these known distances. Then, the model is linearized, and the spatial relationships between measurement points are represented as a system of equations. Finally, conditional least-squares adjustment is applied to estimate coordinate corrections, thereby compensating for errors from heterogeneous sources.
According to the principles of conditional adjustment, the number of constraint distance equations must exceed the number of coordinates to be solved; that is, when the number of local common points is no less than 7, redundant equations are available. To reduce the number of actual physical targets, this study introduces a hybrid layout strategy that incorporates both real and virtual points. This method determines the coordinates of a virtual point from the coordinates of three actual points and then constructs the corresponding constraint distances, as shown in Figure 6. In this way, the errors of the coordinate measurement values are optimized.
This section takes the local common points on the MNCT as an example to elaborate the coordinate optimization method proposed in this study. Denote $X^M = \left(x_1^M, y_1^M, z_1^M, \ldots, x_n^M, y_n^M, z_n^M\right)$ as the measured three-dimensional coordinates and $\hat{X}^r = \left(\hat{x}_1^r, \hat{y}_1^r, \hat{z}_1^r, \ldots, \hat{x}_n^r, \hat{y}_n^r, \hat{z}_n^r\right)$ as the corresponding theoretical coordinates of the LCPs on the MNCT, acquired using either the 3D scanner or the laser tracker. Accordingly, the theoretical distance $\hat{L}_{ij}^D$ between any two LCPs can be expressed as a function of their theoretical coordinates:
$$\begin{cases} \left(\hat{x}_1^r - \hat{x}_2^r\right)^2 + \left(\hat{y}_1^r - \hat{y}_2^r\right)^2 + \left(\hat{z}_1^r - \hat{z}_2^r\right)^2 - \left(\hat{L}_{12}^D\right)^2 = 0 \\ \left(\hat{x}_1^r - \hat{x}_3^r\right)^2 + \left(\hat{y}_1^r - \hat{y}_3^r\right)^2 + \left(\hat{z}_1^r - \hat{z}_3^r\right)^2 - \left(\hat{L}_{13}^D\right)^2 = 0 \\ \qquad\vdots \\ \left(\hat{x}_i^r - \hat{x}_j^r\right)^2 + \left(\hat{y}_i^r - \hat{y}_j^r\right)^2 + \left(\hat{z}_i^r - \hat{z}_j^r\right)^2 - \left(\hat{L}_{ij}^D\right)^2 = 0 \\ \qquad\vdots \\ \left(\hat{x}_{n-1}^r - \hat{x}_n^r\right)^2 + \left(\hat{y}_{n-1}^r - \hat{y}_n^r\right)^2 + \left(\hat{z}_{n-1}^r - \hat{z}_n^r\right)^2 - \left(\hat{L}_{(n-1)n}^D\right)^2 = 0 \end{cases} \tag{8}$$
Define the following distance function:
$$f_D = \sqrt{\left(\hat{x}_i^r - \hat{x}_j^r\right)^2 + \left(\hat{y}_i^r - \hat{y}_j^r\right)^2 + \left(\hat{z}_i^r - \hat{z}_j^r\right)^2}, \quad i, j = 1, 2, \ldots, n; \; i \neq j \tag{9}$$
The distance constraint is linearized via a first-order Taylor expansion, disregarding higher-order terms:
$$f_D = L_{ij}^M + \frac{\partial f_D}{\partial \hat{x}_i^r}\delta\hat{x}_i^M + \frac{\partial f_D}{\partial \hat{y}_i^r}\delta\hat{y}_i^M + \frac{\partial f_D}{\partial \hat{z}_i^r}\delta\hat{z}_i^M + \frac{\partial f_D}{\partial \hat{x}_j^r}\delta\hat{x}_j^M + \frac{\partial f_D}{\partial \hat{y}_j^r}\delta\hat{y}_j^M + \frac{\partial f_D}{\partial \hat{z}_j^r}\delta\hat{z}_j^M \tag{10}$$
where $L_{ij}^M = \sqrt{\left(x_i^M - x_j^M\right)^2 + \left(y_i^M - y_j^M\right)^2 + \left(z_i^M - z_j^M\right)^2}$ denotes the measured distance between two LCPs, $\delta\hat{X}^M = \left(\delta\hat{x}_i^M, \delta\hat{y}_i^M, \delta\hat{z}_i^M, \delta\hat{x}_j^M, \delta\hat{y}_j^M, \delta\hat{z}_j^M\right)^T$ represents the correction vector accounting for their on-site measurement errors, and $\partial f_D/\partial\hat{x}_i^r$, $\partial f_D/\partial\hat{y}_i^r$, $\partial f_D/\partial\hat{z}_i^r$, $\partial f_D/\partial\hat{x}_j^r$, $\partial f_D/\partial\hat{y}_j^r$, $\partial f_D/\partial\hat{z}_j^r$ are the corresponding partial derivatives.
Further, we obtain:
$$\frac{\Delta x_{ij}^0}{L_{ij}^M}\delta\hat{x}_i^M + \frac{\Delta y_{ij}^0}{L_{ij}^M}\delta\hat{y}_i^M + \frac{\Delta z_{ij}^0}{L_{ij}^M}\delta\hat{z}_i^M - \frac{\Delta x_{ij}^0}{L_{ij}^M}\delta\hat{x}_j^M - \frac{\Delta y_{ij}^0}{L_{ij}^M}\delta\hat{y}_j^M - \frac{\Delta z_{ij}^0}{L_{ij}^M}\delta\hat{z}_j^M + l_{ij}^D = 0 \tag{11}$$
where $l_{ij}^D = L_{ij}^M - \hat{L}_{ij}^D$ is the closure error, and $\Delta x_{ij}^0 = x_i^M - x_j^M$, $\Delta y_{ij}^0 = y_i^M - y_j^M$, $\Delta z_{ij}^0 = z_i^M - z_j^M$ are the coordinate differences computed from the measured values.
Accordingly, Equation (11) can be expressed in matrix form as:
$$A_D\,\delta\hat{X}^M - \delta L_D = 0 \tag{12}$$
where $A_D$ is the coefficient matrix and $\delta L_D = \left(l_{12}^D, l_{13}^D, \ldots, l_{(n-1)n}^D\right)^T$ is the closure error vector.
In Equation (12), there are $3n$ unknowns and $n(n-1)/2$ equations. By employing Lagrange multipliers to solve the conditional extremum, the objective function is derived as follows:
$$\Phi_D = \delta\hat{X}^{M\,T} P_D\,\delta\hat{X}^M - 2 K_D^T\left(A_D\,\delta\hat{X}^M - \delta L_D\right) \tag{13}$$
where $P_D$ is the weight matrix of the coordinate observations and $K_D = \left(k_1^D, k_2^D, \ldots, k_n^D\right)^T$ is the vector of Lagrange multipliers.
Setting the first derivative with respect to $\delta\hat{X}^M$ to zero yields:
$$\frac{d\Phi_D}{d\,\delta\hat{X}^M} = \delta\hat{X}^{M\,T} P_D - K_D^T A_D = 0 \tag{14}$$
Then, by transposing both sides, the following expression is obtained:
$$P_D\,\delta\hat{X}^M = A_D^T K_D \tag{15}$$
Therefore, we obtain:
$$\begin{cases} K_D = \left(A_D P_D^{-1} A_D^T\right)^{-1} \delta L_D \\ \delta\hat{X}^M = P_D^{-1} A_D^T\left(A_D P_D^{-1} A_D^T\right)^{-1} \delta L_D \end{cases} \tag{16}$$
Finally, the optimized observed values are obtained:
$$\hat{X}^D = X^M + \delta\hat{X}^M \tag{17}$$
The above optimization process for obtaining local common point coordinates is summarized, and the pseudo-code is provided in Algorithm 1.
Algorithm 1: Distance-constraint-based optimization for local common point coordinates
Input: Initial coordinates of the observation points $X^M$
Output: Coordinate corrections $\delta\hat{X}^M$ and optimized coordinates $\hat{X}^D$ of the observation points
1: for i = 2 to n do
2:    for j = 1 to i − 1 do
3:       Build the measurement equation $f_D$ of the distance observation (Equation (9));
4:       Build the residual equation $A_D\,\delta\hat{X}^M - \delta L_D = 0$ (Equation (12));
5:    end for
6: end for
7: Build the objective function $\Phi_D$ (Equation (13));
8: Minimize $\Phi_D$ with respect to $\delta\hat{X}^M$ (Equation (14));
9: Set $d\Phi_D / d\,\delta\hat{X}^M = \delta\hat{X}^{M\,T} P_D - K_D^T A_D = 0$;
10: Compute the Lagrange multiplier vector $K_D$ and the corrections $\delta\hat{X}^M$ (Equation (16));
11: return $\delta\hat{X}^M$;
12: Set $\hat{X}^D = X^M + \delta\hat{X}^M$ (Equation (17)).
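For illustration, a compact NumPy sketch of the distance-constraint-based conditional adjustment (Equations (11)–(17)) is given below. The function name optimize_lcp_coordinates, the data layout (an n × 3 coordinate array plus a dictionary of calibrated pair distances), and the identity default for the weight matrix are assumptions for this example; the sign convention is chosen so that the corrected coordinates reproduce the calibrated reference distances.

```python
import numpy as np

def optimize_lcp_coordinates(X_meas, ref_dists, P=None):
    """Distance-constraint-based correction of local common point coordinates.

    X_meas    : (n, 3) measured LCP coordinates (scanner or tracker frame)
    ref_dists : dict {(i, j): L_ij} of calibrated inter-point distances on the MNCT
    P         : (3n, 3n) weight matrix of the observations (identity if None)
    Returns the corrected (n, 3) coordinates, analogous to Eq. (17).
    """
    n = X_meas.shape[0]
    pairs = list(ref_dists.keys())
    A = np.zeros((len(pairs), 3 * n))        # coefficient matrix A_D (Eq. (12))
    w = np.zeros(len(pairs))                 # closure errors  L_meas - L_ref
    for row, (i, j) in enumerate(pairs):
        d = X_meas[i] - X_meas[j]
        L_meas = np.linalg.norm(d)
        A[row, 3*i:3*i+3] = d / L_meas       #  +Delta/L terms for point i (Eq. (11))
        A[row, 3*j:3*j+3] = -d / L_meas      #  -Delta/L terms for point j
        w[row] = L_meas - ref_dists[(i, j)]
    P = np.eye(3 * n) if P is None else P
    Pinv = np.linalg.inv(P)
    N = A @ Pinv @ A.T                       # A_D P_D^{-1} A_D^T
    K = np.linalg.solve(N, w)                # Lagrange multipliers (Eq. (16) analogue)
    delta = -(Pinv @ A.T @ K)                # corrections that drive the closure errors to zero
    return X_meas + delta.reshape(n, 3)
```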

3.4. Correction Model for Coordinate Transformation Parameters Considering Multi-Source Heterogeneous Measurement Error Compensation

During the estimation of reference frame transformation parameters, challenges arise due to the limited field of view of the 3D scanner and the dense distribution of local common points on the MNCT, which increases the correlation among those points. Moreover, large-angle rotations often lead to instability in the solution of transformation parameters, and the accumulation of multi-source heterogeneous measurement errors further degrades the accuracy and reliability of the computed results. Therefore, based on the above error optimization strategy for coordinate measurement values, this section constructs a transformation parameter correction model that takes into account the compensation for multi-source heterogeneous measurement errors.
As an example, the parameters of the transformation matrix $H_S^R$ between the SCS and the RCS are estimated using a stepwise strategy that accommodates arbitrary rotation angles, as illustrated in Figure 7. Specifically, this strategy begins by applying the coordinate optimization method introduced in Section 3.3 to compensate for the measurement errors of the local common points. Subsequently, a middle reference frame is constructed using three non-collinear points to derive an initial estimate of the transformation parameters. Finally, the parameters are refined using the Bursa–Wolf model combined with a weighted least-squares criterion, resulting in improved accuracy of the final solution.
Let $\hat{P}_1^S$, $\hat{P}_2^S$, and $\hat{P}_3^S$ denote the compensated coordinates of three non-collinear points on the MNCT in the SCS, and let $\hat{P}_1^R$, $\hat{P}_2^R$, and $\hat{P}_3^R$ represent their corresponding compensated coordinates in the RCS. Using point $\hat{P}_1^S$ as the origin, a transitional coordinate frame $\hat{P}_1^S\,e_{MX}^S\,e_{MY}^S\,e_{MZ}^S$ is constructed via the three-point method. Based on this, the transformation matrix from the SCS to the middle transitional coordinate system (MCS) can be derived as follows:
$$H_M^S = \begin{bmatrix} R_M^S & T_M^S \\ 0 & 1 \end{bmatrix}, \quad R_M^S = \begin{bmatrix} e_{MX}^S & e_{MY}^S & e_{MZ}^S \end{bmatrix}, \quad T_M^S = \hat{P}_1^S \tag{18}$$
where $e_{MX}^S$, $e_{MY}^S$, and $e_{MZ}^S$ are unit direction vectors.
Similarly, by establishing a transitional coordinate system centered at point $\hat{P}_1^R$, the transformation matrix of the MCS relative to the RCS can be obtained as:
$$H_M^R = \begin{bmatrix} R_M^R & T_M^R \\ 0 & 1 \end{bmatrix}, \quad R_M^R = \begin{bmatrix} e_{MX}^R & e_{MY}^R & e_{MZ}^R \end{bmatrix}, \quad T_M^R = \hat{P}_1^R \tag{19}$$
By combining Equations (18) and (19), the initial transformation matrix between the SCS and the RCS is obtained as follows:
$$\hat{H}_S^R = \begin{bmatrix} \hat{R}_S^R & \hat{T}_S^R \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_M^R & T_M^R \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_M^S & T_M^S \\ 0 & 1 \end{bmatrix}^{-1} \tag{20}$$
where $\hat{t}_{Sx}^R$, $\hat{t}_{Sy}^R$, and $\hat{t}_{Sz}^R$ represent the corresponding initial translation parameters, while $\hat{w}_{Sx}^R$, $\hat{w}_{Sy}^R$, and $\hat{w}_{Sz}^R$ represent the initial rotation angle parameters:
$$\begin{cases} \hat{w}_{Sx}^R = \arctan\left(\hat{R}_S^R(3,2) / \hat{R}_S^R(3,3)\right) \\ \hat{w}_{Sy}^R = -\arcsin\left(\hat{R}_S^R(3,1)\right) \\ \hat{w}_{Sz}^R = \arctan\left(\hat{R}_S^R(2,1) / \hat{R}_S^R(1,1)\right) \end{cases} \tag{21}$$
where $\hat{R}_S^R(i, j)$ represents the element in the $i$-th row and $j$-th column of the rotation matrix $\hat{R}_S^R$.
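The three-point construction of the transitional frame and the composition in Equation (20) can be sketched as follows; the helper names and the axis convention (X along the first-to-second point, Z normal to the plane of the three points) are illustrative assumptions.

```python
import numpy as np

def frame_from_three_points(p1, p2, p3):
    """Build a right-handed transitional frame from three non-collinear points
    (three-point method): origin at p1, X axis along p1->p2, Z axis normal to the plane."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    ez = np.cross(p2 - p1, p3 - p1)
    ez = ez / np.linalg.norm(ez)
    ey = np.cross(ez, ex)
    H = np.eye(4)
    H[:3, :3] = np.column_stack([ex, ey, ez])   # R_M = [e_X  e_Y  e_Z]
    H[:3, 3] = p1                               # T_M = P_1
    return H

def initial_H_S_R(pts_S, pts_R):
    """Initial estimate of H_S^R (Eq. (20)): H_M^R composed with the inverse of H_M^S.
    pts_S, pts_R : lists of three corresponding points in the SCS and the RCS."""
    H_M_S = frame_from_three_points(*pts_S)     # MCS expressed in the SCS
    H_M_R = frame_from_three_points(*pts_R)     # MCS expressed in the RCS
    return H_M_R @ np.linalg.inv(H_M_S)
```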
Let $P_i^S$ be the measured coordinates of a common point in the SCS, and let $\hat{P}_{Si}^R = \left(\hat{x}_{Si}^R, \hat{y}_{Si}^R, \hat{z}_{Si}^R\right)$ be the coordinates after the initial transformation. The relationship between them is given by:
$$\hat{P}_{Si}^R = \hat{R}_S^R P_i^S + \hat{T}_S^R \tag{22}$$
At this stage, the residual transformation angle between $P_i^R$ and $\hat{P}_{Si}^R$ satisfies the small-angle condition. According to the Bursa–Wolf model, the transformation relationship between them is established as follows:
$$P_i^R = \hat{P}_{Si}^R + \Delta R_S^R \hat{P}_{Si}^R + \Delta T_S^R \tag{23}$$
where the specific forms of $\Delta R_S^R$ and $\Delta T_S^R$ are as follows:
$$\Delta R_S^R = \begin{bmatrix} \Delta u_S^R & \Delta w_{Sz}^R & -\Delta w_{Sy}^R \\ -\Delta w_{Sz}^R & \Delta u_S^R & \Delta w_{Sx}^R \\ \Delta w_{Sy}^R & -\Delta w_{Sx}^R & \Delta u_S^R \end{bmatrix}, \quad \Delta T_S^R = \begin{bmatrix} \Delta t_{Sx}^R & \Delta t_{Sy}^R & \Delta t_{Sz}^R \end{bmatrix}^T \tag{24}$$
where $\Delta u_S^R$ is the scale factor; $\Delta w_{Sx}^R$, $\Delta w_{Sy}^R$, and $\Delta w_{Sz}^R$ are the rotational parameter errors relative to the initial transformation, while $\Delta t_{Sx}^R$, $\Delta t_{Sy}^R$, and $\Delta t_{Sz}^R$ are the translational parameter errors.
Due to the existence of multi-source heterogeneous measurement errors, the corrected form of Equation (23) is obtained as:
$$P_i^R + \Delta P_i^R = \left(\hat{P}_{Si}^R + \Delta\hat{P}_{Si}^R\right) + \Delta R_S^R\left(\hat{P}_{Si}^R + \Delta\hat{P}_{Si}^R\right) + \Delta T_S^R \tag{25}$$
where $\Delta P_i^R = \left(\Delta x_i^R, \Delta y_i^R, \Delta z_i^R\right)$ represents the actual measurement error of the point obtained using the laser tracker, while $\Delta\hat{P}_{Si}^R = \left(\Delta\hat{x}_{Si}^R, \Delta\hat{y}_{Si}^R, \Delta\hat{z}_{Si}^R\right)$ refers to the initially transformed measurement error of the point acquired by the 3D scanner.
By omitting higher-order terms, the equation is further simplified as follows:
$$P_i^R - \hat{P}_{Si}^R + \Delta P_i^R = \Delta R_S^R \hat{P}_{Si}^R + \Delta T_S^R + \Delta\hat{P}_{Si}^R \tag{26}$$
For each local common point, Equation (26) can be extended accordingly:
$$\underbrace{\begin{bmatrix} x_1^R - \hat{x}_{S1}^R \\ y_1^R - \hat{y}_{S1}^R \\ z_1^R - \hat{z}_{S1}^R \\ \vdots \\ z_n^R - \hat{z}_{Sn}^R \end{bmatrix}}_{l_S^R} + \underbrace{\begin{bmatrix} \Delta x_1^R \\ \Delta y_1^R \\ \Delta z_1^R \\ \vdots \\ \Delta z_n^R \end{bmatrix}}_{V_\Pi} = \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 & -\hat{z}_{S1}^R & \hat{y}_{S1}^R & \hat{x}_{S1}^R \\ 0 & 1 & 0 & \hat{z}_{S1}^R & 0 & -\hat{x}_{S1}^R & \hat{y}_{S1}^R \\ 0 & 0 & 1 & -\hat{y}_{S1}^R & \hat{x}_{S1}^R & 0 & \hat{z}_{S1}^R \\ & & & \vdots & & & \\ 0 & 0 & 1 & -\hat{y}_{Sn}^R & \hat{x}_{Sn}^R & 0 & \hat{z}_{Sn}^R \end{bmatrix}}_{A_S^R} \underbrace{\begin{bmatrix} \Delta t_{Sx}^R \\ \Delta t_{Sy}^R \\ \Delta t_{Sz}^R \\ \Delta w_{Sx}^R \\ \Delta w_{Sy}^R \\ \Delta w_{Sz}^R \\ \Delta u_S^R \end{bmatrix}}_{\Delta\xi_S^R} + \underbrace{\begin{bmatrix} \Delta\hat{x}_{S1}^R \\ \Delta\hat{y}_{S1}^R \\ \Delta\hat{z}_{S1}^R \\ \vdots \\ \Delta\hat{z}_{Sn}^R \end{bmatrix}}_{V_{\mathrm{I}}} \tag{27}$$
where $l_S^R$ denotes the difference vector between the local common point coordinates and the initially transformed measured coordinates in the RCS; $A_S^R$ is the coefficient matrix; $\Delta\xi_S^R$ represents the vector of transformation parameter errors to be solved; $V_\Pi$ is the measurement error vector obtained from the laser tracker; and $V_{\mathrm{I}}$ denotes the vector obtained after the initial transformation of the scanner measurement errors.
Let $V = V_\Pi - V_{\mathrm{I}}$; then the error equation can be expressed as follows:
$$V = A_S^R \Delta\xi_S^R - l_S^R \tag{28}$$
The above equation is a classic indirect adjustment model. Since the measurement errors of the laser tracker and the scanner are independent of each other, the weighted least-squares criterion is applied:
$$V_\Pi^T D_\Pi^{-1} V_\Pi + V_{\mathrm{I}}^T D_{\mathrm{I}}^{-1} V_{\mathrm{I}} = \min \tag{29}$$
where $D_\Pi$ represents the variance–covariance matrix of the measurement errors of the local common point coordinates in the RCS, which can be calculated using the method described in reference [25]; $D_{\mathrm{I}}$ is the variance–covariance matrix of the measurement errors of the local common point coordinates after the initial coordinate transformation from the SCS:
$$D_{\mathrm{I}} = \hat{R}_S^R D_S \left(\hat{R}_S^R\right)^T \tag{30}$$
where $D_S$ is the variance matrix of the scanner measurement error:
$$D_S = \begin{bmatrix} \sigma_{sx}^2 & 0 & 0 \\ 0 & \sigma_{sy}^2 & 0 \\ 0 & 0 & \sigma_{sz}^2 \end{bmatrix} \tag{31}$$
where $\sigma_{sx}^2$, $\sigma_{sy}^2$, and $\sigma_{sz}^2$ are the variances of the coordinate measurement errors of the scanner in the X, Y, and Z directions, respectively.
The objective function is formulated using the Lagrange extremum method as follows:
$$\Phi = V_\Pi^T D_\Pi^{-1} V_\Pi + V_{\mathrm{I}}^T D_{\mathrm{I}}^{-1} V_{\mathrm{I}} - 2 K_S^{R\,T}\left(l_S^R + V_\Pi - V_{\mathrm{I}} - A_S^R \Delta\xi_S^R\right) \tag{32}$$
where $K_S^R$ is the vector of Lagrange multipliers. By taking the partial derivatives of $\Phi$ with respect to $V_\Pi$, $V_{\mathrm{I}}$, and $\Delta\xi_S^R$ and setting them to zero, the following result is obtained:
$$\begin{cases} \dfrac{\partial\Phi}{\partial V_\Pi} = 2 V_\Pi^T D_\Pi^{-1} - 2 K_S^{R\,T} = 0 \\ \dfrac{\partial\Phi}{\partial V_{\mathrm{I}}} = 2 V_{\mathrm{I}}^T D_{\mathrm{I}}^{-1} + 2 K_S^{R\,T} = 0 \\ \dfrac{\partial\Phi}{\partial \Delta\xi_S^R} = 2 K_S^{R\,T} A_S^R = 0 \end{cases} \tag{33}$$
Then, we obtain:
$$K_S^R = \left(D_{\mathrm{I}} + D_\Pi\right)^{-1}\left(A_S^R \Delta\xi_S^R - l_S^R\right) \tag{34}$$
Thus, the least-squares solution of $\Delta\xi_S^R$ can be expressed as:
$$\Delta\xi_S^R = \left[A_S^{R\,T}\left(D_{\mathrm{I}} + D_\Pi\right)^{-1} A_S^R\right]^{-1} A_S^{R\,T}\left(D_{\mathrm{I}} + D_\Pi\right)^{-1} l_S^R \tag{35}$$
Finally, by incorporating the initial reference transformation parameters, the optimized transformation matrix $\tilde{H}_S^R$ is obtained. Similarly, the optimized transformation matrix $\tilde{H}_I^R$ is derived. Based on Equation (7), the optimized calibration matrix $\tilde{H}_S^I$ is then calculated.
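As an illustrative sketch of the weighted least-squares correction step (Equations (27)–(35)), the following code assumes identical per-point covariances for the tracker and for the transformed scanner points, and uses the parameter ordering of Equation (27); the function name refine_transformation and the input layout are assumptions for this example.

```python
import numpy as np

def refine_transformation(pts_S_hat, pts_R, D_tracker, D_scanner_init):
    """Weighted least-squares correction of the initial transformation (Eq. (35)).

    pts_S_hat      : (n, 3) scanner-side common points after the initial transform
    pts_R          : (n, 3) corresponding tracker measurements in the RCS
    D_tracker      : (3, 3) covariance of a tracker point (D_Pi, assumed identical per point)
    D_scanner_init : (3, 3) covariance of a transformed scanner point (D_I)
    Returns the 7-vector [dtx, dty, dtz, dwx, dwy, dwz, du].
    """
    n = pts_S_hat.shape[0]
    A = np.zeros((3 * n, 7))
    l = (pts_R - pts_S_hat).reshape(-1)              # misclosure vector l_S^R
    for i, (x, y, z) in enumerate(pts_S_hat):
        A[3*i:3*i+3] = [[1, 0, 0,  0, -z,  y, x],    # per-point block of A_S^R (Eq. (27))
                        [0, 1, 0,  z,  0, -x, y],
                        [0, 0, 1, -y,  x,  0, z]]
    # Combined per-point covariance D_I + D_Pi, expanded block-diagonally
    W = np.kron(np.eye(n), np.linalg.inv(D_scanner_init + D_tracker))
    N = A.T @ W @ A
    return np.linalg.solve(N, A.T @ W @ l)           # Eq. (35)
```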

4. Construction of Large-Scale Spatial Measurement Field

To effectively control the measurement errors of GCPs during the construction of large-scale spatial measurement fields for global calibration under constrained layouts, this section proposes an optimization method for coordinate measurement values that incorporates hierarchical measurement error control, further improving the position measurement accuracy of GCPs across the entire measurement range. The principle of the proposed method is illustrated schematically in Figure 8.
First, to eliminate gross errors in the measurement of GCPs and to monitor the working performance of the laser tracker on-site, thereby ensuring measurement consistency, a distance-based model was developed to reject out-of-tolerance measurement points. A multi-dimensional cooperative calibrator (MDCC) based on the four-point non-coplanar principle is introduced, consisting of carbon fiber plates, carbon fiber rods, SMNs, and SMRs, as shown in Figure 9. By leveraging the characteristic that the distance between any two points is independent of the coordinate system, the geometric relationship between the target points on the MDCC is calibrated using a high-precision CMM, thereby constructing multiple posture distance constraints. The laser tracker acquires measurements of the target points on the MDCC, which serve as GCPs. By calculating the difference between the measured and calibrated distances between two GCPs and comparing this difference with the allowable error limit, it can determine whether the measurement value of the GCP is out of tolerance.
Let $\left(x_i^k, y_i^k, z_i^k\right)$ and $\left(x_j^k, y_j^k, z_j^k\right)$ represent the measured coordinates of two points used to construct the spatial constraint distance, as measured by the laser tracker at station $k$ ($k = 1, 2, \ldots, m$). Then, the following equation can be established:
$$\left(L_{ij}^k\right)^2 = \left(L_{ij} + \Delta L_{ij}^k\right)^2 = \left(x_i^k - x_j^k\right)^2 + \left(y_i^k - y_j^k\right)^2 + \left(z_i^k - z_j^k\right)^2, \quad i \neq j \tag{36}$$
where $L_{ij}^k$ denotes the measured spatial constraint distance, $L_{ij}$ represents the nominal distance between the two points, and $\Delta L_{ij}^k$ is the difference between the measured and nominal distances.
The allowable tolerance for each constraint distance is defined according to the measured distance and the required measurement accuracy:
$$\left|\Delta L_{ij}^k\right| \le \Delta_{\lim} \tag{37}$$
where $\Delta_{\lim}$ is the allowable tolerance.
If, in the on-site measured data, the measurement error of a certain distance on the MDCC exceeds the allowable error limit, it can be concluded that one of the two associated points carries a significant error, indicating the presence of an unqualified point. Conversely, if the distances from a given target point to all other target points fall within the specified tolerance range, the point can be considered a qualified GCP. After the qualified GCPs are obtained, further control of their measurement errors is required. In this study, the large-scale spatial measurement field, established through multi-station laser tracker measurements combined with multiple MDCCs, essentially forms an edge-type measurement network. During the adjustment stage, the precision with which the initial GCP coordinates are assigned plays a decisive role in the accuracy of the final coordinate transformation parameters.
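A minimal sketch of this out-of-tolerance screening (Equations (36) and (37)) is given below; the function name screen_gcps and the dictionary-based data layout are assumptions for this example.

```python
import numpy as np

def screen_gcps(measured, calibrated_dists, tol):
    """Flag GCPs whose MDCC distance residuals exceed the allowable tolerance (Eq. (37)).

    measured         : dict {point_id: np.array([x, y, z])} of tracker measurements at one station
    calibrated_dists : dict {(id_i, id_j): L_ij} of CMM-calibrated distances on the MDCC
    tol              : allowable tolerance Delta_lim (same units as the coordinates)
    Returns the set of point ids whose every associated distance is within tolerance.
    """
    suspect = {pid: 0 for pid in measured}
    checked = {pid: 0 for pid in measured}
    for (i, j), L_ref in calibrated_dists.items():
        if i not in measured or j not in measured:
            continue
        L_meas = np.linalg.norm(measured[i] - measured[j])
        checked[i] += 1; checked[j] += 1
        if abs(L_meas - L_ref) > tol:
            suspect[i] += 1; suspect[j] += 1
    # A point is qualified only if all of its checked distances are within tolerance
    return {pid for pid in measured if checked[pid] > 0 and suspect[pid] == 0}
```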
To enhance the assignment accuracy of the initial coordinates of both the laser tracker stations and the GCPs, this section first employs the 7-parameter Procrustes method to establish the preliminary orientation of multiple stations. Subsequently, a measurement model is developed for each station to obtain the corresponding covariance and Jacobian matrices. Following the matrix-weighted linear minimum variance fusion criterion, the weight matrix is derived to effectively fuse the multi-station measurement data and estimate the 3D coordinates of the GCPs. Finally, based on the common point transformation method, the initial 3D coordinates of the laser tracker stations are determined. Building on this foundation, the high-precision interferometric distance measurements of the laser tracker, combined with multi-domain and multi-pose length constraints, are utilized to further optimize the 3D coordinates of the GCPs. Through this procedure, the impact of angular measurement errors on spatial point localization is mitigated, thereby enabling refined correction of GCP measurement deviations.
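The preliminary multi-station orientation mentions the 7-parameter Procrustes method (three rotations, three translations, one scale). The following SVD-based sketch, in the style of the standard Umeyama similarity solution, illustrates how such an alignment can be computed from known point correspondences; it is a stand-in for, not the exact form of, the fusion and weighting scheme used in this work.

```python
import numpy as np

def procrustes_align(src, dst, with_scale=True):
    """SVD-based Procrustes registration: find s, R, t minimizing ||s*R*src + t - dst||.
    src, dst : (n, 3) corresponding common points expressed in two frames."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S)
    C = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ C @ Vt
    s = (sig * np.diag(C)).sum() / (S ** 2).sum() if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return s, R, t
```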
Let $\left(X^k, Y^k, Z^k\right)$ denote the coordinates of the $k$-th ($k = 1, 2, \ldots, m$) laser tracker measurement station, and let $\left(x_i, y_i, z_i\right)$ denote the coordinates of the $i$-th ($i = 1, 2, \ldots, n$) GCP. Based on the distance formula, the corresponding error equation is established as follows:
$$\left(l_i^k + v_i^k\right)^2 = \left(x_i - X^k\right)^2 + \left(y_i - Y^k\right)^2 + \left(z_i - Z^k\right)^2 \tag{38}$$
where $l_i^k$ represents the laser-interferometric distance measured by the laser tracker at the $k$-th station to the $i$-th common point, and $v_i^k$ represents the corresponding error.
By linearizing Equation (38), the following expression is obtained:
$$v_i^k = a_i^k\left(\Delta X^k - \Delta x_i\right) + b_i^k\left(\Delta Y^k - \Delta y_i\right) + c_i^k\left(\Delta Z^k - \Delta z_i\right) + l_i^{k0} - l_i^k \tag{39}$$
where $\left(\Delta X^k, \Delta Y^k, \Delta Z^k\right)$ and $\left(\Delta x_i, \Delta y_i, \Delta z_i\right)$ represent the correction values for the measurement station center coordinates and the GCP coordinates, respectively; $l_i^{k0}$ is the oblique distance computed from the initial coordinate values, and $a_i^k$, $b_i^k$, and $c_i^k$ are the corresponding coefficients.
Based on the above analysis, $n$ constraint equations can be formulated for a single laser tracker measurement station. Therefore, for $m$ independent measurement stations, a total of $m \times n$ equations can be established. Equation (39) can thus be rewritten in matrix form as follows:
$$V_G = A_G \Delta X_G - b_G \tag{40}$$
where $V_G = \left(v_{11}, \ldots, v_{mn}\right)^T$ and $b_G = \left(l_{11} - l_{11}^0, \ldots, l_{mn} - l_{mn}^0\right)^T$; the vector $\Delta X_G = \left(\Delta X_1, \ldots, \Delta z_{m+n}\right)^T$ consists of the correction terms for both the measurement station coordinates and the measured coordinates of the common points, while $A_G$ denotes the corresponding coefficient matrix.
In Equation (40), the unknown parameters to be estimated are the 3D correction values of both the GCPs and the laser tracker measurement stations. The total number of unknown parameters is $3m + 3n$, and the number of equations in the error equation system is $m \times n$. Due to the rank deficiency of the system matrix $A_G$, a unique solution can be obtained by introducing centroid-based reference constraints in accordance with the principles of rank-deficient network adjustment. However, this approach may lead to an uneven distribution of point accuracy across the measurement field. To address this issue, based on the previously established error model and by incorporating multi-region, multi-pose length constraints in space, an optimization model for coordinate measurement values based on spatial multi-pose length constraints is established:
$$d_{ij}\,\Delta x_j + e_{ij}\,\Delta y_j + f_{ij}\,\Delta z_j - \left(d_{ij}\,\Delta x_i + e_{ij}\,\Delta y_i + f_{ij}\,\Delta z_i\right) + w_{ij} = 0 \tag{41}$$
where $\left(\Delta x_i, \Delta y_i, \Delta z_i\right)$ and $\left(\Delta x_j, \Delta y_j, \Delta z_j\right)$ represent the coordinate corrections of the constrained points, $d_{ij}$, $e_{ij}$, and $f_{ij}$ denote the associated coefficients, and $w_{ij}$ is the constant term. By combining Equations (40) and (41), the following system can be obtained:
$$\begin{cases} V_G = A_G \Delta X_G - b_G \\ 0 = B_G \Delta X_G + W_x \end{cases} \tag{42}$$
where $B_G$ is a large sparse matrix composed of first-order derivative terms.
Under the constrained optimization, the objective function is given by:
$$\Phi_G = V_G^T P_{\Delta X} V_G + 2 K_G^T\left(B_G \Delta X_G + W_x\right) \tag{43}$$
where $K_G$ is the vector of Lagrange multipliers.
Finally, the correction values for the coordinate optimization are obtained as:
$$\Delta X_G = \left(N_{AA}^{-1} - N_{AA}^{-1} B_G^T N_{BB}^{-1} B_G N_{AA}^{-1}\right) W_b - N_{AA}^{-1} B_G^T N_{BB}^{-1} W_x \tag{44}$$
where $N_{AA} = A_G^T P_{\Delta X} A_G$ is the (invertible) normal matrix, $N_{BB} = B_G N_{AA}^{-1} B_G^T$, and $W_b = A_G^T P_{\Delta X} b_G$.
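For illustration, the constrained adjustment of Equations (42)–(44) can equivalently be solved through the bordered (KKT) normal system, which avoids explicitly inverting $N_{AA}$. The function name and the dense-matrix formulation are simplifying assumptions for this sketch (the actual $B_G$ is a large sparse matrix).

```python
import numpy as np

def constrained_adjustment(A, b, B, Wx, P=None):
    """Length-constrained adjustment (Eqs. (42)-(44)):
    minimize (A dX - b)^T P (A dX - b)  subject to  B dX + Wx = 0."""
    P = np.eye(A.shape[0]) if P is None else P
    N_AA = A.T @ P @ A
    W_b = A.T @ P @ b
    n, c = A.shape[1], B.shape[0]
    # Bordered (KKT) system, equivalent to the closed-form Eq. (44):
    # [ N_AA  B^T ] [ dX ]   [ W_b ]
    # [ B     0   ] [ K  ] = [ -Wx ]
    KKT = np.block([[N_AA, B.T], [B, np.zeros((c, c))]])
    rhs = np.concatenate([W_b, -Wx])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]        # coordinate and station corrections Delta X_G
```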

5. Experiments and Discussion

5.1. System Construction

To validate the effectiveness of the proposed calibration method, a hybrid measurement system integrating global laser tracking and high-precision local scanning was constructed, and corresponding calibration experiments were conducted. As illustrated in Figure 10, the setup consists of a laser tracker, a structured-light 3D scanner, and a mobile robotic platform that integrates an industrial robot with an AGV. Specifically, the main structure of the MNCT is made of dimensionally stable carbon fiber, with eight SMNs on its surface for placing 0.5-inch target balls (SCBs and SMRs), achieving a repeatability of up to 0.001 mm. The observation target of the laser tracker is the 0.5-inch SMR, with a sphericity deviation of approximately 0.002 mm. The observation target of the 3D scanner is the 0.5-inch SCB, with a sphericity deviation of approximately 0.0015 mm. Therefore, the center positions of these two types of observation targets have high spatial co-location accuracy. Furthermore, several 0.5-inch SMNs were arranged on the surface of the 3D scanner to accommodate 0.5-inch SMRs, enabling the laser tracker to accurately locate and track the position and orientation of the 3D scanner.
The binocular structured-light 3D scanner used in this study was the LMI Gocator3 series (LMI Technologies Inc., Burnaby, BC, Canada), featuring a resolution of 5 million pixels, an X-Y spatial resolution of 0.025 mm, a measurement depth range of 87 mm, a field of view (FOV) of 27 mm × 45 mm, and a verified VDE accuracy of 0.025 mm. The laser tracker employed in this study was the Leica AT960-MR (Leica Geosystems AG, Heerbrugg, Switzerland), offering a full 360° horizontal rotation and a ±145° vertical tilt range. It features an angular resolution of 0.07 arcseconds, an angular accuracy of 1.7 arcseconds, and a maximum measurement range of 80 m. The full-range measurement accuracy is specified as ± (0.015 + 6 ppm × L) mm, where L represents the measured distance in meters. Prior to the experimental procedure, the spatial relationships among the centers of the target spheres in MDCCs and MNCT were calibrated in advance using a Zeiss PRISMO coordinate measuring machine (Carl Zeiss Industrielle Messtechnik GmbH, Oberkochen, Germany), ensuring high measurement accuracy and experimental reliability. For the typical distances between target spheres in our experiments, the resulting measurement error is less than 0.0015 mm. A conservative combined uncertainty of ±0.002 mm (k = 2) was determined by integrating Type A analysis from repeated measurements with Type B estimates derived from instrument specifications and environmental stability. The integrated uncertainty provides a reliable reference for evaluating the calibration accuracy of the developed method. All experimental procedures were performed under controlled laboratory conditions, where the ambient temperature was maintained within 22–23 °C and the relative humidity was kept at 55–60%.

5.2. Extrinsic Parameter Calibration Experiments

Firstly, the relative pose of the laser tracker, 3D scanner, and MNCT was varied to obtain several distinct measurement positions for the two instruments, while the MNCT itself was kept fixed. At each measurement position, both instruments repeatedly measured their respective observation target points on the MNCT, thereby generating redundant datasets. After noise reduction and sphere center estimation for the scanned target balls (Section 3.2), the optimization procedures in Section 3.3 and Section 3.4 were applied to calculate the refined and corrected coordinates of the LCPs acquired by the 3D scanner and the laser tracker, as reported in Table 1 and Table 2.
In addition, the angle-constrained coordinate optimization method reported in reference [25] was applied to derive optimal and corrected coordinates for the scanning pose reference point, with results summarized in Table 3.
Based on the above data and in combination with the correction model for the reference transformation parameters, the optimized extrinsic calibration matrix H ~ S I is obtained as follows:
$$\tilde{H}_S^I = \begin{bmatrix} 0.00312 & 0.00243 & 0.99999 & 214.23490 \\ 0.06656 & 0.99777 & 0.00264 & 110.93797 \\ 0.99777 & 0.06656 & 0.00295 & 7.73651 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
To validate the calibration accuracy, the dispersion of the three-dimensional coordinates of the local common points was calculated using the calibration matrices obtained before and after optimization; that is, the extrinsic parameter calibration accuracy is quantified by the root mean square error of the distances between corresponding points before and after optimization. In this study, the traditional indirect calibration method and the proposed calibration method were used to calculate the extrinsic parameter calibration errors, and the mean calibration errors at multiple positions were obtained, as shown in Figure 11. The conventional calibration approach resulted in mean extrinsic parameter calibration errors of 0.026, 0.023, and 0.028 mm at Positions 1–3, whereas the distance-constrained method reduced these errors to 0.014, 0.013, and 0.016 mm, respectively.
The proposed distance-constrained method maintained a mean calibration error below 0.016 mm across all positions, outperforming the conventional indirect calibration, which showed a mean error of 0.028 mm, thereby confirming its effectiveness.

5.3. Experiments on the Construction of Large-Scale Spatial Measurement Field

Following the method outlined in Section 4, a high-precision spatial measurement field was established over a 14 × 6 × 3 m volume, serving as a global reference framework for precise measurements. The experimental site configuration is illustrated in Figure 12. For accurate global calibration, several MDCCs were developed specifically. As shown in Figure 13, the MDCC is primarily constructed from carbon fiber and has an approximate length of 1000 mm. Eight 1.5-inch SMNs for 1.5-inch SMRs are rigidly attached along its surface. Before conducting the experiments, the spatial arrangements of all SMR centers were accurately calibrated with a high-precision CMM to guarantee reliable and precise experimental data.
Firstly, five MDCCs were deployed across the measurement site, with the maximum elevation difference between common points reaching approximately 3 m. Four measurement stations were set up at different locations. At each station, the laser tracker measured the GCPs on the MDCCs. The first station's coordinate system served as the GCS, while the remaining three were treated as local coordinate systems (LCSs) for subsequent measurements.
Subsequently, based on the measured distance between the laser tracker and the common points, permissible error thresholds were defined to identify and exclude outliers on site. Once the gross errors were removed and qualified data were retained, the measured coordinates of the common points from the three LCSs were transformed into the GCS. Then, by adopting the method proposed in this study, the coordinate measurement values of the GCPs with redundant measurements are optimized. For each MDCC, two representative GCPs were selected. Applying the method presented in Section 4, optimized coordinates along with their associated correction values for the selected GCPs within the GCS and the LCS1 were determined, as summarized in Table 4 and Table 5.
Currently, evaluating the accuracy of large-scale spatial measurement networks often relies on computing the station transfer errors of GCPs [4]. For this purpose, three methods were applied: the best-fit algorithm in Spatial Analyzer 2023.1 [29,30], the single-distance constraint method, and the proposed method. The maximum registration error (MPE) and mean registration error (ME) of the selected GCPs obtained from these methods are listed in Table 6.
In the GCS, the average station transfer error of the GCPs was calculated to be 0.078 mm using the best-fit method. In comparison, the single-distance constraint method reduced the error to 0.036 mm. Furthermore, the proposed method achieved an even lower average error of 0.033 mm, demonstrating improved accuracy in coordinate transformation. Furthermore, the average station transfer errors of all GCPs under each local measurement station were evaluated, as illustrated in Figure 14. Using the single-distance constraint method, the error magnitudes at local stations 1 through 3 decreased from 0.083, 0.061, and 0.059 mm to 0.032, 0.026, and 0.025 mm, respectively. Applying the method proposed in this study further reduced these errors to 0.023, 0.022, and 0.020 mm. Collectively, these results demonstrate the high effectiveness of the proposed calibration method.

5.4. Accuracy Verification and Analysis of the RLSHS

To ensure reliable measurement performance, a standardized and widely accepted verification method is essential for evaluating the measurement accuracy of the RLSHS. There are generally two approaches to verifying measurement system accuracy. The first uses a higher-precision metrology device to evaluate the test results; however, such devices are often costly, require experienced personnel for operation and maintenance, and demand significant time and resources for setup and calibration. Moreover, discrepancies in accuracy, measurement range, and system characteristics between different instruments can introduce uncertainties and compromise the consistency of the evaluation. The second, more cost-effective approach uses standard artifacts: the measurement system measures a calibrated reference object following a predefined procedure, and the results are compared with the known calibrated values to assess measurement accuracy. Owing to its practicality and lower cost, this approach is more widely used for evaluating the accuracy of measurement systems.
This study adopts the “sphere spacing error” (SSE) evaluation approach by measuring a calibrated metric tool, and the corresponding tests were performed in accordance with the international accuracy verification standard VDI/VDE 2634 Part 3 [31]. Sphere-to-sphere spacing error is commonly used as a primary indicator to assess measurement system accuracy. It is quantified as the deviation between the measured inter-sphere distance and the reference distance obtained from calibration. Evaluating this metric provides a comprehensive assessment of system performance and offers a reliable reference for ensuring overall measurement quality. To address considerations such as manufacturing complexity and cost, a customized metric tool was designed in this study. As illustrated in Figure 15, the metric tool consists of multiple spheres, corresponding holders, and a carbon fiber plate. The inter-sphere distances were precisely calibrated using a high-accuracy coordinate measuring machine.
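To make the SSE metric concrete, the sketch below fits each scanned sphere with a linear least-squares sphere fit and compares the resulting center-to-center distance against the calibrated reference distance. It assumes the two point clouds are already segmented per sphere and expressed in a common coordinate frame, and is only an illustrative outline of the metric, not the system's actual processing pipeline.

```python
import numpy as np

def fit_sphere_center(points):
    """Linear least-squares sphere fit; returns the center of one scanned sphere.
    Model: x^2 + y^2 + z^2 = 2*cx*x + 2*cy*y + 2*cz*z + (r^2 - |c|^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                           # (cx, cy, cz)

def sphere_spacing_error(cloud_a, cloud_b, reference_distance_mm):
    """SSE: deviation of the measured inter-sphere distance from the
    calibrated reference distance of the metric tool."""
    ca, cb = fit_sphere_center(cloud_a), fit_sphere_center(cloud_b)
    return np.linalg.norm(ca - cb) - reference_distance_mm
```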
The accuracy validation procedure for the laser scanning hybrid measurement method is outlined as follows. First, the calibrated reference artifact, mounted on an adjustable tripod, was sequentially positioned at multiple heights and orientations throughout the measurement volume while the laser tracker remained fixed. The mobile robot system transported the 3D scanner to various spatial locations, where the scanner captured the metric tool to assess measurement accuracy. The metric tool placements were configured to fully cover the measurement space. Since the sphere spacing exceeded the 3D scanner's field of view, a multi-view strategy was employed to reconstruct the sphere centers, while the laser tracker simultaneously captured the reference points to determine the scanner's pose. All reconstructed sphere centers were then transformed from the SCS to the RCS and subsequently into the global coordinate frame using the established large-scale spatial measurement field. Finally, the inter-sphere distance errors were computed for each metric tool position to evaluate the system's measurement precision. The accuracy verification setup is illustrated in Figure 16.
Four spheres were selected from the metric tool as representatives, with calibrated inter-sphere distances of D1 = 851.476 mm and D2 = 1007.418 mm. Table 7 presents the measured inter-sphere distances along with their corresponding errors. For distance D1, the maximum error (MPE) and mean error (ME) were 0.113 mm and 0.108 mm, respectively, while for distance D2, these values were 0.117 mm and 0.112 mm. All measured errors are below the 0.2 mm threshold, indicating that the RLSHS developed using the proposed method meets the required accuracy standards.

5.5. Experimental Study and Analysis of On-Site Measurement for Key Geometrical Features of Large-Sized Components

To validate the effectiveness and applicability of the proposed method, the RLSHS was deployed on-site for measurement of key geometrical features of large-sized components. On-site measurements were conducted on four key geometric features located on two typical local measurement areas (LMAs), as illustrated in Figure 17.
At the initial stage of measurement, the mobile platform first transported the industrial robot to a position near the target features. The scanner pose was then established through laser tracker measurements of the SMRs fixed on the scanner, after which the robot positioned the 3D scanner at a series of predefined locations. At each location, the scanner captured the target area to generate the initial point cloud of the key geometric features, as illustrated in Figure 18a,c.
Point cloud preprocessing was then performed to eliminate noise and extract the target region, as illustrated in Figure 18b,d, and the hole center coordinates in the SCS were derived using the reconstruction method outlined in Section 3.2. Meanwhile, the laser tracker captured the reference observation points mounted on the 3D scanner to determine its current pose, and the computed hole centers were transformed from the SCS to the RCS accordingly. After completing the measurement of the KLFs within the current workspace, the mobile robotic system advanced to the next station, where the same procedure was repeated until all target features had been inspected.
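The coordinate chaining used in this step can be summarized as two homogeneous transforms applied in sequence. The sketch below is a minimal illustration under assumed names: T_rcs_scs (scanner to tracker, from the on-line pose measurement) and T_gcs_rcs (tracker station to GCS, from the spatial measurement field); it is not the authors' implementation.

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

def hole_center_to_gcs(center_scs, T_rcs_scs, T_gcs_rcs):
    """Chain the scanner->tracker extrinsics with the tracker->GCS transform
    to express a reconstructed hole center in the global frame."""
    center_rcs = transform_point(T_rcs_scs, center_scs)
    return transform_point(T_gcs_rcs, center_rcs)
```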
Finally, the three-dimensional global coordinates of the hole centers of all KLFs were obtained, as summarized in Table 8. The proposed RLSHS completed the inspection of the KLFs in approximately 8 min, whereas manual measurement required about 30 min for the same task. Quantitative comparative experiments indicate that the proposed measurement system and methodology significantly outperform traditional manual measurement in time efficiency and measurement stability. Moreover, our experimental observations indicate that the system remains robust under the moderate thermal fluctuations (approximately ±5–10 °C) typically encountered in industrial workshops. Overall, the developed method demonstrates strong applicability and provides reliable measurement data to support high-quality manufacturing of large-scale components.

6. Conclusions

In this study, a robot-assisted laser scanning hybrid measurement method was developed to achieve accurate and automated 3D shape measurement of large-sized components with numerous small key local features. The proposed method integrates a laser tracker, a 3D scanner, and a mobile robotic system, thereby enabling flexible large-scale measurements. An accuracy-enhanced calibration strategy was introduced, including an accurate extrinsic parameter calibration method based on robust target sphere center estimation and distance-constrained optimization of local common points, as well as an improved global calibration method incorporating coordinate measurement value optimization and hierarchical error control. The accuracy validation experiments demonstrated that, over a measurement span of 14 m, the maximum error (MPE) reached 0.117 mm, while the mean error (ME) was 0.112 mm, thereby verifying the calibration method’s high precision. Moreover, large-scale scanning experiments conducted on representative local measurement areas of a complex large-sized component further verified the method’s applicability and its ability to provide reliable measurement data for high-quality manufacturing of large-sized components.
Future work will focus on developing advanced error modeling and compensation strategies to address environmental influences and robot-induced disturbances, such as thermal drift, vibration, and long-term stability issues, while also conducting a detailed analysis of the spatial distribution of measurement errors, including potential accuracy attenuation in edge regions. In addition, intelligent measurement planning methods, such as adaptive scanning and path optimization, will be further investigated to improve measurement efficiency and ensure comprehensive coverage of complex geometries.

Author Contributions

Conceptualization, Z.Z.; methodology, Z.Z. and X.S.; software, X.Z.; validation, Z.Z., F.X. and X.Z.; formal analysis, Z.Z. and X.S.; investigation, Z.Z. and J.Z.; resources, Z.Z.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z.; visualization, X.Z. and F.X.; supervision, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shandong Province (No. ZR2024QE152), the Science and Technology SMEs Innovation Ability Improvement Project of Shandong Province (No. 2024TSGC0829 ZKT), the Scientific Research of Linyi University (Grant No. Z6124007), and the Youth Entrepreneurship Technology Support Program for Higher Education Institutions of Shandong Province (Grant No. 2023KJ215).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

During the preparation of this study, the authors used ChatGPT4.0 solely for language editing and formatting in some sentences, without involvement in core ideas, data analysis, conclusions, or scientific writing. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Du, P.; Duan, Z.; Zhang, J.; Zhao, W.; Lai, E. The design and implementation of a dynamic measurement system for a large gear rotation angle based on an extended visual field. Sensors 2025, 25, 3576.
2. Chen, Z.; Zhang, F.; Qu, X.; Liang, B. Fast measurement and reconstruction of large workpieces with freeform surfaces by combining local scanning and global position data. Sensors 2015, 15, 14328–14344.
3. Chen, F.; Brown, G.M.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22.
4. Zhou, Z.; Liu, W.; Wang, Y.; Yu, B.; Cheng, X.; Yue, Y.; Zhang, J. A combined calibration method of a mobile robotic measurement system for large-sized components. Measurement 2022, 189, 110543.
5. Lin, J.; Xin, R.; Shi, S.; Huang, Z.; Zhu, J. An accurate 6-DOF dynamic measurement system with laser tracker for large-scale metrology. Measurement 2022, 204, 112052.
6. Zhou, H.; Mao, Q.; Wu, A. Beam vector model: A more applicable calibration method for terrestrial laser scanner. Measurement 2025, 204, 117814.
7. Chen, Z.; Du, F. Measuring principle and uncertainty analysis of a large volume measurement network based on the combination of iGPS and portable scanner. Measurement 2017, 104, 263–277.
8. Laser-Radar. Available online: https://www.faro.com (accessed on 28 July 2025).
9. Jiang, D.; Guo, J.; Zhang, J. A laser target digital photogrammetry system for on-orbit surface error measurement of large spaceborne antennas. Measurement 2026, 257, 118526.
10. Zhao, F.; Tamaki, T.; Kurita, T. Marker-based non-overlapping camera calibration methods with additional support camera views. Image Vis. Comput. 2018, 70, 46–54.
11. Barone, S.; Paoli, A.; Razionale, A.V. 3D reconstruction and restoration monitoring of sculptural artworks by a multi-sensor framework. Sensors 2012, 12, 16785–16801.
12. Barone, S.; Paoli, A.; Razionale, A.V. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner. Opt. Lasers Eng. 2012, 50, 380–390.
13. Lu, Y.; Liu, W.; Zhang, Y. An error analysis and optimization method for combined measurement with binocular vision. Chin. J. Aeronaut. 2020, 34, 282–292.
14. Du, H.; Chen, X.; Xi, J. Development and verification of a novel robot-integrated fringe projection 3D scanning system for large-scale metrology. Sensors 2017, 17, 2886.
15. Qu, X.; Meng, B. Onsite global calibration and optimization of combining measurement for large free-form surfaces part. Comput. Integr. Manuf. Syst. 2015, 21, 2384–2392.
16. Jiang, T.; Cheng, X.; Cui, H. Combined shape measurement based on locating and tracking of an optical scanner. J. Instrum. 2019, 14, P01006.
17. Jiang, T.; Cheng, X.; Cui, H. Calibration and uncertainty analysis of a combined tracking-based vision measurement system using Monte Carlo simulation. Meas. Sci. Technol. 2021, 32, 095007.
18. Jiang, T.; Cheng, X.; Cui, H. Accurate calibration for large-scale tracking-based visual measurement system. IEEE Trans. Instrum. Meas. 2021, 70, 5003011.
19. Garcia-D'Urso, N.; Sanchez-Sos, B.; Azorin-Lopez, J. Marker-based extrinsic calibration method for accurate multi-camera 3D reconstruction. arXiv 2025, arXiv:2505.02539.
20. Pan, F.; Wang, W. CalibOnline: Online detection and calibration of extrinsic parameters between LiDAR and monocular camera. Sens. J. 2025, 25, 20323–20332.
21. Ma, Y.; Zhou, W.; Zhang, Z. Unified least squares adjustment of laser tracker network measurement. Measurement 2025, 253, 117566.
22. Lin, J.; Zhu, J.; Guo, Y. Establishment of precise three-dimensional coordinate control network in field large-space measurement. J. Mech. Eng. 2012, 48, 6–11. (In Chinese)
23. Predmore, C. Bundle adjustment of multi-position measurements using the Mahalanobis distance. Precis. Eng. 2010, 34, 113–123.
24. Wang, D.; Tang, W.; Zhao, X. Coordinate unification method in large scale metrology system based on standard artifact. Chin. J. Sci. Instrum. 2015, 36, 1845–1852. (In Chinese)
25. Xie, Z.; Lin, J.; Zhu, J. Accuracy enhancement method for coordinate control field based on space length constraint. Chin. J. Lasers 2015, 42, 261–267. (In Chinese)
26. Fan, B.; Li, G.; Li, P. Weighted rank-deficient free network adjustment using laser interferometric distance measurement for three-dimensional networks. Geomat. Inf. Sci. Wuhan Univ. 2015, 40, 222–226, 232. (In Chinese)
27. Zou, L.; Luo, C.; Zhang, G.; He, Y.M.; Zhang, Q.H.; Zhou, Y.J. High-precision construction method for laser tracker measurement network. Precis. Eng. 2025, 97, 1006–1018.
28. Ma, S.; Lu, Y.; Deng, K.; Wu, Z.; Xu, X. Accuracy improvement of laser tracker registration based on enhanced reference system points weighting self-calibration and thermal compensation. Rev. Sci. Instrum. 2024, 95, 085111.
29. Lu, Y.; Liu, W.; Zhang, Y.; Xing, H.; Li, J.; Liu, S.; Zhang, L. An accurate calibration method of large-scale reference system. IEEE Trans. Instrum. Meas. 2020, 69, 6957–6967.
30. Puntanen, S. Projection matrices, generalized inverse matrices, and singular value decomposition by Haruo Yanai, Kei Takeuchi, Yoshio Takane. Int. Stat. Rev. 2011, 79, 503–504.
31. VDI/VDE Innovation + Technik GmbH. Optical 3D-Measuring Systems—Part 3: Multiple View Systems Based on Area Scanning; VDI/VDE: Berlin, Germany, 2008.
Figure 1. Schematic overview of robot-assisted laser scanning hybrid measurement system.
Figure 2. Schematic of calibration principle based on MNCT.
Figure 3. Original point cloud of SCB acquired by the 3D scanner.
Figure 4. Point cloud after noise reduction. (a) point cloud obtained after removing sparse space discrete points, and (b) point cloud obtained after removing high-density redundant noise points.
Figure 5. Point cloud of each SCB after point cloud segmentation.
Figure 6. Schematic representation of constraint distances defined by actual and virtual points.
Figure 7. Schematic diagram for solving coordinate system transformation parameters.
Figure 8. Schematic diagram of large-scale spatial measurement field construction with hierarchical measurement error control.
Figure 9. Diagram of Euclidean distances in the MDCC.
Figure 10. Overview of the robot-assisted laser scanning hybrid measurement system.
Figure 11. Mean calibration error at different locations.
Figure 12. Construction experiment of large-scale spatial measurement field on-site.
Figure 13. Multi-dimensional cooperative calibrator.
Figure 14. Mean registration error of all GCPs at each local measuring station.
Figure 15. Diagram of sphere spacing in metric tool.
Figure 16. On-site accuracy verification for measurement system.
Figure 17. On-site measurement of key geometric features in LSC.
Figure 18. Scanning results of KLFs in LSC. (a) Point clouds of KLFs in LMA 1 at each station. (b) Reconstructed point clouds of KLFs in LMA 1. (c) Point clouds of KLFs in LMA 2 at each station. (d) Reconstructed point clouds of KLFs in LMA 2.
Table 1. Optimized coordinates and correction data of LCPs acquired by 3D scanner on the MNCT.
No.  x̃_si^S/mm  ỹ_si^S/mm  z̃_si^S/mm  Δx̃_si^S/mm  Δỹ_si^S/mm  Δz̃_si^S/mm
1  −6.107  −17.248  −9.555  −0.002  0.008  −0.006
2  −9.334  −6.339  −3.661  0.007  −0.004  −0.005
3  −5.467  5.741  −9.780  0.012  0.002  −0.004
4  9.322  5.707  −10.260  −0.015  0.003  −0.007
5  5.459  −6.167  −4.115  0.003  0.004  0.001
6  9.504  −17.260  −10.051  −0.005  0.003  −0.005
Table 2. Optimized coordinates and correction data of LCPs acquired by laser tracker on the MNCT.
No.  x̃_gi^R/mm  ỹ_gi^R/mm  z̃_gi^R/mm  Δx̃_gi^R/mm  Δỹ_gi^R/mm  Δz̃_gi^R/mm
1  4439.619  −1432.338  −199.469  −0.006  −0.002  0.002
2  4430.278  −1438.580  −193.311  0.003  0.008  −0.007
3  4419.383  −1443.164  −200.970  −0.004  0.003  −0.005
4  4412.970  −1431.178  −206.820  −0.005  −0.013  0.003
5  4423.679  −1426.684  −199.152  0.004  0.011  −0.006
6  4432.830  −1419.695  −205.640  −0.007  −0.008  0.001
Table 3. Optimized coordinates and correction data of scan position points acquired by laser tracker.
No.  x̃_Ii^R/mm  ỹ_Ii^R/mm  z̃_Ii^R/mm  Δx̃_Ii^R/mm  Δỹ_Ii^R/mm  Δz̃_Ii^R/mm
1  4276.124  −1423.866  1.565  0.003  −0.004  −0.021
2  4285.499  −1436.884  −39.293  0.002  −0.005  0.004
3  4314.775  −1446.752  −94.019  0.004  −0.007  −0.015
4  4314.427  −1372.497  −60.148  0.001  0.003  −0.019
Table 4. Optimized coordinates and correction data of partial GCPs in GCS.
No.  x̂_i^G/mm  Δx_i^G/mm  ŷ_i^G/mm  Δy_i^G/mm  ẑ_i^G/mm  Δz_i^G/mm
1  1577.987  −0.049  6629.338  0.018  −1223.005  −0.109
2  3696.428  0.051  4429.587  −0.020  −1225.259  −0.017
3  4319.382  0.059  3617.086  −0.027  −1215.178  0.050
4  6461.415  0.017  1197.048  −0.042  −1231.484  −0.045
5  1843.496  −0.066  −2553.324  −0.040  −1263.557  −0.061
6  1097.620  −0.077  −3226.831  −0.050  −1283.210  0.047
7  9354.485  0.068  228.403  0.067  663.813  0.098
8  9394.815  −0.016  263.707  0.044  1671.130  0.136
Table 5. Optimized coordinates and correction data of partial GCPs in LCS1.
No.  x̂_i^L1/mm  Δx_i^L1/mm  ŷ_i^L1/mm  Δy_i^L1/mm  ẑ_i^L1/mm  Δz_i^L1/mm
1  4999.718  −0.013  6352.989  −0.005  −1087.086  −0.106
2  7589.819  −0.019  4735.052  −0.037  −1105.365  −0.019
3  8391.804  −0.017  4098.552  −0.012  1101.177  0.055
4  11058.373  −0.022  2272.798  0.016  −1135.106  −0.075
5  7492.005  −0.004  −2488.423  −0.040  −1192.864  −0.030
6  6932.435  0.039  −3323.063  −0.054  −1217.144  0.061
7  14103.679  0.014  2023.371  −0.049  830.587  −0.098
8  14146.572  −0.032  2052.973  −0.058  1682.045  0.119
Table 6. Registration error of selected GCPs in GCS.
Method  GCP 1/mm  GCP 2/mm  GCP 3/mm  GCP 4/mm  GCP 5/mm  GCP 6/mm  MPE/mm  ME/mm
Best-fit method  0.063  0.057  0.070  0.071  0.080  0.108  0.185  0.078
Single constraint method  0.036  0.028  0.035  0.032  0.039  0.045  0.082  0.036
Proposed method  0.032  0.026  0.032  0.029  0.034  0.041  0.067  0.033
Table 7. Measuring results of sphere spacing.
No.  D1 measured value/mm  D1 sphere spacing error/mm  D2 measured value/mm  D2 sphere spacing error/mm
1  851.583  0.107  1007.531  0.113
2  851.588  0.112  1007.535  0.117
3  851.577  0.101  1007.516  0.098
4  851.586  0.110  1007.533  0.115
5  851.589  0.113  1007.534  0.116
MPE  851.589  0.113  1007.535  0.117
ME  851.584  0.108  1007.530  0.112
Table 8. Measured values of feature hole centers on LMAs in GCS.
LMA  KLF  Three-dimensional coordinates/mm
LMA 1  Hole 1  (7392.554, −433.481, 992.677)
LMA 1  Hole 2  (7485.963, −487.790, 1058.371)
LMA 1  Hole 3  (7546.156, −525.868, 910.609)
LMA 1  Hole 4  (7455.501, −471.924, 846.932)
LMA 2-1  Hole 1  (6337.459, −359.366, 332.194)
LMA 2-1  Hole 2  (6387.375, −367.301, 285.976)
LMA 2-1  Hole 3  (6339.471, −335.406, 237.449)
LMA 2-1  Hole 4  (6297.899, −332.021, 289.563)
LMA 2-2  Hole 1  (5383.108, 121.363, 358.896)
LMA 2-2  Hole 2  (5436.644, 113.361, 313.480)
LMA 2-2  Hole 3  (389.528, 141.866, 272.803)
LMA 2-2  Hole 4  (5342.718, 146.307, 321.817)
LMA 2-3  Hole 1  (4386.555, 566.960, 378.743)
LMA 2-3  Hole 2  (4426.865, 558.537, 331.288)
LMA 2-3  Hole 3  (4392.510, 590.790, 280.258)
LMA 2-3  Hole 4  (4345.942, 602.556, 324.067)