Article

Binocular Stereo Vision-Based Structured Light Scanning System Calibration and Workpiece Surface Measurement Accuracy Analysis

1 Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2 Equipment Technology Research Institute, Liuzhou OVM Machinery Co., Ltd., Liuzhou 545006, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(20), 6455; https://doi.org/10.3390/s25206455
Submission received: 21 September 2025 / Revised: 12 October 2025 / Accepted: 16 October 2025 / Published: 18 October 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

Precise online measurement of large structural components is urgently needed in modern manufacturing and intelligent construction, requiring a measurement range of over 1 m, near-millimeter accuracy, second-level measurement speed, and adaptability to complex environments. In this paper, three mainstream measurement technologies, namely the image method, the line laser scanning method, and the structured light method, are comparatively analyzed. The structured light method exhibits remarkable comprehensive advantages in terms of accuracy and speed; however, it suffers from occlusion during contour measurement. To tackle this problem, multi-camera stitching is employed, wherein the accuracy of camera calibration plays a crucial role in determining the quality of point cloud stitching. Focusing on the cable tensioning scenario for meter-diameter stay cables in cable-stayed bridges, this study develops a contour measurement system based on the collaboration of multiple structured light cameras. Measurement indicators are optimized through modeling analysis, system construction, and performance verification. During verification, four structured light scanners were adopted, and measurements were repeated 11 times for each test workpiece. Experimental results demonstrate that although the current measurement errors have not yet been stably controlled within the millimeter level, this research provides technical exploration and practical experience for high-precision measurement in the field of intelligent construction, laying a solid foundation for subsequent accuracy improvement.

1. Introduction

Online precise measurement of the external dimensions and dimensional variations of large structural components is in urgent demand across key fields such as modern manufacturing and intelligent construction [1,2,3]. In industrial manufacturing, the dimensional accuracy of large components directly determines the overall performance and quality of products; for instance, minor dimensional deviations in the large blades of aero-engines may lead to severe issues during high-speed operation. In the field of intelligent construction, such as the construction process of cable-stayed bridges, dimensional monitoring of key bridge structural components serves as a crucial link to ensuring the safety and stability of the bridge. The measurement requirements in these scenarios typically exhibit the following characteristics: a measurement range of over one meter to cover the overall dimensions of large structural components; measurement accuracy approaching the millimeter scale to meet the basic standards of high-precision manufacturing and construction; second-level measurement speed to enable real-time monitoring and rapid feedback in production processes; and adaptability to complex on-site environments to ensure the stability of measurement results.

To meet the aforementioned requirements, the current mainstream measurement technologies include the image-based method, the line laser scanning method, and the structured light method [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. The image-based method primarily relies on computer vision principles, acquiring dimensional information through image capture from multiple angles of the object and subsequent analysis. Its advantages lie in relatively low equipment cost and flexible adaptability to environments. However, this method exhibits limitations in measurement accuracy: especially for large structural components, it struggles to stably achieve the desired high-precision requirements, and its measurement results are susceptible to interference under complex lighting conditions.

The line laser scanning method leverages the high directionality and energy density of lasers: it emits a linear laser onto the object surface, captures the reflected light from a specific angle using a camera, and calculates the 3D coordinates of each point on the object surface based on the triangulation principle. This method achieves high measurement accuracy, even reaching the sub-micron level in some precision measurement scenarios. Nevertheless, line laser scanning relies on point-by-point measurement, resulting in relatively slow speed that cannot meet the second-level measurement requirement; additionally, it is sensitive to ambient light, and its measurement accuracy is significantly affected under strong or complex lighting conditions.

The structured light method projects specific patterns (e.g., fringes, Gray codes) onto the measured object surface; a camera then captures images of the deformed patterns, and 3D topographic information of the object surface is accurately obtained via algorithms such as phase calculation. This method enables high-precision measurement with fast speed, allowing for the simultaneous acquisition of large-area information on the object surface. However, when measuring the complete contour of complex objects, the structured light method is prone to occlusion issues, leading to missing measurement data in certain regions.
Despite these limitations, the structured light method has garnered increasing attention in industries and intelligent construction due to its comprehensive advantages in accuracy and speed.
In practical applications, multi-camera stitching is commonly adopted to address the occlusion issue of the structured light method and achieve full-range measurement of objects. Currently, numerous relevant applications and corresponding key technologies have been proposed. For example, some studies have improved the measurement accuracy and stability of multi-camera systems by optimizing camera layout and calibration algorithms [27,28,29,30,31,32,33,34,35]; other works have focused on enhancing the design and decoding algorithms of structured light patterns to improve adaptability to complex scenarios [36,37,38,39,40]. These methods provide valuable references for multi-camera stitching measurement. However, due to differences in hardware configurations, measurement principles, and application scenarios among various structured light systems, camera calibration becomes extremely complex, and calibration results vary significantly across systems. As a core link in multi-camera structured light measurement systems, camera calibration accuracy plays a decisive role in the subsequent point cloud stitching quality. High-precision calibration ensures the accurate alignment of point cloud data captured by different cameras in the spatial coordinate system, thereby enabling complete and precise 3D reconstruction of the object. If the calibration accuracy meets a certain standard, point cloud stitching can even be achieved solely through coordinate transformation, without relying on feature point matching. This advantage holds great significance in practical applications: it significantly reduces the time cost required for point-cloud stitching and minimizes stitching errors caused by inaccurate feature point matching.
In view of this, the present study focuses on the practical scenario of cable tensioning for meter-diameter cables in cable-stayed bridges within the context of intelligent construction, and conducts research on a contour measurement method and system based on the collaboration of multiple structured light cameras. In this scenario, real-time measurement of the cable diameter changes during tensioning is crucial for ensuring the installation quality and structural safety of the bridge. According to practical test results, although the current measurement error has not yet been fully and stably controlled within the 1 mm range, the study optimizes measurement indicators through a series of efforts: conducting modeling analysis of the measurement system to refine parameter configurations; constructing the measurement system by selecting appropriate hardware (e.g., structured light cameras, projectors) and completing system integration; and performing performance verification to continuously evaluate and optimize key performance indicators (KPIs) such as measurement accuracy, speed, and stability, with the goal of constantly moving closer to the millimeter-level accuracy requirement. Through these efforts, this study aims to provide new technical exploration and practical experience for high-precision measurement in intelligent construction and related industrial fields, laying a foundation for further improving measurement accuracy in subsequent studies.

2. Materials and Methods

2.1. Measurement System Hardware and Configuration

Four identical structured light 3D scanners were employed as the core sensing units, each integrating a stereo vision module (one industrial camera) and a structured light projector (capable of emitting fringe and Gray-code patterns).
The scanners were symmetrically distributed around the circular test workpiece (Figure 1a), with the workpiece idealized as a cylinder of radius R and the instantaneous measurement volume of each scanner represented by a triangular footprint. By carefully setting the working distance (the perpendicular separation between the foremost optical interface of the scanner and the cylindrical surface), a multi-view scanning network is established around the cylinder axis. The field of view of a single scanner, indicated by the black dashed rectangle in the figure, covers a localized region of the cylindrical surface; through mutual compensation of the individual perspectives, the array enables full-circumference 3D reconstruction of the workpiece.
Figure 1b illustrates the field of view (FoV) of a single structured-light 3D scanner in detail. The core sensing module comprises a camera and a projector: the projector emits a patterned fringe sequence onto the target surface, while the camera synchronously acquires the correspondingly modulated images. The overlapping region of the two optical cones—highlighted as the shaded area within the solid black contour—defines the valid measurement volume. Within this domain, the complete fringe set is recorded by the camera, enabling the internal algorithms (phase unwrapping and stereo matching) to reconstruct the three-dimensional geometry of the object.
Figure 2 illustrates, from the perspective of geometrical optics, the coverage capability of a single 3D structured light scanner for curved workpieces, including the coupling relationship between the curvature of the workpiece surface and the scanning field of view (FOV). In the figure, the orange solid line represents the maximum coverage range of the scanner, while the positions B and B′ at which the blue dashed lines are tangent to the curved surface determine whether the workpiece is occluded at its front and rear ends (relative to the scanner) within the scanner's effective range, which would cause point cloud loss in the occluded areas. The main parameters in the figure are as follows: α is the angle subtended by half of the arc AA′ of the workpiece region covered by the scanner's FOV; β is the angle subtended by the arc AC (or A′C′) between the tangent point B of the camera's FOV on the workpiece edge and the intersection point C of the line connecting the camera and the workpiece center O with the curved surface; δ is an auxiliary angular parameter introduced for convenience of the geometric analysis, with no direct physical significance. As shown in Figure 2, if β − δ is greater than α, no occlusion occurs; otherwise, occlusion occurs. Here, d is the length of the 3D structured light scanner, D is the working distance (the perpendicular distance from the foremost end of the scanner to the workpiece surface), R is the radius of the circular workpiece, L is the straight-line distance between the camera and the workpiece center (obtainable via the Pythagorean theorem), h is the length of the line segment from the camera to the tangent point on the workpiece edge, and W is the lateral distance covered by the scanner's FOV on the workpiece. From the Pythagorean theorem and elementary plane geometry, the formulas for α, β, L, and δ are readily derived as follows.
$$\alpha = \arcsin\frac{W}{2R} \quad (1)$$
$$\beta = \arccos\frac{R}{L} \quad (2)$$
$$L = \sqrt{(D+R)^2 + \left(\frac{d}{2}\right)^2} \quad (3)$$
$$\delta = \arccos\frac{D+R}{L} \quad (4)$$
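As a quick numerical sanity check, the following Python sketch evaluates Equations (1)–(4) with the parameter values later listed in Table 1; d is taken as 0.60 m, the midpoint of its stated 57–63 cm range, so small differences from the reported δ reflect the uncertainty in d. The script and threshold test are illustrative, not part of the published measurement pipeline.

```python
import math

# Occlusion check for one scanner viewing the cylindrical workpiece,
# following Equations (1)-(4); symbols mirror Figure 2, values from Table 1.
D = 1.1    # working distance (m)
d = 0.60   # scanner/projector length (m), assumed midpoint of 57-63 cm
R = 0.75   # workpiece radius (m)
W = 1.17   # lateral FOV width on the workpiece (m)

L = math.hypot(D + R, d / 2)                  # Eq. (3)
alpha = math.degrees(math.asin(W / (2 * R)))  # Eq. (1)
beta = math.degrees(math.acos(R / L))         # Eq. (2)
delta = math.degrees(math.acos((D + R) / L))  # Eq. (4)

print(f"L = {L:.3f} m, alpha = {alpha:.1f}, beta = {beta:.1f}, delta = {delta:.2f} deg")
print("no occlusion" if beta - delta > alpha else "occlusion occurs")
```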
Two types of workpieces were used to validate the measurement accuracy of the system, each with an a priori known cross-sectional area (CSA) serving as the ground-truth reference:
  • Cylindrical-like workpiece: a hollow plastic cylinder simulating the regular contour of stay cables in cable-stayed bridges, with a true CSA of 1.81581 m².
  • Irregular annular workpiece: a metal annular component simulating the irregular contour of aged cables, with a true CSA of 1.75569 m².
A high-precision chessboard calibration board (Figure 3) was utilized to establish the correspondence between 3D world points and 2D image points, a prerequisite for camera intrinsic and extrinsic calibration.
  • Specifications: the board featured a 10 × 7 corner array (10 horizontal corners, 7 vertical corners) with an inter-corner physical distance of 65 mm .
  • Corner Detection: subpixel-level corner coordinates were extracted using computer vision algorithms (e.g., OpenCV functions), matching the known 3D world coordinates of the corners (with the calibration board plane defined as Z = 0 in the world coordinate system).
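A minimal OpenCV sketch of this corner-extraction step is shown below; the image file name and termination criteria are illustrative assumptions rather than the exact settings used in this study.

```python
import cv2
import numpy as np

pattern = (10, 7)   # inner-corner array of the board (horizontal, vertical)
pitch = 65.0        # inter-corner physical distance in mm

img = cv2.imread("calib_view_01.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
found, corners = cv2.findChessboardCorners(img, pattern)
if found:
    # Refine the detected corners to subpixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)

# Corresponding 3D world points, with the board plane defined as Z = 0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * pitch
```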

2.2. Multi-Scanner Calibration and Coordinate Transformation

To unify the coordinate systems of the four scanners (numbered ①–④), a global calibration strategy based on relative extrinsic parameters was proposed (Figure 4), addressing the limited overlapping FOV between Scanner ① and ③ (which prevented direct acquisition of high-quality calibration images for that pair):
  • Calibrate Scanner ① and ② to compute the relative extrinsic matrix $T_{21}$ (containing rotation matrix $R_{21}$ and translation vector $t_{21}$), establishing their pose relationship.
  • Calibrate Scanner ① and ④ to obtain $T_{41}$, linking Scanner ④ to the reference frame of Scanner ①.
  • Calibrate Scanner ③ and ④ to derive $T_{34}$, defining Scanner ③'s pose relative to Scanner ④.
  • Calculate $T_{31}$ (Scanner ③ relative to ①) via matrix chain multiplication: $T_{31} = T_{34} \cdot T_{41}$, where $R_{31} = R_{34} \cdot R_{41}$ and $t_{31} = t_{34} + R_{34} \cdot t_{41}$, as sketched below.
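Since each T packs a rotation and translation into a 4 × 4 homogeneous matrix, the chained computation is a single matrix product. A minimal NumPy sketch with placeholder calibration outputs follows; the actual R and t values come from the pairwise stereo calibrations.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

# Placeholder calibration outputs; in practice R_41, t_41 and R_34, t_34
# come from the stereo calibration of the overlapping scanner pairs.
R_41, t_41 = np.eye(3), np.zeros(3)
R_34, t_34 = np.eye(3), np.zeros(3)

# Chain: frame 1 -> frame 4 (T_41), then frame 4 -> frame 3 (T_34)
T_31 = make_T(R_34, t_34) @ make_T(R_41, t_41)
# Equivalently: R_31 = R_34 @ R_41 and t_31 = t_34 + R_34 @ t_41
```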
After calibration, all scanners’ coordinates were unified under Scanner ①’s camera coordinate system (defined as the global reference frame). The calibration algorithm logic (Figure 5) included corner data verification, intrinsic parameter reuse, essential matrix calculation, and singular value decomposition (SVD) for extrinsic parameter extraction.
The mapping from 3D world points to 2D pixel points involves seven coordinate systems (Figure 6):
  • the world coordinate system $O_w$-$X_wY_wZ_w$: a global reference frame describing the absolute position of the 3D scene;
  • the left camera coordinate system $O_c$-$X_cY_cZ_c$: a local frame of the left camera, with its origin at the optical center of the left camera;
  • the left image coordinate system $O_t$-$X_tY_t$: a millimeter-level coordinate system of the left image, with its origin at the projection of the world origin onto the left image plane;
  • the left pixel coordinate system $O_p$-$UV$: a pixel-level coordinate system of the left image, with its origin at the upper-left corner of the image;
  • the right camera coordinate system $O_c'$-$X_c'Y_c'Z_c'$: a local frame of the right camera, with its origin at the optical center of the right camera;
  • the right image coordinate system $O_t'$-$X_t'Y_t'$: a millimeter-level coordinate system of the right image, with its origin at the projection of the world origin onto the right image plane;
  • the right pixel coordinate system $O_p'$-$U'V'$: a pixel-level coordinate system of the right image, with its origin at the upper-left corner of the image.

2.2.1. Key Coordinate Transformations

  • World → Camera Coordinates: Taking the left scanner as the analysis object, there exists a transformation matrix from the world coordinate system to the left camera coordinate system [41]. Then, we have
    $$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \quad (5)$$
    where R (3 × 3) and t (3 × 1) are the rotation and translation parameters of the scanner relative to the world frame.
  • Camera → Image Coordinates: This process realizes the conversion from three-dimensional to two-dimensional coordinates [42]. The imaging process is shown in Figure 7a. For convenience of description, we swap the positions of the camera coordinate system and the image coordinate system, flip the image, and place an equivalent erect virtual image at a distance equal to the focal length in front of the optical center, resulting in the arrangement shown in Figure 7b.
    From Figure 7b, we can obtain
    $$\frac{x_{cf}}{f} = \frac{x_c}{z_c}, \qquad \frac{y_{cf}}{f} = \frac{y_c}{z_c}, \quad (6)$$
    where f is the camera’s focal length.
  • Image → Pixel Coordinates: The coordinates of the origin of the image coordinate system in the pixel coordinate system are $(u_0, v_0)$. The physical dimensions of each pixel along the x-axis and y-axis of the image coordinate system are $\alpha_x$ and $\alpha_y$, and the coordinates of the image point in the actual image coordinate system are $(x_{cf}, y_{cf})$.
    Converting to the homogeneous coordinate representation form, we can get the mapping relationship from the world coordinate system to the pixel coordinate system:
    $$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c} \begin{bmatrix} \frac{f}{\alpha_x} & 0 & u_0 \\ 0 & \frac{f}{\alpha_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mathbf{R} & \mathbf{t} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \overset{z_w = 0}{=} \frac{1}{z_c} \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{t} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}, \quad (7)$$
    where $f_x = f/\alpha_x$, $f_y = f/\alpha_y$, $\mathbf{r}_1$ and $\mathbf{r}_2$ are the first two columns of $\mathbf{R}$, and $[\mathbf{r}_1 \; \mathbf{r}_2 \; \mathbf{t}]\,[x_w \; y_w \; 1]^{\mathsf{T}} = [x_c \; y_c \; z_c]^{\mathsf{T}}$.
    Equation (7) does not include lens distortion; in practice, the distortion coefficients described in Section 2.2.2 are applied for correction.
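To make the mapping concrete, the following sketch pushes a world point through Equation (7); the focal ratio fx = fy ≈ 8 mm / 4.5 µm follows from the optics quoted in Section 2.3.1, while the principal point, extrinsics, and test point are illustrative placeholders.

```python
import numpy as np

fx = fy = 8.0e-3 / 4.5e-6          # f / pixel size, from the quoted optics
u0, v0 = 812.0, 620.0              # principal point, assumed at image center
K = np.array([[fx, 0, u0],
              [0, fy, v0],
              [0,  0,  1]])

R = np.eye(3)                      # placeholder rotation (world -> camera)
t = np.array([0.0, 0.0, 1.5])      # placeholder translation (m)
Pw = np.array([0.10, 0.20, 0.0])   # world point with z_w = 0

Pc = R @ Pw + t                    # world -> camera coordinates, Eq. (5)
u, v, _ = (K @ Pc) / Pc[2]         # perspective division by z_c, Eq. (7)
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```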

2.2.2. Distortion Correction

Mapping from the ideal image coordinate system to the actual image coordinate system requires accounting for lens distortion [43]. Radial and tangential distortions are the most significant components, so only these two types are considered here.
Radial distortion is caused by the manufacturing process of the lens. The two types of radial distortion are shown in Figure 7c and Figure 7d: barrel distortion and pincushion distortion, respectively.
Tangential distortion is caused by installation position errors between the lens and the CMOS or CCD sensor.
Therefore, the overall distortion correction formula is as follows.
$$\begin{bmatrix} x_{cf}' \\ y_{cf}' \end{bmatrix} = \left(1 + k_1 r^2 + k_2 r^4\right) \begin{bmatrix} x_{cf} \\ y_{cf} \end{bmatrix} + \begin{bmatrix} 2 p_1 x_{cf} y_{cf} + p_2 \left(r^2 + 2 x_{cf}^2\right) \\ 2 p_2 x_{cf} y_{cf} + p_1 \left(r^2 + 2 y_{cf}^2\right) \end{bmatrix}. \quad (8)$$
In the formula, the first term represents the radial distortion and the second term the tangential distortion, where $k_1, k_2$ are the radial distortion coefficients, $p_1, p_2$ are the tangential distortion coefficients, and $r^2 = x_{cf}^2 + y_{cf}^2$.
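A direct transcription of Equation (8) in Python, with illustrative (hypothetical) coefficient values, reads:

```python
import numpy as np

# Brown radial + tangential distortion of Equation (8), applied to
# normalized image coordinates; coefficient values are hypothetical.
k1, k2 = -0.12, 0.05        # radial coefficients
p1, p2 = 1e-4, -2e-4        # tangential coefficients

def distort(x, y):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = radial * x + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = radial * y + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return xd, yd

print(distort(0.1, 0.05))
```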

2.2.3. The Solution Principle of Stereo Calibration

Our ultimate goal is to find the relative extrinsic parameters R and t between the left and right cameras. The solution principle of binocular calibration is as follows:
Assume a spatial point $a$ whose coordinates in the left camera coordinate system are $A$ and whose projections in the left and right pixel coordinate systems are $a_1$ and $a_2$, respectively. The intrinsic parameter matrices of the two cameras are $K_1$ and $K_2$. Therefore, we can formulate
$$a_1 = K_1 A, \qquad a_2 = K_2 (R A + t). \quad (9)$$
Let $x_1 = K_1^{-1} a_1$ and $x_2 = K_2^{-1} a_2$; then $x_2 = R x_1 + t$. Taking the cross product of both sides with $t$ (which eliminates $t$) and then the inner product with $x_2$ (which vanishes against $t \times x_2$), we obtain
$$x_2^{\mathsf{T}} [t]_{\times} R \, x_1 = 0. \quad (10)$$
Let $E = [t]_{\times} R$, where $[t]_{\times}$ denotes the skew-symmetric matrix of $t$; $E$ is the essential matrix, which is the basis for solving the relative extrinsic parameters later.
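In practice this recovery maps onto standard OpenCV calls. The sketch below builds synthetic matched points from a known relative pose and an illustrative intrinsic matrix, then recovers (R, t) via the essential matrix; cv2.recoverPose performs the SVD decomposition of E and resolves the four-fold (R, t) ambiguity internally.

```python
import cv2
import numpy as np

# Synthetic matched projections of random 3D points in two cameras whose
# relative pose (R_true, t_true) we then recover via the essential matrix.
rng = np.random.default_rng(0)
K = np.array([[1778.0, 0, 812.0], [0, 1778.0, 620.0], [0, 0, 1.0]])
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))   # small yaw rotation
t_true = np.array([[0.3], [0.0], [0.05]])

X = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 4.0], (50, 3))  # 3D points
x1 = (K @ X.T).T
pts1 = (x1[:, :2] / x1[:, 2:]).astype(np.float32)
x2 = (K @ (R_true @ X.T + t_true)).T
pts2 = (x2[:, :2] / x2[:, 2:]).astype(np.float32)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
# R and t now match (R_true, t_true) up to the unknown scale of t.
```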

2.2.4. Verification of Full Coverage

The actual coverage of the four scanners on the cylindrical workpiece is shown in Figure 8 (a specific case of Figure 2 under experimental parameters). The sizes of various key parameters in this experimental environment are shown in Table 1 below.
Substituting the key parameters in Table 1 into Equations (1)–(4), we obtain L ≈ 1.875 m, α = 51.2°, β ≈ 66.4°, and δ = 9.36°. Since β − δ > α, it can be concluded that the curvature of the workpiece surface will not affect the coverage range of the camera.
Local magnification of the FOV overlap between Scanner ② and ③ (Figure 9) confirmed that the edge points of each scanner’s FOV fell within the adjacent scanner’s FOV, ensuring redundant coverage and no data loss.

2.3. 3D Contour Measurement and Data Processing

2.3.1. Measurement Process

  • Structured Light Projection and Image Acquisition: Each scanner's projector emitted coded patterns (fringes) onto the workpiece surface. The structured light scanning system utilizes a camera from Luster Inc., Shenzhen, China (model: Y2000L), equipped with an 8 mm focal-length lens and a CMOS sensor with 1624 × 1240 resolution and 4.5 µm pixel size. The stereo cameras synchronously captured deformed pattern images within the effective FOV (Figure 1b), with data acquired via custom software (developed using Visual Studio 2022 and OpenCV 4.11).
  • Local 3D Reconstruction: The scanner’s internal algorithm performed phase unwrapping and stereo matching on the captured images to generate local 3D point clouds of the workpiece surface.
  • Global Point Cloud Stitching: Local point clouds from the four scanners were transformed into the global reference frame (Scanner ①'s coordinate system) using the calibrated relative extrinsic parameters $T_{i1}$ ($i = 2, 3, 4$). High calibration accuracy enabled stitching via direct coordinate transformation, eliminating the need for feature point matching, as sketched below.
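Under this scheme, stitching reduces to one homogeneous transform per scanner. A minimal sketch follows, with placeholder clouds and identity transforms standing in for the calibrated T_i1:

```python
import numpy as np

def to_global(cloud, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homog = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

# Placeholder local clouds and scanner-to-global transforms
clouds = {i: np.random.rand(1000, 3) for i in (1, 2, 3, 4)}
T_to_global = {i: np.eye(4) for i in (1, 2, 3, 4)}  # T for Scanner 1 is I

stitched = np.vstack([to_global(clouds[i], T_to_global[i]) for i in (1, 2, 3, 4)])
print(stitched.shape)  # (4000, 3): full-circumference cloud in Scanner 1's frame
```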
Figure 10 shows the physical image of the four scanners and test workpieces.

2.3.2. Data Validation and Error Analysis

  • Experimental Protocol: The measurement was repeated 11 times for both workpieces to assess stability. For each trial, the cross-sectional contour of the workpiece was obtained by fitting the stitched point cloud, from which the measured CSA was computed.
  • Error Calculation: the relative error was defined as
    $$\text{Relative Error} = \frac{\left|\text{Measured CSA} - \text{True CSA}\right|}{\text{True CSA}} \times 100\%.$$
    The measurement results (including measured values, true values, and relative errors) are summarized in Table 2.
  • Statistical Analysis: Quartile analysis (upper limit, 75th percentile, median, 25th percentile, lower limit) and outlier detection (interquartile range, IQR method) were conducted. No outliers were identified (IQR = 0.032 for the irregular workpiece, IQR = 0.044 for the cylindrical-like workpiece), confirming the system’s reliability.
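These statistics can be reproduced with a few NumPy lines; the sketch below uses the cylindrical-like trials from Table 2 (note that a different quartile convention can shift the computed IQR slightly relative to the reported values).

```python
import numpy as np

# Relative-error and IQR outlier screening for the cylindrical-like
# workpiece, using the 11 measured CSA values from Table 2.
true_csa = 1.81581
measured = np.array([1.809687, 1.809538, 1.809525, 1.809287, 1.809097,
                     1.809341, 1.809393, 1.810142, 1.810102, 1.810147,
                     1.810775])

rel_err = np.abs(measured - true_csa) / true_csa * 100   # percent
q1, q3 = np.percentile(rel_err, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr                  # outlier bounds
outliers = rel_err[(rel_err < lo) | (rel_err > hi)]
print(f"median = {np.median(rel_err):.3f}%, IQR = {iqr:.3f}%, "
      f"outliers = {outliers.size}")
```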

2.4. Ethical Statement and Data Availability

  • Ethical Approval: ethical review and approval were waived for this study, as it involved only inanimate industrial workpieces and no human or animal subjects.
  • Data Availability: the experimental data supporting the findings—including scanner calibration parameters, raw image datasets, reconstructed point clouds, and measurement results—are available from the corresponding author upon reasonable request.
  • Software and Tools: the measurement system was developed using Visual Studio 2022 (Microsoft, Redmond, WA, USA) and OpenCV 4.11 (Open Source Computer Vision Library, https://opencv.org/releases/, (accessed on 3 July 2025)).

3. Results

3.1. Verification of Full Coverage by Four Scanners

To confirm that the four structured light scanners could achieve complete circumferential coverage of the cylindrical-like workpiece (radius R = 0.75 m ) without blind spots, geometric analysis and experimental validation were conducted.
Substituting the key system parameters (Table 1) into Equations (1)–(4) yielded the following critical angles:
  • Coverage angle α = 51.2 ° (half the arc angle covered by a single scanner);
  • Constraint angle β = 66.4 ° (angle determining potential occlusion);
  • Angle δ = 9.36° (an auxiliary angular parameter introduced for convenience of the geometric analysis, with no direct physical significance).
Since β − δ > α, the workpiece curvature did not block the scanner's FOV, confirming no local data loss from a single scanner.
Figure 11 illustrates the actual coverage of the four scanners on the cylindrical workpiece in a real experimental environment. It visually shows that the FOVs of adjacent scanners (e.g., Scanner ① and ②, ③ and ④) overlap by approximately 12.4 ° , ensuring no gaps in circumferential coverage.
For further verification, Figure 9 provides a local magnification of the FOV overlap between Scanner ② and ③. The edge point of Scanner ②’s FOV on the workpiece surface fell within Scanner ③’s FOV and vice versa. This redundant coverage confirmed that the four scanners collectively achieved full circumferential reconstruction of the workpiece without blind spots.

3.2. Measurement Accuracy of the System

The system’s measurement accuracy was evaluated using two types of workpieces (cylindrical-like and irregular annular) with known true CSA. Each workpiece was measured 11 times, and the results are summarized in Table 2.
1. Measurement Results for Cylindrical-like Workpiece
  • True CSA: 1.81581 m²;
  • Measured CSA range: 1.809097–1.810775 m²;
  • Relative error range: 0.277–0.370% (corresponding to an absolute error of the diameter D in the range of 2.1–2.8 mm);
  • Median relative error: 0.345% (absolute error: 2.6 mm).
2. Measurement Results for Irregular Annular Workpiece
  • True CSA: 1.75569 m²;
  • Measured CSA range: 1.751210–1.752639 m²;
  • Relative error range: 0.174–0.255% (corresponding to an absolute error range of 1.1–2.1 mm);
  • Median relative error: 0.241% (absolute error: 1.8 mm).
Notably, the irregular workpiece exhibited lower relative errors, which was attributed to its rougher surface enhancing structured light pattern modulation—unlike the smooth surface of the cylindrical-like workpiece, which caused slight specular reflection and reduced pattern contrast.

3.3. Statistical Analysis of Measurement Stability

To assess the system’s stability, quartile analysis and outlier detection (using the Interquartile Range, IQR method) were performed on the 11 sets of measurement data.
The quartile statistics for relative errors are shown in Table 3.
Outlier analysis was conducted using the IQR criterion (outliers defined as values outside $[Q_1 - 1.5 \times IQR, \; Q_3 + 1.5 \times IQR]$):
  • For the irregular workpiece: IQR = 0.032%, range [0.166%, 0.294%]; all 11 relative errors fell within this range, with 0 outliers.
  • For the cylindrical-like workpiece: IQR = 0.044%, range [0.246%, 0.422%]; all 11 relative errors were within this range, with 0 outliers.
These results confirmed the system’s high stability, with no abnormal fluctuations in repeated measurements.

4. Discussion

4.1. Interpretation of Measurement Results

The experimental results demonstrate that the multi-structured light system achieves stable circumferential coverage and measurable accuracy for large-scale cylindrical-like components. The key findings and their implications are discussed below.
The geometric calculation (β − α > δ) and experimental validation (Figure 1 and Figure 10) confirm that the four scanners' symmetric layout (Figure 1a) effectively avoids occlusion caused by workpiece curvature. This addresses the core limitation of single-structured light systems (local data loss) and provides a feasible solution for the full-contour measurement of large cylindrical components (e.g., stay cables of cable-stayed bridges).
The system’s relative errors (0.174–0.370%) are higher than the target of “millimeter-level absolute error” (1 mm), primarily due to two factors:
  • Distortion Residuals: Although radial and tangential distortions were corrected using Equation (8), residual distortion (<0.3 pixels) from lens manufacturing defects (Figure 5) led to an absolute error of ∼2.1 mm in 3D reconstruction. This residual distortion error is systematic, as it stems from the inherent approximation of the distortion model and introduces a consistent bias in measurements.
  • Calibration Error: For each camera-pair calibration (using the shared calibration board), we captured 15 images with varying calibration board poses. The calibration error is bounded by reprojection error: the reprojection error for all camera pairs is less than 0.7 pixels. The chained calibration for Scanner ① and ③ (Figure 4) introduced a cumulative error in relative pose estimation, as matrix chain multiplication amplifies small errors from intermediate steps (Scanner ④’s calibration error).
Notably, the irregular workpiece’s lower error (0.174–0.255%) highlights the system’s better adaptability to rough surfaces; this is because structured light patterns are more effectively modulated by rough surfaces, improving the stereo matching accuracy.
The absence of outliers (Section 3.3) and narrow percentile range confirm the system’s high stability, which benefits from two aspects:
  • Hardware Synchronization: since the measured object was static in the experiment, the scanners were set to acquire asynchronously to avoid mutual interference between structured light patterns.
  • Algorithm Robustness: the calibration algorithm (Figure 5) included corner data verification and SVD-based extrinsic decomposition, which reduced the impact of noise (e.g., uneven lighting) on parameter estimation.

4.2. Current Limitations

  • Environmental Sensitivity: the system’s accuracy decreases and may even lead to point-cloud dropouts under strong ambient light, as excessive light washes out structured light patterns, which limits outdoor applications (e.g., on-site cable-stayed bridge measurements).
  • Calibration Complexity: the chained calibration for non-overlapping scanners (① and ③) requires 15–20 sets of calibration images, which is time-consuming (approximately 40 min per system).

4.3. Future Perspectives

  • Anti-Glare Design: integrate a narrow-band filter (450 nm, matching the scanner’s projection wavelength) to reduce ambient light interference, enabling outdoor use.
  • Fast Calibration: develop a dynamic calibration method using a portable reference sphere to reduce calibration time to <10 min.
  • Error Compensation: introduce a laser interferometer to measure residual errors and establish a compensation model, aiming to further reduce absolute error (approaching the millimeter-level target).
  • Refined calibration: use higher-precision calibration targets (e.g., ceramic-coated chessboards with sub-micron flatness) and increase the calibration image count to reduce residual systematic errors.

5. Conclusions

This study developed a multi-structured light contour measurement system for large cylindrical components (e.g., stay cables of cable-stayed bridges) and validated its performance through experiments. The key conclusions are as follows:
  • Coverage Capability: The four scanners' symmetric layout (1.1 m working distance) and optimized FOV design achieve full circumferential coverage of cylindrical workpieces (radius 0.75 m) without blind spots, as confirmed by geometric calculations (β − α > δ) and experimental validation (overlapping FOVs of adjacent scanners).
  • Measurement Performance: The system exhibits stable and measurable accuracy: for irregular annular workpieces: relative error 0.174–0.255% (absolute error 1.1–2.1 mm); for cylindrical-like workpieces: relative error 0.277–0.370% (absolute error 2.1–2.8 mm); no outliers in 11 repeated measurements, confirming high stability.
  • Practical Value: The system addresses the occlusion limitation of single-structured light systems and reduces hardware costs by using only four scanners. It provides a feasible technical solution for the real-time contour measurement of large cylindrical components in intelligent construction (e.g., cable tensioning monitoring of cable-stayed bridges), laying a foundation for future millimeter-level accuracy optimization.

Author Contributions

Conceptualization, X.Z. and X.L.; methodology, X.Z.; software, X.Z. and L.L.; validation, X.Z. and X.L.; formal analysis, X.Z.; investigation, X.Z. and L.L.; resources, Y.W., S.X., Y.Z. and H.Z.; data curation, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, X.Z. and X.L.; visualization, X.Z.; supervision, X.L. and X.W.; project administration, L.L.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shenzhen Science and Technology Program, grant number JCYJ20240813112003005, for the project “Research on Anti-harmonic Digital Grating 3D Reconstruction Technology and System Based on MEMS Scanning Mirror”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Datasets generated during the current study are available from the corresponding authors on reasonable request.

Acknowledgments

The authors express gratitude to the editors and the reviewers for their constructive and helpful review comments.

Conflicts of Interest

Authors Yuexue Wang, Shi Xie, Hao Zhang and Yiqing Zou were employed by the company Liuzhou OVM Machinery Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Guo, Q. Reviews on the Machining and Measurement of Large Components. Adv. Mech. Eng. 2023, 15, 9. [Google Scholar] [CrossRef]
  2. Wang, K.; Li, Z.; Xu, L.; Shi, L.; Liu, M. Measurement and Compensation of Contour Errors for Profile Grinding. Measurement 2025, 242, 115959. [Google Scholar] [CrossRef]
  3. Xiong, Z.; Zuo, Z.; Li, H.; Pan, L. Research on Dynamic Measurement of Hot Ring Rolling Dimension Based on Machine Vision. IFAC PapersOnLine 2022, 55, 125–130. [Google Scholar] [CrossRef]
  4. Xu, J.; Yuan, Y.B.; Ding, Z.L.; Li, H.; Chen, Z. High-accuracy Measurement of Dimensional Minichanges of Large-size Components with Laser Interferometry. In Proceedings of the Measurement Technology and Intelligent Instruments, Wuhan, China, 29 October–5 November 1993; Volume 2101, pp. 597–600. [Google Scholar] [CrossRef]
  5. Nozdrzykowski, K.; Janecki, D. Comparative Studies of Reference Measurements of Cylindrical Surface Roundness Profiles of Large Machine Components. Metrol. Meas. Syst. 2014, 21, 67–76. [Google Scholar] [CrossRef]
  6. Li, J.; Zhou, Q.; Li, X.; Chen, R.; Ni, K. An Improved Low-Noise Processing Methodology Combined with PCL for Industry Inspection Based on Laser Line Scanner. Sensors 2019, 19, 3398. [Google Scholar] [CrossRef]
  7. Ferraro, P. Measurement of 3D Position of Large Objects by Laser Scanning System to Aid the Fabrication Process of Composite Aerospace Components. In Proceedings of the Lasers, Optics, and Vision for Productivity in Manufacturing I, Besancon, France, 10–14 June 1996; Volume 2786, pp. 132–138. [Google Scholar] [CrossRef]
  8. Li, Y.; Yin, X.; Liu, W.; Gou, W.; Cheng, K. Cold Centering Algorithm on Pipe Based on Laser Measurement. J. Adv. Comput. Intell. Intell. Inform. 2017, 21, 397–402. [Google Scholar] [CrossRef]
  9. Chen, R.; Li, X.; Wang, X.; Li, J.; Xue, G.; Zhou, Q.; Ni, K. A Planar-Pattern Based Calibration Method for High-Precision Structured Laser Triangulation Measurement. In Optical Metrology and Inspection for Industrial Applications VI; SPIE: Bellingham, WA, USA, 2019; p. 21. [Google Scholar] [CrossRef]
  10. Peng, J.Q.; Xu, W.F.; Yan, L.; Pan, E.; Liang, B.; Wu, A.G. A Pose Measurement Method of a Space Noncooperative Target Based on Maximum Outer Contour Recognition. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 512–526. [Google Scholar] [CrossRef]
  11. Chen, R.; Li, Y.; Xue, G.; Tao, Y.; Li, X. Laser Triangulation Measurement System with Scheimpflug Calibration Based on the Monte Carlo Optimization Strategy. Opt. Express 2022, 30, 25290. [Google Scholar] [CrossRef] [PubMed]
  12. Bai, Q.W.; Chen, X.; Han, S. Line Laser Scanning Microscopy Based on the Scheimpflug Principle for High-Resolution Topography Restoration and Quantitative Measurement. Appl. Opt. 2023, 62, 5014–5022. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Han, J.; Fu, X.; Lin, H.B. An Online Measurement Method Based on Line Laser Scanning for Large Forgings. Int. J. Adv. Manuf. Technol. 2014, 70, 439–448. [Google Scholar] [CrossRef]
  14. Li, K.; Zhang, Z.; Lin, J.; Sato, R.; Matsukuma, H.; Gao, W. Angle Measurement Based on Second Harmonic Generation Using Artificial Neural Network. Nanomanuf. Metrol. 2023, 6, 28. [Google Scholar] [CrossRef]
  15. Bešić, I.; Gestel, V.N.; Kruth, J.; Bleys, P.; Hodolič, J. Accuracy Improvement of Laser Line Scanning for Feature Measurements on CMM. Opt. Lasers Eng. 2011, 49, 1274–1280. [Google Scholar] [CrossRef]
  16. Svetkoff, D.J. A High Resolution, High Speed 3-D Laser Line Scan Camera for Inspection and Measurement. In Proceedings of the 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, Philadelphia, PA, USA, 1–3 November 1989; Volume 1194, pp. 253–263. [Google Scholar] [CrossRef]
  17. Han, M.; Wang, X.; Li, X. Fast and Accurate Fringe Projection Based on a MEMS Microvibration Mirror. In Optical Metrology and Inspection for Industrial Applications XI; SPIE: Bellingham, WA, USA, 2024; p. 2. [Google Scholar] [CrossRef]
  18. Xia, B.; Li, Z.; Huang, J.; Zeng, K.; Pang, S. Efficient Measurement of Power Tower Based on Tilt Photography with Unmanned Aerial Vehicle and Laser Scanning. J. Eng. 2021, 2021, 724–730. [Google Scholar] [CrossRef]
  19. Wang, Z.; Fu, Y.; Zhong, K.; Ni, W.; Bao, W. Fast 3D Laser Scanning of Highly Reflective Surfaces Based on a Dual-Camera System. J. Mod. Opt. 2021, 68, 1229–1239. [Google Scholar] [CrossRef]
  20. Sato, R.; Li, X.; Fischer, A.; Chen, L.-C.; Chen, C.; Shimomura, R.; Gao, W. Signal Processing and Artificial Intelligence for Dual-Detection Confocal Probes. Int. J. Precis. Eng. Manuf. 2024, 25, 199–223. [Google Scholar] [CrossRef]
  21. Xu, P.; Yao, X.; Chen, L.; Zhao, C.; Liu, K.; Moon, S.K.; Bi, G. In-Process Adaptive Dimension Correction Strategy for Laser Aided Additive Manufacturing Using Laser Line Scanning. J. Mater. Process. Technol. 2022, 303, 117544. [Google Scholar] [CrossRef]
  22. Han, M.; Xing, Y.; Wang, X.; Li, X. Projection Superimposition for the Generation of High-Resolution Digital Grating. Opt. Lett. 2024, 49, 4473–4476. [Google Scholar] [CrossRef]
  23. Han, Y.L.; Sun, H.L.; Zhang, R.F. Three-Dimensional Linear Restoration of a Tunnel Based on Measured Track and Uncontrolled Mobile Laser Scanning. Sensors 2021, 21, 3815. [Google Scholar] [CrossRef] [PubMed]
  24. Wang, S.; Luo, L.; Li, X. Design and Parameter Optimization of Zero Position Code Considering Diffraction Based on Deep Learning Generative Adversarial Networks. Nanomanuf. Metrol. 2024, 7, 2. [Google Scholar] [CrossRef]
  25. Mao, Z.R.; Zhang, C.L.; Guo, B.J.; Xu, Y.P.; Kong, C.; Zhu, Y.; Xu, Z.J.; Jin, J. The Flatness Error Evaluation of Metal Workpieces Based on Line Laser Scanning Digital Imaging Technology. Photonics 2023, 10, 12. [Google Scholar] [CrossRef]
  26. Ma, Q.; Yu, H. Artificial Intelligence-Enabled Mode-Locked Fiber Laser: A Review. Nanomanuf. Metrol. 2023, 6, 36. [Google Scholar] [CrossRef]
  27. Chen, Z.; Yang, M.; Zhang, J.; Zhang, M.; Liang, C.; Wang, D. Camera Calibration Based on Hybrid Differential Evolution and Crayfish Optimization Algorithm. Opt. Lasers Eng. 2025, 193, 109088. [Google Scholar] [CrossRef]
  28. Li, L.; Xiao, Z.; Hu, T. Camera Calibration Optimization Algorithm Based on Nutcracker Optimization Algorithm. Sensors 2025, 25, 3521. [Google Scholar] [CrossRef]
  29. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep Learning and Its Applications to Machine Health Monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237. [Google Scholar] [CrossRef]
  30. Huang, J.; Liu, S.; Liu, J.; Jian, Z. Camera Calibration Optimization Algorithm That Uses a Step Function. Opt. Express 2024, 32, 18453–18471. [Google Scholar] [CrossRef]
  31. Wang, Z.R.; Chen, G.C.; Tian, L.S. Optimization of Stereo Calibration Parameters for the Binocular Camera Based on Improved Beetle Antennae Search Algorithm. J. Phys. Conf. Ser. 2021, 2029, 012000. [Google Scholar] [CrossRef]
  32. Caggiano, A.; Zhang, J.; Alfieri, V.; Caiazzo, F.; Gao, R.; Teti, R. Machine Learning-Based Image Processing for On-Line Defect Recognition in Additive Manufacturing. CIRP Ann. 2019, 68, 451–454. [Google Scholar] [CrossRef]
  33. Guan, C.X.; Wang, X.G.; Guan, S.X.; Ding, J.; He, Z.L.; Tang, G.F. Research on Camera Calibration Optimization Method Based on Chaotic Sparrow Search Algorithm. In Proceedings of the International Conference on Biomedical and Intelligent Systems (IC-BIS 2022), Chengdu, China, 24–26 June 2022; Volume 12458, p. 124583C. [Google Scholar] [CrossRef]
  34. Deng, L.; Lu, G.; Shao, Y.; Fei, M.; Hu, H. A Novel Camera Calibration Technique Based on Differential Evolution Particle Swarm Optimization Algorithm. Neurocomputing 2016, 174, 456–465. [Google Scholar] [CrossRef]
  35. Guo, J.; Zhu, Y.; Wang, J.Y.; Du, S.; He, X. Research on Camera Calibration Optimization Method Based on Improved Sparrow Search Algorithm. J. Electron. Imaging 2023, 32, 013040. [Google Scholar] [CrossRef]
  36. Guan, Q.; Chen, S.Y.; Wang, W.L.; Li, Y.F. Grid-pattern Design for Fast Scene Reconstruction by a 3D Vision Sensor. In Proceedings of the Optical Measurement Systems for Industrial Inspection IV, Munich, Germany, 13–17 June 2005; Volume 5856, pp. 201–209. [Google Scholar] [CrossRef]
  37. Yu, S.J.; Choi, J. Pattern Design for Structured Light System. Adv. Sci. Lett. 2017, 23, 10223–10227. [Google Scholar] [CrossRef]
  38. Yin, W.; Feng, S.; Tao, T.; Huang, L.; Trusiak, M.; Chen, Q.; Zuo, C. High-Speed 3D Shape Measurement Using the Optimized Composite Fringe Patterns and Stereo-Assisted Structured Light System. Opt. Express 2019, 27, 2411–2431. [Google Scholar] [CrossRef] [PubMed]
  39. Sun, X.L. Generation of Structured Illumination for Three-Dimensional Shape Measurement Using Phase Modulation. In Computational Imaging VI; SPIE: Bellingham, WA, USA, 2021; Volume 11731, pp. 51–56. [Google Scholar] [CrossRef]
  40. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep Learning for Smart Manufacturing: Methods and Applications. J. Manuf. Syst. 2018, 48, 144–156. [Google Scholar] [CrossRef]
  41. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  42. Heikkila, J.; Silven, O. A Four-Step Camera Calibration Procedure with Implicit Image Correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997. [Google Scholar] [CrossRef]
  43. Kang, S.; Kim, S.D.; Kim, M. Structural-Information-Based Robust Corner Point Extraction for Camera Calibration Under Lens Distortions and Compression Artifacts. IEEE Access 2021, 9, 151037–151048. [Google Scholar] [CrossRef]
Figure 1. Layout and FOV of four structured light scanners.
Figure 2. Coverage angle and curvature analysis of a single scanner.
Figure 3. Physical image of the chessboard calibration board.
Figure 4. Schematic of the calibration process.
Figure 5. Logic of the calibration algorithm.
Figure 6. Schematic of coordinate transformation in stereo calibration.
Figure 7. Schematic of perspective projection and radial distortion.
Figure 8. Actual coverage of the four scanners.
Figure 9. Local magnification of FOV overlap.
Figure 10. Physical image of the four scanners and test workpieces.
Figure 11. Point cloud overlap images acquired by the actual scanner.
Table 1. Key parameter values of the system.

| Key Parameter | Parameter Value |
|---|---|
| Working distance D | 1.1 m |
| Projector length d | Approximately 60 cm (57–63 cm) |
| Workpiece radius R | 0.75 m |
| Lateral distance W covered by the scanner on the workpiece | 1.17 m |
Table 2. Measurement results.

| Serial Number | Cylindrical-Like (m²) | True Value (m²) | Relative Error (%) | Irregular Workpiece (m²) | True Value (m²) | Relative Error (%) |
|---|---|---|---|---|---|---|
| 1 | 1.809687 | 1.81581 | 0.337 | 1.752301 | 1.75569 | 0.193 |
| 2 | 1.809538 | 1.81581 | 0.345 | 1.751374 | 1.75569 | 0.246 |
| 3 | 1.809525 | 1.81581 | 0.346 | 1.751741 | 1.75569 | 0.225 |
| 4 | 1.809287 | 1.81581 | 0.359 | 1.751441 | 1.75569 | 0.242 |
| 5 | 1.809097 | 1.81581 | 0.370 | 1.752639 | 1.75569 | 0.174 |
| 6 | 1.809341 | 1.81581 | 0.356 | 1.751210 | 1.75569 | 0.255 |
| 7 | 1.809393 | 1.81581 | 0.353 | 1.751405 | 1.75569 | 0.244 |
| 8 | 1.810142 | 1.81581 | 0.312 | 1.751234 | 1.75569 | 0.253 |
| 9 | 1.810102 | 1.81581 | 0.314 | 1.751868 | 1.75569 | 0.218 |
| 10 | 1.810147 | 1.81581 | 0.312 | 1.751464 | 1.75569 | 0.241 |
| 11 | 1.810775 | 1.81581 | 0.277 | 1.751931 | 1.75569 | 0.214 |
Table 3. Statistical distribution of measurement relative errors.

| Statistic | Irregular Annular Workpiece (%) | Cylindrical-Like Workpiece (%) |
|---|---|---|
| Upper Limit | 0.255 | 0.370 |
| 75th Percentile | 0.246 | 0.356 |
| Median | 0.241 | 0.345 |
| 25th Percentile | 0.214 | 0.312 |
| Lower Limit | 0.174 | 0.277 |
The narrow range of percentiles (e.g., 0.081% for the irregular workpiece, 0.093% for the cylindrical-like workpiece) indicated consistent measurement results across repeated trials.
