Sensors
  • Article
  • Open Access

7 May 2022

Linear Laser Scanning Measurement Method Tracking by a Binocular Vision

1 College of Metrology & Measurement Engineering, China Jiliang University, Hangzhou 310018, China
2 College of Information Engineering, China Jiliang University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
This article belongs to the Section Optical Sensors

Abstract

The 3D scanning of a freeform structure relies on the laser probe and the localization system. The localization system, which determines the quality of the point cloud reconstruction, generates positioning errors when the laser probe moves quickly along complex paths. To reduce these errors, this paper proposes a linear laser scanning measurement method based on binocular vision calibration. A simple and effective eight-point positioning marker attached to the scanner is proposed to complete the positioning and tracking procedure. On this basis, the marked point detection method based on image moments and the principle of global coordinate system calibration are introduced in detail. According to the invariance of spatial distance, a method for matching corresponding points between different coordinate systems is designed. The experimental results show that the binocular vision system can complete localization under different light intensities and in complex environments; the repeated translation error of the binocular vision system is less than 0.22 mm, while the rotation error is less than 0.15°. The repeated error of the measurement system is less than 0.36 mm, which meets the requirements of the 3D shape measurement of complex workpieces.

1. Introduction

In recent years, 3D topography measurement technology has been widely used in cultural relic protection, the aerospace industry, reverse engineering, biomedicine, and other fields [1,2,3,4]. Non-contact scanning methods based on optics and vision have become increasingly popular due to their high accuracy, good flexibility, and fast speed [5].
According to the measurement principle, optical scanning methods mainly consist of the laser scanning method, the interferometry method, and the structured light method [6,7,8]. The linear laser scanning method, based on laser triangulation [9], is suitable for the measurement of complex hole surfaces and generally requires a mobile platform to complete the scanning measurement. The interferometry method [10] uses dual-beam, multi-beam, or holographic interference to generate interference fringes, and the geometric shape of the measured object is recovered from the differences between the fringes. The measurement stability of this method is easily affected by vibration, humidity, and other factors. By utilizing image coding techniques such as the Gray code or the step-by-step phase-shifting method [11], the surface structured light method [12] can recover the 3D coordinates of an object's surface in the measurement system. Compared with the linear laser scanning method, this approach is more complicated and time-consuming. Therefore, the objective of this paper is to achieve large-sized 3D measurement with the line laser technique.
Limited by the measurement principle, efficiency, and other issues, however, the field of view of optical scanning equipment cannot be made large enough for the probe to measure the topography of a large-sized part directly. It is therefore necessary to adopt a hybrid measurement method to expand the measurement range. Moreover, since local coordinate measurements alone cannot achieve global unification of measured values, fusion and conversion methods are required for data acquired under different measurement systems [13]. Three kinds of methods [14] are applied to 3D data stitching, as follows:
The first method is the multi-sensor perspective splicing method. For example, Liu et al. [15] proposed a 3D measurement system to improve the accuracy of point cloud stitching based on the Indoor Global Positioning System (iGPS) [16]. Du et al. [17] proposed a flexible large-scale 3D scanning system that combines a robot, a binocular structured light scanner, and a laser tracker. This kind of method relies on high-accuracy instruments such as the laser tracker and the iGPS to directly measure the pose of the scanning device in the global coordinate system. Despite its high price, it offers high measurement accuracy and speed. In addition, the relative position relationship between the sensors needs to be calibrated before use.
The second method is the mechanical splicing method. For example, Novak et al. [18] proposed a 3D foot-shape rotating scanning measurement system based on the laser multiple-line triangulation principle. Liu et al. [19] introduced an axis-eye calibration algorithm to design a line laser rotating scanning measurement system that registers the point clouds in the same coordinate system. The mechanical splicing method mainly uses a moving platform, such as a parallel guide rail or a high-precision rotating table, to complete data splicing with a known moving speed or angular velocity. The measurement accuracy and range of this type of method are limited by the motion mechanism.
The last method is the marker-assisted splicing method. For example, Barone et al. [20] developed a stereo vision system to efficiently align 3D point clouds, which allows for automatic alignment by detecting fiducial markers distributed on overlapping areas of adjacent images. Wang et al. [21] presented a mobile 3D scanning system based on known marked points. This method relies on artificial markers to build a connection relationship that completes the conversion from the local measurement coordinate system to the global coordinate system. However, pasting the markers easily affects the surface characteristics of the measured workpiece, which leads to error accumulation at the scanning probe.
In addition, hybrid scanning measurement has been improved for different application areas and work scenarios. For example, Chen et al. [22] proposed a two-stage binocular vision system to measure the relative pose between two components, aiming to reduce coordinate transformation errors. Yin et al. [23] proposed a free-moving surface reconstruction technique based on binocular structured light; this method eliminates the movement constraints of the parallel guide rail and reduces error accumulation but is not suitable for large-sized objects. Liu et al. [24] built a binocular structured light system combined with a wide-field-of-view camera, using a plane target as an intermediary to align local point cloud data. Shi et al. [25] proposed a 3D scanning measurement method based on a stereoscopic tracker, in which the position alignment of the scanner in different views is calculated by tracking LED markers fixed to the scanner; under occlusion, however, the LED markers may be seriously deformed, resulting in inaccurate positioning. Hu et al. [26] proposed a real-time catadioptric stereo tracking method that realizes stereo measurement under monocular vision. Huang et al. [27] designed a 3D scanner with a zoom lens unit, which realizes large-area scanning at a low magnification rate and high-precision detail scanning at a high magnification rate; the two scanning results complement each other to ensure that the system has both high reconstruction accuracy and a large scanning area. Jiang et al. [28] proposed a system calibration method to reduce measurement errors caused by scale differences, which is suitable for combined measurement systems with different scales and ranges.
In order to enhance the flexibility of hybrid measurement, a method for localized surface scanning measurement is proposed by using circular markers to position the laser scanner. The main contribution lies in the design of a new circular mark recognition method and a coordinate transformation calibration method in the combined system. The proposed circle detection method based on image moments takes the weighted mean of the centroid of multiple circles as observation points, which can reduce the center deviation error and improve positioning accuracy. Different from the existing work, the laser scanner can be accurately located without fully identifying all the marked points, which can overcome the limitation of local occlusion and enhance the robustness of the system to a certain extent. Finally, through the calibration of each coordinate system, the alignment of 3D point cloud data at different angles is completed.
The remainder of this paper is structured as follows. Section 2 introduces the structure of the measurement system. Section 3 introduces the measurement principle and detection algorithms in the positioning process. In Section 4, the proposed method is verified through calibration experiments and measurement experiments, and concluding remarks are provided in Section 5.

2. Overview of the Laser Scanning System

The laser scanning system is designed based on binocular vision and mainly consists of a laser scanner, a 6-DOF robot, and a binocular camera. The structure and composition of the system are shown in Figure 1a. The laser scanner is fixed to the end of the robotic arm through a flange. As significant marks for the binocular camera, circular marked points are randomly pasted on the surface of the laser scanner to determine the scanner's pose. The binocular camera is installed at a suitable distance to ensure that the circular marked points can be recognized.
Figure 1. (a) System composition and structure diagram. (b) The markers and the marker coordinate system.
In order to facilitate positioning of the scanner, eight circular marked points are randomly pasted on the scanner to achieve robust and precise positioning. The inner diameter of each circular marked point is 10 mm. With the first center point in the upper left corner as the starting point, each circle is numbered in a clockwise direction. The first center point in the upper left corner also serves as the reference point for establishing the marker coordinate system, as shown in Figure 1b. The x-axis is parallel to the upper surface of the scanner, the y-axis is parallel to the right side of the scanner, and the three mutually perpendicular axes satisfy the right-hand rule.
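For illustration, the following minimal numpy sketch shows one plausible way to build such a right-handed orthonormal frame from three reconstructed marker centers. The choice of which centers serve as the x-direction hint and the in-plane hint is an assumption made here for illustration; the paper defines the axes relative to the scanner's surfaces.

```python
import numpy as np

def marker_frame(origin, x_hint, plane_point):
    """Build a right-handed orthonormal frame (candidate MCS) from three
    non-collinear marker centers: the reference center, one roughly along
    the intended x direction, and one more point in the marker plane."""
    x = x_hint - origin
    x /= np.linalg.norm(x)
    v = plane_point - origin
    z = np.cross(x, v)                 # normal to the marker plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                 # completes the right-handed triad
    return np.column_stack([x, y, z])  # columns are the frame axes
```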
In the process of scanning, the laser scanner moves in any direction, driven by the mechanical arm, to complete the scanning measurement of the workpiece. The point cloud data, expressed in the scanner measurement coordinate system, changes constantly as the scan proceeds. To splice all point cloud data together, it is necessary to transform them into a fixed coordinate system. The process of unifying the point cloud data into the same coordinate system is shown in Figure 2.
Figure 2. Flow chart of linear laser scanning measurement.
The scanner measurement coordinate system is taken as the laser coordinate system (LCS), i.e., the local coordinate system, denoted $O_s X_s Y_s Z_s$. The marker coordinate system (MCS) is denoted $O_b X_b Y_b Z_b$. The camera coordinate system (CCS), which serves as the global coordinate system, is denoted $O_c X_c Y_c Z_c$. Point cloud data in the local coordinate system are unified into the global coordinate system through the following equation:
$$P_c = \begin{bmatrix} R_{bc} & t_{bc} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_{sb} & t_{sb} \\ 0 & 1 \end{bmatrix} P_s$$
where $P_s$ is the 3D coordinate of a surface point of the workpiece in the LCS, and $P_c$ is the 3D coordinate of the corresponding point in the CCS. $R_{sb}$ and $t_{sb}$ denote the rotation matrix and translation vector from the LCS to the MCS, respectively, and $R_{bc}$ and $t_{bc}$ denote the rotation matrix and translation vector from the MCS to the CCS, respectively.
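As a minimal sketch of this transformation chain, the following Python code composes the two homogeneous transforms; the numerical rotation and translation values are placeholders, not the paper's calibration results.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder calibration results: (R_sb, t_sb) maps LCS -> MCS,
# (R_bc, t_bc) maps MCS -> CCS.
R_sb, t_sb = np.eye(3), np.array([89.5, 270.5, 14.9])
R_bc, t_bc = np.eye(3), np.array([0.0, 0.0, 1600.0])

T_sb = homogeneous(R_sb, t_sb)
T_bc = homogeneous(R_bc, t_bc)

# A surface point measured in the laser coordinate system (homogeneous form).
P_s = np.array([10.0, 20.0, 300.0, 1.0])

# Chain the two transforms to express the point in the camera (global) frame.
P_c = T_bc @ T_sb @ P_s
print(P_c[:3])
```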

4. Experiment and Analysis

The physical diagram of the whole system is shown in Figure 8. The binocular vision system consists of two JHSM130Bs industrial cameras with a frame rate of 30 fps and a resolution of 1280 × 1024 pixels. The focal length of the industrial fixed-focus lens is 4 mm. The linear laser scanner is an nxSensor-I with a frame rate of 15 fps; a maximum of 960 points can be obtained in one scan.
Figure 8. Physical diagram of system configuration.
In the laser scanner used in this paper, the optical axis of the laser is perpendicular to the surface of the measured object, and the optical axis of the camera is at an angle of 55° to the laser plane. This geometry is suitable for measuring objects with small surface height differences and obtains a higher resolution [33]. For this laser measurement system, the single measurement distance is at most 200 mm, the scanning depth range in the z direction is 225–345 mm, the resolution in the depth direction reaches 0.0050 mm, the accuracy is 0.045 mm, and the minimum movement displacement of the driving system is 0.050 mm.

4.1. Accuracy Experiment for the Binocular Reconstruction

A checkerboard with 9 × 6 corner points is applied to calibrate the internal and external parameters of the two cameras. The side length of each small square of the checkerboard is 30 mm. The internal and external parameters obtained by the calibration are shown in Table 1. Calibrating the binocular camera with the checkerboard takes about two minutes.
Table 1. Internal parameter coefficients of two cameras.
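For reference, the following is a hedged OpenCV sketch of this checkerboard-based stereo calibration. The image paths and the CALIB_FIX_INTRINSIC strategy are assumptions; the paper does not specify its calibration software.

```python
import glob
import cv2
import numpy as np

# 9 x 6 inner corners, 30 mm squares (as in the paper).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 30.0

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Calibrate each camera individually, then solve the right->left pose (R, t).
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)
rms, K1, d1, K2, d2, R, t, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print("stereo RMS reprojection error:", rms)
```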
The pose of the right camera relative to the left camera is given below; the translation vector is expressed in millimeters.
$$R_{lr} = \begin{bmatrix} 0.999503 & 0.022021 & 0.022570 \\ 0.021810 & 0.999717 & 0.009537 \\ 0.022773 & 0.009040 & 0.999670 \end{bmatrix}, \quad t_{lr} = \begin{bmatrix} 264.449 \\ 7.38219 \\ 2.20070 \end{bmatrix}$$
After calibration, the reprojection error of the camera is 0.14 pixels, and the epipolar error is 0.18 pixels. According to the calibrated camera parameters, the eight marked points randomly pasted on the surface of the laser scanner can be accurately positioned. The marked points are ordered as follows: taking the first marked point in the upper left corner as the starting point, the remaining points are numbered sequentially in a clockwise direction. In the positioning experiment, the matching result of the binocular camera at a certain moment is shown in Figure 9, and the corresponding left and right pixel coordinates of the marked points are listed in Table 2.
Figure 9. The matching result of the left image and the right image at a certain moment.
Table 2. The coordinate value of the marked points at a certain time.
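Given the calibrated parameters, the matched marker centroids can be triangulated into 3D points. The sketch below uses OpenCV's triangulatePoints; the variable names and the assumption that the pixel coordinates are already undistorted and matched in marker order are illustrative.

```python
import cv2
import numpy as np

def triangulate_markers(K1, K2, R, t, pts_left, pts_right):
    """Triangulate matched marker centroids into 3D points in the left-camera
    frame. pts_left / pts_right are (N, 2) pixel coordinates, assumed to be
    undistorted and matched in the same marker order."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left projection
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])           # right projection
    pl = np.asarray(pts_left, dtype=float).T            # 2 x N
    pr = np.asarray(pts_right, dtype=float).T
    X = cv2.triangulatePoints(P1, P2, pl, pr)           # 4 x N homogeneous
    return (X[:3] / X[3]).T                             # N x 3 Euclidean

# Usage with hypothetical matched centroids (one row per marker):
# markers_3d = triangulate_markers(K1, K2, R, t, left_centroids, right_centroids)
```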
For the eight marked points, the TRITOP optical measurement system developed by the German company GOM offers high measurement accuracy and can directly measure the center coordinates of circular marked points. The TRITOP system consists of a DSLR camera with a maximum resolution of 4288 × 2824 pixels and a fixed focal length of 35 mm, contrast-coded and uncoded points, and scale bars. During measurement, the scale bars are placed on both sides of the object, the contrast-coded points are pasted on its surface, and the digital camera takes continuous photos from different perspectives; the 3D coordinates of the circular marked points are then obtained by processing these photos with the system's software. Given its high measurement accuracy, the TRITOP system is used to measure the 3D coordinates of the eight marked points, and the Euclidean distance between pairs of marked points is calculated as the reference value. The binocular positioning system built in this paper is used to perform 71 random positioning experiments on the laser scanner; the results of the first 12 groups are shown in Table 3.
Table 3. Measurement results of the distance between the marked points.
In these 71 experiments, the comparison curve between the positioning distance of the marked points and the actual distance is shown in Figure 10.
Figure 10. Comparison of the reconstruction distance and the actual distance.
AVG denotes the average of the measured values, RMSE the root mean square error, AE the absolute error, D-AE the absolute error of the distance between two points, and $p_i$ the i-th marked point. It can be seen from Table 4 and Figure 10 that the reconstruction measurement of the binocular positioning system has good repeatability and reliability, and the reconstruction accuracy is high: the absolute error is less than 0.08 mm, and the root mean square error is better than 0.01 mm, ensuring high-precision positioning of the marked points. This underpins the accuracy of the subsequent pose calculation.
Table 4. Comparison of the distance measured result with the actual value.
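A small numpy sketch of these summary statistics follows. The exact definitions used in the paper are assumed here: AE as the deviation of the mean from the TRITOP reference, and RMSE as the scatter of the repeated measurements about their mean.

```python
import numpy as np

def repeatability_metrics(measured, reference):
    """Summarize repeated distance measurements against a reference value.
    Assumed definitions: AVG = mean of the measurements, AE = |AVG - reference|,
    RMSE = root-mean-square scatter of the measurements about their mean."""
    m = np.asarray(measured, dtype=float)
    avg = m.mean()
    ae = abs(avg - reference)
    rmse = np.sqrt(np.mean((m - avg) ** 2))
    return avg, ae, rmse

# e.g. 71 repeated reconstructions of one inter-marker distance (values in mm):
# avg, ae, rmse = repeatability_metrics(distances, tritop_reference)
```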

4.2. Evaluation of Binocular Positioning Performance

For the laser scanning measurement system based on binocular positioning proposed in this paper, ensuring positioning accuracy in different environments is the premise for high-precision 3D reconstruction. This part therefore evaluates the positioning accuracy of the system with respect to illumination, distance, viewing angle, occlusion, and other factors. A 6-DOF mechanical arm serves as the experimental mobile platform; its rotation accuracy is 0.003°, and its translation accuracy is 0.03 mm.

4.2.1. Localization Evaluation under Light Changes

This part analyzes the influence of different illumination levels on the positioning performance of the eight-point markers. Different lighting conditions are simulated mainly by changing the exposure time of the camera. The binocular camera is placed 1.6 m away from the laser scanner, and the exposure time is varied from 400 to 3200 µs in steps of 100 µs. In total, 29 groups of data are collected, with 30 pairs of images in each group. The average pose solved from these 30 images is taken as the pose estimate for the current exposure time. The pose is expressed as Euler angles and translations along the x, y, and z directions, represented by rx, ry, rz, tx, ty, and tz, respectively. Figure 11 shows the images recorded at two of the exposure times. In particular, in the first pair of images in Figure 11, captured in a relatively dark environment, the eight markers are not all recognized, but the pose can still be calculated by correctly matching a subset of the points.
Figure 11. Positioning images with exposure times of 400 µs and 3200 µs.
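The pose representation described above can be sketched as follows. The x-y-z Euler convention and the plain arithmetic averaging of the 30 per-frame poses are assumptions, reasonable here because the rotations involved are small.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_euler(R, t):
    """Express a pose as (rx, ry, rz) Euler angles in degrees plus the
    translation (tx, ty, tz); the x-y-z convention is an assumption."""
    rx, ry, rz = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
    return np.array([rx, ry, rz, *t])

# Average the 30 per-frame poses of one exposure group (angles are small,
# so an arithmetic mean of Euler angles is an acceptable approximation):
# group_pose = np.mean([pose_to_euler(R_i, t_i) for R_i, t_i in frames], axis=0)
```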
Because the positions of the binocular camera and scanner have not moved, the pose obtained from the theoretical analysis should be relatively consistent. The average value of 29 groups of data within the exposure range is taken as the reference value to calculate the deviation between the solved pose and the reference value under each exposure condition. The experimental results are shown in Figure 12.
Figure 12. Positioning errors under different brightness conditions. (a) Rotational errors on the x, y, and z axes. (b) Translation errors on the x, y, and z axes.
As can be seen in Figure 12, pose estimation is only slightly perturbed across the range of exposure conditions. The rotation about the z axis is the most stable, with an error of less than 0.015°. The translation in the y and z directions is also only mildly affected, with errors of less than 0.076 mm and 0.12 mm, respectively. The maximum angular deviations in the x and y directions are 0.11° and 0.16°, respectively, both appearing under high exposure. The lighting condition, however, has a pronounced influence on the translation in the x direction, and the error grows with increasing brightness. A possible reason is that as the brightness increases, the extracted circular contours shrink, shifting the detected circle centers.

4.2.2. Positioning Accuracy at Different Distances

This section studies the positioning accuracy of the system at different distances. Based on the experiment in the previous section, the exposure time of the camera is set to 1200 µs in subsequent experiments to obtain better recognition results. The detection distance ranges from 1.2 to 2.3 m. With the industrial mechanical arm as the moving platform, 30 frames are recorded after every 0.1 m of movement along the optical axis of the camera, and their average is taken as the current value. Figure 13 shows some of the experimental images. The first frame of each set of data is used as the initial pose, and the relative poses of the other images are then obtained. The Euclidean norm of the translation error is used for comparison, calculated as follows:
$$\Delta t = \sqrt{t_x^2 + t_y^2 + t_z^2}$$
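A minimal sketch of this evaluation protocol, assuming the per-frame poses are available as 4 × 4 homogeneous matrices (the variable T_list is hypothetical):

```python
import numpy as np

def relative_pose(T0, Ti):
    """Pose of frame i relative to the first frame: inv(T0) @ Ti, where T0
    and Ti are 4x4 homogeneous scanner poses in the camera frame."""
    return np.linalg.inv(T0) @ Ti

def translation_error(T_rel):
    """Euclidean norm of the translation part, i.e. the Δt defined above."""
    return np.linalg.norm(T_rel[:3, 3])

# Usage: T_list holds the per-frame poses of one distance group.
# errors = [translation_error(relative_pose(T_list[0], T)) for T in T_list[1:]]
```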
Figure 13. Left camera images at distances of 1.2 m and 2.3 m.
As can be seen in Figure 14, the translation deviation of the positioning system is relatively stable up to 1.8 m, with an error of less than 0.50 mm. When the distance increases to 2.3 m, the translation error reaches 1.34 mm. A possible reason is that at long range the circular marks appear too small to be identified reliably, resulting in a large error.
Figure 14. Translation errors at different distances.

4.2.3. Positioning Accuracy at Different Angles

This section examines in detail the localization accuracy of the system under different rotation angles and occlusions. The binocular camera is placed 1.6 m away from the laser scanner. With the y axis of the camera as the rotation axis, the scanner is rotated from 0 to 40°, and ten pairs of continuous images are recorded at every 5° of rotation. To compare robustness against occlusion, two circular markers are artificially blocked at the same position, and ten images are again recorded. Figure 15 shows some experimental images at a rotation angle of 30°. The first frame of each set of data is used as the initial pose, the relative poses of the other images are then obtained, and the average errors of each degree of freedom are calculated. The experimental results are shown in Figure 16.
Figure 15. Left camera image at a rotation angle of 30°. (a) In the case of no occlusion. (b) In the case of the occlusion of two marker points.
Figure 16. The pose errors based on eight markers and six markers at different angles. (a) Rotation error along the x-axis. (b) Rotation error along the y-axis. (c) Rotation error along the z-axis. (d) Translation error along the x-axis. (e) Translation error along the y-axis. (f) Translation error along the z-axis.
In Figure 16, when there is no occlusion, the camera can fully recognize all eight points. The maximum rotation errors in the x, y, and z directions are −0.19°, −0.24°, and 0.23°, respectively, and the maximum translation errors are 0.28 mm, 0.35 mm, and 0.25 mm, respectively. In the case of artificial occlusion, the camera can only locate the scanner through six markers. The maximum average errors of rx, ry, and rz are −0.23°, −0.21°, and 0.26°, and the maximum average errors of tx, ty, and tz are 0.27 mm, 0.35 mm, and 0.41 mm, respectively. The experimental results show that the positioning accuracy does decrease when the markers are partially occluded. The translation error in the z direction is affected most, increasing from 0.25 mm to 0.41 mm. Occlusion has little effect on the rotation error, which remains below 0.26° in both cases.

4.3. Hand–Eye Calibration Experiment

The calibration procedure is as follows:
(1) The standard sphere with a diameter of 30.0055 mm is placed in a suitable position. The binocular camera is installed at a suitable location to ensure that the laser scanner is within the field of view of the positioning system.
(2) The robot is controlled to drive the linear laser scanner to a position where the laser beam is effectively projected on the surface of the standard sphere. The line point cloud data are then acquired, and images are captured by the left and right cameras at the same time.
(3) Step (2) is repeated N times (N > 8).
(4) The line point cloud data on the surface of the standard sphere are extracted and fitted, and the coordinate of the sphere center is calculated in the current laser coordinate system (a least-squares sphere-fitting sketch is given below). According to the matching algorithm in Section 3.3, the circular marked points are recognized, and the spatial position changes of the marked points are calculated.
(5) According to the standard sphere calibration model established in Section 3.4, the conversion relationship between LCS and MCS is calculated as follows:
$$R_{sb} = \begin{bmatrix} 0.996745 & 0.00427368 & 0.00807998 \\ 0.0146129 & 0.00129409 & 0.989261 \\ 0.0293743 & 0.999739 & 0.0376670 \end{bmatrix}, \quad t_{sb} = \begin{bmatrix} 89.5174 \\ 270.536 \\ 14.8889 \end{bmatrix}$$
At the same time, the 3D coordinate of the center of the standard sphere in CCS during the calibration process is as follows:
$$P_c = \begin{bmatrix} 35.9700 & 216.343 & 1303.39 \end{bmatrix}^T$$
The whole calibration process takes about 10 min: roughly 9 min to acquire the point cloud data of multiple poses by moving the scanner, and 189 ms to compute the transformation matrix in software.
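As referenced in step (4), a least-squares sphere fit recovers the sphere center from the line point clouds. The algebraic fit below is a common approach, offered here as a sketch; the paper does not specify its exact fitting method.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: from (x-a)^2+(y-b)^2+(z-c)^2 = r^2,
    solve 2ax + 2by + 2cz + d = x^2 + y^2 + z^2 for the center (a, b, c),
    with d = r^2 - a^2 - b^2 - c^2."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Fit the line point clouds gathered on the 30.0055 mm standard sphere:
# center_lcs, r = fit_sphere(line_scan_points)   # center in the current LCS
```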
In order to verify the calibration accuracy of the system, the standard sphere is scanned and measured using the calibrated position relationship without changing the sphere's position. Combined with the known diameter of the standard sphere and the sphere center coordinate obtained by the calibration, the standard sphere model is established. The deviation analysis between the scanned point cloud and the fitted standard sphere is shown in Figure 17.
Figure 17. Deviation analysis between the standard sphere model and the scanned point cloud.
In Figure 17, the parts from green to red represent positive deviation, and the parts from green to blue represent negative deviation. The results show that the error of the point cloud near the edge of the standard sphere is relatively large: the maximum positive deviation is 0.38 mm, and the maximum negative deviation is −0.27 mm. A possible reason is that the data obtained near the edge of the scanner's field are relatively unstable due to the scanner's measurement principle. The standard deviation of the geometric deviations is 0.043 mm, showing that the overall deviation is evenly distributed.

4.4. System Spatial Measurement Accuracy Experiment

In order to determine the spatial measurement accuracy of the system, several measurement experiments are carried out with standard parts: a standard sphere with a diameter of 30.0055 mm and a standard gauge block with a length of 70 mm, each accurate to 0.0015 mm. To verify the reliability of the standard part data, ten measurement experiments were carried out on the standard sphere and the standard gauge block using a coordinate measuring machine (CMM). The bridge-type CMM is a Hexagon Global 09.15.08 with an indication error of 1.2 µm. Figure 18 shows photos of the CMM measuring the standard parts, and Table 5 records the measured diameter of the standard sphere and length of the standard gauge block.
Figure 18. Images of CMM measuring standard parts. (a) Measuring standard sphere. (b) Measuring standard gauge block.
Table 5. Measurement results measured with CMM.
It can be determined from the data in Table 5 that the values measured by the CMM are relatively stable and reliable. The average measured diameter of the standard sphere is 30.002 mm (average radius 15.001 mm), and the average measured length of the standard gauge block is 70.000 mm. These two averages are taken as the reference true values of the standard parts for comparison with the system's measurements.
According to the calibrated position relationship, the positions of the standard sphere and the standard gauge block are fixed, and the scanner measures them in a variety of postures. For comparison, the self-positioning of the 6-DOF robot is also used to scan and measure the gauge block and the standard sphere [34]. The point cloud image and the rendering image of the standard sphere are shown in Figure 19, and those of the standard gauge block are shown in Figure 20.
Figure 19. The related images of the standard sphere. (a) The original image. (b) The point cloud image based on binocular camera positioning. (c) The rendering image based on (b). (d) Point cloud image based on robot positioning.
Figure 20. The related images of the standard gauge block. (a) The original image. (b) The point cloud image based on binocular camera positioning. (c) The rendering image based on (b). (d) Point cloud image based on robot positioning.
From the laser point cloud data obtained at each position, the radius of the fitted sphere is computed. The scanner is moved to different positions to measure the radius of the standard sphere and the length of the standard gauge block; twelve measurement results are recorded in Table 6. The data measured by the two methods are compared with the true values measured by the CMM, and the standard deviation and root mean square error of the system measurement are calculated, as shown in Table 7.
Table 6. Measurement results of the standard sphere radius and gauge block length.
Table 7. Comparison of the measured value with the actual value.
SD denotes the standard deviation. It can be seen from Table 7 that within the measurement range of 70 mm, the standard deviation is less than 0.17 mm, indicating that the measurement system has good stability. Compared with the splicing results using robot positioning, the average measurement error of our method is smaller. Compared with the CMM reference values, the root mean square error of the measurement system does not exceed 0.20 mm, the maximum measurement error for the standard sphere does not exceed 0.19 mm, and the maximum measurement error for the standard gauge block does not exceed 0.36 mm. In summary, based on the maximum measurement deviation, the spatial measurement accuracy of the system is determined to be 0.36 mm.

4.5. Scanning Measurement Application Experiment

In order to verify the feasibility of the entire scanning measurement system, it is applied to scan and reconstruct a ceramic bowl. The robot drives the laser scanner along the planned route; over the entire scan, the linear laser scans 1194 lines, and 977,292 points are obtained. The resulting point cloud of the ceramic bowl is shown in Figure 21b. Due to the path setting, some parts of the bowl are measured repeatedly, producing locally dense point clouds. With a distance tolerance of 0.5 mm as the constraint, the original point cloud is downsampled to obtain the image shown in Figure 21c. The downsampled point cloud is rendered with the Imageware software, yielding the 3D curved surface of the ceramic bowl shown in Figure 21d.
Figure 21. The related images of the ceramic bowl. (a) The original image. (b) The original point cloud image. (c) The point cloud image after downsampling. (d) The rendering image.
It can be seen in Figure 21b,c that there are fewer points on the side of the bowl where the curvature is larger, indicating that more measurements are required to obtain complete point cloud data at highly curved locations. In the repeatedly measured areas, the point cloud overlap is high, showing that the measurement system has good repeatability. In Figure 21d, the scanned point cloud fully recovers the 3D shape of the measured object with a good overall stitching effect, indicating that the measurement system is capable of the 3D measurement of a continuous surface.
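As an illustration of the downsampling step, the sketch below uses Open3D's voxel downsampling with a 0.5 mm grid as a stand-in for the 0.5 mm distance-tolerance constraint described above; the input file name is hypothetical.

```python
import numpy as np
import open3d as o3d

# points: (N, 3) array assembled from the 1194 scan lines, in mm.
points = np.loadtxt("bowl_points.xyz")          # hypothetical file name

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# A 0.5 mm voxel grid keeps one representative point per voxel, which
# enforces a minimum spacing comparable to the 0.5 mm tolerance.
down = pcd.voxel_down_sample(voxel_size=0.5)
print(len(down.points), "points after downsampling")
```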

5. Conclusions

This paper introduces a laser scanning measurement system based on binocular vision positioning, together with its calibration method. Compared with traditional methods, the circular markers are attached to the scanner, avoiding sticking marked points on the measured workpiece; no complex calibration process is required, and the positioning accuracy is not limited by a motion mechanism. The proposed circle detection method based on image moments takes the weighted mean of the centroids of multiple circles as observation points, which reduces the center deviation error and improves positioning accuracy. This method is also suitable for the rapid detection and recognition of other circular markers. Unlike a single rigid target, the designed marker arrangement can still be located under local occlusion, and the laser scanner can be accurately located without identifying all of the marked points, so the measurement range of the positioning system is wider. The system can therefore be conveniently applied to the on-site 3D shape measurement of large-sized workpieces with complex features.
Regarding the eight-point markers designed in this paper: (1) a marked point detection method is designed based on image moment properties, which enhances detection accuracy; (2) a spatial point matching method is built from both the epipolar constraint and the invariance of spatial distance; (3) the mathematical relationship between the laser point cloud and the global coordinates is obtained through a standard sphere calibration model, with the marker coordinate system serving as an intermediate bridge for data unification. The system is used to measure a ceramic bowl to verify its practicability and reliability. The experimental results show that the binocular vision system can complete localization under different light intensities and in complex environments; the repeated translation error of the binocular vision system is less than 0.22 mm, and the rotation error is less than 0.15°. The repetition error of the measurement system is less than 0.36 mm, and the average measurement result is better than that of the splicing method relying on the manipulator's self-positioning, which shows that the system can precisely position the scanner and meet the requirements of 3D shape measurement.
The eight-point marking method also has some limitations. When the measurement distance is large, the markers become too small for reliable positioning; their size can be increased in follow-up studies. When the rotation angle of the scanner is too large and the positioning system identifies fewer than five marked points, positioning becomes inaccurate. In follow-up work, we plan to address this by attaching circular markers to multiple surfaces of the scanner, so that the binocular positioning system can still recognize enough markers when the scanner rotates through a large angle, further enhancing the flexibility of the system.

Author Contributions

Conceptualization, C.W., L.Y., Z.L. and W.J.; methodology, C.W., L.Y., Z.L. and W.J.; software, C.W.; validation, C.W. and L.Y.; investigation, C.W.; resources, Z.L.; data curation, C.W. and L.Y.; writing—original draft preparation, C.W., L.Y. and W.J.; project administration, C.W., L.Y., Z.L. and W.J.; supervision, funding acquisition, L.Y., Z.L. and W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (No. 52075511, No. 51927811, and No. 52005471), Natural Science Foundation of Zhejiang Province (No. LQ20E050017, No. LY19F030012), Science and Technology Project of State Administration for Market Regulation (No. 2021MK189), and Zhejiang University Student Science and Technology Innovation Activity Plan (No. 2022R409052).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the first author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Sitnik, R.; Karaszewski, M.; Zaluski, W.; Bolewicki, P. Automated full-3D shape measurement of cultural heritage objects. In Proceedings of the SPIE Europe Optical Metrology, Munich, Germany, 14–18 June 2009.
  2. Zhou, Z.; Liu, W.; Wu, Q.; Wang, Y.; Yu, B.; Yue, Y.; Zhang, J. A combined measurement method for large-size aerospace components. Sensors 2020, 20, 4843.
  3. Barone, S.; Paoli, A.; Razionale, A.V. Optical tracking of a tactile probe for the reverse engineering of industrial impellers. J. Comput. Inf. Sci. Eng. 2017, 17, 041003.
  4. Poredoš, P.; Čelan, D.; Možina, J.; Jezeršek, M. Determination of the human spine curve based on laser triangulation. BMC Med. Imaging 2015, 15, 2.
  5. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158.
  6. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59.
  7. Brown, G.M.; Chen, F.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22.
  8. Sioma, A.; Romaniuk, R.S.; Linczuk, M. 3D imaging methods in quality inspection systems. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments, Wilga, Poland, 25 May–2 June 2019.
  9. Auerswald, M.M.; von Freyberg, A.; Fischer, A. Laser line triangulation for fast 3D measurements on large gears. Int. J. Adv. Manuf. Technol. 2018, 100, 2423–2433.
  10. Yang, L.; Xie, X.; Zhu, L.; Wu, S.; Wang, Y. Review of electronic speckle pattern interferometry (ESPI) for three dimensional displacement measurement. Chin. J. Mech. Eng. 2014, 27, 1–13.
  11. Salvi, J.; Pagès, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849.
  12. Marrugo, A.G.; Gao, F.; Zhang, S. State-of-the-art active optical techniques for three-dimensional surface metrology: A review [Invited]. J. Opt. Soc. Am. A 2020, 37, B60.
  13. Yu, H.; Huang, Y.; Zheng, D.; Bai, L.; Han, J. Three-dimensional shape measurement technique for large-scale objects based on line structured light combined with industrial robot. Optik 2019, 202, 163656.
  14. Wang, J.; Tao, B.; Gong, Z.; Yu, W.; Yin, Z. A mobile robotic 3-D measurement method based on point clouds alignment for large-scale complex surfaces. IEEE Trans. Instrum. Meas. 2021, 70, 7503011.
  15. Liu, L.; Wang, W.; Li, W. Flexible measurement technology of complex curved surface three-dimensional shape robot based on iGPS. Chin. J. Lasers 2019, 46, 200–205.
  16. Norman, A.R.; Schönberg, A.; Gorlach, I.A.; Schmitt, R. Validation of iGPS as an external measurement system for cooperative robot positioning. Int. J. Adv. Manuf. Technol. 2012, 64, 427–446.
  17. Du, H.; Chen, X.; Xi, J.; Yu, C.; Zhao, B. Development and verification of a novel robot-integrated fringe projection 3D scanning system for large-scale metrology. Sensors 2017, 17, 2886.
  18. Novak, B.; Babnik, A.; Možina, J.; Jezeršek, M. Three-dimensional foot scanning system with a rotational laser-based measuring head. Stroj. Vestn.-J. Mech. Eng. 2014, 60, 685–693.
  19. Liu, T.; Wang, N.N.; Fu, Q.; Zhang, Y.; Wang, M.H. Research on 3D reconstruction method based on laser rotation scanning. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 1600–1604.
  20. Barone, S.; Paoli, A.; Razionale, A.V. Multiple alignments of range maps by active stereo imaging and global marker framing. Opt. Lasers Eng. 2013, 51, 116–127.
  21. Wang, X.; Xie, Z.; Wang, K.; Zhou, L. Research on a handheld 3D laser scanning system for measuring large-sized objects. Sensors 2018, 18, 3567.
  22. Chen, Y.; Zhou, F.; Zhou, M.; Zhang, W.; Li, X. Pose measurement approach based on two-stage binocular vision for docking large components. Meas. Sci. Technol. 2020, 31, 125002.
  23. Yin, L.; Wang, X.; Ni, Y. Flexible three-dimensional reconstruction via structured-light-based visual positioning and global optimization. Sensors 2019, 19, 1583.
  24. Liu, Z.; Li, X.; Li, F.; Wei, X.; Zhang, G. Fast and flexible movable vision measurement for the surface of a large-sized object. Sensors 2015, 15, 4643–4657.
  25. Shi, J.; Sun, Z. Large-scale three-dimensional measurement based on LED marker tracking. Vis. Comput. 2015, 32, 179–190.
  26. Hu, S.; Matsumoto, Y.; Takaki, T.; Ishii, I. Monocular stereo measurement using high-speed catadioptric tracking. Sensors 2017, 17, 1839.
  27. Huang, J.-C.; Liu, C.-S.; Chiang, P.-J.; Hsu, W.-Y.; Liu, J.-L.; Huang, B.-H.; Lin, S.-R. Design and experimental validation of novel 3D optical scanner with zoom lens unit. Meas. Sci. Technol. 2017, 28, 105904.
  28. Jiang, T.; Cui, H.; Cheng, X. Accurate calibration for large-scale tracking-based visual measurement system. IEEE Trans. Instrum. Meas. 2020, 70, 5003011.
  29. Suzuki, S.; Abe, K. Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 1985, 29, 396.
  30. Peng, J.; Chen, D.; Xu, W.; Liang, B. An efficient virtual stereo-vision measurement method of a space non-cooperative target. In Proceedings of the 2018 13th World Congress on Intelligent Control and Automation (WCICA), Changsha, China, 4–8 July 2018; pp. 7–12.
  31. Umeyama, S. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 376–380.
  32. Enebuse, I.; Foo, M.; Ibrahim, B.S.K.K.; Ahmed, H.; Supmak, F.; Eyobu, O.S. A comparative review of hand-eye calibration techniques for vision guided robots. IEEE Access 2021, 9, 113143–113155.
  33. Sioma, A.; Romaniuk, R.S.; Linczuk, M. Geometry and resolution in triangulation vision systems. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High Energy Physics Experiments, Wilga, Poland, 31 August–2 September 2020.
  34. Mu, N.; Wang, K.; Xie, Z.; Ren, P. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor. Opt. Eng. 2017, 56, 054103.