Article

Digital Inspection Technology for Sheet Metal Parts Using 3D Point Clouds

by Jian Guo, Dingzhong Tan *, Shizhe Guo, Zheng Chen and Rang Liu
College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4827; https://doi.org/10.3390/s25154827
Submission received: 24 June 2025 / Revised: 26 July 2025 / Accepted: 2 August 2025 / Published: 6 August 2025
(This article belongs to the Section Industrial Sensors)

Abstract

To address the low efficiency of traditional sheet metal measurement, this paper proposes a digital inspection method for sheet metal parts based on 3D point clouds. The 3D point cloud data of sheet metal parts are collected using a 3D laser scanner, and the topological relationship is established using a K-dimensional tree (KD tree). The pass-through filtering method is adopted to denoise the point cloud data. To preserve the fine features of the parts, an improved voxel grid method is proposed for downsampling the point cloud data. Feature points are extracted via the intrinsic shape signatures (ISS) algorithm and described using the fast point feature histograms (FPFH) algorithm. Rough registration with the sample consensus initial alignment (SAC-IA) algorithm provides an initial position for fine registration. The improved iterative closest point (ICP) algorithm used for fine registration enhances registration accuracy and efficiency. The greedy projection triangulation algorithm optimized by moving least squares (MLS) smoothing ensures surface smoothness and geometric accuracy. The reconstructed 3D model is projected onto a 2D plane, and the actual dimensions of the parts are calculated from the pixel values of the sheet metal parts and the conversion scale. Experimental results show that the measurement error of this inspection system for three sheet metal workpieces ranges from 0.1416 mm to 0.2684 mm, meeting the accuracy requirement of ±0.3 mm. This method provides a reliable digital inspection solution for sheet metal parts.

1. Introduction

At present, the measurement of sheet metal parts is mainly divided into two categories: contact measurement and non-contact measurement [1,2]. Contact measurement is highly versatile, but it has several drawbacks: the measurement speed is slow, and the probe must touch the surface of the workpiece, so the workpiece is prone to deformation and surface scratches. This method also places relatively high skill demands on operators. Non-contact measurement refers to measuring the workpiece through optical, electromagnetic, acoustic and other principles without touching its surface, thus avoiding additional damage to the workpiece. The most common non-contact methods are monocular measurement, binocular measurement, structured light measurement and laser measurement.
Laser measurement technology, as an advanced detection technique, has been widely applied in aerospace and other fields. Schalk et al. [3] designed a pipe measuring system based on laser triangulation to measure the size of rolled seamless pipes; the pipe size is obtained by fitting circles and planes with the least squares method. Magda et al. [4] designed a measurement system to obtain the surface profiles of logs, which consists of six laser scanners and can scan logs from 250 mm to 500 mm in diameter and up to 4000 mm in length. Tong et al. [5] designed a contactless measuring system for the measurement of thread parameters. After obtaining the contour data points, the measurement system fits and partitions the data and calculates parameters such as the thread pitch and pitch diameter based on the processed data. Cai et al. [6] designed a test system for the measurement, reconstruction and calculation of the volumes of potatoes. In this system, the potato is scanned by a line laser scanner, the missing top and bottom point clouds are repaired via interpolation methods and three-dimensional reconstruction is carried out. Finally, linear regression is used to determine the relationship between the measured potato volume and the actual potato volume and to calculate the density of the potato.
The point cloud data obtained from workpiece scanning by a 3D laser scanner should be denoised, registered and 3D-reconstructed. Haque et al. [7] proposed a method for the robust noise reduction of point clouds while retaining fine features. Outliers of point clouds are detected and removed utilizing a dissimilarity measure based on point positions and their corresponding normals, and the bilateral filtering method is used for the noise reduction of point clouds. Lee et al. [8] proposed an algorithm based on the voxel grid method for data preprocessing. This method removes the noise in the point cloud, reduces the point cloud data by about 40% and removes 13% of the overlapping points to reduce the number of point clouds. Heinzler et al. [9] proposed a method based on convolutional neural networks to denoise point cloud data, which can significantly improve the denoising effect in light detection and ranging (LiDAR) point cloud data under poor weather conditions.
Ren et al. [10] proposed a multi-scale noise filtering algorithm. Firstly, the algorithm calculates the normal vectors of the point cloud data and the three eigenvalues of the sampling points through principal component analysis, and it distinguishes noise of different scales according to the relationship between the variation factor of the sampling point surface and the average surface factor. Then, the large-scale noise of flat regions and the small-scale noise of abrupt regions are filtered via a statistical filtering algorithm and a bilateral filtering algorithm, respectively. The processed data display the rock surface more clearly. Zou et al. [11] proposed an improved method to address the problem that replacing all points within a certain range by their centroid during downsampling leads to the loss of some feature points within that range. This method uses downsampling to find the centroid, adds it to farthest point sampling and conducts ten iterations; the distances of the resulting 11 points are weighted and averaged to find the feature points. This algorithm simplifies 90,840 points within 48 s while retaining the fine features of the point clouds. Zhao et al. [12] proposed a point cloud preprocessing method for the detection of workpiece surface defects. This method uses a laser scanner to obtain the 3D point cloud data of the workpiece and adopts the K-nearest neighbors (KNN) algorithm to reduce the data density. While retaining the data characteristics, it compresses the point cloud data to 60.20% of the original data.
Makovetskii et al. [13] proposed an improved algorithm based on the point-to-point ICP registration algorithm. This method uses points and normal vectors to align 3D point clouds, while the common point-to-point approach uses only the coordinates of points. At the same time, a regularization term is added to find a suitable transformation matrix between point clouds with poor correspondence, which helps to improve the accuracy of point cloud registration. Koide et al. [14] proposed a voxelized generalized iterative closest point algorithm. By combining the generalized iterative closest point (GICP) method with voxelization, it reduces the time required for the nearest neighbor search. Compared with the GICP and normal distributions transform (NDT) algorithms, this algorithm has better registration accuracy and efficiency. Young et al. [15] proposed a grid-based GICP algorithm for point cloud registration. Compared with the traditional GICP algorithm, the registration range of this method is 4 to 17 times larger. At the same time, it is more accurate and faster than the traditional GICP algorithm in registering sparse or non-uniform point clouds.
Yu et al. [16], addressing problems such as long calculation times and poor registration accuracy in point cloud registration, proposed an improved ICP algorithm based on matching points. This method first uses the RANSAC algorithm to segment the point cloud data obtained by LiDAR and filters out abnormal matching points. Finally, it combines the KD tree with ICP for point cloud registration. This method combines the advantages of point cloud filtering and the secondary filtering of matching point pairs, improving the calculation speed and registration accuracy. He et al. [17] proposed a new registration algorithm that combines PointNet++ and ICP. It samples and extracts the local feature descriptors of the input point cloud. Then, according to the feature descriptors, it finds the corresponding points required for ICP matching in the source point cloud and the target point cloud to estimate the rotation and translation matrices, reducing the calculation time required and decreasing the dependence on the initial values.
Vizzo et al. [18] proposed an improved algorithm based on Poisson surface reconstruction to obtain the specific shape of terrain. The point cloud data obtained through multiple scans are registered to obtain the overall point cloud data of the terrain, and then 3D reconstruction is carried out, resulting in a relatively accurate map. Maurya et al. [19] proposed a reconstruction method based on the greedy triangulation algorithm to reconstruct the point cloud data of a coastal dune area obtained by a 3D scanner. They used KD tree search, local meshing and surface angle and search radius limitations to process the point clouds in the coastal dune area, achieving the reconstruction of coastal dunes. The reconstruction integrity was between 93.4% and 96.8%. Ando et al. [20] proposed a 3D reconstruction algorithm to reconstruct the surfaces of plant leaves. They achieved the 3D reconstruction of the leaves of two crops: soybeans and beets. Compared with the traditional Poisson reconstruction algorithm, this algorithm can reduce the influence of noise and better reconstruct the leaf surface and has good robustness.
Gu et al. [21] implemented the preprocessing of point cloud data based on the C++ point cloud library, including filtering and smoothing, feature extraction and hole filling. The greedy projection triangulation method and the Poisson reconstruction method were applied to conduct the 3D reconstruction of the point cloud data. The error of the circle reconstructed by Poisson reconstruction was 0.195%, and the error of the circle reconstructed by greedy triangulation was 0.178%. In order to reconstruct a rose fruit model with complex geometric features and high precision, Xie et al. [22] proposed a 3D point cloud reconstruction method for Rosa roxburghii fruits based on a 3D laser scanner, which included 3D point cloud acquisition, point cloud registration, fruit segmentation and 3D reconstruction. Compared with the volume measured manually, the error in the volume of the reconstructed fruit was 1.01%. Pan et al. [23] proposed a segmentation method based on voxel structures and global optimization that can effectively separate bridge features. At the same time, in [23], the authors reconstructed the Hongde Bridge and the Tongxin Bridge. The reconstruction accuracies were 83% and 80%, respectively, and they were able to better reconstruct features such as the bridge deck and the fence.
In summary, scholars from various countries have achieved certain results in point cloud preprocessing technology. Some algorithms simplify the point cloud data through downsampling but fail to completely retain the features of the point clouds, resulting in poor results in subsequent operations on the point cloud data. Point cloud registration usually includes rough registration and fine registration: the accuracy of rough registration is generally low, while fine registration is more accurate but time-consuming. At present, the accuracy and speed of 3D point cloud registration are still not ideal. When dealing with large point cloud data, or when the initial position of the point cloud is unsatisfactory, both the registration speed and the registration accuracy are poor. The models obtained from current 3D reconstruction methods contain holes and differ considerably from the actual objects, so the features of these objects cannot be accurately retained.
This paper takes aircraft sheet metal components as the research objects. Based on the digital detection technology for sheet metal parts using 3D point clouds, a digital detection system based on a laser scanner is constructed. The workpiece is scanned by a 3D laser scanner to obtain point cloud data. The geometric dimensions of the workpiece are measured based on methods such as point cloud data processing and planarization. By comparing the measured dimensions of the workpiece with the actual dimensions, it is shown that the proposed digital detection method can meet the measurement accuracy requirements.
The rest of the paper is organized as follows. Section 2 describes the digital detection method for sheet metal parts based on 3D point clouds, including the acquisition and preprocessing of point cloud data, point cloud registration, 3D reconstruction and other aspects. In Section 3, the experiments on point cloud denoising, registration and the 3D reconstruction of sheet metal parts are presented. Section 4 presents the conclusions of this paper.

2. Digital Inspection Method for Sheet Metal Parts Using 3D Point Clouds

2.1. Acquisition and Preprocessing of Point Cloud Data of Sheet Metal Parts

In this paper, the EinScan Pro 2X 3D scanner (Shining 3D Technology Co., Ltd., Hangzhou, China) is used to scan sheet metal parts to obtain point cloud files in ply format. The ply format files are then converted into pcd format files, which enable the processing of the point cloud data using the PCL library. The specific parameters of the EinScan Pro 2X are shown in Table 1. When using this scanner to measure the dimensions of sheet metal parts, it can meet the requirement for measurement accuracy of ±0.3 mm.
(1) Establishment of point cloud topological relationship of sheet metal parts
In this paper, the KD tree method is adopted to construct the topological structure. The parameter of the KD tree is set to 20—that is, 20 points are used to construct the topological structure. This value is determined based on the performance of the scanning equipment and the dimensions of the measured part used in this paper. The steps to construct a 3D KD tree are as follows. First, determine the splitting plane. Calculate the variances of the 3D data in the x, y, and z directions and select the coordinate axis with the largest variance as the splitting axis. Then, sort the data on the selected coordinate axis and use the median point as the splitting point to establish a splitting plane that passes through this point and is perpendicular to the selected axis. In this way, the data points are divided into two parts: those with coordinate values less than the median point form the left subtree, and those with coordinate values greater than the median point form the right subtree. By recursively repeating this process, continue to construct the left and right subtrees until there are no more data points that can be split, and the construction of the 3D KD tree is completed.
When using the KD tree method to establish the topological relationship of point clouds and dealing with large-scale point cloud data, constructing a KD tree may be time-consuming and require a large amount of memory resources.
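As a concrete illustration, the following is a minimal sketch (not the authors' code) of building a KD tree over a part's point cloud with PCL's KdTreeFLANN and querying the 20 nearest neighbors of one point, matching the neighborhood size of 20 used in this paper. The file name "part.pcd" and the choice of query point are illustrative.

#include <pcl/io/pcd_io.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <iostream>
#include <vector>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile<pcl::PointXYZ>("part.pcd", *cloud) < 0) return -1;

  pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
  kdtree.setInputCloud(cloud);           // recursive axis/median splitting is handled internally

  const int k = 20;                      // neighborhood size used in this paper
  std::vector<int> indices(k);
  std::vector<float> sq_dists(k);
  const pcl::PointXYZ& query = cloud->points[0];
  if (kdtree.nearestKSearch(query, k, indices, sq_dists) > 0)
    std::cout << "nearest neighbor squared distance: " << sq_dists[0] << std::endl;
  return 0;
}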
(2) Point cloud filtering and processing of sheet metal parts
The preprocessing of point cloud data, such as denoising and downsampling, can reduce the amount of data and noise, thus optimizing the processing time and improving the processing accuracy.
Pass-Through Filtering Algorithm
The principle of the pass-through filtering algorithm is to determine whether to retain a point based on the coordinate values of the points in the point cloud data. Its main principle is to select one of the coordinate axes—namely the x-axis, y-axis, or z-axis—for filtering. Then, a filtering range [a, b] is specified. All point cloud data are traversed, and only the points whose coordinate values on the selected axis fall within the filtering range are retained, while the other points are filtered out.
This paper uses the pass-through filtering algorithm to filter the point cloud data. The interval of the x-axis is [20.7, 100.5], the interval of the y-axis is [40.6, 120.4], and the interval of the z-axis is [6.8, 11.6]. These values are determined based on the size of the measured part, aiming to completely retain the point clouds that represent the features of the part. All point cloud data within these intervals are retained, while the other point cloud data are deleted to complete the pass-through filtering. A schematic diagram of the pass-through filtering algorithm is shown in Figure 1.
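A minimal sketch (not the authors' code) of this step with PCL's PassThrough filter is given below; the three axis intervals are the ones reported in this paragraph, and all points outside the ranges are discarded.

#include <pcl/filters/passthrough.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
passThroughFilter(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::PointCloud<pcl::PointXYZ>::Ptr stage1(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr stage2(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr stage3(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PassThrough<pcl::PointXYZ> pass;

  pass.setInputCloud(cloud);
  pass.setFilterFieldName("x");
  pass.setFilterLimits(20.7f, 100.5f);   // x interval from the paper
  pass.filter(*stage1);

  pass.setInputCloud(stage1);
  pass.setFilterFieldName("y");
  pass.setFilterLimits(40.6f, 120.4f);   // y interval from the paper
  pass.filter(*stage2);

  pass.setInputCloud(stage2);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(6.8f, 11.6f);     // z interval from the paper
  pass.filter(*stage3);
  return stage3;
}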
Downsampling and Processing for Improved Voxel Grid Filtering
For the collected point cloud data, the voxel grid filtering algorithm is adopted to achieve their simplification [24]. When applying the voxel grid method, the calculated centroid points of each voxel grid may not be the actual coordinate points in the original point cloud data. Directly using these centroid points to represent the entire voxel grid may not allow the fine features of the original point cloud to be retained. Therefore, in this paper, the nearest neighbor search method is used to find the point in the original point cloud that is closest to the centroid within each voxel grid, and this point is used to represent all points of the entire voxel. In this way, it is ensured that the downsampled point cloud data still originate from the original data set, and, at the same time, the fine features of the original point cloud can be retained. The specific steps are as follows:
(1)
Suppose that the original point cloud data set P contains N data coordinate points, and the voxel grid method is adopted to perform downsampling on the point cloud data.
(2)
Calculate the centroid coordinate points, p(x, y, z), of each non-empty voxel grid and construct a new set of centroid points, pi(xi, yi, zi).
(3)
Search each voxel grid according to the KD tree nearest neighbor search method and take the point closest to the centroid in it as the new downsampled point to obtain a new set of centroid points, pi.
In this paper, an improved voxel grid filtering algorithm is used for downsampling. The size of the voxel grid is set to [1.3, 1.3, 1.3]. These values are determined based on the size of the measured part and the number of point clouds obtained per unit area by the scanning equipment, aiming to reduce the number of retained point clouds while preserving the image features of the point cloud data, as well as reducing the amount of computation. The KD tree is used to search for the point closest to the centroid to replace the centroid in order to complete the simplification of the point cloud data.
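The sketch below (not the authors' code) illustrates the improved downsampling described above: a standard PCL voxel grid with a 1.3 mm leaf size produces one centroid per non-empty voxel, and each centroid is then replaced by the nearest point of the original cloud found with a KD tree, so every retained point belongs to the original data set.

#include <pcl/filters/voxel_grid.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <vector>

pcl::PointCloud<pcl::PointXYZ>::Ptr
improvedVoxelDownsample(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  // Steps (1)-(2): voxel grid downsampling, one centroid per non-empty voxel.
  pcl::PointCloud<pcl::PointXYZ>::Ptr centroids(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(1.3f, 1.3f, 1.3f);   // voxel size used in this paper
  voxel.filter(*centroids);

  // Step (3): replace every centroid by the closest point of the original cloud.
  pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
  kdtree.setInputCloud(cloud);
  pcl::PointCloud<pcl::PointXYZ>::Ptr sampled(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  for (const auto& c : centroids->points) {
    if (kdtree.nearestKSearch(c, 1, idx, sq_dist) > 0)
      sampled->points.push_back(cloud->points[idx[0]]);
  }
  sampled->width = static_cast<std::uint32_t>(sampled->points.size());
  sampled->height = 1;
  sampled->is_dense = true;
  return sampled;
}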

2.2. Point Cloud Registration of Sheet Metal Parts

2.2.1. Rough Registration of Point Clouds

(1) ISS feature point extraction algorithm
The ISS algorithm is a point cloud feature detection algorithm used to extract feature points from point cloud data and to identify and describe surface features; it can provide a high-quality starting point for initial point cloud registration [25]. Suppose that the point cloud data P contain N data points, and the coordinates of any point pi are pi(xi, yi, zi), i = 0, 1, 2, …, N − 1. The specific algorithm process is as follows.
A local coordinate system is established for each point pi in the point cloud data, and a search radius r is set for all points. An appropriate search radius r needs to be selected based on the actual situation, such as the density of the point cloud.
Query all points in the area of radius r centered on each point pi in point cloud data P and calculate the weights wij of these points, as shown in Equation (1):
w_{ij} = \frac{1}{\| p_i - p_j \|}, \quad \| p_i - p_j \| < r
Calculate the covariance matrix of each point pi, as shown in Equation (2):
\mathrm{cov}(p_i) = \frac{\sum_{\| p_i - p_j \| < r} w_{ij} (p_i - p_j)(p_i - p_j)^{T}}{\sum_{\| p_i - p_j \| < r} w_{ij}}
Calculate the eigenvalues {λi1, λi2, λi3} of the covariance matrix cov(pi) for each point pi in the point cloud data and arrange them in descending order.
Set the threshold values ε1 and ε2; the points that satisfy the screening conditions of Equation (3) are ISS feature points.
\frac{\lambda_{i2}}{\lambda_{i1}} \le \varepsilon_1, \quad \frac{\lambda_{i3}}{\lambda_{i2}} \le \varepsilon_2
ε1 is used to judge whether a point is an edge point: the larger this value is, the more likely the point is an edge point. ε2 is used to judge whether a point is a corner point: the larger this value is, the more likely the point is a corner point.
The feature points of two workpieces are extracted through the ISS feature point extraction algorithm, and the experimental results are shown in Figure 2.
Applying the ISS algorithm to extract the features of different sheet metal parts shows that the algorithm removes most of the points lying on the planar regions of the part and retains the key points (shown in blue), including edge points and corner points, which reflect the shape of the sheet metal part well.
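A minimal sketch (not the authors' code) of ISS keypoint extraction with PCL's ISSKeypoint3D is given below. The radii are expressed as multiples of the average point spacing of the cloud; the multipliers and eigenvalue-ratio thresholds are illustrative values, not the ones used in this paper.

#include <pcl/keypoints/iss_3d.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
extractISSKeypoints(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                    double resolution) {      // average point spacing of the cloud
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints(new pcl::PointCloud<pcl::PointXYZ>);

  pcl::ISSKeypoint3D<pcl::PointXYZ, pcl::PointXYZ> iss;
  iss.setSearchMethod(tree);
  iss.setSalientRadius(6.0 * resolution);   // search radius r for the covariance
  iss.setNonMaxRadius(4.0 * resolution);    // non-maximum suppression radius
  iss.setThreshold21(0.975);                // eigenvalue-ratio threshold (epsilon_1)
  iss.setThreshold32(0.975);                // eigenvalue-ratio threshold (epsilon_2)
  iss.setMinNeighbors(5);
  iss.setInputCloud(cloud);
  iss.compute(*keypoints);
  return keypoints;
}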
(2) FPFH feature description
FPFH is an improved method based on point feature histograms (PFH) that reduces the computational complexity and the amount of calculation. FPFH uses histogram statistics of the attributes of neighboring points, avoiding the complex feature calculation process of PFH. It retains the description of the local geometric information of the point cloud as much as possible, and its calculation speed is fast.
The calculation steps of FPFH are as follows.
① For the query point Pq, take it as the center, with r as the radius, and find its k nearest neighboring points in this neighborhood. Connect Pq with each of these points and compute the corresponding four values (α, φ, θ, d) that express the relative position of the two points. This yields the simplified point feature histogram (SPFH) of Pq, denoted S(Mq).
② For each neighboring point Pk of the point Pq, re-establish a neighborhood of radius r according to step ①, search its k neighboring points and obtain its SPFH value, denoted S(Mk). From these values, the FPFH value of point Pq, denoted F(Mq), is calculated as follows:
F(M_q) = S(M_q) + \frac{1}{k} \sum_{i=1}^{k} \frac{1}{w_i} S(M_i)
where wi is the distance between the query point Pq and its i-th neighboring point, so that 1/wi acts as the weight of that neighbor's SPFH contribution.
The FPFH of point Pq is shown in Figure 3.
The FPFH descriptor is used to describe the features of the two sheet metal parts, as shown in Figure 4.
In the FPFH feature histogram in Figure 4, the range [0, 35] on the x-axis represents the relevant components of the surface shape of the sheet metal part, and the y-axis represents the FPFH values in different intervals. It can be seen that the FPFH line charts of the different sheet metal parts have significant differences.
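A minimal sketch (not the authors' code) of computing normals and 33-bin FPFH descriptors with PCL is shown below. The two search radii are illustrative and would have to be tuned to the point cloud density of the scanned parts.

#include <pcl/features/fpfh.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFPFH(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals are required before FPFH can be computed.
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(2.0);                  // illustrative radius (mm)
  ne.compute(*normals);

  // 33-bin FPFH descriptor for every point.
  pcl::PointCloud<pcl::FPFHSignature33>::Ptr fpfh(new pcl::PointCloud<pcl::FPFHSignature33>);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fe;
  fe.setInputCloud(cloud);
  fe.setInputNormals(normals);
  fe.setSearchMethod(tree);
  fe.setRadiusSearch(4.0);                  // must be larger than the normal radius
  fe.compute(*fpfh);
  return fpfh;
}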
(3) SAC-IA algorithm
The ISS algorithm is used to extract point cloud feature points, the FPFH algorithm is used to describe point cloud features, and the SAC-IA algorithm is used for rough registration. The specific steps of the SAC-IA algorithm are as follows.
① Selection of sampling points: Select points to be matched from the source point cloud to be registered and set a distance threshold d. Ensure that the distance between the points to be matched is greater than d to avoid the repeated or overly concentrated selection of point clouds. Guarantee that the selected points to be matched are relatively dispersed, and ensure that the FPFH values of the sampling points are different.
② Search for corresponding points regarding the sampling points: According to the calculated FPFH features of the point cloud to be matched, search for points with the same or similar features to the sampling points from the target point cloud and take them as the corresponding points to the points to be matched in the source point cloud to be registered.
③ Remove the incorrect matching points: Set the corresponding distance l of the matching points, which is used to remove incorrect corresponding points in order to improve the accuracy of registration.
④ Set the number of iterations: In order to prevent the algorithm from failing to converge and to limit the running time of the algorithm, a maximum number of iterations is set [26].
⑤ Calculate the transformation matrix: Calculate the rotation matrix and the translation matrix for all sets of corresponding point pairs. Then, calculate the “sum of distance errors” function between the corresponding point sets according to the calculated transformation matrix. This function is used as an indicator for the evaluation of the registration performance. See Equation (5):
H(l_i) = \begin{cases} \frac{1}{2} l_i^2, & \| l_i \| < m_l \\ \frac{1}{2} m_l \left( 2\| l_i \| - m_l \right), & \| l_i \| \ge m_l \end{cases}
where m_l is a fixed threshold set in advance, and l_i is the distance difference between the i-th transformed point selected from the point cloud to be registered and its corresponding point in the target point cloud.
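A minimal sketch (not the authors' code) of this rough-registration step with PCL's SampleConsensusInitialAlignment is shown below, using FPFH descriptors computed as above. The sample distance, correspondence distance and iteration cap are illustrative values.

#include <pcl/registration/ia_ransac.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

Eigen::Matrix4f
roughRegisterSACIA(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
                   const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& source_fpfh,
                   const pcl::PointCloud<pcl::PointXYZ>::Ptr& target,
                   const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& target_fpfh,
                   pcl::PointCloud<pcl::PointXYZ>& aligned) {
  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac_ia;
  sac_ia.setInputSource(source);
  sac_ia.setSourceFeatures(source_fpfh);
  sac_ia.setInputTarget(target);
  sac_ia.setTargetFeatures(target_fpfh);
  sac_ia.setMinSampleDistance(1.0f);         // distance threshold d between sampled points
  sac_ia.setMaxCorrespondenceDistance(5.0f); // distance l used to reject bad pairs
  sac_ia.setMaximumIterations(500);          // iteration cap of step (4)
  sac_ia.align(aligned);                     // rough registration result
  return sac_ia.getFinalTransformation();    // initial pose handed to fine ICP
}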

2.2.2. ICP Fine Registration

(1) Rigid body transformation matrix and its solution
The essence of point cloud registration is to ensure that the point clouds of the same object, which are measured at different times, positions and directions and in different coordinate systems, are located in the same coordinate system through rigid body transformation [27]. Rigid body transformation includes translation transformation and rotation transformation. Suppose that the two point clouds that need to be registered are the point cloud P to be registered and the target point cloud Q. Then, the transformation matrix H from the point cloud P to be registered to the target point cloud Q is as follows:
H = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & t_x \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & t_y \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & t_z \\ v_x & v_y & v_z & s \end{bmatrix}
After simplification, it is shown as follows:
H = \begin{bmatrix} R & T \\ V & S \end{bmatrix}
where R = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} denotes the rotation matrix; T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^T denotes the translation matrix; V = \begin{bmatrix} v_x & v_y & v_z \end{bmatrix} denotes the perspective transformation matrix; and S denotes the overall scale factor.
In general, the point cloud data obtained by scanning the workpiece with a 3D scanner will not be deformed. Therefore, the rigid body transformation matrix H is usually as shown in Equation (8):
H = \begin{bmatrix} R_{3 \times 3} & T_{3 \times 1} \\ V_{1 \times 3} & S \end{bmatrix}
The rotation matrix R and the translation matrix T are as shown in Equations (9) and (10), respectively:
R_{3 \times 3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}
T_{3 \times 1} = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^T
where α, β and γ denote the rotation angles of the point cloud about the x-, y- and z-axes, and t_x, t_y and t_z denote the translation distances of the point cloud along the x-, y- and z-axes.
From Equation (9), the rotation matrix R3×3 can be expressed via Formula (11):
R_{3 \times 3} = \begin{bmatrix} \cos\beta\cos\gamma & -\cos\beta\sin\gamma & \sin\beta \\ \cos\alpha\sin\gamma + \sin\alpha\sin\beta\cos\gamma & \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & -\sin\alpha\cos\beta \\ \sin\alpha\sin\gamma - \cos\alpha\sin\beta\cos\gamma & \sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma & \cos\alpha\cos\beta \end{bmatrix}
In order to determine the six parameters α , β , γ , t x , t y , t z in the rigid body transformation matrix described above, at least six equations are theoretically required. This requires selecting three pairs of non-collinear matching point pairs (P1, Q1), (P2, Q2) and (P3, Q3) in the overlapping area of the roughly registered point cloud data, so as to correctly calculate the parameter values of the rigid body transformation matrix, as shown in Equation (12):
Q = R P + T
Let the coordinates of the point cloud data P be \begin{bmatrix} X_P & Y_P & Z_P \end{bmatrix}^T and the coordinates of Q be \begin{bmatrix} X_Q & Y_Q & Z_Q \end{bmatrix}^T.
Substitute Equations (9) and (10) into Equation (12), and the result is as follows:
\begin{bmatrix} X_Q \\ Y_Q \\ Z_Q \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_P \\ Y_P \\ Z_P \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}
Point cloud registration aims to solve the rigid body transformation matrix between the point cloud P to be registered and the target point cloud Q. The solution process is as follows.
A quaternion is a four-dimensional vector composed of one real number and three imaginary numbers. The representation form of a quaternion is shown in Equation (14):
a = a_0 + a_1 i + a_2 j + a_3 k = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 \end{bmatrix}^T
where a_0, a_1, a_2, a_3 are real numbers, and i, j, k denote the generalized imaginary units. These units are mutually orthogonal and satisfy i^2 = j^2 = k^2 = -1.
① Calculate the centroids Cp and Cq of point clouds P and Q, as shown in Equations (15) and (16):
C_p = \frac{1}{n} \sum_{i=1}^{n} P_i
C_q = \frac{1}{n} \sum_{i=1}^{n} Q_i
where n denotes the number of points in point clouds P and Q, and Pi, Qi denote the corresponding point pairs in the two sets of point cloud data.
② Calculate the covariance matrix, as shown in Equation (17):
E = \frac{1}{n} \sum_{i=1}^{n} (P_i - C_p)(Q_i - C_q)^T = \frac{1}{n} \sum_{i=1}^{n} P_i Q_i^T - C_p C_q^T
③ Solve the cyclic column vector \Delta = \begin{bmatrix} e_{23} & e_{31} & e_{12} \end{bmatrix}^T of the covariance matrix, where the three components are calculated according to e_{ij} = (E - E^T)_{ij}, (i, j = 1, 2, 3).
④ Use the covariance matrix E and the cyclic column vector △ to construct a 4 × 4 symmetric matrix, as shown in Equation (18):
M = \begin{bmatrix} \mathrm{tr}(E) & \Delta^T \\ \Delta & E + E^T - \mathrm{tr}(E) I_3 \end{bmatrix}
where I3 denotes a 3 × 3 identity matrix; tr(E) denotes the trace of the covariance matrix E.
⑤ Determine the eigenvalues of the symmetric matrix M and their corresponding eigenvectors; the eigenvector corresponding to the maximum eigenvalue is the required quaternion a = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 \end{bmatrix}^T.
⑥ Solve for the rotation matrix R and the translation matrix T.
E = U \Lambda V^T is obtained through the singular value decomposition (SVD) of the covariance matrix E, where \Lambda is the non-negative diagonal matrix of singular values of E, and U and V are 3 × 3 orthogonal matrices. The rotation matrix R and the translation matrix T can then be calculated via Equations (19) and (20):
R = \begin{bmatrix} a_0^2 + a_1^2 - a_2^2 - a_3^2 & 2(a_1 a_2 - a_0 a_3) & 2(a_1 a_3 + a_0 a_2) \\ 2(a_1 a_2 + a_0 a_3) & a_0^2 - a_1^2 + a_2^2 - a_3^2 & 2(a_2 a_3 - a_0 a_1) \\ 2(a_1 a_3 - a_0 a_2) & 2(a_2 a_3 + a_0 a_1) & a_0^2 - a_1^2 - a_2^2 + a_3^2 \end{bmatrix}
T = C_q - R C_p
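For illustration, the following minimal sketch (not the authors' implementation) solves the rigid transformation from corresponding point pairs with Eigen. It follows the centroid and cross-covariance construction above but uses an SVD of E directly (a Kabsch-style solution) instead of the quaternion eigenvector, which yields the same R and T for consistent correspondences.

#include <Eigen/Dense>
#include <vector>

void solveRigidTransform(const std::vector<Eigen::Vector3d>& P,   // source points
                         const std::vector<Eigen::Vector3d>& Q,   // corresponding target points
                         Eigen::Matrix3d& R, Eigen::Vector3d& T) {
  const double n = static_cast<double>(P.size());
  Eigen::Vector3d Cp = Eigen::Vector3d::Zero(), Cq = Eigen::Vector3d::Zero();
  for (size_t i = 0; i < P.size(); ++i) { Cp += P[i]; Cq += Q[i]; }
  Cp /= n;  Cq /= n;                                    // centroids, cf. Eqs. (15)-(16)

  Eigen::Matrix3d E = Eigen::Matrix3d::Zero();          // cross-covariance, cf. Eq. (17)
  for (size_t i = 0; i < P.size(); ++i)
    E += (P[i] - Cp) * (Q[i] - Cq).transpose();
  E /= n;

  Eigen::JacobiSVD<Eigen::Matrix3d> svd(E, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d U = svd.matrixU(), V = svd.matrixV();
  Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
  D(2, 2) = (V * U.transpose()).determinant() > 0 ? 1.0 : -1.0;   // guard against reflections
  R = V * D * U.transpose();                            // rotation mapping P onto Q
  T = Cq - R * Cp;                                      // translation, cf. Eq. (20)
}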
(2) ICP algorithm
Point cloud registration involves two known point clouds: the point cloud P to be registered and the target point cloud Q. Points p i are selected from the point cloud P to be registered; then, in the target point cloud Q, the points q i with the closest Euclidean distance are searched for. These points are paired with the selected points p i from P to form corresponding point pairs. The parameters of the rigid body transformation matrix are solved through continuous iteration. The parameters of the rigid body transformation matrix that minimize Equation (21) are taken as the condition for the termination of the iteration, and this is used to achieve the updating of the point cloud positions [28].
f(R, T) = \frac{1}{k} \sum_{i=1}^{k} \| q_i - (R p_i + T) \|^2 = \min
where R is the rotation matrix; T is the translation matrix; and k is the number of corresponding points.
The ICP algorithm, through an iterative approach based on the closest point matching between point cloud data, gradually optimizes the rigid body transformation matrix. It searches for the corresponding points in the target point cloud that have the minimum distance from the source point cloud. Subsequently, by iterating this process, it gradually reduces the spatial position deviation between the two point clouds in the same coordinate system. The algorithm continues until the termination conditions of the iteration are met, thus achieving the precise alignment of the point clouds.
The registration process of the ICP algorithm is shown in Figure 5.
(3) Improved ICP algorithm
The ICP algorithm has relatively strict requirements for the initial position of the point cloud. By searching for the corresponding points of the point cloud to be registered in the target point cloud, since every point in the target point cloud participates in the search, the computational time is increased. During the process of searching for corresponding point pairs, the ICP algorithm may result in a large number of incorrect point pairs, which reduces the registration accuracy, affects the final registration result and makes it difficult to quickly converge to the optimal solution. More iterations are required to obtain a satisfactory registration result.
In response to the above problems existing in the ICP algorithm, this paper proposes an improved ICP algorithm as follows.
① Addressing the characteristics of point cloud data, the KD tree algorithm is adopted to realize the search for corresponding point pairs, which improves the search efficiency.
② The ISS algorithm is used to extract key feature points, and point cloud registration is carried out by matching these significant feature points, so as to improve the registration speed.
③ By combining the FPFH and SAC-IA algorithms, rough registration is achieved before the fine registration of point clouds, providing a better initial position for the point clouds. The FPFH values are calculated to express the local geometric features of the point clouds, providing more informative feature descriptors for the SAC-IA rough registration, which helps to estimate the initial corresponding relationships more accurately. The rough registration by the SAC-IA algorithm provides a better initial position for the ICP algorithm, reducing the sensitivity of the ICP algorithm to the initial position and better solving the problem whereby the ICP algorithm is prone to falling into locally optimal solutions. This helps the ICP algorithm to converge to the optimal solution more quickly, reduces the number of algorithm iterations and improves the registration efficiency.
The specific steps of the improved ICP algorithm are shown in Figure 6.
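As a sketch of the coupling described in step ③ (not the authors' implementation), the rough-registration pose from SAC-IA can be passed to PCL's standard ICP as the initial guess; the parameter values below are illustrative.

#include <pcl/registration/icp.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

Eigen::Matrix4f
fineRegisterICP(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
                const pcl::PointCloud<pcl::PointXYZ>::Ptr& target,
                const Eigen::Matrix4f& initial_guess) {      // transformation from SAC-IA
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(5.0);     // reject distant correspondences
  icp.setMaximumIterations(50);              // iteration cap
  icp.setTransformationEpsilon(1e-8);        // convergence on transform change
  icp.setEuclideanFitnessEpsilon(1e-6);      // convergence on mean squared error

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, initial_guess);         // start from the rough-registration pose
  return icp.getFinalTransformation();       // refined rigid transformation
}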

2.3. A 3D Reconstruction Algorithm for Sheet Metal Parts

2.3.1. Greedy Projection Triangulation Processing

The greedy projection triangulation algorithm is mainly used for the 3D reconstruction of point cloud data. It uses locally optimal solutions to approximately achieve the overall optimal surface structure [29]. The specific process is as follows.
(1) Nearest neighbor search. For any point P in the point cloud data, the KD tree neighborhood search method is adopted to determine its neighborhood containing k nearest neighbor points.
(2) Two-dimensional projection of the local plane. Determine the projection tangent plane of a certain point P and all the points within its k-neighborhood and project all the points within the neighborhood onto this two-dimensional plane, as shown in Figure 7.
(3) Construct a spatial network. Use the greedy projection triangulation algorithm to perform planar triangulation on the points on the projection plane to determine the topological relationship. This algorithm randomly selects a point from the point cloud data as the initial growing point. Each time, a locally optimal point is selected as the new growing point. Then, according to the projection relationship, the growing points on the two-dimensional plane are mapped back to the three-dimensional space, thereby constructing the topological relationship in the three-dimensional space.
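A minimal sketch (not the authors' code) of greedy projection triangulation with PCL is shown below, starting from a cloud that already carries normals (for example, the output of the MLS smoothing of Section 2.3.3). The radii and angle limits are illustrative values.

#include <pcl/surface/gp3.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/PolygonMesh.h>
#include <cmath>

pcl::PolygonMesh
greedyTriangulate(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud_with_normals) {
  pcl::search::KdTree<pcl::PointNormal>::Ptr tree(new pcl::search::KdTree<pcl::PointNormal>);
  tree->setInputCloud(cloud_with_normals);

  pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
  gp3.setSearchRadius(5.0);                     // maximum edge length of a triangle
  gp3.setMu(2.5);                               // neighbor distance relative to local density
  gp3.setMaximumNearestNeighbors(100);          // k-neighborhood for the local projection
  gp3.setMaximumSurfaceAngle(M_PI / 4);         // 45 deg between neighboring normals
  gp3.setMinimumAngle(M_PI / 18);               // 10 deg minimum triangle angle
  gp3.setMaximumAngle(2 * M_PI / 3);            // 120 deg maximum triangle angle
  gp3.setNormalConsistency(false);

  pcl::PolygonMesh mesh;
  gp3.setInputCloud(cloud_with_normals);
  gp3.setSearchMethod(tree);
  gp3.reconstruct(mesh);                        // planar triangulation mapped back to 3D
  return mesh;
}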

2.3.2. Poisson Surface Reconstruction Processing

Poisson reconstruction is the process of transforming 3D point cloud data into an implicit function representation. This method constructs a smooth surface model from the point cloud data by solving partial differential equations [30]. In particular, it constructs a densified function field that can represent the topological structure and geometric information of the point cloud data set P. Using the method of implicit fitting, by solving the discrete Poisson equation with the point cloud data as the boundary condition, the values of the surface function are solved. Then, the isosurface of the function field is extracted to construct a model surface that contains the complete information of the set entity, more accurately reflecting the surface features of the set and its subtle differences. The Poisson reconstruction algorithm is shown in Figure 8.
If the implicit function f(x,y,z) is used to represent the surface model, the position of the point cloud data can be determined by comparing the value of the function f(x,y,z) with 0. The solution set of f(x,y,z) = 0 is the boundary of the model surface, as shown in Figure 9.
The advantage of this algorithm lies in its ability to ensure the sealing of the model reconstruction surface while accurately reflecting the geometric characteristics and subtle features of the model surface. The Poisson algorithm mainly transforms the problem of surface reconstruction into solving the Poisson equation—that is, determining the indicator function. First, an octree is constructed. Using the characteristics of the octree, the implicit function is represented. Then, the function space is set, and the vector field is created to complete the solution of the Poisson equation—that is, to solve the indicator function. Finally, the isosurface is extracted. Isosurface extraction means obtaining the surface corresponding to the sampling points on the surface of the measured object model.
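For comparison, the sketch below (not the authors' code) runs PCL's Poisson reconstruction on a cloud with normals; the octree depth is an illustrative value, with deeper trees resolving finer detail at higher cost.

#include <pcl/surface/poisson.h>
#include <pcl/point_types.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh
poissonReconstruct(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud_with_normals) {
  pcl::Poisson<pcl::PointNormal> poisson;
  poisson.setDepth(8);                        // depth of the octree used for the implicit function
  poisson.setInputCloud(cloud_with_normals);

  pcl::PolygonMesh mesh;
  poisson.reconstruct(mesh);                  // solves the Poisson equation and extracts the isosurface
  return mesh;
}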

2.3.3. MLS Smoothing Processing

Since the scanning light cannot completely cover the surface of a smooth object, irregular point cloud data are generated, resulting in an uneven surface and even the appearance of holes, which affects the integrity of the surface information of the parts. In order to achieve a better reconstruction effect, it is necessary to smooth the point cloud data before performing 3D reconstruction. This paper uses the moving least squares (MLS) method to smooth and resample the point cloud data. The missing parts of the surface are filled through high-order polynomial interpolation, and the influence of the overlapping of the point cloud data surface after registration is eliminated.
The MLS method first selects a certain number of neighboring points around each point p in the point cloud data and establishes the relationship between the coordinates of these neighboring points and the fitting surface g as a system of constraint equations. Then, by solving the system of constraint equations, the optimal fitting surface at point p is obtained, and the point p in the original point cloud data is replaced by the points on this surface. Subsequently, the above steps are repeated continuously until the entire point cloud data are processed, so as to achieve the smoothing of the surface. It is necessary to determine the fitting function and the weight function, and the specific process is as follows.
(1) Determine the fitting function
Regard the point cloud data as a fitting data region. On a local sub-region U of this region, the fitting function f(x) is as shown in Equation (22):
f(x) = \sum_{i=1}^{m} a_i(x) p_i(x) = p^T(x) a(x)
where a(x) = \begin{bmatrix} a_1(x) & a_2(x) & \cdots & a_m(x) \end{bmatrix}^T denotes the coefficients to be determined, which are functions of x; m is the number of terms in the basis function; p(x) = \begin{bmatrix} p_1(x) & p_2(x) & \cdots & p_m(x) \end{bmatrix}^T is the basis function, a complete polynomial of order k.
For a one-dimensional problem, the basis function is p(x) = \begin{bmatrix} 1 & x & x^2 & \cdots & x^m \end{bmatrix}^T. For a two-dimensional problem, the basis function usually takes one of two forms: the linear basis p(x) = \begin{bmatrix} 1 & u & v \end{bmatrix}^T with m = 3, or the quadratic basis p(x) = \begin{bmatrix} 1 & u & v & u^2 & uv & v^2 \end{bmatrix}^T with m = 6. In practice, the quadratic basis is used more frequently. The fitting function f(x) can therefore be expressed as in Equation (23):
f(x) = a_0(x) + a_1(x) u + a_2(x) v + a_3(x) u^2 + a_4(x) v^2 + a_5(x) u v
In order to obtain a more accurate local fit, it is necessary to minimize the weighted sum of the squares of the local fitting differences between f(x_i) and the nodal values y_i. Therefore, the discrete weighted norm can be described as
J = \sum_{i=1}^{n} w(x - x_i) \left[ f(x_i) - y_i \right]^2 = \sum_{i=1}^{n} w(x - x_i) \left[ p^T(x_i) a(x) - y_i \right]^2 = \sum_{i=1}^{n} w(x - x_i) \left[ \left( a_0(x) + a_1(x) u_i + a_2(x) v_i + a_3(x) u_i^2 + a_4(x) v_i^2 + a_5(x) u_i v_i \right) - y_i \right]^2
where n is the number of nodes in the influence area; f(x) is the fitting function; y_i = y(x_i) is the nodal value at the node x = x_i; and w(x - x_i) is the weight function at the node x = x_i.
In order to determine the coefficients a(x), Equation (24) must take a minimum value. Differentiating with respect to a(x) and setting \partial J / \partial a = 0, the following equation is obtained:
a = (B^T W B)^{-1} B^T W y
where B = \begin{bmatrix} 1 & u_1 & v_1 & u_1^2 & u_1 v_1 & v_1^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & u_n & v_n & u_n^2 & u_n v_n & v_n^2 \end{bmatrix};
W is an n-order diagonal matrix, W = \mathrm{diag}\big( w(x - x_1), w(x - x_2), \ldots, w(x - x_n) \big);
y = \begin{bmatrix} y(x_1) & y(x_2) & \cdots & y(x_n) \end{bmatrix}^T.
(2) Determine the weight function
For the MLS method, the determination of the weight function is of crucial importance for the smoothing of point cloud data. Usually, a circular region is selected as the support domain of the weight function, and its radius is defined as r. The weight function w(x − xi) has the following characteristics.
① Compact support: The weight function w(x − xi) is non-zero within a specific sub-domain and zero outside the sub-domain.
② Non-negativity: The values of the weight function are always non-negative. Within the support domain, the values of the weight function are always greater than zero, which ensures that the weight function has a positive impact on the data points within the selected region.
③ Smoothness: The weight function w(x − xi) is a smooth and continuous function, which helps to ensure that a smooth fitting surface can be obtained during the processing of the point cloud data and improves the fitting accuracy for local data.
④ Monotonic decrease: Within the support domain, the value of the weight function w(x − xi) decreases monotonically as ‖x − xi‖ increases.
A commonly used weight function is the cubic spline weight function, and its expression is shown in Equation (26):
w(\bar{x}) = \begin{cases} \frac{2}{3} - 4\bar{x}^2 + 4\bar{x}^3, & 0 < \bar{x} \le 0.5 \\ \frac{4}{3}(1 - \bar{x})^3, & 0.5 < \bar{x} \le 1.0 \\ 0, & \bar{x} > 1.0 \end{cases}
where \bar{x} = r_i / (\beta h_i) and r_i = \| x - x_i \|; h_i is the size of the support domain of the weight function for the i-th node; \beta is the influence coefficient, whose value usually ranges between 1.25 and 2.5.
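A minimal sketch (not the authors' code) of MLS smoothing and resampling with PCL is given below. A second-order polynomial fit corresponds to the quadratic basis discussed above; the search radius is an illustrative value.

#include <pcl/surface/mls.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::PointNormal>::Ptr
mlsSmooth(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(new pcl::PointCloud<pcl::PointNormal>);

  pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
  mls.setInputCloud(cloud);
  mls.setSearchMethod(tree);
  mls.setComputeNormals(true);        // normals are needed for the later triangulation
  mls.setPolynomialOrder(2);          // quadratic local fit (cf. Equation (23))
  mls.setSearchRadius(3.0);           // support-domain radius of the weight function
  mls.process(*smoothed);
  return smoothed;
}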

3. Digital Detection Experiments on Sheet Metal Parts Based on 3D Point Clouds

3.1. Experiments on Point Cloud Denoising of Sheet Metal Parts

The point cloud of a sheet metal part scanned by a 3D scanner is shown in Figure 10a. In order to remove irrelevant point clouds such as the workbench and the surrounding environment, the pass-through filtering algorithm is adopted for noise reduction, as shown in Figure 10b.
The 3D point cloud obtained after the pass-through filtering process is subjected to statistical filtering. When k = 1, the processing results with a change in the value of λ are as shown in Figure 11, and the experimental result data are presented in Table 2.
According to the filtering effect shown in Figure 11 and the number of point clouds after noise reduction in Table 2, when k = 1, as λ increases, the number of point clouds removed gradually decreases, and the number of points retained on the front-view plane of the sheet metal part gradually increases. However, the number of retained invalid points also increases accordingly, resulting in an increase in the subsequent processing time. When λ is less than 1, the number of point clouds removed is excessive, which may affect the effects of subsequent point cloud registration and 3D reconstruction.
When λ = 1, the filtering effects when changing the value of k are as shown in Figure 12, and the experimental data are shown in Table 3.
According to Figure 12 and Table 3, when the value of k changes, the number of point clouds does not change significantly. Therefore, the value of k has little impact on the number of point clouds. Based on the above analysis, λ = 1 and k = 5 are ultimately selected.
Figure 13 shows a schematic diagram of the downsampling of sheet metal parts. Here, Figure 13a shows the original point cloud data, Figure 13b shows the processing results after downsampling by voxel grid filtering, and Figure 13c shows the processing results of the improved downsampling algorithm. Table 4 shows the number of point clouds after processing by different downsampling algorithms.
According to Figure 13 and Table 4, the number of point clouds after downsampling by the improved algorithm is the same as that after downsampling by the voxel grid method. However, the improved algorithm can preserve the characteristics of the point clouds well and has the optimal downsampling effect.

3.2. Registration Experiments on Sheet Metal Parts

The ICP registration algorithm and the improved algorithm are used to register the sheet metal parts. The registration results are shown in Figure 14 and Figure 15, respectively.
The statistical results in terms of the registration errors and time consumption for sheet metal parts are shown in Table 5.
From the experimental results, it can be observed that, when the ICP algorithm is applied to register sheet metal parts, if the number of iterations is low, obvious registration errors will occur. Increasing the number of iterations can gradually optimize the registration effect, but the corresponding registration time will also increase, resulting in a decrease in registration efficiency. Since the ICP algorithm is highly sensitive to small changes in the initial values, this affects its overall registration performance.
The results of ICP with different numbers of iterations for partially overlapping sheet metal parts are shown in Figure 16. The results of the improved algorithm with different numbers of iterations are shown in Figure 17.
The statistical results in terms of the registration errors and time consumption for partially overlapping sheet metal parts are shown in Table 6.
According to Figure 16 and Figure 17 and Table 6, when the ICP algorithm is applied to register sheet metal parts, a small number of iterations results in the poor overlapping of the two sets of point clouds, and obvious registration errors occur. Compared with the direct application of the ICP algorithm, the improved method proposed in this paper, with an increase in the number of iterations, can not only significantly improve the registration accuracy but also effectively reduce the registration time, thus improving the registration efficiency.

3.3. Three-Dimensional Reconstruction Experiments on Sheet Metal Parts

The sheet metal parts after registration by the improved algorithm are subjected to 3D reconstruction, as shown in Figure 18. The reconstructed model is magnified locally, as shown in Figure 19. In both Figure 18 and Figure 19, (a) represents the greedy triangulation reconstruction, (b) represents the Poisson reconstruction, and (c) represents the reconstruction results of the improved algorithm.
A comparison of the running times of the three different reconstruction algorithms applied to sheet metal parts is carried out, and the results are shown in Table 7.
According to Figure 18 and Figure 19 and Table 7, the surface of the sheet metal part after greedy reconstruction is not smooth and has many sharp edges and corners. The surface of the sheet metal part after Poisson reconstruction is smooth, but it performs poorly in the reconstruction of circular and square holes and cannot achieve accurate shapes. The surface model generated by the Poisson reconstruction algorithm exhibits excellent performance both in terms of overall consistency and surface smoothness. However, when dealing with objects that have multiple interconnected parts, it cannot effectively reconstruct each part individually; instead, it creates a continuous overall surface. In addition, for objects with relatively complex shapes and large variations in the density of surface point clouds, the features of the reconstructed surface model cannot be clearly expressed. The improved greedy reconstruction can not only reconstruct the shape of the sheet metal parts but also produces fewer sharp edges and corners and a smooth surface. Although the reconstruction time of the improved algorithm is longer than that of greedy reconstruction, it has a better reconstruction effect. Therefore, this paper uses the improved algorithm to complete the 3D reconstruction of sheet metal parts.

3.4. Digital Detection Experiments on Sheet Metal Parts

3.4.1. Digital Detection Algorithm

(1) Straight-line detection algorithm
Since a point in the image space corresponds to a unique equation in the parameter space, the Hough transform can be used for straight-line detection. By utilizing the duality property between points and lines, and conducting statistical analysis on the point set in the image, the precise identification of straight lines can be achieved.
In the Cartesian coordinate system XOY, the straight line passing through point ( x 0 , y 0 ) is as shown in Equation (27):
y_0 = k x_0 + b
where k is the slope of a straight line, and b is the intercept of a straight line.
At the same time, the above formula can be rewritten as follows:
b = -k x_0 + y_0
Point ( x 0 , y 0 ) in the image space is mapped to a straight line in the parameter space through the Hough transform, and the equation of this straight line is uniquely determined. Similarly, point ( x 1 , y 1 ) in the image space also corresponds to a straight line in the parameter space, and the straight line corresponding to point ( x 1 , y 1 ) intersects with the straight line corresponding to point ( x 0 , y 0 ) at point ( k 1 , b 1 ) in the parameter space, as shown in Figure 20.
Therefore, any straight line passing through points ( x 0 , y 0 ) and ( x 1 , y 1 ) in the image space corresponds to a straight line in the k-b parameter space. If these straight lines intersect at a point ( k 1 , b 1 ) in the parameter space, then k 1 and b 1 are the parameters of the straight line connecting ( x 0 , y 0 ) and ( x 1 , y 1 ) in the image space.
The shape of the curve in the parameter space is determined by the curve function in the original image space. In order to avoid the situation whereby the slope of a straight line may be infinite, a common practice is to transform the image pixels from the Cartesian coordinate system ( x , y ) to the polar coordinate system ( r , θ ) . In this coordinate system, the equation of a straight line can be represented by the polar radius r and the polar angle θ , as shown in Equation (29):
r = x \cos\theta + y \sin\theta, \quad r > 0, \; 0 < \theta < 2\pi
where r is the distance from the straight line to the origin of the coordinate system, and θ is the angle between the x-axis and the perpendicular drawn from the origin to the detected line.
In the cumulative analysis in the polar coordinate system, each intersecting straight line is assigned a unit weight, and the weight of an intersection point is the sum of the weights of all straight lines passing through that point. If multiple straight lines intersect at the same point, a higher sum of weights indicates that the straight line corresponding to this point is supported by a large number of image points. A parameter accumulator L(r, θ) is set up, and the accumulator cell corresponding to each intersection point is incremented. When the value of an accumulator cell reaches a peak, the parameters (r, θ) of that cell represent a straight line in the Cartesian coordinate system.
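A minimal sketch (not the authors' code) of straight-line detection on the projected 2D image with OpenCV's Hough transform is shown below. The file name and thresholds are illustrative, and an edge map is computed first because HoughLines operates on binary edge images.

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

std::vector<cv::Vec2f> detectLines(const std::string& image_path) {
  cv::Mat gray = cv::imread(image_path, cv::IMREAD_GRAYSCALE);
  cv::Mat edges;
  cv::Canny(gray, edges, 50, 150);                        // binary edge map

  std::vector<cv::Vec2f> lines;                           // each line stored as (r, theta)
  cv::HoughLines(edges, lines, 1.0, CV_PI / 180.0, 120);  // accumulator peak threshold = 120 votes
  return lines;
}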
(2) Circle detection algorithm
The Hough transform is widely used in the recognition of circular contours in two-dimensional image analysis. It can effectively distinguish circular structures from complex backgrounds and maintain stable performance even in the presence of noise interference. The equation of a circle in the known Cartesian coordinate system is
(x - a)^2 + (y - b)^2 = r^2
The transformation from the Cartesian coordinate system to the polar coordinate system can be expressed as
a = x - r \cos\theta, \quad b = y - r \sin\theta
In a three-dimensional coordinate system composed of the parameters a, b and r, any point can represent a circle. If the center point and the radius of a circle in an image are known, then, by rotating the coordinates of any point on the circle around the center point, the polar coordinates of all points on the circle can be obtained.
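A minimal sketch (not the authors' code) of circle detection on the projected image with OpenCV's Hough gradient method is given below; the radii and thresholds are illustrative and expressed in pixels.

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

std::vector<cv::Vec3f> detectCircles(const std::string& image_path) {
  cv::Mat gray = cv::imread(image_path, cv::IMREAD_GRAYSCALE);
  cv::medianBlur(gray, gray, 5);                           // suppress noise before voting

  std::vector<cv::Vec3f> circles;                          // each circle stored as (a, b, r) in pixels
  cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                   1.0,            // dp: accumulator resolution ratio
                   20.0,           // minimum distance between circle centres
                   100.0, 30.0,    // Canny high threshold / accumulator threshold
                   5, 200);        // minimum and maximum radius in pixels
  return circles;
}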

3.4.2. Projection Experiments on Sheet Metal Parts

This paper adopts a method that combines point cloud and image processing to improve the contour-based measurement method. The key step of this method is to flatten the 3D point cloud. The main steps are as follows:
(1) Perform 3D reconstruction on the point cloud data obtained by scanning the workpiece with a 3D scanner;
(2) Project the 3D reconstructed model onto a plane to obtain a 2D graphic.
Figure 21a shows a standard disc part, and an image of its projection onto a plane is shown in Figure 21b.
The dimensions of the standard disc part are obtained by measuring it with a universal tool microscope. The pixel values of the disc projection are measured by the Hough algorithm. The universal tool microscope is shown in Figure 22, and the measurement results are shown in Table 8.
From Table 8, it can be concluded that the ratio of the dimensions of the standard disc part to the pixel values is 0.0633. When the 3D scanner scans the sheet metal part at the same distance, the ratio of the measured dimensions to the pixel values remains unchanged. This value is taken as the conversion scale factor for the scanning of the sheet metal part. The measured dimensions of the sheet metal part can be obtained through this scale factor and the pixel values of the sheet metal part.
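As an illustrative example (with assumed numbers, not values taken from the experiments): if a hole in the projected image of a sheet metal part spans 300 pixels, its measured diameter would be 300 × 0.0633 ≈ 18.99 mm, which is then compared with the dimension obtained by the universal tool microscope.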
The dimensions of the sheet metal part are measured by a universal tool microscope, and these are regarded as the actual dimensions of the sheet metal part. The dimensions of the workpiece obtained by measuring the sheet metal part through the digital detection system are the measured values. The measured values are compared with the actual dimensions of the sheet metal part measured by the universal tool microscope to obtain the measurement error of the digital detection system.

3.4.3. Experiments on Dimension Measurement of Sheet Metal Parts

Workpiece 1 is shown in Figure 23a. Its point cloud data, scanned by the 3D scanner, were reconstructed into a 3D model and projected onto a plane to obtain the image shown in Figure 23b. The measurement results are shown in Table 9.
According to Table 9, the errors of the dimensions measured for this sheet metal part by the digital detection system range from 0.1416 mm to 0.2637 mm, all less than 0.3 mm, so the measurement accuracy of the system for workpiece 1 meets the technical requirements. Workpiece 2 is shown in Figure 24a; its reconstructed 3D model, obtained from the scanned point cloud data, was projected onto a plane to give the image in Figure 24b. The measurement results are shown in Table 10.
According to Table 10, the measured dimensional errors range from 0.1559 mm to 0.2672 mm, all less than 0.3 mm, so the measurement accuracy of the system for workpiece 2 meets the technical requirements. Workpiece 3 is shown in Figure 25a; the image obtained by projecting its 3D reconstruction onto a plane is shown in Figure 25b. The measurement results are shown in Table 11.
According to Table 11, the measured dimensional errors range from 0.1443 mm to 0.2684 mm, all less than 0.3 mm, so the measurement accuracy of the system for workpiece 3 meets the technical requirements.
Based on the measurement results in Table 9, Table 10 and Table 11, the minimum error value of the digital detection system when measuring the three parts is 0.1416 mm, and the maximum error value is 0.2684 mm. The main reasons for the occurrence of errors are as follows:
(1) The surface of the model after 3D reconstruction is not smooth enough. As can be seen from Figure 18, the surface of the 3D model of the reconstructed sheet metal part has sharp edges and corners, and it is not completely consistent with the sheet metal part, resulting in an inaccurate projected image.
(2) The 3D scanner itself introduces measurement errors.
(3) The parameters of the point cloud processing algorithms are not optimal.

4. Conclusions

This paper presents a digital inspection technology for sheet metal parts based on 3D point clouds, which integrates point cloud acquisition, denoising, registration, 3D reconstruction and dimensional measurement. Using an EinScan Pro 2X 3D scanner to acquire point cloud data, this technology establishes topological relationships via KD trees, denoises through pass-through filtering and applies an improved voxel grid downsampling algorithm that retains fine features while reducing the data volume by 80.9%. For point cloud registration, the ISS algorithm extracts feature points described by FPFH, with SAC-IA for rough registration and an improved ICP algorithm (incorporating KD tree nearest-neighbor search and feature point matching), achieving 75% fewer iterations and an 84.7% lower registration error rate compared to traditional ICP. Moreover, 3D reconstruction adopts an MLS-optimized greedy projection triangulation algorithm, which outperforms Poisson reconstruction in preserving geometric features like holes. Dimensional measurement, via 3D-to-2D projection and the Hough transform with a 0.0633 scale factor, yields measurement errors of 0.1416–0.2684 mm across three workpieces, meeting the ±0.3 mm accuracy requirement, thus providing a reliable digital inspection solution for sheet metal parts.

Author Contributions

Conceptualization, J.G. and D.T.; methodology, S.G., Z.C. and R.L.; software, S.G. and Z.C.; investigation, R.L.; writing—original draft preparation, S.G. and R.L.; writing—review and editing, J.G. and D.T.; supervision, J.G.; project administration, D.T.; funding acquisition, J.G. and D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Key Research and Development Project of Heilongjiang Province (Grant No. 2024ZXDXA17).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, Y.; Zhang, Y.C. Length dimension measurement for large hot forgings based on green laser recognition. Acta Metrol. Sin. 2018, 39, 316–320. [Google Scholar]
  2. Cheng, Y.B.; Chen, X.H.; Wang, H.B. Uncertainty estimation model of CMM for size measurement. Acta Metrol. Sin. 2016, 37, 462–466. [Google Scholar]
  3. Lilienblum, E.; Al-Hamadi, A. A structured light approach for 3-D surface reconstruction with a stereo line-scan system. IEEE Trans. Instrum. Meas. 2015, 64, 1258–1266. [Google Scholar] [CrossRef]
  4. Siekański, P.; Magda, K.; Zagórski, A. On-line laser triangulation scanner for wood logs surface geometry measurement. Sensors 2019, 19, 1074–1079. [Google Scholar] [CrossRef]
  5. Tong, Q.B.; Jiao, C.Q.; Huang, H.; Li, G.B.; Ding, Z.L.; Yuan, F. An automatic measuring method and system using laser triangulation scanning for screw thread parameters. Meas. Sci. Technol. 2014, 25, 35202–35209. [Google Scholar] [CrossRef]
  6. Cai, Z.Y.; Jin, C.Q.; Xu, J.; Yang, T.X. Measurement of potato volume with laser triangulation and three-dimensional reconstruction. IEEE Access 2020, 8, 176565–176574. [Google Scholar] [CrossRef]
  7. Haque, S.M.; Govindu, V.M. Robust feature-preserving denoising of 3D point clouds. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Lyon, France, 25–28 October 2016. [Google Scholar]
  8. Lee, M.Y.; Lee, S.H.; Jung, K.D.; Lee, S.H.; Kwon, S.C. A novel preprocessing method for dynamic point-cloud compression. Appl. Sci. 2021, 11, 5941–5948. [Google Scholar] [CrossRef]
  9. Heinzler, R.; Piewak, F.; Schindler, P.; Stork, W. CNN-based lidar point cloud de-noising in adverse weather. IEEE Robot. Autom. Lett. 2020, 5, 2514–2521. [Google Scholar] [CrossRef]
  10. Ren, Y.J.; Li, T.Z.; Xu, J.K.; Hong, W.W.; Zheng, Y.C.; Fu, B. Overall filtering algorithm for multiscale noise removal from point cloud data. IEEE Access 2021, 9, 110723–110734. [Google Scholar] [CrossRef]
  11. Zou, B.C.; Qiu, H.D.; Lu, Y.F. Point cloud reduction and denoising based on optimized downsampling and bilateral filtering. IEEE Access 2020, 8, 136316–136326. [Google Scholar] [CrossRef]
  12. Xu, Z.; Kang, R.; Lu, R.D. 3D reconstruction and measurement of surface defects in prefabricated elements using point clouds. J. Comput. Civil. Eng. 2020, 34, 04020033. [Google Scholar] [CrossRef]
  13. Makovetskii, A.; Voronin, S.; Kober, V.; Voronin, A. A regularized point cloud registration approach for orthogonal transformations. J. Glob. Optim. 2022, 83, 497–519. [Google Scholar] [CrossRef]
  14. Koide, K.; Yokozuka, M.; Oishi, S.; Banno, A. Voxelized GICP for fast and accurate 3D point cloud registration. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  15. Young, M.; Pretty, C.; McCulloch, J.; Green, R. Sparse point cloud registration and aggregation with mesh-based generalized iterative closest point. J. Field Robot. 2021, 38, 1078–1091. [Google Scholar] [CrossRef]
  16. Yu, J.W.; Yu, C.G.; Lin, C.D.; Wei, F.Q. Improved iterative closest point (ICP) point cloud registration algorithm based on matching point pair quadratic filtering. In Proceedings of the 2021 International Conference on Computer, Internet of Things and Control Engineering (CITCE), Guangzhou, China, 12–14 November 2021. [Google Scholar]
  17. He, Y.W.; Lee, C.H. An improved ICP registration algorithm combining PointNet++ and ICP algorithm. In Proceedings of the 2020 Sixth International Conference on Control, Automation and Robotics (ICCAR), Singapore, Singapore, 20–23 April 2020. [Google Scholar]
  18. Vizzo, I.; Chen, X.Y.L.; Chebrolu, N.; Behley, J.; Stachniss, C. Poisson surface reconstruction for lidar odometry and mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  19. Maurya, S.R.; Magar, G.M. Performance of greedy triangulation algorithm on reconstruction of coastal dune surface. In Proceedings of the 2018 Third International Conference for Convergence in Technology (I2CT), Pune, India, 6–8 April 2018. [Google Scholar]
  20. Ando, R.; Ozasa, Y.; Guo, W. Robust surface reconstruction of plant leaves from 3D point clouds. Plant Phenomics 2021, 2021, 3184185. [Google Scholar] [CrossRef] [PubMed]
  21. Gu, R.R.; Sun, A.B.; Li, Y.R.; Wang, Z.Y. An analysis of triangulation reconstruction based on 3D point cloud with geometric features. In Proceedings of the 2021 Tenth International Symposium on Precision Mechanical Measurements, Qingdao, China, 15–17 October 2021. [Google Scholar]
  22. Xie, Z.P.; Lang, Y.C.; Chen, L.Q. Geometric modeling of rosa roxburghii fruit based on three-dimensional point cloud reconstruction. J. Food Qual. 2021, 2021, 9990499. [Google Scholar] [CrossRef]
  23. Pan, Y.; Dong, Y.Q.; Wang, D.L.; Chen, A.R.; Ye, Z. Three-dimensional reconstruction of structural surface model of heritage bridges using UAV-based photogrammetric point clouds. Remote Sens. 2019, 11, 1204. [Google Scholar] [CrossRef]
  24. Liao, L.X.; Zhou, J.D.; Ma, Y.Q.; Liu, Y.B.; Lei, J. A thinning method of building 3D laser point cloud data based on voxel grid. Geomat. Spat. Inf. Technol. 2023, 46, 53–56, 60. [Google Scholar]
  25. Zhang, Z.L.; Dong, Y.M.; Zhu, J.X.; Lu, J.J. An improved ICP point cloud registration algorithm based on ISS-FPFH features. Appl. Laser 2023, 43, 124–131. [Google Scholar]
  26. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009. [Google Scholar]
  27. Zhang, M.; Wen, J.H.; Zhang, Z.X.; Zhang, J.Q. Laser points cloud registration using euclidean distance measure. Sci. Surv. Mapp. 2010, 35, 5–8. [Google Scholar]
  28. Li, Y.; Qu, R.F.; Gu, S.; Xie, Y.M. 3D point cloud registration method of ICP based on k-d tree. Comput. Knowl. Technol. 2022, 18, 82–84. [Google Scholar]
  29. Liu, X.Y.; Wang, J.; Chang, Q.F.; Wang, X.G. Fast 3D reconstruction of point cloud based on improved greedy projection triangulation algorithm. Laser Infrared 2022, 52, 763–770. [Google Scholar]
  30. Lu, M.S.; Yao, J.; Dong, S.Y. The poisson surface reconstruction algorithm for normal constraint’s for point cloud data. J. Geomat. 2022, 47, 51–55. [Google Scholar]
Figure 1. Schematic diagram of the pass-through filtering algorithm.
Figure 2. ISS feature point extraction diagrams of different sheet metal parts.
Figure 3. FPFH of point Pq.
Figure 4. The FPFH feature line diagrams of different sheet metal parts.
Figure 5. The flow chart of the ICP algorithm.
Figure 6. The improved ICP algorithm.
Figure 7. Local plane fitting and point cloud projection.
Figure 8. Steps of the Poisson reconstruction algorithm.
Figure 9. Schematic diagram of Poisson reconstruction.
Figure 10. The 3D point cloud of a sheet metal part and the result of its noise reduction through pass-through filtering.
Figure 11. Effects of noise removal from a sheet metal part through statistical filtering at different values of λ.
Figure 12. Effects of noise removal from a sheet metal part through statistical filtering at different values of k.
Figure 13. Downsampling of sheet metal parts.
Figure 14. Registration of sheet metal parts using the ICP algorithm.
Figure 15. Registration of sheet metal parts using the improved algorithm.
Figure 16. Registration of partially overlapping sheet metal parts using the ICP algorithm.
Figure 17. Registration of partially overlapping sheet metal parts using the improved algorithm.
Figure 18. The 3D reconstruction results for sheet metal parts.
Figure 19. Locally enlarged views of the 3D reconstruction of sheet metal parts.
Figure 20. The Hough transform in the Cartesian coordinate system.
Figure 21. A disc and its planarized image.
Figure 22. The universal tool microscope.
Figure 23. Workpiece 1 and its projection image.
Figure 24. Workpiece 2 and its projection image.
Figure 25. Workpiece 3 and its projection image.
Table 1. Parameters of the 3D scanner.
Parameter | Value
Device model | EinScan Pro 2X
Scanning precision | 0.045 mm
Scanning range per single scan | 150 × 120 mm~250 × 200 mm
Working center distance | 400 mm
Printable data output format | OBJ, STL, ASC, PLY, P3, 3MF
Weight of scanning head | 1.13 kg
System requirements | Windows 10, 64-bit
Table 2. Test results after removing the noise of sheet metal parts with different proportional coefficients.
Serial Number | Number of Point Clouds | λ | k | Number of Point Clouds After Noise Reduction
(a) | 20,463 | 0.05 | 1 | 13,713
(b) | 20,463 | 0.1 | 1 | 14,120
(c) | 20,463 | 0.5 | 1 | 16,552
(d) | 20,463 | 1 | 1 | 17,901
(e) | 20,463 | 1.5 | 1 | 18,674
(f) | 20,463 | 2 | 1 | 19,253
Table 3. Test results after removing the noise of sheet metal parts at different numbers of adjacent points.
Serial Number | Number of Point Clouds | λ | k | Number of Point Clouds After Noise Reduction
(a) | 17,901 | 1 | 1 | 17,901
(b) | 17,901 | 1 | 2 | 17,761
(c) | 17,901 | 1 | 3 | 17,687
(d) | 17,901 | 1 | 4 | 17,622
(e) | 17,901 | 1 | 5 | 17,617
(f) | 17,901 | 1 | 6 | 17,616
Table 4. Number of point clouds for downsampling of sheet metal parts.
Algorithm | Number of Point Clouds
Initial point clouds | 17,617
Downsampling by the voxel grid algorithm | 3359
Downsampling by the improved algorithm | 3359
Table 5. The registration results for sheet metal parts.
Algorithm | Number of Iterations | Registration Error (m) | Time (s)
ICP algorithm | 4 | 8.00264 × 10⁻⁵ | 5.936
ICP algorithm | 8 | 1.93961 × 10⁻⁵ | 8.453
ICP algorithm | 12 | 5.56931 × 10⁻⁶ | 10.867
ICP algorithm | 16 | 1.23578 × 10⁻⁶ | 13.349
ICP algorithm | 20 | 6.96264 × 10⁻⁷ | 15.896
ICP algorithm | 24 | 2.63578 × 10⁻⁷ | 18.345
Improved algorithm | 1 | 2.46069 × 10⁻⁶ | 11.522
Improved algorithm | 2 | 2.03659 × 10⁻⁶ | 11.572
Improved algorithm | 3 | 1.60361 × 10⁻⁶ | 11.634
Improved algorithm | 4 | 1.18995 × 10⁻⁶ | 11.681
Improved algorithm | 5 | 7.51249 × 10⁻⁷ | 11.736
Improved algorithm | 6 | 4.22656 × 10⁻⁷ | 11.793
Table 6. The registration results for partially overlapping sheet metal parts.
Algorithm | Number of Iterations | Registration Error (m) | Time (s)
ICP algorithm | 4 | 7.07786 × 10⁻⁵ | 5.032
ICP algorithm | 8 | 1.89264 × 10⁻⁵ | 7.982
ICP algorithm | 12 | 6.68919 × 10⁻⁶ | 10.975
ICP algorithm | 16 | 1.52197 × 10⁻⁶ | 14.063
ICP algorithm | 20 | 5.69119 × 10⁻⁷ | 17.192
ICP algorithm | 24 | 2.03491 × 10⁻⁷ | 20.038
Improved algorithm | 1 | 2.59116 × 10⁻⁶ | 10.581
Improved algorithm | 2 | 2.10359 × 10⁻⁶ | 10.623
Improved algorithm | 3 | 1.68134 × 10⁻⁶ | 10.674
Improved algorithm | 4 | 1.20319 × 10⁻⁶ | 10.716
Improved algorithm | 5 | 8.76916 × 10⁻⁷ | 10.759
Improved algorithm | 6 | 4.95649 × 10⁻⁷ | 10.807
Table 7. Time consumed by different algorithms for 3D reconstruction.
Algorithm | Time (s)
Greedy triangulation algorithm | 4.368
Poisson algorithm | 4.136
Improved algorithm | 4.697
Table 8. Measurement results regarding the actual dimensions of the disc and its pixel values.
Size Name | Value Measured by the Universal Tool Microscope (mm) | Pixel Value
d1 | 95.9941 | 1517
d2 | 43.9887 | 695
Table 9. Dimensional measurement results for workpiece 1.
Dimension Annotation | Value Measured by the Universal Tool Microscope (mm) | Measured Value (mm) | Error (mm)
h1 | 80.0129 | 80.2011 | 0.1882
h2 | 80.0067 | 80.2644 | 0.2577
h3 | 80.0117 | 80.2011 | 0.1894
h4 | 79.9962 | 80.1378 | 0.1416
d1 | 15.0165 | 14.8122 | 0.2043
d2 | 15.0097 | 15.1922 | 0.1825
d3 | 15.0126 | 14.7489 | 0.2637
d4 | 15.0203 | 15.2553 | 0.2350
h5 | 20.0105 | 20.2562 | 0.2457
h6 | 20.0107 | 19.7496 | 0.2611
h7 | 20.0096 | 20.1927 | 0.1831
h8 | 20.0148 | 20.1927 | 0.1779
Table 10. Dimensional measurement results for workpiece 2.
Dimension Annotation | Value Measured by the Universal Tool Microscope (mm) | Measured Value (mm) | Error (mm)
h1 | 120.0026 | 120.2067 | 0.2041
h2 | 80.0103 | 80.2011 | 0.1908
h3 | 120.0067 | 120.2704 | 0.2637
h4 | 79.9972 | 80.2644 | 0.2672
h5 | 18.0065 | 18.2304 | 0.2239
h6 | 18.0112 | 18.1671 | 0.1559
h7 | 18.0130 | 17.8506 | 0.1624
h8 | 17.9826 | 18.1671 | 0.1845
h9 | 20.0192 | 20.2560 | 0.2368
h10 | 19.9763 | 20.1927 | 0.2164
h11 | 20.0125 | 20.1927 | 0.1802
d1 | 40.0070 | 40.2588 | 0.2518
d2 | 20.0108 | 19.7496 | 0.2612
d3 | 10.0203 | 10.2546 | 0.2343
Table 11. Dimensional measurement results for workpiece 3.
Dimension Annotation | Value Measured by the Universal Tool Microscope (mm) | Measured Value (mm) | Error (mm)
h1 | 120.0116 | 120.2067 | 0.1951
h2 | 80.0126 | 80.2011 | 0.1885
h3 | 119.9776 | 120.1434 | 0.1658
h4 | 79.9863 | 80.1378 | 0.1515
h5 | 20.0067 | 20.2561 | 0.2494
h6 | 15.0026 | 15.2553 | 0.2527
h7 | 15.0120 | 15.1924 | 0.1804
h8 | 14.9773 | 15.1287 | 0.1514
h9 | 17.9926 | 17.7242 | 0.2684
h10 | 15.0028 | 14.8122 | 0.1906
h11 | 15.0102 | 15.2553 | 0.2451
h12 | 15.0113 | 14.8126 | 0.1987
h13 | 21.0036 | 20.8257 | 0.1779
h14 | 15.0096 | 15.2553 | 0.2457
h15 | 15.0198 | 14.8755 | 0.1443
h16 | 15.0114 | 14.8122 | 0.1992
h17 | 15.9727 | 16.2048 | 0.2321
d1 | 8.0076 | 7.8492 | 0.1584
d2 | 8.0103 | 8.2290 | 0.2187
d3 | 8.0060 | 8.1657 | 0.1597
