Article

Calibration of Roughness of Standard Samples Using Point Cloud Based on Line Chromatic Confocal Method

1 College of Metrology Measurement and Instrument, China Jiliang University, Hangzhou 310018, China
2 Zhejiang Institute of Quality Sciences, Hangzhou 310018, China
3 Zhejiang Key Laboratory of Digital Precision Measurement Technology Research, Hangzhou 310018, China
4 Zhejiang Shuanghong Technology Co., Ltd., Wenling 317500, China
* Authors to whom correspondence should be addressed.
Electronics 2026, 15(7), 1517; https://doi.org/10.3390/electronics15071517
Submission received: 11 March 2026 / Revised: 30 March 2026 / Accepted: 2 April 2026 / Published: 4 April 2026

Abstract

This article proposes a calibration method that combines line chromatic confocal sensing with 3D point cloud processing to overcome the surface damage and low efficiency of traditional roughness sample calibration. A line chromatic confocal sensor scans roughness samples to obtain dense point clouds. We propose a back-projection-mechanism adaptive density-based spatial clustering of applications with noise and statistical outlier removal (BPM-ADBSCAN-SOR) algorithm, which uses the ADBSCAN and SOR algorithms to address outlier noise and near-field noise in low-resolution point clouds, respectively, and then employs bounding boxes to crop the original high-resolution point cloud, thereby achieving multi-scale noise removal and point cloud clustering. We further propose a Steady-State Confidence-Weighted Robust Gaussian Filtering (SSCW-RGF) algorithm, which computes the extent of the steady-state region, designs a steady-state-region credibility weighting function to correct the baseline fitting results, and incorporates M-estimation theory to obtain a robust Gaussian filter weighted by steady-state-region credibility, thereby mitigating the impact of outliers on Gaussian baseline fitting. Experiments verify the system accuracy: the repeatability standard deviation is 0.0355 μm and the relative repeatability error is 0.3984%. Compared with the nominal values of the sample blocks, the maximum absolute error is −0.745 μm, within the specification tolerance. Compared with a contact profilometer, the maximum absolute error is 0.050 μm, the maximum relative error is +4.5%, and the calibration efficiency is improved by 90%. The method provides a new approach for surface roughness calibration.

1. Introduction

Surface roughness directly determines the contact state, lubrication conditions, fatigue crack initiation position and sealing reliability of friction pairs, and is an important factor affecting the reliability and service life of mechanical systems [1]. The roughness sample block, as a standard artifact for surface roughness measurement, is the core material basis for ensuring the accuracy of measurement results; its calibration level therefore directly affects the reliability of the manufacturing quality control system, and its accuracy directly affects the performance and life of the product [2,3].
At present, the mainstream surface roughness detection methods can be roughly divided into two categories: contact measurement and non-contact measurement. The stylus profilometer is widely used in specimen calibration because of its mature measurement chain and good repeatability, but the convolution effect and the micro-deformation caused by the stylus contact force are especially problematic when calibrating soft or micro-structured specimens.
In order to make up for the shortcomings of traditional contact measurement, researchers have sought efficient, simple and non-destructive detection methods, and non-contact roughness measurement technology has emerged. It has attracted considerable attention because it is non-contact and non-damaging, is efficient, and lends itself to online measurement. Several groups have already reported results in non-contact roughness measurement. Hweju [4] selected the support vector machine (SVM) model, which is well suited to small sample datasets, to detect the roughness of turned surfaces and established a regression model; a comparison of the two models confirmed that the SVM had better predictive performance. Vishwanatha et al. [5] proposed a feature extraction method based on the dual-tree complex wavelet transform (DTCWT) and the gray-level co-occurrence matrix (GLCM): the preprocessed images were fused using the DTCWT, the fused image coefficients were converted to a GLCM to extract second-order texture features, and the features together with the cutting conditions were fed to a neural network to predict the surface roughness Ra. Lishchenko [6] developed a non-contact surface roughness measurement system based on a chromatic confocal sensor and validated it using roughness reference specimens and comparator standards; the results showed that the system can measure roughness in the range Ra 0.4–12 μm with relative errors within 10%. Lin et al. [7] proposed a method of road 3D reconstruction and homogeneous-matrix error correction based on an RGB-D camera to achieve dynamic detection of asphalt and block pavement at 40 km/h; the error between the IRI value and the traditional measurement was ≤9.5%, and the cumulative error was reduced from 40 cm to 2.8 cm. Yuan et al. [8] proposed an optimization method combining structure-from-motion (SfM) photogrammetry with an XGBoost model: point cloud data were obtained through a fixed-camera shooting strategy, and, using three key parameters such as the target spatial resolution error, the estimation accuracy of the rock joint roughness coefficient (JRC) under different section intervals was improved by 84.7% on average.
Based on the above analysis, there are still the following problems in roughness detection at this stage:
  • The traditional roughness calibration method relies heavily on manual work, and the probe causes surface damage as it moves across the measured surface, affecting the final detection result. Moreover, the traditional method can only compute the roughness of one measured surface at a time, so calibration efficiency is low: calibrating a single group of samples usually takes 2–3 h.
  • At present, the non-contact measurement method mainly deals with one-dimensional signals and two-dimensional images [9,10,11], and is typically applied to commonly encountered machined parts; however, there has been relatively little research into the calibration of standard reference blocks for surface roughness. One-dimensional signals struggle to effectively characterize three-dimensional surfaces; they require manual selection of sampling positions and the delineation of evaluation lengths to obtain signals. Meanwhile, two-dimensional imaging methods are currently used predominantly for predicting roughness via machine learning rather than for calibration; they tend to establish empirical models rather than perform direct measurement, and two-dimensional images themselves contain only greyscale information and do not reflect the actual surface height variations. In contrast, 3D point clouds capture the spatial coordinates of the measured surface; these coordinate values can be directly utilized for relevant calculations. Furthermore, when the test specimen exhibits complex structures, such as machined surfaces, 3D point clouds can reveal the true three-dimensional morphology and volume of the surface, whereas 2D images cannot. Consequently, measuring surface roughness using 3D point clouds offers significant advantages.
  • At present, the accuracy of roughness detection using point clouds is mostly at the millimeter level [12,13,14], leaving considerable room for development in sub-micron roughness detection. Moreover, existing approaches can only process the point cloud of one sample block per measurement [15], whereas roughness calibration in practice often involves many groups of comparison samples of different machining types and different roughness values, so detection efficiency is low. The point cloud acquisition methods are mainly structured light, lidar, laser scanning arms and RGB-D cameras, whose scanning accuracy is usually only about 10 μm, so the accuracy of the acquired point cloud data is low.
  • Chromatic confocal technology has made remarkable progress in the field of geometric measurement, but it is rarely used in the calibration of roughness sample sets [16,17,18,19]. The point-by-point (lattice) scanning method often used for small workpieces [20,21] has long scanning times and low measurement efficiency and is not suitable for large-area scanning; for a roughness sample set, it can hardly meet the requirements of high density and fast acquisition. The line chromatic confocal sensor can quickly obtain high-density point cloud data, but the resulting datasets are very large, usually tens of millions to hundreds of millions of points, and the computational demands are extremely high. Most traditional point cloud processing algorithms cannot handle such data efficiently, so it is necessary to design a set of efficient, fast and stable point cloud processing algorithms for roughness calculation.
Based on the above analysis, this paper builds an experimental system around a line chromatic confocal sensor and uses it to scan a multi-group roughness sample set to obtain high-precision, large-scale point cloud data. To address the noise in the point cloud data, the BPM-ADBSCAN-SOR algorithm is proposed, which effectively removes stray points and improves the computation speed. In addition, following the relevant calibration criteria, this paper designs an algorithm that implements the non-contact roughness calibration method at the point cloud level, and proposes a robust Gaussian filtering algorithm weighted by steady-state region credibility to process the point cloud data, addressing the boundary effect and outlier sensitivity of the traditional Gaussian filtering algorithm when fitting the reference centerline.

2. Principle of Calibration

2.1. Collection Principle of Linear Chromatic Confocal Sensor

The detection principle of the line chromatic confocal measurement system is shown in Figure 1. The system uses a broad-spectrum white light source covering visible light bands as an illumination source. After slit shaping, it forms a linear light source and realizes linear illumination in the X direction. Then, the linear light enters the dispersive objective lens through a beam-splitting prism, and the dispersion effect is introduced in the Y direction, so that light of different wavelengths corresponds to different focal positions in the optical axis direction. The dispersed light beam forms a series of confocal planes varying with wavelength in the normal direction, and the light of different wavelength bands is focused at different axial height positions respectively. Then, the reflected light enters the subsequent signal acquisition system through the receiving slit, and the defocused light that fails to focus in the normal direction of the measured surface is effectively blocked by the slit and cannot reach the detector, thus realizing the spatial selective filtering of the axial information. The mapping relationship between pixel position and wavelength can be established by system calibration, and the pixel position corresponding to the spectral peak can be converted into the axial height information of the measurement point by combining with the axial calibration curve.

2.2. Point Cloud Denoising Based on the BPM-ADBSCAN-SOR Algorithm

When the line chromatic confocal sensor collects the point cloud data of the surface roughness sample set, noise interference exists in the point cloud data due to the disturbance of the surrounding environment, which will have a great impact on the accuracy of the final roughness calculation result. In addition, with the development of three-dimensional scanning technology, the scale of point cloud data becomes larger and larger, and the efficiency of subsequent point cloud processing steps will be significantly reduced if noise and other data are not eliminated. Therefore, it is necessary to preprocess the point cloud data collected by the system. At present, the point cloud preprocessing mainly faces the following three problems:
  • In the point cloud data collected by the line chromatic confocal sensor, the noise is mixed: it mainly comprises outlier noise around the main body of the roughness sample cloud and near-field noise in the gaps between sample blocks. Traditional point cloud denoising algorithms, such as statistical filtering, pass-through filtering and low-pass filtering, handle a single noise type well, but their performance on mixed noise is mediocre.
  • Since the nominal values of the surface roughness sample blocks are mostly at the micron level, the three-dimensional surface profile fluctuation is small. To ensure high accuracy of the calculation results, the point cloud collected by the line chromatic confocal sensor is a large-scale, high-density point cloud with tens of millions to hundreds of millions of points, placing heavy demands on computing power and memory. Traditional filtering algorithms struggle to support computation at this scale.
  • Since the purpose of this study is to process the point cloud data of multiple roughness sample sets, and since, according to JJF 1099-2018 [22], roughness sample blocks with different nominal values under different processing conditions have different evaluation lengths during calibration, the point clouds of the individual sample blocks must be separated during preprocessing.
Based on the aforementioned analysis, this paper proposes a combined denoising algorithm, ADBSCAN-SOR with a back-projection mechanism (BPM-ADBSCAN-SOR). This algorithm enables rapid and efficient removal of mixed noise from roughness sample point clouds and clustering of multiple sets of samples. The overall process of the algorithm is illustrated in Figure 2:
Firstly, considering that the point cloud data collected by the line chromatic confocal sensor is large-scale, dense point cloud data, the point cloud should first be simplified to reduce the computing power requirements and improve efficiency; octree-based voxel down-sampling is adopted. The octree [23], a typical hierarchical spatial partition data structure, recursively divides the bounding volume of a scene into eight uniform sub-regions until the number of elements in each sub-node meets a preset threshold or another termination condition, thereby organizing and managing the point cloud in an orderly way. To establish the topological relationship of the octree, first traverse all points and record the extreme values in the X, Y and Z directions to establish the minimum three-dimensional grid enclosing all points. The grid size is:
$l_x = x_{\max} - x_{\min}, \quad l_y = y_{\max} - y_{\min}, \quad l_z = z_{\max} - z_{\min}$
Set the level d for the octree, and calculate the voxel mesh size of each leaf node of the octree as:
$D_x = l_x / 2^d, \quad D_y = l_y / 2^d, \quad D_z = l_z / 2^d$
Traversing each leaf node, voxel down-sampling is completed by computing the voxel center of the current node and replacing all points in that voxel with it. In this paper, the octree hierarchy is set to 10 levels. The original point cloud data and the octree-based down-sampling hierarchical division are shown in Figure 3 and Figure 4, respectively. The point cloud data before and after down-sampling were viewed in CloudCompare v2.13.0. The comparison of point cloud density before and after octree down-sampling is shown in Figure 5.
After the point cloud down-sampling based on the octree, the point cloud density decreases significantly, and the number of points is down-sampled from 38,720,845 to 842,642. The point cloud structure is effectively simplified, which greatly reduces the computational burden and improves the computational speed for subsequent denoising and clustering.
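As an illustrative sketch of the octree-style voxel down-sampling described above (not the authors' implementation; the function and parameter names are our own), splitting the bounding box into $2^d$ cells per axis and replacing each occupied voxel's points by their centroid could look like:

```python
import numpy as np

def voxel_downsample(points, depth=10):
    """Divide the bounding box into 2**depth cells per axis (equivalent to
    descending `depth` octree levels) and replace the points in each
    occupied voxel by their centroid."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    size = np.maximum(maxs - mins, 1e-12) / (2 ** depth)   # D = l / 2^d
    idx = np.floor((points - mins) / size).astype(np.int64)
    idx = np.minimum(idx, 2 ** depth - 1)                  # clamp points on the max face
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    out = np.empty((len(keys), 3))
    for dim in range(3):                                   # per-voxel centroid
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```

A production pipeline would likely use an existing implementation (e.g. a point cloud library's voxel down-sampling) rather than this NumPy sketch, but the grouping-then-averaging logic is the same.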
After obtaining the low-resolution point cloud data, the noise type of the point cloud main body is analyzed, and it is found that the noise in the point cloud data mainly includes outlier noise distributed around the main body and near-field noise existing in the roughness sample block gap. The specific noise distribution is shown in Figure 6.
To address outlier noise while also meeting the clustering requirements for the sample blocks, the DBSCAN algorithm can effectively remove outlier noise while clustering the data. It uses the concepts of neighborhood and data point similarity to achieve effective classification. However, when the line chromatic confocal sensor acquires the three-dimensional point cloud of a roughness sample block, placement or platform errors often tilt the measured surface. Consequently, a low-frequency trend term is superimposed on the original point cloud data.
This trend item will affect the subsequent DBSCAN clustering and roughness parameter calculation, so it is necessary to correct the sample tilt. The sample tilt shows a first-order linear trend term in the point cloud.
To eliminate sample skew, it is necessary to estimate the trend term:
$z_{plane}(x, y) = a x + b y + c$
The least square method is often used for plane fitting, and the objective function is:
$\min_{a,b,c} \sum_{i=1}^{N} (z_i - a x_i - b y_i - c)^2$
where $(x_i, y_i, z_i)$ are the spatial coordinates of the points and N is the number of points. In matrix form:
$$Z = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix}, \qquad A = \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ \vdots & \vdots & \vdots \\ x_n & y_n & 1 \end{bmatrix}$$
Then the plane parameter is:
$\theta = \begin{bmatrix} a & b & c \end{bmatrix}^T$
According to the least square method:
$\theta = (A^T A)^{-1} A^T Z$
Thus, the optimal plane parameters a, b, and c can be obtained. After obtaining the fitting plane, subtract the fitting plane from the original point cloud:
$z_i' = z_i - (a x_i + b y_i + c)$
Then the corrected point cloud is:
$z'(x, y) = z_r(x, y)$
In the process of fitting the plane, the global trend estimation is used, the neighborhood average is not used, and the local height difference is not changed. Therefore, only the low-frequency trend item will be removed, and the surface undulation will not be smoothed. After correcting the inclination, the subsequent clustering denoising steps can be carried out.
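The least-squares tilt correction above (fitting $z = ax + by + c$ via $\theta = (A^T A)^{-1} A^T Z$ and subtracting the fitted plane) can be sketched as follows; this is an illustrative reading of the step, not the authors' code:

```python
import numpy as np

def remove_tilt(points):
    """Fit the plane z = a*x + b*y + c by least squares and subtract it
    from the heights, removing the low-frequency tilt term while leaving
    local height differences untouched."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    Z = points[:, 2]
    theta, *_ = np.linalg.lstsq(A, Z, rcond=None)    # theta = (a, b, c)
    corrected = points.copy()
    corrected[:, 2] = Z - A @ theta                   # z_i' = z_i - (a x_i + b y_i + c)
    return corrected, theta
```

`np.linalg.lstsq` solves the normal equations in a numerically stable way, which is preferable to forming $(A^T A)^{-1}$ explicitly.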
DBSCAN [24] can detect arbitrary cluster structures without specifying the number of target clusters, and can automatically detect and eliminate noise points without additional noise point removal operations. ε and MinPts are two important parameters used to measure the neighborhood density of data points in the algorithm. ε represents the distance threshold of data points in a specific range, that is, the defined radius; MinPts refers to the threshold of the minimum number of samples in the range of ε radius. DBSCAN’s clustering concept is a density-reachability-based classification method that groups data points with higher densities into a cluster group, which usually includes one or more core objects.
However, traditional DBSCAN relies on manually specified ε and MinPts, which have a decisive influence on clustering results. In actual scanned point clouds, point cloud density often varies significantly in different regions, and fixed parameters often fail to accommodate high-density and low-density regions, resulting in low-density regions being misclassified as noise or high-density regions being misclassified as the same structure. Therefore, this study introduces an adaptive parameter selection mechanism based on k-nearest neighbor distance distribution on the traditional DBSCAN structure to automatically estimate the optimal ε and improve the robustness of the algorithm.
Specifically, for a point cloud sample set p, the distance dk(pi) from each data point to its kth nearest neighbor is first computed, forming a set of k-nearest neighbor distances:
$D_k = \{ d_k(p_1), d_k(p_2), \ldots, d_k(p_m) \}$
After sorting the distance set, a typical “elbow phenomenon” can be observed, reflecting the change trend of point cloud density from dense area to sparse area. Generally speaking, there are quantile estimation methods, the elbow method and the local adaptive method to estimate the ε value. All three methods can realize automatic estimation of ε. The difference is that the quantile method has the highest stability, but is not sensitive to density change. The elbow method is more sensitive than the quantile method, but it has weak processing ability for super-large point cloud data. The local adaptive method has the highest sensitivity to density, but its stability is poor. The denoising effects of the three adaptive strategies are shown in Table 1.
According to the data in Table 1, the elbow method and the local adaptive method are clearly superior to the 90% quantile method in noise reduction, while the elbow method is markedly faster than the local adaptive method. The local adaptive method is mainly intended for datasets with uneven density distribution, improving clustering performance by setting different neighborhood radii for different regions. However, the roughness sample point cloud data processed in this paper are obtained by the line chromatic confocal system with a fixed sampling interval, so the overall point cloud density is fairly uniform and density variation is not significant. In this case, the local adaptive method would increase the complexity of the algorithm and the number of parameters while offering only limited improvement in clustering performance. In addition, roughness sample calibration is a precision measurement application that demands high stability and repeatability from the algorithm, whereas the local adaptive method usually introduces additional instability. In contrast, the elbow method determines a uniform neighborhood radius through global analysis and offers better stability and computational efficiency. In summary, the elbow method is used to estimate ε in this study. First, for the sample set p, the ε-neighborhood is defined as
$N_\varepsilon(p_i) = \{ p_j \mid \lVert p_i - p_j \rVert \le \varepsilon \}$
The core criteria are:
$\lvert N_\varepsilon(x_i) \rvert \ge MinPts$
For each sample point, calculate the k nearest neighbor distance dk(xi), let k = MinPts, and arrange all dk(xi) in ascending order to obtain the sequence:
$d_{(1)} \le d_{(2)} \le \cdots \le d_{(n)}$
Define the function f(i) = d(i), then the function is a k-distance curve. Suppose f(i) is a discrete function, then its first-order difference is:
$\Delta f(i) = f(i+1) - f(i)$
The second order difference is:
$\Delta^2 f(i) = f(i+1) - 2 f(i) + f(i-1)$
When $\lvert \Delta^2 f(i) \rvert$ reaches a local maximum, that location can be regarded as the point of maximum curvature. The corresponding adaptive estimate is:
$\varepsilon^* = f(i^*)$
Then ε* is the adaptive neighborhood radius. In addition, MinPts is set to be consistent with k, so that the density definition is consistent and the stability of the algorithm in different density intervals is enhanced. After ε estimation is completed, the subsequent calculation steps are consistent with the traditional DBSCAN algorithm, realizing the automatic calculation of filter parameters, avoiding manual parameter adjustment and improving the processing efficiency. As for the value of k, this study selects several commonly used k values for processing, and the processing results are shown in Figure 7 and Table 2.
It can be seen from Figure 7 and Table 2 that when k is 100, the original sample block is cut apart, producing an incorrect number of clusters. When k is 200, cross-cluster merging occurs, and part of the point cloud of adjacent sample blocks is clustered into the wrong cluster. When k = 250, cluster adhesion occurs, reducing the number of clusters. Therefore, this study sets k = 150; the calculated k-distance curve is shown in Figure 8.
After processing by the ADBSCAN algorithm, the low-resolution point cloud clusters of the individual roughness sample blocks are obtained. Because the characteristic scale of roughness is at the micron level, and the roughness calculation must preserve the fine features of the sample blocks as far as possible, the low-resolution clusters are back-projected onto the original point cloud data, and the original large-scale point cloud is cropped; this is approximately equivalent to performing the clustering at the original point cloud scale, thereby removing outlier noise and clustering the point cloud. For back-projection cropping, the boundary of each low-resolution cluster must first be extracted. This paper adopts point cloud bounding box technology [25]: according to the geometric characteristics of the point cloud model, a box close to a cuboid is used to enclose the point cloud data. The AABB (axis-aligned bounding box) is used here; it is a cuboid that encompasses the object's surface without fitting it tightly. Suppose there is a point cloud $p = \{(x_i, y_i, z_i)\}_{i=1}^{N}$; its minimum and maximum values in the X, Y and Z directions are:
$x_{min} = \min_i x_i, \quad x_{max} = \max_i x_i; \qquad y_{min} = \min_i y_i, \quad y_{max} = \max_i y_i; \qquad z_{min} = \min_i z_i, \quad z_{max} = \max_i z_i$
Construct an axis-aligned rectangular bounding box from these extrema, with boundary:
$AABB = \{ (x, y, z) \mid x_{min} \le x \le x_{max},\ y_{min} \le y \le y_{max},\ z_{min} \le z \le z_{max} \}$
Record the minimum and maximum coordinate points of the AABB as $p_{min}$ and $p_{max}$; these two diagonal points fully determine the AABB. Let the center coordinate of the AABB be $c = (c_x, c_y, c_z)^T$, and let $d = (d_x, d_y, d_z)^T$ be the half-extent from the center to $p_{max}$. Then the extent of the AABB bounding box is:
$AABB = [\, c - d,\ c + d \,] = \{ (x, y, z) \mid c_x - d_x \le x \le c_x + d_x,\ c_y - d_y \le y \le c_y + d_y,\ c_z - d_z \le z \le c_z + d_z \}$
Calculate and display the bounding box size according to Formula (12). After determining the bounding range, project the region locked by the bounding box back onto the original roughness sample point cloud data and crop the original data, as shown in Figure 9.
It can be seen from the figure that the clipped point cloud can maintain a good main structure and will not cause sample block data loss, but there is still near-field noise around the sample block that needs to be further removed.
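The back-projection cropping step can be sketched as below; this is illustrative rather than the authors' code, and the `margin` padding parameter is a hypothetical addition for guarding against boundary effects of the down-sampling:

```python
import numpy as np

def crop_by_aabb(original, cluster, margin=0.0):
    """Back-projection: take the AABB of a low-resolution cluster and
    crop the original high-resolution cloud to that box."""
    p_min = cluster.min(axis=0) - margin
    p_max = cluster.max(axis=0) + margin
    mask = np.all((original >= p_min) & (original <= p_max), axis=1)
    return original[mask]
```

Because the test is a simple per-axis comparison, this crop runs in linear time over the original cloud, which matters at the tens-of-millions-of-points scale discussed above.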
After the BPM-ADBSCAN algorithm, the main body of the point cloud data changes from the original point cloud of the whole sample set to the point clouds of the individual samples, and the near-field noise in the original small gaps can be regarded as random single-point noise near the main body of each new point cloud. Based on this characteristic, a statistical filter can operate by analyzing the local density of the point cloud [26]: when the local density in a region falls below a set threshold, the points in that region are judged to be outliers and removed.
For each point $P_i$ in the cloud, first determine its neighborhood by selecting the k points closest to $P_i$. Let the neighborhood of $P_i$ consist of these k points, and let the Euclidean distance between $P_i$ and the jth neighborhood point be $d_j$; its average distance $d_{avg}$ can then be expressed as:
$d_{avg} = \frac{1}{k} \sum_{j=1}^{k} d_j$
Calculate the standard deviation of the distance between points in the neighborhood of point Pi. The standard deviation dσ indicates the degree of change in the distance between neighboring points, reflecting the smoothness of the point cloud in local areas.
$d_\sigma = \sqrt{ \frac{1}{k} \sum_{j=1}^{k} (d_j - d_{avg})^2 }$
In the outlier determination process, the degree of abnormality of each point is measured by calculating its outlier degree. The outlier degree is defined as the ratio of the average distance in the neighborhood of the point to the standard deviation. When the outlier degree exceeds the preset threshold T, the point is determined to be an outlier; otherwise, it is regarded as a normal point. The denoising effect of statistical filtering is shown in Figure 10.
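As a sketch of this statistical filtering step, the following uses the widely adopted variant that thresholds each point's mean k-NN distance against the global mean plus a multiple of the global standard deviation; the per-point outlier-degree criterion described in the text is analogous, and the function and parameter names are our own:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def statistical_outlier_removal(points, k=20, threshold=2.0):
    """Remove points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than `threshold` standard deviations
    (a common formulation of the statistical outlier removal filter)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nn.kneighbors(points)
    d_avg = dists[:, 1:].mean(axis=1)           # skip self-distance column
    mu, sigma = d_avg.mean(), d_avg.std()
    keep = d_avg <= mu + threshold * sigma      # outlier criterion
    return points[keep]
```

Isolated near-field noise points sit far from any dense neighborhood, so their mean k-NN distance is large and they are rejected, while points on the sample surface are retained.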

2.3. Roughness Calculation Algorithm Based on Steady-State Confidence-Weighted Robust Gaussian Filtering

In designing a surface roughness calibration algorithm, according to the relevant provisions of the JJF 1099-2018 surface roughness comparison block calibration specification, the evaluation length must be determined first. After the evaluation position is selected, the roughness value is obtained by Gaussian filtering. However, two problems inevitably arise with a traditional Gaussian filter: (1) the boundary effect; (2) sensitivity to outliers. Schematic diagrams of the boundary effect and the outlier sensitivity of traditional Gaussian filtering are presented in Figure 11 and Figure 12, respectively.
To solve these two problems, this paper designs the SSCW-RGF (Steady-State Confidence-Weighted Robust Gaussian Filtering) algorithm to fit an accurate reference centerline and obtain the corresponding roughness curve, thereby solving for the roughness value.
The prerequisite for filtering is to acquire the corresponding surface contour. In obtaining the surface contour, this study employs the concept of profile extraction, treating the point cloud distribution from the side view of the extracted profile as the surface contour. Given that different machining processes exhibit distinct texture orientations, it is advisable to position the roughness specimens such that their texture directions are as perpendicular as possible to the line laser direction prior to scanning. Following the application of the denoising and clustering algorithm mentioned earlier, for each specimen’s point cloud, the centroid coordinates are first determined. Subsequently, ten random intervals are selected from the centroid outward in both directions along the X-axis for cropping. The cropping direction is perpendicular to the X-axis, resulting in the acquisition of ten YOZ profile contours. Subsequently, the next filtering operation is conducted.
To perform filtering in the steady-state region, the first step is to determine that region. According to the mathematical model of finite-domain convolution, assume the actual measured profile is a discrete sequence $z(x_n)$, $x_n = n \Delta x$, $n = 0, \ldots, N-1$, on a finite measurement interval $D = [x_0, x_{N-1}]$. Gaussian filtering is essentially a convolution, but no real data exist outside the measured profile, so the convolution can only be performed over a finite domain:
$w(x_n) = \sum_{k=0}^{N-1} \tilde{z}(x_k) \, h(x_n - x_k) \, \Delta x$
where $\tilde{z}(x)$ is the profile after extension. Common extension methods include reflection, periodic extension and zero padding; the extended profile can be expressed as:
$\tilde{z}(x_k) = \begin{cases} z(x_k), & x_k \in D \\ E[z](x_k), & x_k \notin D \end{cases}$
Thus, the convolution value can be decomposed into:
$w(x_n) = \underbrace{\sum_{x_k \in D} z(x_k) \, h(x_n - x_k) \, \Delta x}_{\text{real convolution term}} + \underbrace{\sum_{x_k \notin D} E[z](x_k) \, h(x_n - x_k) \, \Delta x}_{\text{boundary error term}}$
Denote the boundary error term by $\varepsilon(x_n)$; the convolution is completely error-free if:
\varepsilon(x_n) = 0 \;\Longleftrightarrow\; \forall x_k \notin D:\ h(x_n - x_k) = 0
The Gaussian kernel theoretically has infinite support, so the error can be zero in the engineering sense only when xn is farther than 3σ from the boundary. The steady-state region can therefore be defined as Dsteady = [x0 + 3σ, xN−1 − 3σ], with length Lsteady = (xN−1 − x0) − 6σ ≈ L − 3λc. The value of λc is specified by the JJF 1099-2018 calibration specification; following the specification, the λc value corresponding to each sample block is set in advance according to the placement order before scanning. The BPM-ADBSCAN-SOR algorithm yields the point cloud data of each roughness sample block, and the setting of λc is implemented through case logic, with each sample point cloud corresponding to one case. The calibration specification also stipulates that the evaluation length must exceed five times the sampling length, and the sampling length is numerically equal to the cutoff wavelength λc. For the evaluation length to fall entirely within the steady-state region, Lsteady must therefore exceed 5λc; that is, when the measurement length is at least eight times the cutoff wavelength, the evaluation length lies completely within the steady-state region. In this case, a profile signal of length 5λc can be extracted directly from the steady-state region as the output. However, Gaussian filtering still introduces an artificial profile extension, so the filtering result is weighted according to the degree of steadiness: a credibility function is constructed, without profile extension, that reduces the weight given to the filtering result in the boundary region.
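The steady-state bookkeeping above can be sketched as a small check. This is a minimal sketch assuming, as in the text's approximation Lsteady ≈ L − 3λc, that 3σ is taken as 1.5λc; the function name is illustrative.

```python
def steady_state_region(x0, xN, lambda_c):
    """Steady-state interval of a Gaussian-filtered profile.

    Following the relation used in the text (6*sigma ~ 3*lambda_c), the
    steady-state region is the measured interval [x0, xN] shrunk by
    3*sigma ~ 1.5*lambda_c at each end. A 5*lambda_c evaluation length
    fits inside it only when the measured length exceeds 8*lambda_c.
    """
    three_sigma = 1.5 * lambda_c
    lo, hi = x0 + three_sigma, xN - three_sigma
    # True when the full evaluation length lies in the steady-state region.
    fits = (hi - lo) > 5.0 * lambda_c
    return lo, hi, fits
```

For example, with λc = 0.8 mm a measured length of 8.5λc leaves room for the 5λc evaluation length, whereas 6λc does not.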
First, according to ISO 16610-21 [27], the valid support range of the Gaussian kernel is defined as R = Lcλc, where Lc is a dimensionless constant, usually 0.5, indicating that the effective support length is 0.5 times the cutoff wavelength; beyond this range, the effect of the filter gradually decays until the signal is considered unfilterable. For a given profile signal z(x), each point must be classified according to its distance from the signal boundary to decide whether it lies in the steady-state region. Let the boundaries of the signal be xmin and xmax; then for any point x on the signal, define the minimum distance to the nearest boundary as:
d(x) = \min\left(x - x_{\min},\ x_{\max} - x\right)
Define the confidence function c(x) of the steady state region as:
c(x) = \begin{cases} 1, & d(x) \ge R \\ d(x)/R, & d(x) < R \end{cases}
The reference center line w(x) fitted by the traditional Gaussian filter is then weighted with the steady-state confidence function c(x), and the weighted reference center line ŵ(x) is calculated as:
\hat{w}(x) = c(x)\, w(x) + \left(1 - c(x)\right) z(x)
With the Gaussian mean line extracted from Equation (29), the roughness signal r(x) is calculated from Equation (30):
r(x) = z(x) - \hat{w}(x)
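The confidence weighting above, i.e. c(x), ŵ(x), and r(x), can be sketched as follows. This is a minimal sketch for a uniformly sampled profile using the ISO 16610-21 Gaussian weighting function; function and variable names are illustrative, and the discrete convolution is a simplification of the paper's filter.

```python
import numpy as np

def sscw_gaussian(z, dx, lambda_c, Lc=0.5):
    """Steady-state confidence-weighted Gaussian mean line (sketch).

    z: profile heights on a uniform grid of spacing dx.
    Returns the weighted mean line w_hat and the roughness signal z - w_hat.
    """
    alpha = np.sqrt(np.log(2) / np.pi)          # ISO 16610-21 constant
    n = len(z)
    x = np.arange(n) * dx
    # Gaussian weighting function, truncated at +/- lambda_c and normalized.
    taps = np.arange(-int(lambda_c / dx), int(lambda_c / dx) + 1) * dx
    h = np.exp(-np.pi * (taps / (alpha * lambda_c)) ** 2)
    h /= h.sum()
    w = np.convolve(z, h, mode="same")          # conventional mean line
    # Steady-state confidence: 1 inside, tapering linearly near the edges.
    R = Lc * lambda_c
    d = np.minimum(x - x[0], x[-1] - x)
    c = np.clip(d / R, 0.0, 1.0)
    w_hat = c * w + (1.0 - c) * z               # weighted mean line
    return w_hat, z - w_hat                     # mean line, roughness signal
```

On a constant profile the weighted mean line reproduces the profile at the boundary points (where c = 0) as well as in the interior, which is exactly the boundary-effect suppression the weighting is meant to provide.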
Building on the steady-state confidence-weighted Gaussian filter, M-estimation theory is introduced to design a robust Gaussian filtering algorithm weighted by steady-state confidence. The algorithm fits the bulk of the data while identifying possible outliers, avoiding their influence on the filtering results and maintaining robustness when the measured data contain large deviations.
The SSCW-RGF algorithm is based on the SSCW-Gaussian algorithm, introducing a vertical weight function, with the objective function being:
w(x) = c(x) \int_{-\lambda_c}^{\lambda_c} z(x+\delta)\, h(\delta)\, \rho(x+\delta)\, \mathrm{d}\delta + \left(1 - c(x)\right) z(x)
where ρ(x) is the vertical weight function. The choice of ρ should be based on the actual distribution of the signal; a suitable choice achieves reliable signal processing while effectively improving computational efficiency.
Robust estimation theory holds that different robust weight functions may produce different results when processing the same observed signal, and an improper choice of robust function can directly cause the evaluation to fail [28]. According to ISO 16610-31 [29], robust filtering of surface profiles commonly uses the Tukey biweight estimator to construct the vertical weight function:
\rho_{i+1}(\chi) = \begin{cases} \left[1 - \left(\dfrac{\chi}{c_B}\right)^2\right]^2, & \left|\dfrac{\chi}{c_B}\right| < 1 \\ 0, & \left|\dfrac{\chi}{c_B}\right| \ge 1 \end{cases}
where i is the iteration number, χ is the residual after the ith iteration, and cB = 4.4 median(|z(x) − w(x)|) is the scale parameter computed from the residual statistics. Tukey estimation thus assigns weights to the measurement data based on the actual statistical values of the residuals. The convergence precision t affects the outcome of the whole SSCW-RGF algorithm; in general, t ranges from 10−6 to 10−3. The SSCW-RGF results under different convergence precisions are shown in Figure 13:
As the figure shows, at a convergence accuracy of 10−3 the filtering result is still affected by outliers; when the accuracy is tightened to 10−4, the filtering result is essentially stable, indicating that the algorithm has converged near 10−4. Reducing t further significantly increases the number of iterations and the computation time. Balancing filtering accuracy against computational efficiency, this paper therefore selects 10−4 as the convergence threshold of SSCW-RGF.
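The robust iteration described above can be sketched as follows. This is a simplified illustration of the Tukey-reweighted Gaussian mean line with the convergence threshold t; the steady-state confidence weighting is omitted for brevity, and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

def tukey_weights(residual):
    """Tukey biweight vertical weights with c_B = 4.4 * median(|residual|)."""
    cB = 4.4 * np.median(np.abs(residual))
    if cB == 0:
        return np.ones_like(residual)   # degenerate case: all residuals zero
    u = residual / cB
    w = (1.0 - u ** 2) ** 2
    w[np.abs(u) >= 1.0] = 0.0           # reject gross outliers entirely
    return w

def robust_mean_line(z, kernel, t=1e-4, max_iter=50):
    """Iteratively reweighted Gaussian mean line (robust part of SSCW-RGF).

    kernel: normalized Gaussian weights; t: convergence threshold on the
    maximum change of the mean line between iterations.
    """
    rho = np.ones_like(z)
    w_prev = np.convolve(z, kernel, mode="same")
    for _ in range(max_iter):
        # Weighted moving average: points with rho ~ 0 are ignored.
        num = np.convolve(z * rho, kernel, mode="same")
        den = np.convolve(rho, kernel, mode="same")
        w = num / np.maximum(den, 1e-12)
        if np.max(np.abs(w - w_prev)) < t:
            break
        w_prev = w
        rho = tukey_weights(z - w)
    return w
```

On a profile with a single large spike, the plain convolution drags the mean line upward near the spike, while the reweighted iteration drives the spike's weight to zero and keeps the mean line near the underlying baseline.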
After the stable benchmark centerline is fitted, the roughness signal can be separated according to the relevant provisions of the JJF 1099-2018 calibration specification and the roughness value calculated. The surface roughness of a workpiece is generally expressed by the arithmetic mean deviation Ra of the profile, calculated according to Equation (25):
R_a = \frac{1}{l} \int_0^l \left| z(x) \right| \mathrm{d}x
where l is the sampling length and z(x) is the surface profile height relative to the benchmark centerline. The Ra value over the evaluation length is calculated from Equation (26):
R_a = \frac{1}{n} \sum_{n} R_{a_n}
where Ran represents the Ra value calculated in the nth sampling length.
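Equations (25) and (26) amount to averaging |z| within each sampling length and then averaging across the sampling lengths. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def ra_over_evaluation_length(r, n_lengths=5):
    """Arithmetic mean deviation Ra averaged over n sampling lengths.

    r: roughness signal (profile heights relative to the mean line),
    sampled uniformly over the evaluation length. Each segment's Ra is
    the mean of |r|; their average is the reported Ra.
    """
    segments = np.array_split(np.asarray(r, dtype=float), n_lengths)
    return float(np.mean([np.mean(np.abs(s)) for s in segments]))
```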

3. Experiment and Results

This chapter includes three parts: the system repeatability experiment, the roughness calibration experiment, and the uncertainty analysis. The line chromatic confocal sensor selected in this paper is the FocalSpec LCI401, whose main performance parameters are shown in Table 3:
The whole system is built on the vibration isolation platform, and the line chromatic confocal sensor is fixed on the Z-axis slide rail, so that it can make linear motion along the Z direction. Two one-dimensional linear motion platforms are vertically placed on the vibration isolation platform and make linear motion along the X and Y directions respectively. The physical object of the experimental platform is shown in Figure 14:
The object used in this study is a set of surface roughness comparison samples covering multiple machining processes. One sample was selected from the combination shown in Figure 15 for repeatability testing, and five groups of 19 samples were selected for roughness testing. The roughness sample set is shown in Figure 15:

3.1. Algorithm Feasibility Verification

To verify the effect of the two algorithms proposed in this paper, ablation experiments were designed. A roughness sample with a nominal value of 10 μm and a profilometer value of 8.9409 μm was calibrated with each of the two algorithms removed in turn. The calibration results are shown in Table 4:
Table 4 shows that the roughness measurement results improve as the algorithm is progressively refined. The initial result of ADBSCAN combined with traditional Gaussian filtering is 7.7160 μm. Adding the SOR algorithm removes the near-field noise between the point cloud sample blocks and improves the accuracy to a certain extent. However, ADBSCAN-SOR operates on the low-resolution point cloud obtained by down-sampling, which discards part of the point cloud data; after the BPM strategy is further introduced, the calibration result improves to 8.0563 μm using the high-resolution point cloud recovered by back-projection cropping. Due to the boundary effect and protruding outliers, using only the BPM-ADBSCAN-SOR algorithm still leads to an offset when fitting the datum centerline, and hence an error in the roughness value. Under the combined effect of the BPM-ADBSCAN-SOR and SSCW-RGF algorithms, a large amount of noise around the main body of the point cloud is removed, pure point cloud data for each block are obtained, and an accurate reference line can then be fitted and the roughness calculated.
During denoising, the noise arises mainly because the line chromatic confocal sensor detects some signals during scanning that are insufficient for correct calculation (poor signal-to-noise ratio). The equipment then reports erroneous values much higher or lower than the surrounding area, forming outliers, for example at the boundaries of the sample blocks or at step discontinuities in the gaps. Such noise is not generated in the main body of the sample block, so the algorithm designed in this paper only processes the noise around the main part of the point cloud and does not affect the height variation characteristics of the subject point cloud [30].
When fitting the benchmark centerline, SSCW-RGF is used to address the boundary effect and the outlier sensitivity of traditional Gaussian filtering. First, SSCW-GF and traditional Gaussian filtering are applied to the same measured contour; the results are shown in Figure 16. Traditional Gaussian filtering exhibits distortion at both ends of the contour due to the boundary effect, whereas the steady-state confidence-weighted Gaussian filter uses the weighted, corrected reference line, which covers both ends of the contour curve and suppresses the boundary effect. It not only preserves the integrity of the data but also improves the measurement accuracy.
On this basis, the SSCW-GF algorithm is improved to obtain the SSCW-RGF algorithm, which uses the SSCW-RGF and the traditional Gaussian filtering algorithm to filter the same set of surface contours, and the results are shown in Figure 17. It can be seen that the SSCW-RGF has strong anti-interference capability for abnormal signals, which can reduce filtering distortion and improve filtering accuracy.

3.2. Repeatability Experiments

To verify the repeatability of the calibration system during roughness measurement, a repeatability test is carried out in this section with environmental conditions, instrument parameters, and other factors kept unchanged. The wire-cut roughness sample block with an Ra nominal value of 10 μm and a profilometer value of 8.9409 μm is measured repeatedly, and 10 groups of data are recorded and statistically analyzed. The measurement results are shown in Table 5:
The calibration results for this roughness sample block are obtained by analyzing the data in Table 5. The mean value is 8.910896 μm. Among the 10 groups of data, the maximum mean deviation is −0.052884 μm, the minimum mean deviation is +0.001364 μm, the maximum absolute error is −0.07808 μm, and the minimum absolute error is +0.014310 μm. The standard deviation of repeatability is 0.0355 μm, and the relative repeatability error is 0.3984%, satisfying the calibration specification requirement that the repeatability error not exceed 1%; the repeatability of this system therefore meets the relevant specifications. Figure 18 and Figure 19 show the repeatability measurement results, which reflect the dispersion of the absolute repeatability error of this system more intuitively.

3.3. Calibration Experiment of Surface Roughness

Five groups of 19 standard roughness samples are scanned by the line chromatic confocal calibration system studied in this paper, and denoising and clustering, roughness signal extraction, and roughness calibration results are produced. The sample set parameters used in this experiment are shown in Table 6.
During the experiment, the sample set is fixed on the moving platform, and the focused line laser is moved to the lower-left corner as the initial scanning position. The platform speed is set to 5 mm/s and the cycle waiting time to 20 ms, so the collected point cloud lines are spaced 0.1 mm apart.
The collected original point cloud data contain noise at different scales, so the BPM-ADBSCAN-SOR algorithm is used to remove it, and the roughness sample point cloud is then clustered to obtain the point cloud of each sample block. For the 19 single-sample point clouds obtained after cropping and loop processing, the center of each sample point cloud is calculated using the generated cluster labels. Finally, 10 positions are evenly selected from the center along the Y-axis, and 10 profiles are generated along the X-axis. Because empty points can occur when extracting profiles, a profile thickness of 1 μm is used, and the steady-state confidence-weighted robust Gaussian filter is applied to fit the reference center line and calculate the roughness value. The original profile line, reference center line, and roughness signal are shown in Figure 20.
The calibration results of this system for 19 roughness samples are shown in Table 7.
According to the relevant provisions of the national calibration specification JJF 1099-2018, the deviation of the calibrated roughness value Ra of the working surface of a sample block from its nominal value is evaluated: for the grinding process the deviation shall not exceed +20% to −25%, and for the other processes the tolerance range of the mean value is +12% to −17%. The data in Table 7 show that the maximum absolute error of this system is −0.745 μm and the maximum relative error is −16%, in compliance with the calibration specification.
In the actual calibration process, roughness samples may deviate from their nominal values due to improper storage or surface oxidation, so comparison with the nominal value only shows that the current sample meets the dimensional tolerance requirements for use as a standard. To further verify the accuracy of this calibration system, a comparative experiment with contact measurement was designed. The contact measurement value was taken as the true value, and the FTSI120 surface profilometer produced by Taylor Hobson (UK), with a measurement range of 0.01~10 μm, was used for the contact measurements. The comparison calibration results are shown in Table 8, and the error distributions of this system's results against the nominal values and the contact measurements are shown in Figure 21 and Figure 22:
According to the specifications, the performance indicators of the calibration tool are required to have a repeatability error not exceeding 1% and an indication error not exceeding ±5%. Analysis of the data in the table shows that, compared to contact measurement, the absolute error of this system has a maximum value of 0.050 μm and a maximum relative error of +4.5%, which meets the specification requirement that the indication error should be less than 5%. Therefore, it can be used for the calibration of roughness samples. The comparison experiment of detection speed between this calibration system and the contact calibration system was conducted, and the results are shown in Table 9:
The experimental results demonstrate that the calibration system proposed in this paper significantly outperforms traditional contact profilometers in terms of calibration speed, achieving a 90% increase in efficiency. Moreover, it addresses the issues of surface damage to sample blocks during contact measurement and the limitation of calibrating only one sample block at a time, thereby reducing the complexity of the calibration process.

3.4. Uncertainty Analysis

Taking the wire-cut sample block with an Ra nominal value of 10 μm as an example, the uncertainty of the calibration results is analyzed following the JJF 1099-2018 uncertainty evaluation template. The main standard uncertainty components are:
  • The standard uncertainty component u1 is introduced by the repeatability of the calibration comparison: the sample block is measured 10 times and the experimental standard deviation s = 0.0355 μm is obtained, giving:
    u_1 = \frac{0.0355}{\sqrt{3}} \times 100\% = 2.05\%
  • The standard uncertainty component u2 is introduced by the indication error of this device, as obtained from the experimental results above. Within the Ra measurement range of 0.025~10 μm, the maximum permissible error of ±5% is taken with a coverage factor k = 2, giving the standard uncertainty:
    u_2 = \frac{5\%}{2} = 2.5\%
  • The standard uncertainty component u3 is introduced by the non-uniformity of the comparison sample block, which the specification sets at 12% for the wire-cutting process. Over 10 measurements, the standard uncertainty is:
    u_3 = \frac{12\%}{\sqrt{10}} = 3.79\%
Combining these uncertainties yields ucr as:
u_{cr} = \sqrt{u_1^2 + u_2^2 + u_3^2} = \sqrt{(2.05\%)^2 + (2.5\%)^2 + (3.79\%)^2} = 4.98\%
Then the extended uncertainty Urel is:
U_{rel} = k \times u_{cr} = 9.96\% \quad (k = 2)
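As a numerical cross-check, the combination above can be reproduced directly (all values in percent, taken from the text; the combined value agrees with the reported 4.98% to rounding):

```python
import math

# Uncertainty components as stated in the text, in percent.
u1 = 0.0355 / math.sqrt(3) * 100.0   # repeatability of the comparison block
u2 = 5.0 / 2.0                       # indication error of the device
u3 = 12.0 / math.sqrt(10)            # non-uniformity of the comparison block

# Combined (root-sum-square) and expanded (k = 2) relative uncertainties.
u_cr = math.sqrt(u1 ** 2 + u2 ** 2 + u3 ** 2)
U_rel = 2.0 * u_cr
```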
The reasons for the difference between the results of non-contact measurement and contact measurement may be as follows:
  • Chromatic confocal technology is based on the principle of perfect focusing on the measured surface, which is easily disturbed by surface quality and the external environment.
  • The measurable inclination angle of the sample surface is ±28°. Surfaces inclined beyond this limit, such as vertical steps and step flanks, produce distorted data.
  • Compared with contact measurement, non-contact measurement has a larger working distance and larger vertical cosine error.

4. Conclusions

In this paper, based on the line chromatic confocal measurement system, theoretical analysis and experimental research are carried out by expanding its application scenarios, with particular attention to the calibration of surface roughness. For large-scale dense point cloud data, the BPM-ADBSCAN-SOR algorithm is designed to remove noise and cluster the individual sample blocks. The roughness value is calculated by fitting the reference mean line with the steady-state confidence-weighted robust Gaussian filter. The experimental results show that the maximum absolute error of the roughness value calculated by this system relative to the nominal values is −0.745 μm, with a maximum relative error of −16%. Comparing this system with a traditional contact profilometer shows that its accuracy is close to that of the profilometer: the maximum absolute error is 0.050 μm and the maximum relative error is +4.5%, within the 5% indication error limit specified in the calibration specification. Uncertainty analysis of the experimental results gives an expanded uncertainty of 9.96% (k = 2). The system thus provides a feasible solution for roughness calibration.

Author Contributions

Conceptualization, H.G., T.C. and X.X.; methodology, H.G.; software, H.G.; validation, Y.Q., X.C. and J.W.; formal analysis, T.C. and X.X.; resources, Y.Q.; data curation, L.W.; writing—original draft preparation, H.G.; writing—review and editing, H.Y. and N.C.; supervision, T.C.; project administration, T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Science and Technology Plan of Zhejiang Provincial Market Supervision and Administration Bureau No. ZC2023002, No. ZC2023010, No. ZD2024009, No. 2024018, Zhejiang Provincial Natural Science Foundation of China (No. LTGC24E050001), Science and Technology Plan of the State Administration for Market Regulation (No. 2023MK063).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author Lei Wang was employed by the company Zhejiang Shuanghong Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Shao, M.; Xu, D.; Li, S.; Zuo, X.; Chen, C.; Peng, G.; Zhang, J.; Wang, X.; Yang, Q. A Review of Surface Roughness Measurements Based on Laser Speckle Method. J. Iron Steel Res. Int. 2023, 30, 1897–1915.
  2. Bleicher, F.; Raumauf, B.; Poszvek, G. Study on the Correlation Between Surface Roughness and Tool Wear Using Automated In-Process Roughness Measurement in Milling. Metrology 2025, 5, 62.
  3. Koblar, V.; Filipič, B. Evolutionary Design of a System for Online Surface Roughness Measurements. Mathematics 2021, 9, 1904.
  4. Hweju, Z.; Abou-El-Hossein, K. Analogy of Support Vector Machine and Linear Regression Models in Surface Roughness Prediction. J. Phys. Conf. Ser. 2020, 1710, 012005.
  5. Vishwanatha, J.S.; Srinivasa Pai, P.; D’Mello, G.; Sampath Kumar, L.; Bairy, R.; Nagaral, M.; Channa Keshava Naik, N.; Lamani, V.T.; Chandrashekar, A.; Yunus Khan, T.M.; et al. Image-Processing-Based Model for Surface Roughness Evaluation in Titanium Based Alloys Using Dual Tree Complex Wavelet Transform and Radial Basis Function Neural Networks. Sci. Rep. 2024, 14, 28261.
  6. Lishchenko, N.; O’Donnell, G.E.; Culleton, M. Contactless Method for Measurement of Surface Roughness Based on a Chromatic Confocal Sensor. Machines 2023, 11, 836.
  7. Lin, W.; Lu, X.; Yin, G.; Roh, S. Three-Dimensional Pavement Surface Reconstruction and Roughness Assessment Using RGB-D Cameras. Measurement 2026, 262, 120042.
  8. Yuan, J.; Wang, Q.; Yang, Q.; Fan, Y.; Jiao, W. Improvement of Rock Surface Roughness Accuracy by Combining Object Space Resolution Error and 3D Point Cloud Features. Front. Earth Sci. 2025, 13, 1497871.
  9. Zuperl, U.; Cus, F. Simulation and Visual Control of Chip Size for Constant Surface Roughness. Int. J. Simul. Model. 2015, 14, 392–403.
  10. Toorandaz, S.; Taherkhani, K.; Liravi, F.; Toyserkani, E. A Novel Machine Learning-Based Approach for in-Situ Surface Roughness Prediction in Laser Powder-Bed Fusion. Addit. Manuf. 2024, 91, 104354.
  11. Tominaga, S.; Doi, M.; Sakai, H. Roughness Estimation and Image Rendering for Glossy Object Surface. J. Imaging 2025, 11, 296.
  12. Kaiser, J.; Dědič, M. Influence of Material on the Density of a Point Cloud Created Using a Structured-Light 3D Scanner. Appl. Sci. 2024, 14, 1476.
  13. Zhang, J.; Hu, Y.; Wang, Y.; Zhang, D. Sidewall Roughness Measurement and Bearing Performance Simulation of Rock-Socketed Piles Based on Laser Scanning Point Cloud. Appl. Sci. 2025, 15, 889.
  14. Seo, H. 3D Roughness Measurement of Failure Surface in CFA Pile Samples Using Three-Dimensional Laser Scanning. Appl. Sci. 2021, 11, 2713.
  15. Bouhadja, K.; Remli, F.; Tchantchane, Z. Surface Roughness Measurement by Analysis of 3D Scan Data According to ISO 25178-2. J. Inf. Syst. Eng. Manag. 2025, 10, 613–619.
  16. Qu, D.; Zhou, Z.; Li, Z.; Ding, R.; Jin, W.; Luo, H.; Xiong, W. Wafer Eccentricity Deviation Measurement Method Based on Line-Scanning Chromatic Confocal 3D Profiler. Photonics 2023, 10, 398.
  17. Qin, M.; Xiong, X.; Xiao, E.; Xia, M.; Gao, Y.; Xie, H.; Luo, H.; Zhao, W. A Kernel-Based Calibration Algorithm for Chromatic Confocal Line Sensors. Sensors 2024, 24, 6649.
  18. Kurtev, K.I.; Trujillo-Sevilla, J.M.; Rodríguez-Ramos, J.M. Long-Distance Measurements Using a Chromatic Confocal Sensor. Appl. Sci. 2024, 14, 9943.
  19. Yu, Q.; Zhang, Y.; Shang, W.; Dong, S.; Wang, C.; Wang, Y.; Liu, T.; Cheng, F. Thickness Measurement for Glass Slides Based on Chromatic Confocal Microscopy with Inclined Illumination. Photonics 2021, 8, 170.
  20. Fu, S.; Kor, W.S.; Cheng, F.; Seah, L.K. In-Situ Measurement of Surface Roughness Using Chromatic Confocal Sensor. Procedia CIRP 2020, 94, 780–784.
  21. Nagy, A. Influence of Measurement Settings on Areal Roughness with Confocal Chromatic Sensor on Face-Milled Surface. Cut. Tools 2020, 93, 65–75.
  22. State Administration for Market Regulation. JJF 1099-2018 Calibration Specification for Surface Roughness Specimens; China Metrology Publishing House: Beijing, China, 2018.
  23. Deng, Z.; Wang, L.; Han, W.; Ranjan, R.; Zomaya, A. G-ML-Octree: An Update-Efficient Index Structure for Simulating 3D Moving Objects Across GPUs. IEEE Trans. Parallel Distrib. Syst. 2018, 29, 1075–1088.
  24. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In KDD’96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining; AAAI Press: Menlo Park, CA, USA, 1996.
  25. Peng, Y.; Feng, H.; Chen, T.; Hu, B. Point Cloud Instance Segmentation with Inaccurate Bounding-Box Annotations. Sensors 2023, 23, 2343.
  26. Rusu, R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Künstl. Intell. 2010, 24, 345–348.
  27. ISO 16610-21; Geometrical Product Specifications (GPS)—Filtration—Part 21: Linear Profile Filters: Gaussian Filters. International Organization for Standardization: Geneva, Switzerland, 2011.
  28. Brinkmann, S.; Bodschwinna, H.; Lemke, H.-W. Accessing Roughness in Three-Dimensions Using Gaussian Regression Filtering. Int. J. Mach. Tools Manuf. 2001, 41, 2153–2161.
  29. ISO 16610-31:2016; Geometrical Product Specifications (GPS)—Filtration—Part 31: Robust Gaussian Regression Filter. ISO: Geneva, Switzerland, 2016.
  30. Digital Surf. Filtration Techniques. Digital Surf Surface Metrology Guide. Available online: https://guide.digitalsurf.com/en/guide-filtration-techniques.html (accessed on 10 March 2026).
Figure 1. Principle of line chromatic confocal detection.
Figure 2. Algorithm flowchart of BPM-ADBSCAN-SOR.
Figure 3. Original point cloud data.
Figure 4. Octree down-sampling hierarchy diagram.
Figure 5. Comparison of point cloud density before and after down-sampling based on octree: (a) Point cloud density before down-sampling; (b) point cloud density after down-sampling.
Figure 6. Mixed noise distribution of point cloud data.
Figure 7. Clustering results of the ADBSCAN algorithm under different k values: (a) k = 100; (b) k = 150; (c) k = 200; (d) k = 250.
Figure 8. The 150-neighbor distance plot.
Figure 9. Partial back-projection cropping results of the original point cloud: (a) Back-projection cropping result of the planer 6.3 μm sample block; (b) back-projection cropping result of the flat-ground 0.2 μm sample block.
Figure 10. Schematic diagram of statistical outlier removal denoising effect: (a) Planer 6.3 μm sample block before statistical filtering; (b) flat-grind 0.2 μm sample block before statistical filtering; (c) planer 6.3 μm sample block after statistical filtering; (d) flat-grind 0.2 μm sample block after statistical filtering.
Figure 11. Schematic diagram of boundary effects.
Figure 12. Schematic diagram of outlier sensitivity.
Figure 13. SSCW-RGF operation results under different t values: (a) t = 10−3; (b) t = 10−4.
Figure 14. Physical figure of the line chromatic confocal calibration system.
Figure 15. Physical drawing of roughness sample set.
Figure 16. Comparison of the SSCW-Gaussian filter and the traditional Gaussian filter.
Figure 17. Comparison of SSCW-RGF and traditional Gaussian filtering.
Figure 18. Absolute error of the system repeatability experiment.
Figure 19. Mean error of the system repeatability experiment.
Figure 20. Roughness calibration experiment and sample block calibration results. (a) Surface contour of 6.3 μm sample by flat milling; (b) benchmark centerline of 6.3 μm sample by flat milling; (c) roughness signal of 6.3 μm sample by flat milling; (d) surface contour of 1.6 μm sample by end milling; (e) benchmark centerline of 1.6 μm sample by end milling; (f) roughness signal of 1.6 μm sample by end milling; (g) surface contour of 3.2 μm sample by planer; (h) benchmark centerline of 3.2 μm sample by planer; (i) roughness signal of 3.2 μm sample by planer; (j) surface contour of 0.8 μm sample by flat grind; (k) benchmark centerline of 0.8 μm sample by flat grind; (l) roughness signal of 0.8 μm sample by flat grind.
Figure 21. Scatter plot of absolute error between calibration results and nominal values.
Figure 22. Scatter plot of calibration results versus the absolute error of contact measurement.
Table 1. Comparison of DBSCAN adaptive strategies for noise reduction.

Adaptive Strategy        Number of Points Before Denoising    Number of Points After Denoising    ε Estimate    Runtime/s
90% quantile method      842,642                              840,111                             0.7186        4.725
Elbow method             842,642                              821,988                             0.7017        9.783
Local adaptive method    842,642                              821,554                             0.7002        43.608
Table 2. Clustering results of the ADBSCAN algorithm under different k values.

k Value    ε Estimate    Number of Clusters
100        0.5786        21
150        0.7017        19
200        0.8478        19
250        0.9466        13
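The adaptive strategies compared in Tables 1 and 2 all reduce to choosing DBSCAN's neighborhood radius ε from the distribution of k-nearest-neighbor distances. As a minimal sketch of the 90% quantile strategy (not the paper's implementation, and run on a hypothetical toy grid rather than the scanned point cloud):

```python
import math

def knn_distances(points, k):
    """For each point, the distance to its k-th nearest neighbour (brute force)."""
    out = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(d[k - 1])
    return out

def estimate_eps(points, k, quantile=0.90):
    """90%-quantile heuristic: take eps at the given quantile of the
    sorted k-NN distance curve (one of the strategies in Table 1)."""
    d = sorted(knn_distances(points, k))
    idx = min(len(d) - 1, int(quantile * len(d)))
    return d[idx]

# Toy 10 x 10 grid with 0.5 spacing -- illustrative data only
pts = [(x * 0.5, y * 0.5) for x in range(10) for y in range(10)]
eps = estimate_eps(pts, k=4)
```

On real scan data the choices of k and quantile trade noise rejection against over-merging of clusters, which is what Tables 1 and 2 quantify; the brute-force k-NN search here would also be replaced by a spatial index for point clouds of ~10⁵ points.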
Table 3. FocalSpec LCI401 line spectral confocal sensor hardware parameters.

Functional Indicator                               Parameter Value
Line width/mm                                      4.3
X-direction resolution/μm                          2.1
Z-direction repeatability/μm                       0.05
Working distance/mm                                8.0
Depth of field/mm                                  1.1
Panoramic depth scanning rate/Hz                   300
Maximum scanning rate/Hz                           800
Number of points per profile                       2048
Maximum tolerated tilt angle on mirror surfaces/°  ±15.0
Table 4. Calibration results of the ablation experiment.

Calibration Algorithm           Ra Measurement/μm
ADBSCAN                         7.7160
ADBSCAN-SOR                     7.9917
BPM-ADBSCAN-SOR                 8.0563
ADBSCAN + SSCW-RGF              8.5070
ADBSCAN-SOR + SSCW-RGF          8.6654
BPM-ADBSCAN-SOR + SSCW-RGF      8.9578
Table 5. System repeatability test results.

Measurement Serial Number    Ra Measurement/μm
1                            8.89896
2                            8.87862
3                            8.86282
4                            8.88363
5                            8.89418
6                            8.90376
7                            8.95574
8                            8.96378
9                            8.95521
10                           8.91226
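The repeatability figures quoted in the abstract follow directly from these ten measurements; a quick stdlib-only check:

```python
import statistics

# Ra measurements from Table 5 (μm), ten repeated scans of the same sample
ra = [8.89896, 8.87862, 8.86282, 8.88363, 8.89418,
      8.90376, 8.95574, 8.96378, 8.95521, 8.91226]

mean_ra = statistics.mean(ra)
std_ra = statistics.stdev(ra)               # sample standard deviation (n - 1)
rel_repeatability = std_ra / mean_ra * 100  # relative repeatability, percent

print(f"std = {std_ra:.4f} um, relative = {rel_repeatability:.4f}%")
# → std = 0.0355 um, relative = 0.3984%
```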
Table 6. Parameter table of roughness sample set.

Processing Technique    Ra Nominal/μm
Flat Milling            6.3, 3.2, 1.6, 0.8
End Milling             6.3, 3.2, 1.6, 0.8
Planer                  6.3, 3.2, 1.6, 0.8
Flat Grind              0.8, 0.4, 0.2, 0.1
Grind                   0.1, 0.05, 0.025
Table 7. Results of roughness calibration experiments.

Processing Technique    Ra Nominal/μm    Measured Value of This System/μm    Absolute Error/μm
Flat Milling            6.3              5.555                               −0.745
                        3.2              3.063                               −0.137
                        1.6              1.557                               −0.043
                        0.8              0.718                               −0.082
End Milling             6.3              5.910                               −0.390
                        3.2              3.035                               −0.165
                        1.6              1.624                               +0.024
                        0.8              0.718                               −0.082
Planer                  6.3              6.522                               +0.222
                        3.2              3.207                               +0.007
                        1.6              1.560                               −0.040
                        0.8              0.723                               −0.077
Flat Grind              0.8              0.727                               −0.073
                        0.4              0.415                               +0.015
                        0.2              0.196                               −0.004
                        0.1              0.107                               +0.007
Grind                   0.1              0.095                               −0.005
                        0.05             0.042                               −0.008
                        0.025            0.026                               +0.001
Table 8. Comparison of experimental results with contact measurement.

Processing Technique    Profilometer Indication/μm    Measured Value of This System/μm    Absolute Error/μm
Flat Milling            5.35                          5.555                               +0.005
                        3.05                          3.063                               +0.013
                        1.55                          1.557                               +0.002
                        0.70                          0.718                               +0.018
End Milling             5.96                          5.910                               −0.050
                        3.05                          3.035                               −0.015
                        1.63                          1.624                               −0.006
                        0.70                          0.718                               +0.018
Planer                  6.55                          6.522                               −0.028
                        3.18                          3.207                               +0.027
                        1.55                          1.560                               +0.010
                        0.70                          0.723                               +0.023
Flat Grind              0.71                          0.727                               +0.017
                        0.42                          0.415                               −0.005
                        0.203                         0.196                               −0.007
                        0.103                         0.107                               +0.004
Grind                   0.091                         0.095                               +0.004
                        0.044                         0.042                               −0.002
                        0.025                         0.026                               +0.001
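The error column in Table 8 is the signed difference between the two instruments. As a spot check, the End Milling rows reproduce the maximum absolute error of 0.050 μm quoted in the abstract:

```python
# End Milling rows of Table 8: (profilometer Ra, this system's Ra), in μm
rows = [(5.96, 5.910), (3.05, 3.035), (1.63, 1.624), (0.70, 0.718)]

abs_errors = [round(meas - ref, 3) for ref, meas in rows]
rel_errors = [round((meas - ref) / ref * 100, 1) for ref, meas in rows]  # percent

print(abs_errors)                       # → [-0.05, -0.015, -0.006, 0.018]
print(max(abs(e) for e in abs_errors))  # → 0.05
```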
Table 9. Comparison results of calibration speed.

Calibration Method            Scanning Time/min    Calibration Time/min    Total Time/min
Contact calibration system    —                    137.39                  137.39
Ours                          10.98                2.77                    13.75
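The 90% efficiency gain quoted in the abstract follows from the totals in Table 9:

```python
contact_total = 137.39       # min, contact calibration system (Table 9)
ours_total = 10.98 + 2.77    # min, scanning + calibration time of this system

improvement = (contact_total - ours_total) / contact_total * 100
print(f"total = {ours_total:.2f} min, improvement = {improvement:.0f}%")
# → total = 13.75 min, improvement = 90%
```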
Guo, H.; Chen, T.; Xu, X.; Qiu, Y.; Wu, J.; Wang, L.; Ye, H.; Chen, X.; Chen, N. Calibration of Roughness of Standard Samples Using Point Cloud Based on Line Chromatic Confocal Method. Electronics 2026, 15, 1517. https://doi.org/10.3390/electronics15071517
