Innovative Methodology of On-Line Point Cloud Data Compression for Free-Form Surface Scanning Measurement

Featured Application: On-line point cloud data compression for 3D free-form surface contact or non-contact scanning measuring equipment.

Abstract: In order to obtain a highly accurate profile of a measured three-dimensional (3D) free-form surface, a scanning measuring device has to produce extremely dense point cloud data at a high sampling rate. Bottlenecks are created by the inefficiencies of manipulating, storing and transferring these data, and parametric modelling from them is quite time-consuming. In order to effectively compress the dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents an innovative on-line point cloud data compression algorithm for 3D free-form surface scanning measurement. It is able to identify and eliminate the data redundancy caused by geometric feature similarity between adjacent scanning layers. First, the new algorithm adopts the bi-Akima method to compress the initial point cloud data; next, the data redundancy remaining in the compressed point cloud is further identified and eliminated, yielding the final compressed point cloud data. Finally, experiments are conducted, and the results demonstrate that the proposed algorithm is capable of obtaining high-quality data compression results with higher data compression ratios than other existing on-line point cloud data compression/reduction methods.


Introduction
With the rapid development of modern industry, three-dimensional (3D) free-form surface parts are being utilized more and more widely. These involve, but are not limited to, the aviation, aerospace, shipbuilding, automotive, biomedical and home appliance industries [1,2]. Recently, the automated 3D digitization of free-form surface objects has been widely applied in many areas, such as additive manufacturing (3D printing), rapid prototyping, reverse engineering, civil buildings, medical prosthetics and clinical diagnosis [3][4][5][6][7][8][9][10][11][12][13]. Scanning measurement is one of the key technologies for digitizing 3D physical models with free-form surfaces [14][15][16][17]. Unfortunately, in order to obtain a high-quality profile of a measured surface, scanning measuring devices have to produce massive amounts of point cloud data at high sampling rates, and not all these points are indispensable [18][19][20]. Bottlenecks arise from the inefficiencies of storing, manipulating and transferring them [21]. One classical on-line compression approach is the chordal method, which specifies that all discrete dense point sets are connected by straight segments. Therefore, the surface reconstructed from the compressed point cloud will be full of cusp points, and a smooth interpolated surface cannot be obtained. In view of this limitation, the authors of [37] presented an on-line point cloud data extraction algorithm using bi-Akima spline interpolation.
Although the above methods implement on-line point cloud data compression, they can only eliminate data redundancy of the current scanning line. Nevertheless, most surface 3D scanning measuring devices adopt a layer-by-layer scanning path (e.g., contact scanning probes [38], laser triangle displacement sensors [39], linear structured light systems [40], industrial computed tomography (CT) systems [41], etc.), and adjacent scanning lines are extremely similar in shape. The geometric feature similarity between such scanning layers is bound to result in data redundancy, which makes it possible to further compress the point cloud data during the scanning measuring process. Therefore, this study focuses on identifying and eliminating this kind of data redundancy caused by geometric feature similarity between adjacent scanning layers. After that, the massive amount of point cloud data can be further compressed during the 3D free-form surface measuring process.
The contents of this paper consist of four sections. In Section 2, the innovative methodology of the on-line point cloud data compression algorithm for 3D free-form surface scanning measurement is described in detail. In Section 3, the proposed algorithm was tested in the real-time scanning measuring process and compared with existing methods. Finally, some conclusions are drawn from this paper in Section 4.


Innovative Methodology
As shown in Figure 1, the overall process of on-line point cloud data compression in this work consists of four steps. In Step 1, the initial point cloud flow is obtained by 3D scanning measuring devices using an isochronous [42] or equidistant sampling method, and the layer-by-layer scanning path is adopted (Figure 2). In Step 2, the initial point cloud data flow is immediately compressed by the chordal method [36] or the bi-Akima method [37], both of which compress the point cloud data based on the data redundancy in the current single scanning layer. In Step 3, the data redundancy in the compressed point cloud obtained in the previous step is further identified. In Step 4, the identified redundant point data are eliminated, yielding the final compressed point cloud. At last, the final compressed data flow is transmitted to the storage space of the measurement system.

Herein, the real-time performance of the proposed data compression algorithm needs to be further analyzed and described. The path planning is performed before the start of the scanning measurement in Step 1. As shown in Figure 2, a layer-by-layer scanning path is adopted. The distance between adjacent scanning layers is determined by the preset measuring accuracy. The measured surface is cut by the scanning layers to form a number of corresponding scanning lines. As shown in Figure 2, there are two planning modes for scanning directions: (i) the progressive scanning mode, and (ii) the S-type scanning mode. Regardless of the scanning mode, the measuring device in Step 1 continuously transmits the initial point cloud data flow to the data compressor in Step 2. The compressor performs data compression immediately after receiving all initial point data of a single scanning layer, rather than waiting for the entire surface to be scanned before performing data compression.
That is, each time the point cloud data in the current scanning layer is completely transmitted to the compressor, the subsequent data compression algorithm is executed immediately. Therefore, the proposed data compression algorithm is essentially a quasi-real-time method, which we call an on-line data compression method. The flow chart of this algorithm is illustrated in Figure 3, and its principle is described in detail as follows:
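As a concrete sketch of this per-layer, quasi-real-time loop (Steps 1-4), the following Python outline compresses each scanning layer as soon as all of its points have arrived. All helper names are hypothetical, and the Step 2 and Step 3 bodies are trivial placeholders, not the authors' implementation:

```python
def bi_akima_compress(layer, eps):
    # Placeholder for the in-layer compressor of Step 2 (chordal [36] or
    # bi-Akima [37]); the real method drops points whose removal keeps the
    # interpolation error below eps. Here every point is kept.
    return list(layer)

def identify_redundancy(layer, previous_layers):
    # Placeholder for Step 3 (see "Data Redundancy Identification").
    return set()

def eliminate(layer, redundant_indices):
    # Step 4: drop the identified redundant points.
    return [p for idx, p in enumerate(layer) if idx not in redundant_indices]

def online_compress(layer_stream, eps):
    """Quasi-real-time loop: each scanning layer is compressed as soon as
    all of its initial sample points have arrived, instead of waiting for
    the whole surface to be scanned."""
    previous = []                          # already-compressed layers
    for raw_layer in layer_stream:         # Step 1: initial point cloud flow
        layer = bi_akima_compress(raw_layer, eps)           # Step 2
        redundant = identify_redundancy(layer, previous)    # Step 3
        final = eliminate(layer, redundant)                 # Step 4
        previous.append(final)
        yield final                        # sent on to storage immediately
```

Because `online_compress` is a generator, each compressed layer can be forwarded to storage while the next layer is still being scanned, which is what makes the method on-line rather than off-line.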


Data Redundancy Identification
In order to identify redundant data points in the compressed point cloud data flow from Step 2, it is first necessary to predict the current scan line in the unmeasured area. Herein, the prediction is realized by Hermite extrapolation [43], and a predicted curve is created. The data redundancy identification algorithm is detailed as follows. Figure 4 shows the schematic diagram of the data redundancy identification algorithm, in which line i is the current scanning line during the on-line measuring process, and P_i,j represents the jth point in scanning line i. If j ≥ 2, a shape-preserving piecewise bicubic Hermite curve can be built to predict the shape and direction of the current scanning line; we name this the predicted curve, as shown in Figure 4. After that, suppose k is a positive integer with 1 ≤ k < j, and let the coordinates of point P_i,k be (x_k, y_k, z_k); then, a series of specific Hermite interpolation polynomials can be determined by

y(x) = y_k·α_k(x) + y_{k+1}·α_{k+1}(x) + y′_k·β_k(x) + y′_{k+1}·β_{k+1}(x), z(x) = z_k·α_k(x) + z_{k+1}·α_{k+1}(x) + z′_k·β_k(x) + z′_{k+1}·β_{k+1}(x), (1)

where the cubic Hermite basis functions are

α_k(x) = [1 + 2(x − x_k)/(x_{k+1} − x_k)]·[(x − x_{k+1})/(x_k − x_{k+1})]², β_k(x) = (x − x_k)·[(x − x_{k+1})/(x_k − x_{k+1})]², (2)

(α_{k+1} and β_{k+1} are obtained by exchanging the roles of x_k and x_{k+1}), and the first derivatives y′_k, y′_{k+1}, z′_k, z′_{k+1} are estimated by shape-preserving difference formulas, treating the interior case 1 < k < j and the endpoint cases k = 1 and k = j separately. Herein, based on the compressed point cloud data flow from Step 2, the shape-preserving piecewise bicubic Hermite polynomials can be created according to the above algorithm. Then, Hermite extrapolation is performed to create a predicted curve (Equation (11)), which is marked in blue in Figure 4. After that, an estimated point P_est is created that moves along the predicted curve with a stepping distance of λ, starting from P_i,j. Meanwhile, a bounding sphere is built with point P_est as its center. The radius of the sphere is

R_sph = κ·h_ls, (12)

in which κ ∈ [1, 2] is the radius adjustment coefficient and h_ls is the distance between two parallel scanning layers. As shown in Figure 4, the predicted curve and the estimated point P_est are used to search for the neighbor point P_nb in the previous scanning line i−1. The necessary and sufficient condition for P_nb being the neighbor point of P_est is |P_est P_nb| ≤ R_sph, which means that P_nb lies inside the bounding sphere with point P_est as its center. At the very beginning, P_est coincides with P_i,j. At this point, there are two possibilities: (i) P_i−1,u is inside the bounding sphere (i.e., |P_i−1,u P_i,j| ≤ R_sph), or (ii) P_i−1,u is outside the bounding sphere. In case (i), P_i−1,u is the first found neighbor point. As P_est moves along the scanning direction with a stepping distance of λ, if |P_est P_i−1,u| < |P_i,j P_i−1,u|, then P_i−1,u is the neighbor point of P_est; otherwise, point P_i−1,u is discarded, as it is the neighbor point of P_i,j rather than of P_est. In case (ii), no operation is performed because no neighbor point has been found.
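Assuming the standard cubic Hermite basis, a single segment of such a curve can be evaluated, and extrapolated beyond its right endpoint to produce the predicted curve, as in the following minimal sketch (the function name is hypothetical; the shape-preserving derivative estimates are supplied by the caller):

```python
def hermite_segment(x, x0, x1, f0, f1, d0, d1):
    """Evaluate the cubic Hermite segment on [x0, x1] with endpoint values
    f0, f1 and endpoint first derivatives d0, d1. Evaluating at x > x1
    performs the Hermite extrapolation used for the predicted curve."""
    h = x1 - x0
    t = (x - x0) / h
    a0 = (1 + 2 * t) * (1 - t) ** 2    # basis alpha_k
    a1 = (3 - 2 * t) * t ** 2          # basis alpha_{k+1}
    b0 = h * t * (1 - t) ** 2          # basis beta_k
    b1 = h * t ** 2 * (t - 1)          # basis beta_{k+1}
    return f0 * a0 + f1 * a1 + d0 * b0 + d1 * b1
```

For instance, y(x) on the last measured segment can be evaluated at x beyond the segment's right endpoint to predict the unmeasured part of the current scanning line; the cubic reproduces straight lines exactly when the supplied derivatives are the true slopes.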
After case (i) or case (ii) is completed, point P est continues to move forward along the scanning direction until the neighbor point P nb of P est is found; if the neighbor point cannot be found, the search is stopped. If the neighbor point P nb is found in line i − 1 (e.g., P i−1,u+1 in Figure 4), then a new bounding sphere is built with P i−1,u+1 as the center and the radius is R sph . After that, we use this new bounding sphere to search for the neighbor point of P i−1,u+1 in line i − 2; and if the neighbor point cannot be found, we stop searching. Next, we take the new neighbor point in line i − 2 (e.g., P i−2,v+1 ) as a new center to build a bounding sphere and repeat the above process until we find three neighbor points in different scanning lines (e.g., P i−1,u+1 , P i−2,v+1 , P i−3,w+1 in Figure 4).
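The chained bounding-sphere search over lines i−1, i−2, i−3 can be sketched as follows. This is a simplified linear scan with hypothetical helper names: the real search moves P_est along the predicted curve in steps of λ, whereas here each candidate line is simply scanned point by point with the sphere radius R_sph from Equation (12):

```python
import math

def dist(p, q):
    # Euclidean distance between two 3D points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def neighbor_in_line(center, line, r_sph):
    """Return the first point of a previous scanning line that lies inside
    the bounding sphere of radius r_sph centred on `center`, or None."""
    for q in line:
        if dist(center, q) <= r_sph:
            return q
    return None

def chained_neighbors(p_est, prev_lines, r_sph, depth=3):
    """Chain the bounding-sphere search across lines i-1, i-2, i-3: each
    neighbor found becomes the centre of the next sphere; the search stops
    (returns None) as soon as one line yields no neighbor."""
    centre, found = p_est, []
    for line in prev_lines[:depth]:    # prev_lines[0] is line i-1, etc.
        nb = neighbor_in_line(centre, line, r_sph)
        if nb is None:
            return None
        found.append(nb)
        centre = nb
    return found
```

If three neighbors are found, they play the role of {P_i−1,u+1, P_i−2,v+1, P_i−3,w+1} in Figure 4; otherwise the search for the current estimated point is abandoned, as described above.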
Based on the neighbor point set {P_i−1,u+1, P_i−2,v+1, P_i−3,w+1}, the coordinates of estimated point P_est can be fixed uniquely. As shown in Figure 4, a bicubic Hermite curve is built, which can be expressed as

x(y) = x_{i−1}·α_{i−1}(y) + x_{i−2}·α_{i−2}(y) + x′_{i−1}·β_{i−1}(y) + x′_{i−2}·β_{i−2}(y), z(y) = z_{i−1}·α_{i−1}(y) + z_{i−2}·α_{i−2}(y) + z′_{i−1}·β_{i−1}(y) + z′_{i−2}·β_{i−2}(y), (13)

in which y is the independent variable; α_{i−1}(y), α_{i−2}(y), β_{i−1}(y), β_{i−2}(y) are obtained by Equation (2); x_{i−2}, z_{i−2} are acquired by Equations (3) and (4); and x_{i−1}, z_{i−1} are obtained by Equations (8)-(10). Obviously, the bicubic Hermite curve must lie in the curved surface described by Equation (14), and the predicted curve passes through this surface. Therefore, estimated point P_est can be fixed at the intersection of the predicted curve and the curved surface described by Equation (14). That is, the coordinates of estimated point P_est (x_est, y_est, z_est) can be determined by solving Equations (11) and (14) simultaneously.
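Once the predicted curve and the surface are both expressed as functions of a single parameter, fixing P_est reduces to a one-dimensional root-finding problem. A hedged sketch using bisection follows; the function names and the choice of bisection (rather than whatever solver the original implementation uses) are assumptions for illustration:

```python
def intersect_bisection(f_pred, f_surf, y_lo, y_hi, tol=1e-9):
    """Locate y_est with f_pred(y_est) = f_surf(y_est) by bisection on the
    difference g(y) = f_pred(y) - f_surf(y); assumes g changes sign on
    the bracket [y_lo, y_hi]."""
    g = lambda y: f_pred(y) - f_surf(y)
    a, b = y_lo, y_hi
    if g(a) * g(b) > 0:
        raise ValueError("no sign change on the bracket")
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m          # root lies in the left half
        else:
            a = m          # root lies in the right half
    return 0.5 * (a + b)
```

With y_est in hand, x_est and z_est follow by substituting y_est back into the curve equations.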

Data Redundancy Elimination
After the coordinates of estimated point P_est are determined, we use P_est to replace P_i,j+1 in scanning line i. Afterwards, the new point set that contains P_est is used for bi-Akima interpolation, and there is a deviation h_i,k between the interpolated curve and each initial sampled point Q_k, where i is the scanning line number and k is the serial number of the initial point cloud in line i. As mentioned earlier, the initial point cloud is obtained by 3D scanning measuring devices using the isochronous or equidistant sampling method in Step 1, as shown in Figure 1. The deviation h_i,k can be obtained by

h_i,k = min S = min √[(x_k − x)² + (y_k − y)² + (z_k − z)²], (15)

where point Q_k(x_k, y_k, z_k) is an initial sampled point between P_i,j(X_j, Y_j, Z_j) and P_i,j+1(X_j+1, Y_j+1, Z_j+1), and P_curv(x, y, z) is the point on the interpolated curve that makes the distance S shortest. Then, the maximum deviation d_max of the whole curve (i.e., from P_i,1 to P_est) can be calculated by the following formula:

d_max = max_k {h_i,k}, (16)

which is compared with the required accuracy ε. If d_max > ε, discard point P_est. If d_max < ε, delete the current compressed point P_i,j+1, which is input from Step 2. Next, create an estimative flag F_i,j+1 = 1 to replace point P_i,j+1; this flag takes up only one bit of data storage space. After completing the above process, output the final compressed point cloud data flow, which contains the point coordinate and estimative flag information, to the data storage devices. Afterwards, set j = j + 1, build a new shape-preserving piecewise bicubic Hermite curve to predict the shape and direction of the current scanning line, and create a new estimated point P_est, looping through the above data redundancy identification and elimination process until P_i,j is the end point of the current scanning line i or the data sampling is over. In addition, when P_i,j is the end point of line i, set i = i + 1 and continue to loop the above algorithm until the measurement is completed.
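The accept/reject rule above can be sketched as follows, with the interpolated curve approximated by a dense polyline for the distance computation and with hypothetical helper names; a minimal sketch, not the authors' implementation:

```python
import math

def dist(p, q):
    # Euclidean distance between two 3D points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def max_deviation(samples, curve_points):
    """d_max: the largest distance from any initial sampled point Q_k to
    the interpolated curve, with the curve represented here by a dense
    polyline of points for simplicity."""
    return max(min(dist(q, p) for p in curve_points) for q in samples)

def try_eliminate(p_next, d_max, eps):
    """Accept/reject rule: if d_max < eps, the compressed point P_{i,j+1}
    is redundant and is replaced by a one-bit estimative flag F = 1;
    otherwise the estimated point is discarded and P_{i,j+1} is kept."""
    if d_max < eps:
        return ("flag", 1)        # store only the estimative flag
    return ("point", p_next)      # keep the original compressed point
```

The one-bit flag is what makes the elimination pay off: a redundant point costs a single bit in the output stream instead of three coordinates.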

Experimental Results
In order to verify the feasibility of the proposed methodology, some experiments were performed in this section.

Test A
The on-line point cloud data compression algorithm was tested in the industrial real-time measuring process and compared with existing methods (chordal method and bi-Akima method). The measuring system consists of a contact 3D scanning probe, a vertical lathe and a commercial computer numerical control (CNC) system of SINUMERIK 840D (Munich, Bayern, Germany) as shown in Figure 5. The proposed algorithm is integrated in the original equipment manufacturer (OEM) application that runs on the host computer of the CNC system. The product model of the contact 3D scanning probe is DIGIT-02 (Dalian, Liaoning Province, China). More detailed technical characteristics of the measuring instrument are shown in Table 1.

The measured part is a half-ellipsoidal surface which is welded together from seven pieces of thin-walled aluminum alloy sheet, as shown in Figure 5d, with a semi-major axis of 1450 mm and a semi-minor axis of 950 mm. A rotational progressive scanning mode is adopted, and the layer spacing is 7 mm. Figure 6 shows the spatial distribution of the initial point cloud data. The isochronous sampling method is adopted, and the number of initial sampling points is 272,638. Using the same initial point cloud data set as shown in Figure 6, the data compression performance of the proposed method, the chordal method and the bi-Akima method is compared under different required accuracies (i.e., from 0.001 mm to 1 mm).
Table 2 summarizes the results of the data compression performance, including the number of points and the data compression ratio, where the compression ratio is defined as the ratio between the uncompressed size and the compressed size, i.e., compression ratio = (uncompressed data size)/(compressed data size). Obviously, the proposed method has a higher data compression ratio than the chordal method and the bi-Akima method, and the chordal method obtains the lowest data compression ratio under the same required accuracy. The number of data points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy. Figure 7 provides the comparison of the compression ratios between the three methods under the different required accuracies.
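Measured by point counts, the ratio defined above can be computed directly; a one-line sketch:

```python
def compression_ratio(n_uncompressed, n_compressed):
    # Ratio between uncompressed and compressed data size,
    # measured here by point counts.
    return n_uncompressed / n_compressed
```

For example, halving the point count of a cloud corresponds to a compression ratio of 2.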
With the decrease in accuracy requirements, the compression ratio increases for all methods; however, at all levels of required accuracy, our proposed method achieves a higher compression ratio than the other two methods. Obviously, the chordal method has the lowest data compression ratio. Therefore, we focus on comparing our proposed method with the bi-Akima method in the subsequent experiments. To make the comparison more vivid and intuitive, Figure 8 visually illustrates the difference between the proposed method and the bi-Akima method by displaying the spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated; these subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures g-i, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the welded region (Figure 8g), because the bi-Akima method can only deal with the point set in the current scanning line, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 8h), the data redundancy in the adjacent scanning layers is eliminated, and the final compressed point cloud data is obtained (Figure 8i).
To verify the accuracy of the proposed algorithm, Figure 9 analyzes the spatial distribution of the deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies. As can be seen, all the deviations are within the allowable range of required accuracy. Our method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and the interpolation curve is less than or equal to the required accuracy). In addition, the deviations are far lower than the required accuracy in most of the measured area. In Figure 9d, there is an interesting and noteworthy phenomenon: the upper right sector has a higher deviation. As mentioned earlier, the measured part is a large thin-walled surface welded together from seven pieces of aluminum alloy sheet (Figure 5d). The aluminum alloy sheet has a thickness of only 0.8 mm, but its size is very large (the semi-major axis of the ellipse is 1450 mm). The part has undergone great deformation after welding, and there is a large, random deviation between each welded part and the original design size. According to past experience, the maximum deviation in a local section can even reach 3 mm. Consequently, we infer that the upper right sector has a higher deviation because of deformation in this area. In the case where the required accuracy is on the order of millimeters (e.g., required accuracy ε = 1 mm in Figure 9d), the compressed point cloud data is very sparse. Therefore, this phenomenon occurs in a region where the point cloud density is low and the local deformation is large. However, in any case, the proposed method can tightly control the deviation within the preset range.

Test B
The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes. Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode and the equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points in each scanning layer is 0.05 mm. The initial point number is 63,376.

Figure 11. Spatial distribution of initial point cloud data.

The comparison is made between the proposed method and the bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 3 gives the results of data compression performance, including the number of points and the data compression ratio. Obviously, the proposed method has a higher data compression ratio than the bi-Akima method, and the number of points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy. Figure 12 provides the comparison of the compression ratios between these two methods under different required accuracies. With the decrease in accuracy requirements, the compression ratio increases for both methods; however, at all levels of required accuracy, the proposed method achieves a higher compression ratio than the bi-Akima method.

Test B
The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes.  Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376.  In subfigures c, f, i and l, the identified redundant points are eliminated. These subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures j, k and l, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the vertical direction (Figure 13j). This is because the bi-Akima method can only deal with the point set in the current single scanning line, which is along the horizontal direction, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 13k), the data redundancy in adjacent scanning layers is eliminated and the final compressed point cloud data is obtained (Figure 13l). a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes.  Figure 11 shows the initial point cloud data acquisition result. 
Figure 11. Spatial distribution of initial point cloud data.

To verify the accuracy of the proposed algorithm, Figure 14 analyzes the spatial distribution of the deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies. As can be seen, all deviations lie within the allowable range, which proves that the proposed method tightly controls the deviation within the error tolerance (i.e., the deviation between each initial sampled point and the interpolated surface is less than or equal to the required accuracy). Moreover, in most of the measured area the deviations are far smaller than the required accuracy.
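The accuracy check in Figure 14 amounts to measuring, for every initial sample, its distance to the surface rebuilt from the retained points and comparing it against the tolerance. A minimal 2D sketch of that check, using piecewise-linear interpolation as a stand-in for the Akima-interpolated surface (the data and the 0.1 mm tolerance are illustrative):

```python
import bisect

def max_deviation(initial_pts, kept_pts):
    """Largest vertical gap between the initial samples and the piecewise-
    linear curve through the retained points (a 2D stand-in: the paper
    measures against an Akima-interpolated surface)."""
    xs = [x for x, _ in kept_pts]
    worst = 0.0
    for x, z in initial_pts:
        # Clamp to a valid segment so endpoints are handled too.
        k = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
        (x0, z0), (x1, z1) = kept_pts[k - 1], kept_pts[k]
        t = (x - x0) / (x1 - x0)
        worst = max(worst, abs(z - (z0 + t * (z1 - z0))))
    return worst

# z = x^2 sampled every 0.05 mm, then thinned to three retained points.
line = [(0.05 * i, (0.05 * i) ** 2) for i in range(21)]
kept = [line[0], line[10], line[20]]
dev = max_deviation(line, kept)   # about 0.0625
assert dev <= 0.1                 # within a hypothetical 0.1 mm tolerance
```

A compressed point set passes the check only when this worst-case deviation stays at or below the required accuracy, which is the property Figure 14 verifies point by point.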

Discussion
The experimental results in Section 3 indicate that the proposed on-line point cloud data compression algorithm for free-form surface scanning measurement has the following features:
• It compresses the point cloud further and obtains a higher data compression ratio than the existing methods under the same required accuracy; its compression performance is clearly superior to that of the bi-Akima and chordal methods;
• It tightly controls the deviation within the error tolerance range, and in most of the measured area the deviations are far smaller than the required accuracy;
• Test A preliminarily verifies the feasibility of applying the proposed method in an industrial environment, and Test B demonstrates that the method is equally effective, with stable performance, on complex surfaces containing a large number of details, edges and sharp features;
• The proposed method has the potential to replace traditional on-line point cloud data compression methods (the bi-Akima and chordal methods) in industrial environments. Potential applications include the real-time measurement processes of scanning devices such as contact scanning probes, laser triangulation displacement sensors, mobile laser scanners, linear structured-light systems and industrial CT systems. This feasibility needs to be confirmed in subsequent case studies.

However, the proposed method still has limitations, and the following aspects need to be developed further in future work:

• It can only handle 3D point cloud data streams and is not suited to point clouds carrying additional high-dimensional information (e.g., 3D points with grayscale or color attributes); we will try to solve this problem in future research;
• It can only compress point cloud streams that are scanned layer by layer. If the 3D point cloud is randomly sampled and there are no regular scan lines (e.g., 3D measurement with speckle structured light), the method cannot perform effective data compression; solving this problem remains a major challenge.

Conclusions
In an attempt to effectively compress the dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents a novel on-line point cloud data compression algorithm that identifies and eliminates the data redundancy caused by geometric feature similarity between adjacent scanning layers. First, the algorithm adopts the bi-Akima method to compress the initial point cloud data obtained by 3D scanning measuring devices; next, the data redundancy remaining in the compressed point cloud is identified and eliminated, yielding the final compressed point cloud data. Finally, the proposed algorithm was tested in a real-time scanning measuring process and compared with existing methods (the chordal method and the bi-Akima method). The experimental results preliminarily verify the feasibility of applying the proposed method in an industrial environment and show that it obtains high-quality compressed point cloud data with a higher compression ratio than the other existing methods. In particular, it tightly controls the deviation within the error tolerance range, which demonstrates the superior performance of the proposed algorithm. The algorithm could be used in the data acquisition process of 3D free-form surface scanning measurement to replace other existing on-line point cloud data compression/reduction methods.
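As a point of reference for the per-line compression stage, the chordal method used above as a baseline can be sketched in a few lines. This is a simplified greedy variant with illustrative data and tolerance; the proposed algorithm replaces the chord test with a bi-Akima interpolation test, but the keep/discard loop has the same shape.

```python
def chordal_reduce(line, tol):
    """Greedy chordal reduction of one scan line: point k is retained when
    the chord from the last retained point to point k+1 misses it by more
    than tol.  Simplified sketch of the chordal baseline; the paper's
    algorithm substitutes a bi-Akima interpolation test here."""
    kept = [0]
    for k in range(1, len(line) - 1):
        x0, z0 = line[kept[-1]]
        x1, z1 = line[k + 1]
        t = (line[k][0] - x0) / (x1 - x0)
        if abs(line[k][1] - (z0 + t * (z1 - z0))) > tol:
            kept.append(k)
    kept.append(len(line) - 1)          # endpoints are always retained
    return [line[k] for k in kept]

# A flat run followed by a step: the flat samples collapse to their
# endpoints while both sides of the step edge are preserved.
line = ([(0.05 * i, 0.0) for i in range(10)]
        + [(0.05 * i, 1.0) for i in range(10, 20)])
reduced = chordal_reduce(line, tol=0.01)   # 20 points reduced to 4
```

Such per-line reduction leaves the cross-layer redundancy untouched, which is precisely the gap the second stage of the proposed algorithm closes.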