Article

Innovative Methodology of On-Line Point Cloud Data Compression for Free-Form Surface Scanning Measurement

1 School of Manufacturing Science and Engineering, Sichuan University, Chengdu 610065, China
2 Aerospace Research Institute of Materials and Processing Technology, China Academy of Launch Vehicle Technology, Beijing 100076, China
3 Energy Research Center of Lower Saxony (EFZN), 38640 Goslar, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(12), 2556; https://doi.org/10.3390/app8122556
Submission received: 23 October 2018 / Revised: 27 November 2018 / Accepted: 3 December 2018 / Published: 10 December 2018
(This article belongs to the Special Issue Precision Dimensional Measurements)

Featured Application

On-line point cloud data compression process for 3D free-form surface contact or non-contact scanning measuring equipment.

Abstract

In order to obtain a highly accurate profile of a measured three-dimensional (3D) free-form surface, a scanning measuring device has to produce extremely dense point cloud data at a high sampling rate. Bottlenecks are created by the inefficiency of manipulating, storing and transferring these data, and parametric modelling from them is quite time-consuming. To compress the dense point cloud data obtained from a 3D free-form surface effectively during the real-time scanning measuring process, this paper presents an innovative on-line point cloud data compression algorithm for 3D free-form surface scanning measurement, which can identify and eliminate the data redundancy caused by geometric feature similarity between adjacent scanning layers. The new algorithm first adopts the bi-Akima method to compress the initial point cloud data; the data redundancy remaining in the compressed point cloud is then further identified and eliminated to yield the final compressed point cloud data. Finally, experiments were conducted, and the results demonstrate that the proposed algorithm obtains high-quality data compression results with higher data compression ratios than existing on-line point cloud data compression/reduction methods.

1. Introduction

With the rapid development of modern industry, three-dimensional (3D) free-form surface parts are being utilized more and more widely. These involve, but are not limited to, the aviation, aerospace, shipbuilding, automotive, biomedical and home appliance industries [1,2]. Recently, the automated 3D digitization of free-form surface objects has been widely applied in many areas, such as additive manufacturing (3D printing), rapid prototyping, reverse engineering, civil buildings, medical prosthetics and clinical diagnosis [3,4,5,6,7,8,9,10,11,12,13]. Scanning measurement is one of the key technologies for digitizing 3D physical models with free-form surfaces [14,15,16,17]. Unfortunately, in order to obtain a high-quality profile of a measured surface, scanning measuring devices have to produce massive amounts of point cloud data at high sampling rates, and not all these points are indispensable [18,19,20]. Bottlenecks arise from the inefficiencies of storing, manipulating and transferring them [21]. Furthermore, parametric modelling from this massive amount of point cloud data is a time-consuming task [22,23,24]. For this reason, compressing the measured point data while maintaining the required accuracy is a crucial task during the scanning measuring process [25]. Herein, the required accuracy is a threshold distance, which is preset to a constant positive value before scanning measurement begins. The accuracy of a given data compression algorithm is characterized by the distance from each sampled point in the initial dense point cloud data to the surface generated by the compressed point cloud. Describing a measured surface with the fewest point data while guaranteeing a given data compression accuracy is always the goal [26,27]. Therefore, a high-quality point cloud data compression algorithm for 3D free-form surface scanning measurement is being pursued constantly [28].
Experts and scholars around the world have been paying more and more attention to this issue, and a number of point cloud data compression/reduction algorithms for free-form/irregular surface scanning measurement have been developed. Lee et al. [29] proposed an algorithm for processing point cloud data obtained by laser scanning devices. This algorithm adopts a one-directional (1D) or bi-directional (2D) non-uniform grid to reduce the amount of point cloud data. Chen et al. [5] presented a data compression method based on a bi-directional point cloud slicing strategy for reverse engineering. This method can preserve local details (geometric features in both two parametric directions) when performing data compression. Ma and Cripps [30] proposed a new data compression algorithm for surface points to preserve the original surface points. The error metric is defined as the relative Hausdorff distance between two principal curvature vector sets for surface shape comparison. After comparison, the difference between the compressed data points and original data points can be obtained. Therefore, redundant points are repeatedly removed until the difference induced exceeds the specified tolerance. Smith, Petrova, and Schaefer [31] presented a progressive encoding and compression method for surfaces generated from point cloud data. At first, an octree is built whose nodes contain planes that are constructed as the least square fit of the data within that node. Then, this octree is pruned to remove redundant data while avoiding topological changes created by merging disjointed linear pieces. Morell et al. [32] presented a geometric 3D point cloud lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of points/area information. This compression system can be customized to achieve different data compression or accuracy ratios. 
The above methods have focused on optimizing data compression quality based on building and processing polyhedral models or numerical iterative calculations. Nevertheless, they are all off-line data compression algorithms and can only compress the point cloud data of a whole measured surface after data acquisition. In other words, they cannot perform online data compression during real-time measurement. Data acquisition and data compression processes are completely separate. A large amount of redundant point cloud data occupies a great deal of storage space in scanning measuring devices. Moreover, the transmission and processing of point cloud data still takes up a significant amount of time and hardware resources.
This problem has attracted the attention of many scholars and engineers, and they have proposed quite a number of on-line point cloud data compression/reduction methods. Lu et al. [33] adopted the chordal method to compress point cloud data, and realized the on-line data compression of point cloud data during real-time scanning measurement for the first time. ElKott and Veldhuis [34] presented an automatic surface sampling approach based on scanning isoparametric lines. The sampling locations are confirmed by the deviations between the alternative geometry and sampled model, and the location of each sampling line is confirmed by the curvature of the sampled surface model. Wozniak, Balazinski, and Mayer [35] presented a point cloud data compression method based on fuzzy logic and the geometric solution of an arc at each measured point. This is an on-line data compression method and can be used in the surface scanning measuring process of coordinate measuring machines (CMMs). Jia et al. [36] proposed an on-line data compression method based on the equal-error chordal method and isochronous sampling. In order to solve the problem of massive data storage, dual-buffer and dual-thread dynamic storage is adopted. Tao et al. [37] found that the essence of all the above on-line point cloud data compression methods is the chordal method, which specifies that all discrete dense point sets are connected by straight segments. Therefore, the surface reconstructed by the compressed point cloud will be full of cusp points, and so we cannot obtain a smooth interpolated surface. In view of this limitation, they presented an on-line point cloud data extraction algorithm using bi-Akima spline interpolation.
Although the above methods implement on-line point cloud data compression, they can only eliminate data redundancy of the current scanning line. Nevertheless, most surface 3D scanning measuring devices adopt a layer-by-layer scanning path (e.g., contact scanning probes [38], laser triangle displacement sensors [39], linear structured light systems [40], industrial computed tomography (CT) systems [41], etc.), and adjacent scanning lines are extremely similar in shape. The geometric feature similarity between such scanning layers is bound to result in data redundancy, which makes it possible to further compress the point cloud data during the scanning measuring process. Therefore, this study focuses on identifying and eliminating this kind of data redundancy caused by geometric feature similarity between adjacent scanning layers. After that, the massive amount of point cloud data can be further compressed during the 3D free-form surface measuring process.
The remainder of this paper is organized as follows. Section 2 describes the innovative methodology of the on-line point cloud data compression algorithm for 3D free-form surface scanning measurement in detail. Section 3 tests the proposed algorithm in the real-time scanning measuring process and compares it with existing methods. Finally, Section 4 discusses the results and draws conclusions.

2. Innovative Methodology

As shown in Figure 1, the overall process of on-line point cloud data compression in this work consists of four steps. In Step 1, the initial point cloud flow is obtained by 3D scanning measuring devices using an isochronous [42] or equidistant sampling method, and a layer-by-layer scanning path is adopted (Figure 2). In Step 2, the initial point cloud data flow is immediately compressed by the chordal method [36] or bi-Akima method [37], both of which reduce the amount of point cloud data based on the data redundancy within the current single scanning layer. In Step 3, the data redundancy in the compressed point cloud obtained in the previous step is further identified. In Step 4, the identified redundant point data is eliminated to obtain the final compressed point cloud. Finally, the compressed data flow is transmitted to the storage space of the measurement system.
Herein, the real-time performance of the proposed data compression algorithm needs to be further analyzed and described. The path planning is performed before the start of the scanning measurement in Step 1. As shown in Figure 2, a layer-by-layer scanning path is adopted. The distance between the adjacent scanning layers is determined by the preset measuring accuracy. The measured surface is cut by the scanning layers to form a number of corresponding scanning lines. As shown in Figure 2, there are two planning modes for scanning directions: (i) the progressive scanning mode, and (ii) the S-type scanning mode. Regardless of the scanning mode, the measuring device in Step 1 will continuously transmit the initial point cloud data flow to the data compressor in Step 2. The compressor performs data compression immediately after receiving all initial point data of a single scanning layer, rather than waiting for the entire surface to be scanned before performing data compression. That is, each time the point cloud data in the current scanning layer is completely transmitted to the compressor, the subsequent data compression algorithm is executed immediately. Therefore, the proposed data compression algorithm is essentially a quasi-real-time method, which we call an on-line data compression method.
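The two planning modes differ only in the traversal direction of each layer. A minimal sketch of how an S-type (boustrophedon) ordering is derived from a progressive one; the helper name `s_type_layers` is ours, not from the paper:

```python
def s_type_layers(lines):
    """Convert a progressive layer-by-layer path into an S-type
    (boustrophedon) path by reversing every other scanning layer,
    so the device never has to retract to the start of a layer.

    `lines` is a list of per-layer point sequences produced by the
    path planning of Step 1.
    """
    return [line if i % 2 == 0 else line[::-1]
            for i, line in enumerate(lines)]
```

Either ordering feeds the same point cloud flow to the compressor; only the in-layer point order changes.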
The flow chart of this algorithm is illustrated in Figure 3, and its principle is described in detail as follows:

2.1. Data Redundancy Identification

In order to identify redundant data points in the compressed point cloud data flow from Step 2, it is first necessary to predict the current scan line in the unmeasured area. Herein, the prediction is realized by Hermite extrapolation [43], and a predicted curve is created. The data redundancy identification algorithm is detailed as follows:
Figure 4 shows the schematic diagram of the data redundancy identification algorithm, in which line i is the current scanning line during the on-line measuring process, and P_{i,j} represents the j-th point in scanning line i. If j ≥ 2, a shape-preserving piecewise bicubic Hermite curve can be built to predict the shape and direction of the current scanning line; here, we name this the predicted curve, as shown in Figure 4. After that, suppose k is a positive integer with 1 ≤ k < j, and let the coordinates of point P_{i,k} be (x_k, y_k, z_k); then, a series of specific Hermite interpolation polynomials can be determined by
\[
\begin{cases}
H_y(x) = y_k\,\alpha_k(x) + y_{k+1}\,\alpha_{k+1}(x) + y'_k\,\beta_k(x) + y'_{k+1}\,\beta_{k+1}(x)\\
H_z(x) = z_k\,\alpha_k(x) + z_{k+1}\,\alpha_{k+1}(x) + z'_k\,\beta_k(x) + z'_{k+1}\,\beta_{k+1}(x)
\end{cases},
\tag{1}
\]
where
\[
\begin{cases}
\alpha_k(x) = \left(1 + 2\,\dfrac{x - x_k}{x_{k+1} - x_k}\right)\left(\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2\\[6pt]
\alpha_{k+1}(x) = \left(1 + 2\,\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)\left(\dfrac{x - x_k}{x_{k+1} - x_k}\right)^2\\[6pt]
\beta_k(x) = (x - x_k)\left(\dfrac{x - x_{k+1}}{x_k - x_{k+1}}\right)^2\\[6pt]
\beta_{k+1}(x) = (x - x_{k+1})\left(\dfrac{x - x_k}{x_{k+1} - x_k}\right)^2
\end{cases},
\tag{2}
\]
and the first derivatives y'_k, y'_{k+1}, z'_k, z'_{k+1} can be estimated by the following formulas.
When 1 < k < j :
\[
y'_k = f'_y(x_k) =
\begin{cases}
0, & \text{if } \dfrac{y_{k+1}-y_k}{x_{k+1}-x_k}\cdot\dfrac{y_k-y_{k-1}}{x_k-x_{k-1}} < 0\\[8pt]
\dfrac{1}{2}\left(\dfrac{y_{k+1}-y_k}{x_{k+1}-x_k} + \dfrac{y_k-y_{k-1}}{x_k-x_{k-1}}\right), & \text{if } \dfrac{y_{k+1}-y_k}{x_{k+1}-x_k}\cdot\dfrac{y_k-y_{k-1}}{x_k-x_{k-1}} \ge 0
\end{cases},
\tag{3}
\]
\[
z'_k = f'_z(x_k) =
\begin{cases}
0, & \text{if } \dfrac{z_{k+1}-z_k}{x_{k+1}-x_k}\cdot\dfrac{z_k-z_{k-1}}{x_k-x_{k-1}} < 0\\[8pt]
\dfrac{1}{2}\left(\dfrac{z_{k+1}-z_k}{x_{k+1}-x_k} + \dfrac{z_k-z_{k-1}}{x_k-x_{k-1}}\right), & \text{if } \dfrac{z_{k+1}-z_k}{x_{k+1}-x_k}\cdot\dfrac{z_k-z_{k-1}}{x_k-x_{k-1}} \ge 0
\end{cases}.
\tag{4}
\]
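For interior points, the slope rule above is the standard shape-preserving estimate: the derivative is zeroed at a local extremum (where the adjacent secant slopes change sign) and is otherwise the mean of the two secant slopes. A minimal sketch for one coordinate; the function name `interior_slope` is ours:

```python
def interior_slope(x, v, k):
    """Estimate the first derivative v'_k at an interior sample k,
    as in Equations (3)-(4); x holds the parameter values and v one
    coordinate (y or z) of the compressed points.
    """
    left = (v[k] - v[k - 1]) / (x[k] - x[k - 1])    # secant slope before k
    right = (v[k + 1] - v[k]) / (x[k + 1] - x[k])   # secant slope after k
    if left * right < 0:
        return 0.0                                  # local extremum: flatten
    return 0.5 * (left + right)                     # mean of adjacent secants
```

Zeroing the slope at extrema is what keeps the Hermite curve from overshooting the sampled data.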
When k = 1 :
\[
y'_1 =
\begin{cases}
0, & \text{if } d_y\cdot\dfrac{y_2-y_1}{x_2-x_1} < 0\\[8pt]
3\,\dfrac{y_2-y_1}{x_2-x_1}, & \text{if } |d_y| > 3\left|\dfrac{y_2-y_1}{x_2-x_1}\right| \ \text{and}\ \dfrac{(y_2-y_1)(y_3-y_2)}{(x_2-x_1)(x_3-x_2)} < 0\\[8pt]
d_y, & \text{otherwise}
\end{cases},
\tag{5}
\]
\[
z'_1 =
\begin{cases}
0, & \text{if } d_z\cdot\dfrac{z_2-z_1}{x_2-x_1} < 0\\[8pt]
3\,\dfrac{z_2-z_1}{x_2-x_1}, & \text{if } |d_z| > 3\left|\dfrac{z_2-z_1}{x_2-x_1}\right| \ \text{and}\ \dfrac{(z_2-z_1)(z_3-z_2)}{(x_2-x_1)(x_3-x_2)} < 0\\[8pt]
d_z, & \text{otherwise}
\end{cases},
\tag{6}
\]
in which
\[
\begin{cases}
d_y = \dfrac{(x_3 + x_2 - 2x_1)(y_2 - y_1)}{(x_2 - x_1)(x_3 - x_1)} - \dfrac{(x_2 - x_1)(y_3 - y_2)}{(x_3 - x_2)(x_3 - x_1)}\\[8pt]
d_z = \dfrac{(x_3 + x_2 - 2x_1)(z_2 - z_1)}{(x_2 - x_1)(x_3 - x_1)} - \dfrac{(x_2 - x_1)(z_3 - z_2)}{(x_3 - x_2)(x_3 - x_1)}
\end{cases}.
\tag{7}
\]
When k = j :
\[
y'_j =
\begin{cases}
0, & \text{if } e_y\cdot\dfrac{y_j-y_{j-1}}{x_j-x_{j-1}} < 0\\[8pt]
3\,\dfrac{y_j-y_{j-1}}{x_j-x_{j-1}}, & \text{if } |e_y| > 3\left|\dfrac{y_j-y_{j-1}}{x_j-x_{j-1}}\right| \ \text{and}\ \dfrac{(y_j-y_{j-1})(y_{j-1}-y_{j-2})}{(x_j-x_{j-1})(x_{j-1}-x_{j-2})} < 0\\[8pt]
e_y, & \text{otherwise}
\end{cases},
\tag{8}
\]
\[
z'_j =
\begin{cases}
0, & \text{if } e_z\cdot\dfrac{z_j-z_{j-1}}{x_j-x_{j-1}} < 0\\[8pt]
3\,\dfrac{z_j-z_{j-1}}{x_j-x_{j-1}}, & \text{if } |e_z| > 3\left|\dfrac{z_j-z_{j-1}}{x_j-x_{j-1}}\right| \ \text{and}\ \dfrac{(z_j-z_{j-1})(z_{j-1}-z_{j-2})}{(x_j-x_{j-1})(x_{j-1}-x_{j-2})} < 0\\[8pt]
e_z, & \text{otherwise}
\end{cases},
\tag{9}
\]
in which
\[
\begin{cases}
e_y = \dfrac{(2x_j - x_{j-1} - x_{j-2})(y_j - y_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \dfrac{(x_j - x_{j-1})(y_{j-1} - y_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})}\\[8pt]
e_z = \dfrac{(2x_j - x_{j-1} - x_{j-2})(z_j - z_{j-1})}{(x_j - x_{j-1})(x_j - x_{j-2})} - \dfrac{(x_j - x_{j-1})(z_{j-1} - z_{j-2})}{(x_{j-1} - x_{j-2})(x_j - x_{j-2})}
\end{cases}.
\tag{10}
\]
Herein, based on the compressed point cloud data flow from Step 2, the shape-preserving piecewise bicubic Hermite polynomials can be created according to the above algorithm. Then, Hermite extrapolation is performed to create a predicted curve, which is marked in blue as shown in Figure 4, and its analytical formula can be described as follows:
\[
\begin{cases}
H_y(x) = y_{j-1}\,\alpha_{j-1}(x) + y_j\,\alpha_j(x) + y'_{j-1}\,\beta_{j-1}(x) + y'_j\,\beta_j(x)\\
H_z(x) = z_{j-1}\,\alpha_{j-1}(x) + z_j\,\alpha_j(x) + z'_{j-1}\,\beta_{j-1}(x) + z'_j\,\beta_j(x)
\end{cases}.
\tag{11}
\]
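Because the Hermite basis functions of Equation (2) are ordinary polynomials, the predicted curve is obtained simply by evaluating the last segment beyond its right endpoint. A sketch for one coordinate, under the assumption of the notation above; `hermite_segment` is an illustrative helper, not the paper's implementation:

```python
def hermite_segment(xk, xk1, vk, vk1, dk, dk1, x):
    """Evaluate the cubic Hermite polynomial of Equations (1)-(2) for
    one coordinate (y or z) on the segment [xk, xk1]: vk, vk1 are the
    coordinate values and dk, dk1 the estimated first derivatives.
    Evaluating at x > xk1 extrapolates the segment, which is how the
    predicted curve of Equation (11) extends the current scanning line.
    """
    h = xk1 - xk
    t0 = (x - xk1) / (xk - xk1)            # factor of alpha_k, beta_k
    t1 = (x - xk) / h                      # factor of alpha_{k+1}, beta_{k+1}
    a_k = (1 + 2 * (x - xk) / h) * t0 ** 2
    a_k1 = (1 - 2 * (x - xk1) / h) * t1 ** 2
    b_k = (x - xk) * t0 ** 2
    b_k1 = (x - xk1) * t1 ** 2
    return vk * a_k + vk1 * a_k1 + dk * b_k + dk1 * b_k1
```

For linear data with matching slopes the segment reproduces the line exactly, inside and outside [xk, xk1].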
After that, an estimated point P_est is created, which moves along the predicted curve with a stepping distance of λ, starting from P_{i,j}. Meanwhile, a bounding sphere is built with point P_est as the center. The radius of the sphere is
\[
R_{sph} = \kappa\, h_{ls},
\tag{12}
\]
in which κ ∈ [1, 2] is the radius adjustment coefficient, and h_{ls} is the distance between two parallel scanning layers. As shown in Figure 4, the predicted curve and the estimated point P_est are used to search for the neighbor point P_nb in the previous scanning line i−1. The necessary and sufficient condition for P_nb to be the neighbor point of P_est is |P_est P_nb| ≤ R_sph, which means that P_nb lies inside the bounding sphere centered at P_est. At the very beginning, P_est coincides with P_{i,j}. At this point, there are two possibilities: (i) P_{i−1,u} is inside the bounding sphere (i.e., |P_{i−1,u} P_{i,j}| ≤ R_sph), or (ii) P_{i−1,u} is outside the bounding sphere. In case (i), P_{i−1,u} is the first found neighbor point. As P_est moves along the scanning direction with a stepping distance of λ, if |P_est P_{i−1,u}| < |P_{i,j} P_{i−1,u}|, then P_{i−1,u} is the neighbor point of P_est; otherwise, point P_{i−1,u} is discarded, as it is the neighbor point of P_{i,j} rather than of P_est. In case (ii), no operation is performed because no neighbor point has been found. After case (i) or case (ii) is completed, point P_est continues to move forward along the scanning direction until the neighbor point P_nb of P_est is found; if no neighbor point can be found, the search is stopped.
If the neighbor point P_nb is found in line i−1 (e.g., P_{i−1,u+1} in Figure 4), then a new bounding sphere with radius R_sph is built with P_{i−1,u+1} as the center. After that, this new bounding sphere is used to search for the neighbor point of P_{i−1,u+1} in line i−2; if no neighbor point can be found, the search is stopped. Next, the new neighbor point in line i−2 (e.g., P_{i−2,v+1}) is taken as a new center to build a bounding sphere, and the above process is repeated until three neighbor points in different scanning lines are found (e.g., P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1} in Figure 4).
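The in-sphere test at the core of this search can be sketched as follows. This simplified helper (`find_neighbor` is our naming) only checks sphere membership for one previous line and returns the nearest candidate; the stepping of P_est along the predicted curve and the chained search across lines i−2, i−3 are omitted:

```python
import numpy as np

def find_neighbor(center, prev_line, r_sph):
    """Return the index of the point of the previous scanning line that
    lies inside the bounding sphere around `center` (|P_est P_nb| <= R_sph)
    and is closest to it, or None when the line has no such point.

    `prev_line` is an (n, 3) array of compressed points of line i-1.
    """
    d = np.linalg.norm(prev_line - np.asarray(center), axis=1)
    inside = np.flatnonzero(d <= r_sph)      # indices inside the sphere
    if inside.size == 0:
        return None                          # stop searching (case ii)
    return int(inside[np.argmin(d[inside])]) # nearest in-sphere point
```

In the full algorithm the returned point becomes the center of the next bounding sphere, one scanning line further back.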
Based on the neighbor point set {P_{i−1,u+1}, P_{i−2,v+1}, P_{i−3,w+1}}, the coordinates of the estimated point P_est can be determined uniquely. As shown in Figure 4, a bicubic Hermite curve is built, and it can be expressed as
\[
\begin{cases}
H_x(y) = x_{i-2}\,\alpha_{i-2}(y) + x_{i-1}\,\alpha_{i-1}(y) + x'_{i-2}\,\beta_{i-2}(y) + x'_{i-1}\,\beta_{i-1}(y)\\
H_z(y) = z_{i-2}\,\alpha_{i-2}(y) + z_{i-1}\,\alpha_{i-1}(y) + z'_{i-2}\,\beta_{i-2}(y) + z'_{i-1}\,\beta_{i-1}(y)
\end{cases},
\tag{13}
\]
in which y is the independent variable; α_{i−1}(y), α_{i−2}(y), β_{i−1}(y), β_{i−2}(y) are obtained by Equation (2); x'_{i−2}, z'_{i−2} are acquired by Equations (3) and (4); and x'_{i−1}, z'_{i−1} are obtained by Equations (8)–(10). Obviously, the bicubic Hermite curve must lie in the curved surface with the equation
\[
H_x(y) = x_{i-2}\,\alpha_{i-2}(y) + x_{i-1}\,\alpha_{i-1}(y) + x'_{i-2}\,\beta_{i-2}(y) + x'_{i-1}\,\beta_{i-1}(y),
\tag{14}
\]
and the predicted curve will pass through this curved surface. Therefore, the estimated point P_est can be fixed at the intersection of the predicted curve and the curved surface described by Equation (14). That is, the coordinates of the estimated point P_est(x_est, y_est, z_est) can be determined by Equations (11) and (14).

2.2. Data Redundancy Elimination

After the coordinates of the estimated point P_est are determined, P_est is used to replace P_{i,j+1} in scanning line i. Afterwards, the new point set containing P_est is used for bi-Akima interpolation, and there is a deviation h_{i,k} between the interpolated curve and each initial sampled point Q_k, where i is the scanning line number and k is the serial number of the initial sampled point in line i. As mentioned earlier, the initial point cloud is obtained by 3D scanning measuring devices using the isochronous or equidistant sampling method in Step 1, as shown in Figure 1. The deviation h_{i,k} can be obtained by
\[
h_{i,k} = \min_{x \in (X_j,\,X_{j+1})} s = \min_{x \in (X_j,\,X_{j+1})} \sqrt{(x - x_k)^2 + (y - y_k)^2 + (z - z_k)^2},
\tag{15}
\]
where point Q_k(x_k, y_k, z_k) is an initial sampled point between P_{i,j}(X_j, Y_j, Z_j) and P_{i,j+1}(X_{j+1}, Y_{j+1}, Z_{j+1}), and P_curv(x, y, z) is the point on the interpolated curve that makes the distance s shortest. Then, the maximum deviation d_max of the whole curve (i.e., from P_{i,1} to P_est) can be calculated by the following formula:
\[
d_{\max} = \max_k \left( h_{i,k} \right),
\tag{16}
\]
which is compared with the required accuracy ε. If d_max > ε, the point P_est is discarded. If d_max < ε, the current compressed point P_{i,j+1}, which is input from Step 2, is deleted, and an estimative flag F_{i,j+1} = 1 is created to replace point P_{i,j+1}; this flag takes up only one bit of data storage space. After completing the above process, the final compressed point cloud data flow, which contains the point coordinates and the estimative flag information, is output to the data storage devices. Afterwards, let j = j + 1, build a new shape-preserving piecewise bicubic Hermite curve to predict the shape and direction of the current scanning line, and create a new estimated point P_est to loop through the above data redundancy identification and elimination process until P_{i,j} is the end point of the current scanning line i or the data sampling is over. In addition, when P_{i,j} is the end point of line i, let i = i + 1 and continue to loop the above algorithm until the measurement is completed.
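The accept/reject decision of Equations (15) and (16) can be approximated numerically. In this sketch (function names are ours) the interpolated curve is represented by a dense polygonal sampling, so the minimum in Equation (15) is taken over sampled curve points rather than solved exactly:

```python
import numpy as np

def max_deviation(curve_pts, sampled_pts):
    """Approximate d_max of Equations (15)-(16): for every initial
    sampled point Q_k, take the minimum distance to a dense sampling of
    the interpolated curve, then return the maximum over all points.

    curve_pts: (m, 3) dense evaluation of the bi-Akima curve;
    sampled_pts: (n, 3) initial sampled points of the segment.
    """
    diff = sampled_pts[:, None, :] - curve_pts[None, :, :]
    dist = np.linalg.norm(diff, axis=2)   # (n, m) pairwise distances
    return dist.min(axis=1).max()         # min over curve, max over points

def accept_estimated_point(d_max, eps):
    """P_{i,j+1} is deleted (and flagged F_{i,j+1} = 1) only when the
    whole segment stays within the required accuracy eps."""
    return d_max <= eps
```

A finer curve sampling tightens the approximation of the true point-to-curve distance at the cost of more pairwise evaluations.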

3. Experimental Results

In order to verify the feasibility of the proposed methodology, two experiments were performed.

3.1. Test A

The on-line point cloud data compression algorithm was tested in an industrial real-time measuring process and compared with existing methods (the chordal method and bi-Akima method). The measuring system consists of a contact 3D scanning probe, a vertical lathe and a commercial computer numerical control (CNC) system, SINUMERIK 840D (Munich, Bayern, Germany), as shown in Figure 5. The proposed algorithm is integrated into the original equipment manufacturer (OEM) application that runs on the host computer of the CNC system. The product model of the contact 3D scanning probe is DIGIT-02 (Dalian, Liaoning Province, China). More detailed technical characteristics of the measuring instrument are shown in Table 1.
The measured part is a half-ellipsoidal surface which is welded together by seven pieces of thin-walled aluminum alloy sheet, as shown in Figure 5d, with a semi-major axis of 1450 mm and semi-minor axis of 950 mm. A rotational progressive scanning mode is adopted, and the layer spacing is 7 mm. Figure 6 shows the spatial distribution of the initial point cloud data. The isochronous sampling method is adopted and the number of initial sampling points is 272,638.
Using the same initial point cloud data set as shown in Figure 6, the comparison of data compression performance is made between the proposed method, chordal method and bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 2 summarizes the results of the data compression performance including the number of points and data compression ratio, where the compression ratio is defined as the ratio between the uncompressed size and compressed size:
\[
\text{Compression Ratio} = \frac{\text{Uncompressed Size}}{\text{Compressed Size}} = \frac{\text{Number of Initial Points}}{\text{Number of Compressed Points}}.
\tag{17}
\]
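Equation (17) amounts to a single division over point counts; an illustrative helper:

```python
def compression_ratio(n_initial, n_compressed):
    """Compression ratio of Equation (17): initial point count divided
    by compressed point count (larger means stronger compression)."""
    if n_compressed <= 0:
        raise ValueError("compressed point count must be positive")
    return n_initial / n_compressed
```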
Obviously, the proposed method has a higher data compression ratio than the chordal method and bi-Akima method, and the chordal method obtains the lowest data compression ratio under the same required accuracy. The number of data points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy.
Figure 7 compares the compression ratios of the three methods under the different required accuracies. As the accuracy requirement is relaxed, the compression ratio increases for all methods; however, at every level of required accuracy, the proposed method achieves a higher compression ratio than the other two methods. Obviously, the chordal method has the lowest data compression ratio. Therefore, we focus on comparing our proposed method with the bi-Akima method in the subsequent experiments.
To make the comparison more intuitive, Figure 8 visually illustrates the difference between the proposed method and the bi-Akima method by displaying the spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distributions compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distributions after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated; these subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between the two methods under the same required accuracy. Take subfigures g–i, for example: with the bi-Akima method, there are many curves running roughly along the welded region (Figure 8g), because the bi-Akima method can only deal with the point set in the current scanning line, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 8h), the data redundancy between the adjacent scanning layers is eliminated, and the final compressed point cloud data is obtained (Figure 8i).
To verify the accuracy of the proposed algorithm, Figure 9 analyzes the spatial distribution of deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies. As can be seen, all the deviations are within the allowable range of required accuracy. Our method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and interpolation curve is less than or equal to the required accuracy). In addition, deviations are far lower than the required accuracy in most of the measured area. In Figure 9d, there is an interesting and noteworthy phenomenon: the upper right sector has a higher deviation. As mentioned earlier, the measured part is a large thin-walled surface which is welded together by seven pieces of aluminum alloy sheet (Figure 5d). The aluminum alloy sheet has a thickness of only 0.8 mm, but its size is very large (the semi-major axis of the ellipse is 1450 mm). The part has undergone great deformation after welding. There is a large and random deviation between each welded part and the original design size. According to past experience, the maximum deviation in a local section can even reach 3 mm. Consequently, we infer that the upper right sector has a higher deviation because of deformation in this area. In the case where the required accuracy is on the order of millimeters (e.g., required accuracy ε = 1   mm in Figure 9d), the compressed point cloud data is very sparse. Therefore, this phenomenon is formed in a region where the point cloud density is low and the local deformation is large. However, in any case, the proposed method can tightly control the deviation within the preset range.

3.2. Test B

The overall structure of the model in Test A is relatively simple. In order to further verify the universality and adaptability of the proposed method, we chose a more complex surface model with a large number of details, edges and sharp features for experimentation. As shown in Figure 10, the tested model is a piece of jewelry, which is inlaid with 30 diamonds of different sizes.
Figure 11 shows the initial point cloud data acquisition result. The progressive scanning mode and equidistant sampling mode were adopted. Scanning lines are along the X-direction (horizontal direction). The distance between two adjacent scanning layers is 0.1 mm, and the distance between adjacent points is 0.05 mm in each scanning layer. The initial point number is 63,376.
The comparison is made between the proposed method and the bi-Akima method under different required accuracies (i.e., from 0.001 mm to 1 mm). Table 3 gives the results of data compression performance, including the number of points and the data compression ratio. Obviously, the proposed method has a higher data compression ratio than the bi-Akima method. The number of points obtained by the proposed method is about half of that obtained by the bi-Akima method under the same required accuracy.
Figure 12 compares the compression ratios of the two methods under different required accuracies. As the accuracy requirement is relaxed, the compression ratio increases for both methods; however, at every level of required accuracy, the proposed method achieves a higher compression ratio than the bi-Akima method.
Figure 13 visually illustrates the difference between the proposed method and bi-Akima method by displaying spatial distributions of the compressed point sets under different required accuracies. Subfigures a, d, g and j show the point cloud distribution compressed by the bi-Akima method, while subfigures b, e, h and k give the point cloud distribution after data redundancy identification by the proposed method, with the identified redundant points marked in red. In subfigures c, f, i and l, the identified redundant points are eliminated. These subfigures show the distributions of the final compressed point cloud data. By contrast, we can clearly observe the difference in point cloud density between these two methods under the same required accuracy. Take subfigures j, k and l, for example: when using the bi-Akima method, we can observe that there are many curves roughly along the vertical direction (Figure 13j). This is because the bi-Akima method can only deal with the point set in the current single scanning line, which is along the horizontal direction, and the data redundancy outside the current scanning line cannot be eliminated. With the involvement of our proposed method, redundant data points are identified and marked in red (Figure 13k), the data redundancy in adjacent scanning layers is eliminated and the final compressed point cloud data is obtained (Figure 13l).
In order to verify the accuracy of the proposed algorithm, Figure 14 analyzes the spatial distribution of deviation between each initial sampled point and the interpolated surface obtained from the final compressed point cloud data under different required accuracies. As can be seen, all the deviations are within the allowable range of required accuracy, which proves that the proposed method can tightly control the deviation within the error tolerance range (i.e., the deviation between each initial sampled point and interpolation curve is less than or equal to the required accuracy). In addition, deviations are far less than the required accuracy in most of the measured area.

4. Discussion

The experimental results in Section 3 indicate that the proposed on-line point cloud data compression algorithm for free-form surface scanning measurement has the following features:
  • It compresses the point cloud further and achieves a higher data compression ratio than the existing methods under the same required accuracy; its compression performance is clearly superior to the bi-Akima and chordal methods;
  • It is capable of tightly controlling the deviation within the error tolerance range, and deviations in most of the measured area are far smaller than the required accuracy;
  • Test A preliminarily verifies the application feasibility of the proposed method in an industrial environment. Test B demonstrates that the method is equally effective for complex surfaces with a large number of details, edges and sharp features, and that its performance is stable;
  • The proposed method has the potential to replace traditional on-line point cloud data compression methods (the bi-Akima and chordal methods) in industrial environments. Potential applications lie in the real-time measurement processes of scanning devices such as contact scanning probes, laser triangulation displacement sensors, mobile laser scanners, linear structured light systems, industrial CT systems, etc. The application feasibility of the method needs to be further confirmed in subsequent case studies.
However, the proposed method still has the following limitations, which need to be addressed in future work:
  • It can only handle 3D point cloud data streams and is not suitable for point clouds carrying additional high-dimensional information (e.g., 3D point clouds with grayscale or color attributes). We will address this in future research;
  • It can only compress point cloud streams scanned layer by layer. If the 3D point cloud is randomly sampled and there are no regular scan lines (e.g., 3D measurement with speckle structured light), the method cannot perform effective data compression. Solving this problem remains a major challenge.

5. Conclusions

To effectively compress the dense point cloud data obtained from a 3D free-form surface during the real-time scanning measuring process, this paper presents a novel on-line point cloud data compression algorithm that identifies and eliminates the data redundancy caused by geometric feature similarity between adjacent scanning layers. First, the algorithm adopts the bi-Akima method to compress the initial point cloud data obtained by 3D scanning measuring devices. Next, the data redundancy remaining in the compressed point cloud is identified and eliminated, yielding the final compressed point cloud data. Finally, the proposed algorithm was tested in a real-time scanning measuring process and compared with existing methods (the chordal method and the bi-Akima method). The experimental results preliminarily verify the application feasibility of the proposed method in an industrial environment and show that it obtains high-quality compressed point cloud data with a higher compression ratio than the other existing methods. In particular, it tightly controls the deviation within the error tolerance range, which demonstrates the superior performance of the proposed algorithm. The algorithm could be used in the data acquisition process of 3D free-form surface scanning measurement in place of other existing on-line point cloud data compression/reduction methods.
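For reference, the chordal method used as a comparison baseline above admits a very compact sketch: a point is kept only when it deviates from the chord joining the last kept point to the next sampled point by more than the required accuracy ε. This is an illustrative reading of the chordal-deviation criterion, not the exact implementation used in the experiments:

```python
def chordal_compress(line, eps):
    """Compress one scan line [(x, z), ...]: keep an interior point only
    when its perpendicular distance to the chord from the last kept point
    to the next sampled point exceeds eps. Endpoints are always kept."""
    kept = [line[0]]
    for i in range(1, len(line) - 1):
        (x0, z0), (x1, z1), (x2, z2) = kept[-1], line[i], line[i + 1]
        # perpendicular distance from (x1, z1) to the chord (x0, z0)-(x2, z2)
        num = abs((z2 - z0) * x1 - (x2 - x0) * z1 + x2 * z0 - z2 * x0)
        den = ((x2 - x0) ** 2 + (z2 - z0) ** 2) ** 0.5
        if num > eps * den:
            kept.append(line[i])
    kept.append(line[-1])
    return kept
```

A straight segment collapses to its two endpoints, while a sharp feature such as the spike at (3, 1) in `[(0, 0), (1, 0), (2, 0), (3, 1), (4, 0)]` survives compression.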

Author Contributions

All authors contributed to this work. Conceptualization, Y.L. and Y.T.; methodology, Z.H.; software, Y.T.; validation, Y.M. and Z.H.; formal analysis, Y.T.; investigation, Y.M.; resources, Y.M.; data curation, Y.M.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T.; visualization, Z.H.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 51505310, 51435011), the Key Research and Development Program of Sichuan Province of China (Grant No. 2018GZ0282) and the Key Laboratory for Precision and Non-traditional Machining of Ministry of Education, Dalian University of Technology (Grant Nos. JMTZ201802, B201802).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The overall process of on-line data compression for 3D free-form surface scanning measurement.
Figure 2. Layer-by-layer scanning path for 3D free-form surface scanning measurement.
Figure 3. The flow chart of point cloud data redundancy identification and elimination algorithm.
Figure 4. The schematic diagram of data redundancy identification.
Figure 5. The measuring system and measured 3D free-form surface: (a) vertical lathes; (b) computer numerical control (CNC) system; (c) scanning probe; (d) half-ellipsoidal measured part.
Figure 6. Spatial distribution of initial point cloud data.
Figure 7. Data compression ratios under different required accuracies.
Figure 8. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.
Figure 9. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.
Figure 10. The tested complex surface model: jewelry.
Figure 11. Spatial distribution of initial point cloud data.
Figure 12. Data compression ratios under different required accuracies.
Figure 13. Spatial distributions of compressed point cloud data under different required accuracies ε: (a) bi-Akima compression, ε = 0.001 mm; (b) redundancy identification, ε = 0.001 mm; (c) redundancy elimination, ε = 0.001 mm; (d) bi-Akima compression, ε = 0.01 mm; (e) redundancy identification, ε = 0.01 mm; (f) redundancy elimination, ε = 0.01 mm; (g) bi-Akima compression, ε = 0.1 mm; (h) redundancy identification, ε = 0.1 mm; (i) redundancy elimination, ε = 0.1 mm; (j) bi-Akima compression, ε = 1 mm; (k) redundancy identification, ε = 1 mm; (l) redundancy elimination, ε = 1 mm.
Figure 14. Spatial distributions of deviation under different required accuracies ε: (a) ε = 0.001 mm; (b) ε = 0.01 mm; (c) ε = 0.1 mm; (d) ε = 1 mm.
Table 1. Detailed technical characteristics of the measuring system.

Technical Characteristic | Value
Scope of X axis | 2400 mm
Positioning accuracy of X axis | 0.019 mm/1000 mm
Repeatability of X axis | 0.016 mm/1000 mm
Scope of Z axis | 1200 mm
Positioning accuracy of Z axis | 0.010 mm/1000 mm
Repeatability of Z axis | 0.003 mm/1000 mm
Positioning accuracy of C axis | 6.05″
Repeatability of C axis | 2.22″
Measuring range of scanning probe | ±1 mm
Accuracy of scanning probe | ±8 μm
Repeatability of scanning probe | ±4 μm
Stylus length of probe | 100 mm/150 mm/200 mm
Contact force (with stylus of 200 mm) | 1.6 N/mm
Weight of scanning probe | 1.8 kg
Table 2. Compression performance under different required accuracies.

Required Accuracy (mm) | Number of Points (Chordal / Bi-Akima / Proposed) | Compression Ratio (Chordal / Bi-Akima / Proposed)
0.001 | 237,363 / 122,929 / 67,448 | 1.15 / 2.22 / 4.04
0.002 | 189,824 / 120,952 / 67,121 | 1.44 / 2.25 / 4.06
0.005 | 152,674 / 110,175 / 63,813 | 1.79 / 2.47 / 4.27
0.01 | 136,027 / 93,588 / 51,062 | 2.00 / 2.91 / 5.34
0.02 | 123,891 / 71,629 / 41,862 | 2.20 / 3.81 / 6.51
0.05 | 103,205 / 44,072 / 28,837 | 2.64 / 6.19 / 9.45
0.1 | 87,008 / 27,894 / 15,974 | 3.13 / 9.77 / 17.07
0.2 | 61,124 / 12,191 / 7102 | 4.46 / 22.36 / 38.39
0.5 | 28,473 / 5594 / 3140 | 9.58 / 48.74 / 86.83
1 | 9029 / 3969 / 2217 | 30.20 / 68.69 / 122.99
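The compression ratio reported in Table 2 is the initial point count divided by the compressed point count. The initial count itself is not listed in this excerpt, but it is implied (up to rounding of the ratios) by every row; a quick consistency check on the first and last rows of Table 2, with the counts and ratios copied from the table:

```python
# (compressed point count, reported compression ratio), from Table 2
rows = [
    (237363, 1.15), (122929, 2.22), (67448, 4.04),  # required accuracy 0.001 mm
    (9029, 30.20), (3969, 68.69), (2217, 122.99),   # required accuracy 1 mm
]

# Each pair implies the same initial cloud size N0 = count * ratio.
implied = [n * r for n, r in rows]
spread = (max(implied) - min(implied)) / min(implied)
# every pair implies N0 around 2.73e5; the spread stays well under 1%
```

The agreement across rows (spread below one percent, attributable to the two-decimal rounding of the ratios) confirms that all three methods were run on the same initial point cloud.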
Table 3. Compression performance under different required accuracies.

Required Accuracy (mm) | Number of Points (Bi-Akima / Proposed) | Compression Ratio (Bi-Akima / Proposed)
0.001 | 18,906 / 8516 | 3.35 / 7.44
0.002 | 16,857 / 7609 | 3.76 / 8.33
0.005 | 14,323 / 6563 | 4.42 / 9.66
0.01 | 12,432 / 5743 | 5.10 / 11.04
0.02 | 10,720 / 5007 | 5.91 / 12.66
0.05 | 8767 / 4232 | 7.23 / 14.98
0.1 | 7190 / 3535 | 8.81 / 17.93
0.2 | 5892 / 2974 | 10.76 / 21.31
0.5 | 4625 / 2412 | 13.70 / 26.28
1 | 4204 / 2213 | 15.08 / 28.64
