Innovative Methodology for Multi-view Point Cloud Registration in Robotic 3D Object Scanning and Reconstruction

This paper addresses the problem of multi-view three-dimensional (3D) point cloud registration. A novel global registration method is proposed to accurately register two series of scans into a single object model for 3D imaging digitization, using a proposed oriented bounding box (OBB) regional area-based descriptor. Robotic 3D scanning is now widely employed to generate complete point clouds of physical objects through multi-view scanning and data registration. The automated operation must successively digitize view-dependent area-scanned point clouds of complex-shaped objects while simultaneously determining the next best robot pose and performing multi-view point cloud registration. To achieve this, the OBB regional area-based descriptor is used to determine an initial transformation matrix, which is then refined with the iterative closest point (ICP) algorithm. This resolves the commonly encountered difficulty of accurately merging two neighboring area scans when no coordinate reference exists. The developed method has been verified through experimental tests of its registration accuracy, and the results preliminarily demonstrate its feasibility and applicability.


Introduction
Recently, automated three-dimensional (3D) object digitization, known under various terms such as 3D scanning, 3D digitizing or 3D reconstruction, has been widely applied in many fields, such as 3D printing, reverse engineering, rapid prototyping and medical prosthetics. According to the sensing principle employed, current solutions can generally be classified into two main categories: hand-guided and automated scanning techniques. Hand-guided scanning allows arbitrary shapes to be acquired [1,2]. However, its effectiveness depends highly on the skill of the user, and the scanning process is generally time-consuming. In contrast, automated scanning using turntable-based 3D scanners is faster, less expensive, more automated, and easier to use. However, it is still unable to reconstruct objects with arbitrary or complex geometry [3,4]. To enhance efficiency, six-axis robot arms integrated with 3D imaging scanners have recently emerged as a development trend for 3D surface scanning of objects with arbitrary or complex geometry [5-7]. Both Callieri [5] and Larsson [6] presented systems for automated 3D modeling consisting of a 3D laser scanner, an industrial six-axis robot and a turntable. The advantage of these systems is that they automatically obtain the shape of the object from an initial scan and, where possible, rescan the object with the scanner orientation that gives the most accurate result. Meanwhile, an autonomous 3D modeling system consisting of an industrial robot and a laser striper for efficient surface reconstruction of unknown objects was proposed by Kriegel [7]. That system iteratively searches for possible scan paths in the local surface model and selects a next-best-scan (NBS) in its volumetric model. Data reconstruction in the robotic scanning systems reviewed above is mainly performed in two steps: data registration and data merging. First, an initial
transformation matrix between two neighboring area scans is traditionally determined using the nominal scanning position of the robot arm or turntable as a starting reference, although this is not always accurate due to unavoidable robot positioning errors. The matrix then serves as a reasonable initial estimate for further refinement by the Iterative Closest Point (ICP) algorithm [8]. However, due to the lack of a global registration between successively scanned data, existing scanning techniques still fail to robustly achieve accurate object digitization with 100% surface coverage. Figure 1 shows two cases that are missing parts of the object surface after automated object digitization using robot scanning.
Appl. Sci. 2016, 6, 132
The above issue is mainly caused by the fact that a 3D sensor cannot measure point data from the part of the scanned object surface that is in contact with its supporting ground. To resolve the issue, the object can be reoriented manually or automatically using a robot grabber. Nevertheless, changing the orientation of the object by either method definitively loses the initial pose estimate between before and after the orientation change. To estimate the pose transformation, some existing algorithms, such as local descriptor-based coarse alignment [9-11], use local-regional surface characteristics. In the general case, the initial pose estimate is computed by minimizing the geometric distances between corresponding 3D points in different views. The most common correspondences are points, curves and surfaces. The point signature [9] is a point descriptor for searching correspondences, which describes the structural neighborhood of a 3D point by the distances between neighboring points and their corresponding contour points. Johnson and Hebert proposed the spin image [10], which creates an oriented point at a surface vertex as a two-dimensional (2D) histogram counting the points in a surrounding support volume. More recently, Tombari et al. [11] presented a 3D descriptor called the Signature of Histograms of OrienTations (SHOT) that encodes information about the surface within a spherical support structure. These methods generally work well for objects with unique local geometric characteristics. However, they can easily suffer from reduced discrimination or sensitivity when dealing with object geometries having symmetrical or repeating surface contours. Therefore, completing a scanned model with 100% surface coverage remains one of the most challenging problems in current 3D object digitization.
Thus, accurately registering two series of scanned point clouds into a reconstructed object model without an established coordinate reference has proven to be a difficult process [8-12]. To overcome this difficulty, an automated robot scanning system integrated with an NTU-developed optical 3D measuring probe (NTU denotes National Taiwan University) is developed around a global registration method that accurately registers two series of scans into an object model for 3D imaging digitization using the proposed oriented bounding box (OBB) regional area-based descriptor. The hardware and basic system configuration, as well as data processing algorithms such as preprocessing, overlapping detection and robot path planning, have been described in [12].

Principle Concept and Flow Chart Diagram of the Global Registration Method
To find the transformation between two series of scans, each acquired from multiple single scans, the OBB regional area-based descriptors of the source point cloud are matched with the descriptors of the target point cloud. Figure 2 illustrates the concept of the proposed registration method based on the OBB regional area-based descriptor for registering two series of scans, represented by the red (target) and green (source) 3D point clouds obtained from the optical probe. Each of the proposed descriptors includes two critical components. The first component contains the information about the OBB of the point cloud. Each OBB is represented by a datum corner, C(x_C, y_C, z_C), and three vectors, CC_1(x_max, y_max, z_max), CC_2(x_mid, y_mid, z_mid), CC_3(x_min, y_min, z_min), corresponding to the maximum, middle, and minimum dimensions of the OBB, respectively. The second component represents the spatial distribution of the surface area of the object in the OBB. The similarity between the regional area-based descriptor of the source point cloud and that of the target is then determined using the normalized cross-correlation.

To make the operating procedure of the proposed method clear, the flow chart in Figure 3, together with Algorithm 1, describes the method in its five main steps. In step 1, the boundary points located near the scanning point missing zone are detected through the geometric relationship with the support plane, using a set of connected edges that satisfy two conditions: all the boundary edges in the point missing zone have a similar orientation, and each edge lacks an assigned triangle on either its left or its right side [7]. Then, the oriented bounding box (OBB), a rectangular bounding box that covers all of the object point cloud, is determined in step 2.
In the next step, the descriptors of the point clouds are generated. Then, the matching process between the descriptors is performed, yielding the initial transformation that can be used for further refined registration using ICP.

Algorithm 1
Input: Two series of scans P and Q
Output: Transformation matrix
1 Estimate the boundary of the missing points and enclose the missing regions
2 Determine the individual OBBs of the two series of scans
3 Generate the descriptors of P and Q
3.1 Calculate the OBB regional area-based descriptor of P
3.2 Translate the OBB of P to Q using the translation defined by the centers of gravity of P (with its estimated missing points) and Q (with its estimated missing points)
3.3 Rotate Q around the three axes of the coordinate system defined by the center of gravity of Q as the origin and the three directions of the OBB of point cloud P as the axes, with increment ∆θ on each axis, generating the OBB regional area-based descriptors of Q
4 Match the descriptors between P and every rotated Q with increment ∆θ from Step 3.3
5 Determine the best match and obtain the initial transformation matrix

Estimation of Missing Regions and Initial Translation
Assuming that the two series of scans to be registered are represented as two sets of unorganized points P = {p_1, p_2, . . ., p_k} and Q = {q_1, q_2, . . ., q_k}, the initial transformation, which represents the coarse alignment between the two series, can be estimated as follows.
If P and Q represent the same free-form shape, the least squares method can be used to calculate the initial rotation and translation by minimizing the sum of the squares of the alignment errors. Let E be the sum of the squares of the alignment errors:

$$E = \sum_{i=1}^{k} \left\| p_i - R\,q_i - t \right\|^2$$

where R is the rotation matrix, t is the translation vector, and (p_i, q_i) are the corresponding point pairs between the P and Q point clouds.
Let $\bar{p}$ be the centroid of the set of corresponding points in P and $\bar{q}$ be the centroid of the set of corresponding points in Q:

$$\bar{p} = \frac{1}{k}\sum_{i=1}^{k} p_i, \qquad \bar{q} = \frac{1}{k}\sum_{i=1}^{k} q_i$$

The new (centered) coordinates of the points are denoted by

$$p_i' = p_i - \bar{p}, \qquad q_i' = q_i - \bar{q}$$

Note that

$$\sum_{i=1}^{k} p_i' = 0, \qquad \sum_{i=1}^{k} q_i' = 0$$

The registration error term is then expressed and rewritten as

$$p_i - R\,q_i - t = p_i' - R\,q_i' - t_1, \qquad \text{where } t_1 = t - \bar{p} + R\,\bar{q}$$

The sum of the squares of the errors becomes

$$E = \sum_{i=1}^{k} \left\| p_i' - R\,q_i' \right\|^2 + k\left\| t_1 \right\|^2 \tag{9}$$

The first term in Equation (9) does not depend on $t_1$. Therefore, the total error is minimized if $t_1 = 0$, and then

$$t = \bar{p} - R\,\bar{q} \tag{10}$$

Therefore, the initial translation matrix can be computed based on the difference between the centroid of P and the centroid of Q.
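The derivation above decouples t from R; the remaining rotation has the classical closed-form SVD solution (the Kabsch/Arun method). A minimal NumPy sketch for known point correspondences follows — illustrative only, not the paper's C++ implementation:

```python
import numpy as np

def align_least_squares(P, Q):
    """Least-squares rigid alignment of corresponding point pairs.

    Finds R, t minimizing sum ||p_i - R q_i - t||^2, following the
    centroid derivation above: decouple t via Equation (10), then
    solve for R with an SVD. P, Q: (k, 3) arrays of corresponding points.
    """
    p_bar = P.mean(axis=0)
    q_bar = Q.mean(axis=0)
    Pc = P - p_bar                      # centered coordinates p_i'
    Qc = Q - q_bar                      # centered coordinates q_i'
    H = Qc.T @ Pc                       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # optimal rotation
    t = p_bar - R @ q_bar               # Equation (10)
    return R, t
```

The last line is exactly Equation (10): the translation is the difference between the centroid of P and the rotated centroid of Q.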
In addition, the weight loss at the missing parts of the two scan series may result in a significant bias between the translation estimated from the difference between the centroid of P and the centroid of Q and the real translation. The shifting of the two gravity centers (the red and green points shown in Figure 4), created using a virtual camera from a model whose centroid is the white point in Figure 4a, illustrates the influence of the missing points on the centroid position. Assume that the two point sets are separated as shown in Figure 4b. The accurate translation to merge set Q (green point cloud) with set P (red point cloud) is defined through the difference between the current centroid of P and the original centroid of the green point cloud. Therefore, it is necessary to estimate the original centroid on the object for translation. The boundary points located near the missing part can be detected through the geometric relationship with the support plane, using a set of connected edges that satisfy two requirements: all edges have a similar orientation, and each edge lacks an assigned triangle on either its left or its right side [7] (as shown in Figure 4c). Then, the Random Sample Consensus (RANSAC) algorithm [13] is used to fit a mathematical model to the boundary and enclose the missing region by adding extra points. Therefore, the initial translation matrix can be computed based on the difference between the centroid of enclosed P and the centroid of enclosed Q. From now on, P and Q refer to enclosed P and enclosed Q, respectively.
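The boundary-model fitting step can be sketched with a generic RANSAC loop. The plane model below is an assumed stand-in (the support-plane case); the paper fits a model to the boundary points without specifying its form here:

```python
import numpy as np

def ransac_plane(points, n_iter=200, thresh=0.01, seed=0):
    """RANSAC sketch for fitting a model to boundary points (a plane
    here, as an assumed example). The fitted model can then be sampled
    to add the extra points that enclose the missing region.
    Returns (unit normal, offset) and the inlier mask.
    """
    rng = np.random.default_rng(seed)
    best = None
    best_inliers = None
    for _ in range(n_iter):
        # minimal sample: three points define a candidate plane
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        nn = np.linalg.norm(n)
        if nn < 1e-12:
            continue                    # degenerate (collinear) sample
        n = n / nn
        dist = np.abs((points - a) @ n) # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (n, a @ n), inliers
    return best, best_inliers
```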

Determination of Oriented Bounding Box (OBB)
The oriented bounding box is a rectangular bounding box that covers the whole object point cloud. Each OBB is represented by a corner, C(x_C, y_C, z_C), and three vectors, CC_1(x_max, y_max, z_max), CC_2(x_mid, y_mid, z_mid), CC_3(x_min, y_min, z_min), corresponding to the maximum, middle, and minimum dimensions of the OBB, respectively (as shown in Figure 5). The orientation of the OBB can be determined using the covariance-based method proposed by Gottschalk (1996) [14]. The algorithm starts by obtaining the centroid $\bar{p}(\bar{x}, \bar{y}, \bar{z})$ of the object point cloud:

$$\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i$$

where n is the number of points in the object point cloud. The covariance matrix COV of the input point cloud is then computed as follows:

$$\mathrm{COV} = \frac{1}{n}\sum_{i=1}^{n}\left(p_i - \bar{p}\right)\left(p_i - \bar{p}\right)^{T}$$

This matrix has three real eigenvalues, $\lambda_1 \geq \lambda_2 \geq \lambda_3$, and three normalized eigenvectors, $v_1(v_{11}, v_{12}, v_{13})$, $v_2(v_{21}, v_{22}, v_{23})$, and $v_3(v_{31}, v_{32}, v_{33})$, respectively. These eigenvectors are taken as the three directions of the OBB. By projecting all points in the object point cloud onto the eigenvectors, the three dimensions of the object can be determined as the distance between the nearest and farthest projected points along each eigenvector. Considering the coordinate system with the origin $\bar{p}$ and the three unit vectors $v_1$, $v_2$, and $v_3$ (as shown in Figure 6), the coordinates of a point $p_i(x_i, y_i, z_i)$ in this coordinate system are determined by the following equation:

$$\begin{bmatrix} x_i' \\ y_i' \\ z_i' \end{bmatrix} = T\begin{bmatrix} x_i - \bar{x} \\ y_i - \bar{y} \\ z_i - \bar{z} \end{bmatrix}, \qquad T = \begin{bmatrix} v_1\cdot e_1 & v_1\cdot e_2 & v_1\cdot e_3 \\ v_2\cdot e_1 & v_2\cdot e_2 & v_2\cdot e_3 \\ v_3\cdot e_1 & v_3\cdot e_2 & v_3\cdot e_3 \end{bmatrix}$$

where $e_1$, $e_2$, and $e_3$ are the three unit vectors of the original coordinate system. Note that T is an orthogonal matrix, so $T^{-1} = T^{T}$. The parameters of the OBB, namely the datum corner C and the three edge vectors CC_1, CC_2, CC_3, can then be determined from the minimum and maximum projected coordinates along each of the three directions.
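The covariance-based OBB construction can be sketched in NumPy as follows (an illustrative re-implementation of the steps above, not Gottschalk's original code; eigenvectors are ordered so that v_1 corresponds to the largest eigenvalue):

```python
import numpy as np

def oriented_bounding_box(points):
    """Covariance-based OBB in the spirit of Gottschalk's method.

    Returns the datum corner C and the three edge vectors CC1, CC2, CC3
    (longest direction first), derived from the eigenvectors of the
    point covariance matrix. points: (n, 3) array.
    """
    p_bar = points.mean(axis=0)
    cov = np.cov(points.T, bias=True)         # 3x3 covariance (1/n form)
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    axes = eigvecs[:, ::-1].T                 # rows v1, v2, v3 (λ1 ≥ λ2 ≥ λ3)
    proj = (points - p_bar) @ axes.T          # coordinates in the OBB frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corner = p_bar + lo @ axes                # datum corner C
    edges = [(hi[d] - lo[d]) * axes[d] for d in range(3)]  # CC1, CC2, CC3
    return corner, edges
```

Because T (the matrix whose rows are v_1, v_2, v_3) is orthogonal, the frame change is a single matrix product, as in the equation above.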

Generating Descriptors of Point Cloud

The OBB Regional Area-Based Descriptor
Each proposed feature descriptor includes two components [15]. The first component contains the information about the OBB of the point cloud. Each OBB is represented by a corner, C(x_C, y_C, z_C), and three vectors, CC_1(x_max, y_max, z_max), CC_2(x_mid, y_mid, z_mid), CC_3(x_min, y_min, z_min), corresponding to the maximum, middle, and minimum dimensions of the OBB, respectively. The second component represents the distribution of the surface area of the object in the OBB. We assume that the three directions of the OBB are divided into k_1, k_2, and k_3 segments (as shown in Figure 7a); the total number of subdivided boxes in the OBB is n = k_1 × k_2 × k_3. The surface area of the object in each subdivided box V_ijk (i = 0, ..., k_1 − 1; j = 0, ..., k_2 − 1; k = 0, ..., k_3 − 1) is denoted as f_v, where v = k(k_1 k_2) + j(k_1) + i; f_v can be determined by subdividing the considered surface into a triangle mesh [16] (as shown in Figure 7c). The distribution of f_v is defined as the regional area-based descriptor, shown in Figure 7d. Thus, the feature descriptor FV can be described as follows:

$$FV = \{C;\ CC_1, CC_2, CC_3;\ f_v,\ v = 0, \ldots, n_V\}$$

where $n_V = k_1 k_2 k_3 - 1$.
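A simplified sketch of computing f_v from a triangle mesh: each triangle's area is accumulated into the subdivided box containing its centroid — an assumed simplification of the exact per-box area computation of [16]:

```python
import numpy as np

def regional_area_descriptor(vertices, triangles, corner, axes, lengths,
                             k1, k2, k3):
    """Sketch of the regional area-based descriptor f_v.

    Subdivides the OBB into k1*k2*k3 boxes and accumulates mesh surface
    area into f_v with v = k*(k1*k2) + j*k1 + i. Each triangle is
    assigned to the box containing its centroid (an assumed
    simplification; exact clipping would split triangles across boxes).
    axes: (3, 3) rows = OBB directions; lengths: OBB extents.
    """
    f = np.zeros(k1 * k2 * k3)
    for tri in triangles:
        a, b, c = vertices[tri]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        centroid = (a + b + c) / 3.0
        u = (centroid - corner) @ axes.T      # OBB-frame coordinates
        i = min(int(u[0] / lengths[0] * k1), k1 - 1)
        j = min(int(u[1] / lengths[1] * k2), k2 - 1)
        k = min(int(u[2] / lengths[2] * k3), k3 - 1)
        f[k * (k1 * k2) + j * k1 + i] += area
    return f
```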

Generating Database and Calculating Descriptors
Firstly, the regional area-based descriptor of point cloud P is computed as FV_P = {C_P; C_P C_1P, C_P C_2P, C_P C_3P, f_Pv, v = 0, . . ., n} in the OBB of P. The OBB of P is then shifted to Q using the translation defined by the centers of gravity of P and Q. We consider the coordinate system O_Q x_q y_q z_q whose origin O_Q is the center of gravity of Q and whose three unit vectors v_1, v_2, and v_3 are the three directions of the OBB of point cloud P. Q is rotated around the three axes of this coordinate system with increment ∆θ on each axis. For each rotation, the regional area-based descriptor of Q in the OBB of P, FV_Qi = {C_Qi; C_Qi C_1Qi, C_Qi C_2Qi, C_Qi C_3Qi, f_Qv, v = 0, . . ., n}, is determined as described in Algorithm 2. To enhance accuracy, the descriptors FV_P and FV_Qi can ignore the subdivided boxes in which the missing parts of point cloud P are located.

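Generating the database of rotated copies of Q amounts to enumerating angle triples with step ∆θ about the three axes. A brute-force sketch (the canonical axes stand in for the OBB directions of P centered at the gravity center of Q):

```python
import numpy as np

def rotation_samples(dtheta_deg):
    """Enumerate candidate rotations with step Δθ on each axis,
    a brute-force sketch of the database generation in Step 3.3."""
    Rs = []
    angles = np.deg2rad(np.arange(0, 360, dtheta_deg))
    for ax in angles:
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(ax), -np.sin(ax)],
                       [0, np.sin(ax), np.cos(ax)]])
        for ay in angles:
            Ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                           [0, 1, 0],
                           [-np.sin(ay), 0, np.cos(ay)]])
            for az in angles:
                Rz = np.array([[np.cos(az), -np.sin(az), 0],
                               [np.sin(az), np.cos(az), 0],
                               [0, 0, 1]])
                Rs.append(Rz @ Ry @ Rx)   # one candidate pose of Q
    return Rs
```

The cubic growth in the number of candidates, (360/∆θ)³, is what makes ∆θ a direct trade-off between matching accuracy and computation time, as discussed in the case study.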

Matching Descriptors
In the first step of the matching process, the OBB parameters of point cloud P are compared with the OBB parameters of point cloud Q_i. If the OBB matching satisfies the given condition, the regional area-based descriptor of point cloud P is then compared with the regional area-based descriptor of Q_i. The two OBBs should satisfy the following condition, where α_thresh is the given adequate threshold.
The similarity between the regional area-based descriptors that represent the object point clouds P and Q is determined using the normalized cross-correlation. The normalized cross-correlation between F_P = {f_Pv, v = 0, . . ., n} and F_Q = {f_Qv, v = 0, . . ., n} is computed as follows:

$$C\left(F_P, F_Q\right) = \frac{\sum_{v=0}^{n}\left(f_{Pv} - \bar{f}_P\right)\left(f_{Qv} - \bar{f}_Q\right)}{\sqrt{\sum_{v=0}^{n}\left(f_{Pv} - \bar{f}_P\right)^2 \sum_{v=0}^{n}\left(f_{Qv} - \bar{f}_Q\right)^2}} \tag{20}$$

where $\bar{f}_P$ and $\bar{f}_Q$ are the mean values of F_P and F_Q. The best match is the one for which the normalized cross-correlation $C(F_P, F_Q)$ in Equation (20) reaches its peak.
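The normalized cross-correlation between two descriptors reduces to a short computation; it is invariant to positive scaling and offset of the area values:

```python
import numpy as np

def ncc(fP, fQ):
    """Normalized cross-correlation between two regional area-based
    descriptors, as in Equation (20); it peaks at 1 for a perfect
    match and is bounded in [-1, 1]."""
    fP = np.asarray(fP, float) - np.mean(fP)   # subtract mean of F_P
    fQ = np.asarray(fQ, float) - np.mean(fQ)   # subtract mean of F_Q
    return float(fP @ fQ / np.sqrt((fP @ fP) * (fQ @ fQ)))
```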

Transformation Estimation and Refinement
After the best match is found, the corresponding feature vectors FV_P and FV_Q can be determined. Based on these feature vectors, the initial transformation matrix T_initial between the two series of scans can be estimated by aligning the frame {C_Q, v_Q1, v_Q2, v_Q3} representing the first series of scans to the frame {C_P, v_Qbest1, v_Qbest2, v_Qbest3} representing the second series. Although the parameters of the transformation matrix are obtained in this initial registration, the accuracy may not be satisfactory because of the limited number of rotation iterations of Q. Fortunately, a refined registration such as the iterative closest point (ICP) algorithm can then be performed to achieve precise registration between the multi-view point clouds.
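A minimal point-to-point ICP refinement sketch with brute-force nearest neighbors (production implementations such as PCL's IterativeClosestPoint use k-d trees and convergence tests); it assumes the coarse OBB-based alignment has already been applied:

```python
import numpy as np

def icp(P, Q, iters=30):
    """Point-to-point ICP refinement sketch.

    Starting near the solution, repeatedly match each transformed q to
    its nearest p and re-solve the closed-form least-squares R, t
    (Kabsch/Arun). Returns the composite R, t mapping Q onto P.
    """
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Qt = Q @ R.T + t
        # brute-force nearest neighbor of each transformed q in P
        d2 = ((Qt[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        M = P[d2.argmin(axis=1)]
        # closed-form least-squares update for this correspondence set
        mq, mp = Qt.mean(0), M.mean(0)
        H = (Qt - mq).T @ (M - mp)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Rd = Vt.T @ D @ U.T
        td = mp - Rd @ mq
        R, t = Rd @ R, Rd @ t + td      # compose with current estimate
    return R, t
```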

Case Study on Measured Data
To verify the feasibility of the developed methodology, several experiments were performed and evaluated. In the experiments, the point cloud of a hammer head was acquired with the NTU-developed optical 3D measuring probe, which is based on a random speckle pattern projection and triangulation measurement principle [12,15]. The dimensions of the hammer head are approximately 110 × 35.5 × 21 mm³. Firstly, the hammer head was placed on a fixed table, allowing the object to be viewed from several positions around it. However, the whole object surface could not be detected due to optical occlusion. Therefore, after the first series of automated scans to obtain point cloud P, the object was reoriented manually and a second series of automated scans was carried out to acquire point cloud Q.
Figure 8 illustrates the coarse registration process between the two series of scans. The width of the normalized cross-correlation peak depends on the number of iterations and ∆θ, as shown in Figure 9. Meanwhile, the height of the normalized cross-correlation depends on the number of OBB segments k1 × k2 × k3 and ∆θ. The performance of the registration can be estimated by using the distance from each overlapping point q_i in point cloud Q to the fitting control point p_i, which is the projected point of q_i onto the triangle mesh of P. If d_i denotes the distance between a point p_i in P and its closest neighboring point q_i in Q, the mean distance µ and the standard deviation σ, which are used to evaluate the performance of the object registration, are computed as follows: µ = (1/N) Σ_{i=1}^{N} d_i and σ = sqrt((1/N) Σ_{i=1}^{N} (d_i − µ)²), where N is the number of overlapping points. In this experiment, the mean deviation value is 0.0347 mm and the standard deviation value is 0.0235 mm. Figure 10 illustrates further examples achieved by the developed method.
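The mean distance µ and standard deviation σ used here can be computed directly (a minimal sketch; the function name is ours):

```python
import math

def registration_deviation(distances):
    """Mean and standard deviation of point-to-mesh distances d_i.

    distances : list of d_i values, one per overlapping point q_i.
    """
    n = len(distances)
    mu = sum(distances) / n                                       # mean distance
    sigma = math.sqrt(sum((d - mu) ** 2 for d in distances) / n)  # std deviation
    return mu, sigma
```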


Case Study on Synthetic Data
In this section we discuss the influence of the parameters on the performance of the proposed method. In addition, a comparison against several previously published local feature descriptor methods is introduced. The considered approaches are Spin Images (SI), Point Signatures (PS), Point Feature Histograms (PFH), and the Signature of Histograms of OrienTations (SHOT). All methods were implemented in C++ using the open-source Point Cloud Library (PCL). To estimate the computational cost of the registration process, the experiments were run on a computer with a 3.40 GHz Intel Core i7 processor and 8 GB RAM.
In real applications, it is very difficult to separate the different error sources, such as shape measurement error (noise, surface sampling, etc.), correspondence error (occlusion, outliers, etc.), and registration error. In order to evaluate the coarse registration errors, we generated synthetic data provided by the Industrial Technology Research Institute (ITRI) (as shown in Figure 11). The dimensions of the socket model, connector model, cylinder model, and Brazo Control model are 45 × 25 × 25 mm³, 41 × 33 × 25 mm³, 35 × 35 × 35 mm³, and 86 × 53 × 30 mm³, respectively. Then, the point clouds corresponding to different views of the models were extracted with the NTU-developed virtual camera software (Precision Metrology (PM) lab, National Taiwan University, Taipei, Taiwan). The motion between the green and red point clouds is a displacement of 50 mm along each axis and a rotation of π/4 around a fixed axis. Therefore, we have precise knowledge of the motion parameters, the rotation matrix R and the translation vector t, which serve as a reference for error estimation and validation of the coarse registration methods. The measures used to determine the accuracy are the rotation error and the translation error, obtained by comparing the estimated transformation with the expected one.
The results obtained show that sampling does not considerably affect the accuracy of the registration. Thus, low-resolution surfaces can be used in the proposed approach to reduce the computation time. Meanwhile, as shown in Table 1, the errors of the proposed registration algorithm depend strongly on the rotation increment ∆θ (rad). To enhance the accuracy, the rotation increment should be made smaller. However, the increment has a significant impact on the runtime of the proposed coarse registration process. As an example, in the experiment with 1000 points in Table 1, the computation time for ∆θ = 0.01 (rad), at 126.439 (s), was much higher than that for ∆θ = 0.05 (rad), at only 1.063 (s). Another important characteristic of the developed method is the total number of subdivided boxes in the OBB, n = k1 × k2 × k3. From Table 2, it can be seen that if the OBB contains many subdivided boxes, finding the best rotation is more expensive. For a fast registration procedure, fewer subdivided boxes are preferable. However, the number of subdivided boxes must still be large enough to achieve the expected rotation accuracy. For instance, in the experiment using a connector model of 1000 points, the rotation error rose to 0.057 (rad) for an OBB of n = k1 × k2 × k3 = 4 × 5 × 6, while it reached just 0.017 (rad) for an OBB of n = k1 × k2 × k3 = 7 × 8 × 9. Besides, the ratio of the overlapped area is especially crucial. As shown in Table 3, the translation and rotation errors of the proposed approach grow in direct proportion to the percentage of the non-overlapping region. This change is especially significant with over 40% outliers, because it is difficult to estimate the translation exactly without a high level of overlap. Excellent results are obtained for over 80% shape overlap; in this case, the translation and rotation errors for the tested models were under 1.5 (mm) and 0.02 (rad), respectively. As described in the Introduction, the proposed registration method is developed to register two series of surface scans that suffer a certain degree of optical occlusion from each individual viewpoint of the probe scanning. To accurately register two series of scans under such circumstances, it is important to have them overlap as much as possible. Realistically, however, the overlap ratio depends on the complexity of the scanned object's geometry and on how the object is placed on the rotation table. In general, the higher the overlap ratio between two series of scans, the more accurate the registration that can be achieved. Table 4 reveals the performance of the proposed approach and of the 3D keypoint descriptors. It shows that the method developed in this paper achieves superior registration accuracy with reasonable operational efficiency. A significant common problem with feature descriptors is that they usually fail to deal with object geometries that have symmetrical or repeated surface contours. We demonstrate the developed approach on four test cases comprising two regular shapes, a Brazo and a connector, and two featureless models, a cylinder and a socket with 80% shape similarity, provided by the Industrial Technology Research Institute (ITRI) (as shown in Figure 11). Given the reported results, it is clear that the proposed approach performs more accurately than the other methods on all the tested data. The computation efficiency of the proposed method is
approximately half that of the fastest method, the Signature of Histograms of OrienTations (SHOT). However, its time efficiency still falls within a reasonable range. To test the influence of noise, the synthetic experiments were carried out with three different levels of Gaussian noise: σ = 0.01, 0.02, and 0.03 (standard deviation). The coarse registration results on the connector point sets are shown in Table 5. We found that the proposed approach achieves acceptable results under a certain level of noise.
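The synthetic evaluation protocol described in this section (perturbing the data with Gaussian noise of a given σ, then scoring an estimated pose against the known ground-truth R and t) can be sketched as follows. This is our own minimal illustration, not the authors' code; the function names are assumptions:

```python
import math
import random

def add_gaussian_noise(points, sigma, seed=0):
    """Perturb each coordinate with zero-mean Gaussian noise of std sigma."""
    rng = random.Random(seed)
    return [[c + rng.gauss(0.0, sigma) for c in p] for p in points]

def pose_error(R_est, t_est, R_true, t_true):
    """Rotation error (rad) and translation error against a known ground truth."""
    # Angle of the relative rotation R_est^T * R_true, recovered from its trace.
    tr = sum(R_est[k][i] * R_true[k][i] for i in range(3) for k in range(3))
    rot_err = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    # Euclidean distance between the estimated and true translation vectors.
    trans_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_true)))
    return rot_err, trans_err
```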

Conclusions
In this study, a global registration method is developed to overcome one of the most challenging problems remaining in current 3D scanning systems: accurate image registration even when no reference coordinate exists between neighboring scanned surface patches. The key innovation of this work is the strategy for determining a robust registration transformation between two neighboring scans. The experimental results demonstrate the validity and applicability of the proposed approach.

Figure 1 .
Figure 1. Illustration of two series of scans with missing parts of object surfaces. (a) Triangle mesh of series of scans 1; (b) Triangle mesh of series of scans 2.


Figure 2 .
Figure 2. The proposed registration method based on the OBB (oriented bounding box) regional area-based descriptor.


Figure 3 .
Figure 3. The flow chart diagram of the proposed method.

Figure 4 .
Figure 4. Improvement of translation: (a) The influence of the weight loss caused by missing points on the centroid position; (b) Two point sets are separated; (c) Detection of the boundary points located near the missing part.


Figure 5 .
Figure 5. The oriented bounding box of a point cloud.


Figure 6 .
Figure 6. The coordinates of point p_i (green dot) in the absolute and relative coordinate systems.


Figure 7 .
Figure 7. Illustration of the regional area-based descriptor. (a) The OBB of the object is subdivided into smaller boxes; (b) The object point cloud and OBB; (c) Triangle mesh is generated over the point cloud data; (d) The regional area-based descriptor of the object.
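The descriptor illustrated in Figure 7 can be sketched as follows. This is a simplified illustration under our own assumptions: an axis-aligned bounding box stands in for the OBB, and each point carries a precomputed area weight instead of the paper's per-box triangle-mesh areas. Matching two such descriptors then uses normalized cross-correlation, whose peak behavior is discussed with Figures 8 and 9:

```python
import math

def regional_area_descriptor(points, areas, k=(4, 5, 6)):
    """Bin per-point area weights into a k1 x k2 x k3 grid over the bounding box."""
    lo = [min(p[d] for p in points) for d in range(3)]
    hi = [max(p[d] for p in points) for d in range(3)]
    desc = [0.0] * (k[0] * k[1] * k[2])
    for p, a in zip(points, areas):
        idx = []
        for d in range(3):
            span = (hi[d] - lo[d]) or 1.0        # guard against a flat axis
            idx.append(min(int((p[d] - lo[d]) / span * k[d]), k[d] - 1))
        desc[(idx[0] * k[1] + idx[1]) * k[2] + idx[2]] += a
    return desc

def ncc(a, b):
    """Normalized cross-correlation between two descriptor vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

In the paper's pipeline, the descriptor of Q is recomputed at each rotation increment ∆θ and the rotation maximizing the correlation is selected.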


Figure 8 .
Figure 8. The coarse registration process between series-of-scans point cloud P and series-of-scans point cloud Q: (a) hammer point cloud Q from the first series of scans; (b-d) rotation of point cloud Q; (e) hammer point cloud P from the second series of scans; (f-h) the OBB regional area-based descriptors of P and Q; (i) overlap-rejected model after refined registration; (j-l) the triangle meshes of the hammer head.


Figure 10 .
Figure 10. The triangle meshes of three objects scanned by the developed method.


Figure 11 .
Figure 11. The synthetic data used in experiments: (a-d) CAD models of the objects; (e-h) point clouds of the objects; (i-l) point clouds of the objects from the virtual camera.


Table 1 .
Experimental results using the connector model obtained by the proposed coarse registration method with different samplings and increments ∆θ.


Table 2 .
Experimental results using the connector model obtained by the proposed coarse registration method with different numbers of oriented bounding box (OBB) segments k1 × k2 × k3.

Table 3 .
Experimental results using the connector model obtained by the proposed coarse registration method with different ratios of overlapped area.

Table 4 .
Experimental results using synthetic data obtained by the proposed method and local feature descriptor methods.

Table 5 .
Experimental results using the connector model obtained by the proposed coarse registration method with different levels of Gaussian noise.