Real-Time 3D Reconstruction of Thin Surface Based on Laser Line Scanner

The truncated signed distance field (TSDF) has been applied as a fast, accurate, and flexible geometric fusion method in the 3D reconstruction of industrial products based on hand-held laser line scanners. However, this method has problems when reconstructing the surfaces of thin products: the surface mesh collapses toward the interior of the model, resulting in topological errors such as overlaps, intersections, or gaps. Meanwhile, the existing TSDF method ensures real-time performance through significant graphics processing unit (GPU) memory usage, which limits the scale of the reconstructed scene. In this work, we propose three improvements to existing TSDF methods: (i) a thin-surface attribution judgment method in real-time processing that solves the problem of interference between the opposite sides of a thin surface; we distinguish measurements originating from different parts of a thin surface by the angle between the surface normal and the observation line of sight; (ii) a post-processing method to automatically detect and repair topological errors in areas where misjudgment of thin-surface attribution may occur; (iii) a framework that integrates central processing unit (CPU) and GPU resources to implement our 3D reconstruction approach, which ensures real-time performance and reduces GPU memory usage. Experimental results show that the proposed method provides a more accurate 3D reconstruction of thin surfaces, comparable to state-of-the-art laser line scanners with 0.02 mm accuracy. In terms of performance, the algorithm guarantees a frame rate of more than 60 frames per second (FPS) with a GPU memory footprint under 500 MB. In summary, the proposed method achieves real-time, high-precision 3D reconstruction of thin surfaces.


Introduction
Recent years have witnessed the release of a large number of fast, accurate depth sensors. With the opening up of graphics processors to general computing, 3D reconstruction with depth sensors has attracted new attention in the field of computer vision [1]. It has a wide range of applications in reverse engineering, 3D printing, industrial inspection, robotics, augmented reality (AR), autonomous driving, and other fields [2][3][4]. Traditional depth sensors, such as the structured light camera, time-of-flight (ToF) camera, and terrestrial laser scanner (TLS), provide depth data as depth images or sets of unorganized points. These sensors have particular tradeoffs in terms of speed, accuracy, flexibility, and price [5]. The hand-held laser line scanner projects a laser pattern onto the surface of the object and densely measures the surface profiles (3D curves) based on optical triangulation. Given its advantages in speed, accuracy, and flexibility [6], this kind of scanner, such as HandySCAN [7], HSCAN [8], and FreeScan [9], has been widely used in high-precision 3D measurement applications in industrial manufacturing. Our first contribution is a thin-surface attribution judgment method that distinguishes opposite surfaces and stores them separately, which effectively avoids the interference of observations of thin surfaces. In some exceptional cases, real-time thin-surface attribution judgment may fail, so we have developed a post-processing method to detect and repair regions that have topological problems.
The efficiency of the underlying data structure is another critical issue in real-time 3D reconstruction. Many advanced real-time reconstruction systems have implemented different underlying data structures on graphics processors, but they have made tradeoffs in terms of speed, quality, and scale. KinectFusion [18] achieves a real-time frame rate but has a large memory footprint, since it uses a volume to represent the whole space instead of reconstructing densely only near the observed surface. The moving volume method uses the motion of the camera to stream data from the graphics processing unit (GPU) to the central processing unit (CPU) to expand the space [50]. However, its active volume structure is the same as that of KinectFusion, so a lot of memory is still required to represent empty space. In recent years, sparse volumetric representations have been developed to compress space [20]. The main idea is to reconstruct densely only near the observed surface and discard empty space. To index the allocated data, octrees [51,52,53] and hash tables [1,19,20,24,27,34] have been successfully applied. An octree is unsuitable for graphics processors because of its low degree of parallelism. Nießner et al. proposed a voxel-hashing method for storing the TSDF. The complexity of accessing, inserting, and deleting data in a hash table is O(1); this method effectively reduces the memory footprint while retaining a speed advantage [20].
Hashing-based fusion methods can even run on mobile platforms with relatively low computing capability, such as tablets and mobile phones [24,31,35,54]. Based on the work of Nießner et al., Kähler et al. improved the storage structure of the hash table to reduce the impact of hash conflicts [24]. Moreover, they developed a hierarchical hash-based structure with few levels to further improve storage efficiency [1]. A method called the structured skip list has also been proposed to increase the number of voxels that a hash table can accommodate [55]. Although these improved hashing methods alleviate the memory pressure to some degree by compressing data, they do not solve the problem at its root: GPU memory still limits the spatial extent and resolution of the voxel grid. Nießner et al. alleviate the GPU memory pressure through bidirectional GPU-CPU streaming, but GPU memory may still be exhausted when the reconstructed scene is very dense. Our second contribution is to improve the data structure of voxel hashing. We only store hash tables on the GPU, and all voxels are stored on the CPU; thus, the limitation of GPU memory capacity on the scale of the reconstructed scene is lifted.
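The voxel-hashing idea above can be sketched in a few lines. This is an illustrative sketch, not the implementation of any cited system: the block size and table size are assumptions, the multiplicative primes are the ones commonly used for spatial hashing, a Python dict stands in for the GPU-resident hash table, and the list values stand in for CPU-side voxel storage.

```python
BLOCK_SIZE = 8     # voxels per block edge (assumed)
VOXEL_SIZE = 0.5   # mm, matching the resolution used later in the paper

def block_coords(p):
    """World point (mm) -> integer voxel-block coordinates."""
    return tuple(int(c // (VOXEL_SIZE * BLOCK_SIZE)) for c in p)

def spatial_hash(b, table_size=2**20):
    """Classic 3D spatial hash: XOR of coordinate-prime products."""
    x, y, z = b
    return ((x * 73856093) ^ (y * 19349669) ^ (z * 83492791)) % table_size

class VoxelBlockGrid:
    """Hash table of sparsely allocated voxel blocks: only blocks near the
    observed surface are ever created; empty space costs nothing."""
    def __init__(self):
        self.blocks = {}          # stand-in for the hash table + voxel heap

    def get_or_allocate(self, p):
        b = block_coords(p)
        if b not in self.blocks:
            self.blocks[b] = [0.0] * BLOCK_SIZE**3   # freshly allocated block
        return self.blocks[b]

grid = VoxelBlockGrid()
grid.get_or_allocate((1.0, 2.0, 3.0))   # allocates exactly one block
```

Because lookup, insertion, and deletion in the dict are amortized O(1), the sketch mirrors the access-cost argument made for voxel hashing above.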
In this work, aiming at the problem of thin-surface reconstruction in the existing TSDF fusion method, we add a thin-surface attribution judgment step to the real-time process to eliminate the mutual interference between the opposite sides of a thin surface; the angle between the surface normal and the observation ray is used to distinguish TSDF observations from different parts of the thin surface. For regions where surface attribution misjudgment may occur, we apply a weak-topological-region repair method to fix possible topological errors. To solve the problem that the underlying data structure of the existing TSDF method consumes too much GPU memory, we design a new memory management method that reduces GPU memory utilization. By moving all voxel data to the CPU, the memory pressure on the GPU is alleviated, and the scale of the reconstructed scene is further increased.
The remainder of this paper is structured as follows: Section 2.1 outlines the TSDF implementation based on 3D curves. Section 2.2 describes our improvements to thin-surface reconstruction in real-time processing. Section 2.3 explains the post-processing method for repairing weak topological regions, and Section 2.4 presents the underlying data structure for storing the TSDF. The results and discussion follow in Section 3. Finally, we conclude and outline some ideas for future work in Section 4.

Truncated Signed Distance Field (TSDF) Computation from 3D Curves
Our real-time 3D reconstruction approach is implemented with a highly integrated hand-held laser scanner, as shown in Figure 1. The scanner measures the 3D curves (a set of 3D points) of the object's surface using laser triangulation [56]. A laser pattern is projected onto the measured object, and the left and right cameras capture the 2D images of the object at the same time; from the 2D images, we extract the 2D surface curves of the projection pattern reflected on the object; a set of 3D curves is computed in the sensor coordinate system using the 2D surface curves. Meanwhile, the 3D curves measured from different frames are aligned to the global coordinate system by the sensor's position and orientation, which are continuously computed from the circular mark-points pasted on the object surface. Our 3D reconstruction algorithm integrates these arbitrary intersecting 3D curves into a single volumetric representation and refines the reconstructed surface continuously based on [6].
Figure 1. Schematic of the 3D laser scanning system: (a) components of the 3D scanner; (b) laser patterns. There are two kinds of projection patterns, shown in red and blue respectively; we alternately project them onto the measured object; once one of the patterns is extracted, the scanner immediately switches to the other. The reflections of these two patterns form multiple intersecting curves on the object.
TSDF constructed from 3D curves is a vector field, which is different from the classical signed distance field [21]. As shown in Figure 2a, each voxel v in the field stores the distance vector to its closest point on the reconstructed surface. The vector's direction is parallel to the normal of the closest point on the surface. The sign of the distance indicates whether a voxel v is inside or outside the surface. If v is outside of the surface, its sign is positive, and if v is inside the surface, its sign is negative. Since voxels that are too far from the surface have little effect on the surface construction, the distance field is truncated at distance |d| ≤ T. Truncation T affects not only the quality of surface reconstruction but also the runtime performance and memory consumption of the algorithm. On the one hand, if T is set too large, the surface details will be smoothed out. At the same time, increasing T means processing more voxels, thus slowing down the algorithm and consuming more memory.
On the other hand, if T is smaller than the level of noise, it will lead to the appearance of isolated patches around the reconstructed surface. Therefore, it is generally appropriate to set T to the minimum value larger than the noise level of the apparatus, which is 0.9 mm for the scanner used in our experiment. Truncation T is also the leading cause of thin-surface problems in TSDF. How T affects thin-surface reconstruction will be mentioned in the next section.
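The truncation rule can be stated in two lines of code. This is a minimal sketch of the band test, not the paper's implementation; T = 0.9 mm follows the value quoted above for the scanner used in the experiment.

```python
T = 0.9  # truncation distance in mm, just above the scanner's noise level

def truncated(d, T=T):
    """Keep a signed distance d only if the voxel lies inside the
    truncation band |d| <= T; voxels beyond the band are discarded."""
    return d if abs(d) <= T else None

assert truncated(0.5) == 0.5     # inside the band: contributes to the field
assert truncated(-1.2) is None   # too far inside the object: discarded
```

The tradeoff described in the text maps directly onto T here: enlarging T admits more voxels per observation (more smoothing, more work, more memory), while shrinking T below the noise level would let noisy samples fall outside the band and fragment the surface.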
Reconstruction of surfaces is based on the fundamental principle that two intersecting lines determine a plane. The local reconstructed surface patch can then be approximated in the neighborhood of the intersection points of all the curves on the surface. Figure 2b illustrates the principle of reconstruction in the simple case where two curves intersect. Assuming that p 1 and p 2 are the closest points to voxel v on curves c 1 and c 2 , respectively, v 1 and v 2 are the two vectors from the voxel center point p v to p 1 and p 2 , respectively. The tangents of the curves at p 1 and p 2 have been computed in advance and are expressed as t 1 and t 2 , respectively. Since the estimated normal n of the tangent plane to the surface should be perpendicular to t 1 and t 2 , n should be the cross-product of t 1 and t 2 . The distance d from p v to the closest surface can be approximated as the average of the projections of v 1 and v 2 on n. Therefore, the value f of the distance vector field at v is dn. A full description of how to locate the closest points on a curve and how to calculate the tangent at the closest point is beyond the scope of this paper. Interested readers can find relevant content in [6].
More formally, let n curves c 1 , . . . , c n pass through the local surface patch; p 1 , . . . , p n are their respective closest points to voxel v, and t 1 , . . . , t n are the tangents at p 1 , . . . , p n , respectively. Then the distance from the central point p v of voxel v to the surface Ŝ can be obtained as the average value of the projected distance vectors on the estimated normal, i.e.:

d = (1/n) Σ_{i=1..n} ⟨p i − p v , n⟩, (1)

where p i is the closest point to p v on the curve c i , and n is the estimated normal of the surface. Since the normal n is perpendicular to all tangents t 1 , . . . , t n , the following equation can be obtained:

n = argmin_{‖n‖=1} nᵀ C n, with C = Σ_{i=1..n} t i t iᵀ; (2)

C is the covariance matrix of the tangents, and the eigenvector associated with the minimum eigenvalue of C is the solution for n. Then, the value of the vector field f: R³ → R³ at v is:

f(v) = d n. (3)

Therefore, the closest surface point p c of voxel v can be obtained by:

p c = p v + f(v). (4)

A curve can be considered as a linear interpolation of measured points {p 1 , . . . , p n }, represented by a set of line segments c = {l 1 , . . . , l n−1 }, where l i = p i p i+1 . In online scanning, the measured surface curves are used to update the covariance matrix C and the distance d of all neighboring voxels incrementally. Each curve only affects voxels within a certain vicinity. As shown in Figure 3, the voxels affected by each line segment p i p i+1 on a curve form an approximate cylinder (called a fundamental cell) whose radius is determined by the truncation distance, which is called the envelope size in [6].
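For the two-curve case of Figure 2b, the computation reduces to a cross product and two projections. The sketch below illustrates that case with pure-Python vectors; it assumes the two tangents are not parallel, and the function name is illustrative.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def tsdf_from_two_curves(p_v, p1, t1, p2, t2):
    """Estimate the normal as n = t1 x t2 and the distance d as the mean
    projection of the voxel-to-closest-point vectors v1, v2 onto n;
    return (d, f(v)) with f(v) = d * n."""
    n = normalize(cross(t1, t2))
    v1 = tuple(a - b for a, b in zip(p1, p_v))
    v2 = tuple(a - b for a, b in zip(p2, p_v))
    d = 0.5 * (dot(v1, n) + dot(v2, n))
    return d, tuple(d * c for c in n)

# Voxel center 1 mm below the plane z = 0 traced by two curves along x and y:
d, f = tsdf_from_two_curves((0, 0, -1), (0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 1, 0))
# d is 1.0 (voxel is 1 mm from the surface) and f points along +z
```

With more than two curves, the cross product is replaced by the minimum-eigenvalue eigenvector of the tangent covariance matrix C, as described above.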
The envelope here serves the same purpose as the truncation mentioned earlier. Curves only affect voxels inside the envelope and do not affect voxels beyond it. A low-pass weight function ω i prevents the discontinuity of the field over the envelope boundary; σ stands for the size of the envelope. Increasing the envelope size makes the reconstructed surface smoother, but it also smooths out sharp features. Conversely, reducing the envelope size makes the surface noisy. Tubić et al. [6] recommended 2 · δ as the envelope size, where δ is the voxel size.
Figure 3. Fundamental cell of a line segment p i p i+1 : t 1 and t 2 are the tangents at each endpoint of the line segment; all voxels in the cell must be located between the two planes perpendicular to the two tangents, and the distance from the central point p v of voxel v to the closest point p c must be less than the cylinder's radius (envelope size).
After weighting with ω i , new observations are integrated into the individual field incrementally by updating the accumulated weight ω, the covariance matrix C, and the average distance vector ṽ stored in each voxel:

C ← C + ω i t i t iᵀ, (6)
ṽ ← (ω ṽ + ω i v i ) / (ω + ω i ), (7)
ω ← ω + ω i . (8)

The average distance vector ṽ is projected onto the normal n calculated from the covariance matrix C to obtain the value of the TSDF at v:

f(v) = ⟨ṽ, n⟩ n. (9)

f(v) is only calculated during real-time rendering and in post-processing to generate surface meshes. The reconstructed surface can be obtained by extracting the isosurface from the zero-level set of f(v). In the online process, raycasting is used to render the reconstructed surface from the current view. After scanning stops, a marching cubes method can be used to generate mesh models.
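One plausible realization of the per-voxel incremental update can be sketched as follows. The running-weighted-average form of the distance-vector update and the additive covariance update are assumptions made for illustration; they are not transcribed from [6].

```python
class Voxel:
    """Accumulated state of one TSDF voxel: a weight, a tangent covariance
    matrix, and a weighted-average distance vector (assumed update forms)."""
    def __init__(self):
        self.w = 0.0                              # accumulated weight
        self.C = [[0.0] * 3 for _ in range(3)]    # tangent covariance matrix
        self.v = (0.0, 0.0, 0.0)                  # average distance vector

    def integrate(self, w_i, t_i, v_i):
        # Covariance update: C += w_i * t_i t_i^T
        for r in range(3):
            for c in range(3):
                self.C[r][c] += w_i * t_i[r] * t_i[c]
        # Running weighted average of the distance vector
        total = self.w + w_i
        self.v = tuple((self.w * a + w_i * b) / total
                       for a, b in zip(self.v, v_i))
        self.w = total

vox = Voxel()
vox.integrate(1.0, (1.0, 0.0, 0.0), (0.0, 0.0, 0.5))  # curve tangent along x
vox.integrate(1.0, (0.0, 1.0, 0.0), (0.0, 0.0, 1.5))  # curve tangent along y
# vox.v is now the equal-weight mean (0, 0, 1.0) of the two distance vectors
```

Keeping only these accumulators per voxel is what makes the fusion incremental: each new curve observation touches a voxel in constant time, and the field value is recovered later by projecting ṽ onto the normal extracted from C.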
The above content briefly introduces the method of Tubić et al. [6] for generating a surface model from a set of intersecting 3D curves. This method is topologically flexible and has linear complexity, but it cannot reconstruct the surface of very thin objects. We use a laser line scanner to measure the 3D curves from the surface of a sheet metal part with a thickness of 0.8 mm and use their method to reconstruct the surface mesh model of this object, which is shown in Figure 4a. The resolution (voxel size) of the model is 0.5 mm, and the size of the truncation (also known as the fundamental envelope) is 0.9 mm. Tubić's output has a large area of missing data, and its distance (one-sided Hausdorff) to the computer-aided design (CAD) model is greater than 0.2 mm (Figure 4b). We improve the robustness of Tubić's method for thin objects. With the same resolution and truncation, the optimized result not only guarantees the completeness of the data (Figure 4c) but also keeps its maximum distance (one-sided Hausdorff) to the CAD model within 0.2 mm (Figure 4d). The proposed thin-surface optimization of Tubić's method is described in Sections 2.2 and 2.3. Figure 5 shows the whole pipeline of the proposed algorithm. The algorithm employs a hash table to organize the underlying voxel data; the underlying data structure is described in Section 2.4. The whole procedure of the proposed 3D reconstruction algorithm can be summarized as the following seven steps:

1. Voxel Allocation: after receiving a frame of 3D curve data, we compute the p v of all voxels that fall within the fundamental envelopes of the curves and find the corresponding voxel block of p v in the hash table. If no block is found, we allocate a new block for it on the heap.
2. TSDF Computation: compute the TSDF observations of these voxels (including the closest point p c on the curve and the tangent t c at p c ).
3. Voxel Indexing: use p v to index the address of voxel v in the hash table.
4. Voxel Integration: before integrating the new TSDF observation, it is necessary to determine whether it is on the same side as the surface already encoded by v. If so, the new TSDF is integrated into voxel v by Equations (6)-(8) (updating the weight ω, the matrix C, and the distance vector ṽ). Otherwise, we find the inverse voxel v' on the other side of the thin surface and update v'. The method of determining which side of the thin surface the current observation belongs to will be elaborated in the next section.
5. Normal Filtering: the surface normal plays a vital role in the thin-surface attribution judgment and also affects the real-time display quality. Therefore, we need to denoise it to make it reliable. First, the surface point p c (Equation (4)) and normal n (through a singular value decomposition (SVD) of matrix C) encoded by voxel v are computed. Then the neighborhood points of v are found in the hash table, and n is recomputed with the surface points and normals of v's neighbors. Normal filtering is also described in the next section.
6. Weak Topological Regions Repair: although we have distinguished the opposite sides of the observed thin surface when integrating voxels, it is difficult to distinguish them in some areas with complex topology. When the user stops scanning, we adopt a post-processing method to detect and repair the topological errors in these areas automatically.
7. Isosurface Extraction: after repairing the topological errors in the TSDF, we use the marching cubes algorithm to extract the isosurface and generate a mesh model.
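The seven steps above can be sketched as a control-flow skeleton. Every step is a named stub so that the online/post-processing split is visible; the function and step names are illustrative, not the paper's API.

```python
def run_pipeline(frames):
    """Steps 1-5 run once per frame during online scanning;
    steps 6-7 run once after the user stops scanning."""
    log = []

    for curves in frames:                               # online scanning loop
        log.append("1. voxel allocation")               # hash-table lookup/alloc
        log.append("2. TSDF computation")               # closest points, tangents
        log.append("3. voxel indexing")                 # p_v -> voxel address
        log.append("4. integration + attribution judgment")
        log.append("5. normal filtering")
    log.append("6. weak topological regions repair")    # post-processing
    log.append("7. isosurface extraction (marching cubes)")
    return log

log = run_pipeline([["frame0_curves"]])   # a single illustrative frame
```

The split matters for real-time behavior: only steps 1-5 are on the per-frame critical path, while the expensive repair and meshing steps are deferred to the end of the scan.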

TSDF for Thin Surface
As mentioned earlier, TSDF has a disadvantage when reconstructing thin surfaces. The distance field extends a band, i.e., the truncation, both inside and outside the observed surface. When the surface's thickness is less than the width of the truncation band, the values of the distance fields that encode opposite sides of the surface interfere with each other, so that the rear surface cannot be recovered from the fields. Figure 6a shows a 2D slice of a corrupted thin surface. Black solid lines represent the real surface of a thin object, and each solid arrow indicates a value in the distance field. The truncation distance of the field is twice the voxel size, and the interfered voxels are colored gray; the darker their color, the higher the impact. Unaffected lattices are white because they are too far from the other side of the surface. The arrows of interfered voxels do not point to the real surface but to multiple false surfaces, represented by red dotted lines. Therefore, the correct method is to store two vectors (dotted arrows) in a gray lattice, as shown in Figure 6b. That is to say, each voxel in the overlapping region should encode two tangent planes that lie on different real surfaces. It is first necessary to determine which side of the thin surface each TSDF observation originates from during online scanning; we call this process thin-surface attribution judgment.
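The "two tangent planes per voxel" idea can be sketched as follows. This is an illustration of the storage concept only: it keys the two sides by the sign of the dot product between the stored normal and the current line of sight, whereas the paper stores the second voxel's address inside the first (Section 2.4); the class and field names are assumptions.

```python
class ThinSurfaceVoxel:
    """A voxel in the overlap band of a thin surface keeps one TSDF entry
    per side instead of letting opposite-side observations interfere."""
    def __init__(self):
        self.sides = {}   # +1 / -1  ->  (distance vector, normal)

    @staticmethod
    def side_key(normal, line_of_sight):
        d = sum(a * b for a, b in zip(normal, line_of_sight))
        return 1 if d >= 0 else -1

    def store(self, normal, line_of_sight, dist_vec):
        self.sides[self.side_key(normal, line_of_sight)] = (dist_vec, normal)

vox = ThinSurfaceVoxel()
vox.store((0, 0, 1), (0, 0, 1), (0, 0, 0.3))     # observed from the front
vox.store((0, 0, 1), (0, 0, -1), (0, 0, -0.3))   # observed from the back
assert len(vox.sides) == 2   # both tangent planes survive, neither overwrites
```

With a single-entry voxel, the second `store` call would have overwritten the first, which is exactly the interference Figure 6a illustrates.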

When one side of a thin surface is observed, the opposite side is usually occluded, so the view direction of the current observations implies which side of the thin surface they belong to. Moreover, to obtain the most accurate observations, the sensor's viewing direction usually needs to be approximately perpendicular to the object's surface, which makes the correlation between the viewing direction and the thin-surface attribution even stronger.
Thus, we store the line of sight of each voxel and continuously update it during scanning as follows:

V v ← r, if ⟨n v , r⟩ > ⟨n v , V v ⟩, where r = (v p − p v ) / ‖v p − p v ‖; (10)

here, n v denotes the normal of the surface encoded by voxel v, and v p represents the current viewpoint. Each normal n v computed from the covariance matrix C in Equation (6) must be oriented consistently toward the viewpoint v p (so that the angle between n v and the line of sight is less than π/2). Equation (10) guarantees that the angle between the stored line of sight and the surface normal is the smallest, so that the sign of the normal n v determined by the line of sight is reliable.

In the online scanning process, there are two cases of viewpoint orientation:

1. One Side Visible: when the line of sight is oriented toward one side of the thin surface, the sensor can only measure the current side and cannot measure the opposite side because the dominant side occludes it.
2. Both Sides Visible: when the line of sight is approximately parallel to the thin surface and perpendicular to its edge, both sides of the thin surface near the edge can be observed.
The first case (One Side Visible) must be addressed to prevent the thin surface from being totally corrupted during online scanning. The second case (Both Sides Visible) only affects the reconstruction quality near the edge of the thin surface. Moreover, handling it requires a more complex mechanism with a high computational cost. Therefore, we handle this case in post-processing, as described in Section 2.3.
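The per-voxel line-of-sight update can be sketched as follows. This is an interpretation of the update described above (keep the observation ray most aligned with the surface normal, so the sign of n v fixed by the line of sight stays reliable); the exact rule in the paper may differ, and the function names are assumptions.

```python
import math

def norm(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def update_line_of_sight(V_v, n_v, viewpoint, p_v):
    """Keep, per voxel, the unit ray from voxel center to sensor that makes
    the smallest angle with the surface normal n_v (interpretation)."""
    ray = norm(tuple(a - b for a, b in zip(viewpoint, p_v)))  # voxel -> sensor
    if V_v is None:                      # first observation of this voxel
        return ray
    dot_old = sum(a * b for a, b in zip(n_v, V_v))
    dot_new = sum(a * b for a, b in zip(n_v, ray))
    return ray if dot_new > dot_old else V_v   # smaller angle to n_v wins

# Head-on observation, then a grazing one: the head-on ray is retained.
V = update_line_of_sight(None, (0, 0, 1), (0, 0, 10), (0, 0, 0))
V = update_line_of_sight(V, (0, 0, 1), (10, 0, 1), (0, 0, 0))
```

Retaining the most perpendicular viewing ray matches the motivation given above: the more head-on the observation, the more trustworthy the sign that the line of sight assigns to the normal.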
The signs of TSDF indicate whether a voxel is inside or outside an object. We use the following procedure to determine the signs. First, the orientation of the normal n v is adjusted to be consistent with the distance vector ṽ (Equation (7)) to obtain the vector n dis ; that is, n dis needs to satisfy:

n dis = n v if ⟨n v , ṽ⟩ ≥ 0, otherwise n dis = −n v . (11)

Then the sign S v at voxel v is determined by the dot product of n dis and the line of sight V v stored in v:

S v = sgn(⟨n dis , V v ⟩). (12)

With the change of signs, we can quickly judge the surface attribution online. When scanning moves from one side of a thin surface to the other, the current direction of view deviates significantly from the line of sight V v stored in voxel v. If the normal n v is oriented toward the current viewpoint, the sign computed from the current observation is contrary to the sign S v previously stored in v; if the two signs are the same, the current measurement is on the same side as the previously accumulated measurements. If the current observation is judged to belong to the other side of the thin surface, a new voxel v' is allocated on the heap, the observed value is integrated into v', and the address of v' is stored in the original voxel v. When querying the value of a voxel in the TSDF, we therefore need not only the coordinates of the query point but also the current line of sight, to determine whether the voxel we are looking for is v or v'. Section 2.4 provides more details about the data structure. Figure 7 depicts the plot of the sign S varying with the angle α between the vector n dis and the line of sight V v . As shown in Figure 7, near the critical point (α = π/2), a slight change in the angle will flip the sign S. Small perturbations of the data (related to the accuracy of the apparatus) lead to a certain degree of deviation in the normal estimation.
When the angle between the line of sight and the surface normal is α ≈ π/2, the error in the normal estimate leads to an inaccurate TSDF sign, which causes misjudgment of the thin-surface attribution. The reliability of measurements obtained in this situation is also relatively low. Therefore, a measurement reliability check is added to the process to decide whether the thin-surface attribution judgment can be applied. When the line of sight and the surface normal are almost perpendicular (|⟨n_dis, V_v⟩| ≤ 0.05), the current measurement is judged unreliable and discarded. Otherwise, the thin-surface attribution judgment is turned on, and the current observation is integrated into the corresponding voxel determined by the judgment.
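The sign-based attribution test can be sketched as follows (a minimal illustration: the function names, tuple representation, and return labels are ours; the 0.05 threshold is the one quoted above):

```python
COS_REL_THRESHOLD = 0.05  # |n_dis . V_v| at or below this: measurement unreliable

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def orient_normal(n, v_dis):
    """Flip the normal n so that it is consistent with the distance vector (n . v_dis > 0)."""
    return n if dot(n, v_dis) > 0 else tuple(-x for x in n)

def judge_attribution(n_dis, sight, stored_sign):
    """Classify a new measurement against the sign stored in the voxel.

    n_dis: oriented unit normal; sight: unit line of sight V_v;
    stored_sign: +1/-1 previously accumulated in voxel v.
    Returns 'discard', 'same side', or 'other side'.
    """
    c = dot(n_dis, sight)
    if abs(c) <= COS_REL_THRESHOLD:   # line of sight nearly perpendicular to the normal
        return "discard"
    sign = 1 if c > 0 else -1
    return "same side" if sign == stored_sign else "other side"
```

A measurement judged "other side" is routed to the inverse voxel v′, as described above.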
Sensors 2020, 20, x FOR PEER REVIEW 10 of 32
From the above description, we can see that the accuracy of the normal estimation affects both the thin-surface attribution judgment and the accuracy of the distance vector field f(v). Figure 8 shows the surface points (Equation (4)) and normals (obtained through an SVD decomposition of the matrix C in Equation (6)) of a scanned model. As can be seen in Figure 8a, the normals are very noisy and unreliable. A normal filtering method with low computational complexity is used during online scanning to improve the accuracy of the normal estimation; Figure 8b shows the filtered results. The overall flow of online normal filtering is:

1. Perform the approach proposed in [6] to estimate the first normal n_f and compute the reliability measure λ_r of n_f;
2. For each voxel v in the field, estimate the normal n_s based on an anisotropic neighborhood;
3. Update the normals by combining the first normal n_f with n_s under the influence of the coefficient λ_r until either convergence or a maximum number of iterations is reached.
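The three steps above can be sketched as an iterative blend (a sketch under our reading of step 3, n ∝ λ_r·n_f + (1 − λ_r)·n_s; `estimate_ns` is a hypothetical stand-in for the anisotropic-neighborhood estimate of step 2):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def filter_normal(n_f, lam_r, estimate_ns, max_iter=10, eps=1e-4):
    """Blend the first normal n_f with the neighborhood estimate n_s,
    repeating until convergence or the iteration limit is reached."""
    n = normalize(n_f)
    for _ in range(max_iter):
        n_s = estimate_ns(n)
        new = normalize(tuple(lam_r * a + (1 - lam_r) * b
                              for a, b in zip(n_f, n_s)))
        # stop when successive estimates barely change
        if sum((a - b) ** 2 for a, b in zip(new, n)) < eps ** 2:
            return new
        n = new
    return n
```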
Figure 8. The comparison between the normal computed by Tubić et al. [6] and that filtered by our method; (a) normal before filtering; (b) normal after filtering.
When estimating the first normal n_f of voxel v, the three eigenvalues λ₁ > λ₂ > λ₃ of the covariance matrix C in Equation (6) are used to evaluate the reliability of n_f. Pauly et al. [57] defined the coefficient λ_σ = λ₃/(λ₁ + λ₂ + λ₃) as the surface variation. It measures the ratio of the variance along the normal direction to the total variance and can be used to express the reliability of the normal estimate; it ranges between 0 (flat case) and 1/3 (isotropic case). A weight function is applied to λ_σ so that the resulting coefficient λ_r lies in [0, 1]. The parameters ρ, ϕ, and ψ of this weight function can be determined experimentally; here, we use ρ = 0.01, ϕ = 0.1, ψ = 0.15. The purpose of introducing the coefficient λ_r is to preserve the features of the first normals as much as possible and to prevent over-smoothing.
Let v be the voxel at which we need to re-estimate the normal; p is the point on the surface closest to v, and n is the first normal. p_i is the surface point at voxel v_i (v_i ∈ N_v) in the neighborhood of voxel v, and n_i is the first normal at v_i. The neighborhood N_v is taken within a size σ_p specified by the user. A covariance matrix C_n is constructed from all normals n_i in the neighborhood of voxel v:

C_n = Σ_{v_i ∈ N_v} ω_i (n_i − n_m)(n_i − n_m)^T,

where n_m is the average of the n_i. The weight ω_i consists of two factors, ω_i = ω_r · ω_θ. ω_r evaluates the reliability of the neighborhood point v_i and equals the weight (Equation (8)) accumulated at v_i; it is therefore related to the number of observations at voxel v_i and to the distance from voxel v_i to the surface. The weight is larger when there are more observations or the distance is smaller, and smaller otherwise. ω_θ is an anisotropic weight function, as in the bilateral normal filtering method [58]; σ_p is the size of the neighborhood, and the angle parameter σ_n measures the similarity of the neighborhood normals. p_m and n_m are the averages of p_i and n_i, respectively.

The eigenvector associated with the minimum eigenvalue of C_n is the new estimate n_s at voxel v, since it corresponds to the direction of smallest variance of the neighborhood normals. The final normal estimate is updated iteratively by

n ∝ λ_r n_f + (1 − λ_r) n_s,

where ∝ denotes normalization of the length so that n is a unit vector. Whenever voxel v receives a new first normal, a new estimate n^(i) is obtained by the above method, until the deviation between n^(i) and n^(i−1) falls below a certain threshold or the number of updates reaches a limit. As a local filtering method, ours has a low computational cost. Its running time is related to the neighborhood size; a large neighborhood increases the time consumption and also leads to over-smoothing. The 3 × 3 × 3 neighborhood proved sufficient.
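The re-estimation of n_s as the minimum-variance direction of the neighborhood normals can be sketched as below; building the weighted covariance follows the text, while the power iteration on (trace·I − C) is our substitute for a full 3 × 3 eigen-solver:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def normal_covariance(normals, weights):
    """C_n = sum_i w_i (n_i - n_m)(n_i - n_m)^T, with n_m the weighted mean normal."""
    total = sum(weights)
    n_m = tuple(sum(w * n[k] for n, w in zip(normals, weights)) / total
                for k in range(3))
    C = [[0.0] * 3 for _ in range(3)]
    for n, w in zip(normals, weights):
        d = tuple(n[k] - n_m[k] for k in range(3))
        for i in range(3):
            for j in range(3):
                C[i][j] += w * d[i] * d[j]
    return C

def smallest_eigvec(C, iters=200):
    """Eigenvector of the minimum eigenvalue of a symmetric PSD 3x3 matrix,
    via power iteration on (trace(C) * I - C), which keeps the eigenvectors
    of C but reverses the order of the eigenvalues."""
    t = C[0][0] + C[1][1] + C[2][2]
    M = [[(t if i == j else 0.0) - C[i][j] for j in range(3)] for i in range(3)]
    v = normalize((1.0, 1.0, 1.0))
    for _ in range(iters):
        v = normalize(tuple(sum(M[i][j] * v[j] for j in range(3))
                            for i in range(3)))
    return v
```

For normals clustered around (0, 0, 1), the variance along z is smallest, so `smallest_eigvec` recovers the cluster direction, which matches the role of n_s above.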

Repairing Weak Topological Regions
As mentioned in the previous section, when the line-of-sight is almost parallel to the thin surface, the thin-surface attribution judgment may fail, so the distance vector in the vector field cannot point to the correct position. The reconstructed surface mesh data from the vector field will deviate significantly from the object's actual surface and will result in topological errors such as overlap, intersections, or gaps. Without prior knowledge, it is impossible to get the desired surface shape by merely repairing the topological errors in the surface mesh. Therefore, the distance vectors in TSDF must be repaired to restore the locus of the real surface. We need to analyze TSDF to find areas that may have topological problems and then repair the values of these areas.

Detecting Weak Topological Regions
Our method of detecting weak topological regions in TSDF is inspired by Sharf's method [59]. Sharf's approach to repairing weak topological regions is to find the critical points of the signed distance function (the points where the gradient vanishes) and then specify the locus of the correct surface at each critical point through user interaction. Sharf finds discrete critical points, not contiguous regions, so many points with topological errors near these critical points may remain undetected; Sharf compensates for this through specific user-interface interactions, which runs counter to our goal of automatically detecting and repairing weak topological areas. Here, we adjust Sharf's method so that the weak areas we find are continuous and complete. Specifically, the detection of weak regions can be summarized as follows:

1. Find the critical points: collect all voxels v_i ∈ N_v in a neighborhood centered on v, and split them into a positive set and a negative set according to the signs of their TSDF values. Within each set, adjacent voxels are joined together to obtain several connected groups Γ_i±, as shown in Figure 9. If the number of connected groups around v is 2, then v is not a critical point; if the number is 1 or greater than 2, then v is a critical point;
2. Generate weak topological regions: take each critical point P_c as a seed point and find the connected component P_g (a region of adjacent voxels) of P_c within a specified range (such as two times the voxel size) in which the dot product of the normals is below a certain threshold T_dot.

Figure 10 shows weak regions detected by our method. This model has many stiffeners with a thickness of about 1 mm; when the truncation width is also set to 1 mm, the surface model is prone to topological errors at these stiffeners. The proposed method detects these weak topological regions (the red areas in the figure), and the method for repairing the topological errors is described in the next section.

Figure 9. Judgment of critical points in a 2D example; (a) the voxel v and its neighbors {v1, v2, v3, v4, v6, v7, v8}; the positive set and the negative set constitute two connected groups Γ1+ and Γ2− (red and blue circles), so v is not a critical point; (b) the voxel v with positive set {v2, v3, v4, v6, v7, v8} and negative set {v1, v5}; the two sets are divided into four connected groups Γ1−, Γ2+, Γ3−, and Γ4+, so v is a critical point (there are four groups in 2D, but only three in 3D).
Figure 10. The effect of weak regions detection; (a) point clouds generated from the TSDF by Equation (4); non-critical points are marked blue and critical points are marked red; (b) a cross-section of a weak region (the green box in (a)); many points inside the surface do not fall on the actual surface, causing the generated surface mesh to shrink toward the inside of the object.

Figure 10b shows that some points in the weak region are unaffected by the thin-surface effect and fall on the actual surface. With a reasonable parameter configuration (including voxel size and truncation width), we can ensure that there will always be some reliable points in the weak region. Therefore, the task of repairing weak areas can be transformed into finding the reliable points and inferring the values of the unreliable points from them.
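The critical-point test in detection step 1 can be sketched by flood-filling same-sign neighbors (a minimal sketch; the dict-of-signs representation and function names are our own):

```python
from itertools import product

# the 26 neighbor offsets of a voxel (center excluded)
NEIGHBORHOOD = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]

def connected_groups(signs):
    """signs maps each of the 26 neighbor offsets to the sign (+1/-1) of its
    TSDF value. Same-sign voxels sharing a face, edge, or corner are joined;
    returns the number of connected groups Gamma_i."""
    seen, groups = set(), 0
    for start in signs:
        if start in seen:
            continue
        groups += 1
        seen.add(start)
        stack = [start]
        while stack:
            cur = stack.pop()
            for off in NEIGHBORHOOD:
                nb = tuple(c + o for c, o in zip(cur, off))
                if nb in signs and nb not in seen and signs[nb] == signs[cur]:
                    seen.add(nb)
                    stack.append(nb)
    return groups

def is_critical(signs):
    """v is a critical point unless its neighborhood splits into exactly
    one positive and one negative connected group."""
    return connected_groups(signs) != 2
```

A flat surface through the voxel produces exactly two groups (one per side), while a thin-wall pinch produces one group or more than two, flagging the voxel as critical.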

Repairing Weak Topological Regions
As shown in Figure 11, voxels inside convex surfaces are easily affected by observations on both sides of the thin wall, while voxels outside are unaffected. Therefore, for convex surfaces, the TSDF values of voxels with positive signs can be used as input to infer the values of voxels with negative signs. Conversely, for concave surfaces, the TSDF values of positive voxels can be inferred from negative voxels. The number of positive and negative connected groups in a weak region can be used to judge whether a surface is convex or concave: when the number of positive connected groups is larger than the number of negative connected groups, the surface is convex; otherwise, it is concave.

After identifying reliable voxels, we need to infer the actual surface from them. p_i and n_i are the surface points and normals of the reliable voxels, respectively. Our goal is to construct a directed distance field f: R³ → R such that f(p_i) = 0 and ∇f(p_i) = n_i; the surface represented by f passes through all p_i, and the normal of the implicit surface matches n_i at p_i. In calculating f, we use the Hermite radial basis function (HRBF) [60] interpolation method, since it considers both points and normals. We define f as follows:

f(x) = Σ_{i=1}^{N} [α_i φ(‖x − p_i‖) + ⟨β_i, ∇φ(‖x − p_i‖)⟩], (17)

where x ∈ R³ indicates the location to be evaluated, and φ: [0, ∞) → R is a real-valued radial basis function. When fitting functions of three variables, biharmonic (φ(r) = r) and triharmonic (φ(r) = r³) splines are good choices. ∇f(x) is a column vector of dimension d = 3 and can be calculated by the following formula:

∇f(x) = Σ_{i=1}^{N} [α_i ∇φ(‖x − p_i‖) + H_φ(x − p_i) β_i], (18)

where H_φ denotes the Hessian of φ(‖·‖). To evaluate f, we must find the N coefficients α_i ∈ R and β_i ∈ R³. The constraints f(p_i) = 0 and ∇f(p_i) = n_i yield a linear system A x^T = b^T, in which a single (d + 1) × (d + 1) matrix block of the coefficient matrix A ∈ R^{N(d+1) × N(d+1)} couples the unknowns (α_j, β_j) to the constraints at p_i. After calculating the weights α_i and β_i, the distance from any spatial position x to the implicit surface can be calculated using Formula (17), and the gradient by Formula (18). The normal of the implicit surface at x is obtained by normalizing the gradient. Therefore, the distance vector and normal of every unreliable voxel in a weak topological region can be recomputed from the HRBF.
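To make the HRBF construction concrete, here is a one-dimensional analogue (a sketch only: the paper works in R³ with (d + 1) × (d + 1) blocks, whereas this toy uses φ(t) = |t|³ on the line and hypothetical helper names). The interpolant f(x) = Σ_j α_j φ(|x − p_j|) + β_j φ′(x − p_j) is fitted so that f(p_i) = 0 and f′(p_i) = n_i:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (tiny dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * e for a, e in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Triharmonic spline in 1D: phi(t) = |t|^3 with its first and second derivatives.
phi = lambda t: abs(t) ** 3
dphi = lambda t: 3.0 * t * abs(t)
d2phi = lambda t: 6.0 * abs(t)

def hrbf_fit(pts, normals):
    """Find (alpha_i, beta_i) enforcing f(p_i) = 0 and f'(p_i) = n_i."""
    A, b = [], []
    for p in pts:                                   # value constraints
        A.append([v for q in pts for v in (phi(p - q), dphi(p - q))])
        b.append(0.0)
    for p, n in zip(pts, normals):                  # derivative constraints
        A.append([v for q in pts for v in (dphi(p - q), d2phi(p - q))])
        b.append(n)
    return solve(A, b)

def hrbf_eval(coef, pts, x):
    """Evaluate f(x) with coefficients ordered (alpha_1, beta_1, alpha_2, ...)."""
    return sum(coef[2 * j] * phi(x - q) + coef[2 * j + 1] * dphi(x - q)
               for j, q in enumerate(pts))
```

With two points p = 0, 1 and prescribed slopes +1, −1, the fitted f vanishes at both points and bulges positively between them, mirroring how the 3D field passes through the reliable points with the prescribed normals.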

Data Structure
We define the following structure to store the TSDF properties at each voxel. Each voxel stores the line of sight; the covariance matrix C defined in Equation (6) (since C is symmetric, only the six independent elements of its upper triangle are stored); the distance defined in Equation (7); the accumulated weight defined in Equation (8); a pointer to the voxel belonging to the other side of the thin surface; and the address of the surface point. The normal is stored with the surface point and is used for the surface attribution judgment and for interactive visualization. A certain number of voxels (by default 8³) are merged into one storage unit called a block. All voxels in a block are stored contiguously in x-y-z order, and the coordinates of each voxel center can be obtained from the coordinates of the block.
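A sketch of the per-voxel record and block indexing described above (the field names and the exact x-y-z flattening are our assumptions; the paper does not give the precise layout):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    sight: tuple = (0.0, 0.0, 0.0)   # accumulated line of sight V_v
    cov: tuple = (0.0,) * 6          # upper triangle of the symmetric matrix C (Equation (6))
    dist: tuple = (0.0, 0.0, 0.0)    # sum of distance vectors (Equation (7))
    weight: float = 0.0              # accumulated weight (Equation (8))
    inverse_ptr: int = -1            # voxel on the other side of the thin surface (-1: none)
    surface_ptr: int = -1            # address of the surface point (which also stores the normal)

BLOCK_EDGE = 8  # default 8^3 voxels per block

def voxel_offset(x, y, z):
    """Offset of voxel (x, y, z) inside its block, stored contiguously in x-y-z order."""
    return (x * BLOCK_EDGE + y) * BLOCK_EDGE + z
```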
All voxel blocks are organized by a hash table implemented on the GPU [20]. The hash table stores hash entries, each of which points to an allocated voxel block; their correspondence is established by converting spatial coordinates into hash values through a hash function. Nießner et al. [20] provided a complete method for searching, inserting, and deleting data in hash tables, together with a mechanism for resolving hash collisions. Since the hash table is implemented on the GPU, it can index all allocated data in parallel and thus has a considerable speed advantage.
Unlike Nießner et al. [20], who store some voxel blocks on the GPU and others on the CPU, we store all voxel blocks on the CPU to reduce the GPU memory footprint. Only the hash table is stored on the GPU, and each entry in the hash table contains a pointer to a voxel block allocated on the CPU. Once a new entry has been inserted into the hash table, we allocate a voxel block in heap memory on the CPU. The heap is a dynamic 2D array that supports dynamic scaling; it is divided into contiguous blocks (mapped to the size of voxel blocks). Once the allocated heap memory is exhausted, a fixed-length array is appended to the back of the 2D array. Since the array is accessed in parallel, synchronization is necessary: the array's end pointer is incremented with an atomic operation.
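The CPU-side heap behavior can be sketched as follows (a simplification: a Python lock stands in for the atomic pointer increment, and `row_len` is a hypothetical row size):

```python
import threading

class BlockHeap:
    """CPU-side heap of voxel blocks as a dynamic 2D array: when the
    allocated memory is exhausted, a fixed-length row is appended.
    The slot counter is advanced under a lock, standing in for the
    atomic increment used in the parallel implementation."""
    def __init__(self, row_len=1024):
        self.row_len = row_len
        self.rows = [[None] * row_len]
        self.next_free = 0
        self.lock = threading.Lock()

    def alloc(self):
        with self.lock:                 # "atomic" advance of the end pointer
            idx = self.next_free
            self.next_free += 1
            if self.next_free > len(self.rows) * self.row_len:
                self.rows.append([None] * self.row_len)   # grow by one row
        return idx

    def __getitem__(self, idx):
        return self.rows[idx // self.row_len][idx % self.row_len]

    def __setitem__(self, idx, block):
        self.rows[idx // self.row_len][idx % self.row_len] = block
```

Indices handed out by `alloc` are stable, so hash-table entries on the GPU can keep pointing at the same slot even after the array grows.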
Before integrating new TSDF values, all voxel blocks that fall on the input 3D curves and lie within the fundamental envelope of the curves must be allocated. For each line segment p_i p_{i+1} on a curve, we first compute the bounding box determined by p_i p_{i+1}; then, for every voxel in the bounding box, we test whether it falls into the fundamental envelope of p_i p_{i+1}. If so, we look up its entry in the hash table; if no entry is found, we insert one and allocate a voxel block on the heap.
In addition to the voxel block array, the CPU also stores an inverse voxel block array and an array of neighborhood lookup tables, which are likewise maintained as dynamic 2D arrays. Inverse voxel blocks store the opposite sides of a thin surface; each element of an inverse voxel block is also a Voxel struct. Before integrating each new TSDF, we determine whether it belongs to the same side of the surface as that encoded in the non-inverse voxel v. If so, we integrate the new value into v; otherwise, we integrate it into the inverse voxel v′ pointed to by v.pointer. If v.pointer is empty, a new voxel v′ is allocated on the heap of the inverse voxel blocks, and the address of v′ is assigned to v.pointer. Then we compute the new TSDF (the closest point p_c on the curve and the tangent t at p_c) and integrate the new value into its corresponding voxel. When allocating a new inverse voxel v′, we lock voxel v atomically to prevent other threads from writing to it at the same time.
6: if p_v is in the fundamental envelope associated with p_{i,j} p_{i,j+1} then
7: Compute the closest point p_c to p_v on the curve;
8: Compute the tangent t_c at the point p_c;
9: Find the voxel v associated with p_v in the hash table;
12: if the current TSDF observations (p_c and t_c) belong to the surface encoded by v then
13: Update matrix C at voxel v (Equation (6));
14: Update the sum of distance vectors ṽ with p_c − p_v (Equation (7));
15: Update the sum of weights ω at voxel v (Equation (8));
16: Compute the surface point p_s (Equation (4)) and normal n (by SVD decomposition of
17: matrix C);
18: Find the neighborhoods of v;
19: Recompute n (Equation (16)) with the normals of the neighborhood points;
20: else
21: Find the inverse voxel v′;
22: Update matrix C at voxel v′ (Equation (6));
23: Update the sum of distance vectors ṽ with p_c − p_v (Equation (7));
24: Update the sum of weights ω at voxel v′ (Equation (8));
25: Compute the surface point p_s′ (Equation (4)) and normal n′ (by SVD decomposition of
26: matrix C);
27: Find the neighborhoods of v′;
28: Recompute n′ (Equation (16)) with the normals of the neighborhood points.
To search voxel neighborhood information conveniently and to avoid frequent data transmission between the CPU and GPU, we maintain a neighborhood lookup table for all blocks on the CPU. The neighborhood lookup table plays a vital role in the normal filtering operation: it stores the pointers of the 26 neighboring voxel blocks for every voxel block. For a voxel v located in the interior of voxel block B, the 26 voxels adjacent to it can be found within B itself; since the elements of a voxel block are stored contiguously, its neighbors are found by simple subscript addition and subtraction. If v lies on the edge of block B, the neighboring voxels must be found in the voxel blocks adjacent to B. Neighborhood lookup tables are divided into contiguous buckets, each consisting of 26 elements that store pointers to the neighboring voxel blocks. The storage order of these buckets is the same as that of the voxel blocks, so all neighbors of a voxel block can be found through its pointer. Neighborhood lookup tables are updated in real time during scanning: first we find the voxel blocks within the fundamental envelopes of the current input frame, and then find the 26 neighbors of each of them through the hash table and update the corresponding elements of the lookup table. The pseudocode in Algorithm 1 shows the whole procedure of computing the TSDF based on the above data structure; it is assumed that the tangent at each point on the curve has been calculated before the field computation. Figure 12 shows our entire data structure.
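The subscript arithmetic and the fall-back to a neighboring block can be sketched as below (the bucket addressing by block offset is our simplification of the 26-element lookup-table buckets):

```python
from itertools import product

EDGE = 8  # voxels per block edge; block elements stored contiguously in x-y-z order
OFFSETS_26 = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]

def flat(x, y, z):
    """Flattened offset of voxel (x, y, z) inside a block."""
    return (x * EDGE + y) * EDGE + z

def locate_neighbor(x, y, z, dx, dy, dz):
    """Locate the neighbor of voxel (x, y, z) within its block B.
    Inside the block, only subscript arithmetic is needed; on the block
    boundary, the lookup table supplies the adjacent block, addressed
    here by its offset (one of the 26 bucket entries)."""
    nx, ny, nz = x + dx, y + dy, z + dz
    if all(0 <= c < EDGE for c in (nx, ny, nz)):
        return None, flat(nx, ny, nz)                    # same block B
    bucket = (nx // EDGE, ny // EDGE, nz // EDGE)        # which neighbor block
    return bucket, flat(nx % EDGE, ny % EDGE, nz % EDGE) # offset inside it
```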
Figure 12. The whole data structure; first, the world coordinates of the voxel center are mapped through the hash table to get the address of the voxel block on the central processing unit (CPU), and a thin-surface attribution judgment is then made to store the new TSDFs in either the non-inverse block or the inverse block; the subscripts of the neighborhood lookup table correspond to the individual blocks, and each element of the lookup table stores the corresponding neighborhoods of the blocks; only the hash table is stored on the graphics processing unit (GPU), and all other data is stored on the CPU.

Results
Here, we verify the performance of our proposed algorithm through a series of experiments. The experimental data are all registered 3D curves collected by the laser scanner shown in Figure 1. Figure 13 shows an example of the input data of our algorithm. We use the proposed method to integrate all 3D curves of real objects into TSDF, extract the isosurface, and construct the mesh model.

To verify the effectiveness of the proposed thin-surface improvement, we compare the reconstruction results of the proposed method with Tubić et al. [6] for industrial products of different thicknesses. The accuracy of the proposed method is evaluated by comparing it with two existing hand-held 3D laser line scanners: the Creaform HandySCAN700 [7] and the Scantech HSCAN771 [8]. As shown in Table 1, these two scanners achieve a volumetric accuracy of 0.02 mm (part size within 1 m) and are widely used in industrial 3D scanning. We use the cloud-to-mesh (C2M) distance metric to quantify the accuracy of surface reconstruction [61]: the distance from each vertex in the source model to the closest triangle in the reference model is recorded, and two standard statistics are computed over all distances, the Mean and the Standard Deviation (Std.) [61]. In our experiments, the CAD model is used as the reference model. We perform statistical hypothesis tests (Welch's t-test [62] and the F-test [63]) on these two statistics to evaluate the significance of the improvement. All statistical tests are implemented in the R programming language, version 3.0.0 [64].

Table 1. Accuracy of the scanners involved in the comparison.
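The C2M statistics and Welch's t statistic can be sketched as follows (a sketch: the paper uses R's implementations [62,64]; only the t statistic is computed here, not the p-value):

```python
import math

def c2m_stats(distances):
    """Mean and (sample) standard deviation of per-vertex cloud-to-mesh distances."""
    n = len(distances)
    mean = sum(distances) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in distances) / (n - 1))
    return mean, std

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances, used to
    test whether the mean C2M error differs between two reconstructions."""
    ma, sa = c2m_stats(a)
    mb, sb = c2m_stats(b)
    return (ma - mb) / math.sqrt(sa ** 2 / len(a) + sb ** 2 / len(b))
```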
In terms of computational efficiency, we test the speed of the proposed algorithm on three hardware devices with different configurations shown in Table 2. The performance of the devices we used is equivalent to the hardware configuration recommended by mainstream industrial 3D scanner manufacturers (Creaform) [7]. Furthermore, we verify the effectiveness of the proposed method to improve the underlying data structure by comparing its GPU memory usage with VoxelHashing [20]. Table 2. Personal computer (PC) configuration for performance test.

Configuration A: Intel Core i7-7700HQ (2.8 GHz), 32 GB RAM, NVIDIA GeForce GTX1050 (4 GB)
Configuration B: Intel Core i7-7700 (3.6 GHz), 32 GB RAM, NVIDIA GeForce GTX960 (2 GB)
Configuration C: Intel Core i7-7820X (3.6 GHz), 128 GB RAM, NVIDIA GeForce GTX1060 (4 GB)

Figure 14 shows the 3D models of six different objects scanned by the proposed method in our experiment. Figure 14a-e show five thin industrial products of different thicknesses, which are used to verify the effectiveness of the proposed thin-surface optimization; the thickness of each object is indicated in its subfigure. All of the 3D models in the thin-surface verification experiment are generated with the same configuration parameters: voxel size = 0.5 mm, truncation width = 0.9 mm. For comparison purposes, we set the same parameters for Tubić et al. [6]. When using the Creaform [7] and Scantech [8] scanners, we also set the scan resolution to 0.5 mm.

Experimental Object and Parameter Setting
The "relief" (f) is 2.4 m × 1.6 m in size. The total number of curves acquired from the "relief" is 472,612, and the total number of 3D surface points is 3,365,470,052. Since this object has a large size and low self-occlusion, we use it to test the time and memory efficiency of the algorithm. This object ensures that around 7000 laser points are obtained at each frame, basically reaching the upper limit of the single-frame data of the apparatus we used. In the performance test, we compare the speed and memory footprint of the algorithm at different resolutions. The truncation width is set to (√3/2)·δ (δ is the voxel size), which is recommended by Tubić et al. [6].

Figure 15 compares the surface reconstruction results of Tubić et al. [6] with ours for the same thin objects. The front and back views of each model are shown in these figures, and the front and back sides of the surface model are rendered in different colors (the front in blue, the back in yellow). The apparent improvements are marked with red ellipses. From these figures, we can see that many parts of the results of Tubić et al. [6] have been lost, while our results are complete and smooth.
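The truncation-width rule used in the performance test can be sketched in C++ as follows; `truncationWidth` is a hypothetical helper name for illustration, not from the paper's code:

```cpp
#include <cassert>
#include <cmath>

// Truncation width recommended by Tubić et al. [6]:
// w = (sqrt(3)/2) * delta, where delta is the voxel size.
float truncationWidth(float voxelSize) {
    return std::sqrt(3.0f) / 2.0f * voxelSize;
}
```

With the 0.5 mm voxels used in the performance test, this yields a truncation width of about 0.43 mm; note that the thin-surface experiments instead fix the width at 0.9 mm.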

Qualitative Results
We also used the Creaform HandySCAN700 [7] to scan the "sheet metal part" with a resolution of 0.5 mm, as shown in Figure 16. There are some small spikes and holes in Creaform's [7] model, which cannot be found during the scanning process. No such defects were found on the surface of our model. However, compared with Creaform's [7] results, our surface is not smooth enough at the edges. Aliasing artifacts appear at sharp features on the extracted surfaces, since the TSDF is sampled on a regular grid with a fixed step length. In contrast, Creaform [7] uses a higher resolution at sharp features.
The "plastic shell" is a more extreme example with a very complex topology. When scanning this object, it is almost impossible to ensure that the line of sight can be perpendicular to the sheets, so almost all of the sheets are corrupted (Figure 17a). In Figure 17b, Creaform's [7] scanning results show similar problems, and the model surface at the edge of the sheet is messy and incomplete. By repairing the topological structure of these weak topological regions, the shape of these sheets is well restored (the missing parts are caused by occlusion, not by our method).

Quantitative Results
In the accuracy evaluation, we choose two typical thin objects: the sheet metal part and the plastic shell. The topological structure of the sheet metal part is simple, so it is used to verify the effectiveness of the thin-surface improvement (thin-surface attribution judgment) in real-time reconstruction. The plastic shell has a complex topology, which makes it easy to misjudge the attribution of a thin surface during scanning, so we use it to verify the effectiveness of the post-processing method for weak topological region repair.
The accuracy of surface reconstruction is evaluated by the C2M distances [61] between the reconstructions and the CAD models. Figure 18 plots color maps of the C2M distances for the two thin objects. In each subfigure, we visualize the C2M errors through a jet color map and a histogram. Distances greater than 0.2 mm are depicted with deeper colors, and the red/blue gradations indicate positive/negative distances, respectively. We evaluate the C2M errors quantitatively for each method with two statistics, as shown in Table 3. Our real-time processing approach is more accurate than that of Tubić et al. [6] and similar to the existing scanners (Creaform, Lévis, QC, Canada [7] and Scantech [8]). After the proposed post-processing, we obtain the minimum values of both mean and Std.
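As a sketch of how the two statistics reported in Table 3 can be computed from a set of signed C2M distances; the distances themselves are assumed to come from an external cloud-to-mesh comparison, and the struct and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Summarize signed cloud-to-mesh (C2M) distances with the two
// statistics reported in Table 3: mean and (sample) standard deviation.
// Assumes the input vector is non-empty and holds at least two values.
struct C2MStats { double mean; double std; };

C2MStats summarize(const std::vector<double>& d) {
    double sum = 0.0;
    for (double v : d) sum += v;
    double mean = sum / d.size();
    double sq = 0.0;
    for (double v : d) sq += (v - mean) * (v - mean);
    return { mean, std::sqrt(sq / (d.size() - 1)) };  // sample Std.
}
```

A small positive or negative mean indicates a systematic offset toward one side of the CAD surface, while the Std. captures the spread used in the stability comparison above.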
We randomly selected 1000 samples from the C2M distance data population of each method and carried out statistical tests on the samples to evaluate the significance of the proposed improvement. The mean and Std. of each group of samples are listed in Table 4. Normal distribution is the basic hypothesis of a statistical test, so we use Pearson's χ² test [65] to judge whether the samples are well-modeled by a normal distribution. As shown in Table 5, the p-value of all samples is higher than 0.05, so we conclude that the samples are consistent with the normal distribution at the significance level of α = 0.05.
Here we apply Welch's t-test (unequal variances t-test) [62] to compare the significance of the differences in the mean values between groups of samples, since the Std. differs between groups. Tables 6 and 7 show the t-test results of the compared methods for the two thin objects. Let µ_i be the mean of the population corresponding to a column in the table, and µ_j be the mean of the population corresponding to a row. We applied a one-tailed test [66] to each cell in the table at a significance level of 0.05; that is, H0: µ_i = µ_j, H1: µ_i > µ_j. If p < 0.05, we reject the null hypothesis and conclude that µ_i and µ_j differ significantly. For the two thin objects we selected, all the methods involved in the comparison have smaller mean values than Tubić et al. [6], but the differences between the other methods are not significant.
1 Meaning of values in each cell: p-value (t value, degrees of freedom ν); * significant difference (p < 0.05).
1 Meaning of values in each cell: p-value (F value); the degree of freedom of each term is 999; * significant difference (p < 0.05).
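The Welch statistic and the Welch–Satterthwaite degrees of freedom used in Tables 6 and 7 can be sketched as follows; the function name and the sample data in the test are illustrative only, not the paper's measurement data:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Welch's unequal-variances t statistic and the Welch–Satterthwaite
// degrees of freedom, as used for the between-method comparisons.
struct Welch { double t; double nu; };

Welch welchTest(const std::vector<double>& a, const std::vector<double>& b) {
    auto meanVar = [](const std::vector<double>& x, double& m, double& v) {
        m = 0.0; for (double e : x) m += e; m /= x.size();
        v = 0.0; for (double e : x) v += (e - m) * (e - m);
        v /= (x.size() - 1);  // sample variance
    };
    double m1, v1, m2, v2;
    meanVar(a, m1, v1);
    meanVar(b, m2, v2);
    double se1 = v1 / a.size(), se2 = v2 / b.size();  // squared standard errors
    double t = (m1 - m2) / std::sqrt(se1 + se2);
    // Welch–Satterthwaite approximation of the degrees of freedom.
    double nu = (se1 + se2) * (se1 + se2) /
                (se1 * se1 / (a.size() - 1) + se2 * se2 / (b.size() - 1));
    return { t, nu };
}
```

The resulting t is then compared against the Student-t distribution with ν degrees of freedom to obtain the one-tailed p-values reported in the tables.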

Performance
We have implemented several Single Instruction Multiple Data (SIMD) [67] instruction set versions of the proposed algorithm in C++, and the program was compiled with Visual Studio 2015. Figure 19a compares the frame rates of the different instruction set versions of the proposed online reconstruction method on the hardware devices shown in Table 2. The voxel resolution of scanning is set to 0.5 mm, and about 40,000 voxels are inserted in every single frame. Although the algorithm already uses CPU multithreading and GPU parallelism on all devices, the use of the SIMD instruction sets still increases the frame rate. Our algorithm can achieve a real-time frame rate above 60 FPS, since most current CPUs support SSE or newer instruction sets. Figure 19b compares the time per frame of our method with that of Tubić et al. [6]. Although our method adds operations such as thin-surface attribution judgment and normal filtering, the increase in time consumption is small due to the superior performance of the underlying data structure we adopted. The comparison was made on a device with Configuration A. Figure 20 compares the CPU memory usage and GPU memory usage of our proposed method and the VoxelHashing [20] method, also on a device with Configuration A. When the voxel resolution was increased to 2 mm, VoxelHashing [20] had already filled all of the 4 GB of GPU memory and caused the program to crash, which significantly limits the algorithm's practicability, since most GPU devices on the market support only 4 GB of memory. In contrast, the GPU memory footprint of our proposed method was insignificant compared with VoxelHashing [20]. Although the CPU memory footprint increased, it was acceptable (up to 6 GB); after all, most computer devices support CPU memory expansion.
Figure 19. (a) Frame rates of the different instruction set versions of the proposed method on the hardware devices in Table 2; the blue bars represent the algorithm implemented with the Advanced Vector Extensions instruction set (AVX) [68], the red bars represent the Streaming SIMD Extensions set (SSE) [69], and the green bars represent the version with no SIMD instruction sets; (b) performance comparison with Tubić et al. [6].
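For illustration, the per-voxel fusion arithmetic that the SIMD builds accelerate can be sketched as a scalar weighted running average; the struct layout and names are assumptions, not the paper's actual code. The SSE and AVX versions compared in Figure 19a apply the same arithmetic to 4 or 8 voxels per instruction, respectively:

```cpp
#include <cassert>
#include <cmath>

// One voxel of the truncated signed distance field.
// Illustrative layout; the paper's implementation may differ.
struct Voxel { float tsdf; float weight; };

// Weighted running average of the signed distance: the classic
// TSDF fusion update applied to every voxel touched by a frame.
void fuse(Voxel& v, float d, float w) {
    v.tsdf = (v.tsdf * v.weight + d * w) / (v.weight + w);
    v.weight += w;
}
```

Because the update is a few multiplies, adds, and one divide per voxel with no branching, it vectorizes well, which is consistent with the frame-rate gains of the SSE and AVX builds reported above.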


Discussion
From the qualitative results shown in Figure 15, we can see that our proposed modifications bring visible progress in the reconstruction of thin surfaces compared with the original method of Tubić et al. [6]. The surface reconstructed by our method is complete and smooth, with almost no visible interference between the opposite sides of the thin surface, while Tubić et al. [6] cannot achieve the desired results. Both methods use the same input data and configuration parameters (voxel size and truncation width). It can be seen from Figure 17 that the proposed post-processing method is useful and necessary for thin objects with complex topology.
The quantitative analysis confirms the conclusion of the qualitative evaluation. Whether for a thin object with a simple topological structure (sheet metal part) or a complex one (plastic shell), the deviation between our reconstruction and the CAD model is the smallest (minimum mean and Std.). Taking the mean as the evaluation metric, our method significantly improves the accuracy of the original method of Tubić et al. [6], while showing accuracy similar to the two existing scanners (Creaform [7] and Scantech [8]). If Std. is used as the evaluation metric, our method is significantly different from the other existing methods (Tubić et al. [6], Creaform [7], and Scantech [8]): our method is more stable than the existing approaches. Moreover, for the thin object with a complex topological structure, the proposed post-processing method can significantly improve the reconstruction accuracy.
As can be seen from Figure 19, the proposed algorithm achieves a frame rate of more than 60 FPS on the hardware devices used in the experiment. These devices are not expensive, and their performance is comparable to the hardware configuration recommended by mainstream industrial 3D scanner manufacturers. In terms of efficiency, the proposed algorithm therefore meets the requirements of industrial real-time 3D reconstruction. As can be seen from Figure 20, the proposed method effectively reduces the GPU memory footprint of voxel hashing [20]. Although the CPU memory footprint increases, it stays within a reasonable range; after all, most computer devices support CPU memory expansion, while GPU memory capacity cannot be expanded. Therefore, our algorithm has high practicability.
Although the proposed algorithm improves the accuracy of thin-surface reconstruction and can meet the scanning requirements of most thin industrial products, the post-processing method of thin-surface topology repair still has some limitations on parameter setting. When the voxel size or truncation width is too large relative to the thickness of the object, the topology-repairing method loses its effect. This is because our topological repair is based on the assumption that some voxels in the distance field are not interfered with by the overlap of the truncation bands on the thin surface. The topology repair method uses these unaffected voxels as input to fit the surface and infer the values of the voxels in the affected area. If the voxel size or the truncation width is so large that the truncation bands on opposite sides of the thin surface overlap entirely, then the surface-fitting method loses reliable input, which makes it impossible to reconstruct an accurate surface. Therefore, for thin objects with complex topology, we recommend parameter configurations with voxel size ≤ 0.5 mm and truncation width ≤ 1 mm. There is no such limitation for thin objects with simple topology, such as a thin plate.
We use hash tables to compress the space and only reconstruct densely near the observed surface to eliminate the memory footprint of the empty space. However, it is not necessary to use regular grids of fixed voxel size to represent all parts of the scene. For example, a desktop needs less detail than a highly structured keyboard on it. Selecting different resolutions to represent different parts of the scene can reduce memory consumption and improve the reconstruction speed at the same time. It is also advantageous to use small voxel sizes to represent areas with complex topology or sharp features. Therefore, in the future, we will study how to build a 3D model with different resolutions adaptively according to the shape of the surface.
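A minimal sketch of the hash-table idea discussed above, in the spirit of VoxelHashing [20]: voxel blocks are allocated lazily and indexed by a spatial hash of their integer coordinates, so empty space costs no memory. The prime-multiplier hash and the map layout are common choices assumed here, not necessarily the paper's:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Integer coordinates of a voxel block (a small, fixed-size brick of voxels).
struct BlockKey {
    int x, y, z;
    bool operator==(const BlockKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

// Prime-multiplier hash commonly used for 3D integer coordinates.
struct BlockHash {
    std::size_t operator()(const BlockKey& k) const {
        return static_cast<std::size_t>(
            (static_cast<std::uint32_t>(k.x) * 73856093u) ^
            (static_cast<std::uint32_t>(k.y) * 19349669u) ^
            (static_cast<std::uint32_t>(k.z) * 83492791u));
    }
};

// Map from block coordinates to an index into dense block storage.
// Only blocks near the observed surface ever get an entry.
using BlockMap = std::unordered_map<BlockKey, int, BlockHash>;
```

Only the blocks intersected by the truncation band around the measured surface are inserted; a lookup miss simply means "empty space", which is what keeps the memory footprint proportional to the surface area rather than the scene volume.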

Conclusions
An improved 3D reconstruction method for thin surfaces based on a hand-held laser line scanner is proposed. Aiming at the problems of the TSDF method in the 3D reconstruction of thin object surfaces, we propose two main improvements: first, in the real-time scanning process, we use the angle between the surface normal and the line of sight to determine the attribution of the thin surface, avoiding mutual interference between observations belonging to opposite sides; second, we exploit a post-processing method for automatically detecting and repairing topological errors in areas where misjudgment of thin-surface attribution may occur. In addition, we improve the underlying data structure of the existing TSDF method to reduce GPU memory usage. Qualitative and quantitative experimental results provide evidence that the proposed 3D reconstruction approach for thin surfaces is valid and significantly improves the accuracy of thin-surface reconstruction. In comparison with the state-of-the-art laser line scanners, our method shows similar (C2M distance mean [61]) or even better accuracy (C2M distance Std. [61]). In terms of computing efficiency, the proposed method effectively reduces GPU memory consumption and guarantees a frame rate of more than 60 FPS with a GPU memory footprint under 500 MB.
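The thin-surface attribution judgment summarized above can be sketched as a simple sign test on the angle between the surface normal and the line of sight; the 90° threshold and all names here are illustrative assumptions, not the paper's exact criterion:

```cpp
#include <cassert>

// Minimal 3D vector; both inputs are assumed to be unit length.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// viewDir points from the surface point toward the sensor.
// A positive dot product means the normal faces the sensor, so the
// measurement is attributed to the front side of the thin sheet;
// otherwise it is attributed to the back side and stored separately.
bool facesFront(const Vec3& normal, const Vec3& viewDir) {
    return dot(normal, viewDir) > 0.0f;
}
```

Storing front-side and back-side measurements in separate distance fields is what prevents the truncation bands of the two sides from overwriting each other.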
Although the proposed method can reconstruct a thin surface accurately in most cases, there are still some limitations in parameter setting when reconstructing thin objects with complex topology. If the voxel size or truncation width is too large, mutual interference between the opposite sides of the thin surfaces may still occur. Meanwhile, it is a waste of memory to represent all parts of the scene with a regular grid. In the future, we plan to further improve the robustness and memory efficiency of complex thin-surface reconstruction by adaptively setting different resolutions to the reconstructed model according to the shape of the surface.

Conflicts of Interest:
The authors declare no conflict of interest.