Study of Damage Quantification of Concrete Drainage Pipes Based on Point Cloud Segmentation and Reconstruction

The urban drainage system is an important part of the urban water cycle. However, owing to the aging of drainage pipelines and other external factors, damage such as cracks, corrosion, and deformation of underground pipelines can cause serious consequences such as urban waterlogging and road collapse. At present, the detection of underground drainage pipelines mostly focuses on the qualitative identification of pipeline damage and cannot analyze the damage quantitatively. Therefore, a method to quantify the damage volume of concrete pipes that combines surface segmentation and reconstruction is proposed. An RGB-D sensor is used to collect the damage information of the drainage pipeline, and the collected depth frames are registered to generate the pipeline's surface point cloud. Voxel sampling and Gaussian filtering are used to improve data-processing efficiency and reduce noise, respectively, and the RANSAC algorithm is used to remove the pipeline's surface information. The ball-pivoting algorithm is used to reconstruct the surface of the segmented damage data together with the pipe's surface information, and finally the damage volume is obtained. To evaluate the method, we conducted experiments on real-world concrete pipes. The measurement results show that the proposed method achieves an average relative error of 7.17% for the external damage volume of concrete pipes and an average relative error of 5.22% for the internal damage measurements.


Introduction
As an important guarantee of urban civilization and healthy human life, urban drainage systems separate sewage from clean water, thereby improving the quality of human life [1]. With economic growth and the continuous expansion of urban scale, the total length of drainage pipelines in China continues to grow. Consequently, the aging problem of the pipeline system is becoming more and more serious. Road collapse and environmental pollution caused by pipeline aging have had considerable social impact [2]. Therefore, municipal departments need to spend large amounts of money and resources on the maintenance of sewage pipelines [3]. In order to reduce the serious consequences caused by drainage pipe defects, it is very important to detect and evaluate the defects as early as possible [4]. Currently, the main methods of pipeline inspection include sonar inspection [5], pipeline periscope inspection [6], ground-penetrating radar systems [7], pipeline closed-circuit television inspection [4], etc. Sonar detection can detect mud and foreign matter in pipelines well, but its ability to detect structural defects is poor. Pipeline periscope detection has the advantages of low cost and high speed, and clear image data can be obtained for short drainage pipelines, but it cannot be used in pipelines with high water levels. Ground-penetrating radar (GPR) images can represent subsidence or collapse around a pipeline well, but the method performs poorly for other pipeline defects. Closed-circuit television (CCTV) inspection has been widely used in pipeline detection both domestically and abroad [5]. Compared with traditional detection methods, CCTV has a higher level of intelligence and provides more concise and legible image results than laser detection and radar tests.
The above four methods have the advantages of good accuracy and little influence from testing environments. However, due to the limitations of the instrument itself, these four methods can only undertake qualitative analysis for pipeline damage, not quantitative assessments.
In recent years, the use of 3D information for structural health analysis has become a new trend. 2D images can be used for some structural damage detection tasks, such as crack detection. 3D information provides solutions for other types of detection; for example, the depth and volume of a damaged part can be calculated from its 3D information. Some authors used Structure-from-Motion (SfM) [8] to fuse two-dimensional images and scale parameters to establish a 3D virtual model. Mahami et al. [9] used SfM combined with an MVS algorithm to detect the target so as to realize automatic detection of a whole project's progress. Golparvar-Fard et al. [10] verified that SfM can support remote assessment of infrastructure before and after disasters. Torok et al. [11] developed a new crack recognition algorithm based on the SfM method, which used robots to collect post-disaster information in order to complete 3D surface damage detection and analysis. Nowak et al. [12] used TLS to complete an overall scan of a historic building structure and obtained a nearly complete architectural geometry.
Other authors have used 3D laser scanning equipment to quickly capture infrastructure's structural information to build 3D point cloud models. Youn et al. [13] used 3D scanners and Revit to build a platform containing various pieces of historical information that can be used for digital twin to record wood deformation and crack information. Zeibak-shini et al. [14] used laser scanning to compare a generated damage BIM model with a built BIM model to complete a preliminary estimation of the damaged parts of a reinforced concrete frame. Wang et al. [15] used a 3D point cloud as the quality detection of assembled building wall panels, which can quickly and efficiently classify and segment the wall panels that need to be corrected. Turkan et al. [16] completed concrete crack detection by combining a wavelet neural network with 3D ground laser scanning data. Liu et al. [17] used a 3D camera combined with a classical edge detection algorithm and fuzzy logic detection algorithm to perform clear edge detection on installed panels.
Another way to obtain 3D point clouds is to use a depth camera (RGB-D) that incorporates depth information. Some authors put forward the method of obtaining 3D point cloud data from cheap depth cameras to quantify the damage of road potholes [18]. All of these methods quantify the damage by fixing the distance and angle between the camera and the measured object, which limits their use in other scenes and is not reliable for automatic detection.
3D background removal is an important problem in surface damage research. At present, 3D point cloud segmentation methods mainly include the method based on regional growth, the method based on cluster features, and the method based on model fitting. The region growth method proposed by Besl and Jain (1988) [19] is mainly divided into two stages: firstly, seed points are selected; secondly, adjacent points are merged according to certain standards (normal vectors within a certain threshold range). Tovari and Pfeifer [20] proposed the point-based region growth algorithm which combined adjacent points into the same set according to their normal vectors and distance thresholds. The method based on clustering features divides the data set into different classes according to certain standards (distance or normal vectors). Biosca and Lerma [21] developed a fuzzy clustering segmentation method to merge neighboring points whose distance is less than the set threshold into the nearest cluster. Methods based on model fitting mainly have two algorithms, namely, the Hough Transform (HT) algorithm proposed by Ballard et al. [22] and the random sample consistency algorithm (RANSAC) proposed by Fischler et al. [23]. The HT algorithm uses a voting method to identify parameterized models. Rabbani et al. [24] completed automatic detection of cylinder models in point clouds based on the HT algorithm. Although the HT algorithm can segment 3D point clouds well, it has problems such as the consumption of a lot of memory and computing time [25]. The RANSAC algorithm randomly selects data points at first, estimates model parameters according to the selected data points, then puts the remaining points into the model, and, finally, selects the model with the maximum number of points as the best model. 
The RANSAC algorithm, firstly, randomly selects data points, estimates model parameters according to the selected data points, then puts the remaining points into the model, and, finally, selects the model with the maximum number of points as the best model. Chen et al. [26] segmented polyhedral roofs using an improved RANSAC algorithm and classified primitive elements using a region growth algorithm.
Surface reconstruction is an important method for obtaining dimension data from the damage data after segmentation. At present, the common surface reconstruction methods are polygonal mesh reconstruction, parametric surface reconstruction, and implicit surface reconstruction. The most widely used is polygonal mesh reconstruction, which uses a simple mathematical model to describe object surfaces with points, lines, and planes. Delaunay triangulation [27] is a classical method of polygonal mesh reconstruction that was first proposed by the Russian mathematician Boris Delaunay. Delaunay triangulations have two characteristics: each Delaunay triangle's circumcircle contains no other points in the planar domain (the empty-circumcircle property); and, if the shared diagonal of two adjacent triangles is swapped, the minimum of the six interior angles will not increase (the minimum-angle-maximization property). Delaunay triangulation has the advantages of regularity and optimality, and many researchers have developed polygonal mesh reconstruction methods based on it. Boissonnat and Cazals [28] used natural neighborhood interpolation to construct smooth surfaces based on Delaunay triangulation and Voronoi diagrams. Amenta et al. [29] reconstructed surfaces from disordered point clouds based on Voronoi diagrams. Bernardini et al. [30] proposed a newer polygonal mesh reconstruction method, the ball-pivoting algorithm (BPA). The basic principle of the algorithm is to set a ball with radius ρ that contains only the three data points forming a triangle; the ball continues to pivot around the surface of the point cloud and generates the next triangle until all the data in the data set have been processed. The BPA method has the advantages of strong robustness and high efficiency.
In order to quantify the damage volume of underground pipelines under the interference of a complex environment, we propose a quantitative method for assessing the damage volume of underground drainage pipelines that integrates 3D point cloud surface segmentation and reconstruction. On the basis of damage segmentation, damage reconstruction and surface reconstruction were carried out with the help of pipeline surface information, and the algorithm has strong portability. The method consists of four parts: (1) conversion from 2D depth frames to 3D point clouds according to the transformation between the acquisition instrument's internal coordinates and world coordinates; (2) preprocessing of the data set by voxel sampling and Gaussian filtering; (3) estimation of the surface model parameters by the random sample consensus algorithm and removal of the pipeline surface point cloud; (4) after the damage data were merged with the surface point cloud, use of the BPA algorithm to complete the surface reconstruction and obtain the real damage volume. The rest of this paper comprises Section 2: Concrete Pipeline Damage Volume Quantitative Detection Framework, Section 3: Experiments, Section 4: Performance Analysis, Section 5: Discussion, and Section 6: Conclusion.

Concrete Pipeline Damage Volume Quantitative Detection Framework
The main objective of this study was to accurately quantify concrete pipe spalling damage using an inexpensive depth camera. As shown in Figure 1, the algorithm can be divided into four steps: (1) depth data acquisition based on an RGB-D sensor; (2) preprocessing of the point cloud data; (3) pipeline surface segmentation and damage clustering; (4) surface reconstruction of the entire damage point cloud from the surface and damage point clouds provided by Step 3, and quantification of the volume.

Depth Data Acquisition Based on an RGB-D Sensor
The specifications of the acquisition device are listed in Table 1. The device provided 3840 × 2160 pixel RGB images and 1024 × 1024 pixel depth images. The depth camera worked on the time-of-flight (ToF) principle and was composed of an infrared emitter and an infrared sensor: the emitter transmitted a light pulse toward the observed object, and the sensor received the pulse reflected from the object. Since the distance between the infrared sensor and the infrared emitter was known, the equipment could compute the 3D coordinates corresponding to each sensor pixel from the travel time of the infrared light.

 
3D point cloud data were obtained through a coordinate-system transformation of the original depth data: each depth frame was transformed from a 2D image in the depth-camera coordinate system to a 3D point cloud in the world coordinate system. The space point Q(x, y, z) and its mapping point q(u, v, d) on the depth image (d is the depth value at this point) are shown in Figure 2, where OXCYCZC was the world coordinate system, OWXWYWZW was the camera coordinate system, and UV was the depth image coordinate system. Formulas (1)-(3) were obtained according to the position relations in space, where fx and fy represent the focal lengths of the camera along the X-axis and Y-axis, cx and cy represent the position of the center of the camera's aperture, and s is the scale factor of the depth map. Formulas (1)-(3) were combined into the matrix expression of Formula (4). As shown in Formulas (5)-(7), the matrix C was the internal parameter matrix of the camera; because the world coordinate origin and the camera origin coincided, the rotation matrix was R and the translation vector was T. MATLAB provided convenient access to the raw depth data.

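To make the transformation concrete, the following minimal sketch (with illustrative intrinsics, not the paper's calibrated values) back-projects one depth pixel into a 3D camera-frame point, assuming the standard pinhole relations u = fx·x/z + cx, v = fy·y/z + cy, d = s·z implied by the text:

```python
def depth_pixel_to_point(u, v, d, fx, fy, cx, cy, s=1.0):
    """Back-project a depth-image pixel (u, v) with stored depth value d
    into a 3D point (x, y, z) in the camera frame, inverting the pinhole
    relations u = fx*x/z + cx, v = fy*y/z + cy, d = s*z."""
    z = d / s                  # recover metric depth from the stored depth value
    x = (u - cx) * z / fx      # invert u = fx * x / z + cx
    y = (v - cy) * z / fy      # invert v = fy * y / z + cy
    return (x, y, z)

# Example with illustrative (hypothetical) intrinsics:
fx = fy = 500.0
cx = cy = 512.0
point = depth_pixel_to_point(612, 412, 1000.0, fx, fy, cx, cy)
```

Applying this per pixel (plus the rotation R and translation T) yields the 3D point cloud of one depth frame.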

Voxel Sampling
In the process of scanning, 3D laser scanners generate point clouds of different densities depending on the scanning distance, which complicates subsequent 3D point cloud processing (such as point cloud registration). Moreover, an excessive number of sampled points makes subsequent calculations complicated and reduces computational efficiency. The main methods for downsampling point clouds are random downsampling and the voxelized grid method. Random downsampling has high computational efficiency but preserves point cloud shape poorly. The research focus of this paper is obtaining the volume of pipeline damage, which places higher requirements on the detailed representation of the 3D point cloud. Therefore, the voxel grid method [31], a downsampling method that better preserves the 3D point cloud's shape, was selected in this study.
The original point cloud M = {m_i}, i = 1, …, p, m_i = (x_i, y_i, z_i) was input to determine its location range in space, and the spatial voxel grid was divided according to an appropriate cell size. After the division, the point cloud was wrapped by the cubic grid. Figure 3 is the schematic diagram of the voxel segmentation of the point cloud data, and Figure 4 shows the point cloud in a voxel.


After the voxel meshes were divided, the complete damage point cloud was divided into several point cloud subsets. The point cloud data within each voxel were compressed according to certain criteria while retaining the regional characteristics of the subset. The algorithm selected the point m_j(x, y, z) closest to the voxel's center of gravity G(x, y, z) as the characteristic information of that voxel. As shown in Figure 5, D_X, D_Y, and D_Z were the length, width, and height of the unit voxel, respectively. Within a single voxel, the distances from points m_p, m_q, m_k, m_s, and m_j to the center of gravity G were compared, and m_j was finally selected as the feature point describing the voxel.
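A compact sketch of this voxel feature-point selection, assuming "center of gravity" means the centroid of the points falling in each voxel (the sample coordinates below are hypothetical):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: bucket points into cubic voxels, then keep,
    per voxel, the input point closest to the center of gravity (centroid)
    of the points inside that voxel."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)   # voxel index of point p
        buckets[key].append(p)
    kept = []
    for pts in buckets.values():
        n = len(pts)
        g = tuple(sum(c) / n for c in zip(*pts))       # center of gravity G
        # keep an original point nearest to G, not G itself
        kept.append(min(pts, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, g))))
    return kept

sampled = voxel_downsample(
    [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9), (5.0, 5.0, 5.0)], 1.0)
```

Keeping a real input point (rather than the centroid) preserves measured surface detail, which matters for the volume estimates targeted here.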

Gaussian Filtering to Reduce Surface Noise
Due to the influence of equipment accuracy, operator experience, and environmental factors, as well as the diffraction characteristics of electromagnetic waves, changes in the surface properties of the measured object, and the effects of data stitching and registration, some noise points inevitably appear in acquired point cloud data. Besides random noise caused by external interference such as line-of-sight occlusion and obstacles, point cloud data usually also contain discrete points far away from the main body of the measured object, namely, outliers. Different acquisition devices produce different point cloud noise structures. Other tasks that can be accomplished through filtering and resampling include hole repair, minimization of information loss, and compression of massive point cloud data.


In order to eliminate outliers that do not conform to their neighborhood, introduced by the 3D point cloud acquisition equipment when sampling the environment, a statistical outlier elimination method [32] was used. First, for the input point cloud M = {m_i}, i = 1, …, p, m_i = (x_i, y_i, z_i), the distance d_ij from each point to its surrounding neighborhood points was calculated. Second, the distances were modeled by a Gaussian distribution d ~ N(µ, δ), and the mean µ and standard deviation δ of the point-to-neighborhood distances were calculated, as shown in Formulas (8) and (9). Finally, the average nearest-neighbor distance of each point was checked: if it exceeded the threshold determined by µ and δ, the point was taken as an outlier and removed from the point cloud data set. In short, this method reduced the number of points, shortened the processing time of subsequent steps, and improved processing accuracy.
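A minimal sketch of this statistical outlier removal, assuming the per-point statistic is the mean distance to the k nearest neighbors and the cutoff is µ plus a multiple of δ (as is typical for this filter); brute-force neighbor search is used for brevity:

```python
import math

def remove_statistical_outliers(points, k=2, std_mult=1.0):
    """Statistical outlier removal: for each point compute the mean distance
    to its k nearest neighbors, fit a Gaussian (mu, delta) to those mean
    distances over the whole cloud, and drop points whose mean neighbor
    distance exceeds mu + std_mult * delta."""
    mean_dists = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(ds[:k]) / k)

    n = len(points)
    mu = sum(mean_dists) / n                                   # Formula (8)
    delta = math.sqrt(sum((d - mu) ** 2 for d in mean_dists) / n)  # Formula (9)
    thresh = mu + std_mult * delta
    return [p for p, d in zip(points, mean_dists) if d <= thresh]
```

A point far from the cloud has a large mean neighbor distance and falls above the threshold, so it is discarded while dense surface points survive.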
The RANSAC algorithm considers a sampled data set of size n for initial model building. For the plane model, three points should be randomly sampled to determine the parameters a, b, and c of the plane equation in Formula (10). The cylindrical model equation is shown in Formula (11); the parameters to be determined were the cylinder center point (x_0, y_0, z_0), the axis direction vector (a, b, c), and the cylinder radius r_0, which were determined by randomly sampling seven data points.
According to the model parameters estimated from the random sample, the remaining data points were substituted into the model, the error was calculated, and the remaining points were screened by an appropriate distance threshold. For the plane model, the distance D was derived as shown in Formula (12); if the error D of a point was less than the threshold σ, the point was an inlier and was included in the model. The cylindrical model is treated similarly; its error f, shown in Formula (13), is the residual obtained when the point is substituted into the cylinder equation, i.e., the difference between the squared distance from the point to the cylinder axis and the squared radius. If the number of inliers was greater than 60% of the total data set, the parameters were kept as a candidate model. We repeated the above operations until the end of the iterations and finally selected the model with the most inliers as the segmentation model.
The RANSAC algorithm is sensitive to the threshold setting: if the threshold is set too small, the algorithm becomes unstable, and if it is set too large, the algorithm fails. For the cost function used in the RANSAC algorithm, when a point's error was greater than the threshold, its cost contribution was changed from a constant value of 1 to the threshold value itself.
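The plane branch of the procedure above can be sketched as follows (a simplified RANSAC without the 60% early-acceptance rule or the threshold-capped cost; the iteration count, seed, and threshold are illustrative):

```python
import random

def ransac_plane(points, threshold, iters=200, seed=0):
    """Minimal RANSAC plane segmentation: repeatedly sample three points,
    fit the plane through them, count points within `threshold` of the
    plane, and keep the model with the most inliers."""
    rng = random.Random(seed)

    def plane_from_three(p1, p2, p3):
        # normal n = (p2 - p1) x (p3 - p1); plane: n . x + d = 0
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        norm = sum(c * c for c in n) ** 0.5
        if norm == 0:
            return None                      # degenerate (collinear) sample
        n = tuple(c / norm for c in n)
        d = -sum(n[i] * p1[i] for i in range(3))
        return n, d

    best_model, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_three(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Removing the returned inliers from the cloud leaves the off-surface points, i.e., the damage candidates.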

Damage Clustering Segmentation
The Euclidean clustering method was used for damage clustering, and a KD tree was used to speed up the Euclidean clustering algorithm by grouping and indexing the 3D point cloud data. The algorithm aggregates points whose mutual distance is below a set threshold into the same set via the KD tree.
The Euclidean clustering results of the damage are shown in Figures 8 and 9 for the plane and cylinder models, respectively.
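The clustering step can be sketched as follows; a real implementation would answer the neighbor queries with a KD tree as described above, whereas this illustrative version uses brute-force search:

```python
import math

def euclidean_clusters(points, tol):
    """Euclidean clustering: grow clusters by repeatedly absorbing any point
    within `tol` of a point already in the cluster. Brute-force neighbor
    search is used here for clarity; a k-d tree would accelerate it."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters
```

Each returned cluster corresponds to one connected damage region, so separate defects can be quantified individually.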

Volume Quantization

Damage Reconstruction
After the damage point cloud was obtained, it was reconstructed according to the surface segmentation model. This paper adopted a parametric model projection method. For the plane model, the damage point cloud was mapped according to Formula (10), using the fitted parameters a, b, and c to project the 3D data onto the two-dimensional plane. If the damage point cloud was segmented according to the cylinder model, the parameters x_0, y_0, z_0, a, b, c, and r_0 of Formula (11) were used to transform it into a cylinder point cloud. The damaged surface data and the damage data were then combined to form complete damage data.
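For the plane case, the projection amounts to moving each point along the plane normal by its signed distance. A small sketch, assuming the plane is given as n·x + d = 0 with a unit normal n (a common normalization of the fitted plane parameters):

```python
def project_to_plane(points, n, d):
    """Project 3D points onto the plane n . x + d = 0 (n a unit normal):
    each point is moved along the normal by its signed distance to the
    plane, flattening the cloud onto the fitted surface model."""
    projected = []
    for p in points:
        dist = sum(n[i] * p[i] for i in range(3)) + d   # signed distance
        projected.append(tuple(p[i] - dist * n[i] for i in range(3)))
    return projected
```

Closing the damage cloud with these projected surface points yields a watertight shape whose volume can then be measured.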

Surface Reconstruction
The key step of volume quantization was 3D point cloud surface reconstruction, using the Alpha Shapes algorithm for 3D point cloud contour detection. As shown in Figure 10, the main flow of the algorithm is as follows:
(1) Select any point q in the point cloud data and set the rolling circle radius α. All data points within a distance of 2α from point q are recorded as set Q.
(2) Select a data point q1 in the set Q. The segment between q and q1 is a chord, and there are two circles of radius α passing through both points; the equations of these circles can be calculated from the position relations in space.
(3) Calculate the distances from the other points in set Q (with q and q1 removed) to the centers of the two circles. If, for at least one of the circles, all these distances are greater than α (that circle contains no other points), point q is a boundary point; then judge the next point.
(4) If some points in Q lie within both circles (their distance to each center is less than α), rotate to another data point q1 in the set Q and repeat the above calculation and comparison. If some q1 makes conditions (2) and (3) hold, point q is a boundary point, and the next point is judged.
(5) If no such q1 exists, point q is not a boundary point, and the next point is judged.
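Step (2) reduces to finding the two centers that lie at radius α from both q and q1. A small 2D sketch of that construction (the two points are assumed distinct and no farther apart than 2α):

```python
import math

def alpha_circle_centers(q, q1, alpha):
    """Return the centers of the two circles of radius `alpha` passing
    through both q and q1 (2D points). Returns None if the chord q-q1 is
    longer than 2*alpha, in which case no such circle exists."""
    mx, my = (q[0] + q1[0]) / 2, (q[1] + q1[1]) / 2   # chord midpoint
    half = math.dist(q, q1) / 2                        # half chord length
    if half > alpha:
        return None
    h = math.sqrt(alpha * alpha - half * half)         # midpoint-to-center offset
    # unit vector perpendicular to the chord
    ux, uy = (q1[0] - q[0]) / (2 * half), (q1[1] - q[1]) / (2 * half)
    px, py = -uy, ux
    return ((mx + h * px, my + h * py), (mx - h * px, my - h * py))
```

Testing each candidate circle for emptiness against the remaining points of Q then implements steps (3)-(5).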
After the contour of the data set was detected by the Alpha Shapes algorithm, the edge points were connected to generate a triangulated mesh, and the 3D point cloud surface reconstruction was completed, as shown in Figure 11. The damage volume was finally obtained from the reconstructed surface.
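The text does not spell out the final volume computation; one standard way to obtain the volume of a closed, consistently oriented triangle mesh is to sum signed tetrahedron volumes via the divergence theorem, sketched here:

```python
def mesh_volume(vertices, triangles):
    """Volume of a closed triangle mesh: sum the signed volumes of the
    tetrahedra formed by the origin and each (consistently oriented)
    triangle. `triangles` holds index triples into `vertices`."""
    vol = 0.0
    for i, j, k in triangles:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = vertices[i], vertices[j], vertices[k]
        # scalar triple product a . (b x c) / 6
        vol += (ax * (by * cz - bz * cy)
                + ay * (bz * cx - bx * cz)
                + az * (bx * cy - by * cx)) / 6.0
    return abs(vol)
```

With a watertight reconstruction of the damage region, this sum directly yields the damage volume.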

Volume Quantization

Damage Reconstruction
After we obtained the damage point cloud, it was reconstructed according to the surface segmentation model. This paper adopted the method of parametric model projection. For the plane model, the obtained damage point cloud was mapped according to Formula (10): the three parameters a, b, and c were set to map the 3D data onto the two-dimensional plane. If the damage point cloud was segmented according to the cylinder model, the parameters x0, y0, z0, a, b, c, and r0 were set according to Formula (11) to transform it into a cylinder point cloud. Finally, the damaged surface data and the damage data were combined to form the complete damage data.
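Since Formulas (10) and (11) are not reproduced here, the projection step can only be sketched under assumptions: we take the plane model as a·x + b·y + c·z + d = 0 and the cylinder model as an axis through (x0, y0, z0) with direction (a, b, c) and radius r0. The function names are illustrative, not the paper's:

```python
import numpy as np

def project_to_plane(points, normal, d=0.0):
    """Orthogonally project 3D points onto the plane a*x + b*y + c*z + d = 0."""
    n = np.asarray(normal, dtype=float)
    pts = np.asarray(points, dtype=float)
    # Signed distance of each point from the plane, scaled by |n|^2.
    dist = (pts @ n + d) / (n @ n)
    return pts - np.outer(dist, n)

def project_to_cylinder(points, axis_point, axis_dir, r0):
    """Radially project 3D points onto the cylinder of radius r0 around the axis."""
    p0 = np.asarray(axis_point, dtype=float)
    a = np.asarray(axis_dir, dtype=float)
    a /= np.linalg.norm(a)
    pts = np.asarray(points, dtype=float)
    v = pts - p0
    t = v @ a                       # position of each point along the axis
    radial = v - np.outer(t, a)     # radial component of each point
    r = np.linalg.norm(radial, axis=1, keepdims=True)
    return p0 + np.outer(t, a) + radial / r * r0
```

Note that a point lying exactly on the cylinder axis (r = 0) would divide by zero; the segmented damage points are assumed to lie off-axis.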

Surface Reconstruction
The key step of volume quantization was 3D point cloud surface reconstruction, using the Alpha Shapes algorithm for 3D point cloud contour detection. As shown in Figure 10, the main flow of the algorithm is as follows: (1) Select any point q in the point cloud data and set the rolling-circle radius α. Record all data points within a distance of 2α from point q as the set Q.
(2) Select a data point q1 in the set Q. Taking the line segment between q and q1 as a chord, there are two circles with radius α passing through both points; the equations of these circles can be calculated from their positional relationship in space.
(3) Calculate the distances between the remaining points in set Q and the centers of the two circles. If every distance is greater than α (i.e., neither circle contains any other point), point q is a boundary point; then judge the next point.
(4) If the distance between some points in Q and a circle center is less than α (i.e., both circles contain other points), replace q1 with another point in the set Q and repeat the calculation and comparison above. If some q1 makes the condition in (3) hold, point q is a boundary point, and the next point is judged.
(5) If no such point exists, point q is not a boundary point, and the next point is judged.
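As an illustration, the rolling-circle test of steps (1)-(5) can be sketched in 2D with a brute-force implementation (the function name and the O(n²) pairwise search are ours; production code would normally use a Delaunay-based alpha shape):

```python
import numpy as np

def alpha_shape_boundary(points, alpha):
    """Brute-force rolling-circle test: flag the boundary points of a 2D set.

    For each point q and each neighbour q1 within 2*alpha, two circles of
    radius alpha pass through the chord q-q1; if either circle contains no
    other point of the set, both q and q1 lie on the alpha-shape boundary.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    boundary = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            chord = pts[j] - pts[i]
            d = np.linalg.norm(chord)
            if d == 0 or d > 2 * alpha:
                continue
            mid = (pts[i] + pts[j]) / 2
            h = np.sqrt(alpha ** 2 - (d / 2) ** 2)   # centre offset from the midpoint
            normal = np.array([-chord[1], chord[0]]) / d
            for centre in (mid + h * normal, mid - h * normal):
                others = np.ones(n, dtype=bool)
                others[[i, j]] = False
                # The circle is "empty" if every other point lies outside radius alpha.
                if np.all(np.linalg.norm(pts[others] - centre, axis=1) > alpha):
                    boundary[i] = boundary[j] = True
    return boundary
```

For example, for the four corners of a unit square plus its centre point, a rolling radius of α = 1 flags the corners as boundary points and leaves the interior centre unflagged.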
After the contour of the dataset was detected by the Alpha Shapes algorithm, edge points were connected to generate a triangulation net, and 3D point cloud surface reconstruction was completed, as shown in Figure 11. Damage volume was finally obtained.
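Once a closed triangulated surface is available, the enclosed volume can be obtained by summing signed tetrahedron volumes over the triangles (an application of the divergence theorem). This is a minimal sketch assuming a closed, consistently oriented mesh; the function name is illustrative:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangular mesh.

    Each triangle (i, j, k) contributes the signed volume of the tetrahedron
    it forms with the origin; the signed contributions sum to the enclosed
    volume regardless of where the origin lies.
    """
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for i, j, k in triangles:
        vol += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(vol)
```

For a unit right tetrahedron with outward-oriented faces, this returns 1/6, the expected volume.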

Damage Setting
In order to measure the errors of the depth camera and the algorithm, we fabricated damages of different materials, surface shapes, and sizes. Firstly, a polystyrene foam board with a size of 200 cm × 100 cm × 5 cm and a concrete board with a size of 50 cm × 40 cm × 8 cm were used as the basic materials for the damage volume measurement of the flat plates. Eight and five damages were made in the foam board and the concrete board, respectively, with randomly assigned sizes. The damaged inner surface of the foam board required heat treatment to eliminate roughness, as shown in Figure 12. Secondly, a concrete pipe with diameters of 800 mm and 1000 mm was used as the basic material for the quantitative experiment on concrete pipe damage volume. Three damages of randomly set sizes were made on each of the outside and the inside of the pipe, as shown in Figure 13.

Measurement of Real Damage Volume
To measure the true volume of the damage, it was first necessary to choose a good impression material. In addition to good elasticity, fluidity, and plasticity, the ideal impression material needed to be chemically stable and easy to separate from the model. An alginate material, composed mainly of alginate, talc, diatomite, and other inert fillers, was selected as the damage impression material. Its impressions are clear and highly precise, making it well suited to measuring the real volume of the damage.
A drainage (water displacement) method was used to measure the real damage volume. After the alginate material was fully mixed with water, it was filled into the damage to generate an impression. The generated impression was placed into an overflow cup, and the volume of water drained from the overflow cup was taken as the damage volume. To prevent measurement errors caused by material smoothing and air bubbles during impression preparation, the impression and drainage measurement were carried out three times.

3D Point Cloud Damage Shooting
The depth camera was used to photograph the foam board damage and the concrete board damage, respectively, under controlled lighting and distance conditions. The distance was set to 100~200 cm, with a camera position every 25 cm. Because the depth camera is sensitive to sunlight, the experiments in this paper were carried out under indoor lighting.
On the basis of the distance and lighting conditions, an angle setting was added for the damage shooting of the concrete pipe.
We adjusted the depth camera to the same height as the damage site and aligned the infrared device on its head with the damage center of the pipeline. First, we set a camera position every 25 cm at distances of 50~175 cm, as shown in Figure 14. Second, at each camera distance, we set camera angles of −6°, −3°, 0°, 3°, and 6°, with the position directly facing the damage defined as 0° and angles to the right taken as positive. The angle setting was based on the change in angle between the depth camera and its head. Finally, at each angle, we captured data both with and without lighting.

We adjusted the height of the depth camera to the same height as the damage site and aligned the infrared device on its head with the damage center of the pipeline. First, we set a camera position every 25 cm at a distance of 50-175 cm, as shown in Figure 14. Second, at the same distance from the camera position, we set camera position angles of −6 • , −3 • , 0 • , 3 • , and 6 • , respectively, with the front of the damaged part being 0 • and the right side being forward. The angle setting was based on the angle's change between the depth camera and the head. Finally, on the basis of the same angle, we set settings of light and no light. Due to the space limitations of concrete pipes, damage shoo was mainly set according to angle and light conditions. Hot-melt ad the depth camera to the inner wall of the pipe on the other side of depth camera was moved a certain distance to the inside of the pip of the pipe, and the damaged part was photographed in the angle d −6°, and −12°, respectively, as shown in Figures 15 and 16.  Due to the space limitations of concrete pipes, damage shooting in concrete pipes was mainly set according to angle and light conditions. Hot-melt adhesive was used to fix the depth camera to the inner wall of the pipe on the other side of the damaged part, the depth camera was moved a certain distance to the inside of the pipe along the inner wall of the pipe, and the damaged part was photographed in the angle directions of 0 • , 6 • , 12 • , −6 • , and −12 • , respectively, as shown in Figures 15 and 16. dings 2022, 12, x FOR PEER REVIEW Figure 14. Camera layout outside concrete pipe.
Due to the space limitations of concrete pipes, damage shoo was mainly set according to angle and light conditions. Hot-melt ad the depth camera to the inner wall of the pipe on the other side of depth camera was moved a certain distance to the inside of the pip of the pipe, and the damaged part was photographed in the angle d −6°, and −12°, respectively, as shown in Figures 15 and 16.

Volume Quantization Results of 3D Point Cloud Damage Test
In this paper, about 270 experiments were carried out on 19 pits of different materials and sizes at different distances and angles and under different illumination. Table 2 shows the total test volume values for the foam board damage, the concrete board damage, the damage outside the concrete pipe, and the damage inside the concrete pipe when the shooting distance was 100 cm (50 cm inside the concrete pipe), the shooting angle was 0°, and lighting was present. It can be concluded from the table that this method performed best in the analysis of damage outside the concrete pipe, with a total error of 0.87%, followed by a total error of 2.17% for the damage inside the concrete pipe. The damage test performance for the foam board was similar to that for the concrete board.

Performance of Foam Board
The shooting distance had a great influence on point cloud sparsity. Table 3 and Figure 17 show the results for the foam board at different shooting distances. For shooting distances from 100 cm to 200 cm, the overall relative error of the damage ranged from 7.56% at 100 cm to 24.65% at 200 cm, and the mean percentage error (MPE) over all distances was 16.45%. Damage 6 had the smallest error: at the 100 cm shooting distance, the test volume was only 0.28% below the real volume. As can be seen from Figure 17, with other conditions unchanged, the relative error of the same damage kept increasing as the distance increased. Since the resolution of the depth camera was fixed, a larger shooting distance enlarged the field of view and reduced the number of pixels describing the damage, so fewer points were generated and the error of the final test volume gradually increased. In the figure, the relative error of the damage volume changes considerably between 100 cm and 175 cm but changes little between 175 cm and 200 cm. A possible reason is that as the shooting distance increased, the representational ability of the 3D point cloud weakened: at close range, the point cloud clearly characterized the small tongues and grooves on the damaged surface of the polyethylene foam, but at larger distances these small pits and slots could no longer be resolved. Once all the small pits had disappeared from the 3D point cloud data, the relative error of the damage remained stable.
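The error statistics used throughout these results can be computed as follows; a minimal sketch with our own function names (whether the paper's MPE averages signed or absolute errors is an assumption; absolute is used here):

```python
def relative_error(measured, true):
    """Relative error of a measured volume against the real volume, in percent."""
    return abs(measured - true) / true * 100.0

def mean_percentage_error(pairs):
    """Average absolute relative error over (measured, true) volume pairs."""
    errs = [relative_error(m, t) for m, t in pairs]
    return sum(errs) / len(errs)
```

For example, a measured volume of 107.56 cm³ against a real volume of 100 cm³ gives a relative error of 7.56%.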

Performance of Concrete Slab
Further, we photographed the different damages of the concrete slab at distances of 100 cm to 200 cm. The test volumes obtained from the 3D point cloud and their relative errors with respect to the real volumes are shown in Table 4 and Figure 18. As shown in Table 4, for shooting distances of 100 cm to 200 cm, the total relative error ranged from 7.68% at 100 cm to 28.13% at 200 cm, and the mean percentage error (MPE) over all distances was 17.38%. The damage test results for the concrete slab were similar to those for the polyethylene foam slab, with the relative error increasing with the shooting distance. Overall, the damage error for the concrete slab did not differ greatly from that for the foam board. Errors mainly arose from the material's surface: a relatively coarse surface required more segmentation thresholds to be set, which could enlarge the quantization error of the damage. Small grooves and distortion of the inner damaged surface in the captured data also affected the quantization results of the damage volume.

Performance Outside the Concrete Pipe
The damage shooting outside the concrete pipe differed from that of the foam board and the concrete board: shooting angle and lighting conditions were added on the basis of distance. The real damage volume data and the test damage volume data are shown in Table 5 and Figure 19. The total relative error interval was 2.43~14.41%, and the minimum relative error of 2.43% was obtained when the shooting distance was 100 cm with light.

Internal Performance of Concrete Pipes
The real damage volume data and test damage volume data in the concrete pipeline are shown in Table 6 and Figure 20. Under light conditions, the average relative error of the three damages was 6.03%, and under no light conditions, the average relative error of the three damages was 4.41%.
As shown in Figure 20, when the shooting angle was 0°, that is, when the depth camera directly faced the damage position, the relative error was minimal. As the shooting angle increased, the relative error increased accordingly. These results can be explained as follows: on the one hand, point clouds were missing due to the occlusion of damaged edges; on the other hand, the obtained point clouds became sparser as the shooting angle increased. With other conditions unchanged, the relative error increased with the shooting angle. When the shooting distance was 50 cm with light, the relative error of the damage was 3.63% at a shooting angle of 0°, increasing to 5.63% at 3°; at 6°, the relative error reached its maximum of 10.79%. This is because, as the shooting angle rises, gaps in the point cloud affect the measurement accuracy: the farther the sensor is from the concrete surface, the greater the angle of incidence, and consequently the sparser the point cloud, increasing the relative error.
Under constant angle and illumination, the relative error of the damage volume first decreased and then increased with distance, and was smallest for shooting distances of 75~100 cm. The main reason is that the depth camera used in this paper, with its narrow field of view, misses parts of the point cloud when the working distance is too small, resulting in error; as the distance increases, the growing sparsity of the point cloud also increases the error.
In the case of the same shooting distance and angle, more specific effects of different lighting conditions on the damage volume need to be further tested.
In general, the measured volume of external damage to the concrete pipeline was more accurate than that of the foam board and the concrete board, as the external surface of the concrete pipeline was smoother. In addition, normal estimation was added to the segmentation of the external surfaces of the concrete pipeline to improve the segmentation and fitting accuracy.
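The RANSAC segmentation referred to here can be sketched for the plane case as follows; this is a minimal version without the paper's normal-estimation improvement, with illustrative names:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, rng=None):
    """Fit a plane n·x + d = 0 to a point cloud with RANSAC.

    Returns the best (unit normal, d) model and a boolean inlier mask.
    """
    gen = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_model, best_inliers = None, None
    for _ in range(n_iter):
        # Hypothesize a plane from three random points.
        sample = pts[gen.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n @ sample[0]
        # Score by the number of points within the distance threshold.
        inliers = np.abs(pts @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

Removing the inliers of the fitted surface model then leaves the damage points, which is the segmentation step described above.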


Discussion
This paper presents a method using an inexpensive depth sensor as a pothole scanner. Compared with the existing literature on measuring pits using Kinect, the following are the main contributions of this paper.
Different materials and shapes were used as reference materials to evaluate the performance of sensors.
Joubert et al. [33] used RANSAC for surface fitting and then manually selected damage locations for size calculation. In this paper, surface reconstruction was carried out based on segmentation and combined with drainage pipe surface information to improve damage detection efficiency.
Compared with Kamal et al. [18], who used average-filtered depth images to calculate volume by integrating depth distances over pixels, this paper used improved RANSAC surfaces to segment damaged surface point clouds and the Alpha Shapes algorithm to detect the external contour of the point clouds and reconstruct the surface, finally completing the volume calculation.
For the collection and processing of concrete pipeline damage data, this paper adopted fixed camera shooting distances and angles to collect concrete pipeline damage measurements and combined the RANSAC segmentation algorithm with the Alpha Shapes surface reconstruction algorithm to complete static data processing. Our future research direction is to develop pipeline robots equipped with depth cameras for the dynamic acquisition of damage data and real-time damage detection and processing, combined with a deep-learning method, in view of the complex state of in-service concrete pipelines.

Conclusions
With the continuous improvement of vision sensors, it is possible to realize volume quantization in 3D point clouds. Concrete drainage pipe breakage is a common structural damage, and to evaluate it accurately, the broken volume should be quantified. In this paper, we proposed a 3D point cloud volume quantification method for concrete drainage pipe damage integrating surface segmentation and reconstruction. We tested the accuracy of the Microsoft Azure Kinect DK depth camera, with its RGB-D sensor, in the quantification of concrete pipe damage volumes. The equipment has the advantages of high precision, real-time data transmission, and a low price and can be used to detect and quantify the damage volume of concrete pipelines. Meanwhile, the method provides ideas for quantifying the volume of damage in concrete pipelines with other depth cameras. The experimental results show that this method has great potential in the measurement of the damage volumes of drainage pipelines and can provide support for system decisions and for estimating the quantity of repair materials for drainage pipelines.
Although this study has demonstrated the potential of automatically quantifying damage volumes in drainage lines, there are still some limitations. For example, only drainage pipes with a single diameter were studied; therefore, the damage of drainage pipes with different diameters needs further study. In addition, the underground drainage pipeline service environment is complex: sewage, uneven light, fog, and blockages will affect data collection. The automatic segmentation and adaptive reconstruction of drainage pipe surface point clouds is a challenging task. In this regard, the development of a calculation system that can automatically identify, segment, and quantify drainage pipeline damage in complex working environments is our future research direction.

Data Availability Statement: All data that support the findings of this study are available from the corresponding author upon reasonable request.