Article

A 3D Reconstruction Pipeline of Urban Drainage Pipes Based on Multiview Image Matching Using Low-Cost Panoramic Video Cameras

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Shenzhen Research Center of Digital City Engineering, Shenzhen 518000, China
* Authors to whom correspondence should be addressed.
Water 2019, 11(10), 2101; https://doi.org/10.3390/w11102101
Submission received: 2 August 2019 / Revised: 29 September 2019 / Accepted: 5 October 2019 / Published: 9 October 2019
(This article belongs to the Special Issue Urban Drainage Systems)

Abstract

Urban drainage pipe networks have complex spatial distributions and now face problems such as damage, defects, and aging. A rapid and high-precision pipe inspection strategy is the key to ensuring the sustainable development of urban water supply and drainage systems. In this paper, a three-dimensional (3D) reconstruction pipeline of urban drainage pipes based on multiview image matching using low-cost panoramic video cameras is proposed, which provides an innovative technical approach for pipe inspection. First, frames are extracted from the panoramic video of the pipes and the geometric distortion is corrected using a spherical reprojection to obtain multiview pipe images. Second, a robust feature matching method using support lines and affine-invariant ratios is introduced to match the pipe images. Finally, photogrammetric processing, using structure from motion (SfM) and dense reconstruction, is applied to achieve the 3D modeling of the drainage pipes. Several typical drainage pipes and shafts from real scenes were used for the 3D reconstruction experiments. The results show that our strategy can realize high-precision 3D reconstruction of different types of pipes, which provides effective technical support for rapid and efficient inspection of urban pipes, with broad application prospects in the daily management of sustainable urban drainage systems (SUDSs).

1. Introduction

Urban underground pipes are important infrastructure for both water supply and sewage discharge; they are crucial for sustainable urban development and are closely related to people’s daily lives. Pipes that are buried deep underground age with service time, which results in substantial potential safety hazards, water waste, water pollution, etc. [1,2]. These adverse consequences affect daily human use and the sustainable operation of urban pipes. With the continuous construction of urban drainage systems, the types and structures of pipes have undergone complex changes. As a large part of the underground pipe volume, drainage pipes come in various types, materials, sizes, and shapes, and defects in drainage pipes are distributed in a complicated and irregular manner. Meanwhile, internal information on drainage pipes is very difficult to obtain accurately in most cities, making recognizing and locating internal damage and erosion difficult. Therefore, more research is needed on the detection and processing of urban pipe defects to realize the sustainable operation and maintenance of urban pipe networks with complicated spatial distributions.
The traditional methods of pipe inspection are generally manual underground inspection, lens inspection, and geometric visual inspection. These traditional methods are simple but have many limitations; in particular, when dealing with existing urban pipes with complicated spatial distributions, their inspection efficiency cannot meet the needs of daily operation and maintenance because of the vast number of pipes. With the rapid development of various sensor technologies, new inspection methods based on laser scanners [3,4,5,6], closed-circuit television (CCTV) [7,8], pipe sonar [9,10], ultrasonics [11], etc. are constantly emerging. These sensors are usually carried by small inspection robots or other mobile platforms, which are capable of moving deep into the pipes to acquire data of various types (such as photos, videos, infrared images, ultrasound images, etc.). The internal information can then be recovered using professional data processing or fusion methods, and pipe state inspection is realized with the help of visualization or intelligent data analysis. Inspection methods based on laser scanners, ultrasonics, and electromagnetic waves can only detect the spatial distribution and quantitative information of damage, but cannot obtain more detailed texture information of the pipes. Inspection robots carrying imaging sensors (CCTV) into pipes are currently becoming the main means of routine inspection [12,13,14]. However, the structure of underground pipes is rather particular owing to their considerable length and complicated distributions; CCTV methods with a single lens struggle to obtain complete internal information because of their limited perspective. In addition, current CCTV inspection methods lack accurate positioning information for detection results, which also causes difficulties in pipe maintenance.
Image-based 3D reconstruction is a focus of intense research in both computer vision and photogrammetry [15,16,17,18,19,20,21]. It generally consists of three steps—sparse reconstruction, dense reconstruction, and surface reconstruction—and is applicable to both indoor and outdoor scenes. The position and orientation information of the unordered images is restored in sparse reconstruction [22,23], which includes feature detection and matching [24,25] and camera alignment [26,27]. Dense reconstruction [28,29] produces 3D dense point clouds from the sparse point clouds and image poses calculated in sparse reconstruction, and the multiview stereo strategy [30,31] is the most commonly used approach. Triangular mesh building [32,33] and texture mapping [34,35] are conducted in surface reconstruction [36,37], so that the discrete points can be connected into a continuous surface model with real textures. Thanks to the continuous efforts of relevant scholars, the development of various 3D reconstruction technologies has made it feasible to reconstruct many kinds of 3D scenes from images of real environments, such as fine 3D cultural relics and digital museums [38,39,40,41] and digital cities [42,43,44,45]. An underground pipe is also a kind of real scene. If images can be used to reconstruct pipes in 3D space [46,47], users can, on the one hand, recognize internal damage and erosion directly and accurately from the 3D models; on the other hand, the reconstructed 3D models also carry spatial coordinates, which allows pipe detection and location to be integrated at the same time. However, although image-based 3D modeling methods have been successfully applied to various natural and indoor scenes, urban underground pipes are narrow and poorly textured, the imaging perspective of a single lens is limited, and the imaging quality is greatly affected by light, water, and mud in the pipe, all of which strongly affect image-based 3D reconstruction. With the perspective and quality of the video acquired by pipe CCTV, 3D reconstruction of pipes cannot be realized well.
Normal cameras, depth cameras [48,49,50], and panoramic cameras are commonly used optical imaging tools for the interior surface of a 3D object. The imaging field of view of a standard lens is limited, and the image capture process required to cover a complete 3D object is relatively complicated. Depth information can be acquired using depth cameras, but, similar to normal cameras, their imaging angle is also limited. In recent years, 360° panoramic imaging technologies [51,52,53,54] and the relevant hardware have developed rapidly. Using panoramic cameras to obtain more complete images of scenes with a larger perspective has become a useful advantage in research on image-based 3D reconstruction [55,56,57]. A panoramic camera can perform 360-degree imaging in underground pipes despite the complicated environments, complex spatial distributions, and limited imaging space, helping to obtain high-quality and high-resolution images of underground pipes from omnidirectional perspectives, which provides a new technical approach for 3D reconstruction, damage inspection, and location. Compared with a laser scanner, the 3D model built from panoramas has real textures by itself, which saves both the hardware cost and the algorithmic complexity of registering point clouds with images. On the other hand, however, panoramic cameras usually use multiple large-angle fisheye lenses, whose image distortion is very large. Therefore, 3D reconstruction of underground pipes based on the 360-degree panoramic images obtained by panoramic cameras is also a great challenge.
In this paper, focusing on the problem of operation and maintenance inspection of urban water supply and drainage pipes, a 3D reconstruction method based on 360-degree panoramic video is proposed. Low-cost panoramic cameras with fisheye lenses, which can be installed on the inspection hardware that enters the pipes, are used to capture panoramic videos, and a 360° panoramic image set is extracted from the corresponding panoramic video. The 3D reconstruction of the pipe is carried out using the sequential panoramic images, and defects or damage can be observed and recognized from the modeling results; locating and measuring target objects is also feasible, which provides technical support for the detection and monitoring of drainage pipe operation and maintenance and has broad application prospects. The methodology is described in detail in Section 2. Section 3 and Section 4 present the experiments on real test scenes and the relevant discussions, and the conclusion is given in Section 5.

2. Methodology

The primary objective of this paper is to use a low-cost panoramic video camera as the main tool for pipe inspection and to realize 3D reconstruction of pipe scenes from the spherical panoramic videos so that internal defects and deposits can be recognized and located effectively. The overall pipeline for 3D reconstruction of urban drainage pipes proposed in this paper is shown in Figure 1. First, the panoramic camera is installed at a reasonable position on the inspection equipment so that it can capture the omnidirectional view of the pipe; the video recording function of the camera is turned on while the inspection device moves in the pipe. After the device is retrieved, the complete panoramic video is available and the spherical panoramic image set is acquired by frame extraction. Then, the large geometric distortion of the panoramic images is corrected by generating a perspective image set using multiview projection. The reprojected image set then undergoes photogrammetric processing, namely multiview image matching, SfM reconstruction, dense matching, mesh building, and texture mapping in sequence, and, finally, the 3D scene model with real pipe textures is obtained.

2.1. Pipe Panoramic Video Capture and Frames Extraction

Image acquisition with a panoramic camera is capable of obtaining as much information as possible in the limited space. Panoramic cameras are installed on the pipe inspection hardware, and sloshing, undulation, and other disturbances are inevitable for robots affected by water flow and obstacles while moving, which also influence the speed of movement; these instabilities are all reflected in the recorded videos. The result and accuracy of 3D reconstruction depend on the quality of the panoramic images acquired; for pipe scenes, if the interval between adjacent images is too sparse, the modeling products are unreliable. Generally, the continuous shooting mode of panoramic cameras can only acquire one frame per second, and the density of data may not be guaranteed in very unstable situations. Panoramic video can provide up to 30 consecutive images per second; compared with continuous shooting, it has a higher capability for rich image acquisition, and the density of the images can be guaranteed in various situations during the movement of the camera platform. Therefore, panoramic video is selected as the data acquisition strategy for pipe 3D reconstruction in this paper.
Pipe panoramic videos have rich visible information, but this does not mean every frame must be used as input data for reconstruction. According to the moving speed of the inspection device and the actual situations, the video sampling interval can be reasonably determined, and the extracted frames are taken as the input for the subsequent 3D reconstruction. If the inspection device moves at a faster speed or if obvious shakes or fluctuations occur during the data acquisition process, it is necessary to shorten the sampling intervals, for example, extracting 5–10 frames in one second to generate a panoramic image dataset. If the carrier moves smoothly and the speed is moderate, the interval of 1–3 frames in one second can be enough for a panoramic image dataset. Figure 2 shows the process of acquiring a panoramic image set by extracting frames from a panoramic video.
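Below is a minimal sketch of this frame-extraction step, assuming the panoramic video has already been exported as an equirectangular MP4 file; OpenCV is used only to read and write frames, and the file names and sampling rate are illustrative rather than taken from the paper.

```python
# Minimal frame-extraction sketch: keep roughly `frames_per_second` panoramas per second of video.
import cv2

def extract_frames(video_path, frames_per_second=3, out_pattern="pano_{:04d}.jpg"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0              # fall back to 30 fps if metadata is missing
    step = max(1, int(round(fps / frames_per_second)))   # e.g. keep every 10th frame for 3 fps sampling
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:                            # use a shorter step for shaky or fast-moving captures
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example: extract roughly 3 panoramas per second from the inspection video (hypothetical file name).
# extract_frames("pipe_inspection.mp4", frames_per_second=3)
```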

2.2. 2D Panorama Reprojection

Commonly used projection models of panoramic images include cylindrical panoramas, spherical panoramas, and cubic panoramas. Among them, the spherical panorama displays an omnidirectional perspective of the real scene (360 degrees horizontally and 180 degrees vertically) in the form of a continuous two-dimensional (2D) plane. At the same time, commonly used image matching methods rely on the stability of local geometric distortion near the feature points; because of its large geometric distortion, the spherical panoramic image is difficult to match directly and is not robust enough for the subsequent photogrammetric processing steps. Projection correction of the panoramic image set is therefore necessary as a preprocessing step.
Figure 3 shows the strategy of panoramic image reprojection. The spherical panorama generally unfolds the panoramic sphere into a 2D panoramic plane based on the principle of equirectangular projection, which is also the most common output form of panoramic camera products. To correct the 2D equirectangular panorama (Figure 3a), the image is first converted back to the 3D panoramic sphere space (Figure 3b), and then the sphere is enclosed by a cube whose side length is equal to the sphere diameter 2R (Figure 3c). Each point on the panoramic sphere has its corresponding point on the cubic surface. For example, in Figure 3c, the object point Q is mapped to the point q_S of the panoramic sphere according to linear projection, and the point q_pro on the cube is the unique mapping point of q_S. Thus, from a complete equirectangular image with great distortion at the two poles of the panoramic sphere, six perspective images with less distortion can be obtained.
The mapping relations between a point $q_{pano}(x^{q}_{pano}, y^{q}_{pano})$ on the 2D panorama and $q_{S}(\theta_{q}, \varphi_{q})$ on the 3D panoramic sphere can be expressed as Equations (1) and (2) as follows,

$$x^{q}_{pano} = \frac{w \cdot \theta_{q}}{2\pi} + \left(\frac{w}{2} - 1\right), \qquad -\pi < \theta_{q} \le \pi \tag{1}$$

$$y^{q}_{pano} = \left(\frac{h}{2} - 1\right) - \frac{h \cdot \varphi_{q}}{\pi}, \qquad -\frac{\pi}{2} < \varphi_{q} \le \frac{\pi}{2} \tag{2}$$
where w is the width of the panorama and h is the height.
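As a small numeric check of Equations (1) and (2), the sketch below maps spherical coordinates $(\theta_q, \varphi_q)$ to pixel coordinates on the equirectangular panorama; the 3840 × 1920 frame size matches the 4K videos used in Section 3, while the sample angles and function name are illustrative.

```python
# Numeric check of Equations (1) and (2): spherical angles -> equirectangular pixel coordinates.
import math

def sphere_to_panorama(theta, phi, w=3840, h=1920):
    """theta in (-pi, pi], phi in (-pi/2, pi/2]; returns (x_pano, y_pano)."""
    x = w * theta / (2 * math.pi) + (w / 2 - 1)   # Equation (1)
    y = (h / 2 - 1) - h * phi / math.pi           # Equation (2)
    return x, y

# The forward direction (theta = 0, phi = 0) lands near the image centre,
# while phi = +pi/2 maps to the top edge (y = -1 under this pixel-offset convention).
print(sphere_to_panorama(0.0, 0.0))          # -> (1919.0, 959.0)
print(sphere_to_panorama(0.0, math.pi / 2))  # -> (1919.0, -1.0)
```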
A complete panoramic sphere can be projected through six perspective directions. As shown in Figure 3c, the image planes of the different perspective directions are represented by different colors and line types and are denoted $(x_t, y_t)$, $(x_d, y_d)$, $(x_l, y_l)$, $(x_r, y_r)$, $(x_f, y_f)$, and $(x_b, y_b)$, respectively. Table 1 lists the linear mapping relationship between the six perspective directions and the panoramic sphere, where R is the radius of the panoramic sphere.
The projected results are shown in Figure 3d. Compared with the 2D panorama in Figure 3a, the great geometric distortion at the two poles is corrected to reasonable perspectives, and feature points in these perspectives can be matched accurately. Generally, one of the six perspectives contains invalid information, such as the camera platform in the sixth image (contour colored in yellow) in Figure 3d or the textures of the robots in other applications; these textures are repeated in the image sequence, which is very detrimental to image matching, so this perspective is removed in the subsequent reconstruction step. To enhance the matching reliability among the multiview images generated from a single panorama, the cube in Figure 3c is then rotated 45° around the $Z_S$ axis, and four additional horizontally oriented images are generated. The angle of the linear projection of the $(x_d, y_d)$ image is expanded to 120°. Therefore, as shown in Figure 4, nine low-distortion images with different directions can be generated for each panoramic image. Horizontally oriented images overlap with their adjacent images, and the vertically oriented image overlaps with all other images. This improvement leads to reasonably accurate matching points between different internal images at the same station.
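The following sketch illustrates one way such a reprojection could be implemented for an equirectangular frame, rendering a pinhole view for a given yaw, pitch, and field of view (90° for the cube-face views, 120° for the wide vertical view); the function names, output size, and sign conventions are assumptions for illustration and are not the authors' implementation.

```python
# Sketch: equirectangular panorama -> perspective (pinhole) view via the spherical mapping above.
import cv2
import numpy as np

def panorama_to_perspective(pano, yaw_deg, pitch_deg, fov_deg=90.0, size=960):
    h, w = pano.shape[:2]
    f = (size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)      # pinhole focal length in pixels

    # Pixel grid of the output image, centred on the principal point.
    u, v = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0)
    rays = np.stack([u, -v, np.full_like(u, f)], axis=-1)      # viewing rays in the camera frame
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays to the requested viewing direction (yaw about the up axis, pitch about the x axis).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T

    # Ray directions -> spherical angles -> panorama pixels (same mapping as Equations (1) and (2)).
    theta = np.arctan2(rays[..., 0], rays[..., 2])
    phi = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    map_x = (w * theta / (2 * np.pi) + (w / 2 - 1)).astype(np.float32)
    map_y = ((h / 2 - 1) - h * phi / np.pi).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

# Example: nine views of one station, e.g. eight horizontal views every 45 degrees plus one
# wide vertical view (positive pitch tilts toward the nadir in this sign convention).
# pano = cv2.imread("pano_0001.jpg")
# views = [panorama_to_perspective(pano, yaw, 0.0) for yaw in range(0, 360, 45)]
# views.append(panorama_to_perspective(pano, 0.0, 90.0, fov_deg=120.0))
```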

2.3. 3D Reconstruction

Each panoramic image can be divided into nine perspective images after projection correction, which are similar to the multiview images used in photogrammetry and computer vision. To provide sufficient and accurate feature matching pairs for the subsequent reconstruction steps, this paper uses the matching method based on support-line voting and affine-invariant constraints [58] for initial image matching. This method first extracts feature points and conducts initial feature matching of an image pair using the scale-invariant feature transform (SIFT) method. The support-line voting strategy connects two pairs of corresponding points in the image pair to be matched as a support-line pair. The same number of circular descriptors is constructed for each support line, and a support-line descriptor is built from these circular descriptors, with the Euclidean distance used as the similarity metric. A support-line pair is considered an inlier if the distance between its descriptor vectors is less than the designated threshold. Because the length of a support line is determined only by the positions of two feature points within one image, the feature description of a support line is not affected by the scale of the other image to be matched; thus, the support-line method itself is scale invariant. In addition, the support-line method is insensitive to local geometric distortion: because a number of small circles of the same size are selected as the local regions along the line between two points, the large distortion that would come with describing one large circle connecting the two points directly is avoided. For the matching pairs selected by support-line voting, the affine-invariant ratios of the structure formed by two crossing support lines are then adopted to purify the matches and estimate the local affine transformation; enough correct matching pairs can be extracted based on this local affine transformation, which is more robust to distortion than a global affine transformation.
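The complete support-line voting and affine-invariant ratio method is described in [58]; the sketch below covers only the standard first stage it builds on (SIFT detection, descriptor matching with Lowe's ratio test, and a geometric RANSAC filter), assuming OpenCV's SIFT implementation is available, and is not a substitute for the full method.

```python
# Initial matching sketch: SIFT features, ratio test, and RANSAC-based epipolar filtering.
import cv2
import numpy as np

def initial_matches(img1_path, img2_path, ratio=0.8):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps a match only if its best distance is clearly better than the second best.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    if len(good) < 8:                                  # too few matches for a geometric model
        return np.empty((0, 2)), np.empty((0, 2))

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix rejects matches inconsistent with the epipolar geometry.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```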
Once a sufficient number of matching pairs is obtained by multiview image matching, aerial triangulation and sparse point cloud generation are performed by SfM. SfM first searches for the image pair with the largest camera baseline as the initial pair and performs relative orientation. Taking one image as the reference image, the 3D positions of the homonymous points can be calculated using the rotation and translation matrix between these two images. By continuously adding new images into the calculation, the positions of the images and the 3D point sets are incrementally restored. Bundle adjustment is used during the incremental alignment to iteratively optimize the camera positions and 3D point locations and achieve the optimal restoration results. After SfM, the camera positions are restored together with the sparse point cloud of the scene; however, the density of the reconstructed sparse point cloud is not enough to express the complete 3D information of the scene, so the complete reconstruction process also includes dense point cloud generation, mesh building, and texture mapping.
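As a hedged illustration of the initial-pair step described above, the sketch below estimates the essential matrix from matched points, recovers the relative rotation and translation, and triangulates the correspondences into 3D tie points using OpenCV; the intrinsic matrix and variable names are assumptions, and a full incremental SfM would additionally register new images and run bundle adjustment.

```python
# Relative orientation of the initial image pair and triangulation of tie points.
import cv2
import numpy as np

def relative_orientation(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices: the first camera is the reference, the second is [R | t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate homogeneous points and convert to Euclidean 3D coordinates.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X_h[:3] / X_h[3]).T
    return R, t, X

# Example intrinsics for a 960 x 960 perspective view with a 90-degree field of view
# (focal length = half the image width); these values are illustrative.
# K = np.array([[480.0, 0, 480.0], [0, 480.0, 480.0], [0, 0, 1.0]])
# R, t, points3d = relative_orientation(pts1, pts2, K)
```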
The dense point cloud reconstruction process is based on the positions and orientations of the cameras; recent mainstream methods of dense matching include the patch-based multiview stereo algorithm (PMVS) and methods based on semi-global matching (SGM). The density of the generated dense point clouds is almost the same as that of point clouds scanned by lasers. The dense point cloud is still discrete, while the 3D reality model needs to display continuous scenes. Therefore, it is necessary to construct a continuous surface mesh model that connects the discrete points of the scene. The mesh models are generally triangulated networks, and each triangle can be considered a patch. The texture mapping process determines the most appropriate texture block from all input images and uses it as the texture of the patch. After the texture is mapped, the 3D surface model can display the true view of the target scene. With the development of 3D reconstruction in computer vision and photogrammetry, some software provides easy-to-use modeling functions for both professional and non-professional users; Agisoft Metashape and ContextCapture are both mature and powerful tools, which makes pipe modeling more applicable to actual applications.
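Dense matching itself is usually run inside photogrammetric software such as Agisoft Metashape or ContextCapture; as a rough stand-in for the surface step only, the sketch below builds a triangle mesh from an exported dense point cloud using Open3D's Poisson reconstruction (a different algorithm from the commercial tools), with an illustrative file name and parameters.

```python
# Surface reconstruction sketch: dense point cloud -> triangle mesh via Poisson reconstruction.
import numpy as np
import open3d as o3d

# Load the dense point cloud exported from the dense-matching stage (hypothetical file name).
pcd = o3d.io.read_point_cloud("pipe_dense_cloud.ply")

# Normals are required by Poisson reconstruction; estimate them from local neighbourhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Poisson reconstruction produces a watertight triangle mesh; `depth` controls the resolution.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# Remove low-density vertices, which typically correspond to hallucinated surface far from the data.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))

o3d.io.write_triangle_mesh("pipe_mesh.ply", mesh)
```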

3. Experiments and Results

To verify the applicability of the proposed method to different pipe scenes, real shafts and pipes of different materials, sizes, and applications were selected as the experimental data. The scene panoramas were acquired using a GoPro Fusion binocular panoramic camera, which provides panoramic videos with a resolution of 3840 × 1920 (4K). For comparative experiments with the method of pipe reconstruction using CCTV images, two different perspectives of the reprojected images, whose orientations are similar to those of CCTV in normal inspection, were also used for photogrammetric processing.

3.1. Scene 1

Scene 1 is a complete rainwater well, and the well is made of red bricks. A total of 44 panoramas were obtained by extracting video frames, and the panoramic images were projected to 396 perspective images. Taking these small-distortion images as input, multiview image matching, SfM reconstruction, dense point cloud generation, mesh building, and texture mapping were performed sequentially, and the 3D reality model of the rainwater well was finally reconstructed. Figure 5 shows the sparse point cloud, the dense point cloud, the solid model, and the texture mapping result, respectively.
After SfM reconstruction, 384,337 valid tie points were generated; as shown in Figure 5a, the sparse point cloud roughly restores the spatial structure of the shaft. Through dense point cloud generation, shown in Figure 5b, a total of 8,589,928 points were built for the scene. It can be seen that the density of the point cloud is very high, and the details of the scene are well expressed and reconstructed. Figure 5c is the solid model generated by the triangular network connection based on the dense point cloud. The concave–convex texture of the bricks in the shaft can be clearly seen, while the manhole cover is smooth and flat. The final texture-mapped model is shown in Figure 5d. To further observe the texture details, we separated the model at the middle of the shaft. Figure 6a,b shows the reconstruction on both sides of the shaft; it can be clearly seen that the reconstructed model reproduces the details of the bricks well, and there are no obvious blurs or stitching gaps, making it fully applicable to crack inspection of wells and pipes. Since the shaft was built from bricks, there are irregular geometric structures within it. Figure 6c,d marks the crevices between the bricks and the cross-section edges, respectively (yellow straight lines in Figure 6c,d). The reconstructed shaft model basically conforms to the geometric structure of a cylinder; some bulges along the linings reflect the real structure of the bricks rather than errors.
To compare the performance of our method with the method using CCTV images, we separately used two image sets to perform the 3D reconstruction. One perspective is horizontally toward one side of the shaft, referred to as the “side view”, and the other perspective is toward the direction of camera movement, referred to as the “front view”. These two perspectives are the commonly used orientations of CCTVs on inspection hardware. The 3D reconstruction results based on these two perspectives are shown in Figure 7. The side view can also reconstruct the texture on the side of the shaft (Figure 7a), but due to the limited viewing angle, it can only reconstruct a small part of the entire shaft (Figure 7b). Meanwhile, the 3D reconstruction based on the camera’s forward view completely destroys the geometric structure of the shaft (Figure 7c), and the stretching effect is very serious: the bottom of the well is a substantial distance apart from the shaft and the texture details are also severely stretched (Figure 7d). Thus, images toward a single orientation are not sufficient to cover the complete scene, and they are also unreliable in global error control. The advantage of the multiview images reprojected from spherical panoramas is reflected in the image matching accuracy and the quality of the bundle adjustment, which can maintain the precise shapes and structures of the scene.

3.2. Scene 2

Scene 2 is part of a blue plastic drainage pipe. The video frame interval was 10 frames per second, and a total of 126 panoramas were extracted. Photogrammetric processing was performed on all the reprojected images, and the final 3D reconstruction results were obtained, as shown in Figure 8.
After SfM reconstruction, the scene generated 891,662 points; as shown in Figure 8a, although the sparse point cloud restores the geometric structure, it is obvious that the points are sparse and scattered. The point cloud in Figure 8b is much denser than the sparse point cloud and contains 1,549,728 points. The colors in the scene are reproduced well, such as the blue wall and the leaves at the bottom of the pipe. Figure 8c is the solid model after mesh building; it can be seen that some protruding patches exist on the pipe wall. We located these places in the original images and in the texture in Figure 8d; these bulges are flocculent dirt attached to the pipe wall and deposits such as fallen leaves. Some of the bulges and the corresponding real image details are shown in Figure 9, which proves that these irregular meshes in the reconstructed model are not errors.
Figure 10 shows the texture of the unfolded model; the result of the 3D reconstruction directly and clearly reflects the textures of the real scene without obvious distortion or blurring, and the attachments, leaves, and ponding in the pipe are visually expressed. This shows that the panorama-based reconstruction method also has good inspection ability for cracks and deposits in sewer pipes. As control point markers cannot be placed deep in a real pipe for quantitative accuracy evaluation, we use the pipe characteristics to verify the relative accuracy. The pipe itself can be approximated as a straight cylinder with multiple parallel rings in the pipe wall (shown in Figure 10a); straight lines are used to highlight the centerline of each ring, as shown by the yellow parallel lines in Figure 10c. It can be seen that these rings substantially conform to the straight lines without obvious deformation or bending, and the distances between adjacent straight lines are basically the same, which conforms to the actual situation of the pipe. Figure 10b is the textured portion of the bottom of the pipe after cutting; using two parallel straight lines of equal length at the edges (as shown in Figure 10d), the pipe wall does not show significant bending or mutation in the radial direction. This shows that image-based reconstruction using multiview perspective images can ensure that the model has reliable relative accuracy.
Similarly, the perspective images of two viewing angles in scene 2 were selected for experiments; one angle faces the pipe wall and the other faces the forward direction of the camera’s movement. Figure 11 shows the reconstruction results for these two viewing angles. Similar to scene 1, the images toward the top of the pipe can only reconstruct the upper part of the pipe, and the model from the front-view images is very poor: not only is the stretching serious, but the model itself is broken, so such images alone cannot be used as a data source for effective reconstruction.

3.3. Other Scenes

To verify the universality of our method in actual usage scenarios of wells, pipes, etc., we conducted more experiments in different scenes. Figure 12 shows the reconstruction results of two other scenes, whose data sets consist of 25 and 63 panoramic images, respectively; these two scenes are quite different from scene 1 and scene 2 in terms of materials, shapes, sizes, and texture richness. Nevertheless, high-quality models were generated after reconstruction from the reprojected images. The first set of data is a square rainwater well, and the reconstructed model clearly reflects the cracks in the well. The second set of data is a pipe with varying thickness; the overall texture is relatively simple, but the method in this paper still performed well.

4. Discussion

To overcome the limitations of pipe inspection using the single lens of CCTV, this paper proposed a pipe 3D reconstruction method based on spherical panoramic videos, and experiments were carried out on multiple real scenes including pipes and shafts. Table 2 summarizes the basic characteristics of the test scenes in Section 3; combined with their corresponding reconstruction results, it shows that our method can effectively reconstruct pipes with different materials, colors, sizes, and texture richness, and that the generated 3D models have intuitive and clear textures.
Through multiple sets of experiments, the advantages of panoramic videos over CCTV in the imaging field of view are clearly reflected. CCTV can only obtain images from a certain perspective, and the distortion of an image far from the image center is generally large, which leads to an incomplete reflection of the pipe situation for 3D reconstruction. Compared with lasers as a data source for 3D reconstruction, panoramic videos can provide rich texture information that lasers cannot, whereas laser point clouds have true positions and scale. Although the experiments in this paper lack real scale information, the relative accuracy verification proves that the reconstructed models maintain reliable stability in all directions. For future applications, if the positions of the starting point and terminal of a pipe are known, the actual scale can also be restored. By determining the proportional relationship between the places where defects or deposits exist in the model and the start and end points, the target can be located accurately in the actual scene. In addition, the hardware cost of pipe inspection can be reduced greatly by using panoramic cameras only.
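A small worked example of this scale-restoration idea is sketched below: if the real-world positions of the pipe's start and end points are known, a scale factor for the reconstructed model follows from the ratio of distances, and a defect can be located by its proportional position along the pipe; all coordinates and values are illustrative.

```python
# Illustrative scale restoration from known start/end points of a pipe.
import numpy as np

start_model, end_model = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 8.4])   # model units
start_real,  end_real  = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 21.0])  # metres

# Scale factor between the scale-free model and the real scene.
scale = np.linalg.norm(end_real - start_real) / np.linalg.norm(end_model - start_model)

# Locate a defect by its proportional position along the pipe.
defect_model = np.array([0.0, 0.1, 3.2])          # defect position picked in the 3D model
proportion = np.linalg.norm(defect_model - start_model) / np.linalg.norm(end_model - start_model)
defect_distance_real = proportion * np.linalg.norm(end_real - start_real)

print(f"scale factor = {scale:.3f}, defect at {defect_distance_real:.2f} m from the start point")
```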
As the data source for pipe inspection, the quality of the panoramic images has a direct impact on the modeling results. Through multiple sets of experiments, we found that the quality of the panoramic images is affected by the illumination, the motion state of the camera platform, and the size of the pipes. The panoramic images of scenes 1 and 2 contain data shot both outside and inside the pipes, with corresponding changes in illumination. When the light outside the pipes was sufficient, the camera aperture was relatively small, resulting in darker textures in the parts of the panoramic images lying inside the pipe. The overall illumination of the panoramic images shot inside the pipes was weak and the camera aperture became larger; in this case, the images inside the pipe are brighter and the brightness of the panoramic images is relatively uniform. The illumination change is also reflected in the reconstruction results, as shown in Figure 10b, where the left part is very bright and the right part of the pipe is relatively darker. Brightness changes affect the matching accuracy of images when common matching methods are used. Therefore, when applying panoramic cameras to underground pipes, suitable lighting equipment is required to provide uniform illumination. In the experiments of this paper, the camera was suspended by a rope or bound to a pole and moved manually. Shaking and offsets of the camera occurred during video recording, which simulates well the instabilities of robot inspection inside underground pipes. The captured images are prone to blur if the camera has no anti-shake capability. The GoPro Fusion camera used in this paper has good anti-shake performance, so the panoramic images after frame extraction are still clear and the reconstructed texture is not blurred. Low-cost consumer panoramic cameras exhibit certain stitching errors, which are related to the object–image distance, and different pipe sizes imply different object–image distances. Compared with the other test data, the diameter of the pipe in scene 2 is smaller and the camera was too close to the bottom of the pipe during video recording. The distance between the camera and the pipe top is larger than the distance to the bottom of the pipe, and these differences in object–image distance in the narrow scene lead to stitching errors (shown in Figure 13a). The stitching error of the panoramic images is also reflected in the shape of the final reconstruction result at the bottom of the pipe. In scene 1, the camera moved roughly along the centerline of the shaft, and its overall reconstruction is better (Figure 13b). It is therefore recommended that panoramic cameras acquire data along the pipe centerline, which ensures the stitching quality and the reconstruction result. In addition, texture richness also affects the matching accuracy and reliability, which are difficult points in pipe modeling. Some kinds of new pipes are smooth but lack texture, which easily causes mismatches because of the feature similarity of different points; image-based reconstruction methods are therefore not suitable in this condition. However, attachments, defects, and color changes generally exist to a greater or lesser extent in serving sewage or drainage pipes, and the multiview images and adjustable frame-extraction interval in our method are advantageous for obtaining as many correct matching features as possible for a good modeling result.

5. Conclusions

Aiming at image-based operation and maintenance inspection of urban water supply and drainage pipe networks, this paper proposes a 3D reconstruction pipeline based on 360° panoramic videos, which involves frame extraction, panorama reprojection, and photogrammetric processing of multiview images. Multiple 3D reality modeling experiments prove that the method can reconstruct real pipe scenes intuitively and clearly, and that defects or damage can be observed, identified, and located based on the reconstructed models, which provides technical support for the operation and inspection of drainage pipes and has broad application prospects. As control point markers cannot be placed deep inside the pipes, the current research lacks a quantitative evaluation of point precision, and this will be a focus of later research.

Author Contributions

Conceptualization, X.Z. and Q.H.; methodology, X.Z. and Q.H.; software, X.Z. and H.W.; validation, X.Z., and H.W.; formal analysis, H.W. and J.L.; investigation, P.Z.; resources, P.Z.; data curation, P.Z. and M.A.; writing—original draft preparation, X.Z.; writing—review and editing, J.L.; visualization, M.A.; supervision, P.Z. and Q.H.; project administration, Q.H.; funding acquisition, Q.H.

Funding

This research was supported by the National Key R&D Program of China (Grant No: 2017YFD0600904), the Science and Technology Planning Project of Guangdong Province, China (Grant No. 2017B020218001), and the Fundamental Research Funds for the Central Universities (Grant No. 2042017kf0235).

Acknowledgments

The authors would like to express their sincere gratitude to the anonymous reviewers and the editors for their helpful and constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jung, D.; Kim, J.H. Robust Meter Network for Water Distribution Pipe Burst Detection. Water 2017, 9, 820. [Google Scholar] [CrossRef]
  2. Cheng, W.; Xu, G.; Fang, H.; Zhao, D. Study on Pipe Burst Detection Frame Based on Water Distribution Model and Monitoring System. Water 2019, 11, 1363. [Google Scholar] [CrossRef]
  3. Liu, Z.; Krys, D. The use of laser range finder on a robotic platform for pipe inspection. Mech. Syst. Signal Process. 2012, 31, 246–257. [Google Scholar] [CrossRef]
  4. Matos, J.S. Comparison of the inspector and rating protocol uncertainty influence in the condition rating of sewers. Water Sci. Technol. 2014, 69, 862–867. [Google Scholar]
  5. Lepot, M.; Stanić, N.; Clemens, F. A technology for sewer pipe inspection (Part 2): Experimental assessment of a new laser profiler for sewer defect detection and quantification. Autom. Constr. 2017, 73, 1–11. [Google Scholar] [CrossRef]
  6. Son, H.; Kim, C.; Kim, C. 3D reconstruction of as-built industrial instrumentation models from laser-scan data and a 3D CAD database based on prior knowledge. Autom. Constr. 2015, 49, 193–200. [Google Scholar] [CrossRef]
  7. Dirksen, J.; Clemens, F.; Korving, H.; Cherqui, F.; Le Gauffre, P.; Ertl, T.; Plihal, H.; Müller, K.; Snaterse, C.T.M. The consistency of visual sewer inspection data. Struct. Infrastruct. Eng. 2013, 9, 214–228. [Google Scholar] [CrossRef] [Green Version]
  8. Su, T.-C.; Yang, M.-D.; Wu, T.-C.; Lin, J.-Y. Morphological segmentation based on edge detection for sewer pipe defects on CCTV images. Expert Syst. Appl. 2011, 38, 13094–13114. [Google Scholar] [CrossRef]
  9. Carballini, J.; Viana, F. Using synthetic aperture sonar as an effective tool for pipeline inspection survey projects. In Proceedings of the 2015 IEEE/OES Acoustics in Underwater Geosciences Symposium (RIO Acoustics), Rio de Janeiro, Brazil, 29–31 July 2015; pp. 1–5. [Google Scholar]
  10. Teixeira, P.V.; Kaess, M.; Hover, F.S.; Leonard, J.J. Underwater inspection using sonar-based volumetric submaps. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4288–4295. [Google Scholar]
  11. Iyer, S.; Sinha, S.K.; Tittmann, B.R.; Pedrick, M.K. Ultrasonic signal processing methods for detection of defects in concrete pipes. Autom. Constr. 2012, 22, 135–148. [Google Scholar] [CrossRef]
  12. Hoshina, M.; Toyama, S. Development of Spherical Ultrasonic Motor as a Camera Actuator for Pipe Inspection Robot. J. Vibroengineering 2009, 13, 2379–2384. [Google Scholar]
  13. Huang, H.; Yan, J.; Cheng, T. Development and Fuzzy Control of a Pipe Inspection Robot. IEEE Trans. Ind. Electron. 2010, 57, 1088–1095. [Google Scholar] [CrossRef]
  14. Nassiraei, A.A.F.; Kawamura, Y.; Ahrary, A.; Mikuriya, Y.; Ishii, K. Concept and design of a fully autonomous sewer pipe inspection mobile robot “KANTARO”. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 136–143. [Google Scholar]
  15. Stylianou, G.; Lanitis, A. Image Based 3D Face Reconstruction: A Survey. Int. J. Image Graph. 2009, 9, 217–250. [Google Scholar] [CrossRef]
  16. De Reu, J.; De Smedt, P.; Herremans, D.; Van Meirvenne, M.; Laloo, P.; De Clercq, W. On introducing an image-based 3D reconstruction method in archaeological excavation practice. J. Archaeol. Sci. 2014, 41, 251–262. [Google Scholar] [CrossRef]
  17. Gonzalez-Aguilera, D.; López Fernández, L.; Rodríguez-Gonzálvez, P.; Hernandez, D.; Guerrero, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; et al. GRAPHOS—Open-source software for photogrammetric applications. Photogramm. Rec. 2018, 33, 11–29. [Google Scholar] [CrossRef]
  18. Knyaz, V. Image-based 3d reconstruction and analysis for orthodontia. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B3, 585–589. [Google Scholar] [CrossRef]
  19. Wolff, K.; Kim, C.; Zimmer, H.; Schroers, C.; Botsch, M.; Sorkine-Hornung, O.; Sorkine-Hornung, A. Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 118–127. [Google Scholar]
  20. Yang, M.-D.; Chao, C.-F.; Huang, K.-S.; Lu, L.-Y.; Chen, Y.-P. Image-Based 3D Scene Reconstruction and Exploration in Augmented Reality. Autom. Constr. 2013, 33, 48–60. [Google Scholar] [CrossRef]
  21. Liénard, J.; Vogs, A.; Gatziolis, D.; Strigul, N. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction. Measurement 2016, 81, 264–269. [Google Scholar] [CrossRef]
  22. Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 2013 International Conference on 3D Vision, Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134. [Google Scholar]
  23. Özyesil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer. 2017, 26, 305–364. [Google Scholar] [CrossRef]
  24. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  25. Frahm, J.-M.; Fite-Georgel, P.; Gallup, D.; Johnson, T.; Raguram, R.; Wu, C.; Jen, Y.-H.; Dunn, E.; Clipp, B.; Lazebnik, S.; et al. Building Rome on a cloudless day. In Computer Vision—ECCV; Springer: Berlin/Heidelberg, Germany, 2010; pp. 368–381. [Google Scholar]
  26. Agarwal, S.; Snavely, N.; Seitz, S.M.; Szeliski, R. Bundle adjustment in the large. In Computer Vision—ECCV; Springer: Berlin/Heidelberg, Germany, 2010; pp. 29–42. [Google Scholar]
  27. Byröd, M.; Åström, K. Conjugate gradient bundle adjustment. In Computer Vision—ECCV; Springer: Berlin/Heidelberg, Germany, 2010; pp. 114–127. [Google Scholar]
  28. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef]
  29. Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3d reconstruction in real-time. In Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011; pp. 963–968. [Google Scholar]
  30. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; Volume 1, pp. 519–528. [Google Scholar]
  31. Wenzel, K.; Rothermel, M.; Fritsch, D.; Haala, N. Image acquisition and model selection for multi-view stereo. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W1, 251–258. [Google Scholar] [CrossRef]
  32. Su, T.; Wang, W.; Lv, Z.; Wu, W.; Li, X. Rapid Delaunay triangulation for randomly distributed point cloud data using adaptive Hilbert curve. Comput. Graph. 2016, 54, 65–74. [Google Scholar] [CrossRef]
  33. Zeng, W.; Liu, G.R. Smoothed finite element methods (S-FEM): An overview and recent developments. Arch. Comput. Methods Eng. 2018, 25, 397–435. [Google Scholar] [CrossRef]
  34. Chen, Z.; Zhou, J.; Chen, Y.; Wang, G. 3D Texture mapping in multi-view reconstruction. In Advances in Visual Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 359–371. [Google Scholar]
  35. Jeon, J.; Jung, Y.; Kim, H.; Lee, S. Texture map generation for 3D reconstructed scenes. Vis. Comput. 2016, 32, 955–965. [Google Scholar] [CrossRef]
  36. Berger, M.; Tagliasacchi, A.; Seversky, L.M.; Alliez, P.; Guennebaud, G.; Levine, J.A.; Sharf, A.; Silva, C.T. A survey of surface reconstruction from point clouds. Comput. Graph. Forum 2017, 36, 301–329. [Google Scholar] [CrossRef]
  37. Campos, R.; Garcia, R.; Alliez, P.; Yvinec, M. A surface reconstruction method for in-detail underwater 3d optical mapping. Int. J. Rob. Res. 2015, 34, 64–89. [Google Scholar] [CrossRef]
  38. Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.-L.; Mancuso, S.; Muzzupappa, M. From 3D reconstruction to virtual reality: A complete methodology for digital archaeological exhibition. J. Cult. Herit. 2010, 11, 42–49. [Google Scholar] [CrossRef]
  39. Macher, H.; Grussenmeyer, P.; Landes, T.; Halin, G.; Chevrier, C.; Huyghe, O. Photogrammetric recording and reconstruction of town scale models—The case of the plan-relief of Strasbourg. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W5, 489–495. [Google Scholar] [CrossRef]
  40. Pietroni, E.; Forlani, M.; Rufa, C. Livia’s Villa Reloaded: An example of re-use and update of a pre-existing Virtual Museum, following a novel approach in storytelling inside virtual reality environments. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 2, pp. 511–518. [Google Scholar]
  41. Santagati, C.; Inzerillo, L.; Di Paola, F. Image-based modeling techniques for architectural heritage 3D digitalization: Limits and potentialities. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5, 550–560. [Google Scholar] [CrossRef]
  42. Kuschk, G. Model-free Dense Stereo Reconstruction Creating Realistic 3D City Models. In Proceedings of the Joint Urban Remote Sensing Event, Sao Paulo, Brazil, 21–23 April 2013; pp. 202–205. [Google Scholar]
  43. Qu, Y.; Huang, J.; Zhang, X. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera. Sensors 2018, 18, 225. [Google Scholar] [Green Version]
  44. Wu, B.; Xie, L.; Hu, H.; Zhu, Q.; Yau, E. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas. ISPRS J. Photogramm. Remote Sens. 2018, 139, 119–132. [Google Scholar] [CrossRef]
  45. Singh, S.P.; Jain, K.; Mandla, V.R. Image based 3D city modeling: Comparative study. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 537–546. [Google Scholar] [CrossRef]
  46. Reyes-Acosta, A.; Lopez-Juarez, I.; Osorio-Comparan, R.; Lefranc, G. Towards 3D pipe reconstruction employing affine transformations from video information. In Proceedings of the 2016 IEEE International Conference on Automatica (ICA-ACCA), Curico, Chile, 19–21 October 2016; pp. 1–6. [Google Scholar]
  47. Zhang, T.; Liu, J.; Liu, S.; Tang, C.; Jin, P. A 3D reconstruction method for pipeline inspection based on multi-vision. Measurement 2017, 98, 35–48. [Google Scholar] [CrossRef]
  48. Eichhardt, I.; Chetverikov, D.; Jankó, Z. Image-guided ToF depth upsampling: A survey. Mach. Vis. Appl. 2017, 28, 267–282. [Google Scholar] [CrossRef]
  49. Rubinsztein-Dunlop, H.; Forbes, A.; Berry, M.V.; Dennis, M.R.; Andrews, D.L.; Mansuripur, M.; Denz, C.; Alpmann, C.; Banzer, P.; Bauer, T.; et al. Roadmap on structured light. J. Opt. 2017, 19, 13001. [Google Scholar] [CrossRef]
  50. Sun, Y.; Liu, M.; Meng, M.Q.-H. Improving RGB-D slam in dynamic environments: A motion removal approach. Rob. Auton. Syst. 2017, 89, 110–122. [Google Scholar] [CrossRef]
  51. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching using Invariant Features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef]
  52. Lee, W.-T.; Chen, H.-I.; Chen, M.-S.; Shen, I.-C.; Chen, B.-Y. High-resolution 360 Video Foveated Stitching for Real-time VR. Comput. Graph. Forum 2017, 36, 115–123. [Google Scholar] [CrossRef]
  53. Li, L.; Yao, J.; Xie, R.; Xia, M.; Zhang, W. A Unified Framework for Street-View Panorama Stitching. Sensors 2017, 17, 1. [Google Scholar] [CrossRef]
  54. Shum, H.-Y.; Szeliski, R. Systems and Experiment Paper: Construction of Panoramic Image Mosaics with Global and Local Alignment. Int. J. Comput. Vis. 2000, 36, 101–130. [Google Scholar] [CrossRef]
  55. Paris, L.; Calvano, M.; Nardinocchi, C. Web Spherical Panorama for Cultural Heritage 3D Modeling. In New Activities for Cultural Heritage; Springer: Cham, Switzerland, 2017; pp. 182–189. [Google Scholar]
  56. Wahbeh, W.; Nebiker, S.; Fangi, G. Combining public domain and professional panoramic imagery for the accurate and dense 3d reconstruction of the destroyed bel temple in Palmyra. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-5, 81–88. [Google Scholar] [CrossRef]
  57. Yang, H.; Zhang, H. Efficient 3D Room Shape Recovery from a Single Panorama. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5422–5430. [Google Scholar]
  58. Li, J.; Hu, Q.; Ai, M.; Zhong, R. Robust feature matching via support-line voting and affine-invariant ratios. ISPRS J. Photogramm. Remote Sens. 2017, 132, 61–76. [Google Scholar] [CrossRef]
Figure 1. Pipe reconstruction technological pipeline.
Figure 2. Panoramic frame extraction process; d is the interval of extraction.
Figure 3. Panorama reprojection strategy: original panorama (a), panoramic sphere (b), reprojection directions (c), and reprojected results (d).
Figure 4. Reprojection result of a panorama.
Figure 5. Reconstruction result of scene 1: sparse point cloud (a), dense point cloud (b), solid model (c), and textured model (d).
Figure 6. Model texture details (a,b) and relative accuracy analysis (c,d).
Figure 7. Reconstruction results based on side-view images (a,b), and reconstruction results based on front-view images (c,d).
Figure 8. Reconstruction result of scene 2: sparse point cloud (a), dense point cloud (b), solid model (c), and textured model (d).
Figure 9. Details of the solid model (a,c) and corresponding textures (b,d).
Figure 10. Model details (a,b) and relative accuracy analysis (c,d).
Figure 11. Reconstruction results based on side-view images (a,b), and reconstruction results based on front-view images (c,d).
Figure 12. Reconstruction results of scene 3 (a–c) and scene 4 (d–f).
Figure 13. Panorama stitching error in scene 2 (a), and the circle fitting effect in scene 1 (b).
Table 1. Mapping equations of perspective images.

$(x_d, y_d)$: $\theta_q = \tan^{-1}(\Delta y/\Delta x)$ for $\Delta x > 0$; $\theta_q = \tan^{-1}(\Delta y/\Delta x) + \pi$ for $\Delta x < 0$ and $\Delta y > 0$; $\theta_q = \tan^{-1}(\Delta y/\Delta x) - \pi$ for $\Delta x < 0$ and $\Delta y < 0$ (3); $\varphi_q = \tan^{-1}\!\left(R/\sqrt{\Delta x^2 + \Delta y^2}\right)$ (9)

$(x_t, y_t)$: $\theta_q = \tan^{-1}(\Delta y/\Delta x)$ for $\Delta x > 0$; $\theta_q = \tan^{-1}(\Delta y/\Delta x) - \pi$ for $\Delta x < 0$ and $\Delta y > 0$; $\theta_q = \tan^{-1}(\Delta y/\Delta x) + \pi$ for $\Delta x < 0$ and $\Delta y < 0$ (4); $\varphi_q = \tan^{-1}\!\left(R/\sqrt{\Delta x^2 + \Delta y^2}\right)$ (10)

$(x_l, y_l)$: $\theta_q = \tan^{-1}(\Delta x/R) - \pi$ for $\Delta x > 0$; $\theta_q = \tan^{-1}(\Delta x/R) + \pi$ for $\Delta x < 0$ (5); $\varphi_q = \tan^{-1}\!\left(\Delta y/\sqrt{\Delta x^2 + R^2}\right)$ (11)

$(x_r, y_r)$: $\theta_q = \tan^{-1}(\Delta x/R)$ (6)

$(x_f, y_f)$: $\theta_q = \tan^{-1}(R/\Delta x)$ for $\Delta x > 0$; $\theta_q = \tan^{-1}(R/\Delta x) - \pi$ for $\Delta x < 0$ (7)

$(x_b, y_b)$: $\theta_q = \tan^{-1}(R/\Delta x) + \pi$ for $\Delta x > 0$; $\theta_q = \tan^{-1}(R/\Delta x)$ for $\Delta x < 0$ (8)
Table 2. Characteristics of test scenes.

Scene      Size        Material   Texture Richness
Scene 1    large       brick      good
Scene 2    narrow      plastic    uneven
Scene 3    secondary   cement     secondary
Scene 4    small       cement     poor

Share and Cite

MDPI and ACS Style

Zhang, X.; Zhao, P.; Hu, Q.; Wang, H.; Ai, M.; Li, J. A 3D Reconstruction Pipeline of Urban Drainage Pipes Based on Multiview Image Matching Using Low-Cost Panoramic Video Cameras. Water 2019, 11, 2101. https://doi.org/10.3390/w11102101
