Communication

An Inverse Perspective Mapping-Based Approach for Generating Panoramic Images of Pipe Inner Surfaces

Department of Smart Mobility Engineering, Joongbu University, 305 Dongheon-ro, Deogyang-gu, Goyang-si 21713, Gyeonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5363; https://doi.org/10.3390/s23125363
Submission received: 9 May 2023 / Revised: 29 May 2023 / Accepted: 4 June 2023 / Published: 6 June 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

We propose an algorithm for generating a panoramic image of a pipe’s inner surface based on inverse perspective mapping (IPM). The objective of this study is to generate a panoramic image of the entire inner surface of a pipe for efficient crack detection, without relying on high-performance capturing equipment. Frontal images taken while passing through the pipe were converted to images of the inner surface of the pipe using IPM. We derived a generalized IPM formula that considers the slope of the image plane to correct the image distortion caused by the tilt of the plane; this IPM formula was derived based on the vanishing point of the perspective image, which was detected using optical flow techniques. Finally, the multiple transformed images with overlapping areas were combined via image stitching to create a panoramic image of the inner pipe surface. To validate our proposed algorithm, we restored images of pipe inner surfaces using a 3D pipe model and used these images for crack detection. The resulting panoramic image of the internal pipe surface accurately demonstrated the positions and shapes of cracks, highlighting its potential utility for crack detection using visual inspection or image-processing techniques.

1. Introduction

Management of inner pipe walls is important because defects on the inner wall of the pipe can interfere with facility operations and pose a threat to safety. The main methods developed for crack detection in pipes include computer vision systems using cameras, magnetic field measurement, and acoustic detection using microphones [1]. With the advancement of robotics technology and artificial intelligence, computer vision-based pipe crack detection techniques have gained significant prominence. When applying computer vision-based crack detection techniques to pipes, two distinct challenges differentiate them from other engineering structures. The first challenge is the movement within the pipe itself. Since most pipes are located underground, accessing the interior is not easy. Many researchers have proposed methods for maneuvering pipe inspection robots to navigate pipes of various diameters and geometric structures [2,3,4]. The second challenge lies in obtaining images of the internal walls of pipes. Generally, the RVI (remote visual inspection) technique is employed, utilizing inspection robots or handheld videoscopes to explore the pipes and capture images of their internal walls [5,6,7]. To address the cost-saving and accuracy enhancement concerns associated with human-dependent visual inspections, various automated image-processing techniques, including artificial intelligence, are required [8,9,10]. For this purpose, surface images of structures, including cracks, are needed. However, unlike other engineering structures, the inner wall of a pipe is typically in the form of a long cylinder, making it difficult to obtain images of the entire surface. Furthermore, the images obtained while moving inside the pipe are perspective images, and the areas that need to be inspected for cracks are the surfaces of the pipe. Therefore, it is challenging to directly apply various automated image-processing techniques (including artificial intelligence developed for automated crack inspection) to the perspective images in order to facilitate automated crack inspection.
Various related studies have been conducted to obtain the surface images of pipes. For example, Pahwa et al. [11] devised a method using a 360-degree camera to capture continuous images inside the pipe and stitch them together. Kagami et al. [12] employed Structure-from-motion (SfM) techniques, utilizing 2D images to reconstruct 3D shapes and detect defects within the pipe. In [13], the authors estimated the relative positions of probes on the pipe and obtained undistorted images by combining a camera sensor with an integrated laser ring projector. Gunatilake et al. [14] proposed a mobile robot detection system that combines infrared laser profiling devices with stereo camera vision to generate a 3D RGB depth map, allowing for scanning, detecting, locating, and measuring internal defects in pipelines. In [15], the authors utilized an active stereo omnidirectional vision sensor to obtain panoramic images of the pipeline’s internal surface. Fang et al. [16] proposed an interesting method that restores the images of pipe walls using images captured from the front view. They addressed the radial distortion issue of the front-view images by employing nonlinear fitting features of a neural network and homemade feature stickers. These studies utilized high-resolution imaging equipment, such as 3D depth cameras that are capable of measuring the three-dimensional positional information of photographs or neural networks trained specifically for particular scenarios. However, these approaches have limitations in terms of their widespread usage due to the high cost involved in building high-performance equipment or the need for specialized artificial intelligence models.
Our proposed technique aims to generate images of the inner wall of a pipe by utilizing perspective images captured by a monocular camera sensor. Since the images of the pipe wall, captured while the sensor moves through the pipe, are perspective images projecting the wall from close to the camera to far away, we calculate the position of the corresponding wall on each pixel of the captured image. This calculation accounts for the focal length and the diameter of the pipe. The key unknown parameter that we need to determine is the depth on the pipe’s surface corresponding to a point on the image plane. The difficulty in finding this stems from the mapping between a two-dimensional plane and a three-dimensional curved surface. The problem becomes even more complex, particularly when the image plane is tilted and exhibits asymmetry. To obtain the relationship between a point on the image plane and a point on the corresponding wall when the pipe and the image plane are perpendicular, we first derive the inverse perspective mapping (IPM) formula for a cylinder. IPM is a transformation technique that can be used to map a 3D object projected onto a 2D plane to a new position to create a new 2D image. In [17,18,19], lane detection systems for autonomous driving, which is a well-known application of IPM, were employed. These systems utilize IPM to transform the perspective view of a road image into a top-view space, simplifying lane detection and eliminating perspective distortion from the resulting 2D image. We use the IPM to restore the sharpness of the image at an appropriate restoration distance, as the sharpness decreases with the increasing distance of the wall from the camera. Moreover, to correct the distortion that occurs due to obstacles during the movement of the robot inside the pipe, we derive a generalized formula that considers the angle of the image plane. This IPM formula is based on the position of the vanishing point in the perspective image. Another factor that contributes to the difficulty of this problem is the unknown position of the vanishing point. While a common method for vanishing point detection involves finding the intersection of the detected lines using the Hough transform [20], this method may not always apply to perspective pipe images due to the absence of a line component leading to the vanishing point. Therefore, we use optical flow [21,22] for vanishing point detection inside the pipe. Optical flow is a vector field that captures the motion of objects between consecutive frames by identifying the displacement of corresponding pixels in adjacent frames. To capture a panoramic image of the inner surface of a pipe, we first create a contour plot by using the vector magnitude of the optical flow formed in the perspective image of the pipe and detect circles. Next, we calculate the convergence positions of these circles to detect the vanishing point. We then connect the transformed images using image stitching techniques [23] and concatenate them into a panoramic image of the pipe wall. Ultimately, our technique generates a flat map of the pipe wall, which is convenient for crack detection. To validate our approach, we conduct numerical experiments using a 3D pipe model with arbitrary shape patterns and cracks inside.

2. Methods

In this section, we introduce an IPM-based method for obtaining a 2D image of the interior surface of a pipe from perspective images acquired while moving along the pipe. The principle of this method is to find a mapping that corresponds to cylindrical coordinates, derived from a polar coordinate system with the vanishing point as the origin, within the perspective image. The key unknown parameter we need to determine is the depth on the pipe that corresponds to a point on the image plane. In Section 2.1, assuming prior knowledge of the vanishing point, we derive an IPM-based formula to determine this depth. We first discuss the case of a perpendicular image plane and then derive the equation for the scenario where the image plane is tilted. Next, in Section 2.2, we present the image stitching technique that connects the transformed images to generate a panoramic image. Section 2.3 introduces the optical flow method to find the vanishing point. Finally, in Section 2.4, we provide an overview of the entire algorithm.

2.1. Inverse Perspective Mapping

2.1.1. Perpendicular Image Plane

Figure 1 illustrates the projection of the pipe’s interior surface onto the image plane within the pipe. We assume that the camera is moving through a straight pipe of a constant radius ($R$) and capturing perspective images. In Figure 1, the image plane, which is marked by a blue rectangle, is shown perpendicular to the direction of the flow inside the pipe. If circles are drawn at regular intervals inside the pipe, they will be projected closer to the vanishing point the farther the camera moves away from them. Representing the coordinates of the pipe wall in cylindrical coordinates and the image plane in polar coordinates, we can consider the following mapping between a point $P(r, \theta)$ on the image plane and a point $P'(r', \theta', z)$ on the pipe wall:
$$f : P(r, \theta) \mapsto P'(r', \theta', z), \qquad (1)$$
where $\theta' = \theta$ and $r' = R$ (const.).
The red rectangle in Figure 1 represents a plane that shows the cross-section of the pipe for a fixed $\theta$. Point $P$ on the image plane corresponds to point $A$ on the cross-section, while point $P'$ on the pipe wall corresponds to point $B$. By geometric proportion, we have the relationship $r : R = f : (f + z)$. Therefore, we obtain the following equation:
$$z = f\left(\frac{R}{r} - 1\right). \qquad (2)$$
Here, f is the focal length, R is the radius of the pipe, and r is the distance from a point on the image plane to the vanishing point. Since f and R are both constants, we can observe that z + f and r are inversely proportional to each other, where z + f is the horizontal distance from the light source to the corresponding point on the pipe wall.
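To make the mapping concrete, the sketch below samples a frontal image on a regular $(\theta, z)$ grid by inverting Equation (2) to $r = fR/(z+f)$, so that each cell of the unrolled wall image looks up the corresponding pixel around the vanishing point. This is only a minimal illustration of the transformation, not the implementation used in this work; the function name, the pixel-scale parameter px_per_mm, and the default ranges are assumptions introduced here.

```python
import cv2
import numpy as np

def unroll_wall(frontal_img, vp, f_mm, R_mm, px_per_mm,
                z_range_mm=(50.0, 400.0), out_size=(720, 256)):
    """Unroll a frontal pipe image into a (theta, z) wall map via Eq. (2).

    Eq. (2), z = f(R/r - 1), is inverted to r = f*R/(z + f) so that every
    output cell (theta, z) knows which image radius to sample around the
    vanishing point vp = (x, y). Names and default values are illustrative
    assumptions, not the authors' implementation.
    """
    n_theta, n_z = out_size
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    z = np.linspace(z_range_mm[0], z_range_mm[1], n_z)
    th, zz = np.meshgrid(theta, z)                  # shape (n_z, n_theta)

    r_mm = f_mm * R_mm / (zz + f_mm)                # image-plane radius (mm)
    r_px = r_mm * px_per_mm                         # radius in pixels

    map_x = (vp[0] + r_px * np.cos(th)).astype(np.float32)
    map_y = (vp[1] + r_px * np.sin(th)).astype(np.float32)

    # Rows of the result correspond to depth z, columns to angle theta.
    return cv2.remap(frontal_img, map_x, map_y, cv2.INTER_LINEAR)
```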

2.1.2. Inclined Image Plane

Figure 2 shows the projection of the interior surface of the pipe onto an image plane tilted by an angle $\psi$ about the x-axis. We define the rectangular region $\pi_2$ as the area where the interior of the pipe intersects the plane created by rotating the half $xz$-plane with $x > 0$ counterclockwise by an angle $\theta$. Let $l_1$ be the intersection line of $\pi_2$ and the image plane $\pi_1$. The angle ($\phi$) between $l_1$ and the plane perpendicular to the z-axis is determined by $\theta$ and $\psi$, where $\psi$ is the angle by which the image plane rotates around the x-axis. Therefore, Equation (2) needs to be modified to include both $\theta$ and $\psi$ as variables. As a result, point $P$ on the image plane corresponds to point $A$ on the cross-sectional plane, and the depth $z$ of point $B$ on the wall varies with the values of $\psi$ and $\theta$, which depend on the position of $A$.
As shown in Figure 3, to determine the point $B$ on the wall onto which a point $A$ on the image plane is projected, we consider the line $l_1$ on the cross-sectional plane ($\pi_2$). Here, $l_1$ is the intersection of the image plane ($\pi_1$) and the cross-sectional plane ($\pi_2$), while $O$ is the vanishing point on the image plane. Let $C$ be the light source; then $A$ is projected onto point $B$ on the wall, and the line passing through these three points is denoted as $l_2$. The two lines are then expressed as follows:
$$l_1 : r = \frac{z}{\tan\phi}, \qquad (3)$$
$$l_2 : r = \frac{R}{z_0 + f}\,(z + f), \qquad (4)$$
where $z_0$ denotes the depth of the wall point $B$. Since point $A$ lies on both $l_1$ and $l_2$, and it is located at a distance $r$ from $O$ along $l_1$ (so that its depth and radial coordinates are $r\sin\phi$ and $r\cos\phi$, respectively), we obtain the following equation:
$$\frac{R}{z_0 + f}\,(r\sin\phi + f) = r\cos\phi. \qquad (5)$$
Solving the equation for $z_0$ yields the following expression:
$$z_0 = R\left[\tan\phi + \frac{f}{r\cos\phi}\right] - f. \qquad (6)$$
Now, we aim to obtain a relationship for $\phi$, which is determined by the rotation angle $\theta$ of the cross-sectional plane ($\pi_2$) and the tilt angle $\psi$ of the image plane ($\pi_1$), as shown in Figure 2. Let $\mathbf{n}_1$ and $\mathbf{n}_2$ be the normal vectors to planes $\pi_1$ and $\pi_2$, respectively. We can obtain the rotated normal vectors by applying the rotations of $\psi$ and $\theta$ to the normal vectors of the untilted planes, respectively, as follows:
$$\mathbf{n}_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -\sin\psi \\ \cos\psi \end{bmatrix}, \qquad (7)$$
$$\mathbf{n}_2 = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} \sin\theta \\ -\cos\theta \\ 0 \end{bmatrix}. \qquad (8)$$
The direction vector $\mathbf{u}$ of $l_1$ is the cross product of $\mathbf{n}_1$ and $\mathbf{n}_2$, which can be expressed as follows:
$$\mathbf{u} = \mathbf{n}_1 \times \mathbf{n}_2 = \begin{bmatrix} \cos\theta\cos\psi \\ \sin\theta\cos\psi \\ \sin\theta\sin\psi \end{bmatrix}. \qquad (9)$$
Now, considering the relationship between $\mathbf{u}$ and the direction vector $\mathbf{e}_3$ of the z-axis, we have:
$$\cos\!\left(\frac{\pi}{2} - \phi\right) = \sin\phi = \frac{\mathbf{u}\cdot\mathbf{e}_3}{\lVert\mathbf{u}\rVert} = \frac{\sin\theta\sin\psi}{\sqrt{\cos^{2}\psi + \sin^{2}\theta\,\sin^{2}\psi}}. \qquad (10)$$
As a result, we obtain the following equation, which determines the depth of corresponding points on the cylindrical surface for the tilted image plane:
$$z = R\left[\tan\phi + \frac{f}{r\cos\phi}\right] - f, \qquad (11)$$
$$\phi = \sin^{-1}\!\left(\frac{\sin\theta\sin\psi}{\sqrt{\cos^{2}\psi + \sin^{2}\theta\,\sin^{2}\psi}}\right). \qquad (12)$$
Here, $f$ is the focal length, $R$ is the radius of the pipe, and $r$ is the length of $\overline{OA}$. Note that when $\psi = 0$, or when $\theta = 0$ or $\pi$ (so that $\phi = 0$), Equation (11) reduces to Equation (2).
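Equations (11) and (12) are straightforward to evaluate numerically. The following sketch (with illustrative names, not the authors' code) computes $\phi$ and the wall depth $z$, and checks that the tilted formula collapses to Equation (2) when $\psi = 0$, using the pipe radius (175 mm) and focal length (420 mm) of the simulation in Section 3.

```python
import numpy as np

def tilt_angle_phi(theta, psi):
    """Equation (12): angle phi of the intersection line l1 for azimuth theta
    when the image plane is pitched by psi (angles in radians)."""
    num = np.sin(theta) * np.sin(psi)
    den = np.sqrt(np.cos(psi) ** 2 + np.sin(theta) ** 2 * np.sin(psi) ** 2)
    return np.arcsin(num / den)

def wall_depth(r, theta, psi, f, R):
    """Equation (11): depth z of the wall point seen at distance r from the
    vanishing point along l1. r, f, and R must share the same length unit."""
    phi = tilt_angle_phi(theta, psi)
    return R * (np.tan(phi) + f / (r * np.cos(phi))) - f

# Sanity check with the simulated pipe radius (175 mm) and focal length
# (420 mm): with psi = 0 the result must match Eq. (2), z = f(R/r - 1).
f, R, r = 420.0, 175.0, 100.0
assert np.isclose(wall_depth(r, np.pi / 3, 0.0, f, R), f * (R / r - 1.0))
```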

2.2. Image Stitching

To generate a complete interior wall image of the pipe by connecting the images converted to a 2D wall, we use the panoramic image stitching technique [23,24]. This image stitching technique finds common features among multiple images taken of the same location or object and connects them into a single image. We use the SIFT (scale-invariant feature transform) method to identify corresponding points between two images, analyze the geometric relationship between the image pairs, and establish the transformation necessary to align and convert one image to another. Considering that the resolution of the images captured inside the pipe decreases with increasing observation depth, we carefully select and crop specific regions of the captured images for the purpose of image stitching, with the aim of obtaining optimal results despite the decreasing resolution.
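As a rough illustration of this step, the snippet below matches SIFT features between two cropped wall images with OpenCV, estimates a homography with RANSAC, and warps one image into the frame of the other. It is a minimal sketch under assumed parameter values (the ratio-test threshold and RANSAC tolerance), not the exact stitching pipeline used in the paper.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, ratio=0.75, ransac_thresh=4.0):
    """SIFT-based stitching of two cropped wall images (illustrative only)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des2, des1, k=2)
            if m.distance < ratio * n.distance]

    # Homography that maps points of img_right into img_left's frame.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    # Warp the right image and paste the left image on top.
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[:h, :w] = img_left
    return pano
```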

2.3. Vanishing Point Detection

We detect the vanishing point using the optical flow obtained from a sequence of perspective images. Figure 4 shows a conceptual diagram that explains the method of detecting the vanishing point. Optical flow is calculated for two consecutive perspective images obtained during movement inside the pipe, as shown in Figure 4a. In order to address the aperture problem, we employed the Gunnar–Farneback dense optical flow method using the Python OpenCV library. As can be seen in Figure 4b, each vector in the optical flow is aligned with the direction toward the vanishing point, and its magnitude decreases closer to the vanishing point. Theoretically, the point where the greatest number of optical flow vectors intersect is the optimal estimate of the vanishing point. However, in practice, the accuracy of this approach is often limited by minor errors in the vectors, which can lead to significant positional deviations in the vanishing point estimation. Therefore, we instead generate a contour plot of the magnitudes of the optical flow vectors, as shown in Figure 4c. Circles are generated along the pipe wall in this contour plot. We select two circles among the circles generated around the vanishing point. Various methods for automatically detecting circles in images have been studied [25,26], but in the present study, we manually select three points on each of the two chosen contours to detect the circles. For the two detected circles, $C_1$ and $C_2$, with respective centers $P_1$ and $P_2$ and respective radii $r_1$ and $r_2$ (where $r_1 > r_2$), we use an infinite geometric series to calculate the position of the vanishing point $P_v$ as follows:
$$P_v = P_1 + \frac{1}{1 - r}\,(P_2 - P_1), \qquad (13)$$
where $r = r_2 / r_1$.
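A minimal sketch of this procedure is given below: the Farneback flow magnitude is computed with OpenCV, a circle is fitted through three manually picked contour points, and Equation (13) extrapolates the two circle centres to the vanishing point. The flow parameters and function names are illustrative assumptions rather than the settings used in the paper.

```python
import cv2
import numpy as np

def flow_magnitude(prev_gray, next_gray):
    """Dense Gunnar-Farneback optical flow magnitude between two frames.
    The numeric parameters (pyr_scale, levels, winsize, iterations, poly_n,
    poly_sigma, flags) are common defaults, not the paper's settings."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)

def circle_from_3_points(p1, p2, p3):
    """Centre and radius of the circle through three picked contour points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    centre = np.linalg.solve(a, b)
    return centre, np.hypot(centre[0] - x1, centre[1] - y1)

def vanishing_point(c1, r1, c2, r2):
    """Equation (13): extrapolate the converging circle centres (r1 > r2)."""
    r = r2 / r1
    return c1 + (c2 - c1) / (1.0 - r)
```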

2.4. Algorithm

Figure 5 represents the overall algorithm that we propose for creating a panoramic image of the inside wall of a pipe. It can be summarized into the following four-step process:
  • STEP 1. Capture the image sequences of the inside of the pipe while moving through the pipe, and measure the tilt angle ($\psi$) of the image plane.
  • STEP 2. Compute the optical flow for the consecutive images and generate a contour plot for the optical flow magnitudes. Select two circles from the contour plot and determine the vanishing point location using Equation (13).
  • STEP 3. Obtain images of the pipe wall corresponding to each perspective image for the collected image sequence using Equations (11) and (12).
  • STEP 4. Stitch the transformed images together using image stitching to create a panoramic image and detect cracks.

3. Results

To validate our method, we generated a sufficiently long 3D pipe using CATIA and inserted images with irregular patterns and cracks on the inner wall of the pipe. The diameter of the pipe was 350 mm and the focal length was 420 mm. We extracted a total of 50 perspective images by moving 40 mm inside the pipe while capturing images. We considered cases where the shooting angle was perpendicular to the direction of the pipe and cases where the pitch angle was tilted by 0.1 degrees.

3.1. Vanishing Point Detection

Figure 6 illustrates the process used to detect the vanishing point using the optical flow for a sequence of perspective images obtained by traversing the pipe model. The optical flow is calculated from the perspective image sequence shown in Figure 6a, and the green arrows in Figure 6b represent the optical flow (note that the white dot in the center is not the vanishing point but simply the coordinate axis). The vector magnitudes decrease toward the center and increase toward the image boundary, with some directional errors. Figure 6c shows a contour plot of the magnitude of the optical flow vectors, in which several circles can be seen near the vanishing point. In Figure 6d, the two circles marked with bold red lines were selected from the circles observed in the contour plot; we generated each of them by selecting three points on the corresponding contour. The red circles drawn inside them are based on the estimated centers and radii of the two selected circles. The magnified image at the bottom right of Figure 6d shows that the centers of the circles converge to the blue vanishing point.

3.2. Inverse Perspective Mapping

3.2.1. Perpendicular Image Plane

The left side of Figure 7 shows a perspective image when the image plane is perpendicular to the direction of the pipe. Two circles (blue lines) with radii $r_1$ and $r_2$ ($r_1 > r_2$) were generated around the vanishing point to set the area to be transformed. The right image in Figure 7 shows the resulting image that has been transformed using Equation (2). We used the scatter function in MATLAB to assign RGB values to the corresponding positions. As can be seen in the transformed image in Figure 7, deeper areas of the pipe are represented toward the right, resulting in decreased resolution and slight image distortion. Therefore, in this experiment, we set the cropping area to points satisfying $\frac{4}{7} r_1 < r < r_1$, where $r$ is the distance from the vanishing point on the image plane. The transformed images are connected using the stitching technique described in Section 2.2. For image stitching, we adopted the OpenCV library in Python. Figure 8 shows feature matching for two sequential images obtained using the SIFT algorithm. Figure 9 shows the results from connecting the transformed images using image stitching, thus enabling the easy identification of crack positions inside the pipe.
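For reference, the radial cropping bounds can be translated into the band of wall depths that each transformed image contributes by applying Equation (2) at the two crop radii. The helper below is a small illustrative calculation under that assumption; the function and parameter names are ours, not from the paper.

```python
def crop_depth_range(r1, f, R, inner_frac=4.0 / 7.0):
    """Wall-depth band [z_near, z_far] covered by the annulus
    inner_frac * r1 < r < r1 around the vanishing point, via Eq. (2)."""
    z_near = f * (R / r1 - 1.0)                 # outer radius r1: nearest wall
    z_far = f * (R / (inner_frac * r1) - 1.0)   # inner radius (4/7) r1: deeper
    return z_near, z_far
```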

3.2.2. Inclined Image Plane

Figure 10 on the left shows a perspective image obtained when the image plane is tilted by 0.1 degrees in pitch. The blue circle represents the cropping range. If we use Equation (2) without considering the tilt of the image plane, the resulting image is distorted, as shown in the middle image in Figure 10. However, when we use Equations (11) and (12) to consider the tilt of the image plane, the transformed image no longer has a rectangular boundary and instead has a wavy shape, and the image distortion disappears. To perform actual image stitching, we set the cropping range to exclude the wavy area and accordingly calculated the internal square.

4. Discussion

In our paper, we propose a technique for generating the panoramic image of the in-pipe surface by transforming perspective images obtained from inside the pipe. Firstly, we applied the optical flow technique to consecutive perspective images, generating contours and calculating the converging positions of circles to detect the vanishing point. The obtained vanishing point was used as the reference to rotate cross-sections and derive the positions of inverse projection points on the image plane. To correct the image distortions caused by the tilting of the image plane, we derived an IPM-based formulation by considering the intersection of the image plane and the cross-sectional plane, based on the tilting angle.
Through numerical experiments, we confirmed that perspective images obtained from a 3D pipe model could be restored to images of the pipe wall using IPM. Image transformation with the derived IPM-based formulation was computationally efficient, taking approximately 0.16 s per image without the need for matrix inversions. We also observed effective distortion correction even when the image plane was tilted. The vanishing point calculated from manually extracted circles showed reasonable estimates but remained limited by the accuracy of the circle detection. As a result, the obtained panoramic image of the internal pipe surface accurately displayed the positions and shapes of cracks, indicating its potential usefulness for crack detection based on visual inspection or image-processing techniques.

5. Conclusions

This paper aims to develop a cost-effective and simple algorithm for generating complete images of the inner walls of pipes. Our contributions are as follows:
  • We developed a simple algorithm based on inverse perspective mapping (IPM) that generates images of the entire pipe wall using only image sequences obtained from a monocular camera, without the need for high-performance capturing equipment.
  • We proposed a new technique based on optical flow for obtaining vanishing points in perspective images of the pipe interior with few line edges.
  • We presented formulas to address image distortion issues caused by shooting angles when applying IPM.
  • Our algorithm enables the acquisition of an end-to-end image of the pipe through image stitching, allowing for the detection of crack severity and location.
  • Furthermore, we can obtain a 360-degree image of the pipe interior without the need for camera rotation.
Our limitations and future research topics are as follows: In this paper, we did not propose a method for estimating the tilted angle of the image plane. Additional considerations should be given to methods for calculating the tilt of the image plane, such as using inertial measurement unit (IMU) sensors or directly measuring the tilt. The automated extraction of circles using the circle Hough transform in the vanishing point detection process requires further research. We expect these issues to be further developed and experimentally validated through future research.

Author Contributions

Conceptualization, S.S.Y. and H.-S.L.; methodology, S.S.Y. and H.-S.L.; software, S.S.Y.; validation, S.S.Y.; resources, H.-S.L.; writing—original draft preparation, S.S.Y.; writing—review and editing, H.-S.L.; visualization, S.S.Y.; project administration, H.-S.L.; funding acquisition, H.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the Joongbu University Research & Development Fund in 2022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IPM     inverse perspective mapping
RVI     remote visual inspection
IMU     inertial measurement unit
SIFT    scale-invariant feature transform

References

  1. Hashim, A.S.; Grămescu, B.; Niţu, C. Pipe cracks detection methods—A review. In Proceedings of the International Conference of Mechatronics and Cyber-MixMechatronics, Bucharest, Romania, 6–7 September 2018; Volume 2, pp. 185–193.
  2. Tang, C.; Du, B.; Jiang, S.; Shao, Q.; Dong, X.; Liu, X.J.; Zhao, H. A pipeline inspection robot for navigating tubular environments in the sub-centimeter scale. Sci. Robot. 2022, 7, eabm8597.
  3. Jang, H.; Kim, T.Y.; Lee, Y.C.; Song, Y.H.; Choi, H.R. Autonomous navigation of in-pipe inspection robot using contact sensor modules. IEEE/ASME Trans. Mechatron. 2022, 27, 4665–4674.
  4. Elankavi, R.S.; Dinakaran, D.; Doss, A.S.A.; Chetty, R.K.; Ramya, M.M. Design and Motion Planning of a Wheeled Type Pipeline Inspection Robot. JRC 2022, 3, 415–430.
  5. Reyes-Acosta, A.V.; Lopez-Juarez, I.; Osorio-Comparán, R.; Lefranc, G. 3D pipe reconstruction employing video information from mobile robots. Appl. Soft Comput. 2019, 75, 562–574.
  6. Roman, H.T.; Pellegrino, B.A.; Sigrist, W.R. Pipe crawling inspection robots: An overview. IEEE Trans. Energy Convers. 1993, 8, 576–583.
  7. Summan, R.; Jackson, W.; Dobie, G.; MacLeod, C.; Mineo, C.; West, G.; Offin, D.; Bolton, G.; Marshall, S.; Lille, A. A novel visual pipework inspection system. AIP Conf. Proc. 2018, 1949, 220001.
  8. Islam, M.M.; Kim, J.M. Vision-based autonomous crack detection of concrete structures using a fully convolutional encoder–decoder network. Sensors 2019, 19, 4251.
  9. Zhang, Y.; Cui, X.; Liu, Y.; Yu, B. Tire defects classification using convolution architecture for fast feature embedding. Int. J. Comput. Intell. Syst. 2018, 11, 1056–1066.
  10. Wu, S.; Wu, Y.; Cao, D.; Zheng, C. A fast button surface defect detection method based on Siamese network with imbalanced samples. Multimed. Tools Appl. 2019, 78, 34627–34648.
  11. Pahwa, R.S.; Leong, W.K.; Foong, S.; Leman, K.; Do, M.N. Feature-less stitching of cylindrical tunnel. In Proceedings of the 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), Singapore, 6–8 June 2018; pp. 502–507.
  12. Kagami, S.; Taira, H.; Miyashita, N.; Torii, A.; Okutomi, M. 3D pipe network reconstruction based on structure from motion with incremental conic shape detection and cylindrical constraint. In Proceedings of the 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE), Delft, The Netherlands, 17–19 June 2020; pp. 1345–1352.
  13. Hosseinzadeh, S.; Jackson, W.; Zhang, D.; McDonald, L.; Dobie, G.; West, G.; MacLeod, C.A. A novel centralization method for pipe image stitching. IEEE Sens. J. 2020, 21, 11889–11898.
  14. Gunatilake, A.; Piyathilaka, L.; Tran, A.; Vishwanathan, V.K.; Thiyagarajan, K.; Kodagoda, S. Stereo vision combined with laser profiling for mapping of pipeline internal defects. IEEE Sens. J. 2020, 21, 11926–11934.
  15. Wu, T.; Lu, S.; Tang, Y. An in-pipe internal defects inspection system based on the active stereo omnidirectional vision sensor. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery, Zhangjiajie, China, 15–17 August 2015; pp. 2637–2641.
  16. Fang, X.; Wang, Y.; Li, Y.; Wang, J.; Zhou, L. An End-To-End Model for Pipe Crack Three-Dimensional Visualization Based on a Cascade Neural Network. Appl. Sci. 2020, 10, 1290.
  17. Oliveira, M.; Santos, V.; Sappa, A.D. Multimodal inverse perspective mapping. Inf. Fusion 2015, 24, 108–121.
  18. Nieto, M.; Salgado, L.; Jaureguizar, F.; Cabrera, J. Stabilization of inverse perspective mapping images based on robust vanishing point estimation. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 315–320.
  19. Jeong, J.; Kim, A. Adaptive inverse perspective mapping for lane map generation with SLAM. In Proceedings of the 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Xi’an, China, 19–22 August 2016; pp. 38–41.
  20. Lutton, E.; Maitre, H.; Lopez-Krahe, J. Contribution to the determination of vanishing points using Hough transform. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 430–438.
  21. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203.
  22. Fortun, D.; Bouthemy, P.; Kervrann, C. Optical flow modeling and computation: A survey. Comput. Vis. Image Underst. 2015, 134, 1–21.
  23. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73.
  24. Lyu, W.; Zhou, Z.; Chen, L.; Zhou, Y. A survey on image and video stitching. Virtual Real. Intell. Hardw. 2019, 1, 55–83.
  25. Ye, H.; Shang, G.; Wang, L.; Zheng, M. A new method based on Hough transform for quick line and circle detection. In Proceedings of the 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 52–56.
  26. Yuen, H.K.; Princen, J.; Illingworth, J.; Kittler, J. Comparative study of Hough transform methods for circle finding. Image Vis. Comput. 1990, 8, 71–77.
Figure 1. A perspective projection onto an image plane that is perpendicular to the piping.
Figure 2. Perspective projection onto a tilted image plane.
Figure 3. Correspondence between a tilted image plane and the interior wall surface of a pipe.
Figure 4. Detection of vanishing points using the optical flow from successive perspective images of the interior wall surfaces of pipes: (a) perspective images, (b) optical flow, (c) contour plot representing the levels of the optical flow vector magnitude.
Figure 5. An overall algorithm for generating a panoramic image of the interior wall of a pipe based on inverse perspective mapping.
Figure 6. Vanishing point detection using optical flow.
Figure 7. Generation of wall images using IPM when the flow direction of the image plane is perpendicular to the pipe. The red dashed lines indicate the cropping range.
Figure 8. Image registration between two IPM-transformed images.
Figure 9. Panoramic image of the pipe wall created by connecting IPM-transformed images using image stitching.
Figure 10. Image correction for the case when the image plane is tilted.