A Plane Extraction Approach in Inverse Depth Images Based on Region-Growing
Abstract
1. Introduction
- Based on the pinhole model of a camera, a plane representation in an inverse-depth image is formulated, which saves computational cost by avoiding 3D reconstruction of the environment.
- The region-growing-based approach is improved in two ways. Taking two basic factors, locality and coverage, into consideration, a grid local seeding strategy is applied to improve exploration efficiency. Moreover, a combination of a greedy policy and normal coherence makes the approach robust to noise.
- The accuracy and efficiency of the proposed method are validated through experiments on public datasets and generated sawtooth images. In addition, its computational complexity is analyzed.
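The first contribution can be sketched as follows; the notation (intrinsics $f_x, f_y, c_x, c_y$ and a plane $(\mathbf{n}, d)$) is assumed here for illustration, not taken from the paper:

```latex
% Pinhole projection of a 3D point (X, Y, Z) to pixel (u, v):
%   u = f_x X/Z + c_x,   v = f_y Y/Z + c_y,   inverse depth w = 1/Z.
% For a plane n_1 X + n_2 Y + n_3 Z + d = 0 with d \neq 0, dividing by Z:
\begin{align}
  n_1 \frac{X}{Z} + n_2 \frac{Y}{Z} + n_3 + \frac{d}{Z} &= 0, \\
  w(u, v) = \frac{1}{Z}
    &= -\frac{n_1}{d}\,\frac{u - c_x}{f_x}
       -\frac{n_2}{d}\,\frac{v - c_y}{f_y}
       -\frac{n_3}{d}.
\end{align}
```

In other words, each 3D plane appears as an affine function $w = \alpha u + \beta v + \gamma$ of the pixel coordinates in the inverse-depth image, so plane fitting can operate directly on pixels without reconstructing 3D points.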
2. Related Work
3. Preparation
3.1. Plane in Inverse Depth Images
3.2. Estimation of Surface Normals
4. Approach
4.1. Grid Local Seeding
- Coverage. In practical applications, a plane may appear anywhere in an image, with no prior knowledge of its location. A plane extractor should therefore be able to detect and segment every plane in the whole image.
- Locality. In the real world, planes are usually continuous, non-overlapping surfaces. When projected onto an image, adjacent pixels are therefore more likely to belong to the same plane.
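The two factors above can be balanced by seeding once per grid cell. The following is a minimal generic sketch (the function name, cell size, and center-most tie-breaking rule are assumptions, not the paper's exact procedure):

```python
def grid_local_seeds(labels, cell=16):
    """Return one seed pixel per grid cell that still has unlabeled pixels.

    labels is a nested list (H x W); 0 means the pixel is not yet assigned
    to any plane.  One seed per cell keeps seeds spread over the whole image
    (coverage), while each seed is picked near its cell centre so growth
    starts from a compact local neighbourhood (locality).
    """
    h, w = len(labels), len(labels[0])
    seeds = []
    for r0 in range(0, h, cell):
        for c0 in range(0, w, cell):
            # all still-unlabeled pixels inside this cell
            free = [(r, c)
                    for r in range(r0, min(r0 + cell, h))
                    for c in range(c0, min(c0 + cell, w))
                    if labels[r][c] == 0]
            if free:
                cr, cc = r0 + cell / 2, c0 + cell / 2
                # choose the unlabeled pixel closest to the cell centre
                seeds.append(min(free,
                                 key=lambda p: abs(p[0] - cr) + abs(p[1] - cc)))
    return seeds

# 64x64 image whose top half is already segmented: only the free
# bottom half (8 of the 16 cells) receives new seeds.
labels = [[1 if r < 32 else 0 for c in range(64)] for r in range(64)]
print(len(grid_local_seeds(labels)))  # 8
```

Re-seeding after each growing pass in this way means cells already covered by extracted planes stop producing seeds, which is where the efficiency gain comes from.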
4.2. Plane Growing
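As a generic illustration of the growing step named above (greedy expansion gated by normal coherence; the function, threshold, and 4-connectivity are assumptions, not the paper's exact algorithm):

```python
from collections import deque

def grow_plane(seed, normals, labels, label, coherence=0.95):
    """Greedy breadth-first region growing from one seed pixel.

    normals: H x W nested list of unit surface normals (3-tuples).
    labels:  H x W nested list, 0 = unassigned; filled in place.
    A neighbour joins the region if its normal is coherent with the seed
    normal (dot product above `coherence`).  Greedy: a pixel that passes
    the test is accepted immediately and never revisited.
    """
    h, w = len(normals), len(normals[0])
    sr, sc = seed
    ref = normals[sr][sc]            # reference normal of the region
    queue = deque([seed])
    labels[sr][sc] = label
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and labels[nr][nc] == 0:
                n = normals[nr][nc]
                if sum(a * b for a, b in zip(ref, n)) > coherence:
                    labels[nr][nc] = label
                    queue.append((nr, nc))

# Two flat patches with different orientations: growth from (0, 0)
# stops at the normal discontinuity and labels only the left patch.
normals = [[(0, 0, 1) if c < 2 else (1, 0, 0) for c in range(4)]
           for r in range(4)]
labels = [[0] * 4 for _ in range(4)]
grow_plane((0, 0), normals, labels, 1)
print(sum(row.count(1) for row in labels))  # 8
```

In a full pipeline one would also bound the inverse-depth residual against the fitted plane and refit the plane model as the region grows; this sketch keeps only the coherence gate to show the control flow.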
5. Experiment
5.1. Evaluation on SegComp ABW Dataset
5.1.1. Seeding Strategies
5.1.2. Termination Criteria and Merge
5.1.3. Comparison with State-of-the-Art
5.2. Evaluation on Sawtooth Images
5.3. Evaluation on NYU Depth v2
5.3.1. Qualitative Comparison
5.3.2. Computational Complexity Analysis
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
Interior Parameters | Value |
---|---|
s | 773.545 |
 | 255 |
 | 255 |
 | 2337.212 |
 | 1610.982 |
Distance | G | N | M | Sensitivity | Specificity | CDR |
---|---|---|---|---|---|---|
0.001 | ✓ | ✓ | | 92.0 | 99.8 | 79.1 |
 | ✓ | ✓ | | 94.3 | 99.9 | 75.5 |
 | ✓ | ✓ | | 94.3 | 99.9 | 74.9 |
 | ✓ | ✓ | ✓ | 94.3 | 99.9 | 76.3 |
0.0015 | ✓ | ✓ | | 92.5 | 99.9 | 80.2 |
 | ✓ | ✓ | | 95.0 | 99.9 | 82.7 |
 | ✓ | ✓ | | 95.3 | 99.9 | 83.0 |
 | ✓ | ✓ | ✓ | 95.3 | 99.9 | 83.8 |
0.002 | ✓ | ✓ | | 91.8 | 99.9 | 82.8 |
 | ✓ | ✓ | | 95.3 | 99.9 | 83.5 |
 | ✓ | ✓ | | 95.7 | 99.9 | 84.3 |
 | ✓ | ✓ | ✓ | 95.7 | 99.9 | 84.9 |
0.0025 | ✓ | ✓ | | 91.3 | 99.9 | 79.2 |
 | ✓ | ✓ | | 95.1 | 99.9 | 83.9 |
 | ✓ | ✓ | | 95.8 | 99.9 | 84.6 |
 | ✓ | ✓ | ✓ | 95.9 | 99.9 | 85.1 |
0.003 | ✓ | ✓ | | 90.6 | 99.9 | 77.4 |
 | ✓ | ✓ | | 94.7 | 99.9 | 83.2 |
 | ✓ | ✓ | | 95.8 | 99.9 | 83.2 |
 | ✓ | ✓ | ✓ | 95.8 | 99.9 | 84.4 |
Distance | G * | N * | Sensitivity | Specificity | CDR |
---|---|---|---|---|---|
0.001 | | | 80.9 | 98.5 | 54.8 |
 | ✓ | | 80.8 | 98.5 | 54.8 |
 | | ✓ | 83.1 | 98.7 | 45.2 |
 | ✓ | ✓ | 81.8 | 98.6 | 47.2 |
0.002 | | | 95.7 | 99.6 | 100.0 |
 | ✓ | | 96.2 | 99.7 | 100.0 |
 | | ✓ | 95.9 | 99.7 | 100.0 |
 | ✓ | ✓ | 96.3 | 99.7 | 100.0 |
0.003 | | | 96.7 | 99.7 | 100.0 |
 | ✓ | | 98.0 | 99.8 | 100.0 |
 | | ✓ | 97.5 | 99.8 | 100.0 |
 | ✓ | ✓ | 98.3 | 99.9 | 100.0 |
0.004 | | | 95.1 | 99.6 | 100.0 |
 | ✓ | | 97.2 | 99.8 | 100.0 |
 | | ✓ | 97.1 | 99.7 | 100.0 |
 | ✓ | ✓ | 98.3 | 99.9 | 100.0 |
Process | Time (ms) |
---|---|
Inverse | 1.64 |
Normal Estimation | 34.88 |
Seeding | 4.65 |
Growing | 147.81 |
Merge | 0.06 |
Total | 185.68 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Han, X.; Wang, X.; Leng, Y.; Zhou, W. A Plane Extraction Approach in Inverse Depth Images Based on Region-Growing. Sensors 2021, 21, 1141. https://doi.org/10.3390/s21041141