LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors
Abstract
1. Introduction
- LiDAR-Net, a novel intensity completion method based on intensity–depth fusion, is proposed. The experimental results show that the proposed method provides competitive performance compared with state-of-the-art completion methods.
- A LiDAR intensity fusion method is proposed to generate the intensity ground truth for training. Using the multiple types of intensity data produced by this method for training can improve the performance of LiDAR intensity completion.
- The proposed method is evaluated on object (lane) segmentation based on completed intensity maps. The results show that off-the-shelf computer vision techniques can operate on the completed LiDAR intensity maps. Moreover, LiDAR intensity completion provides more robust lane segmentation than visible-light cameras under adverse conditions (see the input-representation sketch after this list).
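For concreteness, the sketch below illustrates the kind of sparse input that intensity completion operates on: a LiDAR scan projected into the image plane to form aligned sparse intensity and depth maps. This is a minimal illustration under assumed calibration, not the authors' exact preprocessing; the function name, matrix conventions, and the zero-as-missing convention are assumptions.

```python
import numpy as np

def project_scan_to_sparse_maps(points, intensities, K, T_cam_lidar, h, w):
    """Project a LiDAR scan into the image plane to build aligned sparse
    intensity and depth maps (zeros mark missing pixels).

    points:      (N, 3) LiDAR points in the sensor frame
    intensities: (N,)   raw intensity/reflectance per return
    K:           (3, 3) camera intrinsics (assumed known from calibration)
    T_cam_lidar: (4, 4) extrinsic transform from the LiDAR to the camera frame
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                   # transform to the camera frame
    in_front = pts_cam[:, 2] > 0.1                               # keep points ahead of the camera
    pts_cam, inten = pts_cam[in_front], intensities[in_front]

    uv = (K @ pts_cam.T).T                                       # perspective projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    sparse_intensity = np.zeros((h, w), dtype=np.float32)
    sparse_depth = np.zeros((h, w), dtype=np.float32)
    sparse_intensity[v[valid], u[valid]] = inten[valid]
    sparse_depth[v[valid], u[valid]] = pts_cam[valid, 2]         # depth = z in the camera frame
    return sparse_intensity, sparse_depth
```

The resulting pair of sparse maps is the kind of input a completion network densifies.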
2. Related Work
2.1. LiDAR Intensity
2.2. Sparse to Dense
3. Method
3.1. Intensity Fusion for Ground-Truth Generation
3.1.1. Distance Compensation
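The radiometric-correction literature cited in the references (e.g., Kashani et al.; Höfle and Pfeifer; Jutzi and Gross) models the received intensity as falling off with the square of the range, so distance compensation conventionally rescales every return to a common reference range. The snippet below is a minimal sketch of that standard correction, not the paper's exact formulation; the reference range d_ref is an assumed parameter.

```python
import numpy as np

def compensate_distance(intensity, distance, d_ref=10.0):
    """Range-squared distance compensation: rescale raw intensity so every
    return behaves as if it were measured at the reference range d_ref
    (conventional model from the LiDAR radiometric-correction literature)."""
    return intensity * (np.asarray(distance) / d_ref) ** 2
```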
3.1.2. Incidence Normalization
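Under the Lambertian assumption (Lambert's cosine law, cited in the references), the returned intensity scales with the cosine of the incidence angle between the laser ray and the surface normal, so normalization divides that factor out. This is again a minimal sketch of the conventional correction rather than the paper's exact procedure; the clamping threshold is an assumption to avoid division by near-zero cosines.

```python
import numpy as np

def normalize_incidence(intensity, ray_dirs, normals, eps=1e-3):
    """Incidence-angle normalization under a Lambertian assumption: divide by
    cos(theta), where theta is the angle between the (unit) laser ray and the
    (unit) surface normal."""
    cos_theta = np.abs(np.sum(ray_dirs * normals, axis=-1))
    return intensity / np.clip(cos_theta, eps, 1.0)
```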
3.1.3. Multi-View Fusion
3.1.4. Inverse Reproduction
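Judging only from the heading names, inverse reproduction plausibly re-applies the range and incidence factors so that the fused, normalized intensity is rendered back to what a sensor at a target viewpoint would measure. The sketch below is that assumed interpretation, written as the inverse of the two corrections above; it is not confirmed by the text.

```python
def reproduce_raw_intensity(normalized_intensity, distance, cos_theta, d_ref=10.0):
    """Assumed inverse of the two corrections above: re-apply the Lambertian
    cosine factor and the range-squared falloff for a target viewpoint.
    (Interpretation of "inverse reproduction" inferred from the headings.)"""
    return normalized_intensity * cos_theta * (d_ref / distance) ** 2
```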
3.2. LiDAR-Net
3.2.1. Architecture
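The references include U-Net, ResNet, and batch normalization, so an encoder-decoder with skip connections is a plausible backbone for intensity–depth fusion. The PyTorch-style module below is purely illustrative: the early-fusion strategy (stacking sparse intensity and depth as two input channels), layer counts, and channel widths are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Conv -> BatchNorm -> ReLU, the basic unit used throughout this sketch
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class IntensityDepthFusionNet(nn.Module):
    """Illustrative encoder-decoder for intensity completion.

    Input : (B, 2, H, W) sparse intensity and sparse depth stacked as channels
            (H and W divisible by 4 because of the two pooling stages).
    Output: (B, 1, H, W) dense intensity map.
    """
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)
```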
3.2.2. Training
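Adam is cited in the references, so a masked regression loss optimized with Adam is a reasonable guess at the training setup. The loss choice (L1 over pixels where ground truth exists) and the learning rate below are assumptions for illustration only.

```python
import torch

def masked_l1_loss(pred, target):
    """L1 loss evaluated only where ground-truth intensity is available
    (zeros are treated as missing pixels in this sketch)."""
    mask = target > 0
    return torch.abs(pred[mask] - target[mask]).mean()

# Illustrative loop, assuming the network sketch above and a DataLoader `loader`:
# model = IntensityDepthFusionNet()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam as cited; lr is an assumption
# for sparse_in, gt_intensity in loader:
#     optimizer.zero_grad()
#     loss = masked_l1_loss(model(sparse_in), gt_intensity)
#     loss.backward()
#     optimizer.step()
```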
4. Experiments
4.1. Evaluation of Intensity Ground Truth
4.1.1. Consistency in Normalized Intensity Maps
4.1.2. Quality of Artificial Intensity Maps
4.2. Comparison of Intensity Completion
4.3. Completion Ablation Experiments
4.3.1. Effectiveness of Intensity–Depth Fusion
4.3.2. Effectiveness of Supervision with Normalized Intensity
4.4. Comparison of Depth Completion
4.5. Lane Segmentation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kashani, A.; Olsen, M.; Parrish, C.; Wilson, N. A review of LiDAR radiometric processing: From ad hoc intensity correction to rigorous radiometric calibration. Sensors 2015, 15, 28099–28128.
- Wan, G.; Yang, X.; Cai, R.; Li, H.; Zhou, Y.; Wang, H.; Song, S. Robust and Precise Vehicle Localization Based on Multi-Sensor Fusion in Diverse City Scenes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4670–4677.
- Abdelaziz, N.; El-Rabbany, A. An Integrated INS/LiDAR SLAM Navigation System for GNSS-Challenging Environments. Sensors 2022, 22, 4327.
- Chen, X.; Chen, Z.; Liu, G.; Chen, K.; Wang, L.; Xiang, W.; Zhang, R. Railway Overhead Contact System Point Cloud Classification. Sensors 2021, 21, 4961.
- Li, H.; Zhao, S.; Zhao, W.; Zhang, L.; Shen, J. One-Stage Anchor-Free 3D Vehicle Detection from LiDAR Sensors. Sensors 2021, 21, 2651.
- Brkić, I.; Miler, M.; Ševrović, M.; Medak, D. Automatic roadside feature detection based on LiDAR road cross section images. Sensors 2022, 22, 5510.
- Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2470.
- Xue, F.; Wang, X.; Yan, Z.; Wang, Q.; Wang, J.; Zha, H. Local supports global: Deep camera relocalization with sequence enhancement. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2841–2850.
- Kim, J.; Park, C. End-to-end ego lane estimation based on sequential transfer learning for self-driving cars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 30–38.
- Dong, H.; Anderson, S.; Barfoot, T.D. Two-axis scanning lidar geometric calibration using intensity imagery and distortion mapping. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3672–3678.
- Anderson, S.; McManus, C.; Dong, H.; Beerepoot, E.; Barfoot, T.D. The gravel pit lidar-intensity imagery dataset. Technical Report ASRL-2012-ABL001; UTIAS: North York, ON, Canada, 2012.
- Barfoot, T.D.; McManus, C.; Anderson, S.; Dong, H.; Beerepoot, E.; Tong, C.H.; Furgale, P.; Gammell, J.D.; Enright, J. Into darkness: Visual navigation based on a lidar-intensity-image pipeline. In Robotics Research; Springer: Berlin/Heidelberg, Germany, 2016; pp. 487–504.
- Brodu, N.; Lague, D. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology. ISPRS J. Photogramm. Remote Sens. 2012, 68, 121–134.
- Ma, F.; Karaman, S. Sparse-to-dense: Depth prediction from sparse depth samples and a single image. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1–8.
- Ma, F.; Cavalheiro, G.V.; Karaman, S. Self-supervised sparse-to-dense: Self-supervised depth completion from lidar and monocular camera. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3288–3295.
- Qiu, J.; Cui, Z.; Zhang, Y.; Zhang, X.; Liu, S.; Zeng, B.; Pollefeys, M. DeepLiDAR: Deep surface normal guided depth prediction for outdoor scene from sparse lidar data and single color image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 3313–3322.
- Chen, B.; Lv, X.; Liu, C.; Jiao, H. SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion. Sensors 2022, 22, 6414.
- Chen, L.; Li, Q. An Adaptive Fusion Algorithm for Depth Completion. Sensors 2022, 22, 4603.
- Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; Geiger, A. Sparsity invariant CNNs. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; pp. 11–20.
- Lambert, J.H. Photometria Sive de Mensura et Gradibus Luminis, Colorum et Umbrae; Klett: Stuttgart, Germany, 1760.
- Tatoglu, A.; Pochiraju, K. Point cloud segmentation with LIDAR reflection intensity behavior. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–18 May 2012; pp. 786–790.
- Yin, J.; Shen, J.; Guan, C.; Zhou, D.; Yang, R. Lidar-based online 3D video object detection with graph-based message passing and spatiotemporal transformer attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11495–11504.
- Ou, J.; Huang, P.; Zhou, J.; Zhao, Y.; Lin, L. Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization. Sensors 2022, 22, 2221.
- Meng, Q.; Wang, W.; Zhou, T.; Shen, J.; Jia, Y.; Van Gool, L. Towards a weakly supervised framework for 3D point cloud object detection and annotation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4454–4468.
- Meng, Q.; Wang, W.; Zhou, T.; Shen, J.; Van Gool, L.; Dai, D. Weakly supervised 3D object detection from lidar point cloud. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 515–531.
- Li, F.; Jin, W.; Fan, C.; Zou, L.; Chen, Q.; Li, X.; Jiang, H.; Liu, Y. PSANet: Pyramid splitting and aggregation network for 3D object detection in point cloud. Sensors 2020, 21, 136.
- Kaasalainen, S.; Jaakkola, A.; Kaasalainen, M.; Krooks, A.; Kukko, A. Analysis of incidence angle and distance effects on terrestrial laser scanner intensity: Search for correction methods. Remote Sens. 2011, 3, 2207–2221.
- Sasidharan, S. A Normalization Scheme for Terrestrial LiDAR Intensity Data by Range and Incidence Angle. OSF Preprints; Center for Open Science: Charlottesville, VA, USA, 2018.
- Starek, M.; Luzum, B.; Kumar, R.; Slatton, K. Normalizing lidar intensities. In Geosensing Engineering and Mapping (GEM); University of Florida: Gainesville, FL, USA, 2006.
- Habib, A.F.; Kersting, A.P.; Shaker, A.; Yan, W.Y. Geometric calibration and radiometric correction of LiDAR data and their impact on the quality of derived products. Sensors 2011, 11, 9069–9097.
- Höfle, B.; Pfeifer, N. Correction of laser scanning intensity data: Data and model-driven approaches. ISPRS J. Photogramm. Remote Sens. 2007, 62, 415–433.
- Jutzi, B.; Gross, H. Normalization of LiDAR intensity data based on range and surface incidence angle. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, 213–218.
- Masiero, A.; Guarnieri, A.; Pirotti, F.; Vettore, A. Semi-automated detection of surface degradation on bridges based on a level set method. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 15–21.
- Guan, H.; Yu, Y.; Li, J.; Liu, P.; Zhao, H.; Wang, C. Automated extraction of manhole covers using mobile LiDAR data. Remote Sens. Lett. 2014, 5, 1042–1050.
- Asvadi, A.; Garrote, L.; Premebida, C.; Peixoto, P.; Nunes, U.J. Real-time deep ConvNet-based vehicle detection using 3D-LIDAR reflection intensity data. In Proceedings of the Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 475–486.
- Melotti, G.; Premebida, C.; Gonçalves, N.M.d.S.; Nunes, U.J.; Faria, D.R. Multimodal CNN pedestrian classification: A study on combining LIDAR and camera data. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3138–3143.
- Xue, H.; Zhang, S.; Cai, D. Depth image inpainting: Improving low rank matrix completion with low gradient regularization. IEEE Trans. Image Process. 2017, 26, 4311–4320.
- Xu, Y.; Zhu, X.; Shi, J.; Zhang, G.; Bao, H.; Li, H. Depth completion from sparse lidar data with depth-normal constraints. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2811–2820.
- Eldesokey, A.; Felsberg, M.; Khan, F.S. Propagating confidences through CNNs for sparse data regression. In Proceedings of the British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018.
- Huang, Z.; Fan, J.; Cheng, S.; Yi, S.; Wang, X.; Li, H. HMS-Net: Hierarchical multi-scale sparsity-invariant network for sparse depth completion. IEEE Trans. Image Process. 2019, 29, 3429–3441.
- Jaritz, M.; De Charette, R.; Wirbel, E.; Perrotton, X.; Nashashibi, F. Sparse and dense data with CNNs: Depth completion and semantic segmentation. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 52–60.
- Shivakumar, S.S.; Nguyen, T.; Chen, S.W.; Taylor, C.J. DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion. arXiv 2019, arXiv:1902.00761.
- Chodosh, N.; Wang, C.; Lucey, S. Deep convolutional compressed sensing for lidar depth completion. arXiv 2018, arXiv:1803.08949.
- Velodyne LiDAR. HDL-32E User Manual; Velodyne LiDAR, Inc.: San Jose, CA, USA, 2015.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 448–456.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Inman, H.F.; Bradley, E.L., Jr. The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Commun. Stat.-Theory Methods 1989, 18, 3851–3874.
- Eldesokey, A.; Felsberg, M.; Holmquist, K.; Persson, M. Uncertainty-aware CNNs for depth completion: Uncertainty from beginning to end. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12014–12023.
- Ku, J.; Harakeh, A.; Waslander, S.L. In defense of classical image processing: Fast depth completion on the CPU. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 16–22.
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Tan, K.; Cheng, X. Specular reflection effects elimination in terrestrial laser scanning intensity data using Phong model. Remote Sens. 2017, 9, 853.
- Carrea, D.; Abellan, A.; Humair, F.; Matasci, B.; Derron, M.H.; Jaboyedoff, M. Correction of terrestrial LiDAR intensity channel using Oren–Nayar reflectance model: An application to lithological differentiation. ISPRS J. Photogramm. Remote Sens. 2016, 113, 17–29.
- Bolkas, D. Terrestrial laser scanner intensity correction for the incidence angle effect on surfaces with different colours and sheens. Int. J. Remote Sens. 2019, 40, 7169–7189.
- Yan, W.Y.; Shaker, A. Radiometric correction and normalization of airborne LiDAR intensity data for improving land-cover classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7658–7673.
[Table: evaluation of the intensity ground truth (Section 4.1); column headers lost in extraction. Reported values: 1 | 0.72 | 0.97]
[Table: evaluation of the intensity ground truth (Section 4.1); column headers lost in extraction. Reported values: 0.396 | 0.317 | 0.407]
Comparison of intensity completion (intensity error: RMSE and MAE per scene; RMSE for the mean).

| Method | Input | Type | Scene 1 RMSE | Scene 1 MAE | Scene 2 RMSE | Scene 2 MAE | Mean RMSE |
|---|---|---|---|---|---|---|---|
| LiDAR-Net (Ours) | intensity + depth | learning | 20.332 | 13.449 | 28.137 | 18.392 | 24.234 |
| Sparse-to-dense [15] | single intensity | learning | 20.676 | 13.696 | 28.570 | 18.767 | 24.623 |
| SparseConvs [19] | single intensity | learning | 25.942 | 17.460 | 36.055 | 27.150 | 30.999 |
| NConv-CNN [39] | single intensity | learning | x | x | x | x | x |
| pNCNN [50] | single intensity | learning | 22.131 | 14.911 | 29.539 | 19.928 | 25.835 |
| IP-Basic [51] | single intensity | non-learning | 28.725 | 17.957 | 56.374 | 35.784 | 42.550 |
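For reference, the RMSE and MAE reported in these tables are the standard per-pixel errors evaluated only where ground truth is available; a minimal computation is sketched below, with the zero-as-missing masking convention as an assumption.

```python
import numpy as np

def completion_errors(pred, gt):
    """RMSE and MAE over pixels with valid ground truth (gt > 0 in this sketch)."""
    mask = gt > 0
    err = pred[mask] - gt[mask]
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return rmse, mae
```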
Completion ablation results (intensity error).

| Method | Scene 1 RMSE | Scene 1 MAE | Scene 2 RMSE | Scene 2 MAE | Mean RMSE |
|---|---|---|---|---|---|
| onlyI () | 20.676 | 13.696 | 28.570 | 18.767 | 24.623 |
| DI-to-DI (+) | 20.454 | 13.556 | 28.237 | 18.582 | 24.346 |
| LiDAR-Net (++) | 20.332 | 13.449 | 28.137 | 18.392 | 24.234 |
Comparison of depth completion (depth error in mm).

| Method | Input | Scene 1 RMSE | Scene 1 MAE | Scene 2 RMSE | Scene 2 MAE | Mean RMSE |
|---|---|---|---|---|---|---|
| LiDAR-Net (Ours) | intensity + depth | 3822.5 | 1300.2 | 5093.0 | 1974.5 | 4457.8 |
| Sparse-to-dense [15] | single depth | 3900.1 | 1310.2 | 5226.3 | 2165.3 | 4563.2 |
| SparseConvs [19] | single depth | 7134.5 | 3162.3 | 9486.8 | 4271.21 | 8310.7 |
| NConv-CNN [39] | single depth | 5190.1 | 1725.2 | 6534.8 | 2425.7 | 5862.4 |
| pNCNN [50] | single depth | 3956.8 | 1110.4 | 5104.4 | 1816.0 | 4530.5 |
| IP-Basic [51] | single depth | 6645.9 | 1934.9 | 8521.6 | 2159.7 | 7583.8 |
Lane segmentation results (the third metric is the F1 score, the harmonic mean of precision and recall, which matches the reported values).

| Input Type | Precision | Recall | F1 |
|---|---|---|---|
| RGB image from visible-light cameras | 0.957 | 0.612 | 0.746 |
| Completed intensity map from LiDAR-Net | 0.862 | 0.553 | 0.674 |
Share and Cite
Dai, W.; Chen, S.; Huang, Z.; Xu, Y.; Kong, D. LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors. Sensors 2022, 22, 7533. https://doi.org/10.3390/s22197533