BiFPN-KPointNet-CBAM: Application of 3D Point Cloud Technology Based on Deep Learning in Measuring Vegetation
Abstract
1. Introduction
- We introduce, for the first time, an air–ground integrated point cloud data acquisition mode for measuring vegetation, and we verify this mode in practice. It realizes the transition from two-dimensional to three-dimensional data. Figure 1 below shows the 3D reconstruction of vegetation achieved by combining aerial and ground-based scanning techniques.
- To model both the relationships between vegetation point cloud points and their local characteristics, we propose a hybrid model that fuses BiFPN and CBAM into PointNet and introduces a k-nearest neighbor algorithm. This model is of great value for classifying and analyzing point cloud data; a minimal illustrative sketch of the k-nearest neighbor grouping and CBAM-style attention appears after this list.
- We also conducted empirical evaluations of the data collection mode and the classification model. The results show that our design offers advantages in the quality of the collected data and in classification accuracy, reflecting innovation in urban vegetation measurement technology.
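As a concrete illustration of the two ingredients named above, the following PyTorch sketch shows (a) k-nearest-neighbor grouping over raw point coordinates and (b) a CBAM-style channel and spatial attention block applied to per-point features. This is a minimal sketch under assumed tensor shapes and layer sizes, not the paper's implementation; the names `knn_group` and `CBAMPoint` and all hyperparameters are illustrative.

```python
# Illustrative sketch only: k-NN grouping and CBAM-style attention on point features.
import torch
import torch.nn as nn


def knn_group(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k nearest neighbors for every point.

    points: (B, N, 3) xyz coordinates; output: (B, N, k) neighbor indices.
    """
    dists = torch.cdist(points, points, p=2.0)          # pairwise Euclidean distances, (B, N, N)
    return dists.topk(k, dim=-1, largest=False).indices  # smallest-k per point (self included)


class CBAMPoint(nn.Module):
    """CBAM-style channel + spatial attention for per-point features (B, C, N)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Shared MLP for the channel-attention branch.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial (per-point) attention from pooled channel statistics.
        self.spatial_conv = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: squeeze over points with average and max pooling.
        avg = self.channel_mlp(x.mean(dim=-1))           # (B, C)
        mx = self.channel_mlp(x.max(dim=-1).values)      # (B, C)
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1)    # reweight channels
        # Spatial attention: pool over channels, then a 1D conv along the points.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)  # (B, 2, N)
        return x * torch.sigmoid(self.spatial_conv(stats))             # (B, C, N)


if __name__ == "__main__":
    xyz = torch.rand(2, 1024, 3)          # two clouds of 1024 points
    idx = knn_group(xyz, k=16)            # (2, 1024, 16) neighbor indices
    feats = torch.rand(2, 64, 1024)       # per-point features
    print(idx.shape, CBAMPoint(64)(feats).shape)
```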
2. Related Research
3. Data Collection and Fusion
3.1. Park Greening Data Collection Based on Backpack 3D Laser
3.2. Three-Dimensional Laser-Point-Cloud-Assisted Municipal Road Greening Data Acquisition
3.3. Point Cloud Accuracy Control and Optimization
3.4. Multi-Source Heterogeneous Point Cloud Data Fusion
4. Background
4.1. PointNet
4.2. BiFPN
4.3. CBAM
4.4. Mixing Pooling
5. The Overall Model Based on Deep Learning in Green Detection
6. Experimental Results and Analysis
6.1. Dataset
6.2. Experimental Environment
6.3. Experimental Results
6.4. Ablation Experiment
7. Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Jinghua, C.; Jin, X.; Zhanhao, L. Coupling relationship and interaction between ecological protection and high-quality economic development in the Yellow River Basin in the new era. J. Shandong Univ. Financ. Econ. 2023, 3, 36–51.
- Implementation of the second round of central ecological environmental protection supervision feedback rectification work in Guangdong Province. Nanfang Daily, 10 May 2023.
- Guangzhou Municipal People’s Government. Guangzhou Municipal Government Work Report; Guangzhou Municipal People’s Government: Guangzhou, China, 2023.
- Yunxing, Z.; Meiyu, Y.; Ziyang, L. Study on the greening quality of CBD streets in Zhengdong New District based on street panoramic images. Chin. Foreign Archit. 2022, 11, 57–62.
- Yamei, C. Study on the Temporal and Spatial Distribution of Nitrogen and Phosphorus in the Typical Riparian Zone of the Linfen Section of the Fenhe River and Its Influencing Factors; Xi’an University of Technology: Xi’an, China, 2021.
- Pan, L.; Dai, J.; Wu, Z.; Huang, L.; Wan, Z.; Han, J.; Li, Z. Spatial and Temporal Variations of Nitrogen and Phosphorus in Surface Water and Groundwater of Mudong River Watershed in Huixian Karst Wetland, Southwest China. Sustainability 2021, 13, 10740.
- Jianxiang, G.; Bisheng, Y.; Zhen, D. Intelligent holographic mapping for digital twin cities. Mapp. Bull. 2020, 6, 134–140.
- Cheng, F.; Ziwen, Z. Application of ICP algorithm in automatic registration of multi-beam point cloud strips. Ocean. Surv. Mapp. 2023, 43, 5–9.
- Ren, P.; Xiaomin, S.; Yuan, L.; Chongbin, X. Application of FR-ICP algorithm in tilt photogrammetry point cloud registration. Space Return Remote Sens. 2023, 44, 13–22.
- Yanhu, S.; Xiaodan, Z.; Chengqun, C. A high-precision point cloud registration method based on SAC-NDT and ICP. Single Chip Microcomput. Embed. Syst. Appl. 2023, 23, 61–65+76.
- Shuaishuai, W.; Yanhong, B.; Yin, W. ICP point cloud registration method based on ISS-BSHOT feature. J. Yangzhou Univ. (Nat. Sci. Ed.) 2022, 25, 50–55.
- Jian, R.; Lianhai, Z.; Sanbao, H. PointNet-based body segmentation method. J. Wuhan Univ. (Eng. Ed.) 2023, 56, 347–352.
- Binghai, W. Segmentation of typical elements of subway shield tunnel point cloud based on PointNet. Railw. Constr. Technol. 2022, 12, 159–163.
- Jianqi, M.; Hongtao, W.; Puguang, T. Airborne LiDAR point cloud classification by integrating graph convolution and PointNet. Prog. Laser Optoelectron. 2022, 59, 328–334.
- Yaoting, H. PointNet-based point cloud classification model. Smart City 2022, 8, 39–41.
- Deren, L.; Wenbo, Y.; Zhenfeng, S. Smart city based on digital twins. Comput. Urban Sci. 2021, 1, 4.
- Li, D.; Wang, M.; Shen, X.; Dong, Z. From Earth Observation Satellite to Earth Observation Brain. J. Wuhan Univ. (Inf. Sci. Ed.) 2017, 42, 143–149.
- Deren, L.; Jun, M.; Zhenfeng, S. On the Innovation of Geographical Conditions Survey and Monitoring. J. Wuhan Univ. (Inf. Sci. Ed.) 2018, 43, 1–9.
- Deren, L.; Hanruo, Y.; Xi, L. Spatio-temporal pattern analysis of urban development in countries along the Belt and Road based on nighttime light remote sensing images. J. Wuhan Univ. (Inf. Sci. Ed.) 2017, 42, 711–720.
- Deren, L. From Surveying and Mapping to Geospatial Information Intelligent Service Science. J. Surv. Mapp. 2017, 46, 1207–1212.
- Maturana, D.; Scherer, S. VoxNet: A 3D convolutional neural network for real-time object recognition. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015.
- Wu, Z.; Song, S.; Khosla, A. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
- Qi, C.R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; Guibas, L.J. Volumetric and multi-view CNNs for object classification on 3D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016.
- Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-View Convolutional Neural Networks for 3D Shape Recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015.
- Jia, T.; Xianfeng, H.; Bo, G. Precision control of laser scanning in large-scale landscape mapping. Technol. Plaza 2012, 122, 80–83.
- Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
- Hao, Y.; FuZhou, D. Large-scale product flexibility detection technology based on combined measurement. Comput. Integr. Manuf. Syst. 2019, 25, 1037–1046.
- Zhenwei, N. Backpack vehicle-borne laser scanning combined with UAV tilt aerial survey practice in community holographic data acquisition. Mapp. Bull. 2021, 528, 159–163.
- Shuzhen, W.; Guoqiang, Z.; Guangsheng, W. Refined modeling of buildings based on multi-source point cloud data fusion. Mapp. Bull. 2020, 521, 28–32+38.
- Chenchen, Z. Research on Point Cloud Registration Based on ICP Algorithm; Zhengzhou University: Zhengzhou, China, 2019.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2014, arXiv:1409.0473.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12.
- Li, J.; Chen, B.M.; Lee, G.H. SO-Net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9397–9406.
- Khan, T.M.; Robles-Kelly, A.; Naqvi, S.S. T-Net: A Resource-Constrained Tiny Convolutional Neural Network for Medical Image Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 1799–1808.
- Kumawat, S.; Raman, S. LP-3DCNN: Unveiling local phase in 3D convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4903–4912.
- Li, Q.; Li, W.; Sun, W.; Li, J.; Liu, Z. Fingerprint and Assistant Nodes Based Wi-Fi Localization in Complex Indoor Environment. IEEE Access 2016, 4, 2993–3004.
| Experimental Environment | Configuration Details |
|---|---|
| Operating system | Ubuntu 16.04 |
| Memory | 64 GB |
| Language | Python |
| Development tool | PyCharm |
| Graphics card | GTX 1080 Ti |
| Development platform | PyTorch |
| Pooling Strategies | Parkpoint Mean Class Accuracy | Parkpoint Overall Accuracy | Roadpoint Mean Class Accuracy | Roadpoint Overall Accuracy |
|---|---|---|---|---|
| Max pooling | 86.3% | 89.1% | 86.3% | 87.2% |
| Average pooling | 82.3% | 85.4% | 81.0% | 84.3% |
| K-max pooling | 78.4% | 81.1% | 75.3% | 78.8% |
| Chunk-max pooling | 86.4% | 86.8% | 84.5% | 85.6% |
| Ours (mixed pooling) | 90.6% | 95.2% | 90.1% | 93.9% |
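The mixed pooling row above aggregates per-point features into a global descriptor by combining max and average pooling. The paper's exact mixing rule is not reproduced in this excerpt, so the sketch below shows one common formulation, a learnable convex blend of global max and average pooling, purely as an assumption for illustration.

```python
# Illustrative sketch only: a learnable blend of global max and average pooling
# over the point dimension. The lambda parameterization is an assumption, not
# the paper's exact mixed-pooling layer.
import torch
import torch.nn as nn


class MixedPool(nn.Module):
    """Blend global max and average pooling over points: (B, C, N) -> (B, C)."""

    def __init__(self):
        super().__init__()
        # Unconstrained scalar, squashed to (0, 1) so the blend stays convex.
        self.logit = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lam = torch.sigmoid(self.logit)
        return lam * x.max(dim=-1).values + (1.0 - lam) * x.mean(dim=-1)


if __name__ == "__main__":
    feats = torch.rand(4, 1024, 2048)   # (batch, channels, points)
    print(MixedPool()(feats).shape)     # torch.Size([4, 1024])
```

Blending of this kind lets the global descriptor keep the most salient per-channel responses from max pooling while the average term retains context from the whole cloud, which is one plausible reason a mixed variant can outperform either pooling alone in the table above.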
| Methods | Input | Parkpoint Mean Class Accuracy | Parkpoint Overall Accuracy | Roadpoint Mean Class Accuracy | Roadpoint Overall Accuracy | Ref. |
|---|---|---|---|---|---|---|
| 3DShapeNets | Voxel | 77.1% | 84.6% | 76.2% | 83.4% | [26] |
| VoxNet | Voxel | 83.1% | 86.0% | 82.0% | 85.2% | [27] |
| MVCNN | Image | - | 89.8% | - | 88.4% | [28] |
| PointNet | Point | 86.0% | 88.2% | 84.4% | 87.6% | [29] |
| PointNet++ | Point | - | 90.5% | - | 91.0% | [30] |
| DGCNN | Point | 90.0% | 93.1% | 88.5% | 92.8% | [31] |
| So-net | Point | 87.2% | 90.8% | 86.1% | 91.2% | [32] |
| PointCNN | Point | 88.4% | 92.3% | 87.9% | 91.2% | [26] |
| LP-3DCNN | Point | - | 91.8% | - | 91.2% | [33] |
| RS-CNN | Point | - | 93.4% | - | 92.8% | [34] |
| Ours | Point | 90.6% | 95.2% | 90.1% | 93.9% | - |
| Model | PointNet | Ours |
|---|---|---|
| Trainable parameters (M) | 3.3 | 4.8 |
| K-NN | Mixed Pooling | Bi-FPN | CBAM | Overall Accuracy (Parkpoint) | Overall Accuracy (Roadpoint) |
|---|---|---|---|---|---|
|  |  |  |  | 88.2% | 87.6% |
| √ |  |  |  | 89.4% | 87.8% |
| √ | √ |  |  | 90.3% | 89.2% |
| √ | √ | √ |  | 93.6% | 92.5% |
| √ | √ | √ | √ | 95.2% | 93.9% |
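The ablation above attributes a sizeable gain to the Bi-FPN component. For context, the sketch below shows the fast normalized weighted fusion that BiFPN-style blocks commonly use to merge features from different scales; treating this as the paper's exact fusion layer is an assumption made only for illustration.

```python
# Illustrative sketch of BiFPN-style fast normalized weighted fusion:
# learnable non-negative weights, normalized to sum to ~1 before blending.
import torch
import torch.nn as nn


class FastNormalizedFusion(nn.Module):
    """Fuse n same-shaped feature tensors with learnable non-negative weights."""

    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, *features: torch.Tensor) -> torch.Tensor:
        # ReLU keeps the weights non-negative; dividing by their sum normalizes them.
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(wi * f for wi, f in zip(w, features))


if __name__ == "__main__":
    fuse = FastNormalizedFusion(2)
    a, b = torch.rand(2, 64, 1024), torch.rand(2, 64, 1024)
    print(fuse(a, b).shape)  # torch.Size([2, 64, 1024])
```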
Share and Cite
Liu, Q.; Jiang, J.; Hu, J.; Zhong, S.; Zou, F. BiFPN-KPointNet-CBAM: Application of 3D Point Cloud Technology Based on Deep Learning in Measuring Vegetation. Electronics 2024, 13, 2577. https://doi.org/10.3390/electronics13132577