Filtering Stems and Branches from Terrestrial Laser Scanning Point Clouds Using Deep 3-D Fully Convolutional Networks
Abstract
1. Introduction
2. Data and Sampling
3. Methodology
3.1. Labeling Reference Points
3.2. Configuring 3-D FCN
3.3. Reconstructing 3-D Tree Geometries
4. Results and Discussion
4.1. Reference Creation
4.2. 3-D FCN Filtering and Evaluation
4.3. Wood Reconstruction
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Dataset (Size) | Sample ID | Tree Count | DBH Min–Max (cm) | Height (m) | Point Spacing (cm) | Quality Ranking |
|---|---|---|---|---|---|---|
| Training sample (14) | maple 1 | 2 | 8.3–29.1 | 17.9 | 0.9 | 2 |
| | maple 2 | 3 | 8.3–12.2 | 11.7 | 0.6 | 7 |
| | maple 3 | 4 | 8.7–37.0 | 19.0 | 1.1 | 3 |
| | maple 4 | 3 | 0–41.5 | 20.8 | 2.0 | 4 |
| | maple 5 | 2 | 0–15.3 | 17.6 | 3.1 | 4 |
| | maple 6 | 1 | 0–0 | 20.3 | 2.2 | 6 |
| | aspen 1 | 4 | 0–18.8 | 9.5 | 3.2 | 6 |
| | aspen 2 | 1 | 0–0 | 6.7 | 4.7 | 10 |
| | aspen 3 | 2 | 0–27.1 | 10.8 | 0.9 | 1 |
| | aspen 4 | 6 | 9.3–22.9 | 11.3 | 1.7 | 3 |
| | aspen 5 | 7 | 0–0 | 7.5 | 7.9 | 9 |
| | aspen 6 | 3 | 10.1–19.4 | 8.8 | 2.7 | 4 |
| | pine 1 | 4 | 11.6–29.7 | 19.9 | 1.0 | 1 |
| | pine 2 | 1 | 0–0 | 18.3 | 1.0 | 5 |
| Testing sample (7) | maple 7 | 4 | 3.9–19.4 | 13.8 | 2.1 | 8 |
| | maple 8 | 2 | 12.0–30.6 | 15.7 | 0.6 | 2 |
| | maple 9 | 2 | 0–0 | 21.2 | 1.8 | 3 |
| | aspen 7 | 1 | 16.4–16.4 | 11.5 | 0.6 | 2 |
| | aspen 8 | 1 | 30.2–30.2 | 13.8 | 0.5 | 1 |
| | pine 3 | 1 | 39.8–39.8 | 17.9 | 2.4 | 5 |
| | pine 4 | 4 | 0.0–0.0 | 18.4 | 0.5 | 3 |
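A common way to estimate the point spacing reported in the table above is the mean nearest-neighbour distance within a sample. The sketch below illustrates that metric on a tiny synthetic cloud; the function name and the brute-force pairwise approach are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mean_point_spacing(points: np.ndarray) -> float:
    """Mean nearest-neighbour distance in metres (brute force; fine for small clouds)."""
    # Pairwise squared distances between all points (n x n).
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff * diff, axis=-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point's zero distance to itself
    return float(np.mean(np.sqrt(d2.min(axis=1))))

# Four points on a 1 cm grid -> nearest-neighbour spacing of 1 cm.
pts = np.array([[0.00, 0.00, 0.0], [0.01, 0.00, 0.0],
                [0.00, 0.01, 0.0], [0.01, 0.01, 0.0]])
spacing_cm = mean_point_spacing(pts) * 100  # convert m to cm
```

For realistic cloud sizes a k-d tree would replace the O(n²) pairwise step, but the metric itself is unchanged.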
| Quality Ranking | Rationale |
|---|---|
| 1 | explicit branch and stem forms, very low occlusion degree |
| 2 | moderate branch and stem details, full stem form, very low occlusion degree |
| 3 | full stem form, low occlusion degree, few branch details |
| 4 | fragmentary stem form, moderate occlusion degree |
| 5 | partial stem form, low occlusion degree |
| 6 | partial stem form, moderate occlusion degree |
| 7 | partial stem form, high occlusion degree |
| 8 | partial stem form, sparse branch or stem points, high occlusion degree |
| 9 | indiscernible stem points |
| 10 | indiscernible stem points, surrounded with noisy points |
| Dataset (Size) | Sample ID | IoU (Stem) | IoU (Branch) | IoU (Other) | mIoU | OA |
|---|---|---|---|---|---|---|
| Training sample (14) | maple 1 | 0.974 | 0.501 | 0.950 | 0.808 | 0.960 |
| | maple 2 | 0.278 | 0.361 | 0.973 | 0.537 | 0.968 |
| | maple 3 | 0.979 | 0.764 | 0.955 | 0.899 | 0.973 |
| | maple 4 | 0.943 | 0.835 | 0.935 | 0.904 | 0.956 |
| | maple 5 | 0.860 | 0.489 | 0.995 | 0.781 | 0.975 |
| | maple 6 | 0.980 | 0.933 | 0.996 | 0.970 | 0.996 |
| | aspen 1 | 0.972 | 0.562 | 0.952 | 0.829 | 0.964 |
| | aspen 2 | 0.970 | 0.988 | 0.998 | 0.985 | 0.998 |
| | aspen 3 | 0.929 | 0.563 | 0.935 | 0.809 | 0.942 |
| | aspen 4 | 0.921 | 0.728 | 0.950 | 0.866 | 0.953 |
| | aspen 5 | - | 1.000 | 1.000 | 1.000 | 1.000 |
| | aspen 6 | 0.933 | 0.863 | 0.968 | 0.921 | 0.967 |
| | pine 1 | 0.984 | 0.792 | 0.942 | 0.906 | 0.975 |
| | pine 2 | 0.940 | 0.742 | 0.907 | 0.863 | 0.933 |
| Testing sample (7) | maple 7 | 0.779 | 0.658 | 0.985 | 0.808 | 0.981 |
| | maple 8 | 0.936 | 0.308 | 0.954 | 0.733 | 0.955 |
| | maple 9 | 0.704 | 0.323 | 0.888 | 0.638 | 0.873 |
| | aspen 7 | 0.963 | 0.629 | 0.958 | 0.850 | 0.963 |
| | aspen 8 | 0.943 | 0.774 | 0.945 | 0.887 | 0.961 |
| | pine 3 | 0.980 | 0.782 | 0.930 | 0.898 | 0.955 |
| | pine 4 | 0.920 | 0.303 | 0.843 | 0.689 | 0.889 |
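The per-class IoU, mean IoU (mIoU), and overall accuracy (OA) reported above follow the standard semantic-segmentation definitions: IoU is the intersection over union of reference and predicted label sets per class, mIoU is their mean, and OA is the fraction of correctly labeled points. A minimal sketch of these metrics (illustrative code, not the authors' evaluation script):

```python
import numpy as np

def segmentation_scores(ref, pred, classes=("stem", "branch", "other")):
    """Per-class IoU, mean IoU, and overall accuracy for point-wise labels."""
    ref, pred = np.asarray(ref), np.asarray(pred)
    iou = {}
    for c in classes:
        inter = np.sum((ref == c) & (pred == c))
        union = np.sum((ref == c) | (pred == c))
        # Undefined IoU (class absent in both) is excluded from the mean.
        iou[c] = float(inter / union) if union else float("nan")
    miou = float(np.nanmean(list(iou.values())))
    oa = float(np.mean(ref == pred))
    return iou, miou, oa

ref  = ["stem", "stem", "branch", "other", "other"]
pred = ["stem", "branch", "branch", "other", "other"]
iou, miou, oa = segmentation_scores(ref, pred)
```

With this toy input, the one stem point mislabeled as branch lowers both the stem and branch IoU to 0.5 while OA stays at 0.8, which mirrors how the tables above can show high OA alongside low branch IoU: "other" dominates the point count.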
| Sample ID | Voxel Attribute Combination | IoU (Stem) | IoU (Branch) | IoU (Other) | mIoU | OA |
|---|---|---|---|---|---|---|
| aspen 6 | occupancy | 0.922 | 0.630 | 0.878 | 0.810 | 0.911 |
| | occupancy + intensity | 0.924 | 0.648 | 0.881 | 0.817 | 0.914 |
| | occupancy + intensity + height | 0.958 | 0.854 | 0.938 | 0.909 | 0.958 |
| maple 8 | occupancy | 0.789 | 0.284 | 0.951 | 0.674 | 0.923 |
| | occupancy + intensity | 0.847 | 0.355 | 0.960 | 0.721 | 0.942 |
| | occupancy + intensity + height | 0.911 | 0.445 | 0.960 | 0.772 | 0.960 |
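The attribute combinations in the ablation above correspond to per-voxel input channels fed to the 3-D FCN. The sketch below rasterizes a point cloud with per-point intensity into occupancy, mean-intensity, and mean-height channels; the voxel size, mean aggregation, and channel-first array layout are assumptions made for illustration, not the paper's exact configuration.

```python
import numpy as np

def voxelize(points: np.ndarray, intensity: np.ndarray, voxel: float = 0.1):
    """Rasterize (n, 3) points into stacked occupancy / intensity / height channels."""
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    occ = np.zeros(shape, dtype=np.float32)     # 1 where any point falls
    inten = np.zeros(shape, dtype=np.float32)   # mean intensity per voxel
    height = np.zeros(shape, dtype=np.float32)  # mean z per voxel
    count = np.zeros(shape, dtype=np.int32)
    for (i, j, k), a, z in zip(idx, intensity, points[:, 2]):
        occ[i, j, k] = 1.0
        inten[i, j, k] += a
        height[i, j, k] += z
        count[i, j, k] += 1
    nz = count > 0
    inten[nz] /= count[nz]
    height[nz] /= count[nz]
    return np.stack([occ, inten, height])  # (3, X, Y, Z) channel grid

pts = np.array([[0.00, 0.0, 0.0], [0.05, 0.0, 0.0], [0.25, 0.0, 0.0]])
grid = voxelize(pts, intensity=np.array([0.2, 0.4, 0.9]))
```

Here the first two points share one 10 cm voxel (occupancy 1, mean intensity 0.3), the third lands two voxels away, and the voxel between them stays empty, which is the sparsity the occupancy channel encodes.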
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Xi, Z.; Hopkinson, C.; Chasmer, L. Filtering Stems and Branches from Terrestrial Laser Scanning Point Clouds Using Deep 3-D Fully Convolutional Networks. Remote Sens. 2018, 10, 1215. https://doi.org/10.3390/rs10081215