Article

Using U-Net-Like Deep Convolutional Neural Networks for Precise Tree Recognition in Very High Resolution RGB (Red, Green, Blue) Satellite Images

1 Botanical Garden-Institute of the Far Eastern Branch of the Russian Academy of Sciences, 690024 Vladivostok, Russia
2 Institute of Botany, Czech Academy of Sciences, 252 43 Průhonice, Czech Republic
* Author to whom correspondence should be addressed.
Forests 2021, 12(1), 66; https://doi.org/10.3390/f12010066
Received: 4 December 2020 / Revised: 2 January 2021 / Accepted: 4 January 2021 / Published: 8 January 2021
(This article belongs to the Special Issue Remote Sensing Applications in Forests Inventory and Management)
Very high resolution satellite imagery provides an excellent foundation for precise mapping of plant communities and even single plants. We aim to perform individual tree recognition in very high resolution RGB (red, green, blue) satellite images using deep learning approaches for northern temperate mixed forests in the Primorsky Region of the Russian Far East. We used a pansharpened GeoEye-1 RGB satellite image with a spatial resolution of 0.46 m/pixel, acquired in late April 2019. We parametrized the standard U-Net convolutional neural network (CNN) and trained it on manually delineated satellite images to solve the image segmentation problem. For comparison, we also applied standard pixel-based classification algorithms: random forest, the k-nearest neighbor classifier, the naive Bayes classifier, and quadratic discriminant analysis. Pattern-specific features based on grey level co-occurrence matrices (GLCM) were computed to improve the recognition ability of these standard machine learning methods. The U-Net-like CNN allowed us to obtain precise recognition of Mongolian poplar (Populus suaveolens Fisch. ex Loudon s.l.) and evergreen coniferous trees (Abies holophylla Maxim., Pinus koraiensis Siebold & Zucc.). We were able to distinguish trees belonging to either the poplar or the coniferous group but could not separate species within the same group (i.e., A. holophylla and P. koraiensis were not distinguishable). The accuracy of recognition was estimated by several metrics and exceeded the values obtained for the standard machine learning approaches. In contrast to pixel-based recognition algorithms, the U-Net-like CNN does not produce more false-positive decisions when confronted with green-colored objects that resemble trees. With the U-Net-like CNN, we obtained a mean accuracy score of up to 0.96 in our computational experiments.
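The GLCM texture features mentioned above can be sketched as follows. This is a minimal NumPy illustration of how a co-occurrence matrix and two common features (contrast, homogeneity) are computed for an image patch; the quantization level and pixel offset are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    # Quantize the patch to `levels` grey levels, then count pairs of
    # values co-occurring at the given pixel offset (dx, dy) and
    # normalize the counts to joint probabilities.
    q = np.floor(gray.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def glcm_features(p):
    # Contrast measures local intensity variation; homogeneity measures
    # how concentrated the matrix is near its diagonal.
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32))
contrast, homogeneity = glcm_features(glcm(patch))
```

Per-pixel feature vectors built this way (computed over a sliding window) can then be fed to the classical classifiers listed above alongside the raw RGB intensities.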
The U-Net-like CNN recognizes tree crowns not as sets of pixels with known RGB intensities but as spatial objects with a specific geometry and pattern. This property excludes misclassifications caused by objects whose color is similar to that of the trees of interest. We highlight that using satellite images obtained within a suitable phenological season is of high importance for successful tree recognition. A suitable phenological season is one whose conditions make the objects of interest stand out against the other components of the vegetation cover. In our case, the use of satellite images captured in mid-spring allowed us to recognize evergreen fir and pine trees as the first class of objects (“conifers”) and poplars, which were in a leafless state among the other deciduous tree species, as the second class.
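The encoder-decoder architecture family used here can be sketched in a few lines of PyTorch. This is a minimal two-level U-Net-like network under assumed hyperparameters (channel widths, depth, three output classes such as background/conifer/poplar); the paper's exact parametrization of the standard U-Net is not reproduced.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = double_conv(3, 16)    # RGB input
        self.enc2 = double_conv(16, 32)
        self.bottom = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)   # 32 skip + 32 upsampled channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        # Skip connections concatenate encoder features with upsampled ones,
        # which is what lets the network keep crown geometry, not just color.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # shape (N, n_classes, H, W)

net = MiniUNet()
logits = net(torch.zeros(1, 3, 64, 64))  # logits.shape == (1, 3, 64, 64)
```

The output preserves the input's spatial dimensions, so each pixel receives a class score, i.e. the network solves the segmentation problem directly rather than classifying pixels independently.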
Keywords: tree recognition; machine learning; convolutional neural network
MDPI and ACS Style

Korznikov, K.A.; Kislov, D.E.; Altman, J.; Doležal, J.; Vozmishcheva, A.S.; Krestov, P.V. Using U-Net-Like Deep Convolutional Neural Networks for Precise Tree Recognition in Very High Resolution RGB (Red, Green, Blue) Satellite Images. Forests 2021, 12, 66. https://doi.org/10.3390/f12010066

