Article

Tree Crown Detection and Delineation in a Temperate Deciduous Forest from UAV RGB Imagery Using Deep Learning Approaches: Effects of Spatial Resolution and Species Characteristics

1 Graduate School of Science and Technology, Shizuoka University, Shizuoka 422-8529, Japan
2 Faculty of Agriculture, Shizuoka University, Shizuoka 422-8529, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(3), 778; https://doi.org/10.3390/rs15030778
Submission received: 7 January 2023 / Revised: 27 January 2023 / Accepted: 28 January 2023 / Published: 29 January 2023

Abstract:
The automatic detection of tree crowns and estimation of crown areas from remotely sensed information offer a quick approach for grasping the dynamics of forest ecosystems and are of great significance for both biodiversity and ecosystem conservation. Among various types of remote sensing data, unmanned aerial vehicle (UAV)-acquired RGB imagery has been increasingly used for tree crown detection and crown area estimation; the approach is efficient and relies heavily on deep learning models. However, it has not been thoroughly investigated in deciduous forests with complex crown structures. In this study, we evaluated two widely used, deep-learning-based tree crown detection and delineation approaches (DeepForest and Detectree2) to assess their potential for detecting tree crowns from UAV-acquired RGB imagery in an alpine, temperate deciduous forest with a complicated species composition. A total of 499 digitized crowns, covering four dominant species, with corresponding, accurate inventory data in a 1.5 ha study plot were treated as training and validation datasets. We attempted to identify an effective model to delineate tree crowns and to explore the effects of the spatial resolution on the detection performance, as well as the extracted tree crown areas, with a detailed field inventory. The results show that the two deep-learning-based models, of which Detectree2 (F1 score: 0.57) outperformed DeepForest (F1 score: 0.52), could both be transferred to predict tree crowns successfully. However, the spatial resolution had an obvious effect on the estimation accuracy of tree crown detection, especially when the resolution was greater than 0.1 m. Furthermore, Detectree2 could estimate tree crown areas accurately, highlighting its potential and robustness for tree detection and delineation. In addition, the performance of tree crown detection varied among different species.
These results indicate that the evaluated approaches could efficiently delineate individual tree crowns in high-resolution optical images, while demonstrating the applicability of Detectree2, and, thus, have the potential to offer transferable strategies that can be applied to other forest ecosystems.

1. Introduction

Accurate tree crown detection and delineation are critical for compiling precise forest inventories and enabling the timely detection of forest dynamics required by various conservation strategies [1,2,3]. Among various attempts to date, remote sensing techniques provide reliable approaches to obtain timely, accurate, and complete information and have been increasingly applied to tree crown detection and delineation [4,5,6,7,8]. However, previous studies focusing on mapping tree crowns have generally involved the manual delineation and visual interpretation of remote sensing imagery. The aforementioned approach is laborious and time consuming and, therefore, may only be practical for small areas. Instead, automatic tree detection and delineation from remote sensing imagery can help to overcome these limitations.
Automatic tree detection and delineation from remote sensing imagery have been attempted in recent years, ranging from relatively simple image processing methods to rather complicated machine learning and deep-learning-based approaches [9,10]. Among these, the image processing methods may face difficulties in detecting dense tree crowns against complex backgrounds, while the machine-learning-based methods require individual features to be determined by hand [9,10]. In comparison, the deep learning approach can extract higher-level features from raw data through learned procedures rather than hand-crafted designs, offering a high level of flexibility [11].
Recent advances in deep-learning-based tree crown detection and delineation rely heavily on convolutional neural networks (CNNs) to segment images or enhance treetop detection [12,13,14,15,16]; such approaches are more effective than, and capable of outperforming, other methods [17,18]. DeepForest [19] and Detectree2 [20] are two recently developed, CNN-based deep learning models for the detection and delineation of tree crowns. Specifically, DeepForest was developed and pre-trained using data from the National Ecological Observatory Network (NEON) with an unsupervised, LiDAR-based algorithm and hand annotations of airborne RGB imagery to detect tree crowns using bounding boxes [19,21]. On the other hand, Detectree2 was built on Mask R-CNN, an end-to-end and self-training convolutional neural network [22], to recognize the irregular edges of individual tree crowns from airborne RGB imagery [20]. The latter model can detect the specific edges of tree crowns and may, thus, provide information on tree crown areas as well. These two deep-learning-based models allow the automatic and accurate detection of tree crowns from accessible RGB imagery and have become representative tree detection tools. For instance, DeepForest has a wide range of applications for orchard trees and boreal forests [23,24,25,26], whereas Detectree2 has been primarily applied to study tropical forests [20,27]. However, the use of these two methods has not yet been thoroughly investigated in temperate deciduous forests.
It is well known that the main issue with CNNs is that their application requires a large training set [28]. Luckily, trained CNN models are highly transferable; the layer activation patterns learned by a CNN, stored in a single file, can be used to initialize the training of a new CNN and applied to a secondary task, a process termed transfer learning [28,29]. The transfer learning method can, therefore, overcome the limitations of small datasets and facilitate the practical application of CNN techniques in cases where less data are available [30,31]. The two aforementioned pre-trained models are reported to have the potential to offer a transferable means of prediction for tree detection and delineation [3,13]. However, so far, no studies have tested whether these two methods can be transferred readily to deciduous forests characterized by closed and structurally complex canopies with obvious phenological changes and complicated species composition information.
In terms of base remote sensing data for tree detection and delineation, the unmanned aerial vehicle (UAV) platform can provide imagery with high spatial and temporal resolution at lower operational cost and complexity relative to other remote sensing platforms [32,33,34] and has, hence, been extensively used in precision forest management. Several studies have reported that tree crowns can be detected and delineated with promising accuracy by utilizing UAV-based image-capture techniques [13,35,36,37], in which red–green–blue (RGB) imagery has gradually been adopted as a feasible, low-cost basis for tree crown detection and delineation [38,39]. As a result, the application of CNN-based deep learning models to UAV-acquired RGB imagery has emerged as a prompt and affordable way of detecting and delineating tree crowns [35,40,41,42]. For example, Chadwick et al. [39] investigated the potential of Mask R-CNN for automatically delineating individual tree crowns from RGB images generated by UAVs in a conifer forest. Recently, Yu et al. [43] detected tree crowns using Mask R-CNN in a plantation forest (Chinese fir) with UAV-acquired RGB imagery. Unfortunately, to date, studies of tree crown detection and delineation from UAV-derived RGB imagery have largely been limited to a single species or forests with a uniform structure, such as coniferous forests [44,45,46]. To the best of our knowledge, there have been relatively few studies considering tree crown delineation and crown area estimation from UAV-generated RGB images in deciduous forests with diverse and complex structures. Furthermore, the influence of spatial resolution on tree crown detection accuracy in deciduous forests using deep-learning-based methods has rarely been investigated, although several studies considering this have been carried out in coniferous forests or plantations [35,36].
The primary purpose of this study is, thus, to identify effective, deep-learning-based tree detection and delineation approaches from UAV-based RGB imagery in a dense and diverse, temperate deciduous forest. More specifically, the objectives are to: (1) evaluate the representative potentials of the DeepForest and Detectree2 models for tree crown detection and delineation in an alpine, temperate forest with complex topography and species compositions; (2) explore their performance in extracting the tree crown areas from RGB imagery with different spatial resolutions; and (3) reveal the effects of spatial resolution and canopy complexity on detection accuracy.

2. Materials and Methods

2.1. Study Area

This study was conducted on the Nakakawane site (138°06′E, 35°04′N), a temperate deciduous forest located in one of Shizuoka University’s research forests in Japan (Figure 1). The area has a typical alpine, cold-temperate climate, with an average annual temperature of 16 °C and mean annual precipitation of 2500 mm [47,48]. The forest is dominated by diverse deciduous species, such as Fagus crenata, Betula grossa, Carpinus tschonoskii, Stewartia monadelpha, Acer shirasawanum, Acer nipponicum, and Fraxinus lanuginosa.

2.2. Analysis Overview

To validate the accuracy of the algorithms for crown segmentation, we prepared a crown projection map in polygons for all trees in the canopy layer of the entire 1.5 ha study plot based on the following procedures: first, we produced a georeferenced orthophoto of the study plot, using UAV photographs acquired in September 2018 as a base map, and manually delineated the boundaries of all crowns in the field. We then digitized the field-corrected crown map and linked it with inventory data, including tree IDs, species names, and diameters at breast height. In total, 499 digitized tree crowns with corresponding, accurate inventory data were used in the further analysis.
The evaluation pipeline for individual tree crown detection and delineation from UAV RGB images in the deciduous forest, including the main steps and analysis, is summarized in Figure 2. The workflow included preprocessing, model tuning, and evaluation sections. In brief, imagery for the whole study area was acquired and processed into orthophotos, and the ground truth polygons from manually annotated tree crowns based on the orthophotos were further assessed in the field, resulting in multiple datasets for model training and evaluation. Next, two deep-learning-based methods were introduced for crown detection and delineation, for which transfer learning was used to train a finer model in advance; both the pre-trained and transfer-trained models were then used to predict the crown information. The influence of multiple spatial resolutions of UAV RGB imagery and of canopy complexity on tree crown detection and delineation was further evaluated.

2.2.1. Image Acquisition and Preprocessing

The UAV-based imagery of this study area was acquired on 18 May and 25 May 2022 using a DJI Zenmuse P1 (DJI, Shenzhen, China) mounted on a DJI Matrice 300 RTK four-rotor aircraft (DJI, Shenzhen, China). The image sensor in the Zenmuse P1 provided 45 megapixels with an 8192 × 5490 image resolution. The flight patterns were programmed automatically by DJI Pilot to achieve an 85% forward overlap rate and 80% side overlap rate with a 60 m flight height above the relative take-off point. To ensure and maintain flight accuracy, the DJI D-RTK 2 (DJI, Shenzhen, China) high-precision GNSS mobile station for the Matrice 300 RTK was set at a fixed point and used to obtain highly accurate location information in both the vertical and horizontal directions. The imagery data were acquired on sunny and cloudless days, with a total of 1010 images being collected. All the images were input into DJI Terra (DJI, Shenzhen, China) and processed with high-quality parameters, generating two orthophotos of the study area with a 0.007 m original resolution, termed the 0518 and 0525 datasets, respectively. The 0518 dataset was set as a training dataset for model tuning, and the 0525 dataset, as well as its resampled images at resolutions of 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.20, 0.30, 0.40, and 0.50 m, was used for predictions and evaluations.
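As an illustration of the resampling step, the sketch below coarsens a single-band raster by block averaging. This is a minimal stand-in, not the study's actual procedure (the orthophotos were produced and resampled with DJI Terra and standard GIS tooling); the function name and example values are hypothetical.

```python
import numpy as np

def resample_to_resolution(image: np.ndarray, native_res: float, target_res: float) -> np.ndarray:
    """Downsample a single-band image from native_res to target_res (m/pixel)
    by averaging square pixel blocks. Illustrative only; assumes target_res is
    an integer multiple of native_res."""
    factor = int(round(target_res / native_res))
    h, w = image.shape
    # Crop so both dimensions divide evenly by the block factor.
    h_crop, w_crop = (h // factor) * factor, (w // factor) * factor
    blocks = image[:h_crop, :w_crop].reshape(h_crop // factor, factor, w_crop // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: a 0.01 m grid coarsened to 0.05 m (factor of 5).
fine = np.arange(100.0).reshape(10, 10)
coarse = resample_to_resolution(fine, native_res=0.01, target_res=0.05)
print(coarse.shape)  # (2, 2)
```

Block averaging preserves mean reflectance per cell, which is a common choice when simulating coarser ground sampling distances from a fine orthophoto.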

2.2.2. Tree Crown Detection and Delineation Using DeepForest and Detectree2

The two open-source, deep-learning-based methods, DeepForest and Detectree2, were used for individual tree crown detection and delineation. DeepForest is a Python package developed from a semi-supervised deep learning neural network using the NEON Airborne Observation Platform [19]. DeepForest detects individual tree crown locations from airborne RGB imagery and is easy to extend to different scenarios, as it provides a pre-trained model on which users can conduct transfer training based on local datasets. The training data for transfer learning with the pre-trained model are shown in Figure 1a; the hyperparameters for model tuning were not changed in the transfer learning steps. Both the pre-trained model and the transfer-trained model were used to conduct individual tree crown detection and were evaluated at different image resolutions.
Detectree2 is built on Mask R-CNN, an extension of Faster R-CNN [49] that adds a new branch to perform instance segmentation [22]. Mask R-CNN stands out among CNN architectures, obtaining excellent results for instance segmentation tasks. Similar to DeepForest, both the pre-trained and transfer-trained models were used for tree crown delineation, and no hyperparameters were changed in the transfer learning steps. In addition, for each predicted bounding box and polygon, a confidence score (0–1) was returned by DeepForest and Detectree2.
The prediction results were further divided into a pre-trained group and a transfer-trained group for each method; results were obtained for all 15 resolutions. The location information for individual tree crowns was first evaluated and compared for the pre-trained versus transfer-trained results of each method before the comparison across the two models was conducted. Furthermore, the predicted results were matched to the closest ground truth data using a nearest neighbor algorithm within a certain radius, and a simple linear regression model was then applied to evaluate the tree crown areas.
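The matching step described above, linking each predicted crown to the closest unmatched ground truth crown within a search radius, can be sketched as follows. This is a minimal greedy illustration; the exact radius value and tie-breaking rules used in the study are assumptions, and the coordinates below are made-up.

```python
import numpy as np

def match_predictions(pred_xy: np.ndarray, truth_xy: np.ndarray, radius: float):
    """Greedily match each predicted crown centroid to its nearest
    ground-truth centroid within `radius`, allowing each ground-truth
    crown to be matched at most once. Returns (pred_idx, truth_idx) pairs."""
    matches = []
    used = set()
    for i, p in enumerate(pred_xy):
        d = np.linalg.norm(truth_xy - p, axis=1)
        d[list(used)] = np.inf  # each ground-truth crown can be matched once
        j = int(np.argmin(d))
        if d[j] <= radius:
            matches.append((i, j))
            used.add(j)
    return matches

# Hypothetical centroids in metres: two predictions match, one is unmatched.
pred = np.array([[0.0, 0.0], [10.0, 10.0], [50.0, 50.0]])
truth = np.array([[0.5, 0.0], [9.0, 10.5]])
print(match_predictions(pred, truth, radius=2.0))  # [(0, 0), (1, 1)]
```

The matched pairs can then feed the linear regression between predicted and reference crown areas.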
In addition, four dominant species with enough samples were chosen to further evaluate the species-specific performance of transfer training by DeepForest and Detectree2. We also investigated the effects of topography on tree crown detection based on the slope information calculated from the five-meter resolution DEM (digital elevation model) data (The Geospatial Information Authority of Japan, GSI).

2.2.3. Accuracy Assessment

The detection accuracy of the tree crowns using both models was evaluated by the following metrics. The intersection over union (IoU), defined as the intersection area between the predicted and ground truth tree crowns divided by the area of their union, was first used to assess the agreement between the predicted and ground truth tree crowns [35]. The precision, recall, and F1 score [50] were further calculated at an IoU threshold of 0.5. Precision and recall represent the ratio of correctly detected tree crowns to the model detections and to the test set, respectively. The F1 score describes the overall accuracy considering both precision and recall. These three metrics were calculated from the true positives (TP, tree crown is correctly detected), false positives (FP, tree crown is erroneously detected), and false negatives (FN, tree crown is omitted). Precision, recall, and F1 score are defined as:
$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$F1\ \text{score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
where TP, FP, and FN represent true positive, false positive, and false negative, respectively.
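These metrics, together with the bounding-box IoU used as the matching criterion, translate directly into code. The sketch below is illustrative; the variable names and the example boxes are not from the paper.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2): the overlap area
    divided by the union area, compared against the 0.5 threshold."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 score from TP/FP/FN counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Two unit-offset 2x2 boxes overlap in a 1x1 square: IoU = 1/7.
print(round(box_iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143

# The reported transfer-trained DeepForest precision (0.59) and recall (0.46)
# recover the reported F1 score of 0.52.
p, r = 0.59, 0.46
print(round(2 * p * r / (p + r), 2))  # 0.52
```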
In addition, the tree crown areas extracted from the deep-learning-based methods were compared with the ground truth tree crown areas. Linear regression analysis was employed to describe the relationships between them, which were represented by the widely used statistical criteria of the coefficient of determination (R2) and root-mean-square error (RMSE) and were calculated as:
$$R^2 = 1 - \frac{\sum_i \left( y_i - \hat{y}_i \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2}$$

$$\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$

where $y_i$ and $\hat{y}_i$ represent the reference and estimated values, and $\bar{y}$ and $n$ indicate the average value and the number of samples, respectively.
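A direct translation of these two criteria, sharing the residual sum of squares between them (a minimal sketch; the crown areas below are made-up values, not study data):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-square error for a set of
    reference (y_true) and estimated (y_pred) values."""
    n = len(y_true)
    y_bar = sum(y_true) / n
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - y_bar) ** 2 for yt in y_true)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Hypothetical crown areas in m^2.
areas_true = [10.0, 25.0, 60.0, 120.0]
areas_pred = [12.0, 22.0, 65.0, 110.0]
r2, rmse = r2_rmse(areas_true, areas_pred)
print(round(r2, 3), round(rmse, 2))  # 0.981 5.87
```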

3. Results

3.1. Tree Crown Detection Using DeepForest and Detectree2: Pre-Trained vs. Transfer-Trained

The detailed assessment results for the tree crown detection and delineation of the DeepForest method from UAV-based RGB imagery are presented in Figure 3. The precision, recall, and F1 score of the pre-trained DeepForest tree crown detection were very low, with values of 0.18, 0.28, and 0.22, respectively. In comparison, the transfer-trained DeepForest tree crown detection exhibited significantly higher accuracy, with a precision of 0.59 and recall of 0.46. Furthermore, the F1 score of the transfer-trained DeepForest tree crown detection was 0.52 and, therefore, higher than that of the pre-trained DeepForest method.
Figure 4 shows the detection accuracy for tree crowns using the Detectree2 method, including the precision, recall, and F1 score. Specifically, the pre-trained Detectree2 method for tree crown detection yielded a precision of 0.71, a recall of 0.42, and an F1 score of 0.53. The transfer-trained Detectree2 method had a relatively higher recall (0.50) and F1 score (0.57) than the pre-trained Detectree2 one, although the precision of the transfer-trained Detectree2 method (0.66) was slightly lower than that of the pre-trained Detectree2 one.

3.2. Accuracies of Tree Crown Detection Using Images with Different Spatial Resolutions

The effects of different image spatial resolutions on the detection accuracy of tree crowns using both pre-trained and transfer-trained DeepForest from UAV RGB imagery are illustrated in Figure 5. In detail, the precision, recall, and F1 score of the pre-trained DeepForest method were low at the 0.007 and 0.01 m resolutions but increased at a resolution of 0.02 m before varying slightly at resolutions ranging from 0.02 to 0.1 m. However, from resolutions of 0.1 to 0.5 m, the precision, recall, and F1 score declined rapidly and continuously. For the transfer-trained DeepForest method, the precision ranged from 0.49 to 0.61 within the resolution range of 0.007 to 0.5 m, with the highest and lowest precisions noted for the 0.01 and 0.3 m resolutions. The corresponding recall and F1 score of the transfer-trained DeepForest method decreased somewhat once the resolution exceeded 0.05 m.
The accuracy (precision, recall, and F1 score) of both pre-trained and transfer-trained Detectree2 for tree crown detection at resolutions ranging from 0.007 to 0.5 m is shown in Figure 6. The precision, recall, and F1 score of the pre-trained Detectree2 method exhibited similar trends, with the values varying with different extents from the resolution of 0.007 to 0.1 m (precision: 0.67–0.71, recall: 0.39–0.45, F1 score: 0.49–0.55); the values then decreased continuously between resolutions of 0.1 and 0.5 m. Moreover, the precision, recall, and F1 score were relatively greater from 0.007 to 0.05 m resolutions, followed by resolutions of 0.06 to 0.1 m and ones of 0.2 to 0.5 m. For the transfer-trained Detectree2 method, the precision ranged from 0.62 to 0.66 at resolutions of 0.007 to 0.08 m, accompanied by the recall ranging from 0.49 to 0.52 and F1 score ranging from 0.55 to 0.58. Then, the precision decreased continuously at resolutions of 0.09 to 0.5 m. Similarly, the recall and F1 score also declined substantially at resolutions from 0.09 to 0.5 m. On the other hand, the recall and F1 score of the transfer-trained Detectree2 method were greater than those of the pre-trained Detectree2 one for the tree crown detection at fine resolutions.

3.3. Estimation of Tree Crown Area Using Detectree2

In addition, the tree crown areas estimated with the pre-trained and transfer-trained Detectree2 methods were evaluated against the reference tree crown areas measured during the field survey (Figure 7). The tree crown areas varied from 6.92 to 174.52 m2 for the reference crowns, from 12.18 to 184.40 m2 for the extracted crowns of pre-trained Detectree2, and from 6.61 to 150.75 m2 for the extracted crowns of transfer-trained Detectree2. The relationship between the reference tree crown areas and those from pre-trained Detectree2 yielded an R2 of 0.68 and an RMSE of 6.64 m2 (Figure 7a). The transfer-trained Detectree2 method clearly performed better than the pre-trained one, with an R2 of 0.71 and RMSE of 4.75 m2 (Figure 7b).
The accuracy of the tree crown area estimation using the pre-trained and transfer-trained Detectree2 methods varied with resolution (Figure 8). The R2 computed between the measured and predicted crown areas ranged from 0.47 to 0.76 (RMSE: 4.91–14.27 m2) for pre-trained Detectree2 and from 0.19 to 0.76 (RMSE: 3.26–15.07 m2) for transfer-trained Detectree2 at resolutions of 0.007 to 0.5 m. Higher R2 values, along with lower RMSE values, were observed for the tree crown area estimation at resolutions of 0.01 to 0.1 m.

3.4. Performance of Both Models for Detecting Crown in Terms of Different Species and Topography

The detection accuracies obtained with the transfer-trained DeepForest and Detectree2 methods for the crown detection of different tree species were investigated (Figure 9). A total of four tree species with sufficient samples were considered, namely, Acer nipponicum, Acer shirasawanum, Betula grossa, and Fraxinus lanuginosa.
The accuracy varied dramatically among these species, irrespective of whether the transfer-trained DeepForest or Detectree2 method was used. For the transfer-trained DeepForest method, A. nipponicum exhibited the highest overall accuracy (precision = 0.58, recall = 0.43, F1 score = 0.41), followed by B. grossa (precision = 0.48, recall = 0.40, F1 score = 0.38), while A. shirasawanum and F. lanuginosa yielded poor accuracies with F1 scores of less than 0.30. Nevertheless, with the transfer-trained Detectree2 method, A. shirasawanum had the best prediction, with an F1 score of 0.51 (precision = 0.52, recall = 0.50), while A. nipponicum was poorly predicted, with the lowest accuracy (precision = 0.07, recall = 0.25, F1 score = 0.11). The B. grossa and F. lanuginosa species were moderately delineated, as indicated by the same F1 score of 0.40.
In addition, we investigated the confidence scores of the transfer-trained DeepForest and Detectree2 methods for different slopes (Figure 10). The mean confidence scores using transfer-trained DeepForest ranged from 0.40 to 0.56, accompanied by standard deviations (sd) of 0.14 to 0.23. In comparison, the confidence scores of transfer-trained Detectree2 were much higher, with mean values exceeding 0.67, although they were lower for slopes of 15–20° and 40–45°.

4. Discussion

4.1. Performance of DeepForest and Detectree2 for Detecting Tree Crowns in Deciduous Forests with Complex Species Compositions and Topographical Conditions

This study focused on the full evaluation and comparison of the application and transferability of two commonly used, deep-learning-based CNN tree crown detection and delineation approaches in a dense and diverse deciduous forest using very-high-resolution, UAV-derived imagery. Our results demonstrated that the DeepForest and Detectree2 methods can be successfully transferred to deciduous forests for the detection of tree crowns, taking advantage of UAV-based RGB images with precisions of 0.59 (recall: 0.46; F1 score: 0.52) and 0.66 (recall: 0.50; F1 score: 0.57), respectively. The accuracy of these two transferred models was relatively lower than the results reported for crown detection by Fromm et al. [35] and Chadwick et al. [39], who considered coniferous forest areas using UAV-derived RGB images, yielding a precision greater than 0.80. Generally speaking, heterogeneous forest conditions, for example, those involving diverse species and tree shapes, have a negative influence on tree crown detection, as reported in previous studies [38,51]. Nevertheless, the results obtained here were better than those associated with the detection of other broadleaf species [38], indicating that these two transfer-trained methods have the capability to automatically and accurately detect tree crowns in temperate deciduous forests.
The results of this study further demonstrated that Detectree2 is better at recognizing tree crowns than DeepForest, revealing a strong generalization ability for tree crown detection and delineation. Mask R-CNN is commonly employed in the Detectree2 method to conduct instance segmentation by integrating both object detection tasks and semantic segmentation tasks [22]. Previous studies have demonstrated that Mask R-CNN is a state-of-the-art model among CNN architectures, and an excellent performance for the detection of tree crowns has recently been reported [13,16]; our results agree well with those of the abovementioned studies.
The performance for tree crown detection differed across tree species, which may be attributable to the distinctive shapes of the species. As reported in previous studies [51,52], the accuracy of tree crown detection depends on the tree crown shape. The Acer shirasawanum species, which generally has spread-out crowns, had the highest overall accuracy (F1 score = 0.51) when using the transfer-trained Detectree2 method, indicating the potential of Detectree2 for detecting broad tree crowns. However, this model predicted Acer nipponicum poorly, with an extremely low accuracy. These two species belong to the same family and genus but have different morphological characteristics [53,54], such as the diameter at breast height, which somewhat influenced the detection accuracy of tree crowns using UAV RGB imagery. Furthermore, the study of Budianti et al. [53] revealed that the phenological transition dates of these two species are different, and such differences in phenological information may also have affected the accuracy of their crown detection.
As expected, we found that topographic characteristics affect the detection accuracy of tree crowns, which is in line with the observations of Khosravipour et al. [55] and Nie et al. [56], who carried out treetop detection using canopy height models derived from LiDAR. Alexander et al. [57] also found that topography influences tree detection and height estimations from LiDAR canopy height models in tropical forests. However, no general rule for the effect of slope on tree crown detection accuracy could be established in this study, and further studies are required to ascertain this influence.

4.2. Effects of the Spatial Resolutions of UAV Images on Tree Crown Detection

The results obtained in this study suggest that the image spatial resolution has an obvious influence on tree crown detection and delineation from UAV-acquired RGB imagery when using deep-learning-based methods. The Detectree2 method, which performed best for tree crown detection from UAV-based RGB imagery, achieved noticeably higher accuracy at resolutions between 0.007 and 0.1 m than at resolutions exceeding 0.1 m. This implies that the Detectree2 method exhibits a good predictive ability for tree crown detection when the image resolution is high. The results are consistent with those of previous studies, which showed that a higher spatial resolution generally improves the detection accuracy of CNN-based models. For example, Fromm et al. [35] concluded that an image resolution of 0.3 cm yielded the highest average precision (0.81) for the detection of conifer seedlings when compared to resolutions of 1.5, 2.7, and 6.3 cm. In this study, however, there was no significant difference in accuracy between the resolutions of 0.007 and 0.1 m.
In addition, the accuracy of the Detectree2 method declined when the resolution exceeded 0.1 m, and it then had a poor predictive performance, implying that the detection accuracy of tree crowns was impacted by the coarse spatial resolution of the image. The study of Yin and Wang [58] suggested that a 0.25 m resolution was the optimal choice for the detection of individual mangrove crowns from UAV-based LiDAR data using the seeded region growing (SRG) algorithm and marker-controlled watershed segmentation (MCWS) algorithm when compared to resolutions of 0.10, 0.50, and 1 m. Furthermore, Miraki et al. [36] indicated that the highest overall accuracy for the delineation of individual tree crowns using region growing (RG) and inverse watershed segmentation (IWS) was achieved at a spatial resolution of 100 cm when considering resolutions ranging from 5 to 140 cm. One possible reason for the differences between these studies could be attributed to the employed data sources and predictive methods.

4.3. Estimation of Tree Crown Areas

As for the tree crown area determination, Dong et al. [59] estimated a tree canopy area with R2 values of 0.87 and 0.81 for apple trees and pear trees, respectively, using image-processing-based algorithms from high-resolution UAV standard RGB images in an orchard. Mu et al. [60] also obtained very good results for tree crown area estimation using UAV RGB imagery of peach trees. Nevertheless, these studies were conducted on specific species in an orchard with a simple structure using image processing techniques. Alternatively, the best performing Detectree2 method has the advantage of recognizing tree crowns by delineating irregular tree crown shapes and can, thus, be used to distinguish between adjacent tree crowns, with the potential to be further applied to extract tree crown areas. Our results indicate that the tree crown areas could be assessed with both the pre-trained and transfer-trained Detectree2 methods, with R2 values of 0.68 and 0.71, respectively.
However, our results were inferior to those of Braga et al. [12], who reported that the relationship between the tree crown area extracted from Mask R-CNN delineation and an evaluation set had an R2 of 0.93, based on high-resolution satellite images of tropical forests. Even so, this study also achieved promising results regarding deciduous forests, again indicating the robustness of deep-learning-based methods through Mask R-CNN when estimating tree crown areas. Furthermore, the transfer-trained Detectree2 method performed better than the pre-trained Detectree2 one for the extraction of tree crown areas, indicating that the transfer-trained Detectree2 method had a strong ability and potential for estimating the area of tree canopies in temperate deciduous forests. Additionally, the image resolution also affected the accuracy of crown area estimation, in particular when the resolution was greater than 0.1 m.

4.4. Limitations and Perspectives

This study addressed the automatic detection and delineation of tree crowns in a temperate deciduous forest with a dense, closed canopy of interlocking crowns. Previous studies have demonstrated that detecting and delineating tree crowns in a closed canopy introduces more uncertainty and error than in areas with isolated, or uniformly planted and distributed, trees [16]. This study also showed that image resolution strongly influences the accuracy of tree crown detection and delineation using deep-learning-based methods. Moreover, we suggest that indistinct tree crown edges can decrease a method's prediction accuracy.
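Detection errors of the kind described above are typically quantified by matching predicted crowns to ground-truth crowns via intersection over union (IoU). A minimal sketch for axis-aligned bounding boxes, such as those produced by DeepForest (the example coordinates are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    # Width and height of the overlap region (zero if the boxes are disjoint).
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive when its IoU with a ground-truth
# crown exceeds a threshold (0.5 is a common choice).
pred = (0.0, 0.0, 10.0, 10.0)
truth = (5.0, 5.0, 15.0, 15.0)
print(iou(pred, truth))  # 25 / 175, roughly 0.143 -> below 0.5, counted as a miss
```

Precision, recall, and F1 score then follow from the counts of matched and unmatched crowns; for the polygon masks output by Detectree2, the same ratio is computed over polygon areas rather than boxes.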
To improve the estimation accuracy of deep-learning-based methods, future studies should take full advantage of the information contained in high-resolution UAV imagery, such as texture. In addition, this study was conducted in a temperate deciduous forest that exhibited clear phenological signals; the phenological variability of individual and/or adjacent trees should therefore be exploited, as it could increase the detection and delineation accuracy of deep-learning-based methods applied to UAV-acquired RGB imagery. Future research in this direction could improve individual tree crown delineation from high-resolution remote sensing imagery.

5. Conclusions

Our evaluation of deep-learning-based methods for the automatic detection and delineation of tree crowns from UAV-based RGB imagery in an alpine, temperate deciduous forest indicated that transfer training the pre-trained models on UAV RGB imagery improved the detection results. The transfer-trained Detectree2 method was the most suitable and robust for automatically delineating individual tree crowns in temperate deciduous forests, exhibiting a relatively good and stable performance for tree crown detection and crown area estimation at fine resolutions. Overall, this study confirms that deep-learning-based methods can be a powerful tool for tree crown detection and can serve as a foundation for the automated monitoring of forest ecosystems when high-resolution UAV images are available.

Author Contributions

Conceptualization, Q.W.; Methodology, Y.G.; Formal analysis, Y.G.; Investigation, Y.G. and A.I.; Writing—original draft preparation, Y.G. and Q.W.; Writing—review and editing, Q.W. and A.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the JSPS project (grant no. 21H02230).

Data Availability Statement

The data are available on request from the corresponding author.

Acknowledgments

We would like to thank the members of the Laboratory of Macroecology and the Institute of Silviculture, Shizuoka University, for their support in conducting both fieldwork and laboratory analyses.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhen, Z.; Quackenbush, L.J.; Zhang, L. Trends in Automatic Individual Tree Crown Detection and Delineation-Evolution of LiDAR Data. Remote Sens. 2016, 8, 333.
  2. Yin, D.; Wang, L. How to Assess the Accuracy of the Individual Tree-Based Forest Inventory Derived from Remotely Sensed Data: A Review. Int. J. Remote Sens. 2016, 37, 4521–4553.
  3. Weinstein, B.G.; Marconi, S.; Bohlman, S.A.; Zare, A.; White, E.P. Cross-Site Learning in Deep Learning RGB Tree Crown Detection. Ecol. Inform. 2020, 56, 101061.
  4. Freudenberg, M.; Magdon, P.; Nölke, N. Individual Tree Crown Delineation in High-Resolution Remote Sensing Images Based on U-Net. Neural Comput. Appl. 2022, 34, 22197–22207.
  5. Gomes, M.F.; Maillard, P.; Deng, H. Individual Tree Crown Detection in Sub-Meter Satellite Imagery Using Marked Point Processes and a Geometrical-Optical Model. Remote Sens. Environ. 2018, 211, 184–195.
  6. Tong, F.; Tong, H.; Mishra, R.; Zhang, Y. Delineation of Individual Tree Crowns Using High Spatial Resolution Multispectral WorldView-3 Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7751–7761.
  7. Xu, X.; Zhou, Z.; Tang, Y.; Qu, Y. Individual Tree Crown Detection from High Spatial Resolution Imagery Using a Revised Local Maximum Filtering. Remote Sens. Environ. 2021, 258, 112397.
  8. Wagner, F.H.; Ferreira, M.P.; Sanchez, A.; Hirye, M.C.M.; Zortea, M.; Gloor, E.; Phillips, O.L.; de Souza Filho, C.R.; Shimabukuro, Y.E.; Aragão, L.E.O.C. Individual Tree Crown Delineation in a Highly Diverse Tropical Forest Using Very High Resolution Satellite Images. ISPRS J. Photogramm. Remote Sens. 2018, 145, 362–377.
  9. Ke, Y.; Quackenbush, L.J. A Review of Methods for Automatic Individual Tree-Crown Detection and Delineation from Passive Remote Sensing. Int. J. Remote Sens. 2011, 32, 4725–4747.
  10. Moradi, F.; Javan, F.D.; Samadzadegan, F. Potential Evaluation of Visible-Thermal UAV Image Fusion for Individual Tree Detection Based on Convolutional Neural Network. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 103011.
  11. Lecun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
  12. Braga, J.R.G.; Peripato, V.; Dalagnol, R.; Ferreira, M.P.; Tarabalka, Y.; Aragão, L.E.O.C.; de Campos Velho, H.F.; Shiguemori, E.H.; Wagner, F.H. Tree Crown Delineation Algorithm Based on a Convolutional Neural Network. Remote Sens. 2020, 12, 1288.
  13. Hao, Z.; Lin, L.; Post, C.J.; Mikhailova, E.A.; Li, M.; Chen, Y.; Yu, K.; Liu, J. Automated Tree-Crown and Height Detection in a Young Forest Plantation Using Mask Region-Based Convolutional Neural Network (Mask R-CNN). ISPRS J. Photogramm. Remote Sens. 2021, 178, 112–123.
  14. Martins, G.B.; La Rosa, L.E.C.; Happ, P.N.; Filho, L.C.T.C.; Santos, C.J.F.; Feitosa, R.Q.; Ferreira, M.P. Deep Learning-Based Tree Species Mapping in a Highly Diverse Tropical Urban Setting. Urban For. Urban Green. 2021, 64, 127241.
  15. Lassalle, G.; Ferreira, M.P.; La Rosa, L.E.C.; de Souza Filho, C.R. Deep Learning-Based Individual Tree Crown Delineation in Mangrove Forests Using Very-High-Resolution Satellite Imagery. ISPRS J. Photogramm. Remote Sens. 2022, 189, 220–235.
  16. Yang, M.; Mou, Y.; Liu, S.; Meng, Y.; Liu, Z.; Li, P.; Xiang, W.; Zhou, X.; Peng, C. Detecting and Mapping Tree Crowns Based on Convolutional Neural Network and Google Earth Images. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102764.
  17. Hanapi, S.N.H.S.; Shukor, S.A.A.; Johari, J. A Review on Remote Sensing-Based Method for Tree Detection and Delineation. IOP Conf. Ser. Mater. Sci. Eng. 2019, 705, 012024.
  18. Li, W.; Dong, R.; Fu, H.; Yu, L. Large-Scale Oil Palm Tree Detection from High-Resolution Satellite Images Using Two-Stage Convolutional Neural Networks. Remote Sens. 2019, 11, 11.
  19. Weinstein, B.G.; Marconi, S.; Aubry-Kientz, M.; Vincent, G.; Senyondo, H.; White, E.P. DeepForest: A Python Package for RGB Deep Learning Tree Crown Delineation. Methods Ecol. Evol. 2020, 11, 1743–1751.
  20. Ball, J.G.C.; Hickman, S.H.M.; Jackson, T.D.; Koay, X.J.; Hirst, J.; Jay, W.; Aubry-Kientz, M.; Vincent, G.; Coomes, D.A. Accurate Tropical Forest Individual Tree Crown Delineation from RGB Imagery Using Mask R-CNN. bioRxiv 2022, 2022-07.
  21. Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
  22. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
  23. Sivanandam, P.; Lucieer, A. Tree Detection and Species Classification in a Mixed Species Forest Using Unoccupied Aircraft System (UAS) RGB and Multispectral Imagery. Remote Sens. 2022, 14, 4963.
  24. Marin, I.; Gotovac, S.; Papić, V. Individual Olive Tree Detection in RGB Images. In Proceedings of the IEEE 2022 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 22–24 September 2022; pp. 1–6.
  25. Jemaa, H.; Bouachir, W.; Leblon, B.; Bouguila, N. Computer Vision System for Detecting Orchard Trees from UAV Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 661–668.
  26. Bennett, L.; Wilson, B.; Selland, S.; Qian, L.; Wood, M.; Zhao, H.; Boisvert, J. Image to Attribute Model for Trees (ITAM-T): Individual Tree Detection and Classification in Alberta Boreal Forest for Wildland Fire Fuel Characterization. Int. J. Remote Sens. 2022, 43, 1848–1880.
  27. Hamzah, R.; Noor, M.F.M. Drone Aerial Image Identification of Tropical Forest Tree Species Using the Mask R-CNN. Int. J. Innov. Comput. 2022, 12, 31–36.
  28. de Lima, R.P.; Marfurt, K. Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis. Remote Sens. 2020, 12, 86.
  29. Zhao, H.; Liu, F.; Zhang, H.; Liang, Z. Convolutional Neural Network Based Heterogeneous Transfer Learning for Remote-Sensing Scene Classification. Int. J. Remote Sens. 2019, 40, 8506–8527.
  30. Minaee, S.; Abdolrashidi, A.; Su, H.; Bennamoun, M.; Zhang, D. Biometrics Recognition Using Deep Learning: A Survey. arXiv 2019, arXiv:1912.00271.
  31. De Lima, R.P.; Marfurt, K.; Duarte, D.; Bonar, A. Progress and Challenges in Deep Learning Analysis of Geoscience Images. In Proceedings of the 81st EAGE Conference and Exhibition 2019, London, UK, 3–6 June 2019; pp. 1–5.
  32. Colomina, I.; Molina, P. Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
  33. Dainelli, R.; Toscano, P.; Di Gennaro, S.F.; Matese, A. Recent Advances in Unmanned Aerial Vehicles Forest Remote Sensing—A Systematic Review. Part II: Research Applications. Forests 2021, 12, 397.
  34. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of Small Forest Areas Using an Unmanned Aerial System. Remote Sens. 2015, 7, 9632–9654.
  35. Fromm, M.; Schubert, M.; Castilla, G.; Linke, J.; McDermid, G. Automated Detection of Conifer Seedlings in Drone Imagery Using Convolutional Neural Networks. Remote Sens. 2019, 11, 2585.
  36. Miraki, M.; Sohrabi, H.; Fatehi, P.; Kneubuehler, M. Individual Tree Crown Delineation from High-Resolution UAV Images in Broadleaf Forest. Ecol. Inform. 2021, 61, 101207.
  37. Komárek, J.; Klápště, P.; Hrach, K.; Klouček, T. The Potential of Widespread UAV Cameras in the Identification of Conifers and the Delineation of Their Crowns. Forests 2022, 13, 710.
  38. Berra, E.F. Individual Tree Crown Detection and Delineation across a Woodland Using Leaf-on and Leaf-off Imagery from a UAV Consumer-Grade Camera. J. Appl. Remote Sens. 2020, 14, 034501.
  39. Chadwick, A.J.; Goodbody, T.R.H.; Coops, N.C.; Hervieux, A.; Bater, C.W.; Martens, L.A.; White, B.; Röeser, D. Automatic Delineation and Height Measurement of Regenerating Conifer Crowns under Leaf-off Conditions Using UAV Imagery. Remote Sens. 2020, 12, 4104.
  40. Neupane, B.; Horanont, T.; Hung, N.D. Deep Learning Based Banana Plant Detection and Counting Using High-Resolution Red-Green-Blue (RGB) Images Collected from Unmanned Aerial Vehicle (UAV). PLoS ONE 2019, 14, e0223906.
  41. Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.L.L.; Moritake, K.; Cabezas, M. Deep Learning in Forestry Using UAV-Acquired RGB Data: A Practical Review. Remote Sens. 2021, 13, 2837.
  42. Xia, K.; Wang, H.; Yang, Y.; Du, X.; Feng, H. Automatic Detection and Parameter Estimation of Ginkgo Biloba in Urban Environment Based on RGB Images. J. Sens. 2021, 2021, 6668934.
  43. Yu, K.; Hao, Z.; Post, C.J.; Mikhailova, E.A.; Lin, L.; Zhao, G.; Tian, S.; Liu, J. Comparison of Classical Methods and Mask R-CNN for Automatic Tree Detection and Mapping Using UAV Imagery. Remote Sens. 2022, 14, 295.
  44. Miyoshi, G.T.; Imai, N.N.; Tommaselli, A.M.G.; de Moraes, M.V.A.; Honkavaara, E. Evaluation of Hyperspectral Multitemporal Information to Improve Tree Species Identification in the Highly Diverse Atlantic Forest. Remote Sens. 2020, 12, 244.
  45. dos Santos, A.A.; Marcato Junior, J.; Araújo, M.S.; Di Martini, D.R.; Tetila, E.C.; Siqueira, H.L.; Aoki, C.; Eltner, A.; Matsubara, E.T.; Pistori, H.; et al. Assessment of CNN-Based Methods for Individual Tree Detection on Images Captured by RGB Cameras Attached to UAVs. Sensors 2019, 19, 3595.
  46. Pearse, G.D.; Tan, A.Y.S.; Watt, M.S.; Franz, M.O.; Dash, J.P. Detecting and Mapping Tree Seedlings in UAV Imagery Using Convolutional Neural Networks and Field-Verified Data. ISPRS J. Photogramm. Remote Sens. 2020, 168, 156–169.
  47. Sonobe, R.; Wang, Q. Towards a Universal Hyperspectral Index to Assess Chlorophyll Content in Deciduous Forests. Remote Sens. 2017, 9, 191.
  48. Song, G.; Wang, Q.; Jin, J. Leaf Photosynthetic Capacity of Sunlit and Shaded Mature Leaves in a Deciduous Forest. Forests 2020, 11, 318.
  49. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  50. Goutte, C.; Gaussier, E. A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2005; pp. 345–359. ISBN 2005921726.
  51. Hastings, J.H.; Ollinger, S.V.; Ouimette, A.P.; Sanders-DeMott, R.; Palace, M.W.; Ducey, M.J.; Sullivan, F.B.; Basler, D.; Orwig, D.A. Tree Species Traits Determine the Success of LiDAR-Based Crown Mapping in a Mixed Temperate Forest. Remote Sens. 2020, 12, 309.
  52. Surový, P.; Almeida Ribeiro, N.; Panagiotidis, D. Estimation of Positions and Heights from UAV-Sensed Imagery in Tree Plantations in Agrosilvopastoral Systems. Int. J. Remote Sens. 2018, 39, 4786–4800.
  53. Budianti, N.; Mizunaga, H.; Iio, A. Crown Structure Explains the Discrepancy in Leaf Phenology Metrics Derived from Ground- and UAV-Based Observations in a Japanese Cool Temperate Deciduous Forest. Forests 2021, 12, 425.
  54. Kubo, M.; Sakio, H.; Kawanishi, M.; Higa, M. Acer Tree Species. In Long-Term Ecosystem Changes in Riparian Forests; Springer: Singapore, 2020; pp. 83–96. ISBN 9789811530081.
  55. Khosravipour, A.; Skidmore, A.K.; Wang, T.; Isenburg, M.; Khoshelham, K. Effect of Slope on Treetop Detection Using a LiDAR Canopy Height Model. ISPRS J. Photogramm. Remote Sens. 2015, 104, 44–52.
  56. Nie, S.; Wang, C.; Xi, X.; Luo, S.; Zhu, X.; Li, G.; Liu, H.; Tian, J.; Zhang, S. Assessing the Impacts of Various Factors on Treetop Detection Using LiDAR-Derived Canopy Height Models. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10099–10115.
  57. Alexander, C.; Korstjens, A.H.; Hill, R.A. Influence of Micro-Topography and Crown Characteristics on Tree Height Estimations in Tropical Forests Based on LiDAR Canopy Height Models. Int. J. Appl. Earth Obs. Geoinf. 2018, 65, 105–113.
  58. Yin, D.; Wang, L. Individual Mangrove Tree Measurement Using UAV-Based LiDAR Data: Possibilities and Challenges. Remote Sens. Environ. 2019, 223, 34–49.
  59. Dong, X.; Zhang, Z.; Yu, R.; Tian, Q.; Zhu, X. Extraction of Information about Individual Trees from High-Spatial-Resolution UAV-Acquired Images of an Orchard. Remote Sens. 2020, 12, 133.
  60. Mu, Y.; Fujii, Y.; Takata, D.; Zheng, B.; Noshita, K.; Honda, K.; Ninomiya, S.; Guo, W. Characterization of Peach Tree Crown by Using High-Resolution Images from an Unmanned Aerial Vehicle. Hortic. Res. 2018, 5, 74.
Figure 1. The location of the study area: (a) the base data used for transfer training; (b) the location of the Nakakawane site, Japan.
Figure 2. Flowchart of the main steps and analysis for the evaluation of deep-learning-based methods for tree crown detection and delineation.
Figure 3. The tree crown detection of the pre- and transfer-trained DeepForest methods derived from UAV-based RGB imagery in the studied temperate, deciduous forest. The green and orange bounding boxes represent the predicted and ground-truth tree crowns, respectively.
Figure 4. The tree crown detection of the pre- and transfer-trained Detectree2 methods derived from UAV-based RGB imagery in the studied temperate, deciduous forest. The green and orange outlines indicate the predicted and ground-truth tree crowns, respectively.
Figure 5. The effects of spatial resolution on the evaluation of tree crown detection from UAV-based RGB imagery using the pre- and transfer-trained DeepForest methods as illustrated by the precision (a), recall (b), and F1 score (c).
Figure 6. The effects of spatial resolution on the evaluation of tree crown detection from UAV-based RGB imagery using the pre- and transfer-trained Detectree2 methods as illustrated by the precision (a), recall (b), and F1 score (c).
Figure 7. Relationships between the measured tree crown areas and the predicted tree crown areas from the pre-trained (a) and transfer-trained (b) Detectree2 methods. The gray, solid line represents the 1:1 line.
Figure 8. The R2 (coefficient of determination) (a) and RMSE (root-mean-square error) (b) of the tree crown area estimation using the pre-trained and transfer-trained Detectree2 methods.
Figure 9. The species-specific accuracy of tree crown detection using the transfer-trained DeepForest (a) and Detectree2 (b) methods. AN, AS, BG, and FL represent Acer nipponicum, Acer shirasawanum, Betula grossa, and Fraxinus lanuginosa, respectively.
Figure 10. Confidence scores for different slopes using the transfer-trained DeepForest (a) and Detectree2 (b) methods.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
