Article

Deep Learning for Urban Tree Canopy Coverage Analysis: A Comparison and Case Study

Department of Geography, Brigham Young University, Provo, UT 84606, USA
* Author to whom correspondence should be addressed.
Geomatics 2024, 4(4), 412-432; https://doi.org/10.3390/geomatics4040022
Submission received: 23 September 2024 / Revised: 2 November 2024 / Accepted: 13 November 2024 / Published: 14 November 2024
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)

Abstract

Urban tree canopy (UTC) coverage, or area, is an important metric for monitoring changes in UTC over large areas within a municipality. Several methods have been used to obtain these data, but remote sensing image classification is one of the fastest and most reliable over large areas. However, most studies have tested only one or two classification methods, often using costly satellite imagery or LiDAR data. This study compares three UTC classifiers by testing a U-Net convolutional neural network (CNN) deep learning classifier, a support vector machine (SVM) machine learning classifier, and a random forests (RF) machine learning classifier on cost-free 2012 aerial imagery over a small southern USA city and a midsize, growing southern USA city. The results of the experiment are then used to determine the best classifier and apply it to more recent aerial imagery to measure canopy change over a 10-year period. The changes are subsequently compared visually and statistically with recent urban heat maps derived from thermal Landsat 9 satellite data, comparing mean temperatures within areas of UTC loss and areas of no change. The U-Net CNN classifier provided the best overall accuracy for both cities (89.8% and 91.4%), while also requiring the most training and classification time. When compared spatially with the city heat maps, city periphery regions were most affected by substantial changes in UTC area, growing warmer as the cities expand outward. Furthermore, areas of UTC loss had higher temperatures than areas with no canopy change. The broader impacts of this study reach urban forestry managers at the local, state/province, and national levels as they seek to provide data-driven decisions for policy makers.

1. Introduction

As urban land in the United States continues to expand, urban forests will continue to play a substantial role in the overall health of urban and rural areas alike [1,2]. Urban forests, defined as individual trees or clusters of trees within or adjacent to an urban setting, provide several well-documented economic, ecological, and health benefits for residents of urban environments [3,4]. For example, urban trees provide shade and mitigate the urban heat island effect [5,6]. They also combat particle air pollution and CO2 emissions by sequestering carbon [7,8], are linked to positive effects on human psychology in urban environments [9,10,11], and reduce stormwater runoff [12,13]. Despite these benefits, urban tree canopy (UTC) coverage is in decline, as considerable urban growth in some regions has necessitated the removal of urban forest, leaving some cities struggling with the urban heat island (UHI) effect and other challenges [2]. Managing UTC requires accurate measurement and inventory of urban trees within a city, and many municipalities within the United States of America (USA) have created organizations or branches of government to oversee the changing urban landscapes [14].
While tagging and inventorying trees in the field creates a valuable and accurate catalog, the process consumes significant time and monetary resources. Many have turned to less time-intensive methods, such as remote sensing technologies, to provide a review of urban tree status [15,16]. Remote sensing technologies, such as light detection and ranging (LiDAR), unoccupied aerial vehicles (UAVs), and aerial and satellite imagery, have been used to inventory individual trees or tree canopy coverage in towns and cities [17,18]. LiDAR data provide an accurate representation of tree canopies but can be costly and require significant computational resources when processing large datasets [19,20,21]. UAV data are captured at very high spatial resolution, which is effective for identifying trees and tree canopy, but capturing data over large spatial extents, i.e., an entire municipality, is difficult [22,23,24]. Satellite and aerial imagery are increasingly used for mapping urban tree canopies because of their increasing spatial resolution and wide availability [25,26,27,28].
Recent advancements in computer science have increased the applicability of both machine learning and deep learning image classifiers for remote sensing applications [29,30,31]. State-of-the-art machine and deep learning models are available to UTC managers to quickly obtain accurate information on tree canopy coverage and how it is changing over time. Accuracy is of the utmost importance, as these data are used for legislative decision making and funding [14]. These models are now offered as accessible, easy-to-use tools in popular GIS software, such as ArcGIS Pro. Recently, scientists have found it possible to detect individual trees or calculate individual tree canopies within UTC coverage over large areal extents with both machine learning and deep learning classifiers and detectors. Yan et al. [32] compared convolutional neural networks (CNNs), support vector machine (SVM), and random forests (RF) models for UTC mapping and found CNNs performed the best. However, they used expensive satellite imagery and investigated delineating individual tree canopies in a city rather than total UTC area within an urban environment. Lv et al. [33] compared several CNNs, including one developed by their own team, but again sought individual tree segmentation and did not include machine learning classifiers in their comparison. Zamboni et al. [34] likewise surveyed a large number of deep learning models for individual tree detection but did not use machine learning classifiers. Wang et al. [35] compared a U-Net deep learning model to object-based classification and found U-Net to be a superior classifier for UTC coverage area, but again did not compare the results to machine learning classifiers. Despite the successes of machine and deep learning classifiers on their own, further investigation into the practical value of deep learning relative to machine learning is important. Deep learning classifiers can require more time to train and classify than machine learning classifiers, suggesting that small differences in overall accuracy may not be worth the longer processing times [36]. Application-based studies, such as [37], have created a comprehensive set of UTC data for the entire state of Wisconsin using machine learning algorithms, but the results were not compared with deep learning results. In many application studies, multispectral satellite data or LiDAR data are combined with aerial imagery to enhance the classification [27,28,38,39]. Therefore, comparisons with less complex yet reliable machine learning classification models, like SVM and RF, are needed on simple optical imagery readily available from a management and policy-making perspective.
The present literature on urban tree canopy mapping and UHI has focused on highly populated urban centers at the cost of neglecting smaller and mid-sized urban centers. This study is the first of its kind to compare three classifiers, a U-Net deep learning classifier and SVM and RF machine learning classifiers, for mapping UTC coverage in two US cities of different size and shape from the perspective of an urban forestry manager. From this perspective, cost, computational time, and accuracy are the most important considerations. For this purpose, free National Agriculture Imagery Program (NAIP) imagery datasets are used as an alternative to the costly satellite, LiDAR, and UAV data used in many other studies, placing the workflow within reach of UTC managers. Commonly used ArcGIS software and desktop computer specifications are used for analysis to simulate the resources available to urban forestry managers. Following the comparison, a change detection analysis is presented as a case study to reveal where and how tree canopies in the two diverse cities of interest, Laurel, Mississippi (MS) and Georgetown, Texas (TX), USA, have changed over a 10-year period. Temperature maps derived from Landsat 9 thermal bands are also used to reveal the location of high temperatures in relation to tree canopy gain or loss. Such land surface temperature mapping and subsequent comparisons with tree canopy coverage are common among research projects, as many seek to apply the technology to identify or solve a range of concerns in urban areas. In fact, now that sufficiently long data histories exist, several studies are working to model future changes in LST from past data [40,41]. Many are working on specific applications in large cities around the world, but few are investigating smaller cities [42,43,44]. We aim to expand the literature in this area as well by examining smaller regions and comparing the results with deep learning classifications, which has not yet been done.
We hypothesize that deep learning, as available through accessible and widely used GIS software packages, will provide the best overall accuracy and suggest that NAIP data alone are sufficient to provide accuracies similar to other studies that use multiple datasets. Furthermore, we suggest that tree canopy loss over the 10-year period will be more prevalent for Georgetown, TX because of its rapid growth and that the loss will be visually correlated with high temperatures in the regions where canopy was removed. Section 2 of this article introduces the study areas that make this study unique and describes the methods for classifying data and generating temperature maps. Section 3 presents the results, while Section 4 discusses the strengths of our approach as well as its limitations. Finally, Section 5 concludes with the key findings.

2. Materials and Methods

2.1. Study Areas

This study identified two cities of interest for investigating urban tree canopy coverage in the United States (Figure 1). The first, Laurel, Mississippi, USA, represents an older and smaller urban environment. Laurel is a town of just over 17,000 people in southern Mississippi and was established as a lumber town in 1882 [45]. Situated in a humid subtropical climate, its mean high temperatures rise as high as 36 degrees Celsius in July and August and fall as low as 23.8 degrees Celsius in January [46]. Recently, Laurel has been the host of the popular home renovation television show Home Town on Home and Garden Television [47]. It has also been a designated “Tree City, USA” for over 40 years, the first city in the state of Mississippi to hold that title [48]. Laurel’s geographic boundaries are fairly uniform, without fragmented annexes and appendages, and enclose 40.8 km2 of city, airport, and surrounding industrial zones and forest.
Georgetown, Texas, a suburb of Austin, is representative of a mid-sized city, with a population of 75,420. In 2022, Georgetown was the fastest-growing US city by percent change, with an impressive population increase of 14.4% [49]. Known as a warm city, Georgetown also lies in a humid subtropical climate where mean high temperatures range from 25.5 degrees Celsius to 36 degrees Celsius. The city area is a fragmented, complex, and unique 156.96 km2. Within city boundaries, special trees titled “Heritage trees” are protected and require a permit to prune or remove [50]. However, these represent a small portion of the total UTC within the city, and unlike Laurel, Georgetown is not designated as a “Tree City, USA”.

2.2. Remote Sensing Data

2.2.1. NAIP Imagery Data

The National Agriculture Imagery Program (NAIP) began in 2002 and is administered by the U.S. Department of Agriculture (USDA) Farm Service Agency. The goal of the program is to collect aerial imagery during leaf-on conditions. NAIP imagery is collected using digital sensors, mounted on aircraft, that meet rigid calibration specifications [51]. NAIP imagery is generally collected at a 1 m spatial resolution as true color (RGB) imagery, though more recent imagery has higher spatial resolutions (50–60 cm) and is additionally offered as color-infrared (CIR) imagery. NAIP flights are conducted at variable altitudes with a variety of aircraft and a minimum side overlap of 30%. Data management, georegistration, and quality control are all managed by the imagery program. NAIP imagery has been used for urban tree canopy mapping in several other studies [14,37,52].
For this study, four NAIP imagery datasets were acquired and masked to each municipality’s boundaries for further processing. Laurel, Mississippi datasets were acquired for 2012 and 2023, while Georgetown, TX data were gathered for 2012 and 2022. Images for Laurel were transformed into NAD83 (2011) Universal Transverse Mercator (UTM) Zone 16N, and Georgetown images were transformed into NAD83 (2011) UTM Zone 14N for further processing (Table 1).

2.2.2. LANDSAT Data

Landsat 9 is the most recent iteration of the Landsat earth observation missions from NASA, having launched on 27 September 2021. Landsat 9 collects a new image at each location on Earth every 16 days, providing the potential for cloud-free imagery at regular intervals. Cloud-free Landsat tiles were collected for each city of interest on dates as close to the NAIP acquisitions as possible. Unfortunately, cloudy images were often found on the acquisition dates directly adjacent to the NAIP acquisition dates. Nevertheless, imagery captured as close to the ideal dates as possible was gathered and processed. The Georgetown image was acquired on 28 May 2022, and the Laurel image was captured on 23 September 2023. The Landsat 9 thermal band 10 was predominately used, along with bands 4 (red) and 5 (NIR), for NDVI and data processing. Band 10 is collected by the Thermal Infrared Sensor 2 (TIRS-2), with a center wavelength of 10,800 nm and a spatial resolution of 100 m. The band 10 image was resampled to 30 m for the purposes of this study.
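As an aside, the resampling step can be reproduced outside a GIS as well. The brief sketch below uses rasterio to read the 100 m band 10 product onto a finer grid; the file name is a placeholder.

```python
import rasterio
from rasterio.enums import Resampling

# Hedged sketch: read the 100 m thermal band onto a 30 m grid.
# "B10.TIF" is a placeholder for the downloaded band 10 GeoTIFF.
with rasterio.open("B10.TIF") as src:
    scale = src.res[0] / 30.0  # ratio of native to target pixel size
    band10_30m = src.read(
        1,
        out_shape=(int(src.height * scale), int(src.width * scale)),
        resampling=Resampling.bilinear,  # smooth interpolation for thermal data
    )
```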
After acquiring the Landsat 9 tiles from the United States Geological Survey EarthExplorer website (https://earthexplorer.usgs.gov, accessed on 1 April 2024), the large images were clipped to the extents of each city. Further processing was needed to create surface temperature maps using bands 4, 5, and 10. Converting the raw values into surface temperature required several steps. First, raw values were converted to top of atmosphere (TOA) spectral radiance using Equation (1):
TOA (L) = ML * Qcal + AL
where ML is the radiance multiplicative factor for band 10 found in the metadata of Landsat 9 images, Qcal is the quantized pixel value of band 10 itself, and AL is the radiance additive factor, also found in the metadata.
After calculating TOA, we can calculate the brightness temperature from the following Equation (2):
BT = (K2/(ln(K1/L) + 1)) − 273.15
where K1 is calibration constant 1 provided in the metadata, K2 is calibration constant 2, also provided in the metadata, and L is the spectral radiance or TOA layer in Watts/(m2·sr·μm). Subtracting 273.15 converts the results from kelvin to degrees Celsius.
To improve the temperature map, further steps are needed. First, the normalized difference vegetation index (NDVI) is required to continue and is calculated from Bands 4 and 5 using the traditional equation for NDVI, found here as Equation (3):
NDVI = (Band 5 − Band 4)/(Band 5 + Band 4)
where Band 5 is the NIR band and Band 4 is the red band on Landsat 9. The NDVI is then used as an input to find the proportion of vegetation (Pv), calculated from the NDVI band in Equation (4):
Pv = ((NDVI − NDVImin)/(NDVImax − NDVImin))^2
Emissivity is then calculated from the Pv just obtained using Equation (5):
E = 0.004 * Pv + 0.986
Finally, land surface temperature (LST) can be calculated for the area of interest using several of the variables just obtained. The final LST band was calculated using Equation (6):
LST = (BT/(1 + (0.00115 * BT/1.4388) * ln(E)))
where BT is the brightness temperature layer and E is the emissivity layer. The final result was an LST layer in degrees Celsius from which we could identify spatial correlations between heat and tree canopy loss.
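As a worked sketch of Equations (1)–(6), the following Python code chains the conversions from band 10 digital numbers to LST using numpy and rasterio. The file names are placeholders, and the calibration constants shown are illustrative assumptions; real values must be read from the scene’s MTL metadata file.

```python
import numpy as np
import rasterio

# Illustrative calibration constants; replace with the band 10 values
# from the scene's MTL metadata file.
ML, AL = 3.342e-4, 0.1          # radiance multiplicative/additive factors
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float64")

b4 = read_band("B4.TIF")    # red (placeholder path)
b5 = read_band("B5.TIF")    # NIR
b10 = read_band("B10.TIF")  # thermal

toa = ML * b10 + AL                           # Equation (1): TOA radiance
bt = K2 / np.log(K1 / toa + 1.0) - 273.15     # Equation (2): brightness temperature (deg C)
ndvi = (b5 - b4) / (b5 + b4)                  # Equation (3): NDVI (assumes nonzero denominator)
pv = ((ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())) ** 2   # Equation (4): proportion of vegetation
emis = 0.004 * pv + 0.986                     # Equation (5): emissivity
lst = bt / (1.0 + (0.00115 * bt / 1.4388) * np.log(emis))     # Equation (6): LST (deg C)
```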
To assess the quality of the LST products, quality assessment (QA) data were included in the surface temperature data download from EarthExplorer. In this raster band, each pixel represents the level of uncertainty in the temperature values. This band was overlaid on the temperature raster results to indicate where higher levels of temperature uncertainty exist. In Figure 2, the areas of higher uncertainty have a pink tint and stand out against the LST map background. We found that the centers of the cities were the least affected, while the most affected regions were those containing water bodies. The water regions were removed from analysis prior to classification and case study analysis. Upon assessment, the temperatures appeared somewhat underestimated, but the spatial patterns were consistent with expectations, so the maps were used for analysis.

2.3. Classification and Accuracy Assessment

A general workflow of the experiment and case study analysis is presented in Figure 3. Three remote sensing image classifiers are compared, and the best performing classifier is used to classify more recent imagery, eventually resulting in a post-classification change detection and surface heat and tree canopy spatial correlation analysis.

2.3.1. Classifiers

Two traditional machine learning classifiers and a commonly used deep learning classifier are compared in this study by applying each to the same datasets with the same training data on the 2012 imagery collected for both Laurel and Georgetown. The U-Net CNN is one of the most commonly used deep learning image classifiers in GIScience and is easy for UTC managers to access through ESRI ArcGIS Pro 3.1.3 [53]. Originally designed for biomedical image segmentation, its applications in remote sensing image analysis are broad and now include pixel-level image classification [54]. U-Net gets its name from its U-shaped design: the architecture of the network is divided into an encoding and a decoding side. U-Net has previously been used for individual tree crown extraction [35,55] and, with LiDAR data and multispectral imagery, for UTC coverage mapping from UAV data [23]. Modified versions of U-Net have also shown success in mapping UTC [25] and in other pixel-based classifications across a variety of environmental applications [36,56,57].
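To make the encoder/decoder idea concrete, below is a minimal one-level U-Net-style sketch in PyTorch with a single skip connection. It illustrates the architecture only; the U-Net shipped with ArcGIS Pro is deeper and pretrained, and the channel sizes and tile shape here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """One-level U-Net: encoder -> bottleneck -> decoder with a skip connection."""
    def __init__(self, in_ch=3, n_classes=3):
        super().__init__()
        self.enc = conv_block(in_ch, 32)            # encoding side
        self.pool = nn.MaxPool2d(2)                 # downsample by 2
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # upsample by 2
        self.dec = conv_block(64, 32)               # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, n_classes, 1)     # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.up(b)
        d = self.dec(torch.cat([d, e], dim=1))      # skip connection from encoder
        return self.head(d)                         # (N, n_classes, H, W)

# A 256x256 RGB tile yields per-pixel scores for three classes
logits = MiniUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```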
The two traditional machine learning classifiers used for this study are support vector machine (SVM) and random forest (RF) classifiers. They frequently appear in the literature as high-performing image classification models ([58], p. 337). SVM is based on statistical learning theory and can be used for both linear and nonlinear classification, regression, and a series of other tasks. In its simplest form, SVM strives to find a hyperplane in feature space that separates the classes (it can handle many). Additionally, SVM classifiers do not need normally distributed or large training samples to perform well [59]. For this study, we used the support vector machine classifier with the maximum number of samples per class set to 1000. The segment attributes used for classification were active chromaticity color, mean digital number, and compactness.
Random forests (also called random trees in ArcGIS Pro) is also a supervised classification method that relies on a series of decisions made based on the statistical characteristics of the training dataset and image. The decisions all stem from past decisions, forming what looks like a tree when represented graphically. During classification, many decision trees are made (i.e., a forest is grown) and the most frequently produced output is used for the final classification [58]. Of the three classifiers used for this comparative study, the ‘best’ classifier will be determined by overall accuracy (OA), kappa coefficient, processing time, and ease of use. For the RF classifier, we used a maximum of 150 trees, a maximum tree depth of 60, and a maximum of 1000 samples per class. The segment attributes used for classification were again active chromaticity color, mean digital number, and compactness.
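For readers working outside ArcGIS Pro, the sketch below shows how comparably configured SVM and RF classifiers could be set up with scikit-learn. The feature matrix and labels are synthetic stand-ins for per-segment attributes and the three classes used here; the ArcGIS Pro implementations differ internally, and only the tree count and depth settings are taken from this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-ins for per-segment features (e.g., color, mean digital
# number, compactness) and labels (0 = urban/barren, 1 = grass, 2 = tree).
rng = np.random.default_rng(0)
X_train = rng.random((3000, 3))
y_train = rng.integers(0, 3, 3000)

# SVM with an RBF kernel (ArcGIS similarly caps training samples per class)
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Random forest mirroring the tree count and depth used in this study
rf = RandomForestClassifier(n_estimators=150, max_depth=60).fit(X_train, y_train)

pixels = rng.random((5, 3))  # new samples to classify
print(svm.predict(pixels), rf.predict(pixels))
```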

2.3.2. Classification

The same 2012 NAIP orthomosaic images were classified three times, once by each classifier, for both Laurel and Georgetown. The 2012 NAIP imagery was clipped to the municipality boundaries (Figure 4). Training samples and validation samples were first collected in ArcGIS Pro 3.1.3 for each of the study areas using the high-resolution NAIP imagery and prior knowledge of each city. To sharpen the focus on obtaining the most accurate results for UTC area and to keep the classifications simple, only three classes were specified. After testing a variety of class schemes, the most effective and efficient classification occurred with a tree class, a grass class, and an urban/barren class. Prior to classification, water bodies within municipality boundaries were masked and removed. Because the NAIP imagery was clipped to the municipality boundaries, only the three classes were present and relevant to the study at hand. A total of 156 training polygons were collected for Laurel before training the classifier, representing a total area of 1.06 km2, and 267 samples were obtained for Georgetown, representing a total area of 1.41 km2. All samples were randomly obtained across a variety of regions around the study areas in order to fairly represent the diversity of grasses, urban structures, and trees found throughout the cities.
The collected training polygons were once again used in ArcGIS Pro 3.1.3 to train each of the U-Net CNN, SVM, and RF classifiers. Each classification was conducted within the ArcGIS Pro graphical user interface environment to replicate the experience urban forestry managers would have when attempting similar remote-sensing based inventories.

2.3.3. Accuracy Assessment

Accuracy was assessed for the 2012 and 2022/23 classifications using 182 validation polygons for Laurel and 282 for Georgetown, collected separately from the training polygons. The training and validation samples did not overlap, ensuring a fair assessment (Figure 4). Four remote sensing image classification accuracy statistics, user’s accuracy, producer’s accuracy, overall accuracy (OA), and kappa coefficient, were used to assess each classification. The producer’s accuracy is the number of pixels classified correctly for a class, based on the validation data, divided by the total validation pixels for that class. The user’s accuracy is the number of correctly classified pixels divided by the total number of pixels classified into that class. All four statistics were calculated using the ArcGIS Pro 3.1.3 Compute Confusion Matrix tool.
The confusion matrix tool was used to first generate a stratified random sample of points dispersed equally throughout the validation polygons. Then fields in the accuracy assessment points were populated from the ground truth, or validation data, as well as the classified raster. Once the fields were complete, the tool computed the OA using the following equation:
$\mathrm{OA} = \dfrac{\sum_{i=1}^{k} x_{ii}}{N}$
where xii is the number of correctly classified pixels for class i (the diagonal of the confusion matrix), k is the number of classes, and N is the total number of pixels [60]. The kappa coefficient was likewise computed using the following equation:
$\hat{K} = \dfrac{N \sum_{i=1}^{k} x_{ii} - \sum_{i=1}^{k} (x_{i+} \times x_{+i})}{N^{2} - \sum_{i=1}^{k} (x_{i+} \times x_{+i})}$
where N is the total number of samples, k is the number of rows in the confusion matrix, xii is the number of observations in row i and column i, and xi+ and x+i are the marginal totals of row i and column i [61]. The results for the 2012 images were presented in tables and assessed to determine the best classifier, which was then used on the 2022 and 2023 images.
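As a reference implementation of the two statistics above, the following sketch computes OA and kappa from a confusion matrix with numpy; the matrix values are hypothetical.

```python
import numpy as np

def overall_accuracy(cm):
    # OA: correctly classified pixels (diagonal) over total pixels
    return np.trace(cm) / cm.sum()

def kappa(cm):
    # Congalton's kappa: observed agreement corrected for chance agreement
    n = cm.sum()
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    return (n * np.trace(cm) - chance) / (n**2 - chance)

# Hypothetical 3-class confusion matrix (rows = reference, cols = classified)
cm = np.array([[90, 5, 5],
               [4, 88, 8],
               [6, 7, 87]])
print(round(overall_accuracy(cm), 3), round(kappa(cm), 3))  # 0.883 0.825
```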

2.4. Case Study and Heat Mapping

In order to assess the tree canopy changes and potential impact on urban heat, the most accurate classifier was used to create an urban tree canopy map of Georgetown and Laurel in the years 2022 and 2023, respectively. New training and validation samples were digitized in ArcGIS Pro 3.1.3 by scientists familiar with the urban areas using the NAIP imagery. Once again, three classes, urban, tree canopy, and grass, were used for classification because of our specific interest in the tree canopy area. The classified images were used in a post-classification change detection, one of the most common change detection methods [62,63]. This is where two remote sensing images, representing T1 (Time 1) and T2 (Time 2), are classified into land cover classes first and one is subsequently subtracted from the other to show a change in a particular class over time. For this study, we were interested in the canopy change over the 10–11 year period between NAIP data collection dates.
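The differencing step itself reduces to simple raster logic. Below is a minimal numpy sketch assuming hypothetical integer class codes for the three classes; it flags pixels that were tree canopy at T1 but not at T2.

```python
import numpy as np

URBAN, GRASS, TREE = 0, 1, 2  # hypothetical class codes for this sketch

def canopy_loss(classified_t1, classified_t2):
    # Post-classification change detection: a pixel counts as canopy loss
    # if it was tree at T1 and grass or urban/barren at T2.
    return (classified_t1 == TREE) & (classified_t2 != TREE)

t1 = np.array([[2, 2, 1], [2, 0, 0]])  # toy T1 classification
t2 = np.array([[2, 0, 1], [1, 0, 0]])  # toy T2 classification
print(canopy_loss(t1, t2))             # True where canopy was removed
```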
The pixels representing changes in canopy, either from canopy to grass or from canopy to urban/barren, were symbolized and overlaid onto the Landsat 9 heat maps generated previously. Using visual techniques described in [60], spatial correlations were identified and are presented hereafter. To quantitatively assess temperature within the areas of canopy loss versus the rest of the city boundaries, 1000 stratified random points were generated, 500 in the areas of canopy loss and 500 in areas of no or little change. Using the Extract Values to Points ArcGIS tool, we obtained temperature values for each of the points and then compared the means of the two groups using SPSS software. An independent-samples t-test indicates whether there is a statistically significant difference between the temperatures in the areas of canopy change and those in the rest of the city.
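The group comparison can equally be run outside SPSS. The following sketch uses scipy with synthetic temperature values standing in for the 500 extracted points per group; it illustrates the test itself rather than reproducing the study’s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic LST samples (deg C) standing in for 500 extracted points per group
lst_loss = rng.normal(38.5, 1.5, 500)      # areas of canopy loss
lst_nochange = rng.normal(36.8, 1.5, 500)  # areas of little or no change

# Welch's independent-samples t-test on the two group means
t, p = stats.ttest_ind(lst_loss, lst_nochange, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4g}")  # p < 0.001 would mirror the study's finding
```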

3. Results

3.1. Laurel, MS Classification Experiment

The Laurel, MS image from 2012 was classified successfully by the U-Net CNN, SVM, and RF classifiers. The U-Net classifier took the most time, totaling 53 min and 11 s for model training and classification combined. The SVM required a total of 8 min and 17 s, while the RF classifier needed only 4 min and 54 s. Processing was completed on a Dell Inspiron 5680 desktop with a 6-core Intel i7 central processing unit (CPU), 16 GB RAM, and an Nvidia GTX 1060 3 GB graphical processing unit (GPU). Classification times, OA, and kappa coefficients are reported in Table 2.
Despite taking longer, the U-Net classifier outperformed the SVM and RF classifiers on both overall accuracy and kappa coefficient. Overall accuracy of the U-Net classifier reached 91%, while SVM was adequate at 84% and RF came in third at 76%. Table 3, Table 4 and Table 5 provide more detailed results for the user’s and producer’s accuracies of each classifier.
On closer examination, the U-Net also performed best on both user’s and producer’s accuracy, especially for the forest class. Interestingly, the forest class was more often confused with the developed class than with the grass class, despite the grass class having RGB values more similar to the forest class. As with the overall accuracies and kappa coefficients, the SVM and RF classifiers followed U-Net in forest class accuracy, in that order. Figure 5 visualizes the differences between the three classifier results and the original NAIP image within the Laurel boundaries. There are some inconsistencies within each classified image, but the U-Net classifier produced tree canopy without the well-documented salt and pepper effect or large areas of misclassification.

3.2. Georgetown, TX Classification Experiment

The Georgetown, TX classification required considerably more time for training and classification due to its complex geographic extent. The U-Net CNN once again required the most time at 5 h 5 min and 7 s, the SVM classifier required 13 min and 2 s, and the RF classifier required 14 min and 1 s. We hypothesize that the U-Net CNN required significantly more time because of the complexity of the image segmentation process, which generates individual tiles and processes each one separately. Classification times, OA, and kappa coefficients are reported in Table 6. The U-Net classifier required a significantly longer time for classification, but only 9 min longer to train. The time investment proved worthwhile, as the U-Net overall accuracy and kappa coefficient were well above the results reported for SVM and RF.
More specifically, each classifier performed differently for each class, as seen in Table 7, Table 8 and Table 9.
Despite their rapid training and classification, it is apparent that the SVM and RF classifiers struggled with the geographically complex Georgetown, TX imagery. The U-Net classifier was likewise less successful in classifying Georgetown tree canopy than it was for Laurel, MS, but it significantly outperformed the two machine learning classifiers. Once again, the developed class was most often confused with the forest class, but the U-Net classifier navigated the difference between the two the best. This is visualized in Figure 6, where each classifier result is compared against the original NAIP image from 2012.

3.3. Case Study Results

Following the experiment results, we selected the U-Net CNN classifier for the 2022 and 2023 images of the study areas. The classifications were successful, though each required more time to complete than the classifications performed on the 2012 images (Table 10). However, they also resulted in similar accuracies for the forest classes, overall accuracies, and kappa coefficients. We suspect the longer processing times are related to the higher spatial resolution (60 cm vs. 1 m), since the computer used for processing remained the same.
After the post-classification change detection was performed, the canopy changes were analyzed alongside the urban heat surface maps (Figure 7). Georgetown, the growing southern city, showed extensive tree canopy changes from 2012 to 2022. Laurel, on the other hand, showed less substantial changes. Many of the smaller UTC changes may be false positives, occurring where tree canopy was misclassified in either the 2012 or the 2022/2023 classification and therefore flagged as change when the difference was simply a classification error. Small changes were ignored in the qualitative assessment of tree canopy changes.
Overall, Laurel experienced far less canopy change than Georgetown but still experienced a small amount of loss connected with heat exposure. For example, Figure 8 shows the 2012 NAIP image (A), 2023 NAIP image (B), heat map (C), and heat map with detected canopy loss (D) for a small area in the northwest portion of the city limits. The removal of trees for a small development of duplex homes may seem insignificant, but the higher heat there has a strong spatial correlation with the loss of tree canopy in that region. The surrounding areas, with unchanged urban forest, remain cool. Generally, Laurel’s urban tree canopy area has remained fairly stable. Nevertheless, the difference between the temperatures in the small areas of canopy loss around the city and those in the rest of the city was statistically significant at the p < 0.001 level, indicating high confidence.
Georgetown, TX was very different, with significant loss of canopy between the two data collection dates. Over that period, Georgetown grew significantly, as represented by several new neighborhood developments. Although there were many examples of this, we present just two, in Figure 9 and Figure 10. In Figure 9, elevated heat is visible where the newly developed subdivision was built, but it decreases in the area to the east where urban trees were still present. Just as in Laurel, the difference in land surface temperatures between the tree canopy loss areas and the rest of the city limits was statistically significant: the tree canopy loss areas had higher temperatures, significant at the p < 0.001 level.
Figure 10 displays a second example of this phenomenon in the northeastern region of the city, where a small development that was not quite complete in 2012 was finished by 2022. Small pockets of urban forest reduce heat to a small degree, but the new development overwhelms much of the benefit the urban forest provides by making the urbanized region much warmer.

4. Discussion

This study examined the effectiveness of two machine learning models and one deep learning model, adapted to remote sensing imagery classification, for calculating UTC area within a small southern town and a quickly growing southern city in the United States of America. All three classifiers processed the Laurel data more quickly, as the dataset was smaller and the city more geographically compact. The overall accuracy of each classifier was highest for the smaller, simpler Laurel imagery, but for both Georgetown and Laurel, the U-Net classifier performed the best overall (89.8% and 91.4%, respectively), better than the SVM and RF classifiers. However, the SVM and RF classifiers were much faster than the U-Net classifier in the ArcGIS Pro 3.1.3 setting. This may be due to the limitations of the computer hardware used.
The findings from the experiment compare well with similar experiments performed with other classifiers for determining UTC area coverage (Table 11). In their comparative study of different spatial scales for mapping accurate UTC using U-Net and object-based image analysis (OBIA), Wang et al. [35] found their U-Net model implemented in Python outperformed OBIA in every measure, reaching 99% overall accuracy. Using both Google Earth imagery and LiDAR datasets, Timilsina et al. [38] obtained overall accuracies of 96% and 98% for 2005 and 2015, respectively. Most studies have relied solely on aerial or satellite imagery to detect and classify tree canopy and have achieved high overall accuracies when using a deep learning classifier. The weakest results were still impressive given the intent to classify not only tree canopy but also tree species [39]. Overall, the U-Net overall accuracies obtained in this study (89.8% and 91.4%) are comparable to the other results presented in Table 11. The SVM and RF classifiers, however, performed poorly compared to the studies presented in Table 11: they repeatedly performed at about 70–80% overall accuracy, while the studies shown below reach upwards of 94% [27,28,52]. This reveals potential improvements to be made to the SVM and RF methods presented here. Because SVM is known to run well on fewer training samples, reducing the training input for this classifier may have improved the results. Nevertheless, the results of this study suggest that free-to-use NAIP RGB imagery is an accessible and comparable resource for obtaining UTC area coverage over small and growing cities. The NAIP imagery and the robust tools available in ArcGIS Pro, especially the U-Net deep learning classifier, can provide accurate results for urban forestry managers without requiring programming experience.
The case study presented UTC change results between the 2012 and 2022/2023 NAIP imagery sets for each city and compared the spatial distribution of UTC change to the urban heat maps derived from the Landsat 9 thermal bands. The patterns of extensive growth found in Georgetown, TX, and to a smaller extent in Laurel, were similar to the areas of growth in Columbia, SC in [52]. They are also consistent with findings by Tamaskani Esfehankalateh et al. [6] and Loughner et al. [5] that suggest a strong correlation between UTC and urban heat. Spatially, the centers of both cities experienced the least change in UTC, while the periphery experienced the most urban growth and UTC decline. This suggests that a stable city like Laurel is likely to experience little change in the UHI effect within its borders as long as trees are protected, as they are in Laurel. It also suggests that it is more difficult to mitigate the UHI effect in city centers, as there is less room for trees to be planted and grown once they have been removed for urban growth. Very little, if any, tree canopy gain was detected in either city during the 10-year period. We also found that the differences between the temperatures in the areas of tree canopy loss and those in the areas of no change were statistically significant in both cities, indicating the role UTC plays in mitigating heat. The temperatures of the tree canopy loss areas were higher than those throughout the rest of the city. Future policies should prevent significant loss of tree canopy within city limits to mitigate further heat increases.
Several limitations were encountered during the course of this study. NAIP data are collected at times and on dates beyond the control of any application. For example, despite being collected during leaf-on conditions, the window for data collection runs from March until October in most southern US cities, meaning that each year may be more or less easily classified depending on the tree conditions and greenness. Additionally, technological limitations of the computer hardware can impact data classification times. More modern computers, supercomputers, and cloud computing with ArcGIS can remove much of the frustration with processing time encountered in this study. Future work should focus on adding additional deep learning models to compare with U-Net to determine whether another model may perform faster and provide a more accurate classification. Studies focusing on the identification and counting of individual urban trees have assessed multiple deep learning models, but this has yet to be investigated for mapping UTC area coverage [32,33,34,65,66,67].

5. Conclusions

This study tested three remote sensing image classifiers for mapping UTC area in both a small southern USA city and a midsized, growing USA city. This study determined that by using available tools from industry-leading ESRI ArcGIS Pro and free aerial imagery through the NAIP, overall accuracies similar to those obtained using expensive LiDAR and satellite data can be achieved. Of the three classifiers tested, the U-Net CNN classifier within ArcGIS Pro performed the best but required the longest processing time. The U-Net CNN overall accuracies of 89.8% and 91.4% compare favorably with other studies conducted with similar and more costly resources. Furthermore, insights into UTC changes and spatial correlations between urban heat and those UTC changes occurring in small and midsize cities in the southern US were obtained. For example, the peripheries of even midsized growing cities like Georgetown, TX are getting warmer as more UTC is removed for suburban neighborhoods. The broader impacts of this study reach the urban forestry managers at the local, state/province, and national levels as they seek to provide data-driven decisions for policy makers.

Author Contributions

Conceptualization: G.R.M.; Data curation: G.R.M., D.Z., L.N., C.S. and L.S.; Formal analysis: G.R.M., D.Z. and L.N.; Investigation: G.R.M.; Methodology: G.R.M.; Project administration: L.S.; Software: G.R.M., D.Z., L.N. and L.S.; Supervision: G.R.M.; Validation: G.R.M., D.Z. and L.N.; Visualization: G.R.M.; Writing—original draft: G.R.M., D.Z., L.N. and L.S.; Writing—review and editing: G.R.M., D.Z., L.N. and L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used are publicly available.

Acknowledgments

A special thank you to all those who provided insightful feedback and comments that improved the value of the manuscript, including the reviewers.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Konijnendijk, C.C.; Ricard, R.M.; Kenney, A.; Randrup, T.B. Defining urban forestry—A comparative perspective of North America and Europe. Urban For. Urban Green. 2006, 4, 93–103. [Google Scholar] [CrossRef]
  2. Nowak, D.J.; Greenfield, E.J. US urban forest statistics, values, and projections. J. For. 2018, 116, 164–177. [Google Scholar] [CrossRef]
  3. Drillet, Z.; Fung, T.; Leong, R.; Sachidhanandam, U.; Edwards, P.; Richards, D. Urban vegetation types are not perceived equally in providing ecosystem services and disservices. Sustainability 2020, 12, 2076. [Google Scholar] [CrossRef]
  4. Carne, R.J. Urban vegetation: Ecological and social value. In Proceedings of the National Greening Australia Conference, Fremantle, WA, Australia, 4–6 October 1994; pp. 4–6. [Google Scholar]
  5. Loughner, C.P.; Allen, D.J.; Zhang, D.-L.; Pickering, K.E.; Dickerson, R.R.; Landry, L. Roles of urban tree canopy and buildings in urban heat island effects: Parameterization and preliminary results. J. Appl. Meteorol. Climatol. 2012, 51, 1775–1793. [Google Scholar] [CrossRef]
  6. Tamaskani Esfehankalateh, A.; Ngarambe, J.; Yun, G.Y. Influence of tree canopy coverage and leaf area density on urban heat island mitigation. Sustainability 2021, 13, 7496. [Google Scholar] [CrossRef]
  7. Janhäll, S. Review on urban vegetation and particle air pollution—Deposition and dispersion. Atmos. Environ. 2015, 105, 130–137. [Google Scholar] [CrossRef]
  8. Lindén, J.; Gustafsson, M.; Uddling, J.; Watne, Å.; Pleijel, H. Air pollution removal through deposition on urban vegetation: The importance of vegetation characteristics. Urban For. Urban Green. 2023, 81, 127843. [Google Scholar] [CrossRef]
  9. World Health Organization. Regional Office for Europe. Urban Green Spaces and Health; World Health Organization. Regional Office for Europe: Copenhagen, Denmark, 2016. [Google Scholar]
  10. Wolf, K.L.; Lam, S.T.; McKeen, J.K.; Richardson, G.R.A.; van den Bosch, M.; Bardekjian, A.C. Urban trees and human health: A scoping review. Int. J. Environ. Res. Public Health 2020, 17, 4371. [Google Scholar] [CrossRef]
  11. Othman, N.; Hamzah, H.; Mohd Salleh, M.Z. Relationship of trees as green infrastructure to pro-environmental behavior for psychological restoration in urbanized society: A systematic review. IOP Conf. Ser. Earth Environ. Sci. 2021, 918, 012047. [Google Scholar] [CrossRef]
  12. Berland, A.; Shiflett, S.A.; Shuster, W.D.; Garmestani, A.S.; Goddard, H.C.; Herrmann, D.L.; Hopton, M.E. The role of trees in urban stormwater management. Landsc. Urban Plan. 2017, 162, 167–177. [Google Scholar] [CrossRef]
  13. Carlyle-Moses, D.E.; Livesley, S.; Baptista, M.D.; Thom, J.; Szota, C. Urban trees as green infrastructure for stormwater mitigation and use. For.-Water Interact. 2020, 397–432. [Google Scholar] [CrossRef]
  14. McGee, J.A.; Day, S.D.; Wynne, R.H.; White, M.B. Using geospatial tools to assess the urban tree canopy: Decision support for local governments. J. For. 2012, 110, 275–286. [Google Scholar] [CrossRef]
  15. Parmehr, E.G.; Amati, M.; Taylor, E.J.; Livesley, S.J. Estimation of urban tree canopy cover using random point sampling and remote sensing methods. Urban For. Urban Green. 2016, 20, 160–171. [Google Scholar] [CrossRef]
  16. Klobucar, B.; Sang, N.; Randrup, T.B. Comparing ground and remotely sensed measurements of urban tree canopy in private residential property. Trees For. People 2021, 5, 100114. [Google Scholar] [CrossRef]
  17. Li, X.; Chen, W.Y.; Sanesi, G.; Lafortezza, R. Remote Sensing in urban forestry: Recent applications and Future Directions. Remote Sens. 2019, 11, 1144. [Google Scholar] [CrossRef]
  18. Moskal, L.M.; Styers, D.M.; Halabisky, M. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data. Remote Sens. 2011, 3, 2243–2262. [Google Scholar] [CrossRef]
  19. Elmes, A.; Rogan, J.; Williams, C.; Ratick, S.; Nowak, D.; Martin, D. Effects of urban tree canopy loss on land surface temperature magnitude and timing. ISPRS J. Photogramm. Remote Sens. 2017, 128, 338–353. [Google Scholar] [CrossRef]
  20. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne lidar and multispectral image data for urban scene classification using random forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66. [Google Scholar] [CrossRef]
  21. Chuang, W.-C.; Boone, C.G.; Locke, D.H.; Grove, J.M.; Whitmer, A.; Buckley, G.; Zhang, S. Tree Canopy Change and neighborhood stability: A comparative analysis of Washington, D.C. and Baltimore, MD. Urban For. Urban Green. 2017, 27, 363–372. [Google Scholar] [CrossRef]
  22. Ghanbari Parmehr, E.; Amati, M. Individual tree canopy parameters estimation using UAV-based photogrammetric and Lidar Point Clouds in an urban park. Remote Sens. 2021, 13, 2062. [Google Scholar] [CrossRef]
  23. Elamin, A.; El-Rabbany, A. UAV-based multi-sensor data fusion for urban land cover mapping using a deep convolutional neural network. Remote Sens. 2022, 14, 4298. [Google Scholar] [CrossRef]
  24. Hartling, S.; Sagan, V.; Maimaitijiang, M. Urban tree species classification using UAV-based multi-sensor data fusion and machine learning. GIScience Remote Sens. 2021, 58, 1250–1275. [Google Scholar] [CrossRef]
  25. Chen, S.; Chen, M.; Zhao, B.; Mao, T.; Wu, J.; Bao, W. Urban tree canopy mapping based on double-branch convolutional neural network and multi-temporal high spatial resolution satellite imagery. Remote Sens. 2023, 15, 765. [Google Scholar] [CrossRef]
  26. Guo, J.; Hong, D.; Liu, Z.; Zhu, X.X. Continent-wide urban tree canopy fine-scale mapping and coverage assessment in south America with high-resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2024, 212, 251–273. [Google Scholar] [CrossRef]
  27. Mix, C.; Hunt, N.; Stuart, W.; Hossain, A.K.M.A.; Bishop, B.W. A spatial analysis of urban tree canopy using high-resolution land cover data for Chattanooga, Tennessee. Appl. Sci. 2024, 14, 4861. [Google Scholar] [CrossRef]
  28. Hochmair, H.H.; Benjamin, A.; Gann, D.; Juhasz, L.; Olivas, P.; Fu, Z.J. Change analysis of urban tree canopy in Miami Dade county. Forests 2022, 13, 949. [Google Scholar] [CrossRef]
  29. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
  30. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  31. Youssef, R.; Aniss, M.; Jamal, C. Machine learning and deep learning in remote sensing and urban application. In Proceedings of the 4th Edition of International Conference on Geo-IT and Water Resources 2020, Geo-IT and Water Resources 2020, Al-Hoceima, Morocco, 11–12 March 2020. [Google Scholar] [CrossRef]
  32. Yan, S.; Jing, L.; Wang, H. A new individual tree species recognition method based on a convolutional neural network and high-spatial resolution remote sensing imagery. Remote Sens. 2021, 13, 479. [Google Scholar] [CrossRef]
  33. Lv, L.; Li, X.; Mao, F.; Zhou, L.; Xuan, J.; Zhao, Y.; Yu, J.; Song, M.; Huang, L.; Du, H. A deep learning network for individual tree segmentation in UAV images with a coupled CSPNet and attention mechanism. Remote Sens. 2023, 15, 4420. [Google Scholar] [CrossRef]
  34. Zamboni, P.; Junior, J.M.; Silva, J.d.A.; Miyoshi, G.T.; Matsubara, E.T.; Nogueira, K.; Goncalves, W.N. Benchmarking anchor-based and anchor-free state-of-the-art deep learning methods for individual tree detection in RGB high-resolution images. Remote Sens. 2021, 13, 2482. [Google Scholar] [CrossRef]
  35. Wang, Z.; Fan, C.; Xian, M. Application and evaluation of a deep learning architecture to urban tree canopy mapping. Remote Sens. 2021, 13, 1749. [Google Scholar] [CrossRef]
  36. Morgan, G.R.; Wang, C.; Li, Z.; Schill, S.R.; Morgan, D.R. Deep learning of high-resolution aerial imagery for coastal Marsh Change Detection: A comparative study. ISPRS Int. J. Geo-Inf. 2022, 11, 100. [Google Scholar] [CrossRef]
  37. Erker, T.; Wang, L.; Lorentz, L.; Stoltman, A.; Townsend, P.A. A statewide urban tree canopy mapping method. Remote Sens. Environ. 2019, 229, 148–158. [Google Scholar] [CrossRef]
  38. Timilsina, S.; Aryal, J.; Kirkpatrick, J.B. Mapping urban tree cover changes using object-based convolution neural network (OB-CNN). Remote Sens. 2020, 12, 3017. [Google Scholar] [CrossRef]
  39. Choudhury, M.A.M.; Marcheggiani, E.; Galli, A.; Modica, G.; Somers, B. Mapping the urban atmospheric carbon stock by LiDAR and WorldView-3 data. Forests 2021, 12, 692. [Google Scholar] [CrossRef]
  40. Kafy, A.; Rahman, M.S.; Faisal, A.-A.-; Hasan, M.M.; Islam, M. Modelling future land use land cover changes and their impacts on land surface temperatures in Rajshahi, Bangladesh. Remote Sens. Appl. Soc. Environ. 2020, 18, 100314. [Google Scholar] [CrossRef]
  41. Alexander, C. Normalised difference spectral indices and urban land cover as indicators of land surface temperature (LST). Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102013. [Google Scholar] [CrossRef]
  42. Dutta, D.; Rahman, A.; Paul, S.K.; Kundu, A. Impervious surface growth and its inter-relationship with vegetation cover and land surface temperature in peri-urban areas of Delhi. Urban Clim. 2021, 37, 100799. [Google Scholar] [CrossRef]
  43. Mukherjee, F.; Singh, D. Assessing land use–land cover change and its impact on land surface temperature using LANDSAT DATA: A comparison of two urban areas in India. Earth Syst. Environ. 2020, 4, 385–407. [Google Scholar] [CrossRef]
  44. Imran, H.M.; Hossain, A.; Islam, A.K.; Rahman, A.; Bhuiyan, M.A.; Paul, S.; Alam, A. Impact of land cover changes on land surface temperature and human thermal comfort in Dhaka City of Bangladesh. Earth Syst. Environ. 2021, 5, 667–693. [Google Scholar] [CrossRef]
  45. Britannica, T. Editors of Encyclopaedia. Laurel. In Encyclopedia Britannica; Encyclopædia Britannica, Inc.: London, UK, 2011; Available online: https://www.britannica.com/place/Laurel-Mississippi (accessed on 1 April 2024).
  46. Climate-data.org. (n.d.). Available online: https://en.climate-data.org/north-america/united-states-of-america/mississippi/laurel-17322/ (accessed on 1 April 2024).
  47. Home Town. 2024. Available online: https://www.hgtv.com/shows/home-town (accessed on 1 April 2024).
  48. Mississippi Forestry Commission. MFC Recognizes City of Laurel for Tree City USA® Participation. 2019. Available online: https://www.mfc.ms.gov/2019/09/laurel-tree-city-usa/ (accessed on 1 April 2024).
  49. US Census Bureau. Large Southern Cities Lead Nation in Population Growth. In Census.Gov; 18 May 2023. Available online: www.census.gov/newsroom/press-releases/2023/subcounty-metro-micro-estimates.html (accessed on 1 April 2024).
  50. City of Georgetown Texas. 2024. Available online: https://planning.georgetown.org/tree-removal-pruning-and-landscape/ (accessed on 1 April 2024).
  51. Davis, D. National Agriculture Imagery Program (NAIP) Information Sheet. Available online: https://www.fsa.usda.gov/Assets/USDA-FSA-Public/usdafiles/APFO/support-documents/pdfs/naip_infosheet_2016.pdf (accessed on 3 April 2024).
  52. Morgan, G.R.; Fulham, A.; Farmer, T.G. Machine learning in urban tree canopy mapping: A Columbia, SC case study for urban heat island analysis. Geographies 2023, 3, 359–374. [Google Scholar] [CrossRef]
  53. ESRI. (n.d.-a) How U-net works? Available online: https://developers.arcgis.com/python/guide/how-unet-works/ (accessed on 22 September 2024).
  54. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  55. Moradi, F.; Javan, F.D.; Samadzadegan, F. Potential evaluation of visible-thermal UAV image fusion for individual tree detection based on convolutional neural network. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 103011. [Google Scholar] [CrossRef]
  56. Clark, A.; Phinn, S.; Scarth, P. Optimized U-net for land Use–Land cover classification using aerial photography. PFG–J. Photogramm. Remote Sens. Geoinf. Sci. 2023, 91, 125–147. [Google Scholar] [CrossRef]
  57. Wang, X.; Hu, Z.; Shi, S.; Hou, M.; Xu, L.; Zhang, X. A deep learning method for optimizing semantic segmentation accuracy of remote sensing images based on improved unet. Sci. Rep. 2023, 13, 7600. [Google Scholar] [CrossRef]
  58. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning: With Applications in R.; Springer: New York, NY, USA, 2013. [Google Scholar]
  59. ESRI. (n.d.-b) Train Support Vector Machine Classifier (Spatial Analyst). Available online: https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-analyst/train-support-vector-machine-classifier.htm (accessed on 22 September 2024).
  60. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective; Pearson Education: Glenview, IL, USA, 2016. [Google Scholar]
  61. Congalton, R.; Oderwald, R.G.; Mead, R. Assessing Landsat classification accuracy using discrete multivariate statistical techniques. Photogramm. Eng. Remote Sens. 1983, 49, 1671–1678. [Google Scholar]
  62. Peiman, R. Pre-classification and post-classification change-detection techniques to monitor land-cover and land-use change using multi-temporal landsat imagery: A case study on pisa province in Italy. Int. J. Remote Sens. 2011, 32, 4365–4381. [Google Scholar] [CrossRef]
  63. Serra, P.; Pons, X.; Saurí, D. Post-classification change detection with data from different sensors: Some accuracy considerations. Int. J. Remote Sens. 2003, 24, 3311–3340. [Google Scholar] [CrossRef]
  64. Liu, Y.; Zhang, H.; Cui, Z.; Lei, K.; Zuo, Y.; Wang, J.; Hu, X.; Qiu, H. Very high-resolution images and superpixel-enhanced deep neural forest promote urban tree canopy detection. Remote Sens. 2023, 15, 519. [Google Scholar] [CrossRef]
  65. Ventura, J.; Pawlak, C.; Honsberger, M.; Gonsalves, C.; Rice, J.; Love, N.L.R.; Han, S.; Nguyen, V.; Sugano, K.; Doremus, J.; et al. Individual tree detection in large-scale urban environments using high-resolution multispectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2024, 130, 103848. [Google Scholar] [CrossRef]
  66. Wallace, L.; Sun, Q.; Hally, B.; Hillman, S.; Both, A.; Hurley, J.; Martin Saldias, D.S. Linking urban tree inventories to remote sensing data for individual tree mapping. Urban For. Urban Green. 2021, 61, 127106. [Google Scholar] [CrossRef]
  67. Yang, M.; Mou, Y.; Liu, S.; Meng, Y.; Liu, Z.; Li, P.; Xiang, W.; Zhou, X.; Peng, C. Detecting and mapping tree crowns based on convolutional neural network and google earth images. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102764. [Google Scholar] [CrossRef]
Figure 1. Study areas (A) Georgetown, TX and (B) Laurel, MS within the United States of America.
Figure 2. Land surface temperature (LST) maps for Georgetown (A) and Laurel (B) with the quality assessment overlay in pink. The pink colors indicate regions where the uncertainty is higher.
Figure 3. A general workflow of the experiment and case study.
Figure 4. Training and validation samples used for classification and accuracy assessment of the 2012 NAIP images (water included for map purposes only).
Figure 5. Comparison of the original NAIP image (A), the RF classifier results (B), SVM classifier results (C), and the U-Net classifier results (D) for Laurel, MS. Light green represents grass, dark green is urban tree canopy, and yellow is urban and other classes combined.
Figure 6. Comparison of the original NAIP image (A), the RF classifier results (B), SVM classifier results (C), and the U-Net classifier results (D) for Georgetown, TX. Light green represents grass, dark green is urban tree canopy, and yellow is urban and other classes combined.
Figure 7. Canopy changes (in blue) overlaid on the heat maps for Georgetown (A) and Laurel (B).
Figure 8. Laurel, MS example of tree canopy loss or change using the 2012 NAIP image (A), 2023 NAIP image (B), heat map (C), and heat map showing detected canopy loss (D).
Figure 9. Georgetown, TX first example of tree canopy loss or change using the 2012 NAIP image (A), 2023 NAIP image (B), heat map (C), and heat map showing detected canopy loss (D).
Figure 10. Georgetown, TX second example of tree canopy loss or change using the 2012 NAIP image (A), 2023 NAIP image (B), heat map (C), and heat map showing detected canopy loss (D).
Table 1. NAIP image data characteristics.

| | 2012 Georgetown Image | 2022 Georgetown Image | 2012 Laurel Image | 2023 Laurel Image |
|---|---|---|---|---|
| Dates | 25 June 2012 | 11 June 2022 | 22 August 2012 | 11 October 2023 |
| Pixel size | 1 m | 60 cm | 1 m | 60 cm |
| Flight altitude | 9052 m | 4875 m | 9144 m | 4470 m |
| Aircraft | Cessna 441 | Cessna Conquest and Cessna 414 | Cessna Conquest | Cessna 441 |
| Sensor | Leica Geosystems ADS80/SH82 digital sensors | Leica ADS100 pushbroom sensor | Intergraph Digital Mapping Camera | Leica Geosystems ContentMapper digital sensors |
| Masked image size | 1.95 GB | 30.15 GB | 251 MB | 698 MB |
Table 2. Laurel, MS classification accuracy assessment.

| | U-Net | SVM | RF |
|---|---|---|---|
| Overall accuracy | 91.4% | 84.2% | 76.2% |
| Kappa coefficient | 0.871 | 0.763 | 0.644 |
| Classifier training time | 20 m 7 s | 2 m 4 s | 1 m 30 s |
| Classification time | 33 m 4 s | 6 m 13 s | 3 m 24 s |
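The overall accuracy and kappa coefficients in Tables 2 and 6 follow the conventional confusion-matrix definitions (cf. Congalton et al. [61]): with $N$ total validation samples across $k$ classes, diagonal entries $n_{ii}$, and row and column marginals $n_{i+}$ and $n_{+i}$,

$$p_o = \frac{1}{N}\sum_{i=1}^{k} n_{ii}, \qquad p_e = \frac{1}{N^{2}}\sum_{i=1}^{k} n_{i+}\,n_{+i}, \qquad \kappa = \frac{p_o - p_e}{1 - p_e},$$

where $p_o$ is the overall accuracy and $p_e$ the expected chance agreement.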
Table 3. U-Net classifier confusion matrix for Laurel, MS.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 382 | 3 | 0 | 385 | 99% |
| Developed | 18 | 397 | 82 | 497 | 80% |
| Forest | 0 | 0 | 318 | 318 | 100% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 96% | 99% | 80% | | |
Table 4. SVM classifier confusion matrix for Laurel, MS.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 286 | 1 | 13 | 300 | 95% |
| Developed | 0 | 396 | 59 | 455 | 87% |
| Forest | 114 | 3 | 328 | 445 | 74% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 71.5% | 99% | 82% | | |
Table 5. RF classifier confusion matrix for Laurel, MS.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 205 | 1 | 15 | 221 | 93% |
| Developed | 97 | 392 | 68 | 557 | 70% |
| Forest | 98 | 7 | 317 | 422 | 75% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 51% | 98% | 79% | | |
Table 6. Georgetown, TX classification accuracy assessment.

| | U-Net | SVM | RF |
|---|---|---|---|
| Overall accuracy | 89.8% | 68.6% | 71.3% |
| Kappa coefficient | 0.8475 | 0.529 | 0.57 |
| Classifier training time | 13 m 53 s | 4 m 16 s | 3 m 48 s |
| Classification time | 3 h 51 m 17 s | 8 m 46 s | 10 m 13 s |
Table 7. U-Net classifier confusion matrix for Georgetown, TX.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 335 | 0 | 7 | 342 | 98% |
| Developed | 44 | 376 | 26 | 446 | 84% |
| Forest | 21 | 24 | 367 | 412 | 89% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 84% | 94% | 92% | | |
Table 8. SVM classifier confusion matrix for Georgetown, TX.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 207 | 2 | 5 | 214 | 97% |
| Developed | 168 | 368 | 147 | 683 | 54% |
| Forest | 25 | 30 | 248 | 303 | 82% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 52% | 92% | 62% | | |
Table 9. Random forest classifier confusion matrix for Georgetown, TX.

| | Grass | Developed | Forest | Total | User's Accuracy |
|---|---|---|---|---|---|
| Grass | 212 | 0 | 10 | 222 | 95% |
| Developed | 182 | 385 | 131 | 698 | 55% |
| Forest | 6 | 15 | 259 | 280 | 93% |
| Total | 400 | 400 | 400 | 1200 | |
| Producer's accuracy | 53% | 96% | 65% | | |
Table 10. Case study classification results.

| | Time | User's Accuracy (Forest Class) | Producer's Accuracy (Forest Class) | Overall Accuracy | Kappa Coefficient |
|---|---|---|---|---|---|
| Laurel, MS | 1 h 45 m 32 s | 91% | 95% | 95% | 0.92 |
| Georgetown, TX | 42 h 15 m 30 s | 81% | 92% | 83% | 0.75 |
Table 11. Relevant studies using deep and machine learning to classify tree canopy area.

| Data Sources and Study Areas | Imagery Dates | Methods | Highest Overall Accuracies Achieved | Authors |
|---|---|---|---|---|
| 33 imagery patches of Vaihingen, Germany | 2013 | U-Net and OBIA | U-Net; 99% | [35] |
| Google Earth satellite imagery and LiDAR of Tasmania, Australia | 2005; 2015 | Object-based CNN | 2015 data; 98% | [38] |
| UAS LiDAR/multispectral data over Toronto, Ontario, Canada | 2020 | Deep CNN, SVM, maximum likelihood | Deep CNN, RGB and LiDAR; 97% | [23] |
| Gaofen-2 satellite images over Beijing, China | 2021 | Improved Double Branch U-Net; other U-Net variants | Improved Double Branch U-Net; 95.8% | [25] |
| High-resolution satellite images for 888 cities across South America | 2018–2020 | Deeplab3+ deep learning | Deeplab3+; exceeds 90% for all 888 cities | [26] |
| NAIP over Columbia, SC, USA | 2005; 2015 | SVM | SVM; 94% | [52] |
| Planet SkySat satellite imagery over Chattanooga, TN, USA | 2021–2022 | SVM | SVM; 91% | [27] |
| Gaofen-2 satellite imagery over Hui'an County, Fujian Province, China | 2021 | Superpixel-enhanced deep neural forests (SDNF); RF | SDNF; 95% | [64] |
| Worldview-2 satellite imagery over Miami-Dade County, Florida, USA | 2019; 2014 | RF | RF; 87% | [28] |
| Worldview-3 satellite and LiDAR data over Brussels, Belgium | 2015; 2017 | Geospatial OBIA | GEOBIA; 71% | [39] |
| NAIP and SPOT satellite imagery over Wisconsin, USA | 2013 | RF, SVM, boosted regression trees | No significant differences between the three classifiers | [37] |