Urban Sprawl and COVID-19 Impact Analysis by Integrating Deep Learning with Google Earth Engine

Abstract: Timely information on land use, vegetation coverage, and air and water quality is crucial for monitoring and managing territories, especially for areas undergoing dynamic urban expansion. However, obtaining accessible, accurate, and reliable information is not an easy task, since the significant increase in remote sensing data volume poses challenges for the timely processing and analysis of the resulting massive data volume. From this perspective, classical methods for urban monitoring present some limitations, and more innovative technologies, such as artificial-intelligence-based algorithms, must be exploited, together with high-performance cloud platforms and ad hoc pre-processing steps. To this end, this paper presents an approach to the use of cloud-enabled deep-learning technology for urban sprawl detection and monitoring, through the fusion of optical and synthetic aperture radar data, by integrating the Google Earth Engine cloud platform with deep-learning techniques through the use of the open-source TensorFlow library. The model, based on a U-Net architecture, was applied to evaluate urban changes in Phoenix, the second fastest-growing metropolitan area in the United States. The available ancillary information on newly built areas showed good agreement with the produced change detection maps. Moreover, the results were temporally related to the COVID-19 pandemic (caused by the SARS-CoV-2 virus), showing a decrease in urban expansion during the event. The proposed solution may be employed for the efficient management of dynamic urban areas, providing a decision support system to help policy makers measure changes in territories and monitor their impact on phenomena related to urbanization growth and density. The reference data were manually derived by the authors over an area of approximately 216 km², referring to 2019, based on the visual interpretation of high-resolution images, and are openly available.


Introduction
The need for shared decision-making and direction-setting for the creation and use of geospatial data was recognised by the United Nations, leading to the establishment of the United Nations Committee of Experts on Global Geospatial Data Management (UN-GGIM), an intergovernmental group whose primary goals are to collaborate with governments to enhance their policy, arrangements, and legal frameworks, to address global issues, and, as a community with common interests and concerns, to contribute to collective knowledge and the development of effective strategies for building geospatial capacity in developing countries [1]. As reported by UN-GGIM, 2.5 quintillion bytes of data are generated every day, with most of this information currently assimilated by and managed in cloud platforms which are in constant evolution. Decision support systems (DSSs) can build on this wealth of data. In related work on the same study area, the urban expansion of the city is analyzed and six types of land use are mapped utilizing ASTER imagery and object-based analysis [35]; similar approaches have been applied to dynamic land-cover classification [36] and to the detection of road and pavement types [37].
Three models based on a U-Net architecture using different datasets yielded promising results; ancillary information available on a subset of the metropolitan area of Phoenix showed good agreement with the produced change detection maps. The multitemporal data used in this study allow for the estimation of the effects of the pandemic on urban dynamics, with the results obtained consistent with government data on the slower expansion rate observed after the pandemic. The preparation of the reference data was performed entirely manually by the authors over an area of about 216 km², with reference to 2019, based on visual interpretation of high-resolution data provided by the National Agriculture Imagery Program (NAIP) [38], with the resulting urban layer made freely available to the community for training and testing urban detection algorithms, using the described methodology as a possible benchmark. The proposed system aims to overcome the limitations of classical methods and offers a useful tool to governments and decision makers, which is also accessible to non-experts. In order to focus on different periods and map the desired aspect of urban dynamics, a user only needs to change the dates of the images of interest, keeping the workflow unaltered and avoiding the need for a new training phase. It is worth emphasizing that the proposed method makes use of free open data only, which represents a huge advantage when funding sources are limited.
The remainder of the article is structured as follows: The methodology and tools are introduced in Section 2, where a brief description of the GEE platform, the TF library and the chosen U-Net architecture is given. The designed case study is presented in Section 3, describing the area of interest, the pre-processing steps for the selected Sentinel-2 optical and Sentinel-1 SAR data, along with their characteristics, and the reference data preparation. In Section 4, the selected neural network and its setup are described. The results are presented in Section 5, including the COVID-19 impact monitoring on the urban sprawl analysis. Section 6 concludes the article.

Google Earth Engine
GEE is a public cloud computing platform including a repository of a large variety of standard Earth science raster datasets. Petabytes of remote sensing data, such as Landsat, MODIS, Sentinel-1, -2, -3 and -5P, as well as advanced land observing satellite (ALOS) data, are stored on the system, with archives spanning more than 40 years [39]. These multi-modal data can be processed using the JavaScript programming language. Another advantage of using GEE is that any code produced in its framework can be disseminated through official Google channels, direct links, or repositories, and can thus be linked to geographic information system (GIS) services via an application programming interface (API), allowing it to be easily reused and adapted for different scales and situations [40]. Applications developed within GEE range from cultural heritage monitoring [40] to surface water analysis [41] and forest fire dynamics [42], with advantages for the long-term monitoring of urban areas, as demonstrated in [43]. The potential of combining high-resolution satellite imagery, cloud computing, and DL technologies is also described in [44].

TensorFlow
TF is a free, open-source, high-performance library for numerical computation [45], and, apart from being particularly popular for ML-based applications, represents a suitable option for complex models, large training datasets, and workflows in which long training times are expected [46,47]. In the proposed workflow, TF was included in order to increase the overall system performance. TF models are developed, trained, and deployed outside GEE [48], and the results are returned to the GEE framework if post-processing steps are required. In this case, access is provided through the Earth Engine (EE) Python API, as described in [49], running in Colab notebooks [50]. TF allows for the use of the Keras functional application programming interface (API), with which ML solutions can be developed more rapidly through essential abstractions and building blocks [51]. Being cross-platform, TF can run on a wide range of hardware, including GPUs, CPUs, and mobile platforms [52]. A step-by-step description of how to integrate GEE and TF (version 2.0) is given in [53].

Neural Network
The proposed framework uses the Keras implementation of the U-Net model, as presented in Figure 1: a fully convolutional neural network (FCNN) originally developed for medical image segmentation and used today in many other fields, such as Earth observation, for instance, to map surface water [41], sugarcane areas [44], or crop types [54].
The U-Net was chosen for this task since it is suitable for classification and is powerful, stable, and less complex than similar solutions. Several studies have demonstrated its success when applied to multispectral satellite images for Earth surface mapping, especially in impervious environments, such as urban surfaces [55][56][57][58].
For a complete description of the U-Net characteristics and architecture, interested readers can refer to [56,59].

Modular System
In this section, the general description of the modular system is given, while the detailed pipeline used for our research is described in the next section. The main purpose of this study is to provide a system with the ability to extract data about a region of interest, fully implemented in a single cloud environment. The potential of cloud processing platforms, the levels reached by AI, and the wide availability of open satellite data can be combined to effectively create a practical means of monitoring the territory, which can then be integrated, for example, into a web platform and also used by nonexperts. Regardless of the specific choice of images, network architecture, or tools used, the modules necessary for the acquisition of information are divided into a specific structure combining several concepts, as shown in Figure 2, as follows: input data acquisition, reference data creation, data processing, data fusion, preparation of the computational model, generation of classification maps, and combination of these maps for an analysis of changes over time. The input data acquisition can be performed through the numerous repositories containing multi-modal open-source satellite imagery (e.g., Landviewer, Copernicus Open Access Hub, Sentinel Hub, NASA Earthdata Search, Google Earth). The input data can be optical or SAR, with high or low resolution, depending on the desired level of detail of the derived information, and on the area or period of interest.
Compared to the mentioned satellite data, reference data are more difficult to find and are costly to acquire. Despite efforts that have been made to create labeled datasets, these often do not have global coverage or a precise level of detail. In this case too, cloud platforms aid in creating a suitable dataset for a specific case study. In fact, several tools have been developed to create precise datasets to be used as reference information, such as Amazon SageMaker Ground Truth and GEE. If necessary, the input data can be pre-processed before they are used in order to ensure or enhance performance. Moreover, the fusion of multi-temporal and multi-modal data allows information to be obtained which goes beyond the content of single images, since the multi-modal information acquired by the different satellites, using a variety of electromagnetic wavelengths, is complementary. The fusion of these features represents the input for the computational model. The AI-based models learn from the input data and, based on this, recognize new image information. ML algorithms, in particular CNNs, are suitable for generating classification maps but are computationally expensive. This could be a limitation if insufficient hardware resources are available for data processing, which can be tackled using cloud platforms and virtual machines designed for computationally intensive tasks, such as ML (Microsoft Azure, Google Cloud Platform, DigitalOcean, Amazon Web Services). Through appropriately trained models, it is possible to obtain classification maps, yielding information on land cover at a given time. The analysis of the information obtained from classification maps at different acquisition times allows for analysis of the changes that have occurred. Eventually, this information can be used to make decisions or fed back as input for further processing.
Following the general idea of the workflow, the required modules were selected. With respect to computational complexity, and to related issues (e.g., uniqueness of the observations, different approaches to data observation and recording, wide range of dimensionality) in [61], different public and private services for Web-based online processing were analyzed and compared.
Among them, we selected the GEE cloud platform, the GEE cloud repository, and TF as the cloud processor, as shown in Figure 3. These services feature a high degree of interactivity, and they are usable without the need to download or install any software. However, it is important to point out that GEE and TF represent only one possible combination of elements in the process. GEE was chosen as it offers the versatility to exemplify opportunities in terms of data access/selection, visualization, and information fusion. TF was chosen as it offers a suitable, accessible, and broadly accepted environment in which to develop and adjust the deep learning model required for the segmentation task. As will be seen in more detail in the case study, the optical and SAR data were chosen as input data using the GEE Data Catalog, and the GEE Code Editor was used to generate the reference data. Complementary optical and SAR data in space and time were combined as input to a neural network for the task of urban area identification. A U-Net architecture was the model of choice for the classification: U-Net has demonstrated benchmark performance in semantic segmentation and is able to handle large numbers of training samples while performing dense pixel-wise classification. Linked to different points in time, the different outputs were combined to generate change detection maps representing the evolution of urban growth over time. Finally, the results obtained from the change maps were analyzed to acquire knowledge about the impact of COVID-19 on urban growth.

Proposed System Workflow
The detailed pipeline used for our research is shown in Figure 3, and is described in this section. As already highlighted, among the possible solutions, we selected the GEE cloud platform, the GEE cloud repository, and TF as the cloud processor.
For easier interoperability between GEE and TF, methods for importing/exporting data are provided by the EE API when the TFRecord format is used. The first step consisted of gathering and setting up the imagery to be used as input to the neural network (different images should be chosen for training and prediction). After filtering the appropriate images among the available image collections and selecting the area and time period of interest, these were exported to Google Cloud Storage as TFRecords, which were then imported into a virtual machine (VM). Afterwards, the images selected for training and the reference data were stacked to create a single image from which individual samples could be accessed. The final multi-band stacked image was converted into a multidimensional array in which each element stores a 256 × 256 patch of pixels for each band, from which training, validation and testing data sets can be exported. In order to split the reference data with a balanced number of pixels for the classes of interest, pre-made geometries were used to sample the stack at strategic locations.
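To illustrate this step, the following minimal sketch (not taken from the authors' code) shows how TFRecords exported from GEE might be parsed into 256 × 256 multi-band training patches with the tf.data API; the band names, the 'urban' label key, and GZIP compression are assumptions for the example.

```python
import tensorflow as tf

# Assumed band and label names: each serialized example stores one flattened
# 256x256 float patch per feature, the format produced by GEE TFRecord exports.
BANDS = ['B2', 'B3', 'B4', 'B8', 'VV', 'VH']
LABEL = 'urban'
PATCH = 256

FEATURES = {
    name: tf.io.FixedLenFeature([PATCH, PATCH], tf.float32)
    for name in BANDS + [LABEL]
}

def parse_example(proto):
    """Decode one serialized example into an (image, label) pair."""
    parsed = tf.io.parse_single_example(proto, FEATURES)
    image = tf.stack([parsed[b] for b in BANDS], axis=-1)  # (256, 256, bands)
    label = tf.expand_dims(parsed[LABEL], axis=-1)         # (256, 256, 1)
    return image, label

def make_dataset(file_pattern, batch=16):
    """Build a shuffled, batched training dataset from exported TFRecords."""
    files = tf.data.Dataset.list_files(file_pattern)
    return (tf.data.TFRecordDataset(files, compression_type='GZIP')
            .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(256)
            .batch(batch)
            .repeat())
```

The same parsing function can be reused at prediction time, dropping the label feature from the dictionary.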
In order to give details useful for understanding the inner operations and their influence on processing, it must be taken into account that, even if a variety of GPUs are included in Colab (i.e., Nvidia T4s, K80s, or P4s and P100s), it is not possible to select which GPU to use at a specific time. Furthermore, the Colab notebook VM is sometimes not heavy-duty enough to complete an entire training job, especially for a very complex model or a large number of epochs. In these cases, it may be necessary to set up an alternative VM or to package the code for running large training jobs on GEE.
Finally, the trained model was used to make the predictions ( in this case the images were also in TFRecord format). The results were automatically saved in the cloud storage and were then available for any post-processing step. Afterwards, the output of GEE can be directly embedded in different applications. In this study, as mentioned, the deployed model was employed in GEE to execute inferences for urban sprawl analysis on the area of interest, as described in the next section.

Neural Network Setup
In this study, the proposed U-Net model takes 256 × 256 pixel patches as inputs and outputs per-pixel class probability. A mean squared error loss function on the sigmoid output was used for optimization, since this task can be treated as a regression problem rather than a classification problem. Indeed, since the segmentation task is binary, a saturating activation function is suitable here.
Shallower networks were also considered; however, the best performance was achieved by the proposed architecture, composed of five encoder layers, five decoder layers and one output layer, with a probabilistic confidence layer for the urban and non-urban classes as output. Each encoder layer was composed of a linear stack of 2D convolution and batch normalization layers and an activation function (ReLU), followed by a max pooling operation that reduced the spatial resolution of the feature map by a factor of two. Each decoder layer was composed of Concatenate, 2D convolution and batch normalization layers and an activation function (ReLU). Lastly, a final convolutional layer performs a convolution along the channels for each individual pixel (kernel size of (1, 1)) and outputs the final segmentation mask.
The SGD optimizer was used as a training algorithm [62], and the maximum number of epochs used per training cycle was 50, with an initial learning rate of 0.01. Three different sets of images were considered: Sentinel-2 (S2) on its own, Sentinel-1 and S2, and pre-processed Sentinel-1 (S1_ARD) and S2.
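A compact Keras sketch of the setup described above is given below. It follows the stated design (five encoder and five decoder stages, conv + batch norm + ReLU blocks, 2 × 2 max pooling, skip connections via Concatenate, a 1 × 1 sigmoid output convolution, SGD with a 0.01 learning rate and an MSE loss); the filter counts, the transposed-convolution upsampling, and the six-band input are assumptions not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two conv + batch-norm + ReLU stages, as in the described encoder/decoder."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding='same', use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    return x

def build_unet(input_shape=(256, 256, 6), base=16, depth=5):
    """U-Net with `depth` encoder and decoder stages; `base` filters assumed."""
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for d in range(depth):                                   # encoder path
        x = conv_block(x, base * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)                        # halve resolution
    x = conv_block(x, base * 2 ** depth)                     # bottleneck
    for d in reversed(range(depth)):                         # decoder path
        x = layers.Conv2DTranspose(base * 2 ** d, 2, strides=2, padding='same')(x)
        x = layers.Concatenate()([x, skips[d]])              # skip connection
        x = conv_block(x, base * 2 ** d)
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)   # per-pixel confidence
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='mean_squared_error')  # MSE on the sigmoid output, as stated
```

Training would then call `model.fit(dataset, epochs=50, ...)` with the TFRecord-derived dataset.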

Case Study
This study was conducted on Phoenix (USA), in the central region of Arizona within the Sonoran Desert. It is the fifth biggest city in the US, and its area covers about 1338 km², with a great heterogeneity of vegetation types, surfaces and soil characteristics (Figure 4). Since 1960, the city and metropolitan area have undergone major growth, with many of Phoenix's residential skyscrapers being built during that period [63][64][65][66][67][68]. In 2010, Phoenix became the sixth biggest city in the US, with a population of 1,445,632 and millions more citizens in nearby suburbs. In 2016, it was the second most rapidly growing metropolitan area in the US after Las Vegas. In 2020, according to the most recent Census, the population was 1,608,139, an increase of 11.2% in 10 years. More than four million additional residents moved to Arizona over the last four decades, forcing surrounding cities to expand over vast areas of fragile ecosystems, particularly the desert biomes close to Phoenix and Tucson, which were almost uninhabited areas characterized by a scarcity of rainfall and vegetation. For instance, outlying suburbs, such as Buckeye, grew by nearly 80 percent over the past 10 years, with new high-rise residential buildings and row-houses sprawling outward from the urban limits into the desert, as reported in the August 2021 edition of The New York Times [66]. The uncontrolled expansion and population increase in areas not adequately equipped to handle large populations have created numerous problems, such as the provision of water for all new residents and their construction sites, especially in the context of droughts and hot summers, which drain rivers and reservoirs.
In this study, information retrieved from optical and SAR data were combined to improve the final change detection maps. In particular, Sentinel-2 and Sentinel-1 images of the Copernicus ESA mission were chosen.

Sentinel-2 Data Description
The constellation of two Sentinel-2 satellites offers multispectral imagery, including several spectral bands, with a global five-day revisit time at a ground sampling distance of up to 10 m, which was adequate for the case study. Among these bands, the short wave infrared region of the spectrum is very important for the detection of urban areas and their separation from bare soil. Optical RS data are widely used for classification problems due to the spectral information they provide, which allows discrimination between various materials at a high level of detail [73].
For this reason, Level-2A data from the Sentinel-2 MultiSpectral Instrument (MSI) was used as the primary data source; the data were retrieved from the GEE repository [74].
Level-2A refers to the ortho-rectified bottom-of-atmosphere (BOA) reflectance product; bands with a spatial resolution of 10 and 20 m (B2, B3, B4, B5, B6, B7, B8, B8A, B11, B12) were used for the analysis. Overall, a time series of Sentinel-2 composite data was analyzed for the specified case study during the period 2018-2021. The first available images of the area of Phoenix were acquired in December 2018; for the years 2019, 2020 and 2021, the months chosen were March and September in order to monitor urban growth every six months. Image composites were produced by averaging images with reduced cloud coverage for a given month, obtaining a reduction in the presence of noise and local anomalies.
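The compositing logic can be sketched in the EE Python API roughly as follows. This is an illustrative fragment, not the authors' code: it assumes an authenticated `earthengine-api` session, the helper names are ours, and a 20% cloud threshold stands in for the unspecified "reduced cloud coverage" criterion.

```python
def month_window(year, month):
    """Inclusive start / exclusive end ISO dates covering one calendar month
    (pure helper; GEE's filterDate treats the end date as exclusive)."""
    nxt_y, nxt_m = (year + 1, 1) if month == 12 else (year, month + 1)
    return f'{year}-{month:02d}-01', f'{nxt_y}-{nxt_m:02d}-01'

def monthly_composite(roi, year, month, max_cloud=20):
    """Mean composite of low-cloud Sentinel-2 L2A scenes for one month."""
    import ee  # lazy import: requires an authenticated earthengine-api session
    start, end = month_window(year, month)
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(roi)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', max_cloud))
            .select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7',
                     'B8', 'B8A', 'B11', 'B12'])
            .mean())
```

A call such as `monthly_composite(roi, 2019, 3)` would yield the March 2019 composite used as network input.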

Sentinel-1 Data Description
Complementary C-band Sentinel-1 SAR products, including ground range detected (GRD) data [75], were used for the case study. Sentinel-1 satellites acquire SAR data with a global six-day revisit time under any weather conditions. As already mentioned, all-day and all-weather coverage is provided by SAR data. In addition, by examining the radar signal amplitude and exploiting the polarization multiplicity, the main properties of built structures can be evaluated; for these reasons, SAR data are widely used in urban environments. The selected Sentinel-1 data cover the same time period as the Sentinel-2 data. The image polarizations were VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive), while the data were chosen from the interferometric wide swath (IW) acquisition mode with an ascending orbit. In this case, the high-resolution Level-1 GRD data had a 10 m spatial resolution. An 'angle band' was included in each scene, containing at every point the approximate incidence angle from the ellipsoid.
As already highlighted, the optical and SAR data combination was expected to result in feature fusion, since data with different inner characteristics were considered. In particular, while optical data bring information on the Earth surface composition and materials, the contribution of SAR data when polarized is mainly related to the geometry of the surface (i.e., flat, rough, tall, etc.). Therefore, the goal in taking multimodal data in the proposed model was to achieve competitive integration of optical and SAR data at a signal level by enhancing the overall final information on man-made structures, or, for instance, on vegetated areas [76,77].
The Sentinel-1 GRD data used in this study were retrieved, as already pointed out, from the GEE catalogue, and the data in the linear scale under the 'Float' extension were retained (i.e., image collection ID: COPERNICUS/S1_GRD_FLOAT). The final terrain-corrected values for COPERNICUS/S1_GRD_FLOAT were used without conversion to decibels. It is important to emphasize that this choice guarantees that the statistical properties of the data are preserved, a requirement that must, most often, be respected for meaningful outputs [75,78].
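A sketch of the corresponding Sentinel-1 selection in the EE Python API is shown below (illustrative only; it assumes an authenticated session, and the helper names are ours). The filter fields are the standard GRD metadata properties; the `to_db` helper is included only to illustrate why the decibel conversion is deferred, since statistics must be computed in the linear domain.

```python
import math

def to_db(linear):
    """Convert linear backscatter to decibels, for visualization only:
    averaging and other statistics must be computed on linear-scale data."""
    return 10.0 * math.log10(linear)

def s1_collection(roi, start, end):
    """IW-mode, ascending-orbit Sentinel-1 GRD data in linear 'Float' scale,
    with VV, VH and the incidence-angle band (illustrative helper)."""
    import ee  # lazy import: requires an authenticated earthengine-api session
    return (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
            .filterBounds(roi)
            .filterDate(start, end)
            .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
            .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
            .filter(ee.Filter.eq('instrumentMode', 'IW'))
            .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
            .select(['VV', 'VH', 'angle']))
```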
GEE developers provide Sentinel-1 data pre-processed with the following operations: orbit file correction, GRD border noise removal, thermal noise removal, radiometric calibration, and terrain correction using SRTM 30, or ASTER DEM for areas above 60 degrees latitude where SRTM is not available [79,80]. However, the authors conducted a test using a new framework for preparing Sentinel-1 analysis-ready data (S1_ARD), applying additional border noise correction, speckle filtering and radiometric terrain normalization according to [78]. To preserve the information content and user freedom, these additional pre-processing steps are often not applied directly to the distributed data.

Preparation of Reference Data
The preparation of the reference data for training and evaluation was performed manually; the data have been made available for open access [81,82].
The images used for this purpose come from the NAIP database, acquired at a one-meter ground sampling distance (GSD) and available in GEE. The spatial resolution of the data allows the creation of reference data with a high level of detail by visual interpretation. The reference data were created by drawing polygons directly onto the NAIP optical images as imported in GEE, after applying a spatial filter on the region of interest and a temporal filter for the time interval 1 January 2019-31 December 2019, and finally selecting the scenes with the smallest cloud coverage. The digitized reference data were finally transformed from vector to raster data with a spatial resolution of 10 m. As well as being imported into the GEE code editor, the data can be exported in different formats (CSV, SHP (shapefile), GeoJSON, KML, KMZ or TFRecord) and represent an urban layer which has been made freely available to the community for training and testing urban detection algorithms, using the described methodology as a possible benchmark; the data can be found at the links [81,82]. Data collected on Phoenix over an area of 207 km² were split into three subsets used for training, validation and testing. In order to create a more heterogeneous dataset for urban regions in Arizona, with the aim of easily expanding the area of potential application outside the Phoenix metropolitan area, additional reference data were collected for the city of Tucson (Figure 5) over an area of 9.15 km².
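For illustration, the vector-to-raster conversion can be expressed in the EE Python API roughly as follows. This is a sketch under the assumption of an authenticated session; the helper name and the projection choice (UTM zone 12N, which covers Arizona) are ours, not the authors'.

```python
def rasterize_reference(polygons, scale=10):
    """Burn digitized urban polygons into a binary raster at the given scale:
    pixels inside a polygon become 1 (urban), all others 0 (non-urban)."""
    import ee  # lazy import: requires an authenticated earthengine-api session
    fc = ee.FeatureCollection(polygons)
    return (ee.Image(0).byte()
            .paint(fc, 1)                  # paint polygon interiors with value 1
            .rename('urban')
            .reproject(crs='EPSG:32612', scale=scale))  # UTM 12N, 10 m pixels
```

The resulting 'urban' band can then be stacked with the Sentinel bands before TFRecord export.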

Results and Discussion
In this section, first the results of the proposed method are presented, based on the chosen metrics. Then, an urban sprawl qualitative analysis is carried out over the Phoenix area, while a quantitative assessment on a limited region referring to Queen Creek is also reported. Finally, the obtained results are further inspected to assess the impact of the COVID pandemic.

Classification Results and Accuracy
The metrics used for evaluating the proposed models were precision (1), recall (2), and F1 score (3), defined by the following equations:

Precision = TP / (TP + FP) (1)

Recall = TP / (TP + FN) (2)

F1 = 2 × Precision × Recall / (Precision + Recall) (3)

with TP, FP, TN and FN standing, respectively, for true positives, false positives, true negatives, and false negatives. Table 1 shows the validation results for the three different datasets. Standalone S2 provides adequate classification performance, with the fusion of SAR and optical data improving the results. As expected, the overall model performance improved with the merging of optical and SAR data, since their complementary nature allowed for competitive integration of textural, spatial, spectral, and temporal characteristics. When both S2 and S1_ARD were considered, the experiments did not yield superior results, in spite of a notable increase in recall. Considering the nature of the SAR images of the urban areas, the main explanation for this was assumed to be the impact of spatial speckle filtering, which disturbs salient signatures related to urban structures, such as building corners, balconies, or the bottom end of facades. To exemplify this, it is known that signatures of this kind are the main source of information when estimating building deformations with persistent scatterer interferometry. Speckle filtering supports the analysis of extended surfaces with diffuse signal response, e.g., crop fields. However, analyzing urban sprawl favors focusing on salient and spatially concentrated signatures at the maximum available level of spatial resolution.
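The computation of the three metrics from pixel-wise counts is straightforward; a small helper illustrating Equations (1)-(3):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 score from pixel-wise counts
    (Equations (1)-(3); true negatives do not enter any of the three)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```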
In the next section, an urban sprawl analysis is carried out with the model trained using S2 and S1 data. The chosen time frame was selected mainly to create a change map of the dynamic Phoenix metropolitan area over recent years and to investigate whether the health emergency linked to COVID-19 had affected the urbanization phenomenon that has characterized the city for decades. The lack of comprehensive reference data did not allow for a quantitative analysis of the results over the whole Phoenix area. Nevertheless, the availability of reference data on a smaller region enabled a quantitative evaluation of the results, as reported in Section 5.3.

Urban Sprawl Analysis
Regarding the whole Phoenix area, a visual comparison of the true color combinations of the Sentinel-2 data was carried out, with Figures 7 and 8 reporting two subsets of interest for the two periods 2018-2020 and 2020-2021, showing the reliability of the method in identifying new built-up areas.
The visual interpretation of the change maps shows that areas exhibiting the fastest urban growth were located outside the Phoenix city boundaries, and included both industrial and residential areas.

Validation on Queen Creek
A limited quantitative assessment was carried out on Queen Creek (Figure 9), a city southeast of Phoenix, as an official map indicating development areas could be compared to the change detection results (Figure 10). In conclusion, by visually inspecting each polygon, a total of 34 areas of change were correctly detected, with only two areas of undetected change and four false alarms. The results demonstrated that the ancillary information available on a subset of the metropolitan area agreed well with the produced maps of detected changes, highlighting the effectiveness of the proposed method.

Consideration of COVID-19 Impact on Urban Growth Rate
As demonstrated in previous sections, the proposed change detection procedure was reliable, and can be adapted to any period of interest by simply changing the dates of the Copernicus data to be retrieved.
Results for Queen Creek, referring to the periods December 2018-March 2020 and March 2020-September 2021, reported in Figure 10, were further inspected to assess the impact of the COVID-19 pandemic. The city growth was computed on a total urban area of 29.6 km². In the time frame December 2018-March 2020 (15 months), the expansion of the city was 3.53 km²; therefore, an average growth of 9.53% per year was observed. In the period March 2020-September 2021 (18 months), the growth was measured as 2.72 km², an average of 6.11% growth per year. The considered time spans refer to the periods before and after the onset of COVID-19, enabling a quantitative evaluation of the pandemic impact on Queen Creek, with COVID-19 slowing urban expansion by approximately 35%. Official data from the U.S. Census Bureau [84] reported a drop of 10% between May 2019 and May 2020 in the number of permits to build single family homes in Arizona. This was observable in our results, with a higher impact on dynamic areas, such as Queen Creek.
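The annualized growth figures above follow from a simple proportion; the sketch below reproduces the arithmetic (a simple non-compounding annualization, which matches the reported values to within rounding).

```python
def annual_growth_pct(delta_km2, base_km2, months):
    """Annualized growth rate in percent: the fraction of new urban surface
    relative to the total urban area, scaled from the span to 12 months."""
    return delta_km2 / base_km2 * 12 / months * 100

# Pre-pandemic span: 3.53 km2 of new urban area over 15 months on 29.6 km2
pre = annual_growth_pct(3.53, 29.6, 15)   # approximately 9.5 % per year
# Post-onset span: 2.72 km2 over 18 months on the same base
post = annual_growth_pct(2.72, 29.6, 18)  # approximately 6.1 % per year
slowdown = (pre - post) / pre * 100       # approximately 35 %
```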
In general, for the whole selected area of Phoenix, a 22% decrease in growth rate per year was observed by comparing the total surfaces of newly built areas in the two change detection maps, before and after the outbreak of the pandemic. This agrees well with data from the US Census Bureau, which reported a 25% post-outbreak decrease in population growth, a quantity strongly correlated with urban growth, in major metropolitan areas in the US [85].
The presented analysis is just an example of the use of the proposed method. The chosen time frames allow a straightforward analysis of the link between COVID-19 and urban sprawl evolution. Further studies and discussion should be carried out on the specific causes of this slowdown, but these are outside the scope of this study.

Conclusions
In this study a general framework for the analysis of urban growth through ML techniques implemented on a cloud platform was presented. The advantages of using these powerful tools for monitoring territory have been extensively discussed. The availability of open satellite images with a temporal resolution of several days can readily support the mapping of changes for areas of interest.
In the proposed model, we selected GEE as the cloud platform together with the TF library. This combination was found to be effective for the monitoring of complex urban dynamics over large areas characterized by fast growth. The choice of multimodal data, the particular network architecture, the different proposed datasets, and the selected area were intended to provide an example of how cloud computing can enable the integration of AI, data fusion, and change detection techniques in the design of a complex tool useful for decision-making by policy makers in urban monitoring.
As a case study, the urban growth of Phoenix was analyzed, and the impact that the COVID-19 pandemic had on the growth of the area of Queen Creek was assessed. In order to focus on different periods of interest, or to derive a full multitemporal evolution of the area, the workflow can be kept unaltered by just changing the dates of the images of interest or adding new ones.
The easy usability of the selected platforms, specifically GEE and TF, and the available computing power contribute to the adaptability and flexibility of the method, enabling powerful and timely monitoring of the territory within the limited economic resources that public institutions often have available.
An area larger than 200 km² was manually annotated using high-resolution data, with the resulting urban layer made freely available to the community for training and testing urban detection algorithms, using the described methodology as a possible benchmark.
Future work will explore possible extensions of the proposed model and seek to make further improvements. One idea is to consider other pre-processing steps in the data and to analyze their impact. Another objective will be to work on web platform creation to offer a turnkey tool which is ready to use.