Article

Change Detection of Deforestation in the Brazilian Amazon Using Landsat Data and Convolutional Neural Networks

by
Pablo Pozzobon de Bem
,
Osmar Abílio de Carvalho Junior
*,
Renato Fontes Guimarães
and
Roberto Arnaldo Trancoso Gomes
Departamento de Geografia, Campus Universitário Darcy Ribeiro, Asa Norte, Universidade de Brasília, DF, 70910-900 Brasília, Brazil
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(6), 901; https://doi.org/10.3390/rs12060901
Submission received: 23 January 2020 / Revised: 17 February 2020 / Accepted: 3 March 2020 / Published: 11 March 2020
(This article belongs to the Special Issue Assessing Changes in the Amazon and Cerrado Biomes by Remote Sensing)

Abstract
Mapping deforestation is an essential step in the management of tropical rainforests. It allows us to understand and monitor both legal and illegal deforestation and its implications, including the effect deforestation may have on climate change through greenhouse gas emissions. Given that there is ample room for improvement in mapping deforestation from satellite imagery, in this study we tested and evaluated algorithms from the growing field of deep learning (DL), particularly convolutional neural networks (CNNs). Although DL algorithms have been used for a variety of remote sensing tasks over the past few years, they are still relatively unexplored for deforestation mapping. We mapped the deforestation between images approximately one year apart, specifically between 2017 and 2018 and between 2018 and 2019. Three CNN architectures available in the literature (SharpMask, U-Net, and ResUnet) were used to classify the change between years and were then compared to two classic machine learning (ML) algorithms, random forest (RF) and multilayer perceptron (MLP), as points of reference. After validation, we found that the DL models performed better on most metrics, including the Kappa index, F1 score, and mean intersection over union (mIoU), with the ResUnet model achieving the best overall results, a value of 0.94 in all three measures in both time sequences. Visually, the DL models also produced classifications with better-defined deforestation patches and required no post-processing to remove noise, unlike the ML models, whose results needed some noise removal to improve.

Graphical Abstract

1. Introduction

Deforestation is one of the primary sources of concern regarding climate change, as it is one of the largest sources of greenhouse gas emissions in the world, second only to the burning of fossil fuels [1]. Within the Brazilian Amazon, studies have shown that deforestation, in conjunction with forest fires, can account for up to 48% of total emissions [2]. It also bears substantial implications for the conservation of ecosystems and their biodiversity in the region, and it has been linked to the loss of species [3] and a general loss of ecosystem stability through fragmentation [4]. Locally, estimates also show that unchecked deforestation could lead to reductions in seasonal rainfall [5] and to the savannization of the environment [6].
Remote sensing imagery has been instrumental in the process of keeping track of deforestation in the Amazon. The Brazilian National Institute for Space Research (INPE) releases annual deforestation and land use information derived from satellite imagery data through their Program for Deforestation Monitoring (PRODES) and TerraClass projects [7,8], which have been widely used for monitoring, research, and policymaking. Carbon emission estimates from deforestation are also dependent on land use and land-use change data [1]. However, they are likely to be underestimated due to the omission of illegal logging data in official reports [9].
Change detection is one of the most common tasks within the field of remote sensing. It is defined as the process of analyzing and quantifying the state of an object or phenomenon at different times [10], and is consequently an essential tool in the processes of understanding and tackling deforestation. The changes present in the images can be semantic (of the object under analysis) or noisy (variations in lighting, shadows, among others) [11]. The challenge in change detection is therefore to use a method whose features minimize the noisy changes while emphasizing the semantic changes intertwined with them. Typically, the final map of a change detection technique is a binary classification containing changed and unchanged regions.
Several reviews and classifications of digital change detection techniques are available in the literature [10,12,13,14,15,16,17,18], evidencing a large quantity of approaches and algorithms in this research area. Tewkesbury et al. [18] provide a synthesis of change detection methods, distinctly considering the units of analysis (pixel, kernel, image-object overlay, image-object comparison, multi-temporal image-object, vector polygon, and hybrid) and the method used to identify the change (layer arithmetic, post-classification change, direct classification, transformation, change vector analysis, hybrid change detection).
Change detection methods based on machine learning (ML) algorithms typically use direct classification [18], which takes a set of stacked temporal images as input and uses complex nonlinear functions to model and determine changes. In this approach, it is not necessary to use pre-classification techniques that seek to define the best measures to detect changes (such as temporal subtraction, data transformation, and change vector analysis). In long-term time series, direct classification based on ML is predominant [19,20].
Deep learning (DL) has recently attracted increasing attention from remote sensing researchers because of its ability to automatically extract features from the image dataset, high-level semantic segmentation, nonlinear problem modeling, and mapping in complex environments [21]. DL has shown great potential in remote sensing, producing state-of-the-art results in different types of remote sensing data processing [22]: image registration [23,24,25,26], land-use and land-cover classification [27,28,29,30], object detection [31,32,33,34], image fusion [35,36,37,38], semantic segmentation [39,40,41,42], and precision evaluation [43]. DL has also been used for change detection, showing superior performance and greater precision in comparison to classic ML methods [44]. The capacity for pattern recognition in the three dimensions of the image (spatial, spectral, and temporal) makes DL algorithms especially effective when applied to change detection with common and recurring patterns [45]. DL-based change detection methods have been applied to different targets such as urban areas [46,47,48,49], land use/land cover [50,51,52], and landslides [53], among others. Peng et al. [54] proposed a subdivision of DL-based change detection methods that considers three units of analysis: (1) feature [55,56,57]; (2) patch [58,59,60,61]; and (3) image [62,63]. In the case of image-based DL change detection, the algorithms learn the segmentation of changes directly from bi-temporal image pairs, avoiding the negative effects caused by using pixel patches [54]. In this approach, the U-Net architecture has been successfully employed [63,64]. Among DL algorithms, convolutional neural networks (CNNs) are one of the leading types of architectures [22]. CNNs differ from traditional ML algorithms in being able to identify patterns within an n-dimensional context at multiple abstraction levels through convolutional filters and use them for inference.
The objective of this study was to investigate the use of CNNs for the detection of deforestation within the Brazilian Amazon, to verify the hypothesis that DL algorithms are a viable and possibly better alternative to classic ML algorithms for mapping deforestation. Like many anthropogenic changes in the landscape, deforestation follows specific spatial patterns, with geometric or regular configurations, and usually develops around official or unofficial roads, forming a dendritic or "fishbone" distribution [65]. Despite being a prime target for the application of DL algorithms, the number of studies related to deforestation is still small, given the variety of the types of algorithms available. In order to investigate the use of DL for deforestation detection, three different CNN architectures were used to classify deforested areas yearly and were then compared to two classical ML algorithms as points of reference.

2. Material and Methods

2.1. Training and Test Sites

In this study, we selected three regions within the Brazilian Amazon as study sites. These scenes encompass major deforestation centers that have developed along the "TransAmazon" (BR-230) [66,67,68] and "Cuiabá–Santarém" (BR-163) [69,70] highways (Figure 1). Roads are the driving forces behind the spatial distribution of deforestation in the Amazon, where most deforestation occurs in the neighborhood of the main highways [71,72]. As widely discussed in the literature, the opening of roads in the Amazon forest favors the establishment of settlements, attracts migrants, facilitates the extraction of resources, increases the profitability of livestock and agriculture, and establishes access to wood [73,74,75,76,77].
The training used two scenes (Sites A and B), and validation utilized the remaining scene (Site C). We defined a bi-temporal approach for modeling and obtained Landsat 8/OLI imagery for each site for the years of 2017, 2018, and 2019, with approximately one year between each observation. Multitemporal images from similar periods of the year reduce variations in the phenology and sun-terrain-sensor geometry. The images acquired were from the dry season to minimize cloud cover and reduce noise (Table 1). Tier 1 Landsat images were used as they offer consistent georegistration within prescribed image-to-image tolerances of less than 12-meter radial root mean square error (RMSE) and are therefore appropriate for time-series analysis [78].

2.2. Deep Learning Models

This research used three different DL architectures available in the literature: U-Net [79], SharpMask [80], and ResUnet [81]. While the U-Net and SharpMask algorithms were not developed for classification with remote sensing data in mind, studies have found that they are not only suitable, but offer state-of-the-art results [39,82]. These algorithms share similarities, being based on architectures known as autoencoders with the addition of bridges or residual connections. Autoencoders downsample the feature maps generated through convolutional filters while incrementally increasing their number to learn low-level features compactly, and then upsample them back to the original input shape for inference. This process can be further enhanced using connections bridging the downsampling and upsampling steps (Figure 2) to propagate information. These connections help speed up training and reduce the degradation of data by combining both low-level detail and high-level contextual information. Low-level spatial detail is essential for change detection and land cover classifications, and that is the main reason behind the choice of this specific type of architecture for this study.
While similar in principle and structure, the chosen architectures differ in depth and complexity. Table 2 shows a summary of the number of layers and the total number of parameters in each model after adapting them for this study. Some of the inner workings of each model are also different. For example, the U-Net and SharpMask algorithms downsample the feature maps through a pooling operation, whereas the ResUnet architecture downsamples by using a stride of two between convolutional filter windows. Another example is how the models use skip connections in different ways, where both U-Net and SharpMask use exclusively long connections (linking the downsampling and upsampling sides of the architecture) while ResUnet makes use of long and short connections (between convolutional blocks).
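As a rough illustration of the two downsampling strategies mentioned above (pooling versus strided filtering) and of a long skip connection, consider the following numpy sketch. This is our own illustrative reconstruction, not code from the actual models; the function names and shapes are ours.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling, as used by U-Net and SharpMask: halves rows and columns,
    keeping the strongest activation in each window."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2, :].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def stride2_downsample(x):
    """Stride-2 sampling, standing in for ResUnet's strided convolutions
    (every other row and column is kept)."""
    return x[::2, ::2, :]

def long_skip_concat(encoder_feat, decoder_feat):
    """Long skip connection: concatenate encoder features with the
    (already upsampled) decoder features along the channel axis."""
    return np.concatenate([encoder_feat, decoder_feat], axis=-1)

x = np.random.rand(200, 200, 16)                     # one feature map, 16 channels
pooled = max_pool_2x2(x)                             # shape (100, 100, 16)
strided = stride2_downsample(x)                      # shape (100, 100, 16)
merged = long_skip_concat(x, np.random.rand(200, 200, 16))  # shape (200, 200, 32)
```

Both downsampling routes halve the spatial dimensions; the skip connection doubles the channel count, which is how low-level detail from the encoder is reinjected into the decoder.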

2.3. Data Structure

The Landsat dataset consisted only of bands 1 through 7, as they share the same spatial resolution and contain most of the spectral information. Our initial training data were a bi-temporal cube stacking the base image and the following year's image, constituting 14 bands. We maintained this data structure for the RF and MLP algorithms, where each pixel is an observation, and each band is a variable. The datasets had to be restructured for the DL algorithms due to the inner workings of the CNNs and due to hardware memory constraints. To build and train the models in this study, we used the Keras [83] Python library, a high-level wrapper for the well-known TensorFlow library [84]. When working with three-dimensional image data, Keras accepts inputs in the form of a four-dimensional array with shape (samples, sample rows, sample columns, channels). To convert our images to the correct format, we extracted patches through 200 × 200 pixel windows with a 10-pixel overlap on each side (Figure 3). This process generated a total of 844 samples per site per time sequence, with a total of 3376 training samples and 1688 test samples.
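The tiling step above can be sketched as follows. This is an illustrative reconstruction of the windowing, not the authors' code; in particular, we assume adjacent windows share `overlap` pixels, i.e., a stride of window size minus overlap.

```python
import numpy as np

def extract_patches(image, size=200, overlap=10):
    """Tile a (rows, cols, bands) array into size x size patches.

    Sketch of the windowing described in Section 2.3; the stride
    (size - overlap) is our assumption about how the overlap is applied.
    """
    stride = size - overlap
    rows, cols, _ = image.shape
    patches = []
    for i in range(0, rows - size + 1, stride):
        for j in range(0, cols - size + 1, stride):
            patches.append(image[i:i + size, j:j + size, :])
    # Stack into the Keras-style 4D shape: (samples, rows, cols, channels)
    return np.stack(patches)

# Bi-temporal stack: 14 bands (7 from each year)
image = np.zeros((600, 600, 14))                # toy image, not a real Landsat scene
batch = extract_patches(image)                  # (9, 200, 200, 14) for this toy size
```

The same function would be run per site and per time sequence, with the resulting arrays concatenated along the sample axis before training.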

2.4. Ground Truth

To create our ground truth masks, we used data from INPE's Program for Deforestation Monitoring (PRODES) [7] for the years 2018 and 2019 as a visual guide and then refined it by remapping the deforestation polygons at a finer scale. PRODES data are commonly used for deforestation reports, and studies have used them before when modeling and studying deforestation dynamics [85,86]. The changes were mapped using digitizing tools from the QGIS software [87] at a 1:30,000 scale and subsequently transformed into binary raster files with 0 and 1 as absence–presence codes, respectively. In this process, we mapped changes exclusively to the natural forest, regardless of the land cover type in the following year (Figure 4).
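The vector-to-raster step (done here with QGIS tools) amounts to burning each digitized polygon into a 0/1 mask. As an illustrative stand-in, the sketch below marks a pixel as 1 when its center falls inside the polygon, using a standard even-odd ray-casting test; the function names and coding are ours, not the authors' workflow.

```python
import numpy as np

def point_in_polygon(x, y, vertices):
    """Even-odd (ray casting) point-in-polygon test."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray through (x, y)
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_polygon(vertices, rows, cols):
    """Burn one polygon into a binary raster: a pixel is 1 when its center
    lies inside the polygon (absence-presence coding, as in Section 2.4)."""
    mask = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            if point_in_polygon(c + 0.5, r + 0.5, vertices):
                mask[r, c] = 1
    return mask

# A square deforestation polygon covering a 4 x 4-pixel patch of an 8 x 8 raster
mask = rasterize_polygon([(0, 0), (4, 0), (4, 4), (0, 4)], rows=8, cols=8)
```

In practice a GIS rasterizer (QGIS, GDAL) does this far more efficiently, but the logic is the same: geometry in, binary presence mask out.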

2.5. Hyperparameters

The RF model required only two hyperparameters: the number of trees to build (ntree) and the number of variables randomly sampled as candidates at each split (mtry), which were set to 500 trees and three variables, respectively. The structure of the MLP algorithm consisted of a simple three-layer network containing an input layer, a hidden layer with 256 nodes, and an output layer. The DL algorithms and the MLP shared the same training hyperparameters. Focal loss [88] was used as the loss function as it excels in classification problems with an uneven number of observations in each class, as is the case for our object of study. For gradient descent optimization, we used the adaptive moment estimation (ADAM) algorithm [89] with incorporated Nesterov momentum (NADAM), with a learning rate of 2 × 10⁻³, β1 of 0.9, and β2 of 0.999. The number of epochs was set to 250 and the batch size to 16 to fit the training process into memory.
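The binary focal loss can be sketched in plain numpy as follows. This is our own illustrative reimplementation of the form proposed by Lin et al. [88], not the exact loss code used in training; the default γ = 2 and α = 0.25 follow that paper, and we do not know which values the authors used.

```python
import numpy as np

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Per-pixel binary focal loss (numpy sketch).

    The modulating factor (1 - p_t)^gamma shrinks the loss contribution of
    well-classified pixels, which is why focal loss suits the heavily
    imbalanced change/no-change problem described above.
    """
    p = np.clip(y_pred, eps, 1 - eps)
    p_t = np.where(y_true == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# An easy pixel (p_t = 0.99) contributes far less loss than a hard one (p_t = 0.2),
# so the abundant, easily-classified no-change pixels do not dominate training.
easy = focal_loss(np.array([1.0]), np.array([0.99]))
hard = focal_loss(np.array([1.0]), np.array([0.2]))
```

With γ = 0 and α = 0.5 the expression reduces (up to a constant factor) to ordinary cross-entropy, which makes the down-weighting effect easy to verify.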

2.6. Modeling Approach

Given the context of our main methodological steps described in the previous sections, a top-down view of our modeling approach is shown in Figure 5. In addition to the DL algorithms, two classical ML algorithms, random forest (RF) and a simple multilayer perceptron (MLP) architecture, were used as reference points for the assessment of the DL models. Both models have been extensively researched for land cover classification and change detection in remote sensing data, and their performance is well documented [90,91].

2.7. Accuracy Assessment

The accuracy metrics were calculated using the test site data exclusively, to avoid the possibility of biased results due to overfitting. The classification results were compared to the ground truth mask for the test site in order to calculate the accuracy measures. Given that deforestation-related change is typically a rare phenomenon, the change–no-change ratio is highly imbalanced [92,93]. Change detection research therefore usually shows a predominance of invariant areas, causing a bias in some accuracy metrics; for example, overall accuracy is relatively high on most change maps [94]. The Precision and Recall measures (Equations (1) and (2)) were used to offer more insight into the distribution of errors in the classifications, along with three other measures besides accuracy: the F1 score (also known as the Dice coefficient), the Kappa index, and the mean intersection over union (mIoU) measure (Equations (3)–(5), respectively). These measures are often used to evaluate DL and ML classifications and are better suited than overall accuracy for imbalanced datasets, as they weight class distributions equally.
$$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}} \tag{1}$$

$$\text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}} \tag{2}$$

$$F1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}$$

$$\text{Kappa} = \frac{p_o - p_e}{1 - p_e} \tag{4}$$

where $p_o$ is the rate of agreement between the ground truth and the classification, and $p_e$ is the expected rate of agreement due to chance.

$$\text{mIoU} = \frac{\text{IoU}_1 + \text{IoU}_2 + \cdots + \text{IoU}_n}{n} \tag{5}$$

where $\text{IoU}$ is the area of intersection divided by the area of union between the classification and the ground truth for a class, and $n$ is the total number of classes. Finally, we used McNemar's test [95] to evaluate the statistical significance of differences between the classifications.
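For reference, all of these measures, plus McNemar's statistic, can be computed directly from the confusion counts of a binary change map. The sketch below is our own illustrative implementation (the function names are ours); the McNemar statistic uses the common continuity-corrected chi-squared form on the two discordant counts.

```python
import numpy as np

def change_metrics(y_true, y_pred):
    """Precision, Recall, F1, Kappa, and mIoU for a binary change map,
    computed from the confusion counts as in Equations (1)-(5)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    n = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    p_o = (tp + tn) / n                                              # observed agreement
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    iou_change = tp / (tp + fp + fn)      # per-class intersection over union
    iou_nochange = tn / (tn + fp + fn)
    miou = (iou_change + iou_nochange) / 2
    return precision, recall, f1, kappa, miou

def mcnemar_statistic(b, c):
    """McNemar's chi-squared statistic (with continuity correction) from the
    two discordant counts: b = pixels model A got right and model B wrong,
    c = the reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Comparing the statistic against the chi-squared distribution with one degree of freedom then yields the p-value used to decide whether two classifications differ significantly.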

3. Results

Quantitatively, the DL models showed a clear advantage over RF and MLP (Table 3). The ResUnet model had the best results with regard to every measure with the exception of Precision in the 2017–2018 time frame. The SharpMask and U-Net models showed similar but slightly inferior results. In comparison, the RF model showed the worst results in most measures, although the performance measures still indicated a good classification. It should be noted that the RF and MLP classifications exhibited a considerable amount of impulse noise (“salt-and-pepper” type), and a majority filter was applied to reduce the noise and improve the classification both visually and quantitatively. The DL models did not require any post-processing steps as they produced classifications with virtually no noise. All models showed very high overall accuracy, but, as explained previously, this measure should be carefully considered as the ratio between the change and no-change classes is highly imbalanced and is mostly explained by the larger, no-change class. McNemar’s test results indicate that despite the seemingly similar results, the model classifications were all significantly different from each other (Table 4).
RF’s higher precision in the 2017–2018 frame can be explained by the low number of false positives it produced. Conversely, however, it produced a very high number of false negatives within the same frame (Figure 6). The ResUnet model had the lowest number of misclassified pixels in both time sequences. It also produced the fewest false negatives of all the models. When looking at the number of false positives, the DL algorithms did not show a large difference over the ML models. With regard to false negatives, however, they showed a clear advantage. The reduction of false-negative classifications is a considerable advantage of the DL models over the classic ML algorithms, given that underestimating the extent of deforestation is a less desirable outcome than overestimating it.
The models detected roughly the same deforestation sites at the validation site across both time sequences (Figure 7 and Figure 8). However, the DL models provided more detailed classifications within smaller scales, particularly around feature edges. Moreover, all models were able to classify “easy” deforestation patches with less complex spectral mixtures (Figure 9), but the classification of the ML algorithms degraded as the spectral signatures within the patches increased in complexity (Figure 10). RF showed a higher tendency to produce false-negatives both visually and quantitatively.
The total deforested area was slightly higher than the ground truth in the SharpMask and ResUnet predictions in both time sequences (Table 5). The opposite was true for the MLP prediction, which slightly underestimated the total area in both time spans. The RF model underestimated the total deforested area by a very large margin (almost 40 km², a 26% decrease in area) in the 2017–2018 sequence due to a large number of false-negative predictions; despite this, it came closest to the ground truth area in the 2018–2019 sequence, although that does not necessarily mean the predicted areas coincided spatially with the ground truth.
Processing times varied from model to model, but the MLP and DL models offered faster training and prediction times than RF, mainly due to the fact that the Tensorflow framework uses the computer’s graphical processing unit (GPU) for parallel processing instead of the central processing unit (CPU), which is traditionally used for ML. Using an NVIDIA GTX 1070 GPU and a batch size of 16, the total training time ranged from approximately 40 minutes for the simpler MLP model (around 10 seconds per epoch) to almost three hours for the more complex ResUnet model (approximately 40 seconds per epoch). Given the size of the datasets, RF took approximately six hours to train using parallel processing with an Intel Core i5-4690k processor. The difference in processing times was particularly considerable when using the models to classify the images after training. The DL models and MLP classified the test scene within seconds, whereas RF took almost an hour to complete the task.

4. Discussion

The CNN architectures used in this study showed a clear advantage over the classic ML algorithms, both quantitatively and visually, regarding deforestation mapping. Similarly, a comparative study of methods for wetland mapping [96] found that deep learning methods (fully convolutional networks and patch-based deep CNNs) obtained better accuracy than RF and support vector machines. The authors found that a CNN may produce inferior performance when the training sample size is small, but it tends to show substantially higher accuracy than conventional classifiers with a larger training sample size. We assume that the difference in performance between the DL and traditional ML methods stems from the former's capability to understand both the spatial and spectral context, whereas the regular ML models inherently see only the spectral information.
Although current methodologies to detect deforestation with DL architectures vary widely, studies agree that they produce excellent classification results [97,98,99]. Other analogous studies that have investigated the use of DL for single-class classification corroborate this trend, although there is large variation in the choice of targets and architectures [34,100,101,102]. While the choice and development of architectures for certain targets is a relevant topic for future research, we have found that autoencoder networks with residual connections seem to be a good starting point for classifications in remote sensing imagery, as they can take advantage of spatial and spectral information in a very efficient manner.
Despite their advantages, DL algorithms are still not as accessible or as easy to use as classic ML models. Besides needing specific hardware for training, they require a relatively large quantity of samples, and developing ground truth masks for specific targets can be challenging and time-consuming over large extents, as both spatial and spectral context are strictly needed. Traditional ML algorithms, in contrast, work with simpler sampling schemes and can produce reasonably good results with a much smaller sample size. Therefore, the process of building a model for broader use (i.e., country-wide monitoring) can be complicated. However, DL models have another advantage in that they can be incrementally trained, meaning they could be gradually provided with new samples to update the model weights and improve their classifications over time. That said, the "black box" nature of these networks can make them undesirable for those who might wish to know and disclose their internal workings, such as public and governmental entities. Despite that, based on our findings, we believe that with enough development, DL algorithms can provide a viable automatic solution for mapping deforestation in the Amazon alongside projects such as INPE's PRODES and TerraClass.
It should be noted that while the models showed good capability for generalization within our region of study, we cannot assert that they would achieve the same results in different areas where deforestation is a common occurrence. A broader-reaching model would necessarily require samples from different regions to account for possible spatial and spectral variability from one region to another. Further research should be carried out to study the applicability of the models to similar targets in different areas. In addition, while Landsat data are sufficient for annual deforestation mapping between dry seasons, more frequent monitoring is virtually impossible, as clouds are present above the forest canopy during most of the year and the ground reflectance cannot reach the satellite's optical sensors. One solution would be the use of radar data, which can penetrate the cloud cover. As such, we also recommend investigating the use of radar data and DL algorithms to detect deforestation within a shorter time frame.

5. Conclusions

In this study, we proposed the use of existing DL architectures to detect yearly changes in vegetation cover in the Brazilian Amazon, successfully achieving our goal. The results show that these algorithms are a viable alternative to classical ML algorithms, improving on all performance measures and offering clear advantages such as faster prediction times and noise-free classifications. The SharpMask, U-Net, and ResUnet models showed similar results; however, ResUnet achieved the best values of accuracy, Kappa, F1, and mIoU, and the fewest errors overall. Visually, the DL algorithms also produced classification masks with well-defined deforestation patches, while the ML models showed an evident loss of quality in harder-to-classify patches, with a tendency to produce false negatives and impulse noise that needed to be filtered. One of the main shortcomings of CNNs seems to be the necessity of a 1:1 ground truth covering the extent of the study area, as the spatial context is critical. In contrast, simpler ML models can be trained on a point-by-point basis (e.g., random sampling points within the extent). Developing a complete ground truth can be an extensive process; however, we achieved very good results with a relatively small sample size and very little augmentation, in the form of overlapping sample patches. The additional bands in remote sensing data may facilitate the detection of targets with fewer samples, but that supposition needs further research. Furthermore, considering the models were validated by being applied to an independent dataset, the performance measures show that they have very good potential for generalization.
DL is still a growing technology, particularly in the remote sensing field, as not even popular libraries such as Keras and TensorFlow have built-in tools for dealing with multi-band satellite imagery, but researchers are slowly adapting and developing better architectures specific to remote sensing data. The architectures used in this study performed well in our specific task, although they were developed for entirely different targets. Therefore, these algorithms do not necessarily need to be tailored to specific cases and can even work interchangeably between fields of research.

Author Contributions

Conceptualization, P.P.d.B. and O.A.d.C.J.; Methodology, P.P.d.B. and O.A.d.C.J.; Validation and writing the original draft, P.P.d.B.; Formal analysis, R.F.G. and R.A.T.G.; Writing, review, editing, and supervision, O.A.d.C.J., R.F.G. and R.A.T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the following institutions: National Council for Scientific and Technological Development (434838/2018-7), Coordination for the Improvement of Higher Education Personnel and the Union Heritage Secretariat of the Ministry of Economy.

Acknowledgments

We are grateful for the suggestions and the formal evaluations of the anonymous reviewers, which allowed an improvement of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Le Quéré, C.; Andrew, R.M.; Friedlingstein, P.; Sitch, S.; Hauck, J.; Pongratz, J.; Pickers, P.A.; Korsbakken, J.I.; Peters, G.P.; Canadell, J.G.; et al. Global carbon budget 2018. Earth Syst. Sci. Data 2018, 10, 2141–2194.
  2. Aragão, L.E.O.C.; Poulter, B.; Barlow, J.B.; Anderson, L.O.; Malhi, Y.; Saatchi, S.; Phillips, O.L.; Gloor, E. Environmental change and the carbon balance of Amazonian forests: Environmental change in Amazonia. Biol. Rev. 2014, 89, 913–931.
  3. Rosa, I.M.D.; Smith, M.J.; Wearn, O.R.; Purves, D.; Ewers, R.M. The environmental legacy of modern tropical deforestation. Curr. Biol. 2016, 26, 2161–2166.
  4. Vedovato, L.B.; Fonseca, M.G.; Arai, E.; Anderson, L.O.; Aragão, L.E.O.C. The extent of 2014 forest fragmentation in the Brazilian Amazon. Reg. Environ. Chang. 2016, 16, 2485–2490.
  5. Spracklen, D.V.; Garcia-Carreras, L. The impact of Amazonian deforestation on Amazon basin rainfall: Amazonian deforestation and rainfall. Geophys. Res. Lett. 2015, 42, 9546–9552.
  6. Boisier, J.P.; Ciais, P.; Ducharne, A.; Guimberteau, M. Projected strengthening of Amazonian dry season by constrained climate model simulations. Nat. Clim. Chang. 2015, 5, 656–660.
  7. INPE. Projeto PRODES: Monitoramento da Floresta Amazônica Brasileira por satélite. Available online: http://www.obt.inpe.br/OBT/assuntos/programas/amazonia/prodes (accessed on 7 October 2019).
  8. INPE. Projeto TerraClass. Available online: http://www.inpe.br/cra/projetos_pesquisas/dados_terraclass.php (accessed on 7 October 2019).
  9. Pearson, T.R.H.; Brown, S.; Murray, L.; Sidman, G. Greenhouse gas emissions from tropical forest degradation: An underestimated source. Carbon Balance Manag. 2017, 12, 3.
  10. Singh, A. Review article: Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
  11. Guo, E.; Fu, X.; Zhu, J.; Deng, M.; Liu, Y.; Zhu, Q.; Li, H. Learning to measure change: Fully convolutional Siamese metric networks for scene change detection. arXiv 2018, arXiv:1810.09111.
  12. Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B.; Lambin, E. Digital change detection methods in ecosystem monitoring: A review. Int. J. Remote Sens. 2004, 25, 1565–1596.
  13. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401.
  14. Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: A systematic survey. IEEE Trans. Image Process. 2005, 14, 294–307.
  15. Warner, T.; Almutairi, A.; Lee, J.Y. Remote sensing of land cover change. In The SAGE Handbook of Remote Sensing; Warner, T.A., Nellis, D.M., Foody, G.M., Eds.; SAGE Publications: London, UK, 2009; pp. 459–472.
  16. Hecheltjen, A.; Thonfeld, F.; Menz, G. Recent advances in remote sensing change detection: A review. In Land Use and Land Cover Mapping in Europe; Manakos, I., Braun, M., Eds.; Springer: Dordrecht, The Netherlands, 2014; Volume 18, pp. 145–178.
  17. Zhu, Z. Change detection using Landsat time series: A review of frequencies, preprocessing, algorithms, and applications. ISPRS J. Photogramm. Remote Sens. 2017, 130, 370–384.
  18. Tewkesbury, A.P.; Comber, A.J.; Tate, N.J.; Lamb, A.; Fisher, P.F. A critical synthesis of remotely sensed optical image change detection techniques. Remote Sens. Environ. 2015, 160, 1–14.
  19. Ghosh, S.; Roy, M.; Ghosh, A. Semi-supervised change detection using modified self-organizing feature map neural network. Appl. Soft Comput. 2014, 15, 1–20.
  20. Schneider, A. Monitoring land cover change in urban and peri-urban areas using dense time stacks of Landsat satellite data and a data mining approach. Remote Sens. Environ. 2012, 124, 689–704.
  21. Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
  22. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  23. Hughes, L.; Schmitt, M.; Zhu, X. Mining hard negative samples for SAR-optical image matching using generative adversarial networks. Remote Sens. 2018, 10, 1552. [Google Scholar] [CrossRef] [Green Version]
  24. Ma, W.; Zhang, J.; Wu, Y.; Jiao, L.; Zhu, H.; Zhao, W. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4834–4843. [Google Scholar] [CrossRef]
  25. Merkle, N.; Auer, S.; Müller, R.; Reinartz, P. Exploring the potential of conditional adversarial networks for optical and SAR image matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1811–1820. [Google Scholar] [CrossRef]
  26. Wang, S.; Quan, D.; Liang, X.; Ning, M.; Guo, Y.; Jiao, L. A deep learning framework for remote sensing image registration. ISPRS J. Photogramm. Remote Sens. 2018, 145, 148–164. [Google Scholar] [CrossRef]
  27. Carranza-García, M.; García-Gutiérrez, J.; Riquelme, J. A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks. Remote Sens. 2019, 11, 274. [Google Scholar] [CrossRef] [Green Version]
  28. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  29. Li, M.; Wang, L.; Wang, J.; Li, X.; She, J. Comparison of land use classification based on convolutional neural network. J. Appl. Remote Sens. 2020, 14, 1. [Google Scholar] [CrossRef]
  30. Scott, G.J.; England, M.R.; Starms, W.A.; Marcum, R.A.; Davis, C.H. Training Deep Convolutional Neural Networks for Land–Cover Classification of High-Resolution Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 549–553. [Google Scholar] [CrossRef]
  31. Chen, F.; Ren, R.; Van de Voorde, T.; Xu, W.; Zhou, G.; Zhou, Y. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks. Remote Sens. 2018, 10, 443. [Google Scholar] [CrossRef] [Green Version]
  32. Kang, M.; Ji, K.; Leng, X.; Lin, Z. Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection. Remote Sens. 2017, 9, 860. [Google Scholar] [CrossRef] [Green Version]
  33. Qian, X.; Lin, S.; Cheng, G.; Yao, X.; Ren, H.; Wang, W. Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion. Remote Sens. 2020, 12, 143. [Google Scholar] [CrossRef] [Green Version]
  34. Yu, L.; Wang, Z.; Tian, S.; Ye, F.; Ding, J.; Kong, J. Convolutional Neural Networks for Water Body Extraction from Landsat Imagery. Int. J. Comput. Intell. Syst. 2017, 16, 1750001. [Google Scholar] [CrossRef]
  35. Liu, X.; Liu, Q.; Wang, Y. Remote sensing image fusion based on two-stream fusion network. Inf. Fusion 2020, 55, 1–15. [Google Scholar] [CrossRef] [Green Version]
  36. Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173. [Google Scholar] [CrossRef]
  37. Scarpa, G.; Vitale, S.; Cozzolino, D. Target-Adaptive CNN-Based Pansharpening. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1–15. [Google Scholar] [CrossRef] [Green Version]
  38. Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989. [Google Scholar] [CrossRef] [Green Version]
  39. Kemker, R.; Salvaggio, C.; Kanan, C. Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 145, 60–77. [Google Scholar] [CrossRef] [Green Version]
  40. Malambo, L.; Popescu, S.; Ku, N.-W.; Rooney, W.; Zhou, T.; Moore, S. A Deep Learning Semantic Segmentation-Based Approach for Field-Level Sorghum Panicle Counting. Remote Sens. 2019, 11, 2939. [Google Scholar] [CrossRef] [Green Version]
  41. Xiao, X.; Zhou, Z.; Wang, B.; Li, L.; Miao, L. Ship Detection under Complex Backgrounds Based on Accurate Rotated Anchor Boxes from Paired Semantic Segmentation. Remote Sens. 2019, 11, 2506. [Google Scholar] [CrossRef] [Green Version]
  42. Zhuo, X.; Fraundorfer, F.; Kurz, F.; Reinartz, P. Optimization of openstreetmap building footprints based on semantic information of oblique UAV images. Remote Sens. 2018, 10, 624. [Google Scholar] [CrossRef] [Green Version]
  43. Xing, H.; Meng, Y.; Wang, Z.; Fan, K.; Hou, D. Exploring geo-tagged photos for land cover validation with deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 141, 237–251. [Google Scholar] [CrossRef]
  44. Khan, S.H.; He, X.; Porikli, F.; Bennamoun, M. Forest change detection in incomplete satellite images with deep neural networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5407–5423. [Google Scholar] [CrossRef]
  45. Mou, L.; Bruzzone, L.; Zhu, X.X. Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 57, 924–935. [Google Scholar] [CrossRef] [Green Version]
46. Ajami, A.; Kuffer, M.; Persello, C.; Pfeffer, K. Identifying a slums’ degree of deprivation from VHR images using convolutional neural networks. Remote Sens. 2019, 11, 1282. [Google Scholar] [CrossRef] [Green Version]
  47. Cao, G.; Li, Y.; Liu, Y.; Shang, Y. Automatic change detection in high-resolution remote-sensing images by means of level set evolution and support vector machine classification. Int. J. Remote Sens. 2014, 35, 6255–6270. [Google Scholar] [CrossRef]
  48. Mboga, N.; Persello, C.; Bergado, J.R.; Stein, A. Detection of informal settlements from VHR images using convolutional neural networks. Remote Sens. 2017, 9, 1106. [Google Scholar] [CrossRef] [Green Version]
  49. Liu, R.; Kuffer, M.; Persello, C. The Temporal Dynamics of Slums Employing a CNN-Based Change Detection Approach. Remote Sens. 2019, 11, 2844. [Google Scholar] [CrossRef] [Green Version]
  50. Cao, C.; Dragićević, S.; Li, S. Land-use change detection with convolutional neural network methods. Environments 2019, 6, 25. [Google Scholar] [CrossRef] [Green Version]
  51. Zhang, X.; Shi, W.; Lv, Z.; Peng, F. Land cover change detection from high-resolution remote sensing imagery using multitemporal deep feature collaborative learning and a semi-supervised chan–vese model. Remote Sens. 2019, 11, 2787. [Google Scholar] [CrossRef] [Green Version]
  52. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70. [Google Scholar] [CrossRef] [Green Version]
  53. Liu, Y.; Wu, L. Geological disaster recognition on optical remote sensing images using deep learning. Procedia Comput. Sci. 2016, 91, 566–575. [Google Scholar] [CrossRef] [Green Version]
  54. Peng, D.; Zhang, Y.; Guan, H. End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef] [Green Version]
  55. Hou, B.; Wang, Y.; Liu, Q. Change Detection Based on Deep Features and Low Rank. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2418–2422. [Google Scholar] [CrossRef]
  56. Niu, X.; Gong, M.; Zhan, T.; Yang, Y. A Conditional Adversarial Network for Change Detection in Heterogeneous Images. IEEE Geosci. Remote Sens. Lett. 2019, 16, 45–49. [Google Scholar] [CrossRef]
  57. Zhang, M.; Xu, G.; Chen, K.; Yan, M.; Sun, X. Triplet-Based Semantic Relation Learning for Aerial Remote Sensing Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2019, 16, 266–270. [Google Scholar] [CrossRef]
  58. Gong, M.; Zhan, T.; Zhang, P.; Miao, Q. Superpixel-based difference representation learning for change detection in multispectral remote sensing images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2658–2673. [Google Scholar] [CrossRef]
  59. Ma, W.; Xiong, Y.; Wu, Y.; Yang, H.; Zhang, X.; Jiao, L. Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network. Remote Sens. 2019, 11, 626. [Google Scholar] [CrossRef] [Green Version]
60. Wang, Q.; Yuan, Z.; Du, Q.; Li, X. GETNET: A General End-to-End 2-D CNN Framework for Hyperspectral Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2018, 57, 3–13. [Google Scholar] [CrossRef] [Green Version]
  61. Zhang, W.; Lu, X. The Spectral-Spatial Joint Learning for Change Detection in Multispectral Imagery. Remote Sens. 2019, 11, 240. [Google Scholar] [CrossRef] [Green Version]
  62. Lebedev, M.; Vizilter, Y.V.; Vygolov, O.; Knyaz, V.; Rubis, A.Y. Change detection in remote sensing images using conditional adversarial networks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 565–571. [Google Scholar] [CrossRef] [Green Version]
  63. Lei, T.; Zhang, Y.; Lv, Z.; Li, S.; Liu, S.; Nandi, A.K. Landslide Inventory Mapping from Bi-temporal Images Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 982–986. [Google Scholar] [CrossRef]
  64. Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. High Resolution Semantic Change Detection. arXiv 2018, arXiv:1810.08452v1. [Google Scholar]
  65. Arima, E.Y.; Walker, R.T.; Perz, S.; Souza, C. Explaining the fragmentation in the Brazilian Amazonian forest. J. Land Use Sci. 2015, 1–21. [Google Scholar] [CrossRef]
  66. Godar, J.; Tizado, E.J.; Pokorny, B. Who is responsible for deforestation in the Amazon? A spatially explicit analysis along the Transamazon Highway in Brazil. Forest Ecol. Manag. 2012, 267, 58–73. [Google Scholar] [CrossRef]
  67. Carrero, G.C.; Fearnside, P.M. Forest clearing dynamics and the expansion of landholdings in Apuí, a deforestation hotspot on Brazil’s Transamazon Highway. Ecol. Soc. 2011, 16, 26. [Google Scholar] [CrossRef] [Green Version]
  68. Li, G.; Lu, D.; Moran, E.; Calvi, M.F.; Dutra, L.V.; Batistella, M. Examining deforestation and agropasture dynamics along the Brazilian TransAmazon Highway using multitemporal Landsat imagery. Gisci. Remote Sens. 2019, 56, 161–183. [Google Scholar] [CrossRef]
  69. Soares-Filho, B.; Alencar, A.; Nepstad, D.; Cerqueira, G.; Vera Diaz, M.D.C.; Rivero, S.; Solórzano, L.; Voll, E. Simulating the response of land-cover changes to road paving and governance along a major Amazon highway: The Santarem–Cuiaba corridor. Glob. Chang. Biol. 2004, 10, 745–764. [Google Scholar] [CrossRef]
  70. Müller, H.; Griffiths, P.; Hostert, P. Long-term deforestation dynamics in the Brazilian Amazon—Uncovering historic frontier development along the Cuiabá–Santarém highway. Int. J. Appl. Earth Obs. 2016, 44, 61–69. [Google Scholar] [CrossRef]
  71. Barber, C.P.; Cochrane, M.A.; Souza, C.M., Jr.; Laurance, W.F. Roads, deforestation, and the mitigating effect of protected areas in the Amazon. Biol. Conserv. 2014, 177, 203–209. [Google Scholar] [CrossRef]
  72. Fearnside, P.M. Highway construction as a force in destruction of the Amazon forest. In Handbook of Road Ecology; van der Ree, R., Smith, D.J., Grilo, C., Eds.; John Wiley & Sons Publishers: Oxford, UK, 2015; pp. 414–424. [Google Scholar]
  73. Alves, D.S. Space-time dynamics of deforestation in Brazilian Amazônia. Int. J. Remote Sens. 2002, 23, 2903–2908. [Google Scholar] [CrossRef]
  74. Arima, E.; Walker, R.T.; Perz, S.G.; Caldas, M. Loggers and forest fragmentation: Behavioral models of road building in the Amazon basin. Ann. Assoc. Am. Geogr. 2005, 95, 525–541. [Google Scholar] [CrossRef]
  75. Arima, E.Y.; Walker, R.T.; Sales, M.; Souza, C., Jr.; Perz, S.G. The fragmentation of space in the Amazon basin: Emergent road networks. Photogramm. Eng. Remote Sens. 2008, 74, 699–709. [Google Scholar] [CrossRef]
  76. Asner, G.P.; Broadbent, E.N.; Oliveira, P.J.C.; Keller, M.; Knapp, D.E.; Silva, J.N.M. Condition and fate of logged forests in the Brazilian Amazon. Proc. Natl. Acad. Sci. USA 2006, 103, 12947–12950. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Pfaff, A.; Robalino, J.; Walker, R.; Aldrich, S.; Caldas, M.; Reis, E.; Perz, S.; Bohrer, C.; Arima, E.; Laurance, W.; et al. Road investments, spatial spillovers, and deforestation in the Brazilian Amazon. J. Reg. Sci. 2007, 47, 109–123. [Google Scholar] [CrossRef] [Green Version]
  78. USGS. Landsat Collections: Landsat Collection 1. Available online: https://www.usgs.gov/land-resources/nli/landsat/landsat-collection-1 (accessed on 3 March 2020).
  79. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  80. Pinheiro, P.O.; Lin, T.-Y.; Collobert, R.; Dollàr, P. Learning to Refine Object Segments. arXiv 2016, arXiv:1603.08695. [Google Scholar]
  81. Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
  82. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68. [Google Scholar] [CrossRef] [Green Version]
  83. Chollet, F. Keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 3 March 2020).
  84. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
  85. Shimabukuro, Y.E.; Arai, E.; Duarte, V.; Jorge, A.; Santos, E.G.; Gasparini, K.A.C.; Dutra, A.C. Monitoring deforestation and forest degradation using multi-temporal fraction images derived from Landsat sensor data in the Brazilian Amazon. Int. J. Remote Sens. 2019, 40, 5475–5496. [Google Scholar] [CrossRef]
  86. Cabral, A.I.R.; Saito, C.; Pereira, H.; Laques, A.E. Deforestation pattern dynamics in protected areas of the Brazilian Legal Amazon using remote sensing data. Appl. Geogr. 2018, 100, 101–115. [Google Scholar] [CrossRef]
  87. Quantum GIS Geographic Information System. Open Source Geospatial Foundation Project. Available online: http://www.qgis.org/it/site/ (accessed on 1 January 2020).
  88. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar]
  89. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  90. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  91. Mahmon, N.A.; Ya’acob, N. A review on classification of satellite image using Artificial Neural Network (ANN). In Proceedings of the 2014 IEEE 5th Control and System Graduate Research Colloquium, Shah Alam, Malaysia, 11–12 August 2014; pp. 153–157. [Google Scholar]
  92. Stehman, S.V. Sampling designs for accuracy assessment of land cover. Int. J. Remote Sens. 2009, 30, 5243–5272. [Google Scholar] [CrossRef]
  93. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  94. Stehman, S. Comparing estimators of gross change derived from complete coverage mapping versus statistical sampling of remotely sensed data. Remote Sens. Environ. 2005, 96, 466–474. [Google Scholar] [CrossRef]
  95. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157. [Google Scholar] [CrossRef]
  96. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. Gisci. Remote Sens. 2018, 55, 243–264. [Google Scholar] [CrossRef]
  97. Rakshit, S.; Debnath, S.; Mondal, D. Identifying Land Patterns from Satellite Imagery in Amazon Rainforest using Deep Learning. arXiv 2018, arXiv:1809.00340. [Google Scholar]
  98. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. arXiv 2019, arXiv:1709.00029. [Google Scholar] [CrossRef] [Green Version]
99. Ortega, M.X.; Bermudez, J.D.; Happ, P.N.; Gomes, A.; Feitosa, R.Q. Evaluation of Deep Learning Techniques for Deforestation Detection in the Amazon Forest. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, IV-2/W7, 121–128. [Google Scholar] [CrossRef] [Green Version]
  100. Liu, C.-C.; Zhang, Y.-C.; Chen, P.-Y.; Lai, C.-C.; Chen, Y.-H.; Cheng, J.-H.; Ko, M.-H. Clouds Classification from Sentinel-2 Imagery with Deep Residual Learning and Semantic Image Segmentation. Remote Sens. 2019, 11, 119. [Google Scholar] [CrossRef] [Green Version]
  101. Li, L.; Liang, J.; Weng, M.; Zhu, H. A Multiple-Feature Reuse Network to Extract Buildings from Remote Sensing Imagery. Remote Sens. 2018, 10, 1350. [Google Scholar] [CrossRef] [Green Version]
  102. Ghorbanzadeh, O.; Blaschke, T.; Gholamnia, K.; Meena, S.; Tiede, D.; Aryal, J. Evaluation of Different Machine Learning Methods and Deep-Learning Convolutional Neural Networks for Landslide Detection. Remote Sens. 2019, 11, 196. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) Location of the study sites within the Amazon region, with (b,c) train sites A and B and (d) test site C shown as true-color composite Landsat images acquired in June and July 2018.
Figure 2. Simple representation of an autoencoder architecture with the addition of skip connections. H, W and F represent the height, width, and number of filtered feature maps, respectively. In this study, H and W are both 200 pixels, while F depends on the specific model architecture.
Figure 3. Example of the patch extraction method to prepare the datasets for the deep learning (DL) algorithms.
Figure 4. Example of the change mapping in three locations between (a) 2017 and (b) 2018 and the respective (c) rasterized deforestation mask.
Figure 5. Flowchart of the modeling approach taken in this study.
Figure 6. Error distributions in the (a) 2017–2018 and (b) 2018–2019 time sequences in total pixel numbers.
Figure 7. Deforestation masks according to the (a) ground truth and classifications produced by the (b) random forest (RF), (c) multilayer perceptron (MLP), (d) SharpMask, (e) U-Net, and (f) ResUnet models in the 2017–2018 sequence.
Figure 8. Deforestation masks according to the (a) ground truth and classifications produced by the (b) RF, (c) MLP, (d) SharpMask, (e) U-Net, and (f) ResUnet models in the 2018–2019 sequence.
Figure 9. First example location within the test site with the (a) ground truth and classifications made by the (b) RF, (c) MLP, (d) SharpMask, (e) U-Net, and (f) ResUnet models in each time sequence.
Figure 10. Second example location within the test site with the (a) ground truth and classifications made by the (b) RF, (c) MLP, (d) SharpMask, (e) U-Net, and (f) ResUnet models in each time sequence. The yellow rectangle highlights an example of a “hard-to-classify” deforestation patch.
Table 1. Acquisition dates for each site and corresponding Landsat scenes.
Site    Landsat Scene    Acquisition Date
                         2017       2018       2019
A       227_63           July 18    July 21    July 24
B       227_65           July 18    July 21    July 24
C       230_65           June 21    June 24    July 13
Table 2. Total number of layers and parameters in each deep learning (DL) architecture used in this study.
Architecture    Layers    Parameters
U-Net           69        1,933,866
SharpMask       114       221,386
ResUnet         93        2,068,554
Table 3. Performance measures for the model validation results for the 2017–2018 and 2018–2019 sequences. Best results in the column in bold text.
Model       2017–2018                                                2018–2019
            F1      Kappa   mIoU    Precision  Recall  Overall Acc.  F1      Kappa   mIoU    Precision  Recall  Overall Acc.
RF          0.8014  0.8003  0.8332  0.9414     0.6976  0.9979        0.8902  0.8892  0.9000  0.8877     0.8928  0.9979
MLP         0.8926  0.8920  0.9024  0.9282     0.8597  0.9987        0.9101  0.9093  0.9167  0.9314     0.8898  0.9983
ResUnet     0.9432  0.9428  0.9459  0.9252     0.9619  0.9993        0.9465  0.9460  0.9487  0.9358     0.9574  0.9990
U-Net       0.9112  0.9106  0.9179  0.9223     0.9003  0.9989        0.9339  0.9332  0.9373  0.9175     0.9508  0.9987
SharpMask   0.9223  0.9218  0.9274  0.9173     0.9274  0.9990        0.9337  0.9331  0.9372  0.9218     0.9460  0.9987
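The measures reported in Table 3 all derive from the binary change/no-change confusion matrix of a predicted mask against the ground truth. The sketch below is not the authors' code, only a minimal illustration of how each measure is computed (function and variable names are ours):

```python
import numpy as np

def change_metrics(y_true, y_pred):
    """Performance measures of Table 3 for binary change/no-change masks."""
    y_true = np.asarray(y_true).ravel().astype(bool)
    y_pred = np.asarray(y_pred).ravel().astype(bool)
    tp = np.sum(y_true & y_pred)     # change pixels correctly detected
    fp = np.sum(~y_true & y_pred)    # false alarms
    fn = np.sum(y_true & ~y_pred)    # missed change
    tn = np.sum(~y_true & ~y_pred)   # correct no-change
    n = tp + fp + fn + tn

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    oa = (tp + tn) / n               # overall accuracy

    # Cohen's Kappa: observed agreement corrected for chance agreement.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - p_chance) / (1 - p_chance)

    # Mean intersection over union across the two classes.
    miou = (tp / (tp + fp + fn) + tn / (tn + fp + fn)) / 2
    return {"f1": f1, "kappa": kappa, "miou": miou,
            "precision": precision, "recall": recall, "oa": oa}
```

Note why the overall accuracy column is uniformly near 0.999 while the other measures separate the models: deforested pixels are a tiny fraction of the scene, so accuracy is dominated by the no-change class, whereas F1, Kappa, and mIoU are sensitive to errors on the change class.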
Table 4. McNemar’s test p-values between model classifications. Values below p = 0.05 indicate that the differences between classifications are statistically significant.
            2017–2018                                2018–2019
            MLP      ResUnet  RF       SharpMask     MLP      ResUnet  RF       SharpMask
ResUnet     <0.001                                   <0.001
RF          <0.001   <0.001                          <0.001   <0.001
SharpMask   <0.001   <0.001   <0.001                 <0.001   <0.001   <0.001
U-Net       <0.001   <0.001   <0.001   <0.001        <0.001   <0.001   <0.001   <0.001
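McNemar’s test [95] compares two classifications of the same pixels using only the discordant pixels, i.e., those that exactly one of the two models classifies correctly. A hedged sketch of the common chi-square form with continuity correction (an illustration under our own naming, not the paper's implementation):

```python
import numpy as np
from math import erf, sqrt

def mcnemar_p(pred_a, pred_b, truth):
    """Approximate p-value of McNemar's chi-square test (1 d.o.f.,
    continuity-corrected) for two classifications of the same pixels."""
    a_ok = np.asarray(pred_a).ravel() == np.asarray(truth).ravel()
    b_ok = np.asarray(pred_b).ravel() == np.asarray(truth).ravel()
    b = np.sum(a_ok & ~b_ok)   # pixels only model A classifies correctly
    c = np.sum(~a_ok & b_ok)   # pixels only model B classifies correctly
    if b + c == 0:
        return 1.0             # no discordant pixels: models indistinguishable
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 d.o.f.: P(X > x) = erfc(sqrt(x/2)).
    return 1.0 - erf(sqrt(chi2 / 2))
```

With millions of validation pixels, even small but systematic disagreements between two models produce very large discordant counts, which is why every pairwise p-value in Table 4 falls below 0.001.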
Table 5. Total deforested area according to the ground truth and each model’s prediction.
Reference               Deforested Area (km²)         Difference from Ground Truth (%)
                        2017–2018     2018–2019       2017–2018     2018–2019
Ground Truth            152.73        233.44          —             —
Random Forest           113.17        234.79          −25.90        +0.58
Multilayer Perceptron   141.45        223.01          −7.39         −4.47
SharpMask               154.40        239.56          +1.10         +2.62
U-Net                   149.10        241.90          −2.38         +3.62
ResUnet                 158.78        238.84          +3.96         +2.31
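The areas in Table 5 follow directly from the pixel count of each change mask; with Landsat's 30 m ground sampling distance the conversion to km² and the signed percentage difference are simple arithmetic (a sketch with illustrative names):

```python
def deforested_area_km2(n_change_pixels, pixel_size_m=30.0):
    """Area of the mapped change class, assuming square pixels
    (30 m for Landsat, as used in this study)."""
    return n_change_pixels * pixel_size_m ** 2 / 1e6

def pct_difference(predicted_km2, reference_km2):
    """Signed deviation of a model's mapped area from the ground truth,
    in percent, as reported in Table 5."""
    return 100.0 * (predicted_km2 - reference_km2) / reference_km2
```

For example, the ResUnet 2017–2018 figures give pct_difference(158.78, 152.73) ≈ +3.96, matching the table.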

de Bem, P.P.; de Carvalho Junior, O.A.; Fontes Guimarães, R.; Trancoso Gomes, R.A. Change Detection of Deforestation in the Brazilian Amazon Using Landsat Data and Convolutional Neural Networks. Remote Sens. 2020, 12, 901. https://doi.org/10.3390/rs12060901
