
Sustainability 2012, 4(10), 2566-2573; doi:10.3390/su4102566

Article
Amazon Rainforest Deforestation Daily Detection Tool Using Artificial Neural Networks and Satellite Images
Thiago Nunes Kehl 1, Viviane Todt 2,*, Mauricio Roberto Veronez 2 and Silvio César Cazella 1
1 Universidade do Vale do Rio dos Sinos (UNISINOS), Ciências Exatas e Tecnológicas, Curso de Graduação em Ciência da Computação, Av. Unisinos, 950, CEP 93022-000 São Leopoldo, RS, Brazil; Email: thiagokehl@gmail.com (T.N.K.); cazella@unisinos.br (S.C.C.)
2 Universidade do Vale do Rio dos Sinos (UNISINOS), Ciências Exatas e Tecnológicas, Programa de Pós-Graduação em Geologia, Av. Unisinos, 950, CEP 93022-000 São Leopoldo, RS, Brazil; Email: veronez@unisinos.br
* Author to whom correspondence should be addressed; Email: vivianetodt@unisinos.br; Tel.: +55-51-3591-1100; Fax: +55-51-3590-8162.
Received: 7 August 2012; in revised form: 9 September 2012 / Accepted: 12 September 2012 /
Published: 4 October 2012

Abstract

The main purpose of this work was the development of a tool to detect daily deforestation in the Amazon rainforest, using satellite images from the MODIS/TERRA sensor [1] and Artificial Neural Networks. The developed tool allows the parameterization of the neural network training configuration, enabling us to find the best neural architecture to address the problem. The tool uses confusion matrices to determine the degree of success of the network. Part of the municipality of Porto Velho, in Rondônia state, lies inside tile H11V09 of the MODIS/TERRA sensor, which was used as the study area. A spectral-temporal analysis of this area was made on 57 images from 20 May to 15 July 2003 using the trained neural network. This analysis allowed us to verify the quality of the implemented neural network classification, as well as helping our understanding of the dynamics of deforestation in the Amazon rainforest. This work demonstrated the great potential of neural networks for image classification. However, the generation of consistent alarms, in other words, detecting predatory actions at an early stage without firing false alarms, is a complex task that has not yet been solved. Therefore, the major contribution of this paper is to provide a theoretical basis and a practical use of neural networks and satellite images to combat illegal deforestation.
Keywords:
Artificial Neural Networks; satellite images classification; deforestation detection

1. Introduction

The Brazilian Amazon rainforest was nearly intact until 1970, when the construction of the Trans-Amazonian Highway triggered high rates of deforestation. Since then, the rate of deforestation in the Legal Amazon [2] has oscillated, but the numbers have always been high [3]. According to the Instituto Nacional de Pesquisas Espaciais—INPE (National Institute for Space Research), approximately 16% of the forest has been destroyed; that is, out of its 3.5 million km², over 550,000 km² had been deforested. In 1978, INPE carried out a survey of the forest for the first time using satellite data and found that as much as 140,000 km² had been cleared. In the following years, no other survey was done, because the government's agenda did not prioritize environmental preservation. Not until 1988 did INPE start to carry out annual surveys, due to international concern regarding the way the Amazon forest was being managed [4].

Considering these elevated deforestation rates, it became clear that some action was necessary to help monitor the region and increase the authorities' control over the area. Some tools have been developed with the aim of improving monitoring; nevertheless, none of them operate daily. Among the main projects developed and used by INPE are PRODES and DETER.

The PRODES methodology aims to produce annual estimates of clear-cut deforestation in the Legal Amazon through the digital classification of images, as described by [5]. It is an important tool for obtaining estimates; however, it is inefficient for surveillance purposes, because the estimates are produced only once a year.

The DETER project creates maps with the location of the areas being cleared, using photo interpretation of images acquired by the MODIS sensor (Moderate Resolution Imaging Spectroradiometer) aboard the Terra satellite. These maps contain information about the dynamics of deforestation in the Legal Amazon. This information is sent fortnightly to entities and departments such as IBAMA (Instituto Brasileiro do Meio Ambiente e Recursos Naturais Renováveis) [6] and supports the surveillance and control of deforestation.

Ideally, as far as surveillance is concerned, there should be a detection system running daily, which would make it possible to spot changes in the forest promptly, increasing surveillance and reducing losses in the areas surrounding the deforested location. Within this framework, an artificial intelligence technique that presents great potential for the fast classification of satellite images is the Artificial Neural Network, which, applied to images from the MODIS/TERRA sensor, might contribute to the daily detection of deforestation zones [7,8,9]. According to [10], an Artificial Neural Network (ANN) can be seen as a group of artificial neurons with the capacity to process data locally; a connection topology and a learning rule define how these neurons are connected. [11] adds that an ANN is a parallel distributed processor made up of simple processing units (neurons), which has the capacity to store knowledge and use it to solve complex problems. In this sense, we propose a tool for the training and use of artificial neural networks, so that a faster classification of images from the MODIS/TERRA sensor can be achieved.

2. Data and Methodology

Images of tile H11V09 of the MODIS/TERRA sensor from 20 May to 15 July 2003 were used for the training and testing of the tool. Fifty-seven images were used, numbered 140 to 196 according to the Julian day of the year. These images were chosen because they have an associated ground truth and have been used in previous works, e.g., [8,9]. The tile includes the study area in the state of Rondônia, bounded by the geographic coordinates 64°16'24.19" and 62°26'59.41" longitude W and 9°30'27.69" and 7°50'28.62" latitude S (see Figure 1). For the pre-processing of the images, SPRING 5.1.5 (Sistema de Processamento de Informações Georeferenciadas—Georeferenced Information Processing System) was used [12].

Figure 1. Legal Amazon, with the study area outlined in bold. Source: Adapted from [6].

The software was implemented in the Java programming language, using AWT (Abstract Window Toolkit) and Swing components for the graphical interface. The Encog framework [13] was used to develop the neural network module, and the MySQL Server database management system was used to store data related to the processed images.

The neural tool for deforestation detection was developed according to the methodology presented by [8], aiming to detect daily degradation based on Terra satellite sensor data. First, the SPRING program produced three gray-scale fraction images, soil (red component), vegetation (green component) and shade (blue component), from the 57 available images using the Spectral Linear Mixture Model. The fraction images were then converted to GeoTIFF [14] format, constituting the database used in this research.

With the georeferenced fraction images in GeoTIFF format, it became possible to develop and test the neural module. The neural module takes as input the same pixel from each of the three fraction images: soil, shade and vegetation. The output is expected to be one of five classes: water/shade, savanna, deforestation, vegetation or clouds.
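As a concrete illustration, the per-pixel decoding step described above, mapping five output activations to a single class, could look like the following sketch in Java, the language the tool was implemented in. The class order, the activation threshold, and the rule for the undefined case (discussed in Section 3) are illustrative assumptions, not the tool's actual code.

```java
// Hypothetical sketch of the decoding step: the five output activations are
// mapped to a single class, or to "undefined" when zero or more than one
// neuron fires above the threshold.
public class PixelDecoder {
    static final String[] CLASSES =
        {"water/shade", "savanna", "deforestation", "vegetation", "clouds"};

    public static String decode(double[] outputs, double threshold) {
        int winner = -1;
        int fired = 0;
        for (int i = 0; i < outputs.length; i++) {
            if (outputs[i] > threshold) {
                fired++;
                winner = i;
            }
        }
        // Exactly one excited neuron -> its class; otherwise undefined.
        return (fired == 1) ? CLASSES[winner] : "undefined";
    }

    public static void main(String[] args) {
        System.out.println(decode(new double[]{0.1, 0.2, 0.9, 0.1, 0.0}, 0.5));
        System.out.println(decode(new double[]{0.8, 0.1, 0.9, 0.1, 0.0}, 0.5));
    }
}
```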

When the neural network classifies an area as deforestation, a visual alarm is fired, showing the geographic coordinates that the pixel represents. In order to avoid processing areas that have already been completely cleared or that are known to be devoid of vegetation, a digital mask was created. Pixels identified as savanna or completely cleared are removed by this mask, so that only the pixels expected to show vegetation are analyzed.

Because of the difficulty of determining the best neural network architecture for the problem in question, free parameterization was considered the best option. That is, it is possible to adjust, through the graphical interface, the number of intermediate layers, the number of neurons in each layer, the number of epochs, the expected error, and the training algorithm (Back-Propagation or its variation, Resilient Back-Propagation [15]).
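The free parameterization described above can be pictured as a small configuration object; the field and enum names below are hypothetical, not the tool's actual API.

```java
// Hypothetical configuration object mirroring the parameters the tool
// exposes through its graphical interface; names are assumptions.
public class TrainingConfig {
    public enum Algorithm { BACK_PROPAGATION, RESILIENT_BACK_PROPAGATION }

    public final int[] neuronsPerHiddenLayer; // one entry per intermediate layer
    public final int epochs;
    public final double expectedError;        // target MSE, e.g. 0.01 for 1%
    public final Algorithm algorithm;

    public TrainingConfig(int[] neuronsPerHiddenLayer, int epochs,
                          double expectedError, Algorithm algorithm) {
        this.neuronsPerHiddenLayer = neuronsPerHiddenLayer;
        this.epochs = epochs;
        this.expectedError = expectedError;
        this.algorithm = algorithm;
    }

    public int hiddenLayers() { return neuronsPerHiddenLayer.length; }

    public static void main(String[] args) {
        // e.g. training 10 of Table 1: one hidden layer of 9 neurons, 2,000 epochs
        TrainingConfig cfg = new TrainingConfig(
            new int[]{9}, 2000, 0.01, Algorithm.RESILIENT_BACK_PROPAGATION);
        System.out.println(cfg.hiddenLayers()); // prints 1
    }
}
```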

Within this scenario, the network is configured at training time, but the numbers of input and output neurons are pre-configured, since they model the problem being addressed. The three input neurons represent the soil, shade and vegetation values of each pixel of the generated gray-scale fraction images, while the five output neurons map the classes of interest: water, savanna, deforestation, vegetation and clouds.

In order to find the best architecture for the problem in question, each training run used a different number of neurons in the intermediate layer; the number of epochs and the number of intermediate layers were also varied. As two networks may perform differently even when trained with the same parameters, all the networks were trained twice and the best result was used.

An automatic confusion matrix generator was applied in order to determine the quality of the neural response. The confusion matrix shows the extent to which the image classifier mixes up each mapped class.
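A minimal sketch of such a confusion matrix generator, assuming classes are encoded as integer indices: rows hold the true class, columns the assigned class, and the hit rate used later in Section 3 is the share of samples on the main diagonal.

```java
// Sketch of an automatic confusion matrix generator: classes are integer
// indices, rows are true classes, columns are assigned classes.
public class ConfusionMatrix {
    public static int[][] build(int[] actual, int[] predicted, int nClasses) {
        int[][] m = new int[nClasses][nClasses];
        for (int i = 0; i < actual.length; i++) {
            m[actual[i]][predicted[i]]++;
        }
        return m;
    }

    // Hit rate = samples on the main diagonal / all samples * 100.
    public static double hitRate(int[][] m) {
        int hits = 0, total = 0;
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < m[i].length; j++) {
                total += m[i][j];
                if (i == j) hits += m[i][j];
            }
        }
        return 100.0 * hits / total;
    }
}
```

With the 150 test samples of Section 3, 142 of which fall on the diagonal, this computation yields the 94.67% hit rate reported in Table 2.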

The training data consist of 240 pixels obtained randomly from the images dated 20 May and 14 July, the first and last images of the data collection, which have the best quality among the available images. The data are divided into 30 points of vegetation, 60 points of savanna, 60 points of deforestation, 30 points of shade/water and 60 points of clouds.

The test data set, used to verify the correctness of the neural response, is made up of 30 samples of each class, 150 in total. It should be mentioned that the points for training and testing were selected randomly, at distinct times, and form two disjoint sets.

3. Results and Discussion

As explained in Section 2, various training sessions were conducted in order to obtain the best architecture for the problem. It should be pointed out, however, that it is impossible to guarantee that an artificial neural network is totally reliable for solving a given problem. A criterion commonly used in neural networks with supervised training algorithms is the Mean Squared Error (MSE) of the network output with respect to the expected response, which determines when the network is responding well and the training can be stopped. During the training processes, the target MSE was set to 1%; however, this value was not achieved in any of the training sessions. The fact that the network did not achieve the expected MSE does not mean that the training was faulty. The hit rates in Table 1, whose values are at or above 90% for all training processes, show that the network is responding correctly, even though the stipulated MSE was not achieved. The hit rates are calculated by the equation below.

Hit Rate = (n_hits / n_sum) × 100

where n_hits is the number of points that the network associated with the correct categories and n_sum is the total number of samples used to test the trained neural network.

Table 1. Set of Training Procedures and Parameters for the Neural Network.
Training | Neurons (hidden) | Epochs | Achieved MSE | Hit Rate
1  | 5  | 1,000  | 29.47% | 92.67%
2  | 5  | 2,000  | 29.63% | 94.00%
3  | 6  | 1,000  | 29.60% | 92.67%
4  | 6  | 2,000  | 28.81% | 93.33%
5  | 7  | 1,000  | 30.90% | 93.33%
6  | 7  | 2,000  | 29.60% | 93.33%
7  | 8  | 1,000  | 30.83% | 92.67%
8  | 8  | 2,000  | 26.41% | 93.33%
9  | 9  | 1,000  | 27.78% | 92.67%
10 | 9  | 2,000  | 27.48% | 94.67%
11 | 9  | 3,000  | 26.44% | 94.00%
12 | 9  | 10,000 | 25.07% | 92.67%
13 | 9  | 20,000 | 20.58% | 92.00%
14 | 10 | 1,000  | 27.74% | 94.00%
15 | 10 | 2,000  | 27.65% | 92.67%
16 | 11 | 1,000  | 29.63% | 93.33%
17 | 11 | 2,000  | 27.00% | 93.33%
18 | 15 | 1,000  | 27.93% | 94.67%
19 | 15 | 2,000  | 28.49% | 92.00%
20 | 30 | 1,000  | 24.72% | 93.33%
21 | 30 | 2,000  | 22.43% | 90.00%
22 | 50 | 1,000  | 23.95% | 92.67%
23 | 50 | 2,000  | 19.31% | 92.00%
24 | 50 | 20,000 | 14.95% | 90.67%

The artificial neural networks that classified the set of samples with the highest number of hits are those of training processes 10 and 18 in Table 1. Training process 18 obtained the same hit rate but has a more complex structure. According to [10], the complexity of the neural model defines the space of possible solutions for a given problem, and using more neurons than necessary reduces the network's power of generalization. For that reason, the least complex neural network that also solves the problem satisfactorily is considered the best. In order to evaluate Neural Network 10, a confusion matrix was produced (Table 2). Confusion matrices make it possible to verify the confusion between mapped classes. An additional category, called Undefined, was created for cases in which more than one neuron was excited or all of them were inhibited, so that no single mapped category could be assigned.
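The selection rule described above, highest hit rate with ties broken in favor of the least complex network, can be sketched as follows, using the relevant rows of Table 1 as an example.

```java
// Sketch of the selection rule: highest hit rate wins, and ties are broken
// in favor of the network with the fewest hidden neurons.
public class BestNetwork {
    /** rows of {trainingId, hiddenNeurons, hitRate}; returns the id of the
     *  least complex network among those with the highest hit rate. */
    public static int selectLeastComplex(double[][] results) {
        double bestRate = -1;
        double bestNeurons = Double.MAX_VALUE;
        int bestId = -1;
        for (double[] r : results) {
            if (r[2] > bestRate || (r[2] == bestRate && r[1] < bestNeurons)) {
                bestRate = r[2];
                bestNeurons = r[1];
                bestId = (int) r[0];
            }
        }
        return bestId;
    }

    public static void main(String[] args) {
        // Trainings 2, 10 and 18 of Table 1: 10 and 18 tie at 94.67%,
        // but 10 has fewer hidden neurons (9 vs. 15).
        double[][] t = {{2, 5, 94.00}, {10, 9, 94.67}, {18, 15, 94.67}};
        System.out.println(selectLeastComplex(t)); // prints 10
    }
}
```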

The confusion matrix shows that the neural network successfully classified 142 pixels out of the 150. When there was confusion, it only involved pixels of deforestation and savanna. Out of a total of 30 deforestation samples, 25 were correctly classified by the network and 5 were confused with Savanna.

Table 2. Confusion Matrix for Neural Network 10.
Category | Water | Cloud | Savanna | Deforestation | Vegetation | Undefined | Sum
Water         | 30 | 0  | 0  | 0  | 0  | 0 | 30
Cloud         | 0  | 30 | 0  | 0  | 0  | 0 | 30
Savanna       | 0  | 0  | 27 | 3  | 0  | 0 | 30
Deforestation | 0  | 0  | 5  | 25 | 0  | 0 | 30
Vegetation    | 0  | 0  | 0  | 0  | 30 | 0 | 30
Sum           | 30 | 30 | 32 | 28 | 30 | 0 | 94.67%

It is believed that the confusion is due to the fact that savanna and deforestation have similar spectral signatures, which makes the classification process harder. Difficulty in sorting categories with similar spectral signatures was also observed by [7] and by [16]; the latter conducted a comparative study between a statistical technique and ANNs and found that both methods show difficulties in classification.

In order to demonstrate the difficulty of sorting out two categories with similar spectral signatures, the same network was trained with exactly the same parameters but without the points related to savanna. Three training sessions were conducted, and in all of them the network converged to the expected MSE before iteration 500; the confusion between categories was eliminated. It is important to emphasize that, due to the fast convergence, the stipulated Mean Squared Error was 0.1%, and not 1% as in the previous training processes.

The confusion matrix for the neural network produced with the same training data set, except for the savanna points, allows us to infer that the savanna category causes confusion in the classification process. In that case, the hit rate of the artificial neural network was 100%, with no confusion between categories, as can be seen in Table 3. It should be highlighted that this neural network was able to classify correctly the five pixels that had previously been misclassified as savanna. Additionally, a neural network containing only savanna and deforestation samples was trained, and the MSE found was similar to those achieved in the training processes of Table 1. These tests show that savanna and deforestation are two spectrally similar categories, which interferes with the classification process. It is possible to point out that, in addition to the architecture, the data sets and the mapped categories affect the performance of the neural network.

The capacity of generalization of the neural network was first determined with the application of a test set and the construction of the confusion matrix. However, it is also necessary to determine the network's capacity to classify a complete image from the MODIS/TERRA sensor. The first image of the set, dated 20 May, was classified by the tool and compared with a thematic map (without clouds) of the same area generated by INPE [4]. This image was chosen because it is at nadir and has no clouds, which is considered appropriate for the comparison of the produced maps. Overall, the classification obtained was satisfactory, and all the categories, including water, were distinguished in the whole image.

Deforestation, however, was mistaken as savanna in some points, due to their spectral similarity. Also, there were some areas of savanna on the image that were misclassified as vegetation, possibly due to the lack of training samples of those areas.

Table 3. Confusion Matrix for the Neural Network without the Savanna Category.
Category | Water | Cloud | Deforestation | Vegetation | Undefined | Sum
Water         | 30 | 0  | 0  | 0  | 0 | 30
Cloud         | 0  | 30 | 0  | 0  | 0 | 30
Savanna       | 0  | 0  | 0  | 0  | 0 | 30
Deforestation | 0  | 0  | 30 | 0  | 0 | 30
Vegetation    | 0  | 0  | 0  | 30 | 0 | 30
Sum           | 30 | 30 | 30 | 30 | 0 | 100%

Some pixels were categorized as undefined when more than one neuron of the network was excited or when none of them was activated. When no neuron is activated, that pixel cannot be used for triggering alarms for the processed scene. Nevertheless, if there is more than one active neuron, it is still possible to use the estimates in the calculation of alarm triggering. It should be emphasized that, as far as producing a thematic map for the scene is concerned, a filter can be used to determine the category of the undefined pixels; one option is to assign an undefined pixel to the category of the majority of the points around it.
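The majority filter suggested above could be sketched as follows; the sentinel value for undefined pixels and the choice of the 8-neighborhood are assumptions for illustration.

```java
// Sketch of a majority filter for undefined pixels: the pixel takes the
// class most frequent among its defined 8-neighbors.
public class MajorityFilter {
    public static final int UNDEFINED = -1;

    public static int majorityOfNeighbors(int[][] classes, int row, int col,
                                          int nClasses) {
        int[] votes = new int[nClasses];
        for (int dr = -1; dr <= 1; dr++) {
            for (int dc = -1; dc <= 1; dc++) {
                if (dr == 0 && dc == 0) continue;            // skip the pixel itself
                int r = row + dr, c = col + dc;
                if (r < 0 || c < 0 || r >= classes.length
                        || c >= classes[r].length) continue; // off the image
                if (classes[r][c] != UNDEFINED) votes[classes[r][c]]++;
            }
        }
        int best = UNDEFINED, bestVotes = 0;
        for (int k = 0; k < nClasses; k++) {
            if (votes[k] > bestVotes) { bestVotes = votes[k]; best = k; }
        }
        return best; // stays UNDEFINED if every neighbor is also undefined
    }
}
```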

4. Conclusions

According to the results obtained, it is possible to conclude that neural networks can be used as classifiers of orbital images to help detect deforestation quickly. This work also shows that the spectral similarity of mapped classes, such as deforestation and savanna, influences the outcome of the neural network. In this sense, it is believed that using more samples from different soil types can contribute to a better classification. The developed tool thus serves both as a classifier of satellite images and as a general-purpose tool, with applications in teaching and in the free creation and testing of neural networks in many areas.

In view of the inherent difficulty of generating efficient deforestation alarms, that is, detecting all deforestation points without firing false alarms, more studies are necessary. The developed tool was able to detect deforestation points; however, it is still unstable and produces false alarms. Better results might be obtained if the data used to train the neural networks are collected by remote sensing specialists. The software must also be enhanced in order to standardize the alarm triggering.

References

  1. National Aeronautics and Space Administration. Modis. Available online: http://modis.gsfc.nasa.gov/about/ (accessed on 16 June 2012).
  2. Instituto Brasileiro de Geografia e Estatística. IBGE releases first digital database of Legal Amazon relief. Available online: http://www.ibge.gov.br/english/presidencia/noticias/noticia_visualiza.php?id_noticia=1409&id_pagina=1 (accessed on 8 June 2012).
  3. Fearnside, P.M. Deforestation in Brazilian Amazonia: History, rates and consequences. Conserv. Biol. 2005, 19, 680–688. [Google Scholar] [CrossRef]
  4. Instituto Nacional de Pesquisas Espaciais. Projeto PRODES Monitoramento da Floresta Amazônica Brasileira por satélite. 2010. Available online: http://www.obt.inpe.br/prodes/index.html (accessed on 30 March 2012).
  5. Câmara, G.; de Valeriano, D.M.; Soares, J.V. Metodologia para o Cálculo da Taxa Anual de Desmatamento na Amazônia Legal; Report; INPE: São José dos Campos, SP, Brazil, 2006. [Google Scholar]
  6. Instituto Nacional de Pesquisas Espaciais. Sistema DETER Detecção de Desmatamentos em Tempo Real, 2010. Available online: http://www.obt.inpe.br/deter/metodologia_v2.pdf (accessed on 20 November 2010).
  7. Todt, V.; Formaggio, A.R.; Shimabukuro, Y. Identificação de Áreas Desflorestadas na Amazônia Através de Uma Rede Neural Artificial Utilizando Imagens Fração Derivadas dos Dados do IR-MSS/CBERS. In XI Simpósio Brasileiro de Sensoriamento Remoto; INPE: Belo Horizonte, MG, Brazil, 2003; pp. 2697–2704. [Google Scholar]
  8. Todt, V. Detecção em tempo real de desflorestamentos na Amazônia com uso de dados MODIS/TERRA e Redes Neurais. PhD Thesis, INPE, São José dos Campos, SP, Brazil, 2007. [Google Scholar]
  9. Deckmann, R.O. SOS Amazônia: Utilização de Redes Neurais Artificiais para a Detecção de Desmatamentos; N/D: São Leopoldo, RS, Brazil, 2009. [Google Scholar]
  10. De Braga, A.P.; Carvalho, A.P.; de Leon, F.; de Ludermir, T.B. Redes Neurais Artificiais: Teoria e Aplicações, 2nd ed.; LTC: Rio de Janeiro, RJ, Brazil, 2007; p. 226. [Google Scholar]
  11. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New Jersey, NJ, USA, 1999; p. 842. [Google Scholar]
  12. Spring. Georeferenced Information Processing System. Available online: http://www.dpi.inpe.br/spring/english/index.html (accessed on 30 March 2012).
  13. Heaton, J. Introduction to Encog 2.3 for Java. 2010. Available online: http://www.heatonresearch.com/dload/ebook/IntroductionToEncogJava.pdf (accessed on 20 November 2010). [Google Scholar]
  14. Vasconcellos, R.M. GeoTIFF uma abordagem resumida do formato. Rio de Janeiro, 2002. CPRM Serviço Geológico do Brasil. Available online: www.cprm.gov.br/publique/media/geotiff.pdf (accessed on 20 November 2010).
  15. Riedmiller, M.; Braun, H. A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 1993; pp. 586–591.
  16. Queiroz, R.B.; Rodrigues, A.G.; Gómez, A.T. Estudo Comparativo Entre as Técnicas Máxima Verossimilhança Gaussiana e Redes Neurais na Classificação de Imagens IR-MSS CBERS 1. In I WorkComp Sul; UNISUL: Palhoça, SC, Brazil, 2004. [Google Scholar]
Sustainability EISSN 2071-1050, published by MDPI AG, Basel, Switzerland.