Article

Identifying Spatiotemporal Patterns in Land Use and Cover Samples from Satellite Image Time Series

1 Earth Observation and Geoinformatics Division, National Institute for Space Research (INPE), Avenida dos Astronautas, 1758, Jardim da Granja, Sao Jose dos Campos, SP 12227-010, Brazil
2 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7500 AE Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Academic Editor: Lluís Pesquer Mayos
Remote Sens. 2021, 13(5), 974; https://doi.org/10.3390/rs13050974
Received: 30 December 2020 / Revised: 22 February 2021 / Accepted: 25 February 2021 / Published: 4 March 2021

Abstract

The use of satellite image time series analysis and machine learning methods brings new opportunities and challenges for land use and cover changes (LUCC) mapping over large areas. One of these challenges is the need for samples that properly represent the high variability of land use and cover classes over large areas to train supervised machine learning methods and to produce accurate LUCC maps. This paper addresses this challenge and presents a method to identify spatiotemporal patterns in land use and cover samples to infer subclasses through the phenological and spectral information provided by satellite image time series. The proposed method uses self-organizing maps (SOMs) to reduce the data dimensionality, creating primary clusters. From these primary clusters, it uses hierarchical clustering to create subclusters that recognize intra-class variability intrinsic to different regions and periods, mainly in large areas and multiple years. To show how the method works, we use MODIS image time series associated with samples of cropland and pasture classes over the Cerrado biome in Brazil. The results show that the proposed method is suitable for identifying spatiotemporal patterns in land use and cover samples that can be used to infer subclasses, mainly for crop types.
Keywords: data training; time series; clustering; spatiotemporal patterns

1. Introduction

The large number of remote sensing images freely available nowadays, with improved temporal and spatial resolution, brings new opportunities for land use and cover mapping over large areas [1]. Many authors propose a paradigm shift in which change detection is replaced with continuous monitoring [2]. To achieve this goal, researchers need access to satellite image time series to detect complex underlying processes [3].
The use of machine learning methods has been the preferred approach for satellite image time series analysis to map land use and cover changes (LUCC) [1]. These methods are supervised approaches that require a training phase using samples labeled a priori. Comparative analysis of different machine learning methods shows that the quality of training samples has a large impact on classification accuracy [1,4,5,6]. These results motivate our work, which aims to answer the following question: How can training samples be assessed to improve the accuracy of LUCC maps produced by machine learning classification methods that use them?
Different approaches have been proposed to improve the quality of training samples [7]. Such strategies include best practices in collecting training data [7,8,9,10] and methods for refining samples using satellite image time series [11,12,13]. Other approaches, such as semi-supervised learning and active learning, have been applied to support training sample acquisition [14,15,16,17,18]. However, most studies are limited to small areas and inter-annual extents [7] and are not suitable for large areas due to the large number of samples needed to characterize each class [19].
In large areas, the variability of land use and cover classes is high and intrinsic to different regions and periods due to heterogeneous biodiversity as well as distinct climatic conditions and management practices [20,21,22,23]. Therefore, it is essential to explore ways to obtain samples that properly represent high intra-class variability considering spatiotemporal variations in large areas and multiple years.
In many situations, experts use generic labels for training samples (e.g., “forest”, “cropland”, and “grassland”). In practice, the actual spatiotemporal variability of the time series data does not match such generic labels. For this reason, it is useful to distinguish subclasses of high-level labels that correspond to regions of separability in the attribute space. Building on this idea, this paper presents a method to identify subclasses in training samples of satellite image time series. The method distinguishes different types of land use and cover classes over large areas at a more detailed granularity than user-provided labels. Using the phenological and spectral information provided by satellite image time series, the method refines the generic labels and improves the accuracy of the resulting LUCC maps.
The proposed method is based on time series clustering. In general, time series clustering has been applied for exploratory analysis [24,25,26], characterization of spatiotemporal patterns [27,28,29,30,31], and to mitigate the scarcity of discriminated data on land use and cover types [18,32,33]. It is a promising approach to exploit spectral and phenological information and refine training samples to ensure an acceptable level of quality [11,13,18,22]. Spectral and phenological information can be used to discriminate different types of land use and cover classes during sample collection and labeling, contributing to improving the quality of training data sets. However, time series clustering results can be difficult to interpret and visualize, especially when the training data have a high dimension [34].
Our algorithm uses self-organizing maps (SOMs) [35] combined with a hierarchical algorithm [36,37]. SOMs have been applied in spatiotemporal data analysis [28,29,31,38,39,40,41,42] mainly due to two properties. SOMs map high-dimensional data onto a low-dimensional, bi-dimensional grid, representing the input data in low dimension. They also preserve neighborhood information: similar patterns in attribute space tend to stay close in the 2D SOM space. After mapping the samples from the high-dimensional attribute space to the 2D SOM space, we apply a hierarchical algorithm to the SOM clusters. The resulting subclusters refine the original training data into subclasses. These subclasses have lower intra-class variability and higher inter-class variability than the original SOM clusters. Given the SOM properties, they provide a refinement of the original training samples.
As a proof of concept, we present a case study using Moderate Resolution Imaging Spectroradiometer (MODIS) image time series of 7 years (2010–2017) associated with samples of cropland and pasture classes over the Cerrado biome in Brazil. The results show that the proposed method is suitable for identifying spatiotemporal patterns in land use and cover samples that can be used to infer subclasses, mainly for crop types.

2. Material and Methods

2.1. Data

The case study presented in this paper uses samples from the Cerrado biome in Brazil. Cerrado is a large and dynamic landscape with an area of approximately 204 million hectares (Mha), covering almost 22% of central Brazil. It is one of the largest and most diverse tropical savannas in the world [43,44]. However, half of the biome has been changed and deforested due to the advance of agricultural production and livestock [45].
The dataset is a merge of samples collected by remote sensing specialists through visual interpretation of high-resolution images, samples collected by specialists during field observations, and farmer interviews provided by partners of the Brazilian National Institute for Space Research (INPE) team.
Figure 1 illustrates the dataset used in our case study. The dataset includes 15,794 samples spread over the Cerrado biome from 2010 to 2017 and is divided into two classes: (1) cropland and (2) pasture. Most of the cropland samples are in the same locations associated with different years.
Each sample has a spatial location, a period containing the start date and end date according to an agricultural calendar (from August to September), a label describing the sample class, and the satellite image time series for each attribute (bands and vegetation indices) associated with it. The time series were extracted from the product MOD13Q1 (Collection 6) of the MODIS sensor. The images were collected at an interval of 16 days with 250 m spatial resolution. For this dataset, we used two vegetation indices and two bands available in MOD13Q1: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), and the near-infrared (NIR) and mid-infrared (MIR) bands.
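The structure of one training sample described above can be sketched as a simple record; the field names below are illustrative assumptions, not the authors' actual data schema.

```python
from dataclasses import dataclass, field

# Hypothetical record layout for one training sample, following the text:
# a location, a period, a class label, and one time series per attribute
# (bands and vegetation indices). All field names are assumptions.
@dataclass
class Sample:
    latitude: float
    longitude: float
    start_date: str          # start of the agricultural-calendar period
    end_date: str            # end of the agricultural-calendar period
    label: str               # e.g., "cropland" or "pasture"
    # attribute name -> 16-day time series values (MOD13Q1 cadence)
    time_series: dict = field(default_factory=dict)

s = Sample(-12.8875, -45.8769, "2013-09-01", "2014-08-31", "cropland",
           {"NDVI": [0.35, 0.42, 0.61], "EVI": [0.21, 0.30, 0.44]})
```

In the clustering steps that follow, only the `time_series` values feed the similarity computation; the location and period are kept aside for the spatiotemporal interpretation of the resulting clusters.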
The MOD13Q1 product is created by selecting the best available pixel from all MODIS acquisitions within each 16-day period, avoiding cloud, shadow, or low-quality pixels. We obtained the time series from the MOD13Q1 product without further processing.
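The best-pixel idea can be illustrated with a simplified max-value composite on NDVI, where cloudy observations (low NDVI) are discarded in favor of the highest value in each window. This is only an approximation: the actual MOD13Q1 algorithm also considers view angle and quality flags.

```python
import numpy as np

def max_value_composite(ndvi_series, period=16):
    """Keep the maximum NDVI in each 16-observation window. Clouds and
    shadows depress NDVI, so the per-window maximum tends to be the
    clearest acquisition. A simplification of MOD13Q1's compositing."""
    ndvi_series = np.asarray(ndvi_series, dtype=float)
    n = (len(ndvi_series) // period) * period
    return ndvi_series[:n].reshape(-1, period).max(axis=1)

np.random.seed(0)
daily = np.clip(0.6 + 0.05 * np.random.randn(32), 0.0, 1.0)
daily[[3, 20]] = 0.05                     # simulate two cloudy acquisitions
composite = max_value_composite(daily)    # two 16-day composite values
```

The two composite values remain near the clear-sky level even though two acquisitions were contaminated, which is the behavior the compositing is designed to achieve.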
The Cerrado biome has distinct soils and weather across its area. Thus, it has a great variety of heterogeneous land use and cover types. However, it is not trivial to separate these types using 250 m, 16-day MOD13Q1 data. For this reason, we decided to use only the cropland and pasture classes to show how the proposed method works.

2.2. Overview

Clustering methods are useful to extract spatiotemporal information from satellite image time series samples [24]. Some clustering methods operate on spatial objects, others on spatial and temporal objects. Although our time series have a geolocation, we used only the time series values in the similarity measure to identify the clusters [46,47]. The geolocation and period of each time series were then used to explore the spatiotemporal patterns.
Figure 2 illustrates the proposed method to identify and analyze spatiotemporal patterns. Given a dataset of satellite image time series samples, we applied SOM to group similar time series. SOM maps the input data (high dimensionality) onto a bi-dimensional grid (low dimensionality), keeping the neighborhood topology; that is, similar time series tend to be close in the 2D space [35]. The SOM grid contains units called neurons, each with an associated weight vector. At the end of the SOM process, each time series from the input dataset is assigned to a neuron, so clusters can be recognized and created from the SOM output. Since the class label of each sample is known, each neuron was labeled by majority voting. To explore each cluster, the neurons with the same category were selected, and the hierarchical method [48] was applied to their weight vectors to extract subclusters; this secondary clustering addresses the high intra-class variability. Hierarchical clustering allows visualization through a dendrogram, which can be cut at a specific height to define the number of subclusters. Our method suggests the number of clusters through an internal cluster validation index; however, specialists can apply a different number of groups and explore them according to their needs. The subclusters were grouped by time series similarity, but since all samples have a spatial location and a period with start and end dates, these subclusters can indicate patterns in space or time.

2.3. Self-Organizing Map

SOM is an unsupervised learning method based on competitive learning that maps a high-dimensional input space onto a lower-dimensional output space. A large dataset can be mapped and represented by a set of neurons through their weight vectors. An important characteristic of SOM is that it preserves the neighborhood topology; thus, similar input data are mapped to the same neuron or a nearby one.
Each neuron $j$ of the output space has a weight vector $w_j = [w_{j1}, \ldots, w_{jn}]$ with the same dimension $n$ as the input data $x(t) = [x(t)_1, \ldots, x(t)_n]$. The weight vectors can be initialized randomly or according to some heuristic. The algorithm has two main steps. First, the distance $D_j$ between a sample and each neuron of the SOM grid is computed. The neuron with the smallest distance $d_b$ is selected as the best matching unit (BMU) for this sample. The distance and the BMU are given by

$$D_j = \sum_{i=1}^{n} \left( x(t)_i - w_{ji} \right)^2,$$

$$d_b = \min \{ D_1, \ldots, D_j \}.$$
The second step is the adjustment of the weight vector of the BMU and its neighbors so that the neurons acquire similar characteristics. The update rule is given by

$$w_{ji} = w_{ji} + \alpha \, h_{b,j} \left[ x(t)_i - w_{ji} \right],$$

where $\alpha$ is the learning rate and $h_{b,j}$ is the neighborhood function. The learning rate must satisfy $0 < \alpha < 1$; it controls how much the BMU and its neighbors change during the training step. The neighborhood function limits the size of the BMU's neighborhood that must be updated. A variety of neighborhood functions can be found in [35,49].
These steps are repeated $T$ times so that the neurons organize themselves and neighboring neurons become similar. Then, each sample from the dataset is assigned to a neuron.
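The two steps above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the grid size, the linear decay schedules, and the Gaussian neighborhood function are assumptions chosen for clarity.

```python
import numpy as np

def train_som(X, rows=4, cols=4, n_iter=100, alpha0=0.5, alpha_end=0.1,
              sigma0=1.5, seed=42):
    """Minimal SOM sketch: (1) find the BMU by squared Euclidean distance
    D_j; (2) pull the BMU and its grid neighbours towards the sample with
    a Gaussian neighbourhood h_{b,j}. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    W = rng.random((rows * cols, X.shape[1]))          # weight vectors w_j
    # 2D grid coordinates of each neuron, used by the neighbourhood function
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(n_iter):
        frac = t / max(n_iter - 1, 1)
        alpha = alpha0 + (alpha_end - alpha0) * frac   # decaying learning rate
        sigma = sigma0 * (1 - 0.9 * frac)              # shrinking radius
        for x in X[rng.permutation(len(X))]:
            D = ((x - W) ** 2).sum(axis=1)             # D_j for every neuron
            b = D.argmin()                             # BMU: d_b = min D_j
            h = np.exp(-((grid - grid[b]) ** 2).sum(1) / (2 * sigma ** 2))
            W += alpha * h[:, None] * (x - W)          # weight update rule
    # final assignment: each sample goes to its best matching neuron
    bmus = ((X[:, None, :] - W[None]) ** 2).sum(-1).argmin(1)
    return W, bmus
```

After training, `bmus` holds the neuron index of each sample, which is the assignment used to label neurons and build the primary clusters.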
Applying a secondary clustering directly on the SOM grid is not well suited in our case: because of the high variability of patterns within a class, it can generate confusion between the classes. For this reason, we labeled the neurons before splitting the clusters through hierarchical clustering. According to Kohonen [50], when an input dataset has a specific number of classes, we can assume that each neuron belongs to one of these classes. The neurons were categorized according to the majority class label occurring in each one. In case of a tie, the neuron received the label of the majority of its neighborhood.
Kohonen et al. [51] argue that when the neurons are labeled according to the majority, the quality of the organization of the final map can be measured through external clustering validation criteria. In our approach, we consider a cluster to be a set of neurons labeled with the same category. We use purity as the external criterion of cluster validation. Purity assesses a cluster according to the proportion of samples belonging to its most representative class.
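Majority labeling and purity can be sketched as follows; this is a simplified illustration (ties here fall back arbitrarily, whereas the paper resolves them with the neighborhood majority).

```python
def label_neurons(bmus, labels, n_neurons):
    """Majority voting: each neuron takes the most frequent class among
    the samples assigned to it. Ties are broken arbitrarily here; the
    paper uses the neighbourhood majority instead."""
    neuron_label = {}
    for j in range(n_neurons):
        classes = [labels[i] for i in range(len(bmus)) if bmus[i] == j]
        if classes:
            neuron_label[j] = max(set(classes), key=classes.count)
    return neuron_label

def purity(bmus, labels, neuron_label):
    """Fraction of samples whose label matches their neuron's label."""
    hits = sum(neuron_label[b] == l for b, l in zip(bmus, labels))
    return hits / len(labels)

bmus = [0, 0, 0, 1, 1]
labels = ["cropland", "cropland", "pasture", "pasture", "pasture"]
nl = label_neurons(bmus, labels, 2)   # neuron 0 -> cropland, 1 -> pasture
p = purity(bmus, labels, nl)          # 4 of 5 samples match: 0.8
```

One pasture sample falls in a cropland-majority neuron, so the purity is below 1, mirroring the boundary confusion discussed in Section 3.1.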

2.4. Hierarchical Clustering

Once all the neurons were labeled, we applied the hierarchical clustering on the weight vectors of neurons with the same category. This aids in extracting useful information, analyzing the spatiotemporal dynamics, and avoiding confusion between different classes’ samples.
Hierarchical clustering is a method in which the data are partitioned successively, building a hierarchy of clusters [37]. This type of representation makes it easy to visualize at which step each level of similarity occurs. There are two types of hierarchical clustering, agglomerative and divisive. In the divisive algorithm, the entire dataset starts in one cluster, which is successively split into smaller clusters. In the agglomerative method, as illustrated in Figure 2 (step 4), each weight vector ($w_2, w_3, w_4, w_6, w_7, w_8$) starts in its own cluster; a similarity matrix is computed, and the two most similar groups are identified. At each step, the clusters are merged, and the hierarchy is built based on linkage criteria.
The linkage criterion defines the distance measure between clusters. Several linkage criteria have been proposed in the literature; the most common are single, complete, and average linkage [36]. This paper uses the Euclidean distance and average linkage to build the hierarchical clustering. Average linkage computes the mean distance from each element of a cluster to all the elements of the other cluster [36,37].
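Agglomerative clustering with Euclidean distance and average linkage is available in SciPy; the toy weight matrix below stands in for the weight vectors of the neurons of one class.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for neuron weight vectors of a single class: two groups.
weights = np.array([[0.00, 0.10], [0.10, 0.00], [0.05, 0.05],   # group A
                    [1.00, 0.90], [0.90, 1.00]])                # group B

# Build the dendrogram with Euclidean distance and average linkage,
# then cut it so that exactly two subclusters remain.
Z = linkage(weights, method="average", metric="euclidean")
groups = fcluster(Z, t=2, criterion="maxclust")
```

`fcluster` plays the role of cutting the dendrogram at a given height; in the paper, the cut (the number of subclusters) is chosen with the C-Index rather than fixed in advance.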
The hierarchical algorithms build a binary tree called a dendrogram. This structure represents the order in which the clusters were merged. A dendrogram divides the data into internally homogeneous groups. Through the hierarchy of the tree, it is possible to visualize the variability of the data. The dendrogram allows exploring and defining a suitable number of clusters according to the analysis level. Figure 2 (step 5) illustrates a dendrogram and how the clusters can be defined: the dendrogram was cut at the height where three clusters were created. In our method, the internal cluster validity criterion C-Index [52] was applied to define the number of clusters for each class from the dendrogram. It is defined by:
$$C\text{-}Index = \frac{S_d - S_{d_{\min}}}{S_{d_{\max}} - S_{d_{\min}}}$$
where $S_d$ is the sum of the distances between neurons within the same cluster, $S_{d_{\min}}$ is the sum of the smallest distances between pairs of objects within the same cluster, and $S_{d_{\max}}$ is the sum of the largest distances. Finally, the new subclusters of class Z can be created, as shown in Figure 2 (step 6). The weight vectors $w_3$, $w_4$, and $w_5$ were merged into cluster 1, $w_6$ and $w_7$ into cluster 2, and $w_8$ into cluster 3.
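The C-Index formula above can be sketched as follows. Note this follows one common reading of the criterion, in which $S_{d_{\min}}$ and $S_{d_{\max}}$ sum the same number of smallest/largest pairwise distances over the whole dataset; lower values indicate tighter clusters.

```python
import numpy as np
from itertools import combinations

def c_index(X, clusters):
    """C-Index sketch: S_d sums within-cluster pairwise distances;
    S_d_min / S_d_max sum the same number of smallest / largest pairwise
    distances over the whole dataset. Lower is better."""
    pairs = list(combinations(range(len(X)), 2))
    d = np.array([np.linalg.norm(X[i] - X[j]) for i, j in pairs])
    within = np.array([clusters[i] == clusters[j] for i, j in pairs])
    n_w = int(within.sum())                 # number of within-cluster pairs
    s_d = d[within].sum()
    d_sorted = np.sort(d)
    s_min = d_sorted[:n_w].sum()
    s_max = d_sorted[-n_w:].sum()
    return (s_d - s_min) / (s_max - s_min)
```

In practice, one would evaluate `c_index` for several candidate cuts of the dendrogram and keep the number of clusters that minimizes it.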

2.5. Clustering Output

After the clustering process, the clusters must be interpreted and compared to identify the features of the data. The amount of data used in remote sensing analysis is generally large; therefore, interpreting and analyzing the data in an aggregated way facilitates the analysis. In general, our method performs the computational processing that summarizes the input data, facilitating the search for knowledge, information, and pattern discovery. Besides the spatial location and time available in each dataset sample, the clustering result provides information that is useful in further analysis. With the identifiers of each subcluster and the information of the neurons associated with it, a specialist can easily identify patterns that need to be interpreted. Moreover, to facilitate the analyses, interactive visual tools can combine the output generated by the clustering methods with the input data's spatial locations and periods, providing spatial and temporal information.
Figure 3 illustrates an example of output after the clustering methods. For each sample (presented in Section 2.1), the result of clustering methods provides identifiers of neurons and subclusters and their labels generated by SOM and hierarchical clustering, respectively. Three samples of class Z were assigned to different neurons. However, these neurons were labeled as class Z because of the majority class. Although these samples belong to the same category’s neurons, we can notice in neurons 1, 4, and 13 distinct patterns through the weight vector. When the hierarchical clustering was applied to the weight vectors of class Z, these samples were assigned to three different subclusters. Furthermore, through the SOM grid neighborhood, we notice that neuron 4 is an outlier within the neurons of the subcluster of class Z. However, it is necessary to understand whether this neuron is an error or represents a different pattern, e.g., in space or time, within the class Z.
To evaluate the relative performance of the original and refined training data sets, we used 5-fold cross-validation [53]. We split the training dataset into training and test sets to avoid overfitting and biased results [53,54]. Using the 5-fold approach, we estimated the classification accuracy (overall, producer's, and user's accuracy). We chose the random forest classifier, due to its robustness in land use and cover mapping [5,55], to evaluate the training datasets generated from the cluster analysis.
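The evaluation protocol can be sketched with scikit-learn. The synthetic features below are placeholders for the time-series attributes, and the hyperparameters are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for 23-point annual NDVI series of two classes;
# the real study uses MOD13Q1 time series of cropland and pasture samples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.3, 0.05, (60, 23)),    # "pasture-like" series
               rng.normal(0.7, 0.05, (60, 23))])   # "cropland-like" series
y = np.array(["pasture"] * 60 + ["cropland"] * 60)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # one accuracy per fold
```

Each of the five folds serves once as the held-out test set, so the mean of `scores` estimates the overall accuracy without reusing training samples for testing.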

3. Results

This section presents a case study to show how the method presented in this paper works.

3.1. Creating Clusters Using SOM

Figure 4 presents the SOM results for the vegetation indices and spectral bands used in the training step. The parameters used in SOM to generate the maps were: Euclidean distance as the similarity metric, grid size = 14 × 14, initial learning rate = 0.5, final learning rate = 0.1, and number of iterations = 100; the neurons were labeled by majority voting. The vegetation indices and spectral bands were chosen based on the study conducted by Santos et al. [56]. The input parameters of SOM were defined through empirical experiments, with starting points, such as grid size and learning rate, suggested by Vesanto and Alhoniemi [57].
Figure 4 illustrates the primary clustering generated by SOM. The 15,794 samples, presented in Section 2.1, are represented by a set of 196 neurons, of which 139 represent the cropland cluster and 57 represent the pasture cluster. From the grid generated by SOM alone, we can already notice the high variability of the patterns, mainly in the cropland neurons. Moreover, single and double cropping groups can be identified among the cropland neurons, and some pasture neurons are considerably similar to cropland neurons with single cropping types. The similarity between these neurons can be investigated by an expert in more detail, considering the spatial and temporal information provided by the samples. Hypotheses can also be formed about whether or not these samples are separable, considering noise (e.g., clouds), the type of sensor used (spatial and temporal resolution), or mislabeled samples.
Figure 5 shows how the samples are spread over the SOM grid, showing whether they are nearer to or farther from their neighbors. Although each neuron is labeled as a specific class, the neurons are not 100% pure: samples of one class were associated with neurons in which the majority belongs to the other class. Overall, the purity for cropland and pasture is 99.6% and 99.4%, respectively. Most of the confusion occurs near the class boundary. We can identify where these samples were mapped through the SOM grid. For instance, the cropland neurons with single cropping patterns are neighbors of the pasture neurons. In addition, the grids show precisely where cropland samples were mapped to neurons labeled as pasture and vice-versa.

3.2. Revealing the Patterns of Cropland

The hierarchical clustering was applied to 139 weight vectors of cropland. Then, the C-Index suggested 10 clusters for cropland, as illustrated in Figure 6.
Figure 7 and Figure 8 give an overview of the cropland clusters' characteristics. Figure 7a illustrates the cropland neurons partitioned into ten groups in the SOM grid. Note that the neighborhood within each subcluster tends to be similar. Each neuron contains a set of samples and is represented by a weight vector. Figure 7b illustrates the neurons' weight vectors for each subcluster. Figure 8 shows the geographical location of the samples mapped to each cluster. The samples are spread over the Cerrado biome in the states of Mato Grosso (MT), Bahia (BA), Tocantins (TO), and Maranhão (MA).
From the patterns generated by the subclusters and the geographical locations of the samples, an initial analysis conducted by experts to infer subclasses for cropland suggests the following:
  • Cropland 1 represents samples of soy–fallow. These samples are mapped only in the state of Bahia, a region known for its predominantly single cropping regime [58]. Some samples are spread over the Goiás and Tocantins states; however, they were originally labeled as pasture.
  • Cropland 2 represents samples of fallow–cotton. This type of crop is mapped in the states of Mato Grosso and Bahia. The patterns of this group are well defined.
  • Croplands 3, 4, and 8 represent samples of soy–cotton. In this study, this type of crop is mapped only in the state of Mato Grosso. Through the temporal patterns (Figure 7b), we can notice small variations, particularly during the first cycle (soy crop). This difference may be due to the soybean variety. The soybean varieties planted in Brazil can be of early, medium, and late maturity. The average cycle ranges from 99 to 128 days [59].
  • Cropland 5 represents samples of millet–cotton. In this study, this type of crop is mapped only in the state of Mato Grosso.
  • Croplands 7, 9, and 10 represent samples of soy–corn. This type of crop is spread over the Cerrado biome. The variability of this class can be noticed through the temporal patterns extracted by SOM. It occurs due to the climatological and soil variations leading to differences in each area’s agricultural calendar.
  • Cropland 6 is not well defined. Notice in the SOM grid (Figure 7a) that most of the neighbors of Cropland 6 belong to other groups. The temporal signatures can be confounded between soy and millet during the first cycle due to noise, likely caused by clouds. It is necessary to look at these samples in more detail. In contrast, in the second cycle, we can notice patterns of cotton and corn. Additionally, this cluster contains samples initially labeled as pasture.
Some examples are presented throughout this section, detailing how knowledge extraction can be obtained from the output clustering methods.
Figure 9 presents the temporal variability among the patterns of single cropping soy–fallow. All the crop samples from Cropland 1 are mapped only in the west of Bahia state, a region with a tradition of single cropping [58] (Figure 9b). Figure 9 shows the temporal dynamics of a point (-12.8875, -45.8769), labeled single cropping, over four harvests from 2010 to 2014. According to the agricultural calendars of the National Supply Company (CONAB) and the Brazilian Institute of Geography and Statistics (IBGE), soybean planting in this region varies from October to January. For rainfed systems, soybean planting is linked to the rainy season [60]. Therefore, the planting season varies from year to year, as can be seen in Figure 9c. The length of time the crop remained in the field also varied, which may indicate the soybean variety planted in this region (early, medium, or late maturing).
In Figure 10, to illustrate the variability among the soy–corn patterns, we highlight neurons (Figure 10b) belonging to these clusters with distinct spatial locations, agricultural calendars, and patterns affected by clouds (Figure 10c). In neuron 189, some samples belong to the states of Tocantins and Maranhão (7 S). The first cycle of these samples starts later than in the state of Mato Grosso (neurons 106 and 195) because these regions have a different agricultural calendar. Although the pattern of neuron 189 indicates a later beginning of the cycle compared to the samples in Mato Grosso, the neuron also contains samples located in Mato Grosso. This likely happened because of noise caused by clouds in those Mato Grosso samples, producing a pattern similar to that observed in Maranhão.
Figure 10b,d present the weight vectors and the NDVI time series samples associated with each neuron. In addition to the differences at the beginning of the first cycle, the biggest difference is in the cycle's harvest period. The samples assigned to neuron 189 were harvested late, indicating that the soybean planted there is of a later-maturing variety than that of the samples assigned to neurons 106 and 195. In the second cycle, we observe the opposite behavior: most of the corn cycles in neuron 189 are shorter than in neurons 106 and 195. This may be because most samples of neuron 189 are located in the western Bahia, Tocantins, and Maranhão states, where corn crops have shorter cycles (early maturing varieties) than in the state of Mato Grosso. Moreover, the corn crop in these regions was affected by adverse weather conditions in 2017.
Figure 11 illustrates the cluster of Cropland 6 in more detail. This cluster has three neurons; the weight vectors (see Figure 11b) and the time series assigned to each neuron (see Figure 11d) indicate that neuron 103 contains samples of soy–corn, while neurons 61 and 75 contain samples of soy–cotton. Neurons 61 and 75 are distant from the clusters that represent soy–cotton (Croplands 3, 4, and 8) and near the neurons of pasture, soy–corn, and millet–cotton (Figure 11a). This occurs because of peaks caused by clouds that disturb the soybean patterns, decreasing the pattern's average values, which can lead to confusion with millet. This confusion can occur mainly when pasture is planted (notice in Figure 11c the samples of pasture mapped to neurons 61 and 75), since its planting period, between September and March, often coincides with millet's planting period. Furthermore, millet is often used as pasture in crop–livestock integration systems [61].

3.3. Revealing the Patterns of Pasture

In the SOM grid, 57 neurons are classified as pasture. However, the pasture samples do not show variability as high as that of cropland. We separated the dendrogram (Figure 12) into two clusters, as suggested by the C-Index. Although the C-Index suggests only two clusters, small variations can be found in cluster 2, and an expert can explore these variations if necessary.
Figure 13 gives an overview of the pasture clusters. In Pasture 1, the weight vectors (Figure 13b) present patterns that can be confounded with single cropping. This confusion may occur because the months from October to April fall within the rainy season [62]; the spectral values in these months are low due to noise from clouds and rain. Furthermore, it can be observed in the SOM grid (Figure 13a) that the neighbors of the neurons labeled as Pasture 1 are classified as cropland.
The group of Pasture 2, on average, presents similar spectral patterns, and this cluster has proportionally less confusion with cropland. In addition, there are some patterns with the lowest spectral values, e.g., NDVI values below 0.5. NDVI is an index related to biophysical variables that control vegetation productivity, such as net primary productivity and the leaf area index [63]. Therefore, these samples likely represent areas with lower productivity.
Figure 14 illustrates the cropland samples mapped to the pasture clusters. These samples belong to the states of Mato Grosso and Bahia. The NDVI time series of the cropland samples assigned to Pasture 1 present a high incidence of clouds from September to February. Due to these peaks, it is not easy to distinguish which specific type of crop these samples belong to, or whether they really are pasture; it would be necessary to check each region's agricultural calendar for the respective years, or high-resolution images, in detail. The NDVI time series of the cropland samples mapped to Pasture 2 are noisy, and their temporal signatures are similar to those of pasture. These samples may have been affected by noise, or they may be mislabeled. In addition, the authors of [64] presented a pattern of 370 pasture samples in the state of Mato Grosso, also using time series of the MOD13Q1 product, in which much cloud-induced noise can likewise be observed, similar to the patterns of the samples labeled as cropland that were assigned to Pasture 2.

3.4. Assessing the Performance of the Training Samples

After the cluster analysis, we evaluated the performance of two datasets generated from the clusters. First, we used the entire dataset, in which the samples mapped to pasture neurons kept the cropland label. Second, we removed from the dataset the cropland samples that were mapped to pasture clusters. Additionally, we divided Cropland 6 into soy–corn and soy–cotton (see Figure 11). Table 1 presents the number of samples associated with each cluster and the class assigned to each one.
Table 2 and Table 3 present the confusion matrices for the samples relabeled according to the analysis provided by the clusters. For the entire dataset, the overall accuracy was 97% (Table 2), and for the filtered dataset it was 98% (Table 3). Although the producer's accuracy for cropland, presented in Table 2, is low, few samples were classified as pasture. Both confusion matrices indicate that most confusion occurs among soy–corn, soy–cotton, and millet–cotton, as highlighted in the Cropland 6 analysis. Soy–cotton and soy–fallow mix together; this can happen because their first cycle is the same, and small variations during the second cycle can affect the separability between them. The producer's accuracy for millet–cotton is also low, probably because cloud noise during the first soybean cycle makes the separability between soybean and millet difficult. The confusion between fallow–cotton and soy–cotton can occur because of land variation during the fallow cycle, and the same applies to the confusion between soy–fallow and soy–corn. Although we removed the cropland samples mapped to pasture neurons (Table 3), the confusion between Pasture 2 and the double-cropping types still occurs, because cloud noise affects the separability between them.
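The accuracy figures quoted above can be reproduced from any confusion matrix. The sketch below uses a hypothetical two-class matrix, not the paper's data, and assumes the convention of Tables 2–4: rows are predicted classes (user's accuracy, UA) and columns are reference classes (producer's accuracy, PA).

```python
# Illustrative sketch: overall, user's, and producer's accuracy from a
# square confusion matrix (rows = predictions, columns = reference).
def accuracy_metrics(matrix):
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    diag = [matrix[i][i] for i in range(k)]
    overall = sum(diag) / n
    # User's accuracy: diagonal over row total (commission errors).
    ua = [d / sum(row) for d, row in zip(diag, matrix)]
    # Producer's accuracy: diagonal over column total (omission errors).
    col_totals = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    pa = [d / c for d, c in zip(diag, col_totals)]
    return overall, ua, pa

# Hypothetical pasture-vs-cropland matrix:
m = [[90, 5],
     [10, 95]]
overall, ua, pa = accuracy_metrics(m)
print(round(overall, 3))  # (90 + 95) / 200 = 0.925
```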
In addition, we evaluated the original dataset without the relabeled samples, as shown in Table 4. As expected, the training data performed much better when the classes were not discriminated into subclasses: despite the confusion between cropland and pasture, the overall accuracy was 99%. When the labels were made more specific, the confusion spread among the related subclasses; however, we can then see where the most significant confusion occurs, such as between Pasture 2 and double cropping and between Pasture 1 and single cropping.

4. Discussion

In this study, we show how the proposed method can be applied. The aim is to assess the clusters provided by SOM combined with hierarchical clustering to analyze the possibility of generating subclasses. The method suggested ten subclusters for cropland and two for pasture. We relabeled the samples initially labeled as cropland into five subclasses according to their spatial, temporal, and phenological information. Besides inferring subclasses, it is also possible to identify mislabeled or noisy samples in the training dataset, contributing to dataset enhancement. For instance, Cropland 1 contains some pasture samples. In this case, an analysis by experts could be conducted to verify why these samples are not well separated. This can be achieved using phenological metrics or by checking high-resolution images or maps of agriculture and pasture. This analysis is essential to verify whether the samples are inseparable due to the sensor's spatial and temporal resolution, due to noise such as clouds, or due to mislabeling.
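For readers unfamiliar with SOM, the core of the method's first step, competitive training of a grid of neurons whose weight vectors summarize the time series, can be sketched in a few lines. The paper's implementation is the sits R package; the pure-Python toy below, with arbitrary grid size, learning rate, and neighborhood radius, only illustrates the update rule.

```python
# Toy SOM sketch (illustrative; the paper uses the sits R package, and
# all parameters here are arbitrary). Each neuron holds a weight vector
# that is pulled toward the samples it and its grid neighbors match.
import math, random

def train_som(samples, rows=3, cols=3, epochs=50, alpha0=0.5, sigma0=1.5):
    dim = len(samples[0])
    rng = random.Random(42)
    w = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    pos = [(i // cols, i % cols) for i in range(rows * cols)]
    for t in range(epochs):
        alpha = alpha0 * (1 - t / epochs)            # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in samples:
            # Best-matching unit: neuron with the closest weight vector.
            bmu = min(range(len(w)),
                      key=lambda k: sum((a - b) ** 2 for a, b in zip(x, w[k])))
            for k in range(len(w)):
                d2 = (pos[k][0] - pos[bmu][0]) ** 2 + (pos[k][1] - pos[bmu][1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # Gaussian neighborhood
                w[k] = [wk + alpha * h * (xi - wk) for wk, xi in zip(w[k], x)]
    return w

# Two toy "time series" patterns; distinct grid regions come to represent them.
weights = train_som([[0.9, 0.2, 0.9, 0.2], [0.2, 0.8, 0.2, 0.8]])
print(len(weights))  # 9 neurons on a 3x3 grid
```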
The output table provided by the clustering methods, illustrated in Figure 3, allows the samples' information to be quantified, such as the number of pasture samples mapped to neurons labeled as Cropland 1. As shown in Table 4, the original dataset presents confusion between pasture and cropland. However, when subclasses are inferred through the subclusters, the reason for this type of confusion becomes clearer. In our dataset, there is confusion between single-cropping and pasture samples, which occurs due to the similarity between their temporal patterns. This can be observed in further detail in Cropland 2 (Figure 7b) and Pasture 1 (Figure 13b). Since the method indicates exactly which samples are being confounded, together with their spatial location (longitude and latitude) and time, an expert can check high-resolution images or LUCC maps to identify the real label. Thus, future classification errors can be avoided.
In general, the sample patterns presented in the results section help significantly to distinguish the types of agriculture and pasture. Nevertheless, it is important to note that small variations can still be observed within the subclusters, for example, in Cropland 1. The different temporal patterns in each neuron of Cropland 1, and consequently in the samples assigned to them, indicate variation in soybean maturity. This type of variation can be captured through hierarchical clustering, using the MODIS sensor, if the number of subclusters provided by the dendrogram is higher than 10. Since 139 neurons represent the cropland samples, hierarchical clustering can provide up to 139 subclusters if the dendrogram presented in Figure 6 is cut at height = 0. In our case study, the C-index was applied to define the number of subclusters for cropland, but it could also be set manually according to the end user's goal.
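The C-index [52] used above compares the sum of within-cluster pairwise distances against the smallest and largest sums achievable with the same number of pairs; values near 0 indicate compact clusters, so the partition minimizing the index is chosen. A minimal sketch on toy points (not the paper's data or code):

```python
# Hedged sketch of the C-index (Hubert & Levin): S is the sum of
# within-cluster pairwise distances; S_min / S_max are the sums of the
# same number of smallest / largest distances over all pairs.
from itertools import combinations

def c_index(points, labels):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    pairs = list(combinations(range(len(points)), 2))
    all_d = sorted(dist(points[i], points[j]) for i, j in pairs)
    within = [dist(points[i], points[j])
              for i, j in pairs if labels[i] == labels[j]]
    n = len(within)
    s, s_min, s_max = sum(within), sum(all_d[:n]), sum(all_d[-n:])
    return (s - s_min) / (s_max - s_min)

# Two well-separated toy clusters: the correct partition scores 0.0.
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(c_index(pts, [0, 0, 1, 1]))  # 0.0: perfectly compact clusters
print(c_index(pts, [0, 1, 0, 1]))  # near 1: the worst possible grouping
```

In practice the index is evaluated for each candidate cut of the dendrogram, and the number of subclusters with the lowest value is kept.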
In contrast to soy–fallow, soy–corn and soy–cotton were distributed among different groups. The number of soy–corn and soy–fallow samples caused an imbalance in the dataset; for this reason, the differences between the soy–corn and soy–fallow neurons are quite small, and the most significant differences are attributed to more distant neurons in the SOM grid. In this way, considering the cut in the dendrogram, significant intra-class variations, such as those due to spatial location and agricultural calendar, are indicated in different subclusters. In addition, phenological metrics could be used as thresholds to distinguish the clusters and capture crop variations.
Although the neurons present intra-class variations, from soybean variability up to crop separability, it is important to highlight that a different choice of sensor may yield results different from those shown in this paper, owing to its spatial and temporal resolution. In addition, the choice of similarity measure used in the clustering methods can also omit or highlight these variations [65].

5. Conclusions

There is a lack of high-quality training samples in the remote sensing field, mainly for change monitoring over large areas and multiple years, due to the high intra-class variability of land use and cover types. One way to improve sample acquisition is through temporal patterns. Satellite image time series provide phenological and spectral information that can be considered during sample collection or labeling. However, for large datasets this can be impractical for human analysis; for this reason, support from computational methods is necessary.
The focus of this paper was to present the method and show how it can be applied. The proposed method combines SOM with hierarchical clustering to identify spatiotemporal patterns through exploratory analysis of training sample datasets. The method works well to infer subclasses using MODIS images at 250 m and 16 days, mainly for the cropland classes present in this dataset. From the subclusters, we were able to identify mislabeled samples and refine generic labels into subclasses. The method is not free of generating uncertain labels; however, the phenological, spectro-temporal, and spatial information of the satellite image time series samples, identified through the subcluster patterns, assists experts during the labeling process.
We explored cropland and pasture samples over the Cerrado biome in Brazil using MODIS because of their high variability across regions and years. An initiative called Brazil Data Cube [66] is producing Analysis-Ready Data (ARD) and multidimensional data cubes from medium-resolution satellite images for the entire Brazilian territory. Using these data cubes, we intend to apply the proposed method to 10–20 m, five-day Sentinel-2 image time series in order to separate all distinct types of land use and cover classes in the Cerrado.

6. Data and Code Availability

The method was implemented in R. The SOM step was implemented in the sits (Satellite Image Time Series) package, available on GitHub at https://github.com/e-sensing/sits (accessed on 30 December 2020). The dataset and the code used in this manuscript to generate the results presented in Section 3 are provided under the GNU General Public License v3.0 and are available on GitHub at https://github.com/lorenalves/ST_patterns_time_series (accessed on 30 December 2020).

Author Contributions

Conceptualization, L.A.S., K.F., G.C., M.P., R.Z.-M. and E.-W.A.; methodology, L.A.S., K.F., G.C., M.P., R.Z.-M. and E.-W.A.; software, L.A.S., K.F. and G.C.; validation, L.A.S., K.F., G.C. and M.P.; formal analysis, L.A.S., K.F., G.C. and M.P.; investigation, L.A.S., K.F. and G.C.; writing—original draft preparation, L.A.S., K.F. and M.P.; writing—review and editing, K.F., G.C., M.P., R.Z.-M. and E.-W.A.; project administration, K.F.; funding acquisition, K.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Amazon Fund through the financial collaboration of the Brazilian Development Bank (BNDES) and the Foundation for Science, Technology and Space Applications (FUNCATE), process 17.2.0536.1; Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES), finance code 001; Institutional Program for Internationalization (PrInt-CAPES).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

See Section 6 for Data and Code Availability.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gomez, C.; White, J.C.; Wulder, M.A. Optical Remotely Sensed Time Series Data for Land Cover Classification: A Review. ISPRS J. Photogramm. Remote. Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef]
  2. Woodcock, C.E.; Loveland, T.R.; Herold, M.; Bauer, M.E. Transitioning from Change Detection to Monitoring with Remote Sensing: A Paradigm Shift. Remote Sens. Environ. 2020, 238, 111558. [Google Scholar] [CrossRef]
  3. Pasquarella, V.J.; Holden, C.E.; Kaufman, L.; Woodcock, C.E. From Imagery to Ecology: Leveraging Time Series of All Available LANDSAT Observations to Map and Monitor Ecosystem State and Dynamics. Remote Sens. Ecol. Conserv. 2016, 2, 152–170. [Google Scholar] [CrossRef]
  4. Millard, K.; Richardson, M. On the importance of training data sample selection in random forest image classification: A case study in peatland ecosystem mapping. Remote Sens. 2015, 7, 8489–8515. [Google Scholar] [CrossRef]
  5. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Marais Sicre, C.; Dedieu, G. Effect of Training Class Label Noise on Classification Performances for Land Cover Mapping with Satellite Image Time Series. Remote Sens. 2017, 9, 173. [Google Scholar] [CrossRef]
  6. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of Machine-Learning Classification in Remote Sensing: An Applied Review. Int. J. Remote. Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  7. Pengra, B.W.; Stehman, S.V.; Horton, J.A.; Dockter, D.J.; Schroeder, T.A.; Yang, Z.; Cohen, W.B.; Healey, S.P.; Loveland, T.R. Quality control and assessment of interpreter consistency of annual land cover reference data in an operational national monitoring program. Remote Sens. Environ. 2020, 238, 111261. [Google Scholar] [CrossRef]
  8. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good Practices for Estimating Area and Assessing Accuracy of Land Change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  9. Elmes, A.; Alemohammad, H.; Avery, R.; Caylor, K.; Eastman, J.R.; Fishgold, L.; Friedl, M.A.; Jain, M.; Kohli, D.; Laso Bayas, J.C.; et al. Accounting for Training Data Error in Machine Learning Applied to Earth Observations. Remote Sens. 2020, 12, 1034. [Google Scholar] [CrossRef]
  10. Huang, H.; Wang, J.; Liu, C.; Liang, L.; Li, C.; Gong, P. The migration of training samples towards dynamic global land cover mapping. ISPRS J. Photogramm. Remote. Sens. 2020, 161, 27–36. [Google Scholar] [CrossRef]
  11. Viana, C.M.; Girão, I.; Rocha, J. Long-Term Satellite Image Time-Series for Land Use/Land Cover Change Detection Using Refined Open Source Data in a Rural Region. Remote Sens. 2019, 11, 1104. [Google Scholar] [CrossRef]
  12. Simoes, R.; Picoli, M.C.A.; Camara, G.; Maciel, A.; Santos, L.; Andrade, P.R.; Sánchez, A.; Ferreira, K.; Carvalho, A. Land Use and Cover Maps for Mato Grosso State in Brazil from 2001 to 2017. Sci. Data 2020, 7, 34. [Google Scholar] [CrossRef]
  13. Belgiu, M.; Bijker, W.; Csillik, O.; Stein, A. Phenology-based sample generation for supervised crop type classification. Int. J. Appl. Earth Obs. Geoinf. 2021, 95, 102264. [Google Scholar] [CrossRef]
  14. Demir, B.; Persello, C.; Bruzzone, L. Batch-mode active-learning methods for the interactive classification of remote sensing images. IEEE Trans. Geosci. Remote. Sens. 2010, 49, 1014–1031. [Google Scholar] [CrossRef]
  15. Tuia, D.; Volpi, M.; Copa, L.; Kanevski, M.; Munoz-Mari, J. A survey of active learning algorithms for supervised remote sensing image classification. IEEE J. Sel. Top. Signal Process. 2011, 5, 606–617. [Google Scholar] [CrossRef]
  16. Huang, X.; Weng, C.; Lu, Q.; Feng, T.; Zhang, L. Automatic labelling and selection of training samples for high-resolution remote sensing image classification over urban areas. Remote Sens. 2015, 7, 16024–16044. [Google Scholar] [CrossRef]
  17. Lu, Q.; Ma, Y.; Xia, G.S. Active learning for training sample selection in remote sensing image classification using spatial information. Remote Sens. Lett. 2017, 8, 1210–1219. [Google Scholar] [CrossRef]
  18. Solano-Correa, Y.T.; Bovolo, F.; Bruzzone, L. A Semi-Supervised Crop-Type Classification Based on Sentinel-2 NDVI Satellite Image Time Series And Phenological Parameters. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Yokohama, Japan, 28 July–2 August 2019; pp. 457–460. [Google Scholar] [CrossRef]
  19. Radoux, J.; Lamarche, C.; Van Bogaert, E.; Bontemps, S.; Brockmann, C.; Defourny, P. Automated training sample extraction for global land cover mapping. Remote Sens. 2014, 6, 3965–3987. [Google Scholar] [CrossRef]
  20. Hostert, P.; Griffiths, P.; van-der-Linden, S.; Pflugmacher, D. Time Series Analyses in a New Era of Optical Satellite Data. In Remote Sensing Time Series: Revealing Land Surface Dynamics; Kuenzer, C., Dech, S., Wagner, W., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 25–41. [Google Scholar]
  21. Comber, A.; Wulder, M. Considering Spatiotemporal Processes in Big Data Analysis: Insights from Remote Sensing of Land Cover and Land Use. Trans. GIS 2019, 23, 879–891. [Google Scholar] [CrossRef]
  22. Alencar, A.; Shimbo, J.Z.; Lenti, F.; Balzani Marques, C.; Zimbres, B.; Rosa, M.; Arruda, V.; Castro, I.; Fernandes Márcico Ribeiro, J.P.; Varela, V.; et al. Mapping Three Decades of Changes in the Brazilian Savanna Native Vegetation Using Landsat Data Processed in the Google Earth Engine Platform. Remote Sens. 2020, 12, 924. [Google Scholar] [CrossRef]
  23. Meroni, M.; d’Andrimont, R.; Vrieling, A.; Fasbender, D.; Lemoine, G.; Rembold, F.; Seguini, L.; Verhegghen, A. Comparing land surface phenology of major European crops as derived from SAR and multispectral data of Sentinel-1 and -2. Remote Sens. Environ. 2021, 253, 112232. [Google Scholar] [CrossRef] [PubMed]
  24. Liao, T.W. Clustering of Time Series Data: A Survey. Pattern Recognit. 2005, 38, 1857–1874. [Google Scholar] [CrossRef]
  25. Aghabozorgi, S.; Shirkhorshidi, A.S.; Wah, T.Y. Time-Series Clustering: A Decade Review. Inf. Syst. 2015, 53, 16–38. [Google Scholar] [CrossRef]
  26. Paparrizos, J.; Gravano, L. k-shape: Efficient and accurate clustering of time series. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, Melbourne, Australia, 31 May – 4 June 2015; pp. 1855–1870. [Google Scholar]
  27. Birant, D.; Kut, A. ST-DBSCAN: An algorithm for clustering spatial–temporal data. Data Knowl. Eng. 2007, 60, 208–221. [Google Scholar] [CrossRef]
  28. Andrienko, G.; Andrienko, N.; Bremm, S.; Schreck, T.; Von Landesberger, T.; Bak, P.; Keim, D. Space-in-Time and Time-in-Space Self-Organizing Maps for Exploring Spatiotemporal Patterns. Comput. Graph. Forum 2010, 29, 913–922. [Google Scholar] [CrossRef]
  29. Augustijn, E.W.; Zurita-Milla, R. Self-Organizing Maps as an Approach to Exploring Spatiotemporal Diffusion Patterns. Int. J. Health Geogr. 2013, 12, 60. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, H.; Zhan, Q.; Yang, C.; Wang, J. Characterizing the spatio-temporal pattern of land surface temperature through time series clustering: Based on the latent pattern and morphology. Remote Sens. 2018, 10, 654. [Google Scholar] [CrossRef]
  31. Qi, J.; Liu, H.; Liu, X.; Zhang, Y. Spatiotemporal evolution analysis of time-series land use change using self-organizing map to examine the zoning and scale effects. Comput. Environ. Urban Syst. 2019, 76, 11–23. [Google Scholar] [CrossRef]
  32. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote. Sens. 2017, 126, 225–244. [Google Scholar] [CrossRef]
  33. Wang, S.; Azzari, G.; Lobell, D.B. Crop type mapping without field-level labels: Random forest transfer and unsupervised clustering techniques. Remote Sens. Environ. 2019, 222, 303–317. [Google Scholar] [CrossRef]
  34. Hallac, D.; Vare, S.; Boyd, S.; Leskovec, J. Toeplitz inverse covariance-based clustering of multivariate time series data. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 215–223. [Google Scholar]
  35. Kohonen, T. The Self-Organizing Map. Proc. IEEE 1990, 78, 1464–1480. [Google Scholar] [CrossRef]
  36. Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data: An Introduction to Cluster Analysis; Wiley-Interscience: Hoboken, NJ, USA, 1990. [Google Scholar]
  37. Everitt, B.S.; Landau, S.; Leese, M.; Stahl, D. Cluster Analysis, 5th ed.; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2011. [Google Scholar]
  38. Zurita-Milla, R.; Van Gijsel, J.; Hamm, N.A.; Augustijn, P.; Vrieling, A. Exploring spatiotemporal phenological patterns and trajectories using self-organizing maps. IEEE Trans. Geosci. Remote. Sens. 2012, 51, 1914–1921. [Google Scholar] [CrossRef]
  39. Chen, I.T.; Chang, L.C.; Chang, F.J. Exploring the spatio-temporal interrelation between groundwater and surface water by using the self-organizing maps. J. Hydrol. 2018, 556, 131–142. [Google Scholar] [CrossRef]
  40. Guo, D.; Chen, J.; MacEachren, A.M.; Liao, K. A visualization system for space-time and multivariate patterns (vis-stamp). IEEE Trans. Vis. Comput. Graph. 2006, 12, 1461–1474. [Google Scholar] [PubMed]
  41. Astel, A.; Tsakovski, S.; Barbieri, P.; Simeonov, V. Comparison of self-organizing maps classification approach with cluster and principal components analysis for large environmental data sets. Water Res. 2007, 41, 4566–4578. [Google Scholar] [CrossRef] [PubMed]
  42. Liu, Y.; Weisberg, R.H. A review of self-organizing map applications in meteorology and oceanography. In Self-Organizing Maps: Applications and Novel Algorithm Design; InTech: Rijeka, Croatia, 2011; pp. 253–272. [Google Scholar]
  43. Dickie, A.; Magno, I.; Giampietro, J.; Dolginow, A. Challenges and Opportunities for Conservation, Agricultural Production, and Social Inclusion in the Cerrado Biome; Technical Report; CEA Consulting: San Francisco, CA, USA, 2016. [Google Scholar]
  44. Soterroni, A.C.; Ramos, F.M.; Mosnier, A.; Fargione, J.; Andrade, P.R.; Baumgarten, L.; Pirker, J.; Obersteiner, M.; Kraxner, F.; Camara, G.; et al. Expanding the Soy Moratorium to Brazil’s Cerrado. Sci. Adv. 2019, 5, eaav7336. [Google Scholar] [CrossRef]
  45. Klink, C.; Machado, R. Conservation of the Brazilian Cerrado. Conserv. Biol. 2005, 19, 707–713. [Google Scholar] [CrossRef]
  46. Ansari, M.Y.; Ahmad, A.; Khan, S.S.; Bhushan, G.; Mainuddin. Spatiotemporal clustering: A review. Artif. Intell. Rev. 2019, 53, 2381–2423. [Google Scholar] [CrossRef]
  47. Wu, X.; Cheng, C.; Zurita-Milla, R.; Song, C. An overview of clustering methods for geo-referenced time series: From one-way clustering to co- and tri-clustering. Int. J. Geogr. Inf. Sci. 2020, 34, 1822–1848. [Google Scholar] [CrossRef]
  48. Johnson, S.C. Hierarchical clustering schemes. Psychometrika 1967, 32, 241–254. [Google Scholar] [CrossRef]
  49. Natita, W.; Wiboonsak, W.; Dusadee, S. Appropriate Learning Rate and Neighborhood Function of Self-Organizing Map (SOM) for Specific Humidity Pattern Classification over Southern Thailand. Int. J. Model. Optim. 2016, 6. [Google Scholar] [CrossRef]
  50. Kohonen, T. Essentials of the Self-Organizing Map. Neural Netw. 2013, 37, 52–65. [Google Scholar] [CrossRef]
  51. Kohonen, T.; Kaski, S.; Lagus, K.; Salojarvi, J.; Honkela, J.; Paatero, V.; Saarela, A. Self organization of a massive document collection. IEEE Trans. Neural Netw. 2000, 11, 574–585. [Google Scholar] [CrossRef]
  52. Hubert, L.J.; Levin, J.R. A general statistical framework for assessing categorical clustering in free recall. Psychol. Bull. 1976, 83, 1072. [Google Scholar] [CrossRef]
  53. Arlot, S.; Celisse, A. A survey of cross-validation procedures for model selection. Stat. Surv. 2010, 4, 40–79. [Google Scholar] [CrossRef]
  54. Bengio, Y.; Grandvalet, Y. No Unbiased Estimator of the Variance of K-Fold Cross-Validation. J. Mach. Learn. Res. 2004, 5, 1089–1105. [Google Scholar]
  55. Belgiu, M.; Dragut, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote. Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  56. Santos, L.; Ferreira, K.; Picoli, M.; Camara, G. Self-Organizing Maps in Earth Observation Data Cubes Analysis. In Advances in Self-Organizing Maps, Learning Vector Quantization, Clustering and Data Visualization; Vellido, A., Gibert, K., Angulo, C., Martin, J., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 70–79. [Google Scholar]
  57. Vesanto, J.; Alhoniemi, E. Clustering of the self-organizing map. IEEE Trans. Neural Networks 2000, 11, 586–600. [Google Scholar] [CrossRef] [PubMed]
  58. Sanches, I.D.; Feitosa, R.Q.; Achanccaray, P.; Montibeller, B.; Luiz, A.J.B.; Soares, M.D.; Prudente, V.H.R.; Vieira, D.C.; Maurano, L.E.P. Lem Benchmark Database for Tropical Agricultural Remote Sensing Application. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2018, XLII-1, 387–392. [Google Scholar] [CrossRef]
  59. Zito, R.K.; Filho, O.L.d.M.; Pereira, M.J.Z.; Meyer, M.C.; Hirose, E.; Nicoli, C.M.L.; Costa, S.V.d.; de Neto, C.D.M.; Nunes, J., Jr.; Vieira, N.E.; et al. Cultivares de soja: Macrorregiões 3, 4 e 5 Goiás e Região Central do Brasil. 2018. Available online: https://www.embrapa.br/en/busca-de-publicacoes/-/publicacao/1067791/cultivares-de-soja-macrorregioes-3-4-e-5-goias-e-regiao-central-do-brasil (accessed on 15 July 2020).
  60. Abrahão, G.M.; Costa, M.H. Evolution of rain and photoperiod limitations on the soybean growing season in Brazil: The rise (and possible fall) of double-cropping systems. Agric. For. Meteorol. 2018, 256–257, 32–45. [Google Scholar] [CrossRef]
  61. Alonso, M.P.; Moraes, E.; Pereira, D.H.; Pina, D.S.; Mombach, M.A.; Hoffmann, A.; de Moura Gimenez, B.; Sanson, R.M.M. Pearl millet grain for beef cattle in crop-livestock integration system: Intake and digestibility. Semin. Cienc. Agrar. 2017, 38, 1471–1482. [Google Scholar] [CrossRef]
  62. Embrapa. O Cerrado. 2020. Available online: http://www.cpac.embrapa.br/unidade/ocerrado (accessed on 14 September 2020).
  63. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective, 2nd ed.; Pearson: London, UK, 2009. [Google Scholar]
  64. Picoli, M.; Camara, G.; Sanches, I.; Simoes, R.; Carvalho, A.; Maciel, A.; Coutinho, A.; Esquerdo, J.; Antunes, J.; Begotti, R.A.; et al. Big Earth Observation Time Series Analysis for Monitoring Brazilian Agriculture. ISPRS J. Photogramm. Remote. Sens. 2018, 145, 328–339. [Google Scholar] [CrossRef]
  65. Ferreira, K.; Santos, L.; Picoli, M. Evaluating Distance Measures for Image Time Series Clustering in Land Use and Cover Monitoring. In MACLEAN 2019 MAChine Learning for EArth ObservatioN Workshop; CEUR-WS: Würzburg, Germany, 2019. [Google Scholar]
  66. Ferreira, K.R.; Queiroz, G.R.; Vinhas, L.; Marujo, R.F.B.; Simoes, R.E.O.; Picoli, M.C.A.; Camara, G.; Cartaxo, R.; Gomes, V.C.F.; Santos, L.A.; et al. Earth Observation Data Cubes for Brazil: Requirements, Methodology and Products. Remote Sens. 2020, 12, 4033. [Google Scholar] [CrossRef]
Figure 1. Land use sample dataset of the Cerrado biome.
Figure 2. The method for exploratory analysis using time series based on clustering methods.
Figure 3. Clustering output. The red lines in the self-organizing map (SOM) grid represent the subclusters generated from the neurons labeled as class Z. Each sample of the dataset is assigned the id and label of its neuron and the id and label of its subcluster.
Figure 4. SOM grids. Each line inside a neuron is a weight vector generated by SOM to represent a set of samples in low-dimensional space.
Figure 5. Mapping samples onto the SOM grid. Each dot represents a sample.
Figure 6. Dendrogram partitioned into ten groups for cropland.
Figure 7. Clusters of cropland. (a) SOM grid with subclusters of cropland. (b) Weight vectors of each subcluster. Each line represents a neuron.
Figure 8. Clusters of cropland. Spatial location of the samples in each cluster.
Figure 9. The cluster of soy–fallow: subclusters of Cropland 1. (a) SOM grid with soy–fallow subclusters. (b) Spatial location of the samples assigned to Cropland 1. (c) Changes in the Moderate Resolution Imaging Spectroradiometer (MODIS) time series of point (−12.8875, −45.8769) over 4 years.
Figure 10. The cluster of soy–corn: subclusters of Croplands 7, 9, and 10. (a) SOM grid with soy–corn subclusters. (b) Weight vectors of each neuron. (c) Spatial location of the samples assigned to neurons 106, 189, and 195. (d) Normalized Difference Vegetation Index (NDVI) time series and the number of samples assigned to these neurons of soy–corn.
Figure 11. Cluster of Cropland 6. (a) SOM grid. (b) Weight vectors of the neurons that belong to Cropland 6. (c) Spatial location of the samples assigned to neurons 103, 61, and 75. (d) NDVI time series and the number of samples assigned to these neurons of Cropland 6. The blue lines are the raw NDVI time series, the thick line is the median, and the yellow lines represent the 25% and 75% quantiles.
Figure 12. Dendrogram for pasture separated into two clusters.
Figure 13. Cluster of pasture. (a) SOM grid with subclusters of pasture. (b) Weight vectors of each subcluster. Each line represents a neuron. (c) Spatial subclusters.
Figure 14. Samples originally labeled as cropland that were assigned to clusters of pasture.
Table 1. Number of training samples by cluster.
Cluster                  Count   Frequency   Class
1. Cropland_Pasture         41       0.26%   Cropland
2. Cropland_1              563       3.5%    Soy–Fallow
3. Cropland_2              348       2.2%    Fallow–Cotton
4. Cropland_3_4_6_8       3866      24.5%    Soy–Cotton
5. Cropland_5              429       2.8%    Millet–Cotton
6. Cropland_6_7_9_10      5331      34.5%    Soy–Corn
7. Pasture_1                90       0.58%   Pasture_1
8. Pasture_2              4926      31.2%    Pasture_2
Table 2. Confusion matrix—the cropland samples mapped in pasture were kept in the dataset.
12345678UA
1. Cropland50000000100%
2. Soy–Corn2353723363176197%
3. Soy–Cotton0503912271101197%
4. Millet–Cotton014339001098%
5. Fallow–Cotton612033400097%
6. Soy–Fallow2130015520097%
7. Pasture_256301449786198%
8. Pasture_1000000027100%
PA12%98%98%79%95%98%99%30%98%
Table 3. Confusion matrix—the cropland samples mapped in pasture were removed from the dataset.
1234567UA
1. Soy–Corn53653168176197%
2. Soy–Cotton58391328901097%
3. Millet–Cotton13333001098%
4. Fallow–Cotton13033600098%
5. Soy–Fallow130015510097%
6. Pasture_25401549186198%
7. Pasture_100000028100%
PA98%98%77%96%97%99%31%97%
Table 4. Confusion matrix—original Dataset.
            Pasture   Cropland    UA
Pasture        4980         17   99%
Cropland         36      10761   99%
PA              99%        99%   99%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.