2014, 6(7), 5976-5994;

Change Detection Algorithm for the Production of Land Cover Change Maps over the European Union Countries
Space Research Centre of the Polish Academy of Sciences, Bartycka 18A 00-716 Warsaw, Poland
Institute of Geodesy and Cartography, Modzelewskiego 27, 02-679 Warsaw, Poland
Author to whom correspondence should be addressed.
Received: 31 March 2014; in revised form: 6 June 2014 / Accepted: 12 June 2014 / Published: 27 June 2014


Contemporary satellite Earth Observation systems provide growing amounts of very high spatial resolution data that can be used in various applications. An increasing number of sensors make it possible to monitor selected areas in great detail. However, in order to handle the volume of data, a high level of automation is required. The semi-automatic change detection methodology described in this paper was developed to annually update land cover maps prepared in the context of the Geoland2 project. The proposed algorithm was tailored to work with different very high spatial resolution images acquired over different European landscapes. The methodology is a fusion of various change detection methods: (1) layer arithmetic; (2) vegetation index (NDVI) differencing; (3) texture calculation; and (4) methods based on canonical correlation analysis (multivariate alteration detection (MAD)). User intervention during the production of the change map is limited to the selection of the input data, the size of initial segments and, optionally, the threshold for texture classification. To achieve a high level of automation, statistical thresholds were applied in most of the processing steps. Tests showed an overall change recognition accuracy of 89%, and the change type classification methodology can accurately classify transitions between classes.
change detection; land cover; very high resolution images; MAD; OBIA; automatic; Europe

1. Introduction

International and national institutions require information about the landscape and the environment to support decision-makers. Earth observation (EO) techniques are advanced tools that can meet this requirement. As the landscape is not static, but is constantly changing, EO should focus not only on mapping the static environment, but, in general, on detecting changes in order to meet the needs of contemporary users.
The algorithm described in this paper was designed in the context of the Seasonal and Annual Change Monitoring (SATChMo) project. SATChMo is one of three Core Mapping Services (CMS) of the Copernicus project, Geoland2 [1]. SATChMo is focused on land cover (LC) classification and change detection based on annual and seasonal time-scales. The main SATChMo products are generic LC maps. The production of maps on the European scale is a long process; hence, to update them on a yearly basis, a convenient automated tool is required to speed up the process and reduce the costs. The main goal of the presented study is to develop an algorithm that can rapidly detect and classify changes. User interaction should be minimal and limited to the inspection of classified changes in the final phase of map production. Before the algorithm was developed, the SATChMo team reviewed the most commonly used change detection techniques, such as post classification comparison, the grid-based method, support vector machines, image entropy and arithmetical and statistical methods. Preliminary tests were performed on test sites to evaluate whether it was possible to develop a single methodology for all image pairs contained in the project’s database and to assess the usefulness of different methods in meeting the needs of the project.
The basic premise of remote sensing change detection analysis is that changes in LC result in changes in radiance values, and these values are significantly higher than changes caused by other factors [2], for example atmospheric fluctuations or the differences in Sun angle observed during acquisitions. In fact, in many cases, change detection has to deal with large differences in radiance within the same LC class. This is mostly caused by seasonality (changes in vegetation phenology), but is also influenced by variable atmospheric conditions, such as clouds and shadows. Visual interpretation of satellite images requires expert knowledge to detect and name all of these changes. While experienced operators may be very accurate, the drawback is the amount of time needed to generate the change layer. A widely-used quantitative change detection method [3] (and also one of the easiest to use) is post-classification comparison. Its two main advantages are that it does not require radiometric normalization or atmospheric correction. Nonetheless, accuracy depends on pre- and post-classifications, and every error in the individual maps will be present in the final change detection product [3].
The most common methods are based on mathematical operators, such as the subtraction of the original or transformed images (indices, ratios, etc.). Coppin et al. [4] describe this group as univariate image differencing. A good example is the subtraction of NDVI indices, computed at two different times. NDVI is widely used for change detection concerning all aspects of environmental monitoring on different scales. Lunetta et al. [5] used MODIS NDVI time series data for the automatic monitoring of changes in vegetation. Yuan and Elvidge [6] investigated the usability of NDVI for Landsat-based change detection. Ardila et al. [7] used NDVI as one of the factors for the detection of changes in urban tree structure on very high resolution (VHR) images. The main disadvantage is that using image differencing methods alone leads to errors caused by variation in spectral radiance and spectral similarities between LC classes.
Another group of change detection methods that have proved useful are bi-temporal linear data transformations. The most popular is principal component analysis (PCA), which has been applied in many studies [8–10]. As an extension of the canonical correlation analysis, Nielsen [11] proposed multivariate alteration detection (MAD). An advantage of this method, compared to PCA, is its insensitivity to differences in sensor gain or atmospheric correction schemes [11]. It is also a good solution for detecting changes on a combination of different sensors; hence, it is especially useful in historical change detection studies.
Traditionally, change detection methods are based on an evaluation of individual pixels. However, the characteristics of VHR data and varying user needs mean that per-pixel approaches cannot meet the requirement of fast and highly accurate mapping [12]. Object-based image analysis (OBIA) is one of the solutions to the limitations of pixel-based methods. The principles of the OBIA (often also referred to as GEOBIA (geographic object-based image analysis)) paradigm were given in [13,14] and more recently in [12]. The basic premise is to use meaningful image objects instead of pixels alone. Image objects consist of pixels grouped on the basis of homogeneity during the segmentation process. Objects can be analyzed on the basis of a variety of distinctive features that are unavailable in a per-pixel approach, such as topological relationships, geometry, size, etc. An extensive review of object-based change detection covering the issues, motivations, challenges and examples of algorithms was given by Chen et al. [15] and Hussain et al. [16].
The rationale for our methodology was to update the LC maps prepared in the SATChMo project. Other anticipated benefits were the ability to generate change statistics based on the images acquired for the project. Specifically, we wanted to address two change detection problems. The first was the accurate spatial identification of changed areas, and the second was the classification of transitions between LC classes. Change must be interpreted and its significance explained before it is finally inserted in a map-updating process. In order to achieve this, we need a precise definition of what is considered as a change in a particular case and a list of all of the LC transitions for which we are looking.
In the case of remote observations of the Earth, change can be assessed as a transition from one LC class to another, a boundary change of the same class of object or as a change within a specific LC class that influences the reflectance registered by the sensor and represents land conditions. In this paper, we limit the task to permanent changes that are associated with transitions between LC types. Changes that are of interest must be durable, i.e., they are not caused by change in the class condition (e.g., caused by the phenology fluctuations during the growing season).

2. Materials and Methods

2.1. Study Area and Data

Our work is based on satellite images collected in the frame of the Area Frame Sampling Europe (AFS Europe) task under the SATChMo project. The goal of AFS Europe was to develop a statistical methodology for sample selection in order to provide LC statistics for European Union countries. There are 114 AFS sites, each 15 by 15 km in size. AFS collects data from all of the European environmental zones, from the Mediterranean in the south, through the temperate regions, to the Boreal in the north. The dataset for each site consists of images from 2009 (KOMPSAT2), 2010 (KOMPSAT2 and FORMOSAT2) and 2011 (GeoEye, QuickBird, IKONOS, FORMOSAT2). A detailed description of the AFS Europe service is given in Banaszkiewicz et al. [17]. The above list of sensors indicates that some image pairs used for change detection have different spatial resolutions; from 0.5 m to 2 m in the panchromatic channel and from 1 m to 8 m in multispectral channels. Each sensor provides four standard spectral channels: blue, green, red and infra-red. Townshend et al. [18] proved that even subpixel misregistration can have an impact on change detection and suggested that registration accuracy should be taken into account for any method. Hence, the first step of our data pre-processing was image-to-image co-registration. This helped to ensure that corresponding pixels on both images represented the same geographic location, and misclassifications caused by shifts of unchanged object boundaries were reduced. Because images from different years do not fully overlap, only the overlapping part was clipped and used for further processing.
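The overlap-clipping step can be sketched as a bounding-box intersection that is then converted to array indices. The sketch below is illustrative, not the project's implementation; it assumes both images are already co-registered in the same projection, with known extents, a north-up geotransform and a common pixel size.

```python
def overlap_window(bounds1, bounds2, origin, pixel_size):
    """Intersect two georeferenced extents and convert the overlap into
    array indices (row_start, row_stop, col_start, col_stop) for the
    image whose upper-left corner is at `origin`.

    bounds: (minx, miny, maxx, maxy) in map units.
    origin: (x, y) of the image's upper-left corner.
    pixel_size: ground sampling distance in map units.
    """
    # Intersection of the two extents
    minx = max(bounds1[0], bounds2[0])
    miny = max(bounds1[1], bounds2[1])
    maxx = min(bounds1[2], bounds2[2])
    maxy = min(bounds1[3], bounds2[3])
    if minx >= maxx or miny >= maxy:
        raise ValueError("images do not overlap")
    # Map coordinates to pixel/line indices (rows count down from maxy)
    col_start = int(round((minx - origin[0]) / pixel_size))
    col_stop = int(round((maxx - origin[0]) / pixel_size))
    row_start = int(round((origin[1] - maxy) / pixel_size))
    row_stop = int(round((origin[1] - miny) / pixel_size))
    return row_start, row_stop, col_start, col_stop
```

The same window, expressed in each image's own pixel grid, can then be used to slice both arrays so that only the common footprint enters the change detection.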
The main SATChMo products are generic LC maps prepared for each European AFS site for the base year of 2009. All maps produced for AFS Europe are freely available at the project website [1]. They represent ten generic LC types (urban, bare ground, water, snow and ice, agriculture, forest, sparse woody vegetation, grassland, other vegetation, shadows and clouds). The classification algorithm was developed as a generic tool for processing VHR images. As most of the images acquired for SATChMo were KOMPSAT2, the algorithm was initially tailored to this kind of data (i.e., spatial resolution) and later extended to include FORMOSAT2 images. The algorithm also works well with other VHR images. It is based on OBIA and uses the concept of texture classification as a basic premise. A full description of this is given in Lewinski et al. [19].
As the area of interest is extremely wide, for the purposes of this paper, detailed results are only presented for the test site, which is shown in Figure 1. It covers the Warsaw (Poland) metropolitan area, for which two IKONOS images from the years 2002 and 2008 were acquired. It is an area where there have been many LC changes, which are mostly related to urban growth (a decrease in agricultural areas, deforestation, etc.). Natural processes caused by changes in the Vistula river banks can also be observed. For completeness, a summary of the change detection activity over AFS Europe sites is also presented.

2.2. Change Detection Algorithm

2.2.1. Description of the Land Cover Classes

Expected transitions include nearly all combinations of the 10 generic LC classes. Nevertheless, some are not probable (e.g., a change from urban to forest) and can be omitted by the algorithm. However, some changes that are improbable within one year may occur over a longer time span (e.g., most changes to the forest class). Since the method was developed on images from the years 2002 and 2008, these changes are also taken into account. A list of all change types that may take place on an annual basis is given in Table 1.
The list of land cover classes and a short specification of each class is presented below:
Urban/artificial: all areas covered by buildings, roads and artificially-surfaced areas. Includes residential, industrial, commercial and transport-related structures and surfaces (road and rail networks and associated land, airports), port areas, as well as rural settlements, excluding green areas within urban areas (parks, fields, etc.).
Bare ground: areas of permanently non-cultivated ground. Includes mineral extraction sites, building site wasteland, bare rocks, sand dunes, beaches, dry lake beds, intertidal mud and very sparsely vegetated areas where 90% of the land surface is covered by rock.
Water: includes both water courses and water bodies of both natural and artificial origin.
Snow and ice: land covered by glaciers or permanent snowfields.
Agricultural areas: a heterogeneous agricultural class, which includes semi-permanent and non-permanent cereal, legume and root crops, as well as permanent crops, such as vineyards and orchards. It may show evidence of management in the form of irrigation infrastructure or ploughing furrows. The class also includes tree plantations (olives, almonds, oranges, etc.), managed grasslands and pastures.
Forest/woodland/tree: vegetation cover formed from closely-spaced deciduous and/or coniferous tree species.
Sparse woody vegetation: includes areas of transitional woodland/shrub; young broad-leafed and coniferous wood species, with herbaceous vegetation and dispersed solitary trees, together with areas of degenerative forest. Includes forest cut zones, as well as afforestation areas.
Grassland: natural grasslands with herbaceous vegetation, developed under minimal human interference (not mowed, fertilized or stimulated by chemicals that might influence the production of biomass). These areas may include very sparse scattered trees and shrubs.
Other vegetation: this class contains all remaining vegetation types that are not considered in the class description provided above (e.g., moorlands, reed beds, wetlands).
Clouds, voids, etc.: includes clouds and deep shadows where LC interpretation is not possible.
Classification can be problematic, as the definitions are broad and the automatic classification process is based solely on satellite images without the use of external data. The conclusions drawn during the development of the classification algorithm (Lewinski, Bochenek and Turlej [19]) were useful in understanding the issues related to change detection:
Complex definitions of particular classes: class definitions were highly generalized. Classes such as agricultural areas contain LC forms that have different spectral and textural characteristics, e.g., green fields, brown fields and tree plantations.
Wide area of interest: the same class in different regions of Europe may have very different spectral and textural characteristics, e.g., the spatial arrangement of built-up areas may vary significantly, depending on the country and landscape.
Lack of consistency in the date of image acquisition: some classes (e.g., agricultural areas) may vary significantly, depending on when the image was registered (e.g., brown or green fields).

2.2.2. Evaluation of the Spectral Characteristics of Land Cover Changes

Before the algorithm development phase, we investigated the characteristics of LC changes. In order to do this, six representative pairs of KOMPSAT-2 images from different regions of Europe were chosen for a visual interpretation of changes. The test sites were in France, Germany, Spain, Italy, Latvia and Poland. The fourteen types of LC changes that were identified at these test sites are presented in the first column of Table 2.
The next step was to perform a separability analysis of these classes. As the input features, NDVI, texture measures (SIGMA filter described in [20] and [21]), MAD components related to multispectral bands (MAD1, MAD2, MAD3, MAD4 and MAD5), values of all bands and their differences were taken into account. These features were used for the calculation of Feature Combination 1 and Feature Combination 2, which performed best in class discrimination.
Feature Combination 1 = 0.55 × NDVI_DIFFERENCE + 0.25 × SIGMA_DIFFERENCE − 0.78 × MAD4 − 0.82 × MAD5
Feature Combination 2 = 0.35 × NDVI_DIFFERENCE + 0.4 × SIGMA_DIFFERENCE − 0.37 × MAD4 − 0.93 × MAD5
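Applying such a linear feature combination per pixel is a straightforward weighted sum, which can be sketched as follows. Note that the signs of the MAD coefficients are taken as subtractions, as reconstructed from the weighted-difference form of the original formulas; this is an assumption, not a verified transcription.

```python
import numpy as np

def feature_combination(ndvi_diff, sigma_diff, mad4, mad5, weights):
    """Per-pixel linear combination of change features.
    weights: (w_ndvi, w_sigma, w_mad4, w_mad5)."""
    w1, w2, w3, w4 = weights
    return w1 * ndvi_diff + w2 * sigma_diff + w3 * mad4 + w4 * mad5

# Coefficients as reported for Feature Combinations 1 and 2
# (MAD term signs assumed negative)
FC1_WEIGHTS = (0.55, 0.25, -0.78, -0.82)
FC2_WEIGHTS = (0.35, 0.40, -0.37, -0.93)
```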
The discriminant analysis, performed using STATISTICA software for all 14 LC change classes, is presented in graphical form in Figure 2.
Figure 2 shows that classes cannot be grouped into separate clusters, and there is a lot of overlap in the feature space. This means that automatic recognition of transitions between LC classes is not possible using a simple classification approach. A revision of the physical characteristics of LC changes led us to the conclusion that they can be grouped according to changes in the amount of vegetation (indicated by the variability of NDVI values) and artificial surfaces. Change types (direct transitions between LC classes) were aggregated into three broad groups, called change directions:
Revegetation: LC transitions where the amount of vegetation, as indicated by the NDVI, increases distinctly over time.
Devegetation: LC transitions where the amount of vegetation, as indicated by the NDVI, decreases distinctly or disappears entirely over time, but which are not connected with the appearance of new artificial surfaces.
Artificialization: LC transitions connected with the appearance of new artificial surfaces, including buildings, roads and construction sites.
A list of change types and directions is presented in Table 2. The separability analysis was performed once again for these groups. The aggregation process led to a distinct improvement in the results. We later found that the three change directions can be differentiated with acceptable accuracy, using just two features: NDVI (Figure 3) and texture measure (Figure 4).
As Figure 3 shows, the revegetation direction can be determined using just the NDVI. The remaining classes (devegetation and artificialization) overlap, as they cause a similar decrease in the amount of vegetation. In this case, better class separation is observed using texture measure.
Experiences from the classification procedure applied in SATChMo for the generation of classification maps show that texture differentiates well between the urban/forest and agriculture/water LC classes [21]. Here, we assume that changes between these groups of classes are reflected in a change of texture. For example, a change from forest to agriculture should result in a change from high to low values of texture. On the other hand, in most observed cases, the appearance of new built-up areas is similar to the devegetation process (a decrease in the amount of vegetation), but in the case of texture measures, the final result of the process is different. Usually, new built-up areas lead to an increase in image texture, whereas the devegetation process, like clear-cutting, causes a decrease in the image texture. Although there are exceptions to these assumptions, in most cases, they are valid and can be implemented in the classification process.

2.2.3. Description of the Algorithm

Our algorithm is a fusion of various change detection methods that produce a mask of classified changes. The workflow is presented in Figure 5.
The inputs to the algorithm are VHR images from time T1 and T2, generic LC maps produced from image T1 and MAD components. The MAD method is based on a canonical correlations analysis. The basic concept is to calculate canonical variates (for images covering the same region) and to subtract them from one another. Nielsen [11] defines the MAD transformation as:
\[
\begin{bmatrix} X \\ Y \end{bmatrix} \longrightarrow
\begin{bmatrix} a_p^T X - b_p^T Y \\ \vdots \\ a_1^T X - b_1^T Y \end{bmatrix}
\]
where X and Y are images written as vectors and a and b define coefficients from a standard canonical analysis. This procedure combines all of the change information into a single image. As an improvement to standard MAD transformation, Nielsen [22] introduced an iterative process in which standard MAD variates are calculated at the beginning, and in the following iterations, weights are added to the observations. The assumption is that larger weights are assigned to observations with small MAD variates. The result of the iteratively reweighted MAD algorithm is four layers of MAD variates and a layer with the chi-square distribution that determines the weights assigned to observations.
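The standard (non-iterated) MAD transformation can be sketched with NumPy via a canonical correlation analysis. The sketch below assumes two co-registered images flattened to (pixels × bands) arrays with the same number of bands; it solves the canonical correlation problem by whitening each image's band covariance and taking an SVD of the cross-covariance. It is a minimal illustration, not the project's implementation (which used Canty's IDL/ENVI extension).

```python
import numpy as np

def mad_variates(X, Y):
    """Standard (non-iterated) MAD transformation.

    X, Y: co-registered images flattened to shape (n_pixels, n_bands).
    Returns the MAD variates (differences of paired canonical variates,
    reordered so the first column has the lowest canonical correlation,
    i.e. carries the most change) and the canonical correlations.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each image's bands, then take the SVD of the whitened
    # cross-covariance; this solves the canonical correlation problem.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    K = np.linalg.inv(Lx) @ Sxy @ np.linalg.inv(Ly).T
    U, rho, Vt = np.linalg.svd(K)
    A = np.linalg.inv(Lx).T @ U     # canonical coefficients a_i for X
    B = np.linalg.inv(Ly).T @ Vt.T  # canonical coefficients b_i for Y
    M = Xc @ A - Yc @ B             # paired differences a_i'X - b_i'Y
    return M[:, ::-1], rho[::-1]    # lowest correlation first
```

By construction, the MAD variates are mutually uncorrelated and the variance of the i-th variate is 2(1 − ρ_i), where ρ_i is the corresponding canonical correlation; in the iteratively reweighted version, the sum of squared variates scaled by these variances follows a chi-square distribution and supplies the observation weights.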
The IDL extension for ENVI, prepared by Canty [23], was used for the MAD calculation. The data is imported into an object-based environment using eCognition software; hence, the first step of the processing is segmentation. This is done separately for images T1 and T2 using a multiresolution segmentation algorithm described by Baatz and Schape [24]. For the segmentation process, red and near-infrared bands are used. The scale parameter is defined by the user, by trial-and-error, independently, for images T1 and T2. It may differ depending on the type of landscape and spatial resolution of the input data. The general idea is to create objects that are as big as possible and, at the same time, as homogenous as possible. Although a number of automatic or semi-automatic solutions have been proposed [25–27], most of them are not sufficiently mature for seamless integration into the workflow. The most promising method is based on the computation of local variances, which is implemented in the Estimation of Scale Parameter tool developed by [27] and its automation and implementation in eCognition [28]. Other parameters (i.e., shape and compactness) are fixed; they provide a reasonable delineation of the various land cover types of areas spread across Europe and cannot be changed by the operator. Next, texture layers are computed on the basis of panchromatic images using the Sigma function. At this stage, the operator divides objects in images T1 and T2 into high and low texture classes. This step can also be done automatically by evaluating the LC map of image T1. The intersection of the segmentation layers is then found. The result is a layer of segments that represent unchanged objects (those that are similar in T1 and T2) and objects that delineate potential changes. At the same time, a map of changes from high to low texture, low to high texture and no texture change is prepared.
One of the methods used commonly for change detection is the observation of changes in NDVI. This is a rather simple method that gives very good results in particular cases. First, NDVI layers are calculated for times T1 and T2. Next, the NDVI difference is calculated. When we investigate the Gaussian distribution of NDVI difference values, we can observe that the biggest changes are located below and above one standard deviation. Using the standard deviation, objects can be assigned to classes of increased, decreased or constant NDVI. This approach makes it possible to compare changes in objects given an overall distribution of NDVI differences within the scene and to only classify those that are most significant. An additional advantage of using the standard deviation is that the algorithm is more automatic and the user does not need to specify any threshold.
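The standard-deviation thresholding of the NDVI difference can be sketched as follows. This is a minimal illustration of the idea described above: pixels (or objects) whose NDVI difference lies more than one standard deviation from the scene mean are labeled as increased or decreased, so no manual threshold is needed.

```python
import numpy as np

def classify_ndvi_change(ndvi_t1, ndvi_t2):
    """Label each pixel as NDVI increase (+1), decrease (-1) or
    constant (0), using one standard deviation of the scene-wide
    NDVI difference as the threshold."""
    diff = ndvi_t2 - ndvi_t1
    mu, sigma = diff.mean(), diff.std()
    labels = np.zeros(diff.shape, dtype=np.int8)
    labels[diff > mu + sigma] = 1   # significant NDVI increase
    labels[diff < mu - sigma] = -1  # significant NDVI decrease
    return labels
```

Because the thresholds derive from the scene's own difference distribution, only the most significant changes relative to that scene are flagged, which is what makes the step automatic.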
The third layer used in the change detection workflow is the classification of MAD layers. MAD components represent different categories of changes. As relevant changes in man-made structures are generally uncorrelated with seasonal vegetation changes or statistical image noise, they are usually concentrated in specific MAD components [29]. The implemented algorithm only uses those values that represent the most significant changes. From each MAD layer, we select objects with the most extreme values (below or above the standard deviation) and produce a change and no change mask. However, this also contains irrelevant changes (e.g., of a temporal character or a change of class condition), and the next step is to filter them out.
To do this, we use an approach that is similar to decision trees. We define a number of rule sets that describe different types of changes. Each rule set is related to a specific type of change that can occur in land cover. For example, the rule for artificialization of agricultural areas selects objects where texture was classified as increased, NDVI as decreased and the MAD component indicates change. As mentioned above, we are not interested in changes in vegetation phenology, so any fluctuations in the agriculture class, classified as a change by the MAD procedure, should be filtered out. This class remains classified as “no change” in texture for all phenology seasons, even if spectral values change, which helps to minimize the detection of changes that are the result of a natural variability in the phenology of vegetation.
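A decision-tree-like rule set of this kind can be sketched as a cascade of conditions on the three classified layers. The rules below are hypothetical examples, not the project's actual rule set; only the artificialization-of-agriculture rule follows the example given in the text.

```python
def classify_change_type(texture, ndvi, mad_change):
    """Illustrative rule set mapping the three classified layers to a
    change type. texture: 'increased' | 'decreased' | 'no change';
    ndvi: 'increased' | 'decreased' | 'constant'; mad_change: bool."""
    # Filter: no MAD-flagged change, or texture unchanged (e.g. seasonal
    # fluctuation within the agriculture class), means no real change.
    if not mad_change or texture == "no change":
        return "no change"
    # Rule from the text: artificialization of agricultural areas
    if texture == "increased" and ndvi == "decreased":
        return "artificialization of agriculture"
    # Hypothetical further rules
    if texture == "decreased" and ndvi == "decreased":
        return "devegetation (e.g., clear-cut)"
    if ndvi == "increased":
        return "revegetation"
    return "unclassified change"
```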
In parallel, information about the direction of change is created. The process begins with scene segmentation, based on the original image data (multispectral and panchromatic) and the generic LC maps produced on the basis of image T1. After obtaining homogeneous objects, a test of the difference in the amount of vegetation is performed. This eliminates from the analysis objects with small differences in NDVI. Next, residual objects are divided into two groups: revegetation and devegetation. The criteria for these two groups are also based on the NDVI. This part of the algorithm includes the recognition of areas where the amount of vegetation has changed over time even if it is caused by phenology. Next, in order to identify the artificialization group, texture analysis is applied. As a result of this two-stage procedure, the final map containing three classes of change direction is produced.
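The two-stage direction assignment for a single object can be sketched as below. The magnitude threshold `min_diff` is hypothetical (the text does not specify a value); the staging follows the description: small NDVI differences are eliminated first, the rest are split by NDVI sign, and NDVI decreases accompanied by a texture increase are promoted to artificialization.

```python
def change_direction(ndvi_diff, texture_increased, min_diff=0.1):
    """Assign one of the three change directions to an object.
    ndvi_diff: mean NDVI(T2) - NDVI(T1) over the object;
    texture_increased: bool from the texture change map;
    min_diff: hypothetical minimum NDVI-difference magnitude."""
    # Stage 1: eliminate objects with small NDVI differences
    if abs(ndvi_diff) < min_diff:
        return None
    # Stage 2: split by NDVI sign, then apply the texture test
    if ndvi_diff > 0:
        return "revegetation"
    if texture_increased:
        return "artificialization"
    return "devegetation"
```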
In the final step, information about the direction of change is inserted into the general mask of changes, which is produced at the first stage of the algorithm. The areas where no change direction was detected, but that were marked as a change, are subject to further manual examination. Both thematic layers (the map of changes and the map of transitions between classes) are combined in order to create the resultant map of change types and directions. The end result is three outputs: a map of change directions, a map of change types and a classification map for date T2.
For production purposes, the method was implemented in eCognition Architect. The change detection process is reduced to four main steps: (1) input data selection; (2) segmentation; (3) change detection and classification; and (4) manual correction of detected changes and the export of the results. In the second step, the user has to define the segment size for both T1 and T2 images. Step 3 can be fully automated or the user can manually select thresholds for texture classification. After completing the process of change detection, the working area is automatically prepared for review and the manual correction of changes. To facilitate the work, the image data is divided into a grid of squares. The user can easily accept, reject or change classified objects inside the squares. Because the automatic procedure classifies more changes than there actually are, only boxes that cover changed objects are a subject of investigation. At this point, the user can reclassify change types if they were misclassified.

3. Experimental Section

3.1. Warsaw Test Site

The main output of the processing is a map of LC change types (Figures 6 and 7). It presents the transitions between LC classes recognized by the algorithm and assigned to the changed objects. At this point, it should be noted that the LC change map used for the evaluation of the algorithm’s accuracy is the result generated by the algorithm with no manual editing or error correction. Manual editing was part of the production phase of the project to ensure the best possible accuracy of the final maps.
Figure 6 shows that most of the changes that were recognized and named reflected changes present in the satellite images. Most were related to artificialization (change to urban class), but there was also a shift of river banks, and the disappearance of bare soil was recognized (Figure 7). It can also be seen that most of the changes related to the change of vegetation season (considered as false changes) were ignored. The algorithm was able to successfully detect urban area development in the western part of the image, i.e., a change from “agriculture” to “urban” and from “other vegetation” to “urban”. Similarly, small urban grassland areas were properly classified in the final layer. Results show that the number of classified changes is slightly higher than the number of real changes. This is in line with the general assumption in change detection practice that errors of commission are more acceptable than errors of omission. In some cases, it was not possible to designate a change type. In such cases, one of the three general change directions (i.e., revegetation if the change was related to an increase of NDVI values, devegetation if the change was related to a decrease of NDVI values and artificialization if the change was related to an increase of texture) was assigned to the object. All changes of agriculture and other vegetation areas were accurately classified as a change to urban areas. In the case of a small water body that appeared near new buildings, it was not possible to classify a change type, and it was assigned to the class devegetation.
The final step of the change detection, like the LC classification, is the assessment of the obtained data. Usually, this is done by a comparison with the reference data, which are defined by a ground survey or, if this is not possible, by visual interpretation. Reference data can be expressed as points or as objects. There are a number of studies that use point data (e.g., [30,31]). However, it has been argued that the accuracy of the object-based change detection assessment requires new methods [32] and that the validation concepts should also take into account a spatial assessment of the object boundaries [33] in addition to a simple thematic assessment. This can be done using objects as reference data. One of the main advantages of using point data is that the evaluation of the accuracy is less complicated compared to using objects [15] and, thus, may be more useful for the evaluation of a large number of maps. Our study adopted the traditional method based on random points. We assumed that a large number of points would allow a statistically valid thematic evaluation of the results. The change layer was validated on 1000 randomly selected points, including 500 points distributed over areas where change had been identified. Other points were distributed over areas marked as no change. From the available accuracy assessment methods (described, for example, by [34]), a standard confusion matrix was chosen. The change and no change error matrix is given in Table 3.
Table 3 shows that the overall accuracy is 89%. The kappa value (0.78), calculated following the method proposed by [35], shows that the result is significantly better than a random distribution. Examining the detailed classification of change types on the Warsaw test scene, we see that changes from all classes to the urban class were detected with the highest accuracy. Three hundred and twenty reference points were marked as a class transition from agriculture to urban; 269 of these changes were correctly classified, 47 points were classified as other changes (mostly as transitions from forest and other vegetation to urban), and only four points were marked as no change. Overall, only 2% of points were omitted, and the accuracy of the automatic change type classification was 76% for this image pair. The eastern part of Figure 7 shows significant changes in the riverbanks. Some bare ground areas disappeared on the left riverbank. This change was only partially mapped, and the change type (bare ground to urban) is incorrect (it should be bare ground to water). As the methodology was to be used in a map production process, a tool for the manual inspection of objects recognized as changes was implemented in the eCognition Architect software. This makes it possible to quickly check the change mask and correct errors without inspecting the whole scene.
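Overall accuracy and Cohen's kappa [35] follow directly from the confusion matrix. In the sketch below, the no-change row matches Table 3, while the change row (403 correct, 13 errors) is a hypothetical completion chosen only so that the totals reproduce the reported 89% overall accuracy and 0.78 kappa; it is not a row printed in the paper:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: classified, columns: reference)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n                             # overall accuracy
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2 # chance agreement
    return p_observed, (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical change/no-change matrix consistent with the reported figures.
cm = [[403, 13],   # classified as change (assumed row, see lead-in)
      [97, 487]]   # classified as no change (from Table 3)
oa, kappa = accuracy_and_kappa(cm)  # oa = 0.89, kappa = 0.78
```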

3.2. Results of the Change Detection over the Area Frame Sampling Sites

Our algorithm was used to process all of the available AFS Europe images. The final dataset included 26 image pairs for the years 2009–2010 and 15 image pairs for the years 2010–2011, covering areas in Spain, Germany, Italy, Portugal, Poland, Finland, Hungary, Latvia, Greece, Bulgaria, France and the United Kingdom. We performed change detection on the overlapping parts of the images and applied our algorithm to 3789 km2 of land. It should be underlined that the results presented here were manually corrected following the automatic change detection procedure. Detected changes covered 9.85 km2, i.e., 0.26% of the total area. Most of the changes were related to devegetation (about 8 km2), with deforestation (5.5 km2) the single largest contributor.
For 2010–2011, only 15 image pairs were available, with a total area of 785 km2. Change areas made up 0.56% of this. Here, again, the most significant changes were related to devegetation (3.57 km2). Detailed results are given in Table 4. The results of the change detection are freely available from the Geoland2 web portal [1] and the dedicated server [36].

3.3. Sources of Errors and Uncertainties

There are two types of possible errors: the first is related to change detection, the second to change classification. Our results from the selected test site and the assessment of their accuracy show that, although the proposed algorithm accurately detects change, in some cases the identified change objects do not fully delineate the objects present in the image. This may be caused by segmentation errors (i.e., when image objects do not accurately represent land cover objects). Such a situation is usually caused by an erroneous evaluation of the scale parameters for images T1 and T2. In some cases, changing other parameters (i.e., shape, compactness) or using other layers in the segmentation process would give better results. However, we decided to fix these parameters for reasons of automation and transferability between environmental zones. An additional bias may have been introduced by the texture layers used in the process. The texture calculation method, although valid for most cases, enhances object edges. This can be observed, for example, at the border between two high-contrast LC classes (e.g., bare soil and water). In this case, the border is classified as high texture, despite the fact that both classes have low texture. This may affect object characteristics and ultimately generate errors. Another uncertainty is related to misclassifications in the reference generic LC map. Each change type present in the final change layer consists of a transition from one class to another. The "from" class is taken from the reference LC map; hence, misclassifications in this map may cause errors in change classification. They will not, however, affect the identification of changed areas itself, as that step is based exclusively on the image data.
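The edge artifact described above is easy to reproduce: a local-statistics texture measure applied across the boundary of two perfectly homogeneous regions yields high values only along the shared border. The sketch below uses a plain local standard deviation as a stand-in for the SIGMA-filter-based texture of the production chain:

```python
import numpy as np

def local_std(image, size=3):
    """Local standard deviation in a size x size window, a simple texture
    proxy (a simplification of the SIGMA-filter texture used in the paper)."""
    pad = size // 2
    padded = np.pad(np.asarray(image, dtype=float), pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return windows.std(axis=(-2, -1))

# Two homogeneous (zero-texture) regions, e.g., bare soil next to water.
step = np.zeros((8, 8))
step[:, 4:] = 100.0
texture = local_std(step)
# texture is 0 inside both regions but positive along the border columns,
# so the border alone would be classified as "high texture".
```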

4. Conclusions

This paper presents a change detection method designed for the annual update of generic land cover (LC) maps produced in the context of the Geoland2-SATChMo project. An evaluation of the spectral characteristics of potential LC changes showed that their discriminant features could not be grouped into separate clusters in the feature space; hence, it was not possible to classify them accurately. However, grouping them into more general change directions (artificialization, devegetation and revegetation) helped to distinguish between them and, together with the reference LC map, improved the accuracy of change type classification. Furthermore, we found that applying the texture and NDVI change detection methods, together with multivariate alteration detection layers, in the OBIA environment led to a significant reduction in the number of false positives caused by spectral variation within classes, especially variation related to vegetation fluctuations in the agricultural class. The selection of thresholds based on a Gaussian distribution made it possible to highly automate the process and reduce user intervention to a minimum while maintaining a high level of accuracy. Tests performed on IKONOS images showed that the novel automatic change detection method was able to produce a change mask with 89% overall accuracy. Errors were mostly a result of the inhomogeneity of objects caused by inaccurate scale parameter estimation and edge effects in the texture layers. The method has been implemented as a convenient tool and applied in practice to produce change detection maps over an area of 3789.97 km2 for the years 2009–2010 and 785.01 km2 for the years 2010–2011, identifying 9.85 km2 and 4.41 km2 of LC changes, respectively. Most of the detected changes were related to devegetation, in particular deforestation.
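The Gaussian threshold selection mentioned above amounts to flagging values of a difference layer that fall more than k standard deviations from the mean. The sketch below is illustrative; the value of k and the function name are assumptions, not the parameters of the production chain:

```python
import numpy as np

def gaussian_change_threshold(diff_layer, k=2.0):
    """Flag values of a difference layer (e.g., NDVI or MAD component
    differences) lying outside mean +/- k * std, assuming the no-change
    background is approximately Gaussian."""
    mu = float(diff_layer.mean())
    sigma = float(diff_layer.std())
    return np.abs(diff_layer - mu) > k * sigma
```

Because the mean and standard deviation are estimated from the data themselves, no user-supplied threshold is needed, which is what makes this step fully automatic.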
Our study proves that it is possible to develop a highly automated method for LC change detection and classification that can operate in different environmental zones with very high resolution images acquired by different sensors. However, our experience shows that trade-offs must be made between accuracy, performance and low-cost mapping. Even with a very accurate automatic method, manual revision and correction of the results remain important parts of the process.
Future research will focus on the automation of the selection of the scale parameter used in the segmentation process. Not only will this help to reduce the time required for change mask extraction and increase the level of automation, it will also make the results more reliable.


Acknowledgments

The authors would like to acknowledge Katarzyna Dabrowska-Zielinska, Alicja Malinska and Monika Tomaszewska from the Remote Sensing Centre at the Institute of Geodesy and Cartography and Andrzej Kotarba from the Earth Observation Group of the Space Research Centre for their help during the preparation of the presented approach.
EO data was provided by the ESA managed GSC-DA, funded by the European Community’s Seventh Framework Programme (FP7/2007-2013) under EC ESA Grant Agreement No. 223001.

Author Contributions

Sebastian Aleksandrowicz was responsible for the development and testing of the change detection methodology and the eCognition Architect interface preparation.
Konrad Turlej was responsible for the development and testing of the change classification methodology and the change editing tool in eCognition Architect.
Stanisław Lewiński and Zbigniew Bochenek were responsible for the scientific supervision of the development process.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Geoland2 Portal. Available online: (accessed on 14 April 2014).
  2. Ingram, K.; Knapp, E.; Robinson, J. Change Detection Technique Development for Improved Urbanized Area Delineation; Technical Memorandum CSC/TM-81/6087; Computer Science Corporation: MD, USA, 1981.
  3. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice Hall: Englewood Cliffs, NJ, USA, 2005.
  4. Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B.; Lambin, E. Digital change detection methods in ecosystem monitoring: A review. Int. J. Remote Sens. 2004, 25, 1565–1596.
  5. Lunetta, R.S.; Knight, J.F.; Ediriwickrema, J.; Lyon, J.G.; Worthy, L.D. Land-cover change detection using multi-temporal MODIS NDVI data. Remote Sens. Environ. 2006, 105, 142–154.
  6. Yuan, D.; Elvidge, C. NALC land cover change detection pilot study: Washington DC area experiments. Remote Sens. Environ. 1998, 66, 166–178.
  7. Ardila, J.P.; Bijker, W.; Tolpekin, V.A.; Stein, A. Multitemporal change detection of urban trees using localized region-based active contours in VHR images. Remote Sens. Environ. 2012, 124, 413–426.
  8. Richards, J.A. Thematic mapping from multitemporal image data using the principal components transformation. Remote Sens. Environ. 1984, 16, 35–46.
  9. Coppin, P.R.; Bauer, M.E. Processing of multitemporal Landsat TM imagery to optimize extraction of forest cover change features. IEEE Trans. Geosci. Remote Sens. 1994, 32, 918–927.
  10. Qiu, B.; Prinet, V.; Perrier, E.; Monga, O. Multi-Block PCA Method for Image Change Detection. In Proceedings of the 12th International Conference on Image Analysis and Processing, Mantova, Italy, 17–19 September 2003; pp. 385–390.
  11. Nielsen, A.A.; Conradsen, K.; Simpson, J.J. Multivariate Alteration Detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: New approaches to change detection studies. Remote Sens. Environ. 1998, 64, 1–19.
  12. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic object-based image analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191.
  13. Blaschke, T.; Burnett, C.; Pekkarinen, A. Image Segmentation Methods for Object-Based Analysis and Classification. In Remote Sensing Image Analysis: Including the Spatial Domain; Springer: Berlin, Germany, 2004; pp. 211–236.
  14. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
  15. Chen, G.; Hay, G.J.; Carvalho, L.M.T.; Wulder, M.A. Object-based change detection. Int. J. Remote Sens. 2012, 33, 4434–4457.
  16. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106.
  17. Banaszkiewicz, M.; Smith, G.; Gallego, J.; Aleksandrowicz, S.; Lewinski, S.; Kotarba, A.; Bochenek, Z.; Dabrowska-Zielinska, K.; Turlej, K.; Groom, A.; et al. European Area Frame Sampling Based on Very High Resolution Images. In Land Use and Land Cover Mapping in Europe; Manakos, I., Braun, M., Eds.; Springer: Berlin, Germany, 2014; pp. 75–88.
  18. Townshend, J.R.G.; Justice, C.O.; Gurney, C.; McManus, J. The impact of misregistration on change detection. IEEE Trans. Geosci. Remote Sens. 1992, 30, 1054–1060.
  19. Lewinski, S.; Bochenek, Z.; Turlej, K. Application of an Object-Oriented Method for Classification of VHR Satellite Images Using a Rule-Based Approach and Texture Measures. In Land Use and Land Cover Mapping in Europe; Manakos, I., Braun, M., Eds.; Springer: Berlin, Germany, 2014; pp. 193–201.
  20. De Kok, R.; Wężyk, P. Principles of Full Autonomy in Image Interpretation: The Basic Architectural Design for a Sequential Process with Image Objects. In Object-Based Image Analysis; Blaschke, T., Hay, G.J., Eds.; Springer: Berlin, Germany, 2008; pp. 697–710.
  21. Lewiński, S.; Bochenek, Z.; Turlej, K. Application of object-oriented method for classification of VHR satellite images using rule-based approach and texture measures. Geoinf. Issues 2010, 2, 19–26.
  22. Nielsen, A.A. The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data. IEEE Trans. Image Process. 2007, 16, 463–478.
  23. Canty, M.J. Image Analysis, Classification, and Change Detection in Remote Sensing: With Algorithms for ENVI/IDL, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2009.
  24. Baatz, M.; Schape, A. Multiresolution Segmentation—An Optimization Approach for High Quality Multi-Scale Image Segmentation. In Angewandte Geographische Informationsverarbeitung XII: Beiträge zum AGIT-Symposium Salzburg 2000; Herbert Wichmann Verlag: Karlsruhe, Germany, 2000; pp. 12–23.
  25. Saba, F.; Valadanzouj, M.; Mokhtarzade, M. The optimization of multi-resolution segmentation of remotely sensed data using genetic algorithm. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, 345–349.
  26. Johnson, B.; Xie, Z. Unsupervised image segmentation evaluation and refinement using a multi-scale approach. ISPRS J. Photogramm. Remote Sens. 2011, 66, 473–483.
  27. Dragut, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871.
  28. Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127.
  29. Niemeyer, I.; Bachmann, F.; John, A.; Listner, C.; Marpu, P.R. Object-based change detection and classification. 2009, 7477.
  30. Im, J.; Jensen, J.R.; Tullis, J.A. Object-based change detection using correlation image analysis and image segmentation. Int. J. Remote Sens. 2008, 29, 399–423.
  31. Linke, J.; McDermid, G.; Laskin, D.; McLane, A.; Pape, A.; Cranston, J.; Hall-Beyer, M.; Franklin, S. A disturbance-inventory framework for flexible and reliable landscape monitoring. Photogramm. Eng. Remote Sens. 2009, 75, 981–995.
  32. Blaschke, T. Towards a framework for change detection based on image objects. Göttinger Geographische Abhandlungen 2005, 113, 1–9.
  33. Albrecht, F.; Lang, S.; Hölbling, D. Spatial accuracy assessment of object boundaries for object-based image analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, C7.
  34. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201.
  35. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46.
  36. SATChMo Change Layers. Available online: (accessed on 14 April 2014).
Figure 1. IKONOS panchromatic images of the suburbs of the Warsaw metropolitan area from the years: (a) 2002 (T1) and (b) 2008 (T2) used for algorithm testing.
Figure 2. The feature space showing the differentiation of the 14 LC change types identified at the six test sites. Feature Combination 1 and Feature Combination 2 are the best combinations of discriminant features (such as NDVI, texture and multivariate alteration detection (MAD) components).
Figure 3. Differentiation of aggregated classes using NDVI.
Figure 4. Differentiation of aggregated classes using texture (SIGMA filter).
Figure 5. Change detection and classification workflow.
Figure 6. IKONOS image from 2002 (a) and IKONOS image from 2008 (b) with a change type layer overlaid on image T2 (c); subset of the Warsaw test area.
Figure 7. IKONOS image from 2002 (a) and IKONOS image from 2008 (b) with the change type layer overlaid on image T2 (c); subset of the Warsaw test area.
Table 1. Combinations of land cover changes that may take place during one year. X, no change; 0, probably not; 1, rare; 2, often.
(Columns, left to right: Urban/Artificial; Agricultural areas; Bare ground; Water; Snow and ice; Forest/Woodland/Trees; Sparse woody vegetation; Grassland; Other vegetation; Clouds, voids, etc.)
From Agricultural areas: 2, X, 2, 2, 0, 0, 1, 2, 1, X
From Bare ground: 2, 2, X, 2, 1, 0, 1, 2, 1, X
From Snow and ice: 0, 0, 1, 1, X, 0, 0, 0, 1, X
From Sparse woody vegetation: 2, 2, 2, 2, 0, 2, X, 2, 1, X
From Other vegetation: 2, 2, 2, 2, 1, 2, 2, 2, X, X
From Clouds, voids, etc.: X, X, X, X, X, X, X, X, X, X
Table 2. Land cover changes and the direction of the change based on differences in texture values and NDVI index.
Type of Land Cover Change: Name of Change Direction
Agriculture > urban: Artificialization
Bare land > urban: Artificialization
Forest > urban: Artificialization
Sparse vegetation > urban: Artificialization
Agriculture > bare land: Devegetation
Forest > agriculture: Devegetation
Forest > bare land: Devegetation
Grassland > bare land: Devegetation
Sparse vegetation > bare land: Devegetation
Agriculture > grassland: Revegetation
Forest > grassland: Revegetation
Sparse vegetation > grassland: Revegetation
Urban > agriculture: Revegetation
Urban > grassland: Revegetation
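Table 2 amounts to a fixed lookup from class transitions to general change directions, which could be represented as follows (a sketch; the class label strings are assumptions, not identifiers from the production rule set):

```python
# (class at T1, class at T2) -> general change direction, mirroring Table 2.
CHANGE_DIRECTION = {
    ("agriculture", "urban"): "artificialization",
    ("bare land", "urban"): "artificialization",
    ("forest", "urban"): "artificialization",
    ("sparse vegetation", "urban"): "artificialization",
    ("agriculture", "bare land"): "devegetation",
    ("forest", "agriculture"): "devegetation",
    ("forest", "bare land"): "devegetation",
    ("grassland", "bare land"): "devegetation",
    ("sparse vegetation", "bare land"): "devegetation",
    ("agriculture", "grassland"): "revegetation",
    ("forest", "grassland"): "revegetation",
    ("sparse vegetation", "grassland"): "revegetation",
    ("urban", "agriculture"): "revegetation",
    ("urban", "grassland"): "revegetation",
}

def direction_of(from_class, to_class):
    """Look up the change direction for a class transition; None if the
    transition is not one of the 14 tabulated types."""
    return CHANGE_DIRECTION.get((from_class, to_class))
```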
Table 3. The error matrix for change/no-change objects.

(Columns: Change; No Change; Sum)
No change: 97, 487, 584
Table 4. The results of the application of the change detection algorithm to image pairs acquired in 2009–2010 and 2010–2011.
(Columns: Time Span; Total Area (km2); Change; No Change; Change Direction; Area (km2))


Remote Sens. EISSN 2072-4292. Published by MDPI AG, Basel, Switzerland.