Land-cover information is important for landscape characterization and environmental monitoring [1]. A variety of land-cover information products are generated from different remote-sensing systems [3]. They depict both static cover types and dynamic changes in space and time. For example, after releasing four National Land Cover Database (NLCD) products over the past two decades (NLCD 1992, 2001, 2006, and 2011), the U.S. Geological Survey (USGS) has designed a new generation of NLCD products, named NLCD 2016, to further establish a long-term monitoring capability for land resources [6]. Time series of land-cover maps are also common [7]. While land-cover information is conventionally represented in categorical format (i.e., encoded as discrete cover types and/or change-class labels), fractional representations of land cover and change have also become popular [9], although this paper focuses on the former type of land-cover change maps.
Scientific replicability and practical usefulness of land-cover information products depend on their accuracy. However, no land-cover or change maps created from computer processing of remotely sensed images and auxiliary data can be completely accurate [12]. Even visual interpretation of remote-sensing images cannot guarantee absolute accuracy in the resultant maps. It is thus important to build up accuracy-assessment capabilities in the production and use of land-cover information. There is increasing research and development on accuracy assessment for the aforementioned types of land-cover information products [14].
The conventional way to describe accuracies in land-cover and land-cover change maps is through error matrices, from which we can obtain global summaries of map accuracy in terms of overall, user’s, and producer’s accuracies [14]. However, these non-spatial (or global) accuracy metrics provide no information about the spatial variation of accuracies within individual map classes. In contrast, local accuracy descriptors would better support exploratory and diagnostic analyses of the spatial distributions of misclassification errors, hence better guiding classifier improvements and information fusion. Spatialized accuracy characterization in land-cover information products has thus drawn increasing research attention, for both static land-cover classification and dynamic change categorization [19]. Useful reviews are given in Khatami et al. [29], Zhang et al. [30], and Zhang et al. [31].
For quantification of local accuracies in change maps specifically, reference sample data are essential, as in conventional non-spatial accuracy assessments. It is preferable to have collocated sample data that verify true cover types at the same locations but at different times, providing information about the correctness of change categorization for individual locations (i.e., map pixels) [32]. However, collocated sample data may be far from sufficient or even unavailable. On the other hand, non-collocated sample data are common among reference data collected for mono-temporal maps without consideration of, or coordination for, change analysis [35]. Assembling and re-using such non-collocated data would help save sampling cost, especially for ground-based surveys. Re-use of non-collocated sample data may be made directly or, more effectively, in combination with additionally acquired sample data. The latter option can be furnished by an augmented sample design [36], which is built from a base design, say stratified random sampling (StRS), as used in Zhang et al. [31]. We may collect new sample data at locations where changes are observed, while collecting repeat sample data at some of the previously sampled locations, with the aim of allocating sample units to strata of “from-to” classes according to recommendations [37].
Accordingly, there are two broad types of methods for mapping local accuracies in land-cover change maps: direct and indirect, as in change analysis [38]. In the former, collocated sample data are assumed, so that methods designed for mono-temporal land-cover maps can be directly extended to bi-temporal change maps [27]. The latter caters to non-collocated sample data and is based on mono-temporal accuracy analyses and their proper synthesis [31]. It plays an increasingly important role in change analyses and accuracy assessments, since sample designs necessarily become complex due to the need to monitor the changing landscape [40]. Zhang et al. [31] proposed an indirect method, which is in effect Method Product with adjustment for cross-correlation (i.e., temporal correlation) in bi-temporal classification correctness (hence named PXCOV, for Product with adjustment for cross(X)-COVariance). Method Product evaluates probabilities of correct change classification as the product of probabilities of correct mono-temporal classifications; the multiplication operator was previously used by Steele [41] and Pontius and Cheuk [42] for constructing combined classifiers and comparing fuzzy maps, respectively. Using PXCOV, it is possible to utilize all available reference sample data, regardless of whether they are collocated, contributing to cost savings in sampling for reference data. This is facilitated by estimating local accuracies in single-date classifications and then synthesizing them properly by accommodating the temporal correlation between single-date accuracies. However, its practicality is limited by the need to estimate temporal correlation via cross-validation cokriging (a relatively complicated procedure of geostatistical modeling and computing) when collocated sample data are unavailable or insufficient (i.e., when the available sample data are non-collocated or mostly so).
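As a sketch of the underlying idea (the indicator notation here is introduced for illustration and is not taken from the cited works): let \(A_t(x)\in\{0,1\}\) indicate correct classification of pixel \(x\) in the date-\(t\) map. The product rule and its cross-covariance adjustment then follow from the standard identity for indicator variables:

\[
\Pr\{A_1(x)=1,\,A_2(x)=1\} \;=\; \Pr\{A_1(x)=1\}\,\Pr\{A_2(x)=1\} \;+\; \operatorname{Cov}\bigl(A_1(x),A_2(x)\bigr),
\]

where Method Product retains only the first (product) term, implicitly assuming zero cross-covariance, while PXCOV also estimates the covariance term.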
To overcome PXCOV’s lack of practicality, this paper presents a simple alternative method, furnished through stratified approximation of temporal correlation in change-classification accuracies and featuring combined use of the minimum and multiplication operators. Specifically, in this composite method (named Fuzzy+Product), the minimum operator (resembling the fuzzy-set “min” operator, hence Fuzzy) is applied over no-change pixels, where maximum correlation is assumed, so that the minimum of the predicted probabilities of correct classification in the single-date maps is taken as an approximation of the probability of correct change classification at each no-change pixel [42]. For change pixels, where temporal correlation is usually very weak and is thus approximated as zero, the multiplication operator (i.e., the product rule [31], named Product) is applied. Thus, using Fuzzy+Product, we can estimate local accuracies in change classification easily through combined use of the minimum operator (over no-change pixels) and the multiplication operator (over change pixels), without having to run the complicated procedure of cross-validation cokriging iteratively as in the original PXCOV implementation.
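The per-pixel rule above can be sketched in a few lines of array code. This is a minimal illustration, not the authors’ implementation: the function name, array interface, and inputs are our own assumptions, with `p1` and `p2` standing for the predicted probabilities of correct single-date classification.

```python
import numpy as np

def fuzzy_product(p1, p2, change_mask):
    """Approximate per-pixel probability of correct change classification.

    p1, p2      : arrays of predicted probabilities of correct single-date
                  classification at dates 1 and 2 (same shape).
    change_mask : boolean array, True where the map labels a pixel as change.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    # No-change pixels: maximal temporal correlation assumed -> "min" (Fuzzy) rule.
    # Change pixels: near-zero temporal correlation assumed -> product rule.
    return np.where(change_mask, p1 * p2, np.minimum(p1, p2))
```

For example, a no-change pixel with single-date probabilities 0.9 and 0.7 receives min(0.9, 0.7) = 0.7, while a change pixel with 0.8 and 0.9 receives 0.8 × 0.9 = 0.72.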
The research question is whether Fuzzy+Product is a reasonably accurate method in comparison with PXCOV while being simpler. Thus, the objective of this paper is to investigate the performance of the proposed Fuzzy+Product method relative to alternative methods, including PXCOV. As demonstrated by the experimental results, the answer is positive.
Fuzzy+Product’s benefits are summarized below. First, it is simpler than PXCOV without compromising accuracy. Second, it holds great potential for future applications in accuracy characterization in the context of multi-temporal change analyses, whereas PXCOV is oriented to bi-temporal settings only, as discussed in Section 4.2. Third, its extension to accuracy analyses in fractional (change) classifications is also promising (see Section 4.2). Lastly, it facilitates local-global-combined accuracy assessments and modeling with uncertainty quantified, while benefitting area-estimate corrections, as discussed in Section 4.3. These advantages manifest themselves well in scenarios where reference sample data are non-collocated, or mostly so.
Some clarification of this research in comparison with the work underlying PXCOV [31] is in order here. First, Fuzzy+Product works by stratified approximation of the cross-correlation between single-date classification accuracies rather than relying on complicated geostatistical modeling, as PXCOV does when there are no or few collocated sample data. Although we used the same datasets as in [31], the major thrust of this research is not a repetition of the work in [31]. Second, we focus explicitly on applications of Fuzzy+Product (in comparison with alternative indirect methods) using largely non-collocated data, whereas applications of PXCOV were discussed (mainly in comparison with direct methods) using both non-collocated and collocated data.
The remainder of the article is organized as follows. In Section 2, we first describe the study sites and datasets. Five training sample sets of equal size, with different but dominating proportions of non-collocated sample pixels, collected at five study sites in China in previous research [31], were used. This is followed by a description of the proposed Fuzzy+Product method along with the alternative methods PXCOV, Fuzzy, and Product, with the latter two applied to all pixels regardless of whether they are change or no-change pixels. Section 3 describes the results of empirical studies designed to test and compare the performance of these methods on the basis of independent validation sample data. Finally, Section 5 concludes the paper after some discussion in Section 4.