Article

About the Pitfall of Erroneous Validation Data in the Estimation of Confusion Matrices

Earth & Life Institute, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2020, 12(24), 4128; https://doi.org/10.3390/rs12244128
Submission received: 3 December 2020 / Revised: 12 December 2020 / Accepted: 14 December 2020 / Published: 17 December 2020

Abstract
Accuracy assessment of maps relies on the collection of validation data, i.e., a set of trusted points or spatial objects collected independently from the classified map. However, collecting a spatially and thematically accurate dataset is often tedious and expensive and, despite good practices, such datasets are rarely error-free. Errors in the reference dataset propagate to the probabilities estimated in the confusion matrix, so that the quality estimates are biased: accuracy indices are overestimated if the errors are correlated and underestimated if the errors are conditionally independent. The first findings of our study highlight that this bias could invalidate statistical tests of map accuracy assessment. Furthermore, correlated errors in the reference dataset induce unfair comparisons of classifiers. A maximum entropy method is therefore proposed to mitigate the propagation of errors from imperfect reference datasets. The proposed method is based on a theoretical framework that considers a trivariate probability table linking the observed confusion matrix, the confusion matrix of the reference dataset and the "real" confusion matrix. The method was tested with simulated thematic and geolocation errors and proved to reduce the bias to the level of the sampling uncertainty. It was very efficient with geolocation errors, because conditional independence of the errors can then reasonably be assumed. Thematic errors are more difficult to mitigate because they require the estimation of an additional parameter related to the amount of spatial correlation. In any case, while collecting additional trusted labels is usually expensive, our results show that the benefits for accuracy assessment are much larger than those of collecting a larger number of questionable reference data.


1. Introduction

Providing a confusion matrix together with a map is a prevailing good practice in remote sensing data analysis [1,2]. The confusion matrix indeed yields a complete summary of the matching between a geographic data product and the reference data. The entries of the confusion matrix allow map users to derive numerous summary measures of class accuracy, which can be used to compare different classifiers in scientific studies or to inform end users about the quality of the map. The confusion matrix is also used to achieve a better estimate of the area covered by each class.
The reference datasets used to build the confusion matrix are often assumed to be error-free, but even ground data often include some errors [3]. Confusion matrices and the associated quality indices are therefore likely to yield a biased quality assessment of the maps. The negative impact of those errors on the apparent accuracy of maps has been highlighted in previous studies [4,5], and quantified for both correlated and uncorrelated errors. For instance, a case study showed that 10% of errors in a reference dataset led to an 18.5% underestimation of the producer accuracy when the errors were independent, and to a 12.5% overestimation when they were correlated [6].
The sources of errors in the collection of ground reference data are well documented, and some recent studies aimed at assessing their contribution depending on whether the errors are thematic or positional. Thematic errors consist of assigning the wrong label to a sampling unit because of an erroneous classification (uncertainties of the response design [7], temporal mismatch, transitional classes, class interpretation errors, careless errors [8]). Positional errors (also called geolocation errors) may result in an incorrect matching of reference and map labels because the geographic position of the sampling unit is shifted with respect to the map (mislocation of testing sites, mislocation of the map or uncertain definition of boundaries) [2,9,10].
A range of approaches have been developed to deal with imperfect reference data. In medical science, Enøe et al. [11] used maximum likelihood methods to handle unknown reference information in a binary case, while Espeland and Handelman [12] used latent class models to obtain a consensual confusion matrix among several observers. Those methods help to validate datasets without a gold standard [13], but they do not adjust for poor quality reference datasets. In remote sensing, Sarmento et al. [14] integrated information from a second label in the reference dataset to build a confidence value on the accuracy estimates. Foody [6] proposed a method to correct sensitivity and specificity estimates when conditional independence holds and the quality of the reference classification is known. Carlotto [3] proposed a method to retrieve the actual overall accuracy based on the apparent accuracy and the errors between the reference and the "ground truth"; however, this method assumes that errors are equally distributed amongst classes.
Sub-optimal reference data reflect the cost of acquiring high quality data [6]. Instead of correcting the reference dataset, our method therefore manages its uncertainty globally in order to obtain a corrected confusion matrix, i.e., a confusion matrix that reflects the matching between the map and the trusted labels. This is particularly useful to manage the diversity of data sources that are now used for validation (ground or aerial surveys; crowdsourced or expert-based). The major difference with previous studies is that we rebuild the full confusion matrix as well as possible, and not only some of the indices that can be derived from it. Furthermore, the proposed method assumes neither that the errors are equally distributed amongst classes nor that they are conditionally independent.
This paper presents the theoretical developments for recovering the trusted confusion matrix based on the observed confusion matrix and a confusion matrix between the reference dataset and the trusted labels. These theoretical developments are then implemented in two synthetic case studies: the first addresses thematic errors, while the second focuses on geolocation errors.

2. Theoretical Framework

The confusion matrix is usually assumed to represent the overlay between a map and the "ground truth". In practice, the reference data used to estimate it are not perfect, so that calling them "ground truth" is improper. In this study, we consider a map with a set of $m$ classes, where the labels from the classification are indexed by $i$ and the trusted (ground truth) labels are indexed by $j$. In parallel, we assume that a reference dataset containing errors has been collected, with labels indexed by $k$. Typically, the correspondence between the classification and the reference is assessed through a confusion matrix (i.e., the table of the joint counts $n(i,k)$ of pixels classified in class $i$ but referenced in class $k$), from which the $m \times m$ table of the joint probability estimates $\hat{p}(i,k) = n(i,k)/n$ is computed, where $n$ refers to the sample size. In parallel, let us consider similarly that the $m \times m$ table of the joint counts $n(j,k)$ is at hand (but not necessarily estimated from the same sample), so that the probability estimates $\hat{p}(j,k) = n(j,k)/n$ assess the correspondence between the trusted and the reference classes over the study area. There are thus two known confusion matrices: the classified map vs. the reference and the reference vs. the trusted labels.
What is sought are estimates of the unknown joint probabilities $p(i,j)$ that assess the correspondence between the classification and the ground truth. Instead of the $p(i,k)$'s, which only quantify the quality of the map with respect to a reference (one that might differ from the ground truth and whose choice might also differ between users), the $p(i,j)$'s truly assess the accuracy of the map. The question is how to estimate these probabilities at best when the only knowledge at hand are the sets of $\hat{p}(i,k)$'s and $\hat{p}(j,k)$'s, while possibly considering an additional conditional independence hypothesis if relevant. We will show hereafter that this problem can be handled using a maximum entropy (MaxEnt) approach (see, e.g., [15,16] for a presentation of the rationale of the method and its panel of applications).

2.1. Joint Probability Table

In order to ease the discussion, let us consider the set of unknown joint probabilities $p(i,j,k)$, which can be organised in a 3D probability table, as shown in Figure 1. By definition, all bivariate probabilities $p(i,j)$, $p(i,k)$ and $p(j,k)$ are marginal probabilities of this table, so that equating the marginal probabilities with their estimates leads to
$$\sum_{i=1}^{m} p(i,j,k) \triangleq p(j,k) = \hat{p}(j,k) \quad \forall\, j,k \tag{1a}$$
$$\sum_{j=1}^{m} p(i,j,k) \triangleq p(i,k) = \hat{p}(i,k) \quad \forall\, i,k \tag{1b}$$
$$\sum_{k=1}^{m} p(i,j,k) \triangleq p(i,j) \quad \forall\, i,j \tag{1c}$$
where the unknown $p(i,j,k)$'s are thus subject to the two constraints given in Equations (1a) and (1b), while Equation (1c) allows us to compute the required $p(i,j)$'s. Furthermore, because $\sum_k p(j,k) \triangleq p(j)$, we can further derive from Equation (1a) that $p(j) = \hat{p}(j)$.
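For readers who want to check these relations numerically, the following NumPy sketch (with an arbitrary random table standing in for the unknown $p(i,j,k)$'s; all variable names are illustrative) computes the three marginals of Equations (1a)-(1c):

```python
import numpy as np

m = 3                                  # number of classes (illustrative)
rng = np.random.default_rng(0)
p = rng.random((m, m, m))              # axes ordered (i, j, k)
p /= p.sum()                           # make it a valid joint probability table

p_jk = p.sum(axis=0)                   # Eq. (1a): marginal over i
p_ik = p.sum(axis=1)                   # Eq. (1b): marginal over j
p_ij = p.sum(axis=2)                   # Eq. (1c): the sought p(i, j)

# p(j) is the same whether obtained from p(j,k) or from p(i,j)
assert np.allclose(p_jk.sum(axis=1), p_ij.sum(axis=0))
```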
When the classification is conducted independently from the selection of the reference, it is reasonable to consider that, for a given "ground truth", the classification errors are independent of the reference errors. Stated otherwise, we can assume independence between classification and reference errors conditionally on the "ground truth" class. In probability terms, under this conditional independence we thus have
$$p(i,k \mid j) = p(i \mid j)\; p(k \mid j) \quad \forall\, i,j,k \tag{2}$$
or equivalently (see details in Appendix A)
$$p(i,j,k) = \frac{p(i,j)\,p(j,k)}{p(j)} \;\;\forall\, i,j,k \quad \Longleftrightarrow \quad \frac{p(i,j,k)}{p(i,j)} \triangleq p(k \mid i,j) = \frac{p(j,k)}{p(j)} \;\;\forall\, i,j,k \tag{3}$$
Finally, attention should also be paid to the fact that $\hat{p}(j,k)$ and $\hat{p}(i,k)$ cannot be chosen arbitrarily, because summing over $i$ or $j$ must lead to the same result for $\hat{p}(k)$, with
$$\hat{p}(k) \triangleq \sum_i \hat{p}(i,k) = \sum_j \hat{p}(j,k) \tag{4}$$
However, the usual sum-to-one constraint $\sum_{i,j,k} p(i,j,k) = 1$ does not need to be explicitly accounted for, as it is automatically enforced from Equations (1a) and (1b) as long as these bivariate probability estimates sum to one, as required.

2.2. Maximum Entropy Estimation

An elegant solution to the estimation of the set of unknown probabilities $p = \{p(i,j,k)\}$ is given by the MaxEnt approach, which looks for the $p$ that maximizes its entropy $H(p)$, with
$$H(p) = -E[\ln p] = -\sum_{i,j,k} p(i,j,k)\,\ln p(i,j,k) \tag{5}$$
where again $p$ is subject to the constraints in Equations (1a), (1b) and (3). Using the Lagrangian formalism, it is thus possible to look for the $p$ that maximizes $H(p)$ subject to these constraints by maximizing the objective function
$$O(p) = H(p) + \sum_{i,k} \lambda_{ik}\bigl(p(i,k) - \hat{p}(i,k)\bigr) + \sum_{j,k} \mu_{jk}\bigl(p(j,k) - \hat{p}(j,k)\bigr) + \sum_{i,j,k} \kappa_{ijk}\Bigl(p(k \mid i,j) - \frac{\hat{p}(j,k)}{\hat{p}(j)}\Bigr) \tag{6}$$
where $\lambda_{ik}$, $\mu_{jk}$ and $\kappa_{ijk}$ are Lagrange multipliers and where the last term (i.e., the constraint derived from Equation (3)) is only needed if the conditional independence hypothesis is accounted for. As $H(p)$ is concave everywhere, the maximum of Equation (6) can be found numerically using classical constrained convex optimization methods. It is worth remembering, however, that maximizing entropy under expectation constraints is equivalent to maximizing likelihood (ML) under structural constraints, as each problem is the convex dual of the other. The MaxEnt solution will thus correspond to the ML estimation of $p$ subject to the conditional independence hypothesis and to the $\hat{p}(i,k)$'s and $\hat{p}(j,k)$'s estimates. This also means that numerical algorithms used in a ML context are relevant for finding the MaxEnt solution too. In particular, the easy-to-implement iterative proportional fitting algorithm [17] will be used here. For this, let us rewrite $p(i,j,k)$ by accounting for the various constraints in Equations (1a), (1b) and (3), with
$$p(i,j,k) = p(j,k)\,p(i \mid j,k) = \hat{p}(j,k)\,\frac{p(i,j,k)}{\sum_i p(i,j,k)} \quad \forall\, i,j,k \tag{7a}$$
$$p(i,j,k) = p(i,k)\,p(j \mid i,k) = \hat{p}(i,k)\,\frac{p(i,j,k)}{\sum_j p(i,j,k)} \quad \forall\, i,j,k \tag{7b}$$
$$p(i,j,k) = p(k \mid i,j)\,p(i,j) = \frac{p(j,k)}{p(j)}\,p(i,j) = \frac{\hat{p}(j,k)}{\hat{p}(j)}\,\sum_k p(i,j,k) \quad \forall\, i,j,k \tag{7c}$$
Starting from initial guesses $p(i,j,k)^{[0]}$, it is then possible to iteratively correct these probabilities by enforcing the constraints, with
$$p(i,j,k)^{[\ell+1]} = \hat{p}(j,k)\,\frac{p(i,j,k)^{[\ell]}}{\sum_i p(i,j,k)^{[\ell]}} \quad \forall\, i,j,k \tag{8a}$$
$$p(i,j,k)^{[\ell+2]} = \hat{p}(i,k)\,\frac{p(i,j,k)^{[\ell+1]}}{\sum_j p(i,j,k)^{[\ell+1]}} \quad \forall\, i,j,k \tag{8b}$$
$$p(i,j,k)^{[\ell+3]} = \frac{\hat{p}(j,k)}{\hat{p}(j)}\,\sum_k p(i,j,k)^{[\ell+2]} \quad \forall\, i,j,k \tag{8c}$$
as it is clear that each constraint is fulfilled at the end of the corresponding step in Equations (8a)–(8c), with
$$p(j,k)^{[\ell+1]} = \sum_i p(i,j,k)^{[\ell+1]} = \hat{p}(j,k) \quad \forall\, j,k \tag{9a}$$
$$p(i,k)^{[\ell+2]} = \sum_j p(i,j,k)^{[\ell+2]} = \hat{p}(i,k) \quad \forall\, i,k \tag{9b}$$
$$p(k \mid i,j)^{[\ell+3]} = \frac{p(i,j,k)^{[\ell+3]}}{\sum_k p(i,j,k)^{[\ell+3]}} = \frac{\hat{p}(j,k)}{\hat{p}(j)} \quad \forall\, i,j,k \tag{9c}$$
By iterating steps (8a)-(8c) until convergence to the final result $p^{[\infty]} = \{p(i,j,k)^{[\infty]}\}$, the MaxEnt estimates of the $p(i,j)$'s are then given by $p(i,j)^{[\infty]} = \sum_k p(i,j,k)^{[\infty]}$.
If conditional independence is not assumed, Equations (7c), (8c) and (9c) can simply be omitted from the previous equations and the final corresponding MaxEnt estimates $p^{[\infty]}$ will differ. The initial guesses $p^{[0]}$ can be arbitrarily chosen as long as they sum up to one and do not include null values. A convenient choice is $p(i,j,k)^{[0]} = 1/m^3 \;\forall\, i,j,k$, which also corresponds to the MaxEnt estimate when no constraints need to be accounted for. Matlab codes are available upon request. The reader may also refer to more generic packages, with R code available in the MIPFP package [18] and Python code available in the IPFN package [19].
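For illustration, a minimal self-contained version of this fitting scheme can be written in a few lines of NumPy; this is a sketch of Equations (8a)-(8c), not the authors' Matlab implementation, and the function name, iteration count and tolerance are ours:

```python
import numpy as np

def maxent_ipf(p_ik, p_jk, cond_indep=True, n_iter=1000, eps=1e-12):
    """MaxEnt estimate of the trivariate table p(i,j,k) by iterative
    proportional fitting, given the marginals p(i,k) and p(j,k).
    The axes of the returned m x m x m table are ordered (i, j, k)."""
    m = p_ik.shape[0]
    p = np.full((m, m, m), 1.0 / m**3)      # uniform initial guess p[0]
    p_j = p_jk.sum(axis=1)                  # p(j), implied by Eq. (1a)
    for _ in range(n_iter):
        # step (8a): enforce the p(j,k) marginal
        p *= (p_jk / (p.sum(axis=0) + eps))[None, :, :]
        # step (8b): enforce the p(i,k) marginal
        p *= (p_ik / (p.sum(axis=1) + eps))[:, None, :]
        if cond_indep:
            # step (8c): enforce p(k|i,j) = p(j,k)/p(j)
            p = p.sum(axis=2, keepdims=True) \
                * (p_jk / (p_j[:, None] + eps))[None, :, :]
    return p

# corrected confusion matrix: marginal over k, as in Eq. (1c)
# p_ij = maxent_ipf(p_ik_hat, p_jk_hat).sum(axis=2)
```

Omitting the `cond_indep` step yields the MaxEnt solution without the conditional independence hypothesis, exactly as described above.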

2.3. Assessing the Conditional Independence

As previously mentioned, the MaxEnt procedure leads to two distinct sets of $p^{[\infty]}$, depending on whether the conditional independence hypothesis is accounted for or not. Conditional independence is a realistic hypothesis when the classification is conducted independently from the selection of the reference, but this is not always the case, and errors can be correlated to various (but generally unknown) extents. The user thus faces the difficult choice of selecting one of these two distinct MaxEnt estimates. When a subset of $n_{sub}$ data of the reference dataset is consolidated with the highest possible accuracy (e.g., through a ground survey), it becomes possible to directly obtain estimates $\hat{p} = \{\hat{p}(i,j,k)\}$, with $\hat{p}(i,j,k) = n(i,j,k)/n_{sub}$. However, the size $n_{sub}$ of such a sample is typically much lower than the size $n$ of the sample used for computing the $\hat{p}(i,k)$'s in the MaxEnt estimation procedure. Nevertheless, having $\hat{p}$ at hand allows the user to balance the two MaxEnt estimates. Indeed, let us consider the Kullback-Leibler divergence $KL(\cdot\,\|\,\cdot)$ between the direct frequency estimates $\hat{p}$ and one of the MaxEnt estimates $p^{[\infty]}$ (i.e., with or without conditional independence), with
$$KL(\hat{p}\,\|\,p^{[\infty]}) = \sum_{i,j,k} \hat{p}(i,j,k)\,\ln\frac{\hat{p}(i,j,k)}{p(i,j,k)^{[\infty]}} \tag{10}$$
so that the MaxEnt estimate to be favoured is the one associated with the smallest divergence. Let us denote by $p^{[\infty,a+b]}$ and $p^{[\infty,a+b+c]}$ the MaxEnt estimates without and with the conditional independence hypothesis, respectively (where the superscripts $a$, $b$ and $c$ refer to Equations (7)-(9)). We propose here to simply combine these MaxEnt estimates by finding the linear combination
$$p^{[\infty]}(\alpha) = \alpha\; p^{[\infty,a+b]} + (1-\alpha)\; p^{[\infty,a+b+c]} \tag{11}$$
that minimizes the divergence $KL(\hat{p}\,\|\,p^{[\infty]}(\alpha))$ between $\hat{p}$ and this linear combination, where $\alpha$ and $1-\alpha$ are the weights associated with the MaxEnt estimates without and with the conditional independence hypothesis, respectively. Minimizing $KL(\hat{p}\,\|\,p^{[\infty]}(\alpha))$ with respect to $\alpha$ combines both MaxEnt estimates at best, based on the information brought by $\hat{p}$. The final estimates of the $p(i,j)$'s are then given by $p(i,j)^{[\infty]}(\alpha) = \sum_k p(i,j,k)^{[\infty]}(\alpha)$.
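Assuming the `maxent_ipf` sketch above and a direct estimate `p_hat_ijk` from the trusted subsample, this balancing step of Equations (10) and (11) could be implemented as follows (SciPy's bounded scalar minimiser is one possible choice; all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl(p_hat, p, eps=1e-12):
    """Kullback-Leibler divergence KL(p_hat || p) of Eq. (10)."""
    mask = p_hat > 0                    # 0 * ln(0/q) is taken as 0
    return np.sum(p_hat[mask] * np.log(p_hat[mask] / (p[mask] + eps)))

def combine(p_free, p_ci, p_hat_ijk):
    """Linear combination of Eq. (11): p_free is the MaxEnt solution
    without conditional independence, p_ci the solution with it."""
    res = minimize_scalar(
        lambda a: kl(p_hat_ijk, a * p_free + (1 - a) * p_ci),
        bounds=(0.0, 1.0), method="bounded")
    alpha = res.x
    return alpha * p_free + (1 - alpha) * p_ci
```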

3. Synthetic Case Studies

While it is not possible to exhaustively test the proposed methodology on all possible cases, a variety of quantitative tests were designed for this task. As this requires complete control of the errors in the reference dataset and in the classification results, we used a set of synthetic maps, references and "truths".

3.1. Virtual Truth

For the sake of simplicity and without loss of generality, we assumed that an existing map (Figure 2) was perfect and used it as our trusted map ($j$ index). A two-meter resolution land cover map produced by the Lifewatch project [20] was selected for this purpose. It covers the Walloon Region (southern Belgium) and includes approximately 4 billion pixels. The landscape of the Walloon Region is quite fragmented and was described using 8 main land cover classes, namely croplands, broadleaved forests, needleleaved forests, herbaceous covers, shrublands, bare soils, water bodies and artificial impervious areas.

3.2. Classified Maps

The "virtual truth" was used to generate classified maps ($i$ index) by inverting a set of confusion matrices. The classification process was mimicked using the conditional probabilities: for a pixel with true class $j$, the classified value was randomly assigned to class $i$ based on the conditional probabilities $p(i|j)$ (a minimal sketch of this sampling step is given after the list below). The confusion matrices that we used differed by their overall accuracy (a high (H) overall accuracy of about 90% and a low (L) overall accuracy of about 80%) and by three distinct patterns for the classification errors:
  • An error pattern that reproduces the confusions observed in practice, i.e., with many errors between poorly separable classes (e.g., herbaceous cover vs. cropland) and few errors between highly separable classes (e.g., a water body vs. a tree). An empirical confusion matrix $p(i,j)$, based on a quality controlled reference data collection with a field survey over the area, was used for this purpose. This pattern was selected because classification algorithms often meet the same discrimination issues, and are therefore likely to show a similar pattern despite their different performances. This pattern will be referred to as Obs.
  • A class-independent error pattern, where all off-diagonal $p(i,j)$ values are equal to the same constant. The diagonal $p(i,j)$ values were set according to the frequency of each class in the virtual truth. This pattern was selected because it was used in a previous study [3]. It will be referred to as Const.
  • A random error pattern, where all off-diagonal $p(i,j)$ values are independently selected from a uniform distribution. This pattern was selected for its lack of arbitrary structure, contrary to the constant errors of Const and the more symmetrical errors of Obs. This pattern will be referred to as Rand.
The six resulting confusion matrices are given in Appendix B.
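As mentioned above, the inversion of a confusion matrix amounts to drawing, for each pixel of true class $j$, a classified label $i$ from $p(i|j)$. A possible sketch of this step (illustrative names, not the simulation code of the study; the same logic applies to the simulation of the reference labels from $p(k|j)$ in Section 3.3.1) is:

```python
import numpy as np

def simulate_labels(true_labels, p_ij, rng=np.random.default_rng()):
    """Draw a classified label i for each pixel from p(i|j), where
    p_ij[i, j] is the target joint confusion matrix."""
    p_i_given_j = p_ij / p_ij.sum(axis=0, keepdims=True)  # columns sum to 1
    m = p_ij.shape[0]
    out = np.empty_like(true_labels)
    for j in range(m):                                    # one draw per class
        idx = np.flatnonzero(true_labels == j)
        out[idx] = rng.choice(m, size=idx.size, p=p_i_given_j[:, j])
    return out
```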

3.3. Reference Datasets

Reference datasets ($k$ index) were simulated using the trusted map and, in some cases, the classified maps. The two types of errors in the reference data were addressed separately in this study, namely geolocation errors and thematic errors.

3.3.1. Thematic Interpretation Errors

The collection of reference datasets often relies on photo-interpretation, either by experts or by crowdsourcing. The overall accuracy of various reference datasets has been assessed in a few studies, with results ranging from 11% to 100% and a mode at 80% [8]. In practice, estimating the quality of the reference dataset would ideally require a field survey or a consensus between several experts and/or high quality ancillary data. Achieving the highest possible quality on a reference dataset is however very costly, hence rarely applicable to a large number of samples.
As in the case of the synthetic maps, different OAs and different patterns of thematic errors in the reference datasets were tested with synthetic data. The various matrices of $p(j,k)$'s that were considered are described below (see Appendix C for numerical values):
  • A field-based error pattern, with $p(j,k)$ based on the assessment of operators in the study area. A high accuracy $p(j,k)$ confusion matrix was obtained by comparing a consensual point-based photo-interpretation (from 25-cm visible and infra-red orthophotos at two dates, combined with 1-m resolution LIDAR) with a field survey. The corresponding error rate in this dataset is about 3%. This pattern will be referred to as Field.
  • A class-independent error pattern, where the probability of errors is constant for all classes. Three levels of errors were considered: 10%, 5% and 2%. This pattern will be referred to as Unif.
  • A proportional error pattern, where the probability of errors is proportional to the "virtual truth" class frequency. Three levels of errors were considered: 10%, 5% and 2%. This pattern will be referred to as Prop.
  • A conditional error pattern, specific to each classified map. Contrary to the other patterns, knowledge of the $i$ and $j$ classes of the pixel is used to simulate a correlation between the reference and the classification results. To do so, 50% of the misclassified points were labelled with the same incorrect label in the reference dataset (while all other labels remained correct). This pattern will be referred to as Cor, with a mention of the map from which it is derived.
As done previously, the theoretical probabilities $p(j,k)$ were defined according to these 8 patterns. The $p(k|j)$'s were used together with the extracted true class value $j$ to simulate the reference class $k$ of each sample point.
Simulated validations ($n = 200$) of the 6 synthetic maps described in Section 3.2 were performed for each of the 8 types of imperfect reference datasets, yielding a total of 48 combinations of classified maps and reference datasets.

3.3.2. Geolocation Errors

The contamination of thematic accuracy indices by geolocation errors is a recurring issue in accuracy assessment, even if it has less impact in the case of object-based accuracy assessment [21]. Geolocation errors concern the position of the sampling units (e.g., uncertainty of the GNSS receiver on the ground) as well as the position of the pixels (e.g., uncertainty of the orthorectification). These combined errors yield a relative position error between the classified image and the reference dataset, also called co-registration error. Co-registration errors in turn lead to incorrect labelling when the geolocated sampling unit no longer matches the correct pixel.
In order to simulate co-registration errors, the X and Y coordinates of the point samples were randomly shifted based on a uniform distribution. The value of the virtual truth under the point before the shift was compared with the value of the classified map after the shift. Two maximum shift amplitudes were considered: 1 and 1.5 pixels.
The theoretical confusion matrix $p(j,k)$ could be obtained by measuring the frequencies of the class pairs before and after the shift, for all possible multiples of the pixel size and in all directions. These frequencies are weighted by the probability of being located at each position. This probability is computed as the integral of the probability density function (pdf) over each pixel of the neighborhood. Because we used a uniform distribution, the integral of the pdf over a pixel simplifies here to the proportion of the pixel that is covered by the shift, as illustrated in Figure 3. Of course, more complex (non-uniformly distributed) random shifts could be considered, thus requiring the evaluation of the integral of the corresponding pdf over each pixel (as is done for computing the contribution of an object to the signal in [22]). For the 1.5 pixel shift, the central pixel and its eight direct neighbours have the same probability of being sampled, that is 1/9. For the 1 pixel shift, the most likely pixel is the central one (i.e., no impact of the co-registration error), while the neighbouring pixels do not all have the same probability of being selected: these probabilities are equal to 1/4 for the central pixel, 1/8 for each of the two vertically and two horizontally neighbouring pixels, and 1/16 for each of the four diagonally neighbouring pixels.
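For the uniform shifts used here, the weight of each neighbouring pixel is the per-axis overlap between the shifted interval and the pixel, which leads to a separable 2D kernel. The following sketch (an illustrative helper of our own, consistent with the 1/9, 1/4, 1/8 and 1/16 weights quoted above) computes it for any maximum shift expressed in pixels:

```python
import numpy as np

def shift_kernel(s):
    """Probability that a point shifted by U(-s, s) pixels in X and Y
    (independently) lands on the pixel at integer offset (dx, dy)."""
    r = int(np.ceil(s - 0.5))                 # neighbourhood radius in pixels
    offsets = np.arange(-r, r + 1)
    # per-axis weight: overlap of [d - 0.5, d + 0.5] with [-s, s], over 2s
    w = np.clip(np.minimum(offsets + 0.5, s)
                - np.maximum(offsets - 0.5, -s), 0.0, None) / (2 * s)
    return np.outer(w, w)                     # separable 2-D kernel

print(shift_kernel(1.0))   # centre 1/4, edges 1/8, corners 1/16
print(shift_kernel(1.5))   # uniform 1/9 over the 3 x 3 neighbourhood
```

The $p(j,k)$ matrix then follows as the kernel-weighted sum of the class co-occurrence matrices measured for each integer offset.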

3.4. Impact of Erroneous Reference Data

Different methods are tested to estimate the cell values of the $\hat{p}(i,j)$ confusion matrix in the presence of errors in the reference dataset. Those estimates are compared with the theoretical $p(i,j)$ confusion matrix. The quantitative comparison is based on the primary accuracy indices used for the accuracy assessment of a map, namely the overall accuracy (OA), the producer accuracies (PA) and the user accuracies (UA). The root mean square error (RMSE) and the bias of the OA are used as the main indicators. More specific information is also provided with the average RMSE of the UAs and PAs.
These statistics are computed between sets of 200 estimated matrices for each method and the $p(i,j)$ measured with all pixels of the study area. All methods use points that are sampled according to a simple probabilistic sampling design. These point samples provide, as in standard accuracy assessment methods, $n$ joint observations of the reference ($k$) and classified ($i$) labels.
As described in the following sections, additional information about the trusted ($j$) label is obtained in a different way depending on whether the errors are thematic or geolocational.

3.4.1. Thematic Errors

Thematic errors in the reference dataset are addressed by collecting trusted ($j$) label information on top of an existing reference ($k$). We consider that the trusted label is only available for a subset of the points, because of the cost of such information. Obviously, the proposed method would be useless if trusted labels were available for all points. In this study, the quality assessment dataset consists of an 800-point sample with pairs of classified and reference labels, inside which a 100-point subsample is enriched with trusted labels.
The confusion matrix of the reference dataset versus the trusted labels can be estimated from the subsample based on the frequencies of the $(j,k)$ pairs. This 100-point $\hat{p}(j,k)$ matrix is then combined with the 800-point $\hat{p}(i,k)$ matrix to reconstruct $p(i,j)^{[\infty]}(\alpha)$ with the linear combination of MaxEnt estimates that minimizes the KL divergence from $\hat{p}(i,j,k)$, as presented in Section 2.3. These results are compared with the 100-point $\hat{p}(i,j)$ (an unbiased estimator based only on the trusted labels) and with the 800-point $\hat{p}(i,k)$ (the confusion matrix commonly used). In order to test the MaxEnt method under ideal conditions, $p(i,j)^{[\infty]}(\alpha)$ is also computed with the theoretical $p(j,k)$ values instead of their $\hat{p}(j,k)$ estimates.
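Putting the pieces together, the thematic correction workflow could be sketched as follows, reusing the hypothetical `maxent_ipf` and `combine` helpers from Section 2 (counts are turned into probability tables by simple frequency estimation):

```python
import numpy as np

def corrected_confusion(counts_ik, counts_jk, counts_ijk):
    """Corrected p(i,j) from an 800-point (i,k) sample and a 100-point
    trusted subsample providing (j,k) pairs and (i,j,k) triplets."""
    p_ik = counts_ik / counts_ik.sum()
    p_jk = counts_jk / counts_jk.sum()
    p_ijk = counts_ijk / counts_ijk.sum()
    p_free = maxent_ipf(p_ik, p_jk, cond_indep=False)  # p[inf, a+b]
    p_ci = maxent_ipf(p_ik, p_jk, cond_indep=True)     # p[inf, a+b+c]
    p_best = combine(p_free, p_ci, p_ijk)              # Eq. (11)
    return p_best.sum(axis=2)                          # marginal over k
```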

3.4.2. Geolocation Errors

In the case of geolocation errors, the $\hat{p}(j,k)$ matrix is estimated with a high precision, as explained in Section 3.3.2. However, the $\hat{p}(i,j,k)$'s cannot be estimated, because it is not possible to build $(i,j,k)$ triplets without knowing the precise position of each point (only the distribution of the geolocation errors is known). Fortunately, because of the distinct causes associated with thematic and geolocation errors, it is reasonable to assume that the errors in the maps are not correlated with the geolocation mismatches in the reference dataset. Conditional independence is therefore assumed in all cases, so that the trusted confusion matrix is given by $p(i,j)^{[\infty,a+b+c]} = \sum_k p(i,j,k)^{[\infty,a+b+c]}$. For the sake of comparison with thematic errors, $\hat{p}(i,k)$ is based on the frequencies of the $(i,k)$ observations in the 800-point sample.
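Under this assumption, the geolocation correction reduces to a single constrained fit of the sketch above, e.g. (with `p_ik` estimated from the 800-point sample and `p_jk_shift` derived from the shift kernel of Section 3.3.2; both names are illustrative):

```python
# conditional independence assumed: only the constrained MaxEnt fit is needed
p_ij = maxent_ipf(p_ik, p_jk_shift, cond_indep=True).sum(axis=2)
```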

4. Results

The results are presented separately for the thematic and geolocation errors. In each case, we first assess the discrepancies between the trusted confusion matrix (i.e., the simulated $p(i,j)$'s) and the confusion matrix estimated from an imperfect reference dataset (i.e., the $\hat{p}(i,k)$'s). The results then focus on the potential use of limited information about trusted labels in order to mitigate the impact of errors in the reference dataset using the MaxEnt approach.

4.1. Uncertainty from Thematic Errors

4.1.1. Impact of Imperfect Reference Dataset

Table 1 and Table 2 summarize the results of the simulated validation with different reference datasets (as described in Section 3.3.1) and different maps (as described in Section 3.2). The first column of each table displays the mean of the trusted overall accuracy $OA(i,j)$ of each set of simulated maps, i.e., the OA of $p(i,j)$ obtained from a reference dataset without thematic or geolocation errors. The first row of each table shows the $OA(j,k)$ of the different reference datasets, i.e., the OA of $p(j,k)$. The other cells contain the $OA(i,k)$ obtained by comparing each set of simulated maps with 10,000 points extracted from each reference dataset, i.e., the OA of $p(i,k)$. These $p(i,k)$ matrices reflect those that are observed (and most of the time published) in scientific papers. When the errors in the reference dataset are independent of the errors in the classified image (see Table 1), the $OA(i,k)$ obtained with any of the imperfect reference datasets is always underestimated with respect to the $OA(i,j)$ obtained with the trusted labels (first column of the table). The value of the bias (difference between observed and trusted OA) is primarily linked to the OA of the imperfect reference dataset. For our case studies, the underestimation ranges from 1.5% to 1.9% using a 98% accurate reference dataset, from 3.9% to 4.7% using a 95% accurate reference dataset, and from 7.4% to 9.3% using a 90% accurate reference dataset. The value of the bias is also linked to the trusted OA of the maps: for a given imperfect sample, the bias is larger when the map is more accurate. The distribution of the errors in the imperfect reference datasets has a minor impact on the bias. The results for UAs and PAs (not detailed here) are consistent with the results summarized by the OAs. The average RMSEs for UA and PA are large on the observed confusion matrices, especially for UA (up to 8.1% for PA and up to 34.8% for UA). The errors are more related to the quality of the reference datasets than to the quality of the map being validated.
In the framework of a comparison between different classifiers, accuracy assessment aims at determining the most accurate classifier out of a set of benchmarked approaches. It is therefore interesting to note that conditionally independent errors in the reference dataset did not alter the ranking of the classified maps in terms of OA. In other words, the relative quality of the various classified maps is not modified by the presence of conditionally independent errors in the reference dataset. As shown in Table 1, the most accurate map (Obs_H) has the largest observed OA whatever the reference dataset, despite a bias of about 9% in the worst case. This ranking is also preserved between Const_H and Rand_H (the observed OA of Const_H remains 0.3% larger than the observed OA of Rand_H with all samples).
On the other hand, when the errors of the reference datasets are correlated with the errors of the classification (see Table 2), the bias is always positive (i.e., the $OA(i,k)$ overestimates the $OA(i,j)$). When the same errors occur concurrently at a reference point and its underlying pixel, they are counted as a correct classification in the confusion matrix. The overestimation is thus equal to the amount of co-occurring errors introduced in the dataset (half of the errors of the map, by design in this study). This also has a marked impact on the UAs and PAs (not detailed in a table), with average RMSEs increased by 40% in several cases.
From the perspective of a comparison between several classifiers, the impact of correlated errors differs from that of conditionally independent errors. By construction, the errors in each reference dataset are correlated with only one of the maps. Table 2 clearly shows that the largest $OA(i,k)$ (in bold characters) is always observed for the map whose errors are correlated with the reference dataset. Indeed, the difference between (i) the positive bias of the map with errors correlated to the reference dataset and (ii) the negative bias of the other maps, whose errors are not correlated to the reference dataset, was always larger than the difference between the theoretical OAs of the case studies. This systematically alters the ranking of the methods. This incorrect ranking was consistently observed for all sets, even when the best (Obs_H) and the worst (Rand_L) maps (with $\Delta OA(i,j) = 12.5\%$) are validated with the reference dataset correlated to the Rand_L map.

4.1.2. Maximum Entropy Correction

In order to mitigate the effect of imperfect reference datasets on the estimation of the confusion matrix, the use of the MaxEnt approach was investigated along with other practical solutions. Table 3 provides a comparison between the tested estimators of the $p(i,j)$ matrix, namely the MaxEnt estimate $p(i,j)^{[\infty]}(\alpha)$ with a known (theoretical) $p(j,k)$ matrix, the MaxEnt approach where the $\hat{p}(j,k)$ matrix is estimated from a subsample of 100 points, and the direct estimate $\hat{p}(i,j)$ based on a subsample of 100 points with trusted labels. The $\hat{p}(i,k)$ values estimated from a sample of 800 points are also considered, as they are currently used instead of the $p(i,j)$ matrix in most scientific papers. This was done for all synthetic maps validated by all of the imperfect reference datasets. On average over all case studies, the lowest RMSE is achieved by the MaxEnt estimate $p(i,j)^{[\infty]}(\alpha)$ with a known (theoretical) $p(j,k)$ matrix (mean RMSE = 1.86). In increasing order of RMSE, one then finds the MaxEnt approach with $\hat{p}(j,k)$ estimated from a subsample of 100 points (mean RMSE = 2.92), followed by the direct estimate $\hat{p}(i,j)$ from a subsample of 100 points (mean RMSE = 3.27) and, finally, the substitution of $\hat{p}(i,j)$ by $\hat{p}(i,k)$ based on a sample of 800 points (mean RMSE = 4.94). Each method has, however, its own advantages and weaknesses, as described below.
The MaxEnt approach $p(i,j)^{[\infty]}(\alpha)$ with known $p(j,k)$, along with a sample of 800 points with $i$ and $k$ values, yields OA estimates with an RMSE ranging from 0.69 to 3.31 (second column of Table 3). It outperforms the other methods in all cases except when Obs_H (OA = 93.3%) is validated with reference samples of low quality (OA ≈ 90%); in those two cases, the direct estimate of the confusion matrix based on a small ($n = 100$) sample of trusted labels is better. The RMSE values for UAs and PAs (results not shown) are also significantly improved after applying the corrections on the confusion matrices, but the results are better for PA than for UA. The largest RMSE after correction was 6.1% for UA and 1% for PA.
When the $p(j,k)$ values are estimated from a sample of 100 points, the RMSE of the MaxEnt approach increases, with a bias of approximately 2% in the case of the 90% accurate reference (to be compared with 0.5% for the exact $p(j,k)$). Nevertheless, combining a large sample from an inaccurate reference dataset with a small sample from a trusted reference proved to be a good compromise between the cost and the efficiency of the validation process. The MaxEnt approach with estimated $\hat{p}(j,k)$ systematically yielded a smaller RMSE than using $\hat{p}(i,k)$. However, the $\hat{p}(i,j)$ estimates based on 100 points have a lower RMSE than the MaxEnt approach when the accuracy of the synthetic map is large and the accuracy of the reference dataset is small. For our synthetic study, this occurred in 12 of the 48 cases (Table 3).
The $\hat{p}(i,j)$ based on a subsample of points with trusted labels is unbiased. As seen from the theoretical variance of the OA estimates, given by
$$Var[\widehat{OA}] = OA\,(1 - OA)/n \tag{12}$$
their RMSEs increase when the OA of the map gets closer to 50% or when the sample size $n$ decreases. This is corroborated by the RMSEs of Table 3 (biases are not shown, as they are theoretically equal to zero). The RMSEs with $n = 100$ points range from 2.4% to 4% depending on the OA of the maps (the RMSE is smaller when the OA of the map is larger); for instance, for OA = 90% and $n = 100$, Equation (12) gives a standard error of $\sqrt{0.9 \times 0.1/100} = 3\%$.

4.2. Uncertainty from Geolocation Errors

4.2.1. Impact of Imperfect Reference Dataset

Table 4 and Table 5 give the joint frequencies of the labels observed at the accurate position of the samples and at their shifted position (by 1.5 pixels). Those matrices are almost symmetrical, showing no difference between a positive and a negative shift. Furthermore, the matrices with vertically and horizontally shifted pixels are similar to each other (with less than 0.1% difference between their OAs), and the same similarity is observed for the matrices with diagonally shifted pixels. This leads to the conclusion that the landscape in the study area is isotropic, i.e., the probability of error does not depend on the direction of the shift. The differences between the horizontal (Table 4) and the diagonal (Table 5) shifts are due to the larger distances to the centers of the pixels along the diagonals.
As explained in Section 3.3.2, the four median, the four diagonal and the central confusion matrices are then combined to compute the $p(j,k)$ confusion matrices of geolocation for the 1 pixel and 1.5 pixel shifts. The result for the 1.5 pixel shift is illustrated in Table 6. The OAs of these matrices represent the probability that a randomly shifted point falls in the land cover category where it is supposed to be. As expected, the OA is smaller for the maximum shift of 1.5 pixels ($OA(j,k) = 88.76\%$) than for the maximum shift of 1 pixel ($OA(j,k) = 90.76\%$).
The probability of falling on another label than the one under the exact location of the sampling point (what we call geolocation errors) varies across classes, as can be seen in the last row of Table 6. Those differences are mainly linked to the landscape structure associated with these classes rather than to the total area covered by the class: land cover classes that occur in large patches are less affected than sparsely distributed ones. For instance, crop fields and herbaceous patches cover approximately the same area, but the geolocation error for crop fields (4%) is much lower than for herbaceous patches (12%), as the latter are much smaller and more dispersed. On the other hand, and despite the presence of hundreds of small ponds, most of the water pixels are grouped inside a few big lakes. The water class is therefore little affected by geolocation errors, even if it covers a small area (≈1%) by comparison with, e.g., trees or herbaceous covers.
The geolocation mismatch observed in the geolocation confusion matrices contaminates the thematic confusion matrix. For a simulated classification with an overall accuracy of 93.3% according to the $p(i,j)$ matrix, the bias due to the geolocation errors cannot be neglected: it is equal to 7.6% and 9.7% for the 1 and 1.5 maximum pixel shifts, respectively.

4.2.2. Maximum Entropy Correction

In the case of geolocation mismatches, a point sample does not provide joint $(i,j)$ information, because the value and the orientation of the shift are unknown for a specific location. It is therefore not possible to directly obtain $\hat{p}(i,j)$ estimates in this situation. The MaxEnt estimates $p^{[\infty,a+b+c]}$ can however be obtained by taking advantage of the $p(j,k)$ matrix. Table 6 illustrates one of the matrices used as $\hat{p}(j,k)$ by the MaxEnt approach. The test is performed here under the worst conditions for the MaxEnt approach, i.e., with a high OA of the synthetic map (Obs_H) and with OAs of approximately 89% and 91% as derived from the $p(j,k)$ values.
The bias of the OAs derived from the $p(i,j)^{[\infty,a+b+c]}$ matrix is equal to 0.26% and 0.35% for the 2 and 3 pixel neighborhoods, respectively. These values are close to the usually accepted uncertainty and are better than the residual bias observed in the case of thematic errors on the same synthetic map. The RMSEs for UAs and PAs also strongly decrease. For the UAs, they drop from 18.0% to 2.4% and from 13.3% to 1.8% for the 3 and 2 pixel neighborhoods, respectively. Similarly, for the PAs, they drop from 12.4% to 2.0% and from 9.9% to 1.6% for the 3 and 2 pixel neighborhoods, respectively.

5. Discussion

This paper highlighted the substantial misestimation of the primary quality indices (i.e., overall accuracy, user's and producer's accuracies) when the reference dataset is not error-free. Errors in the reference dataset lead to an underestimation of the map quality when these errors are conditionally independent, and to an overestimation otherwise. The absolute value of the bias is often significantly larger than the confidence interval on the estimated indices, which means that the quantity and the quality of the reference samples are equally important to build a reliable confusion matrix. The MaxEnt approach proposed in this paper reduced the estimation bias in both cases (correlated and conditionally independent errors) for all maps. However, it requires some knowledge about the actual quality of the reference dataset in order to be optimal.
Considering the cost of collecting high quality reference datasets, a pragmatic approach is often recommended [1]. With this goal in mind, Section 5.1 focuses on the selection of the best classification algorithm, while Section 5.2 discusses ways to evaluate the absolute quality of maps and to improve area estimates. Accordingly, the sampling strategy should be optimized for these two main usages of the confusion matrix. The contamination by geolocation errors is also discussed in Section 5.3.

5.1. Comparing Classification Outputs

When it comes to assessing the performance of a new classifier in comparison with state-of-the-art classifiers, the focus is mainly on the comparison of the classification outputs, typically based on their respective overall accuracies. In parallel, methods for determining the optimal sample size focus on the statistical significance of a difference in the proportion of correctly allocated cases [23].
In this context, our results highlight the risk of invalidating the conclusions of these statistical tests due to the presence of biases in the estimated confusion matrices. Indeed, if the errors in the reference dataset are correlated with the errors of one of the classification algorithms, this algorithm will be systematically favoured in comparison with the other ones, due to an overestimation of its overall accuracy. This should be kept in mind, especially in the case of classification methods that make use of correlated calibration and validation datasets.
The MaxEnt approach is able to reduce the absolute value of the bias for the validation of all classifiers (with either correlated or conditionally independent errors), hence making the comparison fairer. However, there is no theoretical expression for the variance of the prediction with the MaxEnt method, and the bias is not completely removed by the corrections. Consequently, a rigorous test of the statistical significance of differences between OAs remains impractical.
Fortunately, a ranking of the classification algorithms is reliable when using $\hat{p}(i,k)$ estimates built from a large (yet uncertain) reference dataset that offers some guarantee about the conditional independence of the errors. In other words, it is possible to determine the best classifier if and only if the (potential) errors in the validation data are conditionally independent of the errors of all classifiers. Under this assumption, the statistical significance of the superiority of one classifier can be tested with the OA derived from $\hat{p}(i,k)$, and the MaxEnt approach additionally provides the unbiased value of this OA.

5.2. Map Quality Assessment and Area Estimates

Quality indices derived from the confusion matrix aim at deciding whether the quality requirements of the map for end users are fulfilled. Clearly, the presence of a bias larger than the confidence interval on these quality indices invalidates the use of statistical tests for deciding whether the map can be considered acceptable by its users. Our results have shown, on the one hand, that the bias is a function of the amount of errors in the reference dataset and, on the other hand, that this bias is often larger than the recommended confidence interval on overall accuracies. The $p(i,k)$'s should therefore not be used as a substitute for the $p(i,j)$'s in general. As illustrated in Section 4, this substitution can lead to a strong underestimation or overestimation of the quality indices.
Two methods can be considered to estimate the $p(i,j)$'s: (i) a direct frequency estimate using a small reference dataset of the highest possible reliability, or (ii) the MaxEnt approach that combines a large but imperfect reference dataset with information about the quality of this dataset. In both cases, it is necessary to collect costly high quality reference data, but the first method puts all the effort on the high quality reference, while the second takes advantage of cheaper material such as, e.g., crowdsourcing or even existing maps. It is not possible to decide in advance which method will be the most efficient, because this depends on the accuracy of the reference dataset and on the distribution of the classes in the map. On average, our results showed that the MaxEnt method should prevail, but the $\hat{p}(i,j)$'s could be used when the accuracy of the map is very high while the accuracy of the reference dataset is very low.
Another use of the confusion matrix is the estimation of the area of each class based on pixel counting adjusted by the UA [24], which is considered the state-of-the-art correction [1]. The correction of the area estimates is also directly impacted by the presence of errors in the confusion matrix, and the uncertainty about the UAs is then of paramount importance. When the errors of the map are correlated with the errors of the reference dataset, these errors reinforce each other instead of cancelling out. When the errors are conditionally independent, the consequences of the state-of-the-art correction of the area estimates are usually not predictable, although our results showed that there are more geolocation errors on small and sparsely distributed classes than on large and compact classes. In this case, the area of the small and dispersed patches is therefore likely to be even more underestimated after applying the area correction. Considering the cost that such errors on areas can have for decision making (e.g., ecosystem service assessment [25]), a correction of the confusion matrix is strongly recommended in this case. Again, the MaxEnt approach would be interesting in this situation, as it positively impacts the estimation of the UAs.

5.3. Geolocation Error

Mitigating the effect of geolocation errors is the most promising use of the proposed MaxEnt approach, as the conditional independence of these errors is typically fulfilled and the error associated with the various positioning tools is usually well known. A good understanding of these geolocation errors is however necessary in order to achieve appropriate corrections, and the results depend on the spatial resolution and on the sources of errors. A uniform spatial distribution of these errors was chosen in this study for the sake of simplicity, though it is not necessarily a realistic choice. In real case studies, it is thus necessary to rigorously describe the distribution of those geolocation errors in order to estimate the $p(j,k)$'s at best.
For this goal, the classified map or an existing map with high spatial precision can be used, as long as the spatial structure of the landscape is preserved (e.g., no salt-and-pepper effect induced by the classification itself). Those maps are indeed likely to preserve the size of the patches and the neighbouring information around each class. Directional effects could also be taken into account, as the orientation of edges with respect to the sun and viewing angles can lead to various geolocation errors for vertical objects (trees, buildings, etc.) [26]. In parallel, the RMSE of the orthorectification model and/or of the GPS receiver in the field can provide information about the distribution of the geolocation errors around each point.

5.4. Practical Recommendation

Although the proposed MaxEnt method successfully provided sound estimates of the $p(i,j)$'s, limiting the presence of errors in the reference dataset should remain the priority. The consequences of correlated errors in the validation process are more detrimental than those of conditionally independent errors: as explained before, correlated errors can lead to misleading conclusions when comparing the performance of various classification algorithms, they amplify the errors of the area estimates and they yield over-optimistic results about the quality of a map.
The risk of correlated errors is largely reduced if good practices in geographic data accuracy assessment are properly followed. The validation dataset should not be a subset of the training/calibration dataset, as this automatically creates correlated errors. Spatial auto-correlation should also be avoided, e.g., by not selecting validation points inside polygons that are used for calibration or in the neighborhood of calibration points. This is particularly important with convolutional neural networks and other context-based classifiers: because they use neighbourhoods by design, spatially correlated reference errors are likely to be correlated with the classification errors. If possible, using a completely different source of validation is also recommended, e.g., ground data, higher resolution images or images at different dates.
Finally, it is worth noting that the best way to provide information about the quality of the reference dataset is to use a set of $(i,j,k)$ triplets, as it allows the user to test whether the errors are correlated or not. However, providing the $\hat{p}(j,k)$'s remains useful if the conditional independence of the errors can be assumed, as in the case of geolocation errors. The MaxEnt method can then be applied to reduce the bias of the $\hat{p}(i,j)$ estimates, hence improving the reliability of the accuracy assessment.

6. Conclusions

In this study, we provide quantitative results about the key role that accurate reference datasets play in the validation of maps. The need for such accurate reference datasets is often forgotten when the focus is on the precision of the accuracy assessment. However, the presence of a bias on the overall accuracy invalidates the conclusions of statistical tests comparing proportions of correctly classified pixels. In particular, we show why it is not possible to conclude about the superiority of a classifier when the independence between the validation dataset and the calibration dataset is not guaranteed.
A maximum entropy method is thus proposed to mitigate the impact of an erroneous reference dataset on the confusion matrix. Based on a variety of simulated cases, our results emphasize the benefit and the efficiency of this approach, both in the case of thematic errors in the reference dataset and in the case of geolocation errors inducing label mismatches. The method relies on the collection of a relatively small number of trusted labels in addition to the usual large reference dataset. While collecting additional trusted labels is usually expensive, the benefits for accuracy assessment are much larger than those of collecting a larger number of questionable reference data.
Although we jointly combined a large variety of synthetic situations for the sources of errors (i.e., various error patterns, absolute errors, thematic and geolocation errors), the superiority of the MaxEnt approach has not been proven for all possible cases nor on theoretical grounds. Nevertheless, using information theory opens new perspectives for map validation with imperfect samples at a reasonable cost.

Author Contributions

Both authors contributed equally. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Belgian Science Policy in the frame of the biodivERsA (Woodnet project).

Acknowledgments

The authors thank the anonymous reviewers and the guest editors for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of the Equivalences

In Section 2, under the conditional independence hypothesis (see Equation (2), here denoted as Equation (A1)), we have

$$p(i,k \mid j) = p(i \mid j)\; p(k \mid j) \quad \forall\, i,j,k \tag{A1}$$

In parallel, from the definition of conditional probability, we have

$$p(i,k \mid j) \triangleq \frac{p(i,j,k)}{p(j)} \tag{A2}$$

so that equating Equations (A2) and (A1) leads to

$$\frac{p(i,j,k)}{p(j)} = p(i \mid j)\; p(k \mid j) \quad \forall\, i,j,k \tag{A3}$$

Using again the definition of conditional probability, we also have

$$p(i \mid j) \triangleq \frac{p(i,j)}{p(j)} \;\;\forall\, i,j\,; \qquad p(k \mid j) \triangleq \frac{p(j,k)}{p(j)} \;\;\forall\, j,k \tag{A4}$$

so that plugging Equation (A4) into Equation (A3) leads to

$$\frac{p(i,j,k)}{p(j)} = \frac{p(i,j)}{p(j)}\,\frac{p(j,k)}{p(j)} \quad \forall\, i,j,k \tag{A5}$$

which after simplification leads to the first part of Equation (3) in Section 2, i.e.,

$$p(i,j,k) = \frac{p(i,j)\,p(j,k)}{p(j)} \quad \forall\, i,j,k \tag{A6}$$

As, again from the definition of conditional probability, we have

$$p(k \mid i,j) \triangleq \frac{p(i,j,k)}{p(i,j)} \tag{A7}$$

using Equation (A7) in Equation (A6) leads to the second part of Equation (3), with

$$p(k \mid i,j) = \frac{p(j,k)}{p(j)} \quad \forall\, i,j,k \tag{A8}$$

Appendix B. Confusion Matrices (i,j) of Classified Maps

Table A1. Confusion matrix with case study errors and medium overall accuracy.

         Crop   NeedleL  BroadL  Herb   Shrub  Artif  BareS  Water
Crop     228.0  0        0       38.6   0      0      0      0
NeedleL  0      183.0    22.4    1.37   0      3.64   0      0
BroadL   0      13.6     70.2    0      0      1.89   0      0
Herb     48.0   2.54     0       262.0  6.29   0      1.08   0
Shrub    0      2.54     1.12    3.95   44.2   0      0      0
Artif    2.44   0        0       0      0      38.5   0      0
BareS    0      0        0       2.59   0      9.27   0.22   0
Water    0      0        0       0      0      6.64   0      5.64
Table A2. Confusion matrix with case study errors and large overall accuracy.

         Crop   NeedleL  BroadL  Herb   Shrub  Artif  BareS  Water
Crop     259.0  0        0       14.2   0      0      0      0
NeedleL  0      195.0    9.27    0.53   0      1.56   0      0
BroadL   0      5.01     83.9    0      0      0.9    0      0
Herb     19.0   1.13     0       291.0  2.24   0      0.78   0
Shrub    0      1.06     0.49    1.55   48.2   0      0      0
Artif    0.94   0        0       0      0      50.4   0      0
BareS    0      0        0       1.06   0      4.12   0.52   0
Water    0      0        0       0      0      3.03   0      5.64
Table A3. Confusion matrix with random errors and large overall accuracy.

         Crop   NeedleL  BroadL  Herb   Shrub  Artif  BareS  Water
Crop     267.0  2.34     2.43    1.49   0.49   0.2    0.22   0.15
NeedleL  1.12   189.0    0.93    3.1    2.34   2.69   0.29   0.58
BroadL   3.18   2.53     86.2    1.37   2.44   0.73   0.14   0.53
Herb     3.5    3.38     0.73    292.0  0.08   0.2    0.31   0.68
Shrub    1.43   0.17     1.4     3.57   38.4   1.73   0.04   0.86
Artif    0.43   1.84     0.72    3.49   2.66   50.4   0.12   0.67
BareS    0.05   1.43     0.18    3.24   3.0    1.93   0.1    0.83
Water    2.26   1.28     1.11    0      1.08   2.06   0.08   1.34
Table A4. Confusion matrix with random errors and medium overall accuracy.

| | Crop | NeedleL | BroadL | Herb | Shrub | Artif | BareS | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 234.0 | 5.93 | 4.02 | 7.02 | 3.73 | 4.49 | 0.27 | 0.28 |
| NeedleL | 5.88 | 165.0 | 3.65 | 1.23 | 3.96 | 4.03 | 0 | 1.17 |
| BroadL | 8.32 | 9.52 | 65.8 | 6.23 | 5.52 | 5.78 | 0.33 | 1.31 |
| Herb | 2.5 | 4.86 | 1.23 | 277.0 | 3.53 | 5.57 | 0.07 | 0.68 |
| Shrub | 7.38 | 3.34 | 3.99 | 5.33 | 22.4 | 1.79 | 0.1 | 0.15 |
| Artif | 6.05 | 4.49 | 7.35 | 0.52 | 1.09 | 32.0 | 0.09 | 0.04 |
| BareS | 6.29 | 8.48 | 5.09 | 7.9 | 4.54 | 0.77 | 0.06 | 1.21 |
| Water | 8.45 | 0.23 | 2.52 | 2.49 | 5.7 | 5.55 | 0.38 | 0.8 |
Table A5. Confusion matrix with nearly constant errors and large overall accuracy.

| | Crop | NeedleL | BroadL | Herb | Shrub | Artif | BareS | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 266.0 | 1.8 | 1.74 | 2.07 | 1.53 | 1.68 | 0.1 | 0.41 |
| NeedleL | 1.76 | 189.0 | 1.61 | 2.25 | 1.39 | 1.42 | 0.18 | 0.56 |
| BroadL | 1.75 | 1.65 | 81.3 | 2.04 | 1.38 | 1.86 | 0.14 | 0.56 |
| Herb | 1.84 | 1.94 | 1.94 | 294.0 | 1.45 | 1.87 | 0.22 | 0.62 |
| Shrub | 1.65 | 1.95 | 1.81 | 1.98 | 39.6 | 1.51 | 0.14 | 0.54 |
| Artif | 1.83 | 1.95 | 1.85 | 1.91 | 1.8 | 48.1 | 0.18 | 0.46 |
| BareS | 1.82 | 1.73 | 1.78 | 2.31 | 1.63 | 1.71 | 0.14 | 0.51 |
| Water | 1.87 | 1.92 | 1.64 | 1.7 | 1.7 | 1.8 | 0.2 | 1.98 |
Table A6. Confusion matrix with nearly constant errors and medium overall accuracy.

| | Crop | NeedleL | BroadL | Herb | Shrub | Artif | BareS | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 243.0 | 4.99 | 4.22 | 5.75 | 3.61 | 3.91 | 0.22 | 0.75 |
| NeedleL | 5.36 | 166.0 | 4.09 | 5.22 | 3.24 | 3.4 | 0.2 | 0.6 |
| BroadL | 5.35 | 5.14 | 64.3 | 5.22 | 3.42 | 3.49 | 0.21 | 0.68 |
| Herb | 5.19 | 4.88 | 4.66 | 272.0 | 3.31 | 3.71 | 0.15 | 0.6 |
| Shrub | 4.94 | 5.15 | 4.01 | 5.15 | 26.8 | 3.46 | 0.2 | 0.62 |
| Artif | 5.49 | 5.01 | 4.12 | 5.03 | 3.47 | 35.2 | 0.16 | 0.82 |
| BareS | 5.23 | 5.23 | 4.37 | 4.77 | 3.51 | 3.37 | 0.02 | 0.83 |
| Water | 4.48 | 5.24 | 3.94 | 4.92 | 3.14 | 3.41 | 0.14 | 0.74 |

Appendix C. Confusion Matrices (j,k) of Reference Datasets

Table A7. Confusion matrix for the reference obtained by photointerpretation and validated in the field.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 265.0 | 0 | 0 | 13.8 | 0 | 0 | 0 | 0 |
| NeedleL | 0 | 202.0 | 0 | 0 | 0 | 0 | 0 | 0 |
| BroadL | 0 | 4.88 | 88.8 | 0 | 0 | 0 | 0 | 0 |
| Herbac | 9.28 | 0 | 0 | 299.0 | 0 | 0 | 0 | 0 |
| Shrub | 0 | 0 | 0 | 0 | 50.5 | 0 | 0 | 0 |
| Artif | 0 | 0 | 0 | 0 | 0 | 60.0 | 0 | 0 |
| Bare S | 0 | 0 | 0 | 0 | 0 | 0 | 1.3 | 0 |
| Water | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.64 |
Table A8. Confusion matrix for the reference with uniform error values for each class and overall accuracy of 90%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 252.0 | 3.82 | 3.58 | 4.13 | 3.78 | 4.11 | 3.71 | 4.01 |
| NeedleL | 2.83 | 181.0 | 2.75 | 2.86 | 2.99 | 3.12 | 3.25 | 2.71 |
| BroadL | 1.33 | 1.37 | 84.4 | 1.39 | 1.29 | 1.33 | 1.34 | 1.19 |
| Herbac | 4.65 | 4.23 | 4.1 | 278.0 | 4.32 | 4.3 | 4.28 | 4.25 |
| Shrub | 0.57 | 0.8 | 0.65 | 0.64 | 45.4 | 0.7 | 0.82 | 0.86 |
| Artif | 0.94 | 0.86 | 0.84 | 0.9 | 0.79 | 53.9 | 0.85 | 0.95 |
| Bare S | 0.02 | 0.04 | 0.03 | 0 | 0.01 | 0 | 1.16 | 0.04 |
| Water | 0.08 | 0.09 | 0.01 | 0.13 | 0.11 | 0.07 | 0.09 | 5.06 |
Table A9. Confusion matrix for the reference with uniform error values for each class and overall accuracy of 95%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 265.0 | 1.92 | 1.75 | 2.23 | 1.98 | 2.14 | 2.05 | 1.99 |
| NeedleL | 1.48 | 192.0 | 1.15 | 1.63 | 1.48 | 1.38 | 1.43 | 1.7 |
| BroadL | 0.78 | 0.69 | 88.7 | 0.75 | 0.68 | 0.63 | 0.72 | 0.71 |
| Herbac | 2.22 | 2.13 | 2.42 | 292.0 | 2.06 | 2.38 | 2.28 | 2.16 |
| Shrub | 0.29 | 0.36 | 0.31 | 0.36 | 48.2 | 0.34 | 0.2 | 0.41 |
| Artif | 0.4 | 0.39 | 0.35 | 0.43 | 0.45 | 57.3 | 0.32 | 0.39 |
| Bare S | 0 | 0.01 | 0 | 0.01 | 0.01 | 0.02 | 1.23 | 0.02 |
| Water | 0.03 | 0.06 | 0.08 | 0.01 | 0.05 | 0.05 | 0.07 | 5.29 |
Table A10. Confusion matrix for the reference with uniform error values for each class and overall accuracy of 98%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 273.0 | 0.92 | 0.66 | 0.88 | 0.86 | 0.85 | 0.86 | 0.81 |
| NeedleL | 0.5 | 198.0 | 0.67 | 0.58 | 0.58 | 0.63 | 0.61 | 0.59 |
| BroadL | 0.26 | 0.28 | 91.7 | 0.27 | 0.26 | 0.33 | 0.33 | 0.21 |
| Herbac | 1.01 | 0.98 | 0.85 | 302.0 | 0.9 | 0.93 | 0.91 | 0.91 |
| Shrub | 0.18 | 0.05 | 0.14 | 0.15 | 49.5 | 0.11 | 0.17 | 0.16 |
| Artif | 0.13 | 0.28 | 0.14 | 0.15 | 0.2 | 58.8 | 0.19 | 0.14 |
| Bare S | 0 | 0 | 0.01 | 0 | 0.01 | 0.01 | 1.27 | 0 |
| Water | 0.03 | 0.02 | 0.01 | 0.05 | 0 | 0.03 | 0.02 | 5.48 |
Table A11. Confusion matrix for the reference with error values proportional to (j,k) class frequencies and overall accuracy of 90%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 251.0 | 7.88 | 3.9 | 11.8 | 2.11 | 2.21 | 0.03 | 0.21 |
| NeedleL | 7.31 | 182.0 | 2.59 | 7.6 | 1.14 | 1.54 | 0.03 | 0.11 |
| BroadL | 2.93 | 2.07 | 84.3 | 3.18 | 0.49 | 0.64 | 0.03 | 0.07 |
| Herbac | 13.0 | 8.15 | 4.1 | 278.0 | 2.58 | 2.56 | 0.02 | 0.18 |
| Shrub | 1.43 | 1.06 | 0.49 | 1.65 | 45.5 | 0.32 | 0.02 | 0.03 |
| Artif | 1.71 | 1.41 | 0.63 | 2.12 | 0.26 | 53.8 | 0.01 | 0.06 |
| Bare S | 0.07 | 0.03 | 0.01 | 0.03 | 0.02 | 0 | 1.14 | 0 |
| Water | 0.16 | 0.08 | 0.07 | 0.17 | 0.04 | 0.02 | 0 | 5.1 |
Table A12. Confusion matrix for the reference with error values proportional to (j,k) class frequencies and overall accuracy of 95%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 265.0 | 4.25 | 2.06 | 5.55 | 1.04 | 1.05 | 0.02 | 0.12 |
| NeedleL | 3.56 | 192.0 | 1.52 | 3.63 | 0.54 | 0.84 | 0 | 0.07 |
| BroadL | 1.58 | 0.85 | 88.9 | 1.72 | 0.22 | 0.33 | 0.02 | 0.01 |
| Herbac | 6.14 | 4.55 | 1.69 | 293.0 | 1.0 | 1.13 | 0 | 0.14 |
| Shrub | 0.74 | 0.47 | 0.26 | 0.59 | 48.2 | 0.21 | 0 | 0.02 |
| Artif | 0.88 | 0.58 | 0.19 | 0.88 | 0.12 | 57.3 | 0.02 | 0.02 |
| Bare S | 0.01 | 0.02 | 0 | 0.01 | 0.01 | 0 | 1.25 | 0 |
| Water | 0.12 | 0.03 | 0.01 | 0.08 | 0 | 0.02 | 0 | 5.38 |
Table A13. Confusion matrix for the reference with errors proportional to (j,k) class frequencies and overall accuracy of 98%.

| | Crop | NeedleL | BroadL | Herbac | Shrub | Artif | Bare S | Water |
|---|---|---|---|---|---|---|---|---|
| Crop | 273.0 | 1.84 | 0.72 | 2.46 | 0.45 | 0.42 | 0 | 0.03 |
| NeedleL | 1.4 | 198.0 | 0.48 | 1.66 | 0.28 | 0.35 | 0.02 | 0 |
| BroadL | 0.55 | 0.51 | 91.8 | 0.6 | 0.11 | 0.09 | 0 | 0.01 |
| Herbac | 2.8 | 1.58 | 0.77 | 302.0 | 0.53 | 0.57 | 0.01 | 0.06 |
| Shrub | 0.25 | 0.15 | 0.06 | 0.29 | 49.6 | 0.06 | 0 | 0 |
| Artif | 0.34 | 0.2 | 0.14 | 0.39 | 0.05 | 58.9 | 0 | 0.01 |
| Bare S | 0 | 0 | 0 | 0.01 | 0 | 0 | 1.29 | 0 |
| Water | 0.01 | 0.01 | 0.01 | 0.04 | 0 | 0 | 0.01 | 5.56 |

References

  1. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57.
  2. Comber, A.; Fisher, P.; Brunsdon, C.; Khmag, A. Spatial analysis of remote sensing image classification accuracy. Remote Sens. Environ. 2012, 127, 237–246.
  3. Carlotto, M.J. Effect of errors in ground truth on classification accuracy. Int. J. Remote Sens. 2009, 30, 4831–4849.
  4. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
  5. Brannstrom, C.; Filippi, A. Remote classification of Cerrado (Savanna) and agricultural land covers in northeastern Brazil. Geocarto Int. 2008, 23, 109–134.
  6. Foody, G.M. Assessing the accuracy of land cover change with imperfect ground reference data. Remote Sens. Environ. 2010, 114, 2271–2285.
  7. Radoux, J.; Waldner, F.; Bogaert, P. How response designs and class proportions affect the accuracy of validation data. Remote Sens. 2020, 12, 257.
  8. Van Coillie, F.M.; Gardin, S.; Anseel, F.; Duyck, W.; Verbeke, L.P.; De Wulf, R.R. Variability of operator performance in remote-sensing image interpretation: The importance of human and external factors. Int. J. Remote Sens. 2014, 35, 754–778.
  9. Powell, R.; Matzke, N.; De Souza, C.; Clark, M.; Numata, I.; Hess, L.; Roberts, D. Sources of error in accuracy assessment of thematic land-cover maps in the Brazilian Amazon. Remote Sens. Environ. 2004, 90, 221–234.
  10. See, L.; Comber, A.; Salk, C.; Fritz, S.; Van Der Velde, M.; Perger, C.; Schill, C.; McCallum, I.; Kraxner, F.; Obersteiner, M. Comparing the quality of crowdsourced data contributed by expert and non-experts. PLoS ONE 2013, 8, e69958.
  11. Enøe, C.; Georgiadis, M.P.; Johnson, W.O. Estimation of sensitivity and specificity of diagnostic tests and disease prevalence when the true disease state is unknown. Prev. Vet. Med. 2000, 45, 61–81.
  12. Espeland, M.A.; Handelman, S.L. Using latent class models to characterize and assess relative error in discrete measurements. Biometrics 1989, 45, 587–599.
  13. Hui, S.L.; Zhou, X.H. Evaluation of diagnostic tests without gold standards. Stat. Methods Med. Res. 1998, 7, 354–370.
  14. Sarmento, P.; Carrão, H.; Caetano, M.; Stehman, S. Incorporating reference classification uncertainty into the analysis of land cover accuracy. Int. J. Remote Sens. 2009, 30, 5309–5321.
  15. Kapur, J.N. Maximum Entropy Models in Science and Engineering; John Wiley & Sons: New Delhi, India, 1989.
  16. Wu, N. The Maximum Entropy Method; Springer Science & Business Media: Berlin, Germany, 2012; Volume 32.
  17. Fienberg, S.E. An iterative procedure for estimation in contingency tables. Ann. Math. Stat. 1970, 41, 907–917.
  18. Barthélemy, J.; Suesse, T. mipfp: An R Package for Multidimensional Array Fitting and Simulating Multivariate Bernoulli Distributions. J. Stat. Softw. Code Snippets 2018, 86, 1–20.
  19. Forthomme, D. Iterative Proportional Fitting for Python with N Dimensions. GitHub 2019. Available online: https://github.com/Dirguis/ipfn (accessed on 14 December 2020).
  20. Radoux, J.; Bourdouxhe, A.; Coos, W.; Dufrêne, M.; Defourny, P. Improving Ecotope Segmentation by Combining Topographic and Spectral Data. Remote Sens. 2019, 11, 354.
  21. Radoux, J.; Bogaert, P. Good practices for object-based accuracy assessment. Remote Sens. 2017, 9, 646.
  22. Radoux, J.; Chomé, G.; Jacques, D.; Waldner, F.; Bellemans, N.; Matton, N.; Lamarche, C.; d’Andrimont, R.; Defourny, P. Sentinel-2’s potential for sub-pixel landscape feature detection. Remote Sens. 2016, 8, 488.
  23. Foody, G.M. Sample size determination for image classification accuracy assessment and comparison. Int. J. Remote Sens. 2009, 30, 5273–5291.
  24. Stehman, S.V. Estimating area from an accuracy assessment error matrix. Remote Sens. Environ. 2013, 132, 202–211.
  25. Foody, G.M. Valuing map validation: The need for rigorous land cover map accuracy assessment in economic valuations of ecosystem services. Ecol. Econ. 2015, 111, 23–28.
  26. Radoux, J.; Defourny, P. A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery. Remote Sens. Environ. 2007, 110, 468–475.
Figure 1. Illustration of the 3D table of p(i,j,k) probabilities with m = 3 classes, where the i, j and k indices refer to the classification, the trusted labels and the reference, respectively. The constraints are the 2D marginal tables of p̂(i,k)’s and p̂(j,k)’s. What is sought is the 2D marginal table of p(i,j)’s.
Figure 2. Land cover information used as a virtual truth to illustrate the method.
Figure 3. Area proportion occupied by neighbouring pixels for uniformly distributed shifts from the center of the pixel, with a maximum shift of 1 pixel in (a) and 1.5 pixels in (b).
Table 1. Overall accuracies (OA) derived from the synthetic maps validated using reference datasets with conditionally independent errors. The first line refers to p(j,k), the first column refers to p(i,j), and the other cells refer to p(i,k). Maximum values for each type of reference dataset are in bold. The maps are sorted by their OA against the trusted reference. Const refers to constant errors on the off-diagonal cells, Rand to randomly selected errors, and Obs to error rates based on a real case study. For the reference datasets, Unif corresponds to uniformly distributed errors, Prop to error rates proportional to the class frequency, and Field to errors observed in a real case study. The letters L and H refer to low and high quality, respectively.

| Maps \ References | Trusted | Unif L | Unif H | Prop L | Prop H | Field |
|---|---|---|---|---|---|---|
| Trusted | **100** | **90.1** | **95.1** | **90.0** | **94.9** | **97.2** |
| Const L | 79.8 | 72.2 | 76.0 | 72.1 | 75.9 | 77.5 |
| Rand L | 80.8 | 73.1 | 77.0 | 73.0 | 76.9 | 78.5 |
| Obs L | 83.2 | 75.3 | 79.3 | 75.3 | 79.2 | 81.4 |
| Rand H | 92.0 | 83.1 | 87.6 | 82.9 | 87.5 | 89.5 |
| Const H | 92.4 | 83.4 | 87.9 | 83.2 | 87.8 | 89.8 |
| Obs H | 93.3 | 84.2 | 88.8 | 84.1 | 88.7 | 90.9 |
Table 2. Overall accuracies (OA) derived from the validation of synthetic maps with reference datasets correlated with the errors on the map. The first line refers to p(j,k), the first column refers to p(i,j), and the other cells refer to p(i,k). Maximum values for each type of imperfect reference dataset are in bold. The maps are sorted by their OA against the trusted reference. Const refers to constant errors on the off-diagonal cells, Rand to randomly selected errors, and Obs to error rates based on a real case study. The letters L and H refer to low and high quality, respectively. For the reference datasets, 50% of the errors are copied from the corresponding classification indicated by the subscript.

| Maps \ References | Trusted | Cor Const L | Cor Rand L | Cor Obs L | Cor Rand H | Cor Const H | Cor Obs H |
|---|---|---|---|---|---|---|---|
| Trusted | 100 | **89.9** | **90.4** | 91.5 | **96.0** | **96.2** | 96.6 |
| Const L | 79.8 | 89.8 | 73.2 | 73.4 | 77.2 | 77.4 | 77.3 |
| Rand L | 80.8 | 73.9 | **90.4** | 74.4 | 78.2 | 78.3 | 78.3 |
| Obs L | 83.2 | 75.3 | 75.6 | **91.7** | 80.1 | 80.2 | 81.0 |
| Rand H | 92.0 | 83.3 | 83.8 | 84.4 | **96.0** | 89.0 | 89.1 |
| Const H | 92.4 | 83.7 | 84.1 | 84.7 | 89.1 | **96.2** | 89.4 |
| Obs H | 93.3 | 84.2 | 84.6 | 86.0 | 89.7 | 89.9 | **96.7** |
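The mechanism behind Tables 1 and 2 can be reproduced with a few lines of code. The toy simulation below is our own illustration (class count, error rates and the reference designs are arbitrary simplifications of those used in the tables): reference errors independent of the map deflate the apparent OA, whereas a reference that copies half of the map's errors inflates it.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 100_000, 4
truth = rng.integers(0, m, n)

def corrupt(labels, err_rate):
    """Flip a fraction err_rate of labels to a random *other* class."""
    out = labels.copy()
    flip = rng.random(n) < err_rate
    out[flip] = (out[flip] + rng.integers(1, m, flip.sum())) % m
    return out

mapped = corrupt(truth, 0.20)        # map with a true OA of 80%
ref_indep = corrupt(truth, 0.10)     # reference errors independent of the map
ref_corr = truth.copy()              # reference copying half of the map errors
err = mapped != truth
copied = err & (rng.random(n) < 0.5)
ref_corr[copied] = mapped[copied]

print("true OA:    ", (mapped == truth).mean())      # ~0.80
print("OA vs indep:", (mapped == ref_indep).mean())  # < 0.80 (underestimated)
print("OA vs corr: ", (mapped == ref_corr).mean())   # > 0.80 (overestimated)
```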
Table 3. Comparison between the different methods for estimating the confusion matrix of a map. Each simulated map (first column) is validated with each reference dataset (second column) with conditionally independent errors, as well as with the reference dataset correlated to it by design (named Cor MapName). The RMSE (n = 200) of the estimated ÔA(i,j) compared with the theoretical OA(i,j) is provided in columns 3 to 6. In addition, the last two columns show the bias of the MaxEnt and the p(i,k) approaches.

| Maps | References | RMSE p^[α](i,j) with p̂(j,k) | RMSE p^[α](i,j) with p(j,k) | RMSE p̂(i,j) (n = 100) | RMSE p̂(i,k) (n = 800) | Bias p^[α](i,j) with p̂(j,k) | Bias p̂(i,k) |
|---|---|---|---|---|---|---|---|
| Const H | Cor Const H | 1.82 | 0.74 | 2.58 | 3.81 | 0.34 | 3.08 |
| Const H | Uni 90% | 5.71 | 2.04 | 2.7 | 9.05 | −1.69 | −8.92 |
| Const H | Uni 95% | 3.25 | 1.87 | 2.73 | 4.66 | −1.46 | −4.29 |
| Const H | Uni 98% | 1.79 | 1.68 | 2.68 | 2.15 | −1.25 | −0.46 |
| Const H | Prop 90% | 5.11 | 2.16 | 2.69 | 9.31 | −1.75 | −8.54 |
| Const H | Prop 95% | 2.96 | 1.65 | 2.71 | 4.51 | −1.12 | −3.92 |
| Const H | Prop 98% | 1.73 | 1.57 | 2.63 | 2.15 | −0.95 | −2.17 |
| Const H | Field | 1.52 | 1.46 | 2.63 | 2.83 | −0.42 | −3.79 |
| Const L | Cor Const L | 2.96 | 1.78 | 4 | 10.11 | 1.52 | 10.62 |
| Const L | Uni 90% | 3.93 | 1.99 | 4.01 | 7.72 | −0.74 | −6.51 |
| Const L | Uni 95% | 2.63 | 1.91 | 3.9 | 4.19 | −0.85 | −4.13 |
| Const L | Uni 98% | 1.86 | 1.8 | 4.03 | 2.21 | −0.79 | −1.13 |
| Const L | Prop 90% | 3.33 | 1.81 | 4.06 | 7.9 | −0.6 | −9.63 |
| Const L | Prop 95% | 2.37 | 1.86 | 4.01 | 4.12 | −0.48 | −7.63 |
| Const L | Prop 98% | 1.73 | 1.72 | 4.02 | 2.15 | −0.54 | −3.88 |
| Const L | Field | 1.8 | 1.77 | 3.95 | 2.74 | −0.32 | −1.76 |
| Rand H | Cor Rand H | 1.83 | 0.79 | 2.78 | 4.05 | 0.43 | 3.84 |
| Rand H | Uni 90% | 5.65 | 1.92 | 2.69 | 9.08 | −1.52 | −8.79 |
| Rand H | Uni 95% | 3.28 | 1.71 | 2.78 | 4.72 | −1.26 | −3.54 |
| Rand H | Uni 98% | 1.8 | 1.61 | 2.71 | 2.19 | −1.07 | −2.41 |
| Rand H | Prop 90% | 5.09 | 1.86 | 2.63 | 9.17 | −1.41 | −9.91 |
| Rand H | Prop 95% | 3.05 | 1.7 | 2.8 | 4.6 | −0.97 | −4.29 |
| Rand H | Prop 98% | 1.77 | 1.56 | 2.64 | 2.08 | −0.81 | −2.04 |
| Rand H | Field | 1.54 | 1.4 | 2.68 | 2.75 | −0.4 | −2.66 |
| Rand L | Cor Rand L | 2.8 | 1.65 | 3.91 | 9.58 | 1.39 | 11.05 |
| Rand L | Uni 90% | 3.85 | 1.89 | 3.87 | 7.85 | −0.53 | −6.2 |
| Rand L | Uni 95% | 2.61 | 1.91 | 3.92 | 4.21 | −0.74 | −3.07 |
| Rand L | Uni 98% | 1.73 | 1.71 | 3.69 | 2.13 | −0.68 | −1.82 |
| Rand L | Prop 90% | 3.18 | 1.7 | 3.94 | 7.95 | −0.33 | −10.45 |
| Rand L | Prop 95% | 2.23 | 1.73 | 3.82 | 4.07 | −0.32 | −4.95 |
| Rand L | Prop 98% | 1.75 | 1.75 | 4.01 | 2.13 | −0.5 | −3.07 |
| Rand L | Field | 1.77 | 1.73 | 3.86 | 2.67 | −0.24 | −3.57 |
| Obs H | Cor Obs H | 1.85 | 0.69 | 2.49 | 3.44 | 0.21 | 3.69 |
| Obs H | Uni 90% | 6.29 | 3.31 | 2.35 | 9.23 | −3.1 | −10.18 |
| Obs H | Uni 95% | 3.77 | 2.81 | 2.54 | 4.75 | −2.58 | −3.93 |
| Obs H | Uni 98% | 2.08 | 2.18 | 2.46 | 2.16 | −1.89 | −2.68 |
| Obs H | Prop 90% | 5.61 | 3 | 2.43 | 9.29 | −2.75 | −11.18 |
| Obs H | Prop 95% | 3.57 | 2.38 | 2.41 | 4.61 | −2.05 | −3.68 |
| Obs H | Prop 98% | 2.05 | 1.86 | 2.56 | 2.09 | −1.47 | −1.81 |
| Obs H | Field | 1.68 | 1.52 | 2.51 | 2.62 | −0.37 | −4.56 |
| Obs L | Cor Obs L | 2.66 | 1.42 | 3.67 | 8.54 | 1.13 | 6.42 |
| Obs L | Uni 90% | 5.23 | 2.79 | 3.78 | 8.16 | −2.39 | −5.95 |
| Obs L | Uni 95% | 3.32 | 2.46 | 3.74 | 4.31 | −1.96 | −4.33 |
| Obs L | Uni 98% | 2.15 | 2.08 | 3.65 | 2.14 | −1.51 | −2.33 |
| Obs L | Prop 90% | 4.68 | 2.55 | 3.76 | 8.02 | −2 | −7.33 |
| Obs L | Prop 95% | 3.03 | 2.13 | 3.69 | 4.09 | −1.51 | −1.2 |
| Obs L | Prop 98% | 2 | 1.89 | 3.78 | 2.06 | −1.15 | −0.58 |
| Obs L | Field | 2.02 | 1.89 | 3.87 | 2.28 | −0.55 | −2.33 |
Table 4. Example of confusion matrix for a 1.5 pixel shift along the vertical Y-axis.

| Original \ Shifted | Crop | BroadL | NeedleL | Herbac | Shrub | Artif | Bare | Water | Tot |
|---|---|---|---|---|---|---|---|---|---|
| Crop | 27.01 | 0.13 | 0.03 | 0.79 | 0.01 | 0.12 | 0.00 | 0.01 | |
| BroadL | 0.12 | 17.68 | 0.37 | 1.02 | 0.67 | 0.21 | 0.00 | 0.01 | |
| NeedleL | 0.02 | 0.39 | 8.21 | 0.47 | 0.34 | 0.08 | 0.00 | 0.00 | |
| Herbac | 0.81 | 0.95 | 0.48 | 26.94 | 0.63 | 0.91 | 0.02 | 0.01 | |
| Shrub | 0.01 | 0.70 | 0.34 | 0.59 | 3.26 | 0.05 | 0.00 | 0.00 | |
| Artif | 0.12 | 0.22 | 0.09 | 0.90 | 0.04 | 4.58 | 0.00 | 0.01 | |
| Bare | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 | 0.13 | 0.00 | |
| Water | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.50 | |
| Accuracy | 0.96 | 0.88 | 0.86 | 0.88 | 0.66 | 0.77 | 0.87 | 0.95 | 88.30 |
Table 5. Example of confusion matrix for a 1.5 pixel shift along the diagonal of the X,Y axes.

| Original \ Shifted | Crop | BroadL | NeedleL | Herbac | Shrub | Artif | Bare | Water | Tot |
|---|---|---|---|---|---|---|---|---|---|
| Crop | 26.82 | 0.16 | 0.04 | 0.92 | 0.02 | 0.14 | 0.00 | 0.01 | |
| BroadL | 0.15 | 17.17 | 0.55 | 1.15 | 0.77 | 0.26 | 0.00 | 0.01 | |
| NeedleL | 0.03 | 0.57 | 7.92 | 0.52 | 0.38 | 0.10 | 0.00 | 0.00 | |
| Herbac | 0.92 | 1.13 | 0.53 | 26.44 | 0.65 | 1.05 | 0.02 | 0.01 | |
| Shrub | 0.02 | 0.79 | 0.38 | 0.63 | 3.09 | 0.05 | 0.00 | 0.00 | |
| Artif | 0.15 | 0.25 | 0.10 | 1.05 | 0.05 | 4.35 | 0.00 | 0.01 | |
| Bare | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 | 0.13 | 0.00 | |
| Water | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.49 | |
| Accuracy | 0.95 | 0.86 | 0.83 | 0.86 | 0.62 | 0.73 | 0.85 | 0.93 | 86.41 |
Table 6. Total weighted confusion matrix for a maximum shift of 1.5 pixels.

| Original \ Shifted | Crop | BroadL | NeedleL | Herbac | Shrub | Artif | Bare | Water | Tot |
|---|---|---|---|---|---|---|---|---|---|
| Crop | 27.03 | 0.13 | 0.03 | 0.76 | 0.01 | 0.11 | 0 | 0.01 | |
| BroadL | 0.12 | 17.73 | 0.4 | 0.98 | 0.62 | 0.2 | 0 | 0.01 | |
| NeedleL | 0.02 | 0.43 | 8.23 | 0.44 | 0.31 | 0.08 | 0 | 0 | |
| Herbac | 0.78 | 0.9 | 0.45 | 27.13 | 0.58 | 0.88 | 0.01 | 0.01 | |
| Shrub | 0.01 | 0.66 | 0.32 | 0.54 | 3.38 | 0.05 | 0 | 0 | |
| Artif | 0.12 | 0.21 | 0.08 | 0.86 | 0.04 | 4.63 | 0 | 0.01 | |
| Bare | 0 | 0 | 0 | 0.01 | 0 | 0 | 0.13 | 0 | |
| Water | 0.01 | 0.01 | 0 | 0.01 | 0 | 0.01 | 0 | 0.5 | |
| Accuracy | 0.96 | 0.88 | 0.86 | 0.88 | 0.68 | 0.78 | 0.87 | 0.94 | 88.76 |
| geo. errors | 0.04 | 0.12 | 0.14 | 0.12 | 0.32 | 0.22 | 0.13 | 0.06 | 11.24 |
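To make the construction of Tables 4–6 concrete, the sketch below (our own simplified illustration, using a random toy map and wrap-around borders rather than the authors' land cover data) computes the area-weighted confusion matrix between a map and a sub-pixel-shifted copy of itself, using the neighbour overlap weights illustrated in Figure 3:

```python
import numpy as np

def axis_weights(d):
    """Split a shift of d pixels along one axis into the two integer offsets
    it straddles and the fraction of a pixel's width assigned to each."""
    n = int(np.floor(d))
    f = d - n
    return {n: 1.0 - f, n + 1: f} if f > 0 else {n: 1.0}

def shift_confusion(labels, dx, dy, n_classes):
    """Area-weighted confusion matrix between a label map and a copy of
    itself shifted by (dx, dy) pixels (wrap-around borders, for simplicity)."""
    cm = np.zeros((n_classes, n_classes))
    for ox, wx in axis_weights(dx).items():
        for oy, wy in axis_weights(dy).items():
            shifted = np.roll(labels, shift=(oy, ox), axis=(0, 1))
            # accumulate the overlap weight for each (original, shifted) pair
            np.add.at(cm, (labels.ravel(), shifted.ravel()), wx * wy)
    return 100.0 * cm / cm.sum()   # express entries as area percentages

# e.g., a 1.5 pixel shift along the Y-axis, as in Table 4, on a toy map
rng = np.random.default_rng(1)
toy_map = rng.integers(0, 3, size=(60, 60))
print(shift_confusion(toy_map, dx=0.0, dy=1.5, n_classes=3).round(2))
```

Table 6 then corresponds to averaging such matrices over the assumed uniform distribution of shifts up to the maximum displacement.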
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Back to TopTop