Open Access Article
Remote Sens. 2018, 10(3), 376; https://doi.org/10.3390/rs10030376

An Exploration of Some Pitfalls of Thematic Map Assessment Using the New Map Tools Resource

1 Swedish University of Agricultural Sciences, Southern Swedish Forest Research Centre, SE-23053 Alnarp, Sweden
2 International Institute for Applied Systems Analysis (IIASA), Center for Citizen Science and Earth Observation, Schlossplatz 1, 2361 Laxenburg, Austria
* Author to whom correspondence should be addressed.
Received: 23 January 2018 / Revised: 13 February 2018 / Accepted: 18 February 2018 / Published: 1 March 2018

Abstract

A variety of metrics are commonly employed by map producers and users to assess and compare thematic maps’ quality, but their use and interpretation are inconsistent. This problem is exacerbated by a shortage of tools to allow easy calculation and comparison of metrics from different maps or as a map’s legend is changed. In this paper, we introduce a new website and a collection of R functions to facilitate map assessment. We apply these tools to illustrate some pitfalls of error metrics and point out existing and newly developed solutions to them. Some of these problems have been previously noted, but all of them are under-appreciated and persist in published literature. We show that binary and categorical metrics, including information about true-negative classifications, are inflated for rare categories, and more robust alternatives should be chosen. Most metrics are useful to compare maps only if their legends are identical. We also demonstrate that combining land-cover classes has the often-neglected consequence of apparent improvement, particularly if the combined classes are easily confused (e.g., different forest types). However, we show that the average mutual information (AMI) of a map is relatively robust to combining classes, and reflects the information that is lost in this process; we also introduce a modified AMI metric that credits only correct classifications. Finally, we introduce a method of evaluating statistical differences in the information content of competing maps, and show that this method is an improvement over other methods in more common use. We end with a series of recommendations for the meaningful use of accuracy metrics by map users and producers.
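The metric behavior the abstract describes can be made concrete with a small worked example. The paper's own tools are R functions and a website; the Python sketch below, and the toy confusion matrices in it, are this editor's illustration under stated assumptions, not the authors' code. It shows overall accuracy inflating for a rare category (while Cohen's kappa does not), and AMI remaining nearly unchanged when two easily confused forest classes are merged, even as overall accuracy jumps to a perfect score.

```python
# Illustrative sketch (not the paper's R code): overall accuracy, Cohen's
# kappa, and average mutual information (AMI) from a confusion matrix.
# The confusion matrices below are invented for demonstration.
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace / total)."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for the chance agreement expected from marginals."""
    n = cm.sum()
    p_obs = np.trace(cm) / n
    p_exp = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def average_mutual_information(cm):
    """AMI in bits between map labels (rows) and reference labels (columns)."""
    p = cm / cm.sum()
    outer = p.sum(axis=1, keepdims=True) * p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / outer)
    return np.nansum(terms)  # zero cells (0 * log 0) contribute nothing

# Rare-category inflation: a map labelling everything "non-forest" in a
# region that is 95% non-forest still scores high overall accuracy.
rare = np.array([[95, 5],
                 [0, 0]])
print(overall_accuracy(rare))  # 0.95, despite missing all forest
print(cohens_kappa(rare))      # 0.0 -- kappa exposes the chance-level map

# Combining confused classes: two forest types that are often mixed up,
# plus water. Merging the forest classes makes the map look perfect by
# overall accuracy, while AMI barely moves -- it already reflected the
# forest-type information that the merge discards.
cm = np.array([[30, 20, 0],
               [20, 30, 0],
               [0, 0, 100]])
merged = np.array([[100, 0],
                   [0, 100]])
print(overall_accuracy(cm), overall_accuracy(merged))  # 0.8 vs 1.0
print(round(average_mutual_information(cm), 3),        # ~1.015 bits
      round(average_mutual_information(merged), 3))    # 1.0 bit
```

The comparison illustrates why the authors favor AMI for legend changes: merging classes cannot manufacture information, so an information-theoretic metric stays nearly flat where accuracy-style metrics appear to improve.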
Keywords: thematic maps; map accuracy; map comparison; overall accuracy; Cohen’s Kappa; producer's accuracy; user's accuracy; average mutual information
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Salk, C.; Fritz, S.; See, L.; Dresel, C.; McCallum, I. An Exploration of Some Pitfalls of Thematic Map Assessment Using the New Map Tools Resource. Remote Sens. 2018, 10, 376.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.