Open Access Article
Remote Sens. 2016, 8(10), 859; doi:10.3390/rs8100859

The Tasks of the Crowd: A Typology of Tasks in Geographic Information Crowdsourcing and a Case Study in Humanitarian Mapping

1 Centre for Interdisciplinary Methodologies, University of Warwick, CV4 7AL Coventry, UK
2 GIScience Chair, Institute of Geography, Heidelberg University, 69120 Heidelberg, Germany
3 Department of Computer Systems, ICMC, University of São Paulo, 13566-590 São Carlos/SP, Brazil
* Author to whom correspondence should be addressed.
Academic Editors: Steffen Fritz, Cidália Costa Fonte, Clement Atzberger and Prasad S. Thenkabail
Received: 2 August 2016 / Revised: 8 September 2016 / Accepted: 11 October 2016 / Published: 18 October 2016
(This article belongs to the Special Issue Citizen Science and Earth Observation)

Abstract

In the past few years, volunteers have produced geographic information of different kinds, using a variety of different crowdsourcing platforms, within a broad range of contexts. However, there is still a lack of clarity about the specific types of tasks that volunteers can perform for deriving geographic information from remotely sensed imagery, and how the quality of the produced information can be assessed for particular task types. To fill this gap, we analyse the existing literature and propose a typology of tasks in geographic information crowdsourcing, which distinguishes between classification, digitisation and conflation tasks. We then present a case study related to the “Missing Maps” project aimed at crowdsourced classification to support humanitarian aid. We use our typology to distinguish between the different types of crowdsourced tasks in the project and choose classification tasks related to identifying roads and settlements for an evaluation of the crowdsourced classification. This evaluation shows that the volunteers achieved a satisfactory overall performance (accuracy: 89%; sensitivity: 73%; and precision: 89%). We also analyse different factors that could influence the performance, concluding that volunteers were more likely to incorrectly classify tasks containing small objects. Furthermore, agreement among volunteers was shown to be a very good predictor of the reliability of crowdsourced classification: tasks with the highest agreement level were 41 times more likely to be correctly classified by volunteers. The results thus show that the crowdsourced classification of remotely sensed imagery is able to generate geographic information about human settlements with a high level of quality. This study also makes clear the different levels of task sophistication that volunteers can perform and reveals some factors that may have an impact on their performance.
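The accuracy, sensitivity and precision figures quoted above are standard confusion-matrix metrics for binary classification. As a minimal illustration only (a Python sketch using hypothetical counts, not the study's validation data), they can be computed from agreement between volunteer classifications and a reference dataset as follows:

    # Minimal sketch: accuracy, sensitivity (recall) and precision
    # from a binary confusion matrix. Counts are hypothetical placeholders.
    def classification_metrics(tp, fp, tn, fn):
        accuracy = (tp + tn) / (tp + fp + tn + fn)   # share of all tasks classified correctly
        sensitivity = tp / (tp + fn)                 # share of true positives that were found
        precision = tp / (tp + fp)                   # share of positive calls that are correct
        return accuracy, sensitivity, precision

    # Example: volunteer answers compared against an expert reference
    acc, sens, prec = classification_metrics(tp=730, fp=90, tn=2500, fn=270)
    print(f"accuracy={acc:.2f}, sensitivity={sens:.2f}, precision={prec:.2f}")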
Keywords: crowdsourced geographic information; citizen science; crowdsourcing; disaster management; volunteered geographic information

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Albuquerque, J.P.; Herfort, B.; Eckle, M. The Tasks of the Crowd: A Typology of Tasks in Geographic Information Crowdsourcing and a Case Study in Humanitarian Mapping. Remote Sens. 2016, 8, 859.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
