This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Assessing the Influence of Feedback Strategies on Errors in Crowdsourced Annotation of Tumor Images

1 User-Centric Analysis of Multimedia Data Research Group, Faculty of Electrical Engineering and Information Technology, Technische Universität Ilmenau, Gustav-Kirchhoff-Straße 1, 98693 Ilmenau, Germany
2 ScaleHub GmbH, Heidbergstraße 100, 22846 Norderstedt, Germany
3 Institute of Anatomy and Cell Biology, Faculty of Medicine, Julius-Maximilians-Universität Würzburg, Koellikerstraße 6, 97070 Würzburg, Germany
* Authors to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(9), 220; https://doi.org/10.3390/bdcc9090220
Submission received: 30 June 2025 / Revised: 6 August 2025 / Accepted: 18 August 2025 / Published: 26 August 2025

Abstract

Crowdsourcing enables scalable access to distributed human intelligence for tasks that require human judgment, with use cases across many application areas. However, given the plethora of available tasks, crowdworkers may have limited or no background knowledge about the tasks they complete. Therefore, tasks, even at the micro scale, must include appropriate training that enables crowdworkers to complete them successfully. Training crowdworkers efficiently and in a short time for complex tasks remains an unresolved challenge. This paper addresses this challenge by empirically comparing different training strategies for crowdworkers and evaluating their impact on task results. We compare a basic training strategy, a strategy based on errors previously made by other crowdworkers, and the addition of instant feedback during both training and task completion. Our results show that adding instant feedback during both the training phase and the task itself increases workers' attention in difficult tasks, and hence reduces errors and improves results. We conclude that attention is better retained when the instant feedback includes information about mistakes previously made by other crowdworkers.
Keywords: crowdsourcing; optimized training; instant feedback; medical image annotation
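
The abstract does not specify how the instant feedback was implemented. The following is a minimal sketch of one plausible mechanism, assuming a gold-standard reference mask is available for each image and that annotation quality is checked with an intersection-over-union (IoU) score. All identifiers (iou, instant_feedback, FEEDBACK_IOU_THRESHOLD) and the 0.5 threshold are illustrative assumptions, not taken from the paper.

# Illustrative sketch (not the authors' implementation): triggering instant
# feedback during a segmentation task by comparing a worker's mask against a
# hypothetical gold-standard mask. The threshold value is an assumption.

import numpy as np

FEEDBACK_IOU_THRESHOLD = 0.5  # hypothetical cutoff for flagging an annotation

def iou(worker_mask: np.ndarray, gold_mask: np.ndarray) -> float:
    """Intersection over union of two boolean annotation masks."""
    intersection = np.logical_and(worker_mask, gold_mask).sum()
    union = np.logical_or(worker_mask, gold_mask).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

def instant_feedback(worker_mask: np.ndarray, gold_mask: np.ndarray,
                     common_errors: list[str]) -> str | None:
    """Return a feedback message when the annotation deviates too far from
    the reference. The message lists mistakes previously made by other
    workers, since the study found such content helps retain attention."""
    if iou(worker_mask, gold_mask) >= FEEDBACK_IOU_THRESHOLD:
        return None  # annotation is acceptable; show no feedback
    hints = "; ".join(common_errors)
    return f"Please review your annotation. Frequent mistakes by other workers: {hints}"

For example, calling instant_feedback(mask, gold, ["tumor boundary drawn too tightly"]) would return a corrective message only when the worker's mask overlaps the reference by less than the threshold, which mirrors the paper's idea of feedback that is both immediate and informed by prior workers' errors.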

Share and Cite

MDPI and ACS Style

Libreros, J.A.; Gamboa, E.; Henke, E.; Hirth, M. Assessing the Influence of Feedback Strategies on Errors in Crowdsourced Annotation of Tumor Images. Big Data Cogn. Comput. 2025, 9, 220. https://doi.org/10.3390/bdcc9090220

AMA Style

Libreros JA, Gamboa E, Henke E, Hirth M. Assessing the Influence of Feedback Strategies on Errors in Crowdsourced Annotation of Tumor Images. Big Data and Cognitive Computing. 2025; 9(9):220. https://doi.org/10.3390/bdcc9090220

Chicago/Turabian Style

Libreros, Jose Alejandro, Edwin Gamboa, Erik Henke, and Matthias Hirth. 2025. "Assessing the Influence of Feedback Strategies on Errors in Crowdsourced Annotation of Tumor Images." Big Data and Cognitive Computing 9, no. 9: 220. https://doi.org/10.3390/bdcc9090220

APA Style

Libreros, J. A., Gamboa, E., Henke, E., & Hirth, M. (2025). Assessing the Influence of Feedback Strategies on Errors in Crowdsourced Annotation of Tumor Images. Big Data and Cognitive Computing, 9(9), 220. https://doi.org/10.3390/bdcc9090220
