Article

Deep Automation Bias: How to Tackle a Wicked Problem of AI?

Institute of Technology Assessment (ITA), Austrian Academy of Sciences, 1030 Vienna, Austria
Academic Editors: Nigel Houlden and Vic Grout
Big Data Cogn. Comput. 2021, 5(2), 18; https://doi.org/10.3390/bdcc5020018
Received: 12 March 2021 / Revised: 13 April 2021 / Accepted: 15 April 2021 / Published: 20 April 2021
The increasing use of AI in different societal contexts has intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Regardless of the specific form of bias, risks ultimately result from rule conflicts between AI system behavior, shaped by feature complexity, and user practices that offer limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Building on previous work, it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims at raising problem awareness and training on the sociotechnical risks resulting from AI-based automation, and contributes to improving the general explicability of AI systems beyond technical issues.
Keywords: artificial intelligence; machine learning; automation bias; fairness; transparency; accountability; explicability; uncertainty; human-in-the-loop; awareness raising
MDPI and ACS Style

Strauß, S. Deep Automation Bias: How to Tackle a Wicked Problem of AI? Big Data Cogn. Comput. 2021, 5, 18. https://doi.org/10.3390/bdcc5020018

