Explainable Machine Learning

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 26396

Special Issue Editors


Prof. Dr. Jochen Garcke
Guest Editor
1. Institut für Numerische Simulation, Endenicher Allee 19b, 53115 Bonn, Germany
2. Fraunhofer Center for Machine Learning and Fraunhofer SCAI, Schloss Birlinghoven, 53757 Sankt Augustin, Germany
Interests: machine learning; numerical simulation; reinforcement learning; uncertainty quantification; data-driven science and engineering; simulation data analysis

Prof. Dr. Ribana Roscher
Guest Editor
Institute of Geodesy and Geoinformation, Nussallee 15, 53115 Bonn, Germany
Interests: remote sensing; image analysis; machine learning; pattern recognition; plant phenotyping

Special Issue Information

Dear colleagues,

Machine learning methods are now widely used in commercial applications and in many scientific areas. There is an increasing demand to understand how a specific model operates and the underlying reasons for the decisions it produces. In the natural sciences, where ML is increasingly employed to optimize and produce scientific outcomes, explainability can be seen as a prerequisite for ensuring the scientific value of those outcomes. In societal contexts, the reasons for a decision often matter. Typical examples are (semi-)automatic loan applications, hiring decisions, or risk assessments for insurance applicants. Here, one wants to understand, also for regulatory reasons and for fair decision making, why a model gives a certain prediction and how this relates to the individual under consideration. For engineering applications, where ML models are deployed for decision support and automation in potentially changing environments, the assumption is that explainable ML approaches make robustness and reliability easier to achieve.

While machine learning is employed in numerous projects and publications today, the vast majority of this work is not concerned with interpretability or explainability. This Special Issue presents new approaches for explainable ML. Contributions that emphasize applications of explainable ML are particularly welcome.

Prof. Dr. Jochen Garcke
Prof. Dr. Ribana Roscher
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • transparency
  • interpretability
  • explainability
  • scientific consistency
  • uncertainty quantification
  • data-driven science
  • data-driven engineering

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Editorial


2 pages, 504 KiB  
Editorial
Explainable Machine Learning
by Jochen Garcke and Ribana Roscher
Mach. Learn. Knowl. Extr. 2023, 5(1), 169-170; https://doi.org/10.3390/make5010010 - 17 Jan 2023
Cited by 4 | Viewed by 2341
Abstract
Machine learning methods are widely used in commercial applications and in many scientific areas [...] Full article
(This article belongs to the Special Issue Explainable Machine Learning)

Research


22 pages, 4534 KiB  
Article
Explainable Machine Learning Reveals Capabilities, Redundancy, and Limitations of a Geospatial Air Quality Benchmark Dataset
by Scarlet Stadtler, Clara Betancourt and Ribana Roscher
Mach. Learn. Knowl. Extr. 2022, 4(1), 150-171; https://doi.org/10.3390/make4010008 - 11 Feb 2022
Cited by 15 | Viewed by 5264
Abstract
Air quality is relevant to society because it poses environmental risks to humans and nature. We use explainable machine learning in air quality research by analyzing model predictions in relation to the underlying training data. The data originate from worldwide ozone observations, paired with geospatial data. We use two different architectures: a neural network and a random forest trained on various geospatial data to predict multi-year averages of the air pollutant ozone. To understand how both models function, we explain how they represent the training data and derive their predictions. By focusing on inaccurate predictions and explaining why these predictions fail, we can (i) identify underrepresented samples, (ii) flag unexpected inaccurate predictions, and (iii) point to training samples irrelevant for predictions on the test set. Based on the underrepresented samples, we suggest where to build new measurement stations. We also show which training samples do not substantially contribute to the model performance. This study demonstrates the application of explainable machine learning beyond simply explaining the trained model. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
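
The abstract above describes using explanations to relate inaccurate predictions back to the training data. As a hedged illustration of that general idea (not the authors' actual pipeline), the Python sketch below flags test samples that lie far from the training data in feature space and checks whether they coincide with large prediction errors; the data, model, and thresholds are synthetic placeholders.

```python
# Hypothetical sketch: flag test samples that are poorly covered by the training
# data and check whether large errors coincide with them. The features, targets,
# and thresholds are synthetic stand-ins, not the paper's ozone dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                # stand-in for geospatial features
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)  # stand-in for ozone averages

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
errors = np.abs(model.predict(X_test) - y_test)

# Mean distance of each test sample to its nearest training samples:
# large distances suggest the sample is underrepresented in the training set.
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
distances, _ = nn.kneighbors(X_test)
coverage = distances.mean(axis=1)

# Samples that are both far from the training data and badly predicted are
# candidates for closer inspection (or, in the paper's setting, new stations).
suspect = (coverage > np.quantile(coverage, 0.9)) & (errors > np.quantile(errors, 0.9))
print(f"{suspect.sum()} test samples are underrepresented and poorly predicted")
```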

10 pages, 2244 KiB  
Article
Surrogate Object Detection Explainer (SODEx) with YOLOv4 and LIME
by Jonas Herskind Sejr, Peter Schneider-Kamp and Naeem Ayoub
Mach. Learn. Knowl. Extr. 2021, 3(3), 662-671; https://doi.org/10.3390/make3030033 - 6 Aug 2021
Cited by 10 | Viewed by 6269
Abstract
Due to their impressive performance, deep neural networks for object detection in images have become a prevalent choice. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation not only demonstrates the value of explainable object detection but also provides valuable insights into how YOLOv4 detects objects. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
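
SODEx wraps an object detector so that any classification explainer can be applied to a single detection. The sketch below illustrates that surrogate-classifier idea with the LIME image explainer: each perturbed image is scored as a binary "is the target object still found?" signal. The detector call, the target box, and the input image are placeholders and do not reflect the YOLOv4 interface used in the paper.

```python
# Hedged sketch of the surrogate-classifier idea behind SODEx: a detection is
# reduced to a binary score so that a classification explainer (LIME) applies.
# `run_detector` is a placeholder, not YOLOv4.
import numpy as np
from lime import lime_image

def run_detector(image):
    """Placeholder detector: return a list of (box, confidence) tuples.
    In the paper's setting this would be a YOLOv4 forward pass."""
    h, w = image.shape[:2]
    return [((w // 4, h // 4, 3 * w // 4, 3 * h // 4), 0.9)]

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

target_box = (64, 64, 192, 192)  # the detection we want to explain (illustrative)

def surrogate_classifier(images):
    """For each perturbed image, score how strongly the target object is still detected."""
    scores = []
    for img in images:
        dets = run_detector(img)
        p = max((conf for box, conf in dets if iou(box, target_box) > 0.5), default=0.0)
        scores.append([1.0 - p, p])
    return np.array(scores)

image = np.random.rand(256, 256, 3)  # stand-in for a COCO image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, surrogate_classifier,
                                         top_labels=1, hide_color=0, num_samples=200)
_, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                         positive_only=True, num_features=5)
```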

22 pages, 2506 KiB  
Article
Robust Learning with Implicit Residual Networks
by Viktor Reshniak and Clayton G. Webster
Mach. Learn. Knowl. Extr. 2021, 3(1), 34-55; https://doi.org/10.3390/make3010003 - 31 Dec 2020
Cited by 5 | Viewed by 3444
Abstract
In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes. As opposed to standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice leads to improved stability of both forward and backward propagation, has a favorable impact on generalization, and allows for controlling the robustness of the network with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially lead to a reduction in the number of required layers due to improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
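
To make the notion of an implicit residual block concrete, here is a minimal PyTorch sketch in which the block output is defined as a fixed point of y = x + f(y) and computed by plain fixed-point (Picard) iteration. This illustrates only the general idea; the paper's specific discretization scheme, stability analysis, and memory-efficient training algorithm are not reproduced here.

```python
# Minimal sketch of an implicit residual block: the output y satisfies y = x + f(y)
# and is found by fixed-point iteration. Convergence assumes f is a contraction
# (small weights at initialization); backprop here simply unrolls the loop.
import torch
import torch.nn as nn

class ImplicitResidualBlock(nn.Module):
    def __init__(self, dim, hidden=64, n_iter=20, tol=1e-4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.n_iter, self.tol = n_iter, tol

    def forward(self, x):
        y = x
        for _ in range(self.n_iter):          # Picard iteration for y = x + f(y)
            y_next = x + self.f(y)
            if (y_next - y).norm() < self.tol * y.norm().clamp_min(1e-8):
                y = y_next
                break
            y = y_next
        return y

block = ImplicitResidualBlock(dim=8)
out = block(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 8])
```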

17 pages, 10927 KiB  
Article
Concept Discovery for the Interpretation of Landscape Scenicness
by Pim Arendsen, Diego Marcos and Devis Tuia
Mach. Learn. Knowl. Extr. 2020, 2(4), 397-413; https://doi.org/10.3390/make2040022 - 3 Oct 2020
Cited by 7 | Viewed by 3270
Abstract
In this paper, we study how to extract visual concepts to understand landscape scenicness. Using visual feature representations from a Convolutional Neural Network (CNN), we learn a number of Concept Activation Vectors (CAV) aligned with semantic concepts from ancillary datasets. These concepts represent objects, attributes, or scene categories that describe outdoor images. We then use these CAVs to study their impact on the (crowdsourced) perception of beauty of landscapes in the United Kingdom. Finally, we deploy a technique to explore new concepts beyond those initially available in the ancillary dataset: using a semi-supervised manifold alignment technique, we align the CNN image representation to a large set of word embeddings, thereby giving access to entire dictionaries of concepts. This allows us to obtain a list of new concept candidates to improve our understanding of the elements that contribute the most to the perception of scenicness. We do this without the need for any additional data by leveraging the commonalities in the visual and word vector spaces. Our results suggest that new and potentially useful concepts can be discovered by leveraging neighbourhood structures in the word vector spaces. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
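
A Concept Activation Vector is obtained by fitting a linear classifier that separates CNN activations of concept examples from counterexamples; the classifier's weight vector defines the concept direction. The sketch below shows this step on synthetic activations; in practice the activations would come from a chosen CNN layer, and the gradient used for the sensitivity score would come from backpropagation rather than the random stand-in used here.

```python
# Illustrative sketch of learning a Concept Activation Vector (CAV).
# Activations and gradients are synthetic stand-ins for real CNN features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts_concept = rng.normal(loc=0.5, size=(200, 512))   # activations of images showing the concept
acts_random = rng.normal(loc=0.0, size=(200, 512))    # activations of random counterexamples

X = np.vstack([acts_concept, acts_random])
y = np.concatenate([np.ones(200), np.zeros(200)])

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])      # unit-norm concept direction

# Sensitivity of a prediction to the concept: directional derivative of the model
# output along the CAV, approximated here with a random gradient placeholder.
grad_of_output_wrt_activation = rng.normal(size=512)   # would come from backprop in practice
sensitivity = float(grad_of_output_wrt_activation @ cav)
print(f"concept sensitivity: {sensitivity:.3f}")
```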

24 pages, 52749 KiB  
Article
A Hybrid Artificial Neural Network to Estimate Soil Moisture Using SWAT+ and SMAP Data
by Katherine H. Breen, Scott C. James, Joseph D. White, Peter M. Allen and Jeffery G. Arnold
Mach. Learn. Knowl. Extr. 2020, 2(3), 283-306; https://doi.org/10.3390/make2030016 - 21 Aug 2020
Cited by 10 | Viewed by 4027
Abstract
In this work, we developed a data-driven framework to predict near-surface (0–5 cm) soil moisture (SM) by mapping inputs from the Soil & Water Assessment Tool to SM time series from NASA’s Soil Moisture Active Passive (SMAP) satellite for the period 1 January 2016–31 December 2018. We developed a hybrid artificial neural network (ANN) combining long short-term memory and multilayer perceptron networks that were used to simultaneously incorporate dynamic weather and static spatial data into the training algorithm, respectively. We evaluated the generalizability of the hybrid ANN using training datasets comprising several watersheds with different environmental conditions, examined the effects of standard and physics-guided loss functions, and experimented with feature augmentation. Our model could estimate SM on par with the accuracy of SMAP. We demonstrated that the most critical learning of the physical processes governing SM variability was learned from meteorological time series, and that additional physical context supported model performance when test data were not fully encapsulated by the variability of the training data. Additionally, we found that when forecasting SM based on trends learned during the earlier training period, the models appreciated seasonal trends. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
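
The hybrid architecture described above combines a recurrent encoder for dynamic weather inputs with a feed-forward encoder for static spatial attributes. The following PyTorch sketch shows one plausible arrangement of that combination; the layer sizes, input dimensions, and sequence length are illustrative assumptions, not the configuration reported in the paper.

```python
# Hedged sketch of a hybrid LSTM + MLP network: the LSTM summarizes the weather
# time series, the MLP encodes static spatial attributes, and both embeddings are
# concatenated to predict near-surface soil moisture. All sizes are illustrative.
import torch
import torch.nn as nn

class HybridSoilMoistureNet(nn.Module):
    def __init__(self, n_dyn=5, n_static=10, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_dyn, hidden_size=hidden, batch_first=True)
        self.static_mlp = nn.Sequential(nn.Linear(n_static, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, weather_seq, static_feats):
        _, (h_n, _) = self.lstm(weather_seq)    # summary of the weather history
        dyn = h_n[-1]                            # (batch, hidden)
        stat = self.static_mlp(static_feats)     # (batch, hidden)
        return self.head(torch.cat([dyn, stat], dim=1)).squeeze(-1)

model = HybridSoilMoistureNet()
sm = model(torch.randn(8, 30, 5), torch.randn(8, 10))  # 8 samples, 30 time steps, 5 weather variables
print(sm.shape)  # torch.Size([8])
```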
