Open Access | Feature Paper | Review

On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research

Nexa Center for Internet and Society, Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, 10129 Turin, Italy
* Author to whom correspondence should be addressed.
Information 2020, 11(2), 122; https://doi.org/10.3390/info11020122
Received: 24 December 2019 / Revised: 15 February 2020 / Accepted: 18 February 2020 / Published: 22 February 2020
(This article belongs to the Special Issue 10th Anniversary of Information—Emerging Research Challenges)
Deep learning models have contributed to unprecedented results in the prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights into how a specific result was achieved. In contexts where AI has a significant impact on human life (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property: it is, or in some cases soon will be, a legal requirement. Most of the available approaches to eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts who can manipulate the recursive mathematical functions of deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning models. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while less flexible and less robust to noise than deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches in the literature, highlighting their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing AI that is more comprehensible to non-experts. Within this general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations.
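To make the contrast concrete, here is a minimal, self-contained Python sketch (not taken from the paper; all entity and relation names are hypothetical) of why a knowledge graph is explainable by construction: it stores explicit (subject, predicate, object) triples, so every answer it returns can be traced back to the exact statements that support it, unlike the opaque score produced by a deep model.

```python
# Minimal sketch (not from the paper): a toy knowledge graph stored as
# explicit (subject, predicate, object) triples. All names are hypothetical.
KG = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "turin"),
    ("bob", "works_at", "acme"),
}

def query(subject, predicate):
    """Return matching objects together with the triples that support them.

    Each answer carries its own explanation: the explicit triple(s)
    it was derived from, which a non-expert can read and verify.
    """
    return [(o, (s, p, o))
            for (s, p, o) in KG
            if s == subject and p == predicate]

# The answer and its supporting evidence are returned together.
for obj, evidence in query("alice", "works_at"):
    print(f"answer: {obj}  (supported by triple {evidence})")
```

A neural-symbolic integration in the spirit the authors propose would align a deep model's learned vector representations with symbols such as these, so that traceable triples can serve as human-readable explanations of the model's predictions; establishing that alignment is, roughly, the knowledge-matching challenge the abstract names.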
Keywords: eXplainable artificial intelligence; deep learning; knowledge graphs

MDPI and ACS Style

Futia, G.; Vetrò, A. On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research. Information 2020, 11, 122.

