
Introduction to the Special Issue “Artificial Intelligence Knowledge Representation”

Center for Technology Ethics, Institute for Socio Technical Complex Systems, Edinburgh EH1, UK
T-AI Tech, Tainan 70801, Taiwan
Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, 28660 Madrid, Spain
Ontology Engineering Group, 28660 Madrid, Spain
Author to whom correspondence should be addressed.
Systems 2019, 7(3), 35;
Received: 11 July 2019 / Accepted: 15 July 2019 / Published: 22 July 2019
(This article belongs to the Special Issue Artificial Intelligence Knowledge Representation)
A guest editorial could have been written when this Special Issue was first announced, to stimulate submissions and guide prospective authors; writing it closer to the deadline (together with the deadline extension announcement) instead makes it possible to highlight recent developments that have happened since.
The rapid proliferation of Artificial Intelligence (AI) tools, platforms, and technologies from academia to industry and beyond, and the consequent media spin, have been worryingly accompanied by a lack of reference to, and poor application of, knowledge representation (KR), despite a wealth of available techniques and modelling options.
For those who worked in AI before the current hype cycle, such notable shortfalls may limit the credibility of contributions to AI developments, especially in view of evolutionary and increasingly autonomous software development techniques such as neural networks.
The inadequate application and teaching of KR in the large majority of contemporary AI programs, together with the limited ability of different classes of users to gain insight into the functions of AI-driven systems (without having to parse and debug convoluted, encrypted code to test system behaviour, for example), are important concerns, especially now that AI chains drive and underpin system logic at a global level, from banking ATMs to personal identity, online accounts, and workflows of all kinds.
KR can provide mechanisms and tools for making system logic transparent and accountable, qualities necessary for auditability, reliability, and explainability. Yet the logic of AI systems—its use, application, and visibility—can be cumbersome, require specialised knowledge, and take considerable time to analyse.
KR can also support a shared and explicit understanding of the socio-technical context in which AI systems are deployed, and can help capture and analyse the risks and responsibilities associated with the autonomous functions of distributed intelligent systems.
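As a toy illustration of the point above, the sketch below shows how declaring system logic as explicit facts and rules—rather than burying it in opaque code—lets every conclusion be traced back to the statements that produced it. All names here (the loan-application facts and rules) are hypothetical, invented purely for illustration, not drawn from any real system.

```python
# Minimal forward-chaining sketch: facts and rules are explicit data,
# so each derived conclusion carries a trace of its premises.

facts = {("loan_application", "amount_over_limit"),
         ("loan_application", "missing_income_proof")}

# Each rule: (conclusion, set of premises that must all hold).
rules = [
    (("loan_application", "requires_review"),
     {("loan_application", "amount_over_limit")}),
    (("loan_application", "rejected"),
     {("loan_application", "requires_review"),
      ("loan_application", "missing_income_proof")}),
]

def infer_with_trace(facts, rules):
    """Forward-chain over the rules, recording why each fact holds."""
    derived = {f: ["asserted"] for f in facts}
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if conclusion not in derived and premises <= set(derived):
                derived[conclusion] = sorted(str(p) for p in premises)
                changed = True
    return derived

trace = infer_with_trace(facts, rules)
for fact, reasons in trace.items():
    print(fact, "<-", reasons)
```

Because the rule base is data, an auditor can inspect why "rejected" was derived without parsing or debugging procedural code—precisely the kind of transparency explicit KR is meant to provide.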
Most of AI, like many scientific and technological topics, can be only superficially understood without in-depth knowledge of the programming languages and system architectures involved, and can easily drift into spin and misinformation. In some cases, it is becoming difficult to distinguish fact from fiction, as in the Deepfake examples [1].
Policy makers and legislators cannot even begin to evaluate the reality of AI challenges at regulatory level without explicit KR of the intended functions and methods and underlying quality and integrity assurance. Business processes, understanding, explainability, decision making, usability and reliability of intelligent systems all depend on KR.
Technically, KR was devised some fifty years ago to support computational models.
It has since become a vast, deconstructed domain that is increasingly applicable, relevant, and necessary to the development and maintenance of intelligent systems. Yet it is neither integrated into nor applied in AI educational programmes, apart from some research outputs mostly confined to niche communities of scholars, leaving dangerous pragmatic gaps in education and industry. This Special Issue was initially conceived to address these gaps.
Finally, the greater risk of systemic deviation [2] is looming: highly distributed systems being developed and deployed in such a way that they can easily be manipulated and distorted away from their intended functions and goals. Distributed and modular architectures, à la blockchain, without adequate KR to guide the process lifecycle, can become double-edged swords: nobody is responsible for system behaviour, or, even worse, someone is made to look as if they are responsible for system behaviour when they are not.
Since our Special Issue was launched, alongside a community group at W3C [3], knowledge representation has made a definite comeback in AI circles and is re-entering the discourse in relation to increasingly big questions. Colleagues have started adding knowledge representation as a topic of interest in workshop agendas and calls for papers of AI-related events, slides addressing KR are being added to AI lecture notes, and one or two new international workshops referencing KR have been announced.
At a minimum, our Special Issue has so far contributed to bringing KR back to the table.
Future developments in AI KR may extend to integrating human-machine cognition, whole-systems approaches, and socio-technical systems dimensions. Advances in AI KR should further emphasise and contrast natural language KR with KR techniques for machine learning.
If a new age for AI is underway, then it is the task of future generations to further intelligent technology responsibly, and it may fall to those who have seen it rise and fall before to ensure that adequately robust KR understanding and methods continue to underpin it.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Deepfakes. Available online: (accessed on 29 June 2019).
  2. Di Maio, P. Systemic Deviation. In Proceedings of the 60th Annual Meeting of the ISSS, Boulder, CO, USA, 23–30 July 2016. [Google Scholar]
  3. AI KR W3C CG. Available online: (accessed on 29 June 2019).