Narrative Review on Symbolic Approaches for Explainable Artificial Intelligence: Foundations, Challenges, and Perspectives †
Abstract
1. Introduction
- This review aims to:
  1. Provide a comprehensive overview of the theoretical foundations, current applications, and challenges of symbolic approaches in explainable AI.
  2. Analyze the limitations of traditional symbolic systems (e.g., scalability, rigidity) and explore emerging solutions, including neuro-symbolic AI [8].
  3. Discuss future directions in critical fields (e.g., healthcare, cybersecurity, autonomous vehicles) where explainability is a non-negotiable requirement.
  4. Foster interdisciplinary research by bridging technical advancements with ethical, legal, and societal considerations.
- This paper is structured as follows:
- Foundations of symbolic approaches: Logic, ontologies, and expert systems.
- Comparison with connectionist approaches: Advantages, limitations, and complementarities.
- Current applications: Use cases in healthcare, finance, industry, and law.
- Technical challenges: Scalability, integration with deep learning.
- Future directions: Hybrid AI, dynamic ontologies, regulatory frameworks.
2. Methods
2.1. Literature Search Strategy
2.1.1. Sources Used
- IEEE Xplore: for technical articles and conferences in computer science and engineering.
- PubMed: for medical applications of explainable AI [9].
- Google Scholar: for broad, interdisciplinary coverage.
- ACM Digital Library: for works in theoretical and applied computer science.
- SpringerLink: for articles and books on theoretical foundations and practical applications [11].
2.1.2. Keywords and Search Terms
2.2. Inclusion and Exclusion Criteria
2.2.1. Inclusion Criteria
- Recent articles: Works published between 2018 and 2023 were preferred to reflect the most recent advances [12].
- Literature reviews: Systematic reviews and meta-analyses providing an overview of the field were included [13].
- Applied studies: Works demonstrating practical applications of symbolic approaches in fields such as healthcare, finance, or robotics were selected [8].
- Thematic relevance: Only works directly related to AI explainability and symbolic approaches were included.
2.2.2. Exclusion Criteria
2.2.3. Selection Process
2.3. Analysis and Synthesis Approach
Operationalization of the Classification Process
- Logical approaches: Works primarily relying on formal logic, predicate logic, or probabilistic reasoning.
- Ontological approaches: Studies using ontologies or structured knowledge representation frameworks.
- Expert systems: Research focused on rule-based or knowledge-based expert systems, particularly modern implementations.
- Hybrid approaches: Works integrating symbolic methods (e.g., logic, ontologies) with deep learning or other neural techniques.
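To make the four categories above concrete, the sketch below shows how papers could be triaged by keyword matching against their abstracts. The keyword lists and the example abstracts are purely illustrative; they are not the actual coding scheme used in this review.

```python
# Illustrative four-way triage of papers into the review's categories.
# Keyword lists are hypothetical examples, not the authors' coding scheme.
CATEGORY_KEYWORDS = {
    "logical approaches":     ["predicate logic", "propositional", "probabilistic logic"],
    "ontological approaches": ["ontology", "knowledge graph", "snomed"],
    "expert systems":         ["expert system", "rule-based", "inference engine"],
    "hybrid approaches":      ["neuro-symbolic", "deep learning", "neural"],
}

def classify_abstract(text: str) -> str:
    """Assign a paper to the first category whose keywords appear in its abstract."""
    text = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "unclassified"

print(classify_abstract("A neuro-symbolic model combining rules and networks"))
```

In practice such automatic triage would only be a first pass; borderline papers (e.g., an ontology paper that also trains a neural model) would still need manual adjudication.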
3. Foundations of Symbolic Approaches in Explainable AI
3.1. Definition and Principles of Symbolic Approaches
3.2. Historical Development and Evolution of Symbolic Approaches
3.3. Key Concepts of Symbolic Approaches
3.3.1. Logical Foundations
- Predicate logic models complex relationships, while propositional logic handles simple statements.
- Fuzzy logic captures gradations; probabilistic logic integrates uncertainty.
- Enables transparent, rigorous reasoning [20].
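The contrast between these logics can be sketched on a toy diagnostic example. All truth values, degrees, and probabilities below are invented for illustration.

```python
# Minimal sketch contrasting the three logics listed above (invented values).

# Propositional logic: crisp true/false statements combined deductively.
fever = True
cough = True
flu_suspected = fever and cough          # fully traceable inference

# Fuzzy logic: degrees of truth in [0, 1], combined here with min (a common t-norm).
fever_degree = 0.8                        # "fairly high temperature"
cough_degree = 0.4                        # "mild cough"
flu_degree = min(fever_degree, cough_degree)

# Probabilistic logic: uncertainty as probabilities; two independent evidence
# sources combined with a noisy-OR.
p_flu_given_fever = 0.3
p_flu_given_cough = 0.2
p_flu = 1 - (1 - p_flu_given_fever) * (1 - p_flu_given_cough)

print(flu_suspected, flu_degree, round(p_flu, 2))
```

Each step above remains inspectable, which is precisely the transparency property symbolic approaches are valued for.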
3.3.2. Ontological Structures
- Formal knowledge organization (e.g., SNOMED CT in medicine).
- Ensure unambiguous interpretation and knowledge reuse [21].
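A minimal sketch of an ontological structure as subject–predicate–object triples illustrates both points. The concept names are loosely inspired by clinical vocabularies such as SNOMED CT but are not actual SNOMED CT codes.

```python
# Tiny ontology as subject-predicate-object triples (illustrative concepts).
triples = {
    ("Pneumonia", "is_a", "LungDisease"),
    ("LungDisease", "is_a", "Disease"),
    ("Pneumonia", "has_finding_site", "Lung"),
}

def ancestors(concept, triples):
    """Follow is_a links upward, giving each term one unambiguous place
    in the hierarchy and letting knowledge be reused across concepts."""
    found = set()
    frontier = {concept}
    while frontier:
        nxt = {o for (s, p, o) in triples if p == "is_a" and s in frontier}
        nxt -= found
        found |= nxt
        frontier = nxt
    return found

print(ancestors("Pneumonia", triples))
```

Anything asserted about `Disease` (e.g., "has a treatment") is then automatically applicable to `Pneumonia` via the `is_a` chain, which is the knowledge-reuse property cited above.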
3.3.3. Expert Systems
- Combine knowledge bases with inference engines [22].
- Provide traceable solutions in specialized domains.
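The two bullets above can be sketched in a few lines: a knowledge base of if-then rules plus a forward-chaining inference engine that records which rule produced each conclusion. The rules are invented for illustration, not drawn from a real medical system.

```python
# Sketch of an expert system: rule base + forward-chaining inference engine.
# Rules are illustrative, not clinically validated.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspected_pneumonia"),
]

def infer(initial_facts):
    """Apply rules until no new fact is derived, keeping an explanation trace."""
    facts = set(initial_facts)
    trace = []                       # which rule fired, and what it concluded
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((sorted(conditions), conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"fever", "cough", "chest_pain"})
print(facts)
print(trace)
```

The `trace` list is what makes the solution traceable: every conclusion can be justified by pointing at the exact rule and facts that produced it, which is how systems like MYCIN answered "why?" questions.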
3.3.4. Knowledge Representation
- Production rules, conceptual graphs, and frames [23].
- Translate human knowledge into computable, structured formats.
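Of the formalisms listed, frames are perhaps the easiest to sketch: a frame bundles an entity's slots and inherits default values from a parent frame. The frame contents below are illustrative.

```python
# Sketch of frame-based knowledge representation with slot inheritance.
# Frame contents are invented for illustration.
FRAMES = {
    "Disease":   {"parent": None,      "treatable": True},
    "Influenza": {"parent": "Disease", "transmission": "airborne"},
}

def get_slot(frame, slot):
    """Look up a slot, walking up the parent chain for inherited defaults."""
    while frame is not None:
        if slot in FRAMES[frame]:
            return FRAMES[frame][slot]
        frame = FRAMES[frame]["parent"]
    return None

print(get_slot("Influenza", "treatable"))      # inherited from Disease
print(get_slot("Influenza", "transmission"))   # local slot
```

Production rules and conceptual graphs serve the same goal by different means: all three translate human knowledge into a structured, computable format that an inference procedure can traverse.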
4. Current Applications of Symbolic Approaches
5. Towards Hybrid Explainable AI: Trends and Future Directions
5.1. Hybridization of Symbolic and Connectionist Approaches
- Robotics (symbolic planning with adaptive learning)
- Healthcare (combining neural image analysis with diagnostic rules)
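The healthcare pattern above can be sketched as follows: a neural image model (mocked here) outputs class scores, and a symbolic diagnostic rule either validates the top prediction or vetoes it with a human-readable reason. The scores and the rule are invented for illustration and do not reflect any deployed system.

```python
# Hypothetical neuro-symbolic pipeline: neural scoring + symbolic rule check.

def mock_neural_scores(image):
    """Stand-in for a trained image classifier (hard-coded for illustration)."""
    return {"pneumonia": 0.82, "healthy": 0.18}

def symbolic_check(diagnosis, patient):
    """Illustrative rule: a pneumonia diagnosis needs a supporting symptom."""
    if diagnosis == "pneumonia" and not (patient["fever"] or patient["cough"]):
        return False, "rule violated: pneumonia without fever or cough"
    return True, "consistent with diagnostic rules"

def hybrid_diagnose(image, patient):
    scores = mock_neural_scores(image)
    best = max(scores, key=scores.get)
    ok, reason = symbolic_check(best, patient)
    return (best if ok else "refer_to_clinician"), reason

print(hybrid_diagnose(None, {"fever": True, "cough": False}))
```

The design choice here is the interesting part: the neural component supplies perceptual flexibility, while the symbolic layer supplies the veto power and the explanation string, which is what makes the combined decision auditable.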
5.2. Using Ontologies and Knowledge Graphs
5.3. Future Prospects in Critical Areas
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| ProbLog | A probabilistic logic programming language |
| DeepProbLog | Deep learning with probabilistic logic reasoning |
| ML | Machine Learning |
| AI | Artificial Intelligence |
| XAI | Explainable Artificial Intelligence |
| MYCIN | An early expert system for medical diagnosis |
| SNOMED CT | Systematized Nomenclature of Medicine—Clinical Terms |
| NLP | Natural Language Processing |
| TLN | Temporal Logic Network, a type of neuro-symbolic AI model |
| GPT | Generative Pre-trained Transformer |
| RGPD | Règlement Général sur la Protection des Données (the French name of the GDPR) |
| RAVEN | Relational and Analogical Visual Reasoning |
References
- Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608.
- Samek, W.; Wiegand, T.; Müller, K.-R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. ITU J. ICT Discov. 2017, 1, 39–48.
- Marcus, G. Deep learning: A critical appraisal. arXiv 2018, arXiv:1801.00631.
- Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2021, 54, 115.
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. J. Law Technol. 2017, 31, 841–887.
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
- Shortliffe, E.H.; Buchanan, B.G. A model of inexact reasoning in medicine. Math. Biosci. 1975, 23, 351–379.
- Besold, T.R.; d’Avila Garcez, A.; Bader, S.; Bowman, H.; Domingos, P.; Hitzler, P.; Kühnberger, K.-U.; Lamb, L.C.; Lima, P.M.V.; de Penning, L.; et al. Neural-symbolic learning and reasoning: A survey and interpretation. In Neuro-Symbolic Artificial Intelligence: The State of the Art; IOS Press: Amsterdam, The Netherlands, 2021; pp. 1–51.
- Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and explainability of artificial intelligence in medicine. WIREs Data Min. Knowl. Discov. 2019, 9, e1312.
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report EBSE-2007-01; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK, 2007; Volume 2.
- Baader, F.; McGuinness, D.L.; Nardi, D.; Patel-Schneider, P.F. The Description Logic Handbook: Theory, Implementation, and Applications; Cambridge University Press: Cambridge, UK, 2017.
- Gunning, D.; Stefik, M.; Choi, J.; Miller, T.; Stumpf, S.; Yang, G.Z. XAI—Explainable artificial intelligence. Sci. Robot. 2019, 4, eaay7120.
- Endsley, M.R. From here to autonomy: Lessons learned from human-automation research. Hum. Factors 2017, 59, 5–27.
- Wing, J.M. Trustworthy AI. Commun. ACM 2021, 64, 64–71.
- Braun, V.; Clarke, V. Thematic Analysis: A Practical Guide; SAGE Publications: London, UK, 2021.
- McCarthy, J. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes; Her Majesty’s Stationery Office: Edinburgh, UK, 1959.
- Newell, A.; Simon, H.A. Computer science as empirical inquiry: Symbols and search. Commun. ACM 1976, 19, 113–126.
- Brachman, R.; Levesque, H. Knowledge Representation and Reasoning; Morgan Kaufmann: San Mateo, CA, USA, 2004.
- Smith, B.C. Reflection and Semantics in a Procedural Language; MIT Press: Cambridge, MA, USA, 1982.
- De Raedt, L.; Dumančić, S.; Manhaeve, R.; Marra, G. From statistical relational to neuro-symbolic artificial intelligence. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), 2020; pp. 4943–4950.
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: London, UK, 2020.
- Smith, B.; Ashburner, M.; Rosse, C.; Bard, J.; Bug, W.; Ceusters, W.; Goldberg, L.J.; Eilbeck, K.; Ireland, A.; Mungall, C.J.; et al. The OBO Foundry: Coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol. 2007, 25, 1251–1255.
- Hayes-Roth, F. Rule-based systems. Commun. ACM 1985, 28, 921–932.
- Sowa, J. Knowledge Representation; Brooks/Cole: Pacific Grove, CA, USA, 2000.
- Gunning, D. Explainable AI (XAI); DARPA Technical Report; DARPA: Arlington, VA, USA, 2017.
- Manhaeve, R.; Dumančić, S.; Kimmig, A.; Demeester, T.; De Raedt, L. Neural probabilistic logic programming in DeepProbLog. Artif. Intell. 2021, 298, 103504.
- Wachter, S.; Mittelstadt, B.; Floridi, L. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int. Data Priv. Law 2017, 7, 76–99.
- Holzinger, A. Interactive machine learning for health informatics. Brain Inform. 2016, 3, 119–131.
- d’Avila Garcez, A.; Lamb, L.C. Neurosymbolic AI: The 3rd wave. arXiv 2020, arXiv:2012.05876.
- Riegel, R.; Gray, A.; Luus, F.; Khan, N.; Makondo, N.; Akhalwaya, I.Y.; Qian, H.; Fagin, R.; Barahona, F.; Sharma, U.; et al. Logical Neural Networks. arXiv 2020.
- Hogan, A.; Blomqvist, E.; Cochez, M.; D’amato, C.; De Melo, G.; Gutierrez, C.; Kirrane, S.; Gayo, J.E.L.; Navigli, R.; Neumaier, S.; et al. Knowledge Graphs. ACM Comput. Surv. 2021, 54, 71.
- Singhal, A. Introducing the Knowledge Graph: Things, Not Strings. Available online: https://blog.google/products/search/introducing-knowledge-graph-things-not/ (accessed on 18 September 2024).
- Ali, H.; Fatima, T. Integrating Neural Networks and Symbolic Reasoning: A Neurosymbolic AI Approach for Decision-Making Systems. ResearchGate 2025.
- Zhang, C.; Gao, F.; Jia, B.; Zhu, Y.; Zhu, S.C. RAVEN: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5312–5322.
- Riou, C.; El Azzouzi, M.; Hespel, A.; Guillou, E.; Coatrieux, G.; Cuggia, M. Ensuring general data protection regulation compliance and security in a clinical data warehouse from a university hospital: Implementation study. JMIR Med. Inform. 2025, 13.
- Ghadermazi, J.; Hore, S.; Shah, A.; Bastian, N.D. GTAE-IDS: Graph Transformer-Based Autoencoder Framework for Real-Time Network Intrusion Detection. IEEE Trans. Inf. Forensics Secur. 2025, 20, 4026–4041.
- AlNusif, M. Explainable AI in Edge Devices: A Lightweight Framework for Real-Time Decision Transparency. Int. J. Eng. Comput. Sci. 2025, 14, 27447–27472.
| Category | Description |
|---|---|
| Logical approaches | Works based on formal logic, predicate logic, or probabilistic logic. |
| Ontological approaches | Works using ontologies for knowledge representation. |
| Expert systems | Studies on modern expert systems and their role in explainability. |
| Hybrid approaches | Works combining symbolic methods with deep learning (neuro-symbolic AI). |
| Criterion | Symbolic Approach | Connectionist Approach | Hybrid Approach |
|---|---|---|---|
| Representation | Symbols, logical rules, ontologies. | Connection weights in a neural network. | Combination of symbols and continuous representations. |
| Reasoning | Deductive logic, explicit rules. | Emergent, based on neural interactions. | Symbolic logic guided by deep learning. |
| Learning | Manual (rule-based) or inductive. | Automatic, data-driven. | Automatic, data-driven. |
| Explainability | Highly explainable (traceable decisions); reported interpretability scores of 92–98%. | Low explainability (black box); reported interpretability scores of 15–35%. | Improved explainability through the integration of symbolic rules [24]; reported interpretability scores of 75–90%. |
| Scalability | Limited for large knowledge bases. | Excellent for large amounts of data. | Challenge: balancing scalability and complexity. |
| Uncertainty management | Rigid; requires extensions (fuzzy logic). | Excellent (handles ambiguity well). | Combines probabilistic logic and deep learning. |
| Application areas | Medical diagnosis, planning, rule-based TLN. | Computer vision, data-driven NLP, recommendation. | Healthcare, robotics, critical systems requiring explainability. |
| Advantages | Transparency, clear logical reasoning. | Flexibility, learning from complex data. | Combines the strengths of both approaches. |
| Limits | Difficult to adapt to unstructured data. | Difficult to interpret and explain. | Design and integration complexity. |
| Examples | MYCIN (medical diagnosis), Prolog. | GPT (text generation), CNN (computer vision). | DeepProbLog, Neural Theorem Provers [25]. |
| Domain | Applications | Concrete Examples | Advantages of Symbolic Approaches |
|---|---|---|---|
| Healthcare | Diagnosis, clinical decision support. | MYCIN, IBM Watson for Oncology. | Explainability, transparency [27], trust. |
| Finance | Fraud detection, automated auditing. | ACTICO, MiFID II compliance systems. | Regulatory compliance, traceability [28]. |
| Industry | Predictive maintenance, process management. | Industrial control systems (G2). | Safety, efficiency, interpretability. |
| Law and ethics | Regulated decision-making, ethical compliance. | ROSS Intelligence, Ethical Guidelines for AI. | Rule compliance, justification of decisions. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Meziane, L.; Abbaoui, W.; Abdellaoui, S.; El Bhiri, B.; Ziti, S. Narrative Review on Symbolic Approaches for Explainable Artificial Intelligence: Foundations, Challenges, and Perspectives. Eng. Proc. 2025, 112, 39. https://doi.org/10.3390/engproc2025112039