Article

Verified Language Processing with Hybrid Explainability †

by Oliver Robert Fox, Giacomo Bergami * and Graham Morgan
School of Computing, Faculty of Science, Agriculture and Engineering, Newcastle University, Newcastle Upon Tyne NE4 5TG, UK
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper: Fox, O.R.; Bergami, G.; Morgan, G. LaSSI: Logical, Structural, and Semantic Text Interpretation. In Database Engineered Applications, Proceedings of the 28th International Symposium, IDEAS 2024, Bayonne, France, 26–29 August 2024; Springer: Berlin/Heidelberg, Germany.
Electronics 2025, 14(17), 3490; https://doi.org/10.3390/electronics14173490
Submission received: 20 May 2025 / Revised: 8 July 2025 / Accepted: 25 August 2025 / Published: 31 August 2025

Abstract

The volume and diversity of digital information have led to a growing reliance on Machine Learning (ML) techniques, such as Natural Language Processing (NLP), for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability and fail to accurately determine similarity for given full texts. The same considerations apply to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. To address this, we present a novel pipeline designed for hybrid explainability. Our methodology combines graphs and logic to produce First-Order Logic (FOL) representations that are both machine- and human-readable, derived through Montague Grammar (MG). Preliminary results indicate the effectiveness of this approach in accurately capturing full-text similarity. To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To probe the limitations of existing approaches, we use three self-contained datasets annotated for this three-way classification task, assessing how well each approach captures sentence structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval (IR) from extensive textual data.
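
As a hedged illustration of the three-way distinction targeted above (a minimal propositional sketch, not the paper's graph/Montague Grammar machinery; the function and atom names here are hypothetical), the following Python enumerates truth assignments to decide whether a hypothesis is implied by, inconsistent with, or indifferent to a premise:

    from itertools import product

    # Toy three-way classifier over propositional abstractions of two
    # sentences: "implication" if every world satisfying the premise also
    # satisfies the hypothesis, "inconsistency" if no such world does,
    # and "indifference" otherwise.
    def classify(premise, hypothesis, atoms):
        implies, consistent = True, False
        for values in product([False, True], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if premise(world):
                consistent = consistent or hypothesis(world)
                implies = implies and hypothesis(world)
        if implies:
            return "implication"
        if not consistent:
            return "inconsistency"
        return "indifference"

    # p = "the cat is on the mat", q = "the cat is on the sofa"
    atoms = ["p", "q"]
    print(classify(lambda w: w["p"], lambda w: w["p"] or w["q"], atoms))  # implication
    print(classify(lambda w: w["p"], lambda w: not w["p"], atoms))        # inconsistency
    print(classify(lambda w: w["p"], lambda w: w["q"], atoms))            # indifference

The paper's contribution lies in deriving such logical forms from full text; this sketch only fixes the intended semantics of the three classes.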
Keywords: verified artificial intelligence; eXplainable AI (XAI); hybrid explainability; natural language processing; full text similarity; spatiotemporal reasoning

