- Article
Verifying Machine Learning Interpretability and Explainability Requirements Through Provenance
- Lynn Vonderhaar, Juan Couder and Omar Ochoa
Machine learning (ML) engineering increasingly incorporates principles from software and requirements engineering to improve development rigor; however, key non-functional requirements (NFRs) such as interpretability and explainability remain difficult to specify and verify using traditional requirements practices. Although prior work defines these qualities conceptually, their lack of measurable criteria prevents systematic verification. This paper presents a novel provenance-driven approach that decomposes ML interpretability and explainability NFRs into verifiable functional requirements (FRs) by leveraging model and data provenance to make model behavior transparent. The approach identifies the specific provenance artifacts required to validate each FR and demonstrates how their verification collectively establishes compliance with interpretability and explainability NFRs. The results show that ML provenance can operationalize otherwise abstract NFRs, transforming interpretability and explainability into quantifiable, testable properties and enabling more rigorous, requirements-based ML engineering.
14 February 2026
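To make the core idea concrete, the sketch below illustrates (in Python) how a decomposed functional requirement might be checked against recorded provenance: each FR lists the provenance artifacts it needs, and verification succeeds only when those artifacts appear in the provenance log. This is a minimal, hypothetical illustration; the requirement names, artifact keys, and log structure are assumptions for the example, not the decomposition defined in the paper.

```python
# Hypothetical sketch: verifying FRs derived from an explainability NFR by
# checking that the provenance artifacts each FR requires were recorded.
# All requirement names and artifact keys are illustrative.

# FRs derived from the NFR, mapped to the provenance artifacts they require.
REQUIRED_ARTIFACTS = {
    "FR-1 record training dataset version": {"dataset_id", "dataset_hash"},
    "FR-2 record model hyperparameters": {"hyperparameters"},
    "FR-3 record per-prediction feature attributions": {"feature_attributions"},
}

def verify_nfr(provenance_log: dict[str, set[str]]) -> dict[str, bool]:
    """Return, for each FR, whether every required artifact was recorded."""
    recorded = set().union(*provenance_log.values()) if provenance_log else set()
    return {fr: needed <= recorded for fr, needed in REQUIRED_ARTIFACTS.items()}

# Example provenance log captured across two runs (illustrative values).
log = {
    "training_run_42": {"dataset_id", "dataset_hash", "hyperparameters"},
    "inference_run_7": {"feature_attributions"},
}
print(verify_nfr(log))  # the NFR is satisfied only if every FR maps to True
```

The NFR itself is never tested directly; it is considered satisfied exactly when all of its derived FRs verify against the recorded provenance.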


![The three “Starting Point” classes and some of their subclasses in PROV-O [50].](https://mdpi-res.com/cdn-cgi/image/w=470,h=317/https://mdpi-res.com/software/software-05-00009/article_deploy/html/images/software-05-00009-g001-550.jpg)
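The figure shows PROV-O's three "Starting Point" classes (Entity, Activity, Agent). As a rough illustration of how such a provenance record could be captured programmatically, the sketch below uses the Python `prov` package; the namespace and identifiers (`ex:trained-model`, `ex:training-run`, and so on) are assumptions for the example, not artifacts from the paper.

```python
from prov.model import ProvDocument

# Build a small PROV document around the three "Starting Point" classes:
# Entity, Activity, and Agent (identifiers below are illustrative).
doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

model = doc.entity("ex:trained-model")        # Entity: the artifact produced
dataset = doc.entity("ex:training-dataset")   # Entity: the data consumed
training = doc.activity("ex:training-run")    # Activity: the process
engineer = doc.agent("ex:ml-engineer")        # Agent: who is responsible

doc.wasGeneratedBy(model, training)           # model was generated by training
doc.used(training, dataset)                   # training used the dataset
doc.wasAssociatedWith(training, engineer)     # training is attributed to the agent

print(doc.get_provn())                        # PROV-N serialization of the record
```

Records of this shape are what the approach treats as the evidence base: each derived FR is verified by querying for the entities, activities, and agents it requires.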


