Interpretable Deep Prototype-Based Neural Networks: Can a 1 Look like a 0?
Abstract
1. Introduction
2. State of the Art
3. Deep Prototype-Based Network Architecture
3.1. The Forward Encoding Flow
3.2. The Backward Decoding Flow
3.3. Guided Prototype Learning
4. Experiments and Results
4.1. NNE Score
4.2. Experiment 1: Baseline Architecture
4.3. Experiment 2: Model Distillation
4.4. Experiment 3: Can a 1 Look like a 0?
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Matrix | NNE |
|---|---|
| Identity (perfectly sharp) | 1 |
| Uniform (fully diffuse) | 0 |
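Since the paper's definition of the NNE score is not reproduced on this page, the sketch below assumes NNE denotes a row-wise normalized negative entropy: each row of a non-negative matrix is treated as a probability distribution, and the score is 1 minus its entropy divided by the maximum (uniform-row) entropy, averaged over rows. Under that assumption the identity matrix scores 1 and the uniform matrix scores 0, matching the bounds in the table above; the function name `nne` is illustrative, not taken from the paper.

```python
import numpy as np

def nne(matrix: np.ndarray, eps: float = 1e-12) -> float:
    """Normalized negative entropy of a matrix's rows (assumed definition).

    Rows are normalized to probability distributions; the score is
    1 - H/H_max averaged over rows, so a one-hot (identity) row scores 1
    and a uniform row scores 0.
    """
    p = matrix / (matrix.sum(axis=1, keepdims=True) + eps)  # row-stochastic
    h = -(p * np.log(p + eps)).sum(axis=1)                  # per-row entropy
    h_max = np.log(matrix.shape[1])                         # entropy of a uniform row
    return float(np.mean(1.0 - h / h_max))

# Sanity checks against the table's bounds:
print(nne(np.eye(10)))              # ~1.0: identity, perfectly sharp
print(nne(np.ones((10, 10)) / 10))  # ~0.0: uniform, fully diffuse
```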