Review

Interpretable Optimization: Why and How We Should Explain Optimization Models

Institute for Research in Technology (IIT), Universidad Pontificia Comillas, 28015 Madrid, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5732; https://doi.org/10.3390/app15105732
Submission received: 19 March 2025 / Revised: 4 May 2025 / Accepted: 14 May 2025 / Published: 20 May 2025
(This article belongs to the Special Issue Machine Learning and Reasoning for Reliable and Explainable AI)

Abstract

Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasoning behind these solutions is often difficult to extract and communicate. This lack of transparency is particularly problematic in fields such as energy planning, healthcare, and resource allocation, where decision-makers require not only optimal solutions but also a clear understanding of trade-offs, constraints, and alternative options. To address these challenges, we propose a framework for interpretable optimization built on three key pillars. First, simplification and surrogate modeling reduce problem complexity while preserving decision-relevant structures, allowing stakeholders to engage with more intuitive representations of optimization models. Second, near-optimal solution analysis identifies alternative solutions that perform comparably to the optimal one, offering flexibility and robustness in decision-making while uncovering hidden trade-offs. Third, rationale generation ensures that solutions are explainable and actionable by providing insights into the relationships among variables, constraints, and objectives. By integrating these principles, optimization can move beyond black-box decision-making toward greater transparency, accountability, and usability. Enhancing interpretability strengthens both efficiency and ethical responsibility, enabling decision-makers to trust, validate, and implement optimization-driven insights with confidence.
Keywords: interpretable optimization; optimization; explainability; global sensitivity analysis; fitness landscape; near-optimal solutions; surrogate modeling; problem simplification; presolve; sensitivity analysis; modeling all alternatives; modeling to generate alternatives; ethics; rationale generation
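The near-optimal solution analysis described in the abstract (often called modeling to generate alternatives, MGA) can be sketched on a toy linear program: first solve for the optimum, then add a budget constraint allowing the objective to degrade by a small tolerance and search for a structurally different solution within that slack. The problem data below are purely illustrative, not taken from the article.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize x1 + 2*x2  subject to  x1 + x2 >= 1,  x >= 0
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0, -1.0]])  # -(x1 + x2) <= -1  encodes  x1 + x2 >= 1
b_ub = np.array([-1.0])

# Step 1: find the optimal solution and cost
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
opt_cost = res.fun  # here: x = (1, 0), cost = 1

# Step 2 (MGA): allow costs up to (1 + eps) * optimum, then push the
# solution in a different direction (here: maximize x2, i.e. minimize -x2)
eps = 0.1
A_mga = np.vstack([A_ub, c])                  # original constraints + budget
b_mga = np.append(b_ub, (1 + eps) * opt_cost)  # c @ x <= 1.1 * opt_cost
alt = linprog([0.0, -1.0], A_ub=A_mga, b_ub=b_mga)

print("optimal:", res.x, "cost", opt_cost)
print("near-optimal alternative:", alt.x, "cost", c @ alt.x)
```

The alternative trades a small cost increase (at most 10% here) for a solution that uses the second variable, exposing a trade-off the single optimal answer would hide.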

Share and Cite

MDPI and ACS Style

Lumbreras, S.; Ciller, P. Interpretable Optimization: Why and How We Should Explain Optimization Models. Appl. Sci. 2025, 15, 5732. https://doi.org/10.3390/app15105732


