Article

An Improved Summary–Explanation Method for Promoting Trust Through Greater Support with Application to Credit Evaluation Systems

1 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
2 College of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1305; https://doi.org/10.3390/math13081305
Submission received: 29 March 2025 / Revised: 13 April 2025 / Accepted: 14 April 2025 / Published: 16 April 2025
(This article belongs to the Special Issue Advances in Machine Learning and Graph Neural Networks)

Abstract: Decision support systems are being increasingly applied in critical decision-making domains such as healthcare and criminal justice. Trust in these systems requires transparency and explainability. Among the forms of explanation, the globally consistent summary–explanation (SE) is a rule-based local explanation offering useful global information and 100% dataset consistency. However, globally consistent SEs with limited complexity often have small support, making them unconvincing. To improve the support of SEs, this paper introduces the q-consistent SE, trading slightly lower consistency for greater support. The challenge is solving the maximizing support with q-consistency (MSqC) problem, which is more complex than maximizing support with global consistency, leading to long solution times with standard solvers. To enhance efficiency, the paper proposes a weighted column sampling (WCS) method, using simplified increased support (SIS) scores to create and solve smaller problem instances. Experiments on credit evaluation scenarios confirm that applying the SIS-based WCS method to MSqC problems improves scalability and yields SEs with greater support and better global extrapolation effectiveness.

1. Introduction

Decision support systems (DSS) have been incorporated into various domains that require critical decision-making, such as healthcare, criminal justice, and finance [1,2,3,4,5]. DSS with explainable artificial intelligence (XAI) assist decision-makers by generating a high-level summary of models/datasets or clear explanations of suggested decisions. This paper examines a challenge encountered by DSS in the domain of credit evaluation: how to deliver accurate and credible summary–explanations (SEs). An SE is defined as a local explanation rule b(x_e) ⇒ y with global information. The following is an example of a globally consistent SE for an observation instance of the Home Equity Line of Credit (HELOC) dataset used in the FICO explainability challenge [6]: “For all the 100 people with ExternalRiskEstimate ≤ 63 and AverageMinFile ≤ 48, the model predicts a high risk of default”. An SE rule b(x_e) ⇒ y being globally consistent means that y holds for all instances in the support set (the 100 people that satisfy the conditions). Therefore, although an SE is specific to a target observation x_e, it offers a valuable global perspective on the dataset or model.
In credit evaluation, the problem of maximizing support with global consistency (MSGC) aims to deliver a persuasive explanation for the credit evaluation decision by searching for the conditions that are satisfied by both the applicant and most other people in the database (thus maximum support). The MSGC problem can be formulated as an integer programming (IP) problem [7] and solved by branch-and-bound (B&B) solvers [8].
Well-supported explanations are convincing, which fosters trust in the system among users. Nonetheless, owing to the global consistency constraint, solving the MSGC problem on large datasets often leads to rules that are highly complex, have low support, or are even infeasible. For many real-world scenarios, explanations with strong support significantly influence user acceptance, whereas minor inconsistencies can often be tolerated. Guided by this observation, this paper addresses the problem of maximizing support with q-consistency (MSqC), q ∈ (0, 1], which substantially enhances the support of the SE rule. Consider, for example, a 0.85-consistent SE rule with a support value of 2000: “In over 85% of the 2000 people with ExternalRiskEstimate ≤ 63 and AverageMinFile ≤ 48, the model predicts a high risk of default”.
Although the extension seems straightforward, it turns out that the MSqC problem is much more complex than the original MSGC problem. The primary cause is that when maximizing the q-consistent support, which allows a proportion of up to 1 − q of the matched observations to be inconsistent, it is necessary to incorporate all observations with outcomes different from the target into the MSqC formulation. This requirement, i.e., the inclusion of outcome-divergent observations, was absent in the original MSGC problem.
Besides formulating the MSqC problem, this paper also addresses the following challenges:
1.
Efficiently solving the MSqC problem on large datasets. The B&B method is inefficient for IP problems built on large datasets, including MSGC and MSqC. For example, a 60 s time limit had to be imposed to terminate the solution process in [7]. In our reproduction, solving MSGC with the SCIP solver [9] requires 1852 s for |N| = 10K and 101 s on average for datasets of size |N| = 1K. As demonstrated by the experimental results, the performance of the B&B method degrades significantly when solving the more complex MSqC problem. Therefore, a more efficient and scalable solver is needed to solve MSqC problems on large datasets.
2.
Finding explanations that extrapolate well. In certain domains, such as credit evaluation, optimization models are often solved over a subset of the global data, owing to practical considerations including transmission efficiency, privacy protection, and distributed storage. Therefore, explanations with high support and consistency on a local dataset should also exhibit these qualities on the global dataset. This extrapolation effectiveness of explanations should be measured, and the explanations obtained by our method should exhibit good extrapolation effectiveness.
The primary contributions of this study are summarized below:
1.
The MSqC problem is formulated for the first time; it can generate summary–explanations (SEs) with substantially higher support, achieved by allowing slight reductions in consistency.
2.
The simplified increased support (SIS)-based WCS method is proposed to solve the MSqC problem efficiently; it is far more scalable than the standard B&B method.
3.
A global prior injection technique is proposed to further improve the SIS-based WCS method for finding SEs with better extrapolation effectiveness.
This paper is organized as follows. Section 2 introduces the background and related studies. Section 3 formally defines the globally consistent SE and q-consistent SE, and formulates the MSGC and MSqC problems. Then, Section 4 introduces the proposed SIS-based WCS method and the global prior injection technique. In Section 5, computer experiments are conducted to evaluate the effectiveness of the proposed methods, in terms of both the solving time and solution quality. Finally, Section 6 concludes the paper.
The list of symbols used in this paper can be found in Supplementary Materials.

2. Background

2.1. Explainable Artificial Intelligence

Explainable Artificial Intelligence (XAI) refers to a collection of techniques and methods aimed at making AI models more transparent and understandable to human users. Traditional black-box machine learning models, such as deep neural networks and ensemble methods, often yield high predictive accuracy but lack interpretability, making it difficult for users to trust and adopt their decisions in critical applications [10]. XAI seeks to bridge this gap by providing insights into model behavior, offering explanations that enhance accountability, regulatory compliance, and user trust [11,12,13].
XAI methods can be broadly classified into intrinsic interpretability and post hoc interpretability [11]. Intrinsic methods involve models that are inherently interpretable, such as decision trees, linear regression, and rule-based models. These models provide transparency by design, allowing users to trace decision pathways directly.
Post hoc interpretability, on the other hand, involves techniques applied after training a complex black-box model. These include feature importance analysis, local interpretable model-agnostic explanations (LIME) [14], and Shapley additive explanations (SHAP) [15]. Post hoc methods can provide global (explaining the overall behavior of a model) or local (explaining a specific decision) insights into model predictions.
Despite significant progress, XAI faces multiple challenges. For example, (1) simpler models are often more interpretable but may lack predictive power [16] (accuracy vs. interpretability trade-off); (2) many XAI methods are computationally expensive and struggle to scale [17] (scalability and efficiency); (3) XAI methods can be susceptible to adversarial manipulations [18] (robustness against adversarial attacks); (4) integrating XAI tools into existing systems often requires specialized knowledge [11]. Recent studies have also emphasized the value of incorporating diverse contextual factors to enhance interpretability in complex domains, such as travel demand prediction using environmental and socioeconomic variables [19].

2.2. Combinatorial Optimization in XAI

Combinatorial optimization involves mathematical techniques for selecting the optimal solution from a finite set of possibilities. Common approaches include Mixed-Integer Linear Programming (MILP), Constraint Programming (CP), heuristic search, and branch-and-bound methods [20]. These techniques have found applications in enhancing XAI by providing rigorous ways to enforce transparency constraints and ensure optimal interpretability [16].
  • Feature Selection and Dimensionality Reduction: Combinatorial optimization can be used to identify the smallest subset of features that maintain model performance while improving interpretability. MILP-based feature selection methods have been shown to enhance transparency by reducing unnecessary complexity [18].
  • Optimized Decision Trees and Rule Lists: Recent studies have demonstrated the use of combinatorial optimization for training globally optimal decision trees and rule-based models. These methods directly optimize for both accuracy and interpretability by minimizing tree depth or the number of decision rules [16,21,22,23,24].
  • Counterfactual Explanations: Counterfactual explanations provide actionable insights by showing the minimal changes required for a different model outcome. The generation of optimal counterfactual examples is a combinatorial problem that can be effectively solved using MILP [25].

2.3. Enhancing Credit Evaluation with Combinatorial Optimization

Financial decision-making, particularly in credit evaluation, relies heavily on machine learning models. Regulatory requirements such as the General Data Protection Regulation (GDPR) mandate transparency in automated decision-making, requiring lenders to provide clear explanations for loan approvals and rejections [26].
The use of combinatorial optimization techniques in credit evaluation systems has proven beneficial in several key areas.
  • Monotonic Constraints in Credit Scoring: First, monotonic constraints in credit scoring ensure that model outputs remain aligned with financial intuition. For instance, increasing a borrower’s income should not decrease their creditworthiness. These constraints can be effectively enforced using MILP, allowing models to maintain both interpretability and regulatory compliance [27].
  • Optimal Credit Scoring Models: Second, combinatorial optimization facilitates the development of optimal credit scoring models. By deriving sparse yet highly predictive models, these techniques enable the creation of interpretable scoring systems that provide transparent creditworthiness assessments while maintaining strong predictive performance [17].
  • Counterfactual Explanations for Loan Decisions: Finally, counterfactual explanations generated through MILP provide actionable insights for loan applicants. These explanations suggest specific changes that applicants can make to improve their credit standing, such as reducing outstanding debt or increasing savings. By offering tailored guidance, counterfactual explanations enhance the fairness and transparency of credit decision-making [28].
The integration of combinatorial optimization into XAI for credit evaluation not only improves model interpretability but also aligns with regulatory standards. Transparent models enable financial institutions to audit decision-making processes, ensuring fairness and reducing bias. Moreover, by providing clear, actionable explanations, these methods help to build trust between customers and lenders, ultimately leading to more responsible AI deployment in financial services [26].
In some scenarios, no models exist to generate neighborhood data, and the sole source of knowledge is historical or pre-provided data. One prominent instance of this scenario is the FICO explainable machine learning challenge [6], in which FICO provided a dataset generated by its black-box model, yet the model itself remained inaccessible to researchers. For situations in which only the data are accessible, and the model itself is not, a more data-centered approach is necessary. Two possible options are to fit a model to the data and provide interpretations of it [29,30], or to explain decisions based on data patterns [7,31]. For example, the globally consistent rule-based summary–explanation (SE) was proposed in [7], casting the problem of maximizing the number of samples that support the decision rule as a combinatorial optimization problem, called the maximizing support with global consistency (MSGC) problem.
However, the global consistency constraint in MSGC often results in infeasibility or small support for the SE solution on many practical large datasets. Indeed, in many real-world scenarios, explanations with high support facilitate users’ trust in the explainer system, whereas minor inconsistencies can often be acceptable within certain thresholds. Guided by this idea, we generalize the MSGC problem to MSqC in this paper by requiring only a consistency level of at least q, rather than full consistency, thereby significantly enhancing the support of the SE rule.
While our proposed MSqC framework focuses on optimizing the trade-off between support and consistency in rule-based explanations, it also shares conceptual similarities with approximation-based rule generation methods, such as “probably approximately correct” (PAC) rules [32]. These methods often allow a small fraction of exceptions or errors in order to improve the generalization capacity of learned rules.
However, there are important differences. PAC-style rule learners typically operate in a probabilistic framework, relying on distributional assumptions and statistical learning theory to guarantee generalization. Moreover, many of these methods are model-dependent and aim to learn classification rules applicable across the dataset. In contrast, our MSqC framework is model-agnostic and deterministic, directly optimizing rule support under a user-defined consistency threshold q without requiring any probabilistic assumptions or model access. Furthermore, MSqC generates target-specific summary–explanations that not only maximize support but also ensure interpretability in high-stakes decision-making settings, such as credit evaluation.
This distinction positions MSqC as a novel contribution in the space of interpretable explanation frameworks with a focus on optimization-based control over consistency and coverage rather than statistical generalization guarantees.

3. Problem Formulation

In this section, we first introduce the typical use cases of SE under a credit risk assessment scenario; then, the formal definitions of SE, its max-support problem, the consistency level, and the extrapolation challenge are presented in the rest of this section.

3.1. A Credit Risk Assessment Scenario

Here, a credit risk assessment scenario is used to illustrate the use cases and specific forms of SEs generated through a decision support system (DSS). Given input data x_e (also called target data, indicating that it is the target to be explained), the DSS outputs a suggested decision y_e and the SE that explains the target observation (x_e, y_e). Figure 1 illustrates two cases in which the DSS responds to the request of a loan applicant or a banker, facilitating their comprehension of the target observation (x_e, y_e) using SEs that reveal the risk levels of the instances in the dataset similar to x_e.
The figure also shows that different SEs can have varying degrees of impact on users’ trust, depending on their level of credibility. In general, SEs with large support and high consistency levels are more convincing, promoting trust, while SEs with small support or low consistency levels are less persuasive, thereby undermining users’ trust in the system. On the right-hand side of the figure, a comparison is made between the globally consistent SE [7] and the q-consistent SE proposed in this paper, using a target observation sampled from the 10K-sized FICO dataset [6]. It can be seen that while the globally consistent SE has a consistency of 100%, it has very small support (which is typically the case for globally consistent SEs). In contrast, the q-consistent SE attains much greater support at the cost of a lower consistency of 82%, which, given its advantages, should be acceptable for this scenario, as well as potentially for many others.

3.2. Globally Consistent Summary–Explanation

The summary–explanation (SE) is defined on a truth table, i.e., a |P|-dimensional observation dataset {(x_i, y_i), i ∈ N} with binary features x_i ∈ {0, 1}^{|P|}. Further, N serves as the index set for observations, while P represents the set of indices for binary feature functions. In the credit risk assessment scenario, binary labels y_i ∈ {0, 1} are used to denote high (1) or low (0) risk, although it should be noted that SE does not require labels to be binary. In general, such a truth table can be derived from any dataset {($\tilde{x}_i$, y_i), i ∈ N} with arbitrary inputs $\tilde{x}_i$. As an illustration, the initial input vector $\tilde{x}_e = \{\tilde{x}_{e,1}, \tilde{x}_{e,2}\} = \{30, 10\}$ is converted into the binary representation x_e = {δ_{e,1}, δ_{e,2}, δ_{e,3}} = {1, 0, 1} through the ordered feature function set $\{\tilde{x}_{e,1} \ge 0,\ \tilde{x}_{e,1} \ge 50,\ \tilde{x}_{e,2} \ge 0\}$. Henceforth, the terms ‘feature’ and ‘feature function’ are treated as synonymous.
Let b denote a conjunctive clause constructed by the logical AND (∧) of multiple conditions, where each condition corresponds to a binary feature F_p, i.e., b(·) = ⋀_{p∈P′} F_p(·), where P′ ⊆ P represents a subset of the feature set P. For the example above, b could be $(\tilde{x}_{e,1} \ge 0)$, or $(\tilde{x}_{e,1} \ge 50) \wedge (\tilde{x}_{e,2} \ge 0)$.
A summary–explanation (SE) is a rule b(·) ⇒ y that describes the binary classifier
$$h_{b(\cdot) \Rightarrow y}(x) = \begin{cases} y & \text{if } b(x) = \bigwedge_{p \in P'} F_p(x) = 1, \\ 1 - y & \text{otherwise}. \end{cases}$$
A globally consistent SE for an observation (x_e, y_e) is an SE b(·) ⇒ y_e with the following properties:
1.
Relevancy, i.e., b(x_e) = 1;
2.
Consistency, i.e., for all observations i ∈ N, if b(x_i) = 1, then y_i = y_e.
In more accessible terms, a globally consistent SE b(·) ⇒ y_e can be articulated as follows: “for every observation (such as a person or customer) where the clause b(·) holds, the outcome (e.g., predicted risk/decision) matches the label y_e of observation e”. This type of explanation aligns the current observation (x_e, y_e) with historical data points in the dataset, thereby making it more persuasive to users in domains such as credit evaluation.
The quality of the clause b ( · ) is evaluated using two metrics:
1.
Complexity |b|, which is the count of conditions within b;
2.
Support |S_N(b)|, which is the cardinality of the support set S_N(b), defined as the set of observations in N that satisfy clause b. Specifically, S_N(b) = {i ∈ N : b(x_i) = 1}.
As is customary, the term support can refer to either the support set S N ( b ) or its cardinality | S N ( b ) | , depending on the context. To prevent ambiguity, this paper uses set notation to denote either index sets or original sets for notational simplicity.
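To make these definitions concrete, the following is a minimal Python sketch (not part of the original article) of how a clause, its complexity, and its support can be evaluated on a binarized truth table. The names `delta` and `clause_cols` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def clause_support(delta, clause_cols):
    """Indices of observations that satisfy every condition of the clause b."""
    # delta: |N| x |P| binary truth table with delta[i, p] = F_p(x_i)
    # clause_cols: indices of the features p selected in b (i.e., b_p = 1)
    satisfied = delta[:, clause_cols].all(axis=1)
    return np.flatnonzero(satisfied)

# Toy example: a three-observation truth table and a two-condition clause.
delta = np.array([[1, 0, 1],
                  [1, 1, 1],
                  [0, 1, 1]])
clause_cols = [0, 2]                                # b = F_1 AND F_3
print(len(clause_cols))                             # complexity |b| = 2
print(len(clause_support(delta, clause_cols)))      # support |S_N(b)| = 2
```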

3.3. Minimizing Complexity and Maximizing Support

The challenges of minimizing complexity with global consistency (MCGC) and maximizing support with global consistency (MSGC) for the SE b(·) ⇒ y_e were introduced by Rudin et al. [7]. The MCGC problem aims to identify a solution b that minimizes the complexity |b|, and this objective can be modeled as the following IP model:
$$\min_{b} \;\; \sum_{p \in P_e} b_p \tag{1}$$
$$\text{s.t.} \;\; \sum_{p \in P_e} b_p (1 - \delta_{i,p}) \ge 1, \quad \forall i \in N \setminus N_e, \tag{2}$$
$$b_p \in \{0, 1\}, \quad \forall p \in P_e, \tag{3}$$
where the binary decision variable b_p = 1 signifies that feature p is included in the resulting clause b = ⋀_{p∈P′} F_p(·), i.e., p ∈ P′, and b_p = 0 otherwise. These variables, b_p, are termed feature variables. Additionally, the binary variable δ_{i,p} ∈ {0, 1} signifies whether observation i satisfies binary feature p, i.e., δ_{i,p} = F_p(x_i). The activation feature set P_e for an observation e consists of the features that are satisfied by e, i.e., P_e = {p ∈ P : δ_{e,p} = 1}. In addition, N_e represents the set of consistent observations, defined as N_e = {i ∈ N : y_i = y_e}. Therefore, the set of inconsistent observations is N \ N_e, where for each i ∈ N \ N_e, it holds that y_i ≠ y_e. The relevancy property is guaranteed by selecting features from P_e only, while the consistency property is ensured by constraint (2), which guarantees that for any observation with y_i ≠ y_e, b(x_i) = 0. The MCGC model is fast to solve due to its simplicity; however, it does not guarantee large support.
The MSGC problem can be viewed as an extension of the MCGC problem, where the objective is to find b with maximal support |S_N(b)|, subject to a complexity constraint |b| ≤ M_c. Following Rudin et al. [7], this paper adopts M_c = 4, a value deemed reasonable for the complexity of an SE in credit evaluation. The MSGC problem is formulated as follows:
$$\max_{b, r} \;\; \sum_{i \in N_e} r_i \tag{4}$$
$$\text{s.t.} \;\; \sum_{p \in P_e} b_p (1 - \delta_{i,p}) \ge 1, \quad \forall i \in N \setminus N_e, \tag{5}$$
$$\sum_{p \in P_e} b_p (1 - \delta_{i,p}) \le M (1 - r_i), \quad \forall i \in N_e, \tag{6}$$
$$\sum_{p \in P_e} b_p \le M_c, \tag{7}$$
$$b_p, r_i \in \{0, 1\}, \quad \forall p \in P_e, \; \forall i \in N_e, \tag{8}$$
where the binary decision variable r_i ∈ {0, 1} denotes whether observation i is included in the support of clause b(·), i.e., b(x_i) = 1 if and only if r_i = 1. Additionally, the constant M must satisfy M ≥ M_c. Constraint (6) enforces that for an observation i ∈ N_e to be part of b(·)’s support, all conditions within b(·) must be satisfied by i. While the support of the MSGC SE is typically larger than that of the MCGC SE, solving the MSGC model (4)–(8) is notably slower due to its elevated complexity.
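For readers who want to reproduce the formulation, the following is a hedged sketch of the MSGC model (4)–(8) written with the PuLP modeling library and its bundled CBC solver; the original work uses a B&B solver such as SCIP, and the helper names (`solve_msgc`, `delta`) are illustrative assumptions rather than code from the article.

```python
import pulp

def solve_msgc(delta, y, e, M_c=4):
    """Sketch of MSGC (4)-(8): maximize support subject to global consistency."""
    P_e = [p for p in range(delta.shape[1]) if delta[e, p] == 1]    # activation features
    N_e = [i for i in range(len(y)) if y[i] == y[e]]                # consistent observations
    N_inc = [i for i in range(len(y)) if y[i] != y[e]]              # inconsistent observations
    M = M_c                                                         # big-M constant, M >= M_c

    prob = pulp.LpProblem("MSGC", pulp.LpMaximize)
    b = {p: pulp.LpVariable(f"b_{p}", cat="Binary") for p in P_e}
    r = {i: pulp.LpVariable(f"r_{i}", cat="Binary") for i in N_e}

    prob += pulp.lpSum(r.values())                                            # objective (4)
    for i in N_inc:                                                           # consistency (5)
        prob += pulp.lpSum(b[p] * (1 - delta[i, p]) for p in P_e) >= 1
    for i in N_e:                                                             # support linkage (6)
        prob += pulp.lpSum(b[p] * (1 - delta[i, p]) for p in P_e) <= M * (1 - r[i])
    prob += pulp.lpSum(b.values()) <= M_c                                     # complexity bound (7)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":                               # e.g., infeasible instance
        return []
    return [p for p in P_e if b[p].value() > 0.5]                             # selected conditions of b
```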

3.4. The Third Metric: Consistency Level

Formally, a globally consistent summary–explanation b(·) ⇒ y corresponds to a 1-consistent or 100%-consistent rule, as it mandates perfect consistency across all observations i ∈ N. However, deriving such a strictly consistent rule is often computationally challenging, if not outright impossible, on real-world datasets. For large-scale applications, both the MCGC and MSGC models typically yield rules with either excessive complexity, insufficient support, or infeasibility. The rationale is straightforward: complex datasets rarely contain 1-consistent rules with moderate complexity (e.g., M_c ≤ 4). Consequently, relaxing the consistency requirement from strict 1-consistency to a more lenient threshold such as 0.9 or 0.8 becomes a pragmatic choice. This relaxation is widely acceptable in practical SE applications, including credit evaluation, where near-consistent rules often suffice for actionable insights.
We characterize q-consistency as the following condition: for at least a fraction q of the samples where b(x_i) = 1, i ∈ N, it holds that y_i = y. Define S_N(b, y) as the set of consistent examples, i.e., S_N(b, y) = {i ∈ N : b(x_i) = 1, y_i = y}. Formally, the consistency measure for the rule b(·) ⇒ y is defined as the ratio of consistent examples to the total support, i.e.,
$$c_N(b, y) = |S_N(b, y)| \, / \, |S_N(b)|. \tag{9}$$
Consequently, the q-consistency requirement translates to ensuring that this ratio meets or exceeds q, i.e., c_N(b, y) ≥ q.
A q-consistent SE b(·) ⇒ y_e for an observation (x_e, y_e) can be restated as follows: “For at least a proportion q of all the observations where b(·) holds true, the outcome is y_e”. For instance, in the domain of credit evaluation, a q-consistent SE for an observation of the HELOC dataset [6] could read: “For over 80% of all the 1108 people with NumTotalTrades ≤ 40 and NumTradesOpeninLast12M ≥ 3, the model predicts a high risk of default”. From the above, it can be seen that the essence of the summary–explanation lies in the following factors: (1) informing the applicant about how many people share similar features with the applicant (i.e., the support), and (2) explaining to the applicant that, among these similar individuals, a q-fraction of them defaulted. As a result, the system provides a credible explanation of why the applicant is classified as high risk and consequently rejected.
Naturally, the problem of maximizing support with q-consistency (MSqC) can be extended from the MSGC problem (4)–(8). The objective of MSqC is to maximize the support of an SE b ⇒ y subject to the q-consistency constraint c_N(b, y) ≥ q, which can be formulated as follows (a hedged modeling sketch of this formulation is given at the end of this subsection):
$$\max_{b, r} \;\; \sum_{i \in N} r_i \tag{10}$$
$$\text{s.t.} \;\; \sum_{p \in P_e} b_p (1 - \delta_{i,p}) + r_i \ge 1, \quad \forall i \in N \setminus N_e, \tag{11}$$
$$\sum_{p \in P_e} b_p (1 - \delta_{i,p}) \le M (1 - r_i), \quad \forall i \in N, \tag{12}$$
$$\sum_{p \in P_e} b_p \le M_c, \tag{13}$$
$$\sum_{i \in N} (a_i - q)\, r_i \ge 0, \tag{14}$$
$$b_p, r_i \in \{0, 1\}, \quad \forall p \in P_e, \; \forall i \in N. \tag{15}$$
Compared to the MSGC model (4)–(8), four modifications are introduced, listed as follows.
1.
Binary variables r_i (representing supportive observations) are defined for all observations i ∈ N, rather than just for the consistent observations i ∈ N_e.
2.
Constraint (11) now incorporates r_i, allowing for inconsistent support (r_i = 1) for i ∈ N \ N_e. Specifically, recall that in MCGC and MSGC, the consistency constraint requires that for any inconsistent observation i, the SE rule is not satisfied, i.e., b(x_i) = ⋀_{p∈P′} F_p(x_i) = 0. Thus, constraints (2) and (5) in the MCGC and MSGC problems mean that at least one condition p selected in the SE rule b (i.e., b_p = 1) must be unsatisfied by i (i.e., δ_{i,p} = 0). Here, in MSqC, some inconsistent observations i ∈ N \ N_e may also satisfy the SE rule b. Thus, constraint (11) imposes b(x_i) = 0 only on those observations that are not supportive (r_i = 0).
3.
Constraint (12) now applies to all i ∈ N instead of only those in N_e, simply because the r_i are now defined for all observations in N, rather than just for the consistent observations i ∈ N_e. As in MSGC, this constraint enforces that for any supportive observation (r_i = 1), it must hold that b(x_i) = 1, i.e., whenever b_p = 1, we have δ_{i,p} = 1 for every i with r_i = 1.
4.
Constraint (14) introduces the q-consistency requirement, where the binary constant a_i = 1 denotes i ∈ N_e (and a_i = 0 otherwise). Specifically, since the number of consistent examples is |S_N(b, y)| = Σ_{i∈N} a_i r_i, and the size of the support is |S_N(b)| = Σ_{i∈N} r_i, by the definition of the consistency measure c_N(b, y) in (9), constraint (14) encodes the q-consistency requirement c_N(b, y) ≥ q.
Remark 1.
Previous research [7] has demonstrated that MCGC and MSGC are NP-hard. The model formulations MCGC (1)–(3), MSGC (4)–(8), and MSqC (10)–(15) reveal a clear progression in their complexity levels. Formally proving the NP-hardness of MSqC is left as future work, but its complexity dominance over MSGC and MCGC is evident from the increased model size, i.e., MSqC has strictly more variables and constraints while maintaining a similar structure.
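As a concrete illustration of how the four modifications change the model, the following is a hedged PuLP sketch of MSqC (10)–(15), structured in parallel with the MSGC sketch in Section 3.3; all helper names are illustrative assumptions, and the bundled CBC solver stands in for a tuned B&B solver.

```python
import pulp

def solve_msqc(delta, y, e, q=0.85, M_c=4):
    """Sketch of MSqC (10)-(15): maximize support subject to q-consistency."""
    n = len(y)
    P_e = [p for p in range(delta.shape[1]) if delta[e, p] == 1]
    a = [1 if y[i] == y[e] else 0 for i in range(n)]          # a_i = 1 iff i is in N_e
    M = M_c                                                   # big-M constant, M >= M_c

    prob = pulp.LpProblem("MSqC", pulp.LpMaximize)
    b = {p: pulp.LpVariable(f"b_{p}", cat="Binary") for p in P_e}
    r = {i: pulp.LpVariable(f"r_{i}", cat="Binary") for i in range(n)}   # defined on all of N

    prob += pulp.lpSum(r.values())                                       # objective (10)
    for i in range(n):
        lhs = pulp.lpSum(b[p] * (1 - delta[i, p]) for p in P_e)
        if a[i] == 0:
            prob += lhs + r[i] >= 1                                      # constraint (11)
        prob += lhs <= M * (1 - r[i])                                    # constraint (12)
    prob += pulp.lpSum(b.values()) <= M_c                                # constraint (13)
    prob += pulp.lpSum((a[i] - q) * r[i] for i in range(n)) >= 0         # q-consistency (14)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":
        return []
    return [p for p in P_e if b[p].value() > 0.5]
```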

3.5. The Extrapolation Challenge

In domains such as credit evaluation, optimization models are often solved over a subset of the global data, driven by practical factors such as transmission efficiency, privacy concerns, and distributed storage. As shown in Figure 2, the DSS optimizes an SE on one of the local datasets, while the SE may be evaluated later on the global dataset. The extrapolation challenge states that if the SEs given by the DSS have large support and high consistency locally, then they should also exhibit large support and a high consistency level when evaluated on the global dataset.
More formally, let D denote the global dataset (index set), and N ⊆ D be a local dataset (index set) drawn randomly from D. For a target observation (x_e, y_e), e ∈ N, to be explained, an SE rule b_N ⇒ y_e is sought by solving an optimization problem on N. During the solution process, the local support |S_N(b)| and local consistency level c_N(b, y_e) of an SE rule b ⇒ y_e can be computed on the local dataset N. However, as the global dataset is inaccessible during the optimization process, the challenge in extrapolation lies in improving the global support |S_D(b)| and global consistency level c_D(b, y_e) without direct access to the global dataset.
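As a minimal illustration of the two evaluation settings, the following sketch (reusing `clause_support` from the earlier snippet) computes the support and consistency of a fixed clause on a local table and then on the global table; `delta_local`, `y_local`, `delta_global`, and `y_global` are placeholder names, not artifacts from the paper.

```python
import numpy as np

def support_and_consistency(delta, y, clause_cols, y_e):
    """Support |S(b)| and consistency level c(b, y_e) of a clause on a given truth table."""
    sup = clause_support(delta, clause_cols)       # from the earlier sketch
    if len(sup) == 0:
        return 0, 0.0
    return len(sup), float((np.asarray(y)[sup] == y_e).mean())

# Local metrics are available during optimization on N; global metrics can only be
# evaluated afterwards, if and when the global table D becomes accessible.
s_N, c_N = support_and_consistency(delta_local, y_local, clause_cols, y_e)
s_D, c_D = support_and_consistency(delta_global, y_global, clause_cols, y_e)
```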

4. Methodology

As formulated in the previous section, the generation of an SE can be implemented by solving an optimization model. Specifically, three IP models, i.e., MCGC, MSGC, and MSqC, have been introduced to optimize the SE for different objectives.
Since we aim to enhance the credibility of SE by increasing its support, our focus is on the solution algorithms for MSGC and MSqC.
The MSGC model [7] is solved with B&B solvers such as SCIP [9]. However, the B&B method is hardly applicable to these IP models on large datasets, not only because B&B is an exact solution method, but also because MSGC and MSqC are NP-hard. Therefore, it is necessary to develop an approximate solution algorithm for large-scale MSqC problems that is not only efficient but also finds SEs that extrapolate well (see Section 3.5).
However, most approximate algorithms, including evolutionary algorithms (e.g., genetic algorithms and particle swarm optimization), are not suitable for our objective, because both MSGC and MSqC are large combinatorial problems with numerous constraints. Specifically, the MSGC problem has |N_e| + |P_e| binary decision variables and |N| + 1 constraints, and the MSqC problem has |N| + |P_e| binary decision variables and |N \ N_e| + |N| + 2 constraints. As a result, regular evolutionary methods struggle to even find a feasible solution for MSGC and MSqC.
Our method is based on a heuristic sampling approach that shares similarities with stochastic optimization methods such as stochastic subgradient descent [33,34]. However, unlike purely random sampling, each sub-problem in our framework is constructed in a guided manner using SIS scores, which incorporates global prior knowledge about feature importance (called global prior injection). This guided sampling helps improve the representativeness of selected sub-problems and addresses the extrapolation challenge commonly encountered in local explanation methods.
In this section, we first introduce the SIS score-based sampling method for feature variables, then the global prior injection technique that aims to improve the quality of SE solutions on the global dataset, and finally the proposed SIS-based WCS optimization algorithm framework. The relationship between the different components of the optimization framework is illustrated in Figure 3. In short, to achieve scalability in time efficiency, the WCS optimization algorithm decomposes a large MSqC problem into multiple smaller MSGC sub-problems and selects the best solution among the solutions of the MSGCs. Each MSGC is obtained by column (feature) and row (instance) sampling of the dataset N, with column sampling weighted by the features’ pre-computed SIS scores. The global prior injection technique embeds global preferences over the features into their SIS score distribution, utilizing the fact that SIS scores can be computed before the solution process.
For the convenience of the reader, the symbols used throughout this paper are listed in the Supplementary Materials.

4.1. Simplified Increased Support

The simplified increased support (SIS) is a score over feature variables by which the smaller IP sub-problems (i.e., the smaller MSGCs) determine their selection of variables. In other words, each smaller MSGC problem is generated by sampling (without replacement) ρ|P_e| feature variables, ρ ∈ (0, 1], from the activation feature set P_e according to the features’ SIS scores s_p, defined as follows:
$$s_p = \sum_{i \in N_e} \delta_{i,p} - \sum_{i \in N \setminus N_e} \delta_{i,p}, \quad \forall p \in P_e. \tag{16}$$
The derivation process is detailed in the Supplementary Materials. In line with our intuition, this amounts to weighting a feature p by the number of consistent observations (y_i = y_e) that satisfy it, minus the number of inconsistent observations (y_i ≠ y_e) that satisfy it. The sampling probability for each feature p is Prob_p = σ_p(s′) with
$$\sigma_p(x) = \frac{e^{x_p}}{\sum_k e^{x_k}}, \qquad s'_p = \frac{a \, s_p}{\max(s)}, \tag{17}$$
where σ(·) denotes the Softmax function, s represents the SIS vector defined in (16), s′ is the SIS vector normalized and scaled by a factor a, and s′_p signifies the pth element of s′ corresponding to the feature variable b_p.
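A small Python sketch of Equations (16) and (17) is given below; it is an illustration under the stated definitions, with `delta`, `y`, and the scaling factor `a = 2.0` as assumed placeholder values, and it assumes max(s) > 0.

```python
import numpy as np

def sis_scores(delta, y, y_e, P_e):
    """SIS score (16) for each activation feature p in P_e."""
    consistent = (np.asarray(y) == y_e)
    pos = delta[consistent][:, P_e].sum(axis=0)        # times p is satisfied by consistent obs.
    neg = delta[~consistent][:, P_e].sum(axis=0)       # times p is satisfied by inconsistent obs.
    return pos - neg

def sampling_probs(s, a=2.0):
    """Softmax of the normalized, scaled SIS scores, per Eq. (17); assumes max(s) > 0."""
    s_scaled = a * s / float(np.max(s))                # s'_p = a * s_p / max(s)
    exp = np.exp(s_scaled - s_scaled.max())            # numerically stable softmax
    return exp / exp.sum()

# Weighted column sampling of rho*|P_e| features without replacement:
# cols = np.random.choice(P_e, size=k, replace=False, p=sampling_probs(sis_scores(...)))
```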

4.2. Global Prior Injection

To address the global extrapolation challenge, we propose that global prior information should be incorporated into each WCS IP model’s local solution procedure through s_p^gl, which is an extension of (16), as shown below:
$$s_p^{\mathrm{gl}} = \sum_{i \in D_e} \delta_{i,p} - \sum_{i \in D \setminus D_e} \delta_{i,p}, \quad \forall p \in P, \tag{18}$$
where D_e represents the subset of consistent observations within the global dataset D, specifically defined as D_e = {i ∈ D : y_i = y_e}. By defining Σ_{i∈D_e} δ_{i,p} = Δ_{p,y_e}, and leveraging the binary nature of y_i, the complementary sum over inconsistent observations is Σ_{i∈D\D_e} δ_{i,p} = Δ_{p,1−y_e}. Notably, Δ_{p,0} and Δ_{p,1} are determined exclusively by the global dataset D; as such, these values can be pre-computed and reused across multiple summary–explanation queries.
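The pre-computation described above can be sketched as follows; `delta_global` and `y_global` are placeholder names for the global truth table and its labels, and the snippet simply caches Δ_{p,0} and Δ_{p,1} once so that the global SIS score of any query reduces to a subtraction.

```python
import numpy as np

# Cache Delta[c][p]: number of global observations with label c that satisfy feature p.
Delta = {c: delta_global[np.asarray(y_global) == c].sum(axis=0) for c in (0, 1)}

def global_sis(y_e, P_e):
    """Global-prior SIS score (18), restricted to the target's activation features P_e."""
    return Delta[y_e][P_e] - Delta[1 - y_e][P_e]
```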

4.3. The Weighted Column Sampling Optimization Framework

The proposed WCS optimization framework for approximately solving the MSqC model (10)–(15) can be decomposed into formulating, solving, and integrating the solutions of N_sub smaller IPs, each of which is an MSGC model (4)–(8) with an observation dataset N_i ⊆ N and feature set P_i ⊆ P_e. The overall procedure of the WCS optimization framework is shown in Figure 3 and Algorithm 1.
To be more specific, let MSGC_i denote the ith sub-problem. The observation dataset N_i for MSGC_i is uniformly randomly sampled (without replacement) with a given size |N_i| = n_wcs, and the feature set P_i is sampled (without replacement) with a given size |P_i| = ρ|P_e|, ρ ∈ (0, 1], from the activation feature set P_e of the original MSqC problem, according to the global-prior-injected SIS-based probability σ_p(s^gl). Since each MSGC_i problem is small in scale, it can be solved efficiently with the B&B algorithm. Moreover, since the iterations within the for-loop are independent of each other, parallel processing can be utilized to accelerate the solution process.
Algorithm 1: The SIS-based WCS optimization algorithm
Finally, the ChooseBest function can be implemented in multiple ways. Since both the support and consistency level should be maximized, their product can serve as the sole selection metric, and either one can be given priority. In the case of credit risk assessment, this study employs a method where the solution with the maximum support is searched first among those with a consistency level exceeding 80%. If no such solutions exist, the search continues with those having a consistency level exceeding 75%, and so forth.
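The following is a hedged sketch of the main WCS loop and one possible ChooseBest implementation, composing the earlier `solve_msgc`, `sampling_probs`/`global_sis`, and `support_and_consistency` sketches. It is a simplified, sequential rendering of Algorithm 1 with illustrative names; the parameter defaults (N_sub = 40, n_wcs = 100, ρ = 0.25, thresholds 80%/75%/…) follow the values quoted in the text.

```python
import numpy as np

def wcs_solve(delta, y, e, probs, n_sub=40, n_wcs=100, rho=0.25):
    """Sketch of the SIS-based WCS framework: solve MSqC approximately via MSGC sub-problems."""
    # probs: sampling probabilities aligned with P_e (e.g., softmax of global SIS scores)
    P_e = np.flatnonzero(delta[e] == 1)
    k = max(1, int(rho * len(P_e)))
    candidates = []
    for _ in range(n_sub):
        rows = np.random.choice(len(y), size=min(n_wcs, len(y)), replace=False)
        rows = np.unique(np.append(rows, e))                           # keep the target observation
        cols = np.random.choice(P_e, size=k, replace=False, p=probs)   # weighted column sampling
        e_sub = int(np.flatnonzero(rows == e)[0])
        sub_clause = solve_msgc(delta[rows][:, cols], np.asarray(y)[rows], e_sub)
        clause_cols = [int(cols[j]) for j in sub_clause]               # map back to original features
        if clause_cols:
            s, c = support_and_consistency(delta, y, clause_cols, y[e])  # evaluated on N
            candidates.append((clause_cols, s, c))
    return choose_best(candidates)

def choose_best(candidates, thresholds=(0.80, 0.75, 0.70)):
    """Largest support among solutions whose consistency exceeds decreasing thresholds."""
    for t in thresholds:
        feasible = [cand for cand in candidates if cand[2] >= t]
        if feasible:
            return max(feasible, key=lambda cand: cand[1])
    return max(candidates, key=lambda cand: cand[1], default=None)
```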
The solution time of MSqC(WCS) depends on N_sub, the solution time of each sub-problem MSGC_i, and the ChooseBest function. As analyzed by Rudin et al. [7], each MSGC_i is NP-hard. However, fixing n_wcs and ρ sets the time complexity of each MSGC_i to a constant level, denoted as K. The ChooseBest function has time complexity O(N_sub). Then, the time complexity of MSqC(WCS) is O(N_sub · K + N_sub) = O(K · N_sub). The space complexity of MSqC(WCS) is O(|P|·|D| + n_th · O(MSGC_i)), where n_th denotes the number of threads running concurrently. The space complexity O(MSGC_i) of each sub-problem can also be set to a constant level by fixing n_wcs and ρ; then, the space complexity of MSqC(WCS) becomes O(|P|·|D| + n_th · K). In our experiments, we typically use N_sub = 40, n_wcs = 100, and ρ = 0.25, and it can be verified in Section 5 that the solution time of MSqC(WCS) does not increase with the size |N| of the local solution dataset.

5. Computer Experiments

It is worth noting that our method differs fundamentally from popular model-dependent explanation techniques such as LIME [14] and SHAP [15]. These methods aim to interpret the predictions of a trained black-box model by quantifying the contribution of each input feature to the output for individual instances. In contrast, our approach is model-agnostic and dataset-driven: it constructs rule-based summary–explanations that describe subpopulations satisfying certain logical conditions with high support and acceptable consistency, without referencing any specific prediction model.
Due to these fundamental differences in scope and goal, direct empirical comparisons with LIME or SHAP are not meaningful. Instead, we focus on comparing our method against structurally similar summary–explanation models (e.g., MSGC and MSqC solved via B&B) to evaluate performance in terms of support, consistency, and runtime. This comparison better reflects the relative advantages of q-consistent SEs in rule-based, global interpretability contexts.
Specifically, in this section, three different SE generation methods are tested and compared: MSGC(B&B), MSqC(B&B), and MSqC(WCS). The SE generation methods are implemented by solving MSGC or MSqC with the B&B or WCS methods. Note that we have used the notation <Model> (<Method>) to denote solving model <Model> using solution method <Method>. For example, MSqC(WCS) denotes solving MSqC (10)–(15) with the SIS-based WCS method and using the solution as the SE output (see Figure 3).

5.1. Experimental Setup

Dataset descriptions. Here, three distinct credit-related datasets are utilized to validate the effectiveness of our approach. The first one is the renowned HELOC dataset, employed in the FICO machine learning explainability challenge [6]. The other two datasets are sourced from the UCI Machine Learning Repository: Taiwan Credit [35], focusing on credit card client default cases in Taiwan, and Australian Credit Approval [36], which concerns credit card applications. In the following, we primarily describe the application of different methods to the HELOC dataset. For detailed information about the Taiwan Credit and Australian Credit Approval datasets, as well as a comparison of the different methods on these three datasets, please refer to Appendix A.
The HELOC dataset contains credit evaluation-related information for an individual, such as their credit history, outstanding balances, and delinquency status. This information is stored in 23 feature variables, including numeric and discrete types with missing values. (Note that, in addition to handling missing values, feature binarization is required to establish the MSGC and MSqC models, which generates more binary features than the original dataset.) The target variable “RiskPerformance” represents the repayment status of an individual’s credit account, indicating whether the individual paid their debts as negotiated over a 12–36 month period. The dataset contains data for 10,460 individuals, which is reduced to |D| = 9871 after data cleaning. Further details of this dataset are given in the Supplementary Materials.
Data preprocessing. The dataset used in our experiments has been pre-cleaned and does not contain any missing values. All features are either numerical or categorical. To handle categorical features, we first apply label encoding to convert string values into integers, followed by one-hot encoding to produce binary feature indicators. This representation is compatible with our summary–explanation (SE) framework, which operates on boolean-valued feature functions. For numerical features, we use quantile-based thresholding to generate binary features. Specifically, we select a small number of quantile cut-off points (e.g., quartiles) and transform each numerical variable into a set of binary features indicating whether its value exceeds each threshold. This preprocessing ensures that all features used in the SE models are binary-valued and suitable for logical clause construction.
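A condensed sketch of this preprocessing, using pandas, is shown below; the function name and the choice of quartile cut-offs are illustrative assumptions rather than the exact pipeline used in the experiments.

```python
import pandas as pd

def binarize(df, numeric_cols, categorical_cols, quantiles=(0.25, 0.5, 0.75)):
    """Build a binary truth table from raw numerical and categorical features (sketch)."""
    parts = []
    for col in numeric_cols:                              # quantile-threshold indicators
        for q in quantiles:
            thr = df[col].quantile(q)
            parts.append((df[col] >= thr).astype(int).rename(f"{col}>={thr:g}"))
    for col in categorical_cols:                          # label-encode, then one-hot
        parts.append(pd.get_dummies(df[col].astype("category"), prefix=col).astype(int))
    return pd.concat(parts, axis=1)
```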
Distributed storage simulation: To evaluate the global extrapolation effectiveness of the different methods, it is necessary to mimic the distributed storage of datasets and carry out the local solution of the MSGC and MSqC models. Here, we use a local–global ratio parameter α ∈ (0, 1] to control the size of the local dataset, |N| = α|D|. At α = 1, this is equivalent to having no distributed storage, and the models are solved directly on the global dataset.
Parameters and hardware: Table 1 presents the parameter settings employed in the experiments. Experiments were carried out on a Mac Mini desktop featuring an M1 chip.
Metrics: The SE generation methods are evaluated from three different aspects, i.e., solution speed, local solution effectiveness, and global extrapolation effectiveness. Specifically, given a target observation (x_e, y_e) and an SE solution b_s ⇒ y_e of an SE generation method, five metrics are computed, which can be grouped as follows:
1.
Local solution performance metrics, i.e., solution time T_s, local support s_N = |S_N(b_s)|, and local consistency level c_N = c_N(b_s, y_e).
2.
Global extrapolation performance metrics, i.e., global support s_D = |S_D(b_s)| and global consistency level c_D = c_D(b_s, y_e).
As is customary, the metrics are calculated by averaging across multiple algorithm executions using the 1-shifted geometric mean, which exhibits resilience to outliers of all magnitudes [37].
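For reference, the 1-shifted geometric mean used for averaging can be computed as in the short sketch below (our own rendering of the standard shifted geometric mean, not code from the article).

```python
import numpy as np

def shifted_geometric_mean(values, shift=1.0):
    """exp(mean(log(v + shift))) - shift; shift = 1 gives the 1-shifted geometric mean."""
    v = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(v + shift))) - shift)
```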

Experimental Procedure

The experiment consists of the following three phases:
1.
Preprocessing phase: The missing values were handled and the features were binarized. Specifically, ordinal features were binarized with n_thresholds quantile thresholds, and categorical features were binarized into one-hot vectors. Then, the global SIS scores s_p^gl (18) of the features were computed.
2.
Execution and parameter sweep phase: In order to investigate how the performance of the three methods is impacted by different parameter configurations, a parameter sweep was conducted. Specifically, for each parameter setting (mainly varying n_wcs and α with the other parameters fixed), a round of experiments was conducted, which can be divided into the following two steps.
(a)
Request scenario generation: A local dataset consisting of | N | observations was sampled from the global dataset, and an explanation request was randomly generated by selecting an observation from the global dataset as the target observation to be explained.
(b)
Problem formulation and solution: The MSGC and MSqC models were established; the MSGC model was then solved by B&B, and MSqC was solved by both B&B and the proposed SIS-based WCS method. The names of the SE generation methods, i.e., MSGC(B&B), MSqC(B&B), and MSqC(WCS), indicate which model is solved by which solution method. Metrics were calculated and recorded. This process was repeated n_reps times, and then the 1-shifted geometric mean of the metrics was calculated.
3.
Results and analysis phase: The results of the experiments were analyzed and discussed.
In the following, we first take a quick look at how the SEs produced by MSqC(WCS) differ from the SEs produced by MSGC(B&B) by sampling a few results from the SE solutions; then, the results are compared in detail regarding local solution effectiveness and global extrapolation effectiveness, respectively. In Appendix A, the results for two other datasets are also presented.

5.2. Sampled Results of the Summary–Explanation Solutions

For an illustration of the SE solutions in practical applications, Figure 4 compares several example SEs generated by MSGC(B&B) and MSqC(WCS) methods for random target observations when α = 1 . It can be observed that the SEs produced by MSqC(WCS) have much greater support while maintaining consistency within an acceptable range for the credit evaluation scenario.

5.3. Local Solution Effectiveness

The effectiveness of local solutions for various methods is evaluated by contrasting their solution time, local support size, and local consistency scores. Table 2 shows the local solution performance of the three methods under different settings of the local–global ratio α with WCS sub-problem size n wcs = 100 . We have the following observations.
1.
Solution time T_s: MSGC(B&B) and MSqC(B&B) scale poorly with respect to the size of the local dataset |N| (T_s increases drastically as α increases from 0.01 to 1, which corresponds to |N| increasing from approximately 0.1K to 10K). When |N| ≥ 4K, MSqC(B&B) fails to complete within the 2 h time limit. In contrast, the solution time of MSqC(WCS) is independent of |N|; in fact, it depends only on the number and complexity of the WCS sub-problems.
2.
Local support s_N: MSGC(B&B) yields SE solutions with the smallest local support, while MSqC(B&B) yields SE solutions with the largest. This is the expected result that motivated the formulation of the MSqC problem (see Section 3.4). Furthermore, MSqC(WCS) solutions also have much larger local support than MSGC(B&B), though not as large as MSqC(B&B). These observations on the local support and solution time of the different methods are also illustrated in Figure 5.
3.
Local consistency level c_N: MSGC(B&B) solutions always have a local consistency level of c_N = 1, and MSqC(B&B) solutions always have c_N ≥ q = 0.85, because B&B is an exact solution method. Compared with the two B&B-based methods, MSqC(WCS) solutions generally have lower local consistency levels.
However, the local consistency levels c N of the MSqC(WCS) solutions can be raised by increasing the WCS sub-problem size n wcs , as demonstrated in Figure 6, for most local–global ratio values α . Increasing the sampled feature ratio ρ can also achieve the same goal, though the experiment is omitted. Note that stochastic variations exist for the performance of MSqC(WCS) because of the randomness in observations and features sampling (see Algorithm 1).
In summary, MSqC(WCS) offers superior time efficiency compared to MSqC(B&B) at the expense of reduced local support s N and lower local consistency levels c N . However, the local support is still significantly larger than that of MSGC(B&B), and its local consistency levels remain relatively close to the desired value q = 0.85 . Moreover, the local consistency levels of MSqC(WCS) can be further improved by adjusting parameters, although this may increase the solution time, necessitating a trade-off based on the specific situation.

5.4. Global Extrapolation Effectiveness

As highlighted earlier, in real-world scenarios, model solving typically involves only a subset of the observation dataset, motivated by requirements like privacy constraints and distributed storage efficiency. Consequently, assessing the global extrapolation effectiveness becomes crucial. Table 3 shows the global extrapolation performance of the three SE generation methods under different settings of the local–global ratio α with WCS sub-problem size n wcs = 100 . We have the following observations.
1.
Global support s D : Similar to local support s N , MSqC(WCS) solutions also have much larger global support than MSGC(B&B), though not as large as MSqC(B&B).
2.
Global consistency level c D : In contrast to the local consistency level c N , the global consistency level c D of MSqC(WCS) is comparable to that of MSGC(B&B) and MSqC(B&B). Moreover, advantages can be observed when α is small (e.g., 0.01, 0.04), corresponding to larger distributed systems where each local dataset is significantly smaller than the global dataset.
In addition, similar to the local consistency level c N , the global consistency levels c D of MSqC(WCS) solutions can also be raised by increasing the WCS sub-problem sizes n wcs , as demonstrated in Figure 7.
For α = 1 , the scenario is equivalent to the absence of distributed storage, with models being solved directly using the global dataset.

6. Conclusions

In conclusion, this paper presents an improved summary–explanation (SE) decision support method that aims to promote trust in critical decision-making domains such as credit evaluation. Our method addresses the challenges associated with globally consistent SE by formulating the MSqC problem, which yields SEs achieving substantially higher support in exchange for marginally reduced consistencies. The major contributions of this study are as follows:
1.
Methodologically, this paper formulates the MSqC problem for the first time and proposes a novel solution method for it, which not only yields SEs with much greater support but is also far more scalable in efficiency.
2.
From a practical standpoint, this paper offers a valuable tool for decision-makers to generate high-level summaries of datasets and clear explanations of suggested decisions. By generating SEs with greater support, this tool can improve the reliability and trustworthiness of DSS.
While the proposed approach demonstrates clear advantages in support maximization and runtime efficiency, the current method assumes static datasets and does not address dynamic or streaming data scenarios, where SEs may need continuous adaptation. In addition, our current experiments assume i.i.d. sampling between local and global datasets. In real-world applications, such as federated credit scoring or temporal shifts in user behavior, non-i.i.d. data distributions can significantly impact the extrapolation quality of explanations. Extending the framework to accommodate these aspects would further enhance its practical applicability.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/math13081305/s1: Table S1. List of symbols. Table S2. Dataset feature descriptions. Table S3. Dataset feature statistics.

Author Contributions

Conceptualization, methodology, software model, and writing original draft, C.P.; formal analysis, writing review and editing, T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62466018) and the Key Research Foundation of the Education Bureau of Hunan Province (No. 23A0387).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Experiments are also performed on two other datasets, which are all credit-related binary classification datasets. The detailed information and a comparison of the three different datasets can be found in Table A1. Local solution performances of the different methods applied on the Taiwan Credit dataset and Australian Credit Approval dataset are shown in Table A2. Global extrapolation performances of the different methods applied on the Taiwan Credit dataset are shown in Table A3. Note that for the Australian Credit Approval dataset, due to its small size, the local–global ratio parameter α starts from 0.1, and no global extrapolation experiments were conducted.
Table A1. Datasets information.
| Dataset | HELOC (FICO, 2018) [6] | Taiwan Credit (Yeh, 2016) [36] | Australian Credit Approval (Lichman et al., 2013) [37] |
| --- | --- | --- | --- |
| Source | FICO | UCI ML Repository | UCI ML Repository |
| Size | Medium | Large | Small |
| N. clean instances | 9871 | 30,000 | 653 |
| N. features | 23 | 15 | 15 |
| Balance | Y | N | Y |
| N. instances with y = 0 | 5136 | 23,364 | 296 |
| N. instances with y = 1 | 4735 | 6636 | 357 |
Table A2. Local solution performance of the different methods applied on the Taiwan Credit dataset and Australian Credit Approval dataset. The local–global ratio α is an adjustable parameter. The size of the local dataset N depends on α by |N| = α|D|. Metrics: solution time T_s, local support s_N, and local consistency level c_N.
(a) Dataset: Taiwan Credit (Yeh, 2016) [36]

| α | Size of N | MSGC(B&B): T_s / s_N / c_N | MSqC(B&B): T_s / s_N / c_N | MSqC(WCS): T_s / s_N / c_N |
| --- | --- | --- | --- | --- |
| 0.01 | 0.3K | 9.33 / 33.28 / 1 | 7.19 / 137.30 / 0.86 | 1.32 / 87.21 / 0.86 |
| 0.04 | 1.2K | 73.89 / 39.65 / 1 | 84.76 / 535.73 / 0.82 | 1.41 / 155.39 / 0.66 |
| 0.07 | 2.1K | 125.39 / 37.82 / 1 | 295.99 / 966.96 / 0.85 | 1.23 / 361.59 / 0.71 |
| 0.1 | 3K | 294.69 / 41.38 / 1 | N/A | 1.26 / 314.08 / 0.65 |
| 0.4 | 12K | N/A | N/A | 1.23 / 1540.49 / 0.70 |
| 0.7 | 21K | N/A | N/A | 1.32 / 1021.20 / 0.62 |
| 1 | 30K | N/A | N/A | 1.16 / 3539.00 / 0.70 |

(b) Dataset: Australian Credit Approval (Lichman et al., 2013) [37]

| α | Size of N | MSGC(B&B): T_s / s_N / c_N | MSqC(B&B): T_s / s_N / c_N | MSqC(WCS): T_s / s_N / c_N |
| --- | --- | --- | --- | --- |
| 0.1 | 65 | 0.07 / 19.78 / 1 | 0.26 / 32.06 / 0.87 | 0.14 / 19.16 / 1.00 |
| 0.4 | 261 | 1.21 / 45.70 / 1 | 4.94 / 97.55 / 0.89 | 0.20 / 66.07 / 0.89 |
| 0.7 | 457 | 3.05 / 63.67 / 1 | 14.42 / 194.55 / 0.90 | 0.22 / 143.70 / 0.91 |
| 1 | 653 | 5.81 / 63.31 / 1 | 28.61 / 264.58 / 0.89 | 0.21 / 212.72 / 0.91 |
Table A3. Global extrapolation performance of the different methods applied on the Taiwan Credit dataset. The local–global ratio α is an adjustable parameter. The size of the local dataset N depends on α by |N| = α|D|. Metrics: global support s_D and global consistency level c_D.
| α | Size of N | MSGC(B&B): s_D / c_D | MSqC(B&B): s_D / c_D | MSqC(WCS): s_D / c_D |
| --- | --- | --- | --- | --- |
| 0.01 | 0.3K | 3516.64 / 0.78 | 14,371.22 / 0.76 | 7786.90 / 0.77 |
| 0.04 | 1.2K | 980.29 / 0.78 | 12,530.41 / 0.79 | 2067.15 / 0.65 |
| 0.07 | 2.1K | 537.85 / 0.80 | 13,910.32 / 0.82 | 3454.96 / 0.71 |
| 0.1 | 3K | 384.07 / 0.82 | N/A | 1981.38 / 0.64 |
| 0.4 | 12K | N/A | N/A | 3361.19 / 0.70 |
| 0.7 | 21K | N/A | N/A | 1350.06 / 0.62 |
| 1 | 30K | N/A | N/A | 3539.00 / 0.70 |

References

  1. Khan, N.; Okoli, C.N.; Ekpin, V.; Attai, K.; Chukwudi, N.; Sabi, H.; Akwaowo, C.; Osuji, J.; Benavente, L.; Uzoka, F.-M. Adoption and utilization of medical decision support systems in the diagnosis of febrile diseases: A systematic literature review. Expert Syst. Appl. 2023, 220, 119638. [Google Scholar] [CrossRef]
  2. Liu, X.; Faisal, M.; Alharbi, A. A decision support system for assessing the role of the 5G network and AI in situational teaching research in higher education. Soft Comput. 2022, 26, 10741–10752. [Google Scholar] [CrossRef]
  3. Birzhandi, P.; Cho, Y.-S. Application of fairness to healthcare, organizational justice, and finance: A survey. Expert Syst. Appl. 2023, 216, 119465. [Google Scholar] [CrossRef]
  4. Yousefli, A.; Heydari, M.; Norouzi, R. A data-driven stochastic decision support system to investment portfolio problem under uncertainty. Soft Comput. 2022, 26, 5283–5296. [Google Scholar] [CrossRef]
  5. Wang, J.; Yu, C.; Zhang, J. Constructing the regional intelligent economic decision support system based on fuzzy C-mean clustering algorithm. Soft Comput. 2020, 24, 7989–7997. [Google Scholar] [CrossRef]
  6. FICO. Explainable Machine Learning Challenge. Available online: https://community.fico.com/s/explainable-machine-learning-challenge (accessed on 25 July 2022).
  7. Rudin, C.; Shaposhnik, Y. Globally-consistent rule-based summary-explanations for machine learning models: Application to credit-risk evaluation. J. Mach. Learn. Res. 2023, 24, 1–44. [Google Scholar] [CrossRef]
  8. Vanderbei, R. Linear Programming: Foundations and Extensions, 3rd ed.; Springer: New York, NY, USA, 2008; ISBN 978-0-387-74387-5. [Google Scholar]
  9. Gamrath, G.; Anderson, D.; Bestuzheva, K.; Chen, W.-K.; Eifler, L.; Gasse, M.; Gemander, P.; Gleixner, A.; Gottwald, L.; Halbig, K.; et al. The SCIP Optimization Suite 7.0; Zuse Institute Berlin: Berlin, Germany, 2020; Available online: https://optimization-online.org/wp-content/uploads/2020/03/7705.pdf (accessed on 19 March 2025).
  10. Samek, W.; Müller, K.-R. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer: Berlin, Germany, 2019; pp. 5–22. [Google Scholar]
  11. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  12. Ravi, V.; Srivastava, V.K.; Singh, M.P.; Burila, R.K.; Kassetty, N.; Vardhineedi, P.N.; Pasam, V.R.; Prova, N.N.I.; De, I. Explainable AI (XAI) for Credit Scoring and Loan Approvals. arXiv 2025, arXiv:2503.07829. [Google Scholar]
  13. Yeo, W.J.; Van Der Heever, W.; Mao, R.; Cambria, E.; Satapathy, R.; Mengaldo, G. A comprehensive review on financial explainable AI. Expert Syst. Appl. 2025, 58, 1–49. [Google Scholar] [CrossRef]
  14. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  15. Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4768–4777. [Google Scholar]
  16. Bertsimas, D.; Dunn, J. Optimal classification trees. Mach. Learn. 2017, 106, 1039–1082. [Google Scholar] [CrossRef]
  17. Ustun, B.; Rudin, C. Supersparse linear integer models for optimized medical scoring systems. Mach. Learn. 2016, 102, 349–391. [Google Scholar] [CrossRef]
  18. Baniecki, H.; Biecek, P. Adversarial attacks and defenses in explainable artificial intelligence: A survey. Inf. Fusion 2024, 102, 102303. [Google Scholar] [CrossRef]
  19. Xu, Z.; Lv, Z.; Li, J.; Sun, H.; Sheng, Z. A Novel Perspective on Travel Demand Prediction Considering Natural Environmental and Socioeconomic Factors. IEEE Intell. Transp. Syst. Mag. 2022, 15, 2–25. [Google Scholar] [CrossRef]
  20. Alhulayil, M.; Al-Mistarihi, M.F.; Shurman, M.M. Performance analysis of dual-hop AF cognitive relay networks with best selection and interference constraints. Expert Syst. Appl. 2022, 12, 124. [Google Scholar] [CrossRef]
  21. Maldonado, S.; Pérez, J.; Weber, R.; Labbé, M. Feature selection for support vector machines via mixed integer linear programming. Inf. Sci. 2014, 279, 163–175. [Google Scholar] [CrossRef]
  22. Angelino, E.; Larus-Stone, N.; Alabi, D.; Seltzer, M.; Rudin, C. Learning certifiably optimal rule lists for categorical data. J. Mach. Learn. Res. 2018, 18, 1–78. [Google Scholar]
  23. Shah, S.; Munir, A.; Salam, A.; Ullah, F.; Amin, F.; AlSalman, H.; Javeed, Q. A dynamic trust evaluation and update model using advanced decision tree for underwater wireless sensor networks. Sci. Rep. 2024, 14, 22393. [Google Scholar] [CrossRef]
  24. Sripodok, P.; Lapthanasupkul, P.; Arayapisit, T.; Kitkumthorn, N.; Srimaneekarn, N.; Neeranadpuree, V.; Amornwatcharapong, W.; Hempornwisarn, S.; Amornwikaikul, S.; Rungraungrayabkul, D. Development of a decision tree model for predicting the malignancy of localized gingival enlargements based on clinical characteristics. Sci. Rep. 2024, 14, 22185. [Google Scholar] [CrossRef]
  25. Arjmandi, M.; Fattahi, M.; Motevassel, M.; Rezaveisi, H. Evaluating algorithms of decision tree, support vector machine and regression for anode side catalyst data in proton exchange membrane water electrolysis. Sci. Rep. 2023, 13, 20309. [Google Scholar] [CrossRef]
  26. Korikov, A.; Shleyfman, A.; Beck, J.C. Counterfactual explanations for optimization-based decisions in the context of the GDPR. In Proceedings of the ICAPS 2021 Workshop on Explainable AI Planning (XAIP), Virtual Event, 6 August 2021; p. 17. [Google Scholar]
  27. European Commission. General Data Protection Regulation (GDPR). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 25 July 2022).
  28. Chen, C.-C.; Li, S.-T. Credit rating with a monotonicity-constrained support vector machine model. Expert Syst. Appl. 2014, 41, 7235–7247. [Google Scholar] [CrossRef]
  29. Kanamori, K.; Takagi, T.; Kobayashi, K.; Ike, Y.; Uemura, K.; Arimura, H. Ordered counterfactual explanation by mixed-integer linear optimization. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual Conference, 2–9 February 2021; pp. 11564–11574. [Google Scholar]
  30. Dash, S.; Günlük, O.; Wei, D. Boolean decision rules via column generation. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, QC, Canada, 3–8 December 2018; pp. 4660–4670. [Google Scholar]
  31. Lawless, C.; Dash, S.; Günlük, O.; Wei, D. Interpretable and fair boolean rule sets via column generation. J. Mach. Learn. Res. 2023, 24, 1–50. [Google Scholar]
  32. Haussler, D.; Warmuth, M.K. The probably approximately correct (PAC) and other learning models. In The Mathematics of Generalization; CRC Press: Boca Raton, FL, USA, 1995; ISBN 978-0-429-49252-5. [Google Scholar]
  33. Robbins, H.; Monro, S. A Stochastic Approximation Method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
  34. Gu, B.; Shan, Y.; Quan, X.; Zheng, G. Accelerating Sequential Minimal Optimization via Stochastic Subgradient Descent. IEEE Trans. Cybernet. 2021, 51, 2215–2223. [Google Scholar] [CrossRef]
  35. Kweon, S.J.; Hwang, S.W.; Lee, S.; Jo, M.J. Demurrage pattern analysis using logical analysis of data: A case study of the Ulsan Port Authority. Expert Syst. Appl. 2022, 206, 117745. [Google Scholar] [CrossRef]
  36. Yeh, I.-C. Default of Credit Card Clients. UCI Machine Learning Repository. Available online: https://doi.org/10.24432/C55S3H (accessed on 19 March 2025).
  37. Quinlan, J. UCI Machine Learning Repository. 1987. Available online: https://doi.org/10.24432/C5FS30 (accessed on 19 March 2025).
Figure 1. Explanation scenarios in credit evaluation with summary–explanation (SE). The q-consistent SE has much greater support (2114) than the globally consistent SE (88), at a slightly lower but generally acceptable consistency level of 82%.
Figure 2. Global extrapolation effectiveness and the global extrapolation challenge: how can we improve the global support |S_D(b)| and the global consistency level c_D(b, y_e) without direct access to the global dataset D?
Figure 3. Solving the MSqC model with the SIS-based WCS optimization method. The local MSqC problem is solved by sampling and solving smaller MSGC sub-problems, guided by the features' SIS scores designed with global prior injection.
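As a rough sketch of the sampling loop summarized in Figure 3 (not the paper's exact implementation), the procedure can be organized around the parameters listed in Table 1 (N_subs, n_wcs, ρ, q). In the sketch below, the MSGC sub-problem solver and the rule-evaluation routine (e.g., the se_metrics helper shown earlier) are supplied as callables and are assumptions of this illustration.

```python
import numpy as np
import pandas as pd

def wcs_solve(N: pd.DataFrame, y_e, sis_scores, solve_sub, eval_rule,
              n_subs=40, n_wcs=100, rho=0.25, q=0.85, seed=0):
    """Sketch of SIS-based weighted column sampling (WCS) for the local MSqC problem.

    N          : local dataset (binarized features plus label column)
    sis_scores : per-feature SIS scores used as column-sampling weights (global prior)
    solve_sub  : callable(sub_df, candidate_cols) -> rule; placeholder for the small
                 MSGC sub-problem solver (e.g., a branch-and-bound MIP solver)
    eval_rule  : callable(df, rule, y_e) -> (support, consistency)
    Returns the sampled rule with the largest local support whose consistency >= q.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(sis_scores, dtype=float)
    p = p / p.sum()                                    # SIS scores as sampling weights
    n_feat = len(p)
    best_rule, best_support = None, -1

    for _ in range(n_subs):
        # Sample a feature subset weighted by SIS scores and a small row subset.
        cols = rng.choice(n_feat, size=max(1, int(rho * n_feat)), replace=False, p=p)
        rows = rng.choice(len(N), size=min(n_wcs, len(N)), replace=False)

        # Solve the small MSGC sub-problem restricted to the sampled observations/features.
        rule = solve_sub(N.iloc[rows], cols)

        # Evaluate the candidate SE rule on the full local dataset N.
        s_N, c_N = eval_rule(N, rule, y_e)
        if c_N >= q and s_N > best_support:
            best_rule, best_support = rule, s_N
    return best_rule
```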
Figure 4. Examples of MSGC(B&B) SEs (left) and MSqC(WCS) SEs (right).
Figure 5. Local support vs. solution time of the SE generation methods under different local–global ratios α (HELOC dataset).
Figure 6. Local consistency levels c_N of MSqC(WCS) solutions for sub-problem sizes n_wcs ranging from 100 to 500 (HELOC dataset).
Figure 7. Global consistency levels c_D of MSqC(WCS) solutions for sub-problem sizes n_wcs ranging from 100 to 500 (HELOC dataset).
Table 1. Experimental parameters.
Parameter | Value | Description
M_c | 4 | Maximum complexity of an SE solution
q | 0.85 | Minimum consistency level for MSqC(B&B)
a | 3 | Scaling factor for feature sampling
N_subs | 40 | Number of WCS sub-problems
n_wcs | 100/300/500 | Sub-problem size, i.e., number of observations in each WCS sub-problem
ρ | 0.25 | Sampled feature ratio for WCS sub-problems
T_max | 7200 s | Solution time limit
α | 0.01/0.04/0.07/0.1/0.4/0.7/1 | Local–global ratio controlling the size of the local dataset
n_thresholds | 9 | Number of quantile thresholds used to binarize ordinal features
n_rep | 40 | Number of solution runs over which metrics are averaged
Table 2. Local solution performance of the SE generation methods on the HELOC dataset, with n_wcs = 100. The local–global ratio α is an adjustable parameter; the size of the local dataset N depends on α by |N| = α|D|. Metrics: solution time T_s, local support s_N, and local consistency level c_N.
α | |N| | MSGC(B&B): T_s / s_N / c_N | MSqC(B&B): T_s / s_N / c_N | MSqC(WCS): T_s / s_N / c_N
0.01 | 99 | 1.85 / 18.10 / 1.00 | 5.55 / 35.49 / 0.86 | 3.47 / 16.74 / 1.00
0.04 | 395 | 21.26 / 28.80 / 1.00 | 252.28 / 93.59 / 0.86 | 3.72 / 66.78 / 0.87
0.07 | 691 | 50.91 / 30.11 / 1.00 | 1170.23 / 130.81 / 0.85 | 3.99 / 94.17 / 0.83
0.1 | 987 | 100.63 / 30.07 / 1.00 | 2374.98 / 155.97 / 0.85 | 3.83 / 127.67 / 0.84
0.4 | 3948 | 557.58 / 26.41 / 1.00 | >7200 / N/A / N/A | 4.05 / 541.22 / 0.83
0.7 | 6910 | 1234.86 / 22.01 / 1.00 | >7200 / N/A / N/A | 3.89 / 900.49 / 0.79
1 | 9871 | 1851.85 / 21.54 / 1.00 | >7200 / N/A / N/A | 3.69 / 961.64 / 0.77
Table 3. Global extrapolation performance of the SE generation methods on the HELOC dataset, with n_wcs = 100. The local–global ratio α is an adjustable parameter; the size of the local dataset N depends on α by |N| = α|D|. Metrics: global support s_D and global consistency level c_D.
α | |N| | MSGC(B&B): s_D / c_D | MSqC(B&B): s_D / c_D | MSqC(WCS): s_D / c_D
0.01 | 99 | 1765.23 / 0.71 | 3546.02 / 0.67 | 1693.80 / 0.72
0.04 | 395 | 721.26 / 0.77 | 2311.31 / 0.76 | 1666.51 / 0.78
0.07 | 691 | 439.22 / 0.79 | 1865.70 / 0.79 | 1275.64 / 0.76
0.1 | 987 | 321.78 / 0.79 | 1543.26 / 0.78 | 1266.34 / 0.79
0.4 | 3948 | 68.08 / 0.82 | N/A / N/A | 1359.75 / 0.82
0.7 | 6910 | 33.05 / 0.85 | N/A / N/A | 1270.46 / 0.79
1 | 9871 | 21.54 / 1.00 | N/A / N/A | 961.64 / 0.77
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
