A New Method to Compare the Interpretability of Rule-based Algorithms

Interpretability is becoming increasingly important in predictive model analysis. Unfortunately, as many authors have remarked, there is still no consensus on this notion. The goal of this paper is to propose a score that allows one to quickly compare interpretable algorithms. The definition comprises three terms, each measured quantitatively with a simple formula: predictivity, stability and simplicity. While predictivity has been extensively studied as a measure of the accuracy of predictive algorithms, stability is based on the Dice-Sorensen index, comparing two rule sets generated by an algorithm from two independent samples. Simplicity is based on the sum of the lengths of the rules derived from the predictive model. The proposed score is a weighted sum of these three terms. We use this score to compare the interpretability of a set of rule-based and tree-based algorithms in both the regression and the classification case.


Introduction
The widespread use of machine learning (ML) methods in many important areas such as health care, justice, defense or asset management has underscored the importance of interpretability for the decision-making process. In recent years, the number of publications on interpretability has increased exponentially. For a complete overview of interpretability in ML we refer the reader to the book [40] and the article [41]. We distinguish two main approaches to generating interpretable prediction models.
The first approach is to use a non-interpretable ML algorithm to generate the predictive model, and then create a so-called post-hoc interpretable model. One common solution is to use graphical tools, such as the Partial Dependence Plot (PDP) [19] or the Individual Conditional Expectation (ICE) [24]. A drawback of these methods is that they are limited by human perception: a plot with more than 3 dimensions cannot be interpreted by humans, so these tools are not useful for datasets with many features. An alternative is to use a surrogate model to explain the model generated by a black-box. We refer to the algorithms Local Interpretable Model-agnostic Explanations (LIME) [46], DeepLIFT [48] and SHapley Additive exPlanations (SHAP) [35], which attempt to measure the importance of a feature in the prediction process (see [25] for an overview of the available methods). However, as outlined in [47], the explanations generated by these algorithms may not be sufficient to support a reasonable decision process.
The second approach is to use intrinsically interpretable algorithms to directly generate interpretable models. There are two main families of intrinsically interpretable algorithms: tree-based algorithms, which are based on decision trees, such as Classification And Regression Trees (CART) [7], Iterative Dichotomiser 3 (ID3) [44], C4.5 [45], M5P [51] and Logistic Model Trees (LMT) [32]; and rule-based algorithms, which generate rule sets, such as Repeated Incremental Pruning to Produce Error Reduction (RIPPER) [8], First Order Regression (FORS) [31], M5 Rules [29], RuleFit [17], Ensemble of Decision Rules (Ender) [10], Node Harvest [39] or, more recently, Stable and Interpretable RUle Set (SIRUS) [4,5] and the Coverage Algorithm [38]. It is important to note that any tree can be converted into a set of rules, while the opposite is not true.
These algorithms generate predictive models based on the notion of a rule, i.e., a statement of the form If c_1 ∧ c_2 ∧ … ∧ c_k Then p. The condition part (If) is a logical conjunction, where the c_i's are tests that check whether the observation has the specified properties or not. The number k is called the length of the rule. If all c_i's are fulfilled, the rule is said to be activated. The conclusion part (Then) is the prediction of the rule when it is activated. Even though rule-based algorithms and tree-based algorithms seem easy to understand, there is no exact mathematical definition of the concept of interpretability. This is because interpretability involves multiple concepts, as explained in [34], [11], [52] and [42]. The goal of this paper is to propose a definition that combines these concepts in order to generate an interpretability score. It is important to note that related concepts such as justice, ethics, and morality, which are associated with specific applications to health care, justice, defense or asset management, cannot be measured quantitatively.
As proposed in [52] and [4], we describe an interpretability score for any model formed by rules, based on the triptych predictivity, stability, and simplicity. The predictivity score measures the accuracy of the generated prediction model; accuracy ensures a high degree of confidence in the generated model. The stability score quantifies the sensitivity of an algorithm to noise, allowing us to evaluate the robustness of the algorithm. The simplicity score can be conceptualized as the ability to easily verify the prediction; a simple model makes it easy to evaluate qualitative criteria such as justice, ethics and morality. By measuring these three concepts, we are therefore able to evaluate the interpretability of several algorithms for a given problem.
A similar idea has been proposed in [28] in the area of the Logical Analysis of Data (LAD), through the concept of Pareto-optimal patterns, or strong patterns. The core of LAD is the selection of the best patterns from the dataset based on a triptych that includes simplicity, selectivity, and evidence. The authors identify two extreme cases of patterns: strong prime patterns and strong spanned patterns. The former are the most specific strong patterns, while the latter are the simplest strong patterns. In [2], the authors studied the effects of pattern filtering on classification accuracy. They show that prime patterns provide somewhat higher classification accuracy, although the loss of accuracy from using strong spanned patterns is relatively small. For an overview of LAD we refer the reader to [1].

Predictivity score
The aim of a predictive model is to predict the value of a random variable of interest Y ∈ Y, given features X ∈ X, where X is a d-dimensional space. Formally, we consider the standard setting as follows: Let (X, Y) be a random vector in X × Y of unknown distribution Q such that

Y = g*(X) + Z,

where E[Z] = 0, V(Z) = σ², and g* is a measurable function from X to Y.
We denote by G the set of all measurable functions from X to R. The accuracy of a predictor g ∈ G is measured by its risk, defined as

L(g) = E[γ(g, (X, Y))],    (1)

where γ : G × (X × Y) → [0, ∞[ is called a contrast function and its choice depends on the nature of Y. The risk measures the average discrepancy between g(X) and Y, given a new observation (X, Y) from the distribution Q. As mentioned in [3], the definition (1) includes most cases of the classical statistical models. Given a sample D_n = ((X_1, Y_1), . . . , (X_n, Y_n)), our aim is to predict Y given X. The observations (X_i, Y_i) are assumed to be independent and identically distributed (i.i.d.) from the distribution Q.
We consider a statistical algorithm, which is a measurable mapping from (X × Y)^n to a class of measurable functions G_n ⊆ G. This algorithm generates a predictor g_n by the Empirical Risk Minimization (ERM) principle [50], meaning that g_n = argmin_{g ∈ G_n} L_n(g), where

L_n(g) = (1/n) Σ_{i=1}^n γ(g, (X_i, Y_i))

is the empirical risk. The notion of predictivity is based on the ability of an algorithm to provide an accurate predictor, and has been extensively studied before. In this paper we define the predictivity score as

P_n(g_n, h_n) = 1 − L_n(g_n) / L_n(h_n),    (2)

where h_n is a baseline predictor chosen by the analyst. The idea is to consider a naïve, easily built predictor chosen according to the contrast function. For instance, if Y ∈ R, we generally use the quadratic contrast γ(g; (X, Y)) = (g(X) − Y)². In this case, the minimizer of the risk (1) is the regression function g*(x) = E[Y | X = x], and a natural baseline is the empirical mean. If Y ∈ {0, 1}, we use the 0-1 contrast γ(g; (X, Y)) := 1_{g(X) ≠ Y}; the minimizer of the risk is the Bayes classifier, and a natural baseline is the majority vote h_n = 1_{Σ_{i=1}^n Y_i ≥ n/2}. The predictivity score (2) is a measure of accuracy that is independent of the range of Y. The risk (1) is a positive function, so P_n(g_n, h_n) < 1. Moreover, if P_n(g_n, h_n) < 0, the predictor g_n is less accurate than the chosen baseline h_n, and in this case it is better to use h_n instead of g_n. Hence, we can assume that the predictivity score is a number between 0 and 1.
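To make the computation concrete, here is a minimal Python sketch of the predictivity score (2) for the regression case, with the quadratic contrast and the empirical mean as baseline. The function names are ours, chosen for illustration; this is not the paper's published code.

```python
import numpy as np

def empirical_risk(y_pred, y_true):
    """Empirical risk L_n under the quadratic contrast (regression)."""
    return np.mean((y_pred - y_true) ** 2)

def predictivity(y_pred, y_true):
    """Predictivity score P_n = 1 - L_n(g_n) / L_n(h_n), where the
    baseline h_n predicts the empirical mean of the observed targets."""
    baseline = np.full_like(y_true, y_true.mean(), dtype=float)
    return 1.0 - empirical_risk(y_pred, y_true) / empirical_risk(baseline, y_true)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])   # an accurate model's predictions
print(predictivity(y_pred, y_true))       # close to 1
```

A score near 1 means the model clearly beats the naïve baseline; a negative score would mean the baseline itself is the better predictor.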

q-Stability score
Usually, stability refers to the stability of the prediction [50]. Indeed, it has been shown that stability and predictive accuracy are closely connected (see for example [6,43]). In this paper we are more interested in the stability of the generated model. The importance of stability for interpretability has been presented in [53]. Nevertheless, generating a stable set of rules is challenging, as explained in [33]. In [4,5], the authors proposed a measure of stability for rule-based algorithms based on the following definition: "A rule learning algorithm is stable if two independent estimations based on two independent samples, drawn from the same distribution Q, result in two similar lists of rules." The q-stability score is based on the same definition. This concept is problematic for algorithms that do not use feature discretization and work with real values: if a feature is continuous, the probability that a decision tree algorithm will split at exactly the same value for the same rule on two independent samples is zero. For this reason, this definition of stability is too stringent in that case. One way to avoid this problem is to discretize all continuous features. The discretization of features is a common way to control the complexity of a rule generator. In [14], for example, the authors use entropy minimization heuristics to discretize features, and for the algorithms Bayesian Rule Lists (BRL) [33], SIRUS [4,5] and Rule Induction Partitioning Estimator (RIPE) [37] the authors discretized the features using their empirical quantiles. We refer to [12] for an overview of common discretization methods.
In this paper, to generate the q-stability score, we consider a discretization process on the conditions of the selected rules based on the empirical q-quantile of the implied continuous features. Because this process is only used for the calculation of the q-stability, it does not affect the accuracy of the generated model.
First, we discretize the continuous features involved in the selected rules. Let q ∈ N be the number of quantiles considered for the q-stability score and let X be a continuous feature. Each value of X is assigned an integer p ∈ {1, . . . , q}, called its bin. A discrete version of the feature X, denoted Q_q(X), is constructed by replacing each value with its corresponding bin. In other words, the value p_a is assigned to every a ∈ X such that a ∈ [x_{(p_a − 1)/q}, x_{p_a/q}], where x_{p/q} denotes the empirical p/q-quantile of X.
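This discretization step can be sketched as follows. The implementation is ours, and we assume the convention that a value falling exactly on a quantile boundary is placed in the upper bin (the choice at boundaries is not specified above):

```python
import numpy as np

def discretize(x, q):
    """Map each value of a continuous feature to a bin p in {1, ..., q},
    where bin p covers the values between the empirical (p-1)/q and p/q
    quantiles of x."""
    # Interior quantile edges x_{1/q}, ..., x_{(q-1)/q}
    edges = np.quantile(x, [p / q for p in range(1, q)])
    # searchsorted returns the bin index in {0, ..., q-1}; shift to {1, ..., q}
    return np.searchsorted(edges, x, side="left") + 1

x = np.arange(100.0)       # a continuous feature
bins = discretize(x, 4)    # quartile-based bins
print(bins.min(), bins.max())  # 1 4
```

With q = 4, the 100 values are split evenly into four bins of 25 values each.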
Then, we extend this discretization to the selected rules by replacing the interval boundaries of the individual tests c_i with the corresponding bins; for example, a test of the form X ∈ [a, b] becomes Q_q(X) ∈ [p_a, p_b]. Finally, the formula for the q-stability score is based on the so-called Dice-Sorensen index. Let A be an algorithm and let D_n and D'_n be two independent samples of n i.i.d. observations drawn from the same distribution Q. We denote by R_n and R'_n the rule sets generated by the algorithm A based on D_n and D'_n, respectively. Then, the q-stability score is calculated as

S^q_n(A) = 2 |Q_q(R_n) ∩ Q_q(R'_n)| / (|Q_q(R_n)| + |Q_q(R'_n)|),    (3)

where Q_q(R) is the discretized version of the rule set R, with the convention that 0/0 = 0, and the discretization is performed using D_n and D'_n, respectively. The q-stability score (3) measures the proportion of rules common to Q_q(R_n) and Q_q(R'_n). It is a number between 0 and 1: if Q_q(R_n) and Q_q(R'_n) have no common rules, then S^q_n(A) = 0, while if they contain exactly the same rules, then S^q_n(A) = 1.
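Once the rules are discretized, computing (3) reduces to a set comparison. In the sketch below, which is our own illustration, each discretized rule is encoded as a frozenset of (feature, lower bin, upper bin) tests, so that syntactically identical rules compare equal:

```python
def q_stability(rules_a, rules_b):
    """Dice-Sorensen index between two discretized rule sets, with the
    convention 0/0 = 0. Each rule is a frozenset of (feature, bin_low,
    bin_high) tests, a representation chosen here for illustration."""
    set_a, set_b = set(rules_a), set(rules_b)
    denom = len(set_a) + len(set_b)
    if denom == 0:
        return 0.0
    return 2 * len(set_a & set_b) / denom

r1 = {frozenset({("X1", 1, 2)}), frozenset({("X1", 1, 2), ("X2", 3, 4)})}
r2 = {frozenset({("X1", 1, 2)}), frozenset({("X3", 1, 4)})}
print(q_stability(r1, r2))  # 0.5
```

Here the two rule sets share exactly one rule out of four in total, giving 2 × 1 / 4 = 0.5.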

Simplicity score
Simplicity as a component of interpretability has been studied in [36] for classification trees and in [23] for rule-based models. In [38] the authors introduced the concept of an interpretability index, based on the sum of the lengths of all the rules of the prediction model. Such an interpretability index should not be confused with the broader concept of interpretability developed in this paper. As discussed in section 5, the former will be treated as one of the components of the latter.
Definition 4.1. The interpretability index of an estimator g_n generated by a rule set R_n is defined by

Int(g_n) = Σ_{r ∈ R_n} length(r).    (4)

Even if (4) seems naive, we consider it a reasonable measure of the simplicity of a tree-based or rule-based algorithm. Indeed, as the number of rules or the length of the rules increases, Int(g_n) also increases. The fewer the rules and the shorter their lengths, the easier their understanding should be.
It is important to note that the value (4), which is a positive number, cannot be directly compared to the scores from (2) and (3), which are between 0 and 1.
The simplicity score is based on Definition 4.1. The idea is to compare (4) relative to a set of algorithms A^m_1 = {A_1, . . . , A_m}. Hence the simplicity of an algorithm A_i ∈ A^m_1 is defined in relative terms as follows:

S_n(A_i, A^m_1) = min_{1 ≤ j ≤ m} Int(g_n^{A_j}) / Int(g_n^{A_i}).    (5)
Similar to the previously defined scores, this quantity is also a number between 0 and 1: if A_i generates the simplest predictor among the set of algorithms A^m_1, then S_n(A_i, A^m_1) = 1, and the simplicity of the other algorithms in A^m_1 is evaluated relative to A_i. We note that it would be useful to be able to calculate the simplicity score of a single algorithm. To do this, we would need a threshold value for the simplicity score. In practice this information could be obtained from a survey on the maximum size of a rule set that people are willing to use.
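Definition 4.1 and the relative simplicity score can be sketched together as follows; rules are represented simply as lists of tests, an encoding we choose for illustration:

```python
def interpretability_index(rule_set):
    """Int(g_n): sum of the lengths of all rules in the model (Definition 4.1)."""
    return sum(len(rule) for rule in rule_set)

def simplicity_scores(models):
    """Relative simplicity over a set of algorithms: the simplest model
    scores 1, the others score min_j Int(A_j) / Int(A_i)."""
    ints = {name: interpretability_index(rules) for name, rules in models.items()}
    best = min(ints.values())
    return {name: best / value for name, value in ints.items()}

models = {
    "A1": [["X1 <= 2"], ["X1 <= 2", "X2 > 3"]],            # Int = 3
    "A2": [["X1 <= 2", "X2 > 3", "X3 > 1"], ["X4 <= 0"],
           ["X2 > 3", "X5 <= 7"]],                          # Int = 6
}
print(simplicity_scores(models))  # {'A1': 1.0, 'A2': 0.5}
```

Note how the score is only defined relative to the set of competitors: adding or removing an algorithm can change every score.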

Interpretability score
In [11] the authors define interpretability as "the ability to explain or present to a person in an understandable form." We claim that an algorithm with a high predictivity score (2), stability score (3) and simplicity score (5) is interpretable in the sense of [11]. Indeed, a high predictivity score ensures confidence in, and truthfulness of, the generated model; a high stability score ensures robustness and insensitivity to noise; and a high simplicity score ensures that the generated model is easy for humans to understand and to audit.
The main idea behind the proposed definition of interpretability is to use a weighted sum of these three scores. Let A^m_1 be a set of algorithms. Then, the interpretability of any algorithm A_i ∈ A^m_1 is defined as

I(A_i) = α_1 P_n(g_n, h_n) + α_2 S^q_n(A_i) + α_3 S_n(A_i, A^m_1),    (6)

where the coefficients α_1, α_2 and α_3 are chosen according to the analyst's objective, such that α_1 + α_2 + α_3 = 1.
It is important to note that the definition of interpretability (6) depends on the set of algorithms under consideration and the specific setting. Therefore, the interpretability score only makes sense within this set of algorithms and for the given setting.
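The final combination is a one-liner. The example below uses equal weights and illustrative score values in the spirit of the empirical findings reported later (RuleFit tends to be accurate but neither stable nor simple; SIRUS stable and simple):

```python
def interpretability(pred, stab, simp, alphas=(1/3, 1/3, 1/3)):
    """Interpretability score (6): a weighted sum of the predictivity,
    q-stability and simplicity scores. The weights must sum to 1 and
    reflect the analyst's priorities."""
    a1, a2, a3 = alphas
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9, "weights must sum to 1"
    return a1 * pred + a2 * stab + a3 * simp

# Two hypothetical algorithms under equal weights:
print(interpretability(0.9, 0.2, 0.3))  # accurate but unstable and complex
print(interpretability(0.6, 1.0, 0.8))  # less accurate but stable and simple
```

Under equal weights the second profile wins, which mirrors how a stability- and simplicity-oriented algorithm can rank first despite lower accuracy.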

Brief overview of the selected algorithms
RIPPER is a sequential coverage algorithm based on the "divide-and-conquer" approach. This means that, for a selected class, it searches for the best rule according to a criterion and removes the points covered by that rule. It then searches for the best rule for the remaining points, and so on, until all points of this class are covered. It then moves on to the next class, with the classes examined in order of increasing size.
PART is also a "divide-and-conquer" rule learner. The main difference is that in order to create the "best rule", the algorithm uses a pruned decision tree and keeps the leaf with the largest coverage.
RuleFit is a very accurate rule-based algorithm. First it generates a list of rules by considering all nodes and leaves of a boosted tree ensemble (ISLE [18]). Then the rules are used as additional binary features in a sparse linear regression model fitted with the Lasso [49]. A feature generated by a rule is equal to 1 if the rule is activated, and 0 otherwise.
NodeHarvest also uses a tree ensemble as a rule generator. The algorithm considers all nodes and leaves of a Random Forest as rules and solves a linear quadratic problem to fit a weight for each node. Hence, the estimator is a convex combination of the nodes.

Table 1: Presentation of the publicly available regression datasets used in this paper.

Name     (n × d)     Description
Ozone    330 × 9     Prediction of atmospheric ozone concentration from daily meteorological measurements [27].
MPG      398 × 8     Prediction of city-cycle fuel consumption in miles per gallon [13].
Student  649 × 32    Prediction of the final grade of the student based on attributes collected by reports and questionnaires [9].
Abalone  4177 × 7    Prediction of the age of abalone from physical measurements [13].
Covering Algorithm has been designed to generate a very simple model. The algorithm extracts a sparse rule set by considering all nodes and leaves of a tree ensemble (generated by the Random Forest algorithm, the Gradient Boosting algorithm [19] or the Stochastic Gradient Boosting algorithm [20]). Rules are selected according to their statistical properties to form a "quasi-covering". The covering is then turned into a partition using the so-called partitioning trick [37] to form a consistent estimator of the regression function.
SIRUS has been designed to be a stable predictive algorithm. SIRUS uses a modified Random Forest to generate a large number of rules, and selects the rules whose redundancy exceeds a tuning parameter p_0. To ensure that redundancy can be achieved, the features are discretized.
For a comprehensive review of rule-based algorithms we refer to [21,22], while for a comprehensive review of interpretable machine learning we refer to [40].

Datasets
We have used publicly available datasets from the UCI Machine Learning Repository [13] and from [27]. We have selected six datasets for regression, which are summarized in Table 1, and three datasets for classification, which are summarized in Table 2.

Execution
For each dataset we perform 10-fold cross-validation. The parameter settings for the algorithms are summarized in Table 3. These parameters were selected according to the authors' recommendations to generate models based on rules of equivalent lengths, meaning that all rules generated by the algorithms have a bounded length. The parameters of the algorithms are not tuned, since ranking the algorithms is not the purpose of this paper; the aim of this section is to illustrate how the score is computed. For each algorithm, a model is fitted on the training set to obtain the simplicity score (4), while the predictivity score (2) is measured on the test set. To obtain the predictivity score we set γ(g; (X, Y)) = (g(X) − Y)² and h_n = (1/n) Σ_{i=1}^n y_i for regression, and γ(g; (X, Y)) = 1_{g(X) ≠ Y} and h_n = mode({y_1, . . . , y_n}) for classification.
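For the classification case, the predictivity computation with the 0-1 contrast and the mode baseline can be sketched as below (our own illustration; a guard against a zero baseline risk is omitted for brevity):

```python
import numpy as np
from collections import Counter

def predictivity_classification(y_pred, y_true):
    """Predictivity score with the 0-1 contrast: the baseline h_n always
    predicts the most frequent class (mode) of the observed labels."""
    mode = Counter(y_true).most_common(1)[0][0]
    risk_model = np.mean(y_pred != y_true)                    # L_n(g_n)
    risk_base = np.mean(np.full(len(y_true), mode) != y_true) # L_n(h_n)
    return 1.0 - risk_model / risk_base

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 1])  # one misclassification
print(predictivity_classification(y_pred, y_true))
```

Here the model misclassifies 1 of 8 points while the mode baseline misclassifies 3 of 8, giving a score of 1 − (1/8)/(3/8) = 2/3.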
Then, to obtain the stability score, the training set is randomly divided into two sets of equal size and two models are constructed. The code, a combination of Python and R, is available on GitHub: https://github.com/Advestis/Interpretability.
The choice of the α's in (6) is an important step in the process of comparing interpretability. For these applications we use an equally weighted average, i.e., α_1 = α_2 = α_3 = 1/3. Another possibility is to set each α inversely proportional to the variance of the associated score for each dataset. In our application the results were very similar to the equally weighted case (data not shown).

Results for regression
The averaged scores are summarized in Table 4. As expected, RuleFit is the most accurate algorithm. However, RuleFit is neither stable nor simple. SIRUS is the most stable algorithm and the Covering Algorithm is one of the simplest. For all datasets, SIRUS appears to be the most interesting algorithm within this selection, according to our score (6). Figures 1, 2 and 3 show the box-plots of the predictivity, q-stability and simplicity scores, respectively, of each algorithm on the Ozone dataset. Another interesting result is obtained from the correlation matrix in Table 5, which was calculated from all results generated by the 10-fold cross-validation over all datasets. It shows that the simplicity score is negatively correlated with the predictivity score, which illustrates the well-known predictivity/simplicity trade-off. Furthermore, the stability score appears uncorrelated with the predictivity score, but negatively correlated with the simplicity score, a result that is less expected.
One may note that the distributions of the scores are very different. Indeed, the ranges of the q-stability and simplicity scores are small relative to those of the predictivity scores. This may be explained by the fact that all algorithms are designed to be accurate, but not necessarily stable or simple. For example, SIRUS was designed to be stable, and according to the q-stability score it is, with a score close to 1. On the other hand, stability was not a design goal of RuleFit, and its q-stability score is always low. The same reasoning applies to the simplicity score.

Results for classification
The averaged scores are summarized in Table 6. All selected algorithms have the same accuracy on all datasets. However, RIPPER and PART are both very stable algorithms, and RIPPER is the simplest of the three. Therefore, for these datasets and among these three algorithms, RIPPER is the most interpretable according to our measure (6). Figures 4, 5 and 6 show the box-plots of the predictivity, q-stability and simplicity scores, respectively, of each algorithm on the Speaker dataset. In contrast to the regression case, the correlation matrix in Table 7, which was calculated from all scores generated by the 10-fold cross-validation over all datasets, shows that the scores do not seem to be correlated.

Table 4: Average of the predictivity score (P_n), stability score (S^q_n), simplicity score (S_n) and interpretability score (I) over a 10-fold cross-validation of commonly used interpretable algorithms for various public regression datasets. Best values are in bold, as well as values within 10% of the maximum value for each dataset.

Table 6: Average of the predictivity score (P_n), stability score (S^q_n), simplicity score (S_n) and interpretability score (I) over a 10-fold cross-validation of commonly used interpretable algorithms for various public classification datasets. Best values are in bold, as well as values within 10% of the maximum value for each dataset.

When considering these results, one should take into account that for the classification part we tested fewer algorithms on fewer datasets than for the regression part. Moreover, the accuracy of the models on these datasets was low; if the algorithms are not accurate enough, it may not be useful to look at the other scores. The algorithms appear to be very stable, which may be explained by their lack of complexity. Since a good predictivity score is a prerequisite, these algorithms should be tuned to be more complex.

Conclusion and perspectives
In this paper we propose a score that may be used to compare the interpretability of tree-based and rule-based algorithms. This score is based on the triptych predictivity (2), stability (3), and simplicity (5), as proposed in [52,4]. The proposed methodology provides an easy way to rank the interpretability of a set of algorithms, combining three different scores that capture the main components of interpretability. Our applications show that the q-stability score and the simplicity score are quite stable across datasets. This observation is related to the properties of the algorithms: an algorithm designed for accuracy, stability or simplicity should maintain this property independently of the dataset. It is important to note that, according to Definition 4.1, 100 rules of length 1 have the same interpretability index (4) as a single rule of length 100, which may be debatable. Furthermore, the stability score is purely syntactic and quite restrictive: if some features are duplicated, two rules can have different syntactic conditions and yet be identical in terms of their activations. One way to relax the stability score could be to compare rules on the basis of their activation sets (i.e., by searching for observations for which the conditions are fulfilled simultaneously). Another issue is the selection of the weights in the interpretability formula (6). For simplicity, we have used equal weights in this paper, but future work is needed on the optimal choice of these weights to match the specific goals of the analyst.
As seen throughout the paper, the proposed interpretability score is only meaningful when used to compare two or more algorithms. In future work, we intend to develop an interpretability score that can be computed for an algorithm regardless of whether other algorithms are considered. We also plan to adapt the measure of interpretability to other well-known ML algorithms and ML problems, such as clustering or dimension reduction. To achieve this goal we will need to modify the definitions of the q-stability score and the simplicity score, since these two scores can currently be computed only for rule-based algorithms or tree-based algorithms (after converting the generated tree into a set of rules).
Another interesting extension would be the addition of a semantic analysis of the variables involved in the rules. In fact, NLP methods could be used to measure the distance between the target and these variables in a text corpus. This distance could be interpreted as the relevance of using such variables to describe the target.