Article

Stochastic Configuration Based Fuzzy Inference System with Interpretable Fuzzy Rules and Intelligence Search Process

1 School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China
2 School of Applied Mathematics, Beijing Normal University, Zhuhai 519087, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 614; https://doi.org/10.3390/math11030614
Submission received: 30 November 2022 / Revised: 20 January 2023 / Accepted: 21 January 2023 / Published: 26 January 2023
(This article belongs to the Special Issue Intelligent and Fuzzy Systems in Engineering and Technology)

Abstract

In this paper, a stochastic configuration based fuzzy inference system with interpretable fuzzy rules (SCFS-IFRs) is proposed to improve the interpretability and performance of the fuzzy inference system and to determine an appropriate model structure autonomously. The proposed SCFS-IFRs first realize the fuzzy system through interpretable linguistic fuzzy rules (ILFRs), which endow the system with clear semantic interpretability. Meanwhile, using an incremental learning method based on stochastic configuration, the appropriate architecture of the system is determined by the incremental generation of ILFRs under a supervisory mechanism. In addition, the particle swarm optimization (PSO) algorithm, an intelligence search technique, is used in the incremental learning process of ILFRs to obtain better random parameters and improve approximation accuracy. The performance of SCFS-IFRs is verified on regression and classification benchmark datasets. Regression experiments show that the proposed SCFS-IFRs perform best on 10 of the 20 datasets, statistically significantly outperforming the other eight state-of-the-art algorithms. Classification experiments show that, compared with six other fuzzy classifiers, SCFS-IFRs achieve higher classification accuracy and better interpretability with fewer rules.

1. Introduction

Neuro-fuzzy systems [1,2], which inherit the advantages of both fuzzy inference systems and neural networks, have attracted sustained attention. The fuzzy rules in neuro-fuzzy systems make the system interpretable, and the trainable parameters allow the precision of the system to be improved through learning.
Interpretability is a very important research direction for neuro-fuzzy systems. Since interpretability is the ability to express the meaning of a model in an understandable way, it is of great help in understanding what the model does and what is learned from it [3]. Although no way to measure interpretability mathematically has yet been found, a fuzzy inference system with high interpretability should have a comprehensible fuzzy partition and simple fuzzy rules. We should use fuzzy sets that are distinguishable and semantically clear and use as few fuzzy rules as possible to build an interpretable rule base. In [4], a fuzzy extreme learning machine (FELM) uses ILFRs to improve the interpretability of an extreme learning machine (ELM). ILFRs use several fuzzy sets to equally divide the input universe of discourse, in order to better endow the fuzzy sets with semantic labels, and generate random matrices to select features and combine rules. ILFRs have received attention in neuro-fuzzy modeling because of their advantages in interpretability. In [5,6], two deep fuzzy classifiers which adopt ILFRs are proposed. In order to improve the model transparency of the fuzzy broad learning system (FBLS), a compact FBLS (CFBLS) using ILFRs has been proposed in [7]. Although a large number of neuro-fuzzy systems have been developed to improve interpretability and approximation accuracy, there is still room for further improvement. It remains a significant research problem to achieve good approximation accuracy with an appropriate number of fuzzy rules and to determine that number autonomously rather than through trial and error.
The most commonly used fuzzy system in neuro-fuzzy models is the Takagi-Sugeno-Kang (TSK) fuzzy system, which has been widely applied in many fields, such as regression [8], classification [9], model approximation [10], adaptive control [11], business [12], etc. Neuro-fuzzy systems adopt the form of a neural network to represent the fuzzy system, and the training algorithms of neural networks are used to extract fuzzy rules and establish the fuzzy system without tedious manual intervention. The back propagation (BP) algorithm, a commonly used training algorithm in neural networks, has been used for training neuro-fuzzy systems [13]. However, as datasets grow in size and dimension, the training algorithms of neuro-fuzzy systems encounter many difficulties, such as training speed, overfitting, low interpretability, etc.
Because of the effectiveness and efficiency of randomized learning techniques, they have great potential in developing fast learning algorithms for neuro-fuzzy systems. In [14], the proposed Bernstein fuzzy system uses the ELM training system to simultaneously approximate smooth functions and their derivatives. In the generalized Bernstein fuzzy system, the premise parameters are randomly generated and the consequent parameters are determined by solving a linear optimization problem. In [15], a regularized extreme learning adaptive neuro-fuzzy inference system (R-ELANFIS) is designed to improve the generalization performance of a neuro-fuzzy system. In [16], a broad learning system (BLS) is proposed for fast learning of a large volume of data, which randomly assign the values of input parameters and expand the neurons in a broad manner without a retraining process. In [17], a TSK fuzzy system is combined with a broad learning system to establish a fuzzy broad learning system.
The scope of the random parameters plays an important role in the training process of randomized learning approaches and has been proved to affect approximation capability [18]. A stochastic configuration network (SCN) is proposed in [19] to improve randomized learning techniques. An SCN is built incrementally by assigning the random parameters under a supervisory mechanism and adaptively setting their range. The supervisory mechanism of SCN is helpful for ensuring universal approximation and good performance. Various SCN models have been developed over the past decade. In [20], a deep SCN is developed to randomly assign the weights of each layer under inequality constraints. In [21], a deep stacked SCN is established to automatically extract hidden units from data streams. In [22], a driving amount is introduced into the structure of SCN to improve its generalization ability.
In this paper, a novel stochastic configuration based fuzzy inference system called SCFS-IFRs is proposed to improve the interpretability and performance of the fuzzy system. It uses an adaptive incremental learning approach based on the stochastic configuration algorithm to determine an appropriate architecture autonomously and to ensure universal approximation and continuous performance improvement. The contributions of SCFS-IFRs can be summarized as follows:
(1)
For high interpretability, SCFS-IFRs implement the fuzzy system through ILFRs and determine the appropriate number of fuzzy rules by an adaptive incremental learning algorithm. ILFRs endow fuzzy sets with semantic labels by equally dividing the input universe of discourse, and a rule combination matrix is used to select the activated fuzzy sets; together they improve the semantic expression of fuzzy rules. Moreover, the incremental learning method of SCFS-IFRs, which determines a suitable number of fuzzy rules, is also beneficial for improving interpretability and avoiding the problem of “rule explosion”.
(2)
For good performance, a supervisory mechanism and an intelligence search process are incorporated into the adaptive incremental learning algorithm of SCFS-IFRs. Inequality constraints that the premise parameters of the fuzzy rules need to satisfy are given, which ensure that the performance of SCFS-IFRs improves as rules are added. The PSO algorithm, an intelligence search technique, is used in the stochastic configuration process to obtain better random parameters.
The rest of this article is organized as follows. In Section 2, we review the TSK fuzzy system and SCN. In Section 3, we elaborate the proposed SCFS-IFRs. In Section 4, numerical experiments verify the performance of the proposed SCFS-IFRs. Some concluding remarks are given in Section 5.
The abbreviations and variables used in this paper are listed in Table 1 and Table 2.

2. Preliminaries

2.1. Takagi-Sugeno-Kang Fuzzy Inference System

The Takagi-Sugeno-Kang (TSK) fuzzy system proposed in [23], as one of the most important fuzzy systems, has been applied extensively in many different fields [24]. The consequent part of each fuzzy rule is a function of the input variables rather than a fuzzy set. Specifically, given an input vector $\mathbf{x} = (x_1, x_2, \ldots, x_m)^T$, the typical fuzzy rules are of the form:

$R_l$: If $x_1$ is $A_{l1}$ and $\ldots$ and $x_m$ is $A_{lm}$, then $y = f_l(\mathbf{x})$, $l = 1, 2, \ldots, L$ (1)

where $A_{lj}$ is an antecedent fuzzy set of variable $x_j$ for rule $R_l$, $f_l(\mathbf{x})$ is the consequent function of rule $R_l$, and $L$ is the number of fuzzy rules.
Although the popular first-order TSK fuzzy system sets the consequent function $f_l(\mathbf{x})$ as a linear function of the input variables, $f_l(\mathbf{x})$ can take any form. In [25], a linear combination of a subset of the Chebyshev polynomials was adopted as the rule consequent to improve the non-linear expressive ability of the TSK fuzzy system, which is of the form

$f_l(\mathbf{x}) = \phi(\mathbf{x})^T a_l$ (2)

where $\phi(\mathbf{x}) = (1, T_1(x_1), \ldots, T_1(x_m), T_2(x_1), \ldots, T_n(x_m))^T$ is the expanded input vector, $a_l \in \mathbb{R}^{nm+1}$ is the weight vector of the linear combination, $T_{k+1}(x_j)$ is the Chebyshev polynomial given by the recursion $T_{k+1}(x_j) = 2 x_j T_k(x_j) - T_{k-1}(x_j)$ with $T_0(x_j) = 1$ and $T_1(x_j) = x_j$, and $n$ is the highest degree of the Chebyshev polynomials used.
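For concreteness, a minimal NumPy sketch of building the expanded input vector $\phi(\mathbf{x})$ via this recursion (the function name and array layout are our illustrative choices, not from the paper):

```python
import numpy as np

def chebyshev_expand(x, n):
    # x: 1-D feature vector of length m; n: highest Chebyshev degree.
    # Returns phi(x) = (1, T_1(x_1..x_m), T_2(x_1..x_m), ..., T_n(x_1..x_m)),
    # an (n*m + 1)-vector, using T_{k+1} = 2x*T_k - T_{k-1}, T_0 = 1, T_1 = x.
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]
    for _ in range(2, n + 1):
        T.append(2.0 * x * T[-1] - T[-2])        # Chebyshev recursion
    return np.concatenate([[1.0], np.concatenate(T[1:])])
```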
For the $l$th fuzzy rule $R_l$, the activation level is:

$\tau_l(\mathbf{x}) = \prod_{j=1}^{m} \mu_{A_{lj}}(x_j)$ (3)

where $\mu_{A_{lj}}(x_j)$ is the membership function of $A_{lj}$.
As pointed out in [26,27,28], the output of system (1) can be calculated as

$y = \sum_{l=1}^{L} \tau_l(\mathbf{x}) f_l(\mathbf{x}) = \sum_{l=1}^{L} \tau_l(\mathbf{x}) \phi(\mathbf{x})^T a_l$ (4)
In the above TSK fuzzy system, the parameters of the membership functions $\mu_{A_{lj}}$ and the consequent weights $a_l$ need to be determined. Traditional gradient descent algorithms can be used to identify these parameters, but they are usually time-consuming on large amounts of input data. Many learning algorithms [29,30,31] have been proposed to accelerate the parameter identification procedure and improve the approximation performance. For example, the premise parameters can be generated randomly and the consequent parameters can then be quickly calculated using a pseudoinverse.
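As a hedged illustration of this last point: once a design matrix stacking the activation-weighted expanded inputs is formed, all consequent weights follow from one ridge-regularized pseudoinverse solve (a sketch; the helper name and the regularization default are ours):

```python
import numpy as np

def solve_consequents(H, Y, lam=1e-2):
    # H: (N, L*(n*m+1)) design matrix whose i-th row concatenates
    #    tau_l(x_i) * phi(x_i)^T over the L rules; Y: (N, t) targets.
    # One regularized least-squares (pseudoinverse) step for all rules.
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)
```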

2.2. Stochastic Configuration Network

The stochastic configuration network (SCN) [19] develops randomized learning by generating random parameters under a supervisory mechanism. It can guarantee the universal approximation capability and determine an appropriate architecture.
For a given target function $f: \mathbb{R}^m \to \mathbb{R}$, an SCN with $L-1$ hidden nodes is denoted as

$y = \sum_{j=1}^{L-1} \beta_j g(w_j^T \mathbf{x} + b_j)$ (5)

where $g(w_j^T \mathbf{x} + b_j)$ is the activation function with random parameters $w_j \in \mathbb{R}^m$ and $b_j \in \mathbb{R}$, and $\beta_j$ is the output weight of the $j$th hidden node. The residual error is denoted as

$e_{L-1} = f - \sum_{j=1}^{L-1} \beta_j g(w_j^T \mathbf{x} + b_j)$ (6)
If $\|e_{L-1}\|$ is bigger than a predefined error tolerance $\varepsilon$, the SCN framework incrementally adds hidden nodes until the residual error reaches $\varepsilon$. The supervisory mechanism is an inequality constraint of the form

$\langle e_{L-1}, g_L \rangle^2 \geq (1 - r - \mu_L) \|e_{L-1}\|^2$ (7)

where $r \in (0, 1)$, $\mu_L$ is a sequence with $\lim_{L \to +\infty} \mu_L = 0$ and $0 < \mu_L \leq 1 - r$, and $g_L(\mathbf{x}) = g(w_L^T \mathbf{x} + b_L)$ is the random basis function of the new hidden node.
The output parameters $\beta_j$, $j = 1, 2, \ldots, L$, are calculated by

$[\beta_1^*, \beta_2^*, \ldots, \beta_L^*] = \arg\min_{\beta} \left\| f - \sum_{j=1}^{L} \beta_j g_j \right\|^2$ (8)
The resulting model $y = \sum_{j=1}^{L} \beta_j g_j$ shows continuous performance improvement, and the universal approximation of SCN has been proved in [19].
Although the existence of random parameters satisfying constraint condition (7) has been proved theoretically, how to find better parameters among those satisfying the constraint is still a meaningful problem. In this paper, the PSO algorithm is applied to find better parameters to further improve the performance of the system.
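For intuition, a compact sketch of SCN-style incremental construction under an inequality check of the form (7), simplified to one output, a sigmoid activation, and a fixed pool of random candidates (these simplifications are our assumptions, not details from [19]):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def scn_fit(X, y, L_max=50, eps=1e-3, r=0.99, n_candidates=100):
    N, m = X.shape
    H = np.zeros((N, 0))                      # hidden-node outputs so far
    beta = np.zeros(0)
    e = y.copy()                              # current residual
    for L in range(1, L_max + 1):
        if np.linalg.norm(e) <= eps:
            break
        mu = (1.0 - r) / (L + 1)              # mu_L -> 0, 0 < mu_L <= 1 - r
        best_g, best_score = None, -np.inf
        for _ in range(n_candidates):         # random candidate nodes
            w, b = rng.uniform(-1, 1, m), rng.uniform(-1, 1)
            g = sigmoid(X @ w + b)
            if (e @ g) ** 2 >= (1 - r - mu) * (e @ e):   # inequality (7)
                score = (e @ g) ** 2 / (g @ g)
                if score > best_score:
                    best_g, best_score = g, score
        if best_g is None:
            break                             # no admissible candidate found
        H = np.column_stack([H, best_g])
        beta = np.linalg.lstsq(H, y, rcond=None)[0]      # least squares (8)
        e = y - H @ beta
    return H, beta
```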

3. Stochastic Configuration Based Fuzzy Inference System with Interpretable Fuzzy Rules and Intelligence Search Process

The SCFS-IFRs model establishes a TSK fuzzy system of form (1) by adopting the following new methods.
(1)
The ILFRs with the rule combination matrix are used to construct the TSK fuzzy system to guarantee model interpretability and avoid the “rule explosion” problem.
(2)
An adaptive incremental learning algorithm based on stochastic configuration is proposed to train the SCFS-IFR model and fix the suitable number of fuzzy rules.
(3)
The PSO algorithm is adopted to search for better parameters to improve the performance of the system in the process of stochastic configuration.
This section elaborates the proposed SCFS-IFRs. Firstly, the general structure of the SCFS-IFRs is introduced and then their adaptive learning algorithm is presented.

3.1. General Structure of SCFS-IFRs

The details of SCFS-IFRs shown in Figure 1 are given as follows. Given a dataset with inputs $X = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N]^T$ and outputs $Y = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_N]^T$, where $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{im})$ and $\mathbf{y}_i = (y_{i1}, y_{i2}, \ldots, y_{it})$, $i = 1, 2, \ldots, N$, suppose all input features are normalized to $[0, 1]$. We equally divide $[0, 1]$ into $K-1$ parts and, correspondingly, each feature is transformed into $K$ fuzzy sets $\{A_k\}_{k=1}^{K}$ which are defined as follows

$\mu_{A_k}(x_{ij}) = \exp\left( -\frac{(x_{ij} - u_k)^2}{2 \sigma^2} \right)$, $k = 1, 2, \ldots, K$, $j = 1, 2, \ldots, m$ (9)

where $\mu_{A_k}$ is the Gaussian membership function of the $k$th fuzzy set, $x_{ij}$ is the $j$th dimension of the input $\mathbf{x}_i$, $u_k = (k-1)/(K-1)$ is the center of fuzzy set $A_k$, and $\sigma$ is the randomly generated width. Here, the $K$ fuzzy sets can be assigned $K$ explicit linguistic labels. For instance, when $K = 5$, the five fuzzy sets can be assigned the five linguistic labels: very low, low, medium, high, and very high. Compared with the random allocation of center parameters, the traditional equidistant partition method makes the fuzzy sets more interpretable but may lead to “rule explosion”.
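A small sketch of this equidistant fuzzification, Equation (9), in NumPy (the vectorized layout is our choice):

```python
import numpy as np

def fuzzify(X, K, sigma):
    # Memberships mu_{A_k}(x_ij) of Equation (9) for all samples, features
    # and fuzzy sets; X has shape (N, m) with features scaled to [0, 1].
    centers = np.linspace(0.0, 1.0, K)        # u_k = (k - 1)/(K - 1)
    return np.exp(-(X[..., None] - centers) ** 2 / (2.0 * sigma ** 2))
```

For $K = 5$, `np.linspace(0.0, 1.0, K)` yields the centers 0, 0.25, 0.5, 0.75 and 1 that carry the five linguistic labels above.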
Motivated by [6], a rule combination matrix $C \in \{0, 1\}^{m \times K \times L}$ is used to determine which fuzzy sets of the input features are used to generate the $L$ fuzzy rules, where $C(j, k, l) = 1$ represents that the $k$th fuzzy set of the $j$th feature is activated in the $l$th fuzzy rule; otherwise, it is not activated. To ensure that at least one fuzzy set is activated for each feature in each rule, we require that $\sum_{k=1}^{K} C(j, k, l) \geq 1$, $j = 1, 2, \ldots, m$, $l = 1, 2, \ldots, L$. Let $B_{lj}$ denote the union of all activated fuzzy sets of the $j$th feature in the $l$th fuzzy rule, i.e.,

$B_{lj} = \bigcup_{C(j,k,l)=1} A_k$, $j = 1, 2, \ldots, m$ (10)
The SCFS-IFRs with $L$ ILFRs take the following form:

$R_l$: if $x_{i1}$ is $B_{l1}$ and $x_{i2}$ is $B_{l2}$ and $\ldots$ and $x_{im}$ is $B_{lm}$, then $\mathbf{y} = \phi(\mathbf{x}_i)^T a_l$, $l = 1, 2, \ldots, L$ (11)
We can adopt the algebraic-sum operator $s(a, b) = a + b - ab$ to calculate the membership function of $B_{lj}$ as

$\mu_{B_{lj}}(x_{ij}) = 1 - \prod_{k=1}^{K} \left( 1 - C(j, k, l)\, \mu_{A_k}(x_{ij}) \right)$, $j = 1, 2, \ldots, m$, $l = 1, 2, \ldots, L$ (12)
Therefore, the output and residual error of the SCFS-IFRs (11) on dataset $(X, Y)$ can be calculated as

$S_L = \sum_{l=1}^{L} \begin{bmatrix} \tau_l(\mathbf{x}_1) \phi(\mathbf{x}_1)^T \\ \vdots \\ \tau_l(\mathbf{x}_N) \phi(\mathbf{x}_N)^T \end{bmatrix} a_l = \sum_{l=1}^{L} \eta_l a_l$ (13)

$e_L = Y - S_L$ (14)

where $\tau_l(\mathbf{x}_i) = \prod_{j=1}^{m} \mu_{B_{lj}}(x_{ij}) = \prod_{j=1}^{m} \left[ 1 - \prod_{k=1}^{K} \left( 1 - C(j,k,l)\, \mu_{A_k}(x_{ij}) \right) \right]$ is the activation level of the $l$th fuzzy rule, $a_l$ is the weight of the linear combination of the expanded input vector $\phi(\mathbf{x}_i)$, and $\eta_l$ denotes

$\eta_l = \begin{bmatrix} \tau_l(\mathbf{x}_1) \phi(\mathbf{x}_1)^T \\ \vdots \\ \tau_l(\mathbf{x}_N) \phi(\mathbf{x}_N)^T \end{bmatrix}$ (15)
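Continuing the sketch, the union memberships (12), the rule activation levels, and the matrix $\eta_l$ of (15) can be computed as follows (this assumes the hypothetical `fuzzify` helper above; the names are ours):

```python
import numpy as np

def rule_activation(mu, C_l):
    # mu: (N, m, K) memberships from fuzzify(); C_l: (m, K) binary slice
    # of the rule combination matrix for rule l.
    # Algebraic-sum union (12), then product over features: tau_l(x_i).
    union = 1.0 - np.prod(1.0 - C_l[None, :, :] * mu, axis=2)   # (N, m)
    return np.prod(union, axis=1)                               # (N,)

def eta_matrix(tau_l, Phi):
    # eta_l of Equation (15): each row is tau_l(x_i) * phi(x_i)^T.
    # tau_l: (N,); Phi: (N, n*m + 1) expanded inputs.
    return tau_l[:, None] * Phi
```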
Remark 1.
An SCFS-IFRs model is implemented in the framework of the TSK fuzzy system. The semantically clear fuzzy sets generated by equidistant partition and the rule combination matrix used to generate fuzzy rules contribute to the high interpretability of SCFS-IFRs.
For instance, suppose we have $L = 2$ fuzzy rules, $m = 3$ features for each input sample $\mathbf{x}_i$, and $K = 5$ fuzzy sets with the linguistic labels: very low, low, medium, high, and very high. Given
$C(:,:,1) = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{bmatrix}$, $C(:,:,2) = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \end{bmatrix}$
the two ILFRs can be expressed as

$R_1$: if $x_{i1}$ is very low or low and $x_{i2}$ is medium and $x_{i3}$ is high or very high, then $\mathbf{y} = \phi(\mathbf{x}_i)^T a_1$
$R_2$: if $x_{i1}$ is very high and $x_{i2}$ is very low and $x_{i3}$ is medium or high, then $\mathbf{y} = \phi(\mathbf{x}_i)^T a_2$
In addition, the rule combination matrix defined in this paper allows multiple fuzzy sets corresponding to each feature in a fuzzy rule to be activated, which improves the semantic expression of fuzzy rules and makes it possible to express more complex relations with fewer rules.
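The linguistic reading of a rule follows mechanically from its slice of the rule combination matrix; a small sketch that reproduces rule $R_1$ above (the helper is ours):

```python
LABELS = ["very low", "low", "medium", "high", "very high"]   # K = 5

def describe_rule(C_l):
    # Render one rule's antecedent from its (m, K) binary slice of C.
    parts = []
    for j, row in enumerate(C_l, start=1):
        active = " or ".join(LABELS[k] for k, on in enumerate(row) if on)
        parts.append(f"x{j} is {active}")
    return "if " + " and ".join(parts)

C1 = [[1, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 1]]
print(describe_rule(C1))
# if x1 is very low or low and x2 is medium and x3 is high or very high
```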

3.2. Adaptive Incremental Learning for SCFS-IFRs

We propose an adaptive incremental learning algorithm to train the proposed SCFS-IFRs, which helps to determine an appropriate model structure autonomously and to ensure the approximation capability of SCFS-IFRs. To further improve the approximation accuracy, the PSO algorithm is adopted to search for better random parameters in the stochastic configuration process.
The adaptive incremental learning algorithm starts with an initial system, which can be either an empty system or a system with $L_0$ rules generated by other traditional methods. Given the training dataset $(X, Y)$, new fuzzy rules are added incrementally under a supervisory mechanism of stochastic configuration to improve the approximation performance until a predefined tolerance level $\varepsilon$ is met.
Suppose that there is an SCFS-IFRs $S_{L-1}$ with $L-1$ fuzzy rules of the form (11). The output and residual error of $S_{L-1}$ on dataset $(X, Y)$ are expressed in the form

$S_{L-1} = \sum_{l=1}^{L-1} \eta_l a_l$, $e_{L-1} = Y - S_{L-1}$
If $e_{L-1}$ does not reach the predefined tolerance level $\varepsilon$, i.e., $\|e_{L-1}\| > \varepsilon$, a new fuzzy rule $R_L$ should be generated to construct the new SCFS-IFRs $S_L$ with $L$ fuzzy rules. Three parameters must be determined to generate rule $R_L$: the rule combination matrix (denoted by $C_L = C(:,:,L)$), the width of the Gaussian membership functions (denoted by $\sigma_L$) and the weight of the linear combination in the consequent part (denoted by $a_L$).
The premise parameters $C_L$ and $\sigma_L$ are randomly produced to satisfy the following inequality

$\left\| \eta_L \left( \eta_L^T \eta_L + \lambda I \right)^{-1} \eta_L^T e_{L-1} \right\|^2 \geq (1 - r - \mu_L) \|e_{L-1}\|^2$ (16)

and the consequent parameter $a_L$ can be evaluated by

$a_L = \left( \eta_L^T \eta_L + \lambda I \right)^{-1} \eta_L^T e_{L-1}$ (17)

where $\lambda$ is the regularization coefficient, $I$ is the identity matrix, $\eta_L$ is of form (15), $r \in (0, 1)$, and $\mu_L$ is a sequence with $\lim_{L \to \infty} \mu_L = 0$ and $0 < \mu_L \leq 1 - r$.
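In code, evaluating the consequent weights (17) and checking the supervisory inequality (16) for a candidate rule reduces to a few lines (a sketch building on the earlier helpers; the function name is ours):

```python
import numpy as np

def configure_rule(eta_L, e_prev, lam, r, mu_L):
    # Returns (a_L, ok): the consequent weights of Equation (17) and
    # whether the candidate premise parameters satisfy inequality (16).
    M = eta_L.shape[1]
    a_L = np.linalg.solve(eta_L.T @ eta_L + lam * np.eye(M),
                          eta_L.T @ e_prev)                 # Equation (17)
    gain = np.linalg.norm(eta_L @ a_L) ** 2                 # LHS of (16)
    ok = gain >= (1.0 - r - mu_L) * np.linalg.norm(e_prev) ** 2
    return a_L, ok
```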
The output and the residual error of $S_L$ can be represented as

$S_L = \sum_{l=1}^{L} \eta_l a_l$, $e_L = Y - S_L = e_{L-1} - \eta_L a_L$

and the improvement brought about by the new rule $R_L$ can be calculated as

$P_L = \|e_L\|^2 - \|e_{L-1}\|^2 = \|e_{L-1} - \eta_L a_L\|^2 - \|e_{L-1}\|^2 = -\left\| \eta_L \left( \eta_L^T \eta_L + \lambda I \right)^{-1} \eta_L^T e_{L-1} \right\|^2$ (18)
The improvement $P_L$ provides the objective function for the choice of the parameters $C_L$ and $\sigma_L$. There may be many values of $C_L$ and $\sigma_L$ that satisfy the inequality constraint (16). We adopt the PSO algorithm to search among them for parameters $C_L$ and $\sigma_L$ that satisfy the inequality constraint (16) and give smaller (i.e., more negative) values of $P_L$, in order to improve the system performance.
In the stochastic configuration process using the PSO algorithm, $T$ pairs of parameters $\{(C_L^t, \sigma_L^t)\}_{t=1}^{T}$ within $[0, 1]^{m \times K} \times [0, b_\sigma]$ are first randomly initialized as the positions of the initial population; correspondingly, $T$ pairs of parameters $\{(V_C^t, V_\sigma^t)\}_{t=1}^{T}$ within $[-1, 1]^{m \times K} \times [-b_\sigma, b_\sigma]$ are randomly initialized as the velocity vectors of the initial population. The fitness value $P_L^t$ of particle $t$ can be calculated by Equation (18).
At each iteration, the velocity vectors and positions are updated as

$(V_C^t, V_\sigma^t)_{new} = w (V_C^t, V_\sigma^t)_{old} + c_1 r_1 \left[ (C_L^t, \sigma_L^t)_p - (C_L^t, \sigma_L^t)_{old} \right] + c_2 r_2 \left[ (C_L^g, \sigma_L^g) - (C_L^t, \sigma_L^t)_{old} \right]$ (19)

$(C_L^t, \sigma_L^t)_{temp} = (C_L^t, \sigma_L^t)_{old} + (V_C^t, V_\sigma^t)_{new}$ (20)

where $w$ is the inertia weight factor, $c_1$ and $c_2$ are positive acceleration constants, $r_1$ and $r_2$ are random factors sampled from a uniform distribution $U(0, 1)$, and $(C_L^t, \sigma_L^t)_p$ and $(C_L^g, \sigma_L^g)$ represent the best position in the history of particle $t$ and of the whole population, respectively. In this paper, we set the PSO parameters as $w = 0.9$, $c_1 = c_2 = 2$. Through $n_{max}$ iterations of PSO, we can obtain better parameters with which to generate the new fuzzy rule and improve the performance of the system.
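A minimal sketch of the update (19)-(20); here positions are treated as real-valued arrays clamped to the search bounds, and how the binary rule combination matrix is recovered from this relaxed $[0,1]$ encoding (e.g., by thresholding when a particle is evaluated) is our assumption, since the paper does not spell it out:

```python
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2 = 0.9, 2.0, 2.0                     # PSO constants used in the paper

def pso_step(pos, vel, pbest, gbest, lo, hi):
    # One velocity/position update per Equations (19) and (20).
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)          # keep positions inside bounds
    return pos, vel
```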
In this way, we generate new fuzzy rules one by one and add them to the system until the residual error reaches the predefined error tolerance $\varepsilon$ or the number of rules exceeds the given maximum number of rules $L_{max}$. The adaptive incremental learning algorithm can be summarized as follows.
Adaptive incremental learning algorithm of SCFS-IFRs:
Given a dataset with inputs $X = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N]^T$ and outputs $Y = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_N]^T$, the predefined maximum number of rules $L_{max}$, the error tolerance $\varepsilon$, the population size $T$, the maximum number of PSO iterations $n_{max}$, the number of fuzzy sets $K$ for each feature, the upper bound $b_\sigma$ of the width parameter and the regularization parameter $\lambda$:
  1. Initialize $e = Y$, $L = 0$, $0 < r < 1$;
  2. Set the centers of the Gaussian membership functions $u_k = (k-1)/(K-1)$, $k = 1, 2, \ldots, K$;
  3. While $L < L_{max}$ and $\|e\| > \varepsilon$, do
  4. Set $\mu_L = (1 - r)/(L + 1)$;
  5. Randomly initialize $T$ particles with positions $\{(C_L^t, \sigma_L^t)\}_{t=1}^{T}$ and velocity vectors $\{(V_C^t, V_\sigma^t)\}_{t=1}^{T}$;
  6. For $i = 1$ to $n_{max}$ do
  7. Calculate the fitness values $\{P_L^t\}_{t=1}^{T}$ according to Equation (18);
  8. Update the positions $\{(C_L^t, \sigma_L^t)\}_{t=1}^{T}$ and velocity vectors $\{(V_C^t, V_\sigma^t)\}_{t=1}^{T}$ using Equations (19) and (20);
  9. End for
  10. Set the global best position $(C_L^g, \sigma_L^g)$ as $(C_L, \sigma_L)$;
  11. Calculate $P_L$ according to Equation (18);
  12. If $-P_L \geq (1 - r - \mu_L) \|e\|^2$ (i.e., inequality (16) holds), calculate $\eta_L$ and $a_L$ according to Equations (15) and (17), respectively;
  13. Else randomly take $\tau \in (0, 1 - r)$, renew $r := r + \tau$, and return to Step 4;
  14. End if
  15. Update $e \leftarrow e - \eta_L a_L$ and $L \leftarrow L + 1$;
  16. End while
  17. Return $\{(C_l, \sigma_l, a_l)\}_{l=1}^{L}$.
In this way, the adaptive incremental learning algorithm determines the structure and parameters of SCFS-IFRs. The random parameters $(C_L, \sigma_L)$ are adaptively determined according to the inequality constraint (16), and the appropriate architecture is determined. In the adaptive incremental learning algorithm, while the random parameters $(C_L, \sigma_L)$ are optimized, the number of fuzzy sets $K$ for each feature, the regularization parameter $\lambda$, and the degree $n$ of the Chebyshev polynomial are predefined and do not need to be optimized.
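Putting the pieces together, a deliberately simplified sketch of the adaptive incremental loop, with plain random sampling of $T$ candidates standing in for the full PSO inner loop and reusing the hypothetical helpers from the earlier sketches:

```python
import numpy as np

def scfs_ifr_fit(X, Y, K=5, n=1, lam=1e-2, L_max=25, eps=1e-3,
                 r=0.9, b_sigma=1.0, T=50):
    rng = np.random.default_rng(0)
    N, m = X.shape
    Phi = np.stack([chebyshev_expand(x, n) for x in X])     # (N, n*m+1)
    rules, e = [], Y.astype(float).copy()
    for L in range(1, L_max + 1):
        if np.linalg.norm(e) <= eps:
            break
        mu = (1.0 - r) / (L + 1)
        best = None
        for _ in range(T):                    # candidate premise parameters
            C_l = rng.integers(0, 2, (m, K))
            C_l[C_l.sum(axis=1) == 0, 0] = 1  # >= 1 active set per feature
            sigma = rng.uniform(1e-3, b_sigma)
            tau = rule_activation(fuzzify(X, K, sigma), C_l)
            eta = eta_matrix(tau, Phi)
            a_l, ok = configure_rule(eta, e, lam, r, mu)    # (16) and (17)
            gain = np.linalg.norm(eta @ a_l) ** 2
            if ok and (best is None or gain > best[0]):
                best = (gain, C_l, sigma, eta, a_l)
        if best is None:
            break                             # no admissible rule; stop (or relax r)
        _, C_l, sigma, eta, a_l = best
        rules.append((C_l, sigma, a_l))
        e = e - eta @ a_l                     # residual update, Equation (14)
    return rules
```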
Remark 2.
The approximation capability of SCFS-IFRs can be analyzed as follows. If the random parameters $(C_L, \sigma_L)$ of the new fuzzy rule satisfy the inequality constraint (16) and the consequent parameter $a_L$ is evaluated by Equation (17), we find that $\|e_L\|^2 - \|e_{L-1}\|^2 \leq -(1 - r - \mu_L) \|e_{L-1}\|^2$ according to Equation (18). This means that $\|e_L\|^2 \leq (r + \mu_L) \|e_{L-1}\|^2$. Since $\lim_{L \to \infty} \mu_L = 0$ and $r < 1$, it follows that $\lim_{L \to \infty} \|e_L\|^2 = 0$, which proves the approximation capability of SCFS-IFRs.

4. Performance Evaluation

In this section, the performance of the proposed SCFS-IFRs is investigated. The proposed SCFS-IFRs are compared with representative models, namely RVFLN [32], ELM [33], RVFL-RS [34], HPFNN [35], SCN [19], SCBLS [36], BLS [16], FBLS [7], TSK-FC [37], L2TSK-FC [37], HID-TSK [6], CFBLS [7], FELM [4] and BL-DFIS [38], through numerical experiments on 20 regression datasets and 10 classification datasets.
For a fair comparison, we use the same training strategy for SCFS-IFRs as [6,7], i.e., ten-fold cross validation. We repeat every experiment 30 times and report the average result. The optimal parameter settings are obtained by grid search. The number of fuzzy sets $K$ for each feature is chosen from {3, 5, 7}, the regularization parameter $\lambda$ is chosen from {$10^{-3}$, $10^{-2}$, $10^{-1}$}, and the degree of the Chebyshev polynomial $n$ is chosen from {1, 2, 3, 4}. The predefined maximum number of rules $L_{max}$ is set to 25, the population size $T$ is set to 50 and the maximum number of PSO iterations $n_{max}$ is set to 10.
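This hyperparameter search can be reproduced with a straightforward nested loop; a hedged sketch using scikit-learn's cross-validation utilities (our tooling choice; `scfs_ifr_predict` is a hypothetical prediction helper, not defined in the paper):

```python
from itertools import product
from sklearn.model_selection import KFold
import numpy as np

grid = product([3, 5, 7],                     # K: fuzzy sets per feature
               [1e-3, 1e-2, 1e-1],            # lambda: regularization
               [1, 2, 3, 4])                  # n: Chebyshev degree
best, best_rmse = None, np.inf
for K, lam, n in grid:
    rmses = []
    for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        rules = scfs_ifr_fit(X[tr], Y[tr], K=K, n=n, lam=lam)   # sketch above
        Y_hat = scfs_ifr_predict(rules, X[te], K=K, n=n)        # hypothetical
        rmses.append(np.sqrt(np.mean((Y[te] - Y_hat) ** 2)))
    if np.mean(rmses) < best_rmse:
        best, best_rmse = (K, lam, n), np.mean(rmses)
```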

4.1. Regression

Twenty regression datasets selected from the UCI [39] and KEEL [40] repositories are employed to evaluate the performance of the SCFS-IFRs on regression tasks. The relevant details of the 20 datasets are summarized in Table 3.
Table 4 compares the testing RMSE of the proposed model with eight other algorithms: RVFLN [32], ELM [33], RVFL-RS [34], HPFNN [35], SCN [19], SCBLS [36], BLS [16] and FBLS [7]. The results of RVFLN, ELM, RVFL-RS and HPFNN are borrowed from [35], and the results of SCN, SCBLS, BLS and FBLS are obtained from the source code provided by the cited authors. It should be noted that, to be consistent with the results in [35], only the input features are normalized in this experiment. In Table 4, the bold items represent the smallest testing RMSE. Our SCFS-IFRs show the smallest testing RMSE on 10 of the 20 datasets, while HPFNN shows the smallest testing RMSE on five datasets. It can be seen that SCFS-IFRs have better generalization performance than the other eight algorithms.
In order to assess the statistical differences between SCFS-IFRs and the other eight algorithms on the 20 datasets, the Friedman test and the Holm post-hoc test are applied. The result of the Friedman test is displayed in Table 5. We can see that SCFS-IFRs has the top rank, and the $p$-value is less than the significance level 0.1, which implies that there are significant differences in the ranking of the generalization performance of these algorithms.
Then we compare SCFS-IFRs with the other models using the Holm post-hoc test; the result is shown in Table 6. The null hypothesis $H_0$ in the Holm test is that the rankings of the two comparative methods have no significant difference. The significance level $\alpha$ is set to 0.1. If the adjusted p-value of the Holm test is less than the significance level $\alpha$, we should reject the null hypothesis $H_0$, and there are remarkable differences between the two comparative methods. From the Holm test results in Table 6, we can see that the proposed SCFS-IFRs are significantly better than RVFLN, ELM, RVFL-RS, HPFNN, SCN, BLS, FBLS and SCBLS.
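Both tests are available in standard Python packages; a sketch of the procedure on a matrix `errors` of per-dataset testing RMSEs (rows = datasets, columns = algorithms), with the Holm step-down adjustment written out:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata, norm

# errors: (20 datasets) x (9 algorithms) matrix of testing RMSEs
stat, p = friedmanchisquare(*errors.T)          # omnibus Friedman test
ranks = rankdata(errors, axis=1).mean(axis=0)   # average rank per algorithm

D, k = errors.shape
best = np.argmin(ranks)                         # control method (top rank)
others = [i for i in range(k) if i != best]
se = np.sqrt(k * (k + 1) / (6.0 * D))           # std. error of rank differences
z = np.abs(ranks[others] - ranks[best]) / se
p_raw = 2.0 * norm.sf(z)                        # two-sided p-values

# Holm step-down adjustment over the k-1 pairwise comparisons
order = np.argsort(p_raw)
adj = np.maximum.accumulate(p_raw[order] * (len(p_raw) - np.arange(len(p_raw))))
p_holm = np.minimum(adj, 1.0)                   # adjusted p-values, ascending
```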

4.2. Classification

We select the following 10 datasets to evaluate the performance of the proposed SCFS-IFRs in classification tasks. The relevant details of the 10 classification datasets are summarized in Table 7.
The average classification accuracies of the proposed SCFS-IFRs and the other six fuzzy models, TSK-FC [37], L2TSK-FC [37], HID-TSK [6], CFBLS [7], FELM [4] and BL-DFIS [38], on the 10 classification datasets are reported in Table 8. The results of TSK-FC, L2TSK-FC, HID-TSK and CFBLS are borrowed from [7], and the results of FELM and BL-DFIS are borrowed from [38]. The bold items represent the highest testing classification accuracy. The comparisons in Table 8 show that SCFS-IFRs have the highest average classification accuracy. In terms of per-dataset accuracy, SCFS-IFRs achieve the highest classification accuracy on six of the 10 datasets, while their accuracy on the other four datasets is still comparable.
The incremental learning algorithm is another advantage of SCFS-IFRs; it helps to achieve a compact architecture while maintaining good performance. Table 9 displays the average number of fuzzy rules that each of the seven models needs in order to achieve the accuracy shown in Table 8. The bold items represent the minimum number of rules. From Table 9, it can be observed that the proposed SCFS-IFRs achieve good classification accuracy with few rules.
Finally, similar to Section 4.1, we use the Friedman test and the Holm post-hoc test to assess the statistical differences between SCFS-IFRs and the other six algorithms on the 10 classification datasets. The result of the Friedman test is displayed in Table 10. We can see that SCFS-IFRs has the top rank and the $p$-value is $1.339 \times 10^{-9} \leq 0.10$. This implies that there are significant differences in the ranking of these algorithms.
The result of the Holm post-hoc test is listed in Table 11. The null hypothesis $H_0$ in the Holm test is that the rankings of the two comparative methods have no significant difference. From Table 11, we can see that the proposed SCFS-IFRs are significantly better than TSK-FC, L2TSK-FC, HID-TSK and FELM, but there is no significant difference between SCFS-IFRs and BL-DFIS or CFBLS.
However, SCFS-IFRs have an advantage over BL-DFIS and CFBLS in terms of interpretability. First, it can be seen from Table 9 that the average number of rules of SCFS-IFRs is 4.67, far less than the average number of rules of CFBLS, 11.56. The fewer the rules, the better the interpretability. Second, BL-DFIS uses features mapped by ELM-AE as the premise variables of its fuzzy rules, which is not conducive to the interpretation of the rules.

4.3. The Interpretability of SCFS-IFRs

Although a mathematical measure of interpretability has not been found, a fuzzy inference system with high interpretability should have clear semantic expression and simple fuzzy rules. We discuss the interpretability of SCFS-IFRs through one experimental result on the Monk2 task.
In this experiment, the input universe is divided into four equal parts, which yields five fuzzy subsets with the five linguistic labels introduced in Section 3.1. The five fuzzy subsets have five fixed centers: 0, 0.25, 0.5, 0.75 and 1. Two fuzzy rules generated by the proposed SCFS-IFRs are presented in Table 12. The widths of the fuzzy subsets and the rule combination matrices are randomly assigned under the supervisory mechanism (16) and optimized by the proposed adaptive incremental learning algorithm of SCFS-IFRs. The widths of the two fuzzy rules are 0.8139 and 0.4826. The rule combination matrices of the two fuzzy rules are:
$C_1 = \begin{bmatrix} 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \end{bmatrix}$

and

$C_2 = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \end{bmatrix}$

where $C_l(j, k) = 1$ represents that the $k$th fuzzy set of the $j$th input variable is activated in the $l$th fuzzy rule; otherwise, it is not activated.
The activated fuzzy sets and the polynomial consequent functions are listed in the two parts of Table 12, respectively. Since the highest degree of the used Chebyshev polynomial is set as 1, the polynomial consequent functions are linear functions.
In SCFS-IFRs, the fixed linguistic labels of the antecedents make an intuitive interpretation of the antecedents possible, and the rule combination matrices allow more than one fuzzy set to be activated for each feature in a fuzzy rule, which improves the semantic expression of fuzzy rules. This shortens the length of fuzzy rules while maintaining the performance of the model and avoids the “rule explosion” problem.

5. Conclusions

By incorporating stochastic configuration and ILFRs into a TSK fuzzy system, a novel neuro-fuzzy system named SCFS-IFRs is established for classification and regression data modeling. SCFS-IFRs use a stochastic configuration based incremental learning approach to determine the system architecture and the random parameters. Interpretable fuzzy rules are generated incrementally, and the premise parameters are randomly selected based on the stochastic configuration strategy. In addition, the PSO algorithm, an intelligence search technique, is used in the stochastic configuration process to obtain better random parameters and further improve the approximation accuracy.
Hence, the proposed SCFS-IFRs improve the interpretability and approximation accuracy of the neuro-fuzzy system at the same time. We tested the model’s performance on both regression and classification tasks. In the regression experiments, the results on 20 regression datasets selected from the UCI [39] and KEEL [40] repositories show that the proposed SCFS-IFRs perform best on 10 of the 20 datasets while statistically significantly outperforming the other eight state-of-the-art algorithms. In the classification experiments, SCFS-IFRs achieve higher classification accuracy and better interpretability with fewer rules compared with the other six fuzzy classifiers. In the future, combining SCFS-IFRs with a broad learning system could be considered in order to improve approximation performance and decrease training time. In addition, it would be an interesting problem to analyze the approximation bound of SCFS-IFRs theoretically.

Author Contributions

Conceptualization, W.Z. and H.L.; methodology, W.Z.; software, W.Z.; validation, W.Z., H.L. and M.B.; formal analysis, W.Z.; investigation, W.Z.; resources, M.B.; data curation, W.Z.; writing—original draft preparation, W.Z.; writing—review and editing, H.L.; visualization, W.Z.; supervision, W.Z.; project administration, W.Z.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation (NNSF) of China under Grant (61773088, 12071056).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jang, J.-S.R. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  2. Kar, S.; Das, S.; Ghosh, P.K. Applications of neuro fuzzy systems: A brief review and future outline. Appl. Soft Comput. 2015, 15, 243–259. [Google Scholar] [CrossRef]
  3. Pedrycz, W.; Chen, S.-M. (Eds.) Interpretable Artificial Intelligence: A Perspective of Granular Computing; Springer: Basingstoke, UK, 2021. [Google Scholar]
  4. Wong, S.Y.; Yap, K.S.; Yap, H.J.; Tan, S.C.; Chang, S.W. On Equivalence of FIS and ELM for Interpretable Rule-Based Knowledge Representation. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 1417–1430. [Google Scholar] [CrossRef] [PubMed]
  5. Zhou, T.; Ishibuchi, H.; Wang, S. Stacked Blockwise Combination of Interpretable TSK Fuzzy Classifiers by Negative Correlation Learning. IEEE Trans. Fuzzy Syst. 2018, 26, 3327–3341. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Ishibuchi, H.; Wang, S. Deep Takagi–Sugeno–Kang Fuzzy Classifier with Shared Linguistic Fuzzy Rules. IEEE Trans. Fuzzy Syst. 2018, 26, 1535–1549. [Google Scholar] [CrossRef]
  7. Feng, S.; Chen, C.L.P.; Xu, L.; Liu, Z. On the Accuracy–Complexity Tradeoff of Fuzzy Broad Learning System. IEEE Trans. Fuzzy Syst. 2021, 29, 2963–2974. [Google Scholar] [CrossRef]
  8. Siminski, K. Prototype based granular neuro-fuzzy system for regression task. Fuzzy Sets Syst. 2022, 449, 5. [Google Scholar] [CrossRef]
  9. Feng, S.; Chen, C.L.P.; Zhang, C.-Y. A Fuzzy Deep Model Based on Fuzzy Restricted Boltzmann Machines for High-dimensional Data Classification. IEEE Trans. Fuzzy Syst. 2020, 28, 1344–1355. [Google Scholar] [CrossRef]
  10. Sadjadi, E.N.; Herrero, J.G.; Molina, J.M.; Moghaddam, Z.H. On Approximation Properties of Smooth Fuzzy Models. Int. J. Fuzzy Syst. 2018, 20, 2657–2667. [Google Scholar] [CrossRef]
  11. Mahmoud, T.A.; Abdo, M.I.; Elsheikh, E.A.; Elshenawy, L.M. Direct adaptive control for nonlinear systems using a TSK fuzzy echo state network based on fractional-order learning algorithm. J. Frankl. Inst. 2021, 358, 9034–9060. [Google Scholar] [CrossRef]
  12. Rajab, S.; Sharma, V. A review on the applications of neuro-fuzzy systems in business. Artif. Intell. Rev. 2018, 49, 481–510. [Google Scholar] [CrossRef]
  13. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O. Interval type-2 fuzzy weight adjustment for backpropagation neural networks with application in time series prediction. Inf. Sci. 2014, 260, 1–14. [Google Scholar] [CrossRef]
  14. Wang, D.-G.; Song, W.-Y.; Li, H.-X. Approximation properties of ELM-fuzzy systems for smooth functions and their derivatives. Neurocomputing 2015, 149, 265–274. [Google Scholar] [CrossRef]
  15. Kv, S.; Pillai, G. Regularized extreme learning adaptive neuro-fuzzy algorithm for regression and classification. Knowl.-Based Syst. 2017, 127, 100–113. [Google Scholar] [CrossRef]
  16. Chen, C.L.P.; Liu, Z. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 10–24. [Google Scholar] [CrossRef]
  17. Feng, S.; Chen, C.L.P. Fuzzy Broad Learning System: A Novel Neuro-Fuzzy Model for Regression and Classification. IEEE Trans. Cybern. 2020, 50, 414–424. [Google Scholar] [CrossRef]
  18. Li, M.; Wang, D. Insights into randomized algorithms for neural networks: Practical issues and common pitfalls. Inf. Sci. 2017, 382–383, 170–178. [Google Scholar] [CrossRef]
  19. Wang, D.; Li, M. Stochastic Configuration Networks: Fundamentals and Algorithms. IEEE Trans. Cybern. 2017, 47, 3466–3479. [Google Scholar] [CrossRef] [Green Version]
  20. Wang, D.; Li, M. Deep Stochastic Configuration Networks with Universal Approximation Property. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  21. Pratama, M.; Wang, D. Deep stacked stochastic configuration networks for lifelong learning of non-stationary data streams. Inf. Sci. 2018, 495, 150–174. [Google Scholar] [CrossRef] [Green Version]
  22. Wang, Q.; Dai, W.; Ma, X.; Shang, Z. Driving amount based stochastic configuration network for industrial process modeling. Neurocomputing 2020, 394, 61–69. [Google Scholar] [CrossRef]
  23. Sugeno, M.; Kang, G. Structure identification of fuzzy model. Fuzzy Sets Syst. 1988, 28, 15–33. [Google Scholar] [CrossRef]
  24. Ojha, V.; Abraham, A.; Snášel, V. Heuristic design of fuzzy inference systems: A review of three decades of research. Eng. Appl. Artif. Intell. 2019, 85, 845–864. [Google Scholar] [CrossRef] [Green Version]
  25. Pratama, M.; Lu, J.; Anavatti, S.; Lughofer, E.; Lim, C.-P. An incremental meta-cognitive-based scaffolding fuzzy neural network. Neurocomputing 2016, 171, 89–105. [Google Scholar] [CrossRef]
  26. Wang, S.; Chung, F.; HongBin, S.; Dewen, H. Cascaded centralized TSK fuzzy system: Universal approximator and high interpretation. Appl. Soft Comput. 2005, 5, 131–145. [Google Scholar] [CrossRef]
  27. Zhou, T.; Chung, F.-L.; Wang, S. Deep TSK Fuzzy Classifier with Stacked Generalization and Triplely Concise Interpretability Guarantee for Large Data. IEEE Trans. Fuzzy Syst. 2017, 25, 1207–1221. [Google Scholar] [CrossRef]
  28. Buckley, J. Sugeno type controllers are universal controllers. Fuzzy Sets Syst. 1993, 53, 299–303. [Google Scholar] [CrossRef]
  29. Zou, W.; Xia, Y.; Dai, L. Fuzzy Broad Learning System Based on Accelerating Amount. IEEE Trans. Fuzzy Syst. 2022, 30, 4017–4024. [Google Scholar] [CrossRef]
  30. Yousefi, J. A modified NEFCLASS classifier with enhanced accuracy-interpretability trade-off for datasets with skewed feature values. Fuzzy Sets Syst. 2021, 413, 99–113. [Google Scholar] [CrossRef]
  31. Han, H.-G.; Sun, C.; Wu, X.; Yang, H.; Qiao, J. Training Fuzzy Neural Network via Multi-Objective Optimization for Non-linear Systems Identification. IEEE Trans. Fuzzy Syst. 2022, 30, 3574–3588. [Google Scholar] [CrossRef]
  32. Pao, Y.-H.; Takefuji, Y. Functional-link net computing: Theory, system architecture, and functionalities. Computer 1992, 25, 76–79. [Google Scholar] [CrossRef]
  33. Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 513–529. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Vuković, N.; Petrović, M.; Miljković, Z. A comprehensive experimental evaluation of orthogonal polynomial expanded random vector functional link neural networks for regression. Appl. Soft Comput. 2018, 70, 1083–1096. [Google Scholar] [CrossRef]
  35. Zhang, C.; Oh, S.-K.; Fu, Z. Hierarchical polynomial-based fuzzy neural networks driven with the aid of hybrid network architecture and ranking-based neuron selection strategies. Appl. Soft Comput. 2021, 113, 107865. [Google Scholar] [CrossRef]
  36. Zhou, W.; Wang, D.; Li, H.; Bao, M. Stochastic configuration broad learning system and its approximation capability analysis. Int. J. Mach. Learn. Cybern. 2022, 13, 797–810. [Google Scholar] [CrossRef]
  37. Deng, Z.; Choi, K.-S.; Chung, F.-L.; Wang, S. Scalable TSK Fuzzy Modeling for Very Large Datasets Using Minimal-Enclosing-Ball Approximation. IEEE Trans. Fuzzy Syst. 2011, 19, 210–226. [Google Scholar] [CrossRef]
  38. Bai, K.; Zhu, X.; Wen, S.; Zhang, R.; Zhang, W. Broad Learning Based Dynamic Fuzzy Inference System with Adaptive Structure and Interpretable Fuzzy Rules. IEEE Trans. Fuzzy Syst. 2022, 30, 3270–3283. [Google Scholar] [CrossRef]
  39. Dua, D.; Graff, C. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences. 2017. Available online: http://archive.ics.uci.edu/ml (accessed on 26 February 2022).
  40. Triguero, I.; González, S.; Moyano, J.M.; García, S.; Fernández, A. KEEL 3.0: An Open Source Software for Multi-Stage Analysis in Data Mining. Int. J. Comput. Intell. Syst. 2017, 10, 1238–1249. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Illustration of the SCFS-IFRs.
Table 1. Table of abbreviations.

Abbreviation   Full Name
ILFRs          interpretable linguistic fuzzy rules
PSO            particle swarm optimization
ELM            extreme learning machine
FELM           fuzzy extreme learning machine
BLS            broad learning system
FBLS           fuzzy broad learning system
CFBLS          compact fuzzy broad learning system
TSK            Takagi-Sugeno-Kang
TSK-FC         TSK fuzzy classifier
BP             back propagation
R-ELANFIS      regularized extreme learning adaptive neuro-fuzzy inference system
SCN            stochastic configuration network
RVFLN          random vector functional link neural network
HPFNN          hierarchical polynomial-based fuzzy neural networks
BL-DFIS        broad learning based dynamic fuzzy inference system
SCFS-IFRs      stochastic configuration based fuzzy inference system with interpretable fuzzy rules
Table 2. Table of variables.

Variable                 Interpretation
$u_k$                    center of the $k$th fuzzy set, predefined constant
$\sigma_L$               width of the fuzzy sets in the $L$th fuzzy rule, probabilistic variable
$C_L$                    rule combination matrix of the $L$th rule, probabilistic variable
$a_L$                    weight parameters of the $L$th rule
$e_L$                    residual error of the system with $L$ rules
$\varepsilon$            predefined tolerance level
$r$                      real variable with adaptively chosen values between 0 and 1
$\mu_L$                  real sequence with limit zero and adaptively chosen values
$\lambda$                predefined regularization coefficient
$I$                      identity matrix
$L_{max}$                predefined maximum number of rules
$K$                      predefined number of fuzzy sets for each feature
$b_\sigma$               predefined upper bound of the width parameter $\sigma_L$
$(V_C^t, V_\sigma^t)$    velocity vectors of the PSO population
$w$                      predefined inertia weight factor of PSO
$c_1$, $c_2$             predefined positive acceleration constants of PSO
$r_1$, $r_2$             random factors of PSO, probabilistic variables
$n_{max}$                predefined maximum number of iterations of PSO
$T$                      predefined population size of PSO
Table 3. Details of 20 regression datasets from UCI and KEEL.

Dataset      No. of Features  No. of Data    Dataset      No. of Features  No. of Data
Airfoil      5                1503           Baseball     16               337
Concrete     8                1030           Izmir        9                1461
Auto MPG     7                392            Ankara       9                1609
Yacht        6                308            Electrical   4                1056
QSAR         6                908            Computer     21               8192
Treasury     15               1049           Laser        4                993
Abalone      8                4177           Prices       13               506
Categorical  7                4052           Cool         8                768
Auto MPG6    5                392            Wine         11               1599
Energy       6                365            Heat         8                786
Table 4. Comparisons of testing RMSE on 20 datasets.

Dataset      RVFLN    ELM      RVFL-RS  HPFNN    SCN      SCBLS    BLS      FBLS     SCFS-IFRs
Airfoil      3.0120   2.9575   3.0120   3.6479   5.2468   4.9016   5.1255   5.2006   2.7497
Concrete     6.5454   6.8064   6.7564   6.7854   7.2041   6.7368   7.1660   7.0735   5.5644
Auto MPG     3.0211   2.8833   2.9047   2.5723   3.1761   3.1633   3.1780   3.1271   2.6370
Yacht        3.4028   3.3144   4.7810   0.8783   9.3073   4.3551   6.9055   5.2876   0.7443
QSAR         0.9410   0.9469   0.9274   0.9166   0.8986   0.8763   0.8697   0.8798   0.9160
Treasury     0.2086   0.2020   0.1968   0.2305   0.2250   0.2149   0.2328   0.2272   0.1815
Abalone      2.1506   2.1263   2.1260   2.0966   2.2502   2.2228   2.1830   2.1872   2.0994
Categorical  0.3260   0.3865   0.2012   0.0780   0.1130   0.7924   1.0201   0.9321   0.0804
Auto MPG6    3.0642   2.9866   3.9217   2.6323   2.0850   2.1359   2.1748   2.0788   2.6878
Energy       0.3898   0.3980   0.3905   0.3883   0.3881   0.3791   0.3811   0.3714   0.3930
Baseball     723.70   833.72   717.55   615.14   711.36   678.40   666.68   642.84   693.55
Izmir        1.2032   1.3781   1.2186   1.1028   1.1402   1.1389   1.1426   1.1511   1.0810
Ankara       1.4364   1.7054   1.4122   1.2754   1.2453   1.2825   1.3104   1.2811   1.1982
Electrical   102.40   107.16   83.215   115.26   110.75   99.984   113.691  111.974  80.311
Computer     4.9992   5.5109   4.9039   2.9284   4.2178   3.4578   2.9228   4.6363   2.4164
Laser        6.5630   6.2108   4.9542   7.0169   6.3307   6.3058   6.2107   6.1545   5.8172
Prices       4.7295   4.1906   4.0259   3.6176   6.1395   4.7395   5.7109   5.7305   3.0717
Cool         2.0263   2.1077   1.6723   1.7730   3.0224   2.9430   2.8301   2.0390   1.3136
Wine         0.6440   0.6483   0.6448   0.6290   0.6600   0.6552   0.6571   0.6616   0.6294
Heat         1.1406   1.3795   0.5085   0.7662   2.3875   2.3985   2.4324   1.2139   0.5349
Table 5. Result of Friedman tests (significance level of 0.10).

Algorithm   SCFS-IFRs  HPFNN  RVFL-RS  SCBLS  RVFLN  FBLS   ELM    BLS    SCN
Rank        2.300      3.750  4.575    5.050  5.475  5.500  5.950  6.100  6.300
p-value     0.00001
Table 6. Result of Holm post-hoc tests (significance level of 0.10).

Comparison              Statistic  Adjusted p-Value  Result
SCFS-IFRs vs. SCN       4.6188     0.00003           H0 is rejected
SCFS-IFRs vs. BLS       4.3878     0.00008           H0 is rejected
SCFS-IFRs vs. ELM       4.2146     0.00015           H0 is rejected
SCFS-IFRs vs. FBLS      3.6950     0.0011            H0 is rejected
SCFS-IFRs vs. RVFLN     3.6661     0.0011            H0 is rejected
SCFS-IFRs vs. SCBLS     3.1754     0.00449           H0 is rejected
SCFS-IFRs vs. RVFL-RS   2.6269     0.01723           H0 is rejected
SCFS-IFRs vs. HPFNN     1.6743     0.09407           H0 is rejected
Table 7. Details of 10 classification datasets from UCI and KEEL.

Dataset      No. of Data  No. of Features  No. of Classes
Diabetes     768          8                2
Australia    690          14               2
Balance      625          4                3
Magic04      19,020       10               2
Monk2        432          6                2
Pageblocks   5472         10               5
Sonar        208          60               2
Specfheart   267          44               2
Breast       699          9                2
Wine         178          13               3
Table 8. Comparisons of classification accuracies on 10 datasets.

Dataset      TSK-FC  L2TSK-FC  HID-TSK  CFBLS   FELM    BL-DFIS  SCFS-IFRs
Diabetes     0.6687  0.6432    0.7182   0.7764  0.7512  0.7724   0.7738
Australia    0.7298  0.7768    0.8337   0.8750  0.8477  0.8657   0.8594
Balance      0.7079  0.7114    0.8416   0.9066  0.8720  0.9141   0.9152
Magic04      0.6487  0.7424    0.8160   0.8533  0.8260  0.8629   0.8631
Monk2        0.6332  0.6062    0.6599   0.9353  0.6598  0.7482   0.9720
Pageblocks   0.9025  0.8938    0.9067   0.9550  0.9355  0.9592   0.9603
Sonar        0.4930  0.6620    0.6585   0.7591  0.6895  0.8223   0.8243
Specfheart   0.7516  0.7450    0.7520   0.8335  0.7921  0.8130   0.7622
Breast       0.8846  0.9469    0.9512   0.9750  0.9669  0.9674   0.9691
Wine         0.9037  0.9183    0.9076   0.9740  0.9676  0.9878   0.9885
Average      0.7324  0.7646    0.8045   0.8843  0.8308  0.8713   0.8888
Table 9. Comparisons of average number of fuzzy rules on 10 datasets.

Dataset      TSK-FC  L2TSK-FC  HID-TSK  CFBLS  FELM  BL-DFIS  SCFS-IFRs
Diabetes     3.1     2.0       9.0      8.4    18    1.1      3.5
Australia    7.0     10.0      9.0      15.7   19    2.3      1.7
Balance      4.3     3.9       9.3      9.2    20    2.2      1.2
Magic04      13.4    10.8      36.3     18.9   20    14.3     10.4
Monk2        8.5     7.6       7.0      5.9    18    8.9      3.8
Pageblocks   7.2     7.0       16.2     18.9   17    12.6     12.0
Sonar        8.5     9.3       24.0     8.1    20    2.2      4.6
Specfheart   7.1     7.5       9.3      9.2    20    1.0      1.3
Breast       14.6    13.9      12.0     7.6    20    3.3      5.75
Wine         10.0    5.7       42.0     13.7   20    1.2      2.4
Average      8.37    7.77      17.41    11.56  19.2  4.91     4.67
Table 10. Result of Friedman tests (significance level of 0.10).

Algorithm   SCFS-IFRs  CFBLS  BL-DFIS  FELM   HID-TSK  L2TSK-FC  TSK-FC
Rank        2.300      3.750  4.575    5.050  5.475    5.500     5.950
p-value     1.339 × 10^-9
Table 11. Result of Holm post-hoc tests (significance level of 0.10).

Comparison                Statistic  Adjusted p-Value  Result
SCFS-IFRs vs. TSK-FC      5.07198    0                 H0 is rejected
SCFS-IFRs vs. L2TSK-FC    4.65794    0.00002           H0 is rejected
SCFS-IFRs vs. HID-TSK     3.51933    0.00173           H0 is rejected
SCFS-IFRs vs. FELM        2.38073    0.05184           H0 is rejected
SCFS-IFRs vs. BL-DFIS     0.62106    1                 H0 is accepted
SCFS-IFRs vs. CFBLS       0.41404    1                 H0 is accepted
Table 12. Rule description for the Monk2 dataset.

$R_1$:
IF $x_1$ is medium or high and $x_2$ is high and $x_3$ is very low and $x_4$ is low or medium or high and $x_5$ is low and $x_6$ is medium or very high,
THEN
$y_1 = 0.761 + 0.011 x_1 - 0.608 x_2 - 0.038 x_3 + 0.051 x_4 - 0.026 x_5 + 0.016 x_6$
$y_2 = 0.219 + 0.020 x_1 - 0.892 x_2 - 0.102 x_3 + 0.125 x_4 - 0.015 x_5 - 0.043 x_6$

$R_2$:
IF $x_1$ is very low or low and $x_2$ is very low and $x_3$ is high or very high and $x_4$ is very low or medium and $x_5$ is very high and $x_6$ is very low or low,
THEN
$y_1 = 0.279 + 0.060 x_1 + 0.082 x_2 + 0.187 x_3 + 0.174 x_4 - 0.547 x_5 - 0.134 x_6$
$y_2 = 0.024 - 0.067 x_1 - 0.342 x_2 - 0.164 x_3 - 0.079 x_4 + 0.774 x_5 + 0.260 x_6$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
