Article

An Approach to Generating Fuzzy Rules for a Fuzzy Controller Based on the Decision Tree Interpretation

by
Anton A. Romanov
*,
Aleksey A. Filippov
* and
Nadezhda G. Yarushkina
Department of Information Systems, Ulyanovsk State Technical University, 32 Severny Venetz Street, 432027 Ulyanovsk, Russia
*
Authors to whom correspondence should be addressed.
Axioms 2025, 14(3), 196; https://doi.org/10.3390/axioms14030196
Submission received: 16 January 2025 / Revised: 2 March 2025 / Accepted: 4 March 2025 / Published: 6 March 2025
(This article belongs to the Special Issue Recent Developments in Fuzzy Control Systems and Their Applications)

Abstract:
This article describes solutions to control problems using fuzzy logic, which facilitates the development of decision support systems across various fields. However, addressing this task through the manual creation of rules in specific fields necessitates significant expert knowledge. Machine learning methods can identify hidden patterns. A key novelty of this approach is the algorithm for generating fuzzy rules for a fuzzy controller, derived from interpreting a decision tree. The proposed algorithm allows the quality of the control actions in organizational and technical systems to be enhanced. This article presents an example of generating a set of fuzzy rules through the analysis of a decision tree model. The proposed algorithm allows for the creation of a set of fuzzy rules for constructing fuzzy rule-based systems (FRBSs). Additionally, it autogenerates membership functions and linguistic term labels for all of the input and output parameters. The machine learning model and the FRBS obtained were assessed using the coefficient of determination (R²). The experimental results demonstrated that the constructed FRBS performed on average 2% worse than the original decision tree model. While the quality of the FRBS could be enhanced by optimizing the membership functions, this topic falls outside the scope of the current article.

1. Introduction

Controlling complex technical systems involves analyzing large volumes of data. General control theory provides requirements for the data and signals associated with the control object. In addition, it is essential to consider the data features and characteristics and the limitations of the external environment. In our previous studies, we demonstrated that the choice of data analysis method and the quality of the analysis results are dependent on the context of the control problem [1,2].
The quality of the control task is affected by several factors, including the following:
  • The complexity of the control object;
  • The complexity of the control task;
  • The volume of data available for analysis;
  • The time constraints;
  • The urgency of the decisions.
These factors require the selection of an appropriate class of mathematical models for control systems, guided by an analysis of the properties of the control object.
A complex technical system requires a reproducible and deterministic approach to control. It is necessary to develop a mathematical model of the object for control purposes when the behavior of an object is well understood [3]. However, sometimes, constructing a mathematical model is not possible. In such cases, rule-based systems or machine learning methods can address the control problem.
Machine learning methods introduce challenges related to the interpretability of the results. Understanding how the system makes its decisions is crucial for assessing the correctness of the results in fields where the cost of errors is significant [4,5].
Fuzzy inference systems (FISs) are used to address the control problem while balancing interpretability and quality [4,5,6]. An FIS is based on the principles of fuzzy logic and fuzzy inference algorithms.
The study presented in [6] provides a comprehensive classification of fuzzy inference systems (FISs) and explores the primary challenges associated with their development.
The following categories of FISs are identified:
  • Fuzzy rule-based systems (FRBSs): FRBSs represent a traditional implementation of an FIS [6]. The structure of an FRBS is illustrated in Figure 1. A collection of numerical (crisp) inputs, denoted as X, is used as the FRBS’s input. This is followed by a fuzzification process, which calculates the membership degree for each input parameter from X in relation to the fuzzy set. Subsequently, fuzzy inference is organized based on a set of fuzzy rules, which determine the knowledge base of the FRBS, describing the behavioral characteristics of an object and its external environment. The defuzzification process then converts the fuzzy inference results into a set of crisp output parameters, represented as Y.
  • Genetic/evolutionary fuzzy systems (GFSs): A GFS operates on principles similar to those of an FRBS but uses genetic and evolutionary algorithms to train and fine-tune the parameters [4,7,8]. These evolutionary techniques allow for the optimal set of fuzzy rules to be generated and the membership function parameters to be optimized.
  • Hierarchical fuzzy systems (HFSs): An HFS also relies on the principles of an FRBS but addresses the challenge of maintaining a large number of logical rules [9,10,11,12,13]. This issue is resolved by constructing a set of simple fuzzy systems that are organized into a hierarchical structure.
  • Neuro fuzzy systems (NFSs): An NFS builds on the FRBS scheme by integrating an FRBS with artificial neural networks. Although the FRBS can interpret the results, neural networks can learn to solve specific tasks [14,15,16]. Neural networks in NFSs are an alternative to the traditional set of fuzzy rules in FRBSs.
The fuzzy rule r_i^F can be expressed as follows [17]:
r_i^F: if (x_1 is A_1) and (x_2 is A_2) and … and (x_n is A_n) → (y_1 is C_1),    (1)
where x_j ∈ X denotes the j-th input variable; A_j ∈ A represents the fuzzy value associated with x_j; and C_k indicates the fuzzy value of the rule consequent, which corresponds to the k-th output variable y_k ∈ Y.
The fuzzy inference process can be described in the following steps [18,19]:
  • Fuzzification of the input values: Each value of the input variable x_j linked to a fuzzy set A is characterized by a membership degree μ_A(x_j) ∈ [0, 1].
  • Aggregation: The truth degree δ_i^A of the antecedent of the i-th rule is calculated based on the aggregation of the membership degrees of the input variables X. The specific aggregation method is determined by the algorithm in use, such as Mamdani, Sugeno, or Tsukamoto.
  • Activation: The truth degree δ_i^C of the consequent of the i-th rule is calculated at the activation stage. If the consequent comprises a single fuzzy statement, its truth degree equals the algebraic product of the weight coefficient and the truth degree of the antecedent of the rule. Where the consequent comprises multiple statements, the truth degree of each statement is derived from the algebraic product of the weight coefficient and the truth degree δ_i^A of the antecedent of the rule. If the weight coefficient is unspecified, then the default value is one.
  • Accumulation: A membership function F is constructed for all output variables during the accumulation phase. This function is constructed using the max union of the membership degrees of all fuzzy sets related to the k-th output variable y_k ∈ Y.
  • Defuzzification: A crisp value for the output variable y_k is derived from the membership function F at the defuzzification stage. Commonly, the center of gravity method is employed for this process.
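The five inference steps above can be sketched in miniature. The following is an illustrative Mamdani-style example with one input, one output, two hypothetical rules, and triangular membership functions chosen arbitrarily; none of these fuzzy sets or rules come from this article.

```python
# Illustrative only: tri() and the rule base below are hypothetical stand-ins.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# One input ("temp") and one output ("power"), each with two fuzzy sets.
TEMP = {"low": (0.0, 20.0, 40.0), "high": (30.0, 60.0, 90.0)}
POWER = {"small": (0.0, 25.0, 50.0), "large": (50.0, 75.0, 100.0)}
RULES = [("low", "large"), ("high", "small")]  # if temp is A then power is C

def infer(temp_value, step=1.0):
    # Fuzzification + aggregation: truth degree of each rule's antecedent.
    deltas = [tri(temp_value, *TEMP[a]) for a, _ in RULES]
    # Activation (min-clip of each consequent) + accumulation (max union).
    xs = [i * step for i in range(int(100.0 / step) + 1)]
    agg = [max(min(d, tri(x, *POWER[c])) for d, (_, c) in zip(deltas, RULES))
           for x in xs]
    # Defuzzification: center of gravity of the accumulated function.
    den = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / den if den else None

print(infer(25.0))
```

With min used for activation and max for accumulation, infer(25.0) fires only the first rule (degree 0.75), clips the "large" set at that height, and returns the center of gravity of the clipped triangle, 75.0.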
The primary challenges associated with developing FISs are described in [6]:
  • The representation of uncertainty: This challenge involves selecting the most suitable membership functions and their parameters, considering the specific characteristics of the data and the problem at hand [16,20,21].
  • The evolving nature of algorithms: An FIS must be capable of adapting to variations in the input data to consider changes in the external environment and the object itself. One potential solution to this problem is the application of fuzzy clustering algorithms [9].
  • Interpretability/explainability: The rules should contain the fewest possible conditions, and the total number of rules should be minimized to achieve a satisfactory quality level [4,5].
  • Model optimization: This encompasses the previous challenges. Currently, an FIS’s hyperparameters are manually configured, which requires a deep understanding of the problem, its domain, and the mechanisms involved in the tuning of FISs [14,15,16,20,21].
In this paper, we focus on addressing these model optimization and interpretability/explainability challenges through the proposed fuzzy rule generation algorithm based on decision tree interpretation. This algorithm facilitates a reduction in both the number of rules and the number of conditions within those rules. The use of decision trees to improve interpretability/explainability and to generate rule sets is extensively covered in the literature. Related studies include leveraging decision trees to interpret fuzzy clustering results [22], the automatic generation of membership functions and the tuning of their parameters [23], the creation of fuzzy rule sets [24,25], and the automated construction of FISs through data analysis [26]. Generally, these studies have been based on statistical techniques for data analysis and on decision tree training over fuzzified data.
Given the focus of this research, we are particularly interested in addressing the generation of fuzzy rule sets through decision tree interpretation, specifically exploring methods for simplifying the rules and decreasing their number to enhance the interpretability and explainability of FRBSs while also reducing the time required for their creation.
This research introduces a novel perspective on the generation of fuzzy rules, focusing on a reduction in the number of rules and statements. The focus of this paper is primarily on decreasing the time required to generate fuzzy rules and enhancing the interpretability and explainability of the results.
This study explores the potential to generate fuzzy rules derived from interpreting a decision tree model, even without expert knowledge in a specific domain. The proposed algorithm applies across various domains, allowing for the generation of a set of fuzzy rules for building an FRBS that addresses a wide range of tasks. Additionally, we will examine how the quality of the dataset affects the quality of the resulting FRBS.
The main aim of this study is to validate the following hypotheses:
 Hypothesis 1.
The proposed algorithm can generate a set of fuzzy rules for an FRBS based on the interpretation of the decision tree.
 Hypothesis 2.
The quality of the FRBS using the optimal parameters should not substantially decline when compared to that of the original decision tree model.
 Hypothesis 3.
The quality of the resulting FRBS is highly dependent on the quality of the dataset on which the decision tree is trained.
This paper is organized as follows: Section 2 provides an analysis of related works within the scope of the current study. Section 3 presents a complete definition of a method for generating fuzzy rules for a fuzzy controller obtained from a decision tree. It also describes a detailed algorithm for generating fuzzy rules, along with a rule clustering algorithm that groups rules according to the parameters of their antecedent statements. Section 4 provides the experimental results, including examples of two FRBSs whose rule bases were created using the proposed method, and demonstrates the performance of these FRBSs.

2. Related Works

Google Scholar is used as the database in which to search for related works. Its search engine is of high quality, enabling users to filter articles for specific periods, view works by authors, and find related articles.
Table 1 presents the parameters used for the related works search, including the topic, keywords, and period.

2.1. Fuzzy Systems in Control

At present, machine learning techniques, including deep neural networks, can address a wide range of control tasks with performance levels that are comparable to or exceed human capabilities. Nevertheless, for many challenges, fuzzy systems continue to be the preferred choice.
Fuzzy systems are based on fuzzy modeling, addressing approximation and classification tasks. One of the key advantages of fuzzy systems is their ability to represent the behavior of the modeled object through fuzzy rules, which can be interpreted by both humans and the systems themselves.
These systems provide the necessary level of transparency (often referred to as “white box” models) in scenarios where the cost of errors is significant. As a result, fuzzy systems offer a high degree of explainability: it is always possible to trace how a result was derived and to adjust the system’s behavior if an error is identified. Fuzzy systems are widely utilized in control problems across various application areas [27], such as the following:
  • Agriculture [28,29];
  • Autonomous driving [30,31,32];
  • Biomedical diagnostics [33,34,35];
  • Energy system control [36,37,38,39];
  • Fault diagnostics and detection [40,41,42];
  • Financial time series prediction [43,44,45];
  • Industrial control [46,47,48];
  • Robotics [49,50,51];
  • Weather forecasting [52,53,54], etc.
In the development of fuzzy systems, both classical architectures and those that combine fuzzy logic principles with machine learning techniques are employed, such as neural networks and reinforcement learning. This integration leverages the strengths of each approach while ensuring a satisfactory level of explainability for the system’s results.

2.2. Evolving Fuzzy Systems

To address the challenges associated with constructing fuzzy systems, methods for developing evolving fuzzy systems are employed [55,56]. The learning mechanism of the evolving fuzzy systems relies on analyzing data streams and adjusting the system in response to data changes. In this context, both the system’s structure and its parameters may evolve.
The architecture of a standard evolving fuzzy system comprises two main components:
  • Updating the system parameters;
  • Evolving the structure.
The structure-evolving component is dedicated to the creation, modification, and deletion of fuzzy rules based on changes in the data being processed. Initially, the knowledge base of such a system is empty and is populated during the learning phase by analyzing incoming data. Generally, the generation of fuzzy rules is based on the following methods:
  • The density/potential criterion [57,58];
  • The distance criterion [59,60];
  • The error criterion [55,61];
  • The firing strength criterion [62];
  • The statistical contribution criterion [63].
The basis of the rule generation methods described above relies on clustering the observations (data) and subsequently converting the centers of these clusters into rules. In addition, different machine learning models or statistical analysis techniques can be employed to evaluate the importance of these rules.
The analysis of related works indicates that data can serve as a basis for generating fuzzy rules within a fuzzy system. In addition, machine learning techniques can aid in the creation of these fuzzy rules. Subsequently, fuzzy rules are developed based on the hidden patterns identified in the data.

2.3. Fuzzy Logic Types

Various types of fuzzy logic are used to develop fuzzy systems [64]:
  • Type-1 fuzzy logic;
  • Type-2 fuzzy logic;
  • Interval type-2 fuzzy logic, etc.
Type-2 fuzzy logic extends traditional type-1 fuzzy logic to manage uncertainty better [65]. Type-2 fuzzy sets allow for the integration of uncertainty regarding the membership function within fuzzy set theory, directly addressing the limitations associated with type-1 fuzzy sets. In type-1 fuzzy systems, the membership function remains constant, whereas type-2 fuzzy systems feature a variable membership function. A fuzzy set plays a crucial role in transforming input values into fuzzy variables.
The advantage of using type-2 fuzzy logic lies in its enhanced flexibility in representing and managing uncertainty. Numerous studies have demonstrated that type-2 fuzzy logic can enhance the performance of fuzzy systems [66,67,68]. Nonetheless, there are several drawbacks associated with type-2 fuzzy logic that complicate its application when compared to type-1 fuzzy logic, including increased computational expenses and greater complexity in configuring the parameters of a fuzzy system. While this study does not focus on other types of fuzzy logic beyond type-1, it presents an opportunity to illustrate the principles of the proposed method for generating fuzzy rules based on the decision tree interpretation.

2.4. Fuzzy Rule Generation

Several studies are closely related to our work. The paper [23] describes an approach to generating fuzzy rules through the analysis of a decision tree built using the ID3 algorithm on fuzzified data. However, this work does not detail the rule generation algorithm and does not address the simplification of the rules or a reduction in their number. The paper [24] discusses an approach to developing an FIS for time series forecasting, where a set of rules is derived from the analysis of fuzzified time series data; it similarly neglects rule reduction and simplification. The work in [25] presents a method for constructing an FIS for predicting diabetes diagnoses, generating fuzzy rules from a decision tree built with the C4.5 algorithm, again on fuzzified data. Like the previous studies, it lacks a detailed description of the rule generation algorithm and does not explore rule simplification or reduction. In [26], a methodology is described for the automatic generation of an FIS for ischemic and arrhythmic beat classification, where fuzzy rules are constructed from a decision tree trained on crisp data using the C4.5 algorithm. Although the number of rules is reduced on the basis of an author-proposed metric, this paper does not explicitly address the simplification of rule statements.
In this study, we introduce a novel approach to generating fuzzy rules to construct an FRBS through the interpretation of decision trees. To obtain the initial set of rules, we extract them from the decision tree, similar to the methodologies in [23,24,25,26]. Then, we proceed with steps to simplify the statements of the rules and reduce their number. Following this, we perform fuzzification for the generated rules, also applying rule reduction at this stage. In addition, we provide a series of experiments to evaluate the effectiveness of our proposed approach.

3. Materials and Methods

In the previous section, we provided an analysis of articles that address the challenge of generating fuzzy rules for FRBSs. Traditionally, the development of a fuzzy rule set for a control system requires extensive expert knowledge from an analyst.
In this paper, we introduce an approach to generating fuzzy rules for an FRBS by analyzing the results of a supervised learning algorithm that relies on decision trees.

3.1. Schema of the Proposed Approach

We used the dataset described in [69] to obtain fuzzy rules extracted from the trained decision tree model. In [70], the authors used this dataset to construct and evaluate a GFS.
This dataset was selected for several reasons:
  • The dataset represents stable physical processes that remain unaffected by external variables;
  • The dataset is initially high-quality, eliminating the need for additional time spent on data preparation;
  • The dataset is compact and contains a limited number of features, which streamlines the development and debugging of the fuzzy rule generation algorithm.
The dataset comprises multiple tables, with each detailing the impact of varying concentrations of aluminum oxide and titanium dioxide dispersed in a 50:50 volumetric proportion of distilled water and ethylene glycol on the density and viscosity parameters at different temperatures. The datasets are presented in Appendix A and Appendix B. Each dataset is divided into training and test sets.
Figure 2 illustrates the schema for extracting the fuzzy rules to develop an FRBS based on interpreting the outcomes of the decision tree.
As depicted in Figure 2, the input data are the training set for each parameter: density and viscosity.
The CART algorithm [71] was selected to train a binary decision tree. This algorithm offers several advantages:
  • There is no requirement to compute or choose various parameters to run the algorithm.
  • There is no need to pre-select the features for analysis. The features are automatically chosen during model training based on the Gini index value.
  • The algorithm effectively works with outliers, creating separate branches in the tree for the data that contain them.
  • It provides a rapid training speed for the model.
In the context of generating fuzzy rules, the CART algorithm significantly simplifies constructing a decision tree for various tasks. This simplification occurs even in the absence of a comprehensive understanding of the specifics of the domain and the modeling object.
A significant drawback of the CART algorithm is its reduced performance on datasets with numerous dependencies among the features. This issue is not addressed in this article and will be explored in future work.
The decision tree model is created after the CART algorithm is executed. The effectiveness of the decision tree model is evaluated using the R² metric [71]. This model is used as the input data for the proposed approach to generating fuzzy rules, resulting in the creation of a set of fuzzy rules. Then, FRBSs are automatically formed based on the obtained fuzzy rules. Membership functions and linguistic term labels are automatically generated for all input and output parameters.
The effectiveness of the FRBS is evaluated using the R² metric on the test set and is compared to that of the decision tree model to validate Hypothesis 1.
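For reference, the coefficient of determination used to compare the two models can be computed directly. The following is a plain-Python sketch of the standard R² formula, not code from the article:

```python
# R² (coefficient of determination): 1 - SS_res / SS_tot.
def r2_score(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r2_score([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```

A value of 1.0 indicates a perfect fit; the article's reported ~2% quality drop corresponds to a comparable difference in this metric between the FRBS and the decision tree.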

3.2. Description of Fuzzy System

A fuzzy system is automatically created for the density parameter, as illustrated in the MATLAB Fuzzy Logic Toolbox (R2024b) notation [72] in Figure 3.
The input parameters X [69] are as follows:
  • Temperature (temp): 20–70 °C;
  • Al2O3 concentration (al): 0, 0.05, and 0.3 vol %;
  • TiO2 concentration (ti): 0, 0.05, and 0.3 vol %.
The output parameter Y [69] is density (density).
The FRBS is based on the Mamdani algorithm [73]. Triangular membership functions are used for the fuzzification of the input parameters and the defuzzification of the output parameter.
Figure 4, Figure 5, Figure 6, and Figure 7 illustrate the automatically generated fuzzy sets for the parameters al, ti, temp, and density, respectively.
The Automf algorithm from the scikit-fuzzy library [74] was used to create all the fuzzy sets described above. The number of linguistic terms was a hyperparameter for the proposed approach to generating fuzzy rules, and we determined its value through experimentation.
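The automf-style generation can be illustrated with a simple sketch: n triangular membership functions whose peaks are evenly spaced over the parameter's range. This mimics the spirit of scikit-fuzzy's automf; the function name and exact layout below are our own, not the library's implementation.

```python
# Illustrative automf-style generator: n evenly spaced triangular fuzzy sets.
def auto_triangles(lo, hi, n):
    """Return n triangles (a, b, c) whose peaks b are evenly spaced on [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [(lo + (i - 1) * step,   # left foot
             lo + i * step,         # peak
             lo + (i + 1) * step)   # right foot
            for i in range(n)]

# Example: three terms over the temp range 20-70 °C from the dataset.
for abc in auto_triangles(20.0, 70.0, 3):
    print(abc)
```

The outer triangles deliberately extend past the parameter range so that the boundary values receive full membership in the extreme terms, which is the usual convention for evenly spaced partitions.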

3.3. Description of the Approach to Generating Fuzzy Rules

In this subsection, we describe the proposed method for generating fuzzy rules, illustrated by the example of constructing an FRBS to evaluate the output parameter density using the input parameters temp, al, and ti. The dataset is represented in Table A1.
The proposed approach can be represented as a function F_rules:
F_rules: r^C → r^F,
where r^C represents a collection of rules obtained from a decision tree (crisp rules), and r^F is a set of fuzzy rules produced by applying the proposed approach.
Formally, a crisp rule can be defined as
r_i^C: if (x_1 ∘_1 v_1) and (x_2 ∘_2 v_2) and … and (x_n ∘_n v_n) → y_1,
where x_j ∈ X denotes the j-th input parameter; v_j indicates the specific value of x_j; and ∘_j ∈ {≤, >, =} represents the relationship between the input parameter x_j and its corresponding value v_j.
The fuzzy rule is defined in Equation (1).
The process of generating a set of fuzzy rules r^F from the crisp rules r^C using the function F_rules consists of the following steps:
Step 1.
Extracting raw rules from the decision tree. The initial stage involves obtaining a set of raw rules r^raw from the decision tree. The set r^raw contains crisp rules.
Step 2.
Normalizing the raw rules. A set of normalized rules, called r^norm, is derived from the raw rules r^raw. This step requires eliminating any overlapping statements across all input parameters X to form a normalized rule r_i^norm from the raw rule r_i^raw. The set r^norm contains crisp rules.
Step 3.
Reducing similar rules. Some normalized rules in r^norm may have identical antecedents but different consequents. These rules are classified as similar and should be reduced into a single rule, forming a set of reduced rules denoted as r^rnorm. The set r^rnorm contains crisp rules.
Step 4.
Simplifying rules. This step involves transitioning from multiple statements in the antecedents of the rules associated with a specific input parameter x_j that use the relations > and ≤ to a single statement with the relation =, which is essential for the construction of fuzzy rules. A set of simplified rules r^simp is created from the set of reduced rules r^rnorm. The set r^simp consists of crisp rules.
Step 5.
Fuzzifying rules. It is important to define fuzzy sets for the input parameters X and the output parameters Y to fuzzify the set of rules r^simp. This step generates a collection of raw fuzzy rules r^fraw based on the simplified rules r^simp. The set r^fraw contains fuzzy rules.
Step 6.
Reducing similar fuzzy rules. After the previous step, there may be fuzzy rules with similar antecedents but different consequents. In this step, similar fuzzy rules are reduced from the set r^fraw to arrive at the final collection of fuzzy rules r^fuzz.
Figure 8 illustrates the steps involved in the proposed approach to generating fuzzy rules based on the interpretation of the decision tree.
Now, let us examine the steps of the proposed approach in greater detail.

3.4. Steps of the Approach to Generating Fuzzy Rules

Step 1.
Extracting raw rules from the decision tree
The set of raw rules r^raw is extracted from the decision tree in the initial stage of the proposed approach. The extraction of a set of raw rules is performed using the algorithm defined in [75].
A raw rule extracted from a decision tree may include an excessive number of overlapping statements. For example,
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (ti ≤ 0.025) and (al ≤ 0.025) and (temp > 55.0) and (temp > 62.5) → 1.033
The complete set of raw rules r^raw can be found in Appendix C.
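This extraction step can be sketched as a walk over the trained tree: each root-to-leaf path becomes one raw rule, with ≤ on left branches and > on right branches. The nested-dict tree layout, field names, and threshold values below are illustrative stand-ins for the actual CART model, not the article's data structures.

```python
# Illustrative tree walk: emit one raw rule per root-to-leaf path.
def extract_raw_rules(node, path=()):
    if "value" in node:                       # leaf: the path becomes a rule
        return [(list(path), node["value"])]
    rules = []
    feat, thr = node["feature"], node["threshold"]
    rules += extract_raw_rules(node["left"],  path + ((feat, "<=", thr),))
    rules += extract_raw_rules(node["right"], path + ((feat, ">", thr),))
    return rules

# Hand-made stand-in for a trained binary decision tree.
tree = {"feature": "al", "threshold": 0.175,
        "left":  {"feature": "temp", "threshold": 32.5,
                  "left":  {"value": 1.051},
                  "right": {"value": 1.033}},
        "right": {"value": 1.090}}

for antecedent, consequent in extract_raw_rules(tree):
    print(antecedent, "->", consequent)
```

Deeper trees naturally produce the overlapping statements discussed above, since the same feature can be tested repeatedly along one path.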
Step 2.
Normalization of the raw rules
In the second step of the proposed approach, a set of normalized rules r^norm is created from the set of raw rules r^raw. It is essential to remove any overlapping statements across all of the input parameters X to obtain a normalized rule r_i^norm from the raw rule r_i^raw. The normalization process is described in Algorithm 1.
Algorithm 1 generates the set of normalized rules r^norm. The complete set of normalized rules is available in Appendix D. Below is an example of a normalized rule:
Raw rule: if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (ti ≤ 0.025) and (al ≤ 0.025) and (temp > 55.0) and (temp > 62.5) → 1.033
Normalized rule: if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
Step 3.
Reducing similar rules
Some normalized rules may have equivalent antecedents but different consequents. We refer to these as similar rules, which should be reduced to a single rule. Algorithm 2 formally describes the process of reducing similar rules.
Algorithm 1 Normalization algorithm
Input:
  • Set of raw rules r^raw = {r_1^raw, …, r_N^raw} extracted from a decision tree and consisting of N rules.
  • Set of input parameters X.
Output:
  • Set of normalized rules r^norm.
for all r_i^raw ∈ r^raw do
    Create a new normalized rule r_i^norm.
    for all x_j ∈ X do
        Obtain the statements s for the input parameter x_j from the antecedent of the rule r_i^raw.
        Select from s the statement s_min whose relation ∘_k is > and whose value v_k is minimal:
            s_min = {s_k ∈ s | ∘_k is >, v_k → min}.
        Select from s the statement s_max whose relation ∘_k is ≤ and whose value v_k is maximal:
            s_max = {s_k ∈ s | ∘_k is ≤, v_k → max}.
        Add the statements s_min and s_max to the antecedent of the new rule r_i^norm.
    end for
    Add the consequent of the i-th raw rule r_i^raw to the new rule r_i^norm.
    Add the new rule r_i^norm to the set of normalized rules r^norm.
end for
return r^norm
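Algorithm 1 can be sketched in Python, with rules represented as (antecedent, consequent) pairs and statements as (parameter, relation, value) triples; this representation is introduced here for illustration and is not the article's implementation.

```python
# Illustrative sketch of Algorithm 1 (normalization of raw rules).
def normalize(raw_rule):
    """Collapse overlapping statements per parameter: keep the '>' statement
    with the minimal threshold and the '<=' statement with the maximal one,
    as in the article's example."""
    antecedent, consequent = raw_rule
    norm = []
    for param in dict.fromkeys(p for p, _, _ in antecedent):  # preserve order
        gts = [v for p, op, v in antecedent if p == param and op == ">"]
        les = [v for p, op, v in antecedent if p == param and op == "<="]
        if gts:
            norm.append((param, ">", min(gts)))
        if les:
            norm.append((param, "<=", max(les)))
    return (norm, consequent)

# The raw rule from the example above.
raw = ([("al", "<=", 0.175), ("ti", "<=", 0.175), ("temp", ">", 32.5),
        ("ti", "<=", 0.025), ("al", "<=", 0.025), ("temp", ">", 55.0),
        ("temp", ">", 62.5)], 1.033)
print(normalize(raw))
```

On the example rule this yields the three-statement normalized antecedent shown above: (al ≤ 0.175), (ti ≤ 0.175), (temp > 32.5).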
Algorithm 2 Reducing algorithm
Input:
  • Set of normalized rules r^norm = {r_1^norm, …, r_N^norm} consisting of N rules.
Output:
  • Set of normalized rules r^rnorm after removing similar rules (reducing them).
Obtain the set of similar rules r^sim.
Add the rules that are not present in the set r^sim to the set r^rnorm.
for all r_i^sim ∈ r^sim do
    Select the rules r whose antecedents match that of the i-th rule r_i^sim from the set of similar rules r^sim.
    Create a new rule r_i^rnorm by aggregating the rules r and applying the averaging function to the values of the output parameter y_i in the consequent:
        y_i = avg(y_1^r, …, y_m^r).
    Add the new rule r_i^rnorm to the set r^rnorm.
end for
return r^rnorm
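Algorithm 2 can be sketched compactly by grouping rules on their antecedents; as before, the (antecedent, consequent) tuple representation is our own illustration, not the article's code.

```python
# Illustrative sketch of Algorithm 2 (reducing similar rules).
def reduce_similar(rules):
    """Merge rules whose antecedents are identical; the merged consequent is
    the average of the originals (Algorithm 2's avg aggregation)."""
    merged = {}
    for antecedent, consequent in rules:
        merged.setdefault(tuple(antecedent), []).append(consequent)
    return [(list(a), sum(cs) / len(cs)) for a, cs in merged.items()]

# The pair of similar rules from the example above (consequents 1.045, 1.051).
ant = [("al", "<=", 0.175), ("ti", "<=", 0.175),
       ("temp", ">", 32.5), ("temp", "<=", 55.0)]
rules = [(ant, 1.045), (ant, 1.051)]
print(reduce_similar(rules))
```

The two similar rules collapse into one with the averaged consequent 1.048, matching the worked example.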
The following example illustrates the execution of Algorithm 2:
Before reducing similar rules:
    if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.045
    if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.051
After reducing similar rules:
    if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.048
The set of normalized rules r^norm comprises 34 rules (|r^norm| = 34), while the resulting reduced set r^rnorm comprises 24 rules (|r^rnorm| = 24). The complete set r^rnorm is detailed in Appendix E.
Step 4.
Simplifying rules
Next, we need to make the transition from multiple statements in the antecedents of the rules belonging to a specific input parameter x_j with the relations > and ≤ to a single statement with the relation =. This is necessary for constructing fuzzy rules.
The rule simplification algorithm is based on the analysis of the statements for each input parameter x_j. It is essential to determine the left (n) and right (m) boundaries of the interval for each input parameter x_j, given by the relations > and ≤, respectively, in each i-th rule r_i^rnorm. Next, a representative value for each interval must be identified, considering the chosen membership function; for example, for a triangular membership function, we seek the center of the base of the triangle. We apply the avg function if the interval includes both the left boundary n and the right boundary m. We use the min of the parameter's values in the training set if the interval only has a right boundary m; otherwise, we apply the max. Figure 9 illustrates this schema.
The process of simplifying rules can be structured as described in Algorithm 3.
Let us examine an example of simplified rules:
Before: if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
After: if (al = 0.0) and (ti = 0.0) and (temp = 70) → 1.033
Before: if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 62.5) → 1.038
After: if (al = 0.0) and (ti = 0.0) and (temp = 47.5) → 1.038
A complete set of simplified rules r s i m p can be found in Appendix F.
Algorithm 3 Simplifying rule algorithm
Input:
  • Reduced set of normalized rules r r n o r m = { r 1 r n o r m , , r n r n o r m } consisting of N rules.
  • Set of input parameters X.
  • Training set data used for decision tree learning.
Output:
  • Set of simplified rules r s i m p .
for all  r i r n o r m r r n o r m   do
    Create a new rule r i s i m p .
    for all  x j X  do
        Obtain the statements s from the antecedent of the rule r_i^rnorm that describe the left n and right m boundaries of the input parameter x_j:
n = s_k if ∃ s_k ∈ s whose relation is >, otherwise n = ∅;
m = s_k if ∃ s_k ∈ s whose relation is ≤, otherwise m = ∅.
        Calculate a new value for a single statement:
value = avg(value_n, value_m) if n ≠ ∅ and m ≠ ∅; min(data[x_j]) if n = ∅ and m ≠ ∅; max(data[x_j]) otherwise.
        Create a new statement s_j (x_j = value) and add it to the antecedent of the new rule r_i^simp.
    end for
    Add the new rule r i s i m p to the set of simplified rules r s i m p .
end for
return  r s i m p
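A minimal sketch of the boundary analysis in Algorithm 3, assuming statements are (relation, threshold) pairs for a single input parameter (a hypothetical layout):

```python
def simplify_parameter(statements, data_values):
    """Collapse the >/<= statements about one input parameter into one value.

    Following Algorithm 3: the average of the two boundaries if both exist,
    the training-data minimum if only a right boundary (<=) exists, and the
    training-data maximum otherwise (only a left boundary >).
    """
    left = [t for rel, t in statements if rel == ">"]
    right = [t for rel, t in statements if rel == "<="]
    n = max(left) if left else None    # tightest left boundary
    m = min(right) if right else None  # tightest right boundary
    if n is not None and m is not None:
        return (n + m) / 2
    if m is not None:
        return min(data_values)
    return max(data_values)

temp_data = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70]
only_left = simplify_parameter([(">", 32.5)], temp_data)            # 70
both = simplify_parameter([(">", 32.5), ("<=", 62.5)], temp_data)   # 47.5
```

The two calls reproduce the simplified values from the examples above: temp > 32.5 alone yields the data maximum 70, while the pair temp > 32.5, temp ≤ 62.5 yields the interval midpoint 47.5.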
Step 5.
Fuzzifying rules
It is necessary to define fuzzy sets for the input parameters X and the output parameters Y for the fuzzification of the rule set r^simp. The automatic method described in [74] is applied to generate fuzzy sets for the crisp parameters, implementing the function F_fuzz:
F_fuzz: x_i ↦ A_i (n terms), x_i ∈ X, A_i = ∪ μ_A(x_i)/x_i,
F_fuzz: y_j ↦ C_j (n terms), y_j ∈ Y, C_j = ∪ μ_C(y_j)/y_j.
This process results in the formation of the corresponding fuzzy sets with a specified number of linguistic terms n for each input and output parameter.
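One plausible implementation of the automatic generation of triangular fuzzy sets (a uniform partition of the parameter range; the exact scheme of [74] may differ) is:

```python
import numpy as np

def make_triangular_terms(lo, hi, n):
    """Uniformly partition [lo, hi] into n overlapping triangular fuzzy sets.

    Each term is an (a, b, c) triple: feet a and c, peak b; every triangle
    reaches zero at the peaks of its neighbours.
    """
    peaks = np.linspace(lo, hi, n)
    step = (hi - lo) / (n - 1)
    return [(p - step, p, p + step) for p in peaks]

def tri_mu(x, abc):
    """Membership degree of a crisp value x in the triangle (a, b, c)."""
    a, b, c = abc
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

terms = make_triangular_terms(20, 70, 3)  # e.g. low / average / high for temp
```

For temp ∈ [20, 70] and n = 3, this partition reproduces the degrees used in the worked example of Section 4: μ_low(25) = 0.8 and μ_average(25) = 0.2.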
Subsequently, Algorithm 4 generates a collection of raw fuzzy rules r f r a w based on the simplified rule set r s i m p .
Let us examine an example of fuzzy rules:
Before: if (al = 0.0) and (ti = 0.0) and (temp = 70) → 1.033
After: if (al is low) and (ti is low) and (temp is high) → (density is lower)
Step 6.
Reducing similar fuzzy rules
Rules with similar antecedents but differing consequents may be produced after executing Algorithm 4. Algorithm 5 is used to reduce these similar fuzzy rules.
Algorithm 4 Fuzzifying rule algorithm
Input:
  • Set of simplified rules r s i m p = { r 1 s i m p , , r n s i m p } consisting of N rules.
  • Hyperparameters H to specify the number of linguistic terms n for the fuzzification of the input X and output Y parameters.
Output:
  • Set of raw fuzzy rules r f r a w .
for all  r i s i m p r s i m p   do
    Create a new rule r i f r a w .
    Obtain the statements s from the antecedent of the i-th rule r_i^simp.
    for all  s j s  do
        Fuzzify the value of the j-th statement and select the fuzzy value with the maximal membership degree:
A_j = F_fuzz(s_j, H), a_max = argmax_{a ∈ A_j} μ_a(s_j)
        Create a new statement s j f u z z ( x j is a m a x ) and add it to the antecedent of the new rule r i f r a w .
    end for
    Fuzzify the value of the consequent y_i and select the fuzzy value with the maximal membership degree:
C_i = F_fuzz(y_i, H), c_max = argmax_{c ∈ C_i} μ_c(y_i)
    Add a statement s i f u z z ( y i is c m a x ) as a consequent of the new rule r i f r a w .
    Add the new rule r i f r a w to the set of raw fuzzy rules r f r a w .
end for
return  r f r a w
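The core of Algorithm 4 — replacing a crisp value with the linguistic term of maximal membership — can be sketched as follows (the triangular term dictionary is a hypothetical example):

```python
def tri_mu(x, abc):
    """Membership degree of a crisp value x in the triangle (a, b, c)."""
    a, b, c = abc
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value, terms):
    """Return the linguistic label whose fuzzy set gives the value the
    maximal membership degree (the argmax step of Algorithm 4)."""
    return max(terms, key=lambda label: tri_mu(value, terms[label]))

# Hypothetical triangular terms for temp over [20, 70] with n = 3.
temp_terms = {"low": (-5, 20, 45), "average": (20, 45, 70), "high": (45, 70, 95)}
label = fuzzify(70, temp_terms)   # "high", as in the rule example above
```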
Algorithm 5 Reducing similar fuzzy rule algorithm
Input:
  • Set of raw fuzzy rules r f r a w = { r 1 f r a w , , r n f r a w } consisting of N rules.
Output:
  • Reduced set of fuzzy rules r f u z z after removing similar rules.
Obtain the set of similar rules r s i m .
Add the rules that are not present in the set r s i m to the set r f u z z .
for all  r i s i m r s i m   do
    Obtain the rules r whose antecedent matches that of the i-th rule r i s i m from the set of similar rules r s i m .
    Select the rule r_min from the set of rules r for which the sum of the membership degrees of the statements in the rule antecedent is minimal:
r_min = argmin_{r_i ∈ r} Σ_{j=1}^{n} μ_A(x_{ij})
    Add the rule r m i n to the set r f u z z .
end for
return  r f u z z
The resulting reduced set of fuzzy rules r f u z z is detailed in Appendix G. Initially, there are 24 rules in the set before fuzzification ( | r s i m p | = 24 ), which are reduced to 15 rules ( | r f u z z | = 15 ) after fuzzification and the removal of similar rules.
Once the set of fuzzy rules is established, fuzzy inference can be executed to obtain the values of the output parameters Y based on the input parameters X.

3.5. Rule Clustering

Rule clustering allows the grouping of rules according to the parameters found in the antecedent statements of these rules. These groups simplify expert evaluation of the rules and the tuning of the hyperparameters to the proposed fuzzy rule generation method. The rule clustering algorithm presented here applies to both crisp and fuzzy rules.
For example, groups can be identified as rows 1 to 9, 10 to 15, etc. in the dataset referenced in Table A1. A specific set of input parameters must be defined to cluster the rules. The antecedent statements of the rules are chosen on the basis of these parameters. Clustering of the dataset in Table A1 can be performed using the parameters a l and t i , while the parameter t e m p can be ignored since its value is consistently repeated in all data groups.
This process results in the formation of a unique set of statements s:
s = {(al ≤ 0.175), (al > 0.025), (al > 0.175), (ti ≤ 0.175), (ti > 0.025), (ti > 0.175)}.
The rules need to be vectorized for clustering. The set s is used as a binary mask for vectorization. For example, the rule r 1 n o r m r n o r m results in the vector v 1 n o r m :
r_1^norm: if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
v_1^norm = (1, 0, 0, 1, 0, 0).
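Vectorization against the binary mask s can be sketched as follows (the string encoding of statements is an illustrative assumption):

```python
def vectorize_rule(antecedent, statement_mask):
    """Encode a rule antecedent as a binary vector over the statement set s."""
    return [1 if stmt in antecedent else 0 for stmt in statement_mask]

# The unique statement set s (temp statements are excluded, as in the text).
s = ["al<=0.175", "al>0.025", "al>0.175", "ti<=0.175", "ti>0.025", "ti>0.175"]
rule_antecedent = ["al<=0.175", "ti<=0.175", "temp>32.5"]
vector = vectorize_rule(rule_antecedent, s)   # [1, 0, 0, 1, 0, 0]
```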
Once vectorization is complete, the next step is to automatically determine the number of clusters. The minimum number of clusters is set at k_min = 2, while the maximum can be provided by the user or calculated as k_max = |r^norm| + 1. The automatic selection of the number of clusters relies on the silhouette score s [76]:
s = (b − a) / max(a, b),
where a is the mean intra-cluster distance, and b is the distance between a sample and the nearest cluster that the sample is not a part of. The best value is 1, and the worst value is −1. Values near 0 indicate overlapping clusters. Negative values indicate that a sample has been assigned to the wrong cluster, as a different cluster is more similar.
The KMeans algorithm is used for clustering. The KMeans algorithm was selected due to its rapid performance. To address the challenge of determining the optimal number of clusters, the silhouette score was utilized. This metric was preferred due to the simplicity of its implementation and its ability to programmatically determine the optimal number of clusters.
The proposed algorithm performs n iterations for each k_i ∈ [k_min, k_max]. The silhouette score s_i is calculated for each iteration k_i (Figure 10), and the smallest number of clusters k_i with the highest value of s_i is selected. In this case, the optimal s_i value was achieved during iteration i = 4, resulting in a division into five clusters.
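The k-selection loop can be sketched with scikit-learn as follows (the sample rule vectors are synthetic, and the tie-breaking detail is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(vectors, k_min=2, k_max=None):
    """Select the number of clusters by the silhouette score.

    Iterates k from k_min to k_max and keeps the smallest k achieving
    the highest score (strict '>' resolves ties toward fewer clusters).
    """
    X = np.asarray(vectors, dtype=float)
    if k_max is None:
        k_max = len(X) - 1          # silhouette needs 2 <= k <= n - 1
    best_k, best_s = None, -2.0
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_s:
            best_k, best_s = k, score
    return best_k

# Two clearly separated groups of rule vectors: the silhouette picks k = 2.
vectors = [[1, 0, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0],
           [0, 0, 1, 0, 0, 1], [0, 1, 1, 0, 0, 1]]
best_k = choose_k(vectors)
```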
The result of the rule clustering is as follows:
Cluster 1:
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.062
Cluster 3:
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) → 1.056
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) → 1.091
The complete results of the rule clustering can be found in Appendix H.

4. Experiments

We developed an application to validate our hypotheses. The primary parameters of the environment for this application are as follows:
  • Programming language: Python.
  • Python interpreter version: 3.12.
  • Libraries:
    • Machine learning library (including the CART decision trees and KMeans clustering): scikit-learn 1.5.2;
    • Data manipulation libraries: numpy 2.1.0 and pandas 2.2.2;
    • Fuzzy inference library: scikit-fuzzy 0.5.0;
    • Plotting library: matplotlib 3.9.2;
    • Additional dependency for the scikit-fuzzy library: networkx 3.4.2.

4.1. Hypothesis 1 Validation

To evaluate Hypothesis 1, we developed three FRBSs. We used the dataset described in [69] to obtain the fuzzy rules extracted from the decision tree model.
The dataset comprises multiple tables, with each detailing the impact of varying concentrations of aluminum oxide and titanium dioxide dispersed in a 50:50 volumetric proportion of distilled water and ethylene glycol on the density and viscosity parameters at different temperatures. The datasets are presented in Appendix A and Appendix B. Each dataset is split into training and test sets.
Three FRBSs were developed for the parameters of density, viscosity, and temperature, respectively. All of the systems are illustrated in Figure 11 using the MATLAB Fuzzy Toolbox notation.
Figure 4, Figure 5, Figure 6, Figure 7, Figure 12, Figure 13, and Figure 14 illustrate the automatically generated fuzzy sets for the parameters a l , t i , t e m p (as the input parameters for the density FRBS with n = 3 ), d e n s i t y , t e m p (with n = 5 as the input parameter for the viscosity FRBS and the output parameter for the temperature FRBS), v i s c o s i t y (with n = 5 as the output parameter for the viscosity FRBS), and v i s c o s i t y (with n = 3 as the input parameter for the temperature FRBS), respectively.
Decision tree models were constructed for the output parameter d e n s i t y using the training set from the dataset in Table A1, and the output parameters v i s c o s i t y and t e m p were modeled using the dataset from Table A2. The input parameters for the density and viscosity FRBSs included a l , t i , and t e m p , whereas the input parameters for the temperature FRBS consisted of a l , t i , and v i s c o s i t y .
The following R 2 metric values were calculated for the resulting decision tree models using the test set from the datasets in Table A1 and Table A2:
  • R d e n s i t y 2 = 0.99 ;
  • R v i s c o s i t y 2 = 0.83 ;
  • R t e m p 2 = 0.76 .
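The evaluation step — fitting a CART regressor and scoring it with R² on a held-out set — can be sketched as follows; the data here are a synthetic stand-in for Table A1, not the published measurements:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def synthetic_density(X):
    # Illustrative smooth dependence of density on (al, ti, temp);
    # the coefficients are invented, not fitted to the real dataset.
    return 1.05 + 0.5 * X[:, 0] + 0.55 * X[:, 1] - 0.0006 * X[:, 2]

# Features sampled over the same ranges as Table A1: al, ti in [0, 0.3],
# temp in [20, 70].
X_train = rng.uniform([0, 0, 20], [0.3, 0.3, 70], size=(200, 3))
X_test = rng.uniform([0, 0, 20], [0.3, 0.3, 70], size=(50, 3))

tree = DecisionTreeRegressor(random_state=0).fit(X_train, synthetic_density(X_train))
r2 = r2_score(synthetic_density(X_test), tree.predict(X_test))  # high on this smooth target
```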
The following numbers of raw rules were extracted from the decision trees:
  • |r_density^raw| = 34;
  • |r_viscosity^raw| = 35;
  • |r_temp^raw| = 25.
The following numbers of rules remained after applying normalization and removing similar rules (reduction):
  • |r_density^norm| = 24;
  • |r_viscosity^norm| = 26;
  • |r_temp^norm| = 17.
Subsequently, the proposed algorithm generated the following numbers of fuzzy rules:
  • |r_density^fuzz| = 15;
  • |r_viscosity^fuzz| = 19;
  • |r_temp^fuzz| = 11.
Fuzzy inference implements the calculation of the crisp output parameters Y from the crisp input parameters X. Fuzzy rules are used in the inference process to represent expert knowledge as the functional relationship F : X Y .
For example, for the density FRBS, the steps of fuzzy inference are as follows for the input parameters a l = 0 , t i = 0 , and t e m p = 25 :
  • Fuzzification:
    • μ l o w ( a l ) = 1.0 , μ a v e r a g e ( a l ) = 0.0 , μ h i g h ( a l ) = 0.0 ;
    • μ l o w ( t i ) = 1.0 , μ a v e r a g e ( t i ) = 0.0 , μ h i g h ( t i ) = 0.0 ;
    • μ l o w ( t e m p ) = 0.8 , μ a v e r a g e ( t e m p ) = 0.2 , μ h i g h ( t e m p ) = 0.0 .
  • Aggregation and activation:
    • For rule
      if ( a l is l o w ) and ( t i is l o w ) and ( t e m p is a v e r a g e ) ( d e n s i t y is l o w e r )
      δ 1 A = min { 1.0 , 1.0 , 0.2 } = 0.2
      δ 1 C = δ 1 A = 0.2 ;
    • For rule
      if ( a l is l o w ) and ( t i is l o w ) and ( t e m p is l o w ) ( d e n s i t y is l o w )
      δ 2 A = min { 1.0 , 1.0 , 0.8 } = 0.8
      δ 2 C = δ 2 A = 0.8 ;
    • For rule
      if ( a l is h i g h ) and ( t e m p is a v e r a g e ) ( d e n s i t y is h i g h )
      δ 3 A = min { 0.0 , 0.0 , 0.2 } = 0.0
      δ 3 C = δ 3 A = 0.0 , etc.
  • Accumulation. Figure 15 represents the accumulation result.
  • Defuzzification. d e n s i t y = 1.076 , and d e n s i t y Y .
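The Mamdani steps above (min activation, clipping, max accumulation, centroid defuzzification) can be sketched with numpy alone; the output universe and the density terms below are illustrative assumptions, so the sketch is not expected to reproduce the exact value 1.076:

```python
import numpy as np

def tri(x, a, b, c):
    """Vectorized triangular membership function over the universe x."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical universe and terms for the output parameter density.
y = np.linspace(1.02, 1.23, 501)
terms = {"lower": tri(y, 1.00, 1.03, 1.08),
         "low":   tri(y, 1.03, 1.08, 1.13)}

# Activation degrees from the worked example: 0.2 for 'lower', 0.8 for 'low'.
clipped = [np.minimum(0.2, terms["lower"]),   # Mamdani: clip each consequent
           np.minimum(0.8, terms["low"])]
aggregated = np.max(clipped, axis=0)          # accumulate by maximum

# Centroid defuzzification yields the crisp density value.
density = float(np.sum(y * aggregated) / np.sum(aggregated))
```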
Table 2 presents the test set used to evaluate the approximation ability of the constructed FRBSs. Each row lists the values of the input variables, t e m p , a l , t i , and v i s c o s i t y , along with the corresponding output parameter value in the Real column. The Inferred column displays the fuzzy inference results for each set of input parameters (row). The RMSE column shows the root mean square deviation between the values in the Real and Inferred columns.
The R 2 values for the FRBS were calculated using the test set from the datasets in Table A1 and Table A2:
  • R ˜ d e n s i t y 2 = 0.97 ;
  • R ˜ v i s c o s i t y 2 = 0.81 ;
  • R ˜ t e m p 2 = 0.75 .
We then calculated the difference in quality between the decision tree models and the FRBSs using the R 2 metric:
  • Δ d e n s i t y = R d e n s i t y 2 R ˜ d e n s i t y 2 = 0.014 ;
  • Δ v i s c o s i t y = R v i s c o s i t y 2 R ˜ v i s c o s i t y 2 = 0.025 ;
  • Δ t e m p = R t e m p 2 R ˜ t e m p 2 = 0.014 .
The average difference in the R 2 metric is about 2%. Hypothesis 1 is proven.

4.2. Hypothesis 2 Validation

In this experiment, the slight difference in performance between the decision tree model and the FRBS can be attributed to the optimal selection of the parameters for the FRBS, including the membership function parameters, the number of terms, and the fuzzy rules.
To validate Hypothesis 2, we modified the number of linguistic terms to three for all fuzzy sets in all FRBSs.
Next, the R 2 values for the modified FRBS were calculated using the test sets:
  • R ¯ d e n s i t y 2 = 0.87 ;
  • R ¯ v i s c o s i t y 2 = 0.58 ;
  • R ¯ t e m p 2 = 0.55 .
We subsequently determined the quality difference between the original and the modified FRBSs:
  • Δ d e n s i t y = R ˜ d e n s i t y 2 R ¯ d e n s i t y 2 = 0.10 ;
  • Δ v i s c o s i t y = R ˜ v i s c o s i t y 2 R ¯ v i s c o s i t y 2 = 0.23 ;
  • Δ t e m p = R ˜ t e m p 2 R ¯ t e m p 2 = 0.20 .
We also determined the quality difference between the decision tree models and the modified FRBSs:
  • Δ d e n s i t y = R d e n s i t y 2 R ¯ d e n s i t y 2 = 0.12 ;
  • Δ v i s c o s i t y = R v i s c o s i t y 2 R ¯ v i s c o s i t y 2 = 0.25 ;
  • Δ t e m p = R t e m p 2 R ¯ t e m p 2 = 0.21 .
The average difference in the R 2 metric between the original and modified FRBSs is about 18%, while it is about 19% between the decision tree models and the modified FRBSs. Hypothesis 2 is proven.

4.3. Hypothesis 3 Validation

To validate Hypothesis 3, we conducted an analysis of the datasets in Table A1 and Table A2.
Let us first examine the results from the analysis of the dataset in Table A2. We used the dataset in Table A2 to construct the viscosity FRBS and temperature FRBS.
As illustrated in Figure 16, there are no outliers present in the dataset. The lack of outliers minimizes the approximation errors, allowing for more accurate patterns during the modeling process.
Figure 17 demonstrates the strong inverse correlation between the temperature and viscosity features. The presence of this correlation enhances the performance of the temperature FRBS, bringing it in line with that of the decision tree model.
In addition, Figure 18 shows that the values of viscosity and temperature are well distributed across the entire solution space. Therefore, the dataset in Table A2 is of high quality.
Based on the findings from prior experiments, we can conclude that applying the proposed method for generating fuzzy rules to a high-quality dataset is effective. This approach allows for the creation of an FRBS with a satisfactory level of quality.
Now, let us examine the dataset in Table A1, which we used to develop the density FRBS.
Figure 19 shows that the dataset in Table A1 does not contain outliers either.
As illustrated in Figure 20, there is no correlation between the temperature feature and the al, ti, and density features. Consequently, constructing an FRBS to determine temperature based on the density value using the proposed rule generation algorithm may not be workable.
The fuzzy rule generation process aims to minimize both the number of rule conditions and the total number of rules. However, in the absence of a correlation between the input and output parameters, this simplification may significantly restrict the solution space.
Figure 21 demonstrates that the data distribution in the dataset in Table A1 may prevent an acceptable level of approximation from being achieved when calculating the temperature based on the density value. As a result, the quality of the FRBS produced will probably be lower than that of the decision tree model.
Figure 22 illustrates the temperature* FRBS represented using the MATLAB Fuzzy Logic Toolbox notation. This FRBS is designed for calculating the output parameter t e m p using the input parameters a l , t i , and d e n s i t y .
For all of the FRBS linguistic variables, three linguistic terms were automatically created using the triangular membership function. These parameters were selected because they resulted in the highest quality of the system.
The R t e m p * 2 = 0.77 metric value was calculated for the resulting decision tree model using the test set from the datasets in Table A1.
27 raw rules were extracted from the decision tree: |r_temp*^raw| = 27.
15 rules remained after applying normalization and removing similar rules (reduction): |r_temp*^norm| = 15.
Next, the proposed algorithm generated eight fuzzy rules: |r_temp*^fuzz| = 8.
Let us examine a set of generated fuzzy rules:
if (density is low) and (ti is low) and (al is high) → (temp is high)
if (density is low) and (ti is high) → (temp is high)
if (density is low) → (temp is high)
if (density is low) and (ti is low) and (al is low) → (temp is average)
if (density is average) and (al is high) → (temp is low)
if (density is average) → (temp is average)
if (density is high) and (al is low) → (temp is low)
if (density is high) and (al is high) → (temp is low)
The R ˜ t e m p * 2 = 0.04 value for the temperature* FRBS was calculated using the test set from the dataset in Table A1.
Figure 23 illustrates the distribution of both real and inferred values of the temperature parameter derived from the execution of the temperature* FRBS.
After disabling the stage of reducing similar rules, the number of fuzzy rules increased to 10, and the quality of the system was enhanced to R ˜ t e m p * 2 = 0.11 . The distribution of both the real and inferred values of the temperature parameter derived from the execution of the modified temperature* FRBS is illustrated in Figure 24.
Hypothesis 3 is proven.

5. Discussion

In this section, we describe the primary advantages, disadvantages, and limitations of the proposed approach to generating fuzzy rules based on the decision tree interpretation.
Our analysis of the related literature indicates that the FRBS balances interpretability and an acceptable quality level [4,5,6]. One key advantage of the proposed method is that it enables the construction of an FRBS without requiring an in-depth understanding of the domain, the task, or the analyzed object. This approach significantly reduces the time required to create an initial version of the FRBS, allowing for later improvements. Optimizing the parameters or modifying the fuzzy rule base enhances the system quality. Any existing method can be used to adjust the FRBS parameters. For example, genetic or evolutionary algorithms can be used for parameter optimization [6,7,8]. With the optimal parameters and a high-quality dataset, the resulting FRBS performs comparably to a decision tree model. The advantages of the proposed method compared to existing techniques for generating fuzzy rules from decision trees include the following [23,24,25,26]:
  • This approach applies to a range of tasks because it is not limited to a specific domain or task;
  • This article explains the fuzzy rule generation algorithm, complete with details and examples;
  • This article includes experiments that show how the parameters and data quality affect the quality of the resulting FRBS.
However, there are some drawbacks to this approach. It lacks automatic adaptation to changes in data, unlike evolving fuzzy systems [27]. When generating rules, it does not consider various rule quality metrics that are typically employed in evolving fuzzy systems [27] and cognitive modeling methods [77]. In addition, the proposed method uses only type-1 fuzzy logic [65].
The limitations of the proposed approach include a significant dependence on the quality of the dataset and the number of features. As the number of features increases, the complexity and number of rules grow, leading to longer FRBS runtimes and a decline in overall quality. The approach is also limited to datasets on which the CART algorithm achieves an acceptable result, and it is suitable only for FRBSs based on the Mamdani algorithm.

6. Conclusions

This study presents a novel approach to generating a set of fuzzy rules with a subsequent reduction in the number of rules and the simplification of their conditions. This paper focuses primarily on reducing the time cost of generating a set of FRBS rules and improving the interpretability and explainability of the results.
The significance of the proposed algorithm for generating fuzzy rules based on the interpretation of the decision tree can be summarized as follows:
  • The proposed approach facilitates the rapid construction of an FRBS without requiring a profound understanding of the task, its domain, or the analyzed object.
  • The performance of the resulting system will be approximately equivalent to that of the decision tree. The performance depends on the proper selection of the optimal number of fuzzy variables and the types and configurations of membership functions. It is important to note that this article does not discuss the selection of the optimal FRBS parameters.
  • In certain situations, it may be necessary to modify the generated fuzzy rules. To address this issue, a clustering algorithm has been proposed which can identify groups of similar rules. This approach can help evaluate the extent of the coverage within the solution space.
In this study, we present an approach to generating fuzzy rules for constructing an FRBS based on the interpretation of the decision tree. To obtain the initial set of rules, we extract rules from the decision tree. The CART algorithm was chosen as the algorithm for training the model to create a binary decision tree. Next, we perform the steps of reducing the number of rules and simplifying the rule conditions. After that, the fuzzification process is performed. For the obtained fuzzy rules, a reduction in the number of rules is also applied. We also presented a set of experiments that allowed us to evaluate the adequacy of the proposed approach. The quality of the decision-tree-based model and the constructed FRBSs is evaluated using the metric R 2 . The average difference in the R 2 metric between the decision tree and the constructed FRBSs is approximately 2%.
Future work plans include the following:
  • The development of an approach to generating a set of fuzzy rules based on the interpretation of other machine learning algorithms;
  • The development of a method for generating fuzzy sets considering the specifics of the subject area and data to improve the quality of the FRBS;
  • Adapting the proposed approach to be compatible with additional fuzzy inference algorithms.

Author Contributions

Conceptualization, A.A.R.; data curation, A.A.F.; formal analysis, A.A.F.; methodology, N.G.Y.; visualization, A.A.R. All of the authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Ministry of Science and Higher Education of Russia in the framework of project no. 075-03-2023-143, “The study of intelligent predictive analytics based on the integration of methods for constructing features of heterogeneous dynamic data for machine learning and methods of predictive multimodal data analysis”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Density Dataset

Table A1. The effect of the input parameters t e m p , a l , and t i on the output parameter d e n s i t y .
#   temp (°C)   al (%)   ti (%)   density
Training dataset
1    20   0      0      1.0625
2    25   0      0      1.05979
3    35   0      0      1.05404
4    40   0      0      1.05103
5    45   0      0      1.04794
6    50   0      0      1.04477
7    60   0      0      1.03826
8    65   0      0      1.03484
9    70   0      0      1.03182
10   20   0.05   0      1.08755
11   45   0.05   0      1.07105
12   50   0.05   0      1.0676
13   55   0.05   0      1.06409
14   65   0.05   0      1.05691
15   70   0.05   0      1.05291
16   20   0.3    0      1.18861
17   25   0.3    0      1.18389
18   30   0.3    0      1.1792
19   40   0.3    0      1.17017
20   45   0.3    0      1.16572
21   50   0.3    0      1.16138
22   55   0.3    0      1.15668
23   60   0.3    0      1.15233
24   70   0.3    0      1.14414
25   20   0      0.05   1.09098
26   25   0      0.05   1.08775
27   30   0      0.05   1.08443
28   35   0      0.05   1.08108
29   40   0      0.05   1.07768
30   60   0      0.05   1.06362
31   65   0      0.05   1.05999
32   70   0      0.05   1.05601
33   25   0      0.3    1.2186
34   35   0      0.3    1.20776
35   45   0      0.3    1.19759
36   50   0      0.3    1.19268
37   55   0      0.3    1.18746
38   65   0      0.3    1.178
Testing dataset
1    30   0      0      1.05696
2    55   0      0      1.04158
3    25   0.05   0      1.08438
4    30   0.05   0      1.08112
5    35   0.05   0      1.07781
6    40   0.05   0      1.07446
7    60   0.05   0      1.06053
8    35   0.3    0      1.17459
9    65   0.3    0      1.14812
10   45   0      0.05   1.07424
11   50   0      0.05   1.07075
12   55   0      0.05   1.06721
13   20   0      0.3    1.22417
14   30   0      0.3    1.2131
15   40   0      0.3    1.20265
16   60   0      0.3    1.18265
17   70   0      0.3    1.17261

Appendix B. The Viscosity Dataset

Table A2. The effect of the input parameters t e m p , a l , and t i on the output parameter v i s c o s i t y .
#   temp (°C)   al (%)   ti (%)   viscosity
Training dataset
1    20   0      0      3.707
2    25   0      0      3.18
3    35   0      0      2.361
4    45   0      0      1.832
5    50   0      0      1.629
6    55   0      0      1.465
7    70   0      0      1.194
8    20   0.05   0      4.66
9    30   0.05   0      3.38
10   35   0.05   0      2.874
11   40   0.05   0      2.489
12   50   0.05   0      1.897
13   55   0.05   0      1.709
14   60   0.05   0      1.47
15   20   0.3    0      6.67
16   25   0.3    0      5.594
17   30   0.3    0      4.731
18   35   0.3    0      4.118
19   40   0.3    0      3.565
20   55   0.3    0      2.426
21   60   0.3    0      2.16
22   70   0.3    0      1.728
23   20   0      0.05   4.885
24   25   0      0.05   4.236
25   35   0      0.05   3.121
26   40   0      0.05   2.655
27   45   0      0.05   2.402
28   50   0      0.05   2.109
29   60   0      0.05   1.662
30   70   0      0.05   1.289
31   20   0      0.3    7.132
32   25   0      0.3    5.865
33   30   0      0.3    4.944
34   35   0      0.3    4.354
35   45   0      0.3    3.561
36   55   0      0.3    2.838
37   60   0      0.3    2.538
38   70   0      0.3    1.9097
Testing dataset
1    30   0      0      2.716
2    40   0      0      2.073
3    60   0      0      1.329
4    65   0      0      1.211
5    25   0.05   0      4.12
6    45   0.05   0      2.217
7    65   0.05   0      1.315
8    70   0.05   0      1.105
9    45   0.3    0      3.111
10   50   0.3    0      2.735
11   65   0.3    0      1.936
12   30   0      0.05   3.587
13   55   0      0.05   1.953
14   65   0      0.05   1.443
15   40   0      0.3    3.99
16   50   0      0.3    3.189
17   65   0      0.3    2.287

Appendix C. The Set of Raw Rules r^raw

This appendix presents a set of raw rules extracted from the decision tree model. Each rule corresponds to an individual branch of the tree. We utilize a depth-first search algorithm to trace a path from the tree’s root to each of its leaves to extract these rules. The total number of raw rules equals the number of leaves in the decision tree model. A more detailed explanation of the algorithm for generating a flat representation of the raw rules can be found in [75].
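Under the scikit-learn API used in the experiments, this depth-first extraction can be sketched as follows (a sketch of the idea, not the authors' exact code; the toy training data are illustrative):

```python
from sklearn.tree import DecisionTreeRegressor

def extract_raw_rules(tree, feature_names):
    """Depth-first traversal of a fitted sklearn CART tree.

    Returns one (conditions, value) pair per leaf, where conditions is a
    list of strings such as 'al <= 0.175'; the left child corresponds to
    '<=' and the right child to '>'.
    """
    t = tree.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == -1:            # leaf node
            rules.append((conditions, float(t.value[node][0][0])))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conditions + [f"{name} <= {thr:g}"])
        walk(t.children_right[node], conditions + [f"{name} > {thr:g}"])

    walk(0, [])
    return rules

# Toy data in the (al, ti, temp) -> density layout of Table A1.
X = [[0.0, 0.0, 20], [0.0, 0.0, 70], [0.3, 0.0, 20], [0.3, 0.0, 70]]
y = [1.0625, 1.0318, 1.1886, 1.1441]
rules = extract_raw_rules(DecisionTreeRegressor().fit(X, y), ["al", "ti", "temp"])
# One rule per leaf: len(rules) equals the number of leaves in the tree.
```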
if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p > 55.0 ) and ( t e m p > 62.5 ) 1.033 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p > 55.0 ) and ( t e m p 62.5 ) 1.038 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p 55.0 ) and ( t e m p > 47.5 ) 1.045 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p 55.0 ) and ( t e m p 47.5 ) 1.051 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l > 0.025 ) and ( t e m p > 60.0 ) and ( t e m p > 67.5 ) 1.053 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i > 0.025 ) and ( t e m p > 50.0 ) and ( t e m p > 67.5 ) 1.056 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l > 0.025 ) and ( t e m p > 60.0 ) and ( t e m p 67.5 ) 1.057 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p > 22.5 ) 1.06 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i > 0.025 ) and ( t e m p > 50.0 ) and ( t e m p 67.5 ) and ( t e m p > 62.5 ) 1.06 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p 32.5 ) and ( t i 0.025 ) and ( a l 0.025 ) and ( t e m p 22.5 ) 1.062 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i > 0.025 ) and ( t e m p > 50.0 ) and ( t e m p 67.5 ) and ( t e m p 62.5 ) 1.064 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l > 0.025 ) and ( t e m p 60.0 ) and ( t e m p > 52.5 ) 1.064 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i 0.025 ) and ( a l > 0.025 ) and ( t e m p 60.0 ) and ( t e m p 52.5 ) 1.069 if ( a l 0.175 ) and ( t i 0.175 ) and ( t e m p > 32.5 ) and ( t i > 0.025 ) and ( t e m p 50.0 ) and ( t e m p > 37.5 ) 1.078 if ( a l 0.175 ) and ( t i 0.175 ) and ( t 
emp > 32.5) and (ti > 0.025) and (temp ≤ 50.0) and (temp ≤ 37.5) → 1.081
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (ti > 0.025) and (temp > 27.5) → 1.084
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (ti ≤ 0.025) and (al > 0.025) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (ti > 0.025) and (temp ≤ 27.5) and (temp > 22.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (ti > 0.025) and (temp ≤ 27.5) and (temp ≤ 22.5) → 1.091
if (al > 0.175) and (temp > 35.0) and (temp > 52.5) and (temp > 65.0) → 1.144
if (al > 0.175) and (temp > 35.0) and (temp > 52.5) and (temp ≤ 65.0) and (temp > 57.5) → 1.152
if (al > 0.175) and (temp > 35.0) and (temp > 52.5) and (temp ≤ 65.0) and (temp ≤ 57.5) → 1.157
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) and (temp > 42.5) and (temp > 47.5) → 1.161
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) and (temp > 42.5) and (temp ≤ 47.5) → 1.166
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) and (temp ≤ 42.5) → 1.17
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp > 60.0) → 1.178
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) and (temp > 27.5) → 1.179
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) and (temp ≤ 27.5) → 1.184
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) and (temp > 52.5) → 1.187
if (al > 0.175) and (temp ≤ 35.0) and (temp ≤ 22.5) → 1.189
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) and (temp ≤ 52.5) and (temp > 47.5) → 1.193
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) and (temp ≤ 52.5) and (temp ≤ 47.5) → 1.198
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) and (temp > 30.0) → 1.208
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) and (temp ≤ 30.0) → 1.219
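The rules above are the raw root-to-leaf paths of the fitted decision tree. As a rough sketch of how such if-then rules can be read out of a scikit-learn tree (the toy data, the depth limit, and the feature names al, ti, and temp below are illustrative stand-ins, not the article's fitted model):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_to_rules(tree, feature_names):
    """Walk every root-to-leaf path and emit one 'if ... -> value' rule."""
    t = tree.tree_
    rules = []

    def recurse(node, conds):
        if t.children_left[node] == -1:          # leaf: emit the accumulated path
            value = t.value[node][0][0]
            rules.append("if " + " and ".join(conds) + f" -> {value:.3f}")
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        recurse(t.children_left[node], conds + [f"({name} <= {thr:g})"])
        recurse(t.children_right[node], conds + [f"({name} > {thr:g})"])

    recurse(0, [])
    return rules

# toy stand-in data: three inputs, one output
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, 20.0], [0.3, 0.3, 70.0], size=(200, 3))
y = 1.0 + 0.5 * X[:, 0] + 0.4 * X[:, 1] - 0.002 * X[:, 2]
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
for rule in tree_to_rules(model, ["al", "ti", "temp"]):
    print(rule)
```

Each emitted line has the same shape as the appendix entries: a conjunction of threshold conditions followed by the leaf's predicted value.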

Appendix D. The Set of Normalized Rules rnorm

if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 62.5) → 1.038
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.045
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.051
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) → 1.053
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) → 1.056
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 67.5) → 1.057
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (temp > 22.5) → 1.06
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 67.5) → 1.06
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.062
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 67.5) → 1.064
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 60.0) → 1.064
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 60.0) → 1.069
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 50.0) → 1.078
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 50.0) → 1.081
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 27.5) → 1.084
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 22.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) → 1.091
if (al > 0.175) and (temp > 35.0) → 1.144
if (al > 0.175) and (temp > 35.0) and (temp ≤ 65.0) → 1.152
if (al > 0.175) and (temp > 35.0) and (temp ≤ 65.0) → 1.157
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) → 1.161
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) → 1.166
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) → 1.17
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) → 1.178
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) → 1.179
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) → 1.184
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) → 1.187
if (al > 0.175) and (temp ≤ 35.0) → 1.189
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) → 1.193
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) → 1.198
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) and (temp > 30.0) → 1.208
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) → 1.219

Appendix E. The Set of Normalized Rules rnorm After Reduction

if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 62.5) → 1.038
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.048
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) → 1.053
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) → 1.056
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 67.5) → 1.057
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (temp > 22.5) → 1.06
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.062
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 67.5) → 1.062
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 60.0) → 1.067
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 50.0) → 1.079
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 27.5) → 1.084
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 22.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) → 1.091
if (al > 0.175) and (temp > 35.0) → 1.144
if (al > 0.175) and (temp > 35.0) and (temp ≤ 65.0) → 1.155
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) → 1.166
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) → 1.178
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) → 1.182
if (al > 0.175) and (temp ≤ 35.0) → 1.189
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) → 1.193
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) and (temp > 30.0) → 1.208
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) → 1.219
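Comparing Appendices D and E suggests that reduction merges rules sharing an identical antecedent and replaces their consequents with the average (e.g., 1.045 and 1.051 collapse to 1.048). A minimal sketch of that interpretation, using string tuples as a hypothetical antecedent encoding:

```python
from collections import defaultdict

def reduce_rules(rules):
    """Merge rules with identical antecedents, averaging their consequents.

    `rules` is a list of (antecedent, value) pairs, where the antecedent is a
    hashable description of the conditions (here: a tuple of strings).
    """
    groups = defaultdict(list)
    for antecedent, value in rules:
        groups[antecedent].append(value)
    return [(ant, round(sum(vs) / len(vs), 3)) for ant, vs in groups.items()]

# two of the normalized rules from Appendix D share an antecedent:
rules = [
    (("al <= 0.175", "ti <= 0.175", "temp > 32.5", "temp <= 55.0"), 1.045),
    (("al <= 0.175", "ti <= 0.175", "temp > 32.5", "temp <= 55.0"), 1.051),
    (("al > 0.175", "temp > 35.0"), 1.144),
]
print(reduce_rules(rules))  # the duplicate antecedent collapses to 1.048
```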

Appendix F. The Set of Simplified Rules rsimp

if (al = 0.0) and (ti = 0.0) and (temp = 70) → 1.033
if (al = 0.0) and (ti = 0.0) and (temp = 47.5) → 1.038
if (al = 0.0) and (ti = 0.0) and (temp = 43.75) → 1.048
if (al = 0.0) and (ti = 0.0) and (temp = 27.5) → 1.06
if (al = 0.0) and (ti = 0.0) and (temp = 20) → 1.062
if (al = 0.3) and (temp = 70) → 1.144
if (al = 0.3) and (temp = 50.0) → 1.155
if (al = 0.3) and (temp = 43.75) → 1.166
if (al = 0.3) and (temp = 28.75) → 1.182
if (al = 0.3) and (temp = 20) → 1.189
if (al = 0.0) and (ti = 0.1) and (temp = 70) → 1.056
if (al = 0.0) and (ti = 0.1) and (temp = 50.0) → 1.062
if (al = 0.0) and (ti = 0.1) and (temp = 41.25) → 1.079
if (al = 0.0) and (ti = 0.1) and (temp = 30.0) → 1.084
if (al = 0.0) and (ti = 0.1) and (temp = 27.5) → 1.088
if (al = 0.0) and (ti = 0.1) and (temp = 20) → 1.091
if (al = 0.0) and (ti = 0.3) and (temp = 70) → 1.178
if (al = 0.0) and (ti = 0.3) and (temp = 50.0) → 1.193
if (al = 0.0) and (ti = 0.3) and (temp = 35.0) → 1.208
if (al = 0.0) and (ti = 0.3) and (temp = 20) → 1.219
if (al = 0.1) and (ti = 0.0) and (temp = 70) → 1.053
if (al = 0.1) and (ti = 0.0) and (temp = 50.0) → 1.057
if (al = 0.1) and (ti = 0.0) and (temp = 46.25) → 1.067
if (al = 0.1) and (ti = 0.0) and (temp = 20) → 1.088
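The simplified rules replace each interval antecedent with a single representative value. The pattern between Appendices E and F is consistent with taking the interval midpoint when both bounds are present and falling back to the domain boundary when one bound is missing; the sketch below encodes that reading (the domain bounds are the dataset ranges, and the fallback behavior is an inferred assumption, not a stated rule of the article):

```python
def representative(lower, upper, dom_min, dom_max):
    """Collapse an interval antecedent to one value.

    Both bounds present -> midpoint; only an upper bound -> domain minimum;
    only a lower bound -> domain maximum (consistent with Appendices E and F).
    """
    if lower is not None and upper is not None:
        return (lower + upper) / 2
    if upper is not None:
        return dom_min
    return dom_max

# temp in [20, 70]: (32.5, 55.0] -> 43.75; (32.5, inf) -> 70; (-inf, 40] -> 20
print(representative(32.5, 55.0, 20, 70))  # 43.75
print(representative(32.5, None, 20, 70))  # 70
print(representative(None, 40.0, 20, 70))  # 20
```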

Appendix G. The Set of Fuzzy Rules rfuzz

if (al is low) and (ti is low) and (temp is high) → (density is lower)
if (al is low) and (ti is low) and (temp is average) → (density is lower)
if (al is low) and (ti is low) and (temp is low) → (density is low)
if (al is high) and (temp is high) → (density is average)
if (al is high) and (temp is average) → (density is high)
if (al is high) and (temp is low) → (density is high)
if (al is low) and (ti is average) and (temp is high) → (density is low)
if (al is low) and (ti is average) and (temp is average) → (density is low)
if (al is low) and (ti is average) and (temp is low) → (density is low)
if (al is low) and (ti is high) and (temp is high) → (density is high)
if (al is low) and (ti is high) and (temp is average) → (density is higher)
if (al is low) and (ti is high) and (temp is low) → (density is higher)
if (al is average) and (ti is low) and (temp is high) → (density is lower)
if (al is average) and (ti is low) and (temp is average) → (density is low)
if (al is average) and (ti is low) and (temp is low) → (density is low)
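Linguistic rules like these can be evaluated by any Mamdani-style engine. As a minimal hand-rolled sketch of how one rule's firing strength is computed (the triangular membership functions and their breakpoints below are illustrative assumptions, not the article's auto-generated ones):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# illustrative term definitions for the two inputs used by one rule
al_terms = {"low": (-0.1, 0.0, 0.15), "high": (0.15, 0.3, 0.4)}
temp_terms = {"low": (10, 20, 45), "average": (20, 45, 70), "high": (45, 70, 80)}

def rule_strength(al, temp, al_term, temp_term):
    """Firing strength of 'if (al is X) and (temp is Y)', using min for AND."""
    return min(tri(al, *al_terms[al_term]), tri(temp, *temp_terms[temp_term]))

# 'if (al is high) and (temp is average) -> (density is high)'
print(rule_strength(0.3, 45.0, "high", "average"))  # fires fully: 1.0
```

In a full Mamdani pipeline the firing strength would then clip the output term's membership function, and the clipped sets would be aggregated and defuzzified.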

Appendix H. Result of the Rule Clustering

Cluster 1:
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) → 1.033
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 62.5) → 1.038
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 55.0) → 1.048
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) and (temp > 22.5) → 1.06
if (al ≤ 0.175) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.062

Cluster 2:
if (al > 0.175) and (temp > 35.0) → 1.144
if (al > 0.175) and (temp > 35.0) and (temp ≤ 65.0) → 1.155
if (al > 0.175) and (temp > 35.0) and (temp ≤ 52.5) → 1.166
if (al > 0.175) and (temp ≤ 35.0) and (temp > 22.5) → 1.182
if (al > 0.175) and (temp ≤ 35.0) → 1.189

Cluster 3:
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) → 1.056
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 67.5) → 1.062
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp > 32.5) and (temp ≤ 50.0) → 1.079
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 27.5) → 1.084
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) and (temp > 22.5) → 1.088
if (al ≤ 0.175) and (ti ≤ 0.175) and (ti > 0.025) and (temp ≤ 32.5) → 1.091

Cluster 4:
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) → 1.178
if (al ≤ 0.175) and (ti > 0.175) and (temp > 40.0) and (temp ≤ 60.0) → 1.193
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) and (temp > 30.0) → 1.208
if (al ≤ 0.175) and (ti > 0.175) and (temp ≤ 40.0) → 1.219

Cluster 5:
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) → 1.053
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 67.5) → 1.057
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp > 32.5) and (temp ≤ 60.0) → 1.067
if (al ≤ 0.175) and (al > 0.025) and (ti ≤ 0.175) and (temp ≤ 32.5) → 1.088
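The clusters above group rules that share a common antecedent prefix on al and ti. One way to reproduce this kind of grouping, in the spirit of the silhouette analysis shown in Figure 10, is to encode each rule's interval bounds as a feature vector and apply k-means; the encoding below is an illustrative assumption, not the article's exact procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# each rule encoded as (al_low, al_high, ti_low, ti_high): the interval bounds
# of its antecedent, with the dataset range [0, 0.3] used where a bound is absent
rules = np.array([
    [0.0, 0.175, 0.0, 0.175],   # Cluster 1 style: al <= 0.175, ti <= 0.175
    [0.0, 0.175, 0.0, 0.175],
    [0.175, 0.3, 0.0, 0.3],     # Cluster 2 style: al > 0.175
    [0.175, 0.3, 0.0, 0.3],
    [0.0, 0.175, 0.175, 0.3],   # Cluster 4 style: ti > 0.175
    [0.0, 0.175, 0.175, 0.3],
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(rules)
print(km.labels_)                                   # three groups of two rules
print(round(silhouette_score(rules, km.labels_), 3))
```

In practice the number of clusters would be chosen by scanning k and picking the value with the best silhouette score, as the article does.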

References

  1. Romanov, A.A.; Filippov, A.A.; Yarushkina, N.G. Adaptive Fuzzy Predictive Approach in Control. Mathematics 2023, 11, 875. [Google Scholar] [CrossRef]
  2. Romanov, A.; Filippov, A. Context Modeling in Predictive Analytics. In Proceedings of the 2021 International Conference on Information Technology and Nanotechnology (ITNT), Samara, Russia, 20–24 September 2021. [Google Scholar]
  3. Xu, J.; Wang, Q.; Lin, Q. Parallel Robot with Fuzzy Neural Network Sliding Mode Control. Adv. Mech. Eng. 2018, 10, 1687814018801261. [Google Scholar] [CrossRef]
  4. Fernandez, A.; Herrera, F.; Cordon, O.; del Jesus, M.J.; Marcelloni, F. Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What For, and Where to? IEEE Comput. Intell. Mag. 2019, 14, 69–81. [Google Scholar] [CrossRef]
  5. Moral, A.; Castiello, C.; Magdalena, L.; Mencar, C. Explainable Fuzzy Systems; Springer: Berlin, Germany, 2021. [Google Scholar]
  6. Varshney, A.K.; Torra, V. Literature Review of the Recent Trends and Applications in Various Fuzzy Rule-Based Systems. Int. J. Fuzzy Syst. 2023, 25, 2163–2186. [Google Scholar] [CrossRef]
  7. Krömer, P.; Platoš, J. Simultaneous Prediction of Wind Speed and Direction by Evolutionary Fuzzy Rule Forest. Procedia Comput. Sci. 2017, 108, 295–304. [Google Scholar] [CrossRef]
  8. Su, W.C.; Juang, C.F.; Hsu, C.M. Multiobjective Evolutionary Interpretable Type-2 Fuzzy Systems with Structure and Parameter Learning for Hexapod Robot Control. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 3066–3078. [Google Scholar] [CrossRef]
  9. Kerr-Wilson, J.; Pedrycz, W. Generating a Hierarchical Fuzzy Rule-Based Model. Fuzzy Sets Syst. 2020, 381, 124–139. [Google Scholar] [CrossRef]
  10. Razak, T.R.; Fauzi, S.S.M.; Gining, R.A.J.; Ismail, M.H.; Maskat, R. Hierarchical Fuzzy Systems: Interpretability and Complexity. Indones. J. Electr. Eng. Inform. 2021, 9, 478–489. [Google Scholar] [CrossRef]
  11. Zouari, M.; Baklouti, N.; Sanchez-Medina, J.; Kammoun, H.M.; Ayed, M.B.; Alimi, A.M. PSO-Based Adaptive Hierarchical Interval Type-2 Fuzzy Knowledge Representation System (PSO-AHIT2FKRS) for Travel Route Guidance. IEEE Trans. Intell. Transport. Syst. 2022, 23, 804–818. [Google Scholar] [CrossRef]
  12. Roy, D.K.; Saha, K.K.; Kamruzzaman, M.; Biswas, S.K.; Hossain, M.A. Hierarchical Fuzzy Systems Integrated with Particle Swarm Optimization for Daily Reference Evapotranspiration Prediction: A Novel Approach. Water Resour. Manag. 2021, 35, 5383–5407. [Google Scholar] [CrossRef]
  13. Wei, X.J.; Zhang, D.Q.; Huang, S.J. A Variable Selection Method for a Hierarchical Interval Type-2 TSK Fuzzy Inference System. Fuzzy Sets Syst. 2022, 438, 46–61. [Google Scholar] [CrossRef]
  14. Karaboga, D.; Kaya, E. Adaptive Network Based Fuzzy Inference System (ANFIS) Training Approaches: A Comprehensive Survey. Artif. Intell. Rev. 2019, 52, 2263–2293. [Google Scholar] [CrossRef]
  15. Shaik, R.B.; Kannappan, E.V. Application of Adaptive Neuro-Fuzzy Inference Rule-Based Controller in Hybrid Electric Vehicles. J. Electr. Eng. Technol. 2020, 15, 1937–1945. [Google Scholar] [CrossRef]
  16. Lin, C.-M.; Le, T.-L.; Huynh, T.-T. Self-Evolving Function-Link Interval Type-2 Fuzzy Neural Network for Nonlinear System Identification and Control. Neurocomputing 2018, 275, 2239–2250. [Google Scholar] [CrossRef]
  17. Li, F.; Yu, F.; Shen, L.; Li, H.; Yang, X.; Shen, Q. EEG-Based emotion recognition with combined fuzzy inference via integrating weighted fuzzy rule inference and interpolation. Mathematics 2025, 13, 166. [Google Scholar] [CrossRef]
  18. Zadeh, L.A. The Concept of a Linguistic Variable and Its Application to Approximate Reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  19. Zadeh, L.A. Fuzzy Logic. Computer 1988, 21, 83–93. [Google Scholar] [CrossRef]
  20. Chen, S.; Tsai, F. A New Method to Construct Membership Functions and Generate Fuzzy Rules from Training Instances. Int. J. Inf. Manag. Sci. 2005, 16, 47. [Google Scholar] [CrossRef]
  21. Wu, T.-P.; Chen, S.-M. A New Method for Constructing Membership Functions and Fuzzy Rules from Training Examples. IEEE Trans. Syst. Man Cybern. B 1999, 29, 25–40. [Google Scholar]
  22. Jiao, L.; Yang, H.; Liu, Z.G.; Pan, Q. Interpretable Fuzzy Clustering Using Unsupervised Fuzzy Decision Trees. Inf. Sci. 2022, 611, 540–563. [Google Scholar] [CrossRef]
  23. Idris, N.F.; Ismail, M.A. Breast Cancer Disease Classification Using Fuzzy-ID3 Algorithm with FUZZYDBD Method: Automatic Fuzzy Database Definition. PeerJ Comput. Sci. 2021, 7, e427. [Google Scholar] [CrossRef] [PubMed]
  24. Al-Gunaid, M.; Shcherbakov, M.; Kamaev, V.; Gerget, O.; Tyukov, A. Decision Trees Based Fuzzy Rules. In Information Technologies in Science, Management, Social Sphere and Medicine; Atlantis Press: Amsterdam, The Netherlands, 2016. [Google Scholar]
  25. Nagaraj, P.; Deepalakshmi, P. An Intelligent Fuzzy Inference Rule-Based Expert Recommendation System for Predictive Diabetes Diagnosis. Int. J. Imaging Syst. Technol. 2022, 32, 1373–1396. [Google Scholar] [CrossRef]
  26. Exarchos, T.P.; Tsipouras, M.G.; Exarchos, C.P.; Papaloukas, C.; Fotiadis, D.I.; Michalis, L.K. A Methodology for the Automated Creation of Fuzzy Expert Systems for Ischaemic and Arrhythmic Beat Classification Based on a Set of Rules Obtained by a Decision Tree. Artif. Intell. Med. 2007, 40, 187–200. [Google Scholar] [CrossRef]
  27. Gu, X.; Han, J.; Shen, Q.; Angelov, P.P. Autonomous learning for fuzzy systems: A review. Artif. Intell. Rev. 2023, 56, 7549–7595. [Google Scholar] [CrossRef]
  28. Alghamdi, M.; Angelov, P.; Gimenez, R.; Rufino, M.; Soares, E. Self-organising and self-learning model for soybean yield prediction. In Proceedings of the 2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), Granada, Spain, 22–25 October 2019; pp. 441–446. [Google Scholar]
  29. Mohamed, S.; Hameed, I.A. A GA-based adaptive neuro-fuzzy controller for greenhouse climate control system. Alex. Eng. J. 2018, 57, 773–779. [Google Scholar] [CrossRef]
  30. Soares, E.; Angelov, P.; Costa, B.; Castro, M. Actively semi-supervised deep rule-based classifier applied to adverse driving scenarios. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  31. Andonovski, G.; Sipele, O.; Iglesias, J.A.; Sanchis, A.; Lughofer, E.; Škrjanc, I. Detection of driver maneuvers using evolving fuzzy cloud-based system. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; pp. 700–706. [Google Scholar]
  32. Wu, Q.; Cheng, S.; Li, L.; Yang, F.; Meng, L.J.; Fan, Z.X.; Liang, H.W. A fuzzy-inference-based reinforcement learning method of overtaking decision making for automated vehicles. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2022, 236, 75–83. [Google Scholar] [CrossRef]
  33. Stirling, J.; Chen, T.; Bucholc, M. Diagnosing Alzheimer’s disease using a self-organising fuzzy classifier. In Fuzzy Logic: Recent Applications and Developments; Springer International Publishing: Cham, Switzerland, 2021; pp. 69–82. [Google Scholar]
  34. de Campos Souza, P.V.; Lughofer, E. Identification of heart sounds with an interpretable evolving fuzzy neural network. Sensors 2020, 20, 6477. [Google Scholar] [CrossRef]
  35. Kukker, A.; Sharma, R. A genetic algorithm assisted fuzzy Q-learning epileptic seizure classifier. Comput. Electr. Eng. 2021, 92, 107154. [Google Scholar] [CrossRef]
  36. Severiano, C.A.; e Silva, P.C.D.L.; Cohen, M.W.; Guimarães, F.G. Evolving fuzzy time series for spatio-temporal forecasting in renewable energy systems. Renew. Energy 2021, 171, 764–783. [Google Scholar] [CrossRef]
  37. Andonovski, G.; Lughofer, E.; Škrjanc, I. Evolving fuzzy model identification of nonlinear Wiener-Hammerstein processes. IEEE Access 2021, 9, 158470–158480. [Google Scholar] [CrossRef]
  38. Blažič, A.; Škrjanc, I.; Logar, V. Soft sensor of bath temperature in an electric arc furnace based on a data-driven Takagi–Sugeno fuzzy model. Appl. Soft Comput. 2021, 113, 107949. [Google Scholar] [CrossRef]
  39. Alfaverh, F.; Denai, M.; Sun, Y. Demand response strategy based on reinforcement learning and fuzzy reasoning for home energy management. IEEE Access 2020, 8, 39310–39321. [Google Scholar] [CrossRef]
  40. Pratama, M.; Dimla, E.; Tjahjowidodo, T.; Pedrycz, W.; Lughofer, E. Online tool condition monitoring based on parsimonious ensemble+. IEEE Trans. Cybern. 2018, 50, 664–677. [Google Scholar] [CrossRef]
  41. Camargos, M.O.; Bessa, I.; D’Angelo, M.F.S.V.; Cosme, L.B.; Palhares, R.M. Data-driven prognostics of rolling element bearings using a novel error based evolving Takagi–Sugeno fuzzy model. Appl. Soft Comput. 2020, 96, 106628. [Google Scholar] [CrossRef]
  42. Malik, H.; Sharma, R.; Mishra, S. Fuzzy reinforcement learning based intelligent classifier for power transformer faults. ISA Trans. 2020, 101, 390–398. [Google Scholar] [CrossRef]
  43. Rodrigues Júnior, S.E.; de Oliveira Serra, G.L. Intelligent forecasting of time series based on evolving distributed Neuro-Fuzzy network. Comput. Intell. 2020, 36, 1394–1413. [Google Scholar] [CrossRef]
  44. Cao, B.; Zhao, J.; Lv, Z.; Gu, Y.; Yang, P.; Halgamuge, S.K. Multiobjective evolution of fuzzy rough neural network via distributed parallelism for stock prediction. IEEE Trans. Fuzzy Syst. 2020, 28, 939–952. [Google Scholar] [CrossRef]
  45. Yarushkina, N.; Filippov, A.; Romanov, A. Contextual Analysis of Financial Time Series. Mathematics 2024, 13, 57. [Google Scholar] [CrossRef]
  46. Precup, R.E.; Teban, T.A.; Albu, A.; Borlea, A.B.; Zamfirache, I.A.; Petriu, E.M. Evolving fuzzy models for prosthetic hand myoelectric-based control. IEEE Trans. Instrum. Meas. 2020, 69, 4625–4636. [Google Scholar] [CrossRef]
  47. Yang, Z.X.; Rong, H.J.; Wong, P.K.; Angelov, P.; Yang, Z.X.; Wang, H. Self-evolving data cloud-based PID-like controller for nonlinear uncertain systems. IEEE Trans. Ind. Electron. 2020, 68, 4508–4518. [Google Scholar] [CrossRef]
  48. Zhang, H.; Zhang, K.; Cai, Y.; Han, J. Adaptive fuzzy fault-tolerant tracking control for partially unknown systems with actuator faults via integral reinforcement learning method. IEEE Trans. Fuzzy Syst. 2019, 27, 1986–1998. [Google Scholar] [CrossRef]
  49. Gu, X.; Khan, M.A.; Angelov, P.; Tiwary, B.; Yourdshah, E.S.; Yang, Z.X. A novel self-organizing PID approach for controlling mobile robot locomotion. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 19–24 July 2020; pp. 1–10. [Google Scholar]
  50. Juang, C.F.; Lu, C.H.; Huang, C.A. Navigation of three cooperative object-transportation robots using a multistage evolutionary fuzzy control approach. IEEE Trans. Cybern. 2020, 52, 3606–3619. [Google Scholar] [CrossRef] [PubMed]
  51. Goharimanesh, M.; Mehrkish, A.; Janabi-Sharifi, F. A fuzzy reinforcement learning approach for continuum robot control. J. Intell. Robot. Syst. 2020, 100, 809–826. [Google Scholar] [CrossRef]
  52. Leite, D.; Škrjanc, I. Ensemble of evolving optimal granular experts, OWA aggregation, and time series prediction. Inf. Sci. 2019, 504, 95–112. [Google Scholar] [CrossRef]
  53. Azad, A.; Kashi, H.; Farzin, S.; Singh, V.P.; Kisi, O.; Karami, H.; Sanikhani, H. Novel approaches for air temperature prediction: A comparison of four hybrid evolutionary fuzzy models. Meteorol. Appl. 2020, 27, e1817. [Google Scholar] [CrossRef]
  54. Malik, H.; Yadav, A.K. A novel hybrid approach based on relief algorithm and fuzzy reinforcement learning approach for predicting wind speed. Sustain. Energy Technol. Assess. 2021, 43, 100920. [Google Scholar] [CrossRef]
  55. Ge, D.; Zeng, X.J. Learning data streams online—An evolving fuzzy system approach with self-learning/adaptive thresholds. Inf. Sci. 2020, 507, 172–184. [Google Scholar] [CrossRef]
  56. Gu, X.; Shen, Q. A self-adaptive fuzzy learning system for streaming data prediction. Inf. Sci. 2021, 579, 623–647. [Google Scholar] [CrossRef]
  57. Angelov, P.P.; Filev, D.P. An approach to online identification of Takagi-Sugeno fuzzy models. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 484–498. [Google Scholar] [CrossRef]
  58. Gu, X.; Angelov, P.P. Self-organising fuzzy logic classifier. Inf. Sci. 2018, 447, 36–51. [Google Scholar] [CrossRef]
  59. Rong, H.J.; Yang, Z.X.; Wong, P.K. Robust and noise-insensitive recursive maximum correntropy-based evolving fuzzy system. IEEE Trans. Fuzzy Syst. 2019, 28, 2277–2284. [Google Scholar] [CrossRef]
  60. Ferdaus, M.M.; Pratama, M.; Anavatti, S.G.; Garratt, M.A. PALM: An incremental construction of hyperplanes for data stream regression. IEEE Trans. Fuzzy Syst. 2019, 27, 2115–2129. [Google Scholar] [CrossRef]
  61. Yang, Z.X.; Rong, H.J.; Angelov, P.; Yang, Z.X. Statistically evolving fuzzy inference system for non-Gaussian noises. IEEE Trans. Fuzzy Syst. 2021, 30, 2649–2664. [Google Scholar] [CrossRef]
  62. Samanta, S.; Pratama, M.; Sundaram, S. A novel spatio-temporal fuzzy inference system (SPATFIS) and its stability analysis. Inf. Sci. 2019, 505, 84–99. [Google Scholar] [CrossRef]
  63. Pratama, M.; Anavatti, S.G.; Lughofer, E. GENEFIS: Toward an effective localist network. IEEE Trans. Fuzzy Syst. 2013, 22, 547–562. [Google Scholar] [CrossRef]
  64. Jahanshahi, H.; Yousefpour, A.; Soradi-Zeid, S.; Castillo, O. A review on design and implementation of type-2 fuzzy controllers. Math. Methods Appl. Sci. 2022. [Google Scholar] [CrossRef]
  65. Mendel, J.; Hagras, H.; Tan, W.W.; Melek, W.W.; Ying, H. Introduction to Type-2 Fuzzy Logic Control: Theory and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  66. Mirnezami, S.A.; Mousavi, S.M.; Mohagheghi, V. An innovative interval type-2 fuzzy approach for multi-scenario multi-project cash flow evaluation considering TODIM and critical chain with an application to energy sector. Neural Comput. Appl. 2021, 33, 2263–2284. [Google Scholar] [CrossRef]
  67. Ecer, F. Multi-criteria decision making for green supplier selection using interval type-2 fuzzy AHP: A case study of a home appliance manufacturer. Oper. Res. 2022, 22, 199–233. [Google Scholar] [CrossRef]
  68. Aleksić, A.; Milanović, D.D.; Komatina, N.; Tadić, D. Evaluation and ranking of failures in manufacturing process by combining best-worst method and VIKOR under type-2 fuzzy environment. Expert Syst. 2023, 40, e13148. [Google Scholar] [CrossRef]
  69. Said, Z.; Abdelkareem, M.A.; Rezk, H.; Nassef, A.M. Dataset on Fuzzy Logic Based-Modelling and Optimization of Thermophysical Properties of Nanofluid Mixture. Data Brief 2019, 26, 104547. [Google Scholar] [CrossRef]
  70. Said, Z.; Abdelkareem, M.A.; Rezk, H.; Nassef, A.M. Fuzzy Modeling and Optimization for Experimental Thermophysical Properties of Water and Ethylene Glycol Mixture for Al2O3 and TiO2 Based Nanofluids. Powder Technol. 2019, 353, 345–358. [Google Scholar] [CrossRef]
  71. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: New York, NY, USA, 2013. [Google Scholar]
  72. Fuzzy Logic Toolbox. Available online: https://www.mathworks.com/products/fuzzy-logic.html (accessed on 10 January 2025).
  73. Mamdani, E.H. Application of fuzzy algorithms for control of simple dynamic plant. Proc. Inst. Electr. Eng. 1974, 121, 1585–1588. [Google Scholar] [CrossRef]
  74. Aggarwal, A. A Beginner’s Guide to Fuzzy Logic Controllers for AC Temperature Control. Available online: https://threws.com/a-beginners-guide-to-fuzzy-logic-controllers-for-ac-temperature-control/ (accessed on 1 November 2024).
  75. Płoński, P. Extract Rules from Decision Tree in 3 Ways with Scikit-Learn and Python. Available online: https://mljar.com/blog/extract-rules-decision-tree/ (accessed on 1 November 2024).
  76. Silhouette Coefficient. Available online: https://scikit-learn.org/dev/modules/clustering.html#silhouette-coefficient (accessed on 1 November 2024).
  77. Kliegr, T.; Bahník, Š.; Fürnkranz, J. A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif. Intell. 2021, 295, 103458. [Google Scholar] [CrossRef]
Figure 1. General architecture of FRBS.
Figure 2. Proposed approach schema.
Figure 3. Density FRBS.
Figure 4. Fuzzy set of Al2O3 concentration (al) with three linguistic terms as an input parameter.
Figure 5. Fuzzy set of TiO2 concentration (ti) with three linguistic terms as an input parameter.
Figure 6. Fuzzy set of temperature (temp) with three linguistic terms as an input parameter of the density FRBS.
Figure 7. Fuzzy set of density (density) with five linguistic terms as an output parameter of the density FRBS.
Figure 8. Steps of the proposed approach to generating fuzzy rules.
Figure 9. Rule simplification schema.
Figure 10. Silhouette score diagram.
Figure 11. Density, viscosity, and temperature FRBSs.
Figure 12. Fuzzy set for temperature (temp) with five linguistic terms as an input parameter of the viscosity FRBS and an output parameter of the temperature FRBS.
Figure 13. Fuzzy set of viscosity (viscosity) with five linguistic terms as the output parameter of the viscosity FRBS.
Figure 14. Fuzzy set of viscosity (viscosity) with three linguistic terms as an input parameter of the temperature FRBS.
Figure 15. Accumulation result for the density output parameter.
Figure 16. Boxplots for the features of the dataset in Table A2.
Figure 17. Correlation analysis of the dataset in Table A2.
Figure 18. Distribution analysis of the dataset in Table A2.
Figure 19. Boxplots for the features of the dataset in Table A1.
Figure 20. Correlation analysis of the dataset in Table A1.
Figure 21. Distribution analysis of the dataset in Table A1.
Figure 22. Temperature* FRBS.
Figure 23. Distribution analysis of real and inferred values of temperature* FRBS.
Figure 24. Distribution analysis of real and inferred values of modified temperature* FRBS.
Table 1. Related work search parameters.
| Topic | Keywords | Period |
| --- | --- | --- |
| Fuzzy systems in control | "fuzzy systems" in control | From 2019 |
| Trends and challenges in fuzzy system development | "fuzzy rule based systems" recent trends review | From 2019 |
| Challenges in the explainability of fuzzy systems | "fuzzy systems" explainability | From 2019 |
| Representation of knowledge and rules based on fuzzy logic | "fuzzy knowledge representation" "fuzzy rules" | From 2019 |
| Types of fuzzy logic | type-2 fuzzy systems review | From 2019 |
| Integration of fuzzy systems and neural networks | anfis "adaptive neuro fuzzy inference system" | From 2019 |
| Hierarchical fuzzy systems | hierarchical fuzzy systems | From 2019 |
| Evolutionary fuzzy systems | evolutionary fuzzy systems | From 2019 |
| Selection of the optimal parameters for fuzzy systems | "membership functions" construction | From 2019 |
| Generation of fuzzy rules based on a decision tree | "decision tree" "fuzzy rules" generation | From 2019 |
Table 2. Experimental results.
# temp (°C)
Viscosity
(Pa·s)
al (%) ti (%)RealInferredRMSE
Density FRBS
130001.0561.0730.017
255001.0411.0470.006
3250.0501.0841.0760.008
4300.0501.0811.0730.007
5350.0501.0771.0690.009
6400.0501.0741.0670.007
7600.0501.0611.0670.007
8350.301.1741.1720.002
9650.301.1481.1360.012
104500.051.0741.0670.007
115000.051.0711.0670.004
125500.051.0671.0680.001
132000.31.2241.2040.020
143000.31.2131.2020.011
154000.31.2021.2030.001
166000.31.1821.1760.007
177000.31.1721.1720.000
Total0.009
Viscosity FRBS

| # | temp (°C) | al (%) | ti (%) | Real (Pa·s) | Inferred (Pa·s) | RMSE |
|---|---|---|---|---|---|---|
| 1 | 30 | 0 | 0 | 2.716 | 3.089 | 0.374 |
| 2 | 40 | 0 | 0 | 2.073 | 2.359 | 0.287 |
| 3 | 60 | 0 | 0 | 1.329 | 1.465 | 0.137 |
| 4 | 65 | 0 | 0 | 1.211 | 1.414 | 0.204 |
| 5 | 25 | 0.05 | 0 | 4.120 | 3.188 | 0.931 |
| 6 | 45 | 0.05 | 0 | 2.217 | 2.045 | 0.171 |
| 7 | 65 | 0.05 | 0 | 1.315 | 1.414 | 0.100 |
| 8 | 70 | 0.05 | 0 | 1.105 | 1.408 | 0.304 |
| 9 | 45 | 0.3 | 0 | 3.111 | 3.499 | 0.388 |
| 10 | 50 | 0.3 | 0 | 2.735 | 3.475 | 0.740 |
| 11 | 65 | 0.3 | 0 | 1.936 | 1.812 | 0.124 |
| 12 | 30 | 0 | 0.05 | 3.587 | 3.111 | 0.475 |
| 13 | 55 | 0 | 0.05 | 1.953 | 2.128 | 0.176 |
| 14 | 65 | 0 | 0.05 | 1.443 | 1.414 | 0.028 |
| 15 | 40 | 0 | 0.3 | 3.990 | 3.475 | 0.515 |
| 16 | 50 | 0 | 0.3 | 3.189 | 3.475 | 0.286 |
| 17 | 65 | 0 | 0.3 | 2.287 | 1.812 | 0.475 |
| Total | | | | | | 0.407 |
Temperature FRBS

| # | Viscosity (Pa·s) | al (%) | ti (%) | Real (°C) | Inferred (°C) | RMSE |
|---|---|---|---|---|---|---|
| 1 | 2.716 | 0 | 0 | 30 | 48.540 | 18.540 |
| 2 | 2.073 | 0 | 0 | 40 | 51.739 | 11.739 |
| 3 | 1.329 | 0 | 0 | 60 | 58.696 | 1.304 |
| 4 | 1.211 | 0 | 0 | 65 | 63.509 | 1.491 |
| 5 | 4.120 | 0.05 | 0 | 25 | 30.269 | 5.269 |
| 6 | 2.217 | 0.05 | 0 | 45 | 52.190 | 7.190 |
| 7 | 1.315 | 0.05 | 0 | 65 | 59.122 | 5.878 |
| 8 | 1.105 | 0.05 | 0 | 70 | 65.513 | 4.487 |
| 9 | 3.111 | 0.3 | 0 | 45 | 44.290 | 0.710 |
| 10 | 2.735 | 0.3 | 0 | 50 | 52.520 | 2.520 |
| 11 | 1.936 | 0.3 | 0 | 65 | 59.258 | 5.742 |
| 12 | 3.587 | 0 | 0.05 | 30 | 34.285 | 4.285 |
| 13 | 1.953 | 0 | 0.05 | 55 | 51.675 | 3.325 |
| 14 | 1.443 | 0 | 0.05 | 65 | 56.137 | 8.863 |
| 15 | 3.990 | 0 | 0.3 | 40 | 38.163 | 1.838 |
| 16 | 3.189 | 0 | 0.3 | 50 | 44.909 | 5.092 |
| 17 | 2.287 | 0 | 0.3 | 65 | 59.971 | 5.029 |
| Total | | | | | | 6.954 |
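Judging by the tabulated values, each per-row RMSE entry in Table 2 is the absolute deviation |real − inferred| for that test point, and each Total row is the root mean square of those per-row deviations. A minimal sketch of that check, using the density and temperature deviations copied from the table (the viscosity total can differ in the last digit when recomputed this way, because the tabulated per-row deviations are themselves rounded):

```python
import math

def total_rmse(errors):
    """Root mean square of the per-row absolute deviations |real - inferred|."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Per-row deviations for the density FRBS (Table 2)
density = [0.017, 0.006, 0.008, 0.007, 0.009, 0.007, 0.007, 0.002, 0.012,
           0.007, 0.004, 0.001, 0.020, 0.011, 0.001, 0.007, 0.000]

# Per-row deviations for the temperature FRBS (Table 2)
temperature = [18.540, 11.739, 1.304, 1.491, 5.269, 7.190, 5.878, 4.487,
               0.710, 2.520, 5.742, 4.285, 3.325, 8.863, 1.838, 5.092, 5.029]

print(round(total_rmse(density), 3))      # 0.009, matching the table
print(round(total_rmse(temperature), 3))  # 6.954, matching the table
```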