Article

Deep Hierarchical Interval Type 2 Self-Organizing Fuzzy System for Data-Driven Robot Control

College of Electrical Engineering, Sichuan University, Chengdu 610065, China
*
Author to whom correspondence should be addressed.
Processes 2022, 10(10), 2091; https://doi.org/10.3390/pr10102091
Submission received: 2 October 2022 / Revised: 10 October 2022 / Accepted: 10 October 2022 / Published: 15 October 2022
(This article belongs to the Section Automation Control Systems)

Abstract

To solve the dimensional explosion problem, this paper proposes a new architecture for the fuzzy system, the deep hierarchical self-organizing interval type-2 fuzzy system (DHSOIT2FS). Each sub-fuzzy system is a self-organizing interval type-2 fuzzy system, constructed online, with rules constructed by an online rule update algorithm, consequent parameters updated by iterative least squares, and antecedent parameters updated by a gradient descent algorithm. DHSOIT2FS uses a classic serial-layered structure to build the overall framework. The first layer uses the first two dimensions of data as input. Each subsequent layer uses the output of the previous layer with the next dimensional data as input until the system is built. During the training process, each data point is used to train the DHSOIT2FS before the next data point is passed in, achieving online construction. The effectiveness of the approach is illustrated using two numerical simulation examples. The proposed method is also applied to a data-driven control example of a single-link robot and achieves good tracking results.

1. Introduction

High-dimensional data arriving in large volumes from data streams are becoming the most common data type in many real-world applications [1]. Data-stream data arrive quickly and are prone to concept drift; handling such data has therefore become a popular research problem in recent years [2]. Self-organizing fuzzy systems are effective structures for confronting this problem: they are powerful tools that are widely used for real-time approximation of non-stationary problems, with many developments in recent years [3,4,5]. Online self-organizing fuzzy systems can handle streaming data; after each datum arrives, the system only needs to adjust a few parameters instead of all of them.
Fuzzy logic systems (FLSs) have been successfully used in a variety of areas [6,7,8,9]. However, most of these studies focus on type 1 FLSs. In recent years, studies on interval type 2 (IT2) FLSs have drawn much attention [10,11,12], and a complete theory of IT2 FLSs has been developed in [13]. IT2 FLSs are extensions of type 1 FLSs, where the membership value of an IT2 fuzzy set is a type 1 fuzzy number. IT2 FLSs appear to be a more promising method than their type 1 counterparts in handling problems with uncertainties such as noisy data and different word meanings. IT2 fuzzy sets allow researchers to model and minimize the effects of uncertainties in rule-based systems. Successful applications of IT2 FLSs include areas of signal processing [10,11], control [13,14,15], and medical applications [16].
Hierarchical fuzzy systems can be used to solve the problem of rule explosion. Each sub-fuzzy system is connected in a layer-to-layer fashion [17,18]. There are many ways to connect hierarchical fuzzy systems. The most classical ones are serial and stacked connections. Using serial connections can effectively reduce the number of rules. Hierarchical fuzzy systems also have practical applications in many ways [19,20].
The great success of deep convolutional neural networks (DCNN) [21] in solving complex practical problems [22,23,24] reveals a basic fact that multilevel structures are very powerful models in representing complex relationships. The main problems of DCNN are the substantial computational load required to train the many parameters of the DCNN and the lack of interpretability for the large number of model parameters [25]. To solve this problem, some scholars try to combine fuzzy systems with deep learning to obtain deep fuzzy neural networks. Currently, there is also substantial literature combining deep learning with fuzzy systems [26,27,28]. It can appropriately improve the interpretability of the system and reduce the computation time.
In this paper, we mainly combine the self-organizing interval type 2 fuzzy system with the deep hierarchical fuzzy system to form the deep hierarchical self-organizing interval type 2 fuzzy system (DHSOIT2FS). Each sub-fuzzy system is generated by self-organized construction. The structure of the sub-fuzzy system is formed by the self-organization of online data flow. Then, antecedent and consequent parameters are adjusted by gradient descent and iterative least squares algorithms. A serial hierarchical architecture is also used to combine multiple sub-fuzzy systems to form the DHSOIT2FS. When training the entire system, each data point is trained in turn for each sub-fuzzy system until all sub-fuzzy systems are trained. The effectiveness of the proposed algorithm is then illustrated in two numerical simulation examples and an actual single-link robot control simulation example.
The rest of this paper is organized as follows. Section 2 introduces the interval type 2 fuzzy system. Section 3 describes the construction method of DHSOIT2FS. Section 4 shows two numerical simulation examples and a data-driven robot control example. Finally, the conclusion is presented in Section 5.

2. Interval Type-2 Fuzzy System

In this section, we introduce information on interval type 2 fuzzy systems.
The IT2 FLS uses first-order TSK-type rules, and each rule has the following form:
$\text{Rule } i:\ \text{If } x_1(k) \text{ is } A_1^i, \ldots, \text{ and } x_n(k) \text{ is } A_n^i, \text{ Then } y = \tilde{a}_0^i + \sum_{j=1}^{n} \tilde{a}_j^i x_j(k), \quad i = 1, \ldots, R$
where $x_j(k)$, $j = 1, \ldots, n$, are the input variables, and each antecedent fuzzy set $A_j^i$, $i = 1, \ldots, R$, $j = 1, \ldots, n$, is an IT2 fuzzy set; $k$ denotes the sampling instant. $y$ is the output variable, and $R$ is the number of rules. The consequent parameters $\tilde{a}_j^i$, $i = 1, \ldots, R$, $j = 0, \ldots, n$, are interval sets:
$\tilde{a}_j^i = \left[c_j^i - d_j^i,\ c_j^i + d_j^i\right], \quad j = 0, \ldots, n$
where c j i and d j i represent the central value and range of a ˜ j i , respectively. The IT2 FLS consists of a fuzzifier, inference engine, type reducer, and defuzzifier.
The fuzzifier maps a crisp value to an IT2 fuzzy set. A Gaussian membership function with an uncertain mean, $m$, and a fixed standard deviation, $\sigma$, is used in this paper, which is described by the following:
$\mu_j^i(x_j) = \exp\left[-\left(\frac{x_j - m_j^i}{\sigma_j^i}\right)^2\right] \equiv N(m_j^i, \sigma_j^i; x_j), \quad m_j^i \in \left[m_{j1}^i,\ m_{j2}^i\right]$
where $m_{j1}^i$ and $m_{j2}^i$ represent the left mean and right mean of the Gaussian membership function, respectively. The membership degree is an interval, denoted by $\mu_j^i(x_j) = [\underline{\mu}_j^i(x_j),\ \bar{\mu}_j^i(x_j)]$, where $\bar{\mu}_j^i(x_j)$ and $\underline{\mu}_j^i(x_j)$ denote the upper and lower membership degrees, respectively, calculated as follows.
$\bar{\mu}_j^i(x_j) = \begin{cases} N(m_{j1}^i, \sigma_j^i; x_j), & x_j < m_{j1}^i \\ 1, & m_{j1}^i \le x_j \le m_{j2}^i \\ N(m_{j2}^i, \sigma_j^i; x_j), & x_j > m_{j2}^i \end{cases}$

$\underline{\mu}_j^i(x_j) = \begin{cases} N(m_{j2}^i, \sigma_j^i; x_j), & x_j \le (m_{j1}^i + m_{j2}^i)/2 \\ N(m_{j1}^i, \sigma_j^i; x_j), & x_j > (m_{j1}^i + m_{j2}^i)/2 \end{cases}$
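As a concrete illustration, the upper- and lower-membership computation above can be sketched in Python (a minimal sketch; function and variable names are ours, not from the paper):

```python
import numpy as np

def gaussian(m, sigma, x):
    """N(m, sigma; x): Gaussian membership value at x."""
    return float(np.exp(-((x - m) / sigma) ** 2))

def it2_membership(x, m1, m2, sigma):
    """Upper and lower membership degrees of a Gaussian IT2 fuzzy set
    with uncertain mean m in [m1, m2] and fixed standard deviation."""
    if x < m1:
        upper = gaussian(m1, sigma, x)
    elif x <= m2:
        upper = 1.0                      # plateau between the two means
    else:
        upper = gaussian(m2, sigma, x)
    if x <= 0.5 * (m1 + m2):
        lower = gaussian(m2, sigma, x)   # farther mean gives the lower bound
    else:
        lower = gaussian(m1, sigma, x)
    return lower, upper
```

Inside the footprint of the uncertainty interval $[m_{j1}^i, m_{j2}^i]$ the upper membership is exactly 1, while the lower membership always uses the farther of the two means.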
The firing strength of the $i$th rule is given as follows:
$f^i = \left[\prod_{j=1}^{n} \underline{\mu}_j^i(x_j),\ \prod_{j=1}^{n} \bar{\mu}_j^i(x_j)\right] = \left[\underline{f}^i,\ \bar{f}^i\right]$
where $f^i$ denotes the firing-strength interval of the $i$th rule; $\underline{f}^i$ and $\bar{f}^i$ represent the lower and upper bounds of the firing strength, respectively. Let $x_0 \equiv 1$; the output of each rule is then an interval value denoted by $[g_l^i,\ g_r^i]$.
$\left[g_l^i,\ g_r^i\right] = \left[\sum_{j=0}^{n} c_j^i x_j - \sum_{j=0}^{n} |x_j|\, d_j^i,\ \ \sum_{j=0}^{n} c_j^i x_j + \sum_{j=0}^{n} |x_j|\, d_j^i\right]$
According to [29,30], let $g_l = (g_l^1, g_l^2, \ldots, g_l^R)^T$ and $g_r = (g_r^1, g_r^2, \ldots, g_r^R)^T$ denote the original rule-ordered consequent values, and let $\underline{f} = (\underline{f}^1, \underline{f}^2, \ldots, \underline{f}^R)^T$ and $\bar{f} = (\bar{f}^1, \bar{f}^2, \ldots, \bar{f}^R)^T$. Through the type-reduction operation, an interval output $[y_l,\ y_r]$ is calculated as follows:
$y_l = \dfrac{\bar{f}^T Q_l^T E_1^T E_1 Q_l\, g_l + \underline{f}^T Q_l^T E_2^T E_2 Q_l\, g_l}{\sum_{i=1}^{S_L} (Q_l \bar{f})_i + \sum_{i=S_L+1}^{R} (Q_l \underline{f})_i}$

$y_r = \dfrac{\underline{f}^T Q_r^T E_3^T E_3 Q_r\, g_r + \bar{f}^T Q_r^T E_4^T E_4 Q_r\, g_r}{\sum_{i=1}^{S_R} (Q_r \underline{f})_i + \sum_{i=S_R+1}^{R} (Q_r \bar{f})_i}$
where $S_L$ and $S_R$ are the switch points obtained from the Karnik–Mendel (KM) algorithm [13]. $Q_l$ and $Q_r$ are $R \times R$ permutation matrices, and the following is the case.
$E_1 = \left(I_{S_L \times S_L},\ 0_{S_L \times (R-S_L)}\right) \in \mathbb{R}^{S_L \times R}$
$E_2 = \left(0_{(R-S_L) \times S_L},\ I_{(R-S_L) \times (R-S_L)}\right) \in \mathbb{R}^{(R-S_L) \times R}$
$E_3 = \left(I_{S_R \times S_R},\ 0_{S_R \times (R-S_R)}\right) \in \mathbb{R}^{S_R \times R}$
$E_4 = \left(0_{(R-S_R) \times S_R},\ I_{(R-S_R) \times (R-S_R)}\right) \in \mathbb{R}^{(R-S_R) \times R}$
The defuzzifier maps the type-reduced interval to a crisp value. Here, the output is as follows.
$y = \frac{1}{2}\left(y_l + y_r\right)$
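The type-reduction and defuzzification steps can be sketched as follows. This is an illustrative sketch only: a brute-force search over all switch points stands in for the iterative KM procedure of [13] (it yields the same interval endpoints, at higher cost), and all names are our own.

```python
import numpy as np

def type_reduce(g_l, g_r, f_lo, f_up):
    """Compute the type-reduced interval [y_l, y_r] and the crisp
    output y = (y_l + y_r)/2 from rule consequents g_l, g_r and
    lower/upper firing strengths f_lo, f_up."""
    f_lo, f_up = np.asarray(f_lo, float), np.asarray(f_up, float)

    def extreme(g, pick_min):
        order = np.argsort(g)              # sort consequents ascending
        g_s, lo, up = g[order], f_lo[order], f_up[order]
        best = None
        for s in range(len(g_s) + 1):
            # y_l: upper firing left of the switch, lower to the right;
            # y_r: the opposite assignment
            w = (np.concatenate([up[:s], lo[s:]]) if pick_min
                 else np.concatenate([lo[:s], up[s:]]))
            y = float(np.dot(w, g_s) / np.sum(w))
            if best is None or (y < best if pick_min else y > best):
                best = y
        return best

    y_l = extreme(np.asarray(g_l, float), True)
    y_r = extreme(np.asarray(g_r, float), False)
    return y_l, y_r, 0.5 * (y_l + y_r)
```

With degenerate firing intervals ($\underline{f}^i = \bar{f}^i$) the interval collapses and the output reduces to the ordinary weighted average of a type-1 TSK system.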

3. Deep Hierarchical Self-Organizing Interval Type 2 Fuzzy System

In this section, the structure learning of the self-organizing interval type-2 fuzzy system (abbreviated as SOFS) is introduced first, followed by its parameter learning. Then, the overall composition framework of the DHSOIT2FS is introduced.

3.1. SOFS Systems Structure Learning

The task of structure learning is to determine when to generate a new rule. Structure learning and parameter learning methods from paper [29] are used here. The threshold of the activation strength of the rule is also linked to the threshold of the fuzzy set to reduce the number of parameters. Since the firing strength in the SOFS is an interval, the center of the interval is computed as follows:
$f_c = \max_{1 \le i \le R(k)} \frac{1}{2}\left[\bar{f}^i(x) + \underline{f}^i(x)\right]$
where $R(k)$ is the number of rules at time $k$. If $f_c$ is smaller than $f_{th}$, which means that no cluster (rule) properly covers the input data, a new cluster (rule) is added. $f_{th} \in (0, 1)$ is a user-predefined threshold used to adjust the number of rules.
Once a new rule is generated, it must be determined whether a corresponding fuzzy set in each input variable should also be generated. An intuitive idea is to use the maximum membership value of the input to decide whether a new fuzzy set is generated:
$\mu_j^c = \max_{1 \le i \le ns_j(k)} \frac{1}{2}\left[\bar{\mu}_j^i(x) + \underline{\mu}_j^i(x)\right], \quad j = 1, \ldots, n$
where $ns_j(k)$ is the number of fuzzy sets in input variable $j$. If $\mu_j^c$ is larger than $\rho$, which is calculated as (14), then the existing fuzzy set $A_j^{I_c}$ is used as the antecedent part of the new rule in input variable $j$, where $I_c$ is the index of the corresponding fuzzy set.
$\rho = \sqrt[n]{f_{th}}$
The intuition comes from distributing the firing-strength threshold equally across the fuzzy sets: since the firing strength is the product of $n$ membership degrees, each fuzzy set should contribute at least $f_{th}^{1/n}$. If no existing fuzzy set reaches this average membership value, then a new fuzzy set should be generated. This relates the firing-strength threshold to the number of inputs and also reduces the number of user-defined parameters.
If $\mu_j^c$ is smaller than $\rho$, a new fuzzy set is generated in input variable $j$, and $ns_j(k+1) = ns_j(k) + 1$; the initial uncertain mean, $m_j^i$, and standard deviation, $\sigma_j^i$, of the $ns_j(k+1)$th interval type-2 fuzzy set in input variable $x_j$ are as follows:
$m_j^{ns_j(k+1)} \in \left[x_j - m_0,\ x_j + m_0\right]$

$\sigma_j^{ns_j(k+1)} = \sigma_0$
where $m_0$ determines the width of the uncertain mean interval of the initial interval type-2 fuzzy set, and $\sigma_0$ is a prespecified parameter determining the initial fuzzy set width. The choice of the two parameters depends on the range of the input domain: if the input fluctuates over a large range of values, these values should be larger; if the range is small, smaller values should be chosen. If the input domain is normalized to $[0, 1]$, then $m_0 = 0.1$ and $\sigma_0 = 0.4$ can be chosen as empirical values.
In addition to assigning the initial antecedent parameters, the initial consequent parameters should also be determined for a newly generated rule. They are set to the following.
$c_0^i = y_d(k), \quad d_0^i = 0.1, \quad c_j^i = 0.01, \quad d_j^i = 0.1, \quad j = 1, \ldots, n \ \text{and}\ i = 1, \ldots, R$
where $y_d(k)$ is the desired output for input $x$. The precise choice of these values matters little, as they are all optimized by the subsequent iterative least squares algorithm. The above operations are repeated for all subsequent input pairs.
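Putting the structure-learning steps above together, a minimal Python sketch of the rule- and fuzzy-set-generation decision is given below (function names and the data layout are our own assumptions, not the paper's implementation):

```python
import numpy as np

def maybe_add_rule(f_centers, f_th, mu_centers, x, m0=0.1, sigma0=0.4):
    """Structure-learning step for one n-dimensional input pattern x.
    f_centers: center firing strengths (f_up + f_lo)/2 of existing rules.
    mu_centers[j]: center memberships of the existing fuzzy sets on
    input j.  Returns (new_rule, antecedent), where antecedent[j] is
    either ('reuse', set_index) or ('new', (mean_interval, sigma))."""
    n = len(x)
    if len(f_centers) > 0 and max(f_centers) >= f_th:
        return False, None               # an existing rule covers x
    rho = f_th ** (1.0 / n)              # per-dimension threshold
    antecedent = []
    for j in range(n):
        mus = mu_centers[j]
        if len(mus) > 0 and max(mus) >= rho:
            antecedent.append(('reuse', int(np.argmax(mus))))
        else:
            # new IT2 set: uncertain mean [x_j - m0, x_j + m0], width sigma0
            antecedent.append(('new', ([x[j] - m0, x[j] + m0], sigma0)))
    return True, antecedent
```

Note how a single threshold $f_{th}$ drives both decisions: the rule-level test uses $f_{th}$ directly, while the set-level test uses its $n$th root.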

3.2. SOFS Systems Parameters Learning

The task of parameters learning is to tune the antecedent and consequent parameters to minimize the error function, which is described by the following Equation (18):
$E = \frac{1}{2}\left(y - y_d\right)^2$
where $y$ denotes the real output. In this paper, we use a gradient descent algorithm to tune the antecedent parameters. Let $p_j^i$ denote one of the free parameters in rule $i$ and variable $j$. Using the gradient descent algorithm, we obtain the following:
$p_j^i(k+1) = p_j^i(k) - \eta\, \dfrac{\partial E(k)}{\partial p_j^i(k)}$
where $\eta$ denotes the learning rate. Its value is related to the amount of data and the size of the input domain: with more data and a smaller input domain, the learning rate can be reduced appropriately; conversely, it should be increased. More details can be found in [29].
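A minimal sketch of this update step: the paper uses analytic gradients following [29]; here the gradient is estimated by forward finite differences purely to keep the example self-contained, and all names are ours.

```python
def gd_step(params, error_fn, eta=0.001, eps=1e-6):
    """One gradient-descent update p <- p - eta * dE/dp for each free
    antecedent parameter; error_fn maps a parameter list to E.
    Finite differences stand in for the analytic gradient."""
    base = error_fn(params)
    new_params = list(params)
    for i in range(len(params)):
        perturbed = list(params)
        perturbed[i] += eps
        grad = (error_fn(perturbed) - base) / eps   # dE/dp_i, approx.
        new_params[i] = params[i] - eta * grad
    return new_params
```

Repeated application drives the squared tracking error toward a local minimum, exactly as the analytic update does when the step size is small.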
Rule-ordered iterative least squares is used to tune the consequent parameters. According to [29,30], (11) can be re-expressed as follows:
$y = \frac{1}{2}(y_l + y_r) = \left[\bar{\phi}_l^T \;\; \bar{\phi}_r^T\right] \begin{bmatrix} g_l \\ g_r \end{bmatrix} = \left[\bar{\phi}_l^1 \cdots \bar{\phi}_l^R \;\; \bar{\phi}_r^1 \cdots \bar{\phi}_r^R\right] \begin{bmatrix} g_l \\ g_r \end{bmatrix}$
$= \left[\bar{\phi}_c^1 x_0 \cdots \bar{\phi}_c^1 x_n \;\; \bar{\phi}_s^1 x_0 \cdots \bar{\phi}_s^1 x_n \;\; \cdots \;\; \bar{\phi}_c^R x_0 \cdots \bar{\phi}_c^R x_n \;\; \bar{\phi}_s^R x_0 \cdots \bar{\phi}_s^R x_n\right] \times \left[c_0^1 \cdots c_n^1 \;\; d_0^1 \cdots d_n^1 \;\; \cdots \;\; c_0^R \cdots c_n^R \;\; d_0^R \cdots d_n^R\right]^T = \phi_{IL}^T\, g_{IL}$
where $\bar{\phi}_c^i = \bar{\phi}_r^i + \bar{\phi}_l^i$ and $\bar{\phi}_s^i = \bar{\phi}_r^i - \bar{\phi}_l^i$, $i = 1, \ldots, R$. The consequent parameter vector $g_{IL}$ can then be updated by the following rule-ordered iterative least squares.
$S(k+1) = \dfrac{1}{\lambda}\left[S(k) - \dfrac{S(k)\,\phi_{IL}(k+1)\,\phi_{IL}^T(k+1)\,S(k)}{\lambda + \phi_{IL}^T(k+1)\,S(k)\,\phi_{IL}(k+1)}\right]$
$g_{IL}(k+1) = g_{IL}(k) + S(k+1)\,\phi_{IL}(k+1)\left[y_d(k+1) - \phi_{IL}^T(k+1)\,g_{IL}(k)\right]$
where $0 < \lambda \le 1$ is a forgetting factor ($\lambda = 0.99998$ in this paper) and $S \in \mathbb{R}^{2R(n+1) \times 2R(n+1)}$, matching the dimension of $g_{IL}$. The initial value of $S$ is a diagonal matrix with large positive entries.
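The recursive update above can be sketched directly (a generic recursive-least-squares step with forgetting factor; variable names are ours):

```python
import numpy as np

def rls_update(S, g, phi, y_d, lam=0.99998):
    """One step of iterative least squares with forgetting factor lam:
    updates the matrix S and the consequent parameter vector g from
    regressor phi and desired output y_d."""
    phi = phi.reshape(-1, 1)
    Sphi = S @ phi
    S_new = (S - (Sphi @ Sphi.T) / (lam + float(phi.T @ Sphi))) / lam
    err = y_d - float(phi.ravel() @ g)          # prediction error
    g_new = g + (S_new @ phi).ravel() * err
    return S_new, g_new
```

In the paper, `phi` is the rule-ordered regressor $\phi_{IL}$ and `g` stacks the centers and spreads of all consequent intervals; the same two lines apply unchanged.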
For a new data pair, structure learning is performed first, followed by learning consequent parameters, and then learning the antecedent parameters. This is repeated until all training data pass.

3.3. DHSOIT2FS Structure and Learning

Fuzzy systems often face the "curse of dimensionality" when dealing with high-dimensional problems. We use a hierarchical block structure to build the DHSOIT2FS, dividing the variables into groups to reduce the dimensionality of the data. The model is built from the bottom up, and the final target output is used as the target output of each sub-fuzzy system in each layer.
The basic structure of the DHSOIT2FS is shown in Figure 1. The inputs and fuzzy systems of each layer are denoted by $X_{ij}$ and $\text{SOFS}_k$, where $i$ denotes the $i$th level, $j$ the $j$th input, and $k$ the $k$th sub-fuzzy system. Consider a problem with $n$-dimensional input and a single output; a DHSOIT2FS then contains $n-1$ SOFS subsystems. The entire system is constructed from the bottom up. This serial hierarchical structure reduces the computation and the overall number of rules and improves interpretability.
Each subsystem is a two-input, one-output system; thus, the number of rules and fuzzy sets in a single sub-fuzzy system stays small, and the computational complexity is low. For each subsystem, the desired output is the desired output at the current moment. The input of the first sub-fuzzy system is the first two dimensions of the input data at the current moment; the input of each subsequent sub-fuzzy system combines the output of the previous sub-fuzzy system with the data of the next dimension.
The entire deep hierarchical fuzzy system is trained from the bottom to the top. The training process for a single sub-fuzzy system is carried out as described above: the input-output data pair is first obtained, the structure is learned, then the parameters, and finally the current output is produced. This structure is suitable for online data streams. For each online data point, each sub-fuzzy system is trained in turn until the entire system is trained; the process is quick and efficient. Then, the next data point is imported, and so on until the whole data stream has been processed. This differs from using all the training data to train one subsystem completely and then using all the data to train the next subsystem.
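The serial, pattern-by-pattern training loop above can be summarized in a short sketch (the `learn` interface of each subsystem is our assumption; it stands in for the structure-plus-parameter learning of one SOFS):

```python
def train_online(subsystems, stream):
    """Serial hierarchical online training.  For each incoming pattern
    x (a tuple of n inputs) with desired output y_d, layer 1 receives
    (x[0], x[1]) and every later layer i receives (previous layer's
    output, x[i+1]); an n-dimensional problem therefore needs n-1
    subsystems.  Each subsystem exposes learn((a, b), y_d) -> output,
    which performs its structure and parameter learning internally."""
    for x, y_d in stream:
        z = None
        for i, sofs in enumerate(subsystems):
            pair = (x[0], x[1]) if i == 0 else (z, x[i + 1])
            z = sofs.learn(pair, y_d)
        yield z          # system prediction after seeing this pattern
```

Every subsystem sees the same desired output $y_d$, so each layer is trained toward the final target rather than toward an intermediate one.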

4. Simulation

In this section, nonlinear dynamical system identification, high-dimensional dynamical system identification, and a single-link robot control problem are provided. The simulation results illustrate the effectiveness of the methods in this paper.

4.1. Nonlinear Dynamic System Identification

A simple nonlinear system, widely used in the literature, is used here to approximate a simple robotic system; its expression is given in (22). It is used to verify the effectiveness of the algorithm proposed in this paper.
$y(k+1) = \dfrac{y(k)\, y(k-1)\left[y(k) + 2.5\right]}{1 + y^2(k) + y^2(k-1)} + u(k)$
where $u(k) = \sin(2\pi k/25)$, $y(0) = 0$, and $y(1) = 1$. There are three inputs, $y(k)$, $y(k-1)$, and $u(k)$, and one output, $y(k+1)$. The model can be described by the following equation:
$y_p(k+1) = f\left(y(k),\ y(k-1),\ u(k)\right)$
where $f(\cdot)$ represents the fuzzy model obtained by our algorithm. Five hundred patterns are generated from $k = 1$ to $k = 500$, with the first 400 patterns used for training and the last 100 used for testing. The parameters in the DHSOIT2FS are set to $f_{th} = 0.1$, $\sigma_0 = 0.4$, and $\eta = 0.001$. The RMSE between the desired output $y_d(k+1)$ and the predicted output $y_p(k+1)$ is used as the evaluation indicator and is calculated as follows.
$\text{RMSE} = \sqrt{\dfrac{1}{100} \sum_{k=400}^{499} \left[y_d(k+1) - y_p(k+1)\right]^2}$
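Generating the identification patterns and the evaluation metric can be sketched as follows (names are ours; the initial conditions follow the text, and the denominator uses the standard second-order form of this benchmark):

```python
import numpy as np

def generate_data(K=500):
    """Simulate y(k+1) = y(k)y(k-1)[y(k)+2.5] / (1 + y(k)^2 + y(k-1)^2)
    + u(k), with u(k) = sin(2*pi*k/25), y(0) = 0, y(1) = 1."""
    y = np.zeros(K + 2)
    y[1] = 1.0
    u = np.sin(2 * np.pi * np.arange(K + 1) / 25.0)
    for k in range(1, K + 1):
        y[k + 1] = (y[k] * y[k - 1] * (y[k] + 2.5)
                    / (1.0 + y[k] ** 2 + y[k - 1] ** 2) + u[k])
    return y, u

def rmse(y_d, y_p):
    """Root-mean-square error between desired and predicted outputs."""
    y_d, y_p = np.asarray(y_d, float), np.asarray(y_p, float)
    return float(np.sqrt(np.mean((y_d - y_p) ** 2)))
```

The first 400 triples $(y(k), y(k-1), u(k)) \to y(k+1)$ drawn from this trajectory form the training set and the remaining 100 the test set.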
The structure of the system is shown in Figure 2. There are two sub-fuzzy systems, SOFS1 and SOFS2, and the output of the former system is used as the first dimensional input of the next system. The entire system is constructed using online data. After each piece of data comes in, it is first trained using y ( k ) and y ( k 1 ) as the two-dimensional inputs to SOFS1. After the parameters of SOFS1 are adjusted, its output and u ( k ) are used as two-dimensional inputs with respect to SOFS2 for training.
To better illustrate the effectiveness of the algorithm, a comparison is made with the algorithm of [26] (the WM method), using the code provided in the attachment of that paper. The number of fuzzy sets for each dimension is set to seven, with a sliding window of size 3, i.e., three inputs per subsystem. This way of constructing a deep fuzzy system is a stacked approach, different from the serial approach in this paper, and requires more rules and fuzzy sets. It can be observed that our predicted output almost coincides with the desired output. The number of rules, the number of fuzzy sets, and the final RMSE on the test set for the two methods are shown in Table 1. The method in this paper uses fewer rules and fewer fuzzy sets while obtaining better accuracy. The simulation time of the WM method is much shorter than that of our method because the WM method is trained offline rather than online; although the simulation time of our method is longer, it remains within an acceptable range.
Figure 3 shows the predicted output of our method and the expected output of the test samples. It can be seen that the method in this paper keeps track of the given output very well. The horizontal and vertical coordinates are a numerical quantity and have no physical meaning; thus, there are no units. In particular, the horizontal coordinate represents the sampling interval, which is also not a real sense of time.
Figure 4 shows the fuzzy set of the entire DHSOIT2FS. It can be seen that the fuzzy sets are well distributed and each fuzzy set has a suitable width. The overall interpretability is strong.
Figure 5 shows the changes in the fuzzy system’s parameters throughout the training process. It can be seen that both antecedent and consequent parameters converge quickly, which verifies the effectiveness of the method in this paper.

4.2. Higher Dimensional System Identification

A simple equation is used here to describe a robotic system. High-dimensional data are also used to model the multidimensional inputs to the robot. The system can be described by (25):
$y(k) = \dfrac{\sum_{i=1}^{m} y(k-i)}{1 + \sum_{i=1}^{m} y^2(k-i)} + u(k-1)$
where $u(k) = \sin(2\pi k/20)$ and $y(k) = 0$ for $k = 1, 2, \ldots, m$. In this paper, we set $m = 10$. There are eleven inputs, $y(k-1), y(k-2), \ldots, y(k-10), u(k-1)$, and one output, $y(k)$. The model can be described by the following equation.
$y_p(k) = f\left(y(k-1), \ldots, y(k-10),\ u(k-1)\right)$
Three thousand three hundred patterns are generated from $k = 1$ to $k = 3300$, with eighty percent of the data used as the training set and the rest as the test set. The parameters in the DHSOIT2FS are set to $f_{th} = 0.01$, $\sigma_0 = 0.5$, and $\eta = 0.001$. The evaluation metric is again the RMSE, with a formula similar to (24). Training proceeds as described in the previous section, and the structure of the system is similar to Figure 1. The method of [26] is again used for comparison, with five fuzzy sets per dimension and a sliding window of size 3. The number of rules, the number of fuzzy sets, and the final RMSE on the test set for the two methods are shown in Table 2. The method in this paper uses fewer rules and fewer fuzzy sets while obtaining better accuracy.
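For completeness, the data generation for this higher-dimensional benchmark can be sketched as follows (names are ours; zero initial conditions follow the text):

```python
import numpy as np

def generate_highdim(K=3300, m=10):
    """Simulate y(k) = sum_{i=1..m} y(k-i) / (1 + sum_{i=1..m} y(k-i)^2)
    + u(k-1), with u(k) = sin(2*pi*k/20) and y(1) = ... = y(m) = 0."""
    y = np.zeros(K + 1)
    u = np.sin(2 * np.pi * np.arange(K + 1) / 20.0)
    for k in range(m + 1, K + 1):
        past = y[k - m:k]                       # y(k-m), ..., y(k-1)
        y[k] = past.sum() / (1.0 + (past ** 2).sum()) + u[k - 1]
    return y, u
```

Each pattern pairs the eleven-dimensional input $(y(k-1), \ldots, y(k-10), u(k-1))$ with the target $y(k)$.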
Figure 6 shows the predicted output of our method and the expected output of the test samples. It can be seen that the proposed method in this paper can identify nonlinear systems very well.
Figure 7 shows the fuzzy set of the SOFS1. The fuzzy sets are all distributed very evenly, and semantic variables can be specified for each fuzzy set. The system is highly interpretable.

4.3. Data-Driven Single-Link Robot Control

Consider a single-link robot control system described by the following:
$M\ddot{q} + 0.5\, mgl \sin(q) = \tau, \qquad y = q$
where $M = 0.5\ \text{kg·m}^2$ represents the rotational inertia of the rod, $m = 1\ \text{kg}$ is the mass of the rod, $l = 1\ \text{m}$ denotes the length of the rod, and $g = 9.8\ \text{m/s}^2$ is the acceleration of gravity. $q$, $\dot{q}$, and $\ddot{q}$ represent the angular position, angular velocity, and angular acceleration of the rod, respectively.
The task is to keep the angular position of the system constant at 1 (that is, to have the angular position track a step signal). The simulation time is $T = 5$ s, with a step size of 0.01, and the initial states of the system are assumed to be zero. First, a PID control method is used, with the following expression (28):
$\tau(k) = k_p\, e(k) + k_i \sum_{n=0}^{k} e(n) + k_d\left[e(k) - e(k-1)\right]$
where $k_p$ is the proportional coefficient, $k_i$ the integral coefficient, and $k_d$ the differential coefficient. Appropriate PID parameters, $k_p = 100$, $k_i = 5$, and $k_d = 2000$, are selected to make the system stable. Moreover, define $e_a(k) = \sum_{n=0}^{k} e(n) = e_a(k-1) + e(k)$ and $e_d(k) = e(k) - e(k-1)$. These variables compose the input data set $X = \{e(k), e_d(k), e_a(k)\}$, $k = 1, 2, \ldots, 4001$, and the output data are the control signal $\tau(k)$. Since the order of magnitude of $e_a$ differs greatly from that of $e$, two designs are considered: normalizing and not normalizing the input and output data. This idea can also be extended to other data sets or control methods. The structure of the algorithm without normalization is similar to Figure 2. The normalized structure has two additional modules, normalization of the input data and denormalization of the output data, and is shown in Figure 8, where $\hat{e}(k)$, $\hat{e}_d(k)$, $\hat{e}_a(k)$, and $\hat{\tau}$ are the normalized variables.
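Collecting the $(e, e_d, e_a) \to \tau$ training pairs by simulating the plant under this PID law can be sketched as follows. Forward-Euler integration and the function names are our assumptions; the plant constants and gains are those stated in the text.

```python
import numpy as np

def simulate_pid(T=5.0, dt=0.01, kp=100.0, ki=5.0, kd=2000.0):
    """Simulate M*q'' + 0.5*m*g*l*sin(q) = tau under the discrete PID
    law tracking q_d = 1, collecting (e, e_d, e_a, tau) pairs."""
    M, m, l, g = 0.5, 1.0, 1.0, 9.8
    q, dq = 0.0, 0.0          # angular position and velocity
    e_prev, e_a = 1.0, 0.0    # previous error (avoids derivative kick)
    data = []
    for _ in range(int(round(T / dt))):
        e = 1.0 - q           # step reference q_d = 1
        e_a += e
        e_d = e - e_prev
        tau = kp * e + ki * e_a + kd * e_d
        data.append((e, e_d, e_a, tau))
        ddq = (tau - 0.5 * m * g * l * np.sin(q)) / M   # plant dynamics
        dq += ddq * dt
        q += dq * dt
        e_prev = e
    return q, data
```

The logged quadruples are exactly the input-output pairs used to train the DHSOIT2FS, which afterwards stands in for the PID controller.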
Since the data set is collected first, obtaining the maximum and minimum values over the entire process is straightforward. Essentially, the online training method is still used here, except that prior knowledge of the maximum and minimum values of the inputs and outputs is available in advance, as is often the case in practical applications. The normalized expressions are given below:
$\hat{e}(k) = \dfrac{e(k) - e_{min}}{e_{max} - e_{min}}$
where $e_{min}$ and $e_{max}$ represent the minimum and maximum values of $e(k)$ over the entire training process, respectively. The parameters in the DHSOIT2FS are set to $f_{th} = 0.001$, $\sigma_0 = 0.4$, and $\eta = 0.0001$. When training is over, the resulting DHSOIT2FS replaces the PID controller in the original model. The result is shown in Figure 9: $y_1$ represents the output state obtained with the non-normalized DHSOIT2FS algorithm; $y_2$ the output state obtained with the normalized DHSOIT2FS algorithm; $y_{pid}$ the output state obtained with the PID algorithm; and $y_d$ the desired output state. The algorithm in this paper is completely data-driven and requires no knowledge of the model parameters. In Figure 9, the $y_{pid}$ curve is the source of the original training data; after observing a number of data points, our method constructs a fuzzy system whose behavior is consistent with it. This demonstrates the effectiveness of the method: if the data source were controlled by some other, better control method, the approach of this paper could fit that method equally well. It can be seen that the output obtained by our method tracks the given reference well, proving the effectiveness of our method.
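The normalization and the matching denormalization of the output channel can be sketched as a simple pair (names are ours):

```python
def make_minmax(v_min, v_max):
    """Build a min-max normalization / denormalization pair that maps
    [v_min, v_max] onto [0, 1] and back; the inverse is applied to the
    denormalized output channel."""
    span = float(v_max - v_min)

    def norm(v):
        return (v - v_min) / span

    def denorm(v_hat):
        return v_hat * span + v_min

    return norm, denorm
```

One such pair is built per channel ($e$, $e_d$, $e_a$, and $\tau$) from the extrema observed in the collected data set.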
Figure 10 shows the distribution of the antecedent fuzzy sets for the DHSOIT2FS algorithm with normalization. Only one fuzzy set is used per dimension, which is very concise and highly interpretable. In contrast, without normalization, the output of the SOFS1 sub-fuzzy system fluctuates over a wide range of values, since it is trained against the desired control signal, $\tau$. This drives the range of $X_{21}$ well beyond $[0,1]$, causing a mismatch between the parameters set by SOFS2 and the actual range of $X_{21}$ and leading the system to create many unnecessary fuzzy sets and rules. This normalization scheme is the online learning method for the case when the scope of the data is known in advance. Although this experiment is a simple PID control simulation, the idea carries over to other control simulation methods.

5. Conclusions

This paper proposes a new fuzzy system architecture: the DHSOIT2FS. Each sub-fuzzy system is a self-organizing interval type-2 fuzzy system, constructed online, with rules constructed by an online rule update algorithm, consequent parameters updated by iterative least squares, and antecedent parameters updated using a gradient descent algorithm. DHSOIT2FS uses a classic serial-layered structure to build the overall framework. The first layer uses the first two dimensions of data as input. Each subsequent layer uses the output of the previous layer with the next dimensional data as input until it is built. During the training process, each data point is trained with DHSOIT2FS before passing in the next data point to achieve the online construction. The effectiveness of the approach in this paper is illustrated using two numerical simulation examples as well as a robot control simulation example.

Author Contributions

Conceptualization, Z.M., T.Z. and N.L.; methodology, Z.M., T.Z. and N.L.; software, Z.M.; validation, Z.M.; formal analysis, Z.M.; investigation, Z.M.; resources, Z.M., T.Z. and N.L.; data curation, Z.M.; writing—original draft preparation, Z.M.; writing—review and editing, Z.M., T.Z. and N.L.; visualization, Z.M.; supervision, T.Z. and N.L.; project administration, Z.M., T.Z. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ge, D.; Zeng, X.J. A self-evolving fuzzy system which learns dynamic threshold parameter by itself. IEEE Trans. Fuzzy Syst. 2018, 27, 1625–1637. [Google Scholar] [CrossRef] [Green Version]
  2. Gu, X.; Shen, Q. A self-adaptive fuzzy learning system for streaming data prediction. Inf. Sci. 2021, 579, 623–647. [Google Scholar] [CrossRef]
  3. Wei, Z.X.; Doctor, F.; Liu, Y.X.; Fan, S.Z.; Shieh, J.S. An optimized type-2 self-organizing fuzzy logic controller applied in anesthesia for propofol dosing to regulate BIS. IEEE Trans. Fuzzy Syst. 2020, 28, 1062–1072. [Google Scholar] [CrossRef]
  4. Ferdaus, M.M.; Pratama, M.; Anavatti, S.G.; Garratt, M.A.; Pan, Y. Generic evolving self-organizing neuro-fuzzy control of bio-inspired unmanned aerial vehicles. IEEE Trans. Fuzzy Syst. 2019, 28, 1542–1556. [Google Scholar] [CrossRef] [Green Version]
  5. Gu, X. Multilayer ensemble evolving fuzzy inference system. IEEE Trans. Fuzzy Syst. 2020, 29, 2425–2431. [Google Scholar] [CrossRef]
Figure 1. The structure of the DHSOIT2FS.
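The serial layered topology of Figure 1, as described in the abstract (layer 1 takes the first two input dimensions; every later layer takes the previous layer's output together with the next input dimension), can be illustrated with a minimal sketch. Here `subsystem` is a hypothetical stand-in for one two-input self-organizing interval type-2 fuzzy block; the real SOFS layers learn their rules and parameters online.

```python
def subsystem(a, b):
    # Hypothetical stand-in for a trained two-input IT2 fuzzy subsystem;
    # a real SOFS block would fire fuzzy rules and type-reduce here.
    return 0.5 * (a + b)

def dhsoit2fs_forward(x):
    """Forward pass through the serial hierarchy for an n-dimensional input x."""
    y = subsystem(x[0], x[1])       # layer 1: first two input dimensions
    for k in range(2, len(x)):      # layer k: previous output + next dimension
        y = subsystem(y, x[k])
    return y

print(dhsoit2fs_forward([1.0, 3.0, 2.0]))  # three inputs -> two stacked layers
```

Because every layer has exactly two inputs, each subsystem's rule base stays small regardless of the total input dimension, which is the structural idea behind the hierarchy.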
Figure 2. The structure of the DHSOIT2FS in Section 4.1.
Figure 3. The predicted output versus the desired output in Section 4.1.
Figure 4. Fuzzy sets of the entire DHSOIT2FS in Section 4.1. (a) Fuzzy sets of SOFS1’s first dimension. (b) Fuzzy sets of SOFS1’s second dimension. (c) Fuzzy sets of SOFS2’s first dimension. (d) Fuzzy sets of SOFS2’s second dimension.
Figure 5. Variations in antecedent and consequent parameters during DHSOIT2FS training. (a) Antecedent parameters. (b) Consequent parameters.
Figure 6. The predicted output versus the desired output in Section 4.2.
Figure 7. Fuzzy sets of SOFS1 in Section 4.2. (a) Fuzzy sets of SOFS1’s first dimension. (b) Fuzzy sets of SOFS1’s second dimension.
Figure 8. The structure of the DHSOIT2FS in Section 4.3.
Figure 9. The output obtained by each control algorithm and the desired output.
Figure 10. Distribution of the antecedent fuzzy sets for the DHSOIT2FS algorithm with normalization. (a) Fuzzy sets of SOFS1’s first dimension. (b) Fuzzy sets of SOFS1’s second dimension. (c) Fuzzy sets of SOFS2’s first dimension. (d) Fuzzy sets of SOFS2’s second dimension.
Table 1. Comparison of the results of the two methods in Section 4.1.

Method                        WM          DHSOIT2FS
RMSE                          0.0926      0.0476
Number of rules               343         20
Total number of fuzzy sets    21          20
Training run time             0.1570 s    2.1560 s
Testing run time              0.0342 s    0.1790 s
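The rule counts in Tables 1 and 2 illustrate the dimensional explosion the paper targets: a flat grid-partition system over n inputs with m fuzzy sets per input needs m^n rules, while a serial hierarchy of two-input layers grows only linearly in n. The sketch below shows this scaling; reading the WM entries as 7^3 = 343 and 5^5 = 3125 rules is our interpretation of the tables, and `hierarchical_rule_count` assumes each two-input layer is fully gridded.

```python
def flat_rule_count(m, n):
    # Flat grid partition: one rule per combination of fuzzy sets,
    # so the rule base grows exponentially with the input dimension n.
    return m ** n

def hierarchical_rule_count(m, n):
    # Serial hierarchy of two-input layers: (n - 1) layers, each with at
    # most m * m rules, so growth is linear in the input dimension n.
    return (n - 1) * m * m

print(flat_rule_count(7, 3))          # 343: consistent with the WM entry in Table 1
print(flat_rule_count(5, 5))          # 3125: consistent with the WM entry in Table 2
print(hierarchical_rule_count(5, 5))  # 100: linear, not exponential, in n
```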
Table 2. Comparison of the results of the two methods in Section 4.2.

Method                        WM          DHSOIT2FS
RMSE                          0.2390      0.05185
Number of rules               3125        48
Total number of fuzzy sets    375         58
Training run time             8.180 s     46.404 s
Testing run time              2.159 s     4.457 s
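The RMSE values reported in Tables 1 and 2 follow the standard root-mean-square-error definition; a minimal self-contained sketch (the sequences below are illustrative, not the paper's data):

```python
import math

def rmse(predicted, desired):
    # Root-mean-square error between predicted and desired output sequences.
    assert len(predicted) == len(desired) and predicted, "sequences must match and be non-empty"
    return math.sqrt(sum((p - d) ** 2 for p, d in zip(predicted, desired)) / len(predicted))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```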
Mei, Z.; Zhao, T.; Liu, N. Deep Hierarchical Interval Type 2 Self-Organizing Fuzzy System for Data-Driven Robot Control. Processes 2022, 10, 2091. https://doi.org/10.3390/pr10102091
