Article

Filtering-Based Parameter Identification Methods for Multivariable Stochastic Systems

1 Taizhou Electric Power Conversion and Control Engineering Technology Research Center, Taizhou University, Taizhou 225300, China
2 Department of Mathematical Sciences, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(12), 2254; https://doi.org/10.3390/math8122254
Submission received: 12 November 2020 / Revised: 5 December 2020 / Accepted: 11 December 2020 / Published: 21 December 2020

Abstract

This paper presents an adaptive filtering-based maximum likelihood multi-innovation extended stochastic gradient algorithm to identify multivariable equation-error systems with colored noises. The data filtering and model decomposition techniques are used to simplify the structure of the considered system: a predefined filter is utilized to filter the observed data, and the multivariable system is turned into several subsystems whose parameters appear in vector form. By introducing the multi-innovation identification theory into the stochastic gradient method, this study achieves improved performance. The numerical simulation results indicate that the proposed algorithm can generate more accurate parameter estimates than the filtering-based maximum likelihood recursive extended stochastic gradient algorithm.

1. Introduction

System identification comprises the theory and methods of establishing mathematical models of dynamical systems [1,2,3,4,5], and identification approaches have been proposed for both scalar and multivariable systems [6,7,8,9,10,11]. Multivariable systems are widespread in modern large-scale industrial processes and can describe the characteristics of dynamic processes more accurately, so studying identification methods for multivariable systems has extensive application prospects [12,13,14]. The identification methods for multivariable systems can be regarded as extensions of those for scalar systems [15,16]; therefore, how to identify multivariable systems by extending scalar identification methods has attracted much attention. This paper focuses on the identification of multivariable systems with complex structures and many parameters. Over the past decades, many parameter estimation methods have been developed for multivariable systems, such as stochastic gradient methods [17], iterative methods [18], recursive least-squares methods [19,20], and blind identification methods [21]. The maximum likelihood algorithm has good statistical properties and can deal with colored noises directly [22,23,24]. The present study aims to develop a more efficient algorithm based on the maximum likelihood principle, the negative gradient search, data filtering, and the multi-innovation identification theory.
The complex structures and high-dimensional parameter matrices of multivariable systems increase the computational complexity [25,26,27]. Inspired by hierarchical control based on the decomposition-coordination principle for large-scale systems, hierarchical identification reduces the computational intensity by decomposing the identification model into several subsystems with smaller dimensions and fewer variables [28]. Differing from hierarchical identification [29], the model decomposition technique, which is based on the row and column expansion of matrix multiplication, is another effective way to reduce the computational burden. Recently, the model decomposition technique was used in [30,31] to reduce the computational complexity by transforming the multivariable system into several small-scale subsystems in which only parameter vectors need to be determined. By changing the noise model structure of the subsystem to whiten the colored noise, an adaptive filter can be designed to filter the observed data; the subsystem identification model is then further simplified and the parameter estimation accuracy is improved [32,33,34]. For ARX models with unmeasurable outputs, a modified Kalman filter was designed and a new multi-step-length formulation was derived to improve the performance of the gradient iterative algorithm [35].
The advantage of stochastic gradient methods is that they need less computational effort than existing identification methods [36,37]; however, due to their zigzagging search behavior, they have slow convergence rates [38,39]. The focus of this paper is to develop a computationally efficient method by introducing the multi-innovation identification theory into the stochastic gradient method. An innovation is the useful information that can improve the accuracy of parameter estimation or state estimation. From the viewpoint of innovation modification, the multi-innovation identification theory improves the convergence rate and the parameter estimation accuracy in two ways [40,41]. Firstly, the multi-innovation method uses not only the current data but also past data in each recursive step, which improves the convergence rate. Secondly, it repeatedly utilizes the data available in two neighboring recursions, which improves the parameter estimation accuracy. Along these lines, multi-innovation methods have been developed in [42,43]. It is well known that an increasing innovation length leads to better parameter estimation accuracy, but at the price of a larger computational effort [44,45]; the difficulty lies in how to choose the innovation length.
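To make the innovation-length trade-off concrete, the following toy sketch (not the paper's algorithm; an ordinary linear-regression setting with illustrative names) compares a stochastic gradient estimator with its multi-innovation variant, where the innovation length p controls how many recent data pairs are stacked into each update:

```python
import numpy as np

def misg_estimate(phi, y, p):
    """Multi-innovation stochastic gradient estimate of theta in y_k = phi_k^T theta + v_k.

    phi: (L, n) regressor matrix, y: (L,) outputs, p: innovation length
    (p = 1 recovers the ordinary stochastic gradient algorithm).
    """
    L, n = phi.shape
    theta = np.zeros(n)
    r = 1.0
    for k in range(L):
        lo = max(0, k - p + 1)            # stack the p most recent data pairs
        Phi = phi[lo:k + 1].T             # n x p information matrix
        E = y[lo:k + 1] - Phi.T @ theta   # p-dimensional innovation vector
        r += np.linalg.norm(phi[k]) ** 2  # accumulated squared regressor norm
        theta = theta + (Phi @ E) / r
    return theta

rng = np.random.default_rng(0)
theta_true = np.array([0.5, -1.2, 2.0])
phi = rng.standard_normal((3000, 3))
y = phi @ theta_true + 0.1 * rng.standard_normal(3000)
err1 = np.linalg.norm(misg_estimate(phi, y, 1) - theta_true)
err5 = np.linalg.norm(misg_estimate(phi, y, 5) - theta_true)
print(err1, err5)  # a longer innovation length typically gives a smaller error
```

The extra cost of a larger p is visible in the inner matrix-vector products, which grow linearly with p, matching the accuracy-versus-effort trade-off discussed above.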
In summary, although a filtering and maximum likelihood-based recursive least-squares algorithm is available for multivariable systems with complex structures and colored noises [32], there remains a need for enhancing the parameter estimation accuracy with computational efficiency. Motivated by these considerations, this paper has the following contributions:
  • The data filtering and model decomposition techniques are used to reduce the computational complexity of the multivariable systems contaminated by uncertain disturbances.
  • A filtering-based multivariable maximum likelihood multi-innovation extended stochastic gradient (F-M-ML-MIESG) algorithm is proposed for improved parameter estimation accuracy while retaining desired computational performance.
  • The noise model parameters are dealt with directly using the maximum likelihood principle.
Briefly, this paper is organized as follows. Section 2 describes the multivariable system with unmeasurable disturbances, derives the subsystem identification model, and formulates the identification problem. Section 3 develops a filtering-based multivariable maximum likelihood recursive extended stochastic gradient (F-M-ML-RESG) algorithm. Section 4 introduces the multi-innovation identification theory into the F-M-ML-RESG algorithm to derive an F-M-ML-MIESG algorithm. Section 5 gives numerical examples to verify the proposed algorithms. Finally, Section 6 summarizes the study.

2. The System Description and Identification Model

Symbols and their meanings:
0: the zero matrix of appropriate size.
1_n: an n-dimensional column vector whose entries are all 1.
I or I_n: the identity matrix of appropriate size or of size n × n.
X^T: the transpose of the vector or matrix X.
||X||: the norm of the vector or matrix X.
A =: X or X := A: X is defined by A.
k: the time variable.
θ̂_k: the estimate of θ at time k.
p_0: a large positive constant, e.g., p_0 = 10^6.
Consider the following multivariable controlled autoregressive autoregressive moving average (M-CARARMA) model:
A(q) y_k = Q(q) u_k + [D(q)/C(q)] v_k,  (1)
where y_k ∈ R^m and u_k ∈ R^r are the output and input vectors, respectively, and v_k ∈ R^m denotes a random white noise vector with zero mean and variance σ^2. In terms of the unit backward shift operator q^{-1} (q^{-1} x_k = x_{k-1}), the polynomials A(q), Q(q), C(q), and D(q) are expressed as
A(q) := I_m + A_1 q^{-1} + A_2 q^{-2} + … + A_{n_a} q^{-n_a},  A_l = [a_{ij}^l] ∈ R^{m×m},  l = 1, 2, …, n_a,
Q(q) := Q_1 q^{-1} + Q_2 q^{-2} + … + Q_{n_b} q^{-n_b},  Q_l = [b_{ij}^l] ∈ R^{m×r},  l = 1, 2, …, n_b,
C(q) := 1 + c_1 q^{-1} + c_2 q^{-2} + … + c_{n_c} q^{-n_c},  c_l ∈ R,  l = 1, 2, …, n_c,
D(q) := 1 + d_1 q^{-1} + d_2 q^{-2} + … + d_{n_d} q^{-n_d},  d_l ∈ R,  l = 1, 2, …, n_d.
Assume that y_k = 0, u_k = 0, and v_k = 0 for k ≤ 0, and that the orders n_a, n_b, n_c, and n_d are known. Differing from the work in [32], the focus of this paper is to derive a new method to identify the polynomial coefficients A_l, Q_l, c_l, and d_l. Referring to [32], in order to reduce the computational complexity, Equation (1) is decomposed into several subsystems; the ith subsystem can then be represented as
v_{i,k} = [C(q)/D(q)] [ Σ_{l=0}^{n_a} A_l^i y_{k-l} − Σ_{l=1}^{n_b} Q_l^i u_{k-l} ] = [C(q)/D(q)] [A_i(q) y_k − Q_i(q) u_k],  i = 1, 2, …, m,  (2)
where A_l^i ∈ R^{1×m} and Q_l^i ∈ R^{1×r} denote the ith rows of A_l and Q_l (with A_0^i = e_i, the ith row of I_m), and A_i(q) and Q_i(q) denote the ith rows of A(q) and Q(q).
Define
a_i := [A_1^i, A_2^i, …, A_{n_a}^i]^T ∈ R^{m n_a},  b_i := [Q_1^i, Q_2^i, …, Q_{n_b}^i]^T ∈ R^{r n_b},
c := [c_1, c_2, …, c_{n_c}]^T ∈ R^{n_c},  d := [d_1, d_2, …, d_{n_d}]^T ∈ R^{n_d},
y_{1,k} := C(q) y_k,  u_{1,k} := C(q) u_k,
φ_{i1,k} := [φ_{a,k}^T, φ_{b,k}^T, φ_{id,k}^T]^T ∈ R^{n_0},  θ_{i1} := [a_i^T, b_i^T, d^T]^T ∈ R^{n_0},  n_0 := m n_a + r n_b + n_d,
φ_{a,k} := [−y_{1,k-1}^T, −y_{1,k-2}^T, …, −y_{1,k-n_a}^T]^T ∈ R^{m n_a},
φ_{b,k} := [u_{1,k-1}^T, u_{1,k-2}^T, …, u_{1,k-n_b}^T]^T ∈ R^{r n_b},
φ_{ic,k} := [−w_{i,k-1}, −w_{i,k-2}, …, −w_{i,k-n_c}]^T ∈ R^{n_c},
φ_{id,k} := [v_{i,k-1}, v_{i,k-2}, …, v_{i,k-n_d}]^T ∈ R^{n_d},
φ_{i0,k} := [−y_{k-1}^T, −y_{k-2}^T, …, −y_{k-n_a}^T, u_{k-1}^T, u_{k-2}^T, …, u_{k-n_b}^T]^T ∈ R^{m n_a + r n_b},
θ_{i0} := [a_i^T, b_i^T]^T ∈ R^{m n_a + r n_b}.
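The information vectors above are built by stacking delayed samples of a signal, zero-padded according to the convention that the signals vanish for k ≤ 0. A small illustrative helper (hypothetical, not from the paper; the paper's φ_{a,k} additionally carries a minus sign on the stacked output samples) could look like this:

```python
import numpy as np

def stack_history(x, k, n):
    """Stack the n most recent delayed samples [x_{k-1}^T, ..., x_{k-n}^T]^T,
    zero-padding entries whose time index falls before the start of the data."""
    m = x.shape[1]                      # dimension of each sample
    out = np.zeros(n * m)
    for l in range(1, n + 1):
        if k - l >= 0:
            out[(l - 1) * m:l * m] = x[k - l]
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(stack_history(x, 2, 2))  # -> [3. 4. 1. 2.]
```

Applying this to y_{1,k} (negated), u_{1,k}, and the scalar v_{i,k} history yields φ_{a,k}, φ_{b,k}, and φ_{id,k}, which are then concatenated into φ_{i1,k}.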
From (2), it follows that
A_i(q) y_k = Q_i(q) u_k + [D(q)/C(q)] v_{i,k}.
Multiplying both sides of the above equation by C ( q ) gives
A_i(q) C(q) y_k = Q_i(q) C(q) u_k + D(q) v_{i,k}.
That is,
A_i(q) y_{1,k} = Q_i(q) u_{1,k} + D(q) v_{i,k}.
Then, the subsystem identification model can be expressed as
y_{i1,k} = [e_i − A_i(q)] y_{1,k} + Q_i(q) u_{1,k} + D(q) v_{i,k} = a_i^T φ_{a,k} + b_i^T φ_{b,k} + φ_{id,k}^T d + v_{i,k} = θ_{i1}^T φ_{i1,k} + v_{i,k},  (3)
where y_{i1,k} := e_i y_{1,k} denotes the ith entry of y_{1,k}.
Define an intermediate variable
w_{i,k} := [D(q)/C(q)] v_{i,k},  (4)
or
w_{i,k} = [1 − C(q)] w_{i,k} + D(q) v_{i,k}.
From (4), it follows that
w_{i,k} = [1 − C(q)] w_{i,k} + [D(q) − 1] v_{i,k} + v_{i,k} = φ_{ic,k}^T c + φ_{id,k}^T d + v_{i,k}
= y_{i,k} − [e_i − A_i(q)] y_k − Q_i(q) u_k = y_{i,k} − θ_{i0}^T φ_{i0,k}.
Define
w_{1,k} := w_{i,k} − φ_{id,k}^T d,
or
w_{1,k} = φ_{ic,k}^T c + v_{i,k}.  (5)

3. The F-M-ML-RESG Algorithm

This section derives an F-M-ML-RESG algorithm to identify θ_{i1} in (3) and c in (5) by applying the negative gradient search and the maximum likelihood principle based on the observed data {u_k, y_k : k = 1, 2, 3, …}.
Define the criterion function as
J(θ_{i1}, k) = (1/2) Σ_{j=1}^{k} v_{i,j}^2,
where
v_{i,k} = [1/D(q)] [A_i(q) y_{1,k} − Q_i(q) u_{1,k}].  (6)
By minimizing J(θ_{i1}, k), the maximum likelihood estimate θ̂_{i1,k} can be obtained [30,31]. Define the polynomial estimates of A_i(q), Q_i(q), and D(q) at time k as
Â_i(k, q) = e_i + Â_{1,k}^i q^{-1} + Â_{2,k}^i q^{-2} + … + Â_{n_a,k}^i q^{-n_a},
Q̂_i(k, q) = Q̂_{1,k}^i q^{-1} + Q̂_{2,k}^i q^{-2} + … + Q̂_{n_b,k}^i q^{-n_b},
D̂(k, q) = 1 + d̂_{1,k} q^{-1} + d̂_{2,k} q^{-2} + … + d̂_{n_d,k} q^{-n_d}.
Let v̂_{i,k} represent the estimate of v_{i,k} at time k. Computing the partial derivatives of v_{i,k} in (6) with respect to A_l^i, Q_l^i, and d_l at the point θ_{i1} = θ̂_{i1,k-1} yields
∂v_{i,k}/∂(A_l^i)^T |_{θ̂_{i1,k-1}} = [D̂(k−1, q)]^{-1} q^{-l} ŷ_{1,k} =: q^{-l} ŷ_{1f,k},
∂v_{i,k}/∂(Q_l^i)^T |_{θ̂_{i1,k-1}} = −[D̂(k−1, q)]^{-1} q^{-l} û_{1,k} =: −q^{-l} û_{1f,k},
∂v_{i,k}/∂d_l |_{θ̂_{i1,k-1}} = −[D̂(k−1, q)]^{-1} q^{-l} v̂_{i,k} =: −q^{-l} v̂_{if,k},
where y ^ 1 f , k , u ^ 1 f , k , and v ^ i f , k are defined by
ŷ_{1f,k} := [D̂(k−1, q)]^{-1} ŷ_{1,k},  û_{1f,k} := [D̂(k−1, q)]^{-1} û_{1,k},  v̂_{if,k} := [D̂(k−1, q)]^{-1} v̂_{i,k}.
Referring to the work in [32], it follows that
ŷ_{1f,k} = y_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} y_{k-l} − d̂_{1,k-1} ŷ_{1f,k-1} − d̂_{2,k-1} ŷ_{1f,k-2} − … − d̂_{n_d,k-1} ŷ_{1f,k-n_d},
û_{1f,k} = u_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} u_{k-l} − d̂_{1,k-1} û_{1f,k-1} − d̂_{2,k-1} û_{1f,k-2} − … − d̂_{n_d,k-1} û_{1f,k-n_d},
v̂_{if,k} = v̂_{i,k} − Σ_{l=1}^{n_d} d̂_{l,k-1} v̂_{if,k-l}.  (7)
Define
φ̂_{a,k} := [−ŷ_{1,k-1}^T, −ŷ_{1,k-2}^T, …, −ŷ_{1,k-n_a}^T]^T ∈ R^{m n_a},
φ̂_{b,k} := [û_{1,k-1}^T, û_{1,k-2}^T, …, û_{1,k-n_b}^T]^T ∈ R^{r n_b},
φ̂_{id,k} := [v̂_{i,k-1}, v̂_{i,k-2}, …, v̂_{i,k-n_d}]^T ∈ R^{n_d},
φ̂_{i1,k} := [φ̂_{a,k}^T, φ̂_{b,k}^T, φ̂_{id,k}^T]^T ∈ R^{n_0}.
From (3), it follows that v_{i,k} = y_{i1,k} − θ_{i1}^T φ_{i1,k}. Replacing y_{i1,k}, φ_{i1,k}, and θ_{i1} with ŷ_{i1,k}, φ̂_{i1,k}, and θ̂_{i1,k} yields the estimate of v_{i,k} as
v̂_{i,k} = ŷ_{i1,k} − θ̂_{i1,k}^T φ̂_{i1,k}.  (8)
Substituting (8) into (7) results in
v̂_{if,k} = v̂_{i,k} − d̂_{1,k-1} v̂_{if,k-1} − d̂_{2,k-1} v̂_{if,k-2} − … − d̂_{n_d,k-1} v̂_{if,k-n_d}.
Hence, the filtered information vector φ ^ i 1 f , k can be written as
φ̂_{i1f,k} = [−ŷ_{1f,k-1}^T, −ŷ_{1f,k-2}^T, …, −ŷ_{1f,k-n_a}^T, û_{1f,k-1}^T, û_{1f,k-2}^T, …, û_{1f,k-n_b}^T, v̂_{if,k-1}, v̂_{if,k-2}, …, v̂_{if,k-n_d}]^T ∈ R^{n_0}.
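The three filtered signals above all pass a sequence through the same inverse filter 1/D̂(q). A minimal sketch of that recursion for a scalar sequence (an illustrative helper, not from the paper; here the estimate d̂ is held fixed, whereas the algorithm updates it at every step):

```python
def inverse_filter(x, d):
    """Filter a scalar sequence by 1/D(q), D(q) = 1 + d_1 q^-1 + ... + d_nd q^-nd:
    x_f[k] = x[k] - sum_l d[l-1] * x_f[k-l], taking x_f[j] = 0 for j < 0."""
    xf = []
    for k in range(len(x)):
        acc = x[k]
        for l, dl in enumerate(d, start=1):
            if k - l >= 0:
                acc -= dl * xf[k - l]  # subtract the fed-back filtered history
        xf.append(acc)
    return xf

print(inverse_filter([1.0, 0.0, 0.0], [0.5]))  # -> [1.0, -0.5, 0.25]
```

Applying `inverse_filter` to ŷ_{1,k}, û_{1,k}, and v̂_{i,k} component-wise reproduces the structure of the recursions for ŷ_{1f,k}, û_{1f,k}, and v̂_{if,k}.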
Rewrite the cost function in a recursive form:
J(θ_{i1}, k) = J(θ_{i1}, k−1) + (1/2) v_{i,k}^2.
Applying the negative gradient search and minimizing J ( θ i 1 , k ) result in [15]
θ̂_{i1,k} = θ̂_{i1,k-1} − (1/r_{i,k}) grad[J(θ̂_{i1,k-1}, k)],  r_{i,k} = r_{i,k-1} + ||φ_{i1f,k}||^2,  r_{i,0} = 1.
Define
φ_{i1f,k} := −grad[v_{i,k}] |_{θ̂_{i1,k-1}} = −∂v_{i,k}/∂θ_{i1} |_{θ̂_{i1,k-1}}.
Referring to the work in [18,42,43], grad [ J ( θ i 1 , k 1 , k ) ] can be represented as
grad[J(θ̂_{i1,k-1}, k)] = −φ_{i1f,k} v_{i,k}.
Therefore, an F-M-ML-RESG method can be obtained:
θ̂_{i1,k} = θ̂_{i1,k-1} + (φ_{i1f,k} / r_{i,k}) v_{i,k},  r_{i,k} = r_{i,k-1} + ||φ_{i1f,k}||^2,  r_{i,0} = 1.
From (5), the estimate of c at time k can be obtained by minimizing the following cost function
J(c) := Σ_{k=1}^{L} [w_{1,k} − φ_{ic,k}^T c]^2.
Since the information vector φ_{i1f,k} contains the unmeasurable terms v_{if,k-l}, they are replaced with their corresponding estimates v̂_{if,k-l} to overcome this identification difficulty. Define the innovation e_{i1,k} := ŷ_{i1,k} − θ̂_{i1,k-1}^T φ̂_{i1,k}. The F-M-ML-RESG method can then be summarized as follows:
θ̂_{i1,k} = θ̂_{i1,k-1} + (φ̂_{i1f,k} / r_{i1,k}) e_{i1,k},  (9)
e_{i1,k} = ŷ_{i1,k} − θ̂_{i1,k-1}^T φ̂_{i1,k},  (10)
r_{i1,k} = r_{i1,k-1} + ||φ̂_{i1f,k}||^2,  (11)
v̂_{i,k} = ŷ_{i1,k} − θ̂_{i1,k}^T φ̂_{i1,k},  (12)
d̂_k = (1/m) Σ_{i=1}^{m} d̂_{i,k},  (13)
φ̂_{i1,k} = [−ŷ_{1,k-1}^T, …, −ŷ_{1,k-n_a}^T, û_{1,k-1}^T, …, û_{1,k-n_b}^T, v̂_{i,k-1}, …, v̂_{i,k-n_d}]^T,  (14)
φ̂_{i1f,k} = [−ŷ_{1f,k-1}^T, …, −ŷ_{1f,k-n_a}^T, û_{1f,k-1}^T, …, û_{1f,k-n_b}^T, v̂_{if,k-1}, …, v̂_{if,k-n_d}]^T,  (15)
ŷ_{1f,k} = ŷ_{1,k} − d̂_{1,k-1} ŷ_{1f,k-1} − d̂_{2,k-1} ŷ_{1f,k-2} − … − d̂_{n_d,k-1} ŷ_{1f,k-n_d},  (16)
û_{1f,k} = û_{1,k} − d̂_{1,k-1} û_{1f,k-1} − d̂_{2,k-1} û_{1f,k-2} − … − d̂_{n_d,k-1} û_{1f,k-n_d},  (17)
v̂_{if,k} = v̂_{i,k} − d̂_{1,k-1} v̂_{if,k-1} − d̂_{2,k-1} v̂_{if,k-2} − … − d̂_{n_d,k-1} v̂_{if,k-n_d},  (18)
ŷ_{1,k} = y_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} y_{k-l},  (19)
û_{1,k} = u_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} u_{k-l},  (20)
ĉ_{i,k} = ĉ_{i,k-1} + (φ̂_{ic,k} / r_{i2,k}) e_{i2,k},  (21)
e_{i2,k} = ŵ_{i,k} − φ̂_{id,k}^T d̂_k − φ̂_{ic,k}^T ĉ_{i,k-1},  (22)
r_{i2,k} = r_{i2,k-1} + ||φ̂_{ic,k}||^2,  (23)
φ̂_{ic,k} = [−ŵ_{i,k-1}, −ŵ_{i,k-2}, …, −ŵ_{i,k-n_c}]^T,  (24)
φ̂_{id,k} = [v̂_{i,k-1}, v̂_{i,k-2}, …, v̂_{i,k-n_d}]^T,  (25)
ŵ_{i,k} = y_{i,k} − θ̂_{i0,k}^T φ_{i0,k},  (26)
φ_{i0,k} = [−y_{k-1}^T, …, −y_{k-n_a}^T, u_{k-1}^T, …, u_{k-n_b}^T]^T,  (27)
θ̂_{i0,k} = [â_{i,k}^T, b̂_{i,k}^T]^T,  (28)
θ̂_{i1,k} = [â_{i,k}^T, b̂_{i,k}^T, d̂_k^T]^T,  (29)
Θ̂_k = [θ̂_{1,k}^T, θ̂_{2,k}^T, …, θ̂_{m,k}^T]^T.  (30)
The steps of the F-M-ML-RESG method for computing θ̂_{i1,k}, ĉ_{i,k}, and Θ̂_k are listed below:
  • Initialization: let k = 1, and set the initial values θ̂_{i1,0} = 1_{n_0}/p_0, ĉ_{i,0} = 1_{n_c}/p_0, r_{i1,0} = 1, r_{i2,0} = 1, ŷ_{1,j} = 0, û_{1,j} = 0, ŷ_{1f,j} = 0, û_{1f,j} = 0, v̂_{if,j} = 0, ŵ_{i,j} = 1/p_0, and v̂_{i,j} = 1/p_0 for j ≤ 0, with p_0 = 10^6.
  • Collect u k and y k , compute y ^ 1 , k and u ^ 1 , k by (19) and (20), respectively.
  • Construct φ ^ i 1 , k and φ i 0 , k by (14) and (27), respectively, compute v ^ i , k by (12).
  • Compute y ^ 1 f , k , u ^ 1 f , k , and v ^ i f , k by (16)–(18), respectively, construct φ ^ i 1 f , k by (15).
  • Compute e i 1 , k and r i 1 , k by (10) and (11), respectively.
  • Compute θ ^ i 1 , k by (9), update θ ^ i 0 , k by (28), compute d ^ k by (13), update θ ^ i 1 , k by (29).
  • Construct φ ^ i c , k and φ ^ i d , k by (24) and (25), respectively.
  • Compute e i 2 , k , r i 2 , k , and w ^ i , k by (22), (23), and (26), respectively.
  • Update c ^ i , k by (21).
  • If k < L , let k : = k + 1 and go to Step 2; otherwise, terminate this computational procedure and obtain Θ ^ k by (30).
The flowchart of computing the estimates θ ^ i 1 , k , c ^ i , k and Θ ^ k by the F-M-ML-RESG algorithm in (9)–(30) is shown in Figure 1.
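The core of one recursion, steps (9)-(11), can be sketched as follows in a deliberately simplified scalar special case (illustrative names; the innovation is formed with the information vector, while the search direction uses its filtered counterpart, which coincides with it here because the toy system has white noise, i.e., C(q) = D(q) = 1):

```python
import numpy as np

def resg_step(theta, r, y_k, phi_k, phi_f_k):
    """One extended-stochastic-gradient update in the spirit of (9)-(11)."""
    e = y_k - phi_k @ theta            # innovation, cf. (10)
    r = r + phi_f_k @ phi_f_k          # accumulated squared norm, cf. (11)
    theta = theta + phi_f_k * (e / r)  # gradient step, cf. (9)
    return theta, r

# Toy usage: scalar system y_k = 2.0 u_{k-1} + v_k with white noise, so the
# filtered and unfiltered regressors coincide (phi_f = phi).
rng = np.random.default_rng(0)
u = rng.standard_normal(5000)
theta, r = np.zeros(1), 1.0
for k in range(1, 5000):
    phi = np.array([u[k - 1]])
    y_k = 2.0 * u[k - 1] + 0.1 * rng.standard_normal()
    theta, r = resg_step(theta, r, y_k, phi, phi)
print(theta)  # approaches the true value 2.0
```

In the full algorithm, the filtered regressor φ̂_{i1f,k} is rebuilt at every step from the current noise-model estimate, which is what distinguishes the maximum likelihood variant from a plain extended stochastic gradient recursion.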

4. The F-M-ML-MIESG Algorithm

In order to further enhance the parameter estimation accuracy of the F-M-ML-RESG method, an F-M-ML-MIESG method is developed by introducing the multi-innovation identification theory. Define the information matrix Γ̂_{i1}(p, k), the filtered information matrix Γ̂_{i1f}(p, k), and the stacked output vector Ŷ_{i1}(p, k) as
Γ̂_{i1}(p, k) := [φ̂_{i1,k}, φ̂_{i1,k-1}, …, φ̂_{i1,k-p+1}] ∈ R^{n_0×p},
Γ̂_{i1f}(p, k) := [φ̂_{i1f,k}, φ̂_{i1f,k-1}, …, φ̂_{i1f,k-p+1}] ∈ R^{n_0×p},
Ŷ_{i1}(p, k) := [ŷ_{i1,k}, ŷ_{i1,k-1}, …, ŷ_{i1,k-p+1}]^T ∈ R^p,
where p is the innovation length. Define the stacked output vector Ŵ_i(p, k) and the information matrices Γ̂_{ic}(p, k) and Γ̂_{id}(p, k) as
Γ̂_{ic}(p, k) := [φ̂_{ic,k}, φ̂_{ic,k-1}, …, φ̂_{ic,k-p+1}] ∈ R^{n_c×p},
Γ̂_{id}(p, k) := [φ̂_{id,k}, φ̂_{id,k-1}, …, φ̂_{id,k-p+1}] ∈ R^{n_d×p},
Ŵ_i(p, k) := [ŵ_{i,k}, ŵ_{i,k-1}, …, ŵ_{i,k-p+1}]^T ∈ R^p.
Referring to the work in [18,42,43], Equation (10) becomes the following equation:
E_{i1}(p, k) = Ŷ_{i1}(p, k) − Γ̂_{i1}^T(p, k) θ̂_{i1,k-1}.
Equation (22) can be reformulated into the following equation:
E_{i2}(p, k) = Ŵ_i(p, k) − Γ̂_{id}^T(p, k) d̂_{i,k} − Γ̂_{ic}^T(p, k) ĉ_{i,k-1}.
Based on the F-M-ML-RESG method in (9)–(30), the F-M-ML-MIESG method can be obtained as follows:
θ̂_{i1,k} = θ̂_{i1,k-1} + (Γ̂_{i1f}(p, k) / r_{i1,k}) E_{i1}(p, k),  (31)
E_{i1}(p, k) = Ŷ_{i1}(p, k) − Γ̂_{i1}^T(p, k) θ̂_{i1,k-1},  (32)
r_{i1,k} = r_{i1,k-1} + ||φ̂_{i1f,k}||^2,  (33)
v̂_{i,k} = ŷ_{i1,k} − θ̂_{i1,k}^T φ̂_{i1,k},  (34)
d̂_k = (1/m) Σ_{i=1}^{m} d̂_{i,k},  (35)
Γ̂_{i1}(p, k) = [φ̂_{i1,k}, φ̂_{i1,k-1}, …, φ̂_{i1,k-p+1}],  (36)
Γ̂_{i1f}(p, k) = [φ̂_{i1f,k}, φ̂_{i1f,k-1}, …, φ̂_{i1f,k-p+1}],  (37)
Ŷ_{i1}(p, k) = [ŷ_{i1,k}, ŷ_{i1,k-1}, …, ŷ_{i1,k-p+1}]^T,  (38)
φ̂_{i1,k} = [−ŷ_{1,k-1}^T, …, −ŷ_{1,k-n_a}^T, û_{1,k-1}^T, …, û_{1,k-n_b}^T, v̂_{i,k-1}, …, v̂_{i,k-n_d}]^T,  (39)
φ̂_{i1f,k} = [−ŷ_{1f,k-1}^T, …, −ŷ_{1f,k-n_a}^T, û_{1f,k-1}^T, …, û_{1f,k-n_b}^T, v̂_{if,k-1}, …, v̂_{if,k-n_d}]^T,  (40)
ŷ_{1f,k} = ŷ_{1,k} − d̂_{1,k-1} ŷ_{1f,k-1} − d̂_{2,k-1} ŷ_{1f,k-2} − … − d̂_{n_d,k-1} ŷ_{1f,k-n_d},  (41)
û_{1f,k} = û_{1,k} − d̂_{1,k-1} û_{1f,k-1} − d̂_{2,k-1} û_{1f,k-2} − … − d̂_{n_d,k-1} û_{1f,k-n_d},  (42)
v̂_{if,k} = v̂_{i,k} − d̂_{1,k-1} v̂_{if,k-1} − d̂_{2,k-1} v̂_{if,k-2} − … − d̂_{n_d,k-1} v̂_{if,k-n_d},  (43)
ŷ_{1,k} = y_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} y_{k-l},  (44)
û_{1,k} = u_k + Σ_{l=1}^{n_c} ĉ_{l,k-1} u_{k-l},  (45)
ĉ_{i,k} = ĉ_{i,k-1} + (Γ̂_{ic}(p, k) / r_{i2,k}) E_{i2}(p, k),  (46)
E_{i2}(p, k) = Ŵ_i(p, k) − Γ̂_{id}^T(p, k) d̂_{i,k} − Γ̂_{ic}^T(p, k) ĉ_{i,k-1},  (47)
r_{i2,k} = r_{i2,k-1} + ||φ̂_{ic,k}||^2,  (48)
Γ̂_{ic}(p, k) = [φ̂_{ic,k}, φ̂_{ic,k-1}, …, φ̂_{ic,k-p+1}],  (49)
Γ̂_{id}(p, k) = [φ̂_{id,k}, φ̂_{id,k-1}, …, φ̂_{id,k-p+1}],  (50)
Ŵ_i(p, k) = [ŵ_{i,k}, ŵ_{i,k-1}, …, ŵ_{i,k-p+1}]^T,  (51)
φ̂_{ic,k} = [−ŵ_{i,k-1}, −ŵ_{i,k-2}, …, −ŵ_{i,k-n_c}]^T,  (52)
φ̂_{id,k} = [v̂_{i,k-1}, v̂_{i,k-2}, …, v̂_{i,k-n_d}]^T,  (53)
ŵ_{i,k} = y_{i,k} − θ̂_{i0,k}^T φ_{i0,k},  (54)
φ_{i0,k} = [−y_{k-1}^T, …, −y_{k-n_a}^T, u_{k-1}^T, …, u_{k-n_b}^T]^T,  (55)
θ̂_{i0,k} = [â_{i,k}^T, b̂_{i,k}^T]^T,  (56)
θ̂_{i1,k} = [â_{i,k}^T, b̂_{i,k}^T, d̂_k^T]^T,  (57)
Θ̂_k = [θ̂_{1,k}^T, θ̂_{2,k}^T, …, θ̂_{m,k}^T]^T.  (58)
The F-M-ML-RESG method is a special case of the F-M-ML-MIESG method: when p = 1, the F-M-ML-MIESG method degenerates into the F-M-ML-RESG method. The approaches proposed in this paper can be combined with other estimation algorithms [46,47,48,49,50] to study the parameter identification problems of linear and nonlinear systems with different disturbances [51,52,53,54,55], and can be applied to other fields [56,57,58,59,60] such as signal processing and process control. The F-M-ML-MIESG method consists of the following steps for computing θ̂_{i1,k}, ĉ_{i,k}, and Θ̂_k:
  • Initialization: let k = 1, and set the initial values θ̂_{i1,0} = 1_{n_0}/p_0, ĉ_{i,0} = 1_{n_c}/p_0, r_{i1,0} = 1, r_{i2,0} = 1, ŷ_{1,j} = 0, û_{1,j} = 0, ŷ_{1f,j} = 0, û_{1f,j} = 0, v̂_{if,j} = 0, ŵ_{i,j} = 1/p_0, and v̂_{i,j} = 1/p_0 for j ≤ 0, with p_0 = 10^6.
  • Collect u k and y k , compute y ^ 1 , k and u ^ 1 , k by (44) and (45), respectively.
  • Form φ ^ i 1 , k , φ i 0 , k , and Y ^ i 1 ( p , k ) by (39), (55), and (38), respectively.
  • Compute y ^ 1 f , k , u ^ 1 f , k , and v ^ i f , k by (41)–(43), respectively.
  • Form φ ^ i 1 f , k , Γ ^ i 1 ( p , k ) , and Γ ^ i 1 f ( p , k ) by (40), (36), and (37), respectively.
  • Compute E i 1 ( p , k ) , r i 1 , k , and θ ^ i 1 , k by (32), (33), and (31), respectively.
  • Update θ ^ i 0 , k , d ^ k , and θ ^ i 1 , k by (56), (35), and (57), respectively.
  • Compute v ^ i , k and w ^ i , k by (34) and (54), respectively.
  • Form φ ^ i c , k and φ ^ i d , k by (52) and (53), respectively.
  • Form Γ ^ i c ( p , k ) , Γ ^ i d ( p , k ) , and W ^ i ( p , k ) by (49), (50), and (51), respectively.
  • Compute E i 2 ( p , k ) and r i 2 , k by (47) and (48), respectively.
  • Update c ^ i , k by (46).
  • If k < L , let k : = k + 1 and go to Step 2; otherwise, terminate this computational procedure and obtain Θ ^ k by (58).
The flowchart of computing the estimates θ ^ i 1 , k , c ^ i , k and Θ ^ k by the F-M-ML-MIESG algorithm in (31)–(58) is shown in Figure 2.
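The multi-innovation core, steps (31)-(33), differs from the single-innovation update only in that p stacked innovations drive each step. A minimal sketch with illustrative names (the matrices are assumed to be built as in (36)-(38)):

```python
import numpy as np

def miesg_step(theta, r, Y, Phi, Phi_f, phi_f_k):
    """One multi-innovation update in the spirit of (31)-(33)."""
    E = Y - Phi.T @ theta          # p-dimensional stacked innovation, cf. (32)
    r = r + phi_f_k @ phi_f_k      # cf. (33): only the newest regressor enters r
    theta = theta + Phi_f @ E / r  # cf. (31)
    return theta, r

# With p = 1 the matrices collapse to single columns and the update
# degenerates to the single-innovation (RESG-type) step:
theta = np.array([0.0])
Phi = np.array([[2.0]])            # n0 x p information matrix with p = 1
Y = np.array([4.0])
theta, r = miesg_step(theta, 1.0, Y, Phi, Phi, Phi[:, 0])
print(theta, r)  # -> [1.6] 5.0
```

This makes the special-case relationship between the two algorithms concrete: the per-step cost grows with p through the matrix-vector products, while the scalar gain r is accumulated exactly as before.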
The model decomposition technique is applied to handle the coupling between the input and output variables of the multivariable system, which reduces the complexity of the identification algorithm. The data filtering technique is used to filter the observed data, so the subsystem identification model is simplified. The proposed method combines the data filtering technique, the coupling identification concept, the multi-innovation identification theory, and the negative gradient search to improve the parameter estimation accuracy and the computational performance. The maximum likelihood principle is utilized to estimate the parameters of the noise model directly.

5. Examples

Example 1. 
Consider the following M-CARARMA model
A(q) y_k = Q(q) u_k + [D(q)/C(q)] v_k,
[y_{1,k}; y_{2,k}] + A_1 [y_{1,k-1}; y_{2,k-1}] = Q_1 [u_{1,k-1}; u_{2,k-1}] + [D(q)/C(q)] [v_{1,k}; v_{2,k}],
A(q) = I + A_1 q^{-1} = I + [a_11, a_12; a_21, a_22] q^{-1} = [1 + 0.34 q^{-1}, 0.21 q^{-1}; 0.47 q^{-1}, 1 + 0.19 q^{-1}],
Q(q) = Q_1 q^{-1} = [b_11, b_12; b_21, b_22] q^{-1} = [2.43 q^{-1}, 0.59 q^{-1}; 2.52 q^{-1}, 0.27 q^{-1}],
C(q) = 1 + c_1 q^{-1} + c_2 q^{-2} = 1 + 0.18 q^{-1} + 0.10 q^{-2},
D(q) = 1 + d_1 q^{-1} + d_2 q^{-2} = 1 + 0.07 q^{-1} + 0.12 q^{-2},
θ_1 = [a_11, a_12, b_11, b_12, c_1, c_2, d_1, d_2] = [0.34, 0.21, 2.43, 0.59, 0.18, 0.10, 0.07, 0.12],
θ_2 = [a_21, a_22, b_21, b_22, c_1, c_2, d_1, d_2] = [0.47, 0.19, 2.52, 0.27, 0.18, 0.10, 0.07, 0.12],
Θ^T = [a_11, a_12, a_21, a_22, b_11, b_12, b_21, b_22, c_1, c_2, d_1, d_2] = [0.34, 0.21, 0.47, 0.19, 2.43, 0.59, 2.52, 0.27, 0.18, 0.10, 0.07, 0.12],
where {u_{1,k}} and {u_{2,k}} are persistent-excitation signal sequences with zero mean and unit variance, and {v_{1,k}} and {v_{2,k}} are white noise sequences with zero mean and different variances. Taking the data length L = 3000 and the noise variances σ_1^2 = σ_2^2 = 0.10^2 with p = 9, and applying the F-M-ML-MIESG method in (31)-(58) to the example model, the parameter estimates and their errors δ := ||θ̂_k − θ|| / ||θ|| are shown in Table 1, and the parameter estimates θ̂_{1,k} and θ̂_{2,k} versus k are shown in Figure 3. When σ_1^2 = σ_2^2 = 0.10^2, the F-M-ML-MIESG parameter estimation errors versus k with different innovation lengths p are shown in Figure 4. When p = 9, the F-M-ML-MIESG parameter estimation errors versus k with different noise variances are shown in Figure 5.
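Data generation for a system of this form can be sketched as follows, using the coefficient values of Example 1 exactly as printed above (a sketch only; the paper's simulation settings, such as the excitation signal and the coefficient signs, may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = np.array([[0.34, 0.21], [0.47, 0.19]])
Q1 = np.array([[2.43, 0.59], [2.52, 0.27]])
c, d = [0.18, 0.10], [0.07, 0.12]          # C(q) and D(q) coefficients
L = 3000
u = rng.standard_normal((L, 2))            # persistent excitation, unit variance
v = 0.10 * rng.standard_normal((L, 2))     # white noise, sigma^2 = 0.10^2
w = np.zeros((L, 2))                       # colored noise w_k = [D(q)/C(q)] v_k
y = np.zeros((L, 2))
for k in range(L):
    # C(q) w_k = D(q) v_k, applied component-wise since C and D are scalar:
    w[k] = v[k]
    for l in (1, 2):
        if k - l >= 0:
            w[k] += d[l - 1] * v[k - l] - c[l - 1] * w[k - l]
    # y_k + A_1 y_{k-1} = Q_1 u_{k-1} + w_k:
    y[k] = w[k]
    if k >= 1:
        y[k] += -A1 @ y[k - 1] + Q1 @ u[k - 1]
print(y.shape, np.isfinite(y).all())
```

With these values the recursion is stable (the spectral radius of A_1 and the moduli of the roots of C(q) are below one), so the generated data are bounded and usable as {u_k, y_k} for the identification algorithms.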
Example 2. 
Consider another 3-input, 3-output system:
A(q) y_k = Q(q) u_k + [D(q)/C(q)] v_k,
[y_{1,k}; y_{2,k}; y_{3,k}] + [0.30, 0.45, 0.28; 0.40, 0.20, 0.50; 0.25, 0.30, 0.50] [y_{1,k-1}; y_{2,k-1}; y_{3,k-1}] = [1.35, 0.70, 0.90; 2.25, 1.10, 0.10; 0.30, 0.10, 0.30] [u_{1,k-1}; u_{2,k-1}; u_{3,k-1}] + [D(q)/C(q)] [v_{1,k}; v_{2,k}; v_{3,k}],
C(q) = 1 + c_1 q^{-1} = 1 + 0.25 q^{-1},
D(q) = 1 + d_1 q^{-1} = 1 − 0.10 q^{-1},
θ_1 = [a_11, a_12, a_13, b_11, b_12, b_13, c_1, d_1] = [0.30, 0.45, 0.28, 1.35, 0.70, 0.90, 0.25, −0.10],
θ_2 = [a_21, a_22, a_23, b_21, b_22, b_23, c_1, d_1] = [0.40, 0.20, 0.50, 2.25, 1.10, 0.10, 0.25, −0.10],
θ_3 = [a_31, a_32, a_33, b_31, b_32, b_33, c_1, d_1] = [0.25, 0.30, 0.50, 0.30, 0.10, 0.30, 0.25, −0.10].
The simulation conditions of this example are similar to those in Example 1. Applying the F-M-ML-MIESG algorithm to estimate the parameters of this example system, the simulation results are shown in Table 2, Figure 6 and Figure 7.
From these simulations, it can be observed that
  • The parameter estimation errors of the F-M-ML-MIESG method decrease as k increases, as shown in Table 1 and Table 2 and Figure 3.
  • The parameter estimation errors of the F-M-ML-MIESG algorithm become smaller as p increases, as shown in Figure 4 and Figure 6.
  • Under the same noise variances, when p ≥ 2, the F-M-ML-MIESG method generates more accurate parameter estimates than the F-M-ML-RESG method, as shown in Figure 4 and Figure 6.
  • For the same data length and the same innovation length, the F-M-ML-MIESG method converges quickly to the true values as the noise variance decreases, as shown in Figure 5 and Figure 7.

6. Conclusions

This paper considers the parameter estimation of linear M-CARARMA systems with ARMA noise. By means of an adaptive linear filter, the subsystem identification model is simplified, and an F-M-ML-MIESG method is then derived by introducing the multi-innovation identification theory into the stochastic gradient method. The purpose of the adaptive filter is to improve the parameter estimation accuracy by filtering the observed data without changing the relationship between the input and output data. Both the model decomposition technique and the data filtering technique are used to reduce the system complexity and simplify the identification model. The simulations demonstrate that the F-M-ML-MIESG method provides higher parameter estimation accuracy than the F-M-ML-RESG method when p ≥ 2. The proposed filtering-based parameter identification methods for multivariable stochastic systems can be extended to the identification of other scalar and multivariable stochastic systems with colored noises [61,62,63,64,65,66] and can be applied to engineering systems [67,68,69,70,71,72,73] involving filtering, estimation, and prediction [74,75,76,77,78,79,80,81].

Author Contributions

Conceptualization and methodology, H.X.; validation and analysis, F.C. Both the authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Colleges and Universities of Jiangsu Province (Grant Nos. 19KJB120012 and 19KJD460007), the 135 Engineering of Taizhou Education Bureau (Grant No. 2018TZCJ001), the General Topic of Taizhou University (Grant No. TZXY2019YBKT005), and the Key Program Special Fund in XJTLU (Grant No. KSF-E-12).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ding, J.; Chen, J.Z.; Lin, J.X.; Wan, L.J. Particle filtering based parameter estimation for systems with output-error type model structures. J. Frankl. Inst. 2019, 356, 5521–5540.
  2. Xu, L.; Chen, L.; Xiong, W.L. Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dyn. 2015, 79, 2155–2163.
  3. Ding, J.; Chen, J.Z.; Lin, J.X.; Jiang, G.P. Particle filtering-based recursive identification for controlled auto-regressive systems with quantised output. IET Control Theory Appl. 2019, 13, 2181–2187.
  4. Xu, L. The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process. 2016, 120, 660–667.
  5. Ding, J.; Cao, Z.X.; Chen, J.Z.; Jiang, G.P. Weighted parameter estimation for Hammerstein nonlinear ARX systems. Circuits Syst. Signal Process. 2020, 39, 2178–2192.
  6. Ding, F.; Xu, L.; Meng, D.D.; Jin, X.B.; Alsaedi, A.; Hayat, T. Gradient estimation algorithms for the parameter identification of bilinear systems using the auxiliary model. J. Comput. Appl. Math. 2020, 369, 112575.
  7. Pan, J.; Jiang, X.; Wan, X.K.; Ding, W. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197.
  8. Zhang, X. Recursive parameter estimation and its convergence for bilinear systems. IET Control Appl. 2020, 14, 677–688.
  9. Cui, T.; Ding, F.; Alsaedi, A.; Hayat, T. Joint multi-innovation recursive extended least squares parameter and state estimation for a class of state-space systems. Int. J. Control Autom. Syst. 2020, 18, 1412–1424.
  10. Zhang, X.; Liu, Q.Y. Recursive identification of bilinear time-delay systems through the redundant rule. J. Frankl. Inst. 2020, 357, 726–747.
  11. Xu, L. Parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle. IET Signal Process. 2017, 11, 228–237.
  12. Bin, M.; Marconi, L. Output regulation by postprocessing internal models for a class of multivariable nonlinear systems. Int. J. Robust Nonlinear Control 2020, 30, 1115–1140.
  13. Hakimi, A.R.; Binazadeh, T. Sustained oscillations in MIMO nonlinear systems through limit cycle shaping. Int. J. Robust Nonlinear Control 2020, 30, 587–608.
  14. Cheng, S.; Wei, Y.; Chen, Y.; Wang, Y.; Liang, Q. Fractional-order multivariable composite model reference adaptive control. Int. J. Adapt. Control Signal Process. 2017, 31, 1467–1480.
  15. Ding, F.; Zhang, X.; Xu, L. The innovation algorithms for multivariable state-space models. Int. J. Adapt. Control Signal Process. 2019, 33, 1601–1608.
  16. Ding, F.; Liu, Y.J.; Bao, B. Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2012, 226, 43–55.
  17. Ji, Y.; Zhang, C.; Kang, Z.; Yu, T. Parameter estimation for block-oriented nonlinear systems using the key term separation. Int. J. Robust Nonlinear Control 2020, 30, 3727–3752.
  18. Xia, H.F.; Yang, Y.Q. Maximum likelihood gradient-based iterative estimation for multivariable systems. IET Control Theory Appl. 2019, 13, 1683–1691.
  19. Zhao, D.; Ding, S.X.; Karimi, H.R.; Li, Y.Y.; Wang, Y.Q. On robust Kalman filter for two-dimensional uncertain linear discrete time-varying systems: A least squares method. Automatica 2019, 99, 203–212.
  20. Ji, Y.; Jiang, X.K.; Wan, L.J. Hierarchical least squares parameter estimation algorithm for two-input Hammerstein finite impulse response systems. J. Frankl. Inst. 2020, 357, 5019–5032.
  21. Patel, A.M.; Li, J.K.J.; Finegan, B.; McMurtry, M.S. Aortic pressure estimation using blind identification approach on single input multiple output nonlinear Wiener systems. IEEE Trans. Biomed. Eng. 2018, 65, 1193–1200.
  22. Tolić, I.; Miličević, K.; Šuvak, N.; Biondić, I. Nonlinear least squares and maximum likelihood estimation of probability density function of cross-border transmission losses. IEEE Trans. Power Syst. 2018, 33, 2230–2238.
23. Marey, M.; Mostafa, H. Maximum-likelihood integer frequency offset estimator for Alamouti SFBC-OFDM systems. IEEE Commun. Lett. 2020, 24, 777–781. [Google Scholar] [CrossRef]
  24. Li, M.H.; Liu, X.M. Maximum likelihood least squares based iterative estimation for a class of bilinear systems using the data filtering technique. Int. J. Control Autom. Syst. 2020, 18, 1581–1592. [Google Scholar] [CrossRef]
25. Pulido, B.; Zamarreño, J.M.; Merino, A.; Bregon, A. State space neural networks and model-decomposition methods for fault diagnosis of complex industrial systems. Eng. Appl. Artif. Intell. 2019, 79, 67–86. [Google Scholar] [CrossRef]
  26. Hafezi, Z.; Arefi, M.M. Recursive generalized extended least squares and RML algorithms for identification of bilinear systems with ARMA noise. ISA Trans. 2019, 88, 50–61. [Google Scholar] [CrossRef]
  27. Boudjedir, C.E.; Boukhetala, D.; Bouri, M. Iterative learning control of multivariable uncertain nonlinear systems with nonrepetitive trajectory. Nonlinear Dyn. 2019, 95, 2197–2208. [Google Scholar] [CrossRef]
  28. Parigi Polverini, M.; Formentin, S.; Merzagora, L.; Rocco, P. Mixed data-driven and model-based robot implicit force control: A hierarchical approach. IEEE Trans. Control Syst. Technol. 2020, 28, 1258–1271. [Google Scholar] [CrossRef]
  29. Li, M.H.; Liu, X.M. The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique. Signal Process. 2018, 147, 23–34. [Google Scholar] [CrossRef]
  30. Xia, H.F.; Yang, Y.Q. Recursive least-squares estimation for multivariable systems based on the maximum likelihood principle. Int. J. Control Autom. Syst. 2020, 18, 503–512. [Google Scholar] [CrossRef]
  31. Xia, H.F.; Ji, Y.; Yang, Y.Q. Improved least-squares identification for multiple-output nonlinear stochastic systems. IET Control Theory Appl. 2020, 14, 964–971. [Google Scholar] [CrossRef]
  32. Xia, H.F.; Yang, Y.Q. Maximum likelihood-based recursive least-squares estimation for multivariable systems using the data filtering technique. Int. J. Syst. Sci. 2019, 50, 1121–1135. [Google Scholar] [CrossRef]
  33. Berntorp, K.; Di Cairano, S. Tire-stiffness and vehicle-state estimation based on noise-adaptive particle filtering. IEEE Trans. Control Syst. Technol. 2019, 27, 1100–1114. [Google Scholar] [CrossRef]
  34. Ahmad, S.; Rehan, M.; Iqbal, M. Robust generalized filtering of uncertain Lipschitz nonlinear systems under measurement delays. Nonlinear Dyn. 2018, 92, 1567–1582. [Google Scholar] [CrossRef]
  35. Chen, J.; Zhu, Q.M.; Liu, Y.J. Modified Kalman filtering based multi-step-length gradient iterative algorithm for ARX models with random missing outputs. Automatica 2020, 118, 109034. [Google Scholar] [CrossRef]
  36. Chen, J.; Shen, Q.Y.; Ma, J.X.; Liu, Y.J. Stochastic average gradient algorithm for multirate FIR models with varying time delays using self-organizing maps. Int. J. Adapt. Control Signal Process. 2020, 34, 955–970. [Google Scholar] [CrossRef]
  37. Chen, J.; Zhang, Y.; Zhu, Q.M.; Liu, Y.J. Aitken based modified Kalman filtering stochastic gradient algorithm for dual-rate nonlinear models. J. Frankl. Inst. 2019, 356, 4732–4746. [Google Scholar] [CrossRef]
38. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  39. Zhang, H.M. Quasi gradient-based inversion-free iterative algorithm for solving a class of the nonlinear matrix equations. Comput. Math. Appl. 2019, 77, 1233–1244. [Google Scholar] [CrossRef]
  40. Jin, Q.B.; Wang, Z.; Liu, X.P. Auxiliary model-based interval-varying multi-innovation least squares identification for multivariable OE-like systems with scarce measurements. J. Process Control 2015, 35, 154–168. [Google Scholar] [CrossRef]
  41. Li, J.H.; Zhang, J.L. Maximum likelihood identification of dual-rate Hammerstein output error moving average system. IET Control Theory Appl. 2020, 14, 1078–1090. [Google Scholar] [CrossRef]
  42. Xia, H.F.; Ji, Y.; Liu, Y.J.; Xu, L. Maximum likelihood-based multi-innovation stochastic gradient method for multivariable systems. Int. J. Control Autom. Syst. 2019, 17, 565–574. [Google Scholar] [CrossRef]
  43. Xia, H.F.; Ji, Y.; Xu, L.; Alsaedi, A.; Hayat, T. Maximum likelihood-based gradient estimation for multivariable nonlinear systems using the multiinnovation identification theory. Int. J. Robust Nonlinear Control 2020, 30, 5446–5463. [Google Scholar] [CrossRef]
  44. Dong, S.J.; Yu, L.; Zhang, W.A. Robust hierarchical identification of Wiener systems in the presence of dynamic disturbances. J. Frankl. Inst. 2020, 357, 3809–3834. [Google Scholar] [CrossRef]
  45. Wang, L.J.; Ji, Y.; Yang, H.L.; Xu, L. Decomposition-based multiinnovation gradient identification algorithms for a special bilinear system based on its input-output representation. Int. J. Robust Nonlinear Control 2020, 30, 3607–3623. [Google Scholar] [CrossRef]
  46. Xu, L. Iterative parameter estimation for signal models based on measured data. Circuits Syst. Signal Process. 2018, 37, 3046–3069. [Google Scholar] [CrossRef]
  47. Xu, L.; Xiong, W.L.; Alsaedi, A.; Hayat, T. Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 2018, 16, 1756–1764. [Google Scholar] [CrossRef]
  48. Gu, Y.; Liu, J.; Li, X.; Chou, Y.; Ji, Y. State space model identification of multirate processes with time-delay using the expectation maximization. J. Frankl. Inst. 2019, 356, 1623–1639. [Google Scholar] [CrossRef]
  49. Xu, L.; Ding, F.; Zhu, Q.M. Hierarchical Newton and least squares iterative estimation algorithm for dynamic systems by transfer functions based on the impulse responses. Int. J. Syst. Sci. 2019, 50, 141–151. [Google Scholar] [CrossRef] [Green Version]
  50. Xu, L.; Ding, F.; Lu, X.; Wan, L.J.; Sheng, J. Hierarchical multi-innovation generalised extended stochastic gradient methods for multivariable equation-error autoregressive moving average systems. IET Control Theory Appl. 2020, 14, 1276–1286. [Google Scholar] [CrossRef]
  51. Pan, J.; Ma, H.; Zhang, X.; Liu, Q.Y. Recursive coupled projection algorithms for multivariable output-error-like systems with coloured noises. IET Signal Process. 2020, 14, 455–466. [Google Scholar] [CrossRef]
  52. Xu, L.; Ding, F.; Wan, L.J.; Sheng, J. Separable multi-innovation stochastic gradient estimation algorithm for the nonlinear dynamic responses of systems. Int. J. Adapt. Control Signal Process. 2020, 34, 937–954. [Google Scholar] [CrossRef]
  53. Zhang, X.; Ding, F.; Alsaadi, F.E.; Hayat, T. Recursive parameter identification of the dynamical models for bilinear state space systems. Nonlinear Dyn. 2017, 89, 2415–2429. [Google Scholar] [CrossRef]
  54. Zhang, X.; Xu, L.; Ding, F.; Hayat, T. Combined state and parameter estimation for a bilinear state space system with moving average noise. J. Frankl. Inst. 2018, 355, 3079–3103. [Google Scholar] [CrossRef]
  55. Gu, Y.; Zhu, Q.; Nouri, H. Bias compensation-based parameter and state estimation for a class of time-delay nonlinear state-space models. IET Control Theory Appl. 2020, 14, 2176–2185. [Google Scholar] [CrossRef]
  56. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. State filtering-based least squares parameter estimation for bilinear systems using the hierarchical identification principle. IET Control Theory Appl. 2018, 12, 1704–1713. [Google Scholar] [CrossRef] [Green Version]
  57. Wang, L.J.; Ji, Y.; Wan, L.J.; Bu, N. Hierarchical recursive generalized extended least squares estimation algorithms for a class of nonlinear stochastic systems with colored noise. J. Frankl. Inst. 2019, 356, 10102–10122. [Google Scholar] [CrossRef]
  58. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. Highly computationally efficient state filter based on the delta operator. Int. J. Adapt. Control Signal Process. 2019, 33, 875–889. [Google Scholar] [CrossRef]
  59. Fan, Y.M.; Liu, X.M. Two-stage auxiliary model gradient-based iterative algorithm for the input nonlinear controlled autoregressive system with variable-gain nonlinearity. Int. J. Robust Nonlinear Control 2020, 30, 5492–5509. [Google Scholar] [CrossRef]
  60. Zhang, X.; Ding, F.; Yang, E.F. State estimation for bilinear systems through minimizing the covariance matrix of the state estimation errors. Int. J. Adapt. Control Signal Process. 2019, 33, 1157–1173. [Google Scholar] [CrossRef]
  61. Xu, L.; Song, G.L. A recursive parameter estimation algorithm for modeling signals with multi-frequencies. Circuits Syst. Signal Process. 2020, 39, 4198–4224. [Google Scholar] [CrossRef]
  62. Gu, Y.; Chou, Y.; Liu, J.; Ji, Y. Moving horizon estimation for multirate systems with time-varying time-delays. J. Frankl. Inst. 2019, 356, 2325–2345. [Google Scholar] [CrossRef]
  63. Xu, L.; Ding, F.; Yang, E.F. Separable recursive gradient algorithm for dynamical systems based on the impulse response signals. Int. J. Control Autom. Syst. 2020, 18, 3167–3177. [Google Scholar] [CrossRef]
64. Zhang, X. Hierarchical parameter and state estimation for bilinear systems. Int. J. Syst. Sci. 2020, 51, 275–290. [Google Scholar] [CrossRef]
65. Gan, M.; Chen, C.L.P.; Chen, G.Y.; Chen, L. On some separated algorithms for separable nonlinear least squares problems. IEEE Trans. Cybern. 2018, 48, 2866–2874. [Google Scholar] [CrossRef]
  66. Zhang, X. Adaptive parameter estimation for a general dynamical system with unknown states. Int. J. Robust Nonlinear Control 2020, 30, 1351–1372. [Google Scholar] [CrossRef]
  67. Chen, G.Y.; Gan, M.; Chen, C.L.P.; Li, H.X. A regularized variable projection algorithm for separable nonlinear least-squares problems. IEEE Trans. Autom. Control 2019, 64, 526–537. [Google Scholar] [CrossRef]
  68. Zhang, X.; Ding, F.; Xu, L. Recursive parameter estimation methods and convergence analysis for a special class of nonlinear systems. Int. J. Robust Nonlinear Control 2020, 30, 1373–1393. [Google Scholar] [CrossRef]
  69. Liu, H.; Zou, Q.X.; Zhang, Z.P. Energy disaggregation of appliances consumptions using ham approach. IEEE Access 2019, 7, 185977–185990. [Google Scholar] [CrossRef]
  70. Liu, L.J.; Liu, H.B. Data filtering based maximum likelihood gradient estimation algorithms for a multivariate equation-error system with ARMA noise. J. Frankl. Inst. 2020, 357, 5640–5662. [Google Scholar] [CrossRef]
  71. Wang, L.J.; Guo, J.; Xu, C.; Wu, T.Z.; Lin, H.P. Hybrid model predictive control strategy of supercapacitor energy storage system based on double active bridge. Energies 2019, 12, 2134. [Google Scholar] [CrossRef] [Green Version]
72. Tian, S.S.; Zhang, X.X.; Xiao, S.; Zhang, J.; Chen, Q.; Li, Y. Application of C6F12O/CO2 mixture in 10 kV medium-voltage switchgear. IET Sci. Meas. Technol. 2019, 13, 1225–1230. [Google Scholar] [CrossRef]
  73. Ni, J.Y.; Zhang, Y.L. Parameter estimation algorithms of linear systems with time-delays based on the frequency responses and harmonic balances under the multi-frequency sinusoidal signal excitation. Signal Process. 2021, 181, 107904. [Google Scholar] [CrossRef]
  74. Ji, F.; Liao, L.; Wu, T.Z.; Chang, C.; Wang, M.N. Self-reconfiguration batteries with stable voltage during the full cycle without the DC-DC converter. J. Energy Storage 2020, 28, 101213. [Google Scholar] [CrossRef]
  75. Wan, X.K.; Jin, Z.Y.; Wu, H.B.; Liu, J.J.; Zhu, B.R.; Xie, H.G. Heartbeat classification algorithm based on one-dimensional convolution neural network. J. Mech. Med. Biol. 2020, 20, 2050046. [Google Scholar] [CrossRef]
  76. Wan, X.K.; Liu, J.J.; Jin, Z.Y.; Zhu, B.R.; Zhang, M.R. Ventricular repolarization instability quantified by instantaneous frequency of ECG ST intervals. Technol. Health Care 2020. [Google Scholar] [CrossRef]
77. Zhang, Y.; Yan, Z.; Zhou, C.C.; Wu, T.Z.; Wang, Y.Y. Capacity allocation of HESS in micro-grid based on ABC algorithm. Int. J. Low-Carbon Technol. 2020, ctaa014. [Google Scholar] [CrossRef]
  78. Zhao, Z.; Wang, X.; Yao, P.; Bai, Y. A health performance evaluation method of multirotors under wind turbulence. Nonlinear Dyn. 2020, 102, 1701–1715. [Google Scholar] [CrossRef]
  79. Jin, X.; Wang, H.X.; Wang, X.Y.; Bai, Y.T.; Su, T.L.; Kong, J.L. Deep-learning prediction model with serial two-level decomposition based on bayesian optimization. Complexity 2020, 2020, 4346803. [Google Scholar] [CrossRef]
  80. Chen, M.T.; Ding, F.; Lin, R.M.; Ng, T.Y.; Zhang, Y.L.; Wei, W. Maximum likelihood least squares-based iterative methods for output-error bilinear-parameter models with colored noises. Int. J. Robust Nonlinear Control 2020, 30, 6262–6280. [Google Scholar] [CrossRef]
  81. Ma, H.; Pan, J.; Ding, F.; Xu, L.; Ding, W. Partially-coupled least squares based iterative parameter estimation for multi-variable output-error-like autoregressive moving average systems. IET Control Theory Appl. 2019, 13, 3040–3051. [Google Scholar] [CrossRef]
Figure 1. The flowchart of computing the F-M-ML-RESG estimates θ̂_{i1,k}, ĉ_{i,k} and Θ̂_k.
Figure 2. The flowchart of computing the F-M-ML-MIESG estimates θ̂_{i1,k}, ĉ_{i,k} and Θ̂_k.
Figure 3. The F-M-ML-MIESG parameter estimates versus k (σ² = 0.10², p = 9).
Figure 4. The F-M-ML-MIESG estimation errors versus k with different innovation lengths (σ² = 0.10²).
Figure 5. The F-M-ML-MIESG estimation errors versus k with different σ² (p = 9).
Figure 6. The F-M-ML-MIESG estimation errors versus k with different innovation lengths (σ² = 0.10²).
Figure 7. The F-M-ML-MIESG estimation errors versus k with different σ² (p = 9).
Table 1. The F-M-ML-MIESG estimates and errors (σ² = 0.10², p = 9).

| k | 100 | 200 | 500 | 1000 | 2000 | 3000 | True Values |
|---|---|---|---|---|---|---|---|
| a₁₁ | 0.31820 | 0.29087 | 0.30058 | 0.30718 | 0.31435 | 0.31705 | 0.34000 |
| a₁₂ | 0.19071 | 0.19015 | 0.19849 | 0.20414 | 0.19826 | 0.19930 | 0.21000 |
| a₂₁ | −0.49919 | −0.46008 | −0.45017 | −0.45085 | −0.45203 | −0.45256 | −0.47000 |
| a₂₂ | 0.19094 | 0.20269 | 0.19320 | 0.19148 | 0.19971 | 0.19937 | 0.19000 |
| b₁₁ | 1.78343 | 1.97435 | 2.15929 | 2.24258 | 2.30107 | 2.32585 | 2.43000 |
| b₁₂ | −0.36392 | −0.43090 | −0.49287 | −0.52493 | −0.54330 | −0.55194 | −0.59000 |
| b₂₁ | −1.84309 | −2.04914 | −2.23926 | −2.32552 | −2.38841 | −2.41420 | −2.52000 |
| b₂₂ | 0.10917 | 0.16315 | 0.20417 | 0.22542 | 0.23612 | 0.24193 | 0.27000 |
| c₁ | 0.19719 | 0.16972 | 0.15718 | 0.16812 | 0.18093 | 0.17411 | 0.18000 |
| c₂ | 0.30485 | 0.14739 | 0.13923 | 0.13329 | 0.09666 | 0.09603 | 0.10000 |
| d₁ | 0.00360 | 0.00345 | 0.00422 | 0.00412 | 0.00426 | 0.00443 | 0.07000 |
| d₂ | 0.02543 | 0.02647 | 0.02684 | 0.02739 | 0.02792 | 0.02811 | 0.12000 |
| δ (%) | 27.09774 | 18.98626 | 11.36214 | 7.87495 | 5.44330 | 4.46325 | |
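The last row of each table reports the relative parameter estimation error δ. A conventional definition in this literature, sketched below, is δ = ‖Θ̂(k) − Θ‖/‖Θ‖ × 100%; note this is an assumption for illustration, and exactly which parameters the paper stacks into Θ may differ, so the sketch need not reproduce the tabulated δ values.

```python
import numpy as np

def estimation_error(theta_hat, theta_true):
    """Relative parameter estimation error in percent:
    delta = ||theta_hat - theta_true|| / ||theta_true|| * 100.
    (Conventional definition; the paper's exact parameter stacking
    may differ, so values need not match the tables.)"""
    theta_hat = np.asarray(theta_hat, dtype=float)
    theta_true = np.asarray(theta_true, dtype=float)
    return 100.0 * np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)

# Sanity checks on synthetic values:
print(estimation_error([3.0, 4.0], [3.0, 4.0]))  # 0.0 (exact estimate)
print(estimation_error([0.0, 0.0], [3.0, 4.0]))  # 100.0 (zero estimate)
```

A monotone decrease of δ with k, as in the last rows of Tables 1 and 2, is the usual numerical evidence of convergence.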
Table 2. The F-M-ML-MIESG estimates and errors (σ² = 0.10², p = 9).

| k | 100 | 200 | 500 | 1000 | 2000 | 3000 | True Values |
|---|---|---|---|---|---|---|---|
| a₁₁ | −0.27269 | −0.32691 | −0.29484 | −0.29755 | −0.29976 | −0.29662 | −0.30000 |
| a₁₂ | 0.48292 | 0.47925 | 0.46018 | 0.45462 | 0.45439 | 0.45430 | 0.45000 |
| a₁₃ | 0.14916 | 0.16587 | 0.20604 | 0.22856 | 0.24342 | 0.24838 | 0.28000 |
| a₂₁ | −0.37780 | −0.36285 | −0.38617 | −0.38781 | −0.39033 | −0.39405 | −0.40000 |
| a₂₂ | 0.12212 | 0.12412 | 0.16615 | 0.18400 | 0.18349 | 0.18549 | 0.20000 |
| a₂₃ | 0.38669 | 0.42406 | 0.44468 | 0.45743 | 0.46705 | 0.47374 | 0.50000 |
| a₃₁ | 0.26876 | 0.24843 | 0.25928 | 0.25579 | 0.25030 | 0.25315 | 0.25000 |
| a₃₂ | 0.31417 | 0.29867 | 0.29703 | 0.29889 | 0.29861 | 0.30047 | 0.30000 |
| a₃₃ | 0.38377 | 0.41180 | 0.43509 | 0.45396 | 0.46774 | 0.47453 | 0.50000 |
| b₁₁ | 0.94218 | 1.05946 | 1.15933 | 1.21302 | 1.25295 | 1.27135 | 1.35000 |
| b₁₂ | −0.62282 | −0.63298 | −0.65448 | −0.66639 | −0.67653 | −0.67960 | −0.70000 |
| b₁₃ | 0.76671 | 0.79860 | 0.82568 | 0.84453 | 0.86025 | 0.86675 | 0.90000 |
| b₂₁ | −1.72820 | −1.88151 | −2.00584 | −2.08070 | −2.13062 | −2.15230 | −2.25000 |
| b₂₂ | 0.96728 | 0.98775 | 1.02537 | 1.04164 | 1.05887 | 1.06517 | 1.10000 |
| b₂₃ | 0.21155 | 0.17143 | 0.14957 | 0.13677 | 0.12570 | 0.12159 | 0.10000 |
| b₃₁ | 0.17930 | 0.21668 | 0.24420 | 0.25904 | 0.27193 | 0.27810 | 0.30000 |
| b₃₂ | 0.01120 | 0.03576 | 0.06050 | 0.07209 | 0.07881 | 0.08311 | 0.10000 |
| b₃₃ | 0.26013 | 0.26116 | 0.27545 | 0.28067 | 0.28621 | 0.28834 | 0.30000 |
| c₁ | 0.15333 | 0.21247 | 0.22209 | 0.24500 | 0.20059 | 0.23616 | 0.25000 |
| d₁ | −0.02364 | −0.02344 | −0.02373 | −0.02411 | −0.02439 | −0.02471 | −0.10000 |
| δ (%) | 23.89260 | 17.23794 | 11.32504 | 8.03089 | 5.70767 | 4.79640 | |
Xia, H.; Chen, F. Filtering-Based Parameter Identification Methods for Multivariable Stochastic Systems. Mathematics 2020, 8, 2254. https://doi.org/10.3390/math8122254