Article

Decomposition Least-Squares-Based Iterative Identification Algorithms for Multivariable Equation-Error Autoregressive Moving Average Systems

1 College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
3 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Editorial Office of Journal of Qingdao University of Science and Technology (Natural Science Edition), Qingdao University of Science and Technology, Qingdao 266061, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(7), 609; https://doi.org/10.3390/math7070609
Submission received: 29 May 2019 / Revised: 27 June 2019 / Accepted: 28 June 2019 / Published: 9 July 2019
(This article belongs to the Section Mathematics and Computer Science)

Abstract: This paper is concerned with the identification problem for multivariable equation-error systems whose disturbance is an autoregressive moving average process. By means of the hierarchical identification principle and the iterative search, a hierarchical least-squares-based iterative (HLSI) identification algorithm is derived, and a least-squares-based iterative (LSI) identification algorithm is given for comparison. Furthermore, a hierarchical multi-innovation least-squares-based iterative (HMILSI) identification algorithm is proposed using the multi-innovation theory. Compared with the LSI algorithm, the HLSI algorithm has a smaller computational burden and gives more accurate parameter estimates, and the HMILSI algorithm can track time-varying parameters. Finally, a simulation example is provided to verify the effectiveness of the proposed algorithms.

1. Introduction

With the development of modern industry, multivariable systems have provided rich possibilities for system modeling and process control [1,2]. Compared with scalar systems, multivariable systems have more complex structures, more complicated relationships between variables, higher dimensions and stochastic disturbances. For decades, multivariable system identification has attracted increasing attention and many identification approaches have been reported [3]. Among them, improving the identification accuracy and increasing the identification efficiency of multivariable systems have become core problems for researchers. Liu et al. proposed a partially-coupled gradient identification algorithm that obtains more accurate estimates by filtering the input and output data and by transforming the system into two subsystems [4].
Many methods have been used to deal with system identification problems [5,6,7,8]. Minimizing different criteria leads to different identification methods, such as neural network methods, fuzzy logic system identification methods, wavelet network system identification methods and so on. For a given criterion function, adopting different search strategies leads to different identification approaches, such as Newton identification methods, least-squares identification methods [9] and gradient identification methods [10,11]. Both recursive and iterative identification techniques can handle the parameter estimation of system models: recursive identification algorithms avoid matrix inversion and can be run on-line [12,13], while iterative algorithms make sufficient use of all the given input–output data and can improve the parameter estimation accuracy [14,15,16,17]. A filtering-based multi-innovation gradient estimation algorithm has been proposed for nonlinear dynamical systems [18].
On the other hand, some identification principles are widely applied to parameter estimation for linear and nonlinear systems, and many methods have been obtained for identifying multi-input multi-output systems. The multi-innovation identification theory helps derive more accurate estimation algorithms by expanding the innovation from a scalar to a vector and from a vector to a matrix. The hierarchical identification principle is a decomposition-based identification technique whose key idea is to decompose a system into several subsystems that can be identified more easily by the gradient-based or least-squares-based identification algorithms. Hierarchical identification methods are suitable for large-scale and complex systems since they reduce the dimension of the systems to be identified and reduce the computational load. The filtering technique is a promising tool for identifying systems with colored noise [19,20]. An adaptive filtering based multi-innovation stochastic gradient algorithm was derived for bilinear systems with colored noise and gives smaller parameter estimation errors as the innovation length increases [21]. A multi-innovation gradient algorithm was developed based on Kalman filtering to solve the joint state and parameter estimation problem for a nonlinear state space system with time delay [22]. Many identification methods can be found in [23,24,25,26,27,28,29,30,31,32] and they can be applied in many areas [33,34,35,36,37].
Recently, using the hierarchical identification principle and the multi-innovation identification theory, some algorithms have been proposed to track the parameters of nonlinear systems and multivariable systems [38,39]. Considering that least-squares algorithms have high estimation accuracy and rapid convergence [40,41,42], this paper focuses on the parameter identification problem of multivariable equation-error systems with an autoregressive moving average noise process and presents iterative identification algorithms using the hierarchical identification principle and the multi-innovation method. The key is to decompose the system into two subsystems and to iteratively estimate the unknown parameter matrix of each subsystem separately. The main contributions of this work are as follows.
  • A decomposition least-squares-based iterative identification algorithm is derived for multivariable equation-error autoregressive moving average systems by using the hierarchical identification principle.
  • Compared with the least-squares-based iterative algorithm, the proposed algorithm can improve the estimation accuracy and decrease the computation burden.
  • A hierarchical multi-innovation least-squares-based iterative identification algorithm is proposed by using the multi-innovation theory, which can track time-varying parameters.
The remainder of the paper is organized as follows. Section 2 derives the identification models of multivariable equation-error systems with different forms of colored noise processes. A least-squares-based iterative identification algorithm is introduced in Section 3. In Section 4, a hierarchical least-squares-based iterative algorithm is derived. Section 5 presents a hierarchical multi-innovation least-squares-based iterative identification algorithm. Numerical example results illustrating the effectiveness of the proposed algorithms are presented in Section 6. Finally, Section 7 concludes the work.

2. System Description and Identification Model

Symbols and their meanings:
0: the zero matrix of appropriate size.
1_{m×n}: an m×n matrix whose entries are all 1.
1_n: an n-dimensional column vector whose entries are all 1.
I or I_n: the identity matrix of appropriate size or of size n×n.
tr[X]: the trace of the square matrix X.
X^T: the transpose of the vector or matrix X.
‖X‖: the matrix norm, with ‖X‖^2 := tr[X X^T].
A =: X or X := A: X is defined by A.
s: the time variable.
θ̂: the estimate of the parameter matrix θ.
θ̂(s): the estimate of θ at time s.
p_0: a large positive constant, e.g., p_0 = 10^6.
Consider the following multivariable equation-error system with the colored noise,
A(r) y(s) = B(r) u(s) + w(s),   (1)
where u(s) := [u_1(s), u_2(s), …, u_r(s)]^T ∈ R^r is the input vector, y(s) := [y_1(s), y_2(s), …, y_m(s)]^T ∈ R^m is the output vector, and A(r) and B(r) are the matrix-coefficient polynomials in the unit backward shift operator r^{-1} (i.e., r^{-1} x(s) = x(s-1)) with degrees n_a and n_b, defined as
A(r) := I_m + A_1 r^{-1} + A_2 r^{-2} + … + A_{n_a} r^{-n_a}, A_i ∈ R^{m×m},
B(r) := B_1 r^{-1} + B_2 r^{-2} + … + B_{n_b} r^{-n_b}, B_i ∈ R^{m×r},
where w(s) ∈ R^m is a stochastic noise vector with zero mean, which may be a moving average (MA) process, an autoregressive (AR) process or an autoregressive moving average (ARMA) process.
For the ARMA process, the noise term can be described in one of the following four forms.
  • The first form is w(s) = [d(r)/c(r)] v(s) ∈ R^m, where v(s) ∈ R^m is a white noise vector, and c(r) and d(r) are scalar polynomials in r^{-1} with degrees n_c and n_d, expressed as
    c(r) := 1 + c_1 r^{-1} + c_2 r^{-2} + … + c_{n_c} r^{-n_c}, c_i ∈ R,
    d(r) := 1 + d_1 r^{-1} + d_2 r^{-2} + … + d_{n_d} r^{-n_d}, d_i ∈ R.
  • The second form is w(s) = d(r) C^{-1}(r) v(s) ∈ R^m, where d(r) is a scalar polynomial and C(r) is a matrix polynomial in r^{-1}, expressed as
    C(r) := I_m + C_1 r^{-1} + C_2 r^{-2} + … + C_{n_c} r^{-n_c}, C_i ∈ R^{m×m}.
  • The third form is w(s) = [D(r)/c(r)] v(s) ∈ R^m, where c(r) is a scalar polynomial and D(r) is a matrix polynomial in r^{-1}, expressed as
    D(r) := I_m + D_1 r^{-1} + D_2 r^{-2} + … + D_{n_d} r^{-n_d}, D_i ∈ R^{m×m}.
  • The last form is w(s) = C^{-1}(r) D(r) v(s) ∈ R^m, where C(r) and D(r) are matrix polynomials in r^{-1} with degrees n_c and n_d.
Without loss of generality, we consider the multivariable system with the ARMA noise process, i.e.,
A(r) y(s) = B(r) u(s) + C^{-1}(r) D(r) v(s),
where A_i, B_i, C_i and D_i are the coefficient matrices to be estimated. Assume that the orders n_a, n_b, n_c and n_d are known, define n := m n_a + r n_b + m n_c + m n_d, n_1 := m n_a + r n_b, n_2 := m n_c + m n_d, and let u(s) = 0, y(s) = 0 and v(s) = 0 for s ≤ 0.
Define the parameter matrices θ, α and β as
θ^T := [α^T, β^T] ∈ R^{m×n},
α^T := [A_1, A_2, …, A_{n_a}, B_1, B_2, …, B_{n_b}] ∈ R^{m×n_1},
β^T := [C_1, C_2, …, C_{n_c}, D_1, D_2, …, D_{n_d}] ∈ R^{m×n_2},
and the information vectors φ(s), φ_α(s) and ψ(s) as
φ(s) := [φ_α^T(s), ψ^T(s)]^T ∈ R^n,   (2)
φ_α(s) := [-y^T(s-1), -y^T(s-2), …, -y^T(s-n_a), u^T(s-1), u^T(s-2), …, u^T(s-n_b)]^T,   (3)
ψ(s) := [-w^T(s-1), -w^T(s-2), …, -w^T(s-n_c), v^T(s-1), v^T(s-2), …, v^T(s-n_d)]^T.   (4)
It follows that
w(s) = [I_m - C(r)] w(s) + D(r) v(s) = [I_m - C(r)] w(s) + [D(r) - I_m] v(s) + v(s) = β^T ψ(s) + v(s).   (5)
Substituting Equation (5) into Equation (1) gives
y(s) = [I_m - A(r)] y(s) + B(r) u(s) + w(s) = α^T φ_α(s) + β^T ψ(s) + v(s)   (6)
= θ^T φ(s) + v(s).   (7)
In this identification model, θ is the parameter matrix to be identified, which consists of two parameter matrices: the parameter matrix α of the system model and the parameter matrix β of the noise model.
Since the unknown parameter matrices A_i, B_i, C_i and D_i are all included in θ, the identification problem has many parameters to be estimated, and it is important to enhance the computational efficiency of multivariable system identification algorithms. Aiming at this goal, this work studies decomposition least-squares-based iterative estimation algorithms for the multivariable CARARMA systems by using the hierarchical identification principle, based on the measured input–output data {u(s), y(s): s = 1, 2, …, L} (L denotes the data length).
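As a concrete illustration of the identification model y(s) = θ^T φ(s) + v(s), the following sketch simulates a first-order (n_a = n_b = n_c = n_d = 1) two-output system and builds the information vector φ(s). The coefficient matrices and noise levels here are illustrative values chosen for this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2                                        # number of outputs (and inputs here)
A1 = np.array([[0.5, -0.4], [0.7, 0.8]])     # hypothetical A_1
B1 = np.array([[1.0, 0.5], [0.4, 1.2]])      # hypothetical B_1
C1 = np.array([[0.1, 0.0], [0.0, 0.1]])      # hypothetical C_1 (noise model)
D1 = np.array([[0.05, 0.0], [0.0, 0.05]])    # hypothetical D_1 (noise model)

L = 200
u = rng.standard_normal((L + 1, m))          # persistently exciting input
v = 0.1 * rng.standard_normal((L + 1, m))    # white noise
y = np.zeros((L + 1, m))
w = np.zeros((L + 1, m))
for s in range(1, L + 1):
    # ARMA noise: C(r) w(s) = D(r) v(s)  =>  w(s) = -C1 w(s-1) + v(s) + D1 v(s-1)
    w[s] = -C1 @ w[s - 1] + v[s] + D1 @ v[s - 1]
    # equation-error model: A(r) y(s) = B(r) u(s) + w(s)
    y[s] = -A1 @ y[s - 1] + B1 @ u[s - 1] + w[s]

def phi(s):
    """Information vector phi(s) = [phi_alpha(s); psi(s)] for the model above."""
    return np.concatenate([-y[s - 1], u[s - 1], -w[s - 1], v[s - 1]])

theta_T = np.hstack([A1, B1, C1, D1])        # theta^T = [A1, B1, C1, D1]
# sanity check of the identification model: y(s) = theta^T phi(s) + v(s)
assert np.allclose(y[10], theta_T @ phi(10) + v[10])
```

In a real identification problem w(s) and v(s) are of course unmeasured; the algorithms below replace them with iteratively refined estimates.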

3. The Least-Squares-Based Iterative Algorithm

Let k = 1, 2, 3, … be an iteration index and θ̂_k := [α̂_k^T, β̂_k^T]^T be the estimate of θ = [α^T, β^T]^T at iteration k. Assume that L is the data length (L >> n). Using the data {u(s), y(s): s = 1, 2, …, L} and based on Equation (7), define the stacked output matrix Y(L) as
Y(L) := [y(1), y(2), …, y(L)] ∈ R^{m×L},
and the stacked information matrix Ω(L) as
Ω(L) := [φ(1), φ(2), …, φ(L)] ∈ R^{n×L}.
Define the quadratic criterion function
J(θ) := ‖Y(L) - θ^T Ω(L)‖^2 = Σ_{s=1}^{L} [y(s) - θ^T φ(s)]^T [y(s) - θ^T φ(s)].   (8)
To minimize J(θ), setting its partial derivative with respect to θ at θ = θ̂ to zero gives
∂J(θ)/∂θ |_{θ=θ̂} = -2 Ω(L) [Y(L) - θ̂^T Ω(L)]^T = 0,
or
∂J(θ)/∂θ |_{θ=θ̂} = -2 Σ_{s=1}^{L} φ(s) [y(s) - θ̂^T φ(s)]^T = 0.
Then the estimate of the parameter matrix θ is given by
θ̂ = [Ω(L) Ω^T(L)]^{-1} Ω(L) Y^T(L) = [Σ_{s=1}^{L} φ(s) φ^T(s)]^{-1} Σ_{s=1}^{L} φ(s) y^T(s).   (9)
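When the stacked information matrix Ω(L) is fully known, Equation (9) is an ordinary multivariate least-squares problem. The following sketch checks this on synthetic data; all names, dimensions and noise levels are illustrative, and the normal equations are solved with np.linalg.solve rather than an explicit matrix inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, L = 8, 2, 500
theta_true = rng.standard_normal((n, m))               # hypothetical parameters
Omega = rng.standard_normal((n, L))                    # stacked information matrix, n x L
Y = theta_true.T @ Omega + 0.01 * rng.standard_normal((m, L))  # m x L outputs

# theta_hat = [Omega Omega^T]^{-1} Omega Y^T, computed without forming the inverse
theta_hat = np.linalg.solve(Omega @ Omega.T, Omega @ Y.T)
```

With low measurement noise and L much larger than n, theta_hat recovers theta_true closely.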
It is observed that the information vector φ(s) in Ω(L) involves the unknown noise terms w(s-i) (i = 1, 2, …, n_c) and v(s-j) (j = 1, 2, …, n_d); therefore, the estimate θ̂ in Equation (9) cannot be computed. To resolve this problem, we replace the unknown noise terms w(s-i) and v(s-j) in φ(s) with their estimates ŵ_{k-1}(s-i) and v̂_{k-1}(s-j) obtained at iteration k-1. Then we obtain the least-squares-based iterative (LSI) algorithm as follows:
θ̂_k = [Ω̂_k(L) Ω̂_k^T(L)]^{-1} Ω̂_k(L) Y^T(L) = [Σ_{s=1}^{L} φ̂_k(s) φ̂_k^T(s)]^{-1} Σ_{s=1}^{L} φ̂_k(s) y^T(s), k = 1, 2, 3, …,   (10)
Y(L) = [y(1), y(2), …, y(L)],   (11)
Ω̂_k(L) = [φ̂_k(1), φ̂_k(2), …, φ̂_k(L)],   (12)
φ̂_k(s) = [φ_α^T(s), -ŵ_{k-1}^T(s-1), -ŵ_{k-1}^T(s-2), …, -ŵ_{k-1}^T(s-n_c), v̂_{k-1}^T(s-1), v̂_{k-1}^T(s-2), …, v̂_{k-1}^T(s-n_d)]^T,   (13)
φ_α(s) = [-y^T(s-1), -y^T(s-2), …, -y^T(s-n_a), u^T(s-1), u^T(s-2), …, u^T(s-n_b)]^T,   (14)
ŵ_k(s) = y(s) - α̂_k^T φ_α(s), s = 1, 2, …, L,   (15)
v̂_k(s) = y(s) - θ̂_k^T φ̂_k(s),   (16)
θ̂_k^T = [α̂_k^T, β̂_k^T],   (17)
α̂_k^T = [Â_{1,k}, Â_{2,k}, …, Â_{n_a,k}, B̂_{1,k}, B̂_{2,k}, …, B̂_{n_b,k}],   (18)
β̂_k^T = [Ĉ_{1,k}, Ĉ_{2,k}, …, Ĉ_{n_c,k}, D̂_{1,k}, D̂_{2,k}, …, D̂_{n_d,k}].   (19)
The computational efficiency of the LSI algorithm for the multivariable CARARMA systems is given in Table 1.
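The LSI iteration above can be sketched as follows for a first-order two-output system: at each iteration the information matrix is rebuilt from the previous noise estimates, the normal equations are solved for θ̂_k, and the noise estimates ŵ_k and v̂_k are refreshed. All simulated coefficient values are illustrative, and the noise estimates start from random values as the algorithm prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)
m, L = 2, 2000
A1 = np.array([[0.5, -0.4], [0.7, 0.8]])     # hypothetical true coefficients
B1 = np.array([[1.0, 0.5], [0.4, 1.2]])
C1 = 0.1 * np.eye(m)
D1 = 0.05 * np.eye(m)
u = rng.standard_normal((L + 1, m))
v = 0.05 * rng.standard_normal((L + 1, m))
y = np.zeros((L + 1, m))
w = np.zeros((L + 1, m))
for s in range(1, L + 1):
    w[s] = -C1 @ w[s - 1] + v[s] + D1 @ v[s - 1]
    y[s] = -A1 @ y[s - 1] + B1 @ u[s - 1] + w[s]

# LSI iteration: rebuild phi_hat_k from the previous noise estimates, solve the
# normal equations for theta_hat_k, then refresh w_hat and v_hat.
w_hat = rng.standard_normal((L + 1, m))      # random start for w_hat_0
v_hat = rng.standard_normal((L + 1, m))      # random start for v_hat_0
Yd = y[1:]                                   # rows y(s)^T, s = 1..L
for k in range(20):
    Phi = np.array([np.concatenate([-y[s - 1], u[s - 1],
                                    -w_hat[s - 1], v_hat[s - 1]])
                    for s in range(1, L + 1)])            # rows phi_hat_k(s)^T
    theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Yd)  # n x m estimate
    alpha_hat = theta_hat[:2 * m]                         # [A1; B1] block
    w_hat[1:] = Yd - Phi[:, :2 * m] @ alpha_hat           # w_hat_k(s)
    v_hat[1:] = Yd - Phi @ theta_hat                      # v_hat_k(s)

A1_hat = theta_hat[:m].T                                  # recovered A_1 estimate
```

Because theta^T = [A_1, B_1, C_1, D_1], the blocks of theta_hat can be read off directly after convergence.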

4. The Hierarchical Least-Squares-Based Iterative Algorithm

The multivariable system in Equation (6) is decomposed into two subsystems, one containing α and the other containing β , and each subsystem is identified separately by using the hierarchical identification principle and the iterative technique. Define two intermediate variables,
h(s) := y(s) - β^T ψ(s),   (20)
w(s) := y(s) - α^T φ_α(s).   (21)
The system in Equation (1) can thus be decomposed into the following two fictitious subsystems:
h(s) = α^T φ_α(s) + v(s),   (22)
w(s) = β^T ψ(s) + v(s).   (23)
Let L be the data length (L >> n) and define the stacked output matrices Y(L), H(L), W(L) and the stacked information matrices Φ(L) and Ψ(L) as
Y(L) := [y(1), y(2), …, y(L)] ∈ R^{m×L},
H(L) := [h(1), h(2), …, h(L)] ∈ R^{m×L},
W(L) := [w(1), w(2), …, w(L)] ∈ R^{m×L},
Φ(L) := [φ_α(1), φ_α(2), …, φ_α(L)] ∈ R^{n_1×L},
Ψ(L) := [ψ(1), ψ(2), …, ψ(L)] ∈ R^{n_2×L}.
From Equations (20) and (21), we have
H(L) = Y(L) - β^T Ψ(L),   (24)
W(L) = Y(L) - α^T Φ(L).   (25)
Define the quadratic criterion functions
J_1(α) := ‖H(L) - α^T Φ(L)‖^2 = Σ_{s=1}^{L} [h(s) - α^T φ_α(s)]^T [h(s) - α^T φ_α(s)],   (26)
J_2(β) := ‖W(L) - β^T Ψ(L)‖^2 = Σ_{s=1}^{L} [w(s) - β^T ψ(s)]^T [w(s) - β^T ψ(s)].   (27)
Let k = 1, 2, 3, … be an iteration index, and let α̂_k and β̂_k be the estimates of the parameter matrices α and β at iteration k. Setting the partial derivatives of J_1(α) and J_2(β) with respect to α and β to zero gives the least-squares relations
∂J_1(α)/∂α |_{α=α̂} = -2 Φ(L) [H(L) - α̂^T Φ(L)]^T = 0,
or
∂J_1(α)/∂α |_{α=α̂} = -2 Σ_{s=1}^{L} φ_α(s) [h(s) - α̂^T φ_α(s)]^T = 0,
and
∂J_2(β)/∂β |_{β=β̂} = -2 Ψ(L) [W(L) - β̂^T Ψ(L)]^T = 0,
or
∂J_2(β)/∂β |_{β=β̂} = -2 Σ_{s=1}^{L} ψ(s) [w(s) - β̂^T ψ(s)]^T = 0.
Then the estimates of the parameter matrices α and β are given by
α̂ = [Φ(L) Φ^T(L)]^{-1} Φ(L) [Y(L) - β̂^T Ψ(L)]^T,   (28)
β̂ = [Ψ(L) Ψ^T(L)]^{-1} Ψ(L) [Y(L) - α̂^T Φ(L)]^T.   (29)
Notice that the information vector ψ(s) in Ψ(L) contains the unknown noise terms w(s-i) and v(s-j); thus the algorithm in Equations (28) and (29) cannot be implemented. The solution is to use the estimates ŵ_{k-1}(s-i) and v̂_{k-1}(s-j) of the unknown noise vectors w(s-i) and v(s-j) at iteration k-1 to define the estimate ψ̂_k(s) of ψ(s) as
ψ̂_k(s) := [-ŵ_{k-1}^T(s-1), -ŵ_{k-1}^T(s-2), …, -ŵ_{k-1}^T(s-n_c), v̂_{k-1}^T(s-1), v̂_{k-1}^T(s-2), …, v̂_{k-1}^T(s-n_d)]^T.   (30)
Replacing the unknown parameter matrix α in Equation (21) with its estimate α̂_k gives the estimate of w(s) at iteration k:
ŵ_k(s) := y(s) - α̂_k^T φ_α(s).   (31)
According to Equations (5) and (23), replacing w(s), α and β with their estimates ŵ_k(s), α̂_k and β̂_k gives the estimate of v(s) at iteration k:
v̂_k(s) := ŵ_k(s) - β̂_k^T ψ̂_k(s) = y(s) - α̂_k^T φ_α(s) - β̂_k^T ψ̂_k(s).   (32)
Use the estimate ψ̂_k(s) of ψ(s) to define the estimate Ψ̂_k(L) of Ψ(L) as
Ψ̂_k(L) := [ψ̂_k(1), ψ̂_k(2), …, ψ̂_k(L)] ∈ R^{n_2×L}.   (33)
Replacing β, α and Ψ(L) in Equations (28) and (29) with their estimates β̂_{k-1}, α̂_{k-1} and Ψ̂_k(L), and combining Equations (24), (25), (33), (3) and (30)–(32), we have the hierarchical least-squares-based iterative (HLSI) identification algorithm as follows:
α̂_k = [Φ(L) Φ^T(L)]^{-1} Φ(L) [Y(L) - β̂_{k-1}^T Ψ̂_k(L)]^T = [Σ_{s=1}^{L} φ_α(s) φ_α^T(s)]^{-1} Σ_{s=1}^{L} φ_α(s) [y(s) - β̂_{k-1}^T ψ̂_k(s)]^T, k = 1, 2, 3, …,   (34)
β̂_k = [Ψ̂_k(L) Ψ̂_k^T(L)]^{-1} Ψ̂_k(L) [Y(L) - α̂_{k-1}^T Φ(L)]^T = [Σ_{s=1}^{L} ψ̂_k(s) ψ̂_k^T(s)]^{-1} Σ_{s=1}^{L} ψ̂_k(s) [y(s) - α̂_{k-1}^T φ_α(s)]^T,   (35)
Y(L) = [y(1), y(2), …, y(L)],   (36)
Φ(L) = [φ_α(1), φ_α(2), …, φ_α(L)],   (37)
Ψ̂_k(L) = [ψ̂_k(1), ψ̂_k(2), …, ψ̂_k(L)],   (38)
φ_α(s) = [-y^T(s-1), -y^T(s-2), …, -y^T(s-n_a), u^T(s-1), u^T(s-2), …, u^T(s-n_b)]^T,   (39)
ψ̂_k(s) = [-ŵ_{k-1}^T(s-1), -ŵ_{k-1}^T(s-2), …, -ŵ_{k-1}^T(s-n_c), v̂_{k-1}^T(s-1), v̂_{k-1}^T(s-2), …, v̂_{k-1}^T(s-n_d)]^T,   (40)
ŵ_k(s) = y(s) - α̂_k^T φ_α(s), s = 1, 2, 3, …, L,   (41)
v̂_k(s) = y(s) - α̂_k^T φ_α(s) - β̂_k^T ψ̂_k(s),   (42)
θ̂_k^T = [α̂_k^T, β̂_k^T],
α̂_k^T = [Â_{1,k}, Â_{2,k}, …, Â_{n_a,k}, B̂_{1,k}, B̂_{2,k}, …, B̂_{n_b,k}],
β̂_k^T = [Ĉ_{1,k}, Ĉ_{2,k}, …, Ĉ_{n_c,k}, D̂_{1,k}, D̂_{2,k}, …, D̂_{n_d,k}].
The steps of the HLSI algorithm for computing the parameter estimates α̂_k and β̂_k are as follows.
1. To initialize, let k = 1, give the data length L (L >> n) and a small positive ε, and set the initial values α̂_0 = 1_{n_1×m}/p_0, β̂_0 = 1_{n_2×m}/p_0, p_0 = 10^6.
2. Collect the input–output data {u(s), y(s): s = 1, 2, …, L}. Let ŵ_0(s) and v̂_0(s) be random variables, and form the information vector φ_α(s) by Equation (39), the stacked output matrix Y(L) by Equation (36) and the stacked information matrix Φ(L) by Equation (37).
3. Form the information vector ψ̂_k(s) by Equation (40), s = 1, 2, …, L, and form the stacked information matrix Ψ̂_k(L) by Equation (38).
4. Update the parameter estimates α̂_k by Equation (34) and β̂_k by Equation (35).
5. Compute ŵ_k(s) and v̂_k(s) by Equations (41) and (42).
6. If ‖α̂_k - α̂_{k-1}‖ + ‖β̂_k - β̂_{k-1}‖ > ε, increase k by 1 and go to Step 3; otherwise, obtain the iteration number k and the estimates α̂_k and β̂_k, and terminate the procedure.
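The HLSI steps above can be sketched as follows. The decomposition shows up as two smaller least-squares solves per iteration, one for α̂_k (Equation (34) pattern) and one for β̂_k (Equation (35) pattern), instead of one solve in the full parameter space. The simulated coefficients and noise levels are illustrative values for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m, L = 2, 2000
A1 = np.array([[0.5, -0.4], [0.7, 0.8]])     # hypothetical true coefficients
B1 = np.array([[1.0, 0.5], [0.4, 1.2]])
C1 = 0.1 * np.eye(m)
D1 = 0.05 * np.eye(m)
u = rng.standard_normal((L + 1, m))
v = 0.05 * rng.standard_normal((L + 1, m))
y = np.zeros((L + 1, m))
w = np.zeros((L + 1, m))
for s in range(1, L + 1):
    w[s] = -C1 @ w[s - 1] + v[s] + D1 @ v[s - 1]
    y[s] = -A1 @ y[s - 1] + B1 @ u[s - 1] + w[s]

Yd = y[1:]                                               # rows y(s)^T
Phi = np.array([np.concatenate([-y[s - 1], u[s - 1]])
                for s in range(1, L + 1)])               # rows phi_alpha(s)^T
p0 = 1e6
alpha_hat = np.full((2 * m, m), 1.0 / p0)                # Step 1: tiny start values
beta_hat = np.full((2 * m, m), 1.0 / p0)
w_hat = rng.standard_normal((L + 1, m))                  # random w_hat_0, v_hat_0
v_hat = rng.standard_normal((L + 1, m))
for k in range(20):
    Psi = np.array([np.concatenate([-w_hat[s - 1], v_hat[s - 1]])
                    for s in range(1, L + 1)])           # rows psi_hat_k(s)^T
    alpha_prev = alpha_hat
    # Eq. (34) pattern: regress y - beta_{k-1}^T psi_k on phi_alpha
    alpha_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ (Yd - Psi @ beta_hat))
    # Eq. (35) pattern: regress y - alpha_{k-1}^T phi_alpha on psi_k
    beta_hat = np.linalg.solve(Psi.T @ Psi, Psi.T @ (Yd - Phi @ alpha_prev))
    new_w = Yd - Phi @ alpha_hat                         # Eq. (41) pattern
    v_hat[1:] = new_w - Psi @ beta_hat                   # Eq. (42) pattern
    w_hat[1:] = new_w

A1_hat = alpha_hat[:m].T                                 # recovered A_1 estimate
```

Each solve here is n_1 x n_1 or n_2 x n_2 instead of n x n, which is where the computational saving of the decomposition comes from.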
Remark 1.
In the above algorithms, the unknown variables are replaced with their corresponding estimates from the previous iteration. Under some conditions, and provided that the information matrices are persistently exciting, identification algorithms based on replacing the unknown variables with their estimates can give satisfactory parameter estimates.
Remark 2.
The computational cost is important for evaluating the performance of an algorithm and can be measured by the numbers of multiplication and addition operations. In particular, one division is counted as one multiplication and one subtraction as one addition. One multiplication or one addition is called one floating point operation (flop for short). Here, we assess the computational efficiency by counting the flops involved in the identification algorithms.
The multiplication and addition numbers of the HLSI algorithm are shown in Table 2.
The flop counts N_1 of the LSI algorithm and N_2 of the HLSI algorithm are compared as follows:
N_1 - N_2 = [2n^3 + (2m + 2L + 1)n^2 + (4mL - 2m - 3)n + 1]k - [2m(n_1^2 + n_2^2) + 2mn(3L - 2) + 2n_2^3 + n_2^2(2L + 1) - 3n_2 + 1]k - [2n_1^3 + n_1^2(2L + 1) - 3n_1 + 1] > 0.
It is obvious that the HLSI identification algorithm requires less computation than the LSI identification algorithm.
Compared with the LSI algorithm, the HLSI algorithm obtains more accurate parameter estimates and higher computational efficiency for multivariable systems with colored noise. Furthermore, the approaches proposed in this paper can be combined with other mathematical optimization approaches [43,44,45,46,47] and statistical strategies [48,49,50,51,52,53,54] to study the parameter estimation problems of linear and nonlinear systems with different disturbances [55,56,57,58,59,60], and can be applied to other fields [61,62,63,64] such as signal processing [65,66,67,68,69,70].

5. The Hierarchical Multi-Innovation Least-Squares-Based Iterative Identification Algorithm

To achieve higher identification accuracy than the HLSI algorithm, this section proposes a hierarchical multi-innovation least-squares-based iterative algorithm by using the multi-innovation identification theory.
Let p be the innovation length. Considering the data from l = s - p + 1 to l = s, define the stacked matrices
Y(p, s) := [y(s), y(s-1), …, y(s-p+1)] ∈ R^{m×p},   (43)
Φ(p, s) := [φ_α(s), φ_α(s-1), …, φ_α(s-p+1)] ∈ R^{n_1×p},   (44)
Ψ(p, s) := [ψ(s), ψ(s-1), …, ψ(s-p+1)] ∈ R^{n_2×p},   (45)
H(p, s) := [h(s), h(s-1), …, h(s-p+1)] = Y(p, s) - β^T Ψ(p, s) ∈ R^{m×p},   (46)
W(p, s) := [w(s), w(s-1), …, w(s-p+1)] = Y(p, s) - α^T Φ(p, s) ∈ R^{m×p}.   (47)
From the identification model in Equation (6), define two cost functions:
J_3(α) := ‖H(p, s) - α^T Φ(p, s)‖^2 = Σ_{l=s-p+1}^{s} [h(l) - α^T φ_α(l)]^T [h(l) - α^T φ_α(l)],
J_4(β) := ‖W(p, s) - β^T Ψ(p, s)‖^2 = Σ_{l=s-p+1}^{s} [w(l) - β^T ψ(l)]^T [w(l) - β^T ψ(l)].   (48)
Let α̂_k(s) and β̂_k(s) be the k-th iterative estimates of α and β at the current time s. Minimizing J_3(α) and J_4(β) gives the iterative relations
α̂_k(s) = [Φ(p, s) Φ^T(p, s)]^{-1} Φ(p, s) [Y(p, s) - β̂_{k-1}^T(s) Ψ(p, s)]^T,   (49)
β̂_k(s) = [Ψ(p, s) Ψ^T(p, s)]^{-1} Ψ(p, s) [Y(p, s) - α̂_{k-1}^T(s) Φ(p, s)]^T.   (50)
Notice that the information vector ψ(s) in Ψ(p, s) contains the unmeasured variables w(s-i) and v(s-j); thus the algorithm in Equations (49) and (50) cannot be implemented. The solution is to use the estimates ŵ_{k-1}(s-i) and v̂_{k-1}(s-j) of the unknown noise vectors w(s-i) and v(s-j) to define the estimate ψ̂_k(s) of ψ(s) as
ψ̂_k(s) := [-ŵ_{k-1}^T(s-1), -ŵ_{k-1}^T(s-2), …, -ŵ_{k-1}^T(s-n_c), v̂_{k-1}^T(s-1), v̂_{k-1}^T(s-2), …, v̂_{k-1}^T(s-n_d)]^T.   (51)
Replacing the unknown parameter matrix α in Equation (21) with its estimate α̂_k(s) gives the estimate of w(s) at iteration k:
ŵ_k(s) := y(s) - α̂_k^T(s) φ_α(s).   (52)
Equation (23) then gives the estimate of v(s) at iteration k:
v̂_k(s) := y(s) - α̂_k^T(s) φ_α(s) - β̂_k^T(s) ψ̂_k(s).   (53)
Define the estimate Ψ̂_k(p, s) of Ψ(p, s) using ψ̂_k(s) as
Ψ̂_k(p, s) := [ψ̂_k(s), ψ̂_k(s-1), …, ψ̂_k(s-p+1)] ∈ R^{n_2×p}.   (54)
Replacing β, α and Ψ(p, s) in Equations (49) and (50) with their estimates β̂_{k-1}(s), α̂_{k-1}(s) and Ψ̂_k(p, s), and combining Equations (46), (47), (54), (3) and (51)–(53), we obtain the hierarchical multi-innovation least-squares-based iterative (HMILSI) identification algorithm for the multivariable CARARMA systems as follows:
α̂_k(s) = [Φ(p, s) Φ^T(p, s)]^{-1} Φ(p, s) [Y(p, s) - β̂_{k-1}^T(s) Ψ̂_k(p, s)]^T = [Σ_{l=s-p+1}^{s} φ_α(l) φ_α^T(l)]^{-1} Σ_{l=s-p+1}^{s} φ_α(l) [y(l) - β̂_{k-1}^T(s) ψ̂_k(l)]^T,   (55)
β̂_k(s) = [Ψ̂_k(p, s) Ψ̂_k^T(p, s)]^{-1} Ψ̂_k(p, s) [Y(p, s) - α̂_{k-1}^T(s) Φ(p, s)]^T = [Σ_{l=s-p+1}^{s} ψ̂_k(l) ψ̂_k^T(l)]^{-1} Σ_{l=s-p+1}^{s} ψ̂_k(l) [y(l) - α̂_{k-1}^T(s) φ_α(l)]^T,   (56)
Y(p, s) = [y(s), y(s-1), …, y(s-p+1)],   (57)
Φ(p, s) = [φ_α(s), φ_α(s-1), …, φ_α(s-p+1)],   (58)
Ψ̂_k(p, s) = [ψ̂_k(s), ψ̂_k(s-1), …, ψ̂_k(s-p+1)],   (59)
φ_α(s) = [-y^T(s-1), -y^T(s-2), …, -y^T(s-n_a), u^T(s-1), u^T(s-2), …, u^T(s-n_b)]^T,   (60)
ψ̂_k(s) = [-ŵ_{k-1}^T(s-1), -ŵ_{k-1}^T(s-2), …, -ŵ_{k-1}^T(s-n_c), v̂_{k-1}^T(s-1), v̂_{k-1}^T(s-2), …, v̂_{k-1}^T(s-n_d)]^T,   (61)
ŵ_k(s) = y(s) - α̂_k^T(s) φ_α(s),   (62)
v̂_k(s) = y(s) - α̂_k^T(s) φ_α(s) - β̂_k^T(s) ψ̂_k(s),   (63)
θ̂_k^T(s) = [α̂_k^T(s), β̂_k^T(s)],
α̂_k^T(s) = [Â_{1,k}(s), Â_{2,k}(s), …, Â_{n_a,k}(s), B̂_{1,k}(s), B̂_{2,k}(s), …, B̂_{n_b,k}(s)],
β̂_k^T(s) = [Ĉ_{1,k}(s), Ĉ_{2,k}(s), …, Ĉ_{n_c,k}(s), D̂_{1,k}(s), D̂_{2,k}(s), …, D̂_{n_d,k}(s)].
The steps of the HMILSI algorithm for computing α̂_k(s) and β̂_k(s) are as follows.
1. To initialize, choose an innovation length p, let s = p, give a small positive ε and set the maximum iteration number k_max. Set the initial values θ̂_0(s) := [α̂_0^T(s), β̂_0^T(s)]^T = 1_{n×m}/p_0, and let ŵ_0(s) and v̂_0(s) be random vectors.
2. Let k = 1 and collect the input–output data {u(s), y(s)}. Form the stacked output matrix Y(p, s) by Equation (57), the information vector φ_α(s) by Equation (60) and the stacked information matrix Φ(p, s) by Equation (58).
3. Form the information vector ψ̂_k(s) by Equation (61) and the stacked information matrix Ψ̂_k(p, s) by Equation (59).
4. Update the parameter estimates α̂_k(s) by Equation (55) and β̂_k(s) by Equation (56).
5. Compute ŵ_k(s) and v̂_k(s) by Equations (62) and (63).
6. If k < k_max, increase k by 1 and go to Step 3; otherwise, proceed to the next step.
7. Compare θ̂_k(s) := [α̂_k^T(s), β̂_k^T(s)]^T with θ̂_{k-1}(s): if ‖α̂_k(s) - α̂_{k-1}(s)‖ + ‖β̂_k(s) - β̂_{k-1}(s)‖ > ε, let θ̂_0(s+1) := θ̂_k(s), increase s by 1 and go to Step 2; otherwise, obtain the iterative estimate θ̂_k(s) and terminate the procedure.

6. Example

Consider the following simulation system
A(r) y(s) = B(r) u(s) + C^{-1}(r) D(r) v(s),
A(r) = I_2 + A_1 r^{-1} = [1 + 0.50r^{-1}, 0.40r^{-1}; 0.70r^{-1}, 1 + 0.80r^{-1}],
B(r) = B_1 r^{-1} = [1.00r^{-1}, 0.50r^{-1}; 0.40r^{-1}, 1.20r^{-1}],
C(r) = I_2 + C_1 r^{-1} = [1 - 0.55r^{-1}, 0.20r^{-1}; 1.90r^{-1}, 1 + 0.03r^{-1}],
D(r) = I_2 + D_1 r^{-1} = [1 + 0.05r^{-1}, 0.05r^{-1}; 0.05r^{-1}, 1 + 0.05r^{-1}],
θ^T = [A_1, B_1, C_1, D_1],
where the semicolon separates the rows of each 2×2 matrix polynomial.
In the simulation, {u_1(s)} and {u_2(s)} are taken as two independent persistently exciting signal sequences with zero mean and unit variance, and {v_1(s)} and {v_2(s)} as two independent white noise sequences with zero mean and variances σ_1^2 for v_1(s) and σ_2^2 for v_2(s). The data length is L = 3000, with σ_1^2 = σ_2^2 = 0.05^2, σ_1^2 = σ_2^2 = 1.00^2 and σ_1^2 = σ_2^2 = 2.00^2, respectively. Applying the LSI and HLSI algorithms to estimate the parameters of this example system, the parameter estimates and the estimation errors δ := ‖θ̂_k - θ‖/‖θ‖ of the LSI algorithm are shown in Table 3, Table 4 and Table 5, and the parameter estimates and the estimation errors δ := sqrt[(‖α̂_k - α‖^2 + ‖β̂_k - β‖^2)/(‖α‖^2 + ‖β‖^2)] of the HLSI algorithm are shown in Table 6, Table 7 and Table 8.
From the simulation results in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, we can draw the following conclusions.
  • The estimation errors given by the LSI and HLSI algorithms become smaller as the iterative variable k increases (see the estimation errors δ in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8).
  • The HLSI algorithm leads to smaller parameter estimation errors than the LSI algorithm for the same data length L and noise level (see Table 4 and Table 7).
  • As the number of iterations increases, the parameter estimates given by the HLSI algorithm approach the true parameters, which verifies the effectiveness of the HLSI algorithm proposed in this paper (see Table 6, Table 7 and Table 8).
To validate the obtained model, we use the LSI estimates and the HLSI estimates to construct the estimated models, respectively. The predicted output is
ŷ(s) = y(s) - D^{-1}(r) C(r) [A(r) y(s) - B(r) u(s)].
Using the LSI parameter estimates in Table 4 with the noise variance σ_1^2 = σ_2^2 = 0.50^2 and k = 10, the LSI estimated model is
ŷ_LS(s) = y(s) - D̂_LS^{-1}(r) Ĉ_LS(r) [Â_LS(r) y(s) - B̂_LS(r) u(s)],
Â_LS(r) = I_2 + Â_{LS,1} r^{-1} = [1 + 0.49925r^{-1}, 0.40139r^{-1}; 0.70093r^{-1}, 1 + 0.7640r^{-1}],
B̂_LS(r) = B̂_{LS,1} r^{-1} = [0.992172r^{-1}, 0.50968r^{-1}; 0.39309r^{-1}, 1.19732r^{-1}],
Ĉ_LS(r) = I_2 + Ĉ_{LS,1} r^{-1} = [1 - 0.55793r^{-1}, 0.21329r^{-1}; 1.87117r^{-1}, 1 + 0.01880r^{-1}],
D̂_LS(r) = I_2 + D̂_{LS,1} r^{-1} = [1 - 0.00534r^{-1}, 0.00021r^{-1}; 0.01492r^{-1}, 1 + 0.00802r^{-1}].
Using the HLSI parameter estimates in Table 7 with the noise variance σ_1^2 = σ_2^2 = 0.50^2 and k = 10, the HLSI estimated model is
ŷ_H(s) = y(s) - D̂_H^{-1}(r) Ĉ_H(r) [Â_H(r) y(s) - B̂_H(r) u(s)],
Â_H(r) = I_2 + Â_{H,1} r^{-1} = [1 + 0.49955r^{-1}, 0.40119r^{-1}; 0.70042r^{-1}, 1 + 0.79617r^{-1}],
B̂_H(r) = B̂_{H,1} r^{-1} = [0.99217r^{-1}, 0.50969r^{-1}; 0.39310r^{-1}, 1.19731r^{-1}],
Ĉ_H(r) = I_2 + Ĉ_{H,1} r^{-1} = [1 - 0.55804r^{-1}, 0.21307r^{-1}; 1.87272r^{-1}, 1 + 0.01863r^{-1}],
D̂_H(r) = I_2 + D̂_{H,1} r^{-1} = [1 - 0.00534r^{-1}, 0.00017r^{-1}; 0.01494r^{-1}, 1 + 0.00806r^{-1}].
For model validation, we use a different dataset with a length of L r = 100 , whose data are from s = L + 1 to s = L + L r , to compute the predicted outputs y ^ ( s ) = [ y ^ 1 ( s ) , y ^ 2 ( s ) ] T R 2 of the system. The actual outputs y i ( s ) ( i = 1 , 2 ), the predicted output y ^ L S I ( s ) based on the LSI parameter estimates, and the predicted output y ^ H L S I ( s ) based on the HLSI parameter estimates are plotted in Figure 1 and Figure 2. Define the root-mean-square errors (RMSEs) as
E r r o r ( y ^ ) : = 1 L r s = L L + L r [ y ^ ( s ) y ( s ) ] 2 .
Using the estimated outputs to compute the root-mean-square errors for the LSI and HLSI estimated models as
$$ \begin{aligned}
\mathrm{Error}(\hat{y}_{LS}) &:= \sqrt{\frac{1}{L_r}\sum_{s=L+1}^{L+L_r}\big[\hat{y}_{LS}(s) - y(s)\big]^2} = [0.89203, 1.24336]^T \in \mathbb{R}^2, \\
\mathrm{Error}(\hat{y}_{H}) &:= \sqrt{\frac{1}{L_r}\sum_{s=L+1}^{L+L_r}\big[\hat{y}_{H}(s) - y(s)\big]^2} = [0.65907, 1.22223]^T \in \mathbb{R}^2.
\end{aligned} $$
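The per-channel RMSE above is straightforward to reproduce; a minimal sketch (the helper name `channel_rmse` is hypothetical; NumPy assumed):

```python
import numpy as np

def channel_rmse(y_hat, y):
    """Per-channel root-mean-square error
        Error(y_hat) = sqrt( (1/Lr) * sum_s [y_hat(s) - y(s)]^2 ),
    with the square taken element-wise over the Lr validation samples,
    so an m-output system yields a length-m error vector."""
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return np.sqrt(np.mean((y_hat - y) ** 2, axis=0))
```

A smaller entry means better one-step prediction on that channel; in the example, the HLSI model yields the smaller error on both channels.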
Remark 3.
Figure 1 and Figure 2 demonstrate that the predicted outputs are close to the measured outputs, which means that the LSI and HLSI estimation algorithms are effective and can give accurate estimates of multivariable CARARMA systems.
Remark 4.
In Figure 1 and Figure 2, we can see that the predicted outputs of the LSI and HLSI algorithms are close to the true values of y1(s). However, for y2(s), the predicted outputs of the LSI algorithm show a significant discrepancy, while the HLSI predictions remain close to the true values; this shows that the LSI method has a limitation when applied to this example. In contrast, the proposed HLSI algorithm achieves good estimation performance for identifying multivariable CARARMA systems. Generally speaking, combining the decomposition technique with the iterative technique yields more accurate parameter estimates and better computational efficiency for multivariable systems.

7. Conclusions

This work proposes a hierarchical least-squares-based iterative identification algorithm and a hierarchical multi-innovation least-squares-based iterative identification algorithm for multivariable CARARMA systems based on the hierarchical identification principle and the iterative technique. Compared with the least-squares-based iterative identification algorithm, the proposed algorithms achieve more accurate parameter estimates and improve the computational performance. The proposed decomposition least-squares-based iterative identification algorithms for multivariable equation-error autoregressive moving average systems can be combined with other estimation methods [71,72,73,74] and mathematical techniques [75,76,77] to explore parameter identification methods for other scalar and multivariable, linear and nonlinear systems with colored noise [78,79,80], and can be extended to other scientific fields [81,82,83,84,85,86,87,88] such as signal modeling and networked communication systems [89,90,91,92].

Author Contributions

Conceptualization and methodology, L.W. and F.D.; validation and analysis, X.L. and C.C.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61304093) and the Natural Science Foundation of Shandong Province (ZR201702170236).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, L.; Chen, L.; Xiong, W.L. Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dyn. 2015, 79, 2155–2163. [Google Scholar] [CrossRef]
  2. Xu, L. The parameter estimation algorithms based on the dynamical response measurement data. Adv. Mech. Eng. 2017, 9, 1–12. [Google Scholar] [CrossRef]
  3. Pan, J.; Jiang, X.; Wan, X.K.; Ding, W. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197. [Google Scholar] [CrossRef]
  4. Liu, Q.Y.; Ding, F.; Xu, L.; Yang, E.F. Partially coupled gradient estimation algorithm for multivariable equation-error autoregressive moving average systems using the data filtering technique. IET Control Theory Appl. 2019, 13, 642–650. [Google Scholar] [CrossRef]
5. Xu, L. The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process. 2016, 120, 660–667. [Google Scholar] [CrossRef]
  6. Wang, D.Q.; Li, L.W.; Ji, Y.; Yan, Y.R. Model recovery for Hammerstein systems using the auxiliary model based orthogonal matching pursuit method. Appl. Math. Model. 2018, 54, 537–550. [Google Scholar] [CrossRef]
  7. Wang, D.Q.; Yan, Y.R.; Liu, Y.J.; Ding, J.H. Model recovery for Hammerstein systems using the hierarchical orthogonal matching pursuit method. J. Comput. Appl. Math. 2019, 345, 135–145. [Google Scholar] [CrossRef]
  8. Xu, L. A proportional differential control method for a time-delay system using the Taylor expansion approximation. Appl. Math. Comput. 2014, 236, 391–399. [Google Scholar] [CrossRef]
  9. Liu, Q.Y.; Ding, F. Auxiliary model-based recursive generalized least squares algorithm for multivariate output-error autoregressive systems using the data filtering. Circuits Syst. Signal Process. 2019, 38, 590–610. [Google Scholar] [CrossRef]
  10. Ge, Z.W.; Ding, F.; Xu, L.; Alsaedi, A.; Hayat, T. Gradient-based iterative identification method for multivariate equation-error autoregressive moving average systems using the decomposition technique. J. Frankl. Inst. 2019, 356, 658–1676. [Google Scholar] [CrossRef]
  11. Pan, J.; Ma, H.; Jiang, X.; Ding, W. Adaptive gradient-based iterative algorithm for multivariate controlled autoregressive moving average systems using the data filtering technique. Complexity 2018, 2018, 9598307. [Google Scholar] [CrossRef]
  12. Li, J.H.; Zheng, W.X.; Gu, J.P.; Hua, L. A recursive identification algorithm for Wiener nonlinear systems with linear state-space subsystem. Circuits Syst. Signal Process. 2018, 37, 2374–2393. [Google Scholar] [CrossRef]
  13. Xu, H.; Ding, F.; Yang, E.F. Modeling a nonlinear process using the exponential autoregressive time series model. Nonlinear Dyn. 2019, 95, 2079–2092. [Google Scholar] [CrossRef]
  14. Ding, J.L. The hierarchical iterative identification algorithm for multi-input-output-error systems with autoregressive noise. Complexity 2017, 2017, 5292894. [Google Scholar] [CrossRef]
  15. Ding, J.L. Recursive and iterative least squares parameter estimation algorithms for multiple-input-output-error systems with autoregressive noise. Circuits Syst. Signal Process. 2018, 37, 1884–1906. [Google Scholar] [CrossRef]
  16. Xu, L. Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J. Comput. Appl. Math. 2015, 288, 33–43. [Google Scholar] [CrossRef]
  17. Liu, S.Y.; Ding, F.; Xu, L.; Hayat, T. Hierarchical principle-based iterative parameter estimation algorithm for dual-frequency signals. Circuits Syst. Signal Process. 2019, 38, 3251–3268. [Google Scholar] [CrossRef]
  18. Wang, Y.J.; Ding, F. A filtering based multi-innovation gradient estimation algorithm and performance analysis for nonlinear dynamical systems. IMA J. Appl. Math. 2017, 82, 1171–1191. [Google Scholar] [CrossRef]
  19. Wang, Y.J.; Ding, F. Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model. Automatica 2016, 71, 308–313. [Google Scholar] [CrossRef]
  20. Ding, J.; Chen, J.Z.; Lin, J.X.; Wan, L.J. Particle filtering based parameter estimation for systems with output-error type model structures. J. Frankl. Inst. 2019, 356, 5521–5540. [Google Scholar] [CrossRef]
  21. Zhang, X.; Xu, L.; Ding, F.; Hayatc, T. Combined state and parameter estimation for a bilinear state space system with moving average noise. J. Frankl. Inst. 2018, 355, 3079–3103. [Google Scholar] [CrossRef]
  22. Zhang, X.; Ding, F.; Alsaadi, F.E.; Hayat, T. Recursive parameter identification of the dynamical models for bilinear state space systems. Nonlinear Dyn. 2017, 89, 2415–2429. [Google Scholar] [CrossRef]
  23. Ding, F.; Chen, H.B.; Xu, L.; Dai, J.Y.; Li, Q.S.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
  24. Xu, L.; Ding, F. Parameter estimation for control systems based on impulse responses. Int. J. Control Autom. Syst. 2017, 15, 2471–2479. [Google Scholar] [CrossRef]
  25. Xu, L.; Ding, F. Parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle. IET Signal Process. 2017, 11, 228–237. [Google Scholar] [CrossRef]
  26. Ding, F.; Liu, G.; Liu, X.P. Partially coupled stochastic gradient identification methods for non-uniformly sampled systems. IEEE Trans. Autom. Control 2010, 55, 1976–1981. [Google Scholar] [CrossRef]
  27. Ding, F. Coupled-least-squares identification for multivariable systems. IET Control Theory Appl. 2013, 7, 68–79. [Google Scholar] [CrossRef]
  28. Ding, F. Two-stage least squares based iterative estimation algorithm for CARARMA system modeling. Appl. Math. Model. 2013, 37, 4798–4808. [Google Scholar] [CrossRef]
  29. Ding, F.; Wang, Y.J.; Dai, J.Y.; Li, Q.S.; Chen, Q.J. A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input-output data filtering. J. Frankl. Inst. 2017, 354, 6938–6955. [Google Scholar] [CrossRef]
  30. Xu, L.; Ding, F. Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling. Circuits Syst. Signal Process. 2017, 36, 1735–1753. [Google Scholar] [CrossRef]
  31. Ding, F. Decomposition based fast least squares algorithm for output error systems. Signal Process. 2013, 93, 1235–1242. [Google Scholar] [CrossRef]
  32. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. State filtering-based least squares parameter estimation for bilinear systems using the hierarchical identification principle. IET Control Theory Appl. 2018, 12, 1704–1713. [Google Scholar] [CrossRef]
  33. Cao, Y.; Wang, Z.; Liu, F.; Li, P.; Xie, G. Bio-inspired speed curve optimization and sliding mode tracking control for subway trains. IEEE Trans. Veh. Technol. 2019. [Google Scholar] [CrossRef]
  34. Cao, Y.; Lu, H.; Wen, T. A safety computer system based on multi-sensor data processing. Sensors 2019, 19, 818. [Google Scholar] [CrossRef]
  35. Cao, Y.; Zhang, Y.; Wen, T.; Li, P. Research on dynamic nonlinear input prediction of fault diagnosis based on fractional differential operator equation in high-speed train control system. Chaos 2019, 29, 013130. [Google Scholar] [CrossRef]
  36. Cao, Y.; Li, P.; Zhang, Y. Parallel processing algorithm for railway signal fault diagnosis data based on cloud computing. Future Gener. Comput. Syst. 2018, 88, 279–283. [Google Scholar] [CrossRef]
  37. Cao, Y.; Ma, L.C.; Xiao, S.; Zhang, X.; Xu, W. Standard analysis for transfer delay in CTCS-3. Chin. J. Electron. 2017, 26, 1057–1063. [Google Scholar] [CrossRef]
  38. Salhi, H.; Kamoun, S. A recursive parametric estimation algorithm of multivariable nonlinear systems described by Hammerstein mathematical models. Appl. Math. Model. 2015, 39, 4951–4962. [Google Scholar] [CrossRef]
  39. Wan, L.J.; Ding, F. Decomposition- and gradient-based iterative identification algorithms for multivariable systems using the multi-innovation theory. Circuits Syst. Signal Process. 2019, 38, 2971–2991. [Google Scholar] [CrossRef]
  40. Li, M.H.; Liu, X.M. The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique. Signal Process. 2018, 147, 23–34. [Google Scholar] [CrossRef]
  41. Xu, L.; Ding, F. Iterative parameter estimation for signal models based on measured data. Circuits Syst. Signal Process. 2018, 37, 3046–3069. [Google Scholar] [CrossRef]
  42. Xu, L.; Xiong, W.L.; Alsaedi, A.; Hayat, T. Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 2018, 16, 1756–1764. [Google Scholar] [CrossRef]
  43. Zhang, X.; Ding, F.; Xu, L.; Alsaedi, A.; Hayat, T. A hierarchical approach for joint parameter and state estimation of a bilinear system with autoregressive noise. Mathematics 2019, 7, 356. [Google Scholar] [CrossRef]
  44. Ding, F.; Pan, J.; Alsaedi, A.; Hayat, T. Gradient-based iterative parameter estimation algorithms for dynamical systems from observation data. Mathematics 2019, 7, 428. [Google Scholar] [CrossRef]
  45. Ma, H.; Pan, J.; Lv, L.; Xu, G.H.; Ding, F.; Alsaedi, A.; Hayat, T. Recursive algorithms for multivariable output-error-like ARMA systems. Mathematics 2019, 7, 558. [Google Scholar] [CrossRef]
  46. Wang, Y.J.; Ding, F.; Xu, L. Some new results of designing an IIR filter with colored noise for signal processing. Digit. Signal Process. 2018, 72, 44–58. [Google Scholar] [CrossRef]
  47. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. Highly computationally efficient state filter based on the delta operator. Int. J. Adapt. Control Signal Process. 2019, 33, 875–889. [Google Scholar] [CrossRef]
48. Yin, C.C.; Wen, Y.Z.; Zhao, Y.X. On the optimal dividend problem for a spectrally positive Lévy process. Astin Bull. 2014, 44, 635–651. [Google Scholar] [CrossRef]
49. Yin, C.C.; Wen, Y.Z. Optimal dividend problem with a terminal value for spectrally positive Lévy processes. Insur. Math. Econ. 2013, 53, 769–773. [Google Scholar] [CrossRef]
50. Yin, C.C.; Zhao, J.S. Nonexponential asymptotics for the solutions of renewal equations, with applications. J. Appl. Probab. 2006, 43, 815–824. [Google Scholar] [CrossRef]
  51. Yin, C.C.; Wang, C.W. The perturbed compound Poisson risk process with investment and debit interest. Methodol. Comput. Appl. Probab. 2010, 12, 391–413. [Google Scholar] [CrossRef]
  52. Yin, C.C.; Wen, Y.Z. Exit problems for jump processes with applications to dividend problems. J. Comput. Appl. Math. 2013, 245, 30–52. [Google Scholar] [CrossRef]
  53. Wen, Y.Z.; Yin, C.C. Solution of Hamilton-Jacobi-Bellman equation in optimal reinsurance strategy under dynamic VaR constraint. J. Funct. Spaces 2019, 2019, 6750892. [Google Scholar] [CrossRef]
  54. Sha, X.Y.; Xu, Z.S.; Yin, C.C. Elliptical distribution-based weight-determining method for ordered weighted averaging operators. Int. J. Intell. Syst. 2019, 34, 858–877. [Google Scholar] [CrossRef]
  55. Pan, J.; Li, W.; Zhang, H.P. Control algorithms of magnetic suspension systems based on the improved double exponential reaching law of sliding mode control. Int. J. Control Autom. Syst. 2018, 16, 2878–2887. [Google Scholar] [CrossRef]
  56. Li, M.H.; Liu, X.M. Auxiliary model based least squares iterative algorithms for parameter estimation of bilinear systems using interval-varying measurements. IEEE Access 2018, 6, 21518–21529. [Google Scholar] [CrossRef]
  57. Sun, Z.Y.; Zhang, D.; Meng, Q.; Chen, C.C. Feedback stabilization of time-delay nonlinear systems with continuous time-varying output function. Int. J. Syst. Sci. 2019, 50, 244–255. [Google Scholar] [CrossRef]
  58. Zhan, X.S.; Cheng, L.L.; Wu, J.; Yang, Q.S.; Han, T. Optimal modified performance of MIMO networked control systems with multi-parameter constraints. ISA Trans. 2019, 84, 111–117. [Google Scholar] [CrossRef]
  59. Zhan, X.S.; Guan, Z.H.; Zhang, X.H.; Yuan, F.S. Optimal tracking performance and design of networked control systems with packet dropout. J. Frankl. Inst. 2013, 350, 3205–3216. [Google Scholar] [CrossRef]
  60. Jiang, C.M.; Zhang, F.F.; Li, T.X. Synchronization and antisynchronization of N-coupled fractional-order complex chaotic systems with ring connection. Math. Methods Appl. Sci. 2018, 41, 2625–2638. [Google Scholar] [CrossRef]
  61. Wang, T.; Liu, L.; Zhang, J.; Schaeffer, E.; Wang, Y. A M-EKF fault detection strategy of insulation system for marine current turbine. Mech. Syst. Signal Process. 2019, 115, 269–280. [Google Scholar] [CrossRef]
  62. Gong, P.C.; Wang, W.Q.; Wan, X.R. Adaptive weight matrix design and parameter estimation via sparse modeling for MIMO radar. Signal Process. 2017, 139, 1–11. [Google Scholar] [CrossRef]
  63. Gong, P.C.; Wang, W.Q.; Li, F.C.; Cheung, H. Sparsity-aware transmit beamspace design for FDA-MIMO radar. Signal Process. 2018, 144, 99–103. [Google Scholar] [CrossRef]
  64. Wan, X.K.; Li, Y.; Xia, C.; Wu, M.H.; Liang, J.; Wang, N. A T-wave alternans assessment method based on least squares curve fitting technique. Measurement 2016, 86, 93–100. [Google Scholar] [CrossRef]
  65. Zhao, N. Joint Optimization of cooperative spectrum sensing and resource allocation in multi-channel cognitive radio sensor networks. Circuits Syst. Signal Process. 2016, 35, 2563–2583. [Google Scholar] [CrossRef]
  66. Zhao, N.; Chen, Y.; Liu, R.; Wu, M.H.; Xiong, W. Monitoring strategy for relay incentive mechanism in cooperative communication networks. Comput. Electr. Eng. 2017, 60, 14–29. [Google Scholar] [CrossRef]
  67. Zhao, N.; Wu, M.H.; Chen, J.J. Android-based mobile educational platform for speech signal processing. Int. J. Electr. Eng. Edu. 2017, 54, 3–16. [Google Scholar] [CrossRef]
  68. Zhao, N.; Liang, Y.; Pei, Y. Dynamic contract incentive mechanism for cooperative wireless networks. IEEE Trans. Veh. Technol. 2018, 67, 10970–10982. [Google Scholar] [CrossRef]
  69. Zhao, X.L.; Liu, F.; Fu, B.; Na, F. Reliability analysis of hybrid multi-carrier energy systems based on entropy-based Markov model. Proc. Inst. Mech. Eng. Part O J. Risk Reliab. 2016, 230, 561–569. [Google Scholar] [CrossRef]
  70. Zhao, X.L.; Lin, Z.Y.; Fu, B.; He, L.; Na, F. Research on automatic generation control with wind power participation based on predictive optimal 2-degree-of-freedom PID strategy for multi-area interconnected power system. Energies 2018, 11, 3325. [Google Scholar] [CrossRef]
  71. Xu, L.; Ding, F.; Gu, Y.; Alsaedi, A.; Hayat, T. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process. 2017, 140, 97–103. [Google Scholar] [CrossRef]
  72. Gu, Y.; Liu, J.; Li, X.; Chou, Y.; Ji, Y. State space model identification of multirate processes with time-delay using the expectation maximization. J. Frankl. Inst. 2019, 356, 1623–1639. [Google Scholar] [CrossRef]
  73. Gu, Y.; Chou, Y.; Liu, J.; Ji, Y. Moving horizon estimation for multirate systems with time-varying time-delays. J. Frankl. Inst. 2019, 356, 2325–2345. [Google Scholar] [CrossRef]
  74. Ding, F.; Xu, L.; Alsaadi, F.E.; Hayat, T. Iterative parameter identification for pseudo-linear systems with ARMA noise using the filtering technique. IET Control Theory Appl. 2018, 12, 892–899. [Google Scholar] [CrossRef]
  75. Liu, F.; Xue, Q.; Yabuta, K. Boundedness and continuity of maximal singular integrals and maximal functions on Triebel-Lizorkin spaces. Sci. China Math. 2019. [Google Scholar] [CrossRef]
  76. Liu, F. Boundedness and continuity of maximal operators associated to polynomial compound curves on Triebel-Lizorkin spaces. Math. Inequal. Appl. 2019, 22, 25–44. [Google Scholar] [CrossRef]
  77. Liu, F.; Fu, Z.; Jhang, S. Boundedness and continuity of Marcinkiewicz integrals associated to homogeneous mappings on Triebel-Lizorkin spaces. Front. Math. China 2019, 14, 95–122. [Google Scholar] [CrossRef]
78. Zhang, W.H.; Xue, L.; Jiang, X. Global stabilization for a class of stochastic nonlinear systems with SISS-like conditions and time delay. Int. J. Robust Nonlinear Control 2018, 28, 3909–3926. [Google Scholar] [CrossRef]
  79. Li, N.; Guo, S.; Wang, Y. Weighted preliminary-summation-based principal component analysis for non-Gaussian processes. Control Eng. Pract. 2019, 87, 122–132. [Google Scholar] [CrossRef]
  80. Wang, Y.; Si, Y.; Huang, B.; Lou, Z. Survey on the theoretical research and engineering applications of multivariate statistics process monitoring algorithms: 2008–2017. Can. J. Chem. Eng. 2018, 96, 2073–2085. [Google Scholar] [CrossRef]
  81. Wu, M.H.; Li, X.; Liu, C.; Liu, M.; Zhao, N.; Wang, J.; Wan, X.K.; Rao, Z.H.; Zhu, L. Robust global motion estimation for video security based on improved k-means clustering. J. Ambient Intell. Humaniz. Comput. 2019, 10, 439–448. [Google Scholar] [CrossRef]
  82. Wan, X.K.; Wu, H.; Qiao, F.; Li, F.; Li, Y.; Yan, Y.; Wei, J. Electrocardiogram baseline wander suppression based on the combination of morphological and wavelet transformation based filtering. Computat. Math. Methods Med. 2019, 2019, 7196156. [Google Scholar] [CrossRef] [PubMed]
  83. Feng, L.; Li, Q.X.; Li, Y.F. Imaging with 3-D aperture synthesis radiometers. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2395–2406. [Google Scholar] [CrossRef]
  84. Shi, W.X.; Liu, N.; Zhou, Y.M.; Cao, X.A. Effects of postannealing on the characteristics and reliability of polyfluorene organic light-emitting diodes. IEEE Trans. Electron Devices 2019, 66, 1057–1062. [Google Scholar] [CrossRef]
  85. Fu, B.; Ouyang, C.X.; Li, C.S.; Wang, J.W.; Gul, E. An improved mixed integer linear programming approach based on symmetry diminishing for unit commitment of hybrid power system. Energies 2019, 12, 833. [Google Scholar] [CrossRef]
  86. Wu, T.Z.; Shi, X.; Liao, L.; Zhou, C.J.; Zhou, H.; Su, Y.H. A capacity configuration control strategy to alleviate power fluctuation of hybrid energy storage system based on improved particle swarm optimization. Energies 2019, 12, 642. [Google Scholar] [CrossRef]
  87. Zhao, X.L.; Lin, Z.Y.; Fu, B.; He, L.; Li, C.S. Research on the predictive optimal PID plus second order derivative method for AGC of power system with high penetration of photovoltaic and wind power. J. Electr. Eng. Technol. 2019, 14, 1075–1086. [Google Scholar] [CrossRef]
  88. Liu, N.; Mei, S.; Sun, D.; Shi, W.; Feng, J.; Zhou, Y.M.; Mei, F.; Xu, J.; Jiang, Y.; Cao, X.A. Effects of charge transport materials on blue fluorescent organic light-emitting diodes with a host-dopant system. Micromachines 2019, 10, 344. [Google Scholar] [CrossRef]
  89. Tian, X.P.; Niu, H.M. A bi-objective model with sequential search algorithm for optimizing network-wide train timetables. Comput. Ind. Eng. 2019, 127, 1259–1272. [Google Scholar] [CrossRef]
  90. Li, X.Y.; Li, H.X.; Wu, B.Y. Piecewise reproducing kernel method for linear impulsive delay differential equations with piecewise constant arguments. Appl. Math. Comput. 2019, 349, 304–313. [Google Scholar] [CrossRef]
  91. Ma, F.Y.; Yin, Y.K.; Li, M. Start-up process modelling of sediment microbial fuel cells based on data driven. Math. Probl. Eng. 2019, 2019, 7403732. [Google Scholar] [CrossRef]
  92. Yang, F.; Zhang, P.; Li, X.X. The truncation method for the Cauchy problem of the inhomogeneous Helmholtz equation. Appl. Anal. 2019, 98, 991–1004. [Google Scholar] [CrossRef]
Figure 1. The true output y 1 ( s ) , the predicted outputs y ^ L S I 1 ( s ) and y ^ H L S I 1 ( s ) versus s.
Figure 2. The true output y 2 ( s ) , the predicted outputs y ^ L S I 2 ( s ) and y ^ H L S I 2 ( s ) versus s.
Table 1. The computational efficiency of the least-squares-based iterative identification algorithm.

Expression | Number of multiplications | Number of additions
θ̂_k = S_k β_k ∈ R^(n×m) | m n^2 k | m n (n − 1) k
S_k := S_k^(−1) ∈ R^(n×n) | (n^3 + n^2 − n + 1) k | (n^3 + n^2 − 2n) k
S_k := Ω̂_k(L) Ω̂_k^T(L) ∈ R^(n×n) | n^2 L k | n^2 (L − 1) k
β_k := Ω̂_k(L) Y^T(L) ∈ R^(n×m) | m n L k | m n (L − 1) k
ŵ_k(s) = y(s) − α̂_k^T φ_α(s) ∈ R^m | m n1 L k | m n1 L k
v̂_k(s) = ŵ_k(s) − β̂_k^T ψ̂_k(s) ∈ R^m | m n2 L k | m n2 L k
Sum (number of multiplications) | [n^3 + (m + 1 + L) n^2 + (2mL − 1) n + 1] k
Sum (number of additions) | [n^3 + (m + L) n^2 + 2(mL − m − 1) n] k
Total flops | N1 := [2n^3 + (2m + 2L + 1) n^2 + (4mL − 2m − 3) n + 1] k
Table 2. The computational efficiency of the hierarchical least-squares-based iterative identification algorithm.

Expression | Number of multiplications | Number of additions
α̂_k = S_1 β_{1,k} ∈ R^(n1×m) | m n1^2 k | m n1 (n1 − 1) k
S_1 := S_1^(−1) ∈ R^(n1×n1) | n1^3 + n1^2 − n1 + 1 | n1^3 + n1^2 − 2n1
S_1 := φ_α(L) φ_α^T(L) ∈ R^(n1×n1) | n1^2 L | n1^2 (L − 1)
β_{1,k} := Φ(L) E_k^T(L) ∈ R^(n1×m) | m n1 L k | m n1 (L − 1) k
E_k(L) := Y(L) − α̂_{k−1}^T Φ(L) − β̂_{k−1}^T Ψ̂_k(L) | m n L k | m n L k
β̂_k = S_{2,k} β_{2,k} ∈ R^(n2×m) | m n2^2 k | m n2 (n2 − 1) k
S_{2,k} := S_{2,k}^(−1) ∈ R^(n2×n2) | (n2^3 + n2^2 − n2 + 1) k | (n2^3 + n2^2 − 2n2) k
S_{2,k} := Ψ̂_k(L) Ψ̂_k^T(L) ∈ R^(n2×n2) | n2^2 L k | n2^2 (L − 1) k
β_{2,k} := Ψ̂_k(L) E_k^T(L) ∈ R^(n2×m) | m n2 L k | m n2 (L − 1) k
ŵ_k(s) = y(s) − α̂_k^T φ_α(s) ∈ R^m | m n1 L k | m n1 L k
v̂_k(s) = ŵ_k(s) − β̂_k^T ψ̂_k(s) ∈ R^m | m n2 L k | m n2 L k
Sum (number of multiplications) | [m(n1^2 + n2^2) + 3mnL + n2^3 + (L + 1) n2^2 − n2 + 1] k + n1^3 + (L + 1) n1^2 − n1 + 1
Sum (number of additions) | [m(n1^2 + n2^2) + mn(3L − 2) + n2^3 + n2^2 L − 2n2] k + n1^3 + n1^2 L − 2n1
Total flops | N2 := [2m(n1^2 + n2^2) + 2mn(3L − 2) + 2n2^3 + (2L + 1) n2^2 − 3n2 + 1] k + 2n1^3 + (2L + 1) n1^2 − 3n1 + 1
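To see the computational advantage quantitatively, the closed-form flop totals N1 (Table 1) and N2 (Table 2) can be evaluated directly. The sketch below transcribes the two totals; the function names and the illustrative sizes m = 2, n1 = n2 = 8 (so n = n1 + n2 = 16), L = 3000, k = 10 are our choices, not from the paper.

```python
def lsi_flops(m, n, L, k):
    # N1 (Table 1): total flops of the LSI algorithm after k iterations
    return (2 * n**3 + (2 * m + 2 * L + 1) * n**2
            + (4 * m * L - 2 * m - 3) * n + 1) * k

def hlsi_flops(m, n1, n2, L, k):
    # N2 (Table 2): total flops of the HLSI algorithm; the n1-part
    # (inverting S_1) is computed only once, outside the iteration loop
    n = n1 + n2
    per_iteration = (2 * m * (n1**2 + n2**2) + 2 * m * n * (3 * L - 2)
                     + 2 * n2**3 + (2 * L + 1) * n2**2 - 3 * n2 + 1)
    return per_iteration * k + 2 * n1**3 + (2 * L + 1) * n1**2 - 3 * n1 + 1
```

Because the single n × n inverse of the LSI normal matrix is replaced by two smaller inverses of sizes n1 and n2 (one of which is computed only once), N2 stays below N1 for these sizes, which is the computational saving claimed for the hierarchical algorithm.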
Table 3. The LSI estimates and errors (σ1² = σ2² = 0.50²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.45255 | 0.40643 | 0.98869 | −0.50900 | −0.00688 | 0.00264 | −0.00796 | −0.00359
2 | 0.48634 | 0.38699 | 0.99227 | −0.50971 | −0.54477 | 0.22674 | −0.00537 | −0.00023
3 | 0.50958 | 0.39596 | 0.99203 | −0.50976 | −0.56916 | 0.21497 | −0.00523 | 0.00100
4 | 0.50615 | 0.40461 | 0.99211 | −0.50968 | −0.56544 | 0.20941 | −0.00531 | 0.00003
5 | 0.49758 | 0.40336 | 0.99219 | −0.50966 | −0.55529 | 0.21183 | −0.00537 | −0.00049
10 | 0.49925 | 0.40139 | 0.99217 | −0.50968 | −0.55793 | 0.21329 | −0.00534 | −0.00021
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −0.79945 | 0.73056 | −0.38776 | 1.18668 | −0.02309 | −0.02288 | −0.02250 | 0.00397 | 68.20082
2 | −0.65715 | 0.75762 | −0.39320 | 1.19655 | −1.91411 | 0.05582 | −0.01435 | 0.01394 | 4.08239
3 | −0.66034 | 0.80889 | −0.39353 | 1.19729 | −1.91203 | −0.00101 | −0.01470 | 0.00996 | 3.81620
4 | −0.70285 | 0.80968 | −0.39302 | 1.19743 | −1.86645 | 0.00885 | −0.01503 | 0.00676 | 3.61680
5 | −0.71357 | 0.79623 | −0.39296 | 1.19737 | −1.85580 | 0.02099 | −0.01502 | 0.00710 | 3.66328
10 | −0.70093 | 0.79640 | −0.39309 | 1.19732 | −1.87117 | 0.01880 | −0.01492 | 0.00802 | 3.45265
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |
Table 4. The LSI estimates and errors (σ1² = σ2² = 1.00²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.37911 | 0.41821 | 0.97582 | −0.51920 | −0.02077 | 0.00610 | −0.01571 | −0.00216
2 | 0.45862 | 0.36770 | 0.98462 | −0.51944 | −0.51850 | 0.24583 | −0.01069 | −0.00056
3 | 0.52330 | 0.37841 | 0.98443 | −0.51962 | −0.58539 | 0.22828 | −0.01071 | 0.00344
4 | 0.52177 | 0.41221 | 0.98429 | −0.51938 | −0.58362 | 0.20060 | −0.01068 | 0.00061
5 | 0.49205 | 0.40950 | 0.98436 | −0.51932 | −0.54986 | 0.20628 | −0.01068 | −0.00126
10 | 0.49662 | 0.40205 | 0.98442 | −0.51939 | −0.55669 | 0.21270 | −0.01067 | −0.00036
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −0.93419 | 0.64539 | −0.39021 | 1.17886 | −0.05758 | −0.04446 | −0.04030 | 0.01929 | 67.62883
2 | −0.59263 | 0.66521 | −0.38479 | 1.19285 | −1.97468 | 0.14646 | −0.02931 | 0.03218 | 8.64671
3 | −0.57894 | 0.81485 | −0.38713 | 1.19469 | −1.99036 | −0.01829 | −0.02970 | 0.02306 | 6.64949
4 | −0.69557 | 0.83966 | −0.38683 | 1.19518 | −1.86685 | −0.02063 | −0.02958 | 0.01271 | 4.35994
5 | −0.74887 | 0.79252 | −0.38647 | 1.19502 | −1.81223 | 0.02568 | −0.02959 | 0.01277 | 4.77498
10 | −0.70220 | 0.78945 | −0.38650 | 1.19486 | −1.86665 | 0.02395 | −0.02957 | 0.01625 | 3.46273
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |
Table 5. The LSI estimates and errors (σ1² = σ2² = 2.00²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.30598 | 0.43273 | 0.95321 | −0.54078 | −0.05132 | 0.00896 | −0.03407 | 0.00703
2 | 0.42316 | 0.35154 | 0.96908 | −0.53879 | −0.48423 | 0.26160 | −0.02134 | −0.00104
3 | 0.54052 | 0.33373 | 0.97007 | −0.53902 | −0.60385 | 0.27146 | −0.02198 | 0.00733
4 | 0.55436 | 0.42567 | 0.96900 | −0.53866 | −0.61803 | 0.18606 | −0.02160 | 0.00172
5 | 0.48269 | 0.42471 | 0.96851 | −0.53864 | −0.54117 | 0.19036 | −0.02131 | −0.00235
10 | 0.49093 | 0.40387 | 0.96891 | −0.53872 | −0.55204 | 0.21020 | −0.02138 | −0.00053
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −1.05293 | 0.57820 | −0.40261 | 1.16284 | −0.12697 | −0.08390 | −0.08149 | 0.05918 | 66.29107
2 | −0.52615 | 0.53991 | −0.36897 | 1.18798 | −2.03624 | 0.27031 | −0.05907 | 0.05664 | 15.54031
3 | −0.44989 | 0.78904 | −0.37305 | 1.18991 | −2.11502 | 0.00375 | −0.06016 | 0.04746 | 12.73434
4 | −0.67116 | 0.92651 | −0.37609 | 1.19057 | −1.88402 | −0.10881 | −0.05836 | 0.02468 | 8.21885
5 | −0.82963 | 0.79822 | −0.37539 | 1.19022 | −1.72334 | 0.01847 | −0.05820 | 0.02430 | 8.51037
10 | −0.71231 | 0.78084 | −0.37425 | 1.19008 | −1.85093 | 0.03082 | −0.05865 | 0.03211 | 4.13713
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |
Table 6. The HLSI estimates and errors (σ1² = σ2² = 0.50²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.45263 | 0.40637 | 0.98875 | −0.50909 | −0.04541 | 0.02727 | 0.02581 | 0.01375
2 | 0.45491 | 0.40356 | 0.98880 | −0.51009 | −0.51110 | 0.20729 | −0.00526 | 0.00002
3 | 0.48853 | 0.39232 | 0.99200 | −0.50967 | −0.51309 | 0.20921 | −0.00489 | 0.00030
4 | 0.50095 | 0.39372 | 0.99204 | −0.50988 | −0.54442 | 0.21420 | −0.00477 | 0.00107
5 | 0.50362 | 0.40108 | 0.99221 | −0.50987 | −0.55317 | 0.21647 | −0.00489 | 0.00036
10 | 0.49955 | 0.40119 | 0.99217 | −0.50969 | −0.55804 | 0.21307 | −0.00534 | −0.00017
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −0.79936 | 0.73063 | −0.38834 | 1.18748 | 0.01185 | −0.13866 | 0.00423 | 0.01831 | 69.14794
2 | −0.79589 | 0.74816 | −0.39168 | 1.18917 | −1.77185 | 0.08294 | −0.00952 | 0.01716 | 7.20851
3 | −0.66539 | 0.75709 | −0.39207 | 1.19513 | −1.76251 | 0.05227 | −0.01002 | 0.01671 | 6.24422
4 | −0.67843 | 0.79006 | −0.39212 | 1.19557 | −1.88355 | 0.05344 | −0.01313 | 0.01080 | 3.52902
5 | −0.68501 | 0.80187 | −0.39217 | 1.19649 | −1.88049 | 0.02427 | −0.01446 | 0.00823 | 3.41217
10 | −0.70042 | 0.79617 | −0.39310 | 1.19731 | −1.87272 | 0.01863 | −0.01494 | 0.00806 | 3.43715
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |
Table 7. The HLSI estimates and errors (σ1² = σ2² = 1.00²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.37942 | 0.41810 | 0.97612 | −0.51907 | −0.06555 | 0.02995 | 0.02371 | 0.03103
2 | 0.38912 | 0.41194 | 0.97660 | −0.52021 | −0.43935 | 0.19532 | −0.01176 | 0.00257
3 | 0.45524 | 0.38505 | 0.98314 | −0.51937 | −0.45002 | 0.19959 | −0.01084 | 0.00294
4 | 0.48057 | 0.37803 | 0.98320 | −0.51970 | −0.51466 | 0.21120 | −0.01033 | 0.00417
5 | 0.50132 | 0.39142 | 0.98407 | −0.51981 | −0.52689 | 0.22128 | −0.00933 | 0.00318
10 | 0.49770 | 0.39981 | 0.98448 | −0.51946 | −0.56257 | 0.21197 | −0.01094 | −0.00043
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −0.93342 | 0.64534 | −0.39098 | 1.18112 | 0.02877 | −0.15072 | −0.00904 | −0.00210 | 70.25212
2 | −0.93091 | 0.68495 | −0.39579 | 1.18382 | −1.63403 | 0.16609 | −0.02072 | 0.04766 | 15.04949
3 | −0.64662 | 0.66320 | −0.38416 | 1.18926 | −1.61660 | 0.09883 | −0.02194 | 0.04712 | 12.33823
4 | −0.65462 | 0.72677 | −0.38511 | 1.18991 | −1.87187 | 0.13390 | −0.02323 | 0.02942 | 5.98792
5 | −0.61429 | 0.75987 | −0.38266 | 1.19109 | −1.85595 | 0.06066 | −0.02495 | 0.02355 | 5.08516
10 | −0.70984 | 0.79018 | −0.38689 | 1.19508 | −1.88460 | 0.02909 | −0.03028 | 0.01504 | 3.34293
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |
Table 8. The HLSI estimates and errors (σ1² = σ2² = 2.00²).

k | a11 | a12 | b11 | b12 | c11 | c12 | d11 | d12
1 | 0.30659 | 0.43256 | 0.95390 | −0.54000 | −0.10583 | 0.03531 | 0.01951 | 0.06559
2 | 0.33248 | 0.42280 | 0.95566 | −0.54103 | −0.36770 | 0.18045 | −0.02732 | 0.01179
3 | 0.40565 | 0.38654 | 0.96508 | −0.53914 | −0.39917 | 0.18838 | −0.02489 | 0.01057
4 | 0.44228 | 0.36624 | 0.96510 | −0.53908 | −0.46950 | 0.19745 | −0.02448 | 0.01207
5 | 0.47839 | 0.38036 | 0.96662 | −0.53919 | −0.48983 | 0.22303 | −0.02097 | 0.01083
10 | 0.50459 | 0.38505 | 0.96907 | −0.53937 | −0.58299 | 0.21447 | −0.02257 | −0.00096
True values | 0.50000 | 0.40000 | 1.00000 | −0.50000 | −0.55000 | 0.20000 | 0.05000 | −0.05000

k | a21 | a22 | b21 | b22 | c21 | c22 | d21 | d22 | δ (%)
1 | −1.05128 | 0.57789 | −0.40369 | 1.16799 | 0.06260 | −0.17483 | −0.03557 | −0.04291 | 72.22302
2 | −1.06002 | 0.63997 | −0.40964 | 1.17231 | −1.51135 | 0.23182 | −0.05330 | 0.10898 | 22.60909
3 | −0.65570 | 0.56386 | −0.37654 | 1.18036 | −1.48940 | 0.13195 | −0.05586 | 0.11046 | 18.48183
4 | −0.64791 | 0.63257 | −0.37934 | 1.18113 | −1.85360 | 0.21998 | −0.04887 | 0.06935 | 10.56573
5 | −0.51920 | 0.65844 | −0.37044 | 1.18255 | −1.81902 | 0.13118 | −0.04833 | 0.06187 | 10.21901
10 | −0.72624 | 0.78837 | −0.37432 | 1.19006 | −1.96662 | 0.08720 | −0.05894 | 0.01722 | 5.15136
True values | −0.70000 | 0.80000 | −0.40000 | 1.20000 | −1.90000 | 0.03000 | −0.05000 | 0.05000 |

Share and Cite

MDPI and ACS Style

Wan, L.; Liu, X.; Ding, F.; Chen, C. Decomposition Least-Squares-Based Iterative Identification Algorithms for Multivariable Equation-Error Autoregressive Moving Average Systems. Mathematics 2019, 7, 609. https://doi.org/10.3390/math7070609
