Article

Recursive Algorithms for Multivariable Output-Error-Like ARMA Systems

1
Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, China
2
College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
3
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
4
Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
*
Authors to whom correspondence should be addressed.
Mathematics 2019, 7(6), 558; https://doi.org/10.3390/math7060558
Submission received: 26 May 2019 / Revised: 10 June 2019 / Accepted: 12 June 2019 / Published: 19 June 2019
(This article belongs to the Section Engineering Mathematics)

Abstract

This paper studies the parameter identification problems for multivariable output-error-like systems with colored noises. Based on the hierarchical identification principle, the original system is decomposed into several subsystems. However, each subsystem contains the same parameter vector, which leads to redundant computation. By taking the average of the parameter estimation vectors of each subsystem, a partially-coupled subsystem recursive generalized extended least squares (PC-S-RGELS) algorithm is presented to cut down the redundant parameter estimates. Furthermore, a partially-coupled recursive generalized extended least squares (PC-RGELS) algorithm is presented to further reduce the computational cost and the redundant estimates by using the coupling identification concept. Finally, an example indicates the effectiveness of the derived algorithms.

1. Introduction

System identification is an important branch of modern control theory; it provides methods for establishing mathematical models of systems from observation data and prior knowledge [1,2,3,4,5,6,7,8], and it has been applied in many fields for decades, such as controller design [9,10,11,12,13,14,15] and system analysis [16,17,18,19,20]. Parameter identification is an important part of system identification, whose goal is to estimate the parameters of a system by using the measurable data [21,22,23,24,25,26,27]. Parameter estimation methods can be applied to many areas [28,29,30,31]. Recently, in the literature of parameter identification, Wan et al. studied the parameter estimation problem for multivariable equation-error systems with colored noises and proposed a hierarchical gradient-based iterative identification algorithm by using the hierarchical identification principle [32]. Chen et al. transformed the time-delay rational model into an augmented model based on the redundant rule and proposed a biased compensation recursive least squares-based threshold algorithm [33]. Other identification methods can be found in [34,35,36,37,38,39,40].
Multivariable systems widely exist in practical industrial processes because multi-input multi-output systems can describe modern industrial processes more accurately [41,42,43,44,45]. Parameter estimation of multivariable systems has attracted extensive research attention over the past decades, and many different identification approaches have been proposed to solve the parameter identification problems of multivariable systems, such as the hierarchical identification principle and the coupling identification concept [46,47]. The core idea of the hierarchical identification principle is to decompose the original model into several submodels and to combine other approaches to estimate the parameters of the submodels [48,49]. Coupled identification methods, first presented in [50], have been derived to identify the parameters of multivariable systems. The basic idea of the coupling identification concept is to decompose the original system into several subsystems and to estimate the parameters based on the coupled parameter relationships between these subsystems [51,52,53].
In the field of system modeling and control, recursive identification and iterative identification are two basic classes of methods [54,55,56,57,58,59]. Among the many different parameter estimation techniques, recursive least squares methods are the most commonly used [60,61,62]. Recently, the recursive least squares (RLS) algorithm has often been combined with other methods to identify complex systems. For instance, Zhang et al. proposed a filtering-based two-stage RLS algorithm for a bilinear system described in state space form, based on the filtering technique [63]. Liu et al. studied the parameter estimation problems of multivariate output-error autoregressive systems and derived a filtering-based auxiliary model recursive generalized least squares algorithm based on the data filtering technique and the auxiliary model identification idea [64].
Multivariable output-error-like systems are a special type of multivariable systems, which contain not only multiple inputs and multiple outputs but also more complex parameter forms, i.e., a parameter vector and a parameter matrix [65,66]. Hence, multivariable output-error-like systems can describe modern industrial processes more accurately than other types of multivariable systems. Recently, for multivariate output-error systems, Wang et al. proposed a decomposition-based recursive least squares identification algorithm by using the auxiliary model and analyzed its convergence through stochastic process theory [67]. Ding proposed a hierarchical iterative identification algorithm for multi-input output-error systems with autoregressive noise [68]. Different from the methods in [67,68], this paper studies the parameter identification problems of multivariable output-error-like (M-OE-like) systems with colored noises, which are described by the autoregressive moving average (ARMA) model, by means of the decomposition technique and the coupling identification concept [69,70]. The main contributions of this paper lie in the following aspects.
  • Based on the hierarchical identification principle, this paper decomposes the original system into m subsystems.
  • Based on the coupled relationships between subsystems, a partially-coupled recursive generalized extended least squares (PC-RGELS) algorithm is proposed to identify the parameters of M-OEARMA-like systems.
  • The derived PC-RGELS algorithm has higher computational efficiency and higher estimation accuracy than the recursive generalized extended least squares (RGELS) algorithm.
This paper is organized as follows. Section 2 describes the identification model. Section 3 derives an RGELS algorithm, which serves as a comparison for the proposed algorithms. Section 4 proposes a partially-coupled recursive generalized extended least squares algorithm. Section 5 provides the numerical simulation results to illustrate the performance of the proposed algorithms. Finally, Section 6 gives some conclusions.

2. The System Description

Let us introduce some symbols. "$B =: Y$" or "$Y := B$" stands for "Y is defined as B"; the superscript T stands for the vector/matrix transpose; the symbol $I_n$ denotes an identity matrix of size $n \times n$; $\mathbf{1}_n$ stands for an n-dimensional column vector whose elements are all 1; the symbol $\otimes$ represents the Kronecker product: for $C = [c_{ij}] \in \mathbb{R}^{m \times n}$ and $D = [d_{ij}] \in \mathbb{R}^{p \times q}$, $C \otimes D = [c_{ij} D] \in \mathbb{R}^{(mp) \times (nq)}$, and in general $C \otimes D \neq D \otimes C$; $\mathrm{col}[Y]$ is defined as the vector formed by stacking all columns of the matrix Y in order, i.e., for $Y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^{m \times n}$, $\mathrm{col}[Y] = [y_1^T, y_2^T, \ldots, y_n^T]^T \in \mathbb{R}^{mn}$; $\hat{Y}(s)$ denotes the estimate of Y at time s; the norm of a matrix (or a column vector) Y is defined by $\|Y\|^2 := \mathrm{tr}[Y Y^T]$.
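As a small illustration of this notation, the following NumPy snippet (a sketch added here for readability; the matrix sizes are arbitrary examples, not taken from the paper) shows the Kronecker product, the col[·] operator and the matrix norm:

```python
import numpy as np

# Example sizes only: C is 2x3, D is 2x2.
C = np.arange(1, 7, dtype=float).reshape(2, 3)
D = np.array([[0.0, 1.0], [2.0, 3.0]])

# Kronecker product: C ⊗ D has size (2*2) x (3*2); in general C ⊗ D != D ⊗ C.
CD = np.kron(C, D)
DC = np.kron(D, C)
print(CD.shape, DC.shape)          # (4, 6) (6, 4)

# col[Y]: stack the columns of Y into one long column vector.
Y = np.array([[1.0, 3.0], [2.0, 4.0]])
col_Y = Y.flatten(order="F").reshape(-1, 1)   # [[1], [2], [3], [4]]

# Matrix norm used in the paper: ||Y||^2 := tr(Y Y^T) (squared Frobenius norm).
norm_sq = np.trace(Y @ Y.T)        # equals np.linalg.norm(Y, 'fro')**2
```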
Recently, for the multivariable output-error system:
y(s) = \frac{\Phi_s(s)\,\theta}{A(z)} + v(s),   (1)
where $y(s) := [y_1(s), y_2(s), \ldots, y_m(s)]^T \in \mathbb{R}^m$ refers to the output vector of the system, $v(s) \in \mathbb{R}^m$ is a white noise vector with zero mean, $\Phi_s(s) \in \mathbb{R}^{m \times n}$ is the information matrix consisting of the input-output data, $\theta \in \mathbb{R}^n$ is the parameter vector, $z^{-1}$ is the unit delay operator: $z^{-1} y(s) = y(s-1)$, and $A(z)$ is a monic polynomial in $z^{-1}$,
A(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a}, \quad a_i \in \mathbb{R},
a coupled stochastic gradient identification algorithm has been proposed to estimate the parameters of this system [71].
Different from the system in [71], this paper considers the multivariable output-error-like system with autoregressive moving average noise:
y(s) = \frac{Q(z)}{\alpha(z)} u(s) + \frac{D(z)}{C(z)} v(s),   (2)
where the definitions of $y(s)$ and $v(s)$ are the same as above, $u(s) := [u_1(s), u_2(s), \ldots, u_r(s)]^T \in \mathbb{R}^r$ is the system input vector, $\alpha(z)$, $C(z)$ and $D(z)$ are monic polynomials in $z^{-1}$, and $Q(z)$ is a matrix-coefficient polynomial in $z^{-1}$; they are defined by
\alpha(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_n z^{-n}, \quad a_i \in \mathbb{R},
Q(z) := Q_1 z^{-1} + Q_2 z^{-2} + \cdots + Q_n z^{-n}, \quad Q_i \in \mathbb{R}^{m \times r},
C(z) := 1 + c_1 z^{-1} + c_2 z^{-2} + \cdots + c_{n_c} z^{-n_c}, \quad c_i \in \mathbb{R},
D(z) := 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}, \quad d_i \in \mathbb{R}.
In order to focus on the essence of the parameter estimation, we assume that the orders m, r, n, $n_c$ and $n_d$ are known, and that $y(s) = 0$, $u(s) = 0$ and $v(s) = 0$ for $s \leq 0$. Define the actual output vector of the system,
x(s) := \frac{Q(z)}{\alpha(z)} u(s) \in \mathbb{R}^m.   (3)
Define the parameter vector α and the parameter matrix θ , and the information vector φ ( s ) and the information matrix ψ s ( s ) as
\alpha := [a_1, a_2, \ldots, a_n]^T \in \mathbb{R}^n, \quad \theta^T := [Q_1, Q_2, \ldots, Q_n] \in \mathbb{R}^{m \times (rn)},
\varphi(s) := [u^T(s-1), u^T(s-2), \ldots, u^T(s-n)]^T \in \mathbb{R}^{rn},
\psi_s(s) := [-x(s-1), -x(s-2), \ldots, -x(s-n)] \in \mathbb{R}^{m \times n}.
Equation (3) can be expressed as
x(s) = [1 - \alpha(z)]\,x(s) + Q(z)\,u(s) = \psi_s(s)\,\alpha + \theta^T \varphi(s).   (4)
Define an internal vector of the system,
w(s) := \frac{D(z)}{C(z)} v(s) \in \mathbb{R}^m.   (5)
Let $n_1 := n_c + n_d$, and define the parameter vector ρ and the information matrix $\psi_n(s)$ of the system as
\rho := [c_1, c_2, \ldots, c_{n_c}, d_1, d_2, \ldots, d_{n_d}]^T \in \mathbb{R}^{n_1},
\psi_n(s) := [-w(s-1), -w(s-2), \ldots, -w(s-n_c), v(s-1), v(s-2), \ldots, v(s-n_d)] \in \mathbb{R}^{m \times n_1}.
Equation (5) can be rewritten as
w(s) = [1 - C(z)]\,w(s) + D(z)\,v(s)
= y(s) - \psi_s(s)\,\alpha - \theta^T \varphi(s)
= \psi_n(s)\,\rho + v(s).
Using (4) and (7), Equation (2) can be equivalently written as
y(s) = x(s) + w(s)
= \psi_s(s)\,\alpha + \theta^T \varphi(s) + \psi_n(s)\,\rho + v(s)
= [\psi_s(s), \psi_n(s)] \begin{bmatrix} \alpha \\ \rho \end{bmatrix} + \theta^T \varphi(s) + v(s).
Let $n_0 := n_1 + n$, and define the information matrix ψ(s) and the parameter vector β as
\psi(s) := [\psi_s(s), \psi_n(s)] \in \mathbb{R}^{m \times n_0},
\beta := [\alpha^T, \rho^T]^T \in \mathbb{R}^{n_0}.
Substituting (10) and (11) into (9) gives the hierarchical identification model
y(s) = \psi(s)\,\beta + \theta^T \varphi(s) + v(s).   (12)
For convenience, we define an information matrix Ψ ( s ) by making use of the Kronecker product of the information matrix ψ ( s ) and the information vector φ ( s ) as
\Psi(s) := [\psi(s), \varphi^T(s) \otimes I_m] \in \mathbb{R}^{m \times n_2}, \quad n_2 := n_0 + mrn.
Hence, a parameter vector ϑ is defined by using the parameter vector β and the parameter matrix θ ,
\vartheta := \begin{bmatrix} \beta \\ \mathrm{col}[\theta^T] \end{bmatrix} \in \mathbb{R}^{n_2}.
Then Equation (12) can be equivalently expressed as
y(s) = \Psi(s)\,\vartheta + v(s).   (13)
Therefore, we obtain the identification model (13) of the M-OEARMA-like system in (2), where ϑ is the parameter vector to be identified and contains all the parameters of the system (2).
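To make the rearrangement behind the identification model (13) concrete, the following NumPy sketch (our illustration; the orders are arbitrary example values and the data are random stand-ins) builds Ψ(s) and ϑ from ψ_s(s), ψ_n(s), φ(s), α, ρ and θ, and checks that (12) and (13) give the same output:

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n, nc, nd = 2, 2, 1, 1, 1            # example orders only
n0 = n + nc + nd
n2 = n0 + m * r * n

# Random stand-ins for the quantities defined above.
psi_s = rng.standard_normal((m, n))        # [-x(s-1), ..., -x(s-n)]
psi_n = rng.standard_normal((m, nc + nd))  # [-w(s-1), ..., v(s-nd)]
phi = rng.standard_normal((r * n, 1))      # stacked past inputs
alpha = rng.standard_normal((n, 1))
rho = rng.standard_normal((nc + nd, 1))
theta = rng.standard_normal((r * n, m))    # parameter matrix
v = rng.standard_normal((m, 1))

# Model (12): y(s) = psi(s) beta + theta^T phi(s) + v(s).
psi = np.hstack([psi_s, psi_n])
beta = np.vstack([alpha, rho])
y1 = psi @ beta + theta.T @ phi + v

# Model (13): y(s) = Psi(s) vartheta + v(s), using theta^T phi = (phi^T ⊗ I_m) col[theta^T].
Psi = np.hstack([psi, np.kron(phi.T, np.eye(m))])
assert Psi.shape == (m, n2)
vartheta = np.vstack([beta, theta.T.flatten(order="F").reshape(-1, 1)])
y2 = Psi @ vartheta + v

assert np.allclose(y1, y2)
```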

3. The RGELS Algorithm

Based on Equation (13), define a criterion function,
J_1(\vartheta) := \sum_{j=1}^{s} \| y(j) - \Psi(j)\,\vartheta \|^2.   (14)
Let ϑ ^ ( s ) be the estimate of ϑ at time s. Minimizing J 1 ( ϑ ) gives
\left. \frac{\partial J_1(\vartheta)}{\partial \vartheta} \right|_{\vartheta = \hat{\vartheta}(s)} = \left. \frac{\partial\, (Y_s - H_s \vartheta)^T (Y_s - H_s \vartheta)}{\partial \vartheta} \right|_{\vartheta = \hat{\vartheta}(s)} = 0,   (15)
where
Y_s := [y^T(1), y^T(2), y^T(3), \ldots, y^T(s)]^T \in \mathbb{R}^{ms}, \quad H_s := [\Psi^T(1), \Psi^T(2), \Psi^T(3), \ldots, \Psi^T(s)]^T \in \mathbb{R}^{(ms) \times n_2}.
The parameter estimate $\hat{\vartheta}(s)$ of ϑ can be obtained from (15) as
\hat{\vartheta}(s) = (H_s^T H_s)^{-1} H_s^T Y_s   (16)
= \left[ \sum_{j=1}^{s} \Psi^T(j)\,\Psi(j) \right]^{-1} \sum_{j=1}^{s} \Psi^T(j)\, y(j).   (17)
Define the covariance matrix
P^{-1}(s) := \sum_{j=1}^{s} \Psi^T(j)\,\Psi(j) = P^{-1}(s-1) + \Psi^T(s)\,\Psi(s) \in \mathbb{R}^{n_2 \times n_2}.   (18)
Let $L(s) := P(s)\,\Psi^T(s) \in \mathbb{R}^{n_2 \times m}$ be the gain matrix. Based on the derivation of the RLS algorithm in [72,73], we can easily get the RLS relations:
\hat{\vartheta}(s) = \hat{\vartheta}(s-1) + L(s)\,[y(s) - \Psi(s)\,\hat{\vartheta}(s-1)],   (19)
L(s) = P(s-1)\,\Psi^T(s)\,[I_m + \Psi(s)\,P(s-1)\,\Psi^T(s)]^{-1},   (20)
P(s) = P(s-1) - L(s)\,[P(s-1)\,\Psi^T(s)]^T.   (21)
However, Equations (19)–(21) cannot be implemented to compute the parameter estimate $\hat{\vartheta}(s)$, because the information matrix Ψ(s) contains the unknown vectors $x(s-i)$, $w(s-i)$ and $v(s-i)$. The solution is to replace these unknown vectors in Ψ(s) with their corresponding estimates $\hat{x}(s-i)$, $\hat{w}(s-i)$ and $\hat{v}(s-i)$ by using the auxiliary model. Define the estimates of Ψ(s), ψ(s), $\psi_s(s)$ and $\psi_n(s)$ as
\hat{\Psi}(s) := [\hat{\psi}(s), \varphi^T(s) \otimes I_m] \in \mathbb{R}^{m \times n_2},   (22)
\hat{\psi}(s) := [\hat{\psi}_s(s), \hat{\psi}_n(s)] \in \mathbb{R}^{m \times n_0},   (23)
\hat{\psi}_s(s) := [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)] \in \mathbb{R}^{m \times n},   (24)
\hat{\psi}_n(s) := [-\hat{w}(s-1), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \ldots, \hat{v}(s-n_d)] \in \mathbb{R}^{m \times n_1}.   (25)
Replacing ψ s ( s ) , α and θ in (4) and (6) with their estimates ψ ^ s ( s ) , α ^ ( s ) and θ ^ ( s ) , the estimates x ^ ( s ) and w ^ ( s ) can be calculated by two auxiliary models:
\hat{x}(s) := \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (26)
\hat{w}(s) := y(s) - \hat{x}(s) = y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s).   (27)
From (13), use the estimates Ψ ^ ( s ) and ϑ ^ ( s ) of Ψ ( s ) and ϑ to define the estimate of v ( s ) as
\hat{v}(s) := y(s) - \hat{\Psi}(s)\,\hat{\vartheta}(s).   (28)
Combining (22)–(28) and replacing Ψ ( s ) in (19)–(21) with Ψ ^ ( s ) yield the following recursive generalized extended least squares (RGELS) algorithm:
\hat{\vartheta}(s) = \hat{\vartheta}(s-1) + L(s)\,[y(s) - \hat{\Psi}(s)\,\hat{\vartheta}(s-1)],   (29)
L(s) = P(s-1)\,\hat{\Psi}^T(s)\,[I_m + \hat{\Psi}(s)\,P(s-1)\,\hat{\Psi}^T(s)]^{-1},   (30)
P(s) = P(s-1) - L(s)\,[P(s-1)\,\hat{\Psi}^T(s)]^T,   (31)
\hat{\Psi}(s) = [\hat{\psi}(s), \varphi^T(s) \otimes I_m],   (32)
\varphi(s) = [u^T(s-1), u^T(s-2), \ldots, u^T(s-n)]^T,   (33)
\hat{\psi}(s) = [\hat{\psi}_s(s), \hat{\psi}_n(s)],   (34)
\hat{\psi}_s(s) = [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)],   (35)
\hat{\psi}_n(s) = [-\hat{w}(s-1), -\hat{w}(s-2), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \hat{v}(s-2), \ldots, \hat{v}(s-n_d)],   (36)
\hat{x}(s) = \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (37)
\hat{w}(s) = y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s),   (38)
\hat{v}(s) = y(s) - \hat{\Psi}(s)\,\hat{\vartheta}(s).   (39)
The steps of the RGELS algorithm in (29)–(39) are listed as follows.
  • For $s \leq 0$, all variables are set to zero. Set the data length L. Let $s = 1$, and set the initial values $\hat{\vartheta}(0) = \mathbf{1}_{n_2}/p_0$, $\hat{x}(0) = \mathbf{1}_m/p_0$, $\hat{w}(0) = \mathbf{1}_m/p_0$, $\hat{v}(0) = \mathbf{1}_m/p_0$, $P(0) = p_0 I_{n_2}$, $p_0 = 10^6$.
  • Collect the input-output data u ( s ) and y ( s ) , and construct φ ( s ) using (33).
  • Form ψ ^ s ( s ) , ψ ^ n ( s ) and ψ ^ ( s ) using (35)–(36) and (34), and form Ψ ^ ( s ) using (32).
  • Calculate the covariance matrix P ( s ) and the gain matrix L ( s ) using (31) and (30), and update the estimate ϑ ^ ( s ) using (29).
  • Compute the estimates $\hat{x}(s)$, $\hat{w}(s)$ and $\hat{v}(s)$ using (37)–(39).
  • Compare s with L: if $s < L$, increase s by 1 and go to Step 2; otherwise, obtain the parameter estimate $\hat{\vartheta}(L)$ of ϑ and terminate the procedure.
The flowchart of computing $\hat{\vartheta}(L)$ in the RGELS algorithm is shown in Figure 1. The RGELS algorithm is a basic algorithm in system identification and can be extended to study the parameter estimation problems of other systems, for example in signal modeling and communication networks [74,75,76,77,78,79].
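The core recursion (29)–(31) can be sketched in a few lines of NumPy. The following is an illustrative sketch rather than the authors' implementation: the function name rgels_step is ours, and building $\hat{\Psi}(s)$ from the auxiliary-model estimates via (32)–(39) is left to the caller.

```python
import numpy as np

def rgels_step(vartheta_hat, P, Psi_hat, y):
    """One recursion of (29)-(31): returns updated (vartheta_hat, P, innovation)."""
    m = y.shape[0]
    S = np.eye(m) + Psi_hat @ P @ Psi_hat.T           # m x m term inside the inverse of (30)
    L = P @ Psi_hat.T @ np.linalg.inv(S)               # gain matrix, (30)
    e = y - Psi_hat @ vartheta_hat                      # innovation
    vartheta_hat = vartheta_hat + L @ e                 # parameter update, (29)
    P = P - L @ (P @ Psi_hat.T).T                       # covariance update, (31)
    return vartheta_hat, P, e

# Initialisation as in Step 1: vartheta_hat(0) = 1/p0, P(0) = p0 * I.
# n2 = n0 + m*r*n would come from the model orders; here it is just a placeholder size.
n2, m, p0 = 9, 2, 1e6
vartheta_hat = np.ones((n2, 1)) / p0
P = p0 * np.eye(n2)
# At each time s one would form Psi_hat from x_hat, w_hat, v_hat (eqs. (32)-(36)),
# call rgels_step, and then refresh x_hat, w_hat, v_hat via (37)-(39).
```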

4. The PC-RGELS Algorithm

In this part, a partially-coupled recursive generalized extended least squares (PC-RGELS) identification algorithm is studied to cut down the redundant estimates and improve the computational efficiency of the RGELS algorithm based on the decomposition technique and the coupling identification concept.
The identification model in (12) of system (2) is rewritten as follows:
y(s) = \psi(s)\,\beta + \theta^T \varphi(s) + v(s).   (40)
Referring to the decomposition methods in [51,52,71], let $\psi_i^T(s)$, with $\psi_i(s) \in \mathbb{R}^{n_0}$, be the ith row of the information matrix ψ(s), that is
\psi(s) := [\psi_1(s), \psi_2(s), \ldots, \psi_m(s)]^T \in \mathbb{R}^{m \times n_0}.
Similarly, let θ i R r n be the ith column of the parameter matrix θ :
\theta := [\theta_1, \theta_2, \ldots, \theta_m] \in \mathbb{R}^{(rn) \times m}.
Then Equation (40) can be decomposed into m subsystem identification models:
y_i(s) = \psi_i^T(s)\,\beta + \theta_i^T \varphi(s) + v_i(s) = \psi_i^T(s)\,\beta + \varphi^T(s)\,\theta_i + v_i(s), \quad i = 1, 2, \ldots, m.   (41)
According to the identification model in (41), define a criterion function,
J_2(\beta, \theta_i) := \sum_{j=1}^{s} [y_i(j) - \psi_i^T(j)\,\beta - \varphi^T(j)\,\theta_i]^2.   (42)
Let β ^ ( s ) and θ ^ i ( s ) be the estimates of β and θ i at time s. Minimizing J 2 ( β , θ i ) gives
\left. \frac{\partial J_2(\beta, \theta_i)}{\partial \beta} \right|_{\beta = \hat{\beta}(s)} = \left. \frac{\partial\, \|Y_{i,s} - H_{i,s}\beta - Z_{i,s}\theta_i\|^2}{\partial \beta} \right|_{\beta = \hat{\beta}(s)} = 0,   (43)
\left. \frac{\partial J_2(\beta, \theta_i)}{\partial \theta_i} \right|_{\theta_i = \hat{\theta}_i(s)} = \left. \frac{\partial\, \|Y_{i,s} - H_{i,s}\beta - Z_{i,s}\theta_i\|^2}{\partial \theta_i} \right|_{\theta_i = \hat{\theta}_i(s)} = 0,   (44)
where
Y_{i,s} := [y_i(1), y_i(2), y_i(3), \ldots, y_i(s)]^T \in \mathbb{R}^s, \quad H_{i,s} := [\psi_i(1), \psi_i(2), \psi_i(3), \ldots, \psi_i(s)]^T \in \mathbb{R}^{s \times n_0}, \quad Z_{i,s} := [\varphi(1), \varphi(2), \varphi(3), \ldots, \varphi(s)]^T \in \mathbb{R}^{s \times (rn)}.
From (43) and (44), we can get the least squares estimates β ^ ( s ) and θ ^ i ( s ) of β and θ i :
\hat{\beta}(s) = (H_{i,s}^T H_{i,s})^{-1} (H_{i,s}^T Y_{i,s} - H_{i,s}^T Z_{i,s}\,\theta_i)   (45)
= \left[\sum_{j=1}^{s} \psi_i(j)\,\psi_i^T(j)\right]^{-1} \left[\sum_{j=1}^{s} \psi_i(j)\, y_i(j) - \sum_{j=1}^{s} \psi_i(j)\,\varphi^T(j)\,\theta_i\right],   (46)
\hat{\theta}_i(s) = (Z_{i,s}^T Z_{i,s})^{-1} (Z_{i,s}^T Y_{i,s} - Z_{i,s}^T H_{i,s}\,\beta)   (47)
= \left[\sum_{j=1}^{s} \varphi(j)\,\varphi^T(j)\right]^{-1} \left[\sum_{j=1}^{s} \varphi(j)\, y_i(j) - \sum_{j=1}^{s} \varphi(j)\,\psi_i^T(j)\,\beta\right].   (48)
Define the covariance matrixes P β , i ( s ) and P θ , i ( s ) , and the gain matrixes L β , i ( s ) and L θ , i ( s ) as
P_{\beta,i}^{-1}(s) := \sum_{j=1}^{s} \psi_i(j)\,\psi_i^T(j) \in \mathbb{R}^{n_0 \times n_0}   (49)
= P_{\beta,i}^{-1}(s-1) + \psi_i(s)\,\psi_i^T(s),   (50)
P_{\theta,i}^{-1}(s) := \sum_{j=1}^{s} \varphi(j)\,\varphi^T(j) \in \mathbb{R}^{(rn) \times (rn)}   (51)
= P_{\theta,i}^{-1}(s-1) + \varphi(s)\,\varphi^T(s),   (52)
L_{\beta,i}(s) := P_{\beta,i}(s)\,\psi_i(s) \in \mathbb{R}^{n_0},   (53)
L_{\theta,i}(s) := P_{\theta,i}(s)\,\varphi(s) \in \mathbb{R}^{rn}.   (54)
Based on the derivation of the RLS algorithm in [72,73], we can summarize the RLS estimates $\hat{\beta}(s)$ and $\hat{\theta}_i(s)$ of β and $\theta_i$:
\hat{\beta}(s) = \hat{\beta}(s-1) + L_{\beta,i}(s)\,[y_i(s) - \psi_i^T(s)\,\hat{\beta}(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)],   (55)
L_{\beta,i}(s) = \frac{P_{\beta,i}(s-1)\,\psi_i(s)}{1 + \psi_i^T(s)\,P_{\beta,i}(s-1)\,\psi_i(s)},   (56)
P_{\beta,i}(s) = P_{\beta,i}(s-1) - L_{\beta,i}(s)\,[P_{\beta,i}(s-1)\,\psi_i(s)]^T,   (57)
\hat{\theta}_i(s) = \hat{\theta}_i(s-1) + L_{\theta,i}(s)\,[y_i(s) - \psi_i^T(s)\,\hat{\beta}(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)],   (58)
L_{\theta,i}(s) = \frac{P_{\theta,i}(s-1)\,\varphi(s)}{1 + \varphi^T(s)\,P_{\theta,i}(s-1)\,\varphi(s)},   (59)
P_{\theta,i}(s) = P_{\theta,i}(s-1) - L_{\theta,i}(s)\,[P_{\theta,i}(s-1)\,\varphi(s)]^T.   (60)
However, Equations (55)–(60) cannot be implemented to compute the estimates $\hat{\beta}(s)$ and $\hat{\theta}_i(s)$, because the subsystem information vector $\psi_i(s)$ includes the unknown vectors $x(s-i)$, $w(s-i)$ and $v(s-i)$. In order to deal with this problem, we replace these unknown vectors in $\psi_i(s)$ with their corresponding estimates $\hat{x}(s-i)$, $\hat{w}(s-i)$ and $\hat{v}(s-i)$ by making use of the auxiliary model identification idea. Then the estimates of ψ(s), $\psi_s(s)$ and $\psi_n(s)$ can be constructed by
\hat{\psi}(s) := [\hat{\psi}_s(s), \hat{\psi}_n(s)]   (61)
= [\hat{\psi}_1(s), \hat{\psi}_2(s), \ldots, \hat{\psi}_m(s)]^T \in \mathbb{R}^{m \times n_0},   (62)
\hat{\psi}_s(s) := [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)] \in \mathbb{R}^{m \times n},   (63)
\hat{\psi}_n(s) := [-\hat{w}(s-1), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \ldots, \hat{v}(s-n_d)] \in \mathbb{R}^{m \times n_1}.   (64)
Based on (4), (6) and (41), the estimates x ^ ( s ) , w ^ ( s ) and v ^ i ( s ) at time s can be calculated through three auxiliary models:
\hat{x}(s) := \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (65)
\hat{w}(s) := y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s),   (66)
\hat{v}_i(s) := y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}(s) - \varphi^T(s)\,\hat{\theta}_i(s).   (67)
For clarity, let $\hat{\beta}_i(s)$ represent the estimate of β in Subsystem i at time s. Combining (61)–(67), replacing $\psi_i(s)$ in (55)–(60) with its estimate $\hat{\psi}_i(s)$, and replacing the common parameter estimate $\hat{\beta}(s)$ in (55)–(60) with $\hat{\beta}_i(s)$, gives the subsystem recursive generalized extended least squares (S-RGELS) algorithm:
\hat{\beta}_i(s) = \hat{\beta}_i(s-1) + L_{\beta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_i(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)], \quad i = 1, 2, \ldots, m,   (68)
L_{\beta,i}(s) = P_{\beta,i}(s-1)\,\hat{\psi}_i(s)\,[1 + \hat{\psi}_i^T(s)\,P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^{-1},   (69)
P_{\beta,i}(s) = P_{\beta,i}(s-1) - L_{\beta,i}(s)\,[P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^T,   (70)
\hat{\theta}_i(s) = \hat{\theta}_i(s-1) + L_{\theta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_i(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)],   (71)
L_{\theta,i}(s) = P_{\theta,i}(s-1)\,\varphi(s)\,[1 + \varphi^T(s)\,P_{\theta,i}(s-1)\,\varphi(s)]^{-1},   (72)
P_{\theta,i}(s) = P_{\theta,i}(s-1) - L_{\theta,i}(s)\,[P_{\theta,i}(s-1)\,\varphi(s)]^T,   (73)
\varphi(s) = [u^T(s-1), u^T(s-2), \ldots, u^T(s-n)]^T,   (74)
\hat{\psi}(s) = [\hat{\psi}_s(s), \hat{\psi}_n(s)]   (75)
= [\hat{\psi}_1(s), \hat{\psi}_2(s), \ldots, \hat{\psi}_m(s)]^T,   (76)
\hat{\psi}_s(s) = [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)],   (77)
\hat{\psi}_n(s) = [-\hat{w}(s-1), -\hat{w}(s-2), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \hat{v}(s-2), \ldots, \hat{v}(s-n_d)],   (78)
\hat{x}(s) = \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (79)
\hat{w}(s) = y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s),   (80)
\hat{v}_i(s) = y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_i(s) - \varphi^T(s)\,\hat{\theta}_i(s),   (81)
\hat{v}(s) = [\hat{v}_1(s), \hat{v}_2(s), \ldots, \hat{v}_m(s)]^T,   (82)
\hat{\theta}(s) = [\hat{\theta}_1(s), \hat{\theta}_2(s), \ldots, \hat{\theta}_m(s)].   (83)
The S-RGELS algorithm in (68)–(83) produces m estimates $\hat{\beta}_i(s)$ of the same parameter vector β, which leads to a great deal of redundant computation, whereas only one estimate of β is needed. In order to cut down the redundant parameter estimates and improve the parameter estimation accuracy, the first way is to take their average as the estimate of β:
\hat{\beta}(s) = \frac{\hat{\beta}_1(s) + \hat{\beta}_2(s) + \cdots + \hat{\beta}_m(s)}{m} \in \mathbb{R}^{n_0}.   (84)
Replacing β ^ i ( s 1 ) in (68)–(83) with β ^ ( s 1 ) gives the partially-coupled subsystem recursive generalized extended least squares (PC-S-RGELS) algorithm:
\hat{\beta}_i(s) = \hat{\beta}(s-1) + L_{\beta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)], \quad i = 1, 2, \ldots, m,   (85)
L_{\beta,i}(s) = P_{\beta,i}(s-1)\,\hat{\psi}_i(s)\,[1 + \hat{\psi}_i^T(s)\,P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^{-1},   (86)
P_{\beta,i}(s) = P_{\beta,i}(s-1) - L_{\beta,i}(s)\,[P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^T,   (87)
\hat{\theta}_i(s) = \hat{\theta}_i(s-1) + L_{\theta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}(s-1) - \varphi^T(s)\,\hat{\theta}_i(s-1)],   (88)
L_{\theta,i}(s) = P_{\theta,i}(s-1)\,\varphi(s)\,[1 + \varphi^T(s)\,P_{\theta,i}(s-1)\,\varphi(s)]^{-1},   (89)
P_{\theta,i}(s) = P_{\theta,i}(s-1) - L_{\theta,i}(s)\,[P_{\theta,i}(s-1)\,\varphi(s)]^T,   (90)
\varphi(s) = [u^T(s-1), u^T(s-2), \ldots, u^T(s-n)]^T,   (91)
\hat{\psi}(s) = [\hat{\psi}_s(s), \hat{\psi}_n(s)]   (92)
= [\hat{\psi}_1(s), \hat{\psi}_2(s), \ldots, \hat{\psi}_m(s)]^T,   (93)
\hat{\psi}_s(s) = [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)],   (94)
\hat{\psi}_n(s) = [-\hat{w}(s-1), -\hat{w}(s-2), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \hat{v}(s-2), \ldots, \hat{v}(s-n_d)],   (95)
\hat{x}(s) = \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (96)
\hat{w}(s) = y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s),   (97)
\hat{v}(s) = y(s) - \hat{\psi}(s)\,\hat{\beta}(s) - \hat{\theta}^T(s)\,\varphi(s),   (98)
\hat{\beta}(s) = \frac{\hat{\beta}_1(s) + \hat{\beta}_2(s) + \cdots + \hat{\beta}_m(s)}{m},   (99)
\hat{\theta}(s) = [\hat{\theta}_1(s), \hat{\theta}_2(s), \ldots, \hat{\theta}_m(s)].   (100)
Generally, for recursive algorithms, the parameter estimates are considered to approach their true values as the time s increases. Therefore, the estimate $\hat{\beta}_{i-1}(s)$ is closer to the true parameter vector than the estimate $\hat{\beta}_i(s-1)$. Combining (91)–(100), replacing $\hat{\beta}_i(s-1)$ in (68) and (71) for $i = 1$ with $\hat{\beta}_m(s-1)$, and replacing $\hat{\beta}_i(s-1)$ in (68) and (71) for $i = 2, 3, \ldots, m$ with $\hat{\beta}_{i-1}(s)$, gives the partially-coupled recursive generalized extended least squares (PC-RGELS) algorithm:
\hat{\beta}_1(s) = \hat{\beta}_m(s-1) + L_{\beta,1}(s)\,[y_1(s) - \hat{\psi}_1^T(s)\,\hat{\beta}_m(s-1) - \varphi^T(s)\,\hat{\theta}_1(s-1)],   (101)
L_{\beta,1}(s) = P_{\beta,1}(s-1)\,\hat{\psi}_1(s)\,[1 + \hat{\psi}_1^T(s)\,P_{\beta,1}(s-1)\,\hat{\psi}_1(s)]^{-1},   (102)
P_{\beta,1}(s) = P_{\beta,1}(s-1) - L_{\beta,1}(s)\,[P_{\beta,1}(s-1)\,\hat{\psi}_1(s)]^T,   (103)
\hat{\theta}_1(s) = \hat{\theta}_1(s-1) + L_{\theta,1}(s)\,[y_1(s) - \hat{\psi}_1^T(s)\,\hat{\beta}_m(s-1) - \varphi^T(s)\,\hat{\theta}_1(s-1)],   (104)
L_{\theta,1}(s) = P_{\theta,1}(s-1)\,\varphi(s)\,[1 + \varphi^T(s)\,P_{\theta,1}(s-1)\,\varphi(s)]^{-1},   (105)
P_{\theta,1}(s) = P_{\theta,1}(s-1) - L_{\theta,1}(s)\,[P_{\theta,1}(s-1)\,\varphi(s)]^T,   (106)
\hat{\beta}_i(s) = \hat{\beta}_{i-1}(s) + L_{\beta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_{i-1}(s) - \varphi^T(s)\,\hat{\theta}_i(s-1)], \quad i = 2, 3, \ldots, m,   (107)
L_{\beta,i}(s) = P_{\beta,i}(s-1)\,\hat{\psi}_i(s)\,[1 + \hat{\psi}_i^T(s)\,P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^{-1},   (108)
P_{\beta,i}(s) = P_{\beta,i}(s-1) - L_{\beta,i}(s)\,[P_{\beta,i}(s-1)\,\hat{\psi}_i(s)]^T,   (109)
\hat{\theta}_i(s) = \hat{\theta}_i(s-1) + L_{\theta,i}(s)\,[y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_{i-1}(s) - \varphi^T(s)\,\hat{\theta}_i(s-1)],   (110)
L_{\theta,i}(s) = P_{\theta,i}(s-1)\,\varphi(s)\,[1 + \varphi^T(s)\,P_{\theta,i}(s-1)\,\varphi(s)]^{-1},   (111)
P_{\theta,i}(s) = P_{\theta,i}(s-1) - L_{\theta,i}(s)\,[P_{\theta,i}(s-1)\,\varphi(s)]^T,   (112)
\varphi(s) = [u^T(s-1), u^T(s-2), \ldots, u^T(s-n)]^T,   (113)
\hat{\psi}(s) = [\hat{\psi}_s(s), \hat{\psi}_n(s)]   (114)
= [\hat{\psi}_1(s), \hat{\psi}_2(s), \ldots, \hat{\psi}_m(s)]^T,   (115)
\hat{\psi}_s(s) = [-\hat{x}(s-1), -\hat{x}(s-2), \ldots, -\hat{x}(s-n)],   (116)
\hat{\psi}_n(s) = [-\hat{w}(s-1), -\hat{w}(s-2), \ldots, -\hat{w}(s-n_c), \hat{v}(s-1), \hat{v}(s-2), \ldots, \hat{v}(s-n_d)],   (117)
\hat{x}(s) = \hat{\psi}_s(s)\,\hat{\alpha}(s) + \hat{\theta}^T(s)\,\varphi(s),   (118)
\hat{w}(s) = y(s) - \hat{\psi}_s(s)\,\hat{\alpha}(s) - \hat{\theta}^T(s)\,\varphi(s),   (119)
\hat{v}_i(s) = y_i(s) - \hat{\psi}_i^T(s)\,\hat{\beta}_m(s) - \varphi^T(s)\,\hat{\theta}_i(s),   (120)
\hat{v}(s) = [\hat{v}_1(s), \hat{v}_2(s), \ldots, \hat{v}_m(s)]^T,   (121)
\hat{\theta}(s) = [\hat{\theta}_1(s), \hat{\theta}_2(s), \ldots, \hat{\theta}_m(s)].   (122)
The procedures for achieving the PC-RGELS algorithm in (101)–(122) are as follows.
  • For $s \leq 0$, all variables are set to zero. Set the data length L. Let $s = 1$, and set the initial values $\hat{\beta}_m(0) = \mathbf{1}_{n_0}/p_0$, $\hat{\theta}_i(0) = \mathbf{1}_{rn}/p_0$, $\hat{x}(0) = \mathbf{1}_m/p_0$, $\hat{w}(0) = \mathbf{1}_m/p_0$, $\hat{v}(0) = \mathbf{1}_m/p_0$, $P_{\beta,i}(0) = p_0 I_{n_0}$, $P_{\theta,i}(0) = p_0 I_{rn}$, $p_0 = 10^6$.
  • Collect the input-output data u ( s ) = [ u 1 ( s ) , u 2 ( s ) , , u r ( s ) ] T and y ( s ) = [ y 1 ( s ) , y 2 ( s ) , , y m ( s ) ] T , and construct φ ( s ) using (113).
  • Form ψ ^ s ( s ) and ψ ^ n ( s ) using (116)–(117) and construct ψ ^ ( s ) using (114), and read ψ ^ i ( s ) from ψ ^ ( s ) in (115), i = 1 , 2 , 3 , , m .
  • Compute P β , 1 ( s ) and P θ , 1 ( s ) using (103) and (106), and compute L β , 1 ( s ) and L θ , 1 ( s ) using (102) and (105), and update the estimates β ^ 1 ( s ) and θ ^ 1 ( s ) using (101) and (104).
  • For $i = 2, 3, \ldots, m$, calculate $P_{\beta,i}(s)$ and $P_{\theta,i}(s)$ using (109) and (112), compute $L_{\beta,i}(s)$ and $L_{\theta,i}(s)$ using (108) and (111), and update the estimates $\hat{\beta}_i(s)$ and $\hat{\theta}_i(s)$ using (107) and (110).
  • Construct $\hat{\theta}(s)$ by (122), and calculate the estimates $\hat{x}(s)$, $\hat{w}(s)$ and $\hat{v}(s)$ using (118)–(121).
  • Compare s with L: if $s < L$, increase s by 1 and go to Step 2; otherwise, obtain the estimates $\hat{\beta}(L)$ and $\hat{\theta}(L)$ and terminate the procedure.
The flowchart of computing β ^ ( L ) and θ ^ ( L ) in the PC-RGELS algorithm is shown in Figure 2.
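The coupling in (101)–(112) can be organized as a single sweep over the m subsystems at each time instant. The following NumPy sketch is our illustration only: the function and variable names are not from the paper, and forming $\hat{\psi}_i(s)$ and $\varphi(s)$ from the auxiliary-model estimates (113)–(122) is assumed to be done elsewhere. It shows how $\hat{\beta}_1(s)$ starts from $\hat{\beta}_m(s-1)$ and each later subsystem starts from the freshly updated $\hat{\beta}_{i-1}(s)$.

```python
import numpy as np

def pc_rgels_step(beta_hats, theta_hats, P_beta, P_theta, psi_hat_rows, phi, y):
    """One time step of the PC-RGELS coupling over the m subsystems.

    beta_hats[i], theta_hats[i]: column-vector estimates from the previous step;
    psi_hat_rows[i]: column vector psi_hat_i(s); phi: phi(s); y: output vector (1-D array).
    """
    m = len(beta_hats)
    beta_prev = beta_hats[m - 1]                 # subsystem 1 starts from beta_m(s-1)
    for i in range(m):
        psi_i = psi_hat_rows[i]
        # innovation shared by the beta and theta updates of subsystem i
        e = y[i] - psi_i.T @ beta_prev - phi.T @ theta_hats[i]
        # beta update, (101)/(107)
        Lb = P_beta[i] @ psi_i / (1.0 + psi_i.T @ P_beta[i] @ psi_i)
        beta_hats[i] = beta_prev + Lb * e
        P_beta[i] = P_beta[i] - Lb @ (P_beta[i] @ psi_i).T
        # theta_i update, (104)/(110)
        Lt = P_theta[i] @ phi / (1.0 + phi.T @ P_theta[i] @ phi)
        theta_hats[i] = theta_hats[i] + Lt * e
        P_theta[i] = P_theta[i] - Lt @ (P_theta[i] @ phi).T
        beta_prev = beta_hats[i]                 # pass the fresh estimate to subsystem i+1
    return beta_hats, theta_hats, P_beta, P_theta
```

In this arrangement only subsystem-sized gains are formed, so no inverse of a full covariance matrix is required, which is the source of the computational saving discussed in Section 6.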

5. Example

The numerical simulation is based on the following M-OEARMA-like system:
y(s) = \frac{Q(z)}{\alpha(z)} u(s) + \frac{D(z)}{C(z)} v(s),
\alpha(z) := 1 + a_1 z^{-1} = 1 + 0.26\, z^{-1},
Q(z) := Q_1 z^{-1} = \begin{bmatrix} a_2 & a_3 \\ a_4 & a_5 \end{bmatrix} z^{-1} = \begin{bmatrix} -0.57 & 0.93 \\ 0.87 & 0.65 \end{bmatrix} z^{-1},
C(z) := 1 + c_1 z^{-1} = 1 - 0.56\, z^{-1},
D(z) := 1 + d_1 z^{-1} = 1 + 0.32\, z^{-1},
the parameter vectors of the system are
\alpha := [a_1, a_2, a_3, a_4, a_5]^T = [0.26, -0.57, 0.93, 0.87, 0.65]^T,
\beta := [c_1, d_1]^T = [-0.56, 0.32]^T,
\vartheta := \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = [a_1, a_2, a_3, a_4, a_5, c_1, d_1]^T = [0.26, -0.57, 0.93, 0.87, 0.65, -0.56, 0.32]^T.
Here, the inputs $\{u_1(s)\}$ and $\{u_2(s)\}$ are taken as two independent persistent excitation signal sequences with zero mean and unit variance, and $\{v_1(s)\}$ and $\{v_2(s)\}$ are taken as two white noise sequences with zero mean and variances $\sigma_1^2$ for $v_1(s)$ and $\sigma_2^2$ for $v_2(s)$. Set $\sigma_1^2 = \sigma_2^2 = 0.30^2$. These sequences are used to generate the output vector $y(s) = [y_1(s), y_2(s)]^T$. Set the data length $L = 3000$. The RGELS, PC-S-RGELS and PC-RGELS algorithms are applied to identify the parameters of the given system. The parameter estimates and their estimation errors are shown in Table 1, Table 2 and Table 3. The parameter estimation error $\delta := \|\hat{\vartheta}(s) - \vartheta\|/\|\vartheta\|$ versus s is shown in Figure 3.
Taking the same simulation conditions but with $\sigma_1^2 = \sigma_2^2 = 0.20^2$ and $\sigma_1^2 = \sigma_2^2 = 0.60^2$, and using the PC-RGELS algorithm to identify the parameters of the given system, the parameter estimates and their estimation errors are shown in Table 4 and Table 5, and the estimation errors δ versus s are shown in Figure 4.
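For reference, the simulation data of this example can be generated as in the following sketch (our code, using the parameter values stated above; the variable names are ours and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
L, m, sigma = 3000, 2, 0.30
a1 = 0.26
Q1 = np.array([[-0.57, 0.93], [0.87, 0.65]])
c1, d1 = -0.56, 0.32

u = rng.standard_normal((L + 1, 2))          # persistently exciting inputs, unit variance
v = sigma * rng.standard_normal((L + 1, m))  # white noise, variance 0.30^2
x = np.zeros((L + 1, m)); w = np.zeros((L + 1, m)); y = np.zeros((L + 1, m))

for s in range(1, L + 1):
    # x(s) = -a1 x(s-1) + Q1 u(s-1), from alpha(z) x(s) = Q(z) u(s)
    x[s] = -a1 * x[s - 1] + Q1 @ u[s - 1]
    # w(s) = -c1 w(s-1) + v(s) + d1 v(s-1), from C(z) w(s) = D(z) v(s)
    w[s] = -c1 * w[s - 1] + v[s] + d1 * v[s - 1]
    y[s] = x[s] + w[s]

# Relative estimation error delta = ||vartheta_hat - vartheta|| / ||vartheta||
vartheta = np.array([0.26, -0.57, 0.93, 0.87, 0.65, -0.56, 0.32])
def delta(vartheta_hat):
    return np.linalg.norm(vartheta_hat - vartheta) / np.linalg.norm(vartheta)
```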
From Table 1, Table 2, Table 3, Table 4 and Table 5 and Figure 3 and Figure 4, we can draw the following conclusions.
  • The parameter estimation errors given by the RGELS, PC-S-RGELS and PC-RGELS algorithms become smaller as s increases. Thus, the proposed algorithms are effective for multivariable OEARMA-like systems.
  • Under the same simulation conditions, the PC-S-RGELS and PC-RGELS algorithms can give more accurate parameter estimates compared with the RGELS algorithm.
  • A lower noise level leads to a higher parameter estimation accuracy by the PC-RGELS algorithm under the same data length.

6. Conclusions

In this paper, we have dealt with the parameter identification problems of M-OEARMA-like systems. A partially-coupled recursive generalized extended least squares algorithm is presented based on the hierarchical identification principle and the coupling identification concept. We have also compared the RGELS algorithm with the derived algorithms and found that the PC-S-RGELS and PC-RGELS algorithms have higher computational efficiency than the RGELS algorithm, because the proposed algorithms avoid computing the inverse of a large covariance matrix. The analysis and the numerical example indicate that the PC-RGELS algorithm can give more accurate parameter estimates than the RGELS algorithm. Furthermore, the proposed methods can be extended to other fields by means of some other mathematical tools and approaches [80,81,82,83,84,85,86] to model industrial processes [87,88,89,90,91,92,93,94,95,96,97,98].

Author Contributions

Conceptualization and methodology, J.P., H.M. and F.D.; software, H.M. and L.L.; validation and analysis, G.X., A.A. and T.H. Finally, all the authors have read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (no. 61603127) and the 111 Project (B12018).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Na, J.; Chen, A.S.; Herrmann, G.; Burke, R.; Brace, C. Vehicle engine torque estimation via unknown input observer and adaptive parameter estimation. IEEE Trans. Veh. Technol. 2018, 67, 409–422. [Google Scholar] [CrossRef]
  2. Pan, J.; Li, W.; Zhang, H.P. Control algorithms of magnetic suspension systems based on the improved double exponential reaching law of sliding mode control. Int. J. Control Autom. Syst. 2018, 16, 2878–2887. [Google Scholar] [CrossRef]
  3. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. Highly computationally efficient state filter based on the delta operator. Int. J. Adapt. Control Signal Process. 2019, 33, 875–889. [Google Scholar] [CrossRef]
  4. Xu, H.; Ding, F.; Yang, E.F. Modeling a nonlinear process using the exponential autoregressive time series model. Nonlinear Dyn. 2019, 95, 2079–2092. [Google Scholar] [CrossRef]
  5. Xu, L.; Chen, L.; Xiong, W.L. Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dyn. 2015, 79, 2155–2163. [Google Scholar] [CrossRef]
  6. Xu, L. The parameter estimation algorithms based on the dynamical response measurement data. Adv. Mech. Eng. 2017, 9. [Google Scholar] [CrossRef]
  7. Xu, L.; Ding, F.; Gu, Y.; Alsaedi, A.; Hayat, T. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process. 2017, 140, 97–103. [Google Scholar] [CrossRef]
  8. Xu, L.; Ding, F. Iterative parameter estimation for signal models based on measured data. Circuits Syst. Signal Process. 2018, 37, 3046–3069. [Google Scholar] [CrossRef]
  9. Zhang, W.; Lin, X.; Chen, B.S. LaSalle-type theorem and its applications to infinite horizon optimal control of discrete-time nonlinear stochastic systems. IEEE Trans. Autom. Control 2017, 62, 250–261. [Google Scholar] [CrossRef]
  10. Li, N.; Guo, S.; Wang, Y. Weighted preliminary-summation-based principal component analysis for non-Gaussian processes. Control Eng. Pract. 2019, 87, 122–132. [Google Scholar] [CrossRef]
  11. Wang, Y.; Si, Y.; Huang, B.; Lou, Z. Survey on the theoretical research and engineering applications of multivariate statistics process monitoring algorithms: 2008–2017. Can. J. Chem. Eng. 2018, 96, 2073–2085. [Google Scholar] [CrossRef]
  12. Wang, Y.Q.; Zhang, H.; Wei, S.L.; Zhou, D.G.; Huang, B. Control performance assessment for ILC-controlled batch processes in a 2-D system framework. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1493–1504. [Google Scholar] [CrossRef]
  13. Tian, X.P.; Niu, H.M. A bi-objective model with sequential search algorithm for optimizing network-wide train timetables. Comput. Ind. Eng. 2019, 127, 1259–1272. [Google Scholar] [CrossRef]
  14. Wang, Y.J.; Ding, F.; Xu, L. Some new results of designing an IIR filter with colored noise for signal processing. Digit. Signal Process. 2018, 72, 44–58. [Google Scholar] [CrossRef]
  15. Wong, W.C.; Chee, E.; Li, J.L.; Wang, X.N. Recurrent neural network-based model predictive control for continuous pharmaceutical manufacturing. Mathematics 2018, 6, 242. [Google Scholar] [CrossRef]
  16. Ding, F.; Xu, L.; Alsaadi, F.E.; Hayat, T. Iterative parameter identification for pseudo-linear systems with ARMA noise using the filtering technique. IET Control Theory Appl. 2018, 12, 892–899. [Google Scholar] [CrossRef]
  17. Chen, G.Y.; Gan, M.; Chen, C.L.P.; Li, H.X. A regularized variable projection algorithm for separable nonlinear least squares problems. IEEE Trans. Autom. Control 2019, 64, 526–537. [Google Scholar] [CrossRef]
  18. Li, X.Y.; Li, H.X.; Wu, B.Y. Piecewise reproducing kernel method for linear impulsive delay differential equations with piecewise constant arguments. Appl. Math. Comput. 2019, 349, 304–313. [Google Scholar] [CrossRef]
  19. Xu, L.; Xiong, W.; Alsaedi, A.; Hayat, T. Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 2018, 16, 1756–1764. [Google Scholar] [CrossRef]
  20. Zhang, X.; Xu, L.; Ding, F.; Hayat, T. Combined state and parameter estimation for a bilinear state space system with moving average noise. J. Frankl. Inst. 2018, 355, 3079–3103. [Google Scholar] [CrossRef]
  21. Li, M.H.; Liu, X.M.; Ding, F. Filtering-based maximum likelihood gradient iterative estimation algorithm for bilinear systems with autoregressive moving average noise. Circuits Syst. Signal Process. 2018, 37, 5023–5048. [Google Scholar] [CrossRef]
  22. Xu, L. The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process. 2016, 120, 660–667. [Google Scholar] [CrossRef]
  23. Xu, L. A proportional differential control method for a time-delay system using the Taylor expansion approximation. Appl. Math. Comput. 2014, 236, 391–399. [Google Scholar] [CrossRef]
  24. Ding, F.; Chen, H.B.; Xu, L.; Dai, J.Y.; Li, Q.S.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
  25. Ding, J.; Chen, J.Z.; Lin, J.X.; Wan, L.J. Particle filtering based parameter estimation for systems with output-error type model structures. J. Frankl. Inst. 2019, 356, 5521–5540. [Google Scholar] [CrossRef]
  26. Liu, N.; Mei, S.; Sun, D.; Shi, W.; Feng, J.; Zhou, Y.M.; Mei, F.; Xu, J.; Jiang, Y.; Cao, X.A. Effects of charge transport materials on blue fluorescent organic light-emitting diodes with a host-dopant system. Micromachines 2019, 10, 344. [Google Scholar] [CrossRef] [PubMed]
  27. Xu, L.; Ding, F. Parameter estimation for control systems based on impulse responses. Int. J. Control Autom. Syst. 2017, 15, 2471–2479. [Google Scholar] [CrossRef]
  28. Cao, Y.; Wang, Z.; Liu, F.; Li, P.; Xie, G. Bio-inspired speed curve optimization and sliding mode tracking control for subway trains. IEEE Trans. Veh. Technol. 2019. [Google Scholar] [CrossRef]
  29. Cao, Y.; Lu, H.; Wen, T. A safety computer system based on multi-sensor data processing. Sensors 2019, 19, 818. [Google Scholar] [CrossRef]
  30. Cao, Y.; Zhang, Y.; Wen, T.; Li, P. Research on dynamic nonlinear input prediction of fault diagnosis based on fractional differential operator equation in high-speed train control system. Chaos 2019, 29, 013130. [Google Scholar] [CrossRef]
  31. Cao, Y.; Li, P.; Zhang, Y. Parallel processing algorithm for railway signal fault diagnosis data based on cloud computing. Future Gener. Comput. Syst. 2018, 88, 279–283. [Google Scholar] [CrossRef]
  32. Wan, L.J.; Ding, F. Decomposition-based gradient iterative identification algorithms for multivariable systems using the multi-innovation theory. Circuits Syst. Signal Process. 2019, 38, 2971–2991. [Google Scholar] [CrossRef]
  33. Chen, J.; Zhu, Q.M.; Li, J.; Liu, Y.J. Biased compensation recursive least squares-based threshold algorithm for time-delay rational models via redundant rule. Nonlinear Dyn. 2018, 91, 797–807. [Google Scholar] [CrossRef]
  34. Yin, C.C.; Wen, Y.Z.; Zhao, Y.X. On the optimal dividend problem for a spectrally positive levy process. Astin Bull. 2014, 44, 635–651. [Google Scholar] [CrossRef]
  35. Yin, C.C.; Wen, Y.Z. Optimal dividend problem with a terminal value for spectrally positive Levy processes. Insur. Math. Econ. 2013, 53, 769–773. [Google Scholar] [CrossRef]
  36. Yin, C.C.; Zhao, J.S. Nonexponential asymptotics for the solutions of renewal equations, with applications. J. Appl. Probab. 2006, 43, 815–824. [Google Scholar] [CrossRef] [Green Version]
  37. Yin, C.C.; Wang, C.W. The perturbed compound Poisson risk process with investment and debit interest. Methodol. Comput. Appl. Probab. 2010, 12, 391–413. [Google Scholar] [CrossRef]
  38. Yin, C.C.; Wen, Y.Z. Exit problems for jump processes with applications to dividend problems. J. Comput. Appl. Math. 2013, 245, 30–52. [Google Scholar] [CrossRef]
  39. Wen, Y.Z.; Yin, C.C. Solution of Hamilton-Jacobi-Bellman equation in optimal reinsurance strategy under dynamic VaR constraint. J. Funct. Spaces 2019, 6750892. [Google Scholar] [CrossRef]
  40. Sha, X.Y.; Xu, Z.S.; Yin, C.C. Elliptical distribution-based weight-determining method for ordered weighted averaging operators. Int. J. Intell. Syst. 2019, 34, 858–877. [Google Scholar] [CrossRef]
  41. Pan, J.; Jiang, X.; Wan, X.K.; Ding, W. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197. [Google Scholar] [CrossRef]
  42. Ge, Z.W.; Ding, F.; Xu, L.; Alsaedi, A.; Hayat, T. Gradient-based iterative identification method for multivariate equation-error autoregressive moving average systems using the decomposition technique. J. Frankl. Inst. 2019, 356, 1658–1676. [Google Scholar] [CrossRef]
  43. Ding, F.; Xu, L.; Zhu, Q.M. Performance analysis of the generalised projection identification for time-varying systems. IET Control Theory Appl. 2016, 10, 2506–2514. [Google Scholar] [CrossRef]
  44. Xu, L.; Ding, F. Parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle. IET Signal Process. 2017, 11, 228–237. [Google Scholar] [CrossRef]
  45. Zhan, X.S.; Cheng, L.L.; Wu, J.; Yang, Q.S.; Han, T. Optimal modified performance of MIMO networked control systems with multi-parameter constraints. ISA Trans. 2019, 84, 111–117. [Google Scholar] [CrossRef] [PubMed]
  46. Xu, L.; Ding, F.; Zhu, Q.M. Hierarchical Newton and least squares iterative estimation algorithm for dynamic systems by transfer functions based on the impulse responses. Int. J. Syst. Sci. 2019, 50, 141–151. [Google Scholar] [CrossRef]
  47. Wang, F.F.; Ding, F. Partially coupled gradient based iterative identification methods for multivariable output-error moving average systems. Int. J. Model. Identif. Control 2016, 26, 293–302. [Google Scholar] [CrossRef]
  48. Ding, F.; Liu, X.G.; Chu, J. Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle. IET Control Theory Appl. 2013, 7, 176–184. [Google Scholar] [CrossRef]
  49. Wang, C.; Li, K.C.; Su, S. Hierarchical Newton iterative parameter estimation of a class of input nonlinear systems based on the key term separation principle. Complexity 2018, 2018, 7234147. [Google Scholar] [CrossRef]
  50. Ding, F.; Liu, G.; Liu, X.P. Partially coupled stochastic gradient identification methods for non-uniformly sampled systems. IEEE Trans. Autom. Control 2010, 55, 1976–1981. [Google Scholar] [CrossRef]
  51. Liu, Q.Y.; Ding, F.; Yang, E.F. Parameter estimation algorithm for multivariable controlled autoregressive autoregressive moving average systems. Digit. Signal Process. 2018, 83, 323–331. [Google Scholar] [CrossRef]
  52. Liu, Q.Y.; Ding, F.; Xu, L.; Yang, E.F. Partially coupled gradient estimation algorithm for multivariable equation-error autoregressive moving average systems using the data filtering technique. IET Control Theory Appl. 2019, 13, 642–650. [Google Scholar] [CrossRef]
  53. Ding, F. Coupled-least-squares identification for multivariable systems. IET Control Theory Appl. 2013, 7, 68–79. [Google Scholar] [CrossRef]
  54. Ding, F. Two-stage least squares based iterative estimation algorithm for CARARMA system modeling. Appl. Math. Model. 2013, 37, 4798–4808. [Google Scholar] [CrossRef]
  55. Xu, L. Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J. Comput. Appl. Math. 2015, 288, 33–43. [Google Scholar] [CrossRef]
  56. Ding, F.; Meng, D.D.; Dai, J.Y.; Li, Q.S.; Alsaedi, A.; Hayat, T. Least squares based iterative parameter estimation algorithm for stochastic dynamical systems with ARMA noise using the model equivalence. Int. J. Control Autom. Syst. 2018, 16, 630–639. [Google Scholar] [CrossRef]
  57. Wang, Y.J.; Ding, F. Iterative estimation for a non-linear IIR filter with moving average noise by means of the data filtering technique. IMA J. Math. Control Inf. 2017, 34, 745–764. [Google Scholar] [CrossRef]
  58. Li, M.H.; Liu, X.M. The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique. Signal Process. 2018, 147, 23–34. [Google Scholar] [CrossRef]
  59. Ding, F.; Pan, J.; Alsaedi, A.; Hayat, T. Gradient-based iterative parameter estimation algorithms for dynamical systems from observation data. Mathematics 2019, 7, 428. [Google Scholar] [CrossRef]
  60. Ding, F.; Wang, Y.J.; Dai, J.Y.; Li, Q.S.; Chen, Q.J. A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input-output data filtering. J. Frankl. Inst. 2017, 354, 6938–6955. [Google Scholar] [CrossRef]
  61. Zhang, X.; Ding, F.; Alsaadi, F.E.; Hayat, T. Recursive parameter identification of the dynamical models for bilinear state space systems. Nonlinear Dyn. 2017, 89, 2415–2429. [Google Scholar] [CrossRef]
  62. Xu, L.; Ding, F. Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling. Circuits Syst. Signal Process. 2017, 36, 1735–1753. [Google Scholar] [CrossRef]
  63. Zhang, X.; Ding, F.; Xu, L.; Alsaedi, A.; Hayat, T. A hierarchical approach for joint parameter and state estimation of a bilinear system with autoregressive noise. Mathematics 2019, 7, 356. [Google Scholar] [CrossRef]
  64. Liu, Q.Y.; Ding, F. Auxiliary model-based recursive generalized least squares algorithm for multivariate output-error autoregressive systems using the data filtering. Circuits Syst. Signal Process. 2019, 38, 590–610. [Google Scholar] [CrossRef]
  65. Ding, F. Decomposition based fast least squares algorithm for output error systems. Signal Process. 2013, 93, 1235–1242. [Google Scholar] [CrossRef]
  66. Ding, F.; Liu, X.P.; Liu, G. Gradient based and least-squares based iterative identification methods for OE and OEMA systems. Digital Signal Process. 2010, 20, 664–677. [Google Scholar] [CrossRef]
  67. Wang, Y.J.; Ding, F.; Wu, M.H. Recursive parameter estimation algorithm for multivariate output- error systems. J. Frankl. Inst. 2018, 355, 5163–5181. [Google Scholar] [CrossRef]
  68. Ding, J.L. The hierarchical iterative identification algorithm for multi-input-output-error systems with autoregressive noise. Complexity 2017, 2017, 5292894. [Google Scholar] [CrossRef]
  69. Wang, X.H.; Ding, F. Partially coupled extended stochastic gradient algorithm for nonlinear multivariable output error moving average systems. Eng. Comput. 2017, 34, 629–647. [Google Scholar] [CrossRef]
  70. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. State filtering-based least squares parameter estimation for bilinear systems using the hierarchical identification principle. IET Control Theory Appl. 2018, 12, 1704–1713. [Google Scholar] [CrossRef]
  71. Huang, W.; Ding, F.; Hayat, T.; Alsaedi, A. Coupled stochastic gradient identification algorithms for multivariate output-error systems using the auxiliary model. Int. J. Control Autom. Syst. 2017, 15, 1622–1631. [Google Scholar] [CrossRef]
  72. Wang, Y.J.; Ding, F. The filtering based iterative identification for multivariable systems. IET Control Theory Appl. 2016, 10, 894–902. [Google Scholar] [CrossRef]
  73. Meng, D.D. Recursive least squares and multi-innovation gradient estimation algorithms for bilinear stochastic systems. Circuits Syst. Signal Process 2016, 35, 1052–1065. [Google Scholar] [CrossRef]
  74. Wan, X.K.; Li, Y.; Xia, C.; Wu, M.H.; Liang, J.; Wang, N. A T-wave alternans assessment method based on least squares curve fitting technique. Measurement 2016, 86, 93–100. [Google Scholar] [CrossRef]
  75. Zhao, N. Joint Optimization of cooperative spectrum sensing and resource allocation in multi-channel cognitive radio sensor networks. Circuits Syst. Signal Process. 2016, 35, 2563–2583. [Google Scholar] [CrossRef]
  76. Zhao, X.L.; Liu, F.; Fu, B.; Na, F. Reliability analysis of hybrid multi-carrier energy systems based on entropy-based Markov model. Proc. Inst. Mech. Eng. Part O-J. Risk Reliab. 2016, 230, 561–569. [Google Scholar] [CrossRef]
  77. Zhao, N.; Liang, Y.; Pei, Y. Dynamic contract incentive mechanism for cooperative wireless networks. IEEE Trans. Veh. Technol. 2018, 67, 10970–10982. [Google Scholar] [CrossRef]
  78. Gong, P.C.; Wang, W.Q.; Li, F.C.; Cheung, H. Sparsity-aware transmit beamspace design for FDA-MIMO radar. Signal Process. 2018, 144, 99–103. [Google Scholar] [CrossRef]
  79. Zhao, X.L.; Lin, Z.Y.; Fu, B.; He, L.; Na, F. Research on automatic generation control with wind power participation based on predictive optimal 2-degree-of-freedom PID strategy for multi-area interconnected power system. Energies 2018, 11, 3325. [Google Scholar] [CrossRef]
  80. Liu, F.; Xue, Q.; Yabuta, K. Boundedness and continuity of maximal singular integrals and maximal functions on Triebel-Lizorkin spaces. Sci. China Math. 2019. [Google Scholar] [CrossRef]
  81. Liu, F. Boundedness and continuity of maximal operators associated to polynomial compound curves on Triebel-Lizorkin spaces. Math. Inequal. Appl. 2019, 22, 25–44. [Google Scholar] [CrossRef]
  82. Liu, F.; Fu, Z.; Jhang, S. Boundedness and continuity of Marcinkiewicz integrals associated to homogeneous mappings on Triebel-Lizorkin spaces. Front. Math. China 2019, 14, 95–122. [Google Scholar] [CrossRef]
  83. Wang, D.Q.; Yan, Y.R.; Liu, Y.J.; Ding, J.H. Model recovery for Hammerstein systems using the hierarchical orthogonal matching pursuit method. J. Comput. Appl. Math. 2019, 345, 135–145. [Google Scholar] [CrossRef]
  84. Zhang, S.; Wang, D.Q.; Liu, F. Separate block-based parameter estimation method for Hammerstein systems. R. Soc. Open Sci. 2018, 5, 172194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Wang, D.Q.; Zhang, Z.; Xue, B.Q. Decoupled parameter estimation methods for Hammerstein systems by using filtering technique. IEEE Access 2018, 6, 66612–66620. [Google Scholar] [CrossRef]
  86. Wang, D.Q.; Li, L.W.; Ji, Y.; Yan, Y.R. Model recovery for Hammerstein systems using the auxiliary model based orthogonal matching pursuit method. Appl. Math. Modell. 2018, 54, 537–550. [Google Scholar] [CrossRef]
  87. Feng, L.; Li, Q.X.; Li, Y.F. Imaging with 3-D aperture synthesis radiometers. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2395–2406. [Google Scholar] [CrossRef]
  88. Shi, W.X.; Liu, N.; Zhou, Y.M.; Cao, X.A. Effects of postannealing on the characteristics and reliability of polyfluorene organic light-emitting diodes. IEEE Trans. Electron Devices 2019, 66, 1057–1062. [Google Scholar] [CrossRef]
  89. Fu, B.; Ouyang, C.X.; Li, C.S.; Wang, J.W.; Gul, E. An improved mixed integer linear programming approach based on symmetry diminishing for unit commitment of hybrid power system. Energies 2019, 12, 833. [Google Scholar] [CrossRef]
  90. Wu, T.Z.; Shi, X.; Liao, L.; Zhou, C.J.; Zhou, H.; Su, Y.H. A capacity configuration control strategy to alleviate power fluctuation of hybrid energy storage system based on improved particle swarm optimization. Energies 2019, 12, 642. [Google Scholar] [CrossRef]
  91. Zhao, N.; Chen, Y.; Liu, R.; Wu, M.H.; Xiong, W. Monitoring strategy for relay incentive mechanism in cooperative communication networks. Comput. Electr. Eng. 2017, 60, 14–29. [Google Scholar] [CrossRef]
  92. Zhao, N.; Wu, M.H.; Chen, J.J. Android-based mobile educational platform for speech signal processing. Int. J. Electr. Eng. Educ. 2107, 54, 3–16. [Google Scholar] [CrossRef]
  93. Wan, X.K.; Wu, H.; Qiao, F.; Li, F.; Li, Y.; Wan, Y.; Wei, J. Electrocardiogram baseline wander suppression based on the combination of morphological and wavelet transformation based filtering. Comput. Math. Methods Med. 2019, 7196156. [Google Scholar] [CrossRef] [PubMed]
  94. Ma, F.Y.; Yin, Y.K.; Li, M. Start-up process modelling of sediment microbial fuel cells based on data driven. Math. Probl. Eng. 2019, 7403732. [Google Scholar] [CrossRef]
  95. Yang, F.; Zhang, P.; Li, X.X. The truncation method for the Cauchy problem of the inhomogeneous Helmholtz equation. Appl. Anal. 2019, 98, 991–1004. [Google Scholar] [CrossRef]
  96. Sun, Z.Y.; Zhang, D.; Meng, Q.; Cheng, C.C. Feedback stabilization of time-delay nonlinear systems with continuous time-varying output function. Int. J. Syst. Sci. 2019, 50, 244–255. [Google Scholar] [CrossRef]
  97. Wu, M.H.; Li, X.; Liu, C.; Liu, M.; Zhao, N.; Wang, J.; Wan, X.K.; Rao, Z.H.; Zhu, L. Robust global motion estimation for video security based on improved k-means clustering. J. Ambient Intell. Hum. Comput. 2019, 10, 439–448. [Google Scholar] [CrossRef]
  98. Zhao, X.L.; Lin, Z.Y.; Fu, B.; He, L.; Li, C.S. Research on the predictive optimal PID plus second order derivative method for AGC of power system with high penetration of photovoltaic and wind power. J. Electr. Eng. Technol. 2019, 14, 1075–1086. [Google Scholar] [CrossRef]
Figure 1. The flowchart of computing the RGELS parameter estimate $\hat{\vartheta}(L)$.
Figure 2. The flowchart of computing the PC-RGELS parameter estimates $\hat{\beta}(L)$ and $\hat{\theta}(L)$.
Figure 3. The RGELS, PC-S-RGELS and PC-RGELS estimation errors δ versus s.
Figure 4. The PC-RGELS estimation errors δ versus s with different noise variances $\sigma^2$.
Table 1. The RGELS estimates and their errors.

s | a1 | a2 | a3 | a4 | a5 | c1 | d1 | δ (%)
100 | 0.21713 | −0.55471 | 0.90627 | 0.84332 | 0.71040 | −0.48095 | 0.18013 | 10.72413
200 | 0.27110 | −0.56221 | 0.94490 | 0.86938 | 0.64018 | −0.47391 | 0.23325 | 7.35692
500 | 0.24836 | −0.54410 | 0.93994 | 0.86873 | 0.66404 | −0.55063 | 0.06758 | 15.08382
1000 | 0.26144 | −0.55626 | 0.94061 | 0.86873 | 0.64676 | −0.59024 | 0.03508 | 16.99606
2000 | 0.25755 | −0.54862 | 0.94853 | 0.86843 | 0.65790 | −0.57538 | 0.03620 | 16.91802
3000 | 0.25925 | −0.55582 | 0.94269 | 0.86686 | 0.66788 | −0.56765 | 0.05916 | 15.52793
True values | 0.26000 | −0.57000 | 0.93000 | 0.87000 | 0.65000 | −0.56000 | 0.32000 |
Table 2. The PC-S-RGELS estimates and their errors.

s | a1 | a2 | a3 | a4 | a5 | c1 | d1 | δ (%)
100 | 0.20140 | −0.56387 | 0.92631 | 0.70446 | 0.79768 | −0.35762 | 0.68920 | 28.39191
200 | 0.24432 | −0.56531 | 0.93373 | 0.79687 | 0.69082 | −0.38330 | 0.64577 | 22.51734
500 | 0.25446 | −0.56183 | 0.94070 | 0.83956 | 0.67391 | −0.47448 | 0.47151 | 10.58749
1000 | 0.26117 | −0.56373 | 0.93438 | 0.85347 | 0.65752 | −0.53154 | 0.40254 | 5.29978
2000 | 0.25977 | −0.55982 | 0.93912 | 0.86161 | 0.65870 | −0.54046 | 0.37127 | 3.42342
3000 | 0.26036 | −0.56350 | 0.93576 | 0.86339 | 0.66214 | −0.54830 | 0.37027 | 3.20492
True values | 0.26000 | −0.57000 | 0.93000 | 0.87000 | 0.65000 | −0.56000 | 0.32000 |
Table 3. The PC-RGELS estimates and their errors.

s | a1 | a2 | a3 | a4 | a5 | c1 | d1 | δ (%)
100 | 0.24978 | −0.60667 | 0.87031 | 0.78115 | 0.56724 | −0.39507 | 0.30167 | 12.87537
200 | 0.27882 | −0.58559 | 0.91259 | 0.82014 | 0.61386 | −0.42910 | 0.41254 | 10.32160
500 | 0.25835 | −0.56532 | 0.93012 | 0.85414 | 0.64496 | −0.58010 | 0.25299 | 4.26796
1000 | 0.26982 | −0.56619 | 0.93045 | 0.85993 | 0.64143 | −0.60694 | 0.26914 | 4.21835
2000 | 0.26104 | −0.56177 | 0.93780 | 0.86480 | 0.65036 | −0.57656 | 0.30440 | 1.53744
3000 | 0.26119 | −0.56487 | 0.93488 | 0.86510 | 0.65647 | −0.56786 | 0.34146 | 1.49741
True values | 0.26000 | −0.57000 | 0.93000 | 0.87000 | 0.65000 | −0.56000 | 0.32000 |
Table 4. The PC-RGELS estimates and their errors with $\sigma^2 = 0.20^2$.

s | a1 | a2 | a3 | a4 | a5 | c1 | d1 | δ (%)
100 | 0.24988 | −0.56806 | 0.90461 | 0.89323 | 0.60879 | −0.59192 | 0.23442 | 6.30236
200 | 0.26570 | −0.55866 | 0.90290 | 0.89140 | 0.62628 | −0.56227 | 0.30809 | 2.68948
500 | 0.26807 | −0.57330 | 0.92228 | 0.86927 | 0.62782 | −0.55656 | 0.31763 | 1.50427
1000 | 0.26418 | −0.57144 | 0.92234 | 0.87348 | 0.63193 | −0.56993 | 0.32145 | 1.34724
2000 | 0.26205 | −0.57095 | 0.92456 | 0.87546 | 0.64561 | −0.54789 | 0.32853 | 1.03059
3000 | 0.26050 | −0.56927 | 0.92491 | 0.87164 | 0.64979 | −0.54752 | 0.31805 | 0.81408
True values | 0.26000 | −0.57000 | 0.93000 | 0.87000 | 0.65000 | −0.56000 | 0.32000 |
Table 5. The PC-RGELS estimates and their errors with $\sigma^2 = 0.60^2$.

s | a1 | a2 | a3 | a4 | a5 | c1 | d1 | δ (%)
100 | −0.07666 | −1.10323 | 1.30348 | 0.68217 | 0.72605 | −0.49016 | −0.09463 | 51.44481
200 | 0.02363 | −0.82959 | 1.02696 | 0.84406 | 0.70880 | −0.49353 | −0.03312 | 30.52969
500 | 0.16820 | −0.71220 | 1.00389 | 0.83283 | 0.64901 | −0.50435 | 0.06928 | 18.85790
1000 | 0.20791 | −0.63623 | 0.95860 | 0.86602 | 0.63011 | −0.54443 | 0.15480 | 11.21196
2000 | 0.23053 | −0.60174 | 0.94024 | 0.87889 | 0.65490 | −0.53307 | 0.25093 | 5.15448
3000 | 0.24053 | −0.58686 | 0.93174 | 0.87014 | 0.66124 | −0.53546 | 0.27576 | 3.42763
True values | 0.26000 | −0.57000 | 0.93000 | 0.87000 | 0.65000 | −0.56000 | 0.32000 |
