
The Bias Compensation Based Parameter and State Estimation for Observability Canonical State-Space Models with Colored Noise

1 College of Mathematics and Statistics, Xinyang Normal University, Xinyang 464000, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(11), 175; https://doi.org/10.3390/a11110175
Submission received: 4 September 2018 / Revised: 22 October 2018 / Accepted: 22 October 2018 / Published: 1 November 2018
(This article belongs to the Special Issue Parameter Estimation Algorithms and Its Applications)

Abstract: This paper develops a bias compensation-based parameter and state estimation algorithm for observability canonical state-space systems corrupted by colored noise. The state-space system is transformed into a linear regression model by eliminating the state variables. Based on the determination of the noise variance and the noise model, a bias correction term is added to the least squares estimate, and the system parameters and states are computed interactively. The proposed algorithm generates unbiased parameter estimates. Two illustrative examples show the effectiveness of the proposed algorithm.

1. Introduction

Mathematical models play an important role in adaptive control, online prediction and signal modeling [1,2,3,4]. The system models of physical plants can be constructed from first principles, but this approach does not work well when the physical mechanism of the true plant is not clearly understood [5,6]. Thus, data-driven black-box or gray-box state-space modeling can be used to approximate the law of motion [7,8,9]. The advantage of state-space modeling lies in the fact that the model maps the relationship from inputs to outputs, while the states reflect the internal dynamic behavior of the plant. The estimation of state-space models may involve both the unknown system parameters and the unmeasurable states [10,11]. It is well known that the Kalman filter provides the optimal state estimator based on the measurement data [12,13,14]. However, the model parameters of the system are generally assumed to be known in advance, and the statistical characteristics of the measurement noise and process noise have to be specified, conditions which are hard to satisfy in practice [15,16,17].
For the identification of state-space models, the classical approaches are the prediction-error methods, which require adequate knowledge of the system structures and parameters [18,19], and the subspace identification methods, which ignore the consistency of the system parameter estimates [20,21,22]. Since both the system states and parameters are involved in the state-space model, estimating them simultaneously is a feasible choice [23,24]. In this respect, Pavelková and Kárný investigated the state and parameter estimation problem of the state-space model with bounded noise, providing a maximum estimator based on Bayesian theory [25]. By transforming the single-input multiple-output Hammerstein state-space model into a multivariable one, Ma et al. used the Kalman smoother to estimate the system states and the expectation maximization algorithm to compute the parameter estimates [26]. Li et al. presented a maximum likelihood-based identification algorithm for the Wiener nonlinear system with a linear state-space subsystem [27]. Wang and Liu applied the recursive least squares algorithm to non-uniformly sampled state-space systems with available states, and combined the singular value decomposition with the hierarchical identification principle to identify the system when the states are unavailable [28]. These studies mainly focused on state-space systems with white noise.
In industrial processes, the observation data are often corrupted by colored noise [29,30,31]. As is well known, the parameter estimates generated by the recursive least squares algorithm for a system with colored noise are biased. To obtain unbiased estimates, the bias compensation method was proposed [32]. The basic idea is to compensate the biased least squares estimate by adding a correction term [33,34,35]. Zhang investigated the parameter estimation of the multiple-input single-output linear system with colored noise and introduced a stable prefilter to preprocess the input data for the purpose of obtaining unbiased estimates [36]. Zheng extended the bias compensation method to the errors-in-variables output-error system, where the input is contaminated by white noise and the output errors are modeled by colored noise [37]. For the identification of state-space models with colored noise, Wang et al. applied the Kalman filter to estimate the system states and developed a filtering-based recursive least squares algorithm for observer canonical state-space systems with colored noise [38]. On the basis of the work in [38], this paper discusses the state and parameter identification of the state-space model with colored noise and presents a bias compensation-based algorithm for jointly estimating the system parameters and states. The main contributions of the paper are as follows.
  • By using the bias compensation, this paper derives the identification model and achieves the unbiased parameter estimation for observability canonical state-space models with colored noise.
  • By employing the interactive identification, this paper explores the relationship between the noise parameters and variance and the bias correction term and realizes the simultaneous estimation of the system parameters, noise parameters and system states.
The rest of this paper is organized as follows. Section 2 demonstrates the problem formulation about the observability canonical state-space system and derives the identification model. Section 3 develops a bias compensation-based parameter and state estimation algorithm. Section 4 provides two illustrative examples to show that the proposed algorithm is effective. Finally, Section 5 offers some concluding remarks.

2. Problem Description and Identification Model

For narrative convenience, let us introduce some notation. The nomenclature is displayed in Table 1.
Consider the following state-space system with colored noise,
$$x(k+1) = G x(k) + h u(k), \quad (1)$$
$$y(k) = f x(k) + e(k), \quad (2)$$
where $x(k) := [x_1(k), x_2(k), \ldots, x_n(k)]^{\mathrm T} \in \mathbb{R}^n$ is the state vector, $u(k) \in \mathbb{R}$ is the system input, $y(k) \in \mathbb{R}$ is the system output, $e(k) \in \mathbb{R}$ is a random noise, and $G \in \mathbb{R}^{n \times n}$, $h \in \mathbb{R}^n$ and $f \in \mathbb{R}^{1 \times n}$ are the system parameter matrix and vectors, defined as:
$$G := \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ g_1 & g_2 & g_3 & \cdots & g_n \end{bmatrix} \in \mathbb{R}^{n \times n}, \quad h := \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_{n-1} \\ h_n \end{bmatrix} \in \mathbb{R}^n, \quad f := [1, 0, \ldots, 0, 0] \in \mathbb{R}^{1 \times n}.$$
The external disturbance $e(k)$ can be fitted by a moving average process, an autoregressive process or an autoregressive moving average process. Without loss of generality, we consider $e(k)$ as a moving average noise process:
$$e(k) := (1 + e_1 z^{-1} + e_2 z^{-2} + \cdots + e_{n_e} z^{-n_e})\, v(k), \quad (3)$$
where $v(k)$ is white noise with zero mean and variance $\delta$, and $z^{-1}$ denotes the unit back-shift operator, $z^{-1} y(k) = y(k-1)$. Assume that $y(k) = 0$, $u(k) = 0$ and $v(k) = 0$ for $k \leqslant 0$. The objective is to estimate the parameters $g_i$, $h_i$ and $e_i$ and the system state $x(k)$ from the available input-output data $\{u(k), y(k): k = 0, 1, 2, \ldots\}$.
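To make the setup concrete, the following sketch simulates a system of this form in Python. This is an illustration only: the function name and the numeric values of `g`, `h` and `e_coef` are arbitrary example choices, not the simulation settings of the paper.

```python
import numpy as np

# Illustrative sketch: simulate the observability canonical system of
# Equations (1)-(3) for n = 2, n_e = 2 (all numeric values are examples).
def simulate(g, h, e_coef, u, v):
    n = len(g)
    # G has a shifted identity on top and [g_1, ..., g_n] as its last row
    G = np.vstack([np.hstack([np.zeros((n - 1, 1)), np.eye(n - 1)]), g])
    f = np.zeros(n)
    f[0] = 1.0
    # colored (moving average) noise e(k) = (1 + e_1 z^-1 + ...) v(k)
    e = v.copy()
    for j, c in enumerate(e_coef, start=1):
        e[j:] += c * v[:-j]
    x = np.zeros(n)
    y = np.zeros(len(u))
    for k in range(len(u)):
        y[k] = f @ x + e[k]          # output equation (2)
        x = G @ x + np.asarray(h) * u[k]   # state equation (1)
    return y

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
v = 0.5 * rng.standard_normal(50)
y = simulate(g=[-0.9, -0.8], h=[1.1, 1.6], e_coef=[0.2, -0.6], u=u, v=v)
```

Since the initial state is zero, the first output equals the first white-noise sample; after that the colored noise and the input dynamics both contribute.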
Note that the system in Equations (1) and (2) is in observability canonical form, and the observability matrix $T$ is the identity matrix, i.e.,
$$T = [f^{\mathrm T}, (fG)^{\mathrm T}, \ldots, (fG^{n-1})^{\mathrm T}]^{\mathrm T} = I_n. \quad (4)$$
From Equations (1) and (2), we have:
$$y(k) = f x(k) + e(k), \quad (5)$$
$$y(k+1) = f x(k+1) + e(k+1) = f G x(k) + f h u(k) + e(k+1), \quad (6)$$
$$y(k+2) = f G x(k+1) + f h u(k+1) + e(k+2) = f G^2 x(k) + f G h u(k) + f h u(k+1) + e(k+2), \quad (7)$$
$$\vdots$$
$$y(k+n-1) = f G^{n-1} x(k) + f G^{n-2} h u(k) + f G^{n-3} h u(k+1) + \cdots + f h u(k+n-2) + e(k+n-1), \quad (8)$$
$$y(k+n) = f G^n x(k) + f G^{n-1} h u(k) + f G^{n-2} h u(k+1) + \cdots + f h u(k+n-1) + e(k+n). \quad (9)$$
Define the parameter matrix $M$ and the information vectors $\phi_y(k)$, $\phi_u(k)$ and $\phi_e(k)$ as:
$$M := \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ fh & 0 & \cdots & 0 & 0 \\ fGh & fh & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ fG^{n-2}h & fG^{n-3}h & \cdots & fh & 0 \end{bmatrix} \in \mathbb{R}^{n \times n},$$
$$\phi_y(k) := [y(k-n), y(k-n+1), \ldots, y(k-1)]^{\mathrm T} \in \mathbb{R}^n,$$
$$\phi_u(k) := [u(k-n), u(k-n+1), \ldots, u(k-1)]^{\mathrm T} \in \mathbb{R}^n,$$
$$\phi_e(k) := [e(k-n), e(k-n+1), \ldots, e(k-1)]^{\mathrm T} \in \mathbb{R}^n.$$
According to Equations (5)–(8), we have:
$$\phi_y(k+n) = T x(k) + M \phi_u(k+n) + \phi_e(k+n) = x(k) + M \phi_u(k+n) + \phi_e(k+n). \quad (10)$$
Equation (10) can be rewritten as:
$$x(k) = \phi_y(k+n) - M \phi_u(k+n) - \phi_e(k+n). \quad (11)$$
Define the parameter vectors $\vartheta$, $\vartheta_g$ and $\vartheta_h$ and the information vectors $\phi_s(k)$ and $\phi_n(k)$ as:
$$\vartheta := [\vartheta_g^{\mathrm T}, \vartheta_h^{\mathrm T}]^{\mathrm T} \in \mathbb{R}^{2n}, \quad \vartheta_g := [f G^n]^{\mathrm T} \in \mathbb{R}^n, \quad \vartheta_h := [-f G^n M + [f G^{n-1} h, f G^{n-2} h, \ldots, f h]]^{\mathrm T} \in \mathbb{R}^n,$$
$$\phi_s(k) := \begin{bmatrix} \phi_y(k) \\ \phi_u(k) \end{bmatrix} \in \mathbb{R}^{2n}, \quad \phi_n(k) := \begin{bmatrix} -\phi_e(k) \\ 0 \end{bmatrix} \in \mathbb{R}^{2n}.$$
Inserting Equation (11) into Equation (9) yields:
$$y(k+n) = f G^n [\phi_y(k+n) - M \phi_u(k+n) - \phi_e(k+n)] + [f G^{n-1} h, f G^{n-2} h, \ldots, f h]\, \phi_u(k+n) + e(k+n)$$
$$= \phi_y^{\mathrm T}(k+n) \vartheta_g + \phi_u^{\mathrm T}(k+n) \vartheta_h - \phi_e^{\mathrm T}(k+n) \vartheta_g + e(k+n) = \phi_s^{\mathrm T}(k+n) \vartheta + \phi_n^{\mathrm T}(k+n) \vartheta + e(k+n). \quad (12)$$
Define the intermediate variable:
$$w(k) := \phi_n^{\mathrm T}(k) \vartheta + e(k). \quad (13)$$
Replacing $k+n$ in Equation (12) with $k$ gives:
$$y(k) = \phi_s^{\mathrm T}(k) \vartheta + \phi_n^{\mathrm T}(k) \vartheta + e(k) \quad (14)$$
$$= \phi_s^{\mathrm T}(k) \vartheta + w(k). \quad (15)$$
Equation (14) or (15) is the system identification model of the state-space system in Equations (1) and (2), where the information vector $\phi_s(k)$ is composed of the observed data. Define the noise parameter vector $\vartheta_v$ and the information vector $\phi_v(k)$ as:
$$\vartheta_v := [e_1, e_2, \ldots, e_{n_e}]^{\mathrm T} \in \mathbb{R}^{n_e}, \quad \phi_v(k) := [v(k-1), v(k-2), \ldots, v(k-n_e)]^{\mathrm T} \in \mathbb{R}^{n_e}.$$
From Equation (3), we have:
$$e(k) = \phi_v^{\mathrm T}(k) \vartheta_v + v(k). \quad (16)$$
Thus, we obtain the noise identification model in Equation (16). The following derives the bias compensation-based identification algorithm from the system model in Equation (15) and the noise model in Equation (16).

3. The Bias Compensation-Based Parameter and State Estimation Algorithm

The algorithm consists of two parts: the parameter estimation algorithm and the state estimation algorithm. The two parts are implemented in an interactive way.

3.1. The Parameter Estimation Algorithm

According to the identification model in Equation (15), define the cost function:
$$J(\vartheta) := \sum_{i=1}^{k} [y(i) - \phi_s^{\mathrm T}(i) \vartheta]^2.$$
Minimizing $J(\vartheta)$ gives the least squares estimate $\hat{\vartheta}_{\mathrm{LS}}(k)$ of the parameter vector $\vartheta$:
$$\hat{\vartheta}_{\mathrm{LS}}(k) = P(k) \sum_{i=1}^{k} \phi_s(i) y(i), \quad P^{-1}(k) = \sum_{i=1}^{k} \phi_s(i) \phi_s^{\mathrm T}(i). \quad (17)$$
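For illustration, the estimate in Equation (17) is what a standard recursive least squares routine computes. The sketch below (a hypothetical helper on synthetic data, not the paper's code or settings) shows the recursion; under white noise it converges to the true parameters, which is precisely what fails under the colored noise considered in this paper.

```python
import numpy as np

# Sketch of recursive least squares: theta(k) = theta(k-1)
# + P(k) phi(k) [y(k) - phi(k)^T theta(k-1)], with P(k) updated
# by the matrix inversion lemma (rank-one update).
def rls(phi_seq, y_seq, p0=1e6):
    d = phi_seq.shape[1]
    theta = np.zeros(d)
    P = p0 * np.eye(d)
    for phi, y in zip(phi_seq, y_seq):
        Pphi = P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        theta = theta + gain * (y - phi @ theta)
        P = P - np.outer(gain, Pphi)
    return theta

rng = np.random.default_rng(1)
Phi = rng.standard_normal((500, 3))
true_theta = np.array([0.5, -1.0, 2.0])
# white measurement noise: in this case RLS is unbiased
y = Phi @ true_theta + 0.01 * rng.standard_normal(500)
theta_hat = rls(Phi, y)
```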
Inserting Equation (14) into Equation (17) and using Equation (13), we have:
$$\hat{\vartheta}_{\mathrm{LS}}(k) = P(k) \sum_{i=1}^{k} \phi_s(i) [\phi_s^{\mathrm T}(i) \vartheta + \phi_n^{\mathrm T}(i) \vartheta + e(i)] = \vartheta + P(k) \sum_{i=1}^{k} \phi_s(i) [\phi_n^{\mathrm T}(i) \vartheta + e(i)]. \quad (18)$$
Obviously, the least squares estimate $\hat{\vartheta}_{\mathrm{LS}}(k)$ in Equation (18) is biased, since $e(k)$ is correlated noise. Equation (18) can be rewritten as:
$$P^{-1}(k) [\hat{\vartheta}_{\mathrm{LS}}(k) - \vartheta] = \sum_{i=1}^{k} \phi_s(i) [\phi_n^{\mathrm T}(i) \vartheta + e(i)]. \quad (19)$$
Dividing both sides by $k$ and taking limits gives:
$$\lim_{k \to \infty} \frac{1}{k} [P^{-1}(k) (\hat{\vartheta}_{\mathrm{LS}}(k) - \vartheta)] = \lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} \phi_s(i) \phi_n^{\mathrm T}(i) \vartheta + \lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} \phi_s(i) e(i).$$
Note that $e(k)$ in Equation (19) is the moving average noise, and $v(k)$ is white noise with zero mean and variance $\delta$, independent of the inputs. From Equation (3), we have:
$$R(0) := \mathrm{E}[e^2(k)] = (1 + e_1^2 + e_2^2 + \cdots + e_{n_e}^2)\,\delta = [1 + \vartheta_v^{\mathrm T} \vartheta_v]\,\delta,$$
$$R(i) := \mathrm{E}[e(k-i)\,e(k)] = (e_i + e_{i+1} e_1 + \cdots + e_{n_e} e_{n_e - i})\,\delta = \vartheta_v^{\mathrm T}(i{:}n_e) \begin{bmatrix} 1 \\ \vartheta_v(1{:}n_e - i) \end{bmatrix} \delta, \quad i = 1, 2, \ldots, n_e, \quad (20)$$
where $R(i)$ is the autocorrelation function of the noise $e(k)$, with $R(i) = 0$ for $i > n_e$. Define the autocorrelation function vectors $r$ and $\zeta$ and the autocorrelation function matrices $R$, $\Lambda$ and $Q$ as:
$$r := [R(n), R(n-1), \ldots, R(1), 0, 0, \ldots, 0]^{\mathrm T} \in \mathbb{R}^{2n},$$
$$\zeta := [R(n)/\delta, R(n-1)/\delta, \ldots, R(1)/\delta, 0, 0, \ldots, 0]^{\mathrm T} \in \mathbb{R}^{2n},$$
$$R := \mathrm{diag}[\Lambda, 0] \in \mathbb{R}^{2n \times 2n}, \quad \Lambda := \begin{bmatrix} R(0) & R(1) & \cdots & R(n-1) \\ R(1) & R(0) & \cdots & R(n-2) \\ \vdots & \vdots & \ddots & \vdots \\ R(n-1) & R(n-2) & \cdots & R(0) \end{bmatrix} \in \mathbb{R}^{n \times n}, \quad Q := R/\delta.$$
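As a sketch (hypothetical helper names, not from the paper), the autocorrelation values $R(i)$ and the normalized quantities $\zeta$ and $Q$ defined above can be computed directly from the noise parameter vector $\vartheta_v$ and the variance $\delta$:

```python
import numpy as np

# Sketch: autocorrelations of an MA(n_e) noise and the normalized
# quantities zeta = r/delta, Q = R/delta used by the bias correction.
def noise_autocorr(theta_v, delta, n):
    c = np.concatenate(([1.0], theta_v))   # MA coefficients [1, e_1, ..., e_ne]
    ne = len(theta_v)
    R = np.zeros(n + 1)                    # R[i] = R(i), zero for i > n_e
    for i in range(min(n, ne) + 1):
        # R(i) = delta * sum_j c_j c_{j+i}
        R[i] = delta * np.dot(c[:len(c) - i], c[i:])
    # zeta = [R(n), ..., R(1), 0, ..., 0]^T / delta  (length 2n)
    zeta = np.concatenate((R[1:][::-1], np.zeros(n))) / delta
    Lam = np.array([[R[abs(i - j)] for j in range(n)] for i in range(n)])
    Q = np.zeros((2 * n, 2 * n))
    Q[:n, :n] = Lam / delta                # Q = diag[Lambda/delta, 0]
    return R, zeta, Q

# example values: theta_v = [0.2, -0.6], delta = 0.25, n = 2
R, zeta, Q = noise_autocorr(np.array([0.2, -0.6]), delta=0.25, n=2)
# R(0) = 0.25*(1 + 0.04 + 0.36) = 0.35, R(1) = 0.02, R(2) = -0.15
```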
In fact, $\Lambda$ is a Toeplitz matrix consisting of $n$ autocorrelation values. Equation (19) can be rewritten as:
$$\lim_{k \to \infty} \frac{1}{k} [P^{-1}(k) (\hat{\vartheta}_{\mathrm{LS}}(k) - \vartheta)] = -(R \vartheta - r),$$
or:
$$\lim_{k \to \infty} \hat{\vartheta}_{\mathrm{LS}}(k) = \vartheta - k P(k) (R \vartheta - r). \quad (25)$$
It can be seen from Equation (25) that the bias of the least squares estimate $\hat{\vartheta}_{\mathrm{LS}}(k)$ can be eliminated by adding the compensation term $\Delta\vartheta(k) := k P(k) (R \vartheta - r)$. That is, the estimate $\hat{\vartheta}_C(k) := \hat{\vartheta}_{\mathrm{LS}}(k) + \Delta\vartheta(k)$ is an unbiased estimate of the true parameter vector $\vartheta$. Thus, the unbiased estimate $\hat{\vartheta}_C(k)$ can be computed by the following recursive expressions:
$$\hat{\vartheta}_C(k) = \hat{\vartheta}_{\mathrm{LS}}(k) + k P(k) [\hat{R}(k) \hat{\vartheta}_C(k) - \hat{r}(k)] = \hat{\vartheta}_{\mathrm{LS}}(k) + k \hat{\delta}(k) P(k) [\hat{Q}(k) \hat{\vartheta}_C(k) - \hat{\zeta}(k)], \quad (26)$$
$$\hat{\vartheta}_{\mathrm{LS}}(k) = \hat{\vartheta}_{\mathrm{LS}}(k-1) + P(k) \phi_s(k) [y(k) - \phi_s^{\mathrm T}(k) \hat{\vartheta}_{\mathrm{LS}}(k-1)], \quad (27)$$
$$P^{-1}(k) = P^{-1}(k-1) + \phi_s(k) \phi_s^{\mathrm T}(k), \quad P(0) = p_0 I_{2n}, \quad (28)$$
$$\phi_s(k) = \begin{bmatrix} \phi_y(k) \\ \phi_u(k) \end{bmatrix}, \quad \hat{\vartheta}_C(k) = \begin{bmatrix} \hat{\vartheta}_g(k) \\ \hat{\vartheta}_h(k) \end{bmatrix}. \quad (29)$$
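One way to read the implicit relation in Equation (26): treating $\hat{\vartheta}_C(k)$ on both sides as the same unknown gives a small linear system. The sketch below solves that system directly; this is an illustration of the algebra (with hypothetical names and made-up numbers), not the paper's recursive implementation, which substitutes the previous estimate instead.

```python
import numpy as np

# Sketch: theta_C = theta_LS + k P (R_hat theta_C - r_hat)
# rearranges to (I - k P R_hat) theta_C = theta_LS - k P r_hat.
def bias_compensate(theta_ls, P, k, R_hat, r_hat):
    d = len(theta_ls)
    A = np.eye(d) - k * P @ R_hat
    return np.linalg.solve(A, theta_ls - k * P @ r_hat)

# toy numbers (illustration only)
theta_c = bias_compensate(theta_ls=np.array([1.0, 1.0]),
                          P=0.01 * np.eye(2), k=10,
                          R_hat=np.diag([1.0, 0.0]),
                          r_hat=np.array([0.5, 0.0]))
```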
Equation (26) shows that the unbiased estimate ϑ ^ C ( k ) is related to the estimates of R and r (i.e., the noise variance δ and the noise parameter vector ϑ v ). The following derives their estimates based on the interactive identification.
Let $\varepsilon_{\mathrm{LS}}(i) := y(i) - \phi_s^{\mathrm T}(i) \hat{\vartheta}_{\mathrm{LS}}(k)$ denote the least squares residual. Using Equation (14) and the relation $\sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}(i) \phi_s^{\mathrm T}(i) = 0$ gives:
$$\sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}^2(i) = \sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}(i) [\phi_s^{\mathrm T}(i) \vartheta + \phi_n^{\mathrm T}(i) \vartheta + e(i) - \phi_s^{\mathrm T}(i) \hat{\vartheta}_{\mathrm{LS}}(k)]$$
$$= \sum_{i=1}^{k} [\phi_s^{\mathrm T}(i) \vartheta + \phi_n^{\mathrm T}(i) \vartheta + e(i) - \phi_s^{\mathrm T}(i) \hat{\vartheta}_{\mathrm{LS}}(k)] [\phi_n^{\mathrm T}(i) \vartheta + e(i)]$$
$$= \sum_{i=1}^{k} \phi_s^{\mathrm T}(i) [\vartheta - \hat{\vartheta}_{\mathrm{LS}}(k)] [\phi_n^{\mathrm T}(i) \vartheta + e(i)] + \sum_{i=1}^{k} [\phi_n^{\mathrm T}(i) \vartheta + e(i)]^2.$$
Noting that $R$ is a symmetric matrix, and using Equations (20)–(24), we have:
$$\lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}^2(i) = -\vartheta^{\mathrm T} R^{\mathrm T} [\vartheta - \hat{\vartheta}_{\mathrm{LS}}(k)] + r^{\mathrm T} [\vartheta - \hat{\vartheta}_{\mathrm{LS}}(k)] + \vartheta^{\mathrm T} R \vartheta - 2 r^{\mathrm T} \vartheta + R(0)$$
$$= \vartheta^{\mathrm T} R\, \hat{\vartheta}_{\mathrm{LS}}(k) - r^{\mathrm T} [\vartheta + \hat{\vartheta}_{\mathrm{LS}}(k)] + R(0) = \delta\, \vartheta^{\mathrm T} Q\, \hat{\vartheta}_{\mathrm{LS}}(k) - \delta\, \zeta^{\mathrm T} [\vartheta + \hat{\vartheta}_{\mathrm{LS}}(k)] + \delta [1 + \vartheta_v^{\mathrm T} \vartheta_v].$$
Thus, we have:
$$\delta = \frac{\displaystyle \lim_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}^2(i)}{\vartheta^{\mathrm T} Q\, \hat{\vartheta}_{\mathrm{LS}}(k) - \zeta^{\mathrm T} [\vartheta + \hat{\vartheta}_{\mathrm{LS}}(k)] + 1 + \vartheta_v^{\mathrm T} \vartheta_v}.$$
Let $J(k) := \sum_{i=1}^{k} \varepsilon_{\mathrm{LS}}^2(i)$ be the cost function at time $k$, and let $\hat{Q}(k)$ and $\hat{\zeta}(k)$ be the estimates of $Q$ and $\zeta$ at instant $k$, respectively. Then, the estimate of the noise variance $\delta$ can be computed by:
$$\hat{\delta}(k) = \frac{J(k)/k}{\hat{\vartheta}_C^{\mathrm T}(k) \hat{Q}(k) \hat{\vartheta}_{\mathrm{LS}}(k) - \hat{\zeta}^{\mathrm T}(k) [\hat{\vartheta}_C(k) + \hat{\vartheta}_{\mathrm{LS}}(k)] + 1 + \hat{\vartheta}_v^{\mathrm T}(k) \hat{\vartheta}_v(k)}, \quad (30)$$
$$J(k) = J(k-1) + \frac{[y(k) - \phi_s^{\mathrm T}(k) \hat{\vartheta}_{\mathrm{LS}}(k-1)]^2}{1 + \phi_s^{\mathrm T}(k) P(k-1) \phi_s(k)}, \quad (31)$$
$$\hat{\zeta}(k) = [\hat{R}(n)/\hat{\delta}(k-1), \hat{R}(n-1)/\hat{\delta}(k-1), \ldots, \hat{R}(1)/\hat{\delta}(k-1), 0, 0, \ldots, 0]^{\mathrm T}, \quad (32)$$
$$\hat{Q}(k) = \mathrm{diag}\!\left[ \begin{bmatrix} 1 + \hat{\vartheta}_v^{\mathrm T}(k) \hat{\vartheta}_v(k) & \hat{R}(1)/\hat{\delta}(k-1) & \cdots & \hat{R}(n-1)/\hat{\delta}(k-1) \\ \hat{R}(1)/\hat{\delta}(k-1) & 1 + \hat{\vartheta}_v^{\mathrm T}(k) \hat{\vartheta}_v(k) & \cdots & \hat{R}(n-2)/\hat{\delta}(k-1) \\ \vdots & \vdots & \ddots & \vdots \\ \hat{R}(n-1)/\hat{\delta}(k-1) & \hat{R}(n-2)/\hat{\delta}(k-1) & \cdots & 1 + \hat{\vartheta}_v^{\mathrm T}(k) \hat{\vartheta}_v(k) \end{bmatrix},\ 0 \right], \quad (33)$$
$$\frac{\hat{R}(i)}{\hat{\delta}(k-1)} = \hat{\vartheta}_v^{\mathrm T}(i{:}n_e) \begin{bmatrix} 1 \\ \hat{\vartheta}_v(1{:}n_e - i) \end{bmatrix}, \quad i = 1, 2, \ldots, n_e. \quad (34)$$
Note that Equations (30)–(34) involve the estimate $\hat{\vartheta}_v(k)$ of the noise parameter vector $\vartheta_v$, which can be computed from the noise model in Equation (16). Let $\hat{e}(k)$ and $\hat{v}(k)$ be the estimates of $e(k)$ and $v(k)$, respectively. Define the noise information vectors:
$$\hat{\phi}_n(k) := [-\hat{e}(k-n), -\hat{e}(k-n+1), \ldots, -\hat{e}(k-1), 0, 0, \ldots, 0]^{\mathrm T} \in \mathbb{R}^{2n}, \quad (35)$$
$$\hat{\phi}_v(k) := [\hat{v}(k-1), \hat{v}(k-2), \ldots, \hat{v}(k-n_e)]^{\mathrm T} \in \mathbb{R}^{n_e}. \quad (36)$$
From Equations (14) and (16), we have:
$$\hat{e}(k) = y(k) - \phi_s^{\mathrm T}(k) \hat{\vartheta}_C(k) - \hat{\phi}_n^{\mathrm T}(k) \hat{\vartheta}_C(k), \quad (37)$$
$$\hat{v}(k) = \hat{e}(k) - \hat{\phi}_v^{\mathrm T}(k) \hat{\vartheta}_v(k). \quad (38)$$
Using the least squares principle, the estimate $\hat{\vartheta}_v(k)$ of $\vartheta_v$ in Equation (16) can be computed by:
$$\hat{\vartheta}_v(k) = \hat{\vartheta}_v(k-1) + P_v(k) \hat{\phi}_v(k) [\hat{e}(k) - \hat{\phi}_v^{\mathrm T}(k) \hat{\vartheta}_v(k-1)], \quad (39)$$
$$P_v^{-1}(k) = P_v^{-1}(k-1) + \hat{\phi}_v(k) \hat{\phi}_v^{\mathrm T}(k). \quad (40)$$
Thus, Equations (26)–(40) form the bias compensation-based parameter estimation (BC-PE) algorithm for identifying the parameter vector $\vartheta$. The BC-PE algorithm is an interactive estimation process: we first compute the estimates $\hat{\vartheta}_v(k)$ and $\hat{\delta}(k)$; with these estimates, we form the estimates of $R$ and $r$ and then update the unbiased estimate $\hat{\vartheta}_C(k)$ using Equations (26)–(29).

3.2. The State Estimation Algorithm

Once the system parameter estimates and the noise estimate $\hat{e}(k)$ have been obtained from the BC-PE algorithm, we can estimate the system state $x(k)$ using Equation (11).
Post-multiplying Equation (4) by $h$ and equating the elements row by row, we have:
$$f G^{i-1} h = h_i, \quad i = 1, 2, \ldots, n.$$
Then, the parameter matrix $M$ can be expressed as:
$$M = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ h_1 & 0 & \cdots & 0 & 0 \\ h_2 & h_1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ h_{n-1} & h_{n-2} & \cdots & h_1 & 0 \end{bmatrix}.$$
Post-multiplying Equation (4) by $G$ and equating the elements of the $n$-th row, we have:
$$f G^n = [g_1, g_2, \ldots, g_n].$$
According to the definitions of $\vartheta_g$ and $\vartheta_h$, we have:
$$\vartheta_g = [f G^n]^{\mathrm T} = [g_1, g_2, \ldots, g_n]^{\mathrm T},$$
$$\vartheta_h = [-f G^n M + [f G^{n-1} h, f G^{n-2} h, \ldots, f h]]^{\mathrm T} = \begin{bmatrix} -h_1 g_2 - h_2 g_3 - \cdots - h_{n-1} g_n + h_n \\ -h_1 g_3 - h_2 g_4 - \cdots - h_{n-2} g_n + h_{n-1} \\ \vdots \\ -h_1 g_n + h_2 \\ h_1 \end{bmatrix} = -\begin{bmatrix} g_2 & g_3 & \cdots & g_n & -1 \\ g_3 & g_4 & \cdots & -1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ g_n & -1 & \cdots & 0 & 0 \\ -1 & 0 & \cdots & 0 & 0 \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_{n-1} \\ h_n \end{bmatrix}.$$
Once the estimate of the parameter vector $\vartheta$ is computed by the BC-PE algorithm in Equations (26)–(40), we can extract the estimates $\hat{\vartheta}_g(k)$ and $\hat{\vartheta}_h(k)$ of the parameter vectors $\vartheta_g$ and $\vartheta_h$. Then, the estimates $\hat{g}_i(k)$ and $\hat{h}_i(k)$ can be computed by:
$$\hat{\vartheta}_g(k) = [\hat{g}_1(k), \hat{g}_2(k), \ldots, \hat{g}_n(k)]^{\mathrm T}, \quad (41)$$
$$\begin{bmatrix} \hat{h}_1(k) \\ \hat{h}_2(k) \\ \vdots \\ \hat{h}_{n-1}(k) \\ \hat{h}_n(k) \end{bmatrix} = -\begin{bmatrix} \hat{g}_2(k) & \hat{g}_3(k) & \cdots & \hat{g}_n(k) & -1 \\ \hat{g}_3(k) & \hat{g}_4(k) & \cdots & -1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \hat{g}_n(k) & -1 & \cdots & 0 & 0 \\ -1 & 0 & \cdots & 0 & 0 \end{bmatrix}^{-1} \hat{\vartheta}_h(k). \quad (42)$$
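A sketch of this back-substitution (hypothetical helper names; the anti-triangular coefficient matrix follows Equation (42), and the example values are arbitrary):

```python
import numpy as np

# Sketch of Equation (42): recover h_1, ..., h_n from g_hat and theta_h
# by solving with the anti-triangular matrix
# [[g2, g3, ..., gn, -1], [g3, ..., -1, 0], ..., [-1, 0, ..., 0]].
def recover_h(g_hat, theta_h):
    n = len(g_hat)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            idx = i + j + 1          # 0-based index into g_hat
            if idx < n:
                A[i, j] = g_hat[idx]
            elif idx == n:
                A[i, j] = -1.0
    return -np.linalg.solve(A, theta_h)

# round-trip check with example values: build theta_h from known g, h,
# then recover h (n = 2, so A = [[g2, -1], [-1, 0]])
g_true = np.array([-0.9, -0.8])
h_true = np.array([1.1, 1.6])
A = np.array([[g_true[1], -1.0], [-1.0, 0.0]])
theta_h = -A @ h_true
h_rec = recover_h(g_true, theta_h)
```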
From Equation (11), we have:
$$x(k-n) = \phi_y(k) - M \phi_u(k) - \phi_e(k).$$
Replacing $M$ and $\phi_e(k)$ with their corresponding estimates $\hat{M}(k)$ and $\hat{\phi}_e(k)$, we have:
$$\hat{x}(k-n) = \phi_y(k) - \hat{M}(k) \phi_u(k) - \hat{\phi}_e(k), \quad (43)$$
$$\phi_y(k) = [y(k-n), y(k-n+1), \ldots, y(k-1)]^{\mathrm T}, \quad (44)$$
$$\phi_u(k) = [u(k-n), u(k-n+1), \ldots, u(k-1)]^{\mathrm T}, \quad (45)$$
$$\hat{\phi}_e(k) = [\hat{e}(k-n), \hat{e}(k-n+1), \ldots, \hat{e}(k-1)]^{\mathrm T}, \quad (46)$$
$$\hat{M}(k) = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ \hat{h}_1(k) & 0 & \cdots & 0 & 0 \\ \hat{h}_2(k) & \hat{h}_1(k) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \hat{h}_{n-1}(k) & \hat{h}_{n-2}(k) & \cdots & \hat{h}_1(k) & 0 \end{bmatrix}. \quad (47)$$
Equations (41)–(47) form the state estimation algorithm for the state-space system in Equations (1) and (2). Let $\theta := [g^{\mathrm T}, h^{\mathrm T}, \vartheta_v^{\mathrm T}]^{\mathrm T} \in \mathbb{R}^{2n + n_e}$, with $g := [g_1, g_2, \ldots, g_n]^{\mathrm T} \in \mathbb{R}^n$ and $h := [h_1, h_2, \ldots, h_n]^{\mathrm T} \in \mathbb{R}^n$. By interactively implementing the parameter estimation algorithm in Equations (26)–(40) and the state estimation algorithm in Equations (41)–(47), the estimates of the system parameter vector $\theta$ and the state $x(k)$ can be obtained.
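The state reconstruction of Equations (43)–(47) amounts to subtracting the input and noise contributions from a window of past outputs. A sketch (hypothetical helper names; `h_hat` and the noise window are assumed to come from the BC-PE algorithm, and the numbers below are arbitrary):

```python
import numpy as np

# Sketch of Equation (43): x_hat(k-n) = phi_y(k) - M_hat phi_u(k) - phi_e_hat(k).
# The windows hold samples at times k-n, ..., k-1 (oldest first).
def estimate_state(y_win, u_win, e_win, h_hat):
    n = len(h_hat)
    M = np.zeros((n, n))
    for i in range(1, n):
        for j in range(i):
            M[i, j] = h_hat[i - j - 1]   # entry f G^{i-j-1} h = h_{i-j}
    return np.asarray(y_win) - M @ np.asarray(u_win) - np.asarray(e_win)

# toy windows for n = 2: M = [[0, 0], [h1, 0]]
x_hat = estimate_state(y_win=[2.0, 3.0], u_win=[1.0, 2.0],
                       e_win=[0.1, -0.2], h_hat=np.array([1.1, 1.6]))
```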
The steps for implementing the bias compensation-based parameter and state estimation (BC-PSE) algorithm in Equations (26)–(47) for state-space systems with colored noise are listed as follows.
  1. Let $k = 1$, and set the initial values $\hat{\vartheta}_{\mathrm{LS}}(0) = \hat{\vartheta}_C(0) = \mathbf{1}_{2n}/p_0$, $\hat{\vartheta}_v(0) = \mathbf{1}_{n_e}/p_0$, $P(0) = p_0 I_{2n}$, $P_v(0) = p_0 I_{n_e}$, $J(0) = 0$, with $p_0 = 10^6$.
  2. Collect the input-output data $u(k)$ and $y(k)$. Construct $\phi_y(k)$ using Equation (44), $\phi_u(k)$ using Equation (45) and $\phi_s(k)$ using Equation (29).
  3. Compute $P(k)$ using Equation (28) and $J(k)$ using Equation (31). Update the parameter estimate $\hat{\vartheta}_{\mathrm{LS}}(k)$ using Equation (27).
  4. Construct $\hat{\phi}_n(k)$ using Equation (35) and $\hat{\phi}_v(k)$ using Equation (36). Compute $\hat{e}(k)$ using Equation (37), $\hat{v}(k)$ using Equation (38) and $P_v(k)$ using Equation (40). Update the noise parameter estimate $\hat{\vartheta}_v(k)$ using Equation (39).
  5. Compute $\hat{\zeta}(k)$, $\hat{Q}(k)$ and $\hat{R}(i)/\hat{\delta}(k-1)$ using Equations (32)–(34). Compute $\hat{\delta}(k)$ using Equation (30) and $\hat{\vartheta}_C(k)$ using Equation (26).
  6. Read $\hat{\vartheta}_g(k)$ and $\hat{\vartheta}_h(k)$ from Equation (29), and acquire the parameter estimates $\hat{g}_i(k)$ and $\hat{h}_i(k)$ using Equations (41) and (42).
  7. Construct $\hat{\phi}_e(k)$ using Equation (46) and $\hat{M}(k)$ using Equation (47).
  8. Compute the state estimate $\hat{x}(k-n)$ using Equation (43).
  9. Let $\hat{\theta}(k) := [\hat{g}^{\mathrm T}(k), \hat{h}^{\mathrm T}(k), \hat{\vartheta}_v^{\mathrm T}(k)]^{\mathrm T}$. If $k < L$, increase $k$ by one and go to Step 2; otherwise, stop and obtain the parameter estimate vector $\hat{\theta}(L)$.

4. Examples

Example 1.
Consider the following state-space system with moving average noise,
$$x(k+1) = \begin{bmatrix} 0 & 1 \\ g_1 & g_2 \end{bmatrix} x(k) + \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} u(k) = \begin{bmatrix} 0 & 1 \\ -0.90 & -0.80 \end{bmatrix} x(k) + \begin{bmatrix} 1.10 \\ 1.60 \end{bmatrix} u(k),$$
$$y(k) = [1, 0]\, x(k) + e(k),$$
$$e(k) = (1 + e_1 z^{-1} + e_2 z^{-2}) v(k) = (1 + 0.20 z^{-1} - 0.60 z^{-2}) v(k).$$
The parameter vector to be estimated is:
$$\theta = [g_1, g_2, h_1, h_2, e_1, e_2]^{\mathrm T} = [-0.90, -0.80, 1.10, 1.60, 0.20, -0.60]^{\mathrm T}.$$
In simulation, the input $\{u(k)\}$ is taken as a persistent excitation sequence and $\{v(k)\}$ as a zero-mean white noise sequence with variance $\delta = 0.25$. The data length is $L = 1200$. The first 1000 data are used with the BC-PSE algorithm to obtain the estimates of the parameter vector $\theta$ and the system state $x(k)$. The parameter estimates and their errors under different noise variances are displayed in Table 2 and Figure 1, where the estimation error is defined as $\tau := \|\hat{\theta}(k) - \theta\| / \|\theta\| \times 100\%$. The parameter estimate $\hat{\theta}(k)$ versus $k$ is plotted in Figure 2, and the state estimate $\hat{x}(k)$ computed by the BC-PSE algorithm is illustrated in Figure 3 and Figure 4.
The remaining 200 data, from $k = 1001$ to 1200, are used to test the effectiveness of the estimated model. As a comparison, the curves of the actual output $y(k)$ and the estimated output $\hat{y}(k)$ are depicted in Figure 5.
Example 2.
Consider a third-order observability canonical state-space system,
$$x(k+1) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -0.80 & -0.50 & -0.60 \end{bmatrix} x(k) + \begin{bmatrix} 2.20 \\ 0.30 \\ 0.20 \end{bmatrix} u(k),$$
$$y(k) = [1, 0, 0]\, x(k) + e(k),$$
$$e(k) = (1 + e_1 z^{-1} + e_2 z^{-2}) v(k) = (1 + 0.50 z^{-1} + 0.40 z^{-2}) v(k).$$
The parameter vector to be estimated is:
$$\theta = [g_1, g_2, g_3, h_1, h_2, h_3, e_1, e_2]^{\mathrm T} = [-0.80, -0.50, -0.60, 2.20, 0.30, 0.20, 0.50, 0.40]^{\mathrm T}.$$
The simulation conditions are the same as those in Example 1. Table 3 and Figure 6 compare the bias compensation-based parameter estimates and errors under noise variances δ = 0.25 and δ = 1.00 . Figure 7 and Figure 8 depict the curves of the parameter estimates θ ^ ( k ) versus the instant k.
Figure 9, Figure 10 and Figure 11 plot the dynamics of three state estimates x ^ i ( k ) , i = 1 , 2 , 3 . Figure 12 describes the output comparison of y ^ ( k ) with y ( k ) .
Some conclusions can be drawn from Table 2, Table 3 and Figures 1–12.

5. Conclusions

This paper discusses the identification of observability canonical state-space systems with colored noise via the proposed bias compensation-based parameter and state estimation algorithm. The numerical results indicate that the algorithm can effectively estimate the system states and parameters, and its advantage is that the parameter estimates are unbiased. The algorithm can be combined with other recursive algorithms, such as the multi-innovation algorithm, to study the identification of nonlinear state-space systems [39], dual-rate systems [40], signal modeling [41] and time series analysis [42,43].

Author Contributions

Joint work.

Funding

This work was supported by the Science and Technology Project of Henan Province, China (182102210536, 182102210538), the Key Research Project of Henan Higher Education Institutions, China (18A120003, 18A130001, 18B520036), the National Natural Science Foundation of China (61503122) and the Nanhu Scholars Program for Young Scholars of XYNU, China.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Na, J.; Yang, J.; Wu, X.; Guo, Y. Robust adaptive parameter estimation of sinusoidal signals. Automatica 2015, 53, 376–384. [Google Scholar] [CrossRef]
  2. Kalafatis, A.D.; Wang, L.; Cluett, W.R. Identification of time-varying pH processes using sinusoidal signals. Automatica 2005, 41, 685–691. [Google Scholar] [CrossRef]
  3. Na, J.; Yang, J.; Ren, X.M.; Guo, Y. Robust adaptive estimation of nonlinear system with time-varying parameters. Int. J. Adapt. Control Signal Process. 2015, 29, 1055–1072. [Google Scholar] [CrossRef]
  4. Liu, S.Y.; Xu, L.; Ding, F. Iterative parameter estimation algorithms for dual-frequency signal models. Algorithms 2017, 10, 118. [Google Scholar] [CrossRef]
  5. Na, J.; Mahyuddin, M.N.; Herrmann, G.; Ren, X.; Barber, P. Robust adaptive finite-time parameter estimation and control for robotic systems. Int. J. Robust Nonlinear Control 2015, 25, 3045–3071. [Google Scholar] [CrossRef] [Green Version]
  6. Huang, W.; Ding, F. Coupled least squares identification algorithms for multivariate output-error systems. Algorithms 2017, 10, 12. [Google Scholar] [CrossRef]
  7. Goos, J.; Pintelon, R. Continuous-time identification of periodically parameter-varying state space models. Automatica 2016, 71, 254–263. [Google Scholar] [CrossRef]
  8. AlMutawa, J. Identification of errors-in-variables state space models with observation outliers based on minimum covariance determinant. J. Process Control 2009, 19, 879–887. [Google Scholar] [CrossRef]
  9. Yuan, Y.; Zhang, H.; Wu, Y.; Zhu, T.; Ding, H. Bayesian learning-based model predictive vibration control for thin-walled workpiece machining processes. IEEE/ASME Trans. Mechatron. 2017, 22, 509–520. [Google Scholar] [CrossRef]
  10. Ding, F. Combined state and least squares parameter estimation algorithms for dynamic systems. Appl. Math. Model. 2014, 38, 403–412. [Google Scholar] [CrossRef]
  11. Ma, J.X.; Xiong, W.L.; Chen, J.; Ding, F. Hierarchical identification for multivariate Hammerstein systems by using the modified Kalman filter. IET Control Theory Appl. 2017, 11, 857–869. [Google Scholar] [CrossRef]
  12. Fatehi, A.; Huang, B. Kalman filtering approach to multi-rate information fusion in the presence of irregular sampling rate and variable measurement delay. J. Process Control 2017, 53, 15–25. [Google Scholar] [CrossRef]
  13. Zhao, S.; Shmaliy, Y.S.; Liu, F. Fast Kalman-like optimal unbiased FIR filtering with applications. IEEE Trans. Signal Process. 2016, 64, 2284–2297. [Google Scholar] [CrossRef]
  14. Zhou, Z.P.; Liu, X.F. State and fault estimation of sandwich systems with hysteresis. Int. J. Robust Nonlinear Control 2018, 28, 3974–3986. [Google Scholar] [CrossRef]
  15. Zhao, S.; Huang, B.; Liu, F. Linear optimal unbiased filter for time-variant systems without apriori information on initial condition. IEEE Trans. Autom. Control 2017, 62, 882–887. [Google Scholar] [CrossRef]
  16. Zhao, S.; Shmaliy, Y.S.; Liu, F. On the iterative computation of error matrix in unbiased FIR filtering. IEEE Signal Process. Lett. 2017, 24, 555–558. [Google Scholar] [CrossRef]
  17. Erazo, K.; Nagarajaiah, S. An offline approach for output-only Bayesian identification of stochastic nonlinear systems using unscented Kalman filtering. J. Sound Vib. 2017, 397, 222–240. [Google Scholar] [CrossRef]
  18. Verhaegen, M.; Verdult, V. Filtering and System Identification: A Least Squares Approach; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  19. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  20. Yu, C.P.; Ljung, L.; Verhaegen, M. Identification of structured state-space models. Automatica 2018, 90, 54–61. [Google Scholar] [CrossRef]
  21. Naitali, A.; Giri, F. Persistent excitation by deterministic signals for subspace parametric identification of MISO Hammerstein systems. IEEE Trans. Autom. Control 2016, 61, 258–263. [Google Scholar] [CrossRef]
  22. Ase, H.; Katayama, T. A subspace-based identification of Wiener-Hammerstein benchmark model. Control Eng. Pract. 2015, 44, 126–137. [Google Scholar] [CrossRef]
  23. Xu, L.; Ding, F.; Gu, Y.; Alsaedi, A.; Hayat, T. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process. 2017, 140, 97–103. [Google Scholar] [CrossRef]
  24. Zhang, X.; Ding, F.; Xu, L.; Yang, E.F. State filtering-based least squares parameter estimation for bilinear systems using the hierarchical identification principle. IET Control Theory Appl. 2018, 12, 1704–1713. [Google Scholar] [CrossRef]
  25. Pavelková, L.; Kárný, M. State and parameter estimation of state-space model with entry-wise correlated uniform noise. Int. J. Adapt. Control Signal Process. 2014, 28, 1189–1205. [Google Scholar] [CrossRef]
  26. Ma, J.X.; Wu, O.Y.; Huang, B.; Ding, F. Expectation maximization estimation for a class of input nonlinear state space systems by using the Kalman smoother. Signal Process. 2018, 145, 295–303. [Google Scholar] [CrossRef]
  27. Li, J.H.; Zheng, W.X.; Gu, J.P.; Hua, L. A recursive identification algorithm for Wiener nonlinear systems with linear state-space subsystem. Circuits Syst. Signal Process. 2018, 37, 2374–2393. [Google Scholar] [CrossRef]
  28. Wang, H.W.; Liu, T. Recursive state-space model identification of non-uniformly sampled systems using singular value decomposition. Chin. J. Chem. Eng. 2014, 22, 1268–1273. [Google Scholar] [CrossRef]
  29. Ding, J.L. Data filtering based recursive and iterative least squares algorithms for parameter estimation of multi-input output systems. Algorithms 2016, 9, 49. [Google Scholar] [CrossRef]
  30. Yu, C.P.; You, K.Y.; Xie, L.H. Quantized identification of ARMA systems with colored measurement noise. Automatica 2016, 66, 101–108. [Google Scholar] [CrossRef]
  31. Jafari, M.; Salimifard, M.; Dehghani, M. Identification of multivariable nonlinear systems in the presence of colored noises using iterative hierarchical least squares algorithm. ISA Trans. 2014, 53, 1243–1252. [Google Scholar] [CrossRef] [PubMed]
  32. Sagara, S.; Wada, K. On-line modified least-squares parameter estimation of linear discrete dynamic systems. Int. J. Control 1977, 25, 329–343. [Google Scholar] [CrossRef]
  33. Mejari, M.; Piga, D.; Bemporad, A. A bias-correction method for closed-loop identification of linear parameter-varying systems. Automatica 2018, 87, 128–141. [Google Scholar] [CrossRef]
  34. Ding, J.; Ding, F. Bias compensation based parameter estimation for output error moving average systems. Int. J. Adapt. Control Signal Process. 2011, 25, 1100–1111. [Google Scholar] [CrossRef]
  35. Diversi, R. Bias-eliminating least-squares identification of errors-in-variables models with mutually correlated noises. Int. J. Adapt. Control Signal Process. 2013, 27, 915–924. [Google Scholar] [CrossRef]
  36. Zhang, Y. Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods. Math. Comput. Model. 2011, 53, 1810–1819. [Google Scholar] [CrossRef]
  37. Zheng, W.X. A bias correction method for identification of linear dynamic errors-in-variables models. IEEE Trans. Autom. Control 2002, 47, 1142–1147. [Google Scholar] [CrossRef]
  38. Wang, X.H.; Ding, F.; Alsaedi, A.; Hayat, T. Filtering based parameter estimation for observer canonical state space systems with colored noise. J. Frankl. Inst. 2017, 354, 593–609. [Google Scholar] [CrossRef]
  39. Pan, W.; Yuan, Y.; Goncalves, J.; Stan, G.B. A sparse Bayesian approach to the identification of nonlinear state-space systems. IEEE Trans. Autom. Control 2016, 61, 182–187. [Google Scholar] [CrossRef]
  40. Yang, X.Q.; Yang, X.B. Local identification of LPV dual-rate system with random measurement delays. IEEE Trans. Ind. Electron. 2018, 65, 1499–1507. [Google Scholar] [CrossRef]
  41. Xu, L.; Xiong, W.L.; Alsaedi, A.; Hayat, T. Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 2018, 16, 1756–1764. [Google Scholar] [CrossRef]
42. Gan, M.; Chen, C.L.P.; Chen, G.Y.; Chen, L. On some separated algorithms for separable nonlinear least squares problems. IEEE Trans. Cybern. 2018, 48, 2866–2874. [Google Scholar] [CrossRef] [PubMed]
  43. Chen, G.Y.; Gan, M.; Chen, G.L. Generalized exponential autoregressive models for nonlinear time series: Stationarity, estimation and applications. Inf. Sci. 2018, 438, 46–57. [Google Scholar] [CrossRef]
Figure 1. The estimation errors τ versus k with different noise variances for Example 1.
Figure 2. The parameter estimates θ̂(k) versus k for Example 1.
Figure 3. The state x1(k) and the estimated state x̂1(k) versus k for Example 1 (δ = 0.25).
Figure 4. The state x2(k) and the estimated state x̂2(k) versus k for Example 1 (δ = 0.25).
Figure 5. The actual output y(k) and the estimated output ŷ(k) versus k for Example 1.
Figure 6. The estimation errors τ versus k with different noise variances for Example 2.
Figure 7. The parameter estimates ĝ1(k), ĝ2(k), ĝ3(k) and ĥ1(k) versus k for Example 2.
Figure 8. The parameter estimates ĥ2(k), ĥ3(k), ê1(k) and ê2(k) versus k for Example 2.
Figure 9. The true state x1(k) and the estimated state x̂1(k) versus k for Example 2 (δ = 0.25).
Figure 10. The true state x2(k) and the estimated state x̂2(k) versus k for Example 2 (δ = 0.25).
Figure 11. The true state x3(k) and the estimated state x̂3(k) versus k for Example 2 (δ = 0.25).
Figure 12. The actual output y(k) and the estimated output ŷ(k) versus k for Example 2.
Table 1. The nomenclature for symbols.

T        matrix/vector transpose
A^{−1}   inverse of the matrix A
E        expectation operator
z        unit forward shift operator
‖X‖      2-norm of a matrix X
I_m      m × m identity matrix
1_n      n-dimensional column vector whose elements are 1
θ̂(k)     estimate of the parameter vector θ at instant k
θ(1:m)   a vector consisting of the first entry to the m-th entry of θ
Table 2. The bias compensation-based parameter estimates and errors for Example 1.

δ     k     g1        g2       h1       h2        e1       e2        τ (%)
1.00  20    −0.90672  0.58805  2.47895  −2.33308  0.12542  −1.41696  74.94801
      50    −0.91146  0.77864  1.56458  −1.45439  0.06347  −0.68669  21.66474
      100   −0.89769  0.78062  1.45065  −1.56252  0.11477  −0.61555  15.33989
      200   −0.89947  0.79122  1.18977  −1.58852  0.18174  −0.62250   4.02568
      500   −0.90289  0.79326  1.19050  −1.56703  0.17911  −0.62993   4.35605
      1000  −0.89502  0.79742  1.14729  −1.63721  0.18966  −0.61945   2.71347
0.25  20    −0.89902  0.68391  1.87036  −1.86695  0.37006  −0.70475  35.74537
      50    −0.90830  0.79067  1.34730  −1.49092  0.18690  −0.51892  11.92791
      100   −0.90003  0.79044  1.28652  −1.56142  0.20501  −0.50891   8.91773
      200   −0.90083  0.79596  1.15147  −1.58293  0.22360  −0.53086   3.84174
      500   −0.90199  0.79695  1.14809  −1.57856  0.20617  −0.56528   2.67739
      1000  −0.89776  0.79869  1.12493  −1.61634  0.20773  −0.57641   1.63983
True values −0.90000  0.80000  1.10000  −1.60000  0.20000  −0.60000   0.00000
Table 3. The bias compensation-based parameter estimates and errors for Example 2.

δ     k     g1        g2       g3       h1       h2        h3       e1       e2       τ (%)
1.00  20    −0.62874  0.23404  0.69918  2.71399  −0.51430  0.03521  0.55758  0.54550  40.52480
      50    −0.63162  0.17455  0.82387  2.36896  −0.10338  0.17046  0.57261  0.36257  24.04086
      100   −0.72920  0.37705  0.69853  2.10786   0.00205  0.03490  0.53153  0.47341  15.57075
      200   −0.77990  0.43541  0.64826  2.07686   0.11157  0.16642  0.52363  0.43391   9.54558
      500   −0.80210  0.49447  0.60384  2.19878   0.24877  0.16126  0.56242  0.43637   3.76452
      1000  −0.80600  0.50437  0.59777  2.23514   0.29924  0.19105  0.52608  0.42701   2.04986
0.25  20    −0.70399  0.32858  0.67496  2.51439  −0.09217  0.18308  0.56489  0.42596  21.34339
      50    −0.69558  0.29717  0.73981  2.28977   0.11809  0.20666  0.54171  0.38144  13.16282
      100   −0.77049  0.44740  0.64636  2.15712   0.16039  0.13532  0.48821  0.46680   7.35863
      200   −0.79424  0.47428  0.62275  2.14390   0.21064  0.19565  0.49299  0.43757   4.56623
      500   −0.80223  0.49921  0.60237  2.20136   0.27695  0.18522  0.53537  0.42783   2.05074
      1000  −0.80385  0.50317  0.59907  2.21845   0.30107  0.19783  0.51498  0.41939   1.21121
True values −0.80000  0.50000  0.60000  2.20000   0.30000  0.20000  0.50000  0.40000   0.00000
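The error index τ reported in Tables 2 and 3 is consistent with the usual relative parameter-estimation error τ = ‖θ̂(k) − θ‖/‖θ‖ × 100%, where θ stacks the listed g, h and e entries. The minimal sketch below reproduces the last tabulated row of each example under that assumption; the helper name `tau_percent` is illustrative, not taken from the paper.

```python
import math

def tau_percent(theta_hat, theta):
    """Relative parameter estimation error, in percent (assumed definition)."""
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(theta_hat, theta)))
    den = math.sqrt(sum(b ** 2 for b in theta))
    return 100.0 * num / den

# Example 1, delta = 0.25, k = 1000 (last estimate row of Table 2)
theta1_true = [-0.9, 0.8, 1.1, -1.6, 0.2, -0.6]
theta1_hat = [-0.89776, 0.79869, 1.12493, -1.61634, 0.20773, -0.57641]
print(tau_percent(theta1_hat, theta1_true))  # ≈ 1.64; Table 2 reports 1.63983

# Example 2, delta = 0.25, k = 1000 (last estimate row of Table 3)
theta2_true = [-0.8, 0.5, 0.6, 2.2, 0.3, 0.2, 0.5, 0.4]
theta2_hat = [-0.80385, 0.50317, 0.59907, 2.21845, 0.30107, 0.19783,
              0.51498, 0.41939]
print(tau_percent(theta2_hat, theta2_true))  # ≈ 1.21; Table 3 reports 1.21121
```

The small residual gap between the computed values and the tabulated ones comes from the five-decimal rounding of the printed estimates.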

Share and Cite

MDPI and ACS Style

Wang, X.; Ding, F.; Liu, Q.; Jiang, C. The Bias Compensation Based Parameter and State Estimation for Observability Canonical State-Space Models with Colored Noise. Algorithms 2018, 11, 175. https://doi.org/10.3390/a11110175
