Coupled Least Squares Identification Algorithms for Multivariate Output-Error Systems

Abstract: This paper focuses on the recursive identification problem for multivariate output-error systems. By decomposing the system into several subsystems and by forming a coupled relationship between the parameter estimation vectors of the subsystems, two coupled auxiliary model based recursive least squares (RLS) algorithms are presented. Moreover, in contrast to the auxiliary model based recursive least squares algorithm, the proposed algorithms achieve higher identification accuracy for the multivariate output-error system. The simulation results confirm the effectiveness of the proposed algorithms.


Introduction
Multivariable systems are popular in industrial processes [1][2][3], and a number of successful methods have been developed to solve the identification and control problems of multivariable systems [4][5][6][7]. For example, Zhang and Hoagg used a candidate-pool approach to identify the feedback and feedforward transfer function matrices and presented a frequency-domain technique for identifying multivariable feedback and feedforward systems [8]; Salhi and Kamoun proposed a recursive algorithm to estimate the parameters of the dynamic linear part and the static nonlinear part of multivariable Hammerstein systems [9].
The idea of the auxiliary model is to use the measurable information to construct a dynamical model and to replace the unknown variables in the information vector with the output of the auxiliary model [10,11]. There are two typical identification methods for multivariate output-error systems: stochastic gradient (SG) algorithms [12,13] and recursive least squares (RLS) algorithms [14,15]. The SG algorithm requires a lower computational cost, but the RLS algorithm has a faster convergence rate than the SG algorithm [16]. The RLS algorithm has been applied to the identification of various systems [17,18]. For example, on the basis of the work in [19], Jin et al. proposed an auxiliary model based recursive least squares algorithm for autoregressive output-error autoregressive systems [20]; and Wang and Tang presented an auxiliary model based recursive least squares algorithm for a class of linear-in-parameter output-error moving average systems [21].
Although the RLS algorithm can be applied to identify the parameters of multivariate output-error systems, it requires computing a matrix inversion (see Remark 1 in the following section), resulting in a large computational burden [22]. This motivates us to study a new coupled least squares algorithm that does not involve matrix inversion. The coupling identification concept is useful for simplifying the parameter estimation of coupled-parameter multivariable systems [23]. It is based on the coupled relationship of the parameter estimates between the subsystems of a multivariable system [24][25][26]. The purpose of coupling identification is to reduce the redundant estimation of the subsystem parameter vectors and to avoid computing the matrix inversion of the RLS algorithm. Recently, a coupled least squares algorithm has been proposed for multiple linear regression systems [22].
This paper focuses on the parameter estimation of multivariate output-error systems, and the main contributions of this paper are the following:

•
For multivariate output-error systems, this paper derives two coupled least squares parameter estimation algorithms by using the auxiliary model identification idea and the coupling identification concept.

•
The proposed algorithms can generate more accurate parameter estimates and avoid computing the matrix inversion in the multivariable RLS algorithm, thereby reducing the computational load.
The rest of this paper is organized as follows: Section 2 gives some definitions and the identification model of multivariate output-error systems. Section 3 presents two new coupled auxiliary model identification algorithms. Section 4 gives two simulation examples to validate the effectiveness of the proposed methods. Finally, some concluding remarks are offered in Section 5.

System Description and Identification Model
Let us introduce some notation. The symbol I_m is an m × m identity matrix; 1_n is an n-dimensional column vector whose elements are all 1; the superscript T denotes the matrix transpose; the norm of the matrix X is defined by ||X||^2 := tr[X X^T]; the symbol ⊗ denotes the Kronecker product (or direct product); col[X] denotes the vector formed by stacking the columns of the matrix X; X̂(t) denotes the estimate of X at time t, and X̃(t) := X̂(t) − X denotes the estimation error.
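As a quick numerical check of this notation (assuming col[X] carries its usual column-stacking meaning), the following sketch verifies the norm definition and the column-stacking operation in NumPy:

```python
import numpy as np

# Numerical check of the notation: ||X||^2 := tr[X X^T] equals the sum of
# the squared entries of X (the squared Frobenius norm), and col[X] stacks
# the columns of X into one vector.
X = np.array([[1., 2.],
              [3., 4.]])

norm_sq = np.trace(X @ X.T)      # 1 + 4 + 9 + 16 = 30
col_X = X.flatten(order="F")     # column-wise stacking: [1, 3, 2, 4]

print(norm_sq)
print(col_X)
```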
Consider the following multivariate output-error system, where y(t) := [y_1(t), y_2(t), ..., y_m(t)]^T ∈ R^m is the system output vector and the noisy measurement of x(t); Φ_s(t) ∈ R^{m×n} is the information matrix consisting of the input-output data; θ ∈ R^n is the parameter vector; v(t) ∈ R^m is the observation noise vector with zero mean; and z^{-1} is the unit backward shift operator: z^{-1} y(t) = y(t − 1).
Assume that the orders m, n and n_a are known and that y(t) = 0, Φ_s(t) = 0 and v(t) = 0 for t ≤ 0; {Φ_s(t), y(t)} are the available measurement data.
For the identification model in (5), Φ_x(t) is the information matrix that consists of the unknown inner variables x(t − j), so we construct an auxiliary model x_a(t) and define the estimate Φ̂_x(t) of Φ_x(t). Then we use Φ̂_x(t) and Φ_s(t) to construct the estimate Φ̂(t) of Φ(t). Thus, according to (4), we can obtain the auxiliary model. The objective of this paper is to use the auxiliary model identification idea and the coupling identification concept to derive new methods for estimating the system parameters θ, a_1, a_2, ..., a_{n_a} from the observation data {y(t), Φ_s(t)}, and to confirm the theoretical results with simulation examples.

The Auxiliary Model Based Recursive Least Squares Algorithm
According to the identification model in (5), define a cost function. Based on the auxiliary model identification idea and on the derivation of the RLS algorithm [27,28], we use the output x_a(t) of the auxiliary model in place of the unknown inner vector x(t), replace the unknown information matrix Φ(t) with its estimate Φ̂(t), and obtain the following auxiliary model based recursive least squares (AM-RLS) algorithm. The steps of computing the parameter estimation vector θ̂(t) by the AM-RLS algorithm are listed in the following:
1. Let t = 1, and initialize θ̂(0) with all entries equal to 1/p_0 and P(0) = p_0 I, p_0 = 10^6. Set the data length L.
2. Collect the observation data {Φ_s(t), y(t)} and form the information matrix Φ̂_x(t) by (10).
3. Form Φ̂(t) by (9), and compute L(t) by (7) and P(t) by (8).
4. Update the parameter estimation vector θ̂(t) by (6).
5. Compute the output x_a(t) of the auxiliary model using (11).
6. If t = L, stop the recursive computation and obtain the parameter estimates; otherwise, increase t by 1 and go to Step 2.
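The steps above can be sketched in code. Since Equations (6)-(11) are not reproduced here, this is only a minimal illustrative implementation that assumes the standard multivariable RLS update forms (consistent with the m × m inversion quoted in Remark 1 below); the first-order inner model, the dimensions m = 2 and n = 3, and the parameter values are hypothetical choices for the demonstration, not the paper's example system.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3                            # hypothetical sizes: 2 outputs, 3 input parameters
a1_true = 0.5                          # assumed AR coefficient of the inner model x(t)
theta_true = np.array([1.0, -0.8, 0.6])
rho_true = np.concatenate(([a1_true], theta_true))   # stacked parameter vector

L_len = 3000                           # data length L
n0 = 1 + n
rho_hat = np.ones(n0) / 1e6            # initialize with entries 1/p0, p0 = 1e6
P = 1e6 * np.eye(n0)                   # initial covariance P(0) = p0 * I
x_prev = np.zeros(m)                   # true inner variable x(t-1)
xa_prev = np.zeros(m)                  # auxiliary model output x_a(t-1)

for t in range(L_len):
    Phi_s = rng.standard_normal((m, n))            # persistent excitation data
    x = -a1_true * x_prev + Phi_s @ theta_true     # unknown inner variable x(t)
    y = x + 0.1 * rng.standard_normal(m)           # noisy measurement y(t)

    # Estimated information matrix: unknown x(t-1) replaced by x_a(t-1)
    Phi_hat = np.hstack([-xa_prev[:, None], Phi_s])

    # AM-RLS update: gain (with the m x m inversion), estimate, covariance
    S = np.eye(m) + Phi_hat @ P @ Phi_hat.T
    L_gain = P @ Phi_hat.T @ np.linalg.inv(S)
    rho_hat = rho_hat + L_gain @ (y - Phi_hat @ rho_hat)
    P = (np.eye(n0) - L_gain @ Phi_hat) @ P

    xa_prev = Phi_hat @ rho_hat                    # auxiliary model output x_a(t)
    x_prev = x

err = np.linalg.norm(rho_hat - rho_true) / np.linalg.norm(rho_true)
print(f"AM-RLS relative estimation error: {err:.4f}")
```

Note how the gain computation inverts an m × m matrix at every step; this is exactly the cost that the coupled algorithms below are designed to avoid.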

Remark 1.
For the multivariable RLS algorithm in (6)-(10), we can see from (7) that it requires computing the matrix inversion [I_m + Φ̂(t)P(t − 1)Φ̂^T(t)]^{-1} ∈ R^{m×m} at each step, resulting in a heavy computational load, especially for a large m (the number of outputs). This is the drawback of the multivariable RLS algorithm in (6)-(10), and it motivates us to study new coupled parameter identification methods.

The Coupled Subsystem Auxiliary Model Based Recursive Least Squares Algorithm
Coupling identification is typically used to reduce the redundant estimates of the system parameter vectors, based on the coupled relationship of the parameter estimates between subsystems [22].
Let φ_i^T(t) ∈ R^{1×n_0} be the ith row of the information matrix Φ(t). From (5), we obtain m identification models (subsystems). All subsystems contain a common parameter vector ϑ ∈ R^{n_0}. In general, any one of the subsystems can be used to identify the parameter vector ϑ; however, in order to improve the parameter estimation precision, we should make full use of the information in all subsystems for identifying ϑ.
Based on the RLS algorithm in (6)-(11), and applying the auxiliary model idea, we replace the unknown variables φ_i(t) in the identification algorithm with their estimates φ̂_i(t), and obtain m RLS algorithms from (13), namely the subsystem recursive least squares (S-RLS) algorithm. From here, we can see that there is no coupled relationship between the subsystem parameter estimation vectors θ̂_i(t).
Remark 2. For i = 1, 2, ..., m, we can obtain m estimation vectors θ̂_i(t) from (14)-(16), and they are all estimates of the common parameter vector ϑ in all subsystems, resulting in a large amount of redundant parameter estimates. One way is to use their average as the estimate of ϑ, as in (17). If we regard the parameter estimate θ̂(t) in (17) as the output parameter vector, then each S-RLS identification algorithm is still independent. According to the coupling identification concept, we use θ̂(t − 1) to replace θ̂_i(t − 1) in the S-RLS algorithm, and get the coupled subsystem AM-RLS (C-S-AM-RLS) algorithm. The steps of computing the parameter estimation vector θ̂(t) by the C-S-AM-RLS algorithm in (18)-(25) are listed in the following:
1. Let t = 1, and initialize each θ̂_i(0) with all entries equal to 1/p_0 and P_i(0) = p_0 I, p_0 = 10^6. Set the data length L.
2. Collect the observation data {Φ_s(t), y(t)} and form the information matrix Φ̂_x(t) by (22).
3. Form Φ̂(t) by (23) and read φ̂_i(t) from Φ̂(t) by (24).
4. Compute L_i(t) by (19) and P_i(t) by (20), and update the parameter estimation vector θ̂_i(t) by (18).
5. Compute θ̂(t) by (21) and x_a(t) by (25).
6. If t = L, stop the recursive computation and obtain the parameter estimates; otherwise, increase t by 1 and go to Step 2.
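The coupled-averaging scheme can be sketched as follows. Equations (18)-(25) are not reproduced here, so this sketch assumes the standard scalar-gain subsystem RLS forms, with each subsystem starting its update from the shared average θ̂(t − 1); the test system (a first-order inner model with hypothetical dimensions and parameter values) is the same illustrative one used above.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3                            # hypothetical sizes
a1_true = 0.5
theta_true = np.array([1.0, -0.8, 0.6])
rho_true = np.concatenate(([a1_true], theta_true))
n0 = 1 + n

theta_bar = np.ones(n0) / 1e6          # coupled (averaged) estimate, entries 1/p0
P = [1e6 * np.eye(n0) for _ in range(m)]   # one covariance per subsystem
x_prev = np.zeros(m)
xa_prev = np.zeros(m)

for t in range(3000):
    Phi_s = rng.standard_normal((m, n))
    x = -a1_true * x_prev + Phi_s @ theta_true
    y = x + 0.1 * rng.standard_normal(m)

    Phi_hat = np.hstack([-xa_prev[:, None], Phi_s])   # estimated information matrix

    # Each subsystem performs a scalar-gain RLS step starting from the
    # shared estimate theta_bar(t-1); the m updates are then averaged.
    theta_i = np.empty((m, n0))
    for i in range(m):
        phi = Phi_hat[i]                              # i-th row phi_hat_i(t)
        Li = P[i] @ phi / (1.0 + phi @ P[i] @ phi)    # gain: scalar division only
        theta_i[i] = theta_bar + Li * (y[i] - phi @ theta_bar)
        P[i] = P[i] - np.outer(Li, phi @ P[i])

    theta_bar = theta_i.mean(axis=0)                  # coupled average estimate
    xa_prev = Phi_hat @ theta_bar                     # auxiliary model output
    x_prev = x

err = np.linalg.norm(theta_bar - rho_true) / np.linalg.norm(rho_true)
print(f"C-S-AM-RLS relative estimation error: {err:.4f}")
```

Note that each subsystem step divides only by the scalar 1 + φ̂_i^T(t) P_i(t − 1) φ̂_i(t), so no m × m matrix inversion is needed.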

The Coupled Auxiliary Model Based Recursive Least Squares Algorithm
To avoid redundant parameter estimates, we use the coupling identification concept to derive a coupled AM-RLS algorithm based on the C-S-AM-RLS algorithm.
Referring to the partially coupled SG identification method [24], and with the help of the Jacobi or Gauss-Seidel iterative algorithm, we replace θ̂_1(t − 1) with θ̂_m(t − 1) for i = 1 and replace θ̂_i(t − 1) with θ̂_{i−1}(t) for i = 2, 3, ..., m, and obtain the following coupled auxiliary model based recursive least squares (C-AM-RLS) identification algorithm. In the algorithm in (26)-(35), θ̂_i(t) ∈ R^{n_0} is the parameter estimation vector of the ith subsystem at time t, L_i(t) ∈ R^{n_0} is the gain vector of the ith subsystem at time t, and P_i(t) ∈ R^{n_0×n_0} is the covariance matrix of the ith subsystem at time t. θ̂_{i−1}(t) and P_{i−1}(t) are the parameter estimation vector and the covariance matrix of the (i − 1)th subsystem at time t, respectively; θ̂_m(t − 1) and P_m(t − 1) are the parameter estimation vector and the covariance matrix of the mth subsystem at time t − 1, respectively. The system parameter estimation vector is defined as the parameter estimation vector of the mth subsystem at time t: θ̂(t) = θ̂_m(t).
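This Gauss-Seidel-style coupling can be sketched as below. Equations (26)-(35) are not reproduced here, so the scalar-gain update forms are assumed as in the previous sketch, and the test system (hypothetical dimensions and parameter values) is again the same illustrative one: subsystem 1 starts from subsystem m's estimate and covariance at time t − 1, and subsystem i starts from subsystem i − 1's at time t.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 3                            # hypothetical sizes
a1_true = 0.5
theta_true = np.array([1.0, -0.8, 0.6])
rho_true = np.concatenate(([a1_true], theta_true))
n0 = 1 + n

theta_hat = np.ones((m, n0)) / 1e6     # row i holds subsystem i's estimate
P = [1e6 * np.eye(n0) for _ in range(m)]
x_prev = np.zeros(m)
xa_prev = np.zeros(m)

for t in range(3000):
    Phi_s = rng.standard_normal((m, n))
    x = -a1_true * x_prev + Phi_s @ theta_true
    y = x + 0.1 * rng.standard_normal(m)

    Phi_hat = np.hstack([-xa_prev[:, None], Phi_s])

    # Gauss-Seidel-style chain: subsystem 1 starts from (theta_m, P_m) at
    # time t-1; subsystem i starts from (theta_{i-1}, P_{i-1}) at time t.
    prev_theta, prev_P = theta_hat[m - 1], P[m - 1]
    for i in range(m):
        phi = Phi_hat[i]
        Li = prev_P @ phi / (1.0 + phi @ prev_P @ phi)
        theta_hat[i] = prev_theta + Li * (y[i] - phi @ prev_theta)
        P[i] = prev_P - np.outer(Li, phi @ prev_P)
        prev_theta, prev_P = theta_hat[i], P[i]

    xa_prev = Phi_hat @ theta_hat[m - 1]   # system estimate: theta(t) = theta_m(t)
    x_prev = x

err = np.linalg.norm(theta_hat[m - 1] - rho_true) / np.linalg.norm(rho_true)
print(f"C-AM-RLS relative estimation error: {err:.4f}")
```

Because each subsystem passes its freshly updated estimate to the next one within the same time step, the information from all m outputs propagates through the chain without any matrix inversion, which is the source of the accuracy advantage reported in the examples.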
The procedure of computing the parameter estimation vector θ̂_m(t) in (26)-(35) is as follows.

Examples
Example 1. Consider the following multivariate output-error system. In the simulation, we generate two persistent excitation signal sequences with zero mean and unit variance as the inputs {u_1(t)} and {u_2(t)}, and take v_1(t) and v_2(t) to be two white noise sequences with zero mean and variances σ_1^2 for v_1(t) and σ_2^2 for v_2(t). Taking σ_1^2 = σ_2^2 = σ^2 = 0.50^2 and the data length L = 3000, and applying the AM-RLS, C-S-AM-RLS and C-AM-RLS algorithms to estimate the parameters of this system, the parameter estimates are shown in Tables 1-3, and the estimation errors δ := ||θ̂(t) − θ||/||θ|| versus t are shown in Figures 1 and 2.
From Tables 1-3 and Figures 1 and 2, we can draw the following conclusions.

•
The parameter estimation errors of the presented algorithms become smaller and smaller and go to zero as time t increases.

•
In contrast to the AM-RLS algorithm, the proposed C-S-AM-RLS and C-AM-RLS algorithms have faster convergence rates and more accurate parameter estimates under the same simulation conditions.

Example 2. Consider the following 2-input 2-output system. This example system can be transformed into the multivariate output-error system form. The simulation conditions are similar to those of Example 1. Applying the AM-RLS algorithm, the C-S-AM-RLS algorithm and the C-AM-RLS algorithm with σ^2 = 0.50^2 and σ^2 = 0.20^2 to estimate the parameters of this system, the parameter estimates are shown in Tables 4-6, and the estimation errors δ versus t are shown in Figures 3-7.
From Tables 4-6 and Figures 3-7, we can draw the following conclusions:

•
In contrast to the AM-RLS algorithm, the proposed C-S-AM-RLS and C-AM-RLS algorithms have faster convergence rates and more accurate parameter estimates under the same simulation conditions, and the C-AM-RLS algorithm obtains the most accurate estimates of the system parameters.

•
The parameter estimation errors given by the proposed algorithms are smaller under a lower noise level (see Tables 4-6 and Figures 3-7).

Conclusions
By means of the auxiliary model identification idea, this paper employs the coupling identification concept to propose novel recursive identification methods for multivariate output-error systems. The proposed methods have the following properties:

•
The C-S-AM-RLS algorithm and the C-AM-RLS algorithm are derived by forming a coupled relationship between the parameter estimation vectors of the subsystems. They avoid computing the matrix inversion in the multivariable AM-RLS algorithm, so they require a lower computational load while achieving highly accurate parameter estimates.

•
As the noise-to-signal ratio decreases, the parameter estimation errors given by the proposed algorithms become smaller.
The basic idea of the proposed algorithms in this paper can be extended and applied to other fields [29][30][31].

Figure 1 .
Figure 1. The AM-RLS and the C-S-AM-RLS estimation errors δ versus t for Example 1.

Figure 2 .
Figure 2. The AM-RLS and the C-AM-RLS estimation errors δ versus t for Example 1.

Table 1 .
The AM-RLS estimates and their errors for Example 1.

Table 2 .
The C-S-AM-RLS estimates and their errors for Example 1.

Table 3 .
The C-AM-RLS estimates and their errors for Example 1.

Table 4 .
The AM-RLS estimates and errors with different noise variances for Example 2.

Table 5 .
The C-S-AM-RLS estimates and errors with different noise variances for Example 2.

Table 6 .
The C-AM-RLS estimates and errors with different noise variances for Example 2.