
A Greedy Pursuit Hierarchical Iteration Algorithm for Multi-Input Systems with Colored Noise and Unknown Time-Delays

School of Intelligent Manufacturing, Nanyang Institute of Technology, Nanyang 473004, China
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(8), 374; https://doi.org/10.3390/a16080374
Submission received: 5 July 2023 / Revised: 28 July 2023 / Accepted: 31 July 2023 / Published: 4 August 2023

Abstract

This paper focuses on the joint estimation of parameters and time delays for multi-input systems with unknown input delays and colored noise. A greedy pursuit hierarchical iteration algorithm is proposed, which reduces the estimation cost. Firstly, an over-parameterized approach is employed to construct a sparse model of the multi-input system even in the absence of prior knowledge of the time delays. Secondly, the hierarchical principle is applied to replace the unknown true noise items with their estimated values, and a greedy pursuit search based on compressed sensing is employed to find the key parameters using limited sampled data. The greedy pursuit search effectively reduces the scale of the system model and improves the identification efficiency. Then, the parameters and time delays are estimated simultaneously by iterative methods, using the known orders and the found locations of the key parameters. Finally, simulations are provided to illustrate the effectiveness of the presented algorithm.

1. Introduction

System identification is a crucial process in modern control theory that determines the mathematical model of a system from its input–output measurement data; the field is summarized and extended in references [1,2]. This modeling and estimation technique helps engineers estimate the parameters that characterize system behavior, which can then be used to design controllers for various applications, so efficient identification methods are very important. The performance analysis of some identification algorithms is presented in [3], and some new identification methods based on multi-innovation theory are presented in [4]. Moreover, this technique has numerous practical applications across fields such as aerospace engineering, robotics, and chemical processing plants. With the development of modern science and technology, the objects faced by system identification have become increasingly complex. Such models are classed as large-scale systems, for example complex networks, neural networks, and artificial intelligence [5,6]. The parameter estimation of large-scale systems is an important research direction in system identification, where reduced-order modeling and decomposition are effective identification methods [7,8]. For instance, neural networks and artificial intelligence, which can be regarded as typical large-scale systems, employ millions of interconnected neurons to simulate the brain. Similarly, complex dynamic network control systems, characterized by a significant number of nodes and links, can also be considered large-scale systems [9,10]. In conclusion, system identification plays an essential role in modern control theory by providing accurate mathematical models for complex systems; it allows outputs to be predicted accurately while improving overall performance and efficiency. However, the difficulty of identification increases with the growing dimensionality of large-scale systems, a challenge that has to be faced in the future [11].
Large-scale systems usually contain a large number of variables, and when they have unknown input time delays, the models used to describe them often include redundant information. The identification problem for large-scale systems is typically divided into two stages: first, test measures are taken to estimate the orders and time delays; second, identification algorithms are used to estimate the parameters. Traditional methods therefore require prior knowledge such as system orders and time delays before estimating the parameters. For example, the expectation maximization approach is used to estimate the parameters when the system orders and time delays are known [12], and the Bayesian method is used to estimate the system when the time delay is known [13], as is done in reference [14]. The least squares (LS) algorithms are widely used in large-scale system identification due to their fast parameter convergence rate and high estimation accuracy: the hierarchical LS iterative method is used to estimate input nonlinear systems [15,16], and the recursive LS estimation method is applied to identify output nonlinear systems [17]. However, applying these methods directly can be difficult because there may be coupling items within large-scale systems, and it is necessary to decouple the system before parameter estimation. A common approach is to decompose a large-scale system into multi-variable subsystems; among these, multiple-input multiple-output (MIMO) systems are a typical class. MIMO systems can be decomposed into several single-input single-output (SISO) subsystems using decomposition techniques, and each subsystem can then be identified one by one with enough sampled data [18], which comes with a huge computational cost. An alternative is to break MIMO systems down into several multiple-input single-output (MISO) systems.
The identification problem of MISO systems has been extensively studied, and a large number of effective research results have been given. Along with the emergence of compressed sensing (CS) in recent years, reference [19] gives the basics of CS theory, reference [20] introduces the principle and applications of CS, and reference [21] presents the convergence of the orthogonal matching pursuit (OMP) algorithm. CS theory provides a new idea and useful methods for system identification. For MISO systems that contain unknown input time delays, the parameter vector of the over-parameterized system to be identified is high-dimensional and sparse, since the information vector contains the unknown input delays; CS is an effective idea for the parameter estimation of such sparse systems. Many scholars have carried out research in related fields and achieved a series of results. Inspired by the greedy search idea of CS, some parameter identification algorithms for output error models were designed by combining the auxiliary model and the hierarchical identification principle. For MISO output error systems with unknown time delays, reference [22] combines a high-dimensional, sparse estimation model with an auxiliary identification model to present an auxiliary model iterative algorithm, which has a lower computational cost than the auxiliary model least squares iterative algorithm, and reference [23] gives an auxiliary gradient pursuit iterative algorithm using the gradient search principle to further reduce the amount of computation. An iterative algorithm combining basis pursuit and auxiliary models has been applied to estimate the time delays and parameters of multivariable output error systems in [24]. For MISO finite impulse response systems with input time delays, reference [25] combines gradient search and the matching pursuit method to give a gradient pursuit iterative algorithm that estimates the parameters and time delays with lower computational complexity than traditional methods. For nonlinear systems, a block-oriented nonlinear system was parameterized into a linear identification model including a sparse parameter vector, and a parameter estimation algorithm was presented by combining the CS-based matching pursuit idea, the auxiliary model, and the key variable separation technique in [26]. For nonlinear systems with colored noise, an OMP iterative method was presented to obtain the true parameters and time delays of an input Hammerstein nonlinear system in [27].
In this article, a kernel matrix is constructed to normalize the information matrix and satisfy the application conditions of compressed sensing [28], and a hierarchical iterative algorithm based on the greedy pursuit search idea is investigated to estimate the parameters and time delays simultaneously and effectively. The main innovations of this article are listed as follows.
  • The multi-variable system model is recast within the framework of CS by using the hierarchical identification principle.
  • The unknown true internal noise items of the recast sparse model are replaced by their estimated values according to the hierarchical principle.
  • A kernel matrix is constructed to find the locations of the key parameters, reducing the estimated dimension and the computational cost through a greedy pursuit search in which only limited sampled data are used.
  • The parameters and time delays are estimated simultaneously by the presented algorithm.
The remainder of this article is organized as follows. Section 2 introduces the multi-input system model with unknown input delays and colored noise. Section 3 presents a greedy pursuit hierarchical iterative algorithm based on the over-parameterized sparse system model. Section 4 provides simulation experiments that show the effectiveness of the proposed method. Finally, Section 5 summarizes the paper.

2. Systems Model

Consider the following systems shown in Figure 1.
The systems can be depicted as
$$A(t)y(d) = \sum_{i=1}^{r} t^{-\tau_i} B_i(t) u_i(d) + \frac{F(t)}{C(t)} e(d), \tag{1}$$
where $u_i(d)$ and $y(d)$ are the available input and output sampled data, respectively, and $e(d)$ is white noise satisfying $e(d) \sim N(0, \sigma^2)$. $A(t)$, $B_i(t)$, $C(t)$, and $F(t)$ are polynomials in the backward shift operator $t^{-1}$ [$t^{-1}y(d) = y(d-1)$]:
$$A(t) := 1 + \sum_{j=1}^{n_a} a_j t^{-j}, \quad B_i(t) := \sum_{j=1}^{n_{b_i}} b_{ij} t^{-j}, \quad C(t) := 1 + \sum_{j=1}^{n_c} c_j t^{-j}, \quad F(t) := 1 + \sum_{j=1}^{n_f} f_j t^{-j},$$
where $n_a$, $n_{b_i}$, $n_c$, and $n_f$ are the known system orders, and the input time delay $\tau_i$ of each channel is unknown.
Introduce the internal noise item w ( d )
$$w(d) := \frac{F(t)}{C(t)} e(d) = -\sum_{j=1}^{n_c} c_j w(d-j) + \sum_{j=1}^{n_f} f_j e(d-j) + e(d) = \varphi_n^{T}(d)\theta_n + e(d), \tag{2}$$
where
$$\varphi_n(d) := [-w(d-1), -w(d-2), \ldots, -w(d-n_c),\; e(d-1), e(d-2), \ldots, e(d-n_f)]^{T} \in \mathbb{R}^{n_c+n_f}, \tag{3}$$
$$\theta_n := [c_1, c_2, \ldots, c_{n_c}, f_1, f_2, \ldots, f_{n_f}]^{T} \in \mathbb{R}^{n_c+n_f}. \tag{4}$$
The subscript n denotes noise.
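To make the noise recursion in Equation (2) concrete, the following minimal Python sketch (our illustration, not code from the paper; the function name and interface are assumptions) generates w(d) from a white-noise sequence:

```python
import numpy as np

def colored_noise(e, c, f):
    """Equation (2): w(d) = -sum_j c_j w(d-j) + sum_j f_j e(d-j) + e(d).
    `c` holds c_1..c_nc and `f` holds f_1..f_nf; samples before the
    start of the record are treated as zero."""
    L, nc, nf = len(e), len(c), len(f)
    w = np.zeros(L)
    for d in range(L):
        w[d] = e[d]
        w[d] -= sum(c[j - 1] * w[d - j] for j in range(1, nc + 1) if d - j >= 0)
        w[d] += sum(f[j - 1] * e[d - j] for j in range(1, nf + 1) if d - j >= 0)
    return w

# Noise model of Experiment 1: C(t) = 1 + 0.8 t^-1, F(t) = 1 - 0.4 t^-1.
rng = np.random.default_rng(0)
w = colored_noise(rng.normal(0.0, 0.5, 1000), c=[0.8], f=[-0.4])
```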
Then, system model (1) can be rewritten as
$$y(d) = [1 - A(t)]\,y(d) + \sum_{i=1}^{r} t^{-\tau_i} B_i(t) u_i(d) + w(d). \tag{5}$$
Since the time delay of each input channel is unknown, define the data regression length l, which is large enough to satisfy $l \geqslant \max_i(\tau_i + n_{b_i})$. Define the over-parameterized vectors $\varphi_s(d)$ and $\theta_s$ as
$$\varphi_s(d) := [\varphi_a^{T}(d), \varphi_b^{T}(d)]^{T} \in \mathbb{R}^{n_a+lr}, \tag{6}$$
$$\theta_s := [\theta_a^{T}, \theta_b^{T}]^{T} \in \mathbb{R}^{n_a+lr}, \tag{7}$$
where
$$\begin{aligned}
\varphi_a(d) &:= [-y(d-1), -y(d-2), \ldots, -y(d-n_a)]^{T} \in \mathbb{R}^{n_a}, \\
\varphi_b(d) &:= [\varphi_{b_1}^{T}(d), \varphi_{b_2}^{T}(d), \ldots, \varphi_{b_r}^{T}(d)]^{T} \in \mathbb{R}^{lr}, \\
\varphi_{b_i}(d) &:= [u_i(d-1), \ldots, u_i(d-\tau_i), u_i(d-\tau_i-1), \ldots, u_i(d-\tau_i-n_{b_i}), \ldots, u_i(d-l)]^{T} \in \mathbb{R}^{l}, \\
\theta_a &:= [a_1, a_2, \ldots, a_{n_a}]^{T} \in \mathbb{R}^{n_a}, \\
\theta_b &:= [\theta_{b_1}^{T}, \theta_{b_2}^{T}, \ldots, \theta_{b_r}^{T}]^{T} \in \mathbb{R}^{lr}, \\
\theta_{b_i} &:= [0_{\tau_i}, b_{i1}, b_{i2}, \ldots, b_{i n_{b_i}}, 0_{l-\tau_i-n_{b_i}}]^{T} \in \mathbb{R}^{l}, \quad i = 1, 2, \ldots, r.
\end{aligned}$$
The subscript s denotes the system, and $0_x$ denotes a zero block whose subscript gives the number of zero elements.
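The zero-block structure of $\theta_{b_i}$ and the channel regressor $\varphi_{b_i}(d)$ can be illustrated with a short sketch (hypothetical helper names; arrays are zero-based and samples at non-positive times are taken as zero):

```python
import numpy as np

def theta_bi(tau_i, b_i, l):
    """Channel-i parameter block: tau_i leading zeros, the n_bi
    coefficients of B_i(t), then l - tau_i - n_bi trailing zeros."""
    v = np.zeros(l)
    v[tau_i:tau_i + len(b_i)] = b_i
    return v

def phi_bi(u_i, d, l):
    """Channel-i regressor [u_i(d-1), ..., u_i(d-l)]^T."""
    return np.array([u_i[d - j] if d - j >= 0 else 0.0 for j in range(1, l + 1)])

# Experiment 1, channel 1 (tau_1 = 6, B_1 coefficients 2.0 and -1.2, l = 30):
print(theta_bi(6, [2.0, -1.2], 30))  # 6 zeros, 2.0, -1.2, then 22 zeros
```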
Then, the following over-parameterized model can be obtained from Equation (5):
$$y(d) = \varphi_s^{T}(d)\theta_s + w(d) = \varphi_s^{T}(d)\theta_s + \varphi_n^{T}(d)\theta_n + e(d) = \varphi^{T}(d)\theta + e(d), \tag{8}$$
where
$$\varphi(d) := [\varphi_s^{T}(d), \varphi_n^{T}(d)]^{T} \in \mathbb{R}^{N}, \quad N := n_a + lr + n_c + n_f, \quad \theta := [\theta_s^{T}, \theta_n^{T}]^{T} \in \mathbb{R}^{N}.$$
Consider the sampled data from d = 1 to d = L and define Y , Φ , E as
$$Y := [y(1), y(2), \ldots, y(L)]^{T} \in \mathbb{R}^{L}, \quad \Phi := [\varphi(1), \varphi(2), \ldots, \varphi(L)]^{T} \in \mathbb{R}^{L\times N}, \quad E := [e(1), e(2), \ldots, e(L)]^{T} \in \mathbb{R}^{L}.$$
The over-parameterized system model can then be written as
$$Y = \Phi\theta + E. \tag{9}$$
Minimize the following cost function
$$J(\theta) := \sum_{j=1}^{L} [y(j) - \varphi^{T}(j)\theta]^2 = E^{T}E,$$
and the LS estimate of the parameter vector is obtained by the LS principle:
$$\hat{\theta} = (\Phi^{T}\Phi)^{-1}\Phi^{T} Y. \tag{10}$$
Remark 1.
It is worth noting that Equation (10) cannot be used to estimate the parameters directly, since $\varphi_n(d)$ contains the unmeasurable internal noise items $w(d)$ and $e(d)$. In addition, the over-parameterized sparse system model has a large dimension N. Directly adopting the least squares method requires far more sampled data than the dimension N to achieve satisfactory estimation accuracy; in some extreme cases, when sufficient sampled data cannot be obtained, the matrix $\hat{\Phi}^{T}\hat{\Phi}$ becomes singular and $S = [\hat{\Phi}^{T}\hat{\Phi}]^{-1}$ does not exist. Moreover, a large number of samples and high-dimensional matrix operations increase the estimation cost.
In order to estimate the parameters and time delays efficiently, this paper derives a greedy pursuit hierarchical iterative algorithm for system model (1) that uses limited sampled data and exploits the sparsity of the over-parameterized system model.

3. Greedy Pursuit Hierarchical Iterative Parameter Estimation Algorithm

The sparsity of the over-parameterized system model makes it possible to identify the multi-input system with limited sampled data according to the compressed sensing idea. System model (9) can be rewritten as
$$Y = \sum_{j=1}^{N} \phi_j \theta_j + E, \tag{11}$$
where $\phi_j$ is the j-th column of $\Phi$ and $\theta_j$ is the corresponding entry of $\theta$. According to Equation (7), $\theta$ is sparse: it has a large number of zero elements and only a small number of non-zero key parameters. Once the locations of the key parameters are found, the dimensionality of the over-parameterized system model can be reduced, which makes it possible to estimate the parameters and time delays from limited sampled data.
In order to meet the sparse recovery conditions, the information matrix Φ should be normalized. Define the normalized transformation matrix T as
$$T := \begin{bmatrix} \|\Phi(1)\| & 0 & \cdots & 0 \\ 0 & \|\Phi(2)\| & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \|\Phi(N)\| \end{bmatrix}^{-1} \in \mathbb{R}^{N\times N}, \tag{12}$$
where $\|\Phi(j)\| = \big(\sum_{i=1}^{L} \Phi_{ij}^2\big)^{1/2}$ denotes the Euclidean norm of the jth column of $\Phi$. Introducing T into Equation (9), the following can be obtained:
$$Y = \Psi\vartheta + E, \tag{13}$$
where $\Psi = \Phi T \in \mathbb{R}^{L\times N}$ and $\vartheta = T^{-1}\theta \in \mathbb{R}^{N}$, so that each column of $\Psi$ has unit norm.
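A sketch of the normalization in Equations (12) and (13), assuming no column of $\Phi$ is identically zero (function and variable names are ours):

```python
import numpy as np

def normalize_columns(Phi):
    """Equation (12): T is the inverse of the diagonal matrix of column
    norms, so every column of Psi = Phi @ T has unit Euclidean norm."""
    norms = np.linalg.norm(Phi, axis=0)
    T = np.diag(1.0 / norms)
    return Phi @ T, T

# After estimating the normalized vector, the original parameters are
# recovered as theta = T @ vartheta, since vartheta = T^{-1} theta.
```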
Let q be the inner iteration variable. Define the residual output as
$$r_q := Y - \Psi_{\Lambda_q}\vartheta_{\Lambda_q}, \tag{14}$$
where $\Lambda_q$ is the index set composed of the locations of the found key non-zero parameters, $\Psi_{\Lambda_q}$ is the sub-matrix formed by the columns of $\Psi$ indexed by $\Lambda_q$, and $\vartheta_{\Lambda_q}$ is the corresponding non-zero parameter vector.
At the next internal iteration, define the following cost function and minimize it to find the location of one key parameter:
$$J(j) := \|r_q - \psi_j \vartheta_j\|^2, \tag{15}$$
which leads to $\vartheta_j = \psi_j^{T} r_q / \|\psi_j\|_2^2$. Substituting this back into Equation (15) gives
$$J(j) = \min_{\vartheta_j} \|\psi_j\vartheta_j - r_q\|_2^2 = \Big\|\frac{\psi_j^{T} r_q}{\|\psi_j\|_2^2}\psi_j - r_q\Big\|_2^2 = \|r_q\|_2^2 - \frac{(\psi_j^{T} r_q)^2}{\|\psi_j\|_2^2} = \|r_q\|_2^2 - (\psi_j^{T} r_q)^2, \tag{16}$$
where $\psi_j$ ($j = 1, 2, \ldots, N$) is the jth column of the normalized matrix $\Psi$, and the last equality uses $\|\psi_j\|_2 = 1$. From Equation (16), the column most correlated with the residual $r_q$ of the last iteration is found at the next iteration; that is, the minimization problem is equivalent to maximizing the absolute inner product between the columns of $\Psi$ and $r_q$. The location of the found column is indicated by the index
$$\lambda_{q+1} = \arg\max_{j=1,2,\ldots,N} |\langle r_q, \psi_j\rangle|. \tag{17}$$
Then, the index set and the index matrix can be updated according to $\Lambda_{q+1} := \Lambda_q \cup \{\lambda_{q+1}\}$.
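The selection rule of Equation (17) reduces to one matrix–vector product on the normalized matrix; a minimal sketch (names are ours):

```python
import numpy as np

def select_atom(Psi, r, Lambda):
    """Equation (17): pick the column of Psi with the largest absolute
    inner product with the residual r, then update the index set as
    Lambda_{q+1} = Lambda_q U {lambda_{q+1}}."""
    lam = int(np.argmax(np.abs(Psi.T @ r)))
    return Lambda + [lam]
```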
Remark 2.
Because $w(d)$ and $e(d)$ in the matrix $\Phi$ are unknown noise items, the steps of Equations (12)–(17) cannot be implemented directly: according to Equation (2), $\Phi$ contains unmeasurable terms.
Replace the true unmeasurable values $w(d)$ and $e(d)$ with their last-iteration estimates by using the hierarchical principle [29]. Let k be the external iteration variable, and let $\hat{\theta}_k$, $\hat{\theta}_{k,s}$, $\hat{\theta}_{k,n}$ be the kth iteration estimates of $\theta$, $\theta_s$, $\theta_n$, respectively. Replacing d with $d - j$ in Equations (2) and (8), the following results can be obtained:
$$w(d-j) = \varphi_n^{T}(d-j)\theta_n + e(d-j), \tag{18}$$
$$y(d-j) = \varphi_s^{T}(d-j)\theta_s + w(d-j). \tag{19}$$
Then, the unknown noise items $w(d-j)$, $e(d-j)$ can be replaced by their last estimates $\hat{w}_{k-1}(d-j)$, $\hat{e}_{k-1}(d-j)$ based on Equation (3) and the hierarchical principle:
$$\hat{\varphi}_{k,n}(d) = [-\hat{w}_{k-1}(d-1), -\hat{w}_{k-1}(d-2), \ldots, -\hat{w}_{k-1}(d-n_c),\; \hat{e}_{k-1}(d-1), \hat{e}_{k-1}(d-2), \ldots, \hat{e}_{k-1}(d-n_f)]^{T} \in \mathbb{R}^{n_c+n_f}. \tag{20}$$
Replacing $\varphi_n^{T}(d-j)$, $\theta_n$, $\theta_s$ with their iteration estimates $\hat{\varphi}_{k,n}^{T}(d-j)$, $\hat{\theta}_{k,n}$, $\hat{\theta}_{k,s}$ in Equations (18) and (19), the estimates of $w(d-j)$ and $e(d-j)$ at the kth iteration are obtained by
$$\hat{w}_k(d-j) = y(d-j) - \varphi_s^{T}(d-j)\hat{\theta}_{k,s}, \tag{21}$$
$$\hat{e}_k(d-j) = \hat{w}_k(d-j) - \hat{\varphi}_{k,n}^{T}(d-j)\hat{\theta}_{k,n}. \tag{22}$$
According to Equation (20), the estimate $\hat{\Phi}_k$ of the information matrix at the kth external iteration can be formed, where $\hat{\varphi}_k(d) = [\varphi_s^{T}(d), \hat{\varphi}_{k,n}^{T}(d)]^{T}$.
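Under the stated sign convention for $\varphi_n(d)$, the hierarchical refresh of Equations (20)–(22) might be sketched as follows (our illustration; names and the zero-padding of early samples are assumptions):

```python
import numpy as np

def refresh_noise(y, Phi_s, th_s, th_n, w_prev, e_prev, nc, nf):
    """Rebuild phi_hat_{k,n}(d) from last iteration's w, e (Equation (20)),
    then refresh w and e (Equations (21) and (22)). Phi_s stacks the
    measured regressors phi_s(d), d = 1..L, row by row."""
    L = len(y)
    Phi_n = np.zeros((L, nc + nf))
    for d in range(L):
        Phi_n[d, :nc] = [-w_prev[d - j] if d - j >= 0 else 0.0
                         for j in range(1, nc + 1)]
        Phi_n[d, nc:] = [e_prev[d - j] if d - j >= 0 else 0.0
                         for j in range(1, nf + 1)]
    w_new = y - Phi_s @ th_s          # Equation (21)
    e_new = w_new - Phi_n @ th_n      # Equation (22)
    return Phi_n, w_new, e_new
```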
Use $\hat{\Phi}_k$ to replace $\Phi$ in Equation (9). Construct the normalized transformation matrix $\hat{T}$ from $\hat{\Phi}_k$ based on Equation (12), and construct $\hat{\Psi}_k = \hat{\Phi}_k\hat{T}$.
Then, Equations (14) and (17) can be expressed as
$$r_{k,q} = Y - \hat{\Psi}_{k,\Lambda_q}\hat{\vartheta}_{k,\Lambda_q}, \tag{23}$$
$$\lambda_{k,q} = \arg\max_{j=1,2,\ldots,N} |\langle r_{k,q-1}, \hat{\psi}_j\rangle|, \tag{24}$$
where the double subscript $k, q$ denotes that the key index at the qth inner iteration is found using the data updated at the kth external iteration.
Applying the index set to update the normalization sub-matrix, we have
$$\hat{\Psi}_{k,\Lambda_q} = [\hat{\Psi}_{k,\Lambda_{q-1}}, \hat{\psi}_{k,\lambda_q}], \quad \Lambda_q = \Lambda_{q-1} \cup \{\lambda_q\}. \tag{25}$$
Define the following cost function at the qth inner iteration to find the best parameter estimate:
$$J(\vartheta_{k,\Lambda_q}) = \|r_{k,q}\|^2 = \|Y - \hat{\Psi}_{k,\Lambda_q}\vartheta_{k,\Lambda_q}\|^2. \tag{26}$$
Minimizing Equation (26) gives the LS estimate
$$\hat{\vartheta}_{k,\Lambda_q} = [\hat{\Psi}_{k,\Lambda_q}^{T}\hat{\Psi}_{k,\Lambda_q}]^{-1}\hat{\Psi}_{k,\Lambda_q}^{T} Y. \tag{27}$$
Note that $\hat{\vartheta}_{k,\Lambda_q} \in \mathbb{R}^{q}$; using the index set $\Lambda_{k,q}$ and the transformation matrix $\hat{T}$, the over-parameterized sparse parameter vector can be recovered.
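The restricted least squares step of Equation (27) only ever solves a q-dimensional problem, which is where the cost saving over Equation (10) comes from; a sketch (names are ours):

```python
import numpy as np

def restricted_ls(Psi_hat, Lambda, Y):
    """Equation (27): LS over the selected columns only; also returns
    the residual of Equation (23)."""
    Psi_L = Psi_hat[:, Lambda]
    v, *_ = np.linalg.lstsq(Psi_L, Y, rcond=None)
    return v, Y - Psi_L @ v
```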
Then, the steps of the greedy pursuit hierarchical iterative (GPHI) algorithm are listed as follows.
  • Define l and collect sampled data { u i ( d ) , y ( d ) : d = 1 , 2 , , L } to form Y.
  • To initialize the external iteration: let k = 1 and $\hat{\theta}_0 = \mathbf{1}_N / p_0$ with $p_0 = 10^6$, let $\hat{w}_0(d)$ and $\hat{e}_0(d)$ be random numbers, and set the allowable errors $\epsilon$ and $\varepsilon$.
  • Form $\hat{\varphi}_{k,n}(d)$ by Equation (20) and $\hat{\varphi}_k(d)$ by
$$\hat{\varphi}_k(d) = [\varphi_s^{T}(d), \hat{\varphi}_{k,n}^{T}(d)]^{T}. \tag{28}$$
    Using Equation (28), update $\hat{\Phi}_k$; using Equation (12) based on $\hat{\Phi}_k$, construct $\hat{T}$. Then, construct $\hat{\Psi}_k$ from $\hat{\Phi}_k$ and $\hat{T}$.
  • Begin the internal iteration. Let q = 1, $r_{k,0} = Y$, $\Lambda_0 = \varnothing$, and $\hat{\Psi}_{k,\Lambda_0} = \varnothing$.
    (a) Find $\lambda_q$ by Equation (24).
    (b) Update $\Lambda_q$ and $\hat{\Psi}_{k,\Lambda_q}$ by Equation (25).
    (c) Compute the parameter estimate $\hat{\vartheta}_{k,\Lambda_q}$ by Equation (27), and update the residual $r_{k,q}$ by Equation (23).
    (d) If $\|\hat{\vartheta}_{k,\Lambda_q} - \hat{\vartheta}_{k,\Lambda_{q-1}}\| \leqslant \epsilon$, keep $\hat{\vartheta}_{k,\Lambda_q}$ and stop the inner iteration; otherwise, increase q by 1 and go to step (a).
  • Recover the parameter estimate $\hat{\theta}_k$ by
$$\hat{\vartheta}_{k,\Lambda_q} \in \mathbb{R}^{q} \xrightarrow{\Lambda_q} \hat{\vartheta}_k \in \mathbb{R}^{N}, \quad \hat{\theta}_k = \hat{T}\hat{\vartheta}_k, \tag{29}$$
    that is, the entries of $\hat{\vartheta}_{k,\Lambda_q}$ are placed at the indices in $\Lambda_q$ and all other entries of $\hat{\vartheta}_k$ are zero.
    Update the noise estimates $\hat{w}_k(d-j)$, $\hat{e}_k(d-j)$ by Equations (21) and (22); update the noise regression vectors $\hat{\varphi}_{k,n}(d)$, $\hat{\varphi}_k(d)$ by Equations (20) and (28); and update $\hat{\Phi}_k$ and $\hat{\Psi}_k$ by
$$\hat{\Phi}_k = [\hat{\varphi}_k(1), \hat{\varphi}_k(2), \ldots, \hat{\varphi}_k(L)]^{T}, \quad \hat{\Psi}_k = \hat{\Phi}_k\hat{T}. \tag{30}$$
  • If $\|\hat{\theta}_k - \hat{\theta}_{k-1}\| \leqslant \varepsilon$, terminate the iteration and obtain the final estimate $\hat{\theta}$; otherwise, let k = k + 1 and turn to step 4.
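Putting the steps together, a compact, self-contained sketch of the GPHI loop follows (our reconstruction from Equations (20)–(30); the initializations, tolerances, and safeguards are assumptions, not the authors' code):

```python
import numpy as np

def gphi(y, Phi_s, nc, nf, eps_in=1e-4, eps_out=1e-6, max_outer=20):
    """Outer (hierarchical) iterations refresh the noise regressors;
    inner (greedy) iterations select columns and re-solve the
    restricted least squares problem."""
    L, ns = Phi_s.shape
    N = ns + nc + nf
    rng = np.random.default_rng(0)
    w, e = rng.normal(size=L), rng.normal(size=L)  # random initial noise estimates
    theta = np.ones(N) / 1e6                       # theta_hat_0 = 1_N / p0
    for k in range(max_outer):
        # Equation (20): rebuild Phi_hat_k = [phi_s(d), phi_hat_{k,n}(d)]
        Phi_n = np.zeros((L, nc + nf))
        for d in range(L):
            Phi_n[d, :nc] = [-w[d-j] if d-j >= 0 else 0.0 for j in range(1, nc+1)]
            Phi_n[d, nc:] = [e[d-j] if d-j >= 0 else 0.0 for j in range(1, nf+1)]
        Phi = np.hstack([Phi_s, Phi_n])
        norms = np.linalg.norm(Phi, axis=0)
        norms[norms == 0.0] = 1.0                  # guard against empty columns
        Psi = Phi / norms                          # Equation (12): Psi = Phi T
        # inner greedy pursuit, Equations (23)-(27)
        Lam, r, prev = [], y.copy(), None
        while len(Lam) < N:
            Lam.append(int(np.argmax(np.abs(Psi.T @ r))))        # Equation (24)
            v, *_ = np.linalg.lstsq(Psi[:, Lam], y, rcond=None)  # Equation (27)
            r = y - Psi[:, Lam] @ v                              # Equation (23)
            if prev is not None and np.linalg.norm(v[:-1] - prev) <= eps_in:
                break
            prev = v
        theta_new = np.zeros(N)                    # Equation (29): theta = T vartheta
        theta_new[Lam] = v / norms[Lam]
        w = y - Phi_s @ theta_new[:ns]             # Equation (21)
        e = w - Phi_n @ theta_new[ns:]             # Equation (22)
        if np.linalg.norm(theta_new - theta) <= eps_out:
            return theta_new
        theta = theta_new
    return theta
```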
The recovered parameter vector $\hat{\theta}$ has r + 1 zero blocks. With the prior knowledge l and the orders $n_a$, $n_{b_i}$, $n_c$, $n_f$, the time-delay estimates are obtained by
$$\hat{\tau}_1 = n_1, \quad \hat{\tau}_i = n_i - (l - \hat{\tau}_{i-1} - n_{b_{i-1}}), \quad i = 2, 3, \ldots, r, \tag{31}$$
where $n_i$ is the number of zero items in the ith zero block (the ith block merges the trailing zeros of channel i − 1 with the leading zeros of channel i).
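Because each channel block of $\hat{\theta}$ occupies l entries after the $n_a$ autoregressive coefficients, the delays can equivalently be read off as the number of leading zeros of each block, which reproduces the chained count of Equation (31); a sketch with hypothetical names:

```python
import numpy as np

def recover_delays(theta_hat, l, na, r, tol=1e-8):
    """tau_i = number of leading (near-)zero entries of channel block i."""
    taus = []
    for i in range(r):
        block = theta_hat[na + i * l : na + (i + 1) * l]
        nz = np.flatnonzero(np.abs(block) > tol)
        taus.append(int(nz[0]) if nz.size else l)
    return taus

# Experiment 1 layout (l = 30, n_a = 2, r = 3) would yield [6, 15, 20].
```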

4. Simulation Experiments

Experiment 1. Consider a multi-input system as follows:
$$\begin{aligned}
A(t)y(d) &= \sum_{i=1}^{3} t^{-\tau_i} B_i(t) u_i(d) + \frac{F(t)}{C(t)} e(d), \\
A(t) &= 1 + a_1 t^{-1} + a_2 t^{-2} = 1 - 0.80t^{-1} + 0.60t^{-2}, \\
B_1(t) &= b_{11}t^{-1} + b_{12}t^{-2} = 2.00t^{-1} - 1.20t^{-2}, \\
B_2(t) &= b_{21}t^{-1} + b_{22}t^{-2} = -1.80t^{-1} - 0.90t^{-2}, \\
B_3(t) &= b_{31}t^{-1} + b_{32}t^{-2} = 1.00t^{-1} + 0.50t^{-2}, \\
C(t) &= 1 + c_1 t^{-1} = 1 + 0.80t^{-1}, \\
F(t) &= 1 + f_1 t^{-1} = 1 - 0.40t^{-1}, \\
\tau_1 &= 6, \quad \tau_2 = 15, \quad \tau_3 = 20.
\end{aligned}$$
Let l = 30 ; the true value of the parameter vector to be estimated is
$$\theta = [-0.80, 0.60, 0_{6}, 2.00, -1.20, 0_{37}, -1.80, -0.90, 0_{33}, 1.00, 0.50, 0_{8}, 0.80, -0.40]^{T} \in \mathbb{R}^{N}.$$
The $\{u_i(d)\}$ and $\{y(d)\}$ are measurable. The $e(d) \sim N(0, \sigma^2)$ is noise, and the noise variances are taken to be $\sigma^2 = 0.50^2$ and $\sigma^2 = 1.00^2$, respectively. Take the sampled data length L = 1000. The dimension of $\theta$ is $N := n_a + lr + n_c + n_f = 94$, and the sparsity is $K := n_a + \sum_{i=1}^{3} n_{b_i} + n_c + n_f = 10$.
The first 200 sampled data are used for estimation with the GPHI algorithm, and the data from d = 200 to d = 500 are used to verify the effectiveness of the obtained results. Define the parameter estimation error as $\delta := \|\hat{\theta}_k - \theta\| / \|\theta\| \times 100\%$.
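For orientation, data consistent with the Experiment 1 setup could be generated as follows (a hypothetical reproduction: the paper does not state the input distribution, so the unit-variance Gaussian inputs and the seed are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1000
a = [-0.8, 0.6]                        # A(t) coefficients
B = [[2.0, -1.2], [-1.8, -0.9], [1.0, 0.5]]
tau = [6, 15, 20]
c1, f1 = 0.8, -0.4                     # C(t), F(t) coefficients
u = rng.normal(0.0, 1.0, (3, L))       # inputs (assumed distribution)
e = rng.normal(0.0, 0.5, L)            # sigma^2 = 0.50^2
w, y = np.zeros(L), np.zeros(L)
for d in range(L):
    w[d] = e[d]
    if d >= 1:
        w[d] += f1 * e[d - 1] - c1 * w[d - 1]      # Equation (2)
    y[d] = w[d]
    if d >= 1: y[d] -= a[0] * y[d - 1]             # A(t) y(d) terms
    if d >= 2: y[d] -= a[1] * y[d - 2]
    for i in range(3):                 # delayed input channels
        for j, bij in enumerate(B[i], start=1):
            if d - tau[i] - j >= 0:
                y[d] += bij * u[i, d - tau[i] - j]
```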
Noise variance is a commonly utilized metric for quantifying noise in measurement data, with higher values indicating reduced accuracy and reliability of the measurements. The impact of noise variance on parameter estimation accuracy cannot be overlooked. To evaluate the robustness of the GPHI algorithm against noise interference, the different noise variances σ 2 = 0.50 2 and σ 2 = 1.00 2 are applied.
The parameter estimates and estimation error δ versus iterative times k are shown in Table 1 and Figure 2. From Figure 2, it can be seen that the estimation errors tend to decrease as the iterative times increase under different noise variances. The parameter estimates versus iterative times k are shown in Figure 3, Figure 4, Figure 5 and Figure 6.
From Figure 3, Figure 4, Figure 5 and Figure 6, it can be concluded that the estimated values gradually approach the true values with the increase in iterative times under different noise variances.
When the noise variances are $\sigma^2 = 0.50^2$ and $\sigma^2 = 1.00^2$, the true outputs and estimated outputs versus iterative times k are given in Figure 7 and Figure 8, and a comparison of δ for different sampled data lengths is shown in Table 2. When the system is disturbed by different noises, the estimated outputs match the true outputs well and the bias between them is small—see Figure 7 and Figure 8.
With σ 2 = 0.50 2 and L = 200 , the locations of the estimated key non-zero parameters are shown in Table 3.
According to Table 3 and Equation (31), the time delays can be estimated as
$$\hat{\tau}_1 = n_1 = 6, \quad \hat{\tau}_2 = n_2 - (l - \hat{\tau}_1 - n_{b_1}) = 37 - (30 - 6 - 2) = 15, \quad \hat{\tau}_3 = n_3 - (l - \hat{\tau}_2 - n_{b_2}) = 33 - (30 - 15 - 2) = 20. \tag{32}$$
Experiment 2. Consider another multi-input system:
$$\begin{aligned}
A(t)y(d) &= \sum_{i=1}^{5} t^{-\tau_i} B_i(t) u_i(d) + \frac{F(t)}{C(t)} e(d), \\
A(t) &= 1 + a_1 t^{-1} + a_2 t^{-2} = 1 + 0.60t^{-1} + 0.40t^{-2}, \\
B_1(t) &= b_{11}t^{-1} + b_{12}t^{-2} = 2.00t^{-1} - 1.30t^{-2}, \\
B_2(t) &= b_{21}t^{-1} + b_{22}t^{-2} = 1.50t^{-1} - 0.90t^{-2}, \\
B_3(t) &= b_{31}t^{-1} + b_{32}t^{-2} = -1.00t^{-1} + 0.50t^{-2}, \\
B_4(t) &= b_{41}t^{-1} + b_{42}t^{-2} = -1.20t^{-1} + 0.60t^{-2}, \\
B_5(t) &= b_{51}t^{-1} + b_{52}t^{-2} = 1.00t^{-1} - 0.80t^{-2}, \\
C(t) &= 1 + c_1 t^{-1} = 1 + 0.70t^{-1}, \\
F(t) &= 1 + f_1 t^{-1} = 1 - 0.40t^{-1}, \\
\tau_1 &= 9, \quad \tau_2 = 23, \quad \tau_3 = 15, \quad \tau_4 = 30, \quad \tau_5 = 17.
\end{aligned}$$
Let the data regression length l = 50 . The true parameter vector to be identified is
$$\theta = [0.60, 0.40, 0_{9}, 2.00, -1.30, 0_{62}, 1.50, -0.90, 0_{40}, -1.00, 0.50, 0_{63}, -1.20, 0.60, 0_{35}, 1.00, -0.80, 0_{31}, 0.70, -0.40]^{T} \in \mathbb{R}^{N}.$$
The dimension of the sparse parameter vector is $N := n_a + lr + n_c + n_f = 254$, and the number of key non-zero parameters is $K := n_a + \sum_{i=1}^{5} n_{b_i} + n_c + n_f = 14$. In this simulation, take the length of the measurable input and output data L = 1000. The data from d = 1 to d = 250 are used to estimate the parameters and time delays, and the remaining data from d = 250 to d = 500 are used to compare the bias between the true outputs and estimated outputs. The other settings are similar to those in Experiment 1.
To show the advantages of the GPHI algorithm, the errors δ of the least squares iterative (LSI) and GPHI algorithms versus the iteration times k are depicted in Figure 9, where the sampled data length is L = 250 and the noise variance is $\sigma^2 = 0.50^2$. From Figure 9, it can be found that the GPHI algorithm achieves higher parameter estimation accuracy with limited sampled data.
When the noise variances are $\sigma^2 = 0.50^2$ and $\sigma^2 = 1.00^2$, apply the GPHI algorithm. The errors δ versus k are shown in Table 4 and Figure 10, and the parameter estimates versus k are given in Figure 11, Figure 12, Figure 13 and Figure 14. Under different noise variances, the algorithm achieves satisfactory parameter estimation accuracy, and the parameter estimates approach the true values as the iterative times increase; Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 thus show that the GPHI algorithm has certain robustness.
Apply the data from d = 250 to d = 500 to verify the outputs with different noise variances. The true outputs, estimated outputs, and their bias are shown in Figure 15 and Figure 16, respectively. Figure 15 and Figure 16 show that the estimated output values can match the true values of the system well with different noise variances.
The errors δ versus different sampled data lengths L are shown in Table 5 to verify the estimation accuracy under limited sampled data. With $\sigma^2 = 0.50^2$ and L = 250, the estimated locations of the key non-zero parameters are given in Table 6.
According to Table 6 and Equation (31), the time delays of Experiment 2 can be estimated as
$$\begin{aligned}
\hat{\tau}_1 &= n_1 = 9, \\
\hat{\tau}_2 &= n_2 - (l - \hat{\tau}_1 - n_{b_1}) = 62 - (50 - 9 - 2) = 23, \\
\hat{\tau}_3 &= n_3 - (l - \hat{\tau}_2 - n_{b_2}) = 40 - (50 - 23 - 2) = 15, \\
\hat{\tau}_4 &= n_4 - (l - \hat{\tau}_3 - n_{b_3}) = 63 - (50 - 15 - 2) = 30, \\
\hat{\tau}_5 &= n_5 - (l - \hat{\tau}_4 - n_{b_4}) = 35 - (50 - 30 - 2) = 17.
\end{aligned} \tag{33}$$
From the simulation results of the two experiments, the following conclusions can be drawn.
  • With the increase in noise variance, the error of the estimated parameter vector becomes larger, but the estimation accuracy remains high. This shows that the GPHI algorithm has certain robustness—see Figure 2 and Figure 9 and Table 1 and Table 4.
  • Compared with the traditional LSI algorithm, the GPHI algorithm can use the limited sampled data to achieve higher parameter estimation accuracy—see Figure 9.
  • With different noise variances, the parameter estimates converge near the true values as the iterative times increase, which shows that GPHI algorithm has certain robustness—see Figure 3, Figure 4, Figure 5, Figure 6, Figure 11, Figure 12, Figure 13 and Figure 14.
  • The estimation accuracy of using limited sampled data is close to that of using large sampled data. This shows that the GPHI algorithm can realize parameter estimation using limited sampled data—see Table 2 and Table 5.
  • The estimated outputs can match the true outputs well—see Figure 7, Figure 8, Figure 15 and Figure 16.
  • The GPHI algorithm can accurately find the locations of key non-zero parameters, which shows that the GPHI algorithm can estimate the parameters and time delays simultaneously—see Table 3 and Table 6 and Equations (32) and (33).

5. Conclusions

This paper solves the problem of the joint estimation of parameters and time delays of a class of multi-input systems with colored noise by using limited sampled data. A hierarchical compressed sensing framework is established, and an efficient greedy pursuit hierarchical iterative algorithm is provided. Since the over-parameterized system model is sparse, the locations of key non-zero parameters can be found by using the greedy search method to reduce estimation cost, and the parameter estimates can be identified by using the iterative method. The time delays can be estimated based on prior knowledge combined with the structure of parameter estimates. Simulation examples demonstrate that the given algorithm can efficiently estimate parameters and time delays simultaneously.

Author Contributions

Conceptualization, T.T.; methodology, R.D. and T.T.; software, R.D.; validation, T.T.; analysis, R.D.; writing—original draft preparation, T.T.; writing—review and editing, T.T. and R.D.; and supervision, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Project of Henan Province (No. 202102210297), the Key Research Project of Henan Higher Education Institutions (No. 21A413006), and the Science and Technology Project of Nanyang (No. JCQY2021012).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ljung, L. System Identification: Theory for the User; Information and System Science Series; Tsinghua University Press: Beijing, China, 2002.
  2. Ding, F. System Identification—New Theory and Methods; Science Press: Beijing, China, 2013.
  3. Ding, F. System Identification—Performance Analysis for Identification Methods; Science Press: Beijing, China, 2014.
  4. Ding, F. System Identification—Multi-innovation Identification Theory and Methods; Science Press: Beijing, China, 2016.
  5. Xu, J.; Tao, Q.; Li, Z.; Xi, X.; Suykens, J.A.; Wang, S. Efficient hinging hyperplanes neural network and its application in nonlinear system identification. Automatica 2020, 116, 108906.
  6. Zhang, C.M.; Lu, Y. Study on artificial intelligence: The state of the art and future prospects. J. Ind. Inf. Integr. 2021, 23, 100224.
  7. Chen, J.; Huang, B.; Gan, M.; Chen, P. A novel reduced-order algorithm for rational model based on Arnoldi process and Krylov subspace. Automatica 2021, 129, 109663.
  8. Li, Y.; Yu, K. Adaptive fuzzy decentralized sampled-data control for large-scale nonlinear systems. IEEE Trans. Fuzzy Syst. 2022, 30, 1809–1822.
  9. Skarding, J.; Gabrys, B.; Musial, K. Foundations and modeling of dynamic networks using dynamic graph neural networks: A survey. IEEE Access 2021, 9, 79143–79168.
  10. Liu, Y.A.; Tang, G.S.D.; Liu, Y.F.; Kong, Q.K.; Wang, J. Extended dissipative sliding mode control for nonlinear networked control systems via event-triggered mechanism with random uncertain measurement. Appl. Math. Comput. 2021, 396, 125901.
  11. Zhang, X.; Liu, Q.Y.; Ding, F.; Alsaedi, A.; Hayat, T. Recursive identification of bilinear time-delay systems through the redundant rule. J. Frankl. Inst. 2020, 357, 726–747.
  12. Gu, Y.; Liu, J.; Li, X.; Chou, Y.; Ji, Y. State space model identification of multirate processes with time-delay using the expectation maximization. J. Frankl. Inst. 2019, 356, 1623–1639.
  13. Fei, Q.L.; Ma, J.X.; Xiong, W.L.; Guo, F. Variational Bayesian identification for bilinear state space models with Markov-switching time delays. Int. J. Robust Nonlinear Control 2020, 30, 7478–7495.
  14. Ding, F.; Wang, X.H.; Mao, L.; Xu, L. Joint state and multi-innovation parameter estimation for time-delay linear systems and its convergence based on the Kalman filtering. Digit. Signal Process. 2017, 62, 211–223.
  15. Ding, F.; Ma, H.; Pan, J.; Yang, E.F. Hierarchical gradient and least squares-based iterative algorithms for input nonlinear output-error systems using the key term separation. J. Frankl. Inst. 2021, 358, 5113–5135.
  16. Ding, F.; Liu, X.M.; Hayat, T. Hierarchical least squares identification for feedback nonlinear equation-error systems. J. Frankl. Inst. 2020, 357, 2958–2977.
  17. Ding, F.; Wang, X.H.; Chen, Q.J.; Xiao, Y.S. Recursive least squares parameter estimation for a class of output nonlinear systems based on the model decomposition. Circ. Syst. Signal Process. 2016, 35, 3323–3338.
  18. Chen, T.; Ohlsson, H.; Ljung, L. On the estimation of transfer functions, regularizations and Gaussian processes—Revisited. Automatica 2012, 48, 1525–1535.
  19. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  20. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
  21. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
  22. Liu, Y.J.; You, J.Y.; Ding, F. Iterative identification for multiple-input systems based on auxiliary model-orthogonal matching pursuit. Control Decis. 2019, 34, 787–792.
  23. You, J.Y.; Liu, Y.J.; Chen, J.; Ding, F. Iterative identification for multiple-input systems with time-delays based on greedy pursuit and auxiliary model. J. Frankl. Inst. 2019, 356, 5819–5833.
  24. You, J.Y.; Liu, Y.J. Iterative identification for multivariable systems with time-delays based on basis pursuit de-noising and auxiliary model. Algorithms 2018, 11, 180.
  25. Tao, T.Y.; Wang, B.; Wang, X.H. Parameter and time delay estimation algorithm based on gradient pursuit for multi-input C-ARMA systems. Control Decis. 2022, 37, 2085–2090.
  26. Wang, D.Q.; Li, L.W.; Ji, Y.; Yan, Y. Model recovery for Hammerstein systems using the auxiliary model based orthogonal matching pursuit method. Appl. Math. Model. 2017, 54, 537–550.
  27. Mao, Y.W.; Ding, F.; Liu, Y.J. Parameter estimation algorithms for Hammerstein time-delay systems based on the orthogonal matching pursuit scheme. IET Signal Process. 2017, 11, 265–274.
  28. Tropp, J.A. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inf. Theory 2006, 52, 1030–1051.
  29. Ding, F. System Identification—Iterative Search Principle and Identification Methods; Science Press: Beijing, China, 2018.
Figure 1. The multi-input systems with input time delays and colored noise.
Figure 2. The estimation errors δ versus k of Experiment 1 with L = 200 ($\sigma^2 = 0.50^2$, $\sigma^2 = 1.00^2$).
Figure 3. The estimated parameter values $\hat{a}_1, \hat{a}_2, \hat{c}_1, \hat{f}_1$ versus k of Experiment 1 with L = 200 and $\sigma^2 = 0.50^2$.
Figure 4. The estimated parameter values $\hat{b}_{11}, \hat{b}_{12}, \hat{b}_{21}, \hat{b}_{22}, \hat{b}_{31}, \hat{b}_{32}$ versus k of Experiment 1 with L = 200 and $\sigma^2 = 0.50^2$.
Figure 5. The estimated parameter values $\hat{a}_1, \hat{a}_2, \hat{c}_1, \hat{f}_1$ versus k of Experiment 1 with L = 200 and $\sigma^2 = 1.00^2$.
Figure 6. The estimated parameter values $\hat{b}_{11}, \hat{b}_{12}, \hat{b}_{21}, \hat{b}_{22}, \hat{b}_{31}, \hat{b}_{32}$ versus k of Experiment 1 with L = 200 and $\sigma^2 = 1.00^2$.
Figure 7. The true outputs, estimated outputs, and their bias of Experiment 1 with $\sigma^2 = 0.50^2$.
Figure 8. The true outputs, estimated outputs, and their bias of Experiment 1 with $\sigma^2 = 1.00^2$.
Figure 9. The estimation errors of the LSI and GPHI algorithms versus k of Experiment 2 (L = 250, $\sigma^2 = 0.50^2$).
Figure 10. The estimation errors δ versus k of Experiment 2 with L = 250 ($\sigma^2 = 0.50^2$, $\sigma^2 = 1.00^2$).
Figure 11. The estimated parameter values $\hat{a}_1, \hat{a}_2, \hat{b}_{11}, \hat{b}_{12}, \hat{b}_{21}, \hat{b}_{22}, \hat{c}_1, \hat{f}_1$ versus k of Experiment 2 with L = 250 and $\sigma^2 = 0.50^2$.
Figure 12. The estimated parameter values $\hat{b}_{31}, \hat{b}_{32}, \hat{b}_{41}, \hat{b}_{42}, \hat{b}_{51}, \hat{b}_{52}$ versus k of Experiment 2 with L = 250 and $\sigma^2 = 0.50^2$.
Figure 13. The estimated parameter values $\hat{a}_1, \hat{a}_2, \hat{b}_{11}, \hat{b}_{12}, \hat{b}_{21}, \hat{b}_{22}, \hat{c}_1, \hat{f}_1$ versus k of Experiment 2 with L = 250 and $\sigma^2 = 1.00^2$.
Figure 14. The estimated parameter values $\hat{b}_{31}, \hat{b}_{32}, \hat{b}_{41}, \hat{b}_{42}, \hat{b}_{51}, \hat{b}_{52}$ versus k of Experiment 2 with L = 250 and $\sigma^2 = 1.00^2$.
Figure 15. The true outputs, estimated outputs, and their bias of Experiment 2 with $\sigma^2 = 0.50^2$.
Figure 16. The true outputs, estimated outputs, and their bias of Experiment 2 with $\sigma^2 = 1.00^2$.
Table 1. The estimated parameter values and estimation errors of Experiment 1 with L = 200 ($\sigma^2 = 0.50^2$, $\sigma^2 = 1.00^2$).

| σ² | k | a1 | a2 | b11 | b12 | b21 | b22 | b31 | b32 | c1 | f1 | δ (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.50² | 1 | -0.7107 | 0.5090 | 1.9694 | -1.0090 | -1.8753 | -1.0219 | 1.0790 | 0.5366 | 0.0000 | 0.0000 | 27.7831 |
| | 2 | -0.8717 | 0.6218 | 1.9681 | -1.3473 | -1.8160 | -0.7010 | 1.0268 | 0.3843 | 0.9248 | 0.0000 | 15.8395 |
| | 3 | -0.7982 | 0.5895 | 1.9816 | -1.1991 | -1.7996 | -0.8780 | 1.0396 | 0.4889 | 0.8053 | 0.0000 | 12.1355 |
| | 5 | -0.8052 | 0.5905 | 1.9772 | -1.2174 | -1.7890 | -0.8756 | 1.0339 | 0.4876 | 0.7704 | -0.2763 | 3.9074 |
| | 8 | -0.7905 | 0.5818 | 1.9615 | -1.1770 | -1.7950 | -0.9001 | 1.0299 | 0.4972 | 0.7374 | -0.4265 | 2.5213 |
| 1.00² | 1 | -0.5319 | 0.3512 | 1.9279 | -0.6311 | -1.9520 | -1.3414 | 1.1347 | 0.6812 | 0.0000 | 0.0000 | 40.4417 |
| | 2 | -1.1028 | 0.6513 | 1.9611 | -1.7779 | -1.8306 | -0.1742 | 1.0516 | 0.0000 | 1.1758 | 0.0000 | 42.0247 |
| | 3 | -0.7517 | 0.6354 | 1.9523 | -1.1405 | -1.8269 | -0.9556 | 1.1100 | 0.5334 | 0.6873 | 0.0000 | 16.0285 |
| | 5 | -0.8796 | 0.6296 | 1.9260 | -1.3454 | -1.8156 | -0.7463 | 1.0452 | 0.4368 | 0.9477 | 0.0000 | 17.9049 |
| | 8 | -0.8218 | 0.5847 | 1.9238 | -1.2333 | -1.7917 | -0.8439 | 1.0565 | 0.4663 | 0.7521 | -0.4519 | 4.0108 |
| True values | | -0.8000 | 0.6000 | 2.0000 | -1.2000 | -1.8000 | -0.9000 | 1.0000 | 0.5000 | 0.8000 | -0.4000 | |
Table 2. The estimation errors with different L and σ².

| Sampled data length L | 400 | 500 | 600 | 700 | 800 | 1000 |
|---|---|---|---|---|---|---|
| δ (%) (σ² = 0.50²) | 2.3931 | 2.591 | 2.0535 | 1.6165 | 1.9868 | 2.2789 |
| δ (%) (σ² = 1.00²) | 6.1781 | 5.9393 | 4.1059 | 4.3916 | 3.6913 | 4.5132 |
Table 3. The locations of the estimated key non-zero parameters of Experiment 1 with L = 200 and $\sigma^2 = 0.50^2$.

| Parameter | $\hat{a}_1$ | $\hat{a}_2$ | $\hat{b}_{11}$ | $\hat{b}_{12}$ | $\hat{b}_{21}$ | $\hat{b}_{22}$ | $\hat{b}_{31}$ | $\hat{b}_{32}$ | $\hat{c}_1$ | $\hat{f}_1$ |
|---|---|---|---|---|---|---|---|---|---|---|
| Location | 1 | 2 | 9 | 10 | 48 | 49 | 83 | 84 | 93 | 94 |
Table 4. The estimated parameter values and estimation errors of Experiment 2 with L = 250 ($\sigma^2 = 0.50^2$, $\sigma^2 = 1.00^2$).

| σ² | k | a1 | a2 | b11 | b12 | b21 | b22 | b31 | b32 | b41 | b42 | b51 | b52 | c1 | f1 | δ (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.50² | 1 | 0.7043 | 0.4193 | 1.9900 | -1.1505 | 1.4756 | -0.7222 | -0.9542 | 0.3833 | -1.2275 | 0.5137 | 0.9985 | -0.6643 | 0.0000 | 0.0000 | 24.0379 |
| | 2 | 0.5958 | 0.3868 | 1.9909 | -1.3471 | 1.4534 | -0.8632 | -0.9999 | 0.4732 | -1.1926 | 0.5949 | 1.0018 | -0.7892 | 0.8209 | 0.0000 | 11.8417 |
| | 3 | 0.5702 | 0.3731 | 1.9940 | -1.4102 | 1.4709 | -0.9009 | -1.0007 | 0.5398 | -1.1864 | 0.6208 | 0.9846 | -0.8065 | 0.6720 | -0.3408 | 3.8284 |
| | 5 | 0.5937 | 0.3974 | 1.9985 | -1.3644 | 1.4680 | -0.8740 | -1.0119 | 0.5066 | -1.1964 | 0.5860 | 0.9948 | -0.7824 | 0.6427 | -0.4213 | 2.6627 |
| | 8 | 0.5971 | 0.3944 | 1.9958 | -1.3538 | 1.4675 | -0.8662 | -1.0131 | 0.5092 | -1.1959 | 0.5802 | 0.9964 | -0.7793 | 0.6478 | -0.4059 | 2.4836 |
| 1.00² | 1 | 1.1487 | 0.5378 | 1.9923 | -0.3923 | 1.5176 | 0.0000 | -0.9743 | 0.0000 | -1.1962 | 0.0000 | 0.9823 | 0.0000 | 0.0000 | 0.0000 | 68.7931 |
| | 2 | 0.6610 | 0.3569 | 1.9489 | -1.2599 | 1.4150 | -0.7717 | -1.0095 | 0.3747 | -1.2543 | 0.5356 | 0.9832 | -0.7020 | 0.7977 | 0.0000 | 15.6424 |
| | 3 | 0.5342 | 0.3111 | 1.9775 | -1.5040 | 1.4303 | -0.9078 | -0.9906 | 0.5777 | -1.1896 | 0.6985 | 1.0027 | -0.8310 | 0.7349 | -0.2909 | 7.8178 |
| | 5 | 0.5616 | 0.3767 | 2.0078 | -1.4832 | 1.4422 | -0.9073 | -1.0159 | 0.5403 | -1.2081 | 0.6244 | 0.9810 | -0.8144 | 0.6377 | -0.4610 | 5.8361 |
| | 8 | 0.6041 | 0.3931 | 1.9989 | -1.3938 | 1.4410 | -0.8344 | -1.0317 | 0.5114 | -1.2076 | 0.5711 | 0.9894 | -0.7675 | 0.6391 | -0.4123 | 4.0277 |
| True values | | 0.6000 | 0.4000 | 2.0000 | -1.3000 | 1.5000 | -0.9000 | -1.0000 | 0.5000 | -1.2000 | 0.6000 | 1.0000 | -0.8000 | 0.7000 | -0.4000 | |
Table 5. The parameter estimation errors with different L and σ².

| Sampled data length L | 300 | 400 | 500 | 600 | 700 | 800 | 1000 |
|---|---|---|---|---|---|---|---|
| δ (%) (σ² = 0.50²) | 2.3844 | 1.7831 | 1.9832 | 2.5122 | 2.4627 | 2.388 | 1.8807 |
| δ (%) (σ² = 1.00²) | 3.641 | 3.2164 | 4.055 | 4.7375 | 4.5741 | 4.1934 | 3.2887 |
Table 6. The locations of the estimated key non-zero parameters of Experiment 2 with $\sigma^2 = 0.50^2$ and L = 250.

| Parameter | $\hat{a}_1$ | $\hat{a}_2$ | $\hat{b}_{11}$ | $\hat{b}_{12}$ | $\hat{b}_{21}$ | $\hat{b}_{22}$ | $\hat{b}_{31}$ | $\hat{b}_{32}$ | $\hat{b}_{41}$ | $\hat{b}_{42}$ | $\hat{b}_{51}$ | $\hat{b}_{52}$ | $\hat{c}_1$ | $\hat{f}_1$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Location | 1 | 2 | 12 | 13 | 76 | 77 | 118 | 119 | 183 | 184 | 220 | 221 | 253 | 254 |
