Article

Distributed Moving Horizon Fusion Estimation for Nonlinear Constrained Uncertain Systems

School of Automation, Qingdao University, Qingdao 266100, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(6), 1507; https://doi.org/10.3390/math11061507
Submission received: 20 February 2023 / Revised: 14 March 2023 / Accepted: 17 March 2023 / Published: 20 March 2023

Abstract

This paper studies the state estimation of a class of distributed nonlinear systems. A new robust distributed moving horizon fusion estimation (DMHFE) method is proposed to deal with norm-bounded uncertainties and guarantee estimation performance. Based on the relationship between the state covariance matrix and the error covariance matrix, estimated values of the unknown parameters in the system model are obtained. Then, a local moving horizon estimation optimization problem is constructed using each sensor node's own measurements, the measurements of adjacent nodes and the prior state estimates. By solving this nonlinear optimization problem, a local optimal state estimate is obtained. Next, based on a covariance intersection (CI) fusion strategy, the local optimal state estimates sent to the fusion center are fused to derive the overall optimal state estimate. Furthermore, sufficient conditions for the convergence of the squared norm of the fusion estimation error are given. Finally, a simulation example is employed to demonstrate the effectiveness of the proposed algorithm.

1. Introduction

In recent years, multi-sensor fusion estimation problems have been the focus of intense research. In traditional estimation problems, information is obtained from a single sensor [1], which has inherent limitations. With the development of science and technology, ever-higher precision requirements are being placed on various industrial control processes, and multi-sensor fusion estimation technology has been widely used in real environments, including intelligent robot systems, driverless cars, target tracking and signal processing [2,3,4,5,6]. In addition, multi-sensor systems are easily affected by external disturbances during operation, which introduces uncertainties into the system model parameters [7,8]. Therefore, it is of great theoretical significance and engineering application value to study the distributed state fusion estimation of uncertain systems.
At present, there are three basic approaches to state estimation with multi-sensor information fusion: centralized fusion, distributed fusion and sequential fusion. Although centralized fusion can achieve globally optimal fusion estimation, it involves a heavy computational burden and has poor fault tolerance. Sequential fusion is more convenient for handling asynchronously sampled systems [9]. Distributed fusion has attracted wide attention because of its parallel structure and because it facilitates fault detection and isolation. In [10], an event-triggered signal selection method was proposed to deal with network-induced packet loss and with packet disorder caused by random transmission delays. Reference [11] put forward an algorithm that obtains consistent local output estimates when the underlying model is unknown but fixed. The authors of [12] proposed a distributed fusion Kalman filter (DFKF) algorithm based on an optimal weighted fusion criterion. In [13], the cross-covariance matrix of the estimation error between any two local predictors was derived, and the steady-state characteristics of these variables were analyzed. In [14], a distributed extended Kalman filter algorithm based on sensor networks was proposed to handle norm-bounded uncertain parameters in nonlinear systems. Unfortunately, the algorithms used in the above-mentioned studies are all based on Kalman or extended Kalman filters, which place restrictive requirements on the noise statistics and cannot account for system constraints. Therefore, in this study we adopt a moving horizon estimation method, which can handle system constraints and achieves high estimation accuracy for complex nonlinear systems.
Moving horizon estimation (MHE) is a finite-horizon estimation method that has developed rapidly in recent years. By introducing a performance index, the optimization problem is solved over a fixed-length window. It is characterized by receding-horizon optimization and explicit handling of constraints, and it places no special requirements on the statistical characteristics of the noise [15,16]. At present, moving horizon estimation has been widely applied to chemical process state estimation [17], fault detection [18], system identification [19], switching systems [20,21], large-scale systems [22,23,24], networked systems [25], and uncertain and distributed systems [25,26,27,28,29,30,31,32,33,34]. In [22], two distributed moving horizon estimation algorithms were proposed that can deal with disturbance and noise constraints. In [23], an event-triggered mechanism was introduced to govern the evaluation of the estimators and the network information exchange between the device and the estimators, providing good estimation performance while reducing processor usage and network communication frequency. In [30], a distributed moving horizon estimation method was proposed for nonlinear constrained systems; the dynamics of the state variables that cannot be observed by each sensor are described, and conditions for local information exchange are provided. In [34], the robust estimation problem for linear multi-sensor systems subject to additive and multiplicative uncertainties was studied, and a moving horizon H∞ fusion estimation algorithm was proposed. However, none of the above studies consider uncertain parameters in distributed systems: when uncertain parameters are present, the traditional moving horizon estimation algorithm cannot satisfy the stability requirements of the system, which reduces its estimation efficiency. Therefore, there is a need to improve the original algorithm.
Based on the above analysis, this paper presents a distributed moving horizon fusion estimation (DMHFE) for nonlinear constrained uncertain systems. The main innovations are as follows:
(1)
By constructing an augmented system, the relationship between the estimation error covariance and the state covariance of the augmented system is established, and the estimated values of the uncertain parameters are obtained by optimizing the upper bound of the error covariance matrix.
(2)
By introducing a covariance intersection fusion estimation strategy and considering the fusion estimation error of the system, a sufficient condition for the square convergence of local estimation error norm is given. Moreover, the relationship between fusion estimation error and local estimation error is derived.
The structure of this article is as follows. Section 2 introduces the distributed moving horizon estimation problem to be solved and gives the expression of the local performance index. Section 3 gives the estimated values of the uncertain system parameters. Section 4 presents the fusion estimation criterion and the calculation method of the weighting coefficients. Section 5 provides the stability analysis. Section 6 presents numerical simulation examples. The conclusions are summarized in Section 7.
Notation: For a column vector $\nu$, the notation $\|\nu\|_Y^2$ stands for $\nu^T Y \nu$, where $Y$ is a symmetric positive semidefinite matrix and $\nu^T$ represents the transpose of $\nu$. $\mathbb{R}^m$ represents the $m$-dimensional Euclidean space. The matrix $\mathrm{diag}\{m_1, \dots, m_n\}$ is block diagonal with blocks $m_1, \dots, m_n$, $I$ represents the identity matrix, and $\|\cdot\|$ stands for the Euclidean norm. $E\{\cdot\}$ represents the expectation of a random variable. For any matrix $S$, $S > 0$ ($S \geq 0$) means that $S$ is a positive definite (positive semidefinite) matrix, and $\lambda_{\max}(M)$, $\lambda_{\min}(M)$ represent the maximum and minimum singular values of the matrix $M$, respectively.

2. Problem Description

Consider the following distributed nonlinear system model with uncertain parameters:
$$x_{t+1} = f(x_t) + \omega_t, \qquad y_t^i = h^i(x_t) + v_t^i, \quad i = 1, 2, \dots, L$$
where $x_t \in \mathbb{R}^n$ is the system state and $y_t^i \in \mathbb{R}^m$ is the measured value of sensor $i$. $\Delta f_t^i = M_1^i \Upsilon_{1,t}^i S_1^i$, where $\Upsilon_{1,t}^i$ is a Lebesgue-measurable unknown function matrix satisfying $\Upsilon_{1,t}^i (\Upsilon_{1,t}^i)^T \leq I$; $I$ is the identity matrix with appropriate dimensions, and $M_1^i$ and $S_1^i$ are known matrices with appropriate dimensions. $\omega_t$ and $v_t^i$ represent the process noise and measurement noise of the system, respectively, with the following statistical characteristics:
$$E\{\omega_t\} = 0, \quad E\{v_t^i\} = 0, \quad E\{\omega_t (v_t^i)^T\} = 0, \quad E\{\omega_s (\omega_t)^T\} = Q\,\delta_{st}, \quad E\{v_s^i (v_t^i)^T\} = R^i\,\delta_{st}, \quad t = 0, 1, 2, \dots$$
where $\delta_{st}$ is the Kronecker delta, and $Q$ and $R^i$ represent the covariance matrices of the process noise and measurement noise, respectively, satisfying $Q > 0$, $R^i > 0$. Next, we discuss the proposed distributed moving horizon fusion estimator design, involving $L$ sensors, for the distributed nonlinear system (1). A schematic of this algorithm is shown in Figure 1.
Table 1 shows the meaning of each item in Equation (1). As shown in Figure 1, sensor $i$ ($i = 1, 2, \dots, L$) sends its measured value $y_t^i$ to local estimator $i$, and the local optimal estimate $\hat{x}_t^{i*}$ is obtained by solving a nonlinear optimization problem. Then $\hat{x}_t^{i*}$ ($i = 1, 2, \dots, L$) is sent to the fusion center through the communication network, and the fusion estimate $\hat{x}_t^{*}$ is obtained using the covariance intersection fusion estimation strategy. In order to realize the above distributed fusion estimation, the following local moving horizon estimation optimization problem is presented:
 Problem 1. 
$$\min_{\hat{x}_{t-N|t}^{i}} J_t^i(\hat{x}_{t-N|t}^{i})$$
$$J_t^i(\hat{x}_{t-N|t}^{i}) = \mu_i \big\|\hat{x}_{t-N|t}^{i} - \tilde{x}_{t-N}^{i}\big\|^2_{(\tilde{P}_{t-N}^{i})^{-1}} + \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|\hat{x}_{t-N|t}^{i} - \tilde{x}_{t-N}^{j}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + \sum_{k=t-N}^{t} \big\|y_k^i - \hat{h}^i(\hat{x}_{k|t}^{i})\big\|^2_{(R^i)^{-1}} + \sum_{k=t-N}^{t} \sum_{j \in \mathbb{N}_i} \big\|y_k^j - \hat{h}^j(\hat{x}_{k|t}^{i})\big\|^2_{(R^j)^{-1}}$$
with the following constraints:
$$\mathbb{X} = \{x : \|x\| \leq \theta_x\}, \qquad \mathbb{V}^i = \{v : \|v\| \leq \theta_v^i\}, \qquad \mathbb{W} = \{\omega : \|\omega\| \leq \theta_\omega\}$$
where $\mathbb{X}$, $\mathbb{V}^i$, $\mathbb{W}$ are convex polyhedral sets defining the corresponding constraints on the state, the measurement noise and the process noise.
Table 1. The symbolic nomenclature of Formula (1).
$i$ : the local sensor
$L$ : the number of sensors
$\mathbb{N}_i$ : the set of sensors adjacent to sensor $i$
$f(x_t) = \hat{f}^i(x_t) + \Delta f_t^i x_t$ : the unknown nonlinear function
$\hat{f}^i(\cdot)$ : the known nominal nonlinear dynamic function
$h^i(\cdot)$ : the known nominal nonlinear observation function
$\Delta f_t^i$ : the system model error
In Equation (2), a moving window of fixed length $N+1$ is considered, $\Omega_t = \{t-N, t-N+1, \dots, t\}$, and $J_t^i(\hat{x}_{t-N|t}^{i})$ represents the performance index of the local optimization problem. $\mu$ denotes the weight coefficients, satisfying $\mu_i + \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j = 1$. The first two terms represent the arrival cost, which summarizes the impact of the data before time $t-N$ on the current data; $\tilde{P}_{t-N}^{i}$ is a symmetric positive definite weight matrix quantifying the confidence in the prior state estimate $\tilde{x}_{t-N}^{i}$, and it penalizes the distance between the initial state estimate in the moving window $\Omega_t$ and the prior state estimate $\tilde{x}_{t-N}^{i}$. It is noted that in traditional moving horizon estimation algorithms, the arrival cost usually takes the following form:
$$\big\|\hat{x}_{t-N|t}^{i} - \bar{x}_{t-N}^{i}\big\|^2_{(P_{t-N}^{i})^{-1}}$$
However, in the distributed setting, in order to ensure observability of the entire state vector, we reconstruct the arrival cost as a fusion arrival cost obtained by a weighted average of the local arrival cost of node $i$ and the local arrival costs of its in-neighbors $j$, corresponding to the first two terms in (2), respectively. In the third term, the distance between the output expected from the state estimate and the actual measurement is weighted by $(R^i)^{-1}$. Finally, the fourth term represents the case in which, at time $t$, sensor $i$ may receive measurement information $y_t^j$ from an in-neighbor node $j \in \mathbb{N}_i$.
By solving optimization Problem 1, the optimal state estimation value x ^ t N | t i * is obtained. It is worth noting that the estimated state at other times in optimization Problem 1 can be calculated using the following formula:
$$\hat{x}_{g+1|t}^{i*} = f(\hat{x}_{g|t}^{i*}) + \Delta\hat{f}_t^i \hat{x}_{g|t}^{i*} + \hat{K}_t^i \big(y_t^i - \hat{h}^i(\hat{x}_{g|t}^{i*})\big) + \hat{Z}_t^i \sum_{j \in \mathbb{N}_i} (\tilde{P}_t^j)^{-1} \mu_j \big(\hat{x}_{g|t}^{i*} - \tilde{x}_{g|t}^{j*}\big), \qquad g = t-N, \dots, t-1$$
where Δ f ^ t i , K ^ t i , Z ^ t i and P ˜ t j are matrices with appropriate dimensions. Similarly, the prior estimated state at time t N can also be calculated using (3). Before solving the above local optimization problem, unknown parameters in (3) must be solved.
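To make the structure of Problem 1 concrete, the following is a minimal sketch of a single-node local MHE solver. It is an illustration only, not the authors' implementation: the nominal functions f_hat and h_hat, the prior weighting and the measurement window are placeholders, the neighbor terms and the constraint sets $\mathbb{X}$, $\mathbb{V}^i$, $\mathbb{W}$ of Problem 1 are omitted, and a general-purpose unconstrained solver is used.

```python
import numpy as np
from scipy.optimize import minimize

def local_mhe_cost(x0, f_hat, h_hat, y_win, x_prior, P_prior_inv, R_inv, mu):
    """Single-node version of the cost (2): arrival cost plus weighted
    measurement residuals over the window of N+1 samples."""
    e0 = np.atleast_1d(x0 - x_prior)
    cost = mu * e0 @ np.atleast_2d(P_prior_inv) @ e0     # arrival cost term
    x = np.asarray(x0, dtype=float)
    for y_k in y_win:                                     # measurements y_{t-N}, ..., y_t
        r = np.atleast_1d(y_k - h_hat(x))                 # predicted-output residual
        cost += r @ np.atleast_2d(R_inv) @ r
        x = f_hat(x)                                      # propagate with nominal dynamics
    return cost

def solve_local_mhe(f_hat, h_hat, y_win, x_prior, P_prior_inv, R_inv, mu=1.0):
    res = minimize(local_mhe_cost, x_prior,
                   args=(f_hat, h_hat, y_win, x_prior, P_prior_inv, R_inv, mu))
    return res.x                                          # estimate of the window-initial state
```

In practice the neighbor arrival-cost and measurement terms of (2) would be added to the same cost, and the constraints would be enforced by a constrained solver.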
 Remark 1. 
From (3), one can find that the window estimation model consists of four components. Therein, the last three terms are designed to improve estimation performance [14]. So the key of this paper is to find the optimal gain matrices Δ f ^ t i * , K ^ t i * , Z ^ t i * and the upper bound of error covariance P ˜ t j .

3. Estimation of the Uncertain Parameters in the System

Considering the system model (1), this section proves that the estimation error covariance matrix is bounded even when uncertain parameters exist in the system model, and gives the estimated values of the uncertain parameters.
At time $t$, to handle the linearization errors and norm-bounded uncertainties together, $\hat{f}^i(x_t)$ and $h^i(x_t)$ are expanded in Taylor series about $\hat{x}_t^i$:
$$\hat{f}^i(x_t) = \hat{f}^i(\hat{x}_t^i) + B_t^i (x_t - \hat{x}_t^i) + \tilde{B}_t^i (x_t - \hat{x}_t^i), \qquad h^i(x_t) = \hat{h}^i(\hat{x}_t^i) + C_t^i (x_t - \hat{x}_t^i) + \tilde{C}_t^i (x_t - \hat{x}_t^i)$$
where $B_t^i = \left.\frac{\partial \hat{f}^i(x_t)}{\partial x_t}\right|_{x_t = \hat{x}_t^i}$, $C_t^i = \left.\frac{\partial h^i(x_t)}{\partial x_t}\right|_{x_t = \hat{x}_t^i}$, $\tilde{B}_t^i = M_2^i \Upsilon_{2,t}^i S_2^i$ and $\tilde{C}_t^i = M_3^i \Upsilon_{3,t}^i S_3^i$; the Lebesgue-measurable unknown function matrices $\Upsilon_{2,t}^i$, $\Upsilon_{3,t}^i$ satisfy $\Upsilon_{2,t}^i (\Upsilon_{2,t}^i)^T \leq I$, $\Upsilon_{3,t}^i (\Upsilon_{3,t}^i)^T \leq I$; and $M_2^i$, $S_2^i$, $M_3^i$, $S_3^i$ are known matrices with appropriate dimensions.
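The linearization (4) requires the Jacobians $B_t^i$ and $C_t^i$ of the nominal functions evaluated at the current estimate. When closed-form derivatives are inconvenient, they can be approximated numerically; the helper below is a generic forward-difference sketch, an assumption made for illustration rather than part of the paper's derivation.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Forward-difference Jacobian of func at x; with func = f_hat it plays the
    role of B_t^i and with func = h_hat that of C_t^i in (4)."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(func(x))
    J = np.zeros((f0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x)
        dx[k] = eps
        J[:, k] = (np.atleast_1d(func(x + dx)) - f0) / eps
    return J

# Example: B = numerical_jacobian(f_hat, x_hat); C = numerical_jacobian(h_hat, x_hat)
```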
Following [14], the gain matrices $\Delta\hat{f}_t^i$, $\hat{K}_t^i$, $\hat{Z}_t^i$ from (3) and $\tilde{P}_t^j$ in the performance index (2) should be determined. Therefore, we construct an augmented system and derive the upper bound of the estimation error covariance matrix of system (1) by finding the upper bound of the state covariance matrix of the augmented system. Letting $\bar{x}_{t+1}^i = [(x_{t+1})^T \; (\hat{x}_{t+1}^i)^T]^T$, the augmented system is written as:
$$\bar{x}_{t+1}^i = (\bar{A}_t^i + \Delta\bar{A}_t^i)\,\bar{x}_t^i + \bar{f}_t^i + \bar{\omega}_t^i + \bar{Z}_t^i$$
with
$$\bar{A}_t^i = \begin{bmatrix} B_t^i & -B_t^i \\ \hat{K}_t^i C_t^i & \Delta\hat{f}_t^i - \hat{K}_t^i C_t^i \end{bmatrix}, \qquad \Delta\bar{A}_t^i = \begin{bmatrix} \Delta f_t^i + \tilde{B}_t^i & -\tilde{B}_t^i \\ \hat{K}_t^i \tilde{C}_t^i & -\hat{K}_t^i \tilde{C}_t^i \end{bmatrix}$$
$$\bar{f}_t^i = \begin{bmatrix} \hat{f}^i(\hat{x}_t^i) \\ \hat{f}^i(\hat{x}_t^i) \end{bmatrix}, \qquad \bar{\omega}_t^i = \begin{bmatrix} \omega_t \\ \hat{K}_t^i v_t^i \end{bmatrix}, \qquad \bar{Z}_t^i = \begin{bmatrix} 0 \\ \hat{Z}_t^i \sum_{j \in \mathbb{N}_i} (\tilde{P}_t^j)^{-1} \mu_j (\hat{x}_t^i - \tilde{x}_t^j) \end{bmatrix}$$
The state covariance matrix of the augmented system is defined as P ¯ t + 1 i = E x ¯ t + 1 i ( x ¯ t + 1 i ) T and the estimation error covariance matrix for sensor i at time t + 1 is defined as P t + 1 i = E ( x t + 1 x ^ t + 1 i ) ( x t + 1 x ^ t + 1 i ) T . It is noted that the error covariance matrix and the augmented system state covariance matrix should satisfy the following lemma:
 Lemma 1. 
Given the error covariance matrix $P_{t+1}^i$ and the augmented system state covariance matrix $\bar{P}_{t+1}^i$, the estimated values of the uncertain parameters are
Δ f ^ t i * = U 3 , t , i 1 ( B t i ) T ( C t i ) T ( K ^ t i ) T U 2 , t , i U 3 , t , i
K ^ t i * = 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 2 B t i C t i U 1 , t , i × ( 1 + β 1 , i + β 2 , i ) 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T × U 3 , t , i 1 U 2 , t , i C t i + ( C t i ) T U 2 , t , i + 2 ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i 1
Z ^ t i * = 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 2 B t i C t i U 1 , t , i × ( 1 + β 1 , i + β 2 , i ) 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T × U 3 , t , i 1 U 2 , t , i C t i + ( C t i ) T U 2 , t , i + 2 ( 2 + β 1 , i 1 ) Ψ t i 1
which means the following inequality
$$P_{t+1}^i = \begin{bmatrix} I & -I \end{bmatrix} \bar{P}_{t+1}^i \begin{bmatrix} I & -I \end{bmatrix}^T \leq \tilde{P}_{t+1}^i$$
holds. Here, Ψ t i = c ¯ i j i ( P ˜ t j ) 1 , and c ¯ i is the number of sensors adjacent to sensor i.
 Proof. 
By applying ([14], Lemma 1),  the state covariance matrix of the augmented system can be transformed into the following form:
$$\bar{P}_{t+1}^i = E\big\{\bar{x}_{t+1}^i (\bar{x}_{t+1}^i)^T\big\} \leq (1 + \beta_{1,i} + \beta_{2,i})(\bar{A}_t^i + \Delta\bar{A}_t^i)\, E\big\{\bar{x}_t^i (\bar{x}_t^i)^T\big\}\, (\bar{A}_t^i + \Delta\bar{A}_t^i)^T + (2 + \beta_{2,i}^{-1})\, \bar{f}_t^i (\bar{f}_t^i)^T + (2 + \beta_{1,i}^{-1})\, E\big\{\bar{Z}_t^i (\bar{Z}_t^i)^T\big\} + E\big\{\bar{\omega}_t^i (\bar{\omega}_t^i)^T\big\}$$
where β 1 , i , β 2 , i are appropriate positive constants. It is noted that there are four terms on the right side of inequality (7); every term will be analyzed separately.
For the sake of analysis, Δ A ¯ t i can be written as follows:
Δ A ¯ t i = M 1 i Υ 1 , t i S 1 i + M 2 i Υ 2 , t i S 2 i M 2 i Υ 2 , t i S 2 i K ^ t i M 3 i Υ 3 , t i S 3 i K ^ t i M 3 i Υ 3 , t i S 3 i = M 1 i 0 Υ 1 , t i S 1 i 0 + M 2 i 0 Υ 2 , t i S 2 i 0 + M 2 i 0 Υ 2 , t i 0 S 2 i + K ^ t i M 3 i K ^ t i M 3 i Υ 3 , t i 0 S 3 i
where $S_1^i$, $S_2^i$, $S_3^i$ are identity matrices of appropriate dimensions. Then, the following formula is obtained by applying ([14], Lemma 2):
( A ¯ t i + Δ A ¯ t i ) E x ¯ t i ( x ¯ t i ) T ( A ¯ t i + Δ A ¯ t i ) T = A ¯ t i U ¯ 1 , t , i U ¯ 2 , t , i U ¯ 2 , t , i T U ¯ 3 , t , i ( A ¯ t i ) T + ρ 1 , i 1 M 1 i ( M 1 i ) T 0 0 0 + 2 ρ 2 , i 1 M 2 i ( M 2 i ) T 0 0 0 + ρ 3 , i 1 I I I I K ^ t i M 3 i ( M 3 i ) T ( K ^ t i ) T
with the appropriate positive constants ρ 1 , i , ρ 2 , i , ρ 3 , i , and
U ¯ 1 , t , i U ¯ 2 , t , i U ¯ 2 , t , i T U ¯ 3 , t , i = P ¯ t i ρ 1 , i I ρ 1 , i I ρ 1 , i I ( ρ 1 , i + 2 ρ 2 , i + ρ 3 , i ) I 1 .
For the fourth term on the right-hand side of inequality (7), the equation is
$$E\big\{\bar{\omega}_t^i (\bar{\omega}_t^i)^T\big\} = \begin{bmatrix} Q & 0 \\ 0 & \hat{K}_t^i R^i (\hat{K}_t^i)^T \end{bmatrix}.$$
For the third term on the right of inequality (7), based on the basic knowledge of graph theory, we obtain
E Z ¯ t i ( Z ¯ t i ) T 0 0 0 Z ^ t i E j i ( P ˜ t j ) 1 μ j ( x ^ t i x ˜ t j ) × j i ( P ˜ t j ) 1 μ j ( x ^ t i x ˜ t j ) T ( Z ^ t i ) T 0 0 0 Z ^ t i Ψ ¯ t i ( Z ^ t i ) T
where $\bar{\Psi}_t^i = \bar{c}_i \sum_{j \in \mathbb{N}_i} (\tilde{P}_t^j)^{-1} P_t^j \big((\tilde{P}_t^j)^{-1}\big)^T$.
To sum up, (7) can be written as:
P ¯ t + 1 i ( 1 + β 1 , i + β 2 , i ) A ¯ t i U ¯ 1 , t , i U ¯ 2 , t , i U ¯ 2 , t , i T U ¯ 3 , t , i ( A ¯ t i ) T + ( 2 + β 1 , i 1 ) 0 0 0 Z ^ t i Ψ ¯ t i ( Z ^ t i ) T + O t i + 0 0 0 I K ^ t i ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i ( K ^ t i ) T
where
O t i = ( 2 + β 2 , i 1 ) f ¯ t i ( f ¯ t i ) T + Q 0 0 0 + ( 1 + β 1 , i + β 2 , i ) ρ 1 , i 1 M 1 i ( M 1 i ) T 0 0 0 + 2 ( 1 + β 1 , i + β 2 , i ) ρ 2 , i 1 M 2 i ( M 2 i ) T 0 0 0
It is noted from (10) that:
P ^ t + 1 i = ( 1 + β 1 , i + β 2 , i ) A ¯ t i U 1 , t , i U 2 , t , i U 2 , t , i T U 3 , t , i ( A ¯ t i ) T + ( 2 + β 1 , i 1 ) 0 0 0 Z ^ t i Ψ t i ( Z ^ t i ) T + 0 0 0 I K ^ t i ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i ( K ^ t i ) T + O t i
and
U 1 , t , i U 2 , t , i U 2 , t , i T U 3 , t , i = ( P ˜ t i ) 1 ρ 1 , i I ρ 1 , i I ρ 1 , i I ( ρ 1 , i + 2 ρ 2 , i + ρ 3 , i ) I 1 .
By applying ([14], Lemma 3), if the initial condition on the state covariance matrix $\bar{P}_0^i \leq \hat{P}_0^i$ is satisfied with positive definite matrices $U_{1,t,i}$, $U_{2,t,i}$, $U_{3,t,i}$, then, in turn, the following inequality is satisfied:
$$\bar{P}_t^i \leq \hat{P}_t^i$$
Considering the relationship between the error covariance matrix and the augmented system state covariance matrix, it follows that
P t + 1 i = I I P ¯ t + 1 i I I T I I P ^ t + 1 i I I T = P ˜ t + 1 i = ( 1 + β 1 , i + β 2 , i ) ( B t i ) T Π U 1 , t , i + Ξ U 2 , t , i T Π U 2 , t , i + Ξ U 3 , t , i T ( C t i ) T ( K ^ t i ) T Π U 1 , t , i Ξ U 2 , t , i T Λ Π U 2 , t , i Ξ U 3 , t , i T + ( 2 + β 1 , i 1 ) Z ^ t i Ψ t i ( Z ^ t i ) T + Γ t i
where
Π = B t i K ^ t i C t i Ξ = K ^ t i C t i B t i Δ f ^ t i Λ = ( Δ f ^ t i ) T ( C t i ) T ( K ^ t i ) T Γ t i = 2 ( 2 + β 2 , i 1 ) f ¯ t i ( f ¯ t i ) T + Q + ( 1 + β 1 , i + β 2 , i ) ρ 1 , i 1 M 1 i ( M 1 i ) T + 2 ( 1 + β 1 , i + β 2 , i ) ρ 1 , i 1 M 2 i ( M 2 i ) T + K ^ t i ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i ( K ^ t i ) T .
Then, by calculating the partial derivative of the trace of (15) with respect to Δ f ^ t i , we obtain
P ˜ t + 1 i Δ f ^ t i = 2 B t i U 2 , t , i 2 B t i U 3 , t , i 2 ( C t i ) T ( K ^ t i ) T U 2 , t , i + 2 ( C t i ) T ( K ^ t i ) T U 2 , t , i = 0 .
Similarly, by calculating the partial derivative of the trace of P ˜ t + 1 i with respect to K ^ t i , Z ^ t i , it follows that
P ˜ t + 1 i K ^ t i = K ^ t i ( 1 + β 1 , i + β 2 , i ) × 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T U 3 , t , i 1 U 2 , t , i × C t i + ( C t i ) T U 2 , t , i + 2 ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i 2 B t i C t i U 1 , t , i + 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 = 0
P ˜ t + 1 i Z ^ t i = Z ^ t i ( 1 + β 1 , i + β 2 , i ) × 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T U 3 , t , i 1 U 2 , t , i C t i + ( C t i ) T U 2 , t , i + 2 ( 2 + β 1 , i 1 ) Ψ t i 2 B t i C t i U 1 , t , i + 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 = 0
Through calculation, the estimated values of the gain matrix are derived by
Δ f ^ t i * = U 3 , t , i 1 ( B t i ) T ( C t i ) T ( K ^ t i ) T U 2 , t , i U 3 , t , i
K ^ t i * = 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 2 B t i C t i U 1 , t , i × ( 1 + β 1 , i + β 2 , i ) 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T × U 3 , t , i 1 U 2 , t , i C t i + ( C t i ) T U 2 , t , i + 2 ( 1 + β 1 , i + β 2 , i ) ρ 3 , i 1 M 3 i ( M 3 i ) T + R i 1
Z ^ t i * = 2 ( C t i ) T U 2 , t , i T U 2 , t , i B t i U 3 , t , i 1 2 B t i C t i U 1 , t , i × ( 1 + β 1 , i + β 2 , i ) 2 ( C t i ) T U 1 , t , i C t i ( C t i ) T U 2 , t , i C t i 2 ( C t i ) T U 2 , t , i T × U 3 , t , i 1 U 2 , t , i C t i + ( C t i ) T U 2 , t , i + 2 ( 2 + β 1 , i 1 ) Ψ t i 1
Based on the above analysis, we obtained the upper bound of the estimated error covariance matrix by deriving the relationship between the error covariance matrix and the augmented system state covariance matrix. Through optimizing the upper bound, estimates of unknown parameters were obtained, thus ensuring the feasibility of optimization Problem 1. In the following section, a fusion estimation criterion will be designed to compensate for the uncertainty of single sensor measurement.   □
 Remark 2. 
It is noted that in (4), the same idea as the extended Kalman filter is used to carry out Taylor expansion of the nonlinear function, and linearization error is introduced. However, different from the conventional extended Kalman filter, here a time-varying matrix Υ 2 , t i , Υ 3 , t i considering linearization error and norm-bounded uncertainty is introduced, and then the uncertainty problem is embedded into the solution of nonlinear optimization Problem 1 to compensate for the linearization error.

4. Covariance Intersection Fusion Estimator

In distributed linear systems, considering the real-time requirements of estimation, the scalar-weighted linear minimum-variance fusion estimation criterion is often adopted, which requires calculating the cross-covariance matrix between any two adjacent nodes. However, for distributed nonlinear systems, it is very difficult to obtain these cross-covariance matrices. Therefore, based on the algorithm in [35], a covariance intersection (CI) fusion estimation algorithm is adopted. Furthermore, the following lemma is given:
 Lemma 2 
([35]). Given an $n$-dimensional random vector $x_{t-N} \in \mathbb{R}^n$ and unbiased estimates $\hat{x}_{t-N|t}^{i*}$ ($i = 1, 2, \dots, L$), and given that $\tilde{P}_{t-N}^{i}$ can be calculated using (15), a covariance intersection (CI) fusion estimate is given by:
$$\hat{x}_{t-N|t}^{*} = \tilde{P}_{t-N} \sum_{i=1}^{L} \mu_i (\tilde{P}_{t-N}^{i})^{-1} \hat{x}_{t-N|t}^{i*}$$
$$\tilde{P}_{t-N}^{-1} = \sum_{i=1}^{L} \mu_i (\tilde{P}_{t-N}^{i})^{-1}$$
$$0 \leq \mu_i \leq 1, \qquad \sum_{i=1}^{L} \mu_i = 1$$
where the scalar μ i can be obtained from the following formula:
$$\min_{\mu_i \in [0,1],\; \sum_{i=1}^{L} \mu_i = 1} \mathrm{tr}\big(\tilde{P}_{t-N}\big)$$
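The lemma above amounts to an inverse-covariance weighted combination whose weights minimize the trace of the fused covariance. The following is a minimal sketch of this CI fusion rule; the use of scipy's SLSQP solver for the weight search is an assumption made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def ci_fuse(estimates, covariances):
    """Covariance intersection fusion (cf. (20)-(22)): the weights mu_i minimize
    the trace of the fused covariance subject to mu_i in [0, 1], sum(mu_i) = 1."""
    L = len(estimates)
    P_invs = [np.linalg.inv(P) for P in covariances]

    def fused_cov(mu):
        return np.linalg.inv(sum(m * Pi for m, Pi in zip(mu, P_invs)))

    res = minimize(lambda mu: np.trace(fused_cov(mu)),
                   x0=np.full(L, 1.0 / L),
                   bounds=[(0.0, 1.0)] * L,
                   constraints=[{"type": "eq", "fun": lambda mu: np.sum(mu) - 1.0}],
                   method="SLSQP")
    mu = res.x
    P_fused = fused_cov(mu)
    x_fused = P_fused @ sum(m * Pi @ x for m, Pi, x in zip(mu, P_invs, estimates))
    return x_fused, P_fused, mu
```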
Based on the above analysis, the above distributed moving horizon fusion estimation method can be summarized in Algorithm 1.   
Algorithm 1: Distributed moving horizon fusion estimation
1: When $t > N$, the quantities $\tilde{x}_0^i$, $\hat{P}_0^i$, $Q$, $R^i$ and $N$ are initialized;
2: At time $t-N$, $\Delta\hat{f}_t^i$, $\hat{K}_t^i$ and $\hat{Z}_t^i$ are calculated using (4), (13) and (17)–(19);
3: The prior state estimate $\tilde{x}_{t-N}^i$ is calculated using (3), and $\hat{P}_{t-N}^i$ is calculated using (12);
4: Optimization Problem 1 is solved to obtain the optimal state estimate $\hat{x}_{t-N|t}^{i*}$;
5: The state estimate at the current moment is calculated using (3);
6: The fusion estimate $\hat{x}_{t-N|t}^{*}$ is calculated using (20)–(22);
7: At the next time instant, the calculation is repeated based on the new measurements; return to Step 2.
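For orientation, one cycle of Algorithm 1 can be summarized in the following high-level sketch. The callables named here are placeholders standing in for the corresponding equations of Sections 2–4; this is an outline of the data flow, not a runnable reference implementation of the paper's estimator.

```python
def dmhfe_step(local_solvers, ci_fuse):
    """One cycle of Algorithm 1 (sketch). `local_solvers` is a list of callables,
    one per sensor node; each is assumed to run Steps 2-5 internally (gain and
    covariance update, prior propagation, local MHE solve) and to return the
    pair (local estimate, local covariance bound). `ci_fuse` performs Step 6."""
    local_estimates, local_covs = [], []
    for solve in local_solvers:
        x_loc, P_loc = solve()              # Steps 2-5 at one node
        local_estimates.append(x_loc)
        local_covs.append(P_loc)
    # Step 6: covariance intersection fusion at the fusion center
    x_fused, P_fused, mu = ci_fuse(local_estimates, local_covs)
    return x_fused, P_fused
```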
 Remark 3. 
When t N , the full information moving horizon estimation problem should be solved to obtain the local optimal state estimate. The specific solution process can be referred to in [31].
 Remark 4. 
The method presented in this paper is different from the traditional moving horizon estimation; the error covariance matrix is not fixed, but changes with time. Therefore, it is a fault tolerant state estimation algorithm. For the sake of stability analysis in the next section, the calculation of the state estimate in the window should be consistent with the recursive formula of the prior state estimation model, i.e., (3) can be used to calculate both the state estimation at the current time and the prior state estimation at initial time.
In this section, a fusion estimation criterion was introduced and the calculation method of the weighting coefficients $\mu_i$ was given. For convenience of application, the above distributed moving horizon fusion estimation was summarized as Algorithm 1. In the next section, the stability of the proposed algorithm will be proven, and a sufficient condition for the convergence of the squared norm of the fusion estimation error will be given.

5. Algorithm Stability Analysis

The stability analysis is divided into two parts: firstly, the boundedness of the error covariance matrix is proven, and then the convergence of the error square of the fusion estimation is proven. Before proving the stability, we first define the following variables:
$$M_N(x_{t-N}) \triangleq \mathrm{col}\big(h^i(x_{t-N}),\; h^i \circ f(x_{t-N}),\; \dots,\; h^i \circ f^{(N)}(x_{t-N})\big)$$
where the symbol “∘” represents the composition of functions, and
$$\Phi_\omega(x_{t-N}) = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \dfrac{\partial\, h^i \circ f^{(1)}_{\omega_{t-N}}}{\partial \omega_{t-N}} & 0 & \cdots & 0 \\ \dfrac{\partial\, h^i \circ f^{(2)}_{\omega_{t-N}^{t-N+1}}}{\partial \omega_{t-N}} & \dfrac{\partial\, h^i \circ f^{(2)}_{\omega_{t-N}^{t-N+1}}}{\partial \omega_{t-N+1}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial\, h^i \circ f^{(N)}_{\omega_{t-N}^{t-1}}}{\partial \omega_{t-N}} & \dfrac{\partial\, h^i \circ f^{(N)}_{\omega_{t-N}^{t-1}}}{\partial \omega_{t-N+1}} & \cdots & \dfrac{\partial\, h^i \circ f^{(N)}_{\omega_{t-N}^{t-1}}}{\partial \omega_{t-1}} \end{bmatrix}_{\omega_{t-N}^{t-1} = 0}$$
where
$$f^{(n)} = \underbrace{f \circ \cdots \circ f}_{n \text{ times}}, \qquad f^{(n)}_{\omega_{t-N}^{t-N+n-1}}(x) = f\big(\cdots f(f(x) + \omega_{t-N}) + \omega_{t-N+1} \cdots\big) + \omega_{t-N+n-1},$$
$$\gamma_\omega = \max \big\|\Phi_\omega(x_{t-N})\big\|, \quad d_x = \max \|x - \hat{x}\|, \quad \lambda_{\max}\big((\tilde{P}_{t-N}^i)^{-1}\big) = \theta_p^i, \quad \lambda_{\max}\big((R^i)^{-1}\big) = \theta_r^i, \quad R = \mathrm{diag}\underbrace{(R^i, \dots, R^i)}_{(N+1) \text{ times}},$$
$$\bar{f} = \max \big\|\Delta f_t^i\big\|^2, \qquad \bar{K} = \max \big\|\hat{K}_t^i\big\|^2, \qquad \bar{Z} = \max \big\|\hat{Z}_t^i\big\|^2$$
Furthermore
$$\varepsilon^i = \max\big\{\varepsilon_n^i,\; n = 1, \dots, N\big\}$$
which satisfies
$$\Big\| h^i \circ f^{(1)}_{\omega_{t-N}}(x_{t-N})\Big|_{\omega_{t-N} = \tilde{\omega}_{t-N}} - h^i \circ f^{(1)}_{\omega_{t-N}}(x_{t-N})\Big|_{\omega_{t-N} = 0} \Big\| \leq \varepsilon_1^i \,\|\tilde{\omega}_{t-N}\|$$
$$\vdots$$
$$\Big\| h^i \circ f^{(N)}_{\omega_{t-N}^{t-1}}(x_{t-N})\Big|_{\omega_{t-N}^{t-1} = \tilde{\omega}_{t-N}^{t-1}} - h^i \circ f^{(N)}_{\omega_{t-N}^{t-1}}(x_{t-N})\Big|_{\omega_{t-N}^{t-1} = 0} \Big\| \leq \varepsilon_N^i \,\|\tilde{\omega}_{t-N}^{t-1}\|$$
obtained from the classical mean value theorem [36].
Before proving the stability of the system, we first introduce the following definitions and assumptions:
 Definition 1. 
A continuous function $\xi : [0, \infty) \to [0, \infty)$ is a K-function if it is strictly monotonically increasing, $\xi(x) > 0$ for $x \neq 0$, $\xi(0) = 0$, and $\lim_{x \to \infty} \xi(x) = \infty$.
 Definition 2. 
If feasible solutions to Problem 1 exist, then $\hat{P}_k^i = (\hat{P}_k^i)^T \geq 0$ ($0 \leq k \leq t$) exists such that (12) holds.
 Definition 3. 
If a K-function $\xi(\cdot)$ exists such that $\xi\big(\|x_1 - x_2\|^2\big) \leq \|M_N(x_1) - M_N(x_2)\|^2$ holds, then system (1) is fully observable in $N+1$ steps.
 Assumption 1. 
System (1) is fully observable in N + 1 steps with a K-function ξ · defined by Definition 3.
 Assumption 2. 
Functions f · , h i · satisfy Lipschitz continuity, i.e., scalars m 1 , m 2 > 0 exist, such that the following inequalities hold:
$$\|f(x_1) - f(x_2)\|^2 \leq m_1 \|x_1 - x_2\|^2, \qquad \|h^i(x_1) - h^i(x_2)\|^2 \leq m_2 \|x_1 - x_2\|^2$$
Based on the assumptions and definitions above, the following theorem is given
 Theorem 1. 
If the estimation error covariance matrix can be calculated by (12) and the solution of Problem 1 exists, then the estimation error covariance is bounded for system (1).
 Proof. 
It can be seen from Definition 2 that the recursive window estimation model given in (3) is feasible, i.e., as $T \to \infty$, the error covariance matrix $\hat{P}_t^i$ obtained from (12) tends to a unique positive semidefinite matrix. If the solution of Problem 1 exists, i.e., the performance index $J_t^{i*}$ is bounded, then it can be expressed as follows:
$$J_t^{i*} = \min_{\hat{x}_{t-N|t}^{i}} J_t^i(\hat{x}_{t-N|t}^{i}) < \Theta, \qquad \Theta > 0$$
Then it can be inferred that
$$(v_t^i)^T (R^i)^{-1} v_t^i < J_t^{i*} = \min_{\hat{x}_{t-N|t}^{i}} J_t^i(\hat{x}_{t-N|t}^{i}) < \Theta$$
Let $v_t^i = y_t^i - \hat{y}_t^i$; combining this with Assumption 2, we get:
$$E\big\{\lambda_{\min}\big((R^i)^{-1}\big)\, \mathrm{tr}\big(v_t^i (v_t^i)^T\big) I\big\} \geq \lambda_{\min}\big((R^i)^{-1}\big)\, E\big\{v_t^i (v_t^i)^T\big\} = \lambda_{\min}\big((R^i)^{-1}\big)\Big(m_2\, E\big\{(x_t - \hat{x}_t^i)(x_t - \hat{x}_t^i)^T\big\} + R^i\Big)$$
According to $P_t^i = E\big\{(x_t - \hat{x}_t^i)(x_t - \hat{x}_t^i)^T\big\}$, we obtain
$$\lambda_{\min}\big((R^i)^{-1}\big)\big(m_2 P_t^i + R^i\big) \leq E\big\{\lambda_{\min}\big((R^i)^{-1}\big)\,\mathrm{tr}\big(v_t^i (v_t^i)^T\big) I\big\} \leq (v_t^i)^T (R^i)^{-1} v_t^i < J_t^{i*} = \min_{\hat{x}_{t-N|t}^{i}} J_t^i(\hat{x}_{t-N|t}^{i}) < \Theta$$
So the upper bound of estimation error covariance is Θ . Next, we substitute the upper bound value into (20) and rewrite (20) into the following form:
$$\hat{x}_{t-N|t}^{*} = \Big(\sum_{i=1}^{L} \mu_i \Theta^{-1}\Big)^{-1} \sum_{i=1}^{L} \mu_i \Theta^{-1} \hat{x}_{t-N|t}^{i*} = \sum_{i=1}^{L} \mu_i \hat{x}_{t-N|t}^{i*}$$
Then the proof is complete. Next we prove the boundedness of the norm square of the fusion estimation error and define the local estimation error at time t N as
$$e_{t-N}^i = x_{t-N} - \hat{x}_{t-N|t}^{i*}$$
Similarly, the fusion estimation error at time t N is defined as
$$e_{t-N} = x_{t-N} - \hat{x}_{t-N|t}^{*}.$$
Then, the necessary theorem should be introduced:
 Theorem 2. 
Supposing that Assumptions 1 and 2 hold, a K-function ξ · defined in Assumption 1 exists and satisfies the following conditions:
$$\phi = \inf \frac{\xi\big(\|x_1 - x_2\|^2\big)}{\|x_1 - x_2\|^2} > 0$$
If the error covariance matrix corresponding to x ˜ t N i in (3) and weight matrix P ˜ t N i , R i in (2) cause the following inequality
$$\frac{20\, \mu_i \theta_p^i L \big(m_1^2 + \bar{Z} \mu_i \theta_p^i L\big)}{\mu_i \theta_p^i + \phi\, \theta_r^i} < 1$$
to hold, then
$$\lim_{t \to \infty} \big\|e_{t-N}^i\big\|^2 \leq \frac{\vartheta_i}{1 - \delta_i}$$
Furthermore, the norm square of the fusion estimation error satisfies the following form:
$$\|e_{t-N}\|^2 \leq \big\|e_{t-N}^i\big\|^2 \leq \chi_{t-N}^i$$
where χ t N i has the following form:
$$\chi_0^i = \vartheta_0^i, \qquad \chi_t^i = \delta_i \chi_{t-1}^i + \vartheta_i, \quad t = 1, 2, \dots$$
with
$$\delta_i = \frac{20\, \mu_i \theta_p^i L \big(m_1^2 + \bar{Z} \mu_i \theta_p^i L\big)}{\mu_i \theta_p^i + \phi\, \theta_r^i}$$
$$\vartheta_i = \frac{4}{\mu_i \theta_p^i + \phi\, \theta_r^i} \Bigg[ 2 \mu_i \theta_p^i \theta_\omega^2 + \theta_r^i \Big( \gamma_\omega \sqrt{N}\, \theta_\omega + \sqrt{N+1}\, \theta_v^i + \frac{\varepsilon^i}{2} \sqrt{\tfrac{N(N+1)(2N+1)}{6}}\, \theta_\omega^2 \Big)^2 \Bigg]$$
$$\vartheta_0^i = \frac{4}{\mu_i \theta_p^i + \phi\, \theta_r^i} \Bigg[ \mu_i \theta_p^i d_x^2 + \theta_r^i \Big( \gamma_\omega \sqrt{N}\, \theta_\omega + \sqrt{N+1}\, \theta_v^i + \frac{\varepsilon^i}{2} \sqrt{\tfrac{N(N+1)(2N+1)}{6}}\, \theta_\omega^2 \Big)^2 \Bigg].$$
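Before the proof, note that once the scalar constants above are fixed, condition (28) and the resulting asymptotic bound are straightforward to evaluate numerically. The sketch below simply evaluates $\delta_i$ and $\vartheta_i/(1-\delta_i)$ as written above; all inputs are user-supplied assumptions.

```python
def check_convergence(mu_i, theta_p, theta_r, phi, m1_sq, Z_bar, L, vartheta_i):
    """Evaluate the contraction factor delta_i of condition (28); if delta_i < 1,
    also return the asymptotic bound vartheta_i / (1 - delta_i) on ||e||^2."""
    delta_i = 20.0 * mu_i * theta_p * L * (m1_sq + Z_bar * mu_i * theta_p * L) \
              / (mu_i * theta_p + phi * theta_r)
    if delta_i >= 1.0:
        return delta_i, None        # condition (28) violated: no guarantee
    return delta_i, vartheta_i / (1.0 - delta_i)
```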
 Proof. 
The performance index can be rewritten as the following form:
$$J_t^i(\hat{x}_{t-N|t}^{i*}) = \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|\hat{x}_{t-N|t}^{i} - \tilde{x}_{t-N}^{j}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + \sum_{k=t-N}^{t} \big\|y_k^i - \hat{h}^i(\hat{x}_{k|t}^{i})\big\|^2_{(R^i)^{-1}}$$
Since $\hat{x}_{t-N|t}^{i*}$ is optimal, an upper bound on the optimal performance index $J_t^{i*}$ is given by
$$J_t^{i*} \leq \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|x_{t-N} - \tilde{x}_{t-N}^{j}\big\|^2_{(P_{t-N}^{j})^{-1}} + \sum_{k=t-N}^{t} \big\|y_k^i - h^i(x_k)\big\|^2_{(R^i)^{-1}}$$
Each of the terms in the second term of the right-hand side of the above inequality can be written as:
k = 0 : v t N i 2 k = 1 : h f ( 1 ) ω t N ( x t N ) h f ( 1 ) ( x t N ) + v t N + 1 i 2 = h i f ( 1 ) ω t N ( x t N ) ω t N ω t N = 0 ω t N + ϖ ( 1 ) + v t N + 1 i k = N : h f ( N ) ω t N t 1 ( x t N ) h f ( N ) ( x t N ) + v t i 2 = h i f ( 1 ) ω t N t 1 ( x t N ) ω t N t 1 ω t N t 1 = 0 ω t N t 1 + ϖ ( N ) + v t i
ϖ ( k ) is a function of x t N , ω t N , and Ξ = c o l ( 0 , ϖ ( 1 ) , ϖ ( 2 ) , ϖ ( N ) ) , so the upper bound of the performance index is changed to
$$J_t^{i*} \leq \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|x_{t-N} - \tilde{x}_{t-N}^{j}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + \theta_r^i \big( \gamma_\omega \sqrt{N}\, \theta_\omega + \sqrt{N+1}\, \theta_v^i + \|\Xi\| \big)^2$$
After a little bit of algebra, we obtain:
ϖ ( 1 ) 1 2 ε 1 i ω t N 2 ϖ ( N ) 1 2 ε N i ω t N t 1 2 = 1 2 ε N i s = 0 N 1 ω t N + s 2 Ξ 2 ( 1 2 ε 1 i ω t N 2 ) 2 + ( 1 2 ε n i s = 0 N 1 ω t N + s 2 ) 2
According to the formula of inequality of the sum of squares and the relevant definitions in (23), we obtain:
$$\|\Xi\|^2 \leq \frac{(\varepsilon^i)^2}{4} \big( \theta_\omega^4 + \cdots + N^2 \theta_\omega^4 \big) = \frac{(\varepsilon^i)^2}{4}\, \theta_\omega^4 \sum_{k=1}^{N} k^2 = \frac{(\varepsilon^i)^2}{4} \cdot \frac{N(N+1)(2N+1)}{6}\, \theta_\omega^4$$
Then (32) is rewritten as
$$J_t^{i*} \leq \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|x_{t-N} - \tilde{x}_{t-N}^{j}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + \theta_r^i \Big( \gamma_\omega \sqrt{N}\, \theta_\omega + \sqrt{N+1}\, \theta_v^i + \frac{\varepsilon^i}{2} \sqrt{\tfrac{N(N+1)(2N+1)}{6}}\, \theta_\omega^2 \Big)^2$$
For simplicity of expression, let
$$q = \theta_r^i \Big( \gamma_\omega \sqrt{N}\, \theta_\omega + \sqrt{N+1}\, \theta_v^i + \frac{\varepsilon^i}{2} \sqrt{\tfrac{N(N+1)(2N+1)}{6}}\, \theta_\omega^2 \Big)^2$$
So we get the upper bound of the optimal performance index as
$$J_t^{i*} \leq \sum_{j \in \mathbb{N}_i, j \neq i} \mu_j \big\|x_{t-N} - \tilde{x}_{t-N}^{j}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + q$$
Similarly, we give the derivation process of the lower bound of the optimal performance index. The second term on the right of the inequality can be rewritten as
$$\sum_{k=t-N}^{t} \big\|y_k^i - h^i(\hat{x}_{k|t}^{i})\big\|^2_{(R^i)^{-1}} = \big\|Y_{t-N}^{t} - M_N(\hat{x}_{t-N|t}^{i*})\big\|^2_{(R)^{-1}}$$
Because
$$\big\|M_N(x_{t-N}) - M_N(\hat{x}_{t-N|t}^{i*})\big\|^2_{(R)^{-1}} = \big\|Y_{t-N}^{t} - M_N(\hat{x}_{t-N|t}^{i*}) - \big(Y_{t-N}^{t} - M_N(x_{t-N})\big)\big\|^2_{(R)^{-1}}$$
where $Y_{t-N}^{t} = \mathrm{col}\big(y_{t-N}^i, \dots, y_t^i\big)$.
Further, using Assumption 1, we obtain
$$\big\|Y_{t-N}^{t} - M_N(\hat{x}_{t-N|t}^{i*})\big\|^2_{R^{-1}} \geq \frac{1}{2}\, \xi\Big( \big\|x_{t-N} - \hat{x}_{t-N|t}^{i*}\big\|^2_{(\tilde{P}_{t-N}^{i})^{-1}} \Big) - q$$
For the first term in inequality (32), it is shown that
$$\big\|x_{t-N} - \hat{x}_{t-N|t}^{i*}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} \leq 2 \big\|x_{t-N} - \tilde{x}_{t-N}^{i}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}} + 2 \big\|\tilde{x}_{t-N}^{i} - \hat{x}_{t-N|t}^{i*}\big\|^2_{(\tilde{P}_{t-N}^{j})^{-1}}$$
By combining (26) and (34), the lower bound of the optimal value of the performance index is
J t i * 1 2 μ i θ p i e t N i 2 + 1 2 ξ [ θ r i e t N i 2 ] μ i θ p i j ( j i ) i μ j x t N x ˜ t N i 2 q
Combining (33) and (36), we obtain
1 2 μ i θ p i e t N i 2 + 1 2 ξ [ θ r i e t N i 2 ] 2 θ p i j ( j i ) i μ j x t N x ˜ t N i 2 + 2 q
According to the definition of $\phi$ in Theorem 2, we obtain
$$\phi\, \big\|e_{t-N}^i\big\|^2_{(\tilde{P}_{t-N}^{i})^{-1}} \leq \xi\Big( \big\|e_{t-N}^i\big\|^2_{(\tilde{P}_{t-N}^{i})^{-1}} \Big)$$
After sorting out, we obtain
$$\big\|e_{t-N}^i\big\|^2 \leq \frac{4 \mu_i \theta_p^i L}{\mu_i \theta_p^i + \phi\, \theta_r^i} \big\|x_{t-N} - \tilde{x}_{t-N}^{i}\big\|^2 + \frac{4}{\mu_i \theta_p^i + \phi\, \theta_r^i}\, q$$
Based on (38) and (23), we have
x t N x ˜ t N i 2 = f ( x t N 1 ) + ω t N 1 f ( x ^ t N 1 | t 1 i * ) Δ f ^ t N 1 i x ^ t N 1 | t 1 i * K ^ t N 1 i ( y t N 1 i h ^ i ( x ^ t N 1 | t 1 i * ) ) Z ^ t N 1 i × j i ( P ˜ t N 1 j ) 1 μ j ( x ^ t N 1 | t 1 i * x ˜ t N 1 j ) 2 5 f ( x t N 1 ) f ( x ^ t N 1 | t 1 i * ) 2 + 5 ω t N 1 2 + 5 f ¯ 2 θ x 2 + 5 K ¯ q 2 + 10 Z ¯ L μ i θ p i ( e t N 1 i 2 η ) 5 m 1 2 x t N 1 x ^ t N 1 | t 1 i * 2 + 5 θ ω 2 + 5 f ¯ 2 θ x 2 + 5 K ¯ q 2 + 10 Z ¯ L μ i θ p i ( e t N 1 i 2 η ) = ( 5 m 1 2 + 5 Z ¯ L μ i θ p i ) e t N 1 i 2 + 5 θ ω 2 + 5 f ¯ 2 θ x 2 + 5 K ¯ q 2 5 Z ¯ L μ i θ p i η
where,
= 4 L + 1 μ i θ p i + ϕ θ r i 2 μ i θ p i L , η = 2 q μ i θ p i L
Substituting (39) into (38) obtains
e t N i 2 4 μ i θ p i L μ i θ p i + ϕ θ r i ( 5 m 1 2 + 5 Z ¯ L μ i θ p i ) e t N 1 i 2 + 5 θ ω 2 + 5 f ¯ 2 θ x 2 + 5 K ¯ q 2 5 Z ¯ L μ i θ p i η + 4 μ i θ p i + ϕ θ r i q = 20 μ i θ p i L ( m 1 2 + Z ¯ μ i θ p i L ) e t N 1 i 2 μ i θ p i + ϕ θ r i + 20 μ i θ p i L θ ω 2 + f ¯ 2 θ x 2 + K ¯ q 2 Z ¯ L μ i θ p i η + 4 q μ i θ p i + ϕ θ r i
If inequality (28) holds, it is easy to obtain the upper bound ϑ i / ( 1 δ i ) of the square of the local estimation error norm, because χ t i = ( δ i ) t χ 0 i + ϑ i s = 0 t 1 ( χ i ) s . Then the boundedness proof of local estimation error is completed. □
Here, the stability of the local estimation error is analyzed. Next, we will prove the stability of the fusion estimation error by introducing Definition 4.
 Definition 4. 
For any point set $\{x_i\}$, if $\sigma_i \geq 0$ and $\sum_i \sigma_i = 1$, a convex function $g(x)$ satisfies the following inequality:
$$g\Big( \sum_{i=1}^{M} \sigma_i x_i \Big) \leq \sum_{i=1}^{M} \sigma_i\, g(x_i).$$
 Proof. 
The fusion estimation error can be rewritten as
$$e_{t-N} = x_{t-N} - \sum_{i=1}^{L} \mu_i \hat{x}_{t-N|t}^{i*}.$$
According to Definition 4, we obtain
$$g\Big( \sum_{i=1}^{L} \mu_i \hat{x}_{t-N|t}^{i*} \Big) \leq \sum_{i=1}^{L} \mu_i\, g\big(\hat{x}_{t-N|t}^{i*}\big)$$
Let $g\big(\hat{x}_{t-N|t}^{i*}\big) = \big\|\hat{x}_{t-N|t}^{i*} - \tilde{x}_{t-N}^{j}\big\|^2_{(P_{t-N}^{j})^{-1}}$. It is easy to prove that $g\big(\hat{x}_{t-N|t}^{i*}\big)$ is a convex function. We denote the upper bound of $g\big(\hat{x}_{t-N|t}^{i*}\big)$ ($i = 1, 2, \dots, L$) by $D$. Then (42) becomes
$$g\Big( \sum_{i=1}^{L} \mu_i \hat{x}_{t-N|t}^{i*} \Big) \leq \sum_{i=1}^{L} \mu_i\, g\big(\hat{x}_{t-N|t}^{i*}\big) \leq D \sum_{i=1}^{L} \mu_i$$
By combining (36), (38) and (40), we get
x ^ t N | t i * x ˜ t N j ( P t N j ) 1 2 e t N i 2 η
Similarly, substituting x ^ t N | t * as a feasible solution into g x ^ t N | t i * i = 1 , 2 L obtains
x ^ t N | t * x ˜ t N j ( P t N j ) 1 2 e t N 2 η
Combining (42), (44) and (45), we obtain
$$\|e_{t-N}\|^2 \leq \big\|e_{t-N}^i\big\|^2$$
When t = N , (47) can be obtained from (37):
e 0 2 e 0 i 2 4 μ i θ p i + ϕ θ r i μ i θ p i d x 2 + θ r i ( γ ω N θ ω + N + 1 θ v i ε i 2 N ( N + 1 ) ( 2 N + 1 ) 6 θ ω 2 ) 2 = ϑ 0 i
It can be seen from (40), (46) and (47) that (30) and (31) are proven. It is the same as the previous stability analysis of the local estimation error, if (28) is established, then (29) and (30) are established, because χ t i = ( δ i ) t χ 0 i + ϑ i s = 0 t 1 ( χ i ) s . Then the proof is complete. □

6. Numerical Simulation

To verify the effectiveness of the proposed distributed moving horizon fusion estimation algorithm, we consider the following unstable state space model with two sensors:
$$\begin{aligned} x_{t+1}^1 &= x_t^1 + \Delta f_t^1 x_t^2 + \omega_t \\ x_{t+1}^2 &= 0.5\, x_t^1 + 5 \cos x_t^2 + \Delta f_t^2 x_t^2 + \omega_t \\ y_t^1 &= 5 \sin x_t^1 + v_t^1 \\ y_t^2 &= 5 \cos x_t^2 + v_t^2 \end{aligned}$$
We let $\Delta f_t^i = M_1^i \Upsilon_{1,t}^i S_1^i$, $M_1^i = M_2^i = M_3^i = 0.005$, $S_1^i = S_2^i = S_3^i = I$, $\Upsilon_{j,t}^i (\Upsilon_{j,t}^i)^T \leq I$ ($j = 1, 2, 3$), $P_0^i = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $\hat{x}_0^i = [0 \;\; 0]^T$; similar to [14], the parameters are chosen as $\beta_{1,i} = 0.2$, $\beta_{2,i} = 1$, $\rho_{1,i} = \rho_{2,i} = 0.001$, $\rho_{3,i} = 0.01$, and the variances are chosen as $Q = 15$, $R^i = 10$. It is noted that the matrices $U_{1,t,i}$, $U_{2,t,i}$, $U_{3,t,i}$ can be calculated using (13), and the Jacobian matrix $B_t^i$ calculated according to model (48) is
$$B_t^i = \begin{bmatrix} 1 & \Delta f_t^1 \\ 0.5 & -5 \sin x_t^2 + \Delta f_t^2 \end{bmatrix}$$
and
$$C_t^1 = \begin{bmatrix} 5 \cos x_t^1 & 0 \end{bmatrix}, \qquad C_t^2 = \begin{bmatrix} 0 & -5 \sin x_t^2 \end{bmatrix}$$
Additionally, the constraints of the system noise are set as ω t > 0 , the simulation number n is set to 100 and the moving horizon window length N is set to 4. The performance of the proposed algorithm is evaluated using root mean square error (RMSE), which is written as
$$\mathrm{RMSE} = \sqrt{ \frac{1}{n+1} \sum_{k=t-n}^{t} \big\| \hat{x}_{k|t}^{i*} - x_k \big\|^2 }$$
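The following is a minimal sketch of how the comparison data can be produced: it simulates the true model (48) and evaluates the RMSE of an estimate sequence against the simulated states. The estimator itself is abstracted away; the noise variances follow the values $Q = 15$ and $R^i = 10$ listed above, and the time-varying uncertainty $\Delta f_t^i$ is replaced by a constant of the stated magnitude purely for illustration.

```python
import numpy as np

def simulate_model(T, delta_f=0.005, q_var=15.0, r_var=10.0, seed=0):
    """Simulate the two-state model (48); returns states and both sensor outputs."""
    rng = np.random.default_rng(seed)
    x = np.zeros((T + 1, 2))
    y1, y2 = np.zeros(T), np.zeros(T)
    for t in range(T):
        w = np.sqrt(q_var) * rng.standard_normal()
        x[t + 1, 0] = x[t, 0] + delta_f * x[t, 1] + w
        x[t + 1, 1] = 0.5 * x[t, 0] + 5.0 * np.cos(x[t, 1]) + delta_f * x[t, 1] + w
        y1[t] = 5.0 * np.sin(x[t, 0]) + np.sqrt(r_var) * rng.standard_normal()
        y2[t] = 5.0 * np.cos(x[t, 1]) + np.sqrt(r_var) * rng.standard_normal()
    return x, y1, y2

def rmse(x_est, x_true):
    """Root mean square error of an estimated trajectory, cf. the formula above."""
    err = np.asarray(x_est) - np.asarray(x_true)
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
```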
We compared the proposed method with those in [10,14]. In [10], a matrix-weighted fusion estimation algorithm is used, which is complex and takes a long time to compute. In [14], a distributed robust extended Kalman filter algorithm is proposed, but fusion estimation is not carried out. Figure 2 and Figure 3 show the state trajectories of the system, and Figure 4 shows the comparison of the root mean square error. It can be seen from the figures that the state trajectory after fusion estimation tracks the true state better and has a lower root mean square error than the local state estimates in [14] and the fusion estimate in [10]. Table 2 shows the mean root mean square error under different moving horizon window lengths. It can be seen from the table that the estimation error decreases as the window length increases, but the amount of computation also increases with the window length. Therefore, a larger N is not necessarily better, and N is generally selected as roughly twice the system order. Balancing computation against root mean square error, we choose N = 4. The results show that the accuracy of the distributed moving horizon fusion estimation is higher than that of both the local estimation in [14] and the matrix-weighted fusion estimation method in [10]. Meanwhile, Figure 4 also shows that the estimation errors of each local estimator and of the fusion estimator are bounded. Finally, Table 3 shows the running times of the different algorithms. Over 100 simulation runs, the computation time of the proposed method is reduced by 33.23% and 29.1% compared to [10] and [14], respectively. The results show that the proposed method not only achieves high estimation accuracy, but also improves computational efficiency.

7. Conclusions

In this paper, the state estimation problem was solved by following the MHE paradigm for a class of distributed nonlinear systems in the presence of norm-bounded uncertainty. By constructing an augmented system, the upper bound of the estimation error covariance was derived, and this upper bound was optimized to obtain estimated values of the uncertain parameters in the system. In addition, the proposed method fully considers the system constraints: by constructing performance indices, the state estimation problem was transformed into a nonlinear optimization problem, which avoids the cumbersome recursive solution of the traditional Kalman filter. A fusion estimation method based on the covariance intersection (CI) strategy was then proposed, which circumvents the difficulty of computing cross-covariance matrices in the fusion estimation of nonlinear systems, reduces the computational burden of the estimation problem and improves computational efficiency. More importantly, sufficient conditions for the convergence of the squared norm of the fusion estimation error were given. Finally, a numerical simulation example was given to verify the effectiveness of the algorithm. In future work, the correlation of noise should be fully considered, and the proposed algorithm should be applied to distributed nonlinear systems with correlated noises.

Author Contributions

Conceptualization, S.W. and B.X.; methodology, S.W. and B.X.; software, S.W.; data curation, S.W.; writing—original draft preparation, S.W.; writing—review and editing, S.W.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61603205, in part by the China Postdoctoral Science Foundation under Grant 2017M612205.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zou, L.; Wang, Z.; Hu, J.; Han, Q.L. Moving horizon estimation meets multi-sensor information fusion: Development, opportunities and challenges. Inf. Fusion 2020, 60, 1–10. [Google Scholar] [CrossRef]
  2. Heo, S.; Park, C.G. Consistent EKF-based visual-inertial odometry on matrix Lie group. IEEE Sens. J. 2018, 18, 3780–3788. [Google Scholar] [CrossRef]
  3. Mur-Artal, R.; Montiel, J.; Tardos, J.D. Orb-slam: A versatile and accurate monocular slam system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  4. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual-inertial odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef] [Green Version]
  5. Li, M.; Yu, H.; Zheng, X.; Mourikis, A.I. High-fidelity sensor modeling and self-calibration in vision-aided inertial navigation. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 409–416. [Google Scholar]
  6. Wang, Z.B.; Yang, L.; Huang, Z.P.; Wu, J.K.; Zhang, Z.Q.; Sun, L.X. Human motion tracking based on complementary Kalman filter. In Proceedings of the 2017 IEEE 14th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Eindhoven, The Netherlands, 9–12 May 2017; pp. 55–58. [Google Scholar]
  7. Zhao, G.R.; Han, X.; Du, W.J.; Lu, C. Fusion estimator with stochastic sensor gain degradation for uncertain systems. Control Decis. 2016, 31, 1413–1418. [Google Scholar]
  8. Guo, G.; Wang, B.F. Robust kalman filtering for uncertain discrete-time systems with multiple packet dropouts. Acta Autom. Sin. 2010, 36, 767–772. [Google Scholar] [CrossRef]
  9. Ma, J.; Yang, X.M.; Sun, S.L. Distributed fusion estimation for multi–sensor systems with time–correlated multiplicative noises. Acta Autom. Sin. 2021, 47, 1–13. [Google Scholar]
  10. Liu, L.; Zhou, W.; Fei, M.; Yang, Z.; Yang, H.; Zhou, H. Distributed fusion estimation for stochastic uncertain systems with network-induced complexity and multiple noise. IEEE Trans. Cybern. 2021, 52, 8753–8765. [Google Scholar] [CrossRef]
  11. Wang, S.; Ren, W.; Chen, J. Fully distributed dynamic state estimation with uncertain process models. IEEE Trans. Control Netw. Syst. 2018, 5, 1841–1851. [Google Scholar] [CrossRef]
  12. Xu, M.; Zhang, Y.; Zhang, D.; Chen, B. Distributed robust dimensionality reduction fusion estimation under doS attacks and uncertain covariances. IEEE Access 2021, 9, 10328–10337. [Google Scholar] [CrossRef]
  13. Pang, C.; Sun, S. Fusion predictors for multisensor stochastic uncertain systems with missing measurements and unknown measurement disturbances. IEEE Sens. J. 2015, 15, 4346–4354. [Google Scholar] [CrossRef]
  14. Duan, P.; Duan, Z.; Lv, Y.; Chen, G. Distributed Finite-Horizon Extended Kalman Filtering for Uncertain Nonlinear Systems. IEEE Trans. Cybern. 2021, 51, 512–520. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Z.; Xue, B.; Fan, J. Noise Adaptive Moving Horizon Estimation for State-of-Charge Estimation of Li-Ion Battery. IEEE Access 2021, 9, 5250–5259. [Google Scholar] [CrossRef]
  16. Zhao, H.Y.; Chen, H. Moving horizon estimation approach to constrained systems with uncertain noise covariance. Control Decis. 2008, 23, 217–220. [Google Scholar]
  17. Kühl, P.; Diehl, M.; Kraus, T.; Schlöder, J.P.; Bock, H.G. A real-time algorithm for moving horizon state and parameter estimation. Comput. Chem. Eng. 2011, 35, 71–83. [Google Scholar] [CrossRef]
  18. Zavala, V.M.; Biegler, L.T. Optimization-based strategies for the operation of low-density polyethylene tubular reactors: Moving horizon estimation. Comput. Chem. Eng. 2009, 33, 379–390. [Google Scholar] [CrossRef]
  19. Ramlal, J.; Allsford, K.V.; Hedengren, J.D. Moving horizon estimation and control for an industrial gas phase polymerization reactor. IFAC Proc. Vol. 2007, 40, 1040–1045. [Google Scholar] [CrossRef] [Green Version]
  20. Alessandri, A.; Baglietto, M.; Battistelli, G. Receding-horizon estimation for switching discrete-time linear systems. IEEE Trans. Autom. Control 2005, 50, 1736–1748. [Google Scholar] [CrossRef]
  21. Guo, Y.; Huang, B. Moving horizon estimation for switching nonlinear systems. Automatica 2013, 49, 3270–3281. [Google Scholar] [CrossRef]
  22. Farina, M.; Ferrari-Trecate, G.; Scattolini, R. Moving-horizon partition-based state estimation of large-scale systems. Automatica 2010, 46, 910–918. [Google Scholar] [CrossRef]
  23. Yin, X.; Huang, B. Event-Triggered distributed moving horizon state estimation of linear systems. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6439–6451. [Google Scholar] [CrossRef]
  24. Schneider, R.; Hannemann-Tams, R.; Marquardt, W. An iterative partition-based moving horizon estimator with coupled inequality constraints. Automatica 2015, 61, 302–307. [Google Scholar] [CrossRef]
  25. Liu, A.; Yu, L.; Zhang, W.A.; Chen, M.Z. Moving horizon estimation for networked systems with quantized measurements and packet dropouts. IEEE Trans. Circuits Syst. 2013, 60, 1823–1834. [Google Scholar] [CrossRef] [Green Version]
  26. Alessandri, A.; Baglietto, M.; Battistelli, G. Min-max moving-horizon estimation for uncertain discrete-time systems. SIAM J. Control Optim. 2012, 50, 1439–1465. [Google Scholar] [CrossRef]
  27. Fagiano, L.; Novara, C. A combined moving horizon and direct virtual sensor approach for constrained nonlinear estimation. Automatica 2013, 49, 193–199. [Google Scholar] [CrossRef]
  28. Johansen, T.; Sui, D.; Nybø, R. Regularized nonlinear moving-horizon observer with robustness to delayed and lost data. IEEE Trans. Control Syst. Technol. 2013, 21, 2114–2128. [Google Scholar] [CrossRef]
  29. Alessandri, A.; Awawdeh, M. Moving-horizon estimation with guaranteed robustness for discrete-time linear systems and measurements subject to outliers. Automatica 2016, 67, 85–93. [Google Scholar] [CrossRef]
  30. Farina, M.; Ferrari-Trecate, G.; Scattolini, R. Distributed moving horizon estimation for nonlinear constrained systems. Int. J. Robust Nonlinear Control 2012, 22, 123–143. [Google Scholar] [CrossRef] [Green Version]
  31. Rao, C.V.; Rawlings, J.B.; Lee, J.H. Constrained linear state estimation-a moving horizon approach. Automatica 2001, 37, 1619–1628. [Google Scholar] [CrossRef]
  32. Sinopoli, B.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464. [Google Scholar] [CrossRef]
  33. Wang, S.; Ren, W. On the convergence conditions of distributed dynamic state estimation using sensor networks: A unified framework. IEEE Trans. Control Syst. Technol. 2018, 26, 1300–1316. [Google Scholar] [CrossRef]
  34. He, D.; Xu, C.; Zhu, J.; Du, H. Moving horizon H∞ estimation of constrained multisensor systems with uncertainties and fading channels. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  35. Kang, C.H.; Kim, S.Y.; Song, J.W. Data Fusion With Inverse Covariance Intersection for Prior Covariance Estimation of the Particle Flow Filter. IEEE Access 2020, 8, 221203–221213. [Google Scholar] [CrossRef]
  36. Alessandri, A.; Baglietto, M.; Battistelli, G. Moving-horizon state estimation for nonlinear discrete-time systems: New stability results and approximation schemes. Automatica 2008, 7, 1753–1765. [Google Scholar] [CrossRef]
Figure 1. Distributed moving horizon fusion estimation structure.
Figure 2. The trajectories of the state x 1 [10,14].
Figure 3. The trajectories of the state x 2 [10,14].
Figure 4. Comparison of RMSE [10,14].
Table 2. The mean root mean square error (RMSE) of different window lengths N.
Window length N            | 2      | 4      | 8      | 16
DMHFE in this paper        | 0.0398 | 0.0369 | 0.0226 | 0.0106
Method proposed in [10]    | 0.0445 | 0.0423 | 0.0254 | 0.0156
Local estimator 1 in [14]  | 0.0798 | 0.0722 | 0.0301 | 0.0214
Local estimator 2 in [14]  | 0.1108 | 0.1030 | 0.0515 | 0.0321
Table 3. Run time (s) under different simulation times m.
m                          | 50     | 100    | 200    | 400
DMHFE in this paper        | 0.7483 | 0.9380 | 1.9846 | 4.0806
Method proposed in [10]    | 1.1205 | 1.4049 | 2.3869 | 4.1246
Method proposed in [14]    | 0.9758 | 1.3229 | 2.2768 | 4.0969