Article

Distributed State Estimation under State Inequality Constraints with Random Communication over Multi-Agent Networks

1 Xi’an Institute of High-Tech, Xi’an 710025, China
2 Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Information 2018, 9(3), 64; https://doi.org/10.3390/info9030064
Submission received: 1 February 2018 / Revised: 28 February 2018 / Accepted: 7 March 2018 / Published: 13 March 2018

Abstract

In this paper, we investigate distributed state estimation for multi-agent networks with random communication, where the state is constrained by an inequality. To deal with environmental/communication uncertainties and to save energy, we introduce two random schemes: a random sleep scheme and an event-triggered scheme. With the help of the Kalman-consensus filter and projection onto the constrained set, we propose two random distributed estimation algorithms. Each agent obtains its estimate by projecting the consensus estimate, which is computed by randomly exchanging information with its neighbors. The estimation error is shown to be bounded with probability one when the agents randomly take measurements or communicate with their neighbors. We establish the stability of the proposed algorithms via the Lyapunov method and projection arguments, and demonstrate their effectiveness through numerical simulations.

1. Introduction

Recently, state estimation using multi-agent networks has become a hot topic due to its broad range of applications in engineering systems such as robotic networks, area surveillance, and smart grids. Distributed methods offer robustness and scalability. As one of the popular estimation methods, the distributed Kalman filter is capable of real-time estimation and non-stationary process tracking.
Distributed Kalman-filter-based (DKF) estimation has been widely studied in the literature [1,2,3,4,5,6,7,8,9,10,11]. The idea of references [1,2,5,6,9] is to add a consensus term to the traditional Kalman filter structure. For example, in [1], the authors proposed a distributed Kalman filter in which the consensus information state and the associated information matrix were used to approximate the global solution. An optimal consensus-based Kalman filter was reported in [2], where all-to-all communication is needed. To obtain a practical algorithm, the authors of [2] also reported a scalable distributed Kalman filter, where only the state estimates need to be transmitted between neighbors. Moreover, the distributed estimation problem under switching communication topologies was studied in [7,8]. Another kind of distributed Kalman filter design fuses the local estimates of the agents [3,4,10,11]. In [3], the authors investigated a distributed Kalman filter that fuses the information states of neighbors, and in [4] a diffusion Kalman filter based on the covariance intersection method was proposed. The last kind of DKF is the Consensus+Innovations approach [12,13], which allows each agent to track the global measurement. It should be noted that the approaches in [12,13] can approximate the globally optimal solution, which is achieved by knowing the global pseudo-observation matrix in advance.
New estimation problems emerge from unreliable communication channels and random data packet dropout. In a multi-agent network, the Kalman-consensus filter (KCF) was characterized under the effect of lossy sensor networks in [5] by incorporating a Bernoulli random variable into the consensus term. Independent Bernoulli distributions were also used to model the random presence of nonlinear dynamics as well as the quantization effect in sensor communications in [14]. From an energy-saving viewpoint, agents can be enabled to reduce their communication rate. The random sleep (RS) scheme was introduced into the projected consensus algorithm in [15,16], which allows each agent to choose whether to fall asleep or take action. Different from the RS scheme, there is another well-known scheme, called the event-triggered scheme, under which information is transmitted only when predefined event conditions are satisfied. Assuming Gaussian properties of the a priori conditional distribution, a deterministic event-triggered schedule was proposed in [17]; to overcome the limitation of the Gaussian assumption, a stochastic event-triggered scheme was developed and the corresponding exact MMSE estimator was obtained in [18].
On the other hand, in practice, we may have some information or knowledge about the state variables, which could be used to improve performance (such as accuracy or convergence rate) in the design of distributed estimators. In some cases, such knowledge can be described as constraints on the state variables, which may come from physical laws, geometric relationships, environmental constraints, etc. There are many methods to handle state constraints in the Kalman filter, such as pseudo-observations [19], projection [20,21], and the moving horizon method [22]. A survey of conventional Kalman filter designs with state constraints was reported in [23]; moreover, it showed that constraints, as additional information, are useful for improving estimation performance. In [11], the authors studied distributed estimation with constraint information, where both the state estimates and the covariance information are transmitted. In this paper, we concentrate on the problem of distributed estimation under state constraints with random communication, and we use the projection method to deal with the constrained state.
In this paper, we focus on distributed estimation under state inequality constraints with random communication, which is an important problem. We design a distributed Kalman filter incorporating a consensus term and the projection method. Specifically, the constrained estimate of each agent is obtained by projecting the consensus estimate onto the constraint surface. Moreover, to reduce communication efficiently, we introduce a stochastic event-triggered scheme to realize a tradeoff between communication rate and performance. We summarize the contributions of the paper as follows: (i) We propose a distributed Kalman filter with state inequality constraints and stochastic communication. (ii) We introduce a random sleep scheme and a stochastic event-triggered scheme to reduce the communication rate. (iii) We analyze the stability properties of the proposed algorithms by the Lyapunov method.
The remainder of the paper is organized as follows. Section 2 provides some necessary preliminaries and formulates the distributed estimation problem with state constraints and random communication. Section 3 gives algorithms based on the random sleep scheme and the stochastic event-triggered scheme. The performance analysis is provided in Section 4, and a numerical simulation is presented in Section 5, which shows the benefit of state constraints. Section 6 gives a discussion of the paper. Finally, some concluding remarks are provided in Section 7.
Notations: The set of real numbers is denoted by $\mathbb{R}$. For a symmetric matrix $M$, $M \succeq 0$ ($M \succ 0$) means that the matrix is positive semi-definite (definite). The maximum and minimum eigenvalues are denoted by $\lambda_{max}(\cdot)$ and $\lambda_{min}(\cdot)$, respectively. $\mathbb{E}\{\cdot\}$ represents mathematical expectation. $P(x)$ denotes the probability of the random variable $x$.

2. Problem Formulation

In this section, we first provide some preliminaries about convex analysis [24]. Then the problem of distributed estimation with state constraints and random communication is formulated.

2.1. Preliminaries

Consider a function $f(\cdot): \mathbb{R}^m \to \mathbb{R}$. If for any $x, y \in \mathbb{R}^m$ and $0 < a < 1$, $f(ax + (1-a)y) \le a f(x) + (1-a) f(y)$ holds, then $f(\cdot)$ is a convex function. Similarly, if for any $x, y \in X$ and $0 \le a \le 1$, $ax + (1-a)y \in X$ holds, then $X \subseteq \mathbb{R}^n$ is a convex set. Denote by $P_X: \mathbb{R}^n \to X$ the projection onto a closed and convex subset $X$, which is defined as:
$$P_X(z) = \arg\min_{x \in X} \| x - z \|.$$
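For some constraint sets this projection has a closed form. As a minimal numerical sketch (the halfspace constraint and the function name below are our own illustration, not from the paper), the projection onto a single linear inequality $\{x : a^\top x \le b\}$ is $z - \max(0, a^\top z - b)\,a/\|a\|^2$:

```python
import numpy as np

def project_halfspace(z, a, b):
    """Euclidean projection of z onto the halfspace {x : a^T x <= b}.
    Closed form: z - max(0, a^T z - b) / ||a||^2 * a."""
    viol = a @ z - b
    if viol <= 0:
        return z.copy()                # z is already feasible
    return z - (viol / (a @ a)) * a

z = np.array([2.0, 0.0])
a = np.array([1.0, 0.0])               # constraint: x1 <= 1
print(project_halfspace(z, a, 1.0))    # -> [1. 0.]
```

The two branches mirror Lemma 1: a feasible point is a fixed point of $P_X$, and an infeasible point maps to the nearest boundary point.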
In order to analyze the properties of proposed algorithm, we provide a lemma about projection (see [25]).
Lemma 1.
Let X be a closed convex set in $\mathbb{R}^m$. Then
(i) $(P_X(x) - x)^\top (z - P_X(x)) \ge 0$, for all $z \in X$;
(ii) $\| P_X(x) - P_X(z) \| \le \| x - z \|$, for all $x$ and $z$;
(iii) $\| P_X(x) - z \|^2 \le \| x - z \|^2 - \| P_X(x) - x \|^2$, for any $z \in X$.

2.2. Problem Formulation

Consider the following linear dynamics
$$x_{k+1} = A x_k + w_k,$$
where $x_k \in \mathbb{R}^m$ is the state and $w_k$ is zero-mean Gaussian white noise with covariance $Q > 0$.
In practice, according to physical laws or design specifications, some additional information is known as prior knowledge, which can be formulated as inequality constraints on the state variables (some engineering applications can be found in [23]). In this paper, we consider state inequality constraints [26,27,28]. Specifically, for the dynamics (1), the inequality constraints on the state can be given as follows:
$$q_t(x) \le 0, \quad t = 1, \ldots, s,$$
where $q_t(x): \mathbb{R}^m \to \mathbb{R}$ is a convex function and $s$ is the number of constraints.
State x k is estimated by a network consisting of N agents. The measurement equation of the ith agent is given by:
$$y_{i,k} = C_i x_k + v_{i,k},$$
where $C_i \in \mathbb{R}^{q_i \times m}$ and $v_{i,k}$ is the measurement noise of agent $i$, which is assumed to be zero-mean white Gaussian with covariance $R_i > 0$. $v_{i,k}$ is independent of $w_k$ for all $k, i$, and is independent of $v_{j,s}$ when $i \ne j$ or $k \ne s$.
Graph theory [29] can be used to describe the communication topology of the network, where agent $i$ is treated as node $i$ and each edge represents a communication link. An undirected graph is denoted as $\mathcal{G} = (V, E)$, where $V = \{1, 2, \ldots, N\}$ is the node set and $E = \{(i,j): i, j \in V\}$ is the edge set. If nodes $i$ and $j$ are connected by an edge, these two vertices are called adjacent. The neighbor set of node $i$ is defined by $N_i = \{j: (i,j) \in E\}$. Denote by $W = (\omega_{ij}) \in \mathbb{R}^{n \times n}$ the weighted adjacency matrix of $\mathcal{G}$, where $\omega_{ii} = 0$ and $\omega_{ij} = \omega_{ji} > 0$, and let $L$ be the corresponding Laplacian. It is well known that, for the Laplacian $L$ associated with graph $\mathcal{G}$, if $\mathcal{G}$ is connected, then $\lambda_1(L) = 0$ and $\lambda_2(L) > 0$.
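As a concrete illustration of these graph quantities (the 4-node cycle below is our own toy example, not the network used in the paper), the Laplacian can be built from a weighted adjacency matrix as $L = \mathrm{diag}(W\mathbf{1}) - W$, and connectivity checked via $\lambda_2(L)$:

```python
import numpy as np

# Toy undirected weighted graph on 4 nodes: a cycle 0-1-3-2-0.
# W is symmetric with zero diagonal, as required for an undirected graph.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])

L = np.diag(W.sum(axis=1)) - W          # graph Laplacian: degree matrix minus W

eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[0])                       # lambda_1(L) = 0 for any graph
print(eigvals[1] > 0)                   # lambda_2(L) > 0 iff the graph is connected
```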
In this paper, we adopt the following two standard assumptions, which have been widely used.
Assumption 1.
( A , C i ) is detectable and ( A , Q ) is controllable.
Assumption 2.
The undirected graph G is connected.
The Kalman-consensus filter is designed by using local measurement and information from neighbors, which was given in [2],
$$\tilde{x}_{i,k+1} = A x_{i,k} + K_{i,k}(y_{i,k} - C_i x_{i,k}) + A G_{i,k} \sum_{j \in N_i} (x_{j,k} - x_{i,k}),$$
where $x_{i,k}$ is the estimate of agent $i$, and $K_{i,k}$ and $G_{i,k}$ are the estimator gain and consensus gain to be designed, respectively.
In order to satisfy the state constraints (2), the estimate obtained by each agent should solve the following optimization problem:
$$\min_{x_{i,k}} \; (x_{i,k} - \tilde{x}_{i,k})^\top (x_{i,k} - \tilde{x}_{i,k}), \quad \text{s.t. } q_t(x_{i,k}) \le 0, \; t = 1, \ldots, s,$$
where $\tilde{x}_{i,k+1} = A x_{i,k} + K_{i,k}(y_{i,k} - C_i x_{i,k}) + A G_{i,k} \sum_{j \in N_i}(x_{j,k} - x_{i,k})$. In this paper, we assume that each agent knows the constraints (2); therefore, the constrained estimate can be obtained by projection at each agent, namely,
$$x_{i,k} = P_X(\tilde{x}_{i,k}),$$
with $X = \{x \mid q_t(x) \le 0, t = 1, \ldots, s\}$. The aim of the distributed estimator (4) is to find the optimal gains $K_{i,k}$ and $G_{i,k}$ that minimize the mean-squared estimation error $\sum_{i=1}^N \| x_{i,k} - x_k \|^2$.
Different from the existing works in [2,7,8], which did not consider state constraints or random communication, here we consider the problem of the constrained KCF with random communication. Moreover, due to environmental uncertainty and energy saving, agents may lose measurements or communicate randomly with their neighbors. Hence, to solve our problem, we need to show how to design the Kalman filter gain $K_{i,k}$ and the consensus gain $G_{i,k}$ under random communication such that the estimation error of (6) is bounded.
In the following section, we introduce the random sleep and event-triggered schemes to design distributed estimation algorithms, and then analyze the stability conditions of the proposed algorithms.

3. Distributed Algorithms

We present two algorithms in the following subsections, respectively.

3.1. Random Sleep Algorithm

In practice, the communication cost may be much larger than the computational cost. Hence, it is reasonable to reduce the communication rate to save energy. Here we introduce a random sleep (RS) scheme, which allows the agents to have their own policies for collecting measurements or sending messages to their neighbors. To be specific, in our RS scheme, each agent may fall asleep during the measurement collection and consensus stages according to independent Bernoulli decisions. When an agent sleeps during measurement collection, it cannot use the measurement to update its local estimate. When it sleeps during the consensus stage, it cannot send any information to its neighbors.
Let $0 < \rho_i^m < 1$ and $0 < \rho_i^c < 1$ be given constants. Denote by $\sigma_{i,k}^m$ and $\sigma_{i,k}^c$ independent Bernoulli random variables with $P(\sigma_{i,k}^m = 1) = \rho_i^m$, $P(\sigma_{i,k}^m = 0) = 1 - \rho_i^m$, $P(\sigma_{i,k}^c = 1) = \rho_i^c$, and $P(\sigma_{i,k}^c = 0) = 1 - \rho_i^c$. We make the following assumptions:
$$\mathbb{E}(\sigma_{i,k}^m \sigma_{j,k}^m) = 0, \; i \ne j, \qquad \mathbb{E}(\sigma_{i,k}^c \sigma_{j,k}^c) = 0, \; i \ne j, \qquad \mathbb{E}(\sigma_{i,k}^m \sigma_{j,k}^c) = 0, \; \forall i, j.$$
Furthermore, notice that $\mathbb{E}(\sigma_{i,k}^m) = \rho_i^m$, $\mathbb{E}[(\sigma_{i,k}^m)^2] = \rho_i^m$, $\mathbb{E}(\sigma_{i,k}^c) = \rho_i^c$, and $\mathbb{E}[(\sigma_{i,k}^c)^2] = \rho_i^c$.
Then we propose a distributed random sleep algorithm for the Kalman-filter-based estimation as follows:
Step 1 (random sleep on measurement collection):
$$\bar{x}_{i,k+1} = A x_{i,k} + \sigma_{i,k}^m K_{i,k}(y_{i,k} - C_i x_{i,k})$$
Step 2 (random sleep on consensus):
$$\tilde{x}_{i,k+1} = \bar{x}_{i,k+1} + A G_{i,k} \sum_{j \in N_i} \sigma_{j,k}^c (x_{j,k} - x_{i,k}).$$
If we ignore the consensus step, the covariance iteration and the gain can be obtained by minimum mean squared error (MMSE) estimation as follows (see [30]):
$$P_{i,k+1} = A P_{i,k} A^\top + Q - \sigma_{i,k}^m K_{i,k} C_i P_{i,k} A^\top$$
$$K_{i,k} = A P_{i,k} C_i^\top (C_i P_{i,k} C_i^\top + R_i)^{-1}$$
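A minimal sketch of this gain/covariance recursion (the matrices below are illustrative toy values, and `rs_kalman_step` is our own naming, not the paper's): when $\sigma_{i,k}^m = 0$ the measurement correction is skipped and only the prediction $A P A^\top + Q$ remains.

```python
import numpy as np

def rs_kalman_step(A, C, Q, R, P, sigma_m):
    """One covariance/gain iteration of the RS recursion: the measurement
    is used only when the Bernoulli variable sigma_m = 1 (agent awake)."""
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    P_next = A @ P @ A.T + Q - sigma_m * (K @ C @ P @ A.T)
    return K, P_next

A = np.eye(2); C = np.eye(2)
Q = 0.1 * np.eye(2); R = np.eye(2); P = np.eye(2)
_, P_awake  = rs_kalman_step(A, C, Q, R, P, sigma_m=1)
_, P_asleep = rs_kalman_step(A, C, Q, R, P, sigma_m=0)
print(np.trace(P_awake) < np.trace(P_asleep))   # -> True: a measurement shrinks the covariance
```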
To ensure that the estimate by each agent always satisfies the constraints, we present the following algorithm by projecting the unconstrained estimation onto the constrained surfaces:
x i , k + 1 = P X { x ˜ i , k + 1 } .
The proposed Kalman-consensus filter with constraints via the RS scheme (RSKCF) is summarized in Algorithm 1.
Define $P_{i,k} = \mathbb{E}\{(x_k - x_{i,k})(x_k - x_{i,k})^\top\}$ and $\tilde{P}_{i,k} = \mathbb{E}\{(x_k - \tilde{x}_{i,k})(x_k - \tilde{x}_{i,k})^\top\}$. According to Lemma 1, $\mathrm{Tr}(P_{i,k}) \le \mathrm{Tr}(\tilde{P}_{i,k})$, which implies that the estimator can achieve better performance by using the constraint information in the design. In [20], the author proved that the projection-based Kalman filter with linear equality constraints performs better than the unconstrained estimator.
Notice that $P_{i,k}$ in the algorithm no longer represents the estimation error covariance with respect to $x_{i,k}$. A poor choice of the consensus gain $G_{i,k}$ may spoil the stability of the error dynamics. In Section 4, we will present the stability analysis of the algorithm with an appropriate choice of the consensus gain.
Algorithm 1 KCF with constraints via RS (RSKCF)
Input: prior information $x_{i,k}$, $P_{i,k}$ at time $k$;
Initialization: $x_{i,0}$, $P_{i,0}$;
Random Sleep on Measurement Collection
$K_{i,k} = A P_{i,k} C_i^\top (C_i P_{i,k} C_i^\top + R_i)^{-1}$, $\bar{x}_{i,k+1} = A x_{i,k} + \sigma_{i,k}^m K_{i,k}(y_{i,k} - C_i x_{i,k})$, $P_{i,k+1} = A P_{i,k} A^\top + Q - \sigma_{i,k}^m K_{i,k} C_i P_{i,k} A^\top$;
Random Sleep on Consensus
$\tilde{x}_{i,k+1} = \bar{x}_{i,k+1} + A G_{i,k} \sum_{j \in N_i} \sigma_{j,k}^c (x_{j,k} - x_{i,k})$;
Projection
x i , k + 1 = P X ( x ˜ i , k + 1 )
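A compact sketch of one RSKCF iteration across the whole network follows. It is illustrative only: the two-agent topology, parameter values, box-type constraint set, and the simplification $G_{i,k} = g I$ are our assumptions, not the paper's design (the paper derives a specific $G_{i,k}$ in Section 4).

```python
import numpy as np

rng = np.random.default_rng(0)

def rskcf_step(x_est, P, A, C, Q, R, neighbors, g, rho_m, rho_c, y, proj):
    """One iteration of Algorithm 1 (RSKCF) for all N agents.
    x_est / P: lists of per-agent estimates and covariances;
    proj: the projection operator P_X onto the constraint set."""
    N = len(x_est)
    sig_m = rng.random(N) < rho_m            # Bernoulli sleep on measurement
    sig_c = rng.random(N) < rho_c            # Bernoulli sleep on consensus
    x_bar, P_new = [], []
    for i in range(N):                       # random sleep on measurement collection
        K = A @ P[i] @ C.T @ np.linalg.inv(C @ P[i] @ C.T + R)
        x_bar.append(A @ x_est[i] + sig_m[i] * K @ (y[i] - C @ x_est[i]))
        P_new.append(A @ P[i] @ A.T + Q - sig_m[i] * K @ C @ P[i] @ A.T)
    x_next = []
    for i in range(N):                       # random sleep on consensus, then projection
        cons = sum(sig_c[j] * (x_est[j] - x_est[i]) for j in neighbors[i])
        x_next.append(proj(x_bar[i] + g * A @ cons))   # G_{i,k} = g*I here, a simplification
    return x_next, P_new

# Tiny 2-agent example with a box constraint |x| <= 10 (our toy choice of X).
A = np.eye(2); C = np.eye(2); Q = 0.01 * np.eye(2); R = np.eye(2)
proj = lambda x: np.clip(x, -10.0, 10.0)
x_est = [np.zeros(2), np.ones(2)]
P = [np.eye(2), np.eye(2)]
y = [np.array([0.5, 0.5]), np.array([0.4, 0.6])]
x_next, P_next = rskcf_step(x_est, P, A, C, Q, R, neighbors=[[1], [0]],
                            g=0.1, rho_m=0.7, rho_c=0.7, y=y, proj=proj)
```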
Define $e_{i,k+1} = x_{k+1} - x_{i,k+1}$; then the error dynamics of Algorithm 1 can be written as
$$e_{i,k+1} = (A - \sigma_{i,k}^m K_{i,k} C_i) e_{i,k} + A G_{i,k} \sum_{j \in N_i} \sigma_{j,k}^c (e_{j,k} - e_{i,k}) - \sigma_{i,k}^m K_{i,k} v_{i,k} + w_k + \alpha_{i,k+1},$$
where $\alpha_{i,k+1}$ denotes the deviation introduced by the projection step.
Remark 1.
The different choices of $\rho_i^m$ and $\rho_i^c$ correspond to different network conditions. If $\rho_i^c = 0$ and $0 < \rho_i^m < 1$, $\forall i \in V$, the network is not connected, and each agent computes its estimate only by its local Kalman filter with constraints. If $\rho_i^c = 1$ and $0 < \rho_i^m < 1$, $\forall i \in V$, the multi-agent system has a perfect communication channel, but each agent may fall asleep during measurement collection. When $0 < \rho_i^c < 1$ and $\rho_i^m = 1$, $\forall i \in V$, each agent takes a measurement at each time step but may fall asleep during the consensus stage. Since the RS probabilities $\rho_i^c$ and $\rho_i^m$ can be determined independently, we can take advantage of this formulation to simplify the design in practice.
Remark 2.
It should be noted that the inequality constraints $q_t(x) \le 0$, $t = 1, \ldots, s$, in this paper include linear equality constraints. In Section 4, problem (4) will be solved in closed form under the linear equality constraints $D x_k = d$ as a special case, which is consistent with the case in [20]. It should also be noted that nonlinear equality constraints cannot be handled by the projection method, because $\{x \mid q_t(x) = 0, t = 1, \ldots, s\}$ may not be a convex set.

3.2. Stochastic Event-Triggered Scheme

An event-triggered scheme, based on an event monitor, can also be considered to reduce the communication rate and to save energy. The event-triggered idea is carried out as follows: when the associated monitor exceeds a predefined threshold, the agent transmits its local information to its neighbors, which provides a tradeoff between the communication rate and the performance. Since the agents only transmit valuable information, the event-triggered scheme may achieve better performance than the proposed RS scheme. In [17,18], the authors dealt with a centralized stochastic event-triggered estimation problem. Here we introduce a distributed stochastic event-triggered scheme by extending the algorithms described in [18].
Denote a binary decision variable $\gamma_{i,k} \in \{0, 1\}$ for agent $i$ at time $k$. We use the following strategy: agent $i$ measures the target state and sends information to its neighbors if $\gamma_{i,k} = 1$; agent $i$ only receives information from its neighbors if $\gamma_{i,k} = 0$. As stated in [18], at each time instant, agent $i$ generates an i.i.d. random variable $\varphi_{i,k}$, uniformly distributed on $[0,1]$, and computes $\gamma_{i,k}$ as follows:
$$\gamma_{i,k} = \begin{cases} 0, & \varphi_{i,k} \le \psi(\tilde{y}_{i,k}) \\ 1, & \varphi_{i,k} > \psi(\tilde{y}_{i,k}), \end{cases}$$
where $\psi(\tilde{y}_{i,k}) \triangleq \exp(-\frac{1}{2} \tilde{y}_{i,k}^\top Y_i \tilde{y}_{i,k})$ and $\tilde{y}_{i,k} = y_{i,k} - C_i x_{i,k}$. Clearly, in (13), the parameter $Y_i$ introduces one degree of freedom to balance the tradeoff between the communication rate and the estimation performance. From an engineering viewpoint, a larger $Y_i$ means more information is transmitted.
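A minimal sketch of this trigger (the function name and the toy values of $Y_i$ are ours; the uniform draw of $\varphi_{i,k}$ on $[0,1]$ follows the scheme of [18]): a large innovation makes $\psi$ tiny, so transmission becomes almost certain, while a zero innovation gives $\psi = 1$ and never triggers.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_trigger(y_tilde, Y):
    """Stochastic event trigger (13): transmit (gamma = 1) iff a uniform
    draw phi exceeds psi(y_tilde) = exp(-0.5 * y~^T Y y~)."""
    psi = np.exp(-0.5 * y_tilde @ Y @ y_tilde)
    phi = rng.random()                 # i.i.d. uniform on [0, 1]
    return int(phi > psi)

Y = 0.5 * np.eye(2)
big = np.array([10.0, 10.0])           # large innovation -> psi ~ exp(-50)
rate = np.mean([gamma_trigger(big, Y) for _ in range(1000)])
print(rate)                            # close to 1.0: almost always transmits
```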
The local MMSE estimator (without consensus and projection) for agent $i$ incorporating the stochastic event trigger (13) is given as follows (see Theorem 2 in [18]):
$$K_{i,k} = A P_{i,k} C_i^\top \big(C_i P_{i,k} C_i^\top + R_i + (1 - \gamma_{i,k}) Y_i^{-1}\big)^{-1}, \quad P_{i,k+1} = A P_{i,k} A^\top + Q - K_{i,k} C_i P_{i,k} A^\top, \quad \bar{x}_{i,k+1} = A x_{i,k} + K_{i,k} (y_{i,k} - C_i x_{i,k}).$$
As stated before, by the consensus idea and projection technique, the constrained estimation can be obtained by x i , k = P X ( x ˜ i , k ) . The stochastic event-triggered KCF with constraints (ETKCF) is described in Algorithm 2.
Algorithm 2 KCF with constraints via stochastic event-triggered scheme
Initialization x i , 0 , P i , 0 ;
Local Estimation
$K_{i,k} = A P_{i,k} C_i^\top \big(C_i P_{i,k} C_i^\top + R_i + (1 - \gamma_{i,k}) Y_i^{-1}\big)^{-1}$, $P_{i,k+1} = A P_{i,k} A^\top + Q - K_{i,k} C_i P_{i,k} A^\top$, $\bar{x}_{i,k+1} = A x_{i,k} + K_{i,k} (y_{i,k} - C_i x_{i,k})$;
Consensus
x ˜ i , k + 1 = x ¯ i , k + 1 + A G i , k j N i γ j , k ( x j , k x i , k ) ;
Projection
x i , k + 1 = P X ( x ˜ i , k + 1 ) .
Remark 3.
Since only the important information is broadcasted, the stochastic event-triggered scheme may achieve better performance compared with the RS scheme. The RS scheme, however, has a simpler form and is easier to implement.
Remark 4.
Here, we provide some notation, which will help in analyzing the proposed algorithms. For given $A$, $C_i$, $Q$, $R_i$, and $P_{i,k}$, without loss of generality, there are positive scalars $\bar{a}$, $\bar{c}_i$, $\underline{q}$, $\underline{r}_i$, $\underline{p}_i$, and $\bar{p}_i$ such that
$$\|A\| \le \bar{a}, \quad \|C_i\| \le \bar{c}_i, \quad Q \succeq \underline{q} I, \quad R_i \succeq \underline{r}_i I, \quad \underline{p}_i I \preceq P_{i,k} \preceq \bar{p}_i I.$$
The last inequality holds because the solution of (9) is bounded under Assumption 1 and Theorem 1 in [30].

4. Performance Analysis

In this section, we analyze the convergence of the proposed algorithms in the following three subsections. We introduce some concepts related to stochastic processes, which are useful in the following convergence analysis ([31,32]).
Definition 1.
The stochastic process $\zeta_k$ is said to be exponentially bounded in mean square if there are real numbers $\eta > 0$, $\nu > 0$, and $0 < \vartheta < 1$ such that
$$\mathbb{E}\{\|\zeta_k\|^2\} \le \eta \|\zeta_0\|^2 \vartheta^k + \nu$$
holds for every $k \ge 0$.
Definition 2.
The stochastic process $\zeta_k$ is said to be bounded with probability one if
$$\sup_{k \ge 0} \|\zeta_k\| < \infty$$
holds with probability one.
We first give three lemmas, which can be found in [33].
Lemma 2
(Lemma 2.1, [33]). Suppose that there is a stochastic process $V_k(\xi_k)$ as well as positive numbers $\bar{\theta}$, $\underline{\theta}$, $\mu$, and $0 < \alpha \le 1$ such that
$$\underline{\theta} \|\xi_k\|^2 \le V_k(\xi_k) \le \bar{\theta} \|\xi_k\|^2$$
and
$$\mathbb{E}\{V_{k+1}(\xi_{k+1}) \mid \xi_k\} - V_k(\xi_k) \le \mu - \alpha V_k(\xi_k)$$
are fulfilled. Then the stochastic process is exponentially bounded in mean square, i.e.,
$$\mathbb{E}\{\|\xi_n\|^2\} \le \frac{\bar{\theta}}{\underline{\theta}} \, \mathbb{E}\{\|\xi_0\|^2\}(1 - \alpha)^n + \frac{\mu}{\underline{\theta}} \sum_{i=1}^{n-1} (1 - \alpha)^i$$
for every $n \ge 0$. Moreover, the stochastic process is bounded with probability one.
Lemma 3
(Lemma 3.1, [33]). Under Assumption 1, there is a real number $0 < \tau_i < 1$, $i = 1, \ldots, N$, such that
$$(A - K_{i,k} C_i)^\top P_{i,k+1}^{-1} (A - K_{i,k} C_i) \preceq (1 - \tau_i) P_{i,k}^{-1}.$$
Lemma 4
(Lemma 3.3, [33]). Under Assumption 1, there is a positive constant $\epsilon_i > 0$ such that
$$\mathbb{E}\{w_k^\top P_{i,k+1}^{-1} w_k + v_{i,k}^\top K_{i,k}^\top P_{i,k+1}^{-1} K_{i,k} v_{i,k}\} \le \epsilon_i$$
holds.

4.1. Random Sleep Scheme

The following theorem shows the stochastic stability of (12).
Theorem 1.
Under Assumptions 1 and 2, the error dynamics (12) of Algorithm 1 are exponentially bounded in mean square and bounded with probability one.
To prove Theorem 1, we first give a lemma; its proof can be found in Appendix A.
Lemma 5.
Under Assumption 1, there exists a constant $0 < \bar{\tau}_i < 1$ such that
$$\mathbb{E}\{(A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} (A - \sigma_{i,k}^m K_{i,k} C_i)\} \preceq (1 - \bar{\tau}_i) P_{i,k}^{-1}$$
holds.
Proof of Theorem 1.
By Lemma 4, there exists a constant $\bar{\epsilon}_i$ such that
$$\mathbb{E}\{w_k^\top P_{i,k+1}^{-1} w_k + (\sigma_{i,k}^m)^2 v_{i,k}^\top K_{i,k}^\top P_{i,k+1}^{-1} K_{i,k} v_{i,k}\} \le \bar{\epsilon}_i.$$
Before projection, the error dynamics of Algorithm 1 are
$$e_{i,k+1} = (A - \sigma_{i,k}^m K_{i,k} C_i) e_{i,k} - \sigma_{i,k}^m K_{i,k} v_{i,k} + w_k + A G_{i,k} \sum_{j \in N_i} \sigma_{j,k}^c (e_{j,k} - e_{i,k}).$$
With $V_k(e_k) = e_k^\top P_k^{-1} e_k$, we have
$$\begin{aligned} V_{k+1}(e_{k+1}) \le{} & \sum_{i=1}^N e_{i,k}^\top (A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} (A - \sigma_{i,k}^m K_{i,k} C_i) e_{i,k} \\ & + 2 \sum_{i=1}^N e_{i,k}^\top (A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} A G_{i,k} z_{ji,k} \\ & + 2 \sum_{i=1}^N e_{i,k}^\top (A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} (w_k - \sigma_{i,k}^m K_{i,k} v_{i,k}) \\ & + \sum_{i=1}^N z_{ji,k}^\top G_{i,k}^\top A^\top P_{i,k+1}^{-1} A G_{i,k} z_{ji,k} + 2 \sum_{i=1}^N z_{ji,k}^\top G_{i,k}^\top A^\top P_{i,k+1}^{-1} (w_k - \sigma_{i,k}^m K_{i,k} v_{i,k}) \\ & + \sum_{i=1}^N w_k^\top P_{i,k+1}^{-1} w_k + \sum_{i=1}^N (\sigma_{i,k}^m)^2 v_{i,k}^\top K_{i,k}^\top P_{i,k+1}^{-1} K_{i,k} v_{i,k} - 2 \sum_{i=1}^N \sigma_{i,k}^m w_k^\top P_{i,k+1}^{-1} K_{i,k} v_{i,k}, \end{aligned}$$
where $z_{ji,k} = \sum_{j \in N_i} \sigma_{j,k}^c (e_{j,k} - e_{i,k})$. By (24) and Lemma 5, and taking expectations on both sides of the above inequality, we have
$$\mathbb{E}\{V_{k+1}(e_{k+1})\} \le \sum_{i=1}^N (1 - \bar{\tau}_i) e_{i,k}^\top P_{i,k}^{-1} e_{i,k} + \sum_{i=1}^N \bar{\epsilon}_i + 2 \sum_{i=1}^N e_{i,k}^\top (A - \rho_i^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} A G_{i,k} \hat{z}_{ji,k} + \sum_{i=1}^N \hat{z}_{ji,k}^\top G_{i,k}^\top A^\top P_{i,k+1}^{-1} A G_{i,k} \hat{z}_{ji,k},$$
where $\hat{z}_{ji,k} = \mathbb{E}\{z_{ji,k}\} = \sum_{j \in N_i} \rho_j^c (e_{j,k} - e_{i,k})$.
Similarly, the consensus gain can be chosen as $G_{i,k} = g \big((A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} A\big)^{-1} \tilde{\Gamma}_{i,k}$, where $\tilde{\Gamma}_{i,k} = (A - \sigma_{i,k}^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} (A - \sigma_{i,k}^m K_{i,k} C_i)$. Denoting $\bar{\Gamma}_{i,k} = \mathbb{E}\{\tilde{\Gamma}_{i,k}\} = (A - \rho_i^m K_{i,k} C_i)^\top P_{i,k+1}^{-1} (A - \rho_i^m K_{i,k} C_i)$, we have
$$\begin{aligned} \mathbb{E}\{V_{k+1}(e_{k+1})\} \le{} & \sum_{i=1}^N (1 - \bar{\tau}_i) e_{i,k}^\top P_{i,k}^{-1} e_{i,k} + \sum_{i=1}^N \bar{\epsilon}_i + \sum_{i=1}^N g\, e_{i,k}^\top \bar{\Gamma}_{i,k} \hat{z}_{ji,k} + \sum_{i=1}^N g^2 \hat{z}_{ji,k}^\top \bar{\Gamma}_{i,k} \hat{z}_{ji,k} \\ \le{} & \sum_{i=1}^N (1 - \bar{\tau}_i) e_{i,k}^\top P_{i,k}^{-1} e_{i,k} + \sum_{i=1}^N \bar{\epsilon}_i - g \lambda_{min}(\bar{\Gamma}_k)\, e_k^\top \underline{L} e_k + g^2 \lambda_{max}(\bar{\Gamma}_k) \lambda_{max}(\underline{L}^\top \underline{L})\, e_k^\top e_k \\ \le{} & \sum_{i=1}^N (1 - \bar{\tau}_i) e_{i,k}^\top P_{i,k}^{-1} e_{i,k} + \sum_{i=1}^N \bar{\epsilon}_i + g^2 \lambda_{max}(\bar{\Gamma}_k) \lambda_{max}(\underline{L}^\top \underline{L})\, e_k^\top e_k, \end{aligned}$$
where Γ ¯ k = d i a g [ Γ ¯ i , k , , Γ ¯ i , k ] and L ̲ = [ l ̲ i j ] defined as
$$\underline{l}_{ij} = \begin{cases} -\rho_j^c, & \text{if } (i,j) \in E, \\ -\sum_{j \in N_i} \underline{l}_{ij}, & \text{if } i = j, \\ 0, & \text{otherwise.} \end{cases}$$
According to Lemma 5, $\lambda_{max}(\bar{\Gamma}_k) \le \frac{1 - \bar{\tau}}{\underline{p}}$, where $\bar{\tau} = \min\{\bar{\tau}_1, \ldots, \bar{\tau}_N\}$. Substituting this into (29) yields
$$\mathbb{E}\{V_{k+1}(e_{k+1})\} \le \left( (1 - \bar{\tau}) + \frac{1 - \bar{\tau}}{\underline{p}} \bar{p}\, g^2 \lambda_{max}(\underline{L}^\top \underline{L}) \right) e_k^\top P_k^{-1} e_k + \sum_{i=1}^N \bar{\epsilon}_i = \left( (1 - \bar{\tau}) + \frac{1 - \bar{\tau}}{\underline{p}} \bar{p}\, g^2 \lambda_{max}(\underline{L}^\top \underline{L}) \right) V_k(e_k) + \sum_{i=1}^N \bar{\epsilon}_i.$$
Take $0 < g < g^*$ with
$$g^* = \sqrt{\frac{\bar{\tau}\, \underline{p}}{(1 - \bar{\tau})\, \bar{p}\, \lambda_{max}(\underline{L}^\top \underline{L})}}.$$
Obviously, Lemma 2 is satisfied with
$$\alpha = \bar{\tau} - \frac{1 - \bar{\tau}}{\underline{p}} \bar{p}\, g^2 \lambda_{max}(\underline{L}^\top \underline{L}), \qquad \mu = \sum_{i=1}^N \bar{\epsilon}_i.$$
Thus, the conclusion follows. ☐
Remark 5.
From the proof of Theorem 1, the constrained estimate $x_{i,k}$ equals the unconstrained estimate $\tilde{x}_{i,k}$ once $\tilde{x}_{i,k} \in X$ for agent $i$. Otherwise, $\|x_{i,k} - x_k\| < \|\tilde{x}_{i,k} - x_k\|$. In other words, the constrained estimate $x_{i,k}$ is closer to the true state $x_k$ than the unconstrained estimate $\tilde{x}_{i,k}$. In fact, our schemes first make all agents' estimation errors bounded, and then project each estimate onto the constraint surface.
Remark 6.
It should be noted that, if the system is asymptotically stable, i.e., the spectral radius of $A$ is less than 1, there always exist gains that guarantee the convergence of the MSE. When the system is unstable, i.e., the spectral radius of $A$ is greater than 1, we need to design the estimator gain $K_{i,k}$ and consensus gain $G_{i,k}$ to guarantee the convergence of the MSE. In [13], the authors discussed the maximum degree of instability of $A$ that guarantees the convergence of a distributed estimation algorithm. The maximum degree of instability of $A$ relates to the connectivity and the global measurement matrix, which reflects the tracking ability of the network. Given the maximum degree of $A$, the problem remains how to design the gain matrices $K_{i,k}$ and $G_{i,k}$.
In what follows, we give the closed-form solution under linear equality constraints, which can be written as follows:
$$D x_k = d,$$
where $D \in \mathbb{R}^{s \times m}$ is the constraint matrix, $d \in \mathbb{R}^s$, and $s$ is the number of constraints. Generally, $D$ should be of full row rank; otherwise, we have redundant constraints. In that case, we can remove linearly dependent rows from $D$ until it has full row rank.
To use the projection method, the constraint (32) is written as $X = \{x \mid Dx = d\}$. Each individual agent can then obtain its constrained estimate by projection as follows:
$$x_{i,k+1} = \tilde{x}_{i,k+1} - D^\top (D D^\top)^{-1} (D \tilde{x}_{i,k+1} - d),$$
where x ˜ i , k + 1 is the estimate obtained by the consensus step. Therefore, we have the following result.
Corollary 1.
Under Assumptions 1 and 2, the error dynamics of Algorithm 1 with constraints (32) are exponentially bounded in mean square and bounded with probability one. Moreover, the constrained estimate is $x_{i,k+1} = \tilde{x}_{i,k+1} - D^\top (D D^\top)^{-1} (D \tilde{x}_{i,k+1} - d)$.
Clearly, the linear equality constraints can be extended to linear state inequality constraints $D x_k \le d$. Specifically, if the consensus estimate satisfies $D \tilde{x}_{i,k} \le d$, then the constrained estimate $x_{i,k}$ and $\tilde{x}_{i,k}$ coincide. Otherwise, the constrained estimate is obtained by projecting $\tilde{x}_{i,k}$ onto $D x_k = d$. Therefore, Corollary 1 still holds in the linear inequality constraint case.
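The closed-form equality projection above can be sketched numerically (the scalar constraint $x_1 + x_2 = 1$ and the function name are our own illustration):

```python
import numpy as np

def project_equality(x_tilde, D, d):
    """Closed-form projection onto {x : D x = d}:
    x - D^T (D D^T)^{-1} (D x - d). D must have full row rank."""
    DDt_inv = np.linalg.inv(D @ D.T)
    return x_tilde - D.T @ DDt_inv @ (D @ x_tilde - d)

D = np.array([[1.0, 1.0]])                  # constraint: x1 + x2 = 1
d = np.array([1.0])
x_proj = project_equality(np.array([1.0, 1.0]), D, d)
print(x_proj)                               # -> [0.5 0.5]
print(D @ x_proj)                           # -> [1.], i.e. the constraint holds exactly
```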

4.2. Event-Triggered Scheme

The communication rate $\gamma_i$ for agent $i$ is given by
$$\Pr(\gamma_{i,k} = 0) = \Pr\big(\varphi_{i,k} \le \psi(\tilde{y}_{i,k})\big) = \mathbb{E}\big\{\exp(-\tfrac{1}{2} \tilde{y}_{i,k}^\top Y_i \tilde{y}_{i,k})\big\}.$$
Notice that $\psi(\tilde{y}_{i,k})$ is proportional to the probability density of a Gaussian variable. If we could obtain the covariance of $\tilde{y}_{i,k}$ in the steady state, we could obtain the communication rate [18]. However, in the distributed case it is difficult to analyze the communication rate, since $\tilde{y}_{i,k}$ depends on the consensus and projection stages, even though the stochastic event-triggered scheme (13) determines a communication rate.
Although we cannot explicitly obtain the stochastic event-triggered communication rate, we can still observe that the event-triggered scheme performs better than the random sleep scheme at the same communication cost (see the performance comparison between the random sleep scheme and the stochastic event-triggered scheme in the simulations of Section 5).
Actually, for agent $i$, there exists a stochastic event-triggered communication rate $0 < \gamma_i < 1$. The probabilities of collecting measurements and sending information can be taken as 1 and $\gamma_i$, respectively. Following Theorem 1, there exists a sufficiently small consensus gain guaranteeing that the error dynamics are exponentially bounded in mean square and bounded with probability one.

5. Simulations

In this section, we give some simulations to illustrate the effectiveness of the proposed algorithms. We compare the estimation performance of the proposed estimators with the suboptimal consensus-based Kalman filter (SKCF) in [2]. Moreover, we compare the performance of the two proposed algorithms.
A target moves along a line with constant velocity and is tracked by a network of $N = 6$ agents. The topology of the network is shown in Figure 1. The dynamics of the target are given as follows:
$$x_{k+1} = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_k + w_k,$$
where $T$ is the sampling period and $x_k = [x_k(1), x_k(2), x_k(3), x_k(4)]^\top$. $x_k(1)$ and $x_k(2)$ are the target positions, and $x_k(3)$ and $x_k(4)$ are the velocities along the two directions. In this example, we take $T = 1\,\mathrm{s}$ and $Q = \mathrm{diag}(0.1, 0.1, 0.1, 0.1)$.
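The constant-velocity model above can be sketched as follows (the initial state is our own illustrative value, not from the paper's setup):

```python
import numpy as np

T = 1.0                                   # sampling period (s), as in the example
A = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.1 * np.eye(4)

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 1.0, 2.0])        # position (0, 0), velocity (1, 2)
x_next = A @ x + rng.multivariate_normal(np.zeros(4), Q)   # one noisy step
print(A @ x)   # noise-free part: each position advances by one velocity sample
```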
The measurement of agent $i$ is expressed as follows:
$$y_{i,k} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} x_k + v_{i,k}.$$
In [20], the author stated that the target is constrained if it travels along the road; otherwise, it is unconstrained. In this example, we assume the target travels along a given road, and therefore the problem is constrained. Here we consider that the target travels on a road with heading $\eta$, which means $\tan\eta = \frac{x_k(2)}{x_k(1)} = \frac{x_k(4)}{x_k(3)}$. Then the constraint information can be written as:
$$D = \begin{bmatrix} \tan\eta & -1 & 0 & 0 \\ 0 & 0 & \tan\eta & -1 \end{bmatrix}, \qquad d = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
In this example, we take $\eta = 60^\circ$.
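A quick numerical check of the road constraint (the sign convention of the rows below is one encoding consistent with $\tan\eta = x(2)/x(1) = x(4)/x(3)$; the test point is our own):

```python
import numpy as np

eta = np.deg2rad(60.0)
t = np.tan(eta)
# Road constraint x(2) = x(1)*tan(eta), x(4) = x(3)*tan(eta),
# written as D x = d with d = 0.
D = np.array([[t, -1.0, 0.0, 0.0],
              [0.0, 0.0, t, -1.0]])
d = np.zeros(2)

x_on_road = np.array([1.0, t, 2.0, 2.0 * t])   # a point lying on the heading-eta road
print(np.allclose(D @ x_on_road - d, 0.0))     # -> True: the constraint is satisfied
```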
Simulation Case 1: In this case, we test the performance of the two proposed algorithms. Denote $e_0 = [5, 5, 0.3, 0.3]^\top$ and $x_0 = [0, 0, \tan\eta, 1]^\top$, and set the initial condition of agent $i$ to $x_{i,0} = x_0 + (-1)^i i e_0$, with $R_1 = \mathrm{diag}(90, 90)$, $R_2 = \mathrm{diag}(80, 80)$, $R_3 = \mathrm{diag}(70, 70)$, $R_4 = \mathrm{diag}(75, 75)$, $R_5 = \mathrm{diag}(85, 85)$, $R_6 = \mathrm{diag}(95, 95)$.
We consider the total mean square estimation errors (TMSEE) of Algorithm 1, and compare its performance with the existing consensus-based Kalman filter algorithms in [9]. Here, we name the algorithm in [9] as SKCF. TMSEE is widely used to indicate the performance of the estimator, which is defined as
$$\mathrm{TMSEE}_k = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\left[ (x_{i,k} - x_k)^\top (x_{i,k} - x_k) \right].$$
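A minimal sketch of how the empirical TMSEE can be computed from Monte Carlo runs (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def tmsee(estimates, truth):
    """Empirical TMSEE at time k: average over agents and Monte Carlo
    runs of the squared error (x_hat_{i,k} - x_k)^T (x_hat_{i,k} - x_k).

    estimates: array of shape (runs, N_agents, n) at time k
    truth:     array of shape (runs, n) at time k
    """
    err = estimates - truth[:, None, :]     # (runs, N, n)
    sq = np.sum(err ** 2, axis=-1)          # per-run, per-agent ||e||^2
    return sq.mean()                        # average over runs and agents

# Toy check: every agent's error vector is (2, 0, 0, 0), so ||e||^2 = 4.
est = np.zeros((3, 6, 4))
est[..., 0] = 2.0
print(tmsee(est, np.zeros((3, 4))))         # -> 4.0
```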
The parameters of the multi-agent system are chosen such that the upper bound on $g$ is determined by Theorem 1. In both cases, 1000 independent Monte Carlo runs are carried out to evaluate the estimation performance. Figure 2 shows the TMSEE of Algorithm 1 and of the SKCF in [9] at the same communication rate. It can be seen that the constrained estimate produced by Algorithm 1 is more accurate than the unconstrained KCF. Moreover, according to Figure 2, the RSKCF with constraints converges to a precision of 60 in fewer than 15 time instants, while the SKCF without constraints needs almost 20. Therefore, in this example, by exploiting the constraint information, the proposed RSKCF outperforms the SKCF in both convergence rate and accuracy.
Next, we give further simulation results for Algorithm 1. The parameter settings are the same as above. Take $\rho_1^m = \rho_3^m = \rho_5^m = 0.5$, $\rho_2^m = \rho_4^m = \rho_6^m = 1$, and $\rho_1^c = \cdots = \rho_6^c = 1$. Figure 3 shows that the estimation errors of the agents reach consensus and remain bounded, which is consistent with the results of Theorem 1.
The curve of $g$ with fixed $\rho_i^m = 0.7$, $i \in \mathcal{V}$, is shown in Figure 4, while the curve with fixed $\rho_i^c = 0.7$, $i \in \mathcal{V}$, is shown in Figure 5. From the proof of Theorem 1, it can be observed that the upper bound on $g$ increases with the probability of collecting measurements. Moreover, the upper bound on $g$ decreases with the broadcast probability accordingly.
In order to compare the stochastic event-triggered scheme with the random sleep scheme, we choose parameters such that $\gamma_i = \rho_i^c$ and $\rho_i^m = 1$. Specifically, we obtain $\gamma_1 = 0.6027$, $\gamma_2 = 0.5940$, $\gamma_3 = 0.5823$, $\gamma_4 = 0.5928$, $\gamma_5 = 0.6194$, and $\gamma_6 = 0.6248$ by simulation. The triggering sequence is shown in Figure 6. As shown in Figure 7, the stochastic event-triggered scheme performs better than the random sleep scheme. An intuitive explanation is that, under the stochastic event-triggered scheme, only important information is transmitted. The communication rates of Algorithm 2 with $Y_i = 0.1 I_2, 0.3 I_2, 0.5 I_2, 0.7 I_2$, $\forall i$, are shown in Figure 8, where $I_2 = \mathrm{diag}\{1, 1\}$. The simulation results show that the communication rate increases with $Y_i$, since under the event-triggered scheme (13) an agent becomes more likely to share information with its neighbors as $Y_i$ increases.
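The kind of stochastic event trigger discussed above can be sketched as follows. This does not reproduce the exact form of scheme (13); it follows the construction of [18], in which an agent draws $\zeta \sim U(0,1)$ and transmits when $\zeta > \exp(-\tfrac{1}{2}\varepsilon^\top Y_i \varepsilon)$, so the communication rate indeed grows with the innovation magnitude and with $Y_i$:

```python
import numpy as np

def should_transmit(innovation, Y, rng):
    """Stochastic event trigger in the style of [18]: draw zeta ~ U(0,1)
    and transmit iff zeta > exp(-0.5 e^T Y e), so larger innovations
    (or a larger Y) make transmission more likely."""
    e = np.asarray(innovation, dtype=float)
    phi = np.exp(-0.5 * e @ Y @ e)
    return rng.uniform() > phi

rng = np.random.default_rng(2)
Y = 0.5 * np.eye(2)
# Estimate the resulting communication rate by simulation with
# standard-normal innovations (illustrative only).
rate = np.mean([should_transmit(rng.normal(size=2), Y, rng)
                for _ in range(5000)])
```

A zero innovation gives $\phi = 1$ and thus never triggers, while a very large innovation triggers almost surely, which is the desired "transmit only important information" behavior.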
Simulation Case 2: In this case, we give an example on a more complex network consisting of 30 agents. The communication topology of the network is shown in Figure 9. The parameter settings are the same as in Case 1. The initial condition of agent $i$ is set to $x_{i,0} = x_0 + (-1)^i i\, e_0$, $i = 1, \ldots, 30$, and $R_i = \mathrm{diag}(r_{1,i}, r_{2,i})$, where $r_{1,i}$ and $r_{2,i}$ are drawn randomly from $(50, 80)$. Figure 10 compares Algorithms 1 and 2: the stochastic event-triggered scheme still performs better than the random sleep scheme. Figures 11 and 12 give the TMSEE of Algorithms 1 and 2, respectively.

6. Discussion

As shown in Figure 2, the proposed algorithm achieves better performance than the one in [9]. This is because the constraints can be treated as additional information beyond that explicitly given in the system model. The modified model therefore differs from the standard Kalman filter equations, and this modification helps to improve the estimation performance. In [34], the authors studied estimation with nonlinear equality constraints, where the nonlinear state constraints are linearized locally and the estimate is obtained by projection onto the local linear surface. In a distributed setting, such linearization introduces approximation error and may suffer from a lack of convergence. Our future work may include designing distributed filters with nonlinear state equality constraints.
A consensus-based Kalman filter with stochastic sensor activation was proposed in [5], where all sensors share the same activation probability, and the stability of the algorithm was proved. In [9], the authors extended these results to allow a different activation probability for each sensor, and convergence was shown under mild conditions. Both [5] and [9] studied the mean-square stability of the algorithm. In contrast, we study the stochastic stability of the proposed algorithm and show that the constraint information helps to improve the estimation performance.
In this paper, we investigate the random communication problem for distributed estimation with state constraints. It should be noted that a local observability condition is needed. In [11], the authors also studied distributed estimation with state inequality constraints in the deterministic case, where only a global observability condition is needed, i.e., each individual agent need not be observable. The results in [11] rely on sufficient communication between agents so that information spreads through the whole network. However, when agents communicate with their neighbors randomly, the information may not spread sufficiently through the network, and global observability alone can hardly guarantee the stability of the estimate. Indeed, under global observability, it is easy to see that if the agents do not communicate with each other, the estimation diverges. It is worth studying how to design the communication rate such that the estimation error remains stable under only a global observability condition.
In [35,36], distributed event-triggered estimation was also studied. In [35], a time-varying gain was designed via Riccati-like difference equations in order to balance the innovation and state-consensus information, while in [36] an event-triggered scheme was derived by analyzing the stability conditions of the error dynamics (without noise terms). Motivated by [18], we introduce the stochastic event-triggered scheme of [18], which shows how to design a parameter in the event mechanism to satisfy a desired trade-off between the communication rate and the estimation quality. However, in the distributed setting, it is hard to obtain the communication rate due to the correlation between agents. In future work, we may study the communication rate for distributed stochastic event-triggered estimation.
Many works investigate network structures [37,38,39,40]. In [37], the authors presented a metrics suite for evaluating the communication of multi-agent systems, where an agent can choose any other agent to communicate with. The paper [38] considered the problem of network-based practical set consensus over multi-agent systems subject to input saturation constraints with quantization and network-induced delays. We considered the distributed estimation problem over multi-agent systems without input, quantization, or delays; these extensions are possible future work. In [39], the authors considered the problem of permutation routing over wireless networks. The main idea in [39] is to partition the network into several groups; broadcasting is then performed locally in each group, and gateways are used for communication between groups to send the items to their final destinations.

7. Conclusions

Distributed estimation based on the Kalman filter under state constraints with random communication was studied in this paper. Two stochastic schemes, namely the random sleep and event-triggered schemes, were introduced to deal with environmental and communication uncertainties and to save energy. The convergence of the proposed algorithms was verified, and the conditions for the corresponding stability were given by choosing a suitable consensus gain. Moreover, it was shown that the additional state-constraint information helps to improve the performance of distributed estimators.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China under Grant 61403399.

Author Contributions

Chen Hu, Haoshen Lin, Zhenhua Li, Bing He and Gang Liu contributed to the idea. Chen Hu developed the algorithm, and Chen Hu, Haoshen Lin collected the data and performed the simulations. Chen Hu, Haoshen Lin and Zhenhua Li analyzed the experimental results and wrote this paper. Bing He and Gang Liu supervised the study and reviewed this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 5

Proof of Lemma 5.
Equation (9) can be written as
$$P_{i,k+1} = A P_{i,k} A^\top + Q - \sigma_{i,k}^m K_{i,k} \left( C_i P_{i,k} C_i^\top + R_i \right) K_{i,k}^\top. \tag{A1}$$
Substituting (10) into (A1) and rearranging the terms yield
$$P_{i,k+1} = \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) P_{i,k} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top + Q + \sigma_{i,k}^m K_{i,k} C_i P_{i,k} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top. \tag{A2}$$
It can be verified that
$$A^{-1} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) P_{i,k} = P_{i,k} - \sigma_{i,k}^m P_{i,k} C_i^\top \left( C_i P_{i,k} C_i^\top + R_i \right)^{-1} C_i P_{i,k}. \tag{A3}$$
By applying the matrix inverse lemma (see [41], Appendix A.22, p. 487), we have
$$A^{-1} \left( A - K_{i,k} C_i \right) P_{i,k} = P_{i,k} - P_{i,k} C_i^\top \left( C_i P_{i,k} C_i^\top + R_i \right)^{-1} C_i P_{i,k} = \left( P_{i,k}^{-1} + C_i^\top R_i^{-1} C_i \right)^{-1} > 0. \tag{A4}$$
Similarly, we obtain
$$A^{-1} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) P_{i,k} = P_{i,k} - \sigma_{i,k}^m P_{i,k} C_i^\top \left( C_i P_{i,k} C_i^\top + R_i \right)^{-1} C_i P_{i,k} > 0. \tag{A5}$$
Moreover, it follows from P i , k > 0 and R i > 0 that
$$A^{-1} K_{i,k} C_i = P_{i,k} C_i^\top \left( C_i P_{i,k} C_i^\top + R_i \right)^{-1} C_i \geq 0. \tag{A6}$$
Combining (A5) and (A6) gives
$$K_{i,k} C_i P_{i,k} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top = A \left( A^{-1} K_{i,k} C_i \right) \left( A^{-1} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) P_{i,k} \right) A^\top \geq 0. \tag{A7}$$
As a result,
$$P_{i,k+1} \geq \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) P_{i,k} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top + Q. \tag{A8}$$
Noticing that $C_i P_{i,k} C_i^\top \geq 0$, we have
$$\| K_{i,k} \| \leq \frac{\bar{a}\, \bar{p}_i\, \bar{c}_i}{\underline{r}_i}. \tag{A9}$$
Substituting (A9) into (A8) yields
$$\begin{aligned} P_{i,k+1} &\geq \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) \left[ P_{i,k} + \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^{-1} Q \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^{-\top} \right] \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top \\ &\geq \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) \left( P_{i,k} + \frac{\underline{q}}{\left( \bar{a} + \sigma_{i,k}^m \bar{a}\, \bar{p}_i\, \bar{c}_i^2 / \underline{r}_i \right)^2}\, I \right) \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top. \end{aligned} \tag{A10}$$
Taking the inverse of both sides of (A10), multiplying from the left and right by $\left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top$ and $\left( A - \sigma_{i,k}^m K_{i,k} C_i \right)$, respectively, and then taking the expectation of both sides, we obtain
$$\mathbb{E} \left\{ \left( A - \sigma_{i,k}^m K_{i,k} C_i \right)^\top P_{i,k+1}^{-1} \left( A - \sigma_{i,k}^m K_{i,k} C_i \right) \right\} \leq \left( 1 + \frac{\underline{q}}{\bar{p}_i \left( \bar{a} + \rho_i^m \bar{a}\, \bar{p}_i\, \bar{c}_i^2 / \underline{r}_i \right)^2} \right)^{-1} P_{i,k}^{-1}, \tag{A11}$$
with
$$1 - \bar{\tau}_i = \frac{1}{1 + \dfrac{\underline{q}}{\bar{p}_i \left( \bar{a} + \rho_i^m \bar{a}\, \bar{p}_i\, \bar{c}_i^2 / \underline{r}_i \right)^2}}. \tag{A12}$$
 ☐

References

  1. Olfati-Saber, R. Distributed Kalman filtering for sensor networks. In Proceedings of the IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 5492–5498. [Google Scholar]
  2. Olfati-Saber, R. Kalman-consensus filter: Optimality, stability, and performance. In Proceedings of the Joint IEEE Conference on Decision and Control and Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 7036–7042. [Google Scholar]
  3. Cattivelli, F.; Sayed, A. Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Autom. Control 2010, 55, 2069–2084. [Google Scholar] [CrossRef]
  4. Hu, J.; Xie, L.; Zhang, C. Diffusion Kalman Filtering Based on Covariance Intersection. IEEE Trans. Signal Process. 2012, 60, 891–902. [Google Scholar] [CrossRef]
  5. Yang, W.; Chen, G.; Wang, X.; Shi, L. Stochastic sensor activation for distributed state estimation over a sensor network. Automatica 2014, 50, 2070–2076. [Google Scholar] [CrossRef]
  6. Stanković, S.; Stanković, M.; Stipanović, D. Consensus based overlapping decentralized estimation with missing observations and communication faults. Automatica 2009, 45, 1397–1406. [Google Scholar] [CrossRef]
  7. Zhou, Z.; Fang, H.; Hong, Y. Distributed estimation for moving target based on state-consensus strategy. IEEE Trans. Autom. Control 2013, 58, 2096–2101. [Google Scholar] [CrossRef]
  8. Hu, C.; Qin, W.; He, B.; Liu, G. Distributed H∞ estimation for moving target under switching multi-agent network. Kybernetika 2014, 51, 814–819. [Google Scholar]
  9. Yang, W.; Yang, C.; Shi, H.; Shi, L.; Chen, G. Stochastic link activation for distributed filtering under sensor power constraint. Automatica 2017, 75, 109–118. [Google Scholar] [CrossRef]
  10. Ji, H.; Lewis, F.L.; Hou, Z.; Mikulski, D. Distributed information-weighted Kalman consensus filter for sensor networks. Automatica 2017, 77, 18–30. [Google Scholar] [CrossRef]
  11. Hu, C.; Qin, W.; Li, Z.; He, B.; Liu, G. Consensus-based state estimation for multi-agent systems with constraint information. Kybernetika 2017, 53, 545–561. [Google Scholar] [CrossRef]
  12. Das, S.; Moura, J.M. Distributed Kalman filtering with dynamic observations consensus. IEEE Trans. Signal Process. 2015, 63, 4458–4473. [Google Scholar]
  13. Das, S.; Moura, J.M. Consensus+ innovations distributed Kalman filter with optimized gains. IEEE Trans. Signal Process. 2017, 65, 467–481. [Google Scholar] [CrossRef]
  14. Dong, H.; Wang, Z.; Gao, H. Distributed Filtering for a Class of Time-Varying Systems Over Sensor Networks With Quantization Errors and Successive Packet Dropouts. IEEE Trans. Signal Process. 2012, 60, 3164–3173. [Google Scholar] [CrossRef]
  15. Lou, Y.; Shi, G.; Johansson, K.; Henrik, K.; Hong, Y. Convergence of random sleep algorithms for optimal consensus. Syst. Control Lett. 2013, 62, 1196–1202. [Google Scholar] [CrossRef]
  16. Yi, P.; Hong, Y. Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme. Control Theory Technol. 2015, 13, 333–347. [Google Scholar] [CrossRef]
  17. Wu, J.; Jia, Q.; Johansson, K.; Shi, L. Event-based sensor data scheduling: Trade-off between communication rate and estimation quality. IEEE Trans. Autom. Control 2013, 58, 1041–1046. [Google Scholar] [CrossRef]
  18. Han, D.; Mo, Y.; Wu, J.; Weerakkody, S.; Sinopoli, B.; Shi, L. Stochastic event-triggered sensor schedule for remote state estimation. IEEE Trans. Autom. Control 2015, 60, 2661–2675. [Google Scholar] [CrossRef]
  19. Julier, S.; LaViola, J. On Kalman filtering with nonlinear equality constraints. IEEE Trans. Signal Process. 2007, 55, 2774–2784. [Google Scholar] [CrossRef]
  20. Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 128–136. [Google Scholar] [CrossRef]
  21. Ko, S.; Bitmead, R. State estimation for linear systems with state equality constraints. Automatica 2007, 43, 1363–1368. [Google Scholar] [CrossRef]
  22. Rao, C.; Rawlings, J.; Lee, J. Constrained linear state estimation—A moving horizon approach. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 1619–1628. [Google Scholar] [CrossRef]
  23. Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Control Theory Appl. 2010, 4, 1303–1318. [Google Scholar] [CrossRef]
  24. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  25. Nedić, A.; Ozdaglar, A.; Parrilo, P. Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 2010, 55, 922–938. [Google Scholar] [CrossRef]
  26. Bell, B.; Burke, J.; Pillonetto, G. An inequality constrained nonlinear Kalman-Bucy smoother by interior point likelihood maximization. Automatica 2009, 45, 25–33. [Google Scholar] [CrossRef]
  27. Simon, D.; Simon, D. Kalman filtering with inequality constraints for turbofan engine health estimation. IEE Proc. Control Theory Appl. 2006, 153, 371–378. [Google Scholar] [CrossRef]
  28. Goodwin, G.; Seron, M.; Doná, J.D. Constrained Control and Estimation: An Optimisation Approach; Springer: New York, NY, USA, 2006. [Google Scholar]
  29. Godsil, C.; Royle, G. Algebraic Graph Theory; Springer: New York, NY, USA, 2001. [Google Scholar]
  30. Sinopoli, B.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464. [Google Scholar] [CrossRef]
  31. Agniel, R.; Jury, E. Almost sure boundedness of randomly sampled systems. SIAM J. Control 1971, 9, 372–384. [Google Scholar] [CrossRef]
  32. Tarn, T.; Rasis, Y. Observers for nonlinear stochastic systems. IEEE Trans. Autom. Control 1976, 21, 441–448. [Google Scholar] [CrossRef]
  33. Reif, K.; Günther, S.; Yaz, E.; Unbehauen, R. Stochastic stability of the discrete-time extended Kalman filter. IEEE Trans. Autom. Control 1999, 44, 714–728. [Google Scholar] [CrossRef]
  34. Yang, C.; Blasch, E. Kalman filtering with nonlinear state constraints. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 70–84. [Google Scholar] [CrossRef]
  35. Liu, Q.; Wang, Z.; He, X.; Zhou, D. Event-based distributed filtering with stochastic measurement fading. IEEE Trans. Ind. Inform. 2015, 11, 1643–1652. [Google Scholar] [CrossRef]
  36. Meng, X.; Chen, T. Optimality and stability of event triggered consensus state estimation for wireless sensor networks. In Proceedings of the American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 3565–3570. [Google Scholar]
  37. Gutiérrez Cosio, C.; García Magariño, I. A metrics suite for the communication of multi-agent systems. J. Phys. Agents 2009, 3, 7–14. [Google Scholar] [CrossRef]
  38. Ding, L.; Zheng, W.X.; Guo, G. Network-based practical set consensus of multi-agent systems subject to input saturation. Automatica 2018, 89, 316–324. [Google Scholar] [CrossRef]
  39. Lakhlef, H.; Bouabdallah, A.; Raynal, M.; Bourgeois, J. Agent-based broadcast protocols for wireless heterogeneous node networks. Comput. Commun. 2018, 115, 51–63. [Google Scholar] [CrossRef]
  40. García-Magariño, I.; Gutiérrez, C. Agent-oriented modeling and development of a system for crisis management. Expert Syst. Appl. 2013, 40, 6580–6592. [Google Scholar] [CrossRef]
  41. Lewis, F.; Xie, L.; Popa, D. Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
Figure 1. Topology of the multi-agent network in case 1.
Figure 2. Comparison between RSKCF and SKCF.
Figure 3. Performance of agents with random sleep scheme. (a–d) represent the estimation error of $x_k(1)$, $x_k(2)$, $x_k(3)$ and $x_k(4)$, respectively.
Figure 4. Upper bound of $g$ for a fixed $\rho^m = 0.7$.
Figure 5. Upper bound of $g$ for a fixed $\rho^c = 0.7$.
Figure 6. The triggering sequence.
Figure 7. Comparison between RSKCF and ETKCF with constraints.
Figure 8. Communication rate for different $Y_i$.
Figure 9. Network topology of 30 agents.
Figure 10. Comparison between Algorithms 1 and 2 for case 2.
Figure 11. TMSEE of position and velocity of Algorithm 1. (a) TMSEE of position; (b) TMSEE of velocity.
Figure 12. TMSEE of position and velocity of Algorithm 2. (a) TMSEE of position; (b) TMSEE of velocity.
