Article

Centralized Fusion Approach to the Estimation Problem with Multi-Packet Processing under Uncertainty in Outputs and Transmissions

by Raquel Caballero-Águila 1,*,†, Aurora Hermoso-Carazo 2,† and Josefa Linares-Pérez 2,†
1 Departamento de Estadística, Universidad de Jaén, Paraje Las Lagunillas, 23071 Jaén, Spain
2 Departamento de Estadística, Universidad de Granada, Avda. Fuentenueva, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2018, 18(8), 2697; https://doi.org/10.3390/s18082697
Submission received: 8 July 2018 / Revised: 12 August 2018 / Accepted: 13 August 2018 / Published: 16 August 2018
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract: This paper is concerned with the least-squares linear centralized estimation problem in multi-sensor network systems from measured outputs with uncertainties modeled by random parameter matrices. These measurements are transmitted to a central processor over different communication channels, and owing to the unreliability of the network, random one-step delays and packet dropouts are assumed to occur during the transmissions. In order to avoid network congestion, at each sampling time, each sensor’s data packet is transmitted just once, but due to the uncertainty of the transmissions, the processing center may receive either one packet, two packets, or nothing. Different white sequences of Bernoulli random variables are introduced to describe the observations used to update the estimators at each sampling time. To address the centralized estimation problem, augmented observation vectors are defined by accumulating the raw measurements from the different sensors, and when the current measurement of a sensor does not arrive on time, the corresponding component of the augmented measured output predictor is used as compensation in the estimator design. Through an innovation approach, centralized fusion estimators, including predictors, filters, and smoothers are obtained by recursive algorithms without requiring the signal evolution model. A numerical example is presented to show how uncertain systems with state-dependent multiplicative noise can be covered by the proposed model and how the estimation accuracy is influenced by both sensor uncertainties and transmission failures.

1. Introduction

1.1. Background and Motivation

With the active development of computer and communication technologies, the estimation problem in multi-sensor network stochastic systems has become an important research topic in the last few years. The significant advantages of multi-sensor systems in practical situations (low cost, remote operation, simple installation, and maintenance) are obvious, and have triggered wide use of these systems in many areas, such as target tracking, communications, the manufacturing industry, etc. Moreover, they usually provide more information than traditional communication systems with a single sensor alone. In spite of these advantages, a sensor network is not generally a reliable communication medium, and together with the communication capacity limitations (network bandwidths or service capabilities, among others), may yield different uncertainties during data transmission, such as missing measurements, random delays, and packet dropouts.
The development of sensor networks motivates the necessity to design fusion estimation algorithms which integrate the information from the different sensors and take these network-induced uncertainties into account to achieve a satisfactory performance. Depending on the way the information fusion is performed, there are two fundamental fusion techniques: the centralized fusion approach, in which the measurements from all sensors are sent to a central processor where the fusion is performed, and the distributed fusion approach, in which the measurements from each sensor are processed independently to obtain local estimators before being sent to the fusion center. The survey papers [1,2,3] can be examined for a wide view of these and other multi-sensor data fusion techniques.
As already indicated, the centralized fusion architecture is based on a fusion center that is able to receive, fuse, and process the data coming from every sensor; hence, centralized fusion estimation algorithms provide optimal signal estimators based on the measured outputs from all sensors and, consequently, when all of the sensors work correctly and the connections are perfect, they have the optimal estimation accuracy. In light of these considerations, it is not surprising that the study of the centralized and distributed fusion estimation problems in multi-sensor systems with network-induced uncertainties (in both the sensor measured outputs and the data transmission) has become an active research area in recent years. The estimation problem in systems with uncertainties in the sensor outputs (such as missing measurements, stochastic sensor gain degradation and fading measurements) is addressed in refs. [4,5,6], among others. In refs. [7,8,9,10], systems with failures during transmission (such as uncertain observations, random delays, and packet dropouts) are considered. Also, recent advances in the estimation, filtering, and fusion of networked systems with network-induced phenomena can be reviewed in refs. [11,12], where detailed overviews on this field are presented.
Since our aim in this paper is the design of centralized fusion estimators in multi-sensor network systems with measurements perturbed by random parameter matrices subject to random transmission failures (delays and packet dropouts), and multi-packet processing is considered, we discuss the research status of the estimation problem in networked systems with some of these characteristics.

1.2. Multi-Sensor Measured Outputs with Random Parameter Matrices

It is well known that in sensor-network environments, the measured outputs can be subject not only to additive noises, but also to multiplicative noise uncertainties due to several reasons, such as the presence of an intermittent sensor or hardware failure, natural or human-made interference, etc. For example, measurement equations that model the above-mentioned situations involving degradation of the sensor gain, or missing or fading measurements, must include multiplicative noises described by random variables taking values in $[0,1]$. So, random measurement parameter matrices provide a unified framework to address different simultaneous network-induced phenomena, and networked systems with random parameter matrices are used in different areas of science (see, e.g., refs. [13,14]). Also, systems with random sensor delays and/or multiple packet dropouts are transformed into equivalent observation models with random measurement matrices (see, e.g., ref. [15]). Hence, the estimation problem for systems with random parameter matrices has experienced increasing interest due to its diverse applications, and many estimation algorithms for such systems have been proposed over the last few years (see, e.g., refs. [16,17,18,19,20,21,22,23,24], and references therein).

1.3. Transmission Random Delays and Losses: Observation Predictor Compensation

Random delays and packet dropouts in the measurement transmissions are usually unavoidable and clearly deteriorate the performance of the estimators. For this reason, much effort has been made towards the study of the estimation problem to incorporate the effects of these transmission uncertainties, and several modifications of the standard estimation algorithms have been proposed (see, e.g., refs. [25,26,27], and references therein). In the estimation problem from measurements subject to transmission losses, when a packet drops out, the processor does not receive a valid measurement, and the most common compensation procedure is the hold-input mechanism, which consists of processing the last measurement that was successfully transmitted. In contrast to this approach, in ref. [28] the estimator of the lost measurement, based on the information received previously, is proposed as the compensator; this methodology significantly improves the estimations, since in cases of loss, all the previously received measurements are considered, instead of using only the last one. In view of this clear improvement of the estimators, the compensation strategy developed in ref. [28] has been adopted in some other recent investigations (see, e.g., refs. [29,30], and references therein).

1.4. Multi-Packet Processing

Another concern at the forefront of research in networked systems subject to random delays and packet dropouts is the number of packets that are processed to update the estimator at each moment, and different observation models have been proposed to deal with this issue. For example, to avoid losses as much as possible, in ref. [16] it is assumed that each packet is transmitted several times. In contrast, to avoid the network congestion that may be caused by multiple transmissions, in ref. [31], the packets are sent just once. These papers also assume that each packet is either received on time, delayed for, at most, one sampling time, or lost, and only one packet or no packets are processed to update the estimator at each moment. However, in refs. [32,33,34] two packets were able to arrive at each sampling time, in which case, both were used to update the estimators, thus improving their performance. In these papers, different packet dropout compensation procedures have been employed. The last available measurement was used as compensation in refs. [32,34], while the observation predictor was considered in ref. [34].

1.5. Addressed Problem and Paper Contributions

Based on the considerations made above, we were motivated to address the study of the centralized fusion estimation problem for multi-sensor networked systems perturbed by random parameter matrices. This problem is discussed under the following assumptions: (a) each sensor transmits its measured outputs to a central processor over a different communication channel, and random delays and packet dropouts are assumed to occur during the transmission; (b) in order to avoid network congestion, at each time instant, the different sensors send their packets only once, but due to the random transmission failures, the processing center can receive more than one packet; specifically, either one packet, two packets, or nothing; and (c) the measurement output predictor is used as a loss compensation strategy.
The main contributions and advantages of the current work are summarized as follows: (1) A unified framework to deal with different network-induced phenomena in the measured outputs, such as missing measurements or sensor gain degradation, is provided by the use of random measurement matrices. (2) Besides the uncertainties in the measured outputs, random one-step delays and packet dropouts are assumed to occur during the transmission at different rates at each sensor. Unlike the authors’ previous papers concerning random measurement matrices and random transmission delays and losses, where only one packet is processed to update the estimator at each moment, in this paper, the estimation algorithms are obtained under the assumption that either one packet, two packets, or nothing may arrive at each sampling time. (3) Concerning the compensation strategy, the use of the measurement predictor as the loss compensator, combined with the simultaneous processing of delayed packets, provides better estimators in comparison to other approaches where the last measurement successfully received is used to compensate the data packets and only one packet is processed to update the estimator at each moment. (4) The centralized fusion estimation problem is addressed using covariance information, without requiring full knowledge of the state-space model generating the signal process, thus providing a general approach to deal with different kinds of signal processes. (5) The innovation approach is used to obtain prediction, filtering, and fixed-point smoothing algorithms which are recursive and computationally simple, and thus are suitable for online implementation. In contrast to the approaches where the state augmentation technique is used, the proposed algorithms are deduced without making use of augmented systems; therefore, they have lower computational costs than those based on the augmentation method.

1.6. Paper Structure and Notation

The remaining sections of the paper are organized as follows. Section 2 presents the assumptions for the signal process, the mathematical models of the multi-sensor measured outputs with random parameter matrices, and the measurements received by the central processor with random delays and packet losses. Section 3 provides the main results of the research, namely, the covariance-based centralized least-squares linear prediction and filtering algorithm (Theorem 1) and fixed-point smoothing algorithm (Theorem 2). A numerical example is presented in Section 4 to show the performance of the proposed centralized estimators, and some concluding remarks are drawn in Section 5. The proofs of Theorems 1 and 2 are presented in the Appendix A and Appendix B, respectively.
The notation used throughout the paper is standard. $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the $n$-dimensional Euclidean space and the set of all $m\times n$ real matrices, respectively. $A^T$ and $A^{-1}$ denote the transpose and inverse of a matrix $A$, respectively. $I_n$ and $0_n$ denote the $n\times n$ identity matrix and zero matrix, respectively. $\mathbf{1}_n$ denotes the all-ones vector. Finally, $\otimes$ and $\circ$ are the Kronecker and Hadamard products, respectively, and $\delta_{k,s}$ is the Kronecker delta function.

2. Observation Model and Preliminaries

The aim of this section is to design a mathematical model to allow the observations to be processed in the least-squares (LS) linear estimation problem of discrete-time signal processes from multi-sensor noisy measurements transmitted through imperfect communication channels, where random one-step delays and packet dropouts may arise in the transmission process. More specifically, in order to avoid network congestion, at every sampling time, it is assumed that the measured outputs from each sensor, which are perturbed by random parameter matrices, are transmitted just once to a central processor, and due to random delays and losses, the processing center (PC) may receive, from each sensor, either one packet, two packets, or nothing at each time instant.
In this context, our goal is to find recursive algorithms for the LS linear prediction, filtering, and fixed-point smoothing problems using the centralized fusion method. We assume that only information about the mean and covariance functions of the signal process is available, and this information is specified in the following hypothesis:
(H1) 
The $n_x$-dimensional signal process $\{x_k\}_{k\ge 1}$ has zero mean, and its autocovariance function is expressed in a separable form, $E[x_k x_s^T] = A_k B_s^T$, $s\le k$, where $A_k, B_s \in \mathbb{R}^{n_x\times M}$ are known matrices.

2.1. Multi-Sensor Measured Outputs with Random Parameter Matrices

We assume that there are $m$ sensors which provide measured outputs of the signal process that are affected by random parameter matrices according to the following model:
$$z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \quad k\ge 1, \quad i=1,\ldots,m, \qquad (1)$$
where $z_k^{(i)}\in\mathbb{R}^{n_z}$ is the signal measurement in the $i$-th sensor at time $k$, $H_k^{(i)}$ are random parameter matrices, and $v_k^{(i)}$ are the measurement noises. We assume the following hypotheses for these processes:
(H2) 
$\{H_k^{(i)}\}_{k\ge 1}$, for $i=1,\ldots,m$, are independent sequences of independent random parameter matrices. For $p=1,\ldots,n_z$ and $q=1,\ldots,n_x$, we denote by $h_{pq}^{(i)}(k)$ the $(p,q)$-th entry of $H_k^{(i)}$, which has known first- and second-order moments, and $\bar H_k^{(i)} = E[H_k^{(i)}]$.
(H3) 
The measurement noises $\{v_k^{(i)}\}_{k\ge 1}$, $i=1,\ldots,m$, are zero-mean second-order white processes with $E[v_k^{(i)} v_s^{(j)T}] = R_k^{(ij)}\delta_{k,s}$.

2.2. Observation Model and Its Properties

To address the estimation problem with the centralized fusion method, the observations coming from the different sensors are gathered and jointly processed at each sampling time to yield the optimal signal estimator. So, the problem is addressed by considering, at each time $k\ge 1$, the vector constituted by the measurements received from all sensors; for this purpose, Equation (1) is combined to yield the following stacked measured output equation:
$$z_k = H_k x_k + v_k, \quad k\ge 1, \qquad (2)$$
where $z_k = \big(z_k^{(1)T},\ldots,z_k^{(m)T}\big)^T$, $H_k = \big(H_k^{(1)T},\ldots,H_k^{(m)T}\big)^T$, and $v_k = \big(v_k^{(1)T},\ldots,v_k^{(m)T}\big)^T$.
As already indicated, random one-step delays and packet dropouts occur during the transmissions to the PC. To model these failures, we introduced the following sequences of random variables:
  • $\{\gamma_k^{(i)}\}_{k\ge 1}$, $i=1,\ldots,m$, are sequences of Bernoulli random variables. For each $i=1,\ldots,m$, $\gamma_k^{(i)}=0$ means that the output at the current sampling time, $z_k^{(i)}$, arrives on time to be processed for the estimation, while $\gamma_k^{(i)}=1$ means that this output is either delayed or dropped out; and
  • $\{\psi_k^{(i)}\}_{k\ge 2}$, $i=1,\ldots,m$, are sequences of Bernoulli random variables. For each $i=1,\ldots,m$, $\psi_k^{(i)}=1$ means that $z_{k-1}^{(i)}$ is processed at sampling time $k$ (because it was one-step delayed), and $\psi_k^{(i)}=0$ means that $z_{k-1}^{(i)}$ is not processed at sampling time $k$ (because it was either received at time $k-1$ or dropped out). Since $\gamma_{k-1}^{(i)}=0$ implies $\psi_k^{(i)}=0$, it is clear that the value of $\psi_k^{(i)}$ is conditioned by that of $\gamma_{k-1}^{(i)}$.
For the previous sequences of Bernoulli variables, we assume the following hypothesis:
(H4) 
$\big\{\big(\gamma_k^{(i)},\psi_{k+1}^{(i)}\big)^T\big\}_{k\ge 1}$, $i=1,\ldots,m$, are independent sequences of independent random vectors, such that
  • $\{\gamma_k^{(i)}\}_{k\ge 1}$, $i=1,\ldots,m$, are sequences of Bernoulli random variables with known probabilities $P\big(\gamma_k^{(i)}=1\big)=\bar\gamma_k^{(i)}$, $k\ge 1$; and
  • $\{\psi_k^{(i)}\}_{k\ge 2}$, $i=1,\ldots,m$, are sequences of Bernoulli random variables such that the conditional probabilities $P\big(\psi_k^{(i)}=1 \mid \gamma_{k-1}^{(i)}=1\big)$ are known. Thus,
    $$\bar\psi_k^{(i)} \equiv P\big(\psi_k^{(i)}=1\big) = P\big(\psi_k^{(i)}=1 \mid \gamma_{k-1}^{(i)}=1\big)\,\bar\gamma_{k-1}^{(i)}, \quad k\ge 2.$$
Moreover, the mutual independence hypothesis of the involved processes is also necessary:
(H5) 
For $i=1,\ldots,m$, the signal process $\{x_k\}_{k\ge 1}$, the random matrices $\{H_k^{(i)}\}_{k\ge 1}$, the noises $\{v_k^{(i)}\}_{k\ge 1}$, and $\big\{\big(\gamma_k^{(i)},\psi_{k+1}^{(i)}\big)^T\big\}_{k\ge 1}$ are mutually independent.
Remark 1.
From hypothesis (H4), for $i,j=1,\ldots,m$, the following correlations are clear:
$$E\big[\gamma_k^{(i)}\gamma_k^{(j)}\big] = \begin{cases}\bar\gamma_k^{(i)}, & i=j,\\ \bar\gamma_k^{(i)}\bar\gamma_k^{(j)}, & i\ne j,\end{cases}\qquad
E\big[\psi_k^{(i)}\psi_k^{(j)}\big] = \begin{cases}\bar\psi_k^{(i)}, & i=j,\\ \bar\psi_k^{(i)}\bar\psi_k^{(j)}, & i\ne j,\end{cases}\qquad
E\big[\psi_{k+1}^{(i)}\big(1-\gamma_k^{(j)}\big)\big] = \begin{cases}0, & i=j,\\ \bar\psi_{k+1}^{(i)}\big(1-\bar\gamma_k^{(j)}\big), & i\ne j.\end{cases}\qquad (3)$$
In order to write jointly the sensor measurements to be processed at each sampling time, we define the matrices $\Gamma_k \equiv \mathrm{Diag}\big(\gamma_k^{(1)},\ldots,\gamma_k^{(m)}\big)\otimes I_{n_z}$ and $\Psi_k \equiv \mathrm{Diag}\big(\psi_k^{(1)},\ldots,\psi_k^{(m)}\big)\otimes I_{n_z}$, $k\ge 1$. From the definition of the variables $\gamma_k^{(i)}$, $i=1,\ldots,m$, it is clear that the non-zero components of the vector $(I_{mn_z}-\Gamma_k)z_k$ are those of $z_k$ that arrive on time at the PC and, consequently, those processed at time $k$. The other components of $z_k$ are delayed or lost, and as compensation, the corresponding components of the predictor $\hat z_{k/k-1}$, specified in $\Gamma_k\hat z_{k/k-1}$, are processed. Similarly, the non-zero components of $\Psi_k z_{k-1}$ are those of $z_{k-1}$ that are affected by one-step delay, and consequently, they are also processed at time $k$. Hence, the processed observations at each time are expressed by the following model:
$$y_k = \begin{pmatrix}(I_{mn_z}-\Gamma_k)z_k + \Gamma_k\hat z_{k/k-1}\\ \Psi_k z_{k-1}\end{pmatrix}, \quad k\ge 2; \qquad y_1 = \begin{pmatrix}(I_{mn_z}-\Gamma_1)z_1\\ 0\end{pmatrix}, \qquad (4)$$
or equivalently,
$$y_k = C_0\big[(I_{mn_z}-\Gamma_k)z_k + \Gamma_k\hat z_{k/k-1}\big] + C_1\Psi_k z_{k-1}, \quad k\ge 2; \qquad y_1 = C_0(I_{mn_z}-\Gamma_1)z_1, \qquad (5)$$
where $C_0 = (I_{mn_z}, 0_{mn_z})^T$ and $C_1 = (0_{mn_z}, I_{mn_z})^T$.
Remark 2.
For a better understanding of Model (4) for the measurements processed after the possible transmission one-step delays and losses, a single sensor is considered in the following comments. On the one hand, note that $\gamma_k=0$ means that the output at the current sampling time, $z_k$, arrives on time to be processed. Then, if $\psi_k=1$, the measurement processed at time $k$ is $y_k = \big(z_k^T, z_{k-1}^T\big)^T$, while if $\psi_k=0$, then $y_k = \big(z_k^T, 0^T\big)^T$. On the other hand, if $\gamma_k=1$, the output $z_k$ is either delayed or dropped out, and its predictor $\hat z_{k/k-1}$ is processed at time $k$. Then, if $\psi_k=1$, the measurement processed at time $k$ is $y_k = \big(\hat z_{k/k-1}^T, z_{k-1}^T\big)^T$, while if $\psi_k=0$, then $y_k = \big(\hat z_{k/k-1}^T, 0^T\big)^T$. Table 1 displays ten iterations of a specific simulation of packet transmission.
From Table 1, it can be observed that $z_1$, $z_3$, $z_6$, $z_7$, and $z_9$ arrive on time to be processed; $z_2$, $z_4$, and $z_8$ are one-step delayed; and $z_5$ and $z_{10}$ are lost. So, Model (4) describes possible one-step random transmission delays and packet dropouts in networked systems, where one or two packets can be processed for the estimation. Finally, note that the predictors $\hat z_{k/k-1}$, $k=2,4,5,8,10$, are used to compensate for the measurements that do not arrive on time.
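The single-sensor transmission mechanism discussed in this remark can be simulated directly. The sketch below is only an illustration (not the authors' code); the failure and delay probabilities are assumed values. It draws the Bernoulli variables $\gamma_k$ and $\psi_k$ consistently with hypothesis (H4) and classifies each packet as on-time, one-step delayed, or lost, in the spirit of Table 1.

```python
import random

def simulate_transmissions(n_steps, p_fail=0.4, p_delay_given_fail=0.5, seed=1):
    """Simulate the single-sensor transmission model.

    gamma[k] = 1 means z_k does not arrive on time (delayed or lost);
    psi[k]   = 1 means z_{k-1} arrives (one-step delayed) at time k.
    Returns gamma, psi and the fate of each packet.
    """
    rng = random.Random(seed)
    gamma = [1 if rng.random() < p_fail else 0 for _ in range(n_steps)]
    # psi_{k+1} can be 1 only when gamma_k = 1 (hypothesis (H4))
    psi = [0] + [1 if (gamma[k] == 1 and rng.random() < p_delay_given_fail) else 0
                 for k in range(n_steps - 1)]
    fate = []
    for k in range(n_steps):
        if gamma[k] == 0:
            fate.append('on time')
        elif k + 1 < n_steps and psi[k + 1] == 1:
            fate.append('delayed')
        else:
            fate.append('lost')
    return gamma, psi, fate

gamma, psi, fate = simulate_transmissions(10)
# structural property of the model: psi_{k+1} = 1 is only possible when gamma_k = 1
assert all(psi[k + 1] <= gamma[k] for k in range(9))
```

When a packet is classified as "on time" or "delayed", the corresponding raw measurement is processed at some instant; when it is "lost", the predictor $\hat z_{k/k-1}$ takes its place in the processed observation.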
The problem is then formulated as that of obtaining the LS linear estimator of the signal $x_k$ based on the observations $y_1,\ldots,y_L$ given in (5). Next, some statistical properties of the processes involved in the observation models (2) and (5), which are necessary to address the LS linear estimation problem, are specified:
(P1) 
$\{H_k\}_{k\ge 1}$ is a sequence of independent random matrices with known means $\bar H_k \equiv E[H_k] = \big(\bar H_k^{(1)T},\ldots,\bar H_k^{(m)T}\big)^T$, $k\ge 1$.
(P2) 
The sequence $\{v_k\}_{k\ge 1}$ is a zero-mean second-order process with $E[v_k v_s^T] = R_k\delta_{k,s}$, where $R_k = \big(R_k^{(ij)}\big)_{i,j=1,\ldots,m}$.
(P3) 
The random matrices $\big\{(\Gamma_k,\Psi_{k+1})\big\}_{k\ge 1}$ form a sequence of independent pairs, and their means are given by
$$\bar\Gamma_k \equiv E[\Gamma_k] = \mathrm{Diag}\big(\bar\gamma_k^{(1)},\ldots,\bar\gamma_k^{(m)}\big)\otimes I_{n_z}, \qquad \bar\Psi_k \equiv E[\Psi_k] = \mathrm{Diag}\big(\bar\psi_k^{(1)},\ldots,\bar\psi_k^{(m)}\big)\otimes I_{n_z}.$$
(P4) 
The signal process $\{x_k\}_{k\ge 1}$ and the processes $\{H_k\}_{k\ge 1}$, $\{v_k\}_{k\ge 1}$, and $\big\{(\Gamma_k,\Psi_{k+1})\big\}_{k\ge 1}$ are mutually independent.
(P5) 
$\{z_k\}_{k\ge 1}$ is a zero-mean process with covariance matrices $\Sigma^z_{k,s} \equiv E[z_k z_s^T]$, $s\le k$, which, from (P4), are given by
$$\Sigma^z_{k,s} = E\big[H_k A_k B_s^T H_s^T\big] + R_k\delta_{k,s}, \quad s\le k,$$
with $E[H_k A_k B_s^T H_s^T] = \bar H_k A_k B_s^T \bar H_s^T$ for $s<k$, and
$$E\big[H_k A_k B_k^T H_k^T\big] = \Big(E\big[H_k^{(i)} A_k B_k^T H_k^{(j)T}\big]\Big)_{i,j=1,\ldots,m},$$
where the $(p,q)$-th entries of the matrices $E[H_k^{(i)} A_k B_k^T H_k^{(j)T}]$ are given by
$$\Big(E\big[H_k^{(i)} A_k B_k^T H_k^{(j)T}\big]\Big)_{pq} = \sum_{a=1}^{n_x}\sum_{b=1}^{n_x} E\big[h_{pa}^{(i)}(k)\,h_{qb}^{(j)}(k)\big]\big(A_k B_k^T\big)_{ab}, \quad p,q=1,\ldots,n_z.$$
Remark 3.
By denoting $\gamma_k = \big(\gamma_k^{(1)},\ldots,\gamma_k^{(m)}\big)^T\otimes \mathbf{1}_{n_z}$ and $\psi_k = \big(\psi_k^{(1)},\ldots,\psi_k^{(m)}\big)^T\otimes \mathbf{1}_{n_z}$, it is clear that $K_k^{1-\gamma} \equiv E\big[(\mathbf{1}_{mn_z}-\gamma_k)(\mathbf{1}_{mn_z}-\gamma_k)^T\big]$ and $K_k^{\psi} \equiv E\big[\psi_k\psi_k^T\big]$ are known matrices whose entries are given in (3). Now, by defining
$$\xi_k = C_0\big(I_{mn_z}-\Gamma_k\big)z_k + C_1\Psi_k z_{k-1}, \quad k\ge 2; \qquad \xi_1 = C_0\big(I_{mn_z}-\Gamma_1\big)z_1, \qquad (6)$$
and taking the Hadamard product properties into account, it is easy to check that the covariance matrices $\Sigma_k^{\xi} \equiv E\big[\xi_k\xi_k^T\big]$ are given by
$$\Sigma_k^{\xi} = C_0\big(K_k^{1-\gamma}\circ\Sigma_k^z\big)C_0^T + C_1\big(K_k^{\psi}\circ\Sigma_{k-1}^z\big)C_1^T + C_0\big(I_{mn_z}-\bar\Gamma_k\big)\Sigma^z_{k,k-1}\bar\Psi_k C_1^T + C_1\bar\Psi_k\Sigma^{zT}_{k,k-1}\big(I_{mn_z}-\bar\Gamma_k\big)C_0^T, \quad k\ge 2; \qquad \Sigma_1^{\xi} = C_0\big(K_1^{1-\gamma}\circ\Sigma_1^z\big)C_0^T. \qquad (7)$$

3. Centralized Fusion Estimators

This section is concerned with the problem of obtaining recursive algorithms for the LS linear centralized fusion prediction, filtering, and fixed-point smoothing estimators. For this purpose, we used an innovation approach. Also, the estimation error covariance matrices, which measure the accuracy of the proposed estimators under the LS optimality criterion, were derived.
The centralized fusion structure for the considered networked systems with random uncertainties in the measured outputs and transmission is illustrated in Figure 1.

3.1. Innovation Technique

As indicated above, our aim was to obtain, by recursive algorithms, the optimal LS linear estimators $\hat x_{k/L}$ of the signal $x_k$ based on the measurements $y_1,\ldots,y_L$ given in (5). Since the estimator $\hat x_{k/L}$ is the orthogonal projection of the signal $x_k$ onto the linear space spanned by the nonorthogonal vectors $y_1,\ldots,y_L$, we used an innovation approach in which the observation process $\{y_k\}_{k\ge 1}$ was transformed into an equivalent one (the innovation process) of orthogonal vectors $\{\mu_k\}_{k\ge 1}$; the equivalence means that each set $\{\mu_1,\ldots,\mu_L\}$ spans the same linear subspace as $\{y_1,\ldots,y_L\}$.
The innovation at time $k$ is defined as $\mu_k = y_k - \hat y_{k/k-1}$, where $\hat y_{1/0} = E[y_1] = 0$ and, for $k\ge 2$, the one-stage linear predictor $\hat y_{k/k-1}$ of $y_k$ is the projection of $y_k$ onto the linear space generated by $\mu_1,\ldots,\mu_{k-1}$. Due to the orthogonality property of the innovations, and since the innovation process is uniquely determined by the observations, replacing the observation process by the innovation one yields the following general expression for the LS linear estimator of any vector $w_k$ based on the observations $y_1,\ldots,y_L$:
$$\hat w_{k/L} = \sum_{h=1}^{L} E\big[w_k\mu_h^T\big]\big(E\big[\mu_h\mu_h^T\big]\big)^{-1}\mu_h. \qquad (8)$$
This expression is derived from the uncorrelation property of the estimation error with all of the innovations, which is guaranteed by the Orthogonal Projection Lemma (OPL). As shown by (8), the first step to obtain the signal estimators is to find an explicit formula for the innovation or, equivalently, for the one-stage linear predictor of the observation.
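The equivalence between projecting onto the original observations and summing over the orthogonalized innovations, as in (8), can be checked numerically on a toy example. The sketch below is illustrative only: the covariance of a scalar $w$ and three scalar observations is an assumed random positive-definite matrix, and the innovations are built by Gram–Schmidt orthogonalization in the covariance inner product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy second-order description: scalar w and observations y_1, y_2, y_3
M = rng.standard_normal((4, 4))
cov = M @ M.T + 4 * np.eye(4)      # covariance of (w, y1, y2, y3), positive definite
S_wy = cov[0, 1:]                  # E[w y^T]
S_yy = cov[1:, 1:]                 # E[y y^T]

# Direct LS projection: w_hat = S_wy S_yy^{-1} y  (coefficients in terms of y)
coef_direct = np.linalg.solve(S_yy, S_wy)

# Innovation approach: mu_h = y_h - sum_{j<h} E[y_h mu_j] Pi_j^{-1} mu_j.
# Represent mu_h = T[h] @ y; then E[mu_h mu_j] = T[h] S_yy T[j]^T.
L = 3
T = np.eye(L)
for h in range(1, L):
    for j in range(h):
        Pi_j = T[j] @ S_yy @ T[j]
        T[h] -= (np.eye(L)[h] @ S_yy @ T[j] / Pi_j) * T[j]

# w_hat = sum_h E[w mu_h] Pi_h^{-1} mu_h, again expressed as coefficients of y
coef_innov = np.zeros(L)
for h in range(L):
    Pi_h = T[h] @ S_yy @ T[h]
    coef_innov += (S_wy @ T[h] / Pi_h) * T[h]

# Both routes give the same LS linear estimator
assert np.allclose(coef_direct, coef_innov)
```

Because the innovations span the same subspace as the observations, the two sets of coefficients coincide, which is precisely what makes formula (8) usable in place of a direct (and computationally heavier) batch projection.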

One-Stage Observation Predictor

To obtain $\hat y_{k/k-1}$, the projection of $y_k$ onto the linear space generated by $\mu_1,\ldots,\mu_{k-1}$, we used (5), noting that $\Psi_k$ and $H_{k-1}$ are correlated with the innovation $\mu_{k-1}$. So, to simplify the derivation of $\hat y_{k/k-1}$, the observations (5) were rewritten as follows:
$$y_k = C_0\big(I_{mn_z}-\Gamma_k\big)z_k + C_1\bar\Psi_k\bar H_{k-1}x_{k-1} + C_0\Gamma_k\hat z_{k/k-1} + V_{k-1}, \quad k\ge 2; \qquad V_k = C_1\Psi_{k+1}z_k - C_1\bar\Psi_{k+1}\bar H_k x_k, \quad k\ge 1. \qquad (9)$$
Taking into account that $\Psi_{k+1}$ and $H_k$ are independent of $\mu_h$ for $h\le k-1$, it is easy to see that $E[V_k\mu_h^T]=0$ for $h\le k-1$. So, from the general expression (8), we obtained $\hat V_{k/k} = \mathcal{V}_k\Pi_k^{-1}\mu_k$, $k\ge 1$, where $\mathcal{V}_k \equiv E[V_k\mu_k^T] = E[V_k y_k^T]$. Hence, according to the projection theory, $\hat y_{k/k-1}$ satisfies
$$\hat y_{k/k-1} = C_0\bar H_k\hat x_{k/k-1} + C_1\bar\Psi_k\bar H_{k-1}\hat x_{k-1/k-1} + \mathcal{V}_{k-1}\Pi_{k-1}^{-1}\mu_{k-1}, \quad k\ge 2. \qquad (10)$$
This expression for the one-stage observation predictor along with (8) for the LS linear estimators are the starting points to get the recursive prediction, filtering, and fixed-point smoothing algorithms.

3.2. Centralized Fusion Prediction, Filtering, and Smoothing Algorithms

The following theorem presents a recursive algorithm for the LS linear centralized fusion prediction and filtering estimators $\hat x_{k/L}$, $L\le k$, of the signal $x_k$ based on the observations $y_1,\ldots,y_L$ given in (5) or, equivalently, in (9).
Theorem 1.
Under hypotheses (H1)–(H5), the LS linear centralized predictors and filter $\hat x_{k/L}$, $L\le k$, and the corresponding error covariance matrices $\tilde\Sigma_{k/L} \equiv E\big[(x_k-\hat x_{k/L})(x_k-\hat x_{k/L})^T\big]$ are obtained by
$$\hat x_{k/L} = A_k e_L, \qquad \tilde\Sigma_{k/L} = A_k\big(B_k - A_k\Sigma_L^e\big)^T, \quad L\le k,$$
where the vectors $e_L$ and the matrices $\Sigma_L^e \equiv E\big[e_L e_L^T\big]$ are recursively obtained from
$$e_L = e_{L-1} + E_L\Pi_L^{-1}\mu_L, \quad L\ge 1; \qquad e_0 = 0,$$
$$\Sigma_L^e = \Sigma_{L-1}^e + E_L\Pi_L^{-1}E_L^T, \quad L\ge 1; \qquad \Sigma_0^e = 0,$$
and the matrices $E_L \equiv E\big[e_L\mu_L^T\big]$ satisfy
$$E_L = \mathcal{HB}_L^T - \Sigma_{L-1}^e\,\mathcal{HA}_L^T - E_{L-1}\Pi_{L-1}^{-1}\mathcal{V}_{L-1}^T, \quad L\ge 2; \qquad E_1 = \mathcal{HB}_1^T,$$
where $\mathcal{HD}_L$, for $D=A,B$, is defined by
$$\mathcal{HD}_L = C_0\big(I_{mn_z}-\bar\Gamma_L\big)\bar H_L D_L + C_1\bar\Psi_L\bar H_{L-1}D_{L-1}, \quad L\ge 2; \qquad \mathcal{HD}_1 = C_0\big(I_{mn_z}-\bar\Gamma_1\big)\bar H_1 D_1.$$
The innovations $\mu_L = y_L - \hat y_{L/L-1}$ are given by
$$\mu_L = y_L - \big[\mathcal{HA}_L + C_0\bar\Gamma_L\bar H_L A_L\big]e_{L-1} - \mathcal{V}_{L-1}\Pi_{L-1}^{-1}\mu_{L-1}, \quad L\ge 2; \qquad \mu_1 = y_1,$$
and the coefficients $\mathcal{V}_L \equiv E\big[V_L\mu_L^T\big]$ are obtained by
$$\mathcal{V}_L = C_1\Big[K_{L+1,L}^{\psi(1-\gamma)}\circ\big(\Sigma_L^z - \bar H_L A_L\Sigma_{L-1}^e A_L^T\bar H_L^T\big) - \bar\Psi_{L+1}\bar H_L A_L\big(B_L - A_L\Sigma_{L-1}^e\big)^T\bar H_L^T\big(I_{mn_z}-\bar\Gamma_L\big)\Big]C_0^T, \quad L\ge 1,$$
where $K_{L+1,L}^{\psi(1-\gamma)} \equiv E\big[\psi_{L+1}(\mathbf{1}_{mn_z}-\gamma_L)^T\big]$, whose entries are given in (3).
The innovation covariance matrices $\Pi_L \equiv E\big[\mu_L\mu_L^T\big]$ satisfy
$$\Pi_L = \Sigma_L^{\xi} + C_0\big[K_L^{\gamma}\circ\big(\bar H_L A_L\Sigma_{L-1}^e A_L^T\bar H_L^T\big)\big]C_0^T + \mathcal{O}_{L,L-1}A_L^T\bar H_L^T\bar\Gamma_L C_0^T - \mathcal{HA}_L\,\mathcal{O}_{L,L-1}^T - \mathcal{V}_{L-1}\Pi_{L-1}^{-1}\mathcal{Y}_{L,L-1}^T, \quad L\ge 2; \qquad \Pi_1 = \Sigma_1^{\xi},$$
where the matrices $\Sigma_L^{\xi}$ are given in (7), $K_L^{\gamma} \equiv E\big[\gamma_L\gamma_L^T\big]$, whose entries are obtained by (3), and the matrices $\mathcal{O}_{L,L-1} \equiv E\big[y_L e_{L-1}^T\big]$ and $\mathcal{Y}_{L,L-1} \equiv E\big[y_L\mu_{L-1}^T\big]$ are given by
$$\mathcal{O}_{L,L-1} = \big[\mathcal{HA}_L + C_0\bar\Gamma_L\bar H_L A_L\big]\Sigma_{L-1}^e + \mathcal{V}_{L-1}\Pi_{L-1}^{-1}E_{L-1}^T, \quad L\ge 2;$$
$$\mathcal{Y}_{L,L-1} = \big[\mathcal{HA}_L + C_0\bar\Gamma_L\bar H_L A_L\big]E_{L-1} + \mathcal{V}_{L-1}, \quad L\ge 2.$$
Proof. 
See Appendix A. ☐
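To make the structure of Theorem 1 concrete, the sketch below implements the core recursions for $e_L$ and $\Sigma_L^e$ and the estimator formulas $\hat x_{k/L} = A_k e_L$, $\tilde\Sigma_{k/L} = A_k(B_k - A_k\Sigma_L^e)^T$ for one update step. It is only a schematic illustration: the innovation $\mu_L$, its covariance $\Pi_L$, and the coefficient matrix $E_L$ are taken as given inputs (their computation from the remaining formulas of the theorem is omitted), and all matrices are assumed placeholder values, not the paper's example.

```python
import numpy as np

def fusion_update(e_prev, Sigma_e_prev, E_L, Pi_L, mu_L):
    """One step of the recursions of Theorem 1:
    e_L       = e_{L-1}       + E_L Pi_L^{-1} mu_L,
    Sigma^e_L = Sigma^e_{L-1} + E_L Pi_L^{-1} E_L^T."""
    gain = E_L @ np.linalg.inv(Pi_L)
    return e_prev + gain @ mu_L, Sigma_e_prev + gain @ E_L.T

def estimator_and_error(A_k, B_k, e_L, Sigma_e_L):
    """Estimator and error covariance:
    x_hat_{k/L} = A_k e_L,  Sigma_{k/L} = A_k (B_k - A_k Sigma^e_L)^T."""
    return A_k @ e_L, A_k @ (B_k - A_k @ Sigma_e_L).T

# Placeholder dimensions and data (assumed, illustrative): M = 2, innovation dim 2
e, Sigma_e = np.zeros(2), np.zeros((2, 2))
E_L = np.array([[0.5, 0.1], [0.0, 0.4]])
Pi_L = np.array([[2.0, 0.3], [0.3, 1.5]])
mu_L = np.array([0.7, -0.2])

e, Sigma_e = fusion_update(e, Sigma_e, E_L, Pi_L, mu_L)
A_k = np.array([[1.0, 0.0]])   # scalar signal: n_x = 1
B_k = np.array([[1.2, 0.3]])
x_hat, Sigma = estimator_and_error(A_k, B_k, e, Sigma_e)
assert x_hat.shape == (1,) and Sigma.shape == (1, 1)
```

The key design point visible here is that the recursion propagates the low-dimensional auxiliary quantities $e_L$ and $\Sigma_L^e$ (dimension $M$), rather than an augmented state, which is what keeps the computational cost low.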
Next, a recursive algorithm for the LS linear centralized fusion smoothers $\hat x_{k/k+h}$ at the fixed point $k$, for any $h\ge 1$, is established in the following theorem.
Theorem 2.
Under hypotheses (H1)–(H5), the LS linear centralized fixed-point smoothers $\hat x_{k/k+h}$ are calculated by
$$\hat x_{k/k+h} = \hat x_{k/k+h-1} + \mathcal{X}_{k,k+h}\Pi_{k+h}^{-1}\mu_{k+h}, \quad k\ge 1, \; h\ge 1,$$
whose initial condition is given by the centralized filter $\hat x_{k/k}$, and the matrices $\mathcal{X}_{k,k+h} \equiv E\big[x_k\mu_{k+h}^T\big]$ are obtained by
$$\mathcal{X}_{k,k+h} = \big(B_k - E_{k,k+h-1}\big)\mathcal{HA}_{k+h}^T - \mathcal{X}_{k,k+h-1}\Pi_{k+h-1}^{-1}\mathcal{V}_{k+h-1}^T, \quad h\ge 1; \qquad \mathcal{X}_{k,k} = A_k E_k.$$
The matrices $E_{k,k+h} \equiv E\big[x_k e_{k+h}^T\big]$ satisfy the following recursive formula:
$$E_{k,k+h} = E_{k,k+h-1} + \mathcal{X}_{k,k+h}\Pi_{k+h}^{-1}E_{k+h}^T, \quad h\ge 1; \qquad E_{k,k} = A_k\Sigma_k^e.$$
The fixed-point smoothing error covariance matrices $\tilde\Sigma_{k/k+h} \equiv E\big[(x_k-\hat x_{k/k+h})(x_k-\hat x_{k/k+h})^T\big]$ are calculated by
$$\tilde\Sigma_{k/k+h} = \tilde\Sigma_{k/k+h-1} - \mathcal{X}_{k,k+h}\Pi_{k+h}^{-1}\mathcal{X}_{k,k+h}^T, \quad k\ge 1, \; h\ge 1,$$
with the initial condition given by the filtering error covariance matrix $\tilde\Sigma_{k/k}$.
The filter $\hat x_{k/k}$, the innovations $\mu_{k+h}$, their covariance matrices $\tilde\Sigma_{k/k}$ and $\Pi_{k+h}$, and the matrices $E_{k+h}$ and $\Sigma_k^e$ are obtained from Theorem 1.
Proof. 
See Appendix B. ☐
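The fixed-point smoothing update of Theorem 2 has the same simple gain-times-innovation structure as the filter. The sketch below is a schematic illustration with assumed placeholder data: the quantities $\mathcal{X}_{k,k+h}$, $\Pi_{k+h}$, $\mu_{k+h}$, and $E_{k+h}$ are taken as given rather than computed from their own recursions.

```python
import numpy as np

def smoother_step(x_hat, Sigma_hat, E_k_prev, X, Pi, mu, E_new):
    """One fixed-point smoothing step (structure of Theorem 2):
    x_hat_{k/k+h} = x_hat_{k/k+h-1} + X Pi^{-1} mu
    E_{k,k+h}     = E_{k,k+h-1}     + X Pi^{-1} E_new^T
    Sigma_{k/k+h} = Sigma_{k/k+h-1} - X Pi^{-1} X^T"""
    gain = X @ np.linalg.inv(Pi)
    return x_hat + gain @ mu, E_k_prev + gain @ E_new.T, Sigma_hat - gain @ X.T

# Illustrative placeholder quantities (scalar signal, M = 1, innovation dim 2)
x_hat, Sigma_hat = np.array([0.4]), np.array([[0.9]])
E_k = np.array([[0.2]])              # E_{k,k+h-1}, dimension n_x x M
X = np.array([[0.3, 0.05]])          # X_{k,k+h}
Pi = np.array([[1.8, 0.2], [0.2, 1.1]])
mu = np.array([0.5, -0.1])
E_new = np.array([[0.25, 0.0]])      # E_{k+h}, dimension M x (innovation dim)

x_hat, E_k, Sigma_hat = smoother_step(x_hat, Sigma_hat, E_k, X, Pi, mu, E_new)
# each smoothing step subtracts a positive semidefinite term from the covariance
assert Sigma_hat[0, 0] < 0.9
```

The subtracted term $\mathcal{X}\Pi^{-1}\mathcal{X}^T$ is positive semidefinite, so each additional observation processed at the fixed point can only reduce (or leave unchanged) the estimation error covariance.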

4. Numerical Simulation Example

The performance of the proposed centralized filtering and fixed-point smoothing algorithms was analyzed in a numerical simulation example which also shows how some of the sensor uncertainties covered by the current measurement model (1) with random parameter matrices influence the accuracy of the estimators. Also, the effect of the random transmission delays and packet losses on the performance of the estimators was analyzed.

4.1. Signal Process

Consider a discrete-time scalar signal process generated by the following model with state-dependent multiplicative noise:
$$x_{k+1} = \big(0.9 + 0.01\varepsilon_k\big)x_k + w_k, \quad k\ge 0,$$
where $x_0$ is a standard Gaussian variable, and $\{w_k\}_{k\ge 0}$ and $\{\varepsilon_k\}_{k\ge 0}$ are zero-mean Gaussian white processes with unit variance. Assuming that $x_0$, $\{w_k\}_{k\ge 0}$, and $\{\varepsilon_k\}_{k\ge 0}$ are mutually independent, the signal covariance is given by $E[x_k x_s] = 0.9^{k-s}D_s$, $s\le k$, where $D_s = E[x_s^2]$ is obtained by
$$D_s = \big(0.9^2 + 0.01^2\big)D_{s-1} + 1, \quad s\ge 1; \qquad D_0 = 1.$$
Hence, hypothesis (H1) is satisfied with, for example, $A_k = 0.9^k$ and $B_s = 0.9^{-s}D_s$.
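The variance recursion and the separable factorization above can be checked directly; the short sketch below computes $D_s$ and verifies that $A_k B_s = 0.9^{k-s}D_s$ for $s \le k$.

```python
# Signal variance recursion: D_s = (0.9**2 + 0.01**2) * D_{s-1} + 1, with D_0 = 1
D = [1.0]
for s in range(1, 21):
    D.append((0.9**2 + 0.01**2) * D[-1] + 1.0)

# Separable factorization of the autocovariance (hypothesis (H1)):
# A_k = 0.9**k and B_s = 0.9**(-s) * D_s
def A(k): return 0.9**k
def B(s): return 0.9**(-s) * D[s]

# Check E[x_k x_s] = 0.9**(k-s) * D_s = A_k * B_s for all s <= k
for k in range(21):
    for s in range(k + 1):
        assert abs(A(k) * B(s) - 0.9**(k - s) * D[s]) < 1e-9
```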
This signal process has been considered in the current authors' previous papers and illustrates that hypothesis (H1) on the signal autocovariance function is satisfied by uncertain systems with state-dependent multiplicative noise. Situations where the system matrix in the state-space model is singular are also covered by hypothesis (H1) (see, e.g., ref. [9]). Hence, this hypothesis provides a unified general context to deal with different situations, thus avoiding the derivation of specific algorithms for each of them.
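The variance recursion above can be checked by simulation. The following sketch (our own naming, with a hypothetical seed and sample size) generates the signal process and compares the empirical second moments with the recursion $D_s=(0.9^2+0.01^2)D_{s-1}+1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_second_moments(n_steps, n_paths=200_000):
    """Monte-Carlo second moments of x_{k+1} = (0.9 + 0.01*eps_k)*x_k + w_k, x_0 ~ N(0,1)."""
    x = rng.standard_normal(n_paths)              # x_0
    moments = [np.mean(x**2)]
    for _ in range(n_steps):
        eps = rng.standard_normal(n_paths)        # multiplicative noise
        w = rng.standard_normal(n_paths)          # additive noise
        x = (0.9 + 0.01 * eps) * x + w
        moments.append(np.mean(x**2))
    return moments

def moment_recursion(n_steps):
    """Closed-form recursion D_s = (0.9^2 + 0.01^2) * D_{s-1} + 1, with D_0 = 1."""
    D = [1.0]
    for _ in range(n_steps):
        D.append((0.9**2 + 0.01**2) * D[-1] + 1.0)
    return D

empirical = simulate_second_moments(10)
theoretical = moment_recursion(10)
# Empirical moments should track the recursion closely for this many paths
assert all(abs(e - t) < 0.15 for e, t in zip(empirical, theoretical))
```

With $D_s$ at hand, the factorization $E[x_kx_s]=A_kB_s^T$ with $A_k=0.9^k$ and $B_s=0.9^{-s}D_s$ gives hypothesis (H1) directly.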

4.2. Sensor Measured Outputs

As in ref. [20], let us consider four sensors that provide scalar measurements with different random failures, which are described using random parameters according to the theoretical model (1). Namely, sensor 1 has continuous gain degradation, sensor 2 has discrete gain degradation, sensor 3 has missing measurements, and sensor 4 has both missing measurements and multiplicative noise. Specifically, the scalar measured outputs are described according to the model
$$z_k^{(i)}=H_k^{(i)}x_k+v_k^{(i)},\quad k\geq 1,\ i=1,2,3,4,$$
where the additive noises are defined as $v_k^{(i)}=c_i\eta_k$, with $c_1=1$, $c_2=0.5$, $c_3=c_4=0.75$, and $\{\eta_k\}_{k\geq 1}$ is a Gaussian white sequence with zero mean and variance 0.5. The additive noises are correlated, with $R_k^{(ij)}=0.5c_ic_j$, $k\geq 1$; $i,j=1,2,3,4$. The random measurement matrices are defined by $H_k^{(i)}=\theta_k^{(i)}C_k^{(i)}$, for $i=1,2,3$, where $C_k^{(1)}=0.82$, $C_k^{(2)}=0.75$, $C_k^{(3)}=0.74$, and $H_k^{(4)}=\theta_k^{(4)}(0.75+0.95\varphi_k)$, where the sequence $\{\varphi_k\}_{k\geq 1}$ is a standard Gaussian white process, and $\{\theta_k^{(i)}\}_{k\geq 1}$, $i=1,2,3,4$, are also white processes with the following time-invariant probability distributions:
  • $\{\theta_k^{(1)}\}_{k\geq 1}$ are uniformly distributed over $[0.2,0.7]$.
  • $P(\theta_k^{(2)}=0)=0.3$, $P(\theta_k^{(2)}=0.5)=0.3$, $P(\theta_k^{(2)}=1)=0.4$, $k\geq 1$.
  • For $i=3,4$, $\{\theta_k^{(i)}\}_{k\geq 1}$ are Bernoulli random variables with $P(\theta_k^{(i)}=1)=\overline{\theta}^{(i)}$, $k\geq 1$.
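The four failure mechanisms above can be sketched as follows (a minimal generator with our own names and an illustrative value for $\overline{\theta}^{(i)}$; it is not part of the paper's algorithms):

```python
import numpy as np

rng = np.random.default_rng(1)

C = [0.82, 0.75, 0.74]       # nominal gains C^{(i)} of sensors 1-3
c = [1.0, 0.5, 0.75, 0.75]   # additive-noise scale factors c_i
THETA_BAR = 0.5              # illustrative P(theta^{(i)} = 1) for sensors 3 and 4

def sensor_outputs(x):
    """One draw of the four scalar outputs z_k^{(i)} = H_k^{(i)} * x + c_i * eta_k."""
    eta = rng.normal(0.0, np.sqrt(0.5))       # common factor -> correlated additive noises
    theta1 = rng.uniform(0.2, 0.7)            # sensor 1: continuous gain degradation
    theta2 = rng.choice([0.0, 0.5, 1.0], p=[0.3, 0.3, 0.4])  # sensor 2: discrete degradation
    theta3 = rng.binomial(1, THETA_BAR)       # sensor 3: missing-measurement indicator
    theta4 = rng.binomial(1, THETA_BAR)       # sensor 4: missing measurements ...
    phi = rng.standard_normal()               # ... plus multiplicative noise
    H = [theta1 * C[0], theta2 * C[1], theta3 * C[2], theta4 * (0.75 + 0.95 * phi)]
    return [H[i] * x + c[i] * eta for i in range(4)]

z = sensor_outputs(1.0)   # the four outputs for signal value x_k = 1
```

Because every $v_k^{(i)}$ is a multiple of the same $\eta_k$, the cross-correlations $R_k^{(ij)}=0.5c_ic_j$ arise automatically.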

4.3. Model for the Measurements Processed

Now, according to the theoretical study, we assume that the sensor measurements, $y_k$, that are processed to update the estimators are modeled by
$$y_k=\begin{pmatrix}(I_4-\Gamma_k)z_k+\Gamma_k\hat{z}_{k/k-1}\\ \Psi_kz_{k-1}\end{pmatrix},\quad k\geq 2;\qquad y_1=\begin{pmatrix}(I_4-\Gamma_1)z_1\\ 0\end{pmatrix},$$
where $\Gamma_k=\mathrm{Diag}\big(\gamma_k^{(1)},\gamma_k^{(2)},\gamma_k^{(3)},\gamma_k^{(4)}\big)$ and $\Psi_k=\mathrm{Diag}\big(\psi_k^{(1)},\psi_k^{(2)},\psi_k^{(3)},\psi_k^{(4)}\big)$, and, for $i=1,2,3,4$, $\{\gamma_k^{(i)}\}_{k\geq 1}$ and $\{\psi_k^{(i)}\}_{k\geq 2}$ are sequences of independent Bernoulli random variables whose distributions are determined by the following probabilities:
  • $\overline{\gamma}^{(i)}\equiv P(\gamma_k^{(i)}=1)$, $k\geq 1$: probability that the measurement $z_k^{(i)}$ is not received at time $k$ because it is delayed or lost.
  • $\overline{\psi}_\gamma^{(i)}\equiv P(\psi_k^{(i)}=1\,/\,\gamma_{k-1}^{(i)}=1)$, $k\geq 2$: probability that the measurement $z_{k-1}^{(i)}$ is received at the current time $k$, conditioned on the fact that it was not received on time.
  • $\overline{\psi}^{(i)}\equiv P(\psi_k^{(i)}=1)=\overline{\psi}_\gamma^{(i)}\overline{\gamma}^{(i)}$, $k\geq 2$: probability that the measurement $z_{k-1}^{(i)}$ is received and processed at the current time $k$.
Finally, in order to apply the proposed algorithms, it was assumed that all the processes involved in the observation equations satisfy the independence hypotheses imposed on the theoretical model.
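The transmission mechanism above can be sketched with a minimal single-sensor simulation (our own naming and illustrative probabilities; the paper's estimators are not reproduced here). It also checks the stated relation $\overline{\psi}=\overline{\psi}_\gamma\,\overline{\gamma}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def transmission_pattern(n, gamma_bar, psi_gamma_bar):
    """Simulate the arrival pattern of one sensor's packets:
       gamma_k = 1 -> z_k is not received on time (delayed or lost);
       psi_k   = 1 -> the delayed z_{k-1} is received and processed at time k
                      (only possible when gamma_{k-1} = 1)."""
    gamma = rng.binomial(1, gamma_bar, size=n)
    psi = np.zeros(n, dtype=int)
    delayed = gamma[:-1] == 1                  # packets that missed their time slot
    psi[1:][delayed] = rng.binomial(1, psi_gamma_bar, size=int(delayed.sum()))
    return gamma, psi

gamma, psi = transmission_pattern(200_000, gamma_bar=0.3, psi_gamma_bar=0.6)
# Unconditional probability of processing a one-step-delayed packet:
# psi_bar = psi_gamma_bar * gamma_bar = 0.18
assert abs(psi.mean() - 0.6 * 0.3) < 0.01
```

At each time the processor thus sees the current packet, the previous (delayed) packet, both, or neither, which is exactly the multi-packet situation described in the theoretical model.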
To illustrate the feasibility and effectiveness of the proposed algorithms, they were implemented in MATLAB, and fifty iterations of the filtering and fixed-point smoothing algorithms were performed. The estimation accuracy was examined by analyzing the error variances for different probabilities of the Bernoulli variables modeling the random failures in sensors 3 and 4 ($\overline{\theta}^{(i)}$, $i=3,4$). Also, different values of the probabilities $\overline{\gamma}^{(i)}$, corresponding to the transmission uncertainties, and different conditional probabilities $\overline{\psi}_\gamma^{(i)}$, $i=1,2,3,4$, were considered in order to analyze the effect of these failures on the estimation accuracy.
In the study of the performance of the centralized estimators, they were compared with local ones, which were computed using only the measurements received from each single sensor. In that case, the measurements processed at each local processor can be described by
$$y_k^{(i)}=\begin{pmatrix}(1-\gamma_k^{(i)})z_k^{(i)}+\gamma_k^{(i)}\hat{z}_{k/k-1}^{(i)}\\ \psi_k^{(i)}z_{k-1}^{(i)}\end{pmatrix},\quad k\geq 2;\qquad y_1^{(i)}=\begin{pmatrix}(1-\gamma_1^{(i)})z_1^{(i)}\\ 0\end{pmatrix},\quad i=1,2,3,4,$$
where $\hat{z}_{k/k-1}^{(i)}$ is the one-stage predictor of $z_k^{(i)}$ based on $y_1^{(i)},\ldots,y_{k-1}^{(i)}$, and the corresponding local estimators are obtained via recursive algorithms similar to those in Theorems 1 and 2.

4.4. Performance of the Centralized Fusion Filtering and Smoothing Estimators

For $i=1,2,3,4$, we assumed that $\overline{\gamma}^{(i)}=\overline{\psi}_\gamma^{(i)}=0.1i$, and that the missing probabilities $1-\overline{\theta}^{(i)}$ had the same value in sensors $i=3,4$; namely, $\overline{\theta}^{(i)}=0.5$, $i=3,4$. The error variances of the local filtering estimators, together with both the centralized filtering and smoothing error variances, are displayed in Figure 2. This figure shows, on the one hand, that the error variances of the centralized fusion filtering estimators are significantly smaller than those of every local estimator. Consequently, in agreement with what is theoretically expected, the centralized fusion filter is more accurate than the local ones, as it is the optimal estimator based on the information from all the contributing sensors. On the other hand, Figure 2 also shows that, as more observations are considered in the estimation, the error variances become lower and, consequently, the performance of the centralized estimators improves. In other words, the smoothing estimators are more accurate than the filtering ones, and the accuracy of the smoothers at each fixed point $k$ improves as the number of available observations $k+h$ increases, although this improvement is practically imperceptible for $h>3$. Similar results were obtained for other values of the probabilities $\overline{\theta}^{(i)}$, $\overline{\gamma}^{(i)}$ and $\overline{\psi}_\gamma^{(i)}$.

4.5. Influence of the Missing Measurement Phenomenon in Sensors 3 and 4

Considering $\overline{\gamma}^{(i)}=\overline{\psi}_\gamma^{(i)}=0.1i$, $i=1,2,3,4$, again, in order to show the effect of the missing probabilities in sensors $i=3,4$, the centralized filtering error variances are displayed in Figure 3 for different values of these probabilities $1-\overline{\theta}^{(i)}$. Specifically, in Figure 3a, it is assumed that $\overline{\theta}^{(3)}=\overline{\theta}^{(4)}$, with values ranging from 0.5 to 0.9, whereas in Figure 3b, $\overline{\theta}^{(3)}=0.5$ and $\overline{\theta}^{(4)}$ varies from 0.6 to 0.9. These figures make clear that the performance of the centralized fusion filter is indeed influenced by the probabilities $\overline{\theta}^{(i)}$, $i=3,4$: it becomes poorer as $\overline{\theta}^{(i)}$ decreases, which means that, as expected, the lower the probability of missing measurements, the better the filter performs. Analogous results were obtained for the centralized smoothers and for other values of the probabilities.
Since the behavior of the error variances was analogous at all iterations, only the results at a specific iteration ($k=50$) are displayed in the following figures.

4.6. Influence of the Probabilities γ ¯ ( i ) and ψ ¯ γ ( i )

Considering $\overline{\theta}^{(i)}=0.5$, $i=3,4$, as in Figure 2, we now analyze the influence of the random delays and packet dropouts on the performance of the centralized filtering estimators. We assume that the four sensors have the same probability of measurements not arriving on time ($\overline{\gamma}^{(i)}=\overline{\gamma}$, $i=1,2,3,4$) and also the same conditional probability ($\overline{\psi}_\gamma^{(i)}=\overline{\psi}_\gamma$, $i=1,2,3,4$). Figure 4 displays the centralized filtering error variances at $k=50$ versus $\overline{\psi}_\gamma$, for $\overline{\gamma}$ varying from 0.1 to 0.9. This figure shows that, for each value of $\overline{\gamma}$, the error variances decrease as the conditional probability increases. This result was expected since, for a fixed arbitrary value of $\overline{\gamma}$, an increase in $\overline{\psi}_\gamma$ entails an increase in $\overline{\psi}$, the probability of processing, at the current time, the measurement delayed at the previous time. We also observed that the decrease in the error variances is more pronounced for higher values of $\overline{\gamma}$, which was also expected since $\overline{\psi}=\overline{\psi}_\gamma\overline{\gamma}$ and, hence, $\overline{\gamma}$ specifies the rate at which $\overline{\psi}$ increases with respect to $\overline{\psi}_\gamma$.
Similar results, and consequently analogous conclusions, were obtained for the smoothing estimators and for different values of the probabilities $\overline{\gamma}^{(i)}$ and $\overline{\psi}_\gamma^{(i)}$ at each sensor. By way of example, the smoothing error variances $\widehat{\Sigma}_{50/51}$ are displayed in Figure 5 for some of the situations considered above.

5. Concluding Remarks

In this paper, recursive algorithms were designed for the LS linear centralized fusion prediction, filtering, and smoothing problems in networked systems with random parameter matrices in the measured outputs. At each sampling time, every sensor sends its measured output to the fusion center, where the data packets coming from all the sensors are gathered. Every data packet is assumed to be transmitted just once, but random delays and packet dropouts can occur during this transmission, so the estimator may receive one packet, two packets, or nothing. When the current measurement of a sensor does not arrive on time, the corresponding component of the stacked measured-output predictor is used as a compensator in the design of the estimators.
Some of the main advantages of the current approach are the following:
  • The consideration of random measurement matrices provides a general framework to address different uncertainties, such as missing measurements, multiplicative noise, or sensor gain degradation, as has been illustrated by a simulation example.
  • The covariance-based approach used to design the estimation algorithms does not require knowledge of the state-space model, although it is also applicable to the classical formulation based on such a model.
  • In contrast to most estimation algorithms in the literature dealing with random delays and packet dropouts, the proposed ones do not require any state vector augmentation technique and are therefore computationally simpler.
  • The current estimation algorithms were designed under the LS optimality criterion by an innovation approach, and no particular structure of the estimators is required.

Author Contributions

All authors have contributed equally to this work. R.C.-Á., A.H.-C. and J.L.-P. provided original ideas for the proposed model and they all collaborated in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation results; and the paper was also written and reviewed cooperatively.

Funding

This research was supported by Ministerio de Economía, Industria y Competitividad, Agencia Estatal de Investigación and Fondo Europeo de Desarrollo Regional FEDER (grant no. MTM2017-84199-P).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
Based on the general expression (8), to obtain the LS linear estimators $\hat{x}_{k/L}$, $L\geq k$, it is necessary to calculate the coefficients
$$\mathcal{X}_{k,h}\equiv E[x_k\mu_h^T]=E[x_ky_h^T]-E[x_k\hat{y}_{h/h-1}^T],\quad h\leq k.$$
Using (5) for $y_h$, the independence hypotheses and the factorization of the signal covariance in (H1) lead to $E[x_ky_h^T]=A_k\overline{HB}_h^T+E[x_k\hat{x}_{h/h-1}^T]\overline{H}_h^T\overline{\Gamma}_hC_0^T$, $2\leq h\leq k$, and $E[x_ky_1^T]=A_k\overline{HB}_1^T$, with $\overline{HB}_h$ given in (15). Now, using expression (10) for $\hat{y}_{h/h-1}$, together with (8) for $\hat{x}_{h/h-1}$ and $\hat{x}_{h-1/h-1}$, the coefficients $\mathcal{X}_{k,h}$, $1\leq h\leq k$, are expressed as follows:
$$\mathcal{X}_{k,h}=A_k\overline{HB}_h^T-\sum_{j=1}^{h-1}\mathcal{X}_{k,j}\Pi_j^{-1}\Big(\mathcal{X}_{h,j}^T\overline{H}_h^T(I_{mn_z}-\overline{\Gamma}_h)C_0^T+\mathcal{X}_{h-1,j}^T\overline{H}_{h-1}^T\overline{\Psi}_hC_1^T\Big)-\mathcal{X}_{k,h-1}\Pi_{h-1}^{-1}\mathcal{V}_{h-1}^T,\quad 2\leq h\leq k;\qquad \mathcal{X}_{k,1}=A_k\overline{HB}_1^T,$$
which guarantees that $\mathcal{X}_{k,h}=A_kE_h$, $1\leq h\leq k$, with $E_h$ given by
$$E_h=\overline{HB}_h^T-\sum_{j=1}^{h-1}E_j\Pi_j^{-1}E_j^T\,\overline{HA}_h^T-E_{h-1}\Pi_{h-1}^{-1}\mathcal{V}_{h-1}^T,\quad h\geq 2;\qquad E_1=\overline{HB}_1^T.$$
Then, by defining $e_L=\sum_{h=1}^{L}E_h\Pi_h^{-1}\mu_h$ and $\Sigma_L^e\equiv E[e_Le_L^T]=\sum_{h=1}^{L}E_h\Pi_h^{-1}E_h^T$, for $L\geq 1$, and taking into account that $E[e_L\mu_h^T]=E_h$, for $h\leq L$, it is easy to obtain expressions (11)–(16).
Next, expression (17) for $\mathcal{V}_L=E[V_L\mu_L^T]=E[V_Ly_L^T]$ is derived. Using (9) for $V_L$, we write $\mathcal{V}_L=C_1\big(E[\Psi_{L+1}z_Ly_L^T]-\overline{\Psi}_{L+1}\overline{H}_LE[x_Ly_L^T]\big)$, and we calculate each of these expectations:
  • From (5), we write $y_L=C_0(I_{mn_z}-\Gamma_L)(z_L-\hat{z}_{L/L-1})+C_1\Psi_Lz_{L-1}+C_0\hat{z}_{L/L-1}$, and from the independence properties, it is clear that
    $$E[\Psi_{L+1}z_Ly_L^T]=E\big[\Psi_{L+1}z_L(z_L-\hat{z}_{L/L-1})^T(I_{mn_z}-\Gamma_L)\big]C_0^T+\overline{\Psi}_{L+1}E[z_Lz_{L-1}^T]\overline{\Psi}_LC_1^T+\overline{\Psi}_{L+1}E[z_L\hat{z}_{L/L-1}^T]C_0^T.$$
    Now, from the Hadamard product properties, we obtain $E\big[\Psi_{L+1}z_L(z_L-\hat{z}_{L/L-1})^T(I_{mn_z}-\Gamma_L)\big]=K_{L+1,L}^{\psi(1-\gamma)}\circ\big(\Sigma_L^z-E[z_L\hat{z}_{L/L-1}^T]\big)$; from property (P5), $E[z_Lz_{L-1}^T]=\overline{H}_LA_LB_{L-1}^T\overline{H}_{L-1}^T$; and, using the OPL and (2), $E[z_L\hat{z}_{L/L-1}^T]=E[\hat{z}_{L/L-1}\hat{z}_{L/L-1}^T]=\overline{H}_LE[\hat{x}_{L/L-1}\hat{x}_{L/L-1}^T]\overline{H}_L^T$. Then, using (11) and the definition of $\Sigma_L^e$, the following expression is obtained:
    $$E[\Psi_{L+1}z_Ly_L^T]=\Big(K_{L+1,L}^{\psi(1-\gamma)}\circ\big(\Sigma_L^z-\overline{H}_LA_L\Sigma_{L-1}^eA_L^T\overline{H}_L^T\big)\Big)C_0^T+\overline{\Psi}_{L+1}\overline{H}_LA_L\big(B_{L-1}^T\overline{H}_{L-1}^T\overline{\Psi}_LC_1^T+\Sigma_{L-1}^eA_L^T\overline{H}_L^TC_0^T\big).$$
  • Using (2) and (5) again and the OPL, together with hypothesis (H1) and (15) for $\overline{HB}_L$, we have
    $$\overline{\Psi}_{L+1}\overline{H}_LE[x_Ly_L^T]=\overline{\Psi}_{L+1}\overline{H}_LA_L\big(\overline{HB}_L^T+\Sigma_{L-1}^eA_L^T\overline{H}_L^T\overline{\Gamma}_LC_0^T\big).$$
From the above items, and using (15) in the form $B_{L-1}^T\overline{H}_{L-1}^T\overline{\Psi}_LC_1^T-\overline{HB}_L^T=-B_L^T\overline{H}_L^T(I_{mn_z}-\overline{\Gamma}_L)C_0^T$, expression (17) is deduced with no difficulty.
To obtain expression (18) for $\Pi_L=E[\mu_L\mu_L^T]$, we apply the OPL to write $\Pi_L=E[y_Ly_L^T]-E[\hat{y}_{L/L-1}\hat{y}_{L/L-1}^T]$.
On the one hand, using the OPL again, we express $E[\hat{y}_{L/L-1}\hat{y}_{L/L-1}^T]=E[\hat{y}_{L/L-1}y_L^T]$ which, taking (16) into account for $\hat{y}_{L/L-1}$, together with the definitions of $\mathcal{O}_{L,L-1}$ and $\mathcal{Y}_{L,L-1}$, clearly satisfies
$$E[\hat{y}_{L/L-1}y_L^T]=\big(\overline{HA}_L+C_0\overline{\Gamma}_L\overline{H}_LA_L\big)\mathcal{O}_{L,L-1}^T+\mathcal{V}_{L-1}\Pi_{L-1}^{-1}\mathcal{Y}_{L,L-1}^T,\quad L\geq 2.$$
On the other hand, to obtain $E[y_Ly_L^T]$, we use (9) and (6) to write $y_L=\xi_L+C_0\Gamma_L\hat{z}_{L/L-1}$, and since $\hat{z}_{L/L-1}=\overline{H}_LA_Le_{L-1}$, the following expression is obtained from the definition of $\Sigma_L^e$ after some manipulations:
$$E[y_Ly_L^T]=\Sigma_L^\xi-C_0\big(K_L^\gamma\circ(\overline{H}_LA_L\Sigma_{L-1}^eA_L^T\overline{H}_L^T)\big)C_0^T+C_0\overline{\Gamma}_L\overline{H}_LA_L\mathcal{O}_{L,L-1}^T+\mathcal{O}_{L,L-1}A_L^T\overline{H}_L^T\overline{\Gamma}_LC_0^T,\quad L\geq 2.$$
From the above expectations, again after some manipulations, expression (18) for $\Pi_L$ is obtained.
To complete the proof, expression (19) for $\mathcal{O}_{L,L-1}=E[y_Le_{L-1}^T]$ and $\mathcal{Y}_{L,L-1}=E[y_L\mu_{L-1}^T]$ is derived. Using the OPL, we have $E[y_Le_{L-1}^T]=E[\hat{y}_{L/L-1}e_{L-1}^T]$ and $E[y_L\mu_{L-1}^T]=E[\hat{y}_{L/L-1}\mu_{L-1}^T]$, and from (16) for $\hat{y}_{L/L-1}$, expression (19) is straightforward. Then, the proof of Theorem 1 is complete.

Appendix B

Proof of Theorem 2.
Using (8), the signal estimators are written as $\hat{x}_{k/k+h}=\sum_{l=1}^{k+h}\mathcal{X}_{k,l}\Pi_l^{-1}\mu_l$, $h\geq 1$, from which it is immediately deduced that the smoothers are recursively obtained by (20) from the filter $\hat{x}_{k/k}$.
Taking into account that $\mathcal{X}_{k,k+h}=E[x_ky_{k+h}^T]-E[x_k\hat{y}_{k+h/k+h-1}^T]$, $h\geq 1$, the recursive relation (21) is derived by just calculating each of these expectations as follows:
  • Hypothesis (H1), together with (15), leads to
    $$E[x_ky_{k+h}^T]=B_k\overline{HA}_{k+h}^T+E[x_ke_{k+h-1}^T]A_{k+h}^T\overline{H}_{k+h}^T\overline{\Gamma}_{k+h}C_0^T,\quad h\geq 1.$$
  • From (16) for $\hat{y}_{k+h/k+h-1}$, it is clear that
    $$E[x_k\hat{y}_{k+h/k+h-1}^T]=E[x_ke_{k+h-1}^T]\big(\overline{HA}_{k+h}+C_0\overline{\Gamma}_{k+h}\overline{H}_{k+h}A_{k+h}\big)^T+\mathcal{X}_{k,k+h-1}\Pi_{k+h-1}^{-1}\mathcal{V}_{k+h-1}^T,\quad h\geq 1.$$
From the above items, (21) is proven simply by denoting $E_{k,k+h}=E[x_ke_{k+h}^T]$, whose recursive expression (22) is also obvious by using (12) for $e_{k+h}$.
Finally, using (20) for the smoothers $\hat{x}_{k/k+h}$, the recursive formula for the fixed-point smoothing error covariance matrices $\widehat{\Sigma}_{k/k+h}$ is immediately deduced. ☐

References

  1. Castanedo, F. A review of data fusion techniques. Sci. World J. 2013, 2013, 704504.
  2. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inform. Fusion 2013, 14, 28–44.
  3. Bakr, M.; Lee, S. Distributed multisensor data fusion under unknown correlation and data inconsistency. Sensors 2017, 17, 2472.
  4. Lin, H.; Sun, S. State estimation for a class of non-uniform sampling systems with missing measurements. Sensors 2016, 16, 1155.
  5. Liu, Y.; Wang, Z.; He, X.; Zhou, D.H. Minimum-variance recursive filtering over sensor networks with stochastic sensor gain degradation: Algorithms and performance analysis. IEEE Trans. Control Netw. Syst. 2016, 3, 265–274.
  6. Li, W.; Jia, Y.; Du, J. Distributed filtering for discrete-time linear systems with fading measurements and time-correlated noise. Digit. Signal Process. 2017, 60, 211–219.
  7. Ma, J.; Sun, S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations. IEEE Sens. J. 2013, 13, 1228–1235.
  8. Chen, B.; Zhang, W.; Yu, L. Distributed fusion estimation with missing measurements, random transmission delays and packet dropouts. IEEE Trans. Autom. Control 2014, 59, 1961–1967.
  9. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Fusion estimation using measured outputs with random parameter matrices subject to random delays and packet dropouts. Signal Process. 2016, 127, 12–23.
  10. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Least-squares estimation in sensor networks with noise correlation and multiple random failures in transmission. Math. Probl. Eng. 2017, 2017, 1570719.
  11. Hu, J.; Wang, Z.; Chen, D.; Alsaadi, F.E. Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects. Inform. Fusion 2016, 31, 65–75.
  12. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inform. Fusion 2017, 38, 122–134.
  13. Luo, Y.; Zhu, Y.; Luo, D.; Zhou, J.; Song, E.; Wang, D. Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications. Sensors 2008, 8, 8086–8103.
  14. Shen, X.J.; Luo, Y.T.; Zhu, Y.M.; Song, E.B. Globally optimal distributed Kalman filtering fusion. Sci. China Inf. Sci. 2012, 55, 512–529.
  15. Wang, S.; Fang, H.; Tian, X. Minimum variance estimation for linear uncertain systems with one-step correlated noises and incomplete measurements. Digit. Signal Process. 2016, 49, 126–136.
  16. Hu, J.; Wang, Z.; Gao, H. Recursive filtering with random parameter matrices, multiple fading measurements and correlated noises. Automatica 2013, 49, 3440–3448.
  17. Linares-Pérez, J.; Caballero-Águila, R.; García-Garrido, I. Optimal linear filter design for systems with correlation in the measurement matrices and noises: Recursive algorithm and applications. Int. J. Syst. Sci. 2014, 45, 1548–1562.
  18. Yang, Y.; Liang, Y.; Pan, Q.; Qin, Y.; Yang, F. Distributed fusion estimation with square-root array implementation for Markovian jump linear systems with random parameter matrices and cross-correlated noises. Inf. Sci. 2016, 370–371, 446–462.
  19. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Networked fusion filtering from outputs with stochastic uncertainties and correlated random transmission delays. Sensors 2016, 16, 847.
  20. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Optimal fusion estimation with multi-step random delays and losses in transmission. Sensors 2017, 17, 1151.
  21. Sun, S.; Tian, T.; Lin, H. State estimators for systems with random parameter matrices, stochastic nonlinearities, fading measurements and correlated noises. Inf. Sci. 2017, 397–398, 118–136.
  22. Wang, W.; Zhou, J. Optimal linear filtering design for discrete time systems with cross-correlated stochastic parameter matrices and noises. IET Control Theory Appl. 2017, 11, 3353–3362.
  23. Han, F.; Dong, H.; Wang, Z.; Li, G.; Alsaadi, F.E. Improved Tobit Kalman filtering for systems with random parameters via conditional expectation. Signal Process. 2018, 147, 35–45.
  24. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.; Wang, Z. A new approach to distributed fusion filtering for networked systems with random parameter matrices and correlated noises. Inform. Fusion 2019, 45, 324–332.
  25. Guo, Y. Switched filtering for networked systems with multiple packet dropouts. J. Franklin Inst. 2017, 354, 3134–3151.
  26. Yang, C.; Deng, Z. Robust time-varying Kalman estimators for systems with packet dropouts and uncertain-variance multiplicative and linearly correlated additive white noises. Int. J. Adapt. Control Signal Process. 2018, 32, 147–169.
  27. Xing, Z.; Xia, Y.; Yan, L.; Lu, K.; Gong, Q. Multisensor distributed weighted Kalman filter fusion with network delays, stochastic uncertainties, autocorrelated, and cross-correlated noises. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 716–726.
  28. Silva, E.I.; Solis, M.A. An alternative look at the constant-gain Kalman filter for state estimation over erasure channels. IEEE Trans. Autom. Control 2013, 58, 3259–3265.
  29. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts. Int. J. Syst. Sci. 2017, 48, 1805–1817.
  30. Ding, J.; Sun, S.; Ma, J.; Li, N. Fusion estimation for multi-sensor networked systems with packet loss compensation. Inform. Fusion 2019, 45, 138–149.
  31. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Covariance-based fusion filtering for networked systems with random transmission delays and non-consecutive losses. Int. J. Gen. Syst. 2017, 46, 752–771.
  32. Zhu, C.; Xia, Y.; Xie, L.; Yan, L. Optimal linear estimation for systems with transmission delays and packet dropouts. IET Signal Process. 2013, 7, 814–823.
  33. Ma, J.; Sun, S. Linear estimators for networked systems with one-step random delay and multiple packet dropouts based on prediction compensation. IET Signal Process. 2017, 11, 197–204.
  34. Ma, J.; Sun, S. Distributed fusion filter for networked stochastic uncertain systems with transmission delays and packet dropouts. Signal Process. 2017, 130, 268–278.
Figure 1. Centralized fusion filtering estimation with random uncertainties in measured outputs and transmission.
Figure 2. Error variance comparison of the local filters and centralized fusion filter and smoothers.
Figure 3. Centralized fusion filtering error variances for different values of $\overline{\theta}^{(3)}$ and $\overline{\theta}^{(4)}$: (a) $\overline{\theta}^{(3)}=\overline{\theta}^{(4)}$ from 0.5 to 0.9; (b) $\overline{\theta}^{(3)}=0.5$ and $\overline{\theta}^{(4)}$ from 0.6 to 0.9.
Figure 4. Centralized filtering error variances at $k=50$ versus $\overline{\psi}_\gamma$, for $\overline{\gamma}$ varying from 0.1 to 0.9, when $\overline{\theta}^{(i)}=0.5$, $i=3,4$.
Figure 5. Centralized smoothing error variances ($\widehat{\Sigma}_{50/51}$) when $\overline{\theta}^{(i)}=0.5$, $i=3,4$, for different values of the probabilities $\overline{\gamma}^{(i)}$ and $\overline{\psi}_\gamma^{(i)}$, $i=1,2,3,4$: (a) versus $\overline{\psi}_\gamma$, for $\overline{\gamma}^{(i)}=0.1i,\,0.15i,\,0.2i$; (b) versus $\overline{\gamma}$, for $\overline{\psi}_\gamma^{(i)}=0.1i,\,0.15i,\,0.2i$; (c) versus $\overline{\gamma}^{(1)}=\overline{\gamma}^{(2)}$, for $\overline{\psi}_\gamma^{(i)}=0.1i$ and different values of $\overline{\gamma}^{(3)}$ and $\overline{\gamma}^{(4)}$; and (d) versus $\overline{\psi}_\gamma^{(3)}=\overline{\psi}_\gamma^{(4)}$, for $\overline{\gamma}^{(i)}=0.1i$ and different values of $\overline{\psi}_\gamma^{(1)}$ and $\overline{\psi}_\gamma^{(2)}$.
Table 1. Measurements processed to update the estimators.
Time $k$: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
$\gamma_k$: 0, 1, 0, 1, 1, 0, 0, 1, 0, 1
$\psi_k$: —, 0, 1, 0, 1, 0, 0, 0, 1, 0
$y_k$: $(z_1,0)$, $(\hat{z}_{2/1},0)$, $(z_3,z_2)$, $(\hat{z}_{4/3},0)$, $(\hat{z}_{5/4},z_4)$, $(z_6,0)$, $(z_7,0)$, $(\hat{z}_{8/7},0)$, $(z_9,z_8)$, $(\hat{z}_{10/9},0)$

Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Centralized Fusion Approach to the Estimation Problem with Multi-Packet Processing under Uncertainty in Outputs and Transmissions. Sensors 2018, 18, 2697. https://doi.org/10.3390/s18082697

