Fusion Estimation from Multisensor Observations with Multiplicative Noises and Correlated Random Delays in Transmission

In this paper, the information fusion estimation problem is investigated for a class of multisensor linear systems affected by different kinds of stochastic uncertainties, using both the distributed and the centralized fusion methodologies. It is assumed that the measured outputs are perturbed by one-step autocorrelated and cross-correlated additive noises, as well as by stochastic uncertainties caused by multiplicative noises and randomly missing measurements in the sensor outputs. At each sampling time, every sensor output is sent to a local processor and, due to transmission failures, one-step correlated random delays may occur. Using only covariance information, without requiring the evolution model of the signal process, a local least-squares (LS) filter based on the measurements received from each sensor is designed by an innovation approach. All these local filters are then fused to generate an optimal distributed fusion filter as a matrix-weighted linear combination, under the LS optimality criterion. Moreover, a recursive algorithm for the centralized fusion filter is also proposed, and the accuracy of the proposed estimators, measured by the estimation error covariances, is analyzed by a simulation example.


Introduction
Over the past decades, the use of sensor networks has experienced rapid development, encouraged by the wide range of potential applications in many areas, since such networks usually provide more information than traditional single-sensor systems. Consequently, important advances have been achieved concerning the estimation problem in networked stochastic systems and the design of multisensor fusion techniques [1]. Many of the existing fusion estimation algorithms are related to conventional systems (see e.g., [2][3][4][5], and the references therein), where the sensor measured outputs are affected only by additive noises and each sensor transmits its outputs to the fusion center over perfect connections.
However, in a network context, restrictions of the physical equipment or uncertainties in the external environment inevitably cause problems in both the sensor outputs and their transmission, which can dramatically worsen the quality of fusion estimators designed without considering these drawbacks [6]. Multiplicative noise uncertainties and missing measurements are among the random phenomena that usually arise in the sensor measured outputs and motivate the design of new estimation algorithms (see e.g., [7][8][9][10][11], and references therein).
Furthermore, when the sensors send their measurements to the processing center via a communication network, additional network-induced phenomena, such as random delays or measurement losses, inevitably arise during the transmission process. These phenomena can degrade the performance of the fusion estimators and motivate the design of fusion estimation algorithms for systems with one (or even several) of the aforementioned uncertainties (see e.g., [12][13][14][15][16][17][18][19][20][21][22][23][24], and references therein). All the above-cited papers on signal estimation with random transmission delays assume independent random delays at each sensor and mutually independent delays between the different sensors; in [25], this restriction was weakened and random delays featuring correlation at consecutive sampling times were considered, thus covering some common practical situations (e.g., those in which two consecutive observations cannot be delayed).
It should also be noted that, in many real-world problems, the measurement noises are usually correlated; this occurs, for example, when all the sensors operate in the same noisy environment or when the sensor noises are state-dependent. For this reason, the fairly conservative assumption of uncorrelated measurement noises is weakened in many of the aforementioned research papers on signal estimation. Namely, the optimal Kalman filtering fusion problem in systems with noise cross-correlation at consecutive sampling times is addressed, for example, in [19]; also, under different types of noise correlation, centralized and distributed fusion algorithms are obtained in [11,20] for systems with multiplicative noise, and in [7] for systems where the measurements might provide only partial information about the signal.
In this paper, covariance information is used to address the distributed and centralized fusion estimation problems for a class of linear networked stochastic systems with multiplicative noises and missing measurements in the sensor measured outputs, subject to random one-step transmission delays. It is assumed that the additive sensor measurement noises are one-step autocorrelated and cross-correlated, and that the Bernoulli variables describing the measurement delays at the different sensors are correlated at the same and consecutive sampling times. As in [25], correlated random delays in the transmission are assumed, with different delay rates at each sensor; however, the proposed observation model is more general than that considered in [25] since, besides the random delays in the transmission, multiplicative noises and the missing-measurement phenomenon in the measured outputs are considered, and cross-correlation between the additive noises of the different sensors is also taken into account. Unlike [7][8][9][10][11], where multiplicative noise uncertainties and/or missing measurements are considered in the sensor measured outputs, in this paper random delays in the transmission are also assumed to exist. Hence, a unified framework is provided for dealing simultaneously with missing measurements, uncertainties caused by multiplicative noises, and random delays in the transmission, so the proposed fusion estimators have wide applicability. Recursive algorithms for the optimal linear distributed and centralized filters under the least-squares (LS) criterion are derived by an innovation approach. First, local estimators based on the measurements received from each sensor are obtained; then, the distributed fusion filter is generated as the LS matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. Finally, it is important to note that, even though the state augmentation method has been largely used in the literature to deal with measurement delays, it leads to a significant rise in the computational burden due to the increase of the state dimension. In contrast, the fusion estimators proposed in the current paper are obtained without state augmentation; hence, the dimension of the designed estimators is the same as that of the original state, thus reducing the computational cost compared with existing algorithms based on the augmentation method.
The rest of the paper is organized as follows. The multisensor measured output model with multiplicative noises and missing measurements, along with the random one-step transmission delay model, are presented in Section 2. The distributed fusion estimation algorithm is derived in Section 3, and a recursive algorithm for the centralized LS linear filtering estimator is proposed in Section 4. The effectiveness of the proposed estimation algorithms is analyzed in Section 5 by a simulation example, and some conclusions are drawn in Section 6.

Notation:
The notation throughout the paper is standard. R^n and R^{m×n} denote the n-dimensional Euclidean space and the set of all m × n real matrices, respectively. For a matrix A, the symbols A^T and A^{−1} denote its transpose and inverse, respectively; the notation A ⊗ B represents the Kronecker product of the matrices A, B. If the dimensions of vectors or matrices are not explicitly stated, they are assumed to be compatible with the algebraic operations. In particular, I denotes the identity matrix of appropriate dimensions. The notation a ∧ b indicates the minimum of two real numbers a, b. For any function G_{k,s}, depending on the time instants k and s, we will write G_k = G_{k,k} for simplicity; analogously, F^{(i)} = F^{(ii)} will be written for any function F^{(ij)}, depending on the sensors i and j. Moreover, for an arbitrary random vector α_k^{(i)}, we will use the notation ᾱ_k^{(i)} = E[α_k^{(i)}], where E[·] is the mathematical expectation operator. Finally, δ_{k,s} denotes the Kronecker delta function.

Problem Formulation and Model Description
This paper is concerned with the distributed and centralized fusion LS linear filtering estimation of discrete-time stochastic signals from randomly delayed observations coming from networked sensors. The signal measurements at the different sensors are affected by multiplicative and additive noises, and the additive sensor noises are assumed to be autocorrelated and cross-correlated at the same and consecutive sampling times. Each sensor output is transmitted to a local processor over imperfect network connections and, due to network congestion or other causes, random one-step delays may occur during this transmission; in order to model different delay rates in the transmission from each sensor to its local processor, different sequences of correlated Bernoulli random variables with known probability distributions are used.
In the distributed fusion method, each local processor produces the LS linear filter based on the measurements received from the sensor itself; afterwards, these local estimators are transmitted to the fusion center over perfect connections, and the distributed fusion filter is generated by a matrix-weighted linear combination of the local LS linear filtering estimators using the mean squared error as optimality criterion.In the centralized fusion method, all measurement data of the local processors are transmitted to the fusion center, also over perfect connections, and the LS linear filter based on all the measurements received is obtained by a recursive algorithm.
Next, we present the observation model and the hypotheses on the signal and noise processes necessary to address the estimation problem.

Signal Process
The distributed and centralized fusion filtering estimators will be obtained under the assumption that the evolution model of the signal to be estimated is unknown and only information about its mean and covariance functions is available; specifically, the following hypothesis is required:

Hypothesis 1. The n_x-dimensional signal process {x_k}_{k≥1} has zero mean and its autocovariance function is expressed in a separable form, E[x_k x_s^T] = A_k B_s^T, s ≤ k, where A_k, B_s ∈ R^{n_x×n} are known matrices.
Note that, when the system matrix Φ in the state-space model of a stationary signal is available, the signal autocovariance function is E[x_k x_s^T] = Φ^{k−s} E[x_s x_s^T], s ≤ k, and Hypothesis 1 is clearly satisfied taking, for example, A_k = Φ^k and B_s = E[x_s x_s^T](Φ^{−s})^T. For non-stationary signals with known transition matrices Φ_{k,s}, Hypothesis 1 is also satisfied taking A_k = Φ_{k,0} and B_s = E[x_s x_s^T](Φ_{s,0}^{−1})^T. Furthermore, Hypothesis 1 covers even situations where the system matrix in the state-space model is singular, although a different factorization must be used in those cases (see e.g., [21]). Hence, Hypothesis 1 on the signal autocovariance function covers both stationary and non-stationary signals, providing a unified context to deal with a large number of different situations and avoiding the derivation of specific algorithms for each of them.
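The factorization in the non-stationary case can be made explicit; a standard derivation, writing the transition matrix as Φ_{k,s} = Φ_{k,0} Φ_{s,0}^{−1} and using the symmetry of E[x_s x_s^T], is:

```latex
E[x_k x_s^T] = \Phi_{k,s}\,E[x_s x_s^T]
             = \Phi_{k,0}\,\Phi_{s,0}^{-1}\,E[x_s x_s^T]
             = \underbrace{\Phi_{k,0}}_{A_k}\,
               \underbrace{\bigl(E[x_s x_s^T]\,(\Phi_{s,0}^{-1})^{T}\bigr)^{T}}_{B_s^{T}},
\qquad s \le k .
```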

Multisensor Measured Outputs
Consider m sensors whose measured outputs obey Equation (1), where z_k^{(i)} ∈ R^{n_z} is the measured output of the i-th sensor at time k, which is transmitted to a local processor over unreliable network connections, and the coefficient matrices in Equation (1) are known time-varying matrices of suitable dimensions. For each sensor i = 1, …, m, {θ_k^{(i)}}_{k≥1} is a Bernoulli process describing the missing-measurement phenomenon, {ε_k^{(i)}}_{k≥1} is a scalar multiplicative noise, and {v_k^{(i)}}_{k≥1} is the additive measurement noise.
The following hypotheses on the observation model given by Equation (1) are required:

Hypothesis 2. The processes {θ_k^{(i)}}_{k≥1}, i = 1, …, m, are independent sequences of independent Bernoulli random variables with known probabilities P(θ_k^{(i)} = 1) = θ̄_k^{(i)}.

Hypothesis 3. The multiplicative noises {ε_k^{(i)}}_{k≥1}, i = 1, …, m, are independent sequences of independent scalar random variables with zero means and known second-order moments.

Hypothesis 4. The sensor measurement noises {v_k^{(i)}}_{k≥1}, i = 1, …, m, are zero-mean sequences with known second-order moments, autocorrelated and cross-correlated at the same and consecutive sampling times.

From Hypothesis 2, different sequences of independent Bernoulli random variables with known probabilities are used to model the phenomenon of missing measurements at each sensor: when θ_k^{(i)} = 1, the signal is present in the measurement coming from the i-th sensor at time k; otherwise, θ_k^{(i)} = 0 and the signal is missing in the measured output from the i-th sensor at time k, which means that such observation only contains the additive noise v_k^{(i)}. Although these variables are assumed to be independent from sensor to sensor, such a condition is not necessary to deduce either the centralized estimators or the local estimators, but only to obtain the cross-covariance matrices of the local estimation errors, which are necessary to determine the matrix weights of the distributed fusion estimators. Concerning Hypothesis 3, it should be noted that the multiplicative noises involved in uncertain systems are usually Gaussian noises. Finally, note that the conservative hypothesis of independence between the measurement noises of different sensors has been weakened in Hypothesis 4, since such an independence assumption may be a limitation in many real-world problems; for example, when all the sensors operate in the same noisy environment, the noises are usually correlated, and some sensors may even share the same measurement noise.
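The interplay of the Bernoulli missing-measurement variable and the multiplicative noise can be illustrated numerically. The following sketch assumes a hypothetical output form z = (θ + ε)·C x + v, which is one common way such models are written; the paper's exact Equation (1) may differ, and the names and parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_output(x, theta_prob, C, eps_std, v_std, rng):
    """One measured output with Bernoulli missing signal (theta), zero-mean
    scalar multiplicative noise (eps) and additive noise (v). The form
    z = (theta + eps) * C @ x + v is a hypothetical instance of Equation (1)."""
    theta = int(rng.random() < theta_prob)       # 1: signal present, 0: missing
    eps = rng.normal(0.0, eps_std)               # scalar multiplicative noise
    v = rng.normal(0.0, v_std, size=C.shape[0])  # additive measurement noise
    return (theta + eps) * (C @ x) + v

# With theta_prob = 0.5 and zero-mean noises, E[z] = 0.5 * C @ x.
C = np.array([[1.0, 0.5]])
x = np.array([2.0, -1.0])
zs = np.array([sensor_output(x, 0.5, C, 0.1, 0.1, rng) for _ in range(20000)])
print(zs.mean(axis=0))    # close to 0.5 * (C @ x) = [0.75]
```

The empirical mean confirms that, on average, only the fraction θ̄ of the signal reaches the output, while the zero-mean multiplicative and additive noises cancel out.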

Observation Model with Random One-Step Delays
For each k ≥ 1, assume that the measured outputs of the different sensors, z_k^{(i)}, i = 1, …, m, are transmitted to the local processors through unreliable communication channels and, due to network congestion or other causes, random one-step delays with different rates may occur in these transmissions. Assuming that the first measurement is always available on time and considering different sequences of Bernoulli random variables, {γ_k^{(i)}}_{k≥2}, i = 1, …, m, to model the random delays, the observations used in the estimation, y_k^{(i)}, are described by Equation (2). From Equation (2) it is clear that, when γ_k^{(i)} = 0, y_k^{(i)} = z_k^{(i)}; that is, the local processor receives the data from the i-th sensor at the sampling time k. When γ_k^{(i)} = 1, y_k^{(i)} = z_{k−1}^{(i)}, meaning that the measured output at time k is delayed and the previous one, z_{k−1}^{(i)}, is used for the estimation. These Bernoulli random variables modelling the delays are assumed to be one-step correlated, thus covering many practical situations; for example, those in which consecutive observations transmitted through the same channel cannot be delayed, or situations where there are links of some sort between the different communication channels. Specifically, the following hypothesis is assumed:

Hypothesis 5. For i = 1, …, m, {γ_k^{(i)}}_{k≥2} are sequences of Bernoulli random variables with known means; γ_k^{(i)} and γ_s^{(j)} are independent for |k − s| ≥ 2, and the second-order moments E[γ_k^{(i)} γ_s^{(j)}], s = k − 1, k, and i, j = 1, …, m, are also known.
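The situation in which two consecutive observations cannot both be delayed, mentioned above as one instance of one-step correlation, can be sketched as follows. The received-observation form y_k = (1 − γ_k) z_k + γ_k z_{k−1} is assumed here as a plausible reading of Equation (2); the function and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def delay_sequence(n, p, rng):
    """Bernoulli delay indicators correlated at consecutive times: if
    observation k-1 was delayed, observation k cannot be delayed, so two
    consecutive measurements are never both delayed."""
    gamma = np.zeros(n, dtype=int)
    for k in range(1, n):            # the first measurement always arrives on time
        if gamma[k - 1] == 0:
            gamma[k] = int(rng.random() < p)
    return gamma

z = np.arange(10.0)                  # toy measured outputs z_0..z_9
gamma = delay_sequence(10, 0.4, rng)
# assumed received observation: y_k = (1 - gamma_k) z_k + gamma_k z_{k-1}
y = (1 - gamma) * z + gamma * np.roll(z, 1)
```

Note that under this construction the marginal delay probability is p·P(γ_{k−1} = 0) rather than p itself, which is precisely why the second-order moments of the delay variables, and not only their means, must be known (Hypothesis 5).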
Finally, the following independence hypothesis is also required:

Hypothesis 6. For i = 1, …, m, the processes {x_k}_{k≥1}, {θ_k^{(i)}}_{k≥1}, {ε_k^{(i)}}_{k≥1}, {v_k^{(i)}}_{k≥1} and {γ_k^{(i)}}_{k≥2} are mutually independent.

In the following proposition, explicit expressions are derived for the autocovariance functions of the transmitted and received measurements, which will be necessary for the distributed fusion estimation algorithm.

Proposition 1. For i, j = 1, …, m, the autocovariance functions Σ_{k,s}^{z,(ij)} = E[z_k^{(i)} z_s^{(j)T}] and Σ_{k,s}^{y,(ij)} = E[y_k^{(i)} y_s^{(j)T}] are given by Equation (3).

Proof. From Equations (1) and (2), taking into account Hypotheses 1-6, the expressions given in Equation (3) are easily obtained.

Distributed Fusion Linear Filter
In this section, we address the distributed fusion linear filtering problem of the signal from the randomly delayed observations defined by Equations (1) and (2), using the LS optimality criterion. In the distributed fusion method, each local processor provides the LS linear filter of the signal x_k based on the measurements from the corresponding sensor, which will be denoted by x̂_{k/k}^{(i)}, i = 1, …, m; the derivation of these local filters and of their cross-correlation matrices will be detailed below. Finally, in Section 3.3, the distributed fusion filter weighted by matrices, x̂_{k/k}^{(D)}, will be generated from the local filters by applying the LS optimality criterion.

Local LS Linear Filtering Recursive Algorithm
To obtain the signal LS linear filters based on the available observations from each sensor, we will use an innovation approach. For each sensor i = 1, …, m, the innovation at time k, which represents the new information provided by the k-th observation, is defined by µ_k^{(i)} = y_k^{(i)} − ŷ_{k/k−1}^{(i)}, where ŷ_{k/k−1}^{(i)} denotes the one-stage predictor of y_k^{(i)}. As is known (see e.g., [26]), the innovations {µ_k^{(i)}}_{k≥1} constitute a zero-mean white process, and the LS linear estimator of any random vector α_k based on the observations y_1^{(i)}, …, y_L^{(i)}, denoted by α̂_{k/L}^{(i)}, can be calculated as a linear combination of the corresponding innovations,

α̂_{k/L}^{(i)} = Σ_{h=1}^{L} E[α_k µ_h^{(i)T}] Π_h^{(i)−1} µ_h^{(i)},   (4)

where Π_h^{(i)} = E[µ_h^{(i)} µ_h^{(i)T}] is the innovation covariance matrix. This general expression for the LS linear estimators, along with the Orthogonal Projection Lemma (OPL), which guarantees that the estimation error is uncorrelated with all the observations or, equivalently, with all the innovations, are the essential keys to derive the proposed recursive local filtering algorithm.
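The innovation formula above can be checked on a toy example: since the innovations are simply the observations orthogonalized in the L2 sense, summing the one-dimensional projections onto each innovation reproduces the direct LS projection onto the span of the observations. All names and data below are illustrative, with empirical moments standing in for the exact expectations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy joint-Gaussian data: estimate x from two correlated observations.
n = 100000
x = rng.normal(size=n)
y1 = x + 0.5 * rng.normal(size=n)
y2 = 0.8 * x + 0.3 * y1 + 0.7 * rng.normal(size=n)

# Innovations: orthogonalize the observations in the empirical L2 inner product.
mu1 = y1
mu2 = y2 - (np.mean(y2 * mu1) / np.mean(mu1 * mu1)) * mu1

# LS estimator as a sum over innovations: xhat = sum_h E[x mu_h] Pi_h^{-1} mu_h
xhat_innov = (np.mean(x * mu1) / np.mean(mu1 * mu1)) * mu1 \
           + (np.mean(x * mu2) / np.mean(mu2 * mu2)) * mu2

# Direct LS projection onto span{y1, y2} via the normal equations.
Y = np.stack([y1, y2])
coef = np.linalg.solve(Y @ Y.T / n, Y @ x / n)
xhat_direct = coef @ Y

print(np.max(np.abs(xhat_innov - xhat_direct)))   # ~0 (round-off only)
```

The two estimators agree up to floating-point round-off, which is the practical content of the whiteness of the innovation process: correlated observations can be processed one orthogonal piece at a time.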
Taking into account Equation (4), the first step to obtain the signal estimators is to find an explicit formula for the innovation µ_k^{(i)} or, equivalently, for the one-stage observation predictor ŷ_{k/k−1}^{(i)}. Taking into account the independence hypotheses on the model and using Equation (4) for the predictor ŵ_{k/k−1}^{(i)}, Equation (6) for the one-stage observation predictor is obtained; this equation is the starting point to derive the local recursive filtering algorithm presented in Theorem 1, which also provides the filtering error covariance matrices measuring the accuracy of the estimators x̂_{k/k}^{(i)} under the LS optimality criterion.

Theorem 1. Under Hypotheses 1-6, for each single sensor node i = 1, …, m, the local LS linear filter, x̂_{k/k}^{(i)}, and the corresponding error covariance matrix, P_{k/k}^{(i)}, are given by Equations (7) and (8), where the vectors O_k^{(i)} and the matrices r_k^{(i)} = E[O_k^{(i)} O_k^{(i)T}] are recursively obtained from Equations (9) and (10), and the matrices J_k^{(i)} from Equation (11). The innovations, µ_k^{(i)}, and their covariance matrices, Π_k^{(i)}, are given by Equations (12) and (13), and the coefficients W_{k,s}^{(i)} are calculated from Equations (5) and (14). Finally, the matrices Σ_{k,s}^{y,(i)} are given in Equation (3), and the matrices H_s^{(i)} in Equation (15).

Proof. The local filter x̂_{k/k}^{(i)} is obtained from the general expression given in Equation (4), starting from the computation of the coefficients E[x_k µ_h^{(i)T}]. The independence hypotheses and the separable structure of the signal covariance assumed in Hypothesis 1 lead to the expression for E[x_k y_h^{(i)T}] given by Equation (15). Hence, using Equation (4) for x̂_{h−1/h−1}^{(i)}, the filter coefficients are expressed explicitly and, by defining the vectors O_k^{(i)}, Equation (7) for the filter follows immediately from Equation (4); Equation (8) is then obtained by using the OPL to express P_{k/k}^{(i)}, applying Hypothesis 1 and Equation (7). The recursive Equations (9) and (10) are directly obtained from the corresponding definitions which, in turn, from Equation (16), lead to Equation (11). Rewriting the observation predictor of Equation (6) as Equation (17), Equation (12) for the innovation is directly obtained and, applying the OPL to express its covariance matrix and using Equations (11) and (17), Equation (13) is deduced. To complete the proof, the expressions for W_k^{(i)} given in Equation (5) are derived using the hypotheses stated on the model, and Equation (14) for W_{k,k−2}^{(i)} is directly obtained from Equations (1), (2) and (5); the remaining expectations, as in Equation (18), are computed by applying the OPL together with Equations (11) and (17). So the proof of Theorem 1 is completed.

Cross-Correlation Matrices between Any Two Local Filters
To obtain the distributed filtering estimator, the cross-correlation matrices between any pair of local filters must be calculated; a recursive formula for such matrices is derived in the following theorem (the notation in this theorem is the same as that used in Theorem 1).

Theorem 2. Under Hypotheses 1-6, the cross-correlation matrices between any two local filters, Σ_{k/k}^{(ij)} = E[x̂_{k/k}^{(i)} x̂_{k/k}^{(j)T}], i, j = 1, …, m, are calculated by Equation (19), with r_k^{(ij)} = E[O_k^{(i)} O_k^{(j)T}] obtained recursively from Equation (20), where the matrices J_{k}^{(ij)} are given by Equations (21) and (22). The innovation cross-covariance matrices Π_k^{(ij)} = E[µ_k^{(i)} µ_k^{(j)T}] are obtained from Equation (23), whose constituent expectations are given by Equation (24), and the coefficients W_{k,s}^{(ij)} are calculated from Equation (25). Finally, the matrices Σ_{k,s}^{y,(ij)} and B_s, s = k − 1, k, l = i, j, are given in Equations (3) and (15), respectively.

Proof. Equation (19) for Σ_{k/k}^{(ij)} is directly obtained using Equation (7) for the local filters and defining r_k^{(ij)} = E[O_k^{(i)} O_k^{(j)T}]. Next, the recursive formula (20) for the matrices r_k^{(ij)} follows by using Equation (9) and defining the matrices J_k^{(ij)}. For later derivations, an expression of the one-stage predictor of y_k^{(j)} based on the observations of sensor i will be used; this expression, Equation (26), is obtained from Equation (5). As Equation (17) is a particular case of Equation (26) for i = j, hereafter we will also refer to it for the local predictors. Using Equation (26) for both predictors, Equation (21) is easily obtained, and Equation (22) follows similarly. To obtain Equation (23), first we apply the OPL to express Π_k^{(ij)}; then, using Equation (26), the derivation is analogous to that shown in the proof of Theorem 1, and Equation (24) is obtained by the same arguments. Finally, the reasoning to obtain Equation (25) for the coefficients W_{k,s}^{(ij)} is also similar to that used to derive W_{k,k−h}^{(i)} in Theorem 1, so it is omitted; the proof of Theorem 2 is then complete.

Derivation of the Distributed LS Fusion Linear Filter
As mentioned previously, a matrix-weighted fusion linear filter is now generated from the local filters by applying the LS optimality criterion. The distributed fusion filter at any time k is defined as x̂_{k/k}^{(D)} = F_k X̂_{k/k}, where X̂_{k/k} = (x̂_{k/k}^{(1)T}, …, x̂_{k/k}^{(m)T})^T is the vector constituted by the local filters, and F_k ∈ R^{n_x×mn_x} is the matrix obtained by minimizing the mean squared error E[(x_k − F_k X̂_{k/k})^T (x_k − F_k X̂_{k/k})]. As is known, the solution of this problem is given by F_k^{opt} = E[x_k X̂_{k/k}^T](E[X̂_{k/k} X̂_{k/k}^T])^{−1} and, consequently, the proposed distributed filter is expressed as:

x̂_{k/k}^{(D)} = E[x_k X̂_{k/k}^T](E[X̂_{k/k} X̂_{k/k}^T])^{−1} X̂_{k/k},   (27)

where E[X̂_{k/k} X̂_{k/k}^T] = (Σ_{k/k}^{(ij)})_{i,j=1,…,m} is built from the cross-correlation matrices between any two local filters given in Theorem 2 and, by the OPL, E[x_k x̂_{k/k}^{(j)T}] = Σ_{k/k}^{(j)}, so E[x_k X̂_{k/k}^T] = (Σ_{k/k}^{(1)}, …, Σ_{k/k}^{(m)}).
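The weighting formula above can be sketched numerically: fusing two correlated local estimates with the LS matrix weights never does worse than either local estimate alone. The data and names below are illustrative stand-ins for the local filters, with empirical moments replacing the exact covariances of Theorem 2.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100000
x = rng.normal(size=n)                                  # scalar signal (toy)
xh1 = x + 0.6 * rng.normal(size=n)                      # local estimate, sensor 1
xh2 = x + 0.4 * rng.normal(size=n) + 0.3 * (xh1 - x)    # errors correlated with 1

X = np.stack([xh1, xh2])                 # stacked local estimates
# LS fusion weights: F = E[x X^T] (E[X X^T])^{-1}, estimated empirically
F = np.linalg.solve(X @ X.T / n, X @ x / n)
x_fused = F @ X

mse = lambda e: float(np.mean((e - x) ** 2))
print(mse(xh1), mse(xh2), mse(x_fused))  # fused MSE is the smallest
```

Since each local estimate is itself a matrix-weighted combination (with weights (1, 0) or (0, 1)), the LS-optimal weights can only reduce the mean squared error, and the cross-correlation between the local errors is exactly what the weights exploit.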
The distributed fusion linear filter weighted by matrices is presented in the following theorem.
Theorem 3. Let X̂_{k/k} = (x̂_{k/k}^{(1)T}, …, x̂_{k/k}^{(m)T})^T denote the vector constituted by the local LS filters given in Theorem 1. Then the distributed fusion filter, x̂_{k/k}^{(D)}, and its error covariance matrix, P_{k/k}^{(D)}, are given by Equations (28) and (29).

Proof. As discussed previously, Equation (28) is immediately derived from Equation (27), and Equation (29) follows by expressing P_{k/k}^{(D)} via the OPL, using Hypothesis 1 and Equation (28). Then, Theorem 3 is proved.

Centralized LS Fusion Linear Filter
In this section, using an innovation approach, a recursive algorithm is designed for the centralized fusion LS linear filter of the signal, x_k, which will be denoted by x̂_{k/k}^{(C)}.

Stacked Observation Model
In the centralized fusion filtering, the observations of the different sensors are jointly processed at each sampling time to yield the filter x̂_{k/k}^{(C)}. To carry out this process, at each sampling time k ≥ 1 we deal with the vector constituted by the observations from all sensors, y_k = (y_k^{(1)T}, …, y_k^{(m)T})^T, which, from Equation (2), can be expressed as in Equation (30), where z_k = (z_k^{(1)T}, …, z_k^{(m)T})^T is given by Equation (31). Hence, the problem is to obtain the LS linear estimator of the signal, x_k, based on the observations y_1, …, y_k. This problem requires the statistical properties of the processes involved in Equations (30) and (31), which are easily inferred from the model Hypotheses 1-6:

Property 1. {Θ_k}_{k≥1} is a sequence of independent random parameter matrices with known means Θ̄_k = E[Θ_k] = Diag(θ̄_k^{(1)}, …, θ̄_k^{(m)}) ⊗ I.

Property 5. The processes {x_k}_{k≥1}, {Θ_k}_{k≥1}, {E_k}_{k≥1}, {v_k}_{k≥1} and {Γ_k}_{k≥2} are mutually independent.

Recursive Filtering Algorithm
In view of Equations (30) and (31) and the above properties, the study of the LS linear filtering problem based on the stacked observations is entirely analogous to that of the local filtering problem carried out in Section 3. Therefore, the centralized filtering algorithm described in the following theorem is derived by a reasoning analogous to that used in Theorem 1, and its proof is omitted.
The innovations, µ_k, and their covariance matrices, Π_k, are given by the centralized counterparts of Equations (12) and (13), respectively, and the coefficients are computed analogously to those in Theorem 1; the matrices Σ_{k,s}^{y} involved are given in Equation (3). Note that the filtering error covariance matrices only depend on the matrices A_k and B_k, which are known, and on the matrices r_k, which are recursively calculated and do not depend on the current set of observations. Hence, the filtering error covariance matrices provide a measure of the estimators' performance even before any observed data are processed.
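The offline-computability property noted above is shared by all covariance-based filters and can be sketched with the standard scalar Kalman covariance recursion, used here purely as a stand-in with the same structure (the paper's recursions use covariance information rather than this state-space model, and all parameter values below are assumed toy values).

```python
# The error (co)variances depend only on model parameters, never on data, so
# estimator accuracy can be evaluated offline. Illustration with the standard
# scalar Kalman covariance recursion, which has the same offline property.
phi, q, h, r = 0.95, 0.1, 1.0, 0.5   # assumed toy model parameters
P = 1.0                              # initial prediction error variance
for _ in range(200):                 # note: no measurement enters this loop
    P_pred = phi * P * phi + q               # prediction step
    K = P_pred * h / (h * P_pred * h + r)    # gain from covariances only
    P = (1.0 - K * h) * P_pred               # filtering error variance
print(round(P, 4))                   # steady-state filtering error variance
```

Because no observation appears anywhere in the loop, the whole accuracy profile of the filter can be tabulated before a single measurement is received, exactly as claimed for the proposed algorithms.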

Numerical Simulation Example
In this section, a numerical example is presented to examine the performance of the proposed distributed and centralized filtering algorithms and to show how the estimation accuracy is influenced by the missing and delay probabilities. Let us consider that the signal to be estimated is a zero-mean scalar process, {x_k}_{k≥1}, with autocovariance function E[x_k x_j] = 1.025641 × 0.95^{k−j}, j ≤ k, which is factorizable according to Hypothesis 1 just taking, for example, A_k = 1.025641 × 0.95^k and B_k = 0.95^{−k}.
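The separability of this example covariance is easy to verify directly; a minimal check over a grid of time instants:

```python
# Check that the example autocovariance E[x_k x_j] = 1.025641 * 0.95**(k-j),
# j <= k, factorizes as A_k * B_j with A_k = 1.025641 * 0.95**k and
# B_j = 0.95**(-j), as required by Hypothesis 1.
A = lambda k: 1.025641 * 0.95 ** k
B = lambda j: 0.95 ** (-j)
cov = lambda k, j: 1.025641 * 0.95 ** (k - j)

err = max(abs(A(k) * B(j) - cov(k, j))
          for k in range(1, 51) for j in range(1, k + 1))
print(err)    # zero up to floating-point round-off
```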
Sensor measured outputs. The measured outputs of this signal are assumed to be provided by three different sensors and described by Equation (1).

Influence of the missing measurements. Considering again γ̄^{(i)} = 0.21, i = 1, 2, 3, in order to show the effect of the missing-measurement phenomenon, the distributed and centralized filtering error variances are displayed in Figure 2 for different values of the probability θ̄; specifically, θ̄ is varied from 0.1 to 0.9. In this figure, both graphs (corresponding to the distributed and centralized fusion filters, respectively) show that the performance of the filters becomes poorer as θ̄ decreases, which means that, as expected, the performance of both filters improves as the probability of missing measurements, 1 − θ̄, decreases. This figure also confirms that both methods, distributed and centralized, have approximately the same accuracy for the different values of the missing probabilities, thus corroborating the previous comments.

Assuming the same probabilities θ̄ = 0.5 and γ̄^{(i)} = 0.21 as in Figure 1, and using one thousand independent simulations, the different centralized filtering estimates are compared using the mean squared error (MSE) at each sampling time k, calculated as MSE_k = (1/1000) Σ_{s=1}^{1000} (x_k^{(s)} − x̂_{k/k}^{(s)})², where {x_k^{(s)}; 1 ≤ k ≤ 100} denotes the s-th set of artificially simulated data and x̂_{k/k}^{(s)} is the filter at the sampling time k in the s-th simulation run. The results are displayed in Figure 4, which shows that: (a) the proposed centralized filtering algorithm provides better estimations than the other filtering algorithms, since the possibility of different simultaneous uncertainties in the different sensors is considered; (b) the centralized filter in [8] outperforms the filter in [25] since, even though the latter accommodates the effect of the delays during transmission, it does not take into account the missing-measurement phenomenon in the sensors; (c) the filtering algorithm in [4] provides the worst estimations, a fact that was expected since neither the uncertainties in the measured outputs nor the delays during transmission are taken into account.

Six-sensor network. Finally, according to the anonymous reviewers' suggestion, the feasibility of the proposed estimation algorithms is tested for a larger number of sensors. More specifically, three additional sensors are considered with the same characteristics as the previous ones, but a probability θ* = P(θ_k^{(i)} = 1)
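The per-time MSE used in the comparison above can be sketched as follows; the signal and filter trajectories here are placeholder arrays standing in for the one thousand simulated data sets and the corresponding filter outputs.

```python
import numpy as np

rng = np.random.default_rng(4)

S, T = 1000, 100                       # simulation runs, time horizon
x_true = rng.normal(size=(S, T))       # stand-in signal trajectories x_k^(s)
x_filt = x_true + 0.3 * rng.normal(size=(S, T))   # stand-in filter outputs

# MSE_k = (1/S) * sum_s (x_k^(s) - xhat_{k/k}^(s))^2, one value per time k
mse_k = np.mean((x_true - x_filt) ** 2, axis=0)
print(mse_k.shape)    # one MSE value per sampling time k = 1..100
```

Averaging over the run axis while keeping the time axis produces one MSE curve per filtering algorithm, which is what Figure 4 plots.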

Conclusions
In this paper, distributed and centralized fusion filtering algorithms have been designed for multisensor systems whose measured outputs are affected by both multiplicative and additive noises, assuming correlated random delays in the transmissions. The main outcomes and results can be summarized as follows:

• Covariance information approach. The evolution model generating the signal process is not required to design the proposed distributed and centralized fusion filtering algorithms; nonetheless, they are also applicable to the conventional formulation using the state-space model.

• Measured outputs with multiplicative and additive noises. The sensor measured outputs are assumed to be affected by different stochastic uncertainties (namely, missing measurements and multiplicative noises), besides cross-correlation between the additive noises of the different sensors.

• Random one-step transmission delays. The fusion estimation problems are addressed assuming that random one-step delays may occur during the transmission of the sensor outputs through the network communication channels; the delays have different characteristics at the different sensors and are assumed to be correlated and cross-correlated at consecutive sampling times. This correlation assumption covers many situations where the common assumption of independent delays is not realistic; for example, networked systems with stand-by sensors for the immediate replacement of a failed unit, thus avoiding the possibility of two successive delayed observations.

• Distributed and centralized fusion filtering algorithms. As a first step, a recursive algorithm for the local LS linear signal filter based on the measured output data coming from each sensor has been designed by an innovation approach; the computational procedure of the local algorithms is very simple and suitable for online applications. After that, the matrix-weighted sum that minimizes the mean squared estimation error is proposed as the distributed fusion estimator. Also, using covariance information, a recursive centralized LS linear filtering algorithm, with a structure analogous to that of the local algorithms, is proposed. The accuracy of the proposed fusion estimators, obtained under the LS optimality criterion, is measured by the error covariance matrices, which can be calculated offline as they do not depend on the current observed data set.
x̂_{k/k}^{(i)}; afterwards, these local filters are transmitted to the fusion center, where the distributed filter, x̂_{k/k}^{(D)}, is designed as a matrix-weighted linear combination of such local filters. First, in Section 3.1, for each i = 1, …, m, a recursive algorithm for the local LS linear filter, x̂_{k/k}^{(i)}, will be deduced. Then, in Section 3.2, the derivation of the cross-correlation matrices between any two local filters, Σ_{k/k}^{(ij)}, will be detailed.

or, equivalently, for the observation predictor ŷ_{h/h−1}^{(i)}, using the following alternative expression for the observations y_k^{(i)} given by Equation (2).

z_k = (z_k^{(1)T}, …, z_k^{(m)T})^T is the vector constituted by the sensor measured outputs given in Equation (1), and Γ_k = Diag(γ_k^{(1)}, …, γ_k^{(m)}) ⊗ I. Let us note that the stacked vector z_k is affected by the random matrices Θ_k = Diag(θ_k^{(1)}, …, θ_k^{(m)}) ⊗ I, and by the measurement additive noise v_k = (v_k^{(1)T}, …, v_k^{(m)T})^T.

Property 2. {E_k}_{k≥1} is a sequence of independent random parameter matrices whose entries have zero means and known second-order moments.

Property 3. The noise {v_k}_{k≥1} is a zero-mean sequence with known second-order moments defined by the matrices R_{k,s} ≡ (R_{k,s}^{(ij)})_{i,j=1,…,m}.

Property 4. The matrices {Γ_k}_{k≥2} have known means, Γ̄_k ≡ E[Γ_k] = Diag(γ̄_k^{(1)}, …, γ̄_k^{(m)}) ⊗ I, k ≥ 2, and Γ_k and Γ_s are independent for |k − s| ≥ 2.

Theorem 4.

The centralized LS linear filter, x̂_{k/k}^{(C)}, is given by x̂_{k/k}^{(C)} = A_k O_k, k ≥ 1, where the vectors O_k and the matrices r_k = E[O_k O_k^T] are recursively obtained from recursions analogous to those of Theorem 1, now in terms of the stacked matrices Σ_{k,s}^{y} = (Σ_{k,s}^{y,(ij)})_{i,j=1,…,m}, s = k, k − 1, with Σ_{k,s}^{y,(ij)} given in Equation (3) and the matrices H_s in Equation (15). The performance of the LS linear filters x̂_{k/k}^{(C)}, k ≥ 1, is measured by the error covariance matrices P_{k/k}^{(C)} = E[x_k x_k^T] − E[x̂_{k/k}^{(C)} x̂_{k/k}^{(C)T}], whose computation is immediate from Hypothesis 1 and the expression x̂_{k/k}^{(C)} = A_k O_k of the filter: P_{k/k}^{(C)} = A_k (B_k − A_k r_k)^T, k ≥ 1.

P(θ_k^{(i)} = 1) = 0.75, i = 4, 5, 6, for the Bernoulli random variables modelling the missing-measurement phenomenon. The results are shown in Figure 5, from which similar conclusions to those from Figure 1 are deduced.