Centralized Fusion Approach to the Estimation Problem with Multi-Packet Processing under Uncertainty in Outputs and Transmissions

This paper is concerned with the least-squares linear centralized estimation problem in multi-sensor network systems from measured outputs with uncertainties modeled by random parameter matrices. These measurements are transmitted to a central processor over different communication channels, and owing to the unreliability of the network, random one-step delays and packet dropouts are assumed to occur during the transmissions. In order to avoid network congestion, at each sampling time, each sensor’s data packet is transmitted just once, but due to the uncertainty of the transmissions, the processing center may receive either one packet, two packets, or nothing. Different white sequences of Bernoulli random variables are introduced to describe the observations used to update the estimators at each sampling time. To address the centralized estimation problem, augmented observation vectors are defined by accumulating the raw measurements from the different sensors, and when the current measurement of a sensor does not arrive on time, the corresponding component of the augmented measured output predictor is used as compensation in the estimator design. Through an innovation approach, centralized fusion estimators, including predictors, filters, and smoothers, are obtained by recursive algorithms without requiring the signal evolution model. A numerical example is presented to show how uncertain systems with state-dependent multiplicative noise can be covered by the proposed model and how the estimation accuracy is influenced by both sensor uncertainties and transmission failures.


Background and Motivation
With the active development of computer and communication technologies, the estimation problem in multi-sensor network stochastic systems has become an important research topic in the last few years. The significant advantages of multi-sensor systems in practical situations (low cost, remote operation, simple installation and maintenance) are obvious, and have triggered the wide use of these systems in many areas, such as target tracking, communications, the manufacturing industry, etc. Moreover, they usually provide more information than traditional single-sensor systems. In spite of these advantages, a sensor network is not generally a reliable communication medium, and together with the communication capacity limitations (network bandwidths or service capabilities, among others), may yield different uncertainties during data transmission, such as missing measurements, random delays, and packet dropouts.
The development of sensor networks motivates the necessity to design fusion estimation algorithms which integrate the information from the different sensors and take these network-induced uncertainties into account to achieve a satisfactory performance. Depending on the way the information fusion is performed, there are two fundamental fusion techniques: the centralized fusion approach, in which the measurements from all sensors are sent to a central processor where the fusion is performed, and the distributed fusion approach, in which the measurements from each sensor are processed independently to obtain local estimators before being sent to the fusion center. The survey papers [1][2][3] can be examined for a wide view of these and other multi-sensor data fusion techniques.
As already indicated, the centralized fusion architecture is based on a fusion centre that is able to receive, fuse, and process the data coming from every sensor; hence, centralized fusion estimation algorithms provide optimal signal estimators based on the measured outputs from all sensors and, consequently, when all sensors work correctly and the connections are perfect, they achieve the best estimation accuracy. In light of these considerations, it is not surprising that the study of the centralized and distributed fusion estimation problems in multi-sensor systems with network-induced uncertainties (in both the sensor measured outputs and the data transmission) has become an active research area in recent years. The estimation problem in systems with uncertainties in the sensor outputs (such as missing measurements, stochastic sensor gain degradation, and fading measurements) is addressed in refs. [4][5][6], among others. In refs. [7][8][9][10], systems with failures during transmission (such as uncertain observations, random delays, and packet dropouts) are considered. Also, recent advances in the estimation, filtering, and fusion of networked systems with network-induced phenomena can be reviewed in refs. [11,12], where detailed overviews of this field are presented.
Since our aim in this paper is the design of centralized fusion estimators for multi-sensor network systems whose measurements are perturbed by random parameter matrices and subject to random transmission failures (delays and packet dropouts), with multi-packet processing, we now discuss the research status of the estimation problem in networked systems with some of these characteristics.

Multi-Sensor Measured Outputs with Random Parameter Matrices
It is well known that in sensor-network environments, the measured outputs can be subject not only to additive noises, but also to multiplicative noise uncertainties due to several reasons, such as the presence of an intermittent sensor or hardware failure, natural or human-made interference, etc. For example, measurement equations that model the above-mentioned situations involving degradation of the sensor gain, or missing or fading measurements, must include multiplicative noises described by random variables with values in [0, 1]. So, random measurement parameter matrices provide a unified framework to address different simultaneous network-induced phenomena, and networked systems with random parameter matrices are used in different areas of science (see, e.g., refs. [13,14]). Also, systems with random sensor delays and/or multiple packet dropouts can be transformed into equivalent observation models with random measurement matrices (see, e.g., ref. [15]). Hence, the estimation problem for systems with random parameter matrices has attracted increasing interest due to its diverse applications, and many estimation algorithms for such systems have been proposed over the last few years (see, e.g., refs. [16][17][18][19][20][21][22][23][24], and references therein).

Transmission Random Delays and Losses: Observation Predictor Compensation
Random delays and packet dropouts in the measurement transmissions are usually unavoidable and clearly deteriorate the performance of the estimators. For this reason, much effort has been made towards the study of the estimation problem incorporating the effects of these transmission uncertainties, and several modifications of the standard estimation algorithms have been proposed (see, e.g., refs. [25][26][27], and references therein). In the estimation problem from measurements subject to transmission losses, when a packet drops out, the processor does not receive a valid measurement, and the most common compensation procedure is the hold-input mechanism, which consists of processing the last measurement that was successfully transmitted. Unlike this approach, in ref. [28] the estimator of the lost measurement, based on the information received previously, is proposed as the compensator; this methodology significantly improves the estimates, since in cases of loss, all the previously received measurements are considered, instead of only the last one. In view of this clear improvement of the estimators, the compensation strategy developed in ref. [28] has been adopted in other recent investigations (see, e.g., refs. [29,30], and references therein).
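To make the contrast between the two compensation strategies concrete, the following sketch compares them on a hypothetical scalar example (not taken from the cited references; all model values are assumed): a hold-input filter reuses the last received measurement as if it were current, while predictor compensation processes the measurement predictor, which yields a zero innovation and hence no correction step.

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.9, 1.0, 0.5      # AR coefficient and noise variances (assumed)
n, p_drop = 3000, 0.4        # horizon and dropout probability (assumed)

# Simulate a scalar AR(1) signal, noisy measurements, and random dropouts.
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)
received = rng.random(n) > p_drop

def run_filter(compensation):
    """Scalar Kalman-type filter with the chosen loss compensation."""
    xh, P, last_y, sse = 0.0, 1.0, 0.0, 0.0
    for k in range(1, n):
        xh, P = a * xh, a * a * P + q          # time update
        if received[k]:
            last_y = y[k]
            K = P / (P + r)
            xh, P = xh + K * (y[k] - xh), (1 - K) * P
        elif compensation == "hold":
            # Hold-input: the last received measurement is processed
            # as if it were the current one.
            K = P / (P + r)
            xh, P = xh + K * (last_y - xh), (1 - K) * P
        # Predictor compensation (ref. [28] style): processing the
        # measurement predictor gives a zero innovation, so no update.
        sse += (x[k] - xh) ** 2
    return sse / (n - 1)

mse_hold = run_filter("hold")
mse_pred = run_filter("predictor")
assert mse_pred < mse_hold  # predictor compensation yields lower error
```

With dropouts, the predictor-compensated filter coincides with the optimal filter given the received data, so its empirical error is smaller than that of the hold-input variant, mirroring the improvement reported for ref. [28].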

Multi-Packet Processing
Another concern at the forefront of research in networked systems subject to random delays and packet dropouts is the number of packets that are processed to update the estimator at each moment, and different observation models have been proposed to deal with this issue. For example, to avoid losses as much as possible, in ref. [16] it is assumed that each packet is transmitted several times. In contrast, to avoid the network congestion that may be caused by multiple transmissions, in ref. [31] the packets are sent just once. These papers also assume that each packet is either received on time, delayed for at most one sampling time, or lost, and that only one packet or no packets are processed to update the estimator at each moment. However, in refs. [32][33][34], two packets may arrive at each sampling time, in which case both are used to update the estimators, thus improving their performance. In these papers, different packet dropout compensation procedures have been employed: the last available measurement was used as compensation in refs. [32,33], while the observation predictor was considered in ref. [34].

Addressed Problem and Paper Contributions
Based on the considerations made above, we were motivated to address the study of the centralized fusion estimation problem for multi-sensor networked systems perturbed by random parameter matrices. This problem is discussed under the following assumptions: (a) Each sensor transmits its measured outputs to a central processor over a different communication channel, and random delays and packet dropouts are assumed to occur during the transmission; (b) in order to avoid network congestion, at each time instant, the different sensors send their packets only once, but due to the random transmission failures, the processing center may receive, at each sampling time, either one packet, two packets, or nothing; and (c) the measurement output predictor is used as the loss compensation strategy.
The main contributions and advantages of the current work are summarized as follows: (1) A unified framework to deal with different network-induced phenomena in the measured outputs, such as missing measurements or sensor gain degradation, is provided by the use of random measurement matrices. (2) Besides the uncertainties in the measured outputs, random one-step delays and packet dropouts are assumed to occur during the transmission, at different rates at each sensor. Unlike the authors' previous papers concerning random measurement matrices and random transmission delays and losses, where only one packet is processed to update the estimator at each moment, in this paper the estimation algorithms are obtained under the assumption that either one packet, two packets, or nothing may arrive at each sampling time. (3) Concerning the compensation strategy, the use of the measurement predictor as the loss compensator, combined with the simultaneous processing of delayed packets, provides better estimators than other approaches in which the last successfully received measurement is used as the compensator and only one packet is processed at each moment. (4) The centralized fusion estimation problem is addressed using covariance information, without requiring full knowledge of the state-space model generating the signal process, thus providing a general approach to deal with different kinds of signal processes. (5) The innovation approach is used to obtain prediction, filtering, and fixed-point smoothing algorithms which are recursive and computationally simple, and thus suitable for online implementation. In contrast to approaches based on the state augmentation technique, the proposed algorithms are deduced without making use of augmented systems; therefore, they have lower computational costs than those based on the augmentation method.

Paper Structure and Notation
The remaining sections of the paper are organized as follows. Section 2 presents the assumptions for the signal process, the mathematical models of the multi-sensor measured outputs with random parameter matrices, and the measurements received by the central processor with random delays and packet losses. Section 3 provides the main results of the research, namely, the covariance-based centralized least-squares linear prediction and filtering algorithm (Theorem 1) and the fixed-point smoothing algorithm (Theorem 2). A numerical example is presented in Section 4 to show the performance of the proposed centralized estimators, and some concluding remarks are drawn in Section 5. The proofs of Theorems 1 and 2 are presented in Appendix A and Appendix B, respectively.
The notations used throughout the paper are standard. R^n and R^{m×n} denote the n-dimensional Euclidean space and the set of all m × n real matrices, respectively. A^T and A^{−1} denote the transpose and inverse of a matrix A, respectively. I_n and 0_n denote the n × n identity matrix and zero matrix, respectively. 1_n denotes the all-ones vector. Finally, ⊗ and ◦ are the Kronecker and Hadamard products, respectively, and δ_{k,s} is the Kronecker delta function.
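Since the Kronecker product, the Hadamard product, and the Kronecker delta appear repeatedly in the algorithms below, a minimal numerical illustration may help; NumPy is used here purely for demonstration, and the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.5, 1.0],
              [1.5, 2.0]])

# Kronecker product: each entry a_ij of A scales a full copy of B,
# giving a (4 x 4) block matrix for two (2 x 2) factors.
K = np.kron(A, B)
assert K.shape == (4, 4)

# Hadamard product: entrywise multiplication of same-sized matrices.
H = A * B
assert np.allclose(H, [[0.5, 2.0], [4.5, 8.0]])

# Kronecker delta: delta_{k,s} = 1 if k == s, else 0.
delta = lambda k, s: 1 if k == s else 0
assert delta(3, 3) == 1 and delta(3, 4) == 0
```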

Observation Model and Preliminaries
The aim of this section is to design a mathematical model for the observations to be processed in the least-squares (LS) linear estimation problem of discrete-time signal processes from multi-sensor noisy measurements transmitted through imperfect communication channels, where random one-step delays and packet dropouts may arise during transmission. More specifically, in order to avoid network congestion, at every sampling time, it is assumed that the measured outputs from each sensor, which are perturbed by random parameter matrices, are transmitted just once to a central processor and, due to random delays and losses, the processing center (PC) may receive, from each sensor, either one packet, two packets, or nothing at each time instant.
In this context, our goal is to find recursive algorithms for the LS linear prediction, filtering, and fixed-point smoothing problems using the centralized fusion method. We assume that only information about the mean and covariance functions of the signal process is available, and this information is specified in the following hypothesis: (H1) The n_x-dimensional signal process {x_k}_{k≥1} has zero mean, and its autocovariance function is expressed in a separable form, E[x_k x_s^T] = A_k B_s^T, s ≤ k, where A_k and B_s are known matrices.
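As a toy illustration of hypothesis (H1), the following sketch assumes a scalar signal with autocovariance E[x_k x_s] = 0.9^{k−s} D_s (the process used in the numerical example of Section 4) and verifies numerically that A_k = 0.9^k, B_s = 0.9^{−s} D_s is a valid separable factorization:

```python
import numpy as np

def D(s, a=0.9, Q=1.0):
    """Signal variance recursion D_s = (a^2 + 0.01^2) D_{s-1} + Q, D_0 = 1."""
    d = 1.0
    for _ in range(s):
        d = (a * a + 0.01 ** 2) * d + Q
    return d

# Separable factorization of E[x_k x_s] = 0.9**(k - s) * D_s, s <= k.
A = lambda k: 0.9 ** k
B = lambda s: 0.9 ** (-s) * D(s)

# The factorization reproduces the autocovariance for any s <= k.
for k in range(1, 6):
    for s in range(1, k + 1):
        assert np.isclose(A(k) * B(s), 0.9 ** (k - s) * D(s))
```

The same separable structure covers vector signals with matrix factors A_k, B_s, which is what the recursive algorithms of Section 3 exploit.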

Multi-Sensor Measured Outputs with Random Parameter Matrices
We assume that there are m sensors which provide measured outputs of the signal process that are affected by random parameter matrices according to the following model:
z^(i)_k = H^(i)_k x_k + v^(i)_k, k ≥ 1, i = 1, …, m,   (1)
where z^(i)_k ∈ R^{n_z} is the signal measurement in the i-th sensor at time k, H^(i)_k are random parameter matrices, and v^(i)_k are the measurement noises. We assume the following hypotheses for these processes: (H2) {H^(i)_k}_{k≥1}, i = 1, …, m, are independent sequences of independent random parameter matrices; for p = 1, …, n_z and q = 1, …, n_x, we denote by h^(i)_pq(k) the (p, q)-th entry of H^(i)_k, which has known first- and second-order moments. (H3) The measurement noises {v^(i)_k}_{k≥1}, i = 1, …, m, are zero-mean sequences with known second-order moments.

Observation Model. Properties
To address the estimation problem with the centralized fusion method, the observations coming from the different sensors are gathered and jointly processed at each sampling time to yield the optimal signal estimator. So, the problem is addressed by considering, at each time k ≥ 1, the vector constituted by the measurements received from all sensors; for this purpose, Equations (1) are combined to yield the stacked measured output equation (2), z_k = H_k x_k + v_k, where z_k, H_k, and v_k stack the corresponding sensor quantities. As already indicated, random one-step delays and packet dropouts occur during the transmissions to the PC. To model these failures, we introduced, for each sensor i, sequences of Bernoulli random variables: γ^(i)_k, which takes the value 1 when the output z^(i)_k does not arrive on time, and ψ^(i)_k, which takes the value 1 when z^(i)_{k−1} is processed at sampling time k (because it was one-step delayed). For these sequences of Bernoulli variables, we assume the following hypothesis: (H4) {(γ^(i)_k, ψ^(i)_{k+1})}_{k≥1}, i = 1, …, m, are independent sequences of independent random vectors; {γ^(i)_k}_{k≥1}, i = 1, …, m, are sequences of Bernoulli random variables with known probabilities P(γ^(i)_k = 1), and {ψ^(i)_k}_{k≥2}, i = 1, …, m, are sequences of Bernoulli random variables such that the conditional probabilities P(ψ^(i)_k = 1 | γ^(i)_{k−1} = 1) are known. Moreover, the mutual independence hypothesis of the involved processes is also necessary: (H5) the signal {x_k}_{k≥1}, the matrices {H_k}_{k≥1}, the noises {v_k}_{k≥1}, and the transmission variables are mutually independent. Remark 1. From hypothesis (H4), for i, j = 1, …, m, the correlations between the transmission variables follow directly from their Bernoulli distributions. In order to write jointly the sensor measurements to be processed at each sampling time, we defined Γ_k ≡ Diag(γ^(1)_k I_{n_z}, …, γ^(m)_k I_{n_z}) and Ψ_k ≡ Diag(ψ^(1)_k I_{n_z}, …, ψ^(m)_k I_{n_z}). It is then clear that the non-zero components of the vector (I_{mn_z} − Γ_k)z_k are those of z_k that arrive on time at the PC and, consequently, those processed at time k. The other components of z_k are delayed or lost and, as compensation, the corresponding components of the predictor z_{k/k−1}, specified in Γ_k z_{k/k−1}, are processed. Similarly, the non-zero components of Ψ_k z_{k−1} are those of z_{k−1} that are affected by one-step delay and, consequently, they are also processed at time k.
Hence, the processed observations at each time are expressed by the following model:
y_k = C_0 [(I_{mn_z} − Γ_k) z_k + Γ_k z_{k/k−1}] + C_1 Ψ_k z_{k−1}, k ≥ 1,   (4)–(5)
where C_0 = (I_{mn_z}, 0_{mn_z})^T and C_1 = (0_{mn_z}, I_{mn_z})^T.

Remark 2. For a better understanding of Model (4) for the measurements processed after the possible one-step transmission delays and losses, a single sensor is considered in the following comments. On the one hand, γ_k = 0 means that the output at the current sampling time, z_k, arrives on time to be processed; then, if ψ_k = 1, the delayed measurement z_{k−1} is also processed at time k. On the other hand, if γ_k = 1, the output z_k is either delayed or dropped, and its predictor z_{k/k−1} is processed at time k; again, if ψ_k = 1, the delayed measurement z_{k−1} is processed as well. Table 1 displays ten iterations of a specific simulation of packet transmission. From Table 1, it can be observed that z_1, z_3, z_6, z_7, and z_9 arrive on time to be processed; z_2, z_4, and z_8 are one-step delayed; and z_5 and z_10 are lost. So, Model (4) describes possible one-step random transmission delays and packet dropouts in networked systems, where one or two packets can be processed for the estimation. Finally, note that the predictors z_{k/k−1}, k = 2, 4, 5, 8, 10, are used to compensate for the measurements that do not arrive on time.
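A Table-1-style transmission pattern can be simulated directly from the Bernoulli description; the probability values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma_p = 0.5   # P(packet does not arrive on time)           (assumed)
psi_p   = 0.6   # P(delayed packet arrives one step later
                #   | it did not arrive on time)              (assumed)

T = 10
status = []
for k in range(1, T + 1):
    if rng.random() > gamma_p:
        status.append("on time")           # z_k processed at time k
    elif rng.random() < psi_p:
        status.append("one-step delayed")  # z_k processed at time k + 1
    else:
        status.append("lost")              # predictor z_{k|k-1} compensates

# At each time, the processing centre may therefore handle the current
# packet, the previous delayed packet, both, or neither.
assert set(status) <= {"on time", "one-step delayed", "lost"}
```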
The problem is then formulated as that of obtaining the LS linear estimator of the signal x_k based on the observations {y_1, …, y_L} given in (5). Next, some statistical properties of the processes involved in the observation models (2) and (5), which are necessary to address the LS linear estimation problem, are specified: (P3) the random matrices {(Γ_k, Ψ_{k+1})}_{k≥1} are independent, and their means are known; (P4) the signal process {x_k}_{k≥1} and the processes {H_k}_{k≥1}, {v_k}_{k≥1}, and {(Γ_k, Ψ_{k+1})}_{k≥1} are mutually independent; (P5) {z_k}_{k≥1} is a zero-mean process with covariance matrices Σ^z_{k,s} ≡ E[z_k z_s^T], s ≤ k, which, from (P4), are expressed in terms of the signal covariance and the known second-order moments of the entries of the random matrices H_k.

Remark 3. The matrices K^γ_k are known, and their entries are given in (3). Now, by defining the vectors ξ_k as in (6) and taking the Hadamard product properties into account, it is easy to check that the covariance matrices Σ^ξ_k ≡ E[ξ_k ξ_k^T] are given by (7).

Centralized Fusion Estimators
This section is concerned with the problem of obtaining recursive algorithms for the LS linear centralized fusion prediction, filtering, and fixed-point smoothing estimators. For this purpose, we used an innovation approach. Also, the estimation error covariance matrices, which measure the accuracy of the proposed estimators under the LS optimality criterion, were derived.
The centralized fusion structure for the considered networked systems with random uncertainties in the measured outputs and transmission is illustrated in Figure 1.

Innovation Technique
As indicated above, our aim was to obtain the optimal LS linear estimators, x k/L , of the signal x k based on the measurements {y 1 , . . . , y L }, given in (5), by recursive algorithms. Since the estimator x k/L is the orthogonal projection of the signal x k onto the linear space spanned by the nonorthogonal vectors {y 1 , . . . , y L }, we used an innovation approach in which the observation process {y k } k≥1 was transformed into an equivalent one (innovation process) of orthogonal vectors {µ k } k≥1 ; the equivalence means that each set {µ 1 , . . . , µ L } spans the same linear subspace as {y 1 , . . . , y L }.
The innovation at time k is defined as μ_k = y_k − y_{k/k−1}, where y_{1/0} = E[y_1] = 0 and, for k ≥ 2, y_{k/k−1}, the one-stage linear predictor of y_k, is the projection of y_k onto the linear space generated by {μ_1, …, μ_{k−1}}. Due to the orthogonality property of the innovations, and since the innovation process is uniquely determined by the observations, by replacing the observation process by the innovation one, the following general expression for the LS linear estimators of any vector w_k based on the observations {y_1, …, y_L} was obtained:
w_{k/L} = Σ_{h=1}^{L} E[w_k μ_h^T] Π_h^{−1} μ_h, L ≥ 1,   (8)
where Π_h ≡ E[μ_h μ_h^T] is the innovation covariance matrix. This expression is derived from the uncorrelation of the estimation error with all of the innovations, which is guaranteed by the Orthogonal Projection Lemma (OPL). As shown by (8), the first step to obtain the signal estimators is to find an explicit formula for the innovation or, equivalently, for the one-stage linear predictor of the observation.
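The innovation construction can be illustrated numerically: orthogonalizing a set of correlated observations by Gram–Schmidt and expanding the estimator over the resulting innovations, as in (8), reproduces the direct LS projection onto the original observations. This toy example (three scalar observations of a scalar signal, with all distributions assumed) is only a sketch of the general principle:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint sample: scalar signal x and three correlated observations y.
N = 200_000
x = rng.normal(size=N)
y = np.stack([x + rng.normal(size=N),
              0.5 * x + rng.normal(size=N),
              x + 0.5 * rng.normal(size=N)])

# Innovations via Gram-Schmidt: mu_k = y_k - proj(y_k | mu_1..mu_{k-1}),
# using the empirical inner product <u, v> = mean(u * v).
mu = y.copy()
for k in range(1, 3):
    for h in range(k):
        coef = np.mean(mu[k] * mu[h]) / np.mean(mu[h] ** 2)
        mu[k] = mu[k] - coef * mu[h]

# The innovations are (empirically) orthogonal.
for k in range(3):
    for h in range(k):
        assert abs(np.mean(mu[k] * mu[h])) < 1e-2

# LS linear estimator of x as in (8): sum_h E[x mu_h] Pi_h^{-1} mu_h ...
xhat = sum(np.mean(x * mu[h]) / np.mean(mu[h] ** 2) * mu[h] for h in range(3))
# ... matches the direct projection onto (y_1, y_2, y_3).
beta = np.linalg.solve(y @ y.T / N, y @ x / N)
assert np.allclose(xhat, beta @ y, atol=1e-6)
```

The advantage exploited by the paper is that the innovation expansion decouples into one term per innovation, which is what makes the recursive algorithms of Theorems 1 and 2 possible.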

One-Stage Observation Predictor
To obtain y_{k/k−1}, the projection of y_k onto the linear space generated by {μ_1, …, μ_{k−1}}, we used (5), noting that Ψ_k and H_{k−1} are correlated with the innovation μ_{k−1}. So, to simplify the derivation of y_{k/k−1}, the observations (5) were rewritten as in (9). Taking into account that Ψ_{k+1} and H_k are independent of μ_h for h ≤ k − 1, and according to the projection theory, y_{k/k−1} satisfies (10). This expression for the one-stage observation predictor, along with (8) for the LS linear estimators, constitutes the starting point to obtain the recursive prediction, filtering, and fixed-point smoothing algorithms.

Centralized Fusion Prediction, Filtering, and Smoothing Algorithms
The following theorem presents a recursive algorithm for the LS linear centralized fusion prediction and filtering estimators x k/L , L ≤ k, of the signal x k based on the observations {y 1 , . . . , y L } given in (5) or equivalently, in (9).
Theorem 1. Under hypotheses (H1)–(H5), the LS linear centralized fusion predictors and filters of the signal are given by x_{k/L} = A_k e_L, L ≤ k, where the vectors e_L and the matrices Σ^e_L ≡ E[e_L e_L^T] are recursively obtained, as are the matrices E_L ≡ E[e_L μ_L^T], in whose expression the matrices H^D_L, for D = A, B, appear. The innovations μ_L = y_L − y_{L/L−1} and the matrices E[V_L μ_L^T], with V_L defined through (9), are also computed recursively.
The innovation covariance matrices Π_L ≡ E[μ_L μ_L^T] are obtained in terms of the matrices Σ^ξ_L given in (7) and K^γ_L ≡ E[γ_L γ_L^T], whose entries are computed from (3). Proof. See Appendix A.
Next, a recursive algorithm for the LS linear centralized fusion smoothers x k/k+h at the fixed point k for any h ≥ 1 is established in the following theorem.

Theorem 2.
Under hypotheses (H1)–(H5), the LS linear centralized fixed-point smoothers x_{k/k+h} are calculated recursively, with the initial condition given by the centralized filter x_{k/k}; the matrices X_{k,k+h} ≡ E[x_k μ_{k+h}^T] and E_{k,k+h} ≡ E[x_k e_{k+h}^T] satisfy recursive formulas, and the fixed-point smoothing error covariance matrices are obtained with the initial condition given by the filtering error covariance matrix Σ_{k/k}. The filter x_{k/k}, the innovations μ_{k+h}, their covariance matrices Π_{k+h}, and the matrices E_{k+h}, Σ^e_{k+h}, and Σ_{k/k} are obtained from Theorem 1.
Proof. See Appendix B.

Numerical Simulation Example
The performance of the proposed centralized filtering and fixed-point smoothing algorithms was analyzed in a numerical simulation example which also shows how some of the sensor uncertainties covered by the current measurement model (1) with random parameter matrices influence the accuracy of the estimators. Also, the effect of the random transmission delays and packet losses on the performance of the estimators was analyzed.

Signal Process
Consider a discrete-time scalar signal process generated by the following model with state-dependent multiplicative noise, x_{k+1} = (0.9 + 0.01 ε_k) x_k + w_k, k ≥ 0, where x_0 is a standard Gaussian variable, and {w_k}_{k≥0}, {ε_k}_{k≥0} are zero-mean Gaussian white processes with unit variance. Assuming that x_0, {w_k}_{k≥0}, and {ε_k}_{k≥0} are mutually independent, the signal covariance is given by E[x_k x_s] = 0.9^{k−s} D_s, s ≤ k, where D_s = E[x_s^2] is obtained by D_s = (0.9^2 + 0.01^2) D_{s−1} + 1, s ≥ 1; D_0 = 1.
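Assuming the signal model stated above, the variance recursion D_s = (0.9^2 + 0.01^2) D_{s−1} + 1 can be checked both deterministically and by a quick Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Signal model: x_{k+1} = (0.9 + 0.01 * eps_k) * x_k + w_k, x_0 ~ N(0, 1),
# which yields D_s = E[x_s^2] = (0.9**2 + 0.01**2) * D_{s-1} + 1, D_0 = 1.

# Deterministic check of the variance recursion.
D = [1.0]
for s in range(1, 6):
    D.append((0.9 ** 2 + 0.01 ** 2) * D[-1] + 1.0)
assert abs(D[1] - 1.8101) < 1e-12

# Monte Carlo check of D_1 (loose tolerance for sampling error).
x0 = rng.normal(size=200_000)
x1 = (0.9 + 0.01 * rng.normal(size=200_000)) * x0 + rng.normal(size=200_000)
assert abs(np.var(x1) - D[1]) < 0.05
```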
Hence, hypothesis (H1) is satisfied with, for example, A_k = 0.9^k and B_s = 0.9^{−s} D_s. This signal process has been considered in the authors' previous papers and shows that hypothesis (H1) regarding the signal autocovariance function is satisfied for uncertain systems with state-dependent multiplicative noise. Also, situations where the system matrix in the state-space model is singular are covered by hypothesis (H1) (see, e.g., ref. [9]). Hence, this hypothesis provides a unified general context to deal with different situations, thus avoiding the derivation of specific algorithms for each of them.

Sensor Measured Outputs
As in ref. [20], let us consider four sensors that provide scalar measurements with different random failures, which are described using random parameters according to the theoretical model (1). Namely, sensor 1 has continuous gain degradation, sensor 2 has discrete gain degradation, sensor 3 has missing measurements, and sensor 4 has both missing measurements and multiplicative noise. Specifically, the scalar measured outputs are described according to this model, where the additive noises are defined as v^(i)_k = c_i η_k, with c_1 = 1, c_2 = 0.5, c_3 = c_4 = 0.75, and {η_k}_{k≥1} is a Gaussian white sequence with mean 0 and variance 0.5. The additive noises are correlated, with R^(ij)_k = 0.5 c_i c_j, k ≥ 1, i, j = 1, 2, 3, 4. The random measurement matrices H^(i)_k are defined according to the failure type of each sensor.
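The exact entries of the random matrices H^(i)_k are not reproduced here, so the following sketch uses assumed distributions that merely match the four described failure types (uniform continuous gain degradation, discrete gain levels, Bernoulli missing measurements, and Bernoulli with multiplicative noise); it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 50
theta3, theta4 = 0.5, 0.5          # P(signal present), assumed values

x = rng.normal(size=T)             # stand-in scalar signal samples
eta = rng.normal(0, np.sqrt(0.5), size=T)
c = [1.0, 0.5, 0.75, 0.75]         # additive-noise coefficients from the text

# Illustrative scalar random "matrices"; the distributions below are
# assumptions chosen to match each described failure type.
H1 = rng.uniform(0, 1, T)                         # continuous gain degradation
H2 = rng.choice([0.0, 0.5, 1.0], size=T)          # discrete gain degradation
H3 = rng.binomial(1, theta3, T)                   # missing measurements
H4 = rng.binomial(1, theta4, T) * (1 + 0.3 * rng.normal(size=T))  # + mult. noise

# Measured outputs z^(i)_k = H^(i)_k x_k + c_i eta_k, as in model (1).
z = [H * x + ci * eta for H, ci in zip([H1, H2, H3, H4], c)]
assert all(zi.shape == (T,) for zi in z)
```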

Model for the Measurements Processed
Now, according to the theoretical study, we assume that the sensor measurements y_k that are processed to update the estimators are modeled by (5), where {γ^(i)_k}_{k≥1} and {ψ^(i)_k}_{k≥2} are sequences of independent Bernoulli random variables whose distributions are determined by the following probabilities:
• γ^(i) ≡ P(γ^(i)_k = 1), k ≥ 1: probability that the measurement z^(i)_k does not arrive on time at the PC.
• ψ^(i)_γ ≡ P(ψ^(i)_k = 1 | γ^(i)_{k−1} = 1), k ≥ 2: probability that the measurement z^(i)_{k−1} is received at the current time k, conditioned on the fact that it was not received on time.
• ψ^(i) ≡ ψ^(i)_γ γ^(i), k ≥ 1: probability that the measurement z^(i)_{k−1} is received and processed at the current time k.
Finally, in order to apply the proposed algorithms, it was assumed that all the processes involved in the observation equations satisfy the independence hypotheses imposed on the theoretical model.
To illustrate the feasibility and effectiveness of the proposed algorithms, they were implemented in MATLAB, and fifty iterations of the filtering and fixed-point smoothing algorithms were performed. The estimation accuracy was examined by analyzing the error variances for different probabilities of the Bernoulli variables modeling the random failures in sensors 3 and 4 (θ (i) , i = 3, 4). Also, different values of the probabilities γ (i) , corresponding to the transmission uncertainties, and different conditional probabilities ψ (i) γ , i = 1, 2, 3, 4, were considered in order to analyze the effect of these failures on the estimation accuracy.
In the study of the performance of the centralized estimators, they were compared with local ones, which were computed using only the measurements received from each single sensor. In that case, the measurements processed at each local processor can be described by the single-sensor counterpart of Model (5).

Performance of the Centralized Fusion Filtering and Smoothing Estimators
For i = 1, 2, 3, 4, common values of the probabilities γ^(i) and ψ^(i)_γ were assumed, and the corresponding error variances are displayed in Figure 2. This figure shows, on the one hand, that the error variances of the centralized fusion filtering estimators are significantly smaller than those of every local estimator. Consequently, in agreement with what is theoretically expected, the centralized fusion filter has better accuracy than the local ones, as it is the optimal one based on the information from all the contributing sensors. On the other hand, Figure 2 also shows that as more observations are considered in the estimation, the error variances are lower and, consequently, the performance of the centralized estimators becomes better. In other words, the smoothing estimators are more accurate than the filtering ones, and the accuracy of the smoothers at each fixed point k improves as the number of available observations k + h increases, although this improvement is practically imperceptible for h > 3. Similar results were obtained for other values of the probabilities θ^(i), γ^(i), and ψ^(i)_γ.
From these figures, it is clear that the performance of the centralized fusion filter is indeed influenced by the probabilities θ^(i), i = 3, 4. Specifically, the performance of the centralized filter becomes poorer as θ^(i) decreases, which means that, as expected, the lower the probability of missing measurements is, the better the performance of the filter. Analogous results were obtained for the centralized smoothers and for other values of the probabilities.
Considering that the behavior of the error variances was analogous in all of the iterations, only the results at a specific iteration (k = 50) are displayed in the following figures.

Influence of the Probabilities γ (i) and ψ (i) γ
Considering θ^(i) = 0.5, i = 3, 4, as in Figure 2, we analyzed the influence of the random delays and packet dropouts on the performance of the centralized filtering estimators. We assume that the four sensors have the same probability of measurements not arriving on time (γ^(i) = γ, i = 1, 2, 3, 4) and also the same conditional probability (ψ^(i)_γ = ψ_γ, i = 1, 2, 3, 4). Figure 4 displays the centralized filtering error variances at k = 50 versus ψ_γ for γ varying from 0.1 to 0.9. This figure shows that, for each value of γ, the error variances decrease when the conditional probability increases. This result was expected since, for a fixed arbitrary value of γ, an increase in ψ_γ entails an increase in ψ, which is the probability of processing, at the current time, the measurement delayed at the previous time. Also, the decrease in the error variances is more evident for higher values of γ, which was also expected since ψ = ψ_γ γ and, hence, γ specifies the increasing rate of ψ with respect to ψ_γ.
Similar results to the previous ones and consequently, analogous conclusions, were deduced for the smoothing estimators and for different values of the probabilities γ (i) and ψ (i) γ at each sensor.
By way of example, the smoothing error variances Σ 50/51 are displayed in Figure 5 for some of the situations considered above.

Concluding Remarks
In this paper, recursive algorithms were designed for the LS linear centralized fusion prediction, filtering, and smoothing problems in networked systems with random parameter matrices in the measured outputs. At each sampling time, every sensor sends its measured output to the fusion centre where the data packets coming from all the sensors are gathered. Every data packet is assumed to be transmitted just once, but random delays and packet dropouts can occur during this transmission, so the estimator may receive either one packet, two packets, or nothing. When the current measurement of a sensor does not arrive punctually, the corresponding component of the stacked measured output predictor is used as the compensator in the design of the estimators.
Some of the main advantages of the current approach are the following:
• The consideration of random measurement matrices provides a general framework to address different uncertainties, such as missing measurements, multiplicative noise, or sensor gain degradation, as illustrated by the simulation example.
• The covariance-based approach used to design the estimation algorithms does not require knowledge of the state-space model, even though it is also applicable to the classical formulation using this model.
• In contrast to most estimation algorithms dealing with random delays and packet dropouts in the literature, the proposed ones do not require any state vector augmentation technique, and are thus computationally simpler.
• The current estimation algorithms were designed under the LS optimality criterion by an innovation approach, and no particular structure of the estimators is required.
Author Contributions: All authors have contributed equally to this work. R.C.-Á., A.H.-C. and J.L.-P. provided original ideas for the proposed model and they all collaborated in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation results; and the paper was also written and reviewed cooperatively.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Proof of Theorem 1. Based on the general expression (8), to obtain the LS linear estimators x_{k/L}, L ≤ k, it is necessary to calculate the coefficients X_{k,h} ≡ E[x_k μ_h^T], 1 ≤ h ≤ L. Using (5) and (15), together with expression (10) for y_{h/h−1} and (8) for x_{h/h−1} and x_{h−1/h−1}, the coefficients X_{k,h}, 1 ≤ h ≤ k, are expressed in a form which guarantees that X_{k,h} = A_k E_h, 1 ≤ h ≤ k, with E_h given by its recursive formula. Then, by defining e_L ≡ Σ_{h=1}^{L} E_h Π_h^{−1} μ_h and Σ^e_L ≡ E[e_L e_L^T] = Σ_{h=1}^{L} E_h Π_h^{−1} E_h^T, for L ≥ 1, and taking into account that E[e_L μ_h^T] = E_h, for h ≤ L, it is easy to obtain expressions (11)–(16). Next, expression (17) for E[V_L μ_L^T] = E[V_L y_L^T] is derived. Using (9) for V_L, the required term is split into the expectations E[Ψ_{L+1} z_L y_L^T] and E[x_L y_L^T], which are calculated as follows: from (5), we write y_L = C_0 (I_{mn_z} − Γ_L)(z_L − z_{L/L−1}) + C_1 Ψ_L z_{L−1} + C_0 z_{L/L−1}, and the independence properties yield each expectation. To obtain expression (18) for Π_L ≡ E[μ_L μ_L^T], we apply the OPL to write Π_L = E[y_L y_L^T] − E[y_{L/L−1} y_{L/L−1}^T]. On the one hand, using the OPL again, we express E[y_{L/L−1} y_{L/L−1}^T] = E[y_{L/L−1} y_L^T], which takes (16) into account. On the other hand, to obtain E[y_L y_L^T], we use (9) and (6) to write y_L = ξ_L + C_0 Γ_L z_{L/L−1}, and since z_{L/L−1} is expressed in terms of A_L e_{L−1}, the required expression is obtained from the definition of Σ^e_L after some manipulations. From the above expectations, again after some manipulations, expression (18) for Π_L is obtained.
To complete the proof, expression (19) is derived in a similar way.