Article

The Distributed and Centralized Fusion Filtering Problems of Tessarine Signals from Multi-Sensor Randomly Delayed and Missing Observations under Tk-Properness Conditions

by José D. Jiménez-López *, Rosa M. Fernández-Alcalá, Jesús Navarro-Moreno and Juan C. Ruiz-Molina
Department of Statistics and Operations Research, University of Jaén, Paraje Las Lagunillas, 23071 Jaén, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(22), 2961; https://doi.org/10.3390/math9222961
Submission received: 27 October 2021 / Revised: 15 November 2021 / Accepted: 16 November 2021 / Published: 19 November 2021
(This article belongs to the Section Probability and Statistics)

Abstract:
This paper addresses the fusion estimation problem in tessarine systems with multi-sensor observations affected by mixed uncertainties under T_k-properness conditions. Observations from each sensor can be updated, delayed, or contain only noise, and a correlation is assumed between the state and the observation noises. Recursive algorithms for the optimal local linear filter at each sensor, as well as for both the centralized and the distributed linear fusion estimators, are derived using an innovation approach. The T_k-properness assumption implies a reduction in the dimension of the augmented system, which yields computational savings in these algorithms compared to their counterparts derived from real or widely linear processing. A numerical simulation example illustrates the theoretical results and shows, among other aspects, that the difference in accuracy between the two fusion filters is insignificant, which means that the distributed filter, although suboptimal, is preferable in practice because of its lower computational cost.

1. Introduction

In the scientific community, there has recently been increasing interest in approaching different estimation problems in systems with observations proceeding from multiple sensors. This availability yields better estimates, since the information supplied by several sensors may compensate for the adverse effects of faulty sensors, communication errors, or the defects of relying on a single sensor. According to the way the information is fused, the estimation techniques can be categorized into two large groups: centralized and distributed methods. In the former, all the observations from all the sensors are sent directly to the fusion center to be processed in a single fusion estimator. In the latter, the observations at each sensor are processed independently, providing a local estimator, and, afterwards, a single distributed fusion estimator is built in the fusion center from these local estimators. Both fusion architectures have strengths and weaknesses, so they can be used alternatively in practice, depending on the problem at hand. The centralized method provides the theoretically optimal solution to the estimation problem; however, it may imply a heavy computational burden, a high bandwidth may be required, and the architecture cannot be changed [1,2,3,4]. The situation with the distributed fusion method is the opposite. It has great advantages over the centralized one, since it requires a lower computational load and communication cost, and it is more robust against failures and more flexible. Its disadvantage is that the optimality of the estimators is lost [2,5,6]. Nevertheless, this weakness is acceptable in view of the advantages of the methodology, given the slight difference that there may be in practice between the estimators obtained by both methods.
Furthermore, in data transmission problems, random delays, packet dropouts, or missing measurements frequently occur. These problems are caused by limited communication capabilities or failures in transmission components. In the real domain, there is a wide-ranging literature on signal linear processing under uncertain outputs based on both centralized (see, e.g., [7,8,9], among others) and distributed fusion methods (see, e.g., [10,11,12,13,14]). In all of them, different scenarios concerning the initial state-space model and the characteristics of the observations from the sensors are considered. More specifically, by considering these three mixed uncertainties, the centralized fusion linear optimal estimation problem has been solved in [8]. In multi-sensor systems with missing measurements, the centralized and distributed fusion estimators have been obtained in [9] and [11,12], respectively, by assuming different conditions for the noise variances and updating the state at each sensor. An analysis of the effects of these packet losses has been developed in [15] for the centralized Kalman filtering problem and in [12,16] for the distributed fusion filtering problem. Moreover, the distributed fusion estimation problem has also been studied in networked systems, for a class of uncertain sensor networks with multiple delays [7,10], and assuming correlated noises [13,14]. In the above references, the situations referring to the uncertainties in the observations are usually modeled by Bernoulli random parameters with known probabilities. Indeed, this is the most common probability distribution to describe the different types of uncertainty.
In the last two decades, the interest in hypercomplex domains has considerably grown due to their adequacy in describing a high number of physical phenomena. Their usefulness lies in the fact that they operate in higher-dimensional spaces, and are thus able to explain the relationships between the dimensions. As an example, we can mention the use of hypercomplex domains in virtual reality [17,18], acoustic applications [19,20], communication [21,22], image processing [23,24], seismic phenomena [25,26], robotics [27,28], materials [29,30], avionics [31,32], etc.
At the same time, the need arises to address, among other aspects, the signal processing problem by using 4D hypercomplex algebras. To date, quaternion algebra has been the most commonly used type of algebra in signal processing, since it is a normed division algebra. Specifically, in the optimal signal estimation problem, under the so-called widely linear (WL) processing, the quaternion signal and its three involutions have to be considered. Nevertheless, the processes involved in the model can present some properties that lead to a reduction in the dimension of the model by considering the quaternion signal itself, called strictly linear (SL) processing, or a two-dimensional vector given by the quaternion signal and its involution over the pure unit quaternion, named semi-widely linear (SWL) processing. This reduction in the dimension of the model entails a major decrease in the computational burden. In this framework, the WL estimation problem has been studied in [17,19,31] for different real problems, and algorithms based on the Kalman filter have been proposed in the quaternion field. Moreover, in multi-sensor systems, the WL distributed fusion estimation problem has been addressed in [33,34,35,36,37]. In addition, for systems with missing observations, WL and SWL filtering, prediction and smoothing algorithms have been designed in [38] and [39], respectively, with correlation hypotheses on the state and observation noises considered in the latter. Additionally, the WL estimation problem in multi-sensor systems with random delays, packet dropouts, and uncertain observations has been studied in [40]. It should be highlighted that quaternion processing is not always the most appropriate methodology. In fact, the use of tessarine signals can bring, under certain conditions, a better performance of the estimators, as has been tested in [41,42,43].
However, the signal estimation problem in the tessarine domain has hardly been studied at all. This is due to the fact that the set of tessarine random variables does not have a Hilbert space structure; therefore, it is difficult to calculate the LS linear estimator. Recently, in [41], a metric has been defined in the tessarine domain that satisfies the properties necessary to guarantee the existence and uniqueness of the projection. Moreover, by making an analogy with the quaternion domain, the authors define T 1 -properness, and prove that the tessarine widely linear (TWL) estimation error coincides with that of T 1 , which yields a considerable reduction in the computational cost of obtaining the optimal solution. Later, the T 2 -properness definition is established in [42], obtaining the same result for the TWL and T 2 estimation errors. These types of T k -properness have been used in [43] to approach the LS linear centralized fusion estimation problem of signals from multi-sensor observations with random delays.
Considering these benefits for the T k -properness, in this paper, we address both distributed and centralized fusion LS estimation problems in tessarine systems with mixed uncertainties of randomly delayed and missing observations proceeding from multiple sensors. At each sensor and instant of time, each tessarine component can be updated, delayed, or contain only noise, independently of the remaining sensors. We have also assumed a correlation between the signal and observation noises, which is a very desirable property for multi-sensor systems, since the observations act as output feedback [14]. By using an innovation approach, recursive algorithms are derived to compute the LS distributed and centralized fusion filters. These have been characterized for both T k -proper scenarios, where the reduction in the system dimension is clearly reflected. A decrease in the computational burden can also be observed by using the distributed filtering algorithm instead of the centralized one and, although the distributed algorithm provides suboptimal estimators, a simulation example shows that the differences between them may be insignificant in practice.
The rest of the paper is organized as follows. In Section 2, the notation used throughout the paper is established, and the main concepts and results in the tessarine domain are reviewed. Next, in Section 3, the tessarine state and observation equations (both the real and the available ones) are established, as well as the hypotheses on the processes involved. The different mixed uncertainties are modelled by Bernoulli random variables with known parameters, whose values of one or zero determine whether an observation is updated, delayed, or contains only noise. Then, the augmented model (considering the three conjugations) is presented in Section 4, and the conditions on the processes that guarantee T_k-properness are studied. The reduction of the model under these T_k-properness conditions is shown and, in this scenario, a recursive algorithm is proposed to obtain the local optimal LS linear filtering estimators. Subsequently, in Section 5, by considering the compact model, that is, the observations from all the sensors, a recursive algorithm is devised to obtain the optimal LS linear centralized fusion filter under T_k-properness conditions. Additionally, a method for the recursive computation of the T_k distributed fusion filter, as the LS matrix-weighted linear combination of the optimal local LS linear estimators, is proposed in Section 6. Afterwards, a numerical simulation example in Section 7 illustrates several points: (i) the superiority of both fusion filters over the local ones; (ii) the improvement in the performance of the fusion filters as the updating and delay probabilities increase, with updated observations yielding better accuracy than delayed ones for the same probability value; (iii) the accuracy of the centralized fusion filter is better than that of the distributed one, but the difference is almost insignificant. Finally, a section of conclusions (Section 8) and two appendices with the proofs of all the results in the paper are included.

2. Preliminaries

In this section, we introduce the basic notation used in this paper. Matrices will be written in bold upper-case letters, column vectors in bold lower-case letters, and scalar quantities in lightface letters. We will use the superscripts “*”, “T” and “H” to represent the tessarine conjugate, transpose and conjugate transpose, respectively. We will denote by 0_{n×m} the n × m zero matrix, by I_n the identity matrix of dimension n, and by 1_n and 0_n the column vectors of dimension n with all their elements equal to 1 or 0, respectively. The letters Z, R and T will represent the set of integers, the real field and the tessarine field, respectively. Moreover, A ∈ R^{n×m} (respectively, A ∈ T^{n×m}) means that A is a real (respectively, tessarine) n × m matrix. Similarly, r ∈ R^m (respectively, r ∈ T^m) means that r is an m-dimensional real (respectively, tessarine) vector.
In addition, E [ · ] and Cov ( · ) will denote the expectation and covariance operators and diag ( · ) is a diagonal matrix with the elements specified on the main diagonal. Finally, “∘” and “⊗” denote the Hadamard and Kronecker products, respectively, and δ t s , the Kronecker delta function. We will use the following property of the Hadamard product:
Property 1.
If A ∈ R^{n×n} and b ∈ R^n, then diag(b) A diag(b) = (b bᵀ) ∘ A.
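Property 1 is straightforward to verify numerically; a minimal sketch (illustrative only, using NumPy with arbitrary random data):

```python
import numpy as np

# Property 1: diag(b) A diag(b) = (b b^T) o A, with "o" the Hadamard product
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lhs = np.diag(b) @ A @ np.diag(b)   # diag(b) A diag(b)
rhs = np.outer(b, b) * A            # (b b^T) o A, entrywise
assert np.allclose(lhs, rhs)
```

Entrywise, both sides equal b_i A_{ij} b_j, which is why the identity holds.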
Throughout this paper, and unless otherwise stated, all the random variables are assumed to have zero-mean. Next, we present a review of the tessarine domain.
Definition 1.
A tessarine random signal x(t) ∈ T^n is a stochastic process of the form [41]

x(t) = x_r(t) + η x_η(t) + η′ x_η′(t) + η″ x_η″(t),    (1)

where x_ν(t) ∈ R^n, for ν = r, η, η′, η″, are real random signals and the imaginary units {η, η′, η″} satisfy the identities:

η η′ = η″,  η η″ = −η′,  η′ η″ = η,  η² = η″² = −1,  η′² = 1.
These properties of the imaginary units guarantee the commutative property of the product, implying a great advantage over the quaternion algebra, in which that property is not met. In contrast, the involutions defined in the quaternion domain cannot be defined in the tessarine one because they are auto-involutive. The conjugate of the tessarine signal given in (1) is defined as
x*(t) = x_r(t) − η x_η(t) + η′ x_η′(t) − η″ x_η″(t),
and the auxiliary tessarines as
x^η(t) = x_r(t) + η x_η(t) − η′ x_η′(t) − η″ x_η″(t),  x^η′(t) = x_r(t) − η x_η(t) − η′ x_η′(t) + η″ x_η″(t).
We also define the real vector formed by the components of x(t) in (1), x^r(t) = [x_rᵀ(t), x_ηᵀ(t), x_η′ᵀ(t), x_η″ᵀ(t)]ᵀ, and the augmented tessarine signal vector constituted by the tessarine signal x(t) and its conjugations, i.e., x̄(t) = [xᵀ(t), x*ᵀ(t), x^ηᵀ(t), x^η′ᵀ(t)]ᵀ. The relationship between both vectors is given by the following expression

x̄(t) = 2 T_n x^r(t),

where T_n = (1/2)(A ⊗ I_n), with

A = [ 1   η   η′   η″
      1  −η   η′  −η″
      1   η  −η′  −η″
      1  −η  −η′   η″ ].

The matrix T_n satisfies T_n^H T_n = I_{4n}.
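The algebra above can be checked mechanically. The sketch below is an illustrative implementation (not from the paper): it stores a tessarine by its four real components, encodes a commutative multiplication table consistent with the stated identities (the signs η η″ = −η′, η² = η″² = −1, η′² = 1 are assumptions, since they are not visible in the extracted text), and verifies both the commutativity of the product and the property T_n^H T_n = I_{4n} for n = 1, i.e., A^H A = 4 I₄:

```python
import numpy as np

# A tessarine x = x_r + e x_e + ep x_ep + epp x_epp is stored as [x_r, x_e, x_ep, x_epp],
# where e, ep, epp stand for eta, eta', eta''. Assumed (commutative) table:
# e^2 = epp^2 = -1, ep^2 = +1, e*ep = epp, ep*epp = e, e*epp = -ep.
def tmul(x, y):
    xr, xe, xp, xq = x
    yr, ye, yp, yq = y
    return np.array([
        xr*yr - xe*ye + xp*yp - xq*yq,   # real part
        xr*ye + xe*yr + xp*yq + xq*yp,   # eta part
        xr*yp + xp*yr - xe*yq - xq*ye,   # eta' part
        xr*yq + xq*yr + xe*yp + xp*ye,   # eta'' part
    ])

def tconj(x):
    # Conjugation flips the eta and eta'' components
    return np.array([x[0], -x[1], x[2], -x[3]])

one = np.array([1., 0., 0., 0.])
e   = np.array([0., 1., 0., 0.])
ep  = np.array([0., 0., 1., 0.])
epp = np.array([0., 0., 0., 1.])

# Commutativity of the tessarine product
x, y = np.array([1., 2., 3., 4.]), np.array([-2., 0.5, 1., 7.])
assert np.allclose(tmul(x, y), tmul(y, x))

# Reconstructed matrix A (signs assumed), rows matching x, x*, x^eta, x^eta'
A = [[one,  e,  ep,  epp],
     [one, -e,  ep, -epp],
     [one,  e, -ep, -epp],
     [one, -e, -ep,  epp]]

# A^H A = 4 I_4, i.e. T_n^H T_n = I_{4n} for n = 1
for j in range(4):
    for k in range(4):
        s = sum(tmul(tconj(A[i][j]), A[i][k]) for i in range(4))
        expected = 4 * one if j == k else np.zeros(4)
        assert np.allclose(s, expected)
```

Unlike the quaternion product, swapping the factors in `tmul` never changes the result, which is the commutativity advantage mentioned above.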
As in the quaternion field ([38]), we define the following product between tessarines, which will be crucial to model the intermittency and delay in the observations.
Definition 2.
The product ★ between two tessarine signals x(t), y(s) ∈ T^n is defined as

x(t) ★ y(s) = x_r(t) y_r(s) + η x_η(t) y_η(s) + η′ x_η′(t) y_η′(s) + η″ x_η″(t) y_η″(s),

where the products of the real component vectors are taken componentwise.
The following property of the product ★ is easy to check.
Property 2.
The augmented vector of x(t) ★ y(s) is D_{x(t)} ȳ(s), where D_{x(t)} = T_n diag(x^r(t)) T_n^H.
Definition 3.
The pseudo autocorrelation function of the random signal x ( t ) T n is defined as Γ x ( t , s ) = E x ( t ) x H ( s ) , t , s Z , and the pseudo cross-correlation function of the random signals x ( t ) T n and y ( t ) T m is defined as Γ x y ( t , s ) = E x ( t ) y H ( s ) , t , s Z .
The concepts of T₁- and T₂-properness, which we define next, were recently introduced in [41] and [42], respectively, and consist of the vanishing of some of the pseudo correlation functions of the signal with its conjugations, in a manner analogous to the quaternion domain.
Definition 4.
A random signal x(t) ∈ T^n is said to be T₁-proper (respectively, T₂-proper) if, and only if, the functions Γ_{x x^ν}(t, s), with ν = *, η, η′ (respectively, ν = η, η′), vanish for all t, s ∈ Z. Similarly, two random signals x(t) ∈ T^n and y(t) ∈ T^m are cross T₁-proper (respectively, cross T₂-proper) if, and only if, the functions Γ_{x y^ν}(t, s), with ν = *, η, η′ (respectively, ν = η, η′), vanish for all t, s ∈ Z. Finally, x(t) and y(t) are jointly T₁-proper (respectively, jointly T₂-proper) if, and only if, they are T₁-proper (respectively, T₂-proper) and cross T₁-proper (respectively, cross T₂-proper).
Note that, in TWL processing, satisfying T_k-properness leads to a considerable reduction in the dimensions of the processes involved, with the consequent decrease in the computational cost. More specifically, in the most general case, TWL processing uses the augmented vectors formed by the process and its three conjugations. However, if tessarine linear processing is applied by considering only the process itself, or the process and its conjugate, the approaches are termed T₁ and T₂ linear processing, respectively. In [41], the authors prove that, in general, the TWL estimation error is lower than that of T₁, but that under T₁-properness conditions both the TWL and T₁ estimators coincide. Analogous considerations about T₂ processing are shown in [42]. Moreover, both papers propose statistical tests to experimentally check whether a signal is T_k-proper, for k = 1, 2, and prove that, under certain conditions, tessarine processing can obtain better estimators than quaternion processing. These statements justify that T_k-properness conditions are very desirable in practice. In this sense, it is of great interest to determine under what conditions T_k-properness is guaranteed.

3. Model Formulation

Consider a tessarine state x(t) ∈ T^n satisfying the following state equation:

x(t + 1) = F₁(t) x(t) + F₂(t) x*(t) + F₃(t) x^η(t) + F₄(t) x^η′(t) + u(t),  t ≥ 0,    (2)

where F_i(t) ∈ T^{n×n}, for i = 1, …, 4, are deterministic matrices and u(t) ∈ T^n is a tessarine noise.
We also assume that there exist R sensors, each of them providing an observation z^(i)(t) ∈ T^n satisfying the following equation:

z^(i)(t) = x(t) + v^(i)(t),  t ≥ 1,  i = 1, …, R,    (3)

with v^(i)(t) ∈ T^n a tessarine noise.
As it occurs in multiple practical situations, due to failures in communication channels, network congestion or some other causes, the available observations can suffer delays and even contain only noise in an intermittent manner. This fact can be modelled by the following equation for the available observation in each sensor, y ( i ) ( t ) T n , for i = 1 , , R :
y^(i)(t) = γ₁^(i)(t) ★ z^(i)(t) + γ₂^(i)(t) ★ z^(i)(t − 1) + (1_n − γ₁^(i)(t) − γ₂^(i)(t)) ★ v^(i)(t),  t ≥ 2;  y^(i)(1) = z^(i)(1).    (4)
For each sensor i = 1, …, R and for j = 1, 2, γ_j^(i)(t) = [γ_{j1}^(i)(t), …, γ_{jn}^(i)(t)]ᵀ ∈ T^n is a tessarine random vector whose components γ_{jm,ν}^(i)(t), with m = 1, …, n and ν = r, η, η′, η″, are independent Bernoulli random variables with known probabilities p_{jm,ν}^(i)(t). Values of one or zero for these Bernoulli variables indicate whether the corresponding component of the available observation is updated, one-step delayed, or contains only noise. These variables must satisfy, for each i = 1, …, R, m = 1, …, n, ν = r, η, η′, η″, either γ_{1m,ν}^(i)(t) + γ_{2m,ν}^(i)(t) = 1 or γ_{1m,ν}^(i)(t) + γ_{2m,ν}^(i)(t) = 0 at each instant of time; that is, if one of them takes the value 1, the other is 0, or both are 0; moreover, p_{1m,ν}^(i)(t) + p_{2m,ν}^(i)(t) ≤ 1. In this sense, if γ_{1m,ν}^(i)(t) = 1, then y_{m,ν}^(i)(t) = z_{m,ν}^(i)(t); that is, the observation component is updated. If γ_{2m,ν}^(i)(t) = 1, then y_{m,ν}^(i)(t) = z_{m,ν}^(i)(t − 1); that is, the available observation component is the one delayed by one instant of time. Finally, if γ_{1m,ν}^(i)(t) = γ_{2m,ν}^(i)(t) = 0, then y_{m,ν}^(i)(t) = v_{m,ν}^(i)(t); that is, it contains only noise. Observe that the initial available observation is updated.
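To see what the available observation equation does per component, one can simulate the gating variables directly. The sketch below is purely illustrative (one scalar real component, hypothetical probabilities p₁ = 0.6 and p₂ = 0.3): at each instant it draws one of the three mutually exclusive situations — updated, one-step delayed, or noise-only — which automatically enforces the constraint γ₁ + γ₂ ∈ {0, 1}:

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 0.6, 0.3        # hypothetical updating/delay probabilities, p1 + p2 <= 1
T = 10_000

# Draw the mutually exclusive pair (gamma1, gamma2): (1,0), (0,1) or (0,0)
case = rng.choice(3, size=T, p=[p1, p2, 1 - p1 - p2])
g1 = (case == 0).astype(float)
g2 = (case == 1).astype(float)

x = rng.standard_normal(T + 1)   # stand-in state component
v = rng.standard_normal(T + 1)   # stand-in observation noise
z = x + v                        # z(t) = x(t) + v(t)

# y(t) = g1 z(t) + g2 z(t-1) + (1 - g1 - g2) v(t), one real component
t = np.arange(1, T + 1)
y = g1 * z[t] + g2 * z[t - 1] + (1 - g1 - g2) * v[t]

assert np.all(g1 + g2 <= 1)       # exclusivity constraint holds
assert abs(g1.mean() - p1) < 0.03 # empirical frequencies match the probabilities
assert abs(g2.mean() - p2) < 0.03
```

In the tessarine model each of the 4n real components of y^(i)(t) is gated this way, independently across components and sensors.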
We assume the following hypotheses for the model defined in (2)–(4):
Hypothesis 1 (H1).
{ u ( t ) ; t 0 } and { v ( i ) ( t ) ; t 1 } , i = 1 , , R are white noises with pseudo variances Q ( t ) and R ( i ) ( t ) , respectively. Moreover, they are correlated at the same instant of time, with E u ( t ) v ( i ) H ( t ) = S ( i ) ( t ) .
Hypothesis 2 (H2).
v ( i ) ( t ) is independent of v ( j ) ( t ) for i , j = 1 , , R , with i j .
Hypothesis 3 (H3).
For each sensor i = 1, …, R, and j = 1, 2, the Bernoulli variables in γ_j^(i)(t) are independent of those in γ_j^(i)(s), for s ≠ t. The same holds for the variables in γ_j^(i)(t) and γ_j^(l)(t), with i ≠ l.
Hypothesis 4 (H4).
The initial state, x ( 0 ) , (whose pseudo variance is denoted by P 0 ) and the noises { u ( t ) ; t 0 } , { v ( i ) ( t ) ; t 1 } and { γ j ( i ) ( t ) ; t 2 } , for j = 1 , 2 , i = 1 , , R , are mutually independent.
An example of the above scenario can be found in the problem of dynamic targeting tracking (see, e.g., [13]) and, in particular, in the problem of tracking the rotations of an aircraft [35,36].

4. Local T k -Proper LS Linear Filtering Problem

In this section, the local LS linear filtering problem is studied for the tessarine model described above; that is, for each sensor i = 1, …, R, the optimal linear estimation problem of the signal x(t) is addressed from the observations provided by that sensor. For this purpose, the model is first formulated at each sensor, considering both the delays and the intermittency in the available observation equation. Moreover, owing to the great computational advantages involved, conditions on the processes that ensure T_k-properness are proposed. Once the model has been formulated, a recursive algorithm is presented to obtain the optimal local LS linear filters. In Section 6, this algorithm will be necessary to address the distributed fusion LS linear filtering problem.

4.1. Local T k -Proper Model Formulation

For each sensor i = 1 , , R , the augmented version of the model defined in Equations (2)–(4) is as follows
x̄(t + 1) = Φ̄(t) x̄(t) + ū(t),  t ≥ 0,
z̄^(i)(t) = x̄(t) + v̄^(i)(t),  t ≥ 1,
ȳ^(i)(t) = D_{γ₁^(i)}(t) z̄^(i)(t) + D_{γ₂^(i)}(t) z̄^(i)(t − 1) + D_{1−γ₁^(i)−γ₂^(i)}(t) v̄^(i)(t),  t ≥ 2;  ȳ^(i)(1) = z̄^(i)(1),    (5)
where
Φ̄(t) = [ F₁(t)      F₂(t)      F₃(t)      F₄(t)
          F₂*(t)     F₁*(t)     F₄*(t)     F₃*(t)
          F₃^η(t)    F₄^η(t)    F₁^η(t)    F₂^η(t)
          F₄^η′(t)   F₃^η′(t)   F₂^η′(t)   F₁^η′(t) ],
with D γ j ( i ) ( t ) , for j = 1 , 2 , and D 1 γ 1 ( i ) γ 2 ( i ) ( t ) , defined in Property 2. Moreover, the pseudo variances of the augmented white noises u ¯ ( t ) and v ¯ ( i ) ( t ) are denoted by Q ¯ ( t ) and R ¯ ( i ) ( t ) , respectively, E u ¯ ( t ) v ¯ ( i ) H ( s ) = S ¯ ( i ) ( t ) δ t , s , and E x ¯ ( 0 ) x ¯ H ( 0 ) = P ¯ 0 .
Next, conditions for the processes involved in (5), which guarantee a T k -properness scenario, are given in Property 3.
Property 3.
Consider the model described in Equation (5):
1.
If x ( 0 ) and u ( t ) are T 1 -proper, and Φ ¯ ( t ) is a block diagonal matrix of the form
Φ̄(t) = diag(F₁(t), F₁*(t), F₁^η(t), F₁^η′(t)),
then x(t) is T₁-proper. If, additionally, p_{jm,r}^(i)(t) = p_{jm,η}^(i)(t) = p_{jm,η′}^(i)(t) = p_{jm,η″}^(i)(t) ≜ p_{jm}^(i)(t), for all t, j, m, i, v^(i)(t) is T₁-proper, and u(t) and v^(i)(t) are cross T₁-proper, then x(t) and y^(i)(t) are jointly T₁-proper. In this scenario,
Π_{γ_j^(i)}(t) = E[D_{γ_j^(i)}(t)] = I₄ ⊗ Π_{1j}^(i)(t),  i = 1, …, R,  j = 1, 2,
with
Π 1 j ( i ) ( t ) = diag p j 1 , r ( i ) ( t ) , , p j n , r ( i ) ( t ) , i = 1 , , R , j = 1 , 2 .
Moreover, Π_{1−γ₁^(i)−γ₂^(i)}(t) = E[D_{1−γ₁^(i)−γ₂^(i)}(t)] = I_{4n} − Π_{γ₁^(i)}(t) − Π_{γ₂^(i)}(t).
2.
By substituting in the last item T 1 by T 2 and the matrix Φ ¯ ( t ) by this other
Φ̄(t) = diag(Φ₂(t), Φ₂^η(t)),  with  Φ₂(t) = [ F₁(t)  F₂(t) ; F₂*(t)  F₁*(t) ],    (6)
and assuming that the Bernoulli parameters now satisfy p_{jm,r}^(i)(t) = p_{jm,η′}^(i)(t) and p_{jm,η}^(i)(t) = p_{jm,η″}^(i)(t), for all t, j, m, i, an identical property can be established for the joint T₂-properness of x(t) and y^(i)(t). In this case, the expectations of the multiplicative noises are given by
Π_{γ_j^(i)}(t) = diag(Π_{2j}^(i)(t), Π_{2j}^(i)(t)),  i = 1, …, R,  j = 1, 2,
Π_{1−γ₁^(i)−γ₂^(i)}(t) = I_{4n} − Π_{γ₁^(i)}(t) − Π_{γ₂^(i)}(t),  i = 1, …, R,
with
Π_{2j}^(i)(t) = (1/2) [ Π_{aj}^(i)(t)  Π_{bj}^(i)(t) ; Π_{bj}^(i)(t)  Π_{aj}^(i)(t) ],  i = 1, …, R,  j = 1, 2,
and
Π_{aj}^(i)(t) = diag(p_{j1,r}^(i)(t) + p_{j1,η}^(i)(t), …, p_{jn,r}^(i)(t) + p_{jn,η}^(i)(t)),  i = 1, …, R,  j = 1, 2,
Π_{bj}^(i)(t) = diag(p_{j1,r}^(i)(t) − p_{j1,η}^(i)(t), …, p_{jn,r}^(i)(t) − p_{jn,η}^(i)(t)),  i = 1, …, R,  j = 1, 2.
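The block structure of Π_{2j}^(i)(t) can be sanity-checked with a small helper. The sketch below is illustrative (hypothetical probability vectors; it assumes Π_a = diag(p_r + p_η) and Π_b = diag(p_r − p_η), the minus sign in Π_b not being visible in the extracted text): when all component probabilities coincide — the situation of the T₁ case — Π_{2j} collapses to the diagonal matrix of probabilities, consistently with the diagonal Π_{1j}:

```python
import numpy as np

def pi_2j(p_r, p_eta):
    """Build Pi_2j from the component probabilities p_{jm,r} and p_{jm,eta}."""
    Pa = np.diag(p_r + p_eta)   # Pi_a = diag(p_r + p_eta)
    Pb = np.diag(p_r - p_eta)   # Pi_b = diag(p_r - p_eta), sign assumed
    return 0.5 * np.block([[Pa, Pb], [Pb, Pa]])

p = np.array([0.7, 0.4, 0.9])
P = pi_2j(p, p)   # degenerate case p_r = p_eta
# Pi_2j reduces to diag(p, p): the diagonal structure of the T1 scenario
assert np.allclose(P, np.diag(np.concatenate([p, p])))
```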
Note that these T_k-properness conditions allow us to reduce the dimension of the available observations; that is, y₁^(i)(t) ≡ y^(i)(t) and y₂^(i)(t) ≡ [y^(i)ᵀ(t), y^(i)H(t)]ᵀ, for the T₁ and T₂-proper scenarios, respectively, and hence, the observation equation given in (5) is now expressed in the following terms:
y_k^(i)(t) = D_{k,γ₁^(i)}(t) z̄^(i)(t) + D_{k,γ₂^(i)}(t) z̄^(i)(t − 1) + D_{k,1−γ₁^(i)−γ₂^(i)}(t) v̄^(i)(t),  t ≥ 2;  y_k^(i)(1) = [I_{kn}, 0_{kn×(4−k)n}] z̄^(i)(1),    (7)
where
D k γ j ( i ) ( t ) = T k diag γ j ( i ) r ( t ) T n H , i = 1 , , R , j = 1 , 2 , D k 1 γ 1 ( i ) γ 2 ( i ) ( t ) = T k diag 1 4 n γ 1 ( i ) r ( t ) γ 2 ( i ) r ( t ) T n H , i = 1 , , R ,
with
T_k = (1/2)(B_k ⊗ I_n),

and

B_k = [1  η  η′  η″], for k = 1;    B_k = [ 1  η  η′  η″ ; 1  −η  η′  −η″ ], for k = 2.
Moreover, under T k -properness, for i = 1 , , R and j = 1 , 2 ,
Π_{k,γ_j^(i)}(t) = E[D_{k,γ_j^(i)}(t)] = [Π_{kj}^(i)(t), 0_{kn×(4−k)n}],
Π_{k,1−γ₁^(i)−γ₂^(i)}(t) = E[D_{k,1−γ₁^(i)−γ₂^(i)}(t)] = [I_{kn} − Π_{k1}^(i)(t) − Π_{k2}^(i)(t), 0_{kn×(4−k)n}],
where the matrices Π k j ( i ) ( t ) , for k = 1 , 2 , are defined in Property 3.
Remark 1.
Let us observe that the T_k-properness also causes a dimension reduction in the remaining processes and matrices involved in Equation (5). More specifically, in the T₁-proper scenario, x̄(t), ū(t), z̄^(i)(t), v̄^(i)(t) and Φ̄(t) are substituted by x₁(t) ≡ x(t), u₁(t) ≡ u(t), z₁^(i)(t) ≡ z^(i)(t), v₁^(i)(t) ≡ v^(i)(t) and Φ₁(t) ≡ F₁(t); and, in the T₂-proper scenario, by x₂(t) ≡ [xᵀ(t), x^H(t)]ᵀ, u₂(t) ≡ [uᵀ(t), u^H(t)]ᵀ, z₂^(i)(t) ≡ [z^(i)ᵀ(t), z^(i)H(t)]ᵀ, v₂^(i)(t) ≡ [v^(i)ᵀ(t), v^(i)H(t)]ᵀ and Φ₂(t) (given in (6)). The corresponding pseudo variance matrices of the noises u_k(t) and v_k^(i)(t), as well as their pseudo cross-covariance matrix, will be denoted by Q_k(t), R_k^(i)(t) and S_k^(i)(t), respectively; moreover, P₀_k = E[x_k(0) x_k^H(0)].

4.2. Local LS Filtering Estimators

Given the model described in (5), with the available observation equation in (7) for the T_k-proper scenarios, a recursive algorithm to calculate the local LS filtering estimators is presented in this section. This estimator, denoted by x̂^(i)T_k(t|t), represents, at each sensor i, the optimal LS linear estimator of the signal x(t) from the observations y_k^(i)(1), …, y_k^(i)(t). Note that this estimator can be obtained by projecting x(t) onto the linear space spanned by the observations y_k^(i)(1), …, y_k^(i)(t). In the tessarine domain, the existence and uniqueness of this projection is guaranteed ([41]). To derive the algorithm, the innovations, defined as ε_k^(i)(t) = y_k^(i)(t) − ŷ_k^(i)(t|t−1), are used instead of the observations, where ŷ_k^(i)(t|t−1) is the optimal LS linear estimator of y_k^(i)(t) from the observations up to the previous instant, that is, y_k^(i)(1), …, y_k^(i)(t−1).
Theorem 1.
For each sensor i = 1 , , R , the optimal filter, x ^ ( i ) T k ( t | t ) , is obtained by extracting the first n components of x ^ k ( i ) ( t | t ) , which satisfies this expression
x ^ k ( i ) ( t | t ) = x ^ k ( i ) ( t | t 1 ) + L k ( i ) ( t ) ϵ k ( i ) ( t ) , t 1 ,
where x ^ k ( i ) ( t + 1 | t ) , can be recursively calculated as
x ^ k ( i ) ( t + 1 | t ) = Φ k ( t ) x ^ k ( i ) ( t | t ) + H k ( i ) ( t ) ϵ k ( i ) ( t ) , t 1 ,
with initial conditions x̂_k^(i)(1|0) = x̂_k^(i)(0|0) = 0_{kn}, L_k^(i)(t) = Θ_k^(i)(t) Ω_k^(i)−1(t) and H_k^(i)(t) = S_k^(i)(t) (I_{kn} − Π_{k2}^(i)(t)) Ω_k^(i)−1(t).
The innovations, ϵ k ( i ) ( t ) , satisfy this relation
ε_k^(i)(t) = y_k^(i)(t) − Π_{k1}^(i)(t) x̂_k^(i)(t|t−1) − Π_{k2}^(i)(t) [x̂_k^(i)(t−1|t−1) + G_k^(i)(t−1) ε_k^(i)(t−1)],  t ≥ 2,
with initial condition ε_k^(i)(1) = y_k^(i)(1), and G_k^(i)(t) = R_k^(i)(t) (I_{kn} − Π_{k2}^(i)(t)) Ω_k^(i)−1(t).
Moreover, the matrices Θ k ( i ) ( t ) are obtained by this expression
Θ k ( i ) ( t ) = P k ( i ) ( t | t 1 ) Π k 1 ( i ) ( t ) + Φ k ( t 1 ) P k ( i ) ( t 1 | t 1 ) Π k 2 ( i ) ( t ) H k ( i ) ( t 1 ) Θ k ( i ) ( t 1 ) + G k ( i ) ( t 1 ) Ω k ( i ) ( t 1 ) H Π k 2 ( i ) ( t ) + S k ( i ) ( t 1 ) Φ k ( t 1 ) Θ k ( i ) ( t 1 ) G k ( i ) H ( t 1 ) Π k 2 ( i ) ( t ) , t 2 ; Θ k ( i ) ( 1 ) = D k ( 1 ) ,
with
D_k(1) = [I_{kn}, 0_{kn×(4−k)n}] D̄(1) [I_{kn}, 0_{kn×(4−k)n}]ᵀ,
and D ¯ ( 1 ) given in (16).
The pseudo-covariance matrix of the innovations, Ω k ( i ) ( t ) = E ϵ k ( i ) ( t ) ϵ k ( i ) H ( t ) , is calculated as follows
Ω k ( i ) ( t ) = Ψ 1 k ( i ) ( t ) + Ψ 2 k ( i ) ( t ) + Ψ 2 k ( i ) H ( t ) + Ψ 3 k ( i ) ( t ) + Ψ 4 k ( i ) ( t ) + Π k 1 ( i ) ( t ) P k ( i ) ( t | t 1 ) Π k 1 ( i ) ( t ) + Π k 1 ( i ) ( t ) J k ( i ) ( t 1 ) Π k 2 ( i ) ( t ) + Π k 2 ( i ) ( t ) J k ( i ) H ( t 1 ) Π k 1 ( i ) ( t ) + Π k 2 ( i ) ( t ) P k ( i ) ( t 1 | t 1 ) Θ k ( i ) ( t 1 ) G k ( i ) H ( t 1 ) G k ( i ) ( t 1 ) Θ k ( i ) H ( t 1 ) G k ( i ) ( t 1 ) Ω k ( i ) ( t 1 ) G k ( i ) H ( t 1 ) Π k 2 ( i ) ( t ) , t 2 ; Ω k ( i ) ( 1 ) = D k ( 1 ) + R k ( i ) ( 1 ) ,
with the matrices Ψ l k ( i ) ( t ) , for l = 1 , 2 , 3 , 4 , given by
Ψ 1 k ( i ) ( t ) = T k Cov γ 1 ( i ) r ( t ) T n H D ¯ ( t ) T n T k H , Ψ 2 k ( i ) ( t ) = T k Cov γ 1 ( i ) r ( t ) , γ 2 ( i ) r ( t ) T n H Φ ¯ ( t 1 ) D ¯ ( t 1 ) + S ¯ ( i ) ( t 1 ) T n T k H , Ψ 3 k ( i ) ( t ) = T k Cov γ 2 ( i ) r ( t ) T n H D ¯ ( t 1 ) T n T k H , Ψ 4 k ( i ) ( t ) = T k E 1 γ 2 ( i ) r ( t ) 1 γ 2 ( i ) r ( t ) T T n H R ¯ ( i ) ( t ) T n T k H + T k E γ 2 ( i ) r ( t ) γ 2 ( i ) r T ( t ) T n H R ¯ ( i ) ( t 1 ) T n T k H ,
and $J_k^{(i)}(t)$, by the relation
$$J_k^{(i)}(t) = \Phi_k(t)P_k^{(i)}(t|t) - H_k^{(i)}(t)\Theta_k^{(i)\mathrm{H}}(t) - \Phi_k(t)\Theta_k^{(i)}(t)G_k^{(i)\mathrm{H}}(t) + S_k^{(i)}(t) - H_k^{(i)}(t)\Omega_k^{(i)}(t)G_k^{(i)\mathrm{H}}(t).$$
Moreover, $\bar{D}(t)$ satisfies the recursive formula
$$\bar{D}(t) = \bar{\Phi}(t-1)\bar{D}(t-1)\bar{\Phi}^{\mathrm{H}}(t-1) + \bar{Q}(t-1), \quad t \geq 1; \qquad \bar{D}(0) = \bar{P}_0.$$
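This Lyapunov-type recursion for the state pseudo-covariance can be sketched directly in a few lines; the following is a minimal numerical illustration (the helper name and all matrices are made up, not taken from the paper):

```python
import numpy as np

def state_pseudo_cov(Phi_seq, Q_seq, P0):
    """Propagate D(t) = Phi(t-1) D(t-1) Phi(t-1)^H + Q(t-1), with
    D(0) = P0, and return D(T) after T = len(Phi_seq) steps."""
    D = np.asarray(P0, dtype=complex)
    for Phi, Q in zip(Phi_seq, Q_seq):
        D = Phi @ D @ Phi.conj().T + Q
    return D
```

For a time-invariant scalar model with $\bar{\Phi} = 0.5$, $\bar{Q} = 1$ and $\bar{P}_0 = 1$, two steps give $D(2) = 0.25 \cdot (0.25 \cdot 1 + 1) + 1 = 1.3125$.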
Finally, the pseudo-covariance matrices of the filtering errors, $P^{(i)T_k}(t|t)$, are obtained from $P_k^{(i)}(t|t)$, which satisfies the recursive formula
$$P_k^{(i)}(t|t) = P_k^{(i)}(t|t-1) - \Theta_k^{(i)}(t)\Omega_k^{(i)-1}(t)\Theta_k^{(i)\mathrm{H}}(t), \quad t \geq 1,$$
with $P_k^{(i)}(t+1|t)$ calculated from
$$P_k^{(i)}(t+1|t) = \Phi_k(t)P_k^{(i)}(t|t)\Phi_k^{\mathrm{H}}(t) - \Phi_k(t)\Theta_k^{(i)}(t)H_k^{(i)\mathrm{H}}(t) - H_k^{(i)}(t)\Theta_k^{(i)\mathrm{H}}(t)\Phi_k^{\mathrm{H}}(t) - H_k^{(i)}(t)\Omega_k^{(i)}(t)H_k^{(i)\mathrm{H}}(t) + Q_k(t), \quad t \geq 1,$$
and initial conditions $P_k^{(i)}(0|0) = P_{0k}$ and $P_k^{(i)}(1|0) = D_k(1)$.
The proof is deferred to Appendix A.
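To fix ideas, the predict/update structure of Theorem 1 can be sketched numerically. The code below is a simplified illustration only: it assumes $\Pi_{k2}^{(i)} = 0$ (no delayed observations) and $S_k^{(i)} = 0$ (uncorrelated state and observation noises), in which case the recursions collapse to a standard Kalman-type filter on the $kn$-dimensional vectors; the function name and all data are illustrative:

```python
import numpy as np

def local_filter(ys, Phi, Pi1, Q, R, P0):
    """Local LS linear filter sketch for the simplified case Pi2 = 0 and
    S = 0, where Theta(t) = P(t|t-1) Pi1^H, the innovation covariance is
    Omega(t) = Pi1 P(t|t-1) Pi1^H + R and the gain is L(t) = Theta Omega^{-1}."""
    x_pred = np.zeros(Phi.shape[0])
    P_pred = np.asarray(P0, dtype=float)
    filtered = []
    for y in ys:
        Omega = Pi1 @ P_pred @ Pi1.T + R            # innovation covariance
        L = P_pred @ Pi1.T @ np.linalg.inv(Omega)   # filter gain
        x_filt = x_pred + L @ (y - Pi1 @ x_pred)    # filtering update
        P_filt = P_pred - L @ Omega @ L.T           # P(t|t) = P(t|t-1) - Theta Omega^{-1} Theta^H
        filtered.append(x_filt)
        x_pred = Phi @ x_filt                       # one-step prediction
        P_pred = Phi @ P_filt @ Phi.T + Q
    return np.array(filtered), P_filt
```

Note that $L(t)\Omega(t)L^{\mathrm{H}}(t) = \Theta(t)\Omega^{-1}(t)\Theta^{\mathrm{H}}(t)$, so the covariance update above coincides with the one stated in the theorem for this simplified case.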
Remark 2.
Notice that, similarly to QWL processing, TWL processing is isomorphic to real processing; thus, any WL algorithm can be equivalently expressed through a real formalism. In other words, the three approaches (QWL, TWL and $\mathbb{R}^4$) are completely equivalent. However, this equivalence vanishes under properness conditions.
Indeed, under $T_k$-properness conditions, for $k = 1, 2$, the dimension of the observation vector is reduced by a factor of $4/k$, which leads to estimation algorithms with a lower computational load than those derived from a TWL approach (see [44] for further details). Specifically, this computational load is of order $O(64n^3)$ for the TWL local LS filtering algorithm, whereas it is of order $O(k^3n^3)$ for the $T_k$ algorithms, $k = 1, 2$.
Similar comments can be applied to the following T k centralized and distributed fusion filtering algorithms.
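These orders of magnitude can be checked with a couple of lines (a toy illustration of the leading-order counts only; the function names are made up):

```python
def twl_cost(n):
    """Leading-order operation count per step of the TWL local filter: O(64 n^3)."""
    return 64 * n ** 3

def tk_cost(k, n):
    """Leading-order operation count per step of the T_k-proper filter: O(k^3 n^3)."""
    return k ** 3 * n ** 3

# The saving factor 64 / k^3 does not depend on n:
# 64x in the T_1-proper case and 8x in the T_2-proper case.
```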

5. T k -Proper Centralized Fusion Linear Filtering Problem

In this section, we address the centralized fusion LS linear filtering problem, in which the observations from all the sensors are used jointly. For this purpose, we consider the augmented vectors of the real and available observations, denoted by $z(t) = \left[\bar{z}^{(1)\mathrm{T}}(t), \ldots, \bar{z}^{(R)\mathrm{T}}(t)\right]^{\mathrm{T}}$ and $y(t) = \left[\bar{y}^{(1)\mathrm{T}}(t), \ldots, \bar{y}^{(R)\mathrm{T}}(t)\right]^{\mathrm{T}}$, respectively. In this way, the observation equations are now given as follows:
$$z(t) = \Xi_n\bar{x}(t) + v(t), \quad t \geq 1,$$
$$y(t) = D_{\gamma_1}(t)z(t) + D_{\gamma_2}(t)z(t-1) + D_{1-\gamma_1-\gamma_2}(t)v(t), \quad t \geq 2; \qquad y(1) = z(1),$$
where $\Xi_n = 1_R \otimes I_{4n}$, $D_{\gamma_j}(t) = Y_n\operatorname{diag}\left(\gamma_j^r(t)\right)Y_n^{\mathrm{H}}$, for $j = 1, 2$, and $D_{1-\gamma_1-\gamma_2}(t) = Y_n\operatorname{diag}\left(1_{4Rn} - \gamma_1^r(t) - \gamma_2^r(t)\right)Y_n^{\mathrm{H}}$, with $\gamma_j^r(t) = \left[\gamma_j^{(1)r\mathrm{T}}(t), \ldots, \gamma_j^{(R)r\mathrm{T}}(t)\right]^{\mathrm{T}}$ and $Y_n = I_R \otimes T_n$. Moreover, $R(t) = \mathrm{E}\left[v(t)v^{\mathrm{H}}(t)\right] = \operatorname{diag}\left(\bar{R}^{(1)}(t), \ldots, \bar{R}^{(R)}(t)\right)$, and $\mathrm{E}\left[\bar{u}(t)v^{\mathrm{H}}(s)\right] = S(t)\delta_{ts}$, with the matrix $S(t)$ given by $S(t) = \left[\bar{S}^{(1)}(t), \ldots, \bar{S}^{(R)}(t)\right]$.
However, as previously commented on in Section 4.1, under T k -properness conditions, it is possible to reduce the dimension of the available observation equation, which can be expressed as follows:
$$y_k(t) = D_{k\gamma_1}(t)z(t) + D_{k\gamma_2}(t)z(t-1) + D_{k,1-\gamma_1-\gamma_2}(t)v(t), \quad t \geq 2; \qquad y_k(1) = \Delta_k z(1),$$
where $D_{k\gamma_j}(t) = Y_k\operatorname{diag}\left(\gamma_j^r(t)\right)Y_n^{\mathrm{H}}$ and $D_{k,1-\gamma_1-\gamma_2}(t) = Y_k\operatorname{diag}\left(1_{4Rn} - \gamma_1^r(t) - \gamma_2^r(t)\right)Y_n^{\mathrm{H}}$, with $Y_k = I_R \otimes T_k$, $T_k$ given in (8), and $\Delta_k = I_R \otimes \left[I_{kn},\ 0_{kn \times (4-k)n}\right]$. Moreover,
$$\bar{\Pi}_{k\gamma_j}(t) = \mathrm{E}\left[D_{k\gamma_j}(t)\right] = \operatorname{diag}\left(\Pi_{k\gamma_j}^{(1)}(t), \ldots, \Pi_{k\gamma_j}^{(R)}(t)\right), \qquad \bar{\Pi}_{k,1-\gamma_1-\gamma_2}(t) = \mathrm{E}\left[D_{k,1-\gamma_1-\gamma_2}(t)\right] = \operatorname{diag}\left(\Pi_{k,1-\gamma_1^{(1)}-\gamma_2^{(1)}}(t), \ldots, \Pi_{k,1-\gamma_1^{(R)}-\gamma_2^{(R)}}(t)\right),$$
with $\Pi_{k\gamma_j}^{(i)}(t)$ and $\Pi_{k,1-\gamma_1^{(i)}-\gamma_2^{(i)}}(t)$, for $i = 1, \ldots, R$, given in (9).
Note that the centralized fusion LS linear filter under $T_k$-properness conditions, $\hat{x}^{T_k}(t|t)$, is the optimal LS linear estimator of the signal $x(t)$ based on the observations $\{y_k(1), \ldots, y_k(t)\}$. This optimality, however, comes at the cost of a high computational complexity, especially as the number of sensors increases.
The next corollary gives a recursive algorithm to determine the centralized fusion linear filter for the state equation in (5) and the real and available observation equations described in (19) and (20), respectively; it can be derived by following a proof analogous to that of Theorem 1.
Corollary 1.
For the previously described model, the optimal centralized fusion filter, $\hat{x}^{T_k}(t|t)$, is obtained by extracting the first $n$ components of $\hat{x}_k(t|t)$, which is recursively calculated as follows:
$$\hat{x}_k(t|t) = \hat{x}_k(t|t-1) + L_k(t)\epsilon_k(t), \quad t \geq 1,$$
where $\hat{x}_k(t+1|t)$ satisfies
$$\hat{x}_k(t+1|t) = \Phi_k(t)\hat{x}_k(t|t) + H_k(t)\epsilon_k(t), \quad t \geq 1,$$
with initial conditions $\hat{x}_k(1|0) = \hat{x}_k(0|0) = 0_{kn}$. Moreover, $L_k(t) = \Theta_k(t)\Omega_k^{-1}(t)$ and $H_k(t) = S_k(t)\left(I_{knR} - \Pi_{k2}(t)\right)\Omega_k^{-1}(t)$, where $S_k(t) = \left[S_k^{(1)}(t), \ldots, S_k^{(R)}(t)\right]$, with $S_k^{(i)}(t)$, for $i = 1, \ldots, R$, defined in Remark 1, and $\Pi_{kj}(t) = \operatorname{diag}\left(\Pi_{kj}^{(1)}(t), \ldots, \Pi_{kj}^{(R)}(t)\right)$, for $j = 1, 2$, with $\Pi_{kj}^{(i)}(t)$ given in Property 3.
The innovations, $\epsilon_k(t)$, are obtained as follows:
$$\epsilon_k(t) = y_k(t) - \Pi_{k1}(t)\Xi_k\hat{x}_k(t|t-1) - \left[\Pi_{k2}(t)\Xi_k\hat{x}_k(t-1|t-1) + G_k(t-1)\epsilon_k(t-1)\right], \quad t \geq 2,$$
with $\epsilon_k(1) = y_k(1)$, $\Xi_k = 1_R \otimes I_{kn}$, and $G_k(t) = R_k(t)\left(I_{knR} - \Pi_{k2}(t)\right)\Omega_k^{-1}(t)$, where $R_k(t) = \operatorname{diag}\left(R_k^{(1)}(t), \ldots, R_k^{(R)}(t)\right)$, with $R_k^{(i)}(t)$, for $i = 1, \ldots, R$, defined in Remark 1.
The matrices $\Theta_k(t)$ satisfy the relationship
$$\Theta_k(t) = P_k(t|t-1)\Xi_k^{\mathrm{T}}\Pi_{k1}(t) + \left[\Phi_k(t-1)P_k(t-1|t-1)\Xi_k^{\mathrm{T}} - H_k(t-1)\left(\Xi_k\Theta_k(t-1) + G_k(t-1)\Omega_k(t-1)\right)^{\mathrm{H}}\right]\Pi_{k2}(t) + \left[S_k(t-1) - \Phi_k(t-1)\Theta_k(t-1)G_k^{\mathrm{H}}(t-1)\right]\Pi_{k2}(t), \quad t \geq 2; \qquad \Theta_k(1) = 1_R^{\mathrm{T}} \otimes D_k(1),$$
with $D_k(1)$ given in (14).
The pseudo-covariance matrix of the innovations, $\Omega_k(t)$, is obtained from the expression
$$\Omega_k(t) = \Psi_{1k}(t) + \Psi_{2k}(t) + \Psi_{2k}^{\mathrm{H}}(t) + \Psi_{3k}(t) + \Psi_{4k}(t) + \Pi_{k1}(t)\Xi_k P_k(t|t-1)\Xi_k^{\mathrm{T}}\Pi_{k1}(t) + \Pi_{k1}(t)J_k(t-1)\Pi_{k2}(t) + \Pi_{k2}(t)J_k^{\mathrm{H}}(t-1)\Pi_{k1}(t) + \Pi_{k2}(t)\left[\Xi_k P_k(t-1|t-1)\Xi_k^{\mathrm{T}} - \Xi_k\Theta_k(t-1)G_k^{\mathrm{H}}(t-1) - G_k(t-1)\Theta_k^{\mathrm{H}}(t-1)\Xi_k^{\mathrm{T}} - G_k(t-1)\Omega_k(t-1)G_k^{\mathrm{H}}(t-1)\right]\Pi_{k2}(t), \quad t \geq 2; \qquad \Omega_k(1) = I_R \otimes D_k(1) + R_k(1),$$
where
$$\Psi_{1k}(t) = Y_k\left[\operatorname{Cov}\left(\gamma_1^r(t)\right) \circ \left(Y_n^{\mathrm{H}}\Xi_n\bar{D}(t)\Xi_n^{\mathrm{T}}Y_n\right)\right]Y_k^{\mathrm{H}}, \qquad \Psi_{2k}(t) = Y_k\left[\operatorname{Cov}\left(\gamma_1^r(t), \gamma_2^r(t)\right) \circ \left(Y_n^{\mathrm{H}}\Xi_n\left(\bar{\Phi}(t-1)\bar{D}(t-1)\Xi_n^{\mathrm{T}} + S(t-1)\right)Y_n\right)\right]Y_k^{\mathrm{H}},$$
$$\Psi_{3k}(t) = Y_k\left[\operatorname{Cov}\left(\gamma_2^r(t)\right) \circ \left(Y_n^{\mathrm{H}}\Xi_n\bar{D}(t-1)\Xi_n^{\mathrm{T}}Y_n\right)\right]Y_k^{\mathrm{H}}, \qquad \Psi_{4k}(t) = Y_k\left[\mathrm{E}\left[\left(1_{4Rn} - \gamma_2^r(t)\right)\left(1_{4Rn} - \gamma_2^r(t)\right)^{\mathrm{T}}\right] \circ \left(Y_n^{\mathrm{H}}R(t)Y_n\right)\right]Y_k^{\mathrm{H}} + Y_k\left[\mathrm{E}\left[\gamma_2^r(t)\gamma_2^{r\mathrm{T}}(t)\right] \circ \left(Y_n^{\mathrm{H}}R(t-1)Y_n\right)\right]Y_k^{\mathrm{H}},$$
with $\circ$ denoting the elementwise (Hadamard) product,
with $\bar{D}(t)$ computed in (16), and $J_k(t)$ given by
$$J_k(t) = \Xi_k\left\{\left[\Phi_k(t)P_k(t|t) - H_k(t)\Theta_k^{\mathrm{H}}(t)\right]\Xi_k^{\mathrm{H}} - \Phi_k(t)\Theta_k(t)G_k^{\mathrm{H}}(t) + S_k(t) - H_k(t)\Omega_k(t)G_k^{\mathrm{H}}(t)\right\}.$$
Finally, the filtering error covariance matrices, $P^{T_k}(t|t)$, are calculated from $P_k(t|t)$, which satisfies the equation
$$P_k(t|t) = P_k(t|t-1) - \Theta_k(t)\Omega_k^{-1}(t)\Theta_k^{\mathrm{H}}(t),$$
where $P_k(t+1|t)$ can be recursively obtained from
$$P_k(t+1|t) = \Phi_k(t)P_k(t|t)\Phi_k^{\mathrm{H}}(t) - \Phi_k(t)\Theta_k(t)H_k^{\mathrm{H}}(t) - H_k(t)\Theta_k^{\mathrm{H}}(t)\Phi_k^{\mathrm{H}}(t) - H_k(t)\Omega_k(t)H_k^{\mathrm{H}}(t) + Q_k(t),$$
with initial conditions $P_k(0|0) = P_{0k}$ and $P_k(1|0) = D_k(1)$.
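The stacking that underlies the centralized approach can be made explicit in code. The helper below is a hypothetical sketch (illustrative names and dimensions): it builds the joint observation, the block-diagonal noise covariance $R_k(t)$ and the matrix $\Xi_k = 1_R \otimes I_{kn}$, which replicates the $kn$-dimensional state block once per sensor:

```python
import numpy as np

def centralized_stack(y_list, R_list, k, n):
    """Build the stacked observation y(t), the block-diagonal noise
    covariance R_k(t) = diag(R_k^(1), ..., R_k^(R)) and Xi_k = 1_R (x) I_kn."""
    R = len(y_list)
    y = np.concatenate(y_list)                      # knR-dimensional observation
    Rk = np.zeros((R * k * n, R * k * n))
    for i, Ri in enumerate(R_list):                 # place each sensor block
        s = i * k * n
        Rk[s:s + k * n, s:s + k * n] = Ri
    Xi = np.kron(np.ones((R, 1)), np.eye(k * n))    # replicates the state block
    return y, Rk, Xi
```

With $R = 3$ sensors and $kn = 2$, for instance, $\Xi_k \hat{x}$ simply repeats the $kn$-dimensional prediction once per sensor.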

6. T k -Proper Distributed Fusion LS Linear Filter

Our aim in this section is to address the distributed fusion LS linear filter under $T_k$-properness conditions, $\hat{x}^{DT_k}(t|t)$, which is calculated by extracting the first $n$ components of $\hat{x}_k^D(t|t)$. This last estimator is a linear combination of the local filters $\hat{x}_k^{(1)}(t|t), \ldots, \hat{x}_k^{(R)}(t|t)$ calculated in Section 4, whose weights are those minimizing the mean squared error and which, as is known, has the form
$$\hat{x}_k^D(t|t) = \mathrm{E}\left[x_k(t)\hat{x}_k^{\mathrm{H}}(t|t)\right]\left(\mathrm{E}\left[\hat{x}_k(t|t)\hat{x}_k^{\mathrm{H}}(t|t)\right]\right)^{-1}\hat{x}_k(t|t),$$
where $\hat{x}_k(t|t) = \left[\hat{x}_k^{(1)\mathrm{T}}(t|t), \ldots, \hat{x}_k^{(R)\mathrm{T}}(t|t)\right]^{\mathrm{T}}$. Moreover, taking into account Theorem 3 in [41], the matrices in the above expression are computed as follows:
$$\mathrm{E}\left[x_k(t)\hat{x}_k^{\mathrm{H}}(t|t)\right] = \left[K_k^{(11)}(t), \ldots, K_k^{(RR)}(t)\right], \qquad \mathrm{E}\left[\hat{x}_k(t|t)\hat{x}_k^{\mathrm{H}}(t|t)\right] \equiv K_k(t) = \left(K_k^{(ij)}(t)\right)_{i,j=1,\ldots,R},$$
where $K_k^{(ij)}(t) = \mathrm{E}\left[\hat{x}_k^{(i)}(t|t)\hat{x}_k^{(j)\mathrm{H}}(t|t)\right]$. Moreover, the distributed fusion linear filtering error covariance matrices under $T_k$-properness conditions, $P^{DT_k}(t|t)$, are obtained from $P_k^D(t|t)$, which satisfies the equation
$$P_k^D(t|t) = D_k(t) - \left[K_k^{(11)}(t), \ldots, K_k^{(RR)}(t)\right]K_k^{-1}(t)\left[K_k^{(11)}(t), \ldots, K_k^{(RR)}(t)\right]^{\mathrm{H}},$$
with $D_k(t) = \left[I_{kn},\ 0_{kn \times (4-k)n}\right]\bar{D}(t)\left[I_{kn},\ 0_{kn \times (4-k)n}\right]^{\mathrm{T}}$, and $\bar{D}(t)$ given in (16).
Then, to obtain $\hat{x}_k^D(t|t)$, it is necessary to calculate the matrices $K_k^{(ij)}(t)$, for $i, j = 1, \ldots, R$. For this purpose, a recursive formula is proposed in Lemma 4, which involves the computation of other intermediate matrices through Lemmas 1–3, as well as some of the matrices defined in Theorem 1. The proof is deferred to Appendix B.
Note that, in contrast to the centralized fusion LS linear filter, the distributed fusion filter presented here is not optimal; however, it considerably reduces the computational burden. Moreover, in many practical situations, the difference in performance is so insignificant that this fact, together with the lower computational cost, makes the distributed filter preferable to the centralized one.
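The fusion rule above is just a matrix-weighted combination of the local filters, and can be sketched as follows; the function name is hypothetical, and the inputs $K_k(t)$, $\left[K_k^{(11)}(t), \ldots, K_k^{(RR)}(t)\right]$ and $D_k(t)$ are illustrative (in practice they come from Lemmas 1–4):

```python
import numpy as np

def distributed_fuse(x_locals, K, Kx, D):
    """Distributed fusion sketch: x stacks the local filters, K = (K^(ij))
    is their joint pseudo-covariance and Kx = [K^(11), ..., K^(RR)];
    the fused estimate is x_D = Kx K^{-1} x, with error covariance
    P_D = D - Kx K^{-1} Kx^H."""
    x = np.concatenate(x_locals)
    W = Kx @ np.linalg.inv(K)          # optimal matrix weights
    return W @ x, D - W @ Kx.conj().T
```

Note that only one $knR \times knR$ system has to be solved here, instead of filtering the full stacked observation vector as in the centralized case.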
Lemma 1.
For the model described in Section 4.1, the following expressions hold for the expectations defined below:
(i)
$\Theta_k^{(i)}(t-1, t) = \mathrm{E}\left[x_k(t-1)\epsilon_k^{(i)\mathrm{H}}(t)\right]$, $t \geq 2$:
$$\Theta_k^{(i)}(t-1, t) = \left[D_k(t-1) - K_k^{(ii)}(t-1)\right]A_k^{(i)\mathrm{H}}(t-1) - \Theta_k^{(i)}(t-1)B_k^{(i)\mathrm{H}}(t-1),$$
where $D_k(t) = \left[I_{kn},\ 0_{kn \times (4-k)n}\right]\bar{D}(t)\left[I_{kn},\ 0_{kn \times (4-k)n}\right]^{\mathrm{T}}$, $A_k^{(i)}(t) = \Pi_{k1}^{(i)}(t+1)\Phi_k(t) + \Pi_{k2}^{(i)}(t+1)$, $B_k^{(i)}(t) = \Pi_{k1}^{(i)}(t+1)H_k^{(i)}(t) + \Pi_{k2}^{(i)}(t+1)G_k^{(i)}(t)$, and $K_k^{(ii)}(t)$ is given in (29).
(ii)
$\Theta_{vk}^{(ij)}(t) = \mathrm{E}\left[v_k^{(i)}(t)\epsilon_k^{(j)\mathrm{H}}(t)\right]$, $t \geq 1$:
$$\Theta_{vk}^{(ij)}(t) = R_k^{(i)}(t)\left(I_{kn} - \Pi_{k2}^{(i)}(t)\right)\delta_{ij}, \quad t \geq 2; \qquad \Theta_{vk}^{(ij)}(1) = R_k^{(i)}(1)\delta_{ij}.$$
(iii)
$\Theta_{vk}^{(ij)}(t-1, t) = \mathrm{E}\left[v_k^{(i)}(t-1)\epsilon_k^{(j)\mathrm{H}}(t)\right]$, $t \geq 2$:
$$\Theta_{vk}^{(ij)}(t-1, t) = S_k^{(i)\mathrm{H}}(t-1)\Pi_{k1}^{(j)}(t) + R_k^{(i)}(t-1)\Pi_{k2}^{(i)}(t)\delta_{ij} - \Theta_{vk}^{(ij)}(t-1)\left[A_k^{(j)}(t-1)L_k^{(j)}(t-1) + B_k^{(j)}(t-1)\right]^{\mathrm{H}}.$$
Lemma 2.
Let us consider the model described in Section 4.1. Then, the matrices $L_k^{(ij)}(t) = \mathrm{E}\left[\hat{x}_k^{(i)}(t|t-1)\epsilon_k^{(j)\mathrm{H}}(t)\right]$, for $i, j = 1, \ldots, R$, satisfy the relationship
$$L_k^{(ij)}(t) = \left[K_k^{(ii)}(t, t-1) - K_k^{(ij)}(t, t-1)\right]\Pi_{k1}^{(j)}(t) + \Phi_k(t-1)\left[K_k^{(ii)}(t-1) - K_k^{(ij)}(t-1)\right]\Pi_{k2}^{(j)}(t) + H_k^{(i)}(t-1)\left[\Theta_k^{(i)}(t-1) - N_k^{(ji)}(t-1)\right]^{\mathrm{H}}\Pi_{k2}^{(j)}(t) + \left[C_k^{(i)}(t-1)\Theta_{vk}^{(ji)\mathrm{H}}(t-1) - L_k^{(ij)}(t, t-1)G_k^{(j)\mathrm{H}}(t-1)\right]\Pi_{k2}^{(j)}(t), \quad t \geq 2,$$
with the initial condition $L_k^{(ij)}(1) = 0_{kn \times kn}$, and where $N_k^{(ij)}(t) = L_k^{(ij)}(t) + L_k^{(i)}(t)M_k^{(ij)}(t)$ and $C_k^{(i)}(t) = \Phi_k(t)L_k^{(i)}(t) + H_k^{(i)}(t)$, with $M_k^{(ij)}(t) = \mathrm{E}\left[\epsilon_k^{(i)}(t)\epsilon_k^{(j)\mathrm{H}}(t)\right]$ recursively computed as indicated in Lemma 3, and $L_k^{(ij)}(t, t-1) = \mathrm{E}\left[\hat{x}_k^{(i)}(t|t-1)\epsilon_k^{(j)\mathrm{H}}(t-1)\right]$ obtained as follows:
$$L_k^{(ij)}(t, t-1) = \Phi_k(t-1)L_k^{(ij)}(t-1) + C_k^{(i)}(t-1)M_k^{(ij)}(t-1), \quad t \geq 2.$$
Moreover, $L_k^{(ij)}(t-1, t) = \mathrm{E}\left[\hat{x}_k^{(i)}(t-1|t-1)\epsilon_k^{(j)\mathrm{H}}(t)\right]$ is calculated from the expression
$$L_k^{(ij)}(t-1, t) = \left[K_k^{(ii)}(t-1) - K_k^{(ij)}(t-1)\right]A_k^{(j)\mathrm{H}}(t-1) + \Theta_k^{(i)}(t-1)H_k^{(i)\mathrm{H}}(t-1)\Pi_{k1}^{(j)}(t) + L_k^{(i)}(t-1)\Theta_{vk}^{(ji)\mathrm{H}}(t-1)\Pi_{k2}^{(j)}(t) - N_k^{(ij)}(t-1)B_k^{(j)\mathrm{H}}(t-1), \quad t \geq 2.$$
Lemma 3.
Considering the model described in Section 4.1, the matrices $M_k^{(ij)}(t) = \mathrm{E}\left[\epsilon_k^{(i)}(t)\epsilon_k^{(j)\mathrm{H}}(t)\right]$, for $i, j = 1, \ldots, R$, are obtained from the equation
$$M_k^{(ij)}(t) = \Pi_{k1}^{(i)}(t)\left[\Theta_k^{(j)}(t) - L_k^{(ij)}(t)\right] + \left[I_{kn} - \Pi_{k2}^{(i)}(t)\right]\Theta_{vk}^{(ij)}(t) + \Pi_{k2}^{(i)}(t)\left[\Theta_k^{(j)}(t-1, t) + \Theta_{vk}^{(ij)}(t-1, t) - L_k^{(ij)}(t-1, t)\right] - G_k^{(i)}(t-1)M_k^{(ij)}(t-1, t), \quad t \geq 2,$$
with initial condition $M_k^{(ij)}(1) = D_k(1) + R_k^{(i)}(1)\delta_{ij}$. Moreover, the following recursive formula allows us to compute the matrices $M_k^{(ij)}(t-1, t) = \mathrm{E}\left[\epsilon_k^{(i)}(t-1)\epsilon_k^{(j)\mathrm{H}}(t)\right]$:
$$M_k^{(ij)}(t-1, t) = \Theta_k^{(i)\mathrm{H}}(t-1)A_k^{(j)\mathrm{H}}(t-1) + \left[\Pi_{k1}^{(i)}(t-1)S_k^{(i)\mathrm{H}}(t-1) - L_k^{(ji)\mathrm{H}}(t, t-1)\right]\Pi_{k1}^{(j)}(t) + \left[\Theta_{vk}^{(ji)\mathrm{H}}(t-1) - N_k^{(ji)\mathrm{H}}(t-1) - M_k^{(ij)}(t-1)G_k^{(j)\mathrm{H}}(t-1)\right]\Pi_{k2}^{(j)}(t), \quad t \geq 2.$$
Lemma 4.
For the model described in Section 4.1, the pseudo-cross-covariance matrices of the local filters, denoted by $K_k^{(ij)}(t) = \mathrm{E}\left[\hat{x}_k^{(i)}(t|t)\hat{x}_k^{(j)\mathrm{H}}(t|t)\right]$, for $i, j = 1, \ldots, R$, are calculated as follows:
$$K_k^{(ij)}(t) = K_k^{(ij)}(t, t-1) + N_k^{(ij)}(t)L_k^{(j)\mathrm{H}}(t) + L_k^{(i)}(t)L_k^{(ji)\mathrm{H}}(t), \quad t \geq 1,$$
where $K_k^{(ij)}(t+1, t) = \mathrm{E}\left[\hat{x}_k^{(i)}(t+1|t)\hat{x}_k^{(j)\mathrm{H}}(t+1|t)\right]$ satisfies the equation
$$K_k^{(ij)}(t+1, t) = \Phi_k(t)\left[K_k^{(ij)}(t)\Phi_k^{\mathrm{H}}(t) + N_k^{(ij)}(t)H_k^{(j)\mathrm{H}}(t)\right] + H_k^{(i)}(t)L_k^{(ji)\mathrm{H}}(t+1, t), \quad t \geq 1,$$
with initial conditions $K_k^{(ij)}(1, 0) = K_k^{(ij)}(0) = 0_{kn \times kn}$.

7. Numerical Simulations

Our aim in this section is to show the performance of the proposed centralized and distributed fusion linear filtering algorithms in several situations: on the one hand, in comparison with the local LS filters obtained at each sensor; on the other, focusing on the proposed fusion algorithms, to analyze the accuracy of the estimators under different probabilities of updating, delay, and uncertainty in the observations from the sensors. In all of these situations, $T_k$-properness conditions were assumed, which entails a considerable reduction in the computational burden.
For this purpose, let us consider the scalar tessarine model with delayed and uncertain observations produced by five sensors, described by Equations (2)–(4), with $F_1(t) = 0.5 - 0.1\eta_1 + 0.07\eta_2 + 0.8\eta_3$. Hypotheses (H1)–(H4), established in Section 3, are also assumed to be satisfied. Moreover, the pseudo-covariance matrices of the noises and the initial state were defined with a general structure, differentiating between the $T_1$- and $T_2$-proper scenarios. More specifically, the covariance matrix of the real state noise is given as follows:
$$\mathrm{E}\left[u^r(t)u^{r\mathrm{T}}(s)\right] = \begin{pmatrix} q_{11} & 0 & q_{13} & 0 \\ 0 & q_{22} & 0 & q_{13} \\ q_{13} & 0 & q_{11} & 0 \\ 0 & q_{13} & 0 & q_{22} \end{pmatrix}\delta_{ts}, \qquad \begin{array}{l} T_1\text{-proper case: } q_{11} = q_{22} = 0.9,\ q_{13} = 0.3. \\ T_2\text{-proper case: } q_{11} = 1.9,\ q_{22} = 1.6,\ q_{13} = 0.3. \end{array}$$
Moreover, the other matrices in hypothesis (H1), R ( i ) ( t ) and S ( i ) ( t ) , for i = 1 , , 5 , are given by the following relation between the noises v ( i ) ( t ) and u ( t ) :
v ( i ) ( t ) = a i u ( t ) + w ( i ) ( t ) ,
with $a_i$ scalar constants ($a_1 = 0.4$, $a_2 = 0.8$, $a_3 = 0.5$, $a_4 = 0.6$, $a_5 = 0.2$), and $w^{(i)}(t)$ tessarine zero-mean white Gaussian noises, independent of $u(t)$, with real covariance matrices $R_w^{r(i)}(t) = \operatorname{diag}(b_i, b_i, b_i, b_i)$, where $b_1 = 3$, $b_2 = 7$, $b_3 = 15$, $b_4 = 13$, $b_5 = 11$.
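Under this structure, and since $w^{(i)}(t)$ is independent of $u(t)$, the matrices of hypothesis (H1) follow directly as $R^{(i)}(t) = a_i^2 Q(t) + R_w^{(i)}(t)$ and $S^{(i)}(t) = a_i Q(t)$, with $Q(t)$ the state-noise covariance. A quick numerical check in the real-vector representation (helper name illustrative, values from the $T_1$-proper case above):

```python
import numpy as np

def sensor_noise_moments(a, Q, Rw):
    """Moments implied by v(t) = a u(t) + w(t), with w independent of u:
    observation-noise covariance R = a^2 Q + Rw and state/observation-noise
    cross-covariance S = a Q (real-vector representation)."""
    Q = np.asarray(Q, dtype=float)
    return a ** 2 * Q + np.asarray(Rw, dtype=float), a * Q
```

For sensor 1 ($a_1 = 0.4$, $b_1 = 3$), the diagonal of $R^{(1)}$ equals $0.4^2 \cdot 0.9 + 3 = 3.144$.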
To complete the initial conditions of the model, the covariance matrix of the real initial state is given as follows:
$$\mathrm{E}\left[x^r(0)x^{r\mathrm{T}}(0)\right] = \begin{pmatrix} p_{0,11} & 0 & p_{0,13} & 0 \\ 0 & p_{0,22} & 0 & p_{0,13} \\ p_{0,13} & 0 & p_{0,11} & 0 \\ 0 & p_{0,13} & 0 & p_{0,22} \end{pmatrix}, \qquad \begin{array}{l} T_1\text{-proper case: } p_{0,11} = p_{0,22} = 1,\ p_{0,13} = 1.5. \\ T_2\text{-proper case: } p_{0,11} = 1,\ p_{0,22} = 4,\ p_{0,13} = 1.5. \end{array}$$
All these assumptions in each of the $T_k$-proper scenarios, as well as the values considered for the parameters of the Bernoulli random variables (detailed below, in accordance with the conditions established in Property 3), guarantee that $x(t)$ and $y^{(i)}(t)$ are jointly $T_k$-proper. These parameters are assumed to be constant in time, that is, $p_{j,\nu}^{(i)}(t) = p_{j,\nu}^{(i)}$, for $j = 1, 2$, $\nu = r, \eta_1, \eta_2, \eta_3$, and $i = 1, \ldots, 5$. (Note that, in the $T_1$-proper scenario, $p_{j,\nu}^{(i)} = p_j^{(i)}$ for all $\nu = r, \eta_1, \eta_2, \eta_3$, $j = 1, 2$, $i = 1, \ldots, 5$; and, in the $T_2$-proper scenario, $p_{j,r}^{(i)} = p_{j,\eta_1}^{(i)}$ and $p_{j,\eta_2}^{(i)} = p_{j,\eta_3}^{(i)}$, for $j = 1, 2$, $i = 1, \ldots, 5$.)
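The role of the Bernoulli parameters can be made concrete with a small generator of the observation mechanism; the sketch below is a scalar illustration of the $T_1$-proper case (function name hypothetical, assuming $p_1 + p_2 \leq 1$), where $\gamma_1(t)$ and $\gamma_2(t)$ are drawn so that at most one of them equals one:

```python
import numpy as np

def draw_observation(z_t, z_prev, v_t, p1, p2, rng):
    """One observation under the mixed-uncertainty mechanism: the updated
    measurement z(t) with probability p1, the one-step-delayed z(t-1) with
    probability p2, and only noise v(t) otherwise."""
    u = rng.random()
    g1 = 1.0 if u < p1 else 0.0               # gamma_1(t)
    g2 = 1.0 if p1 <= u < p1 + p2 else 0.0    # gamma_2(t)
    return g1 * z_t + g2 * z_prev + (1.0 - g1 - g2) * v_t
```

For instance, sensor 1 of the $T_1$-proper setting below corresponds to $p_1 = 0.9$ and $p_2 = 0.05$, so roughly 5% of its observations contain only noise.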
Firstly, taking fixed values of the probabilities $p_{j,\nu}^{(i)}$ for each of the $T_k$-proper scenarios, the error variances of the local filters are compared with those of the centralized and distributed ones. These probabilities differ at each sensor, which allows us to make an initial comparison of the three situations referred to in the model, that is, updated, delayed and uncertain observations. In this sense,
  • In the $T_1$-proper case:
    - Sensor 1: $p_1^{(1)} = 0.9$, $p_2^{(1)} = 0.05$,
    - Sensor 2: $p_1^{(2)} = 0.7$, $p_2^{(2)} = 0.05$,
    - Sensor 3: $p_1^{(3)} = 0.05$, $p_2^{(3)} = 0.9$,
    - Sensor 4: $p_1^{(4)} = 0.05$, $p_2^{(4)} = 0.7$,
    - Sensor 5: $p_1^{(5)} = p_2^{(5)} = 0.05$.
  • In the $T_2$-proper case:
    - Sensor 1: $p_{1,r}^{(1)} = 0.9$, $p_{1,\eta_2}^{(1)} = 0.8$, $p_{2,r}^{(1)} = p_{2,\eta_2}^{(1)} = 0.05$,
    - Sensor 2: $p_{1,r}^{(2)} = 0.7$, $p_{1,\eta_2}^{(2)} = 0.6$, $p_{2,r}^{(2)} = p_{2,\eta_2}^{(2)} = 0.05$,
    - Sensor 3: $p_{1,r}^{(3)} = p_{1,\eta_2}^{(3)} = 0.05$, $p_{2,r}^{(3)} = 0.9$, $p_{2,\eta_2}^{(3)} = 0.8$,
    - Sensor 4: $p_{1,r}^{(4)} = p_{1,\eta_2}^{(4)} = 0.05$, $p_{2,r}^{(4)} = 0.7$, $p_{2,\eta_2}^{(4)} = 0.6$,
    - Sensor 5: $p_{j,\mu}^{(5)} = 0.05$, for $j = 1, 2$ and $\mu = r, \eta_2$.
The results are displayed in Figure 1. The superiority of both centralized and distributed fusion filters over the local ones at each sensor can be observed, as can the fact that both fusion filters have practically the same effectiveness, since their error variances are very close. Moreover, the local filtering error variances reflect the three situations described previously: the observations from sensors 1 and 2 are more likely to be updated, those from sensors 3 and 4 to be delayed, and the most unfavorable case is sensor 5, since there is a greater probability that its observations contain only noise. Accordingly, the local error variances increase, and hence the performance of the local filters becomes poorer, as the delayed and noise-only situations become more likely. Analogous results are obtained by considering other values of the Bernoulli parameters.
Next, we focus on analyzing the accuracy of both centralized and distributed fusion filters in different situations concerning the observations produced by the sensors, in both $T_k$-proper scenarios. For this purpose, we have considered the following cases with different values of the Bernoulli parameters:
  • In the $T_1$-proper scenario:
    - Cases in which the updating probabilities vary at each sensor $i$:
      (1): $p_1^{(i)} = 0.3$, $p_2^{(i)} = 0.05$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (2): $p_1^{(i)} = 0.5$, $p_2^{(i)} = 0.05$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (3): $p_1^{(i)} = 0.7$, $p_2^{(i)} = 0.05$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (4): $p_1^{(i)} = 0.9$, $p_2^{(i)} = 0.05$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
    - Cases in which the delay probabilities vary at each sensor $i$:
      (5): $p_1^{(i)} = 0.05$, $p_2^{(i)} = 0.3$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (6): $p_1^{(i)} = 0.05$, $p_2^{(i)} = 0.5$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (7): $p_1^{(i)} = 0.05$, $p_2^{(i)} = 0.7$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (8): $p_1^{(i)} = 0.05$, $p_2^{(i)} = 0.9$, $p_j^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
  • In the $T_2$-proper scenario:
    - Cases in which the updating probabilities vary at each sensor $i$:
      (1): $\left(p_{1,r}^{(i)}, p_{1,\eta_2}^{(i)}\right) = (0.3, 0.2)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (2): $\left(p_{1,r}^{(i)}, p_{1,\eta_2}^{(i)}\right) = (0.5, 0.4)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (3): $\left(p_{1,r}^{(i)}, p_{1,\eta_2}^{(i)}\right) = (0.7, 0.6)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (4): $\left(p_{1,r}^{(i)}, p_{1,\eta_2}^{(i)}\right) = (0.9, 0.8)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
    - Cases in which the delay probabilities vary at each sensor $i$:
      (5): $\left(p_{2,r}^{(i)}, p_{2,\eta_2}^{(i)}\right) = (0.3, 0.2)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (6): $\left(p_{2,r}^{(i)}, p_{2,\eta_2}^{(i)}\right) = (0.5, 0.4)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (7): $\left(p_{2,r}^{(i)}, p_{2,\eta_2}^{(i)}\right) = (0.7, 0.6)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
      (8): $\left(p_{2,r}^{(i)}, p_{2,\eta_2}^{(i)}\right) = (0.9, 0.8)$, $p_{j,r}^{(l)} = p_{j,\eta_2}^{(l)} = 0.05$, for $j = 1, 2$ and $l \neq i$;
Note that cases (1)–(4) in both scenarios allow us to contrast the performance of the filters as the probability that the observations are updated varies; they are sorted in ascending order of the updating probabilities. We start with case (1), in which there is a low probability that the observations are updated (that is, they will most probably contain only noise), and this probability increases until case (4), in which it is most likely that the available observations are updated ones. Analogous considerations apply to the delayed observations in cases (5)–(8); among them, case (5) is the one most conducive to uncertain observations and case (8) the least. The error variances of both centralized and distributed fusion filters in all the situations described above for sensors 1, 2 and 5 are displayed in Figure 2, Figure 3 and Figure 4 for the $T_1$-proper scenario, and in Figure 5, Figure 6 and Figure 7 for the $T_2$-proper one. In view of these figures, by first comparing cases (1)–(4), it can be observed that the error variances of both fusion estimators decrease, and hence the accuracy of the filters improves, as the probability that the observations at each sensor are updated increases. The same holds for cases (5)–(8), with delayed observations. Secondly, we compare, in both scenarios, the cases in pairs (1)–(5), (2)–(6), (3)–(7) and (4)–(8); each of these pairs represents the situations of updated and delayed observations with the same probability. From these figures, it can be observed that the error variances of the fusion filters are smaller when the observations are updated than when they are delayed.
Moreover, in all the cases and in both scenarios, these figures show that the filtering error variances of the centralized fusion estimators are lower than those of the distributed fusion ones, but the difference between them is insignificant; in practice, this makes the distributed fusion filtering algorithm even more attractive in view of its computational advantages.
Finally, the performance of the proposed fusion filtering estimators is analyzed for different numbers of sensors, specifically 3, 4 and 5 sensors in the $T_1$-proper scenario. It is assumed that the observations from all the sensors are modeled by the same observation equation; that is, they have the same updating or delay probabilities, as well as the same variances and covariances of the noises. For the case of updated observations, three situations were analyzed: $p_1^{(i)} = 0.3, 0.6, 0.9$, for all $i = 1, \ldots, 5$, with the remaining probabilities equal to 0.05; analogous situations were considered for delayed observations: $p_2^{(i)} = 0.3, 0.6, 0.9$, for all $i = 1, \ldots, 5$, with the remaining probabilities equal to 0.05. In Figure 8 and Figure 9, the $T_1$-proper centralized and distributed fusion filtering error variances are displayed for the cases of updated and delayed observations, respectively. The behavior was identical to that shown in the above figures, but a better performance of both centralized and distributed filters was observed as the number of sensors increased.

8. Conclusions

In the last few decades, the scientific community has shown great interest in studying the signal estimation problem from multi-sensor observations, since better estimations are obtained with this method. The fact that there are several sensors reduces the number of possible failures in the communication channels as well as the adverse effects of some faulty sensors. Two methods have traditionally been used to build the fusion estimator from the observations of all the sensors: centralized and distributed methods. The first one has the advantage that it provides the optimal estimator whereas, with the distributed one, suboptimal estimators are obtained. However, the distributed method presents great advantages over the centralized one, such as flexibility, robustness, and a major reduction in the computational load (which is more significant with a large number of sensors), which make it preferable in practice, especially when considering the fact that in many real problems, the difference in performance between the estimators obtained by both methods may be almost insignificant.
The study of the signal estimation problem in hypercomplex domains has also grown considerably, since these domains allow many real problems to be modeled in a better way. To date, most estimation problems have been addressed in the quaternion domain, since it is a normed algebra. However, tessarine signal processing can yield better estimators depending on the characteristics of the signal. Recently, it has been possible to endow the tessarine domain with a metric space structure that has the properties necessary to guarantee the existence and uniqueness of the orthogonal projection [41], which is a way of obtaining the LS linear estimator. Hence, the signal estimation problem has been addressed in the tessarine domain in different scenarios, and properness conditions analogous to those existing in the quaternion domain have been defined. This achieves a considerable reduction in the dimension of the augmented system, with a consequent decrease in the computational cost.
In this paper, under T k -properness conditions [41,42], the LS linear centralized and distributed fusion filtering problems of tessarine signals from multi-sensor observations have been studied, and recursive algorithms have been proposed to calculate them. The observations at each sensor and instant of time can be updated, delayed or contain only noise, independently from the other sensors. A correlation has also been assumed between the signal and observation noises. The T k -properness conditions cause an important computational reduction in the calculus of the T k -proper fusion filters in comparison with the TWL estimators, which makes these conditions desirable in practice. The theoretical results are illustrated in a numerical simulation example, in which the performance of the estimators calculated by using both fusion algorithms is compared by taking different values of the Bernoulli parameters modeling the updating, delay, or uncertainty in the observations.
Future research is planned to explore the signal estimation problem in other hypercomplex algebras, as well as to address the decentralized fusion estimation problem under T k -properness scenarios and different hypotheses on the observations.

Author Contributions

All authors have contributed equally to the work. The functions mainly carried out by each specific author are detailed below. Conceptualization, J.D.J.-L.; Formal analysis, J.D.J.-L., R.M.F.-A., J.N.-M. and J.C.R.-M.; Methodology, J.D.J.-L.; Investigation, J.D.J.-L., R.M.F.-A., J.N.-M. and J.C.R.-M.; Visualization, R.M.F.-A., J.N.-M. and J.C.R.-M.; Writing—original draft preparation, J.D.J.-L.; Writing—review and editing, R.M.F.-A., J.N.-M. and J.C.R.-M.; Funding acquisition, R.M.F.-A. and J.N.-M.; Project administration, R.M.F.-A. and J.N.-M.; Software, J.D.J.-L.; Supervision, J.N.-M. and J.C.R.-M.; Validation, J.N.-M. and J.C.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported in part by I+D+i project with reference number 1256911, under ‘Programa Operativo FEDER Andalucía 2014–2020’, Junta de Andalucía, and Project EI-FQM2-2021 of ‘Plan de Apoyo a la Investigación 2021–2022’ of the University of Jaén.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Proof of Theorem 1

Appendix A.1. Preliminary Result

Property A1.
For the model described in (5) with the available observation equation in (7) and the hypotheses assumed, the following properties are satisfied:
1.
$\mathrm{E}\left[\bar{u}(t)\epsilon_k^{(i)\mathrm{H}}(s)\right] = \bar{S}^{(i)}(t)\Pi_{k,1-\gamma_2}^{(i)\mathrm{H}}(t)\delta_{t,s}$, for $t \leq s$.
2.
$\mathrm{E}\left[\bar{v}^{(i)}(t)\epsilon_k^{(j)\mathrm{H}}(s)\right] = \bar{R}^{(i)}(t)\Pi_{k,1-\gamma_2}^{(i)\mathrm{H}}(t)\delta_{t,s}\delta_{i,j}$, for $t \leq s$.

Appendix A.2. Proof of Theorem 1

As is known, the optimal LS linear filter $\hat{\bar{x}}^{(i)}(t|t)$ is the orthogonal projection of $\bar{x}(t)$ onto the linear space spanned by the innovations $\epsilon_k^{(i)}(1), \ldots, \epsilon_k^{(i)}(t)$, and it can be expressed in the following way:
$$\hat{\bar{x}}^{(i)}(t|t) = \sum_{s=1}^{t}\bar{\Theta}_k^{(i)}(s)\Omega_k^{(i)-1}(s)\epsilon_k^{(i)}(s),$$
where $\bar{\Theta}_k^{(i)}(s) = \mathrm{E}\left[\bar{x}(t)\epsilon_k^{(i)\mathrm{H}}(s)\right]$ and $\Omega_k^{(i)}(s) = \mathrm{E}\left[\epsilon_k^{(i)}(s)\epsilon_k^{(i)\mathrm{H}}(s)\right]$. The existence and uniqueness of this projection in the tessarine domain is guaranteed in [41].
Firstly, from (A1), we have that
$$\hat{\bar{x}}^{(i)}(t|t) = \hat{\bar{x}}^{(i)}(t|t-1) + \bar{L}_k^{(i)}(t)\epsilon_k^{(i)}(t),$$
with $\bar{L}_k^{(i)}(t) = \bar{\Theta}_k^{(i)}(t)\Omega_k^{(i)-1}(t)$. Then, Equation (10) is immediately derived from (A2), taking into account the characteristics of both $T_k$-proper scenarios. Moreover, from Theorem 3 in [41], the state equation in (5) and Property A1.1, it is obtained that
$$\hat{\bar{x}}^{(i)}(t+1|t) = \bar{\Phi}(t)\hat{\bar{x}}^{(i)}(t|t) + \bar{H}_k^{(i)}(t)\epsilon_k^{(i)}(t),$$
with $\bar{H}_k^{(i)}(t) = \bar{S}^{(i)}(t)\Pi_{k,1-\gamma_2}^{(i)\mathrm{H}}(t)\Omega_k^{(i)-1}(t)$. Then, by characterizing (A3) for both $T_k$-proper scenarios, Equation (11) is easily obtained.
Next, to derive Equation (12), we use Theorem 3 in [41] and the observation equation in (7) to obtain the following expression for $\hat{y}_k^{(i)}(t|t-1)$:
$$\hat{y}_k^{(i)}(t|t-1) = \Pi_{k\gamma_1}^{(i)}(t)\hat{\bar{x}}^{(i)}(t|t-1) + \Pi_{k\gamma_2}^{(i)}(t)\hat{\bar{x}}^{(i)}(t-1|t-1) + \bar{G}_k^{(i)}(t-1)\epsilon_k^{(i)}(t-1),$$
with $\bar{G}_k^{(i)}(t) = \bar{R}^{(i)}(t)\Pi_{k,1-\gamma_2}^{(i)\mathrm{H}}(t)\Omega_k^{(i)-1}(t)$, and where Property A1.2 and the hypotheses on the model (H1)–(H4) have been used. Then, Equation (12) is easily deduced from the definition of the innovation and by characterizing (A4) for both $T_k$-proper scenarios.
In order to obtain expression (13) more easily, we express the innovation as follows:
$$\epsilon_k^{(i)}(t) = \underbrace{\left(D_{k\gamma_1}^{(i)}(t) - \Pi_{k\gamma_1}^{(i)}(t)\right)\bar{x}(t)}_{\epsilon_{k,a}^{(i)}(t)} + \underbrace{\left(D_{k\gamma_2}^{(i)}(t) - \Pi_{k\gamma_2}^{(i)}(t)\right)\bar{x}(t-1)}_{\epsilon_{k,b}^{(i)}(t)} + \underbrace{D_{k,1-\gamma_2}^{(i)}(t)\bar{v}^{(i)}(t)}_{\epsilon_{k,c}^{(i)}(t)} + \underbrace{\left(D_{k\gamma_2}^{(i)}(t) - \Pi_{k\gamma_2}^{(i)}(t)\right)\bar{v}^{(i)}(t-1)}_{\epsilon_{k,d}^{(i)}(t)} + \underbrace{\Pi_{k\gamma_1}^{(i)}(t)\tilde{\bar{x}}^{(i)}(t|t-1)}_{\epsilon_{k,e}^{(i)}(t)} + \underbrace{\Pi_{k\gamma_2}^{(i)}(t)\tilde{\bar{x}}^{(i)}(t-1|t-1)}_{\epsilon_{k,f}^{(i)}(t)} + \underbrace{\Pi_{k\gamma_2}^{(i)}(t)\tilde{\bar{v}}^{(i)}(t-1|t-1)}_{\epsilon_{k,g}^{(i)}(t)},$$
where x ¯ ˜ ( i ) ( s | t 1 ) = x ¯ ( s ) x ¯ ^ ( i ) ( s | t 1 ) , for s = t , t 1 , and v ¯ ˜ ( i ) ( t | t ) = v ¯ ( i ) ( t ) v ¯ ^ ( i ) ( t | t ) . Then, from (A5), and taking into account the hypotheses on the model, the only non null members in Θ ¯ k ( i ) ( t ) = E x ¯ ( t ) ϵ k ( i ) H ( t ) are those ones corresponding to the terms ϵ k , e ( i ) ( t ) , ϵ k , f ( i ) ( t ) and ϵ k , g ( i ) ( t ) . More specifically, denoting by P ¯ ( i ) ( s | t 1 ) = E x ¯ ˜ ( i ) ( s | t 1 ) x ¯ ˜ ( i ) H ( s | t 1 ) , for s = t , t 1 , we obtain that E x ¯ ( t ) ϵ k , e ( i ) H ( t ) = P ¯ ( i ) ( t | t 1 ) Π k γ 1 ( i ) H ( t ) ; and, from the state equation in (5), the hypotheses on the model (H1-H4) and Property A1, it is obtained that
E x ¯ ( t ) ϵ k , f ( i ) H ( t ) = Φ ¯ ( t 1 ) P ¯ ( i ) ( t 1 | t 1 ) H ¯ k ( i ) ( t 1 ) Θ ¯ k ( i ) H ( t 1 ) Π k γ 2 ( i ) H ( t ) , E x ¯ ( t ) ϵ k , g ( i ) H ( t ) = S ¯ ( i ) ( t 1 ) Φ ¯ ( t 1 ) Θ ¯ k ( i ) ( t 1 ) G ¯ k ( i ) H ( t 1 ) H ¯ k ( i ) ( t 1 ) Ω k ( i ) ( t 1 ) G ¯ k ( i ) H ( t 1 ) Π k γ 2 ( i ) H ( t ) .
Then, by reordering these terms and using the characteristics of both T k -proper scenarios, Equation (13) is derived. Its initial condition is immediately obtained from y k ( i ) ( 1 ) in Equation (7) and the hypotheses on the model.
Next, to derive the expression (15) for the pseudo-covariance matrix of the innovation, we use Equation (A5) and, as before, focus on the non-null terms under the hypotheses on the model. Thus,
$$\begin{aligned}
\mathrm{E}\big[\epsilon_{k,a}^{(i)}(t)\,\epsilon_{k,a}^{(i)H}(t)\big]&=\Psi_{1k}^{(i)}(t),\\
\mathrm{E}\big[\epsilon_{k,a}^{(i)}(t)\,\epsilon_{k,b}^{(i)H}(t)\big]+\mathrm{E}\big[\epsilon_{k,a}^{(i)}(t)\,\epsilon_{k,d}^{(i)H}(t)\big]&=\Psi_{2k}^{(i)}(t),\\
\mathrm{E}\big[\epsilon_{k,b}^{(i)}(t)\,\epsilon_{k,b}^{(i)H}(t)\big]&=\Psi_{3k}^{(i)}(t),\\
\mathrm{E}\big[\epsilon_{k,c}^{(i)}(t)\,\epsilon_{k,c}^{(i)H}(t)\big]+\mathrm{E}\big[\epsilon_{k,d}^{(i)}(t)\,\epsilon_{k,d}^{(i)H}(t)\big]&=\Psi_{4k}^{(i)}(t)-\mathcal{T}_k\Big(\mathrm{E}\big[\gamma_2^{r}(t)\big]\,\mathrm{E}\big[\gamma_2^{r}(t)\big]^{T}\otimes\mathcal{T}_n^{H}\,\bar{R}^{(i)}(t-1)\,\mathcal{T}_n\Big)\mathcal{T}_k^{H},\\
\mathrm{E}\big[\epsilon_{k,e}^{(i)}(t)\,\epsilon_{k,e}^{(i)H}(t)\big]&=\Pi_k^{\gamma_1^{(i)}}(t)\,\bar{P}^{(i)}(t|t-1)\,\Pi_k^{\gamma_1^{(i)}H}(t),\\
\mathrm{E}\big[\epsilon_{k,f}^{(i)}(t)\,\epsilon_{k,f}^{(i)H}(t)\big]&=\Pi_k^{\gamma_2^{(i)}}(t)\,\bar{P}^{(i)}(t-1|t-1)\,\Pi_k^{\gamma_2^{(i)}H}(t).
\end{aligned}\tag{A6}$$
Moreover, from the following expression, easily obtained from (A2) and the state equation in (5),
$$\tilde{\bar{x}}^{(i)}(t|t-1)=\bar{\Phi}(t-1)\,\tilde{\bar{x}}^{(i)}(t-1|t-1)+\bar{u}(t-1)-\bar{H}_k^{(i)}(t-1)\,\epsilon_k^{(i)}(t-1), \tag{A7}$$
and Property A1, under the hypotheses on the model, we have
$$\begin{aligned}
\mathrm{E}\big[\epsilon_{k,e}^{(i)}(t)\,\epsilon_{k,f}^{(i)H}(t)\big]&=\Pi_k^{\gamma_1^{(i)}}(t)\Big(\bar{\Phi}(t-1)\,\bar{P}^{(i)}(t-1|t-1)-\bar{H}_k^{(i)}(t-1)\,\bar{\Theta}_k^{(i)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(i)}H}(t),\\
\mathrm{E}\big[\epsilon_{k,e}^{(i)}(t)\,\epsilon_{k,g}^{(i)H}(t)\big]&=\Pi_k^{\gamma_1^{(i)}}(t)\Big(\bar{S}^{(i)}(t-1)-\bar{\Phi}(t-1)\,\bar{\Theta}_k^{(i)}(t-1)\,\bar{G}_k^{(i)H}(t-1)-\bar{H}_k^{(i)}(t-1)\,\Omega_k^{(i)}(t-1)\,\bar{G}_k^{(i)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(i)}H}(t),\\
\mathrm{E}\big[\epsilon_{k,f}^{(i)}(t)\,\epsilon_{k,g}^{(i)H}(t)\big]&=-\Pi_k^{\gamma_2^{(i)}}(t)\,\bar{\Theta}_k^{(i)}(t-1)\,\bar{G}_k^{(i)H}(t-1)\,\Pi_k^{\gamma_2^{(i)}H}(t),\\
\mathrm{E}\big[\epsilon_{k,g}^{(i)}(t)\,\epsilon_{k,g}^{(i)H}(t)\big]&=\Pi_k^{\gamma_2^{(i)}}(t)\Big(\bar{R}^{(i)}(t-1)-\bar{G}_k^{(i)}(t-1)\,\Omega_k^{(i)}(t-1)\,\bar{G}_k^{(i)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(i)}H}(t).
\end{aligned}\tag{A8}$$
Then, by using (A6) and (A8), the resulting expression is characterized for both $T_k$-proper scenarios, and thus (15) is derived. Its initial condition, as well as the recursive formula for $\bar{D}(t)=\mathrm{E}\big[\bar{x}(t)\,\bar{x}^H(t)\big]$ in (16), is immediately deduced from the model and the assumed hypotheses.
Finally, to derive Equation (17) for $P_k^{(i)}(t|t)$, we use the following expression, immediate from (A2),
$$\tilde{\bar{x}}^{(i)}(t|t)=\tilde{\bar{x}}^{(i)}(t|t-1)-\bar{\Theta}_k^{(i)}(t)\,\Omega_k^{(i)-1}(t)\,\epsilon_k^{(i)}(t), \tag{A9}$$
which leads to
$$\bar{P}^{(i)}(t|t)=\bar{P}^{(i)}(t|t-1)-\bar{\Theta}_k^{(i)}(t)\,\Omega_k^{(i)-1}(t)\,\bar{\Theta}_k^{(i)H}(t). \tag{A10}$$
Then, (17) is obtained by characterizing (A10) for both $T_k$-proper scenarios. Analogously, from (A7), the recursive formula (18) for $P_k^{(i)}(t+1|t)$ is deduced. Their initial conditions follow immediately from $\tilde{x}_k^{(i)}(0|0)=x_k(0)$ and $\tilde{x}_k^{(i)}(1|0)=x_k(1)$.
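The error-covariance correction (A10) is a Riccati-type update: it subtracts from the prediction-error covariance the part explained by the current innovation. A short NumPy sketch, under the assumption that the arguments are compatible (possibly complex) matrices; the names are illustrative, not from the paper's algorithms.

```python
import numpy as np

def covariance_update(P_pred, Theta, Omega):
    """Eq. (A10): P(t|t) = P(t|t-1) - Theta(t) Omega^{-1}(t) Theta^H(t)."""
    return P_pred - Theta @ np.linalg.inv(Omega) @ Theta.conj().T
```

Note that the update preserves Hermitian symmetry, since $\bar{\Theta}\,\Omega^{-1}\bar{\Theta}^H$ is Hermitian whenever $\Omega$ is.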

Appendix B. Proof of Lemmas 1–4

Appendix B.1. Proof of Lemma 1

(i)
From (A4), the matrices $\bar{\Theta}_k^{(i)}(t-1,t)=\mathrm{E}\big[\bar{x}(t-1)\,\epsilon_k^{(i)H}(t)\big]$ can be expressed as
$$\bar{\Theta}_k^{(i)}(t-1,t)=\mathrm{E}\big[\bar{x}(t-1)\,y_k^{(i)H}(t)\big]-\mathrm{E}\big[\bar{x}(t-1)\,\hat{\bar{x}}^{(i)H}(t|t-1)\big]\,\Pi_k^{\gamma_1^{(i)}H}(t)-\Big(\bar{K}^{(ii)}(t-1)+\bar{\Theta}_k^{(i)}(t-1)\,\bar{G}_k^{(i)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(i)}H}(t), \tag{A11}$$
where $\bar{K}^{(ii)}(t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t)\,\hat{\bar{x}}^{(i)H}(t|t)\big]$. Now, taking into account the observation Equation (7) and using (A3), it is easily deduced that
$$\begin{aligned}
\mathrm{E}\big[\bar{x}(t-1)\,y_k^{(i)H}(t)\big]&=\bar{D}(t-1)\,\bar{A}_k^{(i)H}(t-1),\\
\mathrm{E}\big[\bar{x}(t-1)\,\hat{\bar{x}}^{(i)H}(t|t-1)\big]&=\bar{K}^{(ii)}(t-1)\,\bar{\Phi}^H(t-1)+\bar{\Theta}_k^{(i)}(t-1)\,\bar{H}_k^{(i)H}(t-1),
\end{aligned}\tag{A12}$$
with $\bar{D}(t)$ computed in (16) and $\bar{A}_k^{(i)}(t)=\Pi_k^{\gamma_1^{(i)}}(t+1)\,\bar{\Phi}(t)+\Pi_k^{\gamma_2^{(i)}}(t+1)$.
Then, by substituting (A12) into (A11), reordering terms, and characterizing for both $T_k$-proper scenarios, (21) is derived.
(ii)
From (A4) and the hypotheses on the model, we obtain
$$\bar{\Theta}_{v_k}^{(ij)}(t)=\mathrm{E}\big[\bar{v}^{(i)}(t)\,\epsilon_k^{(j)H}(t)\big]=\bar{R}^{(i)}(t)\,\Pi_k^{1-\gamma_2^{(i)}H}(t)\,\delta_{ij},\quad t\ge 2, \tag{A13}$$
with $\bar{\Theta}_{v_k}^{(ij)}(1)=\bar{R}^{(i)}(1)\big[\mathrm{I}_{kn},\;0_{kn\times(4-k)n}\big]^{T}\delta_{ij}$. Equation (A13) is characterized for both $T_k$-proper scenarios to obtain (22).
(iii)
Again, from (A4), the following expression for $\bar{\Theta}_{v_k}^{(ij)}(t-1,t)=\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,\epsilon_k^{(j)H}(t)\big]$ is obtained:
$$\bar{\Theta}_{v_k}^{(ij)}(t-1,t)=\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,y_k^{(j)H}(t)\big]-\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,\hat{\bar{x}}^{(j)H}(t|t-1)\big]\,\Pi_k^{\gamma_1^{(j)}H}(t)-\Big(\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,\hat{\bar{x}}^{(j)H}(t-1|t-1)\big]+\bar{\Theta}_{v_k}^{(ij)}(t-1)\,\bar{G}_k^{(j)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(j)}H}(t). \tag{A14}$$
Moreover, from (7), the hypotheses on the model, Equations (A2) and (A3), and Property A1.2, the following equations are obtained:
$$\begin{aligned}
\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,y_k^{(j)H}(t)\big]&=\bar{S}^{(i)H}(t-1)\,\Pi_k^{\gamma_1^{(j)}H}(t)+\bar{R}^{(i)}(t-1)\,\Pi_k^{\gamma_2^{(i)}H}(t)\,\delta_{ij},\\
\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,\hat{\bar{x}}^{(j)H}(t|t-1)\big]&=\bar{\Theta}_{v_k}^{(ij)}(t-1)\,\big(\bar{\Phi}(t-1)\,\bar{L}_k^{(j)}(t-1)+\bar{H}_k^{(j)}(t-1)\big)^{H},\\
\mathrm{E}\big[\bar{v}^{(i)}(t-1)\,\hat{\bar{x}}^{(j)H}(t-1|t-1)\big]&=\bar{\Theta}_{v_k}^{(ij)}(t-1)\,\bar{L}_k^{(j)H}(t-1).
\end{aligned}\tag{A15}$$
Hence, Equation (23) is derived by substituting (A15) into (A14), grouping terms, and characterizing for both $T_k$-proper scenarios.

Appendix B.2. Proof of Lemma 2

From (A4), the matrices $\bar{L}_k^{(ij)}(t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t-1)\,\epsilon_k^{(j)H}(t)\big]$ can be expressed as
$$\bar{L}_k^{(ij)}(t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t-1)\,y_k^{(j)H}(t)\big]-\bar{K}^{(ij)}(t,t-1)\,\Pi_k^{\gamma_1^{(j)}H}(t)-\Big(\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t-1)\,\hat{\bar{x}}^{(j)H}(t-1|t-1)\big]+\bar{L}_k^{(ij)}(t,t-1)\,\bar{G}_k^{(j)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(j)}H}(t). \tag{A16}$$
Now, by using (7), the hypotheses on the model, Equations (A2) and (A3), and Property A1.2, we have
$$\begin{aligned}
\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t-1)\,y_k^{(j)H}(t)\big]={}&\bar{K}^{(ii)}(t,t-1)\,\Pi_k^{\gamma_1^{(j)}H}(t)+\Big(\bar{\Phi}(t-1)\,\bar{K}^{(ii)}(t-1)+\bar{H}_k^{(i)}(t-1)\,\bar{\Theta}_k^{(i)H}(t-1)\Big)\,\Pi_k^{\gamma_2^{(j)}H}(t)\\
&+\bar{C}_k^{(i)}(t-1)\,\bar{\Theta}_{v_k}^{(ji)H}(t-1)\,\Pi_k^{\gamma_2^{(j)}H}(t),\\
\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t-1)\,\hat{\bar{x}}^{(j)H}(t-1|t-1)\big]={}&\bar{\Phi}(t-1)\,\bar{K}^{(ij)}(t-1)+\bar{H}_k^{(i)}(t-1)\,\bar{N}_k^{(ji)H}(t-1),
\end{aligned}\tag{A17}$$
where $\bar{C}_k^{(i)}(t)=\bar{\Phi}(t)\,\bar{L}_k^{(i)}(t)+\bar{H}_k^{(i)}(t)$ and $\bar{N}_k^{(ij)}(t)=\bar{L}_k^{(ij)}(t)+\bar{L}_k^{(i)}(t)\,M_k^{(ij)}(t)$, with $M_k^{(ij)}(t)=\mathrm{E}\big[\epsilon_k^{(i)}(t)\,\epsilon_k^{(j)H}(t)\big]$. Then, Equation (24) is obtained from (A16) and (A17) by using the characteristics of the $T_k$-proper scenarios.
Secondly, Equation (25) is easily derived by using the following expression, obtained from (A2) and (A3),
$$\hat{\bar{x}}^{(i)}(t|t-1)=\bar{\Phi}(t-1)\,\hat{\bar{x}}^{(i)}(t-1|t-2)+\bar{C}_k^{(i)}(t-1)\,\epsilon_k^{(i)}(t-1),$$
and characterizing it for both $T_k$-proper scenarios.
Finally, by following a reasoning analogous to that used in the derivation of $\bar{L}_k^{(ij)}(t)$, Equation (26) for $\bar{L}_k^{(ij)}(t-1,t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,\epsilon_k^{(j)H}(t)\big]$ is deduced from the expressions below:
$$\bar{L}_k^{(ij)}(t-1,t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,y_k^{(j)H}(t)\big]-\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,\hat{\bar{x}}^{(j)H}(t|t-1)\big]\,\Pi_k^{\gamma_1^{(j)}H}(t)-\bar{K}^{(ij)}(t-1)\,\Pi_k^{\gamma_2^{(j)}H}(t)-\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,\epsilon_k^{(j)H}(t-1)\big]\,\bar{G}_k^{(j)H}(t-1)\,\Pi_k^{\gamma_2^{(j)}H}(t),$$
where, by using (7), the hypotheses on the model, Equations (A2) and (A3), and Property A1, it is obtained that
$$\begin{aligned}
\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,y_k^{(j)H}(t)\big]={}&\bar{K}^{(ii)}(t-1)\,\bar{A}_k^{(j)H}(t-1)+\bar{\Theta}_k^{(i)}(t-1)\,\bar{H}_k^{(i)H}(t-1)\,\Pi_k^{\gamma_1^{(j)}H}(t)\\
&+\bar{L}_k^{(i)}(t-1)\,\bar{\Theta}_{v_k}^{(ji)H}(t-1)\,\Pi_k^{\gamma_2^{(j)}H}(t),\\
\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,\epsilon_k^{(j)H}(t-1)\big]={}&\bar{N}_k^{(ij)}(t-1),
\end{aligned}$$
with $\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t-1|t-1)\,\hat{\bar{x}}^{(j)H}(t|t-1)\big]$ given in (A17).

Appendix B.3. Proof of Lemma 3

Equations (27) and (28) can be easily derived by using (A4), the hypotheses on the model, Property A1.2, and the $T_k$-properness characteristics.

Appendix B.4. Proof of Lemma 4

From (A2) and (A3), the expressions below for $\bar{K}^{(ij)}(t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t|t)\,\hat{\bar{x}}^{(j)H}(t|t)\big]$ and $\bar{K}^{(ij)}(t+1,t)=\mathrm{E}\big[\hat{\bar{x}}^{(i)}(t+1|t)\,\hat{\bar{x}}^{(j)H}(t+1|t)\big]$ can be obtained:
$$\bar{K}^{(ij)}(t)=\bar{K}^{(ij)}(t,t-1)+\bar{N}_k^{(ij)}(t)\,\bar{L}_k^{(j)H}(t)+\bar{L}_k^{(i)}(t)\,\bar{L}_k^{(ji)H}(t),$$
$$\bar{K}^{(ij)}(t+1,t)=\bar{\Phi}(t)\,\bar{K}^{(ij)}(t)\,\bar{\Phi}^H(t)+\bar{\Phi}(t)\,\bar{N}_k^{(ij)}(t)\,\bar{H}_k^{(j)H}(t)+\bar{H}_k^{(i)}(t)\,\bar{L}_k^{(ji)H}(t+1,t).$$
Therefore, by characterizing these equations for the $T_k$-proper scenarios, Equations (29) and (30) are obtained.
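The two cross-covariance recursions of this lemma translate directly into code. The sketch below is illustrative only: the argument names stand for $\bar{K}^{(ij)}(t,t-1)$, $\bar{N}_k^{(ij)}(t)$, $\bar{L}_k^{(j)}(t)$, $\bar{L}_k^{(i)}(t)$, $\bar{L}_k^{(ji)}(t)$, $\bar{\Phi}(t)$, $\bar{H}_k^{(j)}(t)$, $\bar{H}_k^{(i)}(t)$, and $\bar{L}_k^{(ji)}(t+1,t)$, and are hypothetical placeholders rather than the paper's notation.

```python
import numpy as np

def cross_cov_filter(K_pred, N_ij, L_j, L_i, L_ji):
    """Cross-covariance of the filters:
    K(ij)(t) = K(ij)(t,t-1) + N(ij)(t) L(j)^H(t) + L(i)(t) L(ji)^H(t)."""
    return K_pred + N_ij @ L_j.conj().T + L_i @ L_ji.conj().T

def cross_cov_prediction(K_ij, Phi, N_ij, H_j, H_i, L_ji_next):
    """Cross-covariance of the one-step predictors:
    K(ij)(t+1,t) = Phi K(ij)(t) Phi^H + Phi N(ij)(t) H(j)^H(t)
                   + H(i)(t) L(ji)^H(t+1,t)."""
    return (Phi @ K_ij @ Phi.conj().T
            + Phi @ N_ij @ H_j.conj().T
            + H_i @ L_ji_next.conj().T)
```

When the correction matrices vanish, the predictor cross-covariance reduces to the familiar propagation $\bar{\Phi}\,\bar{K}^{(ij)}\,\bar{\Phi}^H$, as expected.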

References

  1. Castanedo, F. A review of data fusion techniques. Sci. World. J. 2013, 2013, 704504. [Google Scholar] [CrossRef] [PubMed]
  2. Fourati, H. Multisensor Data Fusion: From Algorithms and Architectural Design to Applications, 1st ed.; CRC Press, Taylor and Francis Group LLC: Boca Raton, FL, USA, 2015. [Google Scholar]
  3. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion 2017, 38, 122–134. [Google Scholar] [CrossRef]
  4. Abu Bakr, M.; Lee, S. Distributed multisensor data fusion under unknown correlation and data inconsistency. Sensors 2017, 17, 2472. [Google Scholar] [CrossRef] [Green Version]
  5. Noack, B. State Estimation for Distributed Systems with Stochastic and Set-Membership Uncertainties; KIT Scientific Publishing: Karlsruhe, Germany, 2014. [Google Scholar]
  6. He, S.; Shin, H.-S.; Xu, S.; Tsourdos, A. Distributed estimation over a low-cost sensor network: A review of state-of-the-art. Inf. Fusion 2020, 54, 21–43. [Google Scholar] [CrossRef]
  7. Linares-Pérez, J.; Hermoso-Carazo, A.; Caballero-águila, R.; Jiménez-López, J.D. Least-squares linear filtering using observations coming from multiple sensors with one- or two-step random delay. Signal Process. 2009, 89, 2045–2052. [Google Scholar] [CrossRef]
  8. Ma, J.; Sun, S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple packet dropouts and uncertain observations. IEEE Sens. J. 2013, 13, 1228–1235. [Google Scholar] [CrossRef]
  9. Liu, W.-Q.; Wang, X.-M.; Deng, Z.-L. Robust centralized and weighted measurement fusion Kalman estimators for uncertain multisensor systems with linearly correlated white noises. Inf. Fusion 2017, 35, 11–25. [Google Scholar] [CrossRef]
  10. Liang, J.; Shen, B.; Dong, H.; Lam, J. Robust distributed state estimation for sensor networks with multiple stochastic communication delays. Int. J. Syst. Sci. 2011, 42, 1459–1471. [Google Scholar] [CrossRef]
  11. Lin, H.; Sun, S. Distributed fusion estimator for multi-sensor asynchronous sampling systems with missing measurements. IET Signal Process. 2016, 10, 724–731. [Google Scholar] [CrossRef]
  12. Sui, T.; Marelli, D.; Sun, X.; Fu, M. Multi-sensor state estimation over lossy channels using coded measurements. Automatica 2020, 111, 108561. [Google Scholar] [CrossRef]
  13. Xing, Z.; Xia, Y.; Yan, L.; Lu, K.; Gong, Q. Multisensor distributed weighted Kalman filter fusion with network delays, stochastic uncertainties, autocorrelated, and cross-correlated noises. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 716–726. [Google Scholar] [CrossRef]
  14. Zhang, J.; Gao, S.; Li, G.; Xia, J.; Qi, X.; Gao, B. Distributed recursive filtering for multi-sensor networked systems with multi-step sensor delays, missing measurements and correlated noise. Signal Process. 2021, 181, 107868. [Google Scholar] [CrossRef]
  15. Mo, Y.; Sinopoli, B. Kalman filtering with intermittent observations: Tail distribution and critical value. IEEE Trans. Autom. Control 2012, 57, 677–689. [Google Scholar]
  16. Ihler, A.; Fisher III, J.W.; Willsky, A.S. Loopy belief propagation: Convergence and effects of message errors. J. Mach. Learn. Technol. 2005, 6, 905–936. [Google Scholar]
  17. Duan, Y.; Zhang, X.; Li, Z. A new quaternion-based Kalman filter for human body motion tracking using the second estimator of the optimal quaternion algorithm and the joint angle constraint method with inertial and magnetic sensors. Sensors 2020, 20, 6018. [Google Scholar] [CrossRef]
  18. Yao, Y.; Du, Z.; Huang, X.; Li, R. Derivation and simulation verification of the relationship between world coordinates and local coordinates under virtual reality engine. Virtual Real. 2020, 24, 263–269. [Google Scholar] [CrossRef]
  19. Ortolani, F.; Comminiello, D.; Uncini, A. The widely linear block quaternion least mean square algorithm for fast computation in 3D audio systems. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2016), Salerno, Italy, 13–16 September 2016; p. 7738842. [Google Scholar]
  20. Celsi, M.R.; Scardapane, S.; Comminiello, D. Quaternion neural networks for 3D sound source location in reverberant environments. In Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2020), Espoo, Finland, 21–24 September 2020; p. 9231809. [Google Scholar]
  21. Grakhova, E.P.; Abdrakhmanova, G.I.; Schmidt, S.P.; Vinogradova, I.L.; Sultanov, A.K. The quadrature modulation of quaternion signals for capacity upgrade of high-speed fiber-optic wireless communication systems. In Proceedings of the SPIE—The Society for Optical Engineering, Munich, Germany, 23–27 June 2019; p. 11146. [Google Scholar]
  22. Ahmad, Z.; Hashim, S.J.; Rokhani, F.Z.; Al-Haddad, S.A.R.; Sali, A.; Takei, K. Quaternion model of higher-order rotating polarization wave modulation for high data rate M2M LPWAN communication. Sensors 2021, 21, 383. [Google Scholar] [CrossRef]
  23. Labunets, V.G. Hypercomplex models of multichannel images. Proc. Steklov Inst. Math. 2021, 313, S155–S168. [Google Scholar] [CrossRef]
  24. Augereau, B.; Carré, P. Hypercomplex polynomial wavelet-filter bank transform for color image. Signal Process. 2017, 136, 16–28. [Google Scholar] [CrossRef]
  25. Mennano, G.M.; Mazzotti, A. Deconvolution of multicomponent seismic data by means of quaternions: Theory and preliminary results. Geophys. Prospect. 2012, 60, 217–238. [Google Scholar] [CrossRef]
  26. Bahia, B.; Sacchi, M.D. Widely linear denoising of multicomponent seismic data. Geophys. Prospect. 2020, 68, 431–445. [Google Scholar] [CrossRef]
  27. Takahashi, K.; Fujita, M.; Hashimoto, M. Remarks on octonion-valued neural networks with application to robot manipulator control. In Proceedings of the IEEE International Conference on Mechatronics (ICM 2021), Kashiva, Japan, 7–9 March 2021; p. 9385617. [Google Scholar]
  28. Takahashi, K. Comparison of high-dimensional neural networks using hypercomplex numbers in a robot manipulator control. Artif. Life Robot. 2021, 26, 367–377. [Google Scholar] [CrossRef]
  29. Dogic, Z.; Sharma, P.; Zakhary, M.J. Hypercomplex liquid crystals. Annu. Rev. Condens. Matter Phys. 2014, 5, 137–157. [Google Scholar] [CrossRef] [Green Version]
  30. Ramírez-Tamayo, D.; Balcer, M.; Montoya, A.; Millwater, H. Mixed-mode stress intensity factors computation in functionally graded materials using a hypercomplex-variable finite element formulation. Int. J. Fract. 2020, 226, 219–232. [Google Scholar] [CrossRef]
  31. Gao, Z.Y.; Niu, X.J.; Guo, M.F. Quaternion-based Kalman filter for micro-machined strapdown attitude heading reference system. Chin. J. Aeronaut. 2002, 15, 171–175. [Google Scholar] [CrossRef] [Green Version]
  32. Martins, P.V.R.; Silva, O.M.; Lenzi, A. Insertion loss analysis of slender beams with periodic curvatures using quaternion-based parametrization, FE method and wave propagation approach. J. Sound Vib. 2019, 455, 82–95. [Google Scholar] [CrossRef]
  33. Sabatelli, S.; Sechi, F.; Fanucci, L.; Rocchi, A. A sensor fusion algorithm for an integrated angular position estimation with inertial measurement units. In Proceedings of the Design, Automation and Test in Europe (DATE 2011), Grenoble, France, 14–18 March 2011; pp. 273–276. [Google Scholar]
  34. Tannous, H.; Istrate, D.; Benlarbi-Delai, A.; Sarrazin, J.; Gamet, D.; Ho Ba Tho, M.C.; Dao, T.T. A new multi-sensor fusion scheme to improve the accuracy of knee flexion kinematics for functional rehabilitation movements. J. Sens. 2016, 16, 1914. [Google Scholar] [CrossRef] [Green Version]
  35. Talebi, S.; Kanna, S.; Mandic, D. A distributed quaternion Kalman filter with applications to smart grid and target tracking. IEEE. Trans. Signal Inf. Process. Netw. 2016, 2, 477–488. [Google Scholar] [CrossRef]
  36. Talebi, S.P.; Werner, S.; Mandic, D.P. Quaternion-valued distributed filtering and control. IEEE. Trans. Autom. Control 2020, 65, 4246–4256. [Google Scholar] [CrossRef]
  37. Wu, J.; Zhou, Z.; Fourati, H.; Li, R.; Liu, M. Generalized linear quaternion complementary filter for attitude estimation from multi-sensor observations: An optimization approach. IEEE. Trans. Autom. Sci. Eng. 2019, 16, 1330–1343. [Google Scholar] [CrossRef]
  38. Jiménez-López, J.D.; Fernández-Alcalá, R.M.; Navarro-Moreno, J.; Ruiz-Molina, J.C. Widely linear estimation of quaternion signals with intermittent observations. Signal Process. 2017, 136, 92–101. [Google Scholar] [CrossRef]
  39. Fernández-Alcalá, R.M.; Navarro-Moreno, J.; Jiménez-López, J.D.; Ruiz-Molina, J.C. Semi-widely linear estimation algorithms of quaternion signals with missing observations and correlated noises. J. Frankl. Inst. 2020, 357, 3075–3096. [Google Scholar] [CrossRef]
  40. Navarro-Moreno, J.; Fernández-Alcalá, R.M.; Jiménez-López, J.D.; Ruiz-Molina, J.C. Widely linear estimation for multisensor quaternion systems with mixed uncertainties in the observations. J. Frankl. Inst. 2019, 356, 3115–3138. [Google Scholar] [CrossRef]
  41. Navarro-Moreno, J.; Fernández-Alcalá, R.M.; Jiménez-López, J.D.; Ruiz-Molina, J.C. Tessarine signal processing under the T-properness condition. J. Frankl. Inst. 2020, 357, 10099–10125. [Google Scholar] [CrossRef]
  42. Navarro-Moreno, J.; Ruiz-Molina, J.C. Wide-sense Markov signals on the tessarine domain. A study under properness conditions. Signal Process. 2021, 183, 108022. [Google Scholar] [CrossRef]
  43. Fernández-Alcalá, R.M.; Navarro-Moreno, J.; Ruiz-Molina, J.C. T-proper hypercomplex centralized fusion estimation for randomly multiple sensor delays systems with correlated noises. Sensors 2021, 21, 5729. [Google Scholar] [CrossRef] [PubMed]
  44. Nitta, T.; Kobayashi, M.; Mandic, D.P. Hypercomplex widely linear estimation through the lens of underpinning geometry. IEEE Trans. Signal Process. 2019, 67, 3985–3994. [Google Scholar] [CrossRef]
Figure 1. Filtering error variances in the $T_1$-proper scenario (top) and the $T_2$-proper one (bottom).
Figure 2. $T_1$-proper fusion filtering error variances for updating probabilities: (1) $p_1^{(1)}=0.3$, $p_2^{(1)}=0.05$; (2) $p_1^{(1)}=0.5$, $p_2^{(1)}=0.05$; (3) $p_1^{(1)}=0.7$, $p_2^{(1)}=0.05$; (4) $p_1^{(1)}=0.9$, $p_2^{(1)}=0.05$; and delay probabilities: (5) $p_1^{(1)}=0.05$, $p_2^{(1)}=0.3$; (6) $p_1^{(1)}=0.05$, $p_2^{(1)}=0.5$; (7) $p_1^{(1)}=0.05$, $p_2^{(1)}=0.7$; (8) $p_1^{(1)}=0.05$, $p_2^{(1)}=0.9$.
Figure 3. $T_1$-proper fusion filtering error variances for updating probabilities: (1) $p_1^{(2)}=0.3$, $p_2^{(2)}=0.05$; (2) $p_1^{(2)}=0.5$, $p_2^{(2)}=0.05$; (3) $p_1^{(2)}=0.7$, $p_2^{(2)}=0.05$; (4) $p_1^{(2)}=0.9$, $p_2^{(2)}=0.05$; and delay probabilities: (5) $p_1^{(2)}=0.05$, $p_2^{(2)}=0.3$; (6) $p_1^{(2)}=0.05$, $p_2^{(2)}=0.5$; (7) $p_1^{(2)}=0.05$, $p_2^{(2)}=0.7$; (8) $p_1^{(2)}=0.05$, $p_2^{(2)}=0.9$.
Figure 4. $T_1$-proper fusion filtering error variances for updating probabilities: (1) $p_1^{(5)}=0.3$, $p_2^{(5)}=0.05$; (2) $p_1^{(5)}=0.5$, $p_2^{(5)}=0.05$; (3) $p_1^{(5)}=0.7$, $p_2^{(5)}=0.05$; (4) $p_1^{(5)}=0.9$, $p_2^{(5)}=0.05$; and delay probabilities: (5) $p_1^{(5)}=0.05$, $p_2^{(5)}=0.3$; (6) $p_1^{(5)}=0.05$, $p_2^{(5)}=0.5$; (7) $p_1^{(5)}=0.05$, $p_2^{(5)}=0.7$; (8) $p_1^{(5)}=0.05$, $p_2^{(5)}=0.9$.
Figure 5. $T_2$-proper fusion filtering error variances for updating probabilities: (1) $\big(p_{1,r}^{(1)},p_{1,\eta}^{(1)}\big)=(0.3,0.2)$; (2) $\big(p_{1,r}^{(1)},p_{1,\eta}^{(1)}\big)=(0.5,0.4)$; (3) $\big(p_{1,r}^{(1)},p_{1,\eta}^{(1)}\big)=(0.7,0.6)$; (4) $\big(p_{1,r}^{(1)},p_{1,\eta}^{(1)}\big)=(0.9,0.8)$; and delay probabilities: (5) $\big(p_{2,r}^{(1)},p_{2,\eta}^{(1)}\big)=(0.3,0.2)$; (6) $\big(p_{2,r}^{(1)},p_{2,\eta}^{(1)}\big)=(0.5,0.4)$; (7) $\big(p_{2,r}^{(1)},p_{2,\eta}^{(1)}\big)=(0.7,0.6)$; (8) $\big(p_{2,r}^{(1)},p_{2,\eta}^{(1)}\big)=(0.9,0.8)$. In each case, the remaining probabilities are 0.05.
Figure 6. $T_2$-proper fusion filtering error variances for updating probabilities: (1) $\big(p_{1,r}^{(2)},p_{1,\eta}^{(2)}\big)=(0.3,0.2)$; (2) $\big(p_{1,r}^{(2)},p_{1,\eta}^{(2)}\big)=(0.5,0.4)$; (3) $\big(p_{1,r}^{(2)},p_{1,\eta}^{(2)}\big)=(0.7,0.6)$; (4) $\big(p_{1,r}^{(2)},p_{1,\eta}^{(2)}\big)=(0.9,0.8)$; and delay probabilities: (5) $\big(p_{2,r}^{(2)},p_{2,\eta}^{(2)}\big)=(0.3,0.2)$; (6) $\big(p_{2,r}^{(2)},p_{2,\eta}^{(2)}\big)=(0.5,0.4)$; (7) $\big(p_{2,r}^{(2)},p_{2,\eta}^{(2)}\big)=(0.7,0.6)$; (8) $\big(p_{2,r}^{(2)},p_{2,\eta}^{(2)}\big)=(0.9,0.8)$. In each case, the remaining probabilities are 0.05.
Figure 7. $T_2$-proper fusion filtering error variances for updating probabilities: (1) $\big(p_{1,r}^{(5)},p_{1,\eta}^{(5)}\big)=(0.3,0.2)$; (2) $\big(p_{1,r}^{(5)},p_{1,\eta}^{(5)}\big)=(0.5,0.4)$; (3) $\big(p_{1,r}^{(5)},p_{1,\eta}^{(5)}\big)=(0.7,0.6)$; (4) $\big(p_{1,r}^{(5)},p_{1,\eta}^{(5)}\big)=(0.9,0.8)$; and delay probabilities: (5) $\big(p_{2,r}^{(5)},p_{2,\eta}^{(5)}\big)=(0.3,0.2)$; (6) $\big(p_{2,r}^{(5)},p_{2,\eta}^{(5)}\big)=(0.5,0.4)$; (7) $\big(p_{2,r}^{(5)},p_{2,\eta}^{(5)}\big)=(0.7,0.6)$; (8) $\big(p_{2,r}^{(5)},p_{2,\eta}^{(5)}\big)=(0.9,0.8)$. In each case, the remaining probabilities are 0.05.
Figure 8. $T_1$-proper fusion filtering error variances from updated observations with 3, 4 and 5 sensors.
Figure 9. $T_1$-proper fusion filtering error variances from delayed observations with 3, 4 and 5 sensors.

Share and Cite

MDPI and ACS Style

Jiménez-López, J.D.; Fernández-Alcalá, R.M.; Navarro-Moreno, J.; Ruiz-Molina, J.C. The Distributed and Centralized Fusion Filtering Problems of Tessarine Signals from Multi-Sensor Randomly Delayed and Missing Observations under Tk-Properness Conditions. Mathematics 2021, 9, 2961. https://doi.org/10.3390/math9222961
