𝕋-Proper Hypercomplex Centralized Fusion Estimation for Multiple-Sensor Systems with Random Delays and Correlated Noises

The centralized fusion estimation problem for discrete-time vectorial tessarine signals in multiple-sensor stochastic systems with random one-step delays and correlated noises is analyzed under different T-properness conditions. Based on T_k, k = 1, 2, linear processing, new centralized fusion filtering, prediction, and fixed-point smoothing algorithms are devised. These algorithms have the advantage of providing optimal estimators with a significant reduction in computational cost compared with a real-field or a widely linear processing approach. Simulation examples illustrate the effectiveness and applicability of the proposed algorithms, in which the superiority of the T_k linear estimators over their counterparts in the quaternion domain is apparent.


Introduction
Multi-sensor systems and related information fusion estimation theory have attracted much attention over the last few decades due to their wide range of applications in many fields, including target tracking, robotics, navigation, big data, and signal processing [1][2][3][4][5][6][7].
In practice, failures during data transmission are unavoidable and lead to uncertain systems. In this regard, a significant problem is the estimation of the state of systems with random sensor delays (see, for example, refs. [8][9][10][11][12][13]). Such delays may be caused mainly by computational load, heavy network traffic, and the limited bandwidth of the communication channel, among other limitations, which means that the measurements are not always up to date [8]. It is commonly assumed that measurement delays can be described by Bernoulli distributed random variables with known probabilities, where the values 1 and 0 of these variables indicate whether the measurement of the corresponding sensor is up to date or delayed [10].
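As an illustration of this delay model, the following Python sketch (our own toy setup, not from the paper) simulates a sensor whose received output at each time is either the current measurement (γ(t) = 1) or the previous one (γ(t) = 0), with γ(t) Bernoulli distributed:

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_measurements(z, p):
    """Simulate one-step random sensor delays.

    z : (T, m) array of true sensor outputs z(t).
    p : probability that a measurement arrives up to date, i.e.
        P(gamma(t) = 1) = p; gamma(t) = 0 means a one-step delay.

    Received sequence: y(t) = gamma(t) z(t) + (1 - gamma(t)) z(t - 1).
    """
    gamma = (rng.random(len(z)) < p).astype(int)
    gamma[0] = 1                      # nothing earlier to receive at t = 0
    y = z.copy()
    for t in range(1, len(z)):
        if gamma[t] == 0:             # delayed: the previous sample arrives
            y[t] = z[t - 1]
    return y, gamma

z = np.arange(10.0).reshape(-1, 1)    # toy scalar measurement sequence
y, gamma = delayed_measurements(z, p=0.7)
```

With independent Bernoulli indicators per sensor and per time instant, this is exactly the kind of uncertainty the multi-sensor model of Section 3 describes.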
Traditionally, there have been two basic approaches to processing the information from multiple sensors: centralized and distributed fusion. In the former, all the measurement data from the sensors are collected in a fusion center, where they are fused and processed, whereas in the distributed fusion method the measurements of each sensor are transmitted to a local processor, where they are processed independently before being sent to the fusion center. It is well known that centralized fusion methods lead to the best (optimal) solution when all sensors work healthily [14][15][16]. The strength of this approach lies in the fact that it is easy to implement and makes the best use of the available information. Accordingly, with the purpose of optimal estimation, the centralized fusion methodology has received increased attention in the recent literature on multi-sensor fusion estimation (see, for example, refs. [9,17,18,19]). Notwithstanding the foregoing, the main disadvantage of this approach is the high computational load it may require, especially when the number of sensors is large. Alternatively, distributed fusion methodologies are developed with the purpose of designing solutions with a reduced computational load. Although the distributed fusion approach offers better robustness, flexibility, and reliability due to its parallel structure, its main handicap is that the resulting solutions are suboptimal; hence, it is desirable to explore other alternatives that can alleviate the computational demand. In this respect, hypercomplex algebras may well offer an ideal framework in which to analyze the properness characteristics of the signals, which lead to lower computational costs without losing optimality.
In general, the use of hypercomplex algebras in signal processing problems has expanded rapidly because of their natural ability to model multi-dimensional data, giving rise to better geometrical interpretations. In this connection, quaternions and tessarines are 4D hypercomplex algebras composed of a real part and three imaginary parts, which gives them an ideal structure for describing three- and four-dimensional signals. Nowadays, they play a fundamental role in a variety of applications such as robotics, avionics, 3D graphics, and virtual reality [20]. In principle, the use of quaternions or tessarines means renouncing some of the usual algebraic properties of the real or complex fields: while quaternion algebra is non-commutative, the tessarines form a non-division algebra. These properties make each algebra more appropriate for specific problems. With this in mind, the application of these two isodimensional algebras is compared in [21][22][23][24], with the objective of showing how the choice of a particular algebra may determine the performance of the proposed method.
In the related literature, quaternion algebra has been widely used as a signal processing tool, and it remains a trending topic in different areas. In particular, in the area of multi-sensor fusion estimation, refs. [25,26] proposed sensor fusion estimation algorithms based on a quaternion extended Kalman filter, refs. [27,28] provided robust distributed quaternion Kalman filtering algorithms for data fusion over sensor networks dealing with three-dimensional data, and [29] designed a linear quaternion fusion filter from multi-sensor observations. A common characteristic of all the estimation algorithms above is that they are based on strictly linear (SL) processing. However, in the quaternion domain, the optimal linear processing is widely linear (WL), which requires the consideration of the quaternion signal and its three involutions. In this framework, ref. [30] devised WL filtering, prediction, and smoothing algorithms for multi-sensor systems with mixed uncertainties of sensor delays, packet dropouts, and missing observations. Interestingly, when the signal presents properness properties (cancellation of one or more of the three complementary covariance matrices), the optimal processing is SL (if the signal is Q-proper) or semi-widely linear (if the signal is C-proper), which amounts to operating on a vector of reduced dimension and hence to a significant reduction in the computational load of the associated algorithms (see [31][32][33][34] for further details).
On the other hand, the use of tessarines is less common in the signal processing literature and, to the best of the authors' knowledge, they have never been considered in multi-sensor fusion estimation problems. In general, the use of tessarines in estimation problems has been limited by the fact that they do not form a division algebra. This drawback was successfully overcome in [23] by introducing a metric that guarantees the existence and uniqueness of the optimal estimator. Moreover, although the optimal processing in the tessarine domain is WL processing, under properness conditions it is possible to obtain the optimal solution from estimation algorithms with lower computational costs. In this sense, refs. [23,24] introduced the concepts of T_1 and T_2-properness and provided a statistical test to determine whether a signal exhibits one of these properness properties. According to the type of properness, the most suitable form of processing is T_1 linear processing, which operates on the signal itself, or T_2 linear processing, based on the augmented vector given by the signal and its conjugate. The application of both T_1 and T_2 linear processing to the estimation problem has yielded optimal estimation algorithms of reduced dimension.
Motivated by the above discussions, in this paper we consider a tessarine multiple-sensor system where each sensor may be delayed at any time, independently of the others. The occurrence of each delay is modeled by a Bernoulli distribution. Moreover, unlike in most sensor fusion estimation algorithms, the observation noises of different sensors can be correlated. In this context, new centralized fusion filtering, prediction, and fixed-point smoothing algorithms are designed under both T_1 and T_2-properness conditions. The proposed algorithms provide the optimal estimations of the state, while their computational load is reduced with respect to the counterpart tessarine WL (TWL) estimation algorithms. It is important to note that such savings in computational demand cannot be achieved in the real field. The superiority of the algorithms obtained from a T_k linear approach over those derived in the quaternion domain is numerically demonstrated under different properness conditions.
The remainder of the paper is organized as follows. Section 2 introduces the notation used throughout the paper and briefly reviews the main concepts related to the processing of tessarine signals and their implications under T_k-properness. Then, in Section 3, the problem of estimating a tessarine signal in linear discrete stochastic systems with random state delays and multiple sensors is formulated. Concretely, under T_k-properness conditions, a compact state-space model of reduced dimension is proposed. From this model, and based on the T_k-properness properties, T_k centralized fusion filtering, step-ahead prediction, and fixed-point smoothing algorithms are devised in Section 4. Furthermore, the performance of these algorithms is numerically analyzed in Section 5 by means of a simulation example, where the superiority of the T_k estimation algorithms over their counterparts in the quaternion domain is evidenced. The paper ends with a section of conclusions. In order to maintain continuity, all technical proofs have been deferred to Appendices A-C.

Preliminaries
Throughout this paper, and unless otherwise stated, all random variables are assumed to have zero mean. Moreover, the notation and terminology are fairly standard; they are summarized in the following two subsections.

Notation
Matrices are indicated by boldfaced uppercase letters, column vectors by boldfaced lowercase letters, and scalar quantities by lightfaced lowercase letters. Moreover, the following notation is used.
- I_n denotes the identity matrix of dimension n; 0_{n×m} denotes the n × m zero matrix; 1_n and 0_n denote the n-vectors of all ones and all zeros, respectively.
- Superscripts *, T, and H stand for the tessarine conjugate, the transpose, and the Hermitian transpose, respectively.
- Subscript r represents the real part of a tessarine; subscript ν, for ν = η, η′, η″, represents the corresponding imaginary part.
- Z stands for the set of integers and R for the real field. Accordingly, A ∈ R^{n×m} means that A is a real n × m matrix, and similarly r ∈ R^n means that r is an n-dimensional real vector.
- T stands for the tessarine domain. Accordingly, A ∈ T^{n×m} means that A is an n × m tessarine matrix, and similarly r ∈ T^n means that r is an n-dimensional tessarine vector.
- E[·] is the expectation operator and Cov(·) the covariance operator.
- diag(·) is a diagonal (or block diagonal) matrix with the elements specified on the main diagonal.
- δ_{n,l} is the Kronecker delta function, equal to one if l = n and zero otherwise.
- ◦ denotes the Hadamard product and ⊗ the Kronecker product.
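As a quick illustration of these last two products (a NumPy aside, not part of the paper):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

H = A * B            # Hadamard product: element-wise, keeps the 2 x 2 shape
K = np.kron(A, B)    # Kronecker product: every entry of A scales B, giving 4 x 4
```

The Hadamard product preserves the common shape of its operands, while the Kronecker product of an n × m and a p × q matrix is np × mq.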

Basic Concepts and Properties
The following property of the Hadamard product will be useful.
Definition 1. A tessarine random signal x(t) ∈ T^n is a stochastic process of the form [23]

x(t) = x_r(t) + η x_η(t) + η′ x_η′(t) + η″ x_η″(t),  t ∈ Z,

where x_ν(t) ∈ R^n, for ν = r, η, η′, η″, are real random signals and the basis {1, η, η′, η″} obeys the commutative rules

η² = η″² = −1,  η′² = 1,  ηη′ = η″,  ηη″ = −η′,  η′η″ = η.

The conjugate of a given tessarine random signal x(t) ∈ T^n is x*(t) = x_r(t) − η x_η(t) + η′ x_η′(t) − η″ x_η″(t). Moreover, two auxiliary tessarine vectors, x^{η′}(t) and x^{η″}(t), are defined. For a complete description of the second-order statistical properties of x(t), we need to consider the augmented tessarine signal vector x̄(t) = [x^T(t), x^H(t), x^{η′T}(t), x^{η″T}(t)]^T, and a relationship can be established between this augmented vector and the real vector of the four components of x(t).

Definition 2. Given two tessarine random signals x(t), y(s) ∈ T^n, the product between them is defined as in (2). The following property of the product is easy to check. Note that, depending on the vanishing of the different pseudo correlation functions R_{xx^ν}(t, s), ν = *, η′, η″, various kinds of tessarine properness can be defined. In particular, the following properness conditions in the tessarine domain have recently been introduced in [23,24].

Definition 4. A random signal x(t) ∈ T^n is said to be T_1-proper (respectively, T_2-proper) if, and only if, the functions R_{xx^ν}(t, s), with ν = *, η′, η″ (respectively, ν = η′, η″), vanish for all t, s ∈ Z.
Note that T_1-properness is more restrictive than T_2-properness. Statistical tests to check experimentally whether a signal is T_k-proper, k = 1, 2, or improper have been proposed in [23,24].
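A componentwise sketch of the tessarine product may help here; the code below is our own illustration, using the standard tessarine basis rules η² = η″² = −1, η′² = 1, ηη′ = η″, ηη″ = −η′, η′η″ = η, with a tessarine stored as a 4-tuple of real components:

```python
def tess_mul(a, b):
    """Commutative tessarine product, with a = (a_r, a_eta, a_eta', a_eta'')
    stored as a 4-tuple of real components."""
    ar, a1, a2, a3 = a
    br, b1, b2, b3 = b
    return (ar*br - a1*b1 + a2*b2 - a3*b3,   # real part
            ar*b1 + a1*br + a2*b3 + a3*b2,   # eta part
            ar*b2 + a2*br - a1*b3 - a3*b1,   # eta' part
            ar*b3 + a3*br + a1*b2 + a2*b1)   # eta'' part

# Commutativity (unlike the quaternion product):
assert tess_mul((1, 2, 3, 4), (5, 6, 7, 8)) == tess_mul((5, 6, 7, 8), (1, 2, 3, 4))

# Non-division algebra: (1 + eta') and (1 - eta') are nonzero zero divisors.
assert tess_mul((1, 0, 1, 0), (1, 0, -1, 0)) == (0, 0, 0, 0)
```

The last assertion exhibits two nonzero tessarines whose product is zero, the non-division property mentioned in the Introduction and the reason the metric of [23] is needed for estimation.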
It should be highlighted that the different properness properties have direct implications for the optimal linear processing. In general, the optimal linear processing is the widely linear processing, which requires operating on the augmented tessarine vector x̄(t). Nevertheless, in the case of joint T_k-properness, k = 1, 2, the optimal linear processing reduces to T_k linear processing, with the corresponding decrease in the dimension of the problem. In particular, T_1 linear processing is based on the tessarine random signal itself, and T_2 linear processing considers the augmented vector given by the signal and its conjugate [24].

Problem Formulation
Consider the class of linear discrete stochastic systems with state delays and multiple sensors, where R is the number of sensors, ◦ is the product defined in (2), F_j(t) ∈ T^{n×n}, j = 1, 2, 3, 4, are deterministic matrices, x(t) ∈ T^n is the system state to be estimated, u(t) ∈ T^n is a tessarine noise, z^(i)(t) ∈ T^n is the ith sensor output with tessarine sensor noise v^(i)(t), and γ^(i)_{j,ν}(t) are Bernoulli random variables with possible outcomes {0, 1} that indicate whether the ν part of the jth observation component of the ith sensor is up to date (case γ^(i)_{j,ν}(t) = 1) or there exists a one-step delay (case γ^(i)_{j,ν}(t) = 0). The following assumptions are made for the above system (3).
Assumption 1. For a given sensor i, the Bernoulli variable vector γ^(i)(t) is independent of γ^(i)(s), for t ≠ s, and γ^(i)(t) is independent of γ^(j)(t), for any two sensors i ≠ j.

Assumption 2. For a given sensor i, γ^(i)(t) is independent of x(t), u(t), and v^(j)(t), for any i, j = 1, ..., R.

Assumption 3. u(t) and v^(i)(t) are correlated white noises with respective pseudo variances Q(t) and R^(i)(t).

Assumption 5. The initial state x(0) is independent of the additive noises u(t) and v^(i)(t), for t ≥ 0 and i = 1, ..., R.

Remark 1.
From the hypotheses established on the Bernoulli random variables it follows that, for any j 1 ,

One-State Delay System under T k -Properness
In this section, a TWL one-state delay system, which exploits the full second-order statistical information available, is introduced and analyzed under T_k-properness scenarios, k = 1, 2.
For this purpose, consider the augmented vectors x̄(t), z̄^(i)(t), and ȳ^(i)(t) of x(t), z^(i)(t), and y^(i)(t), respectively. Then, by applying Property 2 to system (3), the following TWL one-state delay model can be defined. Moreover, from Assumption 3, the pseudo correlation matrices associated with the augmented noise vectors ū(t) and v̄^(i)(t) are given by the corresponding expressions.

The following result establishes conditions on system (5) which lead to T_k-properness properties of the processes involved.

Proposition 1. Consider the TWL one-state delay model (5).

1.
If x(0) and u(t) are T 1 -proper, and Φ(t) is a block diagonal matrix of the form and u(t) and v (i) (t) are cross T 1 -proper, then x(t) and y (i) (t) are jointly T 1 -proper.

2.
If x(0) and u(t) are T_2-proper, Φ(t) is a block diagonal matrix of the form indicated, and u(t) and v^(i)(t) are cross T_2-proper, then x(t) and y^(i)(t) are jointly T_2-proper.
Proof. The proof follows immediately from the application of the corresponding conditions to system (5) and the computation of the augmented pseudo correlation matrices R_{x̄}(t, s) and R_{x̄ȳ^(i)}(t, s).
Remark 2. Note that, under T_1-properness conditions, this matrix, for i = 1, ..., R, takes the form of a block diagonal matrix as follows:

Compact State-Space Model
By stacking the observations of each sensor in a global observation vector z̄(t) = [z̄^(1)T(t), ..., z̄^(R)T(t)]^T, the TWL one-state delay system (5) can be rewritten in a compact form, where v̄(t) and ȳ(t) denote the stacked vectors of v̄^(i)T(t) and ȳ^(i)T(t), for i = 1, ..., R, respectively.
In this paper, our aim is to investigate the centralized fusion estimation problem under conditions of T_k-properness, k = 1, 2. In this sense, the use of the T_k-properness properties allows us to consider an observation equation of reduced dimension, where x̄(t) satisfies the state equation in (7); in the T_2-proper scenario, the corresponding matrix is, similarly, given by a block diagonal matrix. Accordingly, whereas the optimal linear processing for the estimation of a tessarine signal x(t) is the TWL processing based on the set of measurements {ȳ(1), ..., ȳ(t)}, under T_k-properness conditions the optimal estimator of x(t) ∈ T^n, x̂_{T_k}(t|s), can be computed by projecting onto the set of measurements {y_k(1), ..., y_k(s)}, for k = 1, 2. Thereby, T_k estimators are obtained that have the same performance as the TWL estimators but a lower computational complexity. More importantly, this computational saving cannot be achieved with the real approach.
Note that the set of tessarines is not a Hilbert space and, as a consequence, neither the existence nor the uniqueness of the projection onto a set of tessarines is guaranteed. Nevertheless, this drawback has been overcome in [23] by defining a suitable metric, which assures the existence and uniqueness of these projections.
The following property establishes the correlations between the noises, ū(t) and v̄(t), and both the augmented state x̄(t) and the observations y_k(t).

Property 3. Under Assumptions 1-4, the following correlations hold.

1.
Correlations between the noises and the augmented state, and correlations between the noises and the T_k observations.

Remark 4. Observe that, under a T_k-properness setting, the state equation in (7) is equivalent to the T_k state equation (9), where, in a T_1-proper scenario, x_1(t) = x(t) and u_1(t) = u(t). Nevertheless, Equation (9) cannot be used together with the observation Equation (8), since the latter involves the augmented state vector x̄(t).

T k -Proper Centralized Fusion Estimation Algorithms
In this section, the T_k centralized fusion filter, prediction, and fixed-point smoothing algorithms are designed on the basis of the set of observations {y_k(1), ..., y_k(s)}, k = 1, 2, defined in (8).
With this purpose in mind, the observation Equation (8) is used to devise filtering, prediction, and smoothing algorithms for the augmented state vector x̄(t). Then, by applying the T_k-properness properties, the recursive formulas for the filtering, prediction, and smoothing estimators of x_k(t) are easily determined. Finally, the desired T_k centralized fusion filtering, prediction, and fixed-point smoothing estimators are obtained as a subvector of them.
Theorems 1-3 summarize the recursive formulas for the computation of these T k estimators as well as their associated error variances.

T k Centralized Fusion Filter
Theorem 1. The optimal T_k centralized fusion filter x̂_{T_k}(t|t) and one-step predictor x̂_{T_k}(t + 1|t) for the state x(t) are obtained by extracting the first n components of the optimal estimators x̂_k(t|t) and x̂_k(t + 1|t), respectively, which are recursively computed from the corresponding expressions, with x̂_k(0|0) = 0_{kn} and x̂_k(1|0) = 0_{kn}.

Moreover, ε_k(t) are the innovations, calculated with m = kRn and ε_k(0) = 0_m, where Θ(t) is computed through its recursive equation and the innovations covariance matrix Ω_k(t) is obtained from the corresponding expression, with entries given in (4), and where D(t) = R_{x̄}(t, t) is recursively computed.

Finally, the T_k filtering and prediction error pseudo covariance matrices P_{T_k}(t|t) and P_{T_k}(t + 1|t), respectively, are obtained from the filtering and prediction error pseudo covariance matrices P_k(t|t) and P_k(t + 1|t), calculated from the corresponding recursive expressions.

Remark 5. In the implementation of the above algorithm, the particular structure of Σ(t) under T_k-properness conditions should be taken into consideration. In this regard, it is not difficult to check that Σ(t) is a block diagonal matrix.
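Since the displayed formulas of Theorem 1 are not reproduced above, the following Python sketch only illustrates the generic innovation-form predictor-corrector structure that the theorem instantiates (a standard Kalman-type recursion; the T_k-specific gain terms, delay probabilities, and noise cross-correlations of the actual algorithm are omitted, and all names are ours):

```python
import numpy as np

def kalman_step(x_pred, P_pred, y, F, H, Q, R):
    """One generic predictor-corrector step in innovation form."""
    innov = y - H @ x_pred                  # innovation eps(t)
    S = H @ P_pred @ H.T + R                # innovation covariance Omega(t)
    K = P_pred @ H.T @ np.linalg.inv(S)     # filter gain
    x_filt = x_pred + K @ innov             # filtered estimate x(t|t)
    P_filt = P_pred - K @ S @ K.T           # filtering error covariance P(t|t)
    x_next = F @ x_filt                     # one-step predictor x(t+1|t)
    P_next = F @ P_filt @ F.T + Q           # prediction error covariance
    return x_filt, P_filt, x_next, P_next

# Toy scalar run (all matrices ours, purely illustrative).
F = H = np.eye(1)
Q = np.zeros((1, 1)); R = np.eye(1)
x_filt, P_filt, x_next, P_next = kalman_step(
    np.zeros(1), np.eye(1), np.array([2.0]), F, H, Q, R)
```

In the actual T_k algorithm, the system matrices are the reduced-dimension T_k ones, so each recursion step operates on m = kRn-dimensional quantities rather than the 4Rn-dimensional TWL ones.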

T k Centralized Fusion Predictor
Theorem 2. The optimal T_k centralized fusion predictor x̂_{T_k}(t + τ|t) for the state x(t) is obtained by extracting the first n components of the optimal estimator x̂_k(t + τ|t), which is recursively computed from the corresponding expression, with initialization given by the one-step predictor x̂_k(t + 1|t) in (11).

Moreover, the T_k prediction error pseudo covariance matrix P_{T_k}(t + τ|t) is obtained from the prediction error pseudo covariance matrix P_k(t + τ|t), computed from the corresponding recursive expression, with initialization given by the one-step prediction error pseudo covariance matrix in (18).

T k Centralized Fusion Smoother
Theorem 3. The optimal T_k centralized fusion fixed-point smoother x̂_{T_k}(t|s), for a fixed instant t < s, for the state x(t) is obtained by extracting the first n components of the optimal estimator x̂_k(t|s), which is recursively computed with initial condition x̂_k(t|t) given by (10), and where the innovations ε_k(s) are recursively computed from (12), with initializations Θ_k(t, t) = Θ_k(t) given by (13) and E_k(t, t) = P_k(t|t). Furthermore, the T_k fixed-point smoothing error pseudo covariance matrix is recursively computed through the corresponding expression, with P_k(t|t) the filtering error pseudo covariance matrix in (17).
As mentioned above, the main advantage of the proposed T_k centralized fusion algorithms is that the resulting T_k centralized fusion estimators coincide with their optimal TWL counterparts, while leading to computational savings with respect to those derived from a TWL approach.

Remark 6. The computational demand of the proposed tessarine estimation algorithms under T_k-properness conditions, k = 1, 2, is similar to that of their counterparts in the quaternion domain, i.e., the QSL and QSWL estimation algorithms, respectively (see [34] for a comparative analysis of the computational complexity of quaternion estimators). Therefore, the computational load of the TWL estimation algorithms is of order O(64R³n³), whereas the T_k algorithms, k = 1, 2, are of order O(m³), with m = kRn.
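The order comparison in Remark 6 can be checked with a two-line computation (illustrative only): the cubic costs scale as (4Rn)³ for TWL versus (kRn)³ for T_k, so the R³n³ factors cancel in the ratio.

```python
def cost_ratio(k, R=1, n=1):
    """Ratio of the O((4Rn)^3) TWL cost to the O((kRn)^3) T_k cost;
    the R^3 n^3 factors cancel, so the ratio depends only on k."""
    return (4 * R * n) ** 3 / (k * R * n) ** 3

assert cost_ratio(1) == 64.0   # T_1 processing: 64 times cheaper than TWL
assert cost_ratio(2) == 8.0    # T_2 processing: 8 times cheaper than TWL
```

This is the source of the dimension-reduction advantage emphasized throughout the paper: the saving is structural and holds for any number of sensors R and any state dimension n.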

Simulation Examples
In this section, the effectiveness of the above T_k-proper centralized fusion estimation algorithms is experimentally analyzed. With this aim, the following simulation examples have been chosen to reveal the superiority of the proposed T_k-proper estimators over their counterparts in the quaternion domain when T_k-properness conditions are present.
Let us consider the following tessarine system with three sensors, with f_1 = 0.9 − 0.3η + 0.02η′ + 0.1η″ ∈ T. The following assumptions are made on the initial state and the additive noises.

1. The initial state x(0) is a tessarine Gaussian variable determined by its real covariance matrix.
2. u(t) is a tessarine white Gaussian noise with a real covariance matrix.
3. The measurement noises v^(i)(t) of the three sensors are tessarine white Gaussian noises, where the coefficients α_i are the constant scalars α_1 = 0.5, α_2 = 0.8, and α_3 = 0.4, and w^(i)(t), i = 1, 2, 3, are T_1-proper tessarine white Gaussian noises with zero means and real covariance matrices with β_1 = 4, β_2 = 8, and β_3 = 25, independent of u(t). Note that, if α_i = 0, then the noises u(t) and v^(i)(t) are uncorrelated; the further α_i is from 0, the stronger the correlation between u(t) and v^(i)(t).
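The correlated-noise setup of the example can be mimicked numerically. Here we assume the (not displayed) form v^(i)(t) = α_i u(t) + w^(i)(t), which is consistent with the remark that α_i = 0 makes u(t) and v^(i)(t) uncorrelated; the functional form, per-component scales, and names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sensor_noises(T, alphas, betas):
    """Draw T samples of the four real components of the state noise u(t)
    and of sensor noises v_i(t) = alpha_i * u(t) + w_i(t), with w_i(t)
    white Gaussian of variance beta_i per component (assumed form)."""
    u = rng.normal(size=(T, 4))
    v = [a * u + rng.normal(scale=np.sqrt(b), size=(T, 4))
         for a, b in zip(alphas, betas)]
    return u, v

u, v = sensor_noises(20_000, alphas=(0.5, 0.8, 0.4), betas=(4.0, 8.0, 25.0))
```

With these draws, the sample cross-covariance between u(t) and v^(i)(t) grows with α_i, exactly the behavior the example exploits when comparing the independent, low-correlation, and high-correlation scenarios.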
Moreover, at every sensor i, the Bernoulli random variables γ^(i)_ν(t) have the same probability p_i for all t. In this framework, a comparative study between the tessarine and quaternion approaches is carried out to evaluate the performance of the proposed filtering, prediction, and smoothing algorithms under T_1 and T_2-properness conditions. Specifically, besides the filtering problem, the 3-step prediction and the fixed-point smoothing at t = 20 problems are considered in our simulations.
Study Case 1: T_1-Proper Systems

Firstly, these differences are displayed in Figure 1, considering different degrees of correlation between the state and measurement noises: independent noises (α_1 = α_2 = α_3 = 0), low correlations (α_1 = 0.5, α_2 = 0.8, α_3 = 0.4), and high correlations (α_1 = 5, α_2 = 8, α_3 = 4), and two levels of uncertainty: high delay probabilities (p_1 = 0.5, p_2 = 0.2, p_3 = 0.4) and low delay probabilities (p_1 = 0.9, p_2 = 0.5, p_3 = 0.8). As we can see, in all situations these differences are positive, which indicates that the proposed T_1 estimators outperform the QSL estimators. Moreover, this superiority in performance increases when the correlation between the system noises is higher. With respect to the levels of uncertainty, a better behavior of the T_1 estimators over their QSL counterparts is generally observed in the scenario of high delay probabilities, i.e., when the Bernoulli probabilities are smaller. Next, in order to evaluate the performance of the proposed estimators versus the probability of delay, we consider the same Bernoulli probability in the three sensors (p_1 = p_2 = p_3 = p), and the differences between the T_1 and QSL error variances are computed for different values of p.
Figure 2 illustrates these differences for p = 0, 0.2, 0.4, 0.6, 0.8, 1. In these figures, the superiority in performance of the T_1 estimators over the QSL estimators is confirmed, since DE_1 > 0 in every case. Additionally, in the filtering and prediction problems, it is observed that this superiority is higher for the smallest Bernoulli probabilities, i.e., when the delay probabilities are greater. On the other hand, in the fixed-point smoothing problem, a similar behavior is obtained for Bernoulli probabilities p and 1 − p, the advantage of the T_1 smoothing algorithm over the QSL one being higher at intermediate values of p (cases p = 0.4 and p = 0.6). These results are examined in detail below. Our aim now is to analyze the benefits of our T_1 estimation algorithms in terms of the common Bernoulli probability p of the three sensors. In this analysis, different values of c in (26) are also considered. Then, the means of the differences between the T_1 and QSL filtering, prediction, and fixed-point smoothing error variances have been computed.

Here, DE^p_1(t|t), DE^p_1(t + 3|t), and DE^p_1(20|t) denote the differences between the T_1 and QSL filtering, 3-step prediction, and fixed-point smoothing error variances, respectively, for a value p of the Bernoulli probability. Note that, in the case c = 0, the noise u(t) is, besides T_1-proper, Q-proper, and a higher value of c means that the noise u(t) moves further away from the Q-properness condition. The results of this analysis are depicted in Figure 3, where, on the one hand, we can clearly observe that the best performance of the T_1 filtering and prediction estimators over their QSL counterparts is obtained for the smallest Bernoulli probabilities. Specifically, except for the case c = 0.8, the maximum difference between the T_1 and QSL errors is achieved when the Bernoulli probability takes the value 0, i.e., when the measurements are always one-step delayed. However, in the fixed-point smoothing problem, the T_1 approach is more advantageous when the Bernoulli probability p tends to 0.5. On the other hand, in every case, the superiority of our T_1 estimation algorithms is more evident as the parameter c in (26) grows, i.e., as the noise u(t) moves further away from the Q-properness condition.

Study Case 2: T 2 -Proper Systems
Consider the values a = 6 in (25) and b = c = 0.3 in (26), and the Bernoulli probabilities for the three sensors as in Section 5.1. Note that, under these conditions, both x(t) and y^(i)(t), i = 1, 2, 3, are jointly T_2-proper.
Thus, we are interested in comparing the behavior of the T_2 centralized fusion estimators with their counterparts in the quaternion domain, i.e., the quaternion semi-widely linear (QSWL) estimators. For this purpose, the T_2 and QSWL error variances, P_2(t|s) and P_QSWL(t|s), respectively, have been computed by considering different Bernoulli probabilities for the three sensors.
Figures 4 and 5 compare the difference between QSWL and T 2 centralized estimation error variances for different Bernoulli probabilities p 1 , p 2 and p 3 .Specifically, Figure 4 analyzes the filtering and 3-step prediction error variance differences DE 2 (t|t) and DE 2 (t + 3|t) for the following cases: 1.
From these figures, we can reaffirm that T_2 processing is a better approach than QSWL processing in terms of performance (DE_2 > 0). Moreover, in the filtering and 3-step prediction problems (Figure 4), this fact is more evident as the probabilities of the Bernoulli variables decrease (that is, as the delay probabilities increase).
The differences between the QSWL and T_2 error variances for the fixed-point smoothing problem are illustrated in Figure 5. Note that, since the behavior of the differences between the QSWL and T_2 fixed-point smoothing errors is similar for Bernoulli probability values p_i and 1 − p_i, these differences are analyzed in the following cases. In every situation, the better behavior of T_2 processing over QSWL processing is verified, and this superiority increases as the Bernoulli probabilities tend to 0.5, i.e., when there is a similar chance of receiving updated and delayed information.

Discussion
From among the different sensor fusion methods, it is the centralized fusion techniques that provide the optimal estimators from the measurements of all the sensors. Nevertheless, to avoid the computational load involved in these estimates, especially in systems with a large number of sensors, suboptimal estimation algorithms have traditionally been designed by using a distributed fusion approach. This paper has overcome these computational difficulties, without renouncing the optimal solution, by considering hypercomplex algebras. Quaternions and, more recently, tessarines are the most usual 4D hypercomplex algebras employed in signal processing. Commonly, since both quaternions and tessarines are isomorphic to R⁴, they involve the same computational complexity. Interestingly, under properness conditions, the dimension of the problem is halved for the QSWL and T_2-proper methods and divided by four for the QSL and T_1-proper methods, which leads to a significant reduction in the computational load of our algorithms. Precisely, it is in this context that hypercomplex algebras become an ideal tool, with computational advantages over the existing methods, to address the centralized fusion estimation problem.
In general, neither of these algebras always performs better than the other, and the choice of the most suitable one is conditioned by the characteristics of the signal. Its commutativity and reduced computational complexity make the tessarine algebra particularly interesting for our purposes. Thus, under T_k-properness conditions, filtering, prediction, and fixed-point smoothing algorithms of reduced dimension have been devised for the estimation of a vectorial tessarine signal based on one-step randomly delayed observations coming from multiple-sensor stochastic systems with different delay rates and correlated noises. The reduction of the dimension of the problem under T_k-properness scenarios allows these algorithms to compute the optimal estimates at a lower computational cost than the real processing approach. It should be highlighted that this computational saving cannot be attained in the real field.
The good performance of the algorithms proposed has been experimentally illustrated by means of two simulation examples, where the better behavior of the proposed T k estimates over their counterparts in the quaternion domain under T k -properness conditions has been evidenced.
In future research, we will set out to explore the design of decentralized fusion estimation algorithms for hypercomplex signals and investigate the use of new hypercomplex algebras in this field.
In order to simplify the proof of Theorem 1, the following results have been previously established.

Appendix A.1. Preliminary Results
The following property, stated without proof, about the correlations between the innovations ε k (t) and the augmented state x(t) and the noises ū(t) and v k (t), will be useful in the proof of Theorem 1.

Figure 1. Difference between QSL and T_1 error variances for the problem of (a) filtering, (b) 3-step prediction, and (c) fixed-point smoothing.

Figure 2. Difference between QSL and T_1 error variances for the problem of (a) filtering, (b) 3-step prediction, and (c) fixed-point smoothing.
These means are computed for p varying from 0 to 1 and for the values c = 0, 0.3, 0.6, and 0.8.

Figure 3. Mean of the difference between QSL and T_1 error variances for the problem of (a) filtering, (b) 3-step prediction, and (c) fixed-point smoothing.

Figure 4. Difference between QSWL and T_2 error variances for the problems of filtering (left column) and 3-step prediction (right column) for Cases 1-3.

Figure 5. Difference between QSWL and T_2 error variances for the fixed-point smoothing problem for Cases 4-6.