Distributed Ellipsoidal Intersection Fusion Estimation for Multi-Sensor Complex Systems

This paper investigates the problem of distributed ellipsoidal intersection (DEI) fusion estimation for linear time-varying multi-sensor complex systems with unknown input disturbances and measurement data transmission delays. The external unknown input disturbance signal is modeled with a non-informative prior distribution. A set of independent Bernoulli-distributed random variables describes the measurement data transmission delays caused by network channel congestion, and appropriate buffers are added at the link nodes to retrieve the delayed measurement values. For multi-sensor systems under these complex conditions, a minimum mean square error (MMSE) local estimator is designed in a Bayesian framework based on the maximum a posteriori (MAP) estimation criterion. To handle the unknown correlations among the local estimators while keeping the computational complexity of fusion low, the fusion estimator is designed using the ellipsoidal intersection (EI) fusion technique, and the consistency of the estimator is demonstrated. Finally, the differences among DEI fusion, distributed covariance intersection (DCI) fusion, and centralized fusion estimation are analyzed through a numerical example, demonstrating the superiority of the DEI fusion method.


Introduction
In recent years, multi-sensor systems have been widely used in sensor networks, artificial intelligence, integrated navigation, and industrial control. Because multi-sensor systems provide richer information for more accurate control of a system, information fusion estimation techniques for such systems have received wide attention and carry significant research interest [1][2][3][4][5][6][7]. For complex systems with multiple sensors, information fusion estimation methods are generally divided into centralized fusion estimation and distributed fusion estimation. The principle is to fuse multiple estimates into one highly reliable estimate according to the corresponding fusion algorithm [8]. In centralized fusion estimation, the measurement data from multiple sensors are processed using state measurement augmentation methods. In contrast, distributed fusion estimation, with its unique parallel structure, sends the local state estimates of the different sensors to a fusion center, which combines them according to the corresponding fusion rules [9].
The centralized fusion estimator can provide the best estimation accuracy when all sensors are working properly. However, if a sensor fails during operation, the centralized fusion estimator cannot detect and discard the faulty sensor in time, reducing the reliability of the fusion estimation results and increasing the error. A suboptimal distributed estimator with a parallel structure can solve this problem well: the parallel structure makes it easy to detect and isolate faulty sensors, so the distributed fusion estimator can maintain reliable estimates even when individual sensors fail.

Paper Contributions
In this paper, we study the problem of data fusion estimation for a linear time-varying multi-sensor complex system subject to two network-induced phenomena: unknown input disturbances and measurement transmission delays. To obtain a fusion estimator with high accuracy and low computational cost, distributed fusion estimation is used for the system estimation. The unknown input disturbance is modeled by a non-informative prior distribution, with all possible values described by a probability density function. The randomness of the measurement data transmission delay is described by a set of independent Bernoulli-distributed random variables, and a buffer of finite length is added at each link node to collect the delayed measurements. For the local estimator design, an MMSE local estimator is derived in a Bayesian framework from the state-conditional distribution and the maximum a posteriori (MAP) estimation criterion. When the local estimates are fused, the correlation between them is unknown due to the randomness of the measurement delays, which makes it difficult to obtain highly accurate fusion results. To solve this fusion estimation problem with unknown correlations, a distributed ellipsoidal intersection (DEI) fusion estimator is designed by analyzing distributed fusion algorithms. Compared with distributed covariance intersection (DCI) fusion estimation, the problem of overly conservative estimation results is resolved and the estimation accuracy is improved. The parallel structure also gives the designed estimator a lower computational cost than the centralized fusion estimator.

Paper Outline
The structure of this paper is as follows. In Section 2, we describe two network-induced phenomena in multi-sensor complex systems: unknown input disturbance and measurement data transmission delay. Section 3 designs a local estimator for multi-sensor complex systems based on the MMSE criterion. Section 4 derives the estimate for the multi-sensor complex system using the DEI fusion estimator and shows the consistency of the designed DEI fusion estimator. Numerical simulation results and a computational complexity analysis are given in Section 5. Conclusions are given in Section 6.

Problem Description
Let us consider a multi-sensor linear time-varying system disturbed by unknown input information:

x_{k+1} = A_k x_k + D_k d_k + ω_k,

where x_k ∈ R^n denotes the state vector at moment k, A_k denotes the time-varying state matrix matching the dimensionality of x_k, d_k ∈ R^p denotes the external input vector, D_k denotes the time-varying matrix matching the dimensionality of d_k, and the process noise ω_k ∈ R^n has zero mean and covariance matrix Q_k > 0. Additionally, we give the measurement equations for the sensors in the system:

y_{i,k} = C_{i,k} x_k + v_{i,k}, i = 1, …, L,

where y_{i,k} ∈ R^{m_i} denotes the measured data values in the ith network transmission channel, L is the total number of sensors, C_{i,k} denotes the time-varying matrix matching the dimensionality of x_k, and v_{i,k} ∈ R^{m_i} denotes the measurement noise with zero mean and covariance R_{i,k} > 0. The measurement noises of the channels are mutually independent, and the initial state x_0, which obeys a Gaussian distribution, is uncorrelated with ω_k and v_{i,k}. For the problem of data fusion estimation of a multi-sensor linear time-varying system with unknown input disturbances and measurement data transmission delays, the flow structure of the system is shown in Figure 1. The system works as follows: first, the system subject to unknown external input disturbances is observed by the sensors to obtain information about the system state at each moment. The obtained measurement information is transmitted to the corresponding link nodes through the network channel, and a series of local state estimates is generated by the designed MMSE estimator. The local state estimates are then fused at the fusion center to obtain the estimation result.
Since the external input vector d_k is unknown and its information is not available, it cannot participate in the design of the estimator; to keep the estimator design unaffected and the estimation accuracy unbiased, its effect must be handled explicitly. In [19], an assumption is adopted: the number of channels of the external input disturbance is guaranteed to be smaller than the number of state channels by controlling the rank of the time-varying matrix D_k, namely rank(D_k) = p with p < n. Additionally, since all possible values of the unknown input vector d_k appear with equal probability, we model d_k using a non-informative prior distribution [16]; its probability density function is flat, i.e., f(d_k) ∝ 1. Inspired by [16,19], under our hypothesis the unknown input coefficient matrix D_k must have full column rank, ensuring that the number of disturbance channels is smaller than the number of state channels. Meanwhile, a new matrix D_k^⊥ is constructed under the principle of orthogonal complementation, such that [D_k D_k^⊥] ∈ R^{n×n}, rank[D_k D_k^⊥] = n, and D_k^T D_k^⊥ = 0.
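The orthogonal complement D_k^⊥ used above can be computed numerically. Below is a minimal sketch in Python/NumPy that builds the null space of D^T from the SVD of D; the particular matrix D is a hypothetical example chosen only to illustrate the three conditions, not a value from the paper:

```python
import numpy as np

# Hypothetical disturbance matrix with rank(D) = p = 1 < n = 3.
D = np.array([[1.0], [2.0], [0.5]])
n, p = D.shape

# Orthogonal complement via SVD: the last n - p left singular vectors
# of D span the null space of D^T, so D^T @ D_perp = 0.
U, _, _ = np.linalg.svd(D)
D_perp = U[:, p:]

# Verify the three conditions: [D, D_perp] is n x n and full rank,
# and D^T @ D_perp = 0.
M = np.hstack([D, D_perp])
assert M.shape == (n, n)
assert np.linalg.matrix_rank(M) == n
assert np.allclose(D.T @ D_perp, 0.0)
```

Any basis of the null space of D^T works here; the SVD is used because it is numerically robust.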
First, we assume that the measurement data y_{i,k} is delayed in the network channel by θ_{i,k} time steps, where θ_{i,k} is a random variable. We model θ_{i,k} by the probability mass function f_i, and the θ_{i,k} for different channels and different moments are mutually independent. If no buffer exists at the link node, the transmission of measurement data from the sensor to the link node succeeds only if θ_{i,k} = 0; otherwise, the transmission fails. The arrival of the measurement data y_{i,k} at the link node can therefore be regarded as a Bernoulli process. To cope with the randomness of θ_{i,k}, we obtain its information by adding an appropriate buffer at the link node and reacquiring the delayed measurements from the buffer [19]. Here, we assume a buffer of length ε_i (ε_i ≥ 2), so that the link node can receive all measurement data delayed since time k − ε_i + 1; the earliest measurement update value κ_{i,k} for the kth moment and the ith buffer is defined accordingly. The receipt of the measurement y_{i,k} is indicated by introducing a sequence of binary variables γ^i_{t,k}: when γ^i_{t,k} = 1, the measurement is received at the kth moment or before. When the delay time equals or exceeds ε_i, the measurement data is discarded, and this case is treated as packet loss. We define the set of measurements in the ith buffer at the kth moment as 𝒴_{i,k}, where γ_{i,t} = γ^i_{t,k}; our goal is the estimation of the state x_k conditional on the measurement set 𝒴_k = {𝒴_{1,k}, 𝒴_{2,k}, …, 𝒴_{L,k}}.
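The delay-and-buffer mechanism above can be illustrated with a small simulation. The sketch below uses a hypothetical delay distribution and helper names of our own choosing: each measurement y_k receives a random delay θ_k; it is recoverable from a length-ε buffer only when θ_k < ε, and delays of ε or more are counted as packet losses:

```python
import random

def simulate_buffer(num_steps, eps, delay_pmf):
    """Simulate which measurements a link node can recover with a
    length-eps buffer: y_k is usable at time k + theta_k iff theta_k < eps;
    otherwise it counts as a packet loss. delay_pmf maps delay -> prob."""
    delays, losses = [], 0
    received = {}  # arrival time -> list of originating time steps
    for k in range(num_steps):
        # Draw theta_k from the discrete pmf by inverse-transform sampling.
        r, acc, theta = random.random(), 0.0, max(delay_pmf)
        for d, pr in sorted(delay_pmf.items()):
            acc += pr
            if r <= acc:
                theta = d
                break
        delays.append(theta)
        if theta < eps:
            received.setdefault(k + theta, []).append(k)
        else:
            losses += 1
    return delays, received, losses

random.seed(1)
pmf = {0: 0.6, 1: 0.2, 2: 0.1, 3: 0.1}  # hypothetical delay distribution
delays, received, losses = simulate_buffer(50, eps=2, delay_pmf=pmf)
```

With ε = 2 here, only delays of 0 or 1 step survive; everything else is dropped, mirroring the packet-loss rule in the text.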

Local Estimation of Complex Multi-Sensor Systems
In this section, in order to solve the problem of estimating the state x_k conditional on the measurement set 𝒴_k, we design local estimators at each link node to obtain the state estimates. The estimator is designed in a Bayesian framework, with the state x_k estimated from the measurement set 𝒴_k and the unknown input d_k modeled by a non-informative prior distribution. According to standard results in optimal estimation, the MMSE estimate equals the mean of the state distribution conditional on the measurement set, so the local estimator can be designed via the MMSE estimation approach. Our goal in designing the local estimator is to obtain a recursion for the conditional distribution of the state x_k given the measurement set 𝒴_k.
To ensure that the local estimator design is well-posed, we must verify that the coefficient matrix of the unknown input disturbance satisfies rank(D_k) = p with p < n. Introducing T_k ≜ [D_k D_k^⊥]^{-1} and L_k ≜ [0 I_{n−p}] T_k, and stacking the measurement matrix with L_{k−1}, one obtains, for m_i ≥ p, that rank([C_{i,k}^T L_{k−1}^T]^T) = n; this verifies, through the rank of the coefficient matrix D_k, the hypothesis that the number of independent measurement channels is not less than the number of channels of the unknown external input [19]. The hypothesis holds automatically when the measurement matrix C_{i,k} has full column rank. For a system satisfying the stated assumptions, we can verify the rank of D_k directly from the system expressions, whether the system is time-varying or not, ensuring the accuracy of the estimation results. For systems satisfying rank(D_k) = p, we design the estimator of the state x_k conditional on the measurement set 𝒴_k in a Bayesian framework, i.e.,

P(x_k | 𝒴_k) ∝ P(𝒴_k | x_k) p_X(x_k),

where P(𝒴_k | x_k) denotes the likelihood distribution and p_X(x_k) denotes the prior probability distribution.
Given the set of measurements 𝒴_{i,k} = {(γ_{i,0} y_{i,0}), (γ_{i,1} y_{i,1}), …, (γ_{i,t} y_{i,t}), …, (γ_{i,k} y_{i,k})} in the buffer of the ith link node, the posterior probability distribution of the state x_k conditional on 𝒴_{i,k} follows from Bayes' rule. The prior probability distribution p_X(x_k) = P(x_k | 𝒴_{i,k−1}), with the unknown input disturbance d_k modeled by the non-informative prior, is obtained by the law of total probability (Equation (9)). Converting Equation (9) into Gaussian form yields Equation (10), and by the marginalization property of the multivariate Gaussian distribution, Equation (10) reduces to Equation (11). From Equation (11), the prior probability p_X(x_k) obeys a Gaussian distribution (Equation (12)). Under the conditions that the prior p_X(x_k) follows a Gaussian distribution and the measurement noise also follows a Gaussian distribution, the MMSE estimate coincides with the MAP estimate, so we may instead compute the MAP estimate. Since the posterior distribution is proportional to the product of the likelihood and the prior, and the prior has already been found, it remains to compute the likelihood P(γ_{i,k} y_{i,k} | x_k).
Based on the measurement set 𝒴_{i,k}, the posterior probability distribution of the state x_k is given by Bayes' rule, and the maximized posterior distribution defines

x̂_MAP(γ_{i,k} y_{i,k}) = arg max_{x_k} P(x_k | γ_{i,k} y_{i,k}),

called the maximum a posteriori estimator of x_k. Substituting Equations (11) and (13) into (14), the posterior distribution P(x_k | γ_{i,k} y_{i,k}) satisfies a Gaussian form (Equation (15)), since both the prior distribution and the measurement noise are Gaussian. Thus, for a time-varying linear multi-sensor complex system with unknown input disturbances and measurement data transmission delays satisfying rank(D_k) = p, we obtain a Gaussian local estimator of the state x_k conditional on the measurement set 𝒴_{i,k}. Our goal is then to fuse the obtained local estimates at the fusion center to obtain estimation results with high accuracy.
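Because the prior and the measurement noise are Gaussian, the MAP estimate coincides with the posterior mean, which takes the familiar Kalman-style measurement-update form. The sketch below is a generic Gaussian MAP/MMSE update, not the paper's exact estimator (which additionally handles the unknown input through L_k):

```python
import numpy as np

def gaussian_map_update(x_bar, P_bar, C, R, y):
    """For a Gaussian prior N(x_bar, P_bar) and measurement y = C x + v
    with v ~ N(0, R), the posterior is Gaussian, so the MAP estimate
    equals the MMSE (posterior-mean) estimate."""
    S = C @ P_bar @ C.T + R               # innovation covariance
    K = P_bar @ C.T @ np.linalg.inv(S)    # gain
    x_hat = x_bar + K @ (y - C @ x_bar)   # posterior mean = MAP estimate
    P_hat = P_bar - K @ C @ P_bar         # posterior covariance
    return x_hat, P_hat

# Scalar illustration: prior N(0, 1), measurement y = x + v, v ~ N(0, 1), y = 2.
x_hat, P_hat = gaussian_map_update(
    np.zeros(1), np.eye(1), np.eye(1), np.eye(1), np.array([2.0]))
# x_hat -> [1.0], P_hat -> [[0.5]]: the posterior splits the difference
# between prior mean and measurement, and the uncertainty shrinks.
```

The shrinking posterior covariance is exactly what each link node's local estimator produces before the estimates are sent to the fusion center.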

Distributed Ellipsoidal Intersection (DEI) Fusion Estimation for Multi-Sensor Complex Systems
In this section, in order to solve the fusion problem for the multi-sensor local estimates, a distributed fusion estimation algorithm suitable for linear multi-sensor time-varying discrete systems with unknown input disturbances and measurement transmission delays is selected. When fusing the local estimates, we first consider the optimal matrix-weighted distributed fusion method, with the fusion equation

x̂_k = Σ_{i=1}^{L} Ω_{i,k} x̂_{i,k}, Σ_{i=1}^{L} Ω_{i,k} = I,

where Ω_{i,k} denotes the optimal weight matrix. However, the optimal weight matrices depend on the cross-covariances P̂^{i,j}_k (i ≠ j) between the sensors, and because the proposed multi-sensor system exhibits measurement transmission delays whose delay variables occur randomly in the channels, the correlation between sensors cannot be obtained [24]. An unknown correlation means that the cross-covariance cov(x_i, x_j) is not computable, so an analytic expression for P̂^{i,j}_k between sensors is not available, which causes difficulties in the design of the fusion estimator.
Currently, a commonly used method for fusion estimation under unknown correlations is the CI fusion technique, which parameterizes the fusion formula and avoids determining an expression for the cross-covariance cov(x_i, x_j) [25]. Although this approach is generally accepted, CI fusion is suboptimal: since it focuses on parameterizing the fusion formula rather than analyzing the correlation, it may lead to overly conservative fusion results [26]. For this situation, there is another method that parameterizes the local estimates when dealing with unknown correlations: the EI fusion method. This parameterization introduces three new random vectors that provide an explicit description of the correlation, expressing the correlation information in closed form. Conservative estimation is thus avoided while the unknown correlation information is still extracted, and the accuracy of the fusion is guaranteed [27].
Next, we analyze the EI fusion method. First, consider two Gaussian random vectors x_i, x_j ∈ R^n, both prior estimates of the state vector x ∈ R^n, i.e.,

x_i ∼ N(x̂_i, P_i), x_j ∼ N(x̂_j, P_j).

Our goal is to fuse the two prior estimates into a new estimate x_f that also obeys a Gaussian distribution, i.e., x_f ∼ N(x̂_f, P_f). It is also important to ensure that the fusion of the two prior estimates satisfies the consistency conditions P_f ⪯ P_i and P_f ⪯ P_j. To characterize the unknown correlation, the EI fusion technique introduces three new pairwise independent random vectors x_ii, x_ij, x_jj ∈ R^n with means μ_ii, γ, μ_jj ∈ R^n and variances Φ_ii, Γ, Φ_jj ∈ R^{n×n}, respectively. The prior estimates x_i, x_j are represented by the information of x_ii, x_ij, x_jj through a new function Ψ, where (Φ_ii^{-1} + Γ^{-1})^{-1} is the variance P_i of the prior estimate x_i and (Φ_ii^{-1} + Γ^{-1})^{-1}(Φ_ii^{-1} μ_ii + Γ^{-1} γ) is its mean x̂_i. According to the relationship between the random vectors x_ii, x_ij, x_jj and the prior estimates x_i, x_j, the cross-covariance cov(x_i, x_j) of the prior estimates can be expressed explicitly. Since the correlation is unknown, to cover an arbitrary correlation, the information in the cross-covariance cov(x_i, x_j) is maximized. Based on the determinant of the cross-covariance, it follows from Equation (22) that the problem of maximizing cov(x_i, x_j) can be transformed into the problem of minimizing Γ, i.e.,

Γ := arg min_Γ log|Γ| subject to Γ ⪰ P_i, Γ ⪰ P_j.
Any Gaussian random vector with mean x̂ and covariance P can be represented by the sublevel set ℰ(x̂, P) = {x ∈ R^n | (x − x̂)^T P^{-1}(x − x̂) ≤ 1}. To represent the minimizing Γ intuitively, it is characterized as the minimal ellipsoid containing ℰ(x̂_i, P_i) ∪ ℰ(x̂_j, P_j).
Since the prior estimates can be described by the introduced random variables, the fusion of the prior estimates is equivalent to the fusion of the introduced random variables, and conditioning the function Ψ yields the fusion result (Equation (24)). Substituting Equation (21) and the variable information into Equation (24) gives the fused estimate. In order to pursue a computationally inexpensive fusion algorithm, the mean γ and variance Γ of the random variable x_ij are represented using the information of the prior estimates [27], i.e., with [D_Γ]_qq = max(1, [D_j]_qq), q = 1, …, n. The eigenvalue decomposition P_i = S_i D_i S_i^{-1} of the matrix P_i yields the eigenvector matrix S_i and the diagonal eigenvalue matrix D_i, and a positive definite matrix admits the square-root decomposition A = LL^T. According to the transformation relation in [28], since the minimizing Γ describes the shape of the minimal ellipsoid covering P_i and P_j, we obtain D_Γ = max(1, D_j). This gives an algebraic expression for the correlation information between the local estimates.
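The construction of the minimizing Γ described above can be sketched numerically. The following is a minimal implementation of the eigendecomposition recipe (whiten with respect to P_i, eigendecompose the whitened P_j, and raise eigenvalues below one up to one), applied here to the 2-D covariances used later in the paper's example:

```python
import numpy as np

def mutual_covariance(P_i, P_j):
    """Mutual covariance Gamma for EI fusion: whiten w.r.t. P_i,
    eigendecompose the whitened P_j, clip eigenvalues below 1 up to 1."""
    d_i, S_i = np.linalg.eigh(P_i)
    T = S_i @ np.diag(np.sqrt(d_i))            # square root: P_i = T T^T
    T_inv = np.linalg.inv(T)
    d_j, S_j = np.linalg.eigh(T_inv @ P_j @ T_inv.T)
    D_gamma = np.diag(np.maximum(1.0, d_j))    # [D_Gamma]_qq = max(1, [D_j]_qq)
    return T @ S_j @ D_gamma @ S_j.T @ T.T

# Covariances from the 2-D numerical example below.
P1 = np.array([[2.0, -1.0], [-1.0, 1.0]])
P2 = np.array([[1.0 / 3.0, 0.0], [0.0, 2.0]])
Gamma = mutual_covariance(P1, P2)
# Gamma upper-bounds both covariances: Gamma - P1 >= 0 and Gamma - P2 >= 0.
```

In the whitened coordinates P_i becomes the identity, so clipping the eigenvalues at one is exactly what enforces Γ ⪰ P_i and Γ ⪰ P_j simultaneously.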
Based on the above description, the EI fusion technique provides both an explicit description of the unknown correlation between the prior estimates and a parameterization of the fusion formula, ensuring the accuracy of the fusion results at a reasonable computational cost.
Since the obtained local estimates obey Gaussian distributions, we use the EI fusion algorithm to design the estimator. To reduce the computational complexity of the fusion estimator, we fuse the local estimates pairwise in a sequential manner, obtaining the final fusion estimate after performing the EI fusion process L − 1 times [15]. The mean of the DEI fusion estimation result is x̂_k = x̂^{L−1}_{s,k}, and the error covariance is P̂_k = P^{L−1}_{s,k}. To visually compare the superiority of the EI fusion technique, we analyze CI fusion, EI fusion, and the minimizing Γ with a simple numerical example. Suppose the two prior estimates x_1, x_2 obey Gaussian distributions with mean 0 and covariance matrices

P_1 = [2 −1; −1 1], P_2 = [1/3 0; 0 2],

respectively. Converting the Gaussian distributions into sublevel-set ellipsoid descriptions yields the results shown in Figure 2. The red curve indicates the ellipsoidal result of CI fusion, the green curve the ellipsoidal result of EI fusion, and the blue dashed line the ellipsoid enclosed by the minimizing Γ. The results show that the area enclosed by the CI fusion algorithm is larger than the intersection region of the two local estimates, so its estimate is too conservative, whereas the EI fusion result lies within the intersection region of the two local estimates, so its fusion result is more accurate.
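The comparison in Figure 2 can be reproduced numerically. The sketch below fuses the two zero-mean estimates with EI (mutual covariance Γ built as described in the previous section) and with CI (scalar weight ω chosen by a grid search to minimize the determinant), then compares the determinants of the fused covariances, which scale with the squared ellipse areas:

```python
import numpy as np

P1 = np.array([[2.0, -1.0], [-1.0, 1.0]])
P2 = np.array([[1.0 / 3.0, 0.0], [0.0, 2.0]])

# EI: build Gamma, then P_EI = (P1^-1 + P2^-1 - Gamma^-1)^-1.
d_i, S_i = np.linalg.eigh(P1)
T = S_i @ np.diag(np.sqrt(d_i))
T_inv = np.linalg.inv(T)
d_j, S_j = np.linalg.eigh(T_inv @ P2 @ T_inv.T)
Gamma = T @ S_j @ np.diag(np.maximum(1.0, d_j)) @ S_j.T @ T.T
P_ei = np.linalg.inv(
    np.linalg.inv(P1) + np.linalg.inv(P2) - np.linalg.inv(Gamma))

# CI: P_CI^-1 = w P1^-1 + (1 - w) P2^-1, w chosen to minimize det(P_CI).
P_ci = min((np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
            for w in np.linspace(0.01, 0.99, 99)),
           key=np.linalg.det)

# The EI ellipse is strictly smaller than the best CI ellipse,
# and P_EI is consistent: P_EI <= P1 and P_EI <= P2.
print(np.linalg.det(P_ei), np.linalg.det(P_ci))
```

The determinant of P_EI comes out smaller than that of the best CI result, matching the qualitative conclusion drawn from Figure 2.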

Here, we discuss the consistency of the designed EI fusion estimator, inspired by [15] and based on the local estimates obeying Gaussian distributions; we take the fusion of the local estimates of sensors 1 and 2 for analysis. The covariance of the local estimates after EI fusion is obtained from Equation (23). Taking the inverse of both sides and then adding P̂^{-1}_{2,k} to both sides, it follows from Equation (25) that the first fusion result satisfies P^1_{s,k} ⪯ P_{2,k} and, similarly, P^1_{s,k} ⪯ P_{1,k}. From (27) and (28), the following conclusion can be drawn.
In the L − 1 steps of the fusion process, mathematical induction yields P^l_{s,k} ⪯ P_{i,k} for l = 1, …, L − 1 and i = 1, …, L. In summary, the distributed ellipsoidal intersection (DEI) fusion estimator designed in this paper is consistent, and the fusion estimator outperforms the individual local estimators.

Numerical Examples
In this section, the proposed DEI fusion is validated by a numerical example, demonstrating that it yields a consistent fusion estimate while resolving the unknown correlations in a complex multi-sensor system with unknown input disturbances and measurement data transmission delays. First, consider a complex multi-sensor linear time-varying system with unknown external inputs and measurement data transmission delays, with state matrix

A_k = [a_{11,k} a_{21,k} a_{31,k}; a_{12,k} a_{22,k} a_{32,k}; a_{13,k} a_{23,k} a_{33,k}],

where d_k is a Rayleigh-distributed random number with parameter 3.
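A simulation of such a system can be sketched as follows. The state-matrix elements and disturbance gain below are hypothetical placeholders (the paper's exact element expressions are not reproduced here), but d_k follows the stated Rayleigh distribution with parameter 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 3, 100

# Hypothetical time-varying state matrix (placeholder for the paper's
# element expressions) and disturbance gain; Q is the process noise cov.
def A(k):
    return 0.9 * np.eye(n) + 0.03 * np.sin(0.1 * k) * np.ones((n, n))

D = np.array([0.5, 0.3, 0.2])
Q = 0.01 * np.eye(n)

x = np.zeros(n)
traj = []
for k in range(steps):
    d_k = rng.rayleigh(scale=3.0)                  # Rayleigh parameter 3
    w_k = rng.multivariate_normal(np.zeros(n), Q)  # process noise
    x = A(k) @ x + D * d_k + w_k                   # x_{k+1} = A_k x_k + D_k d_k + w_k
    traj.append(x.copy())
traj = np.array(traj)
```

Feeding this trajectory through the delayed measurement channels and the local MMSE estimators would produce the local estimates that the DEI fusion center combines, as in Figure 3.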
The expressions for the elements of the state matrix and the respective measurement matrices of the three sensors are given accordingly. Following [26], the measurement data transmission delay time is described by a Poisson distribution with parameters λ_i (i = 1, 2, 3) and probability mass function f_i(j). The mean delay time for each channel is λ_1 = 5, λ_2 = 6, λ_3 = 5, and the buffer length used by each node is ε_1 = ε_2 = ε_3 = 7. The mean and covariance of the initial state are set accordingly. Figure 3 shows the DEI fusion estimates for state 1, state 2, and state 3. The black curves indicate the state values without disturbance from the external inputs, the red curves indicate the actual state values of the complex system, and the blue curves indicate the estimates of the DEI fusion estimator; the figure shows that the designed DEI fusion estimator estimates the complex multi-sensor system well.
Based on the property that any Gaussian distribution can be described by the sublevel set ℰ(x̂, P) = {x ∈ R^n | (x − x̂)^T P^{-1}(x − x̂) ≤ 1}, the superiority of the DEI estimator is verified by comparing the regions enclosed by the DCI and DEI fusion results [30]. Figure 4 shows the two local estimates (x_1, x_2) as 3D ellipsoids represented by sublevel sets, together with the regions enclosed by the two fusion algorithms. Figure 5 shows the volumes of the enclosed regions for both fusion algorithms. The conservative estimation of DCI fusion produces a larger enclosed volume than the DEI fusion result, which validates the superior performance of the DEI fusion estimator.
The function Ψ expresses the information shared by the local estimates; in this way, overly conservative estimations are avoided, the mutual information is taken into account, and the accuracy of the fusion improves. Next, we analyze the EI fusion of two local estimates x̂_1, x̂_2 ∈ R^n of the state vector x ∈ R^n, each with a Gaussian distribution. Our goal is to fuse the two estimates into a single Gaussian estimate while ensuring the consistency of the fusion result. To characterize the unknown correlation, new random variables γ_1, γ_2, γ ∈ R^n are introduced, where γ carries the information common to both estimates, and the local estimates are represented by a function Ψ: x̂_i := Ψ(γ_i, γ). According to this relation, the cross-covariance of the two local estimates, cov(x̂_1, x̂_2), can be expressed in terms of the introduced variables.

Next, let us discuss the computational cost of the designed DEI fusion estimator and compare it with that of the centralized estimator. First, we unify the dimensions of the different measurement equations, i.e., y_{i,k} ∈ R^{m_i}. It follows that the computational order of magnitude of the centralized estimator is O((Lm_i)^3), while that of the DEI estimator is O(Lm_i^3).
Since L is a positive integer greater than 1, L < L^3, so the computational cost of the DEI estimator is smaller than that of the centralized algorithm.
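This scaling argument is easy to check directly: with L sensors of unified measurement dimension m, the centralized cost O((Lm)^3) = O(L^3 m^3) exceeds the DEI cost O(Lm^3) by a factor of L^2.

```python
def centralized_flops(L: int, m: int) -> int:
    """Leading-order cost of the centralized (augmented) estimator: (L*m)^3."""
    return (L * m) ** 3

def dei_flops(L: int, m: int) -> int:
    """Leading-order cost of the DEI estimator: L * m^3."""
    return L * m ** 3

for L in (2, 3, 10):
    # The ratio equals L**2, so the gap widens quadratically with the sensor count.
    print(L, centralized_flops(L, 4) // dei_flops(L, 4))
```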
In centralized fusion estimation, the state estimation is handled using the state augmentation method, and the state transition matrix is an invertible, sparse matrix. From a computational complexity analysis, the computational order of magnitude of the centralized fusion estimator is O(2L^2 n^3 + 3(Ln)^2 m_i + 2Ln m_i^2 + m_i^3), while that of the designed DEI fusion estimator is O(2n^3 + 3n^2 m_i + 5n m_i^2 + m_i^3 + (L − 1) n^2 m_i + n m_i). When m_i = 1, the complexity analysis of the designed numerical example gives a computational order of O(748) for the centralized fusion estimator and O(148) for the DEI fusion estimator. The designed DEI fusion estimator therefore significantly reduces the computational cost.
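The two operation counts can be evaluated directly. The sketch below assumes n = 3 states, L = 3 sensors, and m_i = 1, matching the three-sensor, three-state example; the centralized expression reproduces the reported O(748), while the DEI expression as printed yields a similarly small constant on the order of the reported O(148) (the exact value depends on how the lower-order terms are grouped).

```python
def centralized_count(n: int, L: int, m: int) -> int:
    """Centralized fusion: 2*L^2*n^3 + 3*(L*n)^2*m + 2*L*n*m^2 + m^3."""
    return 2 * L**2 * n**3 + 3 * (L * n)**2 * m + 2 * L * n * m**2 + m**3

def dei_count(n: int, L: int, m: int) -> int:
    """DEI fusion: 2*n^3 + 3*n^2*m + 5*n*m^2 + m^3 + (L-1)*n^2*m + n*m."""
    return 2 * n**3 + 3 * n**2 * m + 5 * n * m**2 + m**3 + (L - 1) * n**2 * m + n * m

print(centralized_count(3, 3, 1))   # -> 748
print(dei_count(3, 3, 1))           # -> 118, far below the centralized count
```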
From the above results, it can be concluded that the designed DEI fusion estimator achieves the low computational cost that centralized fusion lacks while avoiding the over-conservatism of DCI fusion estimation, verifying the superiority of the designed fusion algorithm. Although a distributed fusion algorithm is suboptimal, the proposed DEI fusion estimator, with good accuracy and low computational cost, is preferable for the estimation of multi-sensor complex systems.

Conclusions
In this paper, we studied the problem of data fusion estimation for a complex multi-sensor system subject to two network-induced phenomena: unknown input disturbances and measurement transmission delays. The unknown input disturbance was modeled by a non-informative prior distribution, the measurement data transmission delay was represented by a set of independent Bernoulli processes, and a finite-length buffer was added at the link nodes to retrieve the delayed data. For the data fusion estimation problem, an MMSE local estimator was designed within a Bayesian framework for the multi-sensor complex system. To handle the unknown correlation between local estimates, a DEI fusion estimator that accommodates arbitrary correlations was designed, and the consistency of the fusion estimator was demonstrated. Simulation examples showed the superior tracking performance of the designed DEI fusion estimator and that it avoids both the conservatism of DCI fusion estimations and the high computational cost of centralized fusion. Although information fusion is developing rapidly, research on information fusion estimation still requires further effort.