Article

Distributed Maximum Correntropy Linear Filter Based on Rational Quadratic Kernel Against Non-Gaussian Noise

1 School of Cybersecurity, Northwestern Polytechnical University, Xi’an 710072, China
2 Department of Mathematics and Physics, Luoyang Institute of Science and Technology, Luoyang 471023, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 955; https://doi.org/10.3390/sym17060955
Submission received: 22 April 2025 / Revised: 5 June 2025 / Accepted: 12 June 2025 / Published: 16 June 2025
(This article belongs to the Section Computer)

Abstract:
This paper investigates the distributed state estimation problem for linear systems against non-Gaussian noise, where every sensor exchanges information only with its adjacent sensors, without the need for a fusion center. Correntropy is a similarity metric based on a kernel function that has symmetry: for any two data points, the output value of the kernel function does not depend on the order of the data points. By adopting a correntropy cost function based on the rational quadratic kernel function approximation to restrain non-Gaussian heavy-tailed noise, a centralized maximum correntropy Kalman filter is first derived for the linear sensor network system. Then the corresponding centralized maximum correntropy information filter is attained by employing the information matrices, which lays a foundation for further designing distributed information algorithms over multi-sensor networks. Thirdly, the distributed rational quadratic maximum correntropy information filter and the distributed adaptive rational quadratic maximum correntropy information filter are designed by exploiting the weighted consensus average to counter non-Gaussian heavy-tailed noise interference in sensor networks. Finally, the performance of the proposed algorithms is illustrated through numerical simulations on a sensor network system.

1. Introduction

The distributed state estimation (DSE) problem has drawn more and more attention in the network domain because the large-scale sensor network is universally employed in various areas, for example target tracking [1,2,3], power systems [4], wireless camera networks [5], and unmanned aerial vehicles [6]. The study of distributed state estimation in various sensor networks has gained wide popularity due to its various advantages over most centralized estimation techniques, such as robustness to individual sensor failures, a low communication burden, expandability to sensor topology variation, and adaptability to modular subsystem adjustments [7]. Dissimilar to the centralized estimation, the distributed approaches do not require the fusion center, where data is interchanged only with the adjacent sensors rather than all the sensors.
Consensus-based methods are among the most popular DSE algorithms and are divided into three types [8,9,10,11]. The first type is consensus on estimates (CE), where the consensus average is applied to the predictions or state estimations [12]. The drawback of CE is that it overlooks the information effectively contained in the error covariance, which may result in an underestimation of the uncertainty in the estimation. The second type is consensus on measurements (CM), where the consensus average is employed on the innovation pairs [13,14]. However, the innovation pairs need to be calculated and exchanged in the CM type, which may make the calculation process complex. The third type is consensus on information (CI), which runs a consensus average on the information matrices and information vectors [15]. The consensus on the information matrix and information vector fully utilizes all the information from the state estimation; therefore, the CI type is generally preferable to the other two. All of these distributed consensus algorithms are built on the centralized Kalman filter (CKF) or the nonlinear Kalman filter family, which assume Gaussian noise and employ the minimum mean square error (MMSE) criterion. However, non-Gaussian noise environments, such as heavy-tailed or impulsive noise, are commonly encountered in practical engineering applications, especially for low-cost sensor networks.
Based on the MMSE criterion, several distributed filter algorithms have been employed to deal with the non-Gaussian noise problem; the typical filter methods are the distributed particle filter [16,17,18] and the distributed Gaussian sum filter (GSF) [19]. In distributed particle filtering, each sensor performs local particle filtering and communicates with neighboring sensors to approximate a global estimate, which typically requires a large number of particles. Similarly, in the distributed Gaussian sum filter (GSF), each sensor implements the GSF to generate local estimates. These local estimates, represented by Gaussian components, are merged into a single Gaussian distribution using a moment-matching approach. The resulting merged estimates are then exchanged between neighboring sensors and fused using a weighted Kullback–Leibler divergence. However, a significant limitation of these filters is their high computational cost, which makes them impractical in sensor networks with constrained computational resources.
The performance of distributed state estimation methods can be significantly improved by employing the maximum correntropy criterion (MCC) instead of the aforementioned MMSE criterion, particularly in the presence of non-Gaussian noise disturbances [20,21,22,23]. B. Chen and X. Liu et al. [20] studied a maximum correntropy Kalman filter, which adopts MCC as the optimality criterion instead of the MMSE to deal with heavy-tailed impulsive noise. C. Lu and Z. Ren et al. [21] designed a robust recursive filter and smoother based on the cost function induced by the maximum mixture correntropy criterion for nonlinear dynamic models. J. Shao and W. Chen et al. [22] researched an adaptive multi-kernel-size-based maximum correntropy cubature Kalman filter to overcome excessive convergence problems against non-Gaussian noise. In contrast to the analytical solution of the KF or the KF-family algorithms, the MCC-based filtering algorithms cannot yield the desired filter gain directly, so an iterative scheme based on the Banach fixed-point theorem (also called the contraction mapping theorem) is adopted to realize engineering applications. Generally, the Gaussian kernel function is employed to build the cost function in the aforesaid MCC-based filter algorithms, whereas MCC-based filters based on the Gaussian kernel function do not always achieve high performance and robustness, due to the difficulty in choosing the optimal kernel function or the appearance of singular matrices, which cause the algorithms to collapse prematurely [24,25]. Therefore, it is crucial to explore alternative types of kernel functions to address the inherent limitations of the Gaussian kernel function in correntropy applications. The rational quadratic (RQ) kernel function can be regarded as an infinite sum of Gaussian kernels with different characteristic length scales [26,27].
Consequently, the RQ kernel function has the property that the underlying function exhibits smooth variation across a range of length scales, compared to the Gaussian kernel function [28]. It is therefore capable of capturing more intricate and unstable fluctuations in data trends than the Gaussian kernel. A challenging attempt has been made by utilizing the rational quadratic kernel function in place of the Gaussian kernel to overcome the problem of singular matrices [24]. However, the work in [24] concentrates only on state estimation with a single sensor, while the DSE problem based on rational quadratic kernel correntropy also plays a significant role in large-scale sensor networks. Therefore, the distributed maximum correntropy method based on a rational quadratic kernel function is of great research value and deserves further study.
In this paper, to further resolve the problem of the appearance of singular matrices and the kernel selection under the heavy-tailed and outliers of non-Gaussian noise in the sensor network system, the adaptive distributed maximum correntropy filter algorithm and its correlated algorithms, which employ the rational quadratic kernel function, are proposed to deal with the DSE problems. The major contributions of this paper are listed as follows:
  • The centralized rational quadratic maximum correntropy KF (CRQMCKF) is derived by adopting the correntropy cost function based on the rational quadratic kernel against the non-Gaussian noise for sensor networks.
  • The centralized rational quadratic maximum correntropy information filter (CRQMCIF) is derived to address the non-Gaussian noise disturbance, which is the corresponding information style of CRQMCKF.
  • The distributed rational quadratic maximum correntropy information filter (DRQMCIF) and the adaptive distributed rational quadratic maximum correntropy information filter (ADRQMCIF) algorithms, by adapting the consensus-weighted average based on the information matrices and information vectors, are acquired to handle the distributed state estimation problem in sensor networks.
The remaining parts of this paper are organized as follows. The relevant background knowledge, including correntropy and the weighted consensus average, is introduced in Section 2. In Section 3, the CRQMCKF is designed based on the correntropy cost function by exploiting the rational quadratic kernel function; the corresponding CRQMCIF algorithm is then obtained through the information matrices; and the DRQMCIF and ADRQMCIF algorithms are acquired by combining the weighted consensus average based on the information matrices and information vectors. The simulation results, which illustrate the robustness and estimation performance of the proposed algorithms, are shown in Section 4. The conclusions are presented in Section 5.
To increase the readability and comprehensibility of the paper, a nomenclature table is provided.

2. Preliminary

In this section, the preliminaries are summarized. First, the background on correntropy is presented and the rational quadratic kernel function is introduced for the correntropy approximation. Then, the weighted consensus average is introduced, which is applied to the topology graph in Section 3.

2.1. Correntropy

To enhance the estimation capability of the filter under non-Gaussian noise interference, this paper chooses the maximum correntropy criterion. Correntropy is one of the similarity metrics, which comes from information-theoretic learning [29]. For the two arbitrary random variables X , Y with the joint probability density function (pdf) p ( x , y ) , correntropy is defined as
$C(X,Y) = E[\kappa(X,Y)] = \iint \kappa(x,y)\,p(x,y)\,\mathrm{d}x\,\mathrm{d}y,$
where $E[\cdot]$ denotes the expectation operator, and $\kappa(\cdot,\cdot)$ is a positive definite kernel function satisfying the Mercer theorem [30,31]. The kernel function has symmetry: for any two data points x and y, the output value of the kernel function does not depend on the order of the data points. This is a fundamental property of kernel functions because it reflects the similarity, or the symmetry, of the inner product.
In general, the joint pdf p ( x , y ) is mostly unavailable in the practical application. Therefore, C ( X , Y ) is mostly approximated by the mean estimator as
$\hat{C}(X,Y) = \frac{1}{N}\sum_{i=1}^{N}\kappa\big(x(i), y(i)\big),$
in which $\{(x(i), y(i))\}_{i=1}^{N}$ are N sample points extracted from the pdf $p(x,y)$. For the kernel selection, different types of methods already exist [20,25,32,33], such as the Gaussian kernel, Student’s t kernel, Cauchy kernel, and Gaussian mixture kernel. In this article, the rational quadratic kernel function is chosen as
$\kappa_{RQ}(e) = \left(1 + \frac{\|e\|^2}{2\alpha b^2}\right)^{-\alpha}, \quad e = x - y,$
where α > 0 is a scale mixture parameter, b stands for the length scale of the kernel (i.e., kernel width), and b > 0 . Due to the presence of two coefficients to be determined in Equation (3), it is too complex for practical engineering applications. Currently, single-parameter rational quadratic kernel functions are commonly adopted in the applications and literature on rational quadratic kernel functions [34,35,36]. Thus, single-parameter rational quadratic kernel functions are also utilized in this paper. To be identical to the rational quadratic kernel function in the above-mentioned literature, α = 1 is chosen in this paper. Then, the rational quadratic kernel function is viewed as
$k_{RQ}(x,y) = k_{RQ}(e) = \left(1 + \frac{\|e\|^2}{2b^2}\right)^{-1} = 1 - \frac{\|e\|^2}{\|e\|^2 + 2b^2}, \quad e = x - y.$
The approximation of correntropy taken by the rational quadratic kernel function is expressed as
$\hat{C}(X,Y) = \frac{1}{N}\sum_{i=1}^{N}\kappa_{RQ}\big(e(i)\big),$
where $e(i) = x(i) - y(i)$, $i = 1, 2, \ldots, N$.
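As a quick numerical illustration, the kernel of Equation (4) and the sample estimator of Equation (5) can be sketched in a few lines of Python; the function names here are ours, not from the paper.

```python
import numpy as np

def rq_kernel(e, b):
    """Rational quadratic kernel with alpha = 1, Equation (4)."""
    e = np.asarray(e, dtype=float)
    return 1.0 / (1.0 + e ** 2 / (2.0 * b ** 2))

def correntropy_rq(x, y, b):
    """Sample-mean correntropy estimate, Equation (5)."""
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(rq_kernel(e, b)))

# The two algebraic forms in Equation (4) agree:
e, b = 3.0, 1.5
assert np.isclose(rq_kernel(e, b), 1.0 - e ** 2 / (e ** 2 + 2 * b ** 2))
# Identical sequences attain the maximal correntropy value 1:
assert correntropy_rq([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], b) == 1.0
```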
Remark 1. 
For correntropy approximations, the Gaussian kernel function is mostly adopted as the kernel, i.e.,
$k(x,y) = G_{\sigma}(e) = \exp\left(-\frac{\|e\|^2}{2\sigma^2}\right), \quad e = x - y,$
in which $\sigma > 0$ represents the Gaussian kernel width. As mentioned above, other kernel functions [25,32] and Gaussian mixture kernel functions [33] have also been investigated.

2.2. Weighted Consensus Average

The topology of the communication graph directly affects the consensus process in distributed filtering algorithms. In large-scale network systems, each node updates its estimate based not only on local measurements but also on information exchanged with its neighbors. The structure of the graph determines how efficiently and reliably information is propagated across the network. Therefore, the communication of a large-scale network system constituted by networked sensors is depicted by an undirected connected topology graph $G = (\nu, E)$, in which $\nu = \{1, \ldots, S\}$ denotes the set of sensors and $E = \{e_{sd} = (s,d) \mid s, d \in \nu\}$ is the set of communication links. If $(s,d) \in E$, the d-th sensor can receive the data transmitted from the s-th sensor and is regarded as a neighbor of the s-th sensor. Moreover, the neighborhood of the s-th sensor is denoted as $N_s = \{d \in \nu \mid (d,s) \in E\} \cup \{s\}$, which includes the s-th sensor itself. If no other sensor establishes a link with the s-th sensor, then $N_s = \{s\}$.
The weighted consensus average algorithm is the widely used distributed method for calculating the mean values. Suppose that every sensor s has an original value a s 0 , then each sensor accomplishes the following iteration
$a_s^l = \sum_{d \in N_s} \lambda_{s,d}\, a_d^{l-1}, \quad l = 1, 2, \ldots, L,$
in which l is the consensus iteration index, L is the maximum number of consensus iterations with $L \ge 2$; $\sum_{d \in N_s} \lambda_{s,d} = 1$ with $\lambda_{s,d} \ge 0$; and the weighted consensus matrix $\Lambda = (\lambda_{s,d})_{S \times S}$ is primitive [37].
As $L \to \infty$, the following holds [38,39]:
$\lim_{L \to \infty} a_s^L = \frac{1}{S}\sum_{d=1}^{S} a_d^0.$
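A minimal sketch of the weighted consensus average in Equations (7) and (8), on a hypothetical 4-sensor path graph with symmetric Metropolis-style weights (the graph and the initial values are invented for illustration):

```python
import numpy as np

S = 4
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}  # N_s includes s
deg = {s: len(neighbors[s]) for s in range(S)}

# Symmetric Metropolis-style weights (cf. Equation (40) later in the paper):
Lam = np.zeros((S, S))
for s in range(S):
    for d in neighbors[s]:
        if d != s:
            Lam[s, d] = 1.0 / (1.0 + max(deg[s], deg[d]))
    Lam[s, s] = 1.0 - Lam[s].sum()

a = np.array([1.0, 5.0, 3.0, 7.0])   # initial values a_s^0
for _ in range(200):                  # consensus iterations, Equation (7)
    a = Lam @ a
# Every sensor converges to the network-wide mean, Equation (8):
print(a)  # ≈ [4. 4. 4. 4.]
```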

3. Main Results

In this section, the centralized rational quadratic maximum correntropy Kalman filter is first derived for a linear sensor network system. Subsequently, the centralized rational quadratic maximum correntropy information filter algorithm is obtained. Then, the distributed rational quadratic maximum correntropy information filter and the adaptive distributed rational quadratic maximum correntropy information filter are deduced, which are realized through the weighted consensus average of the information pairs within the information filter.

3.1. Centralized Rational Quadratic Maximum Correntropy Kalman Filter

Consider a linear dynamic system and measurement model with S sensors depicted by
$x_k = \Phi_{k-1} x_{k-1} + \omega_{k-1},$
$z_k^s = H_k^s x_k + \upsilon_k^s, \quad s = 1, \ldots, S,$
where $x_k \in \mathbb{R}^n$ is the state vector at the k-th time step and $z_k^s \in \mathbb{R}^m$ is the measurement vector of the s-th sensor at the k-th time step; the state transition matrix $\Phi_{k-1} \in \mathbb{R}^{n \times n}$ and measurement matrix $H_k^s \in \mathbb{R}^{m \times n}$ are known in advance; $\omega_{k-1} \in \mathbb{R}^n$ and $\upsilon_k^s \in \mathbb{R}^m$ are the mutually independent non-Gaussian process and measurement noise vectors, with nominal process noise covariance matrix $Q_{k-1}$ and nominal measurement noise covariance matrix $R_k^s$ for the Kalman-family filter algorithms to proceed; and S is the total number of sensors. In the case of $S \ge 2$, the sensors are placed in a distributed manner.
Firstly, the CRQMCKF algorithm is devised as follows. Under the Kalman-family framework, the CRQMCKF algorithm comprises prediction and update steps. In the prediction, the prior mean vector $\hat{x}_{k|k-1}$ and the covariance matrix $P_{k|k-1}$ are given by
$\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1},$
$P_{k|k-1} = \Phi_{k-1} P_{k-1|k-1} \Phi_{k-1}^T + Q_{k-1}.$
The Cholesky decompositions of $P_{k|k-1}$ and $R_k^s$ are written as
$B_p B_p^T = P_{k|k-1}, \quad B_{r,s}(B_{r,s})^T = R_k^s.$
To deal with non-Gaussian interference, the cost function is constructed from the correntropy based on the rational quadratic kernel as
$J_{RQ}(x_k) = \frac{1}{n+m}\left[\sum_{i=1}^{n} k_{RQ}(e_x^i) + \sum_{j=1}^{m}\sum_{s=1}^{S} k_{RQ}(e_{z,s}^j)\right],$
where $k_{RQ}(\cdot)$ is the rational quadratic function described by Equation (4), and $e_x^i\ (i = 1, \ldots, n)$ and $e_{z,s}^j\ (j = 1, \ldots, m)$ are the i-th and j-th elements of the vectors $e_x$ and $e_{z,s}$:
$e_x = B_p^{-1}(x_k - \hat{x}_{k|k-1}),$
$e_{z,s} = (B_{r,s})^{-1}(z_k^s - H_k^s x_k), \quad s = 1, 2, \ldots, S.$
The state estimate $\hat{x}_{k|k}$ maximizes the above cost function, utilizing all sensors’ measurements, based on the maximum correntropy criterion (MCC):
$\hat{x}_{k|k} = \arg\max_{x_k} J_{RQ}(x_k).$
The optimal state estimate can be acquired by
$\frac{\partial J_{RQ}(x_k)}{\partial x_k} = 0.$
Equation (18) can be further written as
$-\frac{1}{2b^2}\sum_{i=1}^{n} k_{RQ}^2(e_x^i)\,(B_{p,i})^T B_{p,i}\,(x_k - \hat{x}_{k|k-1}) + \frac{1}{2b^2}\sum_{j=1}^{m}\sum_{s=1}^{S} k_{RQ}^2(e_{z,s}^j)\,(H_k^s)^T (B_{r,s,j})^T B_{r,s,j}\,(z_k^s - H_k^s x_k) = 0,$
where $B_{p,i}\ (i = 1, \ldots, n)$ and $B_{r,s,j}\ (j = 1, \ldots, m)$ are the i-th and j-th rows of the matrices $B_p^{-1}$ and $(B_{r,s})^{-1}$, respectively.
The matrix form of Equation (19) is expressed as
$B_p^{-T} C_x B_p^{-1}(x_k - \hat{x}_{k|k-1}) = \sum_{s=1}^{S}(H_k^s)^T (B_{r,s})^{-T} C_z^s (B_{r,s})^{-1}(z_k^s - H_k^s x_k),$
in which
$C_x = \mathrm{diag}\big(k_{RQ}^2(e_x^1), \ldots, k_{RQ}^2(e_x^n)\big),$
$C_z^s = \mathrm{diag}\big(k_{RQ}^2(e_{z,s}^1), \ldots, k_{RQ}^2(e_{z,s}^m)\big).$
Let
$\bar{P}_{k|k-1} = B_p C_x^{-1} B_p^T, \quad \bar{R}_k^s = B_{r,s}(C_z^s)^{-1}(B_{r,s})^T.$
Using the notation of Equation (23), Equation (20) can be expressed as
$\left[\bar{P}_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s\right] x_k = \bar{P}_{k|k-1}^{-1}\hat{x}_{k|k-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1} z_k^s.$
Adding and subtracting $\sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s\,\hat{x}_{k|k-1}$ on the right side of Equation (24) yields
$\left[\bar{P}_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s\right] x_k = \left[\bar{P}_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s\right]\hat{x}_{k|k-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}(z_k^s - H_k^s\hat{x}_{k|k-1}).$
Multiplying both sides of Equation (25) by $\left[\bar{P}_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s\right]^{-1}$, the state estimation $\hat{x}_{k|k}$ can be obtained as follows (see Appendix A):
$\hat{x}_{k|k} = \hat{x}_{k|k-1} + \bar{K}_k(\bar{z}_k - \bar{H}_k\hat{x}_{k|k-1}),$
where $\bar{z}_k$ is the stacked measurement vector, defined as
$\bar{z}_k = \left[(z_k^1)^T, (z_k^2)^T, \ldots, (z_k^S)^T\right]^T,$
and $\bar{H}_k$ is the stacked measurement matrix of all sensors, expressed as
$\bar{H}_k = \left[(H_k^1)^T, (H_k^2)^T, \ldots, (H_k^S)^T\right]^T.$
K ¯ k is the gain matrix, which is given as
$\bar{K}_k = \left[\bar{P}_{k|k-1}^{-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\right]^{-1}\bar{H}_k^T\bar{R}_k^{-1},$
in which
$\bar{R}_k = \mathrm{diag}(\bar{R}_k^1, \bar{R}_k^2, \ldots, \bar{R}_k^S).$
$\bar{K}_k$ can also be rewritten as
$\bar{K}_k = \bar{P}_{k|k-1}\bar{H}_k^T\left[\bar{R}_k + \bar{H}_k\bar{P}_{k|k-1}\bar{H}_k^T\right]^{-1}.$
The detailed derivation of Equation (31) can be seen in Appendix B.
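The equivalence of the two gain expressions, Equations (29) and (31), is the standard matrix inversion lemma; it can be spot-checked numerically on randomly generated positive definite matrices (all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, mS = 3, 4                                    # state dim, stacked measurement dim
A = rng.standard_normal((n, n)); P_bar = A @ A.T + n * np.eye(n)     # SPD P̄
B = rng.standard_normal((mS, mS)); R_bar = B @ B.T + mS * np.eye(mS)  # SPD R̄
H_bar = rng.standard_normal((mS, n))

# Equation (29): information-weighted form of the gain
K1 = np.linalg.inv(np.linalg.inv(P_bar) + H_bar.T @ np.linalg.inv(R_bar) @ H_bar) \
     @ H_bar.T @ np.linalg.inv(R_bar)
# Equation (31): innovation-covariance form of the gain
K2 = P_bar @ H_bar.T @ np.linalg.inv(R_bar + H_bar @ P_bar @ H_bar.T)
assert np.allclose(K1, K2)
```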
The state error covariance matrix $P_{k|k}$ can be calculated by
$P_{k|k} = (I - \bar{K}_k\bar{H}_k)P_{k|k-1}(I - \bar{K}_k\bar{H}_k)^T + \bar{K}_k R_k \bar{K}_k^T,$
where $R_k = \mathrm{diag}(R_k^1, \ldots, R_k^S)$ is the stacked nominal measurement noise covariance matrix.
The CRQMCKF algorithm is outlined in Algorithm 1.
Algorithm 1 Centralized rational quadratic maximum correntropy Kalman filter (CRQMCKF)
Initialization: $\hat{x}_{0|0} = \hat{x}_0$, $P_{0|0} = P_0$;
For k = 1, 2, 3, …, do
  Prediction: update $\hat{x}_{k|k-1}$ and $P_{k|k-1}$ based on Equations (11) and (12);
  Update: select the kernel width b; use Equations (13), (15), (16), (21)–(23), and (26)–(32) to calculate $\hat{x}_{k|k}$ and $P_{k|k}$;
End.
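A compact Python sketch of one CRQMCKF measurement update following Algorithm 1, with a simple fixed-point loop in place of a formal convergence test. The helper names and the two-sensor example data are ours; this is an illustrative sketch under the paper's equations, not the authors' implementation.

```python
import numpy as np

def rq_sq(e, b):
    """Squared rational quadratic kernel k_RQ^2(e), Equation (4)."""
    return (1.0 / (1.0 + np.asarray(e) ** 2 / (2.0 * b ** 2))) ** 2

def blkdiag(blocks):
    """Small block-diagonal helper (avoids a SciPy dependency)."""
    N = sum(B.shape[0] for B in blocks)
    out, i = np.zeros((N, N)), 0
    for B in blocks:
        out[i:i + B.shape[0], i:i + B.shape[0]] = B
        i += B.shape[0]
    return out

def crqmckf_update(x_pred, P_pred, zs, Hs, Rs, b, iters=10):
    """One CRQMCKF measurement update (Algorithm 1): zs, Hs, Rs are the
    per-sensor measurements, measurement matrices, and nominal covariances."""
    Bp = np.linalg.cholesky(P_pred)                      # Eq. (13)
    Brs = [np.linalg.cholesky(R) for R in Rs]
    H_bar, z_bar = np.vstack(Hs), np.concatenate(zs)     # Eqs. (27), (28)
    x = x_pred.copy()
    for _ in range(iters):                               # fixed-point iteration
        C_x = np.diag(rq_sq(np.linalg.solve(Bp, x - x_pred), b))  # Eqs. (15), (21)
        P_bar = Bp @ np.linalg.inv(C_x) @ Bp.T                    # Eq. (23)
        R_bar = blkdiag([Br @ np.linalg.inv(
            np.diag(rq_sq(np.linalg.solve(Br, z - H @ x), b))) @ Br.T
            for z, H, Br in zip(zs, Hs, Brs)])                    # Eqs. (16), (22), (30)
        K = P_bar @ H_bar.T @ np.linalg.inv(R_bar + H_bar @ P_bar @ H_bar.T)  # Eq. (31)
        x = x_pred + K @ (z_bar - H_bar @ x_pred)                 # Eq. (26)
    I_KH = np.eye(x_pred.size) - K @ H_bar
    P = I_KH @ P_pred @ I_KH.T + K @ blkdiag(Rs) @ K.T            # Eq. (32)
    return x, P

# Example use with invented two-sensor data:
x_hat, P_hat = crqmckf_update(
    np.zeros(2), np.eye(2),
    zs=[np.array([1.0, 0.5]), np.array([0.9, 0.6])],
    Hs=[np.eye(2), np.eye(2)],
    Rs=[0.5 * np.eye(2), 0.5 * np.eye(2)], b=2.0)
```

The Joseph-form covariance update at the end keeps $P_{k|k}$ symmetric positive definite even when the gain is perturbed by the reweighting.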
Remark 2. 
Compared to the conventional centralized Kalman filter (CKF), the proposed CRQMCKF algorithm introduces $C_x$ and $C_z^s$ for the s-th sensor to modify the estimation performance; these matrices change according to the deformation ($e_x^i$ in Equation (15) and $e_{z,s}^j$ in Equation (16)) of the one-step prediction errors and every sensor’s innovation through the kernel width b. When the model is disturbed by large errors, $C_x$ and $C_z^s$ shrink and decay according to an inverse proportional function in Equation (4) (the rational quadratic function in Equation (4) can be identically rewritten as $k_{RQ}(x,y) = 2b^2/(\|e\|^2 + 2b^2)$, which is a translation of the inverse proportional function and has the same graph trend). Mathematically, the rational quadratic function decays much more slowly than the exponential function, so $C_x$ and $C_z^s$ do not easily vanish when a large outlier or impulsive noise arises. This property avoids singular matrices under the computer precision requirements for multi-dimensional variables. In particular, when $e_x^i, e_{z,s}^j \to \infty$, then $C_x \to 0$ and $C_z^s \to 0$ and the update degenerates to the prediction, which illustrates that the algorithm automatically avoids the bad influence of abnormal errors produced by system or measurement outliers.
Remark 3. 
The kernel width b has a significant impact on the estimation performance of the algorithm. When the kernel width b is too small, the estimation performance does not improve and sometimes even degrades, because the rational quadratic function values tend to zero and no longer act as a regulator, whereas a large kernel width b makes the CRQMCKF algorithm converge to the centralized Kalman filter. Specifically, $C_x \to I_n$ and $C_z^s \to I_m$ as the kernel width $b \to \infty$, which means that CRQMCKF tends to CKF.
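The limiting behavior discussed in Remarks 2 and 3 is easy to verify numerically: the diagonal weights $k_{RQ}^2(e)$ approach 1 as $b \to \infty$ (CRQMCKF tends to CKF) and approach 0 as the normalized error grows (outliers are rejected). A small sketch, with invented sample values:

```python
import numpy as np

def rq_sq(e, b):
    """Squared rational quadratic kernel k_RQ^2(e), Equation (4)."""
    return (1.0 / (1.0 + e ** 2 / (2.0 * b ** 2))) ** 2

e = 2.0
print(rq_sq(e, 1e6))    # ≈ 1: a very large b recovers the standard Kalman weighting
print(rq_sq(1e6, 1.0))  # ≈ 0: a huge error contributes almost nothing, but never exactly 0
```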

3.2. Centralized Rational Quadratic Maximum Correntropy Information Filter

To acquire the information form of the above CRQMCKF algorithm, as with the standard centralized information filter (CIF), some approximations are adopted in the derivations.
The prediction step is the same as in the CRQMCKF algorithm. To obtain the information form of the state update, consider the information matrix $Y_{k|k}$ as follows:
$Y_{k|k} = P_{k|k}^{-1} = \bar{P}_{k|k-1}^{-1} + \bar{H}_k^T \bar{R}_k^{-1} \bar{H}_k = \bar{P}_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1} H_k^s \approx Y_{k|k-1} + \sum_{s=1}^{S} U_k^s,$
where the one-step prediction information matrix $Y_{k|k-1}$ and the information matrix increment $U_k^s$ are denoted as
$Y_{k|k-1} = P_{k|k-1}^{-1},$
$U_k^s = (H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s.$
Let $y_{k|k} = Y_{k|k}\hat{x}_{k|k}$ be the estimation information vector. Multiplying both sides of Equation (26) by $Y_{k|k}$ and using Equation (29) give
$y_{k|k} = Y_{k|k}\hat{x}_{k|k} = \big[\bar{P}_{k|k-1}^{-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\big]\big[\hat{x}_{k|k-1} + \bar{K}_k(\bar{z}_k - \bar{H}_k\hat{x}_{k|k-1})\big] = \bar{P}_{k|k-1}^{-1}\hat{x}_{k|k-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\hat{x}_{k|k-1} + \big[\bar{P}_{k|k-1}^{-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\big]\bar{K}_k(\bar{z}_k - \bar{H}_k\hat{x}_{k|k-1}) = \bar{P}_{k|k-1}^{-1}\hat{x}_{k|k-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\hat{x}_{k|k-1} + \bar{H}_k^T\bar{R}_k^{-1}(\bar{z}_k - \bar{H}_k\hat{x}_{k|k-1}) = \bar{P}_{k|k-1}^{-1}\hat{x}_{k|k-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{z}_k = \bar{P}_{k|k-1}^{-1}\hat{x}_{k|k-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1} z_k^s \approx y_{k|k-1} + \sum_{s=1}^{S} u_k^s,$
where the one-step prediction information vector $y_{k|k-1}$ and the information vector increment $u_k^s$ are given as
$y_{k|k-1} = Y_{k|k-1}\hat{x}_{k|k-1},$
$u_k^s = (H_k^s)^T(\bar{R}_k^s)^{-1}z_k^s.$
The detailed deducing process can be seen in Appendix C.
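For fixed weighted covariances $\bar{P}_{k|k-1}$ and $\bar{R}_k^s$, the information-form update of Equations (33) and (36) reproduces the gain-form estimate of Equation (26) exactly; the following sketch checks this equivalence on invented illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, S = 2, 2, 3
P_bar = np.array([[1.5, 0.2], [0.2, 0.8]])   # illustrative weighted covariance
x_pred = np.array([0.5, -1.0])
Hs = [rng.standard_normal((m, n)) for _ in range(S)]
R_bars = [np.eye(m) * (s + 1.0) for s in range(S)]
zs = [rng.standard_normal(m) for _ in range(S)]

# Information form, Eqs. (33)-(38): accumulate increments U^s and u^s
Y = np.linalg.inv(P_bar) + sum(H.T @ np.linalg.inv(R) @ H
                               for H, R in zip(Hs, R_bars))
y = np.linalg.inv(P_bar) @ x_pred + sum(H.T @ np.linalg.inv(R) @ z
                                        for H, R, z in zip(Hs, R_bars, zs))
x_info = np.linalg.solve(Y, y)               # x̂ = Y^{-1} y

# Gain form on the stacked model, Eqs. (26)-(31):
H_bar, z_bar = np.vstack(Hs), np.concatenate(zs)
R_bar = np.zeros((m * S, m * S))
for s in range(S):
    R_bar[s * m:(s + 1) * m, s * m:(s + 1) * m] = R_bars[s]
K = P_bar @ H_bar.T @ np.linalg.inv(R_bar + H_bar @ P_bar @ H_bar.T)
x_gain = x_pred + K @ (z_bar - H_bar @ x_pred)
assert np.allclose(x_info, x_gain)
```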
The CRQMCIF algorithm is summarized in Algorithm 2.
Algorithm 2 Centralized rational quadratic maximum correntropy information filter (CRQMCIF)
Initialization: let $\hat{x}_{0|0} = \hat{x}_0$, $P_{0|0} = P_0$; then $Y_0 = P_{0|0}^{-1}$ and $y_0 = Y_0\hat{x}_0$;
For k = 1, 2, 3, …, do
  Prediction: update $Y_{k|k-1}$ and $y_{k|k-1}$ by Equations (34) and (37);
  Update: calculate $Y_{k|k}$ and $y_{k|k}$ based on Equations (33) and (36);
  Calculate $\hat{x}_{k|k}$ by $\hat{x}_{k|k} = (Y_{k|k})^{-1}y_{k|k}$;
End.
Remark 4. 
From Equations (21) and (22), when the kernel width b → ∞, $C_x \to I_n$ and $C_z^s \to I_m$, and the estimation of CRQMCIF is similar to that of the standard CIF algorithm, whose effect is equivalent to the CKF and CRQMCKF; when $C_x \to 0$ and $C_z^s \to 0$, the estimation of CRQMCIF reduces to the time prediction alone, the same as CRQMCKF. In general, as shown by the simulation results in Section 4.4, the estimation performance of CRQMCIF and CRQMCKF is similar.

3.3. Distributed Rational Quadratic Maximum Correntropy Information Filter

To effectively obtain the DRQMCIF, the weighted consensus average is introduced to calculate the weighted sums of the information matrix increment in Equation (35) and the information vector increment in Equation (38), respectively. Specifically, each sensor is given the same initial values $\hat{x}_0$ and $P_0$, and thus $Y_{0|0}^s = P_0^{-1}$ and $y_{0|0}^s = Y_{0|0}^s\hat{x}_0$ for every sensor s (s = 1, 2, …, S) in the distributed algorithm. The initial information matrix increment $U_k^{s,0} = U_k^s$ and information vector increment $u_k^{s,0} = u_k^s$ take the information increment form of CRQMCIF in Equations (35) and (38); the superscript l indicates the increment of the s-th sensor at the k-th time step after the l-th consensus iteration. Applying the method in Equation (7) to perform the weighted consensus average, the iterative information matrix increment $U_k^{s,l}$ and information vector increment $u_k^{s,l}$ are acquired as
$U_k^{s,l} = \sum_{d \in N_s}\lambda_{s,d}\,U_k^{d,l-1}, \quad u_k^{s,l} = \sum_{d \in N_s}\lambda_{s,d}\,u_k^{d,l-1},$
where $l \in [1, L]$ is the consensus iteration index, $\lambda_{s,d} \ge 0$ is the consensus weight, and $\sum_{d \in N_s}\lambda_{s,d} = 1$. In this paper, the weighted consensus matrix is built from the Metropolis weights, which are given as
$\lambda_{s,d} = \begin{cases} \dfrac{1}{1 + \max\{l_s, l_d\}}, & \text{if } (s,d) \in E,\ s \neq d;\\[4pt] 1 - \sum_{d \in N_s \setminus \{s\}} \lambda_{s,d}, & \text{if } s = d;\\[4pt] 0, & \text{otherwise}, \end{cases}$
in which $l_s = |N_s|$ represents the degree of the s-th sensor in the sensor network communication topology graph.
From Equation (40), the Metropolis weight matrix $\Gamma = (\lambda_{s,d})_{S \times S}$ is row-stochastic and primitive, so $U_k^{s,L}$ and $u_k^{s,L}$ respectively converge to the mean values of $U_k^s$ and $u_k^s$ as the total number of consensus iterations L → ∞, i.e.,
$\lim_{L \to \infty} U_k^{s,L} = \frac{1}{S}\sum_{s=1}^{S} U_k^s, \quad \lim_{L \to \infty} u_k^{s,L} = \frac{1}{S}\sum_{s=1}^{S} u_k^s,$
which implies that the weighted consensus average method can indirectly propagate local information throughout the overall network.
After the weighted consensus averaging step, each sensor independently updates its information matrix and information vector through data transmitted from the neighboring sensor. Specifically, the information matrix and information vector are updated as follows:
  • Update of the Information Matrix: the information matrix $Y_{k|k}^s$ for each sensor s at the k-th time step is computed by adding the weighted averaged information matrix increment $\varpi_k^s U_k^{s,L}$, obtained after L rounds of consensus iterations, to the one-step prediction information matrix $Y_{k|k-1}^s$. This updating process encapsulates the integration of information exchanged between sensors and reflects the consensus gradually achieved during the iterative process:
    $Y_{k|k}^s = Y_{k|k-1}^s + \varpi_k^s U_k^{s,L},$
    in which $\varpi_k^s = S$ for $L \ge 2$; when L = 1, let $\varpi_k^s = 1$, which implies that each sensor simply accomplishes one local CRQMCIF update employing the information transmitted by its linked sensors.
  • Update of the Information Vector: similarly, the information vector $y_{k|k}^s$ for the s-th sensor at the k-th time step is updated by adding the consensus-averaged information vector increment $\varpi_k^s u_k^{s,L}$ to the one-step prediction $y_{k|k-1}^s$. This ensures that the state estimation vector incorporates a collective understanding of the observed data after the consensus process. $y_{k|k}^s$ is expressed by
    $y_{k|k}^s = y_{k|k-1}^s + \varpi_k^s u_k^{s,L},$
    where ϖ k s is the same as described above.
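The consensus-and-update step of Equations (39) and (42)-(43) can be sketched as follows on a hypothetical fully connected 3-sensor network with invented increments; after enough iterations, each sensor's scaled increment matches the centralized sum of increments in Equation (33):

```python
import numpy as np

S, n, L = 3, 2, 50
Lam = np.full((S, S), 0.25) + 0.25 * np.eye(S)   # symmetric doubly stochastic weights
rng = np.random.default_rng(3)
U = [np.diag(rng.uniform(0.5, 2.0, n)) for _ in range(S)]   # increments U_k^{s,0}
u = [rng.standard_normal(n) for _ in range(S)]              # increments u_k^{s,0}
U0, u0 = [M.copy() for M in U], [v.copy() for v in u]       # keep originals

for _ in range(L):                    # consensus iterations, Equation (39)
    U = [sum(Lam[s, d] * U[d] for d in range(S)) for s in range(S)]
    u = [sum(Lam[s, d] * u[d] for d in range(S)) for s in range(S)]

# Each sensor now holds (approximately) the network average, Equation (41):
assert np.allclose(U[0], sum(U0) / S)
# Distributed update, Equation (42), with varpi = S for L >= 2:
Y_pred = np.eye(n)                    # illustrative Y_{k|k-1}^s
Y_s = Y_pred + S * U[0]               # matches the centralized Y_pred + sum_s U^s
assert np.allclose(Y_s, Y_pred + sum(U0))
```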
The DRQMCIF algorithm is outlined in Algorithm 3.
Algorithm 3 Distributed rational quadratic maximum correntropy information filter (DRQMCIF)
Initialization: each sensor s (s = 1, 2, …, S) is given the initial values $x_{0|0}^s = \hat{x}_0$, $P_{0|0}^s = P_0$; calculate the information matrix $Y_{0|0}^s = P_0^{-1}$ and information vector $y_{0|0}^s = Y_{0|0}^s\hat{x}_{0|0}$;
For k = 1, 2, 3, …, do
  Calculate $U_k^{s,0}$ and $u_k^{s,0}$ by Equations (35) and (38);
  Complete the consensus average by Equation (39) for l = 1, 2, …, L based on the Metropolis weights in Equation (40);
  Calculate $Y_{k|k}^s$ and $y_{k|k}^s$ based on Equations (42) and (43);
  Calculate $\hat{x}_{k|k}^s$ by $\hat{x}_{k|k}^s = (Y_{k|k}^s)^{-1}y_{k|k}^s$;
End.
In distributed Algorithm 3, each sensor communicates only with its neighboring sensors, which is ideal for low-cost sensors with limited communication capabilities. For every time step k, each sensor s completes L consensus iterations. In practical applications, the estimation performance of DRQMCIF approaches that of CRQMCIF. For L → ∞, the following conclusion holds.
Theorem 1. 
For an undirected connected sensor network, the weighted consensus matrix $\Gamma = (\lambda_{s,d})_{S \times S}$ is a row-stochastic matrix, i.e., $\sum_{d \in N_s}\lambda_{s,d} = 1$ and $\lambda_{s,d} \ge 0$ for $s = 1, 2, \ldots, S$, and $\Gamma$ is primitive; that is, there exists a positive integer k satisfying $\Gamma^k > 0$. When the number of iterations L → ∞, DRQMCIF attains the same estimation performance as CRQMCIF.
Proof. 
See Appendix D. □

3.4. Adaptive Distributed Rational Quadratic Maximum Correntropy Information Filter

The kernel width of the rational quadratic kernel function plays a significant role in the convergence speed of the MCC-family algorithms. When the other parameters are fixed, an extremely small kernel width affects the convergence and stability, and the steady-state error may even become larger, whereas a particularly large kernel width reduces the convergence rate and estimation performance. There is currently no way to obtain the theoretically optimal kernel width. Thus, an adaptively adjustable kernel width is a popular means of resolving this problem.
An online recursion is employed to update the kernel width in the MCC-family algorithm. Herein, the kernel width is designed as follows to enable measurement-specific treatment of outliers:
$b_{j,k}^s = \mu_{j,k}^s\, b_{\max},$
where $\mu_{j,k}^s$ is the adaptive coefficient of the j-th measurement element at the k-th step for the s-th sensor, and $b_{\max}$ is the preset kernel width. The innovation vector at the s-th sensor is defined by
$\tilde{z}_k^s = z_k^s - \hat{z}_{k|k-1}^s = z_k^s - H_k^s\hat{x}_{k|k-1} = H_k^s x_k + \upsilon_k^s - H_k^s\hat{x}_{k|k-1} = H_k^s\tilde{x}_{k|k-1} + \upsilon_k^s.$
To detect measurement outliers, the innovation covariance matrix is acquired by
$P_{zz,k|k-1}^s = E[\tilde{z}_k^s(\tilde{z}_k^s)^T] = E[(H_k^s\tilde{x}_{k|k-1} + \upsilon_k^s)(H_k^s\tilde{x}_{k|k-1} + \upsilon_k^s)^T] = H_k^s P_{k|k-1}(H_k^s)^T + R_k^s,$
and $\beta_{j,k}^s$ is chosen as
$\beta_{j,k}^s = \frac{\tau P_{zz,j,k|k-1}^s}{(\tilde{z}_{j,k}^s)^2},$
where P z z , j , k | k 1 s is the j-th diagonal element of P z z , k | k 1 s , z ˜ j , k s is the j-th element of z ˜ k s , and τ is the confidence level factor, which is set based on the chi-square distribution with a single degree of freedom.
The adaptive coefficient $\mu_{j,k}^s$ is devised by
\mu_{j,k}^s = \frac{(\beta_{j,k}^s)^2}{(\beta_{j,k}^s)^2 + 2 b_{\max}^2} = \frac{1}{1 + 2 b_{\max}^2 / (\beta_{j,k}^s)^2}, \qquad (48)
where $\mu_{j,k}^s \in (0, 1)$ is bounded and positive. The adaptive kernel width is obtained by substituting Equation (48) into Equation (44).
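A minimal numerical sketch of this adaptation rule (Equations (44)–(48)) is given below. The function name and the scalar test values are illustrative only, and the square-root form of $\beta_{j,k}^s$ is an assumption reconstructed from the surrounding derivation:

```python
import numpy as np

def adaptive_kernel_width(innov, P_zz_diag, b_max=90.0, tau=3.5):
    """Per-element adaptive kernel width, Eqs. (44)-(48):
    beta_j^2 = tau * P_zz_j / innov_j^2
    mu_j     = beta_j^2 / (beta_j^2 + 2*b_max^2),  bounded in (0, 1)
    b_j      = mu_j * b_max
    Large innovations (outliers) shrink the width; small ones keep it large."""
    beta_sq = tau * np.asarray(P_zz_diag) / np.asarray(innov) ** 2
    mu = beta_sq / (beta_sq + 2.0 * b_max ** 2)
    return mu * b_max

P_zz = np.array([4.0])  # innovation variance (illustrative value)
b_nominal = adaptive_kernel_width(np.array([0.1]), P_zz)   # small innovation
b_outlier = adaptive_kernel_width(np.array([50.0]), P_zz)  # large outlier
# b_nominal stays larger, while b_outlier collapses toward zero (cf. Remark 5)
```

The monotone behavior, a kernel width that shrinks as the squared innovation grows relative to its predicted variance, is the property Remark 5 relies on.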
The ADRQMCIF algorithm is listed in Algorithm 4.
Remark 5. 
When the measurement noise is Gaussian, $(\tilde z_{j,k}^s)^2$ is typically smaller than $\tau P_{zz,j,k|k-1}^s$, so $\beta_{j,k}^s$ is large. In this case the adaptive coefficient $\mu_{j,k}^s$ of Equation (48) becomes larger, which keeps the kernel width large to ensure high estimation precision. Conversely, when the measurement encounters an outlier, $(\tilde z_{j,k}^s)^2$ is markedly larger than $\tau P_{zz,j,k|k-1}^s$ and $\beta_{j,k}^s$ becomes smaller. The adaptive coefficient $\mu_{j,k}^s$ then shrinks, which forces the correntropy value toward zero and eliminates the influence of the outlier.
Algorithm 4 Adaptive distributed rational quadratic maximum correntropy information filter (ADRQMCIF)
Initialization: each sensor s (s = 1, 2, …, S) is given the initial values $\hat x_{0|0}^s = \hat x_0$ and $P_{0|0}^s = P_0$;
calculate the information matrix $Y_{0|0}^s = P_0^{-1}$ and the information vector $y_{0|0}^s = Y_{0|0}^s \hat x_{0|0}$;
For k = 1, 2, 3, …, n
  Update $\hat x_{k|k-1}$ and $P_{k|k-1}$ based on Equations (11) and (12);
  Calculate the adaptive kernel width $b_{j,k}^s$ by Equations (44)–(48);
  Calculate $Y_{k|k-1}$ and $y_{k|k-1}$ by Equations (34) and (37);
  Calculate $U_k^{s,0}$ and $u_k^{s,0}$ by Equations (35) and (38);
  Calculate $U_k^{s,l}$ and $u_k^{s,l}$ by completing the consensus average Equation (39) for $l = 1, 2, \dots, L$ based on the Metropolis weight Equation (40);
  Calculate $Y_{k|k}^s$ and $y_{k|k}^s$ based on Equations (42) and (43);
  Calculate $\hat x_{k|k}^s$ by
    \hat x_{k|k}^s = (Y_{k|k}^s)^{-1} y_{k|k}^s. \qquad (49)
End.
The flowchart of Algorithm 4 is shown in Appendix E.
Additionally, the adaptive kernel width is insensitive to the value of $b_{\max}$. If the maximum value $b_{\max}$ is preset too large, the inadequate restraint of outliers enlarges the innovation term $\tilde z_{j,k}^s$, which in turn decreases the adaptive coefficient $\mu_{j,k}^s$. Therefore, the kernel width does not grow with $b_{\max}$ but instead keeps shrinking to resist the outliers. Consequently, the proposed adaptive kernel width makes the choice of the maximum kernel width more convenient and stable.

3.5. Computational Complexity Analysis

In this subsection, the computational complexity is analyzed in terms of floating-point operations. According to the ADRQMCIF algorithm, the computational complexities of the relevant equations are presented in Table 1, where L represents the number of consensus iterations and d denotes the average number of neighbors of each sensor.
Remark 6. 
O notation is utilized to depict the computational burden, where O stands for terms of the same order, and $m = \sum_{i=1}^{S} m_i$.
Assume that the average fixed-point iteration number of every sensor is T. The centralized rational quadratic maximum correntropy information filter algorithm involves Equations (11), (12), (33), (34), (36), and (37). According to Table 1, the total computational complexity of the CRQMCIF is
S_{\mathrm{CRQMCIF}} = (2T+8)n^3 + (2T+6)Sn^2 m + (2T-1)n^2 + (4T+2)Snm^2 + (3T-1)Snm + (4T-1)n + 2TSm^3 + 2TSm + T\,O(n^3) + 2TS\,O(m^3). \qquad (50)
The computational complexity of the ADRQMCIF is
S_{\mathrm{ADRQMCIF}} = (2LdT+8)n^3 + (2T+6)n^2 m_i + (2LdT+9)n^2 + 2Tnm_i^2 + 2Tnm_i + (LdT+dT+4)n + 8Tm_i^3 + (4d+2)Tm_i + (T+3)\,O(n^3) + 2T\,O(m_i^3) + 2T\,O(m). \qquad (51)
From Equations (50) and (51), the computational consumption of the CRQMCIF is borne by the central node, while that of the ADRQMCIF is dispersed over all sensors. The main differences between the CRQMCIF and the ADRQMCIF are the distributed consensus iteration and the adaptive factor. The computational complexity of the consensus iteration step in the ADRQMCIF depends on the number of neighbors d and the number of consensus iterations L, i.e., $(2n^3 + 2n^2 + n)TLd + dnT + O(n^3)$. For large-scale networks ($S \gg d$), the distributed algorithm can significantly reduce the computational consumption of every sensor compared to the centralized algorithm.

4. Simulation Result

In this section, the influence of the kernel width and the impact of the consensus iteration are discussed for the DRQMCIF algorithm, separately. Then the performances of the proposed CRQMCKF, CRQMCIF, DRQMCIF, and ADRQMCIF algorithms are contrasted with the conventional CKF, CMCKF ( σ = 2), and DMCKF ( σ = 2) algorithms through simulations.

4.1. Simulation Model and Evaluation Benchmark

Consider the widely representative 2-D nearly-constant-velocity target tracking problem within a sensor network, which is commonly utilized in the distributed filtering literature [37], where the target's positions are measured amidst clutter. The system model is described as
x_k = \begin{bmatrix} 1 & t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & t \\ 0 & 0 & 0 & 1 \end{bmatrix} x_{k-1} + \begin{bmatrix} t^2/2 & 0 \\ t & 0 \\ 0 & t^2/2 \\ 0 & t \end{bmatrix} w_k, \qquad (52)
where $x_k = [x_k\ \dot x_k\ y_k\ \dot y_k]^T$, and $(x_k, y_k)$ and $(\dot x_k, \dot y_k)$ are respectively the position and velocity of the tracked target in the x and y directions. The sampling period is selected as t = 1 s. Seven sensors observe the target, and the communication topology among the sensors is shown in Figure 1.
In the linear model, the measurements are the positions of the target, and the corresponding measurement model is given by
z_k^s = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x_k + \upsilon_k^s. \qquad (53)
The initial value of the target is x 0 = 100 , 1 , 100 , 1 T with covariance P 0 = diag 100 , 5 , 100 , 5 . To guarantee the convergence of fixed-point iteration, the fixed-point iterative threshold and the maximum number of fixed-point iterations are preset as ε = 10 6 and N max = 60 .
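The nearly-constant-velocity model and position measurement above can be sketched in a few lines. This ground-truth generator is only an illustration (NumPy in place of the authors' MATLAB code), using the Gaussian process noise covariance $Q = 0.01 I_2$ from Section 4.2:

```python
import numpy as np

t = 1.0  # sampling period (s)
# State transition, noise-gain, and measurement matrices of the model above
F = np.array([[1, t, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, t],
              [0, 0, 0, 1]])
G = np.array([[t**2 / 2, 0],
              [t,        0],
              [0, t**2 / 2],
              [0,        t]])
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])

rng = np.random.default_rng(0)
Q = 0.01 * np.eye(2)
x = np.array([100.0, 1.0, 100.0, 1.0])  # initial state [x, vx, y, vy]
traj = []
for _ in range(200):
    w = rng.multivariate_normal(np.zeros(2), Q)  # Gaussian process noise
    x = F @ x + G @ w
    traj.append(H @ x)  # noise-free position of the true state
```

Sensor measurements would then be obtained by adding the (possibly heavy-tailed) noise $\upsilon_k^s$ to each entry of `traj`.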
The root-mean-square error (RMSE) and the average RMSE (ARMSE) are adopted as the performance evaluation benchmarks. The RMSE and ARMSE of the position are calculated as
\mathrm{RMSE}_{\mathrm{pos}}(k) = \sqrt{\frac{1}{MS}\sum_{r=1}^{M}\sum_{s=1}^{S}\Big[(x_k - \hat x_{k,r}^{s})^2 + (y_k - \hat y_{k,r}^{s})^2\Big]}, \qquad \mathrm{ARMSE}_{\mathrm{pos}} = \frac{1}{K}\sum_{k=1}^{K}\mathrm{RMSE}_{\mathrm{pos}}(k), \qquad (54)
where M is the number of Monte Carlo runs, K is the total number of simulation steps, and $(\hat x_{k,r}^{s}, \hat y_{k,r}^{s})$ is the estimate of the true position $(x_k, y_k)$ at the s-th sensor in the r-th Monte Carlo run. The RMSE and ARMSE of the velocity are calculated analogously to those of the position. The total number of time steps is chosen as K = 200 and the number of Monte Carlo runs as M = 100. All the simulations are conducted in MATLAB 2016b and run on an Intel Core i7-7700HQ, 2.80 GHz, 8 GB PC.
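The RMSE/ARMSE benchmark above can be sketched as follows, assuming the average is taken over both the M Monte Carlo runs and the S sensors; the array layout is illustrative:

```python
import numpy as np

def rmse_pos(pos_true, pos_est):
    """Position RMSE per time step.
    pos_true: (K, 2) true (x, y); pos_est: (M, S, K, 2) estimates."""
    err2 = ((pos_est - pos_true) ** 2).sum(axis=-1)  # squared error, (M, S, K)
    return np.sqrt(err2.mean(axis=(0, 1)))           # average over runs and sensors

def armse(rmse_per_step):
    """Time-averaged RMSE."""
    return rmse_per_step.mean()

# Sanity check: a constant (3, 4) offset at every run/sensor/step gives RMSE 5
K, M, S = 200, 100, 7
pos_true = np.zeros((K, 2))
pos_est = np.tile(np.array([3.0, 4.0]), (M, S, K, 1))
```

The velocity RMSE and ARMSE follow by swapping the position arrays for velocity arrays.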

4.2. The Influence of Kernel Width

The selection of the kernel width greatly influences the estimation precision and robustness of the kernel-based algorithms. Therefore, different kernel widths for the proposed DRQMCIF algorithm are discussed in this subsection.
Consider the case where the process noise is Gaussian but the measurement noises follow the Gaussian mixture distribution:
w_k \sim N(0, Q), \qquad (55)
\upsilon_k^s \sim 0.9\,N(0, R) + 0.1\,N(0, 200R), \qquad (56)
where 0.9 and 0.1 are the mixing probabilities of the Gaussian mixture noise, $Q = 0.01 I_2$, and $R = 4 I_2$. The Gaussian mixture noise is one representative of non-Gaussian noise. The consensus iteration number is set as L = 10. Let the kernel width b = 1, 20, 25, 35, 50, 80, 180, respectively. The confidence level factor is chosen as τ = 3.5, and the maximum kernel width is preset as bmax = 90 for the proposed adaptive kernel width. The measurement noises in Equation (56) are typical heavy-tailed non-Gaussian noises with a mixed-Gaussian distribution, which is often employed in the non-Gaussian filtering literature [20,37].
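Sampling the heavy-tailed measurement noise of Equation (56) can be sketched as below; the helper function is hypothetical, drawing the mixture by a per-sample Bernoulli choice of component:

```python
import numpy as np

def gaussian_mixture_noise(rng, R, n, p_outlier=0.1, scale=200.0):
    """Draw n samples of v ~ 0.9*N(0, R) + 0.1*N(0, 200*R), as in Eq. (56)."""
    d = R.shape[0]
    L = np.linalg.cholesky(R)
    v = rng.standard_normal((n, d)) @ L.T  # nominal N(0, R) samples
    outlier = rng.random(n) < p_outlier    # Bernoulli component selection
    v[outlier] *= np.sqrt(scale)           # inflate covariance to 200*R
    return v

rng = np.random.default_rng(1)
R = 4.0 * np.eye(2)
v = gaussian_mixture_noise(rng, R, 100_000)
# empirical variance per axis is near (0.9 + 0.1 * 200) * 4 = 83.6
```

The 10% inflated component dominates the total variance, which is what makes the tails heavy even though 90% of the samples are nominal.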
From Figure 2 and Figure 3, a too-small kernel width may decrease the estimation accuracy and robustness of the proposed DRQMCIF algorithm; for example, the DRQMCIF algorithm even diverges when the kernel width b = 1. As the kernel width grows, the ARMSEs of the DRQMCIF algorithm become smaller, meaning that the estimation accuracy increases. However, a too-large kernel width severely degrades the DRQMCIF estimation performance: the estimation accuracy with kernel width b = 180 is lower than that with b = 80, and with a further increase in the kernel width the DRQMCIF algorithm degenerates toward the CKF. Thus, the estimation performance is influenced significantly by the selection of the kernel width. From Figure 2 and Figure 3, the optimal kernel width is b = 80, and according to the simulation results a kernel width within [50, 80] acquires better estimation precision for the proposed DRQMCIF algorithm. The DRQMCIF result under the adaptive kernel width does not reach that under a suitable kernel width b ∈ [50, 80]. Therefore, with the optimal kernel width b = 80, the proposed DRQMCIF algorithm acquires better estimation precision and robustness than under other kernel widths or the adaptive kernel width.

4.3. The Impact of Consensus Iterations

To assess the impact of consensus iterations, the devised DRQMCIF algorithm is simulated with the consensus iteration numbers L = 1, 3, 5, 8, 10 under the same noise interference (Equations (55) and (56)) as in Section 4.2. The kernel width is selected as b = 80 from the conclusion in Section 4.2. The CRQMCIF and CKF algorithms are employed as the performance counterparts, and both the CRQMCIF and DRQMCIF algorithms use the optimal kernel width b = 80 obtained in Section 4.2.
The simulation outcomes are shown in Figure 4 and Figure 5. From these two figures, it can be seen that the estimation precision improves as the number of consensus iterations increases. The proposed DRQMCIF algorithm approximates the performance of the CRQMCIF algorithm when the number of consensus iterations is L = 10, which illustrates that the consensus result of the distributed filtering algorithm approaches the global optimum (i.e., the centralized filtering algorithm) and that the accuracy of distributed filtering improves as the number of consensus iterations increases. On the other hand, the distributed algorithm still acquires relatively good estimation performance with L = 5 or L = 8, which is of practical value for real sensor networks with limited communication resources. Each consensus iteration requires an exchange of information between nodes; too many iterations significantly increase the communication requirements, especially in large-scale networks, possibly leading to bandwidth exhaustion or increased latency. Therefore, a smaller number of consensus iterations, for instance L = 5, can be chosen in practical engineering applications while still reaching good estimation performance.

4.4. Comparisons with the Relevant Algorithms

In this subsection, the performance of the proposed algorithms is contrasted with that of other related filter algorithms under non-Gaussian noise. The following algorithms are evaluated: the CRQMCKF (Algorithm 1), the CRQMCIF (Algorithm 2), the DRQMCIF (Algorithm 3), the ADRQMCIF (Algorithm 4), the CKF, the centralized maximum correntropy Kalman filter (CMCKF) [20], and the distributed maximum correntropy Kalman filter (DMCKF) [37]. Three performance indexes are adopted in the following discussion: the RMSEs and ARMSEs of the position and velocity defined in Section 4.1, and the single-step run time (SSRT).
In the following simulation comparisons, the kernel width of the DRQMCIF algorithm is selected as b = 80 according to the discussion in Section 4.2, and the number of consensus iterations is chosen as L = 10 according to the discussion in Section 4.3. To ensure a fair evaluation, the kernel width of the existing CMCKF is set as σ = 2 because this configuration provides better estimation performance than σ = 80 under the specified simulation conditions, according to the results in the reference article [20]. For the ADRQMCIF algorithm, the preset maximum kernel width is bmax = 90 according to the conclusion in Section 4.2, and the confidence level factor is chosen as τ = 3.5. Two scenarios are considered, in which the process and measurement noises are set as Gaussian-mixture and t-distributed non-Gaussian noise, respectively.

4.4.1. Scenario 1

In scenario 1, the process and measurement noises are assumed as the Gaussian-mixture noises, which is one of the representatives of common non-Gaussian noise,
w_k \sim 0.9\,N(0, Q) + 0.1\,N(0, 40Q), \qquad (57)
\upsilon_k^s \sim 0.9\,N(0, R) + 0.1\,N(0, 200R), \qquad (58)
where Q and R are the same as in Section 4.2.
The simulation outcomes under Gaussian-mixture noise are displayed in Figure 6 and Figure 7 and Table 2. The estimation capability of the CKF algorithm is the worst among all algorithms because the Gaussian noise assumption on which the CKF relies breaks down. The RMSEs and ARMSEs of the position and velocity of the CMCKF, CRQMCKF, CRQMCIF, DRQMCIF, and ADRQMCIF algorithms are smaller than those of the CKF algorithm, since all of these algorithms are based on the MCC, which specializes in handling non-Gaussian noise problems. From Table 2, compared with the CMCKF algorithm, the proposed CRQMCIF algorithm improves the ARMSEpos by 13.18%, the ARMSEvel by 4.75%, and the single-step run time (SSRT) by 13.89%; the proposed CRQMCKF algorithm improves the ARMSEpos by 13.18%, the ARMSEvel by 4.75%, and the SSRT by 17.12%. These outcomes illustrate that the estimation precision of the CRQMCIF algorithm is equivalent to that of the CRQMCKF algorithm, while the single-step run time of the CRQMCIF is higher than that of the CRQMCKF because the information matrices increase the computational burden. Compared to the DMCKF algorithm, the proposed DRQMCIF improves the ARMSEpos by 1.93‰, the ARMSEvel by 2.51%, and the SSRT by 28.33%. However, the ARMSEpos, ARMSEvel, and SSRT of the ADRQMCIF algorithm are degraded because adaptive filtering recalculates the kernel width at each step based on the error, which does not achieve the performance of the DRQMCIF under the optimal kernel width and increases the computation time.
From Figure 6 and Figure 7, it can be seen that the proposed CRQMCKF, CRQMCIF, DRQMCIF, and ADRQMCIF algorithms based on the rational quadratic kernel function run to the specified step length, whereas the CMCKF and DMCKF algorithms based on the Gaussian kernel function are unable to reach the required step and end prematurely because the Gaussian kernel function induces a singular matrix. Compared with the Gaussian kernel function, the rational quadratic kernel function offers several advantages: it is robust to outliers and non-Gaussian noise; it avoids the appearance of singular matrices; and it has lower computational complexity, since it avoids the exponential operation and thus removes a large amount of the computational burden. These benefits make the rational quadratic kernel function more suitable for data characterized by non-Gaussian heavy-tailed noise or large outliers. In contrast, the Gaussian kernel function is more sensitive to outliers and has a higher computational expense. However, the estimation performance of the ADRQMCIF algorithm cannot reach that of the DRQMCIF algorithm with the optimal kernel width. When the optimal kernel width cannot be determined in practical engineering applications, the ADRQMCIF algorithm is an alternative for the distributed state estimation problem under non-Gaussian noise interference.

4.4.2. Scenario 2

In scenario 2, the process and measurement noises are assumed to obey the t distribution with one degree of freedom,
w_k \sim t(1), \qquad \upsilon_k^s \sim t(1). \qquad (59)
The t distribution is a probability distribution that is similar to the normal distribution in its bell shape but has heavier tails, and it is one of the non-Gaussian distributions. The probability density function of the $d_1$-dimensional t distribution is
f(x) = \frac{\Gamma\big((\nu + d_1)/2\big)}{(\nu\pi)^{d_1/2}\,|\Sigma|^{1/2}\,\Gamma(\nu/2)} \left[ 1 + \frac{(x - \mu)^T \Sigma^{-1} (x - \mu)}{\nu} \right]^{-\frac{\nu + d_1}{2}}, \qquad (60)
where $x \in \mathbb{R}^{d_1}$, $\mu \in \mathbb{R}^{d_1}$ is the mean vector, $\Sigma \in \mathbb{R}^{d_1 \times d_1}$ denotes the scale matrix satisfying $\Sigma = ((\nu - 2)/\nu)\, E[(x - \mu)(x - \mu)^T]$ for $\nu > 2$, $\nu$ is the degree of freedom, and $\Gamma(\cdot)$ is the gamma function, $\Gamma(s) = \int_0^{+\infty} e^{-x} x^{s-1} dx$.
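Heavy-tailed t(1) noise as used in this scenario can be generated via the standard normal/chi-square mixture representation of the multivariate t distribution; the helper below is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def multivariate_t(rng, mu, Sigma, nu, n):
    """Sample the d1-dimensional Student-t:
    x = mu + z * sqrt(nu / g), with z ~ N(0, Sigma) and g ~ chi^2(nu)."""
    d = len(mu)
    z = rng.standard_normal((n, d)) @ np.linalg.cholesky(Sigma).T
    g = rng.chisquare(nu, size=n)
    return np.asarray(mu) + z * np.sqrt(nu / g)[:, None]

rng = np.random.default_rng(0)
samples = multivariate_t(rng, mu=np.zeros(2), Sigma=np.eye(2), nu=1, n=20_000)
# nu = 1 gives Cauchy-like tails: the sample mean is unstable, the median is not
```

With one degree of freedom even the mean of the distribution does not exist, which is precisely why this scenario stresses the filters far harder than the Gaussian-mixture case.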
The simulation results under t noise (another non-Gaussian noise) are exhibited in Table 3 and Figure 8 and Figure 9. Since t noise violates the Gaussian noise assumption, which is a prerequisite for the good estimation performance of the CKF algorithm, the estimation performance of the CKF algorithm is significantly worse than that of the other algorithms. From Table 3, compared to the CMCKF algorithm, which is based on the Gaussian kernel function, the proposed CRQMCKF improves the ARMSEpos by 10.91%, the ARMSEvel by 5.45%, and the single-step run time (SSRT) by 14.08%; the proposed CRQMCIF algorithm improves the ARMSEpos by 12.59%, the ARMSEvel by 4.16%, and the SSRT by 8.88%. These results show that the estimation accuracy of the CRQMCKF algorithm approximates that of the CRQMCIF algorithm, while the single-step run time of the CRQMCIF is larger than that of the CRQMCKF because the information matrices enhance its computational burden. Compared to the DMCKF algorithm, the proposed DRQMCIF improves the ARMSEpos by 1.67‰, the ARMSEvel by 2.51%, and the SSRT by 26.57%, whereas the ARMSEpos, ARMSEvel, and SSRT of the ADRQMCIF algorithm are higher than those of the DMCKF algorithm, since adaptive filtering must repeatedly calculate the kernel width from the error value at each step; it therefore cannot reach the estimation performance of the DRQMCIF with the optimal kernel width and meanwhile incurs a longer SSRT.
From Figure 8 and Figure 9, it can easily be concluded that the presented CRQMCKF, CRQMCIF, DRQMCIF, and ADRQMCIF algorithms arrive at the assigned step length, whereas the CMCKF and DMCKF algorithms based on the Gaussian kernel function were forced to end early. When the Gaussian kernel function encounters a large outlier, its value rapidly shrinks, even tending to 0; once the value falls below the computational precision of the computer, a singular matrix appears and the algorithm is forced to end early. The rational quadratic kernel function easily avoids this situation. Simultaneously, the rational quadratic kernel function involves only addition, subtraction, multiplication, and division, so the exponential operation is avoided and the computational complexity of the related algorithms is reduced. Finally, the estimation accuracy of the ADRQMCIF algorithm cannot reach that of the DRQMCIF algorithm with the optimal kernel width. When the optimal kernel width cannot be obtained in engineering applications, the ADRQMCIF algorithm can be chosen to resolve distributed state estimation problems interfered with by large-outlier non-Gaussian noise.
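The underflow argument can be checked directly in double precision. The rational quadratic form $\kappa_b(e) = 1/(1 + e^2/(2b^2))$ is assumed here as the paper's kernel; the outlier magnitude is an illustrative value:

```python
import math

def gaussian_kernel(e, sigma=2.0):
    """Gaussian kernel exp(-e^2 / (2*sigma^2)); underflows to 0.0 for large |e|."""
    return math.exp(-e ** 2 / (2.0 * sigma ** 2))

def rational_quadratic_kernel(e, b=80.0):
    """Rational quadratic kernel 1 / (1 + e^2 / (2*b^2)); never reaches 0."""
    return 1.0 / (1.0 + e ** 2 / (2.0 * b ** 2))

outlier = 1.0e4
g = gaussian_kernel(outlier)            # exactly 0.0 in IEEE-754 doubles
r = rational_quadratic_kernel(outlier)  # small but strictly positive
```

A kernel value of exactly zero is what zeroes out the corresponding weight and produces the singular matrices observed with the Gaussian kernel, while the rational quadratic value decays only polynomially and stays representable.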
In sum, from the discussion of the two different non-Gaussian noises in Scenarios 1 and 2, it can be seen that the distributed maximum correntropy linear algorithms based on the rational quadratic kernel function are superior to the corresponding algorithms based on the Gaussian kernel function.

5. Experimental Setup for Multi-Sensor Target Tracking

5.1. Experimental Setup and Parameter Settings

To assess the practical performance of the proposed estimator, a multi-sensor experimental platform was constructed for the task of dynamic target tracking, as illustrated in Figure 10. The platform consists of a test vehicle equipped with an electromagnetic drive, a Nokov motion capture system with six Mars 2 cameras, 24 GHz millimeter-wave radar sensors, and multiple PCs for data acquisition and processing. The motion capture system provides sub-millimeter accuracy and serves as the ground-truth reference for the target's position.
The camera system has a detection range of 0~1 m with a precision of 0.3 mm, while the radar sensors operate over a range of 0.1~3.5 m with a resolution of approximately 50 mm. The distributed sensor network is constructed using multiple CP210x USB-to-UART modules, where each sensor node communicates with the central processor. The test vehicle moves in a circular trajectory driven by magnetic propulsion and is used as the maneuvering target throughout the experiment. The vehicle model refers to the setup described in [40].
The communication topology of the sensor network is shown in Figure 11, comprising four sensor nodes S = { 1 , 2 , 3 , 4 } , and the directed edges are defined by
ε = {(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(2,4),(3,1),(3,2),(3,3),(4,2),(4,4)}.
This network enables decentralized estimation in a distributed framework, following the model structure referenced in [41].
The physical coordinates of the sensor nodes are fixed at the following:
  • Sensor 1: (−141.891 cm, 72.019 cm),
  • Sensor 2: (149.451 cm, 58.072 cm),
  • Sensor 3: (128.305 cm, −72.313 cm),
  • Sensor 4: (−165.279 cm, 56.663 cm).
In the simulation, both process and measurement noises are considered as Gaussian with zero mean and bounded variances. The process noise upper bound is set as Qk= diag([0, 0, 0, 0, 4 × 10−6]), and the measurement noise upper bound is R k s = 144. Due to asynchronous sensor updates, the system experiences random time delays, with a maximum delay set to d = 6.
The estimator parameters are configured as follows:
  • τ1 = 1.1, τ2 = 5, τ3 = 0.01,
  • λ0 = 0.7, λm = 0.01 for m = 1, …, 5 and λ6 = 0.99.
  • The initial state covariance is P0s = diag([5, 5, 10, 2, 0.003]).
The test vehicle begins from position (0 cm, −94 cm), and after a 3 s initialization, follows a circular path at an initial speed of 98 cm/s, with an average velocity of approximately 75 cm/s under a uniform magnetic field. The sensor sampling period is 100 ms.

5.2. Evaluation and Comparison

Figure 12 presents the position estimation errors of the proposed method compared with the benchmark algorithms described in [42,43]. Notably, because the test vehicle is a physical object with non-negligible dimensions, measurement deviations arise not only from internal sensor noise but also from occlusion, reflection, and the spatial extent of the vehicle body itself, which cannot be idealized as a point mass. Moreover, analysis of the sensor output shows intermittent measurement losses for sensors 1 and 4 at various time instants during the experiment.
For the estimator reported in [42], the error intervals are recorded as ranging from [−28.16 cm, 24.14 cm] and [−27.27 cm, 22.33 cm], with corresponding mid-values of 10.32 cm and 3.34 cm. In contrast, the distributed estimation approach in [43] yields the range of [−17.16 cm, 13.97 cm] and [−7.87 cm, 23.78 cm], with mid-values of 3.83 cm and 6.03 cm, respectively. For the single-sensor DRQMCIF method (sensor 1), the position errors are found within [−17.38 cm, 30.96 cm] and [−7.05 cm, 23.77 cm], with mid-values of 6.54 cm and 4.31 cm.
According to the results illustrated in the figure, the DRQMCIF algorithm based on a single sensor does not exhibit satisfactory performance, indicating that its estimation accuracy is limited in the absence of multi-sensor collaboration. When extended to a multi-sensor framework, the proposed estimator achieves error mid-values of approximately 1.84 cm and 1.14 cm, with value ranges of [−8.09 cm, 23.18 cm] and [−8.93 cm, 13.16 cm], highlighting a notable advantage in both accuracy and robustness over the benchmark estimators in [42,43].
The collective results from simulations, experiments, and comparisons clearly affirm the superiority of the proposed DRQMCIF approach in terms of both robustness and accuracy under non-Gaussian noise interference in the multi-sensor network. Future investigations will focus on incorporating other non-Gaussian noise models to further enhance estimation performance.

6. Conclusions

In this paper, centralized and distributed maximum correntropy linear filters based on a rational quadratic kernel function are devised for state estimation over a sensor network system under non-Gaussian noise, where each sensor transmits solely to its neighboring sensors without the need for a fusion center. A classical target tracking numerical simulation over the sensor network displays the performance and robustness of the proposed filter algorithms in the presence of non-Gaussian noise interference. The proposed distributed algorithms achieve a balance between estimation precision and exchange burden when the optimal kernel width b = 80 is used. Although the optimal consensus performance is achieved when the number of iterations is selected as L = 10, the case of L = 5 still provides satisfactory estimation results in practical scenarios and leads to a notable decrease in the computational load, so adopting L = 5 represents a practical compromise that maintains acceptable estimation accuracy while significantly improving the computational efficiency. Further, when the optimal kernel width cannot be predetermined, the adaptive kernel distributed filter is a substitute algorithm that approximates this estimation performance in practical applications. Several problems remain for future work: the distributed maximum correntropy algorithms based on a rational quadratic kernel function can be extended to nonlinear sensor network systems operating under non-Gaussian noise interference, and additional methods that are robust against non-Gaussian noise should be studied as comparison algorithms.
Additionally, to improve the practical engineering application value, it should address more cost-effective sensor network-evoked scenarios, for example, packet loss, data delay, and a time-varying topology network (i.e., dynamic sensor network).

Author Contributions

X.Z.: Writing—review and editing, Formal analysis, Investigation, Validation. D.M.: Supervision, Funding acquisition, Project administration, Methodology. J.Y.: Suggestion and discuss, Software simulation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China: 2021YFB3100901; NSF of China: 62074131, 62272389, 62372069; Shenzhen Fundamental Research Program: 20210317191843003; Shaanxi Provincial Key R&D Program: 2023-ZDLGY-23.

Data Availability Statement

The data that support the findings of this study are not openly available due to a confidentiality agreement. Since the related research is not yet complete, the full code is not being released at this time, but basic code can be provided upon request to the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

b: Rational quadratic kernel width, b > 0
σ: Gaussian kernel bandwidth, σ > 0
L: Maximum number of consensus iterations, L > 0
l: Iteration index in the consensus iteration, l ∈ {1, 2, …, L}
S: Total number of sensors in the network system, S ∈ Z+
s: Sensor index in the network system, s ∈ {1, 2, …, S}
k: Discrete-time index in simulations, k ∈ {1, 2, …, K}
K: Total simulation time steps, K ∈ Z+
M: Number of Monte Carlo trials, M ∈ Z+
m: Index of Monte Carlo trial, m ∈ {1, 2, …, M}
bmax: Preset maximum kernel width, bmax > 0
τ: Confidence level factor, τ = 3.5
ε: Fixed-point iterative threshold, ε = 10−6
Nmax: Maximum number of fixed-point iterations, Nmax = 60

Appendix A. Derivation of the State Estimation Equation (26)

For $\sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} H_k^s$, from Equations (28) and (30),
\sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} H_k^s = (H_k^1)^T (\bar R_k^1)^{-1} H_k^1 + (H_k^2)^T (\bar R_k^2)^{-1} H_k^2 + \cdots + (H_k^S)^T (\bar R_k^S)^{-1} H_k^S = \begin{bmatrix} (H_k^1)^T & \cdots & (H_k^S)^T \end{bmatrix} \mathrm{diag}\big((\bar R_k^1)^{-1}, \dots, (\bar R_k^S)^{-1}\big) \begin{bmatrix} H_k^1 \\ \vdots \\ H_k^S \end{bmatrix} = \bar H_k^T \bar R_k^{-1} \bar H_k. \qquad (A1)
Combining with Equation (27),
\sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} \big(z_k^s - H_k^s \hat x_{k|k-1}\big) = \begin{bmatrix} (H_k^1)^T & \cdots & (H_k^S)^T \end{bmatrix} \mathrm{diag}\big((\bar R_k^1)^{-1}, \dots, (\bar R_k^S)^{-1}\big) \left( \begin{bmatrix} z_k^1 \\ \vdots \\ z_k^S \end{bmatrix} - \begin{bmatrix} H_k^1 \\ \vdots \\ H_k^S \end{bmatrix} \hat x_{k|k-1} \right) = \bar H_k^T \bar R_k^{-1} (\bar z_k - \bar H_k \hat x_{k|k-1}). \qquad (A2)
Thus, multiplying $\big(\bar P_{k|k-1}^{-1} + \sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} H_k^s\big)^{-1}$ on both sides of Equation (25),
\hat x_{k|k} = \hat x_{k|k-1} + \Big(\bar P_{k|k-1}^{-1} + \sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} H_k^s\Big)^{-1} \sum_{s=1}^{S} (H_k^s)^T (\bar R_k^s)^{-1} \big(z_k^s - H_k^s \hat x_{k|k-1}\big) = \hat x_{k|k-1} + \big(\bar P_{k|k-1}^{-1} + \bar H_k^T \bar R_k^{-1} \bar H_k\big)^{-1} \bar H_k^T \bar R_k^{-1} (\bar z_k - \bar H_k \hat x_{k|k-1}). \qquad (A3)
Combining with Equation (29), Equation (26) is proved. □

Appendix B. Derivation of Equation (31)

For the arbitrary matrix P , Q ,
P + PQP = P(I + QP) = (I + PQ)P. \qquad (A4)
Multiplying by $(I + PQ)^{-1}$ on the left and by $(I + QP)^{-1}$ on the right on both sides of Equation (A4),
(I + PQ)^{-1} P = P (I + QP)^{-1}, \qquad (A5)
where all the matrix inverses exist and the dimensions of all the matrices satisfy matrix multiplication. For Equation (29), Equation (A5) is used in the step marked $\overset{*}{=}$:
\bar K_k = \big(\bar P_{k|k-1}^{-1} + \bar H_k^T \bar R_k^{-1} \bar H_k\big)^{-1} \bar H_k^T \bar R_k^{-1} = \big[(I + \bar H_k^T \bar R_k^{-1} \bar H_k \bar P_{k|k-1}) \bar P_{k|k-1}^{-1}\big]^{-1} \bar H_k^T \bar R_k^{-1} = \bar P_{k|k-1} (I + \bar H_k^T \bar R_k^{-1} \bar H_k \bar P_{k|k-1})^{-1} \bar H_k^T \bar R_k^{-1} \overset{*}{=} \bar P_{k|k-1} \bar H_k^T (I + \bar R_k^{-1} \bar H_k \bar P_{k|k-1} \bar H_k^T)^{-1} \bar R_k^{-1} = \bar P_{k|k-1} \bar H_k^T \big[\bar R_k (I + \bar R_k^{-1} \bar H_k \bar P_{k|k-1} \bar H_k^T)\big]^{-1} = \bar P_{k|k-1} \bar H_k^T \big(\bar R_k + \bar H_k \bar P_{k|k-1} \bar H_k^T\big)^{-1}, \qquad (A6)
in which $\overset{*}{=}$ represents the use of Equation (A5) to obtain the following equation, i.e.,
\bar K_k = \bar P_{k|k-1} \bar H_k^T \big(\bar R_k + \bar H_k \bar P_{k|k-1} \bar H_k^T\big)^{-1}. \qquad (A7)

Appendix C. Derivation of Equation (33)

Equation (32) is expanded as
P_{k|k} = P_{k|k-1} - P_{k|k-1} \bar H_k^T \bar K_k^T - \bar K_k \bar H_k P_{k|k-1} + \bar K_k \bar H_k P_{k|k-1} \bar H_k^T \bar K_k^T + \bar K_k \bar R_k \bar K_k^T = P_{k|k-1} - P_{k|k-1} \bar H_k^T \bar K_k^T - \bar K_k \bar H_k P_{k|k-1} + \bar K_k \big(\bar H_k P_{k|k-1} \bar H_k^T + \bar R_k\big) \bar K_k^T. \qquad (A8)
Substituting Equation (31) into Equation (A8),
P_{k|k} = P_{k|k-1} - P_{k|k-1} \bar H_k^T \bar K_k^T - \bar K_k \bar H_k P_{k|k-1} + \bar P_{k|k-1} \bar H_k^T \big(\bar R_k + \bar H_k \bar P_{k|k-1} \bar H_k^T\big)^{-1} \big(\bar H_k P_{k|k-1} \bar H_k^T + \bar R_k\big) \bar K_k^T = P_{k|k-1} - P_{k|k-1} \bar H_k^T \bar K_k^T - \bar K_k \bar H_k P_{k|k-1} + \bar P_{k|k-1} \bar H_k^T \bar K_k^T = P_{k|k-1} - \bar K_k \bar H_k P_{k|k-1} = P_{k|k-1} - P_{k|k-1} \bar H_k^T \big(\bar R_k + \bar H_k \bar P_{k|k-1} \bar H_k^T\big)^{-1} \bar H_k P_{k|k-1}. \qquad (A9)
The matrix inversion lemma [44] is
(A + B C^{-1} D)^{-1} = A^{-1} - A^{-1} B (C + D A^{-1} B)^{-1} D A^{-1}. \qquad (A10)
Utilizing the matrix inversion lemma for $P_{k|k-1}^{-1} + \bar H_k^T \bar R_k^{-1} \bar H_k$, then
\big(P_{k|k-1}^{-1} + \bar H_k^T \bar R_k^{-1} \bar H_k\big)^{-1} = P_{k|k-1} - P_{k|k-1} \bar H_k^T \big(\bar R_k + \bar H_k P_{k|k-1} \bar H_k^T\big)^{-1} \bar H_k P_{k|k-1}. \qquad (A11)
Comparing Equation (A11) with Equation (A9),
$$P_{k|k} \approx \left(P_{k|k-1}^{-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k\right)^{-1}. \tag{A12}$$
Thus,
$$
\begin{aligned}
P_{k|k}^{-1} &= P_{k|k-1}^{-1} + \bar{H}_k^T\bar{R}_k^{-1}\bar{H}_k \\
&= P_{k|k-1}^{-1} + \left[(H_k^1)^T, \ldots, (H_k^S)^T\right]\operatorname{diag}\left((\bar{R}_k^1)^{-1}, \ldots, (\bar{R}_k^S)^{-1}\right)\left[(H_k^1)^T, \ldots, (H_k^S)^T\right]^T \\
&= P_{k|k-1}^{-1} + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s. \tag{A13}
\end{aligned}
$$
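The additive structure of the last step, where the stacked measurement model contributes one information term per sensor, can be illustrated with a short numerical sketch. Dimensions, sensor count, and all matrices below are hypothetical.

```python
import numpy as np

# Sketch of Equation (A13): the posterior information matrix equals the prior
# information matrix plus the sum of per-sensor contributions
# (H_s)^T (R_s)^{-1} H_s, whether computed in stacked or summed form.
rng = np.random.default_rng(1)
n, m, S = 4, 2, 3                        # state dim, per-sensor meas. dim, sensors
A = rng.standard_normal((n, n))
P_prior = A @ A.T + n * np.eye(n)        # prior covariance P_{k|k-1}
H = [rng.standard_normal((m, n)) for _ in range(S)]
R = []
for _ in range(S):
    B = rng.standard_normal((m, m))
    R.append(B @ B.T + m * np.eye(m))    # per-sensor SPD noise covariance

# Stacked (centralized) form: block-row H and block-diagonal R
H_bar = np.vstack(H)
R_bar = np.zeros((S * m, S * m))
for s in range(S):
    R_bar[s*m:(s+1)*m, s*m:(s+1)*m] = R[s]

Y_stacked = np.linalg.inv(P_prior) + H_bar.T @ np.linalg.inv(R_bar) @ H_bar
Y_summed = np.linalg.inv(P_prior) + sum(
    H[s].T @ np.linalg.inv(R[s]) @ H[s] for s in range(S))
assert np.allclose(Y_stacked, Y_summed)
print("stacked and summed information matrices agree")
```

This per-sensor additivity is exactly what makes the information form convenient for the distributed algorithms: each sensor can compute its own increment locally.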

Appendix D. Proof of Theorem 1

This proof proceeds by mathematical induction.
First, for both the CRQMCIF and DRQMCIF algorithms, every sensor uses the same initialization $\hat{x}_0$ and $P_0$; hence, the information matrix is initialized as $Y_{0|0}^s = P_0^{-1}$ and the information vector as $y_{0|0}^s = Y_{0|0}^s\hat{x}_0$.
Second, for all sensors ($s = 1, 2, \ldots, S$), suppose that $y_k^s$ and $Y_k^s$ at the $k$-th time step equal $y_k$ and $Y_k$ in CRQMCIF; then $y_{k+1|k}^s$, $Y_{k+1|k}^s$, and the information increment pairs $(u_k^{s,l}, U_k^{s,l})$ are equivalent to $y_{k+1|k}$, $Y_{k+1|k}$, and $(u_k^s, U_k^s)$ in CRQMCIF. At the beginning of the consensus stage, the information increment pairs are initialized as $(u_k^{s,0}, U_k^{s,0}) = (u_k^s, U_k^s)$, and each sensor $s$ exchanges information with its neighbor sensors $N_s$ over $L$ consensus iterations. By the property of the average consensus algorithm, as $L \to \infty$,
$$\lim_{L\to\infty} U_k^{s,L} = \frac{1}{S}\sum_{s=1}^{S} U_k^{s,0}, \tag{A14}$$
$$\lim_{L\to\infty} u_k^{s,L} = \frac{1}{S}\sum_{s=1}^{S} u_k^{s,0}. \tag{A15}$$
Thus, each sensor can acquire
$$Y_k^s = Y_{k|k-1}^s + S\,U_k^{s,L} \approx Y_{k|k-1}^s + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}H_k^s, \tag{A16}$$
$$y_k^s = y_{k|k-1}^s + S\,u_k^{s,L} \approx y_{k|k-1}^s + \sum_{s=1}^{S}(H_k^s)^T(\bar{R}_k^s)^{-1}z_k^s, \tag{A17}$$
which is identical to CRQMCIF. □
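The average-consensus step invoked in the proof can be sketched numerically. The topology, weights, and initial values below are illustrative assumptions (the paper's own network is shown in Figure 1); with Metropolis weights on a connected graph, every sensor's local value converges to the network-wide average as $L$ grows.

```python
import numpy as np

# Minimal average-consensus sketch: each sensor repeatedly replaces its value
# with a weighted average over its neighbors; all values converge to the mean.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed chain topology
U0 = np.array([1.0, 3.0, 5.0, 7.0])                  # per-sensor initial values U_k^{s,0}
S = len(U0)

# Metropolis weights: symmetric and doubly stochastic, so the consensus
# iteration converges to the exact average on a connected graph.
W = np.zeros((S, S))
for i in range(S):
    for j in neighbors[i]:
        W[i, j] = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
    W[i, i] = 1.0 - W[i].sum()

U = U0.copy()
for _ in range(200):        # L consensus iterations
    U = W @ U

assert np.allclose(U, U0.mean())   # every sensor holds (1/S) * sum_s U_k^{s,0}
```

In the filter itself, the same iteration runs on the matrix and vector increments $(U_k^{s,l}, u_k^{s,l})$; multiplying the limit by $S$ then recovers the summed information contributions in (A16) and (A17).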

Appendix E. Flowchart of Algorithm 4

Figure A1. Flowchart of Algorithm 4.

References

  1. Zhang, J.; Gao, S.; Qi, X.; Yang, J.; Gao, B. Distributed Robust Cubature Information Filtering for Measurement Outliers in Wireless Sensor Networks. IEEE Access 2020, 8, 20203–20214. [Google Scholar] [CrossRef]
  2. Xia, J.; Gao, S.; Li, G.; Qi, X.; Gao, B.; Zhang, J. Distributed H∞-Constraint Robust Estimator for Multi-Sensor Networked Hybrid Uncertain Systems. IEEE Trans. Netw. Sci. Eng. 2021, 8, 3335–3348. [Google Scholar] [CrossRef]
  3. Zhang, J.; Zhao, S. Distributed Adaptive Tobit Kalman Filter for Networked Systems Under Sensor Delays and Censored Measurements. IEEE Trans. Signal Inf. Process. Over Netw. 2022, 8, 445–458. [Google Scholar] [CrossRef]
  4. Shi, Q.; Liu, M.; Hang, L. A Novel Distribution System State Estimator Based on Robust Cubature Particle Filter Used for Non-Gaussian Noise and Bad Data Scenarios. IET Gener. Transm. Distrib. 2022, 16, 1385–1399. [Google Scholar] [CrossRef]
  5. Kwon, H.; Hegde, C.; Kiarashi, Y.; Madala, V.S.K.; Singh, R.; Nakum, A.; Tweedy, R.; Tonetto, L.M.; Zimring, C.M.; Doiron, M.; et al. A Feasibility Study on Indoor Localization and Multiperson Tracking Using Sparsely Distributed Camera Network With Edge Computing. IEEE J. Indoor Seamless Position. Navig. 2023, 1, 187–198. [Google Scholar] [CrossRef]
  6. Zhou, Y.; Zheng, Z.; Huang, J.; Wang, C.; Xu, G.; Xuchen, Y.; Zha, B. Distributed Maximum Correntropy Cubature Information Filtering for Tracking Unmanned Aerial Vehicle. IEEE Sens. J. 2023, 23, 9925–9935. [Google Scholar] [CrossRef]
  7. Zhang, J.; Gao, S.; Xia, J.; Li, G.; Qi, X.; Gao, B. Distributed Adaptive Cubature Information Filtering for Bounded Noise System in Wireless Sensor Networks. Int. J. Robust Nonlinear Control 2021, 31, 4869–4896. [Google Scholar] [CrossRef]
  8. Chen, Q.; Wang, W.; Yin, C.; Jin, X.; Zhou, J. Distributed cubature information filtering based on weighted average consensus. Neurocomputing 2017, 243, 115–124. [Google Scholar] [CrossRef]
  9. Wang, X.; Niu, B.; Shang, Z.; Niu, Y. Distributed resilient adaptive consensus tracking control of nonlinear multi-agent systems dealing with deception attacks via K-filters approach. Automatica 2024, 169, 111871. [Google Scholar] [CrossRef]
  10. Liu, Y.; Xie, X.; Chadli, M.; Sun, J. Leaderless Consensus Control of Fractional-Order Nonlinear Multiagent Systems with Measurement Sensitivity and Actuator Attacks. IEEE Trans. Control Netw. Syst. 2024, 11, 2252–2262. [Google Scholar] [CrossRef]
  11. Zhu, Y.; Niu, B.; Shang, Z.; Wang, Z.; Wang, H. Distributed Adaptive Asymptotic Consensus Tracking Control for Stochastic Nonlinear MASs with Unknown Control Gains and Output Constraints. IEEE Trans. Autom. Sci. Eng. 2025, 22, 328–338. [Google Scholar] [CrossRef]
  12. Olfati-Saber, R.; Shamma, J.S. Consensus Filters for Sensor Networks and Distributed Sensor Fusion. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 12–15 December 2005; pp. 6698–6703. [Google Scholar]
  13. Olfati-Saber, R. Distributed Kalman Filter with Embedded Consensus Filters. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 12–15 December 2005; pp. 8179–8184. [Google Scholar]
  14. Olfati-Saber, R. Distributed Kalman filtering for sensor networks. In Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 5492–5498. [Google Scholar]
  15. Battistelli, G.; Chisci, L. Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability. Automatica 2014, 50, 707–718. [Google Scholar] [CrossRef]
  16. Bolic, M.; Djuric, P.M.; Sangjin, H. Resampling algorithms and architectures for distributed particle filters. IEEE Trans. Signal Process. 2005, 53, 2442–2450. [Google Scholar] [CrossRef]
  17. Hlinka, O.; Hlawatsch, F.; Djuric, P.M. Distributed particle filtering in agent networks: A survey, classification, and comparison. IEEE Signal Process. Mag. 2013, 30, 61–81. [Google Scholar] [CrossRef]
  18. Hlinka, O.; Hlawatsch, F.; Djurić, P.M. Consensus-based Distributed Particle Filtering with Distributed Proposal Adaptation. IEEE Trans. Signal Process. 2014, 62, 3029–3041. [Google Scholar]
  19. Li, W.; Jia, Y. Distributed Gaussian sum filter for discrete-time nonlinear systems with Gaussian mixture noise. In Proceedings of the 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 1831–1836. [Google Scholar]
  20. Chen, B.; Liu, X.; Zhao, H.; Príncipe, J.C. Maximum Correntropy Kalman Filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef]
  21. Lu, C.; Feng, W.; Zhang, Y.; Li, Z. Maximum mixture correntropy based outlier-robust nonlinear filter and smoother. Signal Process. 2021, 188, 108215. [Google Scholar] [CrossRef]
  22. Shao, J.; Chen, W.; Zhang, Y.; Yu, F.; Chang, J. Adaptive Multikernel Size-Based Maximum Correntropy Cubature Kalman Filter for the Robust State Estimation. IEEE Sens. J. 2022, 22, 19835–19844. [Google Scholar] [CrossRef]
  23. Shen, B.; Wang, X.; Zou, L. Maximum Correntropy Kalman Filtering for Non-Gaussian Systems with State Saturations and Stochastic Nonlinearities. IEEE/CAA J. Autom. Sin. 2023, 10, 1223–1233. [Google Scholar] [CrossRef]
  24. Zhao, X.H.; Mu, D.J.; Yang, J.H.; Zhang, J.H. Rational-quadratic kernel-based maximum correntropy Kalman filter for the non-Gaussian noises. J. Frankl. Inst.-Eng. Appl. Math. 2024, 361, 107286. [Google Scholar] [CrossRef]
  25. Wang, J.; Lyu, D.; He, Z.; Zhou, H.; Wang, D. Cauchy kernel-based maximum correntropy Kalman filter. Int. J. Syst. Sci. 2020, 51, 3523–3538. [Google Scholar] [CrossRef]
  26. Duvenaud, D.K. Automatic Model Construction with Gaussian Processes; University of Cambridge: Cambridge, UK, 2014. [Google Scholar]
  27. Zhou, H. Forecasting of Stock Index Realized Volatility Based on Gaussian Process Regression with Compositional Kernel; Hunan University: Changsha, China, 2021. [Google Scholar]
  28. Swastanto, B.A. Gaussian Process Regression for Long-Term Time Series Forecasting; Delft University of Technology: Delft, The Netherlands, 2016. [Google Scholar]
  29. Príncipe, J.C. Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives; Springer Science & Business Media: New York, NY, USA, 2010. [Google Scholar]
  30. Minh, H.Q.; Niyogi, P.; Yao, Y. Mercer’s theorem, feature maps, and smoothing. In International Conference on Computational Learning Theory; Springer: Berlin/Heidelberg, Germany, 2006; pp. 154–168. [Google Scholar]
  31. Gao, G.; Zhong, Y.; Gao, Z.; Zong, H.; Gao, S. Maximum Correntropy Based Spectral Redshift Estimation for Spectral Redshift Navigation. IEEE Trans. Instrum. Meas. 2023, 72, 8503110. [Google Scholar] [CrossRef]
  32. Huang, H.; Zhang, H. Student’s t-Kernel-Based Maximum Correntropy Kalman Filter. Sensors 2022, 22, 1683. [Google Scholar] [CrossRef] [PubMed]
  33. Chen, B.; Wang, X.; Lu, N.; Wang, S.; Cao, J.; Qin, J. Mixture Correntropy for Robust Learning. Pattern Recognit. 2018, 79, 318–327. [Google Scholar] [CrossRef]
  34. Yang, S.; Li, H.; Gou, X.; Bian, C.; Shao, Q. Optimized Bayesian adaptive resonance theory mapping model using a rational quadratic kernel and Bayesian quadratic regularization. Appl. Intell. 2021, 52, 7777–7792. [Google Scholar] [CrossRef]
  35. Mohammadzadeh, P.; Tinati, M.A.; Shiri, H.; Tazekand, B.M. Improved MSVR-Based Range-Free Localization Using a Rational Quadratic Kernel Function. In Proceedings of the Iranian Conference on Electrical Engineering (ICEE2018), Mashhad, Iran, 8–10 May 2018; IEEE: New York, NY, USA, 2018; pp. 1–6. [Google Scholar]
  36. Chander, S.; Vijaya, P.; Dhyani, P. Multi kernel and dynamic fractional lion optimization algorithm for data clustering. Alex. Eng. J. 2018, 57, 267–276. [Google Scholar] [CrossRef]
  37. Wang, G.; Li, N.; Zhang, Y. Distributed maximum correntropy linear and nonlinear filters for systems with non-Gaussian noises. Signal Process. 2021, 182, 107937. [Google Scholar] [CrossRef]
  38. Battistelli, G.; Chisci, L.; Mugnai, G.; Farina, A.; Graziano, A. Consensus-based linear and nonlinear filtering. IEEE Trans. Autom. Control 2014, 60, 1410–1415. [Google Scholar] [CrossRef]
  39. Wang, G.; Li, N.; Zhang, Y. Hybrid consensus sigma point approximation nonlinear filter using statistical linearization. Trans. Inst. Meas. Control 2018, 40, 2517–2525. [Google Scholar] [CrossRef]
  40. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; Wiley: Hoboken, NJ, USA, 2001; pp. 466–470. [Google Scholar]
  41. Xia, J.; Gao, S.; Qi, X.; Zhang, J.; Li, G. Distributed Cubature H-infinity Information Filtering for Target Tracking against Uncertain Noise Statistics. Signal Process. 2020, 177, 107725. [Google Scholar] [CrossRef]
  42. Ge, Q.; Shao, T.; Duan, Z.; Wen, C. Performance Analysis of the Kalman Filter With Mismatched Noise Covariances. IEEE Trans. Autom. Control 2016, 61, 4014–4019. [Google Scholar] [CrossRef]
  43. Wang, G.; Xue, R.; Wang, J. A distributed maximum correntropy Kalman filter. Signal Process. 2019, 160, 247–251. [Google Scholar] [CrossRef]
  44. Tylavsky, D.J.; Sohie, G.R.L. Generalization of the matrix inversion lemma. Proc. IEEE 1986, 74, 1050–1052. [Google Scholar] [CrossRef]
Figure 1. Communication topology among sensors.
Figure 2. RMSEpos under different kernel width values.
Figure 3. RMSEvel under different kernel width values.
Figure 4. RMSEpos under different consensus iterations.
Figure 5. RMSEvel under different consensus iterations.
Figure 6. RMSEpos of different filters under Gaussian-mixture noises.
Figure 7. RMSEvel of different filters under Gaussian-mixture noises.
Figure 8. RMSEpos of different filters under t noise.
Figure 9. RMSEvel of different filters under t noise.
Figure 10. Target tracking experimental testbed.
Figure 11. Topology of the multi-radar sensor network.
Figure 12. Position errors under the tracking experiment. (a) Position errors of x-axis; (b) position errors of y-axis.
Table 1. Computational complexities of some equations.
| Equation | Addition/Subtraction and Multiplication | Division, Matrix Inversion, Cholesky Decomposition and Exponentiation |
| (11) | 2n^2 - n | 0 |
| (12) | 4n^3 - n^2 | 0 |
| (45) | m_i | 0 |
| (46) | 4m_i^3 - m_i^2 | 0 |
| (47) | 3m_i d | m_i |
| (48) | 6m_i d | 0 |
| (44) | d(n + m_i) | |
| (13) | 0 | O(n^3) |
| (15) | 2n^2 | O(n^3) |
| (16) | 2m_i^2 | O(m_i^3) |
| (33) | 4m_i^3 - 2m_i^2 | O(m_i^3) |
| (37) | 2n^2 - n | 0 |
| (38) | 2m_i^2 n + 2m_i n - n | O(m_i^3) |
| (39) | Ld(2n^3 + n^2 - n) | 0 |
| (42) | 4n^3 | 0 |
| (43) | 2n^2 | 0 |
| (49) | 2n^2 - n | O(n^3) |
Table 2. ARMSEs of different algorithms under Scenario 1 (Gaussian-mixture noise).
| Algorithms | ARMSEpos (m) | ARMSEvel (m/s) | SSRT (s) |
| CKF | 12.3577 | 2.7098 | 0.0913 |
| CMCKF | 4.7308 | 2.0316 | 0.9275 |
| CRQMCKF | 4.1075 | 1.9351 | 0.7687 |
| CRQMCIF | 4.1070 | 1.9342 | 0.8182 |
| DMCKF | 4.8356 | 2.1823 | 2.8724 |
| DRQMCIF | 4.7421 | 2.1276 | 2.0586 |
| ADRQMCIF | 5.2327 | 3.4897 | 4.7827 |
Table 3. ARMSEs of different algorithms under Scenario 2 (t noise).
| Algorithms | ARMSEpos (m) | ARMSEvel (m/s) | SSRT (s) |
| CKF | 13.2687 | 2.9153 | 0.1056 |
| CMCKF | 4.7308 | 2.1546 | 0.9738 |
| CRQMCKF | 4.2145 | 2.0372 | 0.8367 |
| CRQMCIF | 4.1352 | 2.0354 | 0.8416 |
| DMCKF | 4.9035 | 2.2357 | 2.9015 |
| DRQMCIF | 4.8216 | 2.1428 | 2.1306 |
| ADRQMCIF | 5.3596 | 3.5341 | 4.8105 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
