Article

Sequential Fusion Least Squares Method for State Estimation of Multi-Sensor Linear Systems Under Noise Cross-Correlation

by
Xu Liang
and
Chenglin Wen
*
School of Automation, Guangdong University of Petrochemical Technology, Maoming 525000, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 948; https://doi.org/10.3390/sym17060948
Submission received: 10 May 2025 / Revised: 9 June 2025 / Accepted: 12 June 2025 / Published: 14 June 2025
(This article belongs to the Section Engineering and Materials)

Abstract

This paper investigates a multi-sensor system for the state estimation of a maneuvering target, wherein the process noise of the target dynamics and the measurement noise of the sensor network are mutually correlated, and the measurement noises across different sensors are also cross-correlated. Under such conditions, we propose a globally optimal sequential least squares fusion estimation algorithm within the framework of linear minimum mean square error (LMMSE) estimation. This method is specifically designed to preserve structural symmetry and to accommodate the time-ordered arrival of sensor observations transmitted over a network. Rigorous theoretical analysis establishes the performance equivalence between the proposed sequential fusion estimator and the centralized Kalman filter. Numerical simulations further demonstrate the algorithm’s superior estimation accuracy and stability under symmetry constraints, particularly when the noise statistics exhibit spatial or temporal symmetry.

1. Introduction

State estimation (or filtering) is one of the most important research areas in the operation of industrial systems [1]. It refers to the process of inferring the internal state of a system from externally observed output data. Over the past few decades, state estimation problems have arisen in nearly all fields, including industry, the military, and finance, with wide applications in areas such as autonomous driving [2], parameter estimation [3], system identification [4], target tracking, and aerospace [5]. Parameter identification has also found extensive applications across various fields, such as parameter updating in control systems, parameter estimation in communication systems, and fault diagnosis in power systems [6]. With technological advancements, multi-sensor systems have become increasingly prevalent. Multi-sensor information fusion, also known as multi-source information fusion, refers to the process of optimally combining data from single or multiple sources to obtain accurate and reliable information. This approach has been widely applied in numerous high-profile and emerging fields, including industrial process monitoring, inertial navigation, industrial robotics, unmanned systems [7], the Internet of Things (IoT) [8], electric motor and appliance design, and air traffic control systems [9]. By fusing data from multiple sensors, the resulting estimates are more accurate, the algorithms are simpler, and the overall application becomes more convenient.
In the field of multi-sensor information fusion estimation, there are three fundamental fusion strategies: centralized data fusion, distributed data fusion, and sequential data fusion [10]. Centralized fusion involves transmitting all sensor measurements to a central station for unified processing. Although this approach yields optimal state estimates, it requires waiting for all data to arrive before fusion can begin. This leads to high-dimensional states, increased computational burden, poor robustness, and difficulty in real-time filtering [11]. Distributed fusion, on the other hand, allows each sensor to generate local estimates based on its own measurements, which are then sent to a fusion center where they are combined using a predefined fusion rule. The parallel structure of distributed fusion makes fault detection and sensor isolation more straightforward, thus improving system reliability. However, the estimation accuracy of distributed fusion is generally lower than that of centralized methods. Sequential fusion processes sensor measurements in the order in which they arrive at the fusion center, updating the fused estimate at each step. While it achieves estimation accuracy comparable to centralized fusion, it is more suitable for real-time applications because it does not require waiting for all data to arrive. This makes it better equipped to handle delayed or asynchronous data and reduces overall system cost [12].
Kalman filtering is a classical fusion method widely used in control systems [13]. For linear Gaussian systems, the standard Kalman filter assumes independence between the process noise and measurement noise. This method is optimal in the sense of linear minimum variance and has the advantage of real-time recursive computation. However, in many practical scenarios, the process noise and measurement noise are not independent but correlated. Therefore, Kalman filters under correlated noise conditions have been successively developed. As environmental complexity increases and the surveillance area expands, the motion of a single target often needs to be observed by multiple spatially distributed sensors. Network transmission introduces delays, resulting in measurements arriving out of sequence. In such cases, for systems with cross-correlation between process and measurement noise, as well as autocorrelation among measurement noises, a centralized Kalman filter requires all sensor measurements to arrive before fusion can be completed. However, due to potential packet losses in the network, implementing such centralized filters becomes infeasible. To address this, sequential or distributed Kalman filters that can handle delayed and out-of-sequence data are needed. The core challenge in designing such filters lies in decorrelating the process noise and measurement noise, as well as removing the inter-sensor measurement noise correlations.
In the context of multi-sensor cooperative systems, reference [14] utilizes Cholesky decomposition and the inversion of unit lower triangular matrices to transform multi-sensor measurements with correlated noise into equivalent pseudo-measurements with uncorrelated noise, and then proposes an optimal centralized fusion filtering method based on this transformation. Reference [15] addresses systems whose process noise and measurement noise are finite-step autocorrelated and cross-correlated, respectively, and proposes a distributed fusion filter using a matrix-weighted fusion estimation algorithm. Reference [16] further considers the impact of correlated measurement noise among sensors and one-step correlation between process and measurement noise in distributed track fusion filtering for time-delay systems. In [17], a distributed filter is proposed using a decorrelation method at the sensor end. However, the assumption that all sensor information is fully accessible throughout the network is unrealistic for noise decorrelation. Reference [18] proposes a linearly optimal minimum-variance matrix-weighted fusion filter with a two-stage fusion structure under correlated noise, but due to inaccuracies in the estimation model, both filters yield suboptimal estimates. Reference [19] presents a sequential Kalman fusion method under uncorrelated process noise and inter-correlated measurement noise, but it fails to achieve performance equivalence with the centralized Kalman filter, rendering it suboptimal. Reference [20] studies the fusion filtering problem considering one-step autocorrelation among observation noises and one-step cross-correlation between observation and process noise, but relies on matrix augmentation, which significantly increases the computational load, especially in high-dimensional scenarios. Reference [21] designs sequential and distributed fusion filters, but the sequential method lacks a measurement noise estimator, leading to suboptimal results, and the distributed filter exhibits lower accuracy than its centralized counterpart. Reference [22] proposes a sequential predictive fusion Kalman filter under simultaneous cross-correlation between process and measurement noise as well as among measurement noises, but it also fails to achieve performance equivalence with the centralized filter and remains suboptimal. Although [23] shows that distributed fusion is more reliable, it is not optimal. The filtering derivation in reference [24] contains theoretical flaws: the induction-based validation step (Equation (41)) is erroneous, rendering its sequential Kalman fusion filter nonequivalent in performance to the centralized version and invalidating its conclusions. Reference [25] proposes improved sequential filters under time- and event-triggered scheduling, considering the correlation between measurement noise and state prediction. However, due to the omission of a measurement noise estimator in the fusion algorithm, the filters are suboptimal: when cross-correlation exists among the measurement noises, the estimator cannot be zero. Reference [26] addresses environments where the process noise and all measurement noises are mutually correlated and proposes a suboptimal sequential Kalman fusion method. However, the algorithm incorrectly treats predicted noise terms as deterministic estimates (see Equation (17)), making the predicted measurement estimate (Equation (16)) an unresolved random variable, thus rendering the method practically inoperable.
Although [27] presents both distributed and sequential filtering under correlated noise, Equations (18) and (35) represent transformed measurement models that are not equivalent to the original, making the algorithm non-executable. Thus, sequential Kalman filtering under dual-noise correlation—where both process and measurement noises are mutually correlated, and measurement noises are inter-correlated—remains an unsolved challenge. To the best of the authors’ knowledge, there currently exists no recursive solution that addresses this dual-noise correlation scenario.
This paper, based on an analysis of the performance equivalence between the Kalman filter and its corresponding least squares estimator, designs a sequential least squares fusion method for system state estimation using time-ordered measurements, which achieves performance equivalence with the centralized Kalman filter. The contributions of this paper are as follows: (1) To the best of the authors’ knowledge, for the first time, a sequential least squares method equivalent in performance to the centralized fusion method is established under the scenario where the process noise is cross-correlated with all measurement noises, and the measurement noises are also mutually correlated. (2) A sequential closed-form expression for the estimation error covariance matrix of the filter is derived. (3) It is rigorously proven that the proposed sequential least squares fusion method is performance-equivalent to the centralized Kalman fusion method. (4) The proposed method is applicable to scenarios where cross-correlation exists between process noise and measurement noise due to transmission delays, and where measurement noises arriving with delays are also mutually correlated.
The remainder of this paper is organized as follows. In Section 2, the problem formulation is presented. In Section 3, a new centralized Kalman filter fusion estimation algorithm is proposed, which accounts for the cross-correlation between process noise and measurement noise as well as the autocorrelation among measurement noises. In Section 4, a sequential least squares fusion estimation algorithm is developed for time-sequentially arriving data under correlated noise, incorporating progressive decorrelation. Section 5 provides the performance analysis. In Section 6, a simulation example of a system with three sensors is given. Finally, Section 7 concludes the paper.

2. Description of Issues in Multi-Sensor Dynamic Systems with Correlated Noise

Consider a discrete-time varying linear stochastic control system with N sensors, described by the following state and measurement equations:
$x(k+1) = A(k)x(k) + w(k)$
$y_i(k+1) = H_i(k+1)x(k+1) + v_i(k+1)$
where k is the discrete time index, $x(k+1) \in \mathbb{R}^{n \times 1}$ is the system state to be estimated, $A(k) \in \mathbb{R}^{n \times n}$ is the state transition matrix, $w(k) \in \mathbb{R}^{n \times 1}$ is a zero-mean white noise sequence, $H_i(k+1)$ is the measurement matrix, $y_i(k+1) \in \mathbb{R}^{m \times 1}$ is the measurement vector from the i-th sensor, and $v_i(k+1) \in \mathbb{R}^{m \times 1}$ is the measurement noise, with $i = 1, 2, \ldots, N$.
Assumption 1.
The process noise $w(k)$ is independent of the measurement noises $v_i(k+1)$, $i = 1, 2, \ldots, N$, satisfying
$E\left\{ \begin{bmatrix} w(k) \\ v_i(k+1) \end{bmatrix} \begin{bmatrix} w^T(l) & v_j^T(l+1) \end{bmatrix} \right\} = \begin{bmatrix} Q(k) & 0 \\ 0 & R_i(k+1) \end{bmatrix} \delta_{ij}\delta_{kl}$
where E denotes the mathematical expectation and the superscript T denotes the transpose.
Assumption 2.
The process noise $w(k)$ is correlated with the measurement noises $v_i(k+1)$, and the measurement noises $v_i(k+1)$ and $v_j(k+1)$ are cross-correlated for $i, j = 1, 2, \ldots, N$, which satisfies
$E\left\{ \begin{bmatrix} w(k) \\ v_i(k+1) \end{bmatrix} \begin{bmatrix} w^T(l) & v_j^T(l+1) \end{bmatrix} \right\} = \begin{bmatrix} Q(k) & B_j(k+1) \\ B_i^T(k+1) & S_{ij}(k+1) \end{bmatrix} \delta_{kl}$
This expression captures the cross-correlation between the process noise $w(k)$ and the measurement noises $v_i(k+1)$, where $Q(k)$ is the process noise covariance, $B_i(k+1)$ is the noise coupling matrix, and $S_{ij}(k+1)$ is the measurement noise cross-covariance. The symbol E denotes the mathematical expectation, and $\delta_{kl}$ represents the Kronecker delta function, where
$S_{ij}(k+1) = \begin{cases} E\{v_i(k+1)v_j^T(k+1)\}, & i \neq j, \\ R_i(k+1), & i = j. \end{cases}$
Assumption 3.
The initial state $x(0)$ is independent of $w(k)$ and $v_i(k+1)$ for $i = 1, 2, \ldots, N$, and satisfies
$E\{x(0)\} = x_0, \qquad E\{(x(0) - x_0)(x(0) - x_0)^T\} = P_0$
Under the condition of mutually independent noises (Assumption 1), the Kalman filter has been widely applied. In contrast, the goal of this paper is to develop a new filter that minimizes the state estimation error covariance under Assumptions 2 and 3.
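To make Assumption 2 concrete, the short sketch below computes the second-order statistics $B_i(k+1)$, $R_i(k+1)$, and $S_{ij}(k+1)$ for a hypothetical coupled-noise model of the form $v_i(k+1) = \alpha_i w(k) + \beta_i \eta_i(k+1)$ (the same structure used in the simulation of Section 6), assuming the $\eta_i$ are mutually independent and independent of $w(k)$; the numerical values are illustrative only.

```python
import numpy as np

# Illustrative second-order statistics for Assumption 2, assuming the
# coupled-noise structure v_i(k+1) = alpha_i * w(k) + beta_i * eta_i(k+1)
# with eta_i mutually independent and independent of w(k). All numbers
# below are placeholders borrowed from the simulation section.
Q = np.array([[1/3, 1/2],
              [1/2, 1.0]])                      # process noise covariance Q(k)
alpha = [0.1, 0.2, 0.4]                         # coupling coefficients (assumed)
beta = [0.9, 0.8, 0.6]
sigma_eta = [np.diag([0.25, 0.25]), np.diag([0.36, 0.36]), np.diag([0.64, 0.64])]

# B_i(k+1) = E{w(k) v_i^T(k+1)} = alpha_i * Q(k)
B = [a * Q for a in alpha]

def S(i, j):
    """S_ij(k+1) = E{v_i v_j^T}: cross term alpha_i*alpha_j*Q for i != j,
    and R_i = alpha_i^2*Q + beta_i^2*sigma_eta_i on the diagonal."""
    if i == j:
        return alpha[i] ** 2 * Q + beta[i] ** 2 * sigma_eta[i]
    return alpha[i] * alpha[j] * Q

print("B_1 =\n", B[0])
print("S_12 =\n", S(0, 1))
```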

3. Centralized Fusion Kalman Filtering Method Under Correlated Noise

A centralized Kalman filter model is developed for the dynamic systems described by Equations (1) and (2), utilizing the state equations and augmented observation matrices.
First, starting from Equation (2), we can derive the augmented observation equation as follows:
$y(k+1) = H(k+1)x(k+1) + v(k+1)$
where the augmented observation vector $y(k+1)$, the augmented observation matrix $H(k+1)$, and the augmented observation noise $v(k+1)$ are defined as
$y(k+1) = \left[ y_1^T(k+1), \ldots, y_N^T(k+1) \right]^T$
$v(k+1) = \left[ v_1^T(k+1), \ldots, v_N^T(k+1) \right]^T$
$H(k+1) = \left[ H_1^T(k+1), \ldots, H_N^T(k+1) \right]^T$
$R(k+1) = E\{v(k+1)v^T(k+1)\} = \begin{bmatrix} R_1(k+1) & S_{12}(k+1) & \cdots & S_{1N}(k+1) \\ S_{21}(k+1) & R_2(k+1) & \cdots & S_{2N}(k+1) \\ \vdots & \vdots & \ddots & \vdots \\ S_{N1}(k+1) & S_{N2}(k+1) & \cdots & R_N(k+1) \end{bmatrix}$
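As a minimal sketch of how these augmented quantities are assembled in practice, the helper below stacks per-sensor measurements, measurement matrices, and noise covariances into the centralized model; the function and argument names are our own, and equal measurement dimensions across sensors are assumed.

```python
import numpy as np

def augment(ys, Hs, Rs, Ss):
    """Stack per-sensor data into the centralized observation model.
    ys: list of (m,) measurement vectors; Hs: list of (m, n) matrices;
    Rs: list of (m, m) auto-covariances R_i; Ss: dict {(i, j): S_ij} of
    cross-covariances for i != j. A sketch, not the authors' code."""
    y = np.concatenate(ys)                      # augmented measurement y(k+1)
    H = np.vstack(Hs)                           # augmented measurement matrix H(k+1)
    N, m = len(ys), ys[0].shape[0]
    R = np.zeros((N * m, N * m))                # block covariance R(k+1) of v(k+1)
    for i in range(N):
        for j in range(N):
            R[i*m:(i+1)*m, j*m:(j+1)*m] = Rs[i] if i == j else Ss[(i, j)]
    return y, H, R
```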
Initially, based on the statistical properties of the initial conditions and the observation data, the estimated state $x(k)$ and its corresponding covariance matrix are computed:
$\hat{x}(k|k) = E\{x(k) \mid \hat{x}_0; y(1), y(2), \ldots, y(k)\}$
$P(k|k) = E\{[x(k) - \hat{x}(k|k)][x(k) - \hat{x}(k|k)]^T \mid y(1), y(2), \ldots, y(k)\}$
Here, $\hat{x}(k|k)$ denotes the optimal estimate of the system state $x(k)$ at time k, conditioned on all measurement information $y(1), y(2), \ldots, y(k)$ available up to that time, and $P(k|k)$ denotes the covariance matrix of the state estimation error under the same information set.
The operational process of centralized Kalman filtering under noise correlation can be divided into the following steps:
(1) Based on state Equation (1), the one-step propagation predicted estimate of the state variable, the state prediction estimation error, and the covariance matrix of the prediction estimation error are calculated as follows:
$\hat{x}(k+1|k) = A(k)\hat{x}(k|k)$
$\tilde{x}(k+1|k) = A(k)\tilde{x}(k|k) + w(k)$
$P(k+1|k) = E\{\tilde{x}(k+1|k)\tilde{x}^T(k+1|k)\} = A(k)P(k|k)A^T(k) + Q(k)$
(2) Based on the observation model, the one-step propagation prediction estimate of observation value y k + 1 , the observation prediction error, the cross-correlation error covariance matrix, and the autocorrelation covariance matrix are calculated as follows:
$\hat{y}(k+1|k) = H(k+1)\hat{x}(k+1|k)$
$\tilde{y}(k+1|k) = H(k+1)\tilde{x}(k+1|k) + v(k+1)$
$P_{\tilde{x}\tilde{y}}(k+1|k) = E\{\tilde{x}(k+1|k)\tilde{y}^T(k+1|k)\} = P(k+1|k)H^T(k+1) + S(k+1)$
$P_{\tilde{y}\tilde{y}}(k+1|k) = E\{\tilde{y}(k+1|k)\tilde{y}^T(k+1|k)\} = H(k+1)P(k+1|k)H^T(k+1) + H(k+1)S(k+1) + S^T(k+1)H^T(k+1) + R(k+1)$
where
$S(k+1) = E\{w(k)v^T(k+1)\} = \left[ B_1(k+1), \ldots, B_N(k+1) \right]$
(3) A linear Kalman filter is designed to estimate the state variable $x(k+1)$:
$\hat{x}(k+1|k+1) = \left[ I - K(k+1)H(k+1) \right]\hat{x}(k+1|k) + K(k+1)y(k+1)$
In this equation, $K(k+1)$ denotes the optimal gain matrix, to be determined.
(4) The optimal gain matrix $K(k+1)$ is derived using the principle of orthogonality.
Next, the estimation error for the state variable $x(k+1)$ is calculated:
$\tilde{x}(k+1|k+1) = \left[ I - K(k+1)H(k+1) \right]\tilde{x}(k+1|k) - K(k+1)v(k+1)$
From the principle of orthogonality,
$E\{\tilde{x}(k+1|k+1)y^T(k+1)\} = 0$
the gain matrix of the filter is obtained as
$K(k+1) = P_{\tilde{x}\tilde{y}}(k+1|k)P_{\tilde{y}\tilde{y}}^{-1}(k+1|k)$
(5) The estimation error covariance matrix of the state variable $x(k+1)$ is calculated as follows:
$P(k+1|k+1) = E\{\tilde{x}(k+1|k+1)\tilde{x}^T(k+1|k+1)\} = \left[ I - K(k+1)H(k+1) \right]P(k+1|k) - K(k+1)S^T(k+1)$
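The five steps above can be condensed into a single update routine. The sketch below is a minimal illustration of the centralized correlated-noise filter under the stated assumptions; the function name and argument layout are ours, and S denotes the stacked cross-covariance $S(k+1) = [B_1(k+1), \ldots, B_N(k+1)]$ from the augmented model.

```python
import numpy as np

def centralized_update(x, P, y, A, Q, H, R, S):
    """One cycle of the centralized Kalman filter of Section 3 under
    correlated noise (a sketch, not the authors' reference code).
    S = E{w(k) v^T(k+1)} = [B_1, ..., B_N]."""
    # (1) time update
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # (2) prediction statistics: cross-covariance and innovation covariance
    Pxy = P_pred @ H.T + S
    Pyy = H @ P_pred @ H.T + H @ S + S.T @ H.T + R
    # (3)-(4) gain and state update
    K = Pxy @ np.linalg.inv(Pyy)
    x_new = x_pred + K @ (y - H @ x_pred)
    # (5) error covariance under noise correlation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred - K @ S.T
    return x_new, P_new
```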
Issues and Solutions with Centralized Fusion Algorithms:
(1) In multi-sensor systems, sensors distributed across different locations may experience data arrival delays due to network quality issues. If a centralized fusion algorithm is employed, significant delays can occur, particularly when some data cannot be received. In such cases, the centralized algorithm cannot be executed immediately.
(2) To address this challenge, this paper introduces a sequential least squares method built on progressive decorrelation of the noise sources. Once the process and measurement noises have been rendered uncorrelated, the corresponding least squares estimator and its associated covariance matrix can be derived.
(3) Building on this, this paper develops a sequential least squares estimation algorithm that is equivalent to the centralized fusion algorithm. The observations are assumed to arrive in a sequential order, enabling the implementation of a sequential fusion least squares estimation approach based on the ordered sequence of measurements.

4. A Sequential Least Squares Method for Gradually Decorrelating Time-Series Arrival Data Under Noise Correlation

In this paper, we propose an approach inspired by the Gram–Schmidt orthogonalization method, which integrates stepwise orthogonalization with a sequential fusion least squares algorithm to tackle the multi-sensor fusion filtering problem, where the process noise is correlated with the measurement noise and the measurement noises exhibit cross-correlation. For generality, we assume that the sensor data at the (k+1)-th time step arrive sequentially in the order $y_1, y_2, \ldots, y_N$. The flowchart of the proposed algorithm is presented in Figure 1.

4.1. New Observation Equation for State Based on Predicted Estimates

The state prediction equation can be equivalently rewritten as a measurement equation for the state $x(k+1)$:
$\bar{y}_0(k+1) = \bar{H}_0(k+1)x(k+1) + \bar{v}_0(k+1)$
where $\bar{y}_0(k+1) = \hat{x}(k+1|k)$, $\bar{H}_0(k+1) = I$ with $I \in \mathbb{R}^{n \times n}$, and $\bar{v}_0(k+1) = \tilde{x}(k+1|k)$.
The random variable $\bar{v}_0(k+1)$ satisfies the following statistical properties:
$\bar{v}_0(k+1) \sim N\left(0, \bar{R}_0(k+1)\right), \qquad \bar{R}_0(k+1) = P(k+1|k)$
Remark 1.
Interpreting (27) as a measurement equation for the state variable $x(k+1)$ transforms the challenge of decoupling the process noise $w(k)$ from the measurement noises into a unified decoupling problem involving both the measurement noises and the estimation error $\tilde{x}(k+1|k)$. This is because $\tilde{x}(k+1|k)$ is a linear combination of $\tilde{x}(k|k)$ and $w(k)$, thereby addressing issues that could not be resolved in the previous literature.
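In code, the construction of (27) amounts to treating the one-step prediction as a fictitious "sensor 0". The fragment below is a minimal sketch of this step; the function name is ours.

```python
import numpy as np

def prediction_as_measurement(x_pred, P_pred):
    """Treat the prediction as pseudo-measurement 0: y0_bar = x_pred,
    H0_bar = I, and noise covariance R0_bar = P(k+1|k)."""
    y0_bar = x_pred.copy()
    H0_bar = np.eye(len(x_pred))
    R0_bar = P_pred.copy()
    return y0_bar, H0_bar, R0_bar
```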

4.2. Estimation Method for the State Based on the Observation Equation of Sensor 1

When the data from sensor 1 arrive, the first step is to decorrelate them, since $\bar{v}_0(k+1)$ and $v_1(k+1)$ are correlated noises; this decorrelation process is analogous to Gram–Schmidt orthogonalization.
(1) An equivalent reformulation of the observation equation of sensor 1 is
$y_1(k+1) = H_1(k+1)x(k+1) + v_1(k+1) + G_1(k+1)\left[ \bar{y}_0(k+1) - \bar{H}_0(k+1)x(k+1) - \bar{v}_0(k+1) \right]$
where $G_1(k+1)$ is the correlation matrix. Combining like terms in the above expression, we obtain
$\bar{y}_1(k+1) = \bar{H}_1(k+1)x(k+1) + \bar{v}_1(k+1)$
where
$\bar{y}_1(k+1) = y_1(k+1) - G_1(k+1)\bar{y}_0(k+1)$
$\bar{H}_1(k+1) = H_1(k+1) - G_1(k+1)\bar{H}_0(k+1)$
$\bar{v}_1(k+1) = v_1(k+1) - G_1(k+1)\bar{v}_0(k+1)$
(2) Solve for the correlation matrix $G_1(k+1)$, which is derived from the decorrelation constraint $E\{\bar{v}_1(k+1)\bar{v}_0^T(k+1)\} = 0$:
$G_1(k+1) = D_{1,0}(k+1)\bar{R}_0^{-1}(k+1)$
$D_{1,0}(k+1) = E\{v_1(k+1)\bar{v}_0^T(k+1)\} = E\{v_1(k+1)w^T(k)\} = B_1^T(k+1)$
Note: $G_{1,0}(k+1) = G_1(k+1)$.
The decorrelated measurement noise $\bar{v}_1(k+1)$ satisfies the following statistical properties:
$\bar{v}_1(k+1) \sim N\left(0, \bar{R}_1(k+1)\right)$
where
$\bar{R}_1(k+1) = E\{\bar{v}_1(k+1)\bar{v}_1^T(k+1)\} = R_1(k+1) - G_1(k+1)D_{1,0}^T(k+1)$
(3) Establish a least squares estimator based on the state model and the decorrelated measurement from sensor 1:
$\begin{bmatrix} \bar{y}_0(k+1) \\ \bar{y}_1(k+1) \end{bmatrix} = \begin{bmatrix} \bar{H}_0(k+1) \\ \bar{H}_1(k+1) \end{bmatrix} x(k+1) + \begin{bmatrix} \bar{v}_0(k+1) \\ \bar{v}_1(k+1) \end{bmatrix}$
The estimated value is
$\hat{\bar{x}}_1(k+1) = \left[ P^{-1}(k+1|k) + [\cdot]_1 \right]^{-1} \left[ P^{-1}(k+1|k)\hat{x}(k+1|k) + \bar{H}_1^T(k+1)\bar{R}_1^{-1}(k+1)\bar{y}_1(k+1) \right]$
where $[\cdot]_i = \bar{H}_i^T(k+1)\bar{R}_i^{-1}(k+1)\bar{H}_i(k+1)$.
Estimation error:
$\tilde{\bar{x}}_1(k+1) = x(k+1) - \hat{\bar{x}}_1(k+1) = \left[ P^{-1}(k+1|k) + [\cdot]_1 \right]^{-1} \left[ P^{-1}(k+1|k)\tilde{x}(k+1|k) + \Delta_1 \right]$
where $\Delta_i = \bar{H}_i^T(k+1)\bar{R}_i^{-1}(k+1)\bar{v}_i(k+1)$.
The state estimation error covariance matrix is
$\bar{P}_1(k+1) = E\{\tilde{\bar{x}}_1(k+1)\tilde{\bar{x}}_1^T(k+1)\} = \left[ P^{-1}(k+1|k) + [\cdot]_1 \right]^{-1}$
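The following sketch summarizes Section 4.2 as a single function: whiten sensor 1's measurement against the pseudo-measurement formed by the prediction, then fuse the two by weighted least squares. Variable names are ours; $B_1 = E\{w(k)v_1^T(k+1)\}$ is assumed available from the noise model.

```python
import numpy as np

def fuse_sensor1(x_pred, P_pred, y1, H1, R1, B1):
    """Decorrelate sensor 1 against the prediction 'measurement' and fuse
    by weighted least squares (sketch of Section 4.2)."""
    G1 = B1.T @ np.linalg.inv(P_pred)          # G_1 = D_{1,0} R0_bar^{-1}, D_{1,0} = B_1^T
    y1_bar = y1 - G1 @ x_pred                  # whitened measurement
    H1_bar = H1 - G1                           # whitened matrix (H0_bar = I)
    R1_bar = R1 - G1 @ B1                      # covariance of the whitened noise
    # weighted least squares on the stacked prediction/measurement model
    P_inv = np.linalg.inv(P_pred)
    R1b_inv = np.linalg.inv(R1_bar)
    P1 = np.linalg.inv(P_inv + H1_bar.T @ R1b_inv @ H1_bar)
    x1 = P1 @ (P_inv @ x_pred + H1_bar.T @ R1b_inv @ y1_bar)
    return x1, P1
```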

4.3. Sequential Least Squares Fusion from Sensor i ($i = 1, 2, \ldots, l-1$) to Sensor l

Assume that the following estimates have been obtained:
$\hat{\bar{x}}_i(k+1) = E\{x(k+1) \mid \hat{x}_0, y_1(k+1), \ldots, y_i(k+1)\} = E\{x(k+1) \mid \hat{x}_0, \bar{y}_1(k+1), \ldots, \bar{y}_i(k+1)\}$
The corresponding covariance matrix is given by
$\bar{P}_i(k+1) = E\{\tilde{\bar{x}}_i(k+1)\tilde{\bar{x}}_i^T(k+1)\}$
where
$\tilde{\bar{x}}_i(k+1) = x(k+1) - \hat{\bar{x}}_i(k+1)$
The goal of this section is to establish a recursive least squares solution for the state $x(k+1)$ after the observation $y_l(k+1)$ from sensor l arrives:
$\hat{\bar{x}}_l(k+1) = E\{x(k+1) \mid \hat{x}_0, y_1(k+1), \ldots, y_{l-1}(k+1), y_l(k+1)\} = E\{x(k+1) \mid \hat{\bar{x}}_{l-1}(k+1), \bar{y}_l(k+1)\}, \quad l = 2, 3, \ldots, N$
The corresponding covariance matrix is given by
$\bar{P}_l(k+1) = E\{\tilde{\bar{x}}_l(k+1)\tilde{\bar{x}}_l^T(k+1)\}$
where
$\tilde{\bar{x}}_l(k+1) = x(k+1) - \hat{\bar{x}}_l(k+1)$
For convenience, we define
$\bar{Y}_{l-1}(k+1) = \bar{\varphi}_{l-1}(k+1)x(k+1) + \bar{V}_{l-1}(k+1)$
where
$\bar{Y}_{l-1}(k+1) = \left[ \bar{y}_0^T(k+1), \ldots, \bar{y}_{l-1}^T(k+1) \right]^T$
$\bar{V}_{l-1}(k+1) = \left[ \bar{v}_0^T(k+1), \ldots, \bar{v}_{l-1}^T(k+1) \right]^T$
$\bar{\varphi}_{l-1}(k+1) = \left[ \bar{H}_0^T(k+1), \ldots, \bar{H}_{l-1}^T(k+1) \right]^T$
(1) The observation equation of sensor l can be equivalently reformulated as follows:
$y_l(k+1) = H_l(k+1)x(k+1) + v_l(k+1) + G_l(k+1)\left[ \bar{Y}_{l-1}(k+1) - \bar{\varphi}_{l-1}(k+1)x(k+1) - \bar{V}_{l-1}(k+1) \right]$
By combining like terms, we obtain
$\bar{y}_l(k+1) = \bar{H}_l(k+1)x(k+1) + \bar{v}_l(k+1)$
where
$\bar{y}_l(k+1) = y_l(k+1) - G_l(k+1)\bar{Y}_{l-1}(k+1)$
$\bar{H}_l(k+1) = H_l(k+1) - G_l(k+1)\bar{\varphi}_{l-1}(k+1)$
$\bar{v}_l(k+1) = v_l(k+1) - G_l(k+1)\bar{V}_{l-1}(k+1)$
(2) The correlation matrix $G_l(k+1)$ is obtained by solving the decorrelation constraints $E\{\bar{v}_l(k+1)\bar{v}_i^T(k+1)\} = 0$ for $i = 0, 1, \ldots, l-1$:
$G_l(k+1) = D_l(k+1)\Lambda_{l-1}^{-1}(k+1)$
where
$D_l(k+1) = E\{v_l(k+1)\bar{V}_{l-1}^T(k+1)\} = \left[ D_{l,0}(k+1), \ldots, D_{l,l-1}(k+1) \right]$
$\Lambda_{l-1}(k+1) = E\{\bar{V}_{l-1}(k+1)\bar{V}_{l-1}^T(k+1)\} = \mathrm{diag}\left\{ \bar{R}_0(k+1), \ldots, \bar{R}_{l-1}(k+1) \right\}$
For convenience, we denote
$G_l(k+1) = \left[ G_{l,0}(k+1), \ldots, G_{l,l-1}(k+1) \right]$
The statistical properties of $\bar{v}_l(k+1)$ are as follows:
$\bar{v}_l(k+1) \sim N\left(0, \bar{R}_l(k+1)\right)$
where
$\bar{R}_l(k+1) = E\{\bar{v}_l(k+1)\bar{v}_l^T(k+1)\} = R_l(k+1) - G_l(k+1)D_l^T(k+1)$
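A compact sketch of this decorrelation step is given below; it assumes the previously whitened quantities $\bar{Y}_{l-1}$, $\bar{\varphi}_{l-1}$, and $\{\bar{R}_0, \ldots, \bar{R}_{l-1}\}$ have been kept, and that the block row $D_l$ has been computed from the noise statistics. Names are ours.

```python
import numpy as np

def decorrelate_sensor_l(y_l, H_l, R_l, D_l, Y_bar, Phi_bar, Rbar_list):
    """Whiten sensor l against all previously whitened noises (sketch).
    D_l = E{v_l V_bar_{l-1}^T} = [D_{l,0}, ..., D_{l,l-1}];
    Y_bar and Phi_bar stack the previous y_bar_i and H_bar_i;
    Rbar_list holds R_bar_0, ..., R_bar_{l-1}."""
    sizes = [Rb.shape[0] for Rb in Rbar_list]
    Lam = np.zeros((sum(sizes), sum(sizes)))   # Lambda_{l-1}: block-diagonal
    off = 0
    for Rb, s in zip(Rbar_list, sizes):
        Lam[off:off+s, off:off+s] = Rb
        off += s
    G_l = D_l @ np.linalg.inv(Lam)             # G_l = D_l Lambda_{l-1}^{-1}
    y_bar = y_l - G_l @ Y_bar                  # whitened measurement
    H_bar = H_l - G_l @ Phi_bar                # whitened measurement matrix
    R_bar = R_l - G_l @ D_l.T                  # covariance of the whitened noise
    return y_bar, H_bar, R_bar, G_l
```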
(3) Develop a sequential least squares fusion estimation framework that incorporates the interactions from sensors i ($i = 1, 2, \ldots, l-1$) to sensor l.
The least squares estimator is formulated as follows:
$\begin{bmatrix} \hat{\bar{x}}_{l-1}(k+1) \\ \bar{y}_l(k+1) \end{bmatrix} = \begin{bmatrix} I \\ \bar{H}_l(k+1) \end{bmatrix} x(k+1) + \begin{bmatrix} \tilde{\bar{x}}_{l-1}(k+1) \\ \bar{v}_l(k+1) \end{bmatrix}$
The estimated value is
$\hat{\bar{x}}_l(k+1) = \left[ \bar{P}_{l-1}^{-1}(k+1) + [\cdot]_l \right]^{-1} \left[ \bar{P}_{l-1}^{-1}(k+1)\hat{\bar{x}}_{l-1}(k+1) + \bar{H}_l^T(k+1)\bar{R}_l^{-1}(k+1)\bar{y}_l(k+1) \right]$
Estimation error:
$\tilde{\bar{x}}_l(k+1) = x(k+1) - \hat{\bar{x}}_l(k+1) = \left[ \bar{P}_{l-1}^{-1}(k+1) + [\cdot]_l \right]^{-1} \left[ \bar{P}_{l-1}^{-1}(k+1)\tilde{\bar{x}}_{l-1}(k+1) + \Delta_l \right]$
The state estimation error covariance matrix is
$\bar{P}_l(k+1) = E\{\tilde{\bar{x}}_l(k+1)\tilde{\bar{x}}_l^T(k+1)\} = \left[ \bar{P}_{l-1}^{-1}(k+1) + [\cdot]_l \right]^{-1}$
When $l = N$, the fusion center obtains $\hat{\bar{x}}_N(k+1)$ and $\bar{P}_N(k+1)$, thus establishing the global estimate of $x(k+1)$ as
$\hat{\bar{x}}_N(k+1) = \left[ \bar{P}_{N-1}^{-1}(k+1) + \bar{H}_N^T(k+1)\bar{R}_N^{-1}(k+1)\bar{H}_N(k+1) \right]^{-1} \left[ \bar{P}_{N-1}^{-1}(k+1)\hat{\bar{x}}_{N-1}(k+1) + \bar{H}_N^T(k+1)\bar{R}_N^{-1}(k+1)\bar{y}_N(k+1) \right]$
$\bar{P}_N(k+1) = \left[ \bar{P}_{N-1}^{-1}(k+1) + \bar{H}_N^T(k+1)\bar{R}_N^{-1}(k+1)\bar{H}_N(k+1) \right]^{-1}$
The sequential least squares estimator is capable of managing systems with observation delays, and the algorithm does not require the acquisition of all data, enabling it to operate effectively even when some data are missing.
To more clearly illustrate the specific implementation steps of the proposed method, the calculation steps of the sequential algorithm are presented below:
Step 1: Establish the initial values $\hat{x}(k|k) = x_0$ and $P(k|k) = P_0$.
Step 2: Compute $\hat{x}(k+1|k)$, $\tilde{x}(k+1|k)$, and $P(k+1|k)$.
Step 3: Establish the prediction equation $\bar{y}_0(k+1) = \bar{H}_0(k+1)x(k+1) + \bar{v}_0(k+1)$, where $\bar{y}_0(k+1) = \hat{x}(k+1|k)$, $\bar{H}_0(k+1) = I$, and $\bar{v}_0(k+1) = \tilde{x}(k+1|k)$.
Step 4: Perform decorrelation processing on the observation equation $y_1(k+1) = H_1(k+1)x(k+1) + v_1(k+1)$, resulting in the decorrelated equation $\bar{y}_1(k+1) = \bar{H}_1(k+1)x(k+1) + \bar{v}_1(k+1)$.
Step 5: Combine the measurement equations from Steps 3 and 4 and solve the resulting system using least squares to obtain $\hat{\bar{x}}_1(k+1)$, $\tilde{\bar{x}}_1(k+1)$, and $\bar{P}_1(k+1)$.
Step 6: After fusing the data from the current $l-1$ sensors, obtain $\hat{\bar{x}}_{l-1}(k+1)$, $\tilde{\bar{x}}_{l-1}(k+1)$, and $\bar{P}_{l-1}(k+1)$. Perform decorrelation processing on the observation equation $y_l(k+1) = H_l(k+1)x(k+1) + v_l(k+1)$ in the same manner as in Step 4.
Step 7: Formulate and solve the system of equations to obtain $\hat{\bar{x}}_l(k+1)$, $\tilde{\bar{x}}_l(k+1)$, and $\bar{P}_l(k+1)$.
Step 8: If $l = N$, then $\hat{\bar{x}}_N(k+1)$ and $\bar{P}_N(k+1)$ are obtained. A code sketch of these steps is given below.
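The following sketch condenses Steps 1–8 for one time step. It is a minimal illustration under the paper's assumptions rather than the authors' reference implementation; S(i, j) and B[i] denote the cross-covariances $S_{ij}$ and coupling matrices $B_i$ of Assumption 2, and the recursion for $D_{l,i}$ follows from the definition of the whitened noises.

```python
import numpy as np

def sequential_fusion_step(x, P, A, Q, ys, Hs, Rs, S, B):
    """One time step of the sequential least squares fusion (sketch).
    ys, Hs, Rs: per-sensor data in arrival order; S(i, j) = E{v_i v_j^T}
    for i != j (zero-based indices); B[i] = E{w(k) v_i^T(k+1)}."""
    n = len(x)
    # Steps 2-3: time update; the prediction acts as pseudo-measurement 0
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    Hbar, Rbar, Gs = [np.eye(n)], [P_pred], []
    ybar = [x_pred]                      # whitened measurements, starting with y0_bar
    x_est, P_est = x_pred, P_pred
    # Steps 4-7: decorrelate and fuse each sensor in arrival order
    for l, (y, H, R) in enumerate(zip(ys, Hs, Rs), start=1):
        # D_{l,0} = B_l^T; D_{l,i} = S_{l,i} - sum_{j<i} D_{l,j} G_{i,j}^T
        D = [B[l - 1].T]
        for i in range(1, l):
            D.append(S(l - 1, i - 1) - sum(D[j] @ Gs[i - 1][j].T for j in range(i)))
        G = [D[i] @ np.linalg.inv(Rbar[i]) for i in range(l)]
        y_bar = y - sum(G[i] @ ybar[i] for i in range(l))
        H_bar = H - sum(G[i] @ Hbar[i] for i in range(l))
        R_bar = R - sum(G[i] @ D[i].T for i in range(l))
        # least squares fusion with the running estimate
        P_inv, Rb_inv = np.linalg.inv(P_est), np.linalg.inv(R_bar)
        P_est = np.linalg.inv(P_inv + H_bar.T @ Rb_inv @ H_bar)
        x_est = P_est @ (P_inv @ x_est + H_bar.T @ Rb_inv @ y_bar)
        Hbar.append(H_bar); Rbar.append(R_bar); Gs.append(G); ybar.append(y_bar)
    return x_est, P_est                  # Step 8: global estimate and covariance
```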

5. Performance Equivalence Analysis

This section establishes the equivalence between the proposed sequential least squares recursive algorithm and the centralized Kalman filtering algorithm in the presence of correlated noise. It is shown that their estimation error covariance matrices are identical, indicating equivalent estimation performance. For technical convenience, we define the following notation:
$\bar{y}(k+1) = \bar{H}(k+1)x(k+1) + \bar{v}(k+1)$
where
$\bar{y}(k+1) = \left[ \bar{y}_1^T(k+1), \ldots, \bar{y}_N^T(k+1) \right]^T$
$\bar{v}(k+1) = \left[ \bar{v}_1^T(k+1), \ldots, \bar{v}_N^T(k+1) \right]^T$
$\bar{H}(k+1) = \left[ \bar{H}_1^T(k+1), \ldots, \bar{H}_N^T(k+1) \right]^T$
$\bar{R}(k+1) = E\{\bar{v}(k+1)\bar{v}^T(k+1)\} = \mathrm{diag}\left\{ \bar{R}_1(k+1), \ldots, \bar{R}_N(k+1) \right\}$
Proof of the equivalence between the centralized Kalman filter under correlated noise and the sequential recursive least squares method with stepwise decorrelation is as follows:
First, we can take the decorrelated centralized measurement noise $\bar{v}(k+1)$ as an example and establish its relationship with the centralized measurement noise $v(k+1)$ before decorrelation. Then, we substitute all the parameters of the centralized Kalman filter under correlated noise from Section 3 of this paper with the parameters after decorrelation. The derivation is as follows:
$\bar{v}(k+1) = \left[ \bar{v}_1^T(k+1), \bar{v}_2^T(k+1), \ldots, \bar{v}_N^T(k+1) \right]^T = \begin{bmatrix} v_1(k+1) - G_1(k+1)\bar{v}_0(k+1) \\ v_2(k+1) - G_{2,0}(k+1)\bar{v}_0(k+1) - G_{2,1}(k+1)\bar{v}_1(k+1) \\ \vdots \\ v_N(k+1) - G_{N,0}(k+1)\bar{v}_0(k+1) - \cdots - G_{N,N-1}(k+1)\bar{v}_{N-1}(k+1) \end{bmatrix}$
Then, $v(k+1)$ can be expressed as
$v(k+1) = M_1(k+1)\bar{v}(k+1) + N(k+1)\bar{v}_0(k+1)$
where
$M_1(k+1) = \begin{bmatrix} I & 0 & \cdots & 0 & 0 \\ G_{2,1}(k+1) & I & \cdots & 0 & 0 \\ G_{3,1}(k+1) & G_{3,2}(k+1) & I & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ G_{N,1}(k+1) & G_{N,2}(k+1) & \cdots & G_{N,N-1}(k+1) & I \end{bmatrix}$
$N(k+1) = \begin{bmatrix} G_{1,0}(k+1) \\ G_{2,0}(k+1) \\ \vdots \\ G_{N,0}(k+1) \end{bmatrix} = S^T(k+1)\bar{R}_0^{-1}(k+1)$
Likewise, $H(k+1)$ and $y(k+1)$ can also be expressed as
$H(k+1) = M_1(k+1)\bar{H}(k+1) + N(k+1)\bar{H}_0(k+1)$
$y(k+1) = M_1(k+1)\bar{y}(k+1) + N(k+1)\hat{x}(k+1|k)$
Substituting Equations (56), (59), and (60) into Equations (19), (20), (25), and (26), the cross-covariance matrix between the state prediction error and the observation prediction error, the auto-covariance matrix of the observation prediction error, the gain matrix, and the estimation error covariance matrix can be re-expressed as follows:
$P_{\tilde{x}\tilde{y}}(k+1|k) = E\{\tilde{x}(k+1|k)\tilde{y}^T(k+1|k)\} = P(k+1|k)\bar{H}^T(k+1)M_1^T(k+1)$
$P_{\tilde{y}\tilde{y}}(k+1|k) = E\{\tilde{y}(k+1|k)\tilde{y}^T(k+1|k)\} = M_1(k+1)\left[ \bar{H}(k+1)P(k+1|k)\bar{H}^T(k+1) + \bar{R}(k+1) \right]M_1^T(k+1)$
$K(k+1) = P_{\tilde{x}\tilde{y}}(k+1|k)P_{\tilde{y}\tilde{y}}^{-1}(k+1|k) = P(k+1|k)\bar{H}^T(k+1)\left[ \bar{H}(k+1)P(k+1|k)\bar{H}^T(k+1) + \bar{R}(k+1) \right]^{-1}M_1^{-1}(k+1)$
$P(k+1|k+1) = E\{\tilde{x}(k+1|k+1)\tilde{x}^T(k+1|k+1)\} = \left[ I - K(k+1)H(k+1) \right]P(k+1|k) - K(k+1)S^T(k+1) = \left[ I - P(k+1|k)\bar{H}^T(k+1)\left( \bar{H}(k+1)P(k+1|k)\bar{H}^T(k+1) + \bar{R}(k+1) \right)^{-1}\bar{H}(k+1) \right]P(k+1|k)$
Based on the matrix inversion lemma in reference [28], Equation (64) can be expressed as follows:
$P(k+1|k+1) = \left[ P^{-1}(k+1|k) + \bar{H}^T(k+1)\bar{R}^{-1}(k+1)\bar{H}(k+1) \right]^{-1}$
According to Equation (49), $\bar{P}_{N-1}(k+1)$ can be expressed as
$\bar{P}_{N-1}(k+1) = \left[ \bar{P}_{N-2}^{-1}(k+1) + \bar{H}_{N-1}^T(k+1)\bar{R}_{N-1}^{-1}(k+1)\bar{H}_{N-1}(k+1) \right]^{-1}$
Thus, by analogy, Equation (49) can be expanded as
$\bar{P}_N(k+1) = \left[ \bar{P}_{N-1}^{-1}(k+1) + \bar{H}_N^T(k+1)\bar{R}_N^{-1}(k+1)\bar{H}_N(k+1) \right]^{-1} = \left[ P^{-1}(k+1|k) + \sum_{j=1}^{N}\bar{H}_j^T(k+1)\bar{R}_j^{-1}(k+1)\bar{H}_j(k+1) \right]^{-1}$
$P(k+1|k+1)$ can likewise be expressed as
$P(k+1|k+1) = \left[ P^{-1}(k+1|k) + \bar{H}^T(k+1)\bar{R}^{-1}(k+1)\bar{H}(k+1) \right]^{-1} = \left[ P^{-1}(k+1|k) + \sum_{j=1}^{N}\bar{H}_j^T(k+1)\bar{R}_j^{-1}(k+1)\bar{H}_j(k+1) \right]^{-1}$
It can be rigorously proved that
$P(k+1|k+1) = \bar{P}_N(k+1)$
Therefore, it can be concluded that the sequential least squares algorithm, which relies on the measurement arrival time series y ¯ 1 ( k + 1 ) , y ¯ 2 ( k + 1 ) , , y ¯ N ( k + 1 ) , and the centralized fusion Kalman filtering algorithm, which is based on y ( k + 1 ) , exhibit identical performance under noise correlation.
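The two algebraic facts that carry this proof, namely the matrix-inversion-lemma step that turns the gain form of $P(k+1|k+1)$ into the information form, and the splitting of $\bar{H}^T\bar{R}^{-1}\bar{H}$ into per-sensor terms when $\bar{R}$ is block-diagonal, can be checked numerically. The script below does so on randomly generated matrices; the dimensions and random values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 2, 2, 3
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
P = M @ M.T                                              # SPD stand-in for P(k+1|k)
Hbars = [rng.standard_normal((m, n)) for _ in range(N)]  # stand-ins for H_bar_i
Rbars = []
for _ in range(N):
    L = np.eye(m) + 0.1 * rng.standard_normal((m, m))
    Rbars.append(L @ L.T)                                # SPD stand-ins for R_bar_i
Hbar = np.vstack(Hbars)
Rbar = np.zeros((N * m, N * m))
for i, Rb in enumerate(Rbars):
    Rbar[i*m:(i+1)*m, i*m:(i+1)*m] = Rb                  # block-diagonal R_bar

gain_form = (np.eye(n) - P @ Hbar.T @ np.linalg.inv(Hbar @ P @ Hbar.T + Rbar) @ Hbar) @ P
info_form = np.linalg.inv(np.linalg.inv(P) + Hbar.T @ np.linalg.inv(Rbar) @ Hbar)
seq_form = np.linalg.inv(np.linalg.inv(P) +
                         sum(H.T @ np.linalg.inv(Rb) @ H for H, Rb in zip(Hbars, Rbars)))
print(np.allclose(gain_form, info_form), np.allclose(info_form, seq_form))  # True True
```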

6. Simulation Examples

Consider a three-sensor target tracking system with correlated noise:
x k + 1 = 1 T 0 1 x k + w k
y i k + 1 = H i k + 1 x k + 1 + v i k + 1
v i k + 1 = α i w k + β i η k + 1
where i = 1 , 2 , 3 . The sampling time is T = 1 . x k + 1 = x 1 x 2 is the state, and x 1 and x 2 represent the displacement and velocity of the target, respectively. The measurement matrix consists of H 1 k + 1 = 1 0 0 1 , H 2 k + 1 = 1 0.5 0 1 , and H 3 k + 1 = 1 1 0 1 . w k and η i k + 1 are uncorrelated white noise with means of 0 and variances of σ w 2 and σ η i 2 , respectively. Let σ w 2 = T 3 3 T 2 2 T 2 2 T , σ η 1 2 = d i a g 0.25 0.25 , σ η 2 2 = d i a g 0.36 0.36 , σ η 3 2 = d i a g 0.64 0.64 , β i = 1 α i , α 1 = 0.1 , α 2 = 0.2 , α 3 = 0.4 , β 1 = 0.9 , β 2 = 0.8 , and β 3 = 0.6 . The initial state is x 0 = 2 2 T and P 0 = d i a g 0.1 0.1 .
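A minimal sketch of this simulation model is given below. The random seed and run length are our own choices; the matrices and coefficients follow the parameters stated above, and the generated measurements would then be passed, in arrival order, to the sequential fusion step.

```python
import numpy as np

rng = np.random.default_rng(2025)                      # seed chosen for illustration
T = 1.0
A = np.array([[1.0, T], [0.0, 1.0]])
Hs = [np.array([[1.0, 0.0], [0.0, 1.0]]),
      np.array([[1.0, 0.5], [0.0, 1.0]]),
      np.array([[1.0, 1.0], [0.0, 1.0]])]
Q = np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])    # process noise covariance
sigma_eta = [np.diag([0.25, 0.25]), np.diag([0.36, 0.36]), np.diag([0.64, 0.64])]
alpha, beta = [0.1, 0.2, 0.4], [0.9, 0.8, 0.6]
x = np.array([2.0, 2.0])                               # initial state x_0
P0 = np.diag([0.1, 0.1])

Lq = np.linalg.cholesky(Q)
for k in range(100):                                   # 100 steps, chosen arbitrarily
    w = Lq @ rng.standard_normal(2)                    # w(k) ~ N(0, Q)
    x = A @ x + w                                      # state propagation
    ys = []
    for i in range(3):
        eta = np.linalg.cholesky(sigma_eta[i]) @ rng.standard_normal(2)
        v = alpha[i] * w + beta[i] * eta               # correlated measurement noise
        ys.append(Hs[i] @ x + v)
    # ys now holds the three sensor measurements for this time step
```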
The processor sequentially updates the measurements from the three sensors described in (71). For each iteration of the local least squares estimator, the corresponding estimation results and their associated error covariance are computed, thereby demonstrating the equivalence between the proposed algorithm and the centralized algorithm in the simulation. This comparison reveals that both algorithms achieve the same level of estimation accuracy.
The root mean square error (RMSE) is used to evaluate the performance of the algorithm, defined as follows:
$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\tilde{x}_i(k+1|k+1)\,\tilde{x}_i^T(k+1|k+1)}$
where $\tilde{x}_i(k+1|k+1) = x(k+1) - \hat{x}_i(k+1|k+1)$ is the estimation error of the i-th run, $x(k+1)$ is the true value of the state, $\hat{x}(k+1|k+1)$ denotes the estimated state, and N represents the number of simulation runs.
The accuracy improvement is calculated with the root mean square error of the first sensor as the reference, as given by the following formula:
$\varepsilon\% = \frac{\varepsilon_1 - \varepsilon_i}{\varepsilon_1} \times 100\%, \quad i = 2, 3$
where $\varepsilon_i$ denotes the root mean square error of the i-th sensor.
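For completeness, the sketch below shows how the per-component RMSE and the relative improvement reported in Table 1 can be computed over a batch of Monte Carlo runs; the array shapes and dummy numbers are assumptions for illustration.

```python
import numpy as np

def rmse(errors):
    """Per-component RMSE over N simulation runs; errors has shape (N, 2)."""
    return np.sqrt(np.mean(errors ** 2, axis=0))

def improvement(eps_ref, eps_i):
    """Relative accuracy gain of sensor i with sensor 1 as the reference."""
    return (eps_ref - eps_i) / eps_ref * 100.0

# illustrative dummy errors only
errors_s1 = 4.5e-3 * np.ones((500, 2))
errors_s2 = 3.6e-3 * np.ones((500, 2))
print(improvement(rmse(errors_s1), rmse(errors_s2)))   # ~20% per component
```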
Remark 2.
“×” indicates that this sensor’s RMSE serves as the reference, so no accuracy improvement is reported for it.
Simulation I: According to the root mean square errors shown in Table 1, the accuracy of the estimated state variables is improved, which demonstrates the effectiveness of the proposed algorithm. Specifically, Figure 2 and Figure 3 show the tracking performance, while Figure 4 and Figure 5 illustrate the estimation error curves.
Simulation II: Figure 6 and Figure 7, respectively, show the comparison of tracking performance between the sequential least squares estimation algorithm proposed in this paper and the centralized fusion Kalman filtering algorithm. The comparison of their estimated results shows that the curves are completely overlapped, indicating that the proposed algorithm has the same estimation accuracy as the centralized fusion algorithm.
Simulation III: The comparison experiment aimed to evaluate the effects of considering versus ignoring noise correlation. We compared the estimation error covariance matrices in both cases. The simulation results show that the algorithm proposed in this paper, which accounts for noise correlation, yields a smaller estimation error covariance matrix, demonstrating superior performance. Furthermore, the comparison of the mean absolute errors is presented as shown in Table 2.
Simulation IV: To illustrate how the estimation error covariance of the proposed algorithm varies with the correlation coefficient, we kept the correlation coefficients of the first two sensors unchanged and sequentially varied the correlation coefficient of the third sensor. Table 3 presents the trace of the error covariance matrix as the correlation coefficient changes. The relationship of the estimation accuracy can be seen as follows:
$\mathrm{tr}\,P(-0.6) > \mathrm{tr}\,P(-0.4) > \mathrm{tr}\,P(-0.2) > \mathrm{tr}\,P(0) > \mathrm{tr}\,P(0.2) > \mathrm{tr}\,P(0.4) > \mathrm{tr}\,P(0.6)$
Based on the simulation results, it can be seen that as the correlation coefficient increases, the estimation error covariance matrix progressively decreases. This improvement in estimation accuracy further ensures the robustness and effectiveness of the proposed algorithm.
The simulation results demonstrate that the proposed algorithm is capable of effectively addressing multi-sensor systems exhibiting both cross-correlation between process and measurement noise, as well as autocorrelation within the measurement noise. Under noisy conditions, the algorithm attains an accuracy level that matches that of the centralized Kalman filtering algorithm, thereby confirming that the proposed filtering strategy is optimal in the sense of the linear minimum mean square error (LMMSE). Additionally, the algorithm features recursive computation via sequential filtering, ensuring robust performance and practical applicability in real-time scenarios.

7. Conclusions

In the context of discrete-time, time-varying linear stochastic control systems with dual correlations in multi-sensor noise, this paper first treats the state prediction equation as an initial measurement equation. The difficulty of decorrelating process noise from measurement noise is transformed into the problem of jointly decorrelating measurement noise and estimation error. By performing a series of orthogonalizations on the measurement noise subspace of each sensor, the original measurement equation is converted into an equivalent one with uncorrelated noise. Meanwhile, the orthogonalization process preserves the symmetry of the original noise statistics, thereby avoiding additional bias that may arise from asymmetric transformations. Following the principle of sequential filtering, measurement information arriving at the fusion center is processed in sequence. This ensures real-time data fusion and significantly reduces the computational burden associated with high-dimensional matrix operations. The stepwise update adopts a symmetric recursive structure, ensuring both computational efficiency and numerical stability. The resulting estimator achieves global optimality in the sense of linear minimum mean square error (LMMSE) while preserving the statistical symmetry, and the paper provides a rigorous proof of the equivalence between the proposed sequential least squares algorithm, the centralized Kalman filter, and the centralized least-squares algorithm. Simulation results validate the effectiveness of the proposed algorithm.
The proposed method is applicable not only to scenarios with dual correlations—i.e., correlation between process and measurement noise, as well as temporal correlation within the measurement noise—but also to systems with delayed data. However, a limitation of this study is that it does not directly provide a sequential Kalman filter formulation, nor does it address generalized symmetry-preserving methods under correlated noise in nonlinear or non-Gaussian systems. These aspects will be the focus of future work.

Author Contributions

Conceptualization, X.L. and C.W.; methodology, C.W.; software, X.L.; writing—original draft preparation, X.L.; writing—review and editing, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by (1) the National Key R&D Program Intelligent Robot Key Special Project “Robot Joint Drive Control Integrated Chip” 2023YFB4704000, and (2) National Natural Science Foundation of China 62125307, U22A2046.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rigatos, G.G. Particle filtering for state estimation in nonlinear industrial systems. IEEE Trans. Instrum. Meas. 2009, 58, 3885–3900.
  2. Jonasson, M.; Rogenfelt, Å.; Lanfelt, C.; Fredriksson, J.; Hassel, M. Inertial navigation and position uncertainty during a blind safe stop of an autonomous vehicle. IEEE Trans. Veh. Technol. 2020, 69, 4788–4802.
  3. Mechhoud, S.; Witrant, E.; Dugard, L.; Moreau, D. Estimation of heat source term and thermal diffusion in tokamak plasmas using a Kalman filtering method in the early lumping approach. IEEE Trans. Control Syst. Technol. 2014, 23, 449–463.
  4. Zhang, Y.; Li, X.R. A fast UD factorization-based learning algorithm with applications to nonlinear system modeling and identification. IEEE Trans. Neural Netw. 1999, 10, 930–938.
  5. Zerdali, E. A comparative study on adaptive EKF observers for state and parameter estimation of induction motor. IEEE Trans. Energy Convers. 2020, 35, 1443–1452.
  6. Ding, F.; Wang, Y.; Ding, J. Recursive least squares parameter identification algorithms for systems with colored noise using the filtering technique and the auxiliary model. Digit. Signal Process. 2015, 37, 100–108.
  7. Fu, J.; Sun, G.; Yao, W.; Wu, L. On trajectory homotopy to explore and penetrate dynamically of multi-UAV. IEEE Trans. Intell. Transp. Syst. 2022, 23, 24008–24019.
  8. Zhang, F.; Zhang, Y. A Big Data Mining and Blockchain-Enabled Security Approach for Agricultural Based on Internet of Things. Wirel. Commun. Mob. Comput. 2020, 2020, 6612972.
  9. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion 2017, 38, 122–134.
  10. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Centralized, distributed and sequential fusion estimation from uncertain outputs with correlation between sensor noises and signal. Int. J. Gen. Syst. 2019, 48, 713–737.
  11. Lin, H.; Sun, S. Optimal sequential fusion estimation with stochastic parameter perturbations, fading measurements, and correlated noises. IEEE Trans. Signal Process. 2018, 66, 3571–3583.
  12. Zhang, W.A.; Shi, L. Sequential fusion estimation for clustered sensor networks. Automatica 2018, 89, 358–363.
  13. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
  14. Duan, Z.; Han, C.; Tao, T. Optimal multi-sensor fusion target tracking with correlated measurement noises. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), The Hague, The Netherlands, 10–13 October 2004; Volume 2, pp. 1272–1278.
  15. Wang, N.; Sun, S. Event-triggered sequential fusion filters based on estimators of observation noises for multi-sensor systems with correlated noises. Digit. Signal Process. 2021, 111, 102960.
  16. Lv, N.; Sun, S. Distributed fusion estimators for multi-sensor time-delay systems with correlated noise. In Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 1363–1368.
  17. Feng, J.; Zeng, M. Optimal distributed Kalman filtering fusion for a linear dynamic system with cross-correlated noises. Int. J. Syst. Sci. 2012, 43, 385–398.
  18. Sun, S.L.; Deng, Z.L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023.
  19. Wen, C.; Wen, C.; Li, Y. An optimal sequential decentralized filter of discrete-time systems with cross-correlated noises. IFAC Proc. Vol. 2008, 41, 7560–7565.
  20. Wen, C.; Cai, Y.; Wen, C.; Xu, X. Optimal sequential Kalman filtering with cross-correlated measurement noises. Aerosp. Sci. Technol. 2013, 26, 153–159.
  21. Yan, L.; Li, X.R.; Xia, Y.; Fu, M. Optimal sequential and distributed fusion for state estimation in cross-correlated noise. Automatica 2013, 49, 3607–3612.
  22. Feng, X.L.; Wen, C.L.; Xu, L.Z. A new optimal OOSM filtering algorithm. In Proceedings of the 2011 International Conference on Wavelet Analysis and Pattern Recognition, Online, 10–13 July 2011; pp. 177–181.
  23. Lin, H.; Sun, S. Distributed fusion estimator for multisensor multirate systems with correlated noises. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1131–1139.
  24. Lin, H.; Sun, S. Globally optimal sequential and distributed fusion state estimation for multi-sensor systems with cross-correlated noises. Automatica 2019, 101, 128–137.
  25. Yan, L.; Jiang, L.; Xia, Y.; Wu, Q.J. Event-triggered sequential fusion estimation with correlated noises. ISA Trans. 2020, 102, 154–163.
  26. Tan, L.; Wang, Y.; Hu, C.; Zhang, X.; Li, L.; Su, H. Sequential Fusion Filter for State Estimation of Nonlinear Multi-Sensor Systems with Cross-Correlated Noise and Packet Dropout Compensation. Sensors 2023, 23, 4687.
  27. Ma, J.; Sun, S. Globally optimal distributed and sequential state fusion filters for multi-sensor systems with correlated noises. Inf. Fusion 2023, 99, 101885.
  28. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
Figure 1. Flowchart of the sequential fusion estimation algorithm.
Figure 2. Tracking effect of first component of state vector of the three sensors.
Figure 3. Tracking effect of second component of state vector of the three sensors.
Figure 4. Estimation error of first component of the state vector of the three sensors.
Figure 5. Estimation error of second component of the state vector of the three sensors.
Figure 6. Tracking effect of first component of state vector of two algorithms.
Figure 7. Tracking effect of second component of state vector of two algorithms.
Table 1. Estimation error.
Sensor   | RMSE-x1 (10^-3) | RMSE-x2 (10^-3) | Improved-x1 | Improved-x2
Sensor 1 | 4.523           | 4.283           | ×           | ×
Sensor 2 | 3.624           | 3.432           | 19.9%       | 19.9%
Sensor 3 | 3.149           | 3.279           | 30.4%       | 23.4%
Table 2. The absolute error means of two algorithms.
Algorithm            | MAE-x1 (10^-3) | MAE-x2 (10^-3)
Consider correlation | 2.1            | 1
Ignore correlation   | 2.4            | 4.6
Table 3. Accuracy of tr P(α) for α = −0.6, −0.4, −0.2, 0, 0.2, 0.4, 0.6.
α        | −0.6   | −0.4   | −0.2   | 0      | 0.2    | 0.4    | 0.6
tr P(α)  | 0.1182 | 0.1156 | 0.1115 | 0.1046 | 0.0935 | 0.0758 | 0.0499
