Article

Multi-Sensor Adaptive Weighted Data Fusion Based on Biased Estimation

School of Information and Intelligent Science and Technology, Hunan Agricultural University, Changsha 410127, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3275; https://doi.org/10.3390/s24113275
Submission received: 25 April 2024 / Revised: 12 May 2024 / Accepted: 19 May 2024 / Published: 21 May 2024
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

Abstract

In order to avoid the loss of optimality of the optimal weighting factor in some cases and to further reduce the estimation error of an unbiased estimator, a multi-sensor adaptive weighted data fusion algorithm based on biased estimation is proposed. First, it is proven that an unbiased estimator can further optimize estimation error, and the reasons for the loss of optimality of the optimal weighting factor are analyzed. Second, the method of constructing a biased estimation value by using an unbiased estimation value and calculating the optimal weighting factor by using estimation error is proposed. Finally, the performance of least squares estimation data fusion, batch estimation data fusion, and biased estimation data fusion is compared through simulation tests, and test results show that biased estimation data fusion has a greater advantage in accuracy, stability, and noise resistance.

1. Introduction

Data fusion can exploit the complementary information of multiple different data sources, so applying data fusion algorithms can suppress negative environmental factors and obtain more accurate results [1,2]. Data fusion algorithms combined with various other algorithms can achieve better results [3,4], so they have been widely used in many fields [5,6]. However, data fusion can only be applied to linear systems in which all noises are independent of each other; if it is applied to a nonlinear system, it must be combined with the extended Kalman filter or the unscented Kalman filter [7]. If the noises are correlated, they must be decoupled by linear transformations [8].
The raw data must first be preprocessed, which includes removing abnormal data, filtering systematic deviations, and losslessly compressing the data. Abnormal data are removed using the Grubbs criterion, deviations are filtered using bias estimation algorithms [9,10], and in distributed fusion, lossless compression is required if the data center needs to restore the data [11].
Before conducting multi-sensor data fusion, each sensor can also use various algorithms to estimate the true value (called the estimation value), which can improve the fusion accuracy because the estimation value has higher accuracy than the measurement value.
When the measured physical quantity does not change over time and there is sufficient understanding of the system, linear minimum mean square error estimation is the best method: it finds a linear function that minimizes the mean square error from a single measurement value [12,13,14]. Otherwise, it is best to use least squares estimation [15,16] or batch estimation [17]. When the measured physical quantity changes over time, Kalman filtering is the appropriate choice [18,19].
Adaptive weighted data fusion is a widely used multi-sensor data fusion algorithm with a unique optimal weighting factor that enables the fusion estimation value to have a lower estimation variance than the measurement variance of all sensors, but it requires prior knowledge of the measurement variance to run [20]. However, its combination with a partially unbiased estimation algorithm (hereinafter referred to as unbiased estimation data fusion) not only further reduces the estimation variance but also removes the need for prior knowledge.
Unbiased estimation data fusion has two shortcomings. The first is that unbiasedness does not guarantee reliable estimation results. According to the Gauss-Markov theorem, the unbiased estimation variance has a lower bound, and when that lower bound is itself very large, even a minimum-variance unbiased estimate may produce a difference between the estimation result and the true value (hereinafter referred to as the estimation error) that falls outside the acceptable range [21]. The second is that the optimal weighting factor is not necessarily optimal: it is calculated from the measurement variance, and a smaller measurement variance does not necessarily mean a smaller measurement error, so in some cases the optimal weighting factor loses its optimality. Therefore, this paper proposes a biased estimation data fusion algorithm that likewise requires no a priori knowledge; it uses unbiased estimation values to construct biased estimation values with lower estimation errors and calculates the optimal weighting factor from the estimation error to ensure its optimality.
This paper is organized as follows: Section 2 reviews some existing unbiased estimation data fusion algorithms, Section 3 describes the shortcomings of the existing algorithms and the feasibility of the improved methods, Section 4 describes the specific implementation of the algorithms in this paper, Section 5 compares the performance of the different algorithms, and Section 6 summarizes the entire content and provides prospects for the future.

2. Unbiased Estimation Data Fusion

When N sensors measure an invariant physical quantity and all sensor nodes in the network sample the observed quantity at the same time, the measurement equation is:
z_i(k) = H_i x + v_i(k)
where x is the unknown quantity to be measured, \mu is the true value of x, k is the discrete time index, z_i(k) is the measurement value of the i-th sensor at time k, H_i is the measurement matrix of the i-th sensor (the H_i are often identity matrices), and v_i(k) is the measurement noise, with v_i(k) \sim N(0, v_i^2) mutually independent; v_i^2 is called the measurement variance. Moreover, by the additivity of the normal distribution, z_i(k) \sim N(\mu, v_i^2).
In this system, only z_i(k) is known; how can z_i(k) be used to recover the true value \mu? Since the measurement equations involve multiple sensors whose measurement variances may differ, adaptive weighted data fusion is often used to solve such problems. However, adaptive weighted data fusion requires a priori knowledge of every sensor's measurement variance, which in practice must be obtained by experiment.

2.1. Unbiased Estimation

An estimator is called unbiased if its mathematical expectation equals the true value of the estimated parameter (this property is called unbiasedness). Because an unbiased estimator's expectation equals the true value, replacing measurement values with unbiased estimation values does not increase the error. In particular, the unbiased estimate of the sample variance can replace a sensor's measurement variance, allowing adaptive weighted data fusion to operate without knowing the sensors' measurement variances. Two commonly used unbiased estimation algorithms are as follows:
Least squares estimation:
Assuming that a particular sensor collects n data over a period of time, the estimation value \bar{T} and the estimation variance \hat{\sigma}^2 are given as follows:
\bar{T} = \frac{1}{n} \sum_{i=1}^{n} T_i
\hat{\sigma}^2 = \frac{1}{n-1} \sum_{i=1}^{n} (T_i - \bar{T})^2
Batch estimation:
Assuming that a particular sensor collects n data over a period of time, these n data are randomly divided into two groups, where the i-th group contains the data T_{i1}, T_{i2}, \ldots, T_{in_i} (n - 2 \ge n_i \ge 2), i = 1, 2. The sample mean \bar{T}_i and sample variance \sigma_i^2 are given as follows:
\bar{T}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} T_{ij}, \quad i = 1, 2
\sigma_i^2 = \frac{1}{n_i - 1} \sum_{j=1}^{n_i} (T_{ij} - \bar{T}_i)^2, \quad i = 1, 2
According to the theory of batch estimation in statistics, the optimal estimation value \hat{T} and the optimal estimation variance \hat{\sigma}^2 of these two groups of data are given as follows:
\hat{T} = \frac{\sigma_1^2 \bar{T}_2 + \sigma_2^2 \bar{T}_1}{\sigma_1^2 + \sigma_2^2}
\hat{\sigma}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}
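The two-group batch-estimation formulas above can be sketched in Python (a minimal sketch; the split of the data into two groups is assumed to be given, and the function name is ours):

```python
import statistics

def batch_estimate(group1, group2):
    """Fuse two groups of measurements with the batch-estimation formulas:
    T_hat = (s2*T1_bar + s1*T2_bar) / (s1 + s2),  var_hat = s1*s2 / (s1 + s2)."""
    t1, t2 = statistics.mean(group1), statistics.mean(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)  # unbiased, n-1 divisor
    t_hat = (s2 * t1 + s1 * t2) / (s1 + s2)   # the lower-variance group gets more weight
    var_hat = (s1 * s2) / (s1 + s2)           # always below min(s1, s2)
    return t_hat, var_hat
```

With equal group variances this reduces to the plain average of the two group means, and the fused variance is half of either group's variance.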

2.2. Adaptive Weighted Data Fusion

Adaptive weighted data fusion principle: assume that N sensors measure the same unknown quantity x (true value \mu), that the measurement value of the i-th sensor is M_i with measurement variance v_i^2, and that the i-th sensor's measurement data follow N(\mu, v_i^2). Then the unbiased estimator \hat{M} and its estimation variance are given as follows:
\hat{M} = \sum_{i=1}^{N} w_i M_i
\hat{\sigma}^2 = \sum_{i=1}^{N} w_i^2 v_i^2
Since \hat{M} must be an unbiased estimator, the weights satisfy
\sum_{i=1}^{N} w_i = 1
To find the w_i that minimize \hat{\sigma}^2 (these w_i are called the optimal weighting factors), the auxiliary function constructed according to the Lagrange multiplier method is given as follows:
f(w_1, w_2, \ldots, w_N, \varphi) = \sum_{i=1}^{N} w_i^2 v_i^2 + \varphi \left( 1 - \sum_{i=1}^{N} w_i \right)
According to multivariate extremum theory, the solution of the following system of partial differential equations minimizes the estimation variance \hat{\sigma}^2:
\frac{\partial f}{\partial w_1} = 2 w_1 v_1^2 - \varphi = 0, \quad \ldots, \quad \frac{\partial f}{\partial w_N} = 2 w_N v_N^2 - \varphi = 0, \quad \frac{\partial f}{\partial \varphi} = 1 - \sum_{i=1}^{N} w_i = 0
Simplifying gives:
w_i = \frac{\theta}{v_i^2}, \quad i = 1, 2, \ldots, N; \qquad \theta = \frac{\varphi}{2}; \qquad \sum_{i=1}^{N} w_i = 1
Thus, the formula for w_i is:
w_i = \left( v_i^2 \sum_{j=1}^{N} \frac{1}{v_j^2} \right)^{-1}
So, \hat{M} and \hat{\sigma}^2 are given as follows:
\hat{M} = \sum_{i=1}^{N} \left( v_i^2 \sum_{j=1}^{N} \frac{1}{v_j^2} \right)^{-1} M_i
\hat{\sigma}^2 = \left( \sum_{j=1}^{N} \frac{1}{v_j^2} \right)^{-1}
When adaptive weighted data fusion is combined with an unbiased estimation algorithm, M_i and v_i^2 in Equation (15) can be replaced with the unbiased estimation value and the unbiased estimation variance. Figure 1 shows the flowchart for batch estimation data fusion; the flowcharts for the other unbiased estimation data fusion variants are much the same.
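The closed-form weights above amount to inverse-variance weighting; a minimal sketch (the sensor values and variances in the usage are illustrative):

```python
def adaptive_weighted_fusion(values, variances):
    """w_i proportional to 1/v_i^2, normalized so the weights sum to 1."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [x / total for x in inv]
    m_hat = sum(w * m for w, m in zip(weights, values))  # fusion estimation value
    var_hat = 1.0 / total                                # fusion estimation variance
    return m_hat, var_hat, weights
```

Note that the fused variance 1/Σ(1/v_j²) is smaller than every individual sensor variance, which is the point of the method.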

3. Rationale

Statistical inference is a statistical method of inferring the whole through samples, and there are two kinds of errors in statistical inference: systematic error and random error. Unbiased estimation has no systematic error because of its unbiased nature, but there is still random error, so its estimation results are not necessarily reliable.
The optimal weighting factor is calculated from the measurement variance alone, but a smaller measurement variance does not imply a smaller measurement error. Although the measurement variances of different sensors differ, their mathematical expectations are all equal, so in practice a sensor can easily have a smaller measurement variance and yet a larger measurement error.

3.1. Biased Estimators

Theorem 1.
The error of the unbiased estimator can be further optimized.
Proof of Theorem 1.
Assume that the true value of the measured unknown quantity x is \mu and that the sensor measurement data follow N(\mu, v^2). Let E be an unbiased estimator constructed as a linear combination of the sensor measurement data, with estimation variance \sigma^2, so that E \sim N(\mu, \sigma^2). Define the estimation error est as follows:
est = E - \mu
It is easy to obtain E(est) = 0 and D(est) = \sigma^2, so est follows N(0, \sigma^2). Assuming that the estimation error is tolerable when est \in [-z, z], the probability that the estimation error is tolerable is:
P(est \in [-z, z]) = 2 \int_0^z \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} \, dx
If z = \sigma, we obtain P(est \in [-\sigma, \sigma]) \approx 68.3\%, which means that for any \sigma there is about a 31.7% chance that |est| exceeds the estimation standard deviation \sigma. Hence, even if \sigma is very small, est may still admit further reduction. □
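The 31.7% figure can be checked empirically with a quick Monte Carlo sketch (the sample size and the value of σ are arbitrary choices):

```python
import random

random.seed(0)
sigma = 2.0          # any estimation standard deviation
trials = 200_000
# Fraction of estimation errors est ~ N(0, sigma^2) with |est| > sigma
exceed = sum(abs(random.gauss(0.0, sigma)) > sigma for _ in range(trials)) / trials
```

The fraction comes out near 0.317 regardless of σ, matching 1 − P(est ∈ [−σ, σ]).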
Theorem 2.
A biased estimation value with a smaller error can be constructed from an unbiased estimation value.
Proof of Theorem 2.
Assume the unbiased estimator A follows N(\mu, \sigma^2), and define the biased estimator corresponding to A as B = A + s, where the nonzero variable s is called the offset. The following result is obtained:
E(B) = E(A) + E(s) = \mu + s \quad (s \neq 0)
Because E(B) does not equal the true value \mu, B is a biased estimator.
Assume that A_1 is a specific estimate of A with estimation error d_1 = A_1 - \mu, that s_1 is the offset corresponding to A_1 with |s_1| \neq |d_1|, and that the biased estimator corresponding to A_1 is B_1 = A_1 + s_1. The estimation error of B_1 is e_1 = A_1 - \mu + s_1, so:
|e_1| = \begin{cases} |d_1| + |s_1| & \text{if } (A_1 - \mu)\, s_1 > 0 \\ |d_1| - |s_1| & \text{if } (A_1 - \mu)\, s_1 < 0 \text{ and } |s_1| < |d_1| \\ |s_1| - |d_1| & \text{if } (A_1 - \mu)\, s_1 < 0 \text{ and } |s_1| > |d_1| \end{cases}
So, when (A_1 - \mu)\, s_1 < 0 and |s_1| < 2|d_1|, |e_1| < |d_1|. In other words, when the sign of s_1 is opposite to that of A_1 - \mu and |s_1| < 2|d_1|, the estimation error of the biased estimate is smaller than that of the unbiased estimate. □
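A numeric check of the piecewise result above (the values of d₁ and s₁ are arbitrary illustrations):

```python
def biased_abs_error(d1, s1):
    """|e1| = |d1 + s1| for B1 = A1 + s1, where d1 = A1 - mu is the signed unbiased error."""
    return abs(d1 + s1)

# Opposite sign and |s1| < 2|d1|: the biased error shrinks
improved = biased_abs_error(0.8, -0.5)
# Opposite sign but |s1| > 2|d1|: the biased error grows past the unbiased one
overshoot = biased_abs_error(0.8, -1.7)
```

Here `improved` is 0.3 (below the unbiased error 0.8) while `overshoot` is 0.9 (above it), matching the |s₁| < 2|d₁| condition.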

3.2. Optimal Weighting Factors

Theorem 3.
In some cases, the optimal weighting factor is not optimal.
Proof of Theorem 3.
Suppose two sensors S_1 and S_2 simultaneously measure the unknown quantity x (true value \mu). Their measurement variances are v_1^2 and v_2^2 with v_1^2 < v_2^2, and their measurement data are z_1 and z_2. Under adaptive weighted data fusion, w_1, the weight of S_1, is always greater than w_2, the weight of S_2, since v_1^2 < v_2^2.
Define the measurement error m_i (unlike est, m_i considers only the magnitude of the error, not its sign) as follows:
m_i = |z_i - \mu|
So m_1 = |z_1 - \mu| and m_2 = |z_2 - \mu|. If m_1 > m_2, then w_1 m_1 + w_2 m_2 > w_1 m_2 + w_2 m_1, which implies that the estimation error of the fusion estimate computed from the optimal weighting factors is not minimal. So, when v_1^2 < v_2^2 but m_1 > m_2, the optimal weighting factor is not optimal, and the probability of this event is given as follows:
P(m_1 > m_2) = \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi}\, v_1} e^{-\frac{(z_1 - \mu)^2}{2 v_1^2}} \left[ 2 \int_0^{|z_1 - \mu|} \frac{1}{\sqrt{2\pi}\, v_2} e^{-\frac{x^2}{2 v_2^2}} \, dx \right] dz_1
It can be shown that P(m_1 > m_2) > 0, which indicates that the weights of any two sensors may fail to be optimal. The above proof uses measurement values and measurement variances, but it remains valid when these are replaced with estimation values and estimation variances. □
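P(m₁ > m₂) can also be estimated by simulation. In this sketch sensor 1 has the smaller variance, yet on a given reading its error is the larger one roughly 30% of the time (μ and the standard deviations are arbitrary choices):

```python
import random

random.seed(1)
mu, v1, v2 = 5.0, 0.5, 1.0    # standard deviations, so v1^2 < v2^2
trials = 100_000
count = sum(
    abs(random.gauss(mu, v1) - mu) > abs(random.gauss(mu, v2) - mu)
    for _ in range(trials)
)
p_m1_gt_m2 = count / trials
```

So the sensor favored by the optimal weighting factor is not always the more accurate one on a particular measurement.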
Theorem 4.
It is optimal to compute the optimal weighting factor using the measurement error.
Proof of Theorem 4.
Suppose N sensors measure the unknown quantity x (true value \mu), their measurement values are M_1, M_2, \ldots, M_N, and their measurement errors are m_1, m_2, \ldots, m_N with 0 < m_i < m_{i+1}. Let the optimal weighting factors calculated from the measurement errors be w_1, w_2, \ldots, w_N; because m_i < m_{i+1}, we have w_i > w_{i+1}, and the corresponding fusion estimation value \hat{M}_1 is:
\hat{M}_1 = \sum_{i=1}^{N} w_i M_i
Only one case is proved here, in which two of the weights differ from w_1, w_2, \ldots, w_N; the proof generalizes easily to the other cases.
By contradiction: suppose that with weighting factors v_1, v_2, w_3, \ldots, w_N, where v_1 < v_2, the fusion estimation value \hat{M}_2 is more accurate than \hat{M}_1, where \hat{M}_2 is given as follows:
\hat{M}_2 = v_1 M_1 + v_2 M_2 + \sum_{i=3}^{N} w_i M_i
Let the common term of \hat{M}_1 and \hat{M}_2 be \hat{M}_3 = \sum_{i=3}^{N} w_i M_i, with E(\hat{M}_3) = \alpha\mu where \alpha = \sum_{i=3}^{N} w_i. Then \hat{M}_1 and \hat{M}_2 take the form:
\hat{M}_1 = w_1 M_1 + w_2 M_2 + \hat{M}_3, \quad \hat{M}_2 = v_1 M_1 + v_2 M_2 + \hat{M}_3
Because w_1 + w_2 + \cdots + w_N = v_1 + v_2 + w_3 + \cdots + w_N = 1, it follows that w_1 + w_2 = v_1 + v_2; set \theta = w_1 + w_2. Substituting M_i = \mu + m_i into \hat{M}_1 and \hat{M}_2 gives:
\hat{M}_1 = (w_1 + w_2)\mu + w_1 m_1 + w_2 m_2 + \hat{M}_3
\hat{M}_2 = (v_1 + v_2)\mu + v_1 m_1 + v_2 m_2 + \hat{M}_3 = (w_1 + w_2)\mu + v_1 m_1 + v_2 m_2 + \hat{M}_3
Because \hat{M}_3 + (w_1 + w_2)\mu = \mu in expectation, the larger |w_1 m_1 + w_2 m_2| and |v_1 m_1 + v_2 m_2| are, the less accurate \hat{M}_1 and \hat{M}_2 are. Because m_1 > 0 and m_2 > 0, |w_1 m_1 + w_2 m_2| = w_1 m_1 + w_2 m_2 and |v_1 m_1 + v_2 m_2| = v_1 m_1 + v_2 m_2. Define f(\gamma) as follows:
f(\gamma) = \gamma m_1 + (\theta - \gamma) m_2
So f(w_1) = w_1 m_1 + w_2 m_2 and f(v_1) = v_1 m_1 + v_2 m_2. The derivative of f(\gamma) is:
\frac{df(\gamma)}{d\gamma} = m_1 - m_2 < 0
So the larger \gamma is, the smaller f(\gamma). Because w_1 > w_2 and v_1 < v_2, we have w_1 > \theta/2 and v_1 < \theta/2, hence w_1 m_1 + w_2 m_2 < v_1 m_1 + v_2 m_2, and \hat{M}_1 is more accurate than \hat{M}_2. This contradicts the hypothesis, so the optimal weighting factor calculated from the estimation error is better. The above proof uses measurement values and measurement variances, but it remains valid when these are replaced with estimation values and estimation variances. □
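The monotonicity argument at the heart of the proof is easy to verify numerically (the values of θ, m₁, m₂ are arbitrary, with m₁ < m₂):

```python
def f(gamma, theta, m1, m2):
    """Weighted error term from the proof: f(gamma) = gamma*m1 + (theta - gamma)*m2."""
    return gamma * m1 + (theta - gamma) * m2

theta, m1, m2 = 0.6, 0.1, 0.3   # m1 < m2, so f is strictly decreasing in gamma
f_w1 = f(0.5, theta, m1, m2)    # larger weight on the smaller error
f_v1 = f(0.1, theta, m1, m2)    # larger weight on the larger error
```

Since f is decreasing in γ, putting the larger weight on the smaller-error sensor (γ = 0.5 > θ/2) yields the smaller total error term.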

4. Algorithm Implementation

The algorithm in Section 4.1 only uses measurement data from a single sensor, so it can be implemented in a distributed manner. The algorithms in Section 4.2 and Section 4.3 include data fusion algorithms, so they can only be implemented using centralized methods.

4.1. Unbiased Estimation

Suppose that N sensors simultaneously measure the unknown quantity x (true value µ). For any one of the sensors, if it collects 10 data over a period of time, ordered from smallest to largest as a_1, a_2, \ldots, a_{10}, then construct the unbiased estimators as follows:
E_1 = \frac{a_5 + a_6}{2}, \quad E_2 = \frac{a_1 + a_2 + a_9 + a_{10}}{4}, \quad E_3 = \frac{a_3 + a_4 + a_7 + a_8}{4}, \quad E_4 = \frac{1}{10} \sum_{i=1}^{10} a_i
Assuming the sensor follows N(\mu, \sigma^2), theoretically E_1 \sim N(\mu, \sigma^2/2) and E_2, E_3 \sim N(\mu, \sigma^2/4). However, because a_1–a_{10} are sorted by size, these estimators have lower estimation variances: experiments found the estimation variances of E_1–E_3 to be approximately \sigma^2/7.5, \sigma^2/8, and \sigma^2/9. Moreover, E_1–E_3 are built from disjoint measurement values, independent of each other, and each has an estimation variance smaller than the measurement variance.
The mean of the estimation values \hat{E} and the estimation variances \sigma_i^2 are given as follows:
\hat{E} = \frac{1}{4} \sum_{i=1}^{4} E_i, \quad \sigma_i^2 = (E_i - \hat{E})^2
The adaptive weighted data fusion algorithm is then used to obtain the unbiased estimation value \hat{x} and the unbiased estimation variance \hat{\sigma}^2, with the specific formulas:
w_i = \left( \sigma_i^2 \sum_{j=1}^{4} \frac{1}{\sigma_j^2} \right)^{-1}, \quad \hat{x} = \sum_{i=1}^{4} w_i E_i, \quad \hat{\sigma}^2 = \left( \sum_{j=1}^{4} \frac{1}{\sigma_j^2} \right)^{-1}
Because E 4 is the least squares estimator, theoretically, the accuracy of the fused estimate x ^ will be higher than that of the least squares estimate.
If the number of measurement data collected by a sensor within a certain period is m and m is not equal to 10, the unbiased estimators can be constructed according to the following rules:
  • Each group consists of at least two measurement values, and no measurement value is used in more than one group. The unbiased estimator corresponding to each group is the mean of the measurement values within the group.
  • When m is even (m = 2k), for any measurement value a_i, ensure that a_i and a_{m+1-i} are in the same group.
  • When m is odd (m = 2k + 1), for any measurement value a_i, ensure that a_i and a_{m+1-i} are in the same group; in addition, ensure that a_k, a_{k+1}, and a_{k+2} are in the same group.
  • Add one additional group that includes all measurement values, such as E_4 in Equation (30).
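For the 10-measurement case, the grouped estimators E₁–E₄ can be built as follows (a sketch; the function name is ours):

```python
import statistics

def build_estimators(data):
    """Grouped unbiased estimators for 10 measurements, sorted smallest to largest."""
    a = sorted(data)                        # a[0]..a[9] play the role of a_1..a_10
    e1 = (a[4] + a[5]) / 2                  # middle pair a_5, a_6
    e2 = (a[0] + a[1] + a[8] + a[9]) / 4    # outer pairs a_1, a_2, a_9, a_10
    e3 = (a[2] + a[3] + a[6] + a[7]) / 4    # remaining pairs a_3, a_4, a_7, a_8
    e4 = statistics.mean(a)                 # least squares estimator: plain sample mean
    return [e1, e2, e3, e4]
```

Each group pairs a_i with a_{11−i}, so for a sample that is symmetric about its mean all four estimators agree.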

4.2. Biased Estimation

First, |s_i|, the magnitude of the offset s_i, must be determined. For the i-th sensor, define its unbiased estimation value and unbiased estimation variance as A_i and \sigma_i^2. The fusion estimation value \hat{A} and fusion estimation variance \hat{\sigma}^2 of these unbiased estimation values are computed by the adaptive weighted data fusion algorithm.
Define g_i as the degree of deviation of the i-th unbiased estimation value from the true value, using \hat{A} in place of the (unknown) true value:
g_i = \frac{|A_i - \hat{A}|}{\hat{\sigma}}
Define \omega_i as a coefficient that scales s_i, whose value is determined by g_i: when g_i < 0.1, \omega_i = 0; when 0.1 \le g_i < 1, \omega_i = 0.5; otherwise, \omega_i = 1. \omega_i controls the size of the offset and prevents it from becoming excessive.
|s_i| is given as follows:
|s_i| = \frac{\sigma_i}{\sqrt{N}} \cdot g_i \cdot \omega_i
Then the sign of s_i must be determined: s_i takes a negative sign when A_i > \hat{A} and a positive sign otherwise. So B_i is given as follows:
B_i = \begin{cases} A_i - \frac{\sigma_i}{\sqrt{N}} \cdot g_i \cdot \omega_i & \text{if } A_i > \hat{A} \\ A_i + \frac{\sigma_i}{\sqrt{N}} \cdot g_i \cdot \omega_i & \text{if } A_i < \hat{A} \end{cases}
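The offset construction can be sketched as follows; note that the 1/√N scaling is our reading of the source formula, so treat it as an assumption:

```python
import math

def omega(g):
    """Offset coefficient: 0 below g = 0.1, 0.5 up to g = 1, otherwise 1."""
    if g < 0.1:
        return 0.0
    return 0.5 if g < 1.0 else 1.0

def biased_estimate(a_i, sigma_i, a_hat, sigma_hat, n_sensors):
    """B_i = A_i -/+ |s_i|, with |s_i| = (sigma_i / sqrt(N)) * g_i * omega_i,
    always shifting A_i toward the fused value A_hat."""
    g = abs(a_i - a_hat) / sigma_hat
    s = (sigma_i / math.sqrt(n_sensors)) * g * omega(g)
    return a_i - s if a_i > a_hat else a_i + s
```

Because the sign of the offset opposes A_i − Â, the biased estimate always moves toward the fused value, and estimates already close to Â (g_i < 0.1) are left untouched.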

4.3. Data Fusion

Applying the adaptive weighted data fusion algorithm requires the measurement variance, which unbiased estimation data fusion replaces with the estimation variance. In biased estimation data fusion, the estimation variance is instead calculated from the estimation error; the design of the calculation formula is inspired by the Kalman gain. Define e_i as the biased estimation error of B_i:
e_i = B_i - \mu
Since the true value is unknown, \hat{A} is used in its place, so e_i^2 is given as:
e_i^2 = (B_i - \hat{A})^2
Then, through 100,000 experiments, the mean of e_i^2, denoted \bar{e}_i^2, and the mean of g_i, denoted \bar{g}_i, are calculated for different g-value intervals. The values of \bar{e}_i^2 and \bar{g}_i over the three g intervals are shown in Table 1 and can be written as follows:
\bar{e}_i^2 = 0.0176,\ \bar{g}_i = 0.0493 \quad \text{if } g_i < 0.1
\bar{e}_i^2 = 0.0203,\ \bar{g}_i = 0.5268 \quad \text{if } 0.1 \le g_i < 1
\bar{e}_i^2 = 0.0293,\ \bar{g}_i = 1.6735 \quad \text{if } g_i \ge 1
So \tilde{\sigma}_i^2, the biased estimation variance of B_i, is given as follows:
\tilde{\sigma}_i^2 = e_i^2 + \frac{g_i}{g_i + \bar{g}_i} \cdot (\bar{e}_i^2 - e_i^2)
Thus, the optimal weighting factor w_i and the fusion estimation value \hat{M} are given as follows:
w_i = \left( \tilde{\sigma}_i^2 \sum_{j=1}^{N} \frac{1}{\tilde{\sigma}_j^2} \right)^{-1}, \quad \hat{M} = \sum_{i=1}^{N} w_i B_i
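Using the interval means from Table 1, the biased estimation variance and the final fusion step can be sketched as follows (the helper names are ours):

```python
# Interval means (g upper bound, e_bar^2, g_bar) from Table 1
INTERVALS = [
    (0.1, 0.0176, 0.0493),           # g_i < 0.1
    (1.0, 0.0203, 0.5268),           # 0.1 <= g_i < 1
    (float("inf"), 0.0293, 1.6735),  # g_i >= 1
]

def biased_variance(e_sq, g):
    """sigma~_i^2 = e_i^2 + g_i / (g_i + g_bar) * (e_bar^2 - e_i^2)."""
    for upper, e_bar_sq, g_bar in INTERVALS:
        if g < upper:
            return e_sq + g / (g + g_bar) * (e_bar_sq - e_sq)

def fuse_biased(values, variances):
    """Inverse-variance weights over the biased estimation variances."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return sum((x / total) * b for x, b in zip(inv, values))
```

When g_i equals the interval mean ḡ_i, the blend sits exactly halfway between e_i² and ē_i², which is the Kalman-gain-like behavior the text describes.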
The flowchart of biased estimation data fusion is shown in Figure 2:

5. Results

The simulation test is divided into two parts: 1. a normal distribution white noise test (all noise follows a normal distribution), and 2. a uniform distribution white noise test (environmental noise is uniformly distributed white noise). The test items for the two tests are shown in Table 2:
The testing mode is as follows: ten rounds of experiments are conducted for each combination of test items, and ten tests are conducted for each round. The average of the ten test results is used as the result of this round of experiments. Three sensors are used for each test, with 10 measurement data used for each sensor.
The test indicators are mean relative error (mre) and mean square error (mse). mre is used to compare the average performance of the three algorithms, and mse is used to compare their stability. Their calculation formulas are as follows:
mre_i = \frac{1}{10} \sum_{j=1}^{10} \frac{|m_{ij} - \mu|}{\mu}
mse_i = \frac{1}{10} \sum_{j=1}^{10} (m_{ij} - \mu)^2
where i represents the test round and m_{ij} represents the estimation value of the j-th test in the i-th round. Relative improvement (ri) is used to compare the performance difference between two algorithms; its calculation formula is as follows:
ri = \frac{V_\alpha - V_\beta}{V_\alpha} \times 100\%
where V_\alpha and V_\beta represent the same indicator value (e.g., mre) for algorithms \alpha and \beta. If the indicator has multiple values, the mean of the indicator is calculated first and then ri. If ri is greater than 0, algorithm \beta improves on algorithm \alpha by ri.
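The three indicators can be written directly from the formulas above (a sketch; the function names are ours):

```python
def mre(estimates, mu):
    """Mean relative error over one round of tests."""
    return sum(abs(m - mu) / mu for m in estimates) / len(estimates)

def mse(estimates, mu):
    """Mean square error over one round of tests."""
    return sum((m - mu) ** 2 for m in estimates) / len(estimates)

def relative_improvement(v_alpha, v_beta):
    """ri in percent; positive means algorithm beta improves on algorithm alpha."""
    return (v_alpha - v_beta) / v_alpha * 100.0
```

For example, estimates [4, 6] against a true value of 5 give mre = 0.2 and mse = 1.0, and an mre drop from 0.2 to 0.15 is a 25% relative improvement.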

5.1. Normal Distribution White Noise Test

The test results are as follows (Figure 3, Figure 4 and Figure 5 show the test results when the measurement noise is 0.25, and the remaining result graphs are Figure A1, Figure A2, Figure A3, Figure A4, Figure A5 and Figure A6 in Appendix A):
From Table 3, it can be seen that the mre of biased estimation data fusion (BED) decreased by an average of 9.25% and 10.26% compared to least squares estimation data fusion (LSD) and batch estimation data fusion (BTD). Therefore, BED has higher fusion accuracy.
From Table 4, it can be seen that the mse of BED decreased by an average of 16.24% and 20.28% compared to LSD and BTD. Therefore, BED has higher stability.
As the noise variance increases, the average increases in mre and mse are 0.297% and 0.036 for LSD, 0.306% and 0.038 for BTD, and 0.269% and 0.03 for BED. So the mre and mse of BED decreased by an average of 11.59% and 18.85% compared to LSD and BTD, respectively. Therefore, BED has higher noise resistance.

5.2. Uniform Distribution White Noise Test

The test results are as follows (Figure 6, Figure 7 and Figure 8 show the test results when the measurement noise is 0.25; the remaining result graphs are shown in Figure A7, Figure A8, Figure A9, Figure A10, Figure A11 and Figure A12):
From Table 5, it can be seen that the mre of BED decreased by an average of 8.56% and 6.58% compared to LSD and BTD. Therefore, BED has higher fusion accuracy, although the mre advantage is 22.39% smaller than in the normal distribution white noise test.
From Table 6, it can be seen that the mse of BED decreased by an average of 17.42% and 21.01% compared to LSD and BTD. Therefore, BED has higher stability, and the mse advantage is 5.23% larger than in the normal distribution white noise test.
As the noise variance increases, the average increases in mre and mse are 0.305% and 0.03 for LSD, 0.298% and 0.031 for BTD, and 0.278% and 0.024 for BED. So the mre and mse of BED decreased by an average of 8.21% and 21.29% compared to LSD and BTD. BED therefore has higher noise resistance, and its overall noise resistance is equivalent to that in the normal distribution white noise test.
In summary, even if the noise does not follow a normal distribution, the algorithm performs well as long as the noise mean is 0.

6. Discussion

In this paper, an adaptive weighted data fusion algorithm based on biased estimation is proposed to address the shortcomings of unbiased estimation data fusion. The algorithm first further reduces the estimation error through biased estimation and then uses the estimation error instead of the measurement variance to compute the optimal weighting factor, avoiding the cases in which the optimal weighting factor loses its optimality. The simulation results show that this algorithm outperforms the original algorithms in fusion accuracy, stability, and noise resistance.
The essence of this algorithm is to sacrifice unbiasedness for higher estimation accuracy, so the overall accuracy of biased estimation is significantly higher than that of unbiased estimation. However, because unbiasedness is lost, adaptive weighted data fusion applied to the biased estimates does not perform as well as expected, so there is still much room for improvement.

Author Contributions

Writing—original draft preparation, M.Q.; writing—review and editing, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Major Science and Technology Projects in Yunnan Province (202202AE090032).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. mre and mse when measurement variance = 1 and normal distribution noise variance = 0.2.
Figure A2. mre and mse when measurement variance = 1 and normal distribution noise variance = 0.5.
Figure A3. mre and mse when measurement variance = 1 and normal distribution noise variance = 1.
Figure A4. mre and mse when measurement variance = 2 and normal distribution noise variance = 0.2.
Figure A5. mre and mse when measurement variance = 2 and normal distribution noise variance = 0.5.
Figure A6. mre and mse when measurement variance = 2 and normal distribution noise variance = 1.
Figure A7. mre and mse when measurement variance = 1 and uniformly distributed noise variance = 0.2.
Figure A8. mre and mse when measurement variance = 1 and uniformly distributed noise variance = 0.5.
Figure A9. mre and mse when measurement variance = 1 and uniformly distributed noise variance = 1.
Figure A10. mre and mse when measurement variance = 2 and uniformly distributed noise variance = 0.2.
Figure A11. mre and mse when measurement variance = 2 and uniformly distributed noise variance = 0.5.
Figure A12. mre and mse when measurement variance = 2 and uniformly distributed noise variance = 1.

References

  1. Kenyeres, M.; Kenyeres, J. Average Consensus over Mobile Wireless Sensor Networks: Weight Matrix Guaranteeing Convergence without Reconfiguration of Edge Weights. Sensors 2020, 13, 3677–3702. [Google Scholar]
  2. Sha, G.; Radzienski, M.; Soman, R.; Cao, M.; Ostachowicz, W.; Xu, W. Multiple damage detection in laminated composite beams by data fusion of Teager energy operator-wavelet transform mode shapes. Compos. Struct. 2020, 235, 111798. [Google Scholar] [CrossRef]
  3. Sun, S.-L.; Deng, Z.-L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023. [Google Scholar] [CrossRef]
  4. Meng, T.; Jing, X.; Yan, Z. A Survey on Machine Learning for Data Fusion. Inf. Fusion 2019, 57, 115–129. [Google Scholar] [CrossRef]
  5. Xia, Y.; Leung, H. Performance analysis of statistical optimal data fusion algorithms. Inf. Sci. 2014, 277, 808–824. [Google Scholar] [CrossRef]
  6. Kim, H.; Suh, D. Hybrid Particle Swarm Optimization for Multi-Sensor Data Fusion. Sensors 2018, 9, 2792. [Google Scholar] [CrossRef]
  7. Gao, B.; Hu, G.; Gao, S. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter. Sensors 2018, 18, 488. [Google Scholar] [CrossRef]
  8. Yan, L.; Xia, Y.; Fu, M. Optimal fusion estimation for stochastic systems with cross-correlated sensor noises. Sci. China Inf. Sci 2017, 60, 120205. [Google Scholar] [CrossRef]
  9. Zhang, T.; Li, H.; Yang, L. Multi-Radar Bias Estimation without a Priori Association. IEEE Access 2018, 6, 44616–44625. [Google Scholar] [CrossRef]
  10. Tian, W.; Huang, G.; Peng, H. Sensor Bias Estimation Based on Ridge Least Trimmed Squares. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 1645–1651. [Google Scholar] [CrossRef]
  11. Duan, Z.; Li, X.R. Lossless Linear Transformation of Sensor Data for Distributed Estimation Fusion. IEEE Trans. Signal Process. 2011, 59, 362–372. [Google Scholar] [CrossRef]
  12. Yaseen, M.; Canbilen, A.E.; Ikki, S. Channel Estimation in Visible Light Communication Systems: The Effect of Input Signal-Dependent Noise. IEEE Trans. Veh. Technol. 2023, 72, 14330–14340. [Google Scholar] [CrossRef]
  13. Li, Q.; El-Hajjar, M.; Hemadeh, I.; Shojaeifard, A.; Hanzo, L. Low-Overhead Channel Estimation for RIS-Aided Multi-Cell Networks in the Presence of Phase Quantization Errors. IEEE Trans. Veh. Technol. 2023, 73, 6626–6641. [Google Scholar] [CrossRef]
  14. Li, Q.; El-Hajjar, M.; Hemadeh, I.; Jagyasi, D.; Shojaeifard, A.; Hanzo, L. Performance analysis of active RIS-aided systems in the face of imperfect CSI and phase shift noise. IEEE Trans. Veh. Technol. 2023, 72, 8140–8145. [Google Scholar] [CrossRef]
  15. Fortunati, S.; Farina, A.; Gini, F. Least Squares Estimation and Cramér-Rao Type Lower Bounds for Relative Sensor Registration Process. IEEE Trans. Signal Process. 2011, 59, 1075–1087. [Google Scholar] [CrossRef]
  16. Zheng, Z.-W.; Zhu, Y.-S. New least squares registration algorithm for data fusion. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1410–1416. [Google Scholar] [CrossRef]
  17. Wang, H.D.; Wang, D.Y. An improved adaptive weighted fusion algorithm for batch estimation of multi-wireless sensor data. J. Sens. Technol. 2015, 28, 1239–1243. [Google Scholar]
  18. Shao, S.; Zhang, K. An Improved Multisensor Self-Adaptive Weighted Fusion Algorithm Based on Discrete Kalman Filtering. Complexity 2020, 2020, 9673764. [Google Scholar] [CrossRef]
  19. Gan, Q.; Harris, C.J. Comparison of two measurement fusion methods for Kalman-filter-based multisensor data fusion. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 273–279. [Google Scholar] [CrossRef]
  20. Chen, Y.; Si, X.; Li, Z. Research on Kalman-filter based multisensor data fusion. J. Syst. Eng. Electron. 2007, 18, 497–502. [Google Scholar]
  21. Yue, Y.; Zuo, X.; Luo, X. A biased estimation method for multi-sensor data fusion to improve measurement reliability. J. Autom. 2014, 40, 1843–1852. [Google Scholar]
Figure 1. Flowchart of batch estimation data fusion.
Figure 2. Flowchart of biased estimation data fusion.
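Both flowcharts build on the classical adaptive weighted fusion rule, in which each sensor receives a weight inversely proportional to its (estimated) error variance. A minimal sketch of that baseline step follows; the function name and standalone NumPy form are illustrative, and this is the unbiased baseline, not the paper's biased-estimation variant:

```python
import numpy as np

def inverse_variance_fusion(means, variances):
    """Classical adaptive weighted fusion: weight w_i proportional to
    1/sigma_i^2, normalised so the weights sum to one."""
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    w /= w.sum()                      # normalise the weights
    fused = float(np.dot(w, np.asarray(means, dtype=float)))
    # variance of the fused estimate: 1 / sum(1/sigma_i^2)
    fused_var = float(1.0 / np.sum(1.0 / v))
    return fused, fused_var

# three sensors reading the same quantity with different accuracies
x, var = inverse_variance_fusion([20.1, 19.8, 20.05], [0.25, 1.0, 2.0])
print(x, var)
```

The most accurate sensor (variance 0.25) dominates the fused value, and the fused variance is smaller than any single sensor's variance.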
Figure 3. m_re and m_se when measurement variance = 0.25 and normal distribution noise variance = 0.2.
Figure 4. m_re and m_se when measurement variance = 0.25 and normal distribution noise variance = 0.5.
Figure 5. m_re and m_se when measurement variance = 0.25 and normal distribution noise variance = 1.
Figure 6. m_re and m_se when measurement variance = 0.25 and uniformly distributed noise variance = 0.2.
Figure 7. m_re and m_se when measurement variance = 0.25 and uniformly distributed noise variance = 0.5.
Figure 8. m_re and m_se when measurement variance = 0.25 and uniformly distributed noise variance = 1.
Table 1. Mean estimation error and mean g for different g intervals.

g_i interval    | ē_i²   | ḡ_i
g_i < 0.1       | 0.0176 | 0.0493
0.1 ≤ g_i < 1   | 0.0203 | 0.5268
g_i ≥ 1         | 0.0293 | 1.6735
Table 2. The test items for the two tests.

Test item            | Test value 1 | Test value 2 | Test value 3
True value           | 20           | null         | null
Measurement variance | 0.25         | 1            | 2
Noise variance       | 0.2          | 0.5          | 1
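The test conditions in Table 2 can be reproduced with a short data generator: every sensor reads the true value 20 with Gaussian measurement error, plus an extra disturbance drawn from either a normal or a uniform distribution (the two noise families compared in Tables 3–6). This sketch is an assumption about the test setup; the function name and sensor/sample counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_VALUE = 20.0  # Table 2: the quantity all sensors measure

def simulate_measurements(n_sensors, n_samples, meas_var, noise_var, uniform=False):
    """Generate noisy readings: Gaussian measurement error plus an added
    disturbance that is either normally or uniformly distributed."""
    base = rng.normal(TRUE_VALUE, np.sqrt(meas_var), size=(n_sensors, n_samples))
    if uniform:
        # a zero-mean uniform on [-a, a] has variance a^2 / 3
        a = np.sqrt(3.0 * noise_var)
        noise = rng.uniform(-a, a, size=(n_sensors, n_samples))
    else:
        noise = rng.normal(0.0, np.sqrt(noise_var), size=(n_sensors, n_samples))
    return base + noise

z = simulate_measurements(5, 1000, meas_var=0.25, noise_var=0.2)
print(z.shape)  # (5, 1000)
```

Sweeping `meas_var` over {0.25, 1, 2} and `noise_var` over {0.2, 0.5, 1}, with `uniform` toggled, covers all eighteen cells of Tables 3–6.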
Table 3. m_re of all algorithms when noise follows normal distribution.

Measure variance | Noise variance | Least squares estimation data fusion | Batch estimation data fusion | Biased estimation data fusion
0.25 | 0.2  | 0.506% | 0.518% | 0.445%
0.25 | 0.5  | 0.615% | 0.597% | 0.555%
0.25 | 1.0  | 0.872% | 0.887% | 0.816%
0.25 | mean | 0.664% | 0.667% | 0.605%
1    | 0.2  | 1.048% | 1.043% | 0.943%
1    | 0.5  | 1.131% | 1.137% | 1.004%
1    | 1.0  | 1.035% | 1.053% | 0.958%
1    | mean | 1.071% | 1.077% | 0.968%
2    | 0.2  | 1.018% | 1.022% | 0.870%
2    | 0.5  | 1.309% | 1.345% | 1.222%
2    | 1.0  | 1.449% | 1.471% | 1.339%
2    | mean | 1.258% | 1.279% | 1.143%
Table 4. m_se of all algorithms when noise follows normal distribution.

Measure variance | Noise variance | Least squares estimation data fusion | Batch estimation data fusion | Biased estimation data fusion
0.25 | 0.2  | 0.017 | 0.017 | 0.014
0.25 | 0.5  | 0.024 | 0.025 | 0.018
0.25 | 1.0  | 0.047 | 0.050 | 0.042
0.25 | mean | 0.029 | 0.030 | 0.024
1    | 0.2  | 0.068 | 0.067 | 0.053
1    | 0.5  | 0.075 | 0.081 | 0.061
1    | 1.0  | 0.065 | 0.066 | 0.054
1    | mean | 0.069 | 0.071 | 0.056
2    | 0.2  | 0.073 | 0.079 | 0.055
2    | 0.5  | 0.103 | 0.104 | 0.091
2    | 1.0  | 0.127 | 0.136 | 0.111
2    | mean | 0.101 | 0.106 | 0.085
Table 5. m_re of all algorithms when noise follows uniform distribution.

Measure variance | Noise variance | Least squares estimation data fusion | Batch estimation data fusion | Biased estimation data fusion
0.25 | 0.2  | 0.463% | 0.494% | 0.430%
0.25 | 0.5  | 0.455% | 0.458% | 0.424%
0.25 | 1.0  | 0.564% | 0.578% | 0.518%
0.25 | mean | 0.494% | 0.510% | 0.457%
1    | 0.2  | 0.762% | 0.830% | 0.705%
1    | 0.5  | 0.769% | 0.794% | 0.685%
1    | 1.0  | 0.962% | 0.947% | 0.865%
1    | mean | 0.831% | 0.857% | 0.751%
2    | 0.2  | 1.075% | 1.084% | 0.981%
2    | 0.5  | 1.058% | 1.071% | 0.982%
2    | 1.0  | 1.180% | 1.167% | 1.078%
2    | mean | 1.104% | 1.107% | 1.013%
Table 6. m_se of all algorithms when noise follows uniform distribution.

Measure variance | Noise variance | Least squares estimation data fusion | Batch estimation data fusion | Biased estimation data fusion
0.25 | 0.2  | 0.012 | 0.013 | 0.011
0.25 | 0.5  | 0.012 | 0.012 | 0.011
0.25 | 1.0  | 0.018 | 0.021 | 0.016
0.25 | mean | 0.014 | 0.015 | 0.012
1    | 0.2  | 0.037 | 0.043 | 0.032
1    | 0.5  | 0.041 | 0.041 | 0.031
1    | 1.0  | 0.054 | 0.055 | 0.046
1    | mean | 0.044 | 0.046 | 0.036
2    | 0.2  | 0.071 | 0.075 | 0.058
2    | 0.5  | 0.067 | 0.071 | 0.058
2    | 1.0  | 0.085 | 0.086 | 0.069
2    | mean | 0.074 | 0.077 | 0.061
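Reading the tables, m_re and m_se appear to be the mean relative error and mean squared error of the fused estimate over repeated runs; that interpretation, and the function names below, are assumptions rather than the paper's stated definitions:

```python
import numpy as np

def mre(estimates, true_value):
    """Mean relative error: average of |x_hat - x| / |x| over all runs."""
    e = np.asarray(estimates, dtype=float)
    return float(np.mean(np.abs(e - true_value)) / abs(true_value))

def mse(estimates, true_value):
    """Mean squared error: average of (x_hat - x)^2 over all runs."""
    e = np.asarray(estimates, dtype=float)
    return float(np.mean((e - true_value) ** 2))

# toy fused estimates around the true value 20 from Table 2
est = [19.9, 20.1, 20.05, 19.85]
print(mre(est, 20.0))  # ≈ 0.005, i.e. 0.5%
print(mse(est, 20.0))  # ≈ 0.01125
```

Under this reading, the m_re columns are mre expressed as a percentage, and the m_se columns are mse in the measurement's squared units.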

Qiu, M.; Liu, B. Multi-Sensor Adaptive Weighted Data Fusion Based on Biased Estimation. Sensors 2024, 24, 3275. https://doi.org/10.3390/s24113275
