Article

State Estimation Using Dependent Evidence Fusion: Application to Acoustic Resonance-Based Liquid Level Measurement

1
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
2
School of Engineering, Huzhou University, Huzhou 310027, China
*
Author to whom correspondence should be addressed.
Sensors 2017, 17(4), 924; https://doi.org/10.3390/s17040924
Submission received: 15 February 2017 / Revised: 18 April 2017 / Accepted: 19 April 2017 / Published: 21 April 2017

Abstract

Estimating the state of a dynamic system via noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in some practical applications, engineers can only obtain the ranges of the noises, not their precise statistical distributions. Hence, in the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results.

1. Introduction

Estimating the state of a dynamic system based on noisy sensor measurements is a common problem in sensor methods and applications [1,2]. Mainstream estimation methods all assume that both the system state noise and the measurement noise can be modeled as random variables with known statistical properties. The Kalman filter, which assumes that both noises obey Gaussian distributions, is by far the most popular method [3]. The basic Kalman filter is only applicable to linear systems. In order to deal with nonlinear cases, Bucy and Sunahara proposed the extended Kalman filter (EKF) [4,5]. The EKF uses a first-order Taylor expansion to linearize the state and observation equations, and then obtains state estimates with the Kalman filter. On the other hand, approximating the state probability distribution of a nonlinear system is, to a great extent, easier and more feasible than linearly approximating a nonlinear function [6]. Based on this idea, Gordon and Salmond proposed the particle filter (PF) [6]. The performance of the PF is commonly superior to that of the EKF because it can usually provide more precise information about the state posterior probability distribution, especially when this distribution takes a multimodal shape or the noise distributions are non-Gaussian [6,7].
The precondition of the above methods is that the noise statistical properties must be known. However, in some practical applications, what engineers can obtain are not precise statistical distributions [8], but ranges of noises. Hence, a group of state estimation methods considering bounded noises, also known as the bounded-error methods, appeared [9,10,11,12]. Assuming that all variables belong to known compact sets, these methods build simple sets, such as ellipsoids or boxes, guaranteed to contain all state vectors consistent with given constraints. For linear systems, some scholars began to study such state estimation methods in the 1960s [9,10,11]. For nonlinear systems, the corresponding studies are relatively rare. Khemane et al. and Jaulin proposed bounded-error state and parameter estimations for nonlinear systems [13,14]. Gning proposed a relatively simple and fast bounded-error method based on interval analysis and constraint propagation, which was successfully applied to dynamic vehicle localization [12], but when the noise bounds cannot be precisely determined, its robustness will unavoidably decline [7]. That is to say, if the bounds are too tight, then the data may become inconsistent with the system equations, and in this case, this method fails to provide a solution. On the contrary, if the bounds are overestimated, then the estimated state becomes very imprecise, and this method becomes overly pessimistic [7].
In order to deal with this problem, Nassreddine proposed an improved method by integrating interval analysis with DS evidence theory. Its key idea is to replace the set-based representation of uncertainty by a more general formalism, namely, mass functions in evidence theory [7]. It introduces possibility distributions to model bounded noises, and then uses mass functions, i.e., evidence composed of interval focal elements and their masses to approximate these distributions. Essentially, such mass functions can be regarded as “generalized boxes” composed of a collection of boxes with associated weights. These mass functions can be propagated in the system equations using interval arithmetic and constraint-satisfaction techniques to get the mass function of system state at each time step. Pignistic expectation of this mass function is calculated as the state estimation value. Therefore, this approach extends the pure interval approach, making it more robust and accurate.
Nassreddine’s research showed the powerful ability of DS evidence theory to deal with the uncertainty of dynamic systems. Hence, this paper presents a new state estimation method, which uses not only the evidential description of uncertainty, but also dependent evidence fusion. Here, the state equation and the observation equation of a dynamic system and the actual observations of the system states are regarded as three information sources. The random set description of evidence and the extension principle of random sets are used to obtain state evidence and observation evidence from these three information sources and to propagate them through the system equations. There are correlations among these pieces of evidence, so the proposed combination rule for dependent evidence is used to fuse the propagated evidence, and the Pignistic expectation of the fusion result is calculated as the state estimation value at each time step. Compared with Nassreddine’s method, it is shown that the proposed approach generates more accurate estimation results by combining dependent evidence. An industrial liquid level detection apparatus was employed to show the better performance of the approach.

2. Foundations of Dempster-Shafer (DS) Evidence Theory

The DS theory is a mechanism formalized by Shafer for representing and reasoning with uncertain, imprecise, and incomplete information. It is based on Dempster’s original work on modeling uncertainty in terms of upper and lower probabilities induced by a multi-valued mapping, rather than as a single probability value [15]. One of the specificities of this theory is that the objects of study are no longer the elements of the universe, i.e., a set, defined hereinafter as the frame of discernment, but the elements of the power set of this universe. In this section we introduce the main concepts of this theory and some notions that will be needed in the proposed approach. A more detailed exposition and some background information can be found in [16].

2.1. Basic Concepts in DS Evidence Theory

Definition 1 (Frame of discernment).
A set is called a frame of discernment if it contains mutually exclusive and exhaustive possible hypotheses. This set is usually denoted as Θ. The power set of Θ is denoted as 2Θ.
Definition 2 (Mass function).
A function m: 2^Θ → [0, 1] is called a mass function on Θ if it satisfies the following two conditions: (1) m(∅) = 0; (2) ∑_{A ∈ 2^Θ} m(A) = 1. This function is also named a basic belief assignment (BBA). A subset A with a non-null mass is viewed as a focal element. Commonly, if an information source can provide a mass function on Θ, then this mass function is called a body of evidence, abbreviated to evidence (E).
Definition 3 (Dempster’s combination rule).
If m1, m2 are two BBAs induced from two statistically independent information sources, then a combined BBA can be obtained by using Dempster’s combination rule:
m(A) = \begin{cases} \dfrac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}, & A \subseteq \Theta \ \text{and}\ A \neq \emptyset \\[4pt] 0, & A = \emptyset \end{cases}
Note that Dempster’s combination rule is meaningful only when ∑_{B ∩ C = ∅} m_1(B) m_2(C) < 1, i.e., when m1 and m2 are not totally conflicting. This rule can be used to synthesize uncertain, imprecise or incomplete information coming from different sources.
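As a concrete illustration, Dempster’s rule on a small discrete frame can be sketched in a few lines of Python (a minimal sketch; the dictionary-of-frozensets representation is our own illustrative choice, not part of the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two independent BBAs with Dempster's rule.

    BBAs are dicts mapping frozenset focal elements to masses.
    Mass on empty intersections is treated as conflict and the
    remainder is renormalized, as in Equation (1).
    """
    combined, conflict = {}, 0.0
    for (B, mb), (C, mc) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("m1 and m2 are totally conflicting")
    return {A: w / (1.0 - conflict) for A, w in combined.items()}
```

For example, on Θ = {a, b}, combining m1 = {{a}: 0.6, Θ: 0.4} with m2 = {{b}: 0.5, Θ: 0.5} produces conflict 0.3 and normalized masses {a}: 3/7, {b}: 2/7, Θ: 2/7.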

2.2. The Degree of Dependence and the Combination of Dependent Evidence

In DS evidence theory, Dempster’s combination rule is the most important tool for computing a new BBA from two BBAs based on two pieces of evidence. This rule requires that the two pieces of evidence be independent, which is considered a very strong constraint and cannot always be met in practice. Wu, Yang and Liu [17] pointed out that if two pieces of evidence are partially derived from the same information source, then they are mutually dependent. This interpretation concentrates on the connotation of the independence concept in the evidence combination operation. For this case, Wu, Yang and Liu [17] proposed the concept of the energy of evidence, and then deduced the degree of dependence and the dependency coefficients between two pieces of evidence from the energy of their intersection. Based on these notions, the combination of dependent evidence can be realized.
Definition 4 (The energy of evidence E).
The energy of evidence E, En(E) is defined as:
\mathrm{En}(E) = \sum_{\substack{i = 1 \\ A_i \neq \Theta}}^{n(E)} \frac{m(A_i)}{|A_i|}
where |Ai| is the number of elements in the focal element Ai, and n(E) is the number of distinct focal elements in E. Obviously, En(E) has some valuable characteristics: (1) if every m(Ai) = 0, namely, m(Θ) = 1, then En(E) = 0 and the evidence E represents no useful information; (2) if every |Ai| = 1 and m(Θ) = 0, then En(E) = 1 and E contains the maximum useful information; (3) En(E) ∈ [0, 1].
Suppose that the BBAs of evidence E1 and E2 are m1 and m2, respectively, and their focal element sequences are Ai and Bj. It is possible that some focal elements of E1 and E2 are induced by the same information source. In this case, E1 and E2 will be dependent, and the energy of the intersection of the two pieces of evidence can be described by:
\mathrm{En}(E_1, E_2) = \sum_{\substack{ij = 1 \\ D_{ij} \neq \Theta}}^{|\{D_{ij}\}|} \frac{m(D_{ij})}{|D_{ij}|}
where Dij denotes a dependent focal element, |{Dij}| is the number of distinct Dij, and the BBA function m is derived from m1 and m2.
The relationship of En(E1), En(E2) and En(E1, E2) is illustrated in Figure 1; especially, En(E1, E2) = 0 implies the independence between E1 and E2. The value of En(E1, E2) measures the dependency of the two pieces of evidence.
Definition 5 (The degree of dependence between two pieces of evidence).
En(E1, E2) is defined as the degree of dependence between E1 and E2. Actually, the partial energies En(E1) − En(E1, E2) in E1 and En(E2) − En(E1, E2) in E2 are independent of each other. If energy En(E1, E2) is partitioned into two parts, with each part attached to E1 and E2, respectively, as follows:
D(E_1, E_2) = \frac{2\, \mathrm{En}(E_1, E_2)}{\mathrm{En}(E_1) + \mathrm{En}(E_2)}
then two corresponding independent pieces of evidence can be generated from E1 and E2.
For E1, its final independent energy can be calculated as:
\begin{aligned}
\mathrm{En}_f(E_1) &= \mathrm{En}(E_1) - \mathrm{En}(E_1, E_2) + \mathrm{En}(E_1, E_2)\, \frac{\mathrm{En}(E_1)}{\mathrm{En}(E_1) + \mathrm{En}(E_2)} \\
&= \mathrm{En}(E_1) - \mathrm{En}(E_1, E_2)\, \frac{\mathrm{En}(E_2)}{\mathrm{En}(E_1) + \mathrm{En}(E_2)} \\
&= \mathrm{En}(E_1) \left( 1 - \frac{\mathrm{En}(E_1, E_2)}{\mathrm{En}(E_1) + \mathrm{En}(E_2)} \cdot \frac{\mathrm{En}(E_2)}{\mathrm{En}(E_1)} \right) \\
&= \mathrm{En}(E_1) \left( 1 - \frac{1}{2} D(E_1, E_2)\, \frac{\mathrm{En}(E_2)}{\mathrm{En}(E_1)} \right)
\end{aligned}
Similarly:
\mathrm{En}_f(E_2) = \mathrm{En}(E_2) \left( 1 - \frac{1}{2} D(E_1, E_2)\, \frac{\mathrm{En}(E_1)}{\mathrm{En}(E_2)} \right)
Definition 6 (The dependency coefficient between two pieces of evidence).
The dependency coefficient of E1 to E2 is defined as:
R_{12} = \frac{1}{2} D(E_1, E_2)\, \frac{\mathrm{En}(E_2)}{\mathrm{En}(E_1)}
and the dependency coefficient of E2 to E1 is defined as:
R_{21} = \frac{1}{2} D(E_1, E_2)\, \frac{\mathrm{En}(E_1)}{\mathrm{En}(E_2)}
E1 and E2 can be modified by R12 and R21, respectively, to obtain the corresponding independent evidence E1′ and E2′, whose BBA functions are given by:
m_1'(A) = \begin{cases} (1 - R_{12})\, m_1(A), & A \subseteq \Theta,\ A \neq \Theta \\ 1 - \sum_{A \neq \Theta} m_1'(A), & A = \Theta \end{cases}
m_2'(B) = \begin{cases} (1 - R_{21})\, m_2(B), & B \subseteq \Theta,\ B \neq \Theta \\ 1 - \sum_{B \neq \Theta} m_2'(B), & B = \Theta \end{cases}
Consequently, the requirement of Dempster’s rule is met and the combination of E1′ and E2′ can be implemented according to Dempster’s rule in Equation (1). Finally, the combination of E1 and E2 is indirectly realized through the combination of E1′ and E2′. In effect, reference [17] gives a decorrelation method that corrects E1 and E2 by the dependency coefficients such that the corrected E1′ and E2′ can be deemed independent evidence and combined using Dempster’s rule.
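Definitions 4-6 and the decorrelation step can be sketched as follows (an illustrative Python sketch under the assumption that the set of shared, dependent focal elements Dij and their masses have already been identified; how that identification is done is application-specific and not prescribed here):

```python
def energy(m, theta):
    """En(E): sum of m(A)/|A| over focal elements other than the frame Theta."""
    return sum(w / len(A) for A, w in m.items() if A != theta)

def dependency_coefficients(m1, m2, m_shared, theta):
    """R12 and R21 from Definitions 5 and 6; m_shared holds the dependent
    focal elements D_ij with their masses (assumed given)."""
    en1, en2 = energy(m1, theta), energy(m2, theta)
    en12 = energy(m_shared, theta)
    d = 2.0 * en12 / (en1 + en2)          # degree of dependence D(E1, E2)
    return 0.5 * d * en2 / en1, 0.5 * d * en1 / en2

def decorrelate(m, r, theta):
    """Discount masses on proper subsets by (1 - R); the removed mass is
    transferred to Theta, as in the modified BBAs m1', m2'."""
    out = {A: (1.0 - r) * w for A, w in m.items() if A != theta}
    out[theta] = 1.0 - sum(out.values())
    return out
```

The decorrelated m1′ and m2′ can then be combined with the ordinary Dempster rule.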

2.3. The Random Set Description of Evidence

2.3.1. Random Set and Random Relation

Definition 7
(Random set [18,19]). A finite support random set on Θ is a pair (ℱ, m), where ℱ is a finite family of distinct non-empty subsets of Θ and m is a mapping ℱ → [0, 1] such that ∑_{A ∈ ℱ} m(A) = 1.
ℱ is called the support of the random set and m is called a basic belief assignment. Such a random set (ℱ, m) is equivalent to a mass function in the sense of Shafer.
Definition 8
(Random relation [18,19]). Let Θ = Θ1 × Θ2 × … × Θn be a multi-dimensional space, where “×” denotes the Cartesian product. A finite support random relation is a random set (ℱ, m) on Θ.
The projections of a random relation on Θ1 × Θ2 × … × Θn are defined by Shafer to be the marginal random sets (ℱk, mk) (k = 1, 2, …, n):
\forall C_k \subseteq \Theta_k, \quad m_k(C_k) = \sum \{\, m(A) \mid C_k = \mathrm{Proj}_{\Theta_k}(A) \,\}
\mathrm{Proj}_{\Theta_k}(A) = \{\, u_k \in \Theta_k \mid \exists\, u = (u_1, \dots, u_k, \dots, u_n) \in A \,\}
For A ∈ ℱ, A = C1 × C2 × … × Cn, if m(A) = m1(C1) × m2(C2) × … × mn(Cn), then (ℱ, m) is called a decomposable Cartesian-product random relation, and the marginal random sets (ℱ1, m1), (ℱ2, m2), …, (ℱn, mn) are mutually independent.

2.3.2. Extension Principles

Let ξ = (ξ1, ξ2, …, ξn) be a variable on Θ = Θ1 × Θ2 × … × Θn, and let ζ = f(ξ), ζ ∈ Φ, where f: Θ → Φ is a function of ξ. The random set (ℛ, ρ) of ζ, which is the image of the random relation (ℱ, m) of ξ through f, is given by the extension principles [20,21,22]:
\mathcal{R} = \{\, R_i = f(A_i) \mid A_i \in \mathcal{F} \,\}
\rho(R_j) = \sum \{\, m(A_i) \mid R_j = f(A_i) \,\}
where:
f(A_i) = \{\, f(u) \mid u \in A_i \,\}, \quad i = 1, 2, \dots, M
M is the number of elements of ℱ. The summation in Equation (14) accounts for the fact that more than one focal element Ai may yield the same image Rj through f.
The key to constructing (ℛ, ρ) is to calculate the image of Ai through f. If ξ is a continuous variable on Θ, then Θ = ℝ^n and ℱ becomes a finite family of distinct non-empty sub-intervals (boxes) of Θ. In this case, the process of constructing (ℛ, ρ) is as follows:
For each ξk in ξ, let its marginal random set be (ℱk, mk) and let the focal elements of (ℱk, mk) be intervals [ak−, ak+]; then the focal elements of (ℱ, m) are given by:
A = [a_1^-, a_1^+] \times \dots \times [a_n^-, a_n^+]
The image of A can be calculated using methods of interval analysis [19,20,21]; if A is a convex set (a box), then A has 2^n vertices, denoted vj (j = 1, 2, …, 2^n). If the function f has certain properties, the Vertex Method can help reduce the calculation time considerably [22]:
Proposition 1.
∀A ∈ ℱ, if ζ = f(ξ) is continuous in A and no extreme point exists in this region (including its boundaries), then the value of the interval function can be obtained by:
f(A) = R = \left[ \min_j \{ f(v_j) : j = 1, 2, \dots, 2^n \},\ \max_j \{ f(v_j) : j = 1, 2, \dots, 2^n \} \right]
Thus, f has to be evaluated 2^n times for each focal element A. This computational burden can be further reduced if the hypotheses of the following Proposition 2 hold [21].
Proposition 2.
If f is continuous, if its partial derivatives are also continuous, and if f is strictly monotonic with respect to each ξk, k = 1, 2, …, n, then:
\exists\, v_{\min}, \quad f(v_{\min}) = \min_j \{ f(v_j) : j = 1, 2, \dots, 2^n \}
\exists\, v_{\max}, \quad f(v_{\max}) = \max_j \{ f(v_j) : j = 1, 2, \dots, 2^n \}
Here is a case in point. Let ξ = (ξ1, ξ2, ξ3) and A = [a1−, a1+] × [a2−, a2+] × [a3−, a3+]. Assume f and its partial derivatives are all continuous. If f is increasing with respect to ξ1 and ξ2 and decreasing with respect to ξ3, then f has to be evaluated only twice for each focal element A, namely f(A) = [f(vmin), f(vmax)] with vmin = (a1−, a2−, a3+) and vmax = (a1+, a2+, a3−). In total, 2M evaluations of f are needed to obtain the complete (ℛ, ρ).
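Proposition 1 can be sketched in a few lines (illustrative Python; this is the brute-force enumeration of the 2^n vertices, without the monotonicity shortcut of Proposition 2):

```python
from itertools import product

def vertex_image(f, box):
    """Image interval f(A) of a box A = [a1-, a1+] x ... x [an-, an+]
    via the Vertex Method: evaluate f at all 2^n vertices.

    Valid when f is continuous on A with no extreme point in the
    interior of the box (Proposition 1)."""
    values = [f(*v) for v in product(*box)]
    return (min(values), max(values))
```

For instance, for f(x, y) = x + 2y on the box [0, 1] × [2, 3], the four vertex values are 4, 6, 5, 7, so the image interval is (4, 7).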
Furthermore, the expectation of (ℛ, ρ) is given by [23]:
\mathbb{E}(\rho) = \sum_{j=1}^{N} \rho(R_j)\, \frac{r_j^- + r_j^+}{2}
where Rj = [rj−, rj+], j = 1, 2, …, N, and N is the number of focal elements Rj.
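This expectation amounts to a mass-weighted sum of interval midpoints; a one-function sketch (interval evidence represented as (interval, mass) pairs, an illustrative convention of our own):

```python
def expectation(evidence):
    """Expectation of an interval-valued random set (R, rho):
    sum of rho(Rj) times the midpoint of Rj = [rj-, rj+]."""
    return sum(m * (lo + hi) / 2.0 for (lo, hi), m in evidence)
```

For example, the evidence {[0, 2] with mass 0.5, [1, 3] with mass 0.5} has expectation 0.5 × 1 + 0.5 × 2 = 1.5.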

3. State Estimation Based on Dependent Evidence Fusion

3.1. Dynamic System Model under Bounded Noises

The dynamic system model constructed from the state and observation equations is as follows:
\begin{cases} x_{k+1} = f(x_k, v_k) \\ z_{k+1} = g(x_{k+1}, w_{k+1}) \end{cases} \quad k = 1, 2, 3, \dots
where the relationship between the state x_{k+1} at time k + 1 and the state x_k at time k is described by the function f, and the relationship between the observation z_{k+1} at time k + 1 and the state x_{k+1} at time k + 1 is described by the function g. v_k and w_k are bounded additive state and observation noise variables, respectively, which are independent of each other. These two noises can be approximated by triangular possibility distributions [7], denoted π_v and π_w, respectively (the noise distributions are identical at each time step), as shown in Figure 2.
\pi_v(v) = \begin{cases} \dfrac{v - v_a}{v_c - v_a}, & v_a \le v \le v_c \\[4pt] \dfrac{v_b - v}{v_b - v_c}, & v_c \le v \le v_b \\[4pt] 0, & \text{otherwise} \end{cases}
where [va, vb] is the support interval of the state noise and vc is its mode; similarly:
\pi_w(w) = \begin{cases} \dfrac{w - w_a}{w_c - w_a}, & w_a \le w \le w_c \\[4pt] \dfrac{w_b - w}{w_b - w_c}, & w_c \le w \le w_b \\[4pt] 0, & \text{otherwise} \end{cases}

3.2. Recursive Algorithm of State Estimation Based on Extension Principles and Dependent Evidence Fusion

Figure 3 shows the flow of the proposed recursive algorithm. The following steps will be introduced in detail.
Step 1: Construct noise evidence to approximate πv and πw. Initially, we construct evidence (ℱ_k^v, m_k^v) to approximate the possibility distribution πv of the state noise variable vk. For any α ∈ (0, 1], the α-cut set of πv is [9]:
[\pi_\alpha^{v-}, \pi_\alpha^{v+}] = \{\, v \mid \pi_v(v) \ge \alpha \,\}
If there exist α0, α1, …, αp−1 satisfying 0 = α0 < α1 < … < αp−1 < 1, then the corresponding α-cut sets are nested:
[\pi_{\alpha_0}^{v-}, \pi_{\alpha_0}^{v+}] \supseteq [\pi_{\alpha_1}^{v-}, \pi_{\alpha_1}^{v+}] \supseteq \dots \supseteq [\pi_{\alpha_{p-1}}^{v-}, \pi_{\alpha_{p-1}}^{v+}]
where p is a positive integer. Taking these p α-cut sets as focal elements with nested closed-interval forms, the corresponding BBAs are:
m([\pi_{\alpha_i}^{v-}, \pi_{\alpha_i}^{v+}]) = \begin{cases} \alpha_{i+1}, & i = 0 \\ \alpha_{i+1} - \alpha_i, & i = 1, 2, \dots, p-2 \\ 1 - \alpha_i, & i = p-1 \end{cases}
Figure 4 gives an example: when p = 3, m can be constructed by uniformly cutting α three times. Clearly, m corresponds to a possibility distribution π that approximates πv. Certainly, a better approximation of the continuous possibility distribution can be obtained by increasing the number p of cut sets, at the expense of higher computational complexity.
It is worth noticing that m is constructed on the condition that all values outside the support interval [va, vb] are completely impossible. However, in practice, the bounds va and vb are commonly given based on available measurement knowledge or real data, so they may be imprecise and values outside [va, vb] may appear. To account for the imprecision of the support interval, [7] constructs (ℱ_k^v, m_k^v) by discounting m with a small discount rate εv, in which m_k^v is defined as [9]:
m_k^v(A) = \begin{cases} (1 - \varepsilon_v)\, m(A), & A \in \Xi \\ (1 - \varepsilon_v)\, m(\Theta) + \varepsilon_v, & A = \Theta \end{cases}
where Ξ = {[π_{αi}^{v−}, π_{αi}^{v+}] | i = 0, 1, …, p−1} and Θ = ℝ; accordingly, ℱ_k^v = Ξ ∪ {Θ}. In the course of implementing the proposed algorithm, Θ can be replaced by a closed interval [va′, vb′] with va′ ≪ va and vb′ ≫ vb, so that the subsequent interval operations can be carried out easily.
In the same way, we can construct evidence (ℱ_k^w, m_k^w) using the possibility distribution πw of the observation noise variable wk.
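Step 1 can be sketched as follows (illustrative Python; the uniform choice αi = i/p and the parameter names are our own, and Θ is encoded as None):

```python
def noise_evidence(a, c, b, p=3, eps=0.02):
    """Approximate a triangular possibility distribution with support
    [a, b] and mode c by p nested alpha-cut intervals, then discount
    by a small rate eps, moving the removed mass to the frame Theta.

    Returns a list of (focal_element, mass) pairs; None stands for Theta.
    """
    alphas = [i / p for i in range(p)]   # 0 = a0 < a1 < ... < a_{p-1} < 1
    evidence = []
    for i, al in enumerate(alphas):
        # alpha-cut of the triangular distribution: shrink linearly
        # toward the mode c as alpha grows.
        cut = (a + al * (c - a), b - al * (b - c))
        mass = (alphas[i + 1] - al) if i < p - 1 else (1.0 - al)
        evidence.append((cut, (1.0 - eps) * mass))
    evidence.append((None, eps))         # discounted mass on Theta
    return evidence
```

With a = −1, c = 0, b = 1, p = 3 and eps = 0.02, the α0 = 0 cut is the full support [−1, 1], the three interval masses are each (1/3) × 0.98, and mass 0.02 sits on Θ.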
Step 2: Obtain the state prediction evidence E_{k+1|k}^x = (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) at time k + 1 from the state equation. Suppose the estimation result at time k is x̂_{k|k}; when k = 1, x̂_{1|1} is initialized with the real observation z1. Considering the influence of the noise on the state, we construct the state evidence (ℱ_k^x, m_k^x) of x̂_{k|k} by adding noise to x̂_{k|k}:
\mathcal{F}_k^x = \{\, [\pi_{\alpha_0}^{v-} + \hat{x}_{k|k},\ \pi_{\alpha_0}^{v+} + \hat{x}_{k|k}],\ [\pi_{\alpha_1}^{v-} + \hat{x}_{k|k},\ \pi_{\alpha_1}^{v+} + \hat{x}_{k|k}],\ \dots,\ [\pi_{\alpha_{p-1}}^{v-} + \hat{x}_{k|k},\ \pi_{\alpha_{p-1}}^{v+} + \hat{x}_{k|k}],\ \Theta \,\}; \quad m_k^x = m_k^v
Thus, taking (ℱ_k^v, m_k^v) and (ℱ_k^x, m_k^x) as the inputs of the state equation x_{k+1} = f(x_k, v_k), we can get the state prediction evidence E_{k+1|k}^x = (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) by mapping the inputs to the outputs based on the extension principles in Equations (13) and (14).
Step 3: Obtain the observation prediction evidence E_{k+1|k}^z = (ℱ_{k+1|k}^z, m_{k+1|k}^z) at time k + 1 from the observation equation.
Taking the state prediction evidence (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) from Step 2 as the input of the observation equation g(x_{k+1}), we can get E_{k+1|k}^z = (ℱ_{k+1|k}^z, m_{k+1|k}^z) based on the extension principles in Equations (13) and (14).
Step 4: Obtain the fusion evidence Ê_{k+1}^z = (ℱ̂_{k+1}^z, m̂_{k+1}^z) at time k + 1 in the observation domain.
Firstly, in Step 1, we constructed the evidence (ℱ_k^w, m_k^w) using the possibility distribution πw of wk:
\mathcal{F}_k^w = \{\, [\pi_{\alpha_0}^{w-}, \pi_{\alpha_0}^{w+}],\ [\pi_{\alpha_1}^{w-}, \pi_{\alpha_1}^{w+}],\ \dots,\ [\pi_{\alpha_{p-1}}^{w-}, \pi_{\alpha_{p-1}}^{w+}],\ \Theta \,\}
where m_k^w is assigned in the same way as m_k^v.
After obtaining the observation z_{k+1} at time k + 1, and considering the influence of the noise on the observation, we construct the evidence (ℱ_{k+1}^z, m_{k+1}^z) of z_{k+1} by adding noise to z_{k+1}:
\mathcal{F}_{k+1}^z = \{\, [\pi_{\alpha_0}^{w-} + z_{k+1},\ \pi_{\alpha_0}^{w+} + z_{k+1}],\ [\pi_{\alpha_1}^{w-} + z_{k+1},\ \pi_{\alpha_1}^{w+} + z_{k+1}],\ \dots,\ [\pi_{\alpha_{p-1}}^{w-} + z_{k+1},\ \pi_{\alpha_{p-1}}^{w+} + z_{k+1}],\ \Theta \,\}; \quad m_{k+1}^z = m_k^w
Secondly, using Dempster's combination rule, we fuse (ℱ_{k+1}^z, m_{k+1}^z) and (ℱ_{k+1|k}^z, m_{k+1|k}^z) to get the fusion evidence Ê_{k+1}^z = (ℱ̂_{k+1}^z, m̂_{k+1}^z) in the observation domain at time k + 1. As for the relationship between (ℱ_{k+1|k}^z, m_{k+1|k}^z) and (ℱ_{k+1}^z, m_{k+1}^z): the former is obtained by propagating x̂_{k|k} through the state equation f(x_k, v_k) and the observation equation g(x_{k+1}), while the latter is constructed by adding the noise π_w(w) to z_{k+1}. The former thus comes entirely from the state information x̂_{k|k} at the past time step k, which involves the state noise v_k (π_v(v)) but not the observation noise w_{k+1} (π_w(w)). Because w_{k+1} and v_k are independent of each other, the former and the latter are also believed to be independent of each other. Hence both of them can be directly fused using Dempster's combination rule.
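The fusion in Step 4 is Dempster's rule applied to evidence whose focal elements are closed intervals, with interval intersection as the set operation. A minimal sketch (intervals as (lo, hi) tuples; the list-of-pairs representation is our own illustrative choice):

```python
from itertools import product

def combine_interval_evidence(e1, e2):
    """Dempster's rule for interval-valued evidence: intersect every
    pair of focal intervals, treat empty intersections as conflict,
    and renormalize the remaining masses."""
    combined, conflict = {}, 0.0
    for ((lo1, hi1), w1), ((lo2, hi2), w2) in product(e1, e2):
        lo, hi = max(lo1, lo2), min(hi1, hi2)
        if lo <= hi:
            combined[(lo, hi)] = combined.get((lo, hi), 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting interval evidence")
    return [(iv, w / (1.0 - conflict)) for iv, w in combined.items()]
```

Note how the fusion concentrates mass on the narrow intervals shared by both bodies of evidence, which is exactly the mechanism the recursive algorithm relies on.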
Step 5: Get new evidence Ê_{k+1}^x = (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) at time k + 1 in the state domain.
Taking (ℱ̂_{k+1}^z, m̂_{k+1}^z) obtained in Step 4 as the input of the inverse function g^{-1}(z_{k+1}), we can get (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) by using the extension principles in Equations (13) and (14).
Step 6: Get the state estimation evidence (ℱ̂_{k+1|k+1}^x, m̂_{k+1|k+1}^x) and the state estimate x̂_{k+1|k+1} at time k + 1.
We fuse (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) obtained in Step 5 with (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) obtained in Step 2; that is to say, we utilize the former to revise the latter to get the state estimation evidence (ℱ̂_{k+1|k+1}^x, m̂_{k+1|k+1}^x). (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) is obtained by the inverse mapping of the fusion evidence (ℱ̂_{k+1}^z, m̂_{k+1}^z) in the observation domain, and (ℱ̂_{k+1}^z, m̂_{k+1}^z) is obtained by fusing the observation evidence (ℱ_{k+1}^z, m_{k+1}^z) and the observation prediction evidence (ℱ_{k+1|k}^z, m_{k+1|k}^z). As noted in Step 3, (ℱ_{k+1|k}^z, m_{k+1|k}^z) is related to (ℛ_{k+1|k}^x, ρ_{k+1|k}^x), so (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) and (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) are certainly mutually dependent. Therefore, the combination of dependent evidence must be used to fuse them. Because the focal elements of (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) and (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) are closed intervals on the real numbers, we here extend the combination of dependent evidence on a discrete frame of discernment, introduced in Section 2.2, to a continuous frame of discernment (see the corresponding proposition and example in Appendix A). (ℛ̂_{k+1}^x, ρ̂_{k+1}^x) and (ℛ_{k+1|k}^x, ρ_{k+1|k}^x) can then be fused using the extended combination of dependent evidence to get the state estimation evidence (ℱ̂_{k+1|k+1}^x, m̂_{k+1|k+1}^x) at time k + 1.
Finally, the Pignistic expectation of (ℱ̂_{k+1|k+1}^x, m̂_{k+1|k+1}^x) is calculated as the state estimation value x̂_{k+1|k+1} by Equation (20). Using the state estimate at time k + 1 in the next iteration, we can estimate the state at every time step.
In conclusion, as shown in Figure 3, the whole recursive algorithm is realized within the framework of DS evidence theory. The corresponding evidence in the state and observation domains is not only propagated and transformed by the extension principles, but also fused by Dempster's combination rule and the proposed combination rule for dependent evidence. In particular, the fusion procedure makes the masses concentrate on those interval focal elements that contain the system state, so as to obtain accurate estimation results, which is the main difference from Nassreddine's method under the framework of interval analysis. In the next section, our approach is applied to liquid level estimation using an industrial level apparatus to show its better performance compared with Nassreddine's method.

4. Application to Liquid Level Measurement

Level measurement methods based on sound reflection phenomena have been successfully applied in several areas of the process industry (chemical, waste water treatment, petroleum, etc.) because the level is the main monitored process variable used in industrial alarm systems. Ultrasonic measurement methods, with good directivity, convenient operation and so on, have become some of the most commonly used techniques [24]. Their measuring principle is to emit ultrasound toward the liquid surface and receive the echoes, then calculate the distance from the surface to the acoustic receiver by multiplying the sound velocity by half the round-trip time [25]. However, this method is susceptible to the quality of the instrument itself and to environmental noise, which deteriorates the measurement accuracy. Besides, if the ultrasound encounters foams, residues, deposits, etc., during the measurement, it is also prone to parasitic reflection, which changes the ultrasound propagation path and seriously affects the measurement accuracy [26].
On the contrary, low-frequency sound waves have longer wavelengths and readily produce diffraction, which can effectively overcome the problem of parasitic reflection due to foams, residues, deposits, etc. When a speaker emits sound waves sweeping uniformly from a frequency fL to a higher frequency fH toward the surface and a microphone receives the corresponding echoes, the generated standing-wave signals extracted on the oscilloscope can be used to calculate the height of the liquid level. Kumperščak and Završnik [25,26] used this idea to measure liquid levels. However, both directly used the observations to calculate the liquid level. In practice, if the measurements obtained using a speaker and a microphone are not precise enough, and the effect of environmental noise is inevitable, then the deviation of the final measurement results will be unacceptable, which is the most common shortcoming of present level measurement methods.
In our earlier work [27], we used the Evidential Reasoning (ER) rule to deal with liquid level estimation under bounded noises, but the ER-based method only provides an initial idea for state estimation under the framework of DS evidence theory and only gives precise estimated results when the level length is less than 1.6 m. In order to improve the evidence-fusion-based state estimation method, this paper introduces a new information source, Dempster's combination rule and evidence dependence concepts. We construct the state equation and observation equation based on the principle of level measurement using acoustic standing waves, and then use the proposed algorithm to estimate the frequencies of the standing waves, which can be translated into the liquid level height (0 m-10 m). Compared with the direct measurement method and Nassreddine's method, the estimation results verify that our algorithm has obvious advantages and improves the level estimation accuracy.

4.1. Acoustic Standing Wave Level Gauge

The structure of an acoustic standing wave level gauge is shown in Figure 5; it mainly consists of a waveguide (a tube), a speaker, a microphone, a thermometer and a controller. When sound waves in the frequency range [fL, fH] generated by a signal generator (audio card and speaker) propagate vertically to the surface and echoes appear, the superposition of both waves generates standing waves. Here, y1 denotes the sound wave generated by the speaker and y2 denotes the echo reflected by the surface:
y_1 = A \cos 2\pi \left( Pt - \frac{L}{\lambda} \right)
y_2 = A \cos 2\pi \left( Pt + \frac{L}{\lambda} \right)
The synthesis wave of y1 and y2 can be expressed as:
y = 2A \cos \left( 2\pi \frac{L}{\lambda} \right) \cos (2\pi P t)
where A is the amplitude of the sound wave, P is the frequency of the sound wave and L is the distance from the top of the tube to the surface of the liquid, as shown in Figure 5. From Equation (29), we know that when L and λ satisfy the following relation, the amplitude of the synthesis wave reaches its maximum:
L = \frac{n_k \lambda_k}{2}, \quad k = 1, 2, 3, \dots
In this case, this synthesis wave is defined as the standing wave and its wavelength is:
\lambda_k = \frac{c}{f_k} = \frac{331.4 + 0.6T}{f_k}
where λk is the wavelength of the kth standing wave, fk is the frequency of the kth standing wave (the kth resonance frequency) in [fL, fH], c is the sound velocity, and T is the temperature.
Substituting Equation (30) into Equation (31), we obtain:
L = \frac{n_k (331.4 + 0.6T)}{2 f_k}
where, nk is given as [28]:
n_k = \frac{f_k}{f_{k+1} - f_k}
and:
n_{k+1} = n_k + 1
Theoretically, in Equation (33), f_{k+1} − f_k = f_F, where f_F is the fundamental resonance frequency, and f_k = n_k f_F, n_k ∈ ℕ+ (the set of all positive integers) [26,28]. For example, if L = 9.6 m, T = 23.9 °C and n = 1, then the fundamental resonance frequency can be calculated by Equation (32):
f_F = \frac{n (331.4 + 0.6T)}{2L} \approx 18\ \mathrm{Hz}
If the frequency range [fL, fH] is [1000 Hz, 2500 Hz], then there are 82 resonance frequencies in this range, k = 1, 2, …, 82, with nk = 56, 57, …, 137. Consequently, f1 = 56 × 18 Hz, f2 = 57 × 18 Hz, …, f82 = 137 × 18 Hz.
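The arithmetic above can be checked with a few lines of Python (the function names are our own; the harmonic-enumeration helper is an illustrative sketch, not part of the paper's apparatus):

```python
import math

def fundamental_frequency(L, T):
    """Fundamental resonance frequency fF = c / (2L) of the air column,
    with sound velocity c = 331.4 + 0.6*T (T in degrees Celsius)."""
    return (331.4 + 0.6 * T) / (2.0 * L)

def resonance_frequencies(L, T, f_lo, f_hi):
    """All resonance frequencies n * fF that fall inside [f_lo, f_hi]."""
    f_f = fundamental_frequency(L, T)
    n_lo = math.ceil(f_lo / f_f)
    n_hi = math.floor(f_hi / f_f)
    return [n * f_f for n in range(n_lo, n_hi + 1)]
```

For L = 9.6 m and T = 23.9 °C this gives fF ≈ 18 Hz, and the first resonance inside [1000 Hz, 2500 Hz] is the 56th harmonic, matching the example above.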

4.2. System Model

Firstly, we consider the resonance frequency as the estimated state and construct the corresponding state equation. If we can continuously collect the resonance frequency fk+1, then we have the following equations:
L = \frac{n_{k+1} (331.4 + 0.6T)}{2 f_{k+1}}
From Equations (32) and (36), obviously, we can get:
\frac{n_{k+1} (331.4 + 0.6T)}{2 f_{k+1}} = \frac{n_k (331.4 + 0.6T)}{2 f_k}
f_{k+1} = \frac{n_{k+1}}{n_k} f_k
Consequentially, we can establish the recursive linear state equation and observation equation, respectively:
x_{k+1} = \frac{n_{k+1}}{n_k} x_k + v_k
z_{k+1} = x_{k+1} + w_{k+1}
where xk = fk, zk is the observation of fk, wk and vk are independent noise sequences coming from speaker and microphone, respectively, satisfying the conditions:
$$ v_k \in [v_a, v_b] $$
$$ w_k \in [w_a, w_b] $$
The intervals [va,vb] and [wa,wb] denote the boundaries of the state noise and observation noise, respectively. The state noise vk and observation noise wk can be expressed by possibility distributions πv and πw with the support intervals [va,vb] and [wa,wb], respectively.
It should be noted that, in theory, nk in (39) should be a positive integer. In practice, however, it can only be calculated from the observations zk and zk+1 according to Equation (33). Because of observation imprecision, the calculated value is generally not a positive integer, so it is rounded:
$$ n_k = \mathrm{round}\!\left( \frac{z_k}{z_{k+1} - z_k} \right) \tag{41} $$
where round(·) denotes rounding to the nearest integer.
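A minimal sketch of this rounding step (the function name and the example observations are illustrative, not taken from the experiment):

```python
def harmonic_index(z_k, z_k1):
    """Estimate n_k from two adjacent resonance observations, Eq. (41):
    adjacent resonances are one fundamental frequency apart, so the ratio
    z_k / (z_{k+1} - z_k) is close to an integer and is rounded to it."""
    return round(z_k / (z_k1 - z_k))

# Two noisy adjacent resonances roughly 18.2 Hz apart (illustrative values):
print(harmonic_index(1023.3, 1041.5))   # -> 56
```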

4.3. Liquid Level Estimation Tests

In order to construct the level gauge in Figure 5, we use a low-precision speaker and microphone to emit and receive cosine sound waves, respectively, an electronic thermometer to collect the temperature, and a PVC tube with a diameter of d = 75 mm to transmit the sound. The estimated level L is the distance from the liquid surface to the speaker platform. The controller transmits sine or cosine waves to drive the speaker, which emits the signals vertically toward the liquid surface. We use the software AUDIOSCSI (Brothers Studio, Shenzhen, China), based on an audio controller (82801HBM-ICH8M with sampling rate 44,100 Hz, Intel Corporation, Santa Clara, CA, USA), to generate the sound waves. The wave frequency sweeps linearly from fL = 1000 Hz to fH = 2400 Hz in 5 s. The microphone receives the synthesized waves and sends them to the controller, as shown in Figure 6 (L = 4.6 m). It can be seen that the frequencies of 39 adjacent standing waves are collected by the microphone in [1000 Hz, 2500 Hz]. Figure 7 shows the resonance frequencies fk (k = 1, 2, ⋯, 39) extracted from the spectrum of the synthesized waves by the fast detecting algorithm in [28]. In this experiment, the liquid level distance is L = 4.6 m, the ambient temperature is 26.5 °C, and the sound velocity is 347.3 m/s. The state equation and observation equation of the resonance frequency are given in Equations (39) and (40).
For the state noise vk, we use a high-precision oscilloscope (TPS2024, Tektronix, Shanghai, China) to receive the cosine sound waves emitted by the audio controller and speaker in the range [1000 Hz, 2500 Hz], and calculate the errors at 100 frequency points uniformly selected from 1000 Hz to 2500 Hz. As in [7], the bounds of vk are taken to be plus or minus three times the standard deviation of the errors. The possibility distribution πv of vk can then be constructed as in Figure 8, where the expectation of πv is 0 and the standard deviation σv is 0.1, so the support interval is [va, vb] = [−0.3, 0.3] and the mode is vc = 0. Setting α0 = 0, α1 = 1/3, α2 = 2/3, we obtain three nested closed intervals and their BBAs to approximate πv, as in Figure 8.
Furthermore, in Step 1 of the proposed algorithm in Section 3.2, discounting m at rate εv = 0.05 and approximating Θ as the closed interval [vc − 100σv, vc + 100σv] = [−10, 10], we construct the evidence (ℱ_k^v, m_k^v) of vk according to Equation (26), as shown in Table 1.
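The construction behind Table 1 can be replayed in a short sketch (assuming, as the text and Table 1 suggest, that each of the three α-cuts receives an equal share of the discounted mass; the function names are ours):

```python
def alpha_cut(mode, lo, hi, alpha):
    """Alpha-cut of a triangular possibility distribution with the given mode and support."""
    return (mode - (1 - alpha) * (mode - lo), mode + (1 - alpha) * (hi - mode))

def noise_evidence(mode, lo, hi, alphas, eps, frame):
    """Nested alpha-cut intervals with equal mass, discounted at rate eps;
    the discounted mass eps goes to the widened frame Theta."""
    cuts = [alpha_cut(mode, lo, hi, a) for a in alphas]
    mass = (1.0 - eps) / len(cuts)
    bba = {cut: mass for cut in cuts}
    bba[frame] = eps
    return bba

# State noise: mode v_c = 0, support [-0.3, 0.3], Theta approximated by [-10, 10].
# Reproduces Table 1: mass 0.3167 on each nested interval and 0.05 on [-10, 10].
bba_v = noise_evidence(0.0, -0.3, 0.3, (0.0, 1/3, 2/3), 0.05, (-10.0, 10.0))
for interval, m in bba_v.items():
    print(interval, round(m, 4))
```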
The observation noise wk is mainly related to the microphone and the fast detecting algorithm. Firstly, using the fast detecting algorithm, we extract observation values of the resonance frequencies in the range [1000 Hz, 2500 Hz] at 30 level values uniformly selected from L = 1.3 m to L = 10.6 m. Secondly, we calculate the errors between the theoretically correct values (true values) and the observation values. In the same way, the possibility distribution πw of wk and the corresponding closed intervals and their BBAs can be constructed as in Figure 9, where σw = 1.23 and wc = −6.9, so [wa, wb] = [−10.59, −3.21]. Furthermore, discounting m at rate εw = 0.05 and approximating Θ as the closed interval [wc − 100σw, wc + 100σw] = [−129.7, 115.7], we construct the evidence (ℱ_k^w, m_k^w) of wk shown in Table 2.
From Figure 7, the first observed resonance frequency is z1 = 1023.3 Hz. According to Step 2 in Section 3.2, the first estimate x̂1|1 is initialized as the real observation z1. After obtaining (ℱ_1^v, m_1^v) and (ℱ_1^w, m_1^w), the recursive algorithm presented in Section 3.2 can be used to estimate the resonance frequency at each step k. Figure 10a gives the estimation results of our method and Nassreddine's method, together with the true values and the observations zk. Figure 10b gives the absolute errors between the true values and, respectively, the estimates of our method, the estimates of Nassreddine's method, and zk. It can be seen that the estimation accuracy and convergence of our method are better than those of Nassreddine's method, owing to the focusing effect of the proposed fusion procedure for dependent evidence.
Finally, we calculate the estimated level L by Equation (32) from the resonance frequencies estimated by our method, by Nassreddine's method, and from zk, respectively, as shown in Figure 11a; Figure 11b gives the corresponding absolute values of the level estimation errors. Obviously, the more accurate the estimates of the resonance frequencies are, the more accurate the estimates of the level L are. As our method always provides more accurate frequency estimates, it is always superior.
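A sketch of this back-calculation via Equation (32), using the first observation z1 = 1023.3 Hz of this run and a harmonic index here taken as n1 = 27 (consistent with the roughly 37.8 Hz spacing of adjacent resonances at L = 4.6 m; the helper name is ours):

```python
def level_from_frequency(n_k, f_k, T):
    """Invert Eq. (32): L = n_k * (331.4 + 0.6 T) / (2 f_k)."""
    return n_k * (331.4 + 0.6 * T) / (2.0 * f_k)

# First observed resonance of the L = 4.6 m run (T = 26.5 C), harmonic index 27:
L_hat = level_from_frequency(27, 1023.3, 26.5)
print(round(L_hat, 3))   # -> 4.582; the residual w.r.t. the true 4.6 m reflects
                         # the observation noise that the evidence fusion removes
```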
More experiments were performed for different values of L to find the mean of the absolute values of the estimation errors and to show the efficacy of the proposed method, as shown in Table 3. For every value of L, Table 3 gives, from top to bottom, the results of our method, Nassreddine's method, and the direct measurement method (namely, substituting zk into (32)).
It should be noted that the computational complexity of our algorithm is relatively high. Moreover, as the measured level length increases, the synthesized wave contains more and more resonance frequency points, so the CPU time also increases (test hardware: CPU E8400, clock speed 3.00 GHz, RAM 2 GB). Nevertheless, for situations where the liquid level changes relatively slowly, our method is applicable, and the rapid development of computer hardware will make the complexity less of an issue.

5. Conclusions

Nassreddine's method is rooted in interval analysis; it introduces belief functions to describe bounded noises. Specifically, it refines the noise bounds into a triangular possibility distribution, extending the error interval to an evidence construction, namely a set of interval focal elements with corresponding belief assignments, which carries more information than a single interval. It then uses interval arithmetic and constraint-satisfaction techniques to propagate not only the interval focal elements but also their belief assignments, so its performance is slightly better than that of pure interval propagation. Our method, by contrast, is grounded in DS evidence theory and random set theory, introducing the Dempster combination rule and the notion of evidence dependence. Although we retain Nassreddine's evidence construction technique, the random set description of evidence and the extension principle of random sets are used to obtain state evidence and observation evidence from the defined information sources and to propagate them through the system equations. The main contribution is the fusion of the propagated evidence, in which the degree of dependence and the combination of dependent evidence are explicitly considered.
On the whole, compared with Nassreddine's method, our method increases the state estimation accuracy. The application to liquid level estimation using an industrial level apparatus shows the efficacy of the proposed method. It is worth noting that, in a given application, there are constraint conditions such as the continuity, monotonicity and invertibility of the state and observation equations, and state observability. When they cannot be satisfied, the computational burden will inevitably increase because of the additional complex interval operation algorithms or matrix operation algorithms (for multidimensional states) in [21,22]. Hence fast operation algorithms should be studied in the future. On the other hand, although the proposed evidence fusion procedure makes the masses focus on those interval focal elements that contain the system state, so as to obtain accurate estimation results, how to further evaluate the convergence of the fusion with available theories remains a problem worth studying, which will promote the use of evidence theory in state estimation.

Acknowledgements

This work was supported by the NSFC (No. 61433001, 61374123, 61573275, U1509203), and the Zhejiang Province Research Program Project of Commonweal Technology Application (No. 2016C31071).

Author Contributions

Xiaobin Xu and Zhenghui Li conceived and designed the experiments; Zhenghui Li performed the experiments; Guo Li and Zhenghui Li analyzed the data; Zhe Zhou contributed materials and analysis tools; Xiaobin Xu wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We first generalize the definition of evidence energy as shown in Proposition A1.
Proposition A1.
Suppose E = (x, m) is a body of evidence whose focal elements are closed intervals. The energy of the evidence E can be defined by:
$$ \mathrm{En}'(E) = \sum_{[x_i^-, x_i^+] \subset \Theta} m([x_i^-, x_i^+]) \, \frac{\min_i d([x_i^-, x_i^+])}{d([x_i^-, x_i^+])} \tag{A1} $$
where [xi−, xi+] denotes an interval focal element, d([xi−, xi+]) is the interval width, and n(E) is the number of interval focal elements. For example, if E = (x, m) with A1 = [−0.3, 2.6], m(A1) = 0.3 and A2 = [0.3, 1.9], m(A2) = 0.7, then mini d([xi−, xi+]) = 1.6 and:
$$ \mathrm{En}'(E) = 0.3 \times \frac{1.6}{2.9} + 0.7 \times \frac{1.6}{1.6} = 0.8655 \tag{A2} $$
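This computation can be checked with a small sketch (the function is ours; the frame Θ, if present, is excluded from the sum, in line with Condition (1) below):

```python
def evidence_energy(bba):
    """En'(E) per Eq. (A1): each interval focal element (lo, hi) contributes
    its mass scaled by min_width / width, so only the narrowest interval(s)
    contribute their full mass; the frame Theta must be excluded beforehand."""
    min_w = min(hi - lo for lo, hi in bba)
    return sum(m * min_w / (hi - lo) for (lo, hi), m in bba.items())

E = {(-0.3, 2.6): 0.3, (0.3, 1.9): 0.7}
print(round(evidence_energy(E), 4))   # -> 0.8655
```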
It can be proved that En′(E) satisfies the three conditions of evidence energy: (1) if m(Θ) = 1, then En′(E) = 0 and E carries no useful information; (2) if every focal element satisfies d([xi−, xi+]) = mini d([xi−, xi+]) and m(Θ) = 0, then En′(E) = 1 and E carries the maximum useful information; (3) En′(E) ∈ [0, 1].
Proof. 
For a mass function satisfying Σ[xi−, xi+]⊆Θ m([xi−, xi+]) = 1, and noting that mini d([xi−, xi+]) / d([xi−, xi+]) ≤ 1, there are four cases to discuss:
Case 1: If m(Θ) = 1 and the mass of every other focal element is zero, then En′(E) = 0; the evidence does not provide any information, so Condition (1) holds.
Case 2: If m(Θ) = 0 and every interval focal element [xi−, xi+] satisfies mini d([xi−, xi+]) / d([xi−, xi+]) = 1, then En′(E) = 1; the evidence provides the maximum useful information, so Condition (2) holds.
Case 3: If m(Θ) = 0 and not all interval focal elements [xi−, xi+] satisfy mini d([xi−, xi+]) / d([xi−, xi+]) = 1, then En′(E) ∈ (0, 1).
Case 4: If 0 < m(Θ) < 1, then En′(E) ∈ (0, 1).
Cases 1 to 4 together prove Condition (3). Let E1 = (x, mx) and E2 = (y, my) be two pieces of evidence with closed interval focal elements. If some focal elements of E1 and E2 are induced by the same information sources, then the energy of the intersection of the two pieces of evidence can be described by:
$$ \mathrm{En}(E_1, E_2) = \sum_{D_{ij} \in \{D_{ij}\}} m(D_{ij}) \, \frac{\min\left\{\min_i d([x_i^-, x_i^+]),\ \min_j d([y_j^-, y_j^+])\right\}}{d(D_{ij})} \tag{A3} $$
where Dij is a focal element induced by the same sources, {Dij} is the set of these focal elements, and |{Dij}| is their number. Consequently, the degree of dependence between E1 and E2 can be obtained by Definition 5:
$$ D(E_1, E_2) = \frac{2\,\mathrm{En}(E_1, E_2)}{\mathrm{En}(E_1) + \mathrm{En}(E_2)} \tag{A4} $$
and the dependency coefficient between E1 and E2 can be redefined as:
$$ R_{12} = \frac{1}{2}\, D(E_1, E_2)\, \frac{\mathrm{En}(E_2)}{\mathrm{En}(E_1)} \tag{A5} $$
$$ R_{21} = \frac{1}{2}\, D(E_1, E_2)\, \frac{\mathrm{En}(E_1)}{\mathrm{En}(E_2)} \tag{A6} $$
E1 and E2 can be discounted by R12 and R21, respectively, to obtain two corresponding pieces of independent evidence E1′ and E2′, whose BBA functions are given by:
$$ m_1'([x_i^-, x_i^+]) = \begin{cases} m_1([x_i^-, x_i^+])(1 - R_{12}), & [x_i^-, x_i^+] \subset \Theta \\ 1 - \sum_{[x_i^-, x_i^+] \subset \Theta} m_1'([x_i^-, x_i^+]), & [x_i^-, x_i^+] = \Theta \end{cases} \tag{A7} $$
$$ m_2'([y_j^-, y_j^+]) = \begin{cases} m_2([y_j^-, y_j^+])(1 - R_{21}), & [y_j^-, y_j^+] \subset \Theta \\ 1 - \sum_{[y_j^-, y_j^+] \subset \Theta} m_2'([y_j^-, y_j^+]), & [y_j^-, y_j^+] = \Theta \end{cases} \tag{A8} $$
After obtaining the BBAs of E1′ and E2′, we can use the Dempster combination rule to fuse E1′ and E2′, and thereby fuse the dependent evidence E1 and E2 indirectly. ☐
Although Wu, Yang and Liu [17] gave the definition of the energy of the intersection as in Equation (3), the notion of intersection there is obscure: Dij and m(Dij) are rarely clearly defined. In our method, by contrast, the two pieces of dependent evidence are \((\hat{\mathcal{F}}_{k+1}^x, \hat{\rho}_{k+1}^x)\) and \((\mathcal{F}_{k+1|k}^x, \rho_{k+1|k}^x)\); according to the interval operations (extension principles and combination rule) used to generate them, \(\{D_{ij}\} = \hat{\mathcal{F}}_{k+1}^x \cap \mathcal{F}_{k+1|k}^x\) and \(m(D_{ij}) = \hat{\rho}_{k+1}^x(D_{ij}) = \rho_{k+1|k}^x(D_{ij})\).
For example, suppose the focal elements of the new fusion evidence \((\hat{\mathcal{F}}_{k+1}^x, \hat{\rho}_{k+1}^x)\) are A1 = [−0.30, 2.60], m(A1) = 0.3; A2 = [0.30, 1.90], m(A2) = 0.6; A3 = [0.32, 1.93], m(A3) = 0.1, and the focal elements of the state estimation evidence \((\mathcal{F}_{k+1|k}^x, \rho_{k+1|k}^x)\) are A1 = [0.21, 3.50], m(A1) = 0.1; A2 = [0.41, 1.61], m(A2) = 0.3; A3 = [0.30, 1.90], m(A3) = 0.6. Then Dij = [0.30, 1.90] and m(Dij) = 0.6. From Equation (A1), we obtain \(\mathrm{En}(\hat{\mathcal{F}}_{k+1}^x, \hat{\rho}_{k+1}^x) = 0.8655\) and \(\mathrm{En}(\mathcal{F}_{k+1|k}^x, \rho_{k+1|k}^x) = 0.7865\), and the energy of the intersection of the two pieces of evidence can be calculated by Equation (A3):
$$ \mathrm{En}\big((\hat{\mathcal{F}}_{k+1}^x, \hat{\rho}_{k+1}^x), (\mathcal{F}_{k+1|k}^x, \rho_{k+1|k}^x)\big) = m([0.3, 1.9])\, \frac{\min\{1.6, 1.2\}}{d([0.3, 1.9])} = 0.6 \times \frac{1.2}{1.6} = 0.45 $$
By Definition 5 and Equation (A4), the degree of dependence between the new fusion evidence and the state estimation evidence is:
$$ D\big((\hat{\mathcal{F}}_{k+1}^x, \hat{\rho}_{k+1}^x), (\mathcal{F}_{k+1|k}^x, \rho_{k+1|k}^x)\big) = \frac{2 \times 0.45}{0.8655 + 0.7865} = 0.5448 $$
The new fusion evidence is denoted as E1, and the state estimation evidence as E2. From Equations (A5) and (A6), the dependency coefficients between E1 and E2 are:
$$ R_{12} = \frac{1}{2} \times 0.5448 \times \frac{0.7865}{0.8655} = 0.2475, \qquad R_{21} = \frac{1}{2} \times 0.5448 \times \frac{0.8655}{0.7865} = 0.2997 $$
From Equations (A7) and (A8), we can calculate the BBAs of the corresponding E1′ and E2′ as m1′(A1) = 0.4733, m1′(A2) = 0.4515, m1′(A3) = 0.0752 and m2′(A1) = 0.3697, m2′(A2) = 0.2101, m2′(A3) = 0.4202. Finally, using the Dempster combination rule to fuse E1′ and E2′, we obtain the state estimation evidence \((\hat{\mathcal{F}}_{k+1|k+1}^x, \hat{m}_{k+1|k+1}^x)\): A1 = [0.21, 2.60], m(A1) = 0.1749; A2 = [0.41, 1.61], m(A2) = 0.0994; A3 = [0.30, 1.90], m(A3) = 0.1988; A4 = [0.30, 1.90], m(A4) = 0.1669; A5 = [0.41, 1.61], m(A5) = 0.0948; A6 = [0.30, 1.90], m(A6) = 0.1890; A7 = [0.32, 1.93], m(A7) = 0.0278; A8 = [0.41, 1.61], m(A8) = 0.0157; A9 = [0.32, 1.90], m(A9) = 0.0315.
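For reference, the dependence bookkeeping of Equations (A4)–(A6) can be replayed on the appendix numbers (plain helper functions, ours; the inputs are the energies 0.8655 and 0.7865 and the intersection energy 0.45 stated above):

```python
def dependence_degree(en12, en1, en2):
    """Eq. (A4): D(E1, E2) = 2 En(E1, E2) / (En(E1) + En(E2))."""
    return 2.0 * en12 / (en1 + en2)

def dependency_coeffs(en12, en1, en2):
    """Eqs. (A5)-(A6): discount rates for E1 and E2."""
    d = dependence_degree(en12, en1, en2)
    return 0.5 * d * en2 / en1, 0.5 * d * en1 / en2

en1, en2, en12 = 0.8655, 0.7865, 0.45
d = dependence_degree(en12, en1, en2)
r12, r21 = dependency_coeffs(en12, en1, en2)
print(d, r12, r21)   # D ~ 0.5448, R12 ~ 0.2475, R21 ~ 0.2997 (to within rounding)

# Discounting E1 at rate R12 per Eq. (A7) reproduces the masses
# m1'(A2) = 0.4515 and m1'(A3) = 0.0752 reported above:
m1 = {"A2": 0.6, "A3": 0.1}
print({A: round(m * (1 - r12), 4) for A, m in m1.items()})
```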

References

1. Song, Y.; Nuske, S.; Scherer, S. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors. Sensors 2017, 17, 11.
2. Gao, S.; Liu, Y.; Wang, J. The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation. Sensors 2016, 16, 1103.
3. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
4. Kalman, R.E.; Bucy, R.S. New Results in Linear Filtering and Prediction Theory. J. Basic Eng. 1961, 83, 95–108.
5. Sunahara, Y.; Yamashita, K. An approximate method of state estimation for non-linear dynamical systems with state-dependent noise. Int. J. Control 1970, 11, 957–972.
6. Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. Proc. Inst. Elect. Eng. F 1993, 140, 107–113.
7. Nassreddine, G.; Abdallah, F.; Denoux, T. State estimation using interval analysis and belief-function theory: Application to dynamic vehicle localization. IEEE Trans. Syst. Man Cybern. B Cybern. 2010, 40, 1205–1218.
8. Brynjarsdóttir, J.; O'Hagan, A. Learning about physical parameters: The importance of model discrepancy. Inverse Probl. 2014, 30, 114007.
9. Bertsekas, D.P.; Rhodes, I.B. Recursive state estimation for a set-membership description of uncertainty. IEEE Trans. Autom. Control 1971, 16, 117–128.
10. Maksarov, D.; Norton, J.P. State bounding with ellipsoidal set description of the uncertainty. Int. J. Control 1996, 65, 847–866.
11. Servi, L.; Ho, Y. Recursive estimation in the presence of uniformly distributed measurement noise. IEEE Trans. Autom. Control 1981, 26, 563–565.
12. Gning, A.; Bonnifait, P. Constraints propagation techniques on intervals for a guaranteed localization using redundant data. Automatica 2006, 42, 1167–1175.
13. Khemane, F.; Abbas-Turki, M.; Durieu, C.; Raynaud, H.F.; Conan, J.M. Bounded-error state and parameter estimation of tip-tilt disturbances in adaptive optics systems. Int. J. Adapt. Control Signal Process. 2014, 28, 1081–1093.
14. Jaulin, L. Range-only SLAM with occupancy maps: A set-membership approach. IEEE Trans. Robot. 2011, 27, 1004–1010.
15. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 1967, 38, 325–339.
16. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
17. Wu, Y.; Yang, J.; Liu, K.; Liu, L. On the evidence inference theory. Inf. Sci. 1996, 89, 245–260.
18. Dubois, D.; Prade, H. Random sets and fuzzy interval analysis. Fuzzy Sets Syst. 1991, 42, 87–101.
19. Xu, X.; Zhou, D.; Ji, Y.; Wen, C. Approximating probability distribution of circuit performance function for parametric yield estimation using transferable belief model. Sci. China Inf. Sci. 2013, 56, 50–69.
20. Goodman, I.R.; Mahler, R.P.S.; Nguyen, H.T. Mathematics of Data Fusion; Springer: Houten, The Netherlands, 1997.
21. Jaulin, L.; Kieffer, M.; Didrit, O.; Walter, E. Applied Interval Analysis; Springer: London, UK, 2001.
22. Dong, W.; Shah, H.C. Vertex method for computing functions of fuzzy variables. Fuzzy Sets Syst. 1987, 42, 65–78.
23. Smets, P. Belief functions on real numbers. Int. J. Approx. Reason. 2005, 40, 181–223.
24. Kazys, R.; Sliteris, R.; Rekuviene, R. Ultrasonic technique for density measurement of liquids in extreme conditions. Sensors 2015, 15, 19393–19415.
25. Donlagić, D.; Kumperščak, V.; Završnik, M. Low-frequency acoustic resonance level gauge. Sens. Actuators A Phys. 1996, 57, 209–215.
26. Donlagić, D.; Zavrsnik, M.; Sirotic, I. The use of one-dimensional acoustical gas resonator for fluid level measurements. IEEE Trans. Instrum. Meas. 2000, 49, 1095–1100.
27. Xu, X.B.; Zhang, Z.; Zheng, J. State estimation method based on evidential reasoning rule. In Proceedings of the 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 19–20 December 2015.
28. Xu, X.B.; Zhao, C.P.; Xia, B.D.; Wang, Z.; Wen, C.L. A fluid level measurement method based on acoustical resonance in a special frequency range. Acta Metrol. Sin. 2011, 32, 53–57.
Figure 1. The relationships of two pieces of dependent evidence.
Figure 2. (a) Triangle possibility distributions of state noises; (b) triangle possibility distributions of observation noises.
Figure 3. Flowchart of the state estimation iterative algorithm.
Figure 4. The possibility distribution of state noise and its evidence construction.
Figure 5. Structure of a level gauge.
Figure 6. Waveform graph (L = 4.6 m).
Figure 7. Resonance frequencies and amplitudes (L = 4.6 m).
Figure 8. Possibility distribution πv of the state noise v.
Figure 9. Possibility distribution πw of the observation noise w.
Figure 10. (a) Estimation results of resonance frequencies; (b) absolute values of frequency estimation errors.
Figure 11. (a) Estimation results of level L; (b) absolute values of length estimation errors.
Table 1. Evidence of state noise.

ℱ_k^v:  [−0.1, 0.1]   [−0.2, 0.2]   [−0.3, 0.3]   [−10, 10]
m_k^v:  0.3167        0.3167        0.3167        0.05

Table 2. Evidence of observation noise.

ℱ_k^w:  [−8.13, −5.67]   [−9.36, −4.44]   [−10.59, −3.21]   [−129.7, 115.7]
m_k^w:  0.3167           0.3167           0.3167            0.05
Table 3. Experimental results for different values of L (for each value of L, the three rows give, in order, our method, Nassreddine's method, and the direct measurement method).

No  True L (m)  T (°C)  Method        Runtime (s)  Mean Error (m)
1   1.3         27      Ours          1.81         0.0126
                        Nassreddine   0.88         0.016
                        Direct        -            0.238
2   2.1         26.5    Ours          2.49         0.0254
                        Nassreddine   1.33         0.0364
                        Direct        -            0.0441
3   2.6         26.5    Ours          7.81         0.0144
                        Nassreddine   2.05         0.0297
                        Direct        -            0.0591
4   3.6         26.5    Ours          9.62         0.0141
                        Nassreddine   2.34         0.0312
                        Direct        -            0.0468
5   4.6         26.5    Ours          16.07        0.0160
                        Nassreddine   3.81         0.0337
                        Direct        -            0.0552
6   5.6         26.5    Ours          20.11        0.018
                        Nassreddine   4.81         0.0374
                        Direct        -            0.0661
7   6.6         26.5    Ours          23.57        0.0238
                        Nassreddine   5.61         0.0436
                        Direct        -            0.088
8   7.6         26.5    Ours          27.94        0.0299
                        Nassreddine   6.77         0.0530
                        Direct        -            0.1060
9   8.6         23.9    Ours          31.23        0.0216
                        Nassreddine   7.41         0.0456
                        Direct        -            0.1295
10  9.6         23.9    Ours          35.15        0.0435
                        Nassreddine   8.36         0.0732
                        Direct        -            0.1624
