Article

Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

1
School of Computer and Information Science, Southwest University, Chongqing 400715, China
2
School of Electronic and Information, Northwestern Polytechnical University, Xi’an 710072, China
3
Big Data Decision Institute, Jinan University, Tianhe, Guangzhou 510632, China
4
Department of Civil & Environmental Engineering, School of Engineering, Vanderbilt University, Nashville, TN 37235, USA
*
Author to whom correspondence should be addressed.
Sensors 2016, 16(1), 113; https://doi.org/10.3390/s16010113
Submission received: 29 November 2015 / Revised: 3 January 2016 / Accepted: 11 January 2016 / Published: 18 January 2016
(This article belongs to the Section Physical Sensors)

Abstract

Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence highly conflicts, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.

1. Introduction

With the development of sensor data fusion technology, it plays an increasingly important role in fault diagnosis. On account of the complexity of the target and the background, the data detected by a single sensor are insufficient and unreliable for making a decision. In addition, due to the impact of the surroundings, the information derived from the sensors may contain errors, which leads to incorrect results in the fault diagnosis system. A multi-sensor system can partially overcome these limitations and shortages by combining a group of sensors to detect information and making a decision by considering all of the information obtained from the detection system [1,2], which effectively improves the reliability and accuracy of the fault diagnosis system [3,4].
In practical applications, the information collected from sensors is imprecise and uncertain. How to deal with uncertain information effectively to make a reasonable decision or optimization has received great attention [5,6]. To address this issue, several theories focused on uncertainty modeling and data fusion have been introduced, such as evidence theory [7,8], fuzzy set theory [9,10,11,12], Bayesian networks [13] and D-numbers [14]. Dempster–Shafer evidence theory (D-S evidence theory) is an imprecise reasoning theory, first proposed by Dempster [15] and then developed by Shafer [16]. As a generalization of the Bayesian method, D-S evidence theory can deal with uncertain information without prior probability. When the uncertain information is represented by probability, D-S evidence theory degenerates to probability theory. D-S evidence theory is useful in uncertainty modeling [17] and data fusion [18,19,20], which contributes to its wide application in the fields of uncertain information processing [21,22,23] and decision making [24,25,26]. Cai et al. introduced the Bayesian network and proposed to establish two layers, a fault layer and a fault symptom layer, to develop a fault diagnosis model and to perform data fusion [27]. It should be pointed out that D-numbers can model and fuse more uncertain information, which makes them an efficient mathematical tool for handling data uncertainty [14,28,29].
However, there may exist conflict among the data collected from different sensors. In addition, the errors contained in the data can also lead to conflict [30]. Dempster's combination rule may come to a counterintuitive conclusion when faced with highly conflicting evidence [31]. How to handle such conflict is an unavoidable question in fault diagnosis. There are two classes of solutions to this issue. The first is to improve the combination rule, while the other is to modify the data model [32]. Yager improved the combination rule by distributing the conflict factor to the universal set, which represents knowing nothing [33]. Smets introduced a conjunctive combination rule [34,35]. Dubois and Prade put forward a disjunctive combination rule [36,37]. Some typical works that modify the data model are briefly introduced as follows. Murphy favors modifying the evidence instead of the combination rule; she proposed to average the belief functions first and perform the data fusion next [38]. Deng et al. introduced a weighted averaging method [39], which is more reasonable compared to Murphy's simple averaging [40,41]. Zhang et al. introduced the vector space to deal with the issue [42].
Fan and Zuo introduced a fuzzy membership function and an importance index to improve D-S evidence theory [43]. Three factors are taken into consideration: evidence sufficiency, evidence importance and the conflict degree of evidence. Though this method improves the accuracy of fault diagnosis, it still has some problems. First, it introduces a process of judging the conflict degree of evidence, according to which a different combination rule is adopted, which makes it much more complex to make a decision. Besides, Fan and Zuo's method only considers evidence sufficiency and evidence importance, which can be regarded as the static properties of sensor reliability, and ignores the dynamic property of sensor reliability reflected in the real-time detection process.
It is obvious that sensor reliability plays a significant role in decision making and fault diagnosis [44]. Sensor reliability can quantify sensor performance and reflect the reasonability of sensor data. In general, sensor performance is measured in long-term practice. This kind of reliability is called static reliability, which mainly depends on technical factors of the sensor itself. However, in a dynamic situation, with the surrounding conditions changing over time, the sensors may exhibit different reliability at different times. It is difficult to measure such changeable reliability with one parameter in practical applications. Therefore, dynamic reliability is adopted to reflect the variation of sensor reliability over time. Note that the reliability of a dynamic system is different from dynamic reliability, since the reliability of a dynamic system is composed of both static reliability and dynamic reliability. It can be considered that the reliability of a dynamic system is viewed from the macro perspective, while the dynamic reliability is viewed from the micro perspective. Additionally, the dynamic reliability approximates the real-time reliability. Cai et al. proposed to evaluate such dynamic reliability on the basis of dynamic Bayesian networks [45,46,47]. Rogova and Nimier have made a complete survey of evaluating sensor reliability [48] in information fusion, which can be summed up as three levels: sensor level, data level and symbol level. The first level is inherent in a sensor, while the second and the third levels are application oriented [44]. Based on all of the above, this paper proposes a new method to model the reliability at two levels: the first level is static reliability, and the second level is dynamic reliability. The static reliability mainly depends on technical factors, such as manufacturing craft and noise due to different materials. It can be measured by comparing the detected value with the actual value in long-term practice and by experts' assessment. The dynamic reliability is influenced by the properties of the target and the surroundings. It can be evaluated by comparing the consistency of a sensor's outputs with those of other sensors for the same input. If one sensor's outputs are in great consensus with the others', it is considered to have great reliability. The new method distributes different weights to different sensor data according to the sensor reliability and adopts the weighted averaging method to combine different evidence. The new method, considering both the static reliability and the dynamic reliability of a sensor, is more reasonable for coping with conflicting evidence effectively.
The proposed method has the following advantages. First, it is a generalized version of our previous work [44]. Compared to the existing method, the dynamic property of sensor reliability is determined not only by the evidence distance function, but also by the information volume of the sensor itself. This is more reasonable, since the information volume is an important parameter of the sensor report and should be taken into consideration in sensor data fusion. Second, the proposed method improves the accuracy of fault diagnosis, since it is efficient at conflict management. It is useful in practical engineering, since the methodology of this paper can be easily extended to other multi-sensor systems.
The paper is organized as follows. Section 2 introduces the preliminaries of D-S evidence theory [15,16] and Deng entropy; Fan and Zuo's method is also briefly described in Section 2. Section 3 presents the new method for modeling sensor reliability. A numerical example is illustrated in Section 4 to show the efficiency of the new method. Finally, this paper is concluded in Section 5.

2. Preliminaries

In this section, some preliminaries are briefly introduced below.

2.1. Dempster–Shafer Evidence Theory

Dempster–Shafer evidence theory (D-S evidence theory) is also called belief function theory [15,16].
Let Θ be a set of n mutually-exclusive and collectively-exhaustive events, which is called the frame of discernment. The elements in Θ represent all of the possible faults in the fault domain of the object. Θ, also known as the sample space, is defined as Θ = {θ_1, θ_2, …, θ_n}. The power set of Θ is denoted by 2^Θ, whose elements are called hypotheses or propositions. On the basis of the above two concepts, the mass function can be defined. A mass function, also called a basic belief assignment (BBA), is a mapping m from 2^Θ to [0, 1], which is given below:
m: 2^{\Theta} \to [0, 1] \quad (1)
Satisfying:
m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1 \quad (2)
The value m(A) represents the belief degree assigned to hypothesis A. Note that m(\emptyset) = 0 means that no belief degree is assigned to the empty set, which is required in the closed world; in the open world [30], this criterion is not required, and m(\emptyset) can be bigger than zero. All subsets A of Θ satisfying m(A) > 0 are called focal elements.
Dempster’s combination rule, also called the orthogonal sum, is defined as follows:
m(C) = \begin{cases} 0, & C = \emptyset \\ \dfrac{\sum_{X \cap Y = C} m_1(X) \, m_2(Y)}{1 - K}, & C \neq \emptyset, \; X, Y \subseteq \Theta \end{cases} \quad (3)
K is called the conflict factor between m_1 and m_2, which is defined below:
K = \sum_{X \cap Y = \emptyset, \; X, Y \subseteq \Theta} m_1(X) \, m_2(Y) \quad (4)
When there are more than two pieces of evidence, these can be combined in the following form:
m = m_1 \oplus m_2 \oplus \cdots \oplus m_n = ((m_1 \oplus m_2) \oplus \cdots) \oplus m_n \quad (5)
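As a sanity check, Dempster's rule for two BBAs can be sketched in a few lines of Python. The representation (BBAs as dicts mapping frozensets of hypotheses to masses) and the function name are illustrative choices, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs by Dempster's rule; BBAs map frozensets to masses."""
    # Conflict factor K: total mass assigned to empty intersections (Equation (4)).
    K = sum(v1 * v2 for (X, v1), (Y, v2) in product(m1.items(), m2.items())
            if not (X & Y))
    combined = {}
    for (X, v1), (Y, v2) in product(m1.items(), m2.items()):
        C = X & Y
        if C:  # distribute the remaining mass, normalized by 1 - K (Equation (3))
            combined[C] = combined.get(C, 0.0) + v1 * v2 / (1 - K)
    return combined
```

For more than two pieces of evidence, the function can simply be folded over the list, matching the chained combination m_1 ⊕ m_2 ⊕ … ⊕ m_n above.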

2.2. Weighted Average Combination Method [39]

In Dempster's combination rule [15], K is adopted to measure the dissimilarity degree between BBAs. However, it does not respect the metric axioms under the conditions of identity and the triangle inequality [44]. Here is a numerical example to illustrate the case.
Example 1: Assume there are two pieces of evidence, m 1 and m 2 , whose BBAs are given below:
m_1(\{\omega_1\}) = m_1(\{\omega_2\}) = m_1(\{\omega_3\}) = m_1(\{\omega_4\}) = m_1(\{\omega_5\}) = 0.2,
m_2(\{\omega_1\}) = m_2(\{\omega_2\}) = m_2(\{\omega_3\}) = m_2(\{\omega_4\}) = m_2(\{\omega_5\}) = 0.2.
Use Equation (4) directly, and the conflict factor between two pieces of evidence is:
K = 0.2 \times (0.2 + 0.2 + 0.2 + 0.2) \times 5 = 0.8
It is obvious that the two pieces of evidence are completely the same. However, the conflict factor is not equal to zero, which is not reasonable. In order to address this issue, Liu proposed a novel approach to measure the degree of conflict, which combines the conflict factor and betting commitments [49]. Though it is useful in conflict measurement, it is too complex to calculate. Jousselme et al. introduced a distance to measure the dissimilarity between two pieces of evidence [50]. The evidence is expressed in the form of a vector space. The distance between two pieces of evidence m_1(·) and m_2(·) is denoted d_{BOE}(m_1, m_2), which is defined as:
d_{BOE}(m_1, m_2) = \sqrt{\tfrac{1}{2} (\vec{m}_1 - \vec{m}_2)^{T} \underline{\underline{D}} (\vec{m}_1 - \vec{m}_2)} \quad (6)
where \vec{m}_1 and \vec{m}_2 are the vector forms of the two pieces of evidence, respectively, and \underline{\underline{D}} is a 2^Θ × 2^Θ matrix whose elements are defined as:
\underline{\underline{D}}(s_1, s_2) = \dfrac{|s_1 \cap s_2|}{|s_1 \cup s_2|}, \qquad s_1, s_2 \in 2^{\Theta} \quad (7)
When there are multiple pieces of evidence, the distances of every two pieces of evidence can be expressed in the form of a distance matrix D M , which is given below:
DM = \begin{bmatrix} 0 & d_{12} & \cdots & d_{1m} \\ d_{21} & 0 & \cdots & d_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ d_{m1} & d_{m2} & \cdots & 0 \end{bmatrix} \quad (8)
Since the distance measures the dissimilarity of evidence, the greater the distance between two pieces of evidence is, the less the two pieces of evidence support each other and the greater the conflict between them is. Thus, the similarity measure Sim(m_i, m_j) can be defined as:
Sim(m_i, m_j) = 1 - d_{BOE}(m_i, m_j) \quad (9)
Additionally, the similarity measure matrix (SMM) is shown as:
SMM = \begin{bmatrix} 1 & S_{12} & \cdots & S_{1m} \\ S_{21} & 1 & \cdots & S_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ S_{m1} & S_{m2} & \cdots & 1 \end{bmatrix} \quad (10)
The support degree of each piece of evidence is given as:
Sup(m_i) = \sum_{j=1, j \neq i}^{m} Sim(m_i, m_j) \quad (11)
After normalization, the credibility degree C r d i of evidence i is given below:
Crd_i = \dfrac{Sup(m_i)}{\max_i Sup(m_i)}, \qquad i = 1, 2, \ldots, k \quad (12)
The bigger Crd_i is, the more the evidence is supported by the others, the more reliable it is and the more important the role it will play in the final fusion result.
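The distance, similarity, support, and credibility chain above can be sketched as follows. The representation (BBAs as dicts mapping frozensets to masses) and the helper names are illustrative:

```python
import math

def jousselme_distance(m1, m2):
    """Evidence distance of Jousselme et al. between two BBAs."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    d = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    # Quadratic form d^T D d with D(s1, s2) = |s1 ∩ s2| / |s1 ∪ s2|.
    quad = sum(d[i] * d[j] * len(focals[i] & focals[j]) / len(focals[i] | focals[j])
               for i in range(len(focals)) for j in range(len(focals)))
    return math.sqrt(0.5 * quad)

def credibility(bbas):
    """Credibility degree of each BBA, Equation (12): support normalized by its maximum."""
    n = len(bbas)
    sup = [sum(1 - jousselme_distance(bbas[i], bbas[j])
               for j in range(n) if j != i) for i in range(n)]
    return [s / max(sup) for s in sup]
```

Applied to the three sensor reports of the fault diagnosis example in Section 4, this reproduces the credibility degrees reported there.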
There is no denying that D-S evidence theory [15] is effective in uncertainty modeling and data fusion. However, it may reach a counterintuitive conclusion when dealing with highly conflicting evidence. Zadeh has proposed such a numerical example [31]:
Example 4: Assume there are two pieces of evidence m 1 and m 2 . The BBAs supported by such evidence are:
m_1(\{F_1\}) = 0.9, \quad m_1(\{F_3\}) = 0.1,
m_2(\{F_2\}) = 0.9, \quad m_2(\{F_3\}) = 0.1.
Use Equation (3), and the BBA of hypothesis F 3 is calculated as:
m(\{F_3\}) = \dfrac{0.1 \times 0.1}{1 - 0.9 \times 0.1 - 0.1 \times 0.9 - 0.9 \times 0.9} = 1
The fusion result distributes total belief to F_3, while the two initial pieces of evidence do not support F_3 well. Obviously, the final result deviates from reality, which may lead to the wrong decision. To handle this issue, Murphy introduced a simple averaging method to modify the BBAs [38]. Deng et al. proposed to apply the weighted averaging method, which is more reasonable [39]. In the weighted averaging method, different pieces of evidence play differently important roles in the final combination result according to their weights. If a piece of evidence has a big weight, it will have a great effect on the decision making, while if it is assigned a small weight, it will have little influence on the final fusion result. Assume there are n pieces of evidence; the weighted averaging method is summarized as:
m(A) = \sum_{i=1}^{n} w_i \, m_i(A), \qquad \sum_{i=1}^{n} w_i = 1 \quad (13)
In fault diagnosis, the weights are given according to the efficiency of the evidence. The more reliable and accurate the evidence is, the higher the weight; on the contrary, the less reliable and accurate the evidence is, the lower the assigned weight. In this paper, the weights are given based on the reliability of the sensors. The higher credibility a sensor has, the greater effect it will have on the final fusion and decision making.
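The weighted averaging step itself is a minimal sketch (assuming weights that already sum to one, and BBAs as dicts mapping frozensets to masses; the function name is illustrative):

```python
def weighted_average(bbas, weights):
    """Weighted average of a list of BBAs, Equation (13); weights assumed to sum to one."""
    avg = {}
    for m, w in zip(bbas, weights):
        for A, v in m.items():
            avg[A] = avg.get(A, 0.0) + w * v
    return avg
```

With the BBAs and normalized weights of the application in Section 4, this yields the modified BBA reported there before the final combination.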

2.3. Deng Entropy

Deng entropy, first proposed by Deng [51], is a generalization of Shannon entropy. It is an efficient way to measure uncertainty, not only when the uncertainty is represented by a probability distribution, but also when it is represented by a BBA. Thanks to this advantage, Deng entropy is widely applied in D-S evidence theory. When the uncertainty is expressed in the form of a probability distribution, Deng entropy degenerates to Shannon entropy. The related concepts are given below.
Let A_i be a proposition of BBA m, and let |A_i| denote the cardinality of the set A_i. The Deng entropy E_d of BBA m is defined as:
E_d = -\sum_{i} m(A_i) \log_2 \dfrac{m(A_i)}{2^{|A_i|} - 1} \quad (14)
When belief is assigned only to single elements, Deng entropy degenerates to Shannon entropy, namely:
E_d = -\sum_{i} m(A_i) \log_2 \dfrac{m(A_i)}{2^{|A_i|} - 1} = -\sum_{i} m(A_i) \log_2 m(A_i) \quad (15)
For more detailed information, please refer to [51].
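Deng entropy is straightforward to compute; a sketch (BBAs as dicts mapping frozensets to masses, logarithm base 2 as in the examples of [51]):

```python
import math

def deng_entropy(m):
    """Deng entropy of a BBA, Equation (14).

    For singleton focal elements the 2^|A| - 1 term equals 1,
    so the value reduces to Shannon entropy.
    """
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)
```

For instance, applied to the first sensor report of the example in Section 4, this gives the entropy value reported there.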

2.4. Fan and Zuo’s Method

Fan and Zuo proposed to improve the evidence by means of evidence sufficiency, evidence importance and the conflict among evidence [43]. In practical applications, the data obtained from a sensor may contain uncertainty and errors. Fan and Zuo introduced a fuzzy relationship function to measure evidence sufficiency, denoted by μ. Besides, not all of the pieces of evidence are of the same importance. Fan and Zuo introduced the evidence weight to represent evidence importance, denoted by ν. The modification of BBAs considering both evidence sufficiency and evidence importance is given below:
m(i, A) = \begin{cases} \alpha_{i,j} \cdot m_i(A), & A \neq \theta \\ 1 - \sum_{B \subset \theta} \alpha_{i,j} \cdot m_i(B), & A = \theta \end{cases} \quad (16)
where α_{i,j} is the combination of the sufficiency index μ and the importance index ν, defined as α_{i,j} = ν_{i,j} · μ_i.
After modification, if the BBAs are still in conflict with each other, Fan and Zuo proposed to use the non-conflict factor to modify Dempster’s combination rule. For more detailed information, please refer to [43].
Though Fan and Zuo's method can handle the conflict problem and perform data fusion effectively, it has some limitations. First, it only considers the static reliability of sensors, namely evidence sufficiency and evidence importance, and ignores the dynamic reliability, which is reflected in the real-time detection process; this is not reasonable in practice. Besides, Fan and Zuo introduced a process of judging the conflict degree between pieces of evidence, according to which different combination rules are adopted, which makes it much more complex to make a decision.

3. The Proposed Method

3.1. Static Reliability

Sensor reliability is of great value in comprehending and quantifying sensor performance. Whether the fusion result is reasonable is closely associated with the static reliability of sensors, reflected in factors such as accuracy, work efficiency and experts' different knowledge. The static reliability of a sensor can be affected by technical factors and noise, such as its working principle, material, manufacturing craft, and so on. It can be evaluated by comparing the sensor outputs with the actual values in long-term practical applications. In this paper, we adopt the evidence sufficiency and evidence importance of Fan and Zuo's method [43] to measure the static reliability of sensors. The static reliability index is denoted as w^s, where the superscript s means "static reliability". w^s combines the sufficiency index and the importance index, as defined below:
w^{s} = \mu_i \times \nu_{i,j} = \alpha_{i,j} \quad (17)
If a piece of evidence has a high sufficiency level and a high importance level, it will be assigned a high weight, so that it has a great effect on the final data fusion result and the decision making.

3.2. Dynamic Reliability

The sensor reliability is also related to the target and surrounding properties, such as environmental noise, the presence of unknown targets and the deception behaviors of observed targets [44]. Due to different sources, different sensors have different adaptations to the environment. Hence, it is also important to take the dynamic reliability of sensors into consideration in the combination process. The dynamic reliability is generally evaluated by measuring the consensus among a group of sensors. For the same input, the sensors may produce different reports. If a sensor report reaches a good consensus with those of other sensors, the sensor has good adaptive performance to the environment, which means it is stable and reliable in detection. From this point of view, the weight assigned to this type of sensor should be great, to guarantee that it plays a more important part in the final combination result and decision making. On the contrary, if a sensor has poor adaptability to the environment, the weight assigned to this kind of sensor should be small, so that it has little influence on the final result.
Evidence distance is an efficient tool to measure the dissimilarity between every two pieces of evidence [50]. Additionally, it can be adopted to reflect the consensus among the sensors, which can be used to evaluate the dynamic reliability. If a piece of evidence has a large distance from the others, it is poorly supported by the other evidence, namely it has great conflict with the others, which means it has a lower level of credibility and little consensus with the others. To reduce the influence of such evidence on the final decision, it will be assigned a small weight.
In this paper, one contribution is that not only the evidence distance, but also Deng entropy is introduced, to measure the information volume of the evidence [51]. Suppose a piece of evidence contains a great volume of information; it is expected to have little conflict with the others; in other words, it has great consensus with the other evidence and is well supported by it. This kind of evidence will be assigned a big weight so as to have a great effect on the final decision.
The dynamic reliability combining both evidence distance and Deng entropy denoted w d is defined as:
w^{d} = Crd_i \times \varphi(i) \quad (18)
where the superscript d of w^d represents "dynamic reliability", Crd_i is calculated by Equation (12), and φ(i) is the Deng entropy of Equation (14) after normalization, given below:
\varphi(i) = \dfrac{E_d(i)}{\max_i E_d(i)} \quad (19)
If a piece of evidence has a dynamic reliability equal to one, this means it is completely reliable: it has not only the highest credibility, but also the maximum information volume, and it will play a significant part in the final result. On the contrary, if a piece of evidence has a dynamic reliability close to zero, this means it highly conflicts with the others and may assign its whole BBA to a single element; obviously, this type of evidence should barely participate in decision making. The dynamic reliability considering both Deng entropy [51] and the evidence distance [50] can effectively reflect the adaptability and real-time reliability of the sensors.

3.3. Comprehensive Reliability of Sensor

Based on all of the above, this paper proposes a new comprehensive method to model the sensor reliability. The new model combining the static reliability and dynamic reliability is more reasonable. It is defined below:
w = w^{s} \times w^{d} \quad (20)
where w^s is given in Equation (17) and w^d is obtained by Equation (18). According to the sensor reliability, the weight of each piece of evidence can be obtained. It is obvious that the more reliable the sensor is, the greater effect it has on the fusion result, which is of great help for making the right decision.
Suppose there are n pieces of evidence; the final weights, obtained from Equation (20) after normalization, are given below:
\bar{w}(i) = \dfrac{w(i)}{\sum_{i=1}^{n} w(i)} \quad (21)
Use these weights to compute the weighted average of the evidence according to Equation (13). Then combine the weighted average evidence n − 1 times, and the fusion result can be obtained to make the final decision. To illustrate the process more clearly, Figure 1 shows the flowchart of the new method.
Figure 1. The flowchart of the new method.
In Fan and Zuo’s method [43], the evidence sufficiency and evidence importance both belong to static reliability, which neglects the significance of dynamic reliability. In contrast, the new method is more reasonable and considerate. The relationship between Fan and Zuo’s method and the new method is illustrated in Figure 2.
Figure 2. The relationship between Fan and Zuo’s method and the new method.
D-S evidence theory simply combines the initial evidence to make a decision [43], while in the other two methods, the reliability of sensors is also taken into consideration. Fan and Zuo's method [43] only requires the input of evidence sufficiency and evidence importance. However, it is not easy to obtain these parameters in practice. In comparison, the new method is more reasonable and comprehensive, which makes a great contribution to improving the accuracy of decision making.

4. Application

The example from [43] is given to demonstrate the effectiveness of the new method.
Example 5: Assume a machine has three gears G_1, G_2 and G_3, and the failure modes F_1, F_2, F_3 represent faults in G_1, G_2 and G_3, respectively. The fault hypothesis set is θ = {F_1, F_2, F_3}. Suppose there are three sensors, named S_1, S_2 and S_3, and the evidence derived from the different sensors is denoted by E = {E_1, E_2, E_3}. The BBAs based on these pieces of evidence are given in Table 1.
Table 1. Basic belief assignments (BBAs) for the example.
              F_1     F_2     {F_2, F_3}   θ
E_1: m_1(·)   0.60    0.10    0.10         0.20
E_2: m_2(·)   0.05    0.80    0.05         0.10
E_3: m_3(·)   0.70    0.10    0.10         0.10
The conflict factors between each pair of evidence are k_{1,2} = 0.52, k_{1,3} = 0.26 and k_{2,3} = 0.605. It is obvious that the second piece of evidence conflicts highly with the others. Assume the sufficiency indexes of the three pieces of evidence are 1, 0.6 and 1, respectively, and the importance indexes are 1, 0.34 and 1.
According to Equation (12), the credibility degree C r d i of these three pieces of evidence can be calculated based on the initial BBAs.
Crd_1 = 1.0000, \quad Crd_2 = 0.5523, \quad Crd_3 = 0.9660
Adopt Equation (14) to calculate the Deng entropy, which is given below:
E_d(1) = 2.2909, \quad E_d(2) = 1.3819, \quad E_d(3) = 1.7960
Additionally, the results after normalization of the Deng entropy according to Equation (19) are as follows:
\varphi(1) = 1.0000, \quad \varphi(2) = 0.6032, \quad \varphi(3) = 0.7840
Then, the static reliability w^s and the dynamic reliability w^d can be obtained based on Equations (17) and (18), respectively. On the basis of all of the above, the final weights according to Equation (20) are given below:
w_1 = 1 \times 1 \times 1 \times 1 = 1, \quad w_2 = 0.6 \times 0.34 \times 0.5523 \times 0.6032 = 0.0680, \quad w_3 = 1 \times 1 \times 0.9660 \times 0.7840 = 0.7573
The final weights after normalization are shown as follows:
\bar{w}_1 = 0.5479, \quad \bar{w}_2 = 0.0372, \quad \bar{w}_3 = 0.4149
Use the weights to modify the BBAs, and the results are given below:
m(\{F_1\}) = 0.6210, \quad m(\{F_2\}) = 0.1261, \quad m(\{F_2, F_3\}) = 0.0981, \quad m(\Theta) = 0.1548.
After the combination by Equation (3), the final BBAs are:
m(\{F_1\}) = 0.8948, \quad m(\{F_2\}) = 0.0739, \quad m(\{F_2, F_3\}) = 0.0241, \quad m(\Theta) = 0.0072
Table 2 shows the results obtained by different methods.
Table 2. Comparison between the proposed method and other methods. D-S, Dempster–Shafer.
                             F_1      F_2      {F_2, F_3}   θ
D-S evidence theory          0.4519   0.5048   0.0336       0.0096
Fan and Zuo's method [43]    0.8119   0.1096   0.0526       0.0259
The proposed method          0.8948   0.0739   0.0241       0.0072
According to the proposed method, fault F_1 has a belief degree of 89.48%, while fault F_2 only has a belief degree of 7.39%. It is clear that m({F_1}) > m({F_2}). Therefore, we can conclude that the fault is F_1, which means that Gear 1 has a fault.
In D-S evidence theory, the BBA of fault F_1 is 0.4519, while that of F_2 is 0.5048. Due to the conflicting evidence E_2, D-S evidence theory comes to the wrong result that m({F_2}) > m({F_1}), which may lead to the wrong decision, while the other two methods handle the conflicting evidence E_2, so that they can both reach the right result.
In Fan and Zuo's method, the belief degree of F_1 is 81.19%, while the new method gives a higher belief degree of 89.48%. The main reason is that the proposed method takes into consideration not only the static reliability represented by evidence sufficiency and evidence importance, but also the dynamic reliability measured by evidence distance and entropy, which enormously decreases the influence of the conflicting evidence E_2 on the final result.
The new method improves the accuracy of fault diagnosis from 81.19% to 89.48%, which illustrates the efficiency of the new method in conflict management and fault diagnosis.
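The whole computation in this example can be reproduced end to end. The following sketch (variable names are illustrative; BBAs are dicts mapping frozensets to masses) follows the steps of the proposed method from Table 1 through the final fusion, and under these assumptions it recovers weights near 0.5479, 0.0372 and 0.4149 and a fused belief of roughly 0.8948 for F_1, in line with Table 2:

```python
import math
from itertools import product

F1, F2, F23 = frozenset({'F1'}), frozenset({'F2'}), frozenset({'F2', 'F3'})
THETA = frozenset({'F1', 'F2', 'F3'})

# Table 1 BBAs and the sufficiency / importance indexes given in the example.
bbas = [
    {F1: 0.60, F2: 0.10, F23: 0.10, THETA: 0.20},  # E1
    {F1: 0.05, F2: 0.80, F23: 0.05, THETA: 0.10},  # E2
    {F1: 0.70, F2: 0.10, F23: 0.10, THETA: 0.10},  # E3
]
mu = [1.0, 0.6, 1.0]    # sufficiency indexes
nu = [1.0, 0.34, 1.0]   # importance indexes

def distance(m1, m2):
    """Jousselme evidence distance between two BBAs."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    d = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    quad = sum(d[i] * d[j] * len(focals[i] & focals[j]) / len(focals[i] | focals[j])
               for i in range(len(focals)) for j in range(len(focals)))
    return math.sqrt(0.5 * quad)

def deng_entropy(m):
    """Deng entropy of a BBA, Equation (14)."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def combine(m1, m2):
    """Dempster's rule, Equations (3)-(4)."""
    K = sum(v1 * v2 for (X, v1), (Y, v2) in product(m1.items(), m2.items())
            if not (X & Y))
    out = {}
    for (X, v1), (Y, v2) in product(m1.items(), m2.items()):
        if X & Y:
            out[X & Y] = out.get(X & Y, 0.0) + v1 * v2 / (1 - K)
    return out

n = len(bbas)
sup = [sum(1 - distance(bbas[i], bbas[j]) for j in range(n) if j != i)
       for i in range(n)]
crd = [s / max(sup) for s in sup]                        # credibility, Equation (12)
ent = [deng_entropy(m) for m in bbas]
phi = [e / max(ent) for e in ent]                        # normalized entropy
w = [mu[i] * nu[i] * crd[i] * phi[i] for i in range(n)]  # Equation (20)
total = sum(w)
w = [wi / total for wi in w]                             # normalized weights

avg = {}                                                 # weighted average, Equation (13)
for m, wi in zip(bbas, w):
    for A, v in m.items():
        avg[A] = avg.get(A, 0.0) + wi * v

result = combine(combine(avg, avg), avg)                 # combine n - 1 = 2 times
```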

5. Conclusions

How to efficiently model sensor reliability greatly affects the performance of the sensor fusion system. To address this issue, a new sensor reliability model combining both dynamic reliability and static reliability is presented in this paper. The dynamic property of the sensor reliability is determined by the distance function of the sensor report and the information volume of each sensor report. A new discounting coefficient is proposed to improve the classical Dempster combination rule. An application in fault diagnosis is illustrated to show the efficiency of our proposed method. It seems that our proposed method is more efficient for handling highly conflicting evidence. In addition, from the result obtained in this paper, the new method can identify the fault correctly and improve the accuracy of fault diagnosis from 81.19% to 89.48%.
The proposed method has two aspects of merit. From the aspect of the mathematical model of sensor data fusion, the proposed work takes into consideration not only the evidence distance, but also the information volume of the sensor itself, which models the dynamic property of sensor reliability more reasonably. From the aspect of a real application in fault diagnosis, the proposed method can effectively handle conflict. The application results show that the accuracy of fault diagnosis is improved from 81.19% to 89.48%. Besides, the method can be easily extended to other multi-sensor systems, which makes it useful in practical engineering.

Acknowledgments

The authors greatly appreciate the reviewers’ suggestions and the editor’s encouragement. The work is partially supported by the National High Technology Research and Development Program of China (863 Program) (Grant No. 2013AA013801), the National Natural Science Foundation of China (Grant Nos. 61174022, 61573290, 61503237) and the China State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (Grant No. BUAA-VR-14KF-02).

Author Contributions

Yong Deng designed and performed the research. Kaijuan Yuan wrote the paper. Kaijuan Yuan, Liguo Fei and Bingyi Kang performed the computation. Yong Deng, Kaijuan Yuan and Fuyuan Xiao analyzed the data. All authors discussed the results and commented on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.
