This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
In target classification based on belief function theory, sensor reliability evaluation raises two basic issues: a reasonable dissimilarity measure among evidences, and the adaptive combination of static and dynamic discounting. This paper proposes a solution to both issues. First, an improved dissimilarity measure based on a dualistic exponential function is designed. We assess the static reliability from a training set using the local decision of each sensor and the dissimilarity measure among evidences; the dynamic reliability factors are obtained for each test target using the dissimilarity measure between the output of each sensor and the consensus. Second, an adaptive method of combining static and dynamic discounting is introduced. We adopt Parzen-window estimation to measure the matching degree between the current and static performance of each sensor. Through fuzzy theory, the fusion system realizes self-learning and self-adaptation as sensor performance changes. Experiments conducted on real databases demonstrate that the proposed scheme outperforms other methods in target classification under different target conditions.
Belief function theory has been widely applied in intelligent decision systems [
The main purpose for the sensors' reliability evaluation is to determine an appropriate discounting factor for the sensor. We adopt the sensor discounting factor to denote its reliability according to the relation that the reliability is equal to 1 minus the discount factor [
After obtaining the reliability factor, the basic belief assignments (BBAs) from multiple sensors can be corrected by the corresponding factors. The most classic method is Shafer's discounting rule [
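As a concrete illustration, Shafer's classical discounting rule scales every mass by (1 − α) and transfers the removed mass α to the whole frame Ω, which represents total ignorance. A minimal sketch in Python (the dictionary-of-frozensets representation of a BBA is our own convention for illustration, not the paper's):

```python
def discount(bba, alpha, frame):
    """Shafer's discounting: scale each mass by (1 - alpha) and move
    the removed mass alpha onto the whole frame Omega (total ignorance)."""
    omega = frozenset(frame)
    discounted = {A: (1.0 - alpha) * m for A, m in bba.items()}
    discounted[omega] = discounted.get(omega, 0.0) + alpha
    return discounted

# A sensor with discounting factor alpha = 0.2 (reliability 0.8):
m = {frozenset({'a'}): 0.7, frozenset({'a', 'b'}): 0.3}
m_disc = discount(m, 0.2, {'a', 'b'})
# -> {'a'}: 0.56, {'a','b'}: 0.44; the masses still sum to 1
```

Note that α = 0 leaves the BBA untouched (a fully reliable source), while α = 1 replaces it with the vacuous BBA.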
However, existing methods of sensor reliability evaluation and evidence discounting have several problems. A reasonable dissimilarity measure between evidences is the basic issue of both static and dynamic reliability assessment. For example, the dissimilarity measures among evidences are unreasonable in Guo's [
In order to resolve the above problems, this paper puts forward a scheme of sensor reliability evaluation and evidence discounting that consists of two main parts. First, we design an improved dissimilarity measure based on a dualistic exponential function, and use it to assess the static reliability from a training set via the local decision of each sensor and the distance measure between evidences; the dynamic reliability factors are simultaneously obtained for every test target from the dissimilarity between the output of each sensor and the consensus of all evidences. Second, we introduce an adaptive method of combining static and dynamic discounting based on fuzzy theory and Parzen-window density estimation, which is suitable for different kinds of uncertain target environments.
The rest of the paper is divided into six parts. Section 2 reviews the belief function theory. An improved dissimilarity measure based on a dualistic exponential function is presented in Section 3. Evaluation methods of static and dynamic discounting factor are respectively introduced in Section 4. In Section 5, we propose an adaptive combination mechanism of static and dynamic reliability discounting. The experiments and analysis are arranged in Section 6, where we compare the proposed method with other methods on real datasets. Then, a conclusion is presented in Section 7.
Belief function theory is regarded as a useful tool for representing and processing uncertain knowledge. In this section, a brief review of the belief function theory is given.
Let Ω = {
If there is no ambiguity,
The mass
The belief function
The combination of multiple BBAs can be realized through the conjunction rule. Let
The normalized factor is:
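The conjunctive combination and its normalization can be sketched as follows. The two-BBA form shown here is the standard Dempster rule; the frozenset representation is again our illustrative convention:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two BBAs followed by
    normalization with the total conflict mass K."""
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + mB * mC
        else:
            conflict += mB * mC          # mass assigned to the empty set
    return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 0.5}
m12, k = dempster_combine(m1, m2)   # k = 0.3; surviving masses renormalized by 0.7
```

Combining more than two BBAs follows by applying the rule iteratively, since it is commutative and associative.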
Because of the various conditions' influence, doubts about the reliability of an information source
The probability of reliability (1 −
In the transferable belief model (TBM) [
The credal level and its beliefs are expressed by belief functions.
In the pignistic level where for the purpose of making decisions, belief functions are converted into the pignistic probabilities denoted
The relation can be constructed between the two functions by the pignistic transformation on Ω:
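The pignistic transformation distributes each focal element's mass equally over its singletons, yielding a probability distribution suitable for decision making. A minimal sketch, assuming a normalized BBA with no mass on the empty set:

```python
def pignistic(bba):
    """Pignistic transformation BetP: distribute each focal element's
    mass equally among its singleton members (assumes m(empty set) = 0)."""
    betp = {}
    for A, mass in bba.items():
        for w in A:
            betp[w] = betp.get(w, 0.0) + mass / len(A)
    return betp

m = {frozenset({'a'}): 0.5,
     frozenset({'a', 'b'}): 0.3,
     frozenset({'a', 'b', 'c'}): 0.2}
betp = pignistic(m)
decision = max(betp, key=betp.get)   # decide by maximum pignistic probability
```

This maximum-pignistic-probability decision rule is the one used throughout the paper's experiments.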
Fundamentally, an accurate dissimilarity measure between BBAs is the basis of sensor reliability evaluation, whether static or dynamic. For instance, the basic idea of the dynamic discounting method is that, if one source of evidence differs from the other sources, it has low reliability.
In this section, we first review the existing dissimilarity measure methods used in sensor reliability evaluation. Then, an improved dissimilarity measure method based on a dualistic exponential function is designed. Finally, several dissimilarity measure methods are compared and discussed.
In belief function theory, the dissimilarity between evidences reflects the inconsistency of sensors. In order to quantify this inconsistency, it is necessary to define the dissimilarity measure precisely; the target-oriented strategy then follows from it. There are three kinds of dissimilarity measures, namely BBM-type, distance-type and complex-type measures.
This was proposed by Shafer [
In this dissimilarity measure, evidences are regarded as vectors in a linear space, which reflects the geometric meaning of the inconsistency between evidences. On the basis of the distance metric definition [
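One widely used distance-type measure is the Jousselme distance, which weights the difference between two mass vectors by the Jaccard similarity of focal elements. A sketch (the BBA representation is our illustrative convention):

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance between two BBAs:
    d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),  D(A, B) = |A & B| / |A | B|."""
    focals = sorted(set(m1) | set(m2), key=lambda A: (len(A), sorted(A)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    total = 0.0
    for i, A in enumerate(focals):
        for j, B in enumerate(focals):
            jac = len(A & B) / len(A | B)     # Jaccard weight of the pair
            total += diff[i] * jac * diff[j]
    return math.sqrt(0.5 * max(total, 0.0))

ma = {frozenset({'a'}): 1.0}
mb = {frozenset({'b'}): 1.0}
# identical BBAs give 0; fully conflicting singletons give the maximum 1
```

The quadratic form over all pairs of focal elements is what makes this distance expensive when the frame of discernment is large, a drawback discussed below.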
This dissimilarity measure combines both the BBM type and distance type measures, and has the form of dualistic variables corresponding to two kinds of measures [
A dissimilarity measure should, first of all, accord with human intuition. Second, it should be able to measure the dissimilarity among more than two pieces of evidence simultaneously. In addition, it should have good operational capability. Considering the comprehensiveness of dissimilarity measurement, this paper integrates both the BBM-type and distance-type measures. Firstly, a new function is proposed to replace
The specific strategy includes three steps:
The problem of
This paper expounds that the local dissimilarity consists of local potential dissimilarity and local direct dissimilarity. We construct the local potential dissimilarity among
For
Theorem 1: For
Proof: in
If
If
Hence, whether local direct dissimilarity or local potential dissimilarity can be expressed by
Theorem 2: For
Proof: on one hand, in
With
If and only if
The problem of
When
We put forth the dualistic exponential function, which, through the association of multiple parameters, compensates for the one-sidedness of a single parameter. In the same discernment frame Ω, with
Compared with the existing dissimilarity measurement methods, the advantages of our method are shown in the following examples:
Example 1: Let three BBAs
The contrast results of different dissimilarity measurement methods are shown in
From
Example 2: Let three pairs of BBAs be in the same discernment frame Ω = {
First pair:
Second pair:
Third pair:
From
Example 3: Let Ω be a discernment frame with 20 elements. We use 1, 2,
There are 20 cases where subset
All in all, our improved dissimilarity measure has three advantages. Firstly, it is much closer to human logic and has no one-vote veto problem. Secondly, it overcomes the operational problem of existing dualistic conflict measures. Thirdly, it can measure the conflict among any number of pieces of evidence simultaneously, and it satisfies interchangeability (commutativity) and combinability.
In this section, the static discounting factor is assessed from a training set by comparing the sensor reading with the truth, building on the study of the last section. Our method permits us to assess the static discounting factors of individual sensors. Different from the static method, the dynamic discounting factor is assessed during target recognition, that is, on the test set. We regard a sensor whose evidence accords with that of the majority of sensors as comparatively reliable, and accordingly bring forward an evaluation method of dynamic reliability based upon our improved dissimilarity measure. Different methods are discussed at the end of this section.
The static discounting factor of sensor
The sensor reading about the class of each target
In order to investigate the recognition performance of a fusion system across the board, evaluating the dynamic reliability of each sensor is an important issue. When the real-time observation environment changes relative to the training environment (for example, when sensor performance declines or fails because of environmental noise or hostile interference), the static reliability and discounting factor obtained from preliminary training no longer reflect the sensor's current status. Therefore, static evaluation of sensor reliability is not enough, and the reliability of each sensor must also be estimated dynamically in the fusion system.
Elouedi [
Guo [
The proposed method of Yang [
Elouedi's method [
Xu's method [
Static reliability evaluation of sensors is the process of obtaining static discounting factors from the training set. In essence, it acquires prior knowledge of each sensor's performance, and thus quantifies static reliability better. Several key problems must be considered: a reasonable dissimilarity measure between evidences under different conditions; the reliability evaluation of a sensor based on its different categories of output; and how to make full use of the limited training samples.
Guo's method is based on the Jousselme distance measure. This distance may not conform to common sense, as shown in Example 1. Instead, we adopt the method of the last section based on the improved measure BEF (
Next, we start with the reliability evaluation based on the output of maximal pignistic probability. Based on Guo's acquisition method of the static discounting factor [
For each training target
We can obtain the static discounting factor
The factor
As known in
In the training set Γ = {
Naturally, we get the static discounting factor under the condition of each class via simple averaging operation:
Furthermore, we acquire the static reliability factor of the p-dimensional vector as follows:
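The per-class averaging procedure described above can be sketched roughly as follows. The helper names, the placeholder total-variation dissimilarity, and the categorical ground-truth BBA are our illustrative assumptions; in particular, the placeholder stands in for the paper's BEF measure, whose exact formula is given above:

```python
def pignistic(bba):
    """Distribute each focal element's mass equally over its singletons."""
    betp = {}
    for A, mass in bba.items():
        for w in A:
            betp[w] = betp.get(w, 0.0) + mass / len(A)
    return betp

def tv_dissim(m1, m2):
    """Placeholder dissimilarity (total variation), standing in for BEF."""
    focals = set(m1) | set(m2)
    return sum(abs(m1.get(A, 0.0) - m2.get(A, 0.0)) for A in focals) / 2.0

def static_discount_factors(outputs, truths, classes, dissim=tv_dissim):
    """Rough sketch of class-conditional (fine) static discounting:
    average the dissimilarity between each training BBA and the
    categorical BBA of the true class, grouped by the class the sensor
    itself declared via maximum pignistic probability."""
    sums = {c: 0.0 for c in classes}
    counts = {c: 0 for c in classes}
    for bba, truth in zip(outputs, truths):
        betp = pignistic(bba)
        declared = max(betp, key=betp.get)        # sensor's local decision
        truth_bba = {frozenset({truth}): 1.0}     # certain ground-truth BBA
        sums[declared] += dissim(bba, truth_bba)
        counts[declared] += 1
    return {c: sums[c] / counts[c] if counts[c] else 0.0 for c in classes}
```

Grouping by the declared class rather than the true class is what makes the factors conditional on the sensor's own output, so no training sample is discarded.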
This paper rests on an essential premise: each sensor is independent. Under this premise, we follow the idea of Guo's method [
Suppose the total number of sensors is
In order to measure the consistency between the output of each sensor
The dissimilarity measure
Therefore,
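The majority-consensus idea behind the dynamic factors can be sketched as follows. Averaging the other sensors' BBAs as the consensus and using a total-variation placeholder dissimilarity are illustrative assumptions, not the paper's exact construction:

```python
def tv_dissim(m1, m2):
    """Placeholder dissimilarity (total variation) between two BBAs."""
    focals = set(m1) | set(m2)
    return sum(abs(m1.get(A, 0.0) - m2.get(A, 0.0)) for A in focals) / 2.0

def dynamic_discount_factors(bbas, dissim=tv_dissim):
    """For each sensor, take the mean BBA of the *other* sensors as the
    consensus; the farther a sensor's evidence lies from that consensus,
    the larger its dynamic discounting factor (lower reliability)."""
    factors = []
    for j, bba in enumerate(bbas):
        others = [b for k, b in enumerate(bbas) if k != j]
        consensus = {}
        for b in others:
            for A, m in b.items():
                consensus[A] = consensus.get(A, 0.0) + m / len(others)
        factors.append(dissim(bba, consensus))
    return factors

bbas = [{frozenset({'a'}): 1.0},
        {frozenset({'a'}): 1.0},
        {frozenset({'b'}): 1.0}]    # the third sensor disagrees
factors = dynamic_discount_factors(bbas)
# the outlier sensor receives the largest discounting factor
```

Because the consensus is recomputed for every test target, the factors track the sensors' real-time behaviour rather than their training-time behaviour.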
Compared with other methods, the advantages of our evaluation methods are:
We can calculate the dissimilarity measure between BBAs directly, derived from the sensor reading and the actual value of the data. The output category based on maximum pignistic probability is used as the condition for fine discounting, which uses the samples fully with no additional loss in the calculation process.
The dissimilarity measure and mean operation can correspond to either certain training samples or general training samples, and also have considerable flexibility.
Compared with other methods, our method has no information loss, its computational complexity is considerably lower, and its reliability evaluation is more reasonable and accurate.
In this section, we study the combination of static and dynamic discounting factors in depth. Based on [
Guo [
However, in this method, the static weight
To achieve adaptive combination of static and dynamic discounting factors, we use a new implementation framework, shown in
In order to simplify the problem, this paper makes the following assumption: within the whole sensor group, the performance of one or several sensors may be affected by the environment while the performance of most other sensors remains the same, which is common and reasonable for heterogeneous sensors.
In accordance with the above assumption, we proceed as follows. First, from the training samples, we obtain the sequence of dynamic discounting factors for each sensor, and then use a nonparametric estimation method to acquire the probability density function of the dynamic discounting variable. In the testing process, according to the real-time dynamic discounting of each target, we calculate the matching degree between the current sensor performance and the static-environment sensor performance, and thereby track the sensor performance. This paper arranges two steps to realize the adaptive combination of static and dynamic discounting factors.
Question: given a sequence of independent and identically distributed random variables
The Parzen-window estimation method [
Assuming
In order to calculate the number
Now, the number of samples with
Common window functions take a variety of forms; the following two are most widely used, with the specific forms:
The window function:
The normal window function:
Note that, in the basic formula of the Parzen-window estimation method, the window width
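With the normal window, the one-dimensional estimate described above can be sketched as follows; the window width h is the free parameter discussed in the text:

```python
import math

def parzen_density(samples, x, h):
    """Parzen-window density estimate with a normal (Gaussian) window:
    p(x) = (1/N) * sum_i phi((x - x_i) / h) / h."""
    n = len(samples)
    norm = h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) / norm
               for xi in samples) / n

# density of a sensor's dynamic discounting factors observed in training
factors = [0.09, 0.10, 0.11, 0.12]
p_near = parzen_density(factors, 0.10, h=0.05)   # near the training cluster
p_far = parzen_density(factors, 0.60, h=0.05)    # far from the cluster
```

Evaluating this estimated density at a test target's real-time dynamic factor gives a natural measure of how well the current behaviour matches the training-time (static) behaviour: a high density means the sensor is behaving as it did during training.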
For
Let
After obtaining the matching degree between the current performance and the static performance of each sensor, we need to realize dynamic learning of the static and dynamic discounting weights. The matching degree reflects whether the current sensor is in the static environment, away from it, or in an intermediate state; since this judgement is inherently ambiguous, fuzzy set theory [
Dynamic fuzzy variable
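The blending mechanism can be sketched with triangular memberships over the matching degree. The three states, the breakpoints at 0, 0.5 and 1, and the equal-mean rule for the intermediate state are our illustrative assumptions, not the paper's exact membership functions:

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is 0 outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_blend(static_f, dynamic_f, match):
    """Hypothetical sketch of the adaptive combination: three fuzzy
    states of the matching degree ('away from static', 'intermediate',
    'static') weight the static and dynamic discounting factors."""
    w_away = tri(match, -1.0, 0.0, 0.5)     # trust the dynamic factor
    w_mid = tri(match, 0.0, 0.5, 1.0)       # blend both factors equally
    w_static = tri(match, 0.5, 1.0, 2.0)    # trust the static factor
    total = w_away + w_mid + w_static
    blended = (w_away * dynamic_f
               + w_mid * 0.5 * (static_f + dynamic_f)
               + w_static * static_f)
    return blended / total

# match ~ 1: behaves as in training, so the static factor dominates;
# match ~ 0: far from the static environment, so the dynamic one dominates
```

Because the weights vary continuously with the matching degree, the fusion system moves smoothly between trusting the trained prior and trusting the real-time evidence, which is the self-adapting behaviour claimed above.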
We perform three series of experiments. First, the performances of various static methods are compared using classifiers trained on the UCI datasets. Then we study the behavior of several dynamic methods. In the third series of experiments, we draw conclusions by comparing two methods of combining static and dynamic discounting in different situations. The experimental setup is described in Section 6.1, and the results are presented and discussed in Sections 6.2–6.4.
The datasets used in these experiments are summarized in
In a single dataset, all the features are divided into three groups; each group contains several features and serves as the basis of one classifier. The description of the generated classifiers is shown in
To fairly compare the various methods, we adopt the construction method of [
Let B be a dataset composed of
All targets in the dataset B are divided into three groups: the dataset for generating BBAs
Based on the dataset
In order to make a decision, the decisions obtained by the three classifiers are then combined according to the majority vote method and the Dempster rule of combination, respectively.
The correct classification rates of classifiers and two methods are respectively shown in
To implement different methods of evaluating the static discounting factor, the following steps are carried out:
By testing every classifier based on the dataset
For each object, based on the different static discounting factors and
Once the BBAs are obtained, the final results by the Dempster rule of combination can be computed according to the formula:
Thus the final object can be classified by the maximum pignistic probability rule.
Repeat step (2) until all the data in the test set are tested.
We compare the performances of various methods by evaluating the static discounting factors. The classification results via five kinds of methods are shown in
We have three classifiers based on the dataset
For every object, different BBAs are calculated by every classifier in the test set
For the same object, based on the different dynamic discounting factors and
Once the BBAs are obtained, the final results are gained by the Dempster rule of combination.
Thus the final object can be classified by using the maximum pignistic probability rule.
Repeat steps (1) and (2) until all the data in the test set are tested.
We compare the performances of several methods using the dynamic discounting factors. As shown in
In order to compare the performances of different combination methods, a typical scenario has been designed, made up of a series of cases affected by the actual environment. The scenario is as follows: a reconnaissance ship with three heterogeneous sensors carries out a reconnaissance task at sea. The sensor suite may include a visual camera (VIS), an infrared camera (IR) and a radar. Many unexpected conditions at sea may affect sensor performance. Based on this assumption, we design four experiments.
Since sea fog reduces visibility, the performance of the visual camera will decline. Then the dataset
In our experiments we superimpose a fixed Gaussian white noise on the actual value of one classifier, whose standard deviation equals two-thirds of the actual value. The threshold in Equations (
We conduct experiments with the changes of datasets and values of standard deviation in
It can be observed that the recognition ability of the first classifier drops sharply because of the influence of uncertainty, while the performance of the other two classifiers remains the same. Experimental results on several datasets prove that our adaptive combination method (our static method and our dynamic method) is the best, our adaptive combination method (Guo's static method and Guo's dynamic method) ranks second, and Guo's combination method (Guo's static method and Guo's dynamic method) is third. The results testify to the finer capability of our static method, our dynamic method and our adaptive combination strategy.
A fixed environmental interference has been discussed above. However, the actual situation is not static, and the marine environment is influenced by many factors. For example, the fog at sea may thicken or thin, which affects the VIS; the detection performance of the radar is directly related to the intensity of the sea clutter. We therefore conduct an experiment on the dataset
The standard deviation of the Gaussian noise is fixed throughout the testing process.(
The standard deviation of the Gaussian noise increases throughout the testing process.(
The standard deviation of the Gaussian noise increases until the intermediate stage, and then the noise disappears for the remainder of the testing process. (
The noise model changes to a random noise of uniform distribution. (
We can see from
When a reconnaissance ship detects enemy targets at sea, the enemy may release fake targets, leading some of the ship's sensors to completely wrong identifications. For example, perfect stealth technology of enemy targets can mislead the radar. In this situation, we study and analyze the effect of the adaptive combination method for static and dynamic discounting factors by changing some of the actual target labels for one sensor in the dataset
When enemy targets find that a reconnaissance ship is nearby, they often degrade the performance of all its sensors through omnibearing interference. Under this situation, the effect of the combining method proposed in this paper is studied by comparison with other methods. In our experiment, we superimpose a fixed Gaussian white noise on the actual value of all classifiers, whose standard deviation equals two-thirds of the actual value. The result is shown in
It is important to note that the classification results of the adaptive combination method are better than those of Guo's combination method and the majority vote method in all the above cases, which shows the effectiveness and applicability of our methods (our static method, our dynamic method and our adaptive combination strategy).
In this paper, a new sensor reliability algorithm is presented to correct the basic belief assignment from each sensor, which can improve the recognition accuracy and robustness of the fusion system. This paper has two main innovative aspects:
First of all, an improved dissimilarity measure based on a dualistic exponential function has been designed. This paper integrates the advantages of both the BBM-type and distance-type dissimilarity measures. The improved measure is more intuitive and overcomes the operational problem of the existing dualistic dissimilarity measures. On this basis, we assess the static reliability from a training set by the local decision of each sensor and the dissimilarity measure between evidences. Meanwhile, the dynamic reliability factors are obtained for every test target from the dissimilarity measure between the output of each sensor and the consensus.
Secondly, based on Parzen-window estimation and fuzzy theory, this paper introduces an adaptive method of combining static and dynamic discounting. In the original method, the static weight is determined entirely by experts beforehand and cannot change with the actual environment. To solve this problem, we adopt the Parzen-window estimate to acquire the matching degree between the current performance of each sensor and its static performance based on the training samples. Our implementation mechanism then suits different kinds of target environments via three fuzzy variables; comparison with other methods shows the classification accuracy and robustness of our approach.
We have applied the evaluation methods of sensor reliability only to classical discounting. In the future, we will combine the adaptive method with different discounting mechanisms, which may further improve the performance of the fusion system.
Comparison of different methods when subset
Implementation framework of Guo's combining method.
Implementation framework of our combining method.
The estimation of overall probability density function
Relational graph of fuzzy variables and static matching degree.
Instant result of different methods in the whole test process on dataset glass (a) and pendigits (b).
Results of different methods on dataset yeast (a), glass (b), waveform (c), and pendigits (d) by adding the fixed Gaussian noise.
Instant results of different methods in the whole test process under different conditions: a fixed Gaussian noise added on classifier 1 (a); a Gaussian noise added on classifier 1, increasing throughout the testing process (b); a Gaussian noise added on classifier 1, increasing until the intermediate stage (c); a fixed random noise of uniform distribution added on classifier 1 (d); partial labels of the targets of classifier 1 changed (e); a fixed Gaussian noise added on all classifiers (f).
Analysis of different dissimilarity measure methods.
BBM  Shafer [ 

It can measure the dissimilarity of more than three pieces of evidence, and its implementation efficiency is high.  Its results are often counterintuitive, for example the one-vote veto problem. 
 
Jia [ 

It includes both direct dissimilarity and potential conflict.  Under different evidence conditions, the dissimilarity results between evidences are relatively large.  
 
Distance  Wang [ 

Its form is intuitive and simple, with high execution efficiency.  The measure is not careful, as it does not consider the compatible parts of focal elements. 
 
Jousselme [ 

It describes the dissimilarity between evidences and is supported by the distance axioms.  Its computation is heavy when the number of elements in the discernment framework is large, and it is sometimes unreasonable.  
 
Complex  Liu [ 

It includes both the BBM type and distance type of dissimilarity measures.  The dualistic dissimilarity measure makes determining the threshold complex, for which there is no uniform criterion. 
 
Guo [ 

It overcomes the operational complexity of the dualistic dissimilarity measure.  The results of the dissimilarity measure are sometimes illogical. 
Different consistency measure functions.
Function Form 




Contrast results of different dissimilarity measurement methods.
 

<0, 0.6067>  0.7430  0.1654  0.6429  0.5293  
<0, 0.0233>  0.0283  0.0114  0.0460  0.0279 
Contrast of different dissimilarity measurement methods.
 

The first pair  <0.9075,0.85>  0.85  0.8296  0.93  0.878 
The second pair  <0.0975,0.55>  0.6946  0.2059  0.6675  0.5306 
The third pair  <0,0.7>  0.8062  0.1738  0.8  0.6543 
Comparisons of different conflict measurement methods.
A = {1}  <0.05, 0.605>  0.78581  0.18801  0.825  0.63 
A = {1,2}  <0.05, 0.42667>  0.68666  0.16353  0.825  0.4458 
A = {1,2,3}  <0.05, 0.24833>  0.57053  0.12233  0.825  0.285 
A = {1,…,4}  <0.05, 0.195>  0.42367  0.10597  0.825  0.2032 
A = {1,…,5}  <0.05, 0.125>  0.13229  0.081178  0.825  0.1237 
A = {1,…,6}  <0.05, 0.25833>  0.38837  0.12517  0.85167  0.2266 
A = {1,…,7}  <0.05, 0.35357>  0.50292  0.14895  0.87071  0.304 
A = {1,…,8}  <0.05, 0.425>  0.57053  0.16323  0.885  0.3648 
A = {1,…,9}  <0.05, 0.48056>  0.61874  0.17247  0.89611  0.4141 
A = {1,…,10}  <0.05, 0.525>  0.65536  0.17879  0.905  0.455 
A = {1,…,11}  <0.05, 0.56136>  0.6844  0.18331  0.91227  0.4896 
A = {1,…,12}  <0.05, 0.59167>  0.70817  0.18665  0.91833  0.5192 
A = {1,…,13}  <0.05, 0.61731>  0.72809  0.1892  0.92346  0.545 
A = {1,…,14}  <0.05, 0.63929>  0.74513  0.19118  0.92786  0.5677 
A = {1,…,15}  <0.05, 0.65833>  0.75993  0.19276  0.93167  0.5877 
A = {1,…,16}  <0.05, 0.675>  0.77298  0.19403  0.935  0.6056 
A = {1,…,17}  <0.05, 0.68971>  0.78461  0.19508  0.93794  0.6216 
A = {1,…,18}  <0.05, 0.70278>  0.79509  0.19595  0.94056  0.6361 
A = {1,…,19}  <0.05, 0.71447>  0.80461  0.19668  0.94289  0.6493 
A = {1,…,20}  <0.05, 0.725>  0.81333  0.1973  0.945  0.6613 
Description of the datasets [
 

Yeast  10  8  495  495  494 
Glass  6  9  72  71  71 
Segment  7  19  770  770  770 
Waveform  3  21  1,667  1,667  1,666 
Pendigits  10  16  3,664  3,664  3,664 
Description of generating classifiers.
 

Yeast  8  1→2  3→5  6→8  15 
Glass  9  1→3  4→6  7→9  12 
Segment  19  1→10  11→13  14→19  2 
Waveform  21  1→8  9→13  14→21  7 
Pendigits  16  1→5  6→10  11→16  3 
Correct classification rates of classifiers and two methods.
Classifier 1  0.4615  0.6197  0.8558  0.6351  0.7268  
Classifier 2  0.3664  0.6197  0.8636  0.7293  0.8401  
Classifier 3  0.3968  0.6056  0.8896  0.6447  0.8352  
Majority Vote  0.3725  0.7042  0.9104  0.7401  0.8799  
Dempster (No Discounting)  0.4008  0.7183  0.9221  0.7923  0.9427 
Correct classification rates of five methods using static discounting factor.
Elouedi [ 
0.5304  0.7183  0.9351  0.7791  0.9539  
Elouedi( 
0.4615  0.7183  0.9286  0.7809  0.9419  
Yang [ 
0.5385  0.7183  0.9390  0.7815  0.9525  
Guo [ 
0.4595  0.7183  0.9260  0.7809  0.9421  
Our static method  0.7809  0.9531 
Correct classification rates of three methods using dynamic discounting factor.
Guo [ 
0.4352  0.7465  0.9338  0.7665  0.9301  
Xu [ 
0.4352  0.7465  0.9338  0.7725  0.9432  
Our dynamic method  0.9312 