Article

A Decision Probability Transformation Method Based on the Neural Network

School of Artificial Intelligence, Henan University, Zhengzhou 450046, China
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1638; https://doi.org/10.3390/e24111638
Submission received: 6 October 2022 / Revised: 31 October 2022 / Accepted: 8 November 2022 / Published: 11 November 2022

Abstract

When the Dempster–Shafer evidence theory is applied to the field of information fusion, how to reasonably transform the basic probability assignment (BPA) into probability to improve decision-making efficiency has been a key challenge. To address this challenge, this paper proposes an efficient probability transformation method based on a neural network, which converts a BPA into a probabilistic decision. First, a neural network is constructed based on the BPA of the propositions in the mass function. Next, the average information content and the interval information content are used to quantify the information contained in each proposition subset and are combined to construct a weighting function with a parameter r. Then, the BPA of the input layer and the bias units are allocated to the proposition subsets in each hidden layer according to the weight factors until the probability of each single-element proposition, still containing the variable r, is output. Finally, the parameter r and the optimal transformation results are obtained under the premise of maximizing the probabilistic information content. The proposed method satisfies the consistency of the upper and lower boundaries of each proposition. Extensive examples and a practical application show that, compared with the other methods, the proposed method not only has higher applicability but also yields transformation results with lower information uncertainty.

1. Introduction

Uncertain information [1] plays a significant role in many engineering applications, including multi-attribute decision-making [2,3,4], fault diagnosis [5,6,7], image processing [8,9], knowledge inference [10,11], risk assessment [12,13,14,15], and pattern classification [16,17]. Recent studies have focused on how to measure and handle information uncertainty, and a series of theories have been introduced, such as modeling information uncertainty based on the entropy function [18,19,20,21] and using Dempster–Shafer (DS) evidence theory [22,23,24,25,26,27,28,29,30] to deal with uncertain information. Compared with the traditional probability theory, the DS evidence theory can directly use the basic probability assignment (BPA) of multi-subset focal elements to express information uncertainty. However, a large number of elements in a proposition can make the credibility assignment too fragmented, thus causing certain complications in the decision-making process and reducing its precision. The BPA of multi-element propositions can respond to the support for each single-element proposition, which can be reasonably mapped to form probability in the frame of discernment [31,32,33,34,35,36,37].
Sudano [31] mapped the BPA of multi-element propositions to the probability of each single-element proposition according to a certain ratio using the belief and plausibility functions. Smets [32] equally distributed the BPA of multi-element propositions to each single-element proposition according to cardinality; however, this method is too conservative and does not fully use the known information. Pan [33] assigned the BPA of multi-element propositions according to the ordered weighted average (OWA) operator and then determined the final probability of each single-element proposition using the minimum entropy difference as a constraint. When a single-element proposition is not contained in any multi-element proposition, the allocation can still affect that single-element proposition, leading to counter-intuitive results. Deng [34] proposed a probability transformation method based on the belief interval, which first obtains the preference of each single-element proposition via the possibility degree, then quantifies the data information of the belief interval of each singleton based on the continuous interval argument ordered weighted average (C-OWA) operator, and finally calculates the support degree of each singleton from the quantified data information to reasonably allocate the BPA of multi-subset focal elements. However, this method can produce extreme cases in which the preference degree of a single-element proposition is zero, which in turn yields a support degree of zero, meaning that the single-subset proposition receives no share of the BPA of any multi-element proposition. Huang [35] proposed a probability transformation method based on the Shapley value. This method requires determining the degree to which each single-element proposition contributes to the multi-element propositions; the BPA is then transformed according to the contribution degree. Since the marginal probability of a single-element proposition may be zero, so that the proposition is not considered in the assignment of multi-element propositions, this may lead to inaccurate transformation results. Li [36] proposed a probability transformation method based on the ordered visibility graph (OVG). The OVG network was constructed according to the BPA order, and the weight of each single-element proposition was obtained from the proposition edges and cardinality; according to the weight, the BPA of multi-element propositions was transformed into the probability of each single-element proposition. This method only uses the out-degree and in-degree of the nodes to determine the weight and does not make full use of the more influential nodes. Chen [37] improved Li's method as follows. First, the OVG network was constructed from the propositions ordered by information volume, and the weighted adjacency matrix was constructed with proposition edges or belief entropy. Then, the weight of each single-element proposition was obtained from the matrix and cardinality. Finally, the probability of each single-element proposition was obtained by assigning the BPA of multi-element propositions according to the weights.
The key to the above methods is obtaining the distribution weight of the multi-element proposition. Inspired by these methods, under the premise of obtaining reasonable transformation results and minimizing the uncertainty of information, an efficient probability transformation method based on a neural network is proposed. The neural network is a computational model [38,39] designed to simulate the neural network of the human brain. Similar to human brain neurons, it consists of multiple nodes (neurons), which are interconnected to model the complex relationships between data. The connections between different nodes are given different weights, representing the influence of one node on another. Each node represents a specific function, which is calculated by combining the inputs and weights of other nodes. The result of the calculation is passed to the activation function, which in turn provides the final output of the neuron and is passed on to the next neuron. The main question is how to construct the neural network and its changing weights, assigning the BPA of a proposition subset layer by layer according to the weights until the probability of each single-element proposition containing the parameter is output. Then, the probabilistic information content (PIC) [31] is determined (the PIC is the dual form of the Shannon entropy [40], which was first proposed by Sudano [31] and is widely used to measure the uncertainty of a probability distribution) and, finally, the optimal probability transformation results are obtained under the premise of PIC maximization. The main contributions of this study can be summarized as follows:
(1).
The BPA of a multi-element proposition with the largest cardinality is used as an input layer of a neural network, and the BPAs of the remaining existing propositions are used as bias units of the neural network. The hidden network layers are constructed according to decreasing order of cardinality for the proposition subset of the input layer and bias units. Finally, the probability of each single-element proposition is output by the output layer.
(2).
The interval information content (IIC) and average information content (AIC) are introduced to quantify the information contained in each proposition subset and combined to construct a weighting function containing the parameter. The weighting function reaches its extreme value at a certain point, in preparation for the use of the constraints below. After obtaining the PIC of the change based on the probability of each single-element proposition, the maximum PIC value is taken as a constraint to obtain the optimal transformation results.
The remainder of this paper is organized as follows. Section 2 briefly introduces the DS evidence theory and decision probability transformation methods. Section 3 describes the Shannon entropy and the PIC. Section 4 proposes a probability transformation method based on a neural network and proves its properties. Section 5 presents numerical simulation results to demonstrate the rationality and superiority of the proposed method compared to the existing methods. A practical application of the proposed method is described in Section 6. Section 7 concludes the paper.

2. Preliminaries

In this section, preliminaries such as the DS evidence theory [24] and some existing probability transformation methods are briefly introduced.

2.1. DS Evidence Theory

Assume a set $\Theta$ composed of $k$ mutually exclusive elements, expressed as $\Theta = \{\theta_1, \theta_2, \ldots, \theta_k\}$, where $\Theta$ is a frame of discernment; these elements are combined to form the power set as follows:
$$2^{\Theta} = \{\emptyset, \{\theta_1\}, \{\theta_2\}, \ldots, \{\theta_k\}, \{\theta_1, \theta_2\}, \ldots, \{\theta_1, \theta_2, \theta_3\}, \ldots, \Theta\},$$
where $\emptyset$ denotes the empty set. If a function $m: 2^{\Theta} \to [0, 1]$ satisfies the conditions $m(\emptyset) = 0$ and $\sum_{A \subseteq \Theta} m(A) = 1$, then $m$ is called the mass function, and $m(A) > 0$ is the BPA of a proposition $A$.
If $Bel: 2^{\Theta} \to [0, 1]$ is defined for every $A \in 2^{\Theta}$ by
$$Bel(A) = \sum_{B \subseteq A} m(B),$$
then $Bel(A)$ denotes the belief function, which represents the degree of trust that proposition $A$ is true.
Similarly, if $Pl: 2^{\Theta} \to [0, 1]$ is defined for every $A \in 2^{\Theta}$ by
$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B),$$
then $Pl(A)$ denotes the plausibility function, which represents the degree of trust that proposition $A$ is non-false.
Since the belief function $Bel$ represents the degree of agreement with a proposition, and the plausibility function $Pl$ represents the degree to which the proposition cannot be ruled out, the interval $[Bel(A), Pl(A)]$ represents the uncertainty of the proposition. The intervals formed by $Bel$ and $Pl$ are shown in Figure 1.
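For concreteness, the following Python sketch (ours, not from the paper) implements a mass function together with the belief and plausibility functions of Equations (1)–(3), using the BPA of Example 1 from Section 4 as test data.

```python
# A minimal sketch (not from the paper) of a mass function with its belief
# and plausibility functions; propositions are modeled as frozensets.

def bel(m, a):
    """Bel(A): total mass of the non-empty subsets of A (Equation (2))."""
    return sum(v for b, v in m.items() if b and b <= a)

def pl(m, a):
    """Pl(A): total mass of the focal elements intersecting A (Equation (3))."""
    return sum(v for b, v in m.items() if b & a)

# The mass function of Example 1 in Section 4.3.
m = {
    frozenset({"t1"}): 0.2,
    frozenset({"t2"}): 0.1,
    frozenset({"t1", "t2"}): 0.3,
    frozenset({"t2", "t3"}): 0.25,
    frozenset({"t1", "t2", "t3"}): 0.15,
}

a = frozenset({"t1", "t2"})
print(bel(m, a), pl(m, a))  # 0.6 1.0: the belief interval of {θ1, θ2}
```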

2.2. Probability Transformation Methods

In the frame of discernment $\Theta = \{\theta_1, \theta_2, \ldots, \theta_k\}$, where $\theta_i \in \Theta$, when a multi-element proposition $D$ with a large BPA appears in the evidence, the information assignment is too scattered and the uncertainty is large, so it is difficult to make decisions directly based on the BPA. To facilitate the decision-making process and improve its accuracy, the following methods have been introduced in recent studies.
Sudano [31] used $Bel$ and $Pl$ to obtain the probability of a single-element proposition by:
$$PraPl(\theta_i) = Bel(\theta_i) + \varepsilon \cdot Pl(\theta_i),$$
where $\varepsilon = \frac{1 - \sum_{\theta_i \in \Theta} Bel(\theta_i)}{\sum_{\theta_i \in \Theta} Pl(\theta_i)}$.
When only $Pl$ is used, the probability of a single-element proposition is calculated by:
$$PrPl(\theta_i) = \sum_{D \subseteq \Theta,\, \theta_i \in D} \frac{Pl(\theta_i)}{\sum_{D_i \subseteq \Theta,\, |D_i| = 1,\, \bigcup_i D_i = D} Pl(D_i)}\, m(D).$$
In contrast, when only $Bel$ is used, the probability of a single-element proposition is obtained by:
$$PrBel(\theta_i) = \sum_{D \subseteq \Theta,\, \theta_i \in D} \frac{Bel(\theta_i)}{\sum_{D_i \subseteq \Theta,\, |D_i| = 1,\, \bigcup_i D_i = D} Bel(D_i)}\, m(D).$$
Smets [32] provided an in-depth analysis of the reasonability of probability transformation in the decision-making domain and proposed the pignistic probability transformation method, which is given by:
$$BetP(\theta_i) = \sum_{\theta_i \in D \subseteq \Theta} \frac{m(D)}{|D|},$$
where $|D|$ is the cardinality of a multi-element proposition $D$.
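As an illustration, here is a short sketch of ours for BetP (Equation (7)), reusing the mass-function representation from the Section 2.1 sketch; the numbers in the comments refer to Example 1's BPA.

```python
# A sketch of the pignistic transformation BetP (Equation (7)): every focal
# element splits its mass equally among its member elements.

def betp(m, frame):
    p = {x: 0.0 for x in frame}
    for focal, mass in m.items():
        for x in focal:
            p[x] += mass / len(focal)
    return p

# With the Example 1 mass function from the previous sketch:
# t1 -> 0.2 + 0.3/2 + 0.15/3 = 0.400
# t2 -> 0.1 + 0.3/2 + 0.25/2 + 0.15/3 = 0.425
# t3 -> 0.25/2 + 0.15/3 = 0.175
```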
Pan [33] proposed a probability transformation method based on the OWA operator and entropy difference; the average function can be expressed by:
$$M_{Bel,Pl}(\theta_i) = \frac{Bel(\theta_i) + Pl(\theta_i)}{2}.$$
The adopted normalization function is as follows:
$$\rho(\theta_i) = \frac{M_{Bel,Pl}(\theta_i)}{\sum_{\theta_j \in \Theta} M_{Bel,Pl}(\theta_j)}.$$
The probability of each single-element proposition obtained using the OWA operator is given by:
$$OWAP_m(\theta_i) = T_i^{\,r} - T_{i-1}^{\,r},$$
where $T_i = \sum_{k=1}^{i} \rho(\theta_k)$ and $T_0 = 0$. Since the probability of a single-element proposition contains an unknown variable $r$, $\min |E_d - H|$ is used to determine $r$, where $E_d$ is the Deng entropy of the BPA and $H$ is the Shannon entropy of the probability distribution.
Deng [34] defined the belief interval and preference degree of a single-element proposition based on the belief and plausibility functions, as follows:
$$l_i = Pl(\theta_i) - Bel(\theta_i),$$
$$p_{ij} = \max\left\{1 - \max\left[\frac{Pl(\theta_j) - Bel(\theta_i)}{l_i + l_j}, 0\right], 0\right\},$$
$$p(\theta_i) = \frac{1}{k - 1} \sum_{j=1,\, j \neq i}^{k} p_{ij},$$
where $i = 1, 2, \ldots, k$. The quantization of the belief interval data was performed according to the C-OWA operator as follows:
$$\beta_i = \frac{Pl(\theta_i) + Bel(\theta_i) \cdot 2^{|\Theta|}}{2^{|\Theta|} + 1}.$$
The preference degree was used to modify the quantized belief interval data to obtain the support degree of the single-element proposition as follows:
$$Sup(\theta_i) = \beta_i \cdot p(\theta_i).$$
The probability of a single-element proposition is given by:
$$ITP(\theta_i) = m(\theta_i) + \sum_{\theta_i \in B \subseteq \Theta} \varepsilon(\theta_i)\, m(B),$$
where $\varepsilon(\theta_i) = \frac{Sup(\theta_i)}{\sum_{\theta_j \in B} Sup(\theta_j)}$.
Huang [35] proposed a probability transformation method based on the Shapley value, where the marginal probability of a proposition $\theta_i$ for a proposition $D$ is expressed by:
$$MP(\theta_i) = m(D) - m(D \setminus \theta_i),$$
where $\theta_i \in D$ and $D \subseteq \Theta$; $D \setminus \theta_i$ denotes the subset of proposition $D$ excluding $\theta_i$. The average marginal probability contribution of $\theta_i$ in $D$ is calculated by:
$$AMP_D(\theta_i) = \frac{1}{|D|!} \sum_{D \subseteq \Theta} \left( m(D) - m(D \setminus \theta_i) \right).$$
The probability of each single-element proposition is obtained by:
$$MPSV(\theta_i) = \sum_{D \subseteq \Theta,\, \theta_i \in D} AMP_D(\theta_i).$$
After reordering the propositions according to their BPAs from largest to smallest, Li [36] obtained the set of edges of each proposition based on the ordered visibility graph and obtained the weight of each single-element proposition according to cardinality as follows:
$$g(\theta_i) = \frac{K(\theta_i)}{\sum_{D \in 2^{\Theta},\, \theta_i \in D} K(\theta_i)},$$
$$K(\theta_i) = \sum_{\theta_i \in D,\, D \in 2^{\Theta}} \frac{1}{|D|} K(D).$$
The probability of a single-element proposition is calculated by:
$$OVGP_m(\theta_i) = \sum_{D \subseteq \Theta,\, \theta_i \in D} \frac{g(\theta_i)\, m(D)}{1 - m(\emptyset)}, \quad m(\emptyset) \neq 1.$$
Chen [37] ordered the propositions according to the Deng entropy magnitude, where the Deng entropy of a proposition is calculated by:
$$IV(A_i) = -m(A_i) \log_2 \frac{m(A_i)}{2^{|A_i|} - 1}.$$
After ordering, which is denoted as $\{(1, IV(A_1)), (2, IV(A_2)), \ldots, (s, IV(A_s))\}$, where $A_1$ is the ordered proposition, the network is constructed by the OVG. Then, the weighted adjacency matrix is obtained based on the set of edges, with internal elements $b_{ij}$; when there is a connection between $A_i$ and $A_j$, then $b_{ij} = 1$; otherwise, $b_{ij} = 0$. The two edge weights are obtained using the node distance and belief entropy, respectively, as follows:
$$w_{ij} = \frac{b_{ij}}{|i - j|},$$
$$w_{ij} = \frac{1}{2} \left( \frac{IV(A_i)}{\sum_{k=1}^{s} IV(A_k)} + \frac{IV(A_j)}{\sum_{k=1}^{s} IV(A_k)} \right).$$
Then, the degree of a focal element is calculated based on the edge weights as follows:
$$D(A_i) = \sum_{j=1}^{s} w_{ij}.$$
The weight of a single-element proposition is calculated by:
$$w(\theta_i) = \sum_{\theta_i \in A} \frac{D(A)}{|A|}.$$
Finally, the probability of each single-element proposition is given by:
$$OVGWP(\theta_i) = m(\theta_i) + \sum_{\theta_i \in A,\, A \in 2^{\Theta}} \frac{w(\theta_i)}{\sum_{\theta_j \in A} w(\theta_j)}\, m(A).$$

3. Shannon Entropy and Probabilistic Information Content

This section describes how to evaluate the performance of a transformation method after the probabilistic transformation is completed.

3.1. Shannon Entropy

Shannon [40] first introduced the concept of entropy into the field of information theory, defining the entropy of discrete finite sets. Assume a finite discrete set $F = \{f_1, f_2, \ldots, f_n\}$ whose probability distribution is $P = \{p(f_1), p(f_2), \ldots, p(f_n)\}$. Then, the Shannon entropy of this set is obtained by:
$$H(F) = -\sum_{i=1}^{n} p(f_i) \log_a p(f_i),$$
where $\sum_{i=1}^{n} p(f_i) = 1$; in this study, $a = e$, where $e$ is Euler's number. In a discrete set, when the probability of each element is equal, i.e., $p(f_1) = p(f_2) = \cdots = p(f_n) = \frac{1}{n}$, the Shannon entropy is maximal and is given by $H_{\max}(F) = -\sum_{i=1}^{n} \frac{1}{n} \ln \frac{1}{n} = \ln n$. The Shannon entropy is used to measure information uncertainty: the greater the information uncertainty, the greater the entropy, and vice versa.

3.2. Probabilistic Information Content

Sudano [31] developed the PIC measure to evaluate transformation results. The PIC of the probability distribution $P = \{p(f_1), p(f_2), \ldots, p(f_n)\}$ is calculated by:
$$PIC(F) = 1 + \frac{1}{H_{\max}(F)} \sum_{i=1}^{n} p(f_i) \ln p(f_i).$$
The PIC is the dual of the normalized Shannon entropy and varies between zero and one. The smaller the PIC value, the greater the information uncertainty, and vice versa. When $PIC = 1$, there is no interference of uncertain information in decision-making, whereas when $PIC = 0$, it is impossible to make a decision based on the information. The PIC has often been used to evaluate the performance of probability transformation methods.
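In code, the Shannon entropy of Equation (28) and the PIC of Equation (29) reduce to a few lines; the following is a sketch of ours rather than an implementation from the paper.

```python
import math

def shannon(p):
    """Shannon entropy with natural logarithm (a = e in Equation (28))."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pic(p):
    """PIC of Equation (29): 1 minus the entropy normalized by H_max = ln n."""
    return 1.0 - shannon(p) / math.log(len(p))

print(pic([1/3, 1/3, 1/3]))  # 0.0: uniform distribution, maximal uncertainty
print(pic([1.0, 0.0, 0.0]))  # 1.0: degenerate distribution, no uncertainty
```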

4. Probability Transformation Based on the Neural Network

The existing methods assign the BPA of multi-element propositions to single-element propositions according to certain weights, but they do not fully consider the relationships between propositions or make full use of the available information, which leads to inaccurate weight factors in some special cases and to counter-intuitive probability transformation results. To overcome these shortcomings, this study proposes a probability transformation method based on a neural network. In this section, the neural network construction and the weighting function are introduced, and the derivation of the optimal transformation result is described in detail.

4.1. Neural Network Construction

A neural network has one input layer and one output layer, and the number of hidden layers and bias nodes depends on the actual situation of the evidence. In this paper, the neural network uses the ReLU function [41] as the activation function.
If the frame of discernment $\Theta = \{\theta_1, \theta_2, \ldots, \theta_k\}$ $(k \geq n)$ exists, the BPA $m(\{\theta_1, \theta_2, \ldots, \theta_n\})$ of the proposition $\{\theta_1, \theta_2, \ldots, \theta_n\}$ with the maximum cardinality is used as the input network layer. The BPAs of the remaining propositions are denoted as bias units; the propositions in the hidden layers are subsets of the input-layer and bias-unit propositions, and their cardinality decreases with the layer number. From the combinatorics of probability statistics, the number of proposition subsets in the first hidden layer is $C_n^{n-1} = n$, and their BPAs are expressed as $N_m(\{\theta_1, \theta_2, \ldots, \theta_{n-1}\})$, $N_m(\{\theta_1, \theta_2, \ldots, \theta_{n-2}, \theta_n\})$, ..., $N_m(\{\theta_2, \theta_3, \ldots, \theta_n\})$, where each proposition subset has cardinality $n - 1$. The number of proposition subsets in the second hidden layer is $C_n^{n-2} = \frac{n(n-1)}{2}$, and the number in the $j$th hidden layer is $C_n^{n-j} = \frac{n(n-1)\cdots(n-j+1)}{j!}$, where $j! = j \times (j-1) \times (j-2) \times \cdots \times 2 \times 1$. There are $n - 2$ hidden layers; the output layer gives the probability values of all single-element propositions in the frame of discernment: $PN_m(\theta_1)$, $PN_m(\theta_2)$, ..., $PN_m(\theta_k)$. The neural network structure is presented in Figure 2.
Each neuron in the hidden layer is composed of multiple parts, and the initial value of a neuron is obtained based on the accumulation of weights and bias units and then activated using the activation function before obtaining the BPA of each focal proposition, as shown in Figure 3.
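The layer layout described above can be enumerated directly; the following sketch (an illustration of ours) lists, for an input proposition of cardinality n, the subsets forming each hidden layer and the final singleton layer.

```python
from itertools import combinations

# Enumerate the layer structure of Section 4.1: the jth hidden layer holds
# the C(n, n-j) subsets of cardinality n-j; the last (singleton) layer
# corresponds to the output layer.

def layer_structure(input_prop):
    n = len(input_prop)
    return [[frozenset(c) for c in combinations(sorted(input_prop), n - j)]
            for j in range(1, n)]

for layer in layer_structure({"t1", "t2", "t3", "t4"}):
    print([tuple(sorted(s)) for s in layer])
# cardinality 3: 4 subsets; cardinality 2: 6 subsets; cardinality 1: output
```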

4.2. AIC and IIC Values

Assume the frame of discernment $\Theta = \{\theta_1, \theta_2, \ldots, \theta_k\}$ and a proposition $A_i \subseteq \Theta$. In order to accurately quantify the information content of the proposition $A_i$, the average information content (AIC) and interval information content (IIC), based on the belief and plausibility functions, are defined as follows:
$$AIC(A_i) = e^{2^{|\Theta|} \cdot \frac{Bel(A_i) + Pl(A_i)}{2}},$$
$$IIC(A_i) = e^{2^{|\Theta|} \cdot \left( Pl(A_i) - Bel(A_i) \right)}.$$
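A direct transcription of Equations (30) and (31) follows, reusing bel and pl from the Section 2.1 sketch; the printed values match Example 1's hidden-layer proposition {θ1, θ2}.

```python
import math

# AIC and IIC of Equations (30) and (31); 2 ** len(frame) is 2^{|Theta|}.

def aic(m, a, frame):
    return math.exp(2 ** len(frame) * (bel(m, a) + pl(m, a)) / 2)

def iic(m, a, frame):
    return math.exp(2 ** len(frame) * (pl(m, a) - bel(m, a)))

frame = {"t1", "t2", "t3"}
a = frozenset({"t1", "t2"})
print(aic(m, a, frame), iic(m, a, frame))  # ~601.85 and ~24.53 for Example 1
```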

4.3. Weighting Function with Variable

This paper proposes a weighting function containing a variable parameter $r$, which obtains the weight of each proposition by combining the AIC and IIC. The weighting function is defined as follows:
$$W(A_i) = r \cdot AIC(A_i)^{1-r} + (1 - r) \cdot IIC(A_i)^{r},$$
where $0 \leq r \leq 1$.
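Equation (32) then combines the two quantities; a one-line sketch of ours, with a check of the boundary behavior used in Theorem 1 below.

```python
# Weighting function of Equation (32); r trades AIC off against IIC.

def weight(m, a, frame, r):
    return r * aic(m, a, frame) ** (1 - r) + (1 - r) * iic(m, a, frame) ** r

# W equals 1 at both endpoints (AIC^0 = IIC^0 = 1), so any weight above 1
# must come from an interior extremum, as Theorem 1 proves.
print(weight(m, a, frame, 0.0), weight(m, a, frame, 1.0))  # 1.0 1.0
```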
Theorem 1.
The weight varies with the variable $r$; the function curve is neither monotonically increasing nor monotonically decreasing but attains an extreme value at some interior point.
Proof.
Let $W(A_i) = f(r)$, $AIC(A_i) = X$, and $IIC(A_i) = Y$, so that $f(r) = r X^{1-r} + (1 - r) Y^{r}$, where $0 < r < 1$, $X \geq 1$, and $Y \geq 1$. When $r = 0$, $f(0) = 1$; when $r = 1$, $f(1) = 1$. The derivative of the weighting function is given by:
$$f'(r) = X^{1-r} - r X^{1-r} \ln X - Y^{r} + (1 - r) Y^{r} \ln Y,$$
and since $f'(0) = X + \ln Y - 1 \geq 0$ and $f'(1) = 1 - \ln X - Y \leq 0$, $f'(0)$ and $f'(1)$ have opposite signs regardless of the values of $X$ and $Y$. According to the first sufficient condition for determining an extreme value, there exists $Z \in (0, 1)$ such that $f'(Z) = 0$; this point is an extreme value of the function. □
If $A_i$ is a proposition of the $p$th hidden layer, its BPA is assigned to the subset propositions $a_i$ of the $(p+1)$th layer as follows:
$$w(a_i) = \frac{W(a_i)}{\sum_{a_j \subset A_i} W(a_j)},$$
$$N_m(a_i) = m(a_i) + \sum_{a_i \subset A_i \subseteq \Theta,\, |A_i| = n - p} w(a_i)\, N_m(A_i),$$
$$N_m(a_i) = \max\left( 0, N_m(a_i) \right),$$
where $N_m(A_i)$ is the BPA of the proposition $A_i$ and $m(a_i)$ is the bias unit; when there is no bias unit, $N_m(a_i) = \sum_{a_i \subset A_i \in 2^{\Theta},\, |A_i| = n - p} w(a_i)\, N_m(A_i)$. If $A_i$ is a proposition subset of the last hidden layer, its BPA is assigned to the single-element propositions $\theta_i$ as follows:
$$PN_m(\theta_i) = m(\theta_i) + \sum_{\theta_i \subset A_i \subseteq \Theta,\, |A_i| = 2} w(\theta_i)\, N_m(A_i),$$
$$PN_m(\theta_i) = \max\left( 0, PN_m(\theta_i) \right).$$
At this point, the output probability contains the variable parameter $r$.
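To make the layer-by-layer redistribution of Equations (33)–(38) concrete, here is a sketch of ours for a three-element frame (one hidden layer of pairs, as in Example 1 below); it reuses bel, pl, and weight from the earlier sketches, and generalizing to larger frames repeats the same redistribution per layer.

```python
from itertools import combinations

def transform_3(m, frame, r):
    """Probability transformation of Equations (33)-(38) for |Theta| = 3."""
    full = frozenset(frame)
    pairs = [frozenset(c) for c in combinations(sorted(frame), 2)]

    # Hidden layer (Equations (33)-(35)): split the input-layer mass m(full)
    # over the pairs in proportion to their weights, then add each pair's
    # own mass as a bias unit and clip at zero.
    w_pair = {a: weight(m, a, frame, r) for a in pairs}
    total = sum(w_pair.values())
    nm = {a: max(0.0, m.get(a, 0.0) + w_pair[a] / total * m.get(full, 0.0))
          for a in pairs}

    # Recompute singleton weights against the hidden-layer BPAs plus the
    # remaining singleton bias units.
    nm_bias = dict(nm)
    for x in frame:
        if m.get(frozenset({x}), 0.0) > 0:
            nm_bias[frozenset({x})] = m[frozenset({x})]
    w_single = {x: weight(nm_bias, frozenset({x}), frame, r) for x in frame}

    # Output layer (Equations (36)-(38)): each pair's mass is split between
    # its two elements; singleton bias units are added, results clipped.
    p = {x: m.get(frozenset({x}), 0.0) for x in frame}
    for a in pairs:
        denom = sum(w_single[x] for x in a)
        for x in a:
            p[x] += w_single[x] / denom * nm[a]
    return {x: max(0.0, v) for x, v in p.items()}
```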
Theorem 2.
Following the related literature [34], this transformation method can be justified by verifying the consistency of the upper and lower bounds: $Bel(\theta_i) \leq PN_m(\theta_i) \leq Pl(\theta_i)$.
Proof.
Because $PN_m(\theta_i) = m(\theta_i) + \sum_{\theta_i \subset A_i \subseteq \Theta,\, |A_i| = 2} w(\theta_i)\, N_m(A_i)$, we have $PN_m(\theta_i) \geq m(\theta_i)$ and $m(\theta_i) = Bel(\theta_i)$, so $PN_m(\theta_i) \geq Bel(\theta_i)$ holds. Because $N_m(A_i)$ is jointly assigned from the BPAs and bias nodes of the previous layer's propositions, each $\theta_i$ can receive at most the mass of the propositions containing it, so $PN_m(\theta_i) \leq Pl(\theta_i)$. Hence, the inequality $Bel(\theta_i) \leq PN_m(\theta_i) \leq Pl(\theta_i)$ holds. □
The above proof shows that the proposed method is a reasonable decision probability transformation method and, in the following, a specific example with large uncertainty is used to illustrate the calculation process of the proposed method.
Example 1.
Assume the frame of discernment is $\Theta = \{\theta_1, \theta_2, \theta_3\}$, and $m$ is a mass function in $\Theta$. Then, the corresponding BPA is given by:
$$m(\theta_1) = 0.2,\quad m(\theta_2) = 0.1,\quad m(\{\theta_1, \theta_2\}) = 0.3,\quad m(\{\theta_2, \theta_3\}) = 0.25,\quad m(\{\theta_1, \theta_2, \theta_3\}) = 0.15.$$
First, a neural network is constructed based on the evidence, as follows: $m(\{\theta_1, \theta_2, \theta_3\})$ is the input layer; $m(\theta_1)$, $m(\theta_2)$, $m(\{\theta_1, \theta_2\})$, and $m(\{\theta_2, \theta_3\})$ are the bias units; $\{\theta_1, \theta_2\}$, $\{\theta_1, \theta_3\}$, and $\{\theta_2, \theta_3\}$ are the proposition subsets in the hidden layer.
Then, the AIC and IIC values of the hidden-layer propositions are obtained by Equations (30) and (31) and used to construct the weight factors as follows:
$$AIC(\{\theta_1, \theta_2\}) = e^{2^{|\Theta|} \cdot \frac{Bel(\{\theta_1, \theta_2\}) + Pl(\{\theta_1, \theta_2\})}{2}} = 601.85,$$
$$IIC(\{\theta_1, \theta_2\}) = e^{2^{|\Theta|} \cdot \left( Pl(\{\theta_1, \theta_2\}) - Bel(\{\theta_1, \theta_2\}) \right)} = 24.53,$$
$$W(\{\theta_1, \theta_2\}) = r \times 601.85^{\,1-r} + (1 - r) \times 24.53^{\,r}.$$
Similarly,
$$W(\{\theta_1, \theta_3\}) = r \times 81.45^{\,1-r} + (1 - r) \times 270.43^{\,r},$$
$$W(\{\theta_2, \theta_3\}) = r \times 99.48^{\,1-r} + (1 - r) \times 36.60^{\,r}.$$
Then, assigning $m(\{\theta_1, \theta_2, \theta_3\})$ to the proposition subsets in the hidden layer according to the weights and combining the bias units, the BPA of each proposition in the hidden layer is obtained by:
$$N_m(\{\theta_1, \theta_2\}) = m(\{\theta_1, \theta_2\}) + \frac{W(\{\theta_1, \theta_2\})}{W(\{\theta_1, \theta_2\}) + W(\{\theta_1, \theta_3\}) + W(\{\theta_2, \theta_3\})} \times m(\{\theta_1, \theta_2, \theta_3\}) = 0.3 + \frac{W(\{\theta_1, \theta_2\})}{W(\{\theta_1, \theta_2\}) + W(\{\theta_1, \theta_3\}) + W(\{\theta_2, \theta_3\})} \times 0.15,$$
$$N_m(\{\theta_1, \theta_2\}) = \max\left( 0, N_m(\{\theta_1, \theta_2\}) \right),$$
$$N_m(\{\theta_1, \theta_3\}) = \frac{W(\{\theta_1, \theta_3\})}{W(\{\theta_1, \theta_2\}) + W(\{\theta_1, \theta_3\}) + W(\{\theta_2, \theta_3\})} \times 0.15,$$
$$N_m(\{\theta_1, \theta_3\}) = \max\left( 0, N_m(\{\theta_1, \theta_3\}) \right),$$
$$N_m(\{\theta_2, \theta_3\}) = 0.25 + \frac{W(\{\theta_2, \theta_3\})}{W(\{\theta_1, \theta_2\}) + W(\{\theta_1, \theta_3\}) + W(\{\theta_2, \theta_3\})} \times 0.15,$$
$$N_m(\{\theta_2, \theta_3\}) = \max\left( 0, N_m(\{\theta_2, \theta_3\}) \right).$$
For simplicity, in the following, $N_m(\{\theta_1, \theta_2\})$, $N_m(\{\theta_1, \theta_3\})$, and $N_m(\{\theta_2, \theta_3\})$ are denoted by $g_{12}(r)$, $g_{13}(r)$, and $g_{23}(r)$, respectively. Next, the weight of each single-element proposition can be obtained by:
$$W(\theta_1) = r \times \left( e^{4 \left( g_{12}(r) + g_{13}(r) + 0.4 \right)} \right)^{1-r} + (1 - r) \times \left( e^{8 \left( g_{12}(r) + g_{13}(r) \right)} \right)^{r},$$
$$W(\theta_2) = r \times \left( e^{4 \left( g_{12}(r) + g_{23}(r) + 0.2 \right)} \right)^{1-r} + (1 - r) \times \left( e^{8 \left( g_{12}(r) + g_{23}(r) \right)} \right)^{r},$$
$$W(\theta_3) = r \times \left( e^{4 \left( g_{13}(r) + g_{23}(r) \right)} \right)^{1-r} + (1 - r) \times \left( e^{8 \left( g_{13}(r) + g_{23}(r) \right)} \right)^{r}.$$
Similarly, $W(\theta_1)$, $W(\theta_2)$, and $W(\theta_3)$ are denoted by $q_1(r)$, $q_2(r)$, and $q_3(r)$, respectively. The bias units are combined to obtain the probability of each single-element proposition by:
$$PN_m(\theta_1) = 0.2 + \frac{q_1(r)}{q_1(r) + q_2(r)} \times g_{12}(r) + \frac{q_1(r)}{q_1(r) + q_3(r)} \times g_{13}(r),$$
$$PN_m(\theta_1) = \max\left( 0, PN_m(\theta_1) \right),$$
$$PN_m(\theta_2) = 0.1 + \frac{q_2(r)}{q_1(r) + q_2(r)} \times g_{12}(r) + \frac{q_2(r)}{q_2(r) + q_3(r)} \times g_{23}(r),$$
$$PN_m(\theta_2) = \max\left( 0, PN_m(\theta_2) \right),$$
$$PN_m(\theta_3) = \frac{q_3(r)}{q_1(r) + q_3(r)} \times g_{13}(r) + \frac{q_3(r)}{q_2(r) + q_3(r)} \times g_{23}(r),$$
$$PN_m(\theta_3) = \max\left( 0, PN_m(\theta_3) \right).$$
The changing trends of the probability and the PIC value of single-element propositions with r are shown in Figure 4.

4.4. Optimal Probability Calculation of Single-Element Propositions

The larger the PIC value, the lower the information uncertainty in decision-making, the better the transformation results, and the more favorable the decision-making; conversely, the higher the information uncertainty, the worse the transformation results and the less favorable the decision-making. According to [33], to obtain the optimal transformation results, it is necessary to determine the parameter r by maximizing the PIC as follows:
$$\arg\max_{r} PIC\left( PN_m(\Theta) \right) \quad \text{s.t.} \quad PIC\left( PN_m(\Theta) \right) = 1 + \frac{1}{H_{\max}\left( PN_m(\Theta) \right)} \sum_{i=1}^{k} PN_m(\theta_i) \ln PN_m(\theta_i),$$
where $H_{\max}\left( PN_m(\Theta) \right) = -\sum_{i=1}^{k} \frac{1}{k} \ln \frac{1}{k} = \ln k$. For Example 1, it holds that $\max PIC\left( PN_m(\Theta) \right) = 0.2230$ at $r = 0.91$, so $PN_m(\theta_1) = 0.2755$, $PN_m(\theta_2) = 0.6380$, and $PN_m(\theta_3) = 0.0865$. This transformation process is illustrated in Figure 5.
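Since W(A_i) has no closed-form maximizer, a simple grid search over r is enough in practice. The sketch below (ours) scores each candidate transformation by its PIC, reusing transform_3 and pic from the earlier sketches; for Example 1's mass function, the output can be checked against the reported r = 0.91 and PIC = 0.2230.

```python
# Grid search for Equation (39): sweep r over [0, 1] and keep the
# transformation with the largest PIC.

best_r, best_p, best_pic = 0.0, None, -1.0
for i in range(101):
    r = i / 100
    p = transform_3(m, {"t1", "t2", "t3"}, r)
    total = sum(p.values())
    scores = [v / total for v in p.values()]  # guard: the max(0, .) clipping
    score = pic(scores)                       # can de-normalize the output
    if score > best_pic:
        best_r, best_p, best_pic = r, p, score

print(best_r, best_p, best_pic)
```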
Next, a brief overview of the proposed method steps is given. The exact content of each step varies with the actual situation, but the main operations of each of the steps are as follows:
Step 1: Different propositions are combined to construct a neural network model;
Step 2: The AIC and IIC of each proposition are obtained by Equations (30) and (31), the weights combining the AIC and IIC are initialized by Equation (32), and the BPA of each proposition is assigned according to the weights until the probability of each single-element proposition containing the variable parameter r is output by Equations (34)–(38);
Step 3: The parameter r is determined according to the constraints and the optimal transformation results are obtained by Equation (39).

5. Analysis and Numerical Examples

In this section, a few examples are given to compare the transformation results of the proposed method with those of the other methods [31,32,33,34,35,36,37] to verify the rationality and accuracy of the method proposed in this paper. The method performances are evaluated based on the PIC value.
Example 2.
The frame of discernment is $\Theta = \{A, B, C\}$, and $m$ is a mass function in $\Theta$; then, the BPA is given by:
$$m(A) = 0.2,\quad m(\{B, C\}) = 0.8.$$
The probability transformation results of different methods for Example 2 are shown in Table 1.
In the probability transformation, since there is no information on the single-element proposition A in the multi-element proposition, the assignment should be independent of proposition A, and it is impossible to discern a difference between propositions B and C based on the known conditions. Intuitively, the BPA of the multi-element proposition should be assigned equally to the single-element propositions B and C. However, the PrBel transformation method cannot be used to obtain transformation results, and the OWA method is influenced by the single-element proposition A when assigning the BPA of the multi-element proposition, which leads to counter-intuitive transformation results. The other probability transformation methods distribute the BPA of multi-element propositions equally according to the elemental cardinality to obtain the single-element proposition probabilities, which is consistent with the intuitive results.
Example 3.
The frame of discernment is $\Theta = \{A, B\}$, and $m$ is a mass function in $\Theta$; then, the BPA is given by:
$$m(A) = 0.5,\quad m(\{A, B\}) = 0.5.$$
Step 1: According to the cardinality of the propositions, $m(\{A, B\})$ is used as the input layer, $m(A)$ is the bias unit, and the probability of each single-element proposition is obtained from the output layer.
Step 2: The AIC and IIC of the single-element propositions A and B are obtained by Equations (30) and (31), respectively, as follows:
$$AIC(A) = 20.0855,\quad AIC(B) = 2.7183,$$
$$IIC(A) = 7.3891,\quad IIC(B) = 7.3891.$$
The weights of the single-element propositions are calculated by Equation (32) as follows:
$$W(A) = r \times 20.0855^{\,1-r} + (1 - r) \times 7.3891^{\,r},$$
$$W(B) = r \times 2.7183^{\,1-r} + (1 - r) \times 7.3891^{\,r}.$$
The probabilities of the single-element propositions are obtained by Equations (34), (37), and (38):
$$PN_m(A) = 0.5 + \frac{W(A)}{W(A) + W(B)} \times 0.5,$$
$$PN_m(A) = \max\left( 0, PN_m(A) \right),$$
$$PN_m(B) = \frac{W(B)}{W(A) + W(B)} \times 0.5,$$
$$PN_m(B) = \max\left( 0, PN_m(B) \right).$$
The changing trends of the probability and PIC value with r are shown in Figure 6.
Step 3: Obtain the optimal probability by Equation (39) as follows:
$$r = 0.19,\quad PN_m(A) = 0.837,\quad PN_m(B) = 0.163.$$
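A quick numeric check of Example 3 (ours, using the closed-form expressions above): sweeping r and maximizing the PIC should reproduce r ≈ 0.19 and PN_m(A) ≈ 0.837.

```python
import math

# Verify Example 3: W(A), W(B) from Equation (32), PN_m from Equations
# (34)-(38), and PIC maximization from Equation (39).

def probs(r):
    wa = r * 20.0855 ** (1 - r) + (1 - r) * 7.3891 ** r
    wb = r * 2.7183 ** (1 - r) + (1 - r) * 7.3891 ** r
    pa = 0.5 + wa / (wa + wb) * 0.5   # m(A) plus A's share of m({A, B})
    return pa, 1.0 - pa

def pic2(pa, pb):
    return 1.0 + (pa * math.log(pa) + pb * math.log(pb)) / math.log(2)

best_pic, best_r = max((pic2(*probs(i / 100)), i / 100) for i in range(101))
print(best_r, probs(best_r))  # ~0.19, (~0.837, ~0.163)
```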
The transformation results of different methods for Example 3 are shown in Table 2 and Figure 7.
Since the BPA of the single-subset proposition A, but not of the single-subset proposition B, exists directly in the evidence, proposition A should intuitively receive a larger weight when the multi-element proposition is assigned; however, in this test, the OWA method assigned a larger weight to proposition B. The OVGP, PraPl, PrPl, and BetP methods assigned the BPA of the multi-element proposition equally to propositions A and B. The OVGWP method and the proposed method considered the prior information of $m(A) = 0.5$, thus taking into account the connection between the multi-element proposition and each single-element proposition, and assigned a larger weight to the single-subset proposition A. The transformation results are reasonable, and the PIC is relatively larger, which is more favorable for decision-making.
Example 4.
The frame of discernment is $\Theta = \{A, B, C\}$, and $m$ is a mass function in $\Theta$; then, the BPA is given by:
$$m(A) = 0.1,\quad m(\{A, B\}) = 0.2,\quad m(\{B, C\}) = 0.3,\quad m(\{A, B, C\}) = 0.4.$$
Step 1: According to the proposition cardinality, $m(\{A, B, C\}) = 0.4$ is used as the input layer of the neural network; the hidden-layer proposition subsets are $\{A, B\}$, $\{A, C\}$, and $\{B, C\}$; $m(A) = 0.1$, $m(\{A, B\}) = 0.2$, and $m(\{B, C\}) = 0.3$ are the bias units; $PN_m(A)$, $PN_m(B)$, and $PN_m(C)$ form the output layer.
Step 2: Calculate the AIC and IIC values of the propositions $\{A, B\}$, $\{A, C\}$, and $\{B, C\}$ by Equations (30) and (31) as follows:
$$AIC(\{A, B\}) = 181.2722,\quad AIC(\{A, C\}) = 81.4509,\quad AIC(\{B, C\}) = 121.5104,$$
$$IIC(\{A, B\}) = 270.4264,\quad IIC(\{A, C\}) = 1339.4308,\quad IIC(\{B, C\}) = 121.5104.$$
The weights of each proposition are obtained by Equation (32) as follows:
$$W(\{A, B\}) = r \times 181.2722^{\,1-r} + (1 - r) \times 270.4264^{\,r},$$
$$W(\{A, C\}) = r \times 81.4509^{\,1-r} + (1 - r) \times 1339.4308^{\,r},$$
$$W(\{B, C\}) = r \times 121.5104^{\,1-r} + (1 - r) \times 121.5104^{\,r}.$$
The BPAs of the proposition subsets are obtained by Equations (34) and (35) as follows:
$$N_m(\{A, B\}) = 0.2 + \frac{W(\{A, B\})}{W(\{A, B\}) + W(\{A, C\}) + W(\{B, C\})} \times 0.4 = g_{AB}(r),$$
$$N_m(\{A, B\}) = \max\left( 0, g_{AB}(r) \right),$$
$$N_m(\{A, C\}) = \frac{W(\{A, C\})}{W(\{A, B\}) + W(\{A, C\}) + W(\{B, C\})} \times 0.4 = g_{AC}(r),$$
$$N_m(\{A, C\}) = \max\left( 0, g_{AC}(r) \right),$$
$$N_m(\{B, C\}) = 0.3 + \frac{W(\{B, C\})}{W(\{A, B\}) + W(\{A, C\}) + W(\{B, C\})} \times 0.4 = g_{BC}(r),$$
$$N_m(\{B, C\}) = \max\left( 0, g_{BC}(r) \right).$$
The AIC and IIC values of each single-element proposition are calculated by Equations (30) and (31) as follows:
$$AIC(A) = e^{4 \left( 0.2 + g_{AB}(r) + g_{AC}(r) \right)},\quad AIC(B) = e^{4 \left( g_{AB}(r) + g_{BC}(r) \right)},\quad AIC(C) = e^{4 \left( g_{AC}(r) + g_{BC}(r) \right)},$$
$$IIC(A) = e^{8 \left( g_{AB}(r) + g_{AC}(r) \right)},\quad IIC(B) = e^{8 \left( g_{AB}(r) + g_{BC}(r) \right)},\quad IIC(C) = e^{8 \left( g_{AC}(r) + g_{BC}(r) \right)}.$$
The weights of each single-element proposition are obtained by Equation (32) as follows:
$$W(A) = r \times AIC(A)^{1-r} + (1 - r) \times IIC(A)^{r} = q_A(r),$$
$$W(B) = r \times AIC(B)^{1-r} + (1 - r) \times IIC(B)^{r} = q_B(r),$$
$$W(C) = r \times AIC(C)^{1-r} + (1 - r) \times IIC(C)^{r} = q_C(r).$$
The probabilities of each single-element proposition are calculated by Equations (37) and (38):
$$PN_m(A) = 0.1 + \frac{q_A(r)}{q_A(r) + q_B(r)} \times g_{AB}(r) + \frac{q_A(r)}{q_A(r) + q_C(r)} \times g_{AC}(r),$$
$$PN_m(A) = \max\left( 0, PN_m(A) \right),$$
$$PN_m(B) = \frac{q_B(r)}{q_A(r) + q_B(r)} \times g_{AB}(r) + \frac{q_B(r)}{q_B(r) + q_C(r)} \times g_{BC}(r),$$
$$PN_m(B) = \max\left( 0, PN_m(B) \right),$$
$$PN_m(C) = \frac{q_C(r)}{q_A(r) + q_C(r)} \times g_{AC}(r) + \frac{q_C(r)}{q_B(r) + q_C(r)} \times g_{BC}(r),$$
$$PN_m(C) = \max\left( 0, PN_m(C) \right).$$
When the value of parameter r varies in the range of [0, 1], the probability and PIC curves change, as shown in Figure 8.
Step 3: Determine the optimal probability by Equation (39):
$$r = 0.16,\quad PN_m(A) = 0.305,\quad PN_m(B) = 0.494,\quad PN_m(C) = 0.201.$$
The transformation results of different methods are shown in Table 3 and Figure 9.
The PrBel method is limited and cannot obtain correct transformation results. The PraPl, ITP, and OWA methods assign a larger weight to proposition A than to proposition B, due to the direct presence of the BPA of the single-element proposition A in the evidence; however, they do not consider that the multi-element propositions contain the single-element proposition B and that $Pl(B) > Pl(A)$. Intuitively, the single-element proposition B should have a larger weight. The transformation results of the PrPl, MPSV, OVGP, and OVGWP methods and the proposed method are reasonable. Compared with the other methods, the proposed method has the largest PIC value of 0.0596 and lower information uncertainty after transformation, which is more beneficial to the decision-making process.

6. Practical Application

In this section, the newly proposed method is applied to the practical problem of target recognition to further verify its effectiveness.
Example 5.
The Iris dataset can be divided into three categories, Setosa, Versicolor, and Virginica, giving the frame of discernment $\Theta = \{Se, Ve, Vi\}$; each sample contains four attributes, SL, SW, PL, and PW. The Iris objects are then modeled as the BPAs $m_{SL}$, $m_{SW}$, $m_{PL}$, and $m_{PW}$ in light of the attributes, as given in Table 4.
The specific steps can be described as follows:
Step 1: The degree of credibility between the bodies of evidence is measured according to the literature [42].
Step 2: The credibility degree of each body of evidence is modified.
Step 3: The weighted average evidence is obtained by taking into account the relationship between the bodies of evidence and the relative importance of the collected evidence.
Step 4: According to the Dempster combination rule (a sketch follows these steps), the combination result is obtained by combining the weighted average evidence three times.
Step 5: Because of the large information uncertainty in the combination result, the combination result is transformed into a probability distribution by the proposed method.
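Step 4 relies on Dempster's rule of combination, which is standard DS theory rather than specific to this paper; a generic sketch of ours follows, using the mass-function representation of the earlier sketches.

```python
# Dempster's rule: multiply masses of intersecting focal elements and
# renormalize by 1 - K, where K is the total conflicting mass.

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for b, v1 in m1.items():
        for c, v2 in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# "Combined three times" as in Step 4, with w_avg the weighted average
# evidence (a hypothetical variable name for illustration):
# fused = w_avg
# for _ in range(3):
#     fused = dempster(fused, w_avg)
```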
For the target recognition problem in the Iris dataset, the proposed method is compared with some existing methods [35,42,43,44], as shown in Table 5.
The recognition results of the five methods are the same, and the category of the Iris sample is identified as Ve. It can be seen from Table 5 that the proposed method outperformed the comparison methods: its belief in Ve is as high as 92.01%, whereas the belief degrees of Xiao's method [43], Jiang's method [44], MSDF [42], and MPSV [35] for the category Ve are 73.90%, 87.98%, 91.63%, and 91.86%, respectively.

7. Conclusions

How to reasonably transform the BPA under the DS evidence theory into probability before decision-making has become a major research hotspot. In this paper, a probability transformation method is proposed by combining the BPA with a neural network. The BPA of the multi-element proposition with maximum cardinality is used as the input network layer, and the BPAs of the remaining propositions are used as bias nodes, which are assigned to the proposition subsets in each hidden layer of the neural network according to the weights. The probability of each single-element proposition is obtained as the network output. The AIC and IIC values of each proposition subset are determined using the belief and plausibility functions, respectively, and then combined to obtain the weight factors containing the variable parameter, so that the output probability also contains the variable. Finally, the maximization of the PIC is used as a constraint to determine the variable and obtain the optimal probability transformation result. The proposed method is verified by numerical examples and compared with the other methods. The results indicate that the proposed method is more reasonable and has better generalizability and lower information uncertainty in the transformed results than the other methods, which makes it more beneficial for decision-making. However, the proposed method may involve a large computational effort when the cardinality of a multi-element proposition is too large. In the future, a more comprehensive evaluation index for probability transformation results could be explored to further verify the rationality and superiority of this method and to apply this method to more practical scenarios.

Author Contributions

Methodology, J.L. and A.Z.; validation, J.L.; formal analysis, H.L.; writing—original draft preparation, A.Z.; writing—review and editing, J.L. and A.Z.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (No. 61976080), the Programs for Science and Technology Development of Henan Province, China (No. 222102210004), and the Key Research Projects of the University in Henan Province, China (No. 20B510001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, J.; Deng, Z.; Song, Q.; Chi, Q.; Wu, T.; Huang, Y.; Liu, D.; Gao, M. Multi-UAV counter-game model based on uncertain information. Appl. Math. Comput. 2020, 366, 124684. [Google Scholar] [CrossRef]
  2. Xiao, F.; Wen, J.; Pedrycz, W. Generalized divergence-based decision making method with an application to pattern classification. IEEE Trans. Knowl. Data Eng. 2022. [Google Scholar] [CrossRef]
  3. Liu, P.; Diao, H.; Zou, L.; Deng, A. Uncertain multi-attribute group decision making based on linguistic-valued intuitionistic fuzzy preference relations. Inf. Sci. 2019, 508, 293–308. [Google Scholar] [CrossRef]
  4. Yan, M.; Wang, J.; Dai, Y.; Han, H. A method of multiple-attribute group decision making problem for 2-dimension uncertain linguistic variables based on cloud model. Optim. Eng. 2021, 22, 2403–2427. [Google Scholar] [CrossRef]
  5. Zhu, D.; Cheng, X.; Yang, L.; Chen, Y.; Yang, S. Information fusion fault diagnosis method for deep-sea human occupied vehicle thruster based on deep belief network. IEEE Trans. Cybern. 2021, 52, 9414–9427. [Google Scholar] [CrossRef]
  6. Yao, L.; Wu, Y. Robust fault diagnosis and fault-tolerant control for uncertain multiagent systems. Int. J. Robust Nonlinear Control. 2020, 30, 8192–8205. [Google Scholar] [CrossRef]
  7. Chen, Y.; Tang, Y. An improved approach of incomplete information fusion and its application in sensor data-based fault diagnosis. Mathematics 2021, 9, 1292. [Google Scholar] [CrossRef]
  8. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A novel fast single image dehazing algorithm based on artificial multiexposure image Fusion. IEEE Trans. Instrum. Meas. 2021, 70, 500153. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Hua, C. A new adaptive visual tracking scheme for robotic system without image-space velocity information. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5249–5258. [Google Scholar] [CrossRef]
  10. Li, L.; Xie, Y.; Chen, X.; Yue, W.; Zeng, Z. Dynamic uncertain causality graph based on cloud model theory for knowledge representation and reasoning. Int. J. Mach. Learn. Cybern. 2020, 11, 1781–1799. [Google Scholar] [CrossRef]
  11. Legner, C.; Pentek, T.; Otto, B. Accumulating design knowledge with reference models: Insights from 12 years’ research into data management. J. Assoc. Inf. Syst. 2020, 21, 735–770. [Google Scholar] [CrossRef]
  12. Chen, J.; Wang, C.; He, K.; Zhao, Z.; Chen, M.; Du, R.; Ahn, G. Semantics-aware privacy risk assessment using self-learning weight assignment for mobile apps. IEEE Trans. Dependable Secur. Comput. 2021, 18, 15–29. [Google Scholar] [CrossRef] [Green Version]
  13. Zhang, Z.; Tang, Q.; Ruiz, R.; Zhang, L. Ergonomic risk and cycle time minimization for the U-shaped worker assignment assembly line balancing problem: A multi-objective approach. Comput. Oper. Res. 2020, 118, 104905. [Google Scholar] [CrossRef]
  14. Liu, M.; Liang, B.; Zheng, F.; Chu, F. Stochastic airline fleet assignment with risk aversion. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3081–3090. [Google Scholar] [CrossRef]
  15. Mahmood, N.; Butalia, T.; Qin, R.; Manasrah, M. Concurrent events risk assessment generic models with enhanced reliability using Fault tree analysis and expanded rotational fuzzy sets. Expert Syst. Appl. 2022, 197, 116681. [Google Scholar] [CrossRef]
  16. Xiao, F.; Pedrycz, W. Negation of the quantum mass function for multisource quantum information fusion with its application to pattern classification. IEEE Trans. Pattern Anal. Mach. Intell. 2022. [Google Scholar] [CrossRef]
  17. Xiao, F.; Cao, Z.; Lin, T. A complex weighted discounting multisource information fusion with its application in pattern classification. IEEE Trans. Knowl. Data Eng. 2022. [Google Scholar] [CrossRef]
  18. Xiao, F. On the maximum entropy negation of a complex-valued distribution. IEEE Trans. Fuzzy Syst. 2021, 29, 3259–3269. [Google Scholar] [CrossRef]
  19. Cui, H.; Zhou, L.; Li, Y.; Kang, B. Belief entropy-of-entropy and its application in the cardiac interbeat interval time series analysis. Chaos Solitons Fractals 2022, 155, 111736. [Google Scholar] [CrossRef]
  20. Zhang, H.; Deng, Y. Entropy measure for orderable sets. Inf. Sci. 2021, 561, 141–151. [Google Scholar] [CrossRef]
  21. Xiao, F. GIQ: A generalized intelligent quality-based approach for fusing multi-source information. IEEE Trans. Fuzzy Syst. 2021, 29, 2018–2031. [Google Scholar] [CrossRef]
  22. Deng, X.; Jiang, W.; Wang, Z. Zero-sum polymatrix games with link uncertainty: A Dempster-Shafer theory solution. Appl. Math. Comput. 2019, 340, 101–112. [Google Scholar] [CrossRef]
  23. Xue, Y.; Deng, Y. Mobius transformation in generalized evidence theory. Appl. Intell. 2021, 52, 7818–7831. [Google Scholar] [CrossRef]
  24. Xiao, F. A new divergence measure for belief functions in D–S evidence theory for multisensor data fusion. Inf. Sci. 2020, 514, 462–483. [Google Scholar] [CrossRef]
  25. Deng, J.; Deng, Y.; Cheong, K. Combining conflicting evidence based on Pearson correlation coefficient and weighted graph. Int. J. Intell. Syst. 2020, 36, 7443–7460. [Google Scholar] [CrossRef]
  26. Wang, Z.; Xiao, F. An improved multi-Source data fusion method based on the belief entropy and divergence measure. Entropy 2019, 21, 611. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Deng, Z.; Wang, J. Measuring total uncertainty in evidence theory. Int. J. Intell. Syst. 2021, 36, 1721–1745. [Google Scholar] [CrossRef]
  28. Zhao, K.; Li, L.; Chen, Z.; Sun, R.; Yuan, G.; Li, J. A survey: Optimization and applications of evidence fusion algorithm based on Dempster-Shafer theory. Appl. Soft Comput. 2022, 124, 106075. [Google Scholar] [CrossRef]
  29. Li, R.; Chen, Z.; Li, H.; Tang, Y. A new distance-based total uncertainty measure in Dempster-Shafer evidence theory. Appl. Intell. 2021, 52, 1209–1237. [Google Scholar] [CrossRef]
  30. Qiang, C.; Deng, Y. A new correlation coefficient of mass function in evidence theory and its application in fault diagnosis. Appl. Intell. 2021, 52, 7832–7842. [Google Scholar] [CrossRef]
  31. Sudano, J. Pignistic probability transforms for mixes of low- and high-probability events. Comput. Sci. 2015, 23–27. [Google Scholar]
  32. Smets, P.; Kennes, R. The transferable belief model. Artif. Intell. 1994, 66, 191–234. [Google Scholar] [CrossRef]
  33. Pan, L.; Deng, Y. Probability transform based on the ordered weighted averaging and entropy difference. Int. J. Comput. Commun. Control. 2020, 15, 3743. [Google Scholar] [CrossRef]
  34. Deng, Z.; Wang, J. A novel decision probability transformation method based on belief interval. Knowl.-Based Syst. 2020, 208, 106427. [Google Scholar] [CrossRef]
  35. Huang, C.; Mi, X.; Kang, B. Basic probability assignment to probability distribution function based on the Shapley value approach. Int. J. Intell. Syst. 2021, 36, 4210–4236. [Google Scholar] [CrossRef]
  36. Li, M.; Zhang, Q.; Deng, Y. A new probability transformation based on the ordered visibility graph. Int. J. Comput. Int. Syst. 2016, 31, 44–67. [Google Scholar] [CrossRef]
  37. Chen, L.; Deng, Y.; Cheong, K. Probability transformation of mass function: A weighted network method based on the ordered visibility graph. Eng. Appl. Artif. Intell. 2021, 105, 104438. [Google Scholar] [CrossRef]
  38. Rathipriya, R.; Rahman, A.; Dhamodharavadhani, S.; Meero, A.; Yoganandan, G. Demand forecasting model for time-series pharmaceutical data using shallow and deep neural network model. Neural. Comput. Appl. 2022, 1–13. [Google Scholar] [CrossRef]
  39. Jagtap, A.; Shin, Y.; Kawaguchi, K.; Karniadakis, G. Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing 2021, 468, 165–180. [Google Scholar] [CrossRef]
  40. Shannon, C.E. A mathematical theory of communication. Sigmobile Mob. Comput. Commun. Rev. 2001, 5, 3–55. [Google Scholar] [CrossRef] [Green Version]
  41. Boob, D.; Dey, S.; Lan, G. Complexity of training ReLU neural network. Discret. Optim 2022, 44, 100620. [Google Scholar] [CrossRef]
  42. Xiao, F. Evidence combination based on prospect theory for multi-sensor data fusion. ISA Trans. 2020, 106, 253–261. [Google Scholar] [CrossRef] [PubMed]
  43. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32. [Google Scholar] [CrossRef]
  44. Jiang, W.; Hu, W.; Xie, C. A new engine fault diagnosis method based on multi-sensor data fusion. Appl. Sci. 2017, 7, 280. [Google Scholar] [CrossRef]
Figure 1. Illustration of evidence intervals of the DS evidence theory.
Figure 2. Schematic representation of the neural network structure.
Figure 3. Internal structure of neurons.
Figure 4. The changing trend of the single-element proposition probability (a) and the changing trend of the PIC value (b).
Figure 5. The implementation process of Example 1.
Figure 6. The changing trend of the single-element proposition probability (a) and the changing trend of the PIC value (b).
Figure 7. The probabilistic transformation results (a) and the PIC values (b) of different methods.
Figure 8. The changing trend of the single-element proposition probability (a) and the changing trend of the PIC value (b).
Figure 9. The probabilistic transformation results (a) and the PIC values (b) of different methods for Example 4.
Table 1. Probability transformation results of different methods for Example 2.

Method        A        B        C
PraPl [31]    0.2      0.4      0.4
PrPl [31]     0.2      0.4      0.4
PrBel [31]    0.2      NaN      NaN
BetP [32]     0.2      0.4      0.4
ITP [34]      0.2      0.4      0.4
MPSV [35]     0.2      0.4      0.4
OWAP [33]     0.3230   0.3756   0.3014
OVGP [36]     0.2      0.4      0.4
OVGWP1 [37]   0.2      0.4      0.4
OVGWP2 [37]   0.2      0.4      0.4
PNm           0.2      0.4      0.4
Table 2. Probability transformation results of different methods for Example 3.

Method        A        B        PIC
PraPl [31]    0.75     0.25     0.1877
PrPl [31]     0.75     0.25     0.1877
PrBel [31]    1        NaN      NaN
BetP [32]     0.75     0.25     0.1877
ITP [34]      1        0        NaN
MPSV [35]     1        0        NaN
OWAP [33]     0.6      0.4      0.0290
OVGP [36]     0.75     0.25     0.1887
OVGWP1 [37]   0.875    0.125    0.4564
OVGWP2 [37]   0.875    0.125    0.4564
PNm           0.837    0.163    0.3594
Table 3. Probability transformation results of different methods for Example 4.

Method        A        B        C        PIC
PraPl [31]    0.374    0.352    0.274    0.0078
PrPl [31]     0.309    0.438    0.253    0.0240
PrBel [31]    0.7      NaN      NaN      NaN
BetP [32]     0.333    0.383    0.284    0.0067
ITP [34]      0.414    0.388    0.198    0.0414
MPSV [35]     0.333    0.400    0.267    0.0122
OWAP [33]     0.384    0.356    0.260    0.0120
OVGP [36]     0.393    0.418    0.189    0.0474
OVGWP1 [37]   0.364    0.445    0.191    0.0494
OVGWP2 [37]   0.357    0.451    0.192    0.0500
PNm           0.305    0.494    0.201    0.0596
Table 4. Modeled BPA of an object from the Iris dataset.

BPA              m_SL     m_SW     m_PL           m_PW
m(Se)            0.0437   0.0865   1.40 × 10⁻⁹    8.20 × 10⁻⁶
m(Ve)            0.3346   0.2879   0.6570         0.6616
m(Vi)            0.2916   0.1839   0.1726         0.1692
m({Se, Ve})      0.0437   0.0863   1.30 × 10⁻⁹    8.20 × 10⁻⁶
m({Se, Vi})      0.0239   0.0865   1.40 × 10⁻¹¹   3.80 × 10⁻⁶
m({Ve, Vi})      0.2385   0.1825   0.1704         0.1692
m({Se, Ve, Vi})  0.0239   0.0863   1.40 × 10⁻¹¹   3.80 × 10⁻⁶
Table 5. Results of the target recognition problem in the Iris dataset.

Method               m(Se)         m(Ve)    m(Vi)    Target
Xiao's method [43]   0.0053        0.7390   0.2407   Ve
Jiang's method [44]  4.90 × 10⁻⁴   0.8798   0.1130   Ve
MSDF [42]            6.88 × 10⁻⁵   0.9163   0.0790   Ve
MPSV [35]            7.24 × 10⁻⁵   0.9186   0.0813   Ve
PNm                  0.0001        0.9201   0.0798   Ve
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
