Article

Evidence Network Inference Recognition Method Based on Cloud Model

Naval Aviation University, Yantai 264001, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(2), 318; https://doi.org/10.3390/electronics12020318
Submission received: 5 December 2022 / Revised: 4 January 2023 / Accepted: 4 January 2023 / Published: 8 January 2023
(This article belongs to the Section Systems & Control Engineering)

Abstract

Uncertainty is widely present in target recognition, and it is particularly important to express and reason about this uncertainty. Exploiting the advantages of the evidence network in uncertainty processing, this paper presents an evidence network reasoning recognition method based on cloud fuzzy belief. In this method, a hierarchical structure model of an evidence network is constructed; the maximum information coefficient (MIC) is used to measure the degree of correlation between nodes and determine the existence of edges, and the belief of the corresponding attributes is generated based on the cloud model. In addition, an information-entropy method is used to determine the conditional belief table of non-root nodes, and target recognition under uncertain conditions is then realized by evidence network reasoning. The simulation results show that the proposed method can deal with random uncertainty and cognitive uncertainty simultaneously, overcomes the inability of traditional methods to carry out hierarchical recognition, and can effectively use sensor information and expert knowledge to realize deep cognition of the target intention.

1. Introduction

Uncertainty is widespread in the real world, and such uncertainty can be either aleatory uncertainty, caused by randomness, or epistemic uncertainty, caused by limited knowledge; thus, it is particularly important to represent and reason about uncertainty [1,2,3,4,5]. The development of information technology has greatly enhanced sensor detection capabilities and has provided rich information for target intent recognition. However, in the face of the increasingly complex information environment, both sensor target attribute information and a priori knowledge present great uncertainty and complexity [6,7,8,9,10], which makes accurate target intent recognition difficult. As a result, the effective fusion of target attribute information acquired in real time with a priori knowledge has become the mainstream approach to target intent recognition in recent years.
From the existing theoretical and applied research on target intent recognition, the mainstream methods currently include template matching, Bayes theory [11,12], DS evidence theory [13,14,15], fuzzy inference, neural networks, deep learning, reinforcement learning, etc. Jiang et al. [16] constructed a template library based on domain expert knowledge, extracted features from target actions, and determined target intent by using DS evidence to infer the degree of matching between the features and the template library. Yang et al. [17] constructed a dynamic sequence Bayesian network model and proposed an intention recognition model based on the extended multi-entity Bayesian network by analyzing the limitations of the multi-entity Bayesian network in expressing the probability transfer relationships of rule knowledge. Xu et al. [18] realized target intention recognition by defining target motion feature parameters and using target state features and expert knowledge to build fuzzy inference models and rules. Li et al. [19] treated target intent recognition as a multi-classification problem and proposed a long short-term memory (LSTM) target recognition method based on an improved attention mechanism. Xue et al. [20] designed a new deep learning method known as panoramic convolutional long short-term memory networks (PCLSTM) to improve target recognition, in order to solve the problem that traditional methods have difficulty effectively capturing the essential features of target information. Chen et al. [21] applied the knowledge graph to target intention recognition, built the target ontology model, and analyzed the binary relations to obtain the knowledge graph, into which signals were then input to realize intention recognition.
Target intention recognition is a process of deducing the target's intention from observations of its features and behaviors. It is related not only to multiple feature dimensions but also to multiple logical levels. The effectiveness of system decisions can only be improved after the target feature information obtained by various sensors is fused and reasoned over [22,23,24,25]. Existing methods generally judge the target intention directly from motion features and do not fully consider the hierarchical characteristics of the target intention, which makes them time-consuming and inefficient and makes real-time recognition of hierarchical target intention difficult to achieve. Among the many uncertain information reasoning methods, the evidential reasoning methods represented by DS evidence theory and its derived model, the evidential network (EN) [26,27,28,29,30], have a strong ability to express and reason about uncertain information and are considered very suitable for target recognition in complex information environments [31,32,33,34,35]. To solve the problem of target recognition using an evidence network, an important task is to determine the topology of the network and the relationships between its nodes. Typically, the traditional evidence network modeling approach is to manually establish the network structure and provide the network parameters according to causal relationships based on expert experience. This approach is suitable for situations with scarce knowledge and data; however, it does not apply to cases with large amounts of statistical and measurement data.
To address the above two aspects, this paper proposes a data-driven evidence network inference recognition method based on traditional recognition algorithms that cannot perform hierarchical recognition and fail to effectively utilize sensor information. This method can combine the hierarchical idea of the expert thinking process and use the data to automatically obtain the network structure and parameters so as to realize the evidence inference recognition for the hierarchical target intent.
The rest of the paper is organized as follows. Section 2 of the paper provides the theoretical foundation of evidence networks, including DS evidence theory and evidence networks. Section 3 lists the method of constructing the evidence network structure model for multi-level target intent recognition, including evidence network structure modeling, cloud model-based belief generation, evidence network parameter construction, and evidence network inference methods. Section 4 presents numerical simulations to test and validate the method proposed in this paper. Finally, a brief conclusion is provided in Section 5.

2. Theoretical Foundations of Evidence Network

2.1. DS Evidence Theory

Evidence theory, a set of mathematical theories established by Dempster and Shafer in the late 1960s and early 1970s, is a further expansion of probability theory. It is flexible and effective at dealing with imprecision and uncertainty in the absence of a priori information, and it has been widely used in fields such as pattern recognition, fault diagnosis, risk analysis, and human reliability analysis.
Definition 1.
Let the finite set of all values of a proposition be denoted by $\Theta = \{X_1, X_2, \ldots, X_i, \ldots, X_N\}$, where $\Theta$ is a complete set composed of $N$ independent and mutually exclusive elements. Then $\Theta$ is called the frame of discernment of the proposition. The power set $2^\Theta$ of $\Theta$ is a set of $2^N$ elements,

$$2^\Theta = \{\varnothing, \{X_1\}, \{X_2\}, \ldots, \{X_N\}, \{X_1, X_2\}, \ldots, \{X_1, X_2, \ldots, X_i\}, \ldots, \Theta\}$$

where $\varnothing$ denotes the empty set. If $A \in 2^\Theta$, then $A$ is said to be a hypothesis or proposition.
Definition 2.
For the identification frame $\Theta$, if a function $m: 2^\Theta \to [0, 1]$ satisfies the conditions

$$\begin{cases} m(\varnothing) = 0 \\ \sum_{A \in 2^\Theta} m(A) = 1 \\ m(A) \ge 0 \end{cases}$$

then $m$ is called a basic probability assignment (BPA) or mass function. A BPA is essentially an evaluation weight for the various hypotheses; however, it is not a probability because it does not satisfy countable additivity. If $m(A) > 0$, $A$ is called a focal element of the basic probability assignment $m$ on $\Theta$, and the set of all focal elements forms the core of the BPA.
Definition 3.
Let $m$, $Bel$, and $Pl$ be the BPA, belief function, and plausibility function on $\Theta$, respectively. Then for any $A \in 2^\Theta$,

$$Bel(A) = \sum_{B \subseteq A} m(B)$$

$$Pl(A) = 1 - Bel(\bar{A}) = \sum_{B \cap A \neq \varnothing} m(B)$$

$Bel(A)$ and $Pl(A)$ are the lower and upper bound functions of proposition $A$, respectively, and $Pl(A) \ge Bel(A)$.
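To make Definitions 2 and 3 concrete, belief and plausibility can be computed directly from a mass function whose focal elements are stored as sets. The following sketch is illustrative only (the frame, focal elements, and mass values are not from the paper):

```python
def bel(A, m):
    """Bel(A) = sum of m(B) over all focal elements B with B subset of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    """Pl(A) = sum of m(B) over all focal elements B intersecting A."""
    return sum(v for B, v in m.items() if B & A)

# Example BPA on the frame {X1, X2}; mass 0.3 stays on the whole frame
# (ignorance), which separates Bel from Pl.
m = {frozenset({"X1"}): 0.5,
     frozenset({"X2"}): 0.2,
     frozenset({"X1", "X2"}): 0.3}
A = frozenset({"X1"})
print(bel(A, m))  # 0.5  (lower bound)
print(pl(A, m))   # 0.8  (upper bound), so Pl(A) >= Bel(A)
```

The interval $[Bel(A), Pl(A)]$ narrows to a point exactly when no mass remains on multi-element sets intersecting both $A$ and its complement.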
Definition 4.
Assuming that $m_1$ and $m_2$ are two independent BPAs on the identification frame $\Theta$, Dempster's combination rule can be expressed as follows:

$$m(A) = \begin{cases} \dfrac{1}{1-K} \displaystyle\sum_{B, C \in 2^\Theta,\ B \cap C = A} m_1(B)\, m_2(C), & A \neq \varnothing \\ 0, & A = \varnothing \end{cases}$$

where $K$ can be expressed as

$$K = \sum_{B, C \in 2^\Theta,\ B \cap C = \varnothing} m_1(B)\, m_2(C)$$

$K$ is the conflict coefficient, which measures the degree of conflict among the focal elements of the evidence. The greater the value of $K$, the greater the conflict. Obviously, Dempster's combination rule is valid only if $K < 1$.
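Dempster's rule can be sketched in a few lines; the two example BPAs below are hypothetical and serve only to show how the conflicting mass $K$ is collected and the remainder renormalized:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses, discard conflict K, renormalize.
    BPAs are dicts mapping frozenset focal elements to mass values."""
    K = 0.0
    combined = {}
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                K += v1 * v2  # conflicting mass
    if K >= 1.0:
        raise ValueError("total conflict (K = 1): rule undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

m1 = {frozenset({"H"}): 0.6, frozenset({"H", "M"}): 0.4}
m2 = {frozenset({"H"}): 0.5, frozenset({"M"}): 0.3, frozenset({"H", "M"}): 0.2}
m12 = dempster_combine(m1, m2)  # K = 0.18; mass on {H} = 0.62/0.82
```

Here the only conflicting pair is $\{H\} \cap \{M\} = \varnothing$, contributing $0.6 \times 0.3 = 0.18$ to $K$.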

2.2. Evidential Network

In many dynamic complex systems, state probability estimation is very difficult. An evidence network combines the advantages of DS evidence theory and the Bayesian network to provide an intuitive problem description method, which makes the relationship between the variables easy to understand. It is a widely used uncertain reasoning method that can solve the uncertain problem in target recognition more effectively. Evidence network models are able to transform uncertain knowledge and logical relationships into graphical models, making it possible for uncertainty to propagate through the network, while taking into account the dependencies between dynamic evolution and the conditions of use.
An evidence network is a directed acyclic graph (DAG) that can be formally described as E N = ( ( Q , A ) , P ) , where E N denotes an evidence network, ( Q , A ) denotes a directed acyclic graph with Q nodes, Q denotes the set of nodes, A denotes the set of connected arcs between nodes, the directed arcs between nodes denote logical relationships between the variables, and P denotes the network parameters. An evidence network consists of a network structure and network parameters, and a basic evidence network is shown in Figure 1.
Using the conditional belief function (CBF) as the parameter model of an evidence network is a very effective way to describe knowledge, which can quantitatively describe the degree of association between the nodes. The expression of the parameters of the evidence network model is provided below.
Definition 5.
Let $\Theta$ be the identification frame and $m$ be the basic belief assignment on $\Theta$. For $A, B \subseteq \Theta$, the conditional basic belief is defined as

$$m(B \mid A) = \begin{cases} \displaystyle\sum_{X \subseteq \bar{A}} m(B \cup X), & B \subseteq A \subseteq \Theta \\ 0, & \text{otherwise} \end{cases}$$
Definition 6.
Let $\Theta$ be the identification frame. $Bel$ is the belief function on $\Theta$, and for $A, B \subseteq \Theta$, the conditional belief function on $\Theta$ is defined as

$$Bel(B \mid A) = Bel(B \cup \bar{A}) - Bel(\bar{A}), \quad B \subseteq \Theta$$

where $Bel(B \mid A)$ represents the belief of $B$ given $A$.
Definition 7.
Let $\Theta$ be the identification frame. $Pl$ is the plausibility function on $\Theta$, and for $A, B \subseteq \Theta$, the conditional plausibility function on $\Theta$ is defined as

$$Pl(B \mid A) = Pl(A \cap B), \quad B \subseteq \Theta$$
According to DS evidence theory, there is a correspondence between the BPA, belief function, and plausibility function. Similarly, the conditional BPA, conditional belief function, and conditional plausibility function are correlated and can be transformed into one another. The transformation relationships are given below.
Theorem 1.
Let the identification frames of $X$ and $Y$ be $\Theta_X$ and $\Theta_Y$, respectively. If $x \subseteq \Theta_X$, then for any $y \subseteq \Theta_Y$, the conditional belief function of $y$ given $x$ is

$$Bel(y \mid x) = \sum_{z \subseteq \Theta_Y,\ z \subseteq y} m(z \mid x)$$
Theorem 2.
Let the identification frames of $X$ and $Y$ be $\Theta_X$ and $\Theta_Y$, respectively. If $x \subseteq \Theta_X$, then for any $y \subseteq \Theta_Y$, the conditional plausibility function of $y$ given $x$ is

$$Pl(y \mid x) = \sum_{z \subseteq \Theta_Y,\ z \cap y \neq \varnothing} m(z \mid x)$$
Theorem 3.
The following relationships exist between the conditional BPA, conditional belief function, and conditional plausibility function:

$$\begin{cases} Bel(y \mid x) = 1 - Pl(\bar{y} \mid x) \\ m(y \mid x) = \displaystyle\sum_{z \subseteq y} (-1)^{|y \setminus z|}\, Bel(z \mid x) \end{cases}$$

3. Structural Modeling of Multi-Level Target Intent Recognition Evidence Network

This section will introduce a target intention recognition method based on evidence network reasoning. Based on data preprocessing, measured data obtained by sensors are used to drive the generation of an evidence network structure and root node belief function, and the evidence network reasoning algorithm based on the conditional belief parameters is then used to solve the non-root node belief function and finally achieve target intention recognition. Figure 2 shows the intent recognition process.

3.1. Evidence Network Structure Modeling

In order to describe the relationships between the nodes of an evidence network more accurately, keep the number of node elements as small as possible, and improve reasoning efficiency, the correlation between the elements of the network should be considered. The stronger the correlation between two elements, the greater the possibility of a causal relationship between them. Therefore, an evidence network structure modeling method is needed to establish the relationship model between the elements. The structure of an evidence network expresses the causal relationships between events; structure modeling expresses the logic of target recognition at the level of qualitative analysis, identifying the key elements in the system and establishing the relationships between them. The structural model of an evidence network consists mainly of nodes and edges: nodes represent the research objects, and edges describe the logical relations between nodes. Because the nodes can be determined by the variables in the data set, the edges between the nodes can be mined from the data set.
Traditional evidence network structure modeling is too subjective, so this paper adopts a data-driven evidence network structure modeling method. In order to identify nonlinear functional relations and process uncertain information, the maximum information coefficient (MIC) is introduced in this section to measure the degree of correlation between nodes, determine the existence of edges, and establish the structure of an evidence network. The MIC can capture the correlation between two variables from massive data. Because the MIC is symmetric, that is, $MIC(x, y) = MIC(y, x)$, the correlation it identifies between variables is undirected.
Assuming that two variables are correlated, a grid can be drawn on the scatter plot of the two variables to partition their relationship. Let $D$ represent the finite data set of ordered pairs $(x, y)$, and let the data sample size be $N$. Given the ordered pair $\langle a, b \rangle$, the $x$- and $y$-axes can be divided into multiple units; such a division is called grid $G$. Let $D|G$ represent the distribution of the points of $D$ on grid $G$. For fixed $D$, different grids $G$ produce different distributions. The characteristic matrix of data set $D$ can be calculated by the following formula:

$$M(D)_{x,y} = \frac{\max I(D|G)}{\log \min\{a, b\}}$$

where $\max I(D|G)$ represents the maximum mutual information over grid partitions $G$ of data set $D$.
For data set D with sample size N = n , the maximum information coefficient can be calculated by the following formula:
$$MIC(D) = \max_{xy < B(n)} \{M(D)_{x,y}\}$$
where B ( n ) is a function of sample size, usually set as B ( n ) = n 0.6 . The degree of dependence of two nodes can be determined by the size of the MIC. If M I C x , y is large, it indicates that node x is directly or indirectly connected to node y through one or more nodes; if M I C x , y is small, it indicates that the probability of association between node x and node y is small. When the MIC value between two nodes is large, it does not mean that the two nodes must be directly connected in the evidence network. When a pair of nodes are indirectly connected through multiple nodes, their MIC value will be larger. Therefore, if only the MIC value is used to determine whether nodes are connected, redundant edges will be introduced. An example of producing redundant edges is shown in Figure 3. Figure 3a shows that node x and node y are indirectly connected through nodes a , b , and c , respectively, resulting in a large MIC value. If nodes are directly connected according to the MIC value, the obtained evidence network structure, as shown in Figure 3b, produces redundant edges and is inconsistent with the actual network.
This paper introduces a modified MIC evidence network structure modeling method [36]. The steps are as follows:
Step 1: Create an MIC value list for node $x_i$, storing the MIC values between node $x_i$ and all other nodes in descending order, denoted $MIC_{x_i}$.

Step 2: Correct the MIC value and record it as $MIC^{\delta}_{x_i}$:

$$MIC^{\delta}_{x_i}(x_j) = MIC_{x_i}(x_j) - \frac{\delta}{m} \sum_{h=1}^{m} \left( MIC_{x_i}(x_h) + MIC_{x_j}(x_h) \right)$$
where $MIC_{x_i}(x_j)$ represents the MIC value between node $x_i$ and node $x_j$ and $\delta$ is the penalty factor. The larger the penalty factor, the smaller the probability of generating triangle rings in the network; $\delta = 0.1$ is preferable. $x_h$ denotes a node that appears in both list $MIC_{x_i}$ and list $MIC_{x_j}$, and $m$ is the number of such nodes.
Step 3: Let $MMIC^{\delta}_{x_i}$ be the maximum value in the list $MIC^{\delta}_{x_i}$, and let $\gamma$ be the isolation threshold factor. If $MMIC^{\delta}_{x_i} < \gamma$, node $x_i$ is considered an isolated node.
Step 4: Node $x_i$ and node $x_j$ have an undirected edge only if the following condition is met:

$$MIC^{\delta}_{x_i}(x_j) \ge \alpha \times MMIC^{\delta}_{x_i} \quad \text{or} \quad MIC^{\delta}_{x_j}(x_i) \ge \alpha \times MMIC^{\delta}_{x_j}$$
where α is the connection threshold factor. The larger α is, the stricter the conditions for generating edges and the lower the probability of generating redundant edges.
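The four steps above can be sketched as follows, assuming a precomputed symmetric matrix of pairwise MIC values (computing MIC itself is omitted). The simplification of $x_h$ to "all other nodes" and the default factor values are assumptions made for illustration:

```python
def mic_structure(mic, delta=0.1, gamma=0.05, alpha=0.8):
    """Sketch of the modified-MIC edge selection (Steps 1-4).
    `mic` is a precomputed symmetric matrix (list of lists) of MIC values;
    delta, gamma, alpha are the penalty, isolation, and connection factors."""
    n = len(mic)
    corrected = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            others = [h for h in range(n) if h not in (i, j)]
            penalty = 0.0
            if others:
                # Step 2: penalise by the mean MIC both nodes share with the
                # remaining nodes, discouraging redundant triangle edges.
                penalty = delta / len(others) * sum(mic[i][h] + mic[j][h]
                                                   for h in others)
            corrected[i][j] = mic[i][j] - penalty
    # Step 3: a node whose best corrected score is below gamma is isolated.
    best = [max(row) for row in corrected]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if best[i] < gamma or best[j] < gamma:
                continue
            # Step 4: undirected edge if either endpoint rates the pair highly.
            if (corrected[i][j] >= alpha * best[i]
                    or corrected[j][i] >= alpha * best[j]):
                edges.add((i, j))
    return edges

# Nodes 0-1 and 1-2 are strongly related; 0-2 is only indirectly related.
mic = [[0.0, 0.9, 0.5],
       [0.9, 0.0, 0.9],
       [0.5, 0.9, 0.0]]
edges = mic_structure(mic)  # the redundant 0-2 edge is pruned
```

On this toy matrix the corrected score of the indirect pair (0, 2) falls below the $\alpha$ threshold, so only the chain edges survive, matching the intent of Figure 3.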

3.2. Belief Generation Based on a Cloud Model

3.2.1. Gaussian Cloud Model

The cloud model [37] is a mathematical method for modeling uncertain data. Three digital features, the expected value $E_x$, entropy $E_n$, and hyper entropy $H_e$, represent a concept as a whole: $E_x$ reflects the center of gravity of the cloud droplet group; $E_n$ reflects the range acceptable to the qualitative concept, that is, its ambiguity; and $H_e$ reflects the cohesion of the uncertainty of all points, that is, the agglomeration of the cloud droplets.
The two-order normal forward cloud model generator is adopted, and its implementation process is described as follows.
Step 1: Generate a normal random number $E_n'$ with $E_n$ as the expected value and $H_e^2$ as the variance:

$$E_n' = \mathrm{NORM}(E_n, H_e^2)$$

Step 2: Substitute the measured value $x$ to obtain the certainty degree $u(x)$:

$$u(x) = \exp\left( -\frac{(x - E_x)^2}{2 E_n'^2} \right)$$

where $E_x$ is the expected value of the target characteristic attribute in the database, $E_n'$ is the normal random number obtained in Step 1, and $x$ is the measured value of the unknown target.
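A minimal sketch of the two-step forward cloud generator, assuming the Python standard library's normal generator stands in for $\mathrm{NORM}$; note that the membership at $x = E_x$ is always 1 regardless of the drawn $E_n'$:

```python
import math
import random

def cloud_membership(x, Ex, En, He, rng=random):
    """One droplet of the forward cloud generator: draw En' ~ N(En, He^2),
    then return u(x) = exp(-(x - Ex)^2 / (2 * En'^2))."""
    En_prime = rng.gauss(En, He)
    return math.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))

random.seed(42)
u_center = cloud_membership(150.0, 150.0, 10.0, 0.5)  # exactly 1.0 at x = Ex
# Averaging many droplets approximates the expected membership curve.
u_off = sum(cloud_membership(170.0, 150.0, 10.0, 0.5)
            for _ in range(1000)) / 1000
```

Because $E_n'$ is redrawn per droplet, repeated evaluations at the same off-center $x$ scatter around the mean curve; that scatter is exactly what $H_e$ controls.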

3.2.2. A Method of Belief Generation Based on Cloud Model

The expectation of the cloud model for a target characteristic parameter is set to the mean of the parameter's prior values. The entropy is $k$ times the standard deviation of the prior values, where $k$ reflects the dispersion of the estimation noise error. The hyper entropy is $l$ times the standard deviation of the prior values, where $l$ reflects the randomness of the cloud model distribution. Assuming that the expected value of the characteristic parameter of an attribute class is $E_x$ and the standard deviation is $E_n$, the membership degree of the corresponding attribute class is calculated from the measured data and then converted into the BPA on the attribute.
Step 1: Generate a normal random number with an expected value of $k \cdot E_n$ and a variance of $(l \cdot E_n)^2$: $E_{n_i} = \mathrm{NORM}(k \cdot E_n, (l \cdot E_n)^2)$.

Step 2: Substitute the target measurement feature parameter $x_i = x_0$ and calculate its membership degree in a given feature class as:

$$\mu_i = \exp\left( -\frac{(x_0 - E_x)^2}{2 E_{n_i}^2} \right)$$
In this way, the Gaussian cloud model can be used to describe the membership distribution of training samples on each attribute.
Step 3: Construction of composite category attribute model
Target identification needs to rely on multiple attributes, such as speed and height. For the target speed attribute, the membership functions corresponding to high speed, medium speed, and low speed are denoted $\mu_H(x)$, $\mu_M(x)$, and $\mu_L(x)$, respectively. Different membership functions may intersect, which corresponds to a combined state of categories. If the membership function $\mu_{HM}(x)$ represents the attribute model that may belong to category $H$ or category $M$, its mathematical expression can be described as:

$$\mu_{HM}(x) = \min\{\mu_H(x), \mu_M(x)\}$$
The membership function $\mu_{HML}(x)$ represents the attribute model that may belong to category $H$, $M$, or $L$, and its mathematical expression can be described as:

$$\mu_{HML}(x) = \min\{\mu_H(x), \mu_M(x), \mu_L(x)\}$$
Correspondingly, the membership function $\mu_{12 \cdots n}(x)$ represents the attribute model that may belong to any category in the identification framework $\Theta = \{1, 2, \ldots, n\}$, and its mathematical expression can be described as:

$$\mu_{12 \cdots n}(x) = \min\{\mu_1(x), \mu_2(x), \ldots, \mu_n(x)\}$$
Step 4: Match the test sample with the Gaussian cloud model
Suppose Q is a proposition in the identification frame and t is the value of the test sample on an attribute. Then, the matching degree between the test sample t and proposition Q is defined as:
$$F(Q \mid t) = \mu_Q(x)\big|_{x=t}$$
The size of $F(Q \mid t)$ represents the matching degree between the sample and proposition $Q$, which depends on the intersection of the Gaussian cloud model corresponding to $Q$ with the test sample. $Q$ can be either a singleton proposition or a multi-subset proposition. For example, for the target velocity data set, the matching degrees between test sample $t$ and propositions $\{H\}$, $\{H, M\}$, and $\{H, M, L\}$ are defined as:

$$\begin{cases} F(H \mid t) = \mu_H(x)\big|_{x=t} \\ F(HM \mid t) = \mu_{HM}(x)\big|_{x=t} \\ F(HML \mid t) = \mu_{HML}(x)\big|_{x=t} \end{cases}$$
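Steps 3 and 4 can be illustrated together: singleton memberships come from Gaussian models, composite propositions take the minimum, and the matching degrees are normalized into a BPA. The class parameters below are hypothetical, the hyper entropy is ignored (deterministic memberships), and normalizing by the total is one simple way to obtain a BPA, not necessarily the paper's exact procedure:

```python
import math

def gauss_membership(x, Ex, En):
    """Membership of x in a class with expectation Ex and entropy En."""
    return math.exp(-(x - Ex) ** 2 / (2 * En ** 2))

# Hypothetical (Ex, En) per speed class.
classes = {"H": (200.0, 10.0), "M": (150.0, 10.0), "L": (100.0, 10.0)}

def match_degrees(t):
    """Matching degrees of sample t with singleton and composite
    propositions, normalized into a BPA."""
    mu = {c: gauss_membership(t, *p) for c, p in classes.items()}
    F = {
        frozenset({"H"}): mu["H"],
        frozenset({"M"}): mu["M"],
        frozenset({"L"}): mu["L"],
        frozenset({"H", "M"}): min(mu["H"], mu["M"]),   # mu_HM = min(mu_H, mu_M)
        frozenset({"M", "L"}): min(mu["M"], mu["L"]),
        frozenset({"H", "M", "L"}): min(mu.values()),   # mu_HML
    }
    total = sum(F.values())
    return {prop: v / total for prop, v in F.items()}

bpa = match_degrees(160.0)  # closest to the M class, so {M} gets most mass
```

A sample at 160 sits near the medium-speed expectation, so after normalization the singleton $\{M\}$ dominates while the composites carry only residual mass.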
Step 5: Pignistic probability transformation
Because belief is also assigned to multi-subset propositions, a decision cannot be made directly from the BPA. The basic belief assignment $m(\cdot)$ can therefore be approximated as a probability by the pignistic probability transformation $BetP(\cdot)$ to facilitate a hard decision in evidential reasoning.
Assuming that $m(\cdot)$ is the belief function on the identification frame $\Theta$ and that $BetP(\cdot)$ represents its pignistic probability distribution, then for any element $x \in \Theta$,

$$BetP(x) = \sum_{x \in B,\ B \in 2^\Theta} \frac{1}{|B|} \cdot \frac{m(B)}{1 - m(\varnothing)}$$

where $B$ is a proposition of the belief function $m$ and $|B|$ represents the cardinality of proposition $B$.
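The pignistic transformation spreads the mass of each focal element evenly over its members; a short sketch (the example BPA is illustrative):

```python
def betp(m):
    """Pignistic transformation: each focal element B shares
    m(B)/(1 - m(empty)) equally among its |B| members."""
    empty = m.get(frozenset(), 0.0)
    out = {}
    for B, v in m.items():
        if not B:
            continue  # skip the empty set
        share = v / (len(B) * (1.0 - empty))
        for x in B:
            out[x] = out.get(x, 0.0) + share
    return out

m = {frozenset({"H"}): 0.5,
     frozenset({"H", "M"}): 0.3,
     frozenset({"H", "M", "L"}): 0.2}
p = betp(m)  # H = 0.5 + 0.3/2 + 0.2/3, M = 0.3/2 + 0.2/3, L = 0.2/3
```

The result is a genuine probability distribution (it sums to 1), which makes hard decisions such as arg-max straightforward.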

3.3. Construction of Evidence Network Parameters

An evidence network is mainly composed of a network structure and network parameters. Network parameters express quantitative knowledge, that is, the degree of influence from cause to result, and use different forms of belief function to represent the quantitative correlation between nodes. In the process of building an evidence network model for target recognition, network parameters can be in the form of a conditional belief table, and expert knowledge can be included in the conditional belief table, which can reflect the degree of correlation or influence between nodes.
The original DS evidence theory only considers evidence modeling under the same recognition framework. For the relationships between nodes in the evidence network model, it is necessary to consider the evidence representation of two or more variables under different recognition frameworks, namely the joint belief function model. When the scale of the problem is large and the variable state space is large, the joint belief function model easily suffers from combinatorial explosion. The conditional belief function is an effective way to describe knowledge, and its role is similar to that of the conditional probability function in Bayesian networks. Compared with the joint belief function model, the conditional belief function model is more direct and easier to understand; from the perspective of model complexity, it also has advantages over the joint belief function. Therefore, this paper chooses the conditional belief function as the parameter model of the evidence network.
In order to generate conditional beliefs, an information entropy-based method is used to determine the conditional belief table. Assume that the characteristic attribute of target intent recognition is $AT_i \in \{0, 1, 2, 3\}$, which can be graded by multiple experts, and that the evaluation result of the $k$-th expert is $E_{ij}^{k} = \{(G_n, \beta_n^k),\ n = 1, \ldots, N\}$, where $G_n$ represents the $n$-th evaluation level, $\beta_n^k$ represents its degree of support, and $\sum_{n=1}^{N} \beta_n^k = 1$. If $p(G_n)$ represents the probability of evaluation grade $G_n$, a probability distribution $P$ defined on $\Theta$ can be obtained from the evaluation results. When determining the conditional belief distribution, entropy can describe the degree of uncertainty in the information: the information entropy of this probability distribution is calculated and taken as the belief value of the multi-subset. To obtain the belief values of the singleton elements, the remaining belief is allocated to the singletons in proportion:
$$\begin{cases} m_P(\{G_n \mid G_n \in \Theta,\ p(G_n) \neq 0\}) = -\displaystyle\sum_{n=1}^{N} p(G_n) \log_k p(G_n) \\ m_P(G_n) = \left[ 1 - m_P(\{G_n \mid G_n \in \Theta,\ p(G_n) \neq 0\}) \right] p(G_n) \end{cases}$$
where $N$ represents the number of evaluation levels and $k$ represents the number of experts. When the evaluation levels of all experts are different, $p(G_n) = \frac{1}{k}$ and $m_P(\{G_n \mid G_n \in \Theta,\ p(G_n) \neq 0\}) = m_P(\Theta) = 1$; in this case, the result is completely unknown. If all expert evaluations are consistent, for example all equal to $G_1$, then $p(G_1) = 1$ and $m_P(G_1) = 1$, which means that all belief is assigned to $G_1$.
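A sketch of this entropy-based parameter generation, assuming each of the $k$ experts assigns full support to a single grade (so $p(G_n)$ is the fraction of experts choosing $G_n$); the key `"Theta"` standing for the multi-subset is a naming convention of this sketch only:

```python
import math

def entropy_belief(grades, k):
    """Entropy-based belief from expert grades.
    `grades` maps an evaluation level to the number of experts choosing it;
    `k` is the total number of experts."""
    total = sum(grades.values())
    p = {g: c / total for g, c in grades.items() if c > 0}
    # Base-k information entropy of the expert distribution becomes
    # the mass on the multi-subset of non-zero levels.
    m_theta = -sum(pi * math.log(pi, k) for pi in p.values())
    m = {g: (1.0 - m_theta) * pi for g, pi in p.items()}
    m["Theta"] = m_theta
    return m

print(entropy_belief({"G1": 3}, k=3))                    # consensus: all mass on G1
print(entropy_belief({"G1": 1, "G2": 1, "G3": 1}, k=3))  # disagreement: mass on Theta
```

This reproduces the two limiting cases stated above: full consensus yields $m_P(G_1) = 1$, while complete disagreement among $k$ experts yields $m_P(\Theta) = 1$ (total ignorance).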

3.4. The Method of Evidence Network Inference

By building an evidence network with conditional beliefs as parameters, and after obtaining measurement information for some evidence network nodes, the state of the unknown nodes can be estimated through evidence network inference. That is, based on the known network structure model and parameter model, a belief inference algorithm is used to calculate the inferred values of the nodes of interest. In practice, evidence network inference for target recognition obtains the key variables of the target recognition system, establishes the relationships between them through the evidence network structure model, uses the evidence network parameter model to describe the degree of influence between variables, and then deduces the state information of the other nodes from the state information of the input variables. In theory, evidence network inference with the conditional belief function as the parameter model requires a rigorous mathematical derivation to achieve effective information transmission. The inference mode can be divided into forward reasoning and reverse reasoning, and the inference process includes three main steps: inference initialization, information transfer, and node information update.
Figure 4 shows the evidence network model with conditional belief as the parameter. The node set of the evidence network is $N = \{Y, Z, X\}$, the directed arc set is $A = \{(Y, X), (Z, X)\}$, and the parameter models are $Bel(x|y)$ and $Bel(x|z)$. The identification frames of $Y$, $Z$, and $X$ are $\Theta_Y$, $\Theta_Z$, and $\Theta_X$, with $y \subseteq \Theta_Y$, $z \subseteq \Theta_Z$, $x \subseteq \Theta_X$. Then, the state information of $X$ can be obtained from $y$, $z$ and the conditional beliefs $Bel(x|y)$ and $Bel(x|z)$; that is, $Bel_X(x)$ can be regarded as a function $F$ of $y$, $z$, $Bel(x|y)$, and $Bel(x|z)$:

$$Bel_X(x) = F(y, z, Bel(x|y), Bel(x|z))$$
The solution of function F needs to go through several steps, such as information transformation, forward reasoning, and belief synthesis. Information transformation can be processed through the relationship between beliefs. This paper mainly studies the forward reasoning algorithm, proposes a new belief synthesis algorithm, and realizes the evidence network reasoning under the conditional belief parameter model.
Assuming that the identification frames of nodes $Y$ and $X$ are $\Theta_Y$ and $\Theta_X$, respectively, then for $y \subseteq \Theta_Y$, $x \subseteq \Theta_X$, the conditional basic belief from $Y$ to $X$ under the extension and marginalization operations is

$$m_X(x \mid y) = \sum_{\bigcup_{i:\, y_i \in y} x_i = x}\ \prod_{i:\, y_i \in y} m_X(x_i \mid y_i)$$
If the belief information on each subset of $Y$ is known and denoted by $m_0(y)$, $y \subseteq \Theta_Y$, then for $x \subseteq \Theta_X$ the basic belief on $X$ can be expressed as

$$m_X(x) = \sum_{y \subseteq \Theta_Y} m_X(x \mid y)\, m_0(y)$$
The conditional belief and conditional plausibility functions can be expressed as:

$$Bel_X(x) = \sum_{y \subseteq \Theta_Y} m_0(y)\, Bel_X(x \mid y) = \sum_{y \subseteq \Theta_Y} m_0(y) \left( \prod_{y_i \in y} b_X(x \mid y_i) - \prod_{y_i \in y} b_X(\varnothing \mid y_i) \right)$$

$$Pl_X(x) = \sum_{y \subseteq \Theta_Y} m_0(y)\, Pl_X(x \mid y) = \sum_{y \subseteq \Theta_Y} m_0(y) \left( 1 - \prod_{y_i \in y} \left( 1 - Pl_X(x \mid y_i) \right) \right)$$
For any x Θ X , y Θ Y ,
$$Pl_X(y \mid x) = \frac{Pl(x \mid y)\, Pl(y)}{Pl(x)}$$
Through forward reasoning, $Bel_X(x)|_{y_0}$ can be obtained from $y_0$ and $Bel(x|y)$, and $Bel_X(x)|_{z_0}$ can be obtained from $z_0$ and $Bel(x|z)$. $Bel_X(x)|_{y_0}$ and $Bel_X(x)|_{z_0}$ then need to be synthesized to obtain the final reasoning result for $X$; Dempster's rule of combination can be adopted for this evidence synthesis.
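The forward reasoning and synthesis steps can be sketched as follows: each parent's belief is pushed through its conditional table via $m_X(x) = \sum_y m_X(x|y)\, m_0(y)$, and the two partial results are fused with Dempster's rule. All frames, focal elements, and numbers below are hypothetical:

```python
def propagate(m0, cond):
    """Forward step: m_X(x) = sum over y of m_X(x|y) * m0(y), where `cond`
    maps each focal element y of the parent to a BPA over the child X."""
    out = {}
    for y, w in m0.items():
        for x, v in cond[y].items():
            out[x] = out.get(x, 0.0) + w * v
    return out

def dempster(m1, m2):
    """Dempster's rule for fusing the two partial results (Section 2.1)."""
    K, out = 0.0, {}
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            if B & C:
                out[B & C] = out.get(B & C, 0.0) + v1 * v2
            else:
                K += v1 * v2
    return {A: v / (1.0 - K) for A, v in out.items()}

x1, x2 = frozenset({"x1"}), frozenset({"x2"})
x12 = frozenset({"x1", "x2"})
# Beliefs on parents Y and Z, and their (hypothetical) conditional tables.
m0_y = {frozenset({"y1"}): 0.7, frozenset({"y2"}): 0.3}
cond_y = {frozenset({"y1"}): {x1: 0.8, x12: 0.2},
          frozenset({"y2"}): {x2: 0.6, x12: 0.4}}
m0_z = {frozenset({"z1"}): 1.0}
cond_z = {frozenset({"z1"}): {x1: 0.5, x12: 0.5}}

fused = dempster(propagate(m0_y, cond_y), propagate(m0_z, cond_z))
# Most of the fused mass lands on {x1}.
```

Both partial results favor $x_1$, so after fusion the conflict $K$ is small and the singleton $\{x_1\}$ carries most of the mass, which is the hard decision a pignistic transform would then confirm.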

4. The Illustrative Example

This section takes target intention recognition as an example to verify the proposed method systematically, adopting hierarchical evidence network reasoning divided into three levels: the state level, action level, and intention level. The data-driven cloud model generates the belief degrees of the target's feature attributes, making full use of the prior information of known target samples; the information entropy-based method determines the conditional belief table, effectively integrating expert knowledge; and evidential reasoning finally realizes the recognition of the target's intention.

4.1. Constructing Evidence Network

Based on the prior knowledge of experts in combination with the MIC algorithm, the correlation between nodes is judged. The evidence network structure modeling is carried out for the target intention recognition, and the dependency and ownership relationship between nodes are obtained. As shown in Figure 5, the target intention can be expressed in a hierarchical way, which is divided into the state layer, action layer, and intention layer. The perceived state describes the state data of the target, indicating the change in the target’s action, including the distance, speed, and other information. Action reasoning describes the action taken by the target and the role it plays, including entity action and electromagnetic action, among which entity action is associated with distance and speed information and electromagnetic action is associated with alarm information. Intentional reasoning describes the inference or estimation of an entity’s intention.
The previous experiment shows that once the network structure and parameters are determined, the target intention can be estimated by the inference method. Regarding structure construction, the algorithm correctly builds a hierarchical evidence network structure by combining expert prior knowledge with the MIC algorithm. Compared with the traditional approach that relies solely on expert prior knowledge, the algorithm effectively exploits the correlation information contained in the data of different attributes. Experimental verification shows that the distance, azimuth, and velocity information are associated with the entity action node; the warning information is associated with the electromagnetic action node; and the entity action and electromagnetic action nodes are associated with the target intention node. The whole network structure in Figure 5 conforms to the conditional independence of the data set, and the constructed structure is reasonable and meets the requirements of evidential reasoning.
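The MIC-based edge screening can be illustrated with a simplified stand-in. The true MIC of Reshef et al. maximizes normalized mutual information over many grid shapes; the sketch below evaluates only a single equal-width grid, so it approximates the idea rather than implementing the full algorithm, and the threshold value and function names are our own choices:

```python
import math

def grid_mi(xs, ys, bins=4):
    """Crude stand-in for MIC: mutual information of x and y discretized
    on an equal-width bins-by-bins grid, normalized by log(bins).
    (The real MIC maximizes this quantity over many grid shapes.)"""
    n = len(xs)
    def discretize(vs):
        lo, hi = min(vs), max(vs)
        if hi == lo:
            return [0] * n
        return [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in vs]
    bx, by = discretize(xs), discretize(ys)
    joint, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[a] * py[b]))
    return mi / math.log(bins)

def has_edge(xs, ys, threshold=0.3):
    """Keep an edge between two attribute nodes only when the
    dependence score clears a (hypothetical) threshold."""
    return grid_mi(xs, ys) >= threshold
```

A perfectly dependent pair scores near 1 and keeps its edge; an attribute paired with a constant scores near 0 and is pruned.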

4.2. Cloud Fuzzy Belief Generation

Training samples were selected from the data set to construct its cloud model on each attribute; the remaining samples were used as test samples to evaluate the BPA generation.
Taking {H, M, L} as the recognition frame of each target feature attribute R, A, and V, the Ex, En, and He of the corresponding training samples are calculated respectively. For example, under attribute R, Ex_L = 100, Ex_M = 150, Ex_H = 200, En = 1, and He = 0.2. Similarly, we can obtain the Ex, En, and He under attribute A and under attribute V. From these parameters, the Gaussian cloud models with En′ = NORM(En, He²) on the target attributes R, A, and V can be constructed, as shown in Figure 6.
On the basis of obtaining the Gaussian cloud model of each attribute of the target, the input sample can be matched with the Gaussian cloud model to obtain the matching degree between the input sample and the characteristic attributes of the target, thus generating the BPA of the test sample.
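A minimal sketch of this matching step, using the attribute-R parameters quoted above (Ex_L = 100, Ex_M = 150, Ex_H = 200, En = 1, He = 0.2); the cloud-drop averaging and the normalization of memberships into a BPA are our assumptions about the procedure:

```python
import math, random

# Cloud parameters (Ex, En, He) for attribute R, taken from the text above
CLOUDS_R = {"L": (100.0, 1.0, 0.2), "M": (150.0, 1.0, 0.2), "H": (200.0, 1.0, 0.2)}

def cloud_membership(x, Ex, En, He, drops=500, seed=0):
    """Average membership of x in a Gaussian cloud (Ex, En, He): each
    drop perturbs the entropy, En' ~ N(En, He^2), then evaluates
    exp(-(x - Ex)^2 / (2 En'^2)); averaging tempers the hyper-entropy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(drops):
        En_p = rng.gauss(En, He)
        if abs(En_p) < 1e-9:
            continue  # skip degenerate drops
        total += math.exp(-((x - Ex) ** 2) / (2.0 * En_p ** 2))
    return total / drops

def bpa_from_clouds(x, clouds=CLOUDS_R):
    """Normalize the cloud memberships of an input reading into a BPA
    over {L, M, H} (a normalization convention we assume here)."""
    mu = {k: cloud_membership(x, *p) for k, p in clouds.items()}
    s = sum(mu.values()) or 1.0
    return {k: v / s for k, v in mu.items()}
```

For a reading near 100, the L cloud dominates the resulting BPA, which then serves as front-end input to the evidence network.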

4.3. Conditional Belief Parameter

Target intent recognition takes input information from sensors, including target distance (R), azimuth (A), velocity (V), and warning information (W), and outputs the target intent by reasoning through the evidence network. There are specific dependence or influence relationships between the nodes of the evidence network. Table 1 describes the frames of the variables related to target intent recognition.
The evidence network requires a conditional belief table, conditioned on the state values of each node, to represent the strength of the causal relationships between nodes. In the process of target intention recognition, a conditional belief table based on the value states of a node's parents is likewise needed to express the strength of the relationships between events or activities. The number of parameters to be estimated for the conditional belief grows exponentially with the number of parent nodes. Therefore, when constructing the conditional belief table, only the direct causes of an event should be selected, reducing the number of parameters and avoiding the combinatorial explosion problem. The conditional belief can be obtained by using sample learning or expert estimation. When there is not enough sample data, the conditional belief must be estimated, which is a very difficult task. In this paper, expert knowledge is used to estimate the conditional belief parameters. It is assumed that the conditional belief parameters are formed by combining the knowledge of five experts, from which the conditional belief of the entity action node is obtained. The identification frame of the entity action node is Θ = {F, Y, P}, and each expert makes an independent evaluation of the three action styles F, Y, and P. According to the expert evaluation results, the conditional beliefs m(PO = F|L, M, H), m(PO = Y|L, M, H), m(PO = P|L, M, H), …, m(PO = FY|L, M, H), …, m(PO = YP|L, M, H) of the entity action node PO are obtained. When the five experts provide the evaluation results {E1 = F, E2 = Y, E3 = F, E4 = Y, E5 = F} for m(PO = U|L, M, H), U ⊆ {F, Y, P}, the vote distribution is p(F) = 3/5, p(Y) = 2/5, p(P) = 0, and the belief distribution of the entity action node PO can be obtained through the following calculation:
m(PO = FY|L, M, H) = −(3/5 · log_5(3/5) + 2/5 · log_5(2/5)) = 0.4182
m(PO = F|L, M, H) = (1 − 0.4182) × 3/5 = 0.3491
m(PO = Y|L, M, H) = (1 − 0.4182) × 2/5 = 0.2327
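The entropy calculation above can be reproduced in code. We read the formula as assigning the vote-distribution entropy, with logarithm base equal to the number of experts (an assumption consistent with the log_5 in the equations), to the multi-element set, and scaling the singletons by the remaining mass:

```python
import math
from collections import Counter

def conditional_belief_from_votes(votes):
    """Turn expert votes into a conditional belief assignment: the
    entropy of the vote distribution (log base = number of experts)
    becomes the mass on the multi-element set of voted options, and
    the remaining mass is split among singletons in vote proportion."""
    n = len(votes)
    probs = {k: c / n for k, c in Counter(votes).items()}
    H = -sum(p * math.log(p, n) for p in probs.values() if p > 0)
    belief = {k: (1.0 - H) * p for k, p in probs.items()}
    if len(probs) > 1:
        belief[frozenset(probs)] = H  # mass on the multi-element set
    return belief

# The five-expert example from the text: {F, Y, F, Y, F}
m = conditional_belief_from_votes(["F", "Y", "F", "Y", "F"])
```

This reproduces the row R = L, A = M, V = H of Table 2: m(FY) = 0.4182, m(F) = 0.3491, m(Y) = 0.2327.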
The conditional belief parameters of the evidence network of node PO, node EO, and node TI were obtained according to the above methods, as shown in Table 2, Table 3 and Table 4.
In the experiment, the information-entropy-based method is used to determine the conditional belief table, making full use of expert knowledge to estimate the conditional belief parameters. As can be seen from Table 2, Table 3 and Table 4, the conditional belief parameters of node PO, node EO, and node TI quantitatively reflect the degree of influence of lower-level nodes on upper-level nodes. In the conditional probability tables of Bayesian networks, probability can only be assigned to single elements of the identification frame. In the evidence network, by contrast, the conditional parameters can be assigned to multi-element sets. As shown in Table 3, under the condition W = N, in addition to the assignments to the single elements EO = H and EO = L, the conditional belief parameter m(HL) = 0.3109 is assigned to the multi-element set EO = HL, indicating that there is cognitive uncertainty among EO = H, L, and HL under the condition W = N; part of the belief is assigned to the element HL of the node's identification frame. This form of multi-element belief allocation can fully describe imprecise and incomplete information, reflecting the advantage of the evidential reasoning network.

4.4. Evidence Network Reasoning Based on Conditional Belief Parameters

Assume that at a certain moment, the basic beliefs of target features R, A, V, and W after cloud fuzzy belief processing are
m(R = H) = 0.1, m(R = M) = 0.2, m(R = L) = 0.7,
m(A = H) = 0.6, m(A = M) = 0.2, m(A = L) = 0.2,
m(V = H) = 0.5, m(V = M) = 0.3, m(V = L) = 0.2,
m(W = N) = 0.2, m(W = S) = 0.3, m(W = T) = 0.5.
According to the basic belief assignment and conditional basic belief, the corresponding node information is obtained by forward reasoning.
(1) Use the alarm information to conduct electromagnetic action EO reasoning:
Bel(EO = H) = m(EO = H|W = N)·m(W = N) + m(EO = H|W = S)·m(W = S) + m(EO = H|W = T)·m(W = T) = 0.6323
Bel(EO = L) = m(EO = L|W = N)·m(W = N) + m(EO = L|W = S)·m(W = S) + m(EO = L|W = T)·m(W = T) = 0.1801
Bel(EO = HL) = m(EO = HL|W = N)·m(W = N) + m(EO = HL|W = S)·m(W = S) + m(EO = HL|W = T)·m(W = T) = 0.1876
After the pignistic transformation, the single-element belief degrees of EO are obtained: Bel(EO = H) = 0.7261, Bel(EO = L) = 0.2739.
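Step (1) and its pignistic transformation can be checked numerically with the Table 3 parameters:

```python
# Conditional belief of node EO given W (values from Table 3)
COND_EO = {
    "N": {"H": 0.1378, "L": 0.5513, "HL": 0.3109},
    "S": {"H": 0.3491, "L": 0.2327, "HL": 0.4182},
    "T": {"H": 1.0,    "L": 0.0,    "HL": 0.0},
}

def forward_eo(m_w):
    """Forward reasoning: Bel(EO = s) = sum over w of
    m(EO = s | W = w) * m(W = w)."""
    return {s: sum(COND_EO[w][s] * p for w, p in m_w.items())
            for s in ("H", "L", "HL")}

def pignistic_eo(bel):
    """Pignistic transformation: split the mass on the two-element
    set HL evenly between the singletons H and L."""
    return {"H": bel["H"] + bel["HL"] / 2.0,
            "L": bel["L"] + bel["HL"] / 2.0}

bel_eo = forward_eo({"N": 0.2, "S": 0.3, "T": 0.5})
betp_eo = pignistic_eo(bel_eo)
# bel_eo  -> H: 0.6323, L: 0.1801, HL: 0.1876
# betp_eo -> H: 0.7261, L: 0.2739
```

The same pattern applies to steps (2) and (3), with the conditional tables of nodes PO and TI in place of COND_EO.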
(2) Reasoning entity action PO according to distance R, azimuth A, and velocity V
Bel(PO = F) = m(PO = F|R = L, A = H, V = H)·m(R = L, A = H, V = H) + m(PO = F|R = L, A = M, V = H)·m(R = L, A = M, V = H) + m(PO = F|R = M, A = M, V = H)·m(R = M, A = M, V = H) = 0.2556
Bel(PO = Y) = m(PO = Y|R = L, A = M, V = H)·m(R = L, A = M, V = H) + m(PO = Y|R = M, A = M, V = H)·m(R = M, A = M, V = H) + m(PO = Y|R = M, A = L, V = H)·m(R = M, A = L, V = H) = 0.0213
Bel(PO = P) = m(PO = P|R = M, A = L, V = H)·m(R = M, A = L, V = H) = 0.0047
After the pignistic transformation, the single-element belief degrees of PO are obtained: Bel(PO = F) = 0.8457, Bel(PO = Y) = 0.1266, Bel(PO = P) = 0.0277.
(3) Reasoning target intention TI according to entity action PO and electromagnetic action EO
Bel(TI = A) = m(TI = A|PO = F, EO = H)·m(PO = F, EO = H) + m(TI = A|PO = Y, EO = H)·m(PO = Y, EO = H) = 0.6648
Bel(TI = C) = m(TI = C|PO = Y, EO = H)·m(PO = Y, EO = H) + m(TI = C|PO = P, EO = L)·m(PO = P, EO = L) = 0.0137
Bel(TI = L) = m(TI = L|PO = P, EO = L)·m(PO = P, EO = L) = 0.0042
According to the above reasoning and the pignistic transformation, the beliefs of target intentions A, C, and L at this moment are 0.9516, 0.0409, and 0.0075, respectively. Since Bel(TI = A) > Bel(TI = C) > Bel(TI = L), it can be concluded that the target intention is A.
The prior belief distribution m(R), m(A), m(V), and m(W) of parent nodes R, A, V, and W can be generated through the cloud model. By using hierarchical evidential reasoning, we can obtain the belief values of different target intentions, and the one with the highest belief value is the target intention. As can be seen from this example, the maximum belief obtained by reasoning is 0.9516, and the corresponding target intention TI is finally determined to be A.

4.5. Comparative Experimental Analysis

4.5.1. Analysis of Accuracy

The evidence network can handle uncertain and incomplete information and can better deal with noise in the raw sensor data. Because evidence network structure modeling methods are currently scarce, the rationality of the structure shown in Figure 5 cannot be compared against other structure modeling approaches; moreover, the main purpose of this paper is to verify the effect of the evidence network on target intention recognition, in which the network structure is only an intermediate link. Therefore, this section uses the network nodes and structure model shown in Figure 5 as the benchmark framework for target intent recognition and selects other models for comparative analysis and verification. Considering the similarity between Bayesian networks and evidence networks, as well as the wide application of Bayesian network models in target intention recognition, this section introduces the Bayesian network model as a baseline and measures the effect of the evidence network by comparing the target intention recognition accuracy of the two models. To compare the effectiveness of the evidence network reasoning method more comprehensively, the evidence network model is also compared with the common support vector machine (SVM) model.
The experimental data set mainly includes four target attributes: distance, azimuth, velocity, and warning information. The target intention takes one of the values A, C, and L. Using the K-fold cross-validation method, the data set was randomly divided into K subsets of almost equal size; a single subset was retained as test data, and the other K − 1 subsets were used as training data. The cross-validation was repeated K times, once for each subset. In this experiment, five-fold cross-validation was used to test the accuracy of intention recognition, and the accuracy of the recognition results is shown in Table 5.
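The five-fold protocol described here can be sketched as follows; `train_and_eval` is a caller-supplied placeholder standing in for training and scoring any of the three models (EN, BN, or SVM):

```python
import random

def k_fold_accuracy(n_samples, train_and_eval, k=5, seed=42):
    """Shuffle indices once, split them into k near-equal folds, hold
    each fold out in turn, and average the per-fold accuracies returned
    by train_and_eval(train_idx, test_idx)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k near-equal folds
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for f, fold in enumerate(folds) if f != i for j in fold]
        accs.append(train_and_eval(train_idx, test_idx))
    return sum(accs) / k
```

Each sample appears in exactly one test fold, so the averaged accuracy uses every sample for testing exactly once.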
As can be seen from Table 5, the accuracy of the evidence network is the highest of the three models: the SVM model has the lowest accuracy, and that of the Bayesian network lies between the other two. The higher accuracy of the evidence network model relative to the Bayesian network model shows, on the one hand, that the evidence network model is capable of target intention reasoning and recognition; on the other hand, it indirectly validates the established evidence network structure. As a classical target recognition algorithm, SVM has a relatively simple structure, but its parameters strongly affect recognition performance and are difficult to choose appropriately. In addition, for target intention recognition, SVM operates directly on the target attribute data and cannot exploit the correlation structure between target attributes as the evidence network model does; its recognition accuracy is therefore lower.
In order to further study the impact of data size on target intent recognition results, Figure 7 shows the change in recognition accuracy under different training data set sizes. The horizontal axis represents the size of the data, that is, the proportion of the experimental data set (training data and test data) used to the total data. For example, scales = 0.6 indicates that 60% of the data are randomly selected as cross-training data, and the remaining data are used as test data. The vertical axis shows the accuracy of the intention recognition.
As can be seen from Figure 7, as the training data set grows, the curves show an overall upward trend: the accuracy of all identification methods improves, and their results gradually converge. However, when the scale of the training data is small, the identification accuracy of the evidence network is better than that of the other two models. The evidence network model clearly outperforms the SVM model, and it also holds an advantage over the Bayesian network model: the intention recognition method based on the evidence network does not need to make precise probabilistic judgments on uncertain evidence, so it avoids the loss of uncertain information and obtains more accurate recognition results.

4.5.2. Analysis of Sensitivity

To verify the effectiveness of the proposed algorithm, noise is added to the target's feature data at levels of 10%, 15%, and 20%; the belief results of target intention recognition are obtained, and their consistency with the known results is compared. Figure 8 shows the recognition beliefs of target intentions A, C, and L under the three noise levels. Although the noise levels vary considerably, the belief corresponding to each element in the target intention identification frame remains relatively stable, which shows that the proposed intention recognition method is insensitive to noise changes and has good stability and reliability: it can still correctly identify the target intention.

5. Conclusions

Aiming at practical requirements of target recognition such as hierarchical reasoning and data-driven modeling, this paper proposes a target recognition method based on evidence network reasoning, which realizes the effective recognition of target intention. DS theory can represent both probabilistic and cognitive uncertainty, and the evidence network provides an expression framework for the relationships between the target's nodes. The MIC is used to measure the degree of correlation between nodes, determine the existence of edges, and establish the evidence network structure, compensating for the shortcoming that evidence network structures were previously constructed in a purely subjective way. The Gaussian cloud model describes the membership distribution of the training samples over each attribute, and the BPA of each target attribute is obtained from the sensor measurement data, providing the front-end input for evidence network reasoning. The information-entropy-based method determines the conditional belief tables of non-root nodes, offering a new way to obtain the parameters of the evidence network. Simulation results show that the proposed method can effectively construct a multi-level target intent recognition network model, obtain target feature beliefs from data samples, and realize target intent recognition through evidential reasoning over the conditional belief parameters. In future work, the relative importance of different experts should be considered when acquiring the conditional beliefs of the evidence network, assigning them different weights. In addition, because different data sets present different network structures in practice, methods for generating the evidence network model from the characteristics of the data set should be investigated.

Author Contributions

Conceptualization, H.W. and X.G.; methodology, H.W.; software, H.W., X.G.; validation, H.W., X.Y.; formal analysis, H.W. and X.Y.; investigation, H.W.; resources, H.W. and X.G.; data curation, H.W.; writing—original draft preparation, H.W.; writing—review and editing, X.Y.; visualization, H.W.; supervision, X.Y.; project administration, X.G.; funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Youth Foundation of the National Science Foundation of China (grant number 62001503), the Excellent Youth Scholar of the National Defense Science and Technology Foundation of China (grant number 2017-JCJQ-ZQ-003), and the Special Fund for the Taishan Scholar Project (grant number ts 201712072).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Figure 1. Basic evidence network diagram.
Figure 2. Process of target intention recognition in evidence network reasoning.
Figure 3. Example of the causes of redundant edges.
Figure 4. Evidence network under the conditional belief parameter model.
Figure 5. Evidence network structure of target intent recognition.
Figure 6. Gaussian cloud fuzzy membership of target attributes.
Figure 7. Comparison of the recognition accuracy of EN, BN, and SVM models with different data scales.
Figure 8. Target intention recognition belief under different noise levels.
Table 1. Frame description of variables related to target intent recognition.

| Feature of Target | Mark of Representation | Identification Frame and Elements |
| --- | --- | --- |
| Intention of target | TI | {A, C, L} |
| Activity of entity | PO | {F, Y, P} |
| Electromagnetic activity | EO | {H, L} |
| Distance | R | {H, M, L} |
| Azimuth | A | {H, M, L} |
| Velocity | V | {H, M, L} |
| Warning information | W | {N, S, T} |
Table 2. Conditional belief parameters of node PO.

| Condition \ Node Variable | PO = F | PO = Y | PO = P | PO = FY | PO = YP |
| --- | --- | --- | --- | --- | --- |
| R = L, A = H, V = H | m(F) = 1 | m(Y) = 0 | m(P) = 0 | m(FY) = 0 | m(YP) = 0 |
| R = M, A = M, V = H | m(F) = 0.5513 | m(Y) = 0.1378 | m(P) = 0 | m(FY) = 0.3109 | m(YP) = 0 |
| R = L, A = M, V = H | m(F) = 0.3491 | m(Y) = 0.2327 | m(P) = 0 | m(FY) = 0.4182 | m(YP) = 0 |
| R = M, A = L, V = H | m(F) = 0 | m(Y) = 0.3491 | m(P) = 0.2327 | m(FY) = 0 | m(YP) = 0.4182 |
Table 3. Conditional belief parameters of node EO.

| Condition \ Node Variable | EO = H | EO = L | EO = HL |
| --- | --- | --- | --- |
| W = N | m(H) = 0.1378 | m(L) = 0.5513 | m(HL) = 0.3109 |
| W = S | m(H) = 0.3491 | m(L) = 0.2327 | m(HL) = 0.4182 |
| W = T | m(H) = 1 | m(L) = 0 | m(HL) = 0 |
Table 4. Conditional belief parameters of node TI.

| Condition \ Node Variable | TI = A | TI = C | TI = L | TI = AC | TI = CL |
| --- | --- | --- | --- | --- | --- |
| PO = F, EO = H | m(A) = 1 | m(C) = 0 | m(L) = 0 | m(AC) = 0 | m(CL) = 0 |
| PO = Y, EO = H | m(A) = 0.5513 | m(C) = 0.1378 | m(L) = 0 | m(AC) = 0.3109 | m(CL) = 0 |
| PO = P, EO = L | m(A) = 0 | m(C) = 0.1378 | m(L) = 0.5513 | m(AC) = 0 | m(CL) = 0.3109 |
Table 5. Comparison of target intention recognition accuracy.

| Estimation Model | EN Model | BN Model | SVM |
| --- | --- | --- | --- |
| Accuracy rate | 92.36% | 89.72% | 87.59% |

Share and Cite

MDPI and ACS Style

Wang, H.; Guan, X.; Yi, X. Evidence Network Inference Recognition Method Based on Cloud Model. Electronics 2023, 12, 318. https://doi.org/10.3390/electronics12020318

