Article

A Theoretical Framework for Event-Driven Correction in UAV Swarm Situational Awareness: Mechanism Design with Evidence-Theoretic Foundations

Naval Aviation University, Yantai 264001, China
*
Author to whom correspondence should be addressed.
Drones 2026, 10(3), 182; https://doi.org/10.3390/drones10030182
Submission received: 6 January 2026 / Revised: 1 March 2026 / Accepted: 5 March 2026 / Published: 6 March 2026
(This article belongs to the Section Artificial Intelligence in Drones (AID))

Highlights

What are the main findings?
  • A novel evidence weakening operation was introduced and rigorously proved to be commutative with Dempster’s combination rule. This fundamental theoretical advancement overcomes the critical limitation of non-commutativity in traditional evidence discounting methods, enabling order-invariant evidence reliability adjustment.
  • An event-driven, distributed correction protocol is designed, leveraging the commutativity property to enable penetrative propagation and asynchronous parallel recomputation of corrections. This protocol allows efficient local evidence updates to trigger consistent global state updates without resource-intensive global recomputation.
What are the implications of the main findings?
  • The proposed framework provides a mathematically rigorous and scalable solution for robust and efficient information fusion in dynamic multi-agent systems, such as UAV swarms. It ensures consistent distributed situational awareness under uncertainty while significantly reducing correction latency, communication overhead, and storage requirements.
  • Extensive simulations demonstrate that the weakening-based parallel protocol achieves sublinear scalability and a zero correction error rate, significantly outperforming traditional methods. This makes it highly suitable for large-scale, real-time applications in contested environments where low-latency cognitive convergence is critical.

Abstract

The effectiveness of unmanned aerial vehicle (UAV) swarms in complex and dynamic environments relies heavily on real-time and consistent situational awareness throughout the network. Effective event-driven correction mechanisms must meet two essential requirements: they must robustly handle uncertainties inherent in challenging situations and ensure strict commutativity between weighting and fusion operations to allow for distributed implementation. To tackle the critical issue of uncertain information processing, this work adopts Dempster–Shafer evidence theory because of its advantages in representing and managing epistemic uncertainty. However, the traditional discounting operation in evidence theory does not satisfy commutativity with the combination rule, which poses a significant barrier to distributed implementation. To address this limitation, we introduce a novel evidence weakening operation that is rigorously proven to be commutative with Dempster’s combination rule. This theoretical advancement enables the design of a distributed protocol that supports efficient propagation and parallel computation of corrections. Simulation results demonstrate that the proposed protocol achieves a zero correction error rate, along with approximately 40% reduction in latency and 35% savings in communication overhead compared to conventional serial discounting methods, while maintaining sublinear scalability. This approach provides a feasible solution for robust and efficient information fusion in dynamic multi-agent systems.


1. Introduction

The efficacy of unmanned aerial vehicle (UAV) swarms [1] in critical areas such as wide-area surveillance [2], disaster response [3], and cooperative engagement [4] is heavily dependent on their capability for real-time and accurate situational awareness of dynamic environments [5]. This perceptual process is inherently a collective effort, where each unit gathers local data through onboard sensors, and a shared understanding is formed through collaborative information fusion. However, sudden environmental changes or adversarial interventions may trigger local evidence that contradicts the global consensus, necessitating an event-driven correction mechanism to achieve rapid cognitive convergence.
In fact, real-world countermeasure environments are often fraught with uncertainties. Sensors can exhibit measurement noise and blind spots, electromagnetic interference can distort data, and moving targets may present misleading signatures. Additionally, active adversarial jamming or sudden environmental changes can create situations where local units observe critical evidence that contradicts the global consensus. These instances of “cognitive conflict” require the swarm to quickly and effectively correct its global situational assessment. Traditional information fusion methods, which typically rely on periodic or query-based data updates, are not equipped to meet the low-latency demands for cognitive convergence that arise from such asynchronous events. As a result, developing an event-driven, distributed correction mechanism has become a critical technological priority and a significant challenge for enhancing the survivability and mission effectiveness of UAV swarms in contested environments.
The core challenge of event-driven correction is twofold: it stems from a fundamental mathematical constraint (the non-commutativity of reliability weighting) and is exacerbated by practical system constraints such as network delays and the need for efficient synchronization. This work primarily addresses the former, as solving the commutativity problem is a prerequisite for achieving consistent fusion in any distributed and asynchronous setting. Formally, when a node generates new awareness information at time $t_1 > t_0$ to replace its previously reported information, the system must update the global state consistently and efficiently. This challenge stems from fundamental mathematical constraints in information fusion [6]: most mainstream algorithms (e.g., Kalman filtering variants [7] and traditional Bayesian methods [8]) employ weighting and fusion operators that violate the commutative and associative laws [9]. Consequently, the order in which correction information is integrated influences the outcome, often necessitating the reconstruction of a centralized global state. This approach incurs unacceptable latency and resource consumption in resource-constrained distributed UAV networks.
Dempster–Shafer evidence theory (DST) [10] provides a principled framework for handling uncertainty and incomplete information in fusion tasks [11]. This theory has been widely applied across various fields, including risk assessment [12,13], deception detection [14,15], fault diagnosis [16,17], and decision support [18,19], among others. Although distributed information fusion based on DST has been extensively studied, the classical evidence discounting operation—analogous to weighting—does not commute with Dempster’s combination rule [20,21]. While subsequent research has developed enhanced discounting models [22,23,24,25,26], the fundamental issue of non-commutativity remains unresolved, precluding the design of both consistent and efficient distributed correction protocols.
Alternative paradigms, such as consensus algorithms or data-driven fusion, often lack DST’s rigorous handling of epistemic uncertainty or introduce “black-box” characteristics unsuitable for high-stakes decisions. Therefore, a gap exists: a comprehensive framework that simultaneously satisfies mathematical rigor (preserving commutativity for consistency), operational efficiency (enabling fully distributed, asynchronous processing), and expressive power (handling uncertainty and conflict).
To bridge this gap, this work makes the following key contributions:
(1)
Theoretical innovation: We propose a novel evidence weakening operation within DST and formally prove its commutativity with Dempster’s combination rule (Theorem 1). This resolves the fundamental order-dependency of traditional discounting, providing the necessary algebraic foundation for distributed asynchronous fusion.
(2)
Protocol design: Leveraging this commutativity, we design a practical, event-driven correction protocol for hierarchical UAV swarms. Its core innovation is the decoupling of penetrative update propagation from local asynchronous recomputation, enabling low-latency global corrections without centralized coordination or global re-fusion.
(3)
Comprehensive validation: Through extensive Monte Carlo simulations (30,000 trials), we provide a rigorous performance evaluation. Results demonstrate that our protocol guarantees global consistency (0% correction error), reduces latency by ~40% and communication overhead by ~35% compared to serial benchmarks, and exhibits excellent scalability. We also analyze and affirm the decision-robustness of the weakening operation despite the potential emergence of negative intermediate values.
The remainder of this paper is organized as follows. Section 2 reviews DST fundamentals and critically analyzes the limitation of existing discounting methods. Section 3 presents our system model and formally states the event-driven correction problem. Section 4 details the proposed evidence weakening operation and its commutative properties. Section 5 describes the design of the distributed correction protocol. Section 6 provides the experimental setup, results, and discussion. Finally, Section 7 concludes the paper.

2. Background and Critical Limitation of Existing Discounting Methods

This section revisits the fundamental concepts of Dempster–Shafer evidence theory, which serves as a robust foundation for uncertainty modeling. It subsequently examines the traditional evidence discounting method, exposing a critical flaw: its non-commutativity with Dempster’s rule of combination. This non-commutativity is identified as the fundamental issue that undermines the feasibility of consistent and efficient distributed, event-driven corrections. The section concludes by outlining the specific requirements for a viable fusion framework, clearly identifying the research gap that this study aims to address.

2.1. Dempster–Shafer Evidence Theory [10,27]

2.1.1. Frame of Discernment (FOD)

Definition 1. 
Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ be a set of mutually exclusive and collectively exhaustive events, where the set $\Theta$ is called the FOD. $2^\Theta$ is the power set of $\Theta$. If $A \in 2^\Theta$, $A$ is called a hypothesis or proposition.

2.1.2. Mass Function and BPA

Definition 2. 
For a FOD $\Theta$, a mass function is expressed as a mapping $m: 2^\Theta \to [0, 1]$, which satisfies the following two conditions:
$$\sum_{A \subseteq \Theta} m(A) = 1 \quad \text{and} \quad m(\emptyset) = 0.$$
In DST, $m$ is also called a BPA. For example, $m(A)$ is the BPA of $A$, which reflects the extent to which $A$ is supported.

2.1.3. Commonality Function

Definition 3. 
From this BPA, a commonality function $Q$ is defined as
$$Q(A) = \sum_{A \subseteq B} m(B),$$
where $Q: 2^\Theta \to [0, 1]$.

2.1.4. Dempster Combination

The Dempster combination has been widely used to combine multiple independent pieces of evidence, and its definition is as follows.
Definition 4. 
Suppose the two mass functions $m_1$ and $m_2$ are defined on the frame of discernment $\Theta$; the Dempster combination (where $\oplus$ represents the orthogonal sum operation) is defined as follows:
$$(m_1 \oplus m_2)(A) = \begin{cases} 0, & A = \emptyset, \\ \dfrac{1}{1 - k_{12}} \displaystyle\sum_{A_1 \cap A_2 = A} m_1(A_1)\, m_2(A_2), & A \neq \emptyset, \end{cases}$$
where the conflict coefficient $k_{12}$ is defined as
$$k_{12} = \sum_{\substack{A_1 \subseteq \Theta,\, A_2 \subseteq \Theta \\ A_1 \cap A_2 = \emptyset}} m_1(A_1)\, m_2(A_2).$$
Notably, the Dempster combination is well defined only under the condition that $k_{12} < 1$. For $N$ pieces of evidence, the Dempster combination $\bigoplus_{i=1}^{N} m_i = m_1 \oplus m_2 \oplus \cdots \oplus m_N$ is given by
$$(m_1 \oplus m_2 \oplus \cdots \oplus m_N)(A) = \begin{cases} 0, & A = \emptyset, \\ \dfrac{1}{1 - k_{12 \cdots N}} \displaystyle\sum_{A_1 \cap A_2 \cap \cdots \cap A_N = A} m_1(A_1)\, m_2(A_2) \cdots m_N(A_N), & A \neq \emptyset, \end{cases}$$
where $k_{12 \cdots N} = \sum_{A_1 \cap A_2 \cap \cdots \cap A_N = \emptyset} m_1(A_1)\, m_2(A_2) \cdots m_N(A_N)$.
The Dempster combination can also be computed using the commonality function:
$$(Q_1 \oplus Q_2 \oplus \cdots \oplus Q_N)(A) = K\, Q_1(A)\, Q_2(A) \cdots Q_N(A) \quad \text{for } A \subseteq \Theta,\ A \neq \emptyset,$$
where
$$K = \left[ \sum_{A \subseteq \Theta,\, A \neq \emptyset} (-1)^{|A| + 1}\, Q_1(A)\, Q_2(A) \cdots Q_N(A) \right]^{-1}.$$
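Dempster's rule (Definition 4) is straightforward to implement. The sketch below is our own illustration, not code from the paper: it encodes each subset of a ternary FOD as a 3-bit mask, accumulates products over intersecting focal elements, and renormalizes by $1/(1-k_{12})$.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two BPAs (dicts: subset bitmask -> mass) by Dempster's rule."""
    fused = {}
    k = 0.0  # conflict coefficient k12: total mass sent to the empty set
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        inter = a & b  # bitwise AND = set intersection of the focal elements
        if inter == 0:
            k += v1 * v2
        else:
            fused[inter] = fused.get(inter, 0.0) + v1 * v2
    if k >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined (k = 1)")
    return {a: v / (1.0 - k) for a, v in fused.items()}

# Theta = {t1, t2, t3}; bit i set <=> t_{i+1} belongs to the subset
m1 = {0b001: 0.6, 0b111: 0.4}   # m1({t1}) = 0.6, m1(Theta) = 0.4
m2 = {0b010: 0.5, 0b111: 0.5}   # m2({t2}) = 0.5, m2(Theta) = 0.5
fused = dempster(m1, m2)        # here k = 0.6 * 0.5 = 0.3
```

With these inputs the unnormalized masses are 0.3 on $\{t_1\}$, 0.2 on $\{t_2\}$, and 0.2 on $\Theta$, so the fused BPA is obtained by dividing each by $1 - k = 0.7$.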

2.1.5. Pignistic Transform

Definition 5. 
The probabilistic expression of a singleton element $\theta$ in $\Theta$ can be obtained by the pignistic probability function
$$BetP(\theta) = \sum_{\theta \in A} \frac{m(A)}{|A|},$$
where $|A|$ is the cardinality of proposition $A$.
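As a minimal illustration of the pignistic transform (our own sketch, reusing the bitmask encoding of subsets), each focal mass is split equally among the singletons it contains:

```python
def betp(m, n_elem):
    """Pignistic transform: BetP(theta_i) = sum over A containing theta_i of m(A)/|A|."""
    return {i: sum(v / bin(a).count("1") for a, v in m.items() if a & (1 << i))
            for i in range(n_elem)}

# m({t1}) = 0.5, m({t1, t2}) = 0.3, m(Theta) = 0.2 on a ternary frame
m = {0b001: 0.5, 0b011: 0.3, 0b111: 0.2}
p = betp(m, 3)
# BetP(t1) = 0.5 + 0.3/2 + 0.2/3; BetP(t2) = 0.3/2 + 0.2/3; BetP(t3) = 0.2/3
```

The resulting values always form a probability distribution over the singletons, which is what makes the transform suitable for final decision-making.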

2.2. Traditional Evidence Discounting and Its Limitations

To address the issue of unreliable evidence sources, Shafer introduced the classical evidence discounting method.
Definition 6. 
A discounting coefficient $\alpha \in [0, 1]$ represents the weight of the evidence; the discounted evidence $m^{\alpha}$ is then given by
$$m^{\alpha}(A) = \alpha \cdot m(A) \ \ \text{for } A \neq \Theta, \qquad m^{\alpha}(\Theta) = 1 - \alpha + \alpha \cdot m(\Theta).$$
Note on the reliability parameter $\alpha$: In this work, the symbol $\alpha \in [0, 1]$ consistently represents a reliability or confidence measure of an evidence source, where $\alpha = 1$ denotes full reliability and $\alpha = 0$ denotes complete unreliability. Numerous established methods exist for determining $\alpha$, including those based on sensor quality metrics, historical accuracy, or channel conditions. The focus of this paper is not on how $\alpha$ is estimated, but on how it is mathematically applied within the fusion process. Here, in the context of traditional discounting, $\alpha$ functions as the discount factor. In Section 4, the same $\alpha$ will serve as the weakening factor in our proposed operation. Although the mathematical transformations differ, the semantic interpretation of $\alpha$ as a reliability weight remains consistent across both operations.
However, a critical drawback of this method is that the discounting operation is not commutative with Dempster’s combination rule. That is, in general, $(m_1 \oplus m_2)^{\alpha} \neq m_1^{\alpha} \oplus m_2^{\alpha}$. This non-commutativity fundamentally undermines its applicability in event-driven correction scenarios. The order of operations—whether discounting is applied before or after fusion—yields different results, rendering consistent distributed asynchronous correction unachievable.
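This order-dependence is easy to exhibit numerically. The following sketch (our own illustration of Definition 6 together with Dempster's rule, using a bitmask encoding of subsets on a ternary frame) shows that discounting before fusion and discounting after fusion assign different masses to the same proposition:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule on BPAs given as dicts {subset bitmask: mass}."""
    fused, k = {}, 0.0
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        if a & b:
            fused[a & b] = fused.get(a & b, 0.0) + v1 * v2
        else:
            k += v1 * v2  # conflict coefficient
    return {a: v / (1.0 - k) for a, v in fused.items()}

THETA = 0b111  # ternary frame {t1, t2, t3}

def discount(m, alpha):
    """Shafer discounting: scale every focal mass by alpha, move the rest to Theta."""
    out = {a: alpha * v for a, v in m.items() if a != THETA}
    out[THETA] = 1.0 - alpha + alpha * m.get(THETA, 0.0)
    return out

m1 = {0b001: 0.6, 0b111: 0.4}
m2 = {0b010: 0.5, 0b111: 0.5}
alpha = 0.8
fuse_then_discount = discount(dempster(m1, m2), alpha)
discount_then_fuse = dempster(discount(m1, alpha), discount(m2, alpha))
# The two orders disagree, e.g. on the mass assigned to {t1}
```

For this example the mass on $\{t_1\}$ is about 0.343 in one order and about 0.356 in the other, so a distributed node that discounts locally before forwarding cannot reproduce the centrally discounted result.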
Subsequent research [7,28,29,30] has proposed various enhanced discounting models. Nevertheless, most remain grounded in the conceptual framework of traditional discounting and fail to fundamentally resolve the commutativity issue. This persistent shortcoming serves as the direct motivation for introducing the evidence weakening operation in Section 4.

2.3. Scenario Technical Requirements

Designing distributed fusion algorithms necessitates effectively handling network topology, communication constraints, and dynamic environments. Existing research can be broadly categorized as follows:
(1)
Probability-based methods: Approaches such as distributed Kalman filters and their variants are optimal under linear Gaussian assumptions but exhibit limited capability in handling nonlinear, non-Gaussian problems, and epistemic uncertainties.
(2)
Consensus-based algorithms: These methods iteratively drive the states of network nodes toward agreement. However, they often face challenges related to convergence speed and stability in the presence of highly conflicting evidence.
(3)
D-S theory-based methods: As previously discussed, the inherent associativity and commutativity of the classic D-S combination rule make it naturally suitable for distributed fusion. However, as analyzed in Section 2.2, introducing evidence discounting (i.e., weighting adjustments) to handle reliability differences disrupts these crucial algebraic properties.
The scenario addressed in this paper demands a fusion framework that simultaneously satisfies three key requirements:
  • Mathematical rigor: Fusion operations and weighting operations (such as discount operations in evidence theory) must be commutative.
  • Strong expressiveness: The framework must explicitly handle uncertainty, ignorance, and evidence conflict.
  • Engineering feasibility: It must possess a computationally efficient form suitable for implementation in resource-constrained distributed systems such as UAV swarms.
Section 4 details our proposed evidence weakening operation. This operation preserves the advantages of D-S theory in handling uncertainty and, by guaranteeing commutativity with the fusion operation, establishes a theoretical foundation for building an efficient and consistent event-driven correction mechanism.

3. System Model and Problem Formulation

This section formally defines the system context and the core problem addressed in this work. First, we introduce the hierarchical network architecture that models our UAV swarm and the evidence-theoretic framework used to represent situational awareness. Subsequently, we delineate the event-driven correction problem, highlighting the fundamental challenge introduced by traditional reliability discounting methods.

3.1. Hierarchical UAV Swarm Network Model

We model the information aggregation process within the UAV swarm using a logical hierarchical tree topology. This model defines the data fusion and command flow, not necessarily the physical communication links. While real-world swarm networks may exhibit diverse and complex graph structures for overall connectivity [31], the hierarchical tree is a prevalent and effective abstraction for scalable, multi-hop data fusion and coordinated decision-making [32]. This logical structure can map to either a global command hierarchy or, more commonly, emerge as local sub-structures during task-specific, regional data aggregation phases. As illustrated in Figure 1, this architecture comprises three logical node types:
(1)
Leaf nodes (C1, C2, …): Front-end sensing units (e.g., individual UAVs) responsible for local data acquisition, target recognition, and generating initial BPAs.
(2)
Child nodes (B1, B2, …): Regional fusion centers that aggregate evidence from their child nodes (leaf or other intermediate nodes) and report fused results to their parent node.
(3)
Root node (A): The global fusion center that synthesizes situational awareness from the entire network and makes final decisions.

3.1.1. Key Parameters and Topology Generation

The topology is characterized by two key parameters: its depth ($D$) and fan-out ($F$). The depth defines the number of hierarchical levels (excluding the root), while the fan-out defines the maximum number of children per parent node. The total number of nodes $N$ in a complete tree is given by:
$$N = \frac{F^{D+1} - 1}{F - 1}.$$
In our simulations, such topologies are generated algorithmically to reflect various swarm scales and command structures. The generation logic, detailed in Section 6.3, begins with the root node and recursively assigns child nodes until the specified depth $D$ is reached, adhering to the defined fan-out $F$. This process ensures extensibility, supporting simulations ranging from small squadrons to large-scale swarms with thousands of nodes.
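The generation logic described above can be sketched as follows. This is a minimal illustration of a complete-tree builder (the function name and parent-map representation are our own, not from the paper); the assertion checks the node-count formula $N = (F^{D+1}-1)/(F-1)$.

```python
def build_tree(depth, fanout):
    """Generate a complete tree of depth D and fan-out F.
    Returns {node_id: parent_id}; the root (id 0) has parent None."""
    parents = {0: None}
    frontier = [0]       # nodes at the current level
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for node in frontier:
            for _ in range(fanout):
                parents[next_id] = node
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return parents

D, F = 3, 4
tree = build_tree(D, F)
# N = (F^{D+1} - 1) / (F - 1) = (256 - 1) / 3 = 85 nodes
assert len(tree) == (F ** (D + 1) - 1) // (F - 1)
```

Because the levels are filled breadth-first, the mapping from node id to level is implicit, which keeps the simulated topologies easy to scale from small squadrons to thousands of nodes.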

3.1.2. Operational Assumptions

We operate under the following core assumptions for the network:
(1)
Communication: Bidirectional, reliable communication links exist along the edges of the tree. We assume the underlying network employs efficient, topology-aware routing protocols (conceptually similar to those studied for scalable ad-hoc UAV networks [1]) to manage multi-hop data transfer.
(2)
Computation: Each node, particularly child and root nodes, possesses sufficient onboard computational capability to perform evidence fusion and correction operations locally.
(3)
Node reliability heterogeneity: To model real-world variations in sensor quality or platform trustworthiness, each node is associated with a reliability weight, dynamically assigned (e.g., from a uniform distribution) in our simulations. This weight directly influences the evidence discounting/weakening factor applied to its reports.

3.2. Evidence Representation and Fusion Model

This section details the representation of situational awareness and the core fusion mechanism within our hierarchical swarm model. We adopt DST as the mathematical framework. The local situational assessment of each node is represented as a BPA defined over a frame of discernment $\Theta$. The fusion of evidence between nodes follows Dempster’s rule of combination ($\oplus$), which is commutative and associative, properties fundamental to the subsequent analysis.

3.2.1. System Workflow: Evidence Flow and Storage

Within the hierarchical network model (Section 3.1), the lifecycle of evidence unfolds as follows. Leaf Nodes, acting as sensing units, are responsible for generating initial BPAs (e.g., $m_{C_1}, m_{C_2}, \ldots$) based on their local observations. These BPAs are transmitted upstream to their respective parent (Child) nodes.
Each Child Node and the Root Node functions as a fusion center. Upon receiving BPAs from its children, a node fuses them using Dempster’s rule (⊕) to produce a consolidated BPA representing the integrated situational awareness of its entire subtree. Crucially, each non-leaf node maintains a local cache storing this fused BPA. This cached state serves as a summary of the evidence from all descendant nodes and is the basis for both further fusion upstream and, as will be detailed in Section 5, efficient local recalculation during corrections.
The Root Node ultimately holds the global situational awareness, denoted $SA$, which is the fused BPA aggregating evidence from the entire swarm.

3.2.2. The Ideal Case: Parallelizable Fusion Thanks to Commutativity

The commutativity and associativity of Dempster’s rule confer a significant systemic advantage. Mathematically, for a set of BPAs, the global fused result $m_{global} = m_1 \oplus m_2 \oplus \cdots \oplus m_n$ is independent of the order in which the pairwise combinations are performed.
Translated to our hierarchical network, this implies that the sequential, layer-by-layer fusion process from leaves to root is logically equivalent to a flat, single-step fusion of all leaf node BPAs in a centralized manner. This property establishes an ideal foundation for corrections: in principle, updating the BPA of any leaf node (e.g., from $m_{old}(C_i)$ to $m_{new}(C_i)$) would only require substituting the corresponding BPA in this “virtual” flat fusion formula and recomputing the global result. This suggests the potential for a simple, order-agnostic, and parallelizable correction mechanism.
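The order-independence described above is easy to confirm numerically. In this illustrative sketch (our own bitmask encoding of subsets on a ternary frame, not code from the paper), fusing four BPAs in a shuffled order reproduces the sequential layer-by-layer result:

```python
import random
from functools import reduce
from itertools import product

def dempster(m1, m2):
    """Dempster's rule on BPAs given as dicts {subset bitmask: mass}."""
    fused, k = {}, 0.0
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        if a & b:
            fused[a & b] = fused.get(a & b, 0.0) + v1 * v2
        else:
            k += v1 * v2
    return {a: v / (1.0 - k) for a, v in fused.items()}

bpas = [{0b001: 0.4, 0b111: 0.6},
        {0b011: 0.5, 0b111: 0.5},
        {0b010: 0.3, 0b110: 0.2, 0b111: 0.5},
        {0b101: 0.25, 0b111: 0.75}]
reference = reduce(dempster, bpas)          # sequential "layer-by-layer" fusion
shuffled = random.sample(bpas, len(bpas))   # any other fusion order
alternate = reduce(dempster, shuffled)
assert all(abs(reference[a] - alternate[a]) < 1e-9 for a in reference)
```

This is exactly the property that evaporates once per-source discounting is introduced, which is the conflict analyzed in the next subsection.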

3.2.3. The Practical Need and Its Fundamental Conflict: Evidence Discounting

The ideal case above assumes all evidence sources are equally reliable. In practice, UAV nodes may have varying credibility due to sensor quality, environmental conditions, or adversarial interference. To handle this source reliability and manage high-conflict scenarios, a critical concern in contested environments, the standard approach is to apply a discounting operation to a BPA before fusion, using a discount factor $\alpha \in [0, 1]$ to attenuate the belief from less reliable sources.
However, introducing discounting fundamentally alters the fusion algebra. A key, well-established result is that the discounting operation breaks the commutativity of subsequent Dempster combination. That is, $(m_1 \oplus m_2)^{\alpha} \neq m_1^{\alpha} \oplus m_2^{\alpha}$ in the general case.
This loss of commutativity has severe consequences for a distributed, asynchronous system:
(1)
Order-dependency: The final fused result becomes dependent on the sequence in which discounted evidence is combined.
(2)
Inconsistency in dynamic updates: In a swarm where nodes operate concurrently, minor variations in processing or communication delays can lead to different effective fusion orders across the network. This results in inconsistent global situational awareness ( S A ) among nodes, undermining the very goal of data fusion.
(3)
Collapse of the ideal mechanism: The elegant, order-agnostic correction mechanism implied in Section 3.2.2 becomes impossible. Any practical protocol must now explicitly manage sequences, inevitably leading to increased latency, complexity, and the need for synchronization.
Therefore, the core challenge in designing a practical distributed correction protocol is to overcome the order-dependency introduced by necessary reliability weighting (discounting). This necessitates a novel weighting mechanism that, unlike traditional discounting, preserves commutativity with Dempster’s rule.

3.3. Problem Statement: Event-Driven Correction with Commutativity

Building upon the conflict identified in Section 3.2.3, we now formally define the event-driven correction problem for a UAV swarm. The challenge is to design a distributed protocol that can efficiently and consistently incorporate a local evidence update, while operating under the necessary condition of source reliability weighting—a condition that traditionally breaks commutativity and leads to order-dependent inconsistency.
Consider the hierarchical network and evidence fusion model established in Section 3.1 and Section 3.2. The BPAs generated by the nodes are denoted $m_{B_1}, \ldots, m_{C_1}, m_{C_2}, \ldots$. Let $m_{old}(C_i)$ be the BPA reported by leaf node $C_i$ at time $t_0$. An event-driven correction is triggered when $C_i$, at time $t_1 > t_0$, generates a new BPA $m_{new}(C_i)$ based on fresh evidence, necessitating an update.
The primary objective is to replace $m_{old}(C_i)$ with $m_{new}(C_i)$ and propagate this change through the network to update the global situational awareness $SA$ at the root, applying necessary reliability weighting throughout the process.
To be effective and practical, the solution must satisfy the following definitive goals under a critical constraint:
(1)
Constraint C1 (commutativity preservation): The reliability weighting mechanism applied to evidence must be commutative with Dempster’s rule of combination. This is the foundational requirement to eliminate order-dependency and guarantee global state consistency in an asynchronous, distributed system.
(2)
Goal G1 (global consistency): The protocol must ensure that the final fused state after correction is mathematically identical to the result obtained from a hypothetical, synchronous global re-fusion of all network evidence, including the updated $m_{new}(C_i)$. This is the correctness criterion.
(3)
Goal G2 (low latency): The correction latency, defined as $T = t_{update} - t_0$ (where $t_{update}$ is the time at which the global state becomes consistently updated), must be minimized. This metric directly impacts the swarm’s responsiveness to dynamic events.
(4)
Goal G3 (scalability and efficiency): The protocol’s communication and computational overhead must scale efficiently with the swarm size (characterized by depth $D$ and fan-out $F$). The design should avoid mechanisms whose cost is linear in the total number of nodes $N$, such as network-wide flooding or full re-fusion from scratch.
In summary, this work seeks to achieve the tripartite objectives of consistency (G1), timeliness (G2), and efficiency (G3) under the non-negotiable theoretical constraint of commutativity (C1). Addressing this problem necessitates a two-part contribution: (i) a novel theoretical construct for reliability weighting that satisfies C1, and (ii) a distributed protocol leveraging this construct to enable efficient, penetrative update propagation. The theoretical foundation for (i) is presented in Section 4, leading to the design of the event-driven correction protocol (ii) in Section 5.

4. The Proposed Evidence Weakening Operation

Having established that the non-commutativity of traditional discounting is a fundamental barrier to consistent and efficient distributed correction (Section 3.2.3), this section introduces the core theoretical contribution of this work: the evidence weakening operation. We present its formal mathematical definition and, most importantly, provide a rigorous proof of its commutativity with Dempster’s combination rule. This mathematical property serves as the keystone that enables order-invariant processing of evidence reliability adjustments, thereby directly resolving the critical limitation identified previously and laying the groundwork for the subsequent distributed protocol design.

4.1. Definition and Core Properties of the Weakening Operation

To formally define the weakening operation, we first introduce the necessary notation within the Dempster–Shafer framework. Let the BPA on a frame of discernment $\Theta$ be represented in vector form as $M = [m(A_1), m(A_2), \ldots, m(A_{2^n - 1})]^T$, where the $A_i$ are the non-empty subsets of $\Theta$. The corresponding commonality function vector is denoted $Q = [Q(A_1), Q(A_2), \ldots, Q(A_{2^n - 1})]^T$. These two vectors are related through a linear transformation, $Q = \Gamma M$, where $\Gamma$ is an invertible matrix determined solely by the cardinality of $\Theta$. Furthermore, Dempster’s combination rule can be expressed using the element-wise (Hadamard) product of commonality vectors. (The complete matrix formalism, including the explicit form of $\Gamma$ and the detailed derivation, is elaborated in Appendix A.1 for readers interested in the mathematical underpinnings.)

4.1.1. Formal Definition and Intuition

Definition 7. 
(The Power Operation of BPA) Suppose the mass function $m$ is defined on the frame of discernment $\Theta$ and $\alpha \in \mathbb{R}^+$. The power operation of $m$ is denoted by $m^{\alpha}$, and for arbitrary $A \subseteq \Theta$ its commonality function is calculated as
$$Q^{\alpha}(A) = K_\alpha\, Q(A)^{\alpha},$$
where the normalization constant is
$$K_\alpha = \left( H_{2^n - 1}\, \Gamma_n^{-1}\, Q^{\alpha} \right)^{-1},$$
with $Q^{\alpha}$ denoting the element-wise $\alpha$-th power of the commonality vector $Q$.
In particular, when the exponent $\alpha < 1$, the power operation is regarded as a weakening of the evidence, and the exponent $\alpha$ is referred to as the weakening factor.
As introduced in Section 2.2, the parameter $\alpha \in [0, 1]$ is the reliability or confidence measure. In this definition, it serves as the weakening factor. We emphasize that the proposed weakening operation provides a novel, commutative method to apply a given reliability weight $\alpha$. The development of specific $\alpha$ estimation algorithms, while crucial for deployment, is orthogonal to the core contribution of this work—the introduction of a commutative weighting mechanism itself.

4.1.2. Key Theorem: Commutativity with Dempster’s Rule

The principal theoretical advantage of the proposed weakening operation is its commutativity with Dempster’s rule of combination. This property is formalized in the following theorem.
Theorem 1 (Commutativity). 
For any two BPAs $m_1$ and $m_2$ on the same frame of discernment $\Theta$, and any weakening factor $\alpha \in \mathbb{R}^+$, the proposed weakening operation commutes with Dempster’s combination rule $\oplus$:
$$m_1^{\alpha} \oplus m_2^{\alpha} = (m_1 \oplus m_2)^{\alpha}.$$
Furthermore, this commutativity extends inductively to the fusion of $n$ pieces of weakened evidence:
$$m_1^{\alpha} \oplus m_2^{\alpha} \oplus \cdots \oplus m_n^{\alpha} = (m_1 \oplus m_2 \oplus \cdots \oplus m_n)^{\alpha}.$$
Theorem 2 (Composition). 
For a BPA $m$ on the frame of discernment $\Theta$ and any two weakening factors $\alpha_1, \alpha_2 \in \mathbb{R}^+$, successive applications of the weakening operation compose multiplicatively and are order-independent:
$$\left( m^{\alpha_1} \right)^{\alpha_2} = m^{\alpha_1 \alpha_2} = \left( m^{\alpha_2} \right)^{\alpha_1}.$$
Proof sketch and significance: The proof leverages the specific algebraic form of Definition 7 and the associativity of Dempster’s rule. A complete derivation is provided in Appendix A.2. The core insight is that the element-wise power operation on the commonality vector $Q$ interacts linearly with the Hadamard product formulation of Dempster’s rule, ultimately making the order of combination irrelevant. This theorem is the cornerstone of our entire framework. It mathematically guarantees that applying reliability weights (via the weakening factor $\alpha$) in a distributed, asynchronous manner will always yield a globally consistent fusion result, thereby directly solving the order-dependency problem posed by traditional discounting and identified in Section 3.2.3.
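Theorems 1 and 2 can be checked numerically. The sketch below is our own illustration, not the authors' code: subsets of a ternary frame are encoded as bitmasks, and `weaken` realizes the power operation of Definition 7 by raising the commonality values element-wise to the power $\alpha$, Möbius-inverting back to masses, and renormalizing (the explicit $\Gamma$ matrix is replaced by its equivalent subset-lattice sums).

```python
from itertools import product

SETS = range(1, 1 << 3)  # non-empty subsets of a ternary frame, as bitmasks

def commonality(m):
    # Q(A) = sum of m(B) over all supersets B of A
    return {A: sum(v for B, v in m.items() if A & B == A) for A in SETS}

def mass_from_q(q):
    # Moebius inversion: m(B) = sum_{A >= B} (-1)^{|A| - |B|} Q(A)
    return {B: sum((-1) ** (bin(A).count("1") - bin(B).count("1")) * q[A]
                   for A in SETS if A & B == B) for B in SETS}

def normalize(m):
    s = sum(m.values())
    return {A: v / s for A, v in m.items()}

def weaken(m, alpha):
    """Power operation of Definition 7: element-wise power in the commonality domain."""
    q = commonality(m)
    return normalize(mass_from_q({A: q[A] ** alpha for A in SETS}))

def dempster(m1, m2):
    fused = {A: 0.0 for A in SETS}
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        if a & b:
            fused[a & b] += v1 * v2  # mass on the empty intersection is dropped
    return normalize(fused)

m1 = {0b001: 0.5, 0b010: 0.2, 0b111: 0.3}
m2 = {0b011: 0.6, 0b100: 0.1, 0b111: 0.3}
alpha = 0.4
lhs = dempster(weaken(m1, alpha), weaken(m2, alpha))   # weaken first, then fuse
rhs = weaken(dempster(m1, m2), alpha)                  # fuse first, then weaken
assert all(abs(lhs[A] - rhs[A]) < 1e-9 for A in SETS)  # Theorem 1: identical
```

The equality holds because the commonality of a Dempster combination is proportional to the product of the operands' commonalities, so powering before or after fusion yields the same vector up to the normalization constant.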

4.2. Illustrative Examples and Comparative Analysis

To intuitively demonstrate the effect of the proposed weakening operation and to contrast it with traditional discounting, we present two numerical examples based on a ternary frame of discernment  Θ = { θ 1 , θ 2 , θ 3 } .

4.2.1. Effect of the Weakening Operation

We begin by examining the behavior of the weakening operation under various  α . A BPA  m  is randomly generated, as shown in Table 1.
The geometric image [27] of the BPA is shown in Figure 2.
By Definition 7, we compute the powers of the BPA for various exponents and display the results in Table 2. The geometric images of these powers and the basic probability numbers of each proposition are depicted in Figure 3.
From Table 2 and Figure 3, it is evident that:
(1)
As the exponent increases, the coverage area of the BPA gradually contracts;
(2)
When the exponent is 0, the basic probability number of the complete uncertainty is 1, and the basic probability numbers of other propositions are all 0, with the widest coverage area;
(3)
When the exponent increases to 600 and 700, the largest basic probability numbers are 0.9997 and 0.9999, respectively, approaching 1, and the coverage area shrinks almost to a single point. With a further increase in the exponent, the evidence increasingly supports the decision  θ 2 ;
(4)
The proposition  θ 2  does not always hold a dominant position over the other propositions at every exponent. Rather, it gradually takes the leading position as the exponent increases, and its basic probability number ultimately tends towards 1.

4.2.2. Example 2: Comparison with Traditional Discounting

The weakening method proposed in this article has a similar effect to discounting. For convenience, we refer to both the discount factor and the weakening factor as the confidence parameter of the evidence, denoted as α ∈ [0, 1].
The BPA data from Example 1 is used. For confidence parameters α ∈ {0, 0.1, 0.3, 0.6, 0.9, 1}, the results of weakening and discounting the BPA are shown in Table 3.
The basic probability numbers of each proposition under different confidence parameters are shown in Figure 4.
From Table 3 and Figure 4, we can observe the following:
(1)
As previously analyzed, when  α = 0  or  α = 1 , both methods have exactly the same effect;
(2)
For intermediate values, discounting and weakening produce different effects: for discounting of the evidence, the basic probability numbers of all proper subsets decrease proportionally; conversely, for weakening of the evidence, the basic probability numbers of all proper subsets decrease non-linearly, and their rate of change is related to the original BPA.
(3)
When  α = 0 , both weakening and discounting transfer the entire basic probability mass to the universal set, representing complete ignorance.
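The contrast between the two operations can be reproduced with a small script. The sketch below assumes classical Shafer discounting and the commonality-power realization of weakening suggested by the proof sketch in Section 4.1; both are implementation assumptions for illustration, not the paper's exact code.

```python
SETS = range(1, 8)   # nonempty subsets of a ternary frame, as 3-bit masks
THETA = 0b111

def discount(m, alpha):
    """Classical discounting: every proper focal set is scaled by alpha
    and the remaining mass (1 - alpha) is moved onto Theta."""
    out = {A: alpha * m.get(A, 0.0) for A in SETS}
    out[THETA] += 1.0 - alpha
    return out

def weaken(m, alpha):
    """Weakening: element-wise power of the commonality vector, then
    Moebius inversion back to a mass function and renormalization."""
    Q = {A: sum(v for B, v in m.items() if B & A == A) ** alpha for A in SETS}
    w = {A: sum((-1) ** bin(B & ~A & THETA).count("1") * Q[B]
                for B in SETS if B & A == A) for A in SETS}
    t = sum(w.values())
    return {A: v / t for A, v in w.items()}

m = {0b001: 0.4, 0b010: 0.3, 0b011: 0.1, 0b111: 0.2}

# At alpha = 0, both operations yield the vacuous BPA (total ignorance) ...
assert abs(discount(m, 0.0)[THETA] - 1.0) < 1e-9
assert abs(weaken(m, 0.0)[THETA] - 1.0) < 1e-9
# ... and at alpha = 1, both leave the BPA unchanged.
assert all(abs(discount(m, 1.0)[A] - m.get(A, 0.0)) < 1e-9 for A in SETS)
assert all(abs(weaken(m, 1.0)[A] - m.get(A, 0.0)) < 1e-9 for A in SETS)

# In between, discounting scales each proper subset by exactly alpha,
# while weakening changes it non-linearly at a BPA-dependent rate.
for a in (0.3, 0.6, 0.9):
    d, w = discount(m, a), weaken(m, a)
    print(a, round(d[0b001] / m[0b001], 3), round(w[0b001] / m[0b001], 3))
```

The printed ratios make observation (2) concrete: the discounting ratio equals α itself, whereas the weakening ratio depends on the original BPA.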

4.3. Operational Implications for Distributed Protocol Design

The evidence weakening operation defined in Section 4.1, along with its proven commutativity (Theorem 1), provides the indispensable theoretical foundation for the event-driven correction mechanism. This mathematical property translates directly into three pivotal design principles that address the core challenges of distributed, asynchronous systems: locality, penetrability, and consistent weight propagation.

4.3.1. Locality of Correction

The commutativity property ensures that the order of applying the weakening operation (adjusting evidence reliability) and the Dempster fusion operation is interchangeable. This has a profound implication for system design: corrections can be processed locally and immediately at any node that receives updated evidence.
Mechanism: An intermediate node, upon receiving a new, weakened BPA from a child node, does not need to request retransmission of raw data from its other children or initiate a global synchronization. Instead, it can recalculate its own fused state locally by substituting the updated child's BPA into its fusion formula, leveraging the cached historical states of its other children.
Benefit: This locality eliminates the need for backward propagation to original evidence sources or a costly full-network re-computation. It confines the computational and communication overhead of a correction event largely to the nodes along the path from the source to the root, dramatically reducing latency and enhancing scalability.

4.3.2. Penetrability of Correction

The commutativity, combined with the associativity of Dempster's rule, guarantees that a correction signal can propagate upward through the network hierarchy in a hop-by-hop, "penetrative" manner without losing global consistency.
Mechanism: Each intermediate node along the path from the correcting leaf to the root can perform its local recalculation (as described above) asynchronously and in parallel with simply forwarding the correction packet upstream. The mathematical guarantees ensure that the result of this distributed, incremental update process is identical to the result of a hypothetical, centralized global re-fusion performed at the root node.
Benefit: This penetrability enables a pipeline effect. The root node can begin integrating the update as soon as the packet arrives, without waiting for all intermediate nodes to finish their local computations. This design maximizes parallelism, minimizes end-to-end correction latency, and obviates the need for complex centralized coordination.

4.3.3. Consistent Weight Propagation

The weakening factor α serves as a carrier for meta-information regarding the evidence source’s credibility, timeliness, or context. A critical advantage of the weakening operation over traditional discounting is how this weight is handled during fusion.
Mechanism: Because the weakening operation commutes with fusion, the semantic meaning of the weight α is preserved and consistently applied regardless of the network topology or the order in which evidence from different nodes arrives. The weight is attached to the evidence's BPA and influences the fusion outcome in a mathematically well-defined and order-invariant way.
Benefit: This consistent weight propagation prevents the semantic distortion or ambiguity that plagues non-commutative discounting schemes, where the final interpretation of a “discount factor” can depend on the accidental sequence of operations. It ensures that the reliability information encoded in α is faithfully and predictably reflected in the global situational awareness.

4.3.4. Synthesis: From Principles to Protocol

In summary, the commutativity of the evidence weakening operation is not merely an abstract mathematical curiosity; it is the enabling technology for a practical correction protocol. It dictates a protocol architecture where:
Updates are processed locally at each node, minimizing overhead.
Update packets can penetrate directly to the root, enabling low-latency propagation.
Reliability weights are propagated consistently, ensuring meaningful data fusion.
These three principles—locality, penetrability, and consistent weight propagation—directly inform the concrete design of the event-driven correction protocol presented in the next chapter. The protocol operationalizes these principles through specific packet structures, node behaviors, and state update rules, thereby transforming the theoretical guarantees of Section 4.1 into a working system for UAV swarm situational awareness.

5. Design and Implementation of the Distributed Event-Driven Correction Protocol

Building upon the commutative evidence weakening operation established in Section 4, this chapter details the design of the event-driven correction protocol. First, the overall workflow and core mechanism of the protocol are introduced. Subsequently, the specific behaviors and algorithms for each type of node (leaf, intermediate, and root) are elaborated, followed by an analysis of the protocol’s key attributes derived from its theoretical foundation. Finally, to enable rigorous performance validation, the comprehensive simulation methodology and experimental setup are described in detail to ensure reproducibility.

5.1. Protocol Overview and Workflow

The protocol’s core innovation is its penetrative propagation mechanism, enabled by the commutativity of the weakening operation. The end-to-end workflow, depicted in Figure 5, involves coordinated actions across leaf, intermediate, and root nodes, and consists of three stages:
Event detection and packet generation (Leaf Node): A leaf node that detects a significant change in its local situational awareness computes a weakening factor based on its current credibility, applies the weakening operation (Definition 1), and generates a correction packet containing the old and new weakened evidence.
Penetrative upload and asynchronous recalculation (Intermediate Node): An intermediate node receiving a correction packet executes two processes concurrently: (i) immediately forwarding the packet to its parent node to minimize latency, and (ii) asynchronously initiating a local state recalculation by leveraging cached historical states and commutativity to update its own fused BPA without requiring retransmission from other child nodes.
Global incremental update (Root Node): The root node identifies the affected subtree using the path information in the packet. It then computes an incremental update to the global situational awareness by fusing the new evidence with the cached states of unaffected subtrees. Commutativity guarantees this result is consistent with a full global re-computation. The root may also arbitrate potential conflicts arising from concurrent corrections.

5.2. Node Behaviors and Detailed Algorithms

5.2.1. Event Detection and Packet Generation (Leaf Node)

Upon detecting a significant change, a leaf node  C i :
(1)
Validates the new BPA  m n e w .
(2)
Weakens the evidence based on its current credibility (channel quality, node reliability), determining a weakening factor and computing  m n e w α C i  using Definition 7.
(3)
Generates a Correction Packet containing:
  • Node ID, Timestamp, Path to the root.
  • Weakened BPAs:  m o l d α C i (cached) and  m n e w α C i .
  • The weakening factor  α C i .

5.2.2. Penetrative Upload and Asynchronous Recalculation (Intermediate Nodes  B i )

An intermediate node receiving a Correction Packet executes two processes concurrently:
(1)
P1: forwarding: immediately forwards the packet to its parent node, minimizing latency.
(2)
P2: local recalculation: asynchronously, initiates a local state update:
  • It retrieves the historically cached, weakened BPAs of its other child nodes.
  • Leveraging commutativity, it recalculates its own state by substituting the updated child’s BPA, avoiding the need for other children to retransmit data.
  • It updates its local cache and may optionally propagate a state change digest upward.

5.2.3. Global Incremental Update (Root Node  A )

The root node  A , upon receiving the packet:
(1)
Identifies the affected subtree using the Path field.
(2)
Computes the incremental update to the global state by fusing the new evidence from the packet with the cached states of unaffected subtrees. Commutativity ensures this local update is consistent with a full global re-computation.
(3)
Updates the global situational awareness  S A .
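As a minimal end-to-end sketch of the three node behaviors, the following toy example uses a 7-node tree with hypothetical node names (leaves C1–C4, intermediate nodes B1/B2, root A), fixed confidence parameters, and Dempster fusion plus weakening implemented via commonality vectors; all of these are illustrative assumptions, not the deployed implementation.

```python
SETS = range(1, 8)   # ternary frame; nonempty subsets of Theta as bitmasks

def fuse(ms):
    """Dempster combination of several BPAs via commonality products."""
    Q = {A: 1.0 for A in SETS}
    for m in ms:
        for A in SETS:
            Q[A] *= sum(v for B, v in m.items() if B & A == A)
    out = {A: sum((-1) ** bin(B & ~A & 7).count("1") * Q[B]
                  for B in SETS if B & A == A) for A in SETS}
    t = sum(out.values())
    return {A: v / t for A, v in out.items()}

def weaken(m, alpha):
    """Weakening: element-wise power of the commonality vector."""
    Q = {A: sum(v for B, v in m.items() if B & A == A) ** alpha for A in SETS}
    w = {A: sum((-1) ** bin(B & ~A & 7).count("1") * Q[B]
                for B in SETS if B & A == A) for A in SETS}
    t = sum(w.values())
    return {A: v / t for A, v in w.items()}

# Tree: root A <- {B1, B2}; B1 <- {C1, C2}; B2 <- {C3, C4}.
leaf_bpa = {
    "C1": {0b001: 0.5, 0b111: 0.5},
    "C2": {0b010: 0.4, 0b011: 0.2, 0b111: 0.4},
    "C3": {0b001: 0.3, 0b101: 0.3, 0b111: 0.4},
    "C4": {0b100: 0.2, 0b111: 0.8},
}
alpha = {"C1": 0.9, "C2": 0.7, "C3": 0.8, "C4": 0.6}

# Stage 0: initial bottom-up fusion; parents cache weakened child BPAs.
cache_B1 = {c: weaken(leaf_bpa[c], alpha[c]) for c in ("C1", "C2")}
cache_B2 = {c: weaken(leaf_bpa[c], alpha[c]) for c in ("C3", "C4")}
cache_A = {"B1": fuse(cache_B1.values()), "B2": fuse(cache_B2.values())}

# Stage 1: a correction event at C1 generates a packet of weakened evidence.
new_c1 = {0b010: 0.6, 0b111: 0.4}
packet = {"node": "C1", "path": ("B1", "A"), "new": weaken(new_c1, alpha["C1"])}

# Stage 2: B1 substitutes the updated child and refuses only cached siblings.
cache_B1[packet["node"]] = packet["new"]
cache_A["B1"] = fuse(cache_B1.values())

# Stage 3: the root refuses cached subtree states; B2's state is reused as-is.
global_sa = fuse(cache_A.values())

# Consistency: identical to a full flat recomputation from all current leaves.
current = {**leaf_bpa, "C1": new_c1}
full = fuse([weaken(current[c], alpha[c]) for c in current])
assert all(abs(global_sa[A] - full[A]) < 1e-9 for A in SETS)
```

The final assertion is the protocol's correctness claim in miniature: the incremental, cache-based update along the path C1→B1→A reproduces the result of a hypothetical global re-fusion.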

5.3. Protocol Analysis: Theoretical Properties and a Comparative Study

The protocol’s correctness and inherent efficiency are not merely empirical observations but are direct mathematical corollaries of the theoretical foundation established in Section 4.

5.3.1. Theoretical Guarantees: Correctness, Locality, and Penetrability

Correctness via commutativity: The commutativity of the weakening operation with Dempster’s rule (Theorem 1) is fundamental. It ensures that the sequence of “weaken then fuse” can be reversed at any intermediate node. This property permits correct local re-computation using only cached, weakened evidence from sibling nodes. The final result at the root is guaranteed to be mathematically identical to a hypothetical, synchronous global re-fusion of all raw evidence, thereby satisfying the global consistency goal (G1).
Scalability via associativity: The associativity of Dempster’s rule, preserved under the weakening operation, ensures that partial fusion results can be correctly aggregated hop-by-hop. This supports truly distributed and asynchronous processing across the multi-layer network. Corrections can penetrate upward while intermediate nodes recalculate in parallel, without requiring global synchronization or altering the final outcome.

5.3.2. Comparative Analysis: Order-Dependent Discounting vs. Order-Invariant Weakening

To elucidate the fundamental operational difference, we consider a canonical scenario: a 9-UAV swarm in a three-tier hierarchy (Figure 1) collaboratively identifying a target’s operational mode. Each node has a BPA and an associated confidence parameter ( α ). Figure 6 depicts two computational models for fusing this evidence.
In the figure, the dashed boxes denote the two computational models of information fusion: “−⊲−” symbolizes discounting or weakening processing, and “⊕” represents evidence combination. The primary distinction between the two models lies in the order of discounting and fusion. Computational Model 1 is a hierarchical fusion model, whereas Computational Model 2 first performs the weighting operations in parallel and then combines the results. Because the evidence weakening approach preserves the associative law of evidence combination, Computational Model 1 is equivalent to Computational Model 2; this equivalence does not hold for the classical discounting method. It should be noted that these are equivalent models in calculation and do not genuinely modify the generation and transmission of information. Nevertheless, when perceptual information is added, removed, or changed, Computational Model 2 offers superior flexibility and adaptability.
As the number and hierarchy of nodes increase and as the communication and sensing environment deteriorates, the advantages of Computational Model 2 become more evident. This model allows the sensor network to process incoming information asynchronously, rather than synchronously. This capability further underscores the significant research implications and practical applications of our proposed evidence-weakening method.

5.3.3. Timeline of an Event-Driven Correction

To visualize the protocol workflow, consider a timeline within the aforementioned swarm:
(1)
t 0 (initial state): Most UAVs report evidence strongly supporting “Searching” mode.
(2)
t 1 (event trigger): A front-line UAV ( C 4 ) detects a sudden change in radar beam patterns, indicative of “Tracking” mode. It generates a new BPA strongly supporting “Tracking”.
(3)
Correction process: UAV  C 4  triggers the correction protocol. The Correction Packet penetrates the network ( C 4 B 2 A ). Intermediate node  B 2  asynchronously recalculates its regional picture using cached data from other children.
(4)
Global consensus: The root node fuses the update, and the global situational awareness consistently and rapidly converges from “Searching” to “Tracking” with high confidence.

5.3.4. Inherent Efficiency Advantages

The protocol offers significant performance benefits over traditional methods:
  • Low Latency: Penetrative forwarding ensures the correction request reaches the root with minimal delay.
  • Reduced Communication Overhead: Only lightweight correction packets traverse the network, not raw data or full state snapshots.
  • Distributed Computational Load: Local recalculation parallelizes the workload across intermediate nodes, preventing a root-node bottleneck. The computational complexity per node is largely independent of network size.
  • Scalability: The protocol’s performance degrades gracefully (sublinearly) as the network grows, unlike the linear/global re-computation overhead of baseline methods.
This section has presented a concrete protocol that effectively operationalizes the theory of evidence weakening. By leveraging commutativity and associativity, the protocol achieves efficient, consistent, and scalable event-driven correction, fulfilling the core requirements set forth for dynamic UAV swarm situational awareness. The next chapter will validate these claims through extensive simulations.

6. Experimental Validation and Performance Analysis

This chapter presents a comprehensive experimental evaluation of the proposed event-driven correction protocol. We begin by detailing the simulation setup to ensure reproducibility. Subsequently, we systematically analyze the results across multiple dimensions—correctness, robustness, and efficiency—comparing the proposed method against established baselines. The chapter concludes with a discussion synthesizing the findings, addressing limitations, and outlining future work.

6.1. Simulation Setup and Methodology

All simulations were conducted in a MATLAB (vR2022a)-based discrete-event environment designed to model distributed UAV swarm networks.

6.1.1. The Procedure of a Single Monte Carlo Trial

A single Monte Carlo trial follows this procedure: (1) Generate a random hierarchical tree topology; (2) Initialize BPAs and node weights; (3) Select a random leaf node, perturb its BPA to simulate a local evidence update, and trigger the correction event; (4) Execute all four method combinations (defined in Section 6.1.4); (5) Record all performance metrics. This process is repeated for 30,000 independent trials per parameter configuration, ensuring statistical reliability at a confidence level of at least 0.95.

6.1.2. Parameter Configuration

Key simulation parameters are summarized in Table 4. These parameters are designed to reflect dynamic, real-world operational scenarios and are varied stochastically across trials to ensure generalizability and avoid overfitting to specific configurations.

6.1.3. Network Topology and Evidence Model Initialization

(1)
Topology generation: Networks are generated as hierarchical trees. Starting from the root, child nodes are recursively added until the specified depth  D  is reached, with the number of children per node governed by the fan-out coefficient  F . This models command layers in UAV swarms, from small squads (e.g., 9 nodes with  D = 2 F = 2 ) to large-scale clusters (thousands of nodes).
(2)
Evidence and state initialization: Each leaf node is assigned a randomly generated BPA over  Θ . A node weight  α (confidence parameter) is randomly assigned from a uniform distribution over  [ 0 ,   1 ]  to model heterogeneous reliability. An initial global state is formed via bottom-up fusion, and all intermediate nodes cache their subtree’s fused BPA to support asynchronous recomputation during corrections.
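A possible sketch of this initialization step is given below. The recursive tree construction and the uniform α distribution follow the description above; the fixed fan-out per level and the integer node numbering are simplifying assumptions (the simulations vary these stochastically).

```python
import random

def build_tree(depth, fanout, rng):
    """Recursively grow a hierarchical tree of node ids: id 0 is the root,
    and every node receives `fanout` children until `depth` levels exist."""
    tree, frontier, next_id = {0: []}, [0], 1
    for _ in range(depth):
        new_frontier = []
        for parent in frontier:
            kids = list(range(next_id, next_id + fanout))
            next_id += fanout
            tree[parent] = kids
            for k in kids:
                tree[k] = []
                new_frontier.append(k)
        frontier = new_frontier
    leaves = [n for n, kids in tree.items() if not kids]
    # Heterogeneous reliability: each leaf draws alpha ~ U[0, 1].
    alphas = {n: rng.random() for n in leaves}
    return tree, leaves, alphas

rng = random.Random(42)
tree, leaves, alphas = build_tree(depth=2, fanout=2, rng=rng)
print(len(tree), len(leaves))   # prints: 7 4
```

With a fixed fan-out of 2 and depth 2, this yields a 7-node tree (1 root, 2 intermediates, 4 leaves); mixed fan-outs produce the other squad sizes discussed in the text.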

6.1.4. Baseline Methods for Comparison

To ensure a fair evaluation, four method combinations are implemented as benchmarks. Each combination is tested under identical network topologies and random seeds, allowing for effective control of variables. The design emphasizes contrasting the commutativity-driven advantages of the proposed method with traditional approaches.
Combination 1: discount-serial (method = ‘discount’; strategy = ‘serial’). Applies traditional discounting (traditional_discount) sequentially along the path from leaf to root nodes.
Combination 2: discount-parallel (method = ‘discount’; strategy = ‘parallel’). Attempts parallel processing but reverts to global recomputation (recalculate complete fusion) due to non-commutativity, resulting in inefficiency.
Combination 3: weakening-serial (method = ‘weakening’; strategy = ‘serial’). Uses the evidence weakening operation in a serial path processing manner, validating the benefits of commutativity in sequential scenarios.
Combination 4: weakening-parallel (method = ‘weakening’; strategy = ‘parallel’). Integrates the commutativity of evidence weakening with parallel strategies, enabling penetrative propagation and asynchronous recomputation. This represents the optimized protocol proposed in this study.

6.1.5. Evaluation Metrics

Metrics for evaluating the protocol’s performance are based on 30,000 Monte Carlo trials for statistical reliability.
(1)
Decision Change Rate.
The probability that the global situational awareness decision differs after a correction event compared to the pre-correction state. This metric quantifies the disturbance degree of corrections on the global state.
(2)
Discount Method Error Rate.
The rate of discrepancy between the global decision results obtained with the discount method under parallel structures versus serial structures. This metric specifically measures correction errors caused by the non-commutativity of traditional discounting operations. A higher error rate indicates greater unreliability of discount methods in distributed parallel environments due to their order dependence. In contrast, the evidence weakening method maintains a zero error rate thanks to its commutativity property. This serves as a key indicator of the theoretical advantage of the proposed weakening operation.
(3)
Correction Latency.
The total time taken (in milliseconds) from when a correction event is triggered at a leaf node until the global state update is completed at the root node. This duration includes transmission delays (calculated as the number of path hops multiplied by the base delay), the time taken for operations (such as weighting and fusion processing), and the time required for accessing storage.
(4)
Communication Overhead.
The total data transmission volume (measured in bytes) across the network that results from a correction event is calculated based on the size of the correction packet. This size is determined by the BPA vector dimension multiplied by 4 bytes, plus any additional metadata. The total is accumulated over each transmission hop. For instance, if the BPA length is 7, a single packet is approximately 36 bytes. This metric assesses bandwidth efficiency, which is crucial for networks with limited resources.
(5)
Storage Overhead.
The memory consumption per node (in bytes per node) needed to cache historical states for asynchronous recomputation is calculated based on the total length of the cached BPA vectors, multiplied by 4 bytes (a value that can be adjusted), and averaged across all nodes. This metric is used to evaluate the protocol’s scalability. Ideally, the growth in storage overhead should be sublinear to efficiently support large-scale networks.
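The metric formulas above can be collected into a few helper functions. The 8-byte metadata size is an assumption chosen so that a length-7 BPA reproduces the ~36-byte packet quoted in the text, and the delay values in the example call are hypothetical.

```python
def packet_size(bpa_len, meta_bytes=8):
    """Correction-packet size: 4 bytes per BPA vector component plus
    metadata (node id, timestamp, path). meta_bytes = 8 is an assumption
    that makes a length-7 BPA come out at 36 bytes, as quoted above."""
    return bpa_len * 4 + meta_bytes

def correction_latency(hops, base_delay_ms, op_ms, storage_ms):
    """Transmission (hops x base delay) plus operation and storage time."""
    return hops * base_delay_ms + op_ms + storage_ms

def comm_overhead(bpa_len, hops):
    """Total bytes a single correction event puts on the network."""
    return packet_size(bpa_len) * hops

# Ternary frame: the BPA vector has 2**3 - 1 = 7 components.
assert packet_size(7) == 36
# Hypothetical per-hop delay and processing times (milliseconds).
print(comm_overhead(7, hops=5), correction_latency(5, 2.0, 1.5, 0.5))
# prints: 180 12.0
```

The packet size grows with the BPA dimension (2^|Θ| − 1 components), which is why the frame size dominates the scaling results reported in Section 6.2.3.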

6.2. Results and Analysis

This section examines the performance of the event-driven correction protocol across various network topologies and configurations. Through Monte Carlo simulations, we highlight the protocol’s behavior and robustness in dynamic environments.

6.2.1. Fundamental Limitation: Decision Inconsistency Risk of Traditional Discounting

Let $m_1$ and $m_2$ be two BPAs on the same frame of discernment $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$, and denote $m_{12} = {}^{\alpha}(m_1 \oplus m_2)$ and $m_{21} = {}^{\alpha}m_1 \oplus {}^{\alpha}m_2$, where ${}^{\alpha}m$ denotes the discounting of $m$ with confidence parameter $\alpha$. In this study, we use pignistic probabilities to make decisions, specifically choosing the subset with the highest pignistic probability value as the outcome.
Next, we introduce the risk coefficient of decision inconsistency to quantify the likelihood of inconsistent decisions. The coefficient is estimated through simulation as follows:
  • Firstly, utilizing the Monte Carlo simulation method, a large number of  m 1  and  m 2  are randomly generated, then  m 12  and  m 21  are calculated.
  • Secondly, let  B e t P 12  and  B e t P 21  represent pignistic functions corresponding to  m 12  and  m 21 , respectively. Then let  J 12  and  J 21  denote the decision results based on  m 12  and  m 21 . If  J 12 = J 21 , it is deemed a consistent decision; otherwise, it is considered an inconsistent decision.
  • Thirdly, the frequency of inconsistent decisions is tallied as the risk coefficient of decision inconsistency.
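The three-step estimation procedure can be sketched as follows, assuming a ternary frame, classical discounting, and Dempster fusion via commonality products; these are implementation choices for illustration, not the paper's exact simulation code.

```python
import random

SETS = range(1, 8)                      # ternary frame, bitmask subsets
SINGLETONS = (0b001, 0b010, 0b100)

def random_bpa(rng):
    """Random BPA: positive mass on every nonempty subset, normalized."""
    w = [rng.random() for _ in SETS]
    t = sum(w)
    return {A: x / t for A, x in zip(SETS, w)}

def fuse(m1, m2):
    """Dempster's rule via commonality products and Moebius inversion."""
    q = lambda m, A: sum(v for B, v in m.items() if B & A == A)
    Q = {A: q(m1, A) * q(m2, A) for A in SETS}
    out = {A: sum((-1) ** bin(B & ~A & 7).count("1") * Q[B]
                  for B in SETS if B & A == A) for A in SETS}
    t = sum(out.values())
    return {A: v / t for A, v in out.items()}

def discount(m, alpha):
    """Classical discounting with confidence parameter alpha."""
    out = {A: alpha * v for A, v in m.items()}
    out[0b111] = out.get(0b111, 0.0) + 1.0 - alpha
    return out

def decide(m):
    """Decision: the singleton with the highest pignistic probability."""
    betp = {s: sum(v / bin(A).count("1") for A, v in m.items() if A & s)
            for s in SINGLETONS}
    return max(betp, key=betp.get)

def risk_coefficient(alpha, trials, seed=1):
    """Frequency with which J_12 (fuse then discount) and J_21
    (discount then fuse) disagree, per the three steps above."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        m1, m2 = random_bpa(rng), random_bpa(rng)
        j12 = decide(discount(fuse(m1, m2), alpha))
        j21 = decide(fuse(discount(m1, alpha), discount(m2, alpha)))
        bad += j12 != j21
    return bad / trials

print(risk_coefficient(alpha=0.3, trials=2000))
```

At α = 1 the two orders coincide and the estimated risk is exactly zero; as α decreases, disagreements appear, which is the qualitative trend reported for Figure 7.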
Figure 7 illustrates the variation of the inconsistency risk coefficient with the confidence parameter for  n  = 2, 3, 4, 5, and 6. It can be observed that the smaller the confidence parameter and the larger the number of elements in the frame of discernment, the higher the probability of decision inconsistency.
Furthermore, let the number of BPAs be  N . Figure 8 presents the variation of the inconsistency risk coefficient with the confidence parameter for  N  = 2, 3, 4, 5, and 6 when  n = 3 . It can be observed that the larger the number of BPAs and the lower the confidence parameter, the higher the probability of decision inconsistency.
In summary, these comprehensive findings do more than quantify a trend; they fundamentally reveal a critical vulnerability. The escalating decision inconsistency with increasing frame size, number of evidence sources, and lower reliability quantitatively demonstrates the inherent and severe unsuitability of the traditional discounting operation for distributed, asynchronous fusion. Its order-dependent nature makes consistent outcomes under dynamic conditions unattainable. These results empirically validate the fundamental inadequacy of discounting for distributed fusion and directly motivate the commutative weakening alternative proposed herein.

6.2.2. Correctness and Robustness Under Network Dynamics

This section comprehensively evaluates the protocol’s correctness (measured by the correction error rate) and robustness (observed via the decision change rate) under dynamic network structures and varying correction parameters. A systematic comparison between the proposed weakening method and the traditional discounting method is presented to highlight the former’s theoretical advantages in practical dynamic environments.
A. Impact of Network Topology
Network topology serves as the backbone of distributed systems, directly influencing the propagation paths of correction events and the effectiveness of evidence fusion. The combined effect of network depth ( D ) and fan-out coefficient ( F ) is analyzed using heatmaps.
As shown in Figure 9, the decision change rate decreases significantly with increasing network depth and fan-out. For example, under moderate fan-out conditions (third row in Figure 9), the rate drops from 23.65% at depth = 3 to 0.75% at depth = 8, a reduction of 96.8%. For a fixed depth of 5, the change rate decreases from 17.00% to 3.90% as the fan-out increases. This indicates that more complex topologies (high depth, high fan-out) provide more “buffering layers” for evidence fusion, effectively smoothing the impact of local corrections and enhancing global situational stability. The lowest decision change rate (0.45%) occurs at  d e p t h = 8  with the highest fan-out, demonstrating a synergistic stabilizing effect.
Discount method performance (Figure 10a): Its error rate is highly dependent on topology, reaching up to 11.4% in simple networks ( d e p t h = 3 , lowest fan-out). The error rate decreases with increasing depth (e.g., from 11.4% to 5.4% at minimum fan-out) and fan-out (e.g., from 11.4% to 5.4% at depth = 3), indicating that complex topologies can suppress, but not eliminate, the error propagation caused by non-commutativity.
Weakening method performance (Figure 10b): In stark contrast, the weakening method maintains a constant zero error rate across all topological configurations. This validates its mathematical superiority: the commutativity of evidence weakening with Dempster’s combination ensures consistency between parallel and serial strategies, independent of network structure. This finding provides a decisive basis for selecting this method in high-reliability scenarios.
B. Impact of correction parameters
Sensitivity analysis of the correction parameters—node weight ( α ) and correction magnitude ( Δ )—is crucial for practical system deployment.
Impact on decision change rate: Figure 11 shows that the decision change rate increases steadily with node weight, from 1.38% in low-weight regions to 9.46% in high-weight areas, as corrections from more reliable nodes rightly receive greater emphasis. The change rate saturates quickly beyond a correction magnitude threshold of ~0.4, stabilizing between 9.5% and 9.6%. This saturation behavior has practical engineering value, allowing systems to trigger global corrections only for significant changes, avoiding overreaction to minor fluctuations.
Impact on correction error rate: As shown in Figure 12 (for the discount method only), the relationship between correction parameters and error rate is more complex and unfavorable.
Correction magnitude analysis (Figure 12b): Error rates show a fluctuating upward trend, ranging from 21.8% to 35.1%, significantly higher than the effect of weight. The weak and unstable correlation between magnitude and error rate indicates unpredictable responses of the discount method to correction intensity. The peak error rate (35.1%) at magnitude = 0.9 highlights the highest risk under extreme corrections.
Summary: The experimental results in this section demonstrate that while increased topological complexity can enhance decision stability, it cannot overcome the inherent correction errors of the traditional discounting method stemming from its non-commutativity. In contrast, the weakening method, by virtue of its mathematical properties, guarantees absolute correctness (zero error rate) under any network topology and parameter setting. Furthermore, the parameter analysis reveals the unpredictable error behavior of the discounting method, underscoring the necessity for a deterministic and reliable alternative—the weakening operation proposed in this work.

6.2.3. Efficiency and Scalability Performance

This section presents a comprehensive evaluation of the protocol’s operational efficiency along three critical dimensions: correction latency, communication overhead, and storage overhead. Based on extensive Monte Carlo simulations, we analyze the performance across different method combinations and investigate the scaling behavior with respect to key network and task parameters, thereby revealing intrinsic performance characteristics and optimization potentials.
A. Comparative performance of method combinations
Figure 13 provides a holistic comparison of the four method combinations (discount-serial, discount-parallel, weakening-serial, weakening-parallel) across the core performance metrics.
The results unequivocally demonstrate the systemic advantages of parallel propagation strategies enabled by the penetrative update mechanism:
(1)
Latency performance: Parallel strategies achieve significantly lower end-to-end correction latency. Both discounting and weakening methods exhibit an average reduction of approximately 40% in their parallel versions compared to their serial counterparts. This gain stems from the ability of parallel (penetrative) propagation to forward correction packets upstream immediately, overlapping transmission with local asynchronous recomputation, thereby avoiding the sequential bottleneck inherent in serial strategies, which incur latency proportional to the network depth  D .
(2) Communication overhead: Parallel strategies substantially reduce network bandwidth consumption. By minimizing redundant data transmission through intermediate nodes—each node only forwards a lightweight correction packet rather than retransmitting full raw evidence or global states—the communication overhead is lowered by an average of 35%.
(3) Storage overhead: A critical trade-off is observed. Serial methods, which require intermediate nodes to cache more detailed historical states to support potential rollbacks or sequential reprocessing, incur approximately 400% higher storage overhead per node than parallel methods. This highlights a significant efficiency advantage of the parallel architecture in memory-constrained environments.
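The latency gap between the two strategies can be seen in a toy critical-path model. This is a back-of-envelope sketch, not the paper's simulator: the function names, depth, and unit costs below are hypothetical illustrations.

```python
def serial_latency(depth: int, tx: float, recompute: float) -> float:
    """Serial strategy: every hop finishes its local recomputation before
    forwarding upstream, so both costs accumulate at each of `depth` hops."""
    return depth * (tx + recompute)


def parallel_latency(depth: int, tx: float, recompute: float) -> float:
    """Penetrative (parallel) strategy: the correction packet is forwarded
    immediately, so transmission overlaps recomputation and only a single
    recomputation remains on the critical path."""
    return depth * tx + recompute


# Illustrative numbers: depth 5, one time unit per hop and per recompute.
s = serial_latency(5, 1.0, 1.0)    # 10.0
p = parallel_latency(5, 1.0, 1.0)  # 6.0
print(f"serial={s}, parallel={p}, reduction={(s - p) / s:.0%}")
```

With these illustrative costs the overlap alone accounts for a 40% reduction, the same order as the averages reported above; the actual figures depend on the simulator's cost model.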
B. Scaling behavior with network and task parameters
Figure 14 systematically illustrates how the performance metrics scale with network depth ( D ), fan-out coefficient ( F ), and frame size ( | Θ | ), offering insights into the protocol’s adaptability to different operational scales and complexities.
The analysis reveals distinct scaling patterns for serial and parallel structures:
(1) Performance of serial structures:
  • vs. network depth ( D ): All three metrics (latency, communication, storage) scale approximately linearly with network depth. Deeper trees introduce more hops and fusion stages, directly increasing propagation delay and cumulative data transfer.
  • vs. fan-out ( F ): Latency and communication overhead are largely unaffected by fan-out in serial schemes, as the critical path remains the chain from leaf to root. However, storage overhead scales linearly with F, as each parent node must cache state from all its children.
  • vs. frame size (|Θ|): All metrics scale exponentially with the frame size, as the dimensionality of the BPA vector (and thus the size of transmitted packets and cached states) is 2^|Θ| − 1.
(2) Performance of parallel structures (including the proposed weakening-parallel):
  • vs. network depth ( D ): Latency scales linearly with depth, as it is primarily determined by the number of hops to the root. Crucially, both communication and storage overhead grow sublinearly with depth, showcasing the protocol’s superior scalability. This is because the penetrative propagation and local recomputation confine the communication and storage burden largely to the path of the correction, not the entire network.
  • vs. fan-out ( F ): All three metrics are largely unaffected by fan-out. The parallel protocol’s performance is decoupled from the breadth of the tree, as updates do not require synchronization across siblings at the same level.
  • vs. frame size ( | Θ | ): Similar to serial structures, metrics scale exponentially with frame size, reflecting the fundamental data complexity of the evidence representation.
Summary: The weakening-parallel combination (the proposed protocol) consistently delivers the best overall performance profile. It not only maintains a zero correction error rate (as established in Section 6.2.2) but also achieves the lowest latency and communication overhead among all compared methods while demanding minimal storage. Its scaling laws—linear latency and sublinear communication/storage overhead with respect to network depth, together with independence from fan-out—formally validate its efficiency and scalability for large-scale, dynamic UAV swarm networks.

6.2.4. Analysis of Negative BPA Phenomenon and Decision Consistency

This section addresses the potential emergence of negative Basic Probability Assignment (BPA) values—a recognized sensitivity in Dempster–Shafer Theory—under the proposed weakening operation. We first clarify the mathematical precondition for their occurrence, then present systematic empirical validation demonstrating that, when they appear, these values do not propagate to affect final decisions or compromise global consistency. The analysis confirms that negative intermediate values are a bounded and computationally benign byproduct of the commutative weighting mechanism.
(1) Condition for emergence and empirical validation
Mathematical Precondition: Negative BPA values can emerge as a direct mathematical consequence of applying the power operation (Definition 7) with a weakening factor  α < 1 . It is critical to note that  α < 1  is a necessary but not sufficient condition; their actual manifestation depends on the specific numerical structure of the original BPA. This explains their bounded occurrence in practice, as quantified below.
Empirical Validation and Core Findings: To rigorously assess the impact on decision consistency, we conducted 30,000 Monte Carlo trials across diverse network topologies and parameter ranges (α ∈ [0, 1]). The key results are:
Negative BPA values were observed in 4831 instances (16.1% of all trials), exclusively within the  α < 1  regime. This quantifies the phenomenon’s limited and predictable scope.
Most importantly, in every one of these 4831 instances, two decisive conditions held:
  • The subsequent pignistic probability transformation yielded strictly non-negative probability distributions.
  • Consequently, the final decision (i.e., the hypothesis with maximum pignistic probability) remained perfectly consistent with the decision derived from the theoretical, order-invariant global fusion baseline.
This large-scale experimental evidence robustly supports the claim that negative intermediate BPA values, within the defined operational framework, are isolated to the representation layer and do not affect the system’s final, decision-theoretic output.
(2) Implications for system robustness and theoretical boundary
The preservation of ordinal information through the pignistic transform ensures that decision robustness is maintained irrespective of intermediate value signs. Therefore, the occurrence of negative BPA values is computationally benign and requires no special handling or constraints within the protocol’s pipeline for achieving consistent fusion.
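As a concrete illustration of why the representation-layer sign does not reach the decision layer, the pignistic transform and max-decision can be sketched as follows. The BPA below, including its small negative intermediate mass, is a hypothetical example, not an output of the protocol:

```python
def pignistic(m):
    """Pignistic transformation: BetP(theta) = sum over A containing theta
    of m(A) / |A|, i.e., each focal mass is split evenly among its elements."""
    bet = {}
    for focal, mass in m.items():
        for theta in focal:
            bet[theta] = bet.get(theta, 0.0) + mass / len(focal)
    return bet


# Hypothetical weakened BPA with a negative intermediate value on {a, b};
# the masses still sum to 1.0.
m = {
    frozenset({"a"}): 0.50,
    frozenset({"b"}): 0.30,
    frozenset({"c"}): 0.25,
    frozenset({"a", "b"}): -0.05,  # negative intermediate mass
}
bet = pignistic(m)
decision = max(bet, key=bet.get)
print(bet, decision)
```

Here BetP = {a: 0.475, b: 0.275, c: 0.25}: the distribution is non-negative and the arg-max hypothesis is unaffected by the negative intermediate mass. This mirrors, but does not by itself prove, the behavior observed across the 4831 trial instances; in general the non-negativity must be checked, as done in the experiments.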
In conclusion, our comprehensive analysis confirms that negative BPA values are a natural, bounded consequence ( α < 1 ) of the proposed commutative operation, with no observed adverse effect on decision consistency in distributed fusion. This supports the practical viability of the framework. The development of a complete formal theory to fully characterize the semantics and general algebraic properties of these values constitutes a valuable direction for future theoretical work, as noted in Section 6.3.2.

6.3. Discussion and Limitations

6.3.1. Synthesis of Experimental Findings

The experimental results collectively validate the core theoretical propositions and the design objectives outlined in Section 3.3. First and foremost, regarding correctness (Goal G1), the proposed weakening-parallel protocol achieved a 0% correction error rate across all tested configurations. This result decisively resolves the fundamental limitation of order-dependent inconsistency plaguing traditional discounting in asynchronous, parallel structures, providing direct empirical confirmation of the commutativity property established in Theorem 1.
Second, concerning efficiency and scalability (Goals G2 and G3), the same protocol consistently delivered superior performance, exhibiting the lowest latency and communication overhead while requiring minimal storage. Crucially, its communication and storage overheads scale sublinearly with network depth and are independent of network fan-out, validating the efficiency of its event-driven, localized processing paradigm. This demonstrates that the theoretical advantages of commutativity translate directly into tangible system-level gains.
Finally, the investigation into the negative BPA phenomenon confirmed that it is a mathematically consistent byproduct of the weakening operation that does not propagate to final decisions (pignistic probabilities) or compromise decision consistency, thereby assuring the robustness of the proposed framework.

6.3.2. Limitations and Future Work

While this work establishes a solid foundation, several assumptions delineate its scope and point to productive avenues for future research, addressing points raised during review.
(1) Static reliability weakening factor and its semantic foundation: The parameter α ∈ [0, 1] serves a unified semantic role across both traditional discounting and the proposed weakening operation: it quantifies the reliability of, or confidence in, an evidence source. The core innovation of this work lies in providing a commutative mathematical transformation (the weakening operation) to apply any given α, in contrast to the non-commutative discounting operation. To isolate and prove the benefits of this commutative property, the study adopts a simplifying assumption: α is quasi-static and known within the short time frame of a correction event. In practice, whether α is assigned subjectively or derived objectively is a separate, higher-layer decision. A critical next step is the design of dynamic, context-aware estimation models for α (e.g., based on link quality or data freshness), which is essential for enabling fully autonomous reliability assessment in real-world deployments.
(2) Conflict management in adversarial scenarios: The proposed weakening operation inherently manages conflict by attenuating the influence of less reliable evidence—a core function it inherits and preserves from traditional discounting. Thus, it effectively handles conflicts arising from uncertainty and source heterogeneity, which is the primary focus of this work. For environments with extreme, adversarial conflicts designed to destabilize fusion, the integration of the commutative weakening operator with other robust combination rules (e.g., PCR6) or higher-level conflict detection schemes presents a valuable direction for enhancing resilience.
(3) The validation in this work was intentionally conducted in a controlled, discrete-event simulation environment. This approach served the strategic purpose of isolating and conclusively demonstrating the intrinsic benefits of the commutative weakening operation and the protocol it enables, by eliminating confounding factors from unpredictable network dynamics. While this establishes a vital proof of concept and performance baseline, we acknowledge that it represents a simplifying abstraction from the non-ideal conditions of real-world wireless networks.
Therefore, a critical and necessary next phase of research is to progressively integrate the protocol with high-fidelity network simulators (e.g., NS-3) or hardware-in-the-loop testbeds. This will enable the assessment of its performance and robustness under realistic network impairments—such as packet loss, latency jitter, bandwidth contention, and intermittent connectivity. Such a stepwise integration is essential to bridge the gap between algorithmic validation and practical system deployment, quantifying the protocol’s resilience and informing any necessary adaptations for real-world use.
(4) Formal Theory for Negative BPA Values: While empirical results confirm the decision-robustness of the framework, constructing a complete formal theory to characterize the general properties and bounds of negative BPA values under the weakening operation is identified as a key objective for subsequent theoretical research.

7. Conclusions

This research tackled the critical challenge of maintaining consistent situational awareness in distributed UAV swarms by introducing an event-driven correction framework founded on a novel commutative evidence weakening operation within Dempster–Shafer theory.
The core contribution is twofold. Theoretically, we formally defined the evidence weakening operation and rigorously proved its commutativity with Dempster’s rule. This property eliminates the order-dependency that fundamentally restricts traditional discounting methods, providing a new mathematical tool for reliable evidence weighting in asynchronous, distributed fusion. Algorithmically, leveraging this commutativity, we designed a practical event-driven parallel correction protocol. Its key innovation is the decoupling of penetrative event propagation from local, asynchronous belief recomputation, enabling swift global updates without the communication and synchronization bottlenecks of sequential approaches.
Extensive simulations based on 30,000 Monte Carlo trials systematically validated the framework’s effectiveness. The results demonstrate that: (1) The protocol guarantees global consistency (0% correction error), solving the core problem traditional methods cannot; (2) It achieves significantly lower latency and communication overhead than serial and discounting-based baselines; (3) Its performance scales favorably with network size, exhibiting sublinear growth in key overheads with depth; (4) The phenomenon of negative intermediate BPA values poses no threat to final decision integrity.
In summary, through theoretical innovation, mechanism design, and empirical validation, this work provides an efficient, scalable, and theoretically sound solution for real-time belief updating in distributed agent systems. Future work will focus on dynamic weakening factor estimation, integration with robust fusion rules for adversarial settings, and validation in more realistic network environments.

Author Contributions

Conceptualization, X.G. and H.Y.; methodology, X.G. and H.Y.; validation, L.R.; writing—original draft preparation, L.R. and H.Y.; writing—review and editing, L.R. and H.Y.; funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Special Fund for Taishan Scholar Project, grant number ts201712072; the Postdoctoral Fellowship Program of CPSF, grant number GZB20230989; and the China Postdoctoral Science Foundation, grant number 2024M764298.

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Mathematical Derivations and Proofs

Appendix A.1. Complete Matrix Formalism and Derivations

Appendix A.1.1. Vector Representation and the Matrix

This section provides the detailed definitions omitted from the main text for brevity.
The vector $M = \left[m(A_1), m(A_2), \ldots, m(A_{2^n-1})\right]^{T}$, where $A_1, A_2, \ldots, A_{2^n-1}$ is an enumeration of the non-empty subsets of the FOD. Similarly, the vector form of the commonality function is denoted $Q = \left[Q(A_1), Q(A_2), \ldots, Q(A_{2^n-1})\right]^{T}$.
For example, for the 2-dimensional FOD $\Theta = \{\theta_1, \theta_2\}$, define
$$M = \left[m(\{\theta_1\}), m(\{\theta_2\}), m(\{\theta_1, \theta_2\})\right]^{T}. \tag{A1}$$
According to Equation (2), the commonality function is linearly related to the BPA, that is,
$$Q = \Gamma_n M, \tag{A2}$$
where $\Gamma_n$ is an invertible constant matrix of order $2^n - 1$, determined solely by the cardinality of the FOD and the chosen ordering of its subsets. It follows that
$$M = \Gamma_n^{-1} Q. \tag{A3}$$
The form of $\Gamma_n$ depends on the cardinality of the FOD; for $n = 2, 3$ (with subsets ordered as singletons, then pairs, then the full frame) it takes the values shown in Equations (A4) and (A5):
$$\Gamma_2 = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \Gamma_2^{-1} = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}, \tag{A4}$$
$$\Gamma_3 = \begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}, \qquad
\Gamma_3^{-1} = \begin{pmatrix}
1 & 0 & 0 & -1 & -1 & 0 & 1 \\
0 & 1 & 0 & -1 & 0 & -1 & 1 \\
0 & 0 & 1 & 0 & -1 & -1 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 1 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & 1 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}. \tag{A5}$$
Let $X \circ Y$ denote the Hadamard product of matrices $X$ and $Y$, i.e., element-wise multiplication: if $Z = X \circ Y$, then $z_{ij} = x_{ij} y_{ij}$ for $i, j = 1, 2, \ldots, n$.
The BPA vectors $M_1$ and $M_2$ of two pieces of evidence are combined through the Dempster combination to obtain a new BPA vector $M_3$, with corresponding commonality function vectors $Q_1$, $Q_2$, and $Q_3$. According to Equation (A2), we have
$$Q_3 = K\,(Q_1 \circ Q_2). \tag{A6}$$
Since $Q_1 = \Gamma_n M_1$ and $Q_2 = \Gamma_n M_2$,
$$\Gamma_n M_3 = K\,\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right]. \tag{A7}$$
It follows that
$$M_3 = K\,\Gamma_n^{-1}\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right]. \tag{A8}$$
The BPA is subject to the normalization constraint
$$H_{2^n-1}\,M_3 = 1, \tag{A9}$$
where $H_{2^n-1}$ is a $(2^n-1)$-dimensional row vector whose elements all equal 1. Substituting Equation (A8) into Equation (A9), we obtain
$$K\,H_{2^n-1}\,\Gamma_n^{-1}\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right] = 1. \tag{A10}$$
Subsequently,
$$K = \left\{H_{2^n-1}\,\Gamma_n^{-1}\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right]\right\}^{-1}. \tag{A11}$$
By further substituting Equations (A11) and (A2) into Equation (A6), we derive
$$Q_3 = \frac{Q_1 \circ Q_2}{H_{2^n-1}\,\Gamma_n^{-1}\,(Q_1 \circ Q_2)}, \tag{A12}$$
and
$$M_3 = \frac{\Gamma_n^{-1}\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right]}{H_{2^n-1}\,\Gamma_n^{-1}\left[(\Gamma_n M_1) \circ (\Gamma_n M_2)\right]}. \tag{A13}$$
Equation (A13) represents the matrix calculation form of the Dempster combination, making it convenient for programming implementation.
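As a sketch of such an implementation (the helper names are ours), the $(2^n-1)$-dimensional vectors can be encoded as dictionaries keyed by subsets: the product with $\Gamma_n$ becomes a superset sum, $\Gamma_n^{-1}$ its Möbius inversion, and Equation (A13) a Hadamard product followed by normalization. The result is cross-checked against the classical set-intersection form of Dempster's rule:

```python
from itertools import combinations


def subsets(frame):
    """Non-empty subsets of the FOD: the index set of the vectors M and Q."""
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]


def to_Q(m, frame):
    """Q = Gamma_n M:  Q(A) = sum of m(B) over all B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets(frame)}


def to_M(Q, frame):
    """M = Gamma_n^{-1} Q (Moebius inversion over the superset order)."""
    return {A: sum((-1) ** (len(B) - len(A)) * Q[B]
                   for B in subsets(frame) if A <= B) for A in subsets(frame)}


def normalize(m):
    """Division by H_{2^n-1} Gamma_n^{-1} (...), i.e., the remaining mass."""
    total = sum(m.values())
    return {A: v / total for A, v in m.items()}


def dempster_matrix(m1, m2, frame):
    """Equation (A13): Hadamard product in the commonality domain, normalized."""
    Q1, Q2 = to_Q(m1, frame), to_Q(m2, frame)
    return normalize(to_M({A: Q1[A] * Q2[A] for A in subsets(frame)}, frame))


def dempster_direct(m1, m2):
    """Classical set-intersection form, used only as a cross-check."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            if A & B:
                out[A & B] = out.get(A & B, 0.0) + a * b
    return normalize(out)


frame = {"a", "b"}
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.2, B: 0.5, AB: 0.3}
fused = dempster_matrix(m1, m2, frame)
check = dempster_direct(m1, m2)
assert all(abs(fused[X] - check[X]) < 1e-12 for X in fused)
```

The final assertion confirms that the commonality-domain route and the direct route agree on this example.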

Appendix A.1.2. Power Operations with Positive Integer Exponents

Given evidence $E_1$, let its mass function, mass function vector, and commonality function vector be denoted $m_1$, $M_1$, and $Q_1$, respectively. Consider the combination of $p$ copies of $m_1$,
$$\underbrace{m_1 \oplus m_1 \oplus \cdots \oplus m_1}_{p}; \tag{A14}$$
the associativity and commutativity of the Dempster combination allow this to be abbreviated as
$$\underbrace{m_1 \oplus m_1 \oplus \cdots \oplus m_1}_{p} = m_1^{p}. \tag{A15}$$
In this study, we refer to it, without ambiguity, as the $p$-th power of $m_1$.
Similarly, let
$$\underbrace{Q_1 \oplus Q_1 \oplus \cdots \oplus Q_1}_{p} = Q_1^{p}. \tag{A16}$$
Then for $A \subseteq \Theta$, the Dempster combination in the form of commonality functions, $Q_1^{p}(A) = K_p\,Q_1(A)^{p}$, can be calculated as
$$Q_1^{p} = \frac{Q_1^{\,p}}{H_{2^n-1}\,\Gamma_n^{-1}\,Q_1^{\,p}}, \tag{A17}$$
where the power $Q_1^{\,p}$ on the right-hand side is taken element-wise. Further, in the form of mass functions, it can be calculated as
$$M_1^{p} = \frac{\Gamma_n^{-1}\,(\Gamma_n M_1)^{p}}{H_{2^n-1}\,\Gamma_n^{-1}\,(\Gamma_n M_1)^{p}}. \tag{A18}$$
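Under the same dictionary encoding used for Appendix A.1.1, Equation (A17) reduces to an element-wise power of the commonality vector followed by renormalization. A sketch (helper names ours) cross-checking the power route against $p$ explicit Dempster combinations:

```python
from itertools import combinations


def subsets(frame):
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]


def to_Q(m, frame):
    """Q = Gamma_n M: Q(A) = sum of m(B) over B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets(frame)}


def to_M(Q, frame):
    """M = Gamma_n^{-1} Q (Moebius inversion)."""
    return {A: sum((-1) ** (len(B) - len(A)) * Q[B]
                   for B in subsets(frame) if A <= B) for A in subsets(frame)}


def normalize(m):
    total = sum(m.values())
    return {A: v / total for A, v in m.items()}


def dempster(m1, m2):
    """Classical set-intersection Dempster combination (cross-check)."""
    out = {}
    for A, a in m1.items():
        for B, b in m2.items():
            if A & B:
                out[A & B] = out.get(A & B, 0.0) + a * b
    return normalize(out)


def power(m, p, frame):
    """Equations (A17)/(A18): element-wise p-th power of Q, then normalize."""
    Q = to_Q(m, frame)
    return normalize(to_M({A: Q[A] ** p for A in subsets(frame)}, frame))


frame = {"a", "b"}
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m = {A: 0.5, B: 0.2, AB: 0.3}
via_power = power(m, 3, frame)
via_fusion = dempster(dempster(m, m), m)  # m combined with itself 3 times
assert all(abs(via_power[X] - via_fusion[X]) < 1e-9 for X in via_power)
```

On this example both routes give the same BPA, as Equation (A15) requires.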

Appendix A.1.3. The Inverse Operations

In order to study the inverse of the above operation, let
$$m_2 = m_1^{p}. \tag{A19}$$
On the one hand, Equation (A19) shows that once $m_1$ and $p \in \mathbb{Z}^{+}$ are fixed, $m_2$ is uniquely determined.
On the other hand, we now demonstrate that $m_2$ and $p \in \mathbb{Z}^{+}$ likewise determine a unique $m_1$; namely, if $m_2 = m_1^{p}$ and $m_2 = m_3^{p}$, then $m_1 = m_3$ holds.
The $j$-th element of the vector $Q_i$ is denoted $q_{i,j}$, where $i = 1, 2, 3$ and $j = 1, 2, \ldots, 2^{n}-1$. Since
$$Q_2 = K_1\,Q_1^{\,p} \quad \text{and} \quad Q_2 = K_3\,Q_3^{\,p}, \tag{A20}$$
for every $j$,
$$q_{2,j} = K_1\,q_{1,j}^{\,p} \quad \text{and} \quad q_{2,j} = K_3\,q_{3,j}^{\,p} \tag{A21}$$
hold, and subsequently
$$q_{1,j} = K_1^{-1/p}\,q_{2,j}^{\,1/p} \quad \text{and} \quad q_{3,j} = K_3^{-1/p}\,q_{2,j}^{\,1/p} \tag{A22}$$
hold.
Similar to Equation (A9), we have
$$H_{2^n-1}\,\Gamma_n^{-1}\,Q_1 = 1 = H_{2^n-1}\,\Gamma_n^{-1}\,Q_3. \tag{A23}$$
Substituting Equation (A22) into Equation (A23) results in
$$K_1^{-1/p}\,H_{2^n-1}\,\Gamma_n^{-1}\left[q_{2,1}^{\,1/p}, q_{2,2}^{\,1/p}, \ldots, q_{2,2^n-1}^{\,1/p}\right]^{T} = K_3^{-1/p}\,H_{2^n-1}\,\Gamma_n^{-1}\left[q_{2,1}^{\,1/p}, q_{2,2}^{\,1/p}, \ldots, q_{2,2^n-1}^{\,1/p}\right]^{T}. \tag{A24}$$
As $H_{2^n-1}\,\Gamma_n^{-1}\left[q_{2,1}^{\,1/p}, q_{2,2}^{\,1/p}, \ldots, q_{2,2^n-1}^{\,1/p}\right]^{T}$ is a nonzero constant, it can be inferred that
$$K_1 = K_3. \tag{A25}$$
Further,
$$Q_1 = Q_3, \tag{A26}$$
which also results in $m_1 = m_3$.
The above analysis indicates that the $p$-th power ($p \in \mathbb{Z}^{+}$) of evidence admits an inverse operation, which we refer to in this article as the $1/p$-th power of evidence. Accordingly, Equation (A19) can be rewritten as
$$m_1 = m_2^{\,1/p}. \tag{A27}$$
Specifically, for $A \subseteq \Theta$, $Q_2(A) = Q_1^{1/p}(A) = K_{1/p}\,Q_1(A)^{1/p}$ can be calculated as
$$Q_2 = Q_1^{1/p} = \frac{Q_1^{\,1/p}}{H_{2^n-1}\,\Gamma_n^{-1}\,Q_1^{\,1/p}}, \tag{A28}$$
and
$$M_2 = M_1^{1/p} = \frac{\Gamma_n^{-1}\,(\Gamma_n M_1)^{1/p}}{H_{2^n-1}\,\Gamma_n^{-1}\,(\Gamma_n M_1)^{1/p}}. \tag{A29}$$
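The inverse operation is the same element-wise recipe with exponent $1/p$. The following sketch (helper names ours) verifies on an example that the $1/p$-th power undoes the $p$-th power:

```python
from itertools import combinations


def subsets(frame):
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]


def to_Q(m, frame):
    """Q = Gamma_n M: Q(A) = sum of m(B) over B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets(frame)}


def to_M(Q, frame):
    """M = Gamma_n^{-1} Q (Moebius inversion)."""
    return {A: sum((-1) ** (len(B) - len(A)) * Q[B]
                   for B in subsets(frame) if A <= B) for A in subsets(frame)}


def normalize(m):
    total = sum(m.values())
    return {A: v / total for A, v in m.items()}


def power(m, alpha, frame):
    """Equations (A17)/(A28): element-wise power of Q with any positive
    exponent, followed by normalization."""
    Q = to_Q(m, frame)
    return normalize(to_M({A: Q[A] ** alpha for A in subsets(frame)}, frame))


frame = {"a", "b"}
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m = {A: 0.5, B: 0.2, AB: 0.3}
recovered = power(power(m, 3, frame), 1 / 3, frame)  # (m^3)^(1/3)
assert all(abs(recovered[X] - m[X]) < 1e-9 for X in m)
```

The normalizing scalars introduced by each step cancel on renormalization, so the original BPA is recovered exactly up to floating-point error.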

Appendix A.1.4. Power Operations with Rational Number Exponents

The associativity and commutativity of the Dempster combination imply that
$$\left(m_1^{1/p}\right)^{q} = \left(m_1^{q}\right)^{1/p} \tag{A30}$$
holds for arbitrary $p, q \in \mathbb{Z}^{+}$. Since for any $\alpha \in \mathbb{Q}^{+}$ there exist $p, q \in \mathbb{Z}^{+}$ such that $\alpha = q/p$, we can further define
$$m^{\alpha} = \left(m^{q}\right)^{1/p}, \tag{A31}$$
establishing the existence of the $\alpha$-th power of evidence.
Furthermore, we can derive that
$$Q^{\alpha} = \frac{Q^{\,\alpha}}{H_{2^n-1}\,\Gamma_n^{-1}\,Q^{\,\alpha}}, \tag{A32}$$
and
$$M^{\alpha} = \frac{\Gamma_n^{-1}\,(\Gamma_n M)^{\alpha}}{H_{2^n-1}\,\Gamma_n^{-1}\,(\Gamma_n M)^{\alpha}}. \tag{A33}$$

Appendix A.1.5. Power Operations with Real Number Exponents

For any given positive real number $\hat{\alpha}$, there exists a sequence of rational numbers $\alpha_i \in \mathbb{Q}^{+}$ ($i = 1, 2, \ldots$) converging to it; thus we can define
$$Q^{\hat{\alpha}}(A) = \lim_{i \to \infty} Q^{\alpha_i}(A). \tag{A34}$$
Then, since by Equations (A32) and (A34) $Q^{\alpha}(A)$ is continuous with respect to the variable $\alpha \in \mathbb{R}^{+}$, it holds that
$$Q^{\hat{\alpha}}(A) = \lim_{i \to \infty} Q^{\alpha_i}(A) = \lim_{i \to \infty} K_{\alpha_i}\,Q(A)^{\alpha_i} = \left(\lim_{i \to \infty} K_{\alpha_i}\right)\left(\lim_{i \to \infty} Q(A)^{\alpha_i}\right) = K_{\hat{\alpha}}\,Q(A)^{\hat{\alpha}}. \tag{A35}$$
The above generalization leads to the definition of the power operation for any real exponent $\alpha$, as stated in Definition 7.

Appendix A.2. Proof of Theorems 1 and 2

This appendix provides the complete mathematical proofs for Theorem 1 and Theorem 2 stated in Section 4.1.2 of the main text, which establish the foundational algebraic properties of the proposed evidence weakening operation.

Appendix A.2.1. Proof of Theorem 1 (Commutativity)

Proof. 
Let $Q_1$ and $Q_2$ be the commonality function vectors corresponding to $m_1$ and $m_2$, respectively. According to the matrix formulation of Dempster's rule (see Appendix A.1), combination in the commonality domain is a Hadamard product followed by normalization, and the $\alpha$-th power is an element-wise power followed by normalization. Writing element-wise powers as $(\cdot)^{\alpha}$ inside the fractions and denoting the commonality vector of $m_1 \oplus m_2$ by $Q_1 \oplus Q_2$, we have
$$(Q_1 \oplus Q_2)^{\alpha} = \frac{\left[\dfrac{Q_1 \circ Q_2}{H_{2^n-1}\Gamma_n^{-1}(Q_1 \circ Q_2)}\right]^{\alpha}}{H_{2^n-1}\,\Gamma_n^{-1}\left[\dfrac{Q_1 \circ Q_2}{H_{2^n-1}\Gamma_n^{-1}(Q_1 \circ Q_2)}\right]^{\alpha}} = \frac{(Q_1 \circ Q_2)^{\alpha}}{H_{2^n-1}\,\Gamma_n^{-1}\,(Q_1 \circ Q_2)^{\alpha}} = \frac{Q_1^{\,\alpha} \circ Q_2^{\,\alpha}}{H_{2^n-1}\,\Gamma_n^{-1}\,(Q_1^{\,\alpha} \circ Q_2^{\,\alpha})} = Q_1^{\alpha} \oplus Q_2^{\alpha}, \tag{A36}$$
where the second equality holds because the scalar $\left[H_{2^n-1}\Gamma_n^{-1}(Q_1 \circ Q_2)\right]^{\alpha}$ cancels between numerator and denominator, the third uses the element-wise identity $(Q_1 \circ Q_2)^{\alpha} = Q_1^{\,\alpha} \circ Q_2^{\,\alpha}$, and the last holds because the normalizing scalars $H_{2^n-1}\Gamma_n^{-1}Q_i^{\,\alpha}$ of the individual weakened vectors likewise cancel when the fraction for $Q_1^{\alpha} \oplus Q_2^{\alpha}$ is expanded.
Since the commonality vectors of $(m_1 \oplus m_2)^{\alpha}$ and $m_1^{\alpha} \oplus m_2^{\alpha}$ coincide, their corresponding BPAs are equal, proving Theorem 1. □

Appendix A.2.2. Proof of Theorem 2 (Composition)

Proof. 
Let $Q$ be the commonality function vector corresponding to $m$. By Definition 7, the commonality vector after applying the exponent $\alpha_1$ is
$$Q^{\alpha_1} = \frac{Q^{\,\alpha_1}}{K_{\alpha_1}}, \quad \text{where } K_{\alpha_1} = H_{2^n-1}\,\Gamma_n^{-1}\,Q^{\,\alpha_1}. \tag{A37}$$
Now apply the operation with exponent $\alpha_2$ to the result $Q^{\alpha_1}$:
$$\left(Q^{\alpha_1}\right)^{\alpha_2} = \frac{\left(Q^{\alpha_1}\right)^{\,\alpha_2}}{K_{\alpha_2}} = \frac{\left(Q^{\,\alpha_1}/K_{\alpha_1}\right)^{\,\alpha_2}}{K_{\alpha_2}} = \frac{Q^{\,\alpha_1\alpha_2}/K_{\alpha_1}^{\,\alpha_2}}{K_{\alpha_2}}, \tag{A38}$$
where $K_{\alpha_2} = H_{2^n-1}\,\Gamma_n^{-1}\left(Q^{\alpha_1}\right)^{\,\alpha_2} = H_{2^n-1}\,\Gamma_n^{-1}\,Q^{\,\alpha_1\alpha_2}/K_{\alpha_1}^{\,\alpha_2}$.
Substituting $K_{\alpha_2}$ back into Equation (A38):
$$\left(Q^{\alpha_1}\right)^{\alpha_2} = \frac{Q^{\,\alpha_1\alpha_2}/K_{\alpha_1}^{\,\alpha_2}}{H_{2^n-1}\,\Gamma_n^{-1}\,Q^{\,\alpha_1\alpha_2}/K_{\alpha_1}^{\,\alpha_2}} = \frac{Q^{\,\alpha_1\alpha_2}}{H_{2^n-1}\,\Gamma_n^{-1}\,Q^{\,\alpha_1\alpha_2}}. \tag{A39}$$
The final expression in Equation (A39) is precisely the commonality vector of $m^{\alpha_1\alpha_2}$, by definition. Therefore,
$$\left(m^{\alpha_1}\right)^{\alpha_2} = m^{\alpha_1\alpha_2}. \tag{A40}$$
The equality $\left(m^{\alpha_1}\right)^{\alpha_2} = \left(m^{\alpha_2}\right)^{\alpha_1}$ then follows from the commutativity of scalar multiplication ($\alpha_1\alpha_2 = \alpha_2\alpha_1$) and the symmetry of the above derivation. This completes the proof of Theorem 2. □
Significance of the Proofs: The proofs demonstrate that the defined power operation interacts harmoniously with the algebraic structure of Dempster’s rule in the commonality domain. Theorem 1 establishes that weighting (weakening/strengthening) and fusion are order-agnostic, which is the foundational property enabling the distributed, asynchronous correction protocol. Theorem 2 shows that sequential weighting operations are equivalent to a single weighting with a composite factor, ensuring the stability and simplicity of repeated adjustments.
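Both theorems admit a direct numerical check in the same dictionary encoding used throughout Appendix A.1 (helper names ours; the BPAs and exponents are arbitrary test values). Theorem 1 is checked by weakening before versus after fusion, Theorem 2 by composing two weakenings against a single composite one:

```python
from itertools import combinations


def subsets(frame):
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]


def to_Q(m, frame):
    """Q = Gamma_n M: Q(A) = sum of m(B) over B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets(frame)}


def to_M(Q, frame):
    """M = Gamma_n^{-1} Q (Moebius inversion)."""
    return {A: sum((-1) ** (len(B) - len(A)) * Q[B]
                   for B in subsets(frame) if A <= B) for A in subsets(frame)}


def normalize(m):
    total = sum(m.values())
    return {A: v / total for A, v in m.items()}


def dempster(m1, m2, frame):
    """Dempster's rule in the commonality domain (Equation (A13))."""
    Q1, Q2 = to_Q(m1, frame), to_Q(m2, frame)
    return normalize(to_M({A: Q1[A] * Q2[A] for A in subsets(frame)}, frame))


def power(m, alpha, frame):
    """Weakening operation (Definition 7): element-wise Q^alpha, normalized."""
    Q = to_Q(m, frame)
    return normalize(to_M({A: Q[A] ** alpha for A in subsets(frame)}, frame))


frame = {"a", "b"}
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.2, B: 0.5, AB: 0.3}
alpha, a1, a2 = 0.4, 0.7, 0.6

# Theorem 1 (commutativity): (m1 (+) m2)^alpha == m1^alpha (+) m2^alpha.
lhs = power(dempster(m1, m2, frame), alpha, frame)
rhs = dempster(power(m1, alpha, frame), power(m2, alpha, frame), frame)
assert all(abs(lhs[X] - rhs[X]) < 1e-9 for X in lhs)

# Theorem 2 (composition): (m1^a1)^a2 == m1^(a1*a2).
seq = power(power(m1, a1, frame), a2, frame)
oneshot = power(m1, a1 * a2, frame)
assert all(abs(seq[X] - oneshot[X]) < 1e-9 for X in seq)
```

The assertions pass because every normalizing scalar appears in both numerator and denominator of the final fractions, exactly as in the proofs above.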

Figure 1. The wireless sensor network with the air platform serving as the perception node.
Figure 1. The wireless sensor network with the air platform serving as the perception node.
Drones 10 00182 g001
Figure 2. The geometric image of the BPA in computational example.
Figure 2. The geometric image of the BPA in computational example.
Drones 10 00182 g002
Figure 3. The powers of the BPA. (a) The geometric images of the powers of the BPA; (b) The basic probability numbers for each proposition of the powers of the BPA.
Figure 3. The powers of the BPA. (a) The geometric images of the powers of the BPA; (b) The basic probability numbers for each proposition of the powers of the BPA.
Drones 10 00182 g003
Figure 4. The basic probability numbers of each proposition under different  α .
Figure 4. The basic probability numbers of each proposition under different  α .
Drones 10 00182 g004
Figure 5. The mechanism flowchart.
Figure 5. The mechanism flowchart.
Drones 10 00182 g005
Figure 6. Computational model 1 and Computational model 2.
Figure 6. Computational model 1 and Computational model 2.
Drones 10 00182 g006
Figure 7. The variation of the inconsistent risk coefficient corresponding to  n = 2 , 3 , 4 , 5 , 6 ,  respectively.
Figure 7. The variation of the inconsistent risk coefficient corresponding to  n = 2 , 3 , 4 , 5 , 6 ,  respectively.
Drones 10 00182 g007
Figure 8. The variation of the inconsistent risk coefficient corresponding to  N = 2 , 3 , 4 , 5 , 6  (when  n = 3 ) , respectively.
Figure 8. The variation of the inconsistent risk coefficient corresponding to  N = 2 , 3 , 4 , 5 , 6  (when  n = 3 ) , respectively.
Drones 10 00182 g008
Figure 9. The combined heatmap of network topology on decision change rate impact on decision stability.
Figure 9. The combined heatmap of network topology on decision change rate impact on decision stability.
Drones 10 00182 g009
Figure 10. Impact of network topology on correction error rate. (a) Discounting method; (b) weakening method impact on correction correctness: Figure 10 reveals a fundamental divergence between the two methods in terms of the correction error rate.
Figure 10. Impact of network topology on correction error rate. (a) Discounting method; (b) weakening method impact on correction correctness: Figure 10 reveals a fundamental divergence between the two methods in terms of the correction error rate.
Drones 10 00182 g010
Figure 11. Impact of correction parameters on decision change rate. (a) Correction node weight; (b) correction magnitude.
Figure 12. Impact of correction parameters on correction error rate. (a) Correction node weight; (b) correction magnitude. Node weight analysis (Figure 12a): error rates exhibit a unimodal distribution, peaking near weight = 0.7 (error rate 5.73%) rather than changing monotonically; the weight ≈ 0.7 region represents a critical point of system sensitivity.
Figure 13. Results for correction latency, communication overhead, and storage overhead on the four combinations. (a) Correction latency; (b) communication overhead; (c) storage overhead.
Figure 14. Effect of network depth, fan-out coefficient, and frame size on performance metrics. (a–c) vs. network depth; (d–f) vs. fan-out coefficient; (g–i) vs. frame size.
Table 1. BPA generated in a random manner in the example.
A | θ1 | θ2 | θ3 | θ1∪θ2 | θ1∪θ3 | θ2∪θ3 | θ1∪θ2∪θ3
m(A) | 0.2078 | 0.1075 | 0.1409 | 0.0732 | 0.0314 | 0.2161 | 0.2230
Table 2. The results of the powers of the BPA in the example with different exponents α.
Exponent α | θ1 | θ2 | θ3 | θ1∪θ2 | θ1∪θ3 | θ2∪θ3 | θ1∪θ2∪θ3
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
0.1 | 0.0415 | 0.0073 | 0.0191 | 0.0241 | 0.0111 | 0.0587 | 0.8381
0.5 | 0.1463 | 0.0495 | 0.0820 | 0.0678 | 0.0303 | 0.1793 | 0.4447
1 | 0.2078 | 0.1075 | 0.1409 | 0.0732 | 0.0314 | 0.2161 | 0.2230
1.5 | 0.2352 | 0.1598 | 0.1859 | 0.0634 | 0.0261 | 0.2103 | 0.1193
2 | 0.2456 | 0.2047 | 0.2216 | 0.0508 | 0.0200 | 0.1910 | 0.0664
3 | 0.2423 | 0.2748 | 0.2747 | 0.0296 | 0.0107 | 0.1459 | 0.0220
4 | 0.2254 | 0.3257 | 0.3123 | 0.0162 | 0.0053 | 0.1075 | 0.0077
5 | 0.2043 | 0.3638 | 0.3399 | 0.0086 | 0.0025 | 0.0782 | 0.0027
8 | 0.1435 | 0.4364 | 0.3890 | 0.0011 | 0.0002 | 0.0295 | 0.0001
12 | 0.0861 | 0.4905 | 0.4153 | 0.0001 | 0.0000 | 0.0080 | 0.0000
20 | 0.0344 | 0.5372 | 0.4268 | 0.0000 | 0.0000 | 0.0015 | 0.0000
51 | 0.0004 | 0.6688 | 0.3308 | 0.0000 | 0.0000 | 0.0000 | 0.0000
600 | 0 | 0.9997 | 0.0003 | 0 | 0 | 0 | 0
700 | 0 | 0.9999 | 0.0001 | 0 | 0 | 0 | 0
Table 3. The results of weakening and discounting of the BPA.
Exponent α | Method | θ1 | θ2 | θ3 | θ1∪θ2 | θ1∪θ3 | θ2∪θ3 | θ1∪θ2∪θ3
0 | weakening | 0 | 0 | 0 | 0 | 0 | 0 | 1
0 | discounting | 0 | 0 | 0 | 0 | 0 | 0 | 1
0.1 | weakening | 0.0415 | 0.0073 | 0.0191 | 0.0241 | 0.0111 | 0.0588 | 0.8381
0.1 | discounting | 0.0208 | 0.0108 | 0.0141 | 0.0073 | 0.0031 | 0.0216 | 0.9223
0.3 | weakening | 0.1033 | 0.0269 | 0.0528 | 0.0536 | 0.0243 | 0.1359 | 0.6031
0.3 | discounting | 0.0623 | 0.0323 | 0.0423 | 0.0220 | 0.0094 | 0.0648 | 0.7669
0.6 | weakening | 0.1628 | 0.0613 | 0.0953 | 0.0714 | 0.0316 | 0.1929 | 0.3846
0.6 | discounting | 0.1247 | 0.0645 | 0.0845 | 0.0439 | 0.0188 | 0.1297 | 0.5339
0.9 | weakening | 0.1991 | 0.0962 | 0.1304 | 0.0741 | 0.0320 | 0.2137 | 0.2544
0.9 | discounting | 0.1870 | 0.0968 | 0.1268 | 0.0659 | 0.0283 | 0.1945 | 0.3008
1 | weakening | 0.2078 | 0.1075 | 0.1409 | 0.0732 | 0.0314 | 0.2161 | 0.2230
1 | discounting | 0.2078 | 0.1075 | 0.1409 | 0.0732 | 0.0314 | 0.2161 | 0.2230
Table 4. Key simulation parameters and their ranges/values.
Parameter | Description | Values/Range
Network Topology
Depth (D) | Number of hierarchical layers (excluding root) | 3 to 8
Fan-out (F) | Maximum children per parent node | 2 to 5
Evidence Model
Frame Size (|Θ|) | Cardinality of the frame of discernment | 2 to 4
BPA Perturbation Range | Simulates sensor noise for correction events | ±5%
Node Weight (α) | Reliability/confidence parameter | [0, 1] (uniform)
Performance
Base Transmission Delay | Per-hop communication latency | 0.4–1.0 ms
Operation Time (Fusion/Weakening) | Processing time per BPA operation | 0.2–0.5 ms
Monte Carlo Trials | Independent runs per configuration | 30,000
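The topology parameters in Table 4 (depth D, fan-out F) can be instantiated by sampling a rooted tree layer by layer. The generator below is a hypothetical illustration only; the function name and the 1..F child-count rule are assumptions, not taken from the paper:

```python
import random

def build_tree(depth, fanout, rng):
    """Random hierarchy: root is node 0; each parent draws 1..fanout
    children; the tree has `depth` layers below the root."""
    children = {0: []}
    layer, next_id = [0], 1
    for _ in range(depth):
        new_layer = []
        for parent in layer:
            for _ in range(rng.randint(1, fanout)):
                children[parent].append(next_id)
                children[next_id] = []
                new_layer.append(next_id)
                next_id += 1
        layer = new_layer
    return children

# Example draw within Table 4's ranges: D = 3, F = 3
rng = random.Random(42)
tree = build_tree(depth=3, fanout=3, rng=rng)
```

Seeding the generator per trial makes each of the 30,000 Monte Carlo configurations reproducible.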

Yu, H.; Guan, X.; Ruan, L. A Theoretical Framework for Event-Driven Correction in UAV Swarm Situational Awareness: Mechanism Design with Evidence-Theoretic Foundations. Drones 2026, 10, 182. https://doi.org/10.3390/drones10030182
