Sensors | Article | Open Access | 26 June 2025
GDM-DTM: A Group Decision-Making-Enabled Dynamic Trust Management Method for Malicious Node Detection in Low-Altitude UAV Networks

1 College of Artificial Intelligence, Tianjin University of Science and Technology, Tianjin 300457, China
2 College of Engineering, Qatar University, Doha P.O. Box 2713, Qatar
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Intelligence, Security, Trust and Privacy Advances in IoT, Bigdata and 5G Networks (2nd Edition)

Abstract

As a core enabler of the emerging low-altitude economy, UAV networks face significant security risks during operation, including malicious node infiltration and data tampering. Existing trust management schemes suffer from deficiencies such as strong reliance on infrastructure, insufficient capability for multi-dimensional trust evaluation, and vulnerability to collusion attacks. To address these issues, this paper proposes a group decision-making (GDM)-enabled dynamic trust management method, termed GDM-DTM, for low-altitude UAV networks. GDM-DTM comprises four core parts: Subjective Consistency Evaluation, Objective Consistency Evaluation, Global Consistency Evaluation, and Self-Proof Consistency Evaluation. Furthermore, the method integrates a Dynamic Trust Adjustment Mechanism with multi-attribute trust computation, enabling efficient trust evaluation independent of ground infrastructure and thereby facilitating effective malicious UAV detection. The experimental results demonstrate that under identical conditions with a malicious node ratio of 30%, GDM-DTM achieves an accuracy of 85.04% and an F-score of 91.66%. Compared to the current state-of-the-art methods, this represents an improvement of 6.04 percentage points in accuracy and 3.71 percentage points in F-score.

1. Introduction

As a new economic form, the low-altitude economy relies on low-altitude airspace resources and aviation technology to create an industrial ecosystem encompassing various dimensions such as research and development, manufacturing, flight services, and comprehensive support []. Centered around unmanned aerial vehicle (UAV) systems as the primary carrier, and through the deep integration of low-altitude flight activities with ground infrastructure, related UAV technologies such as multi-UAV communication, intelligent relay selection and deployment, and collaborative air–ground network optimization have also witnessed rapid development []. As a result, the low-altitude economy is gradually evolving into a strategic emerging industry that is being actively promoted by major global economies.
However, with the exponential growth of UAV networks, the security threats faced by these systems are becoming increasingly complex. Attacks such as malicious node infiltration, communication link hijacking, and false information dissemination not only degrade network performance and compromise communication security [,], but also risk triggering catastrophic consequences such as UAV loss of control and crashes. Consequently, these threats pose severe risks to the stable operation of the low-altitude economy. In addition, UAVs often carry out missions without access to ground infrastructure; for example, during outdoor surveying tasks, the openness of the communication environment and the dynamic nature of the network topology render traditional security measures ineffective.
In this context, trust management mechanisms have emerged as critical solutions for ensuring UAV network security by continuously assessing node trustworthiness, detecting malicious activities, and mitigating attack impacts. However, existing mechanisms still encounter notable limitations. Primarily, the reliability of malicious node detection remains low in scenarios without support from ground infrastructure. Most conventional trust management systems are centralized, such as those based on guarantee mechanisms [] or reputation systems [], which inherently face risks associated with single points of failure. To mitigate centralization risks, decentralized approaches including consensus-based trust evaluation [,,] and blockchain-based trust storage [,,] have been proposed. Nonetheless, these decentralized solutions still depend on auxiliary ground infrastructure, restricting their effectiveness in fully infrastructure-independent scenarios. In contrast, our proposed GDM-DTM method enhances malicious node detection performance by introducing a group decision-making (GDM) framework that does not rely on ground infrastructure, thus overcoming these critical limitations.
Secondly, existing trust modeling approaches remain inadequate when addressing complex attack scenarios. Traditional trust evaluation models primarily rely on single-dimensional metrics, such as historical interaction frequency or link quality [,], which limits their ability to detect sophisticated, multi-dimensional coordinated attacks. In response, recent studies have explored more comprehensive models by incorporating historical behavior patterns, reputation metrics, and contextual information [,,,]. These approaches represent important progress in enhancing trust model expressiveness. However, they still suffer from key limitations, including poor generalization across diverse environments and insufficient adaptability to rapidly changing network topologies. Particularly in dynamic UAV networks, where node behavior and connections fluctuate frequently, existing models struggle to maintain consistent and accurate trust assessments. In contrast, our proposed GDM-DTM framework addresses these issues by integrating multi-attribute trust computation with a group decision-making mechanism, thereby enabling dynamic and robust trust evaluation even in highly variable and adversarial conditions.
Based on these challenges, we propose a consistency evaluation trust management method based on group decision-making, called GDM-DTM, in low-altitude economy UAV networks. The main contributions of this paper are as follows:
  • GDM-Based Trust Management: We integrate group decision-making (GDM) into the UAV trust management method, termed GDM-DTM. GDM-DTM evaluates and aggregates trust values across four consensus parts to make consensus-driven decisions, while trust management governs UAV trustworthiness. This approach significantly enhances malicious node detection capability in scenarios without ground infrastructure.
  • Multi-dimensional Trust Modeling: A trust value calculation method based on multi-dimensional UAV attribute data is proposed to provide a more reliable trust computation approach.
  • Proactive Trust Validation via Self-Proof Consistency: To counter collusion and false trust propagation, we introduce a self-proof consistency concept for UAV environments and propose self-proof consistency degree as a new metric to evaluate consensus levels. GDM-DTM transforms trust validation from passive estimation to proactive provision, thus more effectively identifying and adjusting discrepancies in opinions.
  • Dynamic Threshold and Weight Adjustment: A dynamic threshold adjustment algorithm based on attribute weight distribution and node tolerance is introduced to enhance the system’s environmental adaptability in dynamic topologies and novel attack scenarios.
  • We conduct detailed evaluation experiments on the effectiveness of GDM-DTM in trust evaluation and malicious node detection. The experimental results show that GDM-DTM not only outperforms traditional methods in terms of performance but also demonstrates stronger adaptability and robustness when dealing with complex attack scenarios.

3. UAV System Architecture

A UAV system consists of four core modules: hardware, software, radio communication links, and application software []. The hardware platform includes the flight controller, sensors, actuators, and the UAV body. The flight controller serves as the core unit, coordinating the guidance, control, and navigation systems, while the sensor array collects data such as speed and altitude. The software platform is responsible for UAV trajectory control: the navigation system determines the position, the guidance system directs the UAV to its destination, and the control system ensures precise flight. The communication link is used for data and command transmission, including communication between UAVs and ground stations, inter-UAV communication, and local link communication. Application software, running on the onboard computer, is used for task execution such as trust computation, consistency evaluation, and dynamic adjustment.
The low-altitude economic UAV group is shown in Figure 1, in which all the UAVs simulated in this paper are equipped with onboard computers. The UAVs are divided into three categories: evaluator UAVs, evaluated UAVs, and neighbor UAVs. During data transmission, to ensure security, UAVs first perform an initial trust computation. The evaluator is responsible for conducting consistency trust evaluations based on its own opinion and the opinions of other UAVs in order to select the next-hop UAV. The evaluated UAV is the candidate UAV for information transmission. Neighboring UAVs provide the evaluator with attribute trust values regarding the evaluated UAV to enhance evaluation reliability. Based on the consistency evaluation results, the evaluator transmits the adjusted trust values back to the ground control station, which then uploads them to the cloud. If a UAV providing services to a ground user is marked as malicious, the user receives an alert.
Figure 1. The scenario of GDM-DTM in a low-altitude economy.

4. Proposed GDM-DTM

4.1. Overview of the Proposed GDM-DTM

To address the challenges in trust management for UAVs in the low-altitude economy, particularly the limitations in malicious node detection in scenarios without reliance on ground infrastructure and the inadequacy of trust modeling under complex attack scenarios, this paper proposes GDM-DTM, a trust management method based on group decision-making.
GDM-DTM constructs a hierarchical and progressive trust management system by integrating three core components: the Consistency Evaluation Algorithm (Algorithm 1), the Dynamic Trust Adjustment Mechanism (Algorithm 2), and the State Update Mechanism (Algorithm 3), as illustrated in Figure 2. These components are designed to work in a sequential and interdependent manner to support real-time and adaptive trust evaluation in UAV networks under adversarial conditions.
Figure 2. Framework of the proposed GDM-DTM.
Algorithm 1, called the Consistency Evaluation Algorithm, focuses on measuring the degree of consensus among UAVs from four perspectives—subjective, objective, global, and self-proof consistency—based on multi-dimensional attribute trust values (e.g., speed, altitude, heading, acceleration). This consistency assessment enhances the granularity and credibility of group decision-making.
Algorithm 2, called the Dynamic Trust Adjustment Mechanism, takes the output of the consistency evaluation and compares the consistency degree similarity and the consistency threshold. It classifies the situation into three scenarios: positive consensus, negative consensus, or no consensus, and then triggers corresponding trust value adjustments or additional evaluation rounds. This mechanism improves adaptive responsiveness to inconsistencies and maintains trust system robustness in dynamic environments.
Algorithm 3, called the State Update Mechanism, operates based on the outcomes of UAV information transmission. It applies reward or penalty weights depending on whether the transmitted messages are verified as authentic or tampered. The mechanism updates attribute trust values, weight matrices, message transmission records, and confidence levels accordingly. This enables the system to continuously evolve based on real-time network behavior, ensuring that trust evaluations remain aligned with actual UAV performance.
In summary, Algorithm 1 provides initial consensus estimation, Algorithm 2 ensures adaptive adjustment and resolution of inconsistencies, and Algorithm 3 maintains long-term accuracy by continuously refining trust values and interaction histories. Together, they form a closed-loop system that balances consensus reliability, attack resilience, and real-time adaptability.

4.2. Consistency Evaluation Algorithm

To ensure the formation of a robust and multi-perspective consensus evaluation among UAVs, the Consistency Evaluation Algorithm in GDM-DTM is composed of four parts.
Each part focuses on a distinct perspective of trust consensus: the Subjective Consistency Evaluation part reflects the evaluation perspective of the evaluator UAV based on multi-dimensional attribute trust; the Objective Consistency Evaluation part aggregates the multi-dimensional attribute trust of neighbor UAVs; the Global Consistency Evaluation part integrates the subjective and objective evaluations to produce a balanced global perspective; and the Self-Proof Consistency Evaluation part captures the evaluated UAV's self-proof trust information. The consistency degree of each perspective is calculated to reflect its trust level, and at the same time the consistency similarity degree between the different parts is calculated, thereby supporting the dynamic adjustment of trust.

4.2.1. Subjective Consistency Part

  • Multi-dimensional Attribute Trust Computation
The evaluator UAV is initialized through trust computation based on multi-dimensional attribute data transmitted from neighbor UAVs. To address evaluation bias caused by fluctuations in network load, we adopt a trigger condition based on the amount of information $k$, rather than a fixed time interval. The trust computation process begins when the evaluator receives a set of data from neighboring UAVs. Based on multi-dimensional attribute trust computation, we collect the following attributes from the UAV: speed ($v$), defined as the UAV's current speed; altitude ($e$), defined as the UAV's current flight altitude; acceleration ($a$), defined as the UAV's current acceleration; and heading ($h$), which represents the UAV's heading, among others.
The evaluator UAV $E_i$ receives information about the same attribute from the same evaluated UAV $E_j$ at different times $\{t_1, t_2, \ldots, t_k\}$, represented as $M = \{M_{E_j}^{t_1}, M_{E_j}^{t_2}, \ldots, M_{E_j}^{t_k}\}$ with $|M| = k$. To compute the attribute trust value for $E_j$, the first-order difference $\Delta r_t^{\lambda} = r_t^{\lambda} - r_{t-1}^{\lambda}$ is calculated for each attribute, where $r_t^{\lambda}$ denotes the value of attribute $\lambda$ at time $t$. The first-order difference captures the numerical change in the attribute value between consecutive messages.
Then, based on the statistical distribution characteristics of the attribute values, data screening is performed using the following criterion: if the attribute value $r_t^{\lambda}$ satisfies inequality (1), it is considered normal data; otherwise, it is regarded as abnormal data.

$$\Pr\left(\left|\Delta r_t^{\lambda} - \mu\right| \geq \varepsilon\right) \leq \frac{\sigma^2}{\varepsilon^2}, \tag{1}$$

where $\mu$ and $\sigma$ represent the mathematical expectation and standard deviation of $\Delta r_t^{\lambda}$, respectively, while $\varepsilon$ determines the magnitude of the occurrence probability.
Finally, the attribute trust value $a_{i,j}^{m}$ is calculated as the ratio of the number of normal data points to the total number of received data points:

$$a_{i,j}^{m} = \frac{|M| - O_m}{|M|}, \tag{2}$$

where $m$ denotes the $m$-th attribute, and $O_m$ is the number of outliers for attribute $m$ among the $|M|$ messages. The value of $a_{i,j}^{m}$ lies between 0 and 1.
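To make the screening rule concrete, the following Python sketch applies a Chebyshev-style outlier test to the first-order differences of one attribute and returns the trust value of Equation (2). The function name and the default deviation bound of 1.5σ are assumptions of this illustration, not values from the paper.

```python
import numpy as np

def attribute_trust(values, epsilon=None):
    """Illustrative sketch of Eqs. (1)-(2) for one attribute of one UAV.

    A reading is treated as abnormal when its first-order difference deviates
    from the mean by at least epsilon; by Chebyshev's inequality (Eq. (1)) such
    deviations occur with probability at most sigma^2 / epsilon^2. Defaulting
    epsilon to 1.5*sigma is an assumption of this sketch.
    """
    r = np.asarray(values, dtype=float)
    diffs = np.diff(r)                      # first-order differences Δr_t = r_t - r_{t-1}
    mu, sigma = diffs.mean(), diffs.std()
    if epsilon is None:
        epsilon = 1.5 * sigma
    O_m = int((np.abs(diffs - mu) >= epsilon).sum())   # number of outliers
    return (len(r) - O_m) / len(r)          # Eq. (2): (|M| - O_m) / |M|

# Example: a smooth speed trace with one abrupt jump lowers the trust value.
print(attribute_trust([10.0, 10.2, 10.1, 25.0, 10.3, 10.2]))   # ≈ 0.67
```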
Based on the GDM framework, UAVs in the trust management system make decisions over a set of alternatives according to their attribute trust values. This GDM problem is modeled as a consensus reaching process (CRP). The alternative set is represented as $E = \{E_1, E_2, \ldots, E_n\}$ $(n \geq 2)$, where $E_n$ represents the $n$-th UAV.
Furthermore, the trust matrix $A_i$ describes the trust of the $i$-th UAV in the attributes of the other UAVs, as shown in (3):

$$A_i = \begin{pmatrix} a_{i,1}^{1} & a_{i,1}^{2} & \cdots & a_{i,1}^{m} \\ a_{i,2}^{1} & a_{i,2}^{2} & \cdots & a_{i,2}^{m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{i,n}^{1} & a_{i,n}^{2} & \cdots & a_{i,n}^{m} \end{pmatrix}_{n \times m}, \tag{3}$$

where $a_{i,j}^{m}$ represents the trust of $E_i$ in the $m$-th attribute of $E_j$.
  • Weight Calculation
A weight matrix $w_i$ based on information feedback is introduced to describe the social relationships of $E_i$, as shown in (4):

$$w_i = \begin{pmatrix} w_{i,1}^{1} & w_{i,1}^{2} & \cdots & w_{i,1}^{m} \\ w_{i,2}^{1} & w_{i,2}^{2} & \cdots & w_{i,2}^{m} \\ \vdots & \vdots & \ddots & \vdots \\ w_{i,n}^{1} & w_{i,n}^{2} & \cdots & w_{i,n}^{m} \end{pmatrix}_{n \times m}, \tag{4}$$

where $w_{i,n}^{m}$ represents the weight that node $E_i$ assigns to the $m$-th attribute of $E_n$, corresponding to $a_{i,n}^{m}$.
  • Subjective Consistency Degree Calculation
The attribute trust values of the evaluated UAV $E_k$, computed by the evaluator UAV $E_i$, are aggregated with their weights to form the trust value for $E_k$, referred to as the subjective consistency degree $SCD_{i,k}$, as shown in (5):

$$SCD_{i,k} = \sum_{j=1}^{m} w_{i,k}^{j}\, a_{i,k}^{j}, \tag{5}$$

where $m$ is the number of attributes and $w_{i,k}^{j}$ is the weight of the attribute trust value $a_{i,k}^{j}$.

4.2.2. Objective Consistency Part

Based on the attribute trust values held by $E_i$'s neighbor UAVs $E_g$, the weighted aggregation forms the trust value for $E_k$, referred to as the objective consistency degree $OCD_{i,k}$, as shown in Equation (6):

$$OCD_{i,k} = \sum_{j=1}^{m} w_{i,k}^{j} \sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j}, \tag{6}$$

where $O$ denotes the neighborhood of $E_i$ that interacts with $E_k$, and $\hat{w}_{g,k}^{j}$ is the weight normalized over the set $O$, given by $\hat{w}_{g,k}^{j} = \frac{w_{g,k}^{j}}{\sum_{t \in O}^{|O|} w_{t,k}^{j}}$.

4.2.3. Global Consistency Part

The global consistency degree $GCD_{i,k}$ is a comprehensive consistency degree computed from $SCD_{i,k}$ and $OCD_{i,k}$, as shown in (7):

$$GCD_{i,k} = \beta\, SCD_{i,k} + (1-\beta)\, OCD_{i,k}, \tag{7}$$

where $\beta$ represents the confidence level of the evaluator UAV $E_i$.

4.2.4. Self-Proof Consistency Part

The self-proof consistency degree $SCD_k$ represents the trust value provided by $E_k$ itself, as shown in (8):

$$SCD_k = \sum_{j=1}^{m} w_{i,k}^{j}\, a_{k}^{j}, \tag{8}$$

where $a_k^{j}$ represents the self-proven trust level of the $j$-th attribute. The self-proof consistency degree $SCD_k$ is used to determine whether it differs significantly from the other consistency degrees. Specifically, if the similarity between $SCD_k$ and the other consistency degrees does not meet the dynamic consistency threshold, the probability of malicious behavior by $E_k$ is high.
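The four consistency degrees of Equations (5)-(8) are weighted aggregations over attributes and neighbors. The following minimal sketch expresses them with NumPy arrays; the array-based data layout, function names, and numeric values are choices of this illustration, not the paper's implementation.

```python
import numpy as np

def scd(w_ik, a_ik):
    """Subjective consistency degree, Eq. (5)."""
    return float(np.dot(w_ik, a_ik))

def ocd(w_ik, W_gk, A_gk):
    """Objective consistency degree, Eq. (6); rows of W_gk/A_gk are neighbors."""
    W_hat = W_gk / W_gk.sum(axis=0, keepdims=True)   # normalized weights ŵ per attribute
    return float(np.dot(w_ik, (W_hat * A_gk).sum(axis=0)))

def gcd(scd_ik, ocd_ik, beta):
    """Global consistency degree, Eq. (7)."""
    return beta * scd_ik + (1.0 - beta) * ocd_ik

def self_proof(w_ik, a_k):
    """Self-proof consistency degree, Eq. (8), from E_k's own reported values."""
    return float(np.dot(w_ik, a_k))

# Example with three attributes and two neighbors (illustrative numbers).
w_ik = np.array([0.4, 0.3, 0.3])                 # evaluator's attribute weights
a_ik = np.array([0.90, 0.80, 0.85])              # evaluator's trust in E_k
A_gk = np.array([[0.88, 0.82, 0.80], [0.90, 0.79, 0.84]])
W_gk = np.array([[0.5, 0.6, 0.5], [0.5, 0.4, 0.5]])
a_k  = np.array([0.92, 0.81, 0.86])              # E_k's self-reported values
s, o = scd(w_ik, a_ik), ocd(w_ik, W_gk, A_gk)
print(s, o, gcd(s, o, beta=0.6), self_proof(w_ik, a_k))
```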

4.2.5. Consistency Degree Similarity Calculation

The consistency degree similarity $SCL_{t,k}$ indicates the similarity between $SCD_k$ and the other consistency degrees, where $t \in \{i, o, oi\}$, as shown in (9)-(11):

$$SCL_{i,k} = 1 - \frac{\left|SCD_k - SCD_{i,k}\right|}{\max\left(SCD_k, SCD_{i,k}\right)}, \tag{9}$$

$$SCL_{o,k} = 1 - \frac{\left|SCD_k - OCD_{i,k}\right|}{\max\left(SCD_k, OCD_{i,k}\right)}, \tag{10}$$

$$SCL_{oi,k} = 1 - \frac{\left|SCD_k - GCD_{i,k}\right|}{\max\left(SCD_k, GCD_{i,k}\right)}, \tag{11}$$

where $t \in \{i, o, oi\}$ is a temporary variable indexing the three consistency degrees. The higher the similarity, the smaller the divergence in attribute trust values. To determine whether the consistency degree similarity is met, a consistency threshold is applied, as shown in (12):

$$SCL_{t,k} \in \left\{SCL_{i,k}, SCL_{o,k}, SCL_{oi,k}\right\} \geq \mu, \tag{12}$$

where $\mu$ represents the consistency threshold. When the consistency degree similarity exceeds $\mu$, the self-proof consistency degree is accepted by the evaluator $E_i$. The consistency threshold is given by (13):

$$\mu = \beta\, \mu_i + (1-\beta) \sum_{g \in O}^{|O|} \hat{w}_{i,g}\, \mu_g, \tag{13}$$

where $\mu_i$ represents the confidence tolerance of $E_i$, $\mu_g$ represents the confidence tolerance of $E_g$, and $\hat{w}_{i,g}$ denotes the normalized weight of $E_g$ as viewed from $E_i$.
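Equations (9)-(13) compare the self-proof degree with the other perspectives under a threshold that blends the evaluator's own tolerance with those of its neighbors. A minimal sketch follows; treating Equation (12) as requiring every similarity to meet the threshold is an interpretation assumed here, and the helper names are illustrative.

```python
def similarity(scd_k, other):
    """Consistency degree similarity, Eqs. (9)-(11)."""
    return 1.0 - abs(scd_k - other) / max(scd_k, other)

def dynamic_threshold(beta, mu_i, mu_g, w_ig_hat):
    """Dynamic consistency threshold, Eq. (13): the evaluator's tolerance blended
    with the normalized-weighted tolerances of its neighbors."""
    return beta * mu_i + (1.0 - beta) * sum(w * m for w, m in zip(w_ig_hat, mu_g))

def accepts_self_proof(scd_k, scd_ik, ocd_ik, gcd_ik, mu):
    """Eq. (12), read as: every similarity must reach the threshold mu."""
    return all(similarity(scd_k, d) >= mu for d in (scd_ik, ocd_ik, gcd_ik))

# Example: tolerances of 0.8 (evaluator) and 0.75/0.85 (two equally weighted neighbors).
mu = dynamic_threshold(beta=0.6, mu_i=0.8, mu_g=[0.75, 0.85], w_ig_hat=[0.5, 0.5])
print(mu, accepts_self_proof(0.86, 0.85, 0.83, 0.84, mu))
```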

4.3. Dynamic Trust Adjustment Mechanism

Based on the consistency evaluation calculation results, a Dynamic Trust Adjustment Mechanism is proposed. This mechanism adjusts the UAV’s attribute trust values according to the relationship between the consistency degree similarity and the consistency threshold. The mechanism mainly includes two scenarios: (1) when the consistency degree similarity between the evaluator UAV and its neighbor UAVs reaches agreement, and (2) when the consistency degree similarity between the evaluator UAV and its neighbor UAVs does not reach agreement.
As outlined in Table 2, the Dynamic Trust Adjustment Mechanism operates as follows. When the subjective and objective consistency degrees are in agreement, the system proceeds based on their unified assessment: if both deem the evaluated UAV benign, the information copy is transmitted to the UAV; conversely, if both classify it as malicious, the UAV is flagged as such and communication is terminated. In cases where the assessments disagree, the mechanism computes the global consistency degree. Subsequently, the evaluator determines whether an additional round of attribute trust value adjustment is required by comparing the global consistency degree similarity against the threshold.
Table 2. Process of the Dynamic Trust Adjustment Mechanism.
  • Case 1: $SCL_{oi,k} \geq \mu$;
This case indicates that the evaluator UAV $E_i$ has strong trust in the evaluated UAV $E_k$. To avoid overconfidence on the part of $E_i$ leading to information being transmitted to malicious UAVs, $E_i$ adjusts the trust value of the $j$-th attribute with the greatest divergence between itself and its neighbors regarding $E_k$ (denoted as $Do_k^{j}$) to align with the attribute trust values of its neighbors. $Do_k^{j}$ is calculated as (14):

$$Do_k^{j} = \sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j} \left(a_{g,k}^{j} - a_{k}^{j}\right), \tag{14}$$

where $\hat{w}_{g,k}^{j}$ represents the normalized weight of the $j$-th attribute trust value, and $a_k^{j}$ represents the self-proven trust value of the $j$-th attribute. The attribute corresponding to the maximum $Do_k^{j}$ is found and adjusted according to the following rule:

$$a_{i,k}^{j} = \beta\, a_{i,k}^{j} + (1-\beta) \sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j}, \tag{15}$$

where $a_{i,k}^{j}$ on the left-hand side denotes the adjusted trust value of the $j$-th attribute for $E_i$. The new global consistency degree similarity $SCL_{oi,k}$ is then recalculated. If $SCL_{oi,k} \geq \mu$, $E_i$ sends the marked information to $E_k$; if $SCL_{oi,k} < \mu$, $E_k$ is considered a malicious UAV.
  • Case 2: $SCL_{oi,k} < \mu$;
This case indicates that E i has a positive evaluation of E k , but does not fully trust E k . To mitigate the risk of collusion attacks between E k and its neighbors, E i sends a marking message if its confidence exceeds 0.5 and it can bear the responsibility for potential information transmission failure. Otherwise, E k is recognized as a malicious UAV, and no information is transmitted.
  • Case 3: $SCL_{oi,k} \geq \mu$;
This case indicates that $E_i$ does not fully trust $E_k$, but its neighbors have a positive evaluation of $E_k$. To overcome $E_i$'s bias, $E_i$ integrates the attribute trust values of its neighbors. The attribute trust value with the greatest divergence between $E_i$ and $E_k$ for the $j$-th attribute (denoted as $Di_k^{j}$) is found using the following rule:

$$Di_k^{j} = a_{i,k}^{j} - a_{k}^{j}. \tag{16}$$

The corresponding attribute trust value is then adjusted according to Equation (15). The new global consistency degree similarity $SCL_{oi,k}$ is recalculated. If $SCL_{oi,k} \geq \mu$, the marking message is sent; otherwise, $E_k$ is considered a malicious UAV.
  • Case 4: $SCL_{oi,k} < \mu$;
If the confidence level of the neighbors exceeds 0.5, E i generates a marking message for E k . Otherwise, E k is classified as a malicious UAV, and E i will not transmit any information to E k .
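Table 2 and Cases 1-4 amount to a decision routine. As a concrete illustration, the sketch below shows one possible reading of Case 1 (the other cases follow the same pattern), with Equation (14) used to locate the most divergent attribute and Equation (15) used to adjust it. Array shapes, helper names, and the exact branching condition are assumptions of this sketch rather than the paper's implementation.

```python
import numpy as np

def similarity(x, y):
    """Consistency degree similarity, Eqs. (9)-(11) (repeated from the sketch above)."""
    return 1.0 - abs(x - y) / max(x, y)

def adjust_attribute(a_ik, W_hat, A_gk, j, beta):
    """Eq. (15): pull the j-th attribute trust toward the neighbors' weighted view."""
    a_new = a_ik.copy()
    a_new[j] = beta * a_ik[j] + (1.0 - beta) * float((W_hat[:, j] * A_gk[:, j]).sum())
    return a_new

def handle_case1(a_ik, a_k, W_hat, A_gk, w_ik, scd_k, beta, mu):
    """One possible reading of Case 1: the evaluator trusts E_k, so the most
    divergent attribute (Eq. (14)) is aligned with the neighbors' view and the
    global consistency similarity is re-checked before transmitting."""
    Do = (W_hat * (A_gk - a_k)).sum(axis=0)          # Eq. (14), one value per attribute
    j = int(np.argmax(np.abs(Do)))                   # attribute with greatest divergence
    a_adj = adjust_attribute(a_ik, W_hat, A_gk, j, beta)
    scd_ik = float(np.dot(w_ik, a_adj))              # Eq. (5) with the adjusted values
    ocd_ik = float(np.dot(w_ik, (W_hat * A_gk).sum(axis=0)))  # Eq. (6)
    gcd_ik = beta * scd_ik + (1.0 - beta) * ocd_ik   # Eq. (7)
    transmit = similarity(scd_k, gcd_ik) >= mu       # re-checked Eq. (11)/(12)
    return transmit, a_adj
```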

4.4. State Update Mechanism

Before information transmission, the expected attribute trust values are pre-computed to facilitate updates once the information reaches its destination. The system dynamically updates the attribute trust value weights based on the amount of information transmitted. A message matrix is then introduced to update the UAV's weights (e.g., $w_{i,n}$), as shown in (17):

$$D_i = \left[\{d_{i,1}^{s}, d_{i,1}^{t}\}, \{d_{i,2}^{s}, d_{i,2}^{t}\}, \ldots, \{d_{i,n}^{s}, d_{i,n}^{t}\}\right]_{1 \times n}, \tag{17}$$

where $d_{i,n}^{s}$ represents the number of untampered messages received by node $E_i$ through $E_n$, and $d_{i,n}^{t}$ represents the total number of messages transmitted through $E_n$. Specifically, each message contains a list recording the sequence of nodes it has passed through. If the message reaches the destination node untampered, the corresponding entries in the message matrix for the traversed nodes are updated as $d_{i,g}^{s} = d_{i,g}^{s} + 1$. The weight $w_{i,g}$ is then updated as $w_{i,g} = \frac{d_{i,g}^{s}}{d_{i,g}^{t}}$, where $w_{i,g}$ represents the weight of $E_g$ from the perspective of $E_i$.
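The message matrix of Equation (17) reduces to per-relay success/total bookkeeping from which the weights are recomputed. The small class below is one way to organize it; the class design, and incrementing the total count on every delivery, are simplifying assumptions of this sketch.

```python
class MessageRecord:
    """Per-relay bookkeeping behind Eq. (17): d^s untampered deliveries, d^t totals."""

    def __init__(self, n_nodes):
        self.success = [0] * n_nodes   # d_{i,n}^s
        self.total = [0] * n_nodes     # d_{i,n}^t

    def record_delivery(self, path, tampered):
        """Update counts for every relay on the message's node list."""
        for g in path:
            self.total[g] += 1
            if not tampered:
                self.success[g] += 1   # d_{i,g}^s = d_{i,g}^s + 1

    def weight(self, g):
        """w_{i,g} = d_{i,g}^s / d_{i,g}^t, or 0 when no history exists yet."""
        return self.success[g] / self.total[g] if self.total[g] else 0.0

# Example: a message relayed through nodes 2 and 5 arrives untampered.
rec = MessageRecord(n_nodes=6)
rec.record_delivery(path=[2, 5], tampered=False)
print(rec.weight(2), rec.weight(5), rec.weight(0))
```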
For Case 1, when the attribute trust values of the evaluator are closer to those of its neighbors, the expected value is computed as follows:

$$a_{i,k}^{j*} = \beta\, a_{i,k}^{j} + (1-\beta) \sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j}. \tag{18}$$
For Case 2, no information transmission occurs, and no update to the attribute trust values is made.
For Cases 3 and 4, the expected attribute trust value calculation is divided into two sub-cases depending on whether a marking message is sent. If no message is sent, the formula is as follows:

$$a_{i,k}^{j*} = \max\left\{\beta\, a_{i,k}^{j} + (1-\beta)\, \frac{\left|\sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j} - a_{i,k}^{j}\right|}{\max\left\{\sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j},\; a_{i,k}^{j}\right\}},\; a_{i,k}^{j}\right\}, \tag{19}$$

where $\left|\sum_{g \in O}^{|O|} \hat{w}_{g,k}^{j}\, a_{g,k}^{j} - a_{i,k}^{j}\right|$ quantifies the difference between the objective consensus and the subjective consensus. This difference term ensures that the global consensus improves the accuracy of benign UAV recognition and encourages trust evaluations to be more consistent across all parties. When a marking message is sent, the neighbors $E_g$ provide critical attribute trust values that encourage the evaluator $E_i$ to transmit information. Therefore, the update rule involves $E_g$ as follows:

$$a_{i,k}^{j*} = \beta\, a_{i,k}^{j} + (1-\beta) \sum_{g \in O}^{|O|} \frac{a_{i,g}^{j} + a_{i,k}^{j}}{2m}. \tag{20}$$
When the information reaches the final destination node, the node checks the authenticity of the received information, generates a broadcast list, and notifies the UAVs to update their attribute trust values, weight matrices w i , and information record matrices.
If the transmitted information has been tampered with, the attribute trust value of $E_i$ for $E_k$ is reduced according to the following formula:

$$\bar{a}_{i,k}^{j} = a_{i,k}^{j}\, e^{-\gamma \left|a_{i,k}^{j*} - a_{i,k}^{j}\right|}, \tag{21}$$

where $a_{i,k}^{j*}$ represents the expected attribute trust value of $E_i$ for $E_k$, $a_{i,k}^{j}$ represents the attribute trust value before the corresponding information transmission, and $\bar{a}_{i,k}^{j}$ is the updated (current) attribute trust value. $\gamma$ controls the strength of the adjustment, increasing sensitivity to repeated tampering. The information matrix records are then adjusted to reduce the weights of $E_k$ and its guarantor $E_g$. The adjustment formula is as follows:

$$d_{i,j}^{t} = d_{i,j}^{t} \times \gamma, \quad j \in \{k, g\}, \tag{22}$$

where $d_{i,j}^{t}$ refers to the updated total number of information transmissions from $E_i$ through $E_j \in \{E_k, E_g\}$. The weights of $E_k$ and $E_g$ at $E_i$ then decrease accordingly, since $w_{i,j} = \frac{d_{i,j}^{s}}{d_{i,j}^{t}}$.
If the information has been verified, the attribute trust value is increased based on the current value and the expected increment, as follows:

$$\bar{a}_{i,k}^{j} = a_{i,k}^{j} + \left(1 - a_{i,k}^{j}\right)\left(1 - e^{-\gamma \left(a_{i,k}^{j*} - a_{i,k}^{j}\right)}\right), \tag{23}$$

where $a_{i,k}^{j*} - a_{i,k}^{j}$ represents the increment in the expected attribute trust value. The number of untampered messages $d_{i,k}^{s}$ and the total number of transmitted messages $d_{i,k}^{t}$ are incremented as $d_{i,k}^{s} = d_{i,k}^{s} + 1$ and $d_{i,k}^{t} = d_{i,k}^{t} + 1$. Additionally, the weights of the evaluated UAV and its neighbors are updated. The result of data transmission also affects the evaluator's confidence level, which is updated as follows:
$$\beta = 1 - \frac{F}{T + F}, \tag{24}$$

where $T$ represents the number of truthful messages sent, and $F$ represents the number of tampered messages.
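Once the expected trust value has been pre-computed, the penalty, reward, and confidence updates of Equations (21), (23), and (24) are one-liners. The sketch below follows the exponential forms written above and is illustrative only; the function names and the example values of γ are assumptions.

```python
import math

def penalize(a_cur, a_exp, gamma):
    """Eq. (21): shrink trust multiplicatively when the message was tampered."""
    return a_cur * math.exp(-gamma * abs(a_exp - a_cur))

def reward(a_cur, a_exp, gamma):
    """Eq. (23): raise trust toward 1 in proportion to the expected increment."""
    return a_cur + (1.0 - a_cur) * (1.0 - math.exp(-gamma * (a_exp - a_cur)))

def confidence(truthful, tampered):
    """Eq. (24): evaluator confidence from the transmission history."""
    return 1.0 - tampered / (truthful + tampered)

# Example: a verified transmission nudges trust up; a tampered one pulls it down.
print(reward(0.80, 0.85, gamma=2.0), penalize(0.80, 0.85, gamma=2.0))
print(confidence(truthful=18, tampered=2))   # β = 0.9
```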

5. Performance Evaluation

5.1. Simulation Setup

The experimental simulations were conducted using the Opportunistic Network Environment (ONE) Simulator version 1.6.0 []. The UAV flight paths were modeled based on a digital representation of downtown Helsinki. To ensure the relevance and validity of the simulation, key parameters were selected based on established research practices and validated reference models in the literature [,,,]. Specifically, the simulation duration was set to 5000 s, providing adequate time for observing UAV behavior patterns and trust dynamics. A total of 120 UAVs were randomly deployed across the simulation map to simulate realistic node density and network interactions. The main simulation parameters are shown in Table 3.
Table 3. Simulation parameters.
UAV mobility speeds are uniformly distributed between 3 m/s and 15 m/s, reflecting typical operational speeds in low-altitude urban environments. The UAV communication range of 300 m was chosen based on common short-range wireless communication standards used in existing UAV network simulations. The bandwidth was set to 5 Mbit/s, sufficient for the transmission of typical data payloads expected in UAV communications. Additionally, each UAV was equipped with a storage buffer of 2 GB to manage data during transit effectively.
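For quick reference, the main settings from Table 3 and the surrounding text can be gathered into a single configuration object; the key names below are our own shorthand and would still need to be mapped onto the ONE simulator's settings file.

```python
# Illustrative summary of the simulation setup described above (key names are ours).
SIMULATION_PARAMS = {
    "simulator": "ONE 1.6.0",
    "map": "downtown Helsinki",
    "duration_s": 5000,
    "num_uavs": 120,
    "speed_mps": (3, 15),        # uniformly distributed
    "comm_range_m": 300,
    "bandwidth_mbit_s": 5,
    "buffer_gb": 2,
    "message_size_mb": 2,        # benign UAV message size
}
```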
In the experiment, UAVs were categorized into benign and malicious UAVs, with each group following distinct movement patterns based on its role. The benign UAVs move according to their task types and specified target locations, performing their missions along predefined paths. In contrast, malicious UAVs behave unpredictably, following random routes and intentionally tampering with attributes such as speed and heading. Malicious UAVs also generate and broadcast false information to neighboring UAVs, injecting deception into the network. Meanwhile, benign UAVs generate 2 MB messages that are transmitted accurately as part of the network's data flow.
To evaluate the performance of the proposed GDM-DTM, we conducted comprehensive experiments focusing on three key aspects: (1) UAV trust value calculation evaluation, (2) trust management system performance analysis, and (3) comparative benchmarking against existing models. Specifically, the trust value calculation evaluation tracks the changes in trust values for both benign and malicious UAVs over time to verify the stability and accuracy of the multi-dimensional attribute trust computation model and the Dynamic Trust Adjustment Mechanism. System performance was evaluated using accuracy, precision, recall, and F-score under different malicious ratios and message densities to assess the system's ability to detect malicious UAVs. Furthermore, the proposed method was compared with D2MIF, FedTrust, and B5G (SVM) to comprehensively evaluate its advantages in malicious node detection accuracy, model robustness, adaptability to dynamic environments, and overall performance.

5.2. UAV Trust Value Calculation Evaluation Analysis

Figure 3 shows the impact of different weight configurations on the UAV trust value changes over time in scenarios where the malicious node ratios are 10% and 30%. Specifically, the experiment examines three key attributes: speed (v), location coordinates (l), and message volume (m), with two different weight combinations: m:v:l = 2:2:6 (left) and m:v:l = 1:5:4 (right).
Figure 3. Trust evaluation over simulation time.
The experimental results indicate that although different weight combinations affect the overall trust level, the trust mean for benign UAVs increases over time across all configurations, while the trust mean for malicious UAVs gradually decreases. As the malicious ratio increases, the rate of change of the trust value decreases. The core mechanisms behind these findings are as follows: (1) The Dynamic Trust Adjustment Mechanism adjusts trust values based on the results of information transmission, enhancing the trust in benign nodes and suppressing malicious nodes. (2) Multi-level consistency evaluation (subjective, objective, global, and self-proof consistency) comprehensively identifies abnormal attributes, such as the divergence between global consensus and self-proof trust, triggering dynamic trust adjustments and improving detection sensitivity. (3) The dynamic update strategy for the weight matrix and confidence level optimizes the reliability and anti-interference capability of group decision-making. (4) The accumulation of information over time in the simulation provides data support for multi-dimensional attribute statistical analysis, effectively weakening the short-term camouflage effects of malicious nodes. In conclusion, the experimental results validate the system’s effectiveness in distinguishing benign and malicious UAVs. The system ensures trust value reliability with good robustness across different malicious node ratios and weight configurations.

5.3. UAV Trust Management System Performance Analysis

5.3.1. Attack Model

The message tampering and fake information generation attack model (MTGI) was adopted. Malicious UAVs tamper with the data upon receiving messages from benign UAVs and then forward them. At the same time, malicious UAVs randomly generate fake information to interfere with benign UAVs. Malicious UAVs recognize one another, while benign UAVs are unaware of this.

5.3.2. Performance Evaluation Under MTGI

We present definitions below:
(i) True Positive (TP): The device is predicted to be malicious, and it is actually malicious.
(ii) True Negative (TN): The device is predicted to be benign, and it is actually benign.
(iii) False Negative (FN): The device is predicted to be benign, but it is actually malicious.
(iv) False Positive (FP): The device is predicted to be malicious, but it is actually benign.
  • Accuracy: Represents the percentage of correctly identified results.

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

  • Precision: Represents the proportion of predicted malicious nodes that are actually malicious.

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

  • Recall: Represents the proportion of actual malicious nodes that are correctly identified.

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

  • F-score: A performance measure calculated from precision and recall.

$$F_{\beta} = \frac{(1 + \beta^{2})\,\mathrm{Precision} \cdot \mathrm{Recall}}{\beta^{2}\,\mathrm{Precision} + \mathrm{Recall}}$$
Accuracy, precision, and recall are highly correlated with the ratio of benign to malicious UAVs. In cases of low malicious ratios, although higher accuracy and recall can be achieved, the precision is lower. This is an undesirable result for systems requiring high security or robustness. Although these three metrics focus on different aspects of identification, they fail to find a balance between them. Therefore, to balance precision and recall, the F-score (F value) metric is used.
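These metrics follow the standard confusion-matrix definitions, so they can be computed directly from predicted and actual labels; with β = 1 the F-score reduces to the usual F1. The helper below is a generic sketch, not code from the paper.

```python
def detection_metrics(y_true, y_pred, beta=1.0):
    """Accuracy, precision, recall, and F_beta from labels (1 = malicious),
    following the definitions above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = ((1 + beta**2) * precision * recall / (beta**2 * precision + recall)
               if precision + recall else 0.0)
    return accuracy, precision, recall, f_score

# Example: 2 of 3 malicious UAVs detected, one benign UAV falsely flagged.
print(detection_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]))
```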
  • Accuracy Performance
As shown in Figure 4a, the system’s accuracy gradually decreases as the ratio of malicious UAVs ( ξ ) increases. When the number of messages is 2000, the accuracy reaches 90.79% at ξ = 5 % and remains at 84.55% when ξ = 40 % . Notably, the amount of information has a significant impact on system performance: when ξ = 20 % , increasing the number of messages from 400 to 2000 results in a 1.98% increase in accuracy (83.91% to 85.89%). This phenomenon stems from the dual effects of the Dynamic Trust Adjustment Mechanism: on one hand, the weight normalization strategy weakens the impact of coordinated attacks by malicious nodes on a single attribute; on the other hand, the confidence update model introduced in Equation (24) incorporates the feedback on information authenticity into the evaluation system, effectively suppressing the self-reinforcing effect of collusive networks. The experimental results show that the system maintains 83% detection accuracy even in high malicious ratio scenarios ( ξ = 30 % ), validating the environmental adaptability of the group decision-making mechanism.
Figure 4. The performance of GDM-DTM under MTGI in terms of (a) accuracy, (b) precision, (c) recall, and (d) F-score.
  • Precision
As shown in Figure 4b, precision decreases non-linearly with an increasing malicious ratio. For the scenario with 2000 messages, as ξ increases from 5% to 40%, precision drops from 97.12% to 90.24%. This relatively smooth decline stems from the filtering mechanism of the self-proof consensus part: when the self-proof trust value generated by malicious nodes through Equation (8) deviates from other consistency parts beyond the dynamic threshold μ (Equation (13)), the system calls the Dynamic Trust Adjustment Mechanism. For example, if a malicious node falsifies a 10° deviation in heading, the objective consensus part detects this anomaly through multi-source verification data from neighboring nodes, triggering trust value weight correction towards the objective consensus direction (Equation (20)).
  • Recall
As shown in Figure 4c, recall decreases gradually as the malicious UAV ratio increases. For the scenario with 2000 messages, recall decreases from 91.90% to 89.53% as ξ increases from 5% to 40%, verifying the system’s robustness under high malicious density. This robustness is primarily due to the combined effects of the multi-dimensional trust evaluation mechanism: cross-validation of multiple attributes such as speed and altitude effectively identifies short-term camouflage behaviors; Self-Proof Consistency Evaluation (Equation (7)) dynamically detects deviations between self-proof trust values and global consensus, such as heading differences triggering adjustments (Equation (15)), reducing the risk of false negatives. The synergy of the multi-level consensus mechanism and dynamic feedback ensures that the system maintains a high recall rate in complex attack scenarios.
  • F-score
The F-score is the harmonic mean of precision and recall and is used to assess the overall classification performance of the model. As shown in the experimental data in Figure 4d, the F-score decreases gradually as the malicious UAV ratio ( ξ ) increases, but the decline is relatively smooth. Under different malicious ratios (5–40%) and message volumes (400–2000 messages), the F-score remains at a high level (87.62–94.44%), indicating that the GDM-DTM system maintains good stability even at higher malicious node ratios. This is primarily due to the collaborative effects of multi-dimensional trust computation and dynamic adjustment mechanisms; self-proof consistency quickly identifies abnormal behaviors by comparing node self-proof data with other consensus (Equations (9)–(11)), while dynamic weight updates (Equations (17)–(20)) continuously optimize trust evaluation based on information exchange.

5.3.3. Comparison of GDM-DTM with Other Models

Based on the aforementioned experiments, we compared the performance of GDM-DTM, D2MIF [], FedTrust [], and B5G (SVM) [] in a scenario where the malicious UAV ratio is 30% and 2000 messages are transmitted to demonstrate their performance. Figure 5 shows the accuracy, precision, recall, and F-score for D2MIF, FedTrust, and B5G (SVM) in the MTGI scenario for malicious node detection models. In terms of accuracy, FedTrust (93.4%) and GDM-DTM (85.04%) perform best, but GDM-DTM leads in recall (90.33%) and F-score (91.66%), reflecting its advantages in comprehensive detection and dynamic balance. D2MIF, based on isolation forests and reinforcement learning, has balanced accuracy (81.1%) and recall (80.9%) but a lower F-score (80.8%), likely due to insufficient dynamic threshold adjustment leading to false positives. B5G (SVM) achieves 100% precision but only a 78% recall, indicating that its strict filtering strategy may miss detecting real malicious nodes. FedTrust, relying on deep federated learning, has the highest accuracy but a low F-score (80%), reflecting its weaker adaptability to imbalanced data. In contrast, GDM-DTM maintains 85.04% accuracy and 91.66% F-score under high malicious node ratios, leveraging multi-dimensional trust evaluation and group decision-making mechanisms, effectively suppressing collusive attacks and enhancing robustness, offering superior overall performance.
Figure 5. Performance comparison of GDM-DTM, D2MIF, FedTrust, and B5G (SVM).

6. Conclusions and Future Directions

This study investigates the trust management problem in low-altitude economy UAV networks and proposes a consistency evaluation trust management method based on group decision-making. By integrating group decision-making with multi-dimensional attribute data, this method provides a solution for malicious node detection in UAV networks. The experimental results show that the proposed method achieves high accuracy and robustness in detecting malicious UAVs and the false information they propagate. Although there are certain limitations in terms of computational complexity and adaptability to real-world scenarios, future research could optimize the algorithms, expand the application scenarios, and integrate technologies such as blockchain to further enhance the system's real-time capability and security. This research provides new insights into trust systems for UAV networks and promotes the safe development of the low-altitude economy.

Author Contributions

Methodology, Y.H.; software, Y.G.; validation, H.W.; formal analysis, C.X.; data curation, M.M.; writing—original draft preparation, Y.H.; writing—review and editing, Y.G. and C.W.; project administration, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Tianjin College Student Innovation and Entrepreneurship Training Program, grant number 202410057029.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, Y. Unmanned Aerial Vehicles Based Low-Altitude Economy with Lifecycle Techno-Economic-Environmental Analysis for Sustainable and Smart Cities. J. Clean. Prod. 2025, 499, 145050. [Google Scholar] [CrossRef]
  2. Zheng, K.; Fu, J.; Liu, X. Relay Selection and Deployment for NOMA-Enabled Multi-AAV-Assisted WSN. IEEE Sens. J. 2025, 25, 16235–16249. [Google Scholar] [CrossRef]
  3. Teng, M.; Gao, C.; Wang, Z. A Communication-Based Identification of Critical Drones in Malicious Drone Swarm Networks. Complex Intell. Syst. 2024, 10, 3197–3211. [Google Scholar] [CrossRef]
  4. Zilberman, A.; Stulman, A.; Dvir, A. Identifying a Malicious Node in a UAV Network. IEEE Trans. Netw. Serv. Manage. 2024, 21, 1226–1240. [Google Scholar] [CrossRef]
  5. Meng, X.; Liu, D. GeTrust: A Guarantee-Based Trust Model in Chord-Based P2P Networks. IEEE Trans. Dependable Secur. Comput. 2018, 15, 54–68. [Google Scholar] [CrossRef]
  6. You, X.; Hou, F.; Chiclana, F. A Reputation-Based Trust Evaluation Model in Group Decision-Making Framework. Inf. Fusion 2024, 103, 102082. [Google Scholar] [CrossRef]
  7. Zhaofeng, M.; Lingyun, W.; Xiaochang, W. Blockchain-Enabled Decentralized Trust Management and Secure Usage Control of IoT Big Data. IEEE Internet Things J. 2020, 7, 4000–4015. [Google Scholar] [CrossRef]
  8. Haseeb, K.; Saba, T.; Rehman, A. Trust Management with Fault-Tolerant Supervised Routing for Smart Cities Using Internet of Things. IEEE Internet Things J. 2022, 9, 22608–22617. [Google Scholar] [CrossRef]
  9. Chen, X.; Ding, J.; Lu, Z. A Decentralized Trust Management System for Intelligent Transportation Environments. IEEE Trans. Intell. Transport. Syst. 2022, 23, 558–571. [Google Scholar] [CrossRef]
  10. Deng, M.; Lyu, Y.; Yang, C. Lightweight Trust Management Scheme Based on Blockchain in Resource-Constrained Intelligent IoT Systems. IEEE Internet Things J. 2024, 11, 25706–25719. [Google Scholar] [CrossRef]
  11. Xiang, X.; Cao, J.; Fan, W. Blockchain Enabled Dynamic Trust Management Method for the Internet of Medical Things. Decis. Support Syst. 2024, 180, 114184. [Google Scholar] [CrossRef]
  12. Gu, C.; Ma, B.; Hu, D. A Dependable and Efficient Decentralized Trust Management System Based on Consortium Blockchain for Intelligent Transportation Systems. IEEE Trans. Intell. Transport. Syst. 2024, 25, 19430–19443. [Google Scholar] [CrossRef]
  13. Zheng, J.; Xu, J.; Du, H. Trust Management of Tiny Federated Learning in Internet of Unmanned Aerial Vehicles. IEEE Internet Things J. 2024, 11, 21046–21060. [Google Scholar] [CrossRef]
  14. Wang, P.; Xu, N.; Zhang, H. Dynamic Access Control and Trust Management for Blockchain-Empowered IoT. IEEE Internet Things J. 2022, 9, 12997–13009. [Google Scholar] [CrossRef]
  15. Din, I.U.; Awan, K.A.; Almogren, A. Secure and Privacy-Preserving Trust Management System for Trustworthy Communications in Intelligent Transportation Systems. IEEE Access 2023, 11, 65407–65417. [Google Scholar] [CrossRef]
  16. Wang, C.; Cai, Z.; Seo, D. TMETA: Trust Management for the Cold Start of IoT Services with Digital-Twin-Aided Blockchain. IEEE Internet Things J. 2023, 10, 21337–21348. [Google Scholar] [CrossRef]
  17. Yu, Y.; Lu, Q.; Fu, Y. Dynamic Trust Management for the Edge Devices in Industrial Internet. IEEE Internet Things J. 2024, 11, 18410–18420. [Google Scholar] [CrossRef]
  18. Xu, Q.; Zhang, L.; Qin, X. A Novel Machine Learning-Based Trust Management Against Multiple Misbehaviors for Connected and Automated Vehicles. IEEE Trans. Intell. Transport. Syst. 2024, 25, 16775–16790. [Google Scholar] [CrossRef]
  19. A Review of Research on UAV Network Intrusion Detection and Trust Systems; National Cybersecurity Institute, Wuhan University: Wuhan, China, 2023.
  20. Liu, Z.; Guo, J.; Huang, F. Lightweight Trustworthy Message Exchange in Unmanned Aerial Vehicle Networks. IEEE Trans. Intell. Transport. Syst. 2023, 24, 2144–2157. [Google Scholar] [CrossRef]
  21. Kundu, J.; Alam, S.; Koner, C. Trust-Based Dynamic Leader Selection Mechanism for Enhanced Performance in Flying Ad-Hoc Networks (FANETs). IEEE Trans. Intell. Transport. Syst. 2024, 25, 20616–20627. [Google Scholar] [CrossRef]
  22. Liang, J.; Liu, W.; Xiong, N.N. An Intelligent and Trust UAV-Assisted Code Dissemination 5G System for Industrial Internet-of-Things. IEEE Trans. Ind. Inf. 2022, 18, 2877–2889. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Wang, W.; Shi, F. Reputation-Based Raft-Poa Layered Consensus Protocol Converging UAV Network. Comput. Netw. 2024, 240, 110170. [Google Scholar] [CrossRef]
  24. Akram, J.; Anaissi, A.; Rathore, R.S. GALTrust: Generative Adverserial Learning-Based Framework for Trust Management in Spatial Crowdsourcing Drone Services. IEEE Trans. Consum. Electron. 2024, 70, 6196–6207. [Google Scholar] [CrossRef]
  25. Akram, J.; Anaissi, A.; Rathore, R.S. Digital Twin-Driven Trust Management in Open RAN-Based Spatial Crowdsourcing Drone Services. IEEE Trans. Green Commun. Netw. 2024, 8, 1061–1075. [Google Scholar] [CrossRef]
  26. Zha, Q.; Dong, Y.; Zhang, H. A Personalized Feedback Mechanism Based on Bounded Confidence Learning to Support Consensus Reaching in Group Decision Making. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3900–3910. [Google Scholar] [CrossRef]
  27. Zhou, X.; Li, S.; Wei, C. Consensus Reaching Process for Group Decision-Making Based on Trust Network and Ordinal Consensus Measure. Inf. Fusion 2024, 101, 101969. [Google Scholar] [CrossRef]
  28. Zeng, Y.; Guvenc, I.; Zhang, R.; Geraci, G.; Matolak, D.W. (Eds.) Front Matter. In UAV Communications for 5G and Beyond; Wiley: Hoboken, NJ, USA, 2020; ISBN 978-1-119-57569-6. [Google Scholar][Green Version]
  29. Keränen, A.; Ott, J.; Kärkkäinen, T. The ONE Simulator for DTN Protocol Evaluation. In Proceedings of the Second International ICST Conference on Simulation Tools and Techniques, Rome, Italy, 2–6 March 2009; ICST: Rome, Italy, 2009. [Google Scholar][Green Version]
  30. Yan, J.; Zhao, X.; Li, Z. Deep-Reinforcement-Learning-Based Computation Offloading in UAV-Assisted Vehicular Edge Computing Networks. IEEE Internet Things J. 2024, 11, 19882–19897. [Google Scholar] [CrossRef]
  31. Kurunathan, H.; Huang, H.; Li, K. Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey. IEEE Commun. Surv. Tutor. 2022, 26, 496–533. [Google Scholar] [CrossRef]
  32. Rahmani, M.; Delavernhe, F.; Mohammed Senouci, S. Toward Sustainable Last-Mile Deliveries: A Comparative Study of Energy Consumption and Delivery Time for Drone-Only and Drone-Aided Public Transport Approaches in Urban Areas. IEEE Trans. Intell. Transport. Syst. 2024, 25, 17520–17532. [Google Scholar] [CrossRef]
  33. Ajakwe, S.O.; Kim, D.-S.; Lee, J.-M. Drone Transportation System: Systematic Review of Security Dynamics for Smart Mobility. IEEE Internet Things J. 2023, 10, 14462–14482. [Google Scholar] [CrossRef]
  34. Liu, W.; Lin, H.; Wang, X. D2MIF: A Malicious Model Detection Mechanism for Federated-Learning-Empowered Artificial Intelligence of Things. IEEE Internet Things J. 2023, 10, 2141–2151. [Google Scholar] [CrossRef]
  35. Awan, K.A.; Ud Din, I.; Zareei, M. Securing IoT with Deep Federated Learning: A Trust-Based Malicious Node Identification Approach. IEEE Access 2023, 11, 58901–58914. [Google Scholar] [CrossRef]
  36. Abubaker, Z.; Javaid, N.; Almogren, A. Blockchained Service Provisioning and Malicious Node Detection via Federated Learning in Scalable Internet of Sensor Things Networks. Comput. Netw. 2022, 204, 108691. [Google Scholar] [CrossRef]
