Article

Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication

by Ahmed Alruwaili 1,*, Sardar Islam 1 and Iqbal Gondal 2
1 Institute for Sustainable Industries & Liveable Cities (ISILC), Victoria University, Melbourne, VIC 3000, Australia
2 School of Computing Tech, Royal Melbourne Institute of Technology (RMIT), Melbourne, VIC 3000, Australia
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048
Submission received: 12 May 2025 / Revised: 11 July 2025 / Accepted: 16 July 2025 / Published: 19 July 2025

Abstract

The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and an enhanced user experience, particularly when combined with Federated Learning (FL). However, the distributed nature of IoV networks raises inherent challenges in data privacy, resilience to adversarial attacks, and the management of limited resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates an adaptive trust evaluation mechanism capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches shows that Fed-DTB improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms state-of-the-art approaches in final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems with the help of federated learning, overcoming the conventional trade-off between security and efficiency.

1. Introduction

1.1. Background and Motivation

The Internet of Things (IoT) has changed the world by allowing diverse devices to communicate and cooperate across domains, revolutionising industries such as healthcare, smart cities, and transportation [1]. IoT principles motivated the development of the Internet of Vehicles (IoV), a promising framework designed to integrate smart, connected, and often autonomous vehicles into a collaborative and interactive network [2]. Through real-time communication and collaboration among vehicles, IoV enables advanced functionalities, including cooperative perception, predictive maintenance, and improved road safety [2,3]. Beyond facilitating inter-vehicle communication, IoV transforms vehicles into intelligent entities capable of performing a wide variety of tasks independently. Using high-dimensional sensory input and sophisticated decision-making algorithms, these vehicles operate as robotic systems, adapting to dynamic environments and improving traffic flow, efficiency, and safety [4].
The complexity of IoV has led to the adoption of reinforcement learning (RL), a framework that allows vehicles to learn and make decisions through interaction with their environment [4]. Vehicles learn from experience, receive rewards or penalties for their actions, and adapt to optimise their behaviour over time. Since IoV environments are high-dimensional and complex, Deep Reinforcement Learning (DRL) extends RL's capabilities and has emerged as a promising approach for IoV, enabling vehicles to learn adaptive strategies from high-dimensional sensory inputs and complex traffic scenarios [2,3]. This capability improves decision-making under unpredictable and time-varying conditions, ultimately enhancing traffic flow, efficiency, and safety. However, deploying DRL in IoV faces numerous challenges, including the need for large, diverse, and representative datasets to train models effectively in heterogeneous and dynamic environments. It also raises concerns of privacy, security, and scalability [2,3,4].
To address these concerns, Federated Learning (FL) has emerged as a promising paradigm that enables collaborative learning among vehicles while preserving their privacy [5,6]. In FL, vehicles train models locally on their own data and share only model updates with a central server or other vehicles. The server aggregates these updates into a global model, which is then redistributed to the vehicles. Through this process, vehicles leverage collective knowledge without exposing sensitive raw data, such as driver behaviours and routes. Participation in FL allows vehicles to improve object detection, understand traffic patterns, and optimise routes, improving driving performance and road safety. However, integrating FL into IoV brings significant challenges, particularly in autonomous driving systems where security and privacy requirements are critical [7]. Compromised or malicious vehicles can transmit poisoned updates containing corrupted information intended to degrade global model performance and introduce security vulnerabilities. Such poisoned updates compromise the model's integrity and create delays that affect the responsiveness of critical operations. Moreover, managing information flows in IoV requires robust and efficient solutions due to fluctuating communication quality, dynamic network conditions, and frequent vehicle transitions.
These interconnected challenges collectively reveal a critical research gap in current IoV federated learning approaches. While existing solutions address individual aspects such as privacy preservation or basic intrusion detection, there remains a fundamental need for integrated frameworks that can simultaneously manage trust evaluation, adapt to dynamic vehicular conditions, and maintain collaborative learning effectiveness under adversarial threats. The convergence of these challenges necessitates the development of adaptive trust mechanisms specifically designed for IoV environments that can dynamically assess participant reliability while preserving the collaborative benefits of federated learning. This work addresses this gap through the development of Fed-DTB, a comprehensive trust-based framework that integrates multiple vehicular factors with adaptive weighting mechanisms to enable secure and efficient federated learning in dynamic IoV networks.

1.2. Related Works and Limitations

Building on these challenges, previous work has proposed various approaches to enhance security, privacy, and trust management in IoV and FL systems, mostly in areas like Intrusion Detection Systems (IDS), privacy preservation, and trust evaluation.
Several studies have concentrated on detecting malicious activities in vehicular networks and protecting the sensitive information they carry. For instance, Khraisat and Alazab [8] proposed a deep learning-based IDS capable of identifying malicious messages and detecting attacks on in-vehicle protocols (e.g., CAN). Their approach improved vehicular network security by preemptively countering threats. Lo et al. [9] surveyed security and privacy issues in federated learning and proposed secure aggregation and differential privacy techniques to protect data confidentiality while allowing collaborative model training. Similarly, Cui et al. [10] demonstrated how deep reinforcement learning can optimise secure task offloading in IoV edge computing through trust mechanisms that boost system performance. These studies demonstrate the importance of integrating IDS security, privacy enhancements, and strong communication capabilities throughout the IoV network [8,9,10,11]; such measures yield high performance while protecting sensitive data. Although these security and privacy methods create strong foundations, they may not account for performance delays resulting from poisoned updates in IoV systems. Current approaches usually optimise accuracy and privacy but do not account for the overhead induced by adversarial mitigation measures. This gap highlights the demand for latency-sensitive trust mechanisms tailored to time-sensitive IoV applications.
Blockchain technology has been integrated with federated learning (FL) to enhance trust and security in vehicular collaboration, offering transparency and tamper resistance for model updates. For example, Chai et al. adopted blockchain technology to support hierarchical FL systems for establishing a secure data exchange between vehicles and roadside units (RSUs) in Intelligent Transportation Systems (ITS) environments [12]. Similarly, in Wang et al.'s study [13], the authors built a blockchain FL system with smart contracts for protecting model update integrity, while Chai et al. [14] established an FL system based on consortium blockchain to find untrustworthy participants. While blockchain-based approaches improve transparency and help with adversarial resistance, they increase computational overhead and latency. This overhead creates scalability problems for large-scale, dynamic IoV networks, which require immediate decisions and fast adaptability. Combining blockchain with FL frameworks thus yields more trustworthy vehicular collaboration but offers limited performance in delay-sensitive IoV scenarios.
In addition to security-centric methods, trust management has emerged as an essential component in IoT and FL contexts [15]. Aiche et al. [16] applied Analytical Hierarchy Process (AHP) to evaluate trust levels in IoT water treatment systems based on data accuracy, reliability, and system experience. Boakye-Boateng et al. [17] demonstrated that trust-based frameworks could defend smart grid substations against cyberattacks, underscoring trust’s role in critical infrastructure protection. Within federated learning, Rjoub et al. [18] introduced a trust-based reinforcement selection approach for IoT device scheduling, incorporating trust scores and energy levels; Nishio and Yonetani [19] proposed a client selection protocol in Multi-Access Edge Computing (MEC) frameworks to optimise heterogeneous resources and efficiency; and Mazloomi et al. [20] introduced a trust approach based on trust metrics to improve training accuracy and reduce initial training loss in FL servers. While they show that trust brings benefits to the efficiency and security of IoT and FL networks, they fail to address the key challenges concerning IoV environments.
Recent advances in FL-IoT research have focused on secure data management and resource optimisation in vehicular networks. Nguyen et al. [21] established a taxonomy of FL-IoT services, highlighting the role of federated learning in the management of secure vehicular data. Subsequent studies expand FL to address resource optimisation and scalability challenges. For instance, FL optimises vehicular data offloading via RSU-mediated resource allocation [22], while vertical and horizontal FL architectures [23,24] employ differential privacy [25] and cloud aggregation to safeguard sensitive data. However, these frameworks lack rigorous evaluations of resilience against adversarial attacks or inference threats. Privacy-preserving FL models [23,24] also incur high communication latency due to centralised cloud aggregation, limiting real-time applicability. Furthermore, data offloading strategies [22] fail to address privacy risks when third-party aggregators process sensitive information.
FL-driven resource management strategies aim to enhance efficiency in vehicular systems. Centralised federated learning (CFL) optimises power control [26], while DRL-based schemes improve resource allocation in vehicle-to-everything (V2X) communication [27]. Decentralised FL (DFL) further refines efficiency in MEC-aided V2V networks [28]. Beyond transportation, FL supports smart city applications by leveraging vehicles as clients for city-scale data management [29]. Despite these innovations, resource management models [26,27,28] exhibit untested learning latency and unverified accuracy in DRL-based methods, which may compromise real-time ITS and V2X performance. Privacy simulations in these models remain incomplete, and smart city FL implementations [29] lack scalability analyses, raising concerns about adaptability to large-scale deployments.
Recent advances in federated learning security have demonstrated significant progress in addressing both privacy and robustness challenges, with machine learning approaches showing increasing effectiveness in cybersecurity applications [30,31]. Zhao et al. [32] proposed a novel privacy-preserving aggregation framework that protects against inference attacks using computational Diffie-Hellman (CDH) encryption, achieving minimal accuracy loss while maintaining lightweight computation costs. Similarly, Zhao et al. [33] introduced FedSuper, a Byzantine-robust federated learning approach that utilises shadow dataset supervision to maintain robustness against threatening scenarios with high Byzantine ratios (up to 0.9) and extreme non-IID data distributions. While these advances address crucial privacy preservation and Byzantine robustness challenges in general federated learning contexts, the Internet of Vehicles domain presents unique operational requirements that necessitate specialised approaches. IoV environments exhibit distinct characteristics, including real-time mobility constraints, dynamic energy limitations, fluctuating network topologies, and safety-critical decision requirements [34] that are not fully addressed by existing privacy or robustness frameworks. This motivates the development of IoV-specific trust mechanisms that can dynamically adapt to vehicular context while ensuring both security and operational efficiency.
Across these lines of work, we observe three persistent tensions: (i) security-centric schemes seldom quantify resource cost and latency; (ii) resource-optimisation studies treat trust as static or implicit, overlooking adversarial behaviour; (iii) single-factor trust approaches ignore mobility-induced drift and heterogeneous data. Fed-DTB resolves these tensions by unifying dynamic multi-factor trust with cost-aware optimisation and by validating the design under simulated vehicular mobility and adversarial conditions.
Despite these substantial advances, several limitations remain evident when focusing on IoV scenarios:
  • Existing frameworks for vehicular data sharing lack rigorous testing against adversarial attacks (e.g., model poisoning) or inference threats, while integrated blockchain in FL models ignores the computational overhead of consensus protocols in large-scale networks.
  • Trust mechanisms relying on static and single factors (e.g., IDS or user feedback) fail to adapt to dynamic IoV conditions such as fluctuating network reliability, energy availability, or evolving adversarial tactics.
  • Privacy-preserving FL architectures depend on centralised cloud aggregation, introducing latency bottlenecks that hinder deployment in mobile vehicular environments.
  • Publicly available datasets lack realistic IoV-specific scenarios (e.g., adversarial behaviours, heterogeneous network states), and scalability analyses for smart city FL implementations are absent.
  • Resource management strategies, particularly those using DRL, suffer from untested learning latency and incomplete privacy simulations, undermining real-time decision-making in ITS and V2X applications.
  • The convergence behaviour of FL models in heterogeneous vehicular networks remains unvalidated, raising concerns about stability under real-world conditions.

1.3. Contributions of This Study

To overcome the limitations identified above and tackle the unique challenges of IoV-based federated learning, we propose Fed-DTB, a Federated Dynamic Trust-Based framework designed explicitly for vehicular environments. Fed-DTB fuses internal indicators (IDS alerts, energy consumption, FL participation history) with external context (regulatory compliance, network stability). Unlike static models, it employs adaptive weighting that recalibrates factor importance in real time. Based on the foregoing limitations, Fed-DTB advances the state of the art as follows:
  • We introduce a multi-factor trust evaluation system that integrates diverse IoV-specific factors, including network status, vehicle state, energy efficiency, compliance with traffic rules, federated learning participation, user feedback, and IDS alerts. By combining these factors in a cohesive and context-aware manner, our approach provides a richer and more dynamic measure of trustworthiness compared with existing single-metric or static trust mechanisms.
  • We propose a context-aware adaptive weighting mechanism that dynamically adjusts the priority of trust factors (e.g., prioritising intrusion detection system (IDS) [35] alerts during attacks or energy efficiency in low-resource scenarios) to reflect real-time IoV conditions.
  • We design a trust-driven client selection framework that optimises the choice of FL participants based on contextual trust scores, enhancing global model accuracy while minimising training latency.
  • We generate a custom synthetic dataset that simulates realistic IoV scenarios, including adversarial attacks, dynamic network states, and vehicular mobility, to address the lack of public benchmarks for IoV-FL research. Experimental validation using standard benchmarks (e.g., MNIST, CIFAR-10) confirms the framework’s generalisation across diverse scenarios.
  • Through extensive experiments, we validate the framework’s robustness against poisoning attacks, improved convergence behaviour, and enhanced accuracy compared with single-factor trust baselines.
Previous work on trust or security in federated learning typically relies on a single static factor (e.g., IDS or user feedback). By contrast, Fed-DTB offers the first lightweight, dynamic multi-factor trust mechanism validated under realistic IoV mobility.
The rest of this paper is organised as follows. Section 2 describes the system model and problem formulation. Section 3 presents the proposed Fed-DTB trust management framework and vehicle selection mechanism. Section 4 discusses experimental results and analysis. Section 5 provides discussion and implications of the findings. Finally, Section 6 concludes the paper and outlines future research directions.

2. System Model and Problem Formulation

2.1. System Model

We consider an IoV network architecture consisting of a set of vehicles $V = \{v_1, v_2, \ldots, v_N\}$ and a central server S, as illustrated in Figure 1. Each vehicle $v_i \in V$ is equipped with multiple sensors, communication modules (e.g., cellular, Wi-Fi, DSRC [36]), and onboard computing units. These capabilities enable vehicles to collect local data, perform computations, and transmit model updates to the central server or other vehicles in real time. This architecture lays the groundwork for our trust-based mechanism, which ensures that only reliable vehicles contribute to the federated learning process, as detailed in subsequent sections.
Each vehicle $v_i \in V$ generates a local dataset $D_i$ containing data points $(x_{ij}, y_{ij})$. Here, $x_{ij}$ denotes the feature vector of the j-th data point collected by vehicle $v_i$'s sensors, while $y_{ij}$ represents the corresponding label or action. FL enables vehicles to train local models $LM_i$ on their datasets $D_i$ while maintaining data privacy by transmitting only model updates to the central server S. The server aggregates these updates to refine the global model $GM$ [7]. While this approach preserves privacy and leverages distributed intelligence, it requires mechanisms to filter contributions from trustworthy vehicles to safeguard the global model's integrity against malicious participants.
The dynamic nature of IoV networks, characterised by fluctuating network conditions, varying vehicle states, and evolving environmental contexts, requires a trust evaluation mechanism. Vehicles continuously move in and out of coverage areas, face changing resource availability, and encounter different road or traffic conditions. Against this backdrop, an effective trust management strategy must integrate multiple factors and remain flexible in assigning their importance over time, ensuring that the global FL model is trained using updates from genuine, reliable sources.

2.2. Threat Model

While many vehicles behave honestly in the IoV ecosystem, others may be compromised by malicious entities aiming to disrupt the FL process. Adversarial threats in this context can take several forms [37,38,39]:
  • Model poisoning and misrepresentation: Malicious participants may send tampered model updates or corrupted training data, degrading the accuracy and reliability of the global FL model.
  • Backdoor or Trojan attacks: Hidden triggers embedded into the model can cause specific malicious behaviours when activated, compromising safety and exposing the system to significant risks.
  • Training process disruption: Adversaries may send outdated or incomplete updates, exploit connectivity issues, or deliberately induce delays, undermining model convergence and system performance.
  • Exploitation of in-vehicle vulnerabilities: Attackers can target internal protocols (e.g., CAN Bus) to inject malicious commands, jeopardising vehicle operations and data integrity.
To mitigate these risks, we assume the presence of an IDS in each vehicle, capable of detecting many common attacks and anomalies [40,41]. Although IDS systems are not infallible, they play a pivotal role in trust evaluation. Furthermore, communications between vehicles and the central server are encrypted (e.g., using AES [42]) to ensure confidentiality and integrity. However, tampering at the source remains a potential vulnerability, underscoring the importance of our trust-based mechanism as a critical first line of defence. This mechanism evaluates and selects participants based on trustworthiness, safeguarding the FL process from adversarial interference.

2.3. Problem Formulation

The main objective is to create a secure and effective FL framework for IoV networks in which only trustworthy vehicles participate. Multiple elements form the trust score of each vehicle: the framework combines external factors such as network quality, compliance with traffic regulations, and user feedback with internal metrics covering vehicle health, energy efficiency, and IDS alerts. This multi-factor assessment delivers a comprehensive evaluation of the conditions that determine vehicle reliability.

2.3.1. Trust Score Computation

The trustworthiness of each vehicle $v_i$ at time t is quantified using a novel trust model that integrates dynamic weighting mechanisms and multidimensional factors. In addition, trust evolution over time is captured through implicit decay mechanisms, including cumulative behavioural anomalies, prolonged disconnection periods, and decreased model-contribution relevance. This ensures that trust does not remain static but evolves based on both short-term and long-term indicators of vehicular reliability and consistency. Such temporal modelling supports the exclusion of clients exhibiting passive degradation even in the absence of acute security events. The overall trust score $\tau_i(t)$ is calculated as
\tau_i(t) = \tau_{I_i}(t) \cdot \left( w_N \,\tau_{N_i}(t) + w_{VS} \,\tau_{VS_i}(t) + w_E \,\tau_{E_i}(t) + w_C \,\tau_{C_i}(t) + w_P \,\tau_{P_i}(t) + w_U \,\tau_{U_i}(t) \right)
In this model, the following is true:
  • τ I i ( t ) is the IDS alert score, set to 1 if no alerts are generated and 0 otherwise.
  • τ N i ( t ) , τ V S i ( t ) , τ E i ( t ) , τ C i ( t ) , τ P i ( t ) , τ U i ( t ) are scores for network status, vehicle status, energy efficiency, compliance with traffic regulations, FL participation quality, and user feedback, respectively.
  • w N , w V S , w E , w C , w P , w U are weights indicating the relative importance of each factor.
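To make the computation concrete, the trust-score equation above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the factor keys, example scores, and weight values are our own assumptions.

```python
# Minimal sketch of the overall trust-score computation.
# Factor keys, example scores, and weights are illustrative assumptions.

def overall_trust(ids_ok, factors, weights):
    """tau_i(t) = tau_I(t) * sum_k w_k * tau_k(t); an IDS alert
    (ids_ok=False) sets tau_I = 0 and nullifies the whole score."""
    tau_ids = 1.0 if ids_ok else 0.0
    return tau_ids * sum(weights[k] * factors[k] for k in factors)

# Hypothetical per-factor scores (network, vehicle state, energy,
# compliance, FL participation, user feedback), each normalised to [0, 1]:
factors = {"N": 0.9, "VS": 0.8, "E": 0.7, "C": 1.0, "P": 0.85, "U": 0.9}
weights = {"N": 0.25, "VS": 0.15, "E": 0.15, "C": 0.15, "P": 0.15, "U": 0.15}

score = overall_trust(True, factors, weights)
assert overall_trust(False, factors, weights) == 0.0  # alert nullifies trust
```

Note how the multiplicative role of the IDS term makes it a veto: any triggered alert zeroes the score regardless of the remaining factors.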
Factor Weights
The weights in the trust score calculation reflect two key considerations: First, a security perspective that prioritises critical factors (IDS alerts) due to their importance in identifying threats and safeguarding local data. Second, a correlation approach that adjusts weights based on their statistical relevance. In particular, Pearson’s correlation coefficients are periodically computed to assign higher weights to strongly correlated factors like network stability ( τ N i ) while minimising the influence of less significant ones. This ensures that the weighting adapts effectively to evolving IoV conditions.
Dynamic Weight Adjustment
Fed-DTB employs Equation (2), a dynamic mechanism for weight adjustment that adapts to evolving IoV conditions and operational feedback. Unlike static schemes, this approach recalculates the correlation between each factor’s score ( τ k ) and the overall trust score ( τ i ) after periodic training rounds, adjusting weights based on their statistical relevance. For example, factors exhibiting stronger correlations, such as network stability ( τ N ), are assigned higher weights, while less significant factors receive lower weights. The weights are computed as
w_k(t+1) = \frac{\mathrm{Corr}(\tau_k, \tau)}{\sum_{j=1}^{n} \mathrm{Corr}(\tau_j, \tau)}
where $\mathrm{Corr}(\tau_k, \tau)$ is the correlation between factor k's partial trust score $\tau_k$ and the overall trust $\tau$, the denominator $\sum_{j=1}^{n} \mathrm{Corr}(\tau_j, \tau)$ sums these correlations over all n factors (indexed by j), and $(t+1)$ denotes the updated weight for the next federated learning round. The correlation term $\mathrm{Corr}(\tau_k, \tau)$ in Equation (2) is computed explicitly using Pearson's correlation coefficient based on vehicle trust-score data collected from recent federated learning rounds. Specifically, the correlation is calculated across a sliding window of the latest L = 5 training rounds as follows:
\mathrm{Corr}(\tau_k, \tau) = \frac{\sum_{i=1}^{m} (\tau_{k,i} - \bar{\tau}_k)(\tau_i - \bar{\tau})}{\sqrt{\sum_{i=1}^{m} (\tau_{k,i} - \bar{\tau}_k)^2} \, \sqrt{\sum_{i=1}^{m} (\tau_i - \bar{\tau})^2}}
where $\tau_{k,i}$ denotes the partial trust score for factor k of vehicle i, $\tau_i$ represents the overall trust score of vehicle i, and $\bar{\tau}_k$ and $\bar{\tau}$ are their respective mean values computed over all m participating vehicles within the current training round. To maintain numerical stability and avoid division by zero, a small correction term $\varepsilon = 10^{-9}$ is added to the denominator. Pairwise deletion is employed to handle missing data effectively, ensuring the robustness and accuracy of correlation estimates. The absolute value of the computed correlation is taken prior to normalisation in Equation (2), thus guaranteeing non-negative and stable weights in subsequent federated learning rounds. The sum of all weights remains mathematically normalised to 1, ensuring a balanced contribution from each factor:
w N + w V S + w E + w C + w P + w U = 1 .
This ensures that no single factor inordinately influences the overall trust score. The dynamic adjustment mechanism recalculates weights based on statistical insights from recent training rounds. This process ensures the weighting system remains responsive to the IoV’s real-time dynamics, enabling robust and adaptive trust evaluations across changing vehicular states, network conditions, and adversarial threats.
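The weight update of Equations (2) and (3) can be sketched in plain Python as follows. This is a simplified single-round illustration under our own assumptions: the `update_weights` name, the two-factor example data, and the omission of the L = 5 sliding-window bookkeeping are not from the paper.

```python
import statistics

EPS = 1e-9  # small correction term, as in the paper, to avoid division by zero

def update_weights(partial, overall):
    """Correlation-based weight update (single-round sketch).

    partial: per-vehicle lists of n partial trust scores (m x n);
    overall: per-vehicle overall trust scores (length m).
    Returns n weights: |Pearson correlation| of each factor with the
    overall score, normalised so the weights sum to 1.
    """
    n = len(partial[0])
    ybar = statistics.mean(overall)
    corrs = []
    for k in range(n):
        xs = [row[k] for row in partial]
        xbar = statistics.mean(xs)
        num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, overall))
        den = (sum((x - xbar) ** 2 for x in xs) ** 0.5
               * sum((y - ybar) ** 2 for y in overall) ** 0.5) + EPS
        corrs.append(abs(num / den))  # absolute value, per the paper
    total = sum(corrs) + EPS
    return [c / total for c in corrs]

# Factor 0 tracks the overall score perfectly; factor 1 only weakly.
partial = [[0.9, 0.5], [0.7, 0.5], [0.5, 0.6], [0.3, 0.4]]
overall = [0.88, 0.70, 0.52, 0.34]
weights = update_weights(partial, overall)
assert abs(sum(weights) - 1.0) < 1e-6  # weights remain normalised
assert weights[0] > weights[1]         # strongly correlated factor dominates
```

In this toy example the strongly correlated factor receives the larger weight, mirroring how Fed-DTB is described as promoting factors such as network stability when they best explain overall trust.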

2.3.2. Trust-Based Vehicle Selection Mechanism

At each FL training round, the central server aggregates the trust scores from all vehicles and selects a subset for model updates. The selection aims to do the following:
  • The total trust of selected vehicles is maximised, enhancing global model reliability.
  • Vehicles with low trust scores or flagged by IDS alerts are excluded to maintain security.
  • The selected subset adapts to dynamic IoV conditions like network stability and vehicular reliability.
We define a binary variable $x_i$ indicating whether vehicle $v_i$ is selected. We then solve the following optimisation problem:
\begin{aligned}
\text{maximise} \quad & \sum_{i=1}^{N} x_i \,\tau_i(t) \\
\text{subject to} \quad & \sum_{i=1}^{N} x_i \le K, \\
& \tau_i(t) \ge \tau_{\min}, \quad \forall i, \\
& \tau_{I_i}(t) = 1, \quad \forall i, \\
& x_i \in \{0, 1\}, \quad \forall i.
\end{aligned}
Here, K is the maximum number of vehicles to be selected, τ min is the minimum acceptable trust threshold, and  τ I i ( t ) = 1 ensures that only vehicles without IDS alerts are considered. By solving this optimisation problem, the central server enforces a trust-based curation of participants, directly aligning selection criteria with the multi-factor trust evaluation.
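Because the only coupling constraint is the cardinality bound K, the optimisation above reduces to selecting the top-K vehicles among those passing the trust and IDS constraints. A hedged Python sketch (the vehicle data and function name are illustrative assumptions, not the authors' code):

```python
# Sketch of trust-based vehicle selection: filter on the trust threshold
# and IDS constraint, then greedily keep the K highest trust scores.

def select_vehicles(trust, ids_ok, k, tau_min):
    """Return indices of up to k vehicles maximising total trust, keeping
    only vehicles with no IDS alert (ids_ok) and trust >= tau_min."""
    eligible = [i for i, t in enumerate(trust) if ids_ok[i] and t >= tau_min]
    eligible.sort(key=lambda i: trust[i], reverse=True)  # greedy top-K is optimal here
    return eligible[:k]

trust  = [0.92, 0.40, 0.81, 0.95, 0.77]
ids_ok = [True, True, True, False, True]  # vehicle 3 triggered an IDS alert
chosen = select_vehicles(trust, ids_ok, k=2, tau_min=0.6)
assert chosen == [0, 2]  # vehicle 3 excluded despite the highest raw score
```

The example shows the IDS constraint acting as a hard filter: the highest-scoring vehicle is excluded outright, and selection proceeds over the remaining eligible set.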

2.3.3. Determining the Minimum Trust Score Threshold

The minimum trust score threshold τ min is a critical parameter that ensures the selection of vehicles with high reliability while mitigating the impact of outliers and temporary anomalies. It is dynamically adjusted based on historical trust data or learned heuristics [43]. A common approach sets
τ min = μ + σ
where μ and σ are the mean and standard deviation of historical trust scores. This approach prioritises vehicles scoring significantly above the norm, ensuring robust and adaptive participant selection. The  μ + σ threshold balances between including the most reliable vehicles and excluding outliers, reflecting the inherently dynamic nature of IoV networks. This method effectively mitigates temporary anomalies that may bias trust distributions. By deriving the minimum trust threshold from historical performance data and statistical methods, the system ensures that selection criteria remain robust and evolve with the network’s operational history.
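A minimal sketch of this μ + σ rule over a small window of historical trust scores (the history values are illustrative, and the choice of the population standard deviation is our own assumption):

```python
# Sketch of the mu + sigma threshold rule over historical trust scores.
# History values are illustrative; pstdev (population std dev) is our choice.
import statistics

history = [0.62, 0.71, 0.68, 0.74, 0.66, 0.90, 0.58, 0.73]
mu = statistics.mean(history)       # mean of historical trust scores
sigma = statistics.pstdev(history)  # their standard deviation
tau_min = mu + sigma                # only scores well above the norm qualify

qualified = [s for s in history if s >= tau_min]
assert qualified == [0.90]  # only the clear outperformer passes
```

Setting the bar one standard deviation above the mean keeps the threshold adaptive: as the network's trust distribution shifts over time, so does the cutoff.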

3. Proposed Framework (Fed-DTB)

Building on the trust-based and adaptive concepts discussed in previous sections, we introduce Fed-DTB, a Federated, Dynamic Trust-Based framework designed to enhance the security, reliability, and efficiency of federated learning in IoV networks. The core idea behind Fed-DTB is to serve as a first line of defence by integrating multiple trust factors into the vehicle selection process and FL training, ensuring that only the most trustworthy and contextually suitable vehicles influence the global model. Fed-DTB consists of three main components:
  • A trust management system that evaluates the trustworthiness of each vehicle by integrating a diverse range of factors, including network status, vehicle state, energy consumption, compliance with traffic rules, contributions to the federated learning process, user feedback, and IDS alarms.
  • Using computed trust scores, our framework implements a trust-based vehicle selection mechanism that identifies and selects the most reliable vehicles for global model updates. Vehicles that do not meet the trustworthiness criteria or trigger IDS alerts are excluded.
  • A federated learning algorithm that allows the selected set of trusted vehicles to participate in training the shared global model [44,45].
The architecture of the Fed-DTB is depicted in Figure 2. Each vehicle gathers data from its sensors and trains an LM locally without disclosing raw data, thus preserving privacy. The trust management system for each vehicle calculates the overall trust score by evaluating various factors that capture the IoV’s complexity and dynamism. This overall trust score, along with LM, is transmitted to the central server. The central server aggregates all trust scores, selects the most trustworthy vehicles for federated learning, and then integrates their contributions into the GM. The optimised GM is then shared back with all vehicles for subsequent rounds of local training and trust assessment [3]. This process allows the global model to adapt regularly to the evolving IoV scenario.

3.1. Trust Management System

The trust management system determines the trustworthiness of each vehicle by considering various parameters that represent its behaviour and performance within the IoV network. Rather than treating vehicles as static participants, the system adapts to continuously changing conditions in network reliability, vehicle states, and operational environments. Trust scores are computed locally for each vehicle using collected data and IDS alerts and are updated over time to adapt to dynamic changes in vehicle behaviour and the network environment [19].

3.1.1. Trust Score Application

As introduced in Section 2, each vehicle v i calculates an overall trust score τ i ( t ) at each time step t. The trust score integrates seven carefully selected factors, each normalised to a value between 0 and 1, where 0 indicates completely untrustworthy behaviour and 1 represents fully trustworthy conduct.
Trust Factor Selection Rationale and Operational Feasibility
The selection of our seven-factor trust evaluation framework is systematically grounded in established IoV operational requirements, security standards, and practical deployment considerations. Each factor addresses specific challenges in vehicular federated learning while utilising data readily available through existing vehicular infrastructure:
  • IDS Alert Score ( τ I i ( t ) ): Serves as the primary security indicator mandated by connected vehicle cybersecurity standards (ISO/SAE 21434) [46]. Indicates whether IDS has detected suspicious or malicious activity from v i . Available through onboard intrusion detection systems. IDS alerts are critical; if triggered, they can nullify the entire trust score, ensuring that vehicles suspected of malicious intent are excluded from FL participation.
  • Network Status Score ( τ N i ( t ) ): Essential for V2X communication reliability, directly impacting the quality and timeliness of model parameter exchanges. Reflects the stability and reliability of the vehicle’s network connection, as IoV networks are susceptible to varying signal strengths, bandwidth limitations, and intermittent connectivity. Obtained via standard V2X communication modules (DSRC/C-V2X) already deployed in connected vehicles for safety applications.
  • Vehicle Status Score ( τ VS i ( t ) ): Determines the computational reliability and sustained participation capability. Evaluates the operational health of the vehicle’s components (e.g., engine, brakes, sensors), as a well-maintained and reliable vehicle is more likely to produce accurate and trustworthy data. Accessible through OBD-II diagnostic ports mandated in all vehicles manufactured after 1996.
  • Energy Efficiency Score ( τ E i ( t ) ): Ensures sustainable long-term participation in resource-constrained vehicular environments. Measures how efficiently a vehicle manages its energy resources, as vehicles that risk running low on energy may fail to complete their local training or share timely updates, reducing their reliability. Provided by existing battery management systems and fuel monitoring systems accessible through CAN bus interfaces.
  • Compliance Score ( τ C i ( t ) ): Addresses regulatory requirements and safety assurance, increasingly important for autonomous vehicle operations. Assesses traffic rules and regulations adherence, as non-compliant vehicles may engage in risky behaviours that contaminate data quality or introduce unusual patterns into the learning process. Compliance with traffic regulations is evaluated through onboard telemetry data sourced from vehicular control systems and GPS navigation systems already present in modern vehicles.
  • Federated Learning Participation Score ( τ P i ( t ) ): Measures collaborative reliability and commitment to the federated learning process through historical participation patterns. Quantifies the vehicle’s contribution to the FL process, considering factors like the volume and quality of its local data and the magnitude of its model updates. More substantial and useful contributions raise a vehicle’s trustworthiness. Maintained by FL infrastructure logs as part of standard protocol operations.
  • User Feedback Score ( τ U i ( t ) ): Captures real-world operational assessment and behavioural patterns from vehicle occupants. Incorporates community or user-based feedback regarding the vehicle’s reliability, performance, or user satisfaction levels, thereby adding a human-centric dimension to trust evaluation. Collected through standard vehicle human–machine interfaces (touchscreens, voice commands, mobile applications) commonly available in contemporary vehicles.
This comprehensive approach ensures that trust evaluation captures both technical performance and behavioural reliability across multiple operational dimensions while ensuring immediate deployability in existing connected vehicle fleets without requiring additional hardware investments or infrastructure modifications.
The trust score computation model described in Section 2.3.1 (Equation (1)) is applied here to evaluate and select trustworthy vehicles for participation in the federated learning process.
Detailed Factor Calculations
The trust score for each factor is computed using specific metrics that measure the vehicle’s performance in that factor. These factors include the IDS alert score τ I i ( t ) , network status score τ N i ( t ) , vehicle status score τ V S i ( t ) , energy efficiency score τ E i ( t ) , compliance score τ C i ( t ) , federated learning participation score τ P i ( t ) , and user feedback score τ U i ( t )  [19,45]. In the following, we provide detailed explanations for calculating each factor score.
  • IDS Alert Score  
The IDS alert score τ I i ( t ) indicates the level of trustworthiness of the vehicle v i based on the alerts provided by the IDS after analysing in-vehicle network traffic and identifying abnormalities and possible attacks. A high IDS alert score implies that the vehicle has not exhibited any malicious behaviour or been compromised.
We define
$$I_i(t) = \sum_{k=1}^{n_I} W_k \cdot I_i^k(t) \tag{11}$$
where
  • $I_i^k(t) \in \{0, 1\}$ indicates whether an alert of type $k$ is generated for vehicle $v_i$ at time $t$.
  • $W_k$ is the weight of alert type $k$, reflecting its severity and importance.
  • $n_I$ is the number of alert types.
The IDS alert score is computed as
$$\tau_{I_i}(t) = \begin{cases} 1, & \text{if } I_i(t) \leq \epsilon_I \\ 0, & \text{otherwise} \end{cases} \tag{12}$$
where $\epsilon_I > 0$ is a predefined IDS alert score threshold.
Unlike traditional binary IDS-based exclusion mechanisms, our IDS component provides a continuous probabilistic threat score. While Equation (12) applies a threshold for exclusion in extreme cases, the underlying IDS analysis outputs a risk likelihood score, which can also be integrated into the trust computation as a continuous value. This supports more nuanced trust modulation and allows the trust framework to react proportionally to different classes and severities of detected anomalies.
Impact on Trust Score: If the vehicle triggers IDS alerts exceeding the threshold ϵ I , it is treated as compromised and excluded by setting its trust score to zero. Otherwise, its contribution is weighted based on the trust score, which indirectly includes the probabilistic confidence output of the IDS.
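As a minimal sketch of the thresholded IDS alert score in Equations (11) and (12), the snippet below computes the weighted alert sum and applies the exclusion threshold; the alert types, weights, function name, and default threshold are all hypothetical, chosen for illustration only:

```python
def ids_alert_score(alerts, weights, epsilon_ids=0.5):
    """Equation (12): return 1 if the weighted alert sum I_i(t) stays at or
    below the threshold epsilon_I, else 0 (vehicle treated as compromised).

    alerts  : list of 0/1 flags, one per alert type k (Equation (11))
    weights : severity weight W_k for each alert type (illustrative values)
    """
    weighted_sum = sum(w * a for w, a in zip(weights, alerts))
    return 1 if weighted_sum <= epsilon_ids else 0

# A single low-severity alert stays under the threshold, so trust survives;
# a high-severity alert exceeds it and nullifies the trust score.
low_severity = ids_alert_score([1, 0, 0], [0.2, 0.5, 0.9])   # stays trusted
high_severity = ids_alert_score([0, 0, 1], [0.2, 0.5, 0.9])  # excluded
```

In a deployment, the continuous weighted sum itself (rather than the binary outcome) could feed the trust computation, matching the probabilistic integration described above.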
  • Network Status Score
The network status score, τ N i ( t ) , quantifies the availability and stability of the network connections established by vehicle v i at time t. Reliable network connectivity is critical for ensuring timely communication and effective participation in the federated learning process.
Let $NS_i(t) = \{NS_i^1(t), NS_i^2(t), \ldots, NS_i^{n_{NS}}(t)\}$ denote a set of normalised network status indicators, such as signal strength, connection stability, and bandwidth usage. Each indicator $NS_i^k(t)$ is assigned a weight $W_k$, reflecting its relative importance. The network status score is computed as a weighted average:
$$\tau_{N_i}(t) = \frac{\sum_{k=1}^{n_{NS}} W_k \cdot NS_i^k(t)}{\sum_{k=1}^{n_{NS}} W_k} \tag{13}$$
Impact on Trust Score: A lower network status score indicates unreliable connectivity, leading to potential delays or failures in transmitting model updates. Such circumstances reduce the vehicle’s reliability in the federated learning process, resulting in a corresponding adjustment to its trust score. Conversely, vehicles with stable and strong network connections are more likely to be selected for participation due to their dependability.
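The network status score and the factor scores that follow (Equations (13)–(16) and (18)) all share the same normalised weighted-average form, so a single helper suffices to illustrate them. The indicator values and weights below are illustrative, not taken from the paper's implementation:

```python
def weighted_factor_score(indicators, weights):
    """Weighted average of normalised indicators in [0, 1] (the common form
    of Equations (13)-(16) and (18)); weights need not sum to 1."""
    return sum(w * x for w, x in zip(weights, indicators)) / sum(weights)

# Example for the network status score: signal strength 0.9, connection
# stability 0.6, bandwidth headroom 0.8, with assumed weights 0.5/0.3/0.2.
network_score = weighted_factor_score([0.9, 0.6, 0.8], [0.5, 0.3, 0.2])  # ~0.79
```

The same call, with different indicators and weights, would compute the vehicle status, energy efficiency, compliance, and user feedback scores.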
  • Vehicle Status Score
The vehicle status score τ V S i ( t ) assesses vehicle v i ’s operational status and health at time t. Factors such as engine health and brake system functionality impact the safety and predictability of the vehicle.
Using normalised health indicators $H_i^k(t)$ (e.g., engine health status, brake system status, sensor functionality) and their weights $W_k$, the score is
$$\tau_{VS_i}(t) = \frac{\sum_{k=1}^{n_H} W_k \cdot H_i^k(t)}{\sum_{k=1}^{n_H} W_k} \tag{14}$$
Impact on Trust Score: A vehicle with poor operational status may behave unpredictably or be at higher risk of accidents, which can compromise data quality and reliability. Therefore, the trust score decreases with lower vehicle status scores, discouraging participation of vehicles that may not contribute reliable data.
  • Energy Efficiency Score
The energy efficiency score $\tau_{E_i}(t)$ quantifies the overall energy efficiency of vehicle $v_i$ at time $t$. Energy-efficient vehicles can handle the computational tasks required for federated learning without exhausting their energy resources.
With normalised energy indicators $E_i^k(t)$ (e.g., battery level, energy consumption rate) and weights $W_k$, it is computed as
$$\tau_{E_i}(t) = \frac{\sum_{k=1}^{n_E} W_k \cdot E_i^k(t)}{\sum_{k=1}^{n_E} W_k} \tag{15}$$
Impact on Trust Score: Vehicles with low energy efficiency may not sustain the computational and communication demands of federated learning, leading to incomplete model updates or communication failures. Consequently, their trust scores are reduced to reflect their limited capability, ensuring that only vehicles capable of contributing effectively are selected.
  • Compliance Score
The compliance score $\tau_{C_i}(t)$ measures how well vehicle $v_i$ complies with traffic rules and regulations at time $t$. Compliance indicates responsible behaviour and reliability.
Using compliance ratios $C_i^k$ (e.g., adherence to speed limits, stop sign compliance) and weights $W_k$:
$$\tau_{C_i}(t) = \frac{\sum_{k=1}^{n_r} W_k \cdot C_i^k}{\sum_{k=1}^{n_r} W_k} \tag{16}$$
Impact on Trust Score: Non-compliant vehicles may engage in risky behaviours, affecting the validity of the data they collect and potentially introducing bias or anomalies into the learning process. A lower compliance score reduces the trust score, discouraging the inclusion of such vehicles.
  • Federated Learning Participation Score
The participation score for federated learning τ P i ( t ) evaluates the contribution of the vehicle v i to the federated learning process. It reflects both the quality and quantity of the vehicle’s data and model updates.
It is calculated as
$$\tau_{P_i}(t) = \frac{\|\Delta LM_i(t)\| \cdot |D_i|}{\sum_{j \in V_s} \|\Delta LM_j(t)\| \cdot |D_j|} \tag{17}$$
where
  • $\|\Delta LM_i(t)\|$ is the magnitude (norm) of the local model update of vehicle $v_i$.
  • $|D_i|$ is the size of the local dataset.
  • $V_s$ is the set of vehicles selected for federated learning at time $t$.
Impact on Trust Score: Vehicles that contribute more significantly to the model updates and have larger datasets are considered more valuable to the learning process. A higher participation score increases trust, incentivising vehicles to contribute effectively.
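A sketch of the participation score, assuming the update magnitude in Equation (17) is taken as a norm of the local model update vector; function and variable names are illustrative:

```python
def participation_scores(update_norms, dataset_sizes):
    """Equation (17): each vehicle's contribution is its update magnitude
    times its dataset size, normalised over the selected set V_s so that
    the scores sum to 1."""
    contributions = [n * d for n, d in zip(update_norms, dataset_sizes)]
    total = sum(contributions)
    return [c / total for c in contributions]

# Three selected vehicles: the first pairs a modest update with a large
# dataset, the second a large update with a smaller dataset.
scores = participation_scores([0.5, 1.0, 0.5], [200, 100, 100])
```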
  • User Feedback Score
The user feedback score τ U i ( t ) assesses feedback provided by users regarding vehicle v i . Positive feedback indicates satisfaction and reliability.
With normalised feedback indicators $F_i^k(t)$ (e.g., user ratings, satisfaction scores) and weights $W_k$:
$$\tau_{U_i}(t) = \frac{\sum_{k=1}^{n_F} W_k \cdot F_i^k(t)}{\sum_{k=1}^{n_F} W_k} \tag{18}$$
Impact on Trust Score: Vehicles receiving positive user feedback are likely to be reliable and perform well, increasing their trust scores. Conversely, negative feedback reduces the trust score, reflecting potential issues in reliability or performance.

3.1.2. Trustworthiness Threshold

To determine whether a vehicle is considered trustworthy, a threshold $\tau_{\min}$ is defined:
  • If $\tau_i(t) \geq \tau_{\min}$, the vehicle is trusted.
  • If $\tau_i(t) < \tau_{\min}$, the vehicle is untrusted.

3.1.3. Algorithm for Trust Management

Algorithm 1 formalises the trust management process, describing the steps to calculate the trust scores for each vehicle over time.
Algorithm 1 Trust Management System
Require: Set of vehicles V, time period T, time step t, factor weights W, IDS alert threshold $\epsilon_I$, minimum trust threshold $\tau_{\min}$
Ensure: Trust scores $\{\tau_i(t)\}$ for all $v_i \in V$, $t \in T$
 1: for each $t \in T$ do
 2:   for each vehicle $v_i \in V$ do
 3:     Collect data and IDS alerts from vehicle $v_i$
 4:     Compute IDS alert score $\tau_{I_i}(t)$ using Equations (11) and (12)
 5:     if $\tau_{I_i}(t) = 0$ then
 6:       Set $\tau_i(t) = 0$
 7:       Continue to next vehicle
 8:     end if
 9:     Compute network status score $\tau_{N_i}(t)$ using Equation (13)
10:     Compute vehicle status score $\tau_{VS_i}(t)$ using Equation (14)
11:     Compute energy efficiency score $\tau_{E_i}(t)$ using Equation (15)
12:     Compute compliance score $\tau_{C_i}(t)$ using Equation (16)
13:     Compute federated learning participation score $\tau_{P_i}(t)$ using Equation (17)
14:     Compute user feedback score $\tau_{U_i}(t)$ using Equation (18)
15:     Compute overall trust score $\tau_i(t)$ using Equation (1)
16:     if $\tau_i(t) < \tau_{\min}$ then
17:       Vehicle $v_i$ is considered untrusted
18:     else
19:       Vehicle $v_i$ is considered trusted
20:     end if
21:   end for
22: end for
23: return Trust scores $\{\tau_i(t)\}$

3.1.4. Trust Score Model Architecture

Figure 3 illustrates the architecture of the trust score model, showing how each factor contributes to the overall trust score.
In summary, the proposed mathematical trust score model provides a comprehensive and flexible framework for evaluating the trustworthiness of vehicles in IoV networks based on multiple criteria, as shown in Algorithm 1. By normalising scores between 0 and 1 and carefully selecting factor weights, the model can be tailored to specific application requirements, balancing security, performance, and user satisfaction.
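Since Equation (1) itself is defined in Section 2 and not reproduced here, the sketch below stands in for it with a simple weighted average of the seven normalised factor scores, combined with the IDS veto described in Section 3.1.1. The weighted-average form and all names are assumptions for illustration, not the paper's exact formulation:

```python
def overall_trust_score(factor_scores, factor_weights, ids_score):
    """Combine the seven factor scores into tau_i(t). A triggered IDS alert
    (ids_score == 0) nullifies the whole score; otherwise the factors are
    blended by their weights. Weighted-average form assumed as a stand-in
    for Equation (1)."""
    if ids_score == 0:
        return 0.0
    total = sum(factor_weights)
    return sum(w * s for w, s in zip(factor_weights, factor_scores)) / total

# Seven factor scores (network, vehicle status, energy, compliance,
# participation, feedback, plus the IDS-derived component), equal weights.
tau = overall_trust_score([0.9, 0.8, 0.7, 1.0, 0.6, 0.9, 0.8], [1] * 7, ids_score=1)
```

Tuning the weights lets the model be tailored to specific application requirements, as noted above.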

3.2. Implementation of Vehicle Selection Mechanism

The vehicle selection mechanism, described in Section 2.3.2, selects the most trustworthy vehicles using trust scores and IDS alerts to maximise the total trust score while excluding those with low scores or IDS alerts. This process solves the optimisation problem defined in Equation (5), subject to the constraints in Equations (6)–(9). This optimisation problem is solved using a threshold approach, Equation (10), as presented in Algorithm 2.
Formulating the selection as an optimisation problem ensures that the chosen subset of vehicles reflects the current environment, balancing factors such as network stability, energy efficiency, and compliance.
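The threshold-based solution of Equation (10) used by Algorithm 2 can be sketched as follows; function and parameter names are hypothetical, and the function returns the indices of the selected vehicles:

```python
def select_vehicles(trust_scores, ids_scores, k, tau_min):
    """Threshold-and-sort selection (Algorithm 2): zero out IDS-flagged
    vehicles, sort by trust score descending, then keep at most k vehicles
    whose score meets the minimum threshold tau_min."""
    effective = [0.0 if ids == 0 else t for t, ids in zip(trust_scores, ids_scores)]
    ranked = sorted(range(len(effective)), key=lambda i: effective[i], reverse=True)
    return [i for i in ranked if effective[i] >= tau_min][:k]

# Vehicle 2 has the highest raw trust but an IDS alert, so it is excluded;
# vehicle 3 falls below the threshold.
selected = select_vehicles([0.9, 0.8, 0.95, 0.4], [1, 1, 0, 1], k=2, tau_min=0.5)
```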
Algorithm 2 Vehicle Selection Mechanism
Require: Set of vehicles V, trust scores $\{\tau_i(t)\}$, IDS alert scores $\{\tau_{I_i}(t)\}$, maximum number of selected vehicles K, minimum trust score threshold $\tau_{\min}$
Ensure: Set of selected vehicles $V_s$
 1: for each vehicle $v_i \in V$ do
 2:   if $\tau_{I_i}(t) = 0$ then
 3:     Set $\tau_i(t) = 0$
 4:   end if
 5: end for
 6: Sort vehicles in descending order of $\tau_i(t)$
 7: $V_s$ = top K vehicles with $\tau_i(t) \geq \tau_{\min}$
 8: return $V_s$

3.3. Federated Learning Algorithm

The federated learning algorithm enables selected vehicles to train a global model together without compromising their local data. It builds upon the FedAvg algorithm [47] with security and efficiency improvements for IoV networks.
Algorithm 3 takes as input the set of selected vehicles V s , the current global model G M ( t ) , the local datasets { D i } , the trust scores { τ i ( t ) } at the current time step t, the learning rate η , the number of local training epochs E, and the privacy parameters ϵ and δ . It ensures that each step is performed securely and efficiently, as described below:
Algorithm 3 Federated Learning Algorithm
Require: Selected vehicles $V_s$, global model $GM(t)$, local datasets $\{D_i\}$, trust scores $\{\tau_i(t)\}$, learning rate $\eta$, number of local epochs E, privacy parameters $\epsilon$, $\delta$
Ensure: Updated global model $GM(t+1)$
 1: for each vehicle $v_i \in V_s$ do
 2:   Train local model $LM_i(t)$ = LocalTraining($GM(t)$, $D_i$, E)
 3:   Compute local model update $\Delta LM_i(t) = LM_i(t) - GM(t)$
 4:   Apply differential privacy $\Delta LM_i(t)^{DP}$ = ApplyDifferentialPrivacy($\Delta LM_i(t)$, $\epsilon$, $\delta$)
 5:   Send $\Delta LM_i(t)^{DP}$ to the central server
 6: end for
 7: Server performs secure aggregation of the differentially private updates
 8: $GM(t+1) = GM(t) + \eta \cdot$ WeightedAverage($\{\Delta LM_i(t)^{DP}\}$, $\{\tau_i(t)\}$)
 9: return $GM(t+1)$
Each selected vehicle v i uses its local data D i to train a local model L M i ( t ) , starting from the current global model G M ( t ) . The training is performed for E epochs. The local model update Δ L M i ( t ) is computed as the difference between the local model L M i ( t ) and the global model G M ( t ) . To ensure privacy, differential privacy is applied to the local model update Δ L M i ( t ) . This involves adding noise to the update using the privacy parameters ϵ and δ , resulting in Δ L M i ( t ) D P . This step masks individual contributions and ensures that the model update does not reveal sensitive information about the local data.
Each selected vehicle then sends its differentially private model update $\Delta LM_i(t)^{DP}$ to the central server. The server aggregates these updates securely to compute an overall model update, ensuring that individual updates remain private and are only combined in aggregated form. The global model $GM(t+1)$ is updated by adding the weighted average of the differentially private model updates, where the weights are the trust scores $\{\tau_i(t)\}$ of the respective vehicles. This weighted update ensures that more trustworthy vehicles have a greater influence on the global model, improving its overall quality and robustness. The updated global model $GM(t+1)$ is returned as the final output of the algorithm.
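The local differential privacy step and the trust-weighted server update (steps 4 and 8 of Algorithm 3) can be sketched as below. The clipping-plus-Gaussian-noise mechanism is a common simplification of the Gaussian mechanism; the paper's exact ($\epsilon$, $\delta$) calibration is not reproduced, and all names are illustrative:

```python
import random

def apply_differential_privacy(update, clip_norm, sigma, rng=random):
    """Clip the update's L2 norm to clip_norm, then add Gaussian noise of
    standard deviation sigma to each coordinate (simplified DP step;
    epsilon/delta-to-sigma calibration omitted)."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale + rng.gauss(0.0, sigma) for u in update]

def aggregate_global_model(global_model, dp_updates, trust_scores, eta=1.0):
    """Trust-weighted average of the DP updates applied to the global model
    (Algorithm 3, step 8): higher-trust vehicles pull the model harder."""
    total_trust = sum(trust_scores)
    avg = [
        sum(t * upd[j] for t, upd in zip(trust_scores, dp_updates)) / total_trust
        for j in range(len(global_model))
    ]
    return [g + eta * a for g, a in zip(global_model, avg)]
```

In practice the updates would be model weight tensors and the aggregation would run inside a secure-aggregation protocol; the list arithmetic above only illustrates the weighting logic.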
This approach improves security and efficiency by incorporating trust scores in the global model update, adding differential privacy to local updates, and using secure aggregation to protect individual vehicles’ data privacy [5]. It is important to note that privacy mechanisms (such as differential privacy) are applied exclusively to protect communication integrity and model confidentiality. They are not involved in the trust evaluation process. This modularity ensures that the trust computation remains unaffected by noise injection, maintaining the accuracy and interpretability of reliability assessments. The Fed-DTB offers a comprehensive solution for secure and efficient FL in IoV networks, capable of detecting and eliminating malicious or low-quality participants while allowing legitimate vehicles to contribute to the shared global model without compromising sensitive information.

4. Experimental Results and Analysis

The experimental evaluation of the proposed Fed-DTB framework aims to demonstrate its effectiveness in addressing the unique challenges of FL in IoV networks. We investigate the framework’s robustness, adaptability, and efficiency through extensive simulations under realistic and adversarial conditions. This section presents the experimental setup, dataset distribution, evaluation metrics, and analysis of the obtained results.

4.1. Experimental Setup

To evaluate the effectiveness and robustness of the proposed Fed-DTB framework, we implemented it in Google Colab, leveraging its cloud-based GPU resources to simulate a distributed IoV environment. The implementation utilised TensorFlow 2.7.0, along with the Keras API for model construction, to ensure efficient distributed training. A custom TensorFlow federated aggregator was used to represent the central server, while individual TensorFlow clients represented vehicles in the IoV environment. This setup closely mimics the dynamics of federated learning in vehicular networks.
Figure 4 illustrates the experimental setup, showing the data preparation phase, adversarial data injection into a subset of clients, and the federated learning process, including trust computation and model aggregation at the central server.

4.2. Dataset Distribution

Our experiments rely on three datasets to capture the heterogeneity, dimensionality, and adversarial conditions characteristic of real-world IoV systems.
MNIST Dataset: A well-known benchmark in machine learning [47] comprising 60,000 training images. We partitioned MNIST into clients using a non-IID scheme to reflect the uneven data distributions often encountered in IoV networks [45]. To approximate a fleet-scale workload while respecting practical memory limits, the MNIST training set is first expanded by systematic replication in proportion to the number of simulated clients; with fifty clients, this yields roughly 4.8 × 10^5 images. In raw-pixel terms (≈3.8 × 10^8 grayscale pixels), this is of the same computational order as the CIFAR-100 training set (50,000 colour images, ≈1.5 × 10^8 RGB pixels). The resulting memory footprint proved tractable on the desktop simulation workstation used in this study (single 8 GB GPU) and would also fall within the 2 GB VRAM envelope typical of an in-vehicle ECU. The enlarged MNIST set is then partitioned with a Dirichlet allocation (concentration 0.4), producing intentionally unbalanced label mixes that mirror the skew observed in operational IoV fleets. This procedure supplies both volume and heterogeneity without additional data collection overhead.
CIFAR-10 Dataset: This dataset introduces additional complexity and higher-dimensional images, simulating scenarios where vehicles collect more diverse visual data [48].
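The Dirichlet partitioning with concentration 0.4 described above can be sketched in pure Python as follows. This is a stand-in for the actual data pipeline: the function name and the gamma-based sampling of Dirichlet proportions are illustrative choices, not the paper's code:

```python
import random

def dirichlet_partition(labels, num_clients, alpha=0.4, seed=0):
    """Split sample indices across clients so that each client receives a
    Dirichlet(alpha)-distributed share of every class, producing the
    unbalanced non-IID label mixes described in the text."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # Dirichlet(alpha) proportions via normalised Gamma(alpha, 1) draws.
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gammas)
        start = 0
        for k, g in enumerate(gammas):
            if k == num_clients - 1:
                take = len(idx) - start  # last client absorbs the remainder
            else:
                take = min(int(g / total * len(idx)), len(idx) - start)
            clients[k].extend(idx[start:start + take])
            start += take
    return clients

# Ten balanced classes split unevenly across five simulated vehicles.
partitions = dirichlet_partition([i % 10 for i in range(1000)], num_clients=5)
```

Smaller `alpha` values yield more skewed per-client label mixes; `alpha = 0.4` matches the concentration used in the experiments.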
Synthetic IoV Dataset: Inspired by empirical observations and prior vehicular studies [49,50,51,52], we generated this dataset to mimic factors unique to connected vehicles, including signal strength, connection stability, vehicle health, and user feedback metrics. The generation process follows statistical models consistent with wireless communication theory, where the signal strength degrades with distance; mechanical engineering principles, as the engine health deteriorates over time; and trust scores that evolve based on operational conditions and user experience rooted in human–computer interaction and adaptive systems theory.
We performed a correlation analysis between key parameters to validate the realism and relevance of the dataset. The results, shown in Figure 5, confirm that the relationships among metrics such as connection stability, engine health, and trust score align with the expected behaviour of real-world IoV systems. Specifically, connection stability correlates with geographic proximity and traffic density, engine health impacts computational reliability for FL participation, and trust scores reflect historical performance consistency. These patterns align with established vehicular network studies [49,50,51,52]. For instance, higher connection stability strongly correlates with trust scores (Figure 6), reflecting the system's robustness under reliable communication conditions. Similarly, satisfaction feedback is closely tied to trust scores, as Figure 7 shows the importance of user-centric parameters in evaluating vehicular reliability.
By injecting controlled anomalies (e.g., increased bandwidth usage, sporadic IDS alerts), the dataset provides a systematic and controlled foundation to mirror realistic adversarial scenarios frequently reported in IoV research. For example, urban-route simulations produce more interference, while rural conditions produce stronger average signal strength, as supported by the correlation matrix. This approach enables a detailed and structured exploration of relationships and behaviours that might be difficult to isolate in real-world conditions.
While these validations confirm the dataset’s reliability and relevance for benchmarking federated learning models, it is important to acknowledge the inherent limitations of synthetic datasets. Despite being designed to reflect typical IoV environments, synthetic data may not fully capture the variability and complexity of real-world conditions. Future work will focus on validating the proposed solution using field-collected data, further demonstrating its robustness and applicability in diverse and dynamic environments.
We have established a heterogeneous federated environment by combining MNIST, CIFAR-10, and the synthetic IoV dataset. This diversity of data types and adversarial scenarios highlights Fed-DTB’s resilience and ability to maintain robust performance and secure collaboration under realistic IoV conditions.

4.3. Adversarial Data Injection and IDS Performance

To simulate adversarial behaviour, we injected false data into 20% of the clients following the approach of Sun et al. [53]. The data injection function, implemented in Python version 3.9.19, added Gaussian noise ( μ = 0 , σ 2 = 20 ) to 20% of these clients’ data and randomised their labels to simulate label-flipping attacks. Figure 8 illustrates the impact of adversarial data injection on the MNIST dataset, highlighting the differences between normal data and adversarially altered data. The IDS was pre-trained on a mix of normal MNIST data and poisoned anomalies, achieving a validation accuracy of 99.86%. These results highlight the IDS’s capability to detect subtle adversarial manipulations, even in scenarios with varying degrees of data injection. The IDS plays a critical role in maintaining the integrity of the federated learning process by identifying anomalous clients.
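The injection step can be sketched as below, assuming flattened images represented as lists of pixel values; the function name and shapes are hypothetical, while the noise parameters (μ = 0, σ² = 20) and random label flipping follow the description above:

```python
import random

def poison_client_data(images, labels, num_classes=10, sigma_sq=20.0, seed=1):
    """Adversarial data injection: add Gaussian noise (mu=0, sigma^2=20) to
    every pixel and replace each label with a random class to simulate a
    label-flipping attack."""
    rng = random.Random(seed)
    std = sigma_sq ** 0.5
    noisy = [[px + rng.gauss(0.0, std) for px in img] for img in images]
    flipped = [rng.randrange(num_classes) for _ in labels]
    return noisy, flipped

# Two tiny "images" of four pixels each, with their true labels 3 and 7.
poisoned_images, poisoned_labels = poison_client_data([[0.0] * 4, [1.0] * 4], [3, 7])
```

Applied to 20% of the clients, such poisoned data is what the pre-trained IDS must flag during federated training.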

4.4. Evaluation Metrics

To evaluate the performance of the Fed-DTB framework, we utilised several key metrics to assess model accuracy, robustness, efficiency, and adaptability. Accuracy reflects the proportion of correctly classified instances, providing an overall performance measure. Precision, recall, and F1 Score capture the model’s ability to accurately classify positive instances and balance false positives and negatives. ROC-AUC evaluates the system’s capability to distinguish between positive and negative classes. Loss measures the error during optimisation, with lower values indicating better model performance. Timing metrics were employed to measure computational efficiency, including the time required for local training, communication between clients and the central server, and trust management operations. Trust scores were tracked across rounds to monitor the framework’s ability to adapt to client behaviour dynamically and maintain reliability. Client exclusion metrics quantified the number of excluded clients in each round, highlighting the system’s robustness against adversarial behaviours and its capacity to preserve overall performance despite the presence of untrustworthy participants. Finally, we confirmed that the framework is not overly sensitive to the local-epoch count E; varying E between 1, 3, and 5 changed final accuracy by less than 0.8 percentage points and altered convergence by no more than two global rounds.
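For reference, the precision, recall, and F1 Score reported per round follow the standard definitions from confusion counts, as in this small sketch (names are illustrative):

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F1 Score from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 80 true positives, 10 false positives, 20 false negatives
precision, recall, f1 = classification_metrics(80, 10, 20)
```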

4.5. Results Analysis

In this section, we present the experimental results to evaluate the performance of the proposed Fed-DTB framework under various configurations and scenarios. The performance is analysed using key metrics. Furthermore, a comparative analysis is conducted to position the proposed framework against existing solutions in the literature, highlighting its strengths and areas for improvement.

4.5.1. Fed-DTB Performance Under Low-Complexity Conditions

The results for the configuration with 10 clients provide an in-depth analysis of the system's performance under low-complexity conditions. This setup was essential to evaluate the framework's ability to detect and exclude untrustworthy clients while maintaining robust performance. Figure 9 and Figure 10 illustrate the evolution of accuracy and loss over 10 rounds. Initially, the accuracy was moderate at 62.91%, affected by the presence of malicious clients, while the loss was high at 1.8026. Over successive rounds, as malicious and low-performance clients were identified and excluded, accuracy steadily improved, stabilising at 88.09% by round 7, while loss decreased sharply to 0.7225. These results highlight the system's rapid convergence and effectiveness in enhancing its performance through iterative rounds.
The detailed behaviour of key performance metrics, including precision, recall, and F1 Score, is depicted in Figure 11. These metrics demonstrated significant improvement after the exclusion of low-trust clients in the early rounds. By round 10, the precision reached 89.65%, recall stabilised at 88.09%, and the F1 Score achieved 88.24%, indicating the system’s ability to strike a balance between false positives and false negatives. Furthermore, the ROC-AUC metric, shown in Figure 12, exhibited consistently high values, culminating at 0.9886 in the final round, reflecting the model’s exceptional discriminatory ability between benign and malicious data sources.
The effectiveness of the trust mechanism in identifying and excluding untrustworthy clients is demonstrated in Figure 13. Four malicious clients were excluded by round 2 due to their misleading information, while two low-performance clients were excluded in rounds 3 and 5, leading to a marked improvement in the trustworthiness of the remaining clients. This exclusion process directly contributed to the observed enhancements in model performance. The corresponding computational efficiency is demonstrated in Figure 14, where the training time decreased from 32.1 s in the first round to 3.2 s in the final round, reflecting the reduced computational overhead as untrustworthy clients were excluded and the data quality improved.
Finally, the performance metrics for each round are summarised in Table 1. This table provides a detailed breakdown of metrics such as accuracy, loss, F1 Score, the number of excluded clients, and total round time, showing the stabilisation of the system’s performance after round 7. These results confirm the system’s capability to deliver robust and efficient performance in a low-complexity setup with 10 clients.

4.5.2. Fed-DTB Performance Under High-Complexity Conditions

The results from configurations with 25, 50, and 100 clients provide a clear analysis of the system's performance under high-complexity conditions. The 25-client setup tested scalability with moderate complexity, while the 50-client configuration introduced higher demands and risks from malicious clients. The 100-client evaluation further assessed performance in a large-scale, heterogeneous environment, demonstrating the system's robustness and effectiveness.
Figure 15 and Figure 16 show the accuracy and loss over 10 rounds for the 25-, 50-, and 100-client configurations. Figure 15 (Accuracy Evaluation): the x-axis represents the rounds (1–10), while the y-axis shows accuracy values from 0.6 to 0.9 (60–90%); the y-axis range has been optimised for readability, focusing on the relevant performance range rather than starting from zero. Figure 16 (Loss Evaluation): the x-axis represents the rounds (1–10), while the y-axis shows loss values from 0.6 to 2.2, capturing the complete loss trajectory from initial high values to final convergence. The client groups are distinguished by specific markers: 25 clients with an orange line (square markers), 50 clients with a green line (triangular markers), and 100 clients with a light red line ("X" markers).
The results indicate a clear progression in performance. Initially, accuracy was low, ranging from 18.3% to 25.41% (below the 60–90% range plotted in the figure), with high loss values between 2.1 and 2.2, reflecting the presence of malicious clients and noise in the dataset. As the system's trust-based exclusion mechanism operated, accuracy improved sharply, reaching values between 84.7% and 87.9% by the final round. Concurrently, loss values decreased significantly, stabilising between 0.6895 and 0.8092. These trends highlight the system's ability to adapt, exclude untrustworthy clients, and achieve consistent performance improvements across all client configurations.
Evaluation of key metrics, as shown in Figure 17, including (a) precision, (b) recall, (c) F1 Score, and (d) ROC-AUC, revealed consistent improvements across configurations with 25, 50, and 100 clients.
For the 25-client setup, the precision increased steadily, stabilising at 90.0%, while recall reached 85.2% and the F1 Score achieved 86.09%. The ROC-AUC score remained consistently high, reaching 0.9872, reflecting the reliability of the system under moderate complexity. These metrics demonstrate the model's ability to accurately classify malicious and non-malicious clients while maintaining robust performance.
In the 50-client configuration, the system handled increased complexity effectively, with precision stabilising at 91.2%, recall reaching 87.9%, and the F1 Score improving to 89.5%. The ROC-AUC score peaked at 0.9913, further highlighting the improved ability of the system to distinguish between benign and malicious clients under higher demands.
For the 100-client configuration, which posed the highest complexity due to its larger and more diverse client pool, precision stabilised at 89.8%, recall reached 84.7%, and the F1 Score achieved 87.1%. The ROC-AUC score reached 0.9859, highlighting the robustness and adaptability of the system even under challenging conditions.
The evolution of the trust score over 10 rounds, as shown in Figure 18, highlights the system's ability to steadily exclude malicious clients while maintaining robust performance. In the 100-client configuration, the system excluded 56 clients by round 10, showcasing its effectiveness in managing large-scale conditions. For the 50-client setup, 31 clients were excluded, reflecting the model's strong capacity to stabilise performance under moderate complexity. In the 25-client configuration, 16 clients were excluded by round 10, further confirming the system's ability to improve dataset quality. In all configurations, the mean trust scores of the remaining clients increased steadily, demonstrating the effectiveness of the trust-based selection strategy.
The computational efficiency of the system, as illustrated in Figure 19, demonstrates significant improvements across all configurations. In the 25-client configuration, the training time per round stabilised at 36.74 s after initial reductions, reflecting optimised training processes enabled by trust-based exclusions. The breakdown of computational components (Figure 20) shows that 96.6% of the time was allocated to computation, with minimal contributions from communication (0.1%) and overhead (3.3%), confirming the efficiency of the system’s use of resources. For the 50-client setup, the training time stabilised around 57.03 s, following an initial peak of over 200 s. The breakdown (Figure 21) indicates that computation remained the dominant factor at 95.9%, with communication and overhead accounting for only 0.1% and 3.35%, respectively. This underscores the system’s ability to handle increased client pools without sacrificing efficiency. In the 100-client configuration, the training time stabilised at approximately 95.6 s after peaking above 391 s in the early rounds. Figure 22 highlights that computation dominated at 98.4%, with overhead contributing 1.4% and communication only 0.1%. This demonstrates the system’s scalability and effective management of computational resources, even in high-complexity scenarios.
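The component percentages quoted above can be turned back into absolute times with a small helper. The figures used are the reported 25-client values; the helper function itself is only an illustrative sketch, not part of the Fed-DTB implementation.

```python
# Decompose a stabilised round time into the reported components
# (computation / communication / overhead). Input values are those
# quoted for the 25-client configuration; the helper is a sketch.

def breakdown(total_s, comp_pct, comm_pct, over_pct):
    """Return absolute seconds per component; percentages must sum to 100."""
    assert abs(comp_pct + comm_pct + over_pct - 100.0) < 1e-6
    return {"computation": total_s * comp_pct / 100.0,
            "communication": total_s * comm_pct / 100.0,
            "overhead": total_s * over_pct / 100.0}

parts = breakdown(36.74, 96.6, 0.1, 3.3)
# Computation dominates: roughly 35.5 s of the 36.74 s round.
```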
Finally, Table 2 comprehensively summarises the system’s performance metrics across all configurations (25, 50, and 100 clients) for each round. The metrics include accuracy, loss, F1 Score, precision, the number of excluded clients, and total round time. This consolidated table demonstrates the system’s ability to adapt to varying client loads, maintaining robust performance, strong classification metrics, and efficient processing times across different levels of complexity. The results validate the proposed framework’s scalability, resilience, and adaptability, effectively handling moderate to highly complex federated learning scenarios.
Table 2 also delineates the operating envelope we explored. Within each fleet size, the accuracy rises sharply in the first two rounds, once malicious clients are excluded, and then stabilises. When comparing final figures across configurations, the accuracy stays essentially flat (0.881 for ten clients, 0.872 for fifty, and 0.862 for one hundred), while the mean round-time extends from 13.97 s to 160.72 s. This exposes a clear throughput-latency trade-off; beyond about fifty vehicles, additional participants add less than one percentage point of reliability but more than double the turnaround time. The same numbers serve as a stress test. Even under the heaviest load examined (100 clients), the global model settles at 0.862 accuracy; the price is roughly 2.7 min of latency, acceptable for fleet-level analytics but restrictive for sub-second safety loops.
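The trade-off described above can be made explicit with a quick calculation over the final-round figures quoted from Table 2; the numbers are taken directly from the text, and the arithmetic is only a reading aid.

```python
# Quantify the throughput-latency trade-off using the quoted figures:
# fleet size -> (final accuracy, mean round time in seconds).
configs = {10: (0.881, 13.97), 100: (0.862, 160.72)}

acc_drop_pts = (configs[10][0] - configs[100][0]) * 100  # percentage points
time_ratio = configs[100][1] / configs[10][1]            # turnaround multiplier
latency_min = configs[100][1] / 60.0                     # minutes per round
# Scaling from 10 to 100 clients costs under 2 accuracy points but
# multiplies the per-round turnaround by more than an order of magnitude.
```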

4.5.3. Comparison with State-of-the-Art

For fairness, each baseline is quoted on its native dataset exactly as reported in the originating paper; re-training all methods on a unified dataset is left to future replication studies. Table 3 offers a comparative summary of several Federated Learning (FL) approaches, spanning diverse datasets and highlighting each method’s achieved accuracy as well as its computational or security implications. TrustBandit, for instance, demonstrates moderate accuracy (75.64% with 40% malicious clients) but suffers from elevated cost when data is highly heterogeneous. In contrast, CONTRA and Client Selection in FL obtain higher accuracies on standard benchmarks like MNIST or CIFAR-10 (up to 87.4% and 87%, respectively), yet they exhibit increased overhead and remain sensitive to adversarial behaviours. Approaches such as FSL and FedCS focus on performance improvements (e.g., 76.83% on CIFAR-FS and 79% on CIFAR-10) but have not explicitly addressed adversarial robustness or efficient scalability.
Meanwhile, RCFL balances accuracy in the 75–82% range across MNIST and CIFAR-10 but lacks rigorous security threat evaluations. Although it improves accuracy to approximately 87% on CIFAR-10, it introduces an overhead of approximately 1.3× while also raising concerns about readiness for real-world deployment. By comparison, our proposed Fed-DTB achieves 86–88% accuracy under varying client configurations (10–100 clients) and demonstrates resilient performance in the face of malicious or low-performance clients. Despite a minor overhead increase, Fed-DTB uniquely integrates robust aggregation and adaptive weighting, thereby offering stronger protection against poisoning attacks. Fed-DTB is dataset-agnostic: any researcher can retrain it on further benchmarks using the same configuration template provided in our experiment files. Detailed instructions will be released once the paper is accepted.

5. Discussion and Implications

FL within IoV networks has attracted much interest because it preserves data privacy, improves real-time decision-making, and minimises communication overhead in large-scale vehicular environments. Despite the increasing progress of decentralised intelligence, current approaches have highly restricted applicability in real-world IoV scenarios due to key deficiencies. Many modern approaches fall short when tested against varying adversarial threats, use static or single-factor trust mechanisms, rely on centralised cloud aggregation, and ignore the high heterogeneity and highly dynamic conditions within IoV. These shortcomings obstruct FL's applicability, scalability, and effectiveness in IoV networks, given their safety-critical nature and resource constraints.
A prominent shortcoming in vehicular data-sharing schemes lies in the inadequate evaluation of adversarial attacks, such as model poisoning and inference threats. Even when certain blockchain-based FL architectures are proposed, they typically overlook the substantial computational and communication overhead introduced by consensus protocols, particularly in large-scale IoV networks, where the number of vehicles can surge exponentially. Our proposed framework, Fed-DTB, recognises the high stakes of IoV security and makes adversarial resilience a central design principle. By explicitly testing the model under various poisoning scenarios and measuring trust indicators (e.g., IDS alerts, energy usage, suspicious feedback), Fed-DTB identifies malicious participants and adapts its trust thresholds accordingly. This adaptability is crucial in preventing computational blowouts: once malicious clients are flagged and excluded, the subsequent training rounds become more efficient, thereby balancing security with system overhead.
Numerous existing approaches rely on single-factor or static trust metrics, focusing solely on user feedback, IDS logs, or model performance in isolation. Such static solutions struggle to cope with dynamic IoV conditions, where network latency, node mobility, and adversarial strategies may evolve rapidly. By contrast, Fed-DTB introduces multi-factor trust evaluation, combining internal and external indicators like vehicle mobility, compliance with traffic rules, FL participation history, and real-time network stability. Moreover, the adaptive weighting mechanism within Fed-DTB recalibrates the priority of each factor based on the prevailing vehicular context. For instance, if an IoV environment is under active attack, IDS alerts are given higher weight, whereas scenarios with fluctuating connectivity might prioritise energy or network reliability metrics. This adaptability significantly enhances the framework’s robustness to ever-shifting adversarial techniques and heterogeneous data conditions, bridging a core limitation of previously rigid trust approaches.
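The multi-factor evaluation with context-adaptive weighting described above can be sketched as a weighted sum over the trust factors named in the paper. The factor names follow the text, but the weight values and the "boost the IDS factor under attack" renormalisation rule are illustrative assumptions, not Fed-DTB's calibrated parameters.

```python
# Sketch of multi-factor trust scoring with context-adaptive weights.
# Factor names follow the paper; weight values and the reweighting rule
# are illustrative assumptions.

FACTORS = ["ids_alerts", "energy_usage", "mobility", "traffic_compliance",
           "fl_history", "network_stability", "feedback"]

def trust_score(factors, weights):
    """Weighted sum of normalised factor readings; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[f] * factors[f] for f in FACTORS)

def adapt_weights(base, under_attack):
    """Re-weight factors for context: emphasise IDS evidence during attacks."""
    w = dict(base)
    if under_attack:
        w["ids_alerts"] *= 3.0            # IDS alerts dominate under attack
    total = sum(w.values())
    return {f: v / total for f, v in w.items()}  # renormalise to 1

base = {f: 1 / len(FACTORS) for f in FACTORS}
obs = {"ids_alerts": 0.2, "energy_usage": 0.9, "mobility": 0.8,
       "traffic_compliance": 0.95, "fl_history": 0.85,
       "network_stability": 0.7, "feedback": 0.9}

calm = trust_score(obs, adapt_weights(base, under_attack=False))
attack = trust_score(obs, adapt_weights(base, under_attack=True))
# Under attack, the low IDS reading drags the trust score down further.
```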
Although privacy-preserving FL frameworks generally rely on secure aggregation or homomorphic encryption, many still involve centralised cloud servers that induce latency bottlenecks, especially problematic in fast-moving vehicular scenarios. Prolonged or unpredictable round times can delay model updates, compromising real-time decision-making in safety-critical use cases such as collision avoidance and traffic flow optimisations. Fed-DTB alleviates these bottlenecks by integrating client selection with an efficient, trust-driven coordination mechanism. Instead of indiscriminately waiting for all nodes to communicate with a centralised aggregator, the system focuses on high-trust vehicles that are both reliable and timely. This approach reduces round communication overhead and permits partial but higher-quality updates, mitigating latency concerns in an environment where connectivity may be transient or unstable.
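Focusing on high-trust, timely vehicles rather than waiting on every node can be sketched as a top-k selection over trust and responsiveness. The ranking rule below (trust first, latency as tie-breaker) is an illustrative assumption, not the paper's exact optimisation objective.

```python
# Illustrative top-k selection of high-trust, timely vehicles; the
# ranking rule is an assumption, not Fed-DTB's optimisation objective.

def select_clients(candidates, k):
    """candidates: list of (client_id, trust, latency_s). Prefer high
    trust, break ties by low latency; return the k best client ids."""
    ranked = sorted(candidates, key=lambda c: (-c[1], c[2]))
    return [cid for cid, _, _ in ranked[:k]]

fleet = [("v1", 0.92, 0.8), ("v2", 0.88, 0.3), ("v3", 0.40, 0.2),
         ("v4", 0.95, 2.5), ("v5", 0.91, 0.5)]
chosen = select_clients(fleet, k=3)
# Low-trust v3 never makes the cut despite being the fastest responder.
```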
Another barrier to IoV-centric FL research is the lack of public datasets representing realistic adversarial behaviours, diverse network topologies, and resource constraints. Many published benchmarks either simulate homogeneous traffic or omit malicious client scenarios altogether, rendering them less applicable to real-world vehicular contexts. Recognising this, Fed-DTB includes a custom synthetic dataset that captures adversarial attacks, dynamic vehicular states, and energy fluctuations. By incorporating these diverse conditions, the experiments more faithfully replicate the environment of a smart city or complex ITS deployment. Moreover, the results presented (including performance metrics in different client scale configurations) offer valuable scalability insights that many existing FL frameworks do not provide. Such analyses underscore the model’s capacity to adapt across small- to medium-scale IoV networks and pave the way for larger-scale adoption.
Resource allocation and scheduling in IoV-FL often lean on advanced algorithms such as DRL. However, untested learning latency and incomplete privacy simulations erode confidence in their operational viability. More critically, these approaches often assume fixed adversarial conditions or ignore the overhead of trust evaluations. In contrast, Fed-DTB embraces context-aware resource management: the trust-driven client selection process naturally accommodates the dynamic availability of computing and communication resources. By prioritising resource-limited but high-trust vehicles, the framework ensures minimal training disruption and fosters more reliable convergence in scenarios with shifting vehicular populations or intermittent power constraints. Moreover, real-time performance is bolstered by the early exclusion of untrustworthy nodes; rather than expending cycles on suspicious updates, Fed-DTB swiftly channels resources towards genuinely contributing clients. This strategy addresses a core shortcoming in existing DRL-based resource schedulers, where slow or inconclusive trust checks can hamper real-time responsiveness.
Finally, verifying the convergence behaviour of FL models in non-IID vehicular networks remains an unresolved question in much of the literature. The ephemeral and mobile nature of vehicles, combined with adversarial data injection, raises doubts about whether global models can stabilise within practical round limits. The Fed-DTB multi-factor trust approach tackles this by filtering out outlier contributions early in each round, preserving the integrity of global updates. Empirical findings indicate that convergence can be achieved consistently despite widely varying data distributions and malicious interference.
The Fed-DTB remains explicable because every trust decision is the weighted sum of seven observable factors, and those weights are logged in each round. Operators can trace, for instance, how the IDS weight dominates during an attack and recedes once the threat is cleared. Preliminary counts of client selection in ten random seeds yield a coefficient of variation below 0.25, indicating that no benign subgroup is systematically excluded. These properties suggest that the framework offers transparent reasoning while treating participants evenhandedly.
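The fairness check quoted above uses the coefficient of variation (standard deviation divided by mean) of per-client selection counts across seeds. The counts below are synthetic and only illustrate the computation; the metric mirrors the criterion reported in the text.

```python
# Coefficient of variation of selection counts across random seeds;
# counts are synthetic, only the metric mirrors the paper's check.
import statistics

def coefficient_of_variation(counts):
    """Population std / mean of per-client selection counts."""
    return statistics.pstdev(counts) / statistics.mean(counts)

# How often each benign client was selected across ten random seeds.
selection_counts = [8, 9, 7, 10, 8, 9, 8, 7, 9, 10]
cv = coefficient_of_variation(selection_counts)
# A CV below 0.25 indicates no benign subgroup is systematically excluded.
```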
By systematically addressing the core limitations of current FL solutions for IoV (adversarial threats, static trust mechanisms, scalability bottlenecks, and the lack of realistic vehicular datasets), Fed-DTB presents a holistic design that moves the field towards more robust, context-aware deployments. Its adversarial resilience stems from active detection and the rapid exclusion of poisoned clients via multi-factor trust checks, ensuring compromised updates do not linger in the global model. Meanwhile, the adaptive trust mechanism employs a context-aware weighting strategy that dynamically reconfigures the focus of trust factors (e.g., IDS alerts, energy usage, or network stability) in response to evolving vehicular conditions. These improvements not only enhance efficiency and scalability by allocating resources to reliable, high-quality updates and reducing overall latency but also maintain realistic testing scenarios through a custom dataset that simulates the adversarial and heterogeneous states typical of large-scale IoV environments. Ultimately, Fed-DTB stands as a significant step forward in deploying robust federated intelligence, enabling intelligent transportation systems to manage data integrity, performance, and security in tandem, even under dynamic and potentially adversarial conditions.

6. Conclusions

This work presents Fed-DTB, an adaptive federated learning framework that tackles challenges in IoV ecosystems, such as adversarial interference, inflexible trust architectures, and dependencies on centralised aggregation. Unlike prior approaches, Fed-DTB employs a dynamic trust evaluation mechanism that holistically analyses vehicular parameters (including node reliability, data consistency, and behavioural anomalies) to enable context-sensitive client selection. Experimental evaluations confirm the framework's capacity to sustain high model accuracy while filtering out malicious or unreliable participants, thereby neutralising poisoning threats and performance bottlenecks. To bridge the gap between simulation and real-world conditions, we generated an IoV dataset replicating urban mobility patterns, adversarial behaviours, and network heterogeneity. Tests across diverse scenarios reveal Fed-DTB's scalability, achieving stable convergence even in large-scale deployments. While parameter calibration for edge cases and energy-efficient adaptations necessitate deeper investigation, the framework advances secure, decentralised learning for ITS. However, two limitations qualify these gains. First, the evaluation relied on synthetic traces and a single poisoning pattern; confirming robustness on real vehicular logs and against broader adversary taxonomies (e.g., back-door or Sybil attacks) is an essential next step. Second, training latency rises noticeably as the fleet scales, which is tolerable for fleet-level analytics but unsuitable for sub-second safety loops; future work will explore roadside or hierarchical aggregation to restore real-time performance. Addressing these points will strengthen the pathway from an experimental prototype to deployable privacy-preserving intelligence for smart city transport networks.

Author Contributions

Conceptualisation, A.A., S.I. and I.G.; methodology, A.A.; introduction, A.A.; results, A.A.; discussion, A.A.; formal analysis, A.A.; data curation, A.A.; conclusions, A.A.; writing—original draft preparation, A.A.; project supervision, research design, review, feedback, revision, and editing, S.I. and I.G.; supervision, S.I. and I.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The MNIST and CIFAR-10 datasets used in this study are publicly available and can be accessed via the following links: http://yann.lecun.com/exdb/mnist/ (accessed on 27 March 2025) and https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 27 March 2025).

Acknowledgments

The authors would like to thank Zaraan Alsaadi and Hamed Allhibi for their valuable discussions and support during this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoV: Internet of Vehicles
IoT: Internet of Things
AV: Autonomous Vehicle
RL: Reinforcement Learning
DRL: Deep Reinforcement Learning
FL: Federated Learning
Fed-DTB: Federated Learning with Dynamic Trust-Based mechanism
IDS: Intrusion Detection System
GM: Global Model
LM: Local Model
CAN: Controller Area Network
FedAvg: Federated Averaging
MNIST: Modified National Institute of Standards and Technology
GPU: Graphics Processing Unit
DSRC: Dedicated Short-Range Communications
AHP: Analytical Hierarchy Process
DP: Differential Privacy
ReLU: Rectified Linear Unit
Non-IID: Non-Independent and Identically Distributed

References

  1. Zhou, J.; Zhang, S.; Lu, Q.; Dai, W.; Chen, M.; Liu, X.; Pirttikangas, S.; Shi, Y.; Zhang, W.; Herrera-Viedma, E. A survey on federated learning and its applications for accelerating industrial internet of things. arXiv 2021, arXiv:2104.10501. [Google Scholar] [CrossRef]
  2. Kiran, B.R.; Sobh, I.; Talpaert, V.; Mannion, P.; Sallab, A.A.A.; Yogamani, S.; Pérez, P. Deep Reinforcement Learning for Autonomous Driving: A Survey. arXiv 2021, arXiv:2002.00444. [Google Scholar] [CrossRef]
  3. Qi, J. Federated reinforcement learning: Techniques, applications, and open challenges. Intell. Robot. 2021, 1, 18–57. [Google Scholar] [CrossRef]
  4. Malik, S.; Khan, M.A.; El-Sayed, H. Collaborative Autonomous Driving—A Survey of Solution Approaches and Future Challenges. Sensors 2021, 21, 3783. [Google Scholar] [CrossRef] [PubMed]
  5. Li, Q.; Wen, Z.; Wu, Z.; Hu, S.; Wang, N.; Li, Y.; Liu, X.; He, B. A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection. IEEE Trans. Knowl. Data Eng. 2023, 35, 3347–3366. [Google Scholar] [CrossRef]
  6. Richter, L.; Lenk, S.; Bretschneider, P. Advancing Electric Load Forecasting: Leveraging Federated Learning for Distributed, Non-Stationary, and Discontinuous Time Series. Smart Cities 2024, 7, 2065–2093. [Google Scholar] [CrossRef]
  7. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. arXiv 2019, arXiv:1902.04885. [Google Scholar] [CrossRef]
  8. Khraisat, A.; Alazab, A. A critical review of intrusion detection systems in the internet of things: Techniques, deployment strategy, validation strategy, attacks, public datasets and challenges. Cybersecurity 2021, 4, 18. [Google Scholar] [CrossRef]
  9. Lo, S.K.; Lu, Q.; Wang, C.; Paik, H.-Y.; Zhu, L. A Systematic Literature Review on Federated Machine Learning: From a Software Engineering Perspective. ACM Comput. Surv. 2021, 54, 1–39. [Google Scholar] [CrossRef]
  10. Cui, Y.; Li, H.; Zhang, D.; Zhu, A.; Li, Y.; Qiang, H. Multiagent Reinforcement Learning-Based Cooperative Multitype Task Offloading Strategy for Internet of Vehicles in B5G/6G Network. IEEE Internet Things J. 2023, 10, 12248–12260. [Google Scholar] [CrossRef]
  11. Ang, S.; Ho, M.; Huy, S.; Janarthanan, M. Utilizing IDS and IPS to Improve Cybersecurity Monitoring Process. J. Cyber Secur. Risk Audit. 2025, 2025, 77–88. [Google Scholar] [CrossRef]
  12. Chai, H.; Leng, S.; Chen, Y.; Zhang, K. A hierarchical blockchain-enabled federated learning algorithm for knowledge sharing in Internet of Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3975–3986. [Google Scholar] [CrossRef]
  13. Wang, N.; Yang, W.; Wang, X.; Wu, L.; Guan, Z.; Du, X.; Guizani, M. A blockchain based privacy-preserving federated learning scheme for Internet of Vehicles. Digit. Commun. Netw. 2024, 10, 126–134. [Google Scholar] [CrossRef]
  14. Chai, H.; Leng, S.; Zhang, K.; Mao, S. Proof-of-Reputation Based-Consortium Blockchain for Trust Resource Sharing in Internet of Vehicles. IEEE Access 2019, 7, 175744–175757. [Google Scholar] [CrossRef]
  15. Qi, J.-J.; Li, Z.-Z. Managing Trust for Secure Active Networks. In Proceedings of the Multi-Agent Systems and Applications IV, Budapest, Hungary, 15–17 September 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 628–631. [Google Scholar] [CrossRef]
  16. Aiche, A.; Tardif, P.-M.; Erritali, M. Modeling Trust in IoT Systems for Drinking-Water Management. Future Internet 2024, 16, 273. [Google Scholar] [CrossRef]
  17. Boakye-Boateng, K.; Ghorbani, A.A.; Lashkari, A.H. Implementation of a Trust-Based Framework for Substation Defense in the Smart Grid. Smart Cities 2024, 7, 99–140. [Google Scholar] [CrossRef]
  18. Rjoub, G.; Wahab, O.A.; Bentahar, J.; Cohen, R.; Bataineh, A.S. Trust-Augmented Deep Reinforcement Learning for Federated Learning Client Selection. Inf. Syst. Front. 2022, 26, 1261–1278. [Google Scholar] [CrossRef]
  19. Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–7. [Google Scholar]
  20. Mazloomi, F.; Heydari, S.S.; El-Khatib, K. Trust-based Knowledge Sharing Among Federated Learning Servers in Vehicular Edge Computing. In Proceedings of the Int’l ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications, Montreal, QC, Canada, 30 October–3 November 2023; pp. 9–15. [Google Scholar] [CrossRef]
  21. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated learning for internet of things: A comprehensive survey. IEEE Commun. Surv. Tutor. 2021, 23, 1622–1658. [Google Scholar] [CrossRef]
  22. Cao, J.; Zhang, K.; Wu, F.; Leng, S. Learning cooperation schemes for mobile edge computing empowered Internet of Vehicles. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Republic of Korea, 25–28 May 2020; pp. 1–6. [Google Scholar]
  23. Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Federated learning for data privacy preservation in vehicular cyber-physical systems. IEEE Netw. 2020, 34, 50–56. [Google Scholar] [CrossRef]
  24. Zhao, Y.; Zhao, J.; Yang, M.; Wang, T.; Wang, N.; Lyu, L.; Lam, K.Y. Local differential privacy-based federated learning for internet of things. IEEE Internet Things J. 2020, 8, 8836–8853. [Google Scholar] [CrossRef]
  25. Dwork, C. Differential Privacy. In Automata, Languages and Programming; Bugliesi, M., Preneel, B., Sassone, V., Wegener, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–12. [Google Scholar] [CrossRef]
  26. Zhang, X.; Peng, M.; Yan, S.; Sun, Y. Deep-reinforcement-learning based mode selection and resource allocation for cellular V2X communications. IEEE Internet Things J. 2020, 7, 6380–6391. [Google Scholar] [CrossRef]
  27. Nguyen, D.C.; Pathirana, P.N.; Ding, M.; Seneviratne, A. Privacy-preserved task offloading in mobile blockchain with deep reinforcement learning. IEEE Trans. Netw. Serv. Manag. 2019, 68, 8050–8062. [Google Scholar]
  28. Bai, J.; Dong, H. Federated Learning-driven Trust Prediction for Mobile Edge Computing-based IoT Systems. In Proceedings of the 2023 IEEE International Conference on Web Services (ICWS), Chicago, IL, USA, 2–8 July 2023; pp. 131–137. [Google Scholar]
  29. Albaseer, A.; Ciftler, B.S.; Abdallah, M.; Al-Fuqaha, A. Exploiting unlabeled data in smart cities using federated edge learning. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020; pp. 1666–1671. [Google Scholar]
  30. Jøsang, A.; Ismail, R.; Boyd, C. A survey of trust and reputation systems for online service provision. Decis. Support Syst. 2007, 43, 618–644. [Google Scholar] [CrossRef]
  31. Alshuaibi, A.; Almaayah, M.; Ali, A. Machine Learning for Cybersecurity Issues: A systematic Review. J. Cyber Secur. Risk Audit. 2025, 2025, 36–46. [Google Scholar] [CrossRef]
  32. Zhao, P.; Cao, Z.; Jiang, J.; Gao, F. Practical Private Aggregation in Federated Learning Against Inference Attack. IEEE Internet Things J. 2023, 10, 318–329. [Google Scholar] [CrossRef]
  33. Zhao, P.; Jiang, J.; Zhang, G. FedSuper: A Byzantine-Robust Federated Learning Under Supervision. ACM Trans. Sens. Netw. 2024, 20, 36. [Google Scholar] [CrossRef]
  34. Lippi, G.; Aljawarneh, M.; Al-Na’amneh, Q.; Hazaymih, R.; Dhomeja, L.D. Security and Privacy Challenges and Solutions in Autonomous Driving Systems: A Comprehensive Review. J. Cyber Secur. Risk Audit. 2025, 2025, 23–41. [Google Scholar] [CrossRef]
  35. Alruwaili, A.; Islam, S.M.N.; Gondal, I. Cybersecurity in Robotic Autonomous Vehicles: Machine Learning Applications to Detect Cyber Attacks, 1st ed.; CRC Press: Boca Raton, FL, USA, 2025. [Google Scholar] [CrossRef]
  36. Kenney, J.B. Dedicated Short-Range Communications (DSRC) Standards in the United States. Proc. IEEE 2011, 99, 1162–1182. [Google Scholar] [CrossRef]
  37. Alazab, M.; Rm, S.P.; Parimala, M.; Reddy, P.; Gadekallu, T.R.; Pham, Q.-V. Federated learning for cybersecurity: Concepts, challenges and future directions. IEEE Trans. Ind. Inform. 2021, 18, 3501–3509. [Google Scholar] [CrossRef]
  38. Alotaibi, E.; Sulaiman, R.B.; Almaiah, M. Assessment of cybersecurity threats and defense mechanisms in wireless sensor networks. J. Cyber Secur. Risk Audit. 2025, 2025, 47–59. [Google Scholar] [CrossRef]
  39. Almuqren, A.A. Cybersecurity threats, countermeasures and mitigation techniques on the IoT: Future research directions. J. Cyber Secur. Risk Audit. 2025, 1, 1–11. [Google Scholar] [CrossRef]
  40. Gmiden, M.; Gmiden, M.H.; Trabelsi, H. An intrusion detection method for securing in-vehicle CAN bus. In Proceedings of the 2016 17th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Sousse, Tunisia, 19–21 December 2016; pp. 176–180. [Google Scholar]
  41. Lokman, S.-F.; Othman, A.T.; Abu-Bakar, M.-H. Intrusion detection system for automotive Controller Area Network (CAN) bus system: A review. EURASIP J. Wirel. Commun. Netw. 2019, 2019, 184. [Google Scholar] [CrossRef]
  42. Daemen, J.; Rijmen, V. AES Proposal: Rijndael. 1999. Available online: https://www.cs.miami.edu/home/burt/learning/Csc688.012/rijndael/rijndael_doc_V2.pdf (accessed on 20 July 2024).
  43. N, J.; Patil, R. A Multi-tier accredit based security for trustworthiness in VANET’s using broadcasting mechanism. In Proceedings of the 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Trichirappalli, India, 5–7 April 2023; pp. 1–7. [Google Scholar] [CrossRef]
  44. Chen, M.; Cui, S. Communication Efficient Federated Learning for Wireless Networks. In Wireless Networks; Springer Nature: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  45. Yang, Q.; Liu, Y.; Cheng, Y.; Kang, Y.; Chen, T.; Yu, H. Federated Learning. In Synthesis Lectures on Artificial Intelligence and Machine Learning; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
  46. ISO/SAE 21434:2021; Road Vehicles—Cybersecurity Engineering. Beyond Security. ISO: Geneva, Switzerland, 2021. Available online: https://www.iso.org/standard/70918.html (accessed on 11 May 2025).
  47. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (PMLR), Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. Available online: https://proceedings.mlr.press/v54/mcmahan17a.html (accessed on 12 September 2023).
  48. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf (accessed on 11 May 2025).
  49. Kong, X.; Chen, Q.; Hou, M.; Rahim, A.; Ma, K.; Xia, F. RMGen: A Tri-Layer Vehicular Trajectory Data Generation Model Exploring Urban Region Division and Mobility Pattern. IEEE Trans. Veh. Technol. 2022, 71, 9225–9238. [Google Scholar] [CrossRef]
  50. Wang, Y.; Mahmood, A.; Sabri, M.F.M.; Zen, H. TM–IoV: A First-of-Its-Kind Multilabeled Trust Parameter Dataset for Evaluating Trust in the Internet of Vehicles. Data 2024, 9, 103. [Google Scholar] [CrossRef]
  51. Bajracharya, C. Performance Evaluation for Secure Communications in Mobile Internet of Vehicles with Joint Reactive Jamming and Eavesdropping Attacks. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22563–22570. [Google Scholar] [CrossRef]
  52. Wan, J.; Liu, J.; Shao, Z.; Vasilakos, A.V.; Imran, M.; Zhou, K. Mobile Crowd Sensing for Traffic Prediction in Internet of Vehicles. Sensors 2016, 16, 88. [Google Scholar] [CrossRef] [PubMed]
  53. Sun, G.; Cong, Y.; Dong, J.; Wang, Q.; Lyu, L.; Liu, J. Data poisoning attacks on federated machine learning. IEEE Internet Things J. 2021, 9, 11365–11375. [Google Scholar] [CrossRef]
  54. Deressa, B.; Hasan, M.A. TrustBandit: Optimizing Client Selection for Robust Federated Learning Against Poisoning Attacks. In Proceedings of the IEEE INFOCOM 2024—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Vancouver, BC, Canada, 20 May 2024; pp. 1–8. [Google Scholar]
  55. Awan, S.; Luo, B.; Li, F. Contra: Defending against poisoning attacks in federated learning. In Proceedings of the Computer Security–ESORICS 2021: 26th European Symposium on Research in Computer Security, Darmstadt, Germany, 4–8 October 2021; Springer International Publishing: Berlin/Heidelberg, Germany, 2021. Part I 26. pp. 455–475. [Google Scholar]
  56. Rizve, M.N.; Khan, S.; Khan, F.S.; Shah, M. Exploring complementary strengths of invariant and equivariant representations for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10836–10846. [Google Scholar]
  57. Liu, Y.; Chang, S.; Liu, Y. FedCS: Communication-Efficient Federated Learning with Compressive Sensing. In Proceedings of the 2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS), Nanjing, China, 10–12 January 2023; pp. 17–24. [Google Scholar] [CrossRef]
  58. Hui, Y.; Hu, J.; Cheng, N.; Zhao, G.; Chen, R.; Luan, T.H.; Aldubaikhy, K. RCFL: Redundancy-Aware Collaborative Federated Learning in Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2023, 25, 5539–5553. [Google Scholar] [CrossRef]
  59. Bai, Y.; Fan, M. A method to improve the privacy and security for federated learning. In Proceedings of the 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS), Chengdu, China, 23–26 April 2021; pp. 704–708. [Google Scholar]
Figure 1. IoV architecture.
Figure 2. Fed-DTB framework architecture.
Figure 3. Trust score model architecture.
Figure 4. Experimental setup and data flow.
Figure 5. Correlation matrix illustrating relationships among vehicle parameters.
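A correlation matrix such as the one in Figure 5 is typically computed as pairwise Pearson coefficients over the vehicle records. The sketch below illustrates this on synthetic data; the parameter names (speed, connection stability, satisfaction feedback, trust score) are placeholders inspired by the figures, not the paper's exact fields or generating model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of synthetic vehicle records (illustrative)

# Placeholder vehicle parameters; the linear mixing below is an assumption
# made only so the example exhibits the positive correlations seen in the figures.
speed = rng.uniform(0, 120, n)
connection_stability = rng.uniform(0, 1, n)
satisfaction_feedback = 0.7 * connection_stability + 0.3 * rng.uniform(0, 1, n)
trust_score = 0.5 * connection_stability + 0.4 * satisfaction_feedback \
    + 0.1 * rng.uniform(0, 1, n)

# Rows are variables, columns are observations, as np.corrcoef expects.
data = np.vstack([speed, connection_stability, satisfaction_feedback, trust_score])
corr = np.corrcoef(data)  # 4x4 Pearson correlation matrix
print(np.round(corr, 2))
```

With this construction, connection stability and satisfaction feedback correlate strongly with the trust score while speed is roughly independent, mirroring the relationships Figures 6 and 7 visualise.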
Figure 6. Correlation between connection stability and trust score.
Figure 7. The relationship between satisfaction feedback and trust score.
Figure 8. Data before and after injection.
Figure 9. Accuracy evaluation.
Figure 10. Loss evaluation.
Figure 11. Precision, recall, and F1 Score performance.
Figure 12. ROC-AUC evaluation.
Figure 13. Trust mechanism effectiveness.
Figure 14. Training time evaluation.
Figure 15. Accuracy evaluation for configurations of 25, 50, and 100 clients.
Figure 16. Loss evaluation for configurations of 25, 50, and 100 clients.
Figure 17. Performance metrics across different number of clients: (a) precision, (b) recall, (c) F1 Score, and (d) ROC-AUC.
Figure 18. Trust mechanism effectiveness for configurations of 25, 50, and 100 clients.
Figure 19. The training time per round for configurations of 25, 50, and 100 clients.
Figure 20. The average time distribution for the 25-client configuration.
Figure 21. The average time distribution for the 50-client configuration.
Figure 22. The average time distribution for the 100-client configuration.
Table 1. Summary statistics for 10 clients.

Metric | Value
Final Accuracy | 0.8809
Final Loss | 0.7225
Final Precision | 0.8965
Final Recall | 0.8809
Final F1 Score | 0.8824
Final ROC-AUC | 0.9886
Total Excluded Clients | 6
Average Training Time | 3.2 s
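The classification metrics reported in Tables 1 and 2 follow their standard definitions. As a minimal sketch, the snippet below computes accuracy, precision, recall, and F1 from toy binary labels; the data are illustrative only, not the paper's IoV experiments, and the paper's multiclass results would additionally require an averaging scheme (e.g. weighted) and probability scores for ROC-AUC.

```python
import numpy as np

# Toy binary ground truth and predictions (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# Confusion-matrix counts for the positive class.
tp = np.sum((y_pred == 1) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = np.mean(y_pred == y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# Here accuracy = 0.8 and precision = recall = f1 = 5/6.
print(accuracy, precision, recall, f1)
```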
Table 2. Summary statistics for different client configurations.

Metric | 25 Clients | 50 Clients | 100 Clients
Final Accuracy | 0.8603 | 0.8724 | 0.8616
Final Loss | 0.7602 | 0.6820 | 0.7183
Final Precision | 0.9037 | 0.9070 | 0.9063
Final Recall | 0.8603 | 0.8724 | 0.8616
Final F1 | 0.8826 | 0.8782 | 0.8777
Final ROC-AUC | 0.9899 | 0.9992 | 0.9975
Total Excluded Clients | 16 | 31 | 56
Average Training Time (s) | 39.5981 | 66.1915 | 160.7224
Table 3. Comparison of different FL methods.

Method | Dataset(s) Used | Accuracy (%) | Efficiency/Observations
TrustBandit [54] | Fashion-MNIST, MNIST | 75.64% (40% malicious), 70.97% (50% malicious) | Computational cost increases; requires a small "root" dataset; struggles with extreme data heterogeneity.
CONTRA [55] | MNIST, CIFAR-10, LOAN | 87.4% (MNIST), 82% (CIFAR-10), 83.2% (LOAN) | Computational overhead ∼1.60×; sensitive to adversarial adaptation strategies.
Client Selection in FL [19] | MNIST, KMNIST, FEMNIST | 87% (MNIST), 65% (KMNIST), 53% (FEMNIST) | Increases overhead; may exclude useful clients due to strict criteria; no explicit security focus.
FSL [56] | miniImageNet, CIFAR-FS, FC100 | 66.82% (miniImageNet), 76.83% (CIFAR-FS), 47.38% (FC100) | Multi-head distillation improves performance; model complexity increases training time; not directly evaluated for security threats.
FedCS [57] | CIFAR-10 | 79% (CIFAR-10) | Increases training time by 33.3 min; not evaluated for adversarial robustness.
RCFL [58] | MNIST, CIFAR-10 | 82% (MNIST), 75% (CIFAR-10) | Raises communication overhead; lacks explicit security threat evaluations.
A Method to Improve Privacy and Security for FL [59] | CIFAR-10 | 87% (CIFAR-10) | ∼1.3× computational overhead; scalability issues; slower convergence; lacks real-world security testing.
Our Model Fed-DTB | MNIST, CIFAR-10, Synthetic IoV | 88% (10 Clients), 86% (25 Clients), 88% (50 Clients), 86% (100 Clients) | Robust aggregation; adaptive weighting; minor overhead increase; high accuracy; robust to poisoning (excludes 100% of malicious/low-performance clients).
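The last row of Table 3 credits Fed-DTB's poisoning robustness to trust-based client exclusion combined with trust-weighted aggregation. The sketch below illustrates the general idea on toy parameter vectors: drop clients below a trust threshold, then average the survivors weighted by trust. The fixed threshold and the specific weighting are assumptions for illustration; the paper's mechanism uses seven contextual factors with adaptive weights rather than this simplified rule.

```python
import numpy as np

THRESHOLD = 0.5  # illustrative fixed trust cut-off; Fed-DTB adapts this dynamically


def trust_weighted_aggregate(updates, trust_scores, threshold=THRESHOLD):
    """Aggregate client updates, excluding clients whose trust score is
    below `threshold` and weighting the remaining updates by trust.

    `updates` is a list of 1-D parameter vectors; `trust_scores` is a list
    of per-client floats in [0, 1]. A simplified sketch, not the paper's rule.
    """
    kept = [i for i, t in enumerate(trust_scores) if t >= threshold]
    if not kept:
        raise ValueError("no trusted clients this round")
    weights = np.array([trust_scores[i] for i in kept], dtype=float)
    weights /= weights.sum()  # normalise trust into aggregation weights
    stacked = np.stack([updates[i] for i in kept])
    return weights @ stacked, kept


# Example: client 2 submits a scaled (poisoned) update and carries low trust,
# so it is excluded and cannot skew the aggregate.
updates = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([50.0, -50.0])]
trust = [0.9, 0.7, 0.1]
agg, kept = trust_weighted_aggregate(updates, trust)
print(kept, agg)  # only clients 0 and 1 survive
```

Because the poisoned client is filtered out before averaging, the aggregate stays close to the honest updates, which is the behaviour the "excludes 100% of malicious/low-performance clients" observation summarises.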