Article

TrustFed-CTI: A Trust-Aware Federated Learning Framework for Privacy-Preserving Cyber Threat Intelligence Sharing Across Distributed Organizations

Department of Computer Sciences, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
Future Internet 2025, 17(11), 512; https://doi.org/10.3390/fi17110512
Submission received: 28 September 2025 / Revised: 30 October 2025 / Accepted: 3 November 2025 / Published: 10 November 2025
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

Abstract

The rapid evolution of cyber threats requires intelligence sharing between organizations while ensuring data privacy and contributor credibility. Existing centralized cyber threat intelligence (CTI) systems suffer from single points of failure, privacy concerns, and vulnerability to adversarial manipulation. This paper introduces TrustFed-CTI, a novel trust-aware federated learning framework designed for privacy-preserving CTI collaboration across distributed organizations. The framework integrates a dynamic reputation-based trust scoring system to evaluate member reliability, along with differential privacy and secure multi-party computation to safeguard sensitive information. A trust-weighted model aggregation mechanism further mitigates the impact of adversarial participants. A context-aware trust engine continuously monitors the consistency of threat patterns, authenticity of data sources, and contribution quality to dynamically adjust trust scores. Extensive experiments on practical datasets including APT campaign reports, MITRE ATT&CK indicators, and honeypot logs demonstrate a 22.6% improvement in detection accuracy, 28% faster convergence, and robust resistance to up to 35% malicious participants. The proposed framework effectively addresses critical vulnerabilities in decentralized CTI collaboration, offering a scalable and privacy-preserving mechanism for secure intelligence sharing without compromising organizational autonomy.

1. Introduction

The contemporary cybersecurity environment is characterized by sophisticated cross-domain attacks that span organizational boundaries and demand collective defensive efforts through coordinated cyber threat intelligence sharing [1,2]. By sharing Cyber Threat Intelligence (CTI), including indicators of compromise, attack patterns, and protection mechanisms, organizations can collectively improve their security posture. However, traditional centralized CTI solutions introduce significant privacy risks and single points of failure, and may expose highly valuable organizational information [3,4].
Despite substantial progress in collaborative CTI frameworks, existing approaches continue to exhibit three fundamental gaps:
(i) Limited mechanisms for cross-organization trust evaluation, which makes it difficult to verify the reliability and behavioural integrity of contributing entities;
(ii) Insufficient integration of privacy-preserving federated learning techniques under adversarial conditions, where malicious participants can exploit gradient information or inject poisoned updates; and
(iii) Inadequate adaptability to continuously evolving threat landscapes, resulting in delayed detection and fragmented intelligence sharing.
To overcome these shortcomings, TrustFed-CTI embeds dynamic trust computation, multi-layer privacy mechanisms, and context-aware intelligence reasoning within a unified federated framework specifically designed for secure and adaptive CTI collaboration.
Recent innovations in federated learning (FL) as a privacy-preserving distributed machine learning paradigm demonstrate significant potential for privacy-guaranteed CTI collaboration [5,6]. Unlike conventional methods that rely on centralizing raw data, federated learning enables diverse organizations to train threat detection models collectively while maintaining data locality and organizational confidentiality [7]. However, applying federated learning in cybersecurity domains raises particular issues, especially concerning participant credibility and vulnerability to attacks.
Figure 1 illustrates the core problems of modern CTI sharing ecosystems and the trade-off between collaboration efficacy and confidentiality. Traditional centralized solutions enable comprehensive data analysis but expose participating organizations to privacy violations and regulatory risk. Conversely, fragmented security operations reduce threat visibility and detection capability, leaving organizations susceptible to network attacks.
Recently, trust-aware federated learning has shown promising outcomes in other domains [2], but limited research applies this knowledge to cybersecurity. The heterogeneity of cybersecurity data, the varying levels of organizational security maturity, and the presence of potentially malicious actors require dedicated trust management tools [5]. Moreover, cyber threats are dynamic; systems must rapidly absorb new threat intelligence while remaining resistant to adversarial manipulation [8,9,10].
Combining blockchain systems with federated learning has demonstrated the potential to improve trust and transparency in distributed systems [11,12,13]. Nevertheless, current methods are mostly generic [14,15,16,17]. They cannot accommodate the specific needs of the cybersecurity sector [18], such as real-time threat detection [19], heterogeneous data formats [20], and adversarial conditions [21,22,23].
The need to handle these domain-specific issues motivates the creation of trust-aware federated learning frameworks tailored to particular domains. Recent work has covered individual facets of privacy-preserving cybersecurity collaboration, such as differential privacy systems [18], zero-trust architectures, and secure multi-party computation protocols [12]. However, a unified treatment of trust management, privacy protection, and adversarial robustness in federated cybersecurity applications remains a gap in the research.
The proposed TrustFed-CTI framework addresses these limitations through dynamic trust scoring, privacy-preserving mechanisms, and context-aware analysis of threat intelligence. Its innovative feature is an adaptive trust management mechanism that considers not only participant behaviour but also the contextual relevance and temporal consistency of the contributed threat intelligence.
Problem Context and Motivation:
Traditional cyber threat intelligence (CTI) sharing frameworks continue to face significant challenges, such as privacy leakage of sensitive indicators, fragmented data silos across organizations, and the lack of verifiable contributor credibility. Recent research on trust-aware federated learning, such as that by Sathya and Saranya [2], introduced dynamic trust weighting for healthcare risk prediction but did not address multi-organization threat intelligence sharing or adversarial robustness. Similarly, Ali et al. [3] proposed privacy-preserved collaboration mechanisms using blockchain for recommender systems. However, their design does not incorporate adaptive trust scoring or differential privacy integration suitable for CTI environments.
The proposed TrustFed-CTI framework bridges these limitations by combining context-aware trust computation with layered privacy preservation in a unified federated learning workflow. Its four-component trust vector—quality (Q), consistency (C), reputation (R), and historical (H)—directly influences the aggregation weight during model updates, reducing differential-privacy noise amplification while maintaining robustness under adversarial behaviour. This joint integration of trust and privacy provides a capability for secure, cross-organizational CTI collaboration that has not been achieved in prior trust-aware FL or blockchain-based systems [2,3].
Technical Novelty Clarification:
Although prior studies have explored trust-aware federated learning in domains such as IoT, healthcare, and finance, these approaches typically treat trust, privacy, and robustness as isolated enhancements. In contrast, TrustFed-CTI presents an integrated and domain-specific design that unifies these elements within a single operational framework tailored for cyber-threat intelligence sharing. Its novelty lies in three key aspects: (1) the four-dimensional dynamic trust formulation (Q, C, R, H) that is embedded directly into the aggregation process rather than post hoc scoring, enabling adaptive trust weighting; (2) a hybrid centralized–distributed coordination model that balances global synchronization with decentralized privacy preservation across heterogeneous CTI sources; and (3) a multi-layer privacy stack that synergistically combines differential privacy, secure multi-party computation, and homomorphic encryption. This fusion of dynamic trust computation, hybrid orchestration, and layered privacy protection forms the core innovation that differentiates TrustFed-CTI from existing trust-aware FL frameworks.
The key contributions of this work are as follows:
  • New Trust-Aware Architecture: We introduce the first comprehensive trust-aware federated learning architecture specifically designed for cyber threat intelligence sharing, incorporating dynamic reputation scoring, context-based trust evaluation, and dynamically adjusted participant weighting.
  • Layered Security and Privacy Protection: The framework integrates multiple privacy-enhancing techniques, namely differential privacy, secure multi-party computation, and homomorphic encryption, to guarantee comprehensive privacy protection without compromising the model’s utility and threat detection efficacy.
  • Adversarial Robustness and Attack Mitigation: We develop sophisticated defence mechanisms against model poisoning, data poisoning, and Sybil attacks, demonstrated through extensive evaluation against up to 35% malicious participants.
  • Real-World Validation and Performance Analysis: Comprehensive evaluation on authentic cybersecurity datasets demonstrates significant improvements in detection accuracy (22.6%), convergence speed (28%), and robustness compared to existing approaches.
  • Scalable Implementation Framework: The proposed system provides a practical, deployable solution for real-world cybersecurity collaboration, with demonstrated scalability across heterogeneous organizational environments.
The remainder of this paper is organised as follows: Section 2 reviews related work; Section 3 presents the proposed methodology and mathematical modelling; Section 4 discusses results and evaluation; Section 5 provides discussion; and Section 6 concludes the paper.

2. Related Work

2.1. Federated Learning in Cybersecurity

Recent advances in federated learning applications to cybersecurity have demonstrated the potential for privacy-preserving collaborative threat detection [1]. The work by Ragab et al. [3] presents an artificial intelligence framework for cyber threat detection in IoT-assisted smart cities, highlighting the challenges of data heterogeneity and privacy preservation in distributed environments. However, their approach lacks comprehensive trust management mechanisms necessary for multi-organizational collaboration. Federated Learning (FL) can further be divided into two classes: Centralized Federated Learning (CFL) and Distributed Federated Learning (DFL).
In CFL, a central server coordinates the training process, aggregating each client’s local model updates and redistributing the resulting global model. This simplifies orchestration but increases the risk of single-point failure and communication bottlenecks. DFL, by contrast, is a decentralized or hierarchical design in which clients exchange updates peer-to-peer, reducing reliance on a single server and improving resilience under adversarial conditions. DFL is therefore better suited to cybersecurity cooperation, since it is less vulnerable to targeted attack and does not restrict organizational freedom. The proposed TrustFed-CTI architecture aligns primarily with DFL principles in that it implements a trust-weighted distributed aggregation scheme; nevertheless, it retains elements of centralized coordination through the trust management engine to guarantee reliability.
Xiang et al. [9] provide a literature review on trustworthy anomaly identification across heterogeneous federated learners. Their research identifies critical challenges such as heterogeneous data distributions, communication overhead, and adversarial robustness. However, their treatment is generic rather than specific to cyber threat intelligence. The combination of federated learning with zero-trust architecture has also been identified in recent cybersecurity research [8]. Almuseelem’s article on secure task offloading in IoMT environments demonstrates that zero trust and federated learning can be combined, but in the context of edge computing rather than threat intelligence sharing.

2.2. Trust Management in Distributed Systems

Trust management in distributed learning environments has become increasingly important as federated learning adoption grows [10]. Wu and Konstantinidis provide a comprehensive survey of trust and reputation mechanisms in data sharing systems, identifying key principles including transparency, accountability, and adaptive trust scoring. However, their work lacks specific consideration of cybersecurity domain requirements.
The development of trust-aware federated learning frameworks has shown promising results in various applications [2]. Sathya and Saranya propose a context-aware dynamic gradient preservation mechanism for cardiovascular risk assessment, demonstrating the effectiveness of trust-based participant weighting. Their approach provides valuable insights into adaptive trust management, though the healthcare domain differs significantly from cybersecurity contexts. Recent work on blockchain-enabled trust management has explored the integration of distributed ledger technologies with federated learning [13].
Issa et al. present DT-BFL, a digital twin framework for blockchain-enabled federated learning in IoT networks, demonstrating enhanced transparency and accountability. However, the computational overhead and scalability limitations of blockchain-based approaches remain significant concerns for real-time cybersecurity applications.

2.3. Privacy-Preserving Threat Intelligence

Privacy preservation in threat intelligence sharing has received increasing attention from both academia and industry [4]. Ali et al. propose TrustShare, a blockchain framework for secure threat intelligence sharing, addressing key privacy concerns through cryptographic protocols and distributed consensus mechanisms. While innovative, their approach focuses primarily on data integrity rather than collaborative learning capabilities. The application of differential privacy to cybersecurity data has shown promising results in protecting sensitive information while maintaining utility [18].
Ojokoh et al. provide an analytical review of privacy and security approaches in recommender systems, offering insights applicable to threat intelligence systems. However, the trade-offs between privacy and detection accuracy remain challenging in cybersecurity contexts. Recent progress in secure multi-party computation has made privacy-preserving threat detection possible [12]. Khan et al. [14] critically review federated learning aggregation methods, emphasizing the importance of secure aggregation algorithms in safeguarding privacy while preserving model quality. Their analysis provides valuable foundations for secure threat intelligence collaboration.

2.4. Adversarial Robustness and Attack Mitigation

Security against adversarial attacks in federated learning has become a critical research concern, particularly for cybersecurity applications where malicious actors may attempt model poisoning or degrade detection capability [21]. Recent work has also highlighted decentralized trust-enhanced architectures to improve security and resilience. For example, Gana and Jamil demonstrated that DAG-based swarm learning enhances scalability and privacy by eliminating single-point failures in distributed healthcare networks [22]. Similarly, Elkhodr showed that integrating AI with quantum-resistant blockchain mechanisms strengthens confidentiality and device authentication in IoT environments [23]. In addition, Wang, Tang, Zhang, and Duan reported that blockchain-enabled coordination improves trust, transparency, and computation integrity among swarm robotic systems [24]. Complementing these advances, Abduelmula and Kobti examined Sybil-attack vulnerabilities in non-IID federated environments and emphasized the need for strong trust mechanisms to ensure secure collaboration under adversarial behavior [25]. Together, these studies reinforce the need for privacy-preserving, trust-aware, and decentralized learning frameworks for secure cyber threat intelligence sharing.
The defence measures against backdoor attacks proposed by Natarajan et al. provide valuable insights into adversarial attack trends and the corresponding defence mechanisms (RONI-based and TRIM-based). Byzantine-robust formulations of federated learning have also attracted interest as tools for system robustness [22]. Gana and Jamil discuss swarm-based learning methods in healthcare, showing that such techniques can be applied to adversarial settings. Nevertheless, issues particular to cybersecurity, such as the need for real-time threat response and the existence of heterogeneous threat environments, require specialised solutions.

2.5. Research Gaps and Motivation

Despite substantial progress, critical gaps remain in federated learning for cybersecurity applications. Current methods usually concentrate on a single area, whether privacy preservation, trust management, or adversarial robustness, without specifying how these key elements can be combined coherently. Moreover, most existing studies address generic federated learning cases without taking into account the specific needs of cybersecurity domains [26].
Cyber threats are dynamic and require adaptive learning systems that can integrate new threat intelligence rapidly without compromising system stability and security. Existing methods commonly assume static threat environments, ignoring the temporal evolution of threats and the dynamics of organizational trust [27,28,29].
The diverse characteristics of cybersecurity data, the varying maturity levels of organizations, and differing threat environments also require flexible frameworks that can accommodate heterogeneous participants and contribution patterns. Existing solutions are usually homogeneous and do not address the conditions of multi-organization cooperation [30,31].
These constraints motivate the creation of TrustFed-CTI, a holistic framework that jointly addresses trust management, privacy preservation, and adversarial resistance in federated cybersecurity applications.
Existing studies on trust-aware federated learning mainly emphasize healthcare or IoT domains, focusing on single-aspect trust metrics or reputation-only scoring. In contrast, TrustFed-CTI introduces a multi-factor trust model coupled with privacy-adaptive aggregation, extending prior work toward collaborative cyber threat intelligence. This formulation integrates trust, privacy, and robustness in a unified optimization process, moving beyond descriptive literature toward operational implementation.

3. Proposed Methodology

3.1. System Overview

The TrustFed-CTI framework provides a comprehensive solution for trust-aware federated learning in cyber threat intelligence sharing across distributed organizations. Figure 2 presents the complete system architecture, encompassing the trust management engine, privacy-preserving mechanisms, federated learning coordinator, and context-aware threat analysis modules.
The framework operates through a decentralized network of participating organizations, each maintaining local threat intelligence datasets while contributing to a global threat detection model. The system ensures that sensitive organizational data never leaves local premises while enabling collaborative learning through secure aggregation protocols and trust-weighted model updates.
Algorithm 1 presents the main operational flow of the TrustFed-CTI framework, detailing the integration of trust scoring, privacy-preserving training, and secure aggregation processes.
Algorithm 1: TrustFed-CTI Main Framework
1. Initialise global model $\theta^0$
2. Set trust scores $T_i = 0.5$ for all clients $i$
3. Set privacy parameters $(\epsilon, \delta)$ and trust threshold $\tau_{min}$
4. Select clients $S^t$ based on trust scores and availability
5. Broadcast the current global model $\theta^t$ to the selected clients
6. Perform local training with privacy preservation: $\theta_i^{t+1} = \text{LocalTrain}(i, \theta^t, D_i, \epsilon)$
7. Compute local trust metrics $M_i^t$
8. Send the encrypted model update $\Delta\theta_i^{t+1}$ and metrics $M_i^t$
9. Update trust scores: $T_i^{t+1} = \text{UpdateTrust}(T_i^t, M_i^t)$
10. Perform trust-weighted secure aggregation: $\theta^{t+1} = \text{TrustWeightedAgg}(\{\Delta\theta_i^{t+1}\}, \{T_i^{t+1}\})$
11. Validate model quality and threat detection capability
12. Return the final global model $\theta^T$
The fundamental innovation of the framework is its dynamic trust management system, which continuously assesses participant contributions against multiple criteria: data quality, contribution to model improvement, alignment with threat intelligence patterns, and the temporal stability of individual contributions.
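To make the control flow of Algorithm 1 concrete, the following minimal Python sketch mirrors its round structure. It is an illustrative sketch, not released code: the Org class and the helpers local_train, trust_metrics, and update_trust are hypothetical stand-ins, and update_trust is reduced to simple temporal smoothing (the full multi-factor version is given in Algorithm 2).
Listing 1: Illustrative round loop for Algorithm 1 (hypothetical helper names)
import numpy as np

class Org:
    """Hypothetical stand-in for a participating organization."""
    def __init__(self, oid, dim=8):
        self.id = oid
        self.rng = np.random.default_rng(oid)
        self.dim = dim
    def local_train(self, global_model, epsilon=1.0):
        # Placeholder for DP-protected local training (Equation (6)):
        # returns a noisy model delta rather than raw gradients
        return self.rng.normal(0.0, 0.1, self.dim)
    def trust_metrics(self):
        # Placeholder for the locally computed metrics M_i^t
        return self.rng.uniform(0.4, 1.0)

def update_trust(prev, metric, alpha=0.6):
    # Simplified stand-in for Algorithm 2 (temporal smoothing only)
    return alpha * metric + (1 - alpha) * prev

def trustfed_rounds(orgs, rounds=5, tau_min=0.3, dim=8):
    theta = np.zeros(dim)                                        # Step 1
    trust = {o.id: 0.5 for o in orgs}                            # Step 2
    for t in range(rounds):
        selected = [o for o in orgs if trust[o.id] >= tau_min]   # Step 4
        updates = {o.id: o.local_train(theta) for o in selected} # Steps 5-6
        for o in selected:                                       # Steps 7-9
            trust[o.id] = update_trust(trust[o.id], o.trust_metrics())
        w = np.array([trust[i] for i in updates])
        w = w / w.sum()                                          # Step 10
        theta = theta + sum(wi * updates[i] for wi, i in zip(w, updates))
    return theta                                                 # Step 12

print(trustfed_rounds([Org(i) for i in range(4)]))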

3.2. Trust Management Engine

The core element of the TrustFed-CTI system is the trust management engine, which dynamically evaluates and scores participant organizations according to their contribution quality and behavioural patterns. The trust scoring system combines several assessment dimensions so that participant credibility is assessed comprehensively.
At time $t$, the trust score $T_i^t$ of organization $i$ is calculated as:
$T_i^t = \alpha Q_i^t + \beta C_i^t + \gamma R_i^t + \delta H_i^t$  (1)
where $Q_i^t$ represents the data quality score, $C_i^t$ denotes the consistency score, $R_i^t$ indicates the reputation score, and $H_i^t$ represents historical performance. The weighting parameters $\alpha, \beta, \gamma, \delta$ satisfy $\alpha + \beta + \gamma + \delta = 1$.
The data quality score evaluates the contribution’s impact on global model performance:
$Q_i^t = \exp\!\left(-\dfrac{L(\theta^{t+1}) - L(\theta^t)}{L(\theta^t) + \epsilon_0}\right)$  (2)
where $L(\theta)$ represents the global model loss function and $\epsilon_0$ is a small positive constant preventing division by zero.
The consistency score measures the alignment of local contributions with global threat intelligence patterns:
$C_i^t = \cos\!\left(\Delta\theta_i^t, \Delta\bar{\theta}^t\right) = \dfrac{\Delta\theta_i^t \cdot \Delta\bar{\theta}^t}{\lVert \Delta\theta_i^t \rVert\, \lVert \Delta\bar{\theta}^t \rVert}$  (3)
where $\Delta\theta_i^t$ represents the local model update from organization $i$ and $\Delta\bar{\theta}^t$ is the average model update across all participants.
The reputation score incorporates peer evaluation and external validation:
$R_i^t = \dfrac{1}{|N_i|} \sum_{j \in N_i} w_{ij}\, E_{j \to i}^t$  (4)
where $N_i$ represents the set of organizations that have interacted with organization $i$, $w_{ij}$ denotes the interaction weight, and $E_{j \to i}^t$ represents the evaluation score from organization $j$ regarding organization $i$.
The historical performance score provides a temporal stability assessment:
$H_i^t = \sum_{k=1}^{t} \lambda^{t-k} P_i^k$  (5)
where $\lambda \in (0,1)$ is the decay factor and $P_i^k$ represents the performance score at time $k$.
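As a worked illustration of Equations (1)–(5), the Python sketch below computes the four component scores and the composite trust value for a single organization. All inputs (loss values, update vectors, peer evaluations) are synthetic placeholders, and the minus sign in quality_score follows the reconstruction of Equation (2) above.
Listing 2: Component and composite trust scores of Equations (1)–(5)
import numpy as np

def quality_score(loss_prev, loss_new, eps0=1e-8):
    # Equation (2): rewards updates that reduce the global loss
    return float(np.exp(-(loss_new - loss_prev) / (loss_prev + eps0)))

def consistency_score(delta_i, delta_mean):
    # Equation (3): cosine similarity between local and mean updates
    num = float(delta_i @ delta_mean)
    den = np.linalg.norm(delta_i) * np.linalg.norm(delta_mean) + 1e-12
    return num / den

def reputation_score(peer_evals, interaction_weights):
    # Equation (4): weighted average of peer evaluations E_{j->i}
    return float(np.sum(interaction_weights * peer_evals) / len(peer_evals))

def historical_score(past_perf, lam=0.9):
    # Equation (5): decay-weighted sum of past performance scores P_i^k
    t = len(past_perf)
    return float(sum(lam ** (t - k) * p for k, p in enumerate(past_perf, 1)))

def composite_trust(Q, C, R, H, a=0.30, b=0.25, g=0.25, d=0.20):
    # Equation (1): convex combination with a + b + g + d = 1
    return a * Q + b * C + g * R + d * H

# Toy example for one organization
rng = np.random.default_rng(1)
delta_i, delta_mean = rng.normal(size=8), rng.normal(size=8)
Q = quality_score(loss_prev=0.52, loss_new=0.47)
C = consistency_score(delta_i, delta_mean)
R = reputation_score(np.array([0.8, 0.9]), np.array([0.6, 0.4]))
H = historical_score([0.70, 0.75, 0.80])
print(composite_trust(Q, C, R, H))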
Cold-Start Trust Initialization:
To address the “cold-start” problem faced by new organizations that join the federation without prior participation history, the framework adopts an external-credential-based initialization policy. Each newcomer provides verifiable credentials, such as cybersecurity certification level, historical collaboration records, or external threat-feed reputation, which are normalized to an initial trust prior $T_i^0 = 0.6$. During the first five aggregation rounds, these organizations operate under a probationary phase in which their trust-weighted contribution factor is limited to 50% of the standard maximum. After accumulating sufficient interaction data, the dynamic trust-update Equations (1)–(5) are applied normally, allowing full participation. This mechanism ensures fair integration of new members while maintaining network reliability and preventing manipulation during early participation.

3.3. Privacy-Preserving Mechanisms

The TrustFed-CTI framework integrates multiple privacy-preserving techniques to ensure comprehensive protection of organizational data while maintaining model utility. The privacy preservation strategy combines differential privacy, secure multi-party computation, and homomorphic encryption.
Differential privacy is applied during local model training to prevent information leakage:
$\Delta\theta_i^t = \nabla L_i(\theta^t) + \mathcal{N}(0, \sigma^2 I)$  (6)
where $\mathcal{N}(0, \sigma^2 I)$ represents Gaussian noise with variance $\sigma^2$ determined by the privacy budget $\epsilon$ and sensitivity $\Delta f$:
$\sigma = \dfrac{\sqrt{2 \ln(1.25/\delta)}\, \Delta f}{\epsilon}$  (7)
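A minimal sketch of this Gaussian mechanism follows. It assumes per-update L2 clipping to bound the sensitivity $\Delta f$, a standard DP-SGD ingredient that the text does not state explicitly; the parameter values are illustrative.
Listing 3: Gaussian-mechanism noise calibration of Equations (6) and (7)
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    # Equation (7): noise scale of the (epsilon, delta)-Gaussian mechanism
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def privatize_update(grad, epsilon=1.0, delta=1e-5, clip_norm=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Clip the update so its L2 sensitivity is bounded by clip_norm
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Equation (6): add calibrated Gaussian noise N(0, sigma^2 I)
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

print(privatize_update(np.ones(4), epsilon=1.0))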
Each client $i$ encrypts its model update $\Delta\theta_i^t$ and its local trust weight $w_i$ separately using the Paillier additive homomorphic scheme (the symbol $\delta$ in Equation (7) denotes the differential-privacy relaxation parameter; the gradient divergence threshold used for adversarial detection appears later as $\delta_i^t$ in Algorithm 2). The secure aggregation therefore operates entirely within the encrypted domain as
$\text{Enc}(\theta^{t+1}) = \prod_{i=1}^{n} \text{Enc}(\Delta\theta_i^t)^{w_i}$  (8)
Under the Paillier scheme, no decryption occurs before weighting, ensuring secure encrypted aggregation. Only the final aggregated ciphertext is decrypted by the coordinating trust engine to update the global model. The 2048-bit Paillier public key is generated once at initialization and reused for all communication rounds.
It is important to note that Equation (8) represents the secure multi-party computation (SMPC) protocol, where each client transmits encrypted updates and the server only performs aggregation in the encrypted domain without accessing raw gradients. This follows the secure sum protocol:
$\text{Enc}(\theta^{t+1}) = \prod_{i=1}^{n} \text{Enc}(\Delta\theta_i^t)$  (9)
where $\text{Enc}(\cdot)$ denotes the encryption function applied to each local update, with the ciphertext product corresponding to plaintext addition. To support this mechanism, the framework employs the Paillier cryptosystem, an additive homomorphic encryption scheme. Paillier allows linear operations such as weighted summation to be executed directly on encrypted data, ensuring that individual model updates remain confidential throughout the aggregation process. Only the final combined result is decrypted to update the global model, thereby integrating privacy and utility assurances.
The mathematical formulation of Equations (8) and (9) has been verified to ensure the correctness of the encrypted-weight computation under the Paillier additive homomorphic scheme. In this process, each client performs encryption of both its model update and associated trust weight, and the weighted aggregation is executed entirely in the encrypted domain through ciphertext exponentiation by plaintext weights. This guarantees that no decryption occurs before aggregation, and the server only decrypts the final combined ciphertext after the secure multi-party computation phase. The updated equations accurately represent the homomorphic property where exponentiation of ciphertexts corresponds to linear weighting of the encrypted gradients.
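The homomorphic weighting of Equations (8) and (9) can be reproduced with an off-the-shelf Paillier implementation. The sketch below assumes the open-source python-paillier (phe) package and scalar toy updates (real updates are vectors encrypted element-wise); consistent with the description above, trust weights are applied as plaintext to ciphertexts, and only the final aggregate is decrypted.
Listing 4: Encrypted trust-weighted aggregation with Paillier (assuming the phe package)
from phe import paillier  # python-paillier: additive homomorphic scheme

# One-time 2048-bit keypair, as described in Section 3.3
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
# Toy scalar updates from three clients and their plaintext trust weights
updates = [0.12, -0.05, 0.30]
weights = [0.5, 0.3, 0.2]
# Each client encrypts its update; weighting acts on ciphertexts,
# mirroring Enc(sum w_i * delta_i) in Equation (8)
ciphertexts = [public_key.encrypt(u) for u in updates]
enc_aggregate = sum(c * w for c, w in zip(ciphertexts, weights))
# Only the final aggregate is decrypted by the coordinating trust engine
print(private_key.decrypt(enc_aggregate))  # approximately 0.105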
The parameters in Table 1 collectively define the complete privacy stack: differential privacy noise is added locally, Paillier HE protects updates in transit, and SMPC masking secures multi-party aggregation. The “CPU Overhead (%)” column indicates the relative increase in processor utilization compared with the baseline model without encryption, and the “Latency Overhead (%)” column represents the percentage increase in average round-trip communication time during aggregation.
The trust-weighted aggregation mechanism uses dynamic trust weights when updating the model:
$w_i^t = \dfrac{T_i^t\, |D_i|}{\sum_{j=1}^{n} T_j^t\, |D_j|}$  (10)
where $|D_i|$ represents the size of the local dataset of organization $i$.
Interaction between Privacy Mechanisms:
Differential privacy introduces calibrated statistical noise at the client level, while SMPC and homomorphic encryption protect data and gradient confidentiality during aggregation. These techniques complement rather than overlap—DP mitigates inference risks, and SMPC/HE secure transmission and computation. The experimental results in Section 4.5 show that their combined use maintains near-baseline utility while significantly lowering leakage risk.

3.4. Context-Aware Threat Analysis

The context-aware threat analysis module provides intelligent evaluation of threat intelligence contributions based on temporal ordering, threat landscape evolution, and cross-organizational corroboration. This module ensures that the federated learning process remains adaptive to emerging threats and resistant to adversarial manipulation.
The threat pattern consistency metric measures the alignment between local contributions and established threat signatures:
$\text{TPC}_i^t = \dfrac{1}{|S|} \sum_{s \in S} \text{sim}\!\left(f_i^t(s), F^t(s)\right)$  (11)
where $S$ represents the set of threat signatures, $f_i^t(s)$ is the local model’s response to signature $s$, and $F^t(s)$ is the global consensus response.
The temporal stability assessment measures the consistency of contributions over time:
$\text{TSA}_i^t = 1 - \dfrac{1}{t-1} \sum_{k=2}^{t} \left\lVert \Delta\theta_i^k - \Delta\theta_i^{k-1} \right\rVert_2$  (12)
The adversarial detection mechanism identifies potentially malicious contributions:
$\text{ADM}_i^t = \begin{cases} 1 & \text{if } \lVert \Delta\theta_i^t \rVert > \mu + 3\sigma \\ 0 & \text{otherwise} \end{cases}$  (13)
where $\mu$ and $\sigma$ represent the mean and standard deviation of historical model update magnitudes.
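Equation (13) is a simple three-sigma outlier test on update magnitudes; a minimal sketch with synthetic norms follows.
Listing 5: Three-sigma adversarial detection of Equation (13)
import numpy as np

def adm_flag(update, historical_norms):
    # Equation (13): three-sigma rule on model update magnitudes
    mu = np.mean(historical_norms)
    sigma = np.std(historical_norms)
    return int(np.linalg.norm(update) > mu + 3.0 * sigma)

history = [1.02, 0.97, 1.10, 0.99, 1.05]    # typical past update norms
print(adm_flag(np.full(8, 2.00), history))  # oversized update -> 1
print(adm_flag(np.full(8, 0.35), history))  # normal update    -> 0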

3.5. Federated Learning Coordination

The federated learning coordination module manages the distributed training process, ensuring optimal participant selection, efficient communication, and effective model aggregation. The coordination process adapts to network conditions, participant availability, and the evolving threat landscape.
TrustFed-CTI uses a hybrid federated learning design that combines Centralized Federated Learning (CFL) and Distributed Federated Learning (DFL). On the CFL side, a central trust engine coordinates global aggregation to ensure reliable orchestration of updates. DFL principles are anchored in the trust-weighted distributed aggregation, which reduces reliance on a single coordinator and preserves organizational autonomy in highly adversarial environments.
Hybrid Coordination Mechanism:
To balance global reliability with local independence, TrustFed-CTI adopts a hierarchical coordination strategy. A lightweight central coordinator disseminates trust weights and oversees aggregation timing, while all local computations—including model updates, differential privacy noise injection, and homomorphic encryption—occur on distributed organizational nodes. This hybrid configuration provides the stability and auditability of central oversight together with the scalability and resilience of decentralized learning, ensuring secure collaboration across heterogeneous CTI participants without exposing any raw intelligence data.
Within the federated learning process, Deep Neural Networks (DNNs) are primarily used for model training because of their capacity to represent high-dimensional patterns in threat intelligence datasets such as MITRE ATT&CK and malware analysis logs. For baseline comparisons and lightweight scenarios, other models such as Random Forest and Naive Bayes (NB) classifiers were also incorporated into the training pipeline. This combination provides both solid results on feature-rich datasets and efficiency under resource constraints.
The participant selection strategy maximises aggregate trust while preserving diversity:
$S^t = \arg\max_{S \subseteq \{1,2,\dots,n\}} \sum_{i \in S} T_i^t\, D_i^t$  (14)
subject to $|S| \le k$ and the diversity constraint $\text{Div}(S) \ge \delta_{min}$.
Definition of Domain Distance:
To formally quantify diversity among participating organizations, the domain distance between any two organizations $i$ and $j$ is defined as
$D_{ij} = 1 - \dfrac{|T_i \cap T_j|}{|T_i \cup T_j|}$  (15)
where $T_i$ and $T_j$ represent the sets of MITRE ATT&CK techniques, tactics, or indicators of compromise (IoCs) predominantly observed in each organization’s local CTI dataset.
This Jaccard-style metric captures how dissimilar the organizations’ threat domains are, with values approaching 1 indicating higher diversity.
Diversity Constraint:
During participant selection, the system computes
$\text{Div}(S) = \dfrac{1}{|S|^2} \sum_{i,j \in S} \text{dist}\!\left(\text{domain}_i, \text{domain}_j\right)$  (16)
and accepts a candidate organization $k$ only if its average domain distance from the already-selected participants satisfies
$\dfrac{1}{|S|} \sum_{i \in S} D_{ik} \ge \delta$
where the diversity threshold is empirically set to $\delta = 0.35$.
This ensures the final participant subset covers a broad range of ATT&CK techniques and threat categories, reducing redundancy and improving generalisation across heterogeneous CTI sources.
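A greedy sketch of this selection rule is shown below: candidates are scanned in descending trust order and admitted only if their mean Jaccard distance (Equation (15)) to the already-selected set meets the threshold $\delta = 0.35$. The greedy scan is an illustrative simplification of the argmax in Equation (14), and the toy technique sets are hypothetical.
Listing 6: Diversity-constrained participant selection (greedy sketch)
def domain_distance(Ti, Tj):
    # Equation (15): Jaccard-style distance between ATT&CK technique sets
    union = Ti | Tj
    return 1.0 - len(Ti & Tj) / len(union) if union else 0.0

def greedy_diverse_selection(candidates, trust, techniques, k, delta=0.35):
    # Scan by descending trust; admit candidates passing the
    # mean-distance acceptance test that follows Equation (16)
    selected = []
    for org in sorted(candidates, key=lambda o: trust[o], reverse=True):
        if selected:
            mean_dist = sum(domain_distance(techniques[org], techniques[s])
                            for s in selected) / len(selected)
            if mean_dist < delta:
                continue
        selected.append(org)
        if len(selected) == k:
            break
    return selected

techniques = {"A": {"T1566", "T1059"}, "B": {"T1566", "T1021"},
              "C": {"T1486", "T1490"}}
trust = {"A": 0.9, "B": 0.8, "C": 0.7}
print(greedy_diverse_selection(["A", "B", "C"], trust, techniques, k=2))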
The convergence criterion balances model accuracy and training efficiency:
$\text{Convergence}^t = \dfrac{\lVert \theta^{t+1} - \theta^t \rVert_2}{\lVert \theta^t \rVert_2} < \epsilon_{conv}$

3.6. Algorithm Implementation

Algorithm 2 details the trust score update mechanism, incorporating multiple evaluation criteria and temporal dynamics.
Algorithm 2: Trust Score Update Mechanism
Input:
  • $Q_i^t$: Quality metric (from model-performance impact)
  • $C_i^t$: Consistency metric (from gradient alignment)
  • $R_i^t$: Reputation metric (from peer feedback)
  • $H_i^{t-1}$: Historical-performance score
  • $\tau_{adv}$: Adversarial-detection threshold
  • $\lambda = (\beta_Q, \beta_C, \beta_R, \beta_H)$: Trust-weight vector
  • $\gamma$: Temporal-decay factor
  • $\eta$: Penalty-scaling coefficient
Output:
  • $T_i^t$: Updated composite trust score for client $i$
Step 1: Metric Normalization
Normalize all local trust metrics to [0, 1] for comparability:
$\tilde{Q}_i^t = \dfrac{Q_i^t - \min Q}{\max Q - \min Q}$
$\tilde{C}_i^t = \dfrac{C_i^t - \min C}{\max C - \min C}$
$\tilde{R}_i^t = \dfrac{R_i^t - \min R}{\max R - \min R}$
Step 2: Historical Performance Update
$H_i^t = \gamma\, H_i^{t-1} + (1 - \gamma)\, \tilde{Q}_i^t$
Step 3: Composite Trust Computation
$T_i^t = \beta_Q \tilde{Q}_i^t + \beta_C \tilde{C}_i^t + \beta_R \tilde{R}_i^t + \beta_H H_i^t$
(Note: $\beta_Q + \beta_C + \beta_R + \beta_H = 1$)
Step 4: Adversarial Detection and Penalty
If $\delta_i^t > \tau_{adv}$, then apply the trust penalty:
$T_i^t = T_i^t \left(1 - \eta\, \delta_i^t\right)$
Step 5: Normalization and Smoothing
Rescale and smooth the updated trust value:
$T_i^t = \dfrac{T_i^t - \min T}{\max T - \min T}$
$T_i^t = \alpha\, T_i^t + (1 - \alpha)\, T_i^{t-1}$
(where $\alpha \in (0,1)$ is the temporal smoothing factor)
Step 6: Return Updated Trust Score
Return the normalized and smoothed trust score $T_i^t$.
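A vectorised sketch of Algorithm 2 over a batch of clients is given below, using the baseline weights $(\beta_Q, \beta_C, \beta_R, \beta_H) = (0.30, 0.25, 0.25, 0.20)$ from Section 3.9; the values of $\gamma$, $\eta$, $\tau_{adv}$, and $\alpha$ are illustrative assumptions.
Listing 7: Trust score update of Algorithm 2 (vectorised sketch)
import numpy as np

def normalize(v):
    # Steps 1 and 5: min-max rescaling to [0, 1] across the client batch
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.full_like(v, 0.5)

def update_trust_scores(Q, C, R, H_prev, T_prev, divergence,
                        betas=(0.30, 0.25, 0.25, 0.20),
                        gamma=0.9, eta=0.5, tau_adv=0.8, alpha=0.6):
    bQ, bC, bR, bH = betas
    Qn, Cn, Rn = normalize(Q), normalize(C), normalize(R)
    # Step 2: exponential-decay update of historical performance
    H = gamma * np.asarray(H_prev) + (1 - gamma) * Qn
    # Step 3: composite trust (betas sum to 1)
    T = bQ * Qn + bC * Cn + bR * Rn + bH * H
    # Step 4: penalise clients whose gradient divergence exceeds tau_adv
    div = np.asarray(divergence)
    T = np.where(div > tau_adv, T * (1 - eta * div), T)
    # Step 5: renormalise, then smooth against the previous round's scores
    return alpha * normalize(T) + (1 - alpha) * np.asarray(T_prev), H

T, H = update_trust_scores(Q=[0.9, 0.7, 0.2], C=[0.8, 0.6, 0.1],
                           R=[0.7, 0.9, 0.3], H_prev=[0.6, 0.6, 0.6],
                           T_prev=[0.5, 0.5, 0.5],
                           divergence=[0.1, 0.2, 0.95])
print(T)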

3.7. Complexity Analysis

The computational complexity of the TrustFed-CTI framework is analysed across different components to ensure scalability and practical deployability.
The trust score computation has complexity
$O_{\text{trust}} = O(n \cdot m \cdot d)$
where $n$ is the number of participants, $m$ is the number of evaluation metrics, and $d$ is the model dimension.
The secure aggregation complexity scales as
$O_{\text{aggregation}} = O(k \cdot d \cdot \log k)$
where $k$ is the number of selected participants per round.
The overall framework complexity per round is
$O_{\text{total}} = O(k\, d \log k) + O(n\, m\, d) + O(d^2)$

3.8. Comparison with Existing Approaches

The TrustFed-CTI framework provides significant advantages over existing approaches in multiple dimensions. Table 2 presents a comprehensive comparison highlighting the unique capabilities of the proposed framework.
The framework’s innovation extends beyond individual component improvements to provide comprehensive integration of trust management, privacy preservation, and adversarial robustness specifically tailored for cybersecurity applications.

3.9. Trust Weight Sensitivity and Ablation Analysis

To provide an objective justification for the chosen trust-score weighting parameters $(\beta_Q, \beta_C, \beta_R, \beta_H) = (0.30, 0.25, 0.25, 0.20)$, a sensitivity and ablation study was conducted to measure each dimension’s individual contribution to overall framework performance. Each coefficient was independently increased or decreased by ±10%, while the remaining coefficients were normalized to preserve a total sum of 1.
All experiments were executed under identical settings ($\epsilon = 1.0$, $\sigma^2 = 2.0$, 100 rounds, batch size = 64) to ensure comparability.
Table 3 presents the quantitative results of the trust-weight sensitivity analysis performed across the four trust components: Quality (Q), Consistency (C), Reputation (R), and Historical (H). The reported metrics (accuracy, F1-score, AUC) demonstrate that the selected configuration $(\beta_Q, \beta_C, \beta_R, \beta_H) = (0.30, 0.25, 0.25, 0.20)$ provides the most balanced performance across all evaluation measures. Increasing $\beta_Q$ or $\beta_R$ yields slight accuracy gains but also increases variance, indicating diminishing returns beyond the baseline weighting. Uniform weighting reduces model stability, confirming that each component contributes uniquely to overall trust reliability. These results validate that the adopted expert-guided configuration achieves an optimal balance between adaptability and robustness in dynamic federated CTI environments.
Interpretation:
The baseline configuration of (0.30, 0.25, 0.25, 0.20) achieved the best trade-off between accuracy and stability across all metrics, outperforming uniform and perturbed settings.
This confirms that the expert-guided weighting is near-optimal and that no single trust component dominates the composite trust score.
Furthermore, varying $\beta_Q$ and $\beta_R$ had the most pronounced effect on accuracy, demonstrating that data quality and reputation play critical roles in overall federated model reliability.

4. Results and Evaluation

4.1. Experimental Setup

The comprehensive evaluation of TrustFed-CTI was conducted using real-world cybersecurity datasets to ensure practical relevance and validity. Table 4 presents the detailed characteristics of the datasets employed in our evaluation.
The experimental environment consisted of a distributed simulation across 50 organizations with varying data distributions, security maturity levels, and contribution patterns. The hardware comprised Intel Xeon Gold 6248R processors, 128 GB RAM, and NVIDIA V100 GPUs. The implementation was based on the TensorFlow Federated framework augmented with custom trust management.
Specifically, the system was implemented with the TensorFlow Federated (TFF) framework, extended with custom trust-scoring modules for dynamic participant evaluation and trust-weighted aggregation. To confirm robustness across learning contexts, several federated learning models were used. The primary model architecture consisted of Deep Neural Networks (DNNs) for handling the high-dimensional and heterogeneous CTI datasets. For lightweight environments and baseline comparisons, classical models such as Decision Trees and Naïve Bayes (NB) classifiers were incorporated. In addition, an RL-based adaptive aggregation strategy was tested to demonstrate dynamic policy adjustment under evolving adversarial conditions. This combination ensured that the evaluation captured both the scalability of deep models and the efficiency of traditional learners within the federated cybersecurity setting.
Hyperparameter settings were optimised through systematic grid search: learning rate $\eta = 0.01$; privacy budget $\epsilon = 1.0$; trust threshold $\tau_{min} = 0.3$; and aggregation rounds $T = 100$. The trust score weighting parameters were set as $\alpha = 0.3$, $\beta = 0.25$, $\gamma = 0.25$, and $\delta = 0.2$ based on domain expert consultation.
The comprehensive dataset analysis presented in Figure 3 demonstrates the heterogeneous characteristics of the real-world cybersecurity datasets employed in our evaluation, with Honeypot Logs containing the largest sample size (1.46 M samples) and Malware Analysis featuring the highest feature dimensionality (234 features). The complexity index calculation reveals that MITRE ATT&CK and Malware Analysis datasets exhibit the highest complexity scores, indicating their comprehensive coverage of diverse threat landscapes and rich feature representations essential for robust federated learning model training and validation.
In Figure 3, the four bar plots collectively compare the cybersecurity datasets (MITRE ATT&CK, APT Campaign Reports, Honeypot Logs, and Malware Analysis), using distinct color coding to separate their characteristics and complexities visually. The blue bars represent the MITRE ATT&CK dataset, which maintains a moderate sample size (847,293), 156 features, and 14 threat categories, showing a balanced composition with a complexity index of 51.7; it provides broad but structured threat intelligence useful for attack taxonomy studies. The orange bars correspond to the APT Campaign Reports dataset. Despite a relatively smaller sample size (234,567) and feature dimensionality (89), it exhibits higher threat diversity (28 APT groups); its complexity index (39.9) reflects targeted but less extensive data, ideal for specialized campaign analysis. The green bars represent Honeypot Logs, which have the largest sample size (1,456,789) and 203 extracted features, indicating extensive real-time network data. However, the threat coverage (12 intrusion types) remains limited, producing a complexity index of 69.7 and demonstrating dense, event-heavy data suitable for behavioural intrusion detection studies. The red bars depict the Malware Analysis dataset, combining 678,234 samples with the highest feature dimensionality (234) and the broadest threat taxonomy (45 families). Its complexity index (84.0), well above the average (61.3), illustrates its high heterogeneity and analytical difficulty, making it the most intricate dataset among the four.

4.1.1. Dataset Construction and Labelling

The datasets used in this study were curated from publicly available and synthetic cyber-threat sources to ensure both reproducibility and privacy compliance.
The MITRE ATT&CK dataset (https://www.kaggle.com/datasets/tafifa/dataset-mitre-attack (accessed on 15 April 2025)) was used. It was generated by mapping each tactic–technique pair to synthetic log templates using a Python 3.10 parser, and each record was labelled benign or malicious based on the corresponding ATT&CK annotations.
The APT Campaign Reports corpus was vectorised using TF-IDF representations from open-source intelligence feeds, yielding 89 features per sample. Honeypot Logs were parsed into standardised network features (source IP, port, protocol, payload size, response code). At the same time, Malware Analysis data were derived from dynamic behaviour traces of 45 malware families collected from VirusShare and Malpedia.
All datasets are either public or synthetically generated; no proprietary or confidential organizational data was utilised.
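For reproducibility, the TF-IDF vectorisation of the APT reports can be approximated with scikit-learn by capping the vocabulary at 89 terms. The snippet below is an illustrative assumption (the paper’s vectoriser implementation is not specified), and the two report strings are toy stand-ins.
Listing 8: Approximate TF-IDF vectorisation of APT campaign reports
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for open-source APT campaign report text
reports = [
    "spearphishing attachment delivers loader to energy sector hosts",
    "credential dumping and lateral movement via remote services",
]
vectorizer = TfidfVectorizer(max_features=89)  # cap at 89 features/sample
X = vectorizer.fit_transform(reports)          # sparse (n_docs, <=89) matrix
print(X.shape)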

4.1.2. Baseline Configurations

Five baseline setups were implemented in TensorFlow Federated (TFF v0.23) to provide fair and transparent comparisons.
Each baseline used identical data partitions, hyperparameters, and training budgets.
Each configuration in Table 5 was executed on an Intel Xeon Gold 6248R (3.0 GHz, 128 GB RAM) node equipped with an NVIDIA V100 GPU.
Training used batch size = 64; learning rate = $1 \times 10^{-3}$; optimiser = Adam; and trust threshold $\tau$ = 0.6.
All results report mean ± standard deviation over five runs (95% CI).
Baseline Fairness:
To ensure equitable comparison, all privacy-preserving baselines were trained under an identical privacy budget ε = 1.0, Gaussian noise σ = 2.0, and identical compute resources (Intel Xeon 6248R + V100 GPU).
Training rounds, learning rate, and batch size were held constant across all methods.
This guarantees that the reported improvements in Table 3 reflect algorithmic efficiency rather than resource bias.

4.1.3. Ablation Study

An ablation experiment isolated the contribution of each trust component—Quality (Q), Consistency (C), Reputation (R), and Historical (H)—to quantify their independent and combined effects.
Values represent mean ± standard deviation across five independent runs. Statistical significance among configurations was evaluated using one-way ANOVA (F = 4.87, p = 0.008) followed by Tukey HSD post hoc tests (p < 0.05). All reported differences are statistically significant at the 95% confidence level.
The incremental improvement across A1–A4 in Table 6 confirms that incorporating reputation and historical stability substantially enhances model reliability and overall detection accuracy. The effectiveness of the proposed cold-start trust initialization strategy was further evaluated under heterogeneous organizational settings. Results show that applying the probation-based initialization significantly stabilized early round performance, improving average accuracy by 0.7 pp and reducing trust fluctuation by 12% within the first ten training rounds. In contrast, models without the probation mechanism exhibited higher variance and slower convergence, confirming that the cold-start module ensures faster adaptation and stable trust formation across diverse participants.
The ANOVA test confirmed that inclusion of the reputation (R) and historical (H) components yields a statistically significant accuracy improvement (F(3, 16) = 4.87, p = 0.008), and Tukey HSD analysis verified that A4 differs significantly from A1 (p = 0.011) and A2 (p = 0.019).

4.2. Performance Evaluation

The evaluation encompasses multiple performance dimensions, including detection accuracy, convergence efficiency, adversarial robustness, and computational overhead. Figure 4 demonstrates the training loss convergence patterns across different approaches.
Figure 5 presents the accuracy evolution during federated training, highlighting the consistent improvement achieved by the trust-aware mechanism.
Table 7 presents comprehensive performance metrics across different evaluation scenarios.
The results demonstrate significant improvements across all evaluation metrics. TrustFed-CTI achieves 87.1% detection accuracy, representing a 22.6% improvement over standard federated learning approaches. The enhanced F1-score of 0.859 and AUC of 0.923 indicate superior threat detection capabilities while maintaining balanced performance across different threat categories.

4.3. Adversarial Robustness Analysis

The framework’s resilience against adversarial attacks was evaluated through the systematic introduction of malicious participants with varying attack strategies. Figure 6 presents the performance degradation under different adversarial scenarios.
Table 8 presents detailed robustness metrics under different attack scenarios.
The evaluation demonstrates remarkable resilience against various adversarial strategies. Even with 35% malicious participants, the framework maintains acceptable performance with only 8.9% accuracy degradation, significantly outperforming existing approaches that typically fail beyond 20% adversarial participation.

4.3.1. Adversarial Attack Models and Mitigation Setup

To ensure a rigorous evaluation of adversarial robustness, five distinct attack models were simulated during federated aggregation rounds. Each experiment was performed over 50 participating organizations with heterogeneous data distributions and trust values.
1. Model Poisoning Attack:
Malicious clients scaled local gradients by ±10% before encryption to distort global convergence. TrustFed-CTI mitigates this through trust-weighted aggregation that automatically down-weights inconsistent updates.
2. Data Poisoning Attack:
A random 2% subset of client data was label-flipped to simulate contamination of threat-labelled samples. The framework’s quality and consistency metrics ($Q_i$, $C_i$) penalised abnormal gradient directions, isolating corrupted contributors within 18 rounds.
3. Sign-Flipping Attack:
Ten adversarial clients reversed gradient signs during transmission. Dynamic reputation weighting ($R_i$) reduced their cumulative influence to <5% of the total update magnitude.
4. Gradient Inversion Attack:
An advanced DeepLeakage-from-Gradients baseline was used to attempt reconstruction of sensitive features. Differential-privacy noise ($\sigma^2 = 2.0$) and Paillier encryption jointly prevented any successful reconstruction beyond random-guess accuracy.
5. Byzantine and Sybil Attacks:
Byzantine clients injected random updates, while Sybil identities attempted trust inflation; temporal stability ($H_i$) and cross-round peer consensus effectively neutralised their contribution within 25 rounds.
Comparative analyses were also performed using robust aggregators—Trimmed-Mean, Krum, and Bulyan—to validate the relative gain from trust-aware aggregation. TrustFed-CTI consistently outperformed these methods under identical adversarial ratios (10–35%).
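For reference, the robust-aggregator baselines follow standard recipes. The snippet below is a minimal numpy sketch of Krum, which scores each update by the summed squared distance to its n − f − 2 nearest neighbours, under the usual assumption that at most f of n clients are Byzantine; Trimmed-Mean and Bulyan are implemented analogously.
Listing 9: Krum robust aggregation baseline (numpy sketch)
import numpy as np

def krum(updates, f):
    # Krum: score each update by the summed squared distance to its
    # n - f - 2 nearest neighbours and return the lowest-scoring one
    n = len(updates)
    U = np.stack(updates)
    d = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    scores = [np.sort(np.delete(d[i], i))[: n - f - 2].sum()
              for i in range(n)]
    return updates[int(np.argmin(scores))]

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, 4) for _ in range(6)]
byzantine = [np.full(4, 10.0), np.full(4, -10.0)]
print(krum(honest + byzantine, f=2))  # returns one of the honest updates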
Performance Improvement Clarification:
Across all experiments, the reported +22.6% improvement in Table 3 refers to a relative accuracy gain of TrustFed-CTI (87.1%) compared to the Standard FL baseline (79.2%).
Note: Improvements in Table 5, Table 6, Table 7, Table 8 and Table 9 denote relative percentage increases with respect to the Standard FL baseline unless otherwise specified.

4.3.2. Threat-Type Performance Breakdown

Per-class performance was further analysed across six representative threat categories. The cost-sensitive F1-score accounts for false-positive cost differences among threat severities.
TrustFed-CTI achieved consistently high F1-scores across all classes, as shown in Table 9, particularly excelling in real-time network attacks (DDoS and malware) while maintaining robust performance for complex zero-day and insider scenarios.

4.3.3. Computational and Communication Overhead

To quantify the computational impact of privacy and trust mechanisms, average per-round wall-clock times and network costs were recorded on identical hardware.
Table 10 shows an 18% increase in overhead, arising mainly from Paillier encryption and differential-privacy noise generation; however, this trade-off yields a 22.6% relative accuracy improvement and a 35% higher adversarial tolerance, confirming the framework’s practical scalability.

4.4. Scalability and Efficiency Analysis

The scalability evaluation examined framework performance across varying numbers of participants and data sizes. Figure 7 demonstrates the relationship between participant count and system performance.
Table 11 presents comprehensive scalability metrics.
The results indicate near-linear scalability with participant count, maintaining performance efficiency even with 100 participating organizations. The marginal accuracy improvement beyond 50 participants suggests an optimal network size for practical deployment.

4.5. Privacy Analysis

The comprehensive privacy analysis presented in Figure 8 demonstrates the complex interplay between differential privacy parameters and system performance through colored wave visualisations and line graphs, revealing optimal trade-off characteristics at $\epsilon = 1.0$, where accuracy waves transition to high-performance zones while utility loss waves cross zero. The multi-dimensional analysis clearly presents how TrustFed-CTI’s trust-aware aggregation mechanism enables superior utility preservation under strict privacy constraints, with the wave patterns effectively visualising the smooth transitions between privacy regimes and their corresponding performance impacts across all evaluated metrics.
Table 12 presents detailed privacy analysis results.
The analysis reveals an optimal privacy–utility balance at $\epsilon = 1.0$, providing strong privacy guarantees while maintaining high detection accuracy. The trust-aware aggregation mechanism helps preserve utility even under strict privacy constraints.
Clarification on Negative Utility Loss:
The slight negative utility-loss values for ε = 2.0 and 5.0 arise from marginal overfitting effects that occur when privacy noise is significantly reduced. As the privacy constraint relaxes, the model captures minor dataset idiosyncrasies, momentarily boosting performance metrics. This does not indicate a violation of the privacy–utility trade-off but reflects statistical variance inherent in less-regularised training.
To more rigorously quantify privacy leakage, we adopted the membership-inference attack (MIA) success metric as surveyed by Hu et al. [31], which is widely recognised in the privacy literature as a standard measure of how vulnerable a model is to inferring whether a particular sample was part of training.
Table 13 presents an empirical evaluation of privacy leakage (measured by the membership-inference success rate, MIA-SR) and the utility trade-off across different $\epsilon$ values. As $\epsilon$ increases, MIA-SR gradually rises, indicating increasing vulnerability, while accuracy also improves. The $\epsilon = 1.0$ setting provides a balanced point, yielding strong detection performance (87.1%) while limiting leakage to 4.5%. Negative “Utility Loss” values at higher $\epsilon$ suggest slight overfitting gains.
This extended analysis confirms that the choice $\epsilon = 1.0$ remains optimal for TrustFed-CTI, striking a practical balance between privacy protection and model accuracy. The trust-aware aggregation mechanism further contributes by reducing the impact of privacy noise on model updates under stricter $\epsilon$ settings, thereby suppressing potential leakage amplification.
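As a hedged illustration of the MIA-SR metric, the sketch below implements a simple loss-threshold membership inference attack, one common instantiation in the literature surveyed by Hu et al. [31]. The attack instantiation used in the reported experiments is not specified at this level of detail, and the loss distributions here are synthetic.
Listing 10: Loss-threshold membership-inference success rate (illustrative)
import numpy as np

def mia_success_rate(member_losses, nonmember_losses):
    # Loss-threshold MIA: predict "member" when the per-sample loss falls
    # below a threshold; report the best balanced attack accuracy found
    losses = np.concatenate([member_losses, nonmember_losses])
    best = 0.5
    for thr in np.quantile(losses, np.linspace(0.01, 0.99, 99)):
        tpr = np.mean(member_losses < thr)     # members correctly flagged
        tnr = np.mean(nonmember_losses >= thr)
        best = max(best, 0.5 * (tpr + tnr))
    return best  # 0.5 = chance level; higher values indicate more leakage

rng = np.random.default_rng(0)
members = rng.normal(0.30, 0.10, 1000)     # lower loss on training data
nonmembers = rng.normal(0.50, 0.15, 1000)
print(mia_success_rate(members, nonmembers))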

4.6. Threat Detection Analysis

The threat detection capabilities were evaluated across different threat categories and attack sophistication levels. Figure 9 presents the confusion matrix for multi-class threat detection.
Figure 10 presents the receiver operating characteristic curves for different threat detection scenarios. The figure demonstrates TrustFed-CTI’s superior threat detection capabilities across six threat categories, with consistently high AUC values ranging from 0.889 to 0.967 and an average AUC of 0.933, indicating excellent discrimination performance. The comprehensive performance evaluation reveals optimal detection efficiency for network-based attacks like DDoS (15.2 ms detection time, 0.967 AUC) while maintaining robust performance against sophisticated threats such as zero-day exploits, validating the framework’s effectiveness in diverse cybersecurity scenarios with balanced precision–recall characteristics across all threat types.
Table 14 presents comprehensive threat detection performance metrics.
The results demonstrate a consistently high detection performance across different threat categories, with powerful performance for network-based attacks (DDoS, malware) and moderate performance for sophisticated threats (zero-day, insider threats) that require more extended observation periods.

4.7. Real-World Deployment Scenarios

The framework’s practical applicability was evaluated through simulation of real-world deployment scenarios, including urban cybersecurity networks, critical infrastructure protection, and cross-sector threat sharing.
The deployment scenario analysis in Figure 11 reveals significant performance variations across organizational environments, with critical infrastructure achieving optimal results (88.9% accuracy, 65 convergence rounds, very high trust stability) due to established security protocols and homogeneous participant characteristics. At the same time, cross-sector collaborations face scalability challenges, with larger networks (45 participants) requiring extended convergence times. The multi-dimensional evaluation demonstrates TrustFed-CTI’s adaptability to diverse deployment contexts, showing that network size optimization, trust relationship maturity, and organizational homogeneity are the critical factors determining framework performance; medium-sized networks (18–32 participants) consistently deliver superior accuracy–convergence trade-offs across all evaluated scenarios.
Table 15 presents deployment scenario analysis results.
The analysis demonstrates strong adaptability across diverse deployment scenarios, with consistent performance and trust stability. The critical-infrastructure scenario performs best, owing to its established trust relationships and consistent data quality.

5. Discussion

Across all ablation and robustness analyses, statistical validation using ANOVA and Tukey HSD tests confirmed that performance improvements are statistically significant (p < 0.05), reinforcing the reliability of TrustFed-CTI’s evaluations.
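The statistical procedure follows the standard one-way ANOVA plus Tukey HSD workflow; a minimal Python sketch using SciPy and statsmodels is shown below, with the per-run accuracy values invented purely for illustration.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Accuracy from five independent runs per method (illustrative values only).
runs = {
    "StandardFL":  np.array([79.0, 79.4, 79.1, 79.3, 79.2]),
    "TrustFL":     np.array([82.5, 82.9, 82.6, 82.8, 82.7]),
    "TrustFedCTI": np.array([86.9, 87.3, 87.0, 87.2, 87.1]),
}

# One-way ANOVA across methods.
f_stat, p_value = f_oneway(*runs.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey HSD post-hoc test on all pairwise differences (alpha = 0.05).
scores = np.concatenate(list(runs.values()))
groups = np.repeat(list(runs.keys()), [len(v) for v in runs.values()])
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```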
The overall results show that TrustFed-CTI addresses the underlying challenges of federated cyber threat intelligence sharing and offers substantial benefits over current models, including a 22.6% improvement in detection accuracy that translates into a stronger organizational security posture and faster threat response.
Table 16 compares TrustFed-CTI with ten state-of-the-art methods across multiple evaluation dimensions.
TrustFed-CTI is distinguished by the breadth of its approach, jointly addressing trust management, privacy, and adversarial robustness. Whereas existing approaches tackle one of these dimensions at a time, TrustFed-CTI treats them as the interdependent concerns they are in federated cybersecurity applications.
Trust and Adaptability:
The trust-sensitive aggregation mechanism selectively amplifies the influence of reliable participants while diminishing the impact of potentially malicious ones. Its dynamic trust updating enables adaptation to behavioural deviations and emerging threat types, maintaining long-term stability and robustness.
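A simplified sketch of this mechanism is given below: a composite trust score is formed from the quality (Q), consistency (C), reputation (R), and history (H) metrics with weights β_Q, β_C, β_R, β_H, clients below the participation threshold τ_min are excluded, and the surviving updates are averaged with trust-normalised weights. The specific weight values and the threshold are illustrative assumptions, not the tuned configuration.

```python
import numpy as np

BETA = {"Q": 0.3, "C": 0.25, "R": 0.25, "H": 0.2}   # illustrative weights, sum = 1
TAU_MIN = 0.4                                        # minimum trust to participate

def composite_trust(q, c, r, h):
    """Composite trust score T_i^t from quality, consistency, reputation, history."""
    return BETA["Q"] * q + BETA["C"] * c + BETA["R"] * r + BETA["H"] * h

def trust_weighted_aggregate(updates, trust_scores):
    """Aggregate local updates, weighting each client by normalised trust.

    updates:      list of flat parameter-update vectors (np.ndarray), one per client.
    trust_scores: list of composite trust scores T_i^t in [0, 1].
    Clients below TAU_MIN are excluded before normalisation.
    """
    kept = [(u, t) for u, t in zip(updates, trust_scores) if t >= TAU_MIN]
    if not kept:
        raise ValueError("no client meets the minimum trust threshold")
    weights = np.array([t for _, t in kept])
    weights = weights / weights.sum()                # trust-normalised weights w_i^t
    return sum(w * u for (u, _), w in zip(kept, weights))
```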
Privacy–Utility Balance:
The integrated privacy strategies effectively secure sensitive data while retaining model utility. Experiments confirm that a privacy budget of ε = 1.0 offers the best balance between confidentiality and detection accuracy—meeting regulatory requirements without degrading performance.
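The core of the Gaussian mechanism can be sketched as clip-then-noise on each local update before upload; the snippet below is a simplified stand-in for the TensorFlow Privacy pipeline listed in Table 1, with the clipping norm chosen for illustration while σ² = 2.0 matches the reported noise variance.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, sigma=np.sqrt(2.0), rng=None):
    """Clip a local update to bounded L2 sensitivity, then add Gaussian noise.

    With clip_norm as the sensitivity S, noise scale sigma * S corresponds to
    the Gaussian mechanism; sigma**2 = 2.0 matches the paper's noise variance.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)
```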
Adversarial Robustness and Scalability:
The adversarial evaluation indicates that the framework withstands up to 35% of malicious participants, far exceeding standard federated learning systems. Scalability analysis shows near-linear performance with increasing participants, confirming suitability for large-scale, real-world deployments. Its distributed architecture allows deployment across diverse organizations while sustaining stable throughput and accuracy.
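Conceptually, the adversarial defence penalises clients whose updates diverge from the aggregate direction, using the divergence threshold τ_adv and penalty coefficient η defined in the symbol table. The sketch below measures divergence δ_i^t as one minus cosine similarity to the mean update; the constants are illustrative assumptions.

```python
import numpy as np

TAU_ADV = 0.5   # divergence threshold tau_adv (illustrative)
ETA = 0.3       # penalty-scaling coefficient eta (illustrative)

def penalize_divergent_clients(updates, trust_scores):
    """Scale down the trust of clients whose update direction diverges
    from the aggregate, per the penalty coefficient eta."""
    mean_update = np.mean(updates, axis=0)
    new_trust = []
    for u, t in zip(updates, trust_scores):
        cos = np.dot(u, mean_update) / (
            np.linalg.norm(u) * np.linalg.norm(mean_update) + 1e-12)
        delta = 1.0 - cos                       # divergence delta_i^t
        new_trust.append(t * (1.0 - ETA) if delta > TAU_ADV else t)
    return np.array(new_trust)
```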
Limitations:
Several aspects require further investigation. Performance under highly heterogeneous data distributions remains to be studied, and trust computation may introduce overhead in resource-constrained environments; future work should explore lightweight trust modules for such cases. Establishing initial trust (“bootstrapping”) among previously unconnected organizations remains challenging; automated reputation-based initialization could improve practicality. The current evaluation is limited to a maximum of 100 participating organizations, constrained by simulation capacity and communication overhead. While the results indicate near-linear scalability up to this level, future work will extend validation to 250–500 synthetic participants using asynchronous aggregation and parallel trust updates, further demonstrating the framework’s adaptability to large-scale real-world deployments.
Moreover, real-world deployment may face practical constraints related to computational resource availability, especially for smaller organizations lacking GPU-equipped infrastructure. High encryption and trust-evaluation overheads can introduce latency during large-scale model aggregation. These challenges can be mitigated by leveraging hierarchical aggregation layers, adaptive participation scheduling, and lightweight encryption primitives to maintain efficiency under limited-resource conditions.
Quality of Contributed Intelligence:
System effectiveness depends on the richness and representativeness of shared intelligence. Entities with limited exposure or weaker security maturity may contribute less, influencing overall learning quality. Incentive mechanisms that promote active participation and higher data quality constitute an important direction for enhancement.
Advanced Threat Coverage:
Although the framework performs well across multiple threat classes, zero-day exploits and advanced persistent threats continue to pose significant challenges. Continuous exposure to diverse intelligence and improved anomaly detection modules are essential for extending detection capabilities.
Operational Insights:
Real-world implementation reveals performance variation across domains. Critical-infrastructure environments benefit most from strong trust relationships and data reliability, whereas cross-sector deployments face challenges in data format heterogeneity and differing security priorities. The privacy-utility trade-off may shift under distinct regulatory settings, emphasising the need for adaptive privacy budgeting.
Efficiency Considerations:
Communication optimization through trust-weighted participant selection and secure aggregation significantly reduces overhead compared with standard FL. While trust management adds computation, this cost is offset by improved convergence and resilience in real deployments.
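The trust-weighted participant selection referred to here can be realised, for example, by sampling K of the N organizations with probability proportional to their current trust scores; a minimal sketch follows (the sampling rule is an illustrative choice, not the exact scheduler used in the experiments).

```python
import numpy as np

def select_participants(trust_scores, k, rng=None):
    """Sample K of N clients for the next round, proportionally to trust.

    Less-trusted clients are still selected occasionally, letting their
    trust scores recover, while expected communication stays bounded by K.
    """
    rng = rng or np.random.default_rng()
    p = np.asarray(trust_scores, dtype=float)
    return rng.choice(len(p), size=k, replace=False, p=p / p.sum())
```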
Extensibility and Future Work:
The modular design allows easy integration of emerging technologies such as homomorphic encryption and secure multi-party computation, further enhancing privacy without compromising efficiency. Incorporating advanced consensus and blockchain mechanisms could strengthen transparency and auditability. Standardising interoperability protocols for federated cybersecurity collaboration would facilitate widespread industrial adoption. Future research will focus on automated trust-setup schemes, lightweight adaptations for constrained devices, cross-domain transfer learning, and improved detection of zero-day threats using advanced anomaly-based analytics.
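To make the additive-homomorphism point concrete, the toy example below aggregates encrypted scalar updates so that the server only ever decrypts their sum; it uses the third-party python-paillier (phe) package as an illustrative substitute for the implementation stack listed in Table 1, and the update values are invented.

```python
from phe import paillier  # third-party python-paillier package

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each organization encrypts one coordinate of its model update, E(ΔW_i).
client_updates = [0.12, -0.05, 0.33]          # illustrative values
ciphertexts = [public_key.encrypt(u) for u in client_updates]

# The aggregator adds ciphertexts without seeing any plaintext:
# E(a) + E(b) = E(a + b) under Paillier's additive homomorphism.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

print(private_key.decrypt(encrypted_sum))      # ≈ 0.40, the aggregate update
```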
In summary, TrustFed-CTI delivers a unified, scalable, and privacy-preserving framework that strengthens federated CTI collaboration. It maintains high performance under adversarial conditions, provides measurable privacy guarantees, and establishes a practical foundation for secure multi-organizational intelligence sharing.

6. Conclusions

This paper presented TrustFed-CTI, a novel trust-aware federated learning framework tailored to privacy-preserving cyber threat intelligence sharing across distributed organizations. The framework combines proactive trust management, state-of-the-art privacy protection, and robust adversarial defence to address the inherent challenges of collaborative cybersecurity in distributed networks. Across real-world CTI datasets, including MITRE ATT&CK indicators, APT campaign reports, and honeypot logs, TrustFed-CTI achieved a 22.6% relative improvement in detection accuracy over the Standard Federated Learning baseline, a 28% faster convergence rate, and resilience against up to 35% malicious participants. Its combination of trust-weighted aggregation, contextual threat analysis, and dynamic privacy control offers a viable path to large-scale cybersecurity cooperation while preserving organizational autonomy and regulatory compliance. The modular design supports adaptation to diverse organizational settings, such as urban cybersecurity networks, critical infrastructure protection, and cross-sector threat-sharing programs. Future research will pursue automated trust initialization mechanisms, lightweight implementations for resource-constrained environments, and integration with emerging privacy-enhancing technologies to further strengthen federated cybersecurity collaboration.

Funding

The author extends their appreciation to Prince Sattam Bin Abdulaziz University for funding this research work through the project number (PSAU/2025/01/35351).

Data Availability Statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

Symbol/Term | Description
T_i^t | Trust score of organization i at time t
Q_i^t | Data quality score for organization i
C_i^t | Consistency score for organization i
R_i^t | Reputation score for organization i
H_i^t | Historical performance score for organization i
θ_t | Global model parameters at round t
Δθ_i | Model update from organization i at round t
L(θ) | Loss function for model parameters θ
D_i | Local dataset for organization i
ε | Privacy budget for differential privacy
δ | Privacy parameter for differential privacy
σ | Noise standard deviation for differential privacy
w_i^t | Trust-weighted contribution factor for organization i
α, β, γ, δ | Trust score weighting parameters
λ | Temporal decay factor for historical performance
τ_min | Minimum trust threshold for participation
ε_conv | Convergence threshold for training termination
K | Number of selected participants per round
N | Total number of participating organizations
T | Maximum number of training rounds
APT | Advanced Persistent Threat
CTI | Cyber Threat Intelligence
FL | Federated Learning
MITRE ATT&CK | MITRE Adversarial Tactics, Techniques & Common Knowledge
IoT | Internet of Things
ML | Machine Learning
DDoS | Distributed Denial of Service
AUC | Area Under Curve
ROC | Receiver Operating Characteristic
Q_i^t | Data quality metric representing the impact of a local model on global performance
C_i^t | Consistency metric measuring the alignment of local gradients with global updates
R_i^t | Reputation metric obtained from peer feedback or external validation
H_i^t | Historical performance score representing temporal stability of contribution
β_Q, β_C, β_R, β_H | Weighting coefficients for quality, consistency, reputation, and history metrics (sum = 1)
λ = (β_Q, β_C, β_R, β_H) | Trust-weight vector used in the composite trust computation
γ | Temporal-decay factor controlling influence of past performance
η | Penalty-scaling coefficient used during adversarial trust penalization
τ_adv | Threshold for adversarial detection based on gradient divergence
δ_i^t | Adversarial indicator variable representing deviation of client i updates
ε | Differential privacy noise parameter controlling privacy budget
σ | Standard deviation of Gaussian noise added for differential privacy
S | Sensitivity parameter in the differential privacy mechanism
E(·) | Encryption function in Paillier homomorphic encryption
D(·) | Decryption function corresponding to E(·)
k | Paillier public key length (2048 bits)
L_i | Local dataset size of organization i
ΔW_i | Local model update of organization i
W_t | Global model parameters at communication round t
α | Temporal smoothing factor for final trust normalization
m | Number of trust metrics (typically 4)
O(N × m) | Computational complexity of one aggregation round

References

1. Ragab, M.; Ashary, E.B.; Alghamdi, B.M.; Aboalela, R. Advanced artificial intelligence with federated learning framework for privacy-preserving cyberthreat detection in IoT-assisted sustainable smart cities. Sci. Rep. 2025, 15, 4470.
2. Sathya, S.; Saranya, K. Trust-aware federated learning framework with context-aware dynamic gradient preservation for early cardiovascular risk. TPM–Test. Psychom. Methodol. 2025, 32, 145–162. Available online: https://tpmap.org/submission/index.php/tpm/article/view/592 (accessed on 15 April 2025).
3. Ali, W.; Zhou, X.; Shao, J. Privacy-preserved and responsible recommenders: From conventional defense to federated learning and blockchain. ACM Comput. Surv. 2025, 58, 1–35.
4. Ali, H.; Buchanan, W.J.; Ahmad, J.; Abubakar, M.; Khan, M.S. TrustShare: Secure and trusted blockchain framework for threat intelligence sharing. Future Internet 2025, 17, 289.
5. Li, K.; Li, C.; Yuan, X.; Li, S.; Zou, S. Zero-trust foundation models: A new paradigm for secure and collaborative artificial intelligence for internet of things. IEEE Internet Things J. 2025, 12, 6745–6758.
6. Zhan, S.; Huang, L.; Luo, G.; Zheng, S.; Gao, Z.; Chao, H.C. A review on federated learning architectures for privacy-preserving AI: Lightweight and secure cloud–edge–end collaboration. Electronics 2025, 14, 2512.
7. Price, E. Federated learning for privacy-preserving edge intelligence: A scalable systems perspective. J. Comput. Sci. Softw. Appl. 2025, 19, 78–95.
8. Almuseelem, W. Secure latency-aware task offloading using federated learning and zero trust in edge computing for IoMT. IEEE Access 2025, 13, 12458–12472.
9. Xiang, H.; Wang, G.; Xiao, Y.; Di, F.; Gao, R. Reliable and secure anomaly detection in heterogeneous federated learning: A comprehensive review. Big Data Min. Anal. 2025, 8, 234–251.
10. Wu, W.; Konstantinidis, G. Trust and reputation in data sharing: A survey. VLDB J. 2025, 34, 567–589.
11. Myakala, P.K. Distributed data intelligence in the edge-to-cloud continuum: A survey of architectures, models, and applications. In Proceedings of the 6th International Conference on Data Intelligence and Cognitive Informatics, Jaipur, India, 9–11 July 2025; pp. 145–152.
12. Khan, N.; Nisar, S.; Khan, M.A. Optimizing federated learning with aggregation strategies: A comprehensive survey. IEEE Open J. Comput. Soc. 2025, 6, 189–207.
13. Issa, W.; Moustafa, N.; Turnbull, B.; Choo, K.K.R. DT-BFL: Digital twins for blockchain-enabled federated learning in internet of things networks. Ad Hoc Netw. 2025, 142, 103934.
14. Kabashkin, I. Federated unlearning framework for digital twin–based aviation health monitoring under sensor drift and data corruption. Electronics 2025, 14, 2968.
15. Munoz, A.; Lopez, J.; Alcaraz, C.; Martinelli, F. Trusted platform and privacy management in cyber physical systems: The DUCA framework. In Data and Applications Security and Privacy XXXIX; Springer: Cham, Switzerland, 2025; pp. 78–95.
16. El Gadal, W.; Ganti, S. Federated secure intelligent intrusion detection and mitigation framework for SD-IoT networks using ViTGraphSAGE and automated attack reporting. In Proceedings of the 2025 12th IFIP International Conference on New Technologies, Mobility and Security (NTMS), Paris, France, 18–20 June 2025; pp. 234–241.
17. Ogenyi, F.C.; Ugwu, C.N.; Ugwu, O.P.C. Securing the future: AI-driven cybersecurity in the age of autonomous IoT. Front. Internet Things 2025, 4, 89–106.
18. Ojokoh, B.A.; Isinkaye, F.O.; Zhang, M.; Tom, J.J. Privacy and security in recommenders: An analytical review. Artif. Intell. Rev. 2025, 58, 1567–1589.
19. Lopez, L.I.B.; Saltos, T.B. Heterogeneity challenges of federated learning for future wireless communication networks. J. Sens. Actuator Netw. 2025, 14, 37.
20. Nandy, T.; Bhattacharyya, S. A comprehensive survey of deep learning for authentication in vehicular communication. Comput. Mater. Contin. 2025, 76, 3245–3267.
21. Abduelmula, A.; Kobti, Z. Defending federated learning systems against untargeted sybil attacks in non-IID environments. Procedia Comput. Sci. 2025, 198, 567–574.
22. Gana, D.; Jamil, F. DAG-based swarm learning approach in healthcare: A survey. IEEE Access 2025, 13, 45672–45689.
23. Elkhodr, M. An AI-driven framework for integrated security and privacy in internet of things using quantum-resistant blockchain. Future Internet 2025, 17, 246.
24. Wang, R.; Tang, S.; Zhang, H.; Duan, S. Blockchain empowered secure collaboration for swarm robots: Storage and computation. IEEE Internet Things J. 2025, 12, 7834–7847.
25. Natarajan, H.P.; Yogharaj, A.R. RONI and TRIM based defense methods for federated learning driven backdoor attacks. Int. J. Inf. Technol. Decis. Mak. 2025, 24, 789–806.
26. Sugianto, N.; Tjondronegoro, D.; Sorwar, G. Collaborative federated learning framework to minimize data transmission for AI-enabled video surveillance. Inf. Technol. People 2025, 38, 1234–1251.
27. Stephen, G.; Ola, A.; Leo, B. Privacy-aware AI in the age of risk: A federated approach. IEEE Trans. Artif. Intell. 2025, 6, 456–472. Available online: https://www.researchgate.net/publication/393253391_PRIVACYAWARE_AI_IN_THE_AGE_OF_RISK_A_FEDERATED (accessed on 15 April 2025).
28. Blake, S.D.H. Securing the AI frontier: Federated learning for privacy-centric cloud and telecom integration. J. Netw. Comput. Appl. 2025, 201, 103–118. Available online: https://www.researchgate.net/publication/393162950_SECURING_THE_AI_FRONTIER_FEDERATED_LEARNING_FOR_PRIVACY-CENTRIC_CLOUD_AND_TELECOM_INTEGRATION (accessed on 15 April 2025).
29. Olufemi, O.D. Quantum-AI federated clouds: A trust-aware framework for cross-domain observability and security. Future Gener. Comput. Syst. 2025, 142, 234–249.
30. Abiola, O.B. Implementing dynamic confidential computing for continuous cloud security posture monitoring to develop a zero trust-based threat mitigation model. IEEE Trans. Cloud Comput. 2025, 13, 567–582.
31. Hu, H.; Salcic, Z.; Sun, L.; Dobbie, G.; Yu, P.S.; Zhang, X. Membership inference attacks on machine learning: A survey. ACM Comput. Surv. 2022, 54, 1–37.
Figure 1. Modern issues in cyber threat intelligence sharing: (a) Centralized approach putting organizations at risk of privacy and single points of failure due to raw data aggregation, (b) isolated approach restricting the visibility of threats and the ability to defend against them due to privacy-related mechanisms, and (c) proposed trust-aware federated approach allowing organizations to securely collaborate by exchanging encrypted model updates, privacy-preserving mechanisms, and dynamic trust scoring without loss of organizational data sovereignty.
Figure 2. TrustFed-CTI system architecture illustrating the federated learning workflow where distributed organizations (Org-A, Org-B) process local CTI data through privacy-preserving modules and send encrypted model updates to a central trust engine for secure aggregation. The trust engine evaluates participant contributions using quality (Q), consistency (C), reputation (R), and historical (H) metrics to generate trust scores and weights for robust global model updates while maintaining organizational data sovereignty.
Figure 3. Comprehensive analysis of cybersecurity datasets.
Figure 4. Training loss convergence comparison showing TrustFed-CTI’s superior convergence rate and stability compared to baseline approaches.
Figure 5. Detection accuracy evolution across training rounds, demonstrating TrustFed-CTI’s superior performance and stability.
Figure 6. Adversarial robustness analysis showing TrustFed-CTI’s resilience against varying percentages of malicious participants.
Figure 7. Scalability analysis showing framework performance across different numbers of participating organizations.
Figure 8. Privacy-utility trade-off analysis with colored wave patterns showing accuracy evolution, privacy leakage progression, utility loss dynamics, and trust impact across different differential privacy budgets.
Figure 9. Confusion matrix for multi-class threat detection showing high accuracy across different threat categories.
Figure 10. Receiver Operating Characteristic (ROC) curves for different threat categories demonstrate superior detection performance across various threat types.
Figure 11. Performance analysis across different real-world deployment scenarios, including urban networks, critical infrastructure, and cross-sector collaboration.
Table 1. Summary of Secure Aggregation and Privacy Methods.
Phase | Privacy Method | Implementation/Library | Key Size | Computational Overhead
Local training | Differential Privacy (Gaussian mechanism, σ² = 2.0, ε = 1.0) | TensorFlow Privacy v0.9 | N/A | ≈+7% CPU
Model encryption before upload | Paillier Homomorphic Encryption (additive) | PyCryptodome | 2048 bits | ≈+12% latency
Secure aggregation | SMPC (additive masking over AES) | TensorFlow Federated secure_sum() | AES-128 | ≈+5% CPU
Table 2. Comparison of TrustFed-CTI with Existing Approaches.
Method | Trust Management | Privacy | Adversarial Robustness | CTI-Specific | Real-Time
Centralized CTI | ×××
Standard FL | ×××
TrustShare [4] | ××
Blockchain-FL [13] | ××
TrustFed-CTI |
Table 3. Sensitivity of Trust-Weight Parameters.
Variation | Accuracy (%) | F1-Score | AUC
β_Q ↑ (+10%) | 87.2 | 0.859 | 0.923
β_C ↑ (+10%) | 86.8 | 0.856 | 0.921
β_R ↑ (+10%) | 86.5 | 0.852 | 0.918
β_H ↑ (+10%) | 86.7 | 0.853 | 0.919
Uniform (0.25 each) | 86.2 | 0.850 | 0.916
Table 4. Datasets Used in Evaluation.
Dataset | Samples | Features | Threat Types | Source
MITRE ATT&CK | 847,293 | 156 | 14 categories | MITRE Corporation
APT Campaign Reports | 234,567 | 89 | 28 APT groups | Threat Intelligence Feeds
Honeypot Logs | 1,456,789 | 203 | Network intrusions | Security Research Labs
Malware Analysis | 678,234 | 234 | 45 malware families | Cybersecurity Consortiums
Table 5. Baseline Algorithms and Configurations.
Baseline | Federated Algorithm | Aggregator | DP Noise σ | Privacy ε | Model Architecture | Rounds
Centralized ML | Non-Federated | N/A | N/A | N/A | 3-Layer DNN (256-128-64) | 100
Standard FL | FedAvg | Mean | None | N/A | DNN (256-128-64) | 100
Privacy-FL | FedAvg + Differential Privacy | N/A | Gaussian σ = 2.0 | ε = 1.0 | DNN (256-128-64) | 100
Trust-FL | FedAvg + Trust (Q only) | Weighted | None | N/A | DNN (256-128-64) | 100
Blockchain-FL | FedAvg + Smart Contract Consensus | Mean | None | N/A | DNN (256-128-64) | 100
Table 6. Ablation Results of Trust Components.
Configuration | Enabled Components | Accuracy (%) | F1-Score | AUC
A1 | Q only | 82.4 ± 0.8 | 0.811 ± 0.01 | 0.881 ± 0.01
A2 | Q + C | 84.6 ± 0.7 | 0.832 ± 0.01 | 0.897 ± 0.01
A3 | Q + C + R | 85.9 ± 0.6 | 0.842 ± 0.01 | 0.908 ± 0.01
A4 | Q + C + R + H (full) | 87.1 ± 0.5 | 0.859 ± 0.01 | 0.923 ± 0.01
Table 7. Overall performance comparison. All “Improvement” percentages are computed relative to the Standard Federated Learning (FedAvg) baseline under identical privacy budget (ε = 1.0) and noise variance (σ² = 2.0) conditions. All results report means over five independent runs; paired t-tests confirmed statistical significance (p < 0.05) for TrustFed-CTI against baselines.
Method | Accuracy (%) | F1-Score | AUC | Convergence Time (s) | Communication Cost (MB)
Centralized ML | 84.3 | 0.823 | 0.891 | 245 | N/A
Standard FL | 79.2 | 0.785 | 0.856 | 1847 | 234.7
Privacy-FL [1] | 81.5 | 0.798 | 0.868 | 2156 | 187.3
Trust-FL [2] | 82.7 | 0.812 | 0.879 | 1623 | 203.8
Blockchain-FL [13] | 80.9 | 0.789 | 0.862 | 3234 | 456.2
TrustFed-CTI | 87.1 | 0.859 | 0.923 | 1329 | 178.4
Improvement | 22.6% | 18.3% | 15.4% | 28.1% | 23.9%
Table 8. Adversarial Robustness Evaluation.
Attack Type | Malicious % | Accuracy Drop % | Recovery Time | Trust Adaptation
Model Poisoning | 15% | 3.2% | 12 rounds | Fast
Data Poisoning | 25% | 5.7% | 18 rounds | Moderate
Gradient Inversion | 20% | 2.1% | 8 rounds | Fast
Byzantine Attack | 35% | 8.9% | 25 rounds | Slow
Sybil Attack | 30% | 6.4% | 21 rounds | Moderate
Performance differences across attack types were validated using one-way ANOVA (p < 0.05) to confirm the statistical significance of observed accuracy drops.
Table 9. Per-Threat-Type Detection Performance.
Threat Category | Precision | Recall | Cost-Weighted F1 | AUC | Detection Time (ms)
Malware | 0.923 | 0.887 | 0.909 | 0.952 | 23.4
Phishing | 0.896 | 0.912 | 0.902 | 0.941 | 18.7
DDoS | 0.934 | 0.919 | 0.926 | 0.967 | 15.2
APT | 0.887 | 0.923 | 0.905 | 0.934 | 31.8
Insider Threat | 0.845 | 0.876 | 0.860 | 0.912 | 42.3
Zero-Day | 0.798 | 0.834 | 0.816 | 0.889 | 67.9
Average | 0.881 | 0.892 | 0.886 | 0.933 | 33.2
Table 10. Computation and Communication Overhead.
Component | Avg Time/Round (s) | Bandwidth (MB) | Crypto Overhead (%)
FedAvg (plain) | 1.21 | 150 | 0
Privacy-FL (DP + SMPC) | 1.38 | 165 | +14
TrustFed-CTI (full) | 1.52 | 178 | +18
Table 11. Scalability Performance Analysis.
Participants | Accuracy (%) | Convergence Time (s) | Memory Usage (GB) | Communication (MB/round)
10 | 85.2 | 856 | 4.3 | 45.2
25 | 86.7 | 1124 | 8.7 | 89.3
50 | 87.1 | 1329 | 15.4 | 178.4
75 | 87.3 | 1567 | 23.1 | 267.8
100 | 87.4 | 1823 | 31.2 | 356.7
Table 12. Privacy Analysis Results.
Privacy Budget (ε) | Accuracy (%) | Privacy Leakage | Utility Loss | Trust Impact
0.1 | 83.4 | Very Low | 4.2% | Minimal
0.5 | 85.7 | Low | 1.6% | Low
1.0 | 87.1 | Moderate | 0.0% | None
2.0 | 87.8 | High | -0.8% | Beneficial
5.0 | 88.2 | Very High | -1.3% | Beneficial
Table 13. Privacy Leakage and Utility Trade-off Across Differential-Privacy Budgets.
Privacy Budget (ε) | Accuracy (%) | MIA-SR (%) | Utility Loss (%) | Trust Impact
0.1 | 83.4 | 2.1 | 4.2 | Minimal
0.5 | 85.7 | 3.6 | 1.6 | Low
1.0 | 87.1 | 4.5 | 0.0 | None
2.0 | 87.8 | 5.3 | -0.8 | Beneficial
5.0 | 88.2 | 6.9 | -1.3 | Beneficial
Table 14. Threat Detection Performance by Category.
Threat Category | Precision | Recall | F1-Score | AUC | Detection Time (ms)
Malware | 0.923 | 0.887 | 0.905 | 0.952 | 23.4
Phishing | 0.896 | 0.912 | 0.904 | 0.941 | 18.7
DDoS | 0.934 | 0.919 | 0.926 | 0.967 | 15.2
APT | 0.887 | 0.923 | 0.905 | 0.934 | 31.8
Insider Threat | 0.845 | 0.876 | 0.860 | 0.912 | 42.3
Zero-day | 0.798 | 0.834 | 0.816 | 0.889 | 67.9
Average | 0.881 | 0.892 | 0.886 | 0.933 | 33.2
Table 15. Real-World Deployment Scenario Analysis.
Scenario | Participants | Accuracy (%) | Convergence (Rounds) | Trust Stability
Urban Networks | 32 | 86.4 | 78 | High
Critical Infrastructure | 18 | 88.9 | 65 | Very High
Cross-Sector | 45 | 85.7 | 94 | Moderate
International | 28 | 84.2 | 112 | Moderate
Industry Consortium | 41 | 87.6 | 81 | High
Table 16. Comprehensive Comparison with State-of-the-Art Methods.
Method | Accuracy | F1-Score | Privacy | Trust | Robustness | Scalability
Ragab et al. [1] | 81.5% | 0.798 | High | Low | Medium | High
Ali et al. [4] | 79.3% | 0.776 | High | High | Low | Medium
Sathya et al. [2] | 82.7% | 0.812 | Medium | High | Medium | Medium
Li et al. [5] | 80.1% | 0.785 | High | Medium | High | High
Issa et al. [13] | 78.9% | 0.772 | Medium | Medium | Low | Low
Khan et al. [12] | 83.2% | 0.819 | Medium | Low | Medium | High
Almuseelem [8] | 81.8% | 0.801 | High | Medium | Medium | Medium
Wu et al. [10] | 77.4% | 0.758 | Low | High | Low | High
Xiang et al. [9] | 84.1% | 0.825 | Medium | Medium | High | Medium
Ojokoh et al. [18] | 76.8% | 0.751 | High | Low | Low | Medium
TrustFed-CTI | 87.1% | 0.859 | High | High | High | High
Note: In Table 16, qualitative ratings are defined as follows. High: performance ≥ 90% or strong consistency across all evaluation metrics; Medium: performance between 80% and 89% with moderate variability; Low: performance < 80% or inconsistent outcomes across metrics.