Article

Federated Learning with Adversarial Optimisation for Secure and Efficient 5G Edge Computing Networks

Computer Science Research Centre, University of the West of England, Bristol BS16 1QY, UK
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(9), 238; https://doi.org/10.3390/bdcc9090238
Submission received: 31 July 2025 / Revised: 8 September 2025 / Accepted: 15 September 2025 / Published: 17 September 2025
(This article belongs to the Special Issue Application of Cloud Computing in Industrial Internet of Things)

Abstract

With the evolution of 5G edge computing networks, privacy-aware applications are gaining significant attention due to their decentralised processing capabilities. However, these networks face substantial challenges in ensuring privacy and security, specifically in a Federated Learning (FL) setup, where adversarial attacks can compromise model integrity. Conventional privacy-preserving FL mechanisms are often susceptible to such attacks, leading to degraded model performance and severe security vulnerabilities. To address this issue, we propose an FL with adversarial optimisation framework to improve adversarial robustness in 5G edge computing networks while ensuring privacy preservation. The proposed framework comprises two models, a classifier model and an adversary model, which are trained jointly, with the adversary using the Fast Gradient Sign Method (FGSM) to generate adversarial perturbations. This adversarial optimisation enhances the classifier's resilience to attacks, thereby improving both privacy preservation and model accuracy. Experimental analysis reveals that the proposed model achieves up to 99.44% accuracy on adversarial test data, while improving robustness and sustaining high precision and recall across varying client scenarios. The experimental results further confirm the effectiveness of the proposed model in terms of communication and computational efficiency, reducing inference time and FLOPs and making it well suited for secure 5G edge computing applications.

1. Introduction

The emergence of 5G wireless networks has significantly transformed computing capabilities by enabling ultra-low latency, high data throughput and massive device connectivity [1]. One of the significant advancements empowered by 5G is edge computing, which enables computations to be performed at edge devices instead of relying exclusively on centralised computing. 5G edge computing is capable of supporting low-latency applications including smart healthcare, autonomous vehicles and the Industrial Internet-of-Things (IIoT) because it offers reduced bandwidth consumption and lower data transmission delays. However, the growing number of distributed devices increases the potential for privacy issues and security threats, specifically in Federated Learning (FL)-based environments that rely on decentralised training across various edge nodes [2].
FL enables edge devices to individually train their local models by maintaining their data privacy while collaboratively training a global model. Despite its privacy-preserving benefits, FL is susceptible to adversarial threats that can compromise both data privacy and model integrity [3]. More precisely, attackers can influence FL models by introducing malicious updates via model poisoning or manipulate training data through backdoor attacks. Moreover, they may infer private information using membership inference attacks and gradient leakage. Such adversarial activities are particularly concerning in 5G edge computing networks due to the heterogeneous nature of edge devices in terms of computational power and security capabilities, which makes them attractive targets for attackers [4].
Conventional privacy-preserving FL algorithms including differential privacy, homomorphic encryption and secure multi-party computation offer certain levels of privacy. However, they encounter multiple limitations while handling adversarial attacks in the FL environment. The aforementioned traditional mechanisms focus primarily on protecting data privacy, but often become unsuccessful in the detection and mitigation of malicious model updates from compromised edge devices, making them inefficient in combating model poisoning attacks and backdoor attacks [3,5]. Moreover, traditional techniques such as secure aggregation and homomorphic encryption introduce a considerable amount of communication overhead, which restricts their scalability and practicality for 5G resource-limited edge devices. Furthermore, FL methods based on differential privacy inject noise to protect data privacy, which hinders the performance of the model and does not efficiently impede membership inference attacks while leaving FL models susceptible to inference and gradient-based attacks [5,6].
To address the aforementioned challenges, adversarial optimisation is an emerging solution to improve the robustness of FL models in 5G edge computing. Adversarial optimisation involves training FL models to withstand adversarial attacks through various techniques including adversarial training, intrusion detection-based filtering of malicious updates and robust aggregation methods [7]. Moreover, adversarial optimisation allows the development of effective defence mechanisms while ensuring an optimal balance between security, model performance and efficiency [8,9]. This article presents a novel FL with adversarial optimisation framework for 5G edge computing networks that enhances FL privacy, security and robustness against adversarial attacks. The key contributions of this work are summarised as follows:
  • This paper proposes a novel FL framework with adversarial optimisation to strengthen the security of 5G edge computing networks. By incorporating a classifier model and an adversary model, the proposed algorithm improves the robustness of the FL model against adversarial attacks, ensuring more secure and private FL training across edge devices.
  • To train the proposed framework, the adversary model uses the Fast Gradient Sign Method (FGSM) to iteratively generate stronger perturbations based on the classifier model's responses. This improves model resilience in privacy-sensitive 5G edge computing networks.
  • For performance evaluation, extensive simulations have been performed considering the 5G-Network Intrusion Detection Dataset (5G-NIDD) [10], which validates the adaptability of the proposed algorithm to 5G edge computing networks including 5G-enabled IoT. Comprehensive experimental analysis reveals valuable insights into the proposed FL with adversarial optimisation algorithm in terms of accuracy, scalability, computational efficiency and time efficiency.
  • To demonstrate generalisability beyond 5G edge computing, the proposed model is further validated on the IDS-IoT-2024 and CICIDS2017 datasets. Results confirm the generalisation capability of the proposed method across diverse FL-based intrusion detection domains.
The rest of the article is organised as follows. Section 2 presents an overview of the most related contributions from the existing literature. Section 3 provides the details of the problem formulation and the mathematical model. Section 4 elaborates the design of the proposed methodology. Section 5 provides a detailed discussion of the experimental results and Section 6 concludes the article.

2. Related Work

Several researchers have explored FL for privacy-preserving data analysis in decentralised edge computing networks. For instance, the authors of [11] examined a secure aggregation protocol for FL to prevent data leakage during model updates. In [12], the authors introduced a foundational FL framework, federated averaging, to efficiently aggregate decentralised model updates. Despite these advancements, conventional FL algorithms remain vulnerable to adversarial attacks, including data poisoning and backdoor attacks, which pose significant threats to FL models. Ref. [13] demonstrated how malicious clients can inject backdoor triggers into global models without detection. In addition, ref. [14] presented a comprehensive analysis of poisoning attacks, illustrating their detrimental impact on FL models.

To counteract such malicious activities, various defensive strategies have been proposed. Differential privacy mechanisms, such as those discussed by the authors of [15], add noise to model updates to obscure sensitive information; however, an excessive amount of noise can degrade the model's accuracy. Similarly, ref. [16] demonstrated the use of secure multi-party computation to ensure privacy, but it introduces significant computational overhead. The authors of [17] proposed homomorphic encryption methods to enable secure model aggregation, but these remain computationally intensive for large-scale FL applications. To address the limitations of existing approaches, recent works have examined adversarial learning techniques. In [18], the authors proposed an adversarial training approach to improve the robustness of the model against inference attacks. In contrast, the authors of [19] provided a comprehensive analysis of adversarial vulnerabilities in FL and presented a decision boundary-based federated adversarial training algorithm to enhance both accuracy and robustness in FL models. Furthermore, ref. [20] provided a comprehensive analysis of current backdoor attack strategies and defences in FL, highlighting challenges and potential future directions.

Wang et al. have made significant contributions to the intersection of FL and IoT/5G systems. For instance, ref. [21] proposed a hierarchical FL mechanism to improve anomaly detection in IIoT, addressing communication efficiency in the context of distributed IoT devices. Similarly, in [22], the authors introduced a federated reinforcement learning approach to quality-of-service and privacy-aware routing for 5G-enabled IoT, focusing on optimising communication performance and privacy. Moreover, in [23], they extended FL to disease diagnosis in medical IoT environments while prioritising privacy-preserving methods to safeguard sensitive medical data. Prior work conducted by our research group explores defences against adversarial machine learning attacks [24,25], as well as data privacy and performance challenges within federated learning environments [26,27]. Inspired by these advancements, we propose a novel FL framework that unites these two domains, extending adversarial optimisation to leverage both an adversary model and a classifier model in a two-way manner that can dynamically simulate and mitigate adversarial attacks, leading to a more robust classification model.

3. System Model and Problem Formulation

This section presents the system model and problem formulation to enhance the security in 5G edge computing networks while improving robustness against adversarial attacks. Here, robustness refers to the ability of the proposed model to sustain reliable detection accuracy even when the inputs are influenced by adversarial manipulations. The proposed system model formulates the FL with adversarial optimisation as a min-max optimisation problem. The core idea revolves around the simultaneous training of a robust classifier and an adversary model in a decentralised learning environment. Overall, the classifier aims to minimise the classification loss, whereas the adversary attempts to maximise it by generating strong adversarial perturbations. This adversarial setup enhances the resilience of the classifier against malicious attacks.
In the considered system model, let $M$ represent the number of clients participating in the FL system, where each client $m \in \{1, 2, \ldots, M\}$ possesses a local dataset denoted as:
$$\mathcal{D}_m = \left\{ \left( x_m^i, y_m^i \right) \right\}_{i=1}^{n_m},$$
where $x_m^i \in \mathbb{R}^d$ represents the input data, $y_m^i \in \{1, \ldots, C\}$ is the corresponding class label and $n_m$ denotes the number of data samples at client $m$. The total number of data samples across all clients is:
$$N = \sum_{m=1}^{M} n_m.$$
The classifier model is parametrised by $\theta$ and is represented as:
$$f_\theta : \mathbb{R}^d \rightarrow \mathbb{R}^C.$$
The adversary model, designed to generate adversarial perturbations, is parametrised by $\phi$ and denoted as:
$$g_\phi : \mathbb{R}^d \rightarrow \mathbb{R}^d.$$
The adversary employs FGSM to generate adversarial examples that are aimed at misleading the classifier. The adversarial perturbation is expressed as follows [9]:
$$x_{adv} = x + \epsilon \cdot \operatorname{sign}\left( \nabla_x \mathcal{L}\left( f_\theta(x), y \right) \right),$$
where $\mathcal{L}(f_\theta(x), y)$ represents the loss function, typically the cross-entropy loss, and $\epsilon$ is the perturbation budget restricting the distortion introduced by the adversary. To ensure the perturbations remain realistic, the generated adversarial examples are constrained by the $L_\infty$ norm [9]:
$$\left\| x_{adv} - x \right\|_\infty \leq \epsilon.$$
This ensures that the adversarial examples remain within a feasible range while maintaining perceptual similarity to the original data. In the proposed model, these adversarial examples are generated and used for training locally at each client $m$ and are never transmitted to the global server. Only the model parameters are shared during training, which ensures that neither the raw data nor the adversarially perturbed data are exposed outside the client environment. This architecture safeguards client-side traffic information while simultaneously hardening the classifier against adversarial manipulation, complementing the inherent privacy-preserving nature of FL.
In the considered FL framework, the objective is modelled as a min-max optimisation problem. The classifier model $f_\theta$ minimises a joint loss function, while the adversary model $g_\phi$ maximises the adversarial loss to induce model degradation. This adversarial optimisation objective can be expressed as:
$$\min_\theta \max_\phi \; \frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{(x,y) \sim \mathcal{D}_m} \left[ \lambda \, \mathcal{L}\left( f_\theta(x), y \right) + (1 - \lambda) \, \mathcal{L}\left( f_\theta\left( g_\phi(x) \right), y \right) \right],$$
where $\lambda \in [0, 1]$ is a trade-off parameter that balances the clean loss and the adversarial loss, $\mathcal{L}(f_\theta(x), y)$ is the clean classification loss and $\mathcal{L}(f_\theta(g_\phi(x)), y)$ is the adversarial classification loss. Each client performs local training by optimising its own classifier objective using stochastic gradient descent. The local classifier objective for each client $m$ is given by:
$$\min_{\theta_m} \; \mathbb{E}_{(x,y) \sim \mathcal{D}_m} \left[ \lambda \, \mathcal{L}\left( f_{\theta_m}(x), y \right) + (1 - \lambda) \, \mathcal{L}\left( f_{\theta_m}\left( g_\phi(x) \right), y \right) \right].$$
The model parameters are updated using gradient-based optimisation [12]:
$$\theta_m^{t+1} = \theta_m^t - \eta \, \nabla_{\theta_m} \mathcal{L}_m,$$
where $\mathcal{L}_m$ is the local loss function and $\eta$ is the learning rate. After a specified number of local training epochs, the updated model parameters are transmitted to the central server. The server aggregates these models using the federated averaging algorithm to obtain the global model [12]:
$$\theta^{t+1} = \sum_{m=1}^{M} \frac{n_m}{N} \, \theta_m^t.$$
This aggregated model serves as the updated global model for the next communication round.
Overall, to maintain effective training dynamics, several constraints and regularisation mechanisms are incorporated. The adversarial perturbations generated by the adversary model $g_\phi$ are strictly bounded by the $L_\infty$ norm [9]:
$$\left\| g_\phi(x) - x \right\|_\infty \leq \epsilon.$$
Furthermore, gradient clipping is applied to prevent gradient explosion during training, ensuring that the gradients remain within a manageable range:
$$\left\| \nabla_\theta \mathcal{L} \right\| \leq G_{\max},$$
where $G_{\max}$ is a predefined gradient threshold. Additionally, an $L_2$ regularisation term is introduced to mitigate overfitting and encourage generalisation:
$$\mathcal{L}_{reg}(\theta) = \frac{\lambda_r}{2} \left\| \theta \right\|^2,$$
where $\lambda_r \geq 0$ controls the strength of the regularisation. The final mathematical formulation of FL with adversarial optimisation can be represented as:
$$\min_\theta \max_\phi \; \sum_{m=1}^{M} \frac{n_m}{N} \, \mathbb{E}_{(x,y) \sim \mathcal{D}_m} \left[ \lambda \, \mathcal{L}\left( f_\theta(x), y \right) + (1 - \lambda) \, \mathcal{L}\left( f_\theta\left( g_\phi(x) \right), y \right) \right] + \frac{\lambda_r}{2} \left\| \theta \right\|^2$$
$$\text{subject to:} \quad \left\| g_\phi(x) - x \right\|_\infty \leq \epsilon.$$
The formulated optimisation problem encompasses the adversarial interaction between the classifier model and the adversary model to ensure the robustness of the FL model. The FL system effectively strengthens its resilience to malicious activities by jointly optimising both clean and adversarial losses while constraining adversarial perturbations. The classifier achieves robust generalisation even under adversarial threats through iterative local training and global aggregation.

4. Proposed Federated Learning with Adversarial Optimisation Algorithm

This section presents the overall methodology of the proposed framework, covering the dataset description and pre-processing, followed by the training algorithm of the proposed FL with adversarial optimisation model for secure and robust 5G edge computing networks. Figure 1 illustrates the overall framework of the proposed model, including the key stages from data collection to model aggregation, and Algorithm 1 provides the detailed steps for training the proposed model. The following subsections provide a detailed explanation of each stage of the workflow in the proposed framework.
Algorithm 1: Federated Learning with Adversarial Optimisation

4.1. Dataset Description and Pre-Processing

In the proposed FL with adversarial optimisation mechanism, we considered the recent and realistic 5G-NIDD intrusion detection dataset for 5G wireless networks [10]. 5G-NIDD is a fully labelled dataset collected from a 5G testbed at the University of Oulu, Finland. It features data from base stations, including attack scenarios such as port scans (UDPScan, SYNScan, TCPConnect) and Denial-of-Service attacks (Slowrate, ICMP, SYN, UDP and HTTP Floods). Overall, the dataset comprises nine classes: eight attack classes and a benign class. The class distribution of the 5G-NIDD dataset is presented in Figure 2. Specifically, the dataset contains 477,737 benign samples (39.29% of the dataset). The eight attack types comprise 457,340 instances of UDP Flood (37.61%), 140,812 instances of HTTP Flood (11.58%), 73,124 instances of SlowrateDoS (6.01%), 20,052 instances of TCPConnectScan (1.65%), 20,043 instances of SYNScan (1.65%), 15,906 instances of UDPScan (1.31%), 9721 instances of SYNFlood (0.8%) and 1155 instances of ICMPFlood (only 0.1% of the dataset). The dataset contains 52 feature columns. Pre-processing the dataset plays a fundamental role in training machine and deep learning models. As presented in Figure 1, the data pre-processing starts by removing one duplicated row (data sample) from the dataset. Subsequently, the redundant feature columns with 80% to 99% missing values, including dTos, dDSb, dTtl, dHops, SrcGap, DstGap, SrcWin, DstWin, sVid, dVid, SrcTCPBase and DstTCPBase, together with other irrelevant columns named Unnamed: 0, Seq, Label and Attack Tool, are eliminated because they do not contribute significantly to the training of the network. This results in a new dataset with 36 columns, comprising 35 feature columns and a label column. After that, the remaining features with missing values within an acceptable range are identified. The identified feature columns, sTos, sDSb, sTtl and sHops, each having 214 missing values (only 0.0176% of the total feature values), were imputed based on the respective feature distribution.
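For illustration, the pre-processing steps above can be summarised in a short pandas sketch; the CSV file name and the choice of median imputation are assumptions made here rather than details taken from the paper.

```python
import pandas as pd

# Load the combined 5G-NIDD data (file name assumed for illustration).
df = pd.read_csv("5g_nidd_combined.csv")

# Remove the single duplicated row.
df = df.drop_duplicates()

# Drop feature columns with 80-99% missing values and irrelevant columns.
drop_cols = ["dTos", "dDSb", "dTtl", "dHops", "SrcGap", "DstGap",
             "SrcWin", "DstWin", "sVid", "dVid", "SrcTCPBase", "DstTCPBase",
             "Unnamed: 0", "Seq", "Label", "Attack Tool"]
df = df.drop(columns=[c for c in drop_cols if c in df.columns])

# Impute the few remaining missing values (0.0176% of feature values)
# from the respective feature distribution; median imputation is used
# here as an assumed, simple realisation of that step.
for col in ["sTos", "sDSb", "sTtl", "sHops"]:
    df[col] = df[col].fillna(df[col].median())
```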

4.2. Training of Federated Learning with Adversarial Optimisation Model

The training process of the proposed framework can be broken down into several steps, as depicted in Figure 1, which are also aligned with the steps in Algorithm 1. Algorithm 1 describes these steps in depth, providing the details relevant to the generation of adversarial examples, the interaction of the server and clients and the aggregation of local models. The first step in the proposed workflow shown in Figure 1 is to split the training data across multiple clients. Each client is assigned a portion of the data, ensuring decentralisation and data privacy. This step ensures that the local data is kept private and only the model updates are shared with the global model, as sketched below. Once the training data is divided among the clients, the next step involves the initialisation of the global model parameters, followed by transmitting the initialised global model parameters to all the clients, as shown in both Figure 1 and Algorithm 1. This includes the classifier model parameters $\theta^t$ and the adversary model parameters $\phi^t$, ensuring that every client starts with the same initial global model. In the next step, the clients receive the initialised global model and set up their local classifier and adversary models for training. These local models are then trained using the client-specific local data. At each global communication round $t$, the server synchronises both models across all clients by broadcasting the updated parameters, ensuring that every client works with the most recent version of the models.
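As a minimal sketch of the data-partitioning step, assuming the pre-processed training set is held as NumPy arrays with integer labels, the IID split across clients could be implemented as follows (the function name and seed are illustrative):

```python
import numpy as np

def split_iid(x_train, y_train, num_clients, seed=42):
    """Shuffle and partition the training data IID across clients."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_train))
    shards = np.array_split(idx, num_clients)
    # Each client m receives its private shard (x_m, y_m); only model
    # updates, never this data, are later sent to the server.
    return [(x_train[s], y_train[s]) for s in shards]
```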
Once the local models are initialised, each client proceeds with its local training using its respective dataset. For every local training epoch, the client divides its dataset into mini-batches and, for each batch, generates adversarial examples using FGSM as mentioned in step 4 of Figure 1. This is achieved by perturbing the original input data in the direction of the gradient of the classifier’s loss with respect to the input. The perturbation is controlled by a predefined parameter ϵ , which determines the strength of the adversarial attack. The adversarial input x a d v is computed as follows [9]:
$$x_{adv} = x + \epsilon \cdot \operatorname{sign}\left( \nabla_x \mathcal{L}\left( f_{\theta_m^t}(x), y \right) \right),$$
where $\mathcal{L}$ is the loss function, $f_{\theta_m^t}$ represents the classifier and $y$ denotes the true label. Then, in step 5, the clients take the adversarial data and train their classifier models using both clean and adversarial data. Each client calculates both the clean loss and the adversarial loss. The clean loss measures the classifier's performance on the original data, while the adversarial loss evaluates its performance on the adversarially perturbed data. The combined loss function, a weighted sum of the clean and adversarial losses, is then used to update the classifier model in step 6 of Figure 1. A regularisation term is also included to mitigate overfitting, resulting in the following total loss [18]:
$$\mathcal{L}_{\text{total}} = \lambda \, \mathcal{L}_{\text{clean}} + (1 - \lambda) \, \mathcal{L}_{\text{adv}} + \frac{\lambda_r}{2} \left\| \theta_m^t \right\|^2,$$
where $\lambda$ controls the balance between clean and adversarial training and $\lambda_r$ is the regularisation coefficient.
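A minimal TensorFlow sketch of this local step is given below. It assumes sparse integer labels with a categorical cross-entropy loss; the value of $\lambda$ shown is an illustrative default, while $\epsilon = 0.1$ matches the perturbation strength used later in the experiments.

```python
import tensorflow as tf

cce = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm_perturb(classifier, x, y, epsilon=0.1):
    """Generate FGSM adversarial examples for one mini-batch (step 4)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = cce(y, classifier(x, training=False))
    grad = tape.gradient(loss, x)
    # Perturb in the direction of the loss gradient; stop_gradient cuts the
    # generation step from the graph used for the classifier update.
    return tf.stop_gradient(x + epsilon * tf.sign(grad))

def total_loss(classifier, x, y, lam=0.5, lam_reg=1e-5, epsilon=0.1):
    """Weighted clean + adversarial loss with L2 regularisation (steps 5-6)."""
    x_adv = fgsm_perturb(classifier, x, y, epsilon)
    clean_loss = cce(y, classifier(x, training=True))
    adv_loss = cce(y, classifier(x_adv, training=True))
    # tf.nn.l2_loss(w) = sum(w**2) / 2, matching the (lambda_r / 2) * ||theta||^2 term.
    l2 = tf.add_n([tf.nn.l2_loss(w) for w in classifier.trainable_weights])
    return lam * clean_loss + (1.0 - lam) * adv_loss + lam_reg * l2
```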
The adversary model is updated using gradient ascent to maximise adversarial loss. This ensures the generation of more effective adversarial examples, further challenging the classifier. The update rule for the adversary is given as follows:
$$\phi_m^{t+1} = \phi_m^t + \eta_\phi \, \nabla_\phi \mathcal{L}_{\text{adv}},$$
where $\eta_\phi$ is the learning rate for the adversary. Conversely, the classifier is updated using gradient descent to minimise the total loss [12]:
$$\theta_m^{t+1} = \theta_m^t - \eta_\theta \, \nabla_\theta \mathcal{L}_{\text{total}},$$
where $\eta_\theta$ is the classifier's learning rate. After completing the local training for the specified number of epochs, each client sends its updated classifier and adversary parameters back to the server as mentioned in step 7 of Figure 1. Then, in step 8, the server aggregates the model updates using federated averaging. This involves computing a weighted average of the client models, where the weights are proportional to the number of data samples held by each client. The number of samples per client affects the weighting of the aggregated updates, which ensures that clients with more data have a greater influence on the global model update. However, no raw data is transmitted to the server, preserving privacy throughout the training process. The global classifier and adversary parameters are updated as follows [12]:
$$\theta^{t+1} = \sum_{m=1}^{M} \frac{n_m}{N} \, \theta_m^t,$$
$$\phi^{t+1} = \sum_{m=1}^{M} \frac{n_m}{N} \, \phi_m^t,$$
where $M$ is the total number of clients, $n_m$ is the number of samples at client $m$ and $N = \sum_{m=1}^{M} n_m$ is the total number of samples across all clients.
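The weighted aggregation in step 8 can be expressed compactly as follows, assuming each client's parameters are held as a list of NumPy arrays in the style of Keras's get_weights(); the function name is illustrative:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client parameters (classifier or adversary).

    client_weights: one entry per client, each a list of NumPy arrays
                    (e.g. from model.get_weights()).
    client_sizes:   local sample counts n_m used as aggregation weights.
    """
    total = float(sum(client_sizes))
    num_tensors = len(client_weights[0])
    aggregated = []
    for i in range(num_tensors):
        aggregated.append(sum((n / total) * w[i]
                              for w, n in zip(client_weights, client_sizes)))
    return aggregated
```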
This process of local training followed by model aggregation is repeated for a specified number of global rounds, allowing the model to converge and improve at each step as presented in step 9 of the proposed workflow in Figure 1. At the end of the training, the server returns the final classifier and adversary model parameters. This approach not only ensures the classifier’s robustness against adversarial attacks but also maintains data privacy by keeping data decentralised between clients.
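Putting the pieces together, a high-level sketch of the overall training loop is shown below, reusing the total_loss and federated_average helpers sketched above. In this simplified version the adversary is realised directly through FGSM inside total_loss, and the optimiser settings and model constructor are placeholders rather than the exact configuration of the paper:

```python
import tensorflow as tf

def run_federated_training(build_model, client_data, rounds=100,
                           local_epochs=5, batch_size=128, epsilon=0.1):
    """Local adversarial training followed by FedAvg aggregation."""
    global_model = build_model()
    sizes = [len(y_m) for _, y_m in client_data]

    for _ in range(rounds):
        client_weights = []
        for x_m, y_m in client_data:
            # Each client starts from the current global parameters.
            local = build_model()
            local.set_weights(global_model.get_weights())
            opt = tf.keras.optimizers.Adam(1e-3)
            for _ in range(local_epochs):
                for start in range(0, len(x_m), batch_size):
                    xb = x_m[start:start + batch_size]
                    yb = y_m[start:start + batch_size]
                    with tf.GradientTape() as tape:
                        loss = total_loss(local, xb, yb, epsilon=epsilon)
                    grads = tape.gradient(loss, local.trainable_weights)
                    opt.apply_gradients(zip(grads, local.trainable_weights))
            client_weights.append(local.get_weights())
        # Server-side weighted aggregation; raw data never leaves the clients.
        global_model.set_weights(federated_average(client_weights, sizes))
    return global_model
```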

4.3. Testing of the Trained Federated Learning with Adversarial Optimisation Model

The trained FL with adversarial optimisation model is evaluated for clean test data as well as adversarial test data to observe the overall model performance in terms of robustness against adversarial attacks for 5G edge computing networks.

5. Performance Evaluation

This section presents the performance evaluation of the proposed FL with adversarial optimisation model while providing a comparison of its results with the standard FL algorithm. All the simulations are carried out using the TensorFlow and Scikit-Learn libraries on the Google Compute Engine, which provides an Nvidia Tesla T4 Graphics Processing Unit (GPU) with high RAM for smooth execution of deep learning algorithms.

Implementation of Federated Learning with Adversarial Optimisation Model

The 5G-NIDD dataset is divided into training and testing datasets using an 80:20 split, i.e., 80% training data and 20% testing data. The class distribution is preserved during the train–test split and stratified sampling is utilised to ensure that each class is proportionally represented in both the training and testing datasets, so that no class is underrepresented or overrepresented in either set. The training dataset, along with its respective labels, is given as input to train the proposed model. In the FL setup, the data is distributed among the participating clients in an Independently and Identically Distributed (IID) manner to ensure balanced local training. Unlike standard FL, the proposed approach incorporates both a classifier model and an adversary model, which are trained in tandem to enhance robustness against adversarial attacks. The classifier is a dense neural network consisting of four fully connected layers with ReLU activation functions. It has an input layer, followed by hidden layers with 128, 64 and 32 neurons, respectively. Dropout layers with a rate of 0.2 are applied after the first two hidden layers to prevent overfitting, and the output layer uses a softmax activation function to predict the class probabilities. The adversary model is designed using FGSM to generate adversarial perturbations. It takes the classifier and the input data as input, computes the gradient of the loss with respect to the input using a gradient tape and applies perturbations based on the sign of the gradients. The perturbations are scaled using a fixed epsilon value of 0.1 to maximise the classifier's loss, effectively simulating adversarial attacks. The adversary model is trained to iteratively generate more robust adversarial examples by refining its perturbations at each iteration, considering the classifier's response to previous attacks. This training strategy forces the adversary to adapt and optimise its perturbations, effectively challenging the classifier and enhancing the model's resilience against a range of adversarial scenarios. FL is performed with various client scenarios, including 5, 10, 20, 30 and 40 clients; in each case, every client trains its local model for 5 epochs per global round using a mini-batch size of 128. The global model for each scenario is trained over 100 communication rounds, where the local model updates are aggregated using weighted averaging. The learning rate is initialised at 0.001 and follows an exponential decay schedule with a decay factor of 0.95. Both the classifier and adversary models are optimised using the Adam optimiser with a weight decay of $1 \times 10^{-5}$. Performance is evaluated using metrics such as accuracy, precision, recall and F1-score for clean inputs, while adversarial accuracy is measured using adversarial examples generated by the adversary model with perturbation strength 0.1. The proposed adversarial FL approach ensures robust intrusion detection by mitigating the impact of adversarial attacks in FL environments.
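A sketch of the classifier described above is given below, assuming the 35-feature input produced by pre-processing and the 9 output classes of 5G-NIDD; the decay_steps value and the use of AdamW to realise Adam with weight decay are assumptions:

```python
import tensorflow as tf

def build_classifier(num_features=35, num_classes=9):
    """Dense classifier: 128-64-32 hidden units, ReLU, dropout 0.2, softmax."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # Learning rate 0.001 with exponential decay (factor 0.95); the
    # decay_steps value is an assumption, as is realising "Adam with
    # weight decay 1e-5" through the AdamW optimiser.
    lr = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.001, decay_steps=1000, decay_rate=0.95)
    model.compile(optimizer=tf.keras.optimizers.AdamW(learning_rate=lr,
                                                      weight_decay=1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```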
The testing dataset and the adversarial dataset are used to evaluate the trained FL with adversarial optimisation model. Figure 3 presents the convergence analysis of the standard FL model and the proposed FL with adversarial optimisation model under both clean test data and adversarial test data. The test accuracy and test loss for the 5-client scenario are illustrated in Figure 3a and Figure 3b, respectively, while Figure 3c and Figure 3d show the test accuracy and test loss for the 40-client case. It is evident that the overall performance of the proposed model is not compromised, as the global test accuracy remains consistently high. Moreover, the test loss of the proposed approach is significantly lower in comparison with standard FL. For instance, in Figure 3d, the test loss of the standard FL model with 40 clients reaches 19.9694 on adversarial test data, whereas the proposed framework maintains a much lower test loss of 0.0124, indicating enhanced learning capability and greater stability. The effectiveness and generalisation of the model are also evident from its consistent performance over successive rounds of training. The performance of standard FL fluctuates under adversarial attacks as training rounds increase, while the proposed model maintains stability in terms of accuracy and loss, as observed in Figure 3a–d. This illustrates that the proposed framework helps to restrain the accumulation of errors, which commonly affects standard FL under adversarial attacks.
Figure 4 illustrates the robustness and scalability of the proposed model against adversarial attacks. It is observed that the presence of adversarial data severely degrades the performance of the standard FL model, while our approach with adversarial optimisation maintains a significantly higher accuracy. For instance, in models trained with 5 clients, the accuracy of standard FL drops to 59.54% on adversarial data, whereas the FL model with adversarial optimisation retains a higher accuracy of 92.79%. This trend holds across different client configurations, with the adversarial optimisation-based model attaining an accuracy of 99.44% compared with 69.13% for the standard FL model in the 40-client scenario. Overall, the experimental analysis shows a significant improvement in model performance on adversarial data in comparison with the standard FL model, while providing comparable performance on clean data.
The robustness of the proposed FL with adversarial optimisation mechanism is further highlighted by the improvement in precision, recall and F1-score presented in Table 1. For example, in the 40-client case, standard FL achieves an F1-score of only 70%, compared with an impressive F1-score of 99% for FL with adversarial optimisation. This reveals that the proposed method not only accurately classifies adversarial data but also preserves the balance between precision and recall, resulting in fewer misclassifications. The confusion matrices provided in Figure 5 and Figure 6 further confirm the reduction in misclassification on adversarial data for FL with adversarial optimisation when compared to the standard FL model.
To evaluate the classification performance of the proposed algorithm, confusion matrices were generated on both clean test data and adversarial test data for the 5-client and 40-client scenarios, as presented in Figure 5 and Figure 6, respectively. For the standard FL model with 5 clients, the confusion matrix on clean test data indicates relatively high accuracy, as shown in Figure 5a. However, a sharp decline in performance is observed under adversarial conditions, with a noticeable increase in false positives and false negatives, as displayed in Figure 5b. In contrast, Figure 5c,d demonstrate that the proposed FL with adversarial optimisation model maintains robust classification performance across both clean and adversarial test data, highlighting the effectiveness of adversarial optimisation in enhancing resilience to perturbations. When the models are scaled to 40 clients, the standard FL model again performs well on clean test data, as shown in Figure 6a, but suffers significantly under adversarial attacks, where misclassifications dominate the confusion matrix, as depicted in Figure 6b. On the contrary, the proposed FL with adversarial optimisation model continues to exhibit strong performance, with fewer misclassifications and higher consistency in both the clean and adversarial test scenarios, as illustrated in Figure 6c,d. This emphasises the robustness and scalability of the proposed model and highlights its suitability for large-scale deployment in federated edge networks exposed to adversarial threats.
Figure 7 and Table 2 present the sensitivity analysis used to assess the performance and reliability of the proposed FL architecture integrated with adversarial optimisation. The sensitivity analysis evaluates both the standard FL model and the proposed FL with adversarial optimisation model against adversarial test datasets generated using varying perturbation strengths of 0.01, 0.05, 0.1, 0.2 and 0.3. Both models were trained in the 40-client scenario, where the proposed model incorporated adversarial training with perturbation strength 0.1 while the standard FL model did not include any adversarial robustness mechanisms. Figure 7 shows that the proposed model maintains significantly higher accuracy and resilience as the perturbation strength increases, compared with the accuracy achievable by the standard FL model. It is worth noting that the proposed model attains its maximum accuracy of 99.44% when tested on adversarial data with perturbation strength 0.1, because it was trained with this perturbation strength. Overall, the accuracy of the standard FL model decreases drastically, indicating vulnerability to stronger attacks, whereas the proposed FL with adversarial optimisation model exhibits only a gradual decline while maintaining significantly higher accuracy even at ϵ = 0.3. This graceful degradation under severe adversarial conditions demonstrates the resilience of the proposed model and underscores the need for adversarial optimisation in privacy-preserving FL frameworks. In Table 2, the performance of both models is comparable at ϵ = 0.01, with F1-scores of 93% and 94% for the standard FL model and the proposed FL with adversarial optimisation model, respectively. However, the performance of the standard FL model degrades rapidly, dropping to an F1-score of 70% at ϵ = 0.1 and further to 46% at ϵ = 0.3. On the contrary, the proposed FL with adversarial optimisation model consistently outperforms its counterpart, achieving a remarkable F1-score of 99% at ϵ = 0.1 and sustaining robustness even at higher perturbation strengths with F1-scores of 80% and 54% at ϵ = 0.2 and ϵ = 0.3, respectively. The sensitivity analysis underscores the effectiveness of the proposed FL with adversarial optimisation model in improving the resilience of FL models, specifically under stronger and varying adversarial threats. The performance gap between the standard FL model and the proposed FL model highlights the need to integrate adversarial robustness mechanisms in federated environments, particularly in security-critical applications such as 5G edge computing networks and IoT.
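For reference, the sensitivity sweep can be reproduced with a short evaluation loop of the kind sketched below, which re-attacks the trained global model at each perturbation strength; it reuses the fgsm_perturb helper sketched earlier and all names are illustrative.

```python
import numpy as np

def adversarial_accuracy(classifier, x_test, y_test, epsilon, batch=1024):
    """Accuracy of a trained classifier on FGSM-perturbed test data."""
    correct, total = 0, 0
    for start in range(0, len(x_test), batch):
        xb = x_test[start:start + batch]
        yb = y_test[start:start + batch]
        xb_adv = fgsm_perturb(classifier, xb, yb, epsilon)  # sketched earlier
        preds = np.argmax(classifier.predict(xb_adv, verbose=0), axis=1)
        correct += int(np.sum(preds == yb))
        total += len(yb)
    return correct / total

# Sweep over the perturbation strengths used in the sensitivity analysis.
# for eps in [0.01, 0.05, 0.1, 0.2, 0.3]:
#     print(eps, adversarial_accuracy(global_model, x_test, y_test, eps))
```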
To assess the generalisability of the proposed model beyond 5G-NIDD, we further validated our framework on two widely used IoT intrusion detection datasets, the IDS-IoT-2024 dataset [28] and the CICIDS2017 dataset [29], using the same federated setup comprising 10 clients and training the models with ϵ = 0.1. Table 3 provides the performance comparison across all three datasets. The results reveal that the proposed framework consistently preserves clean test accuracy while substantially outperforming standard FL under adversarial perturbations of various strengths. For instance, at ϵ = 0.1, the adversarial accuracy increases from 61.25% with standard FL to 96.81% with adversarial optimisation on 5G-NIDD, from 45.88% to 97.77% on IDS-IoT-2024 and from 60.86% to 96.81% on CICIDS2017. These findings demonstrate that the proposed model generalises effectively to diverse FL-based intrusion detection domains, confirming its applicability beyond 5G edge computing scenarios.
Integrating adversarial optimisation does not compromise communication efficiency. As the number of clients increases, the communication overhead remains comparatively stable for both the standard FL and FL with adversarial optimisation approaches. Moreover, the proposed model provides a slight reduction in communication overhead compared with standard FL, as presented in Figure 8. For instance, the communication overhead of standard FL with 40 clients is 590.12 MB, while the proposed algorithm slightly reduces this to 578.4 MB. These findings suggest that FL with adversarial optimisation is feasible for large-scale FL applications without introducing additional communication cost.
Computational efficiency is also an important factor for real-time applications; therefore, both computational and time efficiency are evaluated. The proposed FL with adversarial optimisation model demonstrates an improvement in time efficiency. More precisely, the proposed mechanism achieves a reduced average inference time of 64.87 ms per data sample, compared with 69.18 ms for the standard FL model. Furthermore, the computational cost of the proposed FL model is also lower than that of the standard FL model in terms of floating-point operations (FLOPs) per inference. Specifically, the standard FL model requires 38,470 FLOPs per inference, whereas the proposed model requires 37,718 FLOPs, indicating reduced computational complexity.
Overall, it is to be noted that the proposed FL with adversarial optimisation model strengthens privacy and security while ensuring enhancement in robustness against adversarial attacks, scalability, as well as computational efficiency.

6. Conclusions

This article presents an FL with adversarial optimisation framework to enhance security, adversarial robustness and privacy preservation in 5G edge computing networks. The proposed mechanism incorporates a classifier model and an adversary model to improve the resilience of FL against malicious activities. The adversary model uses FGSM to produce adversarial perturbations that challenge the classifier model and enhance its robustness against adversarial attacks. A comprehensive, real-world 5G network dataset, namely the 5G-NIDD dataset, is utilised to train and evaluate the presented model. To evaluate the effectiveness and scalability of the proposed algorithm, we conducted experimental analysis across various client scenarios, including 5, 10, 20, 30 and 40 clients. Experimental results demonstrate that the presented approach significantly improves adversarial accuracy, reaching 99.44% with 40 clients while preserving competitive clean accuracy. Moreover, the proposed model demonstrates communication efficiency through reduced overhead and computational efficiency in terms of reduced inference time in comparison with conventional FL models. In addition to the 5G-NIDD dataset, the proposed framework is also validated on IDS-IoT-2024 and CICIDS2017, which confirms that the proposed mechanism generalises effectively to other FL-based intrusion detection domains, further supporting its applicability beyond 5G edge computing. In future work, more sophisticated adversarial methods, including Projected Gradient Descent (PGD), Carlini and Wagner (C&W) and DeepFool attacks, will be explored to optimise the adversary model and further enhance the model's robustness against ever-evolving adversarial attacks. Additionally, non-IID (non-Independent and Identically Distributed) data distributions will be considered to evaluate the performance of the proposed model in more realistic scenarios.

Author Contributions

Conceptualisation, S.Z., J.W. and P.L.; methodology, S.Z., J.W. and P.L.; software, S.Z.; validation, S.Z., J.W. and P.L.; formal analysis, S.Z., J.W. and P.L.; investigation, S.Z.; resources, S.Z. and P.L.; data curation, S.Z., J.W. and P.L.; writing—original draft preparation, S.Z.; writing—review and editing, J.W. and P.L.; visualisation, S.Z., J.W. and P.L.; supervision, P.L.; project administration, P.L.; funding acquisition, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the College of Arts, Technology and Environment at the University of the West of England.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All experimental data can be made available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
5G-NIDD: 5G-Network Intrusion Detection Dataset
FGSM: Fast Gradient Sign Method
FL: Federated Learning
FLOPs: Floating-Point Operations
GPU: Graphics Processing Unit
IID: Independent and Identically Distributed
IIoT: Industrial Internet of Things
PGD: Projected Gradient Descent
C&W: Carlini and Wagner

References

  1. Hassan, N.; Yau, K.L.A.; Wu, C. Edge computing in 5G: A review. IEEE Access 2019, 7, 127276–127289. [Google Scholar] [CrossRef]
  2. Lee, J.; Solat, F.; Kim, T.Y.; Poor, H.V. Federated learning-empowered mobile network management for 5G and beyond networks: From access to core. IEEE Commun. Surv. Tutor. 2024, 26, 2176–2212. [Google Scholar] [CrossRef]
  3. Nowroozi, E.; Haider, I.; Taheri, R.; Conti, M. Federated learning under attack: Exposing vulnerabilities through data poisoning attacks in computer networks. IEEE Trans. Netw. Serv. Manag. 2025, 22, 822–831. [Google Scholar] [CrossRef]
  4. Han, G.; Ma, W.; Zhang, Y.; Liu, Y.; Liu, S. BSFL: A blockchain-oriented secure federated learning scheme for 5G. J. Inf. Secur. Appl. 2025, 89, 103983. [Google Scholar] [CrossRef]
  5. Feng, Y.; Guo, Y.; Hou, Y.; Wu, Y.; Lao, M.; Yu, T.; Liu, G. A survey of security threats in federated learning. Complex Intell. Syst. 2025, 11, 1–26. [Google Scholar] [CrossRef]
  6. Rao, B.; Zhang, J.; Wu, D.; Zhu, C.; Sun, X.; Chen, B. Privacy inference attack and defense in centralized and federated learning: A comprehensive survey. IEEE Trans. Artif. Intell. 2024, 6, 333–353. [Google Scholar] [CrossRef]
  7. Tahanian, E.; Amouei, M.; Fateh, H.; Rezvani, M. A Game-theoretic Approach for Robust Federated Learning. Int. J. Eng. Trans. A Basics 2021, 34, 832–842. [Google Scholar]
  8. Guo, Y.; Qin, Z.; Tao, X.; Dobre, O.A. Federated Generative-Adversarial-Network-Enabled Channel Estimation. Intell. Comput. 2024, 3, 0066. [Google Scholar] [CrossRef]
  9. Grierson, S.; Thomson, C.; Papadopoulos, P.; Buchanan, B. Min-max training: Adversarially robust learning models for network intrusion detection systems. In Proceedings of the 2021 14th International Conference on Security of Information and Networks (SIN), Virtual, 15–17 December 2021; IEEE: New York, NY, USA, 2021; Volume 1, pp. 1–8. [Google Scholar]
  10. Samarakoon, S.; Siriwardhana, Y.; Porambage, P.; Liyanage, M.; Chang, S.Y.; Kim, J.; Kim, J.; Ylianttila, M. 5G-NIDD: A Comprehensive Network Intrusion Detection Dataset Generated over 5G Wireless Network. arXiv 2022, arXiv:2212.01298. [Google Scholar] [CrossRef]
  11. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečnỳ, J.; Mazzocchi, S.; McMahan, B.; et al. Towards federated learning at scale: System design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
  12. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  13. Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 2938–2948. [Google Scholar]
  14. Lyu, L.; Yu, H.; Yang, Q. Threats to federated learning: A survey. arXiv 2020, arXiv:2003.02133. [Google Scholar] [CrossRef]
  15. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
  16. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  17. Phong, L.T.; Aono, Y.; Hayashi, T.; Wang, L.; Moriai, S. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur. 2017, 13, 1333–1345. [Google Scholar] [CrossRef]
  18. Nasr, M.; Shokri, R.; Houmansadr, A. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 634–646. [Google Scholar]
  19. Zhang, J.; Li, B.; Chen, C.; Lyu, L.; Wu, S.; Ding, S.; Wu, C. Delving into the adversarial robustness of federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11245–11253. [Google Scholar]
  20. Nguyen, T.D.; Nguyen, T.; Le Nguyen, P.; Pham, H.H.; Doan, K.D.; Wong, K.S. Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions. Eng. Appl. Artif. Intell. 2024, 127, 107166. [Google Scholar] [CrossRef]
  21. Wang, X.; Garg, S.; Lin, H.; Hu, J.; Kaddoum, G.; Piran, M.J.; Hossain, M.S. Toward accurate anomaly detection in Industrial Internet of Things using hierarchical federated learning. IEEE Internet Things J. 2021, 9, 7110–7119. [Google Scholar] [CrossRef]
  22. Wang, X.; Hu, J.; Lin, H.; Garg, S.; Kaddoum, G.; Piran, M.J.; Hossain, M.S. QoS and privacy-aware routing for 5G-enabled industrial Internet of Things: A federated reinforcement learning approach. IEEE Trans. Ind. Inform. 2021, 18, 4189–4197. [Google Scholar] [CrossRef]
  23. Wang, X.; Hu, J.; Lin, H.; Liu, W.; Moon, H.; Piran, M.J. Federated learning-empowered disease diagnosis mechanism in the Internet of Medical Things: From the privacy-preservation perspective. IEEE Trans. Ind. Inform. 2022, 19, 7905–7913. [Google Scholar] [CrossRef]
  24. McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Defending against adversarial machine learning attacks using hierarchical learning: A case study on network traffic attack classification. J. Inf. Secur. Appl. 2023, 72, 103398. [Google Scholar] [CrossRef]
  25. McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Functionality-preserving adversarial machine learning for robust classification in cybersecurity and intrusion detection domains: A survey. J. Cybersecur. Priv. 2022, 2, 154–190. [Google Scholar] [CrossRef]
  26. White, J.; Legg, P. Evaluating Data Distribution Strategies in Federated Learning: A Trade-Off Analysis Between Privacy and Performance for IoT Security. In International Conference on Cyber Security, Privacy in Communication Networks; Springer: Singapore, 2023; pp. 17–37. [Google Scholar]
  27. White, J.; Legg, P. Federated learning: Data privacy and cyber security in edge-based machine learning. In Data Protection in a Post-Pandemic Society: Laws, Regulations, Best Practices and Recent Solutions; Springer: Cham, Switzerland, 2023; pp. 169–193. [Google Scholar]
  28. Koppula, M.; Leo Joseph, L.M.I. A Real-World Dataset “IDSIoT2024” for Machine Learning/Deep Learning Based Cyber Attack Detection System for IoT Architecture. In Proceedings of the 2025 3rd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), Bengaluru, India, 5–7 February 2025; pp. 1757–1764. [Google Scholar] [CrossRef]
  29. Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A.; Ribeiro, E.A. CICIDS2017 (Cleaned and Preprocessed Version). Dataset. Available online: https://www.kaggle.com/datasets/ericanacletoribeiro/cicids2017-cleaned-and-preprocessed (accessed on 31 July 2025).
Figure 1. FL with adversarial optimisation workflow.
Figure 2. Overview of the dataset distribution.
Figure 3. Convergence analysis of models.
Figure 4. Test accuracy comparison of models with various number of clients.
Figure 5. Confusion matrices for models considering 5 clients scenario. (a) Confusion matrix of the FL model with clean test data. (b) Confusion matrix of the standard FL model with adversarial test data. (c) Confusion matrix of the proposed model with clean test data. (d) Confusion matrix of the proposed model with adversarial test data.
Figure 6. Confusion matrices for models considering 40 clients scenario. (a) Confusion matrix of the FL model with clean test data. (b) Confusion matrix of the standard FL model with adversarial test data. (c) Confusion matrix of the proposed model with clean test data. (d) Confusion matrix of the proposed model with adversarial test data.
Figure 7. Test accuracy comparison of models with various perturbation strengths.
Figure 8. Communication efficiency comparison of models with various no. of clients.
Table 1. Performance comparison of models with various no. of clients.

| Clients | Strategy | Clean Precision | Clean Recall | Clean F1 | Adversarial Precision | Adversarial Recall | Adversarial F1 |
| 5 | FL | 98% | 98% | 98% | 62% | 60% | 57% |
| 5 | Proposed | 96% | 96% | 96% | 93% | 93% | 93% |
| 10 | FL | 98% | 98% | 98% | 67% | 61% | 63% |
| 10 | Proposed | 97% | 97% | 97% | 97% | 97% | 97% |
| 20 | FL | 98% | 98% | 98% | 60% | 56% | 56% |
| 20 | Proposed | 97% | 97% | 97% | 98% | 98% | 98% |
| 30 | FL | 98% | 98% | 98% | 66% | 65% | 64% |
| 30 | Proposed | 98% | 98% | 98% | 99% | 99% | 99% |
| 40 | FL | 98% | 98% | 98% | 72% | 69% | 70% |
| 40 | Proposed | 98% | 98% | 98% | 99% | 99% | 99% |
Table 2. Performance comparison of trained models with various perturbation strengths.

| Perturbation | Strategy | Precision | Recall | F1 |
| ϵ = 0.01 | FL | 93% | 93% | 93% |
| ϵ = 0.01 | Proposed | 94% | 94% | 94% |
| ϵ = 0.05 | FL | 78% | 77% | 78% |
| ϵ = 0.05 | Proposed | 97% | 97% | 97% |
| ϵ = 0.1 | FL | 72% | 69% | 70% |
| ϵ = 0.1 | Proposed | 99% | 99% | 99% |
| ϵ = 0.2 | FL | 54% | 51% | 52% |
| ϵ = 0.2 | Proposed | 80% | 81% | 80% |
| ϵ = 0.3 | FL | 48% | 45% | 46% |
| ϵ = 0.3 | Proposed | 61% | 57% | 54% |
Table 3. Performance comparison of trained models over different datasets.

| Performance Parameter | Test Data | Algorithm | 5G-NIDD [10] | IDS-IoT-2024 [28] | CICIDS2017 [29] |
| Accuracy | Clean | Proposed | 97.23% | 98.63% | 97.95% |
| Accuracy | Clean | Standard FL | 97.57% | 99.25% | 98.8% |
| Accuracy | ϵ = 0.01 | Proposed | 96.19% | 98.33% | 97.17% |
| Accuracy | ϵ = 0.01 | Standard FL | 95.22% | 97.66% | 90.83% |
| Accuracy | ϵ = 0.05 | Proposed | 96.04% | 98.13% | 97.12% |
| Accuracy | ϵ = 0.05 | Standard FL | 78.99% | 56.44% | 79.05% |
| Accuracy | ϵ = 0.1 | Proposed | 96.81% | 97.77% | 96.81% |
| Accuracy | ϵ = 0.1 | Standard FL | 61.25% | 45.88% | 60.86% |
| Accuracy | ϵ = 0.2 | Proposed | 78.55% | 93.68% | 93.01% |
| Accuracy | ϵ = 0.2 | Standard FL | 49.77% | 34.56% | 51.14% |
| Accuracy | ϵ = 0.3 | Proposed | 50.10% | 64.84% | 85.82% |
| Accuracy | ϵ = 0.3 | Standard FL | 43.89% | 28.52% | 47.09% |
| Precision | Clean | Proposed | 97% | 99% | 98% |
| Precision | Clean | Standard FL | 98% | 99% | 99% |
| Precision | ϵ = 0.01 | Proposed | 96% | 98% | 97% |
| Precision | ϵ = 0.01 | Standard FL | 95% | 98% | 90% |
| Precision | ϵ = 0.05 | Proposed | 96% | 98% | 97% |
| Precision | ϵ = 0.05 | Standard FL | 79% | 64% | 85% |
| Precision | ϵ = 0.1 | Proposed | 97% | 98% | 97% |
| Precision | ϵ = 0.1 | Standard FL | 67% | 56% | 80% |
| Precision | ϵ = 0.2 | Proposed | 83% | 94% | 92% |
| Precision | ϵ = 0.2 | Standard FL | 55% | 47% | 70% |
| Precision | ϵ = 0.3 | Proposed | 59% | 77% | 84% |
| Precision | ϵ = 0.3 | Standard FL | 49% | 44% | 66% |
| Recall | Clean | Proposed | 97% | 99% | 98% |
| Recall | Clean | Standard FL | 98% | 99% | 99% |
| Recall | ϵ = 0.01 | Proposed | 96% | 98% | 97% |
| Recall | ϵ = 0.01 | Standard FL | 95% | 98% | 91% |
| Recall | ϵ = 0.05 | Proposed | 96% | 98% | 97% |
| Recall | ϵ = 0.05 | Standard FL | 79% | 56% | 79% |
| Recall | ϵ = 0.1 | Proposed | 97% | 98% | 97% |
| Recall | ϵ = 0.1 | Standard FL | 61% | 46% | 61% |
| Recall | ϵ = 0.2 | Proposed | 79% | 94% | 93% |
| Recall | ϵ = 0.2 | Standard FL | 50% | 35% | 51% |
| Recall | ϵ = 0.3 | Proposed | 50% | 65% | 86% |
| Recall | ϵ = 0.3 | Standard FL | 44% | 29% | 47% |
| F1-Score | Clean | Proposed | 97% | 99% | 98% |
| F1-Score | Clean | Standard FL | 98% | 99% | 99% |
| F1-Score | ϵ = 0.01 | Proposed | 96% | 98% | 97% |
| F1-Score | ϵ = 0.01 | Standard FL | 95% | 98% | 90% |
| F1-Score | ϵ = 0.05 | Proposed | 96% | 98% | 97% |
| F1-Score | ϵ = 0.05 | Standard FL | 79% | 59% | 82% |
| F1-Score | ϵ = 0.1 | Proposed | 97% | 98% | 97% |
| F1-Score | ϵ = 0.1 | Standard FL | 63% | 48% | 68% |
| F1-Score | ϵ = 0.2 | Proposed | 78% | 94% | 92% |
| F1-Score | ϵ = 0.2 | Standard FL | 51% | 37% | 59% |
| F1-Score | ϵ = 0.3 | Proposed | 46% | 66% | 85% |
| F1-Score | ϵ = 0.3 | Standard FL | 45% | 31% | 55% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
