Edge-FLGuard: A Federated Learning Framework for Real-Time Anomaly Detection in 5G-Enabled IoT Ecosystems
Abstract
1. Introduction
2. Related Work
2.1. Security in 5G-IoT Networks
2.2. Edge AI for Anomaly Detection
2.3. Federated Learning in IoT Security
2.4. Research Gap and Contribution
- Lightweight, edge-deployable deep learning models (autoencoders and LSTMs);
- A privacy-preserving and communication-efficient federated learning pipeline;
- Robustness evaluations under both synthetic and public attack datasets, with explicit measurement of detection accuracy, adversarial resilience, and communication cost.
3. System Architecture: Edge-FLGuard
3.1. Overview of the Framework
- IoT Device Layer: Composed of low-power, resource-constrained devices (e.g., cameras, sensors, smart meters, and routers) that generate telemetry and network traffic. These endpoints do not perform inference but serve as data sources.
- Edge Intelligence Layer: Intermediate nodes such as gateways, micro-servers, or embedded systems capable of running lightweight deep learning models (autoencoder and LSTM). These nodes perform local anomaly detection and periodically participate in federated learning updates.
- Federated Coordination Layer: A centralized or semi-centralized aggregator (cloud or fog server) responsible for orchestrating model synchronization, update aggregation (e.g., FedAvg, Krum), and redistribution.
3.2. Local Anomaly Detection Components
- Autoencoders: Employed for unsupervised anomaly detection by reconstructing expected behavior from input features. Significant reconstruction errors signal potential anomalies.
- Long Short-Term Memory (LSTM) networks: Capture temporal dependencies in telemetry and traffic flows, making them well-suited for detecting delayed or stealthy threats over time.
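As an illustration, the reconstruction-error scoring used by the autoencoder can be sketched in a few lines of plain Python. The feature values, threshold, and function names below are illustrative assumptions; in the framework, the reconstruction would come from the trained autoencoder:

```python
def reconstruction_error(x, x_hat):
    """Mean squared error between an input vector and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomalous(x, x_hat, threshold):
    """Flag a sample whose reconstruction error exceeds the threshold."""
    return reconstruction_error(x, x_hat) > threshold

# Toy example: a near-perfect reconstruction vs. a poor one.
normal = is_anomalous([0.2, 0.5, 0.1], [0.21, 0.49, 0.12], threshold=0.01)
attack = is_anomalous([0.2, 0.5, 0.1], [0.9, 0.1, 0.8], threshold=0.01)
print(normal, attack)  # False True
```

A well-reconstructed sample stays below the threshold; a sample the model cannot reconstruct (unfamiliar behavior) exceeds it and is flagged.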
3.3. Federated Learning Orchestration
- Global Initialization: The FL coordinator distributes a common model (e.g., pre-trained on public data) to all the participating edge nodes.
- Local Training: Each edge node fine-tunes the model using its local dataset for a predefined number of epochs (E).
- Model Update: Instead of raw data, only model weight updates or gradients are sent back to the coordinator.
- Secure Aggregation: The server uses FedAvg or robust alternatives like trimmed mean or Krum to merge updates into a new global model.
- Model Distribution: The updated model is redistributed to edge nodes for the next round.
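The round structure above can be sketched as a minimal pure-Python loop over broadcast, local update, and FedAvg aggregation. The `local_update` rule (pulling weights toward a client's data mean) is a stand-in for actual gradient-based training, and the client data are illustrative:

```python
def local_update(w, local_mean, lr=0.5):
    """Stand-in for local training: pull weights toward the client's data mean."""
    return [wi + lr * (m - wi) for wi, m in zip(w, local_mean)]

def fedavg(updates, sizes):
    """FedAvg: average client weight vectors, weighted by local sample counts."""
    total = sum(sizes)
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(len(updates[0]))]

global_w = [0.0, 0.0]
clients = {"a": ([1.0, 0.0], 100), "b": ([0.0, 1.0], 300)}  # (data mean, n_samples)
updates, sizes = [], []
for mean, n in clients.values():   # broadcast + local training
    updates.append(local_update(global_w, mean))
    sizes.append(n)
global_w = fedavg(updates, sizes)  # aggregation; then redistribute
print(global_w)  # [0.125, 0.375]
```

Note that client "b", holding three times as much data, pulls the global model proportionally harder, which is exactly the FedAvg weighting.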
3.4. Communication and Coordination Model
- Upstream Flow: IoT devices send preprocessed data or events to their nearest edge node via MQTT or CoAP protocols.
- Lateral Flow: Edge nodes may share alerts with neighboring gateways for consensus in case of high-severity incidents.
- FL Model Exchange: Model updates are transmitted over encrypted gRPC or MQTT channels using TLS 1.3. Updates are compressed using quantization (e.g., 8-bit float) and may be sparsified to reduce payload size.
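The update-compression step can be illustrated with a simple uniform 8-bit scheme. This sketch assumes min-max (affine) quantization; the exact codec used for model updates may differ:

```python
def quantize(update, bits=8):
    """Uniform quantization of a float update to small ints plus (offset, scale)."""
    lo, hi = min(update), max(update)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels or 1.0  # guard against constant updates
    q = [round((v - lo) / scale) for v in update]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate float values on the server side."""
    return [lo + qi * scale for qi in q]

update = [-0.5, 0.0, 0.25, 1.0]
q, lo, scale = quantize(update)
restored = dequantize(q, lo, scale)
print(max(abs(a - b) for a, b in zip(update, restored)) < scale)  # True
```

Each 32-bit float becomes one byte plus two shared constants, roughly a 4x payload reduction before any sparsification.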
3.5. Security and Resilience Enhancements
- Privacy Strategies:
  - Optional integration of differential privacy during local model updates (via Gaussian noise injection).
  - Gradient clipping and noise injection to obfuscate sensitive patterns.
  - Use of TLS 1.3 for transport-layer security during model transmissions.
- Model Poisoning Mitigation:
  - Byzantine-resilient aggregation (e.g., trimmed mean or Krum) can be substituted for FedAvg in hostile environments.
  - Update filtering: discarding outlier updates based on gradient similarity metrics or historical contribution scores.
  - Fallback mode: edge nodes can revert to previous model versions or tighten anomaly thresholds if the global model is suspected to be corrupted.
- On-Device Verification:
  - Edge nodes retain autonomy to override global model decisions when local rules are violated (e.g., zero-day traffic spikes).
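As an illustration of the Byzantine-resilient aggregation option, a coordinate-wise trimmed mean can be sketched as follows. This is a pure-Python toy with illustrative updates, not the framework's implementation:

```python
def trimmed_mean(updates, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and smallest
    values per coordinate before averaging, limiting outlier influence."""
    dim = len(updates[0])
    agg = []
    for i in range(dim):
        vals = sorted(u[i] for u in updates)
        kept = vals[trim:len(vals) - trim]
        agg.append(sum(kept) / len(kept))
    return agg

honest = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.21]]
poisoned = [[50.0, -50.0]]  # adversarial outlier update
print(trimmed_mean(honest + poisoned, trim=1))  # close to [0.11, 0.19]
```

A plain mean would be dragged far off by the poisoned update; trimming discards the extreme value in each coordinate, so the aggregate stays near the honest consensus.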
3.6. Integration with Cybersecurity Workflows
- Alerting and Logging: Supports output to SIEM or SOAR platforms via syslog, REST API, or MQTT topics.
- Policy Enforcement: Integrates with SDN controllers or edge firewalls for automated mitigation (e.g., device quarantine).
- Human-in-the-Loop: Includes dashboard hooks for security analysts to audit decisions or retrain thresholds.
4. Dataset and Experimental Setup
4.1. Dataset Summary
4.2. Attack Scenarios
4.3. Data Preprocessing Pipeline
- Normalization: All features were min-max scaled to [0, 1].
- Windowing: Sequential models (e.g., LSTM) were trained on 10-time-step sequences using sliding windows.
- Label Encoding: Multi-class labels were converted into binary (normal vs. anomalous) for unsupervised detection and multi-label classification.
- Dimensionality Reduction: Principal Component Analysis (PCA) was applied to reduce feature dimensionality from 80+ to 30 components for autoencoders.
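The normalization and windowing steps can be sketched as follows. The window size of 10 matches the LSTM sequence length used above, while the data values themselves are illustrative:

```python
def min_max_scale(column):
    """Scale one feature column to [0, 1]."""
    lo, hi = min(column), max(column)
    span = (hi - lo) or 1.0  # guard against constant columns
    return [(v - lo) / span for v in column]

def sliding_windows(seq, size=10, step=1):
    """Fixed-length overlapping windows for sequence models such as the LSTM."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]

scaled = min_max_scale([5, 10, 15, 20])
windows = sliding_windows(list(range(12)), size=10)
print(scaled[0], scaled[-1])  # 0.0 1.0
print(len(windows))           # 3
```

In practice the scaler's (min, max) must be fitted on training data only and reused at inference time, so that test samples outside the training range are not silently rescaled.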
4.4. Evaluation Metrics
4.5. Experimental Environment Summary
5. Methodology
5.1. Edge Machine Learning Models
5.1.1. Autoencoders
5.1.2. LSTM Networks
5.2. Federated Learning Process
5.2.1. Training Workflow
- Model Broadcast: A central aggregator sends the current global model to participating edge nodes.
- Local Update: Each client k performs E epochs of mini-batch gradient descent on its local data: w_k ← w_k − η∇ℓ(w_k; b) for each mini-batch b, where η is the learning rate.
- Secure Aggregation: The server collects and aggregates the updates by weighted averaging (FedAvg): w ← Σ_k (n_k / n) w_k, where n_k is the number of samples at client k and n = Σ_k n_k.
5.2.2. Handling Non-IID Data
- Proximal regularization (e.g., FedProx);
- Client selection or clustering based on data similarity;
- Local fine-tuning post-aggregation for personalization.
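The proximal-regularization option (FedProx) adds a penalty (μ/2)·||w_local − w_global||² to each client's loss, keeping heterogeneous clients from drifting too far from the global model. A minimal sketch, with an illustrative μ:

```python
def fedprox_penalty(w_local, w_global, mu=0.01):
    """FedProx proximal term: (mu / 2) * squared L2 distance to the global model."""
    return 0.5 * mu * sum((a - b) ** 2 for a, b in zip(w_local, w_global))

def local_loss(task_loss, w_local, w_global, mu=0.01):
    """Client objective: ordinary task loss plus the proximal penalty."""
    return task_loss + fedprox_penalty(w_local, w_global, mu)

# A drifted client pays a higher penalty than one near the global model.
print(fedprox_penalty([1.0, 1.0], [0.0, 0.0], mu=0.1))  # 0.1
print(fedprox_penalty([0.1, 0.1], [0.0, 0.0], mu=0.1))  # 0.001
```

Larger μ enforces tighter coupling to the global model (more stable under non-IID data, but less personalization); μ = 0 recovers plain FedAvg.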
5.3. Anomaly Detection Pipeline
- Feature Extraction and Normalization: Input data (e.g., IoT telemetry and network logs) is preprocessed into fixed-length, normalized feature vectors or sequences.
- Model Inference: Data is passed through either the autoencoder or LSTM model to compute anomaly scores as follows:
- Autoencoder: Mean squared reconstruction error;
- LSTM: Deviation between predicted and observed sequences.
- Thresholding: Anomalies are flagged using a dynamic or percentile-based threshold θ: a sample with anomaly score s is labeled anomalous if s > θ.
- Alert Generation: Depending on policy, edge nodes may:
- Trigger alerts (e.g., to SIEM or admin systems);
- Log events for future FL training;
- Enforce mitigation policies (e.g., quarantine or reset).
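The percentile-based variant of the thresholding step can be sketched as follows; the score values and the 80th-percentile choice are illustrative:

```python
def percentile_threshold(scores, pct=80):
    """Pick the anomaly threshold theta as the pct-th percentile of scores
    observed on (mostly) normal traffic."""
    ordered = sorted(scores)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

# Nine low scores from normal traffic, one outlier.
scores = [0.01, 0.02, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.90]
theta = percentile_threshold(scores, pct=80)
flags = [s > theta for s in scores]
print(theta, sum(flags))  # 0.08 1
```

Because θ is derived from recent score distributions rather than fixed a priori, the detector adapts as baseline traffic drifts; re-estimating θ over a sliding window gives the "dynamic" variant.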
6. Experimental Configuration
6.1. Model Configuration
6.2. Federated Learning Settings
7. Results and Discussion
7.1. Anomaly Detection Performance
7.2. Latency and Resource Overhead
7.3. Comparative Analysis with Baseline Architectures
- A centralized model trained on global data in the cloud;
- A local-only baseline, with models trained and deployed independently.
7.4. Privacy and Scalability Analysis
- First, gradient norm clipping was applied during client-side training to prevent potential overfitting or model inversion attacks. On average, only 6.8% of clients exceeded the configured L2 norm threshold (set to 1.0), suggesting that most updates adhered to safe update magnitudes without distortion.
- We also introduced differential privacy (DP) noise to local model updates using Gaussian perturbations. The noise-to-signal ratio (NSR), defined as the L2 norm of the added noise over the L2 norm of the raw gradient, was maintained at approximately NSR = 0.21. Despite this privacy enhancement, detection performance remained robust: the federated LSTM model maintained an F1-score of 0.91, compared to 0.93 without noise, indicating only a 2.1% degradation in performance for significant gains in privacy protection.
- Finally, we tracked the empirical privacy budget ε throughout federated training. Using Rényi differential privacy accounting with the configured clipping bound and noise multiplier over a total of 50 communication rounds, we estimated the cumulative budget ε at a fixed δ. This level of privacy is well-aligned with established best practices in differentially private learning for distributed systems.
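The clipping and noise-injection steps described above can be sketched as follows. The default clipping bound of 1.0 matches the configured L2 threshold; the noise multiplier, seed, and function name are illustrative assumptions:

```python
import math
import random

def clip_and_noise(grad, clip_norm=1.0, noise_mult=0.5, rng=None):
    """Clip an update to an L2 norm bound, then add Gaussian noise scaled
    by the clipping bound (the Gaussian mechanism for DP updates)."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]         # L2 norm now <= clip_norm
    sigma = noise_mult * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

# A raw update with L2 norm 5.0 is scaled down to norm 1.0 before noising.
noisy = clip_and_noise([3.0, 4.0])
```

Clipping bounds any single client's influence on the aggregate, which is what lets the added Gaussian noise translate into a quantifiable (ε, δ) guarantee under RDP accounting.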
7.5. Limitations
- Convergence Delay under High Non-IID: Training becomes slower with highly imbalanced or disjoint client datasets.
- Vulnerability to Large-Scale Poisoning: The framework remains susceptible when >40% of clients are adversarial, despite basic filtering and fallback logic.
- Hardware Constraints: While tested on realistic edge platforms (e.g., Pi 4, Jetson Nano), true ultra-constrained deployment (e.g., microcontrollers) will require further optimization.
8. Conclusions and Future Work
- High detection performance, achieving F1-scores ≥ 0.91 and AUC-ROC values up to 0.96;
- Low inference latency (<20 ms), compatible with real-time edge deployments;
- Efficient scalability, maintaining stability with up to 100 participating clients.
Future Work
- Adaptive Detection via Reinforcement Learning: Future implementations may integrate online learning techniques to dynamically adjust anomaly thresholds and response strategies in real-time, based on evolving device behavior or contextual feedback.
- Robust Federated Defenses: We aim to enhance resilience to adversarial clients by adopting Byzantine-resilient aggregation algorithms (e.g., Krum and median) and incorporating client reputation scoring to detect and mitigate malicious behavior during FL rounds.
- Deployment in Smart City Environments: A critical step will be field deployment in live urban IoT ecosystems—such as smart traffic infrastructure, utility monitoring, or edge video analytics platforms—to validate Edge-FLGuard’s robustness under real-world data and network variability.
- Model Explainability for Edge Decisions: Integrating explainability techniques (e.g., SHAP) will be crucial for enhancing the transparency of anomaly decisions, particularly in regulated or human-supervised domains. This will support auditability, trust, and adoption in safety-critical applications.
- Energy and Thermal Profiling for Ultra-Low-Power Devices: While Edge-FLGuard operates within acceptable memory and latency constraints, future work will include profiling power consumption and thermal behavior across ultra-constrained hardware platforms (e.g., ARM Cortex-M series and battery-powered sensors). This will inform model compression strategies and sustainable edge AI deployment at scale.
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Attack Simulation Parameters
- DDoS Attacks: We used hping3 and Scapy to generate high-volume packet floods targeting simulated edge nodes, with packet rates ranging from 10,000 to 100,000 packets per second. Attacks were scheduled in bursts of 30–60 s during each simulation cycle.
- Spoofing Attacks: IP and MAC address spoofing were emulated using custom Python scripts that randomly altered device identifiers within session payloads. Spoofing frequency was configured to mimic periodic intrusion behavior without overwhelming traffic volume.
- Unauthorized Access Attempts: Synthetic logs were crafted to include credential stuffing and privilege escalation patterns, simulated via system call and process injection anomalies (inspired by TON_IoT attack taxonomies).
- Model Poisoning: In the federated setting, up to 30% of clients were intentionally poisoned by either label-flipping their local datasets or submitting inverted gradient updates to the server. These adversarial clients were randomly selected at each training round.
- Labeling Strategy: Attack intervals and injected traffic were timestamped and annotated to create labeled datasets. For evaluation, anomaly labels were assigned at the flow level for network data and sequence-level for telemetry data (LSTM inputs).
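The label-flipping part of this setup can be sketched as follows. Client data, the seed, and function names are illustrative, and the fraction is 0.5 for a small example (versus the 30% used in the experiments):

```python
import random

def poison_clients(client_labels, fraction=0.3, rng=None):
    """Flip binary labels (0 = normal, 1 = anomalous) for a random fraction
    of clients, emulating the label-flipping attack during FL rounds."""
    rng = rng or random.Random(42)
    n_poisoned = int(len(client_labels) * fraction)
    poisoned_ids = set(rng.sample(range(len(client_labels)), n_poisoned))
    flipped = {
        cid: [1 - y for y in labels] if cid in poisoned_ids else labels
        for cid, labels in enumerate(client_labels)
    }
    return flipped, poisoned_ids

clients = [[0, 0, 1], [1, 0, 0], [0, 1, 1], [1, 1, 0]]
poisoned, ids = poison_clients(clients, fraction=0.5)
print(len(ids))  # 2
```

Re-sampling the poisoned set each round, as described above, models a mobile adversary rather than a fixed compromised subset.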
Appendix B. Statistical Validation
Model | Training | F1-Score (±σ) | AUC-ROC (±σ) |
---|---|---|---|
Autoencoder | Local-only | 0.79 ± 0.014 | 0.85 ± 0.012 |
Autoencoder | Federated FL | 0.89 ± 0.011 | 0.94 ± 0.010 |
LSTM | Local-only | 0.84 ± 0.013 | 0.89 ± 0.011 |
LSTM | Federated FL | 0.91 ± 0.009 | 0.96 ± 0.008 |
References
- Hossain, M.S.; Muhammad, G. Cloud-Assisted Industrial Internet of Things (IIoT)—Enabled Framework for Health Monitoring. Comput. Netw. 2016, 101, 192–202. [Google Scholar] [CrossRef]
- Khan, A.A.; Rehmani, M.H.; Reisslein, M. Cognitive Radio for Smart Grids: Survey of Architectures, Spectrum Sensing Mechanisms, and Networking Protocols. IEEE Commun. Surv. Tutor. 2016, 18, 860–898. [Google Scholar] [CrossRef]
- Choo, K.-K.R. Internet of Things (IoT) Security and Forensics: Challenges and Opportunities. In Proceedings of the 2nd Workshop on CPS&IoT Security and Privacy; Association for Computing Machinery: New York, NY, USA, 2021; pp. 27–28. [Google Scholar] [CrossRef]
- Alshamrani, A.; Myneni, S.; Chowdhary, A.; Huang, D. A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Commun. Surv. Tutor. 2019, 21, 1851–1877. [Google Scholar] [CrossRef]
- Intersoft Consulting. The EU General Data Protection Regulation (GDPR). 2018. Available online: https://gdpr-info.eu/ (accessed on 27 May 2025).
- Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2019; pp. 739–753. [Google Scholar] [CrossRef]
- Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
- Luo, Y.; Xiao, Y.; Cheng, L.; Peng, G.; Yao, D. Deep Learning-Based Anomaly Detection in Cyber-Physical Systems: Progress and Opportunities. ACM Comput. Surv. 2021, 54, 106. [Google Scholar] [CrossRef]
- Meng, L.; Li, D. Novel Edge Computing-Based Privacy-Preserving Approach for Smart Healthcare Systems in the Internet of Medical Things. J. Grid Comput. 2023, 21, 66. [Google Scholar] [CrossRef]
- Zolanvari, M.; Teixeira, M.A.; Gupta, L.; Khan, K.M.; Jain, R. Machine Learning-Based Network Vulnerability Analysis of Industrial Internet of Things. IEEE Internet Things J. 2019, 6, 6822–6834. [Google Scholar] [CrossRef]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Shubyn, B.; Mrozek, D.; Maksymyuk, T.; Sunderam, V.; Kostrzewa, D.; Grzesik, P.; Benecki, P. Federated Learning for Anomaly Detection in Industrial IoT-Enabled Production Environment Supported by Autonomous Guided Vehicles. In Proceedings of the Computational Science—ICCS 2022; Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 409–421. [Google Scholar] [CrossRef]
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How To Backdoor Federated Learning. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR, Palermo, Sicily, Italy, 3 June 2020; pp. 2938–2948. [Google Scholar] [CrossRef]
- Nguyen, H.; Nguyen, T.; Leppänen, T.; Partala, J.; Pirttikangas, S. Situation Awareness for Autonomous Vehicles Using Blockchain-Based Service Cooperation. arXiv 2022, arXiv:2204.03313. [Google Scholar] [CrossRef]
- Aledhari, M.; Razzak, R.; Parizi, R.M.; Saeed, F. Federated Learning: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Access 2020, 8, 140699–140725. [Google Scholar] [CrossRef]
- Ucci, D.; Aniello, L.; Baldoni, R. Survey of Machine Learning Techniques for Malware Analysis. Comput. Secur. 2019, 81, 123–147. [Google Scholar] [CrossRef]
- Javaid, A.; Niyaz, Q.; Sun, W.; Alam, M. A Deep Learning Approach for Network Intrusion Detection System. In Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS); ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering): Brussels, Belgium, 2016; pp. 21–26. [Google Scholar] [CrossRef]
- Khan, M.A.; Salah, K. IoT Security: Review, Blockchain Solutions, and Open Challenges. Future Gener. Comput. Syst. 2018, 82, 395–411. [Google Scholar] [CrossRef]
- Dorri, A.; Steger, M.; Kanhere, S.S.; Jurdak, R. BlockChain: A Distributed Solution to Automotive Security and Privacy. IEEE Commun. Mag. 2017, 55, 119–125. [Google Scholar] [CrossRef]
- Meidan, Y.; Bohadana, M.; Mathov, Y.; Mirsky, Y.; Shabtai, A.; Breitenbacher, D.; Elovici, Y. N-BaIoT—Network-Based Detection of IoT Botnet Attacks Using Deep Autoencoders. IEEE Pervasive Comput. 2018, 17, 12–22. [Google Scholar] [CrossRef]
- Revathi, M.; Ramalingam, V.V.; Amutha, B. A Machine Learning Based Detection and Mitigation of the DDOS Attack by Using SDN Controller Framework. Wirel. Pers. Commun. 2022, 127, 2417–2441. [Google Scholar] [CrossRef]
- Vinayakumar, R.; Alazab, M.; Soman, K.P.; Poornachandran, P.; Al-Nemrat, A.; Venkatraman, S. Deep Learning Approach for Intelligent Intrusion Detection System. IEEE Access 2019, 7, 41525–41550. [Google Scholar] [CrossRef]
- Dai, Y.; Chen, Z.; Li, J.; Heinecke, S.; Sun, L.; Xu, R. Tackling Data Heterogeneity in Federated Learning with Class Prototypes. Proc. AAAI Conf. Artif. Intell. 2023, 37, 7314–7322. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR, Fort Lauderdale, FL, USA, 10 April 2017; pp. 1273–1282. [Google Scholar]
- Savazzi, S.; Nicoli, M.; Rampa, V. Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks. IEEE Internet Things J. 2020, 7, 4641–4654. [Google Scholar] [CrossRef]
- Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive Federated Learning in Resource Constrained Edge Computing Systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221. [Google Scholar] [CrossRef]
- Ochiai, H.; Nishihata, R.; Tomiyama, E.; Sun, Y.; Esaki, H. Detection of Global Anomalies on Distributed IoT Edges with Device-to-Device Communication. arXiv 2024, arXiv:2407.11308. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated Learning with Non-IID Data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
- Geyer, R.C.; Klein, T.; Nabi, M. Differentially Private Federated Learning: A Client Level Perspective. arXiv 2018, arXiv:1712.07557. [Google Scholar] [CrossRef]
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1175–1191. [Google Scholar] [CrossRef]
- Blanchard, P.; El Mhamdi, E.M.; Guerraoui, R.; Stainer, J. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Nice, France, 2017; Volume 30, Available online: https://dl.acm.org/doi/10.5555/3294771.3294783 (accessed on 5 June 2025).
Study | Focus Area | Approach | Strengths | Limitations |
---|---|---|---|---|
Vinayakumar et al. (2019) [22] | Deep learning for IDS | Centralized autoencoder + CNN | High detection accuracy | Centralized and not scalable to IoT |
Savazzi et al. (2020) [25] | Federated learning in IoT | Consensus-based FL for anomaly detection | Decentralized and scalable | No real-time evaluation and assumes IID data |
Dai et al. (2023) [23] | Non-IID handling in FL | Prototype-based personalization (FedNH) | Addresses client heterogeneity | Not tested on IoT or edge environments |
Nguyen et al. (2022) [14] | Blockchain for AV edge sharing | Hyperledger + secure data contracts | Privacy, trust, and realistic deployment | Limited ML integration |
Ochiai et al. (2024) [27] | D2D edge anomaly detection | WAFL-autoencoder + D2D | Fully distributed and unsupervised | High communication load and new method |
Bagdasaryan et al. (2020) [13] | FL security | Backdoor attacks in FL | Highlights threat model; baseline for defenses | No mitigation strategies and not IoT-specific |
Blanchard et al. (2017) [31] | Byzantine FL | Krum aggregation | Robust to adversaries in theory | High cost and not adapted for constrained devices |
Shubyn et al. (2022) [12] | Federated anomaly detection | FL for IIoT with mobile robots | Industry-oriented and real-world setup | Basic models and lacks privacy techniques |
This Work (Edge-FLGuard) | 5G-IoT security | FL + edge autoencoders + LSTM | Real-time, privacy-preserving, and robust to heterogeneity | (To be discussed in Results) |
Dataset | Source | Features | Classes | Samples Used |
---|---|---|---|---|
CICIDS2017 | Public | Network flow-level features (80+) | Normal, DDoS, DoS, PortScan, Web Attack, Bot, and Brute Force | ~3 million |
TON_IoT | Public | Sensor logs + system telemetry | Normal, reconnaissance, password cracking, and injection | ~500,000 |
Synthetic (custom) | Lab-generated | Spoofed MAC/IP, DDoS burst, and model poisoning | Multi-attack labels with time-sequencing | Varied |
Attack Type | Source | Description |
---|---|---|
DDoS | CICIDS, Synthetic | High-volume packet floods targeting edge nodes |
Spoofing | Synthetic | IP/MAC spoofing in temporal bursts |
Unauthorized Access | TON_IoT | Credential stuffing and privilege escalation |
Model Poisoning | Synthetic | Malicious updates or label flipping during FL rounds |
Metric | Definition | Purpose |
---|---|---|
Precision (P) | TP / (TP + FP) | False alert minimization |
Recall (R) | TP / (TP + FN) | Anomaly sensitivity |
False Positive Rate (FPR) | FP / (FP + TN) | Measures false alarms on normal data |
F1-Score | 2 · P · R / (P + R) | Balance of P and R |
AUC-ROC | Area under the ROC curve | Threshold-independent accuracy |
Inference Latency | Avg. time per sample (ms) | Real-time viability |
Update Time | Local training + communication (s) | Training efficiency |
Memory Usage | RAM consumption on edge (MB) | Resource feasibility |
Gradient Norm Clipping Ratio | % of updates above the L2 threshold | Privacy leakage indicator |
Noise-to-Signal Ratio (NSR) | L2 norm of added noise / L2 norm of raw gradient | DP strength vs. accuracy |
Privacy Budget (ε) | Cumulative ε tracked over rounds | Differential privacy accounting |
Component | Specification |
---|---|
Edge Devices | Raspberry Pi 4, Jetson Nano |
FL Aggregator | Ubuntu 22.04, 16 GB RAM, 8 vCPUs |
ML Libraries | TensorFlow 2.13, PyTorch 1.13 |
FL Framework | Flower (FLwr) + custom client-server pipeline |
Simulation Tools | Mininet, Scapy, Wireshark, Docker |
Protocols | MQTT (Mosquitto), gRPC with TLS 1.3 |
Reproducibility | 5 runs with fixed random seeds |
Parameter | Value |
---|---|
Hidden Layers | 128 → 64 → 32 |
Activation (Enc) | ReLU |
Activation (Dec) | Sigmoid |
Loss Function | MSE |
Parameter | Value |
---|---|
Layers | 2 (64, 32 units) |
Sequence Length | 10 |
Dropout | 0.3 |
Loss Function | Reconstruction MSE |
Parameter | Value |
---|---|
Optimizer | Adam |
Learning Rate | 1 × 10−3 |
Batch Size | 64 |
Epochs per Round | 3 |
Parameter | Value |
---|---|
Global Rounds | 50 |
Clients per Round | 10 (from 50 total) |
Aggregation Method | FedAvg (optionally: trimmed mean) |
Client Participation Rate | 20% |
Non-IID Strategy | Clients grouped by traffic type |
Adversarial Setup | 30% poisoned clients (FL rounds) |
Communication | Encrypted gRPC (TLS 1.3) |
Model | Training | Precision | Recall | F1-Score | AUC-ROC | FPR | FNR |
---|---|---|---|---|---|---|---|
Autoencoder | Local-only | 0.84 | 0.76 | 0.79 | 0.85 | 0.09 | 0.24 |
Autoencoder | Federated FL | 0.91 | 0.88 | 0.89 | 0.94 | 0.06 | 0.12 |
LSTM | Local-only | 0.87 | 0.82 | 0.84 | 0.89 | 0.07 | 0.18 |
LSTM | Federated FL | 0.93 | 0.90 | 0.91 | 0.96 | 0.05 | 0.08 |
Metric | Local AE | FL AE | Local LSTM | FL LSTM |
---|---|---|---|---|
Inference latency (ms) | 12.4 | 13.1 | 18.7 | 19.6 |
Model update time (s) | — | 3.4 | — | 4.1 |
Memory usage (MB) | 26 | 29 | 39 | 42 |
System | F1-Score | Latency (ms) | Privacy Exposure |
---|---|---|---|
Centralized Model | 0.92 | 45.6 | High (full data sharing) |
Local-only Baseline | 0.79 | 11.3 | Low (but low accuracy) |
Edge-FLGuard (ours) | 0.91 | 13.1 | Low (no raw data shared) |
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Reis, M.J.C.S. Edge-FLGuard: A Federated Learning Framework for Real-Time Anomaly Detection in 5G-Enabled IoT Ecosystems. Appl. Sci. 2025, 15, 6452. https://doi.org/10.3390/app15126452