Search Results (183)

Search Parameters:
Keywords = non-iid

34 pages, 2740 KiB  
Article
Lightweight Anomaly Detection in Digit Recognition Using Federated Learning
by Anja Tanović and Ivan Mezei
Future Internet 2025, 17(8), 343; https://doi.org/10.3390/fi17080343 - 30 Jul 2025
Viewed by 132
Abstract
This study presents a lightweight autoencoder-based approach for anomaly detection in digit recognition using federated learning on resource-constrained embedded devices. We implement and evaluate compact autoencoder models on the ESP32-CAM microcontroller, enabling both training and inference directly on the device using 32-bit floating-point arithmetic. The system is trained on a reduced MNIST dataset (1000 resized samples) and evaluated using EMNIST and MNIST-C for anomaly detection. Seven fully connected autoencoder architectures are first evaluated on a PC to explore the impact of model size and batch size on training time and anomaly detection performance. Selected models are then re-implemented in the C programming language and deployed on a single ESP32 device, achieving training times as short as 12 min, inference latency as low as 9 ms, and F1 scores of up to 0.87. Autoencoders are further tested on ten devices in a real-world federated learning experiment using Wi-Fi. We explore non-IID and IID data distribution scenarios: (1) digit-specialized devices and (2) partitioned datasets with varying content and anomaly types. The results show that small unmodified autoencoder models can be effectively trained and evaluated directly on low-power hardware. The best models achieve F1 scores of up to 0.87 in the standard IID setting and 0.86 in the extreme non-IID setting. Despite some clients being trained on corrupted datasets, federated aggregation proves resilient, maintaining high overall performance. The resource analysis shows that more than half of the models and all the training-related allocations fit entirely in internal RAM. These findings confirm the feasibility of local float32 training and collaborative anomaly detection on low-cost hardware, supporting scalable and privacy-preserving edge intelligence. Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)
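As a rough illustration of the reconstruction-error technique this abstract describes, the sketch below pairs a compact fully connected autoencoder with per-sample reconstruction-error scoring. The layer sizes, the 14x14 input assumption, and the thresholding rule are illustrative guesses, not the authors' ESP32-CAM implementation.

```python
# Minimal sketch: fully connected autoencoder scored by reconstruction error.
# Layer sizes, input resolution, and the threshold rule are assumptions.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=196, hidden=32):   # e.g. 14x14 resized digits
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, x):
    """Per-sample mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Samples whose score exceeds a threshold fitted on normal training data
# (e.g. mean + 3 * std of the training-set scores) are flagged as anomalies.
```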

22 pages, 2678 KiB  
Article
Federated Semi-Supervised Learning with Uniform Random and Lattice-Based Client Sampling
by Mei Zhang and Feng Yang
Entropy 2025, 27(8), 804; https://doi.org/10.3390/e27080804 - 28 Jul 2025
Viewed by 168
Abstract
Federated semi-supervised learning (Fed-SSL) has emerged as a powerful framework that leverages both labeled and unlabeled data distributed across clients. To reduce communication overhead, real-world deployments often adopt partial client participation, where only a subset of clients is selected in each round. However, under non-i.i.d. data distributions, the choice of client sampling strategy becomes critical, as it significantly affects training stability and final model performance. To address this challenge, we propose a novel federated averaging semi-supervised learning algorithm, called FedAvg-SSL, that considers two sampling approaches, uniform random sampling (standard Monte Carlo) and a structured lattice-based sampling, inspired by quasi-Monte Carlo (QMC) techniques, which ensures more balanced client participation through structured deterministic selection. On the client side, each selected participant alternates between updating the global model and refining the pseudo-label model using local data. We provide a rigorous convergence analysis, showing that FedAvg-SSL achieves a sublinear convergence rate with linear speedup. Extensive experiments not only validate our theoretical findings but also demonstrate the advantages of lattice-based sampling in federated learning, offering insights into the interplay among algorithm performance, client participation rates, local update steps, and sampling strategies. Full article
(This article belongs to the Special Issue Number Theoretic Methods in Statistics: Theory and Applications)
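The two client-selection strategies compared in this abstract can be sketched as follows. The rank-1 lattice rule shown here is only an illustrative stand-in for the paper's quasi-Monte Carlo construction; the generator value and the per-round shift are assumptions.

```python
# Sketch: uniform random vs. lattice-based client sampling for partial participation.
import numpy as np

def uniform_random_sampling(num_clients, m, rng):
    """Standard Monte Carlo: m distinct clients chosen uniformly at random."""
    return rng.choice(num_clients, size=m, replace=False)

def lattice_sampling(num_clients, m, round_idx, generator=7):
    """Rank-1 lattice points in [0, 1), shifted each round and mapped to client
    indices. Keep gcd(generator, m) = 1 and num_clients >= m so indices stay distinct."""
    points = (np.arange(m) * generator / m + round_idx / m) % 1.0
    return np.floor(points * num_clients).astype(int)

rng = np.random.default_rng(0)
print(uniform_random_sampling(100, 10, rng))    # ten clients drawn at random
print(lattice_sampling(100, 10, round_idx=3))   # evenly spread deterministic picks
```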

12 pages, 759 KiB  
Article
Privacy-Preserving Byzantine-Tolerant Federated Learning Scheme in Vehicular Networks
by Shaohua Liu, Jiahui Hou and Gang Shen
Electronics 2025, 14(15), 3005; https://doi.org/10.3390/electronics14153005 - 28 Jul 2025
Viewed by 185
Abstract
With the rapid development of vehicular network technology, data sharing and collaborative training among vehicles have become key to enhancing the efficiency of intelligent transportation systems. However, the heterogeneity of data and potential Byzantine attacks cause the model to update in different directions during the iterative process, causing the boundary between benign and malicious gradients to shift continuously. To address these issues, this paper proposes a privacy-preserving Byzantine-tolerant federated learning scheme. Specifically, we design a gradient detection method based on median absolute deviation (MAD), which calculates MAD in each round to set a gradient anomaly detection threshold, thereby achieving precise identification and dynamic filtering of malicious gradients. Additionally, to protect vehicle privacy, we obfuscate uploaded parameters to prevent leakage during transmission. Finally, during the aggregation phase, malicious gradients are eliminated, and only benign gradients are selected to participate in the global model update, which improves the model accuracy. Experimental results on three datasets demonstrate that the proposed scheme effectively mitigates the impact of non-independent and identically distributed (non-IID) heterogeneity and Byzantine behaviors while maintaining low computational cost. Full article
(This article belongs to the Special Issue Cryptography in Internet of Things)
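A minimal sketch of the MAD-based screening idea described above, assuming the statistic is computed over per-client gradient norms; the threshold of three scaled MADs is an illustrative choice rather than the paper's exact rule.

```python
# Sketch: median-absolute-deviation (MAD) screening of client gradient norms.
import numpy as np

def filter_gradients_mad(grad_norms, k=3.0):
    """Return indices of clients whose gradient norm lies within k scaled MADs
    of the round's median; the remainder are treated as suspicious."""
    norms = np.asarray(grad_norms, dtype=float)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med))
    scaled_mad = 1.4826 * mad + 1e-12        # scaled to be comparable to a std
    keep = np.abs(norms - med) <= k * scaled_mad
    return np.flatnonzero(keep)

print(filter_gradients_mad([0.9, 1.1, 1.0, 0.95, 8.5, 1.05]))   # -> [0 1 2 3 5]
```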

24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Viewed by 338
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, FedULite leverages a robust, adaptive server-side aggregation strategy that uses cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while maintaining the main task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time. Full article
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
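The server-side defence sketched below combines the two ingredients named in the abstract, cosine-similarity filtering and dimension-wise adaptive scaling. The choice of the coordinate-wise median as the reference direction and the sign-agreement scaling are assumptions, not the exact FedULite aggregation rule.

```python
# Sketch: robust aggregation with cosine-similarity filtering and a
# dimension-wise adaptive step. Reference direction and scaling are assumptions.
import numpy as np

def robust_aggregate(updates, global_w, lr=1.0, sim_thresh=0.0):
    U = np.stack(updates)                              # (num_clients, dim)
    ref = np.median(U, axis=0)                         # robust reference direction
    sims = U @ ref / (np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12)
    kept = U[sims > sim_thresh]                        # drop updates opposing the reference
    mean_update = kept.mean(axis=0)
    # Per-coordinate trust: fraction of kept clients whose sign matches the mean.
    agree = (np.sign(kept) == np.sign(mean_update)).mean(axis=0)
    return global_w + lr * agree * mean_update
```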

25 pages, 1169 KiB  
Article
DPAO-PFL: Dynamic Parameter-Aware Optimization via Continual Learning for Personalized Federated Learning
by Jialu Tang, Yali Gao, Xiaoyong Li and Jia Jia
Electronics 2025, 14(15), 2945; https://doi.org/10.3390/electronics14152945 - 23 Jul 2025
Viewed by 198
Abstract
Federated learning (FL) enables multiple participants to collaboratively train models while efficiently mitigating the issue of data silos. However, large-scale heterogeneous data distributions result in inconsistent client objectives and catastrophic forgetting, leading to model bias and slow convergence. To address the challenges under non-independent and identically distributed (non-IID) data, we propose DPAO-PFL, a Dynamic Parameter-Aware Optimization framework that leverages continual learning principles to improve Personalized Federated Learning. We decompose the parameters into two components: local personalized parameters tailored to client characteristics, and global shared parameters that capture the accumulated marginal effects of parameter updates over historical rounds. Specifically, we leverage the Fisher information matrix to estimate parameter importance online, integrate path sensitivity scores within a time-series sliding window to construct a dynamic regularization term, and adaptively adjust the constraint strength to mitigate conflicts across tasks. We evaluate the effectiveness of DPAO-PFL through extensive experiments on several benchmarks under IID and non-IID data distributions. Comprehensive experimental results indicate that DPAO-PFL outperforms baselines with improvements from 5.41% to 30.42% in average classification accuracy. By decoupling model parameters and incorporating an adaptive regularization mechanism, DPAO-PFL effectively balances generalization and personalization. Furthermore, DPAO-PFL exhibits superior performance in convergence and collaborative optimization compared to state-of-the-art FL methods. Full article
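An EWC-style sketch of the Fisher-based importance weighting mentioned above: a diagonal (empirical) Fisher estimate scores parameter importance, and a quadratic penalty anchors the important parameters. The sliding-window path-sensitivity term and the adaptive strength schedule of DPAO-PFL are simplified away here.

```python
# Sketch: diagonal empirical Fisher as parameter importance + quadratic penalty.
import torch

def diagonal_fisher(model, loss_fn, data_loader):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2      # empirical Fisher diagonal
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def importance_penalty(model, fisher, anchor, strength=0.1):
    """Regularizer added to the local loss; anchor holds the previous-round weights."""
    penalty = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return strength * penalty
```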

25 pages, 654 KiB  
Article
Entropy-Regularized Federated Optimization for Non-IID Data
by Koffka Khan
Algorithms 2025, 18(8), 455; https://doi.org/10.3390/a18080455 - 22 Jul 2025
Viewed by 200
Abstract
Federated learning (FL) struggles under non-IID client data when local models drift toward conflicting optima, impairing global convergence and performance. We introduce entropy-regularized federated optimization (ERFO), a lightweight client-side modification that augments each local objective with a Shannon entropy penalty on the per-parameter update distribution. ERFO requires no additional communication, adds a single-scalar hyperparameter λ, and integrates seamlessly into any FedAvg-style training loop. We derive a closed-form gradient for the entropy regularizer and provide convergence guarantees: under μ-strong convexity and L-smoothness, ERFO achieves the same O(1/T) (or linear) rates as FedAvg (with only O(λ) bias for fixed λ and exact convergence when λ_t → 0); in the non-convex case, we prove stationary-point convergence at O(1/T). Empirically, on five-client non-IID splits of the UNSW-NB15 intrusion-detection dataset, ERFO yields a +1.6 pp gain in accuracy and +0.008 in macro-F1 over FedAvg with markedly smoother dynamics. On a three-of-five split of PneumoniaMNIST, a fixed λ matches or exceeds FedAvg, FedProx, and SCAFFOLD—achieving 90.3% accuracy and 0.878 macro-F1—while preserving rapid, stable learning. ERFO’s gradient-only design is model-agnostic, making it broadly applicable across tasks. Full article
(This article belongs to the Special Issue Advances in Parallel and Distributed AI Computing)
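The abstract does not spell out the exact form of the entropy penalty, so the sketch below makes one plausible choice: treat the normalized magnitudes of a local update as a distribution and regularize its Shannon entropy. The sign convention and λ value are assumptions.

```python
# Sketch: Shannon-entropy regularizer over normalized per-parameter update magnitudes.
import torch

def update_entropy(delta_w, eps=1e-12):
    p = torch.abs(delta_w)
    p = p / (p.sum() + eps)                      # update-magnitude distribution
    return -(p * torch.log(p + eps)).sum()       # Shannon entropy

def erfo_style_loss(task_loss, delta_w, lam=0.01):
    # Rewarding entropy spreads the update across parameters; flipping the
    # sign would concentrate it instead. The paper's convention may differ.
    return task_loss - lam * update_entropy(delta_w)
```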

17 pages, 1738 KiB  
Article
Multimodal Fusion Multi-Task Learning Network Based on Federated Averaging for SDB Severity Diagnosis
by Songlu Lin, Renzheng Tang, Yuzhe Wang and Zhihong Wang
Appl. Sci. 2025, 15(14), 8077; https://doi.org/10.3390/app15148077 - 20 Jul 2025
Viewed by 492
Abstract
Accurate sleep staging and sleep-disordered breathing (SDB) severity prediction are critical for the early diagnosis and management of sleep disorders. However, real-world polysomnography (PSG) data often suffer from modality heterogeneity, label scarcity, and non-independent and identically distributed (non-IID) characteristics across institutions, posing significant challenges for model generalization and clinical deployment. To address these issues, we propose a federated multi-task learning (FMTL) framework that simultaneously performs sleep staging and SDB severity classification from seven multimodal physiological signals, including EEG, ECG, respiration, etc. The proposed framework is built upon a hybrid deep neural architecture that integrates convolutional layers (CNN) for spatial representation, bidirectional GRUs for temporal modeling, and multi-head self-attention for long-range dependency learning. A shared feature extractor is combined with task-specific heads to enable joint diagnosis, while the FedAvg algorithm is employed to facilitate decentralized training across multiple institutions without sharing raw data, thereby preserving privacy and addressing non-IID challenges. We evaluate the proposed method across three public datasets (APPLES, SHHS, and HMC) treated as independent clients. For sleep staging, the model achieves accuracies of 85.3% (APPLES), 87.1% (SHHS_rest), and 79.3% (HMC), with Cohen’s Kappa scores exceeding 0.71. For SDB severity classification, it obtains macro-F1 scores of 77.6%, 76.4%, and 79.1% on APPLES, SHHS_rest, and HMC, respectively. These results demonstrate that our unified FMTL framework effectively leverages multimodal PSG signals and federated training to deliver accurate and scalable sleep disorder assessment, paving the way for the development of a privacy-preserving, generalizable, and clinically applicable digital sleep monitoring system. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
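A compact sketch of the shared-encoder, two-head layout described above (Conv1d front end, bidirectional GRU, multi-head self-attention, then task-specific heads). Channel counts, kernel sizes, and class counts are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: shared feature extractor with two task heads for multi-task diagnosis.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_channels=7, hidden=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)   # -> (batch, time, hidden)
        h, _ = self.gru(h)
        h, _ = self.attn(h, h, h)
        return h.mean(dim=1)              # pooled shared representation

class SleepMultiTask(nn.Module):
    def __init__(self, stages=5, severities=4):
        super().__init__()
        self.encoder = SharedEncoder()
        self.stage_head = nn.Linear(128, stages)        # sleep staging
        self.severity_head = nn.Linear(128, severities) # SDB severity

    def forward(self, x):
        z = self.encoder(x)
        return self.stage_head(z), self.severity_head(z)
```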

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Viewed by 429
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and an enhanced user experience when supported by Federated Learning (FL). However, the distributed nature of IoV networks raises inherent problems regarding data privacy, security against adversarial attacks, and the handling of available resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates adaptive trust evaluation capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches shows that Fed-DTB improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms state-of-the-art approaches in final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems, overcoming the conventional trade-off between security and efficiency. Full article
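The trust mechanism can be sketched as a weighted combination of per-client factors, with weights adapted from how strongly each factor correlates with an observed reliability signal. The factor set and the correlation target below are placeholders; the paper's seven contextual factors are not reproduced.

```python
# Sketch: correlation-weighted multi-factor trust scores and client selection.
import numpy as np

def adaptive_trust_scores(factors, reliability):
    """factors: (num_clients, num_factors) matrix with entries in [0, 1];
    reliability: per-client signal, e.g. validation gain of their past updates."""
    corr = np.array([abs(np.corrcoef(factors[:, j], reliability)[0, 1])
                     for j in range(factors.shape[1])])
    weights = corr / (corr.sum() + 1e-12)      # prioritise informative factors
    return factors @ weights                   # per-client trust in [0, 1]

def select_clients(trust, threshold=0.5):
    return np.flatnonzero(trust >= threshold)  # exclude low-trust clients
```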

31 pages, 4220 KiB  
Article
A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing
by Fateme Mazloomi, Shahram Shah Heydari and Khalil El-Khatib
Future Internet 2025, 17(7), 315; https://doi.org/10.3390/fi17070315 - 19 Jul 2025
Viewed by 256
Abstract
Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. Full article
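A minimal sketch of performance-driven aggregation at one edge server: peer models are weighted by a quality score and clear under-performers are dropped. Using validation accuracy as the score and a z-score cut are assumptions, not the exact MS-FL aggregation rules.

```python
# Sketch: performance-weighted averaging of peer models with outlier mitigation.
import numpy as np

def weighted_average(models, scores, z_cut=2.0):
    """models: list of flattened parameter vectors; scores: e.g. accuracy of
    each peer model on the server's validation split."""
    W = np.stack(models)
    s = np.asarray(scores, dtype=float)
    z = (s - s.mean()) / (s.std() + 1e-12)
    keep = z > -z_cut                          # drop clearly underperforming peers
    w = s[keep] / s[keep].sum()
    return (w[:, None] * W[keep]).sum(axis=0)
```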

20 pages, 2382 KiB  
Article
Heterogeneity-Aware Personalized Federated Neural Architecture Search
by An Yang and Ying Liu
Entropy 2025, 27(7), 759; https://doi.org/10.3390/e27070759 - 16 Jul 2025
Viewed by 274
Abstract
Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily including resource heterogeneity induced by different hardware platforms, and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing the training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of our HAPFNAS, compared to state-of-the-art federated NAS methods. Full article
(This article belongs to the Section Signal and Data Analysis)
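The random-forest performance predictor can be sketched as below, mapping an encoded candidate architecture to an estimated accuracy for one client. The integer encoding and the synthetic stand-in data are illustrative only; real training pairs would come from architectures measured on the supernet.

```python
# Sketch: random-forest predictor for per-client architecture performance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
encodings = rng.integers(0, 4, size=(200, 10))    # one op choice per searchable layer
accuracies = rng.uniform(0.6, 0.9, size=200)      # placeholder measured accuracies

predictor = RandomForestRegressor(n_estimators=100, random_state=0)
predictor.fit(encodings, accuracies)

candidates = rng.integers(0, 4, size=(1000, 10))
best = candidates[np.argmax(predictor.predict(candidates))]   # pick the top candidate
```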

32 pages, 1750 KiB  
Article
Latency Analysis of UAV-Assisted Vehicular Communications Using Personalized Federated Learning with Attention Mechanism
by Abhishek Gupta and Xavier Fernando
Drones 2025, 9(7), 497; https://doi.org/10.3390/drones9070497 - 15 Jul 2025
Viewed by 417
Abstract
In this paper, unmanned aerial vehicle (UAV)-assisted vehicular communications are investigated to minimize latency and maximize the utilization of available UAV battery power. As communication and cooperation between the UAV and vehicles are frequently required, a viable approach is to reduce the transmission of redundant messages. However, this becomes challenging when the sensor data captured by a varying number of vehicles are not independent and identically distributed (non-i.i.d.). Hence, to group vehicles with similar data distributions into clusters, we utilize federated learning (FL) based on an attention mechanism. We jointly maximize the UAV’s available battery power in each transmission window and minimize communication latency. Simulation experiments reveal that the proposed personalized FL approach achieves performance improvements over baseline FL approaches. Our model, trained on the V2X-Sim dataset, outperforms existing methods on key performance indicators. The proposed FL approach with an attention mechanism reduces communication latency by up to 35% and significantly reduces computational complexity without degrading performance. Specifically, we achieve an improvement of approximately 40% in UAV energy efficiency, a 20% reduction in communication overhead, and a 15% reduction in sojourn time. Full article
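One way to read the attention-based grouping step is sketched below: pairwise attention weights are computed from the similarity of client model updates, and vehicles with high mutual attention are clustered together. This is a speculative illustration; the paper's actual attention mechanism, clustering criterion, and threshold may differ.

```python
# Sketch: attention-style similarity between client updates used for clustering.
import numpy as np

def attention_weights(updates):
    U = np.stack(updates)
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    scores = U @ U.T                                   # pairwise cosine similarity
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)        # row-wise softmax

def cluster(updates, tau=0.2):
    """Greedy grouping: vehicles whose mutual attention exceeds the (illustrative)
    threshold tau share a cluster."""
    A = attention_weights(updates)
    clusters, assigned = [], set()
    for i in range(len(updates)):
        if i in assigned:
            continue
        group = [j for j in range(len(updates))
                 if j not in assigned and A[i, j] >= tau]
        assigned.update(group)
        clusters.append(group)
    return clusters
```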

22 pages, 8849 KiB  
Article
Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
by Junhui Song, Zhangqi Zheng, Afei Li, Zhixin Xia and Yongshan Liu
Appl. Sci. 2025, 15(14), 7843; https://doi.org/10.3390/app15147843 - 13 Jul 2025
Viewed by 374
Abstract
Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that facilitates collaborative model training across multiple clients while ensuring data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity—commonly referred to as the non-independent and identically distributed (non-IID) nature of client data—remains a fundamental challenge. To mitigate this issue, a heterogeneity-aware and robust FL framework is proposed to enhance model generalization and stability under non-IID conditions. The proposed approach introduces two key innovations. First, a heterogeneity quantification mechanism is designed based on statistical feature distributions, enabling the effective measurement of inter-client data discrepancies. This metric is further employed to guide the model aggregation process through a heterogeneity-aware weighted strategy. Second, a multi-loss optimization scheme is formulated, integrating classification loss, heterogeneity loss, feature center alignment, and L2 regularization for improved robustness against distributional shifts during local training. Comprehensive experiments are conducted on four benchmark datasets, including CIFAR-10, SVHN, MNIST, and NotMNIST under Dirichlet-based heterogeneity settings (alpha = 0.1 and alpha = 0.5). The results demonstrate that the proposed method consistently outperforms baseline approaches such as FedAvg, FedProx, FedSAM, and FedMOON. Notably, an accuracy improvement of approximately 4.19% over FedSAM is observed on CIFAR-10 (alpha = 0.5), and a 1.82% gain over FedMOON on SVHN (alpha = 0.1), along with stable enhancements on MNIST and NotMNIST. Furthermore, ablation studies confirm the contribution and necessity of each component in addressing data heterogeneity. Full article
(This article belongs to the Special Issue Cyber-Physical Systems Security: Challenges and Approaches)
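A minimal sketch of the heterogeneity-aware weighting idea, assuming the inter-client discrepancy is measured on label distributions with the Jensen-Shannon distance; the paper's statistical-feature metric and its multi-loss training scheme are not reproduced here.

```python
# Sketch: quantify client heterogeneity and reweight FedAvg-style aggregation.
import numpy as np
from scipy.spatial.distance import jensenshannon

def heterogeneity_weights(client_label_dists):
    """client_label_dists: (num_clients, num_classes) rows summing to 1."""
    P = np.asarray(client_label_dists, dtype=float)
    global_dist = P.mean(axis=0)
    divergence = np.array([jensenshannon(p, global_dist) for p in P])
    w = 1.0 / (1.0 + divergence)            # down-weight highly skewed clients
    return w / w.sum()

def aggregate(client_params, weights):
    W = np.stack(client_params)
    return (np.asarray(weights)[:, None] * W).sum(axis=0)
```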

21 pages, 4285 KiB  
Article
Federated Learning for Human Pose Estimation on Non-IID Data via Gradient Coordination
by Peng Ni, Dan Xiang, Dawei Jiang, Jianwei Sun and Jingxiang Cui
Sensors 2025, 25(14), 4372; https://doi.org/10.3390/s25144372 - 12 Jul 2025
Viewed by 386
Abstract
Human pose estimation is an important downstream task in computer vision, with significant applications in action recognition and virtual reality. However, data collected in a decentralized manner often exhibit non-independent and identically distributed (non-IID) characteristics, and traditional federated learning aggregation strategies can lead to gradient conflicts that impair model convergence and accuracy. To address this, we propose the Federated Gradient Harmonization aggregation strategy (FedGH), which coordinates update directions by measuring client gradient discrepancies and integrating gradient-projection correction with a parameter-reconstruction mechanism. Experiments conducted on a self-constructed single-arm robotic dataset and the public Max Planck Institute for Informatics (MPII) Human Pose Dataset demonstrate that FedGH achieves an average Percentage of Correct Keypoints (PCK) of 47.14% and 66.31% across all keypoints, representing improvements of 1.82 and 0.36 percentage points over the Federated Adaptive Weighting (FedAW) method. On our self-constructed dataset, FedGH attains a PCK of 86.4% for shoulder detection, surpassing other traditional federated learning methods by 20–30%. Moreover, on the self-constructed dataset, FedGH reaches over 98% accuracy in the keypoint heatmap regression model within the first 10 rounds and remains stable between 98% and 100% thereafter. This method effectively mitigates gradient conflicts in non-IID environments, providing a more robust optimization solution for distributed human pose estimation. Full article
(This article belongs to the Section Sensors and Robotics)
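The gradient-coordination step can be sketched with a PCGrad-style projection: whenever two client gradients conflict, the conflicting component is projected out before averaging. FedGH's exact correction and its parameter-reconstruction mechanism are not reproduced here.

```python
# Sketch: project out conflicting components between client gradients, then average.
import numpy as np

def harmonize(gradients):
    """gradients: list of flattened client gradient vectors (numpy arrays)."""
    G = [g.astype(float).copy() for g in gradients]
    for i, gi in enumerate(G):
        for j, gj in enumerate(gradients):
            if i == j:
                continue
            dot = gi @ gj
            if dot < 0:                               # conflicting directions
                gi -= dot / (gj @ gj + 1e-12) * gj    # remove the conflicting part
    return np.mean(G, axis=0)                         # aggregate harmonized gradients
```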

31 pages, 2077 KiB  
Article
FD-IDS: Federated Learning with Knowledge Distillation for Intrusion Detection in Non-IID IoT Environments
by Haonan Peng, Chunming Wu and Yanfeng Xiao
Sensors 2025, 25(14), 4309; https://doi.org/10.3390/s25144309 - 10 Jul 2025
Viewed by 426
Abstract
With the rapid advancement of Internet of Things (IoT) technology, intrusion detection systems (IDSs) have become pivotal in ensuring network security. However, the data produced by IoT devices is typically sensitive and tends to display non-independent and identically distributed (Non-IID) properties. These factors impose significant limitations on the application of traditional centralized learning. In response to these issues, this study introduces a novel IDS framework grounded in federated learning and knowledge distillation (KD), termed FD-IDS. The proposed FD-IDS aims to tackle issues related to safeguarding data privacy and distributed heterogeneity. FD-IDS employs mutual information for feature selection to enhance training efficiency. For Non-IID data scenarios, the system combines a proximal term with KD. The proximal term restricts the deviation between local and global models, while KD utilizes the global model to steer the training process of local models. Together, these mechanisms effectively alleviate the problem of model drift. Experiments conducted on both the Edge-IIoT and N-BaIoT datasets demonstrate that FD-IDS achieves promising detection performance across multiple evaluation metrics. Full article
(This article belongs to the Section Internet of Things)
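A sketch of the kind of local objective the abstract suggests: cross-entropy plus a FedProx-style proximal term toward the global weights plus a distillation term toward the global model's soft predictions. The coefficients and temperature are illustrative, not the paper's settings.

```python
# Sketch: local loss = cross-entropy + proximal term + knowledge distillation.
import torch
import torch.nn.functional as F

def local_loss(student_logits, labels, teacher_logits, local_params,
               global_params, mu=0.01, alpha=0.5, T=2.0):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T       # distil from the global model
    prox = sum(((p - g.detach()) ** 2).sum()           # restrict drift from global weights
               for p, g in zip(local_params, global_params))
    return ce + alpha * kd + (mu / 2) * prox
```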

14 pages, 1418 KiB  
Article
Privacy-Preserving Data Sharing via PCA-Based Dimensionality Reduction in Non-IID Environments
by Yeon-Ji Lee, Na-Yeon Shin and Il-Gu Lee
Electronics 2025, 14(13), 2711; https://doi.org/10.3390/electronics14132711 - 4 Jul 2025
Viewed by 265
Abstract
The proliferation of mobile devices has generated exponential data growth, driving efforts to extract value. However, mobile data often presents non-independent and identically distributed (non-IID) challenges owing to varying device, environmental, and user factors. While data sharing can mitigate non-IID issues, direct raw data transmission poses significant security risks like privacy breaches and man-in-the-middle attacks. This paper proposes a secure data-sharing mechanism using principal component analysis (PCA). Each node independently builds a local PCA model to reduce data dimensionality before sharing. Receiving nodes then recover data using a similarly constructed local PCA model. Sharing only dimensionally reduced data instead of raw data enhances transmission privacy. The method’s effectiveness was evaluated from both legitimate user and attacker perspectives. Experimental results demonstrated stable accuracy for legitimate users post-sharing, while attacker accuracy significantly dropped. The optimal number of principal components was also experimentally determined. Under optimal configuration, the proposed method achieves up to 42 times greater memory efficiency and superior privacy metrics compared with conventional approaches, demonstrating its advantages. Full article
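The sharing mechanism can be sketched with scikit-learn's PCA: only the dimensionally reduced view of the data leaves a node, and the receiver approximately reconstructs it. The component count is an illustrative choice (the paper tunes it experimentally), and reusing the sender's basis here is a simplification of the receiver's own locally built PCA model.

```python
# Sketch: share a PCA-reduced view of local data instead of raw samples.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
local_data = rng.normal(size=(500, 64))        # stand-in for a node's raw data

sender_pca = PCA(n_components=8).fit(local_data)
shared = sender_pca.transform(local_data)      # only this low-dimensional view is sent

# Receiver approximately recovers the data with a local PCA model (the sender's
# basis is reused here purely for illustration).
recovered = sender_pca.inverse_transform(shared)
print(shared.shape, recovered.shape)           # (500, 8) (500, 64)
```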
