Search Results (89)

Search Parameters:
Keywords = FED-AVG

22 pages, 2909 KiB  
Article
Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks
by Amarudin Daulay, Kalamullah Ramli, Ruki Harwahyu, Taufik Hidayat and Bernardi Pranggono
Mathematics 2025, 13(15), 2471; https://doi.org/10.3390/math13152471 - 31 Jul 2025
Viewed by 312
Abstract
Malware evolution presents growing security threats for resource-constrained Internet of Medical Things (IoMT) devices. Conventional federated learning (FL) often suffers from slow convergence, high communication overhead, and fairness issues in dynamic IoMT environments. In this paper, we propose FedGCL, a secure and efficient FL framework integrating contrastive graph representation learning for enhanced feature discrimination, a Jain-index-based fairness-aware aggregation mechanism, an adaptive synchronization scheduler to optimize communication rounds, and secure aggregation via homomorphic encryption within a Trusted Execution Environment. We evaluate FedGCL on four benchmark malware datasets (Drebin, Malgenome, Kronodroid, and TUANDROMD) using 5 to 15 graph neural network clients over 20 communication rounds. Our experiments demonstrate that FedGCL achieves 96.3% global accuracy within three rounds and converges to 98.9% by round twenty—reducing required training rounds by 45% compared to FedAvg—while incurring only approximately 10% additional computational overhead. By preserving patient data privacy at the edge, FedGCL enhances system resilience without sacrificing model performance. These results indicate FedGCL’s promise as a secure, efficient, and fair federated malware detection solution for IoMT ecosystems.
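As a rough illustration of the Jain-index-based fairness-aware aggregation mentioned in this abstract, the sketch below computes Jain's fairness index over per-client contribution scores and blends size-proportional FedAvg weights toward uniform weights when contributions are uneven. The contribution scores, the blending rule, and the function names are illustrative assumptions; the paper's actual mechanism is not reproduced here.

```python
import numpy as np

def jain_index(x):
    """Jain's fairness index: ranges from 1/n (one client dominates) to 1.0 (perfectly even)."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def fairness_aware_weights(contributions):
    """Blend data-size weights toward uniform weights when contributions are uneven.

    `contributions` could be per-client sample counts or validation scores (an assumption);
    a low Jain index pushes the aggregation weights toward equal shares.
    """
    contributions = np.asarray(contributions, dtype=float)
    base = contributions / contributions.sum()      # FedAvg-style proportional weights
    uniform = np.full_like(base, 1.0 / len(base))
    j = jain_index(contributions)                   # in (1/n, 1]
    return j * base + (1.0 - j) * uniform           # fairer mix when j is low

# Example: one dominant client gets its aggregation weight damped.
print(fairness_aware_weights([5000, 800, 600, 400]))
```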

22 pages, 2678 KiB  
Article
Federated Semi-Supervised Learning with Uniform Random and Lattice-Based Client Sampling
by Mei Zhang and Feng Yang
Entropy 2025, 27(8), 804; https://doi.org/10.3390/e27080804 - 28 Jul 2025
Viewed by 211
Abstract
Federated semi-supervised learning (Fed-SSL) has emerged as a powerful framework that leverages both labeled and unlabeled data distributed across clients. To reduce communication overhead, real-world deployments often adopt partial client participation, where only a subset of clients is selected in each round. However, under non-i.i.d. data distributions, the choice of client sampling strategy becomes critical, as it significantly affects training stability and final model performance. To address this challenge, we propose a novel federated averaging semi-supervised learning algorithm, called FedAvg-SSL, that considers two sampling approaches: uniform random sampling (standard Monte Carlo) and structured lattice-based sampling inspired by quasi-Monte Carlo (QMC) techniques, which ensures more balanced client participation through deterministic selection. On the client side, each selected participant alternates between updating the global model and refining the pseudo-label model using local data. We provide a rigorous convergence analysis, showing that FedAvg-SSL achieves a sublinear convergence rate with linear speedup. Extensive experiments not only validate our theoretical findings but also demonstrate the advantages of lattice-based sampling in federated learning, offering insights into the interplay among algorithm performance, client participation rates, local update steps, and sampling strategies.
(This article belongs to the Special Issue Number Theoretic Methods in Statistics: Theory and Applications)
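For intuition, the sketch below contrasts uniform random client sampling with a simple deterministic low-discrepancy selection. The rank-1 golden-ratio (Kronecker) sequence used here is only a stand-in for the lattice construction in the paper, which is not specified in the abstract.

```python
import numpy as np

def uniform_sampling(num_clients, m, rng):
    """Standard Monte Carlo: draw m distinct clients uniformly at random."""
    return rng.choice(num_clients, size=m, replace=False)

def lattice_sampling(num_clients, m, round_idx):
    """Quasi-Monte Carlo stand-in: map a golden-ratio (Kronecker) sequence
    onto client indices for deterministic, well-spread participation."""
    alpha = (np.sqrt(5) - 1) / 2
    points = ((round_idx * m + np.arange(m)) * alpha) % 1.0
    raw = (points * num_clients).astype(int)
    chosen, used = [], set()
    for i in raw:                      # resolve index collisions by walking forward
        while i in used:
            i = (i + 1) % num_clients
        used.add(i)
        chosen.append(i)
    return np.array(chosen)

rng = np.random.default_rng(0)
print(uniform_sampling(100, 10, rng))
print(lattice_sampling(100, 10, round_idx=3))
```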

24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Viewed by 397
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, FedULite leverages a robust, adaptive server-side aggregation strategy that uses cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while maintaining the main task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
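The robust aggregation described in this abstract can be pictured roughly as follows: filter flattened client updates by cosine similarity to a robust reference, then scale each coordinate of the surviving mean by how much clients agree on it. The median reference, the similarity threshold, and the per-dimension rule below are assumptions for illustration, not FedULite's exact procedure.

```python
import numpy as np

def robust_aggregate(updates, server_lr=1.0, sim_threshold=0.0, eps=1e-8):
    """Sketch of a robust server step (details are assumptions):
    1) filter client updates by cosine similarity to the coordinate-wise median,
    2) average the survivors,
    3) apply a dimension-wise adaptive learning rate (smaller steps where
       clients disagree, larger where they agree)."""
    U = np.stack(updates)                              # (clients, params)
    ref = np.median(U, axis=0)
    sims = (U @ ref) / (np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + eps)
    keep = U[sims > sim_threshold]                     # drop dissimilar (suspect) updates
    mean = keep.mean(axis=0)
    disagreement = keep.std(axis=0)                    # per-dimension spread
    dim_lr = server_lr / (1.0 + disagreement)          # adaptive, per-parameter
    return dim_lr * mean

benign = [np.random.randn(10) * 0.1 + 1.0 for _ in range(8)]
poisoned = [-5.0 * np.ones(10) for _ in range(2)]      # crude model-poisoning stand-in
print(robust_aggregate(benign + poisoned))
```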

22 pages, 1359 KiB  
Article
Fall Detection Using Federated Lightweight CNN Models: A Comparison of Decentralized vs. Centralized Learning
by Qasim Mahdi Haref, Jun Long and Zhan Yang
Appl. Sci. 2025, 15(15), 8315; https://doi.org/10.3390/app15158315 - 25 Jul 2025
Viewed by 263
Abstract
Fall detection is a critical task in healthcare monitoring systems, especially for elderly populations, for whom timely intervention can significantly reduce morbidity and mortality. This study proposes a privacy-preserving and scalable fall-detection framework that integrates federated learning (FL) with transfer learning (TL) to train deep learning models across decentralized data sources without compromising user privacy. The pipeline begins with data acquisition, in which annotated video-based fall-detection datasets formatted in YOLO are used to extract image crops of human subjects. These images are then preprocessed, resized, normalized, and relabeled into binary classes (fall vs. non-fall). A stratified 80/10/10 split ensures balanced training, validation, and testing. To simulate real-world federated environments, the training data is partitioned across multiple clients, each performing local training using pretrained CNN models including MobileNetV2, VGG16, EfficientNetB0, and ResNet50. Two FL topologies are implemented: a centralized server-coordinated scheme and a ring-based decentralized topology. During each round, only model weights are shared, and federated averaging (FedAvg) is applied for global aggregation. The models were trained using three random seeds to ensure result robustness and stability across varying data partitions. Among all configurations, decentralized MobileNetV2 achieved the best results, with a mean test accuracy of 0.9927, F1-score of 0.9917, and average training time of 111.17 s per round. These findings highlight the model’s strong generalization, low computational burden, and suitability for edge deployment. Future work will extend evaluation to external datasets and address issues such as client drift and adversarial robustness in federated environments.
(This article belongs to the Section Computing and Artificial Intelligence)
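Since federated averaging (FedAvg) is the aggregation rule used in this and most of the listed papers, a minimal reference sketch may help: the global model is the sample-size-weighted average of the client models, layer by layer.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: global weights are the sample-size-weighted average
    of the client models' weights (one array per layer per client)."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    num_layers = len(client_weights[0])
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(num_layers)
    ]

# Two clients, two "layers" each; client A holds 300 samples, client B holds 100.
client_a = [np.ones((2, 2)), np.zeros(3)]
client_b = [np.zeros((2, 2)), np.ones(3)]
global_model = fedavg([client_a, client_b], client_sizes=[300, 100])
print(global_model[0])   # 0.75 everywhere (weighted toward client A)
print(global_model[1])   # 0.25 everywhere
```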

25 pages, 654 KiB  
Article
Entropy-Regularized Federated Optimization for Non-IID Data
by Koffka Khan
Algorithms 2025, 18(8), 455; https://doi.org/10.3390/a18080455 - 22 Jul 2025
Viewed by 236
Abstract
Federated learning (FL) struggles under non-IID client data when local models drift toward conflicting optima, impairing global convergence and performance. We introduce entropy-regularized federated optimization (ERFO), a lightweight client-side modification that augments each local objective with a Shannon entropy penalty on the per-parameter update distribution. ERFO requires no additional communication, adds a single scalar hyperparameter λ, and integrates seamlessly into any FedAvg-style training loop. We derive a closed-form gradient for the entropy regularizer and provide convergence guarantees: under μ-strong convexity and L-smoothness, ERFO achieves the same O(1/T) (or linear) rates as FedAvg (with only O(λ) bias for fixed λ and exact convergence when λ_t → 0); in the non-convex case, we prove stationary-point convergence at O(1/T). Empirically, on five-client non-IID splits of the UNSW-NB15 intrusion-detection dataset, ERFO yields a +1.6 pp gain in accuracy and +0.008 in macro-F1 over FedAvg with markedly smoother dynamics. On a three-of-five split of PneumoniaMNIST, a fixed λ matches or exceeds FedAvg, FedProx, and SCAFFOLD—achieving 90.3% accuracy and 0.878 macro-F1—while preserving rapid, stable learning. ERFO’s gradient-only design is model-agnostic, making it broadly applicable across tasks.
(This article belongs to the Special Issue Advances in Parallel and Distributed AI Computing)
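The entropy regularizer described above can be illustrated as follows: treat the normalized magnitudes of a local update as a distribution, compute its Shannon entropy, and use the closed-form derivative with respect to the magnitudes. How ERFO folds this term into the local objective (its sign, scaling, and the exact distribution it uses) is not spelled out in the abstract, so the parameterization below is an assumption.

```python
import numpy as np

def entropy_of_update(delta, eps=1e-12):
    """Shannon entropy of the normalized per-parameter update magnitudes."""
    m = np.abs(delta) + eps
    p = m / m.sum()
    return -(p * np.log(p)).sum(), p

def entropy_grad_wrt_magnitudes(delta, eps=1e-12):
    """Closed form: for p_k = m_k / S with S = sum(m), dH/dm_k = -(log p_k + H) / S."""
    m = np.abs(delta) + eps
    S = m.sum()
    p = m / S
    H = -(p * np.log(p)).sum()
    return -(np.log(p) + H) / S

delta = np.array([0.5, -0.3, 0.1, 0.1])   # one client's local update (toy example)
H, p = entropy_of_update(delta)
print(H, entropy_grad_wrt_magnitudes(delta))
```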

17 pages, 1738 KiB  
Article
Multimodal Fusion Multi-Task Learning Network Based on Federated Averaging for SDB Severity Diagnosis
by Songlu Lin, Renzheng Tang, Yuzhe Wang and Zhihong Wang
Appl. Sci. 2025, 15(14), 8077; https://doi.org/10.3390/app15148077 - 20 Jul 2025
Viewed by 519
Abstract
Accurate sleep staging and sleep-disordered breathing (SDB) severity prediction are critical for the early diagnosis and management of sleep disorders. However, real-world polysomnography (PSG) data often suffer from modality heterogeneity, label scarcity, and non-independent and identically distributed (non-IID) characteristics across institutions, posing significant challenges for model generalization and clinical deployment. To address these issues, we propose a federated multi-task learning (FMTL) framework that simultaneously performs sleep staging and SDB severity classification from seven multimodal physiological signals, including EEG, ECG, and respiration. The proposed framework is built upon a hybrid deep neural architecture that integrates convolutional neural network (CNN) layers for spatial representation, bidirectional GRUs for temporal modeling, and multi-head self-attention for long-range dependency learning. A shared feature extractor is combined with task-specific heads to enable joint diagnosis, while the FedAvg algorithm is employed to facilitate decentralized training across multiple institutions without sharing raw data, thereby preserving privacy and addressing non-IID challenges. We evaluate the proposed method across three public datasets (APPLES, SHHS, and HMC) treated as independent clients. For sleep staging, the model achieves accuracies of 85.3% (APPLES), 87.1% (SHHS_rest), and 79.3% (HMC), with Cohen’s Kappa scores exceeding 0.71. For SDB severity classification, it obtains macro-F1 scores of 77.6%, 76.4%, and 79.1% on APPLES, SHHS_rest, and HMC, respectively. These results demonstrate that our unified FMTL framework effectively leverages multimodal PSG signals and federated training to deliver accurate and scalable sleep disorder assessment, paving the way for the development of a privacy-preserving, generalizable, and clinically applicable digital sleep monitoring system.
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
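A compact PyTorch sketch of the shared-trunk, two-head architecture described above is given below. Channel counts, layer sizes, and class counts are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SharedExtractorMTL(nn.Module):
    """Illustrative CNN + BiGRU + self-attention trunk with two task heads
    (sleep staging and SDB severity); all sizes are assumed placeholders."""
    def __init__(self, in_channels=7, hidden=64, stages=5, severities=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.stage_head = nn.Linear(2 * hidden, stages)          # sleep staging
        self.severity_head = nn.Linear(2 * hidden, severities)   # SDB severity

    def forward(self, x):                   # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)     # (batch, time', hidden)
        h, _ = self.gru(h)                  # (batch, time', 2*hidden)
        h, _ = self.attn(h, h, h)           # self-attention over time
        pooled = h.mean(dim=1)              # temporal average pooling
        return self.stage_head(pooled), self.severity_head(pooled)

model = SharedExtractorMTL()
stage_logits, severity_logits = model(torch.randn(2, 7, 3000))
print(stage_logits.shape, severity_logits.shape)
```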

20 pages, 1903 KiB  
Article
Study on the Regulatory Effect of Water Extract of Artemisia annua L. on Antioxidant Function of Mutton Sheep via the Keap1/Nrf2 Signaling Pathway
by Gen Gang, Ruiheng Gao, Ruizhen Li, Xiao Jin, Yuanyuan Xing, Sumei Yan, Yuanqing Xu and Binlin Shi
Antioxidants 2025, 14(7), 885; https://doi.org/10.3390/antiox14070885 - 18 Jul 2025
Viewed by 363
Abstract
This study was conducted through in vivo and in vitro experiments and aimed to reveal the regulatory effect of water extract of Artemisia annua L. (WEAA) on the antioxidant function of mutton sheep and the underlying mechanism. In the in vivo experiment, 32 Dorper × Han female sheep (3 months old; avg. body weight: 24 ± 0.09 kg) were allocated to four groups (eight lambs/group) and fed a diet containing 0, 500, 1000, and 1500 mg/kg WEAA, respectively. In the in vitro experiments, peripheral blood lymphocytes (PBLs) were cultured with different doses of WEAA (0, 25, 50, 100, 200, 400 µg/mL) to determine the optimal concentration, followed by a 2 × 2 factorial experiment with four treatment groups (six replicates per treatment group): the ML385(−)/WEAA(−) group, the ML385(−)/WEAA(+) group, the ML385(+)/WEAA(−) group, and the ML385(+)/WEAA(+) group. The results showed that WEAA supplementation dose-dependently increased serum, liver, and spleen tissue total antioxidant capacity, glutathione peroxidase (GSH-Px), and catalase (CAT) activity while reducing malondialdehyde levels (p < 0.05). Moreover, WEAA supplementation significantly upregulated the liver and spleen expression of nuclear factor erythroid 2-related factor 2, superoxide dismutase 2, GSH-Px, CAT, and NAD(P)H quinone dehydrogenase 1 (p < 0.05) while significantly downregulating Kelch-like ECH-associated protein 1 expression in a dose-dependent manner (p < 0.05), thereby activating the Keap1/Nrf2 pathway, with the peak effect observed in the 1000 mg/kg WEAA group. Additionally, supplementation with 100 µg/mL of WEAA showed significant antioxidant activity in the culture medium of PBLs. Its action mechanism involved the Keap1/Nrf2 pathway; specifically, WEAA exerted its antioxidant effect by upregulating the gene expression related to the Keap1/Nrf2 pathway. In conclusion, WEAA enhances sheep’s antioxidant capacity by up-regulating Keap1/Nrf2 pathway genes and boosting antioxidant enzyme activity. The results provide experimental support for the potential application of WEAA in intensive mutton sheep farming.

21 pages, 733 KiB  
Article
A Secure and Privacy-Preserving Approach to Healthcare Data Collaboration
by Amna Adnan, Firdous Kausar, Muhammad Shoaib, Faiza Iqbal, Ayesha Altaf and Hafiz M. Asif
Symmetry 2025, 17(7), 1139; https://doi.org/10.3390/sym17071139 - 16 Jul 2025
Viewed by 489
Abstract
By combining large collections of patient data with advanced technology, healthcare organizations can advance medical research and increase the quality of patient care. At the same time, health records present serious privacy and security challenges because they are confidential and can be breached over networks. Even when traditional federated learning methods are used to share data, patient information might still be at risk of inference while the model is being updated. This paper proposes the Privacy-Preserving Federated Learning with Homomorphic Encryption (PPFLHE) framework, which supports secure cooperation in healthcare while providing symmetric privacy protection among participating institutions. All institutions in the collaboration used the same EfficientNet-B0 architecture and training conditions, keeping the model symmetric throughout the network to achieve a balanced and fair learning process. Each institution applied CKKS encryption symmetrically to its model updates to keep data concealed and prevent inference attempts. The federated learning process uses FedAvg on the server to symmetrically aggregate encrypted model updates and reduce server communication delays. We attained classification accuracies of 83.19% and 81.27% on the APTOS 2019 Blindness Detection dataset and the MosMedData CT scan dataset, respectively. These findings confirm that the PPFLHE framework generalizes across a broad range of medical imaging modalities. In this way, patient data are kept secure while medical research and treatment move forward, helping healthcare systems cooperate more effectively.
(This article belongs to the Special Issue Exploring Symmetry in Wireless Communication)
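As a sketch of how encrypted FedAvg aggregation could look, the snippet below uses the TenSEAL library's CKKS vectors to add client weight vectors and scale by 1/n without decrypting individual updates. TenSEAL is an assumed tooling choice; the abstract does not state which CKKS implementation the authors used.

```python
import tenseal as ts  # pip install tenseal; an assumed CKKS implementation, not the paper's stated tooling

# Shared CKKS context (in a real deployment the secret key stays with the clients).
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Each client encrypts its (flattened) model update before upload.
client_updates = [[0.10, -0.20, 0.05], [0.30, 0.00, -0.10], [0.20, 0.10, 0.05]]
encrypted = [ts.ckks_vector(context, u) for u in client_updates]

# The server averages ciphertexts homomorphically (equal-weight FedAvg).
agg = encrypted[0]
for enc in encrypted[1:]:
    agg = agg + enc
agg = agg * (1.0 / len(encrypted))

print(agg.decrypt())   # approximately [0.2, -0.033, 0.0]
```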

20 pages, 2382 KiB  
Article
Heterogeneity-Aware Personalized Federated Neural Architecture Search
by An Yang and Ying Liu
Entropy 2025, 27(7), 759; https://doi.org/10.3390/e27070759 - 16 Jul 2025
Viewed by 295
Abstract
Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily resource heterogeneity induced by different hardware platforms and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to the server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of HAPFNAS compared to state-of-the-art federated NAS methods.
(This article belongs to the Section Signal and Data Analysis)
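The random-forest-based performance predictor mentioned above can be sketched with scikit-learn: fit a regressor on (architecture encoding, measured accuracy) pairs and rank unseen candidates by predicted score. The encoding scheme and the synthetic targets below are illustrative assumptions; the paper builds one such predictor per client.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each candidate architecture is encoded as a fixed-length vector
# (e.g., per-layer operation and width choices); targets are accuracies
# measured on a client's validation data. Both are synthetic here.
rng = np.random.default_rng(0)
arch_encodings = rng.integers(0, 4, size=(200, 12))       # 200 evaluated candidates
measured_acc = 0.6 + 0.05 * arch_encodings.mean(axis=1) + rng.normal(0, 0.01, 200)

predictor = RandomForestRegressor(n_estimators=100, random_state=0)
predictor.fit(arch_encodings, measured_acc)

candidates = rng.integers(0, 4, size=(1000, 12))           # unseen architectures
scores = predictor.predict(candidates)
best = candidates[np.argmax(scores)]
print("predicted-best architecture:", best, scores.max())
```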

22 pages, 8849 KiB  
Article
Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
by Junhui Song, Zhangqi Zheng, Afei Li, Zhixin Xia and Yongshan Liu
Appl. Sci. 2025, 15(14), 7843; https://doi.org/10.3390/app15147843 - 13 Jul 2025
Viewed by 403
Abstract
Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that facilitates collaborative model training across multiple clients while ensuring data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity—commonly referred to as the non-independent and identically distributed (non-IID) nature of client data—remains a fundamental challenge. To mitigate this issue, a heterogeneity-aware and robust FL framework is proposed to enhance model generalization and stability under non-IID conditions. The proposed approach introduces two key innovations. First, a heterogeneity quantification mechanism is designed based on statistical feature distributions, enabling the effective measurement of inter-client data discrepancies. This metric is further employed to guide the model aggregation process through a heterogeneity-aware weighted strategy. Second, a multi-loss optimization scheme is formulated, integrating classification loss, heterogeneity loss, feature center alignment, and L2 regularization for improved robustness against distributional shifts during local training. Comprehensive experiments are conducted on four benchmark datasets, including CIFAR-10, SVHN, MNIST, and NotMNIST under Dirichlet-based heterogeneity settings (α = 0.1 and α = 0.5). The results demonstrate that the proposed method consistently outperforms baseline approaches such as FedAvg, FedProx, FedSAM, and FedMOON. Notably, an accuracy improvement of approximately 4.19% over FedSAM is observed on CIFAR-10 (α = 0.5), and a 1.82% gain over FedMOON on SVHN (α = 0.1), along with stable enhancements on MNIST and NotMNIST. Furthermore, ablation studies confirm the contribution and necessity of each component in addressing data heterogeneity.
(This article belongs to the Special Issue Cyber-Physical Systems Security: Challenges and Approaches)
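A rough sketch of heterogeneity-aware aggregation weights is shown below, using per-client label histograms and an L1 discrepancy to the pooled distribution as the heterogeneity measure. The paper quantifies heterogeneity from statistical feature distributions and combines it with a multi-loss objective; the histogram-based measure and the exponential weighting here are stand-ins.

```python
import numpy as np

def label_histogram(labels, num_classes):
    h = np.bincount(labels, minlength=num_classes).astype(float)
    return h / h.sum()

def heterogeneity_aware_weights(client_labels, num_classes, temperature=5.0):
    """Down-weight clients whose label distribution diverges from the pooled
    distribution (L1 distance as a simple discrepancy measure)."""
    hists = [label_histogram(y, num_classes) for y in client_labels]
    sizes = np.array([len(y) for y in client_labels], dtype=float)
    global_hist = (sizes[:, None] * np.stack(hists)).sum(0) / sizes.sum()
    divergence = np.array([np.abs(h - global_hist).sum() for h in hists])
    w = sizes * np.exp(-temperature * divergence)    # size times heterogeneity term
    return w / w.sum()

clients = [np.random.randint(0, 10, 500),            # roughly uniform labels
           np.random.choice([0, 1], 500),            # highly skewed client
           np.random.randint(0, 10, 200)]
print(heterogeneity_aware_weights(clients, num_classes=10))
```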

25 pages, 1524 KiB  
Article
Detecting Emerging DGA Malware in Federated Environments via Variational Autoencoder-Based Clustering and Resource-Aware Client Selection
by Ma Viet Duc, Pham Minh Dang, Tran Thu Phuong, Truong Duc Truong, Vu Hai and Nguyen Huu Thanh
Future Internet 2025, 17(7), 299; https://doi.org/10.3390/fi17070299 - 3 Jul 2025
Viewed by 392
Abstract
Domain Generation Algorithms (DGAs) remain a persistent technique used by modern malware to establish stealthy command-and-control (C&C) channels, thereby evading traditional blacklist-based defenses. Detecting such evolving threats is especially challenging in decentralized environments where raw traffic data cannot be aggregated due to privacy or policy constraints. To address this, we present FedSAGE, a security-aware federated intrusion detection framework that combines Variational Autoencoder (VAE)-based latent representation learning with unsupervised clustering and resource-efficient client selection. Each client encodes its local domain traffic into a semantic latent space using a shared, pre-trained VAE trained solely on benign domains. These embeddings are clustered via affinity propagation to group clients with similar data distributions and identify outliers indicative of novel threats without requiring any labeled DGA samples. Within each cluster, FedSAGE selects only the fastest clients for training, balancing computational constraints with threat visibility. Experimental results from the multi-zones DGA dataset show that FedSAGE improves detection accuracy by up to 11.6% and reduces energy consumption by up to 93.8% compared to standard FedAvg under non-IID conditions. Notably, the latent clustering perfectly recovers ground-truth DGA family zones, enabling effective anomaly detection in a fully unsupervised manner while remaining privacy-preserving. These findings demonstrate that FedSAGE is a practical and lightweight approach for decentralized detection of evasive malware, offering a viable solution for secure and adaptive defense in resource-constrained edge environments.
(This article belongs to the Special Issue Security of Computer System and Network)
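The clustering-plus-selection step described above can be pictured with scikit-learn's AffinityPropagation: cluster the clients' VAE latent centroids, then keep only the fastest client in each cluster for the round. The synthetic latent centroids and round times below are placeholders, not data from the paper.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Assume each client has already encoded its local domain traffic with the shared
# pre-trained VAE and reports the centroid of its latent embeddings plus a measured
# round time (both synthetic placeholders here).
rng = np.random.default_rng(0)
client_latents = np.vstack([rng.normal(0, 1, (5, 8)),       # benign-like zone
                            rng.normal(4, 1, (4, 8)),        # DGA-family-A-like zone
                            rng.normal(-4, 1, (3, 8))])       # DGA-family-B-like zone
round_times = rng.uniform(1.0, 10.0, len(client_latents))

clusters = AffinityPropagation(random_state=0).fit_predict(client_latents)

# Within each cluster, keep only the fastest client for this training round.
selected = [int(np.where(clusters == c)[0][np.argmin(round_times[clusters == c])])
            for c in np.unique(clusters)]
print("clusters:", clusters, "selected clients:", selected)
```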

17 pages, 16364 KiB  
Article
FedeAMR-CFF: A Federated Automatic Modulation Recognition Method Based on Characteristic Feature Fine-Tuning
by Meng Zhang, Jiankun Ma, Zhenxi Zhang and Feng Zhou
Sensors 2025, 25(13), 4000; https://doi.org/10.3390/s25134000 - 26 Jun 2025
Viewed by 413
Abstract
Modulation recognition, one of the core technologies in wireless communications, plays an important role in intelligent communication scenarios such as link adaptation and IoT devices. In recent years, deep learning-based automatic modulation recognition (DL-AMR) has emerged as a major research direction in this domain. Existing DL-AMR schemes primarily adopt a centralized training architecture, where a unified model is trained on a central server using local data from terminal devices. Although such methods achieve high recognition accuracy, they carry substantial privacy leakage risks. Moreover, when terminal devices independently train models solely based on their local data, model performance often suffers due to issues like data distribution disparities and insufficient training samples. To address the critical challenges of high data privacy leakage risks, excessive communication overhead, and data silos in automatic modulation recognition tasks, this paper proposes a federated automatic modulation recognition method based on characteristic feature fine-tuning (FedeAMR-CFF). Specifically, the clients extract representative features through distance-based metric screening, and the server aggregates model parameters via the FedAvg algorithm and fine-tunes the model using the collected features. This method not only safeguards client data privacy but also facilitates effective knowledge transfer across distributed datasets while significantly mitigating the non-independent and identically distributed (non-IID) problem. Experimental validation demonstrates that FedeAMR-CFF achieves an improvement of 3.43% compared to the best-performing local model.
(This article belongs to the Section Internet of Things)

24 pages, 9073 KiB  
Article
Data-Bound Adaptive Federated Learning: FedAdaDB
by Fotios Zantalis and Grigorios Koulouras
IoT 2025, 6(3), 35; https://doi.org/10.3390/iot6030035 - 24 Jun 2025
Viewed by 470
Abstract
Federated Learning (FL) enables decentralized Machine Learning (ML), focusing on preserving data privacy, but faces a unique set of optimization challenges, such as dealing with non-IID data, communication overhead, and client drift. Adaptive optimizers like AdaGrad, Adam, and Adam variants have been applied in FL, showing good results in convergence speed and accuracy. However, it can be quite challenging to combine good convergence, model generalization, and stability in an FL setup. Data-bound adaptive methods like AdaDB have demonstrated promising results in centralized settings by incorporating dynamic, data-dependent bounds on learning rates (LRs). In this paper, FedAdaDB is introduced, which is an FL version of AdaDB aiming to address the aforementioned challenges. FedAdaDB uses the AdaDB optimizer at the server side to dynamically adjust LR bounds based on the aggregated client updates. Extensive experiments have been conducted comparing FedAdaDB with FedAvg and FedAdam on three different datasets (EMNIST, CIFAR100, and Shakespeare). The results show that FedAdaDB consistently offers better and more robust outcomes in terms of final validation accuracy across all datasets, at the cost of a small delay in convergence speed in the early stages.
(This article belongs to the Special Issue IoT Meets AI: Driving the Next Generation of Technology)
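To illustrate a server-side adaptive update with bounded learning rates, the sketch below takes an Adam-like step on the FedOpt-style pseudo-gradient and clips the per-parameter step size into a band that narrows over rounds. The band schedule is AdaBound-like and serves only as a stand-in; it does not reproduce AdaDB's data-dependent bounds or FedAdaDB's exact update.

```python
import numpy as np

def bounded_adaptive_server_step(w, avg_update, state, t,
                                 lr=0.1, b1=0.9, b2=0.99,
                                 final_lr=0.05, eps=1e-8):
    """One server round: treat the averaged client update as a pseudo-gradient and
    take an Adam-like step whose per-parameter step size is clipped into a band
    that narrows toward `final_lr` as rounds progress (illustrative schedule only)."""
    g = -avg_update                                   # pseudo-gradient
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g ** 2
    raw_step = lr / (np.sqrt(state["v"]) + eps)       # per-parameter step size
    lower = final_lr * (1.0 - 1.0 / (t + 2))
    upper = final_lr * (1.0 + 1.0 / (t + 1))
    step = np.clip(raw_step, lower, upper)            # bounded adaptive step
    return w - step * state["m"]

w = np.zeros(4)
state = {"m": np.zeros(4), "v": np.zeros(4)}
for t in range(5):
    avg_update = np.array([0.2, -0.1, 0.05, 0.0]) + np.random.randn(4) * 0.01
    w = bounded_adaptive_server_step(w, avg_update, state, t)
print(w)
```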

22 pages, 2065 KiB  
Article
FedEmerge: An Entropy-Guided Federated Learning Method for Sensor Networks and Edge Intelligence
by Koffka Khan
Sensors 2025, 25(12), 3728; https://doi.org/10.3390/s25123728 - 14 Jun 2025
Viewed by 392
Abstract
Introduction: Federated Learning (FL) is a distributed machine learning paradigm where a global model is collaboratively trained across multiple decentralized clients without exchanging raw data. This is especially important in sensor networks and edge intelligence, where data privacy, bandwidth constraints, and data locality are paramount. Traditional FL methods like FedAvg struggle with highly heterogeneous (non-IID) client data, which is common in these settings. Background: Traditional FL aggregation methods, such as FedAvg, weigh client updates primarily by dataset size, potentially overlooking the informativeness or diversity of each client’s contribution. These limitations are especially pronounced in sensor networks and IoT environments, where clients may hold sparse, unbalanced, or single-modality data. Methods: We propose FedEmerge, an entropy-guided aggregation approach that adjusts each client’s impact on the global model based on the information entropy of its local data distribution. This formulation introduces a principled way to quantify and reward data diversity, enabling an emergent collective learning dynamic in which globally informative updates drive convergence. Unlike existing methods that weigh updates by sample count or heuristics, FedEmerge prioritizes clients with more representative, high-entropy data. The FedEmerge algorithm is presented with full mathematical detail, and we prove its convergence under the Polyak–Łojasiewicz (PL) condition. Results: Theoretical analysis shows that FedEmerge achieves linear convergence to the optimal model under standard assumptions (smoothness and PL condition), similar to centralized gradient descent. Empirically, FedEmerge improves global model accuracy and convergence speed on highly skewed non-IID benchmarks, and it reduces performance disparities among clients compared to FedAvg. Evaluations on CIFAR-10 (non-IID), Federated EMNIST, and Shakespeare datasets confirm its effectiveness in practical edge-learning settings. Conclusions: This entropy-guided federated strategy demonstrates that weighting client updates by data diversity enhances learning outcomes in heterogeneous networks. The approach preserves privacy like standard FL and adds minimal computation overhead, making it a practical solution for real-world federated systems.
(This article belongs to the Section Sensor Networks)
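A minimal sketch of entropy-guided aggregation weights: score each client by the Shannon entropy of its local label distribution and normalize. Using label histograms as the "local data distribution" is an assumption for illustration; FedEmerge's exact formulation and any combination with sample counts are not reproduced here.

```python
import numpy as np

def shannon_entropy(labels, num_classes, eps=1e-12):
    """Shannon entropy of a client's empirical label distribution."""
    p = np.bincount(labels, minlength=num_classes) / len(labels)
    return float(-(p * np.log(p + eps)).sum())

def entropy_guided_weights(client_labels, num_classes):
    """Clients with more diverse (higher-entropy) local label distributions get
    larger aggregation weights; a small floor avoids zeroing out single-class clients."""
    H = np.array([shannon_entropy(y, num_classes) for y in client_labels])
    w = H + 1e-3
    return w / w.sum()

clients = [np.random.randint(0, 10, 400),        # diverse client
           np.zeros(400, dtype=int),             # single-class client
           np.random.choice([0, 1, 2], 400)]     # moderately diverse client
print(entropy_guided_weights(clients, num_classes=10))
```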

18 pages, 1005 KiB  
Article
FedEach: Federated Learning with Evaluator-Based Incentive Mechanism for Human Activity Recognition
by Hyun Woo Lim, Sean Yonathan Tanjung, Ignatius Iwan, Bernardo Nugroho Yahya and Seok-Lyong Lee
Sensors 2025, 25(12), 3687; https://doi.org/10.3390/s25123687 - 12 Jun 2025
Viewed by 452
Abstract
Federated learning (FL) is a decentralized approach that aims to establish a global model by aggregating updates from diverse clients without sharing their local data. However, the approach becomes complicated when Byzantine clients join with arbitrary manipulation, referred to as malicious clients. Classical techniques, such as Federated Averaging (FedAvg), are insufficient to incentivize reliable clients and discourage malicious clients. Other existing Byzantine FL schemes for handling malicious clients either rely on incentivizing reliable clients or require server-labeled data as a public validation dataset, which increases time complexity. This study introduces a federated learning framework with an evaluator-based incentive mechanism (FedEach) that offers robustness with no dependency on server-labeled data. In this framework, we introduce evaluators and participants. Unlike existing approaches, the server selects the evaluators and participants from among the clients using model-based performance evaluation criteria such as test score and reputation. Afterward, the evaluators assess whether each participant is reliable or malicious. Subsequently, the server exclusively aggregates models from the identified reliable participants and the evaluators for global model updates. After this aggregation, the server calculates each client’s contribution, ensuring fair recognition of high-quality updates and penalizing malicious clients based on their contributions. Empirical evidence from human activity recognition (HAR) datasets highlights FedEach’s effectiveness, especially in environments with a high presence of malicious clients. In addition, FedEach maintains computational efficiency, making it suitable for efficient FL applications such as sensor-based HAR with wearable devices and mobile sensing.
(This article belongs to the Special Issue Wearable Devices for Physical Activity and Healthcare Monitoring)
