Search Results (83)

Search Parameters:
Keywords = federated client selection

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Viewed by 259
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and enhanced user experience when combined with Federated Learning (FL). However, the distributed nature of IoV networks introduces inherent challenges regarding data privacy, security against adversarial attacks, and the management of limited resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates an adaptive trust evaluation mechanism capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches shows that Fed-DTB improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms state-of-the-art approaches in final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems while overcoming the conventional trade-off between security and efficiency.
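
The abstract names the ingredients (seven contextual trust factors, correlation-based weights, an optimisation-based selection step) but not the exact formulas. A minimal sketch of how a weighted multi-factor trust score could drive client selection; the factor values, weights, top-k rule, and threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def trust_scores(factors: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-client contextual trust factors into a single score.

    factors: shape (n_clients, n_factors), each factor scaled to [0, 1].
    weights: shape (n_factors,), e.g. derived from correlation analysis.
    """
    weights = weights / weights.sum()          # normalise factor weights
    return factors @ weights                   # weighted trust per client

def select_clients(factors, weights, k, min_trust=0.5):
    """Keep only clients above a trust threshold, then pick the top-k."""
    scores = trust_scores(factors, weights)
    eligible = np.where(scores >= min_trust)[0]            # drop suspected malicious clients
    top = eligible[np.argsort(scores[eligible])[::-1][:k]]
    return top, scores

# Example: 6 clients, 7 contextual trust factors (random stand-ins).
rng = np.random.default_rng(0)
factors = rng.uniform(size=(6, 7))
weights = rng.uniform(size=7)                  # would come from correlation with vehicular conditions
selected, scores = select_clients(factors, weights, k=3)
print(selected, np.round(scores, 3))
```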

39 pages, 784 KiB  
Review
A Review of Research on Secure Aggregation for Federated Learning
by Xing Zhang, Yuexiang Luo and Tianning Li
Future Internet 2025, 17(7), 308; https://doi.org/10.3390/fi17070308 - 17 Jul 2025
Viewed by 148
Abstract
Federated learning (FL) is an advanced distributed machine learning method that effectively solves the data silo problem. With the increasing popularity of federated learning and the growing importance of privacy protection, federated learning methods that can securely aggregate models have received widespread attention. Federated learning enables clients to train models locally and share their model updates with the server. While this approach allows collaborative model training without exposing raw data, it still risks leaking sensitive information. To enhance privacy protection in federated learning, secure aggregation is considered a key enabling technology that requires further in-depth investigation. This paper summarizes the definition, classification, and applications of federated learning; reviews secure aggregation protocols proposed to address privacy and security issues in federated learning; extensively analyzes the selected protocols; and concludes by highlighting the significant challenges and future research directions in applying secure aggregation in federated learning. The purpose of this paper is to review and analyze prior research, evaluate the advantages and disadvantages of various secure aggregation schemes, and propose potential future research directions. This work aims to serve as a valuable reference for researchers studying secure aggregation in federated learning.
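
As background for the protocols this review surveys, a minimal sketch of the pairwise-masking idea behind classic secure aggregation: each pair of clients adds cancelling random masks to their updates so the server learns only the sum. This is a simplified illustration, not any specific surveyed protocol; real schemes derive masks from agreed keys and handle dropouts:

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Add pairwise cancelling masks so only the sum of updates is recoverable.

    Client i adds +mask_ij for j > i and subtracts mask_ji for j < i, so all
    masks cancel when the server sums the masked vectors.
    """
    n, dim = len(updates), len(updates[0])
    rng = np.random.default_rng(seed)
    # In a real protocol each pair mask is derived from a key agreed between the two clients.
    pair_masks = {(i, j): rng.normal(size=dim) for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u.copy()
        for j in range(n):
            if i < j:
                m += pair_masks[(i, j)]
            elif j < i:
                m -= pair_masks[(j, i)]
        masked.append(m)
    return masked

updates = [np.ones(4) * c for c in (1.0, 2.0, 3.0)]
server_sum = sum(masked_updates(updates))      # equals the sum of the raw updates
print(np.round(server_sum, 6))                 # [6. 6. 6. 6.]
```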

31 pages, 9063 KiB  
Article
Client Selection in Federated Learning on Resource-Constrained Devices: A Game Theory Approach
by Zohra Dakhia and Massimo Merenda
Appl. Sci. 2025, 15(13), 7556; https://doi.org/10.3390/app15137556 - 5 Jul 2025
Viewed by 336
Abstract
Federated Learning (FL), a key paradigm in privacy-preserving and distributed machine learning (ML), enables collaborative model training across decentralized data sources without requiring raw data exchange. However, selecting appropriate clients remains a major challenge, especially in heterogeneous environments with diverse battery levels, privacy needs, and learning capacities. In this work, a centralized reward-based payoff strategy (RBPS) with cooperative intent is proposed for client selection. In RBPS, each client evaluates participation based on its locally measured battery level, privacy requirement, and the model's accuracy in the current round, computing a payoff from these factors and electing to participate if the payoff exceeds a predefined threshold. Participating clients then receive the updated global model. By jointly optimizing model accuracy, privacy preservation, and battery-level constraints, RBPS realizes a multi-objective selection mechanism. Under realistic simulations of client heterogeneity, RBPS yields more robust and efficient training than existing methods, confirming its suitability for deployment in resource-constrained FL settings. Experimental analysis demonstrates that RBPS offers significant advantages over state-of-the-art (SOA) client selection methods, particularly those relying on a single selection criterion such as accuracy, battery, or privacy alone; such one-dimensional approaches often trade improvements in one aspect for losses in another. In contrast, RBPS treats client heterogeneity not as a limitation but as a strategic asset for maintaining and balancing all critical characteristics simultaneously. Rather than optimizing for a single device type or constraint, RBPS benefits from the diversity of heterogeneous clients, enabling improved accuracy, energy preservation, and privacy protection at once by dynamically adapting the selection strategy to the strengths of different client profiles. Unlike homogeneous environments, where a single capability tends to dominate, RBPS ensures that no key property is sacrificed, and it thus aligns closely with real-world FL deployments, where mixed-device participation is common and balanced optimization is essential.
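
The abstract specifies the payoff inputs (battery, privacy requirement, current-round accuracy) and a participation threshold but not the payoff function itself. A minimal sketch assuming a simple weighted-sum payoff; the weights and threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ClientState:
    battery: float        # remaining battery in [0, 1]
    privacy_need: float   # privacy requirement in [0, 1]; higher = stricter
    round_accuracy: float # local accuracy of the current global model in [0, 1]

def payoff(c: ClientState, w_acc=0.5, w_bat=0.3, w_priv=0.2) -> float:
    """Weighted-sum payoff: reward accuracy and battery, penalise strict privacy needs."""
    return w_acc * c.round_accuracy + w_bat * c.battery + w_priv * (1.0 - c.privacy_need)

def select_participants(clients, threshold=0.55):
    """Clients elect to participate when their payoff exceeds the threshold."""
    return [i for i, c in enumerate(clients) if payoff(c) > threshold]

clients = [
    ClientState(battery=0.9, privacy_need=0.2, round_accuracy=0.8),
    ClientState(battery=0.3, privacy_need=0.7, round_accuracy=0.6),
    ClientState(battery=0.6, privacy_need=0.4, round_accuracy=0.9),
]
print(select_participants(clients))   # indices of clients that choose to participate
```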

25 pages, 1524 KiB  
Article
Detecting Emerging DGA Malware in Federated Environments via Variational Autoencoder-Based Clustering and Resource-Aware Client Selection
by Ma Viet Duc, Pham Minh Dang, Tran Thu Phuong, Truong Duc Truong, Vu Hai and Nguyen Huu Thanh
Future Internet 2025, 17(7), 299; https://doi.org/10.3390/fi17070299 - 3 Jul 2025
Viewed by 333
Abstract
Domain Generation Algorithms (DGAs) remain a persistent technique used by modern malware to establish stealthy command-and-control (C&C) channels, thereby evading traditional blacklist-based defenses. Detecting such evolving threats is especially challenging in decentralized environments where raw traffic data cannot be aggregated due to privacy or policy constraints. To address this, we present FedSAGE, a security-aware federated intrusion detection framework that combines Variational Autoencoder (VAE)-based latent representation learning with unsupervised clustering and resource-efficient client selection. Each client encodes its local domain traffic into a semantic latent space using a shared VAE pre-trained solely on benign domains. These embeddings are clustered via affinity propagation to group clients with similar data distributions and identify outliers indicative of novel threats, without requiring any labeled DGA samples. Within each cluster, FedSAGE selects only the fastest clients for training, balancing computational constraints with threat visibility. Experimental results on the multi-zone DGA dataset show that FedSAGE improves detection accuracy by up to 11.6% and reduces energy consumption by up to 93.8% compared to standard FedAvg under non-IID conditions. Notably, the latent clustering perfectly recovers the ground-truth DGA family zones, enabling effective anomaly detection in a fully unsupervised manner while remaining privacy-preserving. These results demonstrate that FedSAGE is a practical and lightweight approach for decentralized detection of evasive malware, offering a viable solution for secure and adaptive defense in resource-constrained edge environments.
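
A minimal sketch of the cluster-then-select step described above: clients are grouped by affinity propagation over their mean latent embeddings, and only the fastest clients in each cluster are scheduled for training. The embedding dimensions, latency values, and per-cluster quota are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)

# One mean VAE embedding per client (stand-in values; real ones come from the shared VAE).
client_embeddings = rng.normal(size=(8, 16))
# Measured per-round compute/communication time for each client (seconds, illustrative).
client_latency = rng.uniform(5.0, 60.0, size=8)

# Group clients with similar data distributions.
labels = AffinityPropagation(random_state=0).fit_predict(client_embeddings)

def fastest_per_cluster(labels, latency, per_cluster=2):
    """Within each cluster, keep only the clients with the lowest latency."""
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        fastest = members[np.argsort(latency[members])[:per_cluster]]
        selected.extend(fastest.tolist())
    return sorted(selected)

print(fastest_per_cluster(labels, client_latency))
```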
(This article belongs to the Special Issue Security of Computer System and Network)

22 pages, 891 KiB  
Article
Federated Learning-Based Location Similarity Model for Location Privacy Preserving Recommendation
by Liang Zhu, Jingzhe Mu, Liping Yu, Yanpei Liu, Fubao Zhu and Jingzhong Gu
Electronics 2025, 14(13), 2578; https://doi.org/10.3390/electronics14132578 - 26 Jun 2025
Viewed by 256
Abstract
With the proliferation of mobile devices and wireless communications, Location-Based Social Networks (LBSNs) have seen tremendous growth. Location recommendation, as an important service in LBSNs, can provide users with locations of interest by analyzing their complex check-in information. Currently, most location recommendations use centralized learning strategies, which carry the risk of user privacy breaches. As an emerging learning strategy, federated learning is widely applied in the field of location recommendation to address privacy concerns. We propose a Federated Learning-Based Location Similarity Model for Location Privacy Preserving Recommendation (FedLSM-LPR) scheme. First, a location-based similarity model is used to capture the differences between locations and make location recommendations. Second, a penalty term is added to the loss function to constrain the distance between the local model parameters and the global model parameters. Finally, we use the REPAgg method, which is based on clustering for client selection, to perform global model aggregation and address data heterogeneity issues. Extensive experiments demonstrate that the proposed FedLSM-LPR scheme not only delivers superior performance but also effectively protects the privacy of users.
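
The abstract describes a penalty term constraining the distance between local and global parameters but does not give its form. A minimal sketch assuming a FedProx-style squared-distance penalty; the coefficient mu and the usage pattern are illustrative assumptions:

```python
import torch

def penalized_loss(task_loss: torch.Tensor,
                   local_params,
                   global_params,
                   mu: float = 0.01) -> torch.Tensor:
    """Add a proximal-style penalty ||w_local - w_global||^2 to the task loss.

    local_params / global_params: matching iterables of tensors (model.parameters()).
    """
    prox = sum(((lp - gp.detach()) ** 2).sum() for lp, gp in zip(local_params, global_params))
    return task_loss + 0.5 * mu * prox

# Usage inside a client's local training step (model / global_model / criterion assumed defined):
#   loss = criterion(model(x), y)
#   loss = penalized_loss(loss, model.parameters(), global_model.parameters())
#   loss.backward(); optimizer.step()
```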
(This article belongs to the Special Issue Big Data Security and Privacy)

18 pages, 1005 KiB  
Article
FedEach: Federated Learning with Evaluator-Based Incentive Mechanism for Human Activity Recognition
by Hyun Woo Lim, Sean Yonathan Tanjung, Ignatius Iwan, Bernardo Nugroho Yahya and Seok-Lyong Lee
Sensors 2025, 25(12), 3687; https://doi.org/10.3390/s25123687 - 12 Jun 2025
Viewed by 418
Abstract
Federated learning (FL) is a decentralized approach that aims to establish a global model by aggregating updates from diverse clients without sharing their local data. However, the approach becomes complicated when Byzantine clients join with arbitrary manipulations, referred to as malicious clients. Classical techniques, such as Federated Averaging (FedAvg), are insufficient to incentivize reliable clients and discourage malicious ones. Other existing Byzantine-robust FL schemes either focus only on incentivizing reliable clients or require server-labeled data as a public validation dataset, which increases time complexity. This study introduces a federated learning framework with an evaluator-based incentive mechanism (FedEach) that offers robustness with no dependency on server-labeled data. In this framework, we introduce evaluators and participants. Unlike existing approaches, the server selects the evaluators and participants among the clients using model-based performance evaluation criteria such as test score and reputation. The evaluators then assess whether each participant is reliable or malicious, and the server exclusively aggregates models from the identified reliable participants and the evaluators for global model updates. After this aggregation, the server calculates each client's contribution, ensuring fair recognition of high-quality updates and penalizing malicious clients based on their contributions. Empirical evidence from human activity recognition (HAR) datasets highlights FedEach's effectiveness, especially in environments with a high presence of malicious clients. In addition, FedEach maintains computational efficiency, making it suitable for efficient FL applications such as sensor-based HAR with wearable devices and mobile sensing.
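
A minimal sketch of the role split described above: clients are ranked by a reputation-plus-test-score criterion, the top-ranked become evaluators, the rest become participants, and only participants judged reliable by a majority of evaluators are aggregated. The scoring rule, weighting, and majority vote are illustrative assumptions rather than the paper's exact mechanism:

```python
import numpy as np

def split_roles(scores, n_evaluators=2):
    """Highest-scoring clients become evaluators; the rest are participants."""
    order = np.argsort(scores)[::-1]
    return order[:n_evaluators].tolist(), order[n_evaluators:].tolist()

def reliable_participants(votes):
    """votes[e][p] = 1 if evaluator e judges participant p reliable; keep majority-approved ones."""
    votes = np.asarray(votes)
    return np.where(votes.mean(axis=0) > 0.5)[0].tolist()

# Example: 5 clients scored by 0.5 * reputation + 0.5 * test_score (illustrative weighting).
reputation = np.array([0.9, 0.4, 0.7, 0.2, 0.8])
test_score = np.array([0.8, 0.5, 0.6, 0.3, 0.9])
scores = 0.5 * reputation + 0.5 * test_score
evaluators, participants = split_roles(scores)

# Evaluators' votes on each participant (rows: evaluators, columns: participants).
votes = [[1, 1, 0], [1, 0, 0]]
kept = [participants[i] for i in reliable_participants(votes)]
print(evaluators, participants, kept)    # only majority-approved participants are aggregated
```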
(This article belongs to the Special Issue Wearable Devices for Physical Activity and Healthcare Monitoring)

24 pages, 1347 KiB  
Article
SecFedDNN: A Secure Federated Deep Learning Framework for Edge–Cloud Environments
by Roba H. Alamir, Ayman Noor, Hanan Almukhalfi, Reham Almukhlifi and Talal H. Noor
Systems 2025, 13(6), 463; https://doi.org/10.3390/systems13060463 - 12 Jun 2025
Cited by 1 | Viewed by 1051
Abstract
Cyber threats that target Internet of Things (IoT) and edge computing environments are growing in scale and complexity, which necessitates security solutions that are robust and scalable while also protecting privacy. Edge scenarios require new intrusion detection solutions because traditional centralized intrusion detection systems (IDSs) fall short in protecting data privacy, create excessive communication overhead, and show limited contextual adaptation. This paper introduces the SecFedDNN framework, which applies federated deep learning (FDL) to protect edge–cloud environments from cyberattacks such as Distributed Denial of Service (DDoS), Denial of Service (DoS), and injection attacks. SecFedDNN performs edge-level pre-aggregation filtering through Layer-Adaptive Sparsified Model Aggregation (LASA) for anomaly detection while supporting balanced multi-class evaluation across federated clients. A Deep Neural Network (DNN) forms the main model, trained concurrently by multiple clients through the Federated Averaging (FedAvg) protocol while keeping raw data local. We utilized Google Cloud Platform (GCP) along with Google Colaboratory (Colab) to create five federated clients for simulating attacks on the TON_IoT dataset, which we balanced across the selected attack types. Initial tests showed that the DNN outperformed Long Short-Term Memory (LSTM) and SimpleNN in centralized environments by providing higher accuracy at lower computational cost. Following federated training, the SecFedDNN framework achieved average accuracy and precision above 84% and recall and F1-score above 82% across all clients, with response times suitable for real-time deployment. The study demonstrates that FDL can strengthen intrusion detection across distributed edge networks without compromising data privacy guarantees.
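
As a reference point for the FedAvg protocol the framework builds on, a minimal sketch of sample-count-weighted model averaging across clients; the tensor shapes and sample counts are illustrative, and the LASA pre-aggregation filtering mentioned in the abstract is not reproduced here:

```python
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts, weights proportional to local sample counts."""
    total = float(sum(client_sizes))
    avg = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += (n / total) * v.float()
    return avg

# Example with two tiny "models" (dicts of tensors standing in for state_dicts).
c1 = {"w": torch.tensor([1.0, 1.0]), "b": torch.tensor([0.0])}
c2 = {"w": torch.tensor([3.0, 3.0]), "b": torch.tensor([1.0])}
print(fedavg([c1, c2], client_sizes=[100, 300]))   # client 2 gets weight 0.75
```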

14 pages, 2408 KiB  
Article
Backpack Client Selection Keeping Swarm Learning in Industrial Digital Twins for Wireless Mapping
by Xingjia Wei, Ning Su, Yikai Guo and Pengcheng Zhao
Electronics 2025, 14(12), 2323; https://doi.org/10.3390/electronics14122323 - 6 Jun 2025
Viewed by 344
Abstract
Digital twin virtual–real mapping and precise modeling require the synchronization of large amounts of data, which leads to high communication overhead over wireless channels in the industrial Internet of Things (IoT). To solve this problem, this study proposes a Digital Twin–Swarm Learning (DT-SL) architecture for industrial IoT digital twins. SL is an emerging distributed federated learning (FL) method that completely eliminates the need for centralized servers; however, it faces wireless channel congestion caused by highly concurrent parameter transmission. Within this architecture, a novel Keeping SL (KSL) scheme based on the backpack (knapsack) model is used to construct the DT model: the knapsack optimization problem selects the clients with the largest contribution to participate in Keeping SL twin modeling. Experimental results evaluate the performance of the proposed method: the absolute value of each client's updated parameter quantity decreased by 23.6% on average, the convergence rate of the aggregation model increased by 34.1%, and the model aggregation MSE decreased to 0.03.
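
The abstract frames selection as a backpack (knapsack) problem: pick the clients whose combined contribution is largest subject to a capacity constraint. A minimal 0/1-knapsack sketch with integer communication costs; the costs, contribution values, and budget are illustrative assumptions:

```python
def knapsack_select(costs, values, budget):
    """0/1 knapsack: choose clients maximising total contribution within a communication budget."""
    n = len(costs)
    dp = [0.0] * (budget + 1)                         # dp[c] = best value at capacity c
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for c in range(budget, costs[i] - 1, -1):     # iterate capacity downwards (0/1 knapsack)
            if dp[c - costs[i]] + values[i] > dp[c]:
                dp[c] = dp[c - costs[i]] + values[i]
                choice[i][c] = True
    # Backtrack to recover the selected client set.
    selected, c = [], budget
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            selected.append(i)
            c -= costs[i]
    return sorted(selected), dp[budget]

costs = [4, 3, 5, 2, 6]             # per-client upload cost (illustrative units)
values = [7.0, 4.0, 8.5, 3.0, 9.0]  # estimated contribution to the twin model
print(knapsack_select(costs, values, budget=10))   # -> ([0, 4], 16.0)
```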

18 pages, 546 KiB  
Article
Resource Allocation for Federated Learning with Heterogeneous Computing Capability in Cloud–Edge–Client IoT Architecture
by Xubo Zhang and Yang Luo
Future Internet 2025, 17(6), 243; https://doi.org/10.3390/fi17060243 - 30 May 2025
Viewed by 377
Abstract
A federated learning (FL) framework for cloud–edge–client collaboration performs local aggregation of model parameters at the edge, reducing communication overhead from clients to the cloud. This framework is particularly suitable for Internet of Things (IoT)-based secure computing scenarios that require extensive computation and frequent parameter updates, as it leverages the distributed nature of IoT devices to enhance data privacy and reduce latency. To address the problem of high-computation-capability clients idling while slower devices finish under heterogeneous conditions, this paper proposes an improved resource allocation scheme based on a three-layer FL framework. The scheme optimizes the volume of parameters communicated from clients to the edge through random dropout before communication and parameter completion afterwards, ensuring that local models can be transmitted to the edge simultaneously regardless of differing computation times. This effectively eliminates the long waiting times experienced by high-computation-capability clients. Additionally, the scheme refines the similarity pairing method, the Shapley Value (SV) aggregation strategy, and the client selection method to better accommodate the heterogeneous computing capabilities found in IoT environments. Experiments demonstrate that the improved scheme is better suited to heterogeneous IoT client scenarios, reducing system latency and energy consumption while enhancing model performance.
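
The exact dropout and completion rules are not given in the abstract. A minimal sketch assuming parameters are randomly masked before upload and the missing entries are filled in from the current global model at the edge; the keep probability and fill-in source are illustrative assumptions:

```python
import numpy as np

def dropout_for_upload(local_params: np.ndarray, keep_prob: float, seed: int):
    """Randomly keep a subset of parameters to shrink the upload."""
    rng = np.random.default_rng(seed)
    mask = rng.random(local_params.shape) < keep_prob
    return local_params[mask], mask          # send only the kept values plus the mask (or its seed)

def complete_at_edge(values: np.ndarray, mask: np.ndarray, global_params: np.ndarray):
    """Reconstruct a full parameter vector, filling dropped entries from the global model."""
    completed = global_params.copy()
    completed[mask] = values
    return completed

global_params = np.zeros(8)
local_params = np.arange(8, dtype=float)
values, mask = dropout_for_upload(local_params, keep_prob=0.5, seed=42)
print(complete_at_edge(values, mask, global_params))
```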

19 pages, 2912 KiB  
Article
Explainable Clustered Federated Learning for Solar Energy Forecasting
by Syed Saqib Ali, Mazhar Ali, Dost Muhammad Saqib Bhatti and Bong Jun Choi
Energies 2025, 18(9), 2380; https://doi.org/10.3390/en18092380 - 7 May 2025
Viewed by 948
Abstract
Explainable Artificial Intelligence (XAI) is a well-established and dynamic field defined by an active research community that has developed numerous effective methods for explaining and interpreting the predictions of advanced machine learning models, including deep neural networks. Clustered Federated Learning (CFL) mitigates the difficulties posed by heterogeneous clients in traditional federated learning by categorizing related clients according to data characteristics, facilitating more tailored model updates, and improving overall learning efficiency. This paper introduces Explainable Clustered Federated Learning (XCFL), which adds explainability to clustered federated learning. Our method improves performance and explainability by selecting features, clustering clients, training local clients, and analyzing contributions using SHAP values. By incorporating feature-level contributions into cluster and global aggregation, XCFL ensures a more transparent and data-driven model update process. Weighted aggregation by feature contributions accommodates client diversity and improves decision transparency. Our results show that XCFL outperforms FedAvg and other clustering methods. Our feature-based explainability strategy improves model performance and explains how features affect clustering and model adjustments. XCFL's improved accuracy and explainability make it a promising solution for heterogeneous and distributed learning environments.
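
The abstract states that cluster and global aggregation are weighted by feature-level SHAP contributions without giving the exact rule. A minimal sketch assuming each client's aggregation weight is its share of the total mean-absolute SHAP contribution; the shapes, values, and weighting rule are illustrative assumptions:

```python
import numpy as np

def contribution_weights(shap_values_per_client):
    """Turn per-client SHAP matrices (samples x features) into normalised aggregation weights."""
    totals = np.array([np.abs(sv).mean(axis=0).sum() for sv in shap_values_per_client])
    return totals / totals.sum()

def weighted_aggregate(client_models, weights):
    """Weighted average of client parameter vectors."""
    return sum(w * m for w, m in zip(weights, client_models))

# Illustrative stand-ins: 3 clients, SHAP values over 4 features, tiny parameter vectors.
rng = np.random.default_rng(3)
shap_values = [rng.normal(scale=s, size=(50, 4)) for s in (0.5, 1.0, 2.0)]
models = [np.full(5, fill_value=float(i)) for i in range(3)]
w = contribution_weights(shap_values)
print(np.round(w, 3), weighted_aggregate(models, w))
```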
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)

20 pages, 896 KiB  
Article
MAB-Based Online Client Scheduling for Decentralized Federated Learning in the IoT
by Zhenning Chen, Xinyu Zhang, Siyang Wang and Youren Wang
Entropy 2025, 27(4), 439; https://doi.org/10.3390/e27040439 - 18 Apr 2025
Viewed by 405
Abstract
Different from conventional federated learning (FL), which relies on a central server for model aggregation, decentralized FL (DFL) exchanges models among edge servers, improving robustness and scalability. When deploying DFL in the Internet of Things (IoT), limited wireless resources cannot provide simultaneous access to massive numbers of devices, so client scheduling must be performed to balance the convergence rate and model accuracy. However, the heterogeneity of computing and communication resources across client devices, combined with the time-varying nature of wireless channels, makes it challenging to accurately estimate the delay associated with client participation during scheduling. To address this issue, we investigate the client scheduling and resource optimization problem in DFL without prior client information. Specifically, the problem is reformulated as a multi-armed bandit (MAB) program, and an online learning algorithm that uses contextual multi-armed bandits for client delay estimation and scheduling is proposed. Theoretical analysis shows that the algorithm achieves asymptotically optimal performance. Experimental results show that the algorithm makes asymptotically optimal client selection decisions and outperforms existing algorithms in reducing the cumulative system delay.
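
The paper's exact contextual bandit is not reproduced here; a minimal sketch of the underlying idea using a plain UCB rule, where each arm is a client, the reward is the negative observed delay, and the scheduler learns which clients are fast without prior information. The delay model, horizon, and single-client-per-round setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
true_mean_delay = np.array([2.0, 5.0, 3.0, 8.0])   # unknown to the scheduler (seconds)
n_clients, horizon = len(true_mean_delay), 200

counts = np.zeros(n_clients)
mean_reward = np.zeros(n_clients)                   # reward = -delay, so higher is better

for t in range(1, horizon + 1):
    if t <= n_clients:                              # play every arm once first
        arm = t - 1
    else:
        ucb = mean_reward + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    delay = rng.exponential(true_mean_delay[arm])   # observed participation delay
    counts[arm] += 1
    mean_reward[arm] += (-delay - mean_reward[arm]) / counts[arm]   # running average

print(counts.astype(int))   # the fastest clients end up scheduled most often
```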
(This article belongs to the Section Information Theory, Probability and Statistics)

17 pages, 9409 KiB  
Article
Dynamic Client Selection and Group-Balanced Personalization for Data-Imbalanced Federated Speech Recognition
by Chundong Xu, Ziyu Wu, Fengpei Ge and Yuheng Zhi
Electronics 2025, 14(7), 1485; https://doi.org/10.3390/electronics14071485 - 7 Apr 2025
Viewed by 562
Abstract
Federated learning has been widely applied in automatic speech recognition. However, variations in speaker behavior result in significant data imbalance across client devices. Conventional federated speech recognition algorithms typically use fixed probabilities to select clients in each training round, often overlooking the disparities in data volume among clients. In practice, these substantial differences in data quantity can extend the training duration and compromise the stability of the global model. Moreover, models trained through federated learning on global data often fail to achieve optimal performance for individual local clients. While personalized federated learning strategies hold promise for enhancing model performance, the inherent diversity of speech data makes it challenging to apply state-of-the-art personalized methods effectively to speech recognition tasks. In this paper, a dynamic client selection algorithm is proposed to address data disparities among clients. It can be combined with most federated learning algorithms and dynamically adjusts the selection probability of each client based on its dataset size during training. Experimental results demonstrate that this algorithm reduced training time by 26% compared to traditional methods on public datasets while maintaining equivalent model performance. To optimize personalized federated learning, this paper also proposes a novel group-balanced personalization strategy that fine-tunes groups of clients formed according to their dataset size. The experimental results show that this strategy achieved a relative 12% reduction in character error rate without increasing computational costs. In particular, group-balanced personalization improved model performance for clients with smaller datasets more effectively than local fine-tuning. The combination of dynamic client selection and group-balanced personalization significantly enhanced training efficiency and model performance.
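
The abstract says selection probabilities are adjusted dynamically according to client dataset sizes, without giving the exact rule. A minimal sketch assuming probabilities proportional to dataset size raised to a tempering exponent; the exponent and sampling setup are illustrative assumptions:

```python
import numpy as np

def selection_probabilities(dataset_sizes, alpha=1.0):
    """Selection probability proportional to (dataset size)^alpha; alpha tempers the skew."""
    sizes = np.asarray(dataset_sizes, dtype=float) ** alpha
    return sizes / sizes.sum()

def sample_clients(dataset_sizes, k, alpha=1.0, seed=0):
    """Sample k distinct clients per round with size-aware probabilities."""
    rng = np.random.default_rng(seed)
    p = selection_probabilities(dataset_sizes, alpha)
    return rng.choice(len(dataset_sizes), size=k, replace=False, p=p)

sizes = [120, 3000, 450, 900, 60]            # utterances per client (illustrative)
print(selection_probabilities(sizes).round(3))
print(sample_clients(sizes, k=3))
```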

38 pages, 9923 KiB  
Article
A Verifiable, Privacy-Preserving, and Poisoning Attack-Resilient Federated Learning Framework
by Washington Enyinna Mbonu, Carsten Maple, Gregory Epiphaniou and Christo Panchev
Big Data Cogn. Comput. 2025, 9(4), 85; https://doi.org/10.3390/bdcc9040085 - 31 Mar 2025
Viewed by 805
Abstract
Federated learning is the on-device, collaborative training of a global model that supports the privacy preservation of participants' local data. In federated learning, model training faces challenges regarding privacy preservation, security, resilience, and integrity. For example, a malicious server can indirectly obtain sensitive information through shared gradients, while the correctness of the global model can be corrupted through poisoning attacks from malicious clients using carefully manipulated updates. Many works on secure aggregation and poisoning attack detection have been proposed and applied in various scenarios to address these two issues. Nevertheless, existing works rest on the assumption that the server will return correctly aggregated results to the participants, whereas a malicious server may return false aggregated results. It remains an open problem to simultaneously preserve users' privacy and defend against poisoning attacks while enabling participants to verify the correctness of aggregated results from the server. In this paper, we propose a privacy-preserving and poisoning attack-resilient federated learning framework that supports the verification of aggregated results from the server. Specifically, we design a zero-trust dual-server architectural framework instead of a traditional trust-based single-server scheme. We exploit additive secret sharing to eliminate the single point of exposure of the training data and implement a weight selection and filtering strategy to enhance robustness to poisoning attacks while supporting the verification of aggregated results from the servers. Theoretical analysis and extensive experiments conducted on real-world data demonstrate the practicability of our proposed framework.
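
A minimal sketch of the additive secret sharing idea behind the dual-server design: each client splits its update into two random shares, one per server, so neither server alone learns the update, yet summing both servers' shares reconstructs the true aggregate. Sharing over real numbers and the absence of the paper's verification and filtering steps are simplifying assumptions:

```python
import numpy as np

def share_update(update: np.ndarray, rng) -> tuple[np.ndarray, np.ndarray]:
    """Split an update into two additive shares: share_a + share_b == update."""
    share_a = rng.normal(size=update.shape)
    share_b = update - share_a
    return share_a, share_b

rng = np.random.default_rng(11)
client_updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]

server_a, server_b = [], []
for u in client_updates:
    a, b = share_update(u, rng)
    server_a.append(a)                       # server A sees only random-looking shares
    server_b.append(b)                       # server B sees the complementary shares

aggregate = sum(server_a) + sum(server_b)    # equals the sum of the raw updates
print(aggregate)                             # [ 9. 12.]
```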

22 pages, 1180 KiB  
Article
FedDyH: A Multi-Policy with GA Optimization Framework for Dynamic Heterogeneous Federated Learning
by Xuhua Zhao, Yongming Zheng, Jiaxiang Wan, Yehong Li, Donglin Zhu, Zhenyu Xu and Huijuan Lu
Biomimetics 2025, 10(3), 185; https://doi.org/10.3390/biomimetics10030185 - 17 Mar 2025
Viewed by 589
Abstract
Federated learning (FL) is a distributed learning technique that ensures data privacy and has shown significant potential in cross-institutional image analysis. However, existing methods struggle with the inherent dynamic heterogeneity of real-world data, such as changes in cellular differentiation during disease progression or feature distribution shifts due to different imaging devices. This dynamic heterogeneity can cause catastrophic forgetting, leading to reduced performance in medical predictions across stages. Unlike previous federated learning studies, which paid insufficient attention to dynamic heterogeneity, this paper proposes the FedDyH framework to address this challenge. Inspired by the adaptive regulation mechanisms of biological systems, the framework incorporates several core modules to tackle the issues arising from dynamic heterogeneity. First, the framework simulates intercellular information transfer through cross-client knowledge distillation, preserving local features while mitigating knowledge forgetting. Additionally, a dynamic regularization term is designed whose strength can be adaptively adjusted based on real-world conditions. This mechanism resembles the role of regulatory T cells in the immune system, balancing global model convergence with local specificity adjustments to enhance the robustness of the global model while preventing interference from diverse client features. Finally, the framework introduces a genetic algorithm (GA) to simulate biological evolution, leveraging mechanisms such as gene selection, crossover, and mutation to optimize hyperparameter configurations. This enables the model to adaptively find the optimal hyperparameters in an ever-changing environment, improving both adaptability and performance. Prior to this work, few studies had explored the use of optimization algorithms for hyperparameter tuning in federated learning. Experimental results demonstrate that the FedDyH framework improves accuracy over the SOTA baseline FedDecorr by 2.59%, 0.55%, and 5.79% on the MNIST, Fashion-MNIST, and CIFAR-10 benchmark datasets, respectively. The framework effectively addresses data heterogeneity issues in dynamic heterogeneous environments, providing an innovative solution for more stable and accurate distributed federated learning.
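
The abstract describes a genetic algorithm with selection, crossover, and mutation for hyperparameter search. A minimal sketch of that loop on a toy fitness function; the hyperparameter ranges, population size, and fitness stand-in are illustrative assumptions (in practice the fitness would be the validation score of an FL run):

```python
import random

BOUNDS = {"lr": (1e-4, 1e-1), "reg_strength": (0.0, 1.0)}   # illustrative hyperparameters

def fitness(ind):
    """Stand-in for the validation accuracy of an FL run with these hyperparameters."""
    return -((ind["lr"] - 0.01) ** 2) - ((ind["reg_strength"] - 0.3) ** 2)

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(p1, p2):
    return {k: random.choice((p1[k], p2[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            ind[k] = random.uniform(lo, hi)
    return ind

def evolve(generations=20, pop_size=12, elite=4):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # selection: keep the fittest as parents
        parents = pop[:elite]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
print(evolve())   # best hyperparameter configuration found
```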

20 pages, 833 KiB  
Article
Mobility Prediction and Resource-Aware Client Selection for Federated Learning in IoT
by Rana Albelaihi
Future Internet 2025, 17(3), 109; https://doi.org/10.3390/fi17030109 - 1 Mar 2025
Cited by 1 | Viewed by 994
Abstract
This paper presents the Mobility-Aware Client Selection (MACS) strategy, developed to address the challenges associated with client mobility in Federated Learning (FL). FL enables decentralized machine learning by allowing collaborative model training without sharing raw data, preserving privacy. However, client mobility and limited resources in IoT environments pose significant challenges to the efficiency and reliability of FL. MACS is designed to maximize client participation while ensuring timely updates under computational and communication constraints. The proposed approach incorporates a Mobility Prediction Model to forecast client connectivity and resource availability and a Resource-Aware Client Evaluation mechanism to assess eligibility based on predicted latencies. MACS optimizes client selection, improves convergence rates, and enhances overall system performance by employing these predictive capabilities and a dynamic resource allocation strategy. The evaluation includes comparisons with advanced baselines such as Reinforcement Learning-based FL (RL-based) and Deep Learning-based FL (DL-based), in addition to Static and Random selection methods. For the CIFAR dataset, MACS achieved a final accuracy of 95%, outperforming Static selection (85%), Random selection (80%), RL-based FL (90%), and DL-based FL (93%). Similarly, for the MNIST dataset, MACS reached 98% accuracy, surpassing Static selection (92%), Random selection (88%), RL-based FL (94%), and DL-based FL (96%). Additionally, MACS consistently required fewer iterations to achieve target accuracy levels, demonstrating its efficiency in dynamic IoT environments. This strategy provides a scalable and adaptable solution for sustainable federated learning across diverse IoT applications, including smart cities, healthcare, and industrial automation.
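
The abstract describes forecasting each client's connectivity and resource availability and admitting only clients whose predicted latency fits the round. A minimal sketch of such an eligibility check; the latency model, deadline, connectivity threshold, and field names are illustrative assumptions, not MACS itself:

```python
from dataclasses import dataclass

@dataclass
class Client:
    compute_time: float     # predicted local training time (s)
    link_rate: float        # predicted uplink rate (Mbit/s) from the mobility model
    model_size_mbit: float  # size of the update to upload (Mbit)
    stay_prob: float        # predicted probability of staying connected this round

def predicted_latency(c: Client) -> float:
    """Total round latency: local computation plus upload time."""
    return c.compute_time + c.model_size_mbit / c.link_rate

def eligible_clients(clients, deadline=30.0, min_stay_prob=0.7):
    """Admit clients predicted to finish before the deadline and remain connected."""
    return [i for i, c in enumerate(clients)
            if predicted_latency(c) <= deadline and c.stay_prob >= min_stay_prob]

clients = [
    Client(compute_time=12.0, link_rate=2.0, model_size_mbit=20.0, stay_prob=0.9),
    Client(compute_time=25.0, link_rate=1.0, model_size_mbit=20.0, stay_prob=0.8),
    Client(compute_time=8.0,  link_rate=4.0, model_size_mbit=20.0, stay_prob=0.5),
]
print(eligible_clients(clients))   # -> [0]
```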
