Challenges and Opportunities Presented by Federated Learning in Mobile Computing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: 15 May 2025 | Viewed by 5954

Special Issue Editors


Guest Editor
Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA
Interests: security and privacy in mobile edge computing; efficient and private federated learning; data-driven optimization in cyber-physical systems

Guest Editor
Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA
Interests: computer architecture; machine learning; Internet of Things

Guest Editor
Department of Computer & Information Sciences, Towson University, Towson, MD 21252, USA
Interests: robust and secure deep learning networks; cybersecurity; security and privacy in the Internet of Things (IoT); smart transportation; smart cities; smart healthcare

Special Issue Information

Dear Colleagues,

With the explosive growth of mobile devices and the rapid development of communication technology, mobile computing has emerged as a new paradigm that brings computing power to the edge of the network. Driven by the increasing need for decentralized and privacy-preserving computation, federated learning (FL) has become a pivotal technique in mobile computing. As a decentralized machine learning approach, FL allows mobile devices to collaboratively learn a shared model while keeping the training data local, offering potential solutions to the inherent challenges of data privacy and bandwidth efficiency. Because mobile devices are woven into everyday life, a variety of applications have integrated FL into mobile computing, including face recognition, personalized recommendation, health monitoring, voice recognition, and autonomous driving. Nevertheless, challenges remain in deploying FL in mobile computing, including scalability, privacy and security, energy efficiency, communication efficiency, and model robustness under heterogeneous settings. With this Special Issue, we seek high-quality submissions that highlight recent advances, existing and potential challenges, and opportunities in the field of federated learning in mobile computing.
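The collaborative training loop described above can be sketched in a few lines. The following is a minimal, illustrative FedAvg-style simulation in plain Python; the clients, data, and hyperparameters are all hypothetical, and this is a conceptual sketch rather than a production implementation:

```python
import random

# Toy setting: each client fits a 1-D linear model y = w * x on its own
# local data; the server averages the locally trained weights (FedAvg).
random.seed(0)

def local_train(w, data, lr=0.01, epochs=5):
    """One client's local SGD steps; raw data never leaves the device."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of the squared error
            w -= lr * grad
    return w

# Three clients whose private data all follow y = 2x plus noise.
clients = [[(x, 2 * x + random.uniform(-0.1, 0.1)) for x in range(1, 6)]
           for _ in range(3)]

w_global = 0.0
for rnd in range(10):                          # communication rounds
    local_ws = [local_train(w_global, data) for data in clients]
    w_global = sum(local_ws) / len(local_ws)   # server-side averaging

print(round(w_global, 2))   # ends close to the true slope, 2.0
```

Only the scalar weights travel between clients and server; the `(x, y)` pairs stay local, which is the core privacy property FL relies on.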

For this Special Issue, we invite contributions in the form of original research articles as well as comprehensive literature reviews. Research areas of interest include (but are not limited to) the following:

  • Federated learning algorithms for mobile devices;
  • Communication-efficient federated learning;
  • Energy-efficient federated learning;
  • Secure and privacy-preserving mechanisms in federated learning;
  • Challenges in non-IID data distributions in FL;
  • Challenges in heterogeneous data and devices in FL;
  • Scalability and resource management in federated learning;
  • Machine learning and AI for wireless communications;
  • B5G networks and federated learning;
  • Security and privacy issues in mobile computing;
  • Applications of federated learning in mobile computing.

We look forward to receiving your contributions.

Dr. Xinyue Zhang
Dr. Bobin Deng
Dr. Qianlong Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • federated learning
  • mobile computing
  • security
  • Internet of Things
  • mobile networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (5 papers)


Research

25 pages, 5054 KiB  
Article
Privacy-Preserving Approach to Edge Federated Learning Based on Blockchain and Fully Homomorphic Encryption
by Yun Deng, Baiqi Guo and Shouxue Chen
Electronics 2025, 14(2), 361; https://doi.org/10.3390/electronics14020361 - 17 Jan 2025
Viewed by 934
Abstract
To address the issues of high single-point failure risk, weak privacy protection, and poor resistance to poisoning attacks in edge federated learning, an edge federated learning privacy protection scheme based on blockchain and fully homomorphic encryption is proposed. This scheme uses blockchain technology combined with the CKKS (Cheon–Kim–Kim–Song) fully homomorphic encryption scheme to encrypt computational parameters. This approach reduces the risk of privacy leakage and provides edge federated learning with features such as anti-tampering, resistance to single-point failure, and data traceability. In addition, an unsupervised mechanism for identifying model gradient parameter updates is designed. This mechanism uses the consistency of historical model gradient parameter updates from edge servers as the identification basis. It can effectively detect malicious updates from edge servers, improving the accuracy of the aggregated model. Experimental results show that the proposed method can resist poisoning attacks from 70% of malicious edge servers. It offers privacy protection, transparent model aggregation, and resistance to single-point failure. Furthermore, the method achieves high model accuracy and meets stringent security, accuracy, and traceability requirements in edge federated learning scenarios.
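The additive homomorphism that makes such encrypted aggregation possible can be illustrated without a full CKKS stack. The sketch below uses a toy Paillier cryptosystem, a simpler additively homomorphic scheme chosen here purely for illustration (the paper itself uses CKKS, which also handles real-valued vectors); the tiny primes and integer "updates" are hypothetical and offer no real security:

```python
import math
import random

# Toy Paillier cryptosystem: the PRODUCT of two ciphertexts decrypts to
# the SUM of the plaintexts, so a server can aggregate encrypted model
# updates without ever seeing them. Demo primes only -- NOT secure.
p, q = 104729, 104723
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each client encrypts its (integer-scaled) update; the server multiplies
# the ciphertexts, which adds the plaintexts homomorphically.
updates = [3, 5, 7]                  # hypothetical scaled gradient values
agg_cipher = 1
for u in updates:
    agg_cipher = (agg_cipher * encrypt(u)) % n2

print(decrypt(agg_cipher))           # 15 == 3 + 5 + 7
```

The server learns only the aggregate (15), never any individual client's value, which is the property the blockchain-plus-FHE scheme builds on.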

23 pages, 823 KiB  
Article
dy-TACFL: Dynamic Temporal Adaptive Clustered Federated Learning for Heterogeneous Clients
by Syed Saqib Ali, Mazhar Ali, Dost Muhammad Saqib Bhatti and Bong-Jun Choi
Electronics 2025, 14(1), 152; https://doi.org/10.3390/electronics14010152 - 2 Jan 2025
Viewed by 575
Abstract
Federated learning is a potential solution for training secure machine learning models on a decentralized network of clients, with an emphasis on privacy. However, the management of system/data heterogeneity and the handling of time-varying client interests still pose challenges to traditional federated learning (FL) approaches. Therefore, we propose the concept of dynamic temporal adaptive clustered federated learning (dy-TACFL) to tackle the issue of client heterogeneity in time-varying environments. By continuously analyzing clients and assigning those with similar behavior to appropriate clusters, the proposed federated clustering approach increases both prediction accuracy and clustering efficiency. First, a silhouette coefficient-based threshold is used in the temporal adaptive clustering federated learning (TACFL) algorithm to evaluate cluster stability in each round of federated training. Then, an affinity propagation-based dynamic clustering (APD-CFL) algorithm is proposed to adaptively organize clients into an appropriate number of clusters, taking into account the complex underlying patterns. The experimental findings indicate that the proposed temporal adaptive clustered federated learning algorithms significantly improve prediction accuracy compared to existing clustered federated learning algorithms.
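The stability signal this kind of clustered FL monitors, the silhouette coefficient of the current client-to-cluster assignment, is simple to compute. The sketch below is a plain-Python illustration under assumed names: client "updates" are 2-D vectors, and the 0.5 threshold is illustrative, not the paper's tuned value:

```python
import math

def silhouette(points, labels):
    """Mean silhouette over all points: (b - a) / max(a, b), where a is the
    mean intra-cluster distance and b the mean distance to the nearest
    other cluster."""
    scores = []
    for i, p in enumerate(points):
        same = [math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        other = {}
        for j, q in enumerate(points):
            if labels[j] != labels[i]:
                other.setdefault(labels[j], []).append(math.dist(p, q))
        b = min(sum(d) / len(d) for d in other.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated groups of client updates -> high silhouette, so the
# current clustering is kept; a score below the threshold would trigger
# re-clustering (e.g. via affinity propagation) in the next round.
updates = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
labels  = [0, 0, 0, 1, 1]
score = silhouette(updates, labels)
print(score > 0.5)   # True: clusters are stable this round
```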

22 pages, 1890 KiB  
Article
FedSparse: A Communication-Efficient Federated Learning Framework Based on Sparse Updates
by Jiachen Li, Yuchao Zhang, Yiping Li, Xiangyang Gong and Wendong Wang
Electronics 2024, 13(24), 5042; https://doi.org/10.3390/electronics13245042 - 22 Dec 2024
Viewed by 845
Abstract
Federated learning (FL) strikes a balance between privacy preservation and collaborative model training. However, the periodic transmission of model updates or parameters from each client to the federated server incurs substantial communication overhead, especially for participants with limited network bandwidth. This overhead significantly hampers the practical applicability of FL in real-world scenarios. To address this challenge, we propose FedSparse, an innovative sparse communication framework designed to enhance communication efficiency. The core idea behind FedSparse is to introduce a communication overhead regularization term into the client’s objective function, thereby reducing the number of parameters that need to be transmitted. FedSparse incorporates a Resource Optimization Proximal (ROP) term and an Importance-based Regularization Weighting (IRW) mechanism into the client update objective function. The local update process optimizes both the empirical risk and communication overhead by applying a sparse regularization weighted by update importance. By making minimal modifications to traditional FL approaches, FedSparse effectively reduces the number of parameters transmitted, thereby decreasing the communication overhead. We evaluate the effectiveness of FedSparse through experiments on various datasets under non-independent and identically distributed (non-IID) conditions, demonstrating its flexibility in resource-constrained environments. On the MNIST, Fashion-MNIST, and CIFAR datasets, FedSparse reduces the communication overhead by 24%, 17%, and 5%, respectively, compared to the baseline algorithm, while maintaining similar model performance. Additionally, on simulated non-IID datasets, FedSparse achieves a 6% to 8% reduction in communication resource consumption. By adjusting the sparsity intensity hyperparameter, we demonstrate that FedSparse can be tailored to different FL applications with varying communication resource constraints. Finally, ablation studies highlight the individual contributions of the ROP and IRW modules to the overall improvement in communication efficiency.
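The sparsification idea behind proximal regularization of this kind can be shown with a one-line operator: an L1-style soft-thresholding step zeroes out small update entries so they need not be transmitted at all. The threshold and values below are illustrative, and this sketch omits the paper's ROP/IRW importance weighting:

```python
def soft_threshold(update, lam):
    """Proximal operator of lam * ||.||_1: shrink every entry toward zero
    and clip entries smaller than lam to exactly zero."""
    return [max(abs(u) - lam, 0.0) * (1 if u > 0 else -1) for u in update]

update = [0.8, -0.03, 0.5, 0.01, -0.9, 0.02]   # hypothetical local update
sparse = soft_threshold(update, lam=0.05)

# Only the nonzero entries (as index/value pairs) need to be sent.
sent = [(i, v) for i, v in enumerate(sparse) if v != 0.0]
print(len(sent), "of", len(update), "entries transmitted")
# prints: 3 of 6 entries transmitted
```

Raising `lam` trades accuracy for bandwidth, which mirrors the sparsity-intensity hyperparameter the paper tunes per deployment.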

14 pages, 1578 KiB  
Article
A Platform of Federated Learning Management for Enhanced Mobile Collaboration
by Farkhod Yusubov and KangYoon Lee
Electronics 2024, 13(20), 4104; https://doi.org/10.3390/electronics13204104 - 18 Oct 2024
Cited by 1 | Viewed by 1231
Abstract
Federated learning (FL) has emerged as a crucial technology in today’s data-centric environment, enabling decentralized machine learning while safeguarding user privacy. This study introduces “Federated Learning ML Operations (FedOps) Mobile”, a novel FL framework optimized for the dynamic and heterogeneous ecosystem of mobile devices. FedOps Mobile addresses the inherent challenges of FL—such as system scalability, device heterogeneity, and operational efficiency—through advanced on-device training using TensorFlow Lite and CoreML. The framework’s innovative approach includes sophisticated client selection mechanisms that assess device readiness and capabilities, ensuring equitable and efficient participation across the network. Additionally, FedOps Mobile leverages remote device control for seamless task management and continuous learning, all without compromising the user experience. The main contribution of this study is the demonstration that federated learning across heterogeneous devices, especially those using different operating systems, can be both practical and efficient using the FedOps Mobile framework. This was validated through experiments that evaluated three key areas: operational efficiency, model personalization, and resource optimization in multi-device settings. The results showed that the proposed method excels in client selection, energy consumption, and model optimization, establishing a new benchmark for federated learning in diverse and complex environments.
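Readiness-based client selection of the kind described can be sketched as ranking devices by a weighted score and taking the top-k. The device fields, weights, and names below are hypothetical illustrations, not the framework's actual scoring function:

```python
def readiness(dev, w_battery=0.4, w_net=0.4, w_idle=0.2):
    """Weighted readiness score; higher means a better training candidate."""
    return (w_battery * dev["battery"] +     # 0..1 charge level
            w_net * dev["bandwidth"] +       # 0..1 normalized link quality
            w_idle * dev["idle"])            # 1 if device is idle, else 0

devices = [
    {"id": "android-1", "battery": 0.90, "bandwidth": 0.8, "idle": 1},
    {"id": "ios-1",     "battery": 0.30, "bandwidth": 0.9, "idle": 0},
    {"id": "android-2", "battery": 0.70, "bandwidth": 0.2, "idle": 1},
    {"id": "ios-2",     "battery": 0.95, "bandwidth": 0.9, "idle": 1},
]

k = 2
selected = sorted(devices, key=readiness, reverse=True)[:k]
print([d["id"] for d in selected])   # the two most "ready" devices
```

Scoring devices uniformly regardless of OS is what lets a mixed Android/iOS fleet participate on equal footing, which is the heterogeneity point the paper emphasizes.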

15 pages, 4816 KiB  
Article
Evaluation of Thermal Stress on Heterogeneous IoT-Based Federated Learning
by Yi Gu, Liang Zhao, Tianze Liu and Shaoen Wu
Electronics 2024, 13(16), 3140; https://doi.org/10.3390/electronics13163140 - 8 Aug 2024
Viewed by 1250
Abstract
Federated learning is a novel paradigm allowing the training of a global machine-learning model on distributed devices. It shares model parameters instead of private raw data during the entire model training process. While federated learning enables machine learning processes to take place collaboratively on Internet of Things (IoT) devices, IoT devices with limited resource budgets typically have less security protection than data centers and are more vulnerable to potential thermal stress. Current research on the evaluation of federated learning is mainly based on the simulation of multiple clients/processes on a single machine/device. However, there is a gap in understanding the performance of federated learning under thermal stress on real-world distributed low-power heterogeneous IoT devices. Our previous work was among the first to evaluate the performance of federated learning under thermal stress on real-world IoT-based distributed systems. In this paper, we extended our work to a larger scale of heterogeneous real-world IoT-based distributed systems to further evaluate the performance of federated learning under thermal stress. To the best of our knowledge, the presented work is among the first to evaluate the performance of federated learning under thermal stress on real-world heterogeneous IoT-based systems. We conducted comprehensive experiments using the MNIST dataset and various performance metrics, including training time, CPU and GPU utilization rates, temperature, and power consumption. We varied the proportion of clients under thermal stress in each group of experiments and systematically quantified the effectiveness and real-world impact of thermal stress on the low-end heterogeneous IoT-based federated learning system. We added 67% more training epochs and 50% more clients compared with our previous work. The experimental results demonstrate that thermal stress remains effective against IoT-based federated learning systems: both the global model and device performance degrade even when only a small fraction of IoT devices is affected. The results also show that clients more heavily affected by thermal stress tend to have a greater impact on the performance of the federated learning system (FLS).
