Article

Optimizing Client Participation in Communication-Constrained Federated LLM Adaptation with LoRA

Faranaksadat Solat and Joohyung Lee *
Department of Computing, Gachon University, Seongnam 13120, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(21), 6538; https://doi.org/10.3390/s25216538
Submission received: 8 September 2025 / Revised: 7 October 2025 / Accepted: 22 October 2025 / Published: 23 October 2025

Abstract

Federated learning (FL) enables privacy-preserving adaptation of large language models (LLMs) across distributed clients. However, deploying FL in edge environments remains challenging because of the high communication overhead of full-model updates. Recent advances in parameter-efficient fine-tuning (PEFT), particularly low-rank adaptation (LoRA), have substantially reduced update sizes by injecting lightweight trainable matrices into pretrained transformers, thereby making FL with LLMs more feasible. In this paper, we propose LoRaC-GA, a communication-aware optimization framework that dynamically determines the optimal number of clients to participate in each round under a fixed bandwidth constraint. We formulated a max-min objective to jointly maximize the model accuracy and communication efficiency and solved the resulting non-convex problem using a genetic algorithm (GA). To further reduce the overhead, we integrated a structured peer-to-peer collaboration protocol with log₂(K) complexity, enabling scalable communication without full connectivity. The simulation results demonstrate that LoRaC-GA adaptively selects the optimal client count, achieving competitive accuracy while significantly reducing the communication cost. The proposed framework is well-suited for bandwidth-constrained edge deployments involving large-scale LLMs.
Keywords: federated learning; large language models; communication efficiency; client selection; parameter-efficient tuning
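
To make the optimization loop described in the abstract concrete, the following is a minimal, self-contained sketch: a genetic algorithm searches for the per-round client count that maximizes the worse of an accuracy term and a communication-efficiency term under a fixed bandwidth budget, with a log₂(K) peer-collaboration cost folded into the communication model. All constants, the accuracy proxy, and the GA operators here are illustrative assumptions, not the paper's actual formulation or implementation.

import math
import random

# Hypothetical constants for illustration (not taken from the paper).
LORA_UPDATE_MB = 12.0    # assumed size of one client's LoRA update
BANDWIDTH_MB = 600.0     # assumed per-round bandwidth budget
MAX_CLIENTS = 64         # assumed client pool size

def accuracy_proxy(k: int) -> float:
    """Assumed diminishing-returns model: accuracy rises with k, then saturates."""
    return 1.0 - 1.0 / (1.0 + 0.3 * k)

def comm_cost(k: int) -> float:
    """Uplink cost of k LoRA updates plus log2(k) structured peer-to-peer rounds."""
    p2p_rounds = math.ceil(math.log2(k)) if k > 1 else 0
    return k * LORA_UPDATE_MB + p2p_rounds * LORA_UPDATE_MB

def fitness(k: int) -> float:
    """Max-min score: the worse of normalized accuracy and communication efficiency."""
    cost = comm_cost(k)
    if cost > BANDWIDTH_MB:
        return -1.0  # infeasible under the fixed bandwidth constraint
    efficiency = 1.0 - cost / BANDWIDTH_MB
    return min(accuracy_proxy(k), efficiency)

def ga_select_client_count(pop_size=20, generations=50, mut_rate=0.2, seed=0) -> int:
    """Tiny genetic algorithm searching over the integer client count k."""
    rng = random.Random(seed)
    pop = [rng.randint(1, MAX_CLIENTS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                # crossover: midpoint of parents
            if rng.random() < mut_rate:         # mutation: small random step
                child = min(MAX_CLIENTS, max(1, child + rng.randint(-4, 4)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    k_star = ga_select_client_count()
    print(f"clients per round: {k_star}, round cost: {comm_cost(k_star):.1f} MB")

The min(...) inside fitness encodes the max-min objective: the search can only improve the score by improving whichever of accuracy or communication efficiency is currently worse, which is why the selected client count balances the two rather than maximizing either alone.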

Share and Cite

Solat, F.; Lee, J. Optimizing Client Participation in Communication-Constrained Federated LLM Adaptation with LoRA. Sensors 2025, 25, 6538. https://doi.org/10.3390/s25216538
