Client Selection in Federated Learning on Resource-Constrained Devices: A Game Theory Approach
Abstract
1. Introduction
- A multi-objective client selection strategy: we propose RBPS, a game-theoretic method based on Nash equilibrium (NE) that jointly considers accuracy, battery level, and privacy sensitivity. The method dynamically adjusts each client’s reward using contextual device information.
- Cooperative participation logic: clients with the highest rewards are selected to contribute and receive updated models. Non-selected clients remain inactive, reducing unnecessary battery usage and limiting the exposure of sensitive data.
- Realistic simulation of heterogeneous devices: we simulate a broad range of clients, including smartphones, edge nodes, and IoT devices, with assigned profiles reflecting real-world battery, privacy, and accuracy characteristics.
- Comprehensive empirical evaluation: we perform extensive experiments using four benchmark datasets (MNIST, EMNIST, Fashion-MNIST, and CIFAR-10) over 300 communication rounds. Our evaluation spans various client pool sizes (10, 25, 50, 100, 1000, and 5000) and includes both homogeneous client configurations (e.g., all IoT or all smartphones) and heterogeneous client pools that mix devices of different capabilities. We also compare RBPS against several state-of-the-art (SOA) methods, including Oort, FedCS, FRS, and FedProm, to assess its effectiveness in balancing accuracy, battery preservation, and privacy.
- Analysis of trade-offs: our results highlight that RBPS provides a better balance across accuracy, battery sustainability, and privacy preservation, particularly in large, heterogeneous client pools where other methods either sacrifice performance or overburden specific device classes.
2. Related Work
3. Preliminaries
3.1. Federated Learning
The global objective is $\min_{w} F(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w)$, where:
- $w$ is the global model parameter vector;
- $F_k(w)$ is the local objective function (e.g., cross-entropy loss) on client $k$’s data;
- $n_k$ is the number of data points on client $k$;
- $n = \sum_{k=1}^{K} n_k$ is the total number of data points across all $K$ clients.
Federated Learning Algorithms
- FedAvg is a basic aggregation method proposed by McMahan et al. [18]. It works by selecting a subset of clients in each training round. Each selected client performs local model updates on its private data using stochastic gradient descent (SGD) or another optimizer. The clients then send their updated model parameters to a central server, which aggregates them (typically through weighted averaging based on dataset sizes) to form the new global model. FedAvg is efficient in terms of communication and computation and serves as the standard baseline in many FL studies.
- FedProx proposed by Li et al. [19] extends FedAvg by introducing a proximal term to the local objective function. This regularization term penalizes significant divergence of local model parameters from the global model, which helps stabilize training when client data is non-IID (i.e., not independently and identically distributed). FedProx is particularly useful in heterogeneous environments where clients may have diverse data distributions, computation capacities, and participation frequencies.
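The weighted-averaging step of FedAvg can be sketched in a few lines. The pure-Python function below is an illustration (the function and argument names are ours), assuming each client reports its updated parameter vector and local dataset size $n_k$:

```python
# Sketch of FedAvg's server-side aggregation: a weighted average of
# client parameter vectors, with weights n_k / n (dataset-size based).

def fedavg_aggregate(client_params, client_sizes):
    """client_params: list of parameter vectors (lists of floats);
    client_sizes: list of local dataset sizes n_k, one per client."""
    n = sum(client_sizes)                 # total data points across clients
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for w_k, n_k in zip(client_params, client_sizes):
        for j in range(dim):
            global_params[j] += (n_k / n) * w_k[j]
    return global_params
```

With two clients holding 1 and 3 samples, the second client's parameters dominate the average with weight 0.75.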
3.2. Client Selection in FL
- Random Selection: clients are chosen randomly, often used as a baseline.
- Battery Optimization: prioritizing clients with sufficient battery levels to prevent premature device shutdowns during training.
- Privacy Preservation: choosing clients based on their privacy needs, ensuring that the most privacy-sensitive data is processed by devices with adequate security.
- Accuracy-based Selection: selecting clients with high local model accuracy to speed up convergence.
3.3. Nash Equilibrium
3.4. Reward-Based Payoff Strategy
- Accuracy: the local model’s accuracy on the client’s data.
- Battery Level: the remaining battery percentage of the device.
- Privacy: the privacy level of the data, which is impacted by the client’s ability to maintain confidentiality.
The reward for client $i$ takes the form $R_i = \alpha A_i + \beta B_i + \gamma P_i$, where:
- $A_i$ is the contribution to model accuracy (e.g., historical validation performance);
- $B_i$ is the normalized battery level;
- $P_i$ is the privacy preference or sensitivity score (e.g., set by the user or device type);
- $\alpha$, $\beta$, and $\gamma$ are weights that adjust the importance of each factor based on the client’s characteristics.
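A minimal sketch of this weighted-sum reward (function and argument names are ours; the weights are assumed to sum to one, consistent with the weight condition in Section 4.2):

```python
# Reward-based payoff sketch: weighted sum of accuracy, battery level,
# and privacy score, with weights summing to one.

def client_reward(accuracy, battery, privacy, alpha, beta, gamma):
    assert abs(alpha + beta + gamma - 1.0) < 1e-9, "weights must sum to 1"
    return alpha * accuracy + beta * battery + gamma * privacy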
3.5. Battery Consumption and Privacy Loss
3.5.1. Case 1: Device Participates in Training
- $B$: batch size;
- $C$: model complexity;
- $S$: number of steps per epoch;
- $\lambda$: data size weight;
- $\epsilon$: device-specific energy cost constant (based on mAh per operation).
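The expected per-round battery cost can be illustrated with a simple model over the listed factors. The multiplicative combination below is our own illustrative assumption, not necessarily the paper's exact formula:

```python
# Illustrative battery-cost sketch for a participating device: the cost
# grows with the amount of computation performed (batch size, model
# complexity, steps per epoch, data-size weight), scaled by a
# device-specific mAh-per-operation constant.

def expected_battery_cost(batch_size, model_complexity, steps_per_epoch,
                          data_weight, energy_const):
    return (energy_const * batch_size * model_complexity
            * steps_per_epoch * data_weight)
```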
3.5.2. Case 2: Device Does Not Participate in Training
- $P_t$ represents the privacy level at time $t$;
- $P_{t+1}$ is the privacy level after the client has participated in the next round;
- $\delta$ is the privacy degradation factor, a parameter that defines how much each update reduces privacy;
- $\|\Delta w_i\|$ refers to the magnitude of the update sent by the client to the server, determined by the difference between the client’s local model and the global model;
- the total number of clients $K$ is included to account for the dilution of privacy as more clients contribute updates to the global model.
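One plausible reading of this degradation rule, in which the privacy level drops in proportion to the update magnitude and is diluted by the number of contributing clients (our reconstruction for illustration, not necessarily the paper's exact formula):

```python
# Illustrative privacy-level update after a participation round:
# P_{t+1} = P_t - delta * ||update|| / K, clipped at zero.

def privacy_update(p_t, delta, update_magnitude, num_clients):
    p_next = p_t - delta * update_magnitude / num_clients
    return max(p_next, 0.0)  # privacy level cannot go below zero
```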
4. Game Theoretic Client Selection Model
4.1. System Setup and Objective
- $s_i = 1$: participate in training.
- $s_i = 0$: skip training.
- $A_i$: current local model accuracy.
- $B_i$: normalized battery level.
- $P_i$: privacy preference (higher values indicate stronger preferences).
4.2. Nash Game Formulation
- Strategy set: $S_i = \{0, 1\}$, where $1$ denotes participation and $0$ denotes skipping.
- Utility function: $u_i(s_i, s_{-i}) = s_i \left( \alpha_t A_i + \beta_t B_i + \gamma_t P_i - \lambda_B \Delta B_i - \lambda_P \Delta P_i \right)$.
- $s_{-i}$ represents the action profile of all clients other than $i$.
- $\alpha_t$, $\beta_t$, and $\gamma_t$ are time-varying reward weights that satisfy $\alpha_t + \beta_t + \gamma_t = 1$. These weights are dynamically updated at each training round based on the real-time state of each client, reflecting three key aspects: the client’s contribution to model accuracy ($\alpha_t$), its battery constraints ($\beta_t$), and its privacy sensitivity ($\gamma_t$). The adjustment is grounded in both theoretical reasoning (Formulas (4)–(6)) and empirical observation. In practice, involving a client in training improves global model accuracy, but at the cost of energy consumption and potential privacy risk. To balance this trade-off, raw utility scores are computed from current client metrics (recent local model accuracy, normalized battery level, and a predefined privacy score) and then normalized so that the resulting weights sum to one. This dynamic weighting lets the system prioritize clients that provide high utility while respecting their limitations: clients with low battery or high privacy sensitivity receive a higher $\beta_t$ or $\gamma_t$, which lowers their chance of being selected, whereas clients contributing significantly to accuracy are favored through an increased $\alpha_t$. This adaptive mechanism promotes fairness, improves training efficiency, and keeps the strategy effective under diverse and changing edge environments.
- $\lambda_B$ and $\lambda_P$ are penalty weights for battery and privacy degradation, respectively.
- $\Delta B_i$ and $\Delta P_i$ denote the expected battery drop and privacy loss.
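The weight-update logic described above (raw scores normalized to sum to one) can be sketched as follows. The raw-score construction, using the battery deficit so that low-battery clients receive a larger weight, is our own illustrative choice:

```python
# Per-round reward-weight update sketch: compute raw scores from the
# client's current state, then normalize so the weights sum to one.

def dynamic_weights(acc_score, battery_level, privacy_score):
    # Raw scores: accuracy contribution; battery *constraint* (a lower
    # battery level yields a larger raw score); privacy sensitivity.
    raw = [acc_score, 1.0 - battery_level, privacy_score]
    total = sum(raw)
    alpha_t, beta_t, gamma_t = (r / total for r in raw)
    return alpha_t, beta_t, gamma_t
```

A client with a nearly full battery thus gets a small battery weight, making its selection depend mostly on accuracy and privacy.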
4.3. Client Selection Algorithm
Algorithm 1: Nash-Based Client Participation Decision
4.4. Battery and Privacy Cost Estimation
- $\Delta B_i^t$: expected battery consumption if client $i$ is selected for training at round $t$.
- $\Delta P_i^t$: expected privacy degradation if client $i$ is selected for training at round $t$.
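Putting the pieces together, a client's participation decision can be sketched as a best response: participate only when the reward outweighs the penalty-weighted expected costs. The function names and default penalty weights below are illustrative:

```python
# Best-response participation sketch: utility of participating is the
# reward minus penalty-weighted expected battery drop and privacy loss;
# skipping yields zero utility, so a client participates iff u > 0.

def participation_utility(reward, battery_drop, privacy_loss, lam_b, lam_p):
    return reward - lam_b * battery_drop - lam_p * privacy_loss

def decide(reward, battery_drop, privacy_loss, lam_b=1.0, lam_p=1.0):
    u = participation_utility(reward, battery_drop, privacy_loss, lam_b, lam_p)
    return 1 if u > 0 else 0  # 1 = participate, 0 = skip
```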
5. Realistic Modeling of Resource Constraints
5.1. Device Classification and Capabilities
5.2. Battery Consumption Modeling
5.3. Privacy Loss Modeling
6. Experimental Evaluation
6.1. Experimental Setup
6.1.1. Datasets
- MNIST: the MNIST dataset consists of 60,000 grayscale images of handwritten digits (0–9) for training and 10,000 images for testing. Each image is 28 × 28 pixels.
- EMNIST: the EMNIST dataset is an extension of MNIST, including both digits and letters (uppercase and lowercase). It contains 814,255 characters for training and 141,292 characters for testing.
- Fashion-MNIST: Fashion-MNIST contains 60,000 28 × 28 grayscale images of 10 fashion categories, with 10,000 test images. This dataset is a more challenging alternative to MNIST.
- CIFAR-10: The CIFAR-10 dataset consists of 60,000 32 × 32 color images categorized into 10 different classes, with 6000 images per class.
6.1.2. Model Architecture
- For MNIST and Fashion-MNIST: a simpler convolutional neural network (CNN) architecture was used:
  - Input layer: 28 × 28 pixels for both MNIST and Fashion-MNIST.
  - Convolutional layers:
    - First convolutional layer with 32 filters of size 3 × 3 and ReLU activation;
    - Second convolutional layer with 64 filters of size 3 × 3 and ReLU activation.
  - Fully connected layer: dense layer with 128 units and ReLU activation.
  - Output layer: 10 units (corresponding to 10 classes) with softmax activation.
- For EMNIST: given the increased data complexity and the inclusion of both digits and letters, a slightly deeper model with an additional convolutional layer was used:
  - Input layer: 28 × 28 pixels.
  - Convolutional layers:
    - First convolutional layer with 32 filters of size 3 × 3 and ReLU activation;
    - Second convolutional layer with 64 filters of size 3 × 3 and ReLU activation;
    - Third convolutional layer with 128 filters of size 3 × 3 and ReLU activation.
  - Fully connected layer: dense layer with 256 units and ReLU activation.
  - Output layer: 62 units (corresponding to 62 classes) with softmax activation.
- For CIFAR-10: a deeper CNN architecture was used to handle the complexity of the CIFAR-10 dataset:
  - Input layer: 32 × 32 pixels.
  - Convolutional layers:
    - First convolutional layer with 32 filters of size 3 × 3 and ReLU activation;
    - Second convolutional layer with 64 filters of size 3 × 3 and ReLU activation;
    - Third convolutional layer with 128 filters of size 3 × 3 and ReLU activation.
  - Fully connected layer: dense layer with 512 units and ReLU activation.
  - Output layer: 10 units (corresponding to 10 classes) with softmax activation.
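As a sanity check on the layer sizing above, the following sketch propagates spatial shapes and counts parameters for the MNIST/Fashion-MNIST model. It assumes 3 × 3 convolutions with valid padding, stride 1, and no pooling, since the text does not specify padding or pooling, so the counts are illustrative rather than exact:

```python
# Shape propagation and parameter count for the described two-conv CNN.
# Assumptions (ours): valid padding, stride 1, no pooling layers.

def conv_out(size, kernel=3):
    # Output spatial size of a valid, stride-1 convolution.
    return size - kernel + 1

def mnist_cnn_params(input_size=28, in_channels=1, num_classes=10):
    h = conv_out(input_size)                    # 28 -> 26 after conv1
    params = 32 * (3 * 3 * in_channels + 1)     # conv1: 32 filters (+bias)
    h = conv_out(h)                             # 26 -> 24 after conv2
    params += 64 * (3 * 3 * 32 + 1)             # conv2: 64 filters (+bias)
    flat = h * h * 64                           # flatten: 24 * 24 * 64
    params += flat * 128 + 128                  # dense layer, 128 units
    params += 128 * num_classes + num_classes   # softmax output layer
    return h, params
```

Under these assumptions the dense layer dominates the parameter budget, which is why the deeper EMNIST and CIFAR-10 variants widen it to 256 and 512 units.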
6.2. Model Parameters
6.3. Simulation of Device Heterogeneity
7. Results and Discussion
7.1. Impact of Device Heterogeneity on FL Performance
- Homogeneous pools: all clients of a single type (smartphones, edge, or IoT).
- Semi-heterogeneous pools: 50 clients from each of two types (e.g., smartphones + edge).
- Fully heterogeneous pools: 33 smartphones, 33 edge devices, and 34 IoT devices.
7.1.1. Impact on Accuracy Across Datasets
- Figure 2: while all pools performed well, homogeneous smartphone-only pools reached 91% accuracy more quickly. However, the fully heterogeneous 3-type pool achieved a similar result while maintaining a smoother convergence path due to the diverse contributions from different devices.
- Figure 3: the 2-type and 3-type heterogeneous pools outperformed the homogeneous 1-type pools. IoT-only and edge-only pools struggled to reach competitive accuracy, with the 3-type pool maintaining steady gains through the mid and late rounds.
- Figure 4: the performance gap widened, with homogeneous 1-type pools overfitting quickly or slowing down, particularly the smartphone-only pools. In contrast, the 3-type heterogeneous pools preserved accuracy and reached 88% with less fluctuation.
- Figure 5: this more complex task, requiring deeper models and better generalization, showed that homogeneous 1-type pools plateaued below 75%. The 3-type heterogeneous pools, however, steadily climbed to 78%, highlighting the importance of device diversity as model complexity increases.
7.1.2. Impact on Battery Consumption Across Datasets
- Figure 6: homogeneous smartphone-only pools experienced sharp battery drops (>9% per round), while IoT-only pools consumed less battery (3%) but made minimal progress. In contrast, the fully heterogeneous 3-type pools balanced battery usage effectively: smartphones contributed early, while edge and IoT devices helped later, ensuring more sustainable battery consumption.
- Figure 7, Figure 8 and Figure 9: as the models grew in size, the battery burden increased. Homogeneous 1-type pools, particularly smartphone-only ones, became unsustainable. On the other hand, the 3-type heterogeneous pools intelligently distributed the workload across devices, preventing early dropout and maintaining stable total battery usage.
7.1.3. Impact on Privacy Across Datasets
7.1.4. Effect of Heterogeneous Client Distribution on FL Trade-Offs
7.2. Impact of Client Pool Size on Accuracy, Battery, and Privacy
- 10 clients: approx. 3 smartphones, 3 edge, and 4 IoT;
- 100 clients: 33 smartphones, 33 edge, and 34 IoT;
- 1000 clients: 330 smartphones, 330 edge, and 340 IoT.
7.3. Comparative Analysis of Client Selection Strategies
- Random selection: clients are selected uniformly at random;
- Accuracy-only selection: chooses clients based on their past contribution to model accuracy;
- Battery-only selection: prioritizes clients with the highest remaining battery power;
- Privacy-only selection: selects clients based on minimizing privacy risks;
- RBPS.
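The four baselines above can be sketched as simple ranking rules over per-client metrics. The client representation and field names (`acc`, `battery`, `privacy_risk`) are ours; each selector returns the indices of the chosen clients:

```python
import random

# Baseline client-selection sketches: random, accuracy-only,
# battery-only, and privacy-only ranking rules.

def random_selection(clients, k, seed=0):
    return random.Random(seed).sample(range(len(clients)), k)

def accuracy_only(clients, k):
    # Highest past accuracy contribution first.
    return sorted(range(len(clients)), key=lambda i: -clients[i]["acc"])[:k]

def battery_only(clients, k):
    # Highest remaining battery first.
    return sorted(range(len(clients)), key=lambda i: -clients[i]["battery"])[:k]

def privacy_only(clients, k):
    # Lowest privacy risk first.
    return sorted(range(len(clients)), key=lambda i: clients[i]["privacy_risk"])[:k]
```

RBPS differs from these in that it scores all three factors jointly with dynamic weights rather than ranking on a single metric.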
7.3.1. Accuracy Performance over 300 Rounds
7.3.2. Loss Reduction over 300 Rounds
7.3.3. Battery Level over 300 Rounds
7.3.4. Privacy Loss over 300 Rounds
7.4. Discussion
8. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
FL | Federated learning |
RBPS | Reward-based payoff strategy |
ML | Machine learning |
NE | Nash equilibrium |
SOA | State-of-the-art |
FedCS | Federated client selection |
FRS | Fair resource scheduling |
MDA | Multi-device aggregation |
TiFL | Tiered FL |
FedAvg | Federated Averaging |
SGD | Stochastic gradient descent |
GT | Game theory |
NN | Neural network |
Flwr | Flower |
CNN | Convolutional neural network |
References
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Cho, J.; Kim, H. Towards Efficient Client Selection in Federated Learning: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1621–1634. [Google Scholar]
- Tan, Y.; Long, G.; Liu, L.; Zhou, T.; Lu, Q.; Jiang, J.; Zhang, C. FedProto: Federated Prototype Learning Across Heterogeneous Clients. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 8432–8440. [Google Scholar]
- Ndou, N.; Ajoodha, R.; Jadhav, A. Music Genre Classification: A Review of Deep-Learning and Traditional Machine-Learning Approaches. In Proceedings of the 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 21–24 April 2021; pp. 1–6. [Google Scholar]
- Wang, P.; Fan, E.; Wang, P. Comparative Analysis of Image Classification Algorithms Based on Traditional Machine Learning and Deep Learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
- Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–7. [Google Scholar]
- Lai, K.; Bennis, M.; Niyato, D. Oort: Efficient Federated Learning via Guided Participant Selection. IEEE Trans. Mob. Comput. 2021, 20, 3305–3318. [Google Scholar]
- Sultana, A.; Haque, M.M.; Chen, L.; Xu, F.; Yuan, X. Eiffel: Efficient and Fair Scheduling in Adaptive Federated Learning. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 4282–4294. [Google Scholar] [CrossRef]
- Niu, X.; Sun, L.; Song, L.; Li, B. FedProm: Preference- and priority-aware client selection for personalized federated learning. In Proceedings of the ACM International Conference on Multimedia (MM), Lisboa, Portugal, 10–14 October 2022; pp. 4345–4353. [Google Scholar]
- Liu, Y.; Yang, Y.; Zhang, Y.; Wang, W. FedGCS: Federated graph client selection in edge computing. In Proceedings of the IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020. [Google Scholar]
- Diao, Q.; Meng, T.; Xiao, W.; Li, M.; Chen, J.; Zhang, M. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4290–4300. [Google Scholar]
- Chai, Z.; Ali, A.; Zawad, S.; Truex, S.; Anwar, A.; Baracaldo, N.; Zhou, Y.; Ludwig, H.; Yan, F.; Cheng, Y. Tifl: A Tier-Based Federated Learning System. In Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing (HPDC), Stockholm, Sweden, 23–26 June 2020; pp. 125–136. [Google Scholar]
- Li, Q.; Li, X.; Zhou, L.; Yan, X. Adafl: Adaptive client selection and dynamic contribution evaluation for efficient federated learning. In Proceedings of the ICASSP 2024—2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; pp. 6645–6649. [Google Scholar]
- Zaw, C.W.; Pandey, S.R.; Kim, K.; Hong, C.S. Energy-aware resource management for federated learning in multi-access edge computing systems. IEEE Access 2021, 9, 34938–34950. [Google Scholar] [CrossRef]
- Xu, Y.; Xiao, M.; Wu, J.; Tan, H.; Gao, G. A personalized privacy preserving mechanism for crowdsourced federated learning. IEEE Trans. Mob. Comput. 2023, 23, 1568–1585. [Google Scholar] [CrossRef]
- Dritsas, E.; Trigka, M. Federated Learning for IoT: A Survey of Techniques, Challenges, and Applications. J. Sens. Actuator Netw. 2025, 14, 9. [Google Scholar] [CrossRef]
- Liu, Y.; Qu, Z.; Wang, J. Compressed Hierarchical Federated Learning for Edge-Level Imbalanced Wireless Networks. IEEE Trans. Comput. Soc. Syst. 2025, 1–12. [Google Scholar] [CrossRef]
- McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. In Proceedings of the Machine Learning and Systems (MLSys), Austin, TX, USA, 2–4 March 2020; Volume 2, pp. 429–450. [Google Scholar]
- Marfo, W.; Tosh, D.K.; Moore, S.V. Adaptive client selection in federated learning: A network anomaly detection use case. arXiv 2025, arXiv:2501.15038. [Google Scholar]
- Liu, J.; Sun, H.; Xu, H. Bayesian Nash Equilibrium in price competition under multinomial logit demand. Eur. J. Oper. Res. 2025, 324, 669–689. [Google Scholar] [CrossRef]
- Facchinei, F.; Kanzow, C. Generalized Nash equilibrium problems. Ann. Oper. Res. 2010, 175, 177–211. [Google Scholar] [CrossRef]
- Armoogum, S.; Bassoo, V. Privacy of energy consumption data of a household in a smart grid. In Smart Power Distribution Systems; Elsevier: Amsterdam, The Netherlands, 2019; pp. 163–177. [Google Scholar]
- Klass, A.B.; Wilson, E.J. Remaking energy: The critical role of energy consumption data. Calif. Law Rev. 2016, 104, 1095. [Google Scholar]
- Espressif Systems. ESP32 Series Datasheet [Online]. 2022. Available online: https://www.espressif.com/sites/default/files/documentation/esp32_datasheet_en.pdf (accessed on 2 July 2024).
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Rahmati, A.; Zhong, L. Understanding human–battery interaction on mobile phones. In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services, Singapore, 9–12 September 2007; pp. 265–272. [Google Scholar]
- Fredrikson, M.; Jha, S.; Ristenpart, T. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1322–1333. [Google Scholar]
- Shokri, R.; Stronati, M.; Song, L.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the IEEE Symposium on Security and Privacy, San Jose, CA, USA, 22–26 May 2017; pp. 3–18. [Google Scholar]
- Katsomallos, M.; Tzompanaki, K.; Kotzinos, D. Privacy, space and time: A survey on privacy-preserving continuous data publishing. J. Spat. Inf. Sci. 2019, 19, 57–103. [Google Scholar] [CrossRef]
- Zhou, J.; Cao, Z.; Dong, X.; Vasilakos, A.V. Security and privacy for cloud-based IoT: Challenges. IEEE Commun. Mag. 2017, 55, 26–33. [Google Scholar] [CrossRef]
- NVIDIA Corporation. Jetson Nano Developer Kit User Guide [Online]. 2021. Available online: https://developer.nvidia.com/embedded/jetson-nano-developer-kit (accessed on 3 August 2024).
- Google LLC. Pixel 6 Battery Specifications [Online]. 2022. Available online: https://store.google.com/product/pixel_6_specs (accessed on 10 August 2024).
- Adafruit Industries. Adafruit: Electronics, DIY Electronics, and Open-Source Hardware. Available online: https://www.adafruit.com/ (accessed on 3 May 2025).
- NVIDIA Corporation, Jetson Modules, Support, Ecosystem, and Lineup. Available online: https://developer.nvidia.com/embedded/jetson-modules (accessed on 3 May 2025).
- Samsung Electronics Italia S.p.A. Samsung Italia: Smartphone, Elettrodomestici, TV, Informatica. Available online: https://www.samsung.com/it/ (accessed on 3 May 2025).
- Google LLC. Google Store Italia: Dispositivi e Accessori Made by Google. Available online: https://store.google.com/?hl=it (accessed on 3 May 2025).
Method | Accuracy-Aware | Battery-Aware | Privacy-Aware | Heterogeneous Devices | Cooperative | Selective Update |
---|---|---|---|---|---|---|
FedCS [6] | ✔ | ✔ | ✘ | ✔ | ✘ | ✘ |
Oort [7] | ✔ | ✔ | ✘ | ✔ | ✘ | ✘ |
FRS [8] | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ |
FedProm [9] | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ |
FedGCS [10] | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ |
MDA [11] | ✔ | ✘ | ✘ | ✔ | ✔ | ✘ |
TiFL [12] | ✔ | ✔ | ✘ | ✔ | ✘ | ✘ |
RBPS (Ours) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Device Type | Examples | CPU | Battery (mAh) | Uplink | Privacy Risk | Notes |
---|---|---|---|---|---|---|
IoT Devices [34] | ESP32, Arduino | Low | 200–500 | Slow | High | Battery constrained and privacy-sensitive |
Edge Devices [35] | Raspberry Pi, Jetson Nano | Medium | 2000–4000 | Moderate | Medium | Moderate performance, battery-aware |
Smartphones [36,37] | Pixel, Galaxy, etc. | High | 3000–5000 | Fast | Low | High performance, latency-sensitive |
Device Type | Consumption if Training (mAh) | Consumption if Idle (mAh) | Avg. Drop (%) |
---|---|---|---|
IoT Devices | 5–15 | <1 | 3–4% |
Edge Devices | 50–100 | 5–10 | 5–7% |
Smartphones | 150–250 | 20–30 | 8–10% |
Device Type | Privacy Sensitivity Score |
---|---|
IoT Devices | 0.8–1.0 |
Edge Devices | 0.4–0.6 |
Smartphones | 0.2–0.5 |
Parameter | MNIST & Fashion-MNIST | EMNIST | CIFAR-10 |
---|---|---|---|
Optimizer | Adam | Adam | Adam |
Learning Rate | 0.001 | 0.001 | 0.001 |
Batch Size | 32 | 64 | 64 |
Epochs | 20 | 30 | 50 |
Convolution layer 1 | 32 filters, 3 × 3, ReLU | 32 filters, 3 × 3, ReLU | 32 filters, 3 × 3, ReLU |
Convolution layer 2 | 64 filters, 3 × 3, ReLU | 64 filters, 3 × 3, ReLU | 64 filters, 3 × 3, ReLU |
Convolution layer 3 | — | 128 filters, 3 × 3, ReLU | 128 filters, 3 × 3, ReLU |
Fully connected layer | 128 units, ReLU | 256 units, ReLU | 512 units, ReLU |
Output layer | 10 units (softmax) | 62 units (softmax) | 10 units (softmax) |
Loss function | Categorical crossentropy | Categorical crossentropy | Categorical crossentropy |
Dataset | Clients | Accuracy (%) | Battery Usage (%) | Avg. Privacy Loss |
---|---|---|---|---|
MNIST | 10 | 94 | 18.0 | 0.40 |
MNIST | 100 | 89 | 12.0 | 0.25 |
MNIST | 1000 | 64 | 7.0 | 0.12 |
EMNIST | 10 | 90 | 19.0 | 0.44 |
EMNIST | 100 | 85 | 13.0 | 0.27 |
EMNIST | 1000 | 60 | 8.0 | 0.13 |
Fashion-MNIST | 10 | 91 | 17.0 | 0.38 |
Fashion-MNIST | 100 | 86 | 11.0 | 0.22 |
Fashion-MNIST | 1000 | 61 | 6.5 | 0.11 |
CIFAR-10 | 10 | 80 | 20.0 | 0.47 |
CIFAR-10 | 100 | 75 | 14.0 | 0.29 |
CIFAR-10 | 1000 | 50 | 9.0 | 0.14 |
Method | Accuracy | Battery Efficiency | Privacy | Remarks |
---|---|---|---|---|
Oort | Very High | Poor | Poor | Fastest convergence; inefficient and privacy-leaking. |
FedCS | Medium | Moderate | Moderate | Resource-matching; lacks privacy-awareness. |
FRS | Medium | High | Moderate | Fair and battery-efficient; not privacy-aware. |
FedProm | Low | High | Best | Maximizes privacy and personalization; lower accuracy. |
RBPS (Ours) | High | Moderate | Strong | Balanced across all objectives; dynamic and cooperative. |
Share and Cite
Dakhia, Z.; Merenda, M. Client Selection in Federated Learning on Resource-Constrained Devices: A Game Theory Approach. Appl. Sci. 2025, 15, 7556. https://doi.org/10.3390/app15137556