Federated Learning over MU-MIMO Vehicular Networks
Abstract
1. Introduction
- We evaluate the performance of FL over vehicular scenarios, which are realistically modeled using the road and mobility models from 3GPP [14]. These models are more complex and realistic than mobility models typically used in the literature. Additionally, we consider MU-MIMO-capable base stations, which are not frequently considered in FL-related studies. Moreover, we consider the learning task of object classification on the European traffic sign data set, which is a relevant data set for vehicular applications. This data set is statistically and geographically more diverse, and therefore more challenging to train on, than commonly used data sets such as MNIST and CIFAR-10.
- Based on the defined MU-MIMO vehicular scenario, we investigate the challenge of vehicle selection and resource management by characterizing vehicles based on their importance in the learning process and their wireless channel quality. We then propose the “vehicle-beam-iterative” (VBI) algorithm to approximate the solution of the defined optimization problem. The evaluation of the VBI algorithm provides insights into the novel and realistic scenario under investigation.
- We show that MU-MIMO-capable base stations reduce the convergence time of the global model by enabling multiple vehicles to be selected on the same time–frequency resources and by improving the achievable per-vehicle data rates.
- We show that the local loss is an effective vehicle selection metric in scenarios with non-independent and identically distributed (IID) data, provided that all vehicles have the same training time. When vehicles have different training times, e.g., due to different data set sizes and/or processing capabilities, loss-based policies do not provide substantial gains (a minimal sketch of loss-based selection follows this list).
- We demonstrate, through realistic numerical evaluations, that the convergence time is longer when vehicles have different data set sizes than when all vehicles have data sets of the same size.
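To make the loss-based selection policy from the list above concrete, the following minimal sketch ranks candidate vehicles by their reported local loss and greedily keeps those that fit a per-round latency budget. The function name `select_vehicles_by_loss`, the greedy budget check, and the toy numbers are illustrative assumptions, not the exact policy evaluated in this paper.

```python
import random

def select_vehicles_by_loss(local_losses, train_times, upload_times,
                            latency_budget, num_selected):
    """Rank vehicles by reported local loss (highest first) and greedily
    keep those whose training + upload time fits the round budget.

    local_losses, train_times, upload_times: dicts keyed by vehicle id.
    latency_budget: per-round time budget in seconds.
    num_selected: maximum number of vehicles to schedule.
    """
    ranked = sorted(local_losses, key=local_losses.get, reverse=True)
    selected = []
    for v in ranked:
        if len(selected) == num_selected:
            break
        # A vehicle only qualifies if it can finish training and uploading
        # within the application-specific latency budget.
        if train_times[v] + upload_times[v] <= latency_budget:
            selected.append(v)
    return selected

# Toy example: 6 vehicles with heterogeneous losses and training times.
losses = {v: random.uniform(0.5, 2.5) for v in range(6)}
t_train = {v: random.uniform(1.0, 4.0) for v in range(6)}
t_up = {v: random.uniform(0.2, 1.0) for v in range(6)}
print(select_vehicles_by_loss(losses, t_train, t_up,
                              latency_budget=4.0, num_selected=3))
```

Ranking by achievable data rate instead of local loss would give a rate-oriented variant in the same spirit.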
2. System Model
2.1. Network Model
2.2. Learning Model
2.3. Communication Model
3. Problem Formulation
3.1. Vehicle Importance
3.2. Latency Considerations
3.3. Problem Formulation
4. Proposed Solution
4.1. Algorithmic Description
Algorithm 1: Vehicle-Beam-Iterative (VBI) Algorithm
- STEP 1 (lines 6–8): For each vehicle that has not already been selected for training, the beam that maximizes the importance of vehicle v is obtained.
- STEP 2 (lines 9–11): From line 9 onwards, the algorithm iterates over all beams to decide, per beam, whether a vehicle will be assigned to it and, if so, which one. Since the subsequent steps depend on whether a vehicle is assigned to the beam, line 10 initializes a decision variable to False. Then, in line 11, based on the potential vehicle–beam pairs derived in step 1 (line 7), the vehicle with the highest importance on each beam b is selected.
- STEP 3 (lines 12–18): In this step, a decision is taken on whether or not the selected vehicle can be scheduled on beam b. Line 12 checks whether the selected vehicle is the first vehicle to be scheduled on beam b. If it is, line 13 sets the training time at beam b equal to the training time of that vehicle, and line 14 sets the decision variable to True. If it is not the first vehicle to be scheduled on beam b, line 15 evaluates, according to constraint (26), whether the vehicle can be co-scheduled with the vehicle(s) already scheduled on beam b. If it can be co-scheduled, line 16 sets the training time at beam b to the minimum of the training time set in a previous iteration, when a different vehicle was scheduled, and the training time of the newly scheduled vehicle, and line 17 sets the decision variable to True.
- STEP 4 (lines 19–25): If the selected vehicle is scheduled on beam b, i.e., the decision variable is True, lines 20 and 21 update the entries of the optimization matrix involving that vehicle to ensure that it is assigned only to beam b. Next, line 22 reduces the total available uploading latency budget accordingly, and line 23 increases the total vehicle importance by the importance of the newly scheduled vehicle. If the vehicle is not scheduled, i.e., the decision variable is False, no action is taken and the vehicle can be reconsidered for scheduling in a later iteration.
- STEP 5 (lines 26–32): After iterating over all beams, and before starting a new iteration triggered by line 5, an update step takes place. Specifically, lines 26–32 discard vehicle–beam pairs that can no longer fulfill constraint (27) due to the vehicles newly scheduled in the given algorithm iteration. A compact sketch of the five steps is given below.
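The sketch below condenses the five steps into Python for illustration. The data structures, the `can_coschedule` predicate standing in for constraint (26), and the `fits_latency_budget` predicate standing in for constraint (27) are assumptions; the actual VBI algorithm operates on the optimization matrix and latency budgets defined in Section 3.

```python
def vbi_schedule(vehicles, beams, importance, train_time,
                 can_coschedule, fits_latency_budget):
    """Greedy sketch of the five VBI steps described above.

    importance[v][b] : importance of vehicle v when served on beam b
    train_time[v]    : local training time of vehicle v in seconds
    can_coschedule(v, b, already_assigned)  : stand-in for constraint (26)
    fits_latency_budget(v, b, all_assigned) : stand-in for constraint (27)
    """
    assigned = {b: [] for b in beams}   # vehicles scheduled per beam
    beam_train_time = {}                # training time tracked per beam
    unscheduled = set(vehicles)
    total_importance = 0.0

    progress = True
    while progress:                     # outer iteration (line 5)
        progress = False
        # STEP 1: best beam for every vehicle not yet selected for training.
        best_beam = {v: max(beams, key=lambda b: importance[v][b])
                     for v in unscheduled}
        for b in beams:                 # STEP 2: iterate over all beams.
            candidates = [v for v in unscheduled if best_beam[v] == b]
            if not candidates:
                continue
            v_star = max(candidates, key=lambda v: importance[v][b])
            scheduled = False           # decision variable (line 10)
            # STEP 3: can v_star be scheduled (alone or co-scheduled) on b?
            if not assigned[b]:
                beam_train_time[b] = train_time[v_star]
                scheduled = True
            elif can_coschedule(v_star, b, assigned[b]):
                beam_train_time[b] = min(beam_train_time[b], train_time[v_star])
                scheduled = True
            # STEP 4: commit the assignment and update the total importance.
            # (The uploading-latency-budget bookkeeping of line 22 is omitted.)
            if scheduled:
                assigned[b].append(v_star)
                unscheduled.discard(v_star)
                total_importance += importance[v_star][b]
                progress = True
        # STEP 5: discard vehicle-beam pairs that can no longer meet the
        # latency budget because of the newly scheduled vehicles.
        unscheduled = {v for v in unscheduled
                       if fits_latency_budget(v, best_beam[v], assigned)}
    return assigned, total_importance

# Toy usage: 4 vehicles, 2 beams, at most 2 co-scheduled vehicles per beam.
imp = {v: {b: (v + 1) * (b + 1) for b in range(2)} for v in range(4)}
t_tr = {v: 2.0 + v for v in range(4)}
print(vbi_schedule(range(4), range(2), imp, t_tr,
                   can_coschedule=lambda v, b, a: len(a) < 2,
                   fits_latency_budget=lambda v, b, a: True))
```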
4.2. Algorithm Behavior
5. Scenario Configuration
5.1. Learning Task
5.2. Learning Scenarios
5.3. Baseline Algorithms
5.4. Wireless Scenario
6. Results and Discussion
6.1. VBI Algorithm Relative Performance
6.2. Same Data Set Size
6.2.1. IID Data
6.2.2. Non-IID Data
6.3. Different Data Set Sizes
6.3.1. IID Data
6.3.2. Non-IID Data
6.4. Comparison of Learning Scenarios
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
3GPP | Third-Generation Partnership Project |
CBC | COIN-OR Branch and Cut |
CNN | Convolutional Neural Network |
CPU | Central Processing Unit |
FL | Federated Learning |
FLOP | Floating Point OPeration |
GoB | Grid of Beams |
IID | Independent and Identically Distributed |
ML | Machine Learning |
MU-MIMO | Multi-User Multiple Input Multiple Output |
OFDMA | Orthogonal Frequency Division Multiple Access |
ReLU | Rectified Linear Unit |
SGD | Stochastic Gradient Descent |
SNR | Signal-to-Noise Ratio |
UPRA | Uniform Planar Rectangular Array |
VBI | Vehicle Beam Iterative |
VGG | Visual Geometry Group |
Appendix A
Appendix A.1. Antenna Model
Appendix A.2. Cell Edge Beam Downtilt
Appendix A.3. Antenna Array Configuration
Appendix A.4. Beam Connection Time
Beam Direction | Distance [m] | Distance [m] | Long Base [m] | Height [m] | Maximum Connection Time [s]
---|---|---|---|---|---
10.4–17.0 | | | | |
1.3–1.6 | | | | |
0.4–0.8 | | | | |
Appendix A.5. Broadcast Bit Rate
References
- Balkus, S.V.; Wang, H.; Cornet, B.D.; Mahabal, C.; Ngo, H.; Fang, H. A survey of collaborative machine learning using 5G vehicular communications. IEEE Commun. Surv. Tutor. 2022, 24, 1280–1303. [Google Scholar] [CrossRef]
- Zhang, H.; Bosch, J.; Olsson, H.H. End-to-end federated learning for autonomous driving vehicles. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021. [Google Scholar] [CrossRef]
- Du, Z.; Wu, C.; Yoshinaga, T.; Yau, K.L.A.; Ji, Y.; Li, J. Federated learning for vehicular internet of things: Recent advances and open issues. IEEE Open J. Comput. Soc. 2020, 1, 45–61. [Google Scholar] [CrossRef] [PubMed]
- Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.C.; Yang, Q.; Niyato, D.; Miao, C. Federated learning in mobile edge networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 2031–2063. [Google Scholar] [CrossRef]
- Zhang, H.; Bosch, J.; Olsson, H. Real-time End-to-End Federated Learning: An Automotive Case Study. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 12–16 July 2021. [Google Scholar] [CrossRef]
- Raftopoulou, M.; da Silva, J.M.B., Jr.; Litjens, R.; Poor, H.V.; Van Mieghem, P. Agent selection framework for federated learning in resource-constrained wireless networks. IEEE Trans. Mach. Learn. Commun. Netw. 2024, 2, 1265–1282. [Google Scholar] [CrossRef]
- Hellström, H.; da Silva, J.M.B., Jr.; Amiri, M.M.; Chen, M.; Fodor, V.; Poor, H.V.; Fischione, C. Wireless for Machine Learning: A Survey. Found. Trends Signal Process. 2022, 15, 290–399. [Google Scholar] [CrossRef]
- Chen, M.; Yang, Z.; Saad, W.; Yin, C.; Poor, H.V.; Cui, S. A joint learning and communications framework for federated learning over wireless networks. IEEE Trans. Wireless Commun. 2021, 20, 269–283. [Google Scholar] [CrossRef]
- Zeng, Q.; Du, Y.; Huang, K.; Leung, K.K. Energy-Efficient Radio Resource Allocation for Federated Edge Learning. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, 7–11 June 2020. [Google Scholar] [CrossRef]
- Shi, W.; Zhou, S.; Niu, Z.; Jiang, M.; Geng, L. Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning. IEEE Trans. Wireless Commun. 2021, 20, 453–467. [Google Scholar] [CrossRef]
- Fan, K.; Chen, W.; Li, J.; Deng, X.; Han, X.; Ding, M. Mobility-Aware Joint User Scheduling and Resource Allocation for Low Latency Federated Learning. In Proceedings of the 2023 IEEE/CIC International Conference on Communications in China (ICCC), Dalian, China, 10–12 August 2023. [Google Scholar] [CrossRef]
- Deveaux, D.; Higuchi, T.; Uçar, S.; Wang, C.H.; Härri, J.; Altintas, O. On the orchestration of federated learning through vehicular knowledge networking. In Proceedings of the 2020 IEEE Vehicular Networking Conference (VNC), New York, NY, USA, 16–18 December 2020. [Google Scholar] [CrossRef]
- Guan, Z.; Wang, Z.; Cai, Y.; Wang, X. Deep reinforcement learning based efficient access scheduling algorithm with an adaptive number of devices for federated learning IoT systems. Internet Things 2023, 24, 100980. [Google Scholar] [CrossRef]
- 3GPP. About 3GPP. 2024. Available online: https://www.3gpp.org/about-us (accessed on 1 April 2025).
- Janocha, K.; Czarnecki, W.M. On Loss Functions for Deep Neural Networks in Classification. arXiv 2017, arXiv:1702.05659. [Google Scholar] [CrossRef]
- Murphy, K.P. Probabilistic Machine Learning: An introduction; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
- McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Dahlman, E.; Parkvall, S.; Sköld, J. 5G NR: The Next Generation Wireless Access Technology, 1st ed.; Elsevier Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
- 3GPP. TR 38.913; 5G; Study on Scenarios and Requirements for Next Generation Access Technologies; 3GPP Technical Report; v.17; 2022. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2996 (accessed on 1 April 2025).
- Goldsmith, A. Wireless Communications; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
- Zeng, Q.; Du, Y.; Huang, K.; Leung, K.K. Energy-efficient resource management for federated edge learning with CPU-GPU heterogeneous computing. IEEE Trans. Wireless Commun. 2021, 20, 7947–7962. [Google Scholar] [CrossRef]
- Forrest, J.; Ralphs, T.; Santos, H.G.; Vigerske, S.; Forrest, J.; Hafer, L.; Kristjansson, B.; jpfasano; EdwinStraver; Lubin, M.; et al. coin-or/Cbc. 2023. Available online: https://zenodo.org/records/7843975 (accessed on 1 April 2025). [CrossRef]
- Serna, C.G.; Ruichek, Y. Classification of traffic signs: The European dataset. IEEE Access 2018, 6, 78136–78148. [Google Scholar] [CrossRef]
- Chilamkurthy, S. Keras Tutorial-Traffic Sign Recognition. 2017. Available online: https://chsasank.com/keras-tutorial.html (accessed on 1 April 2025).
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar] [CrossRef]
- Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019. [Google Scholar] [CrossRef]
- 3GPP. TR 37.885; Study on Evaluation Methodology for New Vehicle-to-Everything (V2X) Use Cases for LTE and NR; 3GPP Technical Report; v.15.3.0; 2019. Available online: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3209 (accessed on 1 April 2025).
- Raftopoulou, M.; da Silva, J.M.B., Jr. FLoverWireless: System-Level Simulator for FL over Wireless Networks. 2024. Available online: https://zenodo.org/records/12506109 (accessed on 1 April 2025). [CrossRef]
- Venugopal, K.; Valenti, M.C.; Heath, R.W. Device-to-device millimeter wave communications: Interference, coverage, rate and finite topologies. IEEE Trans. Wireless Commun. 2016, 15, 6175–6188. [Google Scholar] [CrossRef]
- Balanis, C.A. Antenna Theory: Analysis and Design, 4th ed.; John Wiley and Sons Inc.: Hoboken, NJ, USA, 2016. [Google Scholar]
- Gunnarsson, F.; Johansson, M.N.; Furuskar, A.; Lundevall, M.; Simonsson, A.; Tidestav, C.; Blomgren, M. Downtilted base station antennas—A simulation model proposal and impact on HSPA and LTE performance. In Proceedings of the 2008 IEEE 68th Vehicular Technology Conference, Calgary, AB, Canada, 21–24 September 2008. [Google Scholar] [CrossRef]
- Lin, B.; Wang, W.; Guo, J.; Fei, Z. Outage performance for UAV communications under imperfect beam alignment: A stochastic geometry approach. In Proceedings of the 2021 IEEE 21st International Conference on Communication Technology (ICCT), Tianjin, China, 13–16 October 2021. [Google Scholar] [CrossRef]
Symbol | Description |
---|---|
 | Estimated uplink SNR at vehicle v from beam b in [dB] |
 | Constant tuning the relative significance of the learning importance and the resource consumption |
 | Application-specific latency budget in [s] |
 | Broadcast time of the global model in [s] |
 | Time for all selected vehicles to train and upload their local models in [s] |
 | Start time of uplink transmission to beam b in [s] |
 | Training time of vehicle v in [s] |
 | Upload time of vehicle v on beam b in [s] |
 | Upload latency budget at each beam in [s] |
 | Start time of uplink transmissions on each beam in [s] |
 | Training times of vehicles in [s] |
 | Optimization matrix with beam associations between the vehicles and the base station beams |
 | Importance of vehicles at each beam |
 | Upload times of vehicles at each beam in [s] |
 | The weights of the global model |
 | The weights of the local model at vehicle v |
 | Optimization vector for vehicle selection |
 | Number of beams at each base station m |
 | Total number of base station beams in the network |
 | Available transmission resources |
 | Consumption of transmission resources of vehicle v on beam b |
 | The loss function of the model |
 | The total number of samples |
 | Number of training samples at vehicle v |
 | Number of testing samples at vehicle v |
 | Number of base stations in the network |
 | Noise figure at the base stations in [dB] |
 | Maximum transmit power of vehicles in [dBm] |
 | Total vehicle importance |
 | Bit rate at vehicle v on beam b in [Mbps] |
 | Number of vehicles in the network |
 | Number of selected vehicles for training |
Z | Size of the FL model in [Mbits] |
 | System bandwidth in [MHz] |
 | Importance of vehicle v on beam b |
Parameter | Same Data Set Size, IID | Same Data Set Size, Non-IID | Different Data Set Sizes, IID | Different Data Set Sizes, Non-IID
---|---|---|---|---
Number of classes per vehicle | 10 | 2 | 10 | 2
Training samples per vehicle | 150 | 150 | 150 (average) | 150 (average)
Training samples per class per vehicle | 15 | 75 (average) | 15 (average) | 75 (average)
Testing samples per vehicle | 50 | 50 | 50 (average) | 50 (average)
Testing samples per class at FL server | 100 | 100 | 100 | 100
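The splits in the table above can be reproduced, for illustration, with a shard-style partition that assigns each vehicle a fixed number of classes and a fixed number of samples. The function `partition_samples` and the synthetic labels below are assumptions made for this sketch (and sample indices may overlap across vehicles here); it is not the exact partitioning procedure used in the paper.

```python
import random
from collections import defaultdict

def partition_samples(labels, num_vehicles, classes_per_vehicle, samples_per_vehicle):
    """Sketch of the per-vehicle data split implied by the table above.

    labels: list of class labels, one per training sample index.
    classes_per_vehicle: 10 in the IID scenarios, 2 in the non-IID ones.
    samples_per_vehicle: 150 in the same-data-set-size scenarios.
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = list(by_class)
    partition = {}
    for v in range(num_vehicles):
        chosen = random.sample(classes, classes_per_vehicle)
        per_class = samples_per_vehicle // classes_per_vehicle
        indices = []
        for c in chosen:
            indices.extend(random.sample(by_class[c], min(per_class, len(by_class[c]))))
        partition[v] = indices
    return partition

# Toy usage with 10 classes and synthetic labels.
labels = [i % 10 for i in range(5000)]
iid_split = partition_samples(labels, num_vehicles=20, classes_per_vehicle=10, samples_per_vehicle=150)
noniid_split = partition_samples(labels, num_vehicles=20, classes_per_vehicle=2, samples_per_vehicle=150)
print(len(iid_split[0]), len(noniid_split[0]))  # 150 samples per vehicle in both cases
```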
Parameter | Base Station | Vehicle
---|---|---
Transmit power | 49 dBm | 23 dBm
Noise figure | 5 dB | 9 dB
Antenna gain | 12 dBi | 6 dBi
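For orientation, the following sketch combines the values above into an uplink link budget and a Shannon-capacity bound on the per-vehicle bit rate. The function names, the 110 dB path loss, the 20 MHz bandwidth, and the way the antenna gains are applied are illustrative assumptions; the paper's channel and antenna models follow 3GPP TR 37.885 and the appendix, which are not reproduced here.

```python
import math

def uplink_snr_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, path_loss_db,
                  bandwidth_hz, noise_figure_db):
    """Received uplink SNR in dB for a single vehicle-to-beam link.

    Thermal noise density is -174 dBm/Hz; the antenna gains and the receiver
    noise figure are the per-node values from the table above. The path loss
    is an input, since the paper relies on the 3GPP TR 37.885 channel model.
    """
    noise_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - noise_dbm

def shannon_rate_mbps(snr_db, bandwidth_hz):
    """Shannon-capacity upper bound on the per-vehicle bit rate in Mbps."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10)) / 1e6

# Example: vehicle uplink (23 dBm, 6 dBi) towards a base-station beam (12 dBi),
# assuming an illustrative 110 dB path loss and 20 MHz of bandwidth.
snr = uplink_snr_db(23, 6, 12, path_loss_db=110, bandwidth_hz=20e6, noise_figure_db=5)
print(round(snr, 1), "dB ->", round(shannon_rate_mbps(snr, 20e6), 1), "Mbps")
```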
Algorithm | Same Data Set Size, IID (85%) | Same Data Set Size, IID (90%) | Same Data Set Size, Non-IID (85%) | Same Data Set Size, Non-IID (90%) | Different Data Set Sizes, IID (85%) | Different Data Set Sizes, IID (90%) | Different Data Set Sizes, Non-IID (85%) | Different Data Set Sizes, Non-IID (90%)
---|---|---|---|---|---|---|---|---
VBI-rate | 95 | 115 | 145 | 170 | 170 | 208 | 198 | -
VBI-loss | 95 | 112 | 137 | 157 | 172 | 206 | 190 | 243
max-loss-rate | 92 | 110 | 137 | 155 | 175 | 208 | 193 | 248
random-rate | 97 | 117 | 145 | 162 | 180 | 212 | 195 | -
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).