ML-RASPF: A Machine Learning-Based Rate-Adaptive Framework for Dynamic Resource Allocation in Smart Healthcare IoT
Abstract
1. Introduction
- We present ML-RASPF, a novel hybrid mist–edge–cloud framework for rate-adaptive and latency-aware IoT service provisioning in smart healthcare systems.
- We formulate the service provisioning problem as a joint optimization model that integrates both latency constraints and service delivery rates. This formulation enables intelligent, QoS-aware resource allocation across heterogeneous IoT environments.
- We propose a modular, ML-based algorithmic suite that combines supervised learning for traffic prediction with reinforcement learning (RL) for real-time service rate adaptation.
- We evaluate the proposed framework in EdgeCloudSim using realistic smart healthcare workloads. The results show that ML-RASPF significantly outperforms state-of-the-art rate-adaptive methods, reducing latency, energy consumption, and bandwidth utilization while improving the service delivery rate.
2. Related Work
2.1. Heuristic and Optimization-Based Resource Allocation
2.2. Machine Learning and Deep Learning-Based Service Provisioning
2.3. Latency-Aware and Fog–Edge–Cloud Architectures
2.4. Healthcare Resource Allocation Approaches
3. Optimal Service Provisioning Framework
3.1. Customized Mist–Edge–Cloud Framework
3.1.1. Perception Layer
3.1.2. Mist Layer
3.1.3. Edge Computing Layer
3.1.4. Central Cloud Layer
3.1.5. Cloud Application Layer
3.2. Security and Privacy Consideration
3.3. Smart Healthcare with Emergency and Routine Services
4. Analytical Framework for Heterogeneous Service Provisioning
4.1. Problem Overview and Healthcare Service Requirements
4.1.1. Utility-Based Formulation for Service Delivery Rate
4.1.2. Delay-Aware Utility Adjustment
4.1.3. Approximation and Convex Transformation
4.1.4. Machine Learning Integration
4.1.5. Traffic and Demand Prediction
4.1.6. Delay Estimation and Latency Modeling
4.1.7. Reinforcement Learning for Adaptive Rate Control
4.1.8. Emergency-Aware Prioritization
4.1.9. Framework Integration
4.2. Algorithms for Rate-Adaptive Provisioning
Algorithm 1 Network Data Collection and Initialization
Require: Set of services S, set of links L, link capacities C
Ensure: Initialized weight matrix W and price matrix P
1: Initialize matrices W and P
2: for all services s ∈ S do
3:   for all consumers i of service s do
4:     for all links t ∈ L do
5:       Compute the allocation weight of (s, i) on link t using Equation (16)
6:       Initialize the price of link t (see Equation (16))
7:     end for
8:   end for
9: end for
10: return W, P
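A minimal Python sketch of this initialization loop, assuming illustrative container names and an `init_weight` callback standing in for Equation (16), which is not reproduced here:

```python
def initialize_network(services, links, init_weight, init_price=1.0):
    """Algorithm 1 sketch: build the weight matrix W and price matrix P."""
    W = {}  # (service, consumer, link) -> allocation weight
    P = {}  # link -> price
    for s, consumer_ids in services.items():
        for i in consumer_ids:
            for t in links:
                W[(s, i, t)] = init_weight(s, i, t)  # stands in for Equation (16)
                P.setdefault(t, init_price)          # initial link price
    return W, P


# Example: three services, a few links, uniform initial weights (all values illustrative).
services = {"diagnostics": ["kiosk-1", "kiosk-2"], "info": ["display-1"], "wifi": ["tablet-1"]}
links = [f"l{k}" for k in range(1, 5)]
W, P = initialize_network(services, links, init_weight=lambda s, i, t: 1.0)
```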
Algorithm 2 Price Computation and Weight Update
Require: Current weights W, prices P, link capacities C, service rates
Ensure: Updated W and P
1: for all nodes e do
2:   for all links t connected to e do
3:     for all services s using t do
4:       Compute the per-service maximum rate on link t (see Equation (10))
5:     end for
6:     Compute the total load on link t
7:     Update the price of link t based on its overload (ref. Equation (3))
8:     for all services s and consumers i on t do
9:       Update the allocation weight of (s, i)
10:      if the updated weight violates the normalization constraint then
11:        Normalize the weight to preserve fairness
12:      end if
13:      Compute the consumer-specific price (used in Equation (8))
14:      Update W and P
15:    end for
16:  end for
17: end for
18: return W, P
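A minimal sketch of the per-link update, assuming a gradient-style overload rule and a unit-sum fairness normalization in place of Equations (3) and (10), which are not reproduced here; the 0.01 step size mirrors the gradient-based step size listed in the simulation parameters:

```python
def update_prices_and_weights(W, P, capacities, rates, step_size=0.01):
    """Algorithm 2 sketch: overload-driven price update and fairness normalization.

    rates[(s, i, t)] is the current rate of consumer i of service s on link t;
    capacities[t] is the capacity of link t in Mb/s.
    """
    for t, capacity in capacities.items():
        # Total load on link t across all (service, consumer) pairs using it.
        load = sum(r for (s, i, lt), r in rates.items() if lt == t)
        # Price rises when the link is overloaded and falls otherwise, never below zero.
        P[t] = max(0.0, P[t] + step_size * (load - capacity))
        # Re-normalize the allocation weights on this link to preserve fairness.
        keys_on_link = [key for key in W if key[2] == t]
        total_weight = sum(W[key] for key in keys_on_link)
        if total_weight > 1.0:
            for key in keys_on_link:
                W[key] /= total_weight
    return W, P
```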
Algorithm 3 Service Rate Adaptation and Delivery
Require: Updated link prices P, weights W, service paths
Ensure: Adapted delivery rates
1: for all nodes e do
2:   if e is a provider of service s then
3:     for all consumers i of service s do
4:       Aggregate the path cost for consumer i (used in Equation (8))
5:       Observe the state (link price, delay, previous rate)
6:       Select an action using the RL policy to adjust the rate (action chosen to optimize utility)
7:       Receive the reward (defined in Equation (8))
8:       Update the policy parameters using the gradient of the loss (policy improvement via DQN gradient)
9:       Set the adapted delivery rate via the inverse utility (related to Equation (1))
10:    end for
11:  end if
12:  // Check for new service requests to trigger feedback-based re-optimization
13:  if any new service is requested by node e then
14:    Update the network state and repeat Algorithm 2
15:  end if
16: end for
17: return Updated rates
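The paper's rate controller is DQN-based; the sketch below substitutes a simplified tabular Q-learning agent so the observe, act, reward, and update steps of Algorithm 3 are visible without a neural network. The action set, hyperparameters, and the assumption that the (price, delay, previous rate) state has been discretized are illustrative choices, not values from the paper.

```python
import random

class RateAdapter:
    """Algorithm 3 sketch: tabular Q-learning stand-in for the DQN rate controller."""

    ACTIONS = (-1.0, 0.0, 1.0)  # decrease, keep, or increase the delivery rate (Mb/s)

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        # Epsilon-greedy exploration over the discretized (price, delay, rate) state.
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update; the reward plays the role of Equation (8).
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```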
4.3. Complexity Analysis
5. Experimental Evaluation
5.1. Simulation Setup
- Interactive diagnostic kiosks provide patient-specific diagnostic support, lab report access, and symptom checkers. These require moderate delivery rates but have stringent latency requirements.
- Informational displays broadcast hospital alerts, safety protocols, and public health information. These involve high-bandwidth, video-rich content with stable rate requirements.
- Patient devices such as tablets or smartphones used by inpatients or visitors to access hospital Wi-Fi, EHRs, or teleconsultation services. These represent latency-tolerant services (an illustrative workload summary of all three classes follows this list).
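The illustrative workload summary referenced above uses qualitative fields only, since this list does not specify numeric rate or latency targets:

```python
# Illustrative workload profiles for the three device classes above; the fields
# are qualitative summaries of the text, not parameter values from the paper.
SERVICE_PROFILES = {
    "diagnostic_kiosk": {"delivery_rate": "moderate", "latency": "stringent"},
    "info_display": {"delivery_rate": "high", "latency": "relaxed", "content": "video-rich, stable rate"},
    "patient_device": {"delivery_rate": "variable", "latency": "tolerant", "content": "Wi-Fi, EHR, teleconsultation"},
}
```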
5.2. Simulation Parameters
5.3. Performance Metrics and Baselines
5.3.1. Latency
5.3.2. Service Delivery Rate
5.3.3. Energy Consumption
5.3.4. Bandwidth Utilization
5.3.5. Load Balancing Efficiency
5.3.6. Baselines for Comparison
- Energy-aware offloading [8]: an energy-aware task offloading framework that leverages dynamic load balancing and resource compatibility evaluation among fog nodes. The method uses lightweight metaheuristics to optimize offloading decisions based on task priority, fog availability, and energy profile, but it does not consider ML-based traffic prediction or RL-based rate control.
- JANUS [7]: a latency-aware traffic scheduling system for IoT data streaming in edge environments. JANUS employs multi-level queue management and global coordination using heuristic stream selection policies. Although effective in managing latency-sensitive streams, it does not perform joint rate-latency optimization or predictive traffic adaptation.
- FCFS (first-come, first-served): a baseline queuing strategy in which incoming service requests are served in the order they arrive, without considering bandwidth, latency sensitivity, or system state. FCFS represents non-prioritized resource allocation and serves as a lower-bound reference.
- LRFS: a heuristic baseline that prioritizes service requests with the smallest bandwidth requirements. While this may help reduce short-term congestion, it often leads to unfair treatment of larger or high-priority flows, particularly in healthcare workloads (see the ordering sketch after this list).
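The ordering sketch referenced in the LRFS item contrasts the two queuing baselines on a few hypothetical requests; the request fields and values are purely illustrative.

```python
def fcfs_order(requests):
    """FCFS baseline: serve requests strictly in arrival order."""
    return sorted(requests, key=lambda r: r["arrival_time"])

def lrfs_order(requests):
    """LRFS baseline: prioritize requests with the smallest bandwidth demand."""
    return sorted(requests, key=lambda r: r["bandwidth_mbps"])

# Hypothetical requests purely for illustration.
requests = [
    {"id": "display-1", "arrival_time": 0.1, "bandwidth_mbps": 12.0},
    {"id": "kiosk-1", "arrival_time": 0.2, "bandwidth_mbps": 4.0},
    {"id": "tablet-1", "arrival_time": 0.3, "bandwidth_mbps": 1.5},
]
print([r["id"] for r in fcfs_order(requests)])  # ['display-1', 'kiosk-1', 'tablet-1']
print([r["id"] for r in lrfs_order(requests)])  # ['tablet-1', 'kiosk-1', 'display-1']
```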
5.4. Results and Analysis
5.4.1. Latency
5.4.2. Service Delivery Rate
5.4.3. Energy Consumption
5.4.4. Bandwidth Utilization
5.4.5. Load Balancing Efficiency
6. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
References
- International Data Corporation (IDC). The Growth in Connected IoT Devices Is Expected to Generate 79.4 ZB of Data in 2025, According to a New IDC Forecast. Available online: https://www.telecomtv.com/content/iot/the-growth-in-connected-iot-devices-is-expected-to-generate-79-4zb-of-data-in-2025-according-to-a-new-idc-forecast-35522/ (accessed on 16 April 2025).
- Sun, M.; Quan, S.; Wang, X.; Huang, Z. Latency-aware scheduling for data-oriented service requests in collaborative IoT-edge-cloud networks. Future Gener. Comput. Syst. 2025, 163, 107538.
- Banitalebi Dehkordi, A. EDBLSD-IIoT: A comprehensive hybrid architecture for enhanced data security, reduced latency, and optimized energy in industrial IoT networks. J. Supercomput. 2025, 81, 359.
- Khan, S.; Khan, S. Latency aware graph-based microservice placement in the edge-cloud continuum. Clust. Comput. 2025, 28, 88.
- Pervez, F.; Zhao, L. Efficient Queue-Aware Communication and Computation Optimization for a MEC-Assisted Satellite-Aerial-Terrestrial Network. IEEE Internet Things J. 2025, 12, 13972–13987.
- Tripathy, S.S.; Bebortta, S.; Mohammed, M.A.; Nedoma, J.; Martinek, R.; Marhoon, H.A. An SDN-enabled fog computing framework for WBAN applications in the healthcare sector. Internet Things 2024, 26, 101150.
- Wen, Z.; Yang, R.; Qian, B.; Xuan, Y.; Lu, L.; Wang, Z.; Peng, H.; Xu, J.; Zomaya, A.Y.; Ranjan, R. JANUS: Latency-aware traffic scheduling for IoT data streaming in edge environments. IEEE Trans. Serv. Comput. 2023, 16, 4302–4316.
- Mahapatra, A.; Majhi, S.K.; Mishra, K.; Pradhan, R.; Rao, D.C.; Panda, S.K. An energy-aware task offloading and load balancing for latency-sensitive IoT applications in the Fog-Cloud continuum. IEEE Access 2024, 12, 14334–14349.
- Du, A.; Jia, J.; Chen, J.; Wang, X.; Huang, M. Online Queue-Aware Service Migration and Resource Allocation in Mobile Edge Computing. IEEE Trans. Veh. Technol. 2025, 74, 8063–8078.
- San José, S.G.; Marquès, J.M.; Panadero, J.; Calvet, L. NARA: Network-Aware Resource Allocation mechanism for minimizing quality-of-service impact while dealing with energy consumption in volunteer networks. Future Gener. Comput. Syst. 2025, 164, 107593.
- Al-Saedi, A.A.; Boeva, V.; Casalicchio, E. Fedco: Communication-efficient federated learning via clustering optimization. Future Internet 2022, 14, 377.
- Centofanti, C.; Tiberti, W.; Marotta, A.; Graziosi, F.; Cassioli, D. Taming latency at the edge: A user-aware service placement approach. Comput. Netw. 2024, 247, 110444.
- Liu, Z.; Xu, X. Latency-aware service migration with decision theory for Internet of Vehicles in mobile edge computing. Wirel. Netw. 2024, 30, 4261–4273.
- Amzil, A.; Abid, M.; Hanini, M.; Zaaloul, A.; El Kafhali, S. Stochastic analysis of fog computing and machine learning for scalable low-latency healthcare monitoring. Clust. Comput. 2024, 27, 6097–6117.
- Li, Y.; Zhang, Q.; Yao, H.; Gao, R.; Xin, X.; Guizani, M. Next-Gen Service Function Chain Deployment: Combining Multi-Objective Optimization with AI Large Language Models. IEEE Netw. 2025, 39, 20–28.
- Ahmed, W.; Iqbal, W.; Hassan, A.; Ahmad, A.; Ullah, F.; Srivastava, G. Elevating e-health excellence with IOTA distributed ledger technology: Sustaining data integrity in next-gen fog-driven systems. Future Gener. Comput. Syst. 2025, 168, 107755.
- Najim, A.H.; Al-sharhanee, K.A.M.; Al-Joboury, I.M.; Kanellopoulos, D.; Sharma, V.K.; Hassan, M.Y.; Issa, W.; Abbas, F.H.; Abbas, A.H. An IoT healthcare system with deep learning functionality for patient monitoring. Int. J. Commun. Syst. 2025, 38, e6020.
- Ji, X.; Gong, F.; Wang, N.; Xu, J.; Yan, X. Cloud-Edge Collaborative Service Architecture with Large-Tiny Models Based on Deep Reinforcement Learning. IEEE Trans. Cloud Comput. 2025, 13, 288–302.
- Shang, L.; Zhang, Y.; Deng, Y.; Wang, D. MultiTec: A Data-Driven Multimodal Short Video Detection Framework for Healthcare Misinformation on TikTok. IEEE Trans. Big Data 2025; early access.
- Fei, Y.; Fang, H.; Yan, Z.; Qi, L.; Bilal, M.; Li, Y.; Xu, X.; Zhou, X. Privacy-Aware Edge Computation Offloading with Federated Learning in Healthcare Consumer Electronics System. IEEE Trans. Consum. Electron. 2025; early access.
- Ali, A.; Arafa, A. Delay sensitive hierarchical federated learning with stochastic local updates. IEEE Trans. Cogn. Commun. Netw. 2025; early access.
- Peng, Z.; Xu, C.; Wang, H.; Huang, J.; Xu, J.; Chu, X. P2b-trace: Privacy-preserving blockchain-based contact tracing to combat pandemics. In Proceedings of the 2021 International Conference on Management of Data, Virtual, 20–25 June 2021; pp. 2389–2393.
- EdgeCloudSim. Available online: https://github.com/CagataySonmez/EdgeCloudSim (accessed on 16 April 2025).
- Zhang, T.; Jin, J.; Zheng, X.; Yang, Y. Rate Adaptive Fog Service Platform for Heterogeneous IoT Applications. IEEE Internet Things J. 2019, 7, 176–188.
Approach | ML-Based | Latency-Aware | Rate-Adaptive | Architecture | Domain | Key Limitations |
---|---|---|---|---|---|---|
Amzil et al. [14] | × | ✓ | × | Fog–Cloud | Healthcare | High overhead due to tensor mapping, lacks adaptiveness under dynamic loads. |
Centofanti et al. [12] | × | ✓ | × | Edge–Cloud | Crowdsensing | Assumes deterministic environment, lacks real-time adaptability. |
Mahapatra et al. [8] | × | ✓ | ✓ | Fog–Cloud | IoT/Healthcare | Uses metaheuristics; lacks learning-based decision-making or predictive models. |
Wen et al. (JANUS) [7] | × | ✓ | ✓ | Edge–Cloud | Streaming | Queue-based heuristic stream selection; lacks proactive learning and mist integration. |
Du et al. [9] | × | ✓ | ✓ | Edge–Cloud | General IoT | Focus on queue-aware migration; not suitable for highly variable healthcare demands. |
Fei et al. [20] | ✓ | × | × | Edge–Cloud | Healthcare | Focuses on privacy with FL; lacks delivery rate optimization and latency support. |
Najim et al. [17] | ✓ | × | × | Fog–Cloud | IoT–Vehicles | Lacks training scalability and rate adaptation; accuracy affected by data gaps. |
Ji et al. [18] | ✓ | ✓ | × | Edge–Cloud | Smart City | DRL adds processing delay; lacks fine-grained control in mist layers. |
Li et al. [15] | × | ✓ | × | Cloud–Edge | SFC | Multi-objective model, but lacks ML support and ignores dynamic adaptation. |
ML-RASPF | ✓ | ✓ | ✓ | Mist–Edge–Cloud | Healthcare | Integrates ML forecasting + RL adaptation in modular real-time architecture. |
Parameter | Value/Description |
---|---|
Gradient-based step size | 0.01 |
Number of service consumers | 9 |
Number of service types | 3 (diagnostics, info, Wi-Fi) |
Number of communication links | 13 |
Link capacity | 20 Mb/s |
Link capacity – | 16, 15, 14 Mb/s |
Link capacity – | 13 Mb/s |
Edge forwarders (mist nodes) | 3 |
Edge cloudlet nodes | 1 per forwarder |
Central cloud nodes | 1 |
Mist node energy consumption | 3.5 W (static baseline) |
Edge node energy consumption | 3.7 W (static baseline) |
Cloud energy consumption | 9.7 kW (data center model) |
CPU | Intel Core i7 E3-1225 |
Processor frequency | 3.3 GHz |
RAM | 16 GB |
Operating system | Windows 10 64-bit |
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).