Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance Using Reinforcement Learning
Abstract
1. Introduction
- Propose a novel scheme that applies cognitive techniques in IoT sensors, using a reinforcement learning procedure to dynamically change sensor states. A sensor state model is established, and each sensor adapts its state based on three types of parameters. These state changes affect all key performance metrics, improving energy efficiency, enabling sensor reconfiguration, and reducing delay, latency, and packet loss.
- Define and use the three parameter types to build a reward function under which sensors adaptively switch from their current state to a new one; the agent learns from traffic conditions and drives these state transitions.
- Implement the proposed ASC-RL in Python and evaluate its applicability with various parameters, such as joint Gaussian distributions, event correlations, prediction accuracy, and energy efficiency with a combined reward score. Finally, perform a comparative analysis of detection and transition probabilities, false alarm probabilities, and the transmission success rate. (A minimal illustrative sketch of the state and reward design follows this list.)
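To make the contribution concrete, the following is a minimal, illustrative sketch of the four-state sensor model (Section 3.2) as a Gymnasium-style environment. The class name, the observation triple (signal strength, energy consumption, noise), and the reward weights are assumptions for illustration only; the paper's exact state dynamics and reward function are defined in Sections 3.4–3.6 and 4.3.

```python
# Minimal Gymnasium-style sketch of the four-state sensor model.
# The observation triple (signal, energy, noise) and the reward weights
# are illustrative assumptions, not the paper's exact formulation.
from enum import IntEnum

import gymnasium as gym
import numpy as np
from gymnasium import spaces


class SensorState(IntEnum):
    GENERIC = 0   # generic/idle state
    ACTIVE = 1    # sensing and transmitting
    WAIT = 2      # low-power waiting
    ROUTE = 3     # relaying traffic for neighbors


class CognitiveSensorEnv(gym.Env):
    """Toy environment: the agent picks the next sensor state each step."""

    def __init__(self, w_signal=0.4, w_energy=0.4, w_noise=0.2):
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(len(SensorState))
        self.w = np.array([w_signal, w_energy, w_noise], dtype=np.float32)
        self.state = SensorState.GENERIC

    def _observe(self):
        # Normalized (signal strength, energy consumption, channel noise).
        return self.np_random.uniform(0.0, 1.0, size=3).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = SensorState.GENERIC
        return self._observe(), {}

    def step(self, action):
        self.state = SensorState(action)  # the agent switches the sensor state
        signal, energy, noise = self._observe()
        # Reward favors a strong signal and penalizes energy draw and noise.
        reward = float(self.w @ np.array([signal, 1.0 - energy, 1.0 - noise]))
        return self._observe(), reward, False, False, {}
```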
2. Related Work
3. Preliminaries
3.1. System Model for ASC-RL
3.2. Four State Model
3.3. Component-Based Cognitive Sensor
3.4. States in ASC-RL
3.5. Actions in ASC-RL
3.6. Rewards in ASC-RL
4. Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance Using Reinforcement Learning (ASC-RL)
4.1. Working Procedure of the ASC-RL
4.2. Sensor Data Collection
4.3. Problem Formulation
4.4. Reinforcement Learning-Based Optimum Solutions
4.5. Derivation in Baseline for ASC-RL Agent
5. Experimental Setup and Performance Metrics
6. Performance Evaluation of ASC-RL
6.1. Joint Gaussian Distributions in ASC-RL
6.2. Event Correlations Inside ASC-RL
6.3. Prediction Accuracy and Energy Efficiency with Combined Reward Score
7. Comparative Analysis
7.1. Detection and Transition Probabilities
7.2. False Alarm Probabilities
7.3. Transmission Success Rate
7.4. Energy Efficiency and Reliability Threshold
7.5. Training Performance with Comparative Evaluation
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Li, C.; Wang, J.; Wang, S.; Zhang, Y. A review of IoT applications in healthcare. Neurocomputing 2024, 565, 127017.
- Gkagkas, G.; Vergados, D.J.; Michalas, A.; Dossis, M. The Advantage of the 5G Network for Enhancing the Internet of Things and the Evolution of the 6G Network. Sensors 2024, 24, 2455.
- Khan, M.N.; Rahman, H.U.; Khan, M.Z. An energy efficient adaptive scheduling scheme (EASS) for mesh grid wireless sensor networks. J. Parallel Distrib. Comput. 2020, 146, 139–157.
- Ullah, I.; Adhikari, D.; Su, X.; Palmieri, F.; Wu, C.; Choi, C. Integration of data science with the intelligent IoT (IIoT): Current challenges and future perspectives. Digit. Commun. Netw. 2024, 11, 280–298.
- Casillo, M.; Cecere, L.; Colace, F.; Lorusso, A.; Santaniello, D. Integrating the internet of things (IoT) in SPA medicine: Innovations and challenges in digital wellness. Computers 2024, 13, 67.
- Rajkumar, Y.; Santhosh Kumar, S. A comprehensive survey on communication techniques for the realization of intelligent transportation systems in IoT based smart cities. Peer-to-Peer Netw. Appl. 2024, 17, 1263–1308.
- Pulimamidi, R. To enhance customer (or patient) experience based on IoT analytical study through technology (IT) transformation for E-healthcare. Meas. Sens. 2024, 33, 101087.
- Shahab, H.; Iqbal, M.; Sohaib, A.; Khan, F.U.; Waqas, M. IoT-based agriculture management techniques for sustainable farming: A comprehensive review. Comput. Electron. Agric. 2024, 220, 108851.
- Duguma, A.; Bai, X. Contribution of Internet of Things (IoT) in improving agricultural systems. Int. J. Environ. Sci. Technol. 2024, 21, 2195–2208.
- Magara, T.; Zhou, Y. Internet of things (IoT) of smart homes: Privacy and security. J. Electr. Comput. Eng. 2024, 2024, 7716956.
- Nassereddine, M.; Khang, A. Applications of Internet of Things (IoT) in smart cities. In Advanced IoT Technologies and Applications in the Industry 4.0 Digital Economy; CRC Press: Boca Raton, FL, USA, 2024; pp. 109–136.
- Khan, M.N.; Rahman, H.U.; Khan, M.Z.; Mehmood, G.; Sulaiman, A.; Shaikh, A.; Alqhatani, A. Energy-efficient dynamic and adaptive state-based scheduling (EDASS) scheme for wireless sensor networks. IEEE Sens. J. 2022, 22, 12386–12403.
- Nilima, S.I.; Bhuyan, M.K.; Kamruzzaman, M.; Akter, J.; Hasan, R.; Johora, F.T. Optimizing Resource Management for IoT Devices in Constrained Environments. J. Comput. Commun. 2024, 12, 81–98.
- Poyyamozhi, M.; Murugesan, B.; Rajamanickam, N.; Shorfuzzaman, M.; Aboelmagd, Y. IoT—A Promising Solution to Energy Management in Smart Buildings: A Systematic Review, Applications, Barriers, and Future Scope. Buildings 2024, 14, 3446.
- Pandey, S.; Bhushan, B. Recent lightweight cryptography (LWC) based security advances for resource-constrained IoT networks. Wirel. Netw. 2024, 30, 2987–3026.
- Sun, Y.; Jung, H. Machine Learning (ML) Modeling, IoT, and Optimizing Organizational Operations through Integrated Strategies: The Role of Technology and Human Resource Management. Sustainability 2024, 16, 6751.
- Arshi, O.; Rai, A.; Gupta, G.; Pandey, J.K.; Mondal, S. IoT in energy: A comprehensive review of technologies, applications, and future directions. Peer-to-Peer Netw. Appl. 2024, 17, 2830–2869.
- Khan, M.N.; Rahman, H.U.; Hussain, T.; Yang, B.; Qaisar, S.M. Enabling Trust in Automotive IoT: Lightweight Mutual Authentication Scheme for Electronic Connected Devices in Internet of Things. IEEE Trans. Consum. Electron. 2024, 70, 5065–5078.
- Khan, M.N.; Khalil, I.; Ullah, I.; Singh, S.K.; Dhahbi, S.; Khan, H.; Alwabli, A.; Al-Khasawneh, M.A. Self-adaptive and content-based scheduling for reducing idle listening and overhearing in securing quantum IoT sensors. Internet Things 2024, 27, 101312.
- Mumuni, A.; Mumuni, F. Automated data processing and feature engineering for deep learning and big data applications: A survey. J. Inf. Intell. 2024, 3, 113–153.
- Hu, B. Deep learning image feature recognition algorithm for judgment on the rationality of landscape planning and design. Complexity 2021, 2021, 9921095.
- Rajawat, A.S.; Goyal, S.; Chauhan, C.; Bedi, P.; Prasad, M.; Jan, T. Cognitive adaptive systems for industrial internet of things using reinforcement algorithm. Electronics 2023, 12, 217.
- Rubio-Martín, S.; García-Ordás, M.T.; Bayón-Gutiérrez, M.; Prieto-Fernández, N.; Benítez-Andrades, J.A. Enhancing ASD detection accuracy: A combined approach of machine learning and deep learning models with natural language processing. Health Inf. Sci. Syst. 2024, 12, 20.
- Muzaffar, M.U.; Sharqi, R. A review of spectrum sensing in modern cognitive radio networks. Telecommun. Syst. 2024, 85, 347–363.
- Ge, J.; Liang, Y.C.; Wang, S.; Sun, C. RIS-assisted cooperative spectrum sensing for cognitive radio networks. IEEE Trans. Wirel. Commun. 2024, 23, 12547–12562.
- Wang, J.; Wang, Z.; Zhang, L. A simultaneous wireless information and power transfer-based multi-hop uneven clustering routing protocol for EH-cognitive radio sensor networks. Big Data Cogn. Comput. 2024, 8, 15.
- Laidi, R.; Djenouri, D.; Balasingham, I. On predicting sensor readings with sequence modeling and reinforcement learning for energy-efficient IoT applications. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5140–5151.
- Gao, A.; Wang, Q.; Wang, Y.; Du, C.; Hu, Y.; Liang, W.; Ng, S.X. Attention enhanced multi-agent reinforcement learning for cooperative spectrum sensing in cognitive radio networks. IEEE Trans. Veh. Technol. 2024, 73, 10464–10477.
- Malik, T.S.; Malik, K.R.; Afzal, A.; Ibrar, M.; Wang, L.; Song, H.; Shah, N. RL-IoT: Reinforcement learning-based routing approach for cognitive radio-enabled IoT communications. IEEE Internet Things J. 2022, 10, 1836–1847.
- Ghamry, W.K.; Shukry, S. Spectrum access in cognitive IoT using reinforcement learning. Clust. Comput. 2021, 24, 2909–2925.
- Liu, X.; Sun, C.; Yu, W.; Zhou, M. Reinforcement-learning-based dynamic spectrum access for software-defined cognitive industrial internet of things. IEEE Trans. Ind. Inform. 2021, 18, 4244–4253.
- Tan, X.; Zhou, L.; Wang, H.; Sun, Y.; Zhao, H.; Seet, B.C.; Wei, J.; Leung, V.C. Cooperative multi-agent reinforcement-learning-based distributed dynamic spectrum access in cognitive radio networks. IEEE Internet Things J. 2022, 9, 19477–19488.
- Hemelatha, S.; Kumar, A.; Manchanda, M.; Manashree, K.G.; Kulkarni, O.S. Cognitive Radio-Enabled Internet of Things Communications: A Reinforcement Learning-Based Routing Method. In Proceedings of the 2024 Global Conference on Communications and Information Technologies (GCCIT), Bangalore, India, 25–26 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7.
- Pham, T.T.H.; Noh, W.; Cho, S. Multi-agent reinforcement learning based optimal energy sensing threshold control in distributed cognitive radio networks with directional antenna. ICT Express 2024, 10, 472–478.
- Ghamry, W.K.; Shukry, S. Multi-objective intelligent clustering routing schema for internet of things enabled wireless sensor networks using deep reinforcement learning. Clust. Comput. 2024, 27, 4941–4961.
- Mukherjee, A.; Divya, A.; Sivvani, M.; Pal, S.K. Cognitive intelligence in industrial robots and manufacturing. Comput. Ind. Eng. 2024, 191, 110106.
- Dvir, E.; Shifrin, M.; Gurewitz, O. Cooperative Multi-Agent Reinforcement Learning for Data Gathering in Energy-Harvesting Wireless Sensor Networks. Mathematics 2024, 12, 2102.
- Matei, A.; Cocoșatu, M. Artificial Internet of Things, sensor-based digital twin urban computing vision algorithms, and blockchain cloud networks in sustainable smart city administration. Sustainability 2024, 16, 6749.
- Al-Quayed, F.; Humayun, M.; Alnusairi, T.S.; Ullah, I.; Bashir, A.K.; Hussain, T. Context-Aware Prediction with Secure and Lightweight Cognitive Decision Model in Smart Cities. Cogn. Comput. 2025, 17, 44.
- Sultan, S.M.; Waleed, M.; Pyun, J.Y.; Um, T.W. Energy conservation for internet of things tracking applications using deep reinforcement learning. Sensors 2021, 21, 3261.
- Bai, W.; Zheng, G.; Xia, W.; Mu, Y.; Xue, Y. Multi-User Opportunistic Spectrum Access for Cognitive Radio Networks Based on Multi-Head Self-Attention and Multi-Agent Deep Reinforcement Learning. Sensors 2025, 25, 2025.
- Tripathy, J.; Balasubramani, M.; Rajan, V.A.; Aeron, A.; Arora, M. Reinforcement learning for optimizing real-time interventions and personalized feedback using wearable sensors. Meas. Sens. 2024, 33, 101151.
- Suresh, S.S.; Prabhu, V.; Parthasarathy, V.; Senthilkumar, G.; Gundu, V. Intelligent data routing strategy based on federated deep reinforcement learning for IoT-enabled wireless sensor networks. Meas. Sens. 2024, 31, 101012.
- Chen, J.; Zhang, Z.; Fan, D.; Hou, C.; Zhang, Y.; Hou, T.; Zou, X.; Zhao, J. Distributed Decision Making for Electromagnetic Radiation Source Localization Using Multi-Agent Deep Reinforcement Learning. Drones 2025, 9, 216.
- Flandermeyer, S.A.; Mattingly, R.G.; Metcalf, J.G. Deep reinforcement learning for cognitive radar spectrum sharing: A continuous control approach. IEEE Trans. Radar Syst. 2024, 2, 125–137.
- Mei, R.; Wang, Z. Multi-Agent Deep Reinforcement Learning-Based Resource Allocation for Cognitive Radio Networks. IEEE Trans. Veh. Technol. 2024.
- Canese, L.; Cardarilli, G.C.; Dehghan Pir, M.M.; Di Nunzio, L.; Spanò, S. Design and Development of Multi-Agent Reinforcement Learning Intelligence on the Robotarium Platform for Embedded System Applications. Electronics 2024, 13, 1819.
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347.
Scheme | Parameters Used | Advantages | Drawback(s) |
---|---|---|---|
CAS-IIoT-RL [22] | Adaptive, dynamic decision controls | Enhanced decision making in industrial settings | Limited to simulations, lacks real-time validation |
RL-IoT [29] | Routing, CR-IoT, CRCN, retransmission | Decreased delay and collisions, improved throughput | Specific to routing, generalization may be limited |
SCA-RL [30] | Bayesian algorithm, idle channel prediction | Reduces spectrum handover, avoids retransmission collisions | Limited adaptability in dynamic environments |
NOMA [31] | Dynamic Q-learning, spectrum access | Increased throughput and spectrum utilization | Disrupts continuity of packet flow |
LSTM-RL [27] | Spatiotemporal patterns, DQN, MDP | Optimizes energy, maintains prediction accuracy | High computational cost for long-term prediction |
MARL [32] | Deep recurrent Q-network, cooperative approach | Validated improvement in cognitive radio networks | Requires high-level coordination |
RL-IoT [33] | EED minimization, AODV-IoT, ML techniques | Avoids data collision, improved routing | Dependent on specific validation scenarios |
MA-DCSS [34] | Dec-POMDP, CTDE, conditional probabilities | High detection accuracy, optimal control | Fails in distributed systems |
MICRC [35] | Multi-objective clustering, RL routing | Better energy enhancement, new network design | High data processing and communication overhead |
CIRM [36] | ANN, continual learning, brain-based model | Adaptive robotic movement in manufacturing | Informal structure may lack robustness |
MRL-CSS [28] | Multi-agent DDPG, adaptive CSS | Improved sensing accuracy, cooperative sensing | Not scalable due to high communication cost |
CMRL-DG [37] | EH-WSNs, MARL, adaptive learning | High performance despite sensor failures | Structure-specific, poor in delay-tolerant networks |
SDTVA [38] | Big data, PRISMA, Shiny app | Smart city data analytics and evidence mapping | Complex architecture, requires real-time support |
CPSL-CM [39] | Blockchain, secure routing, RL | Effective fault detection and secure communication | High computational demand |
LSTM-DQN [40] | Short-term memory, minimum distance function | Energy-efficient target tracking | High RL computation needed |
MOSA-RL [41] | Multi-head attention, multi-agent RL | Flexible, improves throughput and convergence | Delay and latency in communication |
RL-ORI [42] | Sensor-driven decisions, healthcare monitoring | Improved decision making in medical processes | Requires extensive real-time sensor data |
IDR-FRL [43] | Federated RL, node relocation, load balancing | Reduces latency, packet loss | Not scalable to large-scale networks |
MURPPO [44] | Dual-actor structure, task-specific rewards | Effective urban radiation localization | Ignores authentication errors |
DRL-CRS [45] | Pulse-agile radar, waveform updates | Efficient in changing spectrum scenarios | Computationally complex and slow |
MDRL-RA [46] | LSTM, multi-agent PPO, QoS parameters | Improved payload delivery and sensing | Delay and overhearing due to complexity |
MRL-IPP [47] | Q-RTS, multi-agent scalability | Reliable, decreases convergence time | Needs generalization for wider use |
Symbol | Meaning | Symbol | Meaning |
---|---|---|---|
– | Sensing Data | – | Generic State |
– | Active State | – | Wait State |
– | Route State | – | Microprocessing Unit |
– | Sensing Module | – | Radio Module |
– | Actions | – | Probability of Stochastic Process |
– | Total Current | – | Current in Radio Module |
– | Current in Radio Link | – | Current in Microprocessor |
– | Reward | – | Current in Radio Module |
– | Current in Radio Link | – | Current in Microprocessor |
– | Reinforcement Learning | – | Prediction Accuracy |
– | Combined Reward Score | – | Energy Efficiency |
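The current-related symbols above (total current plus radio-module, radio-link, and microprocessor currents) suggest a simple additive per-state energy model. The sketch below illustrates that decomposition; all numeric values (mA, V) are hypothetical placeholders, not the paper's measurements.

```python
# Additive per-state current model suggested by the symbols table: total
# current = radio-module + radio-link + microprocessor currents. All values
# below (mA, V) are hypothetical placeholders, not the paper's measurements.
STATE_CURRENT_MA = {
    #            (radio module, radio link, microprocessor)
    "generic": (0.5, 0.1, 1.0),
    "active":  (15.0, 4.0, 8.0),
    "wait":    (0.2, 0.0, 0.5),
    "route":   (18.0, 6.0, 8.0),
}

def total_current_ma(state: str) -> float:
    i_radio, i_link, i_mpu = STATE_CURRENT_MA[state]
    return i_radio + i_link + i_mpu

def energy_mj(state: str, voltage_v: float = 3.0, duration_s: float = 1.0) -> float:
    # E = V * I * t; with I in mA and t in s, this yields millijoules.
    return voltage_v * total_current_ma(state) * duration_s

# Example: one second in the route state vs. the wait state.
print(energy_mj("route"), energy_mj("wait"))   # 96.0 mJ vs. 2.1 mJ
```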
Parameter | Symbol | Metric Value |
---|---|---|
RL Algorithm | – | PPO (Proximal Policy Optimization) |
Learning Rate | – | 0.0003/0.0004 |
Discount Factor | – | 0.998/0.989 |
Clip Range | – | 0.2/0.3 |
Epochs | – | 10 |
No. of Sensors | – | 4–6, 10–16 |
Network | – | IoT |
Dimension | – | 64–128 |
Data Generation Pattern | – | Poisson distribution (λ = 2–5 packets/s) |
Sensor Dynamics | – | static |
Communication Topology | – | clustered |
Transmission Range | R | 100–250 m |
Simulation Episodes | – | 5000–10,000 |
Framework | – | OpenAI Gym + PyTorch |
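The PPO configuration in the parameters table maps directly onto a standard PPO implementation. The snippet below shows one possible setup using the Stable-Baselines3 library together with the environment sketched in the contributions section; the paper specifies only "OpenAI Gym + PyTorch", so the library choice and the use of the lower bound of each listed range are assumptions.

```python
# One possible realization of the PPO configuration from the parameters
# table, via Stable-Baselines3 (an assumption; the paper names only
# "OpenAI Gym + PyTorch"). CognitiveSensorEnv is the earlier sketch.
from stable_baselines3 import PPO

env = CognitiveSensorEnv()
model = PPO(
    policy="MlpPolicy",
    env=env,
    learning_rate=3e-4,   # table: 0.0003/0.0004
    gamma=0.998,          # table: 0.998/0.989
    clip_range=0.2,       # table: 0.2/0.3
    n_epochs=10,          # table: 10
)
model.learn(total_timesteps=10_000)
```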
Figure | Parameters | Key Observations | Optimal Condition |
---|---|---|---|
Figure 4 | Signal–Energy | Good signal strength and moderate energy yield a high combined reward score, while lowering or raising energy consumption can yield a poor one | Strong signal, moderate energy |
Figure 5 | Energy–Noise | Lower noise with optimal energy improves the combined reward score; higher noise degrades it | Low noise, moderate energy |
Figure 6 | Signal–Noise | A strong signal and low noise give the best combined reward score; a weaker signal or high noise reduces it | Strong signal, low noise |
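The joint-Gaussian behavior summarized above can be reproduced qualitatively in a few lines of NumPy: sample correlated parameter pairs and evaluate a weighted reward. The means, covariance, and weights below are illustrative assumptions, not the paper's fitted distributions.

```python
# Qualitative reproduction of the signal-noise case (Figure 6): sample a
# negatively correlated joint Gaussian and evaluate a weighted reward.
# Means, covariance, and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
mean = [0.7, 0.3]                      # mean signal strength, mean noise
cov = [[0.02, -0.01], [-0.01, 0.02]]   # signal and noise anti-correlated
signal, noise = rng.multivariate_normal(mean, cov, size=10_000).T
signal, noise = np.clip(signal, 0, 1), np.clip(noise, 0, 1)

reward = 0.6 * signal + 0.4 * (1.0 - noise)
optimal = (signal > 0.7) & (noise < 0.3)   # "strong signal, low noise" region
print(f"mean reward overall: {reward.mean():.3f}, "
      f"in optimal region: {reward[optimal].mean():.3f}")
```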
| CAS-IIoT-RL | LSTM-RL | AEM-RL | ASC-RL | % Increase |
---|---|---|---|---|---|
0.0 | 0.55 | 0.55 | 0.55 | 0.55 | 0.00% |
0.2 | 0.56 | 0.57 | 0.58 | 0.59 | 5.41% |
0.4 | 0.61 | 0.63 | 0.64 | 0.66 | 6.84% |
0.6 | 0.73 | 0.75 | 0.77 | 0.80 | 6.25% |
0.8 | 0.86 | 0.89 | 0.92 | 0.95 | 6.93% |
1.0 | 0.92 | 0.96 | 0.98 | 1.00 | 5.43% |
Episode | CAS-IIoT-RL | LSTM-RL | AEM-RL | ASC-RL | Improvement (%) |
---|---|---|---|---|---|
100 | 0.355 | 0.336 | 0.315 | 0.296 | 10.10% |
200 | 0.305 | 0.292 | 0.275 | 0.252 | 13.54% |
300 | 0.255 | 0.248 | 0.235 | 0.208 | 16.96% |
400 | 0.205 | 0.200 | 0.195 | 0.160 | 19.61% |
500 | 0.155 | 0.152 | 0.145 | 0.112 | 24.06% |
Latency Threshold (ms) | CAS-IIoT-RL | LSTM-RL | AEM-RL | ASC-RL | % Increase |
---|---|---|---|---|---|
100 | 0.65 | 0.64 | 0.63 | 0.69 | 6.25% |
200 | 0.68 | 0.67 | 0.66 | 0.72 | 6.06% |
300 | 0.71 | 0.70 | 0.69 | 0.75 | 5.71% |
400 | 0.73 | 0.72 | 0.71 | 0.77 | 5.48% |
500 | 0.75 | 0.74 | 0.73 | 0.79 | 5.17% |
Reliability Threshold | CAS-IIoT-RL (bits/Hz) | LSTM-RL (bits/Hz) | AEM-RL (bits/Hz) | ASC-RL (bits/Hz) | % Increase Over Avg |
---|---|---|---|---|---|
0.600 | 0.7550 | 0.7217 | 0.7176 | 0.8833 | 20.77% |
0.644 | 0.6983 | 0.6844 | 0.6692 | 0.8328 | 21.76% |
0.688 | 0.6849 | 0.6714 | 0.6502 | 0.8310 | 24.24% |
0.732 | 0.6669 | 0.6661 | 0.6364 | 0.8313 | 26.63% |
0.777 | 0.6461 | 0.6392 | 0.6311 | 0.8214 | 28.58% |
0.822 | 0.6544 | 0.6381 | 0.6173 | 0.7991 | 25.53% |
0.866 | 0.6261 | 0.6100 | 0.6103 | 0.8082 | 31.31% |
0.910 | 0.6306 | 0.6161 | 0.5934 | 0.8146 | 32.81% |
0.955 | 0.6111 | 0.6065 | 0.5937 | 0.8177 | 35.43% |
1.000 | 0.6238 | 0.6087 | 0.5841 | 0.8040 | 32.77% |
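The "% Increase Over Avg" column is consistent with ASC-RL's energy efficiency measured against the mean of the three baselines (up to rounding of the displayed values); for example, the 0.644 row:

```python
# ASC-RL's gain over the average of the three baselines (0.644 row).
baselines = [0.6983, 0.6844, 0.6692]   # CAS-IIoT-RL, LSTM-RL, AEM-RL (bits/Hz)
asc_rl = 0.8328
pct_increase = (asc_rl / (sum(baselines) / 3) - 1) * 100
print(f"{pct_increase:.2f}%")          # -> 21.76%, matching the table
```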
Method | Success Rate | Mean Success Rate | Standard Deviation | Epochs to Reach SR ≥ 0.8 |
---|---|---|---|---|
ASC-RL | 0.997 | 0.865 | 0.207 | 36.21 |
CAS-IIoT-RL | 0.966 | 0.823 | 0.227 | 43.56 |
LSTM-RL | 0.945 | 0.782 | 0.241 | 49.12 |
AEM-RL | 0.936 | 0.778 | 0.256 | 48.34 |
Parameter | CAS-IIoT-RL | LSTM-RL | AEM-RL | ASC-RL |
---|---|---|---|---|
Algorithm | DQN | LSTM | A2C | PPO |
State Space | Feature | Time-Series | QoS Metrics | – |
Action Space | Discrete | Discrete | Continuous | Discrete |
Learning Rate | 0.001 | 0.0005 | 0.0003 | 0.0003 |
Discount Factor | 0.9 | 0.95 | 0.98 | 0.99 |
Architecture | 2-layer NN | LSTM + Dense | 3-layer NN | 3-layer NN + ReLU |
Episodes | 5000 | 7000 | 8000 | 10,000 |
Reward | Delay, Energy | Latency, PDR | Latency, Energy | Energy, Delay |
Environment | Static IIoT | Edge IIoT | Edge + Fog | Dynamic IoT |
Implementation | Python 2 | TensorFlow 1.x | Python + Keras | PyTorch + Gym |