IoT, Volume 7, Issue 2 (June 2026) – 13 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 1957 KB  
Article
IoT-Based System for Real-Time Water Quality Monitoring and Advanced Turbidity and pH Sensor Calibration to Improve Accuracy and Reliability Using ThingSpeak
by Mulhim Al Drees, Abbas E. Rahma, Samah Daffalla, Rawabi Alsudais, Naser Fathi Alsubaie, Mohammed Albrahim, Hassan Abdullah Alghanim and Mustafa I. Almaghasla
IoT 2026, 7(2), 42; https://doi.org/10.3390/iot7020042 - 12 May 2026
Abstract
Water quality has become a major concern for public health, agriculture, and industry, necessitating reliable and continuous monitoring. Conventional monitoring methods are often time-consuming, rely on manual sampling, and involve complex equipment or procedures, making them unsuitable for real-time applications. This study presents an Internet of Things (IoT)-based system for real-time water quality monitoring using ESP32 hardware integrated with the ThingSpeak platform. The system enhances the accuracy of turbidity and pH measurements using advanced sensor calibration techniques. Nephelometric methods and glass electrodes are employed for turbidity detection and pH sensing, respectively, across various water types—including tap water, groundwater, wastewater, saline water, and treated water—to address issues such as environmental drift and measurement inaccuracies. The turbidity sensor was calibrated using a standard six-point method with formazin solutions (0–1064 NTU), whereas pH calibration utilized a three-point approach with NIST-traceable buffer solutions (pH 4, 7, and 10). The results indicate that turbidity measurement errors, initially ranging from 15.75% to 422%, were reduced to below 10% after calibration. Similarly, pH accuracy was significantly improved across all tested water matrices. The system enables real-time data visualization via ThingSpeak, and the implementation of multi-point calibration ensures high data reliability for continuous monitoring. Overall, this approach offers an accurate, efficient, and practical solution for real-time water quality management. Full article
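The multi-point calibration at the heart of this system can be illustrated with a short sketch: a lookup table of calibration points and linear interpolation between the two nearest ones. The table values below are invented placeholders, not the paper's formazin measurements.

```python
from bisect import bisect_left

# Hypothetical six-point calibration table of (raw sensor reading, reference NTU).
# The values are illustrative placeholders, not the paper's measured data.
CAL_POINTS = [(3000, 0.0), (2800, 100.0), (2500, 300.0),
              (2100, 500.0), (1700, 800.0), (1200, 1064.0)]

def calibrate_turbidity(raw):
    """Map a raw reading to NTU by interpolating linearly between
    the two nearest calibration points; clamp outside the range."""
    pts = sorted(CAL_POINTS)                 # ascending by raw reading
    raws = [p[0] for p in pts]
    if raw <= raws[0]:
        return pts[0][1]
    if raw >= raws[-1]:
        return pts[-1][1]
    i = bisect_left(raws, raw)
    (x0, y0), (x1, y1) = pts[i - 1], pts[i]
    return y0 + (raw - x0) / (x1 - x0) * (y1 - y0)
```

The same pattern extends naturally to the three-point pH calibration, with buffer readings at pH 4, 7, and 10 as the table entries.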
21 pages, 574 KB  
Article
Hybrid Deep Architectures in Contrastive Latent Space: Performance Analysis of VAE-MLP, VAE-MoTE, and VAE-GAT for IoT Botnet Detection
by Hassan Wasswa and Timothy Lynar
IoT 2026, 7(2), 41; https://doi.org/10.3390/iot7020041 - 12 May 2026
Abstract
The rapid proliferation of Internet of Things (IoT) devices has significantly expanded the attack surface of modern networks, leading to a surge in IoT-based botnet attacks. Detecting such attacks remains challenging due to the high dimensionality and heterogeneity of IoT network traffic. This study proposes and evaluates three hybrid deep learning architectures for IoT botnet detection that combine representation learning with supervised classification: VAE-encoder-MLP, VAE-encoder-GAT, and VAE-encoder-MoTE. A Variational Autoencoder is initially trained to learn a compact latent representation of the high-dimensional traffic features. Subsequently, the pretrained VAE-encoder component is employed to project the data into a lower-dimensional embedding space. These embeddings are then used to train three different downstream classifiers: a multilayer perceptron (MLP), a graph attention network (GAT), and a mixture of tiny experts (MoTE) model. To further enhance representation discriminability, supervised contrastive learning is incorporated to encourage intra-class compactness and inter-class separability. The proposed architectures are evaluated on two widely studied benchmark datasets (CICIoT2022 and N-BaIoT) under both binary and multiclass classification settings. Experimental results demonstrate that all three models achieve near-perfect performance in binary attack detection, with accuracy exceeding 99.8%. In the more challenging multiclass scenario, the VAE-encoder-MLP model achieves the best overall performance, reaching accuracies of 98.55% on CICIoT2022 and 99.75% on N-BaIoT. These findings provide insights into the design of efficient and scalable deep learning architectures for IoT intrusion detection. Full article
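The supervised contrastive objective mentioned in the abstract can be sketched in a few lines. This is a generic implementation of the standard supervised contrastive loss over cosine similarities, not the authors' code; the embeddings and temperature are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def supcon_loss(embeddings, labels, tau=0.5):
    """Supervised contrastive loss: for each anchor, pull same-label
    embeddings together and push different-label embeddings apart."""
    n = len(embeddings)
    losses = []
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue                          # anchors without positives are skipped
        denom = sum(math.exp(cosine(embeddings[i], embeddings[a]) / tau)
                    for a in range(n) if a != i)
        li = -sum(math.log(math.exp(cosine(embeddings[i], embeddings[p]) / tau)
                           / denom)
                  for p in pos) / len(pos)
        losses.append(li)
    return sum(losses) / len(losses)
```

A quick sanity check: a batch whose labels match its cluster structure should score a lower loss than the same embeddings with mismatched labels.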
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
19 pages, 407 KB  
Article
An IoT-Aware Certificateless Signature Scheme for Protection Against Type-I and Type-II Super Adversaries
by Parichehr Dadkhah, Parvin Rastegari, Mohammad Dakhilalian, Phil Yeoh, Mingzhong Wang, Shahrzad Saremi and Rania Shibl
IoT 2026, 7(2), 40; https://doi.org/10.3390/iot7020040 - 7 May 2026
Abstract
The Internet of Things (IoT) enables efficient connectivity and automation across a wide range of applications through wireless communication technology. Ensuring secure authentication and data integrity is a central challenge in this open wireless setting. Although existing cryptographic methods can address these security challenges, most of them incur additional computational and communication overhead, which is unsuitable for resource-constrained IoT devices. Researchers have therefore focused on proposing efficient schemes to satisfy security requirements in open wireless IoT frameworks. Recently, a Certificateless Signature (CLS) scheme was developed for the IoT environment. However, in this paper, we show that this CLS scheme is vulnerable to attacks by super Type-II adversaries. To strengthen this scheme, we propose a novel and efficient CLS scheme with existential unforgeability against super adversaries in the Random Oracle Model (ROM). The proposed CLS scheme achieves reduced computational complexity and communication cost. As such, it is suitable for wireless IoT networks to provide secure message authentication and data integrity. Full article
(This article belongs to the Special Issue Advances in Wireless Communication Technologies for IoT Devices)
21 pages, 697 KB  
Article
Assessing Internet of Things Readiness on University Campuses: A Smart Campus-Oriented Approach
by Dejan Arsenijević, Jasmina Arsenijević, Srđan Tegeltija, Xiaoshuan Zhang, Gordana Ostojić and Stevan Stankovski
IoT 2026, 7(2), 39; https://doi.org/10.3390/iot7020039 - 27 Apr 2026
Abstract
The Internet of Things (IoT) is increasingly recognized as a core digital infrastructure supporting digital transformation, particularly in complex environments such as university campuses, which can be conceptualized as smart campus ecosystems. However, many organizations encounter difficulties when implementing IoT due to insufficient organizational and technological readiness. This paper presents the University Campus IoT (UCIoT) readiness assessment model, which conceptualizes IoT readiness as a manifestation of organizational digital transformation readiness within the smart campus context. The model consists of 24 dimensions grouped into organizational and technological categories and is implemented through structured questionnaires and a supporting software tool. The model was developed using the design science research methodology and evaluated through a case study conducted at the University Campus of Novi Sad, Serbia. The results demonstrate that the model provides a structured and realistic assessment of IoT readiness and helps identify organizational and technological bottlenecks relevant to IoT implementation. The main contribution of this research is a context-specific readiness assessment framework tailored to university campuses that integrates organizational, technological, and client readiness dimensions. Full article
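A readiness model of this kind aggregates questionnaire responses into dimension and category scores and surfaces the weakest dimensions as bottlenecks. A minimal sketch, assuming Likert-scale items and a hypothetical bottleneck threshold; the actual UCIoT scoring rules are not given in the abstract:

```python
def readiness_report(scores, threshold=3.0):
    """scores: {category: {dimension: [item scores on a 1-5 Likert scale]}}.
    Returns per-dimension means, a category mean, and bottleneck dimensions
    (those whose mean falls below the hypothetical threshold)."""
    report = {}
    for cat, dims in scores.items():
        dim_means = {d: sum(v) / len(v) for d, v in dims.items()}
        report[cat] = {
            "dimensions": dim_means,
            "mean": sum(dim_means.values()) / len(dim_means),
            "bottlenecks": sorted(d for d, m in dim_means.items()
                                  if m < threshold),
        }
    return report
```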
30 pages, 1083 KB  
Article
HILANDER: High-Performance Intelligent Learning-Based Task Offloading for Network-Aware Dynamic Edge Resource Allocation
by Garrik Brel Jagho Mdemaya, Armel Nkonjoh Ngomade and Mthulisi Velempini
IoT 2026, 7(2), 38; https://doi.org/10.3390/iot7020038 - 27 Apr 2026
Abstract
Edge computing has emerged as a promising paradigm to minimize latency and energy consumption while improving computational efficiency for mobile devices. Latency-sensitive applications such as autonomous driving, augmented reality, and industrial automation require ultra-low response times, making efficient task offloading a necessity in edge computing. However, optimally distributing computational tasks among edge servers remains a challenge, especially when considering latency, energy consumption, and workload balancing simultaneously. Although existing approaches have focused on one or two of these objectives, they do not provide a holistic solution that incorporates all three factors. In addition, some existing solutions do not take advantage of parallelism at the edge layer, resulting in bottlenecks and inefficient resource usage. In this paper, we propose a novel learning-based task offloading model that integrates parallel processing at the edge layer, adaptive workload balancing, and joint latency–energy optimization. Moreover, by dynamically adjusting the number of selected edge servers for parallel execution, our approach achieves optimal trade-offs between performance and resource efficiency. Our experimental setup includes several edge servers and randomly deployed devices, and employs Apache HTTP Benchmark (AB) to generate realistic Mobile Edge Computing workloads. The obtained results show that our method outperforms existing approaches by reducing latency, lowering energy consumption, and maintaining a balanced workload across edge nodes. Full article
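The trade-off behind dynamically choosing how many edge servers execute a task in parallel can be sketched with a toy cost model: latency falls with more servers (minus a coordination overhead) while energy rises. The terms and constants below are illustrative assumptions, not the paper's model.

```python
def best_parallelism(work, speed, max_servers, alpha=0.7,
                     coord_overhead=0.02, energy_per_server=1.0):
    """Pick the number of edge servers k minimizing a weighted
    latency-energy cost for a divisible task (toy model)."""
    def cost(k):
        # Parallel execution time plus a per-server coordination penalty.
        latency = work / (k * speed) + coord_overhead * (k - 1)
        # Activation energy grows linearly with the number of servers.
        energy = energy_per_server * k
        return alpha * latency + (1 - alpha) * energy
    return min(range(1, max_servers + 1), key=cost)
```

With alpha = 1.0 (latency only) the model saturates all available servers; a balanced alpha settles on an intermediate degree of parallelism.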
29 pages, 6964 KB  
Article
Distance-Aware Attenuation Modeling of a Helmet-Mounted Edge Thermal System Using MLX90640 and Raspberry Pi 5 for Industrial Safety Applications: Linear Regression Approach
by Songwut Boonsong, Paniti Netinant, Rerkchai Fooprateepsiri, Meennapa Rukhiran and Manasanan Bunpalwong
IoT 2026, 7(2), 37; https://doi.org/10.3390/iot7020037 - 26 Apr 2026
Abstract
Thermal hazards in industrial environments often remain undetected until critical failure or injury occurs. Conventional handheld infrared cameras require manual operation and limit continuous situational awareness. This study presents the design and field validation of a wearable helmet-mounted real-time thermal system based on the MLX90640 infrared array sensor and a Raspberry Pi 5 edge computing platform. Experimental validation was performed across multiple scenarios of 400 measurements based on industrial distances of 100 cm and 150 cm. The performance of the system was tested against a pre-calibrated hotspot infrared thermometer using linear regression analysis and standard error metrics to determine proportional agreement. The results indicate a strong proportional relationship between the two systems at both industrial distances, with R2 values ranging from 0.9885 to 0.9973 at 100 cm and from 0.9586 to 0.9867 at 150 cm. A moderate increase in mean absolute error (MAE) was observed as the measurement distance increased. Statistically significant increases in measurement error were identified in mechanically dynamic scenarios (p < 0.05), indicating distance-dependent sensitivity under moving mechanical conditions. The higher absolute errors at longer distances mainly result from field-of-view expansion, reduced target occupancy, and mixed-pixel hotspot effects rather than weakened proportional trend stability. An industrial distance-aware linear regression model was developed to describe behavior and support calibrations under different deployment conditions. Despite minor absolute deviations during dynamic operations, the system maintained strong trend-tracking performance, suggesting suitability for daily preliminary hazard monitoring in industrial safety maintenance. Full article
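A distance-aware linear model of the kind described is fitted with ordinary least squares. The implementation below is the textbook closed-form fit with R-squared, shown on made-up reading pairs rather than the paper's 400 measurements:

```python
def linear_fit(xs, ys):
    """Ordinary least squares fit y = a*x + b, returning (a, b, r2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)            # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx                                   # slope
    b = my - a * mx                                 # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return a, b, r2
```

One such fit per measurement distance yields the distance-specific slope and intercept that a distance-aware correction can then select between.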
18 pages, 699 KB  
Article
PatternStudio: A Neuro-Symbolic Framework for Dynamic and High-Throughput Complex Event Processing
by Jesús Rosa-Bilbao
IoT 2026, 7(2), 36; https://doi.org/10.3390/iot7020036 - 22 Apr 2026
Abstract
Complex Event Processing (CEP) is essential for real-time analytics in domains such as industrial IoT, cybersecurity, and financial monitoring, yet CEP adoption is still hindered by the difficulty of authoring temporal rules and by rigid redeployment workflows. This paper presents PatternStudio, a neuro-symbolic CEP framework that translates natural language specifications into validated event-processing patterns and executes them on a deterministic Apache Flink-based runtime without interrupting service. The generative layer is constrained to produce a typed intermediate representation, while the symbolic layer enforces validation and runtime execution guarantees. We evaluate the prototype as a single-node system-characterization study on commodity hardware representative of edge and near-edge gateways rather than microcontroller-class devices. Under this setting, PatternStudio reaches 47,910 events per second at 250 active rules while maintaining a bounded memory footprint between 1.6 GB and 1.9 GB during the reported runs. Beyond 500 active rules, throughput degradation is driven primarily by CPU saturation and alert amplification, which also explains the sharp increase in tail latency. Additional measurements with parallelism 4, a static baseline, and a two-stage NL-to-IR evaluation further show that the architecture remains functional under partitioned execution, incurs moderate dynamic-orchestration overhead, preserves rule structure reliably under natural-language authoring, and supports interchangeable LLM backends at the semantic front end. Full article
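The idea of constraining generation to a typed intermediate representation that a symbolic layer then validates can be sketched as follows. The field names, operators, and validation rules here are hypothetical, not PatternStudio's actual IR:

```python
from dataclasses import dataclass

ALLOWED_OPS = {">", ">=", "<", "<=", "=="}   # hypothetical operator whitelist

@dataclass(frozen=True)
class Condition:
    field_name: str
    op: str
    value: float

@dataclass(frozen=True)
class PatternIR:
    event_type: str
    conditions: tuple       # tuple of Condition
    window_seconds: int

def validate(ir):
    """Symbolic-layer check: reject malformed patterns before they
    ever reach the runtime, returning a list of error strings."""
    errors = []
    if not ir.event_type:
        errors.append("missing event type")
    if ir.window_seconds <= 0:
        errors.append("window must be positive")
    for c in ir.conditions:
        if c.op not in ALLOWED_OPS:
            errors.append(f"unsupported operator {c.op!r}")
    return errors
```

Because the generative layer can only emit instances of this typed structure, every pattern that reaches the runtime has already passed deterministic validation.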
20 pages, 5108 KB  
Article
Privacy-Preserving Emergency Vehicle Authentication Scheme Using Zero-Knowledge Proofs and Blockchain
by Hanshi Li, Drishti Oza, Masami Yoshida and Taku Noguchi
IoT 2026, 7(2), 35; https://doi.org/10.3390/iot7020035 - 21 Apr 2026
Abstract
Emergency vehicle authentication in vehicular ad hoc networks must satisfy strict latency, privacy, and trust constraints. Existing Public Key Infrastructure- and Conditional Privacy-Preserving Authentication-based schemes incur substantial overhead from certificate management and expensive per-hop verification, making them unsuitable for real-time emergency scenarios. We propose a lightweight zero-knowledge- and blockchain-assisted authentication scheme that eliminates certificates, pseudonym pools, and the requirement for online interaction with a trusted authority during the authentication phase. The Certificate Authority (CA) is involved only during offline initialization stages (vehicle enrollment and Merkle tree construction); once provisioning is complete, the runtime authentication process operates without any online CA interaction. Each emergency vehicle registers one-time hash commitments on-chain after proving membership in a category-specific Merkle tree, and authenticates messages by broadcasting a hash along with a zero-knowledge proof of preimage knowledge. Roadside units verify the proof and consult the on-chain state to enforce single-use semantics, creating a tamper-resistant audit trail. Evaluation using the Veins framework (OMNeT++/SUMO) demonstrated a constant 288-byte authenticated payload, millisecond-level end-to-end delay independent of hop count, and stable blockchain processing under sustained load. Full article
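The Merkle-tree membership and hash-commitment building blocks the scheme relies on can be sketched with stdlib hashing. This illustrates only the generic constructions, not the paper's zero-knowledge circuit or on-chain logic:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree; odd levels duplicate the last node."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root; each entry is (hash, sibling_is_left)."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its proof path."""
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root
```

In the scheme, a vehicle proves membership in a category-specific tree like this one, then broadcasts a hash commitment whose single use the roadside units enforce via on-chain state.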
(This article belongs to the Special Issue Internet of Vehicles (IoV))
22 pages, 4808 KB  
Article
Transforming Opportunistic Routing: A Deep Reinforcement Learning Framework for Reliable and Energy-Efficient Communication in Mobile Cognitive Radio Sensor Networks
by Suleiman Zubair, Bala Alhaji Salihu, Altyeb Altaher Taha, Yakubu Suleiman Baguda, Ahmed Hamza Osman and Asif Hassan Syed
IoT 2026, 7(2), 34; https://doi.org/10.3390/iot7020034 - 21 Apr 2026
Abstract
The Mobile Reliable Opportunistic Routing (MROR) protocol improves data-forwarding reliability in Cognitive Radio Sensor Networks (CRSNs) through mobility-aware virtual contention groups and handover zoning. However, its heuristic decision logic is difficult to optimize under highly dynamic spectrum access and random node mobility. To address this limitation, we present DRL-MROR, a refined routing framework that incorporates deep reinforcement learning (DRL) to enable intelligent and adaptive forwarding decisions. In DRL-MROR, the secondary users (SUs) act as autonomous agents that observe local state information, including primary-user activity, link quality, residual energy, and neighbor-mobility patterns. Each agent learns a forwarding policy through a Deep Q-Network (DQN) optimized for long-term network utility in terms of throughput, delay, and energy efficiency. We formulate routing as a Markov Decision Process (MDP) and use experience replay with prioritized sampling to improve learning stability and convergence. The DQN used at each node is intentionally lightweight, requiring 5514 trainable parameters, about 21.5 kB of weight storage in 32-bit precision, and approximately 5.4k multiply-accumulate operations per inference, which supports practical deployment on edge-capable CRSN nodes. Extensive simulations show that DRL-MROR outperforms the original MROR protocol and representative AI-based routing baselines such as AIRoute under diverse operating conditions. The results indicate gains of up to 38% in throughput, 42% in goodput, a 29% reduction in energy consumed per packet, and an approximately 18% improvement in network lifetime, while maintaining high route stability and fairness. DRL-MROR also reduces control overhead by about 30% and average end-to-end delay by up to 32%, maintaining strong performance even under elevated PU activity and higher node mobility. These results show that augmenting opportunistic routing with lightweight DRL can substantially improve adaptability and efficiency in next-generation IoT-oriented CRSNs. Full article
(This article belongs to the Special Issue Advances in Wireless Communication Technologies for IoT Devices)
40 pages, 1741 KB  
Article
Edge AI Bridge: A Micro-Layer Intrusion Detection Architecture for Smart-City IoT Networks
by Sethu Subramanian N, Prabu P, Kurunandan Jain and Prabhakar Krishnan
IoT 2026, 7(2), 33; https://doi.org/10.3390/iot7020033 - 16 Apr 2026
Abstract
Smart-city IoT ecosystems depend on a large number of devices with limited resources, which often lack built-in security mechanisms. While traditional cloud-based or gateway-centric intrusion detection systems (IDSs) offer essential security, they are still characterized by high detection latency, considerable bandwidth demand, and a lack of precise monitoring of single device actions. This study proposes the Edge AI Bridge, a novel micro-computing security layer positioned between IoT devices and the gateway to enable early-stage threat interception. The architecture integrates embedded AI hardware with a hybrid pipeline, utilizing unsupervised anomaly detection for behavioral profiling and a lightweight signature-matching module to minimize false positives. System operations, including localized traffic inspection, protocol parsing, and feature extraction, are performed before data aggregation, which preserves device-level privacy and reduces the computational burden on the IoT gateway. The contemporary CIC-IoT-2023 dataset, which captures a wide range of smart-city protocols and attack vectors, is used to evaluate the architecture. The Edge AI Bridge leads to a significant reduction in detection latency, approximately 50 ms on average versus the 500 ms of cloud-based solutions, while the resource footprint is kept low at about 20% CPU utilization. The Edge AI Bridge thus demonstrates a scalable, modular, and privacy-preserving approach to improving the cyber resilience of large, heterogeneous, and difficult-to-manage smart-city infrastructures. Full article
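The hybrid pipeline, unsupervised behavioral profiling followed by signature matching to cut false positives, can be sketched as follows. The z-score baseline profile and the signature tags are simplifying assumptions, not the paper's actual detectors:

```python
import statistics

SIGNATURES = {"mirai-scan", "syn-flood"}   # illustrative signature IDs

class HybridDetector:
    """Two-stage sketch: flag a traffic feature when it deviates from the
    device's learned baseline, then confirm against known signatures."""
    def __init__(self, baseline, z_threshold=3.0):
        self.mean = statistics.mean(baseline)
        self.std = statistics.pstdev(baseline) or 1e-9   # avoid div-by-zero
        self.z_threshold = z_threshold

    def inspect(self, value, tag=None):
        z = abs(value - self.mean) / self.std
        if z < self.z_threshold:
            return "benign"
        # Anomalous behavior: confirmed signatures become hard alerts,
        # everything else is surfaced as an unconfirmed anomaly.
        return "alert" if tag in SIGNATURES else "anomaly"
```

Running the first stage per device on the micro-layer keeps raw traffic local; only confirmed alerts need to cross the gateway.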
21 pages, 786 KB  
Article
Intelligent Railway Wagon Health Assessment Using IoT Sensors and Predictive Analytics for Safety-Critical Applications
by Shiva Kumar Mysore Gangadhara, Krishna Alabhujanahalli Neelegowda, Anitha Arekattedoddi Chikkalingaiah and Naveena Chikkaguddaiah
IoT 2026, 7(2), 32; https://doi.org/10.3390/iot7020032 - 2 Apr 2026
Abstract
The safety and reliability of railway wagon operations largely depend on the timely detection of degradation in safety-critical components such as axle bearings, wheelsets, and braking systems. Conventional maintenance strategies based on fixed inspection intervals are often inadequate for capturing the actual operating conditions of wagon components, leading to delayed fault detection or unnecessary maintenance actions. To address these limitations, this paper proposes a sensor-based health assessment framework for the continuous monitoring of railway wagons under operational conditions. The proposed framework integrates multi-sensor data acquisition, systematic signal preprocessing, feature-based health indicator construction, and temporal degradation analysis to evaluate component health in real time. A safety-oriented decision logic is employed to classify operating conditions and generate reliable alerts while minimizing false detections caused by transient disturbances. The effectiveness of the proposed approach is validated using a publicly available run-to-failure bearing dataset that exhibits degradation characteristics similar to those observed in railway wagon axle bearings. Experimental results demonstrate that the proposed framework achieves improved classification accuracy, higher detection reliability, reduced false alarm rates, and lower detection latency compared to representative existing condition monitoring approaches. In addition, the computational efficiency of the proposed model confirms its suitability for real-time deployment. The results indicate that the proposed health assessment framework provides a practical and reliable solution for safety-critical railway wagon monitoring and forms a strong foundation for future extensions toward predictive maintenance and remaining useful life estimation. Full article
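The safety-oriented decision logic, alerting only when a health indicator stays above a limit for several consecutive windows so that transient disturbances are suppressed, can be sketched like this. Using vibration RMS as the health indicator is an assumption; the paper constructs richer features:

```python
import math

def rms(window):
    """Root-mean-square of a vibration window (assumed health indicator)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

class HealthMonitor:
    """Raise an alert only after the indicator exceeds the limit for
    `persistence` consecutive windows, suppressing transient spikes."""
    def __init__(self, limit, persistence=3):
        self.limit = limit
        self.persistence = persistence
        self.run = 0    # consecutive over-limit windows seen so far

    def update(self, vibration_window):
        hi = rms(vibration_window)
        self.run = self.run + 1 if hi > self.limit else 0
        return self.run >= self.persistence
```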
30 pages, 1666 KB  
Article
Cryptanalysis and Improvement of the SMEP-IoV Protocol: A Secure and Lightweight Protocol for Message Exchange in IoV Paradigm
by Gelare Oudi Ghadim, Parvin Rastegari, Mohammad Dakhilalian, Faramarz Hendessi, Shahrzad Saremi, Rania Shibl, Yassine Himeur, Shadi Atalla and Wathiq Mansoor
IoT 2026, 7(2), 31; https://doi.org/10.3390/iot7020031 - 31 Mar 2026
Abstract
The Internet of Vehicles (IoV) is a rapidly evolving technology that provides real-time connectivity, enhanced road safety, and reduced traffic congestion; however, its inherently open communication channels expose it to serious security and privacy threats. In 2021, Chaudhry proposed SMEP-IoV, a lightweight message authentication protocol designed to satisfy essential security requirements. This paper presents a comprehensive security analysis of SMEP-IoV and reveals several serious vulnerabilities. Specifically, sensitive credentials are stored in plaintext without tamper-resistant protection, and both authentication and session key derivation depend directly on these credentials. These structural flaws allow an adversary to extract the stored secrets, generate valid authentication messages, and derive the established session key, enabling vehicle impersonation and session key disclosure attacks. Moreover, compromise of long-term secrets facilitates key compromise impersonation attacks. It also fails to ensure anonymity and perfect forward secrecy. To address these issues, we propose an enhanced authentication protocol for resource-constrained IoV environments, leveraging a three-factor authentication mechanism combined with lightweight cryptographic primitives. Formal security analyses using BAN logic, Tamarin, and ProVerif confirm its resilience against known attacks, while NS-3 simulations validate its scalability, high throughput, and low End-to-End Delay (E2ED). The results highlight the protocol as a robust, efficient, and scalable solution for large-scale IoV deployments. Full article
(This article belongs to the Special Issue Internet of Vehicles (IoV))
31 pages, 1333 KB  
Article
Optimal Security Task Offloading in Cognitive IoT Networks: Provably Optimal Threshold Policies and Model-Free Learning
by Ning Wang and Yali Ren
IoT 2026, 7(2), 30; https://doi.org/10.3390/iot7020030 - 26 Mar 2026
Abstract
The proliferation of Internet of Things (IoT) devices has introduced significant security challenges. Resource-constrained devices face sophisticated threats but lack the computational capacity for advanced security analysis. This study investigates optimal security task allocation in Cognitive IoT (CIoT) networks. It specifically examines when IoT devices should process security tasks locally or offload them to Mobile Edge Computing (MEC) servers. The problem is formulated as a Continuous-Time Markov Decision Process (CTMDP). The study demonstrates that the optimal offloading policy has a threshold structure. Security tasks are offloaded to MEC servers when the offloading queue length is below a critical threshold, k. Otherwise, tasks are processed locally. This structural property is robust to changes in MEC server configurations and threat arrival patterns. It ensures an optimal and easily implementable security policy under the exponential model. Theoretical analysis establishes upper bounds on the performance of AI-based security controllers using the same models. The results also show that standard model-free Q-learning algorithms can recover optimal thresholds without any prior knowledge of the system parameters. Simulations across multiple reinforcement learning architectures, including Q-learning, State–Action–Reward–State–Action (SARSA), and Deep Q-networks (DQN), confirm that all methods converge to the predicted threshold. This empirically validates the analytical findings. The threshold structure remains effective under practical imperfections such as imperfect sensing and parameter estimation errors. Systems maintain 85% to 93% of their optimal performance. This work extends threshold Markov Decision Process (MDP) analysis from classical queuing theory to the context of CIoT security offloading. It provides optimal and practical policies and model-free algorithms for use by resource-constrained devices. Full article
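The threshold structure of the optimal policy can be reproduced on a toy discounted MDP with value iteration: offloading costs grow with queue length while local processing costs a constant, so the greedy policy crosses over exactly once. The costs and transition model below are invented for illustration and are far simpler than the paper's CTMDP:

```python
def optimal_offload_policy(n_states=16, c_tx=0.2, c_local=1.0,
                           wait_cost=0.1, depart_p=0.6, gamma=0.9,
                           iters=500):
    """Value iteration on a toy offloading MDP over queue lengths 0..n-1.
    The greedy policy comes out threshold-shaped: offload below some
    queue length k, process locally at or above it."""
    V = [0.0] * n_states

    def q_value(s, action):
        if action == "offload":
            cost = c_tx + wait_cost * s          # waiting cost grows with queue
            s2 = min(s + 1, n_states - 1)        # offloaded task joins the queue
        else:
            cost = c_local                       # constant local-processing cost
            s2 = s
        # A queued task departs with probability depart_p between decisions.
        nxt = depart_p * V[max(s2 - 1, 0)] + (1 - depart_p) * V[s2]
        return cost + gamma * nxt

    for _ in range(iters):
        V = [min(q_value(s, a) for a in ("offload", "local"))
             for s in range(n_states)]
    return ["offload" if q_value(s, "offload") <= q_value(s, "local")
            else "local" for s in range(n_states)]
```

As the abstract notes, model-free learners such as Q-learning recover the same threshold without knowing these parameters; the value-iteration solution serves as the analytical reference they converge to.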