Journal Description
IoT
IoT is an international, peer-reviewed, open access journal on the Internet of Things (IoT), published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 25.5 days after submission; acceptance to publication takes 5.3 days (median values for papers published in this journal in the second half of 2025).
- Journal Rank: JCR - Q2 (Telecommunications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Journal Cluster of Network and Communications Technology: Future Internet, IoT, Telecom, Journal of Sensor and Actuator Networks, Network, Signals.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 3.2 (2024)
Latest Articles
Assessing Internet of Things Readiness on University Campuses: A Smart Campus-Oriented Approach
IoT 2026, 7(2), 39; https://doi.org/10.3390/iot7020039 - 27 Apr 2026
Abstract
The Internet of Things (IoT) is increasingly recognized as a core digital infrastructure supporting digital transformation, particularly in complex environments such as university campuses, which can be conceptualized as smart campus ecosystems. However, many organizations encounter difficulties when implementing IoT due to insufficient organizational and technological readiness. This paper presents the University Campus IoT (UCIoT) readiness assessment model, which conceptualizes IoT readiness as a manifestation of organizational digital transformation readiness within the smart campus context. The model consists of 24 dimensions grouped into organizational and technological categories and is implemented through structured questionnaires and a supporting software tool. The model was developed using the design science research methodology and evaluated through a case study conducted at the University Campus of Novi Sad, Serbia. The results demonstrate that the model provides a structured and realistic assessment of IoT readiness and helps identify organizational and technological bottlenecks relevant to IoT implementation. The main contribution of this research is a context-specific readiness assessment framework tailored to university campuses that integrates organizational, technological, and client readiness dimensions.
Full article
Open Access Article
HILANDER: High-Performance Intelligent Learning-Based Task Offloading for Network-Aware Dynamic Edge Resource Allocation
by Garrik Brel Jagho Mdemaya, Armel Nkonjoh Ngomade and Mthulisi Velempini
IoT 2026, 7(2), 38; https://doi.org/10.3390/iot7020038 - 27 Apr 2026
Abstract
Edge computing has emerged as a promising paradigm to minimize latency and energy consumption while improving computational efficiency for mobile devices. Latency-sensitive applications such as autonomous driving, augmented reality, and industrial automation require ultra-low response times, making efficient task offloading a necessity in edge computing. However, optimally distributing computational tasks among edge servers remains a challenge, especially when considering latency, energy consumption, and workload balancing simultaneously. Although existing approaches have focused on one or two of these objectives, they do not provide a holistic solution that incorporates all three factors. In addition, some existing solutions do not take advantage of parallelism at the edge layer, resulting in bottlenecks and inefficient resource usage. In this paper, we propose a novel learning-based task offloading model that integrates parallel processing at the edge layer, adaptive workload balancing, and joint latency–energy optimization. Moreover, by dynamically adjusting the number of selected edge servers for parallel execution, our approach achieves optimal trade-offs between performance and resource efficiency. Our experimental setup includes several edge servers and several randomly deployed devices. It employs Apache HTTP Benchmark (AB) to generate realistic Mobile Edge Computing workloads. The obtained results show that our method outperforms existing approaches by reducing latency, lowering energy consumption, and maintaining a balanced workload across edge nodes.
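The trade-off this abstract describes can be made concrete with a toy cost model. The Python sketch below scores a candidate number of parallel edge servers by a weighted latency-energy cost; the cost form, weights, and server parameters are illustrative assumptions, not the authors' model.

```python
def offload_cost(task_cycles, task_bits, k, servers,
                 e_tx_per_bit=5e-8, e_setup=0.05, w_lat=0.6, w_en=0.4):
    """Weighted cost of splitting one task across the k fastest edge servers."""
    chosen = sorted(servers, key=lambda s: s["cpu_hz"], reverse=True)[:k]
    # With an equal split, the slowest chosen server bounds the finish time.
    latency = (task_cycles / k) / min(s["cpu_hz"] for s in chosen)
    # Radio cost for the task bits plus a per-server dispatch overhead.
    energy = e_tx_per_bit * task_bits + e_setup * k
    return w_lat * latency + w_en * energy

def best_parallelism(task_cycles, task_bits, servers):
    """Dynamically pick how many servers to use, mirroring the adaptive step."""
    return min(range(1, len(servers) + 1),
               key=lambda k: offload_cost(task_cycles, task_bits, k, servers))

servers = [{"cpu_hz": f} for f in (2e9, 1.5e9, 1e9, 0.8e9)]
print(best_parallelism(5e9, 8e6, servers))  # the k with the lowest weighted cost
```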
Full article
Open Access Article
Distance-Aware Attenuation Modeling of a Helmet-Mounted Edge Thermal System Using MLX90640 and Raspberry Pi 5 for Industrial Safety Applications: Linear Regression Approach
by Songwut Boonsong, Paniti Netinant, Rerkchai Fooprateepsiri, Meennapa Rukhiran and Manasanan Bunpalwong
IoT 2026, 7(2), 37; https://doi.org/10.3390/iot7020037 - 26 Apr 2026
Abstract
Thermal hazards in industrial environments often remain undetected until critical failure or injury occurs. Conventional handheld infrared cameras require manual operation and limit continuous situational awareness. This study presents the design and field validation of a wearable helmet-mounted real-time thermal system based on the MLX90640 infrared array sensor and a Raspberry Pi 5 edge computing platform. Experimental validation was performed across multiple scenarios comprising 400 measurements at industrial distances of 100 cm and 150 cm. The performance of the system was tested against a pre-calibrated hotspot infrared thermometer using linear regression analysis and standard error metrics to determine proportional agreement. The results indicate a strong proportional relationship between the two systems at both industrial distances, with R² values ranging from 0.9885 to 0.9973 at 100 cm and from 0.9586 to 0.9867 at 150 cm. A moderate increase in mean absolute error (MAE) was observed as the measurement distance increased. Statistically significant increases in measurement error were identified in mechanically dynamic scenarios (p-value < 0.05), indicating distance-dependent sensitivity under moving mechanical conditions. The higher absolute errors at longer distances mainly result from field-of-view expansion, reduced target occupancy, and mixed-pixel hotspot effects rather than weakened proportional trend stability. A distance-aware linear regression model was developed to describe this behavior and support calibration under different industrial deployment conditions. Despite minor absolute deviations during dynamic operations, the system maintained strong trend-tracking performance, suggesting suitability for daily preliminary hazard monitoring in industrial safety maintenance.
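As a rough illustration of the per-distance calibration described here, the Python sketch below fits a linear model between paired sensor and reference readings and reports R² and MAE; the variable names and synthetic data are assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

def fit_distance_model(mlx_c, ref_c):
    """Fit reference = a * sensor + b for one deployment distance."""
    X = np.asarray(mlx_c, float).reshape(-1, 1)
    y = np.asarray(ref_c, float)
    model = LinearRegression().fit(X, y)
    pred = model.predict(X)
    return model, r2_score(y, pred), mean_absolute_error(y, pred)

rng = np.random.default_rng(0)
mlx = rng.uniform(30, 120, 200)                   # MLX90640 hotspot readings, deg C
ref = 1.03 * mlx - 1.2 + rng.normal(0, 0.8, 200)  # reference thermometer readings
model, r2, mae = fit_distance_model(mlx, ref)
print(f"slope={model.coef_[0]:.3f} intercept={model.intercept_:.2f} "
      f"R2={r2:.4f} MAE={mae:.2f}")
```

Fitting one such model per distance (100 cm and 150 cm in the study) yields the distance-aware correction the abstract refers to.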
Full article
Open Access Article
PatternStudio: A Neuro-Symbolic Framework for Dynamic and High-Throughput Complex Event Processing
by Jesús Rosa-Bilbao
IoT 2026, 7(2), 36; https://doi.org/10.3390/iot7020036 - 22 Apr 2026
Abstract
Complex Event Processing (CEP) is essential for real-time analytics in domains such as industrial IoT, cybersecurity, and financial monitoring, yet CEP adoption is still hindered by the difficulty of authoring temporal rules and by rigid redeployment workflows. This paper presents PatternStudio, a neuro-symbolic CEP framework that translates natural language specifications into validated event-processing patterns and executes them on a deterministic Apache Flink-based runtime without interrupting service. The generative layer is constrained to produce a typed intermediate representation, while the symbolic layer enforces validation and runtime execution guarantees. We evaluate the prototype as a single-node system-characterization study on commodity hardware representative of edge and near-edge gateways rather than microcontroller-class devices. Under this setting, PatternStudio reaches 47,910 events per second at 250 active rules while maintaining a bounded memory footprint between 1.6 GB and 1.9 GB during the reported runs. Beyond 500 active rules, throughput degradation is driven primarily by CPU saturation and alert amplification, which also explains the sharp increase in tail latency. Additional measurements with parallelism 4, a static baseline, and a two-stage NL-to-IR evaluation further show that the architecture remains functional under partitioned execution, incurs moderate dynamic-orchestration overhead, preserves rule structure reliably under natural-language authoring, and supports interchangeable LLM backends at the semantic front end.
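Under the constrained-generation idea described above, a toy Python sketch of a typed intermediate representation plus symbolic validation might look like the following; the schema and field names are invented for illustration and are not PatternStudio's actual design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatternIR:
    """Typed IR the generative layer is constrained to emit (illustrative)."""
    name: str
    event_type: str
    predicate: str          # e.g., "temperature > 80"
    window_seconds: int

def validate(ir: PatternIR) -> None:
    """Symbolic layer: reject malformed IR before it reaches the runtime."""
    if ir.window_seconds <= 0:
        raise ValueError("window must be positive")
    field, op, _ = ir.predicate.split()
    if op not in {">", "<", ">=", "<=", "=="}:
        raise ValueError(f"unsupported operator {op}")

ir = PatternIR("overheat", "SensorReading", "temperature > 80", 60)
validate(ir)   # passes; a hallucinated operator would be rejected here
print(ir)
```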
Full article
Open Access Article
Privacy-Preserving Emergency Vehicle Authentication Scheme Using Zero-Knowledge Proofs and Blockchain
by Hanshi Li, Drishti Oza, Masami Yoshida and Taku Noguchi
IoT 2026, 7(2), 35; https://doi.org/10.3390/iot7020035 - 21 Apr 2026
Abstract
Emergency vehicle authentication in vehicular ad hoc networks must satisfy strict latency, privacy, and trust constraints. Existing Public Key Infrastructure- and Conditional Privacy-Preserving Authentication-based schemes incur substantial overhead from certificate management and expensive per-hop verification, making them unsuitable for real-time emergency scenarios. We propose a lightweight zero-knowledge- and blockchain-assisted authentication scheme that eliminates certificates, pseudonym pools, and the requirement for online interaction with a trusted authority during the authentication phase. The Certificate Authority (CA) is involved only during offline initialization stages (vehicle enrollment and Merkle tree construction); once provisioning is complete, the runtime authentication process operates without any online CA interaction. Each emergency vehicle registers one-time hash commitments on-chain after proving membership in a category-specific Merkle tree, and authenticates messages by broadcasting a hash along with a zero-knowledge proof of preimage knowledge. Roadside units verify the proof and consult the on-chain state to enforce single-use semantics, creating a tamper-resistant audit trail. Evaluation using the Veins framework (OMNeT++/SUMO) demonstrated a constant 288-byte authenticated payload, millisecond-level end-to-end delay independent of hop count, and stable blockchain processing under sustained load.
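A minimal Python sketch of the one-time hash-commitment and Merkle-membership ingredients mentioned above follows; the zero-knowledge proof of preimage knowledge and the on-chain bookkeeping are out of scope here, and all function names are illustrative.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_commitment():
    """One-time secret held by the vehicle and its on-chain hash commitment."""
    secret = secrets.token_bytes(32)
    return secret, H(secret)

def merkle_root(leaves):
    """Root of a category-specific tree of enrolled vehicle commitments."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# At authentication time an RSU would check H(revealed_secret) == commitment
# and consult on-chain state to enforce single-use semantics.
secret, commitment = make_commitment()
print(H(secret) == commitment, merkle_root([commitment, H(b"other")]).hex()[:16])
```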
Full article
(This article belongs to the Special Issue Internet of Vehicles (IoV))
Open Access Article
Transforming Opportunistic Routing: A Deep Reinforcement Learning Framework for Reliable and Energy-Efficient Communication in Mobile Cognitive Radio Sensor Networks
by Suleiman Zubair, Bala Alhaji Salihu, Altyeb Altaher Taha, Yakubu Suleiman Baguda, Ahmed Hamza Osman and Asif Hassan Syed
IoT 2026, 7(2), 34; https://doi.org/10.3390/iot7020034 - 21 Apr 2026
Abstract
The Mobile Reliable Opportunistic Routing (MROR) protocol improves data-forwarding reliability in Cognitive Radio Sensor Networks (CRSNs) through mobility-aware virtual contention groups and handover zoning. However, its heuristic decision logic is difficult to optimize under highly dynamic spectrum access and random node mobility. To address this limitation, we present DRL-MROR, a refined routing framework that incorporates deep reinforcement learning (DRL) to enable intelligent and adaptive forwarding decisions. In DRL-MROR, the secondary users (SUs) act as autonomous agents that observe local state information, including primary-user activity, link quality, residual energy, and neighbor-mobility patterns. Each agent learns a forwarding policy through a Deep Q-Network (DQN) optimized for long-term network utility in terms of throughput, delay, and energy efficiency. We formulate routing as a Markov Decision Process (MDP) and use experience replay with prioritized sampling to improve learning stability and convergence. The DQN used at each node is intentionally lightweight, requiring 5514 trainable parameters, about 21.5 kB of weight storage in 32-bit precision, and approximately 5.4k multiply-accumulate operations per inference, which supports practical deployment on edge-capable CRSN nodes. Extensive simulations show that DRL-MROR outperforms the original MROR protocol and representative AI-based routing baselines such as AIRoute under diverse operating conditions. The results indicate gains of up to 38% in throughput, 42% in goodput, a 29% reduction in energy consumed per packet, and an approximately 18% improvement in network lifetime, while maintaining high route stability and fairness. DRL-MROR also reduces control overhead by about 30% and average end-to-end delay by up to 32%, maintaining strong performance even under elevated PU activity and higher node mobility. These results show that augmenting opportunistic routing with lightweight DRL can substantially improve adaptability and efficiency in next-generation IoT-oriented CRSNs.
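To make the reported footprint concrete, the PyTorch sketch below builds a similarly small Q-network and counts its parameters; the layer sizes and state/action dimensions are assumptions chosen only to land near the roughly 5.5k-parameter budget the paper reports, not the authors' exact architecture.

```python
import torch.nn as nn

state_dim, n_actions = 12, 6  # e.g., PU activity, link quality, energy, mobility
qnet = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
n_params = sum(p.numel() for p in qnet.parameters())
print(n_params, "trainable parameters,", round(4 * n_params / 1024, 1), "kB in fp32")
# -> 5382 trainable parameters, ~21.0 kB, comparable to the figures above
```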
Full article
(This article belongs to the Special Issue Advances in Wireless Communication Technologies for IoT Devices)
Open Access Article
Edge AI Bridge: A Micro-Layer Intrusion Detection Architecture for Smart-City IoT Networks
by Sethu Subramanian N, Prabu P, Kurunandan Jain and Prabhakar Krishnan
IoT 2026, 7(2), 33; https://doi.org/10.3390/iot7020033 - 16 Apr 2026
Abstract
Smart-city IoT ecosystems depend on a large number of devices with limited resources, which often lack built-in security mechanisms. While traditional cloud-based or gateway-centric intrusion detection systems (IDSs) offer essential security, they are still characterized by high detection latency, considerable bandwidth demand, and a lack of precise monitoring of single device actions. This study proposes the Edge AI Bridge, a novel micro-computing security layer positioned between IoT devices and the gateway to enable early-stage threat interception. The architecture integrates embedded AI hardware with a hybrid pipeline, utilizing unsupervised anomaly detection for behavioral profiling and a lightweight signature-matching module to minimize false positives. System operations, including localized traffic inspection, protocol parsing, and feature extraction, are performed before data aggregation, which preserves device-level privacy and reduces the computational burden on the IoT gateway. The contemporary CIC-IoT-2023 dataset, which captures a wide range of smart-city protocols and attack vectors, is used to evaluate the architecture. The Edge AI Bridge reduces detection latency significantly (≈50 ms on average, versus ≈500 ms for cloud-based solutions) while keeping the resource footprint low, at about 20% CPU utilization. The Edge AI Bridge thus offers a scalable, modular, and privacy-preserving solution for improving the cyber resilience of large, heterogeneous, and hard-to-manage smart-city infrastructures.
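A compact Python sketch of the hybrid-pipeline idea, using scikit-learn's IsolationForest for behavioral profiling plus a toy signature store, is shown below; the feature layout, signature tuples, and decision rules are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

SIGNATURES = {("tcp", 23, "mirai-like")}       # illustrative known-bad flow tuples
profiler = IsolationForest(n_estimators=50, contamination=0.01, random_state=0)

def train(benign_feature_rows):
    """Learn a per-device behavioral baseline from benign traffic features."""
    profiler.fit(benign_feature_rows)

def inspect(feature_row, flow_tuple):
    """Anomaly detector flags; signature match confirms to cut false positives."""
    anomalous = profiler.predict([feature_row])[0] == -1
    if anomalous and flow_tuple in SIGNATURES:
        return "block"
    return "flag" if anomalous else "pass"

train(np.random.rand(500, 6))                  # stand-in for benign traffic
print(inspect(np.random.rand(6), ("tcp", 80, "http")))
```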
Full article
Open Access Article
Intelligent Railway Wagon Health Assessment Using IoT Sensors and Predictive Analytics for Safety-Critical Applications
by Shiva Kumar Mysore Gangadhara, Krishna Alabhujanahalli Neelegowda, Anitha Arekattedoddi Chikkalingaiah and Naveena Chikkaguddaiah
IoT 2026, 7(2), 32; https://doi.org/10.3390/iot7020032 - 2 Apr 2026
Abstract
The safety and reliability of railway wagon operations largely depend on the timely detection of degradation in safety-critical components such as axle bearings, wheelsets, and braking systems. Conventional maintenance strategies based on fixed inspection intervals are often inadequate for capturing the actual operating conditions of wagon components, leading to delayed fault detection or unnecessary maintenance actions. To address these limitations, this paper proposes a sensor-based health assessment framework for the continuous monitoring of railway wagons under operational conditions. The proposed framework integrates multi-sensor data acquisition, systematic signal preprocessing, feature-based health indicator construction, and temporal degradation analysis to evaluate component health in real time. A safety-oriented decision logic is employed to classify operating conditions and generate reliable alerts while minimizing false detections caused by transient disturbances. The effectiveness of the proposed approach is validated using a publicly available run-to-failure bearing dataset that exhibits degradation characteristics similar to those observed in railway wagon axle bearings. Experimental results demonstrate that the proposed framework achieves improved classification accuracy, higher detection reliability, reduced false alarm rates, and lower detection latency compared to representative existing condition monitoring approaches. In addition, the computational efficiency of the proposed model confirms its suitability for real-time deployment. The results indicate that the proposed health assessment framework provides a practical and reliable solution for safety-critical railway wagon monitoring and forms a strong foundation for future extensions toward predictive maintenance and remaining useful life estimation.
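The sketch below illustrates, in Python, a feature-based health indicator with the kind of safety-oriented persistence logic the abstract describes (alert only on sustained exceedance, filtering transients); the RMS/kurtosis features, weights, and thresholds are assumptions rather than the paper's exact design.

```python
import numpy as np

def health_indicator(vibration_window):
    """Composite health indicator from one window of vibration samples."""
    x = np.asarray(vibration_window, float)
    rms = np.sqrt(np.mean(x ** 2))
    kurt = np.mean((x - x.mean()) ** 4) / (x.std() ** 4 + 1e-12)
    return 0.7 * rms + 0.3 * kurt

def alert_logic(hi_series, threshold=2.0, persistence=5):
    """Alert only after `persistence` consecutive windows exceed the threshold."""
    run = 0
    for hi in hi_series:
        run = run + 1 if hi > threshold else 0
        if run >= persistence:
            return True
    return False

rng = np.random.default_rng(1)
his = [health_indicator(rng.normal(0, 0.5 + 0.02 * k, 2048)) for k in range(60)]
print("alert:", alert_logic(his))   # slowly growing vibration energy -> alert
```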
Full article
Open Access Article
Cryptanalysis and Improvement of the SMEP-IoV Protocol: A Secure and Lightweight Protocol for Message Exchange in IoV Paradigm
by Gelare Oudi Ghadim, Parvin Rastegari, Mohammad Dakhilalian, Faramarz Hendessi, Shahrzad Saremi, Rania Shibl, Yassine Himeur, Shadi Atalla and Wathiq Mansoor
IoT 2026, 7(2), 31; https://doi.org/10.3390/iot7020031 - 31 Mar 2026
Abstract
The Internet of Vehicles (IoV) is a rapidly evolving technology that provides real-time connectivity, enhanced road safety, and reduced traffic congestion; however, its inherently open communication channels expose it to serious security and privacy threats. In 2021, Chaudhry proposed SMEP-IoV, a lightweight message authentication protocol designed to satisfy essential security requirements. This paper presents a comprehensive security analysis of SMEP-IoV and reveals several serious vulnerabilities. Specifically, sensitive credentials are stored in plaintext without tamper-resistant protection, and both authentication and session key derivation depend directly on these credentials. These structural flaws allow an adversary to extract the stored secrets, generate valid authentication messages, and derive the established session key, enabling vehicle impersonation and session key disclosure attacks. Moreover, compromise of long-term secrets facilitates key compromise impersonation attacks. It also fails to ensure anonymity and perfect forward secrecy. To address these issues, we propose an enhanced authentication protocol for resource-constrained IoV environments, leveraging a three-factor authentication mechanism combined with lightweight cryptographic primitives. Formal security analyses using BAN logic, Tamarin, and ProVerif confirm its resilience against known attacks, while NS-3 simulations validate its scalability, high throughput, and low End-to-End Delay (E2ED). The results highlight the protocol as a robust, efficient, and scalable solution for large-scale IoV deployments.
Full article
(This article belongs to the Special Issue Internet of Vehicles (IoV))
Open Access Article
Optimal Security Task Offloading in Cognitive IoT Networks: Provably Optimal Threshold Policies and Model-Free Learning
by Ning Wang and Yali Ren
IoT 2026, 7(2), 30; https://doi.org/10.3390/iot7020030 - 26 Mar 2026
Abstract
The proliferation of Internet of Things (IoT) devices has introduced significant security challenges. Resource-constrained devices face sophisticated threats but lack the computational capacity for advanced security analysis. This study investigates optimal security task allocation in Cognitive IoT (CIoT) networks. It specifically examines when IoT devices should process security tasks locally or offload them to Mobile Edge Computing (MEC) servers. The problem is formulated as a Continuous-Time Markov Decision Process (CTMDP). The study demonstrates that the optimal offloading policy has a threshold structure: security tasks are offloaded to MEC servers when the offloading queue length is below a critical threshold; otherwise, tasks are processed locally. This structural property is robust to changes in MEC server configurations and threat arrival patterns. It ensures an optimal and easily implementable security policy under the exponential model. Theoretical analysis establishes upper bounds on the performance of AI-based security controllers using the same models. The results also show that standard model-free Q-learning algorithms can recover optimal thresholds without any prior knowledge of the system parameters. Simulations across multiple reinforcement learning architectures, including Q-learning, State–Action–Reward–State–Action (SARSA), and Deep Q-Networks (DQN), confirm that all methods converge to the predicted threshold. This empirically validates the analytical findings. The threshold structure remains effective under practical imperfections such as imperfect sensing and parameter estimation errors. Systems maintain 85% to 93% of their optimal performance. This work extends threshold Markov Decision Process (MDP) analysis from classical queuing theory to the context of CIoT security offloading. It provides optimal and practical policies and model-free algorithms for use by resource-constrained devices.
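The threshold structure is easy to see from a learned Q-table. In the Python sketch below, rows are offloading-queue lengths and columns are the two actions; the greedy policy flips from offload to local exactly once. The Q-values and the name L* are illustrative, not taken from the paper.

```python
import numpy as np

# Q[queue_length][action]: action 0 = offload, action 1 = process locally.
Q = np.array([[9.1, 7.00], [8.4, 7.10], [7.6, 7.20],
              [7.3, 7.25], [6.9, 7.30], [6.2, 7.40]])
greedy = Q.argmax(axis=1)                  # greedy action per queue length
l_star = int(np.argmax(greedy == 1))       # first length where "local" wins
print(greedy, "-> critical threshold L* =", l_star)
# -> [0 0 0 0 1 1]: offload while the queue is shorter than L* = 4
```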
Full article
Open Access Article
EEDC: Energy-Efficient Distance-Controlled Clustering for Bottleneck Avoidance in Wireless Sensor Networks
by Ahmad Abuashour, Yahia Jazyah and Naser Zaeri
IoT 2026, 7(1), 29; https://doi.org/10.3390/iot7010029 - 15 Mar 2026
Abstract
Wireless Sensor Networks (WSNs) commonly employ clustering to improve scalability and energy efficiency; however, cluster heads (CHs) located near the base station (BS) often suffer from excessive relay traffic, leading to rapid energy depletion and reduced network lifetime. This article proposes an Energy-Efficient Distance-Controlled Clustering (EEDC) scheme that adjusts CH density and transmission power according to each node’s distance from the BS. In EEDC, a higher number of CHs is deployed near the BS to balance forwarding loads, while fewer CHs are selected in distant regions to conserve energy. Additionally, CHs adapt their transmission power to enable distance-proportional communication. A mathematical model is developed to analyze the relationship between CH distribution, transmission power, and overall energy consumption. Performance evaluation is conducted through simulations and compared with LEACH, HEED, DEEC, SEP, and EECS. The results show that EEDC improves the stability period by up to 42%, extends network lifetime by 23%, increases average residual energy by 13–29%, enhances throughput by 16–44%, and achieves 23–61% higher packet delivery efficiency. Moreover, cumulative CH energy consumption is reduced by 5–21%, leading to more balanced energy distribution. These findings indicate that distance-controlled CH selection and adaptive transmission power effectively alleviate the BS energy bottleneck and enhance the energy efficiency and operational longevity of clustered WSNs.
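The distance-proportional ideas above can be sketched with the standard first-order radio model, as in the Python snippet below; the energy constants are the usual textbook values, and the inverse-distance CH-probability weighting is an assumption standing in for EEDC's actual density rule.

```python
import math

E_ELEC = 50e-9        # J/bit, electronics energy
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = math.sqrt(EPS_FS / EPS_MP)            # crossover distance (~87.7 m)

def tx_energy(bits, d):
    """Distance-proportional transmission cost of the first-order model."""
    amp = EPS_FS * d ** 2 if d < D0 else EPS_MP * d ** 4
    return bits * (E_ELEC + amp)

def ch_probability(d_to_bs, p_base=0.10, d_max=200.0):
    """More CHs near the BS to share relay load, fewer far away to save energy."""
    return min(1.0, p_base * (2.0 - d_to_bs / d_max))

for d in (25, 100, 175):
    print(d, "m:", round(ch_probability(d), 3), f"{tx_energy(4000, d):.2e} J")
```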
Full article
Open Access Article
IoT-Assisted Hydroponic System for Andrographis paniculata: Enhanced Productivity and Pharmaceutical-Grade Quality
by Krit Funsian, Yaowarat Sirisathitkul, Pumiphat Khotchanakhen, Apiwit Bunta, Kanittha Srikwan, Kingkan Bunluepuech, Athakorn Promwee, Chih-Yi Chiu and Karanrat Thammarak
IoT 2026, 7(1), 28; https://doi.org/10.3390/iot7010028 - 10 Mar 2026
Abstract
This study presents an Internet of Things (IoT)-assisted semi-open hydroponic system for cultivating Andrographis paniculata under tropical conditions, aiming to enhance biomass productivity, andrographolide (AG) yield, and production efficiency. IoT-assisted hydroponics, non-IoT hydroponics, and soil-based cultivation were compared in 10 m² greenhouses. The IoT system enabled real-time monitoring and adaptive regulation of temperature, relative humidity, light intensity, nutrient solution pH, and electrical conductivity (EC). IoT-assisted hydroponics achieved earlier harvest (≈90 days) and the highest fresh biomass yield (0.409 ± 0.014 kg m⁻²) while maintaining per-plant productivity (15.74 ± 0.54 g plant⁻¹) comparable to soil-based cultivation. Andrographolide concentration reached 25.58 ± 3.36 mg g⁻¹ DW (2.56% w/w), meeting pharmacopeial requirements. Owing to stable environmental regulation and tolerance to high planting density, the IoT system produced the highest areal AG productivity (209.5 mg m⁻²), representing a four- to tenfold increase over the other systems. Despite higher operational costs, IoT-assisted hydroponics achieved the lowest AG unit cost (≈6.77 USD g⁻¹). While most previous studies emphasize tissue-level AG concentration, system-level productivity and cost efficiency under realistic cultivation conditions remain insufficiently explored. Overall, IoT-enabled semi-open hydroponics provides a scalable and economically viable approach for medicinal plant production, bridging the gap between open-field cultivation and fully controlled plant factory systems.
Full article
(This article belongs to the Topic Smart Farming 2.0: IoT and Edge AI for Precision Crop Management and Sustainability)
Open Access Article
Understanding Energy Efficiency of AI Deployments in IoT-Driven Smart Cities
by Salvatore Bramante, Filippo Ferrandino and Alessandro Cilardo
IoT 2026, 7(1), 27; https://doi.org/10.3390/iot7010027 - 8 Mar 2026
Abstract
The pervasive adoption of AI and AIoT applications at the network edge presents both opportunities and challenges for smart cities. With a focus on the energy efficiency of AI in urban environments, this paper provides a systematic comparative analysis of representative edge hardware platforms, i.e., embedded GPUs, FPGAs, and ultra-low-power microcontroller-/sensor-class devices, assessing their suitability for AI workloads in IoT-driven smart city infrastructures. The evaluation, based on direct characterization of diverse neural networks and relevant datasets, quantifies computational performance and energy behavior through inference latency, throughput, and energy-per-inference measurements. Across the evaluated network–board pairs, the measured inference power spans several orders of magnitude, ranging from 0.1–10 mW for ultra-low-power Intelligent Sensor Processing Units (ISPUs) up to 1–10 W for embedded GPUs, highlighting the wide design space between the least and most power-demanding configurations. Results indicate that embedded GPUs provide a favorable performance-to-power ratio for computationally intensive workloads, while MCU/ISPU-class solutions, despite throughput limitations, offer compelling advantages in ultra-low-power scenarios when combined with quantization and pruning, making them well-suited for distributed sensing and actuation typical of smart city deployments. Overall, this comparative analysis guides hardware selection for heterogeneous, sustainable AI-enabled urban services.
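Energy per inference is simply average power during inference multiplied by per-inference latency. The Python sketch below computes it for two hypothetical platforms whose numbers are illustrative, chosen only to span the ISPU-to-embedded-GPU power range reported above.

```python
platforms = {
    "ISPU-class sensor": {"power_w": 0.005, "latency_s": 0.020},
    "embedded GPU":      {"power_w": 7.0,   "latency_s": 0.002},
}
for name, p in platforms.items():
    energy_mj = p["power_w"] * p["latency_s"] * 1e3   # J -> mJ
    print(f"{name}: {energy_mj:.3f} mJ per inference")
# Low power does not automatically mean low energy: latency matters too.
```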
Full article
(This article belongs to the Special Issue IoT-Driven Smart Cities)
Open Access Article
Automated Framework for Testing Random Number Generators for IoT Security Applications Using NIST SP 800-22
by Juan Castillo, Pere Aran Vila, Francisco Palacio, Blas Garrido, Sergi Hernández and Albert Cirera
IoT 2026, 7(1), 26; https://doi.org/10.3390/iot7010026 - 7 Mar 2026
Abstract
The continuous expansion of the Internet of Things (IoT) has intensified the need to evaluate and guarantee the quality of entropy sources used in random number generation, an essential element in securing communications in IoT ecosystems. This work presents an automated, web-based framework designed to execute and analyze the results of the statistical tests defined in the NIST SP 800-22 standard, enabling systematic assessment of entropy sources and random number generators in IoT devices and environments. The proposed system integrates a Python-based backend built upon an optimized implementation of the original NIST suite, along with an intuitive web interface that facilitates configuration, monitoring, and parallel execution of tests through Representational State Transfer (REST) endpoints. Session management based on Redis ensures reliable and concurrent operation for multiple users or devices while maintaining isolation and data integrity. To demonstrate its applicability, an emulated IoT ecosystem was implemented in which multiple virtual devices periodically and asynchronously request real-time validation of their local random number generators. The obtained results confirm the system’s capability to detect deficiencies in pseudorandom generators and validate true random number sources, highlighting its potential as a diagnostic and verification tool for distributed IoT security systems. The tool developed in this work is fully accessible to the public, allowing researchers, engineers, and practitioners to evaluate random number generators without requiring specialized hardware or proprietary software.
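As a flavor of what the suite checks, the Python sketch below implements the simplest NIST SP 800-22 test, the frequency (monobit) test; the real framework runs the full suite through its REST backend, and this stand-alone version is only a minimal illustration.

```python
import math
import secrets

def monobit_p_value(bits):
    """NIST SP 800-22 frequency test: p >= 0.01 is consistent with randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)        # +1/-1 partial sum
    return math.erfc(abs(s) / math.sqrt(2 * n))

# 10^6 bits drawn from the OS entropy source as a stand-in generator under test.
sample = [(byte >> i) & 1 for byte in secrets.token_bytes(125_000) for i in range(8)]
print("monobit p-value:", monobit_p_value(sample))
```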
Full article
(This article belongs to the Topic Privacy Challenges and Solutions in the Internet of Things)
Open Access Article
A Foundational Edge-AI Sensing Framework for Occupancy-Driven Energy Management in SMOs
by Yutong Chen, Daisuke Sumiyoshi, Xiangyu Wang, Takahiro Yamamoto, Takahiro Ueno and Jewon Oh
IoT 2026, 7(1), 25; https://doi.org/10.3390/iot7010025 - 5 Mar 2026
Abstract
Occupant presence is a primary driver of Heating, Ventilation, and Air Conditioning (HVAC) and lighting energy consumption in office environments. Existing occupancy-sensing solutions often rely on privacy-sensitive modalities or require costly infrastructure, limiting their applicability in Small and Medium Offices (SMOs). To address these limitations, this study proposes a lightweight CSI-based occupancy-sensing framework built on a dual-core ESP32-S3 architecture, enabling concurrent CSI processing, environmental sensing, and cloud communication. A multi-stage signal preprocessing pipeline compresses raw CSI streams into a compact statistical feature matrix, achieving 98.86% classification accuracy for multi-level occupancy estimation. Compared with image-based baselines such as DenseNet121, the proposed approach reduces the input data size to 24 kB and the model parameters to 138 K, yielding over a 129× reduction in transmission volume without sacrificing performance. These results demonstrate that the proposed framework provides a practical, privacy-preserving, and edge-deployable solution for occupancy-aware energy management in SMOs.
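The compression step can be pictured as reducing a raw CSI window to a few statistics per subcarrier. The Python sketch below shows one plausible reading of that idea; the specific statistics and window shape are assumptions consistent with the abstract, not the paper's pipeline.

```python
import numpy as np

def csi_feature_matrix(csi_window):
    """Compress a (packets x subcarriers) CSI window to per-subcarrier stats."""
    amp = np.abs(np.asarray(csi_window))         # amplitude per subcarrier
    q75, q25 = np.percentile(amp, [75, 25], axis=0)
    feats = np.stack([amp.mean(axis=0), amp.std(axis=0),
                      amp.max(axis=0) - amp.min(axis=0),   # range
                      q75 - q25])                           # interquartile range
    return feats.astype(np.float32)              # shape (4, n_subcarriers)

rng = np.random.default_rng(0)
window = rng.normal(size=(200, 52)) + 1j * rng.normal(size=(200, 52))
print(csi_feature_matrix(window).shape, csi_feature_matrix(window).nbytes, "bytes")
```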
Full article
(This article belongs to the Special Issue IoT Meets AI: Driving the Next Generation of Technology)
Open Access Feature Paper Article
A Novel Hybrid Opcode Feature Selection Framework for Efficient and Effective IoT Malware Detection
by Bakhan Tofiq Ahmed, Noor Ghazi M. Jameel and Bakhtiar Ibrahim Saeed
IoT 2026, 7(1), 24; https://doi.org/10.3390/iot7010024 - 2 Mar 2026
Abstract
Malware’s proliferation in the Internet of Things (IoT) ecosystem requires precise, efficient detection systems capable of operating on IoT devices. Existing static analysis approaches often fail due to computational inefficiency stemming from the high feature dimensionality inherent in raw opcode features. This research addresses this limitation by proposing a novel machine-learning (ML)-driven Intelligent Hybrid Feature Selection (IHFS) framework with two distinct architectures. IHFS1 combines a filter method (variance threshold) with an embedded method (LGBM feature importance). Conversely, IHFS2 integrates variance thresholding with a wrapper method (Recursive Feature Elimination with Cross-Validation using LGBM) for optimal selection. This framework is specifically designed to select an optimally stable and minimal feature subset from the initial 1183-dimensional opcode frequency vector extracted from ARM binaries. Applying this framework to a multi-family IoT malware dataset, the IHFS architectures yielded distinct and highly efficient feature subsets: IHFS1 achieved a 95.77% reduction (to 50 features), while IHFS2 attained a 98.06% reduction (to 23 features). Evaluation across eight ML models confirmed that the Random Forest (with the IHFS1 subset) and Decision Tree (with the IHFS2 subset) classifiers were the best performing, achieving robust classification metrics that outperform current state-of-the-art solutions. The Decision Tree model demonstrated exceptional detection capabilities, with an accuracy of 99.87%, a precision of 99.82%, a recall of 99.88%, and an F1-score of 99.85%. It achieved an average inference time of 0.058 ms per sample. Experimental results attained on a native ARM64 environment validate the deployment feasibility of the proposed system for resource-constrained IoT devices, such as the Raspberry Pi. The proposed system achieves a high-throughput, low-overhead security posture while maintaining host operational stability, processing a single ELF binary in just 3.431 ms.
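A rough scikit-learn sketch of the two hybrid pipelines follows: variance filtering, then either importance-based selection (IHFS1-style) or RFECV (IHFS2-style). The paper uses LightGBM; a gradient-boosted tree from scikit-learn stands in here to keep the sketch dependency-free, and k and the thresholds are assumptions.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV, VarianceThreshold

def ihfs1_style(X, y, k=50, var_thresh=0.0):
    """Filter (variance threshold) then embedded (feature importance) selection."""
    X_f = VarianceThreshold(var_thresh).fit_transform(X)
    gbm = GradientBoostingClassifier().fit(X_f, y)
    top = gbm.feature_importances_.argsort()[::-1][:k]
    return X_f[:, top]

def ihfs2_style(X, y, var_thresh=0.0):
    """Filter (variance threshold) then wrapper (RFECV) selection."""
    X_f = VarianceThreshold(var_thresh).fit_transform(X)
    return RFECV(GradientBoostingClassifier(), cv=5).fit_transform(X_f, y)
```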
Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
Open Access Review
Edge AI for SD-IoT: A Systematic Review on Scalability and Latency
by Ernando P. Batista, Jr., Alex Santos, Maycon Peixoto, Gustavo Figueiredo and Cassio Prazeres
IoT 2026, 7(1), 23; https://doi.org/10.3390/iot7010023 - 27 Feb 2026
Abstract
The growing demand for IoT applications in highly dynamic environments with many connected devices introduces significant scalability and low-latency challenges. In the context of software-defined networking (SDN) integrated with Edge environments, adopting machine learning (ML) techniques has emerged as a promising approach to meet these requirements. This study presents a Systematic Literature Review (SLR) that identifies and analyzes ML-based solutions applied to Software-Defined Internet of Things (SD-IoT) infrastructures in Edge environments, with an emphasis on improving latency and scalability. The review followed established methodological best practices, including clearly defined research questions, explicit inclusion and exclusion criteria, a structured search protocol, and searches across multiple scientific databases. Based on the analysis of the selected studies, the main strategies employed to enhance network performance are categorized, the fidelity and complexity of the experimental environments used are assessed, and the realism and applicability of the proposed solutions are discussed. Furthermore, drawing from the context of the selected studies, the most recurrent ML approaches are presented, including supervised, unsupervised, and reinforcement learning methods, along with a discussion of their advantages and limitations in dynamic network scenarios. By compiling and organizing the contributions from the literature, this paper provides a comprehensive overview of the state of the art in applying ML to SD-IoT networks, shedding light on current trends, existing gaps, and research opportunities aimed at building more intelligent and adaptable solutions for IoT environments.
Full article
(This article belongs to the Special Issue IoT Meets AI: Driving the Next Generation of Technology)
Open Access Article
Ensemble Machine Learning Approach for Traffic Congestion and Travel Time Prediction in Urban Bus Rapid Transit Systems: A Case Study of Trans Metro Bandung
by Rendy Munadi, Dadan Nur Ramadan, Sussi, Nurwulan Fitriyanti and Hilal H. Nuha
IoT 2026, 7(1), 22; https://doi.org/10.3390/iot7010022 - 27 Feb 2026
Abstract
Traffic congestion and travel time uncertainty remain major challenges to the operational efficiency of Bus Rapid Transit (BRT) systems in urban areas of developing countries. This study proposes an integrated solution for the Trans Metro Bandung (TMB) system by leveraging Internet of Things (IoT)–based GPS data and tree-based ensemble machine learning algorithms. Spatio-temporal data collected from on-board GPS modules are processed to predict traffic congestion levels and estimate travel time across route segments. The performance of Decision Tree, Random Forest, and XGBoost models is evaluated in terms of prediction accuracy, interpretability, and computational efficiency, with particular consideration for deployment on resource-constrained hardware. Experiments conducted on 20,156 data samples show that the Decision Tree model achieves the highest congestion classification accuracy of 96.8%, while Random Forest outperforms the other models in travel time regression, achieving an R² value of 0.95 and a root mean square error (RMSE) of 5.80 min. The trained models are successfully deployed on a Raspberry Pi 3B single-board computer for real-time inference, enabling fleet management and travel planning without reliance on cloud connectivity. The results demonstrate that cost-effective and interpretable machine learning solutions can deliver reliable performance in heterogeneous urban infrastructures while providing a replicable framework for medium-sized cities seeking to implement affordable smart transportation systems.
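A minimal scikit-learn sketch of the two-model setup follows: a Decision Tree for the congestion class and a Random Forest for travel-time regression, both light enough for a Raspberry Pi-class device. The feature columns and hyperparameters are assumptions based on the abstract.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["hour", "day_of_week", "segment_id", "speed_kmh", "heading"]
congestion_clf = DecisionTreeClassifier(max_depth=12)
travel_time_reg = RandomForestRegressor(n_estimators=100, n_jobs=-1)

def train(X, y_congestion, y_travel_minutes):
    """Fit both models on GPS-derived spatio-temporal features."""
    congestion_clf.fit(X, y_congestion)
    travel_time_reg.fit(X, y_travel_minutes)

def predict(x_row):
    """Return (congestion_level, travel_time_minutes) for one segment."""
    return (congestion_clf.predict([x_row])[0],
            travel_time_reg.predict([x_row])[0])
```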
Full article
(This article belongs to the Special Issue IoT-Driven Smart Cities)
Open Access Systematic Review
Performance Trade-Offs in Multi-Tenant IoT–Cloud Security: A Systematic Review of Emerging Technologies
by Bader Alobaywi, Mohammed G. Almutairi and Frederick T. Sheldon
IoT 2026, 7(1), 21; https://doi.org/10.3390/iot7010021 - 22 Feb 2026
Abstract
Multi-tenancy is essential for scalable IoT–Cloud systems; however, it introduces complex security vulnerabilities at the intersection of shared cloud infrastructures and resource-constrained IoT environments. This systematic review evaluates next-generation security frameworks designed to enforce tenant isolation without violating the strict latency (<10 ms) and energy bounds of lightweight sensors. Adhering to PRISMA guidelines, we analyze selected high-quality studies to categorize intersectional threats, including cross-tenant data leakage, side-channel attacks, and privilege escalation. Our analysis identifies a critical, unresolved conflict: existing mitigation strategies often incur a 12% computational and communication overhead, creating a significant barrier for real-time applications. Furthermore, we critically analyze emerging technologies, including Zero Trust Architectures (ZTA), adaptive Artificial Intelligence (AI), blockchain, and Post-Quantum Cryptography (PQC). We find that direct PQC deployment is currently infeasible for LPWAN protocols due to key-size constraints (1.6 KB) that exceed typical payload limits. To address these challenges, we propose a novel multi-layer security design principle that offloads heavy isolation and cryptographic workloads to hardware-accelerated edge gateways, thereby maintaining tenant isolation without compromising real-time performance. Finally, this review serves as a roadmap for future research, highlighting federated learning and hardware enclaves as essential pathways for securing next-generation multi-tenant IoT ecosystems.
Full article
Open Access Article
A Layered Architecture for Concurrent CSI-Based Applications in Smart Environments
by Shervin Mehryar
IoT 2026, 7(1), 20; https://doi.org/10.3390/iot7010020 - 17 Feb 2026
Abstract
The prevalence of radio frequency signals in indoor environments has in recent years given rise to new technologies across many domains such as robotics, healthcare, and surveillance. Radio frequency signals propagate in the wireless medium through multiple paths and carry useful environment-dependent information. Capturing and analyzing these signal patterns can offer new solutions for a number of applications relevant to ranging, tracking, perception, and recognition. In this work we propose a novel architecture, separating the physical, backbone network, and inference layers, towards fully ubiquitous passive recognition systems that scale with the number of environments and applications. We propose a backbone architecture that utilizes a novel Cross Dual-Path Attention (CDPA) block to capture spatial and temporal correlations from Channel State Information (CSI) for device-free, multi-task applications. Subsequently, a distill-and-transfer algorithm is proposed to generalize the inference capabilities of CDPA over multiple target environments for scalable training and reduced computational costs. By sharing knowledge between models across a shared network, experimentation shows that edge devices can be deployed with improved performance while simultaneously meeting strict computation and memory requirements. Our distributed learning paradigm demonstrates that CDPA-based models are capable of using passive signals in a non-intrusive and privacy-protecting manner, in order to achieve ubiquitous recognition at scale in smart environments.
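One plausible reading of a cross dual-path attention block, attending once along the temporal axis and once along the subcarrier (spatial) axis of a CSI tensor and then fusing the two paths, is sketched in PyTorch below; this is an illustrative interpretation of the abstract, not the authors' exact CDPA design.

```python
import torch
import torch.nn as nn

class CrossDualPathAttention(nn.Module):
    """Attend over time steps and over subcarriers, then fuse both paths."""
    def __init__(self, n_time, n_sub, heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(n_sub, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(n_time, heads, batch_first=True)
        self.fuse = nn.Linear(2 * n_sub, n_sub)

    def forward(self, x):                      # x: (batch, time, subcarriers)
        t_out, _ = self.temporal(x, x, x)      # tokens = time steps
        xs = x.transpose(1, 2)                 # (batch, subcarriers, time)
        s_out, _ = self.spatial(xs, xs, xs)    # tokens = subcarriers
        s_out = s_out.transpose(1, 2)          # back to (batch, time, subcarriers)
        return self.fuse(torch.cat([t_out, s_out], dim=-1))

block = CrossDualPathAttention(n_time=200, n_sub=52)
print(block(torch.randn(8, 200, 52)).shape)    # torch.Size([8, 200, 52])
```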
Full article
(This article belongs to the Topic Federated Edge Intelligence for Next Generation AI Systems)
Topics
Topic in Electronics, Future Internet, Information, JSAN, Sensors, IoT
Privacy Challenges and Solutions in the Internet of Things
Topic Editors: Abdul Majeed, Safiullah Khan. Deadline: 30 June 2026
Topic in Applied Sciences, Electronics, IoT, Materials, Robotics, Sensors, Machines, Automation
Smart Production in Terms of Industry 4.0 and 5.0
Topic Editors: Iwona Paprocka, Cezary Grabowik, Jozef Husar. Deadline: 29 July 2026
Topic in Computers, Electronics, Future Internet, IoT, Network, Sensors, JSAN, Technologies, BDCC
Challenges and Future Trends of Wireless Networks
Topic Editors: Stefano Scanzio, Ramez Daoud, Jetmir Haxhibeqiri, Pedro Santos. Deadline: 30 September 2026
Topic in AI, Applied Sciences, Computers, Electronics, Entropy, Future Internet, Information, IoT, Sensors, Telecom
Advances in Sixth Generation and Beyond (6G&B)
Topic Editors: Luis Javier García Villalba, Ana Lucila Sandoval Orozco. Deadline: 31 October 2026
Special Issues
Special Issue in IoT
Advances in Wireless Communication Technologies for IoT Devices
Guest Editors: Long Zhao, Rui Chen, Jie Mei. Deadline: 30 May 2026
Special Issue in IoT
Enabling Intelligent and Scalable IoT Communications via LEO Satellite Networks
Guest Editors: Ala Arman, Zahra Ziran, Oras Baker. Deadline: 31 May 2026
Special Issue in IoT
IoT in Healthcare and Digital Health: IoT Solutions for Real-Time Health Monitoring
Guest Editors: Ana Margarida Mota, Sofia Rita Fernandes. Deadline: 30 June 2026
Special Issue in IoT
IoT-Based Assistive Technologies and Platforms for Healthcare
Guest Editors: Mahmoud Abouyoussef, Mohamed Ibrahem. Deadline: 30 June 2026