Search Results (4,238)

Search Parameters:
Keywords = Internet of Things (IoT) network

32 pages, 3734 KB  
Article
A Hierarchical Framework Leveraging IIoT Networks, IoT Hub, and Device Twins for Intelligent Industrial Automation
by Cornelia Ionela Bădoi, Bilge Kartal Çetin, Kamil Çetin, Çağdaş Karataş, Mehmet Erdal Özbek and Savaş Şahin
Appl. Sci. 2026, 16(2), 645; https://doi.org/10.3390/app16020645 - 8 Jan 2026
Abstract
Industrial Internet of Things (IIoT) networks, Microsoft Azure Internet of Things (IoT) Hub, and device twins (DvT) are increasingly recognized as core enablers of adaptive, data-driven manufacturing. This paper proposes a hierarchical IIoT framework that integrates industrial IoT networking, DvT for asset-level virtualisation, system-level digital twins (DT) for cell orchestration, and cloud-native services to support the digital transformation of brownfield, programmable logic controller (PLC)-centric modular automation (MA) environments. Traditional PLC/supervisory control and data acquisition (SCADA) paradigms struggle to meet interoperability, observability, and adaptability requirements at scale, motivating architectures in which DvT and IoT Hub underpin real-time orchestration, virtualisation, and predictive-maintenance workflows. Building on and extending a previously introduced conceptual model, the present work instantiates a multilayered, end-to-end design that combines a federated Message Queuing Telemetry Transport (MQTT) mesh on the on-premises side, a ZigBee-based backup mesh, and a secure bridge to Azure IoT Hub, together with a systematic DvT modelling and orchestration strategy. The methodology is supported by a structured analysis of relevant IIoT and DvT design choices and by a concrete implementation in a nine-cell MA laboratory featuring a robotic arm predictive-maintenance scenario. The resulting framework sustains closed-loop monitoring, anomaly detection, and control under realistic workloads, while providing explicit envelopes for telemetry volume, buffering depth, and latency budgets in edge-cloud integration. 
Overall, the proposed architecture offers a transferable blueprint for evolving PLC-centric automation toward more adaptive, secure, and scalable IIoT systems and establishes a foundation for future extensions toward full DvT ecosystems, tighter artificial intelligence/machine learning (AI/ML) integration, and fifth/sixth generation (5G/6G) and time-sensitive networking (TSN) support in industrial networks. Full article
(This article belongs to the Special Issue Novel Technologies of Smart Manufacturing)
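The bridge from the on-premises MQTT mesh to Azure IoT Hub device twins described in the abstract above can be sketched as building a "reported properties" patch for a cell's device twin. A minimal illustration, assuming Azure IoT Hub's documented MQTT twin topic; the field names (`cell_id`, `spindle_temp_c`, `vibration_rms`) are hypothetical placeholders, not the paper's telemetry schema:

```python
import json

def make_twin_patch(cell_id: str, telemetry: dict, request_id: int):
    """Build the MQTT topic and JSON body for a device-twin reported-properties update."""
    # Topic shape follows Azure IoT Hub's MQTT convention for twin patches.
    topic = f"$iothub/twin/PATCH/properties/reported/?$rid={request_id}"
    payload = json.dumps({"cell": cell_id, **telemetry})
    return topic, payload

# Illustrative telemetry from one cell of the nine-cell MA laboratory.
topic, payload = make_twin_patch("cell-07", {"spindle_temp_c": 41.2, "vibration_rms": 0.8}, 1)
```

A real deployment would publish this over the secure bridge with an MQTT client; the sketch only shows the payload construction.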
34 pages, 852 KB  
Article
The Vehicle Routing Problem with Time Window and Randomness in Demands, Travel, and Unloading Times
by Gilberto Pérez-Lechuga and Francisco Venegas-Martínez
Logistics 2026, 10(1), 13; https://doi.org/10.3390/logistics10010013 - 7 Jan 2026
Abstract
Background: The vehicle routing problem (VRP) is of great importance in the Industry 4.0 era because enabling technologies such as the internet of things (IoT), artificial intelligence (AI), big data, and geographic information systems (GISs) allow for real-time solutions to versions of the problem, adapting to changing conditions such as traffic or fluctuating demand. Methods: In this paper, we model and optimize a classic multi-link distribution network topology, including randomness in travel times, vehicle availability times, and product demands, using a hybrid approach of nested linear stochastic programming and Monte Carlo simulation under a time-window scheme. The proposed solution is compared with cutting-edge metaheuristics such as Ant Colony Optimization (ACO), Tabu Search (TS), and Simulated Annealing (SA). Results: The results suggest that the proposed method is computationally efficient and scalable to large models, although convergence and accuracy are strongly influenced by the probability distributions used. Conclusions: The developed proposal constitutes a viable alternative for solving real-world, large-scale modeling cases for transportation management in the supply chain. Full article
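The Monte Carlo layer of the hybrid approach above can be illustrated by estimating the probability that a fixed route meets its time window when travel and unloading times are random. A minimal sketch with illustrative exponential distributions and made-up leg parameters, not the distributions calibrated in the paper:

```python
import random

def on_time_probability(legs, deadline, n_samples=10_000, seed=42):
    """Estimate P(route finishes by deadline); legs = [(mean_travel, mean_unload), ...]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        total = 0.0
        for mean_travel, mean_unload in legs:
            total += rng.expovariate(1.0 / mean_travel)  # random travel time for this leg
            total += rng.expovariate(1.0 / mean_unload)  # random unloading time at the stop
        hits += total <= deadline
    return hits / n_samples

# Three-stop route: expected total is 98 time units, window is 150.
p = on_time_probability([(30, 10), (20, 5), (25, 8)], deadline=150)
```

In a full VRP solver this estimator would sit inside the route-selection loop, scoring candidate routes against their time windows.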
22 pages, 1021 KB  
Article
A Multiclass Machine Learning Framework for Detecting Routing Attacks in RPL-Based IoT Networks Using a Novel Simulation-Driven Dataset
by Niharika Panda and Supriya Muthuraman
Future Internet 2026, 18(1), 35; https://doi.org/10.3390/fi18010035 - 7 Jan 2026
Abstract
The use of resource-constrained Low-Power and Lossy Networks (LLNs), where the IPv6 Routing Protocol for LLNs (RPL) is the de facto routing standard, has increased due to the Internet of Things’ (IoT) explosive growth. Because of the dynamic nature of IoT deployments and the lack of in-protocol security, RPL is still quite susceptible to routing-layer attacks like Blackhole, Lowered Rank, version number manipulation, and Flooding despite its lightweight architecture. Lightweight, data-driven intrusion detection methods are necessary since traditional cryptographic countermeasures are frequently unfeasible for LLNs. However, the lack of RPL-specific control-plane semantics in current cybersecurity datasets restricts the use of machine learning (ML) for practical anomaly identification. In order to close this gap, this work models both static and mobile networks under benign and adversarial settings by creating a novel, large-scale multiclass RPL attack dataset using Contiki-NG’s Cooja simulator. To record detailed packet-level and control-plane activity including DODAG Information Object (DIO), DODAG Information Solicitation (DIS), and Destination Advertisement Object (DAO) message statistics along with forwarding and dropping patterns and objective-function fluctuations, a protocol-aware feature extraction pipeline is developed. This dataset is used to evaluate fifteen classifiers, including Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), k-Nearest Neighbors (KNN), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), AdaBoost (AB), and XGBoost (XGB) and several ensemble strategies like soft/hard voting, stacking, and bagging, as part of a comprehensive ML-based detection system. Numerous tests show that ensemble approaches offer better generalization and prediction performance. 
With overfitting gaps less than 0.006 and low cross-validation variance, the Soft Voting Classifier achieves the highest accuracy, 99.47%, closely followed by XGBoost at 99.45% and Random Forest at 99.44%. Full article
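The soft-voting rule that tops the comparison above is simply an average of each base classifier's class-probability vector followed by an argmax. A self-contained sketch; the three probability vectors are invented stand-ins for outputs of classifiers like RF and XGB over the five traffic classes:

```python
def soft_vote(prob_vectors):
    """Average per-class probabilities across classifiers; return (winning class, averages)."""
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / len(prob_vectors) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical class order: [benign, Blackhole, Lowered Rank, version manipulation, Flooding]
winner, avg = soft_vote([
    [0.1, 0.6, 0.1, 0.1, 0.1],    # classifier 1
    [0.2, 0.5, 0.1, 0.1, 0.1],    # classifier 2
    [0.1, 0.7, 0.1, 0.05, 0.05],  # classifier 3
])
```

All three members here agree on class 1 (Blackhole), so the averaged vote selects it; hard voting would tally the argmaxes instead of averaging probabilities.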
23 pages, 3750 KB  
Article
Lightweight Frame Format for Interoperability in Wireless Sensor Networks of IoT-Based Smart Systems
by Samer Jaloudi
Future Internet 2026, 18(1), 33; https://doi.org/10.3390/fi18010033 - 7 Jan 2026
Abstract
Applications of smart cities, smart buildings, smart agriculture systems, smart grids, and other smart systems benefit from Internet of Things (IoT) protocols, networks, and architecture. Wireless Sensor Networks (WSNs) in smart systems that employ IoT use wireless communication technologies between sensors in the Things layer and the Fog layer hub. Such wireless protocols and networks include WiFi, Bluetooth, and Zigbee, among others. However, the payload formats of these protocols are heterogeneous, and thus, they lack a unified frame format that ensures interoperability. In this paper, a lightweight, interoperable frame format for low-rate, small-size WSNs in IoT-based systems is designed, implemented, and tested. The practicality of this system is underscored by the development of a gateway that transfers collected data from sensors that use the unified frame to online servers via Message Queuing Telemetry Transport (MQTT) secured with Transport Layer Security (TLS), ensuring interoperability using the JavaScript Object Notation (JSON) format. The proposed frame is tested using market-available technologies such as Bluetooth and Zigbee, and then applied to smart home applications. The smart home scenario is chosen because it encompasses various smart subsystems, such as healthcare monitoring systems, energy monitoring systems, and entertainment systems, among others. The proposed system offers several advantages, including a low-cost architecture, ease of setup, improved interoperability, high flexibility, and a lightweight frame that can be applied to other wireless-based smart systems and applications. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
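The pattern described above, a compact binary frame on the sensor side translated to JSON at the gateway, can be sketched with a fixed-layout header. The field layout below (version, sensor type, sensor id, reading) is an assumption for illustration, not the frame defined in the paper:

```python
import json
import struct

# Hypothetical 8-byte frame: version (1 B), sensor type (1 B),
# sensor id (2 B), reading as float32 (4 B), all big-endian.
FRAME_FMT = ">BBHf"

def pack_frame(version: int, stype: int, sid: int, value: float) -> bytes:
    """Sensor side: pack one reading into the unified lightweight frame."""
    return struct.pack(FRAME_FMT, version, stype, sid, value)

def frame_to_json(frame: bytes) -> str:
    """Gateway side: unpack the frame and re-encode it as JSON for MQTT/TLS transport."""
    version, stype, sid, value = struct.unpack(FRAME_FMT, frame)
    return json.dumps({"v": version, "type": stype, "id": sid, "value": round(value, 2)})

frame = pack_frame(1, 3, 258, 21.5)   # e.g. a temperature reading
msg = frame_to_json(frame)
```

The same 8-byte payload can ride inside a Bluetooth, Zigbee, or WiFi transport; only the gateway needs to know the layout.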
46 pages, 2455 KB  
Systematic Review
Performance Analysis of Explainable Deep Learning-Based Intrusion Detection Systems for IoT Networks: A Systematic Review
by Taiwo Blessing Ogunseyi, Gogulakrishan Thiyagarajan, Honggang He, Vinay Bist and Zhengcong Du
Sensors 2026, 26(2), 363; https://doi.org/10.3390/s26020363 - 6 Jan 2026
Abstract
The opaque nature of black-box deep learning (DL) models poses significant challenges for intrusion detection systems (IDSs) in Internet of Things (IoT) networks, where transparency, trust, and operational reliability are critical. Although explainable artificial intelligence (XAI) has been increasingly adopted to enhance interpretability, its impact on detection performance and computational efficiency in resource-constrained IoT environments remains insufficiently understood. This systematic review investigates the performance of an explainable deep learning-based IDS for IoT networks by analyzing trade-offs among detection accuracy, computational overhead, and explanation quality. Following the PRISMA methodology, 129 peer-reviewed studies published between 2018 and 2025 are systematically analyzed to address key research questions related to XAI technique trade-offs, deep learning architecture performance, post-deployment XAI evaluation practices, and deployment bottlenecks. The findings reveal a pronounced imbalance in existing approaches, where high detection accuracy is often achieved at the expense of computational efficiency and rigorous explainability evaluation, limiting practical deployment on IoT edge devices. To address these gaps, this review proposes two conceptual contributions: (i) an XAI evaluation framework that standardizes post-deployment evaluation categories for explainability, and (ii) the Unified Explainable IDS Evaluation Framework (UXIEF), which models the fundamental trilemma between detection performance, resource efficiency, and explanation quality in IoT IDSs. By systematically highlighting performance–efficiency gaps, methodological shortcomings, and practical deployment challenges, this review provides a structured foundation and actionable insights for the development of trustworthy, efficient, and deployable explainable IDS solutions in IoT ecosystems. Full article
22 pages, 1308 KB  
Article
From Edge Transformer to IoT Decisions: Offloaded Embeddings for Lightweight Intrusion Detection
by Frédéric Adjewa, Moez Esseghir and Leïla Merghem-Boulahia
Sensors 2026, 26(2), 356; https://doi.org/10.3390/s26020356 - 6 Jan 2026
Abstract
The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) is enabling a new class of intelligent applications. Specifically, Large Language Models (LLMs) are emerging as powerful tools not only for natural language understanding but also for enhancing IoT security. However, the integration of these computationally intensive models into resource-constrained IoT environments presents significant challenges. This paper provides an in-depth examination of how LLMs can be adapted to secure IoT ecosystems. We identify key application areas, discuss major challenges, and propose optimization strategies for resource-limited settings. Our primary contribution is a novel collaborative embeddings offloading mechanism for IoT intrusion detection named SEED (Semantic Embeddings for Efficient Detection). This system leverages a lightweight, fine-tuned BERT model, chosen for its proven contextual and semantic understanding of sequences, to generate rich network embeddings at the edge. A compact neural network deployed on the end-device then queries these embeddings to assess network flow normality. This architecture alleviates the computational burden of running a full transformer on the device while capitalizing on its analytical performance. Our optimized BERT model is reduced by approximately 90% from its original size, now representing approximately 41 MB, suitable for the Edge. The resulting compact neural network is a mere 137 KB, appropriate for the IoT devices. This system achieves 99.9% detection accuracy with an average inference time of under 70 ms on a standard CPU. Finally, the paper discusses the ethical implications of LLM-IoT integration and evaluates the resilience of LLMs in dynamic and adversarial environments. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
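The split described above, a heavy encoder at the edge and a 137 KB head on the device, can be illustrated with the device side reduced to a single logistic layer over an offloaded embedding. The embedding values and weights below are invented placeholders, not SEED's trained parameters:

```python
import math

def device_score(embedding, weights, bias):
    """Tiny end-device head: logistic score of flow normality from an edge-produced embedding."""
    z = sum(e * w for e, w in zip(embedding, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# In SEED the edge-side fine-tuned BERT would produce this vector; here it is a stand-in.
embedding = [0.4, -1.2, 0.7, 0.1]
weights   = [0.9, -0.5, 1.1, 0.0]   # hypothetical on-device classifier weights
score = device_score(embedding, weights, bias=-0.2)
is_normal = score >= 0.5            # flow-normality decision on the constrained device
```

The design point is that the device never runs the transformer: it only multiplies a short vector by a small weight matrix, which is what keeps the on-device footprint in the hundreds of kilobytes.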
23 pages, 1409 KB  
Article
Rotational Triboelectric Energy Harvester Utilizing Date-Seed Waste as Tribopositive Layer
by Haider Jaafar Chilabi, Luqman Chuah Abdullah, Waleed Al-Ashtari, Azizan As’arry, Hanim Salleh and Eris E. Supeni
Micro 2026, 6(1), 3; https://doi.org/10.3390/micro6010003 - 5 Jan 2026
Abstract
The growing need for self-powered Internet of Things networks has raised interest in converting abundant waste into reliable energy harvesters despite long-standing material and technology challenges. As demand for environmentally friendly self-powered IoT devices continues to rise, attention toward green waste as an eco-friendly energy source has strengthened. However, its direct utilisation in high-performance energy harvesters remains a significant challenge. Driven by the growing need for renewable sources, the triboelectric nanogenerator has emerged as an innovative technology for converting mechanical energy into electricity. In this work, the design, fabrication, and characterisation of a rotating triboelectric energy harvester as a prototype device employing date seed waste as the tribopositive layer are presented. The date seed particles, measuring 1.2 to 2 mm, were pulverised using a grinder, mixed with epoxy resin, and subsequently applied to the grating-disc structure. The coated surface was machined on a lathe to provide a smooth facing surface. The performance of the prototype was evaluated through a series of experiments to examine the effects of rotational speed, the number of grating-disc structures, the epoxy mixing process, and the prototype's influence on the primary system, as well as to determine the optimal power output. An increase in rotational speed (RPM) enhanced power generation. Furthermore, increasing the number of gratings and pre-mixing of epoxy with the biomaterial resulted in enhanced output power. Additionally, with 10 gratings, operating at 1500 rpm, and a 24 h pre-mixing method, the harvester achieved maximum voltage and power outputs of 129 V and 1183 μW at 7 MΩ. Full article
40 pages, 2940 KB  
Article
Hybrid GNN–LSTM Architecture for Probabilistic IoT Botnet Detection with Calibrated Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Yelena Bakhtiyarova, Damelya Yeskendirova, Kanibek Sansyzbay, Askar Sysoyev and Oleksandr Kruchinin
Computers 2026, 15(1), 26; https://doi.org/10.3390/computers15010026 - 5 Jan 2026
Abstract
Detecting botnets in IoT environments is difficult because most intrusion detection systems treat network events as independent observations. In practice, infections spread through device relationships and evolve through distinct temporal phases. A system that ignores either aspect will miss important patterns. This paper explores a hybrid architecture combining Graph Neural Networks with Long Short-Term Memory networks to capture both structural and temporal dynamics. The GNN component models behavioral similarity between traffic flows in feature space, while the LSTM tracks how patterns change as attacks progress. The two components are trained jointly so that relational context is preserved during temporal learning. We evaluated the approach on two datasets with different characteristics. N-BaIoT contains traffic from nine devices infected with Mirai and BASHLITE, while CICIoT2023 covers 105 devices across 33 attack types. On N-BaIoT, the model achieved 99.88% accuracy with F1 of 0.9988 and Brier score of 0.0015. Cross-validation on CICIoT2023 yielded 99.73% accuracy with Brier score of 0.0030. The low Brier scores suggest that probability outputs are reasonably well calibrated for risk-based decision making. Consistent performance across both datasets provides some evidence that the architecture generalizes beyond a single benchmark setting. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
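The Brier scores quoted above measure how well calibrated the model's probability outputs are: for binary labels, the mean squared gap between predicted probability and realized outcome, with 0 being perfect. A short sketch of the computation; the predictions below are invented to show the metric only, not the paper's data:

```python
def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

# Four illustrative flow predictions (probability of "botnet") against true labels.
score = brier_score([0.95, 0.02, 0.88, 0.10], [1, 0, 1, 0])
```

A well-calibrated detector with high confidence on correct predictions drives this toward 0, which is why values like 0.0015 support risk-based (rather than purely binary) alerting.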
25 pages, 607 KB  
Article
Lightweight One-to-Many User-to-Sensors Authentication and Key Agreement
by Hussein El Ghor, Ahmad Hani El Fawal, Ali Mansour, Ahmad Ahmad-Kassem and Abbass Nasser
Information 2026, 17(1), 47; https://doi.org/10.3390/info17010047 - 4 Jan 2026
Abstract
The proliferation of Internet of Things (IoT) deployments demands Authentication and Key Agreement (AKA) protocols that scale from one initiator to many devices while preserving strong security guarantees on constrained hardware. Prior lightweight one-to-many designs often rely on a network-wide secret, reuse a single group session key across devices, or omit Perfect Forward Secrecy (PFS), leaving systems vulnerable to compromise and traffic exposure. To this end, we present in this paper a lightweight protocol, named Lightweight One-To-many User-to-Sensors Authentication and Key Agreement (LOTUS-AKA), that achieves mutual authentication, PFS, and per-sensor key isolation while keeping devices free of public-key costs. The user and gateway perform an ephemeral elliptic-curve Diffie–Hellman exchange to derive a short-lived group key, from which independent per-sensor session keys are expanded via the Hash-based Message Authentication Code (HMAC)-based Key Derivation Function (HKDF). Each sensor receives its key through a compact Authenticated Encryption with Associated Data (AEAD) wrap under its long-term secret; sensors perform only hashing and AEAD, with no elliptic-curve operations. The login path uses an augmented Password-Authenticated Key Exchange (PAKE) to eliminate offline password guessing in the smart-card theft setting, and a stateless cookie gates expensive work to mitigate denial-of-service. We provide a game-based security argument and a symbolic verification model, and we report microbenchmarks on Cortex-M–class platforms showing reduced device computation and linear low-constant communication overhead with the number of sensors. The design offers a practical path to secure, scalable multi-sensor sessions in resource-constrained IoT. Full article
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
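The per-sensor key isolation step above, expanding independent session keys from one short-lived group key, can be sketched with an HKDF-expand construction in the style of RFC 5869 over SHA-256, using only the Python standard library. The group key and sensor identifiers are placeholders, and this is a generic HKDF sketch rather than the exact LOTUS-AKA key schedule:

```python
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869-style expand: derive `length` bytes from PRK, bound to `info`."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-in for the ephemeral ECDH-derived group key.
group_key = hashlib.sha256(b"ephemeral-ECDH-output-placeholder").digest()
k_a = hkdf_expand(group_key, b"sensor-A")  # per-sensor keys differ because the
k_b = hkdf_expand(group_key, b"sensor-B")  # sensor identity is mixed into `info`
```

Binding each derivation to the sensor identity gives key isolation: compromising one sensor's session key reveals nothing about a sibling's, even though both came from the same group key.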
22 pages, 1096 KB  
Article
Modeling DECT-2020 as a Tandem Queueing System and Its Application to the Peak Age of Information Analysis
by Dmitry Nikolaev, Anna Zhivtsova, Sergey Matyushenko, Yuliya Gaidamaka and Yevgeni Koucheryavy
Mathematics 2026, 14(1), 186; https://doi.org/10.3390/math14010186 - 4 Jan 2026
Abstract
The Peak Age of Information (PAoI) quantifies the freshness of updates used in cyber-physical systems (CPSs), realized within the Internet of Things (IoT) paradigm, encompassing devices, networks, and control algorithms. Consequently, PAoI is a critical metric for real-time applications enabled by Ultra-Reliable Low Latency Communication (URLLC). While highly useful for system evaluation, the direct analysis of this metric is complicated by the correlation between the random variables constituting the PAoI. Thus, it is often evaluated using only the mean value rather than the full distribution. Furthermore, since CPS communication technologies like Wi-Fi or DECT-2020 involve multiple processing stages, modeling them as tandem queueing systems is essential for accurate PAoI analysis. In this paper, we develop an analytical model for a DECT-2020 network segment represented as a two-phase tandem queueing system, enabling detailed PAoI analysis via Laplace–Stieltjes transforms (LST). We circumvent the dependence between generation and sojourn times by classifying updates into four mutually exclusive groups. This approach allows us to derive the LST of the PAoI and determine the exact Probability Density Function (PDF) for the M|M|1 system. We also calculate the mean and variance of the PAoI and validate our results through numerical experiments. Additionally, we evaluate the impact of different service time distributions on PAoI variability. These findings contribute to the theoretical understanding of PAoI in tandem queueing systems and provide practical insights for optimizing DECT-2020-based communication systems. Full article
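Before the two-phase tandem analysis, the single M|M|1 case gives a compact sanity check: mean PAoI decomposes into the mean inter-generation time plus the mean sojourn time, i.e. E[A] = 1/λ + 1/(μ − λ) under FCFS. A minimal sketch of this classical single-queue result (not the tandem formula derived in the paper):

```python
def mean_paoi_mm1(lam: float, mu: float) -> float:
    """Mean Peak Age of Information for a stable FCFS M|M|1 queue:
    mean interarrival time 1/lam plus mean sojourn time 1/(mu - lam)."""
    assert 0 < lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / lam + 1.0 / (mu - lam)

low_load  = mean_paoi_mm1(lam=0.2, mu=1.0)   # sparse updates: interarrival term dominates
high_load = mean_paoi_mm1(lam=0.9, mu=1.0)   # near saturation: queueing delay dominates
```

The two terms pull in opposite directions as λ varies, which is why PAoI is minimized at an intermediate generation rate rather than at either extreme.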
19 pages, 963 KB  
Article
MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability
by Yan Ai, Yuesheng Zhu, Yao Jiang and Yuanzhao Deng
Sensors 2026, 26(1), 314; https://doi.org/10.3390/s26010314 - 3 Jan 2026
Abstract
The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system’s efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications. Full article
(This article belongs to the Section Internet of Things)
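The Cache Component's decoupling role described above can be sketched as a bounded buffer between southbound acquisition and northbound transmission: readings accumulate while the uplink is down and are drained when it returns. The buffer size and reading format below are illustrative, not MIGS internals:

```python
from collections import deque

class TelemetryCache:
    """Bounded FIFO decoupling acquisition from transmission; oldest readings
    are evicted first once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.buf = deque(maxlen=capacity)

    def acquire(self, reading):
        self.buf.append(reading)          # southbound side: never blocks on the network

    def drain(self):
        out = list(self.buf)              # northbound side: forward everything buffered
        self.buf.clear()
        return out

cache = TelemetryCache(capacity=3)
for reading in ["r1", "r2", "r3", "r4"]:  # uplink stalled: "r1" is evicted at capacity
    cache.acquire(reading)
sent = cache.drain()                       # uplink restored: forward what survived
```

A production gateway would make this thread-safe and persistent, but the shape is the same: acquisition never waits on cloud latency, and integrity is bounded by the cache depth.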
18 pages, 1420 KB  
Article
FedPrIDS: Privacy-Preserving Federated Learning for Collaborative Network Intrusion Detection in IoT
by Sameer Mankotia, Daniel Conte de Leon and Bhaskar P. Rimal
J. Cybersecur. Priv. 2026, 6(1), 10; https://doi.org/10.3390/jcp6010010 - 2 Jan 2026
Abstract
One of the major challenges for effective intrusion detection systems (IDSs) is continuously and efficiently incorporating changes in cyber-attack tactics, techniques, and procedures in the Internet of Things (IoT). Semi-automated cross-organizational sharing of IDS data is a potential solution. However, a major barrier to IDS data sharing is privacy. In this article, we describe the design, implementation, and evaluation of FedPrIDS: a privacy-preserving federated learning system for collaborative network intrusion detection in IoT. We performed experimental evaluation of FedPrIDS using three public network-based intrusion datasets: CIC-IDS-2017, UNSW-NB15, and Bot-IoT. Based on the labels in these datasets for attack type, we created five fictitious organizations (Financial, Technology, Healthcare, Government, and University) and evaluated IDS accuracy before and after intelligence sharing. In our evaluation, FedPrIDS showed (1) a detection accuracy net gain of 8.5% to 14.4% from a comparative non-federated approach, with ranges depending on the organization type, where the organization type determines its estimated most likely attack types, privacy thresholds, and data quality measures; (2) a federated detection accuracy across attack types of 90.3% on CIC-IDS-2017, 89.7% on UNSW-NB15, and 92.1% on Bot-IoT; (3) maintained privacy of shared NIDS data via federated machine learning; and (4) reduced inter-organizational communication overhead by an average 50% and showed convergence within 20 training rounds. Full article
(This article belongs to the Section Security Engineering & Applications)
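The federated step behind this kind of sharing can be sketched with FedAvg-style aggregation: each organization trains locally and only model weights, weighted by local sample counts, are combined, so raw NIDS data never leaves the organization. This is a generic FedAvg sketch with toy weight vectors, not the FedPrIDS aggregation rule itself:

```python
def fed_avg(local_weights, sample_counts):
    """Weighted average of per-organization weight vectors (FedAvg-style)."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Two illustrative organizations (e.g. Financial, Healthcare) with 2-parameter models.
global_w = fed_avg(
    local_weights=[[1.0, 0.0], [3.0, 2.0]],
    sample_counts=[100, 300],
)
```

The global model is then redistributed for the next round; only these small weight vectors cross organizational boundaries, which is also where the reported communication savings come from.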
26 pages, 5249 KB  
Article
Deep Reinforcement Learning-Based Intelligent Water Level Control: From Simulation to Embedded Implementation
by Kevin Cusihuallpa-Huamanttupa, Erwin J. Sacoto-Cabrera, Roger Jesus Coaquira-Castillo, L. Walter Utrilla Mego, Julio Cesar Herrera-Levano, Yesenia Concha-Ramos and Edison Moreno-Cardenas
Sensors 2026, 26(1), 245; https://doi.org/10.3390/s26010245 - 31 Dec 2025
Abstract
This article presents the design, simulation, and real-time implementation of an intelligent water level control system using Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) algorithm. The control policy was initially trained in a MATLAB-based simulation environment, where actor–critic neural networks were trained and optimized to ensure accurate and robust performance under dynamic and nonlinear conditions. The trained policy was subsequently deployed on a low-cost embedded platform (Arduino Uno), demonstrating its feasibility for real-time embedded applications. Experimental results confirm the controller’s ability to adapt to external disturbances. Quantitatively, the proposed controller achieved a steady-state error of less than 0.05 cm and an overshoot of 16% in the physical implementation, outperforming conventional proportional–integral–derivative (PID) control by 22% in tracking accuracy. The combination of the DDPG algorithm and low-cost hardware implementation demonstrates the feasibility of real-time deep learning-based control for intelligent water management. Furthermore, the proposed architecture is directly applicable to low-cost Internet of Things (IoT)-based water management systems, enabling autonomous and adaptive control in real-world hydraulic infrastructures. This proposal demonstrates its potential for smart agriculture, distributed sensor networks, and scalable and resource-efficient water systems. Finally, the main novelty of this work is the deployment of a DRL-based controller on a resource-constrained microcontroller, validated under real-world perturbations and sensor noise. Full article
(This article belongs to the Section Environmental Sensing)
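As a rough illustration of the deployment idea in this abstract: once a DDPG policy is trained, only the deterministic actor network needs to run on the microcontroller, which reduces to a few fixed matrix-tanh operations per control step. The sketch below shows such a forward pass; the network size and every weight value are placeholders invented for illustration, not the trained values from the paper.

```python
import math

# Placeholder weights standing in for a trained DDPG actor; real values
# would come from the MATLAB training stage described in the abstract.
W1 = [[0.8, -0.2], [0.1, 0.6], [-0.4, 0.3]]   # hidden layer, 3 units
B1 = [0.0, 0.1, -0.1]
W2 = [0.9, -0.5, 0.4]                          # output layer, 1 unit
B2 = 0.0

def actor(level_error_cm, error_rate_cm_s):
    """Deterministic policy: maps the state (level error, error rate)
    to a pump/valve command bounded in [-1, 1] by the output tanh."""
    x = (level_error_cm, error_rate_cm_s)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, B1)]
    return math.tanh(sum(w * hi for w, hi in zip(W2, h)) + B2)
```

On an Arduino-class target the same arithmetic would typically be written in C with the weights stored in flash; the point here is only that inference, unlike training, fits comfortably in a resource-constrained loop.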

12 pages, 1642 KB  
Article
Polarization-Shift Backscatter Identification for SWIPT-Based Battery-Free Sensor Nodes
by Taki E. Djidjekh and Alexandru Takacs
Electronics 2026, 15(1), 186; https://doi.org/10.3390/electronics15010186 - 31 Dec 2025
Abstract
Battery-Free Sensor Nodes (BFSNs) used in Simultaneous Wireless Information and Power Transfer (SWIPT) systems often rely on lightweight communication protocols with minimal security overhead due to strict energy constraints. As a result, conventional protocol-dependent security mechanisms cannot be employed, leaving BFSNs vulnerable to replay, spoofing, and other security threats. This paper explores a protocol-independent security mechanism that enhances BFSN security by exploiting the power wave for controlled backscattering. The method introduces a Manchester-encoded digital private key generated by the BFSN’s low-power microcontroller and backscattered through a polarization-shifting module enabled by a fail-safe RF switch, thereby avoiding the need for a dedicated backscattering rectifier. A LoRaWAN-based BFSN integrating this add-on module was implemented to experimentally validate the approach. Results show successful extraction of the backscattered key with minimal energy overhead (approximately 95 µJ for a 3 ms identification sequence), while the original high-efficiency RF rectifier used for harvesting remains unmodified. The orthogonal polarization between the incoming and backscattered waves additionally reduces clutter and cross-jamming effects. These findings demonstrate that secure identification can be seamlessly incorporated into existing BFSNs without altering their core architecture, offering an easy-to-integrate and energy-efficient solution for improving security in SWIPT-based sensing systems.
(This article belongs to the Section Microwave and Wireless Communications)
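The private key in this scheme is Manchester-encoded before being backscattered, which guarantees a level transition in every bit period and makes the key self-clocking at the reader. A minimal sketch of that encoding step follows; it assumes the IEEE 802.3 convention (0 → high-low, 1 → low-high), since the abstract does not state which convention the authors use.

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> (1, 0), 1 -> (0, 1).
    Each key bit becomes two chips, so every bit period has a transition."""
    chips = []
    for b in bits:
        chips.extend((0, 1) if b else (1, 0))
    return chips

def manchester_decode(chips):
    """Inverse mapping; rejects chip pairs with no transition."""
    bits = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester pair: %r" % (pair,))
    return bits
```

The doubled chip rate is the usual price of Manchester coding; for a short identification key (a 3 ms sequence in the paper's experiments) that overhead is negligible.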

21 pages, 1238 KB  
Review
Wi-Fi RSS Fingerprinting-Based Indoor Localization in Large Multi-Floor Buildings
by Inoj Neupane, Seyed Shahrestani and Chun Ruan
Electronics 2026, 15(1), 183; https://doi.org/10.3390/electronics15010183 - 30 Dec 2025
Abstract
Location estimation is significant in this era of the Internet of Things (IoT). Satellite and cellular signals are often blocked indoors, prompting researchers to explore alternative wireless technologies for indoor positioning. Among these, Wi-Fi Received Signal Strength (RSS) with fingerprinting is dominant in large, multi-floor buildings due to its existing infrastructure, acceptable accuracy, low cost, easy deployment, and scalability. This study aims to systematically search and review the literature on the use of real Wi-Fi RSS fingerprints for indoor localization or positioning in large, multi-floor buildings, in accordance with PRISMA guidelines, to identify current trends, performance, and gaps. Our findings highlight three main public datasets in this field (covering areas over 10,000 sq.m). Recent trends indicate the widespread adoption of Deep Learning (DL) techniques, particularly Convolutional Neural Networks (CNNs) and Stacked Autoencoders (SAEs). While buildings (in the same vicinity) and their respective floors are accurately identified, the maximum average error remains around 7 m. A notable gap is the lack of public datasets with detailed room or zone information. This review intends to serve as a guide for future researchers looking to improve indoor location estimation in large, multi-floor structures such as universities, hospitals, and malls.
(This article belongs to the Special Issue Machine Learning Approach for Prediction: Cross-Domain Applications)
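RSS fingerprinting works in two phases: an offline survey builds a radio map of RSS vectors at known positions, and online queries are matched against it. The classic baseline that the DL methods in this review are usually compared against is (weighted) k-nearest-neighbour matching in RSS space; the toy sketch below uses an invented four-point, two-floor radio map purely for illustration.

```python
import math

# Toy radio map: (x, y, floor) -> RSS vector in dBm, one entry per AP.
# Values are illustrative; a real map comes from an offline site survey.
RADIO_MAP = {
    (0.0, 0.0, 0): [-40, -70, -80],
    (5.0, 0.0, 0): [-55, -60, -75],
    (0.0, 5.0, 1): [-70, -45, -65],
    (5.0, 5.0, 1): [-80, -50, -50],
}

def locate(rss, k=2):
    """Weighted k-NN in RSS space: average the k closest survey points,
    weighting each by the inverse of its RSS-space distance."""
    ranked = sorted(RADIO_MAP.items(),
                    key=lambda item: math.dist(item[1], rss))[:k]
    weights = [1.0 / (1e-6 + math.dist(fp, rss)) for _, fp in ranked]
    total = sum(weights)
    x = sum(w * p[0] for w, (p, _) in zip(weights, ranked)) / total
    y = sum(w * p[1] for w, (p, _) in zip(weights, ranked)) / total
    floor = ranked[0][0][2]  # floor from the single best match
    return x, y, floor
```

Floor identification is taken from the best single match here, mirroring the review's observation that floors are classified far more reliably than within-floor coordinates are regressed.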
