Search Results (1,839)

Search Parameters:
Keywords = Internet adaptation

22 pages, 28643 KB  
Article
Benchmarking MARL for UAV-Assisted Mobile Edge Computing Under Realistic 3D Collision Avoidance Navigation Constraints for Periodic Task Offloading
by Jiacheng Gu, Qingxu Meng, Qiurui Sun, Bing Zhu, Songnan Zhao and Shaode Yu
Technologies 2026, 14(4), 202; https://doi.org/10.3390/technologies14040202 - 27 Mar 2026
Abstract
The rapid growth of Internet of Things (IoT) and Industrial IoT applications has intensified the demand for low-latency and reliable computation support for deadline-constrained periodic real-time tasks. While unmanned aerial vehicles (UAVs) enabling mobile edge computing (MEC) can reduce latency by bringing compute closer to data sources, terrestrial MEC deployments often suffer from limited coverage and poor adaptability to spatially heterogeneous demand. In this paper, we study a multiple-UAV-assisted MEC system serving cluster-based IoT networks, where cluster heads generate deadline-constrained periodic tasks for offloading under strict deadlines. To ensure practical feasibility in dense urban environments, we benchmark UAV mobility using a realistic 3D collision avoidance navigation graph with shortest-path execution, rather than assuming unconstrained continuous UAV motion in free space. On top of this benchmark, we systematically compare three multi-agent reinforcement learning (MARL) paradigms for joint navigation and periodic task offloading: (i) continuous 3D control MARL that outputs motion commands directly; (ii) discrete graph-based MARL that selects collision-free shortest paths; and (iii) asynchronous macro-action MARL. Using a high-fidelity 3D digital twin of San Francisco, we evaluate these paradigms under a unified protocol in terms of offloading success, end-to-end latency, and energy consumption. The results reveal clear performance trade-offs induced by realistic 3D collision avoidance constraints and provide actionable insights for designing UAV-assisted MEC systems supporting periodic real-time task offloading. Full article
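
As a rough illustration of the discrete graph-based navigation paradigm described above, the following minimal Python sketch (not the authors' code; the waypoint layout, helper names, and the networkx dependency are assumptions) selects a collision-free shortest path on a pre-validated 3D navigation graph.

# Minimal sketch, not from the article: UAV waypoints are nodes of a collision-free
# navigation graph, and an action corresponds to flying the shortest path to a target node.
import networkx as nx

def build_navigation_graph(waypoints, edges):
    # waypoints: {node_id: (x, y, z)}; edges: pairs already checked for obstacle clearance
    g = nx.Graph()
    for node, pos in waypoints.items():
        g.add_node(node, pos=pos)
    for u, v in edges:
        (x1, y1, z1), (x2, y2, z2) = waypoints[u], waypoints[v]
        g.add_edge(u, v, weight=((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5)
    return g

def shortest_route(graph, src, dst):
    # Collision avoidance is implicit: only pre-validated edges exist in the graph.
    return nx.shortest_path(graph, src, dst, weight="weight")

waypoints = {0: (0, 0, 50), 1: (100, 0, 50), 2: (100, 100, 60), 3: (0, 100, 60)}
graph = build_navigation_graph(waypoints, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(shortest_route(graph, 0, 2))  # e.g. [0, 1, 2]
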
11 pages, 1126 KB  
Proceeding Paper
Electric Vehicle Charging and Discharging Control Management Strategy Based on Deep Reinforcement Learning
by Chuan Yang, Wenge Huang and Xin Li
Eng. Proc. 2026, 128(1), 44; https://doi.org/10.3390/engproc2026128044 - 24 Mar 2026
Abstract
With the widespread adoption of electric vehicles (EVs), the management and scheduling of charging and discharging play a crucial role in the performance of both the electricity grid and electric vehicles. Particularly in the context of peak shaving, valley filling, and the promotion of the energy internet infrastructure, efficient management of the EV charging and discharging process is vital. This study investigates the control and management issues surrounding EV charging and discharging, proposing a management strategy based on deep reinforcement learning. By constructing an intelligent decision-making model, it integrates factors such as the operating conditions of the electrical grid, user behavioral preferences, EV battery characteristics, and renewable energy outputs. The study collects real-world EV usage data from a city, establishing an experimental environment to simulate the interaction between the electricity grid and electric vehicles. Using techniques such as Deep Q-Network (DQN) and policy gradients, it constructs a decision network to explore charging and discharging strategies across different time scales and load situations. Experimental results show that this strategy, compared to traditional charging schedule methods, can effectively reduce energy loss during charging, enhance battery life, and balance the grid load, while suppressing demand peaks, thus achieving intelligent optimization and reliability enhancement of the charging and discharging process. Particularly, an adaptive charging power adjustment technique within the strategy can dynamically adjust the charging power according to the real-time status of the EV and grid load without affecting the user’s daily use, thereby achieving the dual objectives of efficient energy saving and economy. The research also quantitatively analyzes battery degradation characteristics and the continuity of charging to ensure the long-term sustainability of the charging strategy. The research findings are significant for understanding and guiding the practical management of EV charging and discharging. Full article
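
To make the deep Q-network decision step described in the abstract concrete, the following Python/PyTorch sketch is a hedged example only: the state features (state of charge, grid load, electricity price) and the three actions (charge, idle, discharge) are assumptions, not the authors' actual model.

# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn as nn

class ChargingQNetwork(nn.Module):
    def __init__(self, state_dim=3, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, state):
        return self.net(state)  # one Q-value per action

def select_action(q_net, state, epsilon=0.1):
    # Epsilon-greedy: explore occasionally, otherwise pick the highest-value action.
    if torch.rand(1).item() < epsilon:
        return torch.randint(0, 3, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax().item()

q_net = ChargingQNetwork()
state = torch.tensor([0.45, 0.80, 0.30])  # assumed: SoC 45%, high grid load, low price
print(select_action(q_net, state))        # 0 = charge, 1 = idle, 2 = discharge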

22 pages, 9878 KB  
Article
Field Trial of a Low-Cost Sensor Network for Hydrometeorological Monitoring of Water Pans and Small Dams in Kenya
by Nils Michalke, John M. Gathenya, Joseph K. Sang and Rehema Ndeda
Hydrology 2026, 13(4), 101; https://doi.org/10.3390/hydrology13040101 - 24 Mar 2026
Abstract
Water pans and small dams play a vital role in supplying domestic water in rural regions characterised by seasonal rainfall regimes, with increasing importance as a climate change adaptation measure. Despite their small individual size, the collective impact of numerous water pans is significant. Commercially available monitoring systems are often too costly to be justified for these decentralised infrastructures, resulting in limited data availability that impedes detailed studies aimed at improving their performance. Here, we developed a low-cost monitoring station network that measures water level (JSN-SR04T ultrasonic sensor), precipitation (3D-printed tipping-bucket gauge), and air temperature and humidity (DHT22 sensor). Each station costs less than 12,000 KES (≈93 USD in March 2026), making it suitable for such decentralised multi-site monitoring. A field trial conducted from June to November 2025 at four water pans in the Kakia-Esamburmbur Catchment, Kenya, compared the collected data with an automatic weather station and manual observations. Water level measurements were more accurate than manual reference readings, while air temperature showed biases of 1.4 to 1.8 °C. Precipitation data were largely inaccurate due to inadequate sensor levelling. Overall operational reliability reached 83%, indicating potential for improvements to reduce maintenance efforts and fully exploit the advantages of its low-cost hardware. Full article
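
As a worked example of how an ultrasonic ranger such as the JSN-SR04T translates into a water-level reading, the short Python sketch below assumes the sensor is mounted at a known height above the pan bottom; the mounting height and echo time are invented numbers, and the station's actual firmware may differ.

# Hedged sketch: water level = mounting height minus the ultrasonic distance reading.
SPEED_OF_SOUND_M_S = 343.0  # near 20 °C; a DHT22 temperature reading could refine this

def distance_from_echo(echo_time_s):
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0  # halve the out-and-back travel time

def water_level(sensor_height_m, echo_time_s):
    return sensor_height_m - distance_from_echo(echo_time_s)

# Example: sensor mounted 3.0 m above the pan bottom, echo returns after 11.66 ms
print(round(water_level(3.0, 0.01166), 2))  # about 1.0 m of water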

21 pages, 9626 KB  
Article
An Improved AlexNet-Based Image Recognition Method for Transmission Line Wildfires
by Zilin Zhao and Guoyong Duan
Algorithms 2026, 19(4), 245; https://doi.org/10.3390/a19040245 - 24 Mar 2026
Abstract
Wildfires in the vicinity of power transmission corridors are characterized by sudden occurrence, rapid growth, and susceptibility to fire-like interference at night, which can easily lead to line discharge and trip accidents, thus affecting the safe operation of the power system. To address the high false alarm rate and poor generalization performance of wildfire image recognition in complex power transmission corridor environments, a wildfire image recognition method based on an improved AlexNet is proposed in this paper. The proposed method improves the description of flame and smoke properties at different scales by designing a reparameterized multi-scale feature extraction structure, and effectively alleviates the influence of strong light reflection and fire-like interference at night by using lightweight multi-scale attention and hybrid pooling attention mechanisms. A wildfire image dataset is constructed from 1246 on-site images of the power transmission corridor captured by a visual monitoring device and 600 wildfire images downloaded from the internet, and the method is tested in real-world imbalanced distribution scenarios. The experimental results show that the proposed method recognizes wildfire images with an accuracy of 96.9% and an F1 score of 94.9% on the test dataset, well above the original AlexNet, and generalizes well in cross-dataset tests. This work can provide technical support for online wildfire monitoring and for the operation and maintenance of power transmission corridors. Full article
(This article belongs to the Special Issue AI-Based Techniques in Smart Grid Operations)
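
For readers curious what a multi-scale feature extraction structure looks like in code, the following PyTorch sketch shows a generic parallel-branch block; it is an illustration under assumed channel sizes, not the reparameterized module or attention mechanisms from the article.

# Generic multi-scale block (hypothetical sizes), not the paper's architecture.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    # Parallel 1x1 / 3x3 / 5x5 convolutions capture flame and smoke cues at different scales.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Branch outputs share channel count and spatial size, so they fuse by summation.
        return self.act(self.b1(x) + self.b3(x) + self.b5(x))

x = torch.randn(1, 64, 56, 56)
print(MultiScaleBlock(64, 64)(x).shape)  # torch.Size([1, 64, 56, 56])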

20 pages, 1326 KB  
Systematic Review
Reimagining Traditional Workspaces Through Digitalisation and Hybrid Perspective: A Systematic Review
by Ayogeboh Epizitone and Smangele Pretty Moyane
Informatics 2026, 13(4), 46; https://doi.org/10.3390/informatics13040046 - 24 Mar 2026
Abstract
Workspace digitalisation presents a transformative shift from traditional, physically bounded offices to virtual, technology-enabled environments. Digital technologies like cloud computing, artificial intelligence, and the Internet of Things enable remote collaboration, data accessibility, and operational efficiency, thereby accelerating this transformation. Digital workspaces transcend geographical limitations, enabling a more flexible, inclusive, and adaptive work culture. They offer better work–life balance, with flexible options, reduced commuting time, and increased personal autonomy and control over commitments, compared to traditional workspaces. Despite these benefits, digitalisation creates cybersecurity, data privacy, and digital divide issues, where unequal access to digital tools and skills can exacerbate social and economic inequalities. The lack of physical interaction affects team cohesion and company culture. Hence, this paper explores these phenomena to uncover their implications and consider possible strategies to optimise workspace digitalisation, providing a comprehensive systematic review of extant literature within the study context, offering pragmatic insights and recommendations for workspaces. This study has found workspace digitalisation to be a complex, multifaceted phenomenon that provides flexibility, efficiency, and innovation, but also poses challenges that must be carefully managed. It postulates that as technology and work progress, a hybrid model that blends digital and traditional workspaces would be suited to each organisation’s needs and goals. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)

33 pages, 1935 KB  
Article
Smart Industrial Safety in High-Noise Environments Using IoT and AI
by Alessia Bramanti, Luca Catarinucci, Mattia Cotardo, Rosaria Del Sorbo, Claudia Giliberti, Mazhar Jan, Luca Landi, Raffaele Mariconte, Teodoro Montanaro, Federico Paolucci, Luigi Patrono, Davide Rollo, Francesco Antonio Salzano and Ilaria Sergi
Electronics 2026, 15(6), 1311; https://doi.org/10.3390/electronics15061311 - 20 Mar 2026
Abstract
High noise levels in industrial workplaces pose significant challenges to occupational safety, particularly with hearing protection and effective communication. Traditional hearing protection devices, while effectively attenuating harmful noise, often compromise situational awareness by excessively isolating workers from the acoustic environment and preventing the perception of critical auditory cues (e.g., emergency alarms), thereby introducing additional safety risks. This paper presents a smart industrial safety system that integrates Internet of Things (IoT) and artificial intelligence (AI) and is based on intelligent hearing protection devices to (a) selectively attenuate hazardous industrial noise, (b) preserve human speech, and (c) reproduce targeted audio notifications to workers near malfunctioning or hazardous machinery. A real-time voice activity detection (VAD) model is employed to distinguish vocal components from background noise and to adaptively control digital signal processing filters. Furthermore, indoor localization enables the delivery of targeted audio messages to workers in proximity to relevant events. Experimental evaluations on embedded hardware demonstrate that the selected VAD model operates well within real-time constraints and effectively supports dynamic noise filtering. Objective evaluation of the filtering stage using Mean Opinion Score (MOS), signal-to-noise ratio (SNR), and Harmonics-to-Noise Ratio (HNR) shows consistent quality improvements across all tested conditions, with MOS gains up to +118%, SNR increases between +10.4 and +29.0 dB, and HNR improvements up to +6.22 dB, indicating enhanced speech intelligibility and preservation of voice harmonic structure even under high-noise scenarios. Robustness validation of the VAD module across varying acoustic conditions confirms reliable speech detection performance, achieving perfect classification at +10 dB SNR, very high accuracy at 0 dB (98.3%, ROC AUC 0.998), and stable operation even at −7 dB SNR (79.8% accuracy, ROC AUC 0.878). The proposed architecture achieves a balanced trade-off between hearing protection and speech intelligibility while enhancing the effectiveness of safety communications in noisy industrial environments. Full article
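
The SNR figures quoted above can be reproduced conceptually in a few lines of Python; the sketch below uses synthetic signals and arbitrary noise levels purely to show how an SNR improvement in dB is computed, and is unrelated to the authors' evaluation pipeline.

# Synthetic illustration of an SNR-improvement calculation (not the article's data).
import numpy as np

def snr_db(signal, noise):
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
speech = np.sin(2 * np.pi * 220 * t)               # stand-in for a voiced segment
noise_before = 0.8 * rng.standard_normal(t.size)   # heavy industrial background
noise_after = 0.1 * rng.standard_normal(t.size)    # residual noise after adaptive filtering

improvement = snr_db(speech, noise_after) - snr_db(speech, noise_before)
print(f"SNR improvement: {improvement:.1f} dB")    # roughly +18 dB for these synthetic levels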

29 pages, 2311 KB  
Review
Trust Assessment Methods for Blockchain-Empowered Internet of Things Systems: A Comprehensive Review
by Mostafa E. A. Ibrahim, Yassine Daadaa and Alaa E. S. Ahmed
Appl. Sci. 2026, 16(6), 2949; https://doi.org/10.3390/app16062949 - 18 Mar 2026
Abstract
The Internet of Things (IoT) is rapidly pervading daily life and linking everything. Although greater connectivity offers many benefits, including higher productivity, process automation, and data-driven decision-making, it also poses a number of security risks. Modern threats to data authenticity and trust are becoming harder to address with conventional centralized security solutions. In this paper, we present a detailed investigation of the latest innovations and approaches for assessing reputation and trust in the blockchain-empowered Internet of Things (BIoT) area. A comprehensive literature search was conducted across major electronic databases, including IEEE, Springer, Elsevier, Wiley, MDPI, and top indexed conference proceedings. The publication year was restricted to the period from 2018 to 2025. A total of 122 studies met the inclusion criteria, and their methodological quality was assessed using predefined quality measures. We identify existing weaknesses at each layer of the IoT architecture and illustrate how autonomous, transparent, and tamper-resistant blockchain ledgers address them. In addition, we analytically compare public, private, consortium, and hybrid blockchain networking architectures to emphasize the underlying trade-offs among security, reliability, and decentralization. We also assess how reputation evaluation techniques have evolved over time, moving from classical fuzzy logic and weighted-average models to mature game-theoretic and machine learning (ML) models, and address their limitations in terms of computational overhead, scalability, adaptability, and deployment feasibility in IoT systems. Additionally, we outline future directions for BIoT trust assessment and identify research limitations and potential solutions. Our research indicates that although ML-driven models offer more accurate predictions for identifying illicit node activities, they are still constrained by limited, imbalanced data and high processing overhead. Full article
(This article belongs to the Special Issue Advanced Blockchain Technologies and Their Applications)
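
As background for the classical weighted-average family of trust models the review discusses, here is a toy Python sketch; the weighting of direct versus recommended evidence is an illustrative choice, not a method from any surveyed paper.

# Toy weighted-average reputation score (illustrative weights only).
def reputation(direct_score, indirect_scores, w_direct=0.6):
    # Combine a node's own observations with recommendations from neighbors.
    indirect = sum(indirect_scores) / len(indirect_scores) if indirect_scores else 0.0
    return w_direct * direct_score + (1.0 - w_direct) * indirect

print(reputation(0.9, [0.7, 0.8, 0.4]))  # about 0.79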

19 pages, 2335 KB  
Article
IoT-Simulated Digital Twin with AI Traffic Signal Control for Real-Time Traffic Optimization in SUMO
by Vasilica Cerasela Doiniţa Ceapă, Vasile Alexandru Apostol, Ioan Stefan Sacală, Constantin Florin Căruntu, Russ Ross, Dj Holt, Mircea Segărceanu and Luiza Elena Burlacu
Sensors 2026, 26(6), 1880; https://doi.org/10.3390/s26061880 - 17 Mar 2026
Abstract
Urban traffic congestion leads to longer travel times, economic losses, and increased pollution. Recent advances in the Internet of Things (IoT) provide detailed real-time traffic data, yet testing adaptive control strategies directly on live networks remains costly and risky. To address this challenge, we propose an IoT-driven digital twin framework for the design and evaluation of AI-based traffic management systems. The framework is implemented in the Simulation of Urban MObility (SUMO) and uses its Python 3.14.2 API to emulate a dense network of IoT sensors that stream real-time information on vehicle density, queue lengths, and waiting times. This simulated IoT data feeds an AI agent that adapts traffic signal control in real time. The agent is trained with a composite reward function to jointly minimise vehicle waiting times and emissions. Its performance is compared with fixed-time and vehicle-actuated control under varying traffic demand scenarios. Results demonstrate the effectiveness of combining IoT-based simulation with AI control, providing a safe and scalable pathway towards the real-world deployment of intelligent traffic management systems. Full article
(This article belongs to the Section Sensor Networks)
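
Since the framework relies on SUMO's Python (TraCI) API to emulate IoT sensing and to actuate signals, the following hedged sketch shows what such a sensing/actuation loop can look like; the configuration file name, traffic-light id, phase count, and the naive queue threshold are assumptions standing in for the trained AI agent.

# Hedged sketch of a TraCI sensing/actuation loop (file and id names are placeholders).
import traci

traci.start(["sumo", "-c", "network.sumocfg"])   # assumed SUMO configuration file
TLS_ID = "junction_0"                            # hypothetical traffic-light id

for _ in range(3600):
    traci.simulationStep()
    lanes = traci.trafficlight.getControlledLanes(TLS_ID)
    queue = sum(traci.lane.getLastStepHaltingNumber(lane) for lane in set(lanes))
    # Naive adaptive rule standing in for the trained agent:
    # advance to the next phase early if almost nobody is waiting.
    if queue < 2:
        current = traci.trafficlight.getPhase(TLS_ID)
        traci.trafficlight.setPhase(TLS_ID, (current + 1) % 4)  # assumes a 4-phase program

traci.close()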

20 pages, 315 KB  
Systematic Review
Green Scheduling and Task Offloading in Edge Computing: A Systematic Review
by Adriana Rangel Ribeiro, Ana Clara Santos Andrade, Gabriel Leal dos Santos, Guilherme Dinarte Marcondes Lopes, Edvard Martins de Oliveira, Adler Diniz de Souza and Jeremias Barbosa Machado
Network 2026, 6(1), 17; https://doi.org/10.3390/network6010017 - 16 Mar 2026
Abstract
This paper presents a Systematic Literature Review (SLR) on green scheduling and task offloading strategies for energy optimization in edge computing environments. The evolution of low-latency, high-performance applications has driven the widespread adoption of distributed computing paradigms such as Edge Computing, Fog-Cloud architectures, and the Internet of Things (IoT). In this context, Mobile Edge Computing (MEC) is often combined with Unmanned Aerial Vehicles (UAVs) to extend computational capabilities to areas with limited infrastructure, bringing processing closer to the data source to reduce latency and improve scalability. Nevertheless, these systems encounter substantial energy-related challenges, particularly in battery-powered or resource-constrained environments. To address these concerns, green computing strategies—especially energy-efficient scheduling and task offloading—have emerged as promising approaches to optimize energy usage in edge environments. Green scheduling optimizes task allocation to minimize energy consumption, whereas offloading redistributes workloads from resource-constrained devices to edge or cloud servers. Increasingly, these techniques are enhanced through artificial intelligence (AI) and machine learning (ML), enabling adaptive and context-aware decision-making in dynamic environments. This paper conducts a systematic literature review (SLR) to synthesize the most widely adopted strategies for energy-efficient scheduling and task offloading in edge computing, highlighting their impact on sustainability and performance. The analysis provides a comprehensive view of the state of the art, examines how architectural contexts influence energy-aware decisions, and highlights the role of AI/ML in enabling intelligent and sustainable edge systems. The findings reveal current research gaps and outline future directions to advance the development of robust, scalable, and environmentally responsible computing infrastructures. Full article

43 pages, 6922 KB  
Article
Multi-Flow Hybrid Task Offloading Scheme for Multimodal High-Load V2I Services
by Weiqi Luo, Yaqi Hu, Maoqiang Wu, Yijie Zhou, Rong Yu and Junbin Qin
Electronics 2026, 15(6), 1229; https://doi.org/10.3390/electronics15061229 - 16 Mar 2026
Abstract
In the Internet of Vehicles (IoV), connected vehicles generate high-load perception tasks with large-scale and multimodal sensitive data, imposing strict requirements on latency, computing, and privacy. Existing solutions still suffer from high task service latency and privacy risks. To address these issues, this paper proposes an integrated framework that jointly considers multi-flow task offloading, adaptive privacy preservation, and latency-aware resource incentive mechanism. Specifically, we propose a Location-Aware and Trust-based (LA-Trust) dual-node task offloading algorithm based on deep reinforcement learning (DRL), which treats pre-partitioned subtasks as multiple parallel flows and enables flow-level collaborative offloading optimization across neighboring nodes, allows subtask data uploading and processing to proceed concurrently, and incorporates node security into decision making. To further enhance privacy protection, a Distribution-Aware Local Differential Privacy (DA-LDP) algorithm is designed to adaptively inject artificial noise according to data heterogeneity, balancing privacy protection and task execution accuracy. In addition, a Delay-Cost Reverse Auction (DC-RA) algorithm is proposed to further reduce latency by introducing wireless channel modeling between idle vehicles and edge nodes into the incentive mechanism. Experimental results show that the proposed framework improves task execution accuracy by 38% and reduces offloading cost, delay, incentive cost, and auction communication latency by 64.41%, 64.64%, 19%, and 44%, respectively, while more than 60% of tasks are offloaded to high-trust nodes. Full article
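
To show the mechanism underlying local differential privacy in this setting, the Python sketch below adds Laplace noise to local values; the per-feature epsilon heuristic is a placeholder and does not reproduce the paper's DA-LDP rule.

# Minimal local-differential-privacy sketch via Laplace noise (placeholder scaling rule).
import numpy as np

def ldp_perturb(values, epsilon, sensitivity=1.0, rng=None):
    # Add Laplace noise with scale sensitivity/epsilon to each locally held value.
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=values.shape)

readings = np.array([0.42, 0.91, 0.10, 0.67])  # hypothetical normalized sensor features
# A distribution-aware scheme might choose epsilon from the data spread; this is a stand-in:
epsilon = 1.0 / (np.std(readings) + 0.1)
print(ldp_perturb(readings, epsilon))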

21 pages, 1611 KB  
Article
Mobility-Aware Cooperative Optimization for Task Offloading and Resource Allocation in Multi-Edge Computing
by Dong Chen, Ximing Zhang, Kequan Lin, Chunhua Mei and Ru Huo
Algorithms 2026, 19(3), 221; https://doi.org/10.3390/a19030221 - 16 Mar 2026
Abstract
The rapid proliferation of mobile Internet of Things (IoT) devices has introduced significant resource scheduling challenges in multi-edge computing networks, where device mobility leads to dynamic network connectivity and load imbalance, complicating task offloading and resource management. To address these issues, this paper presents a mobility-driven hierarchical optimization framework for task offloading and computation resource allocation in multi-region edge computing environments: a functionally coupled design that integrates mobility-aware heuristic offloading with multi-agent deep deterministic policy gradient (MADDPG)-based resource allocation. Devices are first clustered according to their mobility patterns, and offloading decisions are dynamically made based on trajectory and dwell-time characteristics. Each edge server is modeled as an autonomous agent, and an MADDPG framework is adopted to collaboratively optimize resource allocation, with the joint objective of minimizing task processing delay and system energy consumption. Experimental evaluations under diverse mobility and workload conditions show that the proposed approach achieves a 19.0% reduction in task delay compared to the Multi-Objective Gray Wolf Optimization (MOGWO) method at the largest device scale (60 devices) and maintains comparable energy efficiency. Furthermore, it exhibits stronger adaptability and scheduling performance across varying mobility group distributions. These results confirm the effectiveness of the proposed method in enhancing system performance within dynamic mobile edge computing scenarios. Full article
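
The first stage of the framework, clustering devices by mobility pattern, can be prototyped in a few lines; in this sketch the feature columns (mean speed, dwell time, handover rate) and the cluster count are assumptions, and the MADDPG allocation stage is omitted.

# Sketch of mobility-based device clustering only (assumed features; not the paper's code).
import numpy as np
from sklearn.cluster import KMeans

# Columns: mean speed (m/s), mean dwell time per region (s), handover rate (1/min)
features = np.array([
    [0.5, 900.0, 0.1],   # near-static device
    [1.4, 300.0, 0.8],   # pedestrian-like mobility
    [12.0, 60.0, 3.5],   # vehicular mobility
    [11.2, 75.0, 3.1],
    [0.7, 840.0, 0.2],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)  # devices with similar trajectories/dwell times share an offloading policy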

29 pages, 2188 KB  
Review
Post-Quantum Authentication in the Internet of Medical Things: A System-Level Review and Future Directions
by Fatima G. Abdullah and Tayseer S. Atia
Computers 2026, 15(3), 189; https://doi.org/10.3390/computers15030189 - 15 Mar 2026
Abstract
The Internet of Medical Things (IoMT) has become a core component of modern healthcare infrastructures, enabling continuous patient monitoring, remote diagnostics, and data-driven clinical decision-making. Despite these advances, authentication in IoMT environments remains a critical security challenge, intensified by strict resource constraints of medical devices and the emerging threat posed by quantum computing to classical cryptographic techniques. This systematic review investigates authentication mechanisms in IoMT from both post-quantum and system-level perspectives. A structured literature review was conducted using a PRISMA-informed methodology across major scientific databases, including IEEE Xplore, ACM Digital Library, SpringerLink, ScienceDirect, and MDPI. From an initial set of 95 records, 63 studies were selected for qualitative synthesis following screening and eligibility assessment. To organise existing research, this study introduces a multi-dimensional classification framework that categorises authentication solutions according to cryptographic paradigm (classical, hybrid, and post-quantum), deployment architecture, system objectives, and clinical operational constraints. The comparative synthesis demonstrates important trade-offs between security strength, latency, computational overhead, and energy consumption that are frequently underexplored in the existing literature. Furthermore, the analysis identifies key research gaps related to scalability in heterogeneous medical environments, trust establishment across administrative and clinical domains, usability under strict timing constraints, and resilience against quantum-capable adversaries. Based on these findings, future research directions are outlined toward adaptive, lightweight, and context-aware post-quantum authentication frameworks designed for real-world IoMT deployments. Limitations of this review include restriction to English-language publications and selected databases. This study received no external funding, and the review protocol was not formally registered. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)

38 pages, 1411 KB  
Article
Cybersecurity Digital Twins for Industrial Systems: From Literature Synthesis to Framework Design
by Konstantinos E. Kampourakis, Vasileios Gkioulos and Sokratis Katsikas
Information 2026, 17(3), 286; https://doi.org/10.3390/info17030286 - 12 Mar 2026
Abstract
Digital Twins (DTs) are increasingly recognized as a strategic technology for enhancing cybersecurity in industrial environments, particularly in the face of rising threats targeting Operational Technology (OT). After comparatively examining closely related DT–cybersecurity frameworks to position the contribution within the existing research landscape, this paper presents a systematic literature review and comparative analysis of 19 recent DT-based cybersecurity studies, focusing on their relevance to incident detection and response in sectors such as Industrial Internet of Things (IIoT), manufacturing, and energy. The analysis evaluates each study across multiple dimensions, including attack types, detection and response mechanisms, DT integration, and technology stacks. From this review, we derive a consolidated set of requirements, categorized as functional, non-functional, security-specific, and domain-specific. These requirements serve as the foundation for a novel, cybersecurity-focused, ISO 23247-based framework. The proposed architecture formalizes a DT-enabled incident detection and response lifecycle aligned with ISO 23247. It is explicitly mapped to the derived requirements and detailed with practical implementation considerations. This work contributes a structured, evidence-based approach to DT-based security engineering and offers a reference design for researchers and practitioners aiming to build resilient, adaptive cybersecurity solutions in industrial settings. Full article

17 pages, 3681 KB  
Article
Developing a BIM–GIS-Based Digital Twin for the Operation and Maintenance of an Urban Ring Road: The M-30 Case Study
by Jorge Jerez Cepa and Marcos García Alberti
Appl. Sci. 2026, 16(6), 2673; https://doi.org/10.3390/app16062673 - 11 Mar 2026
Abstract
The implementation of digital twin (DTw) in infrastructure management is becoming increasingly important. Although digitalization in the Architecture, Engineering, Construction, and Operations (AECO) sector is progressing slowly, enabling technologies such as Building Information Modelling (BIM), Geographic Information Systems (GIS), Internet of Things (IoT) and data management allow for more informed and efficient management of ageing and highly complex assets. With the aim of improving the operation and maintenance (O&M) of transport infrastructure, the use of an integrated BIM–GIS model is proposed as the basis for a future DTw for an existing highway, the M-30 urban ring road in Madrid. This study develops an as-built digital model based on real GIS data, point clouds and BIM (LOD 300), adapting it to existing management systems using a relational database with unique identifiers. The infrastructure is modelled in a segmented and georeferenced manner, incorporating roads, tunnels, bridges and equipment as independent entities. Access to the model is guaranteed through 3D GIS scenes, interactive panels and BIM viewers geared towards management. In addition, a cost–benefit analysis is carried out using a Return On Investment (ROI) that evaluates the implementation of BIM in the management of this infrastructure. Full article
(This article belongs to the Special Issue Building Information Modelling: From Theories to Practices)

23 pages, 11915 KB  
Article
IoT-Assisted Hydroponic System for Andrographis paniculata: Enhanced Productivity and Pharmaceutical-Grade Quality
by Krit Funsian, Yaowarat Sirisathitkul, Pumiphat Khotchanakhen, Apiwit Bunta, Kanittha Srikwan, Kingkan Bunluepuech, Athakorn Promwee, Chih-Yi Chiu and Karanrat Thammarak
IoT 2026, 7(1), 28; https://doi.org/10.3390/iot7010028 - 10 Mar 2026
Abstract
This study presents an Internet of Things (IoT)-assisted semi-open hydroponic system for cultivating Andrographis paniculata under tropical conditions, aiming to enhance biomass productivity, andrographolide (AG) yield, and production efficiency. IoT-assisted hydroponics, non-IoT hydroponics, and soil-based cultivation were compared in 10 m2 greenhouses. The IoT system enabled real-time monitoring and adaptive regulation of temperature, relative humidity, light intensity, nutrient solution pH, and electrical conductivity (EC). IoT-assisted hydroponics achieved earlier harvest (≈90 days) and the highest fresh biomass yield (0.409 ± 0.014 kg m−2) while maintaining per-plant productivity (15.74 ± 0.54 g plant−1) comparable to soil-based cultivation. Andrographolide concentration reached 25.58 ± 3.36 mg g−1 DW (2.56% w/w), meeting pharmacopeial requirements. Owing to stable environmental regulation and tolerance to high planting density, the IoT system produced the highest areal AG productivity (209.5 mg m−2), representing a four- to tenfold increase over the other systems. Despite higher operational costs, IoT-assisted hydroponics achieved the lowest AG unit cost (≈6.77 USD g−1). While most previous studies emphasize tissue-level AG concentration, system-level productivity and cost efficiency under realistic cultivation conditions remain insufficiently explored. Overall, IoT-enabled semi-open hydroponics provides a scalable and economically viable approach for medicinal plant production, bridging the gap between open-field cultivation and fully controlled plant factory systems. Full article
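
As a simple picture of the adaptive regulation loop described above, the Python sketch below applies threshold rules to pH and EC readings; the setpoint ranges and actuator names are invented for illustration and are not the study's controller.

# Illustrative rule-based setpoint regulation (assumed thresholds and actuator names).
def control_step(ph, ec_ms_cm, ph_range=(5.5, 6.5), ec_range=(1.5, 2.5)):
    actions = []
    if ph < ph_range[0]:
        actions.append("dose_ph_up")
    elif ph > ph_range[1]:
        actions.append("dose_ph_down")
    if ec_ms_cm < ec_range[0]:
        actions.append("dose_nutrient")
    elif ec_ms_cm > ec_range[1]:
        actions.append("add_fresh_water")
    return actions or ["hold"]

print(control_step(ph=6.8, ec_ms_cm=1.2))  # ['dose_ph_down', 'dose_nutrient']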
