Search Results (1,374)

Search Parameters:
Keywords = cloud and edge computing

18 pages, 20391 KB  
Article
Multi-Temporal Sentinel-1 SAR Analysis for Smallholder Agricultural Mapping: A Coefficient of Variation Approach for Food Security Monitoring in Kenya
by Zach Little, Cameron Carlson and Troy Bouffard
Land 2026, 15(3), 371; https://doi.org/10.3390/land15030371 - 26 Feb 2026
Abstract
Monitoring agricultural production in developing nations is essential for assessing food security. Nevertheless, persistent cloud cover in tropical regions severely limits optical satellite observations, and ground-truth data for classification validation are typically unavailable. This study developed a remote sensing methodology to classify agricultural land in southern Uasin Gishu County, Kenya, using weather-independent Synthetic Aperture Radar (SAR) imagery without requiring in situ training data. We processed 29 Sentinel-1 C-band VH-polarized scenes through the Alaska Satellite Facility’s Radiometric Terrain Correction pipeline. We computed the Coefficient of Variation (CV) across the 2017 time series to quantify temporal backscatter variance. VH polarization was selected over VV because a preliminary analysis showed that VV sensitivity to water surface dynamics confounded the CV algorithm. Preprocessing masks excluded water bodies, urban areas, and edge pixels to reduce classification errors from non-agricultural sources of temporal variability. Unsupervised ISO Cluster classification partitioned the CV raster into land-cover classes, and a Python-based statistical analysis determined optimal threshold values. Active agriculture pixels (n = 581,807) exhibited a mean CV of 0.469 (SD = 0.087), while non-agricultural pixels (n = 623,484) showed a mean CV of 0.274 (SD = 0.049). The optimal classification threshold of 0.357, determined by the intersection of fitted normal distributions, achieved an overall accuracy of 87.5% (Kappa = 0.73) when validated against Sentinel-2 reference imagery. User’s accuracy for agriculture was 96.6%, indicating that pixels classified as agricultural were highly reliable, while omission errors reducing producer’s accuracy to 84.6% were primarily attributable to edge pixels and land cover types where preprocessing masks or threshold placement excluded pixels exhibiting intermediate temporal dynamics. 
The classification identified approximately 810 km² of actively cultivated land (54% of the southern study area), corresponding to an estimated 69,500 to 162,200 metric tonnes (assuming 30–70% maize fraction) of potential maize production based on FAO yield data. The methodology provides a replicable, cost-effective tool for food security monitoring in cloud-prone regions where ground-truth data are unavailable. Full article
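The reported 0.357 threshold can be approximately reproduced from the class statistics given in the abstract by intersecting the two fitted normal densities. A minimal sketch follows; the quadratic formulation and root-selection rule are my own reconstruction, not code from the paper:

```python
import numpy as np

# Class statistics reported in the abstract (CV of Sentinel-1 VH backscatter):
mu_ag, sd_ag = 0.469, 0.087    # active agriculture
mu_non, sd_non = 0.274, 0.049  # non-agriculture

# Equating the two normal log-densities yields a quadratic a*x^2 + b*x + c = 0.
a = 1 / (2 * sd_non**2) - 1 / (2 * sd_ag**2)
b = mu_ag / sd_ag**2 - mu_non / sd_non**2
c = (mu_non**2 / (2 * sd_non**2)
     - mu_ag**2 / (2 * sd_ag**2)
     - np.log(sd_ag / sd_non))

roots = np.roots([a, b, c])
# Keep the root that lies between the two class means.
threshold = [float(r) for r in roots if mu_non < r < mu_ag][0]
print(round(threshold, 3))  # close to the reported 0.357
```

The small residual difference from 0.357 comes from rounding in the published means and standard deviations.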

19 pages, 1335 KB  
Article
Resource Allocation and Task Migration in DIMA-Oriented Mobile Edge Computing Systems
by Ning Wang, Liang Liu, Peng Wei, Meiyan Teng and Xiaolin Qin
Mathematics 2026, 14(5), 781; https://doi.org/10.3390/math14050781 - 26 Feb 2026
Abstract
Avionics systems are evolving from Integrated Modular Avionics (IMA) to Distributed Integrated Modular Avionics (DIMA), where distributed computing nodes are interconnected through real-time networks to support flexible resource sharing and latency-critical services. This architecture is highly consistent with the paradigm of Mobile Edge Computing (MEC), in which distributed edge resources collaboratively process computation workloads close to users to meet stringent real-time requirements. However, efficient task scheduling and migration remain key challenges in such distributed MEC platforms, since many existing approaches are designed for traditional centralized architectures and lack effective support for runtime workload dynamics and migration overheads. In this paper, we abstract the computing resource and task models for DIMA-oriented MEC systems and propose two algorithms: an Efficient Workload Scheduling Algorithm (EWSA) for workload placement and a Workload Migration Algorithm (WMA) for adaptive task relocation. CloudSim-based simulations show that the proposed methods significantly outperform the benchmark JIT-C approach in scheduling performance and migration efficiency, demonstrating their effectiveness for real-time distributed edge computing environments. Full article

30 pages, 15102 KB  
Article
FireVision: An Early Fire and Smoke Detection Platform Utilizing Mask R-CNN Deep Learning Inferences
by Konstantina Spanoudaki, Meropi Tsoumani, Sotirios Kontogiannis, Myrto Konstantinidou, Ion Anastasios Karolos and George Kokkonis
Algorithms 2026, 19(3), 169; https://doi.org/10.3390/a19030169 - 24 Feb 2026
Abstract
This paper presents FireVision, an innovative platform and model for real-time fire detection and monitoring. The platform utilizes automated drone flights to collect high-resolution imagery in both suburban and forested settings. Ensemble deep learning inference, based on Mask R-CNN weak learners, is employed to trigger alerts. Detection performance is further enhanced by integrating ResNet-50, ResNet-101, and ResNet-152 classifiers, which can be deployed in the cloud or on the drone’s edge co-processing units. Additionally, a fire criticality index is introduced, leveraging detection bounds and masks to assess the severity of fire events, alongside an automated drone path-planning algorithm for identifying critical fire incidents. Experiments were conducted using a supervised, mask-annotated dataset to evaluate model accuracy and inference speed across various cloud and edge computing configurations. Results indicate that ResNet-101 surpasses ResNet-50 by 5 to 12.5 percent in mAP@0.5 mask accuracy, with an 18 percent increase in inference time on the cloud and a 27 percent increase on the drone edge device GPU. In comparison, ResNet-152 achieves a 0.5 to 1.2 percent improvement in mAP@0.5 over ResNet-101, but its inference time is nine times slower in the cloud and 1.3 times slower on the GPU. Full article
(This article belongs to the Special Issue AI Applications and Modern Industry)
24 pages, 3302 KB  
Systematic Review
Performance Trade-Offs in Multi-Tenant IoT–Cloud Security: A Systematic Review of Emerging Technologies
by Bader Alobaywi, Mohammed G. Almutairi and Frederick T. Sheldon
IoT 2026, 7(1), 21; https://doi.org/10.3390/iot7010021 - 22 Feb 2026
Abstract
Multi-tenancy is essential for scalable IoT–Cloud systems; however, it introduces complex security vulnerabilities at the intersection of shared cloud infrastructures and resource-constrained IoT environments. This systematic review evaluates next-generation security frameworks designed to enforce tenant isolation without violating the strict latency (<10 ms) and energy bounds of lightweight sensors. Adhering to PRISMA guidelines, we analyze selected high-quality studies to categorize intersectional threats, including cross-tenant data leakage, side-channel attacks, and privilege escalation. Our analysis identifies a critical, unresolved conflict: existing mitigation strategies often incur a 12% computational and communication overhead, creating a significant barrier for real-time applications. Furthermore, we critically analyze emerging technologies, including Zero Trust Architectures (ZTA), adaptive Artificial Intelligence (AI), blockchain, and Post-Quantum Cryptography (PQC). We find that direct PQC deployment is currently infeasible for LPWAN protocols due to key-size constraints (1.6 KB) that exceed typical payload limits. To address these challenges, we propose a novel multi-layer security design principle that offloads heavy isolation and cryptographic workloads to hardware-accelerated edge gateways, thereby maintaining tenant isolation without compromising real-time performance. Finally, this review serves as a roadmap for future research, highlighting federated learning and hardware enclaves as essential pathways for securing next-generation multi-tenant IoT ecosystems. Full article
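The LPWAN infeasibility point can be made concrete with a back-of-envelope fragment count. The 1.6 KB key size is the figure from the abstract; the 51-byte payload limit is an assumed LoRaWAN-class value used only for illustration, not a number from the review:

```python
import math

key_bytes = 1600     # PQC key material, taking 1.6 KB as 1600 bytes (abstract figure)
payload_bytes = 51   # assumed LoRaWAN DR0-class application payload limit

# Number of uplink frames needed to carry one key, ignoring framing overhead.
fragments = math.ceil(key_bytes / payload_bytes)
print(fragments)  # a single key exchange spans dozens of frames
```

Even this optimistic count (no per-fragment headers, no retransmissions) shows why the review deems direct PQC deployment infeasible on such links.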

28 pages, 842 KB  
Review
AI-Driven Virtual Power Plants: A Comprehensive Review
by Jian Li, Chenxi Wang and Yonghe Liu
Energies 2026, 19(4), 1084; https://doi.org/10.3390/en19041084 - 20 Feb 2026
Abstract
The rapid proliferation of distributed energy resources (DERs), including photovoltaics, wind power, battery energy storage, and electric vehicles, has transformed traditional power systems into highly decentralized and data-rich environments. Virtual power plants (VPPs) have emerged as a key mechanism for aggregating these heterogeneous assets and enabling coordinated control, market participation, and grid-support functions. Recent advances in artificial intelligence (AI) have further elevated the scalability, autonomy, and responsiveness of VPP operations. This paper presents a comprehensive review of AI for VPPs, organized around a taxonomy of machine learning, deep learning, reinforcement learning, and hybrid approaches, and examines how these methods map to core VPP functions such as forecasting, scheduling, market bidding, aggregation, and ancillary services. In parallel, we analyze enabling architectural frameworks—including centralized cloud, distributed edge, hybrid cloud–edge collaboration, and emerging 5G/LEO satellite communication infrastructures—that support real-time data exchange and scalable deployment of intelligent control. By integrating methodological, functional, and architectural perspectives, this review highlights the evolution of VPPs from rule-based coordination to intelligent, autonomous energy ecosystems. Key research challenges are identified in data quality, model interpretability, multi-agent scalability, cyber-physical resilience, and the integration of AI with digital twins and edge-native computation. These findings outline promising directions for next-generation intelligent VPPs capable of delivering secure, flexible, and self-optimizing DER aggregation at scale. Full article
(This article belongs to the Collection Review Papers in Energy and Environment)

21 pages, 2079 KB  
Article
Assuring Brokerage Quality in the Cloud–Edge Continuum
by Evangelos Barmpas, Simeon Veloudis, Yiannis Verginadis and Iraklis Paraskakis
Future Internet 2026, 18(2), 107; https://doi.org/10.3390/fi18020107 - 19 Feb 2026
Abstract
The Cloud–Edge Continuum (CEC) has emerged as a paradigm for distributing computational resources across cloud, fog, and edge layers, enabling latency-sensitive applications to operate efficiently. However, ensuring the quality of service (QoS) brokerage in such environments remains a challenge. Existing frameworks primarily focus on resource management techniques such as allocation, scheduling, and offloading but fail to address the quality assurance of the brokerage process itself. This paper introduces SLA governance as a means of ensuring the quality of service brokerage by validating—through automated reasoning—Service Level Agreements (SLAs) against meta-quality constraints—high-level policies that define permissible QoS conditions. We propose an ontology-driven approach leveraging the ODRL ontology for SLA representation and capturing meta-quality constraints. Our method also enables introspective reasoning ensuring internal SLA consistency. Additionally, we integrate SLA governance with a real-time monitoring framework, the Event Management System (EMS), to continuously track workload performance and trigger SLA adaptation when necessary. This integration ensures that SLA-based brokerage decisions remain dynamic and context-aware. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

21 pages, 1805 KB  
Article
Introducing LEAF: LLM Edge Assessment Framework for Generative AI on the Edge
by Mustafa Abdulkadhim and Sandor R. Repas
Mach. Learn. Knowl. Extr. 2026, 8(2), 48; https://doi.org/10.3390/make8020048 - 18 Feb 2026
Abstract
The transition of Large Language Models (LLMs) from centralized clouds to edge environments is critical for addressing privacy concerns, latency bottlenecks, and operational costs. However, existing edge benchmarking frameworks remain tailored to discriminative Deep Learning tasks (e.g., object detection), failing to capture the multidimensional challenges of generative AI, specifically the trade-offs between token generation speed, semantic accuracy, and hardware sustainability. To address this gap, we introduce LEAF (LLM Edge Assessment Framework), a novel evaluation methodology that integrates Circular Economy principles directly into performance metrics. LEAF assesses edge deployments across five synergistic pillars: Circular Economy Score, Energy Efficiency (Joules/Token), Performance Speed (Tokens/Second), semantic accuracy (BERTScore), and End-to-End Latency. We validate LEAF through an extensive experimental analysis of five distinct hardware classes, ranging from embedded IoT devices (Raspberry Pi 4 and 5, NVIDIA Jetson Nano) to professional edge servers (NVIDIA T400) and repurposed legacy workstations (NVIDIA GTX 1050 Ti). Utilizing 4-bit quantized models via the Ollama runtime, our results reveal a counterintuitive insight: repurposed consumer hardware significantly outperforms modern purpose-built edge SoCs. The legacy GTX 1050 Ti achieved a 20× speedup over the Raspberry Pi 4 and maintained superior energy-per-task efficiency compared to low-power ARM architectures by minimizing active runtime. These findings challenge the prevailing narrative that newer silicon is essential for Edge AI, demonstrating that sustainable, high-performance inference can be achieved by extending the lifecycle of existing hardware. LEAF thus provides a blueprint for a “Green Edge” ecosystem that balances computational capability with environmental responsibility. Full article
(This article belongs to the Section Data)
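Two of LEAF's pillars, Energy Efficiency (Joules/Token) and Performance Speed (Tokens/Second), reduce to simple ratios of measured quantities. The sketch below is a hedged illustration: the power and timing readings are invented, chosen only to mirror the reported 20× speedup and the energy-per-task reversal, and are not measurements from the paper:

```python
from dataclasses import dataclass

@dataclass
class InferenceRun:
    tokens_generated: int
    wall_time_s: float   # active runtime for the task
    avg_power_w: float   # average board power during the run

def tokens_per_second(run: InferenceRun) -> float:
    return run.tokens_generated / run.wall_time_s

def joules_per_token(run: InferenceRun) -> float:
    # Energy = average power x active runtime, normalized per token.
    return run.avg_power_w * run.wall_time_s / run.tokens_generated

# Hypothetical readings: a low-power SBC vs. a repurposed desktop GPU.
pi4 = InferenceRun(tokens_generated=256, wall_time_s=640.0, avg_power_w=6.0)
gtx = InferenceRun(tokens_generated=256, wall_time_s=32.0, avg_power_w=70.0)

# The faster device draws more power but finishes so much sooner that its
# energy per token is lower -- the paper's "minimizing active runtime" effect.
print(joules_per_token(pi4) > joules_per_token(gtx))
```

This is why a per-task (rather than instantaneous-power) view can favor legacy hardware.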

25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 - 14 Feb 2026
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight RL (reinforcement learning)-based IDS for fog networks and quantitatively comparing it with non-RL static thresholds and bandit-based approaches in real time. Novel self-aware reinforcement learning (RL) models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model, and the Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL), were proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrated online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against the static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound/LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models’ detection performance was compared using metrics including latency, energy consumption, detection accuracy, and the area under the precision–recall curve (AUPR) and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect traffic–dynamic attack methods and adapt accordingly, resulting in more stable long-term performance. 
By contrast, a static model’s accuracy degrades under drift. Full article
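The Thompson-sampling core behind HATS-RL-style policies can be illustrated with a plain Beta-Bernoulli bandit. This sketch is generic: the arm set, reward model, and per-threshold accuracies are invented for illustration, and it omits the paper's hierarchy, federation, and drift detection entirely:

```python
import random

def thompson_pick(successes, failures, rng):
    # Draw one sample from each arm's Beta posterior; play the best draw.
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

rng = random.Random(0)
true_rates = [0.55, 0.70, 0.60]       # hypothetical detection accuracy per arm
wins, losses = [0, 0, 0], [0, 0, 0]   # one arm per candidate threshold
for _ in range(2000):
    arm = thompson_pick(wins, losses, rng)
    hit = rng.random() < true_rates[arm]  # simulated detection outcome
    (wins if hit else losses)[arm] += 1

print(wins.index(max(wins)))  # index of the most frequently successful arm
```

Over the stream, play concentrates on the most accurate threshold, which is the adaptation behavior the static-threshold baseline lacks.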

23 pages, 2573 KB  
Article
Development of an Unattended Ionosphere–Geomagnetism Monitoring System with Dual-Adversarial AI for Remote Mid–High-Latitude Regions
by Cheng Cui, Zhengxiang Xu, Zefeng Liu, Zejun Hu, Fuqiang Li, Yinke Dou and Yuchen Wang
Aerospace 2026, 13(2), 179; https://doi.org/10.3390/aerospace13020179 - 13 Feb 2026
Abstract
To address coverage gaps in high-latitude space weather monitoring caused by constraints in energy, bandwidth, and labeled samples, this study presents a systematic solution deployed in Hailar, China. We constructed a Cloud–Edge–Terminal system featuring wind–solar hybrid energy and RK3588-based edge computing, achieving six months of stable ionospheric–geomagnetic observation under −40 °C. Furthermore, we propose a Dual-Adversarial Recurrent Autoencoder (DA-RAE) for anomaly detection. Utilizing a single-source domain strategy, the model learns physical manifolds from quiet-day data, enabling zero-shot anomaly perception in the unsupervised target domain. Field tests in March 2025 demonstrated superior generalized anomaly detection capabilities, successfully identifying both transient space weather events and environmental equipment faults (baseline drifts). This work validates the value of edge intelligence for autonomous operations in extreme environments, providing a reproducible paradigm for global ground-based networks. Full article
(This article belongs to the Special Issue Situational Awareness Using Space-Based Sensor Networks)

31 pages, 1411 KB  
Article
Practical Considerations for the Development of Two-Stage Deterministic EMS (Cloud–Edge) to Mitigate Forecast Error Impact on the Objective Function
by Gregorio Fernández, J. F. Sanz Osorio, Roberto Rocca, Luis Luengo-Baranguan and Miguel Torres
Appl. Sci. 2026, 16(4), 1844; https://doi.org/10.3390/app16041844 - 12 Feb 2026
Abstract
The growing penetration of Distributed Energy Resources (DERs)—such as photovoltaic generation, battery energy storage, electric vehicles, hydrogen technologies and flexible loads—requires advanced Energy Management Systems (EMS) capable of coordinating their operation and leveraging controllability to optimize microgrid performance and enable flexibility provision to the grid. When the physical, electrical, and economic system model is properly defined, the main sources of performance degradation typically arise from forecast uncertainty and temporal discretization effects, which propagate into sub-optimal schedules and infeasible setpoints. This paper proposes and tests a two-stage deterministic EMS architecture featuring rolling-horizon planning at an upper layer and fast local setpoint adaptation at a lower layer, jointly to reduce the impact of forecast errors and other uncertainties on the objective function. The first stage can be deployed either on the edge or in the cloud, depending on computational requirements, whereas the second stage is executed locally, close to the physical assets, to ensure timely corrective action. In the simulated cloud-executed planning case, moving from hourly to 15 min granularity improves the objective value from −49.39€ to −72.12€, corresponding to an approximate 46% reduction in operating cost. In our case study, the proposed second-stage local adaptation can reduce the mean absolute error (MAE) of the EMS performance loss by approximately 50% compared with applying the first-stage schedule without local correction. Results show that this two-stage hierarchical EMS effectively limits objective-function degradation while preserving operational efficiency and robustness. Full article
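The reported ~46% figure follows directly from the two objective values, using the abstract's sign convention (a more negative objective means lower operating cost). A one-line check:

```python
# Objective values for the cloud-executed planning case (EUR).
hourly_obj = -49.39    # 1 h granularity
quarter_obj = -72.12   # 15 min granularity

# Relative improvement of the objective, measured against the hourly case.
improvement = (hourly_obj - quarter_obj) / abs(hourly_obj)
print(f"{improvement:.0%}")
```

This confirms the abstract's "approximate 46% reduction in operating cost".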

27 pages, 10069 KB  
Article
Accelerating CNN Inference via In-Network Computing in Information-Centric Networking
by Kaiwei Hu, Haojiang Deng and Botao Ma
Electronics 2026, 15(4), 775; https://doi.org/10.3390/electronics15040775 - 11 Feb 2026
Abstract
Although Convolutional Neural Networks (CNNs) have achieved remarkable accuracy in intelligent tasks, their increasing complexity hinders low-latency execution. While edge computing mitigates the wide-area network delays typical of cloud-based inference, it remains constrained by limited computational resources when processing complex models under high concurrency. Collaborative inference has emerged as a promising paradigm to address these limitations; however, existing approaches often struggle with rigid routing, limited scalability, and inefficient resource utilization. In this paper, we propose a novel collaborative inference acceleration mechanism that integrates In-Network Computing (INC) within an Information-Centric Networking (ICN) framework. By leveraging the name-based resolution capability of ICN, our approach dynamically harnesses underutilized computational resources across distributed INC nodes, enabling flexible layer-wise offloading that transcends the limitations of static IP paths. Furthermore, a distributed decision-making and node-selection algorithm is designed to orchestrate CNN layer assignment based on real-time network conditions and node workloads. Extensive simulations on representative models demonstrate the effectiveness of the proposed method. Specifically, for the computationally intensive VGG16 model under high concurrency, the average task completion time is reduced by 43.3% and 60.2% relative to IP-based and Edge-Cloud baselines, respectively, with a load balancing fairness index maintained above 0.86. Full article

31 pages, 6177 KB  
Review
From Point Clouds to Predictive Maintenance: A Review of Intelligent Railway Infrastructure Monitoring
by Yalin Zhang, Peng Dai, Mykola Sysyn, Yuchuan Hu, Lei Kou, Haoran Song and Jing Shi
Sensors 2026, 26(4), 1131; https://doi.org/10.3390/s26041131 - 10 Feb 2026
Abstract
Point cloud technology, characterized by its high-precision 3D geometric acquisition in complex railway environments, has become a cornerstone for the intelligent detection, monitoring, and maintenance of railway infrastructure. This paper provides a systematic review of point cloud applications across critical railway scenarios, encompassing track geometry extraction, infrastructure component identification, tunnel and bridge modeling, clearance and encroachment analysis, and structural condition monitoring. We evaluate various mobile and stationary acquisition platforms alongside their typical data processing workflows. Furthermore, this review synthesizes cutting-edge advancements in processing algorithms, with a focus on feature extraction, semantic segmentation, and the transformative impact of deep learning and artificial intelligence on data fusion. Notably, the paper explores the synergy between point clouds and computational mechanics, specifically the construction of high-fidelity digital twins through multi-physics coupling to enable real-time simulation of structural stress distribution and damage evolution. We critically analyze persistent technical bottlenecks, such as acquisition efficiency, monitoring precision, data fragmentation, environmental interference, and the complexities of multi-modal data fusion. Finally, the paper outlines future research trajectories, focusing on autonomous intelligent sensing, multi-sensor integration, and the comprehensive digital transformation of railway infrastructure management, aiming to provide a robust theoretical framework and technical roadmap for the sustainable intelligentization of global railway systems. Full article

17 pages, 1992 KB  
Article
Dynamic Micro-Batch and Token-Budget Scheduling for IoT-Scale Pipeline-Parallel LLM Inference
by Juncheol Ahn, Yubin Son, Daemin Kim and Sejin Park
Sensors 2026, 26(4), 1101; https://doi.org/10.3390/s26041101 - 8 Feb 2026
Abstract
Large language models in IoT–edge–cloud settings face bursty, heterogeneous requests that make pipeline-parallel inference prone to micro-batch imbalance and communication stalls, causing GPU idle time and SLO violations. We propose a runtime-adaptive scheduler that jointly tunes token budgets and micro-batch counts to balance prefill/decode workloads and minimize pipeline bubbles under changing compute and network conditions. On a four-node pipeline-parallel cluster across Llama-2-13b and Qwen2.5-14b at 100/1000 Mbps, our method outperforms vLLM and SGLang, reducing GPU idle time by up to 55% and improving throughput by up to 1.61 × while improving TTFT/ITL SLO satisfaction. These results show that dynamic scheduling is essential for scalable, latency-stable LLM inference in IoT–edge–cloud environments. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for IoT Applications)
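Why micro-batch count matters for pipeline bubbles can be seen from the standard GPipe-style idle estimate. The sketch below uses only that textbook relation, not the paper's runtime-adaptive scheduler, and the 10% idle budget is an invented example:

```python
def bubble_fraction(stages: int, micro_batches: int) -> float:
    # GPipe-style estimate: idle share = (S - 1) / (M + S - 1)
    # for S pipeline stages and M micro-batches per batch.
    return (stages - 1) / (micro_batches + stages - 1)

def min_micro_batches(stages: int, max_bubble: float) -> int:
    # Smallest micro-batch count keeping the idle share within budget.
    # More micro-batches shrink bubbles but raise per-micro-batch
    # overhead -- the trade-off a runtime scheduler must tune.
    m = 1
    while bubble_fraction(stages, m) > max_bubble:
        m += 1
    return m

# Four stages, matching the paper's four-node pipeline; 10% idle budget
# is illustrative only.
print(min_micro_batches(stages=4, max_bubble=0.10))
```

The estimate assumes uniform micro-batch times; bursty, heterogeneous requests break that assumption, which is exactly why a static count underperforms and dynamic tuning helps.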

26 pages, 300 KB  
Review
Theoretical Foundations and Architectural Evolution of Cyberspace Endogenous Security: A Comprehensive Survey
by Heming Zhang, Jian Li, Hong Wang, Shizhong Xu, Hong Yang and Haitao Wu
Appl. Sci. 2026, 16(4), 1689; https://doi.org/10.3390/app16041689 - 8 Feb 2026
Abstract
The endogenous security paradigm has emerged to address the limitations of traditional cybersecurity, which relies on reactive “patching” and struggles against unknown threats, APTs, and supply chain attacks. Centered on the principle that “structure determines security”, it diverges from detection-based approaches by employing systems theory and cybernetics to architect closed-loop systems with “heterogeneous execution, multimodal adjudication, and dynamic scheduling”. This is realized through intrinsic architectural constructs such as dynamism, heterogeneity, and redundancy. Theoretically, it transforms deterministic component-level attacks into probabilistic system-level events, thereby shifting the security foundation from a “cognitive contest” to an “entropy-driven confrontation”. This paper provides a comprehensive review of this paradigm. We begin by elucidating its philosophical foundations and core axioms, focusing on the Dynamic Heterogeneous Redundancy (DHR) model, which converts attacks on specific vulnerabilities into probabilistic events under the core assumption of independent heterogeneous execution entities. Next, we trace the architectural evolution from early mimic defense prototypes to a universal framework, analyzing key developments including expanded heterogeneity dimensions, intelligence-driven dynamic policies, and enhanced adjudication mechanisms. We then explore essential enabling technologies and their integration with cutting-edge trends such as artificial intelligence, 6G, and cloud-native computing. Through case studies of the 5G core network and intelligent connected vehicles, the engineering feasibility of the endogenous security paradigm has been validated, with quantifiable security gains demonstrated. In a live-network pilot of the endogenous security micro-segmentation system for the 5G core, resource consumption (CPU/memory usage) of network function virtual machines remained below 3% under steady-state service loads. 
The system concurrently maintained microsecond-level forwarding performance and achieved carrier-grade core service availability of 99.999%. These results demonstrate that the endogenous security mechanism delivers high-level structural security with an acceptable performance cost. The paper also critically summarizes current theoretical, engineering, and ecosystem challenges, while outlining future research directions such as “Endogenous Security as a Service” and convergence with quantum-safe technologies. Full article
(This article belongs to the Special Issue AI Technology and Security in Cloud/Big Data)
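The DHR claim above, that attacks on specific vulnerabilities become probabilistic system-level events, can be made concrete with a back-of-the-envelope calculation. The following sketch is illustrative only and is not taken from the paper: it assumes each of the n heterogeneous executors is compromised independently with probability p, and that the adjudicator fails only when an attacker controls a simple majority of executors. Both assumptions (independence, majority voting) are idealizations of the DHR model's "independent heterogeneous execution entities" and "multimodal adjudication".

```python
from math import comb

def system_compromise_prob(n: int, p: float) -> float:
    """Probability an attacker controls a majority of n replicas,
    assuming each heterogeneous replica is compromised independently
    with probability p (the idealized DHR independence assumption)."""
    k_needed = n // 2 + 1  # simple-majority threshold for the adjudicator
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(k_needed, n + 1)
    )

# A single executor with a known exploit falls with probability p = 0.3,
# but a 5-replica DHR arrangement with majority adjudication only fails
# when 3 or more heterogeneous replicas fall to the same campaign.
single = system_compromise_prob(1, 0.3)  # 0.3
dhr5 = system_compromise_prob(5, 0.3)    # ≈ 0.163
```

Under these (optimistic) independence assumptions, redundancy converts a deterministic single-point exploit into a lower-probability system event; correlated vulnerabilities across replicas would erode the gain, which is why the paradigm stresses expanding the heterogeneity dimensions.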
22 pages, 3535 KB  
Article
Bridge Health Monitoring and Assessment in Industry 5.0: Lessons Learned from Long-Term Real-Time Field Monitoring of Highway Bridges
by Prakash Bhandari, Shinae Jang, Song Han and Ramesh B. Malla
Infrastructures 2026, 11(2), 55; https://doi.org/10.3390/infrastructures11020055 - 7 Feb 2026
Abstract
The rapid aging of bridges has increased interest in real-time, data-driven monitoring for predictive maintenance and safety management; however, practical deployment on in-service bridges remains limited. This paper presents lessons learned from long-term field deployment of real-time bridge joint monitoring systems on three in-service highway bridges and demonstrates how these insights can support the transition toward Industry 5.0. A unified framework is introduced to integrate key enabling technologies, including Internet of Things (IoT), digital twins, and artificial intelligence (AI), into a practical, human-centric monitoring architecture. Best practices for achieving durable, site-compliant, and cost-effective system design are summarized, with emphasis on sensor selection, wireless communication strategies, modular system development, and maintaining seamless operation. The development of a Docker-based analytics and visualization platform illustrates how interactive dashboards enhance human–machine collaboration and support informed decision-making. The role of advanced analytical tools, including digital twins, AI, and statistical modeling, in providing reliable structural assessments is highlighted, along with guidance on balancing cloud and edge computing for energy-efficient performance under constraints such as limited power, weather exposure, and site accessibility. Overall, the findings support the development of scalable, resilient, and human-centric real-time monitoring systems that advance data-driven decision-making and directly contribute to the realization of Industry 5.0 objectives in bridge health management. Full article
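The guidance on balancing cloud and edge computing under power constraints reduces to an energy accounting question: is it cheaper to stream raw sensor samples to the cloud, or to extract features at the edge and transmit only those? The sketch below is a hypothetical model, not the paper's method; all parameter names and values are assumptions introduced for illustration.

```python
def prefer_edge(raw_bytes: int, feature_bytes: int,
                tx_energy_per_byte: float, edge_compute_energy: float) -> bool:
    """Return True if edge processing (compute features locally, transmit
    only the features) costs less energy than streaming raw samples to
    the cloud. All parameters are hypothetical illustration values."""
    cloud_cost = raw_bytes * tx_energy_per_byte
    edge_cost = edge_compute_energy + feature_bytes * tx_energy_per_byte
    return edge_cost < cloud_cost

# Example: a 1 MB raw vibration record vs. a 1 kB feature vector,
# with radio transmission at 1 µJ/byte and 0.5 J of edge compute.
print(prefer_edge(1_000_000, 1_000, 1e-6, 0.5))  # True: edge wins
```

The same comparison flips for small payloads or expensive local computation, which is consistent with the abstract's point that the cloud/edge split must be tuned to site constraints such as limited power rather than fixed in advance.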