Search Results (320)

Search Parameters:
Keywords = service orchestration

22 pages, 1217 KB  
Article
The Missing Layer in Modern IT: Governance of Commitments, Not Just Compute and Data
by Rao Mikkilineni and William Patrick Kelly
Computers 2026, 15(5), 275; https://doi.org/10.3390/computers15050275 - 24 Apr 2026
Abstract
Contemporary enterprise IT operations are largely implemented on Shannon–Turing computing models in which programs execute read–compute–write cycles over data structures, while governance—fault handling, configuration control, auditability, continuity, and accounting—is applied externally through infrastructure platforms, observability stacks, and human operational processes. This separation scales analytical throughput but accumulates what we term coherence debt: locally expedient operational commitments whose provenance and revisability degrade over time until exposed by failures, security incidents, regulatory demands, or architectural transitions. This paper examines the evolution of operational computing models that integrate computation with regulation at two distinct levels. First, Distributed Intelligent Managed Elements (DIME) extend the classical Turing cycle toward a supervised execution loop—read–check-with-oracle–compute–write—by incorporating signaling overlays and FCAPS (Fault, Configuration, Accounting, Performance, and Security) supervision into computation in progress. Second, the Autopoietic Management and Orchestration System (AMOS), grounded in the General Theory of Information, the Burgin–Mikkilineni Thesis, and Deutsch’s epistemic framework, fully decouples process executors from governance by treating any Turing-equivalent engine as a replaceable execution substrate while elevating knowledge structures—encoded as local and global Digital Genomes—to first-class operational state within a governed knowledge network. Using a distributed microservice transaction testbed, we demonstrate how this approach operationalizes topology-as-data, a capability-oriented control plane, decoupled application-layer FCAPS independent of infrastructure management, and policy-selectable consistency/availability semantics.
Our results show that the principal benefit of AMOS is not circumventing theoretical constraints such as the Consistency, Availability, and Partition tolerance (CAP) theorem, but governing their trade-offs as explicit, auditable commitments with defined convergence pathways and controlled return to a coherent system state, thereby reducing coherence debt and improving operational reliability in distributed AI-enabled enterprise systems. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
23 pages, 6049 KB  
Article
Seamless Inter-Domain Mobility with Hybrid SDN-LISP
by Kuljaree Tantayakul, Adisak Intana, Aung Aung and Riadh Dhaou
Future Internet 2026, 18(5), 227; https://doi.org/10.3390/fi18050227 - 22 Apr 2026
Viewed by 181
Abstract
Managing mobility in heterogeneous network domains remains a significant challenge in Software-Defined Networking (SDN). While SDN has effectively facilitated intra-domain mobility, inter-domain mobility has been a major issue, leading to service interruptions, packet loss, and unstable communication sessions. This article presents a new concept in mobility management: a hybrid SDN-LISP network that facilitates inter-domain communication by integrating SDN with the Locator/Identifier Separation Protocol (LISP). The main idea is to introduce a new event-based orchestration model that uses OpenFlow Packet-In messages to provide instantaneous updates to Endpoint Identifier-to-Routing Locator (EID-to-RLOC) mappings, unlike traditional LISP, which relies on timers for updates. The proposed framework has been implemented and evaluated on a Mininet-WiFi testbed under various mobility conditions. The results obtained from the experimental evaluation reveal that packet loss is reduced by 92.32% when using the proposed framework over the conventional SDN Mobility approach. Although LISP encapsulation adds a slight jitter overhead of 0.628 ms, the framework does not compromise Transmission Control Protocol (TCP) session continuity. In addition, the control plane synchronization time is also minimized to 277.5 ms. This reveals that the proposed framework is a stable mobility solution that does not depend on any conventional IP mobility solutions and can be used in future network environments requiring seamless inter-domain connectivity. Full article
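The contrast this abstract draws between timer-based LISP map expiry and event-driven updates triggered by Packet-In messages can be illustrated with a small sketch. This is illustrative only, under stated assumptions: the class, method, and field names below are hypothetical and not taken from the paper's implementation.

```python
import time

class EidRlocCache:
    """Toy EID-to-RLOC mapping cache contrasting timer-based expiry
    with event-driven updates (hypothetical names, not the paper's code)."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.entries = {}  # eid -> (rloc, install_timestamp)

    def install(self, eid, rloc, now=None):
        self.entries[eid] = (rloc, now if now is not None else time.time())

    def lookup(self, eid, now=None):
        """Timer-based semantics: a stale mapping keeps being served
        until its TTL expires, which is where packet loss accumulates."""
        now = now if now is not None else time.time()
        rloc, ts = self.entries.get(eid, (None, 0.0))
        return rloc if rloc is not None and now - ts < self.ttl else None

    def on_packet_in(self, eid, new_rloc, now=None):
        """Event-driven semantics: a Packet-In observed at the host's new
        attachment point overwrites the mapping immediately, no timer wait."""
        self.install(eid, new_rloc, now)
```

A moving host illustrates the difference: with only the timer, `lookup` returns the old RLOC for up to `ttl_seconds` after a handover, whereas `on_packet_in` repairs the mapping at the moment the controller sees traffic from the new location.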
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
18 pages, 701 KB  
Article
PatternStudio: A Neuro-Symbolic Framework for Dynamic and High-Throughput Complex Event Processing
by Jesús Rosa-Bilbao
IoT 2026, 7(2), 36; https://doi.org/10.3390/iot7020036 - 22 Apr 2026
Viewed by 92
Abstract
Complex Event Processing (CEP) is essential for real-time analytics in domains such as industrial IoT, cybersecurity, and financial monitoring, yet CEP adoption is still hindered by the difficulty of authoring temporal rules and by rigid redeployment workflows. This paper presents PatternStudio, a neuro-symbolic CEP framework that translates natural language specifications into validated event-processing patterns and executes them on a deterministic Apache Flink-based runtime without interrupting service. The generative layer is constrained to produce a typed intermediate representation, while the symbolic layer enforces validation and runtime execution guarantees. We evaluate the prototype as a single-node system-characterization study on commodity hardware representative of edge and near-edge gateways rather than microcontroller-class devices. Under this setting, PatternStudio reaches 47,910 events per second at 250 active rules while maintaining a bounded memory footprint between 1.6 GB and 1.9 GB during the reported runs. Beyond 500 active rules, throughput degradation is driven primarily by CPU saturation and alert amplification, which also explains the sharp increase in tail latency. Additional measurements with parallelism 4, a static baseline, and a two-stage NL-to-IR evaluation further show that the architecture remains functional under partitioned execution, incurs moderate dynamic-orchestration overhead, preserves rule structure reliably under natural-language authoring, and supports interchangeable LLM backends at the semantic front end. Full article
28 pages, 3851 KB  
Article
Joint Service Chain Orchestration and Computation Offloading via GNN-Based QMIX in Industrial IoT
by Xinzhi Huang and Bingxin Tian
Sensors 2026, 26(8), 2559; https://doi.org/10.3390/s26082559 - 21 Apr 2026
Viewed by 131
Abstract
In Industrial Internet of Things (IIoT) edge computing, multi-edge-server collaborative scheduling faces two core issues arising from random task arrivals, heterogeneous resources, and complex topology: traditional model-driven methods cannot adapt their decisions to dynamic environments, and conventional multi-agent reinforcement learning (MARL) fails to characterize inter-node topological dependencies and load correlations. To address these issues, this paper investigates the joint optimization of task offloading, computing resource allocation, and service function chain (SFC) orchestration in IIoT, constructs a cloud-edge-end collaborative architecture, and models the problem as a partially observable Markov decision process (POMDP) to minimize the overall system cost under multiple constraints. A graph-guided value-decomposition MARL method is proposed, which extracts spatial topology and neighborhood-load features of edge nodes via a graph neural network (GNN) and combines them with the QMIX framework to realize multi-agent centralized training and distributed execution. Simulations show that the algorithm converges stably under different server scales and task loads, significantly outperforms benchmark algorithms, and can suppress performance degradation in high-load scenarios, demonstrating its robustness and scalability in complex industrial environments. Full article
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)
38 pages, 4252 KB  
Article
System-Level Offline Time Synchronization Architecture for Distributed Electrical Signal Monitoring Using Raspberry Pi 5
by Adriana Burlibaşa, Silviu Epure, Mihai Culea, Cristinel Radu Dache, Cristian Victor Lungu, George-Andrei Marin and Ciprian Vlad
Sensors 2026, 26(8), 2519; https://doi.org/10.3390/s26082519 - 19 Apr 2026
Viewed by 164
Abstract
Accurate time synchronization is essential in distributed electrical signal monitoring, where phase coherence and event correlation depend on precise timing agreement between acquisition nodes. Conventional approaches often rely on a single synchronization source, typically internet-based Network Time Protocol (NTP) or GPS-disciplined clocks, which is impractical in isolated, offline, or cost-sensitive scenarios. This paper introduces an autonomous offline synchronization architecture for multi-node monitoring systems built on Raspberry Pi 5 (RPI5) platforms connected to a private Ethernet network. Instead of depending on one timing method, the system integrates several complementary mechanisms: battery-backed RTC persistence via the J5 interface, deterministic orchestration through systemd services, automated boot time recovery, chrony-managed NTP discipline, and Precision Time Protocol (PTP) hardware timestamping using PTP Hardware Clock (PHC). Synchronization performance is validated through continuous multi-day measurements of long-term stability, inter-node phase coherence, and short-term jitter. Controlled power-loss scenarios are also included to verify recovery behavior. The system maintains sub-microsecond alignment between nodes using only commodity hardware and no external time source. To further confirm inter-node timestamp alignment at the signal level, both hardware-based reference signal injection and software-based synchronized signal emulation are employed, providing ground-truth validation alongside scalable and reproducible evaluation. The results show that low-cost embedded hardware can support reliable, long-duration synchronization in fully offline installations. Full article
(This article belongs to the Section Sensor Networks)
29 pages, 2696 KB  
Article
B2CDMS: A Blockchain-Based Architecture for Secure and High-Throughput Classified Document Logging
by Enis Konacaklı and Can Eyüpoğlu
Electronics 2026, 15(8), 1681; https://doi.org/10.3390/electronics15081681 - 16 Apr 2026
Viewed by 187
Abstract
The secure management of classified documents containing sensitive information is critical for governments, military organizations, and industry. Traditional data loss prevention (DLP) systems lack robustness against insider threats, particularly regarding access log integrity and tamper-proof auditing. To address log security, the previous literature has proposed multiple solutions, including private and hybrid blockchain models (e.g., Ethereum + MultiChain) to ensure audit trail integrity. However, hybrid architectures often face challenges such as unpredictable transaction costs (gas fees) and potential privacy risks when scaled for enterprise DLP logs. Conversely, private architectures may require higher resources, potentially causing bottlenecks on endpoints. In this paper, we propose an optimized Blockchain-Based Classified Document Management System (B2CDMS) utilizing a permissioned architecture. Our work demonstrates the challenges, advantages, and weak points of current solutions. We optimized a permissioned blockchain (Hyperledger Fabric v2.5) with an External Chaincode Builder using the Chaincode-as-a-Service (CCaaS) pattern. We compared our proposed private architecture with a hybrid architecture (Ethereum + MultiChain) and a public solution (Ethereum). We conducted a comprehensive analysis using pseudo Trellix ePolicy Orchestrator (ePO) DLP logs. Experimental results on an Apple Silicon M4 (Apple Inc., Cupertino, CA, USA) testbed show that the proposed architecture achieves a throughput of 845.8 Transactions Per Second (TPS) with a sub-second latency of 55 ms, aiming to eliminate the bottlenecks of public blockchains.
Furthermore, the system introduces a privacy-preserving hashing mechanism (i.e., committing only deterministic Secure Hash Algorithm 256-bit (SHA-256) digests to the immutable ledger while keeping the actual sensitive Personally Identifiable Information (PII) strictly in off-chain databases) compliant with General Data Protection Regulation (GDPR). It ensures that classified document metadata remains immutable and secure against rogue access benefiting from admin privileges. This study concludes that permissioned blockchain architectures offer a scalable and resource-efficient solution for forensic evidence preservation throughout the classified document lifecycle. Full article
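The privacy-preserving hashing mechanism described above (committing only a SHA-256 digest to the immutable ledger while the PII-bearing record stays in an off-chain store) can be sketched in a few lines. This is a minimal in-memory illustration under stated assumptions, not the paper's Hyperledger Fabric chaincode; the function names and record fields are hypothetical.

```python
import hashlib
import json

def commit_document_event(event: dict, off_chain_db: dict, ledger: list) -> str:
    """Commit pattern sketch: canonicalize the record, push only its
    SHA-256 digest to the (simulated) ledger, keep PII off-chain."""
    record = json.dumps(event, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(record).hexdigest()
    off_chain_db[digest] = event   # sensitive fields never reach the ledger
    ledger.append(digest)          # tamper-evident, PII-free commitment
    return digest

def verify_document_event(digest: str, off_chain_db: dict, ledger: list) -> bool:
    """Audit sketch: recompute the hash of the off-chain record and check
    it against the ledger entry; any tampering breaks the match."""
    event = off_chain_db.get(digest)
    if event is None or digest not in ledger:
        return False
    record = json.dumps(event, sort_keys=True).encode("utf-8")
    return hashlib.sha256(record).hexdigest() == digest
```

Because `json.dumps(..., sort_keys=True)` is deterministic, the same logical record always yields the same digest, which is what makes the off-chain copy independently auditable against the ledger.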
29 pages, 1089 KB  
Article
Time-Aware Graph Neural Network for Asynchronous Multi-Station Integrated Sensing and Communications Fusion in Open RAN
by Zhiqiang Shen, Wooseok Shin and Jitae Shin
Sensors 2026, 26(8), 2376; https://doi.org/10.3390/s26082376 - 12 Apr 2026
Viewed by 262
Abstract
Multi-station sensing telemetry typically arrives out-of-order at the Open RAN (O-RAN) Near-RT RIC due to non-deterministic jitter in cloud-native protocol stacks, inducing a “temporal scrambling” effect that invalidates traditional spatial fusion. To bridge this gap, we introduce Age-of-Sensing (AoS) as a dynamic reliability metric for asynchronous sensing reports and establish an AoS-aware graph neural network (GNN) paradigm for asynchronous sensing fusion. This paradigm shifts the focus from conventional spatial-only aggregation to time-aware inference by explicitly incorporating sensing freshness into graph-based fusion. As a physics-informed realization of this paradigm, we present Time-Aware Fusion (TA-Fusion), which introduces a TA-Gate mechanism to recalibrate node trust prior to graph aggregation. Unlike passive feature concatenation, the TA-Gate serves as an active gating signal to prioritize fresh telemetry while adaptively suppressing stale outliers. On a standardized O-RAN benchmark, TA-Fusion achieves a root mean square error (RMSE) of 12.22 m, delivering a 21.7% reduction in mean absolute error (MAE) over the AoS-aware GNN baseline and maintaining robustness in extreme jitter scenarios where traditional linear methods suffer from severe accuracy degradation due to their static weighting logic. Extensive Monte Carlo simulations confirm that the framework preserves consistent error bounds across diverse base station geometries without manual recalibration. These findings support the real-time feasibility of the proposed paradigm for delay-critical Integrated Sensing and Communication (ISAC) services, providing a resilient spatial foundation for 6G orchestration under substantial network-layer jitter. Full article
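The "trust fresh, suppress stale" behaviour this abstract attributes to the TA-Gate can be illustrated with a deliberately simplified, hand-crafted analogue: an exponential freshness weight applied to each station's position estimate before averaging. The paper's gate is a learned GNN component; the decay constant, data layout, and function name below are assumptions for illustration only.

```python
import math

def freshness_weighted_fusion(reports, decay=0.5):
    """Hand-crafted freshness gating sketch (not the learned TA-Gate).

    reports: list of ((x, y), age_of_sensing_seconds) per station.
    Each estimate is down-weighted exponentially in its Age-of-Sensing,
    so fresh telemetry dominates and stale outliers are suppressed.
    """
    weights = [math.exp(-decay * age) for (_, age) in reports]
    total = sum(weights)
    x = sum(w * est[0] for w, (est, _) in zip(weights, reports)) / total
    y = sum(w * est[1] for w, (est, _) in zip(weights, reports)) / total
    return (x, y)
```

With equal ages this reduces to a plain average; as one report grows stale, its influence decays smoothly toward zero instead of being dropped by a hard cutoff.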
(This article belongs to the Special Issue Mobile Sensing and Computing in Internet of Things)
26 pages, 914 KB  
Article
AI-Amplification Indicator: An Actor-Level Scoring Framework for Ransomware Operations on the Dark Web
by Mostafa Moallim, Seokhee Lee, Ibrahim Alzahrani, Faisal Abdulaziz Alfouzan and Kyounggon Kim
J. Cybersecur. Priv. 2026, 6(2), 70; https://doi.org/10.3390/jcp6020070 - 8 Apr 2026
Viewed by 482
Abstract
Ransomware operations have evolved from isolated malware incidents into organized ransomware-as-a-service (RaaS) ecosystems that employ coordinated tactics, techniques, and procedures and increasingly rely on automation and artificial intelligence to scale intrusions. However, most assessments remain artifact-centric, focusing on malware signatures or aggregate victim counts, which provide limited visibility into differences in actor-level behavior and operational capability. This study introduces the AI-Amplification Indicator (AIAI), an interpretable actor-level scoring framework that transforms publicly observable leak-site disclosures and verifiable open-source evidence into quantitative behavioral profiles. Using continuous monitoring of dark web leak portals, we construct a standardized dataset of ransomware disclosures for 2025 with temporal, geographic, and sector metadata. AIAI measures four complementary dimensions: GenAI-enabled social engineering, operational tempo and orchestration, targeting breadth and diversification, and temporal scaling dynamics. Indicators are computed for all observed actors, while comparative profiling focuses on the ten most active actors to ensure stable behavioral estimation. The analysis reveals substantial heterogeneity in posting cadence, targeting strategies, and scaling dynamics, as well as limited but measurable evidence of automated or AI-assisted deception. These differences are not captured by victim counts alone. The proposed framework provides a transparent and reproducible approach for actor-level ransomware intelligence, enabling systematic comparison of operational styles and supporting data-driven defensive prioritization. Full article
(This article belongs to the Section Security Engineering & Applications)
31 pages, 2475 KB  
Article
Fuzzy-Logic Workload Orchestration Framework for Smart Campuses in Edge-Cloud System Architecture
by Abdullah Fawaz Aljulayfi
Electronics 2026, 15(8), 1556; https://doi.org/10.3390/electronics15081556 - 8 Apr 2026
Viewed by 325
Abstract
Transforming a conventional university campus into a smart campus by leveraging modern technologies aims to deliver university services efficiently, effectively, and at low cost. Modern technologies enhance campus life by providing services, such as smart classrooms and campus security, on demand. Seamless service delivery requires reliable and efficient access to the services that take into consideration the dynamic contextual attributes related to, e.g., end-device mobility, latency sensitivity, and resource constraints. University staff, students, and visitors frequently submit different types of service requests on the move, which requires a robust orchestration framework capable of managing these requests across edge-cloud environments. The orchestration framework needs to intelligently distribute the workload, taking into consideration the latency sensitivity requirements and contextual conditions, including resource constraints. Therefore, a fuzzy-logic orchestration framework for smart-campus environments in edge-cloud architecture is proposed. The framework incorporates key factors, including user speed, resource utilization, and request delay sensitivity, in the decision-making process to satisfy both service consumers and service providers. It prioritizes latency-sensitive requests while simultaneously enhancing resource utilization efficiency. Simulation-based experimental results demonstrate the effectiveness of the proposed framework compared with benchmark approaches in orchestrating incoming workloads under several user and contextual conditions. Additionally, the results show that the proposed framework improves the execution rate by 30% compared to benchmark models and more than doubles resource utilization efficiency. Full article
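A fuzzy-logic placement decision of the kind this framework describes can be sketched with a single hand-written rule. The paper's actual rule base, membership functions, and linguistic variables are not reproduced here; everything below, including the triangular shapes and thresholds, is an illustrative assumption.

```python
def triangular(x, a, b, c):
    """Triangular membership function, a standard fuzzy-set primitive:
    0 outside [a, c], rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def placement_score_edge(delay_sensitivity, edge_utilization):
    """Toy Mamdani-style evaluation of one rule (illustrative only):
    IF the request is delay-sensitive AND the edge node has spare
    capacity THEN prefer placing the workload at the edge.
    Both inputs are assumed normalized to [0, 1]."""
    sensitive = triangular(delay_sensitivity, 0.4, 1.0, 1.6)   # "high sensitivity"
    spare = triangular(1.0 - edge_utilization, 0.4, 1.0, 1.6)  # "low utilization"
    return min(sensitive, spare)  # fuzzy AND via min, per Mamdani inference
```

A real orchestrator would evaluate many such rules (including user speed, per the abstract) and defuzzify the aggregate; this sketch only shows how graded inputs replace a brittle hard threshold.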
33 pages, 6015 KB  
Article
Use Infrastructures and the Design Evidence Link (DEL) for Urban Climate Mitigation: An Ex Ante and Ex Post Verification of User-Centred Mitigation Impacts
by Francesca Scalisi
Sustainability 2026, 18(7), 3587; https://doi.org/10.3390/su18073587 - 6 Apr 2026
Viewed by 357
Abstract
Achieving urban climate neutrality and interim mitigation targets requires rapid demand-side emission reductions, yet current user-centred interventions remain fragmented, are often concentrated on low-impact actions, and rarely provide a traceable basis for comparing outcomes, validity conditions, and equity implications across contexts. This paper reframes demand-side mitigation as a design problem of “use infrastructures”: integrated configurations of communication, product-technology, services, interaction, and governance that make low-carbon choices practicable within everyday routines. We introduce the Design Evidence Link (DEL) as a traceability device supporting ex ante configuration (selection and orchestration of levers) and ex post verification (monitoring, attribution of outcomes, and trade-off control). Through a design-led comparative analysis of nine international cases in high-impact sectors (household energy, ground mobility, food systems, and circular economy/materials), we derive and consolidate a shared extraction and coding protocol that links determinants (barriers and enablers) to design requirements and decision-grade metrics (carbon impact, adoption, continuity, and equity), explicitly qualifying uncertainty and evidence levels. Cross-case results show that effective interventions rely less on isolated information and more on coordinated action packages that reduce cognitive and economic frictions, enhance data credibility through standards and accountability, and embed follow-up mechanisms that support behavioural continuity. DEL also surfaces recurring validity conditions and failure modes (digital exclusion, trust erosion, rebound, and lock-in), translating them into operational criteria for policy and design. Compared with behaviour-change or theory-of-change framings, DEL focuses on the observable orchestration of integrated conditions of use and on the explicit grading of evidence. 
It should therefore be read as a structured analytical–operational framework for ex ante and ex post assessment, whose transferability remains conditional on source quality, contextual prerequisites, and the limits of the selected cases. Full article
45 pages, 3695 KB  
Article
Towards a Reference Architecture for Machine Learning Operations
by Miguel Ángel Mateo-Casalí, Andrés Boza and Francisco Fraile
Computers 2026, 15(4), 218; https://doi.org/10.3390/computers15040218 - 1 Apr 2026
Viewed by 630
Abstract
Industrial organisations increasingly rely on machine learning (ML) to improve quality, maintenance, and planning in Industry 4.0/5.0 ecosystems. However, turning experimental models into reliable services on the production floor remains complex due to the heterogeneity of operational technologies (OTs) and information technologies (ITs), including implementation constraints, latency in edge-fog-cloud scenarios, governance requirements, and continuous performance degradation caused by data drift. Although Machine Learning Operations (MLOps) provides lifecycle practices for deployment, monitoring, and retraining, the evidence is fragmented across tool-centric descriptions, case-specific pipelines, and conceptual architectures, offering limited guidance on which industrial constraints should inform architectural decisions and how to evaluate solutions. This work addresses that gap through a PRISMA-guided systematic review of 49 studies on industrial MLOps (with the search and screening primarily targeting Industry 4.0/IIoT operationalisation contexts, as reflected in the search strategy and corpus) and an evidence-based synthesis of principles, challenges, lifecycle practices, and enabling technologies. From this synthesis, industrial requirements are derived that encompass OT/IT integration, edge-fog-cloud orchestration, security and traceability, and observability-based lifecycle control. On this basis, a reference architecture is proposed that maps these requirements to functional layers, data and control flows, and verifiable responsibilities. To support reproducibility and practical inspectability, the article also presents an open-source architectural instantiation aligned with the proposed decomposition. 
Finally, the evaluation is illustrated through a predictive maintenance use case (tool breakage) in a single CNC machining cell, where the objective is to demonstrate end-to-end feasibility under realistic operational constraints rather than cross-scenario superiority or broad industrial generalisability. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)
26 pages, 423 KB  
Article
Hardware-Anchored ES-SPA: A Dynamic Zero-Trust Architecture for Secure eSIM Provisioning in 6G IoT via Moving Target Defense
by Hari N. N., Kurunandan Jain, Prabu P and Prabhakar Krishnan
Future Internet 2026, 18(4), 187; https://doi.org/10.3390/fi18040187 - 1 Apr 2026
Viewed by 541
Abstract
The rapid evolution of 6G networks and large-scale Internet of Things (IoT) deployments intensifies security and privacy challenges in embedded SIM (eSIM) Remote SIM Provisioning (RSP), particularly during the bootstrap and profile delivery phases. Traditional perimeter-based and VPN-centric approaches expose static attack surfaces, making provisioning workflows vulnerable to denial-of-service (DoS) attacks, reconnaissance, and profile lock-in risks. This paper presents MTD-SDP-eSIM, a hardware-anchored Zero Trust Architecture that secures eSIM provisioning by integrating the embedded Universal Integrated Circuit Card (eUICC) as a root of trust with Software-Defined Perimeter (SDP), Software-Defined Networking (SDN), and Moving Target Defense (MTD). The framework introduces Hardware-Anchored Single Packet Authorization (ES-SPA), which cryptographically binds initial access to tamper-resistant eUICC credentials and enforces an authenticate-before-connect model. A unified Zero Trust controller dynamically orchestrates SDP access control, SDN-based micro-segmentation, and MTD-driven Network Address Shuffling during high-risk provisioning phases. This framework is validated on a high-fidelity 6G testbed built using ns-3, Open5GS, and P4-programmable switches. Experimental results demonstrate a 90% DoS survival rate during provisioning, a 35% scalability improvement over VPN-based baselines, and a 75% reduction in profile lock-in failures through runtime deletion verification. These findings confirm that anchoring dynamic network defenses in hardware-rooted identity significantly enhances the resilience, scalability, and privacy of eSIM provisioning for massive 6G IoT deployments. Full article
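The authenticate-before-connect idea behind single packet authorization can be sketched with a generic HMAC-bound frame. Note the assumptions: this sketch stands in a shared software secret where the paper anchors ES-SPA credentials in the tamper-resistant eUICC, the frame layout (id, timestamp, nonce, tag) is a common SPA shape rather than the paper's wire format, and all names are hypothetical.

```python
import hashlib
import hmac
import os
import struct
import time

def make_spa_packet(device_secret: bytes, device_id: bytes) -> bytes:
    """Build an illustrative SPA frame: device_id || timestamp(8) ||
    nonce(8) || HMAC-SHA256 tag(32). The gateway stays dark to anyone
    who cannot produce a valid tag."""
    ts = struct.pack(">Q", int(time.time()))
    nonce = os.urandom(8)  # replay-resistance would also track seen nonces
    body = device_id + ts + nonce
    tag = hmac.new(device_secret, body, hashlib.sha256).digest()
    return body + tag

def verify_spa_packet(device_secret: bytes, packet: bytes, max_skew: int = 30) -> bool:
    """Verify tag first (timing-safe), then bound the timestamp skew."""
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(device_secret, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False
    ts = struct.unpack(">Q", body[-16:-8])[0]
    return abs(int(time.time()) - ts) <= max_skew
```

Only after such a packet verifies would the controller open a narrow, per-device flow; everything else is dropped silently, which is what shrinks the static attack surface the abstract describes.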
25 pages, 11970 KB  
Article
Workload-Aware Edge Node Orchestration and Dynamic Resource Scaling in MEC
by Efthymios Oikonomou and Angelos Rouskas
Future Internet 2026, 18(4), 184; https://doi.org/10.3390/fi18040184 - 1 Apr 2026
Viewed by 458
Abstract
The emergence of edge computing introduces significant opportunities to improve real-time responsiveness and reduce latency by deploying computational resources closer to end users, at the edge, compared to traditional centralized cloud computing. However, stochastic and fluctuating workloads pose challenges in maintaining Quality of Service, often leading to resource fragmentation, service node saturation, and energy inefficiencies. In addition, imbalances in service node utilization, arising from either under-utilization or over-utilization, degrade the overall system performance and lead to unnecessary operational costs. Furthermore, finding an optimal balance between total latency cost and load balancing in different network topologies remains a significant challenge. In this research, we propose and evaluate a workload-aware orchestration framework that integrates short-term workload forecasting with dynamic resource scaling to efficiently manage edge node infrastructure under dynamic processing demands. The framework employs heuristic schemes that consider both workload distribution and service proximity throughout the edge network to optimize the distribution of edge users’ service requests across service nodes. Simulation results on grid and irregular edge network topologies, utilizing both synthetic and real-world datasets, demonstrate that the proposed framework and the integrated heuristics outperform other benchmark approaches. Specifically, our framework achieves up to 20% lower load imbalance variance, maintains high resource utilization, decreases system reconfigurations, and increases service reliability, providing a robust, low-overhead and adaptive solution for dynamic orchestration in edge computing environments and infrastructures. Full article
(This article belongs to the Special Issue Edge and Fog Computing for the Internet of Things, 2nd Edition)
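To make the idea of a proximity- and load-aware placement heuristic concrete, here is a minimal sketch. It is an illustrative toy, not the paper’s actual algorithm: the node fields (`dist`, `load`), the weight `alpha`, and the squared-deviation imbalance penalty are all assumptions introduced for this example. Each request is routed to the node that minimizes a weighted sum of latency cost and projected load imbalance.

```python
# Hypothetical sketch of workload- and proximity-aware request placement.
# Not the paper's algorithm; all names and weights here are illustrative.

def place_request(nodes, alpha=0.5):
    """Pick the edge node minimizing a weighted latency/imbalance cost.

    nodes: list of dicts with 'dist' (hops to the user) and 'load' (0..1).
    alpha: weight on proximity; (1 - alpha) weights load balancing.
    """
    mean_load = sum(n["load"] for n in nodes) / len(nodes)

    def cost(n):
        latency = n["dist"]                       # closer is cheaper
        imbalance = (n["load"] - mean_load) ** 2  # penalize hot nodes
        return alpha * latency + (1 - alpha) * imbalance

    return min(nodes, key=cost)

nodes = [
    {"id": "edge-1", "dist": 1, "load": 0.9},  # nearest, but saturated
    {"id": "edge-2", "dist": 2, "load": 0.2},  # slightly farther, lightly loaded
    {"id": "edge-3", "dist": 4, "load": 0.1},  # far and idle
]
print(place_request(nodes, alpha=0.1)["id"])  # -> edge-2
```

With a small `alpha` the balancing term dominates and the saturated nearest node is skipped; with `alpha=1.0` the heuristic degenerates to pure nearest-node placement.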

14 pages, 260 KB  
Article
Personal Financial Planning Practices of the Middle Class in Bangladesh: Implications for Financial Inclusion
by Md. Arif Hossain Mazumder and Michelle Cull
J. Risk Financial Manag. 2026, 19(4), 250; https://doi.org/10.3390/jrfm19040250 - 1 Apr 2026
Abstract
Goal-based financial management and planning for future financial needs are critical to individuals’ wellbeing in the modern world. An inability to identify key financial needs may lower one’s standard of living and lead to adverse consequences. As in other aspects of individuals’ lives, culture plays an important role in shaping attitudes and, consequently, financial planning practices. In this respect, the literature lacks studies from developing countries and South Asian cultures. This study addresses this gap through an exploratory investigation of the personal financial practices of the emerging middle class in Bangladesh. Using surveys and interviews, we find that immediate financial needs and family financial wellbeing are central to the financial concerns of the middle class. We also find that previous financial hardship, along with perceived wealth, drives savings and other financial planning practices. However, the findings suggest poor knowledge of, and lack of trust in, financial products and services, which may jeopardize financial inclusion. Orchestrated efforts from professionals, advocacy groups, and regulators are needed to build trust in the financial system and facilitate financial inclusion, a key component of economic growth and societal advancement in a developing country such as Bangladesh. Full article
28 pages, 2619 KB  
Article
A Dynamic Clustering Framework for Intelligent Task Orchestration in Mobile Edge Computing
by Mona Alghamdi, Atm S. Alam and Asma Cherif
Computers 2026, 15(4), 214; https://doi.org/10.3390/computers15040214 - 1 Apr 2026
Abstract
Mobile edge computing (MEC) enables resource-constrained mobile devices to execute delay-sensitive and compute-intensive applications by offloading tasks to nearby edge servers. However, task orchestration in MEC is challenged by highly dynamic system conditions, unreliable networks, and distributed edge environments. Moreover, as the number of mobile users, tasks, and distributed computing resources (edge/cloud servers) grows, task orchestration becomes more complex due to the expanded decision space and the need to allocate heterogeneous resources efficiently under latency and capacity constraints. As the decision space grows, exhaustive-search-based orchestration becomes computationally infeasible; clustering approaches often rely on proximity-only grouping, while learning-based solutions require extensive training and parameter tuning. To address these challenges, this paper proposes a Multi-Criteria Hierarchical Clustering-based Task Orchestrator (MCHC-TO), a novel framework that integrates multi-criteria decision-making with divisive hierarchical clustering for preference-aware and adaptive workload orchestration. Edge servers are first evaluated against multiple decision criteria, and the resulting preference rankings are used to form hierarchical preference-based clusters. Incoming tasks are then assigned to the most suitable cluster based on their requirements, enabling efficient resource utilization and dynamic decision-making. Extensive simulations conducted in an edge computing simulator demonstrate that the proposed MCHC-TO framework consistently outperforms benchmark approaches, reducing average service delay and task failure rate by up to 48% and 92%, respectively. These results highlight the effectiveness of combining multi-criteria evaluation with hierarchical clustering for robust and dynamic task orchestration in MEC environments. Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
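The combination of multi-criteria scoring and divisive clustering described above can be sketched as follows. This is a simplified toy under stated assumptions, not the MCHC-TO implementation: the server fields (`cpu_free`, `latency_ms`), the weighted-sum scoring, and the split-at-mean-score divisive step are all illustrative choices. Servers are ranked by a multi-criteria score, recursively split into preference clusters, and tasks are routed to the cluster matching their needs.

```python
# Hypothetical sketch: multi-criteria ranking + divisive preference clustering.
# Not the MCHC-TO algorithm; field names and weights are illustrative.

def score(server, weights):
    # Higher free CPU is better; higher latency is worse (subtracted).
    return weights["cpu"] * server["cpu_free"] - weights["lat"] * server["latency_ms"] / 100

def divisive_clusters(servers, weights, depth=1):
    """Recursively split the ranked server list at the mean score."""
    ranked = sorted(servers, key=lambda s: score(s, weights), reverse=True)
    if depth == 0 or len(ranked) < 2:
        return [ranked]
    mean = sum(score(s, weights) for s in ranked) / len(ranked)
    hi = [s for s in ranked if score(s, weights) >= mean]
    lo = [s for s in ranked if score(s, weights) < mean]
    return divisive_clusters(hi, weights, depth - 1) + divisive_clusters(lo, weights, depth - 1)

def assign(task, clusters):
    # Delay-sensitive tasks get the top preference cluster; best-effort
    # tasks go to the lowest cluster, keeping strong servers free.
    return clusters[0][0] if task["delay_sensitive"] else clusters[-1][0]

servers = [
    {"id": "s1", "cpu_free": 0.8, "latency_ms": 10},
    {"id": "s2", "cpu_free": 0.3, "latency_ms": 5},
    {"id": "s3", "cpu_free": 0.6, "latency_ms": 40},
    {"id": "s4", "cpu_free": 0.1, "latency_ms": 80},
]
clusters = divisive_clusters(servers, {"cpu": 1.0, "lat": 0.5}, depth=1)
print(assign({"delay_sensitive": True}, clusters)["id"])  # -> s1
```

Increasing `depth` yields up to `2**depth` progressively finer preference clusters; the key property is that the expensive ranking happens once per epoch, while per-task assignment is a cheap cluster lookup.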
Show Figures

Graphical abstract
