Search Results (2,766)

Search Parameters:
Keywords = cloud services

24 pages, 1734 KB  
Review
Recent Progress in Development of Hollow-Core Fibers for Telecommunications and Data Transmission Applications
by Krzysztof Borzycki
Photonics 2026, 13(5), 494; https://doi.org/10.3390/photonics13050494 - 15 May 2026
Abstract
The progress made in several fields after 2023 is rather significant. Attenuation achieved by the best hollow-core fibers (HCFs) was reduced to 0.05–0.10 dB/km at 1550 nm, while the lowest attenuation achieved in a single-mode fiber with a pure silica core equals 0.14 dB/km. Polarization mode dispersion (PMD) has been reduced to a level typical of SMFs through fiber spinning. In November 2024, Microsoft announced a 2-year plan to install 15,000 km of HCF cables between and within data centers processing data for Microsoft Azure cloud services. Furthermore, several HCF manufacturers have emerged: UK-based Microsoft Azure Fiber and two Microsoft subcontractors, namely Corning Inc. and Heraeus Covantics, plus two major HCF manufacturers in China, YOFC and Linfiber. Additionally, extensive work was carried out on optical amplifiers to enable new transmission bands in HCFs, both at short wavelengths (≈1300–1500 nm), with bismuth-doped active fibers, and long wavelengths (≈1700–2100 nm), with thulium- and holmium-doped fibers. On the other hand, progress in HCF standardization, splicing, and the elimination of loss bands introduced by contaminants has been marginal. Standardization is blocked by multiple fiber designs being tried, with no clear winner emerging yet. Despite this, hollow-core fibers have successfully debuted in large-scale commercial data centers and are also used in low-latency data links.

24 pages, 1434 KB  
Article
Adaptive Service Migration in Hybrid MEC–Cloud Environments: A Queueing-Theoretic Framework for Split-User Offloading
by Anna Kushchazli, Kseniia Leonteva, Darina Shiyapova, Alexandr Priscepov and Irina Kochetkova
Future Internet 2026, 18(5), 258; https://doi.org/10.3390/fi18050258 - 14 May 2026
Abstract
Resource-constrained Multi-Access Edge Computing (MEC) nodes cannot fully replace cloud infrastructure, yet existing service placement models treat edge hosting as an all-or-nothing decision. This paper proposes a queueing-theoretic framework for split-user offloading in hybrid MEC–cloud environments. The system is modeled as a Continuous-Time Markov Chain (CTMC) over a load-vector state space that admits a product-form stationary distribution. A delay-aware greedy orchestration policy determines, at every arrival and departure event, which service occupies the MEC node and how many of its users are offloaded from the cloud. Closed-form expressions are derived for average end-to-end (E2E) delay, MEC occupancy and saturation probabilities, per-service hosting probabilities, and delay-saving indicators. Numerical analysis of a five-service industrial scenario shows that the proposed split-user mechanism keeps the MEC node occupied for most of the observation time (around 97% at the baseline load), naturally prioritizes services with the largest aggregate latency benefit, and substantially reduces the average delay compared with a cloud-only configuration. The analytical results are validated by discrete-event simulation, which matches the CTMC values with relative discrepancy below 1% under the Poisson/exponential assumptions; additional simulations quantify the sensitivity to alternative arrival and service-time distributions. The framework provides analytically tractable, interpretable decision logic with negligible runtime overhead, making it a suitable analytical foundation for cloud service orchestration platforms that must meet strict QoS targets in next-generation edge networks.
(This article belongs to the Special Issue Cloud Computing and Cloud Service Orchestration)
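The paper's exact CTMC is not reproduced here; purely as an illustration of how a product-form stationary distribution of this general kind can be evaluated, the sketch below enumerates a toy multi-service state space under an assumed node capacity. The loads, capacity, and metrics are placeholders, not the authors' model.

```python
"""Illustrative sketch only: a toy product-form CTMC in the spirit of the
split-user MEC-cloud model. Loads and the capacity constraint below are
assumptions for demonstration, not the authors' model."""
from itertools import product
from math import prod, factorial

RHO = [1.2, 0.8, 0.5]   # hypothetical offered load per service (lambda_i / mu_i)
CAPACITY = 6            # hypothetical MEC node capacity (total hosted users)

# Enumerate all feasible load vectors (n_1, ..., n_k) with sum(n) <= CAPACITY.
states = [n for n in product(range(CAPACITY + 1), repeat=len(RHO))
          if sum(n) <= CAPACITY]

# Product-form weight of a state: prod_i rho_i^{n_i} / n_i!
def weight(n):
    return prod(r ** k / factorial(k) for r, k in zip(RHO, n))

G = sum(weight(n) for n in states)           # normalization constant
pi = {n: weight(n) / G for n in states}      # stationary distribution

# Metrics in the style of the paper: occupancy and saturation probabilities.
occupancy = sum(p for n, p in pi.items() if sum(n) > 0)
saturation = sum(p for n, p in pi.items() if sum(n) == CAPACITY)
print(f"P(MEC occupied) = {occupancy:.4f}, P(saturated) = {saturation:.4f}")
```

The same enumeration would yield per-service hosting probabilities by summing pi over the states in which a given service holds users on the node.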

21 pages, 3718 KB  
Article
Deep Convolutional Neural Networks for Stress Detection: A Facial Emotion-Aware Approach
by Tianrui Li and Yingjie Zhang
Electronics 2026, 15(10), 2109; https://doi.org/10.3390/electronics15102109 - 14 May 2026
Abstract
This paper proposes an intelligent stress detection method based on convolutional neural networks and the DeepFace framework, addressing the challenges of increasingly prominent global mental health issues and the limitations of traditional psychological services in terms of early warning latency and coverage. A three-level cascaded strategy combining RetinaFace, MTCNN, and OpenCV is first employed for face detection and localization, and facial expression features are extracted via the DeepFace framework. By integrating Russell's valence–arousal model with Lazarus's cognitive appraisal theory, an emotion–stress mapping rule is constructed to convert seven-category emotion probability distributions into 1–5 scale stress values. The method employs a cloud–edge collaborative flow, with feature extraction performed at the edge and original images promptly destroyed to mitigate privacy risks. Experiments on public expression datasets indicate that the method achieves above 99% face detection accuracy, 84.99% emotion recognition accuracy, and 86.09% stress assessment consistency grounded in the emotion–stress mapping rule, with an average response time of approximately 200 ms per frame. A survey of 233 respondents across multiple scenarios shows that some have limited stress self-awareness, suggesting that traditional self-reporting may have blind spots and that this method can serve as a useful supplement.
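The paper does not publish its mapping coefficients; the sketch below only illustrates the general shape of an emotion-to-stress rule built on Russell's valence–arousal plane, with every (valence, arousal) coordinate a hypothetical placeholder.

```python
"""Illustrative sketch: mapping a 7-way emotion distribution to a 1-5 stress
score via Russell's valence-arousal plane. The (valence, arousal) coordinates
below are hypothetical placeholders, not the paper's calibrated rule."""

# Hypothetical (valence, arousal) in [-1, 1] for the seven basic emotions.
VA = {
    "angry":    (-0.8,  0.8),
    "disgust":  (-0.6,  0.4),
    "fear":     (-0.7,  0.9),
    "happy":    ( 0.9,  0.5),
    "sad":      (-0.7, -0.4),
    "surprise": ( 0.2,  0.8),
    "neutral":  ( 0.0, -0.2),
}

def stress_score(probs: dict) -> float:
    """Expected stress: high arousal plus negative valence -> high stress."""
    valence = sum(p * VA[e][0] for e, p in probs.items())
    arousal = sum(p * VA[e][1] for e, p in probs.items())
    raw = (arousal - valence) / 2.0      # in [-1, 1], higher = more stressed
    return round(1 + 2 * (raw + 1), 2)   # rescale [-1, 1] -> [1, 5]

print(stress_score({"angry": 0.6, "neutral": 0.3, "sad": 0.1}))  # ~3.93
```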
18 pages, 410 KB  
Article
A Low-Code Containerized Edge Architecture for IIoT Telemetry Orchestration: Mitigating Cloud API Rate Limits Through Dual-Path Routing
by Jesús Rosa-Bilbao
Sensors 2026, 26(10), 3082; https://doi.org/10.3390/s26103082 - 13 May 2026
Abstract
This paper investigates whether a low-code workflow engine can operate as practical Industrial Internet of Things (IIoT) middleware at the edge when cloud application programming interface (API) rate limits make direct telemetry upload unsustainable. The main contribution is a dual-path architecture in which a Hot Path persists all telemetry locally, while a Cold Path selectively forwards only anomalous or summary events to cloud services. The architecture is implemented as a lightweight containerized stack based on n8n, Eclipse Mosquitto, InfluxDB, and Grafana, and evaluated on a Raspberry Pi 4 under baseline, cloud-only saturation, and edge-filtered stress scenarios. Under the cloud-only condition, the external endpoint is throttled to approximately 60 requests/min, yielding a rejection rate of 98.0% (95% Wilson confidence interval: 97.43–98.44%). Under the dual-path condition, the same inbound load is fully retained locally while outbound cloud traffic is reduced by 98.0%, thereby avoiding throttling without sacrificing edge-side data fidelity. The measured Hot Path processing latency remains around 5 ms on average, with observed peaks below 10 ms, which is compatible with soft real-time monitoring workloads. Compared with more established low-code tools such as Node-RED, the novelty of the study is not the existence of visual orchestration itself, but the combination of containerized deployment, explicit hot/cold decoupling, and an empirical rate-limit mitigation analysis focused on low-cost edge hardware.
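As a rough sketch of the routing rule at the heart of this design (not the authors' n8n workflow), the hot/cold split reduces to: persist every reading locally, forward only anomalies and periodic summaries. Field names and the threshold below are assumptions.

```python
"""Illustrative sketch of the dual-path routing rule: the Hot Path persists
every reading locally, while the Cold Path forwards only anomalous or summary
events to the rate-limited cloud API. Field names and the anomaly threshold
are assumptions for demonstration."""
import json, time

local_store = []          # stand-in for InfluxDB (Hot Path sink)
cloud_queue = []          # stand-in for the throttled cloud endpoint (Cold Path)

TEMP_LIMIT = 80.0         # hypothetical anomaly threshold (deg C)
SUMMARY_EVERY = 60        # forward one summary event per 60 readings

def on_telemetry(msg: str, seq: int) -> None:
    reading = json.loads(msg)
    reading["ts"] = time.time()
    local_store.append(reading)                      # Hot Path: always persist
    if reading["temperature"] > TEMP_LIMIT:          # Cold Path: anomalies only
        cloud_queue.append({"type": "anomaly", **reading})
    elif seq % SUMMARY_EVERY == 0:                   # ...plus periodic summaries
        window = [r["temperature"] for r in local_store[-SUMMARY_EVERY:]]
        cloud_queue.append({"type": "summary",
                            "mean_temp": sum(window) / len(window)})

for i, t in enumerate([21.5, 22.0, 95.3, 21.8], start=1):
    on_telemetry(json.dumps({"sensor": "line-1", "temperature": t}), i)
print(len(local_store), "stored locally;", len(cloud_queue), "sent to cloud")
```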
38 pages, 5046 KB  
Article
Using Sentinel-2 Time Series to Monitor the Loss of Individual Large Trees in Humanized Landscapes
by João Gonçalo Soutinho, Kerri T. Vierling, Lee A. Vierling, Jörg Müller and João F. Gonçalves
Remote Sens. 2026, 18(10), 1519; https://doi.org/10.3390/rs18101519 - 12 May 2026
Abstract
Large trees are keystone ecological structures that sustain biodiversity and ecosystem services, particularly in human-altered landscapes. However, their persistence is increasingly threatened by land-use change, urban expansion, and inadequate monitoring. This study develops and validates a scalable, automated framework for monitoring the loss of large individual trees using satellite image time series and breakpoint detection. We compared four spectral indices (SIs), the Enhanced Vegetation Index 2 (EVI2), Normalized Burn Ratio (NBR), Normalized Difference Red Edge (NDRE), and Normalized Difference Vegetation Index (NDVI), derived from Sentinel-2 imagery (2015–2025) for 691 georeferenced trees in Lousada, northern Portugal. Data were accessed and processed in Google Earth Engine and analyzed using a custom R-based workflow, including cloud masking, gap-filling, temporal interpolation, upper-envelope smoothing, deseasonalization, and break detection. Five breakpoint detection algorithms were compared: BFAST, energy-divisive, linear regression of structural changes, wild-binary segmentation, and change point models. Detected breakpoints were subsequently post-validated to determine whether they were associated with declines in SIs, using three pre-/post-breakpoint methods: comparisons of short- and long-term medians and a randomized trend analysis. As a baseline, these algorithms and their post-validation logic were compared against the Continuous Change Detection and Classification (CCDC) approach. The results indicate moderate but consistent break detection performance, with a maximum balanced accuracy of 73% (for EVI2 or NDVI, using the energy-divisive algorithm coupled with the long-term median post-validator) under conservative validation criteria and high specificity for surviving trees; CCDC ranked comparatively lower at 62%. Algorithm performance varied substantially, with the energy-divisive algorithm providing the most conservative detection and wild-binary segmentation yielding higher sensitivity. Performance was further influenced by tree structural attributes and species identity: larger, taller, and isolated trees, as well as particular genera, showed higher detection accuracy, with the genera Eucalyptus, Tilia, and Celtis yielding the top results (65–79%) and Quercus, Castanea, and Platanus the lowest (60–62%). By integrating satellite observations with large-tree inventory data from the Green Giants citizen science project, this study demonstrates the potential of decentralized, Earth observation-based monitoring to support tree-level loss assessments in fragmented landscapes. The proposed framework provides a transferable foundation for wide-scale monitoring of large trees in peri-urban and mixed-use environments.
(This article belongs to the Special Issue Urban Ecology Monitoring Using Remote Sensing)
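The paper's workflow is R-based; purely as an illustration in Python, the long-term-median post-validation idea, accepting a candidate breakpoint only if the spectral index genuinely drops afterward, might look like the sketch below. The series, break position, and threshold are hypothetical.

```python
"""Illustrative sketch (the paper's workflow is R-based): post-validating a
candidate breakpoint by comparing pre-/post-break medians of a spectral index
series, in the spirit of the long-term-median validator. The series, break
position, and drop threshold are hypothetical."""
from statistics import median

def validate_break(series, break_idx, min_drop=0.15):
    """Accept the break only if the post-break median drops by min_drop."""
    pre, post = series[:break_idx], series[break_idx:]
    return median(pre) - median(post) >= min_drop

# Hypothetical NDVI trajectory of a felled tree: stable, then a sharp drop.
ndvi = [0.82, 0.80, 0.83, 0.81, 0.79, 0.45, 0.40, 0.38, 0.42, 0.39]
print(validate_break(ndvi, break_idx=5))  # True: consistent with tree loss
```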

46 pages, 5023 KB  
Review
A Survey of Fault and Intrusion Tolerance Approaches for Scientific Workflow Scheduling in Cloud Computing
by Mazen Farid, Oluwatosin Ahmed Amodu, Heng Siong Lim, Jamil Abedalrahim Jamil Alsayaydeh, Mohammed Fadhl Abdullah and Faten A. Saif
Computers 2026, 15(5), 304; https://doi.org/10.3390/computers15050304 - 10 May 2026
Abstract
Fault tolerance is perhaps the most important consideration in providing reliable cloud services: an inherent sensitivity to failure hampers cloud services' performance and reliability, so fault tolerance becomes a required characteristic for maintaining reliability, one that is difficult to provide given the cloud's dynamic architecture and complex interdependencies. To address the issues of cloud reliability, many fault-tolerant approaches have been developed in the literature. This paper presents a recent research survey that seeks to classify the various fault and intrusion tolerance architectures. Furthermore, it provides a thorough critical analysis of existing fault tolerance, intrusion tolerance, and combined approaches aimed at enhancing the dependability, availability, and execution of cloud services. The survey also compares the frameworks of the studied systems on essential criteria such as cost, makespan, reliability, security, resource utilization, energy consumption, and failure ratio. This study aims to review the subject comprehensively so that researchers can draw insights from existing patterns in the literature and gain deeper perspectives on some of the challenging issues and prospects, supporting the development of highly resilient fault-tolerant and intrusion-tolerant scheduling algorithms for current and future cloud applications.
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

19 pages, 1889 KB  
Article
RAMI 4.0 Architecture for Industrial Traceability with Artificial Intelligence and Integrated Security
by Carlos Villafuerte, Melissa Moncayo and William Oñate
Automation 2026, 7(3), 72; https://doi.org/10.3390/automation7030072 - 8 May 2026
Abstract
The demands of competitiveness in global markets require the integration of Industry 4.0 (I4.0) digital technologies for any manufacturing company, regardless of size. Industrial operations require complete supply chain visibility to ensure data protection and authenticity throughout the process. This paper presents a distributed architecture based on RAMI 4.0, designed for product traceability in industrial environments. It integrates automation tools, IIoT communication, cloud storage, artificial intelligence, and secure data transmission using encrypted communication protocols. The system is a hybrid architecture: only the first, lower-level layer corresponds to a simulated manufacturing plant with deterministic and stochastic dynamics within the production line, while the middle and upper layers are fully implemented, with plant data transmitted to a cloud instance, stored in a PostgreSQL database, and subsequently analyzed using automated scripts. Reporting capabilities are incorporated with ChatGPT-3.5 Turbo, and visualization is provided through Odoo. Experimental tests demonstrated an average end-to-end communication latency of less than 200 ms, a packet loss rate of 2.67%, and 100% reliability in verifying requested reports when using the cognitive computing service. Furthermore, the results of the systematic vulnerability identification process show a significant reduction in overall risk for most assets, with risk levels predominantly shifting from high or moderate to low or moderate. The proposed architecture is validated in a simulated industrial environment under controlled conditions, demonstrating its viability as a prototype rather than as a fully implemented industrial solution.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)

15 pages, 1171 KB  
Article
Intelligent Task Distribution Using Hybrid Algorithms and Enhancing Performance by Integrating FRLB and PBLB
by Yahia Jazyah
Information 2026, 17(5), 445; https://doi.org/10.3390/info17050445 - 5 May 2026
Abstract
In modern computing environments characterized by high variability and complex workloads, traditional load-balancing algorithms such as Round Robin and Least Connections are often less effective at distributing tasks and maintaining optimal performance. This paper proposes a hybrid load-balancing algorithm that combines the strengths of Fastest Response Load Balancing (FRLB) and Priority-Based Load Balancing (PBLB). Through this adaptive approach, response times are minimized and load is distributed more effectively across heterogeneous server environments. The recent literature increasingly emphasizes the need for load-balancing solutions that can adapt to dynamic conditions in cloud computing, IoT, and large-scale web services; by merging FRLB and PBLB into a single hybrid mechanism, the proposed algorithm provides such a robust solution. As demonstrated through simulations, it achieves a noticeable improvement in performance, with a significant reduction in average response time compared to FRLB and PBLB individually, outperforming both classical algorithms.
(This article belongs to the Section Information and Communications Technology)
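The abstract does not specify how the two policies are weighted; the sketch below shows one plausible way a hybrid FRLB/PBLB dispatcher could score servers, with the scoring rule and all weights assumed for illustration.

```python
"""Illustrative sketch of a hybrid FRLB + PBLB dispatcher. The scoring rule
and weights are assumptions; the paper does not publish its exact formula."""

class Server:
    def __init__(self, name, avg_response_ms):
        self.name = name
        self.avg_response_ms = avg_response_ms   # FRLB signal (EWMA in practice)
        self.active = 0                          # current in-flight tasks

def dispatch(servers, task_priority, w_resp=1.0, w_load=10.0):
    """Pick the server minimizing a response-time/load score; high-priority
    tasks (priority 1) weight responsiveness more heavily than load."""
    urgency = 2.0 if task_priority == 1 else 1.0
    best = min(servers, key=lambda s: urgency * w_resp * s.avg_response_ms
                                      + w_load * s.active)
    best.active += 1
    return best

pool = [Server("a", 12.0), Server("b", 30.0), Server("c", 18.0)]
for prio in [1, 3, 1, 2]:
    print(prio, "->", dispatch(pool, prio).name)
```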

29 pages, 466 KB  
Article
A Composable Architectural Model for Digital Twin Computing Applications
by Saverio Ieva, Davide Loconte, Andrea Pazienza, Matteo Colombo, Federico Marzo, Giuseppe Loseto, Floriano Scioscia and Michele Ruta
Appl. Sci. 2026, 16(9), 4541; https://doi.org/10.3390/app16094541 - 5 May 2026
Abstract
Digital Twins (DTs) are increasingly deployed in Industry 4.0 to enable real-time monitoring, analysis, and control, yet the transition from isolated DT instances to plant-wide ecosystems across cloud and edge infrastructures introduces fragmentation and coordination challenges among heterogeneous assets, data sources, and services. This paper addresses this gap by proposing a cloud-native Digital Twin Computing Layer (DTCL) that provides a unified control and orchestration plane for composing and operating DT applications in Smart Manufacturing. The DTCL is designed as a three-tier architecture comprising a developer-facing user interface, a Deploy Engine for automated deployment and lifecycle management, and a Service Catalog of reusable, independently deployable microservices. Standardized interaction is supported through semantic DT models and API- and message-based communication mechanisms. A governance workflow, based on service discovery and validation, is introduced to support non-redundant integration and controlled evolution of services. The approach is demonstrated through a Smart Manufacturing predictive maintenance case study and further extended with a Smart Mobility scenario for urban public transport planning, highlighting the flexibility of the DTCL across different application domains. Overall, the DTCL supports modular composition, interoperability, and lifecycle governance across heterogeneous Digital Twin applications, providing a scalable foundation for both industrial and urban data-driven scenarios.
(This article belongs to the Special Issue Data-Driven Digital Twin for Smart Manufacturing and Industry 4.0)
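As a loose illustration of the governance workflow described above (not the DTCL's actual API), admitting a microservice to the Service Catalog can be modeled as a discovery-plus-validation gate; all field names and rules below are assumptions.

```python
"""Illustrative sketch of the Service Catalog governance idea: a new
microservice is admitted only after validation against the catalog (required
descriptor fields present, name not already taken), supporting non-redundant
integration. Field names and rules are assumptions, not the DTCL's API."""
from dataclasses import dataclass, field

@dataclass
class ServiceCatalog:
    services: dict = field(default_factory=dict)

    def register(self, name: str, descriptor: dict) -> None:
        """Governance gate: discovery (duplicate check) plus validation."""
        if name in self.services:                 # non-redundant integration
            raise ValueError(f"service '{name}' already in catalog")
        for key in ("api_version", "inputs", "outputs"):
            if key not in descriptor:             # descriptor validation
                raise ValueError(f"descriptor missing '{key}'")
        self.services[name] = descriptor

catalog = ServiceCatalog()
catalog.register("anomaly-detector",
                 {"api_version": "v1", "inputs": ["vibration"],
                  "outputs": ["alert"]})
print(sorted(catalog.services))
```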

22 pages, 597 KB  
Article
Scaling Computer Vision: A Comparative Analysis of Cloud Infrastructures for AI-Based Image Processing and Classification Applications
by Haojie Zheng, Carlos Reaño, Alberto Castillo, Juan F. Ariño-Sales, Álvaro Igual and Carles Igual
Electronics 2026, 15(9), 1953; https://doi.org/10.3390/electronics15091953 - 5 May 2026
Abstract
Artificial intelligence-driven computer vision has undergone rapid expansion in recent years, largely propelled by progress in deep learning techniques and the availability of extensive annotated datasets. Nevertheless, the large-scale adoption of such systems remains challenging for many organizations due to financial constraints and technological complexity. In this context, cloud computing has become an appealing alternative, as it offers elastic, on-demand resources under a pay-as-you-go model. Despite these advantages, the use of cloud platforms also introduces specific challenges for computer vision applications. One of the key open issues is whether it is better to build applications on classical Infrastructure as a Service (IaaS) or on Containers as a Service (CaaS). In this paper, we evaluate and compare these two models using a real-world use case: an AI-based image processing and classification application. The best-performing model achieved speed-ups of up to 2.12× and reduced resource consumption and costs by up to 22% compared with the other evaluated alternatives.

30 pages, 1749 KB  
Article
Constructing an Ensemble Stacking Model for Detecting DDoS Attacks
by Chin-Ling Chen and Wan-Jing Lee
Telecom 2026, 7(3), 51; https://doi.org/10.3390/telecom7030051 - 5 May 2026
Abstract
Distributed Denial-of-Service (DDoS) attacks continue to escalate in scale and complexity, posing significant threats to modern network infrastructures and cloud services. Although many machine learning and deep learning approaches have been proposed for intrusion detection, most existing studies rely on raw traffic features and binary classification, which limits their ability to capture complex temporal characteristics of multi-class DDoS attacks. To address these challenges, this study proposes an ensemble stacking framework combined with a frequency-domain feature representation for DDoS detection using the CIC-DDoS2019 dataset. Random Forest (RF), AdaBoost, and XGBoost are employed as base learners, while Logistic Regression is adopted as the meta-learner, and grid search cross-validation is used to determine the optimal hyperparameters. The main contributions of this study are threefold. First, a feature extraction pipeline integrating Fast Fourier Transform (FFT), sliding-window segmentation, and SHA256-based deduplication is proposed to capture temporal–frequency characteristics of network traffic while reducing redundant feature segments. Second, a stacking ensemble model is constructed to integrate heterogeneous classifiers and improve classification robustness across multiple attack types. Third, the proposed framework significantly improves computational efficiency by reducing feature redundancy, leading to substantial reductions in model training time. Experimental results demonstrate that the proposed FFT + SHA256 + SW stacking model achieves near-perfect detection performance, with an accuracy of 0.9997 and an F1-score of 0.9998 on the original dataset, which further improves to an accuracy of 0.9998 and an F1-score of 0.9999 when combined with SMOTE. Statistical evaluation using the Friedman test confirms that the stacking model consistently achieves the best ranking among the evaluated classifiers. The results indicate that the proposed approach provides an accurate, efficient, and scalable solution for large-scale DDoS attack detection.
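A minimal scikit-learn sketch of the stacking arrangement described above is shown below on synthetic data, with a gradient-boosting stand-in for XGBoost so the example carries no extra dependency; the FFT, sliding-window, SHA256-deduplication, and SMOTE steps are omitted.

```python
"""Illustrative sketch of the stacking arrangement on synthetic data. The
FFT + sliding-window + SHA256 pipeline and CIC-DDoS2019 preprocessing are
omitted, and sklearn's GradientBoostingClassifier stands in for XGBoost."""
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multi-class attack traffic features.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)                   # out-of-fold predictions feed the meta-learner
print("accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```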

27 pages, 1540 KB  
Article
Quantitative Analysis of Information Security and Privacy Challenges in Government Cloud Service Adoption
by Ndukwe Ukeje, Jairo A. Gutierrez and Krassie Petrova
Information 2026, 17(5), 440; https://doi.org/10.3390/info17050440 - 2 May 2026
Abstract
Government adoption of cloud computing is critical for digital transformation, but it faces persistent concerns over information security, privacy, governance, and risk. This study examines the factors influencing a government's intention to adopt cloud services, adapting the Unified Theory of Acceptance and Use of Technology (UTAUT) with constructs tailored to the public sector. A cross-sectional survey was conducted across 90 Nigerian government organisations, producing 230 valid responses from IT professionals, administrators, and policy personnel. The statistical analysis of the data was conducted using SPSS and structural equation modelling in AMOS. Validity and reliability were confirmed through composite reliability, Cronbach's alpha, and discriminant validity measures. Findings show that privacy (β = 0.11, p < 0.05), governance framework (β = 0.34, p < 0.001), performance expectancy (β = 0.38, p < 0.001), and information security (β = 0.10, p < 0.05) significantly influence government intention to adopt cloud services. Performance expectancy emerged as the strongest predictor. Contrary to expectations, perceived risk did not significantly moderate the relationships, and interaction terms were non-significant. The final model explained 45% of the variance in adoption intention (R² = 0.45). The study highlights the importance of strengthening governance frameworks, emphasising tangible performance outcomes, and positioning information security and privacy as enablers of adoption rather than barriers. By adapting UTAUT to the government context and disentangling the role of perceived risk, the study offers both theoretical refinement and practical guidance for policymakers aiming to accelerate digital transformation and secure cloud adoption.
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing, 2nd Edition)

26 pages, 685 KB  
Article
Experimental Evaluation of Serverless Data Layer Architectures for Smart City Internet of Things Applications
by Victor Ariel Leal Sobral and Jonathan L. Goodall
Smart Cities 2026, 9(5), 80; https://doi.org/10.3390/smartcities9050080 - 1 May 2026
Abstract
Comparative, experimentally grounded evidence for selecting smart city IoT data-layer architectures remains limited, complicating practical design decisions. This study provides an applied architecture decision-making guide by evaluating seven serverless data-layer architectures within a clearly defined service boundary (The Things Network, Azure-managed ingestion services, and Delta Lake persistence on object storage). Using a 21-day pilot deployment with nine LoRaWAN sensors, we compare ingestion completeness, median ingestion latency (estimated from TTN receive timestamps to Delta Lake commit times), cloud costs within an explicit boundary (ingestion, compute, and storage), and implementation/operational complexity proxies. Under the observed workload, TTN Storage Integration offers the lowest-cost archival ingestion via batching, Event Grid provides the most cost-effective near-real-time option among reliable pipelines, and Event Hubs demonstrates the highest ingestion completeness. The results are synthesized into practical guidance that maps common smart city application requirements to appropriate serverless ingestion patterns.

22 pages, 1973 KB  
Article
A Task Scheduling and Management Platform for Multi-Workload Smart Elderly Care on Pure-Edge CPU-TPU Heterogeneous Nodes
by Tuo Nie, Dajiang Yang, Xin Guo, Wenxuan Zhu and Bochao Su
Future Internet 2026, 18(5), 242; https://doi.org/10.3390/fi18050242 - 1 May 2026
Abstract
Smart care applications impose increasingly stringent requirements on low-latency execution, privacy preservation, and continuous monitoring. These requirements are driving intelligent services from cloud-centric architectures toward edge-side deployment. When multiple care-related workloads are deployed on resource-constrained edge devices, performance bottlenecks arise not only from model inference itself, but also from process scheduling, inter-process communication, and resource coordination overhead. To address this issue, this paper presents a task scheduling and management platform for multi-workload smart elderly care on a single pure-edge CPU–TPU heterogeneous node. The platform adopts a shared-memory and event-driven synchronization mechanism together with fine-grained process partitioning, thereby establishing a data-sharing and runtime-coordination framework for concurrent multi-workload execution. To evaluate the effectiveness of the proposed platform, experiments were conducted under single-workload, multi-workload, multi-resolution, and long-term runtime settings. The results show that, compared with two baseline schemes, the proposed platform improves the average frame rate by 66.7% and 71.1%, reduces net memory usage by 96.3% and 45.3%, and lowers net power consumption by 46.8% and 37.7%, respectively, under the single-workload setting. Under 10 concurrent workload instances, the system still maintains a stable frame rate of 42.03 ± 0.73 fps, demonstrating strong concurrency scalability. Multi-resolution experiments further indicate that performance degradation at higher resolutions is driven mainly by the front-end data supply stage. A continuous 10-day runtime experiment additionally verifies the sustained operating capability and resource stability of the platform under pure-edge deployment. These results demonstrate that node-level shared-memory and event-driven coordination can effectively improve the execution efficiency, scalability, and stability of real-time multi-workload analytics on such pure-edge heterogeneous nodes, providing a useful basis for future extensions to multi-node edge environments and edge–cloud collaborative task scheduling.
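As a minimal single-producer illustration of the node-level shared-memory and event-driven coordination pattern (not the platform itself, which targets CPU–TPU nodes and many concurrent workloads), the sketch below hands a frame from a capture process to a worker without copying; the frame geometry is an assumption.

```python
"""Illustrative sketch of shared-memory + event-driven coordination on one
node: a capture process publishes a frame into a shared buffer and signals a
worker via an event, avoiding per-frame copies through queues. The frame
geometry below is an assumption for demonstration."""
import numpy as np
from multiprocessing import Event, Process, shared_memory

SHAPE, DTYPE = (480, 640, 3), np.uint8   # hypothetical frame geometry

def producer(shm_name, frame_ready):
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    frame[:] = 255                        # "capture": fill with dummy pixels
    frame_ready.set()                     # event-driven hand-off, no copy
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=int(np.prod(SHAPE)))
    frame_ready = Event()
    Process(target=producer, args=(shm.name, frame_ready)).start()
    frame_ready.wait()                    # worker wakes only when data exists
    frame = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    print("frame mean:", frame.mean())    # inference would run here
    shm.close(); shm.unlink()
```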

3 pages, 132 KB  
Editorial
Scalable and Distributed Cloud Continuum Orchestration for Next-Generation IoT Applications: Latest Advances and Prospects—2nd Edition
by Dimitrios Dechouniotis and Ioannis Dimolitsas
Future Internet 2026, 18(5), 240; https://doi.org/10.3390/fi18050240 - 1 May 2026
Abstract
With the advent of the Internet of Things (IoT), the centralized cloud computing service delivery paradigm has been gradually transformed into a cloud continuum that includes edge and fog computing and heterogeneous IoT devices with varying computing and power capabilities [...]