Search Results (81)

Search Parameters:
Keywords = workload concurrency

23 pages, 1753 KB  
Article
A Hybrid Knowledge Extraction Method to Support Early Concurrent Engineering in the Aerospace Industry
by Eliott Duverger, Rebeca Arista, Alexis Aubry and Eric Levrat
Aerospace 2026, 13(4), 337; https://doi.org/10.3390/aerospace13040337 - 3 Apr 2026
Viewed by 139
Abstract
In the early stages of concurrent engineering, the ability to assess design change impact is fundamentally limited by the availability of expert knowledge. Knowledge-Based Engineering (KBE) provides structured approaches for the capture, formalization, management, and diffusion of knowledge within complex organizations. KBE has increasingly turned toward ontology-based methodologies, leveraging their robust framework for shared conceptualization and reasoning capabilities. Integrated with Model-Based Systems Engineering (MBSE), such Ontology-Based Engineering (OBE) methodologies provide the necessary infrastructure for knowledge-driven workflows in a Digital Engineering (DE) context. Such integration is critical for complex engineering sectors such as the aerospace industry. However, the traditional knowledge acquisition process is expert-centric and, consequently, resource-intensive. The digital transformation of the industry has led to an explosion of data volumes and turned attention toward statistical approaches. This study implements a hybrid knowledge acquisition method within the OBE framework and MBSE environment. Specifically, this method combines human expertise and interpretable machine learning techniques to formalize knowledge models and instantiate them with concrete design rules. Applied to a real-world use case involving workload estimation, the method aims to enhance cross-domain collaboration during the conceptual design phase of new aircraft. Full article

25 pages, 2191 KB  
Article
Storage I/O Characterization for an Embedded Multi-Sensor Platform: Performance Bottlenecks and Design Guidelines
by Luca Notarianni, Roberto Bagnato, Anna Sabatini, Giulia Di Tomaso and Luca Vollero
Electronics 2026, 15(7), 1490; https://doi.org/10.3390/electronics15071490 - 2 Apr 2026
Viewed by 243
Abstract
Microcontroller-based embedded systems integrating multiple sensors are increasingly required to support continuous data acquisition, on-board processing, and long-term storage within tightly coupled hardware–software architectures. In such platforms, overall performance is often constrained not by computational capability but by storage I/O behavior, particularly under real-time constraints and concurrent workloads. This study presents a comprehensive empirical evaluation of eMMC storage performance on an STM32U5 microcontroller running the ThreadX RTOS. The proposed methodology combines multi-dimensional stress testing, controlled task concurrency (0–4 tasks), and long-duration aging analysis (90 h), together with timing variability assessment under electrical stress and interrupt-driven preemption. Both synthetic workloads and realistic sensor-node scenarios with heterogeneous and asynchronous access patterns are considered. The results highlight significant performance limitations, including up to 98% throughput degradation under four concurrent tasks and a nonlinear increase in metadata latency as free space decreases below 40% (from 10 ms to over 200 ms for file creation). Additionally, timing jitter increases by 2–5× under voltage variation and interrupt load. Based on these findings, practical firmware-level design guidelines are derived, including sector-aligned buffering, dedicated I/O task architectures, and proactive capacity management, enabling substantial improvements in throughput and latency. This study provides quantitative insights and reproducible methodologies for optimizing storage subsystems in multi-sensor embedded applications. Full article
(This article belongs to the Special Issue Embedded Systems and Microcontroller Smart Applications)
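The paper measures throughput collapse as writer-task concurrency grows on an STM32U5/ThreadX eMMC stack. As a rough host-side analogue (all names hypothetical, and host filesystems behave very differently from bare eMMC), the same kind of experiment can be sketched with concurrent writer threads:

```python
import os
import tempfile
import threading
import time

def write_worker(path, block, blocks):
    """Sequentially write `blocks` copies of `block`, then force to disk."""
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

def throughput_mb_s(n_tasks, block_size=4096, blocks=256):
    """Aggregate write throughput (MB/s) with n_tasks concurrent writers."""
    block = b"\x00" * block_size
    tmpdir = tempfile.mkdtemp()
    threads = [
        threading.Thread(
            target=write_worker,
            args=(os.path.join(tmpdir, f"t{i}.bin"), block, blocks),
        )
        for i in range(n_tasks)
    ]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall = time.perf_counter() - t0
    return n_tasks * blocks * block_size / 1e6 / wall

for n in (1, 2, 4):
    print(f"{n} writer task(s): {throughput_mb_s(n):.1f} MB/s")
```

Comparing the per-concurrency numbers against the single-writer baseline is what yields degradation figures like the 98% reported above.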

19 pages, 1710 KB  
Article
Energy Behavior of AI Workloads Under Resource Partitioning in Multi-Tenant Systems
by Jiyoon Kim, Siyeon Kang, Woorim Shin, Kyungwoon Cho and Hyokyung Bahn
Appl. Sci. 2026, 16(7), 3129; https://doi.org/10.3390/app16073129 - 24 Mar 2026
Viewed by 179
Abstract
Traditional cloud pricing models are allocation-centric, where users are charged based on reserved resources rather than workload energy consumption. However, modern AI workloads exhibit substantial and heterogeneous power behavior, limiting the effectiveness of such allocation-centric pricing. This paper presents a comprehensive experimental study of nine widely used workloads across 50 controlled configurations, including standalone and concurrent executions under varying resource partitions. Our results show that total system power is largely unaffected by how resources are divided among co-located workloads, except in cases of explicit resource under-provisioning or severe resource contention. Across 45 workload–core groups, 41 exhibit a coefficient of variation below 3% across different co-located workloads, demonstrating structural stability of workload-level power profiles under heterogeneous execution environments. In contrast, deployment choice (e.g., CPU versus GPU execution) can shift the same model into distinct power regimes. Based on measured power decomposition and scaling behavior, we derive an empirical categorization framework distinguishing GPU-dominant and CPU-dominant workloads, further characterized by utilization and memory dimensions. From an energy perspective, CPU utilization (for CPU-dominant workloads) and SM utilization (for GPU-dominant workloads) emerge as the primary determinants of power magnitude, while memory-related parameters contribute marginally to overall power. These findings provide empirical evidence that allocation-based pricing is a weak proxy for actual energy cost and motivate energy-aligned cloud management strategies grounded in workload power profiles. As our findings are derived from a controlled single-node experiment, evaluations under more realistic data center environments will be required for further generalization. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
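The stability claim above rests on the coefficient of variation (population standard deviation over mean) of a group's power draw across co-location partners. A minimal sketch with hypothetical wattage samples:

```python
from statistics import mean, pstdev

def coefficient_of_variation(samples):
    """CV in percent: population standard deviation over the mean."""
    return 100.0 * pstdev(samples) / mean(samples)

# Hypothetical mean system power (W) for one workload–core group, measured
# while co-located with five different neighbor workloads.
power_w = [212.0, 214.5, 210.8, 213.2, 211.9]
cv = coefficient_of_variation(power_w)
print(f"CV = {cv:.2f}%")   # well under the 3% stability threshold cited above
```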

19 pages, 1639 KB  
Article
Generic XDP Under Embedded SoC Constraints: A Lightweight L2 Data Plane with Cost Decomposition Analysis
by Ruijia Yao, Hongyang Xiao, Zhengwang Xu, Runyang Xiao and Zhou Huang
Electronics 2026, 15(6), 1281; https://doi.org/10.3390/electronics15061281 - 19 Mar 2026
Viewed by 264
Abstract
With the proliferation of edge computing and the Internet of Things (IoT), embedded SoC devices are increasingly expected to provide data-plane functions such as network access, data aggregation, and Layer-2 forwarding. On resource-constrained platforms, however, the conventional Linux Bridge forwarding path traverses a relatively long protocol stack and bridging pipeline, incurring a high per-packet processing cost and making the system prone to CPU saturation under small-packet and high-concurrency workloads. Meanwhile, high-performance forwarding solutions designed for server environments often rely on specific hardware and driver capabilities; most legacy NICs in embedded deployments only support XDP in generic mode, making direct reuse of these solutions difficult. To address these challenges, this paper designs and implements a lightweight Layer-2 data-plane system based on XDP generic, tailored for embedded SoC constraints. The system performs minimal forwarding decisions early in the kernel receive path and streamlines the data-path structure via direct redirection, thereby reducing the structural overhead caused by repeated protocol-stack traversal. To provide a unified explanation for performance evolution across packet sizes, we further decompose the forwarding cost under the dual upper bounds of CPU capacity and link bandwidth. This framework characterizes the cost differences between XDP generic and Linux Bridge across workload regimes and elucidates the mechanism by which the bottleneck shifts from CPU-bound to bandwidth-bound operation. Experimental results on an RK3588 platform show that, under embedded generic-XDP constraints, the proposed design exhibits reproducible structural advantages over Linux Bridge in the CPU-bound small-packet and high-PPS regime, while the gap naturally narrows as the bottleneck shifts toward the link-bandwidth limit. 
These results indicate that, even under the strong constraints of XDP generic—where copy operations and sk_buff-related overheads remain—the forwarding path can still benefit from structural simplification in embedded environments. The study therefore provides empirical support for both the source of benefit and its applicability boundary and offers an explanatory framework for understanding lightweight Layer-2 forwarding on resource-constrained edge devices. Full article
(This article belongs to the Section Networks)
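The dual-upper-bound decomposition can be sketched numerically: achievable packet rate is the minimum of a CPU cap (cycles per second over per-packet cost) and a link cap (line rate over on-wire bits per packet). The cycle cost and clock below are illustrative numbers, not measurements from the paper:

```python
def achievable_pps(pkt_bytes, cycles_per_pkt, cpu_hz=2.0e9, link_bps=1.0e9,
                   framing_bytes=20):
    """Forwarding-rate upper bound under the dual caps: CPU cycles per
    packet versus on-wire bits per packet (preamble + inter-frame gap)."""
    pps_cpu = cpu_hz / cycles_per_pkt
    pps_bw = link_bps / (8 * (pkt_bytes + framing_bytes))
    return min(pps_cpu, pps_bw), ("cpu" if pps_cpu < pps_bw else "bandwidth")

for size in (64, 256, 1500):
    pps, bound = achievable_pps(size, cycles_per_pkt=2500)
    print(f"{size:5d} B: {pps / 1e6:.3f} Mpps ({bound}-bound)")
```

Small packets saturate the CPU first; as packet size grows, the bandwidth cap takes over, which is exactly the bottleneck shift the abstract describes.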

26 pages, 1916 KB  
Article
Sensing Cognitive Responses Through a Non-Invasive Brain–Computer Interface
by Hristo Hristov, Zlatogor Minchev, Mitko Shoshev, Irina Kancheva, Veneta Koleva, Teodor Vakarelsky, Kalin Dimitrov and Dimiter Prodanov
Sensors 2026, 26(6), 1892; https://doi.org/10.3390/s26061892 - 17 Mar 2026
Viewed by 416
Abstract
Cognitive stress, also known as mental workload, constitutes a central topic within the field of psychophysiology due to its role in modulating attention, autonomic regulation, and stress reactivity. Furthermore, it bears direct relevance to practical monitoring systems that employ non-invasive sensing techniques. This study investigates whether a multimodal, non-invasive measurement setup can detect systematic physiological differences between resting periods and short episodes of cognitive load within the same individuals. Additionally, it explores the capacity of such a system to differentiate tasks characterized by varying cognitive demands. A sequential, within-subject protocol was employed, comprising five consecutive phases (rest 1, Stroop, rest 2, subtraction, rest 3), during which five modalities were recorded concurrently: EEG, heart rate (HR), galvanic skin response (GSR), facial surface temperature, and oxygen saturation (SpO2). Beyond phase-wise inspection of time-series data, an exploratory assessment of similarity across participants was conducted using correlation coefficients. The maximum cross-participant correlations observed were 0.88 (HR), 0.90 (GSR), 0.83 (facial temperature), and 0.77 (SpO2); however, these correlations were used only as exploratory descriptors of inter-individual similarity and did not imply a significant phase effect. For inferential analysis, phase-wise epoch means were evaluated through one-factor repeated-measures ANOVA. The heart rate exhibited a robust main effect of phase (F(4, 32) = 10.5862, p_GG = 0.01044, ηp² = 0.5696), with higher HR observed during cognitive load epochs (e.g., 77.841 ± 11.777 bpm at rest 1 versus 83.926 ± 14.532 bpm during subtraction). The relatively large standard deviation reflects variability between subjects rather than variability within epochs.
Regarding processed baseline-referenced GSR, the omnibus phase effect was not statistically significant under the conservative Greenhouse–Geisser correction; therefore, GSR was interpreted as exploratory in this dataset. Facial temperature and SpO2 likewise did not show statistically significant omnibus phase effects under Greenhouse–Geisser correction (e.g., SpO2: p_GG = 0.1209). EEG-derived measures provide supplementary central evidence of task engagement; entropy variations within an approximate dynamic range of 0.2 to 0.8 were observed, and the α/θ ratios demonstrated nearly a twofold distinction between rest and cognitive load epochs across different leads. Full article
(This article belongs to the Special Issue Biosignal Sensing Analysis (EEG, EMG, ECG, PPG) (2nd Edition))
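The exploratory cross-participant similarity above is a plain Pearson correlation between two participants' phase-wise series. A minimal sketch with hypothetical heart-rate values (not the study's data):

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical phase-wise mean heart rates (bpm) for two participants over
# rest 1, Stroop, rest 2, subtraction, rest 3: both rise under load.
hr_p1 = [77.8, 84.1, 78.5, 83.9, 77.2]
hr_p2 = [72.3, 79.0, 73.1, 80.2, 71.8]
print(f"cross-participant HR similarity r = {pearson(hr_p1, hr_p2):.2f}")
```

As the abstract stresses, a high r here only describes that two participants' profiles move together; it is not a test of a phase effect.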

17 pages, 631 KB  
Article
Effective Cloud–Edge Workflow Scheduling via Decoupled Offline Learning and Unified Sequence Modeling
by Zhuojing Tian, Dianxi Shi, Yushu Chen and Wenlai Zhao
Appl. Sci. 2026, 16(5), 2496; https://doi.org/10.3390/app16052496 - 5 Mar 2026
Viewed by 289
Abstract
Efficient workflow scheduling in cloud–edge environments is severely bottlenecked by long-horizon dependencies and myopic resource fragmentation. This paper proposes the Decoupled Offline Sequence-based (DOS) scheduling framework to address these challenges. By decoupling expert policy learning from runtime deployment, DOS utilizes a multi-dimensional priority-aware linearization strategy to deterministically transform DAG-structured workflows into dependency-consistent sequences. Leveraging offline expert trajectories, we train UDC, a Gated CNN achieving unified sequence modeling via innovative triplet-to-unary encoding, equipped with explicit action masking to distill long-horizon spatio-temporal packing patterns. This mechanism enables rapid feed-forward inference without costly online environment interactions or policy updates. Extensive evaluations on real-world Alibaba cluster workloads demonstrate that DOS not only consistently minimizes average makespan compared to classical heuristics, but also drastically reduces resource-blocked steps under extreme concurrency versus online Actor–Critic experts. Crucially, compared to the Decision Transformer (DT) baseline, the UDC model achieves strictly scale-invariant and significantly lower inference latency, highlighting its robust scalability and practicality for large-scale continuum systems. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
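A dependency-consistent linearization of a DAG with priority tie-breaking can be sketched as Kahn's topological sort driven by a heap; this is a simplified analogue of the multi-dimensional priority-aware strategy described above, with a single scalar priority standing in for the paper's multi-dimensional key:

```python
import heapq
from collections import defaultdict

def linearize(tasks, deps, priority):
    """Emit tasks in a dependency-consistent order; among ready tasks,
    always pick the one with the smallest priority key (most urgent)."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for child, parents in deps.items():
        for p in parents:
            succ[p].append(child)
            indeg[child] += 1
    ready = [(priority[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for c in succ[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(ready, (priority[c], c))
    if len(order) != len(tasks):
        raise ValueError("cycle detected")
    return order

tasks = ["A", "B", "C", "D"]
deps = {"C": ["A", "B"], "D": ["A"]}     # C needs A and B; D needs A
prio = {"A": 0, "B": 2, "C": 1, "D": 3}  # lower = more urgent
print(linearize(tasks, deps, prio))      # ['A', 'B', 'C', 'D']
```

Because the heap makes every tie-break deterministic, the same DAG always maps to the same sequence, which is what lets a sequence model consume it.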

21 pages, 1036 KB  
Article
Spec-LAMP: Robust Spectre Attack Detection Under Web-Based LLM Workload via L1D Miss Pending Event
by Jiajia Jiao, Quan Zhou and Yulian Li
Entropy 2026, 28(3), 254; https://doi.org/10.3390/e28030254 - 26 Feb 2026
Viewed by 332
Abstract
As Large Language Models (LLMs) become increasingly integrated into web environments, they introduce complex microarchitectural noise that challenges existing hardware security mechanisms. This paper investigates the impact of concurrent web-based LLM workloads on the detection accuracy of Spectre attacks. Firstly, we constructed a representative dataset by executing multiple web-accessible LLMs (e.g., DeepSeek, Kimi, Doubao and Qwen) alongside Spectre attacks, capturing the specific interference patterns introduced by these AI workloads. Experimental analysis reveals that traditional Hardware Performance Counter (HPC)-based detectors, relying primarily on branch prediction and Last-Level Cache (LLC) events, suffer significant accuracy degradation due to the masking effects of LLM-induced noise. To address this limitation, we then propose Spec-LAMP, a novel Spectre attack detector that augments conventional HPC feature sets with the L1D Miss Pending event. This new metric specifically captures unresolved speculative memory dependencies, a distinctive characteristic of Spectre attacks that remains discernible even under web-accessible LLM interference. Comparative statistical analysis demonstrates that incorporating this event significantly enhances the separability between malicious and benign executions. Finally, experimental results show that our proposed feature augmentation effectively restores detection performance, increasing average accuracy from 85.15% to 98.43% and demonstrating superior robustness compared to traditional approaches in realistic web-based LLM scenarios. Full article
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
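One simple way to quantify how well a single counter separates benign from malicious runs is the Fisher criterion (squared class-mean gap over summed class variances); this is an illustrative stand-in for the paper's comparative statistical analysis, and the counter readings below are invented:

```python
from statistics import mean, pvariance

def fisher_separability(benign, attack):
    """Fisher criterion: squared class-mean gap over summed class variances.
    Larger values mean the feature splits the two classes more cleanly."""
    gap = mean(attack) - mean(benign)
    return gap * gap / (pvariance(benign) + pvariance(attack))

# Hypothetical normalized counter readings per sampling window.
llc_miss    = {"benign": [0.30, 0.35, 0.32, 0.31], "attack": [0.38, 0.41, 0.36, 0.40]}
l1d_pending = {"benign": [0.10, 0.12, 0.11, 0.10], "attack": [0.55, 0.60, 0.58, 0.57]}

for name, d in (("LLC miss", llc_miss), ("L1D miss pending", l1d_pending)):
    score = fisher_separability(d["benign"], d["attack"])
    print(f"{name:18s} separability = {score:.1f}")
```

In this toy setup the L1D-pending feature scores far higher, mirroring the abstract's claim that it stays discernible where LLC-based features are masked by noise.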

20 pages, 788 KB  
Article
Efficient Management of High-Frequency Sensor Data Streams Using a Read-Optimized Learned Index
by Hu Luo, Jiabao Wen, Desheng Chen, Zhengjian Li, Meng Xi, Jingyi He, Shuai Xiao and Jiachen Yang
Sensors 2026, 26(4), 1217; https://doi.org/10.3390/s26041217 - 13 Feb 2026
Viewed by 385
Abstract
The rapid growth of sensor data in IoT and Digital Twins necessitates high-performance spatial indexing. Traditional indexes like R-trees suffer from high storage overhead, while state-of-the-art learned indexes like GLIN encounter a “Refinement Bottleneck” due to coarse-grained Minimum Bounding Rectangle (MBR) filtering. Furthermore, existing solutions often trade update throughput for query accuracy, failing in dynamic IoT workloads with concurrent reads and writes. We propose DyGLIN (Dynamic Generate Learning-Based Index), a dynamic, read-optimized learned spatial index tailored for high-frequency sensor streams. DyGLIN introduces a decoupled leaf architecture separating query processing from data maintenance. To accelerate queries, we implement a hierarchical filtering pipeline using hierarchical MBRs (HMBR) and Cuckoo Filters to aggressively prune false positives. For maintenance, a Delta Buffer mechanism amortizes update costs, while logical deletion ensures high throughput. Experiments on real-world datasets show that DyGLIN reduces query latency by 26.4% [95% CI: 20.1%, 38.6%] compared to GLIN. It achieves 30.0% [95% CI: 21.4%, 35.9%] higher insertion throughput and superior deletion performance, with only an 18.5% [95% CI: 16.8%, 19.8%] increase in memory overhead. Full article
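The delta-buffer-plus-logical-deletion idea generalizes beyond spatial indexing. A minimal one-dimensional sketch (not DyGLIN itself, which is spatial and uses HMBR/Cuckoo filtering): lookups hit a sorted main array via binary search plus a small buffer, inserts land in the buffer and are merged in bulk, and deletes are tombstones:

```python
import bisect

class DeltaBufferedIndex:
    """Read-optimized sorted index: point lookups consult a sorted main
    array plus a small delta buffer; inserts go to the buffer and are
    merged in bulk to amortize update cost; deletes are logical."""
    def __init__(self, keys=(), buffer_limit=64):
        self.main = sorted(keys)
        self.delta = set()
        self.tombstones = set()
        self.buffer_limit = buffer_limit

    def insert(self, key):
        self.tombstones.discard(key)
        self.delta.add(key)
        if len(self.delta) >= self.buffer_limit:
            self._merge()

    def delete(self, key):            # logical deletion: no array rewrite
        self.delta.discard(key)
        self.tombstones.add(key)

    def contains(self, key):
        if key in self.tombstones:
            return False
        if key in self.delta:
            return True
        i = bisect.bisect_left(self.main, key)
        return i < len(self.main) and self.main[i] == key

    def _merge(self):
        live = (set(self.main) | self.delta) - self.tombstones
        self.main = sorted(live)
        self.delta.clear()
        self.tombstones.clear()

idx = DeltaBufferedIndex(range(0, 100, 2))
idx.insert(13)
idx.delete(10)
print(idx.contains(13), idx.contains(10), idx.contains(12))  # True False True
```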

27 pages, 10069 KB  
Article
Accelerating CNN Inference via In-Network Computing in Information-Centric Networking
by Kaiwei Hu, Haojiang Deng and Botao Ma
Electronics 2026, 15(4), 775; https://doi.org/10.3390/electronics15040775 - 11 Feb 2026
Viewed by 267
Abstract
Although Convolutional Neural Networks (CNNs) have achieved remarkable accuracy in intelligent tasks, their increasing complexity hinders low-latency execution. While edge computing mitigates the wide-area network delays typical of cloud-based inference, it remains constrained by limited computational resources when processing complex models under high concurrency. Collaborative inference has emerged as a promising paradigm to address these limitations; however, existing approaches often struggle with rigid routing, limited scalability, and inefficient resource utilization. In this paper, we propose a novel collaborative inference acceleration mechanism that integrates In-Network Computing (INC) within an Information-Centric Networking (ICN) framework. By leveraging the name-based resolution capability of ICN, our approach dynamically harnesses underutilized computational resources across distributed INC nodes, enabling flexible layer-wise offloading that transcends the limitations of static IP paths. Furthermore, a distributed decision-making and node-selection algorithm is designed to orchestrate CNN layer assignment based on real-time network conditions and node workloads. Extensive simulations on representative models demonstrate the effectiveness of the proposed method. Specifically, for the computationally intensive VGG16 model under high concurrency, the average task completion time is reduced by 43.3% and 60.2% relative to IP-based and Edge-Cloud baselines, respectively, with a load balancing fairness index maintained above 0.86. Full article
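The "fairness index maintained above 0.86" is presumably Jain's fairness index, the standard load-balancing metric: it is 1.0 when all nodes carry equal load and 1/n when one node carries everything. A minimal sketch with hypothetical per-node task counts:

```python
def jain_fairness(loads):
    """Jain's fairness index: (Σx)² / (n · Σx²); 1.0 = perfectly balanced,
    1/n = one node carries all the load."""
    n = len(loads)
    s = sum(loads)
    sq = sum(x * x for x in loads)
    return s * s / (n * sq)

balanced = [10, 11, 9, 10]   # hypothetical tasks per INC node
skewed = [38, 1, 1, 0]
print(f"balanced: {jain_fairness(balanced):.3f}, skewed: {jain_fairness(skewed):.3f}")
```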

37 pages, 1376 KB  
Article
Photonic-Aware Routing in Hybrid Networks-on-Chip via Decentralized Deep Reinforcement Learning
by Elena Kakoulli
AI 2026, 7(2), 65; https://doi.org/10.3390/ai7020065 - 9 Feb 2026
Viewed by 640
Abstract
Edge artificial intelligence (AI) workloads generate bursty, heterogeneous traffic on Networks-on-Chip (NoCs) under tight energy and latency constraints. Hybrid NoCs that overlay electronic meshes with silicon photonic express links can reduce long-path latency via wavelength-division multiplexing, but thermal drift and intermittent optical availability complicate routing. This study introduces a decentralized, photonic-aware controller based on Deep Reinforcement Learning (DRL) with Proximal Policy Optimization (PPO). The policy uses router-local observables—per-port buffer occupancy with short histories, hop distance, a local injection estimate, and a per-cycle optical validity signal—and applies action masking so chosen outputs are always feasible; the controller is co-designed with the router pipeline to retain single-cycle decisions and a modest memory footprint. Cycle-accurate simulations with synthetic traffic and benchmark-derived traces evaluate mean packet latency, throughput, and energy per delivered bit against deterministic, adaptive, and recent DRL baselines; ablation studies isolate the roles of optical validity cues and locality. The results show consistent improvements in congestion-forming regimes and on long electronic paths bridged by photonic links, with robustness across mesh sizes and wavelength concurrency. Overall, the evidence indicates that photonic-aware PPO provides a practical, thermally robust control plane for hybrid NoCs and a scalable routing solution for AI-centric manycore and edge systems. Full article
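Action masking, as used above to keep chosen output ports feasible, amounts to forcing the logits of invalid actions to negative infinity before the softmax, so the policy can never select a port whose optical link is invalid this cycle. A minimal sketch (port layout and logits are hypothetical):

```python
import math

def masked_policy(logits, valid):
    """Softmax over logits with infeasible actions masked to -inf, so all
    probability mass lands on currently valid output ports."""
    neg_inf = float("-inf")
    masked = [l if ok else neg_inf for l, ok in zip(logits, valid)]
    m = max(l for l in masked if l != neg_inf)   # for numerical stability
    exps = [math.exp(l - m) if l != neg_inf else 0.0 for l in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical 5-port router; port 4 is the photonic express link, masked
# out this cycle by the per-cycle optical validity signal.
logits = [0.2, 1.1, -0.4, 0.0, 2.5]
valid = [True, True, True, True, False]
probs = masked_policy(logits, valid)
print([round(p, 3) for p in probs])   # port 4 gets exactly zero probability
```

Note that the masked port had the largest raw logit; without masking the policy would have routed onto an unavailable link.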

35 pages, 2414 KB  
Article
Hierarchical Caching for Agentic Workflows: A Multi-Level Architecture to Reduce Tool Execution Overhead
by Farhana Begum, Craig Scott, Kofi Nyarko, Mansoureh Jeihani and Fahmi Khalifa
Mach. Learn. Knowl. Extr. 2026, 8(2), 30; https://doi.org/10.3390/make8020030 - 27 Jan 2026
Viewed by 1055
Abstract
Large Language Model (LLM) agents depend heavily on multiple external tools such as APIs, databases and computational services to perform complex tasks. However, these tool executions create latency and introduce costs, particularly when agents handle similar queries or workflows. Most current caching methods focus on LLM prompt–response pairs or execution plans and overlook redundancies at the tool level. To address this, we designed a multi-level caching architecture that captures redundancy at both the workflow and tool level. The proposed system integrates four key components: (1) hierarchical caching that operates at both the workflow and tool level to capture coarse and fine-grained redundancies; (2) dependency-aware invalidation using graph-based techniques to maintain consistency when write operations affect cached reads across execution contexts; (3) category-specific time-to-live (TTL) policies tailored to different data types, e.g., weather APIs, user location, database queries and filesystem and computational tasks; and (4) session isolation to ensure multi-tenant cache safety through automatic session scoping. We evaluated the system using synthetic data with 2.25 million queries across ten configurations in fifteen runs. In addition, we conducted four targeted evaluations—write intensity robustness from 4 to 30% writes, personalized memory effects under isolated vs. shared cache modes, workflow-level caching comparison and workload sensitivity across five access distributions—on an additional 2.565 million queries, bringing the total experimental scope to 4.815 million executed queries. The architecture achieved 76.5% caching efficiency, reducing query processing time by 13.3× and lowering estimated costs by 73.3% compared to a no-cache baseline. Multi-tenant testing with fifteen concurrent tenants confirmed robust session isolation and 74.1% efficiency under concurrent workloads. 
Our evaluation used controlled synthetic workloads following Zipfian distributions, which are commonly used in caching research. While absolute hit rates vary by deployment domain, the architectural principles of hierarchical caching, dependency tracking and session isolation remain broadly applicable. Full article
(This article belongs to the Section Learning)
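Two of the components above, category-specific TTLs and session isolation, can be sketched in a few lines: the cache key includes the session identifier, and expiry depends on the data category. The categories and TTL values below are illustrative, not the paper's configuration:

```python
import time

# Hypothetical category → time-to-live (seconds), echoing the idea of
# per-data-type TTL policies described above.
TTL = {"weather": 300, "db_query": 60, "filesystem": 10}

class SessionScopedCache:
    """Tool-result cache keyed by (session, category, query): TTLs are
    category-specific and entries are invisible across sessions."""
    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock

    def get(self, session, category, query):
        key = (session, category, query)
        hit = self._store.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if self._clock() - stored_at > TTL[category]:
            del self._store[key]      # expired under this category's TTL
            return None
        return value

    def put(self, session, category, query, value):
        self._store[(session, category, query)] = (value, self._clock())

cache = SessionScopedCache()
cache.put("alice", "weather", "paris", "18C")
print(cache.get("alice", "weather", "paris"))   # 18C — hit within TTL
print(cache.get("bob", "weather", "paris"))     # None — session isolation
```

The injectable clock makes TTL behavior testable without sleeping; dependency-aware invalidation and the workflow-level tier would sit on top of this.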

28 pages, 684 KB  
Article
Data-Centric Serverless Computing with LambdaStore
by Kai Mast, Suyan Qu, Aditya Jain, Andrea Arpaci-Dusseau and Remzi Arpaci-Dusseau
Software 2026, 5(1), 5; https://doi.org/10.3390/software5010005 - 21 Jan 2026
Viewed by 626
Abstract
LambdaStore is a data-centric serverless platform that breaks the split between stateless functions and external storage in classic cloud computing platforms. By scheduling serverless invocations near data instead of pulling data to compute, LambdaStore substantially reduces the state access cost that dominates today’s serverless workloads. Leveraging its transactional storage engine, LambdaStore delivers serializable guarantees and exactly-once semantics across chains of lambda invocations—a capability missing in current Function-as-a-Service offerings. We make three key contributions: (1) an object-oriented programming model that ties function invocations to their data; (2) a transaction layer with adaptive lock granularity and an optimistic concurrency control protocol designed for serverless workloads to keep contention low while preserving serializability; and (3) an elastic storage system that preserves the elasticity of the serverless paradigm while lambda functions run close to their data. Under read-heavy workloads, LambdaStore lifts throughput by orders of magnitude over existing serverless platforms while holding end-to-end latency below 20 ms. Full article
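The core of an optimistic concurrency control protocol like the one mentioned in contribution (2) is backward validation: a transaction records the version of every key it reads, buffers its writes, and commits only if none of those versions changed. This toy single-threaded sketch (not LambdaStore's protocol) shows the mechanism:

```python
class OCCStore:
    """Toy optimistic concurrency control: transactions buffer writes and
    record the version of every key they read; commit succeeds only if no
    read key has changed in the meantime, otherwise the caller retries."""
    def __init__(self):
        self.data = {}       # key -> value
        self.version = {}    # key -> monotonically increasing int

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, txn, key):
        if key in txn["writes"]:              # read-your-own-writes
            return txn["writes"][key]
        txn["reads"][key] = self.version.get(key, 0)
        return self.data.get(key)

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # Validation: every read version must still be current.
        for key, seen in txn["reads"].items():
            if self.version.get(key, 0) != seen:
                return False                  # conflict: abort, retry
        for key, value in txn["writes"].items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True

store = OCCStore()
t1, t2 = store.begin(), store.begin()
store.read(t1, "balance"); store.write(t1, "balance", 100)
store.read(t2, "balance"); store.write(t2, "balance", 200)
print(store.commit(t1), store.commit(t2))   # True False — t2 hit a conflict
```

No locks are held while the transactions run; the losing transaction simply retries, which keeps contention low exactly when conflicts are rare.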

11 pages, 1270 KB  
Article
How Should Doctors Learn Wellbeing? Perspectives from Early-Career General Practitioners Across Europe
by Constanze Dietzsch, Johanna Klutmann, Helene Junge, Sandra Jordan, Sophie Sun, Aaron Poppleton and Fabian Dupont
Int. Med. Educ. 2026, 5(1), 14; https://doi.org/10.3390/ime5010014 - 21 Jan 2026
Viewed by 394
Abstract
(1) Background: The evolving demands of general practice have increased stress, workload, and fatigue among patients and doctors. In 2022, the European Young Family Doctors Movement (EYFDM) identified wellbeing as a key competency for future GPs. This study primarily explored the perspectives of early-career GPs on integrating wellbeing in general practice training. (2) Methods: A concurrent mixed-methods approach combined a quantitative survey with a town hall discussion at the EYFDM workshop during WONCA Europe 2023 in Brussels. The meeting included brainstorming, subgroup discussions, and synthesis of findings. Subgroup discussions among young GPs and GP trainees were recorded, analyzed using content analysis, and validated through two rounds of stakeholder consultation. (3) Results: Participants advocated for mandatory wellbeing-focused timeslots during training with flexible, self-selected learning activities. Proposals included a toolbox with individual, group, and supervised options. A cultural shift towards prioritizing wellbeing as part of professional development was unanimously supported. Senior GP involvement was seen as crucial for driving this change, alongside wellbeing training for coaches and role models. (4) Conclusions: GP trainees across Europe emphasize the need for greater focus on wellbeing in training, supported by a generational cultural shift. Voluntary, diverse learning activities (toolbox) and role-modeling activities with experienced GPs may help embed wellbeing as a core competency in general practice. Full article
(This article belongs to the Special Issue New Advancements in Medical Education)
29 pages, 2803 KB  
Article
Benchmarking SQL and NoSQL Persistence in Microservices Under Variable Workloads
by Nenad Pantelic, Ljiljana Matic, Lazar Jakovljevic, Stefan Eric, Milan Eric, Miladin Stefanović and Aleksandar Djordjevic
Future Internet 2026, 18(1), 53; https://doi.org/10.3390/fi18010053 - 15 Jan 2026
Viewed by 953
Abstract
This paper presents a controlled comparative evaluation of SQL and NoSQL persistence mechanisms in containerized microservice architectures under variable workload conditions. Three persistence configurations—SQL with indexing, SQL without indexing, and a document-oriented NoSQL database, including supplementary hybrid SQL variants used for robustness analysis—are assessed across read-dominant, write-dominant, and mixed workloads, with concurrency levels ranging from low to high contention. The experimental setup is fully containerized and executed in a single-node environment to isolate persistence-layer behavior and ensure reproducibility. System performance is evaluated using multiple metrics, including percentile-based latency (p95), throughput, CPU utilization, and memory consumption. The results reveal distinct performance trade-offs among the evaluated configurations, highlighting the sensitivity of persistence mechanisms to workload composition and concurrency intensity. In particular, indexing strategies significantly affect read-heavy scenarios, while document-oriented persistence demonstrates advantages under write-intensive workloads. The findings emphasize the importance of workload-aware persistence selection in microservice-based systems and support the adoption of polyglot persistence strategies. Rather than providing absolute performance benchmarks, the study focuses on comparative behavioral trends that can inform architectural decision-making in practical microservice deployments.
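The abstract's headline metrics, p95 latency and throughput, can be computed from raw benchmark samples in a few lines. A minimal sketch follows; the `summarize` helper, the nearest-rank percentile convention, and the sample latencies are illustrative assumptions, not the paper's actual harness.

```python
# Hypothetical sketch: summarizing latency samples into the metrics the
# paper reports (p95 latency, throughput). Sample values are invented.

def summarize(latencies_ms, duration_s):
    """Return (p95 latency in ms, throughput in requests/s)."""
    ordered = sorted(latencies_ms)
    # Nearest-rank p95: the smallest sample at or above the 95th percentile rank.
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    p95 = ordered[idx]
    throughput = len(ordered) / duration_s
    return p95, throughput

latencies = [12.0, 15.0, 11.0, 90.0, 14.0, 13.0, 16.0, 12.5, 14.5, 13.5]
p95, tput = summarize(latencies, duration_s=2.0)
print(p95, tput)  # one slow outlier dominates the tail: 90.0 5.0
```

Reporting p95 rather than the mean is what surfaces the tail-latency differences between indexed and unindexed configurations: a single slow request moves the mean only slightly but is exactly what the 95th percentile captures.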
26 pages, 1012 KB  
Article
AoI-Aware Data Collection in Heterogeneous UAV-Assisted WSNs: Strong-Agent Coordinated Coverage and Vicsek-Driven Weak-Swarm Control
by Lin Huang, Lanhua Li, Songhan Zhao, Daiming Qu and Jing Xu
Sensors 2026, 26(2), 419; https://doi.org/10.3390/s26020419 - 8 Jan 2026
Viewed by 359
Abstract
Unmanned aerial vehicle (UAV) swarms offer an efficient solution for data collection from widely distributed ground users (GUs). However, incomplete environment information and frequent changes make it challenging for standard centralized planning or pure reinforcement learning approaches to simultaneously maintain global solution quality and local flexibility. We propose a hierarchical data collection framework for heterogeneous UAV-assisted wireless sensor networks (WSNs). A small set of high-capability UAVs (H-UAVs), equipped with substantial computational and communication resources, coordinate regional coverage, trajectory planning, and uplink transmission control for numerous resource-constrained low-capability UAVs (L-UAVs) across power-Voronoi-partitioned areas using multi-agent deep reinforcement learning (MADRL). Specifically, we employ Multi-Agent Deep Deterministic Policy Gradient (MADDPG) to enhance H-UAVs’ decision-making capabilities and enable coordinated actions. The partitions are dynamically updated based on GUs’ data generation rates and L-UAV density to balance workload and adapt to environmental dynamics. Concurrently, a large number of L-UAVs with limited onboard resources perform self-organized data collection from GUs and execute opportunistic relaying to a remote access point (RAP) via H-UAVs. Within each Voronoi cell, L-UAV motion follows a weighted Vicsek model that incorporates GUs’ age of information (AoI), link quality, and congestion avoidance. This spatial decomposition combined with decentralized weak-swarm control enables scalability to large-scale L-UAV deployments. Experiments demonstrate that the proposed strong and weak agent MADDPG (SW-MADDPG) scheme reduces AoI by 30% and 21% compared to No-Voronoi and Heuristic-HUAV baselines, respectively.
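The core of a weighted Vicsek update is a weighted circular mean of neighboring headings. The sketch below illustrates that one step; the `vicsek_heading` function, the neighborhood radius, and the weights (which in the paper would derive from AoI, link quality, and congestion terms) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of one weighted Vicsek heading update, loosely in the
# spirit of the paper's weak-swarm control. Weights and radius are invented.
import math

def vicsek_heading(positions, headings, weights, i, radius):
    """New heading for agent i: weighted circular mean of neighbors' headings
    (agents within `radius` of agent i, including itself)."""
    xi, yi = positions[i]
    sx = sy = 0.0
    for (x, y), th, w in zip(positions, headings, weights):
        if math.hypot(x - xi, y - yi) <= radius:
            # Sum weighted unit vectors to average angles without wrap-around bugs.
            sx += w * math.cos(th)
            sy += w * math.sin(th)
    return math.atan2(sy, sx)

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
hdg = [0.0, math.pi / 2, math.pi / 2]
w = [1.0, 2.0, 2.0]  # e.g., heavier weight for neighbors serving high-AoI GUs
print(vicsek_heading(pos, hdg, w, 0, radius=2.0))
```

Averaging unit vectors rather than raw angles is the standard way to avoid the 0/2π discontinuity; the weights then let high-AoI or well-connected neighbors pull the swarm's heading more strongly.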
(This article belongs to the Section Communications)