Search Results (843)

Search Parameters:
Keywords = workload management

24 pages, 7094 KB  
Article
Research on Pilot Workload Identification Based on EEG Time Domain and Frequency Domain
by Weiping Yang, Yixuan Li, Lingbo Liu, Haiqing Si, Haibo Wang, Ting Pan, Yan Zhao and Gen Li
Aerospace 2026, 13(2), 114; https://doi.org/10.3390/aerospace13020114 - 23 Jan 2026
Abstract
Pilot workload is a critical factor influencing flight safety. This study collects both subjective and objective data on pilot workload using the NASA-TLX questionnaire and electroencephalogram acquisition systems during simulated flight tasks. The raw EEG signals are denoised through preprocessing techniques, and relevant EEG features are extracted using time-domain and frequency-domain analysis methods. One-way ANOVA is employed to examine the statistical differences in EEG indicators under varying workload levels. A fusion model based on CNN-Bi-LSTM is developed to train and classify the extracted EEG features, enabling accurate identification of pilot workload states. The results demonstrate that the proposed hybrid model achieves a recognition accuracy of 98.2% on the test set, confirming its robustness. Additionally, under increased workload conditions, frequency-domain features outperform time-domain features in discriminative power. The model proposed in this study effectively recognizes pilot workload levels and offers valuable insights for civil aviation safety management and pilot training programs. Full article
(This article belongs to the Special Issue Human Factors and Performance in Aviation Safety)
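The pipeline described in this abstract, EEG feature windows fed to a CNN followed by a bidirectional LSTM classifier, can be sketched roughly as follows. This is a minimal illustration rather than the authors' model: the channel count, window length, layer sizes, and the three workload classes are assumptions made only for the example.

```python
# Minimal sketch (not the authors' code) of a CNN + bidirectional LSTM
# classifier for EEG workload levels. Channel count (32), window length
# (256 samples), and the three workload classes are illustrative assumptions.
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, n_channels=32, n_classes=3, hidden=64):
        super().__init__()
        # 1-D convolutions over time extract local temporal features
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM models longer-range temporal structure
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, 64, time/4)
        feats = feats.transpose(1, 2)  # (batch, time/4, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # logits for the workload classes

# Smoke test on random data shaped like one batch of EEG windows
model = CnnBiLstm()
logits = model(torch.randn(8, 32, 256))
print(logits.shape)  # torch.Size([8, 3])
```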
22 pages, 3108 KB  
Article
Cell-Based Optimization of Air Traffic Control Sector Boundaries Using Traffic Complexity
by César Gómez Arnaldo, José María Arroyo López, Raquel Delgado-Aguilera Jurado, María Zamarreño Suárez, Javier Alberto Pérez Castán and Francisco Pérez Moreno
Aerospace 2026, 13(1), 101; https://doi.org/10.3390/aerospace13010101 - 20 Jan 2026
Abstract
The increasing demand for air travel has intensified the need for more efficient air traffic management (ATM) solutions. One of the key challenges in this domain is the optimal sectorization of airspace to ensure balanced controller workload and operational efficiency. Traditional airspace sectors, typically static and based on historical flow patterns, often fail to adapt to evolving traffic complexity, resulting in imbalanced workload distribution and reduced system performance. This study introduces a novel methodology for optimizing ATC sector geometries based on air traffic complexity indicators, aiming to enhance the balance of operational workload across sectors. The proposed optimization is formulated in the horizontal plane using a two-dimensional cell-based airspace representation. A graph-partitioning optimization model with spatial and operational constraints is applied, along with a refinement step using adjacent-cell pairs to improve geometric coherence. In a real-world case study based on real data from the Madrid North ACC in Spain, the model achieved significant complexity balancing while preserving sector shapes. This work provides a methodological basis to support static and dynamic airspace design and has the potential to enhance ATC efficiency through data-driven optimization. Full article
(This article belongs to the Special Issue AI, Machine Learning and Automation for Air Traffic Control (ATC))
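The core idea of balancing complexity across a cell-based airspace can be illustrated with a toy example. The sketch below is not the paper's graph-partitioning model or its adjacent-cell refinement step; the grid size, random complexity values, and the greedy boundary-move heuristic are assumptions chosen only to show weight-balanced splitting of a cell grid.

```python
# Illustrative sketch (not the paper's optimization model): split a 2-D grid of
# airspace cells into two sectors with balanced total traffic complexity, using
# a greedy heuristic that moves boundary cells while the imbalance decreases.
import random

random.seed(0)
ROWS, COLS = 6, 8
complexity = {(r, c): random.uniform(0.0, 1.0) for r in range(ROWS) for c in range(COLS)}

def neighbors(cell):
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in complexity]

def imbalance(a, b):
    return abs(sum(complexity[x] for x in a) - sum(complexity[x] for x in b))

def try_moves(src, dst):
    """Move cells that touch the other sector whenever that reduces imbalance."""
    moved = False
    for cell in list(src):
        if any(n in dst for n in neighbors(cell)):
            if imbalance(src - {cell}, dst | {cell}) < imbalance(src, dst):
                src.remove(cell)
                dst.add(cell)
                moved = True
    return moved

# Start from a straight vertical split, then refine along the boundary
sector_a = {cell for cell in complexity if cell[1] < COLS // 2}
sector_b = set(complexity) - sector_a
while try_moves(sector_a, sector_b) or try_moves(sector_b, sector_a):
    pass

print(round(sum(complexity[c] for c in sector_a), 2),
      round(sum(complexity[c] for c in sector_b), 2))
```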
27 pages, 3582 KB  
Article
Multi-Objective Joint Optimization for Microservice Deployment and Request Routing
by Zhengying Cai, Fang Yu, Wenjuan Li, Junyu Liu and Mingyue Zhang
Symmetry 2026, 18(1), 195; https://doi.org/10.3390/sym18010195 - 20 Jan 2026
Abstract
Microservice deployment and request routing can help improve server efficiency and the performance of large-scale mobile edge computing (MEC). However, the joint optimization of microservice deployment and request routing is extremely challenging, as dynamic request routing easily results in asymmetric network structures and imbalanced microservice workloads. This article proposes multi-objective joint optimization for microservice deployment and request routing based on structural symmetry. Firstly, the structural symmetry of microservice deployment and request routing is defined, including spatial symmetry and temporal symmetry. A constrained nonlinear multi-objective optimization model was constructed to jointly optimize microservice deployment and request routing, where the structural symmetric metrics take into account the flow-aware routing distance, workload balancing, and request response delay. Secondly, an improved artificial plant community algorithm is designed to search for the optimal route to achieve structural symmetry, including the environment preparation and dependency installation, service packaging and image orchestration, arrangement configuration and dependency management, deployment execution and status monitoring. Thirdly, a benchmark experiment is designed to compare with baseline algorithms. Experimental results show that the proposed algorithm can effectively optimize structural symmetry and reduce the flow-aware routing distance, workload imbalance, and request response delay, while the computational overhead is small enough to be easily deployed on resource-constrained edge computing devices. Full article
26 pages, 1629 KB  
Article
Performance Evaluation of MongoDB and RavenDB in IIoT-Inspired Data-Intensive Mobile and Web Applications
by Mădălina Ciumac, Cornelia Aurora Győrödi, Robert Ștefan Győrödi and Felicia Mirabela Costea
Future Internet 2026, 18(1), 57; https://doi.org/10.3390/fi18010057 - 20 Jan 2026
Abstract
The exponential growth of data generated by modern digital applications, including systems inspired by Industrial Internet of Things (IIoT) requirements, has accelerated the adoption of NoSQL databases due to their scalability, flexibility, and performance advantages over traditional relational systems. Among document-oriented solutions, MongoDB and RavenDB stand out due to their architectural features and their ability to manage dynamic, large-scale datasets. This paper presents a comparative analysis of MongoDB and RavenDB, focusing on the performance of fundamental CRUD (Create, Read, Update, Delete) operations. To ensure a controlled performance evaluation, a mobile and web application for managing product orders was implemented as a case study inspired by IIoT data characteristics, such as high data volume and frequent transactional operations, with experiments conducted on datasets ranging from 1000 to 1,000,000 records. Beyond the core CRUD evaluation, the study also investigates advanced operational scenarios, including joint processing strategies (lookup versus document inclusion), bulk data ingestion techniques, aggregation performance, and full-text search capabilities. These complementary tests provide deeper insight into the systems’ architectural strengths and their behavior under more complex and data-intensive workloads. The experimental results highlight MongoDB’s consistent performance advantage in terms of response time, particularly with large data volumes, while RavenDB demonstrates competitive behavior and offers additional benefits such as built-in ACID compliance, automatic indexing, and optimized mechanisms for relational retrieval and bulk ingestion. The analysis does not propose a new benchmarking methodology but provides practical insights for selecting an appropriate document-oriented database for data intensive mobile and web application contexts, including IIoT-inspired data characteristics, based on a controlled single-node experimental setting, while acknowledging the limitations of a single-host experimental environment. Full article
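A rough idea of how CRUD timings like these can be collected on the MongoDB side is sketched below. This is not the authors' benchmark harness: the connection URI, database and collection names, toy order schema, and record count are assumptions, and the RavenDB counterpart is omitted.

```python
# Rough sketch (assumptions: a local MongoDB on the default port, a throwaway
# "benchmark.orders" collection, a toy order schema). Not the authors' harness;
# it only illustrates timing the four CRUD operations the study compares.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["benchmark"]["orders"]
orders.drop()

N = 10_000  # the paper scales from 1,000 to 1,000,000 records

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

timed("create", lambda: orders.insert_many(
    [{"order_id": i, "product": f"p{i % 50}", "qty": i % 7 + 1} for i in range(N)]))
timed("read",   lambda: list(orders.find({"product": "p7"})))
timed("update", lambda: orders.update_many({"product": "p7"}, {"$inc": {"qty": 1}}))
timed("delete", lambda: orders.delete_many({"qty": {"$gt": 5}}))
```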
15 pages, 1352 KB  
Review
Respiratory Support in Cardiogenic Pulmonary Edema: Clinical Insights from Cardiology and Intensive Care
by Nardi Tetaj, Giulia Capecchi, Dorotea Rubino, Giulia Valeria Stazi, Emiliano Cingolani, Antonio Lesci, Andrea Segreti, Francesco Grigioni and Maria Grazia Bocci
J. Cardiovasc. Dev. Dis. 2026, 13(1), 54; https://doi.org/10.3390/jcdd13010054 - 20 Jan 2026
Abstract
Cardiogenic pulmonary edema (CPE) is a life-threatening manifestation of acute heart failure characterized by rapid accumulation of fluid in the interstitial and alveolar spaces, leading to severe dyspnea, hypoxemia, and respiratory failure. The condition arises from elevated left-sided filling pressures that increase pulmonary capillary hydrostatic pressure, disrupt alveolo-capillary barrier integrity, and impair gas exchange. Neurohormonal activation further perpetuates congestion and increases myocardial workload, creating a vicious cycle of hemodynamic overload and respiratory compromise. Respiratory support is a cornerstone of management in CPE, aimed at stabilizing oxygenation, reducing the work of breathing, and facilitating ventricular unloading while definitive therapies, such as diuretics, vasodilators, inotropes, or mechanical circulatory support (MCS), address the underlying cause. Among available modalities, non-invasive ventilation (NIV) with continuous positive airway pressure (CPAP) or bilevel positive airway pressure (BiPAP) has the strongest evidence base in moderate-to-severe CPE, consistently reducing the need for intubation and providing rapid relief of dyspnea. High-flow nasal cannula (HFNC) represents an emerging alternative in patients with moderate hypoxemia or intolerance to mask ventilation, and should be considered an adjunctive option in selected patients with less severe disease or NIV intolerance, although its efficacy in severe presentations remains uncertain. Invasive mechanical ventilation is reserved for refractory cases, while extracorporeal membrane oxygenation (ECMO) and other advanced circulatory support modalities may be necessary in cardiogenic shock. Integration of respiratory strategies with hemodynamic optimization is essential, as positive pressure ventilation favorably modulates preload and afterload, synergizing with pharmacological unloading. Future directions include personalization of ventilatory strategies using advanced monitoring, novel interfaces to improve tolerability, and earlier integration of MCS. In summary, respiratory support in CPE is both a bridge and a decisive therapeutic intervention, interrupting the cycle of hypoxemia and hemodynamic deterioration. A multidisciplinary, individualized approach remains central to improving outcomes in this high-risk population. Full article
(This article belongs to the Section Cardiovascular Clinical Research)
24 pages, 334 KB  
Article
The Impact of Compassion Fatigue on the Psychological Well-Being of Nurses Caring for Patients with Dementia: A Cross-Sectional Post-COVID-19 Data Analysis
by Maria Topi, Paraskevi Tsioufi, Evangelos C. Fradelos, Foteini Malli, Evmorfia Koukia and Polyxeni Mangoulia
Healthcare 2026, 14(2), 224; https://doi.org/10.3390/healthcare14020224 - 16 Jan 2026
Abstract
Background/Objectives: Nurses are susceptible to compassion fatigue due to the nature of their professional responsibilities. Factors contributing to this vulnerability include daily patient interactions and organizational elements within their work environment, as well as work-related stress and sociodemographic characteristics, including age, marital status, years of professional experience, and, notably, gender. This research investigates the relationship between compassion fatigue and the levels of anxiety and depression, as well as the professional quality of life among nurses providing care to dementia patients in Greece. Methods: A cross-sectional survey was carried out with 115 nurses working in dementia care centers in Greece. The Hospital Anxiety and Depression Scale (HADS), the Professional Quality of Life Scale (ProQOL-5), and the participants’ personal, demographic, and professional information were all included in an electronic questionnaire. Multiple regression analysis was used. Results: A total of 42.6% of nurses rated their working environment as favorable. Additionally, 23.5% of the sample exhibited high levels of compassion satisfaction, whereas 46.1% demonstrated low levels of burnout. Female gender (p = 0.022) and a higher family income (p = 0.046) were positively associated with compassion satisfaction. Regression analysis indicated that elevated symptoms of anxiety and depression correlated with decreased compassion satisfaction, increased burnout, and heightened secondary post-traumatic stress. Conclusions: Engaging in the care of patients with dementia, particularly throughout the pandemic period, has underscored a pronounced susceptibility to compassion fatigue, physical fatigue, pain, psychological stress, and a reduced quality of life. These results highlight the importance for nursing management to adopt specific organizational measures, including proper staffing levels, balancing workloads, and conducting routine mental health assessments. Full article
(This article belongs to the Section Healthcare Quality, Patient Safety, and Self-care Management)
13 pages, 1361 KB  
Article
Mitigating Write Amplification via Stream-Aware Block-Level Buffering in Multi-Stream SSDs
by Hyeonseob Kim and Taeseok Kim
Appl. Sci. 2026, 16(2), 838; https://doi.org/10.3390/app16020838 - 14 Jan 2026
Abstract
Write amplification factor (WAF) is a critical performance and endurance bottleneck in flash-based solid-state drives (SSDs). Multi-streamed SSDs mitigate WAF by enabling logical data streams to be written separately, thereby improving the efficiency of garbage collection. However, despite the architectural potential of multi-streaming, prior research has largely overlooked the design of write buffer management schemes tailored to this model. In this paper, we propose a stream-aware block-level write buffer management technique that leverages both spatial and temporal locality to further reduce WAF. Although the write buffer operates at the granularity of pages, eviction is performed at the block level, where each block is composed exclusively of pages from the same stream. All pages and blocks are tracked using least recently used (LRU) lists at both global and per-stream levels. To avoid mixing data with disparate hotness and update frequencies, pages from the same stream are dynamically grouped into logical blocks based on their recency order. When space is exhausted, eviction is triggered by selecting a full block of pages from the cold region of the global LRU list. This strategy prevents premature eviction of hot pages and aligns physical block composition with logical stream boundaries. The proposed approach enhances WAF and garbage collection efficiency without requiring hardware modification or device-specific extensions. Experimental results confirm that our design delivers consistent performance and endurance improvements across diverse multi-streamed I/O workloads. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
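The buffering policy described above (page-granularity caching, per-stream grouping into logical blocks, and block-level eviction from the cold end of a global LRU) can be illustrated with a toy in-memory model. The block size, buffer capacity, and class interface below are assumptions made for illustration, not the authors' implementation.

```python
# Toy sketch of the buffering idea (not the authors' code): pages are cached
# individually but grouped per stream, and eviction removes a whole block of
# same-stream pages chosen from the cold end of a global LRU list.
from collections import OrderedDict, defaultdict

PAGES_PER_BLOCK = 4      # assumption; real flash blocks hold far more pages
BUFFER_CAPACITY = 16     # pages

class StreamAwareBuffer:
    def __init__(self):
        self.global_lru = OrderedDict()          # page -> stream, oldest first
        self.per_stream = defaultdict(OrderedDict)

    def write(self, stream, page):
        # Re-writing a cached page just refreshes recency (deduplication)
        if page in self.global_lru:
            self.global_lru.move_to_end(page)
            self.per_stream[stream].move_to_end(page)
        else:
            if len(self.global_lru) >= BUFFER_CAPACITY:
                self._evict_block()
            self.global_lru[page] = stream
            self.per_stream[stream][page] = None

    def _evict_block(self):
        # Pick the stream owning the globally coldest page, then flush a block
        # of that stream's coldest pages together.
        _, stream = next(iter(self.global_lru.items()))
        victims = list(self.per_stream[stream])[:PAGES_PER_BLOCK]
        for p in victims:
            del self.global_lru[p]
            del self.per_stream[stream][p]
        print(f"flush stream {stream}: pages {victims}")

buf = StreamAwareBuffer()
for i in range(40):
    buf.write(stream=i % 3, page=(i % 3, i // 3))
```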
15 pages, 288 KB  
Article
Qualitative Evaluation of a Clinical Decision-Support Tool for Improving Anticoagulation Control in Non-Valvular Atrial Fibrillation in Primary Care
by Maria Rosa Dalmau Llorca, Elisabet Castro Blanco, Zojaina Hernández Rojas, Noèlia Carrasco-Querol, Laura Medina-Perucha, Alessandra Queiroga Gonçalves, Anna Espuny Cid, José Fernández Sáez and Carina Aguilar Martín
Healthcare 2026, 14(2), 199; https://doi.org/10.3390/healthcare14020199 - 13 Jan 2026
Abstract
Objectives: Clinical decision-support systems are computer-based tools to improve healthcare decision-making. However, their effectiveness depends on being positively perceived and well understood by healthcare professionals. Qualitative research is particularly valuable for exploring related behaviors and attitudes. This study aims to explore experiences of family physicians and nurses concerning the visualization, utility and understanding of the non-valvular atrial fibrillation clinical decision-support system (CDS-NVAF) tool in primary care in Catalonia, Spain. Methods: We performed a qualitative study, taking a pragmatic utilitarian approach, comprising focus groups with healthcare professionals from primary care centers in the intervention arm of the CDS-NVAF tool randomized clinical trial. A thematic content analysis was performed. Results: Thirty-three healthcare professionals participated in three focus groups. We identified three key themes: (1) barriers to tool adherence, encompassing problems related to understanding the CDS-NVAF tool, alert fatigue, and workload; (2) using the CDS-NVAF tool: differences in interpretations of Time in Therapeutic Range (TTR) assessments, and the value of TTR for assessing patient risk; (3) participants’ suggestions: improvements in workflow, technical aspects, and training in non-valvular atrial fibrillation management. Conclusions: Healthcare professionals endorsed a clinical decision-support system for managing oral anticoagulation in non-valvular atrial fibrillation patients in primary care. However, they emphasized the view that the CDS-NVAF requires technical changes related to its visualization and better integration in their workflow, as well as continuing training to reinforce their theoretical and practical knowledge for better TTR interpretation. Full article
(This article belongs to the Section Digital Health Technologies)
22 pages, 363 KB  
Review
Human Factors, Competencies, and System Interaction in Remotely Piloted Aircraft Systems
by John Murray and Graham Wild
Aerospace 2026, 13(1), 85; https://doi.org/10.3390/aerospace13010085 - 13 Jan 2026
Abstract
Research into Remotely Piloted Aircraft Systems (RPASs) has expanded rapidly, yet the competencies, knowledge, skills, and other attributes (KSaOs) required of RPAS pilots remain comparatively underexamined. This review consolidates existing studies addressing human performance, subject matter expertise, training practices, and accident causation to provide a comprehensive account of the KSaOs underpinning safe civilian and commercial drone operations. Prior research demonstrates that early work drew heavily on military contexts, which may not generalize to contemporary civilian operations characterized by smaller platforms, single-pilot tasks, and diverse industry applications. Studies employing subject matter experts highlight cognitive demands in areas such as situational awareness, workload management, planning, fatigue recognition, perceptual acuity, and decision-making. Accident analyses, predominantly using the human factors accident classification system and related taxonomies, show that skill errors and preconditions for unsafe acts are the most frequent contributors to RPAS occurrences, with limited evidence of higher-level latent organizational factors in civilian contexts. Emerging research emphasizes that RPAS pilots increasingly perform data-collection tasks integral to professional workflows, requiring competencies beyond aircraft handling alone. The review identifies significant gaps in training specificity, selection processes, and taxonomy suitability, indicating opportunities for future research to refine RPAS competency frameworks and support improved operational safety. Full article
(This article belongs to the Special Issue Human Factors and Performance in Aviation Safety)
33 pages, 1529 KB  
Article
An SQL Query Description Problem with AI Assistance for an SQL Programming Learning Assistant System
by Ni Wayan Wardani, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Zihao Zhu, I Nyoman Darma Kotama, Putu Sugiartawan and I Nyoman Agus Suarya Putra
Information 2026, 17(1), 65; https://doi.org/10.3390/info17010065 - 9 Jan 2026
Abstract
Today, relational databases are widely used in information systems. SQL (structured query language) is taught extensively in universities and professional schools across the globe as a programming language for its data management and accesses. Previously, we have studied a web-based programming learning assistant system (PLAS) to help novice students learn popular programming languages by themselves through solving various types of exercises. For SQL programming, we have implemented the grammar-concept understanding problem (GUP) and the comment insertion problem (CIP) for its initial studies. In this paper, we propose an SQL Query Description Problem (SDP) as a new exercise type for describing the SQL query to a specified request in a MySQL database system. To reduce teachers’ preparation workloads, we integrate a generative AI-assisted SQL query generator to automatically generate a new SDP instance with a given dataset. An SDP instance consists of a table, a set of questions and corresponding queries. Answer correctness is determined by enhanced string matching against an answer module that includes multiple semantically equivalent canonical queries. For evaluation, we generated 11 SDP instances on basic topics using the generator, where we found that Gemini 3.0 Pro exhibited higher pedagogical consistency compared to ChatGPT-5.0, achieving perfect scores in Sensibleness, Topicality, and Readiness metrics. Then, we assigned the generated instances to 32 undergraduate students at the Indonesian Institute of Business and Technology (INSTIKI). The results showed an average correct answer rate of 95.2% and a mean SUS score of 78, which demonstrates strong initial student performance and system acceptance. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
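The answer-checking step, enhanced string matching against several semantically equivalent canonical queries, might look roughly like the sketch below. The normalization rules and the canonical set are illustrative assumptions, not the system's actual matcher.

```python
# Illustrative sketch of answer checking by normalized string matching against
# several canonical queries (the canonical set and the normalization rules here
# are assumptions, not the system's actual matcher).
import re

def normalize(sql: str) -> str:
    sql = sql.strip().rstrip(";")
    sql = re.sub(r"\s+", " ", sql)                  # collapse whitespace
    sql = re.sub(r"\s*([,()=<>])\s*", r"\1", sql)   # tighten punctuation spacing
    return sql.lower()

def is_correct(student_sql: str, canonical_queries: list[str]) -> bool:
    return normalize(student_sql) in {normalize(q) for q in canonical_queries}

canonical = [
    "SELECT name, price FROM products WHERE price > 100 ORDER BY price DESC;",
    "SELECT name, price FROM products WHERE 100 < price ORDER BY price DESC;",
]
print(is_correct("select name,price from products where price>100 order by price desc",
                 canonical))  # True
```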
18 pages, 1346 KB  
Article
ALEX: Adaptive Log-Embedded Extent Layer for Low-Amplification SQLite Writes on Flash Storage
by Youngmi Baek and Jung Kyu Park
Appl. Sci. 2026, 16(2), 672; https://doi.org/10.3390/app16020672 - 8 Jan 2026
Abstract
Efficient metadata and page management are essential for sustaining database performance on modern flash-based storage. However, conventional SQLite configurations—rollback journal and WAL—often trigger excessive small writes and frequent synchronization events, leading to high write amplification and degraded tail latency, particularly on UFS and NVMe devices. This study introduces ALEX (Adaptive Log-Embedded Extent Layer), a lightweight VFS-level extension that coalesces scattered 4 KB page updates into sequential, page-aligned extents while embedding compact log records for recovery. The proposed design reduces redundant writes through in-memory page deduplication, minimizes fdatasync() frequency by flushing multi-page extents, and preserves full SQLite compatibility. We evaluate ALEX on both Linux NVMe SSDs and Android UFS storage under controlled workloads. Results show that ALEX significantly lowers write amplification, reduces sync counts, and improves p95–p99 write latency compared with baseline SQLite modes. The approach consistently achieves near-sequential write patterns without modifying SQLite internals. These findings demonstrate that lightweight extent-based coalescing can provide substantial efficiency gains for embedded and mobile database systems, offering a practical direction for enhancing SQLite performance on flash devices. Full article
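The coalescing idea, accumulating deduplicated 4 KB page updates in memory and flushing them as one page-aligned extent followed by a single fdatasync(), is sketched below in Python purely for illustration; ALEX itself is a VFS-level extension, and the file name, page size, and extent threshold here are assumptions.

```python
# Conceptual sketch of extent coalescing (ALEX itself operates at the VFS
# level; the file name, page size, and extent threshold are illustrative).
# Dirty 4 KB pages are deduplicated in memory and flushed together, followed by
# a single fdatasync(), instead of one small write plus sync per page.
import os

PAGE_SIZE = 4096
EXTENT_PAGES = 4          # flush threshold (assumption)
dirty = {}                # page_number -> latest page image (deduplication)

def write_page(page_no: int, data: bytes):
    dirty[page_no] = data.ljust(PAGE_SIZE, b"\x00")[:PAGE_SIZE]
    if len(dirty) >= EXTENT_PAGES:
        flush_extent("demo.db")

def flush_extent(path: str):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        # Write pages in ascending page order to keep the extent as sequential
        # and page-aligned as possible on the device.
        for page_no in sorted(dirty):
            os.pwrite(fd, dirty[page_no], page_no * PAGE_SIZE)
        os.fdatasync(fd)  # one sync for the whole extent
    finally:
        os.close(fd)
    dirty.clear()

for i in range(6):
    write_page(i % 3, f"v{i}".encode())   # pages 0-2 re-written; only the latest image is kept
write_page(7, b"tail")                    # fourth distinct page reaches the threshold and flushes
```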
22 pages, 312 KB  
Article
Machine Learning-Enhanced Database Cache Management: A Comprehensive Performance Analysis and Comparison of Predictive Replacement Policies
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Appl. Sci. 2026, 16(2), 666; https://doi.org/10.3390/app16020666 - 8 Jan 2026
Abstract
The exponential growth of data-driven applications has intensified performance demands on database systems, where cache management represents a critical bottleneck. Traditional cache replacement policies such as Least Recently Used (LRU) and Least Frequently Used (LFU) rely on simple heuristics that fail to capture complex temporal and frequency patterns in modern workloads. This research presents a modular machine learning-enhanced cache management framework that leverages pattern recognition to optimize database performance through intelligent replacement decisions. Our approach integrates multiple machine learning models—Random Forest classifiers, Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), and Gradient Boosting methods—within a modular architecture enabling seamless integration with existing database systems. The framework incorporates sophisticated feature engineering pipelines extracting temporal, frequency, and contextual characteristics from query access patterns. Comprehensive experimental evaluation across synthetic workloads, real-world production datasets, and standard benchmarks (TPC-C, TPC-H, YCSB, and LinkBench) demonstrates consistent performance improvements. Machine learning-enhanced approaches achieve 8.4% to 19.2% improvement in cache hit rates, 15.3% to 28.7% reduction in query latency, and 18.9% to 31.4% increase in system throughput compared to traditional policies and advanced adaptive methods including ARC, LIRS, Clock-Pro, TinyLFU, and LECAR. Random Forest emerges as the most practical solution, providing 18.7% performance improvement with only 3.1% computational overhead. Case study analysis across e-commerce, financial services, and content management applications demonstrates measurable business impact, including 8.3% conversion rate improvements and USD 127,000 annual revenue increases. Statistical validation (p<0.001, Cohen’s d>0.8) confirms both statistical and practical significance. Full article
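The central mechanism, replacing a recency-only heuristic with a learned prediction of near-term reuse, can be illustrated with a toy eviction routine. The features, the synthetic training trace, and the use of scikit-learn's random forest below are assumptions made for illustration and do not reproduce the paper's framework or its feature-engineering pipeline.

```python
# Toy illustration (not the paper's framework) of ML-guided eviction: a random
# forest predicts the probability that a cached item will be re-referenced
# soon, and the item with the lowest predicted reuse is evicted, replacing
# LRU's recency-only rule. Features and the synthetic trace are assumptions.
import random
from sklearn.ensemble import RandomForestClassifier

random.seed(1)

def features(recency, frequency, last_gap):
    return [recency, frequency, last_gap]

# Synthetic training data: frequently and recently used items tend to be reused
X, y = [], []
for _ in range(2000):
    rec, freq, gap = random.randint(0, 100), random.randint(1, 50), random.randint(0, 100)
    reused = (freq > 20 and rec < 30) or random.random() < 0.1
    X.append(features(rec, freq, gap))
    y.append(int(reused))

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def choose_victim(cache_state):
    """cache_state: dict key -> (recency, frequency, last_gap). Evict the key
    with the lowest predicted probability of reuse."""
    keys = list(cache_state)
    probs = model.predict_proba([features(*cache_state[k]) for k in keys])[:, 1]
    return keys[min(range(len(keys)), key=lambda i: probs[i])]

cache = {"q1": (5, 40, 3), "q2": (80, 2, 90), "q3": (15, 25, 10)}
print("evict:", choose_victim(cache))   # likely q2: old and rarely referenced
```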
25 pages, 1705 KB  
Article
A Carbon-Efficient Framework for Deep Learning Workloads on GPU Clusters
by Dong-Ki Kang and Yong-Hyuk Moon
Appl. Sci. 2026, 16(2), 633; https://doi.org/10.3390/app16020633 - 7 Jan 2026
Abstract
The explosive growth of artificial intelligence (AI) services has led to massive scaling of GPU computing clusters, causing sharp rises in power consumption and carbon emissions. Although hardware-level accelerator enhancements and deep neural network (DNN) model compression techniques can improve power efficiency, they often encounter deployment barriers and risks of accuracy loss in practice. To address these issues without altering hardware or model architectures, we propose a novel Carbon-Aware Resource Management (CA-RM) framework for GPU clusters. In order to minimize the carbon emission, the CA-RM framework dynamically adjusts energy usage by combining real-time GPU core frequency scaling with intelligent workload placement, aligning computation with the temporal availability of renewable generation. We introduce a new metric, performance-per-carbon (PPC), and develop three optimization formulations: carbon-constrained, performance-constrained, and PPC-driven objectives that simultaneously respect DNN model training deadlines, inference latency requirements, and carbon emission budgets. Through extensive simulations using real-world renewable energy traces and profiling data collected from NVIDIA RTX4090 GPU running representative DNN workloads, we show that the CA-RM framework substantially reduces carbon emission while satisfying service-level agreement (SLA) targets across a wide range of workload characteristics. Through experimental evaluation, we verify that the proposed CA-RM framework achieves approximately 35% carbon reduction on average, compared to competing approaches, while still ensuring acceptable processing performance across diverse workload behaviors. Full article
(This article belongs to the Section Green Sustainable Science and Technology)
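The performance-per-carbon (PPC) idea, choosing a time slot and GPU core frequency that maximize throughput per unit of emitted carbon while respecting a deadline, can be sketched with invented numbers. None of the frequencies, power figures, carbon intensities, or the deadline below come from the paper.

```python
# Minimal sketch of the performance-per-carbon (PPC) objective described above;
# the frequency/throughput/power figures, carbon intensities, and deadline are
# invented for illustration and do not come from the paper.
slots = [  # hour of day, grid carbon intensity (gCO2/kWh) after renewables
    (9, 420.0), (12, 150.0), (15, 180.0), (21, 390.0),
]
freq_levels = [  # GPU core frequency (MHz), relative throughput, power (W)
    (1200, 0.70, 220.0), (1800, 0.90, 310.0), (2400, 1.00, 420.0),
]
JOB_WORK = 3.0          # hours of training at full throughput
DEADLINE_HOURS = 5.0    # SLA constraint

best = None
for hour, intensity in slots:
    for freq, rel_tput, power in freq_levels:
        runtime = JOB_WORK / rel_tput                     # hours
        if runtime > DEADLINE_HOURS:
            continue                                       # violates the SLA
        energy_kwh = power * runtime / 1000.0
        carbon = energy_kwh * intensity                    # gCO2
        ppc = rel_tput / carbon                            # performance per carbon
        if best is None or ppc > best[0]:
            best = (ppc, hour, freq, round(carbon, 1))

print("PPC-optimal placement:", best)  # favors the low-carbon midday slot
```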
14 pages, 1025 KB  
Article
visionMC: A Low-Cost AI System Using Facial Recognition and Voice Interaction to Optimize Primary Care Workflows
by Marius Cioca and Adriana Lavinia Cioca
Inventions 2026, 11(1), 6; https://doi.org/10.3390/inventions11010006 - 6 Jan 2026
Abstract
This pilot study evaluated the visionMC system, a low-cost artificial intelligence system integrating HOG-based facial recognition and voice notifications, for workflow optimization in a family medicine practice. Implemented on a Raspberry Pi 4, the system was tested over two weeks with 50 patients. It achieved 85% recognition accuracy and an average detection time of 3.4 s. Compared with baseline, patient waiting times were substantially reduced, and the administrative workload decreased by 5–7 min per patient. A satisfaction survey (N = 35) indicated high acceptance, with all scores above 4.5/5, particularly for usefulness and waiting time reduction. These results suggest that visionMC can improve efficiency and enhance patient experience with minimal financial and technical requirements. Larger multicenter studies are warranted to confirm scalability and generalizability. visionMC demonstrates that effective AI integration in small practices is feasible with minimal resources, supporting scalable digital health transformation. Beyond biometric identification, the system’s primary contribution is streamlining practice management by instantly displaying the arriving patient and enabling rapid chart preparation. Personalized greetings enhance patient experience, while email alerts on motion events provide a secondary security benefit. These combined effects drove the observed reductions in waiting and administrative times. Full article
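The paper does not publish its software stack, so the sketch below only illustrates what a HOG-based enrolment and matching loop can look like using the open-source face_recognition library (which defaults to a HOG detector); the library choice, file names, and matching tolerance are assumptions.

```python
# Illustrative sketch only: the library choice (face_recognition, HOG detector
# by default), the placeholder image paths, and the tolerance are assumptions,
# not the visionMC implementation.
import face_recognition

# Enrolment: one reference photo per registered patient (placeholder paths)
known_names = ["patient_001", "patient_002"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
]

def identify(frame_path: str) -> str | None:
    """Return the matched patient id for the first recognized face, else None."""
    frame = face_recognition.load_image_file(frame_path)
    locations = face_recognition.face_locations(frame, model="hog")  # CPU-friendly HOG detector
    encodings = face_recognition.face_encodings(frame, locations)
    for encoding in encodings:
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
        if True in matches:
            return known_names[matches.index(True)]
    return None

print(identify("waiting_room_frame.jpg"))
```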
34 pages, 5058 KB  
Article
A Machine Learning Framework for Predicting and Resolving Complex Tactical Air Traffic Events Using Historical Data
by Anthony De Bortoli, Cynthia Koopman, Leander Grech, Remi Zaidan, Didier Berling and Jason Gauci
Aerospace 2026, 13(1), 54; https://doi.org/10.3390/aerospace13010054 - 5 Jan 2026
Abstract
One of the key functions of Air Traffic Management (ATM) is to balance airspace capacity and demand. Despite measures that are taken during the strategic and pre-tactical phases of flight, demand–capacity imbalances still occur in flight, often manifesting as localised regions of high traffic complexity, known as hotspots. These hotspots emerge dynamically, leaving air traffic controllers with limited anticipation time and increased workload. This paper proposes a Machine Learning (ML) framework for the prediction and resolution of hotspots in congested en-route airspace up to an hour in advance. For hotspot prediction, the proposed framework integrates trajectory prediction, spatial clustering, and complexity assessment. The novelty lies in shifting complexity assessment from a sector-level perspective to the level of individual hotspots, whose complexity is quantified using a set of normalised, sector-relative metrics derived from historical data. For hotspot resolution, a Reinforcement Learning (RL) approach, based on Proximal Policy Optimisation (PPO) and a novel neural network architecture, is employed to act on airborne flights. Three single-clearance type agents—a speed agent, a flight-level agent, and a direct routing agent—and a multi-clearance type agent are trained and evaluated on thousands of historical hotspot scenarios. Results demonstrate the suitability of the proposed framework and show that hotspots are strongly seasonal and mainly occur along traffic routes. Furthermore, it is shown that RL agent performance tends to degrade with hotspot complexity in terms of certain performance metrics but remains the same, or even improves, in terms of others. The multi-clearance type agent solves the highest percentage of hotspots; however, the FL agent achieves the best overall performance. Full article
(This article belongs to the Section Air Traffic and Transportation)
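Of the framework's stages, the spatial-clustering step is the easiest to illustrate in isolation: density-based clustering of predicted aircraft positions yields candidate hotspots. The sketch below uses DBSCAN with invented coordinates and thresholds; it is not the paper's clustering configuration and omits the trajectory-prediction, complexity-assessment, and RL-resolution stages.

```python
# Sketch of the spatial-clustering step only (the full framework also includes
# trajectory prediction, hotspot-level complexity metrics, and RL resolution).
# Predicted positions, eps, and min_samples are illustrative values.
import numpy as np
from sklearn.cluster import DBSCAN

# Predicted aircraft positions one hour ahead: (longitude, latitude) in degrees
predicted = np.array([
    [2.10, 48.90], [2.12, 48.92], [2.11, 48.88], [2.13, 48.91],   # dense group
    [5.40, 47.10], [5.42, 47.12], [5.41, 47.09],                  # second group
    [0.50, 50.00],                                                # isolated flight
])

# eps ~ 0.05 deg (a few NM at mid-latitudes); at least 3 flights form a hotspot
labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(predicted)

for cluster_id in sorted(set(labels) - {-1}):
    members = predicted[labels == cluster_id]
    print(f"hotspot {cluster_id}: {len(members)} flights, "
          f"centre {members.mean(axis=0).round(3)}")
```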