Search Results (594)

Search Parameters:
Keywords = operating system logs

18 pages, 2555 KB  
Article
Spatial Heat Load Density Analysis for Assessing 4th Generation District Heating Potential in Extreme Cold Climate Cities: A Case Study of Ulaanbaatar, Mongolia
by Tsolmon Khalzan and Batmunkh Sereeter
Energies 2026, 19(7), 1598; https://doi.org/10.3390/en19071598 - 24 Mar 2026
Abstract
Ulaanbaatar, the capital of Mongolia, operates one of the world’s largest district heating (DH) systems in the coldest national capital (heating degree-days ~5800). Despite serving over 60% of the city’s 1.6 million residents, the current 3rd generation DH system suffers from high thermal losses (~17–18%) and relies on coal-fired combined heat and power plants. Transitioning to 4th generation district heating (4GDH) with lower supply temperatures could reduce these losses while enabling future low-temperature renewable energy integration. A geographic information system (GIS)-based spatial heat load density (HLD) analysis uses operational data from the Ulaanbaatar District Heating Company, encompassing 13,500 buildings with a total connected capacity of 3924 MW. Grid-based spatial analysis was performed at two resolutions (1 km2 and 2 km2). Threshold sensitivity analysis was conducted across HLD criteria of 1–5 MW/km2. Results indicate that median HLD values exceed the European reference threshold of 3 MW/km2, with log-normal distributions confirmed by Shapiro–Wilk tests. Three candidate pilot zones were identified. A hybrid temperature strategy (65/35 °C above −25 °C; 90/60 °C below) further contextualizes the findings. These results suggest spatially favorable conditions for 4GDH development, providing a quantitative foundation for subsequent techno-economic feasibility studies. Full article
(This article belongs to the Special Issue Trends and Developments in District Heating and Cooling Technologies)
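The threshold sensitivity step described in this abstract can be sketched as a simple sweep over grid-cell heat load density (HLD) values. The cell values and thresholds below are hypothetical placeholders for illustration, not the study's Ulaanbaatar data.

```python
# Illustrative threshold sensitivity analysis over grid-cell heat load
# density (HLD) values, mirroring the abstract's 1-5 MW/km2 sweep.
# The cell values are hypothetical, not the paper's dataset.

def threshold_sensitivity(hld_values, thresholds):
    """For each HLD threshold, return the fraction of grid cells
    whose heat load density meets or exceeds it."""
    n = len(hld_values)
    return {t: sum(v >= t for v in hld_values) / n for t in thresholds}

cells = [0.8, 1.9, 2.6, 3.1, 3.4, 4.2, 5.7, 6.3]  # MW/km2, hypothetical
shares = threshold_sensitivity(cells, thresholds=[1, 2, 3, 4, 5])
for t, share in shares.items():
    print(f">= {t} MW/km2: {share:.0%} of cells")
```

In this toy run, more than half of the cells clear the 3 MW/km2 European reference threshold the abstract cites; the real analysis would apply the same counting over the 1 km2 and 2 km2 grids.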

36 pages, 5099 KB  
Article
DML–LLM Hybrid Architecture for Fault Detection and Diagnosis in Sensor-Rich Industrial Systems
by Yu-Shu Hu, Saman Marandi and Mohammad Modarres
Sensors 2026, 26(6), 2008; https://doi.org/10.3390/s26062008 - 23 Mar 2026
Abstract
Fault Detection and Diagnosis (FDD) in complex industrial systems requires methods that can handle uncertain operating conditions, soft thresholds, evolving sensor behavior, and increasing volumes of heterogeneous data. Traditional model-based or rule-driven approaches offer interpretability but lack adaptability, while purely data-driven and Large Language Model (LLM)-based methods often struggle with consistency, traceability, and causal grounding. Dynamic Master Logic (DML) provides a causal and temporal reasoning structure with fuzzy rules that capture gradual drift, soft limits, and asynchronous sensor signals while preserving traceability and deterministic evidence propagation. Building on this foundation, this paper presents a DML–LLM hybrid architecture that integrates targeted LLM inference to interpret unstructured information such as logs, notes, or retrieved documents under controlled prompts that maintain domain constraints. The combined system integrates Bayesian updating, deterministic routing, and semantic interpretation into a unified FDD pipeline. In a semiconductor manufacturing case study, the proposed framework reduced time to detection (TTD) from 7.4 h to 1.2 h and improved the F1 score from 0.59 to 0.83 when compared with conventional Statistical Process Control (SPC) and Fault Detection and Classification (FDC) workflows. Provenance completeness increased from 18% to 96%, while engineer triage time was reduced from 72 min to 18 min per event. These results demonstrate that the hybrid framework provides a scalable and explainable approach to anomaly detection and fault diagnosis in sensor-rich industrial environments. Full article
(This article belongs to the Special Issue Anomaly Detection and Fault Diagnosis in Sensor Networks)

39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. 
Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

18 pages, 2185 KB  
Article
Boosting NH3-Selective Catalytic Reduction of NOx by Cooperation of Nb and Boron Nitride to V-Based Catalyst over a Wide Temperature Window
by Bora Jeong, Myeung-Jin Lee, Ho Sung Jang, Sunmi Shin, Tae-hyung Kim, Heesoo Lee and Hong-Dae Kim
Appl. Nano 2026, 7(1), 9; https://doi.org/10.3390/applnano7010009 - 19 Mar 2026
Abstract
The commercialization of V-based catalysts for the selective catalytic reduction of NOx by NH3 (NH3-SCR) is hindered by their narrow operating temperature window, insufficient low-temperature (LT) activity, and severe SO2-to-SO3 oxidation. To bridge this gap, we herein introduced Nb and hexagonal BN into a VW/TiO2 system to simultaneously enhance its LT SCR activity, suppress undesired side reactions, and improve durability. Nb incorporation promoted V5+/V4+ redox cycling and enhanced lattice oxygen mobility, thus reducing the apparent activation energy and suppressing SO2 oxidation at elevated temperatures. However, excessive Nb loading induced NH3 oxidation and N2O formation. This drawback was mitigated by introducing BN as a dispersion promoter, which helped secure high catalytic performance at a reduced Nb content. The VWNb/Ti-BN catalyst achieved superior NOx conversion and N2 selectivity over a wide temperature range and benefited from notably suppressed NH3 oxidation and SO2-to-SO3 oxidation. Kinetic analysis revealed that Nb primarily lowered the reaction energy barrier via redox property enhancement, whereas BN accelerated surface reaction turnover by stabilizing and dispersing active acidic sites, markedly increasing the turnover frequency without reducing the activation energy. In situ spectroscopic analysis confirmed the accelerated consumption of adsorbed NH3 species and enhanced formation of reactive NOx intermediates, indicating SCR pathway enhancement. After aging in the presence of SO2 and H2O, the best-performing honeycomb-type monolithic catalyst retained a NOx conversion of >80%, demonstrating excellent long-term durability under practical conditions. A composition-aware machine learning model based on log-ratio-transformed variables quantitatively identified the synergistic balance among V, Nb, W, BN, and TiO2 as the dominant factor governing LT SCR performance. 
Thus, this work provides valuable mechanistic insights and a strategy for designing wide-temperature-window SCR catalysts with improved activity, selectivity, and resistance to sulfur poisoning. Full article

24 pages, 3360 KB  
Article
Satellite-Based Machine Learning for Temporal Assessment of Water Quality Parameter Prediction in a Coastal Shallow Lake
by Anja Batina, Ljiljana Šerić, Andrija Krtalić and Ante Šiljeg
J. Mar. Sci. Eng. 2026, 14(6), 566; https://doi.org/10.3390/jmse14060566 - 18 Mar 2026
Abstract
Satellite remote sensing increasingly supports water quality monitoring, yet the temporal transferability of machine learning (ML) models remains insufficiently tested, particularly in coastal shallow lakes subject to hydrological variability. This study evaluates the predictive robustness of satellite-based ML models for electrical conductivity (EC), turbidity (TUR), water temperature (WT), and dissolved oxygen (DO) in Vrana Lake, Croatia. A total of 409 in situ measurements collected during 2023–2024 and 2025 were paired with Sentinel-2 and Landsat 8–9 imagery. Pearson, Spearman, and Kendall correlation analyses were applied for parameter-specific band selection using original, inverse, quadratic, and logarithmic feature transformations. Seventeen regression algorithms were evaluated under six training–testing split strategies, including strict temporal projection. WT exhibited high robustness (R2 ≈ 0.90 under temporal projection) due to its strong dependence on thermal bands, while DO achieved moderate temporal stability (R2 = 0.51) using log-transformed predictors. EC and TUR demonstrated substantial performance degradation under temporal separation (R2 = 0.14 and −4.62, respectively), reflecting sensitivity to distribution shifts. For parameters showing sufficient stability, interpretable band-based retrieval equations were derived using the most strongly correlated spectral predictors. These findings highlight the importance of temporally structured validation and demonstrate that model complexity does not guarantee operational robustness in shallow, dynamically evolving lake systems. Full article
(This article belongs to the Special Issue Assessment and Monitoring of Coastal Water Quality)
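The correlation-based band selection with feature transformations described above can be illustrated with a small sketch: compute Pearson's r between a water quality parameter and each candidate transform of a spectral band, then keep the strongest. The reflectance and dissolved oxygen values below are made-up illustration data, not the Vrana Lake measurements.

```python
# Sketch of parameter-specific band screening via Pearson correlation
# across the abstract's four feature transformations (original,
# inverse, quadratic, logarithmic). All numbers are hypothetical.
import math

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

band = [0.02, 0.05, 0.11, 0.19, 0.33, 0.58]   # hypothetical reflectance
do_mg_l = [11.2, 10.1, 9.0, 8.2, 7.1, 6.3]    # hypothetical DO values

transforms = {
    "original": band,
    "inverse": [1 / v for v in band],
    "quadratic": [v ** 2 for v in band],
    "logarithmic": [math.log(v) for v in band],
}
for name, feat in transforms.items():
    print(f"{name:12s} r = {pearson(feat, do_mg_l):+.3f}")
```

In the study's design the winning transform per parameter then feeds the regression models; the abstract notes log-transformed predictors worked best for DO.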

13 pages, 2221 KB  
Proceeding Paper
Improving Preventive Maintenance Efficiency in University Laboratories Using Radio Frequency Identification-Based Decision Support System and Rapid Application Development Method
by Rizky Fajar Ahmad Gurnita, Rayinda Pramuditya Soesanto, Amelia Kurniawati and Fahmy Habib Hasanudin
Eng. Proc. 2026, 128(1), 41; https://doi.org/10.3390/engproc2026128041 - 18 Mar 2026
Abstract
Laboratory asset maintenance in higher education institutions often suffers from inefficiencies due to incomplete data and reactive maintenance practices. We designed a radio frequency identification (RFID)-based information system that supports preventive maintenance and decision-making for laboratory asset management. Utilizing the rapid application development method, the system was developed through iterative prototyping and stakeholder engagement. The system integrates RFID-based asset identification with a web-based interface for real-time monitoring and log management. A decision-support module was also implemented, allowing stakeholders to prioritize maintenance tasks based on asset age, repair frequency, and usage patterns. Evaluation results of user acceptance testing showed an average score of 82%, indicating strong usability and relevance. The results demonstrate that integrating RFID with decision-support features significantly improves maintenance planning, reduces operational risk, and optimizes resource allocation in academic laboratory environments. Full article

26 pages, 2185 KB  
Article
Visually Sustainable but Spatially Broken? A Two-Level Assessment of How Generative AI Encodes Sustainable Urban Design Principles
by Sanghoon Jung
Sustainability 2026, 18(6), 2943; https://doi.org/10.3390/su18062943 - 17 Mar 2026
Abstract
Generative AI enables rapid visualization of sustainable urban design scenarios, yet the question of whether these outputs encode sustainability as operable spatial logic, rather than merely depicting it as a visual impression, remains underexplored. This study proposes a two-level assessment framework that scores the same sustainability dimensions at both the visual-representation level and the spatial-logic level, treating the systematic decoupling between the two as a form of visual greenwashing: system-induced representational distortion rather than deliberate misrepresentation. Using AI-workflow reports from two site-based urban design studios (47 students, 12 teams, 36 coded scenes), the framework integrates rubric-based scoring with qualitative process tracing of breakdown–repair logs. Results show that image-level scores consistently outperform logic-level scores across all five dimensions, with the gap most severe in mobility hierarchy and walkability and smallest in green/blue infrastructure. Case analysis reveals that breakdowns arise from failures in program encoding, urban-scale coherence, functional-boundary demarcation, and relational-condition matching, and that students deploy multi-stage repair pipelines, including prompt restructuring, tool switching, reference injection, and external-source compositing, to re-inject collapsed spatial logic. These findings reframe AI-assisted urban design as repair-centered workmanship rather than automated production. The study proposes three guardrails to prevent visual sustainability from substituting for spatial-logic sustainability: image–logic paired submission, design audit trail formalization, and gap-based red-flag review. Full article

26 pages, 1623 KB  
Article
Graph-Augmented Fault Diagnosis in Power Systems with Imbalanced Text Data: A Knowledge Extraction and Agent-Based Reasoning Framework
by Yipu Zhang, Yan Guo, Qingbiao Lin, Zhantao Fan, Shengmin Qiu, Xiaogang Wu and Xiaotao Fang
Technologies 2026, 14(3), 181; https://doi.org/10.3390/technologies14030181 - 17 Mar 2026
Abstract
Fault diagnosis in modern power systems increasingly depends on unstructured operation and maintenance (O&M) logs, yet real-world logs are often small in scale and highly imbalanced across fault types, which degrades the generalizability of standard neural models. This paper proposes a graph-augmented diagnostic framework that integrates imbalance-aware knowledge extraction with interpretable reasoning. The framework consists of three stages: (1) domain adaptation of a BERT–BiLSTM–CRF NER model and a BERT–MLP RE model using an imbalance-aware training recipe that combines Low-Rank Adaptation (LoRA), a mixed focal–range loss, and undersampling; (2) construction of a power-system knowledge graph that organizes extracted entities and relations (e.g., fault devices, abnormal phenomena, causes, and handling measures); and (3) a graph-augmented assistant agent that reuses the NER model as a graph-aware retriever within a retrieval-augmented generation (RAG) architecture to support contextualized and interpretable diagnostic reasoning. Experiments on 3921 real-world fault-processing logs show consistent gains: NER reaches 92.0% accuracy and 71.3% Macro-F1 (vs. 80.3% and 63.2%), and RE achieves 88.0% accuracy and 70.1% F1 (vs. 82.1% and 60.4%), while reducing average training time per epoch by about 18%. These results demonstrate an efficient and practical path toward robust log-based fault diagnosis under scarce and imbalanced data. Full article
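Macro-F1, the headline NER metric in this abstract, averages per-class F1 scores so that rare fault types weigh as much as common ones, which is why it is the metric of choice for imbalanced logs. A minimal sketch follows; the entity tags are hypothetical, not the paper's label set or data.

```python
# Minimal Macro-F1 computation: per-label precision/recall/F1,
# averaged with equal weight per label. Labels below are hypothetical
# fault-log entity tags, not the paper's annotation scheme.

def macro_f1(y_true, y_pred):
    labels = set(y_true) | set(y_pred)
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["device", "device", "cause", "measure", "cause", "device"]
y_pred = ["device", "cause", "cause", "measure", "device", "device"]
print(f"Macro-F1 = {macro_f1(y_true, y_pred):.3f}")
```

Because each label contributes equally to the average, a model that ignores a rare class is penalized heavily, unlike plain accuracy (the abstract's 92.0% accuracy vs. 71.3% Macro-F1 gap reflects exactly this).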

21 pages, 836 KB  
Article
Trace-LogVector-Based Relational Retrieval for Conversational System Log Analysis
by Sun-Chul Park and Young-Han Kim
Sensors 2026, 26(6), 1806; https://doi.org/10.3390/s26061806 - 12 Mar 2026
Abstract
System logs generated in IoT-based and sensor-driven cloud environments encode execution traces and complex relationships among services, functions, and data stores. In many IoT deployments, telemetry is pre-processed at the edge and then integrated into backend services (e.g., application servers and databases) for analytics and operations. During this integration, service executions record relational dependencies (e.g., function-to-data-store interactions) as operational logs (or aggregated statistics), which constitute key evidence for operating sensor-driven services. We therefore evaluate TLV using publicly reproducible backend execution logs as a representative backend model and discuss the generality and limitations of this choice. However, most existing retrieval-augmented generation (RAG) approaches remain document-centric, representing logs as flat textual chunks that fail to preserve execution flow and entity relationships, which are critical for diagnosing complex service execution pipelines in sensor-driven cloud backends. In this study, we propose Trace-LogVector (TLV), a relational log representation that transforms system logs into trace-level retrieval units while explicitly preserving execution order and entity interactions. TLV is constructed based on the Chunk as Relational Data (CARD) design principle, which represents execution flows using entity-centric multi-chunk structures rather than single aggregated text chunks. To evaluate the impact of relational log representation, we conduct controlled experiments comparing single-chunk and CARD-based multi-chunk TLV under identical embedding and retrieval settings. Retrieval performance is quantitatively assessed using Hit@5 and Mean Reciprocal Rank at 5 (MRR@5). Experimental results show that the proposed multi-chunk TLV achieves a Hit@5 of 1.000 and an MRR@5 of 0.900, consistently outperforming the single-chunk baseline across all evaluation queries. 
These findings demonstrate that preserving execution contexts and entity relationships as relational retrieval units is a key factor in improving RAG-based system log analysis for monitoring and diagnosing large-scale sensor networks and cloud systems. Full article
(This article belongs to the Section Internet of Things)
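The two retrieval metrics reported above, Hit@5 and MRR@5, can be computed directly from the rank at which the first relevant trace is returned per query. The ranks in this sketch are hypothetical and do not reproduce the paper's evaluation set.

```python
# Hit@k: fraction of queries whose relevant item appears in the top k.
# MRR@k: mean of 1/rank for hits within the top k, else 0.
# Each entry in `ranks` is the 1-based rank of the first relevant
# trace for one query, or None if it was not retrieved (hypothetical).

def hit_at_k(ranks, k=5):
    return sum(r is not None and r <= k for r in ranks) / len(ranks)

def mrr_at_k(ranks, k=5):
    return sum(1.0 / r for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 1, 2, 1, 5]  # hypothetical first-relevant ranks per query
print(f"Hit@5 = {hit_at_k(ranks):.3f}")
print(f"MRR@5 = {mrr_at_k(ranks):.3f}")
```

A Hit@5 of 1.000 with MRR@5 below 1.0, as the paper reports, means every query found its relevant trace in the top five but not always at rank one.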

16 pages, 1565 KB  
Article
Shrimp Market Under Innovation Schemes: Hidden Markov Modeling
by Johnny Javier Triviño-Sanchez, Alexander Fernando Haro-Sarango, Julián Coronel-Reyes, Carlos Alfredo De Loor-Platón and Dayanna Soria-Encalada
J. Risk Financial Manag. 2026, 19(3), 214; https://doi.org/10.3390/jrfm19030214 - 12 Mar 2026
Abstract
This article models the Ecuadorian shrimp market as a nonlinear system with recurring latent regimes that affect margins and planning decisions. A multivariate Hidden Markov Model (HMM) with Gaussian emissions in log space is estimated via the Baum–Welch algorithm to segment the joint dynamics of pounds produced, dollars invoiced, and average price. The analysis uses monthly data from January 2017 to May 2025 (T = 101). The selected four-state specification shows strong fit and outperforms linear alternatives (log likelihood = 480.9; AIC = 859.8; BIC = 729.5). The dominant regime (State 2) concentrates high prices (~USD 2.97/lb) with intermediate production and acts as an attractor (stationary probability ≈ 1), while States 0 and 1 capture orderly expansion and oversupply conditions, and State 3 reflects episodic demand rallies. Adverse regimes (States 0–1) exhibit expected durations of 6–8 months, suggesting natural reversion toward the profitable regime. These estimates enable probabilistic regime forecasting and Monte Carlo scenario simulation to support hedging, inventory management, and financial stress testing. Overall, the proposed HMM framework provides an operational decision tool for producers, traders, and policymakers seeking to anticipate regime shifts, mitigate oversupply cycles, and stabilize margins. Full article
(This article belongs to the Section Mathematics and Finance)
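Two quantities this abstract reports for the fitted HMM can be read straight off a regime transition matrix: the stationary distribution (the long-run share of time in each state) and the expected regime duration, 1/(1 - p_ii) for self-transition probability p_ii. The matrix below is a hypothetical 4-state example, not the estimated shrimp-market model.

```python
# Stationary distribution via power iteration, plus expected regime
# durations, for a hypothetical 4-state Markov transition matrix
# (rows sum to 1; not the paper's estimated parameters).

def stationary(P, iters=500):
    """Propagate a uniform row vector through P until it stabilizes."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [
    [0.85, 0.10, 0.05, 0.00],
    [0.10, 0.86, 0.04, 0.00],
    [0.02, 0.03, 0.90, 0.05],
    [0.05, 0.05, 0.30, 0.60],
]
pi = stationary(P)
for s, (row, p) in enumerate(zip(P, pi)):
    print(f"state {s}: stationary pi = {p:.3f}, "
          f"expected duration = {1 / (1 - row[s]):.1f} months")
```

With p_ii around 0.85, the expected duration is roughly 6-7 months, the same order as the abstract's 6-8 month adverse-regime estimate; a state whose stationary probability dominates plays the attractor role of the paper's State 2.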

26 pages, 5380 KB  
Article
Analyzing Characteristics of Public Transport Complex Networks Based on Multi-Source Big Data Fusion: A Case Study of Cangzhou, China
by Linfang Zhou, Yongsheng Chen, Dongpu Ren and Qing Lan
Future Internet 2026, 18(3), 144; https://doi.org/10.3390/fi18030144 - 11 Mar 2026
Abstract
Quantitative evaluation of public transit networks (PTNs) with complex-network models informs route optimization and operational adjustments. Prior studies emphasize large cities and pay limited attention to small-sized urban systems. This study examines the bus network of Cangzhou City, Hebei Province, China, to broaden the empirical scope and characterize PTNs in smaller cities. The dataset for this study comprises route and stop records, passenger boarding logs, and bus GPS traces. We develop a general workflow for bus data cleaning and completion. To characterize the dynamic bus network and compare it with the static network, we construct a static network and Directed Weighted Dynamic Network I (DWDN I) using the L-space method, and we construct Directed Weighted Dynamic Network II (DWDN II) using the P-space method. We calculated network metrics including degree, weighted degree, clustering coefficient, path length, network diameter, network efficiency, and small-world coefficient. The principal results show that: (1) at the macroscopic level, the dynamic PTN tracks passenger demand, as the average degree, weighted average degree, and clustering coefficient fluctuate in concert with passenger flows; (2) key stations concentrate in the urban core, and stations with high weighted degree display pronounced spatial autocorrelation; (3) the exponential form of the weighted-degree distribution indicates that the examined bus network is not scale-free, while the dynamic network’s small-world coefficient exceeds that of the static network across time periods, reflecting stronger small-world characteristics. This study integrates network and spatial attributes of the PTN to offer an exploratory case for investigating public transit networks in third-tier cities. The findings can inform comparable studies and offer practical guidance for bus operators. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

27 pages, 1636 KB  
Article
Traffic Incident Impact Prediction Using Machine Learning and Explainable AI: Evidence from Istanbul
by Adem Korkmaz, Ufuk Çelik and Vedat Tümen
Electronics 2026, 15(6), 1162; https://doi.org/10.3390/electronics15061162 - 11 Mar 2026
Abstract
Traffic incident impact prediction remains challenging for intelligent transportation systems due to complex spatiotemporal dependencies. This study analyzes 38,430 real-world traffic incidents from Istanbul (2022–2024) to predict normalized traffic deviation ΔTraffic(%) using machine learning with rigorous temporal validation. Three models—Random Forest (RF), XGBoost, and LightGBM—were evaluated using rolling-origin cross-validation (2022 training, 2023 testing; 2022–2023 training, 2024 testing) to prevent temporal leakage, employing a strictly operational 13-feature set that excludes information unavailable at incident onset (t0). LightGBM achieved MAE = 26.81 ± 1.94% and R2 = 0.506 ± 0.042 (mean ± std across folds) with 95% bootstrap confidence intervals of [27.54%, 28.81%] for MAE on the 2024 test set, significantly outperforming historical baselines (R2 = 0.100 ± 0.054, p < 0.001, Bonferroni-corrected). Feature ablation studies revealed that temporal features contribute 65.2% of predictive power, while incident type contributes only 1.3%. Distributional robustness analysis confirms conclusions are stable across distributional treatments (log, winsorised, quantile), with feature importance rank correlations ρ = 1.000 between all treatment pairs. This work provides empirical evidence for context-aware traffic management systems and demonstrates the importance of proper temporal validation in transportation forecasting. Full article
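The rolling-origin validation scheme this abstract uses to prevent temporal leakage (train on 2022, test on 2023; then train on 2022-2023, test on 2024) can be sketched as a forward-rolling split. The incident records below are hypothetical placeholders, not the Istanbul dataset.

```python
# Minimal rolling-origin cross-validation: each fold trains on all
# years strictly before the test year, so no future information leaks
# into training. Records are hypothetical (year, target) pairs.

def rolling_origin_splits(records, years):
    """Yield (train, test) folds with the origin rolling forward
    one year at a time; train always precedes test in time."""
    for i in range(1, len(years)):
        train = [r for r in records if r["year"] in years[:i]]
        test = [r for r in records if r["year"] == years[i]]
        yield train, test

records = [{"year": y, "delta_traffic": d}
           for y, d in [(2022, 31.0), (2022, 18.5), (2023, 24.2),
                        (2023, 40.1), (2024, 27.8)]]
for fold, (train, test) in enumerate(
        rolling_origin_splits(records, [2022, 2023, 2024])):
    print(f"fold {fold}: train={len(train)} records, test={len(test)} records")
```

Contrast this with a random shuffle split, which would mix 2024 incidents into training and inflate apparent performance, the leakage the paper's validation design is built to avoid.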

36 pages, 5029 KB  
Article
Option-C Verified Semantic Digital Twins for Decarbonized, Pressure-Reliable Central Business District Hospitals
by Zhe Wei
Buildings 2026, 16(6), 1096; https://doi.org/10.3390/buildings16061096 - 10 Mar 2026
Abstract
Central business district (CBD) hospitals must sustain reliable pressure relationships in critical rooms while reducing whole-facility carbon under tight space and disruption constraints. We developed an ontology-grounded semantic digital twin that normalizes building automation system (BAS) and building management system (BMS) telemetry into a unified semantic store consistent with Brick Schema, enabling portable asset discovery via query and thereby supporting forecasting, anomaly detection, and multi-objective optimization without dependence on vendor point naming conventions. Whole-facility impacts were verified using International Performance Measurement and Verification Protocol Option C–style measurement and verification with an S0-calibrated baseline model and residual-based savings attribution. Relative to the baseline (S0), the intervention (S3) produced a step increase in the critical-room pressure-compliance pass rate, tighter room-to-corridor differential-pressure (ΔP) control across airborne infection isolation and open room strata, and intent-aligned ventilation delivery (air changes per hour ratio distribution concentrated near unity; p < 0.05 where letter groups differ). Operational-state discrimination improved (AUC 0.649→0.696) and issue-resolution times shortened (left-shifted cumulative distribution function), indicating reduced service burden. Option C verification showed energy residuals shifting negative under S3, consistent with net savings versus baseline expectations. Across progressive maturity (S0→S3), time-to-value and burden fractions decreased, carbon intensity (tCO2e m−2) decreased, long-tail exposure compressed (log-scale horizon), and composite performance indices increased (p < 0.05). These results demonstrate a verifiable pathway to pressure-reliable, decarbonized hospital operations at the whole-facility boundary while making the semantic layer’s utility explicit through query-driven, ontology-grounded asset discovery. 
We present an IPMVP Option-C–verifiable semantic digital-twin governance framework that links audited operational evidence (telemetry → actions → verification) to whole-facility energy and carbon outcomes while maintaining critical-room pressure-relationship reliability. Optimization benchmarking (including quantum annealing) is used as supporting decision-support evaluation, rather than as the central contribution. Full article
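The residual-based savings attribution used in Option C-style verification can be sketched in a few lines. The linear load-versus-temperature baseline and all numbers below are illustrative assumptions, not the authors' calibrated S0 model; the point is only that negative post-period residuals indicate savings relative to the baseline expectation.

```python
import numpy as np

def fit_baseline(temps, loads):
    """Fit a simple linear load-vs-temperature baseline on
    pre-intervention (S0-style) data via least squares."""
    design = np.column_stack([temps, np.ones_like(temps)])
    coef, *_ = np.linalg.lstsq(design, loads, rcond=None)
    return coef  # (slope, intercept)

def residual_savings(coef, temps_post, loads_post):
    """Predict the counterfactual baseline energy for the post
    period; negative residuals indicate savings."""
    slope, intercept = coef
    predicted = slope * temps_post + intercept
    residuals = loads_post - predicted
    return residuals, residuals.sum()

# Synthetic illustration: an exactly linear baseline period,
# then a post period with identical weather but 10% lower use.
temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
loads = 100.0 - 2.0 * temps
coef = fit_baseline(temps, loads)
residuals, total = residual_savings(coef, temps, 0.9 * loads)
```

Summing the residuals over the reporting period gives the avoided energy; in the abstract's terms, residuals shifting negative under S3 correspond to net savings versus the baseline expectation.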

23 pages, 15039 KB  
Article
Impact of Atmospheric Turbulence on Data Quality During BVLOS UAV Missions in Antarctic Conditions
by Anna Zmarz and Mirosław Rodzewicz
Drones 2026, 10(3), 187; https://doi.org/10.3390/drones10030187 - 9 Mar 2026
Abstract
This article presents an analysis of the impact of atmospheric turbulence on the quality of images obtained during photogrammetric missions in Antarctica using a fixed-wing UAV operating in BVLOS mode. Image quality was evaluated primarily by the degree of blurring, which served as the main assessment criterion. In the Antarctic region, turbulence is a frequent phenomenon and can occur even under very light wind conditions, an observation that motivated this study. Autopilot log data were used to conduct a series of analyses, resulting in maps of areas where turbulence symptoms were recorded. In parallel, the quality of images captured during the mission was examined, producing a map of blurring levels assessed on a five-point scale. The study shows that UAV image blurring is mainly caused by sudden camera movements, mechanical vibrations from the propulsion system, and atmospheric turbulence that disrupts flight stability and overwhelms image stabilization. Additional factors such as low-light conditions, fog, haze, precipitation, glare, and moving shadows further reduce image clarity. Full article
(This article belongs to the Section Drones in Ecology)
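The five-point blur scale in this study was assessed by the authors' own criteria, which the abstract does not specify. As a rough automated proxy, image blur is often scored by the variance of the discrete Laplacian (lower variance means fewer sharp edges, hence more blur); the sketch below uses that common metric with illustrative thresholds, not the paper's method.

```python
import numpy as np

def laplacian_variance(img):
    """Blur score: variance of the 4-neighbour discrete Laplacian.
    Lower variance means fewer sharp edges, i.e. more blur."""
    img = np.asarray(img, dtype=float)
    response = (
        -4.0 * img[1:-1, 1:-1]
        + img[:-2, 1:-1] + img[2:, 1:-1]
        + img[1:-1, :-2] + img[1:-1, 2:]
    )
    return response.var()

def blur_grade(score, thresholds=(5.0, 15.0, 40.0, 100.0)):
    """Map a sharpness score to a five-point scale
    (1 = most blurred, 5 = sharpest). Thresholds are illustrative."""
    return 1 + sum(score >= t for t in thresholds)
```

A perfectly flat frame scores 0 (grade 1), while a high-contrast pattern such as a checkerboard scores high; on real imagery the thresholds would have to be calibrated against visually graded reference frames.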

26 pages, 409 KB  
Article
Unified Data Governance in Heterogeneous Database Environments: An API-Driven Architecture for Multi-Platform Policy Enforcement
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Data 2026, 11(3), 54; https://doi.org/10.3390/data11030054 - 7 Mar 2026
Abstract
Modern organizations increasingly rely on heterogeneous database environments that combine relational, document-oriented, and key-value storage systems to optimize performance for diverse application requirements. However, this technological diversity creates significant challenges for implementing consistent data governance policies, regulatory compliance, and access control across disparate systems. Traditional governance approaches that operate within individual database silos fail to provide unified policy enforcement and create compliance gaps that expose organizations to regulatory and operational risks. This paper presents a novel API-driven architecture that enables unified data governance across heterogeneous database environments without requiring database-specific modifications or vendor lock-in. The proposed framework implements a centralized governance layer that coordinates policy enforcement across PostgreSQL, MongoDB, and Amazon DynamoDB systems through RESTful API interfaces. Key architectural components include differentiated access control through hierarchical API key management, automated compliance workflows for regulatory requirements such as GDPR, real-time audit trail generation, and comprehensive data quality monitoring with automated improvement mechanisms. Comprehensive experimental evaluation demonstrates the framework’s effectiveness across multiple operational dimensions. The system achieved 95.2% accuracy in access control enforcement across different data classification levels, while automated GDPR compliance workflows demonstrated 98.6% success rates with average processing times of 2.9 h. Performance evaluation reveals acceptable overhead characteristics with linear scaling patterns for PostgreSQL operations (R2 = 0.89), consistent sub-20 ms response times for MongoDB logging operations, and sustained throughput rates ranging from 38.9 to 142.7 requests per second across the integrated system. Data quality improvements ranged from 16.1% to 34.3% across accuracy, completeness, consistency, and timeliness dimensions over a 12-week monitoring period, with accuracy improving by 17.8 percentage points, completeness by 13.2 percentage points, consistency by 19.7 percentage points, and timeliness by 24.5 percentage points. The duplicate detection system achieved 94.6% precision and 95.6% recall across various duplicate types, including cross-database redundancy identification. The results demonstrate that API-driven governance architectures can effectively address the persistent challenges of policy fragmentation in multi-database environments while maintaining operational performance and enabling measurable improvements in data quality and regulatory compliance. The framework provides a practical migration path for organizations seeking to implement comprehensive governance capabilities without replacing existing database infrastructure investments. Full article
(This article belongs to the Section Information Systems and Data Management)
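The hierarchical API-key access control described in the abstract can be sketched as a simple clearance-ordering check: a key may read data classified at or below its own level. The classification names and the `ApiKey` structure below are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass

# Illustrative classification hierarchy; the level names are
# assumptions, not the paper's actual taxonomy.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass(frozen=True)
class ApiKey:
    owner: str
    clearance: str  # highest classification this key may read

def can_access(key: ApiKey, data_classification: str) -> bool:
    """Hierarchical check: a key may read data classified at or
    below its own clearance level."""
    return LEVELS[key.clearance] >= LEVELS[data_classification]

analyst = ApiKey(owner="analyst-7", clearance="internal")
```

In a real deployment this check would sit in the centralized governance layer, applied uniformly before any request is forwarded to PostgreSQL, MongoDB, or DynamoDB, with each decision written to the audit trail.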
