Search Results (971)

Search Parameters:
Keywords = information bottleneck

24 pages, 2148 KB  
Review
Research Progress on the Detection of Deep-Sea Microorganisms and the Significance of Measurement Standards
by Ziyi Cheng, Mei Zhang, Huijun Yuan, Jingjing Liu and Yongzhuo Zhang
Chemosensors 2026, 14(4), 94; https://doi.org/10.3390/chemosensors14040094 (registering DOI) - 11 Apr 2026
Abstract
The exploration of deep-sea microorganisms is transitioning from ex situ laboratory analysis to in situ real-time monitoring. While in situ technologies offer unprecedented access to microbial activities in their natural extreme habitats, they face a critical, yet often overlooked, bottleneck: the absence of a robust metrological framework. This lack of standardized calibration, traceability, and reference materials results in data that are often irreproducible, device-specific, and incomparable across studies, severely undermining scientific discovery and resource assessment. This review provides a systematic analysis of the current landscape of deep-sea microbial detection technologies, categorizing them by their operational principles and critically evaluating their performance, limitations, and metrological readiness. By synthesizing the technological challenges with the principles of metrology, we identify the fundamental gap between advanced sensing capabilities and the lack of in situ measurement standards. To bridge this gap, we propose an innovative “laboratory simulation–in situ detection–remote calibration” trinity calibration system. This framework establishes a complete metrological traceability chain tailored for extreme deep-sea conditions, aiming to transform isolated sensor data into globally comparable, scientifically robust, and industrially actionable information, thereby paving the way for precision deep-sea biology and governance. Full article
(This article belongs to the Section (Bio)chemical Sensing)

16 pages, 1803 KB  
Article
A Physics-Coupled Deep LSTM Autoencoder for Robust Sensor Fault Detection in Industrial Systems
by Weiwei Jia, Youcheng Ding, Xilong Ye, Xinyi Huang, Maofa Wang and Chenglong Miao
Processes 2026, 14(8), 1213; https://doi.org/10.3390/pr14081213 - 10 Apr 2026
Abstract
Reliable sensor fault detection is critical for the safe and efficient operation of complex industrial systems, such as thermal power plants. However, traditional data-driven methods and standard deep learning models often struggle to detect incipient gradual drift faults under severe environmental noise, primarily because they ignore the inherent physical correlations among multivariate sensor signals. To address this challenge, this paper proposes a novel Physics-Coupled Deep Long Short-Term Memory Autoencoder (PC-Deep-LSTM-AE). Specifically, we integrate a deep LSTM architecture with an explicit non-linear information compression bottleneck and layer normalization to enhance robust feature extraction in high-noise environments. Furthermore, we innovatively introduce a Physics-Coupling Loss (PCC Loss) that jointly optimizes the mean squared reconstruction error and the Pearson correlation coefficient, forcing the model to strictly preserve the dynamic physical relationships among multivariable signals. Extensive experiments were conducted on a real-world thermal power plant dataset with severe noise injection. The results demonstrate that the proposed PC-Deep-LSTM-AE achieves an outstanding F1-score of over 0.98, significantly outperforming mainstream baseline models, including Vanilla LSTM-AE, GRU-AE, Bi-LSTM-AE, and CNN-AE. The proposed method exhibits exceptional robustness and high interpretability for root-cause analysis, highlighting its immense potential for real-world industrial deployment. Full article
(This article belongs to the Section Process Control and Monitoring)
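The abstract above combines a mean-squared reconstruction error with a Pearson-correlation term. A minimal numpy sketch of that style of physics-coupling loss (the weighting `lam` and the exact form of the correlation penalty are assumptions for illustration, not the paper's formulation):

```python
import numpy as np

def pcc_loss(x, x_hat, lam=0.5):
    """Sketch of a physics-coupling loss: MSE reconstruction error plus a
    penalty when the reconstruction distorts the Pearson correlation
    structure among sensor channels. `lam` is an assumed weighting."""
    mse = np.mean((x - x_hat) ** 2)
    # Pearson correlation matrices across channels (columns = sensors)
    r_true = np.corrcoef(x, rowvar=False)
    r_hat = np.corrcoef(x_hat, rowvar=False)
    corr_penalty = np.mean(np.abs(r_true - r_hat))
    return mse + lam * corr_penalty
```

With `lam = 0` this reduces to a plain autoencoder loss; the correlation term additionally penalizes reconstructions that break cross-sensor dependencies even when their pointwise error is small.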

20 pages, 1293 KB  
Article
Enhancing Long-Term Forecasting Stability in Smart Grids: A Hybrid Mamba-LSTM-Attention Framework
by Fusheng Chen, Chong Fo Lei, Te Guo and Chiawei Chu
Energies 2026, 19(8), 1855; https://doi.org/10.3390/en19081855 - 9 Apr 2026
Abstract
Accurate multivariate long-term time series forecasting (LTSF) is critical for smart grid operations. However, non-stationary distribution shifts frequently induce compounding error accumulation in conventional architectures. This study proposes the Mamba-LSTM-Attention (MLA) framework, a distribution-aware architecture engineered for forecasting stability. The pipeline integrates Reversible Instance Normalization (RevIN) to neutralize statistical drift. To address computational bottlenecks, the architecture utilizes a linear-time Selective State Space Model (Mamba) to capture global trend dynamics, cascaded with a single-layer gated Long Short-Term Memory (LSTM) unit to model localized non-linear residuals. A terminal information bottleneck structurally bounds cross-step error propagation. Empirical results across standard ETT and Electricity benchmarks reveal a precision–stability trade-off. By prioritizing structural resilience, the MLA framework limits error accumulation on highly volatile datasets, yielding MSEs of 0.210 and 0.128 on ETTh2 and ETTm2 at the T = 96 horizon. This structural bottleneck inherently smooths high-frequency periodic patterns, yielding lower absolute accuracy on stationary benchmarks such as ETTh1 and ETTm1. Ultimately, the architecture establishes a computationally efficient, structurally stable baseline tailored for non-stationary anomaly tracking in smart grids. Full article
(This article belongs to the Special Issue Forecasting Electricity Demand Using AI and Machine Learning)
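RevIN, named in the abstract above, normalizes each input window by its own statistics and restores them on the model output, which is what neutralizes per-instance distribution drift. A minimal sketch of the reversible mechanics (the published method also learns affine parameters, omitted here):

```python
import numpy as np

class RevIN:
    """Minimal sketch of Reversible Instance Normalization: each window is
    standardized by its own mean/std before the forecaster, and the forecast
    is de-normalized with the same statistics afterwards."""
    def __init__(self, eps=1e-5):
        self.eps = eps

    def normalize(self, x):   # x: (time, channels)
        self.mu = x.mean(axis=0, keepdims=True)
        self.sigma = x.std(axis=0, keepdims=True) + self.eps
        return (x - self.mu) / self.sigma

    def denormalize(self, y):  # restore the window's scale and offset
        return y * self.sigma + self.mu
```

Because the statistics are stored per window, the round trip is exact up to floating point, which is what makes the normalization "reversible".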

21 pages, 1931 KB  
Article
A Shapelet Transform-Based Method for Structural Damage Identification: A Case Study on a Wooden Truss Bridge
by Ke Gan, Yingzhuo Ye, Fulin Nie, Ching Tai Ng and Liujie Chen
Sensors 2026, 26(8), 2323; https://doi.org/10.3390/s26082323 - 9 Apr 2026
Abstract
The impact of environmental disturbances and sensor deployment variations on damage identification represents a critical bottleneck that constrains the practical effectiveness of structural health monitoring. Existing methods addressing these challenges often suffer from poor interpretability due to information loss during feature extraction or exhibit insufficient sensitivity in identifying early-stage minor damage. This paper proposes a damage identification method based on the Shapelet Transform and Random Forest classifier, which extracts highly interpretable local shape features from vibration response signals to achieve robust identification of structural state changes. The study utilizes measured random vibration response data from a timber truss bridge. The dataset comprises four reference states collected on different dates and five damage states simulated by additional masses ranging from +23.5 g to +193.7 g, with sensors deployed in both vertical and horizontal directions. The Shapelet Transform selects local subsequences with high information gain from the original time series as features, which are subsequently classified using the Random Forest algorithm. The experimental design systematically investigates the influence of different damage severities, sensor locations, and environmental variations on method performance. The results demonstrate that with a Shapelet extraction time of 10 min, the method achieves 100% identification accuracy across multiple operating conditions comprehensively considering environmental variations, sensor location differences, and varying damage severities. When the extraction time is reduced to 5 min, 3 min, and 1 min, the average accuracies are 93.98%, 89.51%, and 58.48%, respectively. The method effectively identifies the minimum simulated damage (+23.5 g), which represents only 0.07% of the total structural mass, while maintaining stable performance under varying sensor locations and environmental conditions. 
Compared to traditional methods based on global frequency-domain features or statistical characteristics, the proposed method extracts physically meaningful local Shapelet features, offering significant advantages in interpretability. In contrast to deep learning approaches, this method demonstrates greater robustness under limited sample conditions. This study confirms that the combined framework of the Shapelet Transform and Random Forest can effectively address multiple real-world challenges in structural health monitoring, delivering high accuracy, strong robustness, and excellent interpretability, thereby providing a novel approach for developing practical real-time damage identification systems. Full article
(This article belongs to the Section Industrial Sensors)
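The shapelet feature described above is the minimum distance between a short, discriminative subsequence and a signal; a series is then represented by its distances to a set of high-information-gain shapelets. A small sketch of that transform step (shapelet discovery itself is omitted):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Shapelet Transform distance: the minimum Euclidean distance between
    the shapelet and any equal-length subsequence of the series. The
    information-gain-based selection of shapelets is not shown."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)
```

Feeding these distances (one per selected shapelet) into a Random Forest gives the classification stage the abstract describes.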

17 pages, 1473 KB  
Article
Key Updatable Cross-Domain-Message Anonymous Authentication Scheme Based on Dual-Chain for VANET
by Mei Sun, Dongbing Zhang, Yuyan Guo and Xudong Zhai
Electronics 2026, 15(7), 1541; https://doi.org/10.3390/electronics15071541 - 7 Apr 2026
Abstract
Traditional VANET authentication schemes often face challenges such as centralization bottlenecks and the updating of vehicle keys or pseudonyms. This paper proposes a layered approach that divides VANET into regions, utilizing dual-blockchain to enable anonymous message authentication between vehicles and RSUs, as well as between vehicles within the VANET. Compared to traditional blockchain authentication methods, this paper introduces an approach that enhances authentication efficiency and ensures information security by establishing secure connections between private and consortium chains through a trusted authority (TA). By leveraging third-party public parameter updates, the automatic updating of private and public keys for VANET nodes is achieved without the need for certificate issuance and updates. This approach facilitates long-term anonymous authentication and communication between VANET nodes, reduces the frequency of authentication interactions, simplifies authentication processes, and lowers computational and communication costs. The proposed scheme is well-suited for practical VANET environments that require low authentication latency and robust large-scale privacy protection. Full article

22 pages, 4917 KB  
Technical Note
Reducing Latency in Digital Twins: A Framework for Near-Real-Time Progress and Quality Reporting
by Zvonko Sigmund, Ivica Završki, Ivan Marović and Kristijan Vilibić
Buildings 2026, 16(7), 1448; https://doi.org/10.3390/buildings16071448 - 6 Apr 2026
Abstract
While Digital Twins offer transformative potential, their efficacy for real-time control is constrained by the slow data acquisition and the high computational intensity required to process raw datasets like point clouds. This paper identifies these critical bottlenecks—specifically the latency between data capture and actionable insight—and proposes a refined theoretical framework for near-real-time automated progress monitoring and quality reporting. Building on the findings of the NORMENG project and informing the subsequent AutoGreenTraC project, this research synthesizes state-of-the-art advancements in reality capture, including LIDAR, SfM-MVS, and 360-degree vision. The study highlights a fundamental divergence in stakeholder requirements: the need for millimeter-level precision in quality control versus the demand for high-velocity documentation for progress monitoring. A key innovation presented is the shift toward neural rendering techniques to bypass the computational delays of traditional photogrammetry and enable immediate on-site visualization. By structuring a tiered processing hierarchy that combines lightweight edge analysis for immediate safety and progress monitoring with asynchronous high-fidelity Digital Twin updates, the framework aims to establish a single source of truth. Full article

14 pages, 1632 KB  
Perspective
Post-Document Science: From Static Narratives to Intelligent Objects
by Mehmet Fırat
Standards 2026, 6(2), 14; https://doi.org/10.3390/standards6020014 - 3 Apr 2026
Abstract
Scientific publishing is currently constrained by an unstructured narrative bottleneck paradigm, which increasingly diverges from the scale, complexity, and computational nature of modern research. Despite rapid advancements in data generation and analysis, scientific knowledge is predominantly disseminated as static narrative artifacts, thereby limiting reproducibility, machine accessibility, and cumulative integration. This study explores how scientific communication can be restructured to facilitate scalable validation and reliable knowledge accumulation. We propose the Object-Oriented Scientific Information paradigm, wherein scientific contributions are represented as executable, machine-interpretable objects that integrate structured data, reproducible methodologies, and formally encoded semantic claims. To operationalize this paradigm, we delineate the architecture of an Autonomous Knowledge Engine, a modular neuro-symbolic system that combines domain-specialized Mixture-of-Experts routing, formal verification of claims, and an information-theoretic filter based on marginal information gain. This architecture enables continuous validation, redundancy control, and the integration of scientific contributions within an active knowledge graph. The analysis demonstrates that Object-Oriented Scientific Information (OOSI) and Autonomous Knowledge Engine (AKE) fundamentally differ from existing document-based, executable, and semantic publishing models by shifting epistemic control from narrative evaluation to computational verification. We conclude that transitioning toward a computable scientific record is essential for sustaining reliable and self-correcting science in the context of accelerating knowledge production. Full article

35 pages, 3098 KB  
Article
ImmerseFM-3D: A Foundation Model Framework for Generalizable 360-Degree Video Streaming with Cross-Modal Scene Understanding
by Reka Sandaruwan Gallena Watthage and Anil Fernando
Appl. Sci. 2026, 16(7), 3424; https://doi.org/10.3390/app16073424 - 1 Apr 2026
Abstract
Current 360-degree video streaming systems consider viewport prediction, adaptive bitrate allocation, tile selection, and quality-of-experience (QoE) estimation as independent activities, yielding fragmented pipelines that do not scale well across content type and network conditions and do not scale well to individual users. We propose ImmerseFM-3D, a foundation model that jointly solves all four sub-tasks through a single shared representation. Seven input modalities, namely video frames, network traces, head-motion trajectories, ambisonics audio, depth maps, eye-tracking signals, and CLIP scene semantics, are fused by four-layer cross-modal attention and compressed into a 256-dimensional bottleneck latent via a variational information bottleneck. Four task-specific decoders operate on this shared latent simultaneously. A model-agnostic meta-learning adapter augmented with episodic memory and a hypernetwork personalizes the model from as little as 1 s of user interaction data. An extended branch supports six-degrees-of-freedom volumetric content through spherical harmonic viewport decoding and depth-aware tile importance weighting. Trained and evaluated on the IMMERSE-1M combined dataset (1000 h of 360° and volumetric video, 524 users, and over 50,000 mean opinion scores), ImmerseFM-3D reduces the mean angular viewport error by 34%, lowers the bandwidth violation rate from 8.3% to 3.1%, and achieves a QoE Pearson correlation of 0.891. The personalization adapter reaches 90% of peak performance in 22 s, while zero-shot cross-format transfer attains 72% of full in-domain accuracy. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
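The variational information bottleneck mentioned above regularizes a diagonal-Gaussian latent toward a standard normal. A sketch of that KL term for a 256-dimensional latent (this is the textbook regularizer; the encoder, the four task decoders, and the paper's exact objective are omitted):

```python
import numpy as np

def vib_kl(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian bottleneck, the
    standard regularizer of a variational information bottleneck.
    `mu` and `log_var` are the encoder's per-dimension outputs."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```

The term is zero exactly when the posterior matches the prior, and grows as the latent carries more information about the input, which is what enforces the compression side of the bottleneck.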

16 pages, 597 KB  
Article
Bottlenecks Beyond Primary Care: Patient and Healthcare Worker Perspectives on Access to Specialists, Diagnostics, and System Organisation in Poland
by Anna Domańska, Sabina Lachowicz-Wiśniewska and Wioletta Żukiewicz-Sobczak
Healthcare 2026, 14(7), 894; https://doi.org/10.3390/healthcare14070894 - 31 Mar 2026
Abstract
Background/Objectives: Access delays in specialist consultations and diagnostics are frequently cited as key weaknesses of the Polish healthcare system. This study aimed to identify patient- and healthcare employee-reported bottlenecks beyond primary care, focusing on access, organisational and information barriers. Methods: A cross-sectional survey in Primed Medical Center in Lublin (south-eastern Poland) analysed fifty eligible adult respondents (58% patients; 42% healthcare employees). Measures covered access and organisational barriers (primary care, specialists/diagnostics, out-of-hours), perceived quality, equity, and satisfaction. Results: Overall dissatisfaction predominated (66.0% rather/definitely dissatisfied vs. 24.0% somewhat/definitely satisfied), and 70.0% indicated that reform is needed. The most frequent constraints concerned appointment scheduling convenience (88.0%), limited specialist access (86.0%), inability to obtain timely diagnostics (80.0%), unclear guidance on where to seek help (78.0%), and low administrative efficiency (74.0%). Additional concerns included out-of-hours access (60.0% reported no immediate night help) and perceived inequity (58.0% reported unequal access; 62.0% reported unequal treatment). In contrast, primary care availability was rated positively by 78% on a qualitative scale, and physician competence by 62%. Associations with sex, age, residence, and role were significant but small to moderate. Conclusions: Respondents differentiate clinical competence from system performance: negative assessments cluster around organisational barriers and capacity constraints in specialist and diagnostic pathways. Improving patient navigation and information, scheduling and administrative workflows, and specialist/diagnostic capacity—while strengthening primary care coordination—may reduce delays and support more equitable, higher-quality care. Full article
(This article belongs to the Section Public Health and Preventive Medicine)

26 pages, 2252 KB  
Review
Detection and Source Identification of Goaf Water Accumulation in Chinese Coal Mines: A Review and Evaluation
by Jianying Zhang and Wenfeng Wang
Appl. Sci. 2026, 16(7), 3370; https://doi.org/10.3390/app16073370 - 31 Mar 2026
Abstract
Water accumulation in goafs in Chinese coal mines is a major hidden hazard that can trigger water inrush accidents and may also affect aquifer integrity and regional water security. Reliable delineation of goaf water distribution and identification of water-source types are therefore essential for mine water-hazard control and groundwater protection. This paper reviews the main technical routes for goaf groundwater investigation, including geophysical prospecting, hydrogeochemical and isotopic identification, direct inspection tools, and data-driven intelligent workflows. For geophysical detection, the mechanisms, engineering applicability, and key constraints of the Transient Electromagnetic Method (TEM), Surface Nuclear Magnetic Resonance (NMR), the High-Density Resistivity Method (HDRM), and the Coherent Frequency Component (CFC) electromagnetic wave reflection coherence method are synthesized, with emphasis on interpretation boundaries and uncertainty sources under complex geological conditions. For source identification, conventional hydrochemistry, stable isotopes, and laser-induced fluorescence are summarized, and intelligent recognition models such as neural networks and support vector machines are discussed in terms of workflow positioning and practical performance limits. A unified evaluation rationale is established and a semi-quantitative method–metric matrix is constructed to compare techniques in terms of reliability, deployability, cost level, environmental adaptability, and information value, thereby clarifying their functional roles and complementarities within staged engineering workflows. The synthesis indicates that major bottlenecks include limited deep capability under strong interference, pronounced interpretational non-uniqueness caused by complex geology and irregular goaf geometries, and constrained timeliness and generalization for mixed-source identification. 
Future directions are summarized as multi-method integration with fusion-driven interpretation, intelligent and quantitative decision support with quality control, and sensor–platform advances enabling more practical three-dimensional investigation, aiming to improve the reliability and engineering usability of goaf groundwater hazard assessment. Full article
(This article belongs to the Section Earth Sciences)

16 pages, 2260 KB  
Article
Urban Environmental Determinants and Spatiotemporal Patterns of Emergency Medical Service Response to Traumatic Injuries: A Five-Year Population-Based Study
by Akerke Chayakova and Oxana Tsigengagel
Int. J. Environ. Res. Public Health 2026, 23(4), 434; https://doi.org/10.3390/ijerph23040434 - 30 Mar 2026
Abstract
Background: Timely prehospital management is critical for survival after traumatic injury. In rapidly growing metropolises, emergency medical service (EMS) systems often struggle to provide equitable care amid urban sprawl and traffic congestion. This study investigated spatiotemporal inequalities in trauma-related EMS response in a rapidly expanding capital city (Astana, Kazakhstan) to inform healthcare optimization and urban health equity. Methods: We analyzed a five-year population-based dataset of 26,073 trauma-related EMS calls recorded between 2020 and 2024. Spatial patterns were examined using Kernel Density Estimation (KDE) and Getis–Ord Gi* hotspot analysis. Road-network modeling assessed accessibility at 3, 5, and 10 min thresholds using a GIS-based network analyst framework. Results: Males accounted for 60.1% of utilization and had higher clinical severity (hospitalization rate: 45.5% vs. 40.3%, p < 0.001). Demand peaked at 20:00, coinciding with peak traffic. The mean total response time was 21.63 min, and only 16.9% of calls met the 10 min benchmark. Significant accessibility gaps were found in the Baikonur district (61.4% delay rate). Conclusions: The findings demonstrate that while the EMS system provides broad geographic coverage, it suffers from systemic spatiotemporal bottlenecks. Targeted infrastructure expansion in underserved peripheral districts and the implementation of dynamic deployment models are necessary to enhance urban health equity and reduce preventable mortality in expanding metropolitan areas. Full article
(This article belongs to the Section Environmental Health)

32 pages, 1792 KB  
Article
A Hybrid Systems Framework for Electric Vehicle Adoption: Microfoundations, Networks, and Filippov Dynamics
by Pascal Stiefenhofer and Jing Qian
Complexities 2026, 2(2), 8; https://doi.org/10.3390/complexities2020008 - 29 Mar 2026
Abstract
Electric vehicle (EV) diffusion exhibits nonlinear, path-dependent dynamics shaped by interacting economic, technological, and social constraints. This paper develops a unified hybrid systems framework that captures these complexities by integrating microfounded household choice, capacity-constrained firm behavior, local network spillovers, and multi-level policy intervention within a Filippov differential-inclusion structure. Households face heterogeneous preferences, liquidity limits, and network-mediated moral and informational influences; firms invest irreversibly under learning-by-doing and profitability thresholds; and national and local governments implement distinct financial and infrastructure policies subject to budget constraints. The resulting aggregate adoption dynamics feature endogenous switching, sliding modes at economic bottlenecks, network-amplified tipping, and hysteresis arising from irreversible investment. We establish conditions for the existence of Filippov solutions, derive network-dependent tipping thresholds, characterize sliding regimes at capacity and liquidity constraints, and show how network structure magnifies hysteresis and shapes the effectiveness of local versus national policy. Optimal-control analysis further demonstrates that national subsidies follow bang–bang patterns and that network-targeted local interventions minimize the fiscal cost of achieving regional tipping. Beyond theoretical characterization, the framework is structurally calibrated to match the order-of-magnitude effects reported in leading empirical and simulation-based studies, including network diffusion models, agent-based simulations, Bass-type specifications, and fuel-price shock analyses. 
The hybrid formulation reproduces short-run percentage-point subsidy effects, long-run forecast dispersion under alternative network assumptions, and policy-induced equilibrium shifts observed in the applied literature while providing a unified geometric interpretation of these heterogeneous results through explicit basin boundaries and regime switching. The framework provides a complex systems perspective on sustainable mobility transitions and clarifies why identical national policies can generate asynchronous regional outcomes. These results offer theoretical foundations for designing coordinated, cost-effective, and network-aware EV transition strategies. Full article
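A sliding mode at a capacity constraint, as described above, can be illustrated with a toy one-dimensional model: logistic adoption demand capped by a supply capacity, so the trajectory moves along the constraint for as long as demand exceeds it. All functional forms and parameters here are illustrative, not the paper's calibrated model:

```python
def adoption_path(x0, capacity, t_steps=200, dt=0.05):
    """Toy capacity-induced sliding mode: unconstrained adoption demand
    d(x) = x(1 - x), but the realized flow is min(d(x), capacity), so
    while demand exceeds capacity the adoption share x grows at the
    constant constrained rate (the "sliding" segment)."""
    x, path = x0, [x0]
    for _ in range(t_steps):
        demand = x * (1.0 - x)           # unconstrained adoption flow
        x += dt * min(demand, capacity)  # Filippov-style constrained flow
        path.append(x)
    return path
```

On the sliding segment every Euler step adds exactly `dt * capacity`, mimicking how an economic bottleneck flattens the diffusion curve until the constraint stops binding.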

19 pages, 1021 KB  
Review
Urban Building Energy Modelling: A Review on the Integration of Geographic Information Systems and Remote Sensing
by Sebastiano Anselmo and Piero Boccardo
Energies 2026, 19(7), 1667; https://doi.org/10.3390/en19071667 - 28 Mar 2026
Abstract
Decarbonising the building sector is an energy policy priority due to its major contribution to global energy consumption and related emissions. Accurate energy modelling is crucial, with significant scientific advancements being made in the last decade. As data gathering is a primary bottleneck, the potential of Geographic Information Systems and Remote Sensing for streamlining data acquisition and integrating data sources has gained specific interest. This study aims to identify prevailing trends in scales, inputs, and outputs of energy modelling, focusing on Remote Sensing and Geographic Information Systems applications. A structured literature review was conducted, encompassing screening, textual analysis, and findings synthesis to identify key research trends. The results highlight a predominance of the neighbourhood scale (54%) and the reliance on building geometries as principal input (91% of studies). Remote Sensing, used in 36% of cases, is employed for defining geometric (41%) and non-geometric (45%) attributes, while 17% of studies leverage it to determine climatic variables. EnergyPlus remains the most widespread simulation engine (37%), frequently coupled with construction archetypes (50% of cases) to address data gaps. The increasing integration of these technologies in energy modelling is expected to diversify the number of inputs, ultimately enhancing output accuracy, scalability, and generalisability. Full article
(This article belongs to the Special Issue Digital Engineering for Future Smart Cities)
30 pages, 8163 KB  
Article
SDGR-Net: A Spatiotemporally Decoupled Gated Residual Network for Robust Multi-State HDD Health Prediction
by Zehong Wu, Jinghui Qin, Yongyi Lu and Zhijing Yang
Electronics 2026, 15(7), 1399; https://doi.org/10.3390/electronics15071399 - 27 Mar 2026
Abstract
Accurate prediction of hard disk drive (HDD) health states is critical for enabling proactive data maintenance and ensuring data reliability in large-scale data centers. However, conventional models often suffer from semantic entanglement among heterogeneous SMART attributes and from the masking of incipient failure signatures by stochastic noise. To address these challenges, we propose SDGR-Net, a spatiotemporally decoupled learning framework designed to model the complex degradation dynamics of HDDs. SDGR-Net introduces three synergistic innovations: (1) a spatiotemporally decoupled dual-branch encoder that disentangles longitudinal temporal evolution from cross-variable correlations via parameter-isolated branches, thereby reducing representational interference; (2) a parsimonious dual-view temporal extraction mechanism that captures early-stage anomalies through forward–reverse sequence concatenation, enabling high-fidelity preservation of non-stationary pre-failure patterns; and (3) a cross-branch dynamic gated residual fusion module that functions as an adaptive information bottleneck to emphasize failure-critical features while suppressing redundant noise. Extensive experiments conducted on three heterogeneous HDD datasets, ST4000DM000, HUH721212ALN604, and MG07ACA14TA, demonstrate that SDGR-Net consistently outperforms six state-of-the-art baselines. In particular, SDGR-Net achieves a peak fault detection rate (FDR) of 0.9898 and a 69.6% relative reduction in false alarm rate (FAR) under high-reliability operating conditions. These results, corroborated by comprehensive ablation studies, indicate that SDGR-Net effectively balances detection sensitivity and operational robustness, offering a practical solution for intelligent HDD health monitoring.
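The cross-branch gated residual fusion idea in the abstract can be illustrated with a minimal sketch: one branch's features are added back onto the other through a learned sigmoid gate, which can act as a soft information bottleneck. This is not the authors' implementation; the weights `W`, bias `b`, and feature dimensions are hypothetical stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual_fusion(temporal, spatial, W, b):
    """Fuse two branch outputs with a learned gate.

    The gate (values in (0, 1)) weighs the cross-variable branch before
    adding it onto the temporal branch as a residual, so uninformative
    features can be suppressed while failure-critical ones pass through.
    """
    gate = sigmoid(np.concatenate([temporal, spatial], axis=-1) @ W + b)
    return temporal + gate * spatial

# toy example: batch of 2, feature dimension 4 (illustrative only)
rng = np.random.default_rng(0)
t = rng.normal(size=(2, 4))   # temporal-branch features
s = rng.normal(size=(2, 4))   # cross-variable-branch features
W = rng.normal(size=(8, 4))   # hypothetical gate weights
b = np.zeros(4)
fused = gated_residual_fusion(t, s, W, b)
print(fused.shape)  # (2, 4)
```

Because the gate is bounded, the fused output can never deviate from the temporal branch by more than the full magnitude of the gated branch, which is the "residual with adaptive attenuation" behaviour the abstract describes.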
20 pages, 2881 KB  
Article
Structural Deformation Prediction and Uncertainty Quantification via Physics-Informed Data-Driven Learning
by Tong Zhang and Shiwei Qin
Appl. Sci. 2026, 16(7), 3194; https://doi.org/10.3390/app16073194 - 26 Mar 2026
Abstract
In structural health monitoring, purely data-driven methods for deformation prediction are often susceptible to time-varying boundary conditions under complex operating scenarios, leading to insufficient physical interpretability and limited generalization across different conditions. To address these challenges, this study proposes a Physics-Informed Dual-branch Long Short-Term Memory framework (PINN-DualSHM). The framework employs dual-branch LSTMs to separately extract temporal features of structural mechanical responses and environmental thermal effects. Dynamic decoupling and fusion of these heterogeneous features are achieved through an adaptive cross-attention mechanism. Furthermore, physical priors, including the thermodynamic superposition principle and structural settlement monotonicity, are embedded into the loss function as regularization terms, complemented by a dual uncertainty quantification system based on heteroscedastic regression and MC Dropout. Experimental results based on long-term measured data from an industrial base project in Shenzhen demonstrate that PINN-DualSHM significantly outperforms baseline models such as LSTM, CNN-LSTM, and GAT-LSTM. Specifically, the Root Mean Square Error (RMSE) is reduced by 65.25%, and the coefficient of determination (R2) reaches 0.925. Physical consistency analysis confirms that the introduction of physical constraints effectively suppresses anomalous predictive fluctuations that violate mechanical laws. Uncertainty decomposition reveals that aleatoric uncertainty is dominant (93.7%), objectively indicating that the current system’s accuracy bottleneck lies in sensor noise rather than model capability. By enhancing prediction accuracy while providing credible quantitative assessments and physical interpretability, the proposed method provides a scientific basis for the operation, maintenance optimization, and upgrading decisions of SHM systems.
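The abstract's idea of embedding a physical prior (here, settlement monotonicity) as a loss regularizer can be sketched as below. This is a simplified illustration, not the paper's loss: the penalty weight `lam_mono`, the assumed monotone-decreasing sign convention, and the toy series are all hypothetical.

```python
import numpy as np

def physics_informed_loss(pred, target, lam_mono=0.1):
    """Data-fit loss plus a soft monotonicity penalty.

    Assumes (for illustration) that the monitored settlement series
    should be monotone decreasing; any positive step in the prediction
    violates that prior and is penalised quadratically.
    lam_mono is a hypothetical weight trading data fit vs. the constraint.
    """
    mse = np.mean((pred - target) ** 2)
    # positive first differences violate the monotone-decreasing prior
    violations = np.clip(np.diff(pred), 0.0, None)
    return mse + lam_mono * np.mean(violations ** 2)

# toy settlement series (illustrative values)
target = np.array([0.0, -1.0, -2.0, -3.0])
good = np.array([0.1, -0.9, -2.1, -3.0])   # monotone decreasing: no penalty
bad = np.array([0.1, -0.9, -0.2, -3.0])    # reverses direction: penalised
print(physics_informed_loss(good, target) < physics_informed_loss(bad, target))  # True
```

Adding such a term to the training objective is what lets the optimiser reject predictions that fit the data slightly better but violate the mechanical prior, which matches the abstract's observation that physical constraints suppress anomalous fluctuations.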