Search Results (2,314)

Search Parameters:
Keywords = static process model

34 pages, 8819 KB  
Article
Mitigating Overfitting and Physical Inconsistency in Flood Susceptibility Mapping: A Physics-Constrained Evolutionary Machine Learning Framework for Ungauged Alpine Basins
by Chuanjie Yan, Lingling Wu, Peng Huang, Jiajia Yue, Haowen Li, Chun Zhou, Congxiang Fan, Yinan Guo and Li Zhou
Water 2026, 18(7), 882; https://doi.org/10.3390/w18070882 - 7 Apr 2026
Abstract
Flood susceptibility mapping in high-altitude ungauged basins faces a structural dichotomy: physically based models often suffer from systematic biases due to uncertain satellite precipitation, whereas data-driven models are prone to overfitting and lack physical consistency in data-scarce regions. To resolve this dichotomy, this study proposes a Physically constrained Particle Swarm Optimization–Random Forest (P-PDRF) framework, validated in the Lhasa River Basin. The core innovation lies in coupling a hydrological model with statistical learning by utilizing the maximum daily runoff depth as a “Relative Hydraulic Intensity Index.” This approach leverages the topological correctness of physical simulations to circumvent absolute forcing errors. Furthermore, a Physiographically Constrained Negative Sampling (PCNS) strategy and a PSO-optimized “Shallow Tree” configuration are introduced to enforce structural regularization against stochastic noise. Empirical results demonstrate that P-PDRF achieves superior generalization (AUC = 0.942), significantly outperforming standard Random Forest, Support Vector Machine, and Analytic Hierarchy Process models. Ablation studies confirm that the dynamic index outweighs the static Topographic Wetness Index in feature importance, effectively correcting topographic artifacts where static models misclassify arid depressions as high-risk zones. This study offers a scalable Physics-Informed Machine Learning solution for the global “Prediction in Ungauged Basins” initiative. Full article
(This article belongs to the Special Issue Urban Flood Risk Assessment and Management)
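The PSO component of the framework above is a generic global optimizer used to tune the Random Forest configuration. A minimal particle-swarm sketch follows; this is not the authors' P-PDRF code, and the sphere objective, parameter values, and function names are illustrative stand-ins for a hyperparameter-tuning objective:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box bounds (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T            # bounds: list of (low, high) per dimension
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull toward pbest + social pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function; the optimum is at the origin.
best_x, best_val = pso_minimize(lambda p: np.sum(p**2), bounds=[(-5, 5)] * 3)
```

In the paper's setting, the objective would presumably score a Random Forest configuration (e.g. by cross-validated AUC) rather than the sphere function used here.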

18 pages, 2707 KB  
Article
Optimizing the Flexural Performance of ABS Parts Fabricated by FDM Additive Manufacturing Through a Taguchi–ANOVA Statistical Framework
by Hind B. Ali, Jamal J. Dawood, Farag M. Mohammed, Farhad M. Othman and Makram A. Fakhri
J. Manuf. Mater. Process. 2026, 10(4), 125; https://doi.org/10.3390/jmmp10040125 - 7 Apr 2026
Abstract
Additive manufacturing (AM), particularly Fused Deposition Modeling (FDM), has revolutionized polymer-based fabrication through design freedom and material efficiency. This work presents a comprehensive statistical optimization of FDM parameters affecting the flexural properties of acrylonitrile/butadiene/styrene (ABS) specimens. The effects of layer thickness (0.15–0.25 mm), infill density (30–70%), printing speed (35–95 mm/s), and build orientation (Flat, On-edge, Vertical) were investigated following ASTM D790 standards. A Taguchi L9 orthogonal array coupled with ANOVA analysis was employed to quantify parameter significance. According to the ANOVA analysis, infill density was identified as the most influential parameter, accounting for 61.3% of the variation in flexural strength (σf) and 60.1% in flexural modulus (Eb). The optimal configuration (0.25 mm layer thickness, 70% infill, 65 mm/s speed, horizontal orientation) yielded a flexural strength of 84.9 MPa and modulus of 2.54 GPa. Microstructural observations confirmed that higher infill and moderate speed improved interlayer fusion and reduced void formation. The developed Taguchi–ANOVA framework offers quantitative insights for tailoring process–structure–property relationships in polymer-based additive manufacturing. Full article
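The Taguchi L9/ANOVA workflow described above can be sketched as follows. The orthogonal array is the standard L9(3⁴), but the response values (and hence the dominance of the second factor, standing in for infill density) are synthetic; they are not the paper's measured flexural data:

```python
import numpy as np

# Standard Taguchi L9 orthogonal array: 4 factors at 3 levels (coded 0-2),
# 9 runs instead of the 81 needed for a full factorial.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def percent_contribution(design, y):
    """ANOVA percent contribution of each factor (saturated design, no error term)."""
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    contrib = []
    for j in range(design.shape[1]):
        # Between-level sum of squares for factor j.
        ss = sum((y[design[:, j] == lv].mean() - grand) ** 2 * (design[:, j] == lv).sum()
                 for lv in range(3))
        contrib.append(100 * ss / ss_total)
    return contrib

# Hypothetical responses in which factor 2 (column index 1) dominates.
y = np.array([1.0 * a + 10.0 * b + 0.5 * c + 0.2 * d for a, b, c, d in L9], dtype=float)
pc = percent_contribution(L9, y)
```

Because the L9 columns are pairwise orthogonal and the synthetic response is purely additive, the factor contributions sum to exactly 100%; with real, noisy measurements a residual (error) term would absorb the remainder.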

19 pages, 6345 KB  
Article
Influence of Hypothermic Machine Perfusion on Markers of Oxidative Stress and Early Tubular Injury in Rat Donor Kidneys Before Transplantation
by Caleb LeGrand, Dinesh Bhattarai, Amod Sharma, Madison K. McGraw, Neriman Gokden, Lee Ann MacMillan-Crow and Nirmala Parajuli
Kidney Dial. 2026, 6(2), 23; https://doi.org/10.3390/kidneydial6020023 - 7 Apr 2026
Abstract
Background: Hypothermic machine perfusion (HMP) has been associated with reduced delayed graft function compared with static cold storage (SCS). However, the molecular mechanisms underlying these differences during cold preservation remain incompletely understood. This study compared cold-storage-related biochemical and histological changes in kidneys preserved by HMP versus SCS using a Lewis rat model prior to transplantation. Methods: Following isolation, rat kidneys were flushed with cold saline (4 °C). Left kidneys were preserved by HMP at constant flow using Belzer’s machine perfusion solution (MPS) at 4 °C, while right kidneys were stored using SCS in University of Wisconsin solution at 4 °C. After four hours of preservation, kidneys were processed for biochemical and histological analysis. Fresh biopsies were evaluated for mitochondrial complex respiration. Western blotting was performed to assess expression of NDUFS3, a complex I subunit. Histological staining for nitrotyrosine and kidney injury markers was compared across groups. Results: Mitochondrial complex respiration did not differ significantly between the SCS and HMP groups. Western blot analysis demonstrated significantly increased NDUFS3 expression in HMP-preserved kidneys compared with SCS and control kidneys. Histological evaluation revealed elevated tubular staining of nitrotyrosine and kidney injury markers in SCS kidneys relative to controls, whereas HMP preservation markedly attenuated these increases. Conclusions: HMP mitigates cold-storage-induced oxidative stress and reduces expression of kidney injury markers after four hours of preservation. These molecular findings suggest a protective effect of HMP during cold preservation. Future studies with longer preservation times and transplantation models are needed to determine whether these improvements translate into enhanced post-transplant kidney function. Full article

10 pages, 512 KB  
Proceeding Paper
Multitask Deep Neural Network for IMU Calibration, Denoising, and Dynamic Noise Adaption for Vehicle Navigation
by Frieder Schmid and Jan Fischer
Eng. Proc. 2026, 126(1), 44; https://doi.org/10.3390/engproc2026126044 - 7 Apr 2026
Abstract
In intelligent vehicle navigation, efficient sensor data processing and accurate system stabilization are critical for maintaining robust performance, especially when GNSS signals are unavailable or unreliable. Classical calibration methods for Inertial Measurement Units (IMUs), such as discrete and system-level calibration, fail to capture time-varying, non-linear, and non-Gaussian noise characteristics. Likewise, Kalman filters typically assume static measurement noise levels for non-holonomic constraints (NHCs), resulting in suboptimal performance in dynamic environments. Furthermore, zero-velocity detection plays a vital role in preventing error accumulation by enabling reliable zero-velocity updates during motion stops, but classical thresholding approaches often lack robustness and precision. To address these limitations, we propose a novel multitask deep neural network (MTDNN) architecture that jointly learns IMU calibration, adaptive noise level estimation for NHC, and zero-velocity detection solely from raw IMU data. This shared-encoder design is utilized to minimize computational overhead, enabling real-time deployment on resource-constrained platforms such as the Raspberry Pi. The model is trained using post-processed GNSS-RTK ground truth trajectories obtained from both a proprietary dataset and the publicly available 4Seasons dataset. Experimental results confirm the proposed system’s superior accuracy, efficiency, and real-time capability in GNSS-denied conditions. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
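A classical zero-velocity detector of the thresholding kind the abstract criticizes can be sketched in a few lines. The thresholds, window length, and synthetic IMU traces below are illustrative assumptions, not values or data from the paper:

```python
import numpy as np

def zero_velocity_mask(gyro, accel, g=9.81, gyro_thresh=0.1, accel_thresh=0.3, win=10):
    """Classical zero-velocity detector: flag samples where, over a sliding window,
    angular rate is near zero and specific force is near gravity.
    (Illustrative thresholds; real systems tune these per sensor.)"""
    gyro_ok = np.linalg.norm(gyro, axis=1) < gyro_thresh
    accel_ok = np.abs(np.linalg.norm(accel, axis=1) - g) < accel_thresh
    still = (gyro_ok & accel_ok).astype(float)
    # Require the entire window to be still (moving average of the flag == 1).
    smoothed = np.convolve(still, np.ones(win) / win, mode="same")
    return smoothed > 0.99

# Synthetic example: 100 stationary samples followed by 100 samples of rotation.
rng = np.random.default_rng(0)
gyro = np.vstack([rng.normal(0.0, 0.01, (100, 3)),   # still: near-zero rates
                  rng.normal(0.5, 0.10, (100, 3))])  # moving: ~0.5 rad/s per axis
accel = np.tile([0.0, 0.0, 9.81], (200, 1)) + rng.normal(0, 0.05, (200, 3))
mask = zero_velocity_mask(gyro, accel)
```

The windowed AND of both conditions is exactly the kind of hand-tuned rule the MTDNN above replaces with a learned detector.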

22 pages, 3197 KB  
Article
Dynamic Cognition Graph for Adaptive Learning: Integrating Reasoning Evidence and Reinforcement Learning
by Ying Li, Yiming Gai, Xingyu Wang, Leilei Sun and Xuefei Huang
Appl. Sci. 2026, 16(7), 3580; https://doi.org/10.3390/app16073580 - 6 Apr 2026
Abstract
Accurate modeling of learners’ evolving cognitive states is essential for intelligent educational systems, yet many existing knowledge tracing and graph-based approaches rely on static structures or purely sequential representations that inadequately capture dynamic structural changes in learning processes. This study proposes a Learner Cognitive Graph (LCG) framework that integrates dynamic heterogeneous graph modeling, structured behavioral data acquisition, and reinforcement learning-based intervention optimization. A Dynamic Cognition Graph (DCG) is formally defined as a sequence of temporally evolving graph snapshots representing interactions among learners, knowledge concepts, and exercises. A reverse Turing test-based agent with structured prompting is introduced to collect reasoning-oriented behavioral evidence, improving data reliability for cognitive modeling. Temporal message passing, multi-scale memory updating, and self-supervised learning objectives are employed to construct dynamic cognitive representations. Personalized intervention is formulated as a Markov decision process to optimize long-term learning outcomes. Experiments conducted on real-world and simulated educational datasets demonstrate improved knowledge mastery prediction accuracy, cognitive state transition modeling, and intervention efficiency compared with representative baselines. The proposed framework provides a systematic and scalable approach for dynamic cognitive modeling and adaptive educational support. Full article
(This article belongs to the Special Issue Artificial Intelligence in Education: Latest Advances and Prospects)

22 pages, 4256 KB  
Systematic Review
Modeling the Resilience of Multimodal Freight Networks Under Disruptions: A Systematic Review
by Tariq Lamei, Ahmed Elsayed, Ahmed Ibrahim and Ahmed Abdel-Rahim
Infrastructures 2026, 11(4), 130; https://doi.org/10.3390/infrastructures11040130 - 6 Apr 2026
Abstract
Multimodal freight transportation networks are increasingly exposed to natural and human-made disruptions, yet prior research remains fragmented in how disruptions are represented, which modeling techniques are applied, and how results are validated, limiting comparability and actionable guidance for resilient planning. This study presents a PRISMA-guided systematic review of disruption modeling in multimodal freight networks. A total of 21 studies were identified and coded to address three research questions concerning (RQ1) which analytical and computational modeling techniques are applied; (RQ2) to what extent models represent cross-modal interdependencies, cascading failures, and recovery processes; and (RQ3) what validation, calibration, and empirical testing strategies are employed. The review shows that optimization-based approaches and hybrid frameworks dominate the literature, complemented by fewer network science and data-driven methods. Most studies model disruptions as node/link failures and/or capacity degradation using static single-event scenarios, and explicit representations of cascading effects, operational delay propagation, and time-evolving recovery trajectories remain relatively rare. While many studies rely on real network data, formal calibration and historical backtesting against observed disruption events are uncommon, and validation is primarily case study-based. These findings highlight the need for more dynamic resilience modeling, stronger uncertainty quantification, standardized reporting of performance and resilience metrics, and greater use of empirically grounded validation to improve the generalizability and decision relevance of multimodal freight resilience models. Full article

24 pages, 4159 KB  
Article
A UAV–Satellite Hybrid Pipeline for Wildfire Detection and Dynamic Perimeter Prediction
by Hossein Keshmiri and Khan A. Wahid
Drones 2026, 10(4), 263; https://doi.org/10.3390/drones10040263 - 4 Apr 2026
Abstract
Effective wildfire management demands seamless integration of real-time detection and long-term spread forecasting. This paper proposes a novel power-efficient UAV–satellite hybrid pipeline that synergizes the agility of UAVs with the scale of satellite intelligence. The system begins with a dashboard-guided, multi-UAV detection module that scores fire likelihood from historical satellite data and enables scalable, energy-efficient deployment with low-latency onboard processing. This aerial component ensures persistent surveillance and reliable ignition detection, supported by a Dual LoRa (Long Range) communication scheme for robust and low-power connectivity. It achieves an F1-score of 97.4% while minimizing power consumption to extend operational flight times. Following detection, the pipeline transitions to a dynamic perimeter-prediction phase utilizing a custom Canadian boreal dataset. We employ a Squeeze-and-Excitation Residual U-Net (SE-ResUNet) to model spatiotemporal fire propagation based on static terrain and dynamic environmental features. The model was validated using a dynamic simulation framework that evaluates temporal consistency and convergence behavior against final cumulative burned-area masks, effectively addressing the absence of daily ground truth. Under these conditions, the model achieves a recall of 84% and an AUC of 0.97, demonstrating a strong capability to delineate active fire fronts. By coupling dashboard-driven UAV sensing with satellite-based predictive modeling, this work establishes a modular, foundational framework to support data-scarce forecasting in modern wildfire management. Full article
(This article belongs to the Special Issue UAVs and UGVs Robotics for Emergency Response in a Changing Climate)

30 pages, 3640 KB  
Article
Analysis of Wing Structures via Machine Learning-Based Surrogate Models
by Hasan Kiyik, Metin Orhan Kaya and Peyman Mahouti
Aerospace 2026, 13(4), 338; https://doi.org/10.3390/aerospace13040338 - 3 Apr 2026
Abstract
Accurate structural analysis is essential for the design and optimization of aircraft wings; however, repeated high-fidelity finite element analysis (FEA) becomes computationally expensive when embedded in iterative design loops. This study presents a machine learning-based surrogate modeling framework for the efficient analysis and optimization of metallic commercial wing structures. A detailed Airbus A320-like wing model was developed and analyzed in ANSYS 2023 R1 under modal, static, and eigenvalue buckling conditions. The general dimensions of the Airbus A320 wing were used only as a reference; the resulting model is a conceptual benchmark rather than a one-to-one geometric replica or a validated digital twin of a specific aircraft wing. Using Latin Hypercube Sampling, 340 high-fidelity samples were generated, with 300 samples used for training and validation and 40 retained as an independent holdout set. The proposed Pyramidal Deep Regression Network (PDRN), a deep learning-based surrogate model whose architecture is automatically tuned using Bayesian Optimization, was benchmarked against Artificial Neural Networks (ANNs), Ensemble Learning, Support Vector Regression (SVR), and Gaussian Process Regression (GPR). On the unseen test set, the PDRN achieved the best overall predictive performance, with RMS errors of 0.8% for mass, 3.1% for the first natural frequency, 11.5% for load factor, and 11.4% for safety factor. To evaluate its practical utility, the trained PDRN was embedded into a PSO-based optimization framework for mass minimization under minimum safety factor, load factor, and first-frequency constraints. The surrogate-guided optimum was verified in ANSYS and remained feasible, yielding a mass of 10,485 kg, a first natural frequency of 1.4142 Hz, a load factor of 1.307, and a safety factor of 1.158. Compared with direct ANSYS in-the-loop optimization, the proposed workflow reached a comparable feasible design with substantially fewer high-fidelity evaluations. These results demonstrate that the PDRN provides an accurate and computationally efficient surrogate for rapid wing analysis and constraint-driven structural optimization. Full article
(This article belongs to the Special Issue Aircraft Structural Design Materials, Modeling, and Optimization)
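Latin Hypercube Sampling, used above to generate the 340 training samples, stratifies each input dimension so that every equal-probability slice is sampled exactly once. A minimal implementation follows; the two design variables and their bounds are hypothetical, not the paper's actual wing parameterization:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: per dimension, draw one point from each of
    n_samples equal-width strata, then shuffle the strata independently."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # One uniform draw inside each stratum k -> value in [k/n, (k+1)/n).
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):  # decouple the dimensions by shuffling each column
        u[:, j] = u[rng.permutation(n_samples), j]
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# Hypothetical design variables, e.g. skin thickness [mm] and spar cap area [mm^2].
X = latin_hypercube(340, bounds=[(1.0, 5.0), (100.0, 400.0)])
```

Compared with plain random sampling, this guarantees that all 340 strata of each variable's range are covered, which is why it is a common choice for seeding surrogate models.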

26 pages, 8175 KB  
Article
In Situ Damage Detection Method for Metallic Shear Plate Dampers Based on the Active Sensing Method and Machine Learning Algorithms
by Yunfei Li, Feng Xiong, Hong Liu, Xiongfei Li, Huanlong Ding, Yi Liao and Yi Zeng
Sensors 2026, 26(7), 2203; https://doi.org/10.3390/s26072203 - 2 Apr 2026
Abstract
Metallic Shear Plate Dampers (MSPDs) are essential components in passive vibration control systems and require rapid post-earthquake inspection to assess damage and determine replacement needs. Traditional visual inspection methods suffer from low efficiency and limited ability to detect concealed damage. This study proposes a novel MSPD damage detection method based on active sensing and the k-nearest neighbor (KNN) algorithm, featuring high accuracy, efficiency, and low cost. Quasi-static tests were conducted to simulate various damage states. Sweep-frequency excitation was applied using a charge amplifier, and piezoelectric sensors were employed to generate and receive stress wave signals corresponding to different damage conditions. The acquired signals were processed using wavelet packet transform (WPT) and energy spectrum analysis to extract discriminative time–frequency features, which were used to train and validate the KNN model. Results show that the model achieved a validation accuracy of 98.9% using all valid data and 98.1% using a single excitation-sensing channel. When tested on an MSPD with a similar overall structure but lacking stiffeners, the model achieved an accuracy of 92.6% in distinguishing between healthy and damaged states. This indicates that the proposed method has good robustness and practical potential for MSPDs with similar damage evolution and failure modes despite certain structural variations. Full article
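The classification stage described above, energy-spectrum features fed to a k-nearest-neighbour vote, can be sketched as follows. To stay self-contained, the sketch substitutes simple FFT band energies for the paper's wavelet-packet energies, and all signals are synthetic; class labels, frequencies, and noise levels are illustrative assumptions:

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Relative energy per frequency band: a simple stand-in for the paper's
    wavelet-packet energy-spectrum features."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    e = np.array([b.sum() for b in np.array_split(spec, n_bands)])
    return e / e.sum()

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Synthetic "healthy" (0) vs "damaged" (1) signals: damage shifts energy
# toward lower frequencies in this toy example.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512, endpoint=False)
healthy = [np.sin(2 * np.pi * 120 * t) + 0.1 * rng.normal(size=512) for _ in range(20)]
damaged = [np.sin(2 * np.pi * 30 * t) + 0.1 * rng.normal(size=512) for _ in range(20)]
X = np.array([band_energies(s) for s in healthy + damaged])
y = np.array([0] * 20 + [1] * 20)

# Classify an unseen low-frequency (damage-like) signal.
pred = knn_predict(X, y, band_energies(np.sin(2 * np.pi * 32 * t)))
```

The appeal of this pipeline, as the abstract notes, is its low cost: the feature extraction and KNN vote are a few lines each, with no training beyond storing labeled examples.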

25 pages, 1247 KB  
Article
Budget-Aware Closed-Loop Incentive Allocation for Federated Learning with DDPG
by Yang Cao, Huimin Cai, Haotian Zhu, Sen Zhang and Jun Hu
Electronics 2026, 15(7), 1481; https://doi.org/10.3390/electronics15071481 - 2 Apr 2026
Abstract
With the growing demand for trustworthy multi-party data sharing, federated learning has demonstrated broad potential in cross-entity collaborative modeling. However, it still faces challenges such as insufficient participant engagement, inaccurate contribution assessment, and the lack of dynamic profit-sharing mechanisms. Traditional incentive schemes, which typically rely on game-theoretic models or static rules, struggle to accommodate dynamic client participation and heterogeneous data distributions, thereby degrading the convergence efficiency and generalization performance of the global model. To address these issues, we propose a budget-aware closed-loop incentive allocation for federated learning with deep deterministic policy gradient (DDPG). The proposed approach constructs a DDPG-driven closed-loop framework in which the server manages system states, incentive decisions, and model aggregation, while clients autonomously adjust their data contribution levels. By formulating incentive allocation as a sequential decision-making problem, the mechanism jointly optimizes policy and value functions. A permutation method is introduced to ensure invariance to client ordering, and an Ornstein–Uhlenbeck process is employed to enhance exploration, thereby improving the adaptiveness and overall effectiveness of incentive allocation. Experimental results show that the proposed method significantly increases cumulative rewards and improves client data-sharing rates in high-dimensional dynamic environments. Compared with traditional fixed incentive schemes, the mechanism demonstrates clear advantages in adaptiveness, incentive effectiveness, and model performance. Full article
(This article belongs to the Section Artificial Intelligence)
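The Ornstein–Uhlenbeck process mentioned above is the temporally correlated, mean-reverting noise conventionally added to a DDPG actor's actions for exploration. A minimal Euler–Maruyama discretisation follows; the parameter values are the common DDPG defaults, not necessarily the paper's:

```python
import numpy as np

def ou_noise(n_steps, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, x0=0.0, seed=0):
    """Ornstein-Uhlenbeck process via Euler-Maruyama:
    x_{k+1} = x_k + theta*(mu - x_k)*dt + sigma*sqrt(dt)*N(0,1).
    theta controls mean reversion, sigma the noise scale."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x_prev = x0
    for i in range(n_steps):
        x_prev = x_prev + theta * (mu - x_prev) * dt + sigma * np.sqrt(dt) * rng.normal()
        x[i] = x_prev
    return x

noise = ou_noise(1000)
# Sanity check: with sigma=0 the process decays deterministically toward mu.
decay = ou_noise(50, mu=0.0, theta=0.5, sigma=0.0, x0=1.0)
```

Unlike independent Gaussian noise, consecutive OU samples are correlated, which tends to produce smoother exploratory action sequences in continuous control.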

23 pages, 6737 KB  
Article
Reimagining Corporate Food Museums as Living Labs: A Heritage-Driven Model for Sustainable, Inclusive, and ICT-Enhanced Food Innovation
by Patrizia Marti, Annamaria Recupero, Flavio Lampus and Noemi Baldino
Heritage 2026, 9(4), 145; https://doi.org/10.3390/heritage9040145 - 1 Apr 2026
Abstract
Corporate food museums are increasingly recognised as strategic heritage infrastructures capable of mediating between industrial memory, territorial identity, and contemporary societal challenges. This paper proposes a conceptual shift that repositions corporate food museums from static repositories of brand heritage to Living Labs for sustainable, inclusive, and participatory food innovation. Drawing on the EU-funded GNAM project, the study adopts a qualitative methodology combining the mapping of Italian corporate food museums with an analysis of European Living Labs in the food and agri-food domain. The comparative framework informs the development of a heritage-driven Living Lab model articulated around three interconnected dimensions: cultural heritage valorisation, community engagement, and sustainable food system innovation. The model is empirically grounded through a series of design-driven workshops, technology-transfer activities, and digital engagement initiatives conducted within corporate museums and academic laboratories in Southern Italy. These include co-creation processes involving students, citizens, companies, and researchers; experimentation with food waste valorisation, biodegradable and hybrid materials, and 3D food printing; and the deployment of digital platforms and immersive virtual environments. The paper contributes to heritage studies by advancing a replicable framework in which corporate food museums act as active agents of sustainable transformation, linking cultural heritage, technological experimentation, and community participation. Full article

42 pages, 899 KB  
Review
Bridging the Semantic Gap: A Review of Data Interoperability Challenges and Advanced Methodologies from BIM to LCA
by Yilong Jia, Peng Zhang and Qinjun Liu
Sustainability 2026, 18(7), 3352; https://doi.org/10.3390/su18073352 - 30 Mar 2026
Abstract
Building Information Modelling (BIM) offers a pivotal opportunity to automate Life Cycle Assessment (LCA) within the Architecture, Engineering, and Construction (AEC) industry. However, seamless integration is persistently hindered by a semantic gap, a critical misalignment between the object-oriented, geometric definitions of BIM and the process-based material data required by Life Cycle Inventory (LCI) databases. This paper presents a comprehensive review of data interoperability challenges and evaluates advanced methodologies designed to bridge this divide, moving beyond simple tool comparison to analyse structural integration barriers. Through a systematic review of 124 primary studies published between 2010 and 2025, this research inductively derives the BIM-LCA Interoperability Triad. This framework analyses causal dependencies across three dimensions, including Semantic and Ontological Structures, Workflow and Temporal Integration, and System Architecture and Interoperability. Furthermore, by establishing a comparative challenge–solution matrix, the analysis reveals a maturity paradox in current methodologies. While semi-automated commercial plugins dominate practice due to accessibility, they frequently function as opaque black boxes with limited transparency. Conversely, advanced approaches utilising Semantic Web technologies and Machine Learning demonstrate superior capability in resolving terminological mismatches but currently face significant barriers regarding infrastructure and expertise. This study contributes a novel theoretical model for understanding integration failures. It concludes that future research must pivot from static schema mapping towards AI-driven semantic healing, dynamic Digital Twins, and explicit system boundary harmonisation to achieve truly automated, context-aware environmental assessments and support whole-life circularity. Full article

23 pages, 3785 KB  
Article
Dynamic Simulation of Seismogenic-Fault-Induced Rupture in Overlying Soil
by Chang Wang, Xiaojun Li, Mianshui Rong, Xiaoyan Sun and Weiqing Meng
Infrastructures 2026, 11(4), 119; https://doi.org/10.3390/infrastructures11040119 - 30 Mar 2026
Abstract
Accurate prediction of surface rupture induced by seismogenic fault displacement is essential for the seismic safety assessment of major engineering projects. Most existing numerical simulations adopt quasi-static approaches, in which the effect of fault displacement is simplified as static loading. As a result, these methods cannot represent the dynamic characteristics of the fault rupture process, such as stress-wave propagation, soil inertial effects, and the influence of dynamic loading paths on rupture extension in soil layers. To address this issue, a full-process simulation method is established for simulating rupture of overlying soil subjected to dynamic fault displacement: Firstly, a non-uniform dynamic fault displacement loading is formulated for the two sides of the fault based on viscoelastic artificial boundaries, allowing the differential motion of the bedrock on both sides of the fault to be represented. Secondly, an improved dynamic skeleton curve constitutive model of soil is developed by introducing a minimum modulus constraint, providing an improved description of soil nonlinear dynamic behavior from small-strain hysteresis to large-strain shear failure. The reliability of the proposed method is verified through element-level tests and horizontal-site response simulation. As a benchmark, its ability to reproduce key rupture characteristics under quasi-static conditions is also assessed by comparison with classical quasi-static rupture studies. The method is then applied to simulate rupture extension and deformation response of overlying soil under strike-slip fault displacement. The results show that, compared to quasi-static analysis, dynamic fault displacement produces similar cumulative slip for surface rupture initiation and full connection, but induces transient amplification of peak surface displacement and a wider deformation zone with gentler displacement gradients. 
These findings demonstrate the necessity of accounting for dynamic fault dislocation of the bedrock, and its interaction with the overlying soil, in seismic assessments of engineering projects crossing active faults. Full article
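The "minimum modulus constraint" on the soil skeleton curve mentioned above can be illustrated with a small sketch. The abstract does not give the actual constitutive formulation, so this assumes a standard hyperbolic backbone curve and switches to a linear branch once the tangent modulus would drop below a floor G_min; all parameter values and the function name are hypothetical.

```python
import numpy as np

def skeleton_stress(gamma, g_max, gamma_ref, g_min_ratio=0.02):
    """Hyperbolic backbone tau = G_max*gamma / (1 + |gamma|/gamma_ref),
    with a floor on the tangent modulus (a 'minimum modulus constraint'):
    beyond the strain where the tangent modulus of the hyperbola would
    fall below G_min, the curve continues linearly with slope G_min."""
    g_min = g_min_ratio * g_max
    # strain at which the hyperbolic tangent modulus G_max/(1+g/g_ref)^2 == G_min
    gamma_c = gamma_ref * (np.sqrt(g_max / g_min) - 1.0)
    tau_c = g_max * gamma_c / (1.0 + gamma_c / gamma_ref)  # stress at switch point
    tau_hyp = g_max * gamma / (1.0 + np.abs(gamma) / gamma_ref)
    tau_lin = np.sign(gamma) * (tau_c + g_min * (np.abs(gamma) - gamma_c))
    return np.where(np.abs(gamma) <= gamma_c, tau_hyp, tau_lin)
```

The floor prevents the tangent stiffness from vanishing at large shear strains, which (as the abstract argues) matters when the same model must cover both small-strain hysteresis and large-strain shear failure.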

36 pages, 11538 KB  
Article
Liquid Neural Networks and Multimodal Remote Sensing Fusion Applied to Dynamic Landslide Susceptibility Assessment
by Hongyi Guo, Ana Belén Gil-González and Antonio Miguel Martínez-Graña
Remote Sens. 2026, 18(7), 1035; https://doi.org/10.3390/rs18071035 - 30 Mar 2026
Abstract
Landslide susceptibility assessment in complex mountainous terrain is frequently limited by static modelling frameworks that inadequately capture nonlinear deformation characteristics and temporally evolving hazard processes. To bridge this gap, a continuous-time dynamic assessment framework is proposed for Shazhou Town, Sichuan Province, integrating Small Baseline Subset InSAR (SBAS-InSAR)-derived deformation time series with Liquid Neural Networks (LNN). By incorporating a liquid time-constant architecture, the model accommodates irregular temporal sampling and captures non-stationary environmental responses through adaptive multimodal feature fusion. Analysis of long-term SBAS-InSAR observations (January 2021–May 2025) reveals distinctive deformation patterns, identifying eight active zones with maximum annual displacement rates of 107 mm yr⁻¹ and cumulative subsidence of 535.7 mm, which serve as critical dynamic inputs for the susceptibility model. Comparative experiments demonstrate that the LNN framework outperforms benchmark models (including LSTM, GRU, Random Forest, and SVM), achieving a coefficient of determination (R²) of 0.95 and an RMSE of 0.50. Furthermore, multi-temporal validation against 189 historical landslide records (2008–2025) confirms the model’s robustness, yielding a 91.5% capture rate within high-susceptibility zones. Interpretability analyses via SHAP and Layer-wise Relevance Propagation identify rainfall and vegetation cover as dominant dynamic controls, while characterising a distinct slope threshold effect at approximately 20°. These findings demonstrate that explicit continuous-time neural modelling enables physically consistent representation of irregular satellite acquisition intervals and delayed hydro-mechanical responses, thereby advancing landslide susceptibility assessment from static spatial classification toward dynamic state evolution inference under asynchronous Earth observation data streams.
Full article
(This article belongs to the Special Issue Remote Sensing for Geo-Hydrological Hazard Monitoring and Assessment)
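The key property claimed for the liquid time-constant architecture is that the state update takes the (irregular) inter-acquisition interval as an explicit input. A minimal sketch of one such cell, following the commonly cited LTC formulation dx/dt = -(1/τ + f(x, u)) x + f(x, u) A with a fused semi-implicit Euler step, is given below; layer sizes, initialisation, and names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class LTCCell:
    """Minimal liquid time-constant cell. The effective time constant
    depends on the input through the gate f, and step() takes an
    explicit dt, so irregular satellite revisit intervals enter the
    model directly instead of being resampled to a fixed grid."""

    def __init__(self, n_in, n_hidden, tau=1.0):
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.U = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.A = np.ones(n_hidden)  # equilibrium the state is pulled toward
        self.tau = tau

    def step(self, x, u, dt):
        # input-dependent gate controlling the effective time constant
        f = 1.0 / (1.0 + np.exp(-(self.W @ u + self.U @ x + self.b)))
        # fused semi-implicit Euler step of dx/dt = -(1/tau + f) x + f A
        return (x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))
```

Driving the cell with a sequence of (features, dt) pairs, where dt varies per acquisition, is all that is needed to consume an asynchronous InSAR time series.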

19 pages, 1040 KB  
Article
GTH-Net: A Dynamic Game-Theoretic HyperNetwork for Non-Stationary Financial Time Series Forecasting
by Fujie Chen and Chen Ding
Appl. Sci. 2026, 16(7), 3294; https://doi.org/10.3390/app16073294 - 28 Mar 2026
Abstract
Financial time series forecasting remains a challenging task due to the high non-stationarity and concept drift inherent to market data. Existing deep learning models, such as LSTMs and Transformers, typically employ static weights after training, limiting their ability to adapt to rapid market regime shifts (e.g., from trends to reversals). To bridge this gap between static parameters and dynamic environments, we propose a novel framework named Game-Theoretic HyperNetwork (GTH-Net), which introduces a context-aware meta-learning mechanism to achieve adaptive forecasting. Specifically, we first introduce an Evolutionary Game-Theoretic Correction Module (E-GTCM) to explicitly extract latent buying and selling pressure based on market microstructure priors through an iterative gated evolution process. Subsequently, we propose a HyperNetwork-based fusion mechanism that treats the extracted game state as a meta-context to dynamically generate the weights of the forecasting head. This allows the model to automatically switch its prediction rules in response to shifting market regimes. Extensive experiments on real-world stock datasets demonstrate that GTH-Net significantly outperforms baselines in terms of both predictive accuracy and simulated financial profitability. Furthermore, ablation studies and parameter analysis confirm that the dynamic weight generation mechanism effectively captures market reversals caused by overcrowded trades. Full article
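The hypernetwork-based fusion step described above has a compact generic form: a small generator maps the extracted game-state vector to the weights of the forecasting head, so the prediction rule itself changes with the regime. The sketch below shows only this generic pattern; the generator structure, dimensions, and names are illustrative assumptions, not GTH-Net's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

class HyperHead:
    """Context-conditioned forecasting head: instead of one fixed weight
    matrix, a linear generator produces the head's weights and bias from
    the game-state context vector (in GTH-Net the generator would be
    learned end to end; here it is randomly initialised for illustration)."""

    def __init__(self, d_ctx, d_feat, d_out=1):
        self.Gw = rng.normal(0.0, 0.05, (d_out * d_feat, d_ctx))  # weight generator
        self.Gb = rng.normal(0.0, 0.05, (d_out, d_ctx))           # bias generator
        self.d_out, self.d_feat = d_out, d_feat

    def __call__(self, context, feats):
        W = (self.Gw @ context).reshape(self.d_out, self.d_feat)
        b = self.Gb @ context
        return W @ feats + b  # same features, regime-dependent linear rule
```

Because W and b are functions of the context, two different regime vectors applied to identical input features yield different predictions, which is the "switch its prediction rules" behaviour the abstract describes.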
