Search Results (6,707)

Search Parameters:
Keywords = prediction-driven modeling

24 pages, 2490 KB  
Article
PI-FSL: Physics-Informed Few-Shot Domain Adaptation for Robust Cross-Domain Condition Monitoring
by Jianbiao Wan, Kar Peo Yar, Malcolm Yoke Hean Low, Chi Xu, Ngoc Chi Nam Doan, Huey Yuen Ng and Wei Wang
Technologies 2026, 14(3), 167; https://doi.org/10.3390/technologies14030167 - 6 Mar 2026
Abstract
Predictive maintenance (PdM) and predictive quality monitoring (PQM) increasingly rely on data-driven condition monitoring using vibration and related signals. However, real-world deployment often faces domain drift across machines, operating regimes, and sensing conditions, while only a few labeled target samples are available. This combination of distribution shift and label scarcity creates a substantial deployment gap for models trained in a single setting. This paper proposes a physics-informed few-shot learning (PI-FSL) domain adaptation framework that is among the first to combine episodic metric learning with soft physics-consistency regularization to improve cross-domain generalization. The framework integrates CWT-based time–frequency encoding, relation-based episodic classification, physics-consistency constraints at representation and signal levels, and PSD-guided episodic sampling within a unified adaptation pipeline. We evaluated PI-FSL under explicit few-shot transfer scenarios on tool-wear and bearing-condition-monitoring datasets. On the Bosch benchmark, PI-FSL achieved F1 = 0.960 (balanced accuracy = 0.961) for cross-machine transfer and F1 = 0.907 (balanced accuracy = 0.901) under a combined machine-operation shift. A cross-dataset evaluation across tool-wear and multiple bearing-fault benchmarks under a unified two-way five-shot protocol further demonstrated competitive and transferable performance. PI-FSL achieved the best average macro-F1 and balanced accuracy, with the largest margin on PU bearing transfer (macro-F1, 0.663 vs. 0.590; balanced accuracy, 0.710 vs. 0.634). The ablation results showed that few-shot fine-tuning is the main contributor, while physics regularization provides an additional stabilizing gain under transfer. These findings support PI-FSL as a practical episodic framework for robust cross-domain condition monitoring across heterogeneous industrial datasets under realistic drift and limited labels. Full article
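As a rough sketch of the two-way five-shot episodic protocol described above (the data, condition labels, and episode sizes here are toy placeholders, not the paper's):

```python
import random

def make_episode(data_by_class, n_way=2, k_shot=5, n_query=5, rng=None):
    """Sample one N-way K-shot episode: a support set used to adapt the
    model and a query set used to score it, mimicking few-shot transfer."""
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label in classes:
        samples = rng.sample(data_by_class[label], k_shot + n_query)
        support += [(x, label) for x in samples[:k_shot]]
        query += [(x, label) for x in samples[k_shot:]]
    return support, query

# Toy "signal segments" grouped by hypothetical condition label.
data = {c: list(range(i * 100, i * 100 + 20))
        for i, c in enumerate(["healthy", "worn", "faulty"])}
support, query = make_episode(data, n_way=2, k_shot=5, n_query=5)
```

In the paper's pipeline an encoder and relation module would then be adapted on `support` and evaluated on `query`; this sketch only shows the episode construction.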

47 pages, 10831 KB  
Article
DS PRO-S: A Success Assessment Model and Methodology for Data Science Projects
by Gonca Tokdemir Gökay, Ebru Gökalp and P. Erhan Eren
Appl. Sci. 2026, 16(5), 2551; https://doi.org/10.3390/app16052551 - 6 Mar 2026
Abstract
There is a persistent paradox in the data science domain: despite the growing recognition of data as a strategic asset, many projects designed to leverage its value still suffer from high failure rates. To address this challenge, this study introduces the Data Science Projects Success Assessment Model (DS PRO-S), developed using a Design Science Research approach to make data science project success explicit, measurable, and comparable. DS PRO-S functions as a meta-model and an instantiation toolkit, complete with an operational methodology that supports success and health assessments using critical success factors (CSFs) and success criteria at both the phase and project levels through four distinct modules. This modular structure enables evaluations at any point in the data science lifecycle and informs timely, data-driven interventions before issues propagate. The measurement and evaluation framework within DS PRO-S aligns with ISO/IEC 15939, incorporating mathematical formulations for aggregating success criteria and CSFs into upper-level scores. To demonstrate its instantiability, completeness, and operational utility, case studies were conducted in a predictive analytics project of a large energy enterprise and a generative AI project of a vendor. The findings indicate that DS PRO-S is applicable in diverse project contexts in the data science domain and offers a robust solution for assessments. Full article

18 pages, 3618 KB  
Article
Adaptive Ensemble Weight Optimization for Natural Gas Consumption Forecasting: A Hybrid Stochastic–Deep Learning Framework Applied to the Czech Market
by Vojtěch Vávra and Josef Jablonsky
Mathematics 2026, 14(5), 900; https://doi.org/10.3390/math14050900 - 6 Mar 2026
Abstract
The transition towards data-driven energy management requires predictive frameworks capable of handling the nonlinear and non-stationary nature of natural gas consumption. Traditional static models often struggle to adapt to rapid regime shifts in liberalized markets. To address this forecasting problem, this study proposes a convex ensemble weight optimization framework. Moving beyond simple model averaging, we formulate the ensemble weighting problem as a constrained convex optimization task on the unit simplex. We utilize the Frank–Wolfe algorithm (Conditional Gradient) to dynamically optimize the weights of a heterogeneous set of base learners, including SARIMAX, XGBoost, N-HiTS, and Temporal Fusion Transformers (TFTs). Our results on the Czech gas market dataset demonstrate that this mathematically grounded approach achieves a Mean Absolute Percentage Error (MAPE) of 4.25%, which compares favorably to individual models such as N-HiTS (5.31%) and static averaging (6.74%). While the accuracy gain over greedy ensemble selection is marginal, the proposed convex formulation offers improved stability and interpretability, which are practical advantages for operational deployment. Full article
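The convex ensemble-weighting idea can be illustrated with a minimal Frank–Wolfe (conditional gradient) loop over the unit simplex (the base-learner predictions and target below are toy numbers, not the paper's models or data):

```python
def frank_wolfe_simplex(preds, y, iters=200):
    """Minimize ||sum_k w_k * preds[k] - y||^2 over the unit simplex
    using the Frank-Wolfe method: the linear minimization oracle on the
    simplex is simply the vertex with the smallest gradient entry."""
    k, n = len(preds), len(y)
    w = [1.0 / k] * k                        # start at the barycenter
    for t in range(iters):
        resid = [sum(w[j] * preds[j][i] for j in range(k)) - y[i]
                 for i in range(n)]
        grad = [2.0 * sum(resid[i] * preds[j][i] for i in range(n))
                for j in range(k)]
        s = min(range(k), key=grad.__getitem__)  # best simplex vertex
        gamma = 2.0 / (t + 2.0)                  # classic FW step size
        w = [(1 - gamma) * wj + gamma * (1.0 if j == s else 0.0)
             for j, wj in enumerate(w)]
    return w

# Two toy base learners; the first matches the target exactly.
preds = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]
y = [1.0, 2.0, 3.0]
w = frank_wolfe_simplex(preds, y)
```

Because iterates are convex combinations of simplex vertices, the weights stay nonnegative and sum to one by construction, which is the stability and interpretability advantage the abstract refers to.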
(This article belongs to the Section D: Statistics and Operational Research)
21 pages, 4551 KB  
Article
Optimized Machine Learning Models for Predicting Compressive, Tensile, and Flexural Strengths of Multi-Fiber Recycled Aggregate Concrete
by Marwah Al tekreeti, Ali Bahadori-Jahromi, Shah Room and Zeeshan Tariq
J. Compos. Sci. 2026, 10(3), 144; https://doi.org/10.3390/jcs10030144 - 6 Mar 2026
Abstract
The demand for concrete has led to increased use of raw materials and significant waste generation. Recycled aggregate concrete (RAC) offers a viable approach to sustainable concrete; however, the use of weakly bonded mortar on aggregate leads to low strength and crack formation. Fiber reinforcement, specifically hybrid fiber reinforcement combining steel, glass, basalt, and polypropylene fibers, can increase the tensile and flexural properties of RAC. This study developed machine learning models to enable the prediction of hybrid fiber-reinforced RAC’s compressive, splitting tensile, and flexural strength performance; these new models overcome the limitations of previous research, which relied on only one fiber type and regular methods of optimization. Two models (a deep neural network (DNN) and an XGBoost model) were trained and optimized using bald eagle search (BES), particle swarm optimization (PSO), and the Bayesian optimization (BO) algorithm to improve performance. Among the three optimization analyses, PSO-XGBoost achieved the highest accuracy for compressive strength and splitting tensile strength, while BES-XGBoost achieved the highest accuracy for flexural strength. The most significant influences on the compressive strength were curing age and silica fume, while the main drivers of splitting tensile strength and flexural strength were fiber volume and fiber characteristics. The use of SHAP-based methodology with a user-friendly interface further improved the design of RAC mixtures, reducing waste from raw materials, enhancing the structural performance of RAC, and enabling data-driven decision-making in the manufacturing of eco-friendly concrete products. Full article
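A minimal particle swarm optimization loop of the kind used above for hyperparameter tuning can be sketched as follows (the quadratic objective is a stand-in for cross-validated model error; all coefficients and bounds are illustrative, not the study's settings):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best and
    the swarm's global best; positions are clamped to the search bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for cross-validated prediction error.
best, best_val = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                              bounds=[(-5, 5), (-5, 5)])
```

In the study's setting, `f` would retrain XGBoost or the DNN with candidate hyperparameters and return a validation error rather than this closed-form quadratic.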
(This article belongs to the Section Fiber Composites)

25 pages, 18685 KB  
Article
A Novel Strategy for Rapid Quantification of Multiple Quality Indicators and Grade Discrimination of Atractylodis macrocephalae Rhizoma Based on Electronic Nose, Electronic Tongue and Machine-Learning Algorithms
by Ruiqi Yang, Jiayu Wang, Yushi Wang, Xingyu Guo, Yunqi Sun, Ziyue Song, Keyao Zhu, Yuanyu Zhao and Yonghong Yan
Molecules 2026, 31(5), 881; https://doi.org/10.3390/molecules31050881 - 6 Mar 2026
Abstract
Atractylodes macrocephala Rhizoma (AMR) is a frequently used medicinal herb for treating gastrointestinal disorders, with its quality influenced by factors such as origin and cultivation duration. Traditional quality control methods for AMR are time-consuming and invasive, making the development of faster and more efficient alternatives urgently needed. This study aims to utilize an electronic nose (E-nose) and an electronic tongue (E-tongue) to acquire two-dimensional odor–taste information on AMR. Integrating this approach with machine learning (ML) enables intelligent transformation from “experience-driven” to “data-driven” quality assessment, thereby developing a rapid and cost-effective quality control strategy for AMR. Feature-extraction and feature-selection techniques were employed to optimize back-propagation neural network (BPNN) classification and regression models for eight key quality markers, selecting the optimal feature subset. Additionally, nine machine-learning algorithms were applied with the optimal feature subset to establish classification models for different AMR grades and quantitative regression models for eight components based on E-nose and E-tongue data. The results demonstrated that the E-tongue combined with the k-nearest neighbors (KNN) algorithm could achieve a rapid classification of AMR grades with an accuracy of 95.56%. It also successfully predicted the contents of the extract, volatile oil, polysaccharides, atractylenolide I, atractylenolide II, atractylenolide III, bis-atractylenolide, and atractylone, with test-set coefficient of determination (R2) values of 0.8874, 0.8313, 0.9628, 0.8406, 0.8736, 0.8532, 0.7758, and 0.8101, respectively. In conclusion, this study provides a comprehensive and rapid solution for AMR grade classification and quality evaluation, significantly improving efficiency compared with traditional methods. This strategy holds substantial promise for real-world applications, as it enables high-throughput, non-destructive screening of AMR in settings such as post-harvest processing and market quality surveillance, thereby supporting the sustainable and intelligent development of the herbal medicine industry. Full article
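The grade-classification step can be illustrated with a minimal k-nearest-neighbors sketch (the feature vectors and grade labels below are hypothetical, not actual E-tongue sensor data):

```python
from collections import Counter
import math

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance), as in a KNN grade classifier over sensor features."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical two-dimensional sensor features for two AMR grades.
train = [([0.10, 0.20], "grade-A"), ([0.15, 0.22], "grade-A"),
         ([0.12, 0.18], "grade-A"), ([0.80, 0.90], "grade-B"),
         ([0.85, 0.88], "grade-B"), ([0.90, 0.95], "grade-B")]
pred = knn_predict(train, [0.14, 0.20])
```

The study's actual models operate on selected feature subsets from the full E-nose/E-tongue response curves; this sketch only conveys the voting mechanism.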

21 pages, 4699 KB  
Article
Study on Characteristics of Floating Ice Accumulation and Entrainment Safety Thresholds Upstream of Sluice Gates Based on Model Tests and Logistic Regression
by Suming Li, Chao Li, Huiping Hou, Shiang Zhang and Xizhi Lv
Hydrology 2026, 13(3), 86; https://doi.org/10.3390/hydrology13030086 - 6 Mar 2026
Abstract
In the complex flow fields of channels affected by sluice gates and bridge piers, winter ice transport, accumulation characteristics upstream of the gate, and the determination of submersion thresholds are crucial for the safe operation of hydraulic projects. In this study, ice transport experiments were conducted with and without bridge piers upstream of the gate to analyze the key factors governing the transport process and accumulation morphology of floating ice. Four machine learning models were evaluated and compared to identify the optimal model for predicting the motion state of floating ice. Based on this optimal model, the discriminant conditions for ice submersion under both pier configurations were proposed. The results indicate that, driven by incoming hydraulic parameters, gate boundary conditions, and ice discharge, the upstream floating ice undergoes a progressive evolution: “flat accumulation → wedge-shaped accumulation → passing through the gate (entrainment)”. Compared to the GBDT, RF, and SVM models, the LR model achieves higher and more stable accuracy, precision, recall, and F1 scores under configurations without and with bridge piers. With AUC values reaching 0.993 and 0.997, respectively, this model demonstrates optimal comprehensive performance in classifying whether floating ice passes through the gate. Furthermore, based on the LR model, explicit algebraic formulas for the critical submersion thresholds were constructed. Under the experimental conditions, the critical threshold intervals for the relative gate opening (e/H) are [0.170, 0.182] without piers and [0.142, 0.155] with piers. This study provides a solid theoretical foundation and technical support for ice-prevention operations and gate dispatching in cold-region hydraulic engineering under submerged outflow conditions. Full article
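Deriving an explicit threshold from a fitted logistic regression can be sketched as solving the P = 0.5 decision boundary (w · x + b = 0) for one feature. The coefficients below are invented for illustration only; they are chosen so the toy threshold lands near the reported with-piers interval and are not the paper's fitted values:

```python
import math

def lr_probability(weights, bias, x):
    """Logistic-regression probability that floating ice passes the gate."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def critical_value(weights, bias, fixed, idx):
    """Solve the P = 0.5 boundary (w . x + b = 0) for feature `idx`
    (e.g. relative gate opening e/H) with the other features held fixed."""
    z_rest = bias + sum(w * v for i, (w, v) in
                        enumerate(zip(weights, fixed)) if i != idx)
    return -z_rest / weights[idx]

# Hypothetical coefficients for features [incoming-flow number, e/H].
weights, bias = [4.0, -25.0], 2.5
eH = critical_value(weights, bias, fixed=[0.3, 0.0], idx=1)
```

At the returned `eH` the model outputs exactly P = 0.5, which is the algebraic form an explicit submersion-threshold formula takes for a linear-in-features logistic model.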
(This article belongs to the Section Hydrological and Hydrodynamic Processes and Modelling)

25 pages, 3080 KB  
Review
Machine Learning for Alloy Design: A Property-Oriented Review
by Shamim Pourrahimi and Soroosh Hakimian
Alloys 2026, 5(1), 7; https://doi.org/10.3390/alloys5010007 - 6 Mar 2026
Abstract
Machine learning (ML) is becoming an established part of alloy research, offering new ways to link composition, processing routes, and microstructure with measured properties. In this work, recent studies using ML for predicting or optimizing alloy behavior are reviewed, covering mechanical, corrosion, phase-related, and physical properties. Unlike previous reviews organized by alloy system or modeling approach, this review is structured by target property (mechanical, corrosion, phase/structure, and physical), which helps identify the input features commonly used to model each property and highlights existing gaps in data and validation. For each study, the main property of interest, dataset features, model type, algorithm choice, use of hyperparameter tuning, and validation strategy were examined. Comparing these reports shows that ensemble models such as random forest and XGBoost, together with deep neural networks, usually perform better than linear approaches. At the same time, issues related to small datasets and inconsistent reporting remain major challenges. Attention is also drawn to new directions, particularly physics-based learning and multi-objective optimization, that are changing how ML is applied in materials design. Overall, this review summarizes current practices and outlines areas where closer integration of data-driven and experimental methods could accelerate the development of next-generation alloys. Full article

17 pages, 5128 KB  
Article
Evaluation of Residential Indoor Radon Levels in Zagreb Using Machine Learning
by Tomislav Bituh, Marija Jelena Lovrić Štefiček, Tea Čvorišćec, Branko Petrinec and Silvije Davila
Environments 2026, 13(3), 144; https://doi.org/10.3390/environments13030144 - 6 Mar 2026
Abstract
Machine learning (ML) models can complement traditional measurement-based approaches by supporting large-scale screening, spatial analysis, and prioritization of buildings for testing of indoor radon, a leading cause of lung cancer among non-smokers. Originating from uranium decay in soil and rock, radon enters homes via foundation cracks and accumulates indoors, influenced by building characteristics, ventilation, urbanization, and geogenic factors. As part of the Zagreb pilot within the “Evidence Driven Indoor Air Quality Improvement” (EDIAQI) project, this is the first ML application for indoor radon analysis in Croatia. This research evaluates residential indoor radon concentrations in Zagreb using ML applied to a dataset of 80 households. Several linear regression and tree-based ensemble methods were tested. The best-performing model (GBR) achieved an R2 of 0.99 on the training set and 0.57 on the test set, with an RMSE of 33 Bq/m3 and MAE of 26 Bq/m3. Although predictive performance was moderate and generalization limited, key building characteristics such as construction year, dwelling type, occupancy details, and floor level were identified as relevant variables. The results suggest that machine learning may support radon risk prioritization in urban environments, but cannot replace direct measurements for regulatory purposes. Full article

33 pages, 2940 KB  
Article
Sustainability Uncertainty and Green Asset Volatility: Evidence from Decentralized Finance and Environmental, Social, and Governance Funds
by Sirine Ben Yaala and Jamel Eddine Henchiri
J. Risk Financial Manag. 2026, 19(3), 194; https://doi.org/10.3390/jrfm19030194 - 6 Mar 2026
Abstract
This study investigates the impact of sustainability-related uncertainty (SRU)—captured via the Sustainability-related Uncertainty Index in equal-weighted (ESGUI_EQ) and GDP-weighted (ESGUI_GDP) forms—on the volatility of green financial assets, focusing on decentralized finance (DeFi) protocols and Environmental, Social, and Governance (ESG)-focused Exchange-Traded Funds (ETFs). Employing a fuzzy logic framework, complemented by 3D surface visualization, Rule Viewer analysis, diagnostic validation, and Granger causality tests, the study uncovers non-linear, asymmetric, and time-varying responses of these assets to sustainability ambiguity. Empirical results reveal a structural divergence: DeFi protocols amplify volatility due to fragmented governance, speculative investor behavior, and sensitivity to policy-driven signals, often exhibiting bidirectional predictive feedback with SRU, whereas ESG ETFs maintain stability through diversification, regulatory oversight, and rigorous ESG screening, primarily absorbing sustainability shocks. These findings extend sustainable finance theory by integrating governance, technology, and policy dimensions, and illustrate the value of fuzzy logic combined with Granger causality in modeling complex, ambiguous markets. From a practical standpoint, the study provides actionable guidance for investors, fund managers, and policymakers, emphasizing the importance of technology-informed governance, standardized ESG disclosures, regulatory sandboxes, and continuous monitoring of SRU. Full article
(This article belongs to the Special Issue Sustainable Finance and ESG Investment)

36 pages, 2033 KB  
Review
Artificial Intelligence-Driven Discovery and Optimization of Antimicrobial Peptides Targeting ESKAPE Pathogens and Multidrug-Resistant Fungi
by Calina Wu-Mo, Ariana Flores-González, Jezrael Meléndez-Delgado, Valerie Ortiz-Gómez, Héctor Meléndez-González and Rafael Maldonado-Hernández
Microorganisms 2026, 14(3), 591; https://doi.org/10.3390/microorganisms14030591 - 6 Mar 2026
Abstract
Antimicrobial resistance (AMR) poses an escalating global health crisis driven by multidrug-resistant ESKAPE pathogens and emerging fungal threats such as Candida auris (C. auris). In response to this urgent need for new therapeutic strategies, antimicrobial peptides (AMPs) represent a mechanistically distinct alternative to conventional antibiotics due to their membrane-targeting mechanisms and a reduced propensity for resistance development; however, clinical translation has been hindered by toxicity, instability and manufacturing constraints. Recent advances in artificial intelligence (AI) are reshaping AMP discovery and optimization. Machine learning (ML), deep learning (DL) and transformer-based protein language models now enable improved prediction of antimicrobial activity, selectivity, protease stability and host toxicity. Generative approaches, including variational autoencoders, diffusion models and reinforcement learning, facilitate de novo multi-objective peptide design and pathogen-directed optimization against resistant bacteria and multidrug-resistant fungal pathogens. Integrated design–test–learn pipelines are accelerating iterative peptide engineering by tightly coupling computational prediction with experimental validation. Clinically used peptide-derived antibiotics such as polymyxins and daptomycin demonstrate the therapeutic feasibility of peptide-based antimicrobials, while investigational peptides, including pexiganan, illustrate ongoing translational progress. Although no fully AI-designed AMP has yet achieved regulatory approval, the accelerating convergence of computational modeling and experimental validation suggests a rapidly evolving translational landscape. Advancing scalable, surveillance-informed AI frameworks that integrate resistance data, predictive safety modeling and delivery optimization will be essential to accelerate the clinical translation of next-generation, multi-objective AMPs against high-risk resistant pathogens. 
Full article

22 pages, 660 KB  
Article
Symmetry-Aware Dynamic Graph Learning for One-Step Scenic-Spot Visitor Demand Forecasting
by Wenliang Cheng, Yiqiang Wang, Yulong Xiao and Yuxue Xiao
Symmetry 2026, 18(3), 449; https://doi.org/10.3390/sym18030449 - 6 Mar 2026
Abstract
Accurate one-step forecasting of scenic-spot visitor demand is challenging due to strong non-stationarity, holiday-induced peaks, and abrupt reputation-driven shocks. We propose a symmetry-aware dynamic graph learning framework that fuses social–physical sensing streams for robust demand prediction. Online reviews are treated as social sensing, transformed into daily sentiment indicators, and aligned with demand using a delay-aware aggregation scheme. To capture evolving inter-spot dependencies, we construct a time-varying adjacency matrix that is updated over time and integrated into a lightweight spatio-temporal forecasting model, Dynamic Spatio-temporal Graph Attention LSTM (DSGAT-LSTM). The model preserves the permutation-invariant property of graph learning while introducing sentiment-guided feature reweighting and sentiment-gated temporal updates to better track volatility. Experiments on multi-year daily data from multiple A-level scenic spots with holiday and weather context demonstrate consistent error reductions over representative temporal and graph-based baselines, together with improved stability under peak and shock conditions. We will release the processed feature-level dataset and implementation scripts to support reproducibility. Full article
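One simple way to realize a time-varying adjacency matrix, in the spirit of the dynamic graph described above, is exponential smoothing of pairwise demand co-movement (an illustrative stand-in, not the paper's actual construction):

```python
def update_adjacency(adj, demand_t, demand_prev, alpha=0.2):
    """Exponentially smooth a time-varying adjacency matrix from the
    co-movement of demand changes between scenic spots: an edge weight
    grows when two spots' demand moves in the same direction."""
    n = len(demand_t)
    delta = [demand_t[i] - demand_prev[i] for i in range(n)]
    new = [row[:] for row in adj]
    for i in range(n):
        for j in range(n):
            co_move = 1.0 if delta[i] * delta[j] > 0 else 0.0
            new[i][j] = (1 - alpha) * adj[i][j] + alpha * co_move
    return new

# Three toy spots: the first two rise together, the third falls.
adj0 = [[0.0] * 3 for _ in range(3)]
adj1 = update_adjacency(adj0, [10.0, 12.0, 5.0], [8.0, 9.0, 6.0])
```

The update is symmetric in its inputs, so the smoothed matrix stays symmetric, consistent with the permutation-invariance the abstract emphasizes; the paper's model would feed such a matrix to the graph attention layers at each step.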
(This article belongs to the Special Issue Advances in Machine Learning and Symmetry/Asymmetry)

23 pages, 1688 KB  
Article
Low-Carbon Economic Dispatch of Integrated Energy Systems with Integrated Dynamic Pricing and Electric Vehicles: A Data-Model Driven Optimization Approach
by Jiale Liu, Weisi Deng, Haohuai Wang, Weidong Gao, Qi Mo and Yan Chen
Energies 2026, 19(5), 1327; https://doi.org/10.3390/en19051327 - 6 Mar 2026
Abstract
This paper addresses the critical challenges of multi-stakeholder interest coordination and low-carbon operation in modern power systems, specifically focusing on the interaction among an Integrated Energy System (IES), Electric Vehicle Charging Stations (EVCS), and Load Aggregators (LA). To tackle these challenges, we propose a novel data-model driven optimization framework. A bi-level model is established, where the upper-level IES acts as the leader, and the lower-level EVCS and LA serve as followers. At the core of our approach is an integrated dynamic pricing mechanism that synergistically combines EVCS operational schedules, carbon emission signals, and load demand response. This mechanism, enhanced by predictive insights from historical data, effectively guides lower-level entities to participate in the upper-level IES’s optimization, thereby aligning individual benefits with system-wide low-carbon goals. The resulting bi-level problem is solved iteratively using CPLEX, with the optimal equilibrium selected via a joint optimality formula. The proposed methodology is validated on a multi-stakeholder case study. Results demonstrate that our AI-enhanced dynamic pricing and dispatch model not only effectively balances the interests of all parties but also significantly improves the system’s low-carbon economic performance, showcasing the potential of integrating physical models with data-driven insights for future energy system management. Full article
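The leader–follower structure can be illustrated with a toy bi-level search: enumerate candidate leader prices, evaluate the follower's best response to each, and keep the leader-optimal pair (the demand model and all numbers are invented placeholders for the IES/EVCS/LA interaction, which the paper solves with CPLEX):

```python
def follower_demand(price, base=10.0, elasticity=2.0):
    """Lower-level best response: a price-elastic demand curve standing
    in for the EVCS/LA cost-minimizing schedule."""
    return max(base - elasticity * price, 0.0)

def solve_bilevel(prices, cost=1.0):
    """Upper level: the IES (leader) picks the tariff that maximizes its
    profit, anticipating the follower's response to each candidate."""
    best = max(prices, key=lambda p: (p - cost) * follower_demand(p))
    return best, follower_demand(best)

prices = [1.0 + 0.5 * i for i in range(9)]   # candidate tariffs 1.0 .. 5.0
p_star, d_star = solve_bilevel(prices)
```

Real bi-level dispatch replaces both toy functions with constrained optimization problems, but the anticipate-then-choose pattern is the same.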

20 pages, 9407 KB  
Systematic Review
A Systematic Review of River Discharge Measurement Methods: Evolution and Modern Applications in Water Management and Environmental Protection
by Oscar Abel González-Vergara, María Teresa Alarcón-Herrera, Ana Elizabeth Marín-Celestino, Armando Daniel Blanco-Jáquez, Joel García-Pazos, Samuel Villarreal-Rodríguez, Yolocuauhtli Salazar and Diego Armando Martínez-Cruz
Earth 2026, 7(2), 41; https://doi.org/10.3390/earth7020041 - 6 Mar 2026
Abstract
Accurate river discharge estimation is fundamental for water resource management under increasingly variable hydrological conditions. While conventional in situ techniques remain hydrometric reference standards, their operational deployment is constrained by cost, accessibility, and limited spatial coverage. Advances in remote sensing and artificial intelligence (AI) have introduced non-contact discharge estimation frameworks based on image-derived observations. This systematic review, conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 reporting guidelines, examines the evolution of river discharge measurement methods between 2004 and 2024 through a structured two-stage design. An initial search in Web of Science and Scopus identified 2809 records, of which 249 were retained for first-stage synthesis. A focused second-stage screening isolated seven studies that directly integrate image-based data with machine learning or deep learning architectures for discharge estimation. The analysis reveals a methodological transition from instrument-based hydrometry toward computationally assisted, image-driven approaches. The retained studies employ close-range and satellite imagery combined with Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and related models. Although reported validation metrics indicate strong predictive capability under specific conditions, performance remains dependent on site-specific calibration and reference discharge records. Broader operational deployment requires improved transferability, uncertainty integration, and cross-basin validation. Full article
31 pages, 2863 KB  
Article
A Physics-Informed Hybrid Ensemble for Robust and High-Fidelity Temperature Forecasting in PMSMs
by Rifath Bin Hossain, Md Maruf Al Hasan, Md Imran Khan, Monzur Ahmed, Yuting Lin and Xuchao Pan
World Electr. Veh. J. 2026, 17(3), 133; https://doi.org/10.3390/wevj17030133 - 5 Mar 2026
Abstract
The deployment of artificial intelligence in safety-critical industrial systems is hindered by a core trust deficit: models trained via empirical risk minimization often fail catastrophically in out-of-distribution (OOD) scenarios. We address this challenge by developing a physics-informed hybrid ensemble that achieves state-of-the-art accuracy and robustness for Permanent Magnet Synchronous Motor (PMSM) temperature forecasting. Our methodology first calibrates a Lumped-Parameter Thermal Network (LPTN) to serve as a physics engine that generates physically consistent data augmentations. These augmentations pre-train a Temporal Convolutional Network (TCN) encoder via self-supervision, and the final prediction combines the physics model's baseline estimate with a correction learned by an ensemble of gradient boosting models on a rich, multi-modal feature set. Evaluated against a suite of strong baselines, our hybrid ensemble achieves a state-of-the-art Root Mean Squared Error of 5.24 °C on a challenging OOD stress test composed of the most chaotic operational profiles. Most compellingly, our model's error decreased by an unprecedented 10.68% under these extreme stress conditions, where standard, purely data-driven models collapsed. This demonstrated robustness, combined with a statistically valid Coverage Under Shift (CUS) Gap of only 1.43%, provides a complete blueprint for building high-performance, trustworthy AI, enabling safer and more efficient control of critical cyber-physical systems and motivating future research into physics-guided pre-training for other industrial assets. Full article
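The physics-baseline-plus-learned-correction structure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the single-node thermal network, the Euler time step, all parameter values, and the stubbed correction term are assumptions introduced here.

```python
def lptn_step(T, P_loss, T_amb, R, C, dt):
    """One explicit Euler step of a single-node lumped-parameter
    thermal network:  C * dT/dt = P_loss - (T - T_amb) / R.

    T      : current node temperature (degC)
    P_loss : heat input, e.g. copper/iron losses (W)
    T_amb  : ambient/coolant temperature (degC)
    R, C   : thermal resistance (K/W) and capacitance (J/K)
    dt     : step size (s)
    """
    return T + (dt / C) * (P_loss - (T - T_amb) / R)


def hybrid_predict(T, P_loss, T_amb, R, C, dt, correction):
    """Hybrid forecast: physics baseline plus a learned residual.
    Here `correction` stands in for the output of the gradient
    boosting ensemble described in the abstract."""
    return lptn_step(T, P_loss, T_amb, R, C, dt) + correction
```

A useful sanity check on the physics term: at thermal equilibrium, T = T_amb + R * P_loss, the step leaves the temperature unchanged, so any nonzero hybrid output away from the baseline comes entirely from the learned correction.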
34 pages, 5022 KB  
Article
Evacuation Safety Evaluation for Deep Underground Railways Using Digital Twin Map Topology
by Jaemin Yoon, Dongwoo Song and Minkyu Park
Buildings 2026, 16(5), 1033; https://doi.org/10.3390/buildings16051033 - 5 Mar 2026
Abstract
DUR (Deep Underground Railways) stations, such as Suseo Station in Korea, present unique evacuation challenges stemming from multi-level spatial depth, long vertical circulation paths, and rapid smoke spread dynamics. Conventional design guidelines often fail to capture these complexities, underscoring the need for advanced, simulation-driven safety evaluation frameworks. This study proposes a comprehensive Digital Twin-based methodology that integrates spatial topology modeling, agent-based evacuation simulation, and dynamic hazard-aware routing. A multi-layer map topology was constructed from high-fidelity architectural geometry, decomposing the station into functional regions and encoding connectivity across platforms, concourses, corridors, and vertical circulation elements. Real-time hazard conditions were reflected through dynamic adjustments to edge weights, allowing evacuation paths to adapt to blocked exits, fire shutter operations, and smoke-infiltrated domains. Ten evacuation scenarios were developed to assess sensitivity to fire origin, exit availability, vertical circulation failures, and onboard passenger loads. Simulation results reveal that evacuation performance is primarily constrained by vertical circulation bottlenecks, with emergency stairways (E1 and E2) serving as critical choke points under high-density conditions. Cases involving exit closures or fire-compartment failures produced significant delays, frequently exceeding NFPA 130 and KRCODE performance criteria. Conversely, guided evacuation strategies demonstrated marked improvements, reducing congestion and enabling compliance with platform evacuation thresholds even in full-load scenarios. These findings highlight the necessity of transitioning from static design evaluations toward Digital Twin-enabled, predictive safety management. The proposed framework enables real-time visualization, intervention testing, and operator decision support, offering a scalable foundation for next-generation evacuation planning in extreme-depth railway infrastructures. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
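The hazard-aware routing the abstract describes, with base edge costs inflated or blocked by real-time fire and smoke state, can be sketched as a weighted shortest-path search over the station graph. The node names, graph layout, and hazard encoding below are illustrative assumptions, not the study's actual topology:

```python
import heapq
import math

def shortest_evac_path(graph, hazard, start, exit_node):
    """Dijkstra over a station graph whose edge costs are scaled by a
    per-edge hazard multiplier; math.inf models a blocked edge
    (e.g. a closed fire shutter or fully smoke-infiltrated corridor).

    graph  : {node: [(neighbor, base_cost), ...]}
    hazard : {(u, v): multiplier}, default multiplier is 1.0
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == exit_node:
            break
        for v, base_cost in graph.get(u, []):
            cost = base_cost * hazard.get((u, v), 1.0)
            if math.isinf(cost):
                continue  # edge is blocked
            nd = d + cost
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if exit_node not in dist:
        return None, math.inf  # no safe route exists
    path, node = [exit_node], exit_node
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[exit_node]

# Example: smoke blocks stairway E1, so the route diverts via E2.
station = {
    "platform":  [("stair_E1", 2.0), ("stair_E2", 3.0)],
    "stair_E1":  [("concourse", 1.0)],
    "stair_E2":  [("concourse", 1.0)],
    "concourse": [("exit", 1.0)],
}
blocked = {("platform", "stair_E1"): math.inf}
path, cost = shortest_evac_path(station, blocked, "platform", "exit")
```

Because the hazard multipliers live outside the base graph, the digital twin can update them each simulation tick without rebuilding the topology, which is the essence of the dynamic edge-weight adjustment described above.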