Search Results (68)

Search Parameters:
Keywords = E47—forecasting and simulation: models and applications

15 pages, 2092 KB  
Article
Improved NB Model Analysis of Earthquake Recurrence Interval Coefficient of Variation for Major Active Faults in the Hetao Graben and Northern Marginal Region
by Jinchen Li and Xing Guo
Entropy 2026, 28(1), 107; https://doi.org/10.3390/e28010107 - 16 Jan 2026
Viewed by 145
Abstract
This study presents an improved Nishenko–Buland (NB) model to address systematic biases in estimating the coefficient of variation for earthquake recurrence intervals based on the normalizing function T/Tave. Through Monte Carlo simulations, we demonstrate that traditional NB methods significantly underestimate the coefficient of variation when applied to limited paleoseismic datasets, with deviations reaching between 30 and 40% for small sample sizes. We developed a linear transformation and iterative optimization approach that corrects these statistical biases by standardizing recurrence interval data from different sample sizes to conform to a common standardized distribution. Application to 26 fault segments across 15 major active faults in the Hetao graben system yields a corrected coefficient of variation of α = 0.381, representing a 24% increase over the traditional method (α0 = 0.307). This correction demonstrates that conventional approaches systematically underestimate earthquake recurrence variability, potentially compromising seismic hazard assessments. The improved model successfully eliminates sampling bias through iterative convergence, providing more reliable parameters for probability distributions in renewal-based earthquake forecasting. Full article
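The small-sample underestimation this abstract describes is a general statistical effect that can be illustrated with a short Monte Carlo sketch. The lognormal recurrence-interval distribution, the target coefficient of variation, and the sample sizes below are illustrative assumptions, not the paper's data or its corrected NB method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed recurrence-interval distribution (illustrative): lognormal whose
# true coefficient of variation is alpha_true.
alpha_true = 0.5
sigma = np.sqrt(np.log(1.0 + alpha_true**2))  # lognormal shape for this CV

def mean_sample_cv(n, trials=20000):
    """Average sample CV (std/mean) over many synthetic catalogs of size n."""
    x = rng.lognormal(0.0, sigma, size=(trials, n))
    cv = x.std(axis=1, ddof=1) / x.mean(axis=1)
    return cv.mean()

# Small paleoseismic catalogs systematically underestimate the true CV,
# while large samples recover it.
cv_small = mean_sample_cv(5)
cv_large = mean_sample_cv(200)
print(cv_small, cv_large, alpha_true)
```

Running this shows the small-catalog estimate falling well below the true value, the same direction of bias the improved NB model is built to correct.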

24 pages, 8373 KB  
Article
Sensitivity of Airborne Methane Retrieval Algorithms (MF, ACRWL1MF, and DOAS) to Surface Albedo and Types: Hyperspectral Simulation Assessment
by Jidai Chen, Ding Wang, Lizhou Huang and Jiasong Shi
Atmosphere 2025, 16(11), 1224; https://doi.org/10.3390/atmos16111224 - 22 Oct 2025
Viewed by 547
Abstract
Methane (CH4) emissions are a major contributor to greenhouse gases and pose significant challenges to global climate mitigation efforts. The accurate determination of CH4 concentrations via remote sensing is crucial for emission monitoring but remains impeded by surface spectral heterogeneity—notably albedo variations and land cover diversity. This study systematically assessed the sensitivity of three mainstream algorithms, namely, matched filter (MF), albedo-corrected reweighted-L1-matched filter (ACRWL1MF), and differential optical absorption spectroscopy (DOAS), to surface type, albedo, and emission rate through high-fidelity simulation experiments, and proposed a dynamic regularized adaptive matched filter (DRAMF) algorithm. The experiments simulated airborne hyperspectral imagery from the Airborne Visible/InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) with known CH4 concentrations over diverse surfaces (including vegetation, soil, and water) and controlled variations in albedo through the large-eddy simulation (LES) mode of the Weather Research and Forecasting (WRF) model and the MODTRAN radiative transfer model. The results show the following: (1) MF and DOAS have higher true positive rates (TP > 90%) in high-reflectivity scenarios, but the problem of false positives is prominent (TN < 52%); ACRWL1MF significantly improves the true negative rate (TN = 95.9%) through albedo correction but lacks the ability to detect low concentrations of CH4 (TP = 63.8%). (2) All algorithms perform better at high emission rates (1000 kg/h) than at low emission rates (500 kg/h), but ACRWL1MF performs more robustly in low-albedo scenarios. (3) The proposed DRAMF algorithm improves the F1 score (0.129) by about 180% compared to the MF and DOAS algorithms and improves TP value (81.4%) by about 128% compared to the ACRWL1MF algorithm through dynamic background updates and an iterative reweighting mechanism. 
In practical applications, the DRAMF algorithm can also effectively monitor plumes. This research indicates that algorithms should be selected considering the specific application scenario and provides a direction for technical improvements (e.g., deep learning model) for monitoring gas emission. Full article
(This article belongs to the Special Issue Satellite Remote Sensing Applied in Atmosphere (3rd Edition))
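For context, the classical matched filter that the MF-family algorithms compared above build on can be sketched in a few lines of NumPy. The scene size, background statistics, and unit-norm CH4 target signature here are synthetic assumptions, not AVIRIS-NG data or the paper's DRAMF algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: n pixels, b spectral bands (illustrative sizes).
n, b = 500, 20
x = rng.normal(0.0, 1.0, size=(n, b))

# Assumed CH4 absorption signature t (unit-norm, synthetic).
t = rng.normal(size=b)
t /= np.linalg.norm(t)

# Implant a plume of strength 3 in the first 25 pixels.
x[:25] += 3.0 * t

# Classical matched filter: alpha = (x - mu)^T C^-1 t / (t^T C^-1 t),
# with the background mean and covariance estimated from the scene itself.
mu = x.mean(axis=0)
C = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # regularized covariance
Ci_t = np.linalg.solve(C, t)
alpha = (x - mu) @ Ci_t / (t @ Ci_t)

# Plume pixels should score far above the background.
print(alpha[:25].mean(), alpha[25:].mean())
```

The albedo sensitivity the abstract measures enters through exactly these scene-derived statistics: bright or heterogeneous surfaces distort mu and C, which is what the albedo-corrected and reweighted variants address.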

30 pages, 6699 KB  
Article
Modeling Firebrand Spotting in WRF-Fire for Coupled Fire–Weather Prediction
by Maria Frediani, Kasra Shamsaei, Timothy W. Juliano, Hamed Ebrahimian, Branko Kosović, Jason C. Knievel and Sarah A. Tessendorf
Fire 2025, 8(10), 374; https://doi.org/10.3390/fire8100374 - 23 Sep 2025
Viewed by 1522
Abstract
This study develops, implements, and evaluates the Firebrand Spotting parameterization within the WRF-Fire coupled fire–atmosphere modeling system. Fire spotting is an important mechanism characterizing fire spread in wind-driven events. It can accelerate the rate of spread and enable the fire to spread over streams and barriers such as highways. Without the capability to simulate fire spotting, wind-driven fire simulations cannot accurately represent fire behavior. In the Firebrand Spotting parameterization, firebrands are generated with a set of fixed properties, from locations vertically aligned with the leading fire line. Firebrands are transported using a Lagrangian framework accounting for particle burnout (combustion) through an MPI-compatible implementation within WRF-Fire. Fire spots may occur when firebrands land on unburned grid points. The parameterization is verified through idealized simulations and its application is demonstrated for the 2021 Marshall Fire, Colorado. The simulations are assessed using the observed fire perimeter and time of arrival at multiple locations identified from social media footage and official documents. All simulations using a range of ignition thresholds outperform the control without spotting. Simulations accounting for fire spots show more accurate fire arrival times (i.e., reflecting a better fire rate of spread), despite producing a generally larger fire area. The Heidke Skill Score (Cohen’s Kappa) for the burn area ranges between 0.62 and 0.78 for simulations with fire spots compared to 0.47 for the control. These results show that the parameterization consistently improves the fire forecast verification metrics, while also underscoring future work priorities, including advancing the generation and ignition components. Full article
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)
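The Heidke Skill Score reported above is Cohen's Kappa applied to the burned/unburned contingency table of forecast versus observed grids. A minimal sketch on toy 1-D masks (hypothetical grids, not the Marshall Fire perimeters):

```python
import numpy as np

def heidke_skill_score(pred, obs):
    """Cohen's kappa for two boolean burn-area masks (flattened grids)."""
    pred = np.asarray(pred, dtype=bool).ravel()
    obs = np.asarray(obs, dtype=bool).ravel()
    po = np.mean(pred == obs)                      # observed agreement
    p_yes = pred.mean() * obs.mean()               # chance both "burned"
    p_no = (1 - pred.mean()) * (1 - obs.mean())    # chance both "unburned"
    pe = p_yes + p_no                              # chance agreement
    return (po - pe) / (1 - pe)

# Toy "grids": a forecast that overlaps the observed burn area well.
obs = np.zeros(100, dtype=bool)
obs[20:60] = True
pred = np.zeros(100, dtype=bool)
pred[25:70] = True
print(round(heidke_skill_score(pred, obs), 3))  # → 0.694
```

A score of 1 is a perfect perimeter match and 0 is chance agreement, which puts the paper's 0.62 to 0.78 range for spotting-enabled runs versus 0.47 for the control in context.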

46 pages, 47184 KB  
Article
Goodness of Fit in the Marginal Modeling of Round-Trip Times for Networked Robot Sensor Transmissions
by Juan-Antonio Fernández-Madrigal, Vicente Arévalo-Espejo, Ana Cruz-Martín, Cipriano Galindo-Andrades, Adrián Bañuls-Arias and Juan-Manuel Gandarias-Palacios
Sensors 2025, 25(17), 5413; https://doi.org/10.3390/s25175413 - 2 Sep 2025
Viewed by 1569
Abstract
When complex computations cannot be performed on board a mobile robot, sensory data must be transmitted to a remote station to be processed, and the resulting actions must be sent back to the robot to execute, forming a repeating cycle. This involves stochastic round-trip times in the case of non-deterministic network communications and/or non-hard real-time software. Since robots need to react within strict time constraints, modeling these round-trip times becomes essential for many tasks. Modern approaches for modeling sequences of data are mostly based on time-series forecasting techniques, which impose a computational cost that may be prohibitive for real-time operation, do not consider all the delay sources existing in the sw/hw system, or do not work fully online, i.e., within the time of the current round-trip. Marginal probabilistic models, on the other hand, often have a lower cost, since they discard temporal dependencies between successive measurements of round-trip times, a suitable approximation when regime changes are properly handled given the typically stationary nature of these round-trip times. In this paper we focus on the hypothesis tests needed for marginal modeling of the round-trip times in remotely operated robotic systems with the presence of abrupt changes in regimes. We analyze in depth three common models, namely Log-logistic, Log-normal, and Exponential, and propose some modifications of parameter estimators for them and new thresholds for well-known goodness-of-fit tests, which are aimed at the particularities of our setting. We then evaluate our proposal on a dataset gathered from a variety of networked robot scenarios, both real and simulated; through >2100 h of high-performance computer processing, we assess the statistical robustness and practical suitability of these methods for these kinds of robotic applications. Full article
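The kind of goodness-of-fit machinery this abstract refers to can be sketched as follows: fit a Log-normal to synthetic round-trip times via the log-moments, then compute a one-sample Kolmogorov-Smirnov statistic against the fitted CDF. The data, parameters, and the plain asymptotic threshold are illustrative assumptions; deriving thresholds appropriate for fitted parameters and regime changes is precisely the paper's contribution.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Synthetic round-trip times (seconds); a Log-normal regime is assumed.
rtt = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=400)

# Fit the Log-normal via moments of log(RTT) (the MLE for this family).
log_rtt = np.sort(np.log(rtt))
mu_hat, sigma_hat = log_rtt.mean(), log_rtt.std(ddof=0)

# Fitted normal CDF evaluated at the ordered log-sample.
z = (log_rtt - mu_hat) / sigma_hat
F = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])

# One-sample Kolmogorov-Smirnov statistic D = sup |ECDF - F|.
n = log_rtt.size
D = max((np.arange(1, n + 1) / n - F).max(), (F - np.arange(0, n) / n).max())

# 1.36/sqrt(n) is the asymptotic 5% threshold for a *fully specified*
# distribution; fitted parameters call for stricter, Lilliefors-style
# thresholds, which is the issue the paper addresses.
print(D, D < 1.36 / np.sqrt(n))
```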

49 pages, 5229 KB  
Article
Enhancing Ship Propulsion Efficiency Predictions with Integrated Physics and Machine Learning
by Hamid Reza Soltani Motlagh, Seyed Behbood Issa-Zadeh, Md Redzuan Zoolfakar and Claudia Lizette Garay-Rondero
J. Mar. Sci. Eng. 2025, 13(8), 1487; https://doi.org/10.3390/jmse13081487 - 31 Jul 2025
Viewed by 1955
Abstract
This research develops a dual physics-based machine learning system to forecast fuel consumption and CO2 emissions for a 100 m oil tanker across six operational scenarios: Original, Paint, Advanced Propeller, Fin, Bulbous Bow, and Combined. The combination of hydrodynamic calculations with Monte Carlo simulations provides a solid foundation for training machine learning models, particularly in cases where dataset restrictions are present. The XGBoost model demonstrated superior performance compared to Support Vector Regression, Gaussian Process Regression, Random Forest, and Shallow Neural Network models, achieving near-zero prediction errors that closely matched physics-based calculations. The physics-based analysis demonstrated that the Combined scenario, which combines hull coatings with bulbous bow modifications, produced the largest fuel consumption reduction (5.37% at 15 knots), followed by the Advanced Propeller scenario. The results demonstrate that user inputs (e.g., engine power: 870 kW, speed: 12.7 knots) match the Advanced Propeller scenario, followed by Paint, which indicates that advanced propellers or hull coatings would optimize efficiency. The obtained insights help ship operators modify their operational parameters and designers select essential modifications for sustainable operations. The model maintains its strength at low speeds, where fuel consumption is minimal, making it applicable to other oil tankers. The hybrid approach provides a new tool for maritime efficiency analysis, yielding interpretable results that support International Maritime Organization objectives, despite starting with a limited dataset. The model requires additional research to enhance its predictive accuracy using larger datasets and real-time data collection, which will aid in achieving global environmental stewardship. Full article
(This article belongs to the Special Issue Machine Learning for Prediction of Ship Motion)

14 pages, 450 KB  
Article
Consumer Transactions Simulation Through Generative Adversarial Networks Under Stock Constraints in Large-Scale Retail
by Sergiy Tkachuk, Szymon Łukasik and Anna Wróblewska
Electronics 2025, 14(2), 284; https://doi.org/10.3390/electronics14020284 - 12 Jan 2025
Cited by 1 | Viewed by 1904
Abstract
In the rapidly evolving domain of large-scale retail data systems, envisioning and simulating future consumer transactions has become a crucial area of interest. It offers significant potential to fortify demand forecasting and fine-tune inventory management. This paper presents an innovative application of Generative Adversarial Networks (GANs) to generate synthetic retail transaction data, specifically focusing on a novel system architecture that combines consumer behavior modeling with stock-keeping unit (SKU) availability constraints to address real-world assortment optimization challenges. We diverge from conventional methodologies by integrating SKU data into our GAN architecture and using more sophisticated embedding methods (e.g., hyper-graphs). This design choice enables our system to generate not only simulated consumer purchase behaviors but also reflects the dynamic interplay between consumer behavior and SKU availability—an aspect often overlooked, among others, because of data scarcity in legacy retail simulation models. Our GAN model generates transactions under stock constraints, pioneering a resourceful experimental system with practical implications for real-world retail operation and strategy. Preliminary results demonstrate enhanced realism in simulated transactions measured by comparing generated items with real ones using methods employed earlier in related studies. This underscores the potential for more accurate predictive modeling. Full article
(This article belongs to the Special Issue Data Retrieval and Data Mining)

28 pages, 4062 KB  
Article
Forecasting River Water Temperature Using Explainable Artificial Intelligence and Hybrid Machine Learning: Case Studies in Menindee Region in Australia
by Leyde Briceno Medina, Klaus Joehnk, Ravinesh C. Deo, Mumtaz Ali, Salvin S. Prasad and Nathan Downs
Water 2024, 16(24), 3720; https://doi.org/10.3390/w16243720 - 23 Dec 2024
Cited by 1 | Viewed by 2438
Abstract
Water temperature (WT) is a crucial factor indicating the quality of water in the river system. Given the significant variability in water quality, it is vital to devise more precise methods to forecast temperature in river systems and assess the water quality. This study designs and evaluates a new explainable artificial intelligence and hybrid machine-learning framework tailored for hourly and daily surface WT predictions for case studies in the Menindee region, focusing on the Weir 32 site. The proposed hybrid framework was designed by coupling a nonstationary signal processing method of Multivariate Variational Mode Decomposition (MVMD) with a bidirectional long short-term memory network (BiLSTM). The study has also employed a combination of in situ measurements with gridded and simulation datasets in the testing phase to rigorously assess the predictive performance of the newly designed MVMD-BiLSTM alongside other benchmarked models. In accordance with the outcomes of the statistical score metrics and visual infographics of the predicted and observed WT, the objective model displayed superior predictive performance against other benchmarked models. For instance, the MVMD-BiLSTM model captured the lowest Root Mean Square Percentage Error (RMSPE) values of 9.70% and 6.34% for the hourly and daily forecasts, respectively, at Weir 32. Further application of this proposed model reproduced the overall dynamics of the daily WT in Burtundy (RMSPE = 7.88% and Mean Absolute Percentage Error (MAPE) = 5.78%) and Pooncarie (RMSPE = 8.39% and MAPE = 5.89%), confirming that the gridded data effectively capture the overall WT dynamics at these locations. The overall explainable artificial intelligence (xAI) results, based on Local Interpretable Model-Agnostic Explanations (LIME), indicate that air temperature (AT) was the most significant contributor towards predicting WT. 
The superior capabilities of the proposed MVMD-BiLSTM model through this case study consolidate its potential in forecasting WT. Full article

20 pages, 11638 KB  
Article
A Study of Landslide Susceptibility Assessment and Trend Prediction Using a Rule-Based Discrete Grid Model
by Yanjun Duan, Xiaotong Zhang, Wenbo Zhao, Xinpei Han, Lingfeng Lv, Yunjun Yao, Kun Jia and Qiao Wang
Remote Sens. 2024, 16(24), 4740; https://doi.org/10.3390/rs16244740 - 19 Dec 2024
Cited by 1 | Viewed by 1794
Abstract
Landslides are common natural disasters in mountainous regions, exerting considerable influence on socioeconomic development and city construction. Landslides occur and develop rapidly, often posing a significant threat to the safety of individuals and their property. Consequently, the mapping of areas susceptible to landslides and the simulation of the development of such events are crucial for the early warning and forecasting of regional landslide occurrences, as well as for the management of associated risks. In this study, a landslide susceptibility (LS) model was developed using an ensemble machine learning (ML) approach which integrates geological and geomorphological data, hydrological data, and remote sensing data. A total of nine factors (e.g., surface deformation rates (SDF), slope, and aspect) were used to assess the susceptibility of the study area to landslides and a grading of the LS in the study area was obtained. The proposed model demonstrates high accuracy and good applicability for LS. Additionally, a simulation of the landslide process and velocity was constructed based on the principles of landslide movement and the rule-based discrete grid model. Compared with actual unmanned aerial vehicle (UAV) imagery, this simulation model has a Sørensen coefficient (SC) of 0.878, a kappa coefficient of 0.891, and a total accuracy of 94.12%. The evaluation results indicate that the model aligns well with the spatial and temporal development characteristics of landslides, thereby providing a valuable reference basis for monitoring and early warning of landslide events. Full article

22 pages, 44511 KB  
Article
Deep Learning Prediction of Streamflow in Portugal
by Rafael Francisco and José Pedro Matos
Hydrology 2024, 11(12), 217; https://doi.org/10.3390/hydrology11120217 - 19 Dec 2024
Cited by 3 | Viewed by 3489
Abstract
The transformative potential of deep learning models is felt in many research fields, including hydrology and water resources. This study investigates the effectiveness of the Temporal Fusion Transformer (TFT), a deep neural network architecture for predicting daily streamflow in Portugal, and benchmarks it against the popular Hydrologiska Byråns Vattenbalansavdelning (HBV) hydrological model. Additionally, it evaluates the performance of TFTs through selected forecasting examples. Information is provided about key input variables, including precipitation, temperature, and geomorphological characteristics. The study involved extensive hyperparameter tuning, with over 600 simulations conducted to fine-tune performance and ensure reliable predictions across diverse hydrological conditions. The results showed that TFTs outperformed the HBV model, successfully predicting streamflow in several catchments of distinct characteristics throughout the country. TFTs not only provide trustworthy predictions with associated probabilities of occurrence but also offer considerable advantages over classical forecasting frameworks, such as the ability to model complex temporal dependencies and interactions across different inputs and to weight features based on their relevance to the target variable. Multiple practical applications can rely on streamflow predictions made with TFT models, such as flood risk management, water resources allocation, and support for climate change adaptation measures. Full article

25 pages, 4369 KB  
Article
Optimizing Project Time and Cost Prediction Using a Hybrid XGBoost and Simulated Annealing Algorithm
by Ali Akbar ForouzeshNejad, Farzad Arabikhan and Shohin Aheleroff
Machines 2024, 12(12), 867; https://doi.org/10.3390/machines12120867 - 29 Nov 2024
Cited by 17 | Viewed by 5184
Abstract
Machine learning technologies have recently emerged as transformative tools for enhancing project management accuracy and efficiency. This study introduces a data-driven model that leverages the hybrid eXtreme Gradient Boosting-Simulated Annealing (XGBoost-SA) algorithm to predict the time and cost of construction projects. By accounting for the complexity of activity networks and uncertainties within project environments, the model aims to address key challenges in project forecasting. Unlike traditional methods such as Earned Value Management (EVM) and Earned Schedule Method (ESM), which rely on static metrics, the XGBoost-SA model adapts dynamically to project data, achieving 92% prediction accuracy. This advanced model offers a more precise forecasting approach by incorporating and optimizing features from historical data. Results reveal that XGBoost-SA reduces cost prediction error by nearly 50% and time prediction error by approximately 80% compared to EVM and ESM, underscoring its effectiveness in complex scenarios. Furthermore, the model’s ability to manage limited and evolving data offers a practical solution for real-time adjustments in project planning. With these capabilities, XGBoost-SA provides project managers with a powerful tool for informed decision-making, efficient resource allocation, and proactive risk management, making it highly applicable to complex construction projects where precision and adaptability are essential. The main limitation of the developed model in this study is the reliance on data from similar projects, which necessitates additional data for application to other industries. Full article
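The simulated-annealing half of the hybrid above can be sketched as a short acceptance loop. The objective below is a synthetic bowl-shaped surface standing in for XGBoost cross-validation error (training and scoring a real model at each step), and the hyperparameter ranges are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(42)

def cv_error(params):
    """Stand-in for XGBoost cross-validation error (synthetic surface with
    a minimum at depth 6, learning rate 0.1)."""
    depth, lr = params
    return (depth - 6) ** 2 * 0.01 + (math.log10(lr) + 1) ** 2

def neighbor(params):
    """Random small move in hyperparameter space (clamped to bounds)."""
    depth, lr = params
    depth = min(12, max(2, depth + random.choice([-1, 0, 1])))
    lr = min(0.5, max(0.001, lr * 10 ** random.uniform(-0.2, 0.2)))
    return (depth, lr)

def simulated_annealing(start, t0=1.0, cooling=0.95, steps=300):
    current, e_cur = start, cv_error(start)
    best, e_best = current, e_cur
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        e_cand = cv_error(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if e_cand < e_cur or random.random() < math.exp((e_cur - e_cand) / t):
            current, e_cur = cand, e_cand
        if e_cur < e_best:
            best, e_best = current, e_cur
        t *= cooling  # geometric cooling schedule
    return best, e_best

best, err = simulated_annealing((2, 0.3))
print(best, err)
```

As the temperature decays, the loop shifts from exploration to greedy refinement, which is what lets the hybrid escape poor initial feature/parameter choices before converging.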

13 pages, 2051 KB  
Article
Artificial Neural Networks for Mineral Production Forecasting in the In Situ Leaching Process: Uranium Case Study
by Daniar Aizhulov, Madina Tungatarova, Maksat Kurmanseiit and Nurlan Shayakhmetov
Processes 2024, 12(10), 2285; https://doi.org/10.3390/pr12102285 - 18 Oct 2024
Cited by 5 | Viewed by 1803
Abstract
This study was conducted to assess the applicability of artificial neural networks (ANN) for forecasting the dynamics of uranium extraction over exploitation time during the process of In Situ Leaching (ISL). Currently, ISL process simulation involves multiple steps, starting with geostatistical interpolation, followed by computational fluid dynamics (CFD) and reactive transport simulation. While extensive research exists detailing each of these steps, machine learning techniques may offer the potential to directly obtain extraction curves (i.e., the concentration of the mineral produced over the exploitation time of the deposit), thereby bypassing these computationally expensive steps. As a basis, both an empirical experimental configuration and reactive transport simulations were used to generate training data for the neural network model. An ANN was constructed, trained, and tested on several test cases with different initial parameters, then the expected outcomes were compared to those derived from conventional modeling techniques. The results indicate that, for the employed experimental configuration and a limited number of features, artificial intelligence technologies, specifically regression-based neural networks, can model the recovery rate (or extraction degree) of the ISL process for mineral production, achieving a high degree of accuracy compared to traditional CFD and mass transport models. Full article
(This article belongs to the Section Energy Systems)

21 pages, 27474 KB  
Article
Hybrid Twins Modeling of a High-Level Radioactive Waste Cell Demonstrator for Long-Term Temperature Monitoring and Forecasting
by David Muñoz, Anoop Ebey Thomas, Julien Cotton, Johan Bertrand and Francisco Chinesta
Sensors 2024, 24(15), 4931; https://doi.org/10.3390/s24154931 - 30 Jul 2024
Viewed by 1555
Abstract
Monitoring a deep geological repository for radioactive waste during the operational phases relies on a combination of fit-for-purpose numerical simulations and online sensor measurements, both producing complementary massive data, which can then be compared to predict reliable and integrated information (e.g., in a digital twin) reflecting the actual physical evolution of the installation over the long term (i.e., a century), the ultimate objective being to assess that the repository components/processes are effectively following the expected trajectory towards the closure phase. Data prediction involves using historical data and statistical methods to forecast future outcomes, but it faces challenges such as data quality issues, the complexity of real-world data, and the difficulty in balancing model complexity. Feature selection, overfitting, and the interpretability of complex models further contribute to the complexity. Data reconciliation involves aligning model with in situ data, but a major challenge is to create models capturing all the complexity of the real world, encompassing dynamic variables, as well as the residual and complex near-field effects on measurements (e.g., sensors coupling). This difficulty can result in residual discrepancies between simulated and real data, highlighting the challenge of accurately estimating real-world intricacies within predictive models during the reconciliation process. The paper delves into these challenges for complex and instrumented systems (multi-scale, multi-physics, and multi-media), discussing practical applications of machine and deep learning methods in the case study of thermal loading monitoring of a high-level waste (HLW) cell demonstrator (called ALC1605) implemented at Andra’s underground research laboratory. Full article
(This article belongs to the Section Electronic Sensors)

17 pages, 6670 KB  
Article
Dynamical Downscaling of Daily Extreme Temperatures over China Using PRECIS Model
by Junhong Guo, Hongtao Jia, Yuexin Wang, Xiaoxuan Wang and Wei Li
Sustainability 2024, 16(7), 3030; https://doi.org/10.3390/su16073030 - 5 Apr 2024
Cited by 1 | Viewed by 1839
Abstract
As global warming intensifies and the frequency of extreme weather events rises, posing a major threat to the world’s economy and sustainable development, accurate forecasting of future extreme events is of great significance to mankind’s response to extreme weather events and to the sustainable development of society. Global Climate Models (GCMs) have limitations in their applicability at regional scales due to their coarse resolution. Utilizing dynamical downscaling methods based on regional climate models (RCMs) is an essential approach to obtaining high-resolution climate simulation information in future. This study represents an attempt to extend the use of the Providing REgional Climates for Impacts Studies (PRECIS) regional climate model by employing the BCC-CSM2-MR model from the Beijing Climate Center to drive it, conducting downscaling experiments over China at a spatial resolution of 0.22° (25 km). The simulation and prediction of daily maximum and minimum temperatures across the China region are conducted, marking a significant effort to expand the usage of PRECIS with data from alternative GCMs. The results indicate that PRECIS performs well in simulating the daily maximum and minimum temperatures over the China region, accurately capturing their spatial distribution and demonstrating notable simulation capabilities for both cold and warm regions. In the annual cycle, the simulation performance of PRECIS is superior to its driving GCM, particularly during cold months (i.e., December and from January to May). Regarding future changes, the daily extreme temperatures in most regions are projected to increase gradually over time. In the early 21st century, the warming magnitude is approximately 1.5 °C, reaching around 3 °C by the end of the century, with even higher warming magnitudes exceeding 4.5 °C under the SSP585 scenario. 
Northern regions are projected to warm more than southern regions, suggesting that extreme temperatures will rise faster at higher latitudes. This paper provides forecasts of extreme temperatures in China that will be useful for studying extreme events and for government decision-making in response to them. Full article
23 pages, 606 KB  
Review
Current State of Advances in Quantification and Modeling of Hydrological Droughts
by Tribeni C. Sharma and Umed S. Panu
Water 2024, 16(5), 729; https://doi.org/10.3390/w16050729 - 29 Feb 2024
Cited by 5 | Viewed by 2793
Abstract
Hydrological droughts may be defined as sustained and regionally extensive water shortages as reflected in streamflows, which are noticeable and gauged worldwide. Hydrological droughts are largely analyzed using the truncation-level approach, in which the truncation level represents the desired flow condition, such as the median, mean, or any other flow quantile of an annual, monthly, or weekly flow sequence. Hydrologic droughts are quantified through indices such as the standardized streamflow index (SSI), used in tandem with the standardized precipitation index (SPI) common in meteorological drought analysis. Runs of deficits in the SSI sequence below the truncation level are treated as drought episodes, so the theory of runs is an essential analytical tool. From the modeling perspective, the parameters of significance for hydrological droughts (treated here as synonymous with streamflow droughts) are the longest duration and the largest magnitude over a desired return period of T years (or months, or weeks) of the streamflow sequence. It is to be stressed that the magnitude component of hydrological drought is of paramount importance for the design and operation of water-storage systems such as reservoirs. Time scales for hydrologic drought analysis range from daily to annual, but for most applications a monthly scale is deemed appropriate. Several methodologies are in vogue for modeling the aforesaid parameters, e.g., empirical fitting of historical drought sequences with a known probability density function (pdf), the extreme number theorem, Markov chain analysis, log-linear models, copulas, entropy-based analyses, and machine learning (ML)-based methods such as artificial neural networks (ANN), wavelet transforms (WT), support vector machines (SVM), adaptive neuro-fuzzy inference systems (ANFIS), and hybrid methods involving entropy, copulas, and ML-based methods. 
Hydrologic drought forecasting is rigorously conducted through machine-learning-based methodologies, although traditional stochastic methods such as the autoregressive integrated moving average (ARIMA), seasonal ARIMA (SARIMA), copulas, and entropy-based methods remain popular. New techniques for flow simulation are based on copula and entropy concepts and on ML methodologies such as ANN, WT, and SVM; the simulated flows can be used to derive drought parameters, in consonance with traditional Monte Carlo methods of data generation. Efforts are underway to use hydrologic drought models for sizing reservoirs across rivers. ML methods combined in hybrid form hold promise in drought forecasting for better management of existing water resources during drought periods. Data mining and pre-processing techniques are expected to play a significant role in hydrologic drought modeling and forecasting in the future. Full article
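The theory-of-runs extraction of drought duration and magnitude from an SSI sequence described above can be sketched as follows; the truncation level and sample values are hypothetical:

```python
import numpy as np

def drought_runs(ssi, truncation=0.0):
    """Identify drought episodes as runs of SSI values below the
    truncation level (theory of runs). Returns a list of
    (duration, magnitude) pairs, where magnitude is the summed
    deficit below the truncation level over the run."""
    episodes = []
    duration, magnitude = 0, 0.0
    for x in ssi:
        if x < truncation:
            duration += 1
            magnitude += truncation - x
        elif duration > 0:            # run just ended
            episodes.append((duration, magnitude))
            duration, magnitude = 0, 0.0
    if duration > 0:                  # run extends to end of record
        episodes.append((duration, magnitude))
    return episodes

# Hypothetical monthly SSI series; two drought episodes
ssi = np.array([0.5, -0.2, -1.1, -0.4, 0.3, -0.8, -0.6, 1.0])
runs = drought_runs(ssi)
longest = max(d for d, _ in runs)   # longest duration: 3 months
largest = max(m for _, m in runs)   # largest magnitude: ~1.7
```

The longest duration and largest magnitude over a long (observed or simulated) record are exactly the parameters the abstract identifies as central to reservoir design.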
(This article belongs to the Special Issue Advances in Quantification and Modeling of Hydrological Droughts)
23 pages, 10308 KB  
Article
An Open-Source Cross-Section Tool for Hydrodynamic Model Geometric Input Development
by Bradley Tom, Minxue He and Prabhjot Sandhu
Hydrology 2023, 10(11), 212; https://doi.org/10.3390/hydrology10110212 - 14 Nov 2023
Viewed by 4143
Abstract
Hydrodynamic models are widely used in simulating water dynamics in riverine and estuarine systems. A reasonably realistic representation of the geometry (e.g., channel length, junctions, cross-sections) of the study area is imperative for any successful hydrodynamic modeling application. Typically, hydrodynamic models do not ingest these data directly but rely on pre-processing tools to convert them to a readable format. This study presents a parsimonious, open-source, and user-friendly Java software tool, the Cross-Section Development Program (CSDP), developed by the authors to prepare geometric inputs for hydrodynamic models. The CSDP allows the user to select bathymetry data collected in different years by different agencies and to create cross-sections and computational points in a channel automatically. This study further illustrates the application of this tool to the Delta Simulation Model II, the operational forecasting and planning hydrodynamic and water quality model developed for the Sacramento–San Joaquin Delta in California, United States. Simulated water levels and flow rates at key stations are evaluated against corresponding observations and mimic the observed patterns very well. The square of the correlation coefficient is generally over 0.95 during the calibration period and over 0.80 during the validation period. The absolute bias is generally less than 5% and 10% during the calibration and validation periods, respectively. The Kling–Gupta efficiency index is generally over 0.70 during both calibration and validation periods. The results illustrate that the CSDP can be efficiently applied to generate geometric inputs for hydrodynamic models. Full article
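The skill scores reported above can be computed from paired observed and simulated series. A minimal sketch, assuming the 2009 formulation of the Kling–Gupta efficiency (the study's exact variant is not stated):

```python
import numpy as np

def evaluation_metrics(obs, sim):
    """Squared correlation (R^2), absolute percent bias, and the
    Kling-Gupta efficiency (Gupta et al., 2009 formulation)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()    # variability ratio
    beta = sim.mean() / obs.mean()   # bias ratio
    kge = 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)
    pbias = abs(100.0 * (sim - obs).sum() / obs.sum())
    return r**2, pbias, kge
```

A perfect simulation yields R^2 = 1, 0% bias, and KGE = 1, so the thresholds cited in the abstract (R^2 > 0.95, bias < 5% or 10%, KGE > 0.70) can be checked directly against these values.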