Search Results (1,038)

Search Parameters:
Keywords = stochastic averaging

25 pages, 1770 KB  
Article
Comparative Evaluation of Bandit-Style Heuristic Policies for Moving Target Detection in a Linear Grid Environment
by Hyunmin Kang, Minho Ahn and Yongduek Seo
Sensors 2026, 26(1), 226; https://doi.org/10.3390/s26010226 - 29 Dec 2025
Abstract
Moving-target detection under strict sensing constraints is a recurring subproblem in surveillance, search-and-rescue, and autonomous robotics. We study a canonical one-dimensional finite grid in which a sensor probes one location per time step with binary observations while the target follows reflecting random-walk dynamics. The objective is to minimize the expected time to detection using transparent, training-free decision rules defined on the belief state of the target location. We compare two belief-driven heuristics with purely online implementation: a greedy rule that always probes the most probable location and a belief-proportional sampling (BPS, probability matching) rule that samples sensing locations according to the belief distribution (i.e., posterior probability of the target location). Repeated Monte Carlo simulations quantify the exploitation–exploration trade-off and provide a self-comparison between the two policies. Across tested grid sizes, the greedy policy consistently yields the shortest expected time to detection, improving by roughly 17–20% over BPS and uniform random probing in representative settings. BPS trades some average efficiency for stochastic exploration, which can be beneficial under model mismatch. This study provides an interpretable baseline and quantitative reference for extensions to noisy sensing and higher-dimensional search.
(This article belongs to the Special Issue Multi-Sensor Technology for Tracking, Positioning and Navigation)
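To make the comparison concrete, here is a minimal Monte Carlo sketch of the two belief-driven policies, assuming a perfect binary sensor and the reflecting random walk described in the abstract (grid size, trial count, and all names are illustrative, not the authors' code):

```python
import random

def propagate(belief, n):
    # Push the belief through the reflecting random-walk kernel.
    new = [0.0] * n
    for i, p in enumerate(belief):
        for move in (-1, 1):
            j = i + move
            if j < 0 or j >= n:
                j = i - move  # reflect off the boundary
            new[j] += 0.5 * p
    return new

def mean_detection_time(policy, n=10, trials=2000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        target = rng.randrange(n)
        belief = [1.0 / n] * n  # uniform prior over the target location
        t = 0
        while True:
            t += 1
            if policy == "greedy":
                probe = max(range(n), key=lambda i: belief[i])
            else:  # belief-proportional sampling (probability matching)
                probe = rng.choices(range(n), weights=belief)[0]
            if probe == target:
                break
            # Bayesian update on a miss, then advance both belief and target.
            belief[probe] = 0.0
            s = sum(belief)
            belief = propagate([p / s for p in belief], n)
            move = rng.choice([-1, 1])
            target = target + move if 0 <= target + move < n else target - move
        total += t
    return total / trials

greedy_t = mean_detection_time("greedy")
bps_t = mean_detection_time("bps")
```

In this toy setting the greedy rule yields a shorter mean detection time than BPS, consistent in direction with the 17–20% improvement the abstract reports.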

20 pages, 4291 KB  
Article
Scheduling Strategy for Electric Vehicle Aggregators Participating in Energy–Frequency Regulation Markets Considering User Uncertainty
by Xiaohan Dong, Chengxin Li, Xiuzheng Wu and Zhixing Wang
Energies 2026, 19(1), 158; https://doi.org/10.3390/en19010158 - 27 Dec 2025
Abstract
The continuous growth of electric vehicle (EV) penetration offers electric vehicle aggregators (EVAs) opportunities to increase revenue by participating in both the energy market and the frequency regulation (FR) market. However, the uncertainty of user behavior poses challenges for formulating effective scheduling strategies. To address these issues, this paper first establishes a charging probability prediction model that considers battery state, travel distance, and user driving habits. Subsequently, a distributionally robust optimization (DRO) model is adopted to characterize the uncertainties associated with EV clusters, and the Column-and-Constraint Generation (C&CG) algorithm is employed to decompose the original model into a master–subproblem framework for solution. Finally, the proposed scheduling strategy for EVAs is validated within the PJM market framework. The results demonstrate that simultaneous participation in the energy and FR markets can significantly enhance the operational revenue of EVAs, achieving a total daily revenue of USD 547.47 compared to USD 427.35 from coordinated charging only. Moreover, the scheduling strategy based on the DRO model achieves a trade-off between economic efficiency and risk resilience, yielding a higher average daily revenue with lower volatility (standard deviation of USD 40.46) compared to Stochastic Optimization (USD 500.98 and USD 49.57, respectively).
(This article belongs to the Section E: Electric Vehicles)

17 pages, 568 KB  
Article
Long-Term QoS-Constrained RSMA Scheduling in Multi-Carrier Systems
by Jae-Won Lee, Ju-Yeon Lee, Young-Hyun Kim, Sung-Yeon Kim and Do-Yup Kim
Mathematics 2026, 14(1), 92; https://doi.org/10.3390/math14010092 - 26 Dec 2025
Abstract
This paper studies long-term resource allocation for rate-splitting multiple access (RSMA) in multi-carrier downlink systems. RSMA provides a flexible interference-management mechanism that bridges spatial division multiple access (SDMA) and non-orthogonal multiple access (NOMA), but guaranteeing long-term quality-of-service (QoS) performance under dynamic fading channels remains challenging. To address this limitation, we develop an opportunistic scheduling framework based on Lagrangian duality and stochastic optimization, which maximizes the long-term weighted sum rate (WSR) while satisfying per-user time-average QoS constraints. The proposed method decomposes the long-term problem into per-slot subproblems with adaptive effective weights, and each subproblem is efficiently solved through a two-stage procedure consisting of subcarrier–user pair matching and power allocation. Simulation results show that the proposed RSMA scheduling framework significantly outperforms conventional NOMA while ensuring the QoS requirements of all users. These results demonstrate the practical applicability of RSMA for next-generation wireless networks requiring both high spectral efficiency and long-term reliability.
(This article belongs to the Special Issue Computational Methods in Wireless Communication)
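The idea of "per-slot subproblems with adaptive effective weights" can be illustrated with a deliberately simplified one-user-per-slot sketch: a dual (virtual-queue) variable per user grows while that user falls short of its time-average rate target, steering the per-slot choice. The channel model, r_min, step size, and rate caps below are invented for illustration, and rate splitting itself is abstracted away:

```python
import random

def schedule(caps=(2.0, 1.0, 0.6), r_min=0.2, step=0.05, slots=5000, seed=1):
    # Dual variables q[u], one per time-average QoS constraint; the
    # effective weight (1 + q[u]) rises while user u is under-served.
    rng = random.Random(seed)
    n = len(caps)
    q = [0.0] * n
    served = [0.0] * n
    for _ in range(slots):
        rates = [rng.uniform(0.0, c) for c in caps]  # i.i.d. fading draw
        # Per-slot subproblem: serve the user with the best weighted rate.
        k = max(range(n), key=lambda u: (1.0 + q[u]) * rates[u])
        for u in range(n):
            r = rates[u] if u == k else 0.0
            served[u] += r
            # Dual ascent: grow q when the user falls short of r_min.
            q[u] = max(q[u] + step * (r_min - r), 0.0)
    return [s / slots for s in served]

avg_qos = schedule()              # adaptive effective weights
avg_myopic = schedule(step=0.0)   # plain max-rate, no QoS tracking
```

With frozen weights the weakest user is starved; with dual updates its time-average rate is pulled toward the target, which is the mechanism the abstract's framework formalizes.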

33 pages, 40054 KB  
Article
MVDCNN: A Multi-View Deep Convolutional Network with Feature Fusion for Robust Sonar Image Target Recognition
by Yue Fan, Cheng Peng, Peng Zhang, Zhisheng Zhang, Guoping Zhang and Jinsong Tang
Remote Sens. 2026, 18(1), 76; https://doi.org/10.3390/rs18010076 - 25 Dec 2025
Abstract
Automatic Target Recognition (ATR) in single-view sonar imagery is severely hampered by geometric distortions, acoustic shadows, and incomplete target information due to occlusions and the slant-range imaging geometry, which frequently give rise to misclassification and hinder practical underwater detection applications. To address these critical limitations, this paper proposes a Multi-View Deep Convolutional Neural Network (MVDCNN) based on feature-level fusion for robust sonar image target recognition. The MVDCNN adopts a highly modular and extensible architecture consisting of four interconnected modules: an input reshaping module that adapts multi-view images to match the input format of pre-trained backbone networks via dimension merging and channel replication; a shared-weight feature extraction module that leverages Convolutional Neural Network (CNN) or Transformer backbones (e.g., ResNet, Swin Transformer, Vision Transformer) to extract discriminative features from each view, ensuring parameter efficiency and cross-view feature consistency; a feature fusion module that aggregates complementary features (e.g., target texture and shape) across views using max-pooling to retain the most salient characteristics and suppress noisy or occluded view interference; and a lightweight classification module that maps the fused feature representations to target categories. Additionally, to mitigate the data scarcity bottleneck in sonar ATR, we design a multi-view sample augmentation method based on sonar imaging geometric principles: this method systematically combines single-view samples of the same target via the combination formula and screens valid samples within a predefined azimuth range, constructing high-quality multi-view training datasets without relying on complex generative models or massive initial labeled data.

Comprehensive evaluations on the Custom Side-Scan Sonar Image Dataset (CSSID) and Nankai Sonar Image Dataset (NKSID) demonstrate the superiority of our framework over single-view baselines. Specifically, the two-view MVDCNN achieves average classification accuracies of 94.72% (CSSID) and 97.24% (NKSID), with relative improvements of 7.93% and 5.05%, respectively; the three-view MVDCNN further boosts the average accuracies to 96.60% and 98.28%. Moreover, MVDCNN substantially elevates the precision and recall of small-sample categories (e.g., Fishing net and Small propeller in NKSID), effectively alleviating the class imbalance challenge. Mechanism validation via t-Distributed Stochastic Neighbor Embedding (t-SNE) feature visualization and prediction confidence distribution analysis confirms that MVDCNN yields more separable feature representations and more confident category predictions, with stronger intra-class compactness and inter-class discrimination in the feature space. The proposed MVDCNN framework provides a robust and interpretable solution for advancing sonar ATR and offers a technical paradigm for multi-view acoustic image understanding in complex underwater environments.
(This article belongs to the Special Issue Underwater Remote Sensing: Status, New Challenges and Opportunities)
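The combination-based multi-view augmentation can be sketched in a few lines: enumerate view pairs of the same target and keep only those whose angular separation falls in a predefined azimuth window. The window bounds, sample tags, and function name are hypothetical; the paper's screening rule is only approximated:

```python
from itertools import combinations

def build_multiview_pairs(samples, min_sep=20.0, max_sep=90.0):
    # samples: list of (sample_id, azimuth_deg) single-view observations
    # of the same target. Combine views pairwise and keep pairs whose
    # angular separation lies inside the azimuth window.
    pairs = []
    for (id_a, az_a), (id_b, az_b) in combinations(samples, 2):
        sep = abs(az_a - az_b) % 360.0
        sep = min(sep, 360.0 - sep)  # wrap-around angular distance
        if min_sep <= sep <= max_sep:
            pairs.append((id_a, id_b))
    return pairs

views = [("v0", 10.0), ("v1", 40.0), ("v2", 200.0), ("v3", 350.0)]
pairs = build_multiview_pairs(views)
```

Three-view samples would follow the same pattern with `combinations(samples, 3)` and a screening rule over all pairwise separations.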

33 pages, 2618 KB  
Article
Strategic Fleet Planning Under Carbon Tax and Fuel Price Uncertainty: An Integrated Stochastic Model for Fleet Deployment and Speed Optimization
by Weilin Sun, Ying Yang and Shuaian Wang
Mathematics 2026, 14(1), 66; https://doi.org/10.3390/math14010066 - 24 Dec 2025
Abstract
This paper presents a two-stage stochastic programming model for the joint optimization of fleet deployment and sailing speed in liner shipping under fuel price volatility and carbon tax uncertainty. The integrated framework addresses strategic fleet planning by determining optimal fleet composition in the first stage, while the second stage optimizes operational decisions, including vessel assignment to routes and sailing speeds on individual voyage legs, after observing stochastic parameter realizations. The model incorporates nonlinear fuel consumption functions that are approximated using piecewise linearization techniques, with the resulting formulation being solved using the Sample Average Approximation (SAA) method. To enhance computational tractability, we employ big-M methods to linearize mixed-integer terms and introduce auxiliary variables to handle nonlinear relationships in both the objective function and constraints. The proposed model provides shipping companies with a comprehensive decision-support tool that effectively captures the complex interdependencies between long-term strategic fleet planning and short-term operational speed optimization. Numerical experiments demonstrate the model’s effectiveness in generating optimal solutions that balance economic objectives with environmental considerations under uncertain market conditions, highlighting its practical value for resilient shipping operations in volatile fuel and carbon pricing environments.
(This article belongs to the Special Issue Mathematics Applied to Manufacturing and Logistics Systems)
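The piecewise-linearization step can be illustrated on a generic cubic speed–fuel curve: replace the nonlinear function by linear interpolation between breakpoints, the standard SOS2-style device for embedding such curves in a MILP. The coefficient, speed range, and segment count below are illustrative assumptions, not the paper's calibration:

```python
def fuel(v, a=0.0005):
    # Cubic speed-fuel curve commonly assumed in liner shipping (tons/day).
    return a * v ** 3

def piecewise_fuel(v, v_min=10.0, v_max=25.0, segments=5, a=0.0005):
    # Linear interpolation between equally spaced breakpoints.
    knots = [v_min + i * (v_max - v_min) / segments for i in range(segments + 1)]
    for lo, hi in zip(knots, knots[1:]):
        if lo <= v <= hi:
            t = (v - lo) / (hi - lo)
            return (1 - t) * fuel(lo, a) + t * fuel(hi, a)
    raise ValueError("speed outside linearized range")

# Worst-case approximation error over the whole speed range.
err = max(abs(piecewise_fuel(v / 10) - fuel(v / 10))
          for v in range(100, 251))
```

The approximation is exact at the breakpoints, and the worst-case gap shrinks as the segment count grows, which is the usual accuracy/size trade-off when choosing the number of breakpoints.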

23 pages, 5357 KB  
Article
Cellulose-Encapsulated Magnetite Nanoparticles for Spiking of Tumor Cells Positive for the Membrane-Bound Hsp70
by Anastasia Dmitrieva, Vyacheslav Ryzhov, Yaroslav Marchenko, Vladimir Deriglazov, Boris Nikolaev, Lyudmila Yakovleva, Oleg Smirnov, Vasiliy Matveev, Natalia Yudintceva, Anastasiia Spitsyna, Elena Varfolomeeva, Stephanie E. Combs, Andrey L. Konevega and Maxim Shevtsov
Int. J. Mol. Sci. 2026, 27(1), 150; https://doi.org/10.3390/ijms27010150 - 23 Dec 2025
Abstract
The development of highly sensitive approaches for detecting tumor cells in biological samples remains a critical challenge in laboratory and clinical oncology. In this study, we investigated the structural and magnetic properties of iron oxide nanoparticles incorporated into cellulose microspheres of two size ranges (~100 and ~700 μm) and evaluated their potential for targeted tumor cell isolation. In the smaller microspheres, magnetite-based magnetic nanoparticles (MNPs) were synthesized in situ via co-precipitation, whereas pre-synthesized MNPs were embedded into the larger microspheres. The geometrical characteristics of the resulting magnetic cellulose microspheres (MSCMNs) were assessed by confocal microscopy. Transmission electron microscopy and X-ray diffraction analyses revealed an average magnetic core size of approximately 17 nm. Magnetic properties of the MNPs within MSCMNs were characterized using a highly sensitive nonlinear magnetic response technique, and their dynamic parameters were derived using a formalism based on the stochastic Hilbert–Landau–Lifshitz equation. To evaluate their applicability in cancer diagnostics and treatment monitoring, the MSCMNs were functionalized with a TKD peptide that selectively binds membrane-associated Hsp70 (mHsp70), yielding TKD@MSCMNs. Magnetic separation enabled the isolation of tumor cells from biological fluids. The specificity of TKD-mediated binding was confirmed using Flamma648-labeled Hsp70 and compared with control alloferone-conjugated microspheres (All@MSCMNs). The ability of TKD@MSCMNs to selectively extract mHsp70-positive tumor cells was validated using C6 glioma cells and mHsp70-negative FetMSCs controls. Following co-incubation, the extraction efficiency for C6 cells was 28 ± 14%, significantly higher than that for FetMSC (7 ± 7%, p < 0.05). These findings highlight the potential of TKD-functionalized magnetic cellulose microspheres as a sensitive platform for tumor cell detection and isolation.
(This article belongs to the Special Issue Recent Research of Nanomaterials in Molecular Science: 2nd Edition)

22 pages, 1117 KB  
Article
An Econometric Analysis of Climatic Effects on Total Factor Productivity Across U.S. Dairy Counties
by Kamil Bora Bolat, Merve Bolat and Boris E. Bravo-Ureta
Animals 2026, 16(1), 30; https://doi.org/10.3390/ani16010030 - 22 Dec 2025
Abstract
High-yielding dairy cows are particularly vulnerable to heat stress, a challenge that climate change exacerbates. To quantify the impact of climatic variables on productivity, we applied a random parameter stochastic production frontier model to the United States Department of Agriculture (USDA) census data from 1978 to 2022 for 179 dairy counties, allowing us to decompose total factor productivity growth (TFPG). Our analysis indicates that technological advancements were the primary driver of TFPG, amounting to 2.52% annually. While these gains are modestly constrained by heat stress, the average impact on the overall TFPG rate was only 0.008% per year. This minimal impact is consistent with the adoption of strategies such as cooling systems and improved management. Even in the most affected counties, the effect remained slight, with the largest reduction reaching 0.08%. This limited impact suggests that the sector’s adoption of technologies and management strategies appears to have mitigated potential productivity losses. This study highlights that future research is needed to quantify the direct impact of specific on-farm adaptation strategies on dairy productivity to inform well-targeted policy recommendations.
(This article belongs to the Special Issue Effects of Heat Stress on Animal Reproduction and Production)

29 pages, 2653 KB  
Article
GreenMind: A Scalable DRL Framework for Predictive Dispatch and Load Balancing in Hybrid Renewable Energy Systems
by Ahmed Alwakeel and Mohammed Alwakeel
Systems 2026, 14(1), 12; https://doi.org/10.3390/systems14010012 - 22 Dec 2025
Abstract
The increasing deployment of hybrid renewable energy systems has introduced significant challenges in optimal energy dispatch and load balancing due to the intrinsic stochasticity and temporal variability of renewable sources, along with the multi-dimensional optimization requirements of simultaneously achieving economic efficiency, grid stability, and environmental sustainability. This paper presents GreenMind, a scalable Deep Reinforcement Learning framework designed to address these challenges through a hierarchical multi-agent architecture coupled with Long Short-Term Memory (LSTM) networks for predictive energy management. The framework employs specialized agents responsible for generation dispatch, storage management, load balancing, and grid interaction, achieving an average decision accuracy of 94.7% through coordinated decision-making enabled by hierarchical communication mechanisms. The integrated LSTM-based forecasting module delivers high predictive accuracy, achieving a 2.7% Mean Absolute Percentage Error for one-hour-ahead forecasting of solar generation, wind power, and load demand, enabling proactive rather than reactive control. A multi-objective reward formulation effectively balances economic, technical, and environmental objectives, resulting in 18.3% operational cost reduction, 23.7% improvement in energy efficiency, and 31.2% enhancement in load balancing accuracy compared to state-of-the-art baseline methods. Extensive validation using synthetic datasets representing diverse hybrid renewable energy configurations over long operational horizons confirms the practical viability of the framework, with 19.6% average cost reduction, 97.7% system availability, and 28.6% carbon emission reduction. The scalability analysis demonstrates near-linear computational growth, with performance degradation remaining below 9% for systems ranging from residential microgrids to utility-scale installations with 2000 controllable units. Overall, the results demonstrate that GreenMind provides a scalable, robust, and practically deployable solution for predictive energy dispatch and load balancing in hybrid renewable energy systems.
(This article belongs to the Special Issue Technological Innovation Systems and Energy Transitions)

21 pages, 4555 KB  
Article
Turbidity Inversion from the ADCP Using a CNN-ResNet-RF Hybrid Model
by Jin Liao, Bowen Li, Xuerong Cui, Anran Yao and Ruixiang Wen
J. Mar. Sci. Eng. 2026, 14(1), 14; https://doi.org/10.3390/jmse14010014 - 21 Dec 2025
Abstract
Addressing the limitations of traditional acoustic turbidity inversion models in complex marine environments—specifically their reliance on empirical parameters and lack of vertical resolution—this study presents a novel CNN-ResNet-RF hybrid model based on the simultaneous ADCP and turbidity observations in the Chengshantou sea area. Unlike conventional approaches, the proposed framework integrates deep spatio-temporal features automatically extracted by a ResNet-enhanced CNN, utilizing a Random Forest (RF) regressor for final prediction, thereby avoiding the limitations of artificial feature engineering. To ensure rigorous evaluation and mitigate stochastic bias, the model was validated using a 5-fold cross-validation strategy with dynamic Z-score normalization. Experimental results demonstrate that the proposed model significantly outperforms benchmark methods (CNN, RF, and CNN-RF), achieving an average R² of 0.782, an MAE of 4.454, and a MAPE of 15.42% on the test sets. This study confirms that the hybrid framework successfully combines the feature extraction power of deep learning with the robustness of ensemble learning, providing a robust and high-precision tool for the vertical structural analysis of ocean turbidity.
(This article belongs to the Section Physical Oceanography)

27 pages, 4287 KB  
Article
Novelty Detection in Underwater Acoustic Environments for Maritime Surveillance Using an Out-of-Distribution Detector for Neural Networks
by Nayeon Kim, Minho Kim, Chanil Lee, Chanjun Chun and Hong Kook Kim
Sensors 2026, 26(1), 37; https://doi.org/10.3390/s26010037 - 20 Dec 2025
Abstract
Reliable detection of unknown signals is essential for ensuring the robustness of underwater acoustic sensing systems, particularly in maritime security and autonomous navigation. However, conventional deep learning models often exhibit overconfidence when encountering unknown signals and are unable to quantify predictive uncertainty due to their deterministic inference process. To address these limitations, this study proposes a novelty detection framework that integrates an out-of-distribution detector for neural networks (ODIN) with Monte Carlo (MC) dropout. ODIN mitigates model overconfidence and enhances the separability between known and unknown signals through softmax probability calibration, while MC dropout introduces stochasticity via multiple forward passes to estimate predictive uncertainty—an element critical for stable sensing in real-world underwater environments. The resulting probabilistic outputs are modeled using Gaussian mixture models fitted to ODIN-calibrated softmax distributions of known classes. The Kullback–Leibler divergence is then employed to quantify deviations of test samples from known class behavior. Experimental evaluations on the DeepShip dataset demonstrate that the proposed method achieves, on average, a 9.5% and 5.39% increase in area under the receiver operating characteristic curve, and a 7.82% and 2.63% reduction in false positive rate at 95% true positive rate, compared to the MC dropout and ODIN baselines, respectively. These results confirm that integrating stochastic inference with ODIN significantly enhances the stability and reliability of novelty detection in underwater acoustic environments.
(This article belongs to the Section Intelligent Sensors)

33 pages, 1981 KB  
Article
DSGTA: A Dynamic and Stochastic Game-Theoretic Allocation Model for Scalable and Efficient Resource Management in Multi-Tenant Cloud Environments
by Said El Kafhali and Oumaima Ghandour
Future Internet 2025, 17(12), 583; https://doi.org/10.3390/fi17120583 - 17 Dec 2025
Abstract
Efficient resource allocation is a central challenge in multi-tenant cloud, fog, and edge environments, where heterogeneous tenants compete for shared resources under dynamic and uncertain workloads. Static or purely heuristic methods often fail to capture strategic tenant behavior, whereas many existing game-theoretic approaches overlook stochastic demand variability, fairness, or scalability. This paper proposes a Dynamic and Stochastic Game-Theoretic Allocation (DSGTA) model that jointly models non-cooperative tenant interactions, repeated strategy adaptation, and random workload fluctuations. The framework combines a Nash-like dynamic equilibrium, achieved via a lightweight best-response update rule, with an approximate Shapley-value-based fairness mechanism that remains tractable for large tenant populations. The model is evaluated on synthetic scenarios, with a trace-driven setup built from the Google 2019 Cluster dataset, and a scalability study is conducted with up to K=500 heterogeneous tenants. Using a consistent set of core metrics (tenant utility, resource cost, fairness index, and SLA satisfaction rate), DSGTA is compared against a static game-theoretic allocation (SGTA) and a dynamic pricing-based allocation (DPBA). The results, supported by statistical significance tests, show that DSGTA achieves higher utility, lower average cost, improved fairness, and competitive utilization across diverse strategy profiles and stochastic conditions, thereby demonstrating its practical relevance for scalable, fair, and economically efficient resource allocation in realistic multi-tenant cloud environments.
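A toy version of the best-response update can illustrate the dynamic-equilibrium idea: each tenant repeatedly picks the demand that maximizes its own utility given everyone else's current demand, under a shared congestion price. The utility shape, congestion price, and all parameters below are invented for illustration; the paper's model is richer:

```python
from math import log

def best_response_allocation(prefs, price_coef=0.08, d_max=5.0,
                             rounds=50, grid=200):
    # Iterated best response: tenant u values resources as
    # prefs[u] * log(1 + d) and pays a congestion price
    # price_coef * total_demand per requested unit.
    demands = [0.0] * len(prefs)
    for _ in range(rounds):
        for u, a in enumerate(prefs):
            others = sum(demands) - demands[u]
            best_d, best_util = 0.0, float("-inf")
            for i in range(grid + 1):  # grid search over own demand
                d = d_max * i / grid
                util = a * log(1.0 + d) - price_coef * (others + d) * d
                if util > best_util:
                    best_d, best_util = d, util
            demands[u] = best_d
    return demands

eq = best_response_allocation([1.0, 2.0, 4.0])
```

Because utilities are concave in a tenant's own demand, the round-robin updates settle quickly, and tenants with stronger preferences end up with larger equilibrium allocations.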

25 pages, 6352 KB  
Article
Integrated Stochastic Framework for Drought Assessment and Forecasting Using Climate Indices, Remote Sensing, and ARIMA Modelling
by Majed Alsubih, Javed Mallick, Hoang Thi Hang, Mansour S. Almatawa and Vijay P. Singh
Water 2025, 17(24), 3582; https://doi.org/10.3390/w17243582 - 17 Dec 2025
Abstract
This study presents an integrated stochastic framework for assessing and forecasting drought dynamics in the western Bhagirathi–Hooghly River Basin, encompassing the districts of Bankura, Birbhum, Burdwan, Medinipur, and Purulia. Employing multiple probabilistic and statistical techniques, including the gamma-based standardized precipitation index (SPI), effective drought index (EDI), rainfall anomaly index (RAI), and the auto-regressive integrated moving average (ARIMA) model, the research quantifies spatio-temporal variability and projects drought risk under non-stationary climatic conditions. The analysis of century-long rainfall records (1905–2023), coupled with LANDSAT-derived vegetation and moisture indices, reveals escalating drought frequency and severity, particularly in Purulia, where recurrent droughts occur at roughly four-year intervals. Stochastic evaluation of rainfall anomalies and SPI distributions indicates significant inter-annual variability and complex temporal dependencies across all districts. ARIMA-based forecasts (2025–2045) suggest persistent negative SPI trends, with Bankura and Purulia exhibiting heightened drought probability and reduced predictability at longer timescales. The integration of remote sensing and time-series modelling enhances the robustness of drought prediction by combining climatic stochasticity with land-surface responses. The findings demonstrate that a hybrid stochastic modelling approach effectively captures uncertainty in drought evolution and supports climate-resilient water resource management. This research contributes a novel, region-specific stochastic framework that advances risk-based drought assessment, aligning with the broader goal of developing adaptive and probabilistic environmental management strategies under changing climatic regimes.
(This article belongs to the Special Issue Drought Evaluation Under Climate Change Condition)
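Of the indices listed, the rainfall anomaly index is the simplest to compute; here is a sketch of van Rooy's formulation, which scales each anomaly by the mean of the most extreme observations on the same side of the long-term mean. The rainfall series is fabricated and only three extremes are averaged for this short demo (the classical definition uses the ten most extreme values):

```python
def rainfall_anomaly_index(series, extremes=3):
    # Van Rooy's RAI: positive anomalies are scaled against the mean of
    # the wettest observations, negative anomalies against the driest,
    # so +/-3 roughly marks the historically extreme years.
    mean = sum(series) / len(series)
    ordered = sorted(series)
    low_mean = sum(ordered[:extremes]) / extremes
    high_mean = sum(ordered[-extremes:]) / extremes
    rai = []
    for p in series:
        if p >= mean:
            rai.append(3.0 * (p - mean) / (high_mean - mean))
        else:
            rai.append(-3.0 * (p - mean) / (low_mean - mean))
    return rai

# Hypothetical annual rainfall totals (mm) for one district.
rain = [1200, 900, 1450, 700, 1100, 1300, 650, 1250, 980, 1400]
rai = rainfall_anomaly_index(rain)
```

Drought years map to strongly negative RAI values and wet years to strongly positive ones, which is what makes the index convenient for the frequency/severity screening described in the abstract.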

25 pages, 2159 KB  
Article
Gray Prediction for Internal Corrosion Rate of Oil and Gas Pipelines Based on Markov Chain and Particle Swarm Optimization
by Yiqiong Gao, Aorui Bi, Tiecheng Yan, Chenxiao Yang and Jing Qi
Symmetry 2025, 17(12), 2144; https://doi.org/10.3390/sym17122144 - 12 Dec 2025
Abstract
Accurate prediction of the internal corrosion rate is crucial for the safety management and maintenance planning of oil and gas pipelines. However, this task is challenging due to the complex, multi-factor nature of corrosion and the scarcity of available inspection data. To address this, we propose a novel hybrid prediction model, GM-Markov-PSO, which integrates a gray prediction model with a Markov chain and a particle swarm optimization algorithm. A key innovation of our approach is the systematic incorporation of symmetry principles—observed in the spatial distribution of corrosion factors, the temporal evolution of the corrosion process, and the statistical fluctuations of monitoring data—to enhance model stability and accuracy. The proposed model effectively overcomes the limitations of individual components, providing superior handling of small-sample, non-linear datasets and demonstrating strong robustness against stochastic disturbances. In a case study, the GM-Markov-PSO model achieved prediction accuracy improvements ranging from 0.93% to 13.34%, with an average improvement of 4.51% over benchmark models, confirming its practical value for informing pipeline maintenance strategies. This work not only presents a reliable predictive tool but also enriches the application of symmetry theory in engineering forecasting by elucidating the inherent order within complex corrosion systems.
(This article belongs to the Section Engineering and Materials)
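The GM-Markov-PSO pipeline described above builds on the classical GM(1,1) gray model, which fits an exponential trend to the accumulated sum of a small data series; the Markov state correction and PSO parameter tuning are layered on top of this base forecast. A minimal sketch of the GM(1,1) component alone (function name and test series are illustrative, not taken from the paper):

```python
import numpy as np

def gm11(x0, n_forecast=1):
    """Fit a GM(1,1) gray model to the series x0 and forecast n_forecast steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values (adjacent means)
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0] # development coefficient a, gray input b

    def x1_hat(k):                              # time-response function of the AGO series
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    k = np.arange(len(x0) + n_forecast)
    x1_pred = x1_hat(k)
    # inverse AGO: first differences recover the original-scale predictions
    return np.concatenate([[x1_pred[0]], np.diff(x1_pred)])

# hypothetical corrosion-rate series (mm/yr); GM(1,1) suits short, quasi-exponential data
print(gm11([0.30, 0.33, 0.37, 0.42, 0.48], n_forecast=2))
```

In the full hybrid model, a Markov chain would then classify the residuals of this forecast into states and correct the next prediction, with PSO tuning the model parameters.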

21 pages, 2695 KB  
Article
A Comparative Analysis of the Effect of Route Set Size in Logit and Weibit-Based Stochastic Traffic Assignment
by Seungkyu Ryu
Sustainability 2025, 17(24), 11144; https://doi.org/10.3390/su172411144 - 12 Dec 2025
Abstract
This study presents a comprehensive comparative analysis of the effect of route set size on stochastic user equilibrium (SUE) traffic assignment, focusing on both logit-based (Multinomial Logit (MNL) and Path Size Logit (PSL)) and weibit-based models (Multinomial Weibit (MNW) and Path Size Weibit (PSW)). The primary objective is to investigate the influence of route set size on traffic patterns and to determine the minimum number of routes required for flow stabilization within the SUE framework. The analysis, conducted on the Winnipeg network using a customized Self-Regulated Averaging (SRA) scheme, yields three key findings. First, all models converged successfully, but the weibit-based models (MNW and PSW) converged faster than the logit-based models. Second, an analysis of perceived total travel time demonstrated that the majority of efficiency gains from route inclusion diminish beyond a threshold of approximately 30 to 40 routes per O-D pair, indicating that this number is sufficient for achieving stable SUE results in both model families. Third, the weibit-based models were found to be more sensitive to route overlap effects, continuing to adjust flow patterns up to approximately 45 routes per O-D pair and exhibiting a greater tendency to allocate flow to less overlapping outer roads. This highlights the superior capability of the weibit formulation, which accounts for heterogeneous perception variance, to achieve a more behaviorally realistic equilibrium compared to the logit models. Full article
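The logit/weibit distinction above comes down to how perception error scales with route cost: MNL probabilities depend on absolute cost differences, while MNW probabilities depend on cost ratios, which is what "heterogeneous perception variance" refers to. A minimal sketch of the two basic route-choice rules (dispersion parameters theta and beta are illustrative values, not from the paper):

```python
import numpy as np

def logit_probs(costs, theta=0.5):
    """MNL: exp(-theta * c); splits depend only on absolute cost differences."""
    u = np.exp(-theta * np.asarray(costs, dtype=float))
    return u / u.sum()

def weibit_probs(costs, beta=2.0):
    """MNW: c^(-beta); splits depend only on relative cost ratios."""
    u = np.asarray(costs, dtype=float) ** (-beta)
    return u / u.sum()

# Two O-D pairs, both with a 5-minute gap between their two routes
short = [5.0, 10.0]     # the gap is 100% of the shorter route
long_ = [125.0, 130.0]  # the gap is only 4% of the shorter route

print(logit_probs(short), logit_probs(long_))    # identical splits for both pairs
print(weibit_probs(short), weibit_probs(long_))  # weibit is nearly 50/50 on the long pair
```

The path-size variants (PSL, PSW) additionally penalize overlapping routes by a path-size correction term, which is omitted here.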

31 pages, 5126 KB  
Article
A Stochastic Multi-Objective Optimization Framework for Integrating Renewable Resources and Gravity Energy Storage in Distribution Networks, Incorporating an Enhanced Weighted Average Algorithm and Demand Response
by Ali S. Alghamdi
Sustainability 2025, 17(24), 11108; https://doi.org/10.3390/su172411108 - 11 Dec 2025
Abstract
This paper introduces a novel stochastic multi-objective optimization framework for the integration of gravity energy storage (GES) with renewable resources—photovoltaic (PV) and wind turbine (WT)—in distribution networks incorporating demand response (DR), addressing key gaps in uncertainty handling and optimization efficiency. The GES plays a pivotal role in this framework by contributing to a techno-economic improvement in distribution networks through enhanced flexibility, more effective utilization of intermittent renewable generation, and economically viable storage capacity. The proposed multi-objective model aims to minimize energy losses, pollution costs, and investment and operational expenses. A new multi-objective enhanced weighted average algorithm integrated with an elite selection mechanism (MO-EWAA) is proposed to determine the optimal sizing and placement of PV, WT, and GES units. To address uncertainties in renewable generation and load demand, the two-point estimation method (2m + 1 PEM) is employed. Simulation results on a standard 33-bus test system demonstrate that the coordinated use of GES with renewables reduces energy losses and emission costs by 14.55% and 0.21%, respectively, compared to scenarios without storage, and that incorporating DR further reduces costs across all categories. Moreover, incorporating the stochastic model increases the costs of energy losses, pollution, and investment and operation by 6.50%, 2.056%, and 3.94%, respectively, due to uncertainty. The MO-EWAA outperforms the conventional MO-WAA and multi-objective particle swarm optimization (MO-PSO) in computational efficiency and solution quality, confirming its effectiveness for stochastic multi-objective optimization in distribution networks. Full article
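The 2m + 1 PEM referenced above propagates input uncertainty by evaluating the model at a small set of weighted points (two per uncertain variable, plus one shared point at the means) instead of full Monte Carlo sampling. A minimal sketch of Hong's 2m + 1 scheme, assuming independent Gaussian inputs (the paper's input distributions and exact weight formulas may differ; note that the central weight becomes negative for more than three variables, which is valid as a signed weight):

```python
import numpy as np

def pem_2m1(g, mu, sigma):
    """Hong's 2m+1 point-estimate method for Gaussian inputs.

    Approximates the mean and standard deviation of g(X) using
    2m+1 model evaluations for m uncertain inputs.
    """
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    m = len(mu)
    xi = np.sqrt(3.0)            # standard locations for Gaussian inputs (kurtosis 3)
    w_side = 1.0 / 6.0           # weight of each +/- concentration point
    w_center = 1.0 - m / 3.0     # lumped weight of the single all-means point
    y0 = g(mu)
    m1 = w_center * y0           # first raw moment, E[g]
    m2 = w_center * y0 ** 2      # second raw moment, E[g^2]
    for i in range(m):
        for s in (+xi, -xi):
            x = mu.copy()
            x[i] = mu[i] + s * sigma[i]
            y = g(x)
            m1 += w_side * y
            m2 += w_side * y ** 2
    return m1, np.sqrt(max(m2 - m1 ** 2, 0.0))

# illustrative linear model, e.g. a linearized loss function of two uncertain injections
mean, std = pem_2m1(lambda x: 2 * x[0] + 3 * x[1], mu=[1.0, 2.0], sigma=[0.1, 0.2])
```

For a linear function the scheme is exact; in the paper's setting g would be a power-flow evaluation and the uncertain inputs PV, wind, and load.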
