Search Results (1,576)

Search Parameters:
Keywords = stochastic optimization methods

24 pages, 741 KB  
Article
Restoration of Distribution Network Power Flow Solutions Considering the Conservatism Impact of the Feasible Region from the Convex Inner Approximation Method
by Zirong Chen, Yonghong Huang, Xingyu Liu, Shijia Zang and Junjun Xu
Energies 2026, 19(3), 609; https://doi.org/10.3390/en19030609 (registering DOI) - 24 Jan 2026
Abstract
Under the “Dual Carbon” strategy, high-penetration integration of distributed generators (DG) into distribution networks has triggered bidirectional power flow and reactive power-voltage violations. This phenomenon undermines the accuracy guarantee of conventional relaxation models (represented by second-order cone programming, SOCP), causing solutions to deviate from the AC power flow feasible region. Notably, ensuring solution feasibility becomes particularly crucial in engineering practice. To address this problem, this paper proposes a collaborative optimization framework integrating convex inner approximation (CIA) theory and a solution recovery algorithm. First, a system relaxation model is constructed using CIA, which strictly enforces ACPF constraints while preserving the computational efficiency of convex optimization. Second, aiming at the conservatism drawback introduced by the CIA method, an admissible region correction strategy based on Stochastic Gradient Descent is designed to narrow the dual gap of the solution. Furthermore, a multi-objective optimization framework is established, incorporating voltage security, operational economy, and renewable energy accommodation rate. Finally, simulations on the IEEE 33/69/118-bus systems demonstrate that the proposed method outperforms the traditional SOCP approach in the 24 h sequential optimization, reducing voltage deviation by 22.6%, power loss by 24.7%, and solution time by 45.4%. Compared with the CIA method, it improves the DG utilization rate by 30.5%. The proposed method exhibits superior generality compared to conventional approaches. Within the upper limit range of network penetration (approximately 60%), it addresses the issue of conservative power output of DG, thereby effectively promoting the utilization of renewable energy. Full article
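The correction step above is built on plain stochastic gradient descent. As a point of reference only, the toy sketch below shows the generic SGD update the abstract names; the quadratic objective, synthetic data, learning rate, and variable names are illustrative assumptions and not the paper's power-flow model.

    import numpy as np

    # Toy SGD loop: minimize a least-squares objective by sampling one data point
    # per update. All data and parameters here are synthetic placeholders.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                       # hypothetical samples
    y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

    w = np.zeros(3)                                     # parameters being corrected
    lr = 0.05                                           # assumed step size
    for epoch in range(50):
        for i in rng.permutation(len(X)):
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]       # gradient of one squared residual
            w -= lr * grad                              # stochastic gradient step
    print("fitted parameters:", w)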
33 pages, 5629 KB  
Article
Forecasting Highly Volatile Time Series: An Approach Based on Encoder–Only Transformers
by Adrian-Valentin Boicea and Mihai-Stelian Munteanu
Information 2026, 17(2), 113; https://doi.org/10.3390/info17020113 - 23 Jan 2026
Abstract
High-precision time-series forecasting allows companies to better allocate resources, improve their competitiveness, and increase revenues. In most real-world cases, however, time series are highly volatile and cannot be forecast reliably with classical statistical methods, which usually yield errors of around 30% or even more. Thus, the goal of this work is to present an approach to obtaining day-ahead forecasts of electricity consumption based on such volatile time series, along with data preprocessing for volatility attenuation. For a thorough understanding, predictions were computed using various methods based on either Artificial Intelligence or purely statistical algorithms. The Transformer-based architectures were optimized through Brute Force, while the N-BEATS architecture was optimized with Brute Force and OPTUNA because of the highly stochastic nature of the time series. The best method was based on an Encoder-only Transformer, which resulted in an approximate prediction error of 11.63%—far below the error of about 30% usually accepted in current practice. In addition, a procedure was developed to determine the maximum theoretical Pearson Correlation Coefficient between forecast and actual power demand. Full article
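For readers who want the evaluation quantities in concrete form, the hedged sketch below computes the Pearson correlation coefficient and a percentage error between a forecast and an actual day-ahead load series; both series are synthetic placeholders rather than the paper's data, and MAPE is used here only as a stand-in error metric.

    import numpy as np

    # Synthetic 24 h load profile and a hypothetical imperfect forecast of it.
    rng = np.random.default_rng(2)
    hours = np.arange(24)
    actual = 500 + 120 * np.sin(2 * np.pi * (hours - 7) / 24) + rng.normal(0, 25, 24)
    forecast = actual + rng.normal(0, 40, 24)

    r = np.corrcoef(actual, forecast)[0, 1]             # Pearson correlation coefficient
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100
    print(f"Pearson r = {r:.3f}, MAPE = {mape:.2f}%")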
12 pages, 273 KB  
Article
The Fréchet–Newton Scheme for SV-HJB: Stability Analysis via Fixed-Point Theory
by Mehran Paziresh, Karim Ivaz and Mariyan Milev
Axioms 2026, 15(2), 83; https://doi.org/10.3390/axioms15020083 (registering DOI) - 23 Jan 2026
Viewed by 28
Abstract
This paper investigates the optimal portfolio control problem under a stochastic volatility model, whose dynamics are governed by a highly nonlinear Hamilton–Jacobi–Bellman equation. We employ a separable value function and introduce a novel exponential approximation technique to simplify the nonlinear terms of the auxiliary function. The simplified HJB equation is solved numerically using the advanced Fréchet–Newton method, which is known for its rapid convergence properties. We rigorously analyze the numerical outcomes, demonstrating that the iterative sequence converges quickly to the trivial fixed point (g*=1) under zero risk and zero excess return conditions. This convergence is mathematically justified through rigorous functional analysis, including the principles of contraction mapping and the Kantorovich theorem, which validate the stability and efficiency of the proposed numerical scheme. The results offer theoretical insight into the behavior of the HJB equation in simplified solution spaces. Full article
(This article belongs to the Special Issue Advances in Financial Mathematics and Stochastic Processes)
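The stability claim rests on the standard contraction-mapping argument; a generic statement of it (with notation chosen here, not the paper's) reads:

\[
\|T(g_1) - T(g_2)\| \le q\,\|g_1 - g_2\|, \qquad 0 \le q < 1,
\]
so that the iteration \(g_{k+1} = T(g_k)\) obeys the a priori bound
\[
\|g_k - g^*\| \le \frac{q^k}{1-q}\,\|g_1 - g_0\|,
\]
which guarantees convergence to the unique fixed point \(g^*\); in the setting above, \(g^* = 1\) under zero risk and zero excess return.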
35 pages, 2106 KB  
Article
A Novel Method That Is Based on Differential Evolution Suitable for Large-Scale Optimization Problems
by Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos
Foundations 2026, 6(1), 2; https://doi.org/10.3390/foundations6010002 - 23 Jan 2026
Viewed by 36
Abstract
Global optimization represents a fundamental challenge in computer science and engineering, as it aims to identify high-quality solutions to problems spanning from moderate to extremely high dimensionality. The Differential Evolution (DE) algorithm is a population-based algorithm like Genetic Algorithms (GAs) and uses similar operators such as crossover, mutation and selection. The proposed method introduces a set of methodological enhancements designed to increase both the robustness and the computational efficiency of the classical DE framework. Specifically, an adaptive termination criterion is incorporated, enabling early stopping based on statistical measures of convergence and population stagnation. Furthermore, a population sampling strategy based on k-means clustering is employed to enhance exploration and improve the redistribution of individuals in high-dimensional search spaces. This mechanism enables structured population renewal and effectively mitigates premature convergence. The enhanced algorithm was evaluated on standard large-scale numerical optimization benchmarks and compared with established global optimization methods. The experimental results indicate substantial improvements in convergence speed, scalability and solution stability. Full article
(This article belongs to the Section Mathematical Sciences)
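As a concrete reference for the baseline that the paper enhances, here is a bare-bones DE/rand/1/bin loop on the Rastrigin benchmark, with a simple stagnation-based early stop standing in for the adaptive termination rule described above. The population size, F, CR, the stopping window, and the omission of the k-means population renewal are all illustrative simplifications, not the paper's settings.

    import numpy as np

    def rastrigin(x):
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(1)
    dim, npop, F, CR = 10, 40, 0.6, 0.9
    pop = rng.uniform(-5.12, 5.12, size=(npop, dim))
    fit = np.array([rastrigin(p) for p in pop])
    best_hist = []

    for gen in range(500):
        for i in range(npop):
            a, b, c = pop[rng.choice([j for j in range(npop) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                    # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])     # binomial crossover
            f_trial = rastrigin(trial)
            if f_trial < fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
        best_hist.append(fit.min())
        if len(best_hist) > 30 and abs(best_hist[-30] - best_hist[-1]) < 1e-8:
            break                                       # stagnation-based early stop
    print("best value:", fit.min(), "after", gen + 1, "generations")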
28 pages, 3944 KB  
Article
A Distributed Energy Storage-Based Planning Method for Enhancing Distribution Network Resilience
by Yitong Chen, Qinlin Shi, Bo Tang, Yu Zhang and Haojing Wang
Energies 2026, 19(2), 574; https://doi.org/10.3390/en19020574 - 22 Jan 2026
Viewed by 26
Abstract
With the widespread adoption of renewable energy, distribution grids face increasing challenges in efficiency, safety, and economic performance due to stochastic generation and fluctuating load demand. Traditional operational models often exhibit limited adaptability, weak coordination, and insufficient holistic optimization, particularly in early-/mid-stage distribution planning where feeder-level network information may be incomplete. Accordingly, this study adopts a planning-oriented formulation and proposes a distributed energy storage system (DESS) planning strategy to enhance distribution network resilience under high uncertainty. First, representative wind and photovoltaic (PV) scenarios are generated using an improved Gaussian Mixture Model (GMM) to characterize source-side uncertainty. Based on a grid-based network partition, a priority index model is developed to quantify regional storage demand using quality- and efficiency-oriented indicators, enabling the screening and ranking of candidate DESS locations. A mixed-integer linear multi-objective optimization model is then formulated to coordinate lifecycle economics, operational benefits, and technical constraints, and a sequential connection strategy is employed to align storage deployment with load-balancing requirements. Furthermore, a node–block–grid multi-dimensional evaluation framework is introduced to assess resilience enhancement from node-, block-, and grid-level perspectives. A case study on a Zhejiang Province distribution grid—selected for its diversified load characteristics and the availability of detailed historical wind/PV and load-category data—validates the proposed method. The planning and optimization process is implemented in Python and solved using the Gurobi optimizer. Results demonstrate that, with only a 4% increase in investment cost, the proposed strategy improves critical-node stability by 27%, enhances block-level matching by 88%, increases quality-demand satisfaction by 68%, and improves grid-wide coordination uniformity by 324%. The proposed framework provides a practical and systematic approach to strengthening resilient operation in distribution networks. Full article
(This article belongs to the Section F1: Electrical Power System)
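The scenario-generation step can be sketched with an off-the-shelf Gaussian Mixture Model: fit the mixture to historical wind/PV output and draw representative scenarios from it. The synthetic "historical" data and the number of components below are illustrative assumptions; the paper's improved GMM is not reproduced here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)
    hist = np.column_stack([
        np.clip(rng.normal(0.4, 0.15, 1000), 0, 1),    # hypothetical wind capacity factor
        np.clip(rng.beta(2, 5, 1000), 0, 1),           # hypothetical PV capacity factor
    ])

    gmm = GaussianMixture(n_components=4, random_state=0).fit(hist)
    scenarios, labels = gmm.sample(n_samples=20)        # 20 candidate scenarios
    weights = np.bincount(labels, minlength=4) / len(labels)
    print("component means (wind, PV):\n", gmm.means_)
    print("empirical scenario weights:", weights)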
28 pages, 978 KB  
Article
Computable Reformulation of Data-Driven Distributionally Robust Chance Constraints: Validated by Solution of Capacitated Lot-Sizing Problems
by Hua Deng and Zhong Wan
Mathematics 2026, 14(2), 331; https://doi.org/10.3390/math14020331 - 19 Jan 2026
Viewed by 60
Abstract
Uncertainty in optimization models often causes awkward properties in their deterministic equivalent formulations (DEFs), even for simple linear models. Chance-constrained programming is a reasonable tool for handling optimization problems with random parameters in objective functions and constraints, but it assumes that the distribution of these random parameters is known, and its DEF is often associated with the complicated computation of multiple integrals, hence impeding its extensive applications. In this paper, for optimization models with chance constraints, the historical data of random model parameters are first exploited to construct an adaptive approximate density function by incorporating piecewise linear interpolation into the well-known histogram method, so as to remove the assumption of a known distribution. Then, in view of this estimation, a novel confidence set only involving finitely many variables is constructed to depict all the potential distributions for the random parameters, and a computable reformulation of data-driven distributionally robust chance constraints is proposed. By virtue of such a confidence set, it is proven that the deterministic equivalent constraints are reformulated as several ordinary constraints in line with the principles of the distributionally robust optimization approach, without the need to solve complicated semi-definite programming problems, compute multiple integrals, or solve additional auxiliary optimization problems, as done in existing works. The proposed method is further validated by the solution of the stochastic multiperiod capacitated lot-sizing problem, and the numerical results demonstrate that: (1) The proposed method can significantly reduce the computational time needed to find a robust optimal production strategy compared with similar ones in the literature; (2) The optimal production strategy provided by our method can maintain moderate conservatism, i.e., it has the ability to achieve a better trade-off between cost-effectiveness and robustness than existing methods. Full article
(This article belongs to the Section D: Statistics and Operational Research)
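The data-driven density step can be illustrated in a few lines: build a histogram of the historical samples and smooth it with piecewise linear interpolation between bin centers. The bin count, the synthetic demand data, and the simple normalization check below are illustrative choices, not the paper's construction of the confidence set.

    import numpy as np

    rng = np.random.default_rng(7)
    demand = rng.gamma(shape=5.0, scale=20.0, size=2000)   # hypothetical demand history

    counts, edges = np.histogram(demand, bins=25, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def density(x):
        # piecewise linear interpolation between bin centers, zero outside the range
        return np.interp(x, centers, counts, left=0.0, right=0.0)

    grid = np.linspace(edges[0], edges[-1], 1000)
    vals = density(grid)
    mass = np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(grid))   # trapezoid rule
    print("approximate total mass of the estimated density:", round(mass, 4))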
28 pages, 2028 KB  
Article
Dynamic Resource Games in the Wood Flooring Industry: A Bayesian Learning and Lyapunov Control Framework
by Yuli Wang and Athanasios V. Vasilakos
Algorithms 2026, 19(1), 78; https://doi.org/10.3390/a19010078 - 16 Jan 2026
Viewed by 158
Abstract
Wood flooring manufacturers face complex challenges in dynamically allocating resources across multi-channel markets, characterized by channel conflicts, demand uncertainty, and long-term cumulative effects of decisions. Traditional static optimization or myopic approaches struggle to address these intertwined factors, particularly when critical market states like brand reputation and customer base cannot be precisely observed. This paper establishes a systematic and theoretically grounded online decision framework to tackle this problem. We first model the problem as a Partially Observable Stochastic Dynamic Game. The core innovation lies in introducing an unobservable market position vector as the central system state, whose evolution is jointly influenced by firm investments, inter-channel competition, and macroeconomic randomness. The model further captures production lead times, physical inventory dynamics, and saturation/cross-channel effects of marketing investments, constructing a high-fidelity dynamic system. To solve this complex model, we propose a hierarchical online learning and control algorithm named L-BAP (Lyapunov-based Bayesian Approximate Planning), which innovatively integrates three core modules. It employs particle filters for Bayesian inference to nonparametrically estimate latent market states online. Simultaneously, the algorithm constructs a Lyapunov optimization framework that transforms long-term discounted reward objectives into tractable single-period optimization problems through virtual debt queues, while ensuring stability of physical systems like inventory. Finally, the algorithm embeds a game-theoretic module to predict and respond to rational strategic reactions from each channel. We provide theoretical performance analysis, rigorously proving the mean-square boundedness of system queues and deriving the performance gap between long-term rewards and optimal policies under complete information. This bound clearly quantifies the trade-off between estimation accuracy (determined by particle count) and optimization parameters. Extensive simulations demonstrate that our L-BAP algorithm significantly outperforms several strong baselines—including myopic learning and decentralized reinforcement learning methods—across multiple dimensions: long-term profitability, inventory risk control, and customer service levels. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
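The latent-state inference module can be illustrated with a minimal bootstrap particle filter for a one-dimensional hidden state standing in for the "unobservable market position". The linear-Gaussian dynamics, noise levels, and particle count are illustrative assumptions, not the paper's model, and the Lyapunov and game-theoretic modules are omitted.

    import numpy as np

    rng = np.random.default_rng(3)
    T, n_particles = 100, 2000
    true_x = np.zeros(T)
    obs = np.zeros(T)
    for t in range(1, T):
        true_x[t] = 0.95 * true_x[t - 1] + rng.normal(0, 0.2)    # latent dynamics
        obs[t] = true_x[t] + rng.normal(0, 0.5)                  # noisy observation

    particles = rng.normal(0, 1, n_particles)
    estimates = []
    for t in range(T):
        particles = 0.95 * particles + rng.normal(0, 0.2, n_particles)   # propagate
        w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)             # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * particles))                          # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)] # resample
    rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
    print("RMSE of filtered estimate:", round(rmse, 3))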
24 pages, 1474 KB  
Article
A Fractional Hybrid Strategy for Reliable and Cost-Optimal Economic Dispatch in Wind-Integrated Power Systems
by Abdul Wadood, Babar Sattar Khan, Bakht Muhammad Khan, Herie Park and Byung O. Kang
Fractal Fract. 2026, 10(1), 64; https://doi.org/10.3390/fractalfract10010064 - 16 Jan 2026
Viewed by 180
Abstract
Economic dispatch in wind-integrated power systems is a critical challenge, yet many recent metaheuristics suffer from premature convergence, heavy parameter tuning, and limited ability to escape local optima in non-smooth valve-point landscapes. This study proposes a new hybrid optimization framework, the Fractional Grasshopper Optimization algorithm (FGOA), which integrates fractional-order calculus into the standard Grasshopper Optimization algorithm (GOA) to enhance its search efficiency. The FGOA method is applied to the economic load dispatch (ELD) problem, a nonlinear and nonconvex task that aims to minimize fuel and wind-generation costs while satisfying practical constraints such as valve-point loading effects (VPLEs), generator operating limits, and the stochastic behavior of renewable energy sources. Owing to the increasing role of wind energy, stochastic wind power is modeled through the incomplete gamma function (IGF). To further improve computational accuracy, FGOA is hybridized with Sequential Quadratic Programming (SQP), where FGOA provides global exploration and SQP performs local refinement. The proposed FGOA-SQP approach is validated on systems with 3, 13, and 40 generating units, including mixed thermal and wind sources. Comparative evaluations against recent metaheuristic algorithms demonstrate that FGOA-SQP achieves more accurate and reliable dispatch outcomes. Specifically, the proposed approach achieves fuel cost reductions ranging from 0.047% to 0.71% for the 3-unit system, 0.31% to 27.25% for the 13-unit system, and 0.69% to 12.55% for the 40-unit system when compared with state-of-the-art methods. Statistical results, particularly minimum fitness values, further confirm the superior performance of the FGOA-SQP framework in addressing the ELD problem under wind power uncertainty. Full article
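To make the incomplete-gamma step concrete: if available wind power W is modelled with a gamma distribution (an illustrative assumption here; the paper derives its stochastic wind terms from the incomplete gamma function), then P(W ≤ w) is exactly the regularized lower incomplete gamma, which is what the over/under-estimation cost terms integrate against. The shape, scale, and scheduled output below are made-up numbers.

    import numpy as np
    from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

    k, theta = 2.0, 15.0          # hypothetical gamma shape/scale for wind power (MW)
    w_sched = 25.0                # hypothetical scheduled wind output (MW)

    p_short = gammainc(k, w_sched / theta)      # probability the wind falls short
    p_surplus = 1.0 - p_short                   # probability of surplus wind
    print(f"P(W <= {w_sched} MW) = {p_short:.3f}, P(W > {w_sched} MW) = {p_surplus:.3f}")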
45 pages, 2207 KB  
Article
Integrating the Contrasting Perspectives Between the Constrained Disorder Principle and Deterministic Optical Nanoscopy: Enhancing Information Extraction from Imaging of Complex Systems
by Yaron Ilan
Bioengineering 2026, 13(1), 103; https://doi.org/10.3390/bioengineering13010103 - 15 Jan 2026
Viewed by 194
Abstract
This paper examines the contrasting yet complementary approaches of the Constrained Disorder Principle (CDP) and Stefan Hell’s deterministic optical nanoscopy for managing noise in complex systems. The CDP suggests that controlled disorder within dynamic boundaries is crucial for optimal system function, particularly in biological contexts, where variability acts as an adaptive mechanism rather than being merely a measurement error. In contrast, Hell’s recent breakthrough in nanoscopy demonstrates that engineered diffraction minima can achieve sub-nanometer resolution without relying on stochastic (random) molecular switching, thereby replacing randomness with deterministic measurement precision. Philosophically, these two approaches are distinct: the CDP views noise as functionally necessary, while Hell’s method seeks to overcome noise limitations. However, both frameworks address complementary aspects of information extraction. The primary goal of microscopy is to provide information about structures, thereby facilitating a better understanding of their functionality. Noise is inherent to biological structures and functions and is part of the information in complex systems. This manuscript achieves integration through three specific contributions: (1) a mathematical framework combining CDP variability bounds with Hell’s precision measurements, validated through Monte Carlo simulations showing 15–30% precision improvements; (2) computational demonstrations with N = 10,000 trials quantifying performance under varying biological noise regimes; and (3) practical protocols for experimental implementation, including calibration procedures and real-time parameter optimization. The CDP provides a theoretical understanding of variability patterns at the system level, while Hell’s technique offers precision tools at the molecular level for validation. Integrating these approaches enables multi-scale analysis, allowing for deterministic measurements to accurately quantify the functional variability that the CDP theory predicts is vital for system health. This synthesis opens up new possibilities for adaptive imaging systems that maintain biologically meaningful noise while achieving unprecedented measurement precision. Specific applications include cancer diagnostics through chromosomal organization variability, neurodegenerative disease monitoring via protein aggregation disorder patterns, and drug screening by assessing cellular response heterogeneity. The framework comprises machine learning integration pathways for automated recognition of variability patterns and adaptive acquisition strategies. Full article
(This article belongs to the Section Biosignal Processing)
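A generic Monte Carlo sketch with N = 10,000 trials (matching the trial count cited above) shows the kind of precision bookkeeping involved: repeated noisy measurements are simulated and the spread of single-shot versus averaged estimates is compared. The Gaussian noise model, the averaging of 25 repeats, and all parameter values are illustrative assumptions, not the paper's simulation protocol.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000
    true_position = 5.0                         # arbitrary ground-truth coordinate (nm)
    for sigma in (0.5, 1.0, 2.0):               # hypothetical noise regimes
        single = true_position + rng.normal(0.0, sigma, N)
        averaged = true_position + rng.normal(0.0, sigma, (N, 25)).mean(axis=1)
        print(f"sigma={sigma:.1f}: single-shot spread={single.std(ddof=1):.3f} nm, "
              f"25-average spread={averaged.std(ddof=1):.3f} nm")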
32 pages, 999 KB  
Article
A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines
by Khalid Nejjar, Khalid Jebari and Siham Rekiek
Algorithms 2026, 19(1), 70; https://doi.org/10.3390/a19010070 - 13 Jan 2026
Viewed by 90
Abstract
Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, due to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs is highly dependent on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is often complex and constrained. Classical solvers, such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD), present inherent limitations: SMO ensures numerical stability but lacks scalability and is sensitive to heuristics, while SGD scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO–PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO-PSO–SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO-PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching the accuracy of random forest while offering superior interpretability through its kernel-based structure. In addition, the proposed method yields a sparser model with only 66 support vectors compared to 71 for standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with a near-zero violation of 1.3×10⁻³. Notably, the optimal hyperparameters identified by OCO-PSO (C = 2, γ ≈ 0.062) differ substantially from those obtained via Bayesian optimization for SVC (C = 10, γ ≈ 0.012), indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student’s t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks. Full article
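For reference, the PSO component named above can be sketched in its bare-bones form on a simple benchmark; the OCO hybridization and the SVM dual constraints on the Lagrange multipliers are not reproduced. The swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(5)
    dim, n_part, iters = 8, 30, 200
    w, c1, c2 = 0.72, 1.49, 1.49                        # inertia and acceleration terms

    pos = rng.uniform(-5, 5, (n_part, dim))
    vel = np.zeros((n_part, dim))
    pbest = pos.copy()
    pbest_val = np.sum(pbest**2, axis=1)                # sphere objective
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.sum(pos**2, axis=1)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    print("best objective found:", pbest_val.min())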
24 pages, 7986 KB  
Article
GVMD-NLM: A Hybrid Denoising Method for GNSS Buoy Elevation Time Series Using Optimized VMD and Non-Local Means Filtering
by Huanghuang Zhang, Shengping Wang, Chao Dong, Guangyu Xu and Xiaobo Cai
Sensors 2026, 26(2), 522; https://doi.org/10.3390/s26020522 - 13 Jan 2026
Viewed by 128
Abstract
GNSS buoys are essential for real-time elevation monitoring in coastal waterways, yet the vertical coordinate time series are frequently contaminated by complex non-stationary noise, and existing denoising methods often rely on empirical parameter settings that compromise reliability. This paper proposes GVMD-NLM, a hybrid denoising framework optimized by an improved Grey Wolf Optimizer (GWO). The method introduces an adaptive convergence factor decay function derived from the Sigmoid function to automatically determine the optimal parameters (K and α) for Variational Mode Decomposition (VMD). Sample Entropy (SE) is then employed to identify low-frequency effective signals, while the remaining high-frequency noise components are processed via Non-Local Means (NLM) filtering to recover residual information while suppressing stochastic disturbances. Experimental results from two datasets at the Dongguan Waterway Wharf demonstrate that GVMD-NLM consistently outperforms SSA, CEEMDAN, VMD, and GWO-VMD. In Dataset One, GVMD-NLM reduced the RMSE by 26.04% (vs. SSA), 17.87% (vs. CEEMDAN), 24.28% (vs. VMD), and 13.47% (vs. GWO-VMD), with corresponding SNR improvements of 11.13%, 7.00%, 10.18%, and 5.05%. In Dataset Two, the method achieved RMSE reductions of 28.87% (vs. SSA), 17.12% (vs. CEEMDAN), 18.45% (vs. VMD), and 10.26% (vs. GWO-VMD), with SNR improvements of 10.48%, 5.52%, 6.02%, and 3.11%, respectively. The denoised signal maintains high fidelity, with correlation coefficients (R) reaching 0.9798. This approach provides an objective and automated solution for GNSS data denoising, offering a more accurate data foundation for waterway hydrodynamics research and water level monitoring. Full article
(This article belongs to the Special Issue Advances in GNSS Signal Processing and Navigation—Second Edition)
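The key modification named above, a Sigmoid-shaped convergence-factor decay for the Grey Wolf Optimizer, can be contrasted with the standard linear decay from 2 to 0 in a few lines. The exact decay function used in the paper is not given here; the curve below is an illustrative assumption of the general idea (slow early decay for exploration, faster late decay for exploitation).

    import numpy as np

    T = 200                                              # total iterations
    t = np.arange(T)
    a_linear = 2.0 * (1.0 - t / (T - 1))                            # standard GWO decay
    a_sigmoid = 2.0 / (1.0 + np.exp(10.0 * (t / (T - 1) - 0.5)))    # sigmoid-style decay

    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        i = int(frac * (T - 1))
        print(f"t/T={frac:.2f}: linear a={a_linear[i]:.3f}, sigmoid a={a_sigmoid[i]:.3f}")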
32 pages, 5962 KB  
Article
Remote Sensing Monitoring of Soil Salinization Based on Bootstrap-Boruta Feature Stability Assessment: A Case Study in Minqin Lake Region
by Yukun Gao, Dan Zhao, Bing Liang, Xiya Yang and Xian Xue
Remote Sens. 2026, 18(2), 245; https://doi.org/10.3390/rs18020245 - 12 Jan 2026
Viewed by 285
Abstract
Data uncertainty and limited model generalization remain critical bottlenecks in large-scale remote sensing of soil salinization. Although the integration of multi-source data has improved predictive potential, conventional deterministic feature selection methods often overlook stochastic noise inherent in environmental variables, leading to models that overfit spurious correlations rather than learning stable physical signals. To address this limitation, this study proposes a Bootstrap–Boruta feature stability assessment framework that shifts feature selection from deterministic “feature importance” ranking to probabilistic “feature stability” evaluation, explicitly accounting for uncertainty induced by data perturbations. The proposed framework is evaluated by integrating stability-driven feature sets with multiple machine learning models, including a Back-Propagation Neural Network (BPNN) optimized using the Red-billed Blue Magpie Optimization (RBMO) algorithm as a representative optimization strategy. Using the Minqin Lake region as a case study, the results demonstrate that the stability-based framework effectively filters unstable noise features, reduces systematic estimation bias, and improves predictive robustness across different modeling approaches. Among the tested models, the RBMO-optimized BPNN achieved the highest accuracy. Under a rigorous bootstrap validation framework, the quality-controlled ensemble model yielded a robust mean R² of 0.657 ± 0.05 and an RMSE of 1.957 ± 0.289 dS/m. The framework further identifies eleven physically robust predictors, confirming the dominant diagnostic role of shortwave infrared (SWIR) indices in arid saline environments. Spatial mapping based on these stable features reveals that 30.7% of the study area is affected by varying degrees of soil salinization. Overall, this study provides a mechanism-driven, promising, within-region framework that enhances the reliability of remote-sensing-based soil salinity inversion under heterogeneous environmental conditions. Full article
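A simplified sketch of "feature stability under bootstrap resampling" follows: count how often each feature is flagged as important across bootstrap replicates. A plain random-forest importance threshold stands in for the full Boruta procedure (shadow features and statistical testing are not reproduced), and the synthetic data, replicate count, and thresholds are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=300, n_features=12, n_informative=4,
                           noise=10.0, random_state=0)
    rng = np.random.default_rng(0)
    B = 30
    hits = np.zeros(X.shape[1])
    for b in range(B):
        idx = rng.integers(0, len(X), len(X))                 # bootstrap sample
        rf = RandomForestRegressor(n_estimators=100, random_state=b).fit(X[idx], y[idx])
        hits += rf.feature_importances_ > 1.0 / X.shape[1]    # flagged as "important"
    stability = hits / B
    print("selection frequency per feature:", np.round(stability, 2))
    print("stable features (freq >= 0.8):", np.where(stability >= 0.8)[0])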
36 pages, 1411 KB  
Article
A Novel Stochastic Framework for Integrated Airline Operation Planning: Addressing Codeshare Agreements, Overbooking, and Station Purity
by Kübra Kızıloğlu and Ümit Sami Sakallı
Aerospace 2026, 13(1), 82; https://doi.org/10.3390/aerospace13010082 - 12 Jan 2026
Viewed by 176
Abstract
This study presents an integrated optimization framework for fleet assignment, flight scheduling, and aircraft routing under uncertainty, addressing a core challenge in airline operational planning. A three-stage stochastic mixed-integer nonlinear programming model is developed that, for the first time, simultaneously incorporates station purity constraints, codeshare agreements, and overbooking decisions. The formulation also includes realistic operational factors such as stochastic passenger demand and non-cruise times (NCT), along with adjustable cruise speeds and flexible departure time windows. To handle the computational complexity of this large-scale stochastic problem, a Sample Average Approximation (SAA) scheme is combined with two tailored metaheuristic algorithms: Simulated Annealing and Cuckoo Search. Extensive experiments on real-world flight data demonstrate that the proposed hybrid approach achieves tight optimality gaps below 0.5%, with narrow confidence intervals across all instances. Moreover, the SA-enhanced method consistently yields superior solutions compared with the CS-based variant. The results highlight the significant operational and economic benefits of jointly optimizing codeshare decisions, station purity restrictions, and overbooking policies. The proposed framework provides a scalable and robust decision-support tool for airlines seeking to enhance resource utilization, reduce operational costs, and improve service quality under uncertainty. Full article
(This article belongs to the Collection Air Transportation—Operations and Management)
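The Sample Average Approximation idea can be shown on a deliberately small stand-in problem: replace the expectation over random demand with an average over N sampled scenarios and optimize the resulting deterministic objective. The newsvendor-style model, demand distribution, cost coefficients, and the simple grid search below are illustrative substitutes for the airline planning model and the SA/CS metaheuristics.

    import numpy as np

    rng = np.random.default_rng(11)
    N = 2000
    demand = rng.lognormal(mean=5.0, sigma=0.3, size=N)    # sampled demand scenarios
    c_over, c_under = 1.0, 4.0                             # unit overage / shortage costs

    def saa_cost(q):
        # sample-average cost replacing the expectation over demand
        return np.mean(c_over * np.maximum(q - demand, 0) +
                       c_under * np.maximum(demand - q, 0))

    grid = np.linspace(demand.min(), demand.max(), 400)
    costs = np.array([saa_cost(q) for q in grid])
    q_star = grid[costs.argmin()]
    print(f"SAA-optimal quantity: {q_star:.1f}, estimated cost: {costs.min():.2f}")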
16 pages, 606 KB  
Article
Identifying Unique Patient Groups in Melasma Using Clustering: A Retrospective Observational Study with Machine Learning Implications for Targeted Therapies
by Michael Paulse and Nomakhosi Mpofana
Cosmetics 2026, 13(1), 13; https://doi.org/10.3390/cosmetics13010013 - 12 Jan 2026
Viewed by 232
Abstract
Melasma management is challenged by heterogeneity in patient presentation, particularly among individuals with darker skin tones. This study applied k-means clustering, an unsupervised machine learning algorithm that partitions data into k distinct clusters based on feature similarity, to identify patient subgroups that could provide a hypothesis-generating framework for future precision strategies. We analysed clinical and demographic data from 150 South African women with melasma using k-means clustering. The optimal number of clusters was determined using the Elbow Method and Bayesian Information Criterion (BIC), with t-distributed stochastic neighbour embedding (t-SNE) visualization for assessment. The k-means algorithm identified seven exploratory patient clusters explaining 52.6% of the data variability (R² = 0.526). Model evaluation metrics indicated an optimal fit (BIC = 951.630), while a Silhouette Score of 0.200 suggested limited separation between clusters, consistent with overlapping clinical phenotypes; the Calinski-Harabasz index of 26.422 nonetheless confirmed relatively well-defined clusters. These clusters were characterized by distinct profiles, including “The Moderately Sun Exposed Young Women”, “Elderly Women with Long-Term Melasma”, and “Younger Women with Severe Melasma”, with key differentiators being age distribution and menopausal status, melasma severity and duration patterns, sun exposure behaviours, and quality-of-life impact profiles that collectively define the unique clinical characteristics of each subgroup. This study demonstrates how machine learning can identify clinically relevant patient subgroups in melasma. Aligning interventions with the characteristics of specific clusters can potentially improve treatment efficacy. Full article
(This article belongs to the Section Cosmetic Dermatology)
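The clustering-and-evaluation workflow described above can be sketched with scikit-learn: run k-means over a range of k and score each solution with the silhouette and Calinski-Harabasz indices. Synthetic data replaces the clinical dataset, and the Elbow, BIC, and t-SNE steps are omitted for brevity.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, calinski_harabasz_score

    # Synthetic stand-in for the 150-patient feature table.
    X, _ = make_blobs(n_samples=150, centers=5, n_features=6, cluster_std=2.5,
                      random_state=0)
    for k in range(2, 9):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        sil = silhouette_score(X, labels)
        ch = calinski_harabasz_score(X, labels)
        print(f"k={k}: silhouette={sil:.3f}, Calinski-Harabasz={ch:.1f}")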
27 pages, 1856 KB  
Article
Waypoint-Sequencing Model Predictive Control for Ship Weather Routing Under Forecast Uncertainty
by Marijana Marjanović, Jasna Prpić-Oršić and Marko Valčić
J. Mar. Sci. Eng. 2026, 14(2), 118; https://doi.org/10.3390/jmse14020118 - 7 Jan 2026
Viewed by 227
Abstract
Ship weather routing optimization has evolved from deterministic great-circle navigation to sophisticated frameworks that account for dynamic environmental conditions and operational constraints. This paper presents a waypoint-sequencing Model Predictive Control (MPC) approach for energy-efficient ship weather routing under forecast uncertainty. The proposed rolling horizon framework integrates neural network-based vessel performance models with ensemble weather forecasts to enable real-time route adaptation while balancing fuel efficiency, navigational safety, and path smoothness objectives. The MPC controller operates with a 6 h control horizon and 24 h prediction horizon, re-optimizing every 6 h using updated meteorological forecasts. A multi-objective cost function prioritizes fuel consumption (60%), safety considerations (30%), and trajectory smoothness (10%), with an exponential discount factor (γ = 0.95) to account for increasing forecast uncertainty. The framework discretises planned routes into waypoints and optimizes heading angles and discrete speed options (12.0, 13.5, and 14.5 knots) at each control step. Validation using 21 transatlantic voyage scenarios with real hindcast weather data demonstrates the method’s capability to propagate uncertainties through ship performance models, yielding probabilistic estimates for attainable speed, fuel consumption, and estimated time of arrival (ETA). The methodology establishes a foundation for more advanced stochastic optimization approaches while offering immediate operational value through its computational tractability and integration with existing ship decision support systems. Full article
(This article belongs to the Special Issue The Control and Navigation of Autonomous Surface Vehicles)
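The weighted, discounted stage cost described above (fuel 0.6, safety 0.3, smoothness 0.1, discount factor γ = 0.95 over a 24 h horizon in 6 h steps) amounts to simple bookkeeping, sketched below. The per-leg cost values are made-up numbers used only to show the arithmetic, not outputs of the vessel performance models.

    weights = {"fuel": 0.6, "safety": 0.3, "smooth": 0.1}
    gamma = 0.95
    # hypothetical normalized costs for the four 6 h legs of one candidate route plan
    legs = [
        {"fuel": 0.42, "safety": 0.10, "smooth": 0.05},
        {"fuel": 0.45, "safety": 0.22, "smooth": 0.02},
        {"fuel": 0.40, "safety": 0.15, "smooth": 0.08},
        {"fuel": 0.47, "safety": 0.30, "smooth": 0.03},
    ]
    total = sum(gamma**k * sum(weights[t] * leg[t] for t in weights)
                for k, leg in enumerate(legs))
    print(f"discounted plan cost: {total:.3f}")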