
Search Results (447)

Search Parameters:
Keywords = exponential smoothing

31 pages, 430 KB  
Article
A Length Preserving Geodesic Curvature Difference Flow in the Hyperbolic Plane
by Qian Liu, Zhizhong Zheng, Fang Yang and Xinxin Pan
Mathematics 2026, 14(7), 1096; https://doi.org/10.3390/math14071096 - 24 Mar 2026
Abstract
In this study, we examine a length-preserving geodesic curvature difference flow for smooth, strictly horocyclically convex simple closed curves in the hyperbolic plane H². Given an initial curve γ1 and a target curve γ2 of the same hyperbolic length, we evolve γ1 by a normal speed given by the difference of the reciprocals of the geodesic curvatures evaluated at points with the same outward unit normal, together with a time-dependent scalar term Γ(t) chosen to preserve the hyperbolic length. Using Leichtweiß's hyperbolic support function and Howe's curvature formula, the flow is reformulated as a quasilinear uniformly parabolic equation on S¹ with a nonlocal term Γ(t). We prove short-time existence, uniqueness, and preservation of strict horocyclic convexity. Linearizing the support function equation at the target support function yields a uniformly elliptic operator whose kernel contains the infinitesimal isometry directions. Under a spectral gap assumption on a normalized slice transverse to the isometry orbit, we prove global existence and exponential convergence for initial data sufficiently close to the target curve. In the last section, this assumption is verified explicitly when the target curve is a geodesic circle. Full article
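Schematically, the evolution law described in the abstract can be written as follows (this is a reading of the abstract, not the paper's exact statement; κ and κ̃ denote the geodesic curvatures of the evolving and target curves at points sharing the same outward unit normal N):

```latex
\partial_t \gamma \;=\; \left( \frac{1}{\tilde{\kappa}} - \frac{1}{\kappa} + \Gamma(t) \right) N ,
```

with the nonlocal term Γ(t) chosen at each instant so that the hyperbolic length L(γ(t)) stays constant, i.e., dL/dt = 0.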
19 pages, 1423 KB  
Article
Shipping Market Sentiment Shocks and BDI Volatility: Evidence from News-Based Indicators
by Lili Qu, Nan Hong and Yutong Han
Systems 2026, 14(3), 317; https://doi.org/10.3390/systems14030317 - 17 Mar 2026
Viewed by 130
Abstract
To address the lag and limited sensitivity of conventional shipping freight indicators, this study develops a news-based sentiment measure for the shipping market and examines its association with BDI dynamics. Using shipping-related news headlines from 2019 to 2025, a RoBERTa classifier fine-tuned on manually annotated data is used to quantify headline sentiment, and a daily Cumulative Sentiment Index (CSI) is constructed using an event-smoothing window with exponential decay. A higher CSI indicates more positive market sentiment, while a lower CSI reflects more negative sentiment. Empirical evidence shows that CSI exhibits pronounced responses around major market events and is closely linked to BDI behavior in event windows. In addition, an EGARCH specification augmented with CSI indicates that sentiment is significantly associated with conditional volatility, suggesting that news-based sentiment contains incremental information relevant to BDI risk dynamics. Overall, the proposed CSI provides a quantitative approach to tracking shipping market sentiment using publicly available news headlines and offers a complementary perspective to transaction-based freight indices. Full article
(This article belongs to the Topic Data Science and Intelligent Management)
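The exponential-decay event smoothing used to build such a cumulative index can be sketched in a few lines (the half-life, window length, and input scaling below are illustrative assumptions, not the paper's parameters):

```python
from math import exp, log

def cumulative_sentiment_index(daily_scores, half_life=5.0, window=30):
    """Exponentially decayed sum of recent daily sentiment scores.

    daily_scores: mean daily headline sentiment in [-1, 1], oldest first.
    Scores older than `window` days are dropped; newer days get larger
    weights, with influence halving every `half_life` days.
    """
    lam = log(2.0) / half_life
    recent = daily_scores[-window:]
    n = len(recent)
    # Weight each day by exp(-lam * age), where age 0 is the latest day.
    return sum(s * exp(-lam * (n - 1 - i)) for i, s in enumerate(recent))
```

A rising index then reflects a run of positive headlines whose influence fades over roughly a month.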

13 pages, 2079 KB  
Article
Trend Prediction of Distribution Network Fault Symptoms Based on XLSTM-Informer Fusion Model
by Zhen Chen, Lin Gao and Yuanming Cheng
Energies 2026, 19(6), 1389; https://doi.org/10.3390/en19061389 - 10 Mar 2026
Viewed by 196
Abstract
Accurate prediction of distribution network operating states is essential for implementing proactive fault warning systems. However, with the high penetration of distributed energy resources, measurement data exhibit strong nonlinearity and multi-scale temporal characteristics, posing significant challenges to existing prediction methods. Current mainstream approaches face a critical dilemma: traditional recurrent neural network (RNN) models (e.g., LSTM) suffer from vanishing gradients and memory bottlenecks in long-sequence forecasting, making it difficult to capture long-term evolutionary trends. In contrast, while standard Transformer models excel at global modeling, their smoothing effect renders them insensitive to subtle transient abrupt changes such as voltage sags, and they incur high computational complexity. To address the dual challenges of “difficulty in capturing transient abrupt changes” and “inability to simultaneously handle long-term trends,” this paper proposes a fault precursor trend prediction model that integrates Extended Long Short-Term Memory (XLSTM) with Informer, termed XLSTM-Informer. To tackle the challenge of extracting transient features, an XLSTM-based local encoder is constructed. By replacing the conventional Sigmoid activation with an improved exponential gating mechanism, the model achieves significantly enhanced sensitivity to instantaneous fluctuations in voltage and current. Additionally, a matrix memory structure is introduced to effectively mitigate information forgetting issues during long-sequence training. To overcome the challenge of modeling long-term dependencies, Informer is employed as the global decoder. Leveraging its ProbSparse self-attention mechanism, the model substantially reduces computational complexity while accurately capturing long-range temporal dependencies.
Experimental results on a real-world distribution network dataset demonstrate that the proposed model achieves substantially lower Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) compared to standalone CNN, LSTM, and other baseline models, as well as conventional LSTM–Informer hybrid approaches. Particularly under extreme operating conditions—such as sustained high summer loads and winter heating peak loads—the model successfully overcomes the trade-off limitations of traditional methods, enabling simultaneous and accurate prediction of both local precursors and global trends. This provides a reliable technical foundation for proactive warning systems in distribution networks. Full article
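The exponential gating idea can be illustrated on a scalar cell in the spirit of xLSTM's stabilized exponential gates (a deliberate simplification: the actual model uses learned projections and a matrix memory, and the variable names here are mine, not the paper's):

```python
import math

def exp_gate_step(c_prev, n_prev, m_prev, i_pre, f_pre, z):
    """One scalar cell update with exponential gates and a log-domain
    stabilizer m, which keeps exp() arguments bounded so that very large
    gate pre-activations cannot overflow."""
    m = max(f_pre + m_prev, i_pre)        # running log-scale stabilizer
    i = math.exp(i_pre - m)               # stabilized exponential input gate
    f = math.exp(f_pre + m_prev - m)      # stabilized forget gate
    c = f * c_prev + i * z                # cell state
    n = f * n_prev + i                    # normalizer state
    h = c / n                             # normalized hidden output
    return c, n, m, h
```

Because the input gate is an exponential rather than a sigmoid, a strongly activated step can dominate the cell state in one update, which is the mechanism behind the claimed sensitivity to sudden fluctuations.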

19 pages, 2348 KB  
Article
IEC 61850-80-5-Based Data Mapping for Communication Modeling of Smart Inverters with IEC 61850 and Modbus Integration
by Taha Selim Ustun
Electronics 2026, 15(5), 1134; https://doi.org/10.3390/electronics15051134 - 9 Mar 2026
Viewed by 154
Abstract
In modern industrial systems, including power system automation, it is important to ensure that new standards are able to communicate with older ones. The IEC 61850 standard has been gaining significant ground in power system automation due to its object-oriented design. In line with its exponential growth, it is imperative to integrate IEC 61850 with other information exchange approaches. Modbus is a very robust communication protocol that uses registers. Since it can be deployed in a cost-effective way, it is widely used in older or lower-cost devices. Unlike IEC 61850, which supports real-time communication, Modbus envisions a trigger-based communication style. All of these fundamental differences make direct communication between these two protocols nontrivial. To address this need, IEC TR 61850-80-5 was developed to give a structured approach for mapping Modbus data into the IEC 61850 data model. This is done using a gateway and includes identifying relevant Modbus registers, converting the data format, and embedding the data into IEC 61850 logical nodes and data attributes. Once completed, this allows legacy devices such as meters or sensors running on Modbus to be seamlessly integrated into modern smart-grid systems using IEC 61850. This paper shows how such integration can be performed between smart inverters and the sensors feeding information to them. First, both protocols are introduced. Then, the IEC 61850 modeling of smart inverters is presented. Finally, data mapping is performed between the Modbus registers of current and voltage sensors and the said smart inverter model. A gateway is developed based on this mapping as well. By bridging two widely used protocols, this work supports interoperability, extends the life of existing assets, and ensures a smooth transition towards digital power systems. Full article
(This article belongs to the Special Issue Recent Advances of Renewable Energy in Power Systems)
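As a rough illustration of the mapping step, the sketch below decodes a hypothetical pair of 16-bit Modbus input registers holding a 32-bit float and places the value into an IEC 61850-style measurement attribute path. The register addresses, word order, and the MMXU attribute path are invented for illustration; a real gateway follows the mapping tables of IEC TR 61850-80-5 and the device's register map:

```python
import struct

def registers_to_float(hi, lo):
    """Combine two 16-bit Modbus registers (big-endian word order assumed)
    into one IEEE-754 32-bit float."""
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

def map_to_iec61850(registers):
    """Map raw register values into a nested dict mimicking an IEC 61850
    logical-node data attribute path (MMXU1.PhV.phsA.cVal.mag.f)."""
    volts = registers_to_float(registers[0x0000], registers[0x0001])
    return {"MMXU1": {"PhV": {"phsA": {"cVal": {"mag": {"f": volts}}}}}}
```

The point of the exercise is that the gateway's job is mostly data-model translation: raw register words on one side, semantically named logical-node attributes on the other.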

18 pages, 594 KB  
Article
Research on Hybrid Energy Storage Optimisation Strategies for Mitigating Wind Power Fluctuations
by Zhenyun Song and Yu Zhang
Algorithms 2026, 19(3), 204; https://doi.org/10.3390/a19030204 - 9 Mar 2026
Viewed by 196
Abstract
Wind power generation exhibits pronounced volatility and intermittency, and direct grid connection may cause instability in grid frequency. To address this issue, this paper proposes an optimisation strategy for hybrid energy storage systems to mitigate wind power fluctuations, integrating lithium-ion batteries with supercapacitors within wind power systems. Firstly, the grid-connected power of wind turbines and the reference power of the energy storage system are determined through dynamic weight adjustment using a weighted filtering algorithm combining adaptive exponential smoothing and recursive averaging algorithms. Secondly, the fish-eagle optimisation algorithm is employed to refine variational modal decomposition parameters. The modal components derived from decomposing the energy storage system’s reference power are converted into Hilbert marginal spectra. Following determination of the cut-off frequency, high-frequency signal components are managed by supercapacitors, while low-frequency components are handled by lithium-ion batteries. Finally, an optimised configuration model for the hybrid energy storage system is constructed to minimise the annual lifecycle target cost. Case study analysis demonstrates that this approach effectively smooths fluctuations in wind power output while fully leveraging the complementary characteristics of both energy storage types, achieving a balance between system economics and overall performance. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
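A toy version of the weighted filtering idea can be written down directly: an adaptive exponential smoother (here Trigg–Leach style, where the smoothing constant tracks the ratio of smoothed error to smoothed absolute error) blended with a recursive average. The blend weight is fixed here for simplicity, whereas the paper adjusts it dynamically, and all parameter values are illustrative:

```python
def weighted_filter(power, w=0.5, phi=0.2):
    """Blend adaptive exponential smoothing with a recursive average.

    power: wind power samples; w: blend weight; phi: smoothing rate for
    the error statistics driving the adaptive alpha. Returns the filtered
    grid-connected power series; the energy storage reference would be
    the residual power[t] - filtered[t].
    """
    s = avg = power[0]
    e = m = 0.0
    out = [s]
    for t, p in enumerate(power[1:], start=2):
        err = p - s
        e = phi * err + (1 - phi) * e            # smoothed error
        m = phi * abs(err) + (1 - phi) * m       # smoothed absolute error
        alpha = abs(e / m) if m > 1e-12 else 0.5  # adaptive alpha in [0, 1]
        s = s + alpha * err                      # exponential smoothing update
        avg = avg + (p - avg) / t                # recursive (running) average
        out.append(w * s + (1 - w) * avg)
    return out
```

When the power signal trends persistently, the error ratio pushes alpha toward 1 and the filter tracks quickly; noisy oscillation pushes alpha toward 0 and the output smooths harder.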

39 pages, 17333 KB  
Article
A Novel HOT-STA-SMC Strategy Integrated with MRAS for High-Performance Sensorless PMSM Drives
by Djaloul Karboua, Said Benkaihoul, Abdelkader Azzeddine Bengharbi and Francisco Javier Ruiz-Rodríguez
Electronics 2026, 15(5), 1105; https://doi.org/10.3390/electronics15051105 - 6 Mar 2026
Viewed by 279
Abstract
This paper proposes an advanced sensorless control strategy for Permanent Magnet Synchronous Motors (PMSMs) aimed at enhancing dynamic performance, robustness, and reliability while eliminating the need for mechanical sensors. The core contribution lies in a novel hybrid speed regulation framework that combines a terminal sliding mode control scheme with a high-order super-twisting algorithm (HOT-STA-SMC), ensuring finite-time convergence, effective chattering suppression, and strong disturbance rejection under varying operating conditions. For the inner current loop, an Exponential Reaching Law Sliding Mode Controller (ERL-SMC) is implemented to guarantee fast current response and precise current tracking, even in the presence of parameter uncertainties. Furthermore, the conventional Model Reference Adaptive System (MRAS) observer is embedded within the proposed control architecture, resulting in more accurate speed estimation and enhanced stability during load fluctuations. The complete control system is rigorously modeled and tested in MATLAB R2024b/Simulink, capturing the full interaction between machine dynamics, control loops, and observer mechanisms. The simulation results verify that the proposed design achieves superior torque smoothness, minimal current ripples, and fast transient response compared to conventional sensorless methods. By integrating high-order sliding modes with advanced adaptive observation, this work offers a robust and cost-effective solution for high-performance PMSM drives, suitable for demanding applications such as electric vehicles, renewable energy conversion, and industrial motion control. Full article
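For the current loop, an exponential reaching law takes the standard form (generic gains; the paper's sliding variable definition and gain tuning are not reproduced here):

```latex
\dot{s} \;=\; -\,\varepsilon \,\operatorname{sgn}(s) \;-\; k\,s , \qquad \varepsilon > 0,\; k > 0 ,
```

where the proportional term −k s drives an exponential decay of the sliding variable far from the surface, and the switching term −ε sgn(s) enforces finite-time arrival at s = 0 while allowing the switching gain, and hence chattering, to be kept small near the surface.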

26 pages, 4960 KB  
Article
TGR-T: Truncated-Gaussian-Weighted Reliability for Adaptive Dynamic Thresholding in Weakly Supervised Indoor 3D Point Cloud Segmentation
by Ziwei Luo, Xinyue Liu, Jun Jiang, Hanyu Qi, Chen Wang, Zhong Xie and Tao Zeng
ISPRS Int. J. Geo-Inf. 2026, 15(3), 108; https://doi.org/10.3390/ijgi15030108 - 4 Mar 2026
Viewed by 203
Abstract
Indoor 3D point cloud semantic segmentation is a fundamental task for fine-grained scene understanding and intelligent perception. Due to the prohibitive cost of dense point-wise annotations, weakly supervised learning has emerged as a promising alternative for indoor point cloud segmentation. However, existing weakly supervised methods commonly rely on fixed confidence thresholds for pseudo-label selection, which exhibit limited generalization caused by threshold sensitivity, underutilization of informative low-confidence regions, and progressive noise accumulation during self-training. To address these issues, we propose TGR-T, a weakly supervised framework for indoor 3D point cloud semantic segmentation that incorporates truncated-Gaussian-weighted reliability with adaptive dynamic thresholding. Specifically, a reliability-adaptive dynamic thresholding strategy is introduced to guide pseudo-label selection based on the evolving confidence statistics of unlabeled mini-batches, with exponential moving average smoothing employed to produce stable global estimates and robust separation of reliable and ambiguous regions. To further exploit uncertain regions, a learnable truncated Gaussian weighting function is designed to explicitly model prediction uncertainty within the ambiguous set, providing soft supervision by assigning adaptive weights to low-confidence predictions during optimization. 
Extensive experiments on standard indoor 3D scene benchmarks demonstrate that the proposed framework significantly enhances the exploitation of unlabeled data under extremely limited supervision: TGR-T achieves competitive or superior segmentation performance under extremely sparse supervision and can even outperform several fully supervised baselines trained with dense annotations while using only 1% of labeled points, thereby substantially narrowing the performance gap between weakly supervised and fully supervised 3D semantic segmentation methods. Full article
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
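The dynamic-threshold idea can be caricatured in a few lines: track an exponential moving average of batch confidence statistics as the global threshold, and weight ambiguous predictions with a Gaussian falloff below the threshold. The momentum, width, and truncation rule below are guesses for illustration, not the paper's learned parameters:

```python
import math

class DynamicThreshold:
    """EMA-smoothed confidence threshold with truncated-Gaussian
    soft weights for low-confidence pseudo-labels."""

    def __init__(self, init_tau=0.8, momentum=0.99, sigma=0.1):
        self.tau = init_tau
        self.momentum = momentum
        self.sigma = sigma

    def update(self, batch_confidences):
        # EMA of the batch mean keeps the global threshold stable
        # across noisy unlabeled mini-batches.
        batch_mean = sum(batch_confidences) / len(batch_confidences)
        self.tau = self.momentum * self.tau + (1 - self.momentum) * batch_mean
        return self.tau

    def weight(self, confidence):
        # Reliable region: full weight. Ambiguous region: Gaussian
        # falloff in the distance below the threshold.
        if confidence >= self.tau:
            return 1.0
        return math.exp(-((self.tau - confidence) ** 2) / (2 * self.sigma ** 2))
```

Low-confidence points thus still contribute gradient signal, just with adaptively down-scaled weight, instead of being discarded outright by a fixed cutoff.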

23 pages, 13416 KB  
Article
An Adaptive Ensemble Model Based on Deep Reinforcement Learning for the Prediction of Step-like Landslide Displacement
by Tengfei Gu, Lei Huang, Shunyao Tian, Zhichao Zhang, Huan Zhang and Yanke Zhang
Remote Sens. 2026, 18(5), 761; https://doi.org/10.3390/rs18050761 - 3 Mar 2026
Viewed by 268
Abstract
Accurate prediction of landslide displacement is crucial for hazard prevention. However, recurrent neural network (RNN) models have limitations in simultaneously capturing lag time and feature importance, and their black-box nature limits their interpretability. Moreover, the performance of single models varies across different deformation stages, especially during acceleration. To address these challenges, we propose an interpretable deep reinforcement learning-based adaptive ensemble (DRL-AE) framework. The method employs Seasonal and Trend decomposition using Loess to separate cumulative displacement into trend and periodic components. Trend and periodic sequences are predicted using double exponential smoothing and three RNN variants, respectively. An improved Convolutional Block Attention Module (ICBAM) enhances periodic feature extraction and provides temporal–spatial interpretability. The Deep Deterministic Policy Gradient algorithm adaptively integrates multi-model predictions in response to evolving environmental conditions. To validate the DRL-AE, a case study is conducted on the Baijiabao landslide in Zigui County, China. The results indicate that the DRL-AE substantially enhances prediction accuracy. For periodic displacement, it reduces MAE by 10.02% and RMSE by 6.65%, and increases R2 by 4.27% compared with the ICBAM-GRU model. The results also confirm the effectiveness of ICBAM in feature extraction, and the generated heatmaps provide intuitive interpretability of the relevant triggering factors. Full article
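The trend-component step (double exponential smoothing) reduces to Holt's linear method; a minimal version follows, with illustrative smoothing constants rather than fitted ones:

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (double exponential) smoothing.

    Maintains a level l and a trend b recursion over the series and
    returns the horizon-step-ahead forecast from the final state.
    """
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)   # level update
        trend = beta * (level - prev_level) + (1 - beta) * trend  # trend update
    return level + horizon * trend
```

On a perfectly linear trend the recursion locks onto the slope, which is why the method suits the slowly varying trend component extracted by the STL decomposition.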

14 pages, 887 KB  
Article
On Maximum Entropy Density Estimation with Relaxed Moment Constraints
by Thi Lich Nghiem and Pierre Maréchal
Entropy 2026, 28(3), 282; https://doi.org/10.3390/e28030282 - 2 Mar 2026
Viewed by 212
Abstract
We study Maximum Entropy density estimation on continuous domains under finitely many moment constraints, formulated as the minimization of the Kullback–Leibler divergence with respect to a reference measure. To model uncertainty in empirical moments, constraints are relaxed through convex penalty functions, leading to an infinite-dimensional convex optimization problem over probability densities. The main contribution of this work is a rigorous convex-analytic treatment of such relaxed Maximum Entropy problems in a functional setting, without discretization or smoothness assumptions on the density. Using convex integral functionals and an extension of Fenchel duality, we show that, under mild and explicit qualification conditions, the infinite-dimensional primal problem admits a dual formulation involving only finitely many variables. This reduction can be interpreted as a continuous-domain instance of partially finite convex programming. The resulting dual problem yields explicit primal–dual optimality conditions and characterizes Maximum Entropy solutions in exponential form. The proposed framework unifies exact and relaxed moment constraints, including box and quadratic relaxations, within a single variational formulation, and provides a mathematically sound foundation for relaxed Maximum Entropy methods previously studied mainly in finite or discrete settings. A brief numerical illustration demonstrates the practical tractability of the approach. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
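The exponential-form characterization obtained from the duality can be summarized schematically as follows (simplified notation, not the paper's exact statement: φ collects the m moment functions, φ̄ the empirical moments, μ the reference measure, and ψ* the convex conjugate of the relaxation penalty; ψ* ≡ 0 recovers exact moment constraints):

```latex
p_{\lambda^\star}(x) \;\propto\; e^{\langle \lambda^\star,\, \varphi(x) \rangle}, \qquad
\lambda^\star \in \arg\max_{\lambda \in \mathbb{R}^m}
\left[ \langle \lambda, \bar{\varphi} \rangle
\;-\; \log \int e^{\langle \lambda,\, \varphi(y) \rangle}\, d\mu(y)
\;-\; \psi^{*}(\lambda) \right].
```

The reduction to an m-dimensional concave maximization is what makes the infinite-dimensional primal problem computationally tractable.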

25 pages, 2735 KB  
Article
Beyond Traditional Forecasting Methods: Evaluating LSTM Performance on Diverse Time Series
by Zoltán Baráth, Péter Veres and Ágota Bányai
Mathematics 2026, 14(5), 838; https://doi.org/10.3390/math14050838 - 1 Mar 2026
Viewed by 374
Abstract
Time series forecasting performance is strongly influenced by the structural properties of the underlying data, yet learning-based models are often applied without sufficient validation of this dependency. This study evaluates a uniformly configured Long Short-Term Memory (LSTM) model on five real-world weekly time series with different levels of periodicity, noise, and volatility. Forecasting is performed in a single-step setting using a fixed sliding window of 12 weeks under a consistent training, validation, and testing framework. Model performance is assessed using mean squared error (MSE) and the coefficient of determination R2. The results show that for well-structured series, both the LSTM model and Holt’s exponential smoothing achieve very low MSE values with R2 scores close to one, indicating excellent predictive accuracy. For other items, performance varies across methods, with either the LSTM or Holt model providing the best results depending on the data structure. These findings confirm that high forecasting accuracy can be achieved with both advanced and classical methods, and that data characteristics play a more decisive role than model complexity. Full article
(This article belongs to the Special Issue Soft Computing in Computational Intelligence and Machine Learning)
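The evaluation protocol described above (fixed 12-week sliding window, single-step forecasts, MSE and R²) is model-agnostic and can be sketched generically; `forecast_fn` is a placeholder standing in for either the LSTM or the Holt model:

```python
def evaluate_single_step(series, forecast_fn, window=12):
    """Walk-forward single-step evaluation over a weekly series.

    forecast_fn takes the last `window` observations and returns one
    prediction for the next point. Returns (mse, r2).
    """
    preds, actuals = [], []
    for t in range(window, len(series)):
        preds.append(forecast_fn(series[t - window:t]))
        actuals.append(series[t])
    mse = sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(actuals)
    mean_a = sum(actuals) / len(actuals)
    ss_tot = sum((a - mean_a) ** 2 for a in actuals)
    r2 = 1.0 - mse * len(actuals) / ss_tot if ss_tot > 0 else float("nan")
    return mse, r2
```

Keeping the window and the walk-forward split identical across methods is what makes the LSTM-versus-Holt comparison fair.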

24 pages, 3926 KB  
Article
Augmentation of Small Ultrasound Databases: A Practical Approach
by Onsasipat Kasamrach, Thiansiri Luangwilai and Stanislav Makhanov
Mathematics 2026, 14(4), 646; https://doi.org/10.3390/math14040646 - 12 Feb 2026
Viewed by 268
Abstract
Generative Adversarial Networks (GANs) have emerged as a promising tool for augmenting medical image datasets used by AI solutions. However, GANs trained on small datasets (300–500 images) frequently encounter mode collapse, overfitting, and instability, which hinder their practical application. Many GAN-generated images look unrealistic. The Enhanced Deep Convolutional GAN (EDCGAN) is introduced to generate high-quality synthetic breast ultrasound (BUS) images. The model includes an experimental design for the Discriminator and Generator. The main components are spectral normalization (SN), the Squeeze-and-Excitation (SE) block, and the Scaled Exponential Linear Unit (SELU). One of the basic versions of DCGAN is considered for the proposed modifications. The stopping criteria are based on the convergence of the smoothed loss function and the constraints imposed on the Discriminator. The contribution is a combination of the above modifications and postprocessing based on visual evaluation by radiologists and selected image processing metrics. The Inception Score (IS), the Structural Similarity Index (SSIM), and the Mean Squared Error (MSE) are consistent with the results obtained in preceding works. The efficiency of augmenting the ultrasound data has been verified on a deep learning classification task based on ResNet-18. In tests, ResNet-18 trained on the EDCGAN-augmented data outperforms the same model trained on non-augmented data by 5%, and outperforms training on data augmented by the previous DCGAN by 3%. These numbers are substantial since this variant of ResNet has been pre-trained on the 1000 categories of ImageNet-1K, comprising 1.28 million images. Additionally, the model wins the “Guess-the-real-image” game, competing with seven preceding GANs. Full article
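For reference, the Scaled Exponential Linear Unit mentioned above is defined by two fixed constants (the standard values from the self-normalizing networks literature); the rest of the EDCGAN architecture is not reproduced here:

```python
import math

# Standard SELU constants (chosen so that activations self-normalize
# toward zero mean and unit variance).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """SELU: scale * (x if x > 0 else alpha * (e^x - 1))."""
    return SCALE * (x if x > 0 else ALPHA * (math.exp(x) - 1.0))
```

Unlike ReLU, negative inputs saturate smoothly toward −scale·alpha, which helps keep layer statistics stable during GAN training.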

19 pages, 543 KB  
Article
Sectoral Forecasting of Natural Gas Consumption in Colombia: A Structural and Seasonal Analysis Using Holt–Winters Models
by Alexander D. Pulido-Rojano, Neyfe Sablón-Cossío, Arnaldo Verdeza-Villalobos, Juan Molina-Tapia, Ricardo Marin-Algarin, Aaron Jiménez-Rodríguez and Jesús Tejera-Gutiérrez
Energies 2026, 19(4), 915; https://doi.org/10.3390/en19040915 - 10 Feb 2026
Viewed by 301
Abstract
This study examines the sectoral dynamics of natural gas consumption in Colombia by applying additive and multiplicative Holt–Winters exponential smoothing models. The analysis covers the main demand segments (Thermal Generation, Industrial, Residential, Refinery, Compressed Natural Gas for Vehicles (GNVC), Commercial, Petrochemical, and SNT Compressor Stations) using official monthly data from the Colombian Mercantile Exchange for the period April 2020 to July 2025. Model configurations were optimized by minimizing the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE) to identify the most appropriate structure for each sector. The results confirm that natural gas consumption in Colombia does not follow a uniform seasonal pattern. Instead, each segment exhibits distinct dynamics shaped by operational conditions, production schedules, mobility-related behavior, or logistical planning. The Thermal Generation sector was best represented by the multiplicative model, reflecting proportional variability associated with electricity dispatch and system-level operational changes. In contrast, the Industrial, Residential, GNVC, Commercial, and SNT Compressor Stations sectors showed superior performance under the additive model, consistent with relatively stable or constant-magnitude seasonal effects. The Petrochemical and Refinery sectors displayed short-term cyclical behavior, with model accuracy depending on the performance metric prioritized. These findings demonstrate that energy forecasting must incorporate the structural heterogeneity of demand systems rather than treating natural gas consumption as a homogeneous aggregate. Practically, the results provide insights for improving supply planning, contract allocation, and regulatory segmentation. The study also offers a replicable methodological basis for forecasting in emerging economies characterized by diverse consumption profiles. Full article
(This article belongs to the Section C: Energy Economics and Policy)
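A compact additive Holt–Winters recursion, of the kind used for the sectors with constant-magnitude seasonality (the smoothing constants and crude initialization below are illustrative; the study optimized them by minimizing MAE, MAPE, and MSE):

```python
def holt_winters_additive(series, period=12, alpha=0.3, beta=0.1, gamma=0.2):
    """Additive Holt-Winters: level + trend + additive seasonal terms.

    series: monthly observations, len >= 2 * period. Returns one-step-ahead
    in-sample forecasts aligned with series[period:].
    """
    # Crude initialization from the first two seasons.
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    seasonal = [series[i] - level for i in range(period)]
    forecasts = []
    for t in range(period, len(series)):
        forecasts.append(level + trend + seasonal[t % period])
        x = series[t]
        prev_level = level
        level = alpha * (x - seasonal[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % period] = gamma * (x - level) + (1 - gamma) * seasonal[t % period]
    return forecasts
```

The multiplicative variant used for the Thermal Generation sector differs only in replacing the additive seasonal terms with seasonal factors that multiply the level, so seasonal swings scale with the magnitude of demand.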

20 pages, 3203 KB  
Article
A Data-Driven Multi-Scale Source–Grid–Load–Storage Collaborative Dispatching Method for Distribution Systems
by Wenbiao Xia, Xin Chen, Fuguo Jin, Lu Li, Meizhu Lu, Zhuo Yang and Ning Yan
Processes 2026, 14(4), 603; https://doi.org/10.3390/pr14040603 - 9 Feb 2026
Viewed by 261
Abstract
Currently, distribution system scheduling faces significant uncertainty and dynamic complexity due to the large-scale integration of diverse heterogeneous entities, while conventional approaches suffer from limited capability in modeling user behavior responses and ensuring dispatch accuracy, making them inadequate for source–grid–load–storage collaborative optimization. To address this, this paper proposes a data-driven multi-scale coordinated scheduling method for distribution systems. First, distributed generation outputs, load responses, and energy storage states are extracted and modeled using an improved exponential smoothing technique. A hierarchical, time-divided optimization framework is then developed by combining machine learning and probabilistic modeling with spatial correlation analysis to enhance renewable generation and load forecasting accuracy. Finally, a two-stage robust optimization model considering scenario uncertainties is established through typical scenario generation and uncertainty set constraints to achieve dispatch strategies that balance economic efficiency, low-carbon objectives, and supply reliability under fluctuating renewable outputs and dynamic load variations. Simulation results demonstrate that the proposed method reduces total operating cost by 16.4%, decreases carbon emissions by 10.7%, and lowers electricity purchase fluctuation by 8.75%, thereby significantly enhancing system flexibility and adaptability to renewable energy uncertainties and providing a novel pathway for the development of active and intelligent distribution systems. Full article

16 pages, 1258 KB  
Article
Coarse-Grained Molecular Dynamics Simulations for Predicting Rheological Behavior of Casein Micelle Dispersions
by Raghvendra Pratap Singh, Sophie Barbe, Paulo Peixoto, Manon Hiolle, Frédéric Affouard and Guillaume Delaplace
Beverages 2026, 12(2), 24; https://doi.org/10.3390/beverages12020024 - 6 Feb 2026
Viewed by 515
Abstract
Dispersing casein powders in water is a common operation in the milk industry. However, in silico prediction of the apparent viscosity of these colloidal dispersions is not an easy task, especially when the micellar casein suspensions are highly concentrated, as in hyper-protein milk beverages, a segment experiencing exponential market growth. In this work, Coarse-Grained (CG) models using Lennard-Jones potentials to describe interactions were built to simulate the rheological properties of colloidal micellar casein dispersions (native and demineralized). In the first approach, a polydisperse explicit CG model was developed, in which the representation chain was composed of four large smooth spheres of different sizes mimicking the real size distribution of casein colloids. The CG simulation results were validated against experimental rheological data for native colloidal casein dispersions, using both in-house experimental results and data from the literature, covering a wide range of casein concentrations ([10 g/L–200 g/L], [8–20%] corresponding to casein concentration, colloid volume fraction and solid/liquid volume fraction, respectively). In the second approach, a simplified monodisperse CG model was developed; this model included only one type of soft sphere and was found to preserve the accuracy of the rheological prediction. Finally, a monodisperse CG model was set up to predict the behavior of demineralized micellar casein dispersions, for which demineralization is observed to decrease the average micelle size. For all models, the agreement between predicted and experimental rheological behavior is fully satisfactory, indicating that the proposed CG models for casein-based micellar dispersions are physically well founded and that the simplified representation chain, based on observed micelle sizes, is sound.
Full article
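The abstract states that the CG interactions are modeled with Lennard-Jones potentials. For reference, the standard 12-6 Lennard-Jones pair potential can be written as follows (the default well depth ε and length scale σ are illustrative placeholders, not the paper's fitted parameters):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential: U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).

    epsilon sets the depth of the attractive well; sigma is the distance at
    which the potential crosses zero. The minimum lies at r = 2^(1/6) * sigma
    with U = -epsilon.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

In a polydisperse CG model, ε and σ would be assigned per pair of bead types to reflect the different sphere sizes.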
17 pages, 1253 KB  
Article
ER-ACO: A Real-Time Ant Colony Optimization Framework for Emergency Medical Services Routing and Hospital Resource Scheduling
by Ahmed Métwalli, Fares Fathy, Esraa Khatab and Omar Shalash
Algorithms 2026, 19(2), 102; https://doi.org/10.3390/a19020102 - 28 Jan 2026
Abstract
Ant Colony Optimization (ACO) is a widely adopted metaheuristic for solving complex combinatorial problems; however, performance is often deteriorated by premature convergence and limited exploration in later iterations. Eclipse Randomness–Ant Colony Optimization (ER-ACO) is introduced as a lightweight ACO variant in which an [...] Read more.
Ant Colony Optimization (ACO) is a widely adopted metaheuristic for solving complex combinatorial problems; however, its performance often deteriorates due to premature convergence and limited exploration in later iterations. Eclipse Randomness–Ant Colony Optimization (ER-ACO) is introduced as a lightweight ACO variant in which an exponentially fading randomness factor is integrated into the state-transition mechanism. This enables strong early-stage exploration and induces a smooth transition to exploitation, improving convergence behavior and solution quality, while maintaining low computational overhead as exploration and exploitation are dynamically balanced. ER-ACO is positioned within real-time healthcare logistics, with a focus on Emergency Medical Services (EMS) routing and hospital resource scheduling, where rapid and adaptive decision-making is critical for patient outcomes. These systems face dynamic constraints such as fluctuating traffic conditions, urgent patient arrivals, and limited medical resources. Experimental evaluation on benchmark instances indicates that solution cost is reduced by up to 14.3% relative to the slow-fade configuration (γ=1) in the 20-city TSP sweep, with faster stabilization under the same iteration budget. Additional comparisons against standard ACO on TSP/QAP benchmarks indicate consistent improvements, with unchanged asymptotic complexity and negligible measured overhead at the tested scales. TSP/QAP benchmarks are used as controlled proxies to isolate algorithmic behavior; EMS deployment is treated as a motivating application pending validation on EMS-specific datasets and formulations. These results highlight ER-ACO’s potential as a lightweight optimization engine for smart healthcare systems, enabling real-time deployment on edge devices for ambulance dispatch, patient transfer, and operating room scheduling. Full article
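The paper's exact transition rule is not given in the abstract. A hypothetical sketch of an exponentially fading randomness factor layered on a standard pheromone/heuristic ACO transition might look like the following (all names, the decay form exp(-γ·t), and the parameter defaults are assumptions for illustration):

```python
import math
import random

def choose_next_city(current, unvisited, pheromone, heuristic,
                     iteration, gamma=0.1, alpha=1.0, beta=2.0, rng=random):
    """Illustrative ER-ACO-style transition (not the paper's exact rule).

    With probability p = exp(-gamma * iteration), pick a uniformly random
    city (exploration); this probability fades exponentially toward zero,
    so later iterations fall back to the standard roulette-wheel selection
    weighted by pheromone^alpha * heuristic^beta (exploitation).
    """
    p_random = math.exp(-gamma * iteration)
    if rng.random() < p_random:
        return rng.choice(list(unvisited))
    # Standard ACO roulette-wheel selection over the remaining cities.
    weights = [(j, (pheromone[current][j] ** alpha) * (heuristic[current][j] ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]  # numerical safety fallback
```

At iteration 0 the choice is fully random; by late iterations p_random is negligible and the rule reduces to plain ACO, which matches the described exploration-to-exploitation schedule.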
