Search Results (47)

Search Parameters:
Keywords = single exponential smoothing

19 pages, 397 KB  
Article
On a Class of Nonlocal Integro-Delay Problems with Generalized Tempered Fractional Operators
by Marwa Ennaceur, Mohammed S. Abdo, Osman Osman, Amel Touati, Amer Alsulami, Neama Haron and Khaled Aldwoah
Fractal Fract. 2026, 10(4), 272; https://doi.org/10.3390/fractalfract10040272 - 21 Apr 2026
Viewed by 388
Abstract
This paper proposes and studies a new class of nonlinear nonlocal problems driven by a tempered Caputo-type fractional derivative with respect to an arbitrary smooth kernel. The novelty lies in treating a single nonlocal integro-delay setting that simultaneously couples an arbitrary kernel, exponential tempering, a delayed state, a lower-order distributed fractional memory term, and multipoint nonlocal initial data, rather than introducing a new fractional operator. The resulting problem can be viewed as a rigorous well-posedness prototype for hereditary systems with delayed feedback, tempered memory, and nonlocal initialization. First, an equivalent Volterra integral equation is derived. Then, the existence and uniqueness of solutions are obtained by the Banach contraction principle in a suitable Banach space of continuous functions. Next, a Picard successive approximation procedure is introduced and shown to converge uniformly to the unique solution, together with an explicit a priori error estimate. Moreover, a continuous dependence result is proved with respect to perturbations in the initial constants, the multipoint coefficients, and the nonlinear term. Finally, the main results are illustrated with two examples enhanced by graphs of explicit Picard approximations and convergence tables. Full article
(This article belongs to the Section General Mathematics, Analysis)
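The Picard successive-approximation scheme mentioned in this abstract can be illustrated with a minimal numerical sketch. This is not the authors' method: the test equation y' = y (i.e. y(t) = 1 + ∫₀ᵗ y(s) ds) and the trapezoidal quadrature are hypothetical stand-ins for the paper's tempered Volterra formulation, chosen only to show how the iterates converge to the unique fixed point.

```python
import numpy as np

def picard(f, y0, t, n_iter=20):
    """Successive approximation for y(t) = y0 + int_0^t f(s, y(s)) ds.

    f      : integrand f(s, y), vectorized over s
    y0     : initial value
    t      : increasing time grid starting at 0
    n_iter : number of Picard iterations y_{k+1} = y0 + T(y_k)
    """
    y = np.full_like(t, y0, dtype=float)  # y_0(t) = y0 (constant first iterate)
    for _ in range(n_iter):
        integrand = f(t, y)
        # cumulative trapezoidal rule approximates the Volterra integral
        integral = np.concatenate((
            [0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)),
        ))
        y = y0 + integral                  # next iterate y_{k+1}
    return y

t = np.linspace(0.0, 1.0, 201)
y = picard(lambda s, u: u, 1.0, t)         # y' = y, y(0) = 1  ->  y = exp(t)
```

On this toy problem the iterates reproduce the partial sums of the exponential series, so 20 iterations already agree with exp(t) to quadrature accuracy.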

22 pages, 1053 KB  
Article
Integrating Machine Learning and Operations Research for Sustainable Demand Forecasting and Production Planning in Craft Breweries
by Michele Cruz Martins, Marcelo Koboldt, Antonio Augusto Maciel Guimaraes, Matheus de Sousa Pereira, Cezer Vicente de Sousa Filho, João Gonçalves Borsato de Moraes, Sanderson Cesar Macedo Barbalho and Marcelo Carneiro Gonçalves
Sustainability 2026, 18(8), 3971; https://doi.org/10.3390/su18083971 - 16 Apr 2026
Viewed by 264
Abstract
The Brazilian craft beer market has experienced continuous growth, increasing operational challenges for small- and medium-sized breweries that frequently rely on empirical and spreadsheet-based production routines. These practices often lead to inefficient resource allocation, production instability, and sustainability concerns. This study proposes an integrated analytical framework combining Machine Learning (ML) and Operations Research (OR) to improve demand forecasting and production planning. The methodology is based on a synthetic dataset calibrated to the operational conditions of a Brasília-based craft brewery, incorporating realistic demand patterns such as seasonality, trend, and intermittency across multiple SKUs over an 18-month horizon. Forecasting models—including Moving Average, Single Exponential Smoothing, and a global ML-based proxy—were evaluated using rolling-origin validation. The resulting probabilistic forecasts were integrated into a capacity-constrained optimization model based on linear programming, extended with risk-aware decision-making using Conditional Value-at-Risk (CVaR). The results indicate that the ML-based approach achieved competitive forecasting performance (sMAPE = 5.83% and MAE = 11.76) while enabling the generation of capacity-feasible and risk-aware production plans aligned with service-level targets. The integration of probabilistic forecasts into the optimization model allowed explicit trade-offs between cost, service level, and resource utilization. The main contribution of this study lies in demonstrating how the integration of predictive and prescriptive analytics can support more sustainable production planning in resource-constrained manufacturing environments. By replacing ad hoc spreadsheet routines with a closed-loop decision-support system, the proposed framework advances the literature on data-driven PPC and provides practical guidance for SMEs operating under uncertainty. Full article
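The Single Exponential Smoothing baseline evaluated above follows the standard recursion level_t = α·y_t + (1 − α)·level_{t−1}, with the current level serving as the one-step-ahead forecast. A minimal sketch, assuming a plain Python list of demand observations (illustrative numbers, not the paper's dataset or code):

```python
def simple_exp_smoothing(series, alpha):
    """Single exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.

    Returns one-step-ahead forecasts; forecasts[i] is the level after
    observing series[:i+1], i.e. the forecast for series[i+1].
    """
    level = series[0]                      # common choice: initialise at y_1
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

demand = [100, 102, 101, 105, 107]         # hypothetical weekly demand
fcst = simple_exp_smoothing(demand, alpha=0.3)
```

With α = 1 the method reduces to a naive forecast (the last observation); smaller α gives heavier damping of demand noise, which is why α is typically tuned against a holdout error metric such as sMAPE or MAE.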

17 pages, 942 KB  
Article
Integrated Water Conservation Measures for Single-Family Homes: A Multi-City Assessment
by Kyrah L. Williams, Esber Andiroglu and Murat Erkoc
Water 2026, 18(8), 942; https://doi.org/10.3390/w18080942 - 15 Apr 2026
Viewed by 362
Abstract
Water plays a critical role in residential consumption, accounting for a significant share of public water supply use. With increasing concerns over water scarcity and projections that a large portion of the global population will experience water stress by 2050, the need for effective water conservation strategies has become more urgent. This study evaluates the application and combined impact of water conservation measures in single-family homes. A deterministic modeling framework is developed to estimate household water consumption and conservation potential across four U.S. cities, namely, Houston, Phoenix, Las Vegas, and Des Moines, representing diverse climatic conditions. The analysis incorporates rainwater harvesting, HVAC condensate recovery, water-efficient fixtures, and greywater reuse systems. Scenario-based forecasting, including adoption rates of 1% and 5% of existing homes alongside new construction, is conducted over a six-year period using exponential smoothing techniques. Results indicate that the combined implementation of these measures can generate substantial aggregate water savings, with outcomes varying by climate and location. Greywater reuse and water-efficient fixtures consistently provide the largest contributions, while rainwater harvesting and condensate recovery depend more heavily on regional conditions. These findings highlight the importance of integrated and location-specific strategies and demonstrate the potential of decentralized, residential-level interventions to reduce demand on municipal water systems. Full article
(This article belongs to the Special Issue Resilience and Risk Management in Urban Water Systems)

7 pages, 1495 KB  
Proceeding Paper
Defect Identification of Trinitario Cacao Beans Using Residual Network-50 for Quality Control
by Jed Nathan L. Villapando, Kyle Aldrich R. Bordonada and Glenn V. Magwili
Eng. Proc. 2026, 134(1), 53; https://doi.org/10.3390/engproc2026134053 - 13 Apr 2026
Viewed by 175
Abstract
Cacao grading in the Philippines has relied on slow and inconsistent visual inspection. To effectively detect defects in Trinitario cacao beans, we developed a compact, low-cost computer vision system using single-bean images captured with a Raspberry Pi 5 and Camera Module 3 under controlled lighting and distance conditions. The dataset comprises 1565 images, partitioned into training (80%), validation (10%), and testing (10%) sets. Each image was resized to 224 × 224 pixels, normalized with ImageNet statistics, and subjected to light augmentation. A ResNet-50 model was fine-tuned through transfer learning, employing AdamW optimization, warmup–cosine scheduling, label smoothing, exponential moving average, and early stopping, to classify beans into five categories: good, moldy, slaty, germinated, and over-fermented. On the held-out test set, the model achieved a 94.0% accuracy, strong per-class F1 scores, and high one-vs-rest mean average precision. Compared with a Visual Geometry Group-16 approach, which attained a 90.67% accuracy, the developed system improved performance by 3.3% while remaining inexpensive and easy to deploy. The lightweight system provides reliable and scalable cacao bean screening. Further improvements are anticipated through the expansion of underrepresented classes and refinement of class-specific thresholds. Full article
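The exponential moving average of model weights listed in this training recipe uses the same smoothing recursion as the forecasting methods elsewhere in these results: a shadow copy of each parameter is updated as ema ← decay·ema + (1 − decay)·param. A framework-free toy sketch (scalar "weights", not the authors' ResNet-50 implementation):

```python
def ema_update(ema_params, params, decay=0.999):
    """One EMA step over a list of parameters: ema <- decay*ema + (1-decay)*p."""
    return [decay * e + (1 - decay) * p for e, p in zip(ema_params, params)]

# toy usage: the EMA shadow gradually tracks a parameter that has settled at 1.0
ema = [0.0]
for _ in range(1000):
    ema = ema_update(ema, [1.0], decay=0.99)
```

At evaluation time the EMA weights, rather than the raw final weights, are typically used, since the averaging damps step-to-step optimisation noise.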

23 pages, 557 KB  
Article
A Multi-Stage Decomposition and Hybrid Statistical Framework for Time Series Forecasting
by Swera Zeb Abbasi, Mahmoud M. Abdelwahab, Imam Hussain, Moiz Qureshi, Moeeba Rind, Paulo Canas Rodrigues, Ijaz Hussain and Mohamed A. Abdelkawy
Axioms 2026, 15(4), 273; https://doi.org/10.3390/axioms15040273 - 9 Apr 2026
Viewed by 381
Abstract
Modeling and forecasting nonstationary and nonlinear economic time series remain fundamentally challenging due to structural breaks, volatility clustering, and noise contamination that distort the intrinsic stochastic structure. To address these limitations, this study proposes a novel three-stage hybrid statistical framework that systematically integrates multi-level signal decomposition with structured parametric modeling to enhance predictive accuracy. The proposed hybrid architectures—EMD–EEMD–ARIMA, EMD–EEMD–GMDH, and EMD–EEMD–ETS—employ a hierarchical decomposition–reconstruction strategy before forecasting. In the first stage, Empirical Mode Decomposition (EMD) decomposes the observed series into intrinsic mode functions (IMFs) and a residual component. In the second stage, Ensemble Empirical Mode Decomposition (EEMD) is applied to further refine the extracted components, mitigating mode mixing and improving signal separability. In the final stage, each reconstructed component is modeled using ARIMA, Exponential Smoothing State Space (ETS), and Group Method of Data Handling (GMDH) frameworks, and the individual forecasts are aggregated to obtain the final prediction. Empirical evaluation based on a recursive one-step-ahead forecasting scheme demonstrates consistent numerical improvements across all standard accuracy measures. In particular, the proposed EMD–EEMD–ARIMA model achieves the lowest forecasting error, reducing the root-mean-square error (RMSE) by approximately 6–7% relative to the best-performing single-stage model and by about 3–4% relative to the two-stage EMD-based hybrids. Similar improvements are observed in mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), indicating enhanced stability and robustness of the three-stage architecture. 
The results provide strong numerical evidence that multi-level decomposition combined with structured statistical modeling yields superior predictive performance for complex nonlinear and nonstationary time series. The proposed framework offers a mathematically coherent, computationally tractable, and systematically structured hybrid modeling strategy that effectively integrates noise-assisted decomposition with parametric and data-driven forecasting techniques. Full article

23 pages, 13416 KB  
Article
An Adaptive Ensemble Model Based on Deep Reinforcement Learning for the Prediction of Step-like Landslide Displacement
by Tengfei Gu, Lei Huang, Shunyao Tian, Zhichao Zhang, Huan Zhang and Yanke Zhang
Remote Sens. 2026, 18(5), 761; https://doi.org/10.3390/rs18050761 - 3 Mar 2026
Viewed by 393
Abstract
Accurate prediction of landslide displacement is crucial for hazard prevention. However, recurrent neural network (RNN) models have limitations in simultaneously capturing lag time and feature importance, and their black-box nature limits their interpretability. Moreover, the performance of single models varies across different deformation stages, especially during acceleration. To address these challenges, we propose an interpretable deep reinforcement learning-based adaptive ensemble (DRL-AE) framework. The method employs Seasonal and Trend decomposition using Loess to separate cumulative displacement into trend and periodic components. Trend and periodic sequences are predicted using double exponential smoothing and three RNN variants, respectively. An improved Convolutional Block Attention Module (ICBAM) enhances periodic feature extraction and provides temporal–spatial interpretability. The Deep Deterministic Policy Gradient algorithm adaptively integrates multi-model predictions in response to evolving environmental conditions. To validate the DRL-AE, a case study is conducted on the Baijiabao landslide in Zigui County, China. The results indicate that the DRL-AE substantially enhances prediction accuracy. For periodic displacement, it reduces MAE by 10.02% and RMSE by 6.65%, and increases R2 by 4.27% compared with the ICBAM-GRU model. The results also confirm the effectiveness of ICBAM in feature extraction, and the generated heatmaps provide intuitive interpretability of the relevant triggering factors. Full article

14 pages, 887 KB  
Article
On Maximum Entropy Density Estimation with Relaxed Moment Constraints
by Thi Lich Nghiem and Pierre Maréchal
Entropy 2026, 28(3), 282; https://doi.org/10.3390/e28030282 - 2 Mar 2026
Viewed by 321
Abstract
We study Maximum Entropy density estimation on continuous domains under finitely many moment constraints, formulated as the minimization of the Kullback–Leibler divergence with respect to a reference measure. To model uncertainty in empirical moments, constraints are relaxed through convex penalty functions, leading to an infinite-dimensional convex optimization problem over probability densities. The main contribution of this work is a rigorous convex-analytic treatment of such relaxed Maximum Entropy problems in a functional setting, without discretization or smoothness assumptions on the density. Using convex integral functionals and an extension of Fenchel duality, we show that, under mild and explicit qualification conditions, the infinite-dimensional primal problem admits a dual formulation involving only finitely many variables. This reduction can be interpreted as a continuous-domain instance of partially finite convex programming. The resulting dual problem yields explicit primal–dual optimality conditions and characterizes Maximum Entropy solutions in exponential form. The proposed framework unifies exact and relaxed moment constraints, including box and quadratic relaxations, within a single variational formulation, and provides a mathematically sound foundation for relaxed Maximum Entropy methods previously studied mainly in finite or discrete settings. A brief numerical illustration demonstrates the practical tractability of the approach. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

25 pages, 2735 KB  
Article
Beyond Traditional Forecasting Methods: Evaluating LSTM Performance on Diverse Time Series
by Zoltán Baráth, Péter Veres and Ágota Bányai
Mathematics 2026, 14(5), 838; https://doi.org/10.3390/math14050838 - 1 Mar 2026
Viewed by 776
Abstract
Time series forecasting performance is strongly influenced by the structural properties of the underlying data, yet learning-based models are often applied without sufficient validation of this dependency. This study evaluates a uniformly configured Long Short-Term Memory (LSTM) model on five real-world weekly time series with different levels of periodicity, noise, and volatility. Forecasting is performed in a single-step setting using a fixed sliding window of 12 weeks under a consistent training, validation, and testing framework. Model performance is assessed using mean squared error (MSE) and the coefficient of determination R2. The results show that for well-structured series, both the LSTM model and Holt’s exponential smoothing achieve very low MSE values with R2 scores close to one, indicating excellent predictive accuracy. For other items, performance varies across methods, with either the LSTM or Holt model providing the best results depending on the data structure. These findings confirm that high forecasting accuracy can be achieved with both advanced and classical methods, and that data characteristics play a more decisive role than model complexity. Full article
(This article belongs to the Special Issue Soft Computing in Computational Intelligence and Machine Learning)

28 pages, 1593 KB  
Article
Comparative Evaluation of Event-Based Forecasting Models for Thai Airport Passenger Traffic
by Thanrada Chaikajonwat and Autcha Araveeporn
Modelling 2026, 7(1), 26; https://doi.org/10.3390/modelling7010026 - 20 Jan 2026
Viewed by 441
Abstract
Accurate passenger traffic forecasting is vital for strategic planning in Thailand’s aviation industry. This study forecasts the monthly total number of passengers at Suvarnabhumi (BKK), Don Mueang (DMK), Chiang Mai (CNX), and Phuket (HKT) airports using data from 2017 to 2024. The dataset was partitioned into training (January 2017–December 2023) and testing (January–December 2024) sets. Six methods were compared: Single Exponential Smoothing, Holt’s, Holt’s with Events Adjustment, Holt–Winters Multiplicative, TBATS model, and Box–Jenkins. Performance was evaluated using Mean Absolute Percentage Error (MAPE) and Mean Absolute Error (MAE). The results indicate that the optimal forecasting method varies by airport characteristics. Holt’s Method with Events Adjustment, which incorporates major disruptions such as the COVID-19 pandemic, produced the most accurate forecasts for BKK and DMK by effectively capturing external shocks. In contrast, the Holt–Winters Multiplicative method performed best for CNX and HKT, reflecting strong seasonal patterns typically driven by tourism activities in these destinations. Full article

25 pages, 3700 KB  
Article
SP-LiDAR for Fast and Robust Depth Imaging at Low SBR and Few Photons
by Kehao Chi, Xialin Liu, Ruikai Xue and Genghua Huang
Photonics 2025, 12(12), 1229; https://doi.org/10.3390/photonics12121229 - 12 Dec 2025
Viewed by 603
Abstract
Single photon LiDAR has demonstrated remarkable proficiency in long-range sensing under conditions of weak returns. However, in the few-photon regime (SPPP ≈ 1) and at low signal-to-background ratios (SBR ≤ 0.1), depth estimation is subject to significant degradation due to Poisson fluctuations and background contamination. To address these challenges, we propose GLARE-Depth, a patch-wise Poisson-GLRT framework with reflectance-guided spatial fusion. In the temporal domain, our method employs a continuous-time Poisson-GLRT peak search with a physically consistent exponentially modified Gaussian (EMG) kernel, complemented by closed-form amplitude updates and mode-bias correction. In the spatial domain, we implement a methodology that incorporates reflectance-guided, edge-preserving aggregation and confidence-gated lightweight hole filling to enhance effective coverage for few-photon pixels. In controlled simulations derived from the Middlebury dataset, under high-background conditions (SPPP ≈ 1, SBR ≈ 0.06–0.10), GLARE-Depth demonstrates substantial gains over representative baselines in RMSE, MAE, and valid-pixel ratio (insert concrete numbers when finalized) while maintaining smoothness in planar regions and sharpness at geometric boundaries. These results highlight the robustness of GLARE-Depth and its practical potential for low-SBR scenarios. Full article

16 pages, 565 KB  
Article
Analytical Regression and Geometric Validation of the Blade Arc Segment BC in a Michell–Banki Turbine
by Mauricio A. Díaz Raby, Gonzalo A. Moya Navarrete and Jacobo Hernandez-Montelongo
Machines 2025, 13(12), 1135; https://doi.org/10.3390/machines13121135 - 12 Dec 2025
Viewed by 611
Abstract
This study introduces a systematic methodology for modelling the radius of curvature of the arc-shaped section BC in a Michell–Banki cross-flow turbine blade. The method combines geometric modeling in polar coordinates with nonlinear regression, using both two- and three-parameter formulations estimated through the Ordinary Least Squares (OLS) method. Model performance is assessed through two complementary criteria: the coefficient of determination (R2) and the computed arc length, ensuring that statistical accuracy aligns with geometric fidelity. The methodology was validated on digital measurements obtained from CATIA, using datasets with N=187 and a reduced subset of N=48 points. Results demonstrate that even with fewer data points, the regression model maintains high predictive accuracy and geometric consistency. The best-performing three-parameter model achieved R2=0.958, with a five-point Gauss–Legendre quadrature yielding an arc length of approximately 145 mm, representing 98.8% agreement with the reference value of 146.78 mm. By representing the arc as a single smooth exponential function rather than a piecewise mapping, the approach simplifies analysis and enhances reproducibility. Coupling regression precision with arc-length verification provides a robust and reproducible basis for curvature modeling. This methodology supports turbine blade design, manufacturing, and quality control by ensuring that the blade geometry is validated with high statistical confidence and physical accuracy. Future research will focus on deriving analytical arc-length integrals and integrating the procedure into automated design and inspection workflows. Full article
(This article belongs to the Special Issue Non-Conventional Machining Technologies for Advanced Materials)

27 pages, 3255 KB  
Article
Hourly Photovoltaic Power Forecasting Using Exponential Smoothing: A Comparative Study Based on Operational Data
by Dmytro Matushkin, Artur Zaporozhets, Vitalii Babak, Mykhailo Kulyk and Viktor Denysov
Solar 2025, 5(4), 48; https://doi.org/10.3390/solar5040048 - 20 Oct 2025
Cited by 3 | Viewed by 1840
Abstract
The accurate forecasting of solar power generation is becoming increasingly important in the context of renewable energy integration and intelligent energy management. The variability of solar radiation, caused by changing meteorological conditions and diurnal cycles, complicates the planning and control of photovoltaic systems and may lead to imbalances in supply and demand. This study aims to identify the most effective exponential smoothing approach for real-world PV power forecasting using actual hourly generation data from a 9 MW solar power plant in the Kyiv region, Ukraine. Four exponential smoothing techniques are analysed: Classic, a Modified classic adapted to daily generation patterns, Holt's linear trend method, and the Holt–Winters seasonal method. The models were implemented in Microsoft Excel (Microsoft 365, version 2408) using real measurement data collected over six months. Forecasts were generated one hour ahead, and optimal smoothing constants were identified via RMSE minimisation using the Solver Add-in. Substantial differences in forecasting accuracy were observed. The Classic simple exponential smoothing model performed worst, with an RMSE of 1413.58 kW and nMAE of 9.22%. Holt's method improved trend responsiveness (RMSE = 1052.79 kW, nMAE = 5.96%), but still lacked seasonality modelling. Holt–Winters, which incorporates both trend and seasonality, achieved a strong balance (RMSE = 1031.00 kW, nMAE = 3.7%). The best performance was observed with the modified simple exponential smoothing method, which captured the daily cycle more effectively (RMSE = 166.45 kW, nMAE = 0.84%). These results pertain to a one-step-ahead evaluation on a single plant and an extended validation window; accuracy is dependent on meteorological conditions, with larger errors during rapid cloud transients.
The study identifies forecasting models that combine high accuracy with structural simplicity, intuitive implementation, and minimal parameter tuning—features that make them well-suited for integration into lightweight real-time energy control systems, despite not being evaluated in terms of runtime or memory usage. The modified simple exponential smoothing model, in particular, offers a high degree of precision and interpretability, supporting its integration into operational PV forecasting tools. Full article
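Holt's linear-trend method compared in this study couples two smoothing recursions, one for the level and one for the trend, with the one-step-ahead forecast given by their sum. A minimal sketch, assuming an illustrative linear series rather than the plant's measurement data (this is not the authors' Excel implementation):

```python
def holt(series, alpha, beta):
    """Holt's linear-trend (double) exponential smoothing.

    level_t = alpha*y_t + (1-alpha)*(level_{t-1} + trend_{t-1})
    trend_t = beta*(level_t - level_{t-1}) + (1-beta)*trend_{t-1}
    One-step-ahead forecast: level_t + trend_t.
    """
    level = series[0]
    trend = series[1] - series[0]          # common trend initialisation
    forecasts = [level + trend]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        forecasts.append(level + trend)
    return forecasts

obs = [10.0, 12.0, 14.0, 16.0, 18.0]       # hypothetical perfectly linear series
fcst = holt(obs, alpha=0.5, beta=0.3)
```

On a perfectly linear series the level and trend equations are satisfied exactly, so the forecasts extrapolate the line without error; on real PV data the smoothing constants α and β would be tuned, as the study does via RMSE minimisation.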

23 pages, 1850 KB  
Article
Forecasting of GDP Growth in the South Caucasian Countries Using Hybrid Ensemble Models
by Gaetano Perone and Manuel A. Zambrano-Monserrate
Econometrics 2025, 13(3), 35; https://doi.org/10.3390/econometrics13030035 - 10 Sep 2025
Cited by 1 | Viewed by 2707
Abstract
This study aimed to forecast the gross domestic product (GDP) of the South Caucasian nations (Armenia, Azerbaijan, and Georgia) by scrutinizing the accuracy of various econometric methodologies. This topic is noteworthy considering the significant economic development exhibited by these countries in the context of recovery post COVID-19. The seasonal autoregressive integrated moving average (SARIMA), exponential smoothing state space (ETS) model, neural network autoregressive (NNAR) model, and trigonometric exponential smoothing state space model with Box–Cox transformation, ARMA errors, and trend and seasonal components (TBATS), together with their feasible hybrid combinations, were employed. The empirical investigation utilized quarterly GDP data at market prices from 1Q-2010 to 2Q-2024. According to the results, the hybrid models significantly outperformed the corresponding single models, handling the linear and nonlinear components of the GDP time series more effectively. Rolling-window cross-validation showed that hybrid ETS-NNAR-TBATS for Armenia, hybrid ETS-NNAR-SARIMA for Azerbaijan, and hybrid ETS-SARIMA for Georgia were the best-performing models. The forecasts also suggest that Georgia is likely to record the strongest GDP growth over the projection horizon, followed by Armenia and Azerbaijan. These findings confirm that hybrid models constitute a reliable technique for forecasting GDP in the South Caucasian countries. This region is not only economically dynamic but also strategically important, with direct implications for policy and regional planning. Full article

16 pages, 957 KB  
Article
The Influence of Blood Transfusion Indexed to Patient Blood Volume on 5-Year Mortality After Coronary Artery Bypass Grafting—An EuroSCORE II Adjusted Spline Regression Analysis
by Joseph Kletzer, Maximilian Kreibich, Martin Czerny, Tim Berger, Albi Fagu, Laurin Micek, Ulrich Franke, Matthias Eschenhagen, Tau S. Hartikainen, Mirjam Wild and Dalibor Bockelmann
J. Cardiovasc. Dev. Dis. 2025, 12(8), 287; https://doi.org/10.3390/jcdd12080287 - 28 Jul 2025
Viewed by 1695
Abstract
Background: While timely blood transfusion is critical for restoring oxygen-carrying capacity after coronary artery bypass grafting (CABG), allogeneic blood product transfusions are independently associated with increased long-term mortality, necessitating a risk-stratified approach to balance oxygen delivery against immunological complications and infection risks. Methods: We retrospectively analyzed 3376 patients undergoing isolated CABG between 2005 and 2023 at a single tertiary center. Patients who died during their perioperative hospital stay within 30 days were excluded. Transfusion burden was assessed both as the absolute number of blood product units (packed red blood cells, platelet transfusion, fresh frozen plasma) and as a percentage of calculated patient blood volume. The primary outcome was all-cause mortality at 5 years. Flexible Cox regression with penalized smoothing splines, adjusted for EuroSCORE II, was used to model dose–response relationships. Results: From our cohort of 3376 patients, a total of 137 patients (4.05%) received >10 units of packed red blood cells (PRBC) perioperatively. These patients were older (median 71 vs. 68 years, p < 0.001), more often female (29% vs. 15%, p < 0.001), and had higher preoperative risk (EuroSCORE II: 2.53 vs. 1.41, p < 0.001). After 5 years, mortality was 42% in the massive transfusion group versus 10% in controls. Spline regression revealed an exponential increase in mortality with transfused units: 14 units yielded a 1.5-fold higher hazard of death (HR 1.46, 95% CI 1.31–1.64), rising to HR 2.71 (95% CI 2.12–3.47) at 30 units. When transfusion was indexed to blood volume, this relationship became linear and more tightly correlated with mortality, with lower maximum hazard ratios and narrower confidence intervals. 
Conclusions: Indexing transfusion burden to the percentage of patient blood volume replaced provides a more accurate and clinically actionable predictor of 5-year mortality after CABG than absolute unit counts. Our findings support a shift toward individualized, volume-based transfusion strategies to optimize patient outcomes and resource stewardship in a time of limited availability of blood products. Full article
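The volume-indexing idea above can be sketched in a few lines. This is a hypothetical illustration only: the article does not state which blood volume estimator it used, so the Nadler formula and the assumed ~300 mL per PRBC unit are assumptions, not details from the study.

```python
# Hedged sketch: expressing transfusion burden as a percentage of
# estimated patient blood volume rather than as absolute unit counts.
# The Nadler formula and ml_per_unit default are illustrative assumptions.

def estimated_blood_volume_l(sex: str, height_m: float, weight_kg: float) -> float:
    """Estimate total blood volume (litres) via the Nadler formula."""
    if sex == "male":
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833


def transfusion_pct_of_volume(units_prbc: int, sex: str,
                              height_m: float, weight_kg: float,
                              ml_per_unit: float = 300.0) -> float:
    """Transfused PRBC volume as a percentage of estimated blood volume."""
    transfused_l = units_prbc * ml_per_unit / 1000.0
    return 100.0 * transfused_l / estimated_blood_volume_l(sex, height_m, weight_kg)
```

The same absolute unit count maps to very different percentages for small and large patients, which is one plausible reason the indexed measure correlated more tightly with mortality in the study.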
28 pages, 4174 KB  
Article
Improving Portfolio Management Using Clustering and Particle Swarm Optimisation
by Vivek Bulani, Marija Bezbradica and Martin Crane
Mathematics 2025, 13(10), 1623; https://doi.org/10.3390/math13101623 - 15 May 2025
Cited by 3 | Viewed by 4439
Abstract
Portfolio management, a critical application of financial market analysis, involves optimising asset allocation to maximise returns while minimising risk. This paper addresses a notable research gap in analysing historical financial data for portfolio optimisation purposes. In particular, this research examines different approaches for handling missing values and volatility, and their effects on optimal portfolios. For the portfolio optimisation task, this study employs a metaheuristic approach from the Swarm Intelligence family, specifically Particle Swarm Optimisation and its variants. Additionally, it aims to enhance portfolio diversity for risk minimisation by dynamically clustering and selecting appropriate assets using the proposed strategies. The investigation focuses on improving risk-adjusted return metrics, such as the Sharpe, Adjusted Sharpe, and Sortino ratios, for single-asset-class portfolios across two distinct asset classes, cryptocurrencies and stocks. Given the relatively high market activity across pre-, during-, and post-pandemic conditions, the experiments utilise historical data spanning 2015 to 2023. The results indicate that Sharpe ratios of portfolios across both asset classes are maximised by employing linear interpolation for missing value imputation and exponential moving average smoothing with a lower smoothing factor (α). Furthermore, incorporating assets from different clusters significantly improves risk-adjusted returns of portfolios compared to portfolios restricted to high market capitalisation assets. Full article
(This article belongs to the Special Issue Combinatorial Optimization and Applications)
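The preprocessing pipeline the abstract reports as optimal can be sketched as follows. This is a minimal illustration, not the authors' implementation: the column handling, the default α = 0.2, and the annualisation constant are assumptions.

```python
# Hedged sketch: linear interpolation of missing prices, EMA smoothing
# with factor alpha, and a simple annualised Sharpe ratio.
# Parameter defaults are illustrative assumptions, not values from the paper.
import numpy as np
import pandas as pd


def preprocess(prices: pd.Series, alpha: float = 0.2) -> pd.Series:
    """Impute gaps linearly, then apply EMA smoothing with factor alpha."""
    filled = prices.interpolate(method="linear")
    # adjust=False gives the recursive EMA: y_t = (1 - alpha) * y_{t-1} + alpha * x_t
    return filled.ewm(alpha=alpha, adjust=False).mean()


def sharpe_ratio(prices: pd.Series, periods_per_year: int = 252,
                 risk_free: float = 0.0) -> float:
    """Annualised Sharpe ratio from a price series (sample std, ddof=1)."""
    rets = prices.pct_change().dropna()
    excess = rets - risk_free / periods_per_year
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std())
```

A lower α weights past observations more heavily, so the smoothed series damps short-term volatility more aggressively, which is consistent with the abstract's finding that smaller smoothing factors led to higher Sharpe ratios.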
