Search Results (38)

Search Parameters:
Keywords = single exponential smoothing

24 pages, 3700 KB  
Article
SP-LiDAR for Fast and Robust Depth Imaging at Low SBR and Few Photons
by Kehao Chi, Xialin Liu, Ruikai Xue and Genghua Huang
Photonics 2025, 12(12), 1229; https://doi.org/10.3390/photonics12121229 - 12 Dec 2025
Viewed by 68
Abstract
Single-photon LiDAR has demonstrated remarkable proficiency in long-range sensing under conditions of weak returns. However, in the few-photon regime (SPPP ≈ 1) and at low signal-to-background ratios (SBR ≤ 0.1), depth estimation degrades significantly due to Poisson fluctuations and background contamination. To address these challenges, we propose GLARE-Depth, a patch-wise Poisson-GLRT framework with reflectance-guided spatial fusion. In the temporal domain, our method employs a continuous-time Poisson-GLRT peak search with a physically consistent exponentially modified Gaussian (EMG) kernel, complemented by closed-form amplitude updates and mode-bias correction. In the spatial domain, we incorporate reflectance-guided, edge-preserving aggregation and confidence-gated lightweight hole filling to enhance effective coverage for few-photon pixels. In controlled simulations derived from the Middlebury dataset, under high-background conditions (SPPP ≈ 1, SBR ≈ 0.06–0.10), GLARE-Depth demonstrates substantial gains over representative baselines in RMSE, MAE, and valid-pixel ratio (insert concrete numbers when finalized) while maintaining smoothness in planar regions and sharpness at geometric boundaries. These results highlight the robustness of GLARE-Depth and its practical potential for low-SBR scenarios.
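For readers who want to experiment with the temporal stage described above, here is a minimal Python sketch of a Poisson-GLRT peak search with an exponentially modified Gaussian (EMG) kernel. The bin layout, kernel parameters, and the least-squares amplitude step (standing in for the paper's closed-form Poisson amplitude update) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import exponnorm

def emg_kernel(t, mu, sigma, tau):
    """EMG pulse: Gaussian (jitter sigma) convolved with exponential (tail tau)."""
    # scipy's exponnorm shape parameter is K = tau / sigma
    return exponnorm.pdf(t, tau / sigma, loc=mu, scale=sigma)

def glrt_depth(hist, bins, b, sigma, tau):
    """Scan candidate delays; return the delay maximising the Poisson GLRT.

    hist: photon counts per time bin; bins: bin centres;
    b: background rate per bin (must be > 0).
    """
    best_llr, best_mu = -np.inf, None
    for mu in bins:
        g = emg_kernel(bins, mu, sigma, tau)
        # least-squares amplitude, a cheap stand-in for the paper's
        # closed-form Poisson amplitude update
        a = max(float((hist - b) @ g / (g @ g)), 0.0)
        rate = b + a * g
        # Poisson log-likelihood ratio against the background-only model
        llr = np.sum(hist * (np.log(rate) - np.log(b)) - (rate - b))
        if llr > best_llr:
            best_llr, best_mu = llr, mu
    return best_mu  # delay estimate; depth = c * delay / 2
```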

16 pages, 565 KB  
Article
Analytical Regression and Geometric Validation of the Blade Arc Segment BC in a Michell–Banki Turbine
by Mauricio A. Díaz Raby, Gonzalo A. Moya Navarrete and Jacobo Hernandez-Montelongo
Machines 2025, 13(12), 1135; https://doi.org/10.3390/machines13121135 - 12 Dec 2025
Viewed by 163
Abstract
This study introduces a systematic methodology for modelling the radius of curvature of the arc-shaped section BC in a Michell–Banki cross-flow turbine blade. The method combines geometric modelling in polar coordinates with nonlinear regression, using both two- and three-parameter formulations estimated through the Ordinary Least Squares (OLS) method. Model performance is assessed through two complementary criteria: the coefficient of determination (R²) and the computed arc length, ensuring that statistical accuracy aligns with geometric fidelity. The methodology was validated on digital measurements obtained from CATIA, using datasets with N = 187 and a reduced subset of N = 48 points. Results demonstrate that even with fewer data points, the regression model maintains high predictive accuracy and geometric consistency. The best-performing three-parameter model achieved R² = 0.958, with a five-point Gauss–Legendre quadrature yielding an arc length of approximately 145 mm, representing 98.8% agreement with the reference value of 146.78 mm. By representing the arc as a single smooth exponential function rather than a piecewise mapping, the approach simplifies analysis and enhances reproducibility. Coupling regression precision with arc-length verification provides a robust and reproducible basis for curvature modelling. This methodology supports turbine blade design, manufacturing, and quality control by ensuring that the blade geometry is validated with high statistical confidence and physical accuracy. Future research will focus on deriving analytical arc-length integrals and integrating the procedure into automated design and inspection workflows.
(This article belongs to the Special Issue Non-Conventional Machining Technologies for Advanced Materials)
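The fit-then-verify pipeline can be sketched in a few lines. This shows the two-parameter form r(θ) = a·exp(bθ) with synthetic placeholder data standing in for the CATIA measurements; the paper's best-performing model adds a third parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_model(theta, a, b):
    # single smooth exponential radius law in polar coordinates
    return a * np.exp(b * theta)

# placeholder "measurements" (theta in radians, r in mm); N = 48 as in the
# paper's reduced subset
theta = np.linspace(0.0, 1.2, 48)
r_obs = r_model(theta, 80.0, 0.35) + np.random.default_rng(0).normal(0, 0.3, theta.size)

# ordinary least-squares fit of the two parameters
(a, b), _ = curve_fit(r_model, theta, r_obs, p0=(1.0, 0.1))

# polar arc length L = integral of sqrt(r^2 + (dr/dtheta)^2) dtheta,
# evaluated with 5-point Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(5)
t0, t1 = theta[0], theta[-1]
t = 0.5 * (t1 - t0) * nodes + 0.5 * (t1 + t0)   # map [-1, 1] -> [t0, t1]
integrand = np.sqrt(r_model(t, a, b) ** 2 + (a * b * np.exp(b * t)) ** 2)
arc_len = 0.5 * (t1 - t0) * np.sum(weights * integrand)
print(f"a = {a:.3f}, b = {b:.3f}, arc length ≈ {arc_len:.2f} mm")
```

The same five-point Gauss–Legendre rule is what yields the ≈145 mm arc length the abstract reports for the three-parameter model.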

27 pages, 3255 KB  
Article
Hourly Photovoltaic Power Forecasting Using Exponential Smoothing: A Comparative Study Based on Operational Data
by Dmytro Matushkin, Artur Zaporozhets, Vitalii Babak, Mykhailo Kulyk and Viktor Denysov
Solar 2025, 5(4), 48; https://doi.org/10.3390/solar5040048 - 20 Oct 2025
Viewed by 764
Abstract
The accurate forecasting of solar power generation is becoming increasingly important in the context of renewable energy integration and intelligent energy management. The variability of solar radiation, caused by changing meteorological conditions and diurnal cycles, complicates the planning and control of photovoltaic systems and may lead to imbalances in supply and demand. This study aims to identify the most effective exponential smoothing approach for real-world PV power forecasting using actual hourly generation data from a 9 MW solar power plant in the Kyiv region, Ukraine. Four exponential smoothing techniques are analysed: Classic, a Modified classic adapted to daily generation patterns, Holt’s linear trend method, and the Holt–Winters seasonal method. The models were implemented in Microsoft Excel (Microsoft 365, version 2408) using real measurement data collected over six months. Forecasts were generated one hour ahead, and optimal smoothing constants were identified via RMSE minimisation using the Solver Add-in. Substantial differences in forecasting accuracy were observed. The Classic simple exponential smoothing model performed worst, with an RMSE of 1413.58 kW and an nMAE of 9.22%. Holt’s method improved trend responsiveness (RMSE = 1052.79 kW, nMAE = 5.96%) but still lacked seasonality modelling. Holt–Winters, which incorporates both trend and seasonality, achieved a strong balance (RMSE = 1031.00 kW, nMAE = 3.70%). The best performance was observed with the modified simple exponential smoothing method, which captured the daily cycle more effectively (RMSE = 166.45 kW, nMAE = 0.84%). These results pertain to a one-step-ahead evaluation on a single plant over an extended validation window; accuracy depends on meteorological conditions, with larger errors during rapid cloud transitions. The study identifies forecasting models that combine high accuracy with structural simplicity, intuitive implementation, and minimal parameter tuning, features that make them well-suited for integration into lightweight real-time energy control systems, despite not being evaluated in terms of runtime or memory usage. The modified simple exponential smoothing model, in particular, offers a high degree of precision and interpretability, supporting its integration into operational PV forecasting tools.
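As a rough Python analogue of the Excel workflow described above (Classic variant only; the paper's best model is a modified SES adapted to the daily cycle), the smoothing constant can be tuned by RMSE minimisation just as with the Solver Add-in. The data file name is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ses_forecasts(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts f[t] of y[t]."""
    f = np.empty_like(y, dtype=float)
    f[0] = y[0]  # a common initialisation choice
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def rmse(alpha, y):
    f = ses_forecasts(y, alpha)
    return np.sqrt(np.mean((y[1:] - f[1:]) ** 2))

y = np.loadtxt("pv_hourly_kw.csv")  # hypothetical hourly PV output series
res = minimize_scalar(rmse, bounds=(1e-3, 1.0), method="bounded", args=(y,))
print(f"optimal alpha = {res.x:.3f}, RMSE = {res.fun:.2f} kW")
```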

23 pages, 1850 KB  
Article
Forecasting of GDP Growth in the South Caucasian Countries Using Hybrid Ensemble Models
by Gaetano Perone and Manuel A. Zambrano-Monserrate
Econometrics 2025, 13(3), 35; https://doi.org/10.3390/econometrics13030035 - 10 Sep 2025
Viewed by 1509
Abstract
This study aimed to forecast the gross domestic product (GDP) of the South Caucasian nations (Armenia, Azerbaijan, and Georgia) by scrutinizing the accuracy of various econometric methodologies. This topic is noteworthy considering the significant economic development exhibited by these countries in the context of post-COVID-19 recovery. The seasonal autoregressive integrated moving average (SARIMA), exponential smoothing state space (ETS) model, neural network autoregressive (NNAR) model, and trigonometric exponential smoothing state space model with Box–Cox transformation, ARMA errors, and trend and seasonal components (TBATS), together with their feasible hybrid combinations, were employed. The empirical investigation utilized quarterly GDP data at market prices from 1Q-2010 to 2Q-2024. According to the results, the hybrid models significantly outperformed the corresponding single models, handling the linear and nonlinear components of the GDP time series more effectively. Rolling-window cross-validation showed that the hybrid ETS-NNAR-TBATS for Armenia, the hybrid ETS-NNAR-SARIMA for Azerbaijan, and the hybrid ETS-SARIMA for Georgia were the best-performing models. The forecasts also suggest that Georgia is likely to record the strongest GDP growth over the projection horizon, followed by Armenia and Azerbaijan. These findings confirm that hybrid models constitute a reliable technique for forecasting GDP in the South Caucasian countries. This region is not only economically dynamic but also strategically important, with direct implications for policy and regional planning.
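The hybrid-combination idea can be illustrated generically: produce one-step forecasts from several component models, combine them, and score every candidate with rolling-origin cross-validation. The components below are trivial stand-ins (the paper fits ETS, SARIMA, NNAR, and TBATS), and the file name is hypothetical.

```python
import numpy as np

def rolling_cv_rmse(y, forecast_fn, min_train=24):
    """Rolling-origin CV: forecast y[t] from y[:t] for each origin t."""
    errs = [y[t] - forecast_fn(y[:t]) for t in range(min_train, len(y))]
    return np.sqrt(np.mean(np.square(errs)))

def naive(history):   # stand-in component model 1
    return history[-1]

def drift(history):   # stand-in component model 2
    return history[-1] + (history[-1] - history[0]) / (len(history) - 1)

def hybrid(history):  # equal-weight hybrid of the two components
    return 0.5 * naive(history) + 0.5 * drift(history)

y = np.loadtxt("gdp_quarterly.csv")  # hypothetical quarterly GDP series
for name, fn in [("naive", naive), ("drift", drift), ("hybrid", hybrid)]:
    print(name, rolling_cv_rmse(y, fn))
```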

16 pages, 957 KB  
Article
The Influence of Blood Transfusion Indexed to Patient Blood Volume on 5-Year Mortality After Coronary Artery Bypass Grafting—An EuroSCORE II Adjusted Spline Regression Analysis
by Joseph Kletzer, Maximilian Kreibich, Martin Czerny, Tim Berger, Albi Fagu, Laurin Micek, Ulrich Franke, Matthias Eschenhagen, Tau S. Hartikainen, Mirjam Wild and Dalibor Bockelmann
J. Cardiovasc. Dev. Dis. 2025, 12(8), 287; https://doi.org/10.3390/jcdd12080287 - 28 Jul 2025
Viewed by 1109
Abstract
Background: While timely blood transfusion is critical for restoring oxygen-carrying capacity after coronary artery bypass grafting (CABG), allogeneic blood product transfusions are independently associated with increased long-term mortality, necessitating a risk-stratified approach to balance oxygen delivery against immunological complications and infection risks. Methods: We retrospectively analyzed 3376 patients undergoing isolated CABG between 2005 and 2023 at a single tertiary center. Patients who died within 30 days during the perioperative hospital stay were excluded. Transfusion burden was assessed both as the absolute number of blood product units (packed red blood cells, platelet transfusions, fresh frozen plasma) and as a percentage of calculated patient blood volume. The primary outcome was all-cause mortality at 5 years. Flexible Cox regression with penalized smoothing splines, adjusted for EuroSCORE II, was used to model dose–response relationships. Results: From our cohort of 3376 patients, a total of 137 patients (4.05%) received >10 units of packed red blood cells (PRBC) perioperatively. These patients were older (median 71 vs. 68 years, p < 0.001), more often female (29% vs. 15%, p < 0.001), and had higher preoperative risk (EuroSCORE II: 2.53 vs. 1.41, p < 0.001). After 5 years, mortality was 42% in the massive transfusion group versus 10% in controls. Spline regression revealed an exponential increase in mortality with transfused units: 14 units yielded a 1.5-fold higher hazard of death (HR 1.46, 95% CI 1.31–1.64), rising to HR 2.71 (95% CI 2.12–3.47) at 30 units. When transfusion was indexed to blood volume, this relationship became linear and more tightly correlated with mortality, with lower maximum hazard ratios and narrower confidence intervals. Conclusions: Indexing transfusion burden to the percentage of patient blood volume replaced provides a more accurate and clinically actionable predictor of 5-year mortality after CABG than absolute unit counts. Our findings support a shift toward individualized, volume-based transfusion strategies to optimize patient outcomes and resource stewardship in a time of limited availability of blood products.
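The indexing step itself is straightforward to reproduce. The abstract does not state which blood-volume formula was used; the sketch below assumes Nadler's equations and a nominal 300 mL per PRBC unit, both common but here purely illustrative choices.

```python
# Hedged sketch: express transfusion burden as a percentage of estimated
# patient blood volume replaced. Nadler's formula and the 300 mL unit
# volume are assumptions, not details taken from the paper.

def nadler_blood_volume_l(height_m: float, weight_kg: float, female: bool) -> float:
    """Estimated total blood volume in litres (Nadler 1962)."""
    if female:
        return 0.3561 * height_m**3 + 0.03308 * weight_kg + 0.1833
    return 0.3669 * height_m**3 + 0.03219 * weight_kg + 0.6041

def transfusion_pct_of_bv(units: int, height_m: float, weight_kg: float,
                          female: bool, unit_volume_l: float = 0.3) -> float:
    bv = nadler_blood_volume_l(height_m, weight_kg, female)
    return 100.0 * units * unit_volume_l / bv

# e.g. 14 PRBC units in a 1.70 m, 70 kg woman replace roughly her whole volume:
print(f"{transfusion_pct_of_bv(14, 1.70, 70, female=True):.0f}% of blood volume")
```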

28 pages, 4174 KB  
Article
Improving Portfolio Management Using Clustering and Particle Swarm Optimisation
by Vivek Bulani, Marija Bezbradica and Martin Crane
Mathematics 2025, 13(10), 1623; https://doi.org/10.3390/math13101623 - 15 May 2025
Viewed by 3438
Abstract
Portfolio management, a critical application of financial market analysis, involves optimising asset allocation to maximise returns while minimising risk. This paper addresses a notable research gap in analysing historical financial data for portfolio optimisation purposes. In particular, this research examines different approaches for handling missing values and volatility, and their effects on optimal portfolios. For the portfolio optimisation task, this study employs a metaheuristic approach through Swarm Intelligence algorithms, particularly Particle Swarm Optimisation and its variants. Additionally, it aims to enhance portfolio diversity for risk minimisation by dynamically clustering and selecting appropriate assets using the proposed strategies. The investigation focuses on improving risk-adjusted return metrics, such as the Sharpe, Adjusted Sharpe, and Sortino ratios, for single-asset-class portfolios over two distinct classes of assets: cryptocurrencies and stocks. To capture the relatively high market activity before, during, and after the pandemic, the experiments utilise historical data spanning 2015 to 2023. The results indicate that Sharpe ratios of portfolios across both asset classes are maximised by employing linear interpolation for missing-value imputation and exponential moving average smoothing with a lower smoothing factor (α). Furthermore, incorporating assets from different clusters significantly improves the risk-adjusted returns of portfolios compared to when portfolios are restricted to high-market-capitalisation assets.
(This article belongs to the Special Issue Combinatorial Optimization and Applications)
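The preprocessing combination the study found best, linear interpolation of gaps followed by low-α exponential moving average smoothing, takes only a couple of lines in pandas. The file and column names are assumptions.

```python
import pandas as pd

# hypothetical file of daily closing prices, one column per asset
prices = pd.read_csv("daily_close.csv", index_col="date", parse_dates=True)

filled = prices.interpolate(method="linear")           # impute gaps linearly
smoothed = filled.ewm(alpha=0.1, adjust=False).mean()  # low-alpha EMA smoothing

# the smoothed series would then feed the clustering and PSO optimiser
returns = smoothed.pct_change().dropna()
```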

22 pages, 3100 KB  
Article
Evaluating Predictive Models for Three Green Finance Markets: Insights from Statistical vs. Machine Learning Approaches
by Sonia Benghiat and Salim Lahmiri
Computation 2025, 13(3), 76; https://doi.org/10.3390/computation13030076 - 14 Mar 2025
Viewed by 1256
Abstract
As climate change has become a matter of paramount importance in the last two decades, so has interest in industry-wide carbon emissions and policies promoting a low-carbon economy. Investors and policymakers could improve their decision-making by producing accurate forecasts of relevant green finance market indices: carbon efficiency, clean energy, and sustainability. The purpose of this paper is to compare the performance of single-step univariate forecasts produced by a set of selected statistical and regression-tree-based predictive models, using large datasets of over 2500 daily records of green market indices gathered over a ten-year timespan. The statistical models include simple exponential smoothing, Holt’s method, the ETS version of the exponential model, linear regression, weighted moving average, and autoregressive moving average (ARMA). In addition, the decision-tree-based machine learning (ML) methods include standard regression trees and two ensemble methods, namely random forests and extreme gradient boosting (XGBoost). The forecasting results show that (i) exponential smoothing models achieve the best performance, and (ii) the ensemble methods, namely XGBoost and random forests, perform better than the standard regression trees. The findings of this study will be valuable to both policymakers and investors. Policymakers can leverage these predictive models to design balanced policy interventions that support environmentally sustainable businesses while fostering continued economic growth. In parallel, investors and traders can profit from the computationally cost-effective models identified in this study, which adapt easily to rapid market changes.
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
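A hedged sketch of the statistical-vs-tree comparison: one-step-ahead simple exponential smoothing (statsmodels) against a regression tree on lagged values (scikit-learn). The data file, test span, lag count, and tree depth are assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from sklearn.tree import DecisionTreeRegressor

y = pd.read_csv("green_index.csv")["close"].to_numpy()  # hypothetical file
n_test = 250
train, test = y[:-n_test], y[-n_test:]

# statistical model: alpha fitted on the training span, then genuine
# one-step-ahead level updates across the test span
alpha = SimpleExpSmoothing(train).fit().params["smoothing_level"]
level, ses_fc = train[0], []
for obs in np.concatenate([train[1:], test]):
    ses_fc.append(level)                      # current level forecasts obs
    level = alpha * obs + (1 - alpha) * level
ses_fc = np.array(ses_fc[-n_test:])

# ML model: regression tree on the 5 previous values
L = 5
X = np.column_stack([y[i:len(y) - L + i] for i in range(L)])
target = y[L:]
split = len(train) - L
tree = DecisionTreeRegressor(max_depth=6).fit(X[:split], target[:split])
tree_fc = tree.predict(X[split:])

for name, fc in [("SES", ses_fc), ("regression tree", tree_fc)]:
    print(name, "RMSE:", np.sqrt(np.mean((test - fc) ** 2)))
```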

20 pages, 7676 KB  
Article
A High-Precision Matching Method for Heterogeneous SAR Images Based on ROEWA and Angle-Weighted Gradient
by Anxi Yu, Wenhao Tong, Zhengbin Wang, Keke Zhang and Zhen Dong
Remote Sens. 2025, 17(5), 749; https://doi.org/10.3390/rs17050749 - 21 Feb 2025
Viewed by 750
Abstract
The prerequisite for the fusion processing of heterogeneous SAR images lies in high-precision image matching, which can be widely applied in areas such as geometric localization, scene matching navigation, and target recognition. This study proposes a method for high-precision matching of heterogeneous SAR images based on the combination of the single-scale ratio of exponentially weighted averages (ROEWA) operator and the angle-weighted gradient (RAWG). The method consists of three main steps: feature point extraction, feature description, and feature matching. The algorithm utilizes the block-based SAR-Harris operator to extract feature points from the reference SAR image, effectively combating the interference of coherent speckle noise and improving the uniformity of the feature point distribution. By employing the single-scale ROEWA operator in conjunction with angle-weighted gradient projection, a 3D dense feature descriptor is constructed, enhancing the consistency of gradient features in heterogeneous SAR images and smoothing the search surface. Through the optimal feature construction strategy and a frequency-domain SSD algorithm, fast template matching is realized. Experimental comparisons with other mainstream matching methods demonstrate that the Root Mean Square Error (RMSE) of our method is reduced by 47.5% compared with CFOG; compared with HOPES, the error is reduced by 15.4% and the matching time by 34.3%. The proposed approach effectively addresses the nonlinear intensity differences, geometric disparities, and interference of coherent speckle noise in heterogeneous SAR images, and its robustness, high precision, and efficiency are its prominent advantages.
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
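The ROEWA building block can be sketched compactly: edge strength is taken as the (max-normalised) ratio of exponentially weighted local means on opposite sides of each pixel, which is robust to multiplicative speckle because the ratio cancels the local mean level. Kernel length and decay below are assumptions; the paper's RAWG descriptor adds angle-weighted gradient projection on top of this primitive.

```python
import numpy as np
from scipy.ndimage import convolve1d

def roewa_edge_strength(img, alpha=0.5, radius=8, axis=1):
    """Ratio of exponentially weighted means on opposite sides of each pixel."""
    t = np.arange(1, radius + 1)
    w = np.exp(-alpha * t)
    w /= w.sum()
    side_a = np.concatenate([w[::-1], [0.0], np.zeros(radius)])  # one side
    side_b = side_a[::-1]                                        # mirror side
    m_a = convolve1d(img.astype(float), side_a, axis=axis, mode="reflect")
    m_b = convolve1d(img.astype(float), side_b, axis=axis, mode="reflect")
    ratio = (m_a + 1e-6) / (m_b + 1e-6)
    return np.maximum(ratio, 1.0 / ratio)  # ~1 in flat speckle, >1 at edges
```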

25 pages, 3148 KB  
Review
A Review of Flow Field and Heat Transfer Characteristics of Jet Impingement from Special-Shaped Holes
by Liang Xu, Naiyuan Hu, Hongwei Lin, Lei Xi, Yunlong Li and Jianmin Gao
Energies 2024, 17(17), 4510; https://doi.org/10.3390/en17174510 - 9 Sep 2024
Cited by 4 | Viewed by 4502
Abstract
The jet impingement cooling technique is regarded as one of the most effective enhanced heat transfer techniques with a single-phase medium. However, in order to facilitate manufacturing, impingement with a large number of smooth circular hole jets is used in engineering. With the increasing maturity of additive technology, some new special-shaped holes (SSHs) may be used to further improve the cooling efficiency of jet impingement. Secondly, the heat transfer coefficient of the whole jet varies greatly on the impact target surface. A large number of experiments with single smooth circular hole jets show that the heat transfer coefficient on the impact target surface forms a bell-shaped distribution: the Nusselt number has a maximum value near the stagnation region and then decreases rapidly, approximately exponentially, in the radial direction away from the stagnation region. The overall surface temperature distribution is therefore very uneven, and the target surface forms an array of cold spots, resulting in a high level of thermal stress, which greatly weakens the structural strength and life of the equipment. Ensuring the uniformity of jet impingement cooling has thus become a new problem to be solved. In order to achieve uniform cooling, special-shaped holes that generate a swirling flow may be a solution. This paper presents a summary of the effects of holes with different geometrical features on the flow field and heat transfer characteristics of jet impingement cooling. In addition, the effect of jet impingement cooling with SSHs in different array methods is compared. The current challenges of jet impingement cooling technology with SSHs are discussed, as well as the prospects for possible future advances.
(This article belongs to the Collection Advances in Heat Transfer Enhancement)

43 pages, 8643 KB  
Article
Diffusion on PCA-UMAP Manifold: The Impact of Data Structure Preservation to Denoise High-Dimensional Single-Cell RNA Sequencing Data
by Cristian Padron-Manrique, Aarón Vázquez-Jiménez, Diego Armando Esquivel-Hernandez, Yoscelina Estrella Martinez-Lopez, Daniel Neri-Rosario, David Giron-Villalobos, Edgar Mixcoha, Jean Paul Sánchez-Castañeda and Osbaldo Resendis-Antonio
Biology 2024, 13(7), 512; https://doi.org/10.3390/biology13070512 - 9 Jul 2024
Cited by 14 | Viewed by 4288
Abstract
Single-cell transcriptomics (scRNA-seq) is revolutionizing biological research, yet it faces challenges such as inefficient transcript capture and noise. To address these challenges, methods like neighbor averaging or graph diffusion are used. These methods often rely on k-nearest neighbor graphs from low-dimensional manifolds. However, scRNA-seq data suffer from the ‘curse of dimensionality’, leading to the over-smoothing of data when using imputation methods. To overcome this, sc-PHENIX employs a PCA-UMAP diffusion method, which enhances the preservation of data structures and allows for a refined use of PCA dimensions and diffusion parameters (e.g., k-nearest neighbors, exponentiation of the Markov matrix) to minimize noise introduction. This approach enables a more accurate construction of the exponentiated Markov matrix (cell neighborhood graph), surpassing methods like MAGIC. sc-PHENIX significantly mitigates over-smoothing, as validated through various scRNA-seq datasets, demonstrating improved cell phenotype representation. Applied to a multicellular tumor spheroid dataset, sc-PHENIX identified known extreme phenotype states, showcasing its effectiveness. sc-PHENIX is open-source and available for use and modification.
(This article belongs to the Special Issue Machine Learning Applications in Biology)
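The diffusion step under discussion is easy to sketch. Below is a generic MAGIC-style smoother: a kNN graph on a low-dimensional embedding, row-normalised into a Markov matrix, exponentiated, and applied to the expression matrix. sc-PHENIX's contribution is how the embedding is built (PCA followed by UMAP); here the embedding is simply passed in, and k and t are illustrative parameters.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def diffuse(expression, embedding, k=15, t=3):
    """Smooth a cells-by-genes matrix along a kNN graph of the embedding."""
    A = kneighbors_graph(embedding, n_neighbors=k, mode="connectivity")
    A = ((A + A.T).toarray() > 0).astype(float)  # symmetrised adjacency
    M = A / A.sum(axis=1, keepdims=True)         # row-stochastic Markov matrix
    Mt = np.linalg.matrix_power(M, t)            # exponentiated (t-step) operator
    return Mt @ expression                       # denoised expression values
```

Larger t diffuses further along the graph; the abstract's point is that a poorly built graph makes this step over-smooth, which is what the PCA-UMAP embedding is meant to prevent.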

14 pages, 2994 KB  
Article
Price Forecasting of Marine Fish Based on Weight Allocation Intelligent Combinatorial Modelling
by Daqing Wu, Binfeng Lu and Zinuo Xu
Foods 2024, 13(8), 1202; https://doi.org/10.3390/foods13081202 - 15 Apr 2024
Cited by 4 | Viewed by 2269
Abstract
China is a major player in the marine fish trade. The price prediction of marine fish is of great significance to socio-economic development and the fisheries industry. However, due to the complexity and uncertainty of the marine fish market, traditional forecasting methods often struggle to accurately predict price fluctuations. Therefore, this study adopts an intelligent combination model to enhance the accuracy of food product price prediction. Firstly, three decomposition methods, namely empirical wavelet transform, singular spectrum analysis, and variational mode decomposition, are applied to decompose complex original price series. Secondly, a combination of bidirectional long short-term memory artificial neural network, extreme learning machine, and exponential smoothing prediction methods is applied to the decomposed results for cross-prediction. Subsequently, the predicted results are input into the PSO–CS intelligent algorithm for weight allocation, generating combined prediction results. Empirical analysis is conducted using data on the daily sea purchase price of Larimichthys crocea in Ningde City, Fujian Province, China. The combination prediction accuracy with PSO–CS weight allocation is found to be higher than that of single-model predictions, yielding superior results. With the implementation of weight allocation intelligent combinatorial modelling, the prediction of marine fish prices demonstrates higher accuracy and stability, enabling better adaptation to market changes and price fluctuations.
(This article belongs to the Special Issue Seafood: Processing, Preservation, Nutrition, Marketing, and Policy)
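The weight-allocation step amounts to finding convex combination weights that minimise training error. The paper searches them with a PSO–CS metaheuristic; the sketch below substitutes an off-the-shelf SLSQP solver purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def combine(preds, y):
    """preds: (n_models, n_samples) component forecasts; y: observed prices.

    Returns convex weights minimising training RMSE and the combined forecast.
    """
    n = preds.shape[0]
    loss = lambda w: np.sqrt(np.mean((w @ preds - y) ** 2))
    res = minimize(loss, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x, res.x @ preds
```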

17 pages, 2912 KB  
Article
Applying Machine Learning and Statistical Forecasting Methods for Enhancing Pharmaceutical Sales Predictions
by Konstantinos P. Fourkiotis and Athanasios Tsadiras
Forecasting 2024, 6(1), 170-186; https://doi.org/10.3390/forecast6010010 - 16 Feb 2024
Cited by 19 | Viewed by 13681
Abstract
In today’s evolving global world, the pharmaceutical sector faces an emerging challenge: the rapid surge of the global population and the consequent growth in drug production demands. Recognizing this, our study explores the urgent need to strengthen pharmaceutical production capacities, ensuring drugs are allocated and stored strategically to meet diverse regional and demographic needs. Summarizing our key findings, our research focuses on the promising area of drug demand forecasting using artificial intelligence (AI) and machine learning (ML) techniques to enhance predictions in the pharmaceutical field. Supplied with a rich dataset from Kaggle spanning 600,000 sales records from a single pharmacy, our study embarks on a thorough exploration of univariate time series analysis. Here, we pair conventional analytical tools such as ARIMA with advanced methodologies like LSTM neural networks, all with a singular vision: refining the precision of our sales forecasts. Venturing deeper, our data were categorised and segmented into eight clusters based on the Anatomical Therapeutic Chemical (ATC) Classification System. This segmentation unravels the evident influence of seasonality on drug sales. The analysis not only highlights the effectiveness of machine learning models but also illuminates the remarkable success of XGBoost. This algorithm outperformed traditional models, achieving the lowest MAPE values: 17.89% for M01AB (anti-inflammatory and antirheumatic products, non-steroids, acetic acid derivatives, and related substances), 16.92% for M01AE (anti-inflammatory and antirheumatic products, non-steroids, and propionic acid derivatives), 17.98% for N02BA (analgesics, antipyretics, and anilides), and 16.05% for N02BE (analgesics, antipyretics, pyrazolones, and anilides). XGBoost further demonstrated exceptional precision with the lowest MSE scores: 28.8 for M01AB, 1518.56 for N02BE, and 350.84 for N05C (hypnotics and sedatives). Additionally, the Seasonal Naïve model recorded an MSE of 49.19 for M01AE, while the Single Exponential Smoothing model showed an MSE of 7.19 for N05B. These findings underscore the strengths derived from employing a diverse range of approaches within the forecasting series. In summary, our research accentuates the significance of leveraging machine learning techniques to derive valuable insights for pharmaceutical companies. By harnessing these methods, companies can optimize their production, storage, distribution, and marketing practices.
(This article belongs to the Section Forecasting in Economics and Management)

28 pages, 5309 KB  
Article
Evaluation of MAX-DOAS Profile Retrievals under Different Vertical Resolutions of Aerosol and NO2 Profiles and Elevation Angles
by Xin Tian, Mingsheng Chen, Pinhua Xie, Jin Xu, Ang Li, Bo Ren, Tianshu Zhang, Guangqiang Fan, Zijie Wang, Jiangyi Zheng and Wenqing Liu
Remote Sens. 2023, 15(22), 5431; https://doi.org/10.3390/rs15225431 - 20 Nov 2023
Cited by 4 | Viewed by 2503
Abstract
In the Multi-Axis Differential Absorption Spectroscopy (MAX-DOAS) trace gas and aerosol profile inversion algorithm, the vertical resolution and the observation information obtained through a series of continuous observations with multiple elevation angles (EAs) can affect the accuracy of an aerosol profile, thus further affecting the results of the gas profile. Therefore, this study examined the effect of the vertical resolution of an aerosol profile and of the EAs on NO2 profile retrieval by combining simulations and measurements. Aerosol profiles were retrieved from MAX-DOAS observations and co-observed using light detection and ranging (Lidar). Three aerosol profile shapes (Boltzmann, Gaussian, and exponential) with vertical resolutions of 100 and 200 m were used in the atmospheric radiative transfer model. Firstly, the effect of the vertical resolution of the input aerosol profile on the retrieved aerosol profile with a resolution of 200 m was studied. The aerosol profiles retrieved from the two input vertical resolutions were similar. The aerosol profile retrieved from a 100 m resolution profile as input was slightly overestimated compared to the input value, whereas that from a 200 m resolution input was slightly underestimated. The relative deviation of the aerosol profile retrieved from the 100 m resolution input was higher than that of the 200 m input. MAX-DOAS observations in Hefei city on 4 September 2020 were selected to verify the simulation results. The aerosol profiles retrieved from the oxygen collision complex (O4) differential slant column density derived from MAX-DOAS observations and Lidar simulation were compared with the input Lidar aerosol profiles. The correlation between the retrieved and input aerosol profiles was high, with a correlation coefficient R > 0.99. The aerosol profiles retrieved from the Lidar profile at 100 and 200 m resolutions as input closely matched the Lidar aerosol profiles, consistent with the simulation result. However, aerosol profiles retrieved from MAX-DOAS measurements differed from the Lidar profiles due to the influence of averaging kernel smoothing, the different location and viewing geometry, and uncertainties associated with the Lidar profiles. Next, NO2 profiles of different vertical resolutions were used as input profiles to retrieve the NO2 profiles under a single aerosol profile scenario. The effect of the vertical resolution on the retrieval of NO2 profiles was found to be less significant than for aerosol retrievals. Using the Lidar aerosol profile as the a priori aerosol information had little effect on NO2 profile retrieval. Additionally, the retrieved aerosol profiles and aerosol optical depths varied under different EAs. Ten EAs (i.e., 1, 2, 3, 4, 5, 6, 8, 15, 30, and 90°) were found to extract more information from the observations.

16 pages, 5358 KB  
Article
Comparing the Simple to Complex Automatic Methods with the Ensemble Approach in Forecasting Electrical Time Series Data
by Winita Sulandari, Yudho Yudhanto, Sri Subanti, Crisma Devika Setiawan, Riskhia Hapsari and Paulo Canas Rodrigues
Energies 2023, 16(22), 7495; https://doi.org/10.3390/en16227495 - 8 Nov 2023
Cited by 5 | Viewed by 1594
Abstract
The importance of forecasting in the energy sector as part of electrical power equipment maintenance encourages researchers to obtain accurate electrical forecasting models. This study investigates simple to complex automatic methods and proposes two weighted ensemble approaches. The automated methods are the autoregressive integrated moving average; the exponential smoothing error–trend–seasonal method; the double seasonal Holt–Winters method; the trigonometric Box–Cox transformation, autoregressive, error, trend, and seasonal model; Prophet; and neural networks. All accommodate trend and seasonal patterns commonly found in monthly, daily, hourly, or half-hourly electricity data. In comparison, the proposed ensemble approaches combine linearly (EnL) or nonlinearly (EnNL) the forecasting values obtained from all the single automatic methods by considering each model component’s weight. In this work, four electrical time series with different characteristics are examined to demonstrate the effectiveness and applicability of the proposed ensemble approach; model performances are compared based on root mean square error (RMSE) and mean absolute percentage error (MAPE). The experimental results show that, compared to the existing average weighted ensemble approach, the proposed nonlinear weighted ensemble approach successfully reduces the RMSE and MAPE of the testing data by between 28% and 82%.
(This article belongs to the Special Issue Forecasting Techniques for Power Systems with Machine Learning)
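The linear variant (EnL) reduces to a least-squares problem: regress the observations on the single-model forecasts and use the fitted coefficients as combination weights. A minimal sketch follows; the nonlinear variant (EnNL) would replace this step with a nonlinear combiner.

```python
import numpy as np

def enl_weights(F, y):
    """F: (n_times, n_models) one-step forecasts; y: observed load."""
    w, *_ = np.linalg.lstsq(F, y, rcond=None)  # least-squares weights
    return w

def enl_forecast(F_new, w):
    """Combine new single-model forecasts with the fitted weights."""
    return F_new @ w

# e.g. w = enl_weights(F_train, y_train); y_hat = enl_forecast(F_test, w)
```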

15 pages, 3110 KB  
Article
Setting the Initial Value for Single Exponential Smoothing and the Value of the Smoothing Constant for Forecasting Using Solver in Microsoft Excel
by Wannaporn Junthopas and Chantha Wongoutong
Appl. Sci. 2023, 13(7), 4328; https://doi.org/10.3390/app13074328 - 29 Mar 2023
Cited by 4 | Viewed by 5965
Abstract
Although single exponential smoothing is a popular forecasting method for a wide range of applications involving stationary time series data, consistent rules for choosing the initial value and determining the value of the smoothing constant (α) are still required, because they directly impact the forecast accuracy. The purpose of this study is to mitigate these shortcomings. First, a new method for setting the initial value by weighting is derived, and its performance is compared with two traditional methods. Second, the optimal α was found automatically using Solver in Microsoft Excel, with α determined by minimizing the mean squared error (MSE). The Solver result was then compared with a step search in which the smoothing constant was varied from 0.001 to 1 in increments of 0.001 and the α value with the lowest MSE was chosen. The experimental results show that the α from Solver and the optimal α from the step search do not differ, and that the initial value set by the proposed method outperformed the existing ones regarding the MSE.
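The step-search benchmark described above is simple to replicate in Python. The sketch below initialises with the first observation; the paper's proposed weighted initial value is one of the alternatives it compares. The data file name is hypothetical.

```python
import numpy as np

def ses_mse(y, alpha, init):
    """MSE of one-step SES forecasts for a given alpha and initial value."""
    s, se = init, 0.0
    for obs in y:
        se += (obs - s) ** 2          # s is the one-step forecast of obs
        s = alpha * obs + (1 - alpha) * s
    return se / len(y)

def step_search(y, init):
    """Vary alpha from 0.001 to 1 in steps of 0.001; keep the lowest-MSE value."""
    alphas = np.arange(0.001, 1.0001, 0.001)
    mses = [ses_mse(y, a, init) for a in alphas]
    i = int(np.argmin(mses))
    return alphas[i], mses[i]

y = np.loadtxt("series.csv")          # hypothetical stationary series
alpha, mse = step_search(y, init=y[0])
print(f"best alpha = {alpha:.3f}, MSE = {mse:.4f}")
```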
