Journal Description
Computer Sciences & Mathematics Forum is an open access journal dedicated to publishing findings from academic conferences, workshops, and similar events in the areas of computer science and mathematics. Each conference proceeding is individually indexed, citable via a digital object identifier (DOI), and freely available under an open access license. The conference organizers and proceedings editors are responsible for managing the peer-review process and selecting papers for the proceedings.
Latest Articles
Classifying Two Banking Cultures: The Pragmatic Structure of Economic Revelations
Comput. Sci. Math. Forum 2025, 11(1), 33; https://doi.org/10.3390/cmsf2025011033 - 9 Sep 2025
Abstract
This paper focuses on one specific aspect of a larger project evaluating three measures of banking risk. It emphasizes the overarching question of comparative regulatory policy: Do the European Union and the United States constitute two distinct and separate banking cultures? To answer such a question, conventional econometrics often prescribes fixed effects regression. This paper pursues an alternative approach. It directly asks whether banks on those separate continents can be distinguished using exactly the same design matrix to evaluate the proposed risk measures. The successful completion of that classification task permits the bifurcation of the overall dataset into distinct subsets, one for each continent. Parameter estimates and fitted values produced by separate regressions supply far more reliable and accurate insights into the distinct business and regulatory cultures of European and American banking.
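The classify-then-bifurcate workflow lends itself to a compact sketch. The following Python snippet, with hypothetical feature columns and a placeholder risk measure standing in for the paper's actual design matrix, illustrates the two steps: test whether a shared design matrix separates EU from US banks, then fit separate regressions per continent.

```python
# Sketch of the classify-then-bifurcate workflow (hypothetical columns).
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import cross_val_score

banks = pd.read_csv("banks.csv")             # placeholder panel of EU/US banks
X = banks[["leverage", "roa", "npl_ratio"]]  # shared design matrix (assumed columns)
is_eu = (banks["region"] == "EU").astype(int)

# Step 1: can the same design matrix distinguish the two continents?
acc = cross_val_score(LogisticRegression(max_iter=1000), X, is_eu, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.3f}")

# Step 2: if so, bifurcate the dataset and fit separate regressions.
if acc > 0.8:  # arbitrary threshold for illustration
    for region, subset in banks.groupby("region"):
        fit = LinearRegression().fit(subset[X.columns], subset["risk_measure"])
        print(region, dict(zip(X.columns, fit.coef_.round(3))))
```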
Open Access Proceeding Paper
Benchmarking Foundation Models for Time-Series Forecasting: Zero-Shot, Few-Shot, and Full-Shot Evaluations
by
Frédéric Montet, Benjamin Pasquier and Beat Wolf
Comput. Sci. Math. Forum 2025, 11(1), 32; https://doi.org/10.3390/cmsf2025011032 - 8 Sep 2025
Abstract
Recently, time-series forecasting foundation models trained on large, diverse datasets have demonstrated robust zero-shot and few-shot capabilities. Given the ubiquity of time-series data in IoT, finance, and industrial applications, rigorous benchmarking is essential to assess their forecasting performance and overall value. In this study, our objective is to benchmark foundation models from Amazon, Salesforce, and Google against traditional statistical and deep learning baselines on both public and proprietary industrial datasets. We evaluate zero-shot, few-shot, and full-shot scenarios using metrics such as sMAPE and NMAE on fine-tuned models, ensuring reliable comparisons. All experiments are conducted with onTime, our dedicated open-source library that guarantees reproducibility, data privacy, and flexible configuration. Our results show that foundation models often outperform traditional methods with minimal dataset-specific tuning, underscoring their potential to simplify forecasting tasks and bridge performance gaps in data-scarce settings. Additionally, we address non-performance criteria, such as integration ease, model size, and inference/training time, which are critical for real-world deployment.
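For readers unfamiliar with the headline metrics, here are sMAPE and NMAE in the forms commonly used for forecast benchmarking (the paper may adopt slightly different conventions):

```python
# sMAPE and NMAE in their common benchmarking forms.
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)

def nmae(y_true, y_pred):
    """Mean absolute error normalized by the mean absolute actual value."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true))

print(smape([100, 110, 120], [102, 108, 125]))  # ~2.6
print(nmae([100, 110, 120], [102, 108, 125]))   # ~0.027
```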
Open Access Proceeding Paper
Drift and Diffusion in Panel Data: Extracting Geopolitical and Temporal Effects in a Study of Passenger Rail Traffic
by
James Ming Chen, Thomas Poufinas and Angeliki C. Panagopoulou
Comput. Sci. Math. Forum 2025, 11(1), 31; https://doi.org/10.3390/cmsf2025011031 - 1 Sep 2025
Cited by 1
Abstract
Two-stage least squares (2SLS) regression undergirds much of contemporary geospatial econometrics. Walk-forward validation in time-series forecasting constitutes a special instance of iterative local regression. Two-stage least squares and iterative regression supply distinct approaches to isolating the drift and diffusion terms in data containing deterministic and stochastic components. To demonstrate the benefits of these methods outside their native contexts, this paper applies 2SLS correction of residuals and iterative local regression to panel data on passenger railway traffic in Europe. Goodness of fit improved from r² ≈ 0.685 to r² ≈ 0.723 through 2SLS and to r² ≈ 0.825 through iterative local regression. Two-stage least squares provides strong evidence of geopolitical and temporal influences. Iterative local regression produces implicit vectors of coefficients and p-values that reinforce some causal inferences of the unconditional model for rail passenger traffic while simultaneously undermining others.
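As background, two-stage least squares reduces to two ordinary least-squares passes; a bare-bones sketch with a single endogenous regressor follows (illustrative only, not the paper's panel specification):

```python
# Bare-bones 2SLS with one endogenous regressor x and instrument matrix Z.
import numpy as np

def two_stage_ls(y, x, Z, exog):
    """Stage 1: regress x on instruments + exogenous controls; stage 2:
    regress y on fitted x-hat + exogenous controls. Returns coefficients."""
    n = len(y)
    W = np.column_stack([np.ones(n), Z, exog])
    x_hat = W @ np.linalg.lstsq(W, x, rcond=None)[0]   # first stage
    X2 = np.column_stack([np.ones(n), x_hat, exog])
    return np.linalg.lstsq(X2, y, rcond=None)[0]       # second stage

# Toy check: recover the true coefficient despite endogeneity in x.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1)); u = rng.normal(size=500)
x = z[:, 0] + 0.5 * u                                  # x correlated with the error u
c = rng.normal(size=(500, 1))                          # exogenous control
y = 2.0 * x + c[:, 0] + u
print(two_stage_ls(y, x, z, c))                        # approx [0, 2, 1]
```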
Open Access Proceeding Paper
Interpersonal Coordination Through Granger Causality Applied to AR Processes Modeling the Time Evolution of Low-Frequency Powers of RR Intervals
by
Pierre Bouny, Eric Grivel, Roberto Diversi and Veronique Deschodt Arsac
Comput. Sci. Math. Forum 2025, 11(1), 30; https://doi.org/10.3390/cmsf2025011030 - 26 Aug 2025
Abstract
In this paper, interpersonal coordination is studied by analyzing physiological synchronization between individuals. To this end, a four-phase protocol is proposed to collect biosignals from the participants in each dyad. Then, the time evolution of the low-frequency (LF) power of the heart rate variability process for each participant is deduced. Finally, an approach based on a bivariate autoregressive model and Granger causality is proposed to determine whether a dependency exists between the biosignals. The approach is first applied to synthetic data and then to real data. This method has the advantage of providing explicit modeling of the dependency, which can help physiologists achieve better interpretation.
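A standard Granger-causality check of the kind described can be run with statsmodels; the snippet below uses synthetic stand-ins for the two participants' LF-power series rather than the authors' bivariate AR identification procedure:

```python
# Granger causality between two synthetic LF-power series via statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
lf_a = rng.normal(size=300)                                  # participant A (stand-in)
lf_b = 0.8 * np.roll(lf_a, 1) + 0.3 * rng.normal(size=300)   # B lags A by one step

# The test asks whether the SECOND column Granger-causes the FIRST.
data = pd.DataFrame({"lf_b": lf_b, "lf_a": lf_a})
results = grangercausalitytests(data[["lf_b", "lf_a"]], maxlag=3)
```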
Open Access Proceeding Paper
Multivariate Forecasting Evaluation: Nixtla-TimeGPT
by
S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 29; https://doi.org/10.3390/cmsf2025011029 - 26 Aug 2025
Abstract
Generative models are being used in all domains. While primarily built for processing text and images, their reach has been extended to data-driven forecasting. Whereas there are many statistical, machine learning, and deep learning models for predictive forecasting, generative models are special because they do not need to be trained beforehand, saving time and computational power. Moreover, multivariate forecasting with existing models is difficult when the future horizons of the regressors are unknown, because the regressors add more uncertainty to the forecasting process. Thus, this study experiments with TimeGPT (zero-shot) by Nixtla to identify whether the generative model can outperform models such as ARIMA, Prophet, NeuralProphet, Linear Regression, XGBoost, Random Forest, LSTM, and RNN. To determine this, we created synthetic datasets and synthetic signals to assess the individual model and regressor performances for 12 models, then used these findings to assess the performance of TimeGPT against the best-fitting models in different real-world scenarios. The results showed that TimeGPT outperforms the other models at multivariate forecasting for weekly granularities by automatically selecting important regressors, whereas its performance for daily and monthly granularities remains weak.
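For orientation, a zero-shot TimeGPT call looks roughly like the following, assuming the current nixtla Python client; the API key, file name, and column names are placeholders:

```python
# Zero-shot TimeGPT call, assuming the `nixtla` client; values are placeholders.
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR_API_KEY")   # placeholder key
df = pd.read_csv("series.csv")                  # assumed columns: unique_id, ds, y

# No training step: the pretrained model forecasts the next 12 steps directly.
fcst = client.forecast(df=df, h=12, time_col="ds", target_col="y")
print(fcst.head())
```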
Open Access Proceeding Paper
Simplicity vs. Complexity in Time Series Forecasting: A Comparative Study of iTransformer Variants
by
Polycarp Shizawaliyi Yakoi, Xiangfu Meng, Danladi Suleman, Adeleye Idowu, Victor Adeyi Odeh and Chunlin Yu
Comput. Sci. Math. Forum 2025, 11(1), 27; https://doi.org/10.3390/cmsf2025011027 - 22 Aug 2025
Abstract
This study re-examines the balance between architectural intricacy and generalization in Transformer models for long-term time series predictions. We perform a systematic comparison involving a lightweight baseline (iTransformer) and two enhanced versions: MiTransformer, which incorporates an external memory component for extending context, and DFiTransformer, which features dual-frequency decomposition along with Learnable Cross-Frequency Attention. All models undergo training using the same protocols across eight standard benchmarks and four forecasting periods. Findings indicate that neither MiTransformer nor DFiTransformer reliably surpasses the baseline. In many instances, the increased complexity leads to greater variance and decreased accuracy, especially with unstable or inconsistent datasets. These results imply that architectural minimalism, when effectively refined, can match or surpass the effectiveness of more complex designs, challenging the prevailing trend toward increasingly intricate forecasting architectures.
Open Access Proceeding Paper
Should You Sleep or Trade Bitcoin?
by
Paridhi Talwar, Aman Jain and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 20; https://doi.org/10.3390/cmsf2025011020 - 22 Aug 2025
Abstract
Dramatic price swings and the possibility of extreme returns have made Bitcoin a topic of intense interest for investors and researchers alike. Using advanced neural network models, including CNN, RCNN, and LSTM networks, this paper examines the intricacies of Bitcoin price behavior. We study different time intervals (close-to-close, close-to-open, open-to-close, and day-to-day) to find patterns that can be used to develop an investment strategy. Average volatility over a year, six months, and three months is compared, and the predictive power of volatility-based strategies is weighed against a traditional buy-and-hold strategy. Our findings highlight the strengths and weaknesses of each neural network model and provide useful insights into optimizing cryptocurrency portfolios. This study contributes to the literature on price prediction and volatility analysis of cryptocurrencies, providing useful information for researchers and investors executing strategic moves within the volatile cryptocurrency market.
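The four return intervals can be computed directly from daily OHLC data; in the sketch below the file path and column names are illustrative, and "day-to-day" is read as open-to-open, which is one plausible interpretation:

```python
# The four return intervals from daily OHLC bars (illustrative schema).
import pandas as pd

btc = pd.read_csv("btc_daily.csv", parse_dates=["date"]).set_index("date")

returns = pd.DataFrame({
    "close_to_close": btc["close"].pct_change(),
    "close_to_open":  btc["open"] / btc["close"].shift(1) - 1,  # overnight move
    "open_to_close":  btc["close"] / btc["open"] - 1,           # intraday move
    "day_to_day":     btc["open"].pct_change(),                 # open-to-open reading
})

# Rolling volatility over roughly 3, 6, and 12 months of daily bars.
for window in (90, 180, 365):
    returns[f"vol_{window}d"] = returns["close_to_close"].rolling(window).std()
```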
Open Access Proceeding Paper
Estimating Forecast Accuracy Metrics by Learning from Time Series Characteristics
by
Alina Timmermann and Ananya Pal
Comput. Sci. Math. Forum 2025, 11(1), 19; https://doi.org/10.3390/cmsf2025011019 - 19 Aug 2025
Abstract
Accurate forecasts play a crucial role in various industries, where enhancing forecast accuracy has been a major focus of research. However, for volatile data and industrial applications, ensuring the reliability and interpretability of forecast results is equally important. This study shifts the focus from predicting future values to estimating forecast accuracy with confidence when no future validation data is available. To achieve this, we use time series characteristics calculated by statistical tests to estimate forecast accuracy metrics. Two methods are applied: first, estimation by the Euclidean distances between time series characteristic values, and second, estimation by clustering of time series characteristics. In-sample forecast accuracy serves as a benchmark method. A diverse industrial dataset is used to evaluate the methods. The results demonstrate a significant correlation between certain time series characteristics and the estimation quality of forecast accuracy metrics. For all forecast accuracy metrics, the two proposed methods outperform the in-sample forecast estimation. These findings contribute to improving the reliability and interpretability of forecast evaluations, particularly in industrial applications with unstable data.
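A minimal version of the first method (nearest neighbour in characteristic space) might look like this; the characteristic vector here is a toy stand-in for the paper's test-based features:

```python
# Estimate expected accuracy from the nearest neighbour in characteristic space.
import numpy as np

def characteristics(y):
    """Toy characteristic vector; the paper derives features from statistical tests."""
    y = np.asarray(y, float)
    return np.array([
        y.std() / (abs(y.mean()) + 1e-9),   # coefficient of variation
        np.corrcoef(y[:-1], y[1:])[0, 1],   # lag-1 autocorrelation
        np.diff(y).std(),                   # roughness of first differences
    ])

def estimate_accuracy(new_series, reference_series, reference_smape):
    """Return the known sMAPE of the Euclidean-closest reference series."""
    target = characteristics(new_series)
    dists = [np.linalg.norm(characteristics(r) - target) for r in reference_series]
    return reference_smape[int(np.argmin(dists))]
```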
Open Access Proceeding Paper
The Explainability of Machine Learning Algorithms for Victory Prediction in the Video Game Dota 2
by
Julio Losada-Rodríguez, Pedro A. Castillo, Antonio Mora and Pablo García-Sánchez
Comput. Sci. Math. Forum 2025, 11(1), 26; https://doi.org/10.3390/cmsf2025011026 - 18 Aug 2025
Abstract
Video games, especially competitive ones such as Dota 2, have gained great relevance both as entertainment and in e-sports, where predicting the outcome of games can offer significant strategic advantages. In this context, machine learning (ML) is presented as a useful tool for analysing and predicting performance in these games based on data collected before the start of the games, such as character selection information. Thus, in this work, we have developed and tested ML models, including Random Forest and Gradient Boosting, to predict the outcome of Dota 2 matches. This study is innovative in that it incorporates explainability techniques using Shapley Additive Explanations (SHAP) graphs, allowing us to understand which specific factors influence model predictions. Data extracted from the OpenDota API were preprocessed and used to train the models, evaluating them using metrics such as accuracy, precision, recall, F1-score, and cross-validated accuracy. The results indicate that predictive models, particularly Random Forest, can accurately predict game outcomes based only on pregame information, also suggesting that the explainability of machine learning techniques can be effective for analysing strategic factors in competitive video games.
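The SHAP step generalizes beyond this paper; a minimal sketch with synthetic stand-in features (the real inputs would be pregame hero picks and related data from OpenDota) looks like:

```python
# SHAP on a Random Forest, with synthetic stand-ins for pregame features.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # exact, fast SHAP for tree ensembles
shap_values = explainer.shap_values(X[:200])   # per-feature contributions

# Depending on the shap version, shap_values is a list (one array per class)
# or a 3-D array; select the positive class before plotting.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(vals, X[:200])
```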
Open Access Proceeding Paper
Comparative Analysis of Forecasting Models for Disability Resource Planning in Brazil’s National Textbook Program
by
Luciano Cabral, Luam Santos, Jário Santos Júnior, Thyago Oliveira, Dalgoberto Pinho Júnior, Nicholas Cruz, Joana Lobo, Breno Duarte, Lenardo Silva, Rafael Silva and Bruno Pimentel
Comput. Sci. Math. Forum 2025, 11(1), 25; https://doi.org/10.3390/cmsf2025011025 - 13 Aug 2025
Abstract
The accurate forecasting of student disability trends is essential for optimizing educational accessibility and resource distribution in the context of Brazil’s oldest public policy, the National Textbook Program (PNLD). This study applies machine learning (ML) and time series forecasting (TSF) models to predict the number of visually impaired students in Brazil using educational census data from 2021 to 2023, with the aim of estimating the number of Braille textbooks to be acquired under the PNLD. By performing a comparative analysis of various ML models (e.g., Naive Bayes, ElasticNet, gradient boosting) and TSF techniques (e.g., ARIMA and SARIMA models, as well as exponential smoothing) to predict future enrollment trends, we identify the most effective approaches for school-level and long-term disability enrollment predictions. Results show that ElasticNet and gradient boosting outperform TSF models in forecasting enrollment. Despite challenges related to data inconsistencies and reporting variations, incorporating external demographic and health data could further improve predictive accuracy. This research contributes to AI-driven educational accessibility by demonstrating how predictive analytics can enhance policy decisions and ensure an equitable distribution of resources for students with disabilities.
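One simple way ElasticNet can be applied to short census panels is to regress the latest year on lagged enrollments; the schema below is illustrative, not the study's actual pipeline:

```python
# ElasticNet on lagged enrollment counts (illustrative schema).
import pandas as pd
from sklearn.linear_model import ElasticNet

census = pd.read_csv("census.csv")          # columns: school_id, y2021, y2022, y2023
X = census[["y2021", "y2022"]].to_numpy()   # two lagged years as features
y = census["y2023"]

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# Roll the lag window forward one year to project 2024 per school.
census["pred_2024"] = model.predict(census[["y2022", "y2023"]].to_numpy())
```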
Open Access Proceeding Paper
Drift and Diffusion in Geospatial Econometrics: Implications for Panel Data and Time Series
by
James Ming Chen
Comput. Sci. Math. Forum 2025, 11(1), 24; https://doi.org/10.3390/cmsf2025011024 - 11 Aug 2025
Cited by 2
Abstract
Economic data is highly dependent on its arrangement within space and time. Perhaps the most obvious and important definition of space is geospatial configuration on the Earth’s surface. Consideration of geospatial effects produces a dramatic improvement in the prediction of median housing prices across 20,640 districts in California. Unconditional regression with engineered variables, two-stage least squares regression (2SLS), and iterative local regression approach r² ≈ 0.8536, the goodness of fit attained in the original California study. Geospatial methods can be generalized to panel data analysis and time-series forecasting. Distance-sensitive analysis reveals the value of treating time-variant data as potentially discrete and discontinuous. This insight highlights the value of methodologies that suspend the assumption that data varies in a continuous or even linear fashion across space and time.
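The dataset referenced here ships with scikit-learn (20,640 districts). As a baseline flavour of "engineered variables", the sketch below adds a toy geospatial feature, distance to a fixed point near San Francisco; it is illustrative and does not reproduce the paper's feature set or its r² ≈ 0.8536:

```python
# California housing (20,640 districts) with one toy engineered feature.
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression

df = fetch_california_housing(as_frame=True).frame

# Toy geospatial variable: planar distance to a point near San Francisco.
df["dist_sf"] = np.hypot(df["Latitude"] - 37.77, df["Longitude"] + 122.42)

X = df.drop(columns="MedHouseVal")
r2 = LinearRegression().fit(X, df["MedHouseVal"]).score(X, df["MedHouseVal"])
print(f"in-sample r² with the added distance feature: {r2:.4f}")
```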
Open Access Proceeding Paper
Enhancing Public Health Insights and Interpretation Through AI-Driven Time-Series Analysis: Hierarchical Clustering, Hamming Distance, and Binary Tree Visualization of Infectious Disease Trends
by
Ayauzhan Arystambekova and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 23; https://doi.org/10.3390/cmsf2025011023 - 11 Aug 2025
Abstract
This paper applies hierarchical clustering and Hamming Distance to analyze the temporal trends of infectious diseases across different regions of Uzbekistan. By leveraging hierarchical clustering, we effectively group regions based on disease similarity without requiring predefined cluster numbers. Hamming Distance further quantifies disease trajectory similarities, helping assess epidemiological patterns over time. Binary tree visualizations enhance the interpretability of clustering results, offering a novel method for identifying regional trends. The dataset includes yearly incidence rates of seven infectious diseases from 2012 to 2019, along with population, healthcare infrastructure, and geographic attributes for each region. This approach provides an interpretable framework for public health analysis and decision-making.
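The pipeline reduces to a few SciPy calls. In this sketch, yearly incidence trajectories are binarized into rising/falling steps before the Hamming comparison; the shapes and the binarization rule are illustrative assumptions, not the paper's exact encoding:

```python
# Hierarchical clustering of binarized incidence trajectories (illustrative shapes).
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
incidence = rng.random((14, 8))              # 14 regions x 8 years (2012-2019)
updown = np.diff(incidence, axis=1) > 0      # rising (1) / falling (0) steps

dists = pdist(updown, metric="hamming")      # pairwise Hamming distances
tree = linkage(dists, method="average")      # no preset number of clusters needed
dendrogram(tree)                             # the binary-tree visualization
```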
Open Access Proceeding Paper
Exploring Multi-Modal LLMs for Time Series Anomaly Detection
by
Hao Niu, Guillaume Habault, Huy Quang Ung, Roberto Legaspi, Zhi Li, Yanan Wang, Donghuo Zeng, Julio Vizcarra and Masato Taya
Comput. Sci. Math. Forum 2025, 11(1), 22; https://doi.org/10.3390/cmsf2025011022 - 11 Aug 2025
Abstract
Anomaly detection in time series data is crucial across various domains. Traditional methods often struggle with continuously evolving time series requiring adjustment, whereas large language models (LLMs) and multi-modal LLMs (MLLMs) have emerged as promising zero-shot anomaly detectors by leveraging embedded knowledge. This study expands recent evaluations of MLLMs for zero-shot time series anomaly detection by exploring newer models, additional input representations, and varying input sizes, and by conducting further analyses. Our findings reveal that while MLLMs are effective for zero-shot detection, they still face limitations, such as difficulty in effectively integrating text and vision representations and in handling longer input lengths. These challenges unveil diverse opportunities for future improvements.
Open Access Proceeding Paper
Forecasting Techniques for Univariate Time Series Data: Analysis and Practical Applications by Category
by
Leonard Dervishi, Antonios Raptakis and Gerald Bieber
Comput. Sci. Math. Forum 2025, 11(1), 21; https://doi.org/10.3390/cmsf2025011021 - 11 Aug 2025
Abstract
Effective forecasting is vital in various domains as it supports informed decision-making and risk mitigation. This paper aims to improve the selection of appropriate forecasting methods for univariate time series. We propose a systematic categorization based on key characteristics, such as stationarity and seasonality, and analyze well-known forecasting techniques suitable for each category. Additionally, we examine how forecasting horizons, the time periods for which forecasts are generated, affect method performance, thus addressing a significant gap in the existing literature. Our findings reveal that certain techniques excel in specific categories and demonstrate performance progression over time, indicating how they improve or decline relative to other techniques. By enhancing the understanding of method effectiveness across diverse time series characteristics, this research aims to guide professionals in making informed choices for their forecasting needs.
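A minimal version of the categorization step, pairing an ADF stationarity test with a seasonality-strength estimate (thresholds are illustrative), could read:

```python
# Categorize a univariate series by stationarity and seasonality strength.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller

def categorize(y, period):
    y = pd.Series(y).astype(float)
    stationary = adfuller(y)[1] < 0.05          # ADF test p-value
    dec = seasonal_decompose(y, period=period)
    deseason = (dec.seasonal + dec.resid).dropna()
    # Hyndman-style strength of seasonality: 1 - Var(resid)/Var(seasonal+resid).
    strength = max(0.0, 1 - dec.resid.dropna().var() / deseason.var())
    return ("stationary" if stationary else "non-stationary",
            "seasonal" if strength > 0.5 else "non-seasonal")
```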
Open Access Proceeding Paper
Emergent Behavior and Computational Capabilities in Nonlinear Systems: Advancing Applications in Time Series Forecasting and Predictive Modeling
by
Kárel García-Medina, Daniel Estevez-Moya, Ernesto Estevez-Rams and Reinhard B. Neder
Comput. Sci. Math. Forum 2025, 11(1), 17; https://doi.org/10.3390/cmsf2025011017 - 11 Aug 2025
Abstract
Natural dynamical systems can often display various long-term behaviours, ranging from entirely predictable decaying states to unpredictable, chaotic regimes or, more interestingly, highly correlated and intricate states featuring emergent phenomena. That, of course, imposes a level of generality on the models we use to study them. Among those models, coupled oscillators and cellular automata (CA) present a unique opportunity to advance the understanding of complex temporal behaviours because of their conceptual simplicity but very rich dynamics. In this contribution, we review the work completed by our research team over the last few years in the development and application of an alternative information-based characterization scheme to study the emergent behaviour and information handling of nonlinear systems, specifically Adler-type oscillators under different types of coupling: local phase-dependent (LAP) coupling and Kuramoto-like local (LAK) coupling. We thoroughly studied the long-term dynamics of these systems, identifying several distinct dynamic regimes, ranging from periodic to chaotic and complex. The systems were analysed qualitatively and quantitatively, drawing on entropic measures and information theory. Measures such as entropy density (Shannon entropy rate), effective complexity measure, and Lempel–Ziv complexity/information distance were employed. Our analysis revealed similar patterns and behaviours between these systems and CA, which are computationally capable systems, for some specific rules and regimes. These findings further reinforce the argument around computational capabilities in dynamical systems, as understood by information transmission, storage, and generation measures. Furthermore, the edge of chaos hypothesis (EOC) was verified in coupled oscillator systems for specific regions of parameter space, where a sudden increase in effective complexity measure was observed, indicating enhanced information processing capabilities. Given the potential for exploiting this non-anthropocentric computational power, we propose this alternative information-based characterization scheme as a general framework to identify a dynamical system’s proximity to computationally enhanced states. More broadly, this study advances the understanding of emergent behaviour in nonlinear systems. It explores the potential for leveraging the features of dynamical systems operating at the edge of chaos by coupling them with computationally capable settings within machine learning frameworks, specifically by using them as reservoirs in Echo State Networks (ESNs) for time series forecasting and predictive modeling. This approach aims to enhance the predictive capacity, particularly that of chaotic systems, by utilising EOC systems’ complex, sensitive dynamics as the ESN reservoir.
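One of the measures named above, Lempel–Ziv complexity, is easy to sketch in the simple "count of new phrases" (LZ76-style) form; this is an illustration, not the authors' exact estimator:

```python
# Lempel-Ziv complexity as a count of new phrases in a symbolized trajectory.
def lempel_ziv_complexity(s: str) -> int:
    """Number of distinct phrases in a greedy left-to-right LZ parsing of s."""
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        while s[i:j] in phrases and j <= len(s):
            j += 1                  # extend the phrase until it is new
        phrases.add(s[i:j])
        i = j
    return len(phrases)

# Periodic dynamics symbolize to low complexity; noise to high complexity.
import random
random.seed(0)
print(lempel_ziv_complexity("01" * 32))                                        # low
print(lempel_ziv_complexity("".join(random.choice("01") for _ in range(64))))  # higher
```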
Open Access Proceeding Paper
Fundamentals of Time Series Analysis in Electricity Price Forecasting
by
Ciaran O’Connor, Andrea Visentin and Steven Prestwich
Comput. Sci. Math. Forum 2025, 11(1), 16; https://doi.org/10.3390/cmsf2025011016 - 11 Aug 2025
Abstract
Time series forecasting is a cornerstone of decision-making in energy and finance, yet many studies fail to rigorously analyse the underlying dataset characteristics, leading to suboptimal model selection and unreliable outcomes. This paper addresses these shortcomings by presenting a comprehensive framework that integrates fundamental time series diagnostics (stationarity tests, autocorrelation analysis, heteroscedasticity, multicollinearity, and correlation analysis) into forecasting workflows. Unlike existing studies that prioritise pre-packaged machine learning and deep learning methods, often at the expense of interpretable statistical benchmarks, our approach advocates the combined use of statistical models alongside advanced machine learning methods. Using the Day-Ahead Market dataset from the Irish electricity market as a case study, we demonstrate how rigorous statistical diagnostics can guide model selection, improve interpretability, and enhance forecasting accuracy. This work offers a novel, integrative methodology that bridges the gap between statistical rigour and modern computational techniques, improving reliability in time series forecasting.
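The diagnostic battery maps onto standard statsmodels calls; in the sketch below, y stands for the price series and X for a regressor matrix, both placeholders:

```python
# Core diagnostics mapped to statsmodels; y is the price series, X the regressors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tsa.stattools import acf, adfuller

def diagnostics(y: pd.Series, X: pd.DataFrame) -> dict:
    Xc = sm.add_constant(X)
    resid = sm.OLS(y, Xc).fit().resid
    return {
        "adf_pvalue": adfuller(y)[1],                            # stationarity
        "acf_lag1": acf(y, nlags=1)[1],                          # autocorrelation
        "bp_pvalue": het_breuschpagan(resid, Xc)[1],             # heteroscedasticity
        "max_vif": max(variance_inflation_factor(Xc.values, i)   # multicollinearity
                       for i in range(1, Xc.shape[1])),
        "max_abs_corr": X.corr().abs().where(                    # pairwise correlation
            ~np.eye(X.shape[1], dtype=bool)).max().max(),
    }
```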
Open Access Proceeding Paper
Leveraging Exogenous Regressors in Demand Forecasting
by
S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 15; https://doi.org/10.3390/cmsf2025011015 - 1 Aug 2025
Abstract
Demand forecasting differs from traditional forecasting because it is a process of forecasting multiple time series collectively. It is challenging to implement models that generalise and perform well, in terms of accuracy and scalability, while forecasting many time series at once. Moreover, external influences such as holidays, disasters, and promotions can create drifts and structural breaks, making accurate demand forecasting a challenge. At the same time, the external features used for multivariate forecasting often worsen prediction accuracy because they introduce more unknowns into the forecasting process. This paper explores effective ways of leveraging exogenous regressors to surpass the accuracy of the univariate approach, creating synthetic scenarios to understand the performance of models and regressors. It finds that the forecastability of the correlated external features plays a large role in determining whether they improve or worsen accuracy for models like ARIMA, yet even extra regressors forecasted with 100% accuracy sometimes fail to surpass the univariate predictive accuracy. The findings are replicated in cases such as forecasting hourly docked-bike demand per station over a week, where the multivariate approach outperformed the univariate approach by forecasting the regressors with a Bi-LSTM and using their predicted values to forecast the target demand with ARIMA.
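The two-step pattern from the final sentence, in miniature: forecast the exogenous regressors first, then feed the predicted path into an ARIMA-family model with exogenous support. A naive persistence forecast stands in for the paper's Bi-LSTM, and the file and column names are placeholders:

```python
# Forecast exogenous regressors first, then feed them to ARIMA-with-exog.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("bike_demand.csv", parse_dates=["ds"]).set_index("ds")
h = 24
train = df.iloc[:-h]

# Step 1 (stand-in for the Bi-LSTM): persist the last regressor values forward.
exog_cols = ["temperature", "is_holiday"]
future_exog = np.repeat(train[exog_cols].iloc[[-1]].to_numpy(), h, axis=0)

# Step 2: ARIMA with exogenous regressors, fed the predicted exog path.
model = SARIMAX(train["demand"], exog=train[exog_cols], order=(1, 1, 1)).fit(disp=False)
forecast = model.forecast(steps=h, exog=future_exog)
```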
Open Access Proceeding Paper
Beyond the Hodrick Prescott Filter: Wavelets and the Dynamics of U.S.–Mexico Trade
by
José Gerardo Covarrubias and Xuedong Liu
Comput. Sci. Math. Forum 2025, 11(1), 14; https://doi.org/10.3390/cmsf2025011014 - 1 Aug 2025
Abstract
This study analyzes the evolution of the Mexico–U.S. trade balance as a seasonally adjusted time series, comparing the Hodrick–Prescott (HP) filter and wavelet analysis. The HP filter allowed the trend and cycle to be extracted from the series, while wavelets decomposed the information into different time scales, revealing short-, medium-, and long-term fluctuations. The results show that HP provides a simplified view of the trend, while wavelets more accurately capture key events and cyclical dynamics. It is concluded that wavelets offer a more robust tool for studying the volatility and persistence of economic shocks in bilateral trade.
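Both decompositions are available off the shelf; the sketch below runs statsmodels' HP filter and a discrete wavelet decomposition via PyWavelets side by side, with a conventional monthly lambda and a wavelet choice that may differ from the authors':

```python
# HP filter vs. discrete wavelet decomposition on the trade-balance series.
import pandas as pd
import pywt
from statsmodels.tsa.filters.hp_filter import hpfilter

trade = pd.read_csv("mx_us_balance.csv", index_col=0).squeeze()  # monthly, SA

cycle, trend = hpfilter(trade, lamb=129600)   # conventional lambda for monthly data

# Detail coefficients at successive levels separate short-, medium-, and
# long-term fluctuations; the approximation carries the long-run trend.
coeffs = pywt.wavedec(trade.to_numpy(), wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]
```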
Open Access Proceeding Paper
Inclusive Turnout for Equitable Policies: Using Time Series Forecasting to Combat Policy Polarization
by
Natasya Liew, Sreeya R. K. Haninatha, Sarthak Pattnaik, Kathleen Park and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 11; https://doi.org/10.3390/cmsf2025011011 - 1 Aug 2025
Abstract
Selective voter mobilization dominates U.S. elections, with campaigns prioritizing swing voters to win critical states. While effective in the short term, this strategy deepens policy polarization, marginalizes minorities, and undermines representative democracy. This paper investigates voter turnout disparities and policy manipulation using advanced time series forecasting models (ARIMA, LSTM, and seasonal decomposition). Analyzing demographic and geographic data, we uncover significant turnout inequities, particularly for marginalized groups, and propose actionable reforms to enhance equitable voter participation. By integrating data-driven insights with theoretical perspectives, this study offers practical recommendations for campaigns and policymakers to counter polarization and foster inclusive democratic representation.
Open Access Proceeding Paper
Cascading Multi-Agent Policy Optimization for Demand Forecasting
by
Saeed Varasteh Yazdi
Comput. Sci. Math. Forum 2025, 11(1), 18; https://doi.org/10.3390/cmsf2025011018 - 31 Jul 2025
Abstract
Reliable demand forecasting is crucial for effective supply chain management, where inaccurate forecasts can lead to frequent out-of-stock or overstock situations. While numerous statistical and machine learning methods have been explored for demand forecasting, reinforcement learning approaches, despite their significant potential, remain largely unexplored in this domain. In this paper, we propose a multi-agent deep reinforcement learning solution designed to accurately predict demand across multiple stores. We present empirical evidence that demonstrates the effectiveness of our model using a real-world dataset. The results confirm the practicality of our proposed approach and highlight its potential to improve demand forecasting in retail and potentially other forecasting scenarios.