Analytics, Volume 4, Issue 4 (December 2025) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 2557 KB  
Article
System Inertia Cost Forecasting Using Machine Learning: A Data-Driven Approach for Grid Energy Trading in Great Britain
by Maitreyee Dey, Soumya Prakash Rana and Preeti Patel
Analytics 2025, 4(4), 30; https://doi.org/10.3390/analytics4040030 - 23 Oct 2025
Abstract
As modern power systems integrate more renewable and decentralised generation, maintaining grid stability has become increasingly challenging. This study proposes a data-driven machine learning framework for forecasting system inertia service costs—a key yet underexplored variable influencing energy trading and frequency stability in Great Britain. Using eight years (2017–2024) of National Energy System Operator (NESO) data, four models—Long Short-Term Memory (LSTM), Residual LSTM, eXtreme Gradient Boosting (XGBoost), and Light Gradient-Boosting Machine (LightGBM)—are comparatively analysed. LSTM-based models capture temporal dependencies, while ensemble methods effectively handle nonlinear feature relationships. Results demonstrate that LightGBM achieves the highest predictive accuracy, offering a robust method for inertia cost estimation and market intelligence. The framework contributes to strategic procurement planning and supports market design for a more resilient, cost-effective grid.
(This article belongs to the Special Issue Business Analytics and Applications)
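The abstract's supervised framing — gradient-boosted trees predicting a cost series from recent history — can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the data here is synthetic (the paper uses NESO records), and scikit-learn's GradientBoostingRegressor stands in for LightGBM.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic daily "inertia cost" series with weekly seasonality
# (a stand-in for the NESO data used in the paper).
n = 500
t = np.arange(n)
cost = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, n)

# Supervised framing: predict today's cost from the previous 7 days.
lags = 7
X = np.column_stack([cost[i:n - lags + i] for i in range(lags)])
y = cost[lags:]

split = int(0.8 * len(y))
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X[:split], y[:split])
preds = model.predict(X[split:])

mae = np.mean(np.abs(preds - y[split:]))
print(f"test MAE: {mae:.2f}")
```

Swapping in LightGBM's `LGBMRegressor` (same fit/predict interface) and real lagged market features would bring this closer to the comparative setup the paper describes.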
25 pages, 3034 KB  
Article
Distributional CNN-LSTM, KDE, and Copula Approaches for Multimodal Multivariate Data: Assessing Conditional Treatment Effects
by Jong-Min Kim
Analytics 2025, 4(4), 29; https://doi.org/10.3390/analytics4040029 - 21 Oct 2025
Abstract
We introduce a distributional CNN-LSTM framework for probabilistic multivariate modeling and heterogeneous treatment effect (HTE) estimation. The model jointly captures complex dependencies among multiple outcomes and enables precise estimation of individual-level conditional average treatment effects (CATEs). In simulation studies with multivariate Gaussian mixtures, the CNN-LSTM demonstrates robust density estimation and strong CATE recovery, particularly as mixture complexity increases, while classical methods such as Kernel Density Estimation (KDE) and Gaussian Copulas may achieve higher log-likelihood or coverage in simpler scenarios. On real-world datasets, including Iris and Criteo Uplift, the CNN-LSTM achieves the lowest CATE RMSE, confirming its practical utility for individualized prediction, although KDE and Gaussian Copula approaches may perform better on global likelihood or coverage metrics. These results indicate that the CNN-LSTM can be trained efficiently on moderate-sized datasets while maintaining stable predictive performance. Overall, the framework is particularly valuable in applications requiring accurate individual-level effect estimation and handling of multimodal heterogeneity—such as personalized medicine, economic policy evaluation, and environmental risk assessment—with its primary strength being superior CATE recovery under complex outcome distributions, even when likelihood-based metrics favor simpler baselines.
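The CATE quantity the abstract evaluates can be made concrete with a minimal baseline. The sketch below is not the paper's CNN-LSTM; it is a simple T-learner (separate outcome models per treatment arm, CATE as their difference) on synthetic data with a known effect, which is the kind of estimand a CATE RMSE would be computed against.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic data with a known heterogeneous effect: tau(x) = 2 * x[:, 0].
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, n)                       # randomized treatment
tau = 2 * X[:, 0]
y = X @ np.array([1.0, -0.5, 0.3]) + tau * T + rng.normal(0, 0.5, n)

# T-learner: fit separate outcome models on control and treated units,
# then estimate CATE(x) = mu1(x) - mu0(x).
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], y[T == 0])
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], y[T == 1])
cate_hat = mu1.predict(X) - mu0.predict(X)

# Because tau is known here, CATE RMSE can be measured directly.
rmse = np.sqrt(np.mean((cate_hat - tau) ** 2))
print(f"CATE RMSE: {rmse:.2f}")
```

The paper's contribution replaces the two random forests with a distributional CNN-LSTM that models the full multivariate outcome distribution rather than conditional means.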
19 pages, 674 KB  
Article
Reservoir Computation with Networks of Differentiating Neuron Ring Oscillators
by Alexander Yeung, Peter DelMastro, Arjun Karuvally, Hava Siegelmann, Edward Rietman and Hananel Hazan
Analytics 2025, 4(4), 28; https://doi.org/10.3390/analytics4040028 - 20 Oct 2025
Abstract
Reservoir computing is an approach to machine learning that leverages the dynamics of a complex system alongside a simple, often linear, machine learning model for a designated task. While many efforts have previously focused their attention on integrating neurons, which produce an output in response to large, sustained inputs, we focus on using differentiating neurons, which produce an output in response to large changes in input. Here, we introduce a small-world graph built from rings of differentiating neurons as a Reservoir Computing substrate. We find the coupling strength and network topology that enable these small-world networks to function as an effective reservoir. The dynamics of differentiating neurons naturally give rise to oscillatory dynamics when arranged in rings, where we study their computational use in the Reservoir Computing setting. We demonstrate the efficacy of these networks in the MNIST digit recognition task, achieving comparable performance of 90.65% to existing Reservoir Computing approaches. Beyond accuracy, we conduct systematic analysis of our reservoir’s internal dynamics using three complementary complexity measures that quantify neuronal activity balance, input dependence, and effective dimensionality. Our analysis reveals that optimal performance emerges when the reservoir operates with intermediate levels of neural entropy and input sensitivity, consistent with the edge-of-chaos hypothesis, where the system balances stability and responsiveness. The findings suggest that differentiating neurons can be a potential alternative to integrating neurons and can provide a sustainable future alternative for power-hungry AI applications. Full article
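The reservoir-computing recipe the abstract describes — drive a fixed dynamical system with the input, then train only a linear readout — can be sketched as below. This is a toy, not the authors' architecture: the ring topology, the "respond to change, not level" neuron model, the coupling value, and the delayed-recall task are all illustrative assumptions standing in for their small-world rings of differentiating neurons.

```python
import numpy as np

rng = np.random.default_rng(2)

# A ring of "differentiating" neurons: each neuron responds to the *change*
# in its drive rather than its level (illustrative model, not the authors').
n_neurons = 100
ring = np.zeros((n_neurons, n_neurons))
for i in range(n_neurons):
    ring[i, (i - 1) % n_neurons] = 1.0          # each neuron listens to its left neighbour
w_in = rng.normal(0, 0.5, n_neurons)            # random input weights
coupling = 0.9                                  # assumed coupling strength

def run_reservoir(u):
    """Drive the ring with scalar signal u and collect neuron states."""
    states = np.zeros((len(u), n_neurons))
    x = np.zeros(n_neurons)
    prev_drive = np.zeros(n_neurons)
    for t, ut in enumerate(u):
        drive = coupling * ring @ x + w_in * ut
        x = np.tanh(drive - prev_drive)         # fire on the change in drive
        prev_drive = drive
        states[t] = x
    return states

# Linear (ridge) readout trained to recall the input 5 steps back —
# a standard memory probe for reservoirs.
u = rng.normal(size=500)
S = run_reservoir(u)
target = np.roll(u, 5)
S_tr, y_tr = S[50:400], target[50:400]          # drop a washout span, hold out the tail
w = np.linalg.solve(S_tr.T @ S_tr + 1e-3 * np.eye(n_neurons), S_tr.T @ y_tr)
pred = S[400:] @ w
print("test correlation:", np.corrcoef(pred, target[400:])[0, 1])
```

The paper's MNIST result replaces this scalar memory task with image inputs and adds small-world rewiring on top of the rings; the train-only-the-readout structure is the same.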
50 pages, 6680 KB  
Article
Multiplicative Decomposition Model to Predict UK’s Long-Term Electricity Demand with Monthly and Hourly Resolution
by Marie Baillon, María Carmen Romano and Ekkehard Ullner
Analytics 2025, 4(4), 27; https://doi.org/10.3390/analytics4040027 - 6 Oct 2025
Abstract
The UK electricity market is changing to adapt to Net Zero targets and respond to disruptions like the Russia–Ukraine war. This requires strategic planning to decide on the construction of new electricity generation plants for a resilient UK electricity grid. Such planning is based on forecasting the UK electricity demand long-term (from 1 year and beyond). In this paper, we propose a long-term predictive model by identifying the main components of the UK electricity demand, modelling each of these components, and combining them in a multiplicative manner to deliver a single long-term prediction. To the best of our knowledge, this study is the first to apply a multiplicative decomposition model for long-term predictions at both monthly and hourly resolutions, combining neural networks with Fourier analysis. This approach is extremely flexible and accurate, with a mean absolute percentage error of 4.16% and 8.62% in predicting the monthly and hourly electricity demand, respectively, from 2019 to 2021.
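The multiplicative decomposition idea — model each component separately, then multiply them back together — can be illustrated on synthetic monthly data. This sketch uses a crude linear trend fit and calendar-month seasonal factors in place of the paper's neural-network and Fourier components; the series, split point, and units are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly demand = slow trend x seasonal factor (multiplicative),
# standing in for the UK series modelled in the paper.
months = np.arange(120)                               # 10 years
trend = 30 - 0.05 * months                            # slowly declining level
seasonal = 1 + 0.2 * np.cos(2 * np.pi * months / 12)  # winter peak
demand = trend * seasonal * (1 + rng.normal(0, 0.01, 120))

train = 96                                            # fit on 8 years, test on 2

# 1) Trend component: a linear fit on the training span (the cos term
#    averages out over whole years, so this approximates the trend).
coef = np.polyfit(months[:train], demand[:train], 1)
trend_hat = np.polyval(coef, months)

# 2) Seasonal component: average the detrended ratio per calendar month.
ratio = demand[:train] / trend_hat[:train]
seasonal_hat = np.array([ratio[m::12].mean() for m in range(12)])

# 3) Recombine multiplicatively and score on the held-out years.
pred = trend_hat * seasonal_hat[months % 12]
mape = 100 * np.mean(np.abs(pred[train:] - demand[train:]) / demand[train:])
print(f"test MAPE: {mape:.2f}%")
```

Replacing step 1 with a neural network and step 2 with Fourier-derived profiles, and adding an hourly factor, gives the shape of the model the abstract reports at 4.16% (monthly) and 8.62% (hourly) MAPE.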
16 pages, 894 KB  
Article
Fairness in Predictive Marketing: Auditing and Mitigating Demographic Bias in Machine Learning for Customer Targeting
by Sayee Phaneendhar Pasupuleti, Jagadeesh Kola, Sai Phaneendra Manikantesh Kodete and Sree Harsha Palli
Analytics 2025, 4(4), 26; https://doi.org/10.3390/analytics4040026 - 1 Oct 2025
Abstract
As organizations increasingly turn to machine learning for customer segmentation and targeted marketing, concerns about fairness and algorithmic bias have become more urgent. This study presents a comprehensive fairness audit and mitigation framework for predictive marketing models using the Bank Marketing dataset. We train logistic regression and random forest classifiers to predict customer subscription behavior and evaluate their performance across key demographic groups, including age, education, and job type. Using model explainability techniques such as SHAP and fairness metrics including disparate impact and true positive rate parity, we uncover notable disparities in model behavior that could result in discriminatory targeting. We implement three mitigation strategies—reweighing, threshold adjustment, and feature exclusion—and assess their effectiveness in improving fairness while preserving business-relevant performance metrics. Among these, reweighing produced the most balanced outcome, raising the Disparate Impact Ratio for older individuals from 0.65 to 0.82 and reducing the true positive rate parity gap by over 40%, with only a modest decline in precision (from 0.78 to 0.76). We propose a replicable workflow for embedding fairness auditing into enterprise BI systems and highlight the strategic importance of ethical AI practices in building accountable and inclusive marketing technologies.
(This article belongs to the Special Issue Business Analytics and Applications)
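Two of the abstract's ingredients — the disparate impact ratio and the reweighing mitigation — have compact standard definitions and can be demonstrated on toy data. This sketch uses an invented protected-group flag and selection rates (not the Bank Marketing dataset), and implements the classical Kamiran–Calders reweighing scheme, which appears to be what "reweighing" refers to here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy audit data: a protected-group flag (e.g. "older customer") and a
# binary targeting decision, constructed so the group is under-selected.
n = 10000
group = rng.random(n) < 0.3
decision = np.where(group, rng.random(n) < 0.10, rng.random(n) < 0.20)

# Disparate impact ratio: P(selected | protected) / P(selected | rest).
di = decision[group].mean() / decision[~group].mean()
print(f"disparate impact ratio: {di:.2f}")

# Reweighing (Kamiran & Calders): weight each (group, label) cell by
# P(group) * P(label) / P(group, label), so group and label become
# statistically independent in the weighted training data.
weights = np.empty(n)
for g in (False, True):
    for y in (False, True):
        cell = (group == g) & (decision == y)
        weights[cell] = (group == g).mean() * (decision == y).mean() / cell.mean()

# After reweighing, the weighted selection rates are equalized.
wi_protected = np.average(decision[group], weights=weights[group])
wi_rest = np.average(decision[~group], weights=weights[~group])
print(f"weighted selection rates: {wi_protected:.3f} vs {wi_rest:.3f}")
```

In the paper's workflow these weights would be passed to the classifier's training routine (e.g. `sample_weight` in scikit-learn) rather than applied to the labels directly as shown here.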