Journal Description
Analytics is an international, peer-reviewed, open access journal on methodologies, technologies, and applications of analytics, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus and other databases.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 27.4 days after submission; acceptance to publication takes 7.7 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Analytics is a companion journal of Mathematics.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Latest Articles
Assessing the Impact of Capital Expenditure on Corporate Profitability in South Korea’s Electronics Industry: A Regression Analysis Approach
Analytics 2025, 4(4), 36; https://doi.org/10.3390/analytics4040036 - 10 Dec 2025
Abstract
This study investigates the relationship between capital expenditure (CAPEX) and long-term corporate profitability in South Korea’s electronics industry. Using panel data from 126 listed electronics firms covering 2005–2019, the research applies fixed-effects regression analysis to examine how CAPEX influences profitability, measured by EBITDA/total assets. The results confirm that CAPEX exerts a positive and statistically significant long-term effect on profitability, with stronger but not significantly different impacts for large firms compared to SMEs. The findings contribute to empirical evidence on capital investment efficiency and the implications of economies and diseconomies of scale in capital-intensive industries.
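As a minimal sketch of the estimation strategy described above, the snippet below fits a two-way fixed-effects regression of EBITDA/total assets on CAPEX intensity with statsmodels; the file name and column names (firm, year, capex_ta, ebitda_ta) are hypothetical placeholders, not the authors' data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel; columns assumed: firm, year, capex_ta, ebitda_ta.
df = pd.read_csv("electronics_panel.csv")

# Two-way fixed effects via firm and year dummies, standard errors clustered by firm.
fe_fit = smf.ols("ebitda_ta ~ capex_ta + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(fe_fit.params["capex_ta"], fe_fit.pvalues["capex_ta"])
```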
Full article
Open Access Article
Option Pricing in the Approach of Integrating Market Risk Premium: Application to OTM Options
by
David Liu
Analytics 2025, 4(4), 35; https://doi.org/10.3390/analytics4040035 - 21 Nov 2025
Abstract
In this research, we summarize the results of implementing the market risk premium into the option valuation formulas of the Black–Scholes–Merton model for out-of-the-money (OTM) options. We show that derivative prices can partly depend on systematic market risk, which the BSM model ignores by construction. Specifically, empirical studies are conducted using 50ETF options obtained from the Shanghai Stock Exchange, covering the periods from January 2018 to September 2022 and from December 2023 to October 2025. The pricing of the OTM options shows that the adjusted BSM formulas exhibit better pricing performance compared with the market prices of the OTM options tested. Furthermore, a framework for the empirical analysis of option prices based on the Capital Asset Pricing Model (CAPM) or factor models is discussed, which may lead to option formulas using non-homogeneous heat equations. The later proposal requires further statistical testing using real market data but offers an alternative to the existing risk-neutral valuation of options.
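For context, the plain Black–Scholes–Merton call price that the paper adjusts is reproduced below; the risk-premium modification itself is not shown, and the example numbers are arbitrary.

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """Standard Black-Scholes-Merton price of a European call on a non-dividend asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# An out-of-the-money call: strike above the current spot price.
print(bsm_call(S=2.50, K=2.80, T=0.25, r=0.02, sigma=0.30))
```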
Full article
Open Access Article
Fan Loyalty and Price Elasticity in Sport: Insights from Major League Baseball’s Post-Pandemic Recovery
by
Soojin Choi, Fang Zheng and Seung-Man Lee
Analytics 2025, 4(4), 34; https://doi.org/10.3390/analytics4040034 - 21 Nov 2025
Abstract
The COVID-19 pandemic disrupted traditional patterns of sport consumption, raising questions about whether fans would return to stadiums and how sensitive they would be to ticket prices in the recovery period. This study reconceptualizes ticket price elasticity as a market-based indicator of fan loyalty and applies it to Major League Baseball (MLB) during 2021–2023. Using team–season attendance data from Baseball-Reference, primary-market ticket prices from the Team Marketing Report Fan Cost Index, and secondary-market prices from TicketIQ, we estimate log–log fixed-effects panel models to separate causal price responses from popularity-driven correlations. The results show a strongly negative elasticity of attendance with respect to primary-market prices (β ≈ −7.93, p < 0.001), indicating that higher ticket prices substantially reduce attendance, while secondary-market prices are positively associated with attendance, reflecting demand shocks rather than causal effects. Heterogeneity analyses reveal that brand strength, team performance, and game salience significantly moderate elasticity, supporting the interpretation of inelastic demand as revealed loyalty. These findings highlight the potential of elasticity as a Fan Loyalty Index, providing a replicable framework for measuring consumer resilience. The study offers practical insights for pricing strategy, fan segmentation, and engagement, while emphasizing the broader social role of sport in restoring community identity during post-pandemic recovery.
Full article
Open Access Article
AI-Powered Chatbot for FDA Drug Labeling Information Retrieval: OpenAI GPT for Grounded Question Answering
by
Manasa Koppula, Fnu Madhulika, Navya Sreeramoju and Praveen Kolimi
Analytics 2025, 4(4), 33; https://doi.org/10.3390/analytics4040033 - 17 Nov 2025
Abstract
This study presents the development of an AI-powered chatbot designed to facilitate accurate and efficient retrieval of information from FDA drug labeling documents. Leveraging OpenAI’s GPT-3.5-turbo model within a controlled, document-grounded question–answering framework, the chatbot provides users with answers that are strictly limited to the content of the uploaded drug label, thereby minimizing hallucinations and enhancing traceability. A user-friendly interface built with Streamlit allows users to upload FDA labeling PDFs and pose natural language queries. The chatbot extracts relevant sections using PyMuPDF and regex-based segmentation and generates responses constrained to those sections. To evaluate performance, semantic similarity scores were computed between generated answers and ground truth text using Sentence Transformers. Results across 10 breast cancer drug labels demonstrate high semantic alignment, with most scores ranging from 0.7 to 0.9, indicating reliable summarization and contextual fidelity. The chatbot achieved high semantic similarity scores (≥0.95 for concise sections) and ROUGE scores, confirming strong semantic and textual alignment. Comparative analysis with GPT-5-chat and NotebookLM demonstrated that our approach maintains accuracy and section-specific fidelity across models. The current work is limited to a small dataset, focused on breast cancer drugs. Future work will expand to diverse therapeutic areas and incorporate BERTScore and expert-based validation.
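The semantic-similarity evaluation step can be sketched as below with the sentence-transformers library; the encoder name and the two example strings are assumptions, not taken from the paper.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # encoder choice is an assumption

generated = "Take one tablet once daily with food."                            # hypothetical chatbot answer
reference = "The recommended dose is one tablet per day, taken with a meal."   # hypothetical label text

emb = encoder.encode([generated, reference], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())  # cosine similarity, higher = closer meaning
```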
Full article
Open Access Review
Scale-Invariant Correspondence Analysis of Compositional Data
by
Vartan Choulakian and Jacques Allard
Analytics 2025, 4(4), 32; https://doi.org/10.3390/analytics4040032 - 12 Nov 2025
Abstract
Correspondence analysis (CA) is a dimension reduction technique for visualizing a non-negative matrix, particularly contingency tables or compositional datasets, but it depends on the row and column marginals of that matrix. Three complementary transformations of the data render CA scale-invariant: first, Greenacre’s scale-invariant approach, valid for positive data; second, Goodman’s marginal-free correspondence analysis, valid for positive or moderately sparse data; third, correspondence analysis of the sign-transformed matrix, valid for sparse or extremely sparse data. We demonstrate these three methods on four real-world datasets with varying levels of sparsity to compare their exploratory performance.
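For readers unfamiliar with the baseline method, a compact sketch of classical (unadjusted) correspondence analysis via the SVD of standardized residuals follows; it does not implement the three scale-invariant variants compared in the paper, and the toy table is invented.

```python
import numpy as np

def correspondence_analysis(N):
    """Classical CA of a non-negative matrix: SVD of the standardized residuals."""
    P = N / N.sum()                                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]                    # principal row coordinates
    G = (Vt.T * sv) / np.sqrt(c)[:, None]                 # principal column coordinates
    return F, G, sv ** 2                                  # coordinates and principal inertias

N = np.array([[20.0, 5.0, 3.0], [8.0, 12.0, 6.0], [2.0, 7.0, 18.0]])  # toy contingency table
F, G, inertias = correspondence_analysis(N)
print(inertias)
```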
Full article
Open Access Article
PlayMyData: A Statistical Analysis of a Video Game Dataset on Review Scores and Gaming Platforms
by
Christian Ellington, Paramahansa Pramanik and Haley K. Robinson
Analytics 2025, 4(4), 31; https://doi.org/10.3390/analytics4040031 - 11 Nov 2025
Abstract
In recent years, video games have become an increasingly popular form of entertainment and enjoyment for consumers of all ages. Given their rapid rise in production, projects such as PlayMyData aim to organize the immense amounts of data that accompany these games into sets of data for public use in research, primarily games bound specifically to modern platforms that are still being actively developed or further improved. This study aims to examine the particular differences in video game review scores using this set of data across the four listed platforms—Nintendo, Xbox, PlayStation, and PC—for different gaming titles relating to each platform. Through analysis of variance (ANOVA) testing and several other statistical analyses, significant differences between the platforms were observed, with PC games receiving the highest amount of positive scores and consistently outperforming the other three platforms, Xbox and PlayStation trailing behind PC, and Nintendo receiving the lowest review scores overall. These results illustrate the influence of platforms and their differences on player ratings and provide insight for developers and market analysts seeking to develop and invest in console platform video games.
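The core test is a one-way ANOVA across platforms, which reduces to a few lines with SciPy; the score samples below are made-up stand-ins for the PlayMyData ratings.

```python
from scipy import stats

# Hypothetical review scores per platform (0-100), not the actual PlayMyData values.
pc          = [88, 91, 76, 84, 95, 82]
playstation = [80, 85, 74, 78, 83, 79]
xbox        = [79, 82, 73, 77, 84, 75]
nintendo    = [70, 74, 68, 72, 77, 69]

f_stat, p_value = stats.f_oneway(pc, playstation, xbox, nintendo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p-value indicates platform means differ
```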
Full article
Open Access Article
System Inertia Cost Forecasting Using Machine Learning: A Data-Driven Approach for Grid Energy Trading in Great Britain
by
Maitreyee Dey, Soumya Prakash Rana and Preeti Patel
Analytics 2025, 4(4), 30; https://doi.org/10.3390/analytics4040030 - 23 Oct 2025
Abstract
As modern power systems integrate more renewable and decentralised generation, maintaining grid stability has become increasingly challenging. This study proposes a data-driven machine learning framework for forecasting system inertia service costs—a key yet underexplored variable influencing energy trading and frequency stability in Great Britain. Using eight years (2017–2024) of National Energy System Operator (NESO) data, four models—Long Short-Term Memory (LSTM), Residual LSTM, eXtreme Gradient Boosting (XGBoost), and Light Gradient-Boosting Machine (LightGBM)—are comparatively analysed. LSTM-based models capture temporal dependencies, while ensemble methods effectively handle nonlinear feature relationships. Results demonstrate that LightGBM achieves the highest predictive accuracy, offering a robust method for inertia cost estimation and market intelligence. The framework contributes to strategic procurement planning and supports market design for a more resilient, cost-effective grid.
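A bare-bones version of the LightGBM forecaster might look like the following; the file and column names are placeholders, and the chronological split is only one reasonable choice for time-ordered data.

```python
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical NESO-derived feature table; "cost" is the inertia service cost target.
df = pd.read_csv("inertia_costs.csv")
X, y = df.drop(columns=["cost"]), df["cost"]

# No shuffling: keep the later observations as the evaluation period.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(mean_absolute_error(y_te, model.predict(X_te)))
```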
Full article
(This article belongs to the Special Issue Business Analytics and Applications)
Open Access Article
Distributional CNN-LSTM, KDE, and Copula Approaches for Multimodal Multivariate Data: Assessing Conditional Treatment Effects
by
Jong-Min Kim
Analytics 2025, 4(4), 29; https://doi.org/10.3390/analytics4040029 - 21 Oct 2025
Abstract
We introduce a distributional CNN-LSTM framework for probabilistic multivariate modeling and heterogeneous treatment effect (HTE) estimation. The model jointly captures complex dependencies among multiple outcomes and enables precise estimation of individual-level conditional average treatment effects (CATEs). In simulation studies with multivariate Gaussian mixtures, the CNN-LSTM demonstrates robust density estimation and strong CATE recovery, particularly as mixture complexity increases, while classical methods such as Kernel Density Estimation (KDE) and Gaussian Copulas may achieve higher log-likelihood or coverage in simpler scenarios. On real-world datasets, including Iris and Criteo Uplift, the CNN-LSTM achieves the lowest CATE RMSE, confirming its practical utility for individualized prediction, although KDE and Gaussian Copula approaches may perform better on global likelihood or coverage metrics. These results indicate that the CNN-LSTM can be trained efficiently on moderate-sized datasets while maintaining stable predictive performance. Overall, the framework is particularly valuable in applications requiring accurate individual-level effect estimation and handling of multimodal heterogeneity—such as personalized medicine, economic policy evaluation, and environmental risk assessment—with its primary strength being superior CATE recovery under complex outcome distributions, even when likelihood-based metrics favor simpler baselines.
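How CATE recovery is typically scored can be illustrated with a much simpler estimator than the paper's distributional CNN-LSTM: a T-learner on synthetic data with a known heterogeneous effect.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data with a heterogeneous treatment effect that depends on the first covariate.
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)
true_cate = 1.0 + 0.5 * X[:, 0]
y = X @ np.array([0.3, -0.2, 0.1]) + T * true_cate + rng.normal(scale=0.5, size=n)

# T-learner: separate outcome models for treated and control, then difference the predictions.
m1 = GradientBoostingRegressor().fit(X[T == 1], y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)
print("CATE RMSE:", np.sqrt(np.mean((cate_hat - true_cate) ** 2)))
```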
Full article
Open Access Article
Reservoir Computation with Networks of Differentiating Neuron Ring Oscillators
by
Alexander Yeung, Peter DelMastro, Arjun Karuvally, Hava Siegelmann, Edward Rietman and Hananel Hazan
Analytics 2025, 4(4), 28; https://doi.org/10.3390/analytics4040028 - 20 Oct 2025
Abstract
Reservoir computing is an approach to machine learning that leverages the dynamics of a complex system alongside a simple, often linear, machine learning model for a designated task. While many efforts have previously focused their attention on integrating neurons, which produce an output in response to large, sustained inputs, we focus on using differentiating neurons, which produce an output in response to large changes in input. Here, we introduce a small-world graph built from rings of differentiating neurons as a Reservoir Computing substrate. We find the coupling strength and network topology that enable these small-world networks to function as an effective reservoir. The dynamics of differentiating neurons naturally give rise to oscillatory dynamics when arranged in rings, where we study their computational use in the Reservoir Computing setting. We demonstrate the efficacy of these networks in the MNIST digit recognition task, achieving comparable performance of 90.65% to existing Reservoir Computing approaches. Beyond accuracy, we conduct systematic analysis of our reservoir’s internal dynamics using three complementary complexity measures that quantify neuronal activity balance, input dependence, and effective dimensionality. Our analysis reveals that optimal performance emerges when the reservoir operates with intermediate levels of neural entropy and input sensitivity, consistent with the edge-of-chaos hypothesis, where the system balances stability and responsiveness. The findings suggest that differentiating neurons can be a potential alternative to integrating neurons and can provide a sustainable future alternative for power-hungry AI applications.
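The generic reservoir-computing recipe the paper builds on, a fixed random recurrent network plus a trained linear readout, can be sketched as follows; this uses ordinary tanh units rather than the ring-oscillator differentiating neurons studied in the article.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_res = 200                                               # reservoir size (arbitrary)
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))            # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))           # rescale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 20 * np.pi, 1000))               # toy input signal
target = np.roll(u, -5)                                    # predict the input 5 steps ahead
states = run_reservoir(u)
readout = Ridge(alpha=1e-3).fit(states[:-5], target[:-5])  # only the readout is trained
print(readout.score(states[:-5], target[:-5]))
```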
Full article
Open Access Article
Multiplicative Decomposition Model to Predict UK’s Long-Term Electricity Demand with Monthly and Hourly Resolution
by
Marie Baillon, María Carmen Romano and Ekkehard Ullner
Analytics 2025, 4(4), 27; https://doi.org/10.3390/analytics4040027 - 6 Oct 2025
Abstract
The UK electricity market is changing to adapt to Net Zero targets and respond to disruptions like the Russia–Ukraine war. This requires strategic planning to decide on the construction of new electricity generation plants for a resilient UK electricity grid. Such planning is based on forecasting the UK electricity demand long-term (from 1 year and beyond). In this paper, we propose a long-term predictive model by identifying the main components of the UK electricity demand, modelling each of these components, and combining them in a multiplicative manner to deliver a single long-term prediction. To the best of our knowledge, this study is the first to apply a multiplicative decomposition model for long-term predictions at both monthly and hourly resolutions, combining neural networks with Fourier analysis. This approach is extremely flexible and accurate, with a mean absolute percentage error of 4.16% and 8.62% in predicting the monthly and hourly electricity demand, respectively, from 2019 to 2021.
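The multiplicative idea, forecast ≈ trend × seasonal factor, can be sketched on synthetic monthly data; a linear trend and a first-order Fourier seasonal term stand in for the neural-network and Fourier components the paper actually combines.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
t = np.arange(120)                                         # ten years of monthly observations
demand = (100 + 0.2 * t) * (1 + 0.15 * np.sin(2 * np.pi * t / 12)) * rng.normal(1, 0.02, t.size)

# Component 1: long-term trend (a linear fit here, a neural network in the paper).
trend = LinearRegression().fit(t.reshape(-1, 1), demand).predict(t.reshape(-1, 1))

# Component 2: seasonal factor fitted to the demand/trend ratio with a Fourier basis.
fourier = np.column_stack([np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
seasonal = LinearRegression().fit(fourier, demand / trend).predict(fourier)

forecast = trend * seasonal                                # multiplicative recombination
print("in-sample MAPE: %.2f%%" % (np.mean(np.abs((demand - forecast) / demand)) * 100))
```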
Full article
Open Access Article
Fairness in Predictive Marketing: Auditing and Mitigating Demographic Bias in Machine Learning for Customer Targeting
by
Sayee Phaneendhar Pasupuleti, Jagadeesh Kola, Sai Phaneendra Manikantesh Kodete and Sree Harsha Palli
Analytics 2025, 4(4), 26; https://doi.org/10.3390/analytics4040026 - 1 Oct 2025
Abstract
As organizations increasingly turn to machine learning for customer segmentation and targeted marketing, concerns about fairness and algorithmic bias have become more urgent. This study presents a comprehensive fairness audit and mitigation framework for predictive marketing models using the Bank Marketing dataset. We train logistic regression and random forest classifiers to predict customer subscription behavior and evaluate their performance across key demographic groups, including age, education, and job type. Using model explainability techniques such as SHAP and fairness metrics including disparate impact and true positive rate parity, we uncover notable disparities in model behavior that could result in discriminatory targeting. We implement three mitigation strategies—reweighing, threshold adjustment, and feature exclusion—and assess their effectiveness in improving fairness while preserving business-relevant performance metrics. Among these, reweighing produced the most balanced outcome, raising the Disparate Impact Ratio for older individuals from 0.65 to 0.82 and reducing the true positive rate parity gap by over 40%, with only a modest decline in precision (from 0.78 to 0.76). We propose a replicable workflow for embedding fairness auditing into enterprise BI systems and highlight the strategic importance of ethical AI practices in building accountable and inclusive marketing technologies.
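Two of the quantities used above, the disparate impact ratio and Kamiran–Calders reweighing weights, are easy to compute directly; the toy table below uses placeholder columns, not the Bank Marketing fields.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Hypothetical audit table: binary protected attribute ("older") and binary outcome label.
df = pd.DataFrame({"older": rng.integers(0, 2, 1000), "label": rng.integers(0, 2, 1000)})

# Disparate impact ratio: positive-outcome rate of the protected group over the reference group.
rate_prot = df.loc[df["older"] == 1, "label"].mean()
rate_ref = df.loc[df["older"] == 0, "label"].mean()
print("disparate impact ratio:", rate_prot / rate_ref)   # values below 0.8 are the usual red flag

# Reweighing: weight each (group, label) cell so the protected attribute and label look independent.
p_group = df["older"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["older", "label"]).size() / len(df)
df["weight"] = [p_group[g] * p_label[s] / p_joint[(g, s)] for g, s in zip(df["older"], df["label"])]
```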
Full article
(This article belongs to the Special Issue Business Analytics and Applications)
Open Access Review
Evolution of Cybercrime—Key Trends, Cybersecurity Threats, and Mitigation Strategies from Historical Data
by
Muhammad Abdullah, Muhammad Munib Nawaz, Bilal Saleem, Maila Zahra, Effa binte Ashfaq and Zia Muhammad
Analytics 2025, 4(3), 25; https://doi.org/10.3390/analytics4030025 - 18 Sep 2025
Cited by 2
Abstract
The landscape of cybercrime has undergone significant transformations over the past decade. Present-day threats include AI-generated attacks, deep fakes, 5G network vulnerabilities, cryptojacking, and supply chain attacks, among others. To remain resilient against contemporary threats, it is essential to examine historical data to gain insights that can inform cybersecurity strategies, policy decisions, and public awareness campaigns. This paper presents a comprehensive analysis of the evolution of cyber trends in state-sponsored attacks over the past 20 years, based on the Council on Foreign Relations state-sponsored cyber operations data (2005–present). The study explores the key trends, patterns, and demographic shifts in cybercrime victims, the evolution of complaints and losses, and the most prevalent cyber threats over the years. It also investigates the geographical distribution, the gender disparity in victimization, the temporal peaks of specific scams, and the most frequently reported internet crimes. The findings reveal a traditional cyber landscape, with cyber threats becoming more sophisticated and monetized. Finally, the article proposes areas for further exploration through a comprehensive analysis. It provides a detailed chronicle of the trajectory of cybercrime, offering insights into its past, present, and future.
Full article
Open Access Article
Meta-Analysis of Artificial Intelligence’s Influence on Competitive Dynamics for Small- and Medium-Sized Financial Institutions
by
Macy Cudmore and David Mattie
Analytics 2025, 4(3), 24; https://doi.org/10.3390/analytics4030024 - 18 Sep 2025
Abstract
Artificial intelligence adoption in financial services presents uncertain implications for competitive dynamics, particularly for smaller institutions. The literature on AI in finance is growing, but there remains a notable gap regarding the impacts on small- and medium-sized financial services firms. We conduct a meta-analysis combining a systematic literature review, sentiment bibliometrics, and network analysis to examine how AI is transforming competition across different firm sizes in the financial sector. Our analysis of 160 publications reveals predominantly positive academic sentiment toward AI in finance (mean positive sentiment 0.725 versus negative 0.586, Cohen’s d = 0.790, p < 0.0001), with anticipatory sentiment increasing significantly over time. However, network analysis reveals substantial conceptual fragmentation in the research discourse, with a low connectivity coefficient indicating that the field lacks unified terminology. These findings expose a critical knowledge gap: while scholars increasingly view AI as competitively advantageous, research has not coalesced around coherent models for understanding differential impacts across firm sizes. The absence of size-specific research leaves practitioners and policymakers without clear guidance on how AI adoption affects competitive positioning, particularly for smaller institutions that may face resource constraints or technological barriers. The research fragmentation identified here has direct implications for strategic planning, regulatory approaches, and employment dynamics in financial services.
Full article
(This article belongs to the Special Issue Business Analytics and Applications)
Open Access Article
Game-Theoretic Analysis of MEV Attacks and Mitigation Strategies in Decentralized Finance
by
Benjamin Appiah, Daniel Commey, Winful Bagyl-Bac, Laurene Adjei and Ebenezer Owusu
Analytics 2025, 4(3), 23; https://doi.org/10.3390/analytics4030023 - 15 Sep 2025
Abstract
Maximal Extractable Value (MEV) presents a significant challenge to the fairness and efficiency of decentralized finance (DeFi). This paper provides a game-theoretic analysis of the strategic interactions within the MEV supply chain, involving searchers, builders, and validators. A three-stage game of incomplete information is developed to model these interactions. The analysis derives the Perfect Bayesian Nash Equilibria for primary MEV attack vectors, such as sandwich attacks, and formally characterizes attacker behavior. The research demonstrates that the competitive dynamics of the current MEV market are best described as Bertrand-style competition, which compels rational actors to engage in aggressive extraction that reduces overall system welfare in a prisoner’s dilemma-like outcome. To address these issues, the paper proposes and evaluates mechanism design solutions, including commit–reveal schemes and threshold encryption. The potential of these solutions to mitigate harmful MEV is quantified. Theoretical models are validated against on-chain data from the Ethereum blockchain, showing a close alignment between theoretical predictions and empirically observed market behavior.
Full article
Open Access Article
Bankruptcy Prediction Using Machine Learning and Data Preprocessing Techniques
by
Kamil Samara and Apurva Shinde
Analytics 2025, 4(3), 22; https://doi.org/10.3390/analytics4030022 - 10 Sep 2025
Abstract
Bankruptcy prediction is critical for financial risk management. This study demonstrates that machine learning models, particularly Random Forest, can substantially improve prediction accuracy compared to traditional approaches. Using data from 8262 U.S. firms (1999–2018), we evaluate Logistic Regression, SVM, Random Forest, ANN, and RNN in combination with robust data preprocessing steps. Random Forest achieved the highest prediction accuracy (~95%), far surpassing Logistic Regression (~57%). Key preprocessing steps included feature engineering of financial ratios, feature selection, class balancing using SMOTE, and scaling. The findings highlight that ensemble and deep learning models—particularly Random Forest and ANN—offer strong predictive performance, suggesting their suitability for early-warning financial distress systems.
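A condensed version of the SMOTE-plus-Random-Forest pipeline could be written as follows; the file name and the "bankrupt" label column are assumptions, and the imblearn pipeline ensures SMOTE is applied only to the training data.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical table of financial ratios with a binary "bankrupt" label.
df = pd.read_csv("firm_financials.csv")
X, y = df.drop(columns=["bankrupt"]), df["bankrupt"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# The sampler only resamples inside fit(), so the held-out test set stays untouched.
clf = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```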
Full article
Open Access Article
Accurate Analytical Forms of Heaviside and Ramp Function
by
John Constantine Venetis
Analytics 2025, 4(3), 21; https://doi.org/10.3390/analytics4030021 - 26 Aug 2025
Abstract
In this paper, explicit exact representations of the Unit Step Function and Ramp Function are obtained. These important functions constitute fundamental concepts of operational calculus together with digital signal processing theory and are also involved in many other areas of applied sciences and engineering practices. In particular, according to a rigorous process from the viewpoint of Mathematical Analysis, the Unit Step Function and the Ramp Function are equivalently expressed as bi-parametric single-valued functions with only one constraint imposed on each parameter. The novelty of this work, when compared with other investigations concerning accurate and/or approximate forms of Unit Step Function and/or Ramp Function, is that the proposed exact formulae are not exhibited in terms of miscellaneous special functions, e.g., Gamma Function, Biexponential Function, or any other special functions, such as Error Function, Complementary Error Function, Hyperbolic Function, or Orthogonal Polynomials. In this framework, one may deduce that these formulae may be much more practical, flexible, and useful in the computational procedures that are inserted into operational calculus and digital signal processing techniques as well as other engineering practices.
Full article
Open Access Article
LINEX Loss-Based Estimation of Expected Arrival Time of Next Event from HPP and NHPP Processes Past Truncated Time
by
M. S. Aminzadeh
Analytics 2025, 4(3), 20; https://doi.org/10.3390/analytics4030020 - 26 Aug 2025
Abstract
This article introduces a computational tool for Bayesian estimation of the expected time until the next event occurs in both homogeneous Poisson processes (HPPs) and non-homogeneous Poisson processes (NHPPs), following a truncated time. The estimation utilizes the linear exponential (LINEX) asymmetric loss function and incorporates both gamma and non-informative priors. Furthermore, it presents a minimax-type criterion to ascertain the optimal sample size required to achieve a specified percentage reduction in posterior risk. Simulation studies indicate that estimators employing gamma priors for both HPP and NHPP demonstrate greater accuracy compared to those based on non-informative priors and maximum likelihood estimates (MLE), provided that the proposed data-driven method for selecting hyperparameters is applied.
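For reference, the LINEX loss and the Bayes estimator that minimizes its posterior expectation take the standard forms below; the parameterization (a, b) is the conventional one and may differ from the paper's notation.

```latex
L(\Delta) = b\left(e^{a\Delta} - a\Delta - 1\right), \qquad \Delta = \hat{\theta} - \theta, \quad a \neq 0,\; b > 0,
\qquad
\hat{\theta}_{\mathrm{LINEX}} = -\frac{1}{a}\,\ln \mathbb{E}\!\left[e^{-a\theta} \mid \text{data}\right].
```

For a > 0 overestimation is penalized more heavily than underestimation, which is what makes the loss asymmetric.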
Full article
Open Access Article
A Bounded Sine Skewed Model for Hydrological Data Analysis
by
Tassaddaq Hussain, Mohammad Shakil, Mohammad Ahsanullah and Bhuiyan Mohammad Golam Kibria
Analytics 2025, 4(3), 19; https://doi.org/10.3390/analytics4030019 - 13 Aug 2025
Abstract
Hydrological time series frequently exhibit periodic trends with variables such as rainfall, runoff, and evaporation rates often following annual cycles. Seasonal variations further contribute to the complexity of these data sets. A critical aspect of analyzing such phenomena is estimating realistic return intervals, making the precise determination of these values essential. Given this importance, selecting an appropriate probability distribution is paramount. To address this need, we introduce a flexible probability model specifically designed to capture periodicity in hydrological data. We thoroughly examine its fundamental mathematical and statistical properties, including the asymptotic behavior of the probability density function (PDF) and hazard rate function (HRF), to enhance predictive accuracy. Our analysis reveals that the PDF exhibits polynomial decay in the upper tail, ensuring heavy-tailed behavior suitable for extreme events. The HRF demonstrates decreasing or non-monotonic trends, reflecting variable failure risks over time. Additionally, we conduct a simulation study to evaluate the performance of the estimation method. Based on these results, we refine return period estimates, providing more reliable and robust hydrological assessments. This approach ensures that the model not only fits observed data but also captures the underlying dynamics of hydrological extremes.
Full article
Open Access Article
Predictive Framework for Regional Patent Output Using Digital Economic Indicators: A Stacked Machine Learning and Geospatial Ensemble to Address R&D Disparities
by
Amelia Zhao and Peng Wang
Analytics 2025, 4(3), 18; https://doi.org/10.3390/analytics4030018 - 8 Jul 2025
Abstract
As digital transformation becomes an increasingly central focus of national and regional policy agendas, parallel efforts are intensifying to stimulate innovation as a critical driver of firm competitiveness and high-quality economic growth. However, regional disparities in innovation capacity persist. This study proposes an integrated framework in which regionally tracked digital economy indicators are leveraged to predict firm-level innovation performance, measured through patent activity, across China. Drawing on a comprehensive dataset covering 13 digital economic indicators from 2013 to 2022, this study spans core, broad, and narrow dimensions of digital development. Spatial dependencies among these indicators are assessed using global and local spatial autocorrelation measures, including Moran’s I and Geary’s C, to provide actionable insights for constructing innovation-conducive environments. To model the predictive relationship between digital metrics and innovation output, this study employs a suite of supervised machine learning techniques—Random Forest, Extreme Learning Machine (ELM), Support Vector Machine (SVM), XGBoost, and stacked ensemble approaches. Our findings demonstrate the potential of digital infrastructure metrics to serve as early indicators of regional innovation capacity, offering a data-driven foundation for targeted policymaking, strategic resource allocation, and the design of adaptive digital innovation ecosystems.
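Global Moran's I, one of the spatial autocorrelation measures mentioned above, can be computed directly from its definition; the four-region example and contiguity weights are invented for illustration.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and a spatial weights matrix w (zero diagonal)."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return (len(z) / w.sum()) * np.sum(w * np.outer(z, z)) / np.sum(z ** 2)

# Four regions on a line with binary contiguity weights (neighbours share a border).
x = [1.0, 2.0, 8.0, 9.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(x, w))   # positive values indicate clustering of similar values in space
```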
Full article
Open Access Article
Domestication of Source Text in Literary Translation Prevails over Foreignization
by
Emilio Matricciani
Analytics 2025, 4(3), 17; https://doi.org/10.3390/analytics4030017 - 20 Jun 2025
Cited by 1
Abstract
Domestication is a translation theory in which the source text (to be translated) is matched to the foreign reader by erasing its original linguistic and cultural difference. This match aims at making the target text (translated text) more fluent. On the contrary, foreignization is a translation theory in which the foreign reader is matched to the source text. This paper mathematically explores the degree of domestication/foreignization in current translation practice of texts written in alphabetical languages. A geometrical representation of texts, based on linear combinations of deep–language parameters, allows us (a) to calculate a domestication index which measures how much domestication is applied to the source text and (b) to distinguish language families. An expansion index measures the relative spread around mean values. This paper reports statistics and results on translations of (a) Greek New Testament books in Latin and in 35 modern languages, belonging to diverse language families; and (b) English novels in Western languages. English and French, although attributed to different language families, mathematically almost coincide. The requirement of making the target text more fluent makes domestication, with varying degrees, universally adopted, so that a blind comparison of the same linguistic parameters of a text and its translation hardly indicates that they refer to each other.
Full article

Topics
Topic in
Applied Sciences, Future Internet, AI, Analytics, BDCC
Data Intelligence and Computational Analytics
Topic Editors: Carson K. Leung, Fei Hao, Xiaokang Zhou
Deadline: 30 November 2026
Special Issues
Special Issue in
Analytics
Reviews on Data Analytics and Its Applications
Guest Editor: Carson K. Leung
Deadline: 31 March 2026
Special Issue in
Analytics
Critical Challenges in Large Language Models and Data Analytics: Trustworthiness, Scalability, and Societal Impact
Guest Editors: Oluwaseun Ajao, Bayode Ogunleye, Hemlata Sharma
Deadline: 31 July 2026
Special Issue in
Analytics
Business Analytics and Applications, 2nd Edition
Guest Editor: Tatiana Ermakova
Deadline: 30 September 2026