Search Results (79)

Search Parameters:
Keywords = over-dispersed Poisson

11 pages, 272 KB  
Article
Bayesian Bell Regression Model for Fitting of Overdispersed Count Data with Application
by Ameer Musa Imran Alhseeni and Hossein Bevrani
Stats 2025, 8(4), 95; https://doi.org/10.3390/stats8040095 - 10 Oct 2025
Abstract
The Bell regression model (BRM) is a statistical model that is often used in the analysis of count data that exhibit overdispersion. In this study, we propose a Bayesian analysis of the BRM and offer a new perspective on its application. Specifically, we introduce a G-prior distribution for Bayesian inference in the BRM, in addition to a flat-normal prior distribution. To compare the performance of the proposed prior distributions, we conduct a simulation study and demonstrate that the G-prior distribution provides superior estimation results for the BRM. Furthermore, we apply the methodology to real data and compare the BRM to the Poisson and negative binomial regression models using various model selection criteria. Our results provide valuable insights into the use of Bayesian methods for estimation and inference of the BRM and highlight the importance of considering the choice of prior distribution in the analysis of count data.
(This article belongs to the Section Computational Statistics)
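For readers unfamiliar with the distribution behind the BRM, here is a minimal sketch (illustrative, not the authors' code), assuming the standard Bell pmf P(Y=y) = θ^y e^(1−e^θ) B_y / y!, where B_y is the y-th Bell number; it checks numerically that the pmf sums to one and that the variance exceeds the mean.

```python
# Hedged sketch of the Bell distribution underlying Bell regression.
# Assumes the pmf P(Y=y) = theta^y * exp(1 - e^theta) * B_y / y!.
import math

def bell_numbers(n):
    """First n+1 Bell numbers via the Bell-triangle recurrence."""
    row, bells = [1], [1]
    for _ in range(n):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
        bells.append(row[0])
    return bells

def bell_pmf(y, theta):
    B = bell_numbers(y)
    return theta**y * math.exp(1 - math.exp(theta)) * B[y] / math.factorial(y)

theta = 0.8
probs = [bell_pmf(y, theta) for y in range(40)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum((y - mean) ** 2 * p for y, p in enumerate(probs))
print(f"sum={sum(probs):.6f}  mean={mean:.3f}  var={var:.3f}")  # var > mean
```

The variance-to-mean ratio of this pmf is 1 + θ, so overdispersion is built into the distribution rather than added through an extra parameter, which is why the BRM is a natural one-parameter baseline for overdispersed counts.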
32 pages, 1288 KB  
Article
Random Forest Adaptation for High-Dimensional Count Regression
by Oyebayo Ridwan Olaniran, Saidat Fehintola Olaniran, Ali Rashash R. Alzahrani, Nada MohammedSaeed Alharbi and Asma Ahmad Alzahrani
Mathematics 2025, 13(18), 3041; https://doi.org/10.3390/math13183041 - 21 Sep 2025
Abstract
The analysis of high-dimensional count data presents a unique set of challenges, including overdispersion, zero-inflation, and complex nonlinear relationships that traditional generalized linear models and standard machine learning approaches often fail to adequately address. This study introduces and validates a novel Random Forest framework specifically developed for high-dimensional Poisson and negative binomial regression, designed to overcome the limitations of existing methods. Through comprehensive simulations and a real-world genomic application to the Norwegian Mother and Child Cohort Study, we demonstrate that the proposed methods achieve superior predictive accuracy, quantified by lower root mean squared error and deviance, and, critically, produce exceptionally stable and interpretable feature selections. Our theoretical and empirical results show that these distribution-optimized ensembles significantly outperform both penalized-likelihood techniques and naive-transformation-based ensembles in balancing statistical robustness with biological interpretability. The study concludes that the proposed frameworks provide a crucial methodological advancement, offering a powerful and reliable tool for extracting meaningful insights from complex count data in fields ranging from genomics to public health.
(This article belongs to the Special Issue Statistics for High-Dimensional Data)
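As a rough, runnable stand-in for the idea of a count-aware forest (not the authors' framework, and on simulated data rather than the cohort data), scikit-learn's random forest already offers a Poisson splitting criterion:

```python
# Hedged sketch: a Poisson-criterion random forest on simulated
# high-dimensional count data (2 informative features out of 200).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.normal(size=(n, p))
mu = np.exp(0.6 * X[:, 0] - 0.4 * X[:, 1])   # true rate uses only 2 features
y = rng.poisson(mu)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, criterion="poisson",
                           min_samples_leaf=5, random_state=0)
rf.fit(X_tr, y_tr)
print("top features:", np.argsort(rf.feature_importances_)[::-1][:5])
```

Stable recovery of the two informative features in `feature_importances_` is the kind of selection stability the paper evaluates at much larger scale.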
Show Figures

Figure 1

23 pages, 4564 KB  
Technical Note
Vehicle Collision Frequency Prediction Using Traffic Accident and Traffic Volume Data with a Deep Neural Network
by Yeong Gook Ko, Kyu Chun Jo, Ji Sun Lee and Jik Su Yu
Appl. Sci. 2025, 15(18), 9884; https://doi.org/10.3390/app15189884 - 9 Sep 2025
Abstract
This study proposes a hybrid deep learning framework for predicting vehicle crash frequency (Fi) using nationwide traffic accident and traffic volume data from the United States (2019–2022). Crash frequency is defined as the product of exposure frequency (Na) and crash risk rate (λ), a structure widely adopted for its ability to separate physical exposure from crash likelihood. Na was computed using an extended Safety Performance Function (SPF) that incorporates roadway traffic volume, segment length, number of lanes, and traffic density, while λ was estimated using a multilayer perceptron-based deep neural network (DNN) with inputs such as impact speed, road surface condition, and vehicle characteristics. The DNN integrates rectified linear unit (ReLU) activation, batch normalization, dropout layers, and the Huber loss function to capture nonlinearity and over-dispersion beyond the capability of traditional statistical models. Model performance, evaluated through five-fold cross-validation, achieved R² = 0.7482, MAE = 0.1242, and MSE = 0.0485, demonstrating a strong capability to identify high-risk areas. Compared to traditional regression approaches such as Poisson and negative binomial models, which are often constrained by equidispersion assumptions and limited flexibility in capturing nonlinear effects, the proposed framework demonstrated substantially improved predictive accuracy and robustness. Unlike prior studies that loosely combined SPF terms with machine learning, this study explicitly decomposes Fi into Na and λ, ensuring interpretability while leveraging DNN flexibility for crash risk estimation. This dual-layer integration provides a unique methodological contribution by jointly achieving interpretability and predictive robustness, validated with a nationwide dataset, and highlights its potential for evidence-based traffic safety assessments and policy development.
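A minimal sketch of the λ-estimation network described above, assuming PyTorch, synthetic inputs, and illustrative layer sizes (the paper's exact architecture and features are not reproduced here):

```python
# Hedged sketch: MLP with ReLU, batch normalization, dropout, and Huber
# loss, the ingredients listed in the abstract; data are synthetic.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 32), nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(32, 1),
)
loss_fn = nn.HuberLoss()                 # robust to heavy-tailed targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(1024, 8)                 # stand-ins for impact speed, etc.
lam = torch.rand(1024, 1)                # stand-in crash risk rate targets
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), lam)
    loss.backward()
    opt.step()
```

The Huber loss behaves quadratically near zero and linearly in the tails, which is what gives the regressor some robustness to the over-dispersed targets the abstract mentions.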

23 pages, 575 KB  
Article
A Comparison of the Robust Zero-Inflated and Hurdle Models with an Application to Maternal Mortality
by Phelo Pitsha, Raymond T. Chiruka and Chioneso S. Marange
Math. Comput. Appl. 2025, 30(5), 95; https://doi.org/10.3390/mca30050095 - 2 Sep 2025
Abstract
This study evaluates the performance of count regression models in the presence of zero inflation, outliers, and overdispersion using both simulated data and a real-world maternal mortality dataset. Traditional Poisson and negative binomial regression models often struggle to account for the complexities introduced by excess zeros and outliers. To address these limitations, this study compares the performance of robust zero-inflated (RZI) and robust hurdle (RH) models against conventional models, using the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to determine the best-fitting model. Results indicate that the robust zero-inflated Poisson (RZIP) model performs best overall. The simulation study considers various scenarios, including different levels of zero inflation (50%, 70%, and 80%), outlier proportions (0%, 5%, 10%, and 15%), dispersion values (1, 3, and 5), and sample sizes (50, 200, and 500). Based on AIC comparisons, the RZIP and robust hurdle Poisson (RHP) models demonstrate superior performance when outliers are absent or limited to 5%, particularly when dispersion is low (5). However, as outlier levels and dispersion increase, the robust zero-inflated negative binomial (RZINB) and robust hurdle negative binomial (RHNB) models outperform the RZIP and RHP models across all levels of zero inflation and sample sizes considered in the study.
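The non-robust baselines in this comparison can be fit directly in Python; the sketch below (simulated data with roughly 50% structural zeros, not the maternal mortality data, and standard rather than robust estimators) reproduces the AIC/BIC model-selection step in outline:

```python
# Hedged sketch: fit ZIP and ZINB on simulated zero-inflated counts and
# compare information criteria, mirroring the study's selection step.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)
y = rng.poisson(mu) * (rng.uniform(size=n) > 0.5)   # ~50% structural zeros

for name, cls in [("ZIP", ZeroInflatedPoisson),
                  ("ZINB", ZeroInflatedNegativeBinomialP)]:
    res = cls(y, X, exog_infl=np.ones((n, 1))).fit(maxiter=500, disp=False)
    print(f"{name}: AIC={res.aic:.1f}  BIC={res.bic:.1f}")
```

The robust RZI and RH estimators the paper proposes are not available in statsmodels; the point of the sketch is only the comparison workflow.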

19 pages, 539 KB  
Article
Maximum-Likelihood Estimation for the Zero-Inflated Polynomial-Adjusted Poisson Distribution
by Jong-Seung Lee and Hyung-Tae Ha
Mathematics 2025, 13(15), 2383; https://doi.org/10.3390/math13152383 - 24 Jul 2025
Abstract
We propose the zero-inflated Polynomially Adjusted Poisson (zPAP) model. It extends the usual zero-inflated Poisson by multiplying the Poisson kernel with a nonnegative polynomial, enabling the model to handle extra zeros, overdispersion, skewness, and even multimodal counts. We derive the maximum-likelihood framework—including the log-likelihood and score equations under both general and regression settings—and fit zPAP to the zero-inflated, highly dispersed Fish Catch data as well as a synthetic bimodal mixture. In both cases, zPAP not only outperforms the standard zero-inflated Poisson model but also yields reliable inference via parametric bootstrap confidence intervals. Overall, zPAP is a clear and tractable tool for real-world count data with complex features.
(This article belongs to the Special Issue Statistical Theory and Application, 2nd Edition)
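A plausible reading of the model (our notation, not necessarily the paper's): with inflation probability π, Poisson rate λ, and a fixed nonnegative polynomial p(·),

$$ P(Y=y) \;=\; \pi\,\mathbb{1}\{y=0\} \;+\; (1-\pi)\,\frac{p(y)\,e^{-\lambda}\lambda^{y}/y!}{\sum_{k=0}^{\infty} p(k)\,e^{-\lambda}\lambda^{k}/k!}, \qquad y=0,1,2,\ldots $$

The polynomial factor reweights the Poisson kernel, which is what lets zPAP bend the shape toward skewed or multimodal counts that a plain zero-inflated Poisson cannot represent.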

17 pages, 343 KB  
Article
On the Conflation of Poisson and Logarithmic Distributions with Applications
by Abdulhamid A. Alzaid, Anfal A. Alqefari and Najla Qarmalah
Axioms 2025, 14(7), 518; https://doi.org/10.3390/axioms14070518 - 6 Jul 2025
Abstract
Real-life count data frequently show inflation at small values; however, most well-known count distributions cannot capture this feature. The present paper introduces a new distribution for modeling count data inflated at small values, based on a conflation-of-distributions approach. The new distribution inherits properties from the Poisson distribution (PD) and the logarithmic distribution (LD), making it a powerful modeling tool, and it can serve as an alternative to the PD, the LD, and zero-truncated distributions. The new distribution is also of theoretical interest, as it belongs to the weighted PD family. Two additional models that add zero as a support point are suggested for the new distribution. These modifications yield overdispersed models comparable to the negative binomial distribution (NBD) while retaining essential PD properties, making them suitable for accurately representing count data with frequent low-value events and high variance. Furthermore, we discuss the superior performance of the three new distributions in modeling real count data compared to traditional count distributions such as the PD and NBD, as well as other discrete distributions. This paper examines the key statistical properties of the proposed distributions, and the new distributions are compared with others in the literature using real-life data from several domains. All computations shown in this study were generated using the R programming language.
(This article belongs to the Special Issue Advances in the Theory and Applications of Statistical Distributions)
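Conflation combines distributions by normalized multiplication of their mass functions. Under that reading (our notation, illustrative only), conflating a Poisson(λ) pmf with a logarithmic(θ) pmf on the shared support y = 1, 2, … gives

$$ P(Y=y) \;\propto\; \frac{e^{-\lambda}\lambda^{y}}{y!}\cdot\frac{-\,\theta^{y}}{y\,\ln(1-\theta)} \;\propto\; \frac{(\lambda\theta)^{y}}{y\cdot y!}, \qquad y=1,2,\ldots, $$

so the conflated pmf depends only on the product λθ and is normalized by the convergent series Σ_{y≥1} (λθ)^y/(y·y!). Modifications that add zero as a support point then yield the overdispersed variants compared with the NBD.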

35 pages, 7691 KB  
Article
KLF14 and SREBF-1 Binding Site Associations with Orphan Receptor Promoters in Metabolic Syndrome
by Julio Jesús Garcia-Coste, Santiago Villafaña-Rauda, Karla Aidee Aguayo-Cerón, Cruz Vargas-De-León and Rodrigo Romero-Nava
Int. J. Mol. Sci. 2025, 26(7), 2849; https://doi.org/10.3390/ijms26072849 - 21 Mar 2025
Abstract
This study investigated the relationship between the transcription factors (TFs) KLF14 and SREBF-1 and orphan receptors (ORs) in the context of metabolic syndrome (MetS). A detailed bioinformatics analysis identified a significant association between the presence of binding sites (BS) for these TFs in the promoters of OR genes and the total number of BS in the distal region. The results suggest that KLF14 and SREBF-1 can regulate the expression of some of these genes and, in turn, can modulate the development of MetS. Although a stronger association was observed with KLF14, both factors showed a significant contribution. Additionally, the sequence similarity of KLF14 also contributed to the quantity of BS in the gene's distal region (DR). The statistical models used, such as Poisson and negative binomial regression, confirmed these associations and allowed for appropriate adjustment of the overdispersion present in the data. However, no significant differences were found between receptor groups (orphan G protein-coupled receptors (oGPCRs) and G protein-coupled receptors associated with MetS (GPCRs-MetS)) regarding their relationship with the TFs. In conclusion, this study provides strong evidence of the importance of KLF14 and SREBF-1 in regulating orphan receptor genes and their participation in the development of metabolic syndrome.
(This article belongs to the Special Issue Molecular Mechanisms of Obesity and Metabolic Diseases)

16 pages, 841 KB  
Article
An Alternative Estimator for Poisson–Inverse-Gaussian Regression: The Modified Kibria–Lukman Estimator
by Rasha A. Farghali, Adewale F. Lukman, Zakariya Algamal, Murat Genc and Hend Attia
Algorithms 2025, 18(3), 169; https://doi.org/10.3390/a18030169 - 14 Mar 2025
Abstract
Poisson regression is used to model count response variables. The method rests on the strict assumption that the mean and variance of the response variable are equal, whereas in practice overdispersion is common. In addition, under multicollinearity, the model parameter estimates obtained with the maximum likelihood estimator are adversely affected. This paper introduces a new biased estimator that extends the modified Kibria–Lukman estimator to the Poisson–Inverse-Gaussian regression model to deal with overdispersion and multicollinearity in the data. The superiority of the proposed estimator over existing biased estimators is established in terms of matrix and scalar mean square error. Moreover, the performance of the proposed estimator is examined through a simulation study. Finally, on a real dataset, the superiority of the proposed estimator over other estimators is demonstrated.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
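For context, the classical Kibria–Lukman construction the paper extends (the modified version differs in details we do not reproduce here) applies ridge-type shrinkage on both sides of the maximum likelihood estimator: with weight matrix Ŵ from the iteratively reweighted fit and biasing parameter k > 0,

$$ \hat{\beta}_{\mathrm{KL}} \;=\; \bigl(X^{\top}\hat{W}X + kI\bigr)^{-1}\bigl(X^{\top}\hat{W}X - kI\bigr)\,\hat{\beta}_{\mathrm{ML}}. $$

Like ridge regression, it accepts a small bias in exchange for a large variance reduction when X⊤ŴX is ill-conditioned, which is exactly the multicollinearity setting the paper targets.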

20 pages, 1435 KB  
Article
Modified Kibria–Lukman Estimator for the Conway–Maxwell–Poisson Regression Model: Simulation and Application
by Nasser A. Alreshidi, Masad A. Alrasheedi, Adewale F. Lukman, Hleil Alrweili and Rasha A. Farghali
Mathematics 2025, 13(5), 794; https://doi.org/10.3390/math13050794 - 27 Feb 2025
Abstract
This study presents a novel estimator that combines the Kibria–Lukman and ridge estimators to address the challenges of multicollinearity in Conway–Maxwell–Poisson (COMP) regression models. The Conventional COMP Maximum Likelihood Estimator (CMLE) is notably susceptible to the adverse effects of multicollinearity, underscoring the necessity for alternative estimation strategies. We comprehensively compare the proposed COMP Modified Kibria–Lukman estimator (CMKLE) against existing methodologies to mitigate multicollinearity effects. Through rigorous Monte Carlo simulations and real-world applications, our results demonstrate that the CMKLE exhibits superior resilience to multicollinearity while consistently achieving lower mean squared error (MSE) values. Additionally, our findings underscore the critical role of larger sample sizes in enhancing estimator performance, particularly in the presence of high multicollinearity and over-dispersion. Importantly, the CMKLE outperforms traditional estimators, including the CMLE, in predictive accuracy, reinforcing the imperative for judicious selection of estimation techniques in statistical modeling.
(This article belongs to the Special Issue Application of Regression Models, Analysis and Bayesian Statistics)

34 pages, 2580 KB  
Article
Bayesian Estimation of Generalized Log-Linear Poisson Item Response Models for Fluency Scores Using brms and Stan
by Nils Myszkowski and Martin Storme
J. Intell. 2025, 13(3), 26; https://doi.org/10.3390/jintelligence13030026 - 23 Feb 2025
Abstract
Divergent thinking tests are popular instruments for measuring a person's creativity. They often involve scoring fluency, which refers to the count of ideas generated in response to a prompt. The two-parameter Poisson counts model (2PPCM), a generalization of the Rasch Poisson counts model (RPCM) that includes discrimination parameters, has been proposed as a useful approach for analyzing fluency scores in creativity tasks, but its estimation has so far been presented in the context of commercial generalized structural equation modeling (GSEM) software (e.g., Mplus). Here, we show how the 2PPCM (and RPCM) can be estimated in a Bayesian multilevel regression framework and interpreted using the R package brms, which provides an interface to the Stan programming language. We illustrate this using an example dataset containing fluency scores for three tasks and 202 participants. We discuss model specification, estimation, convergence, fit, and comparisons. Furthermore, we provide instructions on plotting item response functions, comparing models, calculating overdispersion and reliability, and extracting factor scores.
(This article belongs to the Special Issue Analysis of a Divergent Thinking Dataset)
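A plausible statement of the two models (our paraphrase; the paper gives the exact parameterization): with person ability θ_p, item easiness b_i, and item discrimination a_i, the expected fluency count λ_pi satisfies

$$ \text{RPCM:}\quad \log \lambda_{pi} = \theta_p + b_i, \qquad\qquad \text{2PPCM:}\quad \log \lambda_{pi} = a_i\,\theta_p + b_i, $$

so the RPCM is the special case a_i = 1 for all items, which is what makes the two models directly comparable within a single multilevel framework.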

13 pages, 788 KB  
Article
Spatial Cluster Detection Under Dependent Random Environmental Effects
by Walguen Oscar and Jean Vaillant
Mathematics 2025, 13(3), 430; https://doi.org/10.3390/math13030430 - 27 Jan 2025
Abstract
This paper develops a new approach for the detection of spatial clusters in the presence of random environmental effects and covariates when the observed data consist of counts over a regular grid. Such data are frequently overdispersed and spatially dependent. Overdispersion and spatial dependence must be taken into account in the modeling; otherwise, the classical scan statistics method may lead to the detection of false clusters. We therefore consider that the observed counts are generated by a Cox process, allowing for overdispersion and spatial correlation. The environmental effects here represent unobserved covariates, as opposed to observed covariates whose observations enter the model via the link function. These random effects are modeled by means of a spatial copula with margins following a Gamma distribution. We then prove that the counts are dependent and negative binomial, and we propose a spatial cluster detection test based on data augmentation techniques. It is worth noting that our model also includes the case of independent effects, for which the counts are independent and negative binomial. An illustration of these spatial scan techniques is provided by a Black Leaf Streak Disease (BLSD) dataset from Martinique, French West Indies. The comparison of our model with Poisson models, with or without covariates, demonstrates the importance of our approach in avoiding false clusters.
(This article belongs to the Special Issue Applied Statistics in Real-World Problems)
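The negative binomial claim follows the standard Gamma–Poisson mixture argument (a sketch of the marginal calculation only, not the paper's copula construction): if Y | Λ = λ ~ Poisson(μλ) with Λ ~ Gamma(α, α), so that E(Λ) = 1, then

$$ P(Y=y) \;=\; \int_{0}^{\infty}\frac{e^{-\mu\lambda}(\mu\lambda)^{y}}{y!}\,\frac{\alpha^{\alpha}\lambda^{\alpha-1}e^{-\alpha\lambda}}{\Gamma(\alpha)}\,d\lambda \;=\; \frac{\Gamma(y+\alpha)}{\Gamma(\alpha)\,y!}\left(\frac{\alpha}{\alpha+\mu}\right)^{\alpha}\left(\frac{\mu}{\alpha+\mu}\right)^{y}, $$

a negative binomial pmf with Var(Y) = μ + μ²/α > μ. The random environmental effect is thus what generates the overdispersion, and the spatial copula couples the effects across grid cells to produce the dependence.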

30 pages, 3766 KB  
Article
An Interpretable Machine Learning-Based Hurdle Model for Zero-Inflated Road Crash Frequency Data Analysis: Real-World Assessment and Validation
by Moataz Bellah Ben Khedher and Dukgeun Yun
Appl. Sci. 2024, 14(23), 10790; https://doi.org/10.3390/app142310790 - 21 Nov 2024
Abstract
Road traffic crashes pose significant economic and public health burdens, necessitating an in-depth understanding of crash causation and its links to underlying factors. This study introduces a machine learning-based hurdle model framework tailored for analyzing zero-inflated crash frequency data, addressing the limitations of traditional statistical models like the Poisson and negative binomial models, which struggle with zero-inflation and overdispersion. The research employs a two-stage modeling process using CatBoost. The first stage uses binary classification to identify road segments with potential crash occurrences, applying a customized loss function to tackle data imbalance. The second stage predicts crash frequency, also utilizing a customized loss function for count data. SHapley Additive exPlanations (SHAP) analysis interprets the model outcomes, providing insights into factors affecting crash likelihood and frequency. This study validates the model's performance with real-world crash data from 2011 to 2015 in South Korea, demonstrating superior accuracy in both the classification and regression stages compared to other machine learning algorithms and traditional models. These findings have significant implications for traffic safety research and policymaking, offering stakeholders a more accurate and interpretable tool for crash data analysis to develop targeted safety interventions.
(This article belongs to the Section Transportation and Future Mobility)
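A compact sketch of the two-stage idea, assuming simulated data and near-default CatBoost settings rather than the paper's tuned models and customized loss functions:

```python
# Hedged sketch: classifier for crash occurrence, then a Poisson-loss
# regressor on crash-positive segments, combined hurdle-style.
import numpy as np
from catboost import CatBoostClassifier, CatBoostRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 6))                     # stand-in segment features
occurs = rng.uniform(size=n) < 0.25             # 75% of segments crash-free
counts = np.where(occurs, rng.poisson(3, size=n) + 1, 0)

clf = CatBoostClassifier(iterations=200, verbose=False,
                         auto_class_weights="Balanced")  # class imbalance
clf.fit(X, (counts > 0).astype(int))

reg = CatBoostRegressor(iterations=200, loss_function="Poisson",
                        verbose=False)
reg.fit(X[counts > 0], counts[counts > 0])

# Hurdle-style expected count: P(any crash) x predicted positive count.
p_pos = clf.predict_proba(X)[:, 1]
e_count = p_pos * reg.predict(X, prediction_type="Exponent")
print(e_count[:5])
```

SHAP values for either stage can then be computed with the shap package, which is how the paper separates the factors that govern whether a segment has crashes at all from the factors that govern how many.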

15 pages, 595 KB  
Article
A New Computational Algorithm for Assessing Overdispersion and Zero-Inflation in Machine Learning Count Models with Python
by Luiz Paulo Lopes Fávero, Alexandre Duarte and Helder Prado Santos
Computers 2024, 13(4), 88; https://doi.org/10.3390/computers13040088 - 27 Mar 2024
Abstract
This article provides an overview of count data and count models, explores zero inflation, introduces likelihood ratio tests, and explains how the Vuong test can be used as a model selection criterion for assessing overdispersion. The motivation for this work was to create a Vuong test implementation from scratch in the Python programming language. This implementation supports our objective of enhancing the accessibility and applicability of the Vuong test in real-world scenarios, providing a valuable contribution to the academic community, since Python previously lacked an implementation of this statistical test.
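For orientation, here is an illustrative from-scratch sketch in the spirit of the article (not the authors' implementation): the Vuong statistic compares the pointwise log-likelihoods of two fitted models.

```python
# Hedged sketch: Vuong closeness test for two fitted statsmodels count
# models; positive z favors the first model.
import numpy as np
from scipy import stats
import statsmodels.api as sm

def vuong(res1, res2):
    m = (res1.model.loglikeobs(res1.params)
         - res2.model.loglikeobs(res2.params))
    z = np.sqrt(len(m)) * m.mean() / m.std(ddof=1)
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(3)
x = rng.normal(size=400)
X = sm.add_constant(x)
mu = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(2, 2 / (2 + mu))      # overdispersed counts
poisson_fit = sm.Poisson(y, X).fit(disp=False)
nb_fit = sm.NegativeBinomial(y, X).fit(disp=False)
z, pval = vuong(nb_fit, poisson_fit)
print(f"Vuong z = {z:.2f}, p = {pval:.4f}")     # z > 0 favors NB here
```

This is the basic, uncorrected statistic; degrees-of-freedom corrections are commonly applied in practice.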

17 pages, 1277 KB  
Article
Analysis of Road Infrastructure and Traffic Factors Influencing Crash Frequency: Insights from Generalised Poisson Models
by Muhammad Wisal Khattak, Hans De Backer, Pieter De Winne, Tom Brijs and Ali Pirdavani
Infrastructures 2024, 9(3), 47; https://doi.org/10.3390/infrastructures9030047 - 4 Mar 2024
Abstract
This research utilises statistical modelling to explore the impact of roadway infrastructure elements, primarily those related to cross-section design, on crash occurrences in urban areas. Cross-section design is an important step in the roadway geometric design process, as it influences key operational characteristics such as capacity, cost, safety, and the overall functionality of the transport system. Evaluating the influence of cross-section design on these factors is relatively straightforward, except for its impact on safety, especially in urban areas, where findings in the existing literature are inconsistent, indicating a need for further investigation. Negative binomial (NB) models are typically employed for such investigations, given their ability to account for over-dispersion in crash data. However, the low sample mean and under-dispersion occasionally exhibited by crash data can restrict their applicability. Generalised Poisson (GP) models have been proposed as a potential alternative to NB models. This research applies GP models to develop crash prediction models for urban road segments. Simultaneously, NB models are developed to enable a comparative assessment between the two modelling frameworks. A six-year dataset encompassing crash counts, traffic volume, and cross-section design data reveals a significant association between crash frequency and infrastructure design variables. Specifically, lane width, number of lanes, road separation, on-street parking, and posted speed limit are significant predictors of crash frequency. Comparative analysis shows that GP models outperform NB models for crash types with a low sample mean and yield similar results for others. Overall, this study provides valuable insights into the relationship between road infrastructure design and crash frequency in urban environments and offers a statistical approach for predicting crash frequency that balances interpretability and predictive power, making it more viable for practitioners and road authorities to apply in real-world road safety scenarios.
(This article belongs to the Special Issue Road Systems and Engineering)
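For reference, the generalised Poisson pmf in Consul's form (which we take to be the variant meant; the paper may parameterize differently) is

$$ P(Y=y) \;=\; \frac{\theta\,(\theta+\lambda y)^{y-1}\,e^{-\theta-\lambda y}}{y!}, \qquad E(Y)=\frac{\theta}{1-\lambda}, \qquad \operatorname{Var}(Y)=\frac{\theta}{(1-\lambda)^{3}}, $$

so λ > 0 yields over-dispersion, λ < 0 under-dispersion, and λ = 0 recovers the Poisson. This single extra parameter is what lets GP models cover the low-mean, under-dispersed crash types where NB models are restricted.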

13 pages, 896 KB  
Article
On Comparing and Assessing Robustness of Some Popular Non-Stationary BINAR(1) Models
by Yuvraj Sunecher and Naushad Mamode Khan
J. Risk Financial Manag. 2024, 17(3), 100; https://doi.org/10.3390/jrfm17030100 - 28 Feb 2024
Abstract
Intra-day transactions of stocks from competing firms in the financial markets are known to exhibit significant volatility and over-dispersion. This paper proposes some bivariate integer-valued auto-regressive models of order 1 (BINAR(1)) that are useful for analyzing such financial series. These models were constructed under both time-variant and time-invariant conditions to capture features such as over-dispersion and non-stationarity in time series of counts. However, the quest for the most robust BINAR(1) models is still ongoing. This paper considers specifically the family of BINAR(1)s with a non-diagonal cross-correlation structure and with unpaired innovation series; these assumptions reduce the number of parameters to be estimated. Simulation experiments are performed to assess both the consistency of the estimators and the robustness of the BINAR(1)s under mis-specified innovation distributions. The proposed BINAR(1)s are applied to analyze the intra-day transaction series of AstraZeneca and Ericsson. Diagnostic measures such as root mean square errors (RMSEs) and Akaike information criteria (AICs) are also considered. The paper concludes that the BINAR(1)s with negative binomial and COM–Poisson innovations are among the most suitable models for analyzing over-dispersed intra-day transaction series of stocks.
(This article belongs to the Special Issue Financial Valuation and Econometrics)
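A small simulation sketch of the ingredients named above, assuming the standard binomial-thinning construction of INAR-type models with negative binomial innovations (parameter values are illustrative, not the paper's estimates):

```python
# Hedged sketch: two INAR(1)-type count series with binomial thinning and
# over-dispersed negative binomial innovations.
import numpy as np

rng = np.random.default_rng(4)
T, a1, a2 = 1000, 0.4, 0.3          # thinning (survival) probabilities

def nb_innov(size, mean, disp):
    """NB innovations with the given mean and dispersion parameter."""
    p = disp / (disp + mean)
    return rng.negative_binomial(disp, p, size=size)

x = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    # binomial thinning a∘X_{t-1}: each past count survives with prob a
    surv1 = rng.binomial(x[t - 1, 0], a1)
    surv2 = rng.binomial(x[t - 1, 1], a2)
    eps = nb_innov(2, mean=1.0, disp=0.8)   # over-dispersed innovations
    x[t] = surv1 + eps[0], surv2 + eps[1]

for j in range(2):
    print(f"series {j}: mean={x[:, j].mean():.2f}  var={x[:, j].var():.2f}")
```

Cross-correlation between the two series in a full BINAR(1) comes from the joint structure of the innovation pair; here the innovations are drawn independently, which corresponds to the unpaired-innovation simplification the paper discusses.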
