Article

Goodness-of-Fit Test for the Bivariate Hermite Distribution

by Pablo González-Albornoz 1 and Francisco Novoa-Muñoz 2,*
1 Departamento de Matemática, Universidad Adventista de Chile, Chillán 3780000, Chile
2 Departamento de Estadística, Universidad del Bío-Bío, Concepción 4051381, Chile
* Author to whom correspondence should be addressed.
Submission received: 20 October 2022 / Revised: 24 November 2022 / Accepted: 25 November 2022 / Published: 22 December 2022
(This article belongs to the Special Issue Statistical Methods and Applications)

Abstract: This paper studies a goodness-of-fit test for the bivariate Hermite distribution. Specifically, we propose and study a Cramér–von Mises-type test based on the empirical probability generating function. The bootstrap can be used to consistently estimate the null distribution of the test statistic. A simulation study investigates the performance of the bootstrap approach for finite sample sizes.

1. Introduction

Testing the goodness of fit (gof) of given observations to a probabilistic model is a crucial aspect of data analysis.
Since Pearson proposed and analyzed the chi-square test in 1900, new gof tests have continually been constructed and applied to continuous and discrete data. To mention only some of the most recent publications, there are, for example, the works of Ebner and Henze [1], Górecki, Horváth, and Kokoszka [2], Puig and Weiß [3], Arnastauskaitè et al. [4], Dörr, Ebner, and Henze [5], Kolkiewicz, Rice, and Xie [6], Milonas et al. [7], Di Noia et al. [8], and Erlemann and Lindqvist [9].
Because count data can appear in different circumstances, the present investigation is oriented to gof in the discrete case, specifically, in the bivariate Hermite distribution (BHD).
In the univariate configuration, the Hermite distribution arises as a linear combination of the form $Y = X_1 + 2X_2$, where $X_1$ and $X_2$ are independent Poisson random variables. The distinguishing property of the univariate Hermite distribution (UHD) is its flexibility for modeling count data that present multimodality as well as many zeros, which is called zero inflation. It also allows for modeling data in which the overdispersion is moderate, that is, the variance is greater than the expected value. It was McKendrick in [10] who modeled a phagocytic experiment (bacteria counts in leukocytes) through the UHD, obtaining a more satisfactory model than with the Poisson distribution. In practice, however, bivariate count data emerge in several different disciplines, and there the BHD plays an important role for zero-inflated data, for example, the number of accidents in two different periods [11].
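These properties can be seen directly by simulating the univariate construction $Y = X_1 + 2X_2$. The following is a minimal Python sketch (the function name and parameter values are ours, chosen for illustration):

```python
import numpy as np

# Minimal sketch: Y = X1 + 2*X2 with X1 ~ Poisson(a), X2 ~ Poisson(b) independent.
# Then E[Y] = a + 2b and Var[Y] = a + 4b, so Var[Y] > E[Y] whenever b > 0,
# which is the moderate overdispersion mentioned above.
def rhermite(n, a, b, rng):
    x1 = rng.poisson(a, size=n)
    x2 = rng.poisson(b, size=n)
    return x1 + 2 * x2

rng = np.random.default_rng(0)
y = rhermite(100_000, a=0.6, b=0.5, rng=rng)
print(y.mean(), y.var())  # sample mean near 1.6, sample variance near 2.6
```

With these parameters the variance-to-mean ratio is $(a+4b)/(a+2b) = 2.6/1.6 > 1$, and the sample also contains a visible excess of zeros relative to a Poisson with the same mean.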
The only gof test related to the Hermite distribution that we have found so far is the one developed by Meintanis and Bassiakos in [12]; however, that test is for univariate data. To the best of our knowledge, there is no literature on gof tests for the BHD.
The purpose of this paper is to propose and study a consistent gof test for the bivariate Hermite distribution.
According to Novoa-Muñoz in [13], the probability generating function (pgf) characterizes the distribution of a random vector and can be estimated consistently by the empirical probability generating function (epgf); the proposed test is a function of the epgf. This test compares the epgf of the data with an estimator of the pgf of the BHD. As is well known, to establish the rejection region we need to know the distribution of the test statistic.
The resulting test statistic is of the Cramér–von Mises type, and it was not possible to calculate its null distribution explicitly for finite sample sizes, which is why simulation techniques are used. We therefore decided to approximate the null distribution of the statistic by means of a parametric bootstrap.
Because the properties of the proposed test are asymptotic (see, for example, [14]), a simulation study was carried out with the purpose of evaluating the behavior of the test for samples of finite size.
The present work is organized as follows: In Section 2, we present some preliminary results that will be used in the following sections, together with the definition of the BHD and some of its properties. In Section 3, the proposed statistic is presented. Section 4 is devoted to the bootstrap estimator and its approximation to the null distribution of the statistic. Section 5 presents the results of a simulation study, the power of the test, and an application to a set of real data.
Before ending this section, we introduce some notation: $F_A \bigwedge_\delta F_B$ denotes a mixture (compounding) distribution, where $F_A$ represents the original distribution and $F_B$ the mixing distribution (i.e., the distribution of $\delta$) [15]; all vectors are row vectors, and $x'$ is the transpose of the row vector $x$; for any vector $x$, $x_k$ denotes its $k$th coordinate and $\|x\|$ its Euclidean norm; $\mathbb{N}_0=\{0,1,2,3,\dots\}$; $I\{A\}$ denotes the indicator function of the set $A$; $P_\theta$ denotes the probability law of the BHD with parameter $\theta$; $E_\theta$ denotes expectation with respect to $P_\theta$; $P_*$ and $E_*$ denote the conditional probability law and expectation, given the data $(X_1,Y_1),\dots,(X_n,Y_n)$, respectively; all limits in this work are taken as $n\to\infty$; $\xrightarrow{L}$ denotes convergence in distribution; $\xrightarrow{a.s.}$ denotes almost sure convergence; for a sequence of random variables or random elements $\{C_n\}$ and $\epsilon\in\mathbb{R}$, $C_n=O_P(n^{\epsilon})$ means that $n^{-\epsilon}C_n$ is bounded in probability, $C_n=o_P(n^{\epsilon})$ means that $n^{-\epsilon}C_n\xrightarrow{P}0$, and $C_n=o(n^{\epsilon})$ means that $n^{-\epsilon}C_n\xrightarrow{a.s.}0$; finally, $\mathcal{H}=L^2\big([0,1]^2,\varrho\big)$ denotes the separable Hilbert space of measurable functions $\varphi:[0,1]^2\to\mathbb{R}$ such that $\|\varphi\|_{\mathcal{H}}^2=\int_0^1\!\!\int_0^1\varphi^2(t)\,\varrho(t)\,dt<\infty$.

2. Preliminaries

Several definitions for the BHD have been given (see, for example, Kocherlakota and Kocherlakota in [16]). In this paper, we will work with the following one, which has received more attention in the statistical literature (see, for example, Papageorgiou et al. in [17]; Kemp et al. in [18]).
Let $X=(X_1,X_2)$ have the bivariate Poisson distribution with parameters $\delta\lambda_1$, $\delta\lambda_2$, and $\delta\lambda_3$ (for more details on this distribution, see, for example, Johnson et al. in [19]); then, $X \bigwedge_\delta N(\mu,\sigma^2)$ has the BHD. Kocherlakota in [20] obtained its pgf, which is given by
$$v(t;\theta)=\exp\left\{\mu\lambda+\tfrac{1}{2}\sigma^2\lambda^2\right\}, \qquad (1)$$
where $t=(t_1,t_2)$, $\theta=(\mu,\sigma^2,\lambda_1,\lambda_2,\lambda_3)$, $\lambda=\lambda_1(t_1-1)+\lambda_2(t_2-1)+\lambda_3(t_1t_2-1)$, and $\mu>\sigma^2(\lambda_i+\lambda_3)$, $i=1,2$.
From the pgf of the BHD, Kocherlakota and Kocherlakota [16] obtained the probability mass function of the BHD, which is given by
$$f(r,s)=\frac{\lambda_1^r\lambda_2^s}{r!\,s!}\,M(\gamma)\sum_{k=0}^{\min(r,s)}\binom{r}{k}\binom{s}{k}\,k!\,\xi^k\,P_{r+s-k}(\gamma),$$
where $M(x)$ is the moment-generating function of the $N(\mu,\sigma^2)$ distribution, $P_r(x)$ is a polynomial of degree $r$ in $x$, $\gamma=-(\lambda_1+\lambda_2+\lambda_3)$, and $\xi=\lambda_3/(\lambda_1\lambda_2)$.
Remark 1.
If $\lambda_3=0$, then the probability function reduces to
$$f(r,s)=\frac{\lambda_1^r\lambda_2^s}{r!\,s!}\,M(-\lambda_1-\lambda_2)\,P_{r+s}(-\lambda_1-\lambda_2).$$
Remark 2.
If $X$ is a random vector with a bivariate Hermite distribution with parameter $\theta$, this will be denoted $X\sim BH(\theta)$, where $\theta\in\Theta$, and the parameter space is
$$\Theta=\left\{(\mu,\sigma^2,\lambda_1,\lambda_2,\lambda_3)\in\mathbb{R}^5 \,/\, \mu>\sigma^2(\lambda_i+\lambda_3),\ \lambda_i>\lambda_3\geq 0,\ i=1,2\right\}.$$
Let $X_1=(X_{11},X_{12}),\,X_2=(X_{21},X_{22}),\dots,X_n=(X_{n1},X_{n2})$ be independent and identically distributed (iid) random vectors defined on a probability space $(\Omega,\mathcal{A},P)$ and taking values in $\mathbb{N}_0^2$. In what follows, let
$$v_n(t)=\frac{1}{n}\sum_{i=1}^n t_1^{X_{i1}}t_2^{X_{i2}}, \qquad t=(t_1,t_2)\in W,$$
denote the epgf of $X_1,X_2,\dots,X_n$, for some appropriate $W\subseteq\mathbb{R}^2$.
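The epgf is simply a sample average, as the following Python sketch shows (our own helper, for illustration; the sample is stored as an $(n,2)$ array of counts):

```python
import numpy as np

# Sketch: empirical pgf v_n(t) = (1/n) * sum_i t1^{X_{i1}} * t2^{X_{i2}}.
def epgf(t1, t2, x):
    # x: array of shape (n, 2) with nonnegative integer entries
    return np.mean(t1 ** x[:, 0] * t2 ** x[:, 1])

x = np.array([[0, 1], [2, 0], [1, 1]])
print(epgf(0.5, 0.5, x))  # (0.5 + 0.25 + 0.25) / 3
print(epgf(1.0, 1.0, x))  # 1.0, as for any pgf
```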
The following section is dedicated to developing the statistic proposed in this study, and for this it is essential to know the result presented below, whose proof can be found in [14]:
Proposition 1.
Let $X_1,\dots,X_n$ be iid copies of a random vector $X=(X_1,X_2)\in\mathbb{N}_0^2$. Let $v(t)=E\left(t_1^{X_1}t_2^{X_2}\right)$ be the pgf of $X$, defined on $W\subseteq\mathbb{R}^2$. Let $0\leq b_j\leq c_j<\infty$, $j=1,2$, be such that $Q=[b_1,c_1]\times[b_2,c_2]\subseteq W$; then,
$$\sup_{t\in Q}|v_n(t)-v(t)|\xrightarrow{a.s.}0.$$

3. The Test Statistic and Its Asymptotic Null Distribution

Let $X_1=(X_{11},X_{12}),\,X_2=(X_{21},X_{22}),\dots,X_n=(X_{n1},X_{n2})$ be iid copies of a random vector $X=(X_1,X_2)\in\mathbb{N}_0^2$. Based on the sample $X_1,X_2,\dots,X_n$, the objective is to test the hypothesis
$$H_0:\ (X_1,X_2)\sim BH(\theta),\ \text{for some }\theta\in\Theta,$$
against the alternative
$$H_1:\ (X_1,X_2)\nsim BH(\theta),\ \forall\,\theta\in\Theta.$$
To this end, we will resort to some properties of the pgf, which allow us to propose the following test.
According to Proposition 1, a consistent estimator of the pgf is the epgf. If $H_0$ is true and $\hat{\theta}_n$ is a consistent estimator of $\theta$, then $v(t;\hat{\theta}_n)$ consistently estimates the population pgf. Since the distribution of $X=(X_1,X_2)$ is uniquely determined by its pgf $v(t)$, $t=(t_1,t_2)\in[0,1]^2$, a reasonable test of $H_0$ should reject the null hypothesis for large values of $V_{n,w}(\hat{\theta}_n)$, defined by
$$V_{n,w}(\hat{\theta}_n)=\int_0^1\!\!\int_0^1 V_n^2(t;\hat{\theta}_n)\,w(t)\,dt, \qquad (2)$$
where
$$V_n(t;\theta)=\sqrt{n}\left\{v_n(t)-v(t;\theta)\right\},$$
$\hat{\theta}_n=\hat{\theta}_n(X_1,X_2,\dots,X_n)$ is a consistent estimator of $\theta$, and $w(t)$ is a measurable weight function such that $w(t)\geq 0$, $\forall\,t\in[0,1]^2$, and
$$\int_0^1\!\!\int_0^1 w(t)\,dt<\infty. \qquad (3)$$
The assumption (3) on $w$ ensures that the double integral in (2) is finite for each fixed $n$. Now, to determine which values of $V_{n,w}(\hat{\theta}_n)$ are large, we must calculate its null distribution, or at least an approximation to it. Since the null distribution of $V_{n,w}(\hat{\theta}_n)$ is unknown, we first try to estimate it by means of its asymptotic null distribution. In order to derive it, we will assume that the estimator $\hat{\theta}_n$ satisfies the following regularity condition:
Assumption 1.
Under $H_0$, if $\theta=(\mu,\sigma^2,\lambda_1,\lambda_2,\lambda_3)\in\Theta$ denotes the true parameter value, then
$$\sqrt{n}\left(\hat{\theta}_n-\theta\right)'=\frac{1}{\sqrt{n}}\sum_{i=1}^n \ell(X_i;\theta)'+o_P(1),$$
where $\ell:\mathbb{N}_0^2\times\Theta\to\mathbb{R}^5$ is such that $E_\theta\,\ell(X_1;\theta)=0$ and $J(\theta)=E_\theta\left\{\ell(X_1;\theta)'\ell(X_1;\theta)\right\}<\infty$.
Assumption 1 is fulfilled by most commonly used estimators; see [16,21].
The next result gives the asymptotic null distribution of V n , w ( θ ^ n ) .
Theorem 1.
Let $X_1,\dots,X_n$ be iid copies of $X=(X_1,X_2)\sim BH(\theta)$. Suppose that Assumption 1 holds.
Then,
$$V_{n,w}(\hat{\theta}_n)=\|W_n\|_{\mathcal{H}}^2+o_P(1), \qquad (4)$$
where $W_n(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^n V_0(X_i,\theta;t)$, with
$$V_0(X_i,\theta;t)=t_1^{X_{i1}}t_2^{X_{i2}}-v(t;\theta)\left[1+\left(\lambda,\ \tfrac{1}{2}\lambda^2,\ \eta(t_1-1),\ \eta(t_2-1),\ \eta(t_1t_2-1)\right)\ell(X_i;\theta)'\right],$$
$i=1,\dots,n$, and $\eta=\mu+\sigma^2\lambda$. Moreover,
$$V_{n,w}(\hat{\theta}_n)\xrightarrow{L}\sum_{j\geq 1}\lambda_j\chi_{1j}^2,$$
where $\chi_{11}^2,\chi_{12}^2,\dots$ are independent $\chi^2$ variates with one degree of freedom and $\{\lambda_j\}$ is the set of non-null eigenvalues of the operator $C(\theta)$, defined on the function space $\left\{\tau:\mathbb{N}_0^2\to\mathbb{R}\ \text{such that}\ E_\theta\,\tau^2(X)<\infty,\ \forall\,\theta\in\Theta\right\}$, as follows:
$$C(\theta)\tau(x)=E_\theta\left\{h(x,Y;\theta)\tau(Y)\right\},$$
where
$$h(x,y;\theta)=\int_0^1\!\!\int_0^1 V_0(x;\theta;t)\,V_0(y;\theta;t)\,w(t)\,dt. \qquad (5)$$
Proof. 
By definition, $V_{n,w}(\hat{\theta}_n)=\|V_n(\hat{\theta}_n)\|_{\mathcal{H}}^2$. Note that
$$V_n(t;\hat{\theta}_n)=\frac{1}{\sqrt{n}}\sum_{i=1}^n V(X_i;\hat{\theta}_n;t), \quad\text{with}\quad V(X_i;\theta;t)=t_1^{X_{i1}}t_2^{X_{i2}}-v(t;\theta). \qquad (6)$$
By a Taylor expansion of $V(X_i;\hat{\theta}_n;t)$ around $\hat{\theta}_n=\theta$,
$$V_n(t;\hat{\theta}_n)=\frac{1}{\sqrt{n}}\sum_{i=1}^n V(X_i;\theta;t)+\frac{1}{n}\sum_{i=1}^n Q^{(1)}(X_i;\theta;t)\,\sqrt{n}\left(\hat{\theta}_n-\theta\right)'+q_n, \qquad (7)$$
where $q_n=\frac{1}{2}\sqrt{n}\left(\hat{\theta}_n-\theta\right)\left\{\frac{1}{n}\sum_{i=1}^n Q^{(2)}(X_i;\tilde{\theta};t)\right\}\left(\hat{\theta}_n-\theta\right)'$, $\tilde{\theta}=\alpha\hat{\theta}_n+(1-\alpha)\theta$ for some $0<\alpha<1$, $Q^{(1)}(x;\vartheta;t)$ is the vector of first derivatives and $Q^{(2)}(x;\vartheta;t)$ is the matrix of second derivatives of $V(x;\vartheta;t)$ with respect to $\vartheta$.
Thus, considering (3) results in
$$E_\theta\left\|Q_j^{(1)}(X_1;\theta;t)\right\|_{\mathcal{H}}^2<\infty, \quad j=1,2,\dots,5. \qquad (8)$$
Using the Markov inequality and (8), we have
$$P_\theta\left(\left\|\frac{1}{n}\sum_{i=1}^n Q_j^{(1)}(X_i;\theta;t)-E_\theta\,Q_j^{(1)}(X_1;\theta;t)\right\|_{\mathcal{H}}>\varepsilon\right)\leq\frac{1}{n\varepsilon^2}\,E_\theta\left\|Q_j^{(1)}(X_1;\theta;t)\right\|_{\mathcal{H}}^2\to 0, \quad j=1,2,\dots,5.$$
Then,
$$\frac{1}{n}\sum_{i=1}^n Q^{(1)}(X_i;\theta;t)\xrightarrow{P}E_\theta\,Q^{(1)}(X_1;\theta;t),$$
where $E_\theta\,Q^{(1)}(X_1;\theta;t)=-v(t;\theta)\left(\lambda,\ \tfrac{1}{2}\lambda^2,\ \eta(t_1-1),\ \eta(t_2-1),\ \eta(t_1t_2-1)\right)$.
Since $\|q_n\|_{\mathcal{H}}=o_P(1)$, using Assumption 1, (7) can be written as
$$V_n(t;\hat{\theta}_n)=S_n(t;\theta)+s_n,$$
where $\|s_n\|_{\mathcal{H}}=o_P(1)$ and
$$S_n(t;\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^n\left\{V(X_i;\theta;t)+E_\theta\,Q^{(1)}(X_1;\theta;t)\,\ell(X_i;\theta)'\right\}.$$
On the other hand, observe that
$$\|S_n(\theta)\|_{\mathcal{H}}^2=\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n h(X_i,X_j;\theta),$$
where $h(x,y;\theta)$ is defined in (5) and satisfies $h(x,y;\theta)=h(y,x;\theta)$, $E_\theta\,h^2(X_1,X_2;\theta)<\infty$, $E_\theta|h(X_1,X_1;\theta)|<\infty$ and $E_\theta\,h(X_1,X_2;\theta)=0$. Thus, from Theorem 6.4.1.B in Serfling [22],
$$\|S_n(\theta)\|_{\mathcal{H}}^2\xrightarrow{L}\sum_{j\geq 1}\lambda_j\chi_{1j}^2,$$
where $\chi_{11}^2,\chi_{12}^2,\dots$ and the set $\{\lambda_j\}$ are as defined in the statement of the theorem. In particular, $\|S_n(\theta)\|_{\mathcal{H}}^2=O_P(1)$, which implies (4). □
The asymptotic null distribution of $V_{n,w}(\hat{\theta}_n)$ depends on the unknown true value of the parameter $\theta$; therefore, in practice, it does not provide a useful solution to the problem of estimating the null distribution of the test statistic. This could be solved by replacing $\theta$ with $\hat{\theta}_n$.
However, a greater difficulty is determining the set $\{\lambda_j\}_{j\geq 1}$: in most cases, calculating the eigenvalues of an operator is not a simple task and, in our case, we must also obtain the expression $h(x,y;\theta)$, which is not easy to find, since it depends on the function $\ell$, which usually does not have a simple expression.
Thus, in the next section, we consider another way to approximate the null distribution of the statistical test, the parametric bootstrap method.

4. The Bootstrap Estimator

An alternative way to estimate the null distribution is through the parametric bootstrap method.
Let $X_1,\dots,X_n$ be iid taking values in $\mathbb{N}_0^2$. Assume that $\hat{\theta}_n=\hat{\theta}_n(X_1,\dots,X_n)\in\Theta$. Let $X_1^*,\dots,X_n^*$ be iid from a population with distribution $BH(\hat{\theta}_n)$, given $X_1,\dots,X_n$, and let $V_{n,w}^*(\hat{\theta}_n^*)$ be the bootstrap version of $V_{n,w}(\hat{\theta}_n)$, obtained by replacing $X_1,\dots,X_n$ and $\hat{\theta}_n=\hat{\theta}_n(X_1,\dots,X_n)$ with $X_1^*,\dots,X_n^*$ and $\hat{\theta}_n^*=\hat{\theta}_n(X_1^*,\dots,X_n^*)$, respectively, in the expression of $V_{n,w}(\hat{\theta}_n)$. Let $P_*$ denote the bootstrap conditional probability law, given $X_1,\dots,X_n$. In order to show that the bootstrap consistently estimates the null distribution of $V_{n,w}(\hat{\theta}_n)$, we will make the following assumption, which is a bit stronger than Assumption 1.
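The bootstrap scheme just described reduces, in practice, to a generic resampling loop. In the Python sketch below, `sample_bh`, `estimate_theta`, and `v_nw` are placeholder names of ours for a $BH(\hat{\theta}_n)$ sampler, the estimator of $\theta$, and the test statistic; only the structure of the procedure is shown:

```python
import numpy as np

# Skeleton of the parametric bootstrap approximation of the null distribution.
# sample_bh, estimate_theta and v_nw are user-supplied callables (placeholders):
#   sample_bh(n, theta, rng) -> (n, 2) array of iid draws from BH(theta)
#   estimate_theta(x)        -> consistent estimate of theta
#   v_nw(x, theta)           -> value of the test statistic
def bootstrap_pvalue(x, sample_bh, estimate_theta, v_nw, B=500, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = x.shape[0]
    theta_hat = estimate_theta(x)
    v_obs = v_nw(x, theta_hat)
    exceed = 0
    for _ in range(B):
        x_star = sample_bh(n, theta_hat, rng)    # iid from BH(theta_hat)
        theta_star = estimate_theta(x_star)      # re-estimate on each resample
        if v_nw(x_star, theta_star) >= v_obs:
            exceed += 1
    return exceed / B                            # reject H0 if p* <= alpha
```

Re-estimating $\theta$ on every bootstrap sample is essential: it mimics the estimation step of the original statistic and is what makes the approximation of the null distribution consistent.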
Assumption 2.
Assumption 1 holds and the functions $\ell$ and $J$ satisfy:
(1) $\sup_{\vartheta\in\Theta_0}E_\vartheta\left\{\|\ell(X;\vartheta)\|^2\,I\left\{\|\ell(X;\vartheta)\|>\gamma\right\}\right\}\to 0$ as $\gamma\to\infty$, where $\Theta_0\subseteq\Theta$ is an open neighborhood of $\theta$;
(2) $\ell(x;\vartheta)$ is continuous as a function of $\vartheta$ at $\vartheta=\theta$, and $J(\vartheta)$ is finite $\forall\,\vartheta\in\Theta_0$.
As stated after Assumption 1, Assumption 2 is not restrictive since it is fulfilled by commonly used estimators.
The next theorem shows that the bootstrap distribution of V n , w ( θ ^ n ) consistently estimates its null distribution.
Theorem 2.
Let $X_1,\dots,X_n$ be iid copies of a random vector $X=(X_1,X_2)\in\mathbb{N}_0^2$. Suppose that Assumption 2 holds and that $\hat{\theta}_n=\theta+o(1)$, for some $\theta\in\Theta$. Then,
$$\sup_{x\in\mathbb{R}}\left|P_*\left\{V_{n,w}^*(\hat{\theta}_n^*)\leq x\right\}-P_\theta\left\{V_{n,w}(\hat{\theta}_n)\leq x\right\}\right|\xrightarrow{a.s.}0.$$
Proof. 
By definition, $V_{n,w}^*(\hat{\theta}_n^*)=\|V_n^*(\hat{\theta}_n^*)\|_{\mathcal{H}}^2$, with
$$V_n^*(t;\hat{\theta}_n^*)=\frac{1}{\sqrt{n}}\sum_{i=1}^n V(X_i^*;\hat{\theta}_n^*;t)$$
and $V(X;\theta;t)$ defined in (6).
Following steps similar to those in the proof of Theorem 1, it can be seen that $V_{n,w}^*(\hat{\theta}_n^*)=\|W_n^*\|_{\mathcal{H}}^2+o_{P_*}(1)$, where $W_n^*(t)$ is defined as $W_n(t)$ with $X_i$ and $\theta$ replaced by $X_i^*$ and $\hat{\theta}_n$, respectively.
To derive the result, we first check that assumptions (i)–(iii) in Theorem 1.1 of Kundu et al. [23] hold.
Observe that
$$Y_n^*(t)=\sum_{i=1}^n Y_{ni}^*(t),$$
where
$$Y_{ni}^*(t)=\frac{1}{\sqrt{n}}V_0(X_i^*;\hat{\theta}_n;t), \quad i=1,\dots,n.$$
Clearly, $E_*\,Y_{ni}^*=0$ and $E_*\|Y_{ni}^*\|_{\mathcal{H}}^2<\infty$. Let $K_n$ be the covariance kernel of $Y_n^*$, which, by the SLLN, satisfies
$$K_n(u,v)=E_*\left\{Y_n^*(u)Y_n^*(v)\right\}=E_*\left\{V_0(X_1^*;\hat{\theta}_n;u)V_0(X_1^*;\hat{\theta}_n;v)\right\}\xrightarrow{a.s.}E_\theta\left\{V_0(X_1;\theta;u)V_0(X_1;\theta;v)\right\}=K(u,v).$$
Moreover, let $Z$ be a zero-mean Gaussian process on $\mathcal{H}$ whose covariance operator $C$ is characterized by
$$\langle Cf,h\rangle_{\mathcal{H}}=\mathrm{cov}\left(\langle Z,f\rangle_{\mathcal{H}},\langle Z,h\rangle_{\mathcal{H}}\right)=\int_{[0,1]^4}K(u,v)f(u)h(v)w(u)w(v)\,du\,dv.$$
From the central limit theorem in Hilbert spaces (see, for example, van der Vaart and Wellner [24]), it follows that $Y_n=\frac{1}{\sqrt{n}}\sum_{i=1}^n V_0(X_i;\theta;t)\xrightarrow{L}Z$ on $\mathcal{H}$ when the data are iid copies of the random vector $X\sim BH(\theta)$.
Let $C_n$ denote the covariance operator of $Y_n^*$ and let $\{e_k:k\geq 0\}$ be an orthonormal basis of $\mathcal{H}$. By the dominated convergence theorem,
$$\lim_n\langle C_n e_k,e_l\rangle_{\mathcal{H}}=\lim_n\int_{[0,1]^4}K_n(u,v)e_k(u)e_l(v)w(u)w(v)\,du\,dv=\langle Ce_k,e_l\rangle_{\mathcal{H}}.$$
Setting $a_{kl}=\langle Ce_k,e_l\rangle_{\mathcal{H}}$ in the aforementioned Theorem 1.1 proves that condition (i) holds. To verify condition (ii), by using the monotone convergence theorem, Parseval's relation, and the dominated convergence theorem, we obtain
$$\lim_n\sum_{k=0}^{\infty}\langle C_n e_k,e_k\rangle_{\mathcal{H}}=\lim_n\sum_{k=0}^{\infty}\int_{[0,1]^4}K_n(u,v)e_k(u)e_k(v)w(u)w(v)\,du\,dv=\sum_{k=0}^{\infty}\langle Ce_k,e_k\rangle_{\mathcal{H}}=\sum_{k=0}^{\infty}a_{kk}=\sum_{k=0}^{\infty}E_\theta\langle Z,e_k\rangle_{\mathcal{H}}^2=E_\theta\|Z\|_{\mathcal{H}}^2<\infty.$$
To prove condition (iii), we first notice that
$$\left|\langle Y_{ni}^*,e_k\rangle_{\mathcal{H}}\right|\leq\frac{M}{\sqrt{n}}, \quad i=1,\dots,n,\ \forall\,n, \quad\text{where } 0<M<\infty.$$
From the above inequality, for each fixed $\varepsilon>0$,
$$E_*\left\{\langle Y_{ni}^*,e_k\rangle_{\mathcal{H}}^2\,I\left\{\left|\langle Y_{ni}^*,e_k\rangle_{\mathcal{H}}\right|>\varepsilon\right\}\right\}=0$$
for sufficiently large $n$. This proves condition (iii). Therefore, $Y_n^*\xrightarrow{L}Z$ in $\mathcal{H}$, a.s. Now, the result follows from the continuous mapping theorem. □
From Theorem 2, the test function
$$\Psi_V^*=\begin{cases}1, & \text{if } V_{n,w}(\hat{\theta}_n)\geq v_{n,w,\alpha}^*,\\[2pt] 0, & \text{otherwise},\end{cases}$$
or, equivalently, the test that rejects $H_0$ when $p^*=P_*\{V_{n,w}^*(\hat{\theta}_n^*)\geq V_{obs}\}\leq\alpha$, is asymptotically correct in the sense that, when $H_0$ is true, $\lim P_\theta(\Psi_V^*=1)=\alpha$, where $v_{n,w,\alpha}^*=\inf\{x:P_*(V_{n,w}^*(\hat{\theta}_n^*)\geq x)\leq\alpha\}$ is the upper $\alpha$ percentile of the bootstrap distribution of $V_{n,w}(\hat{\theta}_n)$ and $V_{obs}$ is the observed value of the test statistic.

5. Numerical Results and Discussion

According to Novoa-Muñoz and Jiménez-Gamero in [14], the properties of the statistic $V_{n,w}(\hat{\theta}_n)$ are asymptotic; that is, they describe the behavior of the proposed test for large samples. To study the quality of the bootstrap approximation for samples of finite size, a simulation experiment was carried out. In this section, we describe this experiment and summarize the results obtained.
It is necessary to emphasize, as mentioned in the Introduction, that to the best of our knowledge there is no other goodness-of-fit test for the bivariate Hermite distribution with which we can make a comparison. Therefore, the simulation study is limited to the test presented in this investigation.
On the other hand, all the computational calculations made in this paper were carried out through codes written in the R language [25].
To calculate $V_{n,w}(\hat{\theta}_n)$, it is necessary to give an explicit form to the weight function $w$. Here, the following is taken into account:
$$w(t;a_1,a_2)=t_1^{a_1}t_2^{a_2}. \qquad (9)$$
Observe that the only restrictions imposed on the weight function are that $w$ be positive almost everywhere in $[0,1]^2$ and that it satisfy (3). The function $w(t;a_1,a_2)$ given in (9) meets these conditions whenever $a_i>-1$, $i=1,2$. Hence,
$$V_{n,w}(\hat{\theta}_n)=n\int_0^1\!\!\int_0^1\left\{\frac{1}{n}\sum_{i=1}^n t_1^{X_{i1}}t_2^{X_{i2}}-\exp\left(\hat{\mu}\hat{\lambda}+\tfrac{1}{2}\hat{\sigma}^2\hat{\lambda}^2\right)\right\}^2 t_1^{a_1}t_2^{a_2}\,dt_1\,dt_2.$$
It was not possible to find an explicit expression for the statistic $V_{n,w}(\hat{\theta}_n)$, so the cubature package of R [25] was used to calculate it numerically.
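Since no closed form is available, the double integral can also be approximated on a grid. The sketch below is a midpoint-rule approximation in Python with illustrative parameter values (names and defaults are ours; the paper itself uses an R integration package); the pgf of the BHD is re-implemented inline so that the block is self-contained:

```python
import numpy as np

# Sketch: midpoint-rule approximation of V_{n,w} over [0,1]^2 with weight
# w(t) = t1^a1 * t2^a2.
def pgf_bh(t1, t2, mu, sigma2, l1, l2, l3):
    lam = l1 * (t1 - 1) + l2 * (t2 - 1) + l3 * (t1 * t2 - 1)
    return np.exp(mu * lam + 0.5 * sigma2 * lam ** 2)

def v_nw(x, theta, a1=1.0, a2=1.0, m=200):
    n = x.shape[0]
    t = (np.arange(m) + 0.5) / m                  # midpoints of an m x m grid
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    # empirical pgf on the grid: average over the n sample points
    vn = np.mean(t1[..., None] ** x[:, 0] * t2[..., None] ** x[:, 1], axis=-1)
    diff = vn - pgf_bh(t1, t2, *theta)
    w = t1 ** a1 * t2 ** a2
    return n * np.mean(diff ** 2 * w)             # grid mean ~ double integral

x = np.array([[0, 1], [2, 0], [1, 1]])
theta = (1.5, 1.0, 0.5, 0.5, 0.0)                 # (mu, sigma2, l1, l2, l3)
print(v_nw(x, theta))                             # nonnegative by construction
```

A fixed grid keeps the cost predictable inside a bootstrap loop; adaptive cubature trades that predictability for a controlled integration error.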

5.1. Simulated Data

In order to approximate the null distribution of the statistic $V_{n,w}(\hat{\theta}_n)$ for samples of finite sizes 30, 50, and 70 from a $BH(\theta)$, $\theta=(\mu,\sigma^2,\lambda_1,\lambda_2,\lambda_3)$, the pgf (1) with $\lambda_3=0$ was utilized. The combinations of parameters were chosen in such a way that $\mu>\sigma^2(\lambda_i+\lambda_3)$, $i=1,2$.
The selected values of the other parameters were $\mu\in\{1.0,1.5,2.0\}$, $\sigma^2\in\{0.8,1.0\}$, $\lambda_1\in\{0.10,0.25,0.50,0.75,1.00\}$, and $\lambda_2\in\{0.20,0.25,0.50,0.75\}$.
The selected values of $\lambda_1$ and $\lambda_2$ were not greater than 1, since the Hermite distribution is characterized as being zero-inflated.
To estimate the parameter $\theta$, we used the maximum likelihood method given in Kocherlakota and Kocherlakota [16]. Then, we approximated the bootstrap p-values of the proposed test with the weight function given in (9) for $(a_1,a_2)\in\{(0,0),(1,0),(0,1),(1,1),(5,1),(1,5),(5,5)\}$, generating $B=500$ bootstrap samples.
The above procedure was repeated 1000 times, and the fractions of estimated p-values less than or equal to 0.05 and 0.10 were computed; these are the estimated type I error probabilities for $\alpha=0.05$ and $\alpha=0.10$.
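The final step of this procedure is just the fraction of p-values at or below each nominal level, as in the following Python sketch (the uniform draws are our placeholders for the 1000 bootstrap p-values):

```python
import numpy as np

# Sketch: estimated type I error = fraction of simulated p-values <= alpha.
def type1_error(pvalues, levels=(0.05, 0.10)):
    p = np.asarray(pvalues, dtype=float)
    return {a: float(np.mean(p <= a)) for a in levels}

print(type1_error([0.01, 0.04, 0.06, 0.20]))  # {0.05: 0.5, 0.1: 0.75}
rng = np.random.default_rng(1)
print(type1_error(rng.uniform(size=1000)))    # close to the nominal levels
```

Under a correct null approximation the p-values are approximately uniform, so the estimated fractions should be close to 0.05 and 0.10, which is exactly what the tables below examine.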
The results obtained are presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 for the different pairs $(a_1,a_2)$. In each table, the rows are ordered by increasing $\mu$ and $\sigma^2$; within each $\mu$, by increasing $\lambda_1$; and, within each $\lambda_1$, by increasing $\lambda_2$. From these results, we can conclude that the parametric bootstrap method provides good approximations to the null distribution of $V_{n,w}(\hat{\theta}_n)$ in most of the cases considered.
It is seen that the values of $a_1$ and $a_2$ of the weight function affect the bootstrap estimates of the p-values.
From the tables, it is clear that the bootstrap p-values approach the nominal value as $n$ increases. These approximations are better when $a_1=a_2$. In particular, when $a_1=a_2$ is small (less than 5), the bootstrap p-values approach the nominal value from below; the opposite happens when $a_1=a_2$ is fairly large (greater than or equal to 5). Table 4 shows the best results, the weight function with $a_1=a_2=1$ presenting the best p-value estimates.
Unfortunately, we could not find a closed form for our statistic $V_{n,w}(\hat{\theta}_n)$; in order to calculate it, we used the cubature package of the software R [25]. This had a serious impact on the computation time, since the execution time of the simulations increased by at least 30%.

5.2. The Power of a Hypothesis Test

To study the power, we repeated the previous experiment for samples of size $n=50$ and, for the weight function, we used the values of $a_1$ and $a_2$ that yielded the best results in the study of the type I error. The alternative distributions we used are detailed below:
  • the bivariate binomial distribution $BB(m;p_1,p_2,p_3)$, where $p_1+p_2-p_3\leq 1$, $p_1\geq p_3$, $p_2\geq p_3$ and $p_3>0$;
  • the bivariate Poisson distribution $BP(\lambda_1,\lambda_2,\lambda_3)$, where $\lambda_1>\lambda_3$, $\lambda_2>\lambda_3>0$;
  • the bivariate logarithmic series distribution $BLS(\lambda_1,\lambda_2,\lambda_3)$, where $0<\lambda_1+\lambda_2+\lambda_3<1$;
  • the bivariate negative binomial distribution $BNB(\nu;\gamma_0,\gamma_1,\gamma_2)$, where $\nu\in\mathbb{N}$, $\gamma_0>\gamma_2$, $\gamma_1>\gamma_2$ and $\gamma_2>0$;
  • the bivariate Neyman type A distribution $BNTA(\lambda;\lambda_1,\lambda_2,\lambda_3)$, where $0<\lambda_1+\lambda_2+\lambda_3\leq 1$;
  • bivariate Poisson distribution mixtures of the form $pBP(\theta)+(1-p)BP(\lambda)$, where $0<p<1$, denoted by $BPP(p;\theta,\lambda)$.
Table 8 displays the alternatives considered and the estimated power for the nominal significance level $\alpha=0.05$. Analyzing this table, we can conclude that all the tests considered, denoted by $V(a_1,a_2)$, are able to detect the alternatives studied with good power, giving better results in the cases where $a_1=a_2$. The best result was achieved for $a_1=a_2=1$, as expected, since the same occurred in the study of the type I error.

5.3. Real Data Set

Now, the proposed test is applied to a real data set. The data set comprises the numbers of accidents in two different years, presented in [16], where $X$ is the number of accidents in the first period and $Y$ the number in the second period. Table 9 shows the real data set.
The p-value obtained from the statistic $V_{n,w}(\hat{\theta}_n)$ of the proposed test, with $a_1=1$ and $a_2=0$, applied to the real values is 0.838; therefore, we do not reject the null hypothesis, that is, the data seem to follow a BHD. This is consistent with the results presented by Kemp and Papageorgiou in [26], who performed a $\chi^2$ goodness-of-fit test, obtaining a p-value of 0.3078.

Author Contributions

Conceptualization, F.N.-M.; methodology, F.N.-M. and P.G.-A.; software, F.N.-M. and P.G.-A.; validation, F.N.-M. and P.G.-A.; formal analysis, F.N.-M. and P.G.-A.; investigation, F.N.-M. and P.G.-A.; resources, F.N.-M.; data curation, P.G.-A.; writing—original draft preparation, F.N.-M. and P.G.-A.; writing—review and editing, F.N.-M. and P.G.-A.; visualization, F.N.-M. and P.G.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was supported by Universidad del Bío-Bío, DICREA [2220529 IF/R] and Universidad Adventista de Chile, DI [2021-139 II], Chile.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The corresponding author would like to thank research project DIUBB 2220529 IF/R and Fondo de Apoyo a la Participación a Eventos Internacionales (FAPEI) at Universidad del Bío-Bío, Chile. He also thanks the anonymous reviewers and the editor of this journal for their valuable time and their careful comments and suggestions with which the quality of this paper has been improved.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ebner, B.; Henze, N. Tests for multivariate normality-a critical review with emphasis on weighted L2-statistics. TEST 2020, 29, 845–892. [Google Scholar] [CrossRef]
  2. Górecki, T.; Horváth, L.; Kokoszka, P. Tests of Normality of Functional Data. Int. Stat. Rev. 2020, 88, 677–697. [Google Scholar] [CrossRef]
  3. Puig, P.; Weiß, C.H. Some goodness-of-fit tests for the Poisson distribution with applications in biodosimetry. Comput. Stat. Data Anal. 2020, 144, 106878. [Google Scholar] [CrossRef]
  4. Arnastauskaitè, J.; Ruzgas, T.; Bražènas, M. A New Goodness of Fit Test for Multivariate Normality and Comparative Simulation Study. Mathematics 2021, 9, 3003. [Google Scholar]
  5. Dörr, P.; Ebner, B.; Henze, N. A new test of multivariate normality by a double estimation in a characterizing PDE. Metrika 2021, 84, 401–427. [Google Scholar]
  6. Kolkiewicz, A.; Rice, G.; Xie, Y. Projection pursuit based tests of normality with functional data. J. Stat. Plan. Inference 2021, 211, 326–339. [Google Scholar] [CrossRef]
  7. Milonas, D.; Ruzgas, T.; Venclovas, Z.; Jievaltas, M.; Joniau, S. The significance of prostate specific antigen persistence in prostate cancer risk groups on long-term oncological outcomes. Cancers 2021, 13, 2453. [Google Scholar] [CrossRef] [PubMed]
  8. Di Noia, A.; Barabesi, L.; Marcheselli, M.; Pisani, C.; Pratelli, L. Goodness-of-fit test for count distributions with finite second moment. J. Nonparametric Stat. 2022. [Google Scholar] [CrossRef]
  9. Erlemann, R.; Lindqvist, B.H. Conditional Goodness-of-Fit Tests for Discrete Distributions. J. Stat. Theory Pract. 2022. [Google Scholar] [CrossRef]
  10. McKendrick, A.G. Applications of Mathematics to Medical Problems. Proc. Edinb. Math. Soc. 1926, 44, 98–130. [Google Scholar] [CrossRef]
  11. Cresswell, W.L.; Froggatt, P. The Causation of Bus Driver Accidents; Oxford University Press: Oxford, UK, 1963; p. 316. [Google Scholar]
  12. Meintanis, S.; Bassiakos, Y. Goodness-of-fit tests for additively closed count models with an application to the generalized Hermite distribution. Sankhya 2005, 67, 538–552. [Google Scholar]
  13. Novoa-Muñoz, F. Goodness-of-fit tests for the bivariate Poisson distribution. Commun. Stat. Simul. Comput. 2019. [Google Scholar] [CrossRef]
  14. Novoa-Muñoz, F.; Jiménez-Gamero, M.D. Testing for the bivariate Poisson distribution. Metrika 2013, 77, 771–793. [Google Scholar] [CrossRef]
  15. Johnson, N.L.; Kemp, A.W.; Kotz, S. Univariate Discrete Distributions, 3rd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2005. [Google Scholar]
  16. Kocherlakota, S.; Kocherlakota, K. Bivariate Discrete Distributions; John Wiley & Sons: Hoboken, NJ, USA, 1992. [Google Scholar]
  17. Papageorgiou, H.; Kemp, C.D.; Loukas, S. Some methods of estimation for the bivariate Hermite distribution. Biometrika 1983, 70, 479–484. [Google Scholar] [CrossRef]
  18. Kemp, C.D.; Kemp, A.W. Rapid estimation for discrete distributions. Statistician 1988, 37, 243–255. [Google Scholar] [CrossRef]
  19. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Discrete Multivariate Distributions; Wiley: New York, NY, USA, 1997. [Google Scholar]
  20. Kocherlakota, S. On the compounded bivariate Poisson distribution: A unified approach. Ann. Inst. Stat. Math. 1988, 40, 61–76. [Google Scholar] [CrossRef]
  21. Papageorgiou, H.; Loukas, S. Conditional even point estimation for bivariate discrete distributions. Commun. Stat. Theory Methods 1988, 17, 3403–3412. [Google Scholar] [CrossRef]
  22. Serfling, R.J. Approximation Theorems of Mathematical Statistics; Wiley: New York, NY, USA, 1980. [Google Scholar]
  23. Kundu, S.; Majumdar, S.; Mukherjee, K. Central limits theorems revisited. Stat. Probab. Lett. 2000, 47, 265–275. [Google Scholar] [CrossRef]
  24. Van der Vaart, J.A.; Wellner, J.A. Weak Convergence and Empirical Processes; Springer: New York, NY, USA, 1996. [Google Scholar]
  25. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. 2021. Available online: https://www.R-project.org/ (accessed on 1 July 2019).
  26. Kemp, C.D.; Papageorgiou, H. Bivariate Hermite distributions. Sankhya 1982, 44, 269–280. [Google Scholar]
Table 1. Simulation results for the probability of type I error for $a_1=0$ and $a_2=0$.

θ                               n = 30           n = 50           n = 70
                              α=0.05  α=0.1    α=0.05  α=0.1    α=0.05  α=0.1
(1.0, 0.8, 0.10, 0.20, 0.00)   0.012   0.053    0.029   0.069    0.037   0.081
(1.0, 0.8, 0.25, 0.25, 0.00)   0.027   0.067    0.037   0.064    0.043   0.094
(1.0, 0.8, 0.50, 0.20, 0.00)   0.016   0.062    0.046   0.073    0.047   0.087
(1.0, 0.8, 0.50, 0.50, 0.00)   0.025   0.063    0.042   0.076    0.044   0.091
(1.5, 1.0, 0.50, 0.50, 0.00)   0.010   0.064    0.035   0.078    0.042   0.089
(1.5, 1.0, 0.50, 0.75, 0.00)   0.010   0.065    0.036   0.084    0.041   0.084
(1.5, 1.0, 0.75, 0.25, 0.00)   0.017   0.071    0.038   0.087    0.043   0.088
(1.5, 1.0, 1.00, 0.25, 0.00)   0.027   0.076    0.039   0.090    0.042   0.092
(2.0, 1.0, 0.25, 0.75, 0.00)   0.017   0.067    0.038   0.082    0.047   0.089
(2.0, 1.0, 0.50, 0.25, 0.00)   0.011   0.067    0.037   0.088    0.045   0.091
(2.0, 1.0, 0.75, 0.25, 0.00)   0.029   0.070    0.035   0.087    0.043   0.089
Table 2. Simulation results for the probability of type I error for $a_1=1$ and $a_2=0$.

θ                               n = 30           n = 50           n = 70
                              α=0.05  α=0.1    α=0.05  α=0.1    α=0.05  α=0.1
(1.0, 0.8, 0.10, 0.20, 0.00)   0.010   0.039    0.025   0.073    0.043   0.088
(1.0, 0.8, 0.25, 0.25, 0.00)   0.025   0.073    0.037   0.088    0.041   0.104
(1.0, 0.8, 0.50, 0.20, 0.00)   0.027   0.072    0.041   0.083    0.045   0.086
(1.0, 0.8, 0.50, 0.50, 0.00)   0.035   0.053    0.042   0.072    0.045   0.101
(1.5, 1.0, 0.50, 0.50, 0.00)   0.011   0.064    0.031   0.080    0.038   0.085
(1.5, 1.0, 0.50, 0.75, 0.00)   0.019   0.065    0.034   0.078    0.039   0.080
(1.5, 1.0, 0.75, 0.25, 0.00)   0.025   0.081    0.038   0.085    0.042   0.084
(1.5, 1.0, 1.00, 0.25, 0.00)   0.037   0.074    0.035   0.085    0.040   0.086
(2.0, 1.0, 0.25, 0.75, 0.00)   0.027   0.071    0.034   0.082    0.047   0.089
(2.0, 1.0, 0.50, 0.25, 0.00)   0.011   0.077    0.031   0.084    0.044   0.086
(2.0, 1.0, 0.75, 0.25, 0.00)   0.019   0.080    0.035   0.085    0.044   0.087
Table 3. Simulation results for the probability of type I error for a₁ = 0 and a₂ = 1.

| θ | n = 30, α = 0.05 | n = 30, α = 0.1 | n = 50, α = 0.05 | n = 50, α = 0.1 | n = 70, α = 0.05 | n = 70, α = 0.1 |
|---|---|---|---|---|---|---|
| (1.0, 0.8, 0.10, 0.20, 0.00) | 0.014 | 0.044 | 0.029 | 0.067 | 0.043 | 0.088 |
| (1.0, 0.8, 0.25, 0.25, 0.00) | 0.028 | 0.068 | 0.039 | 0.079 | 0.042 | 0.084 |
| (1.0, 0.8, 0.50, 0.20, 0.00) | 0.019 | 0.063 | 0.042 | 0.083 | 0.057 | 0.092 |
| (1.0, 0.8, 0.50, 0.50, 0.00) | 0.029 | 0.063 | 0.045 | 0.075 | 0.054 | 0.089 |
| (1.5, 1.0, 0.50, 0.50, 0.00) | 0.011 | 0.066 | 0.039 | 0.079 | 0.042 | 0.089 |
| (1.5, 1.0, 0.50, 0.75, 0.00) | 0.013 | 0.070 | 0.043 | 0.082 | 0.043 | 0.087 |
| (1.5, 1.0, 0.75, 0.25, 0.00) | 0.017 | 0.081 | 0.042 | 0.089 | 0.043 | 0.092 |
| (1.5, 1.0, 1.00, 0.25, 0.00) | 0.037 | 0.086 | 0.045 | 0.091 | 0.045 | 0.093 |
| (2.0, 1.0, 0.25, 0.75, 0.00) | 0.047 | 0.077 | 0.048 | 0.084 | 0.047 | 0.089 |
| (2.0, 1.0, 0.50, 0.25, 0.00) | 0.014 | 0.077 | 0.037 | 0.089 | 0.043 | 0.093 |
| (2.0, 1.0, 0.75, 0.25, 0.00) | 0.027 | 0.080 | 0.041 | 0.097 | 0.044 | 0.096 |
Table 4. Simulation results for the probability of type I error for a₁ = 1 and a₂ = 1.

| θ | n = 30, α = 0.05 | n = 30, α = 0.1 | n = 50, α = 0.05 | n = 50, α = 0.1 | n = 70, α = 0.05 | n = 70, α = 0.1 |
|---|---|---|---|---|---|---|
| (1.0, 0.8, 0.10, 0.20, 0.00) | 0.016 | 0.073 | 0.024 | 0.086 | 0.048 | 0.092 |
| (1.0, 0.8, 0.25, 0.25, 0.00) | 0.032 | 0.058 | 0.037 | 0.088 | 0.049 | 0.091 |
| (1.0, 0.8, 0.50, 0.20, 0.00) | 0.024 | 0.064 | 0.043 | 0.085 | 0.048 | 0.089 |
| (1.0, 0.8, 0.50, 0.50, 0.00) | 0.033 | 0.072 | 0.043 | 0.086 | 0.049 | 0.093 |
| (1.5, 1.0, 0.50, 0.50, 0.00) | 0.030 | 0.072 | 0.038 | 0.088 | 0.046 | 0.090 |
| (1.5, 1.0, 0.50, 0.75, 0.00) | 0.033 | 0.071 | 0.042 | 0.084 | 0.047 | 0.098 |
| (1.5, 1.0, 0.75, 0.25, 0.00) | 0.036 | 0.097 | 0.039 | 0.097 | 0.049 | 0.099 |
| (1.5, 1.0, 1.00, 0.25, 0.00) | 0.039 | 0.088 | 0.046 | 0.090 | 0.049 | 0.093 |
| (2.0, 1.0, 0.25, 0.75, 0.00) | 0.031 | 0.087 | 0.044 | 0.092 | 0.048 | 0.099 |
| (2.0, 1.0, 0.50, 0.25, 0.00) | 0.035 | 0.068 | 0.039 | 0.081 | 0.047 | 0.093 |
| (2.0, 1.0, 0.75, 0.25, 0.00) | 0.037 | 0.080 | 0.045 | 0.088 | 0.049 | 0.096 |
Table 5. Simulation results for the probability of type I error for a₁ = 1 and a₂ = 5.

| θ | n = 30, α = 0.05 | n = 30, α = 0.1 | n = 50, α = 0.05 | n = 50, α = 0.1 | n = 70, α = 0.05 | n = 70, α = 0.1 |
|---|---|---|---|---|---|---|
| (1.0, 0.8, 0.10, 0.20, 0.00) | 0.014 | 0.037 | 0.032 | 0.075 | 0.051 | 0.093 |
| (1.0, 0.8, 0.25, 0.25, 0.00) | 0.023 | 0.074 | 0.053 | 0.090 | 0.060 | 0.113 |
| (1.0, 0.8, 0.50, 0.20, 0.00) | 0.036 | 0.101 | 0.062 | 0.110 | 0.064 | 0.117 |
| (1.0, 0.8, 0.50, 0.50, 0.00) | 0.023 | 0.080 | 0.042 | 0.107 | 0.063 | 0.109 |
| (1.5, 1.0, 0.50, 0.50, 0.00) | 0.022 | 0.081 | 0.037 | 0.111 | 0.046 | 0.108 |
| (1.5, 1.0, 0.50, 0.75, 0.00) | 0.039 | 0.095 | 0.048 | 0.108 | 0.056 | 0.108 |
| (1.5, 1.0, 0.75, 0.25, 0.00) | 0.034 | 0.108 | 0.048 | 0.107 | 0.054 | 0.108 |
| (1.5, 1.0, 1.00, 0.25, 0.00) | 0.037 | 0.107 | 0.059 | 0.109 | 0.054 | 0.107 |
| (2.0, 1.0, 0.25, 0.75, 0.00) | 0.048 | 0.106 | 0.056 | 0.108 | 0.054 | 0.106 |
| (2.0, 1.0, 0.50, 0.25, 0.00) | 0.025 | 0.107 | 0.047 | 0.108 | 0.045 | 0.108 |
| (2.0, 1.0, 0.75, 0.25, 0.00) | 0.043 | 0.107 | 0.045 | 0.107 | 0.043 | 0.106 |
Table 6. Simulation results for the probability of type I error for a₁ = 5 and a₂ = 1.

| θ | n = 30, α = 0.05 | n = 30, α = 0.1 | n = 50, α = 0.05 | n = 50, α = 0.1 | n = 70, α = 0.05 | n = 70, α = 0.1 |
|---|---|---|---|---|---|---|
| (1.0, 0.8, 0.10, 0.20, 0.00) | 0.015 | 0.040 | 0.032 | 0.062 | 0.042 | 0.081 |
| (1.0, 0.8, 0.25, 0.25, 0.00) | 0.034 | 0.076 | 0.045 | 0.101 | 0.048 | 0.104 |
| (1.0, 0.8, 0.50, 0.20, 0.00) | 0.028 | 0.084 | 0.048 | 0.073 | 0.053 | 0.089 |
| (1.0, 0.8, 0.50, 0.50, 0.00) | 0.028 | 0.069 | 0.045 | 0.079 | 0.054 | 0.098 |
| (1.5, 1.0, 0.50, 0.50, 0.00) | 0.019 | 0.071 | 0.035 | 0.078 | 0.042 | 0.099 |
| (1.5, 1.0, 0.50, 0.75, 0.00) | 0.044 | 0.104 | 0.048 | 0.098 | 0.056 | 0.104 |
| (1.5, 1.0, 0.75, 0.25, 0.00) | 0.027 | 0.107 | 0.038 | 0.105 | 0.046 | 0.103 |
| (1.5, 1.0, 1.00, 0.25, 0.00) | 0.037 | 0.117 | 0.043 | 0.112 | 0.060 | 0.107 |
| (2.0, 1.0, 0.25, 0.75, 0.00) | 0.037 | 0.112 | 0.039 | 0.108 | 0.054 | 0.108 |
| (2.0, 1.0, 0.50, 0.25, 0.00) | 0.026 | 0.077 | 0.034 | 0.109 | 0.055 | 0.109 |
| (2.0, 1.0, 0.75, 0.25, 0.00) | 0.034 | 0.116 | 0.045 | 0.107 | 0.056 | 0.105 |
Table 7. Simulation results for the probability of type I error for a₁ = 5 and a₂ = 5.

| θ | n = 30, α = 0.05 | n = 30, α = 0.1 | n = 50, α = 0.05 | n = 50, α = 0.1 | n = 70, α = 0.05 | n = 70, α = 0.1 |
|---|---|---|---|---|---|---|
| (1.0, 0.8, 0.10, 0.20, 0.00) | 0.017 | 0.035 | 0.032 | 0.065 | 0.050 | 0.089 |
| (1.0, 0.8, 0.25, 0.25, 0.00) | 0.027 | 0.077 | 0.034 | 0.081 | 0.043 | 0.084 |
| (1.0, 0.8, 0.50, 0.20, 0.00) | 0.030 | 0.086 | 0.042 | 0.087 | 0.048 | 0.104 |
| (1.0, 0.8, 0.50, 0.50, 0.00) | 0.013 | 0.069 | 0.030 | 0.076 | 0.045 | 0.105 |
| (1.5, 1.0, 0.50, 0.50, 0.00) | 0.016 | 0.063 | 0.035 | 0.078 | 0.046 | 0.087 |
| (1.5, 1.0, 0.50, 0.75, 0.00) | 0.019 | 0.085 | 0.061 | 0.089 | 0.054 | 0.094 |
| (1.5, 1.0, 0.75, 0.25, 0.00) | 0.031 | 0.071 | 0.053 | 0.102 | 0.047 | 0.098 |
| (1.5, 1.0, 1.00, 0.25, 0.00) | 0.037 | 0.086 | 0.049 | 0.104 | 0.052 | 0.102 |
| (2.0, 1.0, 0.25, 0.75, 0.00) | 0.015 | 0.087 | 0.057 | 0.098 | 0.055 | 0.101 |
| (2.0, 1.0, 0.75, 0.25, 0.00) | 0.040 | 0.097 | 0.054 | 0.102 | 0.053 | 0.102 |
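Each entry in Tables 1–7 is a Monte Carlo estimate of the rejection rate under the null: samples of size n are generated from the model with parameter vector θ, the level-α test is applied to each, and the fraction of rejections over the replications is reported. The loop below sketches that estimation scheme. Since the paper's bootstrap Cramér–von Mises statistic is not reproduced here, `reject_h0` is a hypothetical stand-in (a two-sided large-sample z-test for a Poisson mean), used only to illustrate how a type I error column is produced.

```python
import math
import random

def poisson(lam, rng):
    """Draw a Poisson(lam) variate (Knuth's multiplication method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def reject_h0(sample, mu0=1.0):
    """Stand-in test: two-sided z-test for a Poisson mean at level 0.05.
    The paper applies its bootstrap Cramer-von Mises test at this step."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / math.sqrt(mu0 / n)
    return abs(z) > 1.959964  # two-sided critical value for alpha = 0.05

rng = random.Random(20221024)
M, n = 2000, 70  # Monte Carlo replications and sample size (cf. n = 70 columns)
rejections = sum(reject_h0([poisson(1.0, rng) for _ in range(n)])
                 for _ in range(M))
type1_error = rejections / M  # should sit near the nominal alpha = 0.05
print(f"estimated type I error: {type1_error:.3f}")
```

With M = 2000 replications the Monte Carlo standard error at α = 0.05 is about 0.005, which is why the tabulated rates hover around, rather than exactly at, the nominal level.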
Table 8. Simulation results for the power. The values are percentages, rounded to the nearest integer.

| Alternative | V(0,0) | V(1,0) | V(1,1) | V(1,5) | V(5,5) |
|---|---|---|---|---|---|
| BB(1; 0.41, 0.02, 0.01) | 87 | 81 | 89 | 81 | 85 |
| BB(1; 0.41, 0.03, 0.02) | 85 | 82 | 88 | 80 | 86 |
| BB(2; 0.61, 0.01, 0.01) | 93 | 84 | 98 | 83 | 92 |
| BB(1; 0.61, 0.03, 0.02) | 95 | 89 | 100 | 87 | 95 |
| BB(2; 0.71, 0.01, 0.01) | 94 | 86 | 100 | 85 | 93 |
| BP(1.00, 1.00, 0.25) | 85 | 76 | 89 | 77 | 82 |
| BP(1.00, 1.00, 0.50) | 84 | 77 | 91 | 72 | 85 |
| BP(1.00, 1.00, 0.75) | 87 | 75 | 92 | 73 | 83 |
| BP(1.50, 1.00, 0.31) | 87 | 77 | 93 | 75 | 87 |
| BP(1.50, 1.00, 0.92) | 86 | 76 | 92 | 77 | 87 |
| BLS(0.25, 0.15, 0.10) | 94 | 85 | 98 | 86 | 95 |
| BLS(5d/7, d/7, d/7) * | 91 | 85 | 100 | 84 | 90 |
| BLS(3d/4, d/8, d/8) * | 90 | 86 | 100 | 84 | 90 |
| BLS(7d/9, d/9, d/9) * | 94 | 86 | 100 | 83 | 93 |
| BLS(0.51, 0.01, 0.02) | 90 | 83 | 98 | 83 | 91 |
| BNB(1; 0.92, 0.97, 0.01) | 93 | 87 | 96 | 85 | 92 |
| BNB(1; 0.97, 0.97, 0.01) | 92 | 86 | 95 | 85 | 92 |
| BNB(1; 0.97, 0.97, 0.02) | 94 | 88 | 100 | 89 | 93 |
| BNB(1; 0.98, 0.98, 0.01) | 92 | 84 | 97 | 85 | 92 |
| BNB(1; 0.99, 0.99, 0.01) | 91 | 84 | 96 | 83 | 91 |
| BNTA(0.21; 0.01, 0.01, 0.98) | 93 | 86 | 98 | 85 | 92 |
| BNTA(0.24; 0.01, 0.01, 0.98) | 95 | 87 | 100 | 85 | 95 |
| BNTA(0.26; 0.01, 0.01, 0.97) | 93 | 85 | 97 | 86 | 93 |
| BNTA(0.26; 0.01, 0.01, 0.98) | 94 | 85 | 98 | 86 | 94 |
| BNTA(0.28; 0.01, 0.01, 0.97) | 93 | 86 | 96 | 86 | 94 |
| BPP(0.31; (0.2, 0.2, 0.1), (1.0, 1.0, 0.9)) | 76 | 70 | 82 | 72 | 77 |
| BPP(0.31; (0.2, 0.2, 0.1), (1.0, 1.2, 0.9)) | 77 | 71 | 84 | 71 | 76 |
| BPP(0.32; (0.2, 0.2, 0.1), (1.0, 1.0, 0.9)) | 78 | 71 | 84 | 71 | 76 |
| BPP(0.33; (0.2, 0.2, 0.1), (1.0, 1.0, 0.9)) | 78 | 70 | 85 | 70 | 77 |
| BPP(0.33; (0.2, 0.2, 0.1), (1.0, 1.1, 0.9)) | 76 | 71 | 83 | 70 | 78 |

* d = 1 − exp(−1) ≈ 0.63212.
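The power entries in Table 8 come from the same Monte Carlo scheme, except that the samples are drawn from the alternative distribution and the percentage of rejections is reported. For instance, the bivariate Poisson alternatives BP(θ₁, θ₂, θ₃) can be simulated by trivariate reduction, X = N₁ + N₃ and Y = N₂ + N₃ with independent Poisson counts, which gives Cov(X, Y) = θ₃. The sketch below illustrates this standard construction; the paper does not state which sampler it actually used.

```python
import math
import random

def poisson(lam, rng):
    """Draw a Poisson(lam) variate (Knuth's multiplication method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def rbivpois(t1, t2, t3, rng):
    """BP(t1, t2, t3) via trivariate reduction:
    X = N1 + N3, Y = N2 + N3, so Cov(X, Y) = t3."""
    n3 = poisson(t3, rng)
    return poisson(t1, rng) + n3, poisson(t2, rng) + n3

# Sanity check against BP(1.00, 1.00, 0.25) from Table 8:
rng = random.Random(7)
draws = [rbivpois(1.00, 1.00, 0.25, rng) for _ in range(20000)]
mx = sum(x for x, _ in draws) / len(draws)  # approx t1 + t3 = 1.25
my = sum(y for _, y in draws) / len(draws)  # approx t2 + t3 = 1.25
cov = sum((x - mx) * (y - my) for x, y in draws) / len(draws)  # approx 0.25
print(f"sample means: {mx:.3f}, {my:.3f}; sample covariance: {cov:.3f}")
```

The shared component N₃ is what makes the marginally Poisson counts positively correlated, which is exactly the dependence structure the V(a₁, a₂) statistics are asked to detect.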
Table 9. Real data: X is the number of accidents in one period and Y the number in another period.

| Y \ X | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 117 | 96 | 55 | 19 | 2 | 2 | 0 | 0 | 291 |
| 1 | 61 | 69 | 47 | 27 | 8 | 5 | 1 | 0 | 218 |
| 2 | 34 | 42 | 31 | 13 | 7 | 2 | 3 | 0 | 132 |
| 3 | 7 | 15 | 17 | 7 | 3 | 1 | 0 | 0 | 49 |
| 4 | 3 | 3 | 1 | 1 | 2 | 1 | 1 | 1 | 13 |
| 5 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| 6 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| 7 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
| Total | 224 | 226 | 150 | 68 | 23 | 11 | 5 | 1 | 708 |
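From the marginal totals printed in Table 9 one can verify the overdispersion that motivates fitting a Hermite model to these data: for both counts the sample variance exceeds the sample mean, which a Poisson marginal cannot accommodate but a Hermite marginal can. A quick computation, assuming the marginal frequencies exactly as printed (n = 708):

```python
# Marginal frequencies from Table 9 for counts 0..7 (n = 708 drivers)
x_counts = [224, 226, 150, 68, 23, 11, 5, 1]  # accidents in first period
y_counts = [291, 218, 132, 49, 13, 3, 1, 1]   # accidents in second period

def mean_var(counts):
    """Sample mean and (biased) sample variance from a count frequency table."""
    n = sum(counts)
    mean = sum(k * c for k, c in enumerate(counts)) / n
    var = sum(k * k * c for k, c in enumerate(counts)) / n - mean * mean
    return mean, var

mx, vx = mean_var(x_counts)
my, vy = mean_var(y_counts)
print(f"X: mean {mx:.3f}, variance {vx:.3f}")  # variance > mean: overdispersed
print(f"Y: mean {my:.3f}, variance {vy:.3f}")  # variance > mean: overdispersed
```

Both marginals show variance clearly above the mean (roughly 1.60 vs. 1.29 for X and 1.19 vs. 1.00 for Y), so a bivariate Hermite fit is at least plausible before the formal goodness-of-fit test is applied.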
González-Albornoz, P.; Novoa-Muñoz, F. Goodness-of-Fit Test for the Bivariate Hermite Distribution. Axioms 2023, 12, 7. https://doi.org/10.3390/axioms12010007
