Article

A Compound Poisson Perspective of Ewens–Pitman Sampling Model

by Emanuele Dolera 1,2,3 and Stefano Favaro 2,3,4,*
1 Department of Mathematics, University of Pavia, Via Adolfo Ferrata 5, 27100 Pavia, Italy
2 Collegio Carlo Alberto, Piazza V. Arbarello 8, 10122 Torino, Italy
3 IMATI-CNR “Enrico Magenes”, 27100 Pavia, Italy
4 Department of Economic and Social Sciences, Mathematics and Statistics, University of Torino, Corso Unione Sovietica 218/bis, 10134 Torino, Italy
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(21), 2820; https://doi.org/10.3390/math9212820
Submission received: 7 October 2021 / Revised: 3 November 2021 / Accepted: 5 November 2021 / Published: 6 November 2021

Abstract

The Ewens–Pitman sampling model (EP-SM) is a distribution for random partitions of the set $\{1,\dots,n\}$, with $n\in\mathbb{N}$, which is indexed by real parameters $\alpha$ and $\theta$ such that either $\alpha\in[0,1)$ and $\theta>-\alpha$, or $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$. For $\alpha=0$, the EP-SM reduces to the Ewens sampling model (E-SM), which admits a well-known compound Poisson perspective in terms of the log-series compound Poisson sampling model (LS-CPSM). In this paper, we consider a generalisation of the LS-CPSM, referred to as the negative binomial compound Poisson sampling model (NB-CPSM), and we show that it leads to an extension of the compound Poisson perspective of the E-SM to the more general EP-SM for either $\alpha\in(0,1)$ or $\alpha<0$. The interplay between the NB-CPSM and the EP-SM is then applied to the study of the large $n$ asymptotic behaviour of the number of blocks in the corresponding random partitions, leading to a new proof of Pitman's $\alpha$-diversity. We discuss the proposed results and conjecture that analogous compound Poisson representations may hold for the class of $\alpha$-stable Poisson–Kingman sampling models, of which the EP-SM is a noteworthy special case.

1. Introduction

The Pitman–Yor process is a discrete random probability measure indexed by real parameters $\alpha$ and $\theta$ such that either $\alpha\in[0,1)$ and $\theta>-\alpha$, or $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$; see, e.g., Perman et al. [1], Pitman [2] and Pitman and Yor [3]. Let $\{V_i\}_{i\geq1}$ be independent random variables such that $V_i$ is distributed as a Beta distribution with parameter $(1-\alpha,\theta+i\alpha)$, for $i\geq1$, with the convention for $\alpha<0$ that $V_m=1$ and $V_i$ is undefined for $i>m$. If $P_1:=V_1$ and $P_i:=V_i\prod_{1\leq j\leq i-1}(1-V_j)$ for $i\geq2$, so that almost surely $\sum_{i\geq1}P_i=1$, then the Pitman–Yor process is the random probability measure $\tilde{p}_{\alpha,\theta}$ on $(\mathbb{N},2^{\mathbb{N}})$ such that $\tilde{p}_{\alpha,\theta}(\{i\})=P_i$ for $i\geq1$. The Dirichlet process (Ferguson [4]) arises for $\alpha=0$. Because of the discreteness of $\tilde{p}_{\alpha,\theta}$, a random sample $(X_1,\dots,X_n)$ from it induces a random partition of $\{1,\dots,n\}$ by means of the equivalence $i\sim j\iff X_i=X_j$ (Pitman [5]). Let $K_n(\alpha,\theta):=K_n(X_1,\dots,X_n)\leq n$ be the number of blocks of this partition, and let $M_{r,n}(\alpha,\theta):=M_{r,n}(X_1,\dots,X_n)$, for $r=1,\dots,n$, be the number of its blocks with frequency $r$, so that $\sum_{1\leq r\leq n}M_{r,n}=K_n$ and $\sum_{1\leq r\leq n}rM_{r,n}=n$. Pitman [2] showed that:
$$\Pr[(M_{1,n}(\alpha,\theta),\dots,M_{n,n}(\alpha,\theta))=(x_1,\dots,x_n)]=\frac{n!\,\left(\frac{\theta}{\alpha}\right)_{(\sum_{i=1}^{n}x_i)}}{(\theta)_{(n)}}\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!},\qquad(1)$$
with $(x)_{(n)}$ being the ascending factorial of $x$ of order $n$, i.e., $(x)_{(n)}:=\prod_{0\leq i\leq n-1}(x+i)$. The distribution (1) is referred to as the Ewens–Pitman sampling model (EP-SM); for $\alpha=0$, it reduces to the Ewens sampling model (E-SM) of Ewens [6]. The Pitman–Yor process plays a critical role in a variety of research areas, such as mathematical population genetics, Bayesian nonparametrics, machine learning, excursion theory, combinatorics and statistical physics. See Pitman [5] and Crane [7] for a comprehensive treatment of this subject.
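The sampling formula (1) is simple enough to check numerically on small sample sizes: enumerating all partitions of $n$ in multiplicity form and summing the corresponding probabilities must return 1 in both parameter regimes. The following Python sketch (ours, not part of the paper; function names are illustrative) performs this check.

```python
# Numerical sanity check of the EP-SM (1): the probabilities of all partitions of
# {1,...,n}, encoded by their multiplicity vectors (x_1,...,x_n), sum to 1.
from math import factorial

def rising(x, n):
    """Ascending factorial (x)_(n) = x (x+1) ... (x+n-1)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing lists of parts."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def ep_sm_pmf(x, alpha, theta):
    """Probability of the multiplicity vector x = (x_1,...,x_n) under (1)."""
    n = sum(i * xi for i, xi in enumerate(x, start=1))
    k = sum(x)
    p = factorial(n) * rising(theta / alpha, k) / rising(theta, n)
    for i, xi in enumerate(x, start=1):
        p *= (alpha * rising(1.0 - alpha, i - 1) / factorial(i)) ** xi / factorial(xi)
    return p

n = 8
for alpha, theta in [(0.5, 1.0), (-0.5, 2.0)]:   # second case: theta = -m*alpha with m = 4
    total = sum(ep_sm_pmf([parts.count(i) for i in range(1, n + 1)], alpha, theta)
                for parts in partitions(n))
    print(f"alpha = {alpha:+.1f}, theta = {theta:.1f}: total probability = {total:.12f}")
```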
The E-SM admits a well-known compound Poisson perspective in terms of the log-series compound Poisson sampling model (LS-CPSM). See Charalambides [8] and the references therein for an overview of compound Poisson models. We consider a population of individuals with a random number $K$ of distinct types, and let $K$ be distributed as a Poisson distribution with parameter $\lambda=-z\log(1-q)$ for $q\in(0,1)$ and $z>0$. For $i\in\mathbb{N}$, let $N_i$ denote the random number of individuals of type $i$ in the population, and let the $N_i$'s be independent of $K$ and independent of each other, with the same distribution:
$$\Pr[N_1=x]=-\frac{1}{x\,\log(1-q)}\,q^{x}\qquad(2)$$
for $x\in\mathbb{N}$. Let $S=\sum_{1\leq i\leq K}N_i$ and let $M_r=\sum_{1\leq i\leq K}\mathbb{1}\{N_i=r\}$ for $r=1,\dots,S$; that is, $M_r$ is the random number of $N_i$'s equal to $r$, so that $\sum_{r\geq1}M_r=K$ and $\sum_{r\geq1}rM_r=S$. If $(M_1(z,n),\dots,M_n(z,n))$ denotes a random variable whose distribution coincides with the conditional distribution of $(M_1,\dots,M_S)$ given $S=n$, then (Section 3, Charalambides [8]) it holds that:
$$\Pr[(M_1(z,n),\dots,M_n(z,n))=(x_1,\dots,x_n)]=\frac{n!}{(z)_{(n)}}\prod_{i=1}^{n}\frac{(z/i)^{x_i}}{x_i!}.\qquad(3)$$
The distribution (3) is referred to as the LS-CPSM, and it is equivalent to the E-SM; that is, the distribution (3) coincides with the distribution (1) with $\alpha=0$ and $\theta=z$. Therefore, the distributions of $K(z,n)=\sum_{1\leq r\leq n}M_r(z,n)$ and $M_r(z,n)$ coincide with the distributions of $K_n(0,z)$ and $M_{r,n}(0,z)$, respectively. Let $\stackrel{w}{\longrightarrow}$ denote weak convergence, and for $\lambda>0$ let $P_\lambda$ be a Poisson random variable with parameter $\lambda$. From Korwar and Hollander [9], $K(z,n)/\log n\stackrel{w}{\longrightarrow}z$ as $n\to+\infty$, whereas from Ewens [6] it follows that $M_r(z,n)\stackrel{w}{\longrightarrow}P_{z/r}$ as $n\to+\infty$.
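The compound Poisson construction behind (2) and (3) is easy to simulate. The sketch below (ours, for illustration only) draws the unconditional population, conditions on $S=n$ by rejection, and compares the average number of types with the E-SM expectation $\sum_{0\leq i\leq n-1}z/(z+i)$, which the stated equivalence with the E-SM predicts.

```python
# Simulation sketch of the LS-CPSM construction: K ~ Poisson(-z log(1-q)),
# N_1, N_2, ... i.i.d. log-series(q), S = sum of the N_i. Conditionally on S = n,
# the multiplicities follow (3), which coincides with the EP-SM (1) at alpha = 0, theta = z.
import numpy as np

rng = np.random.default_rng(0)
z, q, n = 2.0, 0.5, 6
lam = -z * np.log(1.0 - q)

accepted_K = []
while len(accepted_K) < 5000:
    K = rng.poisson(lam)
    if K == 0:
        continue
    N = rng.logseries(q, size=K)      # log-series pmf: -q^x / (x log(1-q)), x = 1, 2, ...
    if N.sum() == n:                  # rejection step: keep populations with S = n
        accepted_K.append(K)

empirical = np.mean(accepted_K)
predicted = sum(z / (z + i) for i in range(n))   # E-SM expectation of the number of blocks
print(f"E[K | S = n] by simulation: {empirical:.3f}   E-SM prediction: {predicted:.3f}")
```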
In this paper, we consider a generalisation of the LS-CPSM referred to as the negative binomial compound Poisson sampling model (NB-CPSM). The NB-CPSM is indexed by real parameters $\alpha$ and $z$ such that either $\alpha\in(0,1)$ and $z>0$, or $\alpha<0$ and $z<0$. The LS-CPSM is recovered by letting $\alpha\to0$ with $z>0$. We show that the NB-CPSM leads to an extension of the compound Poisson perspective of the E-SM to the more general EP-SM for either $\alpha\in(0,1)$ or $\alpha<0$. That is, we show that: (i) for $\alpha\in(0,1)$, the EP-SM (1) admits a representation as a randomised NB-CPSM with $\alpha\in(0,1)$ and $z>0$, where the randomisation acts on $z$ with respect to a scale mixture between a Gamma and a scaled Mittag–Leffler distribution (Pitman [5]); (ii) for $\alpha<0$, the NB-CPSM admits a representation in terms of a randomised EP-SM with $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$, where the randomisation acts on $m$ with respect to a tilted Poisson distribution arising from the Wright function (Wright [10]). The interplay between the NB-CPSM and the EP-SM is then applied to the large $n$ asymptotic behaviour of the number of distinct blocks in the corresponding random partitions. In particular, by combining the randomised representation in (i) with the large $n$ asymptotic behaviour of the number of distinct blocks under the NB-CPSM, we present a new proof of Pitman's $\alpha$-diversity (Pitman [5]), namely the large $n$ asymptotic behaviour of $K_n(\alpha,\theta)$ under the EP-SM for $\alpha\in(0,1)$.

2. A Compound Poisson Perspective of EP-SM

To introduce the NB-CPSM, we consider a population of individuals with a random number $K$ of types, and let $K$ be distributed as a Poisson distribution with parameter $\lambda=z[1-(1-q)^{\alpha}]$, such that either $q\in(0,1)$, $\alpha\in(0,1)$ and $z>0$, or $q\in(0,1)$, $\alpha<0$ and $z<0$. For $i\in\mathbb{N}$, let $N_i$ be the random number of individuals of type $i$ in the population, and let the $N_i$'s be independent of $K$ and independent of each other with the same distribution:
$$\Pr[N_1=x]=-\frac{1}{1-(1-q)^{\alpha}}\binom{\alpha}{x}(-q)^{x}\qquad(4)$$
for $x\in\mathbb{N}$. Let $S=\sum_{1\leq i\leq K}N_i$ and $M_r=\sum_{1\leq i\leq K}\mathbb{1}\{N_i=r\}$ for $r=1,\dots,S$; that is, $M_r$ is the random number of $N_i$'s equal to $r$, so that $\sum_{r\geq1}M_r=K$ and $\sum_{r\geq1}rM_r=S$. If $(M_1(\alpha,z,n),\dots,M_n(\alpha,z,n))$ is a random variable whose distribution coincides with the conditional distribution of $(M_1,\dots,M_S)$ given $S=n$, then it holds (Section 3, Charalambides [8]) that:
$$\Pr[(M_1(\alpha,z,n),\dots,M_n(\alpha,z,n))=(x_1,\dots,x_n)]=\frac{n!}{\sum_{j=0}^{n}C(n,j;\alpha)\,z^{j}}\prod_{i=1}^{n}\frac{\left(z\,\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!},\qquad(5)$$
where $C(n,j;\alpha)=\frac{1}{j!}\sum_{0\leq i\leq j}\binom{j}{i}(-1)^{i}(-i\alpha)_{(n)}$ is the generalised factorial coefficient (Charalambides [11]), with the proviso $C(n,0;\alpha)=0$ for all $n\in\mathbb{N}$, $C(n,j;\alpha)=0$ for all $j>n$ and $C(0,0;\alpha)=1$. The distribution (5) is referred to as the NB-CPSM. As $\alpha\to0$, the distribution (4) reduces to the distribution (2), and hence the NB-CPSM (5) reduces to the LS-CPSM (3). The next theorem states the large $n$ asymptotic behaviour of the counting statistics $K(\alpha,z,n)=\sum_{1\leq r\leq n}M_r(\alpha,z,n)$ and $M_r(\alpha,z,n)$ arising from the NB-CPSM.
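The normalising constant in (5) can be made concrete for small $n$: summing the unnormalised weights $n!\prod_{i}(\alpha(1-\alpha)_{(i-1)}/i!)^{x_i}/x_i!$ over all multiplicity vectors with $k$ blocks recovers $C(n,k;\alpha)$, a standard identity for generalised factorial coefficients (Charalambides [11]). The sketch below (ours; it uses the explicit formula for $C(n,k;\alpha)$ quoted above) checks this numerically.

```python
# Check, for a small n, that the partition sum of the weights appearing in (5)
# reproduces the generalised factorial coefficients C(n,k;alpha).
from math import comb, factorial

def rising(x, n):
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def gen_fact_coeff(n, k, alpha):
    """C(n,k;alpha) = (1/k!) sum_{i=0}^{k} binom(k,i) (-1)^i (-i*alpha)_(n)."""
    return sum(comb(k, i) * (-1) ** i * rising(-i * alpha, n) for i in range(k + 1)) / factorial(k)

def partitions(n, max_part=None):
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

n, alpha = 7, 0.4
weight_by_k = [0.0] * (n + 1)
for parts in partitions(n):
    x = [parts.count(i) for i in range(1, n + 1)]
    w = factorial(n)
    for i, xi in enumerate(x, start=1):
        w *= (alpha * rising(1.0 - alpha, i - 1) / factorial(i)) ** xi / factorial(xi)
    weight_by_k[sum(x)] += w

for k in range(1, n + 1):
    print(k, round(weight_by_k[k], 10), round(gen_fact_coeff(n, k, alpha), 10))  # columns agree
```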
Theorem 1.
Let $P_\lambda$ denote a Poisson random variable with parameter $\lambda>0$. As $n\to+\infty$:
(i) for $\alpha\in(0,1)$ and $z>0$:
$$K(\alpha,z,n)\stackrel{w}{\longrightarrow}1+P_{z}\qquad(6)$$
and:
$$M_r(\alpha,z,n)\stackrel{w}{\longrightarrow}P_{\frac{\alpha(1-\alpha)_{(r-1)}}{r!}z};\qquad(7)$$
(ii) for $\alpha<0$ and $z<0$:
$$\frac{K(\alpha,z,n)}{n^{\frac{-\alpha}{1-\alpha}}}\stackrel{w}{\longrightarrow}\frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha}\qquad(8)$$
and:
$$M_r(\alpha,z,n)\stackrel{w}{\longrightarrow}P_{\frac{\alpha(1-\alpha)_{(r-1)}}{r!}z}.\qquad(9)$$
Proof. 
As regards the proof of (6), we start by recalling that the probability generating function $G(\cdot;\lambda)$ of $P_\lambda$ is $G(s;\lambda)=\exp\{\lambda(s-1)\}$ for any $s>0$. Now, let $G(\cdot;\alpha,z,n)$ be the probability generating function of $K(\alpha,z,n)$. The distribution of $K(\alpha,z,n)$ follows by combining the NB-CPSM (5) with Theorem 2.15 of Charalambides [11]. In particular, it follows that:
$$G(s;\alpha,z,n)=\frac{\sum_{j=1}^{n}C(n,j;\alpha)\,(sz)^{j}}{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}.$$
Hereafter, we show that $G(s;\alpha,z,n)\to s\,\exp\{z(s-1)\}$ as $n\to+\infty$, for any $s>0$, which implies (6). In particular, by direct application of the definition of $C(n,k;\alpha)$, we write the following:
$$\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}=\sum_{i=1}^{n}(-1)^{i}(-i\alpha)_{(n)}\sum_{k=i}^{n}\frac{1}{k!}\binom{k}{i}z^{k}=\sum_{i=1}^{n}(-1)^{i}(-i\alpha)_{(n)}\,\frac{e^{z}z^{i}}{i!}\,\frac{\Gamma(n-i+1,z)}{\Gamma(n-i+1)},$$
where $\Gamma(a,x):=\int_{x}^{+\infty}t^{a-1}e^{-t}\,\mathrm{d}t$ denotes the incomplete Gamma function for $a,x>0$ and $\Gamma(a):=\int_{0}^{+\infty}t^{a-1}e^{-t}\,\mathrm{d}t$ denotes the Gamma function for $a>0$. Accordingly, we write the identity:
$$G(s;\alpha,z,n)=e^{z(s-1)}\,\frac{-zs\,\frac{\Gamma(n,zs)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{(zs)^{i}}{i!}\,\frac{\Gamma(n-i+1,zs)}{\Gamma(n-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{z^{i}}{i!}\,\frac{\Gamma(n-i+1,z)}{\Gamma(n-i+1)}}.$$
Since $\lim_{n\to+\infty}\frac{\Gamma(n,x)}{\Gamma(n)}=1$ for any $x>0$, the proof of (6) is completed by showing that, for any $t>0$:
$$\lim_{n\to+\infty}\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{\Gamma(n-i+1,t)}{\Gamma(n-i+1)}\,\frac{t^{i}}{i!}=0.\qquad(10)$$
By the definition of ascending factorials and the reflection formula of the Gamma function, it holds that:
$$\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}=-\frac{\Gamma(n-i\alpha)}{\Gamma(n-\alpha)}\,\frac{\sin(i\pi\alpha)}{\pi}\,\Gamma(i\alpha+1)\,\Gamma(-\alpha).$$
In particular, by means of the monotonicity of the function $[1,+\infty)\ni z\mapsto\Gamma(z)$, we can write:
$$\frac{1}{i!}\left|\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\right|\leq\frac{|\Gamma(-\alpha)|}{\pi}\,\frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)}\,\frac{\Gamma(i\alpha+1)}{i!}\qquad(11)$$
for any $n\in\mathbb{N}$ such that $n>1/(1-\alpha)$, and $i\in\{2,\dots,n\}$. Note that $\frac{\Gamma(n,x)}{\Gamma(n)}\leq1$. Then, we apply (11) to obtain:
$$\left|\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{\Gamma(n-i+1,t)}{\Gamma(n-i+1)}\,\frac{t^{i}}{i!}\right|\leq\sum_{i=2}^{n}\frac{t^{i}}{i!}\left|\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\right|\leq\frac{|\Gamma(-\alpha)|}{\pi}\,\frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)}\sum_{i\geq0}\frac{t^{i}\,\Gamma(i\alpha+1)}{i!}.$$
Now, by means of the Stirling approximation, it holds that $\frac{\Gamma(n-2\alpha)}{\Gamma(n-\alpha)}\sim\frac{1}{n^{\alpha}}$ as $n\to+\infty$. Moreover, we have:
$$\sum_{i\geq0}\frac{t^{i}\,\Gamma(i\alpha+1)}{i!}=\int_{0}^{+\infty}e^{tz^{\alpha}-z}\,\mathrm{d}z<+\infty,$$
where the finiteness of the integral follows, for any fixed $t>0$, from the fact that $tz^{\alpha}<\frac{1}{2}z$ if $z>(2t)^{\frac{1}{1-\alpha}}$. This completes the proof of (10) and hence the proof of (6). As regards the proof of (7), we make use of the falling factorial moments of $M_r(\alpha,z,n)$, which follow by combining the NB-CPSM (5) with Theorem 2.15 of Charalambides [11]. Let $(a)_{[n]}$ be the falling factorial of $a$ of order $n$, i.e., $(a)_{[n]}:=\prod_{0\leq i\leq n-1}(a-i)$, for any $a\in\mathbb{R}_{+}$ and $n\in\mathbb{N}_{0}$, with the proviso $(a)_{[0]}=1$. Then, we write:
$$\begin{aligned}\mathbb{E}\big[(M_r(\alpha,z,n))_{[s]}\big]&=(-1)^{rs}\,(n)_{[rs]}\,\binom{\alpha}{r}^{s}(-z)^{s}\,\frac{\sum_{j=0}^{n-rs}C(n-rs,j;\alpha)\,z^{j}}{\sum_{j=0}^{n}C(n,j;\alpha)\,z^{j}}\\&=(-1)^{rs}\,(n)_{[rs]}\,\binom{\alpha}{r}^{s}(-z)^{s}\,\frac{(-\alpha)_{(n-rs)}}{(-\alpha)_{(n)}}\\&\quad\times\frac{-z\,\frac{\Gamma(n-rs,z)}{\Gamma(n-rs)}+\sum_{i=2}^{n-rs}(-1)^{i}\frac{(-i\alpha)_{(n-rs)}}{(-\alpha)_{(n-rs)}}\,\frac{z^{i}}{i!}\,\frac{\Gamma(n-rs-i+1,z)}{\Gamma(n-rs-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{z^{i}}{i!}\,\frac{\Gamma(n-i+1,z)}{\Gamma(n-i+1)}}.\end{aligned}$$
Now, by means of the same argument applied in the proof of statement (6), it holds true that:
$$\lim_{n\to+\infty}\frac{-z\,\frac{\Gamma(n-rs,z)}{\Gamma(n-rs)}+\sum_{i=2}^{n-rs}(-1)^{i}\frac{(-i\alpha)_{(n-rs)}}{(-\alpha)_{(n-rs)}}\,\frac{z^{i}}{i!}\,\frac{\Gamma(n-rs-i+1,z)}{\Gamma(n-rs-i+1)}}{-z\,\frac{\Gamma(n,z)}{\Gamma(n)}+\sum_{i=2}^{n}(-1)^{i}\frac{(-i\alpha)_{(n)}}{(-\alpha)_{(n)}}\,\frac{z^{i}}{i!}\,\frac{\Gamma(n-i+1,z)}{\Gamma(n-i+1)}}=1.$$
Then:
$$\lim_{n\to+\infty}\mathbb{E}\big[(M_r(\alpha,z,n))_{[s]}\big]=(-1)^{rs}\binom{\alpha}{r}^{s}(-z)^{s}=\left(\frac{\alpha(1-\alpha)_{(r-1)}}{r!}\,z\right)^{s}$$
follows from the fact that $(n)_{[rs]}\,\frac{(-\alpha)_{(n-rs)}}{(-\alpha)_{(n)}}\to1$ as $n\to+\infty$. The proof of the large $n$ asymptotics (7) is completed by recalling that the falling factorial moment of order $s$ of $P_\lambda$ is $\mathbb{E}[(P_\lambda)_{[s]}]=\lambda^{s}$.
As regards the proof of statement (8), let $\alpha=-\sigma$ for some $\sigma>0$ and let $z=-\zeta$ for some $\zeta>0$. Then, by direct application of Equation (2.27) of Charalambides [11], we write the following identity:
$$\sum_{j=0}^{n}C(n,j;-\sigma)\,(-\zeta)^{j}=(-1)^{n}\sum_{v=0}^{n}s(n,v)\,(-\sigma)^{v}\sum_{j=0}^{v}\zeta^{j}\,S(v,j),\qquad(12)$$
where $s(n,v)$ and $S(v,j)$ are the Stirling numbers of the first and of the second kind, respectively. Now, note that $\sum_{0\leq j\leq v}\zeta^{j}S(v,j)$ is the moment of order $v$ of a Poisson random variable with parameter $\zeta>0$. Then, we write:
$$\sum_{j=0}^{n}C(n,j;-\sigma)\,(-\zeta)^{j}=\sum_{v=0}^{n}|s(n,v)|\,\sigma^{v}\sum_{j\geq0}j^{v}\,\frac{e^{-\zeta}\zeta^{j}}{j!}=\sum_{j\geq0}\frac{e^{-\zeta}\zeta^{j}}{j!}\int_{0}^{+\infty}x^{n}\,f_{G_{\sigma j,1}}(x)\,\mathrm{d}x.$$
That is, setting $B_n(w):=\sum_{j=0}^{n}C(n,j;-\sigma)\,(-w)^{j}$ for $w>0$:
$$B_n(w)=\mathbb{E}\big[(G_{\sigma P_{w},1})^{n}\big],\qquad(13)$$
where $G_{a,1}$ and $P_w$ are independent random variables such that $G_{a,1}$ is a Gamma random variable with shape parameter $a>0$ and scale parameter 1, and $P_w$ is a Poisson random variable with parameter $w$. Accordingly, the distribution of $G_{\sigma P_w,1}$, say $\mu_{\sigma,w}$, is the following:
$$\mu_{\sigma,w}(\mathrm{d}t)=e^{-w}\,\delta_{0}(\mathrm{d}t)+\sum_{j\geq1}\frac{e^{-w}w^{j}}{j!}\,\frac{1}{\Gamma(j\sigma)}\,e^{-t}\,t^{j\sigma-1}\,\mathrm{d}t\qquad(14)$$
for $t>0$. The discrete component of $\mu_{\sigma,w}$ does not contribute to the expectation (13), so we can focus on the absolutely continuous component, whose density can be written as follows:
$$\sum_{j\geq1}\frac{e^{-w}w^{j}}{j!}\,\frac{1}{\Gamma(j\sigma)}\,e^{-t}\,t^{j\sigma-1}=\frac{e^{-(w+t)}}{t}\,W_{\sigma,0}(w\,t^{\sigma}),$$
where $W_{\sigma,\tau}(y):=\sum_{j\geq0}\frac{y^{j}}{j!\,\Gamma(j\sigma+\tau)}$ is the Wright function (Wright [10]). In particular, for $\tau=0$:
$$B_n(w)=\int_{0}^{+\infty}t^{n}\,\frac{e^{-(w+t)}}{t}\,W_{\sigma,0}(w\,t^{\sigma})\,\mathrm{d}t.$$
If we split the integral as $\int_{0}^{M}+\int_{M}^{+\infty}$ for any $M>0$, the contribution of the latter integral is overwhelming with respect to the contribution of the former. Then, $W_{\sigma,0}$ can be equivalently replaced by its asymptotic expansion $W_{\sigma,0}(y)\sim c(\sigma)\,y^{\frac{1}{2(1+\sigma)}}\exp\{\sigma^{-1}(\sigma+1)(\sigma y)^{\frac{1}{1+\sigma}}\}$ as $y\to+\infty$, for some constant $c(\sigma)$ depending solely on $\sigma$; see Theorem 2 in Wright [10]. Hence:
$$B_n(w)\sim c(\sigma)\int_{0}^{+\infty}t^{n-1}\,e^{-(w+t)}\,(w\,t^{\sigma})^{\frac{1}{2(1+\sigma)}}\exp\left\{\frac{\sigma+1}{\sigma}\,(\sigma\,w\,t^{\sigma})^{\frac{1}{1+\sigma}}\right\}\mathrm{d}t=c(\sigma)\,e^{-w}\,w^{\frac{1}{2(1+\sigma)}}\int_{0}^{+\infty}t^{\,n+\frac{\sigma}{2(1+\sigma)}-1}\exp\left\{A(w,\sigma)\,t^{\frac{\sigma}{1+\sigma}}-t\right\}\mathrm{d}t,$$
where $A(w,\sigma):=\frac{\sigma+1}{\sigma}(\sigma w)^{\frac{1}{1+\sigma}}$. Then, the problem reduces to an integral whose asymptotic behaviour is described in Berg [12]. From Equation (31) of Berg [12] and the Stirling approximation, we have:
$$B_n(w)\sim c(\sigma)\,e^{-w}\,w^{\frac{1}{2(1+\sigma)}}\,\Gamma(n)\,\exp\left\{A(w,\sigma)\,n^{\frac{\sigma}{1+\sigma}}\right\}.\qquad(15)$$
This last asymptotic expansion leads directly to (8). Indeed, let $G(\cdot;-\sigma,-\zeta,n)$ be the probability generating function of the random variable $K(-\sigma,-\zeta,n)$, which reads $G(s;-\sigma,-\zeta,n)=B_n(s\zeta)/B_n(\zeta)$ for $s>0$. Then, by means of (15), for any fixed $s>0$ we write:
$$G(s;-\sigma,-\zeta,n)\sim e^{-\zeta(s-1)}\,s^{\frac{1}{2(1+\sigma)}}\exp\left\{n^{\frac{\sigma}{1+\sigma}}\,\frac{\sigma+1}{\sigma}\,(\sigma\zeta)^{\frac{1}{1+\sigma}}\left[s^{\frac{1}{1+\sigma}}-1\right]\right\}.\qquad(16)$$
Since (15) holds uniformly in $w$ on compact sets, we can consider the function $G(s;-\sigma,-\zeta,n)$ evaluated at some point $s_n$ and extend the validity of (16) with $s_n$ in place of $s$, as long as $\{s_n\}_{n\geq1}$ varies in a compact subset of $[0,+\infty)$. Thus, we choose $s_n=s^{\beta(n)}$ with $\beta(n)=\frac{1}{n^{\sigma/(1+\sigma)}}$, and notice that $\beta(n)\to0$ as $n\to+\infty$. Thus, $s_n\sim1+\beta(n)\log s\to1$ and we have:
$$n^{\frac{\sigma}{1+\sigma}}\,\frac{\sigma+1}{\sigma}\,(\sigma\zeta)^{\frac{1}{1+\sigma}}\left[s_n^{\frac{1}{1+\sigma}}-1\right]\longrightarrow\frac{(\sigma\zeta)^{\frac{1}{1+\sigma}}}{\sigma}\,\log s,$$
which implies that $\frac{K(-\sigma,-\zeta,n)}{n^{\sigma/(1+\sigma)}}\stackrel{w}{\longrightarrow}\frac{(\sigma\zeta)^{\frac{1}{1+\sigma}}}{\sigma}$ as $n\to+\infty$. This completes the proof of (8). As regards the proof of (9), let $\alpha=-\sigma$ for some $\sigma>0$ and let $z=-\zeta$ for some $\zeta>0$. Similarly to the proof of (7), here we make use of the falling factorial moments of $M_r(-\sigma,-\zeta,n)$, that is:
$$\mathbb{E}\big[(M_r(-\sigma,-\zeta,n))_{[s]}\big]=(-1)^{rs}\,(n)_{[rs]}\,\binom{-\sigma}{r}^{s}\,\zeta^{s}\,\frac{\sum_{j=0}^{n-rs}C(n-rs,j;-\sigma)\,(-\zeta)^{j}}{\sum_{j=0}^{n}C(n,j;-\sigma)\,(-\zeta)^{j}}.$$
At this point, we make use of the same large $n$ arguments applied in the proof of statement (7). In particular, by means of the large $n$ asymptotic expansion (15), as $n\to+\infty$, it holds true that:
$$\frac{\sum_{j=0}^{n-rs}C(n-rs,j;-\sigma)\,(-\zeta)^{j}}{\sum_{j=0}^{n}C(n,j;-\sigma)\,(-\zeta)^{j}}\sim\frac{1}{n^{rs}}.$$
Then:
$$\lim_{n\to+\infty}\mathbb{E}\big[(M_r(-\sigma,-\zeta,n))_{[s]}\big]=(-1)^{rs}\binom{-\sigma}{r}^{s}\zeta^{s}=\left(\frac{\sigma(1+\sigma)_{(r-1)}}{r!}\,\zeta\right)^{s}$$
follows from the fact that $(n)_{[rs]}\sim n^{rs}$ as $n\to+\infty$. The proof of the large $n$ asymptotics (9) is completed by recalling that the falling factorial moment of order $s$ of $P_\lambda$ is $\mathbb{E}[(P_\lambda)_{[s]}]=\lambda^{s}$. □
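Statement (6) can also be observed numerically. The sketch below (ours, for illustration) computes the generalised factorial coefficients through the triangular recursion $C(m+1,k;\alpha)=\alpha\,C(m,k-1;\alpha)+(m-k\alpha)\,C(m,k;\alpha)$, a standard recurrence which we assume here rather than quote from the paper, and shows that the probability generating function of $K(\alpha,z,n)$ approaches $s\,e^{z(s-1)}$, the generating function of $1+P_z$.

```python
# Numerical illustration of (6): the pgf of K(alpha, z, n) under the NB-CPSM tends to
# s * exp(z * (s - 1)), i.e. the pgf of 1 + Poisson(z), as n grows.
import math

def gen_fact_coeffs(n, alpha):
    """Table C[m][k] = C(m,k;alpha) for 0 <= k <= m <= n, filled by the (assumed)
    triangular recursion C(m+1,k) = alpha*C(m,k-1) + (m - k*alpha)*C(m,k)."""
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    C[0][0] = 1.0
    for m in range(n):
        for k in range(1, m + 2):
            C[m + 1][k] = alpha * C[m][k - 1] + (m - k * alpha) * C[m][k]
    return C

alpha, z, s = 0.5, 1.5, 0.7
target = s * math.exp(z * (s - 1.0))
for n in (10, 40, 120):
    C = gen_fact_coeffs(n, alpha)
    num = sum(C[n][k] * (s * z) ** k for k in range(1, n + 1))
    den = sum(C[n][k] * z ** k for k in range(1, n + 1))
    print(f"n = {n:4d}   G(s; alpha, z, n) = {num / den:.6f}   s*exp(z(s-1)) = {target:.6f}")
```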
In the rest of this section, we make use of the NB-CPSM (5) to introduce a compound Poisson perspective of the EP-SM. In particular, our result extends the well-known compound Poisson perspective of the E-SM to the EP-SM for either $\alpha\in(0,1)$ or $\alpha<0$. For $\alpha\in(0,1)$, let $f_\alpha$ denote the density function of a positive $\alpha$-stable random variable $X_\alpha$, that is, of a random variable for which $\mathbb{E}[\exp\{-tX_\alpha\}]=\exp\{-t^{\alpha}\}$ for any $t>0$. For $\alpha\in(0,1)$ and $\theta>-\alpha$, let $S_{\alpha,\theta}$ be a positive random variable with the density function:
$$f_{S_{\alpha,\theta}}(s)=\frac{\Gamma(\theta+1)}{\alpha\,\Gamma(\theta/\alpha+1)}\,s^{\frac{\theta-1}{\alpha}-1}\,f_{\alpha}\big(s^{-\frac{1}{\alpha}}\big).$$
That is, $S_{\alpha,\theta}$ is a scaled Mittag–Leffler random variable (Chapter 1, Pitman [5]). Let $G_{a,b}$ be a Gamma random variable with shape parameter $a>0$ and scale parameter $b>0$, and let us assume that $G_{a,b}$ is independent of $S_{\alpha,\theta}$. Then, for $\alpha\in(0,1)$, $\theta>-\alpha$ and $n\in\mathbb{N}$, let:
$$\bar{X}_{\alpha,\theta,n}\stackrel{d}{=}G_{\theta+n,1}^{\alpha}\,S_{\alpha,\theta}.\qquad(17)$$
Finally, for $\alpha<0$, $z<0$ and $n\in\mathbb{N}$, let $\tilde{X}_{\alpha,z,n}$ be a random variable on $\mathbb{N}$ whose distribution is a tilted Poisson distribution arising from the identity (12). Precisely, for any $x\in\mathbb{N}$:
$$\Pr[\tilde{X}_{\alpha,z,n}=x]=\frac{1}{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}\,e^{z}\,\frac{(-z)^{x}}{x!}\,\frac{\Gamma(-x\alpha+n)}{\Gamma(-x\alpha)}.\qquad(18)$$
The next theorem makes use of $\bar{X}_{\alpha,\theta,n}$ and $\tilde{X}_{\alpha,z,n}$ to establish an interplay between the NB-CPSM (5) and the EP-SM (1), which extends the compound Poisson perspective of the E-SM.
Theorem 2.
Let $(M_{1,n}(\alpha,\theta),\dots,M_{n,n}(\alpha,\theta))$ be distributed as the EP-SM (1), and let $\bar{X}_{\alpha,\theta,n}$ be the random variable defined in (17), independent of $(M_{1,n}(\alpha,\theta),\dots,M_{n,n}(\alpha,\theta))$. Moreover, let $(M_1(\alpha,z,n),\dots,M_n(\alpha,z,n))$ be distributed as the NB-CPSM (5), and let $\tilde{X}_{\alpha,z,n}$ be the random variable defined in (18), independent of $(M_1(\alpha,z,n),\dots,M_n(\alpha,z,n))$. Then:
(i) for $\alpha\in(0,1)$ and $\theta>-\alpha$:
$$(M_{1,n}(\alpha,\theta),\dots,M_{n,n}(\alpha,\theta))\stackrel{d}{=}(M_{1}(\alpha,\bar{X}_{\alpha,\theta,n},n),\dots,M_{n}(\alpha,\bar{X}_{\alpha,\theta,n},n));$$
(ii) for $\alpha<0$ and $z<0$:
$$(M_{1}(\alpha,z,n),\dots,M_{n}(\alpha,z,n))\stackrel{d}{=}(M_{1,n}(\alpha,-\tilde{X}_{\alpha,z,n}\,\alpha),\dots,M_{n,n}(\alpha,-\tilde{X}_{\alpha,z,n}\,\alpha)).$$
Proof. 
As regards the proof of statement (i), it relies on the classical integral representation of the Gamma function. That is, by applying the integral representation of $\Gamma(\theta/\alpha+k)$ to the EP-SM (1), for $x_1,\dots,x_n\in\{0,\dots,n\}$ with $\sum_{i=1}^{n}x_i=k$ and $\sum_{i=1}^{n}i\,x_i=n$, we can write that:
$$\Pr[(M_{1,n}(\alpha,\theta),\dots,M_{n,n}(\alpha,\theta))=(x_1,\dots,x_n)]=\frac{n!\,\alpha^{k}}{\Gamma(\theta+n)}\prod_{i=1}^{n}\frac{\left(\frac{(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\,\frac{\Gamma(\theta+1)}{\alpha\,\Gamma(\theta/\alpha+1)}\int_{0}^{+\infty}z^{\theta/\alpha-1}\,e^{-z}\,z^{k}\,\frac{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}\,\mathrm{d}z.$$
Then, by Equation (13) of Favaro and James [13]:
$$\begin{aligned}&=\frac{n!\,\alpha^{k}}{\Gamma(\theta+n)}\prod_{i=1}^{n}\frac{\left(\frac{(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\,\frac{\Gamma(\theta+1)}{\alpha\,\Gamma(\theta/\alpha+1)}\int_{0}^{+\infty}z^{\theta/\alpha-1}\,e^{-z}\,z^{k}\,\frac{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}{e^{z}\,z^{n/\alpha}\int_{0}^{+\infty}y^{n}\,e^{-yz^{1/\alpha}}\,f_{\alpha}(y)\,\mathrm{d}y}\,\mathrm{d}z\\&=\int_{0}^{+\infty}\frac{n!}{\sum_{j=0}^{n}C(n,j;\alpha)\,z^{j}}\prod_{i=1}^{n}\frac{\left(z\,\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\times\frac{\Gamma(\theta+1)}{\alpha\,\Gamma(\theta+n)\,\Gamma(\theta/\alpha+1)}\,z^{\theta/\alpha+n/\alpha-1}\int_{0}^{+\infty}y^{n}\,e^{-yz^{1/\alpha}}\,f_{\alpha}(y)\,\mathrm{d}y\,\mathrm{d}z\\&=\int_{0}^{+\infty}\Pr[(M_{1}(\alpha,z,n),\dots,M_{n}(\alpha,z,n))=(x_1,\dots,x_n)]\,f_{\bar{X}_{\alpha,\theta,n}}(z)\,\mathrm{d}z,\end{aligned}$$
where the last equality follows from the distribution of $\bar{X}_{\alpha,\theta,n}$, with $f_{\bar{X}_{\alpha,\theta,n}}$ being the density function of the random variable $\bar{X}_{\alpha,\theta,n}$. This completes the proof of (i).
As regards the proof of statement (ii), for any $\alpha<0$, $m\in\mathbb{N}$, $k\leq m$ and $n\in\mathbb{N}$, we define the function $m\mapsto A(m;k,\alpha,n):=\frac{m!}{(m-k)!}\,\frac{\Gamma(-m\alpha)}{\Gamma(-m\alpha+n)}$, and then consider the following identity:
$$\frac{(-z)^{k}}{\sum_{j=1}^{n}C(n,j;\alpha)\,z^{j}}=\sum_{m\geq k}A(m;k,\alpha,n)\,\Pr[\tilde{X}_{\alpha,z,n}=m].\qquad(19)$$
By applying (19) to the NB-CPSM (5), for $x_1,\dots,x_n\in\{0,\dots,n\}$ with $\sum_{i=1}^{n}x_i=k$ and $\sum_{i=1}^{n}i\,x_i=n$, we write:
$$\begin{aligned}\Pr[(M_{1}(\alpha,z,n),\dots,M_{n}(\alpha,z,n))=(x_1,\dots,x_n)]&=\sum_{m\geq k}n!\,(-1)^{k}\,A(m;k,\alpha,n)\,\Pr[\tilde{X}_{\alpha,z,n}=m]\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\\&=\sum_{m\geq k}n!\,(-1)^{k}\,\frac{m!}{(m-k)!}\,\frac{\Gamma(-m\alpha)}{\Gamma(-m\alpha+n)}\,\Pr[\tilde{X}_{\alpha,z,n}=m]\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\\&=\sum_{m\geq k}\frac{n!\,\left(\frac{-m\alpha}{\alpha}\right)_{(k)}}{(-m\alpha)_{(n)}}\prod_{i=1}^{n}\frac{\left(\frac{\alpha(1-\alpha)_{(i-1)}}{i!}\right)^{x_i}}{x_i!}\,\Pr[\tilde{X}_{\alpha,z,n}=m]\\&=\sum_{m\geq k}\Pr[(M_{1,n}(\alpha,-m\alpha),\dots,M_{n,n}(\alpha,-m\alpha))=(x_1,\dots,x_n)]\,\Pr[\tilde{X}_{\alpha,z,n}=m].\end{aligned}$$
This completes the proof of (ii). □
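Statement (ii) lends itself to a direct finite-$n$ check: for $\alpha<0$ and $z<0$, the NB-CPSM probability of every multiplicity vector should coincide with the mixture over $m$ of EP-SM probabilities with $\theta=-m\alpha$, weighted by the distribution (18). The sketch below (ours) performs this comparison, with the formulas transcribed from (1), (5) and (18) as reconstructed above and the mixture over $m$ truncated at a large cut-off.

```python
# Finite-n check of Theorem 2 (ii): for alpha < 0 and z < 0, the NB-CPSM (5) is a
# mixture over m of EP-SMs (1) with theta = -m*alpha, mixed by the tilted Poisson (18).
from math import comb, exp, factorial

def rising(x, n):
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def gfc(n, k, alpha):                 # explicit formula for C(n,k;alpha)
    return sum(comb(k, i) * (-1) ** i * rising(-i * alpha, n) for i in range(k + 1)) / factorial(k)

def partitions(n, max_part=None):
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

alpha, z, n = -0.5, -2.0, 6
norm = sum(gfc(n, j, alpha) * z ** j for j in range(n + 1))   # normaliser in (5) and (18)

def block_weight(x):                  # prod_i (alpha (1-alpha)_(i-1) / i!)^{x_i} / x_i!
    w = 1.0
    for i, xi in enumerate(x, start=1):
        w *= (alpha * rising(1.0 - alpha, i - 1) / factorial(i)) ** xi / factorial(xi)
    return w

def nb_cpsm(x):                       # distribution (5)
    return factorial(n) * z ** sum(x) * block_weight(x) / norm

def ep_sm(x, m):                      # distribution (1) with theta = -m*alpha
    theta = -m * alpha
    return factorial(n) * rising(theta / alpha, sum(x)) / rising(theta, n) * block_weight(x)

def tilted_poisson(m):                # distribution (18)
    return exp(z) * (-z) ** m * rising(-m * alpha, n) / (factorial(m) * norm)

M_MAX = 80                            # truncation of the mixture over m
for parts in partitions(n):
    x = [parts.count(i) for i in range(1, n + 1)]
    mixture = sum(ep_sm(x, m) * tilted_poisson(m) for m in range(sum(x), M_MAX))
    print(parts, f"NB-CPSM: {nb_cpsm(x):.10f}   mixture: {mixture:.10f}")
```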
Theorem 2 presents a compound Poisson perspective of the EP-SM in terms of the NB-CPSM, thus extending the well-known compound Poisson perspective of the E-SM in terms of the LS-CPSM. Statement (i) of Theorem 2 shows that for $\alpha\in(0,1)$ and $\theta>-\alpha$, the EP-SM admits a representation in terms of the NB-CPSM with $\alpha\in(0,1)$ and $z>0$, where the randomisation acts on the parameter $z$ with respect to the distribution (17). Precisely, this is a compound mixed Poisson sampling model, that is, a compound sampling model in which the distribution of the random number $K$ of distinct types in the population is a mixture of Poisson distributions with respect to the law of $\bar{X}_{\alpha,\theta,n}$. Statement (ii) of Theorem 2 shows that for $\alpha<0$ and $z<0$, the NB-CPSM admits a representation in terms of a randomised EP-SM with $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$, where the randomisation acts on the parameter $m$ with respect to the distribution (18).
Remark 1.
The randomisation procedure introduced in Theorem 2 is somewhat reminiscent of a class of Gibbs-type sampling models introduced in Gnedin and Pitman [14]. This class is defined from the EP-SM with $\alpha<0$ and $\theta=-m\alpha$, for some $m\in\mathbb{N}$, by assuming that the parameter $m$ is distributed according to an arbitrary distribution on $\mathbb{N}$; see, e.g., Theorem 12 of Gnedin and Pitman [14] and Gnedin [15]. However, differently from the definition of Gnedin and Pitman [14], in our context the distribution of $m$ depends on the sample size $n$.
For $\alpha\in(0,1)$ and $\theta>-\alpha$, Pitman [5] first studied the large $n$ asymptotic behaviour of $K_n(\alpha,\theta)$; see also Gnedin and Pitman [14] and the references therein. Let $\stackrel{a.s.}{\longrightarrow}$ denote almost sure convergence, and let $S_{\alpha,\theta}$ be the scaled Mittag–Leffler random variable defined above. Theorem 3.8 of Pitman [5] exploited a martingale convergence argument to show that:
$$\frac{K_n(\alpha,\theta)}{n^{\alpha}}\stackrel{a.s.}{\longrightarrow}S_{\alpha,\theta}\qquad(20)$$
as $n\to+\infty$. The random variable $S_{\alpha,\theta}$ is referred to as Pitman's $\alpha$-diversity. For $\alpha<0$ and $\theta=-m\alpha$ for some $m\in\mathbb{N}$, the large $n$ asymptotic behaviour of $K_n(\alpha,\theta)$ is trivial, that is:
$$K_n(\alpha,\theta)\stackrel{w}{\longrightarrow}m\qquad(21)$$
as $n\to+\infty$. We refer to Dolera and Favaro [16,17] for Berry–Esseen type refinements of (20), and to Favaro et al. [18,19] and Favaro and James [13] for generalisations of (20) with applications to Bayesian nonparametrics; see also Pitman [5] (Chapter 4) for a general treatment of (20). In light of Theorem 2, it is natural to ask whether there exists an interplay between Theorem 1 and the large $n$ asymptotic behaviours (20) and (21). Hereafter, we show that: (i) (20), with the almost sure convergence replaced by convergence in distribution, arises by combining (6) with statement (i) of Theorem 2; (ii) (8) arises by combining (21) with statement (ii) of Theorem 2. This provides an alternative proof of Pitman's $\alpha$-diversity.
Theorem 3.
Let $K_n(\alpha,\theta)$ and $K(\alpha,z,n)$ be the number of blocks under the EP-SM and the NB-CPSM, respectively. As $n\to+\infty$:
(i) for $\alpha\in(0,1)$ and $\theta>-\alpha$:
$$\frac{K_n(\alpha,\theta)}{n^{\alpha}}\stackrel{w}{\longrightarrow}S_{\alpha,\theta}.\qquad(22)$$
(ii) for $\alpha<0$ and $z<0$:
$$\frac{K(\alpha,z,n)}{n^{\frac{-\alpha}{1-\alpha}}}\stackrel{w}{\longrightarrow}\frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha}.\qquad(23)$$
Proof. 
We show that (22) arises by combining (6) with statement (i) of Theorem 2. For any pair of $\mathbb{N}$-valued random variables $U$ and $V$, let $d_{TV}(U;V)$ be the total variation distance between the distribution of $U$ and the distribution of $V$. Furthermore, let $P_c$ denote a Poisson random variable with parameter $c>0$. For any $\alpha\in(0,1)$ and $t>0$, we show that, as $n\to+\infty$:
$$d_{TV}\big(K(\alpha,tn^{\alpha},n);\,1+P_{tn^{\alpha}}\big)\longrightarrow0.\qquad(24)$$
This implies (22). The proof of (24) requires a careful analysis of the probability generating function of $K(\alpha,tn^{\alpha},n)$. In particular, let us define $\omega(t;n,\alpha):=tn^{\alpha}+t\,\frac{M_\alpha'(t)}{M_\alpha(t)}$, where $M_\alpha(t):=\frac{1}{\pi}\sum_{m=1}^{\infty}\frac{(-t)^{m-1}}{(m-1)!}\,\Gamma(\alpha m)\,\sin(\pi\alpha m)$ is the Wright–Mainardi function (Mainardi et al. [20]). Then, we apply Corollary 2 of Dolera and Favaro [16] to conclude that $d_{TV}(K(\alpha,tn^{\alpha},n);\,1+P_{\omega(t;n,\alpha)})\to0$ as $n\to+\infty$. Finally, we apply inequality (2.2) in Adell and Jodrá [21] to obtain:
$$d_{TV}\big(1+P_{tn^{\alpha}};\,1+P_{\omega(t;n,\alpha)}\big)=d_{TV}\big(P_{tn^{\alpha}};\,P_{\omega(t;n,\alpha)}\big)\leq t\,\frac{M_\alpha'(t)}{M_\alpha(t)}\,\min\left\{1,\sqrt{\frac{2}{e}}\,\frac{1}{\sqrt{\omega(t;n,\alpha)}+\sqrt{tn^{\alpha}}}\right\},$$
so that $d_{TV}(1+P_{tn^{\alpha}};\,1+P_{\omega(t;n,\alpha)})\to0$ as $n\to+\infty$, and (24) follows. Now, keeping $\alpha$ and $t$ fixed as above, we show that (24) entails (22). To this aim, we introduce the Kolmogorov distance $d_K$ which, for any pair of $\mathbb{R}_{+}$-valued random variables $U$ and $V$, is defined by $d_K(U;V):=\sup_{x\geq0}|\Pr[U\leq x]-\Pr[V\leq x]|$. The claim to be proven is equivalent to:
$$d_K\big(K_n(\alpha,\theta)/n^{\alpha};\,S_{\alpha,\theta}\big)\longrightarrow0$$
as $n\to+\infty$. We exploit statement (i) of Theorem 2, which leads to the distributional identity $K_n(\alpha,\theta)\stackrel{d}{=}K(\alpha,\bar{X}_{\alpha,\theta,n},n)$. Thus, in view of the basic properties of the Kolmogorov distance:
$$\begin{aligned}d_K\big(K_n(\alpha,\theta)/n^{\alpha};\,S_{\alpha,\theta}\big)&\leq d_K\big(K_n(\alpha,\theta);\,K(\alpha,n^{\alpha}S_{\alpha,\theta},n)\big)+d_K\big(K(\alpha,n^{\alpha}S_{\alpha,\theta},n);\,1+P_{n^{\alpha}S_{\alpha,\theta}}\big)\\&\quad+d_K\big([1+P_{n^{\alpha}S_{\alpha,\theta}}]/n^{\alpha};\,S_{\alpha,\theta}\big),\end{aligned}\qquad(25)$$
where $\{P_\lambda\}_{\lambda\geq0}$ is here thought of as a homogeneous Poisson process with rate 1, independent of $S_{\alpha,\theta}$. The desired conclusion is reached as soon as we prove that all three summands on the right-hand side of (25) go to zero as $n\to+\infty$. Before proceeding, we recall that $d_K(U;V)\leq d_{TV}(U;V)$. Therefore, for the first of these terms, we write:
$$d_K\big(K_n(\alpha,\theta);\,K(\alpha,n^{\alpha}S_{\alpha,\theta},n)\big)\leq\frac{1}{2}\sum_{k=1}^{n}\left|\frac{C(n,k;\alpha)\,\Gamma(k+\theta/\alpha)}{\alpha\,\Gamma(\theta/\alpha+1)}\,\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)}-\int_{0}^{+\infty}\frac{C(n,k;\alpha)\,(tn^{\alpha})^{k}}{d_n(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right|,$$
with $d_n(t):=\sum_{j=1}^{n}C(n,j;\alpha)\,(tn^{\alpha})^{j}$. Now, let us define $d_n^{*}(t):=e^{tn^{\alpha}}\,(n-1)!\,\frac{1}{t^{1/\alpha}}\,f_{\alpha}\big(\frac{1}{t^{1/\alpha}}\big)$. Accordingly, we can bound the above right-hand side by the following quantity:
$$\frac{1}{2}\sum_{k=1}^{n}\left|\frac{C(n,k;\alpha)\,\Gamma(k+\theta/\alpha)}{\alpha\,\Gamma(\theta/\alpha+1)}\,\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)}-\int_{0}^{+\infty}\frac{C(n,k;\alpha)\,(tn^{\alpha})^{k}}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right|+\frac{1}{2}\int_{0}^{+\infty}\frac{|d_n^{*}(t)-d_n(t)|}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t.$$
Then, by exploiting the identity $\int_{0}^{+\infty}\frac{(tn^{\alpha})^{k}}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t=\frac{1}{(n-1)!}\,\frac{\Gamma(k+\theta/\alpha)}{n^{\theta}}\,\frac{\Gamma(\theta+1)}{\alpha\,\Gamma(\theta/\alpha+1)}$, we can write:
$$\sum_{k=1}^{n}\left|\frac{C(n,k;\alpha)\,\Gamma(k+\theta/\alpha)}{\alpha\,\Gamma(\theta/\alpha+1)}\,\frac{\Gamma(\theta+1)}{\Gamma(n+\theta)}-\int_{0}^{+\infty}\frac{C(n,k;\alpha)\,(tn^{\alpha})^{k}}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\right|=\left|1-\frac{\Gamma(n+\theta)}{\Gamma(n)\,n^{\theta}}\right|,$$
which goes to zero as $n\to+\infty$ for any $\theta>-\alpha$, by the Stirling approximation. To show that the integral $\int_{0}^{+\infty}\frac{|d_n^{*}(t)-d_n(t)|}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t$ also goes to zero as $n\to+\infty$, we resort to identities (13) and (14) of Dolera and Favaro [16], as well as Lemma 3 therein. In particular, let $\Delta:(0,+\infty)\to(0,+\infty)$ denote a suitable continuous function, independent of $n$, such that $\Delta(z)=O(1)$ as $z\to0$ and $\Delta(z)\,f_{\alpha}(1/z)=O(z)$ as $z\to+\infty$. Then, we write that:
$$\int_{0}^{+\infty}\frac{|d_n^{*}(t)-d_n(t)|}{d_n^{*}(t)}\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t\leq\left|\frac{(n/e)^{n}\sqrt{2\pi n}}{n!}-1\right|+\frac{(n/e)^{n}\sqrt{2\pi n}}{n!}\,\frac{1}{n}\int_{0}^{+\infty}\Delta\big(t^{-1/\alpha}\big)\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t.$$
Since $\int_{0}^{+\infty}\Delta(t^{-1/\alpha})\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t<+\infty$ by Lemma 3 of Dolera and Favaro [16], both summands on the above right-hand side go to zero as $n\to+\infty$, again by the Stirling approximation. Thus, the first summand on the right-hand side of (25) goes to zero as $n\to+\infty$. As for the second summand on the right-hand side of (25), it can be bounded by:
$$\int_{0}^{+\infty}d_{TV}\big(K(\alpha,tn^{\alpha},n);\,1+P_{tn^{\alpha}}\big)\,f_{S_{\alpha,\theta}}(t)\,\mathrm{d}t.$$
By a dominated convergence argument, this quantity goes to zero as $n\to+\infty$ as a consequence of (24). Finally, for the third summand on the right-hand side of (25), we resort to a conditioning argument in order to reduce the problem to a direct application of the law of large numbers for renewal processes (Section 10.2, Grimmett and Stirzaker [22]). In particular, this leads to $n^{-\alpha}P_{tn^{\alpha}}\stackrel{a.s.}{\longrightarrow}t$ for any $t>0$, which entails that $n^{-\alpha}P_{n^{\alpha}S_{\alpha,\theta}}\stackrel{a.s.}{\longrightarrow}S_{\alpha,\theta}$ as $n\to+\infty$. Thus, this third term also goes to zero as $n\to+\infty$, and (22) follows.
Now, we consider (23), showing that it arises by combining (21) with statement (ii) of Theorem 2. In particular, by an obvious conditioning argument, we can write that, as $n\to+\infty$:
$$\frac{K_n(\alpha,\tilde{X}_{\alpha,z,n}|\alpha|)}{\tilde{X}_{\alpha,z,n}}\stackrel{a.s.}{\longrightarrow}1.$$
At this stage, we consider the probability generating function of $\tilde{X}_{\alpha,z,n}$ and we immediately obtain $\mathbb{E}\big[s^{\tilde{X}_{\alpha,z,n}}\big]=e^{-z(s-1)}\,B_n(-sz)/B_n(-z)$ for $n\in\mathbb{N}$ and $s\in[0,1]$, with the same $B_n$ as in (13) and (14). Therefore, the asymptotic expansion already provided in (15) entails:
$$\frac{\tilde{X}_{\alpha,z,n}}{n^{\frac{-\alpha}{1-\alpha}}}\stackrel{w}{\longrightarrow}\frac{(\alpha z)^{\frac{1}{1-\alpha}}}{-\alpha}\qquad(26)$$
as $n\to+\infty$. In particular, (26) follows by applying exactly the same arguments used to prove (8). Now, since:
$$\frac{K_n(\alpha,\tilde{X}_{\alpha,z,n}|\alpha|)}{n^{\frac{-\alpha}{1-\alpha}}}\stackrel{d}{=}\frac{K_n(\alpha,\tilde{X}_{\alpha,z,n}|\alpha|)}{\tilde{X}_{\alpha,z,n}}\cdot\frac{\tilde{X}_{\alpha,z,n}}{n^{\frac{-\alpha}{1-\alpha}}},$$
the claim follows from a direct application of Slutsky's theorem. This completes the proof. □
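The convergence in statement (i) of Theorem 3 is easy to visualise by simulation. A random partition with the EP-SM distribution (1) can be generated sequentially through the standard Chinese restaurant construction for the Pitman–Yor process (Pitman [5]); this construction is not used in the paper, so the sketch below (ours) is only an independent illustration: within a run, the ratio $K_n/n^{\alpha}$ settles down as $n$ grows, while different runs settle around different values, reflecting the fact that the limit $S_{\alpha,\theta}$ is random.

```python
# Simulation sketch: generate K_n under the EP-SM via the two-parameter Chinese
# restaurant process and watch K_n / n^alpha stabilise, as in (20) and (22).
import numpy as np

def block_counts(n, alpha, theta, rng):
    """Sequential construction: observation i+1 starts a new block with probability
    (theta + k*alpha)/(theta + i), otherwise it joins block j with probability
    proportional to (n_j - alpha). Returns K_i for i = 1, ..., n."""
    sizes = []
    history = np.empty(n, dtype=int)
    for i in range(n):
        k = len(sizes)
        if rng.random() < (theta + k * alpha) / (theta + i):
            sizes.append(1)                                   # open a new block
        else:
            probs = (np.array(sizes) - alpha) / (i - k * alpha)
            sizes[rng.choice(k, p=probs)] += 1                # join an existing block
        history[i] = len(sizes)
    return history

rng = np.random.default_rng(1)
alpha, theta = 0.5, 1.0
for run in range(3):
    K = block_counts(50000, alpha, theta, rng)
    ratios = [K[n - 1] / n ** alpha for n in (1000, 10000, 50000)]
    print(f"run {run}: K_n / n^alpha at n = 1000, 10000, 50000:",
          " ".join(f"{r:.3f}" for r in ratios))
```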

3. Discussion

The NB-CPSM is a compound Poisson sampling model generalising the popular LS-CPSM. In this paper, we introduced a compound Poisson perspective of the EP-SM in terms of the NB-CPSM, thus extending the well-known compound Poisson perspective of the E-SM in terms of the LS-CPSM. We conjecture that an analogous perspective holds true for the class of $\alpha$-stable Poisson–Kingman sampling models (Pitman [23] and Pitman [5]), of which the EP-SM is a noteworthy special case. That is, for $\alpha\in(0,1)$, we conjecture that an $\alpha$-stable Poisson–Kingman sampling model admits a representation as a randomised NB-CPSM with $\alpha\in(0,1)$ and $z>0$, where the randomisation acts on $z$ with respect to a scale mixture between a Gamma and a suitable transformation of the Mittag–Leffler distribution. We believe that such a compound Poisson representation would be critical in order to obtain Berry–Esseen type refinements of the large $n$ asymptotic behaviour of $K_n$ under $\alpha$-stable Poisson–Kingman sampling models; see Section 6.1 of Pitman [23] and the references therein. Such a line of research aims to extend the preliminary works of Dolera and Favaro [16,17] on Berry–Esseen type theorems under the EP-SM. Work on this, and on the more general settings induced by normalised random measures (Regazzini et al. [24]) and Poisson–Kingman models (Pitman [23]), is ongoing.

Author Contributions

Formal analysis, E.D. and S.F.; writing—original draft preparation, E.D. and S.F.; writing—review and editing, E.D. and S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 817257.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the editor and two anonymous referees for all their comments and suggestions which remarkably improved the original version of the present paper. Emanuele Dolera and Stefano Favaro wish to express their enormous gratitude to Eugenio Regazzini, whose fundamental contributions to the theory of Bayesian statistics have always been a great source of inspiration, transmitting enthusiasm and method for the development of their own research. The authors gratefully acknowledge the financial support from the Italian Ministry of Education, University and Research (MIUR), “Dipartimenti di Eccellenza” grant 2018–2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Perman, M.; Pitman, J.; Yor, M. Size-biased sampling of Poisson point processes and excursions. Probab. Theory Relat. Fields 1992, 92, 21–39. [Google Scholar] [CrossRef]
  2. Pitman, J. Exchangeable and partially exchangeable random partitions. Probab. Theory Relat. Fields 1995, 102, 145–158. [Google Scholar] [CrossRef]
  3. Pitman, J.; Yor, M. The two parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab. 1997, 25, 855–900. [Google Scholar] [CrossRef]
  4. Ferguson, T.S. A Bayesian analysis of some nonparametric problems. Ann. Stat. 1973, 1, 209–230. [Google Scholar] [CrossRef]
  5. Pitman, J. Combinatorial Stochastic Processes; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  6. Ewens, W. The sampling theory or selectively neutral alleles. Theor. Popul. Biol. 1972, 3, 87–112. [Google Scholar] [CrossRef]
  7. Crane, H. The ubiquitous Ewens sampling formula. Stat. Sci. 2016, 31, 1–19. [Google Scholar] [CrossRef]
  8. Charalambides, C.A. Distributions of random partitions and their applications. Methodol. Comput. Appl. Probab. 2007, 9, 163–193. [Google Scholar] [CrossRef]
  9. Korwar, R.M.; Hollander, M. Contributions to the theory of Dirichlet processes. Ann. Stat. 1973, 1, 705–711. [Google Scholar] [CrossRef]
  10. Wright, E.M. The asymptotic expansion of the generalized Bessel function. Proc. Lond. Math. Soc. 1935, 38, 257–270. [Google Scholar] [CrossRef]
  11. Charalambides, C.A. Combinatorial Methods in Discrete Distributions; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  12. Berg, L. Asymptotische darstellungen für integrale und reihen mit anwendungen. Math. Nachrichten 1958, 17, 101–135. [Google Scholar] [CrossRef]
  13. Favaro, S.; James, L.F. A note on nonparametric inference for species variety with Gibbs-type priors. Electron. J. Stat. 2015, 9, 2884–2902. [Google Scholar] [CrossRef]
  14. Gnedin, A.; Pitman, J. Exchangeable Gibbs partitions and Stirling triangles. J. Math. Sci. 2006, 138, 5674–5685. [Google Scholar] [CrossRef] [Green Version]
  15. Gnedin, A. A species sampling model with finitely many types. Electron. Commun. Probab. 2010, 8, 79–88. [Google Scholar] [CrossRef]
  16. Dolera, E.; Favaro, S. A Berry–Esseen theorem for Pitman's α-diversity. Ann. Appl. Probab. 2020, 30, 847–869. [Google Scholar] [CrossRef]
  17. Dolera, E.; Favaro, S. Rates of convergence in de Finetti’s representation theorem, and Hausdorff moment problem. Bernoulli 2020, 26, 1294–1322. [Google Scholar] [CrossRef]
  18. Favaro, S.; Lijoi, A.; Prünster, I. Asymptotics for a Bayesian nonparametric estimator of species richness. Bernoulli 2012, 18, 1267–1283. [Google Scholar] [CrossRef]
  19. Favaro, S.; Lijoi, A.; Mena, R.H.; Prünster, I. Bayesian nonparametric inference for species variety with a two parameter Poisson-Dirichlet process prior. J. R. Stat. Soc. Ser. B 2009, 71, 992–1008. [Google Scholar] [CrossRef]
  20. Mainardi, F.; Mura, A.; Pagnini, G. The M-Wright function in time-fractional diffusion processes: A tutorial survey. Int. J. Differ. Equat. 2010, 104505. [Google Scholar] [CrossRef] [Green Version]
  21. Adell, J.A.; Jodrá, P. Exact Kolmogorov and total variation distances between some familiar discrete distributions. J. Inequalities Appl. 2006, 64307. [Google Scholar] [CrossRef] [Green Version]
  22. Grimmett, G.; Stirzaker, D. Probability and Random Processes; Oxford University Press: Oxford, UK, 2001. [Google Scholar]
  23. Pitman, J. Poisson-Kingman partitions. In Science and Statistics: A Festschrift for Terry Speed; Goldstein, D.R., Ed.; Institute of Mathematical Statistics: Tachikawa, Japan, 2003. [Google Scholar]
  24. Regazzini, E.; Lijoi, A.; Prünster, I. Distributional results for means of normalized random measures with independent increments. Ann. Stat. 2003, 31, 560–585. [Google Scholar] [CrossRef]