To introduce the NB-CPSM, we consider a population of individuals with a random number 
K of types and let 
K be distributed as a Poisson distribution with parameter 
 such that either 
, 
 and 
, or 
, 
 and 
. For 
, let 
 be the random number of individuals of type 
i in the population, and let the 
 be independent of 
K and independent of each other, with the same distribution:
      for 
. Let 
 and 
 for 
, that is, 
 is the random number of 
 equal to 
r such that 
 and 
. If 
 is a random variable whose distribution coincides with the conditional distribution of 
, given 
, then it holds (Section 3, Charalambides [
8]):
      where 
 is the generalised factorial coefficient (Charalambides [
11]), with the proviso 
 for all 
, 
 for all 
 and 
. The distribution (
5) is referred to as the NB-CPSM. As 
, the distribution (
4) reduces to the distribution (
2), and hence the NB-CPSM (
5) reduces to the LS-CPSM (
3). The next theorem states the large 
n asymptotic behaviour of the counting statistics 
 and 
 arising from the NB-CPSM.
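Since both the NB-CPSM and the proofs below hinge on generalised factorial coefficients, it may help to recall their standard characterisation (Charalambides [11]): writing (x)_n = x(x-1)...(x-n+1) for the falling factorial, the coefficient C(n, k; s) is defined by the expansion (st)_n = sum_{k=0}^{n} C(n, k; s) (t)_k and admits the explicit formula C(n, k; s) = (1/k!) sum_{j=0}^{k} (-1)^{k-j} binom(k, j) (js)_n. The following sketch is our own numerical check of this expansion, not part of the original text; note that sign and normalisation conventions for these coefficients vary across the literature.

```python
from math import comb, factorial, prod

def falling(x, n):
    """Falling factorial (x)_n = x (x - 1) ... (x - n + 1), with (x)_0 = 1."""
    return prod(x - j for j in range(n))

def gen_fact_coeff(n, k, sigma):
    """Generalised factorial coefficient C(n, k; sigma), via the explicit
    formula C(n, k; s) = (1/k!) sum_j (-1)^(k-j) binom(k, j) (j s)_n."""
    return sum((-1) ** (k - j) * comb(k, j) * falling(j * sigma, n)
               for j in range(k + 1)) / factorial(k)

# Numerical check of the defining expansion (sigma t)_n = sum_k C(n, k; sigma) (t)_k.
sigma, t, n = 0.4, 2.7, 6
lhs = falling(sigma * t, n)
rhs = sum(gen_fact_coeff(n, k, sigma) * falling(t, k) for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9
# One of the provisos also holds: C(n, 0; sigma) = 0 for n >= 1.
assert gen_fact_coeff(5, 0, 0.4) == 0.0
```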
Proof.  As regards the proof of (
6), we start by recalling that the probability generating function 
 of 
 is 
 for any 
. Now, let 
 be the probability generating function of 
. The distribution of 
 follows by combining the NB-CPSM (
5) with Theorem 2.15 of Charalambides [
11]. In particular, it follows that:
        
Hereafter, we show that 
 as 
, for any 
, which implies (
6). In particular, by the direct application of the definition of 
, we write the following:
        
        where 
 denotes the incomplete gamma function for 
 and 
 denotes the Gamma function for 
. Accordingly, we write the identity:
        
Since 
 for any 
, the proof of (
6) is completed by showing that, for any 
:
        
By the definition of ascending factorials and the reflection formula of the Gamma function, it holds:
        
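For the reader's convenience, the two classical identities invoked in this step are, in standard notation with the ascending factorial (a)_(n) = a(a+1)...(a+n-1):

```latex
(a)_{(n)} \;=\; \frac{\Gamma(a+n)}{\Gamma(a)},
\qquad
\Gamma(z)\,\Gamma(1-z) \;=\; \frac{\pi}{\sin(\pi z)}, \quad z \notin \mathbb{Z}.
```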
In particular, by means of the monotonicity of the function 
, we can write:
        
        for any 
 such that 
, and 
. Note that 
. Then, we apply (
11) to obtain:
        
Now, by means of Stirling's approximation, it holds 
 as 
. Moreover, we have:
        
        where the finiteness of the integral follows, for any fixed 
, from the fact that 
 if 
. This completes the proof of (
10) and hence the proof of (
6). As regards the proof of (
7), we make use of the falling factorial moments of 
, which follow by combining the NB-CPSM (
5) with Theorem 2.15 of Charalambides [
11]. Let 
 be the falling factorial of 
a of order 
n, i.e., 
, for any 
 and 
 with the proviso 
. Then, we write:
        
Now, by means of the same argument applied in the proof of statement (
6), it holds true that:
        
Then:
        
        follows from the fact that 
 as 
. The proof of the large 
n asymptotics (
7) is completed by recalling that the falling factorial moment of order 
s of 
 is 
.
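This step, as well as the later proofs of (8) and (9), rests on two classical Poisson moment formulas: for X ~ Poisson(lambda), the falling factorial moment of order s is E[(X)_s] = lambda^s, and the raw moment of order v is E[X^v] = sum_{j=0}^{v} S(v, j) lambda^j, with S(v, j) the Stirling numbers of the second kind. The following sketch is our own numerical check of both facts, not part of the original argument.

```python
from math import exp
from functools import lru_cache

def falling(x, n):
    """Falling factorial (x)_n = x (x - 1) ... (x - n + 1)."""
    out = 1.0
    for j in range(n):
        out *= x - j
    return out

@lru_cache(maxsize=None)
def stirling2(v, j):
    """Stirling number of the second kind S(v, j), by the usual recurrence."""
    if v == j == 0:
        return 1
    if v == 0 or j == 0:
        return 0
    return j * stirling2(v - 1, j) + stirling2(v - 1, j - 1)

def poisson_expectation(lam, g, terms=200):
    """E[g(X)] for X ~ Poisson(lam), truncating the series at `terms`."""
    total, p = 0.0, exp(-lam)          # p = P(X = x), starting at x = 0
    for x in range(terms):
        total += g(x) * p
        p *= lam / (x + 1)             # update to P(X = x + 1)
    return total

lam, s, v = 2.5, 4, 5
# Falling factorial moment: E[(X)_s] = lam ** s.
assert abs(poisson_expectation(lam, lambda x: falling(x, s)) - lam ** s) < 1e-8
# Raw moment via Stirling numbers: E[X^v] = sum_j S(v, j) lam ** j.
touchard = sum(stirling2(v, j) * lam ** j for j in range(v + 1))
assert abs(poisson_expectation(lam, lambda x: x ** v) - touchard) < 1e-8
```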
As regards the proof of statement (
8), let 
 for any 
 and let 
 for any 
. Then, by direct application of Equation (2.27) of Charalambides [
11], we write the following identity:
        
        where 
 is the Stirling number of the second kind. Now, note that 
 is the moment of order 
v of a Poisson random variable with parameter 
. Then, we write:
        
That is:
        
        where 
 and 
 are independent random variables such that 
 is a Gamma random variable with shape parameter 
 and scale parameter 1, and 
 is a Poisson random variable with parameter 
w. Accordingly, the distribution of 
, say 
, is the following:
        
        for 
. The discrete component of 
 does not contribute to the expectation (
13) so that we focus on the absolutely continuous component, whose density can be written as follows:
        
        where 
 is the Wright function (Wright [
10]). In particular, for 
:
        
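For reference, the Wright function appearing here admits, in standard notation (Wright [10]), the entire series representation:

```latex
W_{\lambda,\mu}(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{k!\,\Gamma(\lambda k + \mu)},
\qquad \lambda > -1,\ \mu \in \mathbb{C},
```

whose asymptotic behaviour along the real axis is the content of Theorem 2 in Wright [10], invoked below.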
If we split the integral as 
 for any 
, the contribution of the latter integral dominates that of the former. Then, 
 can be equivalently replaced by the asymptotics 
, as 
, for some constant 
 solely depending on 
. See Theorem 2 in Wright [
10]. Hence:
        
        where 
. Then, the problem is reduced to an integral whose asymptotic behaviour is described in Berg [
12]. From Equation (31) of Berg [
12] and Stirling's approximation, we have:
        
This last asymptotic expansion leads directly to (
8). Indeed, let 
 be the probability generating function of the random variable 
, which reads as 
 for 
. Then, by means of (
15), for any fixed 
 we write:
        
Since (
15) holds uniformly in 
w in a compact set, we consider the function 
 evaluated at some point 
 and extend the validity of (
16) with 
 in the place of 
s, as long as 
 varies in a compact subset of 
. Thus, we can choose 
 and 
 and notice that 
 as 
. Hence, 
 and we have:
        
        which implies that 
 as 
. This completes the proof of (
8). As regards the proof of (
9), let 
 for any 
 and let 
 for any 
. Similarly to the proof of (
7), here we make use of the falling factorial moments of 
, that is:
        
At this point, we make use of the same large 
n arguments applied in the proof of statement (
7). In particular, by means of the large 
n asymptotic (
15), as 
, it holds true that:
        
Then:
        
        which follows from the fact that 
 as 
. The proof of the large 
n asymptotics (
9) is completed by recalling that the falling factorial moment of order 
s of 
 is 
. □
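As an illustrative aside of our own (not part of the original text), the Gamma–Poisson mixture appearing in the proof above is the classical route to the negative binomial distribution: if G ~ Gamma(a, scale 1) and, conditionally on G, N ~ Poisson(wG), then P(N = k) = Gamma(a+k) / (k! Gamma(a)) * w^k / (1+w)^(a+k). A numerical sketch of this identity:

```python
import math

def mixed_poisson_pmf(k, a, w, grid=20000, upper=60.0):
    """P(N = k) when N | G ~ Poisson(w * G) and G ~ Gamma(a, scale 1),
    computed by midpoint quadrature against the Gamma density."""
    h = upper / grid
    total = 0.0
    for i in range(grid):
        g = (i + 0.5) * h
        total += (math.exp(-w * g) * (w * g) ** k / math.factorial(k)
                  * g ** (a - 1) * math.exp(-g) / math.gamma(a)) * h
    return total

def negative_binomial_pmf(k, a, w):
    """Closed form: Gamma(a + k) / (k! Gamma(a)) * w**k / (1 + w)**(a + k)."""
    return (math.gamma(a + k) / (math.factorial(k) * math.gamma(a))
            * w ** k / (1.0 + w) ** (a + k))

a, w = 2.5, 0.8
for k in range(6):
    assert abs(mixed_poisson_pmf(k, a, w) - negative_binomial_pmf(k, a, w)) < 1e-5
```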
 In the rest of the section, we make use of the NB-CPSM (
5) to introduce a compound Poisson perspective of the EP-SM. In particular, our result extends the well-known compound Poisson perspective of the E-SM to the EP-SM for either 
, or 
. For 
 let 
 denote the density function of a positive 
-stable random variable 
 , that is, 
 is a random variable for which 
 for any 
. For 
 and 
, let 
 be a positive random variable with the density function:
Theorem 2 presents a compound Poisson perspective of the EP-SM in terms of the NB-CPSM, thus extending the well-known compound Poisson perspective of the E-SM in terms of the LS-CPSM. Statement (i) of Theorem 2 shows that for 
 and 
, the EP-SM admits a representation in terms of the NB-CPSM with 
 and 
, where the randomisation acts on the parameter 
z with respect to the distribution (
17). Precisely, this is a compound mixed Poisson sampling model, that is, a compound sampling model in which the distribution of the random number 
K of distinct types in the population is a mixture of Poisson distributions with respect to the law of 
. Statement (ii) of Theorem 2 shows that for 
 and 
, the NB-CPSM admits a representation in terms of a randomised EP-SM with 
 and 
 for some 
, where the randomisation acts on the parameter 
m with respect to the distribution (
17).
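To make the role of the positive stable random variable defined above concrete, the following sketch samples it via the classical Kanter/Zolotarev representation, assuming the standard normalisation E[exp(-t X)] = exp(-t^sigma); the function name and the Monte Carlo check are our own illustration, not part of the original text.

```python
import math
import random

def sample_positive_stable(sigma, rng):
    """One draw of a positive sigma-stable variable, normalised so that
    E[exp(-t * X)] = exp(-t ** sigma) (assumed normalisation; 0 < sigma < 1),
    via the Kanter/Zolotarev representation."""
    u = rng.uniform(0.0, math.pi)       # uniform angle on (0, pi)
    e = rng.expovariate(1.0)            # independent unit exponential
    a = (math.sin(sigma * u) ** sigma
         * math.sin((1.0 - sigma) * u) ** (1.0 - sigma)
         / math.sin(u)) ** (1.0 / (1.0 - sigma))
    return (a / e) ** ((1.0 - sigma) / sigma)

# Monte Carlo check of the Laplace transform at t = 1: E[exp(-X)] = exp(-1).
rng = random.Random(0)
sigma, n = 0.5, 200_000
est = sum(math.exp(-sample_positive_stable(sigma, rng)) for _ in range(n)) / n
assert abs(est - math.exp(-1.0)) < 0.01
```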
For 
 and 
, Pitman [
5] first studied the large 
n asymptotic behaviour of 
. See also Gnedin and Pitman [
14] and the references therein. Let 
 denote the almost sure convergence, and let 
 be the scaled Mittag–Leffler random variable defined above. Theorem 3.8 of Pitman [
5] exploited a martingale convergence argument to show that:
      as 
. The random variable 
 is referred to as Pitman’s 
-diversity. For 
 and 
 for some 
, the large 
n asymptotic behaviour of 
 is trivial, that is:
      as 
. We refer to Dolera and Favaro [
16,
17] for Berry–Esseen type refinements of (
20) and to Favaro et al. [
18,
19] and Favaro and James [
13] for generalisations of (
20) with applications to Bayesian nonparametrics. See also Pitman [
5] (Chapter 4) for a general treatment of (
20). In light of Theorem 2, it is natural to ask whether there exists an interplay between Theorem 1 and the large 
n asymptotic behaviours (
20) and (
21). Hereafter, we show that: (i) (
20), with the almost sure convergence replaced by the convergence in distribution, arises by combining (
6) with (i) of Theorem 2; (ii) (
8) arises by combining (
21) with (ii) of Theorem 2. This provides an alternative proof of Pitman’s 
-diversity.
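In the standard notation of Pitman [5], writing K_n for the number of distinct types in a sample of size n from the EP-SM, the almost sure convergence recalled above reads (our transcription of Theorem 3.8 of Pitman [5], with the symbol for the limit chosen by us):

```latex
\frac{K_n}{n^{\sigma}} \;\xrightarrow[\;n \to +\infty\;]{\text{a.s.}}\; S_{\sigma,\theta},
```

where S_{sigma, theta} denotes the scaled Mittag–Leffler random variable.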
Proof.  We show that (
22) arises by combining (
6) with statement (i) of Theorem 2. For any pair of 
-valued random variables 
U and 
V, let 
 be the total variation distance between the distribution of 
U and the distribution of 
V. Furthermore, let 
 denote a Poisson random variable with parameter 
. For any 
 and 
, we show that as 
:
        
This implies (
22). The proof of (
24) requires a careful analysis of the probability generating function of 
. In particular, let us define 
, where 
 is the Wright–Mainardi function (Mainardi et al. [
20]). Then, we apply Corollary 2 of Dolera and Favaro [
16] to conclude that 
 as 
. Finally, we apply inequality (2.2) in Adell and Jodrá [
21] to obtain:
        
        so that 
 as 
, and (
24) follows. Now, keeping 
 and 
t fixed as above, we show that (
24) entails (
22). To this aim, we introduce the Kolmogorov distance 
 which, for any pair of 
-valued random variables 
U and 
V, is defined by 
. The claim to be proven is equivalent to:
        
        as 
. We exploit statement (i) of Theorem 2. This leads to the distributional identity 
. Thus, in view of the basic properties of the Kolmogorov distance:
        
        where the 
 is thought of here as a homogeneous Poisson process with rate 1, independent of 
. The desired conclusion follows once we prove that all three summands on the right-hand side of (
25) go to zero as 
. Before proceeding, we recall that 
. Therefore, for the first of these terms, we write:
        
        with 
. Now, let us define 
. Accordingly, we can bound the above right-hand side by the following quantity:
        
Then, by exploiting the identity 
, we can write:
        
        which goes to zero as 
 for any 
, by Stirling’s approximation. To show that the integral 
 also goes to zero as 
, we may resort to identities (13)–(14) of Dolera and Favaro [
16], as well as Lemma 3 therein. In particular, let 
 denote a suitable continuous function independent of 
n, and such that 
 as 
 and 
 as 
. Then, we write that:
        
Since 
 by Lemma 3 of Dolera and Favaro [
16], both summands on the above right-hand side go to zero as 
, again by Stirling’s approximation. Thus, the first summand on the right-hand side of (
25) goes to zero as 
. As for the second summand on the right-hand side of (
25), it can be bounded by
        
By a dominated convergence argument, this quantity goes to zero as 
 as a consequence of (
24). Finally, for the third summand on the right-hand side of (
25), we can resort to a conditioning argument in order to reduce the problem to a direct application of the law of large numbers for renewal processes (Section 10.2, Grimmett and Stirzaker [
22]). In particular, this leads to 
 for any 
, which entails that 
 as 
. Thus, this third term also goes to zero as 
 and (
22) follows.
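The law of large numbers for renewal processes invoked in this last step states that, for a renewal process N = (N(t))_{t >= 0} with mean inter-arrival time mu in (0, infinity) (Section 10.2, Grimmett and Stirzaker [22]):

```latex
\frac{N(t)}{t} \;\xrightarrow[\;t \to +\infty\;]{\text{a.s.}}\; \frac{1}{\mu};
```

in particular, for the homogeneous Poisson process with rate 1 used above, mu = 1 and N(t)/t converges to 1 almost surely.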
Now, we consider (
23), showing that it arises by combining (
21) with statement (ii) of Theorem 2. In particular, by an obvious conditioning argument, we can write that as 
:
        
At this stage, we consider the probability generating function of 
 and we immediately obtain 
 for 
 and 
 with the same 
 as in (
13) and (
14). Therefore, the asymptotic expansion we already provided in (
15) entails:
        
        as 
. In particular, (
26) follows by applying exactly the same arguments used to prove (
8). Now, since:
        
        the claim follows from a direct application of Slutsky’s theorem. This completes the proof. □