Article

Generalized Species Richness Indices for Diversity

Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Entropy 2022, 24(10), 1504; https://doi.org/10.3390/e24101504
Submission received: 15 September 2022 / Revised: 15 October 2022 / Accepted: 18 October 2022 / Published: 21 October 2022
(This article belongs to the Section Entropy and Biology)

Abstract

A generalized notion of species richness is introduced. The generalization embeds the popular index of species richness on the boundary of a family of diversity indices, each of which is the number of species in the community after a small proportion of the individuals belonging to the least abundant species is trimmed. It is established that the generalized species richness indices satisfy a weak version of the usual axioms for diversity indices, are qualitatively robust against small perturbations in the underlying distribution, and are collectively complete with respect to all information of diversity. In addition to a natural plug-in estimator of the generalized species richness, a bias-adjusted estimator is proposed, and its statistical reliability is gauged via bootstrapping. Finally, an ecological example and supportive simulation results are given.

1. Introduction

Consider an ecological community with a well-defined set of species X = {k; k = 1, …, K} and an associated distribution of proportions, also known as species abundances, p = {p_k; k = 1, …, K}. More generally, X and p may be considered as a countable alphabet and an associated probability distribution, where K may be a finite integer or infinite. In this article, the k's may be interchangeably referred to as letters of an alphabet or as species in a community, and p may be referred to as a species abundance distribution or a probability distribution. The notion of diversity in a community has been of long-standing research interest. What diversity is and how it should be quantified have been the two fundamental questions at the center of the diversity literature for many decades. A large number of diversity indices have been proposed over the years; for example, those by Simpson in [1], Shannon in [2], Rényi in [3] and Tsallis in [4] are among the most commonly used indices, each of which has been argued to have particular merit. The opinions on diversity, and on possible numerical indices to measure it, are indeed diverse. There are even doubts about the general concept of diversity, for example see [5,6]; and there is also a school of thought which holds that species richness is the only acceptable diversity index, for example see [7]. There have also been unifying efforts to define diversity indices that accommodate a range of such indices, for example see [8,9,10,11], among others. Nevertheless, when it comes to measuring diversity, there is a lack of agreement on a generally satisfactory univariate index. The general consensus in the existing literature seems to be that a better description of diversity is a multidimensional index set, or a profile. A good introduction to diversity profiles is offered in [10], where many basic concepts are articulated and many related references are found.
The departure point of this article is the species richness index, K, the number of different species in a community. The species richness index is a part of almost every discussion in the existing literature, and it is so for a good reason. Like the notion of happiness, diversity is an intuitively clear notion for most, but is difficult to quantify. Does there exist a universally accepted index (or an index profile) that would please all? The answer is unknown. If one does exist, it has not been found. If not, then the objective would be to find one that would have wider acceptance. Either way, the search should and does continue. In that regard, the species richness index K is perhaps one of the simplest, most direct and most intuitive of all existing diversity indices. It is difficult to dismiss such an index.
Nevertheless, the species richness index has several weaknesses, which can be summarized in the following list.
  • It is oblivious to the magnitude of species abundances.
  • It is ultra-sensitive to redistribution of any arbitrarily small proportion.
  • It is difficult to estimate based on a sample.
  • It does not provide an ordering, or a partial ordering, for communities with an infinite number of species.
The first weakness is easily illustrated by a simple example. Consider two distributions with K = 2, p = {0.5, 0.5} and q = {0.99, 0.01}. The species richness is 2 for both, but it clearly does not capture the intuitive difference in diversity between the two communities. In the diversity literature, species richness is sometimes considered a separate type of index from those taking abundances into consideration. This article argues that the separation is not necessary and that a slight change of perspective embeds species richness in a profile that naturally takes abundances into consideration.
The second weakness is also easily illustrated by a simple example. Consider p = {1 − ε, ε}, where ε > 0 is an arbitrarily small value. The species richness of p is K = 2. However, taking the abundance p_2 = ε and redistributing it evenly to m new species, k = 2, …, m + 1, a new distribution q = {q_1 = 1 − ε, q_k = ε/m; k = 2, …, m + 1} is created. It is easily seen, first, that the species richness of q is K = m + 1; second, that m is arbitrary, so the species richness of q can be carried beyond all bounds; and third, that the arbitrarily large difference in species richness between p and q is due to an arbitrarily small difference between p and q.
The second weakness demonstrated above is not unique to the species richness. Consider Shannon's entropy, H = ∑_{k=1}^{K} p_k ln(1/p_k). Taking an arbitrarily small quantity ε > 0 (from any p_k), re-distributing it evenly to m new species, each with proportion ε/m, and hence creating a distribution q, would then add approximately
$\sum_{i=1}^{m} \frac{\varepsilon}{m}\,\ln\frac{m}{\varepsilon} = \varepsilon\ln m - \varepsilon\ln\varepsilon$  (1)
to H in evaluating the entropy of q. The quantity in (1) may be carried beyond all bounds as m increases indefinitely.
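As a quick numerical check of (1), the following Python sketch (illustrative only and not part of the original analysis; the base distribution and the values of ε and m are arbitrary choices) compares Shannon's entropy before and after spreading a small mass ε over m new species.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon's entropy H = sum_k p_k * ln(1/p_k), skipping zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

eps, m = 0.01, 10**6
p = np.array([0.6, 0.4])                                      # an arbitrary two-species community
q = np.concatenate(([0.6 - eps, 0.4], np.full(m, eps / m)))   # eps moved onto m new species

print(shannon_entropy(q) - shannon_entropy(p))   # observed increase in entropy
print(eps * np.log(m / eps))                     # the approximation eps*ln(m/eps) from (1)
```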
In fact, this issue of ultra-sensitivity is well known beyond the boundary of the diversity literature. In modern data science, where the sample space is large, non-metrized, non-ordinal, and not completely prescribed, statistical inference often relies on information-theoretic quantities that are sensitive to the probabilities of rare events, and such quantities are often ultra-sensitive toward small perturbations in the tail of a distribution.
The third weakness is essentially caused by the second. As demonstrated above, two distributions, differing only in that one arbitrarily stretches an arbitrarily small mass of the other's abundance, can have arbitrarily different values of species richness. In a random sample of size n, the species with stretched proportions collectively have very small probability of being represented, which makes it nearly impossible to estimate K non-parametrically with any reliability. Estimating K from a random sample is a long-standing difficult problem in statistics. Interested readers may refer to two excellent survey papers, refs. [12,13]. More specifically, a worthy line of approaches based on Turing's formula may also be of interest, see [14]. See also, for example, [15,16,17,18]. Nevertheless, it is fair to say that, not surprisingly, there are no known generally satisfactory estimators of K.
The fourth weakness is in the generality of the definition. Generally one would prefer to have a notion of diversity not only for communities with finite K but also for K = ∞. The species richness does not provide an ordering, or a partial ordering, for communities with K = ∞; in fact, it does not provide an ordering or a partial ordering for communities sharing the same K < ∞.
The generalized species richness proposed in this article resolves, or at least alleviates, all of these weaknesses. Toward introducing the generalized species richness indices, consider the second weakness mentioned above once more. Recognizing the fact that an infinitesimal perturbation of the abundance distribution can greatly impact species richness, one may ask the following questions.
  • If 100 × α%, where α ∈ (0, 1), of the community belonging to the species with the lowest abundances is trimmed, what would be the species richness of the remaining community?
  • What is the least number of species that can be represented by 100 × (1 − α)% of the community?
Let the non-increasingly ordered version of p = {p_k; k ≥ 1} be denoted by
$\{p_{(k)};\ k \ge 1\},$  (2)
where p_(k) ≥ p_(k+1) for all k ≥ 1. The answer to both questions above is, for a fixed α ∈ (0, 1),
$K_\alpha = K_\alpha(\mathbf{p}) = \sum_{k \ge 1} k \times \mathbf{1}\Big[\sum_{i=1}^{k-1} p_{(i)} < 1-\alpha \le \sum_{i=1}^{k} p_{(i)}\Big]$  (3)
$= \max\Big\{k: \sum_{i=1}^{k} p_{(i)} < 1-\alpha\Big\} + 1$  (4)
$= \min\Big\{k: \sum_{i=1}^{k} p_{(i)} \ge 1-\alpha\Big\},$  (5)
where 1[·] is the indicator function. For a given α ∈ (0, 1), there is only one non-zero term in the summation of (3), namely the one with the integer value k such that 1 − α is sandwiched between ∑_{i=1}^{k−1} p_(i) (exclusive) and ∑_{i=1}^{k} p_(i) (inclusive). A graphic representation of K_α is given in Figure 1. K_α is the proposed generalized species richness, and it may also be reasonably referred to as the α-trimmed species richness. Let
$\mathbf{K}(\mathbf{p}) = \{K_\alpha(\mathbf{p});\ \alpha \in (0, 1)\}$  (6)
be referred to as the species richness profile.
Revisiting the example of p = {0.5, 0.5} and q = {0.99, 0.01} mentioned above for the first weakness of species richness K = K_0, with say α = 0.05, it is easily seen that K_0.05(p) = 2 and K_0.05(q) = 1. Revisiting the example of p = {1 − ε, ε} and its stretched version q = {q_1 = 1 − ε, q_k = ε/m; k = 2, …, m + 1} mentioned above for the second weakness of species richness K = K_0, it is also easy to see that arbitrary stretching of ε, that is, letting m increase indefinitely, will not carry K_α(q) beyond all bounds so long as ε < α. In this regard, K_α may be viewed as a robustified version of species richness. With the influence of arbitrary stretching of an infinitesimal mass of abundance controlled (but not eliminated), the difficulty of estimating K_α is considerably reduced from that of estimating K. Finally, the fourth weakness of species richness is eliminated, since K_α is always finite so long as α > 0, for distributions with K < ∞ as well as K = ∞.
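The trimmed index in (3)–(5) is straightforward to compute once the abundances are sorted. The following Python sketch (an illustration under the definitions above; the function name generalized_richness is not from the paper) implements (5) as the least k whose top-k cumulative abundance reaches 1 − α, and reproduces the values quoted above for p = {0.5, 0.5} and q = {0.99, 0.01}.

```python
import numpy as np

def generalized_richness(p, alpha):
    """K_alpha of (5): the least number of species covering at least 1 - alpha of the total abundance."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]       # non-increasing order p_(1) >= p_(2) >= ...
    cum = np.cumsum(p)
    return int(np.searchsorted(cum, 1.0 - alpha) + 1)   # first k with cum[k-1] >= 1 - alpha

print(generalized_richness([0.5, 0.5], 0.05))     # 2
print(generalized_richness([0.99, 0.01], 0.05))   # 1
```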
In Section 2, several properties of the generalized species richness are established. More specifically, it is established that every member of K(p) in (6) is a diversity index, as it satisfies a weak version of the usual axioms for diversity indices; a notion of "breakdown point" is introduced and the robustness of K_α is gauged accordingly. Furthermore, a notion of "completeness" of profiles is introduced, and K(p) of (6), as a profile, is shown to be complete.
To estimate K_α, let an independently and identically distributed (iid) sample of size n be summarized into sample species frequencies, {Y_k; k ≥ 1}, and relative species frequencies, p̂ = {p̂_k = Y_k/n; k ≥ 1}; and let {p̂_(k); k ≥ 1} be the non-increasingly ordered p̂. A natural estimator of K_α is (3), (4) or (5) with p̂_(i) in place of p_(i), that is,
$\hat{K}_\alpha = K_\alpha(\hat{\mathbf{p}}) = \sum_{k \ge 1} k \times \mathbf{1}\Big[\sum_{i=1}^{k-1} \hat{p}_{(i)} < 1-\alpha \le \sum_{i=1}^{k} \hat{p}_{(i)}\Big] = \max\Big\{k: \sum_{i=1}^{k} \hat{p}_{(i)} < 1-\alpha\Big\} + 1 = \min\Big\{k: \sum_{i=1}^{k} \hat{p}_{(i)} \ge 1-\alpha\Big\},$  (7)
specifically noting that K̂_α is based on the same functional K_α(·) in (3) but evaluated at the empirical distribution p̂ instead of p. It is easy to see that (7) simply counts the number of species in the sample after 100 × α% of the observations with the lowest (observed) species relative frequencies are trimmed. K̂_α in (7) will be referred to as the plug-in estimator of K_α in the subsequent text.
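For a concrete illustration of the plug-in estimator (7), the same function can be applied to the empirical relative frequencies of a sample; the count vector below is hypothetical and reuses the generalized_richness sketch above.

```python
counts = np.array([45, 25, 15, 9, 3, 2, 1])      # hypothetical sample species frequencies Y_k
p_hat = counts / counts.sum()                    # empirical relative frequencies
print(generalized_richness(p_hat, alpha=0.05))   # plug-in estimate: 5 species cover >= 95% of this sample
```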
However, K̂_α significantly under-estimates K_α due to a well-known phenomenon—a perpetual under-representation of small-probability letters in a finite sample. This phenomenon was perhaps first explicitly identified by Alan Turing during World War II in an effort to break the German naval Enigma codes, and it is referred to as the Turing phenomenon in the subsequent text. At the core of the Turing phenomenon is the total probability associated with letters of the alphabet that are not represented in a sample, that is, π_0 = ∑_{k≥1} p_k 1[Y_k = 0], also sometimes known as the "missing probability". In non-parametric estimation of information-theoretic quantities, small-probability letters often carry much information, and the fact that many (possibly infinitely many) of them are missing in a sample often causes a significant downward bias. For example, in view of ∑_{k≥1} w_k p_k = 1 where w_k = 1 and p_k > 0, Shannon's entropy H = ∑_{k≥1} (ln(1/p_k)) p_k is a weighted average of {p_k} with w_k = ln(1/p_k). For another example, the species richness K = ∑_{k≥1} (1/p_k) p_k is a weighted average of {p_k} with w_k = 1/p_k. In both cases, the small-probability events receive heavy weights, and therefore their under-representation in a sample translates into under-estimation. Comparing the two examples, the Turing phenomenon has a much more profound impact on the estimation of K than on that of H, in the sense that (ln(1/p))p → 0 while (1/p)p = 1 as p → 0. Having recognized the difficulty in estimating such quantities, it would seem reasonable to devise mechanisms, either by modifying the estimands (provided that the modified estimands remain relevant) or by modifying the assumptions on the underlying distribution, to control the behavior of the corresponding estimators. For example, refs. [19,20] discuss certain optimal rates of convergence for a class of estimators of entropy and community size under conditions that prevent p_k from being arbitrarily small, in turn controlling the behavior of the estimators. This article, however, seeks such control by means of α-trimming, both in the estimand, K_α, and in its estimator, K̂_α, specifically with regard to the notion of species richness.
On the other hand, K̂_α in (7) may be improved by means of bias correction. There are many possible ways to correct the bias. For simplicity, an estimator based on the basic bootstrap method is proposed in (14) of Section 3. In the same section, the statistical properties of both K̂_α of (7) and the bias-adjusted estimator K̃_α of (14) are discussed. More specifically, several asymptotic properties of partial sums of p̂ are given. Based on these asymptotic results, several conservative one-sample and two-sample inferential procedures regarding the underlying generalized species richness are proposed and justified. Several simulation results are also reported to gauge the performance of the estimators. Finally, a real-life ecological data set is used to illustrate the proposed method.
The article ends with an appendix where many lemmas, corollaries and propositions, along with their proofs, are found.

2. Properties of Generalized Species Richness Indices

Diversity as an intuitive notion is quite clear in most minds. However, the quantification of diversity is still quite a distance away from universal consensus. In the diversity literature, it is commonly accepted that an index may reasonably be referred to as a diversity index if it satisfies several axioms. For notational convenience, let P_K be the family of all distributions such that K = ∑_{k≥1} 1[p_k > 0], that is, distributions on a community with K species (or a finite alphabet with cardinality K), and let P be the family of all possible distributions on a general countable community. It follows that P = ∪_{K=1}^{∞} P_K. Let D(p) be a functional defined for every p ∈ P. The essential axioms of diversity indices include:
A1:
A diversity index D(p) is invariant under any permutation of species labels, that is, any permutation on the index set {k; k ≥ 1}.
A2:
A diversity index D(p) is minimized at p = {p_(1) = 1, p_(k) = 0; k ≥ 2}.
A3:
A diversity index D(p) is maximized at p = {p_(k) = 1/K; k = 1, …, K}, the uniform distribution in P_K, for every positive integer K.
A4:
For any distribution p, let p* be the distribution resulting from a transfer of a mass δ > 0 from a larger p_i to a smaller p_j, subject to δ ≤ p_i − p_j, with all other p_k's remaining unchanged. A diversity index D(p) satisfies D(p) ≤ D(p*).
The list of axioms may grow longer, representing a more stringent imposition on the underlying diversity indices. There are also stronger versions of the axioms. For example, A2 as stated is a weaker version of one that requires the index D(p) to be minimized only at p = {p_(1) = 1, p_(k) = 0; k ≥ 2} and not at any other distribution. Similarly, A3 as stated above also has a stronger version, which requires the index D(p) to be maximized only at p = {p_(k) = 1/K; k = 1, …, K} and not at any other distribution. Axiom A4 also has a stronger version, which requires a strict inequality, that is, D(p) < D(p*). The weaker axioms are chosen in this article because species richness K, the reference index of the discussion, satisfies them.
Regardless of the length or the version of the axioms, Axiom A1 is the most essential of them all and is universally accepted. It is important to recognize the implication of A1—every diversity index is a functional of p only through its non-increasingly ordered version. Consequently, the domain of all diversity indices can be represented by the subset of P that contains only non-increasingly ordered distributions.
For a given α ∈ (0, 1), it is clear that K_α satisfies A1, A2 and A3. The fact that K_α satisfies A4 is true but is not so obvious. This fact is one of the main results of this article and is summarized in Proposition A1, along with a lengthy proof, both of which are given in Appendix A. The fact that K_α satisfies all axioms A1 through A4 suggests that it may reasonably be regarded as a diversity index.
To quantify the robustness of the generalized species richness indices against disturbances due to re-distributions of a small abundance (or probability) mass, a notion of breakdown point may be introduced. Roughly speaking, the breakdown point is the greatest proportion of the total mass whose worst-case re-distribution cannot carry a function of the data beyond all bounds. To be more precise, let p ∈ P be an abundance distribution, let ε ∈ (0, 1) be an arbitrarily small value, and let ε₁ = {ε_{1,k}; k ≥ 1} and ε₂ = {ε_{2,k}; k ≥ 1} be two non-negative sequences, each with total mass ε > 0, that is, ∑_{k≥1} ε_{1,k} = ∑_{k≥1} ε_{2,k} = ε. Let
$\mathbf{p}_\varepsilon = \mathbf{p} - \boldsymbol{\varepsilon}_1 + \boldsymbol{\varepsilon}_2$  (8)
represent a perturbation obtained by subtracting a mass ε from p by means of ε₁ and adding back the same mass by means of ε₂.
Definition 1.
Let D(p) be any non-negative function of p ∈ P. The breakdown point of D at p is
$B_{\mathbf{p}}(D) = \sup\Big\{\varepsilon:\ \sup_{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2} D(\mathbf{p}_\varepsilon) < \infty\Big\}.$  (9)
Obviously 0 ≤ B_p(D) ≤ 1. A higher value of B_p(D) is regarded as an indication that D is more robust at p.
Definition 2.
Let B_p(D) be as in Definition 1. Let P₀ be a sub-family of P. For any given α ∈ (0, 1], if B_p(D) ≥ α for every p ∈ P₀, then D(p) is said to be 100 × α% robust with respect to P₀. In particular, if B_p(D) ≥ α for every p ∈ P, then D(p) is said to be 100 × α% robust.
Example 1.
The species richness, K, is 0%-robust. This is so because, for any p ∈ P and any small ε > 0, $\sup_{\boldsymbol{\varepsilon}_2} K((1-\varepsilon)\mathbf{p} + \boldsymbol{\varepsilon}_2) = \infty$.
Example 2.
The generalized species richness, K α , is 100 × α % -robust. This claim is one of the main results of this article and is summarized in Proposition A2. Both the proposition and its proof are given in Appendix A.
In passing, it may also be of interest to evaluate the robustness of two other community diversity indices, Shannon's entropy H = −∑_{k≥1} p_k ln p_k and the Gini-Simpson index D = 1 − ∑_{k≥1} p_k².
Example 3.
Shannon's entropy is 0%-robust. To see this, for a given p, let ε > 0 be an arbitrarily small value and let a total mass of ε > 0 be cumulatively trimmed from the right end of {p_(1), p_(2), …}, that is, using the language of Definition 1,
$\boldsymbol{\varepsilon}_1 = \{0, \dots, 0, \varepsilon_{K_\varepsilon}, p_{(K_\varepsilon+1)}, p_{(K_\varepsilon+2)}, \dots\},$
which has zeros in the first K_ε − 1 positions and ε_{K_ε} = ε − ∑_{i=K_ε+1}^{∞} p_(i) in the K_ε-th position. In such a construction, the remaining mass of 1 − ε covers K_ε species, and p − ε₁ = {p_(1), …, p_(K_ε−1), p_(K_ε) − ε_{K_ε}, 0, 0, …}. Redistributing the mass ε > 0 uniformly over the m indices from i = K_ε + 1 to i = K_ε + m, with mass ε/m each, results in p − ε₁ + ε₂ = {p_(1), …, p_(K_ε−1), p_(K_ε) − ε_{K_ε}, ε/m, …, ε/m, 0, …}. It follows that, as m → ∞,
$H(\mathbf{p} - \boldsymbol{\varepsilon}_1 + \boldsymbol{\varepsilon}_2) \ge \varepsilon\ln(m/\varepsilon) \to \infty.$
Example 4.
The Gini-Simpson index is 100%-robust. This is clearly true because D(p) is bounded between 0 and 1 for any abundance distribution p ∈ P.
A diversity profile is a set of diversity indices containing more than one index. A profile is generally preferred over a single diversity index because it is commonly accepted that diversity is a multi-dimensional notion and is better captured by a multivariate index. An immediate question naturally arises: how much diversity information is contained in a profile? This question can be partially answered with a notion of completeness defined below.
Definition 3.
A profile of indices, D_p = {D_α(p); α ∈ A}, where A is a set containing more than one element, is said to be complete if, for any two distributions p and q, p = q if and only if D_α(p) = D_α(q) for every α ∈ A.
Definition 3 essentially says that a complete profile D p uniquely determines p , and in turn uniquely determines any other diversity index evaluated at p .
Example 5.
K(p) of (6) is complete. This claim is clearly true noting that, for each positive integer i, p_(i) = sup{α: K_α(p) = i} − sup{α: K_α(p) = i + 1}.
K ( p ) of (6) is not the only complete profile. The two well known families of diversity indices given in the following two examples are also complete.
Example 6.
The generalized Simpson's diversity indices, D(p) = {D_u(p) = 1 − ∑_{k≥1} p_k^u; u ≥ 1}, form a complete profile. The fact that D(p), indexed by positive integers u ≥ 1, is a family of diversity indices is established by Grabchak, Marcon, Lang and Zhang (2017). The claim of completeness follows from the fact that η = {∑_{k≥1} p_k^u; u ≥ 1} uniquely determines p, a fact established in [21].
Example 7.
Rényi's diversity profile H(p) = {H_α(p) = (1 − α)^{−1} ln(∑_{k≥1} p_k^α); α ∈ (0, 1) ∪ (1, ∞)} is complete. The completeness follows from the fact that the subset of H(p), H*(p) = {H_u(p) = (1 − u)^{−1} ln(∑_{k≥1} p_k^u); integer u ≥ 2}, uniquely determines η = {∑_{k≥1} p_k^u; u ≥ 1}, which uniquely determines p.

3. Inference

Let the discussion of this section begin with the natural estimator of K_α, K̂_α = K_α(p̂), as given in (7), which may be viewed as an estimator based on trimming a fixed mass α from the right tail of p̂. This estimator, however, presents several difficulties in developing valid inferential procedures regarding K_α. Toward describing some of these difficulties, the following proposition is first stated and proved.
Proposition 1.
Let p = {p_k; k ≥ 1} be the underlying distribution on a countable alphabet, satisfying p_k ≥ p_{k+1} for every k ≥ 1, let p̂ = {p̂_i; i ≥ 1} be the corresponding relative letter frequencies in an iid sample of size n, and let K₀ be a positive integer such that 1 ≤ K₀ < K. Suppose the multiplicity of p_{K₀} in p is one. Then as n → ∞,
  • $\sqrt{n}\Big(\sum_{i=1}^{K_0} \hat{p}_i - \sum_{i=1}^{K_0} p_i\Big) \xrightarrow{D} N\Big(0,\ \sum_{i=1}^{K_0} p_i\big(1 - \sum_{i=1}^{K_0} p_i\big)\Big)$;
  • $P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_0} \hat{p}_{(i)} - \sum_{i=1}^{K_0} \hat{p}_i\Big) \ne 0\Big) \to 0$; and
  • $\sqrt{n}\Big(\sum_{i=1}^{K_0} \hat{p}_{(i)} - \sum_{i=1}^{K_0} p_i\Big) \xrightarrow{D} N\Big(0,\ \sum_{i=1}^{K_0} p_i\big(1 - \sum_{i=1}^{K_0} p_i\big)\Big)$.
Proof. 
Part 1 directly follows from the central limit theorem. For Part 2, first consider an aggregation of the letters as follows. If K < ∞, let K′ = K, and if K = ∞, let K′ be any index such that p*_{K′} = ∑_{i=K′}^{∞} p_i < p_{K₀}. Let the observed relative letter frequencies in the sample be aggregated accordingly; in particular, let p̂*_{K′} = ∑_{i=K′}^{∞} p̂_i. Let p̂* = {p̂_1, …, p̂_{K₀}, …, p̂_{K′−1}, p̂*_{K′}}, and let p* = {p_1, …, p_{K₀}, …, p_{K′−1}, p*_{K′}}. It follows that p̂* converges to p* in probability, that is, P(p̂* ∈ n_ε(p*)) → 1, where n_ε(p*) is an arbitrarily small ε-neighborhood centered at the point p*. Noting that the p_k's are arranged in a non-increasing order, that p_{K₀} has multiplicity 1, that K′ is finite, and that n_ε(p*) is arbitrarily small, the event {p̂* ∈ n_ε(p*)} implies the event that the set of the K₀ largest p̂'s is identical to the first K₀ p̂'s, that is, O_n(K₀) = {{p̂_i; i = 1, …, K₀} = {p̂_(i); i = 1, …, K₀}}. It follows that P(O_n(K₀)) → 1 and that, for any ε > 0,
$P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0}\hat p_i\Big) > \varepsilon\Big) = P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0}\hat p_i\Big) > \varepsilon \,\Big|\, O_n(K_0)\Big) P(O_n(K_0)) + P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0}\hat p_i\Big) > \varepsilon \,\Big|\, O_n^c(K_0)\Big) P(O_n^c(K_0)) = 0 \times P(O_n(K_0)) + P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0}\hat p_i\Big) > \varepsilon \,\Big|\, O_n^c(K_0)\Big) P(O_n^c(K_0)) \le P(O_n^c(K_0)) \to 0.$
Part 2 follows.
For Part 3, since
$\sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0} p_i\Big) = \sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_{(i)} - \sum_{i=1}^{K_0}\hat p_i\Big) + \sqrt{n}\Big(\sum_{i=1}^{K_0}\hat p_i - \sum_{i=1}^{K_0} p_i\Big),$
and the first term converges to zero in probability by Part 2, the asymptotic normality follows from Part 1 by Slutsky's theorem. □
The first difficulty with K̂_α = K_α(p̂) is that it cannot be guaranteed to be consistent under general conditions. To see this, one only needs to consider the special case of ∑_{i=1}^{K_α} p_(i) = 1 − α. By Part 3 of Proposition 1, for sufficiently large n,
$P\Big(\sqrt{n}\Big(\sum_{i=1}^{K_\alpha}\hat p_{(i)} - \sum_{i=1}^{K_\alpha} p_{(i)}\Big) < 0\Big) = P\Big(\sum_{i=1}^{K_\alpha}\hat p_{(i)} < 1-\alpha\Big) = P(\hat K_\alpha \ge K_\alpha + 1) \to 0.5 > 0.$  (10)
(10) implies inconsistency and, in addition to that, (10) also suggests that, for sufficiently large n, K ^ α could over-estimate K α , albeit by at most one. Clearly the said inconsistency is caused by the discrete nature of the functional K α ( p ^ ) .
The second difficulty with K̂_α = K_α(p̂) is its significant downward bias when n is relatively small. To illustrate the bias, consider the extreme case of α = 0 in K_α, which is simply the species richness index, K, in the case of a finite sample space. If K is relatively large, a relatively small iid sample of size n would likely not cover all K species in the community. In fact, the sample would typically miss a large number of species, that is, K_obs ≪ K, where K_obs is the observed number of species in the sample. Consequently, the empirical distribution p̂ = {p̂_k; k = 1, …, K} would consist mostly of zeros and hence would severely under-represent p = {p_k; k = 1, …, K} in terms of species richness. When α > 0 but small, the same qualitative argument explains the significant downward bias of K̂_α.
The possible inconsistency, along with the persistent and significant downward bias, gives much difficulty in developing inferential procedures under general conditions based on asymptotic properties such as Part 3 of Proposition 1.
Next consider bootstrapping 100 × (1 − β)% confidence intervals (in standard notation), respectively, of the quantile method, [θ̂*_{β/2}, θ̂*_{1−β/2}], and of the centered quantile method, also known as the basic method, [2θ̂ − θ̂*_{1−β/2}, 2θ̂ − θ̂*_{β/2}], where θ̂ denotes the estimator based on the original sample of size n, and θ̂*_{1−β/2} and θ̂*_{β/2}, respectively, denote the 100 × (1 − β/2)th and 100 × (β/2)th percentiles of the bootstrapping samples.
First, let it be noted that the quantile method [θ̂*_{β/2}, θ̂*_{1−β/2}] gives an inadequate 100 × (1 − β)% confidence interval. To see this, let the extreme case of K_α = K with α = 0 be considered once again, given an empirical distribution p̂ = {p̂_(1), p̂_(2), …}. It is clear that K̂_α ≤ K_α, as already argued above. For the same reason, by sampling from p̂, every K̂*_α ≤ K̂_α ≤ K_α. Consequently, [θ̂*_{β/2}, θ̂*_{1−β/2}] necessarily excludes K_α, which lies far to its right, causing the bootstrapping interval to have much lower coverage than 1 − β. That is to say, in terms of estimating K_α, the downward bias of K̂_α strikes twice in bootstrapping with the quantile method, once in using the original sample and once in using a bootstrapping sample. In fact, it is commonly observed with real data sets that
$\hat K^*_{\alpha,\beta/2} < \hat K^*_{\alpha,1-\beta/2} \le \hat K_\alpha \le K_\alpha,$  (11)
where K̂*_{α,β/2} and K̂*_{α,1−β/2} are the 100 × (β/2)th and 100 × (1 − β/2)th percentiles, respectively, of the estimates of K̂_α based on bootstrapping samples. See Example 8 below. The discomforting (11) essentially disqualifies the bootstrapping confidence interval based on the quantile method as a valid inferential tool.
However, bootstrapping based on the centered quantile method, also known as the basic bootstrapping method, is qualitatively different. There, the downward bias of K̂_α relative to K_α is offset by the bootstrapping downward bias of K̂*_α relative to K̂_α. Once again, in the extreme case of K_α = K with α = 0, since K̂*_α ≤ K̂_α for every bootstrapping sample, it follows that K̂_α − K̂*_{α,β/2} ≥ K̂_α − K̂*_{α,1−β/2} ≥ 0 and hence K̂_α ≤ K̂_α + (K̂_α − K̂*_{α,1−β/2}) ≤ K̂_α + (K̂_α − K̂*_{α,β/2}), or
$\hat K_\alpha \le 2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2} \le 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2},$  (12)
that is, K̂_α lies at or to the left of the centered bootstrapping confidence interval. In fact, (12) is commonly observed with real data sets even when α > 0 is small. See Example 8 below. Unlike (11), the fact that K̂_α is outside of the centered bootstrapping confidence interval in (12) only indicates inadequacy of the estimator K̂_α but not of the interval itself. In fact, the centered bootstrapping confidence interval,
$[2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2},\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2}],$  (13)
represents a bias-adjustment in the right direction, that is, the bias in K ^ α as an estimator of K α is partially offset by that in K ^ α * as an estimator of K ^ α . It is to be noted that (12) suggests a bias-adjusted alternative estimator to K ^ α ,
$\tilde K_\alpha = 2\hat K_\alpha - \hat K^*_{\alpha,1/2},$  (14)
where K̂*_{α,1/2} is the median of the bootstrapped estimates.
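A minimal sketch of the bias-adjusted estimator (14) is given below, assuming the generalized_richness function from the earlier sketch; the number of bootstrap replicates B and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_richness(counts, alpha, B=1000):
    """Plug-in K-hat_alpha of (7) and the bias-adjusted estimator of (14): 2*K-hat minus the bootstrap median."""
    counts = np.asarray(counts)
    n = counts.sum()
    p_hat = counts / n
    k_hat = generalized_richness(p_hat, alpha)
    boot = np.empty(B, dtype=int)
    for b in range(B):
        resample = rng.multinomial(n, p_hat)              # bootstrap sample from the empirical distribution
        boot[b] = generalized_richness(resample / n, alpha)
    k_adj = 2 * k_hat - int(np.median(boot))              # bias-adjusted estimator (14)
    return k_hat, k_adj, boot
```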
The 100 × (1 − β)% bootstrapping confidence interval in (13), or confidence set, since only the integer values in the interval are relevant, provides a basic assessment of K_α's whereabouts. However, its coverage does not necessarily converge to the claimed value 1 − β as n increases indefinitely, due to the above-mentioned possible inconsistency of K̂_α and the consequential "at-most-one" asymptotic over-estimation. To take this into consideration, a conservative adjustment may be adopted by extending the lower limit of (13) by one, that is,
$[2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2} - 1,\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2}].$  (15)
An advantage of (15) is that its asymptotic coverage is at least 1 − β for general p, but a disadvantage is that the limiting form of (15) necessarily contains two integer values instead of one, which (13) could achieve when K̂_α is consistent.
On the other hand, while (15) accommodates the issue of possible asymptotic over-estimation (by at most one) by K̂_α, in most practical cases the more acute issue is still the under-estimation of K_α by K̂_α when n is not sufficiently large. The confidence set in (15) generally requires n to be quite large for its coverage to be reasonably close to the claimed coverage 1 − β. To help accelerate the convergence of the actual coverage to the claimed coverage, a more conservative adjustment may be adopted by also extending the upper limit of (15) by one, that is,
$[2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2} - 1,\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2} + 1].$  (16)
Advantages of (16) are that its asymptotic coverage is at least 1 − β for general p and that its actual coverage converges to at least 1 − β faster as n increases. However, a disadvantage is that the limiting form of (16) necessarily contains three integer values and no fewer.
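Building on the bootstrap replicates from the sketch above, the conservative confidence set (16) only requires two bootstrap percentiles; β = 0.05 and the truncation of the percentiles to integers are illustrative choices here.

```python
beta = 0.05
k_hat, k_adj, boot = bootstrap_richness(counts, alpha=0.05, B=1000)
lower = 2 * k_hat - int(np.percentile(boot, 100 * (1 - beta / 2))) - 1   # lower end of (16)
upper = 2 * k_hat - int(np.percentile(boot, 100 * (beta / 2))) + 1       # upper end of (16)
print(lower, upper)
```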
The bootstrapping confidence intervals described in (13), (15) and (16) may also be utilized in testing hypotheses with different degrees of conservativeness. For example, based on (13) and at the β level of significance, in testing H₀: K_α = k_α versus H_a: K_α > k_α, H_a: K_α < k_α or H_a: K_α ≠ k_α, where k_α is a pre-specified positive integer, one may choose to reject H₀ when
$k_\alpha < 2\hat K_\alpha - \hat K^*_{\alpha,1-\beta},$  (17)
$k_\alpha > 2\hat K_\alpha - \hat K^*_{\alpha,\beta},$ or  (18)
$k_\alpha \notin [2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2},\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2}],$  (19)
respectively.
Based on (15) and at the β level of significance, in testing H₀: K_α = k_α versus H_a: K_α > k_α, H_a: K_α < k_α or H_a: K_α ≠ k_α, where k_α is a pre-specified positive integer, one may choose to reject H₀ when
$k_\alpha < 2\hat K_\alpha - \hat K^*_{\alpha,1-\beta} - 1,$  (20)
$k_\alpha > 2\hat K_\alpha - \hat K^*_{\alpha,\beta},$ or  (21)
$k_\alpha \notin [2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2} - 1,\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2}],$  (22)
respectively.
Based on (16) and at the β level of significance, in testing H₀: K_α = k_α versus H_a: K_α > k_α, H_a: K_α < k_α or H_a: K_α ≠ k_α, where k_α is a pre-specified positive integer, one may choose to reject H₀ when
$k_\alpha < 2\hat K_\alpha - \hat K^*_{\alpha,1-\beta} - 1,$  (23)
$k_\alpha > 2\hat K_\alpha - \hat K^*_{\alpha,\beta} + 1,$ or  (24)
$k_\alpha \notin [2\hat K_\alpha - \hat K^*_{\alpha,1-\beta/2} - 1,\ 2\hat K_\alpha - \hat K^*_{\alpha,\beta/2} + 1],$  (25)
respectively.
Suppose there are two communities and it is of interest to estimate the difference between the two α -trimmed richness indices,
$D_\alpha = K_{1,\alpha} - K_{2,\alpha},$  (26)
where K 1 , α and K 2 , α are α -trimmed richness indices of the two underlying communities, respectively. The proposed estimator of D α in (26) is
$\hat D_\alpha = \tilde K_{1,\alpha} - \tilde K_{2,\alpha},$  (27)
where K̃_{1,α} = 2K̂_{1,α} − K̂*_{1,α,1/2} and K̃_{2,α} = 2K̂_{2,α} − K̂*_{2,α,1/2}, in which K̂_{1,α} and K̂_{2,α} are as in (7), and K̂*_{1,α,1/2} and K̂*_{2,α,1/2} are the respective bootstrapping medians from the two samples as in (14).
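A short sketch of the two-sample point estimate (27) follows, reusing bootstrap_richness from the earlier sketch; both count vectors are hypothetical.

```python
counts_1 = np.array([60, 30, 20, 10, 5, 2, 1, 1, 1])   # hypothetical sample from community 1
counts_2 = np.array([80, 15, 10, 5, 5, 3, 1, 1])       # hypothetical sample from community 2

_, k_adj_1, _ = bootstrap_richness(counts_1, alpha=0.05)
_, k_adj_2, _ = bootstrap_richness(counts_2, alpha=0.05)
print(k_adj_1 - k_adj_2)                                # point estimate of D_alpha as in (27)
```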
In testing the equality of the generalized species richness of two communities, D_α = K_{1,α} − K_{2,α}, one may first consider a bootstrapping 1 − β confidence interval for D_α based on two independent samples of sizes n₁ and n₂, respectively,
$[2\hat D_\alpha - \hat D^*_{\alpha,1-\beta/2},\ 2\hat D_\alpha - \hat D^*_{\alpha,\beta/2}],$  (28)
where D̂_α = K̃_{1,α} − K̃_{2,α}, or, analogous to (16), a more conservative version,
$[2\hat D_\alpha - \hat D^*_{\alpha,1-\beta/2} - 1,\ 2\hat D_\alpha - \hat D^*_{\alpha,\beta/2} + 1],$  (29)
where D̂*_{α,β/2} and D̂*_{α,1−β/2} are the 100 × (β/2)th and 100 × (1 − β/2)th percentiles of the bootstrapping estimates, each of which is based on a sample of size n₁ from p̂₁ and a sample of size n₂ from p̂₂, where, for j = 1 or j = 2, p̂_j is the vector of ordered relative frequencies of letters in the sample of size n_j from the jth community.
For testing H₀: K_{1,α} − K_{2,α} = d₀ versus H₁: K_{1,α} − K_{2,α} > d₀, or H₂: K_{1,α} − K_{2,α} ≠ d₀, where K_{1,α} and K_{2,α} are the respective generalized species richness indices of the two communities and d₀ is a pre-specified integer, approximate testing procedures may be devised based on (28) or (29). For example, based on (28), one may choose to reject H₀ when
$d_0 < 2\hat D_\alpha - \hat D^*_{\alpha,1-\beta}$ for H₀ vs. H₁, or  (30)
$d_0 \notin [2\hat D_\alpha - \hat D^*_{\alpha,1-\beta/2},\ 2\hat D_\alpha - \hat D^*_{\alpha,\beta/2}]$ for H₀ vs. H₂.  (31)
Similarly, based on (29), one may choose to reject H 0 when
$d_0 < 2\hat D_\alpha - \hat D^*_{\alpha,1-\beta} - 1$ for H₀ vs. H₁, or  (32)
$d_0 \notin [2\hat D_\alpha - \hat D^*_{\alpha,1-\beta/2} - 1,\ 2\hat D_\alpha - \hat D^*_{\alpha,\beta/2} + 1]$ for H₀ vs. H₂.  (33)
To assess the reliability of the inferential procedures discussed above, several simulation studies are conducted. The studies are carried out under three different distributions. The first distribution is the uniform distribution with K = 20 and p_k = 0.05 for k = 1, …, 20. The second distribution is a triangular distribution with K = 20 and p_k ∝ k for k = 1, …, 20. The third distribution is the Poisson distribution with λ = 10 and p_k = e^{−λ} λ^k / k!, noting that in this case K is infinite.
In Table 1, Table 2 and Table 3, the bias and the mean squared error of K̂_α of (7) and of K̃_α of (14) are compared, at two levels of α, α = 0.01 and α = 0.05, for various sample sizes n. Table 1, Table 2 and Table 3, respectively, summarize the results under the three different distributions, the uniform, the triangular and the Poisson. Each simulation scenario is based on 5000 repeated samples. Each sample is bootstrapped 1000 times. The bias is defined in such a way that a positive value indicates under-estimation and a negative value indicates over-estimation. The variable T is the average of Turing's formula, T_n = n₁/n, where n₁ is the number of singletons in a sample, over the 5000 simulated samples. T helps to indicate the adequacy of the sample size. Turing's formula, T_n, is sometimes called the sample coverage deficit, and 1 − T_n is the sample coverage (see [17]).
It is quite clear that K̃_α generally has a smaller simulated bias than K̂_α. More specifically, if one considers an absolute bias of less than one to be satisfactory, then K̃_α gets there faster, as n increases, than K̂_α in all cases considered in the simulation studies.
To assess the performance of the confidence sets in (13), (15) and (16), their actual coverage rates are evaluated by simulation studies with 1 − β = 0.95 for various sample sizes and distributions. For each scenario, the coverage rate is based on 5000 simulated samples, and for each sample, the bootstrapping confidence set is based on 1000 bootstrapping samples. The results are summarized in Table 4, Table 5 and Table 6.
Let it be noted that, although the confidence set of (13) can perform well in some cases (see Columns 3 and 6 in Table 4, and Columns 6 and 12 of Table 5), it has difficulty in providing an appropriate coverage in many other cases (see Column 12 of Table 4, Columns 3, 6, 9 and 12 of Table 5, and Columns 3 and 9 of Table 6). The said difficulty is partially caused by the inconsistency mentioned above under certain combinations of distributional characteristics and values of α. Similarly, the confidence set of (15) suffers from the same difficulty, though to a lesser degree. It too can perform well in some cases (see Columns 4, 7, 10 and 13 in Table 4, Columns 7 and 10 of Table 5, and Columns 7 and 10 of Table 6), but it does not in many other cases (see Column 4 of Table 5, and Columns 4 and 9 of Table 6). Since in practice the underlying distribution is not observable, it cannot be determined a priori which values of α are appropriate and which are not. This fact essentially disqualifies the confidence sets of (13) and (15), but not that of (16), as general inferential procedures. Additionally, it is to be noted that the confidence set of (16) performs well across all cases in the simulation studies, albeit more conservatively. The confidence sets of (28) and (29) have generally better performance than their one-sample counterparts, due to an offset of bias between the two one-sample estimators.
Another point of interest pertains to the practically important question of how large a sample should be in order for (16) to produce a reasonable coverage. The simulation results in Table 4, Table 5 and Table 6 seem to indicate that the coverage is adequate when Turing's formula, which estimates the total probability associated with the letters of the alphabet not represented in a given sample, takes a value not much greater than α, that is, T = n₁/n < α, where n₁ is the number of species observed exactly once in the sample; this is referred to as the rule of thumb below. (Interested readers may refer to Zhang (2017) for a comprehensive introduction to Turing's formula.)
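The rule of thumb reduces to a one-line check on the sample; the sketch below (with a hypothetical count vector) computes T = n₁/n and compares it with the chosen α.

```python
import numpy as np

def turing_coverage_deficit(counts):
    """Turing's formula T = n1 / n: the number of singletons over the sample size."""
    counts = np.asarray(counts)
    return (counts == 1).sum() / counts.sum()

sample_counts = np.array([60, 30, 20, 10, 5, 2, 1, 1, 1])   # hypothetical species frequencies
T = turing_coverage_deficit(sample_counts)
print(T, T < 0.05)   # rule of thumb: trust (16) at level alpha only when T < alpha
```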
In summary, all things considered, observing the rule of thumb,
  • (14) is the proposed estimator of K α ;
  • (16) is the proposed 100 × ( 1 β ) % confidence set for K α ;
  • (23)–(25) are the proposed approximate size- β tests of hypothesis involving K α ;
  • (29) is the proposed 100 × ( 1 β ) % confidence set for D α = K 1 , α K 2 , α ; and
  • (32) and (33) are the proposed approximate size- β tests of hypothesis involving D α .
Example 8.
Two tree samples, from 1-ha plots #6 and #18 of tropical forest in the experimental forest of Paracou, French Guiana, described in [22] and indexed below as samples 6 and 18, respectively, are compared in terms of biodiversity. Respectively, 643 and 481 trees with diameter at breast height over 10 cm were inventoried. The data are available in the entropart package for R. In these samples, 147 and 149 tree species from plots #6 and #18, respectively, are observed, along with their frequencies. In [23], the data are analyzed using generalized Simpson's indices, and it is concluded that plot #18 is more diverse than plot #6. In the respective samples, Turing's formula takes the values T₆ = 10.58% and T₁₈ = 15.38%. Observing the rule of thumb, let the generalized species richness be evaluated at α = 0.15. Then K̃_{6,0.15} = 76 and K̃_{18,0.15} = 91 (as compared to the plug-ins K̂_{6,0.15} = 65 and K̂_{18,0.15} = 77), and therefore D̂ = K̃_{18,0.15} − K̃_{6,0.15} = 15. The proposed 95% confidence sets for K_{6,0.15} and K_{18,0.15} are, respectively, [69, 82] and [84, 98]. The proposed one-sided and two-sided 95% confidence sets for D_{0.15} are, respectively, [6, ∞) and [5, 26], both of which exclude zero and therefore lead to a rejection of H₀: K_{6,0.15} = K_{18,0.15} against either H_a: K_{6,0.15} < K_{18,0.15} or H_a: K_{6,0.15} ≠ K_{18,0.15}, qualitatively supporting the findings of [23].
Let α vary from 0.01 to 0.99. K̃_{6,α} and K̃_{18,α} as functions of α, computed by means of (14), give the two curves in Figure 2, which visually suggest that plot #18 is more diverse than plot #6 for a wide range of α. D̂_α = K̃_{18,α} − K̃_{6,α} as a function of α, along with the 95% point-wise confidence band by means of (29), is given in Figure 3, where it is evident that, with reasonable statistical confidence, K_{18,α} > K_{6,α} for α values in the range from 0.15 to 0.6, that is, for 1 − α values from 0.4 to 0.85.

4. Summary

This article proposes a generalized richness index, K_α of (3), or equivalently of (4) or (5), and an estimator, K̃_α of (14). α ∈ [0, 1) is a user-chosen constant, and when α = 0, K_α becomes the well-known original richness index, K. K_α may also be referred to as the α-trimmed richness index. It is designed to remove or to alleviate several weaknesses of K. First, K is only finitely defined for some distributions but not for all; on the other hand, K_α is finitely defined for all distributions on a countable alphabet. Second, K does not take the abundances {p_k; k ≥ 1} into consideration, but K_α does. Third, K is ultra-sensitive to re-distribution of an arbitrarily small mass, but K_α is not, as evidenced by Definitions 1 and 2, Examples 1 and 4, and Proposition A2.
A conservative confidence interval based on bootstrapping is proposed in (16). This confidence interval provides the basic support for inferences about K_α. A rule of thumb to judge whether the sample is adequate for supporting the proposed methodology is also proposed, based on Turing's formula: T = n₁/n < α, where n₁ is the number of singletons in the sample of size n. The rule of thumb is illustrated by the simulation results in Table 4, Table 5 and Table 6. More specifically, in Table 4, the rule of thumb amounts to n ≥ 110 for α = 0.01, n ≥ 60 for α = 0.05, n ≥ 50 for α = 0.10 and n ≥ 40 for α = 0.15; the simulated coverages are all near or above the target 95%. In Table 5, the rule of thumb amounts to n ≥ 150 for α = 0.01, n ≥ 70 for α = 0.05, n ≥ 50 for α = 0.10 and n ≥ 40 for α = 0.15; the simulated coverages are all above the target 95%. In Table 6, the rule of thumb amounts to n ≥ 450 for α = 0.01, n ≥ 70 for α = 0.05, n ≥ 40 for α = 0.10 and n ≥ 30 for α = 0.15; the simulated coverages are all above the target 95%.
The one-sample estimator K̃_α of (14) for a single community is extended to the two-sample setting for D_α of (26), the difference of the α-trimmed richness indices of two communities. The proposed estimator of D_α is D̂_α as in (27). A proposed 100 × (1 − β)% confidence interval for D_α is given in (29). This interval provides the basic support for testing hypotheses regarding D_α, as specified in (32) and (33).
For the two-sample problem, the rule of thumb for the one-sample problem is modified to be:
$T_1 = n_{1,1}/n_1 < \alpha \quad \text{and} \quad T_2 = n_{2,1}/n_2 < \alpha,$
where n₁ and n₂ are the respective sample sizes of the two independent samples, and n_{1,1} and n_{2,1} are the respective numbers of singletons in the two samples.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in Example 8 are available in the entropart package for R.

Conflicts of Interest

The author of this manuscript has no financial or non-financial interests that are directly or indirectly related to the work submitted for publication.

Appendix A

The claims that K α satisfies Axiom A 4 and that K α is 100 × α % robust (the claim of Example 2) are established in this section.
For clarity of the proofs, a definition and two lemmas are needed. The generalized species richness K_α(p) as in (3), (4) or (5) is defined for an underlying p that is a probability distribution, that is, p_k ≥ 0 for each k and ∑_{k≥1} p_k = 1. For notational convenience in the proofs of this section, let the definition of K_α(p) be extended to any sequence of non-negative numbers, p = {p_k; k ≥ 1} or {p_(k); k ≥ 1}, such that ∑_{k≥1} p_k < ∞, which implies that p_(k) → 0 as k → ∞, specifically noting that ∑_{k≥1} p_k may not necessarily be one.
Definition A1.
For any sequence of non-negative values p = {p_k; k ≥ 1} such that ∑_{k≥1} p_k < ∞, and an α ∈ (0, ∑_{k≥1} p_k), the generalized species richness is given by
$K_\alpha = K_\alpha(\mathbf{p}) = \min\Big\{k: \sum_{i=k}^{\infty} p_{(i)} < \alpha\Big\} - 1.$  (A1)
It is clear that if p is a bona fide probability distribution, then K_α(p) given in (3), (4) or (5) is identical to (A1) in Definition A1. In this section, the notion of K_α used is that of (A1).
Lemma A1.
For any given sequence of non-negative values p = {p_k; k ≥ 1}, let a mass of ε > 0 be taken away from p_i for a specific index i, where ε ∈ (0, p_i]. Let p*_i = p_i − ε, let p* be the sequence p but with p*_i in place of p_i, and let {p*_(k); k ≥ 1} be the re-arrangement of p* in non-increasing order. For any α ∈ (0, ∑_{k≥1} p_k), K_α(p*) ≤ K_α(p).
Proof. 
Without loss of generality, let it be assumed that the sequence p = {p_k; k ≥ 1} is non-increasingly arranged. Denote K_α(p) = k_α and note that
$\sum_{k=k_\alpha}^{\infty} p_{(k)} \ge \alpha \quad \text{and} \quad \sum_{k=k_\alpha+1}^{\infty} p_{(k)} < \alpha.$  (A2)
The lemma is established, respectively, in three exhaustive scenarios: (a) i < k_α, (b) i = k_α, and (c) i > k_α.
In scenario (a), there are three exhaustive possible placements of p*_i in p*:
(a1):  p*_i > p_{k_α}, p* = {p_1, …, p*_i, …, p_{k_α}, …},
(a2):  p*_i = p_{k_α}, p* = {p_1, …, p*_i, p_{k_α}, …},
(a3):  p*_i < p_{k_α}, p* = {p_1, …, p_{k_α}, …, p*_i, …}.
In either scenario (a1) or scenario (a2), the right tail sub-sequence {p_{k_α}, …} of p is preserved in p*, and therefore K_α(p*) = K_α(p).
In scenario (a3), p_{k_α} occupies the (k_α − 1)st position in p*. It follows that
$\tau^*(k_\alpha) = \sum_{k \ge k_\alpha} p^*_{(k)} = p^*_i + \sum_{k \ge k_\alpha+1} p_k.$  (A3)
Noting that ∑_{k≥k_α+1} p_k < α by (A2), if τ*(k_α) ≥ α then K_α(p*) = k_α = K_α(p). If τ*(k_α) < α then, again by (A2), K_α(p*) = k_α − 1 < K_α(p).
In scenario (b), (A3) still holds. Noting that ∑_{k≥k_α+1} p_k < α, if τ*(k_α) ≥ α then K_α(p*) = k_α = K_α(p). If τ*(k_α) < α then, since p_{k_α−1} ≥ p_{k_α}, K_α(p*) = k_α − 1 < K_α(p).
In scenario (c), it follows that p_{k_α} occupies the k_α-th position in p* and that
$\tau^*(k_\alpha+1) = \sum_{k \ge k_\alpha+1} p^*_{(k)} = \sum_{k \ge k_\alpha+1} p_k - (p_i - p^*_i) < \alpha.$
If τ*(k_α) = ∑_{k≥k_α} p_k − (p_i − p*_i) ≥ α, then K_α(p*) = k_α = K_α(p). If τ*(k_α) < α, then
$\tau^*(k_\alpha-1) = p_{k_\alpha-1} + \tau^*(k_\alpha) = p_{k_\alpha-1} + \sum_{k \ge k_\alpha} p_k - (p_i - p^*_i) = \sum_{k \ge k_\alpha} p_k + (p_{k_\alpha-1} - p_i) + p^*_i \ge \sum_{k \ge k_\alpha} p_k \ge \alpha.$
It follows that K_α(p*) = k_α − 1 < K_α(p). □
The proof of Lemma A1 above actually establishes that K_α(p) − 1 ≤ K_α(p*) ≤ K_α(p), which immediately gives the following corollary.
Corollary A1.
For any given sequence of non-negative values p = {p_k; k ≥ 1}, let a mass of ε > 0 be added to p_i for a specific index i. Let p*_i = p_i + ε, let p* be the sequence p but with p*_i in place of p_i, and let {p*_(k); k ≥ 1} be the re-arrangement of p* in non-increasing order. For any α ∈ (0, ∑_{k≥1} p_k), K_α(p) ≤ K_α(p*) ≤ K_α(p) + 1.
Lemma A2.
For any given sequence of non-negative values p = {p_k; k ≥ 1} and a given α ∈ (0, ∑_{k≥1} p_k), let a mass of ε > 0 be added to either p_i or p_j, where i and j are two specific indices such that p_i > p_j, resulting in p*_i = p_i + ε or p*_j = p_j + ε. Let p*(i) be the sequence p but with p*_i in place of p_i, and let p*(j) be the sequence p but with p*_j in place of p_j. Let p*(i) and p*(j) also denote their respective re-arrangements in non-increasing order. Then K_α(p*(i)) ≤ K_α(p*(j)).
Proof. 
Without loss of generality, let it be assumed that the sequence p = {p_k; k ≥ 1} is non-increasingly arranged. Denote K_α(p) = k_α and note that
$\sum_{i=k_\alpha}^{\infty} p_{(i)} \ge \alpha \quad \text{and} \quad \sum_{i=k_\alpha+1}^{\infty} p_{(i)} < \alpha.$
The lemma is established, respectively, in four exhaustive scenarios: (a) i < j ≤ k_α, (b) i < k_α ≤ j, (c) i = k_α < j, and (d) k_α < i < j.
In scenario (a), the tail sequence {p_{k_α}, …} of p is preserved in p*(i) and p*(j) after adding a mass ε to p_i or p_j, respectively. It follows that K_α(p*(i)) = K_α(p*(j)), and hence K_α(p*(i)) ≤ K_α(p*(j)) holds.
In scenario (b), the tail sequence {p_{k_α}, …} of p is preserved in p*(i) after ε is added to p_i, and therefore K_α(p*(i)) = k_α. However, applying Corollary A1, it follows that K_α(p*(j)) ≥ K_α(p) = k_α. Hence K_α(p*(i)) ≤ K_α(p*(j)) holds.
In scenario (c), p*_i = p_i + ε occupies a position in p*(i) with an index less than or equal to k_α. This fact implies that the value at the k_α-th position in p*(i) is greater than or equal to p_{k_α}. By the definition of K_α in (A1), K_α(p*(i)) = k_α = K_α(p). However, applying Corollary A1, K_α(p*(j)) ≥ K_α(p) = k_α. Hence K_α(p*(i)) ≤ K_α(p*(j)) holds.
In scenario (d), let the position occupied by p*_i = p_i + ε in p*(i) be denoted as s, and that occupied by p*_j = p_j + ε in p*(j) as t. Let it be recognized that s ≤ t. The following three exhaustive sub-scenarios need to be, respectively, considered: (d1) k_α ≤ s ≤ t, (d2) s < k_α ≤ t, and (d3) s ≤ t ≤ k_α.
In scenario (d1), it follows that $\sum_{k \ge k_\alpha} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha} p_k + \varepsilon \ge \alpha + \varepsilon > \alpha$, and similarly that $\sum_{k \ge k_\alpha} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha} p_k + \varepsilon \ge \alpha + \varepsilon > \alpha$. If ε is such that
$\sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha+1} p_{(k)} + \varepsilon \ge \alpha,$  (A6)
then K_α(p*(i)) ≥ k_α + 1 and therefore, by Corollary A1, K_α(p*(i)) = k_α + 1. On the other hand, since p_i > p_j, (A6) implies
$\sum_{k \ge k_\alpha+1} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha+1} p_{(k)} + \varepsilon = \sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} \ge \alpha,$
which in turn implies that K_α(p*(j)) ≥ k_α + 1. By Corollary A1, K_α(p*(j)) = k_α + 1, and therefore K_α(p*(i)) = K_α(p*(j)). Hence K_α(p*(i)) ≤ K_α(p*(j)) holds.
If
$\sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha+1} p_{(k)} + \varepsilon < \alpha,$  (A7)
then K_α(p*(i)) ≤ k_α and therefore, by Corollary A1, K_α(p*(i)) = k_α. On the other hand, since p_i > p_j, (A7) implies
$\sum_{k \ge k_\alpha+1} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha+1} p_{(k)} + \varepsilon = \sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} < \alpha,$
which in turn implies that K_α(p*(j)) ≤ k_α. By Corollary A1, K_α(p*(j)) = k_α, and therefore K_α(p*(i)) = K_α(p*(j)). Hence K_α(p*(i)) ≤ K_α(p*(j)) holds.
In scenario (d2), let it be noted first that
  • the value at the (k_α + 1)st position in p*(i) is p_{k_α}, and the value at the k_α-th position in p*(j) is also p_{k_α}; and
  • p_i + ε > p_{k_α}, and therefore ε > p_{k_α} − p_i.
Consider the two tail sums of p*(i) and p*(j). First, for i,
$\tau^*_{k_\alpha+1}(i) = \sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha} p_k - p_i, \qquad \tau^*_{k_\alpha+2}(i) = \sum_{k \ge k_\alpha+2} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha+1} p_k - p_i < \alpha - p_i < \alpha;$  (A8)
and next, for j,
$\tau^*_{k_\alpha}(j) = \sum_{k \ge k_\alpha} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha} p_k + \varepsilon \ge \alpha + \varepsilon > \alpha, \qquad \tau^*_{k_\alpha+1}(j) = \sum_{k \ge k_\alpha+1} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha+1} p_k + \varepsilon > \sum_{k \ge k_\alpha+1} p_k + p_{k_\alpha} - p_i = \sum_{k \ge k_\alpha} p_k - p_i = \tau^*_{k_\alpha+1}(i).$  (A9)
If τ*_{k_α+1}(i) ≥ α, then by (A8) K_α(p*(i)) = k_α + 1. On the other hand, by (A9), K_α(p*(j)) ≥ k_α + 1. By Corollary A1, K_α(p*(j)) = k_α + 1, and hence K_α(p*(j)) ≥ K_α(p*(i)) holds.
If τ*_{k_α+1}(i) < α, then, by Corollary A1, K_α(p*(i)) = k_α. Additionally, by Corollary A1, K_α(p*(j)) ≥ k_α, and hence K_α(p*(j)) ≥ K_α(p*(i)) holds.
In scenario (d3), let it be noted first that the values at the (k_α + 1)st positions in p*(i) and in p*(j) are both p_{k_α}.
Consider the two tail sums of p*(i) and p*(j). First, for i,
$\tau^*_{k_\alpha+1}(i) = \sum_{k \ge k_\alpha+1} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha} p_k - p_i, \qquad \tau^*_{k_\alpha+2}(i) = \sum_{k \ge k_\alpha+2} p^{*(i)}_{(k)} = \sum_{k \ge k_\alpha+1} p_k - p_i < \alpha - p_i < \alpha;$  (A10)
and next, for j,
$\tau^*_{k_\alpha+1}(j) = \sum_{k \ge k_\alpha+1} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha} p_k - p_j, \qquad \tau^*_{k_\alpha+2}(j) = \sum_{k \ge k_\alpha+2} p^{*(j)}_{(k)} = \sum_{k \ge k_\alpha+1} p_k - p_j < \alpha - p_j < \alpha.$  (A11)
If τ*_{k_α+1}(i) ≥ α, then by (A10) K_α(p*(i)) = k_α + 1. On the other hand, since p_i > p_j, τ*_{k_α+1}(i) ≥ α implies τ*_{k_α+1}(j) ≥ α, and it follows by (A11) that K_α(p*(j)) = k_α + 1. Therefore K_α(p*(j)) ≥ K_α(p*(i)) holds.
If τ*_{k_α+1}(i) < α, then by Corollary A1, K_α(p*(i)) = k_α. However, τ*_{k_α+1}(j) may take a value either less than α or greater than or equal to α. In the first case, it is implied that K_α(p*(j)) = k_α, that is, K_α(p*(j)) ≥ K_α(p*(i)) holds. In the second case, it is implied that K_α(p*(j)) = k_α + 1, which in turn implies that K_α(p*(j)) ≥ K_α(p*(i)) holds.
At this point, the claim of the lemma is established for all scenarios and sub-scenarios. □
Proposition A1.
Let p = {p_k; k ≥ 1} be a probability distribution on a countable alphabet and let {p_(k); k ≥ 1} be its non-increasing arrangement. Suppose, for two particular indices i and j such that i < j, a mass of ε > 0 is transferred from p_(i) to p_(j), subject to 0 < ε < p_(i) − p_(j). Let the sequence after the transfer be denoted as p*, with its non-increasing re-arrangement considered accordingly. Then for any α ∈ (0, 1), K_α(p) ≤ K_α(p*).
Proof. 
Without loss of generality, let it be assumed that p = {p_k; k ≥ 1} is non-increasingly arranged. Since K_α(p) is symmetric with respect to i and j, it suffices to show that K_α(p) ≤ K_α(p*) for any transfer of a mass ε ∈ (0, (p_i − p_j)/2]. Toward that end, consider the following sequence of non-negative values,
$\mathbf{p}_\varepsilon = \{p_1, \dots, p_{i-1}, p_i - \varepsilon, p_{i+1}, \dots, p_{j-1}, p_j, p_{j+1}, \dots\}.$
It is to be noted, first, that adding ε to p_i − ε in p_ε gives p and, second, that adding ε to p_j in p_ε gives
$\mathbf{p}^* = \{p_1, \dots, p_{i-1}, p_i - \varepsilon, p_{i+1}, \dots, p_{j-1}, p_j + \varepsilon, p_{j+1}, \dots\}.$
Since p_i − ε ≥ p_j, by Lemma A2, K_α(p) ≤ K_α(p*). □
Before stating and proving Proposition A2 below, a simple and trivial fact is summarized in the following lemma for easy reference.
Lemma A3
(Stairway Carpeting). Let q = {q_k; k ≥ 1} be a sequence of non-increasingly ordered non-negative values such that (a) q_1 > 0 and (b) ∑_{k≥1} q_k = C > 0. Let ε > 0 be any positive value. Then there exists a sequence of non-negative values ε = {ε_k; k ≥ 1} satisfying ∑_{k≥1} ε_k = ε such that, letting q* = {q*_k; k ≥ 1} where q*_k = q_k + ε_k,
  • q*_k ≥ q*_{k+1} for each and every k ≥ 1, and
  • ∑_{k≥1} q*_k = C + ε.
Proof. 
By (a) and (b), there exists an index value k₀ such that q_{k₀} > q_{k₀+1}, and hence q_{k₀} − q_{k₀+1} > 0. Let M = M(q, ε) be an integer such that ε/M ≤ q_{k₀} − q_{k₀+1}. Let ε_k = 0 for k = 1, …, k₀, ε_k = ε/M for k = k₀ + 1, …, k₀ + M, and ε_k = 0 for k ≥ k₀ + M + 1. It can be easily verified that the claim of the lemma holds. □
Lemma A3 has the following two important implications that are relevant in the proof of Proposition A2.
  • For any ordered non-negative sequence, there exists a way to distribute any additional non-negative mass on top of the sequence while preserving the non-increasing order of the sequence. Any such way will be referred to as a way of Stairway Carpeting.
  • A transfer of a mass ε to q by a way of Stairway Carpeting may be viewed as a sequence of M steps, in each of which a portion ε/M of ε is transferred.
Proposition A2.
The generalized species richness, $K_\alpha$, is $100\times\alpha\%$-robust.
Proof. 
Consider any given probability distribution $\mathbf{p}=\{p_k;k\ge1\}$, which can without loss of generality be assumed to be non-increasingly arranged, a given $\alpha\in(0,1)$ and a given $\varepsilon\in(0,\alpha)$. An $\varepsilon$-mass re-distribution of $\mathbf{p}$ is a combination of two steps: (a) an arbitrary reduction of $\varepsilon$ mass from $\mathbf{p}$ and (b) an arbitrary add-back of the same $\varepsilon$ mass. Let the reduction, the add-back and their difference be represented by
$$\boldsymbol{\varepsilon}_1=\{\varepsilon_{1,k};k\ge1\},\qquad \boldsymbol{\varepsilon}_2=\{\varepsilon_{2,k};k\ge1\}\qquad\text{and}\qquad \boldsymbol{\delta}=\{\delta_k;k\ge1\},$$
where $\varepsilon_{i,k}\ge0$ and $\sum_{k\ge1}\varepsilon_{i,k}=\varepsilon$ for $i=1,2$, $\delta_k=\varepsilon_{2,k}-\varepsilon_{1,k}$ for each $k$, and $\sum_{k\ge1}\delta_k=0$. Let the distribution after the re-distribution be denoted as
$$\mathbf{p}^*=\{p_k^*;k\ge1\}=\{p_k+\delta_k;k\ge1\}.$$
For any $\varepsilon\in(0,\alpha)$, it is desired to show that $K_\alpha(\mathbf{p}^*)$ is bounded above by a constant depending only on $\mathbf{p}$, $\alpha$ and $\varepsilon$ (but not on $\boldsymbol{\varepsilon}_1$ and $\boldsymbol{\varepsilon}_2$). Toward that end, let it first be noted that $k_{\alpha-\varepsilon}=K_{\alpha-\varepsilon}(\mathbf{p})$ is a constant integer depending only on $\mathbf{p}$, $\alpha$ and $\varepsilon$.
Several modifications are to be made to $\mathbf{p}^*$. First, let all $\delta_k<0$ be set to zero, that is, let $\delta_k^*=\max\{\delta_k,0\}$, and write the modified $\mathbf{p}^*$ as
$$\mathbf{p}_1^*=\{p_k+\delta_k^*;k\ge1\}=\{(p_1+\delta_1^*),\ldots,(p_{k_{\alpha-\varepsilon}}+\delta_{k_{\alpha-\varepsilon}}^*),(p_{k_{\alpha-\varepsilon}+1}+\delta_{k_{\alpha-\varepsilon}+1}^*),\ldots\}.\tag{A13}$$
By Corollary A1, it follows that
$$K_\alpha(\mathbf{p}^*)\le K_\alpha(\mathbf{p}_1^*).\tag{A14}$$
Let it be observed that there are only finitely many terms in $\mathbf{p}_1^*$ of (A13) that are greater than or equal to $p_{k_{\alpha-\varepsilon}}$. Each of these terms corresponds to an index $k$. Let the maximum of these indices be denoted as $k_0$ (so that, for each $k\ge k_0+1$, $p_k+\delta_k^*<p_{k_{\alpha-\varepsilon}}$, and that $k_0\ge k_{\alpha-\varepsilon}$).
Second, let $\mathbf{p}_1^*$ be further modified in such a way that the first $k_0$ terms are preserved but the remaining terms, from $k=k_0+1$ on, are re-arranged into non-increasing order. Denote the resulting sequence by
$$\mathbf{p}_2^*=\{(p_1+\delta_1^*),\ldots,(p_{k_0}+\delta_{k_0}^*),\,p_{k_0+1}^{**},\ldots\}.\tag{A15}$$
Since $K_\alpha$ is permutation invariant, it follows that
$$K_\alpha(\mathbf{p}^*)\le K_\alpha(\mathbf{p}_1^*)=K_\alpha(\mathbf{p}_2^*).\tag{A16}$$
Next, for each $k=1,\ldots,k_0$, collect $\delta_k^*$ from $p_k+\delta_k^*$ and re-distribute the mass $\varepsilon=\delta_k^*$ to the tail sequence of $\mathbf{p}_2^*$, $\mathbf{q}=\{p_{k_0+1}^{**},p_{k_0+2}^{**},\ldots\}$, by a way of Stairway Carpeting as described in Lemma A3. The resulting sequence has the following form
$$\mathbf{p}_3^*=\{p_k^{***};k\ge1\}=\{p_1,\ldots,p_{k_{\alpha-\varepsilon}},\,p_{k_{\alpha-\varepsilon}+1},\ldots,p_{k_0},\,p_{k_0+1}^{***},\ldots\}.\tag{A17}$$
By construction, $\mathbf{p}_3^*$ satisfies the following three properties:
  • the sub-sequence $\{p_1,\ldots,p_{k_0}\}$ is non-increasingly ordered;
  • each term in the tail sequence $\{p_{k_0+1}^{***},\ldots\}$ is less than $p_{k_{\alpha-\varepsilon}}$; and
  • the sum of all terms in $\mathbf{p}_3^*$ from $k=k_{\alpha-\varepsilon}+1$ on satisfies (noting that $\sum_{k\ge1}\delta_k^*\le\sum_{k\ge1}\varepsilon_{2,k}=\varepsilon$)
    $$\sum_{k\ge k_{\alpha-\varepsilon}+1}p_k^{***}=\sum_{k\ge k_{\alpha-\varepsilon}+1}p_k+\sum_{k\ge1}\delta_k^*<(\alpha-\varepsilon)+\sum_{k\ge1}\delta_k^*\le(\alpha-\varepsilon)+\varepsilon=\alpha.\tag{A18}$$
By (A18) and the definition of $K_\alpha$,
$$K_\alpha(\mathbf{p}_3^*)\le K_{\alpha-\varepsilon}(\mathbf{p})=k_{\alpha-\varepsilon}.\tag{A19}$$
Finally, let it be noted that the modification of (A15) into (A17) is a finite sequence of steps, each of which transfers a probability mass from a higher term to a lower term (the second implication of Lemma A3 mentioned above). Applying Corollary A1 finitely many times as needed, it follows that
$$K_\alpha(\mathbf{p}_2^*)\le K_\alpha(\mathbf{p}_3^*).\tag{A20}$$
Combining (A16), (A19) and (A20) gives $K_\alpha(\mathbf{p}^*)\le k_{\alpha-\varepsilon}<\infty$. □
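As a numerical illustration of Proposition A2 (not a substitute for the proof), the following sketch subjects a fixed distribution to many random $\varepsilon$-mass re-distributions and confirms that $K_\alpha$ of the perturbed distribution never exceeds $k_{\alpha-\varepsilon}$. The same assumed boundary convention for $K_\alpha$ as in the earlier sketch is used, and the Dirichlet-based perturbation scheme is only one convenient way to generate arbitrary removals and add-backs.

```python
import numpy as np

def K_alpha(p, alpha):
    q = np.sort(np.asarray(p, dtype=float))[::-1]    # non-increasing rearrangement
    tails = np.cumsum(q[::-1])[::-1]                  # tails[k] = sum_{j >= k} q[j]
    return int(np.sum(tails > alpha))

rng = np.random.default_rng(0)
p = np.array([0.35, 0.25, 0.20, 0.10, 0.06, 0.04])
alpha, eps = 0.10, 0.04                               # eps in (0, alpha)
bound = K_alpha(p, alpha - eps)                       # k_{alpha - eps}

for _ in range(1000):
    take = np.minimum(rng.dirichlet(np.ones(len(p))) * eps, p)   # remove at most eps mass
    give = rng.dirichlet(np.ones(len(p))) * take.sum()           # add the same mass back
    p_star = p - take + give
    assert K_alpha(p_star, alpha) <= bound
print("k_{alpha - eps} =", bound)
```

With these choices, and under the assumed convention, $k_{\alpha-\varepsilon}=K_{0.06}(\mathbf{p})=5$, and every perturbed value of $K_{0.10}$ in the run stays at or below that bound.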

References

1. Simpson, E.H. Measurement of diversity. Nature 1949, 163, 688.
2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
3. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June 1960; pp. 547–561.
4. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
5. Hurlbert, S.H. The nonconcept of species diversity: A critique and alternative parameters. Ecology 1971, 52, 577–586.
6. Poole, R.W. An Introduction to Quantitative Ecology; McGraw-Hill: New York, NY, USA, 1974.
7. Jost, L. Entropy and diversity. Oikos 2006, 113, 363–375.
8. Hill, M.O. Diversity and evenness: A unifying notation and its consequences. Ecology 1973, 54, 427–432.
9. Rao, C.R. Diversity and dissimilarity coefficients: A unified approach. Theor. Popul. Biol. 1982, 21, 24–43.
10. Patil, G.P. Diversity Profiles. In Wiley StatsRef: Statistics Reference Online; John Wiley & Sons: Hoboken, NJ, USA, 2014.
11. Zhang, Z.; Grabchak, M. Entropic representation and estimation of diversity indices. J. Nonparametr. Stat. 2016, 28, 563–575.
12. Bunge, J.; Fitzpatrick, M. Estimating the number of species: A review. J. Am. Stat. Assoc. 1993, 88, 364–373.
13. Bunge, J.; Willis, A.; Walsh, F. Estimating the number of species in microbial diversity studies. Annu. Rev. Stat. Appl. 2014, 1, 427–445.
14. Zhang, Z. Statistical Implications of Turing's Formula; John Wiley & Sons: Hoboken, NJ, USA, 2017.
15. Chao, A. Nonparametric estimation of the number of classes in a population. Scand. J. Stat. 1984, 11, 265–270.
16. Chao, A. Estimating the population size for capture-recapture data with unequal catchability. Biometrics 1987, 43, 783–791.
17. Chao, A.; Lee, S. Estimating the number of classes via sample coverage. J. Am. Stat. Assoc. 1992, 87, 210–217.
18. Zhang, Z.; Chen, C.; Zhang, J. Estimation of population size in entropic perspective. Commun. Stat. Theory Methods 2020, 49, 307–324.
19. Valiant, G.; Valiant, P. Estimating the unseen: An n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, San Jose, CA, USA, 6–8 June 2011; pp. 685–694.
20. Wu, Y.; Yang, P. Chebyshev polynomials, moment matching, and optimal estimation of the unseen. Ann. Stat. 2019, 47, 857–883.
21. Zhang, Z.; Zhou, J. Re-parameterization of multinomial distribution and diversity indices. J. Stat. Plan. Inference 2010, 140, 1731–1738.
22. Gourlet-Fleury, S.; Guehl, J.M.; Laroussinie, O. Ecology and Management of a Neotropical Rainforest: Lessons Drawn from Paracou, a Long-Term Experimental Research Site in French Guiana; Elsevier: Paris, France, 2004.
23. Grabchak, M.; Marcon, E.; Lang, G.; Zhang, Z. The generalized Simpson's entropy is a measure of biodiversity. PLoS ONE 2017, 12, e0173305.
Figure 1. Graphic definition of $K_\alpha = k$ given $\alpha$.
Figure 2. Estimated $K_\alpha$ for Plots #6 and #18.
Figure 3. Estimated $D_\alpha$ with 95% Confidence Band.
Table 1. Simulation Results under Uniform Distribution, K = 20.

| n | T | $K_\alpha$ ($\alpha=0.01$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE | $K_\alpha$ ($\alpha=0.05$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.63 | 20 | 12.00 | 145.21 | 9.72 | 97.20 | 19 | 11.00 | 122.20 | 8.72 | 78.76 |
| 20 | 0.38 | 20 | 7.17 | 53.32 | 3.96 | 20.17 | 19 | 7.17 | 53.32 | 3.97 | 20.21 |
| 30 | 0.22 | 20 | 4.33 | 20.76 | 1.16 | 5.70 | 19 | 4.33 | 20.76 | 1.17 | 5.71 |
| 40 | 0.14 | 20 | 2.57 | 8.18 | −0.26 | 3.25 | 19 | 3.56 | 14.23 | 0.76 | 3.76 |
| 50 | 0.08 | 20 | 1.53 | 3.46 | −0.81 | 2.85 | 19 | 2.48 | 7.25 | 0.17 | 2.13 |
| 60 | 0.05 | 20 | 0.94 | 1.66 | −0.92 | 2.32 | 19 | 2.52 | 7.08 | 0.55 | 1.74 |
| 70 | 0.03 | 20 | 0.55 | 0.77 | −0.87 | 1.73 | 19 | 1.79 | 3.87 | 0.06 | 1.40 |
| 80 | 0.02 | 20 | 0.33 | 0.41 | −0.73 | 1.07 | 19 | 1.75 | 3.59 | 0.19 | 1.24 |
| 90 | 0.01 | 20 | 0.19 | 0.22 | −0.62 | 0.83 | 19 | 1.33 | 2.18 | −0.01 | 0.90 |
| 100 | 0.01 | 20 | 0.61 | 0.71 | −0.25 | 0.76 | 19 | 1.37 | 2.28 | 0.15 | 0.89 |
| 110 | 0.00 | 20 | 0.41 | 0.45 | −0.35 | 0.91 | 19 | 1.04 | 1.41 | −0.09 | 0.82 |
| 120 | 0.00 | 20 | 0.28 | 0.30 | −0.49 | 0.95 | 19 | 1.12 | 1.52 | 0.08 | 0.64 |
Table 2. Simulation Results under Triangular Distribution, K = 20.

| n | T | $K_\alpha$ ($\alpha=0.01$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE | $K_\alpha$ ($\alpha=0.05$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.55 | 19 | 11.45 | 132.23 | 9.39 | 91.09 | 16 | 8.44 | 72.55 | 6.39 | 43.72 |
| 20 | 0.31 | 19 | 7.34 | 55.85 | 4.66 | 26.22 | 16 | 5.34 | 30.50 | 2.67 | 11.64 |
| 30 | 0.18 | 19 | 5.01 | 27.24 | 2.42 | 10.23 | 16 | 3.01 | 11.18 | 0.43 | 4.57 |
| 40 | 0.12 | 19 | 3.55 | 14.51 | 1.22 | 5.24 | 16 | 2.52 | 8.19 | 0.26 | 3.80 |
| 50 | 0.08 | 19 | 2.61 | 8.51 | 0.57 | 3.52 | 16 | 1.55 | 3.97 | −0.43 | 3.24 |
| 60 | 0.06 | 19 | 1.95 | 5.35 | 0.15 | 2.94 | 16 | 1.64 | 3.92 | −0.01 | 2.35 |
| 70 | 0.04 | 19 | 1.48 | 3.52 | −0.08 | 2.47 | 16 | 1.04 | 2.19 | −0.40 | 2.26 |
| 80 | 0.03 | 19 | 1.14 | 2.49 | −0.23 | 2.20 | 16 | 1.13 | 2.27 | −0.13 | 1.85 |
| 90 | 0.03 | 19 | 0.84 | 1.76 | −0.38 | 2.00 | 16 | 0.69 | 1.37 | −0.45 | 1.82 |
| 100 | 0.02 | 19 | 1.55 | 3.33 | 0.50 | 1.89 | 16 | 0.80 | 1.48 | −0.20 | 1.53 |
| 110 | 0.02 | 19 | 1.33 | 2.57 | 0.37 | 1.57 | 16 | 0.48 | 0.96 | −0.45 | 1.52 |
| 120 | 0.01 | 19 | 1.15 | 2.04 | 0.29 | 1.33 | 16 | 0.56 | 1.01 | −0.30 | 1.34 |
Table 3. Simulation Results under Poisson Distribution, λ = 10.

| n | T | $K_\alpha$ ($\alpha=0.01$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE | $K_\alpha$ ($\alpha=0.05$) | Plug-in Bias | Plug-in MSE | Adjusted Bias | Adjusted MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.46 | 16 | 9.10 | 84.00 | 7.38 | 57.25 | 13 | 6.10 | 38.42 | 4.38 | 22.00 |
| 20 | 0.22 | 16 | 6.01 | 37.97 | 4.05 | 20.26 | 13 | 4.01 | 17.86 | 2.08 | 8.20 |
| 30 | 0.13 | 16 | 4.39 | 21.17 | 2.59 | 10.48 | 13 | 3.08 | 7.52 | 0.61 | 4.17 |
| 40 | 0.09 | 16 | 3.36 | 13.16 | 1.76 | 6.65 | 13 | 2.25 | 6.67 | 0.79 | 3.89 |
| 50 | 0.06 | 16 | 2.64 | 8.70 | 1.16 | 4.62 | 13 | 1.50 | 3.75 | 0.19 | 2.98 |
| 60 | 0.05 | 16 | 2.08 | 6.06 | 0.69 | 3.68 | 13 | 1.65 | 3.94 | 0.57 | 2.65 |
| 70 | 0.04 | 16 | 1.62 | 4.32 | 0.29 | 3.21 | 13 | 1.13 | 2.52 | 0.15 | 2.35 |
| 80 | 0.03 | 16 | 1.26 | 3.21 | 0.01 | 2.88 | 13 | 1.31 | 2.74 | 0.49 | 2.16 |
| 90 | 0.03 | 16 | 0.94 | 2.54 | −0.28 | 3.08 | 13 | 0.94 | 1.86 | 0.16 | 1.89 |
| 100 | 0.03 | 16 | 1.59 | 4.06 | 0.49 | 3.12 | 13 | 1.07 | 2.02 | 0.39 | 1.83 |
| 110 | 0.02 | 16 | 1.37 | 3.39 | 0.32 | 2.92 | 13 | 0.82 | 1.54 | 0.19 | 1.69 |
| 120 | 0.02 | 16 | 1.17 | 2.81 | 0.15 | 2.77 | 13 | 0.95 | 1.68 | 0.38 | 1.65 |
Table 4. Simulated Coverage of 95% Confidence Sets under Uniform Distribution, K = 20.

| n | T | Of (13), $\alpha=0.01$ | Of (15), $\alpha=0.01$ | Of (16), $\alpha=0.01$ | Of (13), $\alpha=0.05$ | Of (15), $\alpha=0.05$ | Of (16), $\alpha=0.05$ | Of (13), $\alpha=0.10$ | Of (15), $\alpha=0.10$ | Of (16), $\alpha=0.10$ | Of (13), $\alpha=0.15$ | Of (15), $\alpha=0.15$ | Of (16), $\alpha=0.15$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.63 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 20 | 0.38 | 0.3226 | 0.3226 | 0.5284 | 0.3224 | 0.3224 | 0.5060 | 0.3222 | 0.3222 | 0.4118 | 0.6332 | 0.6332 | 0.6774 |
| 30 | 0.22 | 0.7898 | 0.8056 | 0.9086 | 0.7874 | 0.8032 | 0.9010 | 0.5636 | 0.5642 | 0.7704 | 0.8084 | 0.8090 | 0.9128 |
| 40 | 0.14 | 0.9124 | 0.9422 | 0.9794 | 0.8522 | 0.8584 | 0.9420 | 0.6576 | 0.6576 | 0.8402 | 0.9534 | 0.9532 | 0.9732 |
| 50 | 0.08 | 0.8884 | 0.9862 | 0.9964 | 0.9588 | 0.9596 | 0.9964 | 0.7868 | 0.7922 | 0.9382 | 0.9622 | 0.9676 | 0.9958 |
| 60 | 0.05 | 0.9792 | 0.9934 | 0.9988 | 0.9302 | 0.9318 | 0.9858 | 0.8678 | 0.8682 | 0.9488 | 0.9910 | 0.9918 | 0.9998 |
| 70 | 0.03 | 0.9970 | 0.9970 | 0.9996 | 0.9512 | 0.9824 | 0.9986 | 0.8586 | 0.8586 | 0.9820 | 0.9984 | 0.9984 | 0.9986 |
| 80 | 0.02 | 0.9990 | 0.9990 | 1.0000 | 0.9770 | 0.9834 | 0.9954 | 0.9228 | 0.9228 | 0.9882 | 0.9992 | 0.9994 | 1.0000 |
| 90 | 0.01 | 0.9992 | 0.9992 | 1.0000 | 0.9360 | 0.9816 | 0.9984 | 0.9552 | 0.9554 | 0.9848 | 0.9988 | 0.9990 | 1.0000 |
| 100 | 0.01 | 0.9940 | 0.9982 | 1.0000 | 0.9284 | 0.9762 | 0.9992 | 0.9322 | 0.9328 | 0.9886 | 0.9990 | 0.9990 | 0.9998 |
| 110 | 0.00 | 0.9964 | 0.9966 | 1.0000 | 0.8912 | 0.9952 | 0.9998 | 0.9316 | 0.9384 | 0.9970 | 0.9998 | 0.9998 | 1.0000 |
| 120 | 0.00 | 0.9958 | 0.9958 | 1.0000 | 0.9148 | 0.9942 | 1.0000 | 0.9282 | 0.9322 | 0.9990 | 0.9994 | 1.0000 | 1.0000 |
| 130 | 0.00 | 0.9984 | 0.9984 | 0.9998 | 0.8952 | 0.9966 | 1.0000 | 0.9520 | 0.9596 | 0.9992 | 0.9984 | 1.0000 | 1.0000 |
| 140 | 0.00 | 0.9990 | 0.9990 | 1.0000 | 0.8918 | 0.9916 | 0.9994 | 0.9662 | 0.9782 | 0.9998 | 0.9992 | 1.0000 | 1.0000 |
| 150 | 0.00 | 0.9998 | 0.9998 | 1.0000 | 0.9594 | 0.9918 | 1.0000 | 0.9764 | 0.9874 | 0.9990 | 0.9968 | 1.0000 | 1.0000 |
| 200 | 0.00 | 0.9966 | 0.9966 | 1.0000 | 0.9886 | 0.9960 | 1.0000 | 0.8846 | 0.9414 | 1.0000 | 0.9916 | 1.0000 | 1.0000 |
| 250 | 0.00 | 0.9974 | 0.9974 | 1.0000 | 0.9996 | 0.9996 | 1.0000 | 0.8608 | 0.9750 | 0.9998 | 0.9642 | 1.0000 | 1.0000 |
| 300 | 0.00 | 0.9964 | 0.9964 | 1.0000 | 0.9992 | 0.9992 | 1.0000 | 0.9120 | 0.9948 | 1.0000 | 0.9374 | 1.0000 | 1.0000 |
| 350 | 0.00 | 0.9996 | 0.9996 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9770 | 0.9982 | 1.0000 | 0.8744 | 1.0000 | 1.0000 |
| 400 | 0.00 | 0.9998 | 0.9998 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9962 | 1.0000 | 1.0000 | 0.8696 | 1.0000 | 1.0000 |
| 450 | 0.00 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9904 | 0.9916 | 1.0000 | 0.9216 | 1.0000 | 1.0000 |
| 500 | 0.00 | 1.0000 | 1.0000 | 1.0000 | 0.9938 | 0.9938 | 1.0000 | 0.9574 | 0.9574 | 1.0000 | 0.9466 | 1.0000 | 1.0000 |
| 1000 | 0.00 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9988 | 0.9988 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Table 5. Simulated Coverage of 95% Confidence Sets under Triangular Distribution, K = 20.

| n | T | Of (13), $\alpha=0.01$ | Of (15), $\alpha=0.01$ | Of (16), $\alpha=0.01$ | Of (13), $\alpha=0.05$ | Of (15), $\alpha=0.05$ | Of (16), $\alpha=0.05$ | Of (13), $\alpha=0.10$ | Of (15), $\alpha=0.10$ | Of (16), $\alpha=0.10$ | Of (13), $\alpha=0.15$ | Of (15), $\alpha=0.15$ | Of (16), $\alpha=0.15$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.55 | 0.0000 | 0.0000 | 0.0000 | 0.0024 | 0.0024 | 0.0254 | 0.0262 | 0.0262 | 0.1856 | 0.1914 | 0.1914 | 0.1938 |
| 20 | 0.31 | 0.2 | 0.2370 | 0.3026 | 0.5408 | 0.5444 | 0.6492 | 0.5466 | 0.5648 | 0.7866 | 0.5262 | 0.5444 | 0.7420 |
| 30 | 0.18 | 0.6074 | 0.6120 | 0.7400 | 0.8138 | 0.8378 | 0.9240 | 0.7802 | 0.8034 | 0.8692 | 0.7190 | 0.7426 | 0.8940 |
| 40 | 0.12 | 0.7718 | 0.7798 | 0.9052 | 0.8192 | 0.8700 | 0.9344 | 0.8016 | 0.8552 | 0.9602 | 0.7966 | 0.8156 | 0.9548 |
| 50 | 0.08 | 0.8548 | 0.8828 | 0.9440 | 0.8546 | 0.9188 | 0.9516 | 0.8864 | 0.9304 | 0.9650 | 0.8824 | 0.9184 | 0.9584 |
| 60 | 0.06 | 0.8848 | 0.9078 | 0.9720 | 0.9086 | 0.9426 | 0.9742 | 0.9052 | 0.9386 | 0.9740 | 0.8716 | 0.8928 | 0.9686 |
| 70 | 0.04 | 0.9144 | 0.9436 | 0.9752 | 0.8948 | 0.9494 | 0.9790 | 0.8900 | 0.9352 | 0.9812 | 0.8994 | 0.9152 | 0.9840 |
| 80 | 0.03 | 0.9004 | 0.9528 | 0.9876 | 0.9080 | 0.9416 | 0.9866 | 0.9058 | 0.9548 | 0.9872 | 0.9000 | 0.9158 | 0.9900 |
| 90 | 0.03 | 0.8830 | 0.9634 | 0.9922 | 0.8908 | 0.9682 | 0.9884 | 0.9150 | 0.9742 | 0.9922 | 0.9378 | 0.9636 | 0.9920 |
| 100 | 0.02 | 0.8916 | 0.8932 | 0.9768 | 0.9102 | 0.9660 | 0.9930 | 0.9082 | 0.9784 | 0.9916 | 0.9408 | 0.9648 | 0.9920 |
| 110 | 0.02 | 0.9174 | 0.9200 | 0.9862 | 0.8852 | 0.9834 | 0.9918 | 0.8806 | 0.9824 | 0.9932 | 0.9446 | 0.9788 | 0.9950 |
| 120 | 0.01 | 0.9384 | 0.9434 | 0.9868 | 0.9056 | 0.9796 | 0.9954 | 0.8818 | 0.9820 | 0.9938 | 0.9390 | 0.9662 | 0.9916 |
| 130 | 0.01 | 0.9422 | 0.9492 | 0.9874 | 0.8602 | 0.9798 | 0.9922 | 0.8670 | 0.9788 | 0.9952 | 0.9144 | 0.9580 | 0.9956 |
| 140 | 0.01 | 0.9334 | 0.9472 | 0.9894 | 0.8854 | 0.9722 | 0.9946 | 0.8418 | 0.9702 | 0.9930 | 0.9062 | 0.9426 | 0.9958 |
| 150 | 0.01 | 0.9264 | 0.9506 | 0.9918 | 0.8362 | 0.9760 | 0.9946 | 0.8410 | 0.9752 | 0.9956 | 0.8966 | 0.9482 | 0.9966 |
| 200 | 0.00 | 0.9026 | 0.9088 | 0.9928 | 0.8620 | 0.9808 | 0.9988 | 0.8320 | 0.9858 | 0.9964 | 0.9152 | 0.9610 | 0.9996 |
| 250 | 0.00 | 0.9348 | 0.9620 | 0.9976 | 0.8162 | 0.9942 | 0.9984 | 0.8098 | 0.9954 | 0.9974 | 0.9318 | 0.9868 | 1.0000 |
| 300 | 0.00 | 0.9504 | 0.9570 | 0.9962 | 0.8314 | 0.9946 | 0.9986 | 0.7910 | 0.9966 | 0.9978 | 0.9404 | 0.9928 | 0.9992 |
| 350 | 0.00 | 0.9502 | 0.9778 | 0.9972 | 0.7822 | 0.9932 | 0.9974 | 0.7664 | 0.9938 | 0.9968 | 0.9242 | 0.9938 | 0.9994 |
| 400 | 0.00 | 0.9558 | 0.9626 | 0.9934 | 0.8012 | 0.9888 | 0.9990 | 0.7406 | 0.9914 | 0.9994 | 0.9394 | 0.9830 | 0.9986 |
| 450 | 0.00 | 0.9334 | 0.9544 | 0.9972 | 0.7588 | 0.9858 | 0.9992 | 0.7286 | 0.9904 | 0.9984 | 0.8994 | 0.9588 | 0.9998 |
| 500 | 0.00 | 0.9166 | 0.9256 | 0.9976 | 0.7716 | 0.9848 | 0.9998 | 0.7060 | 0.9878 | 0.9992 | 0.8914 | 0.9376 | 1.0000 |
| 1000 | 0.00 | 0.8572 | 0.8592 | 0.9998 | 0.7432 | 0.9968 | 1.0000 | 0.6166 | 0.9978 | 1.0000 | 0.9290 | 0.9534 | 1.0000 |
Table 6. Simulated Coverage of 95% Confidence Sets under Poisson Distribution, λ = 10.

| n | T | Of (13), $\alpha=0.01$ | Of (15), $\alpha=0.01$ | Of (16), $\alpha=0.01$ | Of (13), $\alpha=0.05$ | Of (15), $\alpha=0.05$ | Of (16), $\alpha=0.05$ | Of (13), $\alpha=0.10$ | Of (15), $\alpha=0.10$ | Of (16), $\alpha=0.10$ | Of (13), $\alpha=0.15$ | Of (15), $\alpha=0.15$ | Of (16), $\alpha=0.15$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.46 | 0.0004 | 0.0004 | 0.0054 | 0.0736 | 0.0736 | 0.2998 | 0.3010 | 0.3010 | 0.3038 | 0.3224 | 0.3276 | 0.6372 |
| 20 | 0.22 | 0.2712 | 0.2714 | 0.3682 | 0.8644 | 0.8658 | 0.9306 | 0.6286 | 0.6542 | 0.8560 | 0.8608 | 0.8864 | 0.9588 |
| 30 | 0.13 | 0.5186 | 0.5214 | 0.6840 | 0.9002 | 0.9398 | 0.9716 | 0.7550 | 0.7904 | 0.8706 | 0.8994 | 0.9276 | 0.9666 |
| 40 | 0.09 | 0.6298 | 0.6400 | 0.8094 | 0.9632 | 0.9822 | 0.9872 | 0.7996 | 0.8166 | 0.9588 | 0.9648 | 0.9742 | 0.9954 |
| 50 | 0.06 | 0.7428 | 0.7534 | 0.8836 | 0.9446 | 0.9800 | 0.9816 | 0.8510 | 0.8774 | 0.9780 | 0.9640 | 0.9864 | 0.9966 |
| 60 | 0.05 | 0.8054 | 0.8242 | 0.9076 | 0.9718 | 0.9938 | 0.9944 | 0.8968 | 0.9206 | 0.9698 | 0.9858 | 0.9960 | 0.9984 |
| 70 | 0.04 | 0.8034 | 0.8500 | 0.9360 | 0.9320 | 0.9898 | 0.9902 | 0.8818 | 0.9206 | 0.9682 | 0.9742 | 0.9980 | 0.9992 |
| 80 | 0.03 | 0.8126 | 0.8768 | 0.9536 | 0.9720 | 0.9966 | 0.9966 | 0.8776 | 0.9082 | 0.9794 | 0.9874 | 0.9992 | 0.9996 |
| 90 | 0.03 | 0.7902 | 0.8840 | 0.9468 | 0.9486 | 0.9922 | 0.9922 | 0.8602 | 0.8904 | 0.9838 | 0.9772 | 0.9976 | 0.9994 |
| 100 | 0.03 | 0.7660 | 0.8052 | 0.9342 | 0.9732 | 0.9954 | 0.9954 | 0.8556 | 0.8830 | 0.9920 | 0.9868 | 0.9990 | 0.9998 |
| 110 | 0.02 | 0.7828 | 0.8342 | 0.9404 | 0.9588 | 0.9956 | 0.9956 | 0.8714 | 0.9040 | 0.9942 | 0.9826 | 0.9984 | 0.9998 |
| 120 | 0.02 | 0.7910 | 0.8592 | 0.9458 | 0.9736 | 0.9980 | 0.9980 | 0.8762 | 0.9022 | 0.9962 | 0.9892 | 0.9988 | 0.9998 |
| 130 | 0.02 | 0.7966 | 0.8876 | 0.9514 | 0.9622 | 0.9970 | 0.9970 | 0.8886 | 0.9142 | 0.9964 | 0.9838 | 0.9988 | 1.0000 |
| 140 | 0.02 | 0.7794 | 0.8966 | 0.9462 | 0.9730 | 0.9990 | 0.9990 | 0.9002 | 0.9296 | 0.9970 | 0.9880 | 0.9990 | 0.9998 |
| 150 | 0.02 | 0.7476 | 0.8894 | 0.9316 | 0.9672 | 0.9976 | 0.9976 | 0.9052 | 0.9316 | 0.9970 | 0.9854 | 0.9988 | 1.0000 |
| 200 | 0.01 | 0.8028 | 0.8870 | 0.9550 | 0.9800 | 0.9992 | 0.9992 | 0.9456 | 0.9696 | 0.9948 | 0.9920 | 1.0000 | 1.0000 |
| 250 | 0.01 | 0.7648 | 0.8948 | 0.9460 | 0.9734 | 0.9994 | 0.9994 | 0.9422 | 0.9720 | 0.9930 | 0.9930 | 1.0000 | 1.0000 |
| 300 | 0.01 | 0.8488 | 0.9178 | 0.9756 | 0.9780 | 0.9992 | 0.9992 | 0.9264 | 0.9546 | 0.9938 | 0.9974 | 1.0000 | 1.0000 |
| 350 | 0.01 | 0.8184 | 0.9294 | 0.9602 | 0.9650 | 0.9994 | 0.9994 | 0.8968 | 0.9242 | 0.9976 | 0.9962 | 1.0000 | 1.0000 |
| 400 | 0.01 | 0.8680 | 0.9486 | 0.9822 | 0.9794 | 0.9998 | 0.9998 | 0.8702 | 0.8944 | 0.9988 | 0.9972 | 1.0000 | 1.0000 |
| 450 | 0.00 | 0.8412 | 0.9542 | 0.9708 | 0.9752 | 0.9998 | 0.9998 | 0.8422 | 0.8618 | 0.9978 | 0.9982 | 1.0000 | 1.0000 |
| 500 | 0.00 | 0.8848 | 0.9656 | 0.9808 | 0.9776 | 1.0000 | 1.0000 | 0.8194 | 0.8382 | 0.9996 | 0.9994 | 1.0000 | 1.0000 |
| 1000 | 0.00 | 0.8656 | 0.9742 | 0.9890 | 0.9876 | 1.0000 | 1.0000 | 0.8578 | 0.8612 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
