Article

Asymptotic Properties of Random Restricted Partitions

1
School of Statistics, University of Minnesota, 224 Church Street S. E., Minneapolis, MN 55455, USA
2
Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(2), 417; https://doi.org/10.3390/math11020417
Submission received: 24 November 2022 / Revised: 7 January 2023 / Accepted: 11 January 2023 / Published: 12 January 2023
(This article belongs to the Special Issue Random Combinatorial Structures)

Abstract:
We study two types of probability measures on the set of integer partitions of $n$ with at most $m$ parts. The first one chooses a partition with probability depending on its largest part only. We obtain the limiting distributions of all of the parts together, and of the largest part, as $n$ tends to infinity, for $m$ fixed or for $m$ tending to infinity with $m = o(n^{1/3})$. In particular, if $m$ goes to infinity not too fast, the largest part satisfies a central limit theorem. The second measure is very general and includes the Dirichlet and uniform distributions as special cases. The joint asymptotic distributions of the parts are derived by taking limits of $n$ and $m$ in the same manner as for the first probability measure.
MSC:
11P82; 60C05; 60B10

1. Introduction

A partition $\kappa$ of a positive integer $n$ is a sequence of positive integers $k_1 \ge k_2 \ge \cdots \ge k_m \ge 1$ with $m \ge 1$ whose sum is $n$. We write $\kappa = (k_1, \ldots, k_m) \vdash n$ if $\kappa$ is a partition of $n$. The number $m$ is called the length of $\kappa$ and $k_i$ the $i$-th largest part of $\kappa$. Let $P_n$ denote the set of partitions of $n$ and $P_n(m)$ the set of partitions of $n$ with length at most $m$. Thus, $1 \le m \le n$ and $P_n(n) = P_n$.
The set of all partitions $P = \bigcup_{n \ge 1} P_n$ is called the macrocanonical ensemble, the set of partitions of $n$, $P_n$, the canonical ensemble, and the set of restricted partitions $P_n(m)$ the microcanonical ensemble. Integer partitions have a close relationship with statistical physics ([1,2,3]). To be more precise, a partition $\kappa \in P_n$ can be interpreted as an assembly of particles with total energy $n$. The number of particles is the length of $\kappa$; the number of particles with energy $l$ is equal to $\#\{j : k_j = l\}$. Thus, $P_n(m)$ is the set of configurations $\kappa$ with a given number of particles $m$. It is known that $P_n(m)$ corresponds to the Bose–Einstein assembly (see Section 3 in [3] for a brief discussion). Therefore, the asymptotic distribution of a probability measure on $P_n(m)$ as $n$ tends to infinity is connected to how the total energy of the system is distributed among a given number of particles.
The most natural probability measure on integer partitions is the uniform measure. The uniform measure on $P_n(m)$ for $m = n$ has been well studied (see [4,5,6]). However, for other values of $m$, to the best of our knowledge, the whole picture is not clear yet. In [7], as a by-product of studying the eigenvalues of the Laplace–Beltrami operator defined on symmetric polynomials, the limiting distribution of $(k_1, \ldots, k_m)$ chosen uniformly from $P_n(m)$ is derived for a fixed integer $m$. This is one of the motivations for this paper. As a special case of a more general measure on $P_n(m)$ (the detailed definition is given in Section 1.2 below), we obtain the asymptotic joint distribution of $(k_1, \ldots, k_m) \in P_n(m)$ under the uniform measure, for $m$ fixed as well as for $m \to \infty$ with $m = o(n^{1/3})$. It would be an intriguing question to understand the uniform measure on $P_n(m)$ for all values of $m$. The limiting shape of the Young diagram corresponding to $P_n(m)$ with respect to the uniform measure was studied in [8,9,10,11] for $m = n$ and for $m = c\sqrt{n}$, where $c$ is a positive constant.
Another important class of probability measures on integer partitions is the Plancherel measure, which chooses a partition $\kappa \in P_n$ with probability $(\dim(\kappa))^2/n!$. Here, $\dim(\kappa)$ is the degree of the irreducible representation of the symmetric group $S_n$ indexed by $\kappa$. More generally, the $\alpha$-Jack measure (see the detailed definition in [12], for instance), which subsumes the Plancherel measure as the special case $\alpha = 1$, has also been considered. It is known that both the Plancherel measure (see [13,14,15,16], a survey by [17] and the references therein) and the $\alpha$-Jack measure (see, for instance, [12,18,19]) have a deep connection with random matrix theory.
For a fixed constant $q \in (0,1)$, the $q$-analog of the Plancherel measure, called the $q$-Plancherel measure, on integer partitions has been studied in [20,21,22]. As explained in Section 2.2 of [21], it is related to a probability measure $M_q^{(n)}$ on $P_n$. More precisely, for each partition $\kappa = (k_1, \ldots, k_m) \in P_n$,
$$ M_q^{(n)}(\kappa) = \frac{(1-q)^n \dim(\kappa)\, q^{b(\kappa)}}{\prod_{u \in \kappa} [[h(u)]]}, $$
where $h(u) = k_i - i + k'_j - j + 1$ is the hook length of the box $u$ in position $(i,j)$ of the Young diagram associated with $\kappa$ (with $k'$ the conjugate partition), the notation $[[k]] := 1 - q^k$ for a positive integer $k$, and $b(\kappa) = \sum_{i=1}^m (i-1)k_i$. It can be verified that as $q \to 1$, $M_q^{(n)}(\kappa)$ converges to the Plancherel measure on $P_n$. Hence, $M_q^{(n)}$ can be interpreted as a $q$-deformation of the Plancherel measure. Indeed, it is quite natural and common to consider $q$-versions of existing probability measures; for example, the Macdonald measure on $P$ can be thought of as a $q$-version of the circular $\beta$-ensemble (see [23,24]). This point of view motivates us to consider a probability measure on $P_n(m)$ that chooses $\kappa \in P_n(m)$ with probability proportional to $q^{\sigma(\kappa)}$, where $\sigma(\kappa)$ is a function of $\kappa = (k_1, \ldots, k_m)$. In this paper, we consider $\sigma(\kappa) = k_1$, the largest part of $\kappa$, and study the asymptotic behavior of the parts of $\kappa$ as $n$ tends to infinity. This probability measure on the microcanonical ensemble $P_n(m)$ can also be viewed as an analog of a probability measure $\mu(\cdot)$ on the macrocanonical ensemble $P$, introduced in [8], where $\mu(\lambda) = c\, q^{|\lambda|}$ for any $\lambda \in P$ and $|\lambda|$ is the sum of its parts.
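As a quick sanity check of the displayed formula, one can enumerate all partitions of a small $n$ and verify that the masses $M_q^{(n)}(\kappa)$ sum to 1. The following Python sketch is our own illustration (not code from the paper); it computes hook lengths via the conjugate partition and the dimension via the hook length formula:

```python
import math

def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples of positive parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def q_plancherel_masses(n, q):
    """Return {kappa: M_q^{(n)}(kappa)}, following the displayed formula."""
    masses = {}
    for lam in partitions(n):
        conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
        # hook length of the box in row i, column j (0-indexed)
        hooks = [lam[i] + conj[j] - i - j - 1
                 for i in range(len(lam)) for j in range(lam[i])]
        dim = math.factorial(n) // math.prod(hooks)   # hook length formula
        b = sum(i * lam[i] for i in range(len(lam)))  # b(kappa) = sum (i-1) k_i
        masses[lam] = ((1 - q) ** n * dim * q ** b
                       / math.prod(1 - q ** h for h in hooks))
    return masses
```

For $n = 2$ the two masses are $1/(1+q)$ and $q/(1+q)$, which sum to 1; the same normalization holds for larger $n$ up to floating-point error.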
In this paper, we consider two new probability measures on $P_n(m)$, assuming either $m$ is fixed or $m$ tends to infinity with $n$. We investigate the asymptotic joint distributions of $(k_1, \ldots, k_m)$ as $n$ tends to infinity. The paper is organized as follows. In Section 1.1, we introduce a new probability measure on $P_n(m)$, called the restricted geometric distribution. We state the main results, Theorems 1 and 2, obtained under this probability measure assuming $m$ is fixed or $m$ tends to infinity with $n$ and $m = o(n^{1/3})$, and give an overview of the proof of Theorem 2. In Section 1.2, we introduce the second probability measure on $P_n(m)$ and present new results, Theorems 3 and 4, on the joint asymptotic distributions of the parts, taking limits of $n$ and $m$ in the same manner as for the first probability measure. The proofs of the main results and their corollaries are collected in Sections 2 and 3. To be more specific, we prove Theorem 1 and Corollary 1 in Section 2.1 and Theorem 2 in Section 2.2. The proofs of Theorem 3 and its two corollaries are presented in Section 3.1, and the proof of Theorem 4 in Section 3.2.

1.1. Restricted Geometric Distribution

The first type of random partition on $P_n(m)$ is defined as follows: for $\kappa = (k_1, \ldots, k_m) \in P_n(m)$, consider the probability measure
$$ P(\kappa) = c \cdot q^{k_1}, \qquad (1) $$
where $0 < q < 1$ and $c = c_{n,m}$ is the normalizing constant such that $\sum_{\kappa \in P_n(m)} P(\kappa) = 1$. We call this probability measure the restricted geometric distribution. It favors the partitions $\kappa$ whose largest part $k_1$ is as small as possible. Thus, we are concerned with the fluctuation of $k_1$ around $\lceil n/m \rceil$. The motivation to work on the measure in (1) was stated in the Introduction.
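To make the definition concrete, here is a small Python enumeration (our own illustration, with $n$, $m$, $q$ chosen arbitrarily, not code from the paper) that computes the normalizing constant $c_{n,m}$ by brute force and the induced distribution of the largest part:

```python
def bounded_partitions(n, m, cap=None):
    """Generate partitions of n into at most m parts, largest part <= cap."""
    if cap is None:
        cap = n
    if n == 0:
        yield ()
        return
    if m == 0:
        return
    for k in range(min(n, cap), 0, -1):
        if k * m >= n:                        # feasibility: m parts of size <= k
            for rest in bounded_partitions(n - k, m - 1, k):
                yield (k,) + rest

def largest_part_distribution(n, m, q):
    """P(k_1 = x) under P(kappa) = c * q^{k_1} on P_n(m)."""
    weight = {}
    for lam in bounded_partitions(n, m):
        weight[lam[0]] = weight.get(lam[0], 0.0) + q ** lam[0]
    z = sum(weight.values())                  # this is 1/c, the normalizing constant
    return {k1: w / z for k1, w in weight.items()}
```

For $n = 40$, $m = 4$, $q = 1/2$, the mass concentrates just above the smallest possible largest part $\lceil n/m \rceil = 10$, and the tail beyond it decays geometrically.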
When $m$ is a fixed integer, the main result is the following. Recall that a sequence of random vectors $X_1, X_2, \ldots$ in $\mathbb{R}^k$ converges weakly to a random vector $X \in \mathbb{R}^k$ with distribution function $F_X$ if $F_{X_n}(x) \to F_X(x)$ as $n \to \infty$ for every continuity point $x \in \mathbb{R}^k$ of $F_X$.
Theorem 1.
For a given $m \ge 2$, let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P(\kappa)$ as in (1). For a subsequence $n \equiv j_0 \pmod m$, define $j = j_0$ if $1 \le j_0 \le m-1$ and $j = m$ if $j_0 = 0$. Then, as $n \to \infty$ with $n \equiv j_0 \pmod m$ for a fixed $j_0$, the vector $\left(k_1 - \lceil n/m \rceil, \ldots, k_m - \lceil n/m \rceil\right)$ converges weakly to a discrete random vector with probability mass function (pmf)
$$ f(l_1, \ldots, l_m) = \frac{q^{l_1}}{\sum_{l=0}^{\infty} q^l \cdot |P_{m(l+1)-j}(m-1)|} $$
for all integers $(l_1, \ldots, l_m)$ with $l_1 \ge 0$, $l_1 \ge \cdots \ge l_m$ and $\sum_{i=1}^m l_i = j - m$.
Remark 1.
Note that the summation in the denominator of the pmf $f(l_1, \ldots, l_m)$ in Theorem 1 starts at $l = 0$. To make $|P_{m(l+1)-j}(m-1)|$ non-zero, we need $m(l+1) - j \ge m - 1$, that is, $j \le ml + 1$. Since $1 \le j \le m$, $l = 0$ enforces $j = 1$. Indeed, $l = 0$ corresponds to the case when the largest part $k_1 = \lceil n/m \rceil$. From the constraints on the parts, this happens only when $j = 1$ (that is, $n \equiv 1 \pmod m$) and $k_1 = \lceil n/m \rceil$, $k_2 = \cdots = k_m = \lceil n/m \rceil - 1$. If $j \ne 1$, then the case $l = 0$ cannot happen, and this is guaranteed by $|P_{m(0+1)-j}(m-1)| = 0$.
From Theorem 1, we immediately obtain the limiting distribution of the largest part $k_1$, which fluctuates around its smallest possible value $\lceil n/m \rceil$. As a consequence, the conditional distribution of $(k_2, \ldots, k_m)$ given the largest part $k_1$ is asymptotically a uniform distribution.
Corollary 1.
Given $m \ge 2$, let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P(\kappa)$ as in (1). For a subsequence $n \equiv j_0 \pmod m$, define $j = j_0$ if $1 \le j_0 \le m-1$ and $j = m$ if $j_0 = 0$. Then, as $n \to \infty$, $k_1 - \lceil n/m \rceil$ converges weakly to a discrete random variable with pmf
$$ f(l) = \frac{q^l \cdot |P_{ml+m-j}(m-1)|}{\sum_{l=0}^{\infty} q^l \cdot |P_{ml+m-j}(m-1)|}, \qquad l \ge 0. $$
Furthermore, the conditional distribution of $\left(k_2 - \lceil n/m \rceil, \ldots, k_m - \lceil n/m \rceil\right)$ given $k_1 = \lceil n/m \rceil + l_1$ ($l_1 \ge 0$) is asymptotically the uniform distribution on the set $\left\{(l_2, \ldots, l_m) \in \mathbb{Z}^{m-1} : l_1 \ge l_2 \ge \cdots \ge l_m \text{ and } l_1 + \sum_{i=2}^m l_i = j - m\right\}$.
We present the proofs of Theorem 1 and Corollary 1 in Section 2.1.
When $m$ tends to infinity with $n$ and $m = o(n^{1/3})$, we consider the limiting distribution of the largest part $k_1$. The main result is that, with proper normalization, $k_1$ converges to a normal distribution.
Theorem 2.
Given $q \in (0,1)$, let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P(\kappa)$ as in (1). Set $\lambda = -\log q > 0$. If $m = m_n \to \infty$ with $m = o(n^{1/3})$, then $\frac{1}{\sqrt{m}}\left(k_1 - \lceil n/m \rceil - \gamma m\right)$ converges weakly to $N(0, \sigma^2)$ as $n \to \infty$, where
$$ \gamma = \frac{1}{\lambda^2}\int_0^{\lambda} \frac{t}{e^t - 1}\, dt \quad \text{and} \quad \sigma^2 = \frac{2}{\lambda^3}\int_0^{\lambda} \frac{t}{e^t - 1}\, dt - \frac{1}{\lambda(e^{\lambda} - 1)} > 0. $$
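The constants $\gamma$ and $\sigma^2$ are easy to evaluate numerically. The following sketch is our own illustration (not from the paper); it integrates $t/(e^t - 1)$, whose value at $t = 0$ is taken as 1, with a simple trapezoid rule:

```python
import math

def bose_integral(lam, steps=20000):
    """Trapezoid-rule approximation of the integral of t/(e^t - 1) over [0, lam]."""
    f = lambda t: 1.0 if t == 0.0 else t / math.expm1(t)
    h = lam / steps
    return h * (0.5 * (f(0.0) + f(lam))
                + sum(f(i * h) for i in range(1, steps)))

def gamma_sigma2(lam):
    """gamma and sigma^2 from Theorem 2, with lam = -log q > 0."""
    i = bose_integral(lam)
    gamma = i / lam ** 2
    sigma2 = 2.0 * i / lam ** 3 - 1.0 / (lam * math.expm1(lam))
    return gamma, sigma2
```

For instance, $\gamma \approx 0.7775$ and $\sigma^2 \approx 0.973$ at $\lambda = 1$, and $\sigma^2$ stays positive across a range of $\lambda$, consistent with the theorem; as $\lambda \to 0^+$ one sees $\gamma \sim 1/\lambda$.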
The proof of Theorem 2 is analytic and quite involved. The main technical difficulty is the estimation of the normalizing constant $c = c_{n,m}$ in (1), for which we use the Laplace method. The same analysis is applied to obtain the asymptotic distribution of the largest part $k_1$. Thanks to the Szekeres formula (see (11)) for the number of restricted partitions, we first approximate $c_{n,m}^{-1}$ by an integral
$$ c_{n,m}^{-1} \approx C(m) \cdot \int \exp\left(m\,\psi(t)\right) dt $$
for some function $\psi(t)$ that has a global maximum at $t_0 > 0$ and some quantity $C(m) > 0$. Thus,
$$ \psi(t) \approx \psi(t_0) - \frac{1}{2}\,|\psi''(t_0)|\,(t - t_0)^2 $$
and
$$ c_{n,m}^{-1} \approx C(m)\, e^{m\psi(t_0)} \cdot \int \exp\left(-\frac{1}{2}\, m\, |\psi''(t_0)|\,(t - t_0)^2\right) dt. \qquad (2) $$
The most significant contribution to the integral on the right-hand side of (2) comes from $t$ close to $t_0$. Indeed, the integral in (2) reduces to a Gaussian integral as $n \to \infty$. We prove Theorem 2 in Section 2.2.
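The heuristic above is the standard Laplace method. A toy illustration (our own, not from the paper): for $\psi(t) = \log t - t$, which peaks at $t_0 = 1$ with $\psi(t_0) = -1$ and $|\psi''(t_0)| = 1$, the integral $\int_0^{\infty} e^{m\psi(t)}\,dt = \Gamma(m+1)/m^{m+1}$ is matched by the Gaussian approximation $e^{m\psi(t_0)}\sqrt{2\pi/(m|\psi''(t_0)|)}$ up to a factor $1 + O(1/m)$; this is just Stirling's formula:

```python
import math

def exact_integral(m):
    """Integral of e^{m(log t - t)} over (0, inf) = Gamma(m+1)/m^{m+1}, via lgamma."""
    return math.exp(math.lgamma(m + 1) - (m + 1) * math.log(m))

def laplace_approx(m, psi_t0=-1.0, psi2_t0=-1.0):
    """Gaussian (Laplace) approximation e^{m psi(t0)} sqrt(2 pi / (m |psi''(t0)|))."""
    return math.exp(m * psi_t0) * math.sqrt(2 * math.pi / (m * abs(psi2_t0)))
```

The relative error is about $1/(12m)$: roughly $0.17\%$ at $m = 50$ and $0.017\%$ at $m = 500$.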
It remains to consider the conditional distribution of $(k_2, \ldots, k_m)$ given the largest part $k_1$. It is convenient to work with $k_i = \lceil n/m \rceil + l_i$ for $1 \le i \le m$. In view of Theorem 2, let $k_1 = \lceil n/m \rceil + l_1$ with $l_1 = \gamma m + C \cdot \sqrt{m}$ for an arbitrary positive constant $C$. Given $l_1$, the vector $(l_2, \ldots, l_m)$ has the uniform distribution on the set $\left\{(l_2, \ldots, l_m) \in \mathbb{Z}^{m-1} : l_1 \ge l_2 \ge \cdots \ge l_m \text{ and } l_1 + \sum_{i=2}^m l_i = j - m\right\}$. We consider the linear transform $(j_2, \ldots, j_m) = (l_1 - l_2, \ldots, l_1 - l_m)$. Since the uniform distribution is preserved under linear transformations, $(j_2, \ldots, j_m)$ has the uniform distribution on the set $\left\{(j_2, \ldots, j_m) \in \mathbb{N}^{m-1} : j_m \ge \cdots \ge j_3 \ge j_2 \text{ and } \sum_{i=2}^m j_i = m l_1 + m - j\right\}$. In general, the problem is related to understanding the uniform distribution on the set
$$ \left\{(\lambda_2, \ldots, \lambda_m) \in \mathbb{N}^{m-1} : \lambda_2 \ge \cdots \ge \lambda_m \ge 0 \text{ and } \sum_{i=2}^m \lambda_i = m l_1\right\}. $$
To the best of our knowledge, it is not even clear what the limiting joint distribution of a partition chosen uniformly from $P_{m^2}(\gamma m)$ is as $m$ tends to infinity. We raise the following questions for future projects.
Question 1.
Given $q \in (0,1)$, let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P(\kappa)$ as in (1). Assume $m$ tends to infinity with $n$ and $m = o(n^{1/3})$. Determine the asymptotic joint distribution of $(k_2, \ldots, k_m)$ given $k_1$. Furthermore, what is the limiting distribution of $(k_1, k_2, \ldots, k_m)$ as $n$ tends to infinity?
We have considered the limiting distribution of $\kappa \in P_n(m)$ chosen as in (1) for $m$ fixed as well as for $m = o(n^{1/3})$. The requirement $m = o(n^{1/3})$ stems from the technical reason that, in this regime, we can provide an asymptotic expression for the normalizing constant $c$ in (1) (see (21) below) via Lemma 1, which facilitates the finer analysis needed to identify the limiting distribution of the largest part. It is also interesting to investigate this probability measure for other ranges of $m$.
Question 2.
Given $q \in (0,1)$, let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P(\kappa)$ as in (1). Identify the asymptotic distribution of $\kappa$ for the entire range $1 \le m \le n$.

1.2. A Generalized Distribution

Next, we consider a probability measure on $P_n(m)$ that chooses a partition $\kappa = (k_1, \ldots, k_m) \vdash n$ with probability
$$ P_n(\kappa) = c \cdot f\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right), \qquad (3) $$
where $c = c_{n,m} = \left[\sum_{(k_1, \ldots, k_m) \in P_n(m)} f\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right)\right]^{-1}$ is the normalizing constant and $f(x_1, \ldots, x_m)$ is defined on $\overline{\nabla}_{m-1}$, the closure of $\nabla_{m-1}$. Here, $\nabla_{m-1}$ is the ordered $(m-1)$-dimensional simplex defined as
$$ \nabla_{m-1} := \left\{(y_1, \ldots, y_m) \in [0,1]^m : y_1 > y_2 > \cdots > y_{m-1} > y_m \text{ and } y_m = 1 - \sum_{i=1}^{m-1} y_i\right\}. $$
We assume $f$ is a probability density function on $\nabla_{m-1}$ and is either bounded continuous or Lipschitz on $\overline{\nabla}_{m-1}$.
When m is a fixed integer, we study the limiting joint distribution of the parts of κ chosen as in (3). The main result is the following.
Theorem 3.
Let $m \ge 2$ be a fixed integer. Assume $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ is chosen as in (3), where $f$ is a probability density function on $\nabla_{m-1}$ and is bounded continuous on $\overline{\nabla}_{m-1}$. Then $\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right)$ converges weakly to the probability measure $\mu$ with density function $f(y_1, \ldots, y_m)$ on $\nabla_{m-1}$.
From Theorem 3, we immediately obtain limiting convergence to several familiar distributions. We say $(X_1, \ldots, X_m)$ has the symmetric Dirichlet distribution with parameter $\alpha > 0$, denoted $(X_1, \ldots, X_m) \sim \mathrm{Dir}(\alpha)$, if the distribution has pdf
$$ \frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\, x_1^{\alpha-1} \cdots x_m^{\alpha-1} $$
on the $(m-1)$-dimensional simplex
$$ W_{m-1} := \left\{(x_1, \ldots, x_{m-1}, x_m) \in [0,1]^m : \sum_{i=1}^m x_i = 1\right\} $$
and zero elsewhere.
Corollary 2.
Let $m \ge 2$ be a fixed integer. Assume $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ is chosen as in (3) with $f(x_1, \ldots, x_m) = c \cdot x_1^{\alpha-1} \cdots x_m^{\alpha-1}$ for some $\alpha \ge 1$ and $1/c = \int_{\nabla_{m-1}} x_1^{\alpha-1} \cdots x_m^{\alpha-1}\, dx_1 \cdots dx_{m-1}$. Then
$$ \left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right) \Rightarrow \left(X_{(1)}, \ldots, X_{(m)}\right), $$
where $(X_{(1)}, \ldots, X_{(m)})$ are the decreasing order statistics of $(X_1, \ldots, X_m) \sim \mathrm{Dir}(\alpha)$.
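For $m = 2$ and $\alpha = 2$, Corollary 2 can be checked by direct computation (our own illustration, with the parameters picked for convenience): under the weight $f(x_1, x_2) \propto x_1 x_2$ on $P_n(2)$, the mean of $k_1/n$ should approach $E[\max(X_1, X_2)] = 11/16$ for $(X_1, X_2) \sim \mathrm{Dir}(2)$:

```python
def mean_largest_part(n, alpha=2):
    """E[k_1/n] on P_n(2) under the weight (k_1/n)^(alpha-1) * (k_2/n)^(alpha-1)."""
    num = den = 0.0
    for k1 in range(-(-n // 2), n + 1):        # k1 from ceil(n/2) to n
        x1, x2 = k1 / n, (n - k1) / n
        w = (x1 * x2) ** (alpha - 1)
        num += x1 * w
        den += w
    return num / den
```

Here $E[\max(X_1, X_2)] = \int_0^1 \max(x, 1-x)\cdot 6x(1-x)\,dx = 11/16 = 0.6875$, and the enumeration at $n = 2000$ agrees to about three decimals.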
Corollary 3.
Let $m \ge 2$ be a fixed integer. Assume $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ is chosen as in (3) with $f(x_1, \ldots, x_m) = c \cdot x_1^{\alpha-1} \cdots x_m^{\alpha-1}$ for some $\alpha \ge 1$ and $1/c = \int_{\nabla_{m-1}} x_1^{\alpha-1} \cdots x_m^{\alpha-1}\, dx_1 \cdots dx_{m-1}$. Then
$$ \left(\left(\frac{k_1}{n}\right)^{\alpha}, \ldots, \left(\frac{k_m}{n}\right)^{\alpha}\right) \Rightarrow (Y_1, \ldots, Y_m) $$
as $n \to \infty$, where $(Y_1, \ldots, Y_m)$ has the uniform distribution on
$$ \left\{(y_1, \ldots, y_m) \in [0,1]^m : \sum_{i=1}^m y_i^{1/\alpha} = 1,\ y_1 \ge \cdots \ge y_m\right\}, $$
or, equivalently, $(Y_1, \ldots, Y_m)$ are the decreasing order statistics of the uniform distribution on $\left\{(y_1, \ldots, y_m) \in [0,1]^m : \sum_{i=1}^m y_i^{1/\alpha} = 1\right\}$.
For the special case $\alpha = 1$, that is, when $\kappa$ is chosen uniformly from $P_n(m)$, the conclusion of Corollary 3 was first proved in [7]. The proofs of Theorem 3 and Corollaries 2 and 3 are included in Section 3.1.
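For $\alpha = 1$ (the uniform measure), the limit can be compared with a classical fact: the decreasing order statistics of the uniform distribution on the simplex are the ranked spacings of uniform points, so $E[X_{(1)}] = (1 + 1/2 + \cdots + 1/m)/m$. A quick enumeration for $m = 3$ (our own illustration, not from the paper):

```python
def mean_largest_uniform_3(n):
    """E[k_1/n] under the uniform measure on P_n(3), by direct enumeration."""
    total = count = 0
    for k1 in range((n + 2) // 3, n + 1):     # k1 >= ceil(n/3)
        r = n - k1
        lo, hi = (r + 1) // 2, min(k1, r)     # admissible k2, with k3 = r - k2
        if hi >= lo:
            count += hi - lo + 1
            total += (hi - lo + 1) * k1
    return total / (count * n)
```

The limit predicts $(1 + 1/2 + 1/3)/3 = 11/18 \approx 0.6111$; at $n = 900$ the enumeration is already close.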
When $m$ grows with $n$, we establish the limiting distribution of random restricted partitions in $P_n(m)$. Define
$$ \nabla := \left\{(y_1, y_2, \ldots) \in [0,1]^{\infty} : y_1 \ge y_2 \ge \cdots \text{ and } \sum_{i=1}^{\infty} y_i \le 1\right\}. $$
Note that each $\nabla_{m-1}$ can be viewed, by the natural embedding, as a subset of
$$ \left\{(y_1, y_2, \ldots) \in [0,1]^{\infty} : y_1 \ge y_2 \ge \cdots \text{ and } \sum_{i=1}^{\infty} y_i = 1\right\}, $$
and $\nabla$ is the closure of this set in $[0,1]^{\infty}$ with the topology inherited from $[0,1]^{\infty}$ (see (68) for the precise explanation). By Tychonoff's theorem, $\overline{\nabla}_{m-1}$ and $\nabla$ are compact. Furthermore, both $\overline{\nabla}_{m-1}$ and $\nabla$ are compact Polish spaces, and thus any probability measure on them is tight. Therefore, for probability measures $\{\mu_n\}_{n \ge 1}$ and $\mu$ on $\nabla$, $\mu_n$ converges to $\mu$ weakly if all the finite-dimensional distributions of $\mu_n$ converge to the corresponding finite-dimensional distributions of $\mu$.
Theorem 4.
Let $m = o(n^{1/3})$ as $n \to \infty$. Assume $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ is chosen with probability as in (3), where $f$ is a probability density function on $\nabla_{m-1}$ and is Lipschitz on $\overline{\nabla}_{m-1}$. Furthermore, assume the Lipschitz constant satisfies $\|f\|_{\mathrm{Lip}} \le K$ for an absolute constant $K > 0$. Let $(X_{m,1}, \ldots, X_{m,m})$ have density function $f(y_1, \ldots, y_m)$ on $\nabla_{m-1}$. If $(X_{m,1}, \ldots, X_{m,m})$ converges weakly to $X$ defined on $\nabla$ as $n \to \infty$, then $\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right)$ converges weakly to $X$ as $n \to \infty$.
We will prove Theorem 4 in Section 3.2. The proof follows along the same lines as that of Theorem 3, with modifications. In Theorem 3, where $m$ is fixed, we only require the function $f$ in (3) to be bounded continuous on $\overline{\nabla}_{m-1}$. This assumption is essentially used to show $E\,\psi\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right) \to E\,\psi(X_{m,1}, \ldots, X_{m,m})$ as $n \to \infty$ for any bounded continuous function $\psi$ on $\overline{\nabla}_{m-1}$, because $\psi \cdot f$ is still bounded continuous on $\overline{\nabla}_{m-1}$. For Theorem 4, where $m$ depends on $n$, the stronger assumption $\|f\|_{\mathrm{Lip}} \le K$ is imposed, as we need to carefully control the difference $E\,\psi\left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right) - E\,\psi(X_{m,1}, \ldots, X_{m,m})$ in terms of $m$ and $n$ for any bounded Lipschitz function $\psi$ on $\overline{\nabla}_{m-1}$.
We have investigated the limiting distribution of $\kappa \in P_n(m)$ chosen as in (3) for both $m$ fixed and $m = o(n^{1/3})$. The assumption $m = o(n^{1/3})$ is due to the essential use in our proof of the Erdős–Lehner formula $|P_n(m)| \sim \frac{1}{m!}\binom{n-1}{m-1}$, which is known to hold only for $m = o(n^{1/3})$. It would be interesting to understand the limiting distribution of $\kappa$ for other ranges of $m$. We leave this as an open question for future research.
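The Erdős–Lehner asymptotics can be observed numerically (our own illustration); $|P_n(m)|$ is computed by the standard coin-change DP, using the conjugation bijection between "at most $m$ parts" and "largest part at most $m$":

```python
import math

def count_partitions_at_most(n, m):
    """|P_n(m)|: partitions of n with largest part <= m (= at most m parts)."""
    dp = [1] + [0] * n
    for part in range(1, m + 1):
        for s in range(part, n + 1):
            dp[s] += dp[s - part]
    return dp[n]

def erdos_lehner_ratio(n, m):
    """|P_n(m)| divided by binom(n-1, m-1)/m!; approaches 1 when m = o(n^{1/3})."""
    return count_partitions_at_most(n, m) * math.factorial(m) / math.comb(n - 1, m - 1)
```

For $m = 3$ the ratio is about $1.003$ at $n = 3000$ and about $1.0004$ at $n = 30000$.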
Question 3.
Let $\kappa = (k_1, \ldots, k_m) \in P_n(m)$ be chosen with probability $P_n(\kappa)$ as in (3). Identify the asymptotic distribution of $\kappa$ for the entire range $1 \le m \le n$.
Notation: For $x \in \mathbb{R}$, the notation $\lceil x \rceil$ stands for the smallest integer greater than or equal to $x$, and $[x]$ denotes the largest integer less than or equal to $x$. We use $\mathbb{Z}$ for the set of all integers. For a set $A$, the notation $\#A$ or $|A|$ stands for the cardinality of $A$; we also write $\sum_{a \in A} 1$ for $|A|$. We use $c \cdot A = \{c \cdot a : a \in A\}$. For $f(n), g(n) > 0$, we write $f(n) \sim g(n)$ if $\lim_{n \to \infty} f(n)/g(n) = 1$.

2. Proofs of Theorems 1 and 2 and Corollary 1

The strategies for deriving Theorems 1 and 2 are different, and the proof of Theorem 2 is relatively lengthy. For clarity, the proofs are given in two subsections. In Section 2.1, we present the proofs of Theorem 1 and Corollary 1. Theorem 2 is established in Section 2.2.

2.1. The Proofs of Theorem 1 and Corollary 1

In this section, $m$ is assumed to be a fixed integer. We start with a lemma concerning the number of restricted partitions in $P_n(m)$ with the largest part fixed.
Lemma 1.
Let $l \ge 0$, $m \ge 2$ and $n \ge 1$ be integers. Set $j = m + n - m\lceil n/m \rceil$. Then $1 \le j \le m$. If $0 \le l \le \frac{1}{m-1}\left(\lceil n/m \rceil - m\right)$, we have
$$ \#\left\{(k_1, k_2, \ldots, k_m) \in P_n(m) : k_1 = \lceil n/m \rceil + l\right\} = |P_{m(l+1)-j}(m-1)|. \qquad (4) $$
If $\frac{1}{m-1}\left(\lceil n/m \rceil - m\right) < l \le n - \lceil n/m \rceil$, we have
$$ \#\left\{(k_1, k_2, \ldots, k_m) \in P_n(m) : k_1 = \lceil n/m \rceil + l\right\} \le |P_{m(l+1)-j}(m-1)|. \qquad (5) $$
Proof. 
For $\kappa = (k_1, \ldots, k_m) \in P_n(m)$, let us rewrite $k_i = \lceil n/m \rceil + l_i$ for $1 \le i \le m$. By assumption, $l_1 = l \ge 0$. Since $\kappa \vdash n$, we have $l_1 \ge l_2 \ge \cdots \ge l_m \ge -\lceil n/m \rceil$ and $l_1 + \sum_{i=2}^m l_i = n - m\lceil n/m \rceil = j - m$ by assumption. Therefore,
$$ \#\left\{(k_1, k_2, \ldots, k_m) \in P_n(m) : k_1 = \lceil n/m \rceil + l_1\right\} = \#\left\{(l_2, \ldots, l_m) \in \mathbb{Z}^{m-1} : l_1 \ge l_2 \ge \cdots \ge l_m \ge -\lceil n/m \rceil \text{ and } l_1 + \sum_{i=2}^m l_i = j - m\right\} = \#\left\{(j_2, \ldots, j_m) \in \mathbb{Z}^{m-1} : l_1 + \lceil n/m \rceil \ge j_m \ge \cdots \ge j_2 \ge 0 \text{ and } \sum_{i=2}^m j_i = m(l_1+1) - j\right\} $$
by the transform $j_i = l_1 - l_i$ for $2 \le i \le m$.
Assume $0 \le l_1 \le \frac{1}{m-1}\left(\lceil n/m \rceil - m\right)$. If $j_m \ge \cdots \ge j_2 \ge 0$ and $\sum_{i=2}^m j_i = m(l_1+1) - j$, then
$$ j_m \le \sum_{i=2}^m j_i = m(l_1+1) - j \le m(l_1+1) \le l_1 + \lceil n/m \rceil $$
by the assumption, the notation $l_1 = l$ and the fact $\lceil x \rceil \ge x$ for any $x \in \mathbb{R}$. It follows that the left-hand side of (4) is identical to
$$ \#\left\{(j_2, \ldots, j_m) \in \mathbb{Z}^{m-1} : j_m \ge \cdots \ge j_2 \ge 0 \text{ and } \sum_{i=2}^m j_i = m(l+1) - j\right\} = |P_{m(l+1)-j}(m-1)|. $$
For $\left[\frac{1}{m-1}\left(\lceil n/m \rceil - m\right)\right] + 1 \le l \le n - \lceil n/m \rceil$, the upper bound (5) follows directly from the definitions of the sets.      □
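Lemma 1 is easy to test numerically. The sketch below is our own illustration (the values of $n$ and $m$ are picked arbitrarily, not from the paper); it compares a direct count of partitions with prescribed largest part against $|P_{m(l+1)-j}(m-1)|$ for $l$ in the exact-equality range:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def npart(n, parts, cap):
    """Number of partitions of n into at most `parts` parts, each <= cap."""
    if n == 0:
        return 1
    if parts == 0 or cap == 0:
        return 0
    return sum(npart(n - k, parts - 1, k) for k in range(min(n, cap), 0, -1))

def check_lemma1(n, m):
    q0 = -(-n // m)                   # ceil(n/m)
    j = m + n - m * q0                # so 1 <= j <= m
    for l in range(0, (q0 - m) // (m - 1) + 1):
        lhs = npart(n - (q0 + l), m - 1, q0 + l)    # count with k_1 = q0 + l fixed
        rhs = npart(m * (l + 1) - j, m - 1, m * (l + 1) - j)
        assert lhs == rhs, (l, lhs, rhs)
    return True
```

For instance, with $n = 122$, $m = 4$ (so $\lceil n/m \rceil = 31$, $j = 2$), the two counts agree for all $l$ up to $[(\lceil n/m \rceil - m)/(m-1)] = 9$.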
Now, we are ready to present the proof of Theorem 1.
Proof of Theorem 1.
First, it is easy to check that for the subsequence $n \equiv j_0 \pmod m$, if we define $j = j_0$ when $1 \le j_0 \le m-1$ and $j = m$ when $j_0 = 0$, then $j = m + n - m\lceil n/m \rceil$. Set
$$ M_n = \left[\frac{1}{m-1}\left(\lceil n/m \rceil - m\right)\right]. \qquad (6) $$
We first estimate the normalizing constant $c$ in (1):
$$ 1 = \sum_{\kappa \in P_n(m)} P(\kappa) = c \cdot \sum_{k_1 = \lceil n/m \rceil}^{n} \sum_{(k_1, k_2, \ldots, k_m) \vdash n} q^{k_1} = c \cdot \sum_{l=0}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1. $$
We first show that, as $n$ tends to infinity,
$$ \sum_{l=0}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 \;\sim\; \sum_{l=0}^{M_n} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1. \qquad (7) $$
By Lemma 1,
$$ \frac{\sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1}{\sum_{l=0}^{M_n} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1} \;\le\; \frac{\sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^l \cdot |P_{m(l+1)-j}(m-1)|}{\sum_{l=0}^{M_n} q^l \cdot |P_{m(l+1)-j}(m-1)|} \;\le\; \frac{\sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^l \binom{ml+m-j-1}{m-2}}{\sum_{l=0}^{M_n} q^l \binom{ml+m-j-1}{m-2}}, $$
where the last inequality follows from (49). Note that the series $\sum_{s=1}^{\infty} s^{m-2} q^s$ converges for $0 < q < 1$. We have
$$ \frac{\sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^l \binom{ml+m-j-1}{m-2}}{\sum_{l=0}^{M_n} q^l \binom{ml+m-j-1}{m-2}} = O\left(\frac{\sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^l\, l^{m-2}}{\sum_{l=0}^{M_n} q^l\, l^{m-2}}\right) = o(1). $$
Therefore, one obtains the normalizing constant
$$ c^{-1} \sim q^{\lceil n/m \rceil} \sum_{l=0}^{M_n} q^l \cdot |P_{m(l+1)-j}(m-1)|. \qquad (8) $$
Now, we study the limiting joint distribution of the parts
$$ (k_1, k_2, \ldots, k_m) = \left(\lceil n/m \rceil + l_1,\, \lceil n/m \rceil + l_2,\, \ldots,\, \lceil n/m \rceil + l_m\right). $$
First, we claim that it is enough to consider $l_1$ to be any fixed integer from $\{0, 1, 2, \ldots\}$. Indeed, for any $L = L(n) \to \infty$ as $n \to \infty$, it follows from (7), (49) and Lemma 1 that
$$ P\left(k_1 \ge \lceil n/m \rceil + L\right) = \sum_{l=L}^{n - \lceil n/m \rceil} P\left(k_1 = \lceil n/m \rceil + l\right) \sim \sum_{l=L}^{M_n} P\left(k_1 = \lceil n/m \rceil + l\right) = \sum_{l=L}^{M_n} c \cdot q^{\lceil n/m \rceil + l}\, |P_{ml+m-j}(m-1)| \le c \cdot q^{\lceil n/m \rceil} \sum_{l=L}^{M_n} \frac{1}{(m-1)!}\binom{ml+m-j-1}{m-2}\, q^l. $$
Plugging the normalizing constant $c$ from (8) into the above and letting $L \to \infty$, we have
$$ P\left(k_1 \ge \lceil n/m \rceil + L\right) = O\left(\frac{\sum_{l=L}^{M_n} l^{m-2}\, q^l}{\sum_{l=0}^{M_n} q^l \cdot |P_{ml+m-j}(m-1)|}\right) = o(1) $$
as $n \to \infty$, where the last equality follows from arguments similar to those for (7). Likewise, as $n$ tends to infinity,
$$ c \sim q^{-\lceil n/m \rceil}\left(\sum_{l=0}^{\infty} q^l \cdot |P_{ml+m-j}(m-1)|\right)^{-1}. \qquad (9) $$
Therefore, for any given $l_1 = 0, 1, 2, \ldots$, we conclude that
$$ P\left(k_1 = \lceil n/m \rceil + l_1,\, k_2 = \lceil n/m \rceil + l_2,\, \ldots,\, k_m = \lceil n/m \rceil + l_m\right) = c \cdot q^{\lceil n/m \rceil + l_1} \to \frac{q^{l_1}}{\sum_{l=0}^{\infty} q^l \cdot |P_{ml+m-j}(m-1)|} = f(l_1, \ldots, l_m). \qquad (10) $$
Finally, we show that $f(l_1, \ldots, l_m)$ is indeed a pmf on the set $S := \{(l_1, \ldots, l_m) \in \mathbb{Z}^m : l_1 \ge 0,\ l_1 \ge \cdots \ge l_m \text{ and } \sum_{i=1}^m l_i = j - m\}$. To see this, sum over all possible choices of $(l_1, \ldots, l_m)$ from $S$ on both sides of (10); since the number of terms in the sum is finite and independent of $n$, we get $\sum_{(l_1, \ldots, l_m) \in S} f(l_1, \ldots, l_m) = 1$.
The proof is completed.      □
We continue with the proof of Corollary 1.
Proof of Corollary 1.
By Theorem 1, it is enough to consider $k_1 = \lceil n/m \rceil + l$ for $l \in \{0, 1, 2, \ldots\}$ in the limiting distribution. From (1), Lemma 1 and (9),
$$ P\left(k_1 = \lceil n/m \rceil + l\right) = c \cdot q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 = c \cdot q^{\lceil n/m \rceil + l} \cdot |P_{ml+m-j}(m-1)| \to \frac{q^l \cdot |P_{ml+m-j}(m-1)|}{\sum_{l=0}^{\infty} q^l \cdot |P_{ml+m-j}(m-1)|} $$
as $n \to \infty$.
Furthermore, since
$$ P\left(k_2 - \lceil n/m \rceil = l_2, \ldots, k_m - \lceil n/m \rceil = l_m \,\middle|\, k_1 - \lceil n/m \rceil = l_1\right) = \frac{P\left(k_1 - \lceil n/m \rceil = l_1,\, k_2 - \lceil n/m \rceil = l_2, \ldots, k_m - \lceil n/m \rceil = l_m\right)}{P\left(k_1 - \lceil n/m \rceil = l_1\right)} \to \frac{f(l_1, \ldots, l_m)}{f(l_1)} = \frac{1}{|P_{ml_1+m-j}(m-1)|} $$
as $n \to \infty$, it follows immediately that the conditional distribution of $\left(k_2 - \lceil n/m \rceil, \ldots, k_m - \lceil n/m \rceil\right)$ given $k_1 = \lceil n/m \rceil + l_1$ ($l_1 \ge 0$) is asymptotically the uniform distribution on the set $\left\{(l_2, \ldots, l_m) \in \mathbb{Z}^{m-1} : l_1 \ge l_2 \ge \cdots \ge l_m \text{ and } l_1 + \sum_{i=2}^m l_i = j - m\right\}$. This completes the proof.      □

2.2. The Proof of Theorem 2

The Szekeres formula (see [25,26,27,28]) says that, for any given $\epsilon > 0$,
$$ |P_n(k)| = \frac{f(u)}{n}\, e^{\sqrt{n}\, g(u) + O(n^{-1/6+\epsilon})} \qquad (11) $$
uniformly for $k \ge n^{1/6}$, where $u = k/\sqrt{n}$,
$$ f(u) = \frac{v}{2^{3/2}\pi u}\left(1 - e^{-v} - \frac{u^2}{2}\, e^{-v}\right)^{-1/2}, \qquad (12) $$
$$ g(u) = \frac{2v}{u} - u \log\left(1 - e^{-v}\right), \qquad (13) $$
and $v = v(u)$ is determined implicitly by
$$ u^2 = v^2\left(\int_0^v \frac{t}{e^t - 1}\, dt\right)^{-1}. \qquad (14) $$
We start with a technical lemma that will be used in the proof of Theorem 2 later.
Lemma 2.
Let λ > 0 be given. Define ψ ( t ) = g ( t ) t λ t 2 for t > 0 . Then
t 0 : = λ ( 0 λ t e t 1 d t ) 1 / 2 s a t i s f i e s ψ ( t 0 ) = 2 λ ( e λ 1 ) t 0 4 ( e λ 1 1 2 t 0 2 ) < 0 .
Further, ψ ( t 0 ) = 0 , ψ ( t ) is strictly increasing on ( 0 , t 0 ] and strictly decreasing on [ t 0 , ) .
Proof. 
Trivially, the function $\frac{t}{e^t - 1} = \left(\sum_{i=1}^{\infty} \frac{t^{i-1}}{i!}\right)^{-1}$ is positive and decreasing in $t \in (0, \infty)$. It follows that $v = v(u) > 0$ for all $u \in (0, \infty)$ and
$$ \frac{v^2}{u^2} = \int_0^v \frac{t}{e^t - 1}\, dt > \frac{v^2}{e^v - 1}. $$
Thus, $e^v - 1 - u^2 > 0$. In particular,
$$ e^v - 1 - \frac{1}{2}u^2 > 0. \qquad (15) $$
Taking the derivative in (14), we get
$$ 2v \cdot v' = 2u \int_0^v \frac{t}{e^t - 1}\, dt + u^2\, v' \cdot \frac{v}{e^v - 1}. $$
This implies that $\frac{v'}{e^v - 1} = \frac{2v'}{u^2} - \frac{2v}{u^3}$, or equivalently,
$$ v' = \frac{v}{u} + \frac{uv}{2\left(e^v - 1 - \frac{1}{2}u^2\right)}. \qquad (16) $$
Consequently, $v' = v'(u) > 0$ for all $u > 0$, and thus $v(u)$ is strictly increasing on $(0, \infty)$. Taking the derivative of $g(u)$ in (13), and using (14) and (16), we see that
$$ g'(u) = -\log\left(1 - e^{-v}\right); \qquad g''(u) = \frac{v'\, e^{-v}}{1 - e^{-v}} = \frac{v/u}{e^v - 1 - \frac{1}{2}u^2}. \qquad (17), (18) $$
Therefore,
$$ \left(\frac{g(u)}{u}\right)' = \frac{u\, g'(u) - g(u)}{u^2} $$
and
$$ \left(\frac{g(u)}{u}\right)'' = \frac{g''(u)}{u} - \frac{2g'(u)}{u^2} + \frac{2g(u)}{u^3} = \frac{v}{u^4}\left(4 - \frac{u^2}{e^v - 1 - \frac{1}{2}u^2}\right). $$
With the above preparation, we now study $\psi''(t)$ (we switch the variable "$u$" to "$t$"):
$$ \psi''(t) = \left(\frac{g(t)}{t}\right)'' - \lambda\left(\frac{1}{t^2}\right)'' = \frac{v}{t^4}\left(4 - \frac{t^2}{e^v - 1 - \frac{1}{2}t^2}\right) - \frac{6\lambda}{t^4} = \frac{1}{t^4}\left(4v - 6\lambda - \frac{v \cdot t^2}{e^v - 1 - \frac{1}{2}t^2}\right). \qquad (19) $$
The assertions in (17) and (18) imply
$$ \left(\frac{g(t)}{t}\right)' = \frac{-t^2 \log\left(1 - e^{-v}\right) - t\, g(t)}{t^3} = -\frac{2v}{t^3}. $$
Thus, $\psi'(t) = \frac{2(\lambda - v)}{t^3}$. Hence, the stationary point $t_0$ of $\psi(t)$ satisfies $v(t_0) = \lambda$. This implies that $\psi(t)$ is strictly increasing on $(0, t_0]$ and strictly decreasing on $[t_0, \infty)$. It is not difficult to see from (14) that
$$ t_0 = \lambda\left(\int_0^{\lambda} \frac{t}{e^t - 1}\, dt\right)^{-1/2}. $$
Plugging this into (19), we get
$$ \psi''(t_0) = \frac{1}{t_0^4}\left(-2\lambda - \frac{\lambda \cdot t_0^2}{e^{\lambda} - 1 - \frac{1}{2}t_0^2}\right) = -\frac{2\lambda\left(e^{\lambda} - 1\right)}{t_0^4\left(e^{\lambda} - 1 - \frac{1}{2}t_0^2\right)} < 0 $$
by (15).      □
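The monotonicity claim can also be checked numerically by inverting (14) with bisection ($u$ is increasing in $v$, as shown above). The following sketch is our own illustration, at the arbitrary choice $\lambda = 1$:

```python
import math

def bose_integral(v, steps=4000):
    """Trapezoid approximation of the integral of t/(e^t - 1) over [0, v]."""
    f = lambda t: 1.0 if t == 0.0 else t / math.expm1(t)
    h = v / steps
    return h * (0.5 * (f(0.0) + f(v)) + sum(f(i * h) for i in range(1, steps)))

def v_of_u(u):
    """Solve u = v / sqrt(F(v)) for v by bisection, F(v) the integral in (14)."""
    lo, hi = 1e-9, 60.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid / math.sqrt(bose_integral(mid)) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def psi(t, lam):
    """psi(t) = g(t)/t - lam/t^2, with g(u) = 2v/u - u log(1 - e^{-v})."""
    v = v_of_u(t)
    g = 2.0 * v / t - t * math.log(-math.expm1(-v))
    return g / t - lam / t ** 2
```

With $\lambda = 1$, the maximizer is $t_0 = \lambda(\int_0^{\lambda} t/(e^t-1)\,dt)^{-1/2} \approx 1.134$, and $\psi$ indeed rises to its maximum there and decreases afterwards.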
Now, we are in a position to prove Theorem 2.
Proof of Theorem 2.
Let $M_n = \left[\frac{1}{m-1}\left(\lceil n/m \rceil - m\right)\right]$ be as in (6). The assumption $m = o(n^{1/3})$ implies
$$ \lim_{n \to \infty} \frac{M_n}{m} = \infty. \qquad (20) $$
Similarly to (8), we first claim that the normalizing constant satisfies
$$ c^{-1} \sim q^{\lceil n/m \rceil} \sum_{l=0}^{M_n} q^l \cdot |P_{m(l+1)-j}(m-1)|. \qquad (21) $$
Indeed, from Lemma 1,
$$ \frac{1}{c} = \sum_{l=0}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 = \sum_{l=0}^{M_n} q^{\lceil n/m \rceil + l} \cdot |P_{m(l+1)-j}(m-1)| + \sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 $$
and
$$ \sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 \le \sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \cdot |P_{m(l+1)-j}(m-1)| = \sum_{l=M_n+2}^{n - \lceil n/m \rceil + 1} q^{\lceil n/m \rceil + l - 1} \cdot |P_{lm-j}(m-1)|. $$
Observe that $|P_{lm-j}(m-1)| \le |P_{lm}(lm)| \le e^{K\sqrt{lm}}$ for some constant $K > 0$ by the Hardy–Ramanujan formula [29]. Therefore,
$$ \sum_{l=M_n+1}^{n - \lceil n/m \rceil} q^{\lceil n/m \rceil + l} \sum_{(\lceil n/m \rceil + l,\, k_2, \ldots, k_m) \vdash n} 1 \le q^{\lceil n/m \rceil} \sum_{l=M_n}^{\infty} e^{-\lambda l + K\sqrt{lm}} \le q^{\lceil n/m \rceil} \sum_{l=M_n}^{\infty} e^{-\lambda l/2} \le q^{\lceil n/m \rceil}\, \frac{e^{-\lambda M_n/2}}{1 - e^{-\lambda/2}} = o\left(\sum_{l=0}^{M_n} q^{\lceil n/m \rceil + l} \cdot |P_{m(l+1)-j}(m-1)|\right) $$
for $n$ sufficiently large. This completes the proof of (21).
Hence, following (21) and Lemma 1, we have
$$ P\left(k_1 = \lceil n/m \rceil + l\right) \sim \frac{q^l\, |P_{m(l+1)-j}(m-1)|}{\sum_{l=0}^{M_n} q^l \cdot |P_{m(l+1)-j}(m-1)|} \qquad (22) $$
for $l = 0, 1, 2, \ldots, M_n$, where $j = m + n - m\lceil n/m \rceil$ and $1 \le j \le m$. Thus, combined with (20), we arrive at
$$ P\left(k_1 \ge \lceil n/m \rceil + \sqrt{m}\,\xi\right) \sim \frac{\sum_{l=[\sqrt{m}\,\xi]+1}^{M_n+1} q^l \cdot |P_{lm-j}(m-1)|}{\sum_{l=1}^{M_n+1} q^l \cdot |P_{lm-j}(m-1)|} $$
for any $\xi \ge 0$.
In the following, we first apply a fine analysis to estimate the denominator
$$ \sum_{l=1}^{M_n+1} q^l \cdot |P_{lm-j}(m-1)|. \qquad (23) $$
We divide the range of summation into five parts: $1 \le l \le cm$, $Cm \le l \le M_n$, $cm \le l < \gamma m - \sqrt{m}\log m$, $\gamma m + \sqrt{m}\log m < l \le Cm$, and $\gamma m - \sqrt{m}\log m \le l \le \gamma m + \sqrt{m}\log m$, for some proper constants $c, C > 0$ and $\gamma = t_0^{-2}$ (recall $t_0$ in Lemma 2). The most significant contribution to the summation comes from the range $\gamma m - \sqrt{m}\log m \le l \le \gamma m + \sqrt{m}\log m$; the others are negligible. The estimation of the numerator is similar.
Before proceeding to the technical details, we explain in more detail how the division in (23) is chosen. Following the heuristic explained in (2), the most significant contribution to the summation (23), which is approximated by the integral
$$ C(m)\, e^{m\psi(t_0)} \cdot \int \exp\left(-\frac{1}{2}\, m\, |\psi''(t_0)|\,(t - t_0)^2\right) dt, \qquad (24) $$
comes from $t$ close to the point $t_0$ given in Lemma 2. Dividing (23) into five parts can be thought of as dividing (24) into five integrals, with $t = \sqrt{m/l}$. Indeed, the constants $c, C$ in the division of (23) are chosen (see (30) below) to satisfy $1/\sqrt{C} < t_0 < 1/\sqrt{c}$. Hence, the parts where $1 \le l \le cm$ or $Cm \le l \le M_n$ correspond to the integrals in (24) where $t \ge 1/\sqrt{c}$ or $t \le 1/\sqrt{C}$, and their contribution is negligible. The parts in (23) where $\gamma m + \sqrt{m}\log m < l \le Cm$ or $cm \le l < \gamma m - \sqrt{m}\log m$ correspond to the integrals in (24) where $t$ is of order $\log m/\sqrt{m}$ away from $t_0$; we show that their contribution is also negligible through finer analysis. The main contribution to (23) is essentially from the part where $\gamma m - \sqrt{m}\log m \le l \le \gamma m + \sqrt{m}\log m$. This corresponds to the integral in (24) where $t$ is within $O(\log m/\sqrt{m})$ of $t_0$.
Step 1: Two rough tails are negligible. First, by the Hardy–Ramanujan formula, there exists a constant $K > 0$ such that
$$ |P_{lm-j}(m-1)| \le |P_{lm}(lm)| \le e^{K\sqrt{lm}} $$
for $l \ge 1$ as $n$ is large. Set $\lambda = -\log q > 0$. It follows that
$$ \sum_{l=Cm}^{M_n+1} q^l \cdot |P_{lm-j}(m-1)| \le \sum_{l=Cm}^{\infty} e^{-\lambda l + K\sqrt{ml}} \le \sum_{l=Cm}^{\infty} e^{-\lambda l/2} \le \frac{e^{-\lambda Cm/2}}{1 - e^{-\lambda/2}}, $$
where the middle inequality holds for all $l \ge (4K^2\lambda^{-2})m$ and hence for $C > 4K^2\lambda^{-2}$. Similarly, for the same $K$ as above,
$$ \sum_{l=1}^{cm} q^l \cdot |P_{lm-j}(m-1)| \le \sum_{l=1}^{cm} q^l \cdot |P_{[cm^2]}(m)| \le (cm) \cdot |P_{[cm^2]}(m)| \le (cm) \cdot e^{\sqrt{c}\,Km} $$
for all $c > 0$ as $n$ is sufficiently large.
In the rest of the proof, the variable $n$ will be hidden in $m = m_n$ and $j = j_n$. Keep in mind that $m$ is sufficiently large when we say "$n$ is sufficiently large". We set two parameters:
$$ C = \max\left\{\frac{8K^2}{\lambda^2},\, 2\gamma\right\}; \qquad (27) $$
$$ c = \min\left\{\frac{\psi(t_0)^2}{16K^2},\, \frac{\gamma}{2}\right\}. \qquad (28) $$
Step 2: Two refined tails are negligible. Recall $t_0$ in Lemma 2. Define $\gamma = t_0^{-2}$ and
$$ \Omega_1 = \{l \in \mathbb{N} : cm \le l < \gamma m - \sqrt{m}\log m\}, \quad \Omega_2 = \{l \in \mathbb{N} : \gamma m - \sqrt{m}\log m \le l \le \gamma m + \sqrt{m}\log m\}, \quad \Omega_3 = \{l \in \mathbb{N} : \gamma m + \sqrt{m}\log m < l \le Cm\}, \qquad (29) $$
where $c \in (0, \gamma)$ and $C > \gamma$ by (28) and (27). Note that
$$ \frac{1}{\sqrt{C}} < t_0 = \gamma^{-1/2} < \frac{1}{\sqrt{c}}. \qquad (30) $$
The limit in (20) asserts that $\Omega_2 \subset \{1, 2, \ldots, M_n\}$ as $n$ is large. Then
$$ \sum_{l=cm}^{Cm} q^l \cdot |P_{lm-j}(m-1)| = \sum_{i=1}^{3} \sum_{l \in \Omega_i} q^l \cdot |P_{lm-j}(m-1)|. $$
Easily,
$$ \sum_{l \in \Omega_1 \cup \Omega_3} q^l \cdot |P_{lm-j}(m-1)| \le \sum_{l \in \Omega_1 \cup \Omega_3} q^l \cdot |P_{lm}(m)|. $$
Taking $n = lm$ and $k = m$ in (11), we get
$$ |P_{lm}(m)| \sim \frac{f(u)}{lm}\, e^{\sqrt{lm}\, g(u)} $$
uniformly for all $cm \le l \le Cm$, where $u = (m/l)^{1/2}$. Notice that
$$ q^l \cdot |P_{lm}(m)| \sim \frac{f(u)}{lm}\, e^{-\lambda l + \sqrt{lm}\, g(u)}. \qquad (32) $$
Consider the function $-\lambda x + \sqrt{xm} \cdot g\left((m/x)^{1/2}\right)$ for $x \in [cm, Cm]$. Set $t = t_x = (m/x)^{1/2}$. Then
$$ -\lambda x + \sqrt{xm} \cdot g\left((m/x)^{1/2}\right) = -\frac{\lambda m}{t^2} + \frac{m}{t}\, g(t) = m\left(\frac{g(t)}{t} - \frac{\lambda}{t^2}\right). $$
By (12) and (13), $f$ is a continuous function on $[C^{-1/2}, c^{-1/2}]$. Therefore, $\frac{f\left((m/j)^{1/2}\right)}{jm} = O(m^{-2})$ uniformly for all $j \in \Omega_1 \cup \Omega_3$, which together with (32) yields
$$ \sum_{l \in \Omega_1 \cup \Omega_3} q^l \cdot |P_{lm-j}(m-1)| \le O\left(\frac{1}{m^2}\right) \sum_{l \in \Omega_1 \cup \Omega_3} \exp\left(m\left(\frac{g(t_l)}{t_l} - \frac{\lambda}{t_l^2}\right)\right) \le O\left(\frac{1}{m}\right) \cdot \exp\left(m \max_{l \in \Omega_1 \cup \Omega_3}\left(\frac{g(t_l)}{t_l} - \frac{\lambda}{t_l^2}\right)\right). \qquad (33) $$
Now,
max l Ω 1 Ω 3 g ( t l ) t l λ t l 2 = max l Ω 1 Ω 3 ψ m l .
Evidently,
m l , l Ω 1 m γ m m log m 1 / 2 , 1 c ( t 0 , ) ; m l , l Ω 3 1 C , m γ m + m log m 1 / 2 ( 0 , t 0 ) .
Recall Lemma 2, ψ ( t ) = g ( t ) t λ t 2 is increasing ( 0 , t 0 ] and decreasing in [ t 0 , ) . It follows that
max l Ω 1 Ω 3 g ( t l ) t l λ t l 2 max ψ m γ m m log m , ψ m γ m + m log m .
Recall that $t_0=\gamma^{-1/2}$. Notice
$$\Big[\Big(\frac{m}{\gamma m\pm\sqrt{m}\log m}\Big)^{1/2}-t_0\Big]^2 = \Big[\frac{1}{\sqrt{\gamma}}\Big(1\pm\frac{\log m}{\gamma\sqrt{m}}\Big)^{-1/2}-t_0\Big]^2 = \frac{(\log m)^2}{4\gamma^3 m}\,(1+o(1)).$$
By Taylor expansion and the fact that $\psi'(t_0)=0$, we see that
$$\psi\Big(\Big(\frac{m}{\gamma m\pm\sqrt{m}\log m}\Big)^{1/2}\Big) = \psi(t_0)-L\,\frac{(\log m)^2}{m}+O\big(m^{-3/2}(\log m)^3\big)$$
as $n$ is large, where $L=\frac{|\psi''(t_0)|}{8\gamma^3}>0$. This joins (33) to yield
$$\sum_{l\in\Omega_1\cup\Omega_3} q^l\,|P_{lm-j}(m-1)| \le m\, e^{m\psi(t_0)-(L/2)(\log m)^2}$$
as $n$ is large.
Step 3: The estimate over $\Omega_2$. Taking $n=lm-j$ and $k=m-1$ in (11), we get
$$|P_{lm-j}(m-1)| \sim \frac{f(u)}{lm-j}\, e^{\sqrt{lm-j}\,g(u)}$$
uniformly for all $cm\le l\le Cm$, where $u=\frac{m-1}{\sqrt{lm-j}}$.
For $l\in\Omega_2$, from (29),
$$\gamma m^2-m\sqrt{m}\log m-j \le lm-j \le \gamma m^2+m\sqrt{m}\log m-j.$$
Note that $1\le j\le m$. As $m\to\infty$ with $n$, we observe that
$$u=\frac{m-1}{\sqrt{lm-j}}\to\frac{1}{\sqrt{\gamma}}=t_0$$
and
$$\frac{m^2}{lm-j}\to\frac{1}{\gamma}=t_0^2.$$
Hence, by continuity,
$$\frac{f(u)}{lm-j} \sim t_0^2\, f(t_0)\cdot\frac{1}{m^2}$$
uniformly for all $l\in\Omega_2$. Consequently,
$$\sum_{l\in\Omega_2} q^l\,|P_{lm-j}(m-1)| = (1+o(1))\,\frac{t_0^2 f(t_0)}{m^2}\sum_{l\in\Omega_2}\exp\Big\{-\lambda l+\sqrt{lm-j}\cdot g\Big(\frac{m-1}{\sqrt{lm-j}}\Big)\Big\} \sim \frac{t_0^2 f(t_0)}{m^2}\, e^{-\lambda j/m}\sum_{l\in\Omega_2}\exp\Big\{-\frac{\lambda(m-1)^2}{m\,t_l^2}+\frac{m-1}{t_l}\,g(t_l)\Big\}$$
by setting $t_x=\frac{m-1}{\sqrt{mx-j}}$ for $x\ge 2$ (recall $1\le j\le m$), and hence $x=\frac{j}{m}+\frac{(m-1)^2}{m\,t_x^2}$. It is easy to verify that
$$\max_{l\in\Omega_2}|t_l-t_0| = O\Big(\frac{\log m}{\sqrt{m}}\Big)$$
as $n\to\infty$. We then have
$$\sum_{l\in\Omega_2} q^l\,|P_{lm-j}(m-1)| \sim \frac{t_0^2 f(t_0)}{m^2}\, e^{\lambda t_0^{-2}-\lambda j/m}\sum_{l\in\Omega_2}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\}.$$
Recall Lemma 2. Since $\psi'(t_0)=0$, it is seen from Taylor's expansion and (35) that
$$\psi(t_x) = \psi(t_0)+\frac{1}{2}\psi''(t_0)(t_x-t_0)^2+O\big(m^{-3/2}(\log m)^3\big)$$
uniformly for all $x\in\Omega_2$. It follows that
$$\sum_{l\in\Omega_2}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\} = (1+o(1))\cdot e^{(m-1)\psi(t_0)}\sum_{l\in\Omega_2}\exp\Big\{\frac{1}{2}\psi''(t_0)(t_l-t_0)^2\,m\Big\}.$$
It is trivial to check that
$$\frac{m-1}{\sqrt{mx-j}} = \frac{m-1}{\sqrt{mx}}+\frac{j}{2\gamma^{3/2}m^2}+O\Big(\frac{\log m}{m^2}\Big)$$
uniformly for all $x\in\Omega_2$. Therefore,
$$m\Big(\frac{m-1}{\sqrt{mx-j}}-t_0\Big)^2 = m\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big)^2+\frac{j}{\gamma^{3/2}m}\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big)+O\Big(\frac{\log m}{m}\Big) = m\Big(\frac{m-1}{\sqrt{mx}}-t_0\Big)^2+o(1)$$
uniformly for all $x\in\Omega_2$ by (35). This tells us that
$$\sum_{l\in\Omega_2}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\} = (1+o(1))\cdot e^{(m-1)\psi(t_0)}\sum_{l\in\Omega_2}\exp\Big\{\frac{1}{2}\psi''(t_0)\Big(\frac{m-1}{\sqrt{ml}}-t_0\Big)^2\,m\Big\}.$$
Set $a_m=\gamma m-\sqrt{m}\log m$, $b_m=\gamma m+\sqrt{m}\log m$, $c_m=(m-1)/\sqrt{m}$ and
$$\rho(x)=\exp\Big\{\frac{1}{2}\psi''(t_0)\Big(\frac{c_m}{\sqrt{x}}-t_0\Big)^2\,m\Big\}$$
for $x>0$. It is easy to check that there exists an absolute constant $C_1>0$ such that
$$\rho(x)\le e^{-C_1(\log m)^2}$$
for all $x\in(a_m,b_m)\setminus([a_m]+2,[b_m]-2)$. Hence,
$$\int_{a_m}^{b_m}\rho(x)\,dx = \sum_{l=[a_m]}^{[b_m]-1}\int_{l}^{l+1}\rho(x)\,dx+\epsilon_m,$$
where $|\epsilon_m|\le e^{-C_1(\log m)^2}$ for large $m$. Differentiating the expression $\rho(x)=\exp\{\frac{1}{2}\psi''(t_0)(c_m x^{-1/2}-t_0)^2\,m\}$, we get
$$\rho'(x) = -\frac{1}{2}\,\rho(x)\,\psi''(t_0)\Big(\frac{c_m}{\sqrt{x}}-t_0\Big)\,\frac{m\,c_m}{x^{3/2}}$$
for $x>0$. Easily, $m\,c_m\,x^{-3/2}=O(1)$ and $\frac{c_m}{\sqrt{x}}-t_0=O\big(\frac{\log m}{\sqrt{m}}\big)$ uniformly for all $[a_m]\le x\le[b_m]$. Thus,
$$|\rho'(x)|\le\frac{(\log m)^2}{\sqrt{m}}\,\rho(x)$$
for all $[a_m]\le x\le[b_m]$. Therefore, by integration by parts,
$$\Big|\int_l^{l+1}\rho(x)\,dx-\rho(l)\Big| = \Big|\int_l^{l+1}\rho'(x)\,(l+1-x)\,dx\Big| \le \int_l^{l+1}|\rho'(x)|\,dx \le \frac{(\log m)^2}{\sqrt{m}}\int_l^{l+1}\rho(x)\,dx$$
as $m$ is sufficiently large. This, (39) and (40) imply
$$\Big|\sum_{l\in\Omega_2}\rho(l)-\int_{a_m}^{b_m}\rho(x)\,dx\Big| \le \frac{(\log m)^2}{\sqrt{m}}\int_{a_m}^{b_m}\rho(x)\,dx+e^{-C_1(\log m)^2}.$$
Set $\gamma_m=(\log m)\,\gamma^{-3/2}/2$. We see from (37) and (38) that
$$\int_{a_m}^{b_m}\rho(x)\,dx = \frac{2c_m^2}{\sqrt{m}}\int_{-\gamma_m+o(1)}^{\gamma_m+o(1)}\frac{e^{\frac{1}{2}\psi''(t_0)u^2}}{\big(\frac{u}{\sqrt{m}}+t_0\big)^3}\,du = (1+o(1))\,\frac{2\sqrt{m}}{t_0^3}\int_{-\gamma_m}^{\gamma_m}e^{\frac{1}{2}\psi''(t_0)u^2}\,du = (1+o(1))\,\frac{2\sqrt{m}}{t_0^3}\int_{-\infty}^{\infty}e^{\frac{1}{2}\psi''(t_0)u^2}\,du \sim \sqrt{m}\cdot\frac{1}{t_0^3}\sqrt{\frac{8\pi}{|\psi''(t_0)|}}$$
by making the transform $u=\sqrt{m}\big(\frac{c_m}{\sqrt{x}}-t_0\big)$. Combining this, (37) and (41), we arrive at
$$e^{-(m-1)\psi(t_0)}\sum_{l\in\Omega_2}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\} = (1+o(1))\sum_{l\in\Omega_2}\rho(l) \sim \sqrt{m}\cdot\frac{1}{t_0^3}\sqrt{\frac{8\pi}{|\psi''(t_0)|}}$$
as $n$ is sufficiently large. This and (36) yield
$$\sum_{l\in\Omega_2} q^l\,|P_{lm-j}(m-1)| \sim \frac{t_0^2 f(t_0)}{m^2}\, e^{\lambda t_0^{-2}-\lambda j/m}\sum_{l\in\Omega_2}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\} \sim \frac{f(t_0)\,e^{\lambda t_0^{-2}-\psi(t_0)-\lambda j/m}}{t_0}\cdot\sqrt{\frac{8\pi}{|\psi''(t_0)|}}\cdot\frac{e^{m\psi(t_0)}}{m^{3/2}}$$
as $m\to\infty$.
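The Riemann-sum-to-Gaussian (Laplace) step carried out in this step can be sanity checked numerically. The sketch below is ours, with illustrative values $t_0=1$ (so $\gamma=1$) and $\psi''(t_0)=-1$ assumed; under these choices the approximation above reads $\sum_{l\in\Omega_2}\rho(l)\approx\sqrt{8\pi m}$.

```python
from math import exp, log, pi, sqrt

m = 40_000
t0, gamma_, psi2 = 1.0, 1.0, -1.0      # illustrative values (assumed); psi''(t0) = -1
c_m = (m - 1) / sqrt(m)

# Omega_2 = { l : gamma*m - sqrt(m)*log(m) <= l <= gamma*m + sqrt(m)*log(m) }
lo = int(gamma_ * m - sqrt(m) * log(m))
hi = int(gamma_ * m + sqrt(m) * log(m))

riemann_sum = sum(
    exp(0.5 * psi2 * (c_m / sqrt(l) - t0) ** 2 * m) for l in range(lo, hi + 1)
)
laplace_prediction = sqrt(m) / t0 ** 3 * sqrt(8 * pi / abs(psi2))
print(riemann_sum / laplace_prediction)   # close to 1
```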
Step 4: Wrap-up of the denominator. By the choice of $c$ in (28), we have $\sqrt{c}\le(4K)^{-1}\psi(t_0)$ in (26). Therefore, we get from (25) and (26) that
$$\Big(\sum_{l=1}^{cm}+\sum_{l=Cm}^{M_n+1}\Big)\, q^l\,|P_{lm-j}(m-1)| \le e^{\psi(t_0)m/2}$$
as $n$ is large. This and (31) imply
$$\sum_{l=1}^{M_n+1} q^l\,|P_{lm-j}(m-1)| = O\big(e^{\psi(t_0)m/2}\big)+\sum_{i=1}^{3}\sum_{l\in\Omega_i} q^l\,|P_{lm-j}(m-1)|$$
as $m\to\infty$. This identity together with (34) and (43) concludes that
$$\sum_{l=1}^{M_n+1} q^l\,|P_{lm-j}(m-1)| \sim \frac{f(t_0)\,e^{\lambda t_0^{-2}-\psi(t_0)-\lambda j/m}}{t_0}\cdot\sqrt{\frac{8\pi}{|\psi''(t_0)|}}\cdot\frac{e^{m\psi(t_0)}}{m^{3/2}}$$
as $m\to\infty$.
Step 5: The numerator. We need to show
$$\lim_{n\to\infty}P\Big(\frac{1}{\sqrt{m}}\Big(k_1-\frac{n}{m}-\frac{m}{t_0^2}\Big)\le x\Big) = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{x}e^{-\frac{t^2}{2\sigma^2}}\,dt$$
for every $x\in\mathbb{R}$, where $\sigma^2=\frac{4}{|\psi''(t_0)|\,t_0^6}$. Recall $\gamma=t_0^{-2}$. By (22),
$$P\Big(\frac{1}{\sqrt{m}}\Big(k_1-\frac{n}{m}-\frac{m}{t_0^2}\Big)\le x\Big) = \frac{\sum_{l=1}^{b_m'} q^l\,|P_{lm-j}(m-1)|}{\sum_{l=1}^{M_n+1} q^l\,|P_{lm-j}(m-1)|},$$
where $b_m'=[\gamma m+\sqrt{m}\,x]+1$. Recall that $\sqrt{c}\le(4K)^{-1}\psi(t_0)$. It is known from (44) that
$$\sum_{l=1}^{cm} q^l\,|P_{lm-j}(m-1)| \le e^{\psi(t_0)m/2}$$
as $n$ is large. Let $\Omega_1$ and $\Omega_2$ be as in (29). Set $\Omega_2'=\{l\in\mathbb{N};\ \gamma m-\sqrt{m}\log m\le l\le b_m'\}$. Notice $\Omega_2'\subset\Omega_2$ for large $m$. By (34), (36) and (47),
$$\sum_{l=1}^{b_m'} q^l\,|P_{lm-j}(m-1)| = O\big(e^{\psi(t_0)m/2}\big)+O\big(m\,e^{m\psi(t_0)-(L/2)(\log m)^2}\big)+\sum_{l\in\Omega_2'} q^l\,|P_{lm-j}(m-1)| = O\big(m\,e^{m\psi(t_0)-(L/2)(\log m)^2}\big)+(1+o(1))\,\frac{t_0^2 f(t_0)}{m^2}\,e^{\lambda t_0^{-2}-\lambda j/m}\sum_{l\in\Omega_2'}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\}$$
as $m\to\infty$. Review the derivation between (37) and (42) with $b_m$ replaced by $b_m'$. By the fact $\Omega_2'\subset\Omega_2$ for large $m$ again, we have
$$e^{-(m-1)\psi(t_0)}\sum_{l\in\Omega_2'}\exp\Big\{(m-1)\Big(\frac{g(t_l)}{t_l}-\frac{\lambda}{t_l^2}\Big)\Big\} = \int_{a_m}^{b_m'}\rho(x)\,dx+\epsilon_m+O\Big(\frac{(\log m)^2}{\sqrt{m}}\Big)\int_{a_m}^{b_m'}\rho(x)\,dx,$$
where, as mentioned before, $a_m=\gamma m-\sqrt{m}\log m$ and $|\epsilon_m|\le e^{-C_1(\log m)^2}$ for large $m$. Let us evaluate the integral above. In fact, from (38) we see that
$$\int_{a_m}^{b_m'}\rho(x)\,dx = \int_{a_m}^{b_m'}\exp\Big\{\frac{1}{2}\psi''(t_0)\Big(\frac{c_m}{\sqrt{x}}-t_0\Big)^2\,m\Big\}\,dx.$$
Recall the fact $\gamma=t_0^{-2}$. Set $w=\sqrt{m}\big(\frac{c_m}{\sqrt{x}}-t_0\big)$. Then
$$\int_{a_m}^{b_m'}\rho(x)\,dx = \frac{2c_m^2}{\sqrt{m}}\int_{-\frac{x}{2\gamma^{3/2}}+o(1)}^{\gamma_m+o(1)}\frac{e^{-\frac{1}{2}|\psi''(t_0)|w^2}}{\big(\frac{w}{\sqrt{m}}+t_0\big)^3}\,dw = (1+o(1))\,\frac{2\sqrt{m}}{t_0^3}\int_{-\frac{x}{2\gamma^{3/2}}}^{\infty}e^{-\frac{1}{2}|\psi''(t_0)|w^2}\,dw = (1+o(1))\,\frac{\sqrt{m}}{t_0^3\,\gamma^{3/2}}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw = (1+o(1))\,\sqrt{m}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw,$$
where $\gamma_m=(\log m)\,\gamma^{-3/2}/2$ and $\sigma^2=4\gamma^3/|\psi''(t_0)|$. Collect the assertions from (48) to the above to obtain
$$\sum_{l=1}^{b_m'} q^l\,|P_{lm-j}(m-1)| = (1+o(1))\,\frac{t_0^2 f(t_0)}{m^2}\,e^{\lambda t_0^{-2}-\lambda j/m}\cdot e^{(m-1)\psi(t_0)}\cdot\sqrt{m}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw \sim t_0^2\, f(t_0)\,e^{\lambda t_0^{-2}-\psi(t_0)-\lambda j/m}\cdot\frac{e^{m\psi(t_0)}}{m^{3/2}}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw$$
as $m\to\infty$. Join this with (45) and (46) to conclude that
$$P\Big(\frac{1}{\sqrt{m}}\Big(k_1-\frac{n}{m}-\frac{m}{t_0^2}\Big)\le x\Big) \to \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{x}e^{-w^2/(2\sigma^2)}\,dw$$
as $m\to\infty$. Notice that $\sigma^2=4\gamma^3/|\psi''(t_0)|=\frac{4}{|\psi''(t_0)|\,t_0^6}$. The proof is completed by using Lemma 2 and the fact $\gamma=t_0^{-2}$.      □

3. Proofs of Theorems 3 and 4 and Corollaries 2 and 3

In Section 3.1 below, we prove Theorem 3 and Corollaries 2 and 3, where $m$ is assumed to be a fixed integer. Theorem 4 studies the case when $m$ tends to infinity with $n$ and $m=o(n^{1/3})$; its proof is given in Section 3.2.

3.1. The Proofs of Theorem 3 and Corollaries 2 and 3

From [4], we have
$$|P_n(m)| \sim \binom{n-1}{m-1}\frac{1}{m!}$$
uniformly for $m=o(n^{1/3})$, in the sense that for any $\epsilon>0$, whenever $0<m^3/n<\epsilon$, the ratio of $|P_n(m)|$ to $\binom{n-1}{m-1}\frac{1}{m!}$ remains between $1\pm\epsilon$ as $n\to\infty$. We start with the proof of Theorem 3.
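The asymptotic formula (49) is easy to probe numerically. The sketch below is our illustration: it counts $|P_n(m)|$ exactly via the conjugation identity (at most $m$ parts is equivalent to parts of size at most $m$) and compares the count against $\binom{n-1}{m-1}/m!$.

```python
from math import comb, factorial

def p_at_most(n, m):
    # partitions of n into at most m parts, counted via the conjugate:
    # partitions of n into parts of size <= m
    p = [0] * (n + 1)
    p[0] = 1
    for part in range(1, m + 1):
        for j in range(part, n + 1):
            p[j] += p[j - part]
    return p[n]

n, m = 2000, 4
ratio = p_at_most(n, m) / (comb(n - 1, m - 1) / factorial(m))
print(ratio)   # close to 1
```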
Proof of Theorem 3.
To prove the conclusion, it suffices to show that for any bounded continuous function $\psi$ on $\overline{\nabla}_{m-1}$,
$$E\,\psi\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big) \to E\,\psi(x_1,\ldots,x_m)$$
as $n$ tends to infinity, where $(x_1,\ldots,x_m)\sim\mu$. By definition,
$$E\,\psi\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big) = \frac{\sum_{(k_1,\ldots,k_m)\in P_n(m)}\psi\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)\,f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)}{\sum_{(k_1,\ldots,k_m)\in P_n(m)}f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)} = \frac{n^{-(m-1)}\sum_{(k_1,\ldots,k_m)\in R_n(m)}\psi\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)\,f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)}{n^{-(m-1)}\sum_{(k_1,\ldots,k_m)\in P_n(m)}f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)}+E_{n,m},$$
where the set
$$R_n(m) := \{(k_1,\ldots,k_m)\vdash n;\ k_1>\cdots>k_m>0\}$$
and
$$E_{n,m} := \frac{\sum_{(k_1,\ldots,k_m)\in P_n(m)\setminus R_n(m)}\psi\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)\,f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)}{\sum_{(k_1,\ldots,k_m)\in P_n(m)}f\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)}.$$
On the other hand, since $\int_{\nabla_{m-1}}f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1}=1$,
$$E\,\psi(x_1,\ldots,x_m) = \int_{\nabla_{m-1}}\psi(y_1,\ldots,y_m)\,f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1} = \frac{\int_{\nabla_{m-1}}\psi(y_1,\ldots,y_m)\,f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1}}{\int_{\nabla_{m-1}}f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1}}.$$
In order to compare (50) and (51), we divide the proof into a few steps.
Step 1: Estimate of $|E_{n,m}|$. We claim that the term $E_{n,m}$ is negligible as $n\to\infty$. We first estimate the size of $R_n(m)$. For any $(k_1,\ldots,k_m)\in R_n(m)$, set $j_i=k_i-(m-i+1)$ for $1\le i\le m$. It is easy to verify that $j_{i-1}-j_i=k_{i-1}-k_i-1\ge 0$ for $2\le i\le m$. Thus,
$$j_1+\cdots+j_m = n-\binom{m+1}{2}$$
and $j_1\ge\cdots\ge j_m\ge 0$. Therefore, $(j_1,\ldots,j_m)\in P_{n-\binom{m+1}{2}}(m)$. Indeed, this transform is a bijection between $R_n(m)$ and $P_{n-\binom{m+1}{2}}(m)$, which implies
$$|R_n(m)| = \Big|P_{n-\binom{m+1}{2}}(m)\Big|.$$
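The staircase bijection $k_i\mapsto j_i=k_i-(m-i+1)$ can be verified by brute force on small cases; the following check is our own illustration, not part of the proof.

```python
from itertools import combinations

def count_strict(n, m):
    # |R_n(m)|: partitions of n into exactly m distinct positive parts
    return sum(1 for c in combinations(range(1, n), m) if sum(c) == n)

def p_at_most(n, m):
    # |P_n(m)|: partitions of n into at most m parts (conjugate DP)
    p = [0] * (n + 1)
    p[0] = 1
    for part in range(1, m + 1):
        for j in range(part, n + 1):
            p[j] += p[j - part]
    return p[n]

# |R_n(m)| should equal |P_{n - m(m+1)/2}(m)| whenever n >= m(m+1)/2
checks = [(n, m) for n in range(6, 26) for m in (2, 3, 4) if n >= m * (m + 1) // 2]
bijection_ok = all(
    count_strict(n, m) == p_at_most(n - m * (m + 1) // 2, m) for n, m in checks
)
print(bijection_ok)
```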
On the other hand, we know from (49),
$$|P_N(m)| \sim \binom{N-1}{m-1}\frac{1}{m!}$$
as $N\to\infty$. Thus, by Stirling's formula,
$$\frac{|R_n(m)|}{|P_n(m)|} \sim \binom{n-\binom{m+1}{2}-1}{m-1}\binom{n-1}{m-1}^{-1} = \frac{\big(n-\binom{m+1}{2}-1\big)!\,(n-m)!}{(n-1)!\,\big(n-\binom{m+1}{2}-m\big)!} \sim \Big(1-\frac{m}{n}\Big)^{1/2}\Big(1-\frac{m}{n-\binom{m+1}{2}}\Big)^{-1/2}\cdot\frac{\big(1-\frac{m}{n}\big)^{n}\,\big(1-\frac{\binom{m+1}{2}}{n-m}\big)^{m}}{\big(1-\frac{m}{n-\binom{m+1}{2}}\big)^{\,n-\binom{m+1}{2}}}$$
as $n\to\infty$. By the assumption $m=o(\sqrt{n})$, we have $n-\binom{m+1}{2}-m\to\infty$ with $n$. Using the fact that $\lim_{N\to\infty}(1+\frac{x}{N})^{N}=e^{x}$, we obtain
$$\frac{|R_n(m)|}{|P_n(m)|} \sim \exp\Big\{-\frac{m\binom{m+1}{2}}{n-m}\Big\}.$$
Thus, as long as $m=o(n^{1/3})$,
$$|R_n(m)| \sim |P_n(m)| \quad\mathrm{and}\quad |P_n(m)\setminus R_n(m)| = o\big(|P_n(m)|\big)$$
as $n\to\infty$.
Further, since $\int_{\nabla_{m-1}}f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1}=1$, there exists a region $S$ in $\overline{\nabla}_{m-1}$ whose measure satisfies $|S|\ge\mu\,|\nabla_{m-1}|$ for some constant $\mu>0$, such that $f(y_1,\ldots,y_m)>c$ on $S$ for some $c>0$. Thus, for $n$ sufficiently large, $f(k_1/n,\ldots,k_m/n)>c_0>0$ for $(k_1,\ldots,k_m)$ in a subset of $P_n(m)$ with cardinality at least a small fraction of $|P_n(m)|$. Moreover, since the functions $\psi$ and $f$ are bounded on $\nabla_{m-1}$, we conclude
$$|E_{n,m}| = O\Big(\frac{|P_n(m)\setminus R_n(m)|}{|P_n(m)|}\Big) = o(1)$$
as $n\to\infty$, as long as $m=o(n^{1/3})$.
Step 2: Compare the numerators of (50) and (51). For convenience, denote
$$G(y_1,\ldots,y_{m-1}) = \psi\Big(y_1,\ldots,y_{m-1},\,1-\sum_{i=1}^{m-1}y_i\Big)\,f\Big(y_1,\ldots,y_{m-1},\,1-\sum_{i=1}^{m-1}y_i\Big).$$
Since $\psi$ and $f$ are bounded continuous functions on $\overline{\nabla}_{m-1}$, it is easy to check that $G$ is also bounded and continuous on $\overline{\nabla}_{m-1}$. We can rewrite the numerator in (50) as follows:
$$I_1 := \frac{1}{n^{m-1}}\sum_{\substack{k_1>\cdots>k_m>0\\ k_1+\cdots+k_m=n}}G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big) = \frac{1}{n^{m-1}}\sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)\,I_{A_n} = \sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)\,I_{A_n}\,dy_1\cdots dy_{m-1},$$
where $I_{A_n}$ is the indicator function of the set $A_n$ defined below:
$$A_n = \frac{1}{n}\Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ \frac{k_1}{n}>\cdots>\frac{k_{m-1}}{n}>1-\sum_{i=1}^{m-1}\frac{k_i}{n}>0\Big\}.$$
Similarly,
$$I_2 := \int_{\nabla_{m-1}}G(y_1,\ldots,y_{m-1})\,dy_1\cdots dy_{m-1} = \int_{[0,1]^{m-1}}G(y_1,\ldots,y_{m-1})\,I_A\,dy_1\cdots dy_{m-1} = \sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}G(y_1,\ldots,y_{m-1})\,I_A\,dy_1\cdots dy_{m-1},$$
where $I_A$ is the indicator function of the set
$$A = \Big\{(x_1,\ldots,x_{m-1})\in[0,1]^{m-1};\ x_1>\cdots>x_{m-1}>1-\sum_{i=1}^{m-1}x_i\ge 0\Big\}.$$
Now, we estimate the difference between the numerators in (50) and (51).
$$I_1-I_2 = \sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}\Big[G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)\,I_{A_n}-G(y_1,\ldots,y_{m-1})\,I_A\Big]\,dy_1\cdots dy_{m-1},$$
which is identical to
$$\sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}\Big[G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)-G(y_1,\ldots,y_{m-1})\Big]\,I_{A_n}\,dy_1\cdots dy_{m-1}+\sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}G(y_1,\ldots,y_{m-1})\,\big[I_{A_n}-I_A\big]\,dy_1\cdots dy_{m-1} := S_1+S_2.$$
Step 3: Estimate of $S_1$. Since $G$ is uniformly continuous on $\overline{\nabla}_{m-1}$, for any $\varepsilon>0$ and any $y_i\in[\frac{k_i-1}{n},\frac{k_i}{n}]$ $(1\le i\le m-1)$,
$$\Big|G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)-G(y_1,\ldots,y_{m-1})\Big| < \varepsilon$$
when $n$ is sufficiently large. Thus,
$$|S_1| \le \sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}\Big|G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)-G(y_1,\ldots,y_{m-1})\Big|\,dy_1\cdots dy_{m-1} \le \varepsilon\cdot\frac{1}{n^{m-1}}\cdot n^{m-1} = \varepsilon$$
for $n$ sufficiently large.
Step 4: Estimate of $S_2$. Since $G$ is bounded on $\overline{\nabla}_{m-1}$, $\|G\|_\infty := \sup_{x\in\overline{\nabla}_{m-1}}|G(x)|<\infty$ and thus,
$$|S_2| \le \|G\|_\infty\sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}\big|I_{A_n}-I_A\big|\,dy_1\cdots dy_{m-1}.$$
Now, we control $|I_{A_n}-I_A|$ provided $\frac{k_i-1}{n}<y_i<\frac{k_i}{n}$ for $1\le i\le m-1$. By definition,
$$I_{A_n} = \begin{cases}1, & \text{if } \frac{k_1}{n}>\cdots>\frac{k_{m-1}}{n}>1-\sum_{i=1}^{m-1}\frac{k_i}{n}>0,\\[2pt] 0, & \text{otherwise},\end{cases}$$
and
$$I_A = \begin{cases}1, & \text{if } y_1>\cdots>y_{m-1}>1-\sum_{i=1}^{m-1}y_i\ge 0,\\[2pt] 0, & \text{otherwise}.\end{cases}$$
Let $B_n$ be the subset of $A_n$ given by
$$B_n = A_n\cap\frac{1}{n}\Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ \frac{k_{m-1}}{n}+\sum_{i=1}^{m-1}\frac{k_i}{n}>\frac{m}{n}+1\Big\}.$$
Given $\big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\big)\in B_n$, for any
$$\frac{k_1-1}{n}<y_1<\frac{k_1}{n},\ \ldots,\ \frac{k_{m-1}-1}{n}<y_{m-1}<\frac{k_{m-1}}{n},$$
it is easy to verify from (57) and (56) that $I_A=1$. Hence,
$$I_{A_n} = I_{B_n}+I_{A_n\setminus B_n} \le I_A+I_{A_n\cap\{k_{m-1}+\sum_{i=1}^{m-1}k_i\le n+m\}} \le I_A+\sum_{j=n+1}^{n+m}I_{E_j},$$
where
$$E_j = \Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ k_1>\cdots>k_{m-1}\ge 1,\ k_{m-1}+\sum_{i=1}^{m-1}k_i=j,\ \sum_{i=1}^{m-1}k_i<n\Big\}$$
for $n+1\le j\le m+n$. Let us estimate the size of $|E_j|$. From the last two restrictions, we obtain $k_{m-1}>j-n$. Since $\sum_{i=1}^{m-1}k_i<n$ and $k_i>k_{m-1}$ for $1\le i\le m-2$, we have $j-n+1\le k_{m-1}\le\frac{n}{m-1}$.
For each fixed $k_{m-1}$, since $(k_1,\ldots,k_{m-2})$ with $k_1>\cdots>k_{m-2}$ is an ordered positive integer solution to the linear equation $\sum_{i=1}^{m-2}k_i=j-2k_{m-1}$, we have
$$|E_j| \le \sum_{j-n+1\le l\le\frac{n}{m-1}}\binom{j-2l-1}{m-3}\frac{1}{(m-2)!} \le \Big(\frac{n}{m-1}+n-j\Big)\binom{2n-j-3}{m-3}\frac{1}{(m-2)!}.$$
As a result, we obtain the crude upper bound
$$\sum_{j=n+1}^{n+m}|E_j| \le \sum_{j=n+1}^{n+m}\Big(\frac{n}{m-1}+n-j\Big)\binom{2n-j-3}{m-3}\frac{1}{(m-2)!} \le \frac{m\cdot n^{m-2}}{(m-1)!\,(m-3)!}.$$
On the other hand, consider a subset of $A_n^c := \{\frac{1}{n},\frac{2}{n},\ldots,1\}^{m-1}\setminus A_n$ defined by
$$C_n = \frac{1}{n}\Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ \text{either } k_i\le k_{i+1}-1 \text{ for some } 1\le i\le m-2,\ \text{or } k_1+\cdots+k_{m-2}+2k_{m-1}\le n,\ \text{or } k_1+\cdots+k_{m-1}\ge m+n-1\Big\}.$$
Set $A^c=[0,1]^{m-1}\setminus A$. Given $\big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\big)\in C_n$, for any $k_i$'s and $y_i$'s satisfying (58), it is not difficult to check that $I_{A^c}=1$. Consequently,
$$I_{A_n^c} = I_{C_n}+I_{\{(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n})\in A_n^c;\ k_i>k_{i+1}-1\ \text{for all}\ 1\le i\le m-2,\ k_1+\cdots+k_{m-2}+2k_{m-1}>n,\ \text{and}\ k_1+\cdots+k_{m-1}<m+n-1\}} \le I_{A^c}+I_{D_{n,m,1}}+I_{D_{n,m,2}},$$
or equivalently,
$$I_{A_n} \ge I_A-I_{D_{n,m,1}}-I_{D_{n,m,2}},$$
where
$$D_{n,m,1} = \bigcup_{l=n}^{n+m-2}\frac{1}{n}\Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ \sum_{i=1}^{m-1}k_i=l,\ k_1\ge\cdots\ge k_{m-1}\Big\};$$
$$D_{n,m,2} = \bigcup_{l=1}^{m-2}\frac{1}{n}\Big\{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1};\ k_l=k_{l+1},\ k_1\ge\cdots\ge k_{m-1},\ \sum_{i=1}^{m-1}k_i+k_{m-1}\ge n+1,\ \sum_{i=1}^{m-1}k_i\le n+m-2\Big\}.$$
By the definition of partitions and (49), we have the following bound on $|D_{n,m,1}|$:
$$|D_{n,m,1}| \le \sum_{l=n}^{n+m-2}|P_l(m-1)| \le \sum_{l=n}^{n+m-2}\binom{l-1}{m-2}\frac{1}{(m-1)!} \le (m-1)\binom{n+m-2}{m-2}\frac{1}{(m-1)!} \le \frac{(n+m-2)^{m-2}}{[(m-2)!]^2}$$
as $n\to\infty$.
The estimation of $|D_{n,m,2}|$ uses the same argument as in (60). For the cases $m=3$ or $m=4$, it is easy to verify that $|D_{n,m,2}|=O(n^{m-2})$. Now, we assume $m\ge 5$. First, from the decreasing order of the $k_i$ and $\sum_{i=1}^{m-1}k_i\le n+m-2$, we determine the range of $k_{m-1}$:
$$1\le k_{m-1}\le\frac{n+m-2}{m-1}.$$
On the other hand, $n+1-2k_{m-1}\le\sum_{i=1}^{m-2}k_i\le n+m-2-k_{m-1}$. If $l\le m-3$, from the restriction $k_l=k_{l+1}$, we see that $k_1+\cdots+k_{l-1}+k_{l+2}+\cdots+k_{m-2}=s-2k_l$ gives an ordered positive integer solution to the equation $j_1+\cdots+j_{m-4}=s-2k_l$, where $s=\sum_{i=1}^{m-2}k_i$ and $n+1-2k_{m-1}\le s\le n+m-2-k_{m-1}$. If $l=m-2$, then $k_1+\cdots+k_{m-3}=s-2k_{m-1}$ and $n+1-3k_{m-1}\le s-2k_{m-1}\le n+m-2-2k_{m-1}$. Therefore, we have the following crude upper bound:
$$|D_{n,m,2}| \le \sum_{l=1}^{m-3}\sum_{k_{m-1}=1}^{\frac{n+m-2}{m-1}}\sum_{s=n+1-2k_{m-1}}^{n+m-2-k_{m-1}}\sum_{k_{m-1}\le k_l\le s/2}\binom{s-2k_l-1}{m-5}\frac{1}{(m-4)!}+\sum_{k_{m-1}=1}^{\frac{n+m-2}{m-1}}\sum_{s=n+1-3k_{m-1}}^{n+m-2-2k_{m-1}}\binom{s-k_{m-1}-1}{m-4}\frac{1}{(m-3)!} = O\Big(\frac{n^3(m-3)}{m^2\,(m-4)!}\binom{n+m-6}{m-5}\Big)+O\Big(\frac{n^2}{m^2\,(m-3)!}\binom{n+m-6}{m-4}\Big) = O\Big(\frac{n^2(n+m)^{m-4}}{m\,(m-4)!\,(m-5)!}\Big).$$
Joining (59) and (61), and assuming (58) holds, we arrive at
$$|I_{A_n}-I_A| \le I_{D_{n,m,1}}+I_{D_{n,m,2}}+\sum_{i=n+1}^{n+m}I_{E_i}.$$
Observing that the $D_{n,m,i}$'s and $E_i$'s do not depend on the $y_i$'s, we obtain from (55) that
$$|S_2| \le \|G\|_\infty\sum_{k_1=1}^{n}\cdots\sum_{k_{m-1}=1}^{n}\Big(\sum_{i=1}^{2}I_{D_{n,m,i}}+\sum_{i=n+1}^{n+m}I_{E_i}\Big)\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}1\,dy_1\cdots dy_{m-1} = \|G\|_\infty\Big(\sum_{i=1}^{2}|D_{n,m,i}|+\sum_{i=n+1}^{n+m}|E_i|\Big)\cdot\frac{1}{n^{m-1}}.$$
For $2\le m\le 4$,
$$|S_2| = O(n^{-1}).$$
For $m\ge 5$, by (60), (62) and (63),
$$|S_2| = O\Big(\Big[\frac{m\cdot n^{m-2}}{(m-1)!\,(m-3)!}+\frac{(n+m)^{m-2}}{[(m-2)!]^2}+\frac{n^2(n+m)^{m-4}}{m\,(m-4)!\,(m-5)!}\Big]\cdot\frac{1}{n^{m-1}}\Big) = O\Big(\Big(1+\frac{m}{n}\Big)^{m}\,\frac{m}{n}\Big)$$
as $n\to\infty$.
Step 5: Difference between the expectations in (50) and (51). For any $\varepsilon>0$, from Steps 3 and 4, we obtain the following bound on the difference between the numerators in (50) and (51):
$$|I_1-I_2| \le |S_1|+|S_2| \le \varepsilon+O\Big(\Big(1+\frac{m}{n}\Big)^{m}\,\frac{m}{n}\Big) < 2\varepsilon$$
for $n$ sufficiently large. Choosing $\psi\equiv 1$ on $\overline{\nabla}_{m-1}$, we obtain the difference between the denominators in (50) and (51) as follows:
$$\Big|n^{-(m-1)}\sum_{(k_1,\ldots,k_m)\in P_n(m)}f\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big)-\int_{\nabla_{m-1}}f(y_1,\ldots,y_m)\,dy_1\cdots dy_{m-1}\Big| < 2\varepsilon$$
for $n$ sufficiently large.
Finally, we compare the expectations (50) and (51). Since $m$ is fixed, by (52), (64), and the triangle inequality,
$$\Big|E\,\psi\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big)-E\,\psi(x_1,\ldots,x_m)\Big| \to 0$$
as n . This completes the proof.      □
Next, we provide the proof of Corollary 2.
Proof of Corollary 2.
By Theorem 3,
$$\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big) \to (x_1,\ldots,x_m)\sim\mu$$
as $n\to\infty$, where $\mu$ has pdf
$$g(y_1,\ldots,y_m) = \frac{y_1^{\alpha-1}\cdots y_m^{\alpha-1}}{\int_{\nabla_{m-1}}y_1^{\alpha-1}\cdots y_m^{\alpha-1}\,dy_1\cdots dy_{m-1}}.$$
It suffices to show that the order statistics $(X_{(1)},\ldots,X_{(m)})$ of $(X_1,\ldots,X_m)\sim\mathrm{Dir}(\alpha)$ have the same pdf on $\nabla_{m-1}$. For any continuous function $\psi$ defined on $\nabla_{m-1}$, by symmetry,
$$E\,\psi(X_{(1)},\ldots,X_{(m)}) = \int_{W_{m-1}}\psi(y_{(1)},\ldots,y_{(m)})\,\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,y_1^{\alpha-1}\cdots y_m^{\alpha-1}\,dy_1\cdots dy_{m-1} = \int_{W_{m-1}}\sum_{\sigma\in S_m}\psi(y_{\sigma(1)},\ldots,y_{\sigma(m)})\,1_{\{y_{\sigma(1)}\ge\cdots\ge y_{\sigma(m)}\}}\,\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,y_{\sigma(1)}^{\alpha-1}\cdots y_{\sigma(m)}^{\alpha-1}\,dy_1\cdots dy_{m-1} = \int_{\nabla_{m-1}}\psi(y_1,\ldots,y_m)\,m!\,\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,y_1^{\alpha-1}\cdots y_m^{\alpha-1}\,dy_1\cdots dy_{m-1}.$$
Therefore, the pdf of $(X_{(1)},\ldots,X_{(m)})$ is
$$m!\,\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,y_1^{\alpha-1}\cdots y_m^{\alpha-1}$$
on the set $\nabla_{m-1}$. Similarly, by the definition of a pdf we have
$$\int_{W_{m-1}}\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,x_1^{\alpha-1}\cdots x_m^{\alpha-1}\,dx_1\cdots dx_{m-1} = 1.$$
By symmetry, we obtain
$$\int_{\nabla_{m-1}}y_1^{\alpha-1}\cdots y_m^{\alpha-1}\,dy_1\cdots dy_{m-1} = \frac{\Gamma(\alpha)^m}{m!\,\Gamma(m\alpha)}.$$
Comparing the above with (66) and (65), we complete the proof.      □
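The normalization just used can be confirmed numerically in the smallest case $m=2$, where the integral over the ordered simplex reduces to a one-dimensional integral over $y_1\in(\frac12,1)$ with $y_2=1-y_1$. This is our own sanity check, not part of the proof.

```python
from math import gamma

def ordered_integral_m2(alpha, steps=100_000):
    # midpoint rule for int_{1/2}^{1} (y (1 - y))^(alpha - 1) dy,
    # the m = 2 case of the integral over the ordered simplex
    a, b = 0.5, 1.0
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        y = a + (i + 0.5) * h
        total += (y * (1.0 - y)) ** (alpha - 1.0)
    return total * h

def closed_form(alpha):
    # Gamma(alpha)^m / (m! * Gamma(m * alpha)) with m = 2
    return gamma(alpha) ** 2 / (2 * gamma(2 * alpha))

for alpha in (1.0, 2.0, 3.0):
    print(alpha, ordered_integral_m2(alpha), closed_form(alpha))
```

For instance, with $\alpha=2$ both sides equal $1/12$.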
We conclude this subsection with the proof of Corollary 3.
Proof of Corollary 3.
By Theorem 3 or Corollary 2,
$$\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big) \to (\tilde{Y}_1,\ldots,\tilde{Y}_m)\sim\mu$$
as $n\to\infty$, where $\mu$ has pdf
$$m!\cdot\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,(y_1\cdots y_m)^{\alpha-1}$$
on $\nabla_{m-1}$ and zero elsewhere. Since $x\mapsto x^{\alpha}$ is continuous,
$$\Big(\Big(\frac{k_1}{n}\Big)^{\alpha},\ldots,\Big(\frac{k_m}{n}\Big)^{\alpha}\Big) \to \big(\tilde{Y}_1^{\alpha},\ldots,\tilde{Y}_m^{\alpha}\big)$$
as $n\to\infty$.
Now, it suffices to show that $(\tilde{Y}_1^{\alpha},\ldots,\tilde{Y}_m^{\alpha})$ has the uniform distribution on the set
$$U_{m-1} = \Big\{(x_1,\ldots,x_m)\in[0,1]^m;\ \sum_{i=1}^{m}x_i^{1/\alpha}=1,\ x_1\ge\cdots\ge x_m\Big\}.$$
This can be seen by a change of variables. For any continuous function $\psi$ defined on $\nabla_{m-1}$,
$$E\,\psi\big(\tilde{Y}_1^{\alpha},\ldots,\tilde{Y}_m^{\alpha}\big) = \int_{\nabla_{m-1}}\psi(y_1^{\alpha},\ldots,y_m^{\alpha})\,m!\cdot\frac{\Gamma(m\alpha)}{\Gamma(\alpha)^m}\,y_1^{\alpha-1}\cdots y_m^{\alpha-1}\,dy_1\cdots dy_{m-1} = \int_{U_{m-1}}\psi(x_1,\ldots,x_m)\,\frac{m!\cdot\Gamma(m\alpha)}{\alpha^{m-1}\,\Gamma(\alpha)^m}\,dx_1\cdots dx_{m-1}.$$
In the last equality, we set $x_i=y_i^{\alpha}$ for $1\le i\le m$. Therefore, the pdf of $(\tilde{Y}_1^{\alpha},\ldots,\tilde{Y}_m^{\alpha})$ is a constant on $U_{m-1}$, that is, the uniform distribution on $U_{m-1}$. The proof is complete.      □

3.2. The Proof of Theorem 4

In Section 3.1 we studied the asymptotic distribution of $(\frac{k_1}{n},\ldots,\frac{k_m}{n})$ when $m$ is fixed. Now, we consider the case where $m$ depends on $n$. Note that Formula (49) holds as long as $m=o(n^{1/3})$.
Let $\mu$ and $\nu$ be two Borel probability measures on a Polish space $S$ with the Borel $\sigma$-algebra $\mathcal{B}(S)$. Define
$$\rho(\mu,\nu) = \sup_{\|\varphi\|_L\le 1}\Big|\int_S\varphi(x)\,\mu(dx)-\int_S\varphi(x)\,\nu(dx)\Big|,$$
where $\varphi$ runs over the bounded Lipschitz functions defined on $S$, with $\|\varphi\|_\infty = \sup_{x\in S}|\varphi(x)|$ and $\|\varphi\|_L = \|\varphi\|_\infty+\sup_{x\ne y}|\varphi(x)-\varphi(y)|/|x-y|$. It is known that $\mu_n$ converges to $\mu$ weakly if and only if $\lim_{n\to\infty}\int\varphi(x)\,\mu_n(dx)=\int\varphi(x)\,\mu(dx)$ for every bounded and Lipschitz continuous function $\varphi$, and if and only if $\lim_{n\to\infty}\rho(\mu_n,\mu)=0$; see, e.g., Chapter 11 of [30].
Let $\{X_i, X_{n,i};\ n\ge 1, i\ge 1\}$ be random variables taking values in $[0,1]$. Set $X_n=(X_{n1},X_{n2},\ldots)\in[0,1]^{\infty}$. If $X_{ni}=0$ for $i>m$, we simply write $X_n=(X_{n1},\ldots,X_{nm})$. We say that $X_n$ converges weakly to $X:=(X_1,X_2,\ldots)$ as $n\to\infty$ if, for any $r\ge 1$, $(X_{n1},\ldots,X_{nr})$ converges weakly to $(X_1,\ldots,X_r)$ as $n\to\infty$. This convergence is actually the same as the weak convergence of random variables in $([0,1]^{\infty},d)$, where
$$d(x,y) = \sum_{i=1}^{\infty}\frac{|x_i-y_i|}{2^i}$$
for $x=(x_1,x_2,\ldots)\in[0,1]^{\infty}$ and $y=(y_1,y_2,\ldots)\in[0,1]^{\infty}$. The topology generated by this metric is the same as the product topology.
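As a small illustration (ours), the metric above is straightforward to evaluate on finitely supported sequences, and it makes explicit that perturbing a late coordinate moves a point very little, which is exactly why convergence in $d$ agrees with coordinatewise (product-topology) convergence.

```python
def d(x, y):
    # d(x, y) = sum_i |x_i - y_i| / 2^i for [0,1]-valued sequences;
    # finite inputs are padded with zeros (coordinates beyond the end are 0)
    n = max(len(x), len(y))
    xs = list(x) + [0.0] * (n - len(x))
    ys = list(y) + [0.0] * (n - len(y))
    return sum(abs(a - b) / 2 ** (i + 1) for i, (a, b) in enumerate(zip(xs, ys)))

print(d([1], []))            # a difference in coordinate 1 contributes 1/2
print(d([0, 0, 1], []))      # the same difference in coordinate 3 contributes 1/8
```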
Lemma 3.
Let $m=m_n\to\infty$ as $n\to\infty$. Let $\kappa=(k_1,\ldots,k_m)\in P_n(m)$ be chosen with probability as in (3) under the assumption of Theorem 4. Let $(X_{m,1},\ldots,X_{m,m})$ and $X=(X_1,X_2,\ldots)$ be random variables taking values in $\nabla_{m-1}$ and $\nabla$, respectively. If
$$\sup_{\|\varphi\|_L\le 1}\Big|E\,\varphi\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big)-E\,\varphi(X_{m,1},\ldots,X_{m,m})\Big| \to 0$$
as $n\to\infty$, and $(X_{m,1},\ldots,X_{m,m})$ converges weakly to $X$ as $n\to\infty$, then $\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)$ converges weakly to $X$ as $n\to\infty$.
Proof. 
Given an integer $r\ge 1$, to prove the lemma it is enough to show that $\big(\frac{k_1}{n},\ldots,\frac{k_r}{n}\big)$ converges weakly to $(X_1,\ldots,X_r)$ as $n\to\infty$. Since $m=m_n\to\infty$ as $n\to\infty$, without loss of generality, we assume $r<m$ in the rest of the discussion. For any random vector $Z$, let $\mathcal{L}(Z)$ denote its probability distribution. Review (67). By the triangle inequality,
$$\rho\Big(\mathcal{L}\Big(\frac{k_1}{n},\ldots,\frac{k_r}{n}\Big),\,\mathcal{L}(X_1,\ldots,X_r)\Big) \le \rho\Big(\mathcal{L}\Big(\frac{k_1}{n},\ldots,\frac{k_r}{n}\Big),\,\mathcal{L}(X_{m,1},\ldots,X_{m,r})\Big)+\rho\big(\mathcal{L}(X_{m,1},\ldots,X_{m,r}),\,\mathcal{L}(X_1,\ldots,X_r)\big).$$
For any function $\varphi(x_1,\ldots,x_r)$ defined on $[0,1]^r$ with $\|\varphi\|_L\le 1$, set $\tilde{\varphi}(x_1,\ldots,x_m)=\varphi(x_1,\ldots,x_r)$ for all $(x_1,\ldots,x_m)\in\mathbb{R}^m$. Then $\|\tilde{\varphi}\|_L\le 1$. Condition (69) thus implies that the middle one among the three distances in (70) goes to zero. Further, the assumption that $(X_{m,1},\ldots,X_{m,m})$ converges weakly to $X$ implies that the third distance in (70) also goes to zero. Hence, the first distance goes to zero. The proof is completed.      □
With Lemma 3 and the estimation in Theorem 3, we obtain the proof of Theorem 4.
Proof of Theorem 4.
Assume $\kappa=(k_1,\ldots,k_m)\in P_n(m)$ is chosen with probability as in (3). The proof is almost identical to that of Theorem 3; we only mention the differences and modifications. Instead of choosing the test function $\psi$ to be bounded and continuous as at the beginning of the proof of Theorem 3, we select $\psi=\varphi$ to be bounded and Lipschitz. Following the proof of Theorem 3, the function $G$ defined in (53) in Step 2 is now bounded and Lipschitz on $\overline{\nabla}_{m-1}$. The major change happens in Step 3, where we replace the estimate in (54) by
$$\Big|G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)-G(y_1,\ldots,y_{m-1})\Big| \le C\cdot\sqrt{\sum_{i=1}^{m-1}\Big|y_i-\frac{k_i}{n}\Big|^2} \le C\cdot\frac{\sqrt{m}}{n}$$
for some constant $C$ depending only on the Lipschitz constant of $G$, where $y_i\in[\frac{k_i-1}{n},\frac{k_i}{n}]$ for $1\le i\le m-1$. Consequently, the term $S_1$ defined at the end of Step 2 is now bounded as follows:
$$|S_1| \le \sum_{(k_1,\ldots,k_{m-1})\in\{1,\ldots,n\}^{m-1}}\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}}\cdots\int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}\Big|G\Big(\frac{k_1}{n},\ldots,\frac{k_{m-1}}{n}\Big)-G(y_1,\ldots,y_{m-1})\Big|\,dy_1\cdots dy_{m-1} \le C\cdot\frac{\sqrt{m}}{n}\cdot\frac{1}{n^{m-1}}\cdot n^{m-1} = \frac{C\sqrt{m}}{n}.$$
Step 4 remains the same and we modify Step 5 using the changes mentioned above. The difference between the numerators in (50) and (51) now becomes
$$|I_1-I_2| \le |S_1|+|S_2| \le C_1\cdot\Big[\frac{\sqrt{m}}{n}+\Big(1+\frac{m}{n}\Big)^{m}\,\frac{m}{n}\Big]$$
as $n\to\infty$, for some constant $C_1$ depending only on the Lipschitz constants of $\varphi$ and $f$ and the upper bounds of $\varphi$ and $f$ on the compact set $\overline{\nabla}_{m-1}$. Using the same argument as at the end of the proof of Theorem 3 and the assumption that $f\in\mathrm{Lip}_K$, we have, for any $\varphi$ defined on $\nabla_{m-1}$ satisfying $\|\varphi\|_L\le 1$,
$$\sup_{\|\varphi\|_L\le 1}\Big|E\,\varphi\Big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\Big)-E\,\varphi(X_{m,1},\ldots,X_{m,m})\Big| = O\Big(\frac{\sqrt{m}}{n}+\Big(1+\frac{m}{n}\Big)^{m}\,\frac{m}{n}+|E_{n,m}|\Big) \to 0$$
as $n\to\infty$. Recall from (52) that $|E_{n,m}|\to 0$ as long as $m=o(n^{1/3})$. Therefore, by Lemma 3, we conclude that $\big(\frac{k_1}{n},\ldots,\frac{k_m}{n}\big)$ converges weakly to $X$ as $n\to\infty$.      □

Author Contributions

Methodology, T.J. and K.W.; Writing—original draft, T.J. and K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science Foundation (NSF) Grant [DMS-1916014 to T.J., DMS-1406279 to T.J., DMS-2210802 to T.J.]; and by the Research Grants Council (RGC) of Hong Kong [GRF 16308219 to K.W., GRF 16304222 to K.W., ECS 26304920 to K.W.].

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions, which helped us to improve the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bohr, N.; Kalckar, F. On the transmutation of atomic nuclei by impact of material particles. I. General theoretical remarks. Kgl. Dan. Vid. Selskab. Math. Phys. Medd. 1937, 14, 1–40.
  2. Van Lier, C.; Uhlenbeck, G. On the statistical calculation of the density of the energy levels of the nuclei. Physica 1937, 4, 531–542.
  3. Auluck, F.; Kothari, D. Statistical mechanics and the partitions of numbers. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1946; Volume 42, pp. 272–277.
  4. Erdös, P.; Lehner, J. The distribution of the number of summands in the partitions of a positive integer. Duke Math. J. 1941, 8, 335–345.
  5. Fristedt, B. The structure of random partitions of large integers. Trans. Am. Math. Soc. 1993, 337, 703–735.
  6. Pittel, B. On a likely shape of the random Ferrers diagram. Adv. Appl. Math. 1997, 18, 432–488.
  7. Jiang, T.; Wang, K. Statistical Properties of Eigenvalues of Laplace–Beltrami Operators. arXiv 2016, arXiv:1602.00406.
  8. Vershik, A.M. Statistical mechanics of combinatorial partitions, and their limit configurations. Funktsional. Anal. i Prilozhen. 1996, 30, 19–39.
  9. Vershik, A.M.; Kerov, S.V. Asymptotic of the largest and the typical dimensions of irreducible representations of a symmetric group. Funct. Anal. Its Appl. 1985, 19, 21–31.
  10. Vershik, A.M.; Yakubovich, Y.V. Asymptotics of the uniform measures on simplices and random compositions and partitions. Funct. Anal. Its Appl. 2003, 37, 273–280.
  11. Petrov, F. Two elementary approaches to the limit shapes of Young diagrams. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 2009, 370, 111–131.
  12. Fulman, J. Stein's method, Jack measure, and the Metropolis algorithm. J. Combin. Theory Ser. A 2004, 108, 275–296.
  13. Baik, J.; Deift, P.; Johansson, K. On the distribution of the length of the longest increasing subsequence of random permutations. J. Am. Math. Soc. 1999, 12, 1119–1178.
  14. Borodin, A.; Okounkov, A.; Olshanski, G. Asymptotics of Plancherel measures for symmetric groups. J. Am. Math. Soc. 2000, 13, 481–515.
  15. Johansson, K. Discrete orthogonal polynomial ensembles and the Plancherel measure. Ann. Math. 2001, 153, 259–296.
  16. Okounkov, A. The uses of random partitions. In Proceedings of the XIVth International Congress on Mathematical Physics, Lisbon, Portugal, 28 July–2 August 2003; World Scientific Publishing: Hackensack, NJ, USA, 2005; pp. 379–403.
  17. Okounkov, A. Random matrices and random permutations. Int. Math. Res. Not. 2000, 2000, 1043–1095.
  18. Borodin, A.; Olshanski, G. Z-measures on partitions and their scaling limits. Eur. J. Comb. 2005, 26, 795–834.
  19. Matsumoto, S. Jack deformations of Plancherel measures and traceless Gaussian random matrices. arXiv 2008, arXiv:0810.5619.
  20. Kerov, S.V. q-analogue of the hook walk algorithm and random Young tableaux. Funktsional. Anal. i Prilozhen. 1992, 26, 35–45.
  21. Strahov, E. A differential model for the deformation of the Plancherel growth process. Adv. Math. 2008, 217, 2625–2663.
  22. Féray, V.; Méliot, P.L. Asymptotics of q-Plancherel measures. Probab. Theory Relat. Fields 2012, 152, 589–624.
  23. Forrester, P.J.; Rains, E.M. Interpretations of some parameter dependent generalizations of classical matrix ensembles. Probab. Theory Relat. Fields 2005, 131, 1–61.
  24. Macdonald, I.G. Symmetric Functions and Hall Polynomials, 2nd ed.; Oxford Classic Texts in the Physical Sciences; The Clarendon Press, Oxford University Press: New York, NY, USA, 2015; p. xii+475.
  25. Szekeres, G. An asymptotic formula in the theory of partitions. Q. J. Math. Oxf. Ser. 1951, 2, 85–108.
  26. Szekeres, G. Some asymptotic formulae in the theory of partitions. II. Q. J. Math. Oxf. Ser. 1953, 4, 96–111.
  27. Canfield, E.R. From recursions to asymptotics: On Szekeres' formula for the number of partitions. Electron. J. Combin. 1997, 4.
  28. Romik, D. Partitions of n into t√n parts. Eur. J. Combin. 2005, 26, 1–17.
  29. Hardy, G.H.; Ramanujan, S. Asymptotic formulæ in combinatory analysis. Proc. Lond. Math. Soc. 1918, 2, 75–115.
  30. Dudley, R.M. Real Analysis and Probability; Cambridge University Press: Cambridge, UK, 2002; Volume 74.
