Article

Inverse Limit Shape Problem for Multiplicative Ensembles of Convex Lattice Polygonal Lines

by Leonid V. Bogachev 1 and Sakhavet M. Zarbaliev 2,3,*
1 Department of Statistics, School of Mathematics, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
2 Department of Mathematics, Econometrics and Information Technology, School of International Economic Relations, MGIMO University, Prospekt Vernadskogo 76, Moscow 119454, Russia
3 Department of Mathematical and Computer Modeling, Institute of Information Technologies and Computer Science, National Research University “Moscow Power Engineering Institute”, Krasnokazarmennaya 14, Moscow 111250, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(2), 385; https://doi.org/10.3390/math11020385
Submission received: 1 November 2022 / Revised: 3 January 2023 / Accepted: 6 January 2023 / Published: 11 January 2023
(This article belongs to the Special Issue Random Combinatorial Structures)

Abstract

Convex polygonal lines with vertices in Z_+^2 and endpoints at 0 = (0, 0) and n = (n_1, n_2), such that n_2/n_1 → c ∈ (0, ∞), have, under the scaling n_1^{−1}, the limit shape γ* with respect to the uniform distribution, identified as the parabola arc √(c(1 − x_1)) + √x_2 = √c. This limit shape is universal in a large class of so-called multiplicative ensembles of random polygonal lines. The present paper concerns the inverse problem of the limit shape. In contrast to the aforementioned universality of γ*, we demonstrate that, for any strictly convex C^3-smooth arc γ ⊂ R_+^2 started at the origin and with the slope at each point not exceeding 90°, there is a sequence of multiplicative probability measures P_n^γ on the corresponding spaces of convex polygonal lines under which the curve γ is the limit shape.
MSC:
52A22; 05A17; 05D40; 52A10; 60F05; 60G50

1. Introduction

Consider a convex lattice polygonal line Γ with vertices in Z_+^2 := {(k_1, k_2) ∈ Z^2 : k_1, k_2 ≥ 0}, starting at the origin and such that the slope of each of its edges is non-negative and does not exceed 90°. Convexity means that the slope of consecutive edges is strictly increasing. Denote by L the set of all such polygonal lines with finitely many edges, and by L_n the subset of polygonal lines Γ ∈ L with the right endpoint fixed at n = (n_1, n_2) ∈ Z_+^2.
The limit shape, with respect to a sequence of probability measures P_n on L_n as n → ∞, is understood as a planar curve γ such that, for any ε > 0,
lim_{n→∞} P_n{Γ ∈ L_n : d(Γ̃_n, γ) ≤ ε} = 1,
where Γ̃_n = s_n(Γ), with a suitable scaling transform s_n : R^2 → R^2, and d(·, ·) is some metric, for example, induced by the Hausdorff distance between compact planar sets,
d_H(A, B) := max{ max_{x∈A} min_{y∈B} |x − y|, max_{y∈B} min_{x∈A} |x − y| },
where |·| is the Euclidean vector norm in R^2.
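For a concrete illustration (our own sketch, not part of the paper; the function name and example curves are ours), the Hausdorff distance (2) between two curves can be approximated by evaluating it over finite samples of points on each curve.

```python
import numpy as np

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Hausdorff distance (2) between two finite point sets A, B in R^2.

    A, B have shapes (m, 2) and (n, 2); for curves this is only an
    approximation obtained by sampling finitely many points."""
    # pairwise Euclidean distances |x - y|, shape (m, n)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# toy example: a parabola arc versus its chord
u = np.linspace(0.0, 1.0, 400)
parabola = np.column_stack([u, u**2])
chord = np.column_stack([u, u])
print(hausdorff(parabola, chord))   # ~ 0.18, driven by the parabola point (1/2, 1/4)
```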
Of course, the limit shape and its very existence may depend on the probability laws P_n in the polygonal spaces L_n. With respect to the uniform distribution on each L_n (i.e., where all Γ ∈ L_n are assumed to be equally likely), the problem was solved independently by Vershik [1], Bárány [2], and Sinai [3], who showed that, if n_1, n_2 → ∞ so that n_2/n_1 → c ∈ (0, ∞), then under the scaling Γ̃_n = n_1^{−1} Γ, the limit (1) holds with respect to the Hausdorff metric d_H and with the limit shape γ = γ* identified as the parabola arc
√(c(1 − x_1)) + √x_2 = √c    (0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ c).
More precisely, in this case, the limit (1) reads as follows,
lim_{n→∞} #{Γ ∈ L_n : d_H(Γ̃_n, γ*) ≤ ε} / #L_n = 1,
where # A denotes the cardinality of set A.
Bogachev and Zarbaliev [4,5] proved that the limit shape (3) holds for the parametric class of multiplicative measures P_n^r (0 < r < ∞) of the form
P_n^r(Γ) := (B_n^r)^{−1} ∏_{e_i∈Γ} b_{k_i}^r,   B_n^r := ∑_{Γ∈L_n} ∏_{e_i∈Γ} b_{k_i}^r   (Γ ∈ L_n),
where the product is taken over all edges e_i of Γ ∈ L_n, k_i is the number of lattice points on the edge e_i except its left endpoint, and the weights b_k^r are specified according to the binomial formula,
b_k^r = C(r + k − 1, k) = r(r + 1)⋯(r + k − 1)/k!,   k ∈ N.
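As a quick side illustration (our own sketch; the function name is ours), the weights (5) are binomial coefficients with a rising factorial in the numerator; note that r = 1 gives b_k^1 = 1 for all k, so every Γ ∈ L_n carries the same weight and P_n^1 is simply the uniform distribution on L_n.

```python
def b(k: int, r: float) -> float:
    """Weight b_k^r from (5): the binomial coefficient C(r + k - 1, k),
    computed via the rising factorial r(r+1)...(r+k-1)/k! (valid for real r > 0)."""
    out = 1.0
    for j in range(k):
        out *= (r + j) / (j + 1)
    return out

# r = 1 gives the uniform case b_k^1 = 1; r = 2 gives b_k^2 = k + 1
print([b(k, 1) for k in range(1, 6)])   # [1.0, 1.0, 1.0, 1.0, 1.0]
print([b(k, 2) for k in range(1, 6)])   # [2.0, 3.0, 4.0, 5.0, 6.0]
```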
This result provided the first evidence in support of a conjecture of the limit shape universality put forward by Vershik [1]. The class of probability measures (4) with the coefficients (5) belongs to a general metatype of decomposable combinatorial structures known as multisets [6]. Bogachev [7] extended the limit shape universality to a much wider class of multiplicative probability measures of the form (4) including the analogs of two other well-known metatypes of decomposable structures — selections and assemblies [6]; for example, this class includes the uniform distribution on the subset of “simple” polygonal lines (i.e., with no lattice points apart from vertices).
In a different development, a surprising universality result with the same limit shape (3) was obtained by Bureaux and Enriquez [8] under the uniform probability measure on the space of convex lattice polygonal lines with a prescribed number of vertices growing with n, regardless of the growth rate. On the other hand, it was demonstrated in [8] that, under additional constraints on the length of the polygonal line, the limit shape modifies to traverse a continuous family of convex curves interpolating between the hypotenuse and the concatenation of the two legs of the limiting triangular container {0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ c x_1}. Related results were obtained earlier by Bárány [9], Žunić [10], Stojaković [11], and Prodromou [12].
In the present paper, we consider a general inverse limit shape problem, and show that, in sharp contrast to the aforementioned universality of the curve (3), any C 3 -smooth strictly convex arc γ R + 2 started at the origin may serve as the limit shape with respect to a suitable sequence of multiplicative probability measures P n γ on the polygonal spaces L n . See [13] for a short communication of this result, interpreted there in the sense of approximation of convex curves by random polygonal lines.
Like in [4,5,7], our construction employs an elegant probabilistic approach based on randomization and conditioning (see [6,14]), first used in the polygonal context by Sinai [3]. The idea is to introduce a suitable “global” probability measure Q defined on the space L = n L n of all convex lattice polygonal lines with finitely many edges (hence, with a “free” right endpoint) and then obtain measures P n on the spaces L n by conditioning, that is, P n ( Γ ) = Q ( Γ | L n ) ( Γ L n ). The measure Q is constructed as the distribution of an integer-valued random field ν = ν ( · ) with mutually independent components, defined on the subset X Z + 2 consisting of points x = ( x 1 , x 2 ) Z + 2 with coprime coordinates. A polygonal line Γ L is uniquely retrieved from a configuration { ν ( x ) , x X } using the collection of the corresponding edges x ν ( x ) (with ν ( x ) > 0 ) and the convexity property (see Section 3).
It is convenient to set up the measure Q = Q z depending on a parameter function z ( x ) , such that the marginal distribution of each ν ( x ) is defined to be geometric with “success” parameter 1 z ( x ) . In view of the aforementioned association between polygonal lines Γ L and configurations { ν Γ ( x ) } , and by virtue of the product structure of the measure Q z , the Q z -probability of a polygonal line Γ L is proportional to the (finite) product x X z ( x ) ν Γ ( x ) . In the classical case (with uniform P n ), a good choice is to take z ( x ) = z 1 x 1 z 2 x 2 ( 0 < z 1 , z 2 < 1 ), yielding Q z ( Γ ) z 1 ξ 1 z 2 ξ 2 , where ( ξ 1 , ξ 2 ) is the (random) right endpoint of the polygonal line Γ L .
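The following self-contained sketch (entirely ours: the truncation box, parameter values, and function names are illustrative choices, not taken from the paper) implements this randomization step in the classical case z(x) = z_1^{x_1} z_2^{x_2}: independent geometric variables ν(x) are sampled over the coprime directions, and the nonzero ones are assembled into a convex polygonal line by ordering the edges by slope.

```python
import math
import random

def sample_polygon(z1: float, z2: float, cutoff: int = 400, seed: int = 0):
    """Sample a convex lattice polygonal line under the product measure Q_z with
    z(x) = z1**x1 * z2**x2 (the classical choice described above).  The infinite
    set X of coprime directions is truncated to a finite box; the cutoff is our
    own device and only needs to be large enough that z(x) is negligible beyond it."""
    rng = random.Random(seed)
    edges = []
    for x1 in range(cutoff + 1):
        for x2 in range(cutoff + 1):
            if math.gcd(x1, x2) != 1:      # keep only coprime directions (the set X)
                continue
            z = z1 ** x1 * z2 ** x2
            nu = 0                         # nu(x) ~ Geometric: P(nu = k) = z**k * (1 - z)
            while rng.random() < z:
                nu += 1
            if nu > 0:
                edges.append((nu * x1, nu * x2))
    # convexity fixes the layout: order the edges by increasing slope (vertical edge last)
    edges.sort(key=lambda e: math.inf if e[0] == 0 else e[1] / e[0])
    vertices, (vx, vy) = [(0, 0)], (0, 0)
    for dx, dy in edges:
        vx, vy = vx + dx, vy + dy
        vertices.append((vx, vy))
    return vertices

Gamma = sample_polygon(z1=0.9, z2=0.9)
print("right endpoint:", Gamma[-1])
# Rescaled by the reciprocal of the first coordinate of its right endpoint, the sampled
# polygonal line should hug the parabola arc (3) with c = 1 (the classical limit shape).
```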
To better adjust the measure Q_z to the conditional measure P_n(Γ) = Q_z(Γ | Γ ∈ L_n), the defining terminal condition (ξ_1, ξ_2) = (n_1, n_2) is emulated using expectation with respect to Q_z, leading to a dependence of the parameters z(x) on n = (n_1, n_2). Furthermore, in order to suit a target curve γ as a hypothetical limit shape, the constants z_1, z_2 in the classical parameterization z(x) = z_1^{x_1} z_2^{x_2} need to allow for a further dependence on x ∈ X to match γ. We derive a suitable parameter function in the form z(x) = exp{−α(x_1 δ_1(x) + x_2 δ_2(x))}, assuming that the functions δ_1, δ_2 depend on x = (x_1, x_2) through the ratio x_2/x_1, which is convenient in conjunction with the parameterization of the curve γ using its tangent slope (see Section 2). As one would expect, if γ = γ* (see (3)), then the functions z_1(x), z_2(x) are reduced to constants, and our method recovers the uniform distribution P_n on L_n.
To summarize, our main result is the following:
Theorem 1.
Let γ ⊂ R_+^2 be a strictly convex C^3-smooth arc with endpoints (0, 0) and (1, c_γ) and with the curvature bounded from below by a positive constant. Suppose that n_2/n_1 → c_γ, and set Γ̃_n := n_1^{−1} Γ for polygonal lines Γ ∈ L_n. Then, there is a sequence of multiplicative probability measures P_n^γ on the polygonal spaces L_n such that, for any ε > 0,
lim_{n→∞} P_n^γ{Γ ∈ L_n : d_H(Γ̃_n, γ) ≤ ε} = 1.
Remark 1.
Here and in what follows, n → ∞ signifies that n_1, n_2 → ∞ so that n_2/n_1 → c_γ. The term “multiplicative” is made more precise in Section 3 (see Remark 7).
Remark 2.
It is more convenient to use another metric d_T on the space of convex curves, based on the tangential parameterization and the sup-distance between the corresponding arc length functions (see Section 2). Result (6) then follows, since the Hausdorff metric d_H is dominated by d_T (see Proposition 2).
Remark 3.
As we see in Section 3, our measures Q z γ were constructed merely by asymptotically fitting the running expectation of the (length of the) random polygonal line to the target curve γ, but with no explicit reference to the combinatorial properties of the underlying polygonal lines. It would be interesting to elaborate the combinatorial characterization of the multiplicative ensembles of polygonal lines under the measures Q z γ and P n γ .
Remark 4.
It would be natural to try and relax the C 3 -smoothness condition on γ (e.g., by permitting “change points” or corners), and to allow for the degeneration of the curvature (e.g., through possible flat segments). We address these issues elsewhere.
Remark 5.
Product measures Q z that are used in the general construction of multiplicative measures P n on the corresponding polygonal spaces L n are of interest in their own right. For instance, such measures are known in statistical physics as grand canonical Gibbs ensembles [15,16], and in computer science as Boltzmann distributions used in abundance for sampling from discrete combinatorial structures [17,18].
The rest of the paper is organized as follows. In Section 2, we introduce the space of convex curves on the plane and endow it with a suitable metric. In Section 3, the measures Q z γ and P n γ are constructed for a given convex curve γ . In Section 4, the parameter function z ( x ) is chosen to guarantee the convergence of expectation of scaled polygonal lines Γ ˜ n = n 1 1 Γ to the target curve γ (Theorem 2). Refined first-order moment asymptotics are obtained in Section 5, while higher-order moment sums are analyzed in Section 6. Section 7 is devoted to the proof of a local central limit theorem (Theorem 7). Lastly, the limit shape result with respect to both Q z γ and P n γ is proved in Section 8 (Theorems 8 and 9, respectively).
Some general notation. For a row vector x = (x_1, x_2) ∈ R^2, its Euclidean norm (length) is denoted |x| := √(x_1^2 + x_2^2), and ⟨x, y⟩ := x_1 y_1 + x_2 y_2 is the corresponding inner product of vectors x, y ∈ R^2. We denote Z_+ := {k ∈ Z : k ≥ 0}, Z_+^2 := Z_+ × Z_+, and similarly R_+ := {x ∈ R : x ≥ 0}, R_+^2 := R_+ × R_+. We use the floor function ⌊a⌋ := max{k ∈ Z : k ≤ a} (integer part of a ∈ R). The standard notation is used for asymptotic comparisons: a ∼ b means that a/b → 1; a = o(b) that a/b → 0; a = O(b) that a/b is bounded; and a ≍ b that both a = O(b) and b = O(a). We take the liberty to write f(x)^p for (f(x))^p.

2. Preliminaries: Convex Planar Curves

Definition 1.
Let G 0 = { γ } be the space of curves in R + 2 represented as the graphs of functions u v = g γ ( u ) ( 0 u a γ ) with the following properties:
(i)
g γ ( 0 ) = 0 (i.e., each curve γ starts at the origin);
(ii)
g γ ( u ) is nondecreasing and continuous on [ 0 , a γ ] ;
(iii)
g γ ( u ) is piecewise differentiable on [ 0 , a γ ] , with the derivative g γ ( u ) continuous everywhere except finitely many points; the (left) derivative at u = a γ may be infinite, g γ ( a γ ) + ;
(iv)
g γ ( u ) is convex on [ 0 , a γ ] , that is, for any θ [ 0 , 1 ] and any u 1 , u 2 [ 0 , a γ ] ,
g γ ( θ u 1 + ( 1 θ ) u 2 ) θ g γ ( u 1 ) + ( 1 θ ) g γ ( u 2 ) .
Remark 6.
Convex polygonal lines Γ L can be treated as curves in G 0 ; the corresponding function g Γ is a piecewise linear function.
It follows from Definition 1 that, for any curve γ ∈ G_0, the derivative g_γ′(u) is non-negative and nondecreasing in its domain, and in particular 0 ≤ t̲_γ ≤ g_γ′(u) ≤ t̄_γ (0 ≤ u ≤ a_γ), where
t̲_γ := inf_{0≤u≤a_γ} g_γ′(u),   t̄_γ := sup_{0≤u≤a_γ} g_γ′(u).
Set
u_γ(t) := sup{u ∈ [0, a_γ] : g_γ′(u) ≤ t},   0 ≤ t ≤ ∞,
with the convention that sup ∅ = 0. That is, u_γ(t) is a generalized inverse of the derivative t = g_γ′(u) (cf. [19], §1.5). It follows that the function t ↦ u_γ(t) is nondecreasing and right-continuous on [0, ∞], with values in [0, a_γ]; moreover, u_γ(t) ≡ 0 for all t < t̲_γ and u_γ(t) = a_γ for all t ≥ t̄_γ (see (7)). For shorthand, we write
v γ ( t ) : = g γ ( u γ ( t ) ) , 0 t .
Denote by ℓ_γ(t) the length of the part of γ where the tangent slope does not exceed t,
ℓ_γ(t) = ∫_0^{u_γ(t)} √(1 + g_γ′(u)^2) du,   0 ≤ t ≤ ∞.
Clearly, every curve γ ∈ G_0 has finite length,
ℓ_γ(∞) = ∫_0^{u_γ(∞)} √(1 + g_γ′(u)^2) du ≤ ∫_0^{a_γ} (1 + g_γ′(u)) du = a_γ + g_γ(a_γ) < ∞.
Let us now equip the space G_0 with a suitable metric. Define the map d_T : G_0 × G_0 → R_+ as follows,
d_T(γ_1, γ_2) := sup_{0≤t≤∞} |ℓ_{γ_1}(t) − ℓ_{γ_2}(t)|,   γ_1, γ_2 ∈ G_0.
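As a small numerical aside (our own sketch; the quadrature scheme, helper names, and example curves are arbitrary), ℓ_γ(t) can be evaluated by integrating the arclength element up to the generalized inverse (8), and d_T is then the supremum of the difference of two such length functions over a grid of slopes.

```python
import numpy as np

def length_function(gprime, t_grid, n_quad=2000):
    """Evaluate the length function t -> ell_gamma(t) of (10) for a curve
    v = g_gamma(u) on [0, 1], given its (nondecreasing) derivative gprime."""
    u = np.linspace(0.0, 1.0, n_quad)
    speed = np.sqrt(1.0 + gprime(u) ** 2)                     # arclength element
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(u))])
    gp = gprime(u)
    out = []
    for t in t_grid:
        idx = np.searchsorted(gp, t, side="right")            # grid points with g'(u) <= t
        out.append(cum[max(idx - 1, 0)])                      # generalized inverse (8)
    return np.array(out)

# d_T between the parabola arc v = u**2 and the chord v = u (both curves lie in G_0)
t_grid = np.linspace(0.0, 3.0, 3001)      # both length functions are flat beyond slope 2
ell1 = length_function(lambda u: 2 * u, t_grid)
ell2 = length_function(lambda u: np.ones_like(u), t_grid)
print(np.abs(ell1 - ell2).max())          # ~ 0.84, attained at t = 1 where the chord's length jumps
```

For comparison, the Hausdorff distance between the same two arcs is about 0.18, which is consistent with the bound d_H ≤ √5 d_T of Proposition 2 below.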
Proposition 1.
The function d T ( · , · ) defined in (11) satisfies all properties of a distance.
Proof. 
Clearly, d T ( γ 1 , γ 2 ) = d T ( γ 2 , γ 1 ) and d T ( γ , γ ) = 0 . The triangle axiom is also obvious. Lastly, if d T ( γ 1 , γ 2 ) = 0 , then the inequality (12) proven below implies that d H ( γ 1 , γ 2 ) = 0 , and it follows that γ 1 = γ 2 since d H is a distance. □
Proposition 2.
The metric d H is dominated by the metric d T as follows:
d_H(γ_1, γ_2) ≤ √5 d_T(γ_1, γ_2),   γ_1, γ_2 ∈ G_0.
Proof. 
By symmetry (see (2)), it suffices to show that
max x γ 1 min y γ 2 | x y | 5 d T ( γ 1 , γ 2 ) , γ 1 , γ 2 G 0 .
Any curve γ G 0 can be approximated simultaneously in d H and d T , by C 2 -smooth strictly convex curves γ k G 0 (e.g., via the refinement of possible corners and/or flat edges in the arc γ ), so that
lim k d H ( γ k , γ ) = 0 , lim k d T ( γ k , γ ) = 0 .
This reduces the inequality (13) to such curves. Note that
max x γ 1 min y γ 2 | x y | = max 0 t 1 min 0 t 2 | u γ 1 ( t 1 ) u γ 2 ( t 2 ) | 2 + | v γ 1 ( t 1 ) v γ 2 ( t 2 ) | 2 sup 0 t | u γ 1 ( t ) u γ 2 ( t ) | 2 + | v γ 1 ( t ) v γ 2 ( t ) | 2 .
For a strictly convex increasing function g γ C 2 [ 0 , a γ ] , the function u γ ( t ) defined in (8) is given explicitly by
u γ ( t ) = 0 , 0 t < t ̲ γ , ( g γ ) 1 ( t ) , t ̲ γ t t ¯ γ , a γ , t ¯ γ < t ,
where ( g γ ) 1 ( t ) is the (ordinary) inverse of the derivative t = g γ ( u ) . Differentiating formula (10) with respect to t, we find
d u γ d t = 1 1 + t 2 d γ d t , u γ ( 0 ) = 0 ,
and hence, using (9),
d v γ d t = d g γ d u · d u γ d t = t 1 + t 2 d γ d t , v γ ( 0 ) = 0 .
Integrating equations (16) and (17) by parts yields
u γ ( t ) = γ ( t ) 1 + t 2 + 0 t s γ ( s ) ( 1 + s 2 ) 3 / 2 d s , v γ ( t ) = t γ ( t ) 1 + t 2 0 t γ ( s ) ( 1 + s 2 ) 3 / 2 d s .
Note that these equations are linear in ℓ_γ. Recalling the definition (11) of d_T, from formula (18) we obtain
sup_{0≤t≤∞} |u_{γ_1}(t) − u_{γ_2}(t)| ≤ d_T(γ_1, γ_2) sup_{0≤t≤∞} ( 1/√(1+t^2) + ∫_0^t s (1+s^2)^{−3/2} ds ) = d_T(γ_1, γ_2) sup_{0≤t≤∞} ( 1/√(1+t^2) + 1 − 1/√(1+t^2) ) = d_T(γ_1, γ_2),
and similarly
sup_{0≤t≤∞} |v_{γ_1}(t) − v_{γ_2}(t)| ≤ d_T(γ_1, γ_2) sup_{0≤t≤∞} ( t/√(1+t^2) + ∫_0^t (1+s^2)^{−3/2} ds ) = d_T(γ_1, γ_2) sup_{0≤t≤∞} ( t/√(1+t^2) + t/√(1+t^2) ) = 2 d_T(γ_1, γ_2).
Returning to (14), by the estimates (19) and (20) we obtain the bound (13), which completes the proof of Proposition 2. □
Consider a fixed convex curve γ G 0 , represented as the graph of an increasing convex function u g γ ( u ) , which for definiteness was assumed to be defined on the interval u [ 0 , 1 ] . We are working under the following
Assumption 1.
The function u ↦ g_γ(u) is strictly increasing and strictly convex on [0, 1], and g_γ ∈ C^2[0, 1]. In particular, g_γ′(u) ≥ 0 and g_γ″(u) ≥ 0 for all u ∈ [0, 1]. Moreover, the curvature ϰ_γ of the curve γ, given by the formula
ϰ_γ(u) = g_γ″(u)/(1 + g_γ′(u)^2)^{3/2},   0 ≤ u ≤ 1,
is uniformly bounded away from zero,
inf u [ 0 , 1 ] ϰ γ ( u ) > 0 .
The meaning of the last condition is that the curve γ is not “too flat”. The graph γ of the function g_γ can be parameterized by the derivative t = g_γ′(u) via the equations u = u_γ(t), v = g_γ(u_γ(t)), where u_γ(t) is given by (15). Expression (21) for the curvature is then reduced to
ϰ_γ(t) = g_γ″(u_γ(t))/(1 + t^2)^{3/2},   t̲_γ ≤ t ≤ t̄_γ,
where t̲_γ = inf_{0≤u≤1} g_γ′(u), t̄_γ = sup_{0≤u≤1} g_γ′(u) (see (7)).

3. Construction of the Measure Q z

Consider the set X Z + 2 of all pairs of coprime non-negative integers:
X : = { x = ( x 1 , x 2 ) Z + 2 : gcd ( x 1 , x 2 ) = 1 } ,
where “gcd” stands for “greatest common divisor”. In particular, pairs ( 1 , 0 ) and ( 0 , 1 ) are included in this set, but pair ( 0 , 0 ) is not. Let Φ ( X ) : = ( Z + ) X be the space of functions X x ϕ ( x ) Z + , and consider the subspace of functions with finite support, Φ 0 ( X ) : = { ϕ Φ ( X ) : # ( supp ϕ ) < } , where supp ϕ : = { x X : ϕ ( x ) > 0 } . It is easy to see that the space Φ 0 ( X ) is in one-to-one correspondence with the space L = n Z + 2 L n of all (finite) convex lattice polygonal lines [3,5]. Indeed, each x X determines the direction of a potential edge, utilized only if x supp ϕ , in which case the value ϕ ( x ) > 0 specifies the scaling factor, altogether yielding a vector edge x ϕ ( x ) ; lastly, assembling all such edges into a lattice polygonal line is uniquely determined by fixation of the starting point (at the origin) and the convexity property. Degenerate configuration ϕ ( · ) 0 formally corresponds to the “trivial” polygonal line with coinciding endpoints. In what follows, we identify the spaces L and Φ 0 ( X ) .
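To make this correspondence concrete, here is a small sketch (ours; names are illustrative) that rebuilds the polygonal line Γ from a finitely supported configuration ϕ by laying out the vector edges x ϕ(x) in the order of increasing slope.

```python
from fractions import Fraction
from math import gcd, inf

def polygon_from_configuration(phi: dict) -> list:
    """Recover the convex polygonal line Gamma from a finitely supported
    configuration phi: X -> Z_+, given as a dict {(x1, x2): phi(x)}."""
    edges = []
    for (x1, x2), k in phi.items():
        if k == 0:
            continue
        assert gcd(x1, x2) == 1 and (x1, x2) != (0, 0), "directions must lie in X"
        edges.append((x1 * k, x2 * k))                      # vector edge x * phi(x)
    # convexity fixes the order: edges are laid out by increasing slope
    edges.sort(key=lambda e: inf if e[0] == 0 else Fraction(e[1], e[0]))
    vertices, (vx, vy) = [(0, 0)], (0, 0)
    for dx, dy in edges:
        vx, vy = vx + dx, vy + dy
        vertices.append((vx, vy))
    return vertices

# configuration with support {(1,0), (2,1), (1,1), (0,1)} and values 3, 1, 2, 1
print(polygon_from_configuration({(1, 0): 3, (2, 1): 1, (1, 1): 2, (0, 1): 1}))
# [(0, 0), (3, 0), (5, 1), (7, 3), (7, 4)]  -> right endpoint n = (7, 4)
```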
Now, a probability measure Q z is introduced on the space Φ ( X ) as the distribution of an integer-valued random field { ν ( x ) , x X } with mutually independent components and geometric marginals:
Q_z{ν(x) = k} = z(x)^k (1 − z(x)),   k ∈ Z_+.
The subscript z in the notation Q z refers to a parameter function 0 z ( x ) < 1 ( x X ); its explicit form, adjusted to a given curve γ G 0 , is specified in Section 4. So far, we only assume that
∏_{x∈X} (1 − z(x)) > 0.
By virtue of the one-to-one association L Γ ν Γ Φ 0 ( X ) , the Q z -probability of each polygonal line Γ L is given by
Q_z(Γ) = ∏_{x∈X} z(x)^{ν_Γ(x)} (1 − z(x)) = ∏_{x∈X} z(x)^{ν_Γ(x)} · ∏_{x∈X} (1 − z(x)).
The expression (27) is well defined; indeed, the first product on the right-hand side is finite because #(supp ν_Γ) < ∞, whereas the second product is convergent due to condition (26).
The measure Q z , formally defined as a product measure on the space Φ ( X ) , is in fact concentrated on the subspace Φ 0 ( X ) of configurations with finite support.
Lemma 1.
Condition (26) is necessary and sufficient in order that Q z { ν Φ 0 ( X ) } = 1 .
Proof. 
According to (25), Q z { ν ( x ) > 0 } = z ( x ) ( x X ). Hence,
∑_{x∈X} Q_z{ν(x) > 0} = ∑_{x∈X} z(x) < ∞
whenever the infinite product in (26) is convergent. Since the random variables ν(x) are mutually independent for different x ∈ X, the Borel–Cantelli lemma ([20], §VIII.3) implies that condition (28) holds if and only if Q_z{#(supp ν) < ∞} = 1, and the lemma is proved. □
As a result, with Q z -probability 1 a realization of the random field ν ( · ) determines a (random) convex polygonal line Γ L . Denote by ξ Γ the right endpoint of Γ ,
ξ Γ = x X x ν Γ ( x ) , Γ ν Γ .
The measure Q z induces a conditional distribution P n on L n = { Γ L : ξ Γ = n } ,
P_n(Γ) = Q_z{Γ | ξ_Γ = n} = Q_z(Γ)/Q_z{ξ_Γ = n},   Γ ∈ L_n.
Substituting formula (27), the measure (30) is expressed in a more intrinsic form as a product of certain weights across the polygonal edges e i Γ (cf. (4)),
P n ( Γ ) = x X z ( x ) ν Γ ( x ) ϕ Φ 0 ( X ) x X z ( x ) ϕ ( x ) = Z n 1 e i Γ w ( e i ) , Γ L n ,
where the multiplicative weight w ( e ) for an edge e = x ν ( x ) is given by
w ( e ) : = z ( x ) ν ( x ) ,
and Z n is the normalization factor,
Z n = Γ L n e i Γ w ( e i ) .
Remark 7.
The product formula (31) explains and justifies the term “multiplicative” used throughout the paper, including its title and main Theorem 1.

4. Calibration of the Parameter Function z ( x )

In the above construction, the measure Q z depends on the parameters { z ( x ) , x X } . So far, the function x z ( x ) was only assumed to guarantee convergence of the infinite product (26). Let us now adjust it to a given curve γ G 0 and to the terminal condition ξ Γ = n that specifies the subspace L n .
Let Γ(t) denote the part of the polygonal line Γ ∈ L in which the slope of edges does not exceed t ∈ [0, ∞]. Recalling the association Γ ↔ ν_Γ described in Section 3, the polygonal line Γ(t) is determined by the truncated configuration ν_Γ(x) 1_{X(t)}(x), where X(t) := {x ∈ X : x_2/x_1 ≤ t}. Denote by ξ_Γ(t) the right endpoint of Γ(t) (cf. (29)),
ξ_Γ(t) = ∑_{x∈X(t)} x ν_Γ(x),
and by ℓ_Γ(t) its length,
ℓ_Γ(t) = ∑_{x∈X(t)} |x| ν_Γ(x).
Let us impose the following calibration condition:
lim_{n→∞} n_1^{−1} E_z[ℓ_Γ(t)] = ℓ_γ(t),   0 ≤ t ≤ ∞,
where E_z stands for the expectation with respect to the measure Q_z and ℓ_γ(t) is the corresponding length function associated with a given curve γ (see (10)). We seek the function z(x) in the form
z(x) = e^{−α⟨x, δ(x_2/x_1)⟩},   x ∈ X,
where
α ≡ α_n := n_1^{−1/3} → 0
and δ ( t ) = ( δ 1 ( t ) , δ 2 ( t ) ) is a function on [ 0 , ] such that
inf 0 t min { δ 1 ( t ) , δ 2 ( t ) } > 0 .
According to the geometric distribution (25), we have (see [20], §XI.2, p. 269)
E z [ ν ( x ) ] = z ( x ) 1 z ( x ) = k = 1 z ( x ) k .
Then, from (33), (38) and (35) we obtain
E z [ Γ ( t ) ] = x X ( t ) | x | k = 1 z ( x ) k = k = 1 x X ( t ) | x | e α k x , δ ( x 2 / x 1 ) .
To deal with sums over the sets X(t) ⊆ X, the following lemma is instrumental. Recall that the Möbius function μ(m) (m ∈ N) is defined as follows: μ(1) := 1, μ(m) := (−1)^d if m is a product of d different prime numbers, and μ(m) := 0 otherwise (see [21], §16.3, p. 234); in particular, |μ(m)| ≤ 1 for all m ∈ N.
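A direct implementation of μ(m) by trial division (our own helper), together with a numerical check of the identity ∑_m μ(m) m^{−2} = 1/ζ(2) = 6/π², which is used below in the proof of Theorem 2 (see (53)):

```python
from math import isclose, pi

def mobius(m: int) -> int:
    """Moebius function: mu(1) = 1, mu(m) = (-1)**d if m is a product of d
    distinct primes, and mu(m) = 0 if m has a squared prime factor."""
    if m == 1:
        return 1
    d, p = 0, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:        # squared prime factor
                return 0
            d += 1
        else:
            p += 1
    if m > 1:                     # leftover prime factor
        d += 1
    return (-1) ** d

# check: sum_m mu(m)/m**2 approaches 1/zeta(2) = 6/pi**2
s = sum(mobius(m) / m**2 for m in range(1, 20_000))
print(s, 6 / pi**2, isclose(s, 6 / pi**2, rel_tol=1e-3))   # True
```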
Lemma 2.
Let f : R + 2 R be a function such that f ( 0 , 0 ) = 0 and
k = 1 x Z + 2 | f ( h k x ) | < , h > 0 .
For h > 0 , consider the functions
F ( h ) : = k = 1 x X f ( h k x ) , F ( h ) : = x X f ( h x ) .
Then the following identities hold for all h > 0
F ( h ) = x Z + 2 f ( h x ) , F ( h ) = m = 1 μ ( m ) F ( h m ) .
Proof. 
Recalling the definition (24) of the set X , observe that Z + 2 = k = 0 k X ; hence, the definition of F ( · ) in (41) is reduced to the first formula in (42). Furthermore, from (41) we have F ( h ) = k = 1 F ( h k ) , and the representation for F ( · ) in (42) follows by the Möbius inversion formula (see [21], Theorem 270, p.237), provided that k , m | F ( h k m ) | < . To verify the last condition, using (41) we obtain
k , m = 1 | F ( k m h ) | k , m = 1 x X | f ( h k m x ) | = k = 1 x Z + 2 | f ( h k x ) | < ,
according to (40). This completes the proof of the lemma. □
Introduce the notation
κ := (2ζ(3)/ζ(2))^{1/3},
where ζ(s) = ∑_{k=1}^∞ k^{−s} is the Riemann zeta function.
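Numerically (a trivial check of ours), the constant (43) evaluates as follows:

```python
from math import pi

# kappa = (2*zeta(3)/zeta(2))**(1/3) from (43): zeta(2) = pi**2/6 exactly,
# while zeta(3) (Apery's constant) is approximated by a partial sum
zeta3 = sum(1.0 / k**3 for k in range(1, 100_000))
kappa = (2 * zeta3 / (pi**2 / 6)) ** (1 / 3)
print(kappa)   # ~ 1.1348
```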
Theorem 2.
Suppose that the functions δ 1 ( t ) , δ 2 ( t ) satisfy the condition (37). Then, in order for equation (34) to be fulfilled for all t [ 0 , ] , it is necessary and sufficient that
δ_j(t) = +∞   (j = 1, 2),   t < t̲_γ or t > t̄_γ,
δ_1(t) + t δ_2(t) = κ g_γ″(u_γ(t))^{1/3},   t̲_γ ≤ t ≤ t̄_γ,
where the map t u γ ( t ) is given by (15).
Proof. 
Let us set
f ( x ) : = | x | e α x , δ ( x 2 / x 1 ) 1 X ( t ) ( x ) , x R + 2 ,
for simplicity, suppressing in the notation the dependence on t. Following the notation (41) of Lemma 2, the representation (39) is rewritten as
E z [ Γ ( t ) ] = k = 1 k 1 F ( k ) .
Let δ * > 0 be a constant such that inf 0 t min { δ 1 ( t ) , δ 2 ( t ) } δ * (see (37)). From (42) and (46) we have
F ( h ) = x 1 = 0 x 2 = 0 t x 1 h | x | e α h x , δ ( x 2 / x 1 ) h x 1 , x 2 = 0 ( x 1 + x 2 ) e α h ( x 1 + x 2 ) δ *
= 2 h e α h δ * ( 1 e α h δ * ) 3 = O ( 1 ) α 3 h 2 .
In particular, this gives F ( h k ) = O ( k 2 ) , uniformly in k N , and it follows that condition (40) of Lemma 2 is satisfied. Hence, using (42) and (48), and recalling that n 1 1 = α 3 , from (47) with h = k we obtain
n 1 1 E z [ Γ ( t ) ] = α 3 k , m = 1 m μ ( m ) F ( k m ) = k , m = 1 m μ ( m ) x 1 = 1 x 2 = 0 t x 1 α 3 | x | e k m α x , δ ( x 2 / x 1 ) .
Taking into account the estimate (49), we see that the general term in the double sum over k , m in (50) admits a uniform bound of the form O ( 1 ) k 3 m 2 , which is a term of a convergent series. Therefore, we can apply Lebesgue’s dominated convergence theorem to pass to the limit in (50) termwise as α 0 . In order to find this limit, note that the internal double series over x 1 , x 2 in (50) is a Riemann sum for the double integral
0 x 2 t x 1 x 1 2 + x 2 2 e k m x 1 δ 1 ( x 2 / x 1 ) + x 2 δ 2 ( x 2 / x 1 ) d x 1 d x 2 .
Moreover, this sum converges to Integral (51) as α 0 , since the integrand function in (51) is directly Riemann integrable, as follows from an estimation similar to (49).
By the change of variables x 1 = u , x 2 = u s (with the Jacobian J ( u , s ) = u ) the integral (51) is reduced to
0 t 1 + s 2 0 u 2 e k m u δ 1 ( s ) + s δ 2 ( s ) d u d s = 2 ( k m ) 3 0 t 1 + s 2 δ 1 ( s ) + s δ 2 ( s ) 3 d s .
Substituting this into (50), observe, recalling the notation (43), that
2 k = 1 1 k 3 m = 1 μ ( m ) m 2 = 2 ζ ( 3 ) ζ ( 2 ) = κ 3 ,
where the identity m = 1 m 2 μ ( m ) = ζ ( 2 ) 1 readily follows by the Möbius inversion formula (42) with F ( h ) = h 2 , F ( h ) = m = 1 ( h m ) 2 = h 2 ζ ( 2 ) (cf. [21], §17.5, Theorem 287, p.250). Hence, combining (50), (52), and (53) with the calibrating condition (34), we arrive at the equation
κ 3 0 t 1 + s 2 δ 1 ( s ) + s δ 2 ( s ) 3 d s = γ ( t ) , 0 t .
According to definitions (8) and (10), we have γ ( t ) 0 for t [ 0 , t ̲ γ ) and γ ( t ) γ ( ) for t ( t ¯ γ , ] , while for t [ t ̲ γ , t ¯ γ ] the derivative d γ / d t is determined by formula (16), where d u γ / d t = 1 / g γ ( u γ ( t ) ) , due to (15) and the differentiation rule for the inverse function. Hence, differentiating the identity (54) with respect to t, we obtain (44) and (45). □
Let us now check that the equation (45) has a suitable solution.
Proposition 3.
For t [ t ̲ γ , t ¯ γ ] , set
δ_1(t) = κ ϰ_γ(t)^{1/3} c_γ √(1 + t^2)/(c_γ + t),   δ_2(t) = δ_1(t)/c_γ,
where c_γ = g_γ(1), and the curvature ϰ_γ(t) is given by (23). Then the functions δ_1(t) and δ_2(t) satisfy equation (45) and the lower bound (37).
Proof. 
It is straightforward to verify that equation (45) is satisfied. The lower bound (37) follows from the assumption (22). □
Remark 8.
In the “classical” case, where the curve γ = γ * is determined by equation (3), it is easy to check that the corresponding curvature (see (21)) is given by
ϰ_{γ*}(t) = c(1 + t/c)^3 / (2(1 + t^2)^{3/2}),   0 ≤ t ≤ ∞.
Hence, expressions (55) are reduced to the constants δ_1 = κ(c/2)^{1/3}, δ_2 = δ_1/c (cf. [5]).
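The following sketch (ours; the helper names and the test value c = 2 are arbitrary) evaluates (55) for a given curvature function and confirms Remark 8: for the parabola arc (3), δ_1 and δ_2 do not depend on t.

```python
from math import pi

kappa = (2 * sum(1.0 / k**3 for k in range(1, 100_000)) / (pi**2 / 6)) ** (1 / 3)

def delta(t: float, curvature, c_gamma: float):
    """delta_1(t) and delta_2(t) from (55), given the endpoint height c_gamma = g_gamma(1)
    and the curvature t -> curvature(t) in the tangential parameterization (23)."""
    d1 = kappa * curvature(t) ** (1 / 3) * c_gamma * (1 + t * t) ** 0.5 / (c_gamma + t)
    return d1, d1 / c_gamma

# classical case: curvature (56) of the parabola arc (3), here with c = 2
c = 2.0
curv = lambda t: c * (1 + t / c) ** 3 / (2 * (1 + t * t) ** 1.5)
for t in (0.0, 0.5, 1.0, 3.0, 10.0):
    print(delta(t, curv, c))   # always (kappa*(c/2)**(1/3), delta_1/c), independent of t
```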
Assumption 2.
Throughout the rest of the paper, we assume that the parameters z(x) (x ∈ X) are chosen according to formula (35) with the functions δ_1(t), δ_2(t) given by (44) and (55). In particular, the measure Q_z becomes dependent on the target curve γ ∈ G_0, and so do the Q_z-probabilities and the corresponding expected values. To emphasize this dependence, we explicitly include γ in the notation by writing Q_z^γ and E_z^γ.

5. Asymptotics of the Expectation

In this section, we derive a few corollaries from the above choice of z ( x ) , assuming throughout that Assumptions 1 and 2 are satisfied.
Theorem 3.
The convergence in (34) is uniform in t [ 0 , ] ,
lim_{n→∞} sup_{0≤t≤∞} |n_1^{−1} E_z^γ[ℓ_Γ(t)] − ℓ_γ(t)| = 0.
We use the following simple criterion for uniform convergence of monotone functions (see [22], Sec. 0.1, and [5], Lemma 4.3).
Lemma 3.
Let a sequence of monotone functions on a finite interval [ a , b ] converge pointwise to a continuous (monotone) function. Then, this convergence is uniform on [ a , b ] .
Proof of Theorem 3.
For each n = ( n 1 , n 2 ) , the function
t f n ( t ) : = n 1 1 E z γ [ Γ ( t ) ] = 1 n 1 x X ( t ) | x | E z γ [ ν ( x ) ]
is nondecreasing in t, and the limiting function f ( t ) : = γ ( t ) given by (10) is continuous on [ 0 , ] . Hence, by Lemma 3 the convergence in (56) is uniform in t on every finite interval [ 0 , t * ] . To complete the proof, it suffices to check that for any ε > 0 and for large enough n, there exists t * < such that for all t t *
n 1 1 E z γ [ Γ ( ) Γ ( t ) ] ε .
Using (39), similarly to (49), we can write
E z γ [ Γ ( ) Γ ( t ) ] = k = 1 x X X ( t ) | x | e α k x , δ ( x 2 / x 1 ) ) k = 1 x 1 = 1 x 2 > t x 1 ( x 1 + x 2 ) e α k ( x 1 + x 2 ) δ * .
Note that the number of integer pairs ( x 1 , x 2 ) (with x 1 1 , x 2 0 ) satisfying the conditions x 1 + x 2 = y and x 2 > t x 1 does not exceed y / ( t + 1 ) . Hence, again using the estimate (49), we see that the right-hand side of (58) is bounded by
k = 1 y = 1 y 2 t + 1 e α k δ * y = 1 t + 1 k = 1 O ( 1 ) ( α k ) 3 = O ( 1 ) α 3 ( t + 1 ) .
Finally, since α 3 = n 1 1 , this implies the estimate (57) for all t large enough. □
Recall that ξ Γ ( t ) = ( ξ 1 ( t ) , ξ 2 ( t ) ) denotes the right endpoint of Γ ( t ) (see (32)).
Theorem 4.
Uniformly in t [ 0 , ] we have
lim n n 1 1 E z γ [ ξ 1 ( t ) ] = u γ ( t ) , lim n n 1 1 E z γ [ ξ 2 ( t ) ] = g γ ( u γ ( t ) ) .
In particular, for t = this yields
lim n n 1 1 E z γ ( ξ 1 ) = 1 , lim n n 1 1 E z γ ( ξ 2 ) = c γ .
Proof. 
Similarly to the representation (50), one can show that
n 1 1 E z γ [ ξ 1 ( t ) ] = k , m = 1 m μ ( m ) x 1 = 1 x 2 = 0 t x 1 α 3 x 1 e k m α x , δ ( x 2 / x 1 ) .
Assuming that t̲_γ ≤ t ≤ t̄_γ and passing to the limit similarly as in the proof of Theorem 2, using (45) and performing the substitution x_2 = s x_1, we obtain
lim n n 1 1 E z γ [ ξ 1 ( t ) ] = k , m = 1 m μ ( m ) 0 x 2 t x 1 x 1 e k m x , δ ( x 2 / x 1 ) d x 1 d x 2 = k , m = 1 m μ ( m ) 2 ( k m ) 3 t ̲ γ t d s δ 1 ( s ) + s δ 2 ( s ) 3 = 2 k = 1 1 k 3 m = 1 μ ( m ) m 2 t ̲ γ t d s κ 3 g γ ( u γ ( s ) ) = 2 ζ ( 3 ) ζ ( 2 ) κ 3 0 u γ ( t ) d g γ ( u ) g γ ( u ) = u γ ( t ) .
Similarly,
lim_{n→∞} n_1^{−1} E_z^γ[ξ_2(t)] = ∑_{k,m=1}^∞ m μ(m) ∬_{0 ≤ x_2 ≤ t x_1} x_2 e^{−km⟨x, δ(x_2/x_1)⟩} dx_1 dx_2 = ∑_{k,m=1}^∞ m μ(m) · (2/(km)^3) ∫_{t̲_γ}^t s ds/(δ_1(s) + s δ_2(s))^3 = (2ζ(3)/ζ(2)) ∫_{t̲_γ}^t s ds/(κ^3 g_γ″(u_γ(s))) = ∫_0^{u_γ(t)} g_γ′(u) dg_γ′(u)/g_γ″(u) = g_γ(u_γ(t)).
Finally, the uniform convergence in (59) can be proved similarly as in Theorem 3. □
For the future applications, we need to estimate the rate of convergence in (60) with sufficient accuracy. To this end, we require some more smoothness of function g γ .
Assumption 3.
In addition to Assumptions 1 and 2, we now suppose that g γ C 3 [ 0 , 1 ] .
Theorem 5.
Under Assumption 3, E_z^γ(ξ_j) − n_j = O(n_1^{2/3}) as n → ∞   (j = 1, 2).
Proof. 
Consider ξ 1 (the case ξ 2 is handled similarly). From (61) with t = we have
E z γ ( ξ 1 ) = k , m = 1 μ ( m ) k α F 1 ( k m α ) ,
where
F 1 ( h ) : = x 1 = 1 x 2 = 0 f 1 ( h x 1 , h x 2 ) , f 1 ( x 1 , x 2 ) : = x 1 e x , δ ( x 2 / x 1 ) .
Repeating the calculations as in (62), we note that
R + 2 f 1 ( h x 1 , h x 2 ) d x 1 d x 2 = 2 h 2 κ 3 ,
so that
k , m = 1 μ ( m ) α k R + 2 f 1 ( h x 1 , h x 2 ) d x 1 d x 2 h = α k m = 2 α 3 κ 3 k , m = 1 μ ( m ) k 3 m 2 = 1 α 3 = n 1 .
Hence, we obtain the representation
E z γ [ ξ 1 ] n 1 = k , m = 1 μ ( m ) α k Δ 1 ( α k m ) ,
where
Δ 1 ( h ) : = F 1 ( h ) R + 2 f 1 ( h x 1 , h x 2 ) d x 1 d x 2 .
Using that δ j ( t ) δ * > 0 (cf. the proof of (50)), we have
F 1 ( h ) x 1 = 1 x 2 = 0 h x 1 e h ( x 1 + x 2 ) δ * = h e h δ * ( 1 e h δ * ) 3 .
Hence, F_1(h) = O(h^{−2}) as h → 0 and F_1(h) = O(h^{−β}) for any β > 0 as h → +∞. Therefore, the function F_1(h) is well defined for all h > 0, and its Mellin transform ([23], Ch. VI, §9)
M_1(s) := ∫_0^∞ h^{s−1} F_1(h) dh
is a regular function for ℜs > 2. From a two-dimensional version of the Müntz formula (see [5], Lemma 5.1), it follows that M_1(s) is meromorphic in the half-plane ℜs > 1 and has a single (simple) pole at the point s = 2. Moreover, for all 1 < ℜs < 2,
M_1(s) = ∫_0^∞ h^{s−1} Δ_1(h) dh.
The inversion formula for the Mellin transform ([23], Theorem 9a, pp. 246–247) yields
Δ 1 ( h ) = 1 2 π i c i c + i h s M 1 ( s ) d s , 1 < c < 2 .
In order to make use of formula (66), we need to find explicitly the analytic continuation of the function (65) to the strip 1 < s < 2 . Let us use the Euler–Maclaurin summation formula (see, e.g., [24], §12.2)
∑_{x=0}^∞ f(x) = ∫_0^∞ f(x) dx + (1/2) f(0) + ∫_0^∞ B_1(x) f′(x) dx,
where B_1(x) := x − ⌊x⌋ − 1/2. Applying this formula to the sum over x_2 in (63), we obtain
F 1 ( h ) = x 1 = 1 h x 1 0 e h x , δ ( x 2 / x 1 ) d x 2 + 1 2 x 1 = 1 h x 1 e h x 1 δ 1 ( 0 ) + O ( 1 ) e h δ * h = h x 1 = 1 x 1 2 0 e h x 1 ψ ( t ) d t + O ( 1 ) e h δ * h ,
where (see (45))
ψ ( t ) : = δ 1 ( t ) + t δ 2 ( t ) κ g γ ( u γ ( t ) ) 1 / 3 .
Keeping track of only the main term in (67), and writing dots for functions that are regular for s > 1 , the Mellin transform of F 1 ( h ) can be represented as follows:
M 1 ( s ) = 0 h s x 1 = 1 x 1 2 0 e h x 1 ψ ( t ) d t d h + = x 1 = 1 x 1 2 0 0 h s e h x 1 ψ ( t ) d h d t + = x 1 = 1 1 x 1 s 1 0 Γ ( s + 1 ) ψ ( t ) s + 1 d t + = ζ ( s 1 ) Γ ( s + 1 ) Ψ ( s ) + ,
where
Ψ ( s ) : = 0 1 ψ ( t ) s + 1 d t .
Recalling formula (21), the function (68) may be rewritten in the form:
ψ ( t ) = κ ϰ γ ( t ) 1 / 3 1 + t 2 , t ̲ γ t t ¯ γ ,
and Assumption 1 implies that the function Ψ(s) is regular for ℜs > 0. Furthermore, it is well known that the gamma function Γ(s) is analytic for ℜs > 0 ([25], §4.41, p. 148), whereas the zeta function ζ(s) has a single pole at the point s = 1 ([25], §4.43, p. 152). It follows that the right-hand side of (69) is regular in the strip 1 < ℜs < 2 and hence provides the required analytic continuation of the function M_1(s) originally defined by (65).
Setting h = α k m and returning to formulas (64) and (66), we get for 1 < c < 2
E z γ ( ξ 1 ) n 1 = k , m = 1 μ ( m ) α k 1 2 π i c i c + i M 1 ( s ) ( k m α ) s d s = 1 2 π i c i c + i M 1 ( s ) ζ ( s + 1 ) α s + 1 ζ ( s ) d s .
Using that ζ(s) ≠ 0 for ℜs ≥ 1, we can transform the contour of integration ℜs = c in (70) into the union of a small semi-circle s = 1 + re^{it} (−π/2 ≤ t ≤ π/2) and two vertical half-lines s = 1 ± it (t ≥ r). Furthermore, studying the resolution (69), one can show that M_1(1 ± it) = O(|t|^{−2}) as t → ∞. As a result, the right-hand side of (70) is bounded by O(α^{−2}) = O(n_1^{2/3}). Thus, the proof of the theorem for ξ_1 is complete. □

6. Asymptotics of Higher-Order Moments

6.1. Second-Order Moments

According to the geometric distribution (25), we have (see [20], §XI.2, p. 269)
Var z [ ν ( x ) ] = z ( x ) ( 1 z ( x ) ) 2 = k = 1 k z ( x ) k .
Denote a z : = E z ( ξ Γ ) , where ξ Γ ( ξ 1 , ξ 2 ) = x X x ν ( x ) (see (29)). Let K z : = Cov z ( ξ Γ , ξ Γ ) = E z ( ξ Γ a z ) ( ξ Γ a z ) be the covariance matrix (with respect to the measure Q z ) of the random vector ξ Γ . Since { ν ( x ) } are mutually independent, we see using (71) that the elements K z ( i , j ) = Cov z ( ξ i , ξ j ) of the matrix K z are given by
K z ( i , j ) = x X x i x j Var z [ ν ( x ) ] = x X x i x j k = 1 k z ( x ) k , i , j { 1 , 2 } .
Theorem 6.
Under Assumptions 1 and 2,
K_z = 3κ^{−1} n_1^{4/3} (1 + o(1)) B,
where the elements of the matrix B = (B_{ij}) are given by
B_{11} = ∫_0^1 du/g_γ″(u)^{1/3},   B_{12} = B_{21} = ∫_0^1 g_γ′(u) du/g_γ″(u)^{1/3},   B_{22} = ∫_0^1 g_γ′(u)^2 du/g_γ″(u)^{1/3}.
Proof. 
Let us consider K z ( 1 , 1 ) (the other elements of K z are analyzed in a similar manner). Substituting (35) into (72), by the Möbius inversion formula (cf. (61)), we obtain
K z ( 1 , 1 ) = k = 1 x X k x 1 2 e k α x , δ ( x 2 / x 1 ) = k , m = 1 k m 2 μ ( m ) x 1 = 1 x 2 = 0 x 1 2 e k m α x , δ ( x 2 / x 1 ) .
Arguing as in the proof of Theorems 2 and 4, we obtain
lim n α 4 x 1 = 1 x 2 = 0 x 1 2 e k m α x , δ ( x 2 / x 1 ) = R + 2 x 2 2 e k m α x , δ ( x 2 / x 1 ) d x 1 d x 2 = 6 ( k m ) 4 t ̲ γ t ¯ γ d s δ 1 ( s ) + s δ 2 ( s ) 4 .
Returning to (74) and using (44), (45), we get
lim n α 4 K z ( 1 , 1 ) = 6 ζ ( 3 ) ζ ( 2 ) t ̲ γ t ¯ γ d s κ 4 g γ ( u γ ( s ) ) 4 / 3 = 3 κ 0 1 d u g γ ( u ) 1 / 3 ,
and the first formula in (73) follows, since α = n 1 1 / 3 . □
Lemma 4.
Under Assumptions 1 and 2,
det K_z ∼ (3/κ)^2 [ ∫_0^1 du/g_γ″(u)^{1/3} · ∫_0^1 g_γ′(u)^2 du/g_γ″(u)^{1/3} − ( ∫_0^1 g_γ′(u) du/g_γ″(u)^{1/3} )^2 ] n_1^{8/3}.
Proof. 
The proof readily follows from Theorem 6. □
From Theorem 6 and Lemma 4, it follows (e.g., using the Cauchy–Schwarz inequality) that the matrix K z is (asymptotically) positive definite; in particular, det K z > 0 and hence K z is invertible. Let V z : = K z 1 / 2 be the (unique) square root of K z 1 , that is, a symmetric positive definite matrix such that V z 2 = K z 1 . Recall that the matrix norm induced by the Euclidean vector norm | · | is defined by A : = sup | x | = 1 | x A | . We need some general facts about this norm (see [5], §7.2, pp. 33–34, for simple proofs and bibliographic comments).
Lemma 5.
If A is a real matrix then A A = A 2 .
Lemma 6.
If A = ( a i j ) is a real d × d matrix, then
1 d i , j = 1 d a i j 2 A 2 i , j = 1 d a i j 2 .
Lemma 7.
Let A be a symmetric 2 × 2 matrix with det A 0 . Then
A 1 = A | det A | .
We can now prove the following estimates for the norms of the matrices K z and V z .
Lemma 8.
Under Assumptions 1 and 2,
‖K_z‖ ≍ n_1^{4/3},   ‖V_z‖ ≍ n_1^{−2/3}.
Proof. 
Using Theorem 6 and the upper bound in Lemma 6, we obtain
‖K_z‖^2 ≤ K_z(1,1)^2 + 2 K_z(1,2)^2 + K_z(2,2)^2 = O(n_1^{8/3}).
On the other hand, by Theorem 6 and the lower bound in Lemma 6,
‖K_z‖^2 ≥ (1/2)(K_z(1,1)^2 + K_z(2,2)^2) ≥ K_z(1,1) K_z(2,2) ∼ (3/κ)^2 n_1^{8/3} ∫_0^1 du/g_γ″(u)^{1/3} · ∫_0^1 g_γ′(u)^2 du/g_γ″(u)^{1/3}.
Combining (76) and (77), we obtain the first estimate in (75).
Furthermore, Lemma 5 implies that V z 2 = K z 1 . In turn, Lemma 7 yields K z 1 = K z / det K z , and it remains to use Lemmas 4 and 8 to obtain the second part of (75). □

6.2. Asymptotics of the Moment Sums

Denote ν 0 ( x ) : = ν ( x ) E z [ ν ( x ) ] ( x X ), and for q N set
m q ( x ) : = E z ν ( x ) q , μ q ( x ) : = E z ν 0 ( x ) q
(for notational simplicity, we suppress the dependence on γ and z).
The following two-sided estimate of μ q ( x ) can be easily proved using Newton’s binomial formula and Lyapunov’s inequality (cf. [5], Lemmas 6.2 and 6.6).
Lemma 9.
For each q N and all x X ,
μ 2 ( x ) q / 2 μ q ( x ) 2 q m q ( x ) .
Next, we need a general upper bound for the moments of geometric random variables proved in [5], Lemma 6.3.
Lemma 10.
For each q N , there exists a constant C q > 0 such that, for all x X ,
m q ( x ) C q z ( x ) 1 z ( x ) q .
Using the estimate (79) and repeating the calculations in [5], Lemma 6.4, we obtain the following asymptotic bound.
Lemma 11.
Under Assumptions 1 and 2, for each q N
x X | x | q m q ( x ) = O ( 1 ) n 1 ( q + 2 ) / 3 .
Lemma 11, together with the bounds (78) and Theorem 6, implies the following asymptotic estimate (cf. [5], Lemma 6.6).
Lemma 12.
Under Assumptions 1 and 2, for any integer q 2
x X | x | q μ q ( x ) n 1 ( q + 2 ) / 3 .
Using Lemma 12, the next asymptotic bound is obtained by a straightforward adaptation of Lemma 6.7 in [5].
Lemma 13.
For each q N ,
E z γ | Γ E z γ ( Γ ) | q = O n 1 2 q / 3 .
Lastly, let us consider the Lyapunov coefficient
L z : = V z 3 x X | x | 3 μ 3 ( x ) ,
The next asymptotic estimate is an immediate consequence of Lemmas 8 and 12.
Lemma 14.
Under Assumptions 1 and 2, one has L z n 1 1 / 3 .

7. Local Limit Theorem

The role of a local limit theorem is to yield the asymptotics of the terminal probability Q z γ { ξ Γ = n } = Q z γ ( L n ) appearing in the representation of the measure P n γ as a conditional distribution, P n γ ( · ) = Q z γ ( · | L n ) (see (30)).
As before, we denote by a z = E z ( ξ Γ ) and K z = Cov z ( ξ Γ , ξ Γ ) the expectation vector and covariance matrix of the random vector ξ Γ = x X x ν ( x ) . Let f 0 , I ( · ) be the density of a standard two-dimensional normal distribution N ( 0 , I ) (i.e., with zero mean and identity covariance matrix),
f_{0,I}(x) = (2π)^{−1} e^{−|x|^2/2},   x ∈ R^2.
Then, the density of the normal distribution N ( a z , K z ) is given by
f_{a_z,K_z}(x) = (det K_z)^{−1/2} f_{0,I}((x − a_z) V_z),   x ∈ R^2.
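For completeness, a direct NumPy transcription (ours) of the density (81); the numerical values of a_z and K_z below are placeholders, not computed from any actual measure Q_z:

```python
import numpy as np

def normal_density(x, a_z, K_z):
    """Density (81) of N(a_z, K_z): f(x) = det(K_z)**(-1/2) * f_{0,I}((x - a_z) V_z),
    where V_z = K_z**(-1/2) is the symmetric square root of the inverse covariance."""
    K_z = np.asarray(K_z, dtype=float)
    w, U = np.linalg.eigh(K_z)                    # K_z = U diag(w) U^T
    V_z = U @ np.diag(w ** -0.5) @ U.T            # V_z = K_z^{-1/2}
    y = (np.asarray(x, dtype=float) - a_z) @ V_z  # row-vector convention, as in the text
    return float(np.exp(-0.5 * y @ y) / (2 * np.pi * np.sqrt(np.prod(w))))

# evaluating the local-limit approximation (82) at its mode m = a_z (placeholder values)
a_z = np.array([100.0, 80.0])
K_z = np.array([[400.0, 120.0], [120.0, 300.0]])
print(normal_density(a_z, a_z, K_z))   # equals 1/(2*pi*sqrt(det K_z))
```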
Theorem 7.
Under Assumptions 1 and 2, uniformly in m Z + 2
Q_z^γ{ξ_Γ = m} = f_{a_z,K_z}(m) + O(n_1^{−5/3}).
Let us make some preparations for the proof. Recall that the random variables { ν ( x ) , x X } are mutually independent and have geometric distribution with parameter 1 z ( x ) , respectively. In particular, their characteristic functions φ ν ( s ; x ) : = E z [ e i s ν ( x ) ] are given by
φ_ν(s; x) = (1 − z(x))/(1 − z(x) e^{is}),   s ∈ R.
Hence, the characteristic function φ_ξ(λ) := E_z[e^{i⟨λ, ξ⟩}] of the vector ξ_Γ = ∑_{x∈X} x ν_Γ(x) reads
φ_ξ(λ) = ∏_{x∈X} φ_ν(⟨x, λ⟩; x) = ∏_{x∈X} (1 − z(x))/(1 − z(x) e^{i⟨x, λ⟩}).
Let us start with a general absolute estimate for the characteristic function of a centered random variable (for a proof, see [5], Lemma 7.10).
Lemma 15.
Consider the random variable ν 0 ( x ) = ν ( x ) E z [ ν ( x ) ] and its characteristic function φ ν 0 ( t ; x ) = E z [ e i t ν 0 ( x ) ] . Then,
| φ ν 0 ( t ; x ) | exp 1 2 μ 2 ( x ) t 2 + 1 3 μ 3 ( x ) | t | 3 , t R .
The next lemma provides two estimates (proved in [5], Lemmas 7.11 and 7.12) for the characteristic function φ ξ 0 ( λ ) = E z [ e i λ , ξ 0 ] of the centered vector
ξ 0 : = ξ Γ a z = x X x ν 0 ( x ) .
Recall that the Lyapunov coefficient L z is defined in (80), and V z = K z 1 / 2 .
Lemma 16.
(a) For all λ ∈ R^2,
|φ_{ξ_0}(λ V_z)| ≤ exp{−(1/2)|λ|^2 + (1/3) L_z |λ|^3}.
(b) If |λ| ≤ L_z^{−1}, then
|φ_{ξ_0}(λ V_z) − e^{−|λ|^2/2}| ≤ 16 L_z |λ|^3 e^{−|λ|^2/6}.
The next global bound is obtained by repeating the proof of Lemma 7.14 in [5].
Lemma 17.
For all λ ∈ R^2,
|φ_{ξ_0}(λ)| ≤ e^{−J_α(λ)},
where
J_α(λ) := (1/4) ∑_{x∈X} e^{−α⟨x, δ⟩} (1 − cos⟨λ, x⟩) ≥ 0.
We can now proceed to the proof of Theorem 7.
Proof of Theorem 7.
By the Fourier inversion formula, we can write
Q_z^γ{ξ_Γ = m} = (1/(4π^2)) ∫_{T^2} e^{−i⟨λ, m − a_z⟩} φ_{ξ_0}(λ) dλ,   m ∈ Z_+^2,
where T 2 : = { λ = ( λ 1 , λ 2 ) : | λ 1 | π , | λ 2 | π } . On the other hand, the characteristic function corresponding to the normal probability density f a z , K z ( x ) (see (81)) is given by
φ_{a_z,K_z}(λ) = e^{i⟨λ, a_z⟩ − |λ V_z^{−1}|^2/2},   λ ∈ R^2,
so by the Fourier inversion formula
f_{a_z,K_z}(m) = (1/(4π^2)) ∫_{R^2} e^{−i⟨λ, m − a_z⟩ − |λ V_z^{−1}|^2/2} dλ,   m ∈ Z_+^2.
Note that if |λ V_z^{−1}| ≤ L_z^{−1}, then, according to Lemmas 8 and 14,
|λ| ≤ |λ V_z^{−1}| · ‖V_z‖ ≤ L_z^{−1} ‖V_z‖ = O(n_1^{−1/3}) = o(1),
which of course implies that λ T 2 . Using this observation and subtracting (85) from (84), we obtain, uniformly in m Z + 2 , that
|Q_z^γ{ξ_Γ = m} − f_{a_z,K_z}(m)| ≤ I_1 + I_2 + I_3,
by denoting
I_1 := (1/(4π^2)) ∫_{{λ : |λ V_z^{−1}| ≤ L_z^{−1}}} |φ_{ξ_0}(λ) − e^{−|λ V_z^{−1}|^2/2}| dλ,   I_2 := (1/(4π^2)) ∫_{{λ : |λ V_z^{−1}| > L_z^{−1}}} e^{−|λ V_z^{−1}|^2/2} dλ,   I_3 := (1/(4π^2)) ∫_{T^2 ∩ {λ : |λ V_z^{−1}| > L_z^{−1}}} |φ_{ξ_0}(λ)| dλ.
By the substitution λ = y V z , the integral I 1 is reduced to
I_1 = (|det V_z|/(4π^2)) ∫_{|y| ≤ L_z^{−1}} |φ_{ξ_0}(y V_z) − e^{−|y|^2/2}| dy = O(1) (det K_z)^{−1/2} L_z ∫_{R^2} |y|^3 e^{−|y|^2/6} dy = O(n_1^{−5/3}),
on account of Lemmas 4, 14 and 16. Similarly, again putting λ = y V z and passing to the polar coordinates, we get, due to Lemmas 4 and 14,
I_2 = (|det V_z|/(2π)) ∫_{L_z^{−1}}^∞ r e^{−r^2/2} dr = O(n_1^{−4/3}) e^{−L_z^{−2}/2} = o(n_1^{−5/3}).
Finally, let us turn to I 3 . Using Lemma 17, we obtain
I_3 = O(1) ∫_{T^2 ∩ {|λ V_z^{−1}| > L_z^{−1}}} e^{−J_α(λ)} dλ,
where J α ( λ ) is given by (83). The condition | λ V z 1 | > L z 1 implies that | λ | > 2 η α ; hence, max { | λ 1 | , | λ 2 | } > η α , where η > 0 is suitable (small enough) constant. Indeed, assuming the contrary, from (36) and Lemmas 8 and 14 it would follow that
1 < L z | λ V z 1 | L z η α K z 1 / 2 = O ( η ) 0 as η 0 ,
which is a contradiction. Hence, the estimate (89) is reduced to
I 3 = O ( 1 ) | λ 1 | > η α + | λ 2 | > η α e J α ( λ ) d λ .
Note that, by Assumption 1 and formulas (55), the functions δ 1 ( t ) , δ 2 ( t ) are bounded above, sup t δ j ( t ) δ * < . Hence, (83) implies
J α ( λ ) x X e α ( x 1 + x 2 ) δ * 1 cos λ , x .
To estimate the first integral in (90), by keeping in the sum (91) only x = ( x 1 , 1 ) , x 1 Z + , we obtain
J α ( λ ) e α δ * x 1 = 0 e α δ * x 1 1 e i ( λ 1 x 1 + λ 2 ) = 1 1 e α e i λ 2 1 e α + i λ 1 1 1 e α 1 | 1 e α + i λ 1 | ,
because u | u | for any u C . Since η α | λ 1 | π , we have
| 1 e α + i λ 1 | | 1 e α + i η α | α ( 1 + η 2 ) 1 / 2 ( α 0 ) .
Substituting this estimate into (92), we conclude that J α ( λ ) is asymptotically bounded from below by C ( η ) α 1 n 1 1 / 3 (with some constant C ( η ) > 0 ), uniformly in λ such that η α | λ 1 | π . Thus, the first integral in (90) is bounded by O ( 1 ) exp const · n 1 1 / 3 = o ( n 1 5 / 3 ) .
Similarly, the second integral in (90) is estimated by reducing the summation in (83) to that over x = ( 1 , x 2 ) only. As a result, I 3 = o ( n 1 5 / 3 ) . Substituting this estimate, together with (87) and (88), into (86) we obtain (82), and so the theorem is proved. □
Corollary 1.
In addition to the conditions of Theorem 7, suppose that Assumption 3 holds. Then
Q_z^γ{ξ_Γ = n} ≍ n_1^{−4/3}.
Proof. 
By Theorem 5, a z = E z γ ( ξ Γ ) = n + O ( n 1 2 / 3 ) . Together with Lemma 8 this implies
| ( n a z ) V z | | n a z | · V z = O ( 1 ) .
Hence, by Lemma 4 we obtain
f_{a_z,K_z}(n) = (1/(2π)) (det K_z)^{−1/2} e^{−|(n − a_z) V_z|^2/2} ≍ n_1^{−4/3},
and (93) now readily follows from (82). □

8. The Limit Shape

Throughout this section, we work under Assumptions 1–3. Let us first establish that a given curve γ G 0 is indeed the limit shape of polygonal lines Γ L with respect to the measure Q z γ (under the scaling Γ n 1 1 Γ ).
Theorem 8.
For any ε > 0 ,
lim_{n→∞} Q_z^γ{Γ ∈ L : d_T(n_1^{−1} Γ, γ) ≤ ε} = 1.
Proof. 
In view of Theorem 3, we only need to check that, for each ε > 0 ,
lim_{n→∞} Q_z^γ{ n_1^{−1} sup_{0≤t≤∞} |ℓ_Γ(t) − E_z^γ[ℓ_Γ(t)]| > ε } = 0.
Note that the random process
ℓ_Γ^0(t) := ℓ_Γ(t) − E_z^γ[ℓ_Γ(t)]   (0 ≤ t ≤ ∞)
has independent increments and zero mean; hence, it is a martingale with respect to the filtration F_t := σ{ν(x), x ∈ X(t)}, t ∈ [0, ∞]. From the definition of ℓ_Γ(t) (see (33)), it is also clear that ℓ_Γ^0(t) is càdlàg (i.e., its paths are everywhere right-continuous and have left limits). Therefore, the Kolmogorov–Doob submartingale inequality (see, e.g., [26], Ch. II, Theorem 1.7, p. 54) gives
Q_z{ sup_{0≤t≤∞} |ℓ_Γ^0(t)| > n_1 ε } ≤ (n_1 ε)^{−2} sup_{0≤t≤∞} Var_z[ℓ_Γ(t)] ≤ n_1^{−2} ε^{−2} Var_z(ℓ_Γ).
Furthermore, using the decomposition (33) and Theorem 6, we have
Var_z(ℓ_Γ) = ∑_{x∈X} |x|^2 Var_z[ν(x)] = ∑_{x∈X} (x_1^2 + x_2^2) Var_z[ν(x)] = Var_z(ξ_1) + Var_z(ξ_2) = O(n_1^{4/3}).
Finally, substituting (97) into (96), we see that the probability on the left-hand side is bounded by O(n_1^{−2/3}) → 0, which proves (94). □
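To illustrate Theorem 8 numerically, here is a self-contained Monte-Carlo sketch (entirely ours: the target curve g_γ(u) = u^2, the value of n_1, the truncation of X, and all names are arbitrary choices, and the truncation introduces a small bias not present in the theorem). For this curve c_γ = 1 and g_γ″ ≡ 2, so (55) gives δ_1(s) = δ_2(s) = κ 2^{1/3}/(1 + s) on the slope range [0, 2], while directions with slope above 2 have δ_j = +∞ by (44) and are simply skipped.

```python
import math
import random

kappa = (2 * sum(1.0 / k**3 for k in range(1, 100_000)) / (math.pi**2 / 6)) ** (1 / 3)
n1 = 125_000
alpha = n1 ** (-1 / 3)
rng = random.Random(1)

def ell_gamma(t: float) -> float:
    """Length (10) of the part of gamma with tangent slope <= t (closed form for g(u) = u**2)."""
    u = min(max(t, 0.0) / 2.0, 1.0)
    return u * math.sqrt(1 + 4 * u * u) / 2 + math.asinh(2 * u) / 4

# sample nu(x) ~ Geometric(1 - z(x)) over a truncated set of coprime directions;
# directions steeper than slope 2 are skipped (z(x) = 0 there, cf. (44))
edges = []
CUT = 900                       # truncation of X: z(x) is negligible beyond this range
for x1 in range(1, CUT + 1):
    for x2 in range(0, 2 * x1 + 1):
        if math.gcd(x1, x2) != 1:
            continue
        s = x2 / x1
        d = kappa * 2 ** (1 / 3) / (1 + s)       # = delta_1(s) = delta_2(s), since c_gamma = 1
        z = math.exp(-alpha * (x1 + x2) * d)
        nu = 0
        while rng.random() < z:
            nu += 1
        if nu:
            edges.append((nu * x1, nu * x2))

# compare the scaled length function of the sampled polygonal line with ell_gamma
for t in (0.5, 1.0, 1.5, 2.0):
    ell_Gamma_t = sum(math.hypot(dx, dy) for dx, dy in edges if dy <= t * dx)
    print(t, ell_Gamma_t / n1, ell_gamma(t))     # the two values should be close for large n_1
```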
Let us now prove a limit shape result under the measure P n γ (cf. Theorem 1).
Theorem 9.
For any ε > 0
lim_{n→∞} P_n^γ{Γ ∈ L_n : d_T(n_1^{−1} Γ, γ) ≤ ε} = 1.
Proof. 
Similarly to the proof of Theorem 8, it suffices to show that, for each ε > 0 ,
lim_{n→∞} P_n^γ{ sup_{0≤t≤∞} n_1^{−1} |ℓ_Γ^0(t)| > ε } = 0,
where the random process ℓ_Γ^0(t) is defined in (95). Recalling formula (30), we obtain
P_n^γ{ sup_{0≤t≤∞} |ℓ_Γ^0(t)| > ε n_1 } ≤ Q_z^γ{ sup_{0≤t≤∞} |ℓ_Γ^0(t)| > ε n_1 } / Q_z^γ{ξ_Γ = n}.
To estimate the probability in the numerator in (98), similarly to the proof of Theorem 8 we use the Kolmogorov– Doob submartingale inequality, but now with the sixth-order central moment. Combining this with Lemma 13 (with q = 3 ), we obtain
Q_z^γ{ sup_{0≤t≤∞} |ℓ_Γ^0(t)| > n_1 ε } ≤ n_1^{−6} ε^{−6} E_z^γ|ℓ_Γ − E_z^γ(ℓ_Γ)|^6 = O(n_1^{−2}).
On the other hand, by Corollary 1 the denominator in (98) decays no faster than of order n_1^{−4/3}. Together with the estimate (99), this implies that the right-hand side of (98) admits an asymptotic bound O(n_1^{−2/3}) → 0. Hence, Theorem 9 is proved. □

Author Contributions

The authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vershik, A.M. The limit shape of convex lattice polygons and related topics. Funct. Anal. Appl. 1994, 28, 13–20.
  2. Bárány, I. The limit shape of convex lattice polygons. Discret. Comput. Geom. 1995, 13, 279–295.
  3. Sinai, Y.G. Probabilistic approach to the analysis of statistics for convex polygonal lines. Funct. Anal. Appl. 1994, 28, 108–113.
  4. Bogachev, L.V.; Zarbaliev, S.M. A proof of the Vershik–Prokhorov conjecture on the universality of the limit shape for a class of random polygonal lines. Dokl. Math. 2009, 79, 197–202.
  5. Bogachev, L.V.; Zarbaliev, S.M. Universality of the limit shape of convex lattice polygonal lines. Ann. Probab. 2011, 39, 2271–2317.
  6. Arratia, R.; Barbour, A.D.; Tavaré, S. Logarithmic Combinatorial Structures: A Probabilistic Approach; EMS Monographs in Mathematics; European Mathematical Society: Zürich, Switzerland, 2003.
  7. Bogachev, L.V. Limit shape of random convex polygonal lines on Z^2: Even more universality. J. Comb. Theory Ser. A 2014, 127, 353–399.
  8. Bureaux, J.; Enriquez, N. Asymptotics of convex lattice polygonal lines with a constrained number of vertices. Isr. J. Math. 2017, 222, 515–549.
  9. Bárány, I. Affine perimeter and limit shape. J. Reine Angew. Math. 1997, 484, 71–84.
  10. Žunić, J. Notes on optimal convex lattice polygons. Bull. Lond. Math. Soc. 1998, 30, 377–385.
  11. Stojaković, M. Limit shape of optimal convex lattice polygons in the sense of different metrics. Discret. Math. 2003, 271, 235–249.
  12. Prodromou, N.M. Limit shape of convex lattice polygons with minimal perimeter. Discret. Math. 2005, 300, 139–151.
  13. Bogachev, L.V.; Zarbaliev, S.M. Approximation of convex functions by random polygonal lines. Dokl. Math. 1999, 59, 46–49.
  14. Arratia, R.; Tavaré, S. Independent process approximations for random combinatorial structures. Adv. Math. 1994, 104, 90–154.
  15. Vershik, A.M. Statistical mechanics of combinatorial partitions, and their limit shapes. Funct. Anal. Appl. 1996, 30, 90–105.
  16. Erlihson, M.M.; Granovsky, B.L. Limit shapes of Gibbs distributions on the set of integer partitions: The expansive case. Ann. Inst. Henri Poincaré Probab. Stat. 2008, 44, 915–945.
  17. Duchon, P.; Flajolet, P.; Louchard, G.; Schaeffer, G. Boltzmann samplers for the random generation of combinatorial structures. Comb. Probab. Comput. 2004, 13, 577–625.
  18. Bodini, O.; Duchon, P.; Jacquot, A.; Mutafchiev, L. Asymptotic analysis and random sampling of digitally convex polyominoes. In DGCI 2013: Discrete Geometry for Computer Imagery; Gonzalez-Diaz, R., Jimenez, M.J., Medrano, B., Eds.; Lecture Notes in Computer Science, 7749; Springer: Berlin/Heidelberg, Germany, 2013; pp. 95–106.
  19. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Encyclopedia of Mathematics and its Applications, 27; Cambridge University Press: Cambridge, UK, 1989.
  20. Feller, W. An Introduction to Probability Theory and Its Applications, 3rd ed.; Wiley Series in Probability and Mathematical Statistics; Wiley: New York, NY, USA, 1968; Volume 1.
  21. Hardy, G.H.; Wright, E.M. An Introduction to the Theory of Numbers, 4th ed.; Oxford University Press: Oxford, UK, 1960.
  22. Resnick, S.I. Extreme Values, Regular Variation and Point Processes; Applied Probability: A Series of the Applied Probability Trust 4; Springer: New York, NY, USA, 1987; reprinted in Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2008.
  23. Widder, D.V. The Laplace Transform; Princeton Mathematical Series, 6; Princeton University Press: Princeton, NJ, USA, 1941. Available online: https://www.jstor.org/stable/j.ctt183ptft (accessed on 31 October 2022).
  24. Cramér, H. Mathematical Methods of Statistics; Princeton Mathematical Series, 9; Princeton University Press: Princeton, NJ, USA, 1946. Available online: https://www.jstor.org/stable/j.ctt1bpm9r4 (accessed on 31 October 2022).
  25. Titchmarsh, E.C. The Theory of Functions, 2nd ed., corrected printing; Oxford University Press: Oxford, UK, 1952.
  26. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion, 3rd ed.; Grundlehren der Mathematischen Wissenschaften, 293; Springer: Berlin/Heidelberg, Germany, 1999.