Article

Lower Bounds for the Integrated and Minimax Risks in Intrinsic Statistical Estimation: A Geometric Approach

by
José Manuel Corcuera
1 and
José María Oller
2,*
1
Department of Mathematics and Computer Science, Faculty of Mathematics, Universitat de Barcelona, 08007 Barcelona, Spain
2
Department of Genetics, Microbiology and Statistics, Faculty of Biology, Universitat de Barcelona, 08028 Barcelona, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2026, 14(2), 240; https://doi.org/10.3390/math14020240
Submission received: 3 October 2025 / Revised: 23 December 2025 / Accepted: 5 January 2026 / Published: 8 January 2026
(This article belongs to the Section D1: Probability and Statistics)

Abstract

In parametric statistics, it is well established that the canonical measures of estimator performance—such as bias, variance, and mean squared error—are inherently dependent on the parameterization of the model. Consequently, these quantities describe the behavior of an estimator only relative to a particular parameterization, rather than representing intrinsic properties of either the estimator itself or the underlying probability distribution it seeks to estimate. Some years ago, the authors introduced a framework, termed the intrinsic analysis of point estimation, in which tools from information geometry were employed to construct analogues of classical statistical notions that are intrinsic to both the estimator and the associated probability measure. Within this framework, a contravariant vector field was introduced to define the intrinsic bias, while the squared Riemannian distance naturally emerged as the intrinsic analogue of the classical squared distance. Intrinsic counterparts of the Cramér–Rao inequalities, as well as the Rao–Blackwell and Lehmann–Scheffé theorems, were also established. The present work extends the intrinsic analysis—originally founded on the concept of intrinsic risk, a fundamentally local measure of estimator performance—to an approach that characterizes the estimator over an entire region of the parameter space, thereby yielding an intrinsically global perspective. Building upon intrinsic risk, two indices are proposed to evaluate estimator performance within a bounded region: (i) the integral of the intrinsic risk with respect to the Riemannian volume over the specified region, and (ii) the maximum intrinsic risk attained within that region. The Riemannian volume induced by the Fisher information metric on the manifold associated with the parametric model provides a natural means of averaging the intrinsic risk. Using variational methods, integral inequalities of the Cramér–Rao type are derived for the mean squared integrated Rao distance of the estimators, thereby extending previous contributions by several authors. Furthermore, lower bounds for the maximum intrinsic risk are obtained through corresponding integral formulations.

1. Introduction

The study of estimation efficiency within the framework of information geometry has evolved significantly since the pioneering work of Rao [1] and subsequent contributions by others (see [2,3,4]), and it continues in more recent papers and books such as [5,6]. The Fisher information metric, providing a canonical Riemannian structure on parametric statistical models, allows an intrinsic quantification of statistical distinguishability and the derivation of sharp risk bounds. This was developed in [7], where intrinsic analogues of classical results, such as the Cramér–Rao inequalities, were established under regularity conditions; in that work, we developed what can be termed an intrinsic approach to the analysis of point estimation. Given a statistical model (that is, after fixing the collection of possible stochastic mechanisms assumed to generate the observed sample), the term intrinsic refers to properties that are inherent to the estimator itself and not to the particular parameterization used to describe the model. Intrinsic properties, in contrast to classical ones, are invariant under reparameterizations of the model, which may be interpreted as changes of coordinates in the space of probabilistic mechanisms under consideration; see also [8].
However, whether one adopts a classical or an intrinsic perspective, the risk functions of different estimators often intersect. As a consequence, the comparison of estimators cannot, in general, be based on pointwise risk criteria alone, unless additional structural properties are imposed, such as unbiasedness or equivariance in the case of families that are invariant under the action of a group (see [9]). One natural way to address this issue is to assess the performance of an estimator, intrinsic or classical, over an entire region of the parameter space, for example, by integrating its risk with respect to the Riemannian volume measure or by considering the supremum of the risk over that region. This work extends the findings of these articles through the use of indices that quantify estimator performance over regions of the parameter space, rather than at individual points. Specifically, the aim of this paper is to derive lower bounds for two global risk measures of an estimator over a subset of the parameter space under the intrinsic geometry induced by the Fisher information: the average risk and the maximum risk. In the next section, we outline the setting of the problem and recall some results on local risk bounds, which will later be applied to obtain global bounds. Related contributions are found in [2], where the analysis is carried out in a classical unidimensional framework, and in [10], which develops a classical but non-intrinsic perspective.
Building on these foundations, the notion of global efficiency has recently attracted renewed attention, with emphasis on the behavior of estimators not only locally but across regions of the parameter space. This interest is reinforced by the interplay between geometry and physics, which has been further enriched by applications of Fisher information to variational principles in classical and quantum mechanics; see [11,12,13,14,15].

2. The Intrinsic Analysis Framework

Let $\chi$ be a sample space, $\mathcal{A}$ a $\sigma$-algebra of subsets of $\chi$, and $\mu$ a $\sigma$-finite positive measure on $(\chi, \mathcal{A})$. A parametric statistical model is defined as the triple $\left( (\chi, \mathcal{A}, \mu);\, \Theta;\, f \right)$, where $(\chi, \mathcal{A}, \mu)$ is a measure space, $\Theta$ is a smooth real manifold, known as the parameter space, and $f$ is a non-negative measurable map, $f : \chi \times \Theta \to \mathbb{R}_{\ge 0}$, such that $P_\theta(dx) = f(x,\theta)\,\mu(dx)$ is a probability measure on the measurable space $(\chi, \mathcal{A})$ for every $\theta \in \Theta$. Here, $\mu$ is referred to as the reference measure and $f$ as the model function.
For simplicity, in this paper we shall focus on the case in which $\Theta$ is an open, connected subset of $\mathbb{R}^n$. In this setting, it is customary to use the same symbol $\theta$ to denote both the points in $\Theta$ and their coordinate representations. Adopting this convention, the results can be presented in this familiar form hereafter, even though the statements can be formulated in greater generality.
Additionally, it will be assumed that the model function f satisfies certain regularity conditions:
  • When $x$ is fixed, the real function $\xi \mapsto f(x;\xi)$ is a $C^\infty$ function on the manifold $\Theta$.
  • The functions in $x$, $\partial \ln f(x;\theta)/\partial\theta^i$, $i = 1,\dots,n$, are linearly independent and belong to $L^\alpha\!\left(f(\cdot;\theta)\,d\mu\right)$ for a suitable $\alpha > 0$; that is, the scores have moments of order $\alpha$ for a convenient $\alpha > 0$.
  • The partial derivatives of the required orders,
    $$\frac{\partial}{\partial\theta^i}, \quad \frac{\partial^2}{\partial\theta^i\,\partial\theta^j}, \quad \frac{\partial^3}{\partial\theta^i\,\partial\theta^j\,\partial\theta^k}, \qquad i,j,k = 1,\dots,n,$$
    and the integration of $f(x;\theta)$ with respect to $d\mu$ can always be interchanged.
  • The model is identifiable: the map $\theta \mapsto P_\theta$, with $dP_\theta = f(\cdot;\theta)\,d\mu$, is one-to-one.
Within this framework, the probabilistic mechanism that generates the data under analysis can be equivalently represented by a probability measure, a density function or a parameter, that is, by a point in the parametric manifold Θ . When these conditions are satisfied, the parametric statistical model is said to be regular. Initially, Θ is regarded as a Riemannian manifold endowed with an arbitrary fundamental tensor h on Θ , whose components are denoted by h i j . Nevertheless, it is well known that the parameter space admits a natural Riemannian structure induced by probability measures, referred to as the information metric, whose fundamental tensor components g i j coincide with those of the Fisher information matrix. For further details, see [1,3,4,6,7], among many others.
In this context, for a given sample size $k$, an estimator $U$ of the true parameter $\theta \in \Theta$ (that is, the parameter associated with the true probabilistic mechanism generating the observed sample) is defined as a measurable map $U : \chi^k \to \Theta$, under the assumption that the probability measure on $\chi^k$ is given by $(P_\theta)^{(k)}(dx) = f^{(k)}(x;\theta)\,\mu^k(dx) = \prod_{i=1}^{k} f(x_i;\theta)\,\mu(dx_i)$.

2.1. Local Bounds

Let $h_{\alpha\beta}$ denote the components of the metric tensor associated with the Riemannian metric on $\Theta$, and let $g_{\alpha\beta}$ denote the components of the information metric on $\Theta$. Consider the Levi–Civita connection corresponding to $h_{\alpha\beta}$, and define
$$A = \exp_\theta^{-1}(U), \qquad B = E_\theta\!\left[\exp_\theta^{-1}(U)\right],$$
where $\exp_\theta^{-1}$ is the inverse of the exponential map induced by this connection (see Appendix A). Observe that $A$ encodes the deviation between the true parameter $\theta$ and its estimate $U$, quantified by the tangent vector at $\theta$ of the geodesic connecting both points, whose length equals the corresponding Riemannian distance. The term $B$ is the expectation of this deviation vector. For simplicity, it is assumed that the estimators $U$ are such that $A$ is defined almost everywhere with respect to $\mu$, and $B$ is a $C^1$ vector field on $\Theta$. The existence of such a field is ensured whenever the mean squared Riemannian distance exists.
Let $\mathfrak{S}_\theta = \{\xi \in T_\theta\Theta : \|\xi\| = 1\}$, where $T_\theta\Theta$ denotes the tangent space at $\theta$. For each $\xi \in \mathfrak{S}_\theta$, define
$$C_\theta(\xi) = \sup\{s > 0 : d(\theta, \gamma_\xi(s)) = s\},$$
where $d$ denotes the Riemannian distance and $\gamma_\xi$ is a geodesic defined on an open interval containing zero, satisfying $\gamma_\xi(0) = \theta$ and $\dot\gamma_\xi(0) = \xi$, that is, the tangent vector at $\theta$ is equal to $\xi$. Define
$$\mathfrak{D}_\theta = \{s\xi \in T_\theta\Theta : 0 \le s < C_\theta(\xi),\ \xi \in \mathfrak{S}_\theta\}, \qquad D_\theta = \exp_\theta(\mathfrak{D}_\theta).$$
It is known that exp θ is a diffeomorphism mapping 𝔇 θ onto D θ (see Hicks [16]). An intrinsic extension of the Cramér–Rao bound is obtained, generalizing the formulation of [7]. The previous result relied exclusively on the information metric, whereas the current framework admits an arbitrary Riemannian metric for the quantification of estimator loss.
Theorem 1
(Riemannian Cramér–Rao lower bound). Let $U$ be an estimator based on a sample of size $k$, corresponding to an $n$-dimensional regular parametric family of density functions. Assume that the parameter manifold $\Theta$ is simply connected and that $(P_\theta)^{(k)}\!\left(U^{-1}(\Theta\setminus D_\theta)\right) = 0$, $\forall\theta\in\Theta$, so that the estimator takes values almost surely in a normal neighborhood $D_\theta$ of $\theta$. Suppose further that the mean squared Riemannian distance with respect to the metric $h_{\alpha\beta}$ between the true parameter and the estimator, $E_\theta\!\left[d^2(U,\theta)\right]$, exists for all $\theta$, and that the covariant derivative of the bias field $B$ may be computed by differentiating under the integral sign. Then
$$E_\theta\!\left[d^2(U,\theta)\right] \ge \frac{\left(\operatorname{div}(B) - E_\theta(\operatorname{div}(A))\right)^2}{k\,c} + \|B\|^2,$$
where $c = \sum_{\alpha,\beta} h^{\alpha\beta}\,g_{\alpha\beta}$ and $\operatorname{div}(\cdot)$ denotes the divergence operator.
Observe that the divergence of a vector field on a Riemannian manifold is the scalar function that quantifies the net rate at which the vector field flows outward from (or inward toward) a point.
Proof. 
Let $C$ be any vector field. Then, applying the Cauchy–Schwarz inequality twice,
$$E_\theta\left|\langle A - B, C\rangle\right| \le E_\theta\left(\|A - B\|\,\|C\|\right) \le \sqrt{E_\theta\|A - B\|^2}\,\sqrt{E_\theta\|C\|^2},$$
where $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ denote, respectively, the inner product and the norm defined on each tangent space.
Let $C(x;\theta) = \operatorname{grad}\!\left(\ln f^{(k)}(x;\theta)\right)$, where $\operatorname{grad}(\cdot)$ is the gradient operator. Taking expectations and using the repeated index convention,
$$E_\theta\|C\|^2 = E_\theta\!\left( h_{\alpha\beta}\, h^{\beta\gamma}\,\frac{\partial \ln f^{(k)}}{\partial\theta^\gamma}\; h^{\alpha\lambda}\,\frac{\partial \ln f^{(k)}}{\partial\theta^\lambda} \right) = k\, h_{\alpha\beta}\, h^{\beta\gamma}\, h^{\alpha\lambda}\, g_{\gamma\lambda} = k\, h^{\gamma\lambda} g_{\gamma\lambda}.$$
Furthermore, we also have
$$\left|E_\theta\langle A, C\rangle\right| = \left|E_\theta\langle A - B, C\rangle\right| \le E_\theta\left|\langle A - B, C\rangle\right|$$
and
$$E_\theta\|A - B\|^2 = E_\theta\!\left(\|A\|^2\right) - \|B\|^2.$$
Thus,
$$\left|E_\theta\langle A, C\rangle\right| \le \sqrt{\left(E_\theta\|A\|^2 - \|B\|^2\right) k\,c},$$
but $\|A\|^2 = d^2(U,\theta)$. Moreover,
$$\operatorname{div}(B) = E_\theta\!\left(\operatorname{div}(A)\right) + E_\theta\langle A, C\rangle.$$
Then the theorem follows. □
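To make the inequality concrete, the following minimal Python sketch (an illustration added here, not part of the original derivation) checks it by Monte Carlo in the simplest setting: the $N(\theta,1)$ location model with $h = g$, where the Fisher information equals 1, $A = U - \theta$, $\operatorname{div}(A) = -1$, and the sample mean is intrinsically unbiased ($B = 0$), so the right-hand side reduces to $1/k$.

```python
import numpy as np

# Monte Carlo check of the Riemannian Cramer-Rao bound for the N(theta, 1)
# location model with h = g (Fisher information 1, so c = n = 1).  Here
# A = U - theta, div(A) = -1, and for U = sample mean the bias field B = 0,
# so the bound reads  E[d^2(U, theta)] >= (div B - E div A)^2 / (k c) = 1/k.
rng = np.random.default_rng(0)
theta, k, reps = 0.7, 10, 200_000

x = rng.normal(theta, 1.0, size=(reps, k))
U = x.mean(axis=1)                      # sample-mean estimator
lhs = np.mean((U - theta) ** 2)         # E_theta[d^2(U, theta)]
rhs = 1.0 / k                           # the lower bound, attained here
print(f"risk ~ {lhs:.5f}  >=  bound = {rhs:.5f}")
```

In this model the bound is attained with equality, consistent with the intrinsic Cramér–Rao value $n/k$ discussed later in Example 1.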
Remark 1.
We can choose a geodesic spherical coordinate system with origin U ( x ) ; under this coordinate system, we have the following.
$$\frac{\partial A^\alpha}{\partial\theta^\alpha} = -1 \quad\text{and}\quad \Gamma^\alpha_{\alpha j} A^j = -\rho\,\Gamma^\alpha_{\alpha 1} = -\rho\,\frac{\partial \ln\sqrt{g}}{\partial\rho},$$
where $g$ is the determinant of the metric tensor. Then
$$\operatorname{div}(A) = -1 - \rho\,\frac{\partial \ln\sqrt{g}}{\partial\rho}.$$
Now we can use Bishop's comparison theorems (see [17], pp. 71–73) to estimate $\partial\ln\sqrt{g}/\partial\rho$.
In the Euclidean case,
$$\frac{\partial \ln\sqrt{g}}{\partial\rho} = \frac{n-1}{\rho},$$
and thus $\operatorname{div}(A) = -n$.
When the sectional curvatures are non-positive, we obtain
$$\frac{\partial \ln\sqrt{g}}{\partial\rho} \ge \frac{n-1}{\rho},$$
and therefore $\operatorname{div}(A) \le -n$.
Finally, when the supremum of the sectional curvatures, $K$, is positive and the diameter of the manifold satisfies $d(\Theta) < \pi/(2\sqrt{K})$, we have
$$\frac{\partial \ln\sqrt{g}}{\partial\rho} \ge 0,$$
and then we obtain $\operatorname{div}(A) \le -1$.
In any case, $\operatorname{div}(A) \le -a$, with $a = n$ or $a = 1$, depending on the sign of the sectional curvatures.
Corollary 1.
Suppose that there is a global chart such that h α β = δ α β . Identifying the points with their coordinates, we have
$$\mathrm{MSE}_U \ge \frac{\left(\operatorname{div}\!\left(E_\theta(U)\right)\right)^2}{k\,g_{\beta\beta}} + \left\|\mathrm{Bias}(U)\right\|^2,$$
where MSE and Bias are the ordinary mean squared error and bias under the assumed global chart, and we use the repeated index summation convention.
Proof. 
It follows straightforwardly from the previous theorem and the facts that $d$ is the Euclidean distance, $A = U - \theta$, $\operatorname{div}(A) = -n$, and $\mathrm{Bias}(U) = E_\theta(U) - \theta$. □
Corollary 2
(Intrinsic Cramér–Rao lower bound). If $h_{\alpha\beta} = g_{\alpha\beta}$, we have
$$E_\theta\!\left[\rho^2(U,\theta)\right] \ge \frac{\left(\operatorname{div}(B) - E_\theta(\operatorname{div}(A))\right)^2}{k\,n} + \|B\|^2,$$
where $\rho$ is the Rao distance, that is, the Riemannian distance induced by the information metric. In particular, if all the sectional Riemannian curvatures are bounded from above by a non-positive constant $K$ and $\operatorname{div}(B) \ge -n$, then
$$E_\theta\!\left[\rho^2(U,\theta)\right] \ge \frac{\left(\operatorname{div}(B) + 1 + (n-1)\sqrt{-K}\,\|B\|\coth\!\left(\sqrt{-K}\,\|B\|\right)\right)^2}{k\,n} + \|B\|^2.$$
If all sectional Riemannian curvatures are bounded from above by a positive constant $K$, $d(\Theta) < \pi/(2\sqrt{K})$, where $d(\Theta)$ is the diameter of the manifold, and $\operatorname{div}(B) \ge -1$, then
$$E_\theta\!\left[\rho^2(U,\theta)\right] \ge \frac{\left(\operatorname{div}(B) + 1 + (n-1)\sqrt{K}\,d(\Theta)\cot\!\left(\sqrt{K}\,d(\Theta)\right)\right)^2}{k\,n} + \|B\|^2.$$
Proof. 
If the Riemannian metric is the Fisher information metric, the corresponding Riemannian distance is known as the Rao distance, and $c = g^{\alpha\beta} g_{\alpha\beta} = \delta^\alpha_\alpha = n$. For the proofs of (5) and (6), see [7]. □
Note that the geometry of the model influences the lower bounds of the Riemannian risk. This influence is intricate, as it depends not only on the Riemannian structure of the parameter space but also on the probability distribution that the estimator induces in that space. For a given bias structure, specified by the bias vector field $B$ and its divergence $\operatorname{div}(B)$, the behavior of the lower bounds of the Riemannian risk is determined by the curvature. When the curvature is negative, these bounds tend to increase as $K$ decreases (see (5)), since $\sqrt{-K}\,\|B\|\coth(\sqrt{-K}\,\|B\|)$ increases, while for positive curvature they tend to decrease as $K$ increases, through the factor $\sqrt{K}\,d(\Theta)\cot(\sqrt{K}\,d(\Theta))$, $d(\Theta)$ being the diameter of the manifold $\Theta$, which plays a significant role in the calculation of this lower bound.
Intuitively, geodesics are the straightest possible paths in a curved space. When sectional curvatures are positive, the geodesics starting from a single point initially spread out but eventually start to converge: positive curvature pulls geodesics together. When the curvature is zero, the geodesics starting from a point spread uniformly in all directions, the space is flat, and the geodesics neither attract nor repel each other. When the curvature is negative, geodesics starting from a point diverge from one another: even if they start close together, they spread apart quickly as one moves along them, since negative curvature pushes geodesics away from each other. This behavior has consequences for the intrinsic risk. When the model has positive sectional curvatures, the risk can decrease, since the geodesics bend together and the estimators may behave more similarly across nearby parameters, while when the sectional curvatures are negative, the risk can increase, since the geodesics spread apart and estimators may differ more across the space.
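As a purely numerical illustration of the monotonicity just described, the short Python sketch below (our addition; the values of $n$, $\|B\|$, $d(\Theta)$, and $\operatorname{div}(B)$ are arbitrary choices, not model values) evaluates the curvature-dependent factors appearing in (5) and (6).

```python
import numpy as np

# Illustrative evaluation of the curvature factors in Corollary 2 (a sketch;
# n, ||B||, d(Theta) and div(B) below are arbitrary illustrative values).
n, normB, diam, divB = 3, 0.5, 0.5, 0.0

def factor_negative(K):   # K <= 0 case: sqrt(-K) ||B|| coth(sqrt(-K) ||B||)
    s = np.sqrt(-K) * normB
    return divB + 1 + (n - 1) * s / np.tanh(s)

def factor_positive(K):   # K > 0 case: sqrt(K) d cot(sqrt(K) d), d < pi/(2 sqrt(K))
    s = np.sqrt(K) * diam
    return divB + 1 + (n - 1) * s / np.tan(s)

for K in (-4.0, -1.0, -0.25):
    print(f"K={K:+5.2f}  factor={factor_negative(K):.4f}")  # grows as K decreases
for K in (0.25, 1.0, 4.0):
    print(f"K={K:+5.2f}  factor={factor_positive(K):.4f}")  # shrinks as K grows
```

Since the squared factor enters the numerator of the bound, larger factors (more negative curvature) push the lower bound up, while smaller factors (more positive curvature) pull it down, in line with the geodesic behavior described above.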

2.2. Global Bounds

It is well known that, for a general loss function, there is no estimator whose risk function is uniformly smaller than that of every other estimator. Consequently, given a particular estimator, it is natural to assess its performance over a specified region of the statistical model by integrating its risk function over that region and normalizing the result by the corresponding Riemannian volume. In what follows, the square of the Rao distance is adopted as the loss function and the Riemannian metric is taken to be the Fisher information metric. This setting corresponds to the intrinsic analysis framework developed in [7].
Let B Θ be a measurable subset satisfying 0 < V ( B ) < , where V denotes the Riemannian measure. The Riemannian average of the mean squared Rao distance is defined as
$$R_U^2(B) = \frac{\int_B E_\theta\!\left[\rho^2(U,\theta)\right] dV}{\int_B dV}.$$
The resulting performance index represents a weighted average of the mean squared Rao distance. This formulation is compatible with a Bayesian perspective: a uniform prior with respect to the Riemannian volume can be regarded as a noninformative prior, see [18]. Furthermore, as shown in [19], when the parameter space is a locally compact topological group, the corresponding Riemannian volume coincides, up to a multiplicative constant, with a left-invariant Haar measure. In general, this volume is invariant under any group that leaves the parametric family of densities unchanged.
In the first part of the article, lower bounds for this global index on geodesic balls of radius $R$, $R_U^2(S_R)$, are derived.
An alternative measure of global estimator performance is given by the maximum risk over a region of the parameter space:
$$M_U^2(B) = \sup_{\theta\in B} E_\theta\!\left[\rho^2(U,\theta)\right],$$
corresponding to the minimax approach. The final part of the paper is devoted to the derivation of lower bounds for this maximum risk.
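Before deriving general bounds, the following Python sketch (our addition) shows how the two indices can be compared in the simplest case: the $N(\theta,1)$ location model, where $g = 1$ and the Rao distance is the absolute difference, evaluating a hypothetical shrinkage estimator $c\bar{X}$ against the sample mean on $W = [-R,R]$.

```python
import numpy as np

# Discretized versions of the two global indices for the N(theta, 1) location
# model (g = 1, dV = dtheta), comparing the sample mean with a shrinkage
# estimator c*Xbar on W = [-R, R].  A sketch with arbitrary k, R, c.
k, R, c = 10, 0.6, 0.8
thetas = np.linspace(-R, R, 401)          # grid over the region W

def risk(theta, shrink):
    # E_theta[(shrink*Xbar - theta)^2] = shrink^2 / k + (1 - shrink)^2 theta^2
    return shrink**2 / k + (1 - shrink)**2 * theta**2

for shrink in (1.0, c):
    r = risk(thetas, shrink)
    avg = r.mean()                        # average over W (uniform grid)
    mx = r.max()                          # maximum risk over W
    print(f"c={shrink:.1f}  average={avg:.4f}  max={mx:.4f}")
```

On this small region, the biased estimator improves both indices over the unbiased sample mean, anticipating the trade-off analyzed in Section 3.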

3. Variational Methods to Obtain Global Bounds

The local bounds established in Corollary 2 indicate that the expected squared Rao distance between the true probabilistic mechanism generating the sample and its corresponding estimates is bounded from below by a quantity depending on the intrinsic bias structure of the estimator.
Global bounds can be obtained by using variational methods. A study in this direction was previously conducted in [2]. The approach consists in integrating the local bounds for the mean squared Rao distance, as derived above, under the assumption that the Riemannian metric coincides with the Fisher information metric, over a submanifold $W \subseteq \Theta$ with boundary $\partial W \subseteq \Theta$. Specifically,
$$Y(B) = \int_W \left( \|B\|^2 + \frac{1}{kn}\left(\operatorname{div}(B) + a\right)^2 \right) dV,$$
where a = n if the sectional curvatures are nonpositive and a = 1 otherwise.
The functional above depends solely on the vector field $B$, and the problem reduces to finding the $C^\infty$ vector field $B$ that minimizes $Y(B)$. Since the minimization is performed over a class of vector fields larger than that of smooth bias fields, the resulting minimum provides a lower bound for the average of the mean squared Rao distance.
Observe that the expression (2) yields a pointwise lower bound for the intrinsic risk, whose dependence on the estimator’s bias is immediately evident. Allowing for a non-negligible bias may lead to an artificial reduction of the risk—whether classical or intrinsic—but only at the expense of increasing the bias itself. This trade-off would at best indicate satisfactory performance for a specific probabilistic mechanism, corresponding to a single point in the parameter space Θ , while typically resulting in poor performance over a substantial region of the parameter space, primarily due to the growth of the bias. The minimization of (2) thus emerges naturally when considering this pointwise intrinsic bound. To assess the performance of an estimator over an entire region of the parameter space, it is therefore reasonable to consider estimators with various bias structures, since, in principle, biased estimators may outperform unbiased ones when evaluated over a given region. The problem may then be formulated as follows: among all estimators exhibiting a prescribed form of bias, determine the bias structure that minimizes a lower bound on the risk over the region of interest. In particular, this formulation leads to a relatively simple variational problem, considerably more tractable than the one posed directly in terms of the field A.
In applications, the integration region $W$ should be selected as the subset of $\Theta$ within which the true probabilistic mechanism that generates the data is expected to lie. Consequently, since this region can be chosen arbitrarily, it will be chosen in such a way that any well-behaved estimator can be expected to exhibit a low or vanishing risk on the boundary $\partial W$, and consequently a small or negligible squared norm of the bias vector, since $\|B\|^2 \le E_\theta\!\left[\rho^2(U,\theta)\right]$. Hence, $W$ is chosen so that the bias values on its boundary can be considered zero or negligible. These assumptions can reasonably be made within a broad class of statistical models. In situations where they fail to hold (typically because the true parameter value lies on the boundary of $\Theta$), the boundary conditions must be modified appropriately to reflect the specific structure of the model under consideration.
Lemma 1.
The $C^\infty$ field $B$ minimizes the functional
$$Y(B) = \int_W \left( \|B\|^2 + \frac{1}{kn}\left(\operatorname{div}(B) + a\right)^2 \right) dV$$
if and only if it satisfies
$$B - \frac{1}{kn}\,\operatorname{grad}(\operatorname{div}(B)) = 0,\ \ \theta\in W; \qquad \operatorname{div}(B) + a = 0,\ \ \theta\in\partial W,$$
and the minimum value is given by
$$Y^* = \frac{a^2}{kn}\,\operatorname{vol}(W) - \frac{a}{kn}\int_{\partial W} \|B^*\|\,d\sigma = \frac{a^2}{kn}\,\operatorname{vol}(W) + \frac{a}{kn}\int_W \operatorname{div}(B^*)\,dV,$$
where $B^*$ satisfies (9) and $d\sigma$ denotes the element of induced surface area on $\partial W$.
Proof. 
Consider the first variation δ Y ( B , η ) , where η is an arbitrary smooth vector field. A direct computation yields
$$\delta Y(B,\eta) \equiv \lim_{\epsilon\to 0}\frac{Y(B+\epsilon\eta) - Y(B)}{\epsilon} = \int_W \left( 2\langle B,\eta\rangle + \frac{2}{kn}\,\operatorname{div}(\eta)\left(\operatorname{div}(B)+a\right) \right) dV.$$
Moreover, the Gâteaux variations satisfy
$$Y(B+\eta) - Y(B) = \delta Y(B,\eta) + \int_W \left( \|\eta\|^2 + \frac{1}{kn}\left(\operatorname{div}(\eta)\right)^2 \right) dV,$$
which shows that the functional $Y$ is minimized at any point $B$ at which its Gâteaux variation vanishes. In addition, since the integral term in (11) is strictly positive for every nonzero smooth vector field $\eta$, the functional $Y$ is strictly convex; see, for example, [20]. Consequently, every stationary point is necessarily a global minimizer. The stationary condition $\delta Y(B,\eta) = 0$ is equivalent to
$$\int_W \left( \langle B,\eta\rangle + \frac{1}{kn}\,\operatorname{div}(\eta)\left(\operatorname{div}(B)+a\right) \right) dV = 0.$$
Using the identity
$$\operatorname{div}(fX) = f\,\operatorname{div}(X) + \langle X, \operatorname{grad}(f)\rangle,$$
it follows that
$$\frac{1}{kn}\,\operatorname{div}(\eta)\left(\operatorname{div}(B)+a\right) = \operatorname{div}\!\left(\frac{1}{kn}\,\eta\left(\operatorname{div}(B)+a\right)\right) - \left\langle \operatorname{grad}\!\left(\frac{1}{kn}\left(\operatorname{div}(B)+a\right)\right),\ \eta\right\rangle = \frac{1}{kn}\,\operatorname{div}\!\left(\eta\left(\operatorname{div}(B)+a\right)\right) - \frac{1}{kn}\left\langle\operatorname{grad}(\operatorname{div}(B)),\ \eta\right\rangle.$$
Hence, the stationary condition can be written as
$$\int_W \left( \langle B,\eta\rangle + \frac{1}{kn}\,\operatorname{div}\!\left(\eta\left(\operatorname{div}(B)+a\right)\right) - \frac{1}{kn}\langle\operatorname{grad}(\operatorname{div}(B)),\eta\rangle \right) dV = 0.$$
By the Gauss divergence theorem, this expression becomes
$$\int_W \left\langle B - \frac{1}{kn}\,\operatorname{grad}(\operatorname{div}(B)),\ \eta\right\rangle dV + \frac{1}{kn}\int_{\partial W} \left(\operatorname{div}(B)+a\right)\langle\eta,\nu\rangle\, d\sigma = 0,$$
where $d\sigma$ denotes the Riemannian measure induced on $\partial W$ and $\nu$ is the outward unit normal vector field on $\partial W$. Equation (9) follows from the fact that the preceding equality holds for all $\eta$.
For the second part of the proposition, applying condition (9) together with (12) gives
$$\|B^*\|^2 = \frac{1}{kn}\left( \operatorname{div}\!\left(B^*\operatorname{div}(B^*)\right) - \left(\operatorname{div}(B^*)\right)^2 \right).$$
Substituting this expression into Y ( B ) yields
$$Y^* = \int_W \frac{1}{kn}\left( \operatorname{div}\!\left(B^*\operatorname{div}(B^*)\right) + a^2 + 2a\,\operatorname{div}(B^*) \right) dV = \frac{a^2}{kn}\,\operatorname{vol}(W) + \frac{1}{kn}\int_{\partial W} \left\langle \left(\operatorname{div}(B^*) + 2a\right)B^*,\ \nu\right\rangle d\sigma.$$
From the second stationary condition in (9), it follows that
$$Y^* = \frac{a^2}{kn}\,\operatorname{vol}(W) + \frac{a}{kn}\int_{\partial W} \langle B^*, \nu\rangle\, d\sigma.$$
Since $-a \le \operatorname{div}(B^*) \le 0$ in $W$, with $\operatorname{div}(B^*) = -a$ on $\partial W$, together with $B^* = \frac{1}{kn}\,\operatorname{grad}(\operatorname{div}(B^*))$, it follows that $\langle B^*,\nu\rangle = -\|B^*\|$. Finally, by another application of the Gauss divergence theorem, the second equality in (10) is obtained. □
Remark 2.
The minimal value of $Y(B)$ depends solely on the divergence of the optimal field, $\operatorname{div}(B^*)$. Let $f^* \equiv \operatorname{div}(B^*)$. It follows from condition (9) that $f^*$ satisfies the boundary value problem
$$\Delta f = kn\,f, \qquad f(\theta) = -a,\ \ \theta\in\partial W,$$
where Δ denotes the Laplace–Beltrami operator associated with the Riemannian metric on W. We have obtained an explicit solution to this problem in the case where W = S R , the geodesic ball of radius R, under the assumption of constant sectional curvature K .
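This boundary value problem is easy to check numerically. The Python sketch below (our addition) solves it by finite differences in the one-dimensional Euclidean case $W = (-R,R)$, $n = 1$, $a = 1$, where the closed-form solution is $f(r) = -\cosh(\sqrt{k}\,r)/\cosh(\sqrt{k}\,R)$.

```python
import numpy as np

# Finite-difference check of the boundary value problem of Remark 2 for n = 1
# on W = (-R, R):  f'' = k f with f(+-R) = -a, a = 1 (Euclidean case).
# Closed form: f(r) = -cosh(sqrt(k) r) / cosh(sqrt(k) R).
k, R, a, m = 10.0, 1.0, 1.0, 801
r = np.linspace(-R, R, m)
h = r[1] - r[0]

# Tridiagonal system for the interior nodes; Dirichlet data moved to the RHS.
main = np.full(m - 2, -2.0 / h**2 - k)
off = np.full(m - 3, 1.0 / h**2)
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(m - 2)
rhs[0] = rhs[-1] = a / h**2

f = np.linalg.solve(M, rhs)
exact = -np.cosh(np.sqrt(k) * r[1:-1]) / np.cosh(np.sqrt(k) * R)
print("max abs error:", np.max(np.abs(f - exact)))   # small, O(h^2)
```

The computed solution is negative in the interior and attains the boundary value $-a$, matching the sign pattern used in the proofs below.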
Theorem 2.
Let the parametric statistical model be a Riemannian manifold with constant sectional curvature $K$. Then, for geodesic balls $S_R \subseteq \Theta$ centered at $\gamma\in\Theta$ of radius $R$ less than the injectivity radius at $\gamma$, and satisfying $|K|\,S_K^2(R) < 1$, the average of the mean squared Rao distance satisfies the following lower bound:
$$R_U^2(S_R) \ge \frac{a^2}{kn}\left( 1 - \frac{f'(R)\,S_K^{n-1}(R)}{kn\,f(R)\int_0^R S_K^{n-1}(r)\,dr} \right),$$
where
$$f(R) = a_0\sum_{j=0}^{\infty}\frac{\prod_{s=1}^{j}\left(kn + 2K(s-1)(n+2s-3)\right)}{j!\,\left(\tfrac{n}{2}\right)_j\,4^j}\,S_K^{2j}(R)$$
and the function $S_K(t)$ is defined by
$$S_K(t) = \begin{cases} \sin(\sqrt{K}\,t)/\sqrt{K} & \text{if } K > 0, \\ t & \text{if } K = 0, \\ \sinh(\sqrt{-K}\,t)/\sqrt{-K} & \text{if } K < 0. \end{cases}$$
Proof. 
By symmetry and uniqueness, the solution of the boundary value problem in the geodesic ball S R ,
$$\Delta f = kn\,f, \quad\text{with } f(\theta) = -a \ \ \forall\theta\in\partial S_R,$$
depends only on the geodesic distance to the center of $S_R$. Using geodesic spherical coordinates $(r,u)$ with origin at the center of the geodesic ball, the Riemannian volume element satisfies $\sqrt{g}(r,u) = S_K^{n-1}(r)\,\Omega(u)$ (see Appendix A). Hence
$$\Delta f = \frac{1}{\sqrt{g}}\,\frac{d}{dr}\!\left(\sqrt{g}\,\frac{df}{dr}\right) = \frac{1}{S_K^{n-1}}\,\frac{d}{dr}\!\left(S_K^{n-1}\,\frac{df}{dr}\right),$$
and the differential equation can be written as
$$(n-1)\,\frac{S_K'}{S_K}\,f' + f'' = kn\,f.$$
Let $v = S_K(r)$ and define $h(v) = f(S_K^{-1}(v))$. Using the relations
$$f'(r) = S_K'(r)\,h'(v) \quad\text{and}\quad f''(r) = S_K''(r)\,h'(v) + S_K'^{\,2}(r)\,h''(v),$$
we obtain
$$(n-1)\,\frac{S_K'^{\,2}(r)}{S_K(r)}\,h'(v) + S_K''(r)\,h'(v) + S_K'^{\,2}(r)\,h''(v) = kn\,h(v).$$
Since the identities
$$S_K'^{\,2}(r) + K\,S_K^2(r) = 1 \quad\text{and}\quad S_K''(r) + K\,S_K(r) = 0$$
hold, substitution yields
$$v\,(1-Kv^2)\,h''(v) + \left(n(1-Kv^2) - 1\right)h'(v) - kn\,v\,h(v) = 0.$$
Assuming a power series expansion $h(v) = \sum_{j=0}^{\infty} a_j v^j$, with
$$h'(v) = \sum_{j=1}^{\infty} a_j\,j\,v^{j-1}, \qquad h''(v) = \sum_{j=2}^{\infty} a_j\,j(j-1)\,v^{j-2},$$
and substituting into the above equation, we find
$$\sum_{j=2}^{\infty} a_j\,j(j-1)\,v^{j-1} - K\sum_{j=2}^{\infty} a_j\,j(j-1)\,v^{j+1} + (n-1)\sum_{j=1}^{\infty} a_j\,j\,v^{j-1} - Kn\sum_{j=1}^{\infty} a_j\,j\,v^{j+1} - kn\sum_{j=0}^{\infty} a_j\,v^{j+1} = 0,$$
that is,
$$(n-1)\,a_1 + \left(2(n-1)a_2 + 2a_2 - kn\,a_0\right)v + \sum_{j=1}^{\infty}\left( a_{j+2}(j+2)(n+j) - a_j\left(kn + Kj(n+j-1)\right) \right)v^{j+1} = 0.$$
For $n \ge 1$, this yields the recurrence
$$a_1 = 0 \quad\text{and}\quad a_{j+2} = \frac{kn + Kj(n+j-1)}{(n+j)(j+2)}\,a_j, \qquad j \ge 0.$$
Hence,
$$h(v) = a_0\sum_{j=0}^{\infty}\frac{\prod_{s=1}^{j}\left(kn + 2K(s-1)(n+2s-3)\right)}{j!\,\left(\tfrac{n}{2}\right)_j\,4^j}\,v^{2j},$$
and consequently,
$$f(r) = a_0\sum_{j=0}^{\infty}\frac{\prod_{s=1}^{j}\left(kn + 2K(s-1)(n+2s-3)\right)}{j!\,\left(\tfrac{n}{2}\right)_j\,4^j}\,S_K^{2j}(r),$$
where $a_0$ is determined from the boundary condition $f(R) = -a$. It is straightforward to verify that this series converges whenever $|K|\,S_K^2(r) < 1$, which holds automatically for nonnegative sectional curvature.
To compute $\int_{S_R} f\,dV$, note that, in spherical coordinates (see Appendix A),
$$\int_{S_R} f\,dV = \operatorname{area}(S)\int_0^R S_K^{n-1}(r)\,f(r)\,dr,$$
where $\operatorname{area}(S) = 2\pi^{n/2}/\Gamma(n/2)$ is the area of the unit sphere $S$ in $\mathbb{R}^n$. Since $\left(S_K^{n-1}(r)\,f'(r)\right)' = kn\,S_K^{n-1}(r)\,f(r)$, we obtain
$$\int_{S_R} f\,dV = \frac{2\pi^{n/2}}{kn\,\Gamma(n/2)}\,S_K^{n-1}(R)\,f'(R),$$
and thus
$$Y^* = \frac{a^2}{kn}\,\operatorname{vol}(S_R)\left( 1 - \frac{f'(R)\,S_K^{n-1}(R)}{kn\,f(R)\int_0^R S_K^{n-1}(r)\,dr} \right). \qquad\square$$
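The series solution and the resulting bound are straightforward to evaluate numerically. The following Python sketch (our addition) builds $f$ from the recurrence derived in the proof; the values of $n$, $k$, $K$, and $R$ are arbitrary and are chosen so that $|K|\,S_K^2(R) < 1$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical evaluation of the Theorem 2 lower bound from the series solution,
# using a_{j+2} = (kn + K j (n+j-1)) / ((n+j)(j+2)) a_j; a_0 cancels because
# the bound depends only on the ratio f'(R) / f(R).
n, k, K, R = 2, 10, -0.5, 1.0
kn = k * n

def S(t):       # comparison function S_K
    if K > 0:   return np.sin(np.sqrt(K) * t) / np.sqrt(K)
    if K < 0:   return np.sinh(np.sqrt(-K) * t) / np.sqrt(-K)
    return t

def Sp(t):      # derivative S_K'
    if K > 0:   return np.cos(np.sqrt(K) * t)
    if K < 0:   return np.cosh(np.sqrt(-K) * t)
    return 1.0

coef = [1.0]    # even coefficients a_0, a_2, a_4, ... up to order 60
for j in range(0, 60, 2):
    coef.append(coef[-1] * (kn + K * j * (n + j - 1)) / ((n + j) * (j + 2)))

v = S(R)
f_R = sum(c * v ** (2 * j) for j, c in enumerate(coef))
fp_R = Sp(R) * sum(c * 2 * j * v ** (2 * j - 1) for j, c in enumerate(coef))

a = n if K <= 0 else 1
integral = quad(lambda t: S(t) ** (n - 1), 0, R)[0]
bound = a**2 / kn * (1 - fp_R * S(R) ** (n - 1) / (kn * f_R * integral))
print(f"lower bound on S_R: {bound:.5f}   (intrinsic CR value n/k = {n/k:.3f})")
```

As expected, the regional bound lies below the pointwise intrinsic Cramér–Rao value $n/k$.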
Corollary 3.
When the parametric statistical model is a Euclidean manifold, the following lower bound holds for the Riemannian average of the mean squared Rao distance on a ball centered at γ Θ of radius R less than the injectivity radius at γ:
$$R_U^2(S_R) \ge \frac{n}{k}\left( 1 - \frac{{}_0F_1\!\left(; \tfrac{n}{2}+1; \tfrac{knR^2}{4}\right)}{{}_0F_1\!\left(; \tfrac{n}{2}; \tfrac{knR^2}{4}\right)} \right),$$
where ${}_0F_1(; a; z)$ denotes the confluent hypergeometric limit function; see (A2) in Appendix A.
Moreover, if the Euclidean manifold $\Theta$ is complete and simply connected, then the following lower bound holds globally:
$$R_U^2(\Theta) \ge \lim_{R\to\infty} R_U^2(S_R) \ge \frac{n}{k}.$$
Proof. 
The result follows directly as a particular case of Equation (15) with constant sectional curvature $K = 0$. The second assertion is obtained by taking the limit $R \to \infty$ in Equation (18). □
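In the Euclidean case the bound is directly computable with SciPy's confluent hypergeometric limit function. The sketch below (our addition; $n$ and $k$ arbitrary) evaluates (18) and illustrates its monotone increase toward $n/k$, the behavior displayed in Figure 1.

```python
import numpy as np
from scipy.special import hyp0f1

# Evaluation of the Euclidean lower bound (18); hyp0f1(b, z) is SciPy's 0F1.
n, k = 2, 10
for R in (0.25, 0.5, 1.0, 2.0, 5.0):
    z = k * n * R**2 / 4
    bound = (n / k) * (1 - hyp0f1(n / 2 + 1, z) / hyp0f1(n / 2, z))
    print(f"R={R:4.2f}  bound={bound:.5f}")
print("R -> infinity limit (n/k):", n / k)
```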
Note again that the geometry of the model influences the lower bounds of the Riemannian risk—both in its local and integral forms—when these bounds are extended over a region of the parameter space, and that it does so in a non-trivial manner, a fact that merits further investigation.
Example 1.
Consider the $n$-variate normal distribution with known covariance matrix $\Sigma$. For a sample of size $k$, the Riemannian risk, measured as the mean squared Rao distance, associated with the sample mean $\bar{X}_k$ is given by
$$R_U^2(S_R) = n/k,$$
which coincides with the lower bound (19) derived in the preceding corollary.
Observe that in this case the parameter space is $\Theta = \mathbb{R}^n$. It is well known that this model is invariant under the action of a subgroup of the affine group that leaves the sample variance–covariance matrix unchanged. In this setting, the unique equivariant estimator throughout the parameter space is the sample mean $U(X_1,\dots,X_k) = \bar{X}_k = \frac{1}{k}\sum_{j=1}^{k} X_j$. The induced group acting on the parameter space is, in this case, transitive and commutative, implying that the estimator is intrinsically unbiased; see [21], and in the context of intrinsic analysis [22].
The intrinsic risk of $U$ is $n/k$, which coincides with the intrinsic Cramér–Rao bound [7]. However, in a geodesic ball of radius $R$, the Riemannian mean may achieve an integrated intrinsic risk strictly smaller than $n/k$, provided that estimators with appropriate bias are allowed, although this quantity tends to $n/k$ as $R \to \infty$, as we naturally expect. This is consistent with the existence of shrinkage estimators of the James–Stein type; see [23]. In the present example, though, such estimators cannot be equivariant under the group action.
Figure 1a,b illustrate these phenomena for ( n , k ) = ( 2 , 10 ) and ( n , k ) = ( 5 , 50 ) , respectively.
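The claims of this example are easy to verify by simulation. The Python sketch below (our addition) estimates the intrinsic risk of the sample mean by Monte Carlo, using the fact that for the normal model with known covariance the squared Rao distance between mean vectors is the Mahalanobis form $(\mu_1-\mu_2)^{\top}\Sigma^{-1}(\mu_1-\mu_2)$; the particular $\Sigma$ and $\mu$ below are arbitrary.

```python
import numpy as np

# Monte Carlo check that the intrinsic risk of the sample mean in the
# n-variate normal model with known covariance Sigma equals n/k, with
# rho^2(mu1, mu2) = (mu1 - mu2)' Sigma^{-1} (mu1 - mu2).
rng = np.random.default_rng(1)
n, k, reps = 2, 10, 100_000
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
Sinv = np.linalg.inv(Sigma)
mu = np.array([1.0, -0.5])

x = rng.multivariate_normal(mu, Sigma, size=(reps, k))
d = x.mean(axis=1) - mu                       # deviations of the sample means
risk = np.einsum("ri,ij,rj->r", d, Sinv, d).mean()
print(f"Monte Carlo intrinsic risk: {risk:.5f}   n/k = {n / k:.5f}")
```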
The methods presented above apply to any statistical model satisfying standard regularity conditions, independently of the particular Riemannian geometry induced by the information metric. Moreover, they remain valid regardless of the probability distribution that the estimator of the probabilistic mechanism determined by $\theta$ induces in the parameter space. The only aspect that may vary is the complexity of the resulting calculations.
In the univariate case n = 1 , the parameter manifold is Euclidean, and the corresponding expression simplifies to
$$\frac{Y^*}{2R} = \frac{1}{k}\left( 1 - \frac{\tanh(\sqrt{k}\,R)}{\sqrt{k}\,R} \right),$$
which agrees with the result originally obtained by Chentsov [2].
We now examine the Euclidean case in Cartesian coordinates. Let us fix a coordinate system with origin at an arbitrary point θ 0 and consider the cube
$$C_R = \left\{ x\in\mathbb{R}^n : |x^i| \le R,\ i = 1,\dots,n \right\}.$$
In this setting, the corresponding variational problem reduces to solving the Dirichlet boundary value problem
$$\sum_{i=1}^{n}\frac{\partial^2 f}{\partial (x^i)^2} = kn\,f, \qquad f(x) = -n \ \text{ if } |x^i| = R \text{ for some } i\in\{1,\dots,n\}.$$
Looking for a solution of the form
$$f(x) = \sum_{i=1}^{n} f_i(x^i),$$
with real-valued functions $f_i : \mathbb{R}\to\mathbb{R}$, each depending on the single variable $x^i$, we obtain
$$\sum_{i=1}^{n}\frac{d^2 f_i}{d(x^i)^2} = kn\sum_{i=1}^{n} f_i(x^i), \qquad \sum_{i=1}^{n} f_i(\pm R) = -n.$$
A convenient particular solution is of the form $f(x) = \sum_{i=1}^{n} g(x^i)$, where $g$ satisfies
$$\frac{d^2 g}{dz^2} = kn\,g(z), \qquad g(\pm R) = -1.$$
The unique solution is given by
$$g(z) = -\frac{\cosh(\sqrt{kn}\,z)}{\cosh(\sqrt{kn}\,R)},$$
so that
$$f(x) = -\sum_{i=1}^{n}\frac{\cosh(\sqrt{kn}\,x^i)}{\cosh(\sqrt{kn}\,R)}.$$
Substituting this expression into the functional yields
$$\frac{Y^*}{\operatorname{vol}(C_R)} = \frac{n}{k}\left( 1 - \frac{g'(R)\,\operatorname{area}(\partial C_R)}{kn^2\, g(R)\,\operatorname{vol}(C_R)} \right) = \frac{n}{k}\left( 1 - \frac{\tanh(\sqrt{kn}\,R)}{\sqrt{kn}\,R} \right),$$
which constitutes an improvement over the result obtained by Chentsov [2].
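For reference, the cube bound is trivial to evaluate; the sketch below (our addition) tabulates it for a few arbitrary choices of $n$ and $k$, recovering the univariate expression above at $n = 1$.

```python
import numpy as np

# The cube lower bound  Y*/vol(C_R) = (n/k) (1 - tanh(sqrt(kn) R)/(sqrt(kn) R)),
# which reduces to Chentsov's bound (1/k)(1 - tanh(sqrt(k) R)/(sqrt(k) R)) at n=1.
def cube_bound(n, k, R):
    s = np.sqrt(k * n) * R
    return (n / k) * (1 - np.tanh(s) / s)

k = 10
for n in (1, 2, 5):
    row = "  ".join(f"{cube_bound(n, k, R):.5f}" for R in (0.5, 1.0, 3.0))
    print(f"n={n}: {row}   (n/k = {n / k:.2f})")
```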
Furthermore, by Corollary 1, a similar inequality can be established in the general non-Euclidean case (with a fixed coordinate system). Specifically, the MSE satisfies the bound
$$R_U^2(C_R) \ge \frac{n^2}{k\eta}\left( 1 - \frac{\tanh(\sqrt{k\eta}\,R)}{\sqrt{k\eta}\,R} \right),$$
where $\eta$ denotes an upper bound of $\sum_\beta g_{\beta\beta}$ within $C_R$.
Analogous lower bounds can also be obtained for more general settings.
Theorem 3.
Let the parametric statistical model be represented by a Riemannian manifold whose sectional curvatures are bounded from above by K . Then the average of the mean squared Rao distance satisfies the lower bound
$$R_U^2(S_R) \ge \frac{a^2}{kn}\left( 1 - \frac{f_K'(R)\,\operatorname{area}(\partial S_R)}{kn\,f_K(R)\,\operatorname{vol}(S_R)} \right) > 0,$$
where $\operatorname{area}(\partial S_R)$ denotes the area of the boundary of an $n$-dimensional geodesic ball centered at $\gamma\in\Theta$ of radius $R$ less than the injectivity radius at $\gamma$, $\operatorname{vol}(S_R)$ its corresponding volume, and $f_K(r)$ is the solution of the boundary value problem (17) on a manifold of constant sectional curvature $K$.
Proof. 
Using geodesic spherical coordinates $(r,u)$, let $f(r,u)$ denote the solution of the boundary value problem (17) on a manifold with sectional curvatures bounded above by $K$, and let $f_K(r)$ denote the solution of the same problem on a manifold of constant sectional curvature $K$. We have
$$\Delta f_K = \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial r}\!\left( \sqrt{g}\,\frac{\partial f_K}{\partial r} \right) = \frac{\partial^2 f_K}{\partial r^2} + \frac{\partial \ln\sqrt{g}}{\partial r}\,\frac{\partial f_K}{\partial r}.$$
By Bishop's comparison theorem, it follows that
$$(n-1)\,\frac{S_K'}{S_K} \le \frac{\partial \ln\sqrt{g}}{\partial r},$$
and, since $\partial f_K/\partial r \le 0$, we obtain
$$\Delta f_K - kn\,f_K \le \Delta_K f_K - kn\,f_K = 0,$$
where Δ K denotes the Laplace–Beltrami operator for a manifold of constant sectional curvature K . Therefore,
$$\Delta f_K - kn\,f_K \le \Delta f - kn\,f = 0.$$
Since $f_K(\theta) = f(\theta) = -a$ for all $\theta\in\partial S_R$, the comparison theorem for elliptic differential equations (see [24], Theorem 6, p. 243) implies
$$f(\theta) \le f_K(\theta), \qquad \theta\in S_R,$$
and equality on the boundary gives
$$\frac{\partial f}{\partial r}(\theta) \ge \frac{\partial f_K}{\partial r}(\theta), \qquad \theta\in\partial S_R.$$
Using (9) and (13), and noting that $\langle B^*,\nu\rangle = \frac{1}{kn}\,\partial f/\partial r$ on $\partial S_R$, we then obtain
$$Y^* = \frac{a^2}{kn}\,\operatorname{vol}(S_R) + \frac{a}{(kn)^2}\int_{\partial S_R} \frac{\partial f}{\partial r}\,d\sigma \ge \frac{a^2}{kn}\,\operatorname{vol}(S_R)\left( 1 - \frac{f_K'(R)\,\operatorname{area}(\partial S_R)}{kn\,f_K(R)\,\operatorname{vol}(S_R)} \right),$$
which establishes the claimed bound. □
Remark 3.
Estimates for the volumes of geodesic balls provided in Appendix A are instrumental in obtaining explicit expressions for the lower bounds derived above. In particular, if the sectional curvatures of the parametric manifold are bounded from below by $\kappa$ and from above by $K$, then, according to Proposition A3, the ratio between the area and the volume of the geodesic ball of radius $R$ satisfies
$$\frac{\operatorname{area}(\partial S_R)}{\operatorname{vol}(S_R)} \le \frac{S_\kappa^{n-1}(R)}{\int_0^R S_K^{n-1}(r)\,dr},$$
where S K ( · ) denotes the comparison function associated with curvature K (as defined in (16)).

4. Lower Bounds for the Maximum Risk

Although one may employ the Riemannian average of the risk to obtain bounds on the maximum risk, alternative minimax bounds can be derived by a more direct argument, as shown below.
Lemma 2.
Let $X$ be a smooth vector field on the parameter manifold $\Theta$ such that $\operatorname{div}(X) \le -a$. Let $f$ be a nonnegative smooth function on $\Theta$, and let $W \subseteq \Theta$ be a submanifold with a smooth boundary $\partial W$. Then
$$a\int_W f\,dV \le \int_{\partial W} f\,\|X\|\,d\sigma + \int_W \|X\|\,\|\operatorname{grad}(f)\|\,dV,$$
where $d\sigma$ denotes the element of induced surface area on $\partial W$.
Proof. 
We have
$$a\int_W f\,dV \le \int_W \left( \langle X, \operatorname{grad}(f)\rangle - \operatorname{div}(fX) \right) dV \le \int_W \|X\|\,\|\operatorname{grad}(f)\|\,dV + \int_{\partial W} f\,\|X\|\,d\sigma. \qquad\square$$
Theorem 4.
Let U be an estimator in a submanifold W Θ with vol ( W ) > 0 . Then M U 2 ( W ) satisfies the inequality
$$M_U^2(W) = \sup_{\theta\in W} E_\theta\!\left[\rho^2(U,\theta)\right] \ge \frac{a^2}{\left( \dfrac{\operatorname{area}(\partial W)}{\operatorname{vol}(W)} + \sqrt{kn} \right)^2},$$
so that the right-hand side is a lower bound for the risk of the local minimax estimator on $W$.
Proof. 
Let $A = \exp_\theta^{-1}(U)$. Integrating inequality (21), applied with $X = A(x,\cdot)$ and $f = f^{(k)}(x;\cdot)$, with respect to the measure $\mu^k(dx)$ and applying Fubini's theorem yields
$$a\,\operatorname{vol}(W) \le \int_W \sqrt{E_\theta\|A\|^2}\,\sqrt{E_\theta\|C\|^2}\;dV + \int_{\partial W} E_\theta(\|A\|)\,d\sigma \le \sqrt{kn}\int_W \sqrt{E_\theta\|A\|^2}\;dV + \int_{\partial W} \sqrt{E_\theta\|A\|^2}\;d\sigma.$$
Hence,
$$\sup_{\theta\in W} E_\theta\|A\|^2 \ge \left( \frac{a\,\operatorname{vol}(W)}{\operatorname{area}(\partial W) + \sqrt{kn}\,\operatorname{vol}(W)} \right)^2. \qquad\square$$
Corollary 4.
When the parameter manifold is Euclidean and dim ( Θ ) = n , given a geodesic ball centered at γ Θ of radius R less than the injectivity radius at γ, the following lower bound holds for the local minimax risk:
$$M_U^2(S_R) \ge \left( \frac{nR}{n + \sqrt{kn}\,R} \right)^2,$$
where $M_U^2(S_R) = \sup_{\theta\in S_R} E_\theta\!\left[\rho^2(U,\theta)\right]$. If the Euclidean manifold $\Theta$ is complete and simply connected, then
$$M_U^2(\Theta) \ge \frac{n}{k}.$$
Proof. 
Since
$$\operatorname{vol}(S_r) = \frac{2\pi^{n/2}\,r^n}{n\,\Gamma(n/2)} \quad\text{and}\quad \operatorname{area}(\partial S_R) = \frac{2\pi^{n/2}\,R^{n-1}}{\Gamma(n/2)},$$
we obtain
$$\frac{\operatorname{area}(\partial S_R)}{\operatorname{vol}(S_R)} = \frac{n}{R}.$$
Substituting into Theorem 4 yields
$$M_U^2(S_R) \ge \left( \frac{nR}{n + \sqrt{kn}\,R} \right)^2.$$
The global bound follows by taking the limit $R \to \infty$. □
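A quick numerical view of this bound (our addition; $n$ and $k$ arbitrary) shows the increase with $R$ toward the global value $n/k$:

```python
import numpy as np

# Euclidean minimax lower bound of Corollary 4:  (n R / (n + sqrt(kn) R))^2.
n, k = 2, 10
for R in (0.5, 1.0, 2.0, 10.0, 100.0):
    bound = (n * R / (n + np.sqrt(k * n) * R)) ** 2
    print(f"R={R:7.1f}  bound={bound:.5f}")
print("R -> infinity limit n/k =", n / k)
```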
The lower bounds for the local minimax risk in geodesic balls of radius $R$ for the model of Example 1 are calculated using (23) and displayed graphically in Figure 2a,b.
Another lower bound for the integrated Riemannian risk is stated in the following theorem.
Theorem 5.
We obtain the following lower bound for the Riemannian average of the mean squared Rao distance over the geodesic ball $S_R$ centered at $\gamma\in\Theta$ of radius $R$ less than the injectivity radius at $\gamma$:
$$R_U^2(S_R) \ge \left( \frac{a\int_0^R \operatorname{vol}(S_r)\,dr}{\operatorname{vol}(S_R) + \sqrt{kn\,\operatorname{vol}(S_R)}\,\int_0^R \sqrt{\operatorname{vol}(S_r)}\;dr} \right)^2,$$
where $a = n$ when the sectional curvatures are nonpositive and $a = 1$ when the supremum of the sectional curvatures is positive.
Proof. 
Consider inequality (22) with $W = S_r$, $0 < r \le R$, and integrate with respect to $dr$ on $[0,R]$. We obtain
$$a\int_0^R \operatorname{vol}(S_r)\,dr \le \sqrt{kn}\int_0^R\!\!\int_{S_r} \sqrt{E_\theta\|A\|^2}\;dV\,dr + \int_0^R\!\!\int_{\partial S_r} \sqrt{E_\theta\|A\|^2}\;d\sigma\,dr \le \sqrt{kn}\int_0^R \sqrt{\operatorname{vol}(S_r)}\,\sqrt{\int_{S_r} E_\theta\|A\|^2\,dV}\;dr + \operatorname{vol}(S_R)\,\sqrt{\frac{1}{\operatorname{vol}(S_R)}\int_{S_R} E_\theta\|A\|^2\,dV}.$$
The map
$$r \mapsto \int_{S_r} E_\theta\!\left(\|A\|^2\right) dV$$
is positive and increases monotonically. Using $E_\theta\!\left(\|A\|^2\right) = E_\theta\!\left[\rho^2(U,\theta)\right]$, we obtain
$$\frac{a}{\operatorname{vol}(S_R)}\int_0^R \operatorname{vol}(S_r)\,dr \le \left( \sqrt{\frac{kn}{\operatorname{vol}(S_R)}}\int_0^R \sqrt{\operatorname{vol}(S_r)}\;dr + 1 \right)\sqrt{R_U^2(S_R)}.$$
This yields the stated bound and completes the proof. □
Corollary 5.
When the parametric statistical model is a Euclidean manifold, the Riemannian average of the mean squared Rao distance over the geodesic ball centered at γ Θ of radius R less than the injectivity radius at γ, S R , satisfies the lower bound
$$R_U^2(S_R) \ge \left( \frac{n\,(n+2)\,R}{(n+1)\left(n+2+2\sqrt{kn}\,R\right)} \right)^2.$$
If the Euclidean manifold Θ is complete and simply connected, the corresponding lower bound over the entire manifold is
$$R_U^2(\Theta) \ge \lim_{R\to\infty} R_U^2(S_R) \ge \frac{n\,(n+2)^2}{4k\,(n+1)^2}.$$
Proof. 
For a Euclidean manifold of dimension n, the volume of the geodesic ball of radius r is
$$\operatorname{vol}(S_r) = \frac{2\pi^{n/2}\,r^n}{n\,\Gamma(n/2)}.$$
Consequently,
$$\int_0^R \sqrt{\operatorname{vol}(S_r)}\;dr = \left( \frac{8\pi^{n/2}\,R^{n+2}}{n\,(n+2)^2\,\Gamma(n/2)} \right)^{1/2}, \qquad \int_0^R \operatorname{vol}(S_r)\,dr = \frac{2\pi^{n/2}\,R^{n+1}}{n\,(n+1)\,\Gamma(n/2)}.$$
Substituting these expressions into the inequality (24) yields
$$0 < \left( \frac{n\,(n+2)\,R}{(n+1)\left(n+2+2\sqrt{kn}\,R\right)} \right)^2 \le R_U^2(S_R).$$
The second assertion follows by taking the limit $R \to \infty$. □
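The following sketch (our addition; $n$ and $k$ arbitrary) evaluates this bound and its limit, which can be compared with the limit $n/k$ obtained in Corollary 3 from the variational approach:

```python
import numpy as np

# Integrated-risk lower bound of Corollary 5 and its R -> infinity limit
#   n (n+2)^2 / (4 k (n+1)^2).
def bound(n, k, R):
    return (n * (n + 2) * R / ((n + 1) * (n + 2 + 2 * np.sqrt(k * n) * R))) ** 2

n, k = 2, 10
for R in (0.5, 2.0, 10.0, 100.0):
    print(f"R={R:6.1f}  bound={bound(n, k, R):.5f}")
print("limit:", n * (n + 2) ** 2 / (4 * k * (n + 1) ** 2), "  vs  n/k =", n / k)
```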

5. Concluding Remarks

This article introduces two performance indices for statistical estimators restricted to a prescribed region of the parameter space. These indices are based on the intrinsic risk function developed in [7]. Specifically, for a bounded region W of the parameter space, we consider (i) the integral of the intrinsic risk over W with respect to the associated Riemannian volume measure, and (ii) the maximum value of the intrinsic risk attained within W. Both criteria are compatible with Bayesian methodologies.
In this framework, if it is known a priori that the true parameter belongs to a given region W, biased estimators can be constructed that outperform unbiased estimators in terms of either criterion. When there is strong practical evidence that the parameter lies within a sufficiently small region, allowing controlled bias may be preferable whenever it reduces the average Riemannian risk or the worst-case risk over W.
A representative example is examined in which the statistical model is the multivariate normal distribution with known covariance matrix and a simple random sample of size k. Numerical illustrations (Figure 1 and Figure 2) show that restricting attention to a region W allows the construction of estimators with strictly smaller integrated risk or smaller maximal risk than classical unbiased estimators, at the cost of introducing a region-dependent bias term. The magnitude and behavior of these improvements depend on the underlying Riemannian geometry induced by the statistical model, suggesting several directions for further investigation.
When the model is invariant under the action of a transformation group G, coherence considerations require restricting attention to equivariant estimators. In this setting, the corresponding intrinsic risk must remain constant along the orbits of the induced action G * in the parameter space. As shown in [21], if an intrinsically unbiased estimator exists that uniformly minimizes the Riemannian risk, then this estimator must be equivariant. The converse does not hold: equivariance, together with uniform optimality, does not, in general, imply intrinsic unbiasedness. Additional structural assumptions—such as the transitivity of the G-action and the commutativity of the induced action in G * —are required to recover this implication. In that case, the intrinsically unbiased estimator that uniformly minimizes the Riemannian risk will be preferable.

Author Contributions

Conceptualization, J.M.C. and J.M.O.; Methodology, J.M.C. and J.M.O.; Writing—original draft, J.M.C. and J.M.O.; Writing—review & editing, J.M.C. and J.M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study did not require ethical approval.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Notation and Basic Concepts of Differential Geometry

In this Appendix, we collect the most frequently used symbols in a short summary table and briefly recall several standard notions from differential geometry that will be used throughout the paper. Classical references include [16,25,26,27], among others.

Appendix A.1.1. Manifolds

Let Θ be a nonempty set. A C n-atlas on Θ is a family of charts A = { ( U i , ϕ i ) } i I , where U i i I is a cover of Θ , each map ϕ i : U i ϕ i ( U i ) R n is a bijection onto an open subset of R n , and for every i , j I the transition map ϕ j ϕ i 1 : ϕ i ( U i U j ) ϕ j ( U i U j ) is a C diffeomorphism. Charts satisfying this compatibility condition are called C -compatible. The maximal C atlas containing A determines a unique n-dimensional differentiable structure on Θ , although dependent on A . Equipped with this structure, Θ becomes a smooth manifold, where a set W Θ is defined to be open if and only if ϕ ( W U ) is open in R n for every chart ( U , ϕ ) in the maximal atlas. Additionally, a real-valued function f : U R in an open subset U Θ is said to be smooth if, for every chart ( V , φ ) with U V , the composition f φ 1 : φ ( U V ) R is a smooth function in the classical sense on an open subset of R n . This definition is independent of the chosen chart and provides the manifold with its canonical sheaf of smooth real-valued functions. It is rather straightforward and well known to extend this smoothness notion to other objects, through the convenient compatible local charts.
Table A1. Summary of the main symbols used in this article.
Symbol | Description
$f^{(k)}$ | Joint density of a sample of size $k$
$\Theta$ | Parameter space
$\partial W$ | Boundary of $W$
$U$ | Estimator
$g_{ij}$ | Metric tensor components
$A = \exp_\theta^{-1}(U)$ | Estimator vector field
$B = E_\theta\!\left[\exp_\theta^{-1}(U)\right]$ | Bias vector field
$\rho^2(U,\theta)$ | Squared Riemannian (Rao) distance
$\operatorname{grad}$ | Gradient operator
$C = \operatorname{grad}\!\left(\ln f^{(k)}\right)$ | Gradient of $\ln f^{(k)}$
$\operatorname{div}$ | Divergence operator
$\Delta$ | Laplace–Beltrami operator
$R_U^2(B)$ | Riemannian average of the intrinsic risk in $B$
$M_U^2(W)$ | Supremum of the intrinsic risk in $W$
$S_R$ | Geodesic ball of radius $R$

Appendix A.1.2. Vectors and Lie Bracket

Let $\theta\in\Theta$. Denote by $C^\infty(\theta)$ the set of all smooth functions defined on an open neighborhood of $\theta$. A tangent vector at a point $\theta\in\Theta$ is a linear map $\xi_\theta : C^\infty(\theta)\to\mathbb{R}$ which also satisfies the Leibniz rule $\xi_\theta(fg) = f(\theta)\,\xi_\theta(g) + g(\theta)\,\xi_\theta(f)$, for all $f,g\in C^\infty(\theta)$. The collection of all tangent vectors at $\theta$ forms the tangent space at $\theta$, $T_\theta\Theta$, a real vector space of dimension $n$. Given a local chart $(U,\phi)$ around $\theta$, let $\pi^i$ be the $i$-th projection function and let $\phi^i = \pi^i\circ\phi$, so that $(\phi^1,\dots,\phi^n)$ is a coordinate system. The tangent vectors $\partial/\partial\phi^i|_\theta$, defined by $\partial/\partial\phi^i|_\theta\, f = D_i\!\left(f\circ\phi^{-1}\right)(\phi(\theta))$, $i = 1,\dots,n$, where $D_i$ denotes the $i$-th ordinary partial derivative in $\mathbb{R}^n$, form a natural basis associated with the local chart.
A vector field on a subset $W\subseteq\Theta$ is a map assigning to each point $\theta\in W$ a tangent vector $\xi_\theta\in T_\theta\Theta$. The field $\xi$ is of class $C^\infty$ on $W$ if and only if $W$ is open and, for every smooth function $f$ defined on an open set $B\subseteq\Theta$, the function $\theta\mapsto(\xi f)(\theta) = \xi_\theta(f)$ is $C^\infty$ on $W\cap B$.
Given a coordinate system $(\phi^1,\dots,\phi^n)$, a smooth vector field $\xi$ is represented by its component functions $\xi^1,\dots,\xi^n$ via $\xi_\theta = \sum_{i=1}^n \xi^i_\theta\,\partial/\partial\phi^i|_\theta$. In coordinates, we thus identify the field with the map $\theta\mapsto\left(\xi^1(\phi(\theta)),\dots,\xi^n(\phi(\theta))\right)$. For two smooth vector fields $\xi$ and $\zeta$ on $W$, their Lie bracket is the vector field $[\xi,\zeta]$ defined by $[\xi,\zeta]_\theta(f) = \xi_\theta(\zeta f) - \zeta_\theta(\xi f)$, for every smooth function $f$.

Appendix A.1.3. Tensors

Let $\Theta$ be an $n$-dimensional smooth manifold and let $r, s \ge 0$ be integers. An $(r,s)$-tensor field $A$ on an open set $U\subseteq\Theta$ is a smooth assignment of a map
$$A_\theta : \underbrace{T_\theta^*\Theta \times\cdots\times T_\theta^*\Theta}_{r} \times \underbrace{T_\theta\Theta \times\cdots\times T_\theta\Theta}_{s} \to \mathbb{R}$$
which is multilinear in each argument and depends smoothly on $\theta\in U$.
Given a coordinate chart $(\theta^1,\dots,\theta^m)$, the induced local frame $\partial/\partial\theta^1,\dots,\partial/\partial\theta^m$ spans each tangent space $T_\theta\Theta$, and the dual frame $d\theta^1,\dots,d\theta^m$ spans $T_\theta^*\Theta$. With respect to these bases, the tensor field $A$ has components $A^{\alpha_1\cdots\alpha_r}_{\beta_1\cdots\beta_s}$. Under a coordinate change $(\theta^1,\dots,\theta^m)\to(\bar\theta^1,\dots,\bar\theta^m)$ the components transform according to the classical rule
$$\bar A^{\alpha_1\cdots\alpha_r}_{\beta_1\cdots\beta_s} = \frac{\partial\bar\theta^{\alpha_1}}{\partial\theta^{i_1}}\cdots\frac{\partial\bar\theta^{\alpha_r}}{\partial\theta^{i_r}}\;\frac{\partial\theta^{j_1}}{\partial\bar\theta^{\beta_1}}\cdots\frac{\partial\theta^{j_s}}{\partial\bar\theta^{\beta_s}}\;A^{i_1\cdots i_r}_{j_1\cdots j_s},$$
where repeated indices are summed and all quantities are evaluated at the same point θ .
The case r = s = 0 corresponds to smooth real-valued functions in Θ , which are invariant under coordinate transformations. When r = 0 , the tensor field is called s-covariant; when s = 0 , it is r-contravariant. Throughout, any object or property that is independent of the chosen coordinate system in Θ will be called intrinsic.

Appendix A.1.4. Riemannian Manifold

A Riemannian manifold is a smooth manifold $\Theta$ equipped with a Riemannian metric, that is, a smooth $(0,2)$-tensor field $g$ that is positive definite on each tangent space. Equivalently, for every $\theta\in\Theta$, the metric assigns an inner product $\langle\cdot,\cdot\rangle_\theta : T_\theta\Theta\times T_\theta\Theta\to\mathbb{R}$ that varies smoothly with $\theta$. In local coordinates $(\theta^1,\dots,\theta^m)$, the metric is represented by the symmetric, positive-definite matrix-valued function $g_{\mu\nu}(\theta) = \langle\partial/\partial\theta^\mu, \partial/\partial\theta^\nu\rangle_\theta$, and the corresponding line element is written, using the summation convention,
$$ds^2 = g_{\mu\nu}(\theta)\,d\theta^\mu\,d\theta^\nu.$$
Associated with the metric $g$ are the canonical index-raising and index-lowering operations, which provide isomorphisms between the tangent bundle $T\Theta$ and the cotangent bundle $T^*\Theta$. The flat map $\flat : T\Theta\to T^*\Theta$ is defined by $\xi^\flat(\eta) = \langle\xi,\eta\rangle_\theta$, $\forall\eta\in T_\theta\Theta$, which enables the identification of a contravariant vector field with a covariant one by lowering the index. In contrast, the sharp map $\sharp : T^*\Theta\to T\Theta$ is defined as the inverse of $\flat$; that is, $\omega^\sharp$ is the unique vector in $T_\theta\Theta$ satisfying $\omega(\eta) = \langle\omega^\sharp,\eta\rangle_\theta$, $\forall\eta\in T_\theta\Theta$. In coordinates, these operations are expressed by contraction with the metric tensor and its inverse: lowering an index uses $g_{\mu\nu}$, while raising an index uses the components $g^{\mu\nu}$ of the inverse metric. Thus, in component form, the passage between covariant and contravariant tensors is achieved by multiplication by $g_{\mu\nu}$ or $g^{\mu\nu}$, respectively.

Appendix A.1.5. Gradient, Divergence and the Covariant Derivative

For a smooth function $q$, its gradient is the unique vector field $\operatorname{grad}(q)$ satisfying $\langle\operatorname{grad}(q),\zeta\rangle_\theta = \zeta_\theta(q)$, $\forall\zeta\in T_\theta\Theta$, so that $\operatorname{grad}(q) = (dq)^\sharp$, where $dq$ is the covariant vector whose components with respect to the dual frame $d\theta^1,\dots,d\theta^n$ are $\left(\partial q/\partial\theta^1,\dots,\partial q/\partial\theta^n\right)$. By the Cauchy–Schwarz inequality, $|\zeta_\theta(q)| \le \|\operatorname{grad}(q)\|$ for unit vectors $\zeta$, with equality when $\zeta = \operatorname{grad}(q)/\|\operatorname{grad}(q)\|$.
A connection on $T\Theta$ is a bilinear operator $\nabla_\xi\zeta$ assigning to each tangent vector $\xi$ and local vector field $\zeta$ another tangent vector at the same point, satisfying the usual linearity and Leibniz properties. On a Riemannian manifold, the Levi–Civita connection $\nabla$ is uniquely characterized by (1) torsion-freeness: $\nabla_\zeta\xi - \nabla_\xi\zeta = [\zeta,\xi]$, and (2) metric compatibility: $\xi\langle\zeta,\eta\rangle = \langle\nabla_\xi\zeta,\eta\rangle + \langle\zeta,\nabla_\xi\eta\rangle$.
In coordinates, $\nabla_\xi\zeta = \left( \frac{\partial\zeta^\mu}{\partial\theta^\nu} + \Gamma^\mu_{\nu\lambda}\,\zeta^\lambda \right)\xi^\nu\,\frac{\partial}{\partial\theta^\mu}$, where the Christoffel symbols are given, using the repeated index summation convention, by $\Gamma^\mu_{\nu\lambda} = \frac{1}{2}\,g^{\mu\alpha}\left( \frac{\partial g_{\alpha\lambda}}{\partial\theta^\nu} + \frac{\partial g_{\alpha\nu}}{\partial\theta^\lambda} - \frac{\partial g_{\nu\lambda}}{\partial\theta^\alpha} \right)$, and $g^{\mu\alpha}$ are the components of the inverse of the fundamental tensor with respect to $\theta^i$, $i = 1,\dots,n$. The divergence of a vector field $\zeta$ is defined by $\operatorname{div}(\zeta)(\theta) = \operatorname{trace}\left(\xi\mapsto\nabla_\xi\zeta\right)$, and the Laplace–Beltrami operator is $\Delta q = \operatorname{div}(\operatorname{grad}(q))$.

Appendix A.1.6. Curvature and Geodesics

The curvature operator is defined by $R(\xi,\eta)\zeta = \nabla_\eta\nabla_\xi\zeta - \nabla_\xi\nabla_\eta\zeta - \nabla_{[\eta,\xi]}\zeta$. Associated quantities include the Riemann curvature tensor $K(\nu,\zeta,\xi,\eta) = \langle\nu, R(\xi,\eta)\zeta\rangle$, the sectional curvature
$$K(\xi,\eta) = \frac{K(\xi,\eta,\xi,\eta)}{\langle\xi,\xi\rangle\langle\eta,\eta\rangle - \langle\xi,\eta\rangle^2},$$
and the Ricci tensor $\operatorname{Ric}(\xi,\eta) = \operatorname{trace}\left(\zeta\mapsto R(\xi,\zeta)\eta\right)$.
A curve $\gamma(t)$ with tangent vector field $\xi$ is a geodesic if $\nabla_\xi\xi = 0$, which, in local coordinates and using the repeated index summation convention, yields the system
$$\frac{d^2\theta^k}{dt^2} + \Gamma^k_{ij}\,\frac{d\theta^i}{dt}\,\frac{d\theta^j}{dt} = 0, \qquad k = 1,\dots,n.$$

Appendix A.1.7. Exponential Map, Cut Locus and Injectivity Radius

For $\theta\in\Theta$ and $\xi\in T_\theta\Theta$, let $\gamma_\xi$ be the geodesic satisfying $\gamma_\xi(0) = \theta$ and $\dot\gamma_\xi(0) = \xi$. The exponential map is defined by $\exp_\theta(\xi) = \gamma_\xi(1)$ and is a diffeomorphism near the origin.
Denote by $\mathfrak{S}_\theta(r) = \{\xi\in T_\theta\Theta : \|\xi\|_\theta = r\}$, where $r > 0$, and for each $\xi\in\mathfrak{S}_\theta \equiv \mathfrak{S}_\theta(1)$ define $C_\theta(\xi) = \sup\{s > 0 : \rho(\theta,\gamma_\xi(s)) = s\}$, where $\rho$ is the Riemannian distance and $\gamma_\xi$ is a geodesic defined on an open interval containing zero such that $\gamma_\xi(0) = \theta$ and with tangent vector equal to $\xi$ at the origin. Then, setting $\mathfrak{D}_\theta = \{s\xi\in T_\theta\Theta : 0 \le s < C_\theta(\xi),\ \xi\in\mathfrak{S}_\theta\}$ and $D_\theta = \exp_\theta(\mathfrak{D}_\theta)$, the map $\exp_\theta$ is a diffeomorphism from $\mathfrak{D}_\theta$ onto $D_\theta$. Moreover, if the manifold is also complete, the boundary $\partial\mathfrak{D}_\theta$ is mapped by the exponential map onto $\partial D_\theta$, called the cut locus of $\theta$ in $\Theta$. It is also interesting to note that, in this case, the cut locus of $\theta$ has zero $n$-dimensional Riemannian measure in $\Theta$, and $\Theta$ is the disjoint union of $D_\theta$ and $\partial D_\theta$.
The injectivity radius at $\theta$ is $\operatorname{inj}(\theta) = \sup\{r > 0 : \exp_\theta|_{B_r(0)} \text{ is a diffeomorphism onto its image}\}$, where $B_r(0)\subset T_\theta\Theta$ is the ball of radius $r$ in the tangent space $T_\theta\Theta$. The injectivity radius at $\theta$ captures the largest radius of a geodesic ball around $\theta$ on which normal coordinates are well defined and geodesics uniquely minimize the distance. The injectivity radius of $\Theta$ is $\operatorname{inj}(\Theta) = \inf_{\theta\in\Theta}\operatorname{inj}(\theta)$.

Appendix A.2. Comparison Theorems and Volume

Bishop's comparison theorems provide a powerful means of computing the volume of a geodesic ball of radius $r$ in a Riemannian manifold with constant sectional curvature, as well as of obtaining upper and lower bounds for this volume when the sectional curvatures are only bounded. The following propositions summarize these results.
Proposition A1.
Let $\Theta$ be a Riemannian manifold with constant sectional curvature $K$, and let $S_r$ be a geodesic ball centered at $\gamma\in\Theta$ of radius $r$, with $r > 0$ less than the injectivity radius at $\gamma$. Then the volume of $S_r$ and the area of its smooth boundary $\partial S_r$ are given by
$$\operatorname{vol}(S_r) = \frac{2\pi^{n/2}}{\Gamma(n/2)}\int_0^r S_K^{n-1}(t)\,dt \quad\text{and}\quad \operatorname{area}(\partial S_r) = \frac{2\pi^{n/2}}{\Gamma(n/2)}\,S_K^{n-1}(r),$$
where
$$S_K(t) = \begin{cases} \sin(\sqrt{K}\,t)/\sqrt{K} & \text{if } K > 0, \\ t & \text{if } K = 0, \\ \sinh(\sqrt{-K}\,t)/\sqrt{-K} & \text{if } K < 0. \end{cases}$$
Proof. 
Using geodesic spherical coordinates $\xi(\theta) = (\rho,u)$ centered at $\gamma\in\Theta$, where $\rho$ is the Riemannian distance between $\theta$ and $\gamma$ and $u$ defines a point of the unit sphere in the tangent space $T_\gamma\Theta$, we have
$$\operatorname{vol}(S_r) = \int_0^r\!\!\int_{S^{n-1}} \sqrt{g}\;du\,d\rho.$$
When the sectional curvature is constant, Bishop’s theorem implies
$$\frac{\partial \ln\sqrt{g}(\rho,u)}{\partial\rho} = (n-1)\,\frac{S_K'(\rho)}{S_K(\rho)}.$$
Integrating this relation yields $\sqrt{g}(\rho,u) = S_K^{n-1}(\rho)\,\Omega(u)$, where $\Omega(u)\,du$ is the surface element of the Euclidean unit sphere and satisfies
$$\int_{S^{n-1}} \Omega(u)\,du = \frac{2\pi^{n/2}}{\Gamma(n/2)}.$$
Substituting completes the proof. □
For the next result, we are going to use generalized hypergeometric functions. These functions are defined by
$${}_pF_q(a_1,\dots,a_p;\, b_1,\dots,b_q;\, z) \equiv \sum_{j=0}^{\infty}\frac{(a_1)_j\cdots(a_p)_j}{(b_1)_j\cdots(b_q)_j}\,\frac{z^j}{j!},$$
where $(a)_j = a(a+1)\cdots(a+j-1)$; the series converges for any complex $z$ if $p \le q$, for $|z| < 1$ if $p = q+1$, and diverges for all $z \ne 0$ if $p > q+1$; see [28].
Proposition A2.
If the sectional curvature is constant and equal to $K$, with $|K|\,S_K^2(r) < 1$, then the volume of a geodesic ball centered at $\gamma\in\Theta$ of radius $r$ less than the injectivity radius at $\gamma$ satisfies
$$\operatorname{vol}(S_r) = \frac{2\pi^{n/2}}{n\,\Gamma(n/2)}\,S_K^n(r)\left( 1 + \sum_{j=1}^{\infty}\frac{n\,\Gamma\!\left(j+\tfrac{1}{2}\right)}{\sqrt{\pi}\,(n+2j)}\,\frac{K^j\,S_K^{2j}(r)}{j!} \right).$$
Proof. 
From Proposition A1,
$$\operatorname{vol}(S_r) = \frac{2\pi^{n/2}}{\Gamma(n/2)}\int_0^r S_K^{n-1}(t)\,dt.$$
Using the identity $S_K'(t)^2 + K\,S_K(t)^2 = 1$ and setting $y = S_K^2(t)/S_K^2(r)$, we obtain
$$\int_0^r S_K^{n-1}(t)\,dt = \frac{1}{2}\,S_K^n(r)\int_0^1 y^{\frac{n-2}{2}}\left( 1 - K\,S_K^2(r)\,y \right)^{-\frac{1}{2}}\,dy.$$
Using the Euler integral representation of the Gauss hypergeometric function,
$${}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c-b)}\int_0^1 t^{\,b-1}(1-t)^{c-b-1}(1-tz)^{-a}\,dt, \qquad \operatorname{Re}(c) > \operatorname{Re}(b) > 0,$$
we find
$$\int_0^r S_K^{n-1}(t)\,dt = \frac{1}{2}\,S_K^n(r)\,\frac{\Gamma\!\left(\tfrac{n}{2}\right)}{\Gamma\!\left(\tfrac{n+2}{2}\right)}\,{}_2F_1\!\left( \tfrac{1}{2}, \tfrac{n}{2}; \tfrac{n+2}{2}; K\,S_K^2(r) \right),$$
and the series expansion of ${}_2F_1$ yields the desired expression. □
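The identity can be cross-checked numerically; the Python sketch below (our addition; $n$, $K$, and $r$ arbitrary, chosen with $|K|\,S_K^2(r) < 1$) compares direct quadrature of $S_K^{n-1}$ with the hypergeometric closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

# Cross-check of Proposition A2: geodesic-ball volume by direct quadrature
# versus (2 pi^{n/2} / (n Gamma(n/2))) S_K(r)^n 2F1(1/2, n/2; n/2+1; K S_K(r)^2).
n, K, r = 3, -0.8, 0.8

def S(t):
    if K > 0:   return np.sin(np.sqrt(K) * t) / np.sqrt(K)
    if K < 0:   return np.sinh(np.sqrt(-K) * t) / np.sqrt(-K)
    return t

c = 2 * np.pi ** (n / 2) / gamma(n / 2)
vol_quad = c * quad(lambda t: S(t) ** (n - 1), 0, r)[0]
vol_hyp = (c / n) * S(r) ** n * hyp2f1(0.5, n / 2, n / 2 + 1, K * S(r) ** 2)
print(f"quadrature: {vol_quad:.6f}   closed form: {vol_hyp:.6f}")
```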
Proposition A3.
Let vol ( S r ( γ ) ) denote the volume of a geodesic ball centered at γ Θ of radius r less than the injectivity radius at γ, in a manifold whose sectional curvatures are bounded below by κ and above by K . Then
$$\operatorname{vol}_\kappa(S_r) \ge \operatorname{vol}(S_r(\gamma)) \ge \operatorname{vol}_K(S_r),$$
where vol κ ( S r ) and vol K ( S r ) denote the volumes of balls of radius r and center γ in manifolds with constant sectional curvatures κ and K , respectively.
Proof. 
Integrating the inequalities in Bishop's comparison theorem from $\rho_0$ to $\rho$ gives
$$\frac{S_\kappa^{n-1}(\rho)}{S_\kappa^{n-1}(\rho_0)} \ge \frac{\sqrt{g}(\rho,u)}{\sqrt{g}(\rho_0,u)} \ge \frac{S_K^{n-1}(\rho)}{S_K^{n-1}(\rho_0)}.$$
Taking the limit as $\rho_0 \to 0$ and noting that
$$\lim_{\rho_0\to 0}\frac{\sqrt{g}(\rho_0,u)}{S_\kappa^{n-1}(\rho_0)} = \lim_{\rho_0\to 0}\frac{\sqrt{g}(\rho_0,u)}{S_K^{n-1}(\rho_0)} = \Omega(u),$$
we conclude that
$$S_\kappa^{n-1}(\rho)\,\Omega(u) \ge \sqrt{g}(\rho,u) \ge S_K^{n-1}(\rho)\,\Omega(u),$$
and the result follows upon integration. □

References

  1. Rao, C.R. Information and Accuracy Attainable in Estimation of Statistical Parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91.
  2. Chentsov, N.N. Statistical Decision Rules and Optimal Inference; Translations of Mathematical Monographs, Vol. 53; American Mathematical Society: Providence, RI, USA, 1982. (English translation of the Russian original, Nauka: Moscow, 1972.)
  3. Atkinson, C.; Mitchell, A.F.S. Rao’s Distance Measure. Sankhyā Indian J. Stat. Ser. A 1981, 43, 345–365.
  4. Burbea, J.; Rao, C.R. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. J. Multivar. Anal. 1982, 12, 575–596.
  5. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100.
  6. Amari, S.-i. Information Geometry and Its Applications, 1st ed.; Springer: Tokyo, Japan, 2016.
  7. Oller, J.M.; Corcuera, J.M. Intrinsic Analysis of Statistical Estimation. Ann. Stat. 1995, 23, 1562–1581.
  8. García, G.; Oller, J.M. What does intrinsic mean in statistical estimation? (with discussion). SORT-Stat. Oper. Res. Trans. 2006, 30, 125–170.
  9. García, G.; Cubedo, M.; Oller, J.M. Univariate Linear Normal Models: Optimal Equivariant Estimation. Mathematics 2025, 13, 3659.
  10. Rao, B.L.S.P. Remarks on Cramer-Rao Type Integral Inequalities for Randomly Censored Data. Lect. Notes-Monogr. Ser. 1995, 27, 163–175.
  11. Frieden, B. Science from Fisher Information: A Unification, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
  12. Tsang, M. Physics-inspired forms of the Bayesian Cramér–Rao bound. Phys. Rev. A 2020, 102, 062217.
  13. Brody, D.C.; Hughston, L.P. Statistical Geometry in Quantum Mechanics. Proc. Math. Phys. Eng. Sci. 1998, 454, 2445–2475.
  14. Esposito, A.R.; Vandenbroucque, A.; Gastpar, M. Lower bounds on the Bayesian risk via information measures. J. Mach. Learn. Res. 2024, 25, 1–45.
  15. Bernal-Casas, D.; Oller, J.M. Variational Information Principles to Unveil Physical Laws. Mathematics 2024, 12, 3941.
  16. Hicks, N. Notes on Differential Geometry; Van Nostrand Mathematical Studies; Van Nostrand: New York, NY, USA, 1965.
  17. Chavel, I. Eigenvalues in Riemannian Geometry; Elsevier: Amsterdam, The Netherlands, 1984.
  18. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1946, 186, 453–461.
  19. Berger, J.O. Statistical Decision Theory and Bayesian Analysis; Springer: New York, NY, USA, 1985.
  20. Troutman, J.L. Variational Calculus and Optimal Control: Optimization with Elementary Convexity, 2nd ed.; Undergraduate Texts in Mathematics; Springer: Berlin/Heidelberg, Germany, 1996.
  21. Lehmann, E.L. A general concept of unbiasedness. Ann. Math. Stat. 1951, 22, 587–592.
  22. García, G. Cuantificación de la no invariancia y su aplicación en estadística [Quantification of Non-Invariance and Its Application in Statistics]. Ph.D. Thesis, Universitat de Barcelona, Barcelona, Spain, 2001.
  23. Muirhead, R. Aspects of Multivariate Statistical Theory; Wiley: Hoboken, NJ, USA, 1982.
  24. Rauch, J. Partial Differential Equations; Graduate Texts in Mathematics, Vol. 128; Springer: New York, NY, USA, 1991.
  25. Kobayashi, S.; Nomizu, K. Foundations of Differential Geometry; John Wiley & Sons: Hoboken, NJ, USA, 1963; Volumes 1 and 2.
  26. Petersen, P. Riemannian Geometry, 3rd ed.; Graduate Texts in Mathematics, Vol. 171; Springer: Berlin/Heidelberg, Germany, 2016.
  27. Chavel, I. Riemannian Geometry: A Modern Introduction; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2006.
  28. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th Dover printing; Dover: New York, NY, USA, 1964.
Figure 1. Lower bound, computed through (18), of the integrated Riemannian risk $R_U^2(S_R)$ on $S_R$, for selected values of $n$ and $k$, as a function of $R$. Unlike the classical Cramér–Rao bound, this intrinsic index, evaluated as an integral over an entire region, is independent of any reparameterization of the model. The lower bound increases with $R$ and converges to the intrinsic Cramér–Rao bound for intrinsically unbiased estimators, $n/k$. Furthermore, this bound is attained by the unique equivariant estimator for the model, namely the sample mean.
Figure 2. Lower bound, computed through (23), of the local minimax Riemannian risk $M_U^2(S_R)$ on $S_R$, for selected values of $n$ and $k$, as a function of $R$. Unlike the classical Cramér–Rao bound, this intrinsic index, evaluated as a supremum over an entire region, is independent of any reparameterization of the model. The lower bound increases with $R$ and converges to the intrinsic Cramér–Rao bound for intrinsically unbiased estimators, $n/k$. Furthermore, this bound is attained by the unique equivariant estimator for the model, namely the sample mean.