Article

Univariate Linear Normal Models: Optimal Equivariant Estimation

Gloria García, Marta Cubedo and Josep M. Oller
1 Department of Education and Professional Training, Gencat, 08021 Barcelona, Spain
2 Department of Genetics, Microbiology and Statistics, Faculty of Biology, University of Barcelona, Avda. Diagonal 643, 08028 Barcelona, Spain
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3659; https://doi.org/10.3390/math13223659
Submission received: 2 October 2025 / Revised: 30 October 2025 / Accepted: 11 November 2025 / Published: 14 November 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

In this paper, we establish the existence and uniqueness of the minimum intrinsic risk equivariant (MIRE) estimator for univariate linear normal models. The estimator is derived under the action of the subgroup of the affine group that preserves the column space of the design matrix, within the framework of intrinsic statistical analysis based on the squared Rao distance as the loss function. This approach provides a parametrization-free assessment of risk and bias, differing substantially from the classical quadratic loss, particularly in small-sample settings. The MIRE is compared with the maximum likelihood estimator (MLE) in terms of intrinsic risk and bias, and a simple approximate version (a-MIRE) is also proposed. Numerical evaluations show that the a-MIRE performs almost as well as the MIRE while significantly reducing the intrinsic bias and risk of the MLE for small samples. The proposed intrinsic methods could extend to other invariant frameworks and connect with recent developments in robust estimation procedures.

1. Introduction

Methodological developments concerning the point estimation of parameters in statistical models remain a subject of significant scholarly interest. This is particularly true in settings that demand robust estimation procedures or involve complex multiparameter structures, where conventional or historically established methods may be unsuitable or inefficient. Notable contributions in this field include those by [1,2,3], among others.
A classical topic in statistical inference is the study of parametric statistical models that remain invariant under the action of a transformation group $G$ acting on the sample space $\chi$. This action induces a corresponding group $\bar G$ acting on the parameter space $\Theta$. In such a setting, it is logically necessary to restrict attention to equivariant estimators, that is, estimators $U$ of $\theta \in \Theta$ satisfying
$$U(gX) = \bar g\,U(X), \qquad \forall\, g \in G$$
where $\bar g \in \bar G$ is the transformation induced by $g$, and $X$ is a random map with domain $\chi$.
Only equivariant estimators yield coherent and consistent inference, free of paradoxes arising from reparametrization or alternative representations of the data. Moreover, if the loss function $L$ is itself invariant under the group action,
$$L\big(U(X),\theta\big) = L\big(U(gX),\ \bar g\,\theta\big)$$
then the associated risk is constant along the orbits of $\bar G$. In particular, if the induced action is transitive on $\Theta$, the risk is constant throughout the entire parameter space, and a minimum-risk equivariant estimator is optimal in the strongest possible sense.
Furthermore, the Fisher information matrix of a regular statistical model endows the parameter space with a natural Riemannian structure [4,5,6]. This geometric viewpoint has led to intrinsic approaches to estimation [7,8,9] and has also revealed connections with physical laws in certain contexts [10]. Within this intrinsic framework, non-invariant criteria such as squared error loss are replaced with intrinsic loss functions, most notably the squared Rao distance, which is directly induced by the Fisher–Rao metric.
As a motivating example, consider the multivariate normal family $Y \sim N_n(\mu, \Sigma)$, which is invariant under the affine group. If $Y \mapsto Z = \mathbf P Y + C$ for a non-singular matrix $\mathbf P$ and a vector $C$, then $Z \sim N_n(\mathbf P\mu + C,\ \mathbf P\Sigma\mathbf P^t)$, and the induced transformation in the parameter space is $(\mu, \Sigma) \mapsto (\mathbf P\mu + C,\ \mathbf P\Sigma\mathbf P^t)$.
In this work, we focus on the univariate linear normal model
$$Y \sim N_n\big(\mathbf X\beta,\ \sigma^2\mathbf I_n\big)$$
where $\mathbf X$ is a fixed $n\times m$ design matrix of full rank, and $(\beta,\sigma)\in\Theta = \mathbb R^m\times\mathbb R^+$. This family is invariant under a particular subgroup of the affine group acting on $\mathbb R^n$, which makes it suitable for the application of equivariant decision theory.
We first characterize the class of equivariant estimators under this group action. Then, adopting the squared Rao distance as an intrinsic loss function, we establish the existence and uniqueness of the minimum-risk equivariant estimator and derive a closed-form expression for its intrinsic bias. We further compare its intrinsic risk and bias with those of the maximum likelihood estimator and introduce a simple approximation (a-MIRE) that retains the essential advantages of the optimal equivariant estimator while being easier to compute.
Although our perspective is classical, intrinsic Bayesian approaches based on noninformative priors are also possible [9,11,12] and represent a promising direction for future research. For broader background on statistical estimation and its intrinsic developments, see [13,14,15,16].

2. Equivariant Estimators for Linear Models

Let us consider the univariate linear normal model,
$$y = \mathbf X\beta + e$$
where $y$ is an $n\times1$ random vector, $\mathbf X$ is an $n\times m$ matrix of known constants with $0 < \operatorname{rank}(\mathbf X) = m < n$, $\beta$ is an $m\times1$ vector of unknown parameters to be estimated, and $e$ is the fluctuation, or error, of $y$ about $\mathbf X\beta$. We assume that the errors are unbiased and independent, with common variance $\sigma^2$, and follow an $n$-variate normal distribution, that is, $e \sim N_n(0, \sigma^2\mathbf I_n)$, where $\mathbf I_n$ is the $n\times n$ identity matrix. Therefore, $y$ follows a distribution belonging to the parametric family $\mathcal P = \{\,N_n(\mathbf X\beta, \sigma^2\mathbf I_n) \mid (\beta,\sigma)\in\Theta\,\}$, where the parameter space $\Theta = \mathbb R^m\times\mathbb R^+$ is an $(m+1)$-dimensional, simply connected real manifold. Hereafter, we identify elements of $\mathbb R^m$ with $m\times1$ column vectors whenever necessary.
Denote by
$$O_E(n) = \{\,\mathbf H \in O(n) \mid y \in E \Rightarrow \mathbf H y \in E\,\}$$
where $E$ is a subspace of $\mathbb R^n$ and $O(n)$ is the group of $n\times n$ orthogonal matrices with real entries. Define $F$ as the subspace of $\mathbb R^n$ spanned by the columns of $\mathbf X$, that is, $F = \mathrm{Col}(\mathbf X)$. Observe that $O_F(n) = O_{F^\perp}(n)$, $\mathbf I_n \in O_F(n)$, $\mathrm{Col}(\mathbf X) = \mathrm{Col}(\mathbf{HX})$ for all $\mathbf H \in O_F(n)$, and if $\mathbf H \in O_F(n)$, then $\mathbf H^t \in O_F(n)$. Moreover, every $\mathbf H \in O_F(n)$ induces, on $F$ and $F^\perp$, two isomorphisms preserving the Euclidean norm of each subspace.
$\mathcal P$ is invariant under the action of the subgroup of the affine group in $\mathbb R^n$ given by the family of transformations
$$g_{(a,\mathbf H,c)}(y) = a\,\mathbf H y + c, \qquad y \in \mathbb R^n$$
where $a>0$, $\mathbf H \in O_F(n)$, $c \in F$.
Here, $\mathbf H$ belongs to the subgroup $O_F(n) \subset O(n)$ consisting of all orthogonal transformations that preserve the column space $F = \mathrm{Col}(\mathbf X)$ and its orthogonal complement. In other words, $\mathbb R^n$ decomposes as $F \oplus F^\perp$, and any $\mathbf H \in O_F(n)$ acts as an isometry on each of these subspaces, possibly including rotations or reflections, but it cannot mix the subspaces or alter lengths.
Observe that $\mathcal G = \{\,g_{(a,\mathbf H,c)} \mid a>0,\ \mathbf H \in O_F(n),\ c \in F\,\}$ induces an action on the parameter space $\Theta$ given by
$$\overline{g_{(a,\mathbf H,c)}}(\beta,\sigma) = \big(a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\mathbf H\mathbf X\,\beta + (\mathbf X^t\mathbf X)^{-1}\mathbf X^t c,\ a\sigma\big)$$
This result is obtained by taking into account that $\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t$ is the projection matrix onto $F$; thus, if $w \in F$, there exists a unique $\eta$ such that $w = \mathbf X\eta$ and $\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t w = w$.
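To make the group action concrete, the following sketch generates a random element of $O_F(n)$ as a pair of independent orthogonal maps on $F$ and $F^\perp$, and checks numerically that the transformed mean $a\mathbf H\mathbf X\beta + c$ equals $\mathbf X\beta'$ with $\beta'$ given by the induced action above. This is a minimal illustration of ours, assuming a recent NumPy/SciPy; all names are our own choices, not from the paper.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(1)
n, m = 6, 2
X = rng.normal(size=(n, m))                      # full-rank design matrix

# Orthonormal bases of F = Col(X) and of F-perp via a complete QR factorization.
Q, _ = np.linalg.qr(X, mode='complete')
QF, QFp = Q[:, :m], Q[:, m:]

# An element H of O_F(n): independent isometries of F and F-perp.
H = (QF @ ortho_group.rvs(m, random_state=rng) @ QF.T
     + QFp @ ortho_group.rvs(n - m, random_state=rng) @ QFp.T)

P = X @ np.linalg.inv(X.T @ X) @ X.T             # projection matrix onto F
a, c = 2.0, P @ rng.normal(size=n)               # a > 0 and c in F
beta, sigma = rng.normal(size=m), 1.3

# Induced action on the parameter space.
pinv = np.linalg.inv(X.T @ X) @ X.T
beta_new = a * pinv @ H @ X @ beta + pinv @ c
sigma_new = a * sigma

# The transformed mean a*H*X*beta + c lies in F and equals X @ beta_new.
assert np.allclose(a * H @ X @ beta + c, X @ beta_new)
assert np.allclose(H @ H.T, np.eye(n))           # H is orthogonal
```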
Since the family $\mathcal P$ is invariant under the action of $\mathcal G$, it is natural to restrict our attention to the class of equivariant estimators $U = (U_1, U_2)$ of $(\beta,\sigma)$, i.e., estimators satisfying $U\big(g_{(a,\mathbf H,c)}(y)\big) = \overline{g_{(a,\mathbf H,c)}}\,U(y)$ for all $g_{(a,\mathbf H,c)} \in \mathcal G$.
Proposition 1.
Let $U$ be an equivariant estimator of $(\beta,\sigma)$. Then $U$ belongs to the family $\{\,U_\lambda,\ \lambda\in\mathbb R^+\,\}$, where
$$U_\lambda(y) = \Big((\mathbf X^t\mathbf X)^{-1}\mathbf X^t y,\ \lambda\,\big[y^t\big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)y\big]^{1/2}\Big)$$
Proof. 
Let $U = (U_1, U_2)$. The equivariance condition for $U$ yields
$$U_1(a\mathbf Hy + c) = a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\mathbf{HX}\,U_1(y) + (\mathbf X^t\mathbf X)^{-1}\mathbf X^t c$$
$$U_2(a\mathbf Hy + c) = a\,U_2(y)$$
for any $a>0$, $\mathbf H\in O_F(n)$, $c\in F$.
Any $y\in\mathbb R^n$ can be written in a unique form as $y = y_F + y_{F^\perp}$, where $y_F\in F$ and $y_{F^\perp}\in F^\perp$. Specifically,
$$y_F = \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t y \quad\text{and}\quad y_{F^\perp} = \big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)\,y$$
If we choose $c = -a\mathbf H y_F$ in the previous expressions, we obtain
$$U_1(a\mathbf H y_{F^\perp}) = a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\mathbf{HX}\,U_1(y) - a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\mathbf H\,y_F$$
$$U_2(a\mathbf H y_{F^\perp}) = a\,U_2(y)$$
First, we focus on $U_1$. If we let $\mathbf H^* = \mathbf I_n - 2\,\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t$, we have $\mathbf H^*\in O_F(n)$ with
$$\mathbf H^* v = v - 2\,v_F \quad\text{and, in particular,}\quad \mathbf H^* v_{F^\perp} = v_{F^\perp}$$
Now observe that (5) is satisfied for any $a>0$ and $\mathbf H$ in $O_F(n)$, in particular for $\mathbf I_n$ and $\mathbf H^*$. Therefore,
$$U_1(a\,y_{F^\perp}) = a\,U_1(y) - a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t y_F$$
$$U_1(a\mathbf H^* y_{F^\perp}) = -a\,U_1(y) + a(\mathbf X^t\mathbf X)^{-1}\mathbf X^t y_F$$
But $a\mathbf H^* y_{F^\perp} = a\,y_{F^\perp}$, which leads to
$$0 = U_1(y) - (\mathbf X^t\mathbf X)^{-1}\mathbf X^t y_F$$
From (4),
$$U_1(y) = (\mathbf X^t\mathbf X)^{-1}\mathbf X^t y, \qquad \forall\,y\in\mathbb R^n$$
Next, we consider $U_2$. Letting $a=1$ and $\mathbf H = \mathbf I_n$ in (6), it follows that
$$U_2(y_{F^\perp}) = U_2(y)$$
Accordingly, it is enough to determine $U_2$ on $F^\perp$. Since $F^\perp$ is at least one-dimensional (recall that $m<n$), we can take a unit vector $z\in F^\perp$ and write $U_2(z) = \lambda > 0$. Then, by (6), we have
$$U_2(\mathbf Hz) = U_2(z)$$
for any $\mathbf H\in O_F(n)$. Observe that any unit vector in $F^\perp$ can be written as $\mathbf Hz$ for a proper $\mathbf H\in O_F(n)$. Therefore, for any $y_{F^\perp}\in F^\perp$ with $y_{F^\perp}\neq0$, which holds with probability one, we have
$$U_2(y_{F^\perp}) = \|y_{F^\perp}\|\;U_2\!\left(\frac{y_{F^\perp}}{\|y_{F^\perp}\|}\right) = \|y_{F^\perp}\|\;U_2(\mathbf Hz) = \|y_{F^\perp}\|\;U_2(z) = \lambda\,\|y_{F^\perp}\|$$
Finally, from (7) and (4), we have
$$U_2(y) = \lambda\,\big[y^t\big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)y\big]^{1/2} \qquad\square$$
Observe that the standard maximum likelihood estimator (MLE) for the present model is an equivariant estimator, with $\lambda = 1/\sqrt n$.
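For concreteness, here is a minimal sketch of the family $U_\lambda$ of Proposition 1 in Python; the function name and the simulated data are illustrative assumptions of ours, not part of the paper.

```python
import numpy as np

def u_lambda(y, X, lam):
    """Equivariant estimator U_lambda = (beta_hat, sigma_hat) of Proposition 1:
    beta_hat is the least-squares coefficient vector (the same for every lambda);
    sigma_hat = lambda * ||residual||, the scaled norm of (I - X(X'X)^{-1}X') y."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    return beta_hat, lam * np.linalg.norm(resid)

rng = np.random.default_rng(0)
n, m = 20, 3
X = rng.normal(size=(n, m))
y = X @ np.ones(m) + 0.5 * rng.normal(size=n)
print(u_lambda(y, X, 1 / np.sqrt(n)))   # lambda = 1/sqrt(n) recovers the MLE
```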

3. Minimum Riemannian Risk Estimators

In the framework of intrinsic analysis, the loss function is the square of the Rao distance, the Riemannian distance induced by the information metric on the parameter space $\Theta$. Once the class of equivariant estimators has been determined, a natural question arises: which equivariant estimator minimizes the risk?
First of all, we summarize the basic geometric results corresponding to the model (1), which are used hereafter. We use a standardized version of the information metric, namely the usual information metric of this linear model divided by the constant factor $n$, the number of rows of the matrix $\mathbf X$:
$$ds^2 = \frac{1}{n\sigma^2}\;d\beta^t\,\mathbf X^t\mathbf X\;d\beta + \frac{2}{\sigma^2}\,(d\sigma)^2$$
which is, up to a linear coordinate change, the Poincaré hyperbolic metric of the upper half space $\mathbb R^m\times\mathbb R^+$; see [17]. The Riemannian curvature $\kappa = -\tfrac12$ is constant and negative, and the unique geodesic, parameterized by arc length, connecting two points $\theta_1 = (\beta_1,\sigma_1)$ and $\theta_2 = (\beta_2,\sigma_2)\in\Theta$ with $\beta_1\neq\beta_2$ is given by
$$\beta(s) = \frac{2n}{K^2}\,\tanh\!\left(\frac{s}{\sqrt2}+\epsilon\right)(\mathbf X^t\mathbf X)^{-1/2}\,C + D, \qquad \sigma(s) = K^{-1}\operatorname{sech}\!\left(\frac{s}{\sqrt2}+\epsilon\right)$$
where $s$ is the arc length; $C$ and $D$ are $m\times1$ vectors whose components, together with $\epsilon$, are integration constants that appear when solving the geodesic differential system, chosen so that $\beta(0)=\beta_1$, $\beta(\rho_{12})=\beta_2$, $\sigma(0)=\sigma_1$, and $\sigma(\rho_{12})=\sigma_2$, with $\rho_{12}$ the Riemannian distance between $\theta_1$ and $\theta_2$; and $K$ is given by $K^2 = 2n\,C^tC$. When $\beta_1=\beta_2$, the geodesic is given by
$$\beta(s) = D, \qquad \sigma(s) = B\,e^{\pm s/\sqrt2}$$
where $B$ is a positive integration constant.
The Rao distance $\rho$ between the points $\theta_1$ and $\theta_2$ is
$$\rho_{12} \equiv \rho(\theta_1,\theta_2) = \sqrt2\,\ln\frac{1+\delta(\theta_1,\theta_2)}{1-\delta(\theta_1,\theta_2)} = 2\sqrt2\,\operatorname{arctanh}\delta(\theta_1,\theta_2)$$
where
$$\delta(\theta_1,\theta_2) = \left(\frac{d_M^2(\beta_1,\beta_2) + 2\,\dfrac{(\sigma_1-\sigma_2)^2}{\sigma_1\sigma_2}}{d_M^2(\beta_1,\beta_2) + 2\,\dfrac{(\sigma_1+\sigma_2)^2}{\sigma_1\sigma_2}}\right)^{1/2}$$
and
$$d_M^2(\theta_1,\theta_2) = \frac{1}{n\,\sigma_1\sigma_2}\,(\beta_1-\beta_2)^t\,\mathbf X^t\mathbf X\,(\beta_1-\beta_2)$$
or, equivalently,
$$\rho(\theta_1,\theta_2) = \sqrt2\,\operatorname{arccosh}\!\left(\frac14\,d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2+\sigma_2^2}{2\sigma_1\sigma_2}\right)$$
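The closed form above is straightforward to evaluate; the following sketch (our own helper, not from the paper) computes $\rho(\theta_1,\theta_2)$ from the arccosh expression.

```python
import numpy as np

def rao_distance(theta1, theta2, X):
    """Rao distance for N_n(X beta, sigma^2 I_n) under the standardized metric:
    rho = sqrt(2) * arccosh( d_M^2/4 + (s1^2 + s2^2)/(2 s1 s2) ),
    with d_M^2 = (b1-b2)' X'X (b1-b2) / (n s1 s2)."""
    (b1, s1), (b2, s2) = theta1, theta2
    n = X.shape[0]
    v = X @ (np.asarray(b1) - np.asarray(b2))
    d2 = v @ v / (n * s1 * s2)
    return np.sqrt(2) * np.arccosh(0.25 * d2 + (s1**2 + s2**2) / (2 * s1 * s2))

# Pure scale change (beta1 = beta2): rho reduces to sqrt(2) * |ln(s2/s1)|.
X = np.ones((10, 1))
b = np.zeros(1)
print(rao_distance((b, 1.0), (b, 2.0), X), np.sqrt(2) * np.log(2))
```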
Let $\exp_{\theta_1}^{-1}(\theta_2)$ be the inverse of the exponential map corresponding to the Levi-Civita connection, with components $W^1,\dots,W^m,W^{m+1}$ in the basis field $(\partial/\partial\beta^1)_{\theta_1},\dots,(\partial/\partial\beta^m)_{\theta_1},(\partial/\partial\sigma)_{\theta_1}$. Then, we have
$$W^i = \frac{\rho_{12}/\sqrt2}{\sinh(\rho_{12}/\sqrt2)}\,\big(\beta_2^i - \beta_1^i\big)\,\frac{\sigma_1}{\sigma_2}, \qquad i=1,\dots,m$$
$$W^{m+1} = \frac{\rho_{12}/\sqrt2}{\sinh(\rho_{12}/\sqrt2)}\left(\cosh\!\big(\rho_{12}/\sqrt2\big) - \frac{\sigma_1}{\sigma_2}\right)\sigma_1$$
It is well known that the Riemannian distance induced by the information metric is invariant under the induced transformations of the parameter space. We supply a direct, alternative proof for the linear model setting.
Proposition 2.
The Rao distance ρ given by (11) is invariant under the action of the group induced by $\mathcal G$ on the parameter space, $\bar{\mathcal G}$. In other words,
$$\rho\big(\overline{g_{(a,\mathbf H,c)}}\,\theta_1,\;\overline{g_{(a,\mathbf H,c)}}\,\theta_2\big) = \rho(\theta_1,\theta_2)$$
Proof. 
Observe that
$$\mathbf H\mathbf X(\beta_1-\beta_2) \in F, \qquad \forall\,\mathbf H \in O_F(n)$$
and taking into account that $\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t$ is the projection matrix onto $F$, we have
$$\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\,\mathbf H\mathbf X(\beta_1-\beta_2) = \mathbf H\mathbf X(\beta_1-\beta_2)$$
Therefore,
$$(\beta_1-\beta_2)^t\,\mathbf X^t\mathbf H^t\,\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\,\mathbf H\mathbf X\,(\beta_1-\beta_2) = (\beta_1-\beta_2)^t\,\mathbf X^t\mathbf X\,(\beta_1-\beta_2)$$
and the invariance of δ and ρ trivially follows. □
Proposition 3.
$\bar{\mathcal G}$ acts transitively on $\Theta$.
Proof. 
The transitivity follows by observing that $a$ is an arbitrary positive real number and $\mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t$ is the projection matrix onto $F$, with $\operatorname{rank}\mathbf X = m$, the dimension of $F$. □
Since $\rho$, and thus $\rho^2$, is invariant under the action of $\bar{\mathcal G}$, and $\bar{\mathcal G}$ acts transitively on $\Theta$, the distribution of $\rho^2(U_\lambda(y),\theta)$ does not depend on $\theta$; therefore, the risk of any equivariant estimator remains constant and independent of the target parameter, provided that this risk is finite. More precisely, observe that if we let
$$z = \frac{1}{\sigma}\,(y_F - \mathbf X\beta), \qquad V = \|z\| \quad\text{and}\quad W = \frac{1}{\sigma}\,\|y_{F^\perp}\|$$
then, from (1) and (4), we clearly have $z \sim N_n\big(0,\ \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)$ with a rank-$m$ idempotent covariance matrix, and $V^2$ and $W^2$ are independent random variables following chi-square distributions with $m$ and $n-m$ degrees of freedom, the dimensions of $F$ and $F^\perp$: indeed, $V^2$ and $W^2$ are quadratic forms based on the projection matrices onto these subspaces of $\mathbb R^n$, and $y_F$ (or $z$) and $y_{F^\perp}$ are independent random vectors. Therefore, since $\mathbf X U_{\lambda1}(y) = y_F$ and $U_{\lambda2}(y) = \lambda\,\|y_{F^\perp}\|$, we have
$$d_M^2\big(U_\lambda(y),\theta\big) = \frac{1}{n\,U_{\lambda2}(y)\,\sigma}\,\big(U_{\lambda1}(y)-\beta\big)^t\,\mathbf X^t\mathbf X\,\big(U_{\lambda1}(y)-\beta\big) = \frac{V^2}{n\lambda W}$$
$$\delta\big(U_\lambda(y),\theta\big) = \left(\frac{\dfrac{V^2}{n\lambda W} + 2\,\dfrac{(\lambda W-1)^2}{\lambda W}}{\dfrac{V^2}{n\lambda W} + 2\,\dfrac{(\lambda W+1)^2}{\lambda W}}\right)^{1/2}$$
and
$$\rho\big(U_\lambda(y),\theta\big) = 2\sqrt2\,\operatorname{arctanh}\!\left(\frac{V^2 + 2n(\lambda W-1)^2}{V^2 + 2n(\lambda W+1)^2}\right)^{\!1/2}$$
or
$$\rho\big(U_\lambda(y),\theta\big) = \sqrt2\,\operatorname{arccosh}\!\left(\frac14\,\frac{V^2}{n\lambda W} + \frac12\left(\lambda W + \frac{1}{\lambda W}\right)\right)$$
whose distribution depends only on $V^2$ and $W^2$, independent random variables with fixed distributions, whatever the value of $\theta$.
Since the risk of any equivariant estimator remains constant on the parameter space, it is enough to examine it at one point, for instance, at the point $(0,1)\in\Theta$. Let us denote by $E_{(\beta,\sigma)}$ the expectation with respect to the $n$-variate linear normal model $N_n(\mathbf X\beta, \sigma^2\mathbf I_n)$, and write $E$ for $E_{(0,1)}$. We can prove the following propositions.
Proposition 4.
Since $n \ge m+1$, we have
$$E_{(\beta,\sigma)}\big[\rho^2\big(U_\lambda(y),(\beta,\sigma)\big)\big] = E\big[\rho^2\big(U_\lambda(y),(0,1)\big)\big] < \infty, \qquad \forall\,\lambda>0$$
for any $(\beta,\sigma)\in\Theta$.
Proof. 
From (14) and (13), since
$$1 + \frac{1}{2!}\,\frac{\rho^2}{2} + \frac{1}{4!}\,\frac{\rho^4}{4} \;\le\; \cosh\!\left(\frac{\rho}{\sqrt2}\right) = \frac14\,d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2+\sigma_2^2}{2\sigma_1\sigma_2}$$
we have
$$0 \le \rho^2(\theta_1,\theta_2) \le 12\left[\left(1 + \frac{d_M^2(\theta_1,\theta_2)}{6} + \frac13\,\frac{(\sigma_1-\sigma_2)^2}{\sigma_1\sigma_2}\right)^{1/2} - 1\right]$$
Developing the square of the difference and taking into account that the standard Euclidean norm of a vector is less than or equal to the sum of the absolute values of its components, we obtain
$$0 \le \rho^2(\theta_1,\theta_2) \le 2\sqrt6\,d_M(\theta_1,\theta_2) + 4\sqrt3\left(\sqrt{\frac{\sigma_1}{\sigma_2}} + \sqrt{\frac{\sigma_2}{\sigma_1}}\right) + 4\sqrt3 - 12$$
Notice that both bounds (22) and (23) are invariant under the action of the induced group on the parameter space.
As we mentioned before, from [18], it is enough to prove that the risk is finite at $(0,1)\in\Theta$. Taking into account (16), it follows from (23) that
$$0 \le \rho^2\big(U_\lambda(y),(0,1)\big) \le 2\sqrt6\left(\frac{V^2}{n\lambda W}\right)^{1/2} + 4\sqrt3\left(\sqrt{\lambda W} + \frac{1}{\sqrt{\lambda W}}\right) + 4\sqrt3 - 12$$
Observe that if $Q$ has a chi-square distribution with $k$ degrees of freedom,
$$E(Q^\alpha) = 2^\alpha\,\frac{\Gamma\!\big(\frac k2 + \alpha\big)}{\Gamma\!\big(\frac k2\big)} \quad\text{provided that}\quad \frac k2 + \alpha > 0$$
Therefore, since $V^2$ and $W^2$ are independent random variables following central chi-square distributions with $m$ and $n-m$ degrees of freedom, we have
$$E(V) = \sqrt2\,\frac{\Gamma\!\big(\frac{m+1}{2}\big)}{\Gamma\!\big(\frac m2\big)}, \qquad E\big(\sqrt W\big) = 2^{1/4}\,\frac{\Gamma\!\big(\frac{2(n-m)+1}{4}\big)}{\Gamma\!\big(\frac{n-m}{2}\big)}$$
and
$$E\!\left(\frac{1}{\sqrt W}\right) = 2^{-1/4}\,\frac{\Gamma\!\big(\frac{2(n-m)-1}{4}\big)}{\Gamma\!\big(\frac{n-m}{2}\big)} \quad\text{provided that}\quad n-m > \tfrac12$$
Then, taking expectations, it follows that
$$E\big[\rho^2\big(U_\lambda(y),(0,1)\big)\big] \le \frac{2\sqrt6}{\sqrt{\lambda n}}\,E(V)\,E\!\left(\frac{1}{\sqrt W}\right) + 4\sqrt3\left(\sqrt\lambda\,E\big(\sqrt W\big) + \frac{1}{\sqrt\lambda}\,E\!\left(\frac{1}{\sqrt W}\right)\right) + 4\sqrt3 - 12 < +\infty$$
for all $\lambda>0$ and $n-m>\tfrac12$. Since $n$ and $m$ are positive integers with $n>m$, we conclude that the risk is finite if $n \ge m+1$. □
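The chi-square moment identity used above is easy to verify numerically; in the sketch below (our code, assuming SciPy is available), `chi2_moment` implements $E(Q^\alpha)$ and is checked against Monte Carlo averages of $V$, $\sqrt W$ and $1/\sqrt W$.

```python
import numpy as np
from scipy.special import gammaln

def chi2_moment(k, alpha):
    """E(Q^alpha) for Q ~ chi^2_k; valid when k/2 + alpha > 0."""
    return np.exp(alpha * np.log(2.0) + gammaln(k / 2 + alpha) - gammaln(k / 2))

rng = np.random.default_rng(2)
m, k = 3, 7                                  # k plays the role of n - m
V = np.sqrt(rng.chisquare(m, 10**6))         # V = (chi^2_m)^{1/2}
W = np.sqrt(rng.chisquare(k, 10**6))         # W = (chi^2_{n-m})^{1/2}
print(V.mean(), chi2_moment(m, 1/2))                 # E(V)
print(np.sqrt(W).mean(), chi2_moment(k, 1/4))        # E(sqrt(W))
print((1/np.sqrt(W)).mean(), chi2_moment(k, -1/4))   # E(1/sqrt(W)), needs k > 1/2
```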
Proposition 4 gives a sufficient condition for the existence of the Riemannian risk of the equivariant estimator $U_\lambda$; thus,
$$\Phi(\lambda) = E\big[\rho^2\big(U_\lambda(y),(0,1)\big)\big], \qquad \lambda>0$$
is well defined.
Proposition 5.
There exists a unique minimizer $\lambda_{nm} > 0$ of the Riemannian risk given by Φ.
Proof. 
Let us consider the Riemannian risk at $\lambda = e^{s/\sqrt2}$ as a function of $s$, that is,
$$F(s) = \Phi\big(e^{s/\sqrt2}\big), \qquad s\in\mathbb R$$
This particular choice of $\lambda$, from which $F$ follows, relies on the Riemannian structure of $\Theta$ induced by the information metric. The Riemannian curvature is constant and equal to $-\tfrac12$, and taking into account (10), we have that
$$\gamma(s) = \Big((\mathbf X^t\mathbf X)^{-1}\mathbf X^t y,\ e^{s/\sqrt2}\,\big[y^t\big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)y\big]^{1/2}\Big)$$
is a geodesic in $\Theta$, precisely a geodesic parameterized by arc length; see [17] for further details.
Then, following [19], the real-valued function
$$s \mapsto \rho^2\big(\gamma(s),(0,1)\big)$$
is strictly convex. Since almost-sure convexity of a stochastic process carries over to its mean, the map $F$ is strictly convex as well.
On the other hand, by Fatou's lemma, $F(s)\to+\infty$ as $s\to\infty$ or $s\to-\infty$. This, together with the strict convexity of $F$, yields the existence of a unique minimizer $s^*$ of the function $F$, which depends on $n$ and $m$.
The optimal value of $\lambda$ is then obtained by expressing the unique minimizer $s^*$ of $F$ in the original parameterization $\lambda = e^{s/\sqrt2}$, which provides the correspondence between the minimization problems in $s$ and in $\lambda$.
Finally, since the map $s \mapsto e^{s/\sqrt2}$ is strictly monotone, a unique $\lambda_{nm}$ must exist, namely $\lambda_{nm} = e^{s^*/\sqrt2}$, such that $\Phi(\lambda_{nm}) = \min_{\lambda>0}\Phi(\lambda)$. □
In fact, this result guarantees the uniqueness of the MIRE, although a numerical analysis is required to obtain it explicitly (see the next section). It is therefore useful to develop a simple approximate estimator, referred to hereafter as the a-MIRE, obtained by minimizing a convenient upper bound of $\Phi(\lambda)$. Since
$$1 + \frac{1}{2!}\,\frac{\rho^2}{2} \;\le\; \cosh\!\left(\frac{\rho}{\sqrt2}\right) = \frac14\,d_M^2(\theta_1,\theta_2) + \frac{\sigma_1^2+\sigma_2^2}{2\sigma_1\sigma_2}$$
we have
$$0 \le \rho^2\big(U_\lambda(y),(0,1)\big) \le \frac{1}{n\lambda}\,\frac{V^2}{W} + 2\left(\sqrt{\lambda W} - \frac{1}{\sqrt{\lambda W}}\right)^{\!2}$$
and therefore, taking expectations, we have
$$0 \le \Phi(\lambda) \le H(\lambda) = \frac1\lambda\,\frac{\Gamma\!\big(\frac{n-m}{2}-\frac12\big)}{\sqrt2\,\Gamma\!\big(\frac{n-m}{2}\big)}\left(\frac mn + 2\right) + 2\sqrt2\,\lambda\,\frac{\Gamma\!\big(\frac{n-m}{2}+\frac12\big)}{\Gamma\!\big(\frac{n-m}{2}\big)} - 4$$
The upper bound $H(\lambda)$ is clearly a convex function, with an absolute minimum attained at
$$\tilde\lambda_{nm} = \left(\frac{1 + \frac{m}{2n}}{n-m-1}\right)^{1/2}$$
Furthermore, for any fixed $m$, we have
$$\lim_{n\to\infty}\tilde\lambda_{nm}^2\,n = 1$$
and, therefore, the a-MIRE is very close to the MLE for large values of $n$ (recall that $\lambda^2 n = 1$ exactly for the MLE). Observe also that the a-MIRE can be computed whenever $n > m+1$, a condition slightly stronger than the one required for the existence of the MIRE in Proposition 4.
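A small sketch of the a-MIRE factor follows; `lambda_tilde` and `xi_tilde` are our names. The printed values illustrate how $\tilde\xi_{nm} = n\,\tilde\lambda_{nm}^2$ approaches 1, i.e., how the a-MIRE approaches the MLE, as $n$ grows.

```python
import numpy as np

def lambda_tilde(n, m):
    """a-MIRE scale factor; requires n > m + 1."""
    return np.sqrt((1 + m / (2 * n)) / (n - m - 1))

def xi_tilde(n, m):
    """Ratio between the a-MIRE and the MLE estimators of sigma^2."""
    return n * lambda_tilde(n, m) ** 2

for n in (10, 100, 1000):
    print(n, xi_tilde(n, 3))    # m = 3: about 1.917, 1.057, 1.006 -> tends to 1
```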
A further aspect is the intrinsic bias of the equivariant estimators. In fact, connections between minimum risk, bias, and invariance have been established; see [18]. Since the action of the group $\mathcal G$ is not commutative, we cannot guarantee the unbiasedness of the MIRE, and an additional analysis must be performed. First of all, we compute the bias vector (see [8]), a quantitative measure of the bias which is compatible with Lehmann's results.
Let $A_\theta(\lambda) = \exp_\theta^{-1}\big(U_\lambda(y)\big)$ and let $A_\theta^1(\lambda),\dots,A_\theta^m(\lambda),A_\theta^{m+1}(\lambda)$ be the components of $A_\theta(\lambda)$ in the basis field $(\partial/\partial\beta^1)_\theta,\dots,(\partial/\partial\beta^m)_\theta,(\partial/\partial\sigma)_\theta$. In matrix notation, $\mathbf A_\theta^1(\lambda) = \big(A_\theta^1(\lambda),\dots,A_\theta^m(\lambda)\big)^t$. Furthermore, let us define $h(x) \equiv x/\sinh x$ for $x\neq0$, with $h(0)\equiv1$; taking into account (16) and (19), from (11) and (15) we have
$$\mathbf X\,\mathbf A_\theta^1(\lambda) = f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W}, \qquad A_\theta^{m+1}(\lambda) = f_\lambda(V,W)\,g_\lambda(V,W)\,\sigma$$
where
$$f_\lambda(V,W) \equiv h\big(\rho(U_\lambda(y),\theta)/\sqrt2\big)$$
and
$$g_\lambda(V,W) \equiv \cosh\big(\rho(U_\lambda(y),\theta)/\sqrt2\big) - \frac{1}{\lambda W}$$
Let $B_\theta(\lambda) = E_\theta\big(\exp_\theta^{-1}(U_\lambda(y))\big)$ be the intrinsic bias vector of an equivariant estimator $U_\lambda(y) = \big(U_{\lambda1}(y), U_{\lambda2}(y)\big)$, evaluated at the point $\theta = (\beta,\sigma)$, and let $B_\theta^1(\lambda),\dots,B_\theta^m(\lambda),B_\theta^{m+1}(\lambda)$ be its components. In matrix notation, $\mathbf B_\theta^1(\lambda) = \big(B_\theta^1(\lambda),\dots,B_\theta^m(\lambda)\big)^t$. We have the following.
Proposition 6.
If $n \ge m+1$, the bias vector is finite and
$$\mathbf B_\theta^1(\lambda) = 0, \qquad B_\theta^{m+1}(\lambda) = E\big[f_\lambda(V,W)\,g_\lambda(V,W)\big]\,\sigma$$
where $V^2$ and $W^2$ are independent random variables following chi-square distributions with $m$ and $n-m$ degrees of freedom, respectively.
Moreover, the square of the norm of the bias vector is constant and given by
$$\big\|B_\theta(\lambda)\big\|_\theta^2 = 2\,\Big(E\big[f_\lambda(V,W)\,g_\lambda(V,W)\big]\Big)^2$$
Proof. 
Observe that if $n \ge m+1$, we have
$$\big\|B_\theta(\lambda)\big\|_\theta^2 = \Big\|E_\theta\big(\exp_\theta^{-1}(U_\lambda(y))\big)\Big\|_\theta^2 \le \Big(E_\theta\big\|\exp_\theta^{-1}(U_\lambda(y))\big\|_\theta\Big)^2 \le E_\theta\Big(\big\|\exp_\theta^{-1}(U_\lambda(y))\big\|_\theta^2\Big) = E_\theta\big(\rho^2(U_\lambda(y),\theta)\big) < \infty$$
where $\|\cdot\|_\theta$ denotes the Riemannian norm on the tangent space at $\theta$.
On the other hand, taking into account (31) and defining $z$ as in (16), observe that $V = \|z\| = \|-z\|$, that $z$ is independent of $W$, and that $z$ and $-z$ have the same distribution. Then we have
$$\mathbf X\,\mathbf B_\theta^1(\lambda) = E_\theta\!\left[f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W}\right] = E_\theta\!\left[f_\lambda(V,W)\,\frac{(-z)\,\sigma}{\lambda W}\right] = -\,E_\theta\!\left[f_\lambda(V,W)\,\frac{z\,\sigma}{\lambda W}\right] = 0$$
and since $\operatorname{rank}(\mathbf X) = m$, it follows that $\mathbf B_\theta^1(\lambda) = 0$.
$B_\theta^{m+1}(\lambda)$ is obtained directly from (31). The distributions of $z$ and $W^2$ follow from the basic properties of the multivariate normal distribution. Finally, the norm of the bias vector field follows from (32) and (8). □
We may finally remark that the norm of the bias vector field of any equivariant estimator, $\|B_\theta(\lambda)\|_\theta$, is invariant under the action of the induced group $\bar{\mathcal G}$ on the parameter space; since this group acts transitively on $\Theta$, this quantity must be constant, which is also clear from (33).
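Proposition 6 reduces the bias computation to a one-dimensional expectation over $(V,W)$, which can be approximated by Monte Carlo as in the following sketch (our code and names; the exact values reported in Appendix A were obtained by numerical integration instead).

```python
import numpy as np

def bias_component(lam, n, m, size=10**6, seed=1):
    """Monte Carlo estimate of B_theta^{m+1}(lambda)/sigma = E[f_lam * g_lam]."""
    rng = np.random.default_rng(seed)
    V2 = rng.chisquare(m, size)
    W = np.sqrt(rng.chisquare(n - m, size))
    lw = lam * W
    cosh_arg = V2 / (4 * n * lw) + 0.5 * (lw + 1 / lw)   # cosh(rho/sqrt(2))
    r = np.arccosh(cosh_arg)                             # rho/sqrt(2)
    f = np.divide(r, np.sinh(r), out=np.ones_like(r), where=r > 0)  # h(rho/sqrt(2))
    g = cosh_arg - 1 / lw
    return np.mean(f * g)

# MLE (lambda = 1/sqrt(n)) underestimates sigma; for n = 10, m = 3 the exact
# value in Table A3 is -0.15101:
print(bias_component(1 / np.sqrt(10), 10, 3))
```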

4. Numerical Evaluation

In this section, we compare, numerically, the MIRE estimator with the standard MLE. Observe that the two estimators differ only in the estimation of the parameter $\sigma$ (or $\sigma^2$). Precisely, the MIRE and the MLE of $\sigma^2$ are, respectively,
$$\hat\sigma^2_{\mathrm{MIRE}} = \lambda_{nm}^2\;y^t\big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)\,y \qquad\text{and}\qquad \hat\sigma^2_{\mathrm{MLE}} = \frac1n\;y^t\big(\mathbf I_n - \mathbf X(\mathbf X^t\mathbf X)^{-1}\mathbf X^t\big)\,y$$
namely, they differ only by the factor $\xi_{nm} = \lambda_{nm}^2\,n$, since $\hat\sigma^2_{\mathrm{MIRE}} = \xi_{nm}\,\hat\sigma^2_{\mathrm{MLE}}$.
In order to compare the MLE with the MIRE, we computed the factor $\xi_{nm}$, the intrinsic risk, and the square of the norm of the bias vector for each estimator. All computations were performed using Mathematica 10.2; the main procedure used was NIntegrate, with the option AccuracyGoal set above 16 [20].
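As an independent cross-check of those integrals, the intrinsic risk $\Phi(\lambda)$ can also be approximated by straightforward Monte Carlo over $V^2\sim\chi^2_m$ and $W^2\sim\chi^2_{n-m}$; the sketch below is ours (names of our choosing), not the procedure used in the paper.

```python
import numpy as np

def risk(lam, n, m, size=10**6, seed=0):
    """Monte Carlo estimate of Phi(lambda) = E[rho^2(U_lambda(y), (0,1))]."""
    rng = np.random.default_rng(seed)
    V2 = rng.chisquare(m, size)
    W = np.sqrt(rng.chisquare(n - m, size))
    lw = lam * W
    rho = np.sqrt(2) * np.arccosh(V2 / (4 * n * lw) + 0.5 * (lw + 1 / lw))
    return np.mean(rho**2)

# n = 10, m = 3: the MIRE risk is 0.45665 (Table A1); the MLE risk should be
# about 47% larger (Table A2):
print(risk(1 / np.sqrt(10), 10, 3))
```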
Moreover, we suggested a rather simple approximation for $\xi_{nm}$, or $\lambda_{nm}$, which allows us to approximate the MIRE estimator through (29), i.e.,
$$\tilde\xi_{nm} = \Big(1 + \frac{m}{2n}\Big)\,\frac{n}{n-m-1}$$
The corresponding estimator, referred to hereafter as the a-MIRE, was also compared with the MLE and the MIRE in terms of intrinsic risk.
The results are summarized in the following figures, which condense the tables given in Appendix A; see [21] for a convincing argument for the use of plots over tables. In Figure 1, numerical results for $\xi_{nm}$ (left) and $\Phi(\lambda_{nm}) = \Phi\big(\sqrt{\xi_{nm}/n}\big)$ (right) are displayed graphically. Observe that, for fixed $m$, as $n$ increases, $\Phi(\lambda_{nm})$ goes to zero and $\xi_{nm}$ goes to one. The exact numerical values are given in Appendix A, Table A1.
Figure 2 graphically shows the percentage increment of intrinsic risk for the MLE (left) and the a-MIRE (right), that is,
$$100\;\frac{\Phi(1/\sqrt n) - \Phi(\lambda_{nm})}{\Phi(\lambda_{nm})} \qquad\text{and}\qquad 100\;\frac{\Phi(\tilde\lambda_{nm}) - \Phi(\lambda_{nm})}{\Phi(\lambda_{nm})}$$
respectively, for some values of $n$ and $m$. The exact numerical results are given in Appendix A, Table A2.
Observe that if we approximate the MIRE by the MLE for a given value of $m$, the relative difference in risk decreases as $n$ increases. Notice that these relative risk differences are rather moderate or small (about 10–15%) only if $n > 10m$. Note also the poor behavior of the MLE for small values of $n-m$ with regard to the intrinsic risk increment: when the MLE is used instead of the MIRE, this increment ranges from $80.5\%$ to $251.1\%$ when $n-m=1$, with $m$ from 1 to 10. On the other hand, the behavior of the a-MIRE is remarkably good, with an intrinsic risk very similar to that of the MIRE: the percentage risk increment is less than $1\%$ in all studied cases, and this percentage decreases as $n$ increases. In fact, it is lower than 1‰ when $n-m\ge4$, which indicates an extraordinary degree of approximation; therefore, the a-MIRE is a reasonable and useful approximation of the MIRE. As an example, for a two-way analysis of variance with $a$ and $b$ levels for factors A and B, respectively, and a single replicate per treatment, we have
$$\tilde\lambda = \left(\Big(1 + \frac{a+b-1}{2ab}\Big)\,\frac{1}{ab-a-b}\right)^{1/2}$$
with $a>1$ and $b>1$, while the corresponding quantity for the MLE is $\lambda = 1/\sqrt{ab}$.
In particular, if $a=3$ and $b=5$, we have
$$\tilde\lambda = \left(\Big(1 + \frac{7}{30}\Big)\,\frac17\right)^{1/2} = 0.419750$$
and $\lambda = 1/\sqrt{15} = 0.258199$, which is noticeably different. It may be useful to recall at this point that the quadratic loss and the squared Riemannian distance behave very differently; see [9].
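A quick numerical check of this example (our snippet):

```python
import numpy as np

a, b = 3, 5
n, m = a * b, a + b - 1           # single replicate per treatment
lam_tilde = np.sqrt((1 + m / (2 * n)) / (n - m - 1))
print(lam_tilde, 1 / np.sqrt(n))  # 0.41975... vs 0.258199...
```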
Figure 3 graphically displays the numerical results for the $(m+1)$-th component of the bias vector divided by $\sigma$, i.e., the unique non-zero physical component of this vector field, for the MIRE (left) and the MLE (right), $\frac1\sigma B_\theta^{m+1}(\lambda_{nm})$ and $\frac1\sigma B_\theta^{m+1}(1/\sqrt n)$, respectively, for some values of $n$ and $m$. Observe the difference in sign: the MIRE overestimates $\sigma$ on average, while the MLE underestimates it on average. This follows from the equation of the geodesics of the present model (9). The exact numerical values are given in Appendix A, Table A3.
Figure 4 graphically shows the percentage of intrinsic risk due to bias for the MIRE and the MLE, that is,
$$100\;\frac{\big\|B_\theta(\lambda_{nm})\big\|_\theta^2}{\Phi(\lambda_{nm})} \qquad\text{and}\qquad 100\;\frac{\big\|B_\theta(1/\sqrt n)\big\|_\theta^2}{\Phi(\lambda_{nm})}$$
respectively, for some values of $n$ and $m$. Observe that the bias is moderate relative to the intrinsic risk for both estimators. The bias of the MIRE is smaller than that of the MLE for small values of $m$, and the opposite holds for large values. The exact numerical results are given in Appendix A, Table A4.

5. Conclusions

In this work, we characterized the class of equivariant estimators for the univariate linear normal model under the largest subgroup of the affine group that preserves the column space of the design matrix. Within this class, and using the squared Rao distance as an intrinsic loss function, we established the existence and uniqueness of the minimum-risk equivariant estimator (MIRE). We also derived an explicit expression for the intrinsic bias of any equivariant estimator and compared the intrinsic risk and bias of the MIRE with those of the maximum likelihood estimator.
The numerical evaluation shows that
  • The intrinsic risk and bias of the MLE can be substantially larger than those of the MIRE in small-sample settings.
  • The difference between the two estimators decreases as the sample size increases.
  • The approximate estimator (a-MIRE) provides a practical and computationally efficient alternative, achieving intrinsic risk values nearly identical to those of the MIRE for moderate sample sizes.
The results highlight the importance of incorporating the geometric structure of the parameter space in statistical estimation. When the parameter space is endowed with the Fisher–Rao metric, the squared Rao distance provides a natural intrinsic loss function that captures aspects of performance not reflected by classical criteria such as quadratic loss.

Future Research Directions

This work suggests several avenues for further development:
  • Extension to heteroscedastic or correlated error structures, where the geometry of the model may differ significantly.
  • Intrinsic Bayesian formulations based on noninformative or reference priors, where posterior summaries may be interpreted in geometric terms.
  • Computational schemes for approximating the Rao distance efficiently in high-dimensional or large-sample contexts.
We hope that these contributions will encourage further exploration of intrinsic methods in statistical inference and strengthen the connection between information geometry and classical estimation theory.

Author Contributions

Formal analysis, J.M.O.; Investigation, G.G. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the referees and the editor for their comments and suggestions, which helped improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Numerical results for $\xi_{nm}$ and $\Phi(\lambda_{nm})$, with $\xi_{nm} = \lambda_{nm}^2\,n$; each cell shows $\xi_{nm}$ / $\Phi(\lambda_{nm})$.

| n \ m | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 8.66992 / 3.03876 | — | — | — | — | — | — | — | — | — |
| 3 | 3.07654 / 1.16322 | 13.85950 / 3.22626 | — | — | — | — | — | — | — | — |
| 4 | 2.15828 / 0.71941 | 4.40442 / 1.32989 | 19.07110 / 3.31958 | — | — | — | — | — | — | — |
| 5 | 1.79608 / 0.52257 | 2.87441 / 0.86369 | 5.72625 / 1.42579 | 24.29080 / 3.37544 | — | — | — | — | — | — |
| 6 | 1.60525 / 0.41144 | 2.28365 / 0.65040 | 3.59243 / 0.95721 | 7.05039 / 1.48877 | 29.51720 / 3.41264 | — | — | — | — | — |
| 7 | 1.48787 / 0.33979 | 1.97404 / 0.52570 | 2.77223 / 0.73918 | 4.31144 / 1.02277 | 8.37564 / 1.53330 | 34.74390 / 3.43918 | — | — | — | — |
| 8 | 1.40849 / 0.28964 | 1.78417 / 0.44287 | 2.34344 / 0.60912 | 3.26141 / 0.80445 | 5.03105 / 1.07128 | 9.70161 / 1.56646 | 39.97520 / 3.45908 | — | — | — |
| 9 | 1.35126 / 0.25252 | 1.65604 / 0.38342 | 2.08088 / 0.52102 | 2.71323 / 0.67272 | 3.75096 / 0.85446 | 5.75096 / 1.10863 | 11.02800 / 1.59210 | 45.20590 / 3.47455 | — | — |
| 10 | 1.30807 / 0.22391 | 1.56381 / 0.33847 | 1.90388 / 0.45665 | 2.37787 / 0.58233 | 3.08328 / 0.72281 | 4.24078 / 0.89402 | 6.47135 / 1.13827 | 12.35490 / 1.61252 | 50.43540 / 3.48692 | — |
| 11 | 1.27433 / 0.20116 | 1.49427 / 0.30320 | 1.77656 / 0.40721 | 2.15190 / 0.51544 | 2.67503 / 0.63172 | 3.45350 / 0.76331 | 4.73076 / 0.92608 | 7.19181 / 1.16237 | 13.68190 / 1.62918 | 55.65840 / 3.49704 |
| 15 | 1.19071 / 0.14318 | 1.33089 / 0.21469 | 1.49647 / 0.28618 | 1.69505 / 0.35810 | 1.93754 / 0.43107 | 2.24025 / 0.50594 | 2.62870 / 0.58406 | 3.14513 / 0.66756 | 3.86473 / 0.76019 | 4.93543 / 0.86917 |
| 20 | 1.13807 / 0.10534 | 1.23410 / 0.15768 | 1.34212 / 0.20973 | 1.46452 / 0.26160 | 1.60436 / 0.31342 | 1.76568 / 0.36539 | 1.95380 / 0.41771 | 2.17599 / 0.47070 | 2.44246 / 0.52476 | 2.76780 / 0.58046 |
| 30 | 1.08895 / 0.06895 | 1.14768 / 0.10318 | 1.21093 / 0.13718 | 1.27923 / 0.17097 | 1.35322 / 0.20457 | 1.43364 / 0.23801 | 1.52137 / 0.27132 | 1.61744 / 0.30453 | 1.72312 / 0.33769 | 1.83991 / 0.37084 |
| 40 | 1.06561 / 0.05126 | 1.10786 / 0.07673 | 1.15245 / 0.10205 | 1.19960 / 0.12721 | 1.24952 / 0.15224 | 1.30246 / 0.17713 | 1.35872 / 0.20191 | 1.41860 / 0.22657 | 1.48247 / 0.25114 | 1.55075 / 0.27561 |
| 50 | 1.05197 / 0.04080 | 1.08495 / 0.06108 | 1.11936 / 0.08126 | 1.15532 / 0.10134 | 1.19289 / 0.12131 | 1.23222 / 0.14119 | 1.27342 / 0.16098 | 1.31663 / 0.18068 | 1.36200 / 0.20029 | 1.40970 / 0.21983 |
| 60 | 1.04303 / 0.03388 | 1.07007 / 0.05074 | 1.09808 / 0.06752 | 1.12711 / 0.08423 | 1.15722 / 0.10086 | 1.18846 / 0.11742 | 1.22090 / 0.13391 | 1.25461 / 0.15033 | 1.28967 / 0.16669 | 1.32617 / 0.18298 |
| 70 | 1.03671 / 0.02897 | 1.05963 / 0.04340 | 1.08324 / 0.05776 | 1.10757 / 0.07207 | 1.13268 / 0.08632 | 1.15857 / 0.10051 | 1.18530 / 0.11466 | 1.21291 / 0.12874 | 1.24144 / 0.14278 | 1.27093 / 0.15676 |
| 80 | 1.03201 / 0.02531 | 1.05189 / 0.03791 | 1.07230 / 0.05047 | 1.09324 / 0.06298 | 1.11476 / 0.07545 | 1.13687 / 0.08787 | 1.15959 / 0.10025 | 1.18295 / 0.11259 | 1.20697 / 0.12489 | 1.23170 / 0.13715 |
| 90 | 1.02838 / 0.02246 | 1.04593 / 0.03366 | 1.06390 / 0.04481 | 1.08228 / 0.05593 | 1.10111 / 0.06701 | 1.12039 / 0.07806 | 1.14014 / 0.08907 | 1.16038 / 0.10005 | 1.18112 / 0.11099 | 1.20239 / 0.12190 |
| 100 | 1.02548 / 0.02020 | 1.04120 / 0.03026 | 1.05725 / 0.04029 | 1.07363 / 0.05030 | 1.09036 / 0.06027 | 1.10745 / 0.07022 | 1.12492 / 0.08014 | 1.14276 / 0.09002 | 1.16101 / 0.09988 | 1.17966 / 0.10972 |
Table A2. Percentage of intrinsic risk increment for MLE and a-MIRE; each cell shows MLE / a-MIRE. Cells with $n-m=1$ show only the MLE value, since the a-MIRE requires $n>m+1$.

| n \ m | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 80.54 / — | — | — | — | — | — | — | — | — | — |
| 3 | 57.08 / 0.95 | 114.18 / — | — | — | — | — | — | — | — | — |
| 4 | 42.51 / 0.12 | 87.63 / 0.64 | 140.61 / — | — | — | — | — | — | — | — |
| 5 | 33.73 / 0.04 | 67.93 / 0.11 | 114.44 / 0.61 | 162.61 / — | — | — | — | — | — | — |
| 6 | 27.88 / 0.02 | 54.83 / 0.04 | 90.96 / 0.10 | 138.22 / 0.58 | 181.58 / — | — | — | — | — | — |
| 7 | 23.72 / 0.01 | 45.78 / 0.02 | 74.42 / 0.03 | 112.04 / 0.10 | 159.65 / 0.56 | 198.31 / — | — | — | — | — |
| 8 | 20.63 / 0.01 | 39.22 / 0.01 | 62.67 / 0.02 | 92.70 / 0.03 | 131.51 / 0.09 | 179.21 / 0.55 | 213.33 / — | — | — | — |
| 9 | 18.25 / 0.00 | 34.27 / 0.01 | 54.00 / 0.01 | 78.62 / 0.01 | 109.84 / 0.03 | 149.62 / 0.09 | 197.26 / 0.54 | 226.98 / — | — | — |
| 10 | 16.35 / 0.00 | 30.42 / 0.00 | 47.38 / 0.01 | 68.08 / 0.01 | 93.73 / 0.01 | 125.99 / 0.03 | 166.59 / 0.09 | 214.02 / 0.53 | 239.51 / — | — |
| 11 | 14.81 / 0.00 | 27.33 / 0.00 | 42.19 / 0.00 | 59.96 / 0.00 | 81.53 / 0.01 | 108.10 / 0.01 | 141.27 / 0.03 | 182.58 / 0.08 | 229.71 / 0.53 | 251.10 / — |
| 15 | 10.75 / 0.00 | 19.42 / 0.00 | 29.26 / 0.00 | 40.45 / 0.00 | 53.25 / 0.00 | 68.04 / 0.00 | 85.30 / 0.00 | 105.68 / 0.00 | 130.06 / 0.01 | 159.58 / 0.01 |
| 20 | 8.00 / 0.00 | 14.25 / 0.00 | 21.13 / 0.00 | 28.68 / 0.00 | 37.01 / 0.00 | 46.24 / 0.00 | 56.51 / 0.00 | 68.00 / 0.00 | 80.96 / 0.00 | 95.69 / 0.00 |
| 30 | 5.29 / 0.00 | 9.29 / 0.00 | 13.56 / 0.00 | 18.11 / 0.00 | 22.95 / 0.00 | 28.11 / 0.00 | 33.63 / 0.00 | 39.52 / 0.00 | 45.85 / 0.00 | 52.66 / 0.00 |
| 40 | 3.96 / 0.00 | 6.89 / 0.00 | 9.98 / 0.00 | 13.23 / 0.00 | 16.62 / 0.00 | 20.18 / 0.00 | 23.91 / 0.00 | 27.83 / 0.00 | 31.95 / 0.00 | 36.28 / 0.00 |
| 50 | 3.16 / 0.00 | 5.48 / 0.00 | 7.90 / 0.00 | 10.42 / 0.00 | 13.03 / 0.00 | 15.74 / 0.00 | 18.55 / 0.00 | 21.47 / 0.00 | 24.51 / 0.00 | 27.67 / 0.00 |
| 60 | 2.63 / 0.00 | 4.54 / 0.00 | 6.54 / 0.00 | 8.59 / 0.00 | 10.71 / 0.00 | 12.90 / 0.00 | 15.15 / 0.00 | 17.48 / 0.00 | 19.88 / 0.00 | 22.35 / 0.00 |
| 70 | 2.25 / 0.00 | 3.88 / 0.00 | 5.57 / 0.00 | 7.31 / 0.00 | 9.09 / 0.00 | 10.93 / 0.00 | 12.81 / 0.00 | 14.74 / 0.00 | 16.72 / 0.00 | 18.75 / 0.00 |
| 80 | 1.97 / 0.00 | 3.39 / 0.00 | 4.86 / 0.00 | 6.36 / 0.00 | 7.90 / 0.00 | 9.48 / 0.00 | 11.09 / 0.00 | 12.74 / 0.00 | 14.42 / 0.00 | 16.15 / 0.00 |
| 90 | 1.75 / 0.00 | 3.01 / 0.00 | 4.30 / 0.00 | 5.63 / 0.00 | 6.99 / 0.00 | 8.37 / 0.00 | 9.78 / 0.00 | 11.22 / 0.00 | 12.68 / 0.00 | 14.18 / 0.00 |
| 100 | 1.57 / 0.00 | 2.70 / 0.00 | 3.86 / 0.00 | 5.05 / 0.00 | 6.26 / 0.00 | 7.49 / 0.00 | 8.74 / 0.00 | 10.02 / 0.00 | 11.32 / 0.00 | 12.64 / 0.00 |
Table A3. Numerical results for the unique non-null physical component of the bias vector field $\frac1\sigma B_\theta^{m+1}(\lambda)$, for MIRE (with $\lambda=\lambda_{nm}$) and MLE (with $\lambda=1/\sqrt n$); each cell shows MIRE / MLE.

| n \ m | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 0.25273 / -0.64316 | — | — | — | — | — | — | — | — | — |
| 3 | 0.15729 / -0.34089 | 0.33352 / -0.66450 | — | — | — | — | — | — | — | — |
| 4 | 0.11879 / -0.23333 | 0.23107 / -0.38021 | 0.37337 / -0.68608 | — | — | — | — | — | — | — |
| 5 | 0.09542 / -0.17763 | 0.18380 / -0.27128 | 0.27237 / -0.41182 | 0.39710 / -0.70605 | — | — | — | — | — | — |
| 6 | 0.07989 / -0.14348 | 0.15393 / -0.21196 | 0.22483 / -0.30232 | 0.29910 / -0.43827 | 0.41288 / -0.72422 | — | — | — | — | — |
| 7 | 0.06877 / -0.12037 | 0.13280 / -0.17426 | 0.19348 / -0.24073 | 0.25307 / -0.32857 | 0.31782 / -0.46104 | 0.42409 / -0.74076 | — | — | — | — |
| 8 | 0.06039 / -0.10369 | 0.11693 / -0.14809 | 0.17044 / -0.20063 | 0.22201 / -0.26549 | 0.27371 / -0.35132 | 0.33165 / -0.48102 | 0.43251 / -0.75589 | — | — | — |
| 9 | 0.05384 / -0.09107 | 0.10451 / -0.12881 | 0.15255 / -0.17224 | 0.19859 / -0.22366 | 0.24357 / -0.28719 | 0.28944 / -0.37137 | 0.34228 / -0.49884 | 0.43904 / -0.76980 | — | — |
| 10 | 0.04858 / -0.08120 | 0.09451 / -0.11401 | 0.13817 / -0.15101 | 0.17997 / -0.19362 | 0.22044 / -0.24409 | 0.26044 / -0.30651 | 0.30184 / -0.38931 | 0.35073 / -0.51492 | 0.44424 / -0.78265 | — |
| 11 | 0.04426 / -0.07326 | 0.08628 / -0.10227 | 0.12633 / -0.13451 | 0.16471 / -0.17089 | 0.20175 / -0.21278 | 0.23791 / -0.26244 | 0.27399 / -0.32390 | 0.31186 / -0.40554 | 0.35758 / -0.52956 | 0.44844 / -0.79460 |
| 15 | 0.03267 / -0.05266 | 0.06405 / -0.07250 | 0.09426 / -0.09377 | 0.12339 / -0.11673 | 0.15156 / -0.14172 | 0.17887 / -0.16918 | 0.20545 / -0.19972 | 0.23144 / -0.23423 | 0.25707 / -0.27404 | 0.28272 / -0.32133 |
| 20 | 0.02461 / -0.03897 | 0.04848 / -0.05318 | 0.07164 / -0.06810 | 0.09414 / -0.08383 | 0.11602 / -0.10047 | 0.13732 / -0.11814 | 0.15809 / -0.13700 | 0.17836 / -0.15724 | 0.19818 / -0.17912 | 0.21761 / -0.20294 |
| 30 | 0.01649 / -0.02564 | 0.03264 / -0.03470 | 0.04845 / -0.04405 | 0.06394 / -0.05370 | 0.07913 / -0.06368 | 0.09403 / -0.07402 | 0.10864 / -0.08474 | 0.12298 / -0.09588 | 0.13707 / -0.10748 | 0.15090 / -0.11958 |
| 40 | 0.01240 / -0.01911 | 0.02460 / -0.02576 | 0.03661 / -0.03256 | 0.04843 / -0.03953 | 0.06008 / -0.04666 | 0.07154 / -0.05397 | 0.08284 / -0.06146 | 0.09397 / -0.06916 | 0.10495 / -0.07707 | 0.11576 / -0.08520 |
| 50 | 0.00994 / -0.01523 | 0.01974 / -0.02048 | 0.02942 / -0.02583 | 0.03899 / -0.03128 | 0.04843 / -0.03682 | 0.05775 / -0.04248 | 0.06696 / -0.04824 | 0.07606 / -0.05413 | 0.08506 / -0.06013 | 0.09394 / -0.06626 |
| 60 | 0.00829 / -0.01266 | 0.01649 / -0.01700 | 0.02460 / -0.02141 | 0.03262 / -0.02588 | 0.04056 / -0.03042 | 0.04842 / -0.03503 | 0.05620 / -0.03971 | 0.06390 / -0.04447 | 0.07152 / -0.04931 | 0.07906 / -0.05423 |
| 70 | 0.00711 / -0.01083 | 0.01415 / -0.01453 | 0.02113 / -0.01827 | 0.02805 / -0.02207 | 0.03490 / -0.02591 | 0.04169 / -0.02980 | 0.04842 / -0.03375 | 0.05509 / -0.03775 | 0.06170 / -0.04180 | 0.06825 / -0.04591 |
| 80 | 0.00622 / -0.00946 | 0.01240 / -0.01269 | 0.01852 / -0.01594 | 0.02460 / -0.01924 | 0.03062 / -0.02257 | 0.03660 / -0.02594 | 0.04253 / -0.02934 | 0.04842 / -0.03279 | 0.05425 / -0.03628 | 0.06005 / -0.03981 |
| 90 | 0.00554 / -0.00840 | 0.01103 / -0.01126 | 0.01649 / -0.01414 | 0.02190 / -0.01705 | 0.02728 / -0.01999 | 0.03262 / -0.02296 | 0.03792 / -0.02595 | 0.04319 / -0.02898 | 0.04841 / -0.03204 | 0.05360 / -0.03514 |
| 100 | 0.00498 / -0.00756 | 0.00993 / -0.01012 | 0.01485 / -0.01270 | 0.01974 / -0.01531 | 0.02459 / -0.01794 | 0.02942 / -0.02059 | 0.03421 / -0.02327 | 0.03898 / -0.02597 | 0.04371 / -0.02870 | 0.04841 / -0.03145 |
Table A4. Percentage of intrinsic risk due to bias for MIRE and MLE, relative to the risk of the MIRE; each cell shows MIRE / MLE.

| n \ m | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 4.20 / 27.23 | — | — | — | — | — | — | — | — | — |
| 3 | 4.25 / 19.98 | 6.90 / 27.37 | — | — | — | — | — | — | — | — |
| 4 | 3.92 / 15.14 | 8.03 / 21.74 | 8.40 / 28.36 | — | — | — | — | — | — | — |
| 5 | 3.48 / 12.08 | 7.82 / 17.04 | 10.41 / 23.79 | 9.34 / 29.54 | — | — | — | — | — | — |
| 6 | 3.10 / 10.01 | 7.29 / 13.81 | 10.56 / 19.10 | 12.02 / 25.80 | 9.99 / 30.74 | — | — | — | — | — |
| 7 | 2.78 / 8.53 | 6.71 / 11.55 | 10.13 / 15.68 | 12.52 / 21.11 | 13.18 / 27.73 | 10.46 / 31.91 | — | — | — | — |
| 8 | 2.52 / 7.42 | 6.17 / 9.90 | 9.54 / 13.22 | 12.25 / 17.52 | 13.99 / 23.04 | 14.04 / 29.54 | 10.82 / 33.04 | — | — | — |
| 9 | 2.30 / 6.57 | 5.70 / 8.66 | 8.93 / 11.39 | 11.72 / 14.87 | 13.89 / 19.31 | 15.11 / 24.88 | 14.72 / 31.26 | 11.10 / 34.11 | — | — |
| 10 | 2.11 / 5.89 | 5.28 / 7.68 | 8.36 / 9.99 | 11.12 / 12.88 | 13.45 / 16.49 | 15.17 / 21.02 | 16.01 / 26.63 | 15.26 / 32.88 | 11.32 / 35.13 | — |
| 11 | 1.95 / 5.34 | 4.91 / 6.90 | 7.84 / 8.89 | 10.53 / 11.33 | 12.89 / 14.33 | 14.83 / 18.05 | 16.21 / 22.66 | 16.73 / 28.30 | 15.70 / 34.43 | 11.50 / 36.11 |
| 15 | 1.49 / 3.87 | 3.82 / 4.90 | 6.21 / 6.14 | 8.50 / 7.61 | 10.66 / 9.32 | 12.65 / 11.31 | 14.45 / 13.66 | 16.05 / 16.44 | 17.39 / 19.76 | 18.39 / 23.76 |
| 20 | 1.15 / 2.88 | 2.98 / 3.59 | 4.89 / 4.42 | 6.78 / 5.37 | 8.59 / 6.44 | 10.32 / 7.64 | 11.97 / 8.99 | 13.52 / 10.51 | 14.97 / 12.23 | 16.32 / 14.19 |
| 30 | 0.79 / 1.91 | 2.06 / 2.33 | 3.42 / 2.83 | 4.78 / 3.37 | 6.12 / 3.96 | 7.43 / 4.60 | 8.70 / 5.29 | 9.93 / 6.04 | 11.13 / 6.84 | 12.28 / 7.71 |
| 40 | 0.60 / 1.42 | 1.58 / 1.73 | 2.63 / 2.08 | 3.69 / 2.46 | 4.74 / 2.86 | 5.78 / 3.29 | 6.80 / 3.74 | 7.80 / 4.22 | 8.77 / 4.73 | 9.72 / 5.27 |
| 50 | 0.48 / 1.14 | 1.28 / 1.37 | 2.13 / 1.64 | 3.00 / 1.93 | 3.87 / 2.24 | 4.72 / 2.56 | 5.57 / 2.89 | 6.40 / 3.24 | 7.22 / 3.61 | 8.03 / 3.99 |
| 60 | 0.41 / 0.95 | 1.07 / 1.14 | 1.79 / 1.36 | 2.53 / 1.59 | 3.26 / 1.83 | 3.99 / 2.09 | 4.72 / 2.36 | 5.43 / 2.63 | 6.14 / 2.92 | 6.83 / 3.21 |
| 70 | 0.35 / 0.81 | 0.92 / 0.97 | 1.55 / 1.16 | 2.18 / 1.35 | 2.82 / 1.56 | 3.46 / 1.77 | 4.09 / 1.99 | 4.71 / 2.21 | 5.33 / 2.45 | 5.94 / 2.69 |
| 80 | 0.31 / 0.71 | 0.81 / 0.85 | 1.36 / 1.01 | 1.92 / 1.18 | 2.49 / 1.35 | 3.05 / 1.53 | 3.61 / 1.72 | 4.16 / 1.91 | 4.71 / 2.11 | 5.26 / 2.31 |
| 90 | 0.27 / 0.63 | 0.72 / 0.75 | 1.21 / 0.89 | 1.72 / 1.04 | 2.22 / 1.19 | 2.73 / 1.35 | 3.23 / 1.51 | 3.73 / 1.68 | 4.22 / 1.85 | 4.71 / 2.03 |
| 100 | 0.25 / 0.57 | 0.65 / 0.68 | 1.10 / 0.80 | 1.55 / 0.93 | 2.01 / 1.07 | 2.47 / 1.21 | 2.92 / 1.35 | 3.37 / 1.50 | 3.83 / 1.65 | 4.27 / 1.80 |

References

  1. Park, C.; Gao, X.; Wang, M. Robust explicit estimators using the power-weighted repeated medians. J. Appl. Stat. 2023, 51, 1590–1608.
  2. Morikawa, K.; Terada, Y.; Kim, J.K. Semiparametric adaptive estimation under informative sampling. Ann. Stat. 2025, 53, 1347–1369.
  3. Jiang, J. Asymptotic distribution of maximum likelihood estimator in generalized linear mixed models with crossed random effects. Ann. Stat. 2025, 53, 1298–1318.
  4. Rao, C. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91.
  5. Burbea, J.; Rao, C. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. J. Multivar. Anal. 1982, 12, 575–596.
  6. Burbea, J. Informative geometry of probability spaces. Expo. Math. 1986, 4, 347–378.
  7. Amari, S.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society; Oxford University Press: Oxford, UK, 2000.
  8. Oller, J.; Corcuera, J. Intrinsic Analysis of the Statistical Estimation. Ann. Stat. 1995, 23, 1562–1581.
  9. García, G.; Oller, J. What does intrinsic mean in statistical estimation? Sort 2006, 2, 125–146.
  10. Bernal-Casas, D.; Oller, J. Variational Information Principles to Unveil Physical Laws. Mathematics 2024, 12, 3941.
  11. Bernardo, J.; Juárez, M. Intrinsic estimation. In Bayesian Statistics 7; Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Hackerman, W., Smith, A., West, M., Eds.; Oxford University Press: Oxford, UK, 2003; pp. 465–476.
  12. Oller, J.M.; Corcuera, J.M. Intrinsic Bayesian Estimation Using the Kullback–Leibler Divergence. Ann. Inst. Stat. Math. 2003, 55, 355–365.
  13. Lehmann, E.; Casella, G. Theory of Point Estimation; Springer Science & Business Media: New York, NY, USA, 2006.
  14. Amari, S.I. Information Geometry and Its Applications; Applied Mathematical Sciences; Springer Japan: Tokyo, Japan, 2016; Volume 194.
  15. Ay, N.; Jost, J.; Lê, H.V.; Schwachhöfer, L. Information Geometry; Springer: Cham, Switzerland, 2017.
  16. Nielsen, F. An Elementary Introduction to Information Geometry. Entropy 2020, 22, 1100.
  17. Burbea, J.; Oller, J.M. The information metric for univariate linear elliptic models. Stat. Risk Model. 1988, 6, 209–222.
  18. Lehmann, E. A general concept of unbiasedness. Ann. Math. Stat. 1951, 22, 587–592.
  19. Karcher, H. Riemannian Center of Mass and Mollifier Smoothing. Commun. Pure Appl. Math. 1977, 30, 509–541.
  20. Wolfram Research, Inc. Mathematica, Version 10.2; Wolfram Research, Inc.: Champaign, IL, USA, 2015. Available online: https://www.wolfram.com/mathematica/ (accessed on 1 July 2016).
  21. Gelman, A.; Pasarica, C.; Dodhia, R. Let's practice what we preach: Turning tables into graphs. Am. Stat. 2002, 56, 121–130.
Figure 1. Numerical results for $\xi_{nm}$ (left) and $\Phi(\lambda_{nm})$ (right), with $\xi_{nm} = \lambda_{nm}^2\,n$.
Figure 2. Percentage of intrinsic risk increment for MLE (left) and a-MIRE (right).
Figure 3. Numerical results for the unique non-null physical component of the bias vector field $\frac1\sigma B_\theta^{m+1}(\lambda)$, for MIRE (left, with $\lambda=\lambda_{nm}$) and MLE (right, with $\lambda=1/\sqrt n$).
Figure 4. Percentage of intrinsic risk due to bias for MIRE (left) and MLE (right), relative to the risk of the MIRE.