Article

A Bi-Invariant Statistical Model Parametrized by Mean and Covariance on Rigid Motions

by Emmanuel Chevallier 1,* and Nicolas Guigui 2
1 Institut Fresnel, Aix-Marseille University, CNRS, Centrale Marseille, 13013 Marseille, France
2 Université Côte d’Azur, Inria Epione, 06902 Sophia Antipolis, France
* Author to whom correspondence should be addressed.
Entropy 2020, 22(4), 432; https://doi.org/10.3390/e22040432
Submission received: 10 March 2020 / Revised: 7 April 2020 / Accepted: 9 April 2020 / Published: 10 April 2020

Abstract: This paper aims to describe a statistical model of wrapped densities for bi-invariant statistics on the group of rigid motions of a Euclidean space. Probability distributions on the group are constructed from distributions on tangent spaces and pushed to the group by the exponential map. We provide an expression of the Jacobian determinant of the exponential map of SE(n), which yields explicit expressions of the densities on the group. Besides having explicit expressions, the strengths of this statistical model are that the densities are parametrized by their moments and are easy to sample from. Unfortunately, we are not able to provide convergence rates for density estimation. We provide instead a numerical comparison between the moment-matching estimators on SE(2) and ℝ³, which shows similar behaviors.

1. Introduction

This work is an extended version of the conference paper [1], which focused on SE(2). We provide here a formula for SE(n) with arbitrary n ≥ 2, and a numerical evaluation of the convergence of the moment-matching density estimator on SE(2).
Probability density estimation problems generally fall into one of two categories: estimating a density on a Euclidean vector space or estimating a density on a non-Euclidean manifold. In turn, estimation problems on non-Euclidean manifolds can be divided into different categories depending on the nature of the manifold. The two main classes of non-Euclidean manifolds encountered in statistics are Riemannian manifolds and Lie groups. On Riemannian manifolds, the objects studied in statistics should be consistent with the Riemannian distance. For instance, means of distributions are defined as points minimizing the average squared Riemannian distance. On a Lie group, the objects should be consistent with the group law. Direct products of compact Lie groups and vector spaces, for example, belong to both categories: they admit a Riemannian metric invariant under left and right multiplications. However, in full generality, Lie groups do not admit such nice metrics, hence the need for statistical tools based solely on the group law and not on a Riemannian distance.
The definition of a statistical mean on Lie groups was addressed by Pennec and Arsigny in [2], where the authors define bi-invariant means on arbitrary Lie groups as exponential barycenters [3]. Once the bi-invariant mean is defined, higher-order bi-invariant centered moments can be defined in the tangent space at the mean. We build on this notion of moments to address the problem of constructing statistical models on SE(n), the group of direct isometries of ℝⁿ. The wrapped-distribution model we propose has several advantages. First, it is stable under left and right multiplications. Second, densities have explicit expressions and are parametrized by their mean and covariance rather than by a concentration matrix, as for the normal distributions defined in [4]. Third, the densities are easy to sample from. To do so, we construct wrapped densities on SE(n) similar to the densities defined in [5,6,7,8,9,10,11] on Riemannian manifolds. Similar types of probability distributions have already been considered for robotics applications on SE(3) to model uncertainty in motion estimation, see for instance [12].
Harmonic analysis is another well-known approach to density estimation, see [13] for SE(2) and [14,15,16] for other manifolds. Besides the technicalities and numerical difficulties introduced by harmonic analysis on non-abelian, non-compact groups, the main motivation for using wrapped distributions over harmonic analysis techniques is that they enable the definition of parametric models.
This work is based on two facts. First, the exponential map can be translated from the identity element to any point of the group regardless of the choice of left or right multiplication. This property was already of primary importance in the construction of the bi-invariant mean [2] and enables the definition of bi-invariant estimation procedures. The second important fact is that the Jacobian of the exponential map on SE(n) admits a closed-form expression, which we compute in Section 3.2. This Jacobian provides an easy way to define probability densities with explicit expressions on the group by pushing densities from tangent spaces using the exponential map.
Unfortunately, the literature on the convergence of bi-invariant moments on Lie groups is still very limited. Therefore, we were not able to characterize the convergence of estimators based on the proposed model. Instead, we compared numerically the convergence of the moment-matching estimator on SE(2) and on ℝ³.
The paper is organized as follows. Section 2 describes the group of direct isometries of the Euclidean space. Section 3 includes relevant properties of the exponential mapping and the computation of the Jacobian determinant. Section 4 recalls the definitions of the first and second centered moments on a Lie group. A statistical model together with a sampling and an estimation procedure is introduced in Section 5. Section 6 concludes the paper.

2. Euclidean Groups

For a condensed introduction to Lie group theory for robotics, see [17], and for several relevant calculations on low-dimensional rigid motions, see the series of notes [18,19,20].
SE(n) is the set of all direct isometries of the Euclidean space ℝⁿ. The composition of maps makes SE(n) a group. For each element g of SE(n) there are a unique rotation R and a unique vector t such that

$$g(u) = Ru + t,$$
hence the isometry g can be represented by the couple (R, t). The group structure of SE(n) is not a direct product of the special orthogonal group and the group of translations, but a semi-direct product with the translations as the normal subgroup:

$$SE(n) = SO(n) \ltimes_\phi \mathbb{R}^n,$$

$$(R, t)\cdot(R', t') = \big(R R',\; \phi_R(t') + t\big),$$
where we simply have φ_R = R. Let Ψ_(R,t) denote the conjugation by (R, t). A short calculation gives

$$\Psi_{(R,t)}(R', t') = (R, t)(R', t')(R, t)^{-1} = \big(R R' R^{-1},\; -R R' R^{-1} t + R t' + t\big).$$
Recall that Ad_(R,t) = d(Ψ_(R,t))_e. Hence, after unfolding the elements of the Lie algebra se(n) into column vectors, the matrix representation of Ad_(R,t) is given by

$$\mathrm{Ad}_{(R,t)} : \begin{pmatrix} \mathrm{Ad}_R & 0 \\ C & R \end{pmatrix} \qquad (1)$$
where C is an n × n(n−1)/2 matrix and Ad_R is the adjoint representation of rotations. The structure of this adjoint matrix implies, first, that SE(n) is unimodular, i.e., admits a bi-invariant measure, and second, that the derivative of the exponential admits an explicit expression, as we will see in Section 3.2. To see that SE(n) is unimodular, consider a left-invariant volume form ω. The volume form is bi-invariant if and only if

$$dL_g \circ dR_{g^{-1}}(\omega_e) = \omega_e,$$

or equivalently det(dL_g ∘ dR_{g⁻¹}) = det(Ad_g) = 1. Since SO(n) is compact, it admits a bi-invariant measure. Hence det(Ad_R) = 1, and we have

$$\det\big(\mathrm{Ad}_{(R,t)}\big) = \det\big(\mathrm{Ad}_R\big)\cdot\det(R) = 1.$$
We denote by μ_G the bi-invariant measure associated with ω. The fact that SE(n) is unimodular has a significant impact on the definition of statistical tools: it makes it possible to manipulate densities of probability distributions with respect to a canonical measure.
A convenient way to represent elements of SE(n) is to identify the isometry (R, t) with the matrix

$$\begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} \in GL_{n+1}(\mathbb{R}).$$
It is easy to check that the composition of isometries corresponds to matrix multiplication. SE(n) is thus seen as a Lie subgroup of GL_{n+1}(ℝ). Our density modelling framework is intrinsic and does not depend on a specific choice of coordinates. However, it is useful for some computations to fix a reference basis. The tangent space at the identity element, denoted T_e SE(n), is spanned by the matrices of the form

$$A_{i,j} = \begin{pmatrix} E_{i,j} - E_{j,i} & 0 \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad T_i = \begin{pmatrix} 0 & e_i \\ 0 & 0 \end{pmatrix},$$

where E_{i,j} is the n × n matrix with a 1 at index (i, j) and zeros elsewhere, and e_i is the i-th basis vector of ℝⁿ. Let B_e = {A_{i,j}} ∪ {T_i} be the reference basis of T_e SE(n). B_e can be translated by left multiplication to obtain a left-invariant field of bases B. Depending on the context, A will denote an n × n skew-symmetric matrix or its embedding in the Lie algebra of GL_{n+1}, and tangent vectors will be denoted by the letter u: u = (A, T).
Recall that a skew-symmetric matrix can be block-diagonalized into 2 by 2 skew-symmetric blocks along the diagonal, followed by a 0 when the dimension is odd. For each n × n skew-symmetric matrix A, we denote by θ_1, …, θ_⌊n/2⌋ the angles of these 2 by 2 blocks.
The identification of SE(n) with a Lie subgroup of GL_{n+1}(ℝ) makes the computation of the exponential map easy: the group exponential is simply the matrix exponential. Let U be the subset of T_e SE(n) defined by

$$U = \{\, u = (A, T) \;|\; \forall i,\ \theta_i \in (-\pi, \pi) \,\}.$$

It can be checked that the exponential map restricted to U is a bijection onto its image. Therefore, we can define the logarithm on SE(n) as the inverse of the exponential on U.
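The matrix representation above makes these computations concrete. The following minimal sketch (ours, not taken from the paper; it uses plain NumPy/SciPy rather than the geomstats package cited later) represents SE(2) elements as 3 × 3 homogeneous matrices and uses the matrix exponential and logarithm as the group exponential and logarithm.

```python
# Minimal sketch, assuming SE(2) elements are stored as 3x3 homogeneous matrices.
import numpy as np
from scipy.linalg import expm, logm

def se2_algebra(theta, t):
    """Tangent vector u = (A, T) at the identity, embedded in gl(3)."""
    return np.array([[0.0, -theta, t[0]],
                     [theta, 0.0,  t[1]],
                     [0.0,   0.0,  0.0]])

def group_exp(u):
    """Group exponential = matrix exponential for this matrix group."""
    return expm(u)

def group_log(g):
    """Group logarithm, valid when the rotation angle lies in (-pi, pi)."""
    return np.real(logm(g))

# Composition of isometries is matrix multiplication.
g1 = group_exp(se2_algebra(0.3, [1.0, 0.0]))
g2 = group_exp(se2_algebra(-0.5, [0.0, 2.0]))
g = g1 @ g2
print(np.allclose(group_exp(group_log(g)), g))  # exp and log are inverse on U
```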

3. Bi-Invariant Local Linearizations

Moments and densities are defined using local linearizations of the group. Hence, to obtain bi-invariant statistics, the linearization must be compatible with left and right multiplications. This section describes why the exponential map provides such linearizations from arbitrary elements.
Though we do not use this formalism, the construction of the exponential at g can be viewed in the general setting of Cartan connections on Lie groups. The exponential at g is then the exponential of a bi-invariant connection, see [21,22,23].

3.1. The Exponential at a Point g

Since the exponential maps the lines of the tangent space at e to the one-parameter subgroups of SE(n), it is a natural candidate to linearize the group near the identity. To linearize the group around an arbitrary element g, one can move g to the identity by multiplication by g⁻¹, use the linearization at the identity to obtain a tangent vector in T_e SE(n), and map the resulting tangent vector to T_g SE(n) by multiplication by g. Fortunately, this procedure does not depend on the choice of left or right multiplication. Recall that on a Lie group,
$$g \exp(u)\, g^{-1} = \exp\big(\mathrm{Ad}_g(u)\big) = \exp\big(dL_g(dR_{g^{-1}}(u))\big) = \exp\big(dR_{g^{-1}}(dL_g(u))\big),$$
where dL_g and dR_g are the differentials of the left and right multiplications. This property enables the transport of the exponential map to any element of the group without ambiguity in the choice of left or right multiplication,
$$\exp_g : T_g SE(n) \to SE(n),$$

$$u \mapsto \exp_g(u) = g\cdot\exp\big(dL_{g^{-1}} u\big) = \exp\big(dR_{g^{-1}} u\big)\cdot g,$$
see Figure 1 for a visual illustration.
Denote by U_g = dL_g(U) ⊂ T_g SE(n) the injectivity domain of exp_g. The logarithm log_{g₀} : SE(n) → U_{g₀} becomes

$$\log_{g_0}(g) = dL_{g_0}\,\log\big(g_0^{-1} g\big) = dR_{g_0}\,\log\big(g\, g_0^{-1}\big).$$
We now have a linearization of the group around an arbitrary g ∈ SE(n). The bi-invariant nature of the linearization is summarized in Figure 2. Independence from the choice of left or right multiplication in the definition of the exponential at an arbitrary point was the key ingredient of the definition of the bi-invariant mean in [2]. It is again a key property in our statistical model.
The strength of the exponential map is that it turns some general Lie group problems into linear algebra. Once the space has been lifted to a tangent space, the question of left and right invariance reduces to the study of commutation with the differentials of left and right multiplications. Since the tangent spaces do not have a canonical basis or scalar product, the manipulations we perform, such as computing a mean or a covariance or estimating a density, should not depend on the choice of a particular coordinate system. Hence, if these manipulations commute with all invertible linear transformations, in particular with the differentials of left and right multiplications, they induce bi-invariant operations.
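To make the transported exponential and logarithm concrete, here is a short sketch (our illustration, assuming tangent vectors at g are stored as matrices of the ambient space, so that dL_{g⁻¹}(u) is the matrix product g⁻¹u):

```python
# Sketch assuming tangent vectors at g are stored as 3x3 matrices of the ambient
# space, so that dL_{g^{-1}}(u) is the matrix product g^{-1} u.
import numpy as np
from scipy.linalg import expm, logm

def exp_at(g, u):
    """exp_g(u) = g exp(dL_{g^{-1}} u); the right-translated expression gives the same point."""
    return g @ expm(np.linalg.inv(g) @ u)

def log_at(g0, g):
    """log_{g0}(g) = dL_{g0} log(g0^{-1} g) = dR_{g0} log(g g0^{-1})."""
    return g0 @ np.real(logm(np.linalg.inv(g0) @ g))

def hom(theta, t):
    """SE(2) element (R, t) as a homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, t[0]], [s, c, t[1]], [0.0, 0.0, 1.0]])

g0, g = hom(0.4, [1.0, -2.0]), hom(-1.1, [0.5, 3.0])
u = log_at(g0, g)
print(np.allclose(exp_at(g0, u), g))              # round trip exp_g0(log_g0(g)) = g
h = hom(0.7, [2.0, 0.0])
print(np.allclose(log_at(h @ g0, h @ g), h @ u))  # dL_h log_{g0}(g) = log_{h g0}(h g)
```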

3.2. Jacobian Determinant of the Exponential

A measure μ on T_g SE(n) can be pushed forward to the group using the exponential at g. This push-forward measure is denoted exp_g(μ). Since exp_g commutes with the right and left actions, so does the push-forward of measures. To obtain expressions of the densities on the group, it is necessary to compute the Jacobian determinant of the exponential, see Figure 3.
Assume μ has a density f with respect to a Lebesgue measure of T_g SE(n) and that its support is contained in the injectivity domain U_g of exp_g. The density f_{SE(n)} of the measure pushed to the group is given by

$$f_{SE(n)}\big(\exp_g u\big) = \frac{d\,\exp_g(\mu)}{d\mu_G}\big(\exp_g(u)\big) = \frac{1}{\big|\det\big(d(\exp_g)_u\big)\big|}\, f(u),$$
where d(exp_g)_u is the differential of exp_g at the vector u, expressed in the left-invariant reference field of bases. Since SE(n) is unimodular, i.e., μ_G is bi-invariant, the density of the pushed-forward measure also commutes with the left and right translations of SE(n).
We now compute this Jacobian determinant at the identity element. To lighten notation, we drop the index e and let d exp_u be the differential of the group exponential at the tangent vector u, expressed in the bases B_e and B_exp(u). It has the following expression (see [20,24]):

$$d\exp_u = dL_{\exp u} \circ \sum_{k\ge 0}\frac{(-1)^k}{(k+1)!}\,\mathrm{ad}_u^k.$$
Since det(dL_exp u) = 1, the Jacobian determinant of the exponential is given by the determinant of the series. Fortunately, the adjoint action can be diagonalized and this determinant can be computed explicitly. Recall that ad_{u=(A,T)} = d(Ad_{(R,t)})_{(R,t)=e}(A, T). Using Equation (1), we have that the matrix of ad_u has the following form:

$$\mathrm{ad}_u : \begin{pmatrix} \mathrm{ad}_A & 0 \\ D & A \end{pmatrix},$$
where A is an n × n skew-symmetric matrix, ad_A is the adjoint map in the Lie algebra of skew-symmetric matrices, and D is an n × n(n−1)/2 matrix. Since the matrix of ad_(A,T) is block triangular,

$$\det\big(d\exp_u\big) = \det\Big(\sum_{k\ge 0}\frac{(-1)^k}{(k+1)!}\,\mathrm{ad}_A^k\Big)\cdot\det\Big(\sum_{k\ge 0}\frac{(-1)^k}{(k+1)!}\,A^k\Big).$$
Both determinants are obtained by diagonalizing A and ad_A. Take a d × d real skew-symmetric matrix M. There is a unitary matrix P such that M = P D P̄ᵗ, where D is diagonal with eigenvalues iλ_1, −iλ_1, …, iλ_⌊d/2⌋, −iλ_⌊d/2⌋, with λ_i ∈ ℝ, followed by a 0 when d is odd. For λ ≠ 0 we have

$$\sum_{k\ge 0}\frac{(-1)^k}{(k+1)!}\,\lambda^k = \frac{1-e^{-\lambda}}{\lambda},$$

and when λ = 0, the left-hand side equals 1 and the right-hand side can be extended by continuity. The right-hand sides evaluated at the eigenvalues of M are the eigenvalues of the series in M. Hence, using the fact that the determinant of a diagonalizable matrix is the product of its eigenvalues, we have

$$\det\Big(\sum_{k\ge 0}\frac{(-1)^k}{(k+1)!}\,M^k\Big) = \prod_i \frac{1-e^{-i\lambda_i}}{i\lambda_i}\cdot\frac{1-e^{i\lambda_i}}{-i\lambda_i} = \prod_i 2\,\frac{1-\cos(\lambda_i)}{\lambda_i^2}.$$
A is by definition skew-symmetric. Since the adjoint representation of SO(n) is a representation of a compact group, there is a basis in which the matrices ad_A are skew-symmetric. Hence, Equation (5) enables the computation of det(d exp_u) from the eigenvalues of A and ad_A. The eigenvalues of ad_A are usually obtained by computing the roots of the complexified Lie algebra of the group SO(n), see [25] (Chapter 3, Section 8). We provide a direct computation in Appendix A. If n is even, we then have
$$\det\big(d\exp_u\big) = \prod_i 2\,\frac{1-\cos(\theta_i)}{\theta_i^2}\;\cdot\;\prod_{i<j} 4\,\frac{1-\cos(\theta_i+\theta_j)}{(\theta_i+\theta_j)^2}\cdot\frac{1-\cos(\theta_i-\theta_j)}{(\theta_i-\theta_j)^2},$$
and for n odd,
$$\det\big(d\exp_u\big) = \prod_i \Big(2\,\frac{1-\cos(\theta_i)}{\theta_i^2}\Big)^2\;\cdot\;\prod_{i<j} 4\,\frac{1-\cos(\theta_i+\theta_j)}{(\theta_i+\theta_j)^2}\cdot\frac{1-\cos(\theta_i-\theta_j)}{(\theta_i-\theta_j)^2}.$$
Let J_g(u) = |det(d exp_{g,u})|. Since exp_g(u) = g · exp_e(dL_{g⁻¹} u),

$$d\exp_{g,u} = dL_g \circ d\exp_{e,\, dL_{g^{-1}}(u)} \circ dL_{g^{-1}}.$$
Furthermore,
$$dL_{g^{-1}}\, B_g = B_e \quad\text{and}\quad B_{\exp_g(u)} = dL_g\, B_{\exp_e(dL_{g^{-1}}(u))}.$$
Hence, expressed in the bases B_g and B_{exp_g(u)}, the determinant of d exp_{g,u} is given by

$$J_g(u) = J_e\big(dL_{g^{-1}}(u)\big).$$
When all tangent vectors are expressed in the left-invariant basis, it is possible to drop the subscripts and write
$$J(A, T) = J(\theta_1, \ldots, \theta_{\lfloor n/2\rfloor}, T) = \det\big(d\exp_{(A,T)}\big).$$
On SE(2) we simply have

$$J(\theta, T) = 2\,\frac{1-\cos(\theta)}{\theta^2}.$$
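As an illustration, the following short sketch (ours, not part of the paper) evaluates these Jacobian determinants numerically, extending the factor 2(1 − cos x)/x² by continuity at x = 0; the general-n function implements the even/odd formulas exactly as written above.

```python
import math

def ratio(x):
    """2 (1 - cos x) / x**2, extended by continuity to 1 at x = 0."""
    return 1.0 if abs(x) < 1e-8 else 2.0 * (1.0 - math.cos(x)) / x ** 2

def jacobian_se2(theta):
    """J(theta, T) = 2 (1 - cos theta) / theta**2 on SE(2); it does not depend on T."""
    return ratio(theta)

def jacobian_sen(thetas, n):
    """Jacobian determinant of exp on SE(n) from the angles theta_1..theta_{n//2};
    the first product is squared when n is odd."""
    first = 1.0
    for t in thetas:
        first *= ratio(t)
    if n % 2 == 1:
        first = first ** 2
    cross = 1.0
    for i in range(len(thetas)):
        for j in range(i + 1, len(thetas)):
            cross *= ratio(thetas[i] + thetas[j]) * ratio(thetas[i] - thetas[j])
    return first * cross

print(jacobian_se2(1.2), jacobian_sen([1.2], 2))  # the two values coincide
```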

4. First and Second Moments of a Distribution on a Lie Group

4.1. Bi-Invariant Means

Bi-invariant means on Lie groups were introduced by Pennec and Arsigny, see [2]. An element ḡ of a Lie group G is said to be a bi-invariant mean of g_1, …, g_k ∈ G, or of a probability distribution μ on G, if

$$\sum_i \log_{\bar g}(g_i) = 0 \qquad\text{or}\qquad \int_G \log_{\bar g}(g)\, d\mu(g) = 0.$$
Observe that ḡ is not necessarily unique, see [2,26,27] for more details. Using Equation (2), it is straightforward to check that the mean is compatible with left and right multiplications:

$$dL_g \sum_i \log_{\bar g}(g_i) = \sum_i \log_{g\bar g}(g\, g_i) \qquad\text{and}\qquad dR_g \sum_i \log_{\bar g}(g_i) = \sum_i \log_{\bar g g}(g_i\, g).$$

Hence, if Σ_i log_ḡ(g_i) = 0, we also have Σ_i log_{gḡ}(g g_i) = 0 and Σ_i log_{ḡg}(g_i g) = 0.
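The exponential-barycenter equation above suggests a fixed-point scheme: translate the samples to the current estimate, average their logarithms, and move the estimate along that average. The sketch below (our illustration, consistent with the definition but not an algorithm spelled out in this paper; its convergence is not discussed here) does this for SE(2) matrices.

```python
import numpy as np
from scipy.linalg import expm, logm

def bi_invariant_mean(samples, n_iter=100, tol=1e-12):
    """Fixed-point iteration g_bar <- g_bar exp( (1/k) sum_i log(g_bar^{-1} g_i) ).
    At a fixed point, sum_i log_{g_bar}(g_i) = dL_{g_bar} sum_i log(g_bar^{-1} g_i) = 0."""
    g_bar = samples[0]
    for _ in range(n_iter):
        inv = np.linalg.inv(g_bar)
        v = np.mean([np.real(logm(inv @ g)) for g in samples], axis=0)
        g_bar = g_bar @ expm(v)
        if np.linalg.norm(v) < tol:
            break
    return g_bar
```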

4.2. Covariance in a Vector Space

In this section, the bold letter u represents a vector and the plain letter u its coordinates in a basis.
Let us recall the definition of the covariance of a distribution on a vector space in a coordinate system. Let e_1, …, e_n be a basis of the vector space V and μ a distribution on V. The covariance of μ in V is defined by

$$\Sigma = E_\mu\big((u - \bar\mu)(u - \bar\mu)^t\big) = \int_V (u - \bar\mu)(u - \bar\mu)^t\, d\mu(u),$$

where u and μ̄ are the coordinate expressions of the vector u and of the mean of μ, and E_μ(·) is the expectation with respect to μ.
Let K : ℝ⁺ → ℝ⁺ be such that K(‖x‖) is a probability density on ℝⁿ whose covariance matrix in the canonical basis is the identity matrix, and let μ be the distribution on V whose density is

$$\frac{d\mu}{d\lambda_e}(u) = \frac{1}{\sqrt{\det\Sigma}}\, K\Big(\sqrt{u^t\,\Sigma^{-1}\,u}\Big),$$

where λ_e is the Lebesgue measure induced by e_1, …, e_n. It is easy to check that the covariance of μ is Σ.
Since the tangent space of a Lie group does not have a canonical basis, it is sometimes useful to define objects independently of coordinates. The coordinate-free definition of the covariance becomes

$$\Sigma = \int_V (u - \bar\mu)\otimes(u - \bar\mu)\, d\mu(u).$$
Recall that V ⊗ V is naturally identified with the space of bilinear forms on the dual space V*. Let B* be the bilinear form on V* associated with Σ. If B* is positive definite, it induces an isomorphism between V and V*, and B* is then naturally identified with a bilinear form B on V. The definition of μ in Equation (10) becomes

$$\frac{d\mu}{d\lambda_B}(u) = K\Big(\sqrt{B(u, u)}\Big),$$

where λ_B is the Lebesgue measure on V induced by B. In this formulation it clearly appears that μ does not depend on a basis.

4.3. Covariance of a Distribution on SE(n)

Let μ be a distribution on SE(n) such that its bi-invariant mean ḡ is uniquely defined. The covariance tensor of μ is defined as

$$\Sigma = E_\mu\big(\log_{\bar g}(g)\otimes\log_{\bar g}(g)\big) = \int_{SE(n)} \log_{\bar g}(g)\otimes\log_{\bar g}(g)\, d\mu(g) \;\in\; T_{\bar g}SE(n)\otimes T_{\bar g}SE(n),$$
see Figure 4 for a visual illustration.
Again, using Equation (2) and the bi-invariance of the mean, the compatibility of the covariance with left and right multiplications is straightforward. Denote by g·Σ and Σ·g the pushforwards of the tensor Σ by left and right multiplication by g. We then have

$$g\cdot\Sigma = E_\mu\big(dL_g(\log_{\bar g}(g'))\otimes dL_g(\log_{\bar g}(g'))\big) = E_\mu\big(\log_{g\bar g}(g\,g')\otimes\log_{g\bar g}(g\,g')\big) = \Sigma',$$

where the expectation is taken over g′ ∼ μ and Σ′ is the covariance of the distribution g·μ, the push-forward of μ by L_g. The same goes for right multiplications. However, it is important to note that, for a covariance Σ defined on T_g SE(n), pushing the covariance to the tangent space at the identity by left and by right multiplication usually gives different results:

$$g^{-1}\cdot\Sigma = \mathrm{Ad}_{g^{-1}}\big(\Sigma\cdot g^{-1}\big) \;\ne\; \Sigma\cdot g^{-1},$$

where Ad_g(·) is interpreted as the map on tensors induced by the adjoint representation.
For two distributions μ_1 and μ_2 with different means, the covariance tensors are objects defined in different tangent spaces. The collection of all these spaces forms the tangent bundle T SE(n), and covariances are identified with points of the tensor bundle T SE(n) ⊗ T SE(n).
In the reference field of bases B, the covariance Σ has a matrix Σ given by

$$\Sigma = \int_{SE(n)} \log_{\bar g}(g)\,\log_{\bar g}(g)^t\, d\mu(g).$$
In principal geodesic analysis, the matrix Σ is sometimes referred to as a linearized quantity in contrast to the exact principal geodesic analysis, see [28].
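In practice, once a bi-invariant mean ḡ has been computed (for instance with the fixed-point sketch of Section 4.1), the matrix of the empirical covariance in the left-invariant basis only requires the coordinates of log(ḡ⁻¹ g_i). A sketch for SE(2), under the same matrix-representation assumptions as before:

```python
import numpy as np
from scipy.linalg import logm

def se2_coords(u):
    """Coordinates (theta, t1, t2) of an element of se(2) in the basis B_e = {A_{1,2}, T_1, T_2}."""
    return np.array([u[1, 0], u[0, 2], u[1, 2]])

def empirical_covariance(samples, g_bar):
    """Matrix of the empirical covariance in the left-invariant field of bases:
    the coordinates of log_{g_bar}(g_i) in B_{g_bar} are those of log(g_bar^{-1} g_i) in B_e."""
    inv = np.linalg.inv(g_bar)
    vs = np.stack([se2_coords(np.real(logm(inv @ g))) for g in samples])
    return vs.T @ vs / len(samples)
```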

5. Statistical Models for Bi-Invariant Statistics

5.1. The Model

Let K : ℝ⁺ → ℝ⁺ be such that

(i) ∫_{ℝⁿ} K(‖u‖) du = 1;

(ii) ∫_{ℝⁿ} u uᵗ K(‖u‖) du = I, I being the n × n identity matrix;

(iii) K(x) = 0 for x > a, for some a ∈ ℝ.

Condition (i) imposes that K(‖u‖) is a probability density on ℝⁿ, condition (ii) that its covariance matrix is the identity matrix, and condition (iii) that it has bounded support.
The statistical model is defined by pushing densities of the form K(‖u‖) from tangent spaces to the group via the exponential map, where the Euclidean norms on the tangent spaces are the parameters of the distributions. To avoid summing densities over the multiple inverse images of the exponential map, it is convenient to deal with densities K(‖u‖) whose support is included in an injectivity domain, hence requirement (iii). Let C_g be the set of covariance matrices compatible with the injectivity domain U_g,

$$C_g = \Big\{\, \Sigma \;\Big|\; \forall u \notin U_g,\ \sqrt{u^t\,\Sigma^{-1}\,u} > a \,\Big\},$$

see Figure 5. When covariance matrices are expressed in the left-invariant reference basis, the set C_g is the same for all g and the subscript can be dropped.
When Σ ∈ C_g, the probability distribution μ̃ on T_g SE(n) defined by

$$\frac{d\tilde\mu}{d\lambda_g}(u) = \frac{1}{\sqrt{\det\Sigma}}\, K\Big(\sqrt{u^t\,\Sigma^{-1}\,u}\Big),$$

where λ_g denotes the Lebesgue measure of T_g SE(n), has its support contained in U_g. The density of its push-forward μ = exp_g(μ̃) is then

$$f\big(\exp_g(u)\big) = \frac{1}{J(u)\,\sqrt{\det\Sigma}}\, K\Big(\sqrt{u^t\,\Sigma^{-1}\,u}\Big),$$

or, expressed at a point g′ ∈ SE(n),

$$f(g') = \frac{1}{J\big(\log_g(g')\big)\,\sqrt{\det\Sigma}}\, K\Big(\sqrt{\log_g(g')^t\,\Sigma^{-1}\,\log_g(g')}\Big),$$
where J is given in Equation (8). The set of such probability densities, as g and Σ vary, forms a natural parametric statistical model:

$$\mathcal{M} = \big\{\, f_{g,\Sigma} : g \in SE(n) \text{ and } \Sigma \in C_g \,\big\}.$$

The commutation relations of Section 3.1 imply that M is closed under left and right multiplications. The fact that g and Σ are the moments of f_{g,Σ} plays a major role in the relevance of the model M. This fact holds when Σ is small enough; a more precise result should follow in future work.
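For concreteness, the following sketch (our illustration, assuming the SE(2) matrix representation and the Jacobian helper introduced earlier) evaluates a density f_{g,Σ} of the model at a point; the radial kernel K is passed as a function, here the uniform-ball kernel used later in Section 5.3.

```python
import numpy as np
from scipy.linalg import logm

def _ratio(x):
    """SE(2) Jacobian factor 2 (1 - cos x) / x**2, extended by continuity at 0."""
    return 1.0 if abs(x) < 1e-8 else 2.0 * (1.0 - np.cos(x)) / x ** 2

def density_se2(h, g, Sigma, K):
    """f_{g,Sigma}(h) = K(sqrt(u^t Sigma^{-1} u)) / (J(u) sqrt(det Sigma)), with u = log_g(h)."""
    u_mat = np.real(logm(np.linalg.inv(g) @ h))               # log(g^{-1} h) in se(2)
    u = np.array([u_mat[1, 0], u_mat[0, 2], u_mat[1, 2]])     # coordinates (theta, t1, t2)
    r = np.sqrt(u @ np.linalg.solve(Sigma, u))
    return K(r) / (_ratio(u[0]) * np.sqrt(np.linalg.det(Sigma)))

# Uniform-ball kernel of radius sqrt(5) (the kernel used in Section 5.3).
K_ball = lambda r: 3.0 / (4.0 * np.pi * 5.0 ** 1.5) if r <= np.sqrt(5.0) else 0.0
```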

5.2. Sampling Distributions of M

An advantage of constructing distributions from tangent spaces is that they are easy to sample from: it suffices to be able to sample from the probability density p on ℝ^d proportional to K(‖v‖). Recall that the dimension of the tangent spaces is d = n(n+1)/2. Let v_1, …, v_k be i.i.d. random column vectors distributed according to p. Then the vectors

$$u_i = \Sigma^{1/2} v_i$$

are i.i.d. with density (1/√det Σ) K(√(uᵗΣ⁻¹u)) on ℝ^d, and the points

$$g_i = \exp_g(u_i)$$

are i.i.d. according to the density f_{g,Σ} on SE(n).
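The sampling procedure is straightforward to implement. The sketch below (ours; it uses the uniform-ball kernel of Section 5.3 and plain NumPy/SciPy rather than geomstats) samples from f_{g,Σ} on SE(2) by rejection sampling in the ball, scaling by Σ^{1/2} and applying exp_g.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(0)

def sample_ball(n_samples, d=3, radius=np.sqrt(5.0)):
    """Rejection sampling of the uniform density on the ball of radius sqrt(5) in R^d;
    this density satisfies conditions (i)-(iii) with identity covariance."""
    out = []
    while len(out) < n_samples:
        v = rng.uniform(-radius, radius, size=d)
        if np.linalg.norm(v) <= radius:
            out.append(v)
    return np.array(out)

def sample_model(g, Sigma, n_samples):
    """Samples of f_{g,Sigma} on SE(2): v_i ~ p, u_i = Sigma^{1/2} v_i, g_i = exp_g(u_i)."""
    vs = sample_ball(n_samples) @ np.real(sqrtm(Sigma)).T
    samples = []
    for theta, t1, t2 in vs:
        u = np.array([[0.0, -theta, t1], [theta, 0.0, t2], [0.0, 0.0, 0.0]])
        samples.append(g @ expm(u))          # exp_g with u expressed in the basis B_g
    return samples
```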

5.3. Evaluation of the Convergence of the Moment-Matching Estimator

All the experiments in this section were performed using the Python package geomstats, see [29], available at http://geomstats.ai. Let g_1, …, g_k be points in SE(n) with a unique bi-invariant mean ĝ, such that the empirical covariance

$$\hat\Sigma(g_1, \ldots, g_k) = \frac{1}{k}\sum_i \log_{\hat g}(g_i)\,\log_{\hat g}(g_i)^t$$

belongs to C_ĝ, and such that the moments of the distribution with density f_{ĝ,Σ̂} with respect to μ_G are (ĝ, Σ̂). The compatibility with left multiplications,

$$f_{g g',\, g\cdot\Sigma} = g\cdot f_{g',\Sigma} \qquad\text{and}\qquad \hat\Sigma(g\cdot g_1, \ldots, g\cdot g_k) = g\cdot\hat\Sigma(g_1, \ldots, g_k),$$

and with right multiplications, implies that the maximum likelihood and the moment-matching estimators are bi-invariant.
On the one hand, finding the maximum likelihood estimate when g_1, …, g_k are i.i.d. requires an optimization procedure. On the other hand, matching moments is straightforward, provided that the moments of f_{ḡ,Σ} are (ḡ, Σ). In most cases, this moment-matching estimator is expected to have reasonable convergence properties; however, there are currently no theoretical results on the convergence of bi-invariant means and covariances on Lie groups. Hence, for now, it is only possible to provide empirical convergence results on specific examples. Let
$$K(x) = \frac{3}{4\pi\, 5^{3/2}}\,\mathbf{1}_{[0,\sqrt{5}]}(x), \qquad \Sigma_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \Sigma_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0.5 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The function K satisfies (i), (ii) and (iii) of Section 5.1. Since 5 < π², Σ_1 and Σ_2 are admissible covariances, Σ_1, Σ_2 ∈ C. Σ_2 is chosen so that it correlates the rotation and translation coordinates.
Given a set of i.i.d. samples g_1, …, g_k of the density f_{e,Σ}, the density estimated by the moment-matching estimator is f_{ĝ,Σ̂}. To lighten notation, we drop the subscripts and simply write f and f̂. To characterize the convergence of the estimator, we compare the convergence of f̂ on SE(2) with that of the analogous moment-matching estimator on T_e SE(2) ≅ ℝ³, computed from the samples log(g_1), …, log(g_k).
Any L^p distance between densities provides a way to evaluate the convergence in a bi-invariant way. The L¹ distance is particularly meaningful in the context of probabilities and has the advantage of being independent of a reference measure. Therefore, we evaluated the expectation of the L¹ distance to f:

$$e_k = E_f\Big(\int_{SE(2)} \big|f(g) - \hat f(g)\big|\, d\mu_G(g)\Big),$$

and its Euclidean analogue, where k is the number of samples used to compute f̂. The integrals over SE(2) can be estimated using a Monte-Carlo sampling adapted to the distributions. Indeed,

$$\int \big|f - \hat f\big|\, d\mu_G = \int_S \Big|1 - \frac{\hat f}{f}\Big|\, f\, d\mu_G + \int_{S^c} \hat f\, d\mu_G = E_f\Big(\Big|1 - \frac{\hat f}{f}\Big|\Big) + 1 - E_f\Big(\frac{\hat f}{f}\Big) \approx 1 + \frac{1}{N}\sum_{i=1}^{N}\Big(\Big|1 - \frac{\hat f}{f}(u_i)\Big| - \frac{\hat f}{f}(u_i)\Big),$$

where S is the support of f, S^c its complement, and the u_i are N i.i.d. samples of f. The L¹ distances between f and f̂ are estimated using 5000 Monte-Carlo samples, and the expectation of the L¹ distance is estimated using 200 estimates f̂. Figure 6 depicts the decay of the expected L¹ distance with the number of samples for the SE(2) and ℝ³ cases using the covariance Σ_1, and Figure 7 using the covariance Σ_2. For a given covariance Σ, the error decays on SE(2) and ℝ³ seem to be asymptotically related by a multiplicative factor close to 1. Future work should focus on gaining insight into the phenomena underlying the error decay in the general case.
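A sketch of this Monte-Carlo estimate of the L¹ distance (our illustration; f and f_hat would be built with the density sketch of Section 5.1 and the sampler of Section 5.2):

```python
import numpy as np

def l1_distance_mc(f, f_hat, sampler, n_mc=5000):
    """Monte-Carlo estimate of int |f - f_hat| d(mu_G) via the identity above:
    1 + (1/N) sum_i ( |1 - f_hat/f(u_i)| - f_hat/f(u_i) ), with u_i i.i.d. samples of f."""
    pts = sampler(n_mc)
    ratios = np.array([f_hat(p) / f(p) for p in pts])
    return 1.0 + np.mean(np.abs(1.0 - ratios) - ratios)
```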

6. Conclusion and Perspectives

In this paper, we have described a statistical model M of densities for bi-invariant statistics on SE(n). Even though we do not provide convergence rates, we showed experimentally on an example that density estimation on SE(2) behaves similarly to estimation on ℝ³. Future work will focus on a deeper analysis of the performance of the moment-matching estimator, on proposing detailed algorithms to estimate densities in a mixture model, and on generalizing the construction to other Lie groups.

Author Contributions

Conceptualization, E.C. and N.G.; Methodology, E.C. and N.G.; Software, E.C. and N.G.; Writing—original draft, E.C.; Writing—review and editing, N.G. All authors have read and agreed to the published version of the manuscript.

Funding

N. G. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement G-Statistics No 786854).

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful contributions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Eigenvalues of ad_A

Let 𝒜 be the set of skew-symmetric n × n matrices. Let A ∈ 𝒜 and ad_A : X ∈ 𝒜 ↦ AX − XA ∈ 𝒜. We aim at computing the eigenvalues of ad_A. Since A is in 𝒜, there exists an orthogonal matrix P such that D = P⁻¹AP is a matrix with vanishing entries outside of ⌊n/2⌋ 2 by 2 blocks

$$A_j = \begin{pmatrix} 0 & -a_j \\ a_j & 0 \end{pmatrix}$$

along the diagonal. If n is even, the eigenvalues of ad_A are the numbers

$$i(\pm a_j \pm a_k), \quad 1 \le j < k \le \tfrac{n}{2},$$

$$0, \quad \text{with multiplicity } \tfrac{n}{2}.$$

If n is odd, the eigenvalues of ad_A are the numbers

$$i(\pm a_j \pm a_k), \quad 1 \le j < k \le \tfrac{n-1}{2},$$

$$i(\pm a_j), \quad 1 \le j \le \tfrac{n-1}{2},$$

$$0, \quad \text{with multiplicity } \tfrac{n-1}{2}.$$
Proof. 
Let g_P(X) = PXP⁻¹. Since g_P is invertible and ad_A = g_P ∘ ad_D ∘ g_P⁻¹, ad_A and ad_D have the same eigenvalues. Consider first the case n odd. Any X ∈ 𝒜 can be decomposed into (n−1)/2 × (n−1)/2 2 by 2 sub-matrices B_{i,j}, (n−1)/2 1 by 2 sub-matrices u_j on the last row of X, (n−1)/2 2 by 1 sub-matrices −u_jᵗ on the last column, and a 1 by 1 sub-matrix x = X_{n,n}. Y = ad_D(X) can be decomposed in the same way into sub-matrices C_{i,j}, v_j, −v_jᵗ and y = Y_{n,n}. Computing the block products, we obtain

$$C_{i,j} = A_i B_{i,j} - B_{i,j} A_j, \quad 1 \le i, j \le \tfrac{n-1}{2},$$

$$v_j = -u_j A_j, \quad 1 \le j \le \tfrac{n-1}{2},$$

$$y = 0.$$

It follows that each subspace 𝒜_{ij}, i ≠ j, of matrices with vanishing entries outside the 2 by 2 blocks (i, j) and (j, i), is ad_D-stable. These spaces are four-dimensional, and a direct calculation shows that the eigenvalues of ad_D restricted to them are i(±a_i ± a_j). The subspaces 𝒜_i defined by the blocks u_i and −u_iᵗ are stable as well, and the computation shows that the corresponding eigenvalues are i(±a_i). ad_D restricted to the diagonal blocks 𝒜_{i,i} vanishes; thus 0 has multiplicity (n−1)/2. In the case n even, only the eigenvalues associated with the blocks 𝒜_{i,j} remain. □
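The claim is easy to check numerically. The sketch below (our verification, not part of the paper) builds the matrix of ad_A in the basis E_{i,j} − E_{j,i}, i < j, for n = 5 and compares its eigenvalues with the ones listed above.

```python
import numpy as np

def ad_matrix(A):
    """Matrix of ad_A : X -> AX - XA on the skew-symmetric matrices,
    in the basis E_{i,j} - E_{j,i}, i < j."""
    n = A.shape[0]
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            E = np.zeros((n, n))
            E[i, j], E[j, i] = 1.0, -1.0
            basis.append(E)
    coords = lambda X: np.array([X[i, j] for i in range(n) for j in range(i + 1, n)])
    return np.column_stack([coords(A @ B - B @ A) for B in basis])

# n = 5 with angles a1, a2: expected eigenvalues i(+-a1 +- a2), +-i a1, +-i a2, and 0, 0.
a1, a2 = 0.7, 1.3
A = np.zeros((5, 5))
A[0, 1], A[1, 0] = -a1, a1
A[2, 3], A[3, 2] = -a2, a2
eig = np.linalg.eigvals(ad_matrix(A))
expected = sorted([s1 * a1 + s2 * a2 for s1 in (-1, 1) for s2 in (-1, 1)]
                  + [a1, -a1, a2, -a2, 0.0, 0.0])
print(np.allclose(np.sort(eig.imag), expected), np.allclose(eig.real, 0.0))
```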

References

  1. Chevallier, E. Towards Parametric Bi-Invariant Density Estimation on SE (2). In Proceedings of International Conference on Geometric Science of Information; Springer: Cham, Switzerland, 2019; pp. 695–702. [Google Scholar]
  2. Pennec, X.; Arsigny, V. Exponential barycenters of the canonical Cartan connection and invariant means on Lie groups. In Matrix Information Geometry; Springer: Berlin/Heidelberg, Germany, 2013; pp. 123–166. [Google Scholar]
  3. Émery, M.; Mokobodzki, G. Sur le barycentre d’une probabilité dans une variété. In Séminaire de probabilités XXV; Springer: Berlin/Heidelberg, Germany, 1991; pp. 220–233. [Google Scholar]
  4. Pennec, X. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. J. Math. Imaging Vis. 2006, 25, 127. [Google Scholar] [CrossRef] [Green Version]
  5. Chevallier, E.; Forget, T.; Barbaresco, F.; Angulo, J. Kernel density estimation on the siegel space with an application to radar processing. Entropy 2016, 18, 396. [Google Scholar] [CrossRef] [Green Version]
  6. Chevallier, E. A family of anisotropic distributions on the hyperbolic plane. In Proceedings of International Conference on Geometric Science of Information; Nielsen, F., Barbaresco, F., Eds.; Springer: Cham, Switzerland, 2017; pp. 717–724. [Google Scholar]
  7. Chevallier, E.; Kalunga, E.; Angulo, J. Kernel density estimation on spaces of Gaussian distributions and symmetric positive definite matrices. SIAM J. Imaging Sci. 2017, 10, 191–215. [Google Scholar] [CrossRef]
  8. Falorsi, L.; de Haan, P.; Davidson, T.R.; De Cao, N.; Weiler, M.; Forré, P.; Cohen, T.S. Explorations in homeomorphic variational auto-encoding. arXiv 2018, arXiv:1807.04689. [Google Scholar]
  9. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual–Inertial Odometry. IEEE Trans. Robot. 2016, 33, 1–21. [Google Scholar] [CrossRef] [Green Version]
  10. Grattarola, D.; Livi, L.; Alippi, C. Adversarial autoencoders with constant-curvature latent manifolds. Appl. Soft Comput. 2019, 81, 105511. [Google Scholar] [CrossRef] [Green Version]
  11. Pelletier, B. Kernel density estimation on Riemannian manifolds. Stat. Probab. Lett. 2005, 73, 297–304. [Google Scholar] [CrossRef]
  12. Barfoot, T.D.; Furgale, P.T. Associating uncertainty with three-dimensional poses for use in estimation problems. IEEE Trans. Rob. 2014, 30, 679–693. [Google Scholar] [CrossRef]
  13. Lesosky, M.; Kim, P.T.; Kribs, D.W. Regularized deconvolution on the 2D-Euclidean motion group. Inverse Prob. 2008, 24, 055017. [Google Scholar] [CrossRef]
  14. Hendriks, H. Nonparametric estimation of a probability density on a Riemannian manifold using Fourier expansions. Ann. Stat. 1990, 18, 832–849. [Google Scholar] [CrossRef]
  15. Huckemann, S.; Kim, P.; Koo, J.; Munk, A. Mobius deconvolution on the hyperbolic plane with application to impedance density estimation. Ann. Stat. 2010, 38, 2465–2498. [Google Scholar] [CrossRef] [Green Version]
  16. Kim, P.; Richards, D. Deconvolution density estimation on the space of positive definite symmetric matrices. In Nonparametric Statistics and Mixture Models: A Festschrift in Honor of Thomas P. Hettmansperger; World Scientific Publishing: Singapore, 2008; pp. 147–168. [Google Scholar]
  17. Sola, J.; Deray, J.; Atchuthan, D. A micro Lie theory for state estimation in robotics. arXiv 2018, arXiv:1812.01537. [Google Scholar]
  18. Eade, E. Lie groups for 2d and 3d transformations. Available online: http://ethaneade.com/lie.pdf (accessed on 10 April 2020).
  19. Eade, E. Lie Groups for Computer Vision; Cambridge Univ. Press: Cambridge, UK, 2014. [Google Scholar]
  20. Eade, E. Derivative of the Exponential Map. Available online: http://ethaneade.com/lie.pdf (accessed on 10 April 2020).
  21. Cartan, É. On the geometry of the group-manifold of simple and semi-simple groups. Proc. Akad. Wetensch. 1926, 29, 803–815. [Google Scholar]
  22. Lorenzi, M.; Pennec, X. Geodesics, parallel transport & one-parameter subgroups for diffeomorphic image registration. Int. J. Comput. Vision 2013, 105, 111–127. [Google Scholar]
  23. Postnikov, M.M. Geometry VI: Riemannian Geometry; Encyclopedia of mathematical science; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  24. Rossmann, W. Lie Groups: An Introduction Through Linear Groups; Oxford University Press on Demand: Oxford, UK, 2006; Volume 5. [Google Scholar]
  25. Helgason, S. Differential Geometry, Lie Groups, and Symmetric Spaces; Academic Press: Cambridge, MA, USA, 1979. [Google Scholar]
  26. Pennec, X. Bi-invariant means on Lie groups with Cartan-Schouten connections. In Proceedings of International Conference on Geometric Science of Information (GSI 2013); Springer: Berlin/Heidelberg, Germany, 2013; pp. 59–67. [Google Scholar]
  27. Arsigny, V.; Pennec, X.; Ayache, N. Bi-invariant Means in Lie Groups. Application to Left-invariant Polyaffine Transformations. Available online: https://hal.inria.fr/inria-00071383/ (accessed on 9 April 2020).
  28. Sommer, S.; Lauze, F.; Hauberg, S.; Nielsen, M. Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 43–56. [Google Scholar]
  29. Miolane, N.; Le Brigant, A.; Mathe, J.; Hou, B.; Guigui, N.; Thanwerdas, Y.; Heyder, S.; Peltre, O.; Koep, N.; Zaatiti, H.; et al. Geomstats: A Python Package for Riemannian Geometry in Machine Learning. 2020. Available online: https://hal.inria.fr/hal-02536154/file/main.pdf (accessed on 8 April 2020).
Figure 1. Commutation of the Adjoint/conjugation and the exponential.
Figure 2. Bi-invariant linearization.
Figure 3. To push a density from a tangent space to the group, it is necessary to know the ratios between the red and blue areas.
Figure 4. Covariance of an empirical measure.
Figure 5. Σ ∈ C_g.
Figure 6. L¹ errors and their ratios on SE(2) and T_e SE(2) ≅ ℝ³ for the covariance Σ_1, see Equation (13).
Figure 7. L¹ errors and their ratios on SE(2) and T_e SE(2) ≅ ℝ³ for the covariance Σ_2, see Equation (13).
