Entropy 2014, 16(4), 2131-2145; doi:10.3390/e16042131

Article
Information Geometry of Positive Measures and Positive-Definite Matrices: Decomposable Dually Flat Structure
Shun-ichi Amari
RIKEN Brain Science Institute, Hirosawa 2-1, Wako-shi, Saitama 351-0198, Japan; E-Mail: amari@brain.riken.jp; Tel.: +81-48-467-9669; Fax: +81-48-467-9687
Received: 14 February 2014; in revised form: 9 April 2014 / Accepted: 10 April 2014 /
Published: 14 April 2014

Abstract

Information geometry studies the dually flat structure of a manifold, highlighted by the generalized Pythagorean theorem. The present paper studies a class of Bregman divergences called the (ρ, τ)-divergence. A (ρ, τ)-divergence generates a dually flat structure in the manifold of positive measures, as well as in the manifold of positive-definite matrices. The class is composed of decomposable divergences, which are written as a sum of componentwise divergences. Conversely, a decomposable dually flat divergence is shown to be a (ρ, τ)-divergence. A (ρ, τ)-divergence is determined from two monotone scalar functions, ρ and τ. The class includes the KL-divergence, α-, β- and (α, β)-divergences as special cases. The transformation between an affine parameter and its dual is easily calculated in the case of a decomposable divergence. Therefore, such a divergence is useful for obtaining the center of a cluster of points, which will be applied to classification and information retrieval in vision. For the manifold of positive-definite matrices, in addition to dual flatness and decomposability, we require invariance under linear transformations, in particular under orthogonal transformations. This opens a way to define a new class of divergences, called the (ρ, τ)-structure, in the manifold of positive-definite matrices.
Keywords: information geometry; dually flat structure; decomposable divergence; (ρ, τ)-structure

1. Introduction

Information geometry, which originated from the invariant structure of a manifold of probability distributions, consists of a Riemannian metric and dually coupled affine connections with respect to the metric [1]. A manifold having a dually flat structure is particularly interesting and important. In such a manifold, there are two dually coupled affine coordinate systems and a canonical divergence, which is a Bregman divergence. The highlight is given by the generalized Pythagorean theorem and the projection theorem. Information geometry is useful not only for statistical inference, but also for machine learning, pattern recognition, optimization and even for neural networks. It is also related to the statistical physics of the Tsallis q-entropy [2–4].

The present paper studies a general and unique class of decomposable divergence functions in R_+^n, the manifold of n-dimensional positive measures. These are the (ρ, τ)-divergences, introduced by Zhang [5,6] from the point of view of “representation duality”. They are Bregman divergences generating a dually flat structure. The class includes the well-known Kullback-Leibler divergence, α-divergence, β-divergence and (α, β)-divergence [1,7–9] as special cases. The merit of a decomposable Bregman divergence is that the θ-η Legendre transformation is computationally tractable, where θ and η are two affine coordinate systems coupled by the Legendre transformation. When one uses a dually flat divergence to define the center of a cluster of elements, the center is easily given by the arithmetic mean of the dual coordinates of the elements [10,11]. However, we need to calculate its primal coordinates, which is the θ-η transformation. Hence, our new type of divergence has the advantage that the θ-coordinates are easy to compute for clustering and related pattern-matching problems. The most general class of dually flat divergences in R_+^n, not necessarily decomposable, is also given.

Positive-definite (PD) matrices appear in many engineering problems, such as convex programming, diffusion tensor analysis and multivariate statistical analysis [12–20]. The manifold, PDn, of n × n PD matrices forms a cone, and its geometry is by itself an important subject of research. If we consider the submanifold consisting of only diagonal matrices, it is equivalent to the manifold of positive measures. Hence, PD matrices can be regarded as a generalization of positive measures. There are many studies on the geometry and divergences of the manifold of positive-definite matrices. We introduce a general class of dually flat divergences, the (ρ, τ)-divergence. We analyze the cases when a (ρ, τ)-divergence is invariant under the general linear transformations, Gl(n), and invariant under the orthogonal transformations, O(n). They not only include many well-known divergences of PD matrices, but also give new important divergences.

The present paper is organized as follows. Section 2 is preliminary, giving a short introduction to a dually flat manifold and the Bregman divergence. It also defines the cluster center due to a divergence. Section 3 defines the (ρ, τ)-structure in R_+^n. It gives dually flat decomposable affine coordinates and a related canonical divergence (Bregman divergence). Section 4 is devoted to the (ρ, τ)-structure of the manifold, PDn, of PD matrices. We first study the class of divergences that are invariant under O(n). We further study a decomposable divergence that is invariant under Gl(n). It coincides with the invariant divergence derived from zero-mean Gaussian distributions with PD covariance matrices. They not only include various known divergences, but also remarkable new ones. Section 5 discusses a general class of non-decomposable flat divergences and miscellaneous topics. Section 6 concludes the paper.

2. Preliminaries to Information Geometry of Divergence

2.1. Dually Flat Manifold

A manifold is said to have a dually flat Riemannian structure when it has two affine coordinate systems θ = (θ1, · · · , θn) and η = (η1, · · · , ηn) (with respect to two flat affine connections), together with two convex functions, ψ(θ) and ϕ(η), such that the two coordinate systems are connected by the Legendre transformations:

$$\eta = \nabla \psi(\theta), \qquad \theta = \nabla \phi(\eta),$$

where ∇ is the gradient operator. The Riemannian metric is given by:

$$(g_{ij}(\theta)) = \nabla\nabla\, \psi(\theta), \qquad (g^{ij}(\eta)) = \nabla\nabla\, \phi(\eta)$$

in the respective coordinate systems. A curve that is linear in the θ-coordinates is called a θ-geodesic, and a curve linear in the η-coordinates is called an η-geodesic.

A dually flat manifold has a unique canonical divergence, which is the Bregman divergence defined by the convex functions,

$$D[P : Q] = \psi(\theta_P) + \phi(\eta_Q) - \theta_P \cdot \eta_Q,$$

where θ_P is the θ-coordinates of P, η_Q is the η-coordinates of Q, and $\theta_P \cdot \eta_Q = \sum_i \theta_P^i\, \eta_{Qi}$, where $\theta_P^i$ and $\eta_{Qi}$ are the components of θ_P and η_Q, respectively. The Pythagorean and projection theorems hold in a dually flat manifold:

Pythagorean Theorem

Given three points, P,Q,R, when the η-geodesic connecting P and Q is orthogonal to the θ-geodesic connecting Q and R with respect to the Riemannian metric,

$$D[P : Q] + D[Q : R] = D[P : R].$$

Projection Theorem

Given a smooth submanifold, S, let P_S be the minimizer of the divergence from P to S:

$$P_S = \arg\min_{Q \in S} D[P : Q].$$

Then, P_S is the η-geodesic projection of P to S, that is, the η-geodesic connecting P and P_S is orthogonal to S.

We have the duals of the above theorems, in which the θ- and η-geodesics are exchanged and D[P : Q] is replaced by its dual, D[Q : P].

2.2. Decomposable Divergence

A divergence, D [P : Q], is said to be decomposable, when it is written as a sum of component-wise divergences,

$$D[P : Q] = \sum_{i=1}^n d\left(\theta_P^i, \theta_Q^i\right),$$

where $\theta_P^i$ and $\theta_Q^i$ are the components of θ_P and θ_Q, and $d(\theta_P^i, \theta_Q^i)$ is a scalar divergence function.

An f-divergence:

$$D_f[P : Q] = \sum_i p_i\, f\!\left(\frac{q_i}{p_i}\right)$$

is a typical example of a decomposable divergence in the manifold of probability distributions, where P = (p_i) and Q = (q_i) are two probability vectors with ∑ p_i = ∑ q_i = 1. A convex function, ψ(θ), is said to be decomposable when it is written as:

$$\psi(\theta) = \sum_{i=1}^n \tilde{\psi}(\theta_i)$$

by using a scalar convex function, ψ̃(θ). The Bregman divergence derived from a decomposable convex function is decomposable.

When ψ(θ) is a decomposable convex function, its Legendre dual is also decomposable. The Legendre transformation is given componentwise as:

$$\eta_i = \tilde{\psi}'(\theta_i),$$

where ′ denotes differentiation, so the transformation is computationally tractable. Its inverse transformation is also componentwise,

$$\theta_i = \tilde{\phi}'(\eta_i),$$

where ϕ̃ is the Legendre dual of ψ̃.
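As a small numerical sketch of this componentwise Legendre transformation (the convex function ψ̃ chosen here is our own illustrative example, not one fixed by the text):

```python
import math

# Illustrative choice: psi_tilde(theta) = exp(theta), so that
# eta = psi_tilde'(theta) = exp(theta), and the Legendre dual gives
# theta = phi_tilde'(eta) = log(eta).
def theta_to_eta(theta):
    return [math.exp(t) for t in theta]   # eta_i = psi_tilde'(theta_i)

def eta_to_theta(eta):
    return [math.log(e) for e in eta]     # theta_i = phi_tilde'(eta_i)

theta = [0.5, -1.2, 2.0]
eta = theta_to_eta(theta)
back = eta_to_theta(eta)
print(all(abs(a - b) < 1e-12 for a, b in zip(theta, back)))
```

Because each component is transformed independently, the cost is linear in the dimension, which is the computational advantage of decomposability stressed in the text.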

2.3. Cluster Center

Consider a cluster of points P1, · · · , Pm of which θ-coordinates are θ1, · · · , θm and η-coordinates are η1, · · · , ηm. The center, R, of the cluster with respect to the divergence, D [P : Q], is defined by:

$$R = \arg\min_Q \sum_{i=1}^m D[Q : P_i].$$

By differentiating $\sum_i D[Q : P_i]$ with respect to θ (the θ-coordinates of Q), we have:

$$\nabla \psi(\theta_R) = \frac{1}{m} \sum_{i=1}^m \eta_i.$$

Hence, the cluster-center theorem due to Banerjee et al. [10] follows; see also [11]:

Cluster-Center Theorem

The η-coordinates ηR of the cluster center are given by the arithmetic average of the η-coordinates of the points in the cluster:

$$\eta_R = \frac{1}{m} \sum_{i=1}^m \eta_i.$$

When we need to obtain the θ-coordinates of the cluster center, it is given by the θ-η transformation from ηR,

$$\theta_R = \nabla \phi(\eta_R).$$

However, in many cases, the transformation is computationally heavy and intractable when the dimension of the manifold is large. The transformation is easy in the case of a decomposable divergence. This is the motivation for considering a general class of decomposable Bregman divergences.
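The full clustering step can be sketched as follows; the choice ψ̃(θ) = e^θ (so that η_i = e^{θ_i} and θ_i = log η_i) and the data points are our own illustrative assumptions:

```python
import math

# theta-coordinates of m = 3 cluster members (arbitrary illustrative data)
points_theta = [[0.0, 1.0], [1.0, 2.0], [2.0, 0.0]]

# map each member to its dual coordinates, componentwise: eta_i = exp(theta_i)
points_eta = [[math.exp(t) for t in p] for p in points_theta]

# cluster-center theorem: eta_R is the arithmetic mean of the eta-coordinates
m = len(points_eta)
dim = len(points_eta[0])
eta_R = [sum(p[i] for p in points_eta) / m for i in range(dim)]

# componentwise theta-eta transformation back: theta_i = log(eta_i)
theta_R = [math.log(e) for e in eta_R]
print(theta_R)
```

The averaging step is always cheap; it is the final η-to-θ map that would be intractable for a non-decomposable divergence, and here it is a single scalar inversion per coordinate.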

3. (ρ, τ) Dually Flat Structure in R_+^n

3.1. (ρ, τ)-Coordinates of R_+^n

Let R_+^n be the manifold of positive measures over n elements x1, · · · , xn. A measure (or a weight) of x_i is given by:

$$\xi_i = m(x_i) > 0$$

and ξ = (ξ1, · · · , ξn) is a distribution of measures. When ∑ξi = 1 is satisfied, it is a probability measure. We write:

$$R_+^n = \{\xi \mid \xi_i > 0\}$$

and ξ forms a coordinate system of R_+^n.

Let ρ(ξ) and τ (ξ) be two monotonically increasing differentiable functions. We call:

$$\theta = \rho(\xi), \qquad \eta = \tau(\xi)$$

the ρ- and τ -representations of positive measure ξ. This is a generalization of the ±α representations [1] and was introduced in [5] for a manifold of probability distributions. See also [6].

By using these functions, we construct new coordinate systems θ and η of R_+^n. They are given, for θ = (θ_i) and η = (η_i), by the componentwise relations,

$$\theta_i = \rho(\xi_i), \qquad \eta_i = \tau(\xi_i).$$

They are called the ρ- and τ-representations of ξ ∈ R_+^n, respectively. We search for convex functions, ψ_{ρ,τ}(θ) and ϕ_{ρ,τ}(η), which are Legendre duals to each other, such that θ and η are two dually coupled affine coordinate systems.

3.2. Convex Functions

We introduce two scalar functions of θ and η by:

$$\tilde{\psi}_{\rho,\tau}(\theta) = \int_0^{\rho^{-1}(\theta)} \tau(\xi)\, \rho'(\xi)\, d\xi,$$
$$\tilde{\phi}_{\rho,\tau}(\eta) = \int_0^{\tau^{-1}(\eta)} \rho(\xi)\, \tau'(\xi)\, d\xi.$$

Then, the first and second derivatives of ψ̃ρ,τ are:

$$\tilde{\psi}'_{\rho,\tau}(\theta) = \tau(\xi), \qquad \tilde{\psi}''_{\rho,\tau}(\theta) = \frac{\tau'(\xi)}{\rho'(\xi)}.$$

Since ρ′(ξ) > 0, τ′ (ξ) > 0, we see that ψ̃ρ,τ (θ) is a convex function. So is ϕ̃ρ,τ (η). Moreover, they are Legendre duals, because:

$$\tilde{\psi}_{\rho,\tau}(\theta) + \tilde{\phi}_{\rho,\tau}(\eta) - \theta\eta = \int_0^{\xi} \tau(\xi)\rho'(\xi)\, d\xi + \int_0^{\xi} \rho(\xi)\tau'(\xi)\, d\xi - \rho(\xi)\tau(\xi) = 0,$$

which follows from integration by parts, provided ρ(0)τ(0) = 0.

We then define two decomposable convex functions of θ and η by:

$$\psi_{\rho,\tau}(\theta) = \sum_i \tilde{\psi}_{\rho,\tau}(\theta_i), \qquad \phi_{\rho,\tau}(\eta) = \sum_i \tilde{\phi}_{\rho,\tau}(\eta_i).$$

They are Legendre duals to each other.
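The Legendre identity above can be checked numerically by quadrature; the power-function pair ρ, τ and the test point below are our own arbitrary choices:

```python
# rho(x) = x**(1/2), tau(x) = x**2: both monotonically increasing on (0, inf)
rho,  drho = lambda x: x ** 0.5, lambda x: 0.5 * x ** -0.5
tau,  dtau = lambda x: x ** 2,   lambda x: 2.0 * x

def quad(f, a, b, n=20000):
    """Midpoint rule, accurate enough to check the identity numerically."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

xi = 1.7
theta, eta = rho(xi), tau(xi)
psi = quad(lambda x: tau(x) * drho(x), 0.0, xi)   # psi~(theta)
phi = quad(lambda x: rho(x) * dtau(x), 0.0, xi)   # phi~(eta)
# Legendre identity: psi~(theta) + phi~(eta) - theta * eta = 0
print(abs(psi + phi - theta * eta))
```

For this pair the integrals are ψ̃ = 0.2 ξ^{5/2} and ϕ̃ = 0.8 ξ^{5/2}, whose sum is ξ^{5/2} = ρ(ξ)τ(ξ), so the residual printed is pure quadrature error.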

3.3. (ρ, τ )-Divergence

The (ρ, τ)-divergence between two points, ξ, ξ′ ∈ R_+^n, is defined by:

$$D_{\rho,\tau}[\xi : \xi'] = \psi_{\rho,\tau}(\theta) + \phi_{\rho,\tau}(\eta') - \theta \cdot \eta' = \sum_{i=1}^n \left[ \int_0^{\xi_i} \tau(\xi)\rho'(\xi)\, d\xi + \int_0^{\xi'_i} \rho(\xi)\tau'(\xi)\, d\xi - \rho(\xi_i)\,\tau(\xi'_i) \right],$$

where θ and η′ are ρ- and τ -representations of ξ and ξ′, respectively.

The (ρ, τ)-divergence gives a dually flat structure having θ and η as affine and dual affine coordinate systems. This is originally due to Zhang [5] and is a generalization of our previous results concerning the q- and deformed exponential families [4]. The transformation between θ and η is simple in the (ρ, τ)-structure, because it can be done componentwise,

$$\theta_i = \rho\{\tau^{-1}(\eta_i)\}, \qquad \eta_i = \tau\{\rho^{-1}(\theta_i)\}.$$

The Riemannian metric is:

$$g_{ij}(\xi) = \rho'(\xi_i)\, \tau'(\xi_i)\, \delta_{ij},$$

and hence Euclidean, because the Riemann-Christoffel curvature of the Levi-Civita connection also vanishes.
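As a concrete numerical check of the construction, take ρ(ξ) = ξ and τ(ξ) = log ξ (our own illustrative choice): the componentwise integrals become elementary, ψ̃ = ξ log ξ − ξ and ϕ̃ = ξ′, and the resulting (ρ, τ)-divergence is the generalized KL-divergence:

```python
import math

# With rho(x) = x and tau(x) = log x, the (rho, tau)-divergence reduces to
# the generalized KL-divergence on positive measures.
def d_rho_tau(xi, xi_prime):
    return sum(x * math.log(x) - x + y - x * math.log(y)
               for x, y in zip(xi, xi_prime))

p = [0.4, 1.3, 2.0]   # arbitrary positive measures
q = [0.5, 1.0, 2.5]
print(abs(d_rho_tau(p, p)) < 1e-12)   # vanishes on the diagonal
print(d_rho_tau(p, q) > 0.0)          # positive off the diagonal
```

This illustrates how the KL-divergence sits inside the (ρ, τ)-family as stated in the introduction.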

The following theorem is new, characterizing the (ρ, τ )-divergence.

Theorem 1

The (ρ, τ)-divergences form a unique class of divergences in R_+^n that are dually flat and decomposable.

3.4. Biduality: α-(ρ, τ ) Divergence

We have dually flat connections, $(\nabla_{\rho,\tau}, \nabla^*_{\rho,\tau})$, represented in terms of covariant derivatives, which are derived from $D_{\rho,\tau}$. This is called the representation duality by Zhang [5]. We further have the α-(ρ, τ) connections defined by:

$$\nabla^{(\alpha)}_{\rho,\tau} = \frac{1+\alpha}{2}\, \nabla_{\rho,\tau} + \frac{1-\alpha}{2}\, \nabla^*_{\rho,\tau}.$$

The α-(−α) duality is called the reference duality [5]. Therefore, $\nabla^{(\alpha)}_{\rho,\tau}$ possesses the biduality, one with respect to α and −α, and the other with respect to ρ and τ.

The Riemann-Christoffel curvature of $\nabla^{(\alpha)}_{\rho,\tau}$ is:

$$R^{(\alpha)}_{\rho,\tau} = \frac{1-\alpha^2}{4}\, R^{(0)}_{\rho,\tau} = 0$$

for any α. Hence, there exists a unique canonical divergence $D^{(\alpha)}_{\rho,\tau}$ and α-(ρ, τ) affine coordinate systems. It is an interesting future problem to obtain their explicit forms.

3.5. Various Examples

As a special case of the (ρ, τ )-divergence, we have the (α, β)-divergence obtained from the following power functions,

$$\rho(\xi) = \frac{1}{\alpha}\, \xi^\alpha, \qquad \tau(\xi) = \frac{1}{\beta}\, \xi^\beta.$$

This was introduced by Cichocki, Cruces and Amari in [7,8].

The affine and dual affine coordinates are:

$$\theta_i = \frac{1}{\alpha}\, \xi_i^\alpha, \qquad \eta_i = \frac{1}{\beta}\, \xi_i^\beta$$

and the convex functions are:

$$\psi(\theta) = c_{\alpha,\beta} \sum_i \theta_i^{\frac{\alpha+\beta}{\alpha}}, \qquad \phi(\eta) = c_{\beta,\alpha} \sum_i \eta_i^{\frac{\alpha+\beta}{\beta}},$$

where:

$$c_{\alpha,\beta} = \frac{1}{\beta(\alpha+\beta)}\, \alpha^{\frac{\alpha+\beta}{\alpha}}.$$

The induced (α, β)-divergence has a simple form,

$$D_{\alpha,\beta}[\xi : \xi'] = \frac{1}{\alpha\beta(\alpha+\beta)} \sum_i \left\{ \alpha\, \xi_i^{\alpha+\beta} + \beta\, \xi_i'^{\,\alpha+\beta} - (\alpha+\beta)\, \xi_i^\alpha\, \xi_i'^{\,\beta} \right\},$$

for ξ, ξ′ ∈ R_+^n. It is defined similarly in the manifold, S_n, of probability distributions, but it is not a Bregman divergence in S_n. This is because the total mass constraint ∑ ξ_i = 1 is not linear in the θ- or η-coordinates in general.
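The closed form above is easy to implement directly; the test vectors and the parameter choice α = 0.5, β = 1.5 below are arbitrary:

```python
def d_alpha_beta(xi, xi_prime, a, b):
    """(alpha, beta)-divergence between positive measures, componentwise."""
    c = 1.0 / (a * b * (a + b))
    return c * sum(a * x ** (a + b) + b * y ** (a + b)
                   - (a + b) * x ** a * y ** b
                   for x, y in zip(xi, xi_prime))

p = [0.7, 1.1, 2.3]
q = [0.6, 1.4, 2.0]
print(abs(d_alpha_beta(p, p, 0.5, 1.5)) < 1e-12)  # vanishes at equal arguments
print(d_alpha_beta(p, q, 0.5, 1.5) > 0.0)         # positive otherwise
```

Substituting α → (1 − α)/2 and β → (1 + α)/2 (so that α + β = 1) recovers the α-divergence discussed next.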

The α-divergence is a special case of the (α, β)-divergence, so that it is a (ρ, τ )-divergence. By putting:

$$\rho(\xi) = \frac{2}{1-\alpha}\, \xi^{\frac{1-\alpha}{2}}, \qquad \tau(\xi) = \frac{2}{1+\alpha}\, \xi^{\frac{1+\alpha}{2}},$$

we have:

$$D_\alpha[\xi : \xi'] = \frac{4}{1-\alpha^2} \sum_i \left\{ \frac{1-\alpha}{2}\, \xi_i + \frac{1+\alpha}{2}\, \xi'_i - \xi_i^{\frac{1-\alpha}{2}}\, \xi_i'^{\,\frac{1+\alpha}{2}} \right\}.$$

The β-divergence [19] is obtained from:

$$\rho(\xi) = \xi, \qquad \tau(\xi) = \frac{1}{\beta}\, \xi^{\beta}.$$

It is written as:

$$D_\beta[\xi : \xi'] = \frac{1}{\beta(\beta+1)} \sum_i \left[ \xi_i^{\beta+1} + \beta\, \xi_i'^{\,\beta+1} - (\beta+1)\, \xi_i\, \xi_i'^{\,\beta} \right].$$

The β-divergence is special in the sense that it gives a dually flat structure, even in S_n. This is because ρ(ξ) is linear in ξ.

The classes of α-divergences and β-divergences intersect at the KL-divergence, although their duals are different in general. This is the only intersection point of the two classes.

When ρ(ξ) = ξ and τ(ξ) = U′(ξ), where U is a convex function, the (ρ, τ)-divergence is Eguchi’s U-divergence [21].

Zhang already introduced an (α, β)-divergence in [5], which is not a (ρ, τ)-divergence in R_+^n and is different from ours. We regret the confusion caused by our naming of the (α, β)-divergence.

4. Invariant, Flat Decomposable Divergences in the Manifold of Positive-Definite Matrices

4.1. Invariant and Decomposable Convex Function

Let P be a positive-definite matrix and ψ(P) be a convex function. Then, a Bregman divergence is defined between two positive definite matrices, P and Q, by:

$$D[P : Q] = \psi(P) - \psi(Q) - \nabla\psi(Q) \cdot (P - Q),$$

where ∇ is the gradient operator with respect to matrix P = (Pij), so that ∇ψ(P) is a matrix and the inner product of two matrices is defined by:

$$\nabla\psi(Q) \cdot P = \mathrm{tr}\left\{ \nabla\psi(Q)\, P \right\},$$

where tr is the trace of a matrix.

It induces a dually flat structure on the manifold of positive-definite matrices, where the affine coordinate system (θ-coordinates) is Θ = P and the dual affine coordinate system (η-coordinates) is:

$$H = \nabla\psi(P).$$

A convex function, ψ(P), is said to be invariant under the orthogonal group O(n), when:

$$\psi(P) = \psi(O^T P O)$$

holds for any orthogonal transformation O, where O^T is the transpose of O. An invariant function is written as a symmetric function of the n eigenvalues λ1, · · · , λn of P. See Dhillon and Tropp [12]. When an invariant convex function of P is written in the additive form, by using a convex function, f, of one variable,

$$\psi(P) = \sum_i f(\lambda_i),$$

it is said to be decomposable. We have:

$$\psi(P) = \mathrm{tr}\, f(P).$$
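A numerical sketch of such a decomposable invariant function; the choice f(λ) = λ log λ − λ, the matrix size and the random seed are our own illustrative assumptions:

```python
import numpy as np

def psi(P):
    """psi(P) = tr f(P) = sum_i f(lambda_i) with f(x) = x*log(x) - x."""
    lam = np.linalg.eigvalsh(P)           # eigenvalues of a symmetric PD matrix
    return float(np.sum(lam * np.log(lam) - lam))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = A @ A.T + 4.0 * np.eye(4)                      # positive-definite matrix
O, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal matrix

# invariance under the orthogonal group: psi(O^T P O) = psi(P)
print(abs(psi(O.T @ P @ O) - psi(P)) < 1e-9)
```

The invariance holds because orthogonal conjugation preserves the eigenvalues, and ψ depends on P only through them.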

4.2. Invariant, Flat and Decomposable Divergence

A divergence D [P : Q] is said to be invariant under O(n), when it satisfies:

$$D[P : Q] = D[O^T P O : O^T Q O].$$

When it is derived from a decomposable convex function, ψ(P), it is invariant, flat and decomposable.

We give well-known examples of decomposable convex functions and the divergences derived from them:

(1)

For f(λ) = (1/2)λ², we have:

$$\psi(P) = \frac{1}{2} \sum_i \lambda_i^2, \qquad D[P : Q] = \frac{1}{2}\, \| P - Q \|^2,$$

where ‖P‖ is the Frobenius norm:

$$\| P \|^2 = \sum_{i,j} P_{ij}^2.$$

(2)

For f(λ) = −log λ,

$$\psi(P) = -\log(\det P), \qquad D[P : Q] = \mathrm{tr}(PQ^{-1}) - \log\det(PQ^{-1}) - n.$$

The affine coordinate system is P, and the dual coordinate system is P−1. The derived geometry is the same as that of multivariate Gaussian probability distributions with mean zero and covariance matrix P.

(3)

For f(λ) = λ log λ − λ,

$$\psi(P) = \mathrm{tr}(P \log P - P), \qquad D[P : Q] = \mathrm{tr}(P \log P - P \log Q - P + Q).$$

This divergence is used in quantum information theory. The affine coordinate system is P, and the dual affine coordinate system is log P; ψ(P) is called the negative von Neumann entropy.
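The three divergences above can be implemented directly; the sketch below (matrix sizes, random seeds and tolerances are our own choices) checks that each vanishes at P = Q and is positive otherwise:

```python
import numpy as np

def logm_spd(P):
    """Matrix logarithm of a symmetric positive-definite matrix via eigh."""
    lam, U = np.linalg.eigh(P)
    return U @ np.diag(np.log(lam)) @ U.T

def d_frobenius(P, Q):                 # f(x) = x**2 / 2
    return 0.5 * float(np.sum((P - Q) ** 2))

def d_logdet(P, Q):                    # f(x) = -log(x)
    M = P @ np.linalg.inv(Q)
    return float(np.trace(M) - np.log(np.linalg.det(M)) - P.shape[0])

def d_von_neumann(P, Q):               # f(x) = x*log(x) - x
    return float(np.trace(P @ logm_spd(P) - P @ logm_spd(Q) - P + Q))

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)
Q = B @ B.T + 3.0 * np.eye(3)
for d in (d_frobenius, d_logdet, d_von_neumann):
    print(abs(d(P, P)) < 1e-9, d(P, Q) > 0.0)
```

Positivity in each case is the Bregman property of the underlying convex function, not an accident of the random matrices chosen.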

4.3. (ρ, τ )-Structure in Positive Definite Matrices

We extend the (ρ, τ )-structure in the previous section to the matrix case and introduce a general dually flat invariant decomposable divergence in the manifold of positive-definite matrices. Let:

$$\Theta = \rho(P), \qquad H = \tau(P)$$

be ρ- and τ -representations of matrices. We use two functions, ψ̃ρ,τ (θ) and ϕ̃ρ,τ (η), defined in Equations (19) and (20), for defining a pair of dually coupled invariant and decomposable convex functions,

$$\psi(\Theta) = \mathrm{tr}\, \tilde{\psi}_{\rho,\tau}(\Theta), \qquad \phi(H) = \mathrm{tr}\, \tilde{\phi}_{\rho,\tau}(H).$$

They are not convex with respect to P, but are convex with respect to Θ and H, respectively. The derived Bregman divergence is:

$$D[P : Q] = \psi\{\Theta(P)\} + \phi\{H(Q)\} - \Theta(P) \cdot H(Q).$$

Theorem 2

The (ρ, τ)-divergences form a unique class of invariant, decomposable and dually flat divergences in the manifold of positive-definite matrices.

The Euclidean, Gaussian and von Neumann divergences given in Equations (51), (54) and (56) are special examples of (ρ, τ)-divergences. They are given, respectively, by:

(1)

ρ ( ξ ) = τ ( ξ ) = ξ ,

(2)

$$\rho(\xi) = \xi, \qquad \tau(\xi) = -\frac{1}{\xi},$$

(3)

ρ ( ξ ) = ξ ,             τ ( ξ ) = log ξ .

When ρ and τ are power functions, we have the (α, β)-structure in the manifold of positive-definite matrices.

(4)

(α, β)-divergence.

By using the (α, β) power functions given by Equation (34), we have:

$$\psi(\Theta) = \frac{\alpha}{\alpha+\beta}\, \mathrm{tr}\, \Theta^{\frac{\alpha+\beta}{\alpha}} = \frac{\alpha}{\alpha+\beta}\, \mathrm{tr}\, P^{\alpha+\beta},$$
$$\phi(H) = \frac{\beta}{\alpha+\beta}\, \mathrm{tr}\, H^{\frac{\alpha+\beta}{\beta}} = \frac{\beta}{\alpha+\beta}\, \mathrm{tr}\, P^{\alpha+\beta},$$

so that the (α, β)-divergence of matrices is:

$$D[P : Q] = \mathrm{tr}\left\{ \frac{\alpha}{\alpha+\beta}\, P^{\alpha+\beta} + \frac{\beta}{\alpha+\beta}\, Q^{\alpha+\beta} - P^\alpha Q^\beta \right\}.$$

This is a Bregman divergence, where the affine coordinate system is Θ = Pα and its dual is H = Pβ.
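The matrix (α, β)-divergence can be computed via eigendecomposition of the matrix powers; parameters, sizes and the seed below are arbitrary:

```python
import numpy as np

def powm_spd(P, a):
    """P**a for a symmetric positive-definite matrix, via eigendecomposition."""
    lam, U = np.linalg.eigh(P)
    return U @ np.diag(lam ** a) @ U.T

def d_alpha_beta(P, Q, a, b):
    s = a + b
    return float(np.trace((a / s) * powm_spd(P, s) + (b / s) * powm_spd(Q, s)
                          - powm_spd(P, a) @ powm_spd(Q, b)))

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)
Q = B @ B.T + 3.0 * np.eye(3)
print(abs(d_alpha_beta(P, P, 0.5, 1.5)) < 1e-9)   # vanishes at P = Q
print(d_alpha_beta(P, Q, 0.5, 1.5) > 0.0)
```

Nonnegativity also follows from the Schatten-norm Hölder inequality, tr(P^α Q^β) ≤ (tr P^{α+β})^{α/(α+β)} (tr Q^{α+β})^{β/(α+β)}, combined with the weighted AM-GM inequality.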

(5)

The α-divergence is derived as:

$$\Theta(P) = \frac{2}{1-\alpha}\, P^{\frac{1-\alpha}{2}}, \qquad \psi(\Theta) = \frac{2}{1+\alpha}\, \mathrm{tr}\, P,$$
$$D_\alpha[P : Q] = \frac{4}{1-\alpha^2}\, \mathrm{tr}\left( -P^{\frac{1-\alpha}{2}} Q^{\frac{1+\alpha}{2}} + \frac{1-\alpha}{2}\, P + \frac{1+\alpha}{2}\, Q \right).$$

The affine coordinate system is $\frac{2}{1-\alpha} P^{\frac{1-\alpha}{2}}$, and its dual is $\frac{2}{1+\alpha} P^{\frac{1+\alpha}{2}}$.

(6)

The β-divergence is derived from Equation (41) as:

$$D_\beta[P : Q] = \frac{1}{\beta(\beta+1)}\, \mathrm{tr}\left[ P^{\beta+1} + \beta\, Q^{\beta+1} - (\beta+1)\, P Q^\beta \right].$$

4.4. Invariance Under Gl(n)

We extend the concept of invariance under the orthogonal group to invariance under the general linear group, Gl(n), that is, the set of invertible matrices L with det L ≠ 0. This is a stronger condition. A divergence is said to be invariant under Gl(n) when:

$$D[P : Q] = D[L^T P L : L^T Q L]$$

holds for any LGl(n).

We identify matrix P with the zero-mean Gaussian distribution:

$$p(x, P) = \exp\left\{ -\frac{1}{2}\, x^T P^{-1} x - \frac{1}{2} \log \det P - c \right\},$$

where c is a constant. We know that an invariant divergence belongs to the class of f-divergences in the case of a manifold of probability distributions, where the invariance means the geometry does not change under a one-to-one mapping of x to y. Moreover, the only invariant flat divergence is the KL-divergence [22]. These facts suggest the following conjecture.

Proposition

The invariant, flat and decomposable divergence under Gl(n) is the KL-divergence given by:

$$D_{KL}[P : Q] = \mathrm{tr}(PQ^{-1}) - \log\det(PQ^{-1}) - n.$$
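The Gl(n)-invariance of this divergence is easy to check numerically; the dimension, seed and tolerance below are our own choices:

```python
import numpy as np

def d_kl(P, Q):
    """KL-divergence between zero-mean Gaussians with covariances P and Q."""
    M = P @ np.linalg.inv(Q)
    return float(np.trace(M) - np.log(np.linalg.det(M)) - P.shape[0])

rng = np.random.default_rng(3)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
P = A @ A.T + 4.0 * np.eye(4)
Q = B @ B.T + 4.0 * np.eye(4)
L = rng.standard_normal((4, 4))   # a generic L is invertible

# invariance: D[L^T P L : L^T Q L] = D[P : Q]
print(abs(d_kl(L.T @ P @ L, L.T @ Q @ L) - d_kl(P, Q)) < 1e-6)
```

The invariance follows because (L^T P L)(L^T Q L)^{-1} = L^T (P Q^{-1}) L^{-T}, which leaves both the trace and the determinant of P Q^{-1} unchanged.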

5. Non-Decomposable Divergence

We have focused on flat and decomposable divergences. There are many interesting non-decomposable divergences. We first discuss a general class of flat divergences in R + n and then touch upon interesting flat and non-flat divergences in the manifold of positive-definite matrices.

5.1. General Class of Flat Divergences in R_+^n

We can describe a general class of flat divergences in R_+^n, which are not necessarily decomposable. This class was introduced in [23], which studies the conformal structure of general total Bregman divergences [11,13]. When R_+^n is endowed with a dually flat structure, it has a θ-coordinate system given by:

θ = ρ ( ξ )

which is not necessarily a componentwise function. Any pair of an invertible map θ = ρ(ξ) and a convex function ψ(θ) defines a dually flat structure and, hence, a Bregman divergence in R_+^n.

The dual coordinates η = τ (ξ) are given by:

$$\eta = \nabla\psi(\theta)$$

so that we have:

$$\eta = \tau(\xi) = \nabla\psi\{\rho(\xi)\}.$$

This implies that a pair, (ρ, τ), of coordinate systems defines dually coupled affine coordinates, and hence a dually flat structure, if and only if η = τ{ρ^{-1}(θ)} is the gradient of a convex function of θ.

This is different from the case of decomposable divergences, where any monotone pair of ρ(ξ) and τ(ξ) gives a dually flat structure.

5.2. Non-Decomposable Flat Divergence in PDn

Ohara and Eguchi [15,16] introduced the following function:

ψ V ( P ) = V ( det P ) ,

where V(ξ) is a monotonically decreasing scalar function. ψ_V is convex when and only when:

$$1 + \frac{\xi\, V''(\xi)}{V'(\xi)} < \frac{1}{n}.$$

In such a case, we can introduce dually flat structure to PDn, where P is an affine coordinate system with convex ψV (P), and the dual affine coordinate system is:

$$H = V'(\det P)\,(\det P)\, P^{-1}.$$

The derived divergence is:

$$D_V[P : Q] = V(\det P) - V(\det Q) + V'(\det Q)\,(\det Q)\, \mathrm{tr}\left\{ Q^{-1}(Q - P) \right\}.$$

When V(ξ) = −log ξ, it reduces to the case of Equation (54), which is invariant under Gl(n) and decomposable. However, for general V, the divergence D_V[P : Q] is not decomposable. It is invariant under O(n) and, more strongly, under SGl(n) ⊂ Gl(n), defined by det L = ±1.

5.3. Flat Structure Derived from q-Escort Distribution

A dually flat structure is introduced in the manifold of probability distributions [4] as:

$$\tilde{D}_\alpha[p : q] = \frac{1}{1-q}\, \frac{1}{H_q(p)} \left( 1 - \sum_i p_i^{1-q}\, q_i^{q} \right),$$

where:

$$H_q(p) = \sum_i p_i^q, \qquad q = \frac{1+\alpha}{2}.$$

The dual affine coordinates are given by the q-escort distribution [4]:

$$\eta_i = \frac{1}{H_q(p)}\, p_i^q.$$

The divergence, $\tilde{D}_q$, is flat, but not decomposable.

We can generalize it to the case of PDn,

$$\tilde{D}_q[P : Q] = \frac{1}{1-q}\, \frac{1}{\mathrm{tr}\, P^q} \left\{ (1-q)\, \mathrm{tr}\, P + q\, \mathrm{tr}\, Q - \mathrm{tr}\left(P^{1-q} Q^q\right) \right\}.$$

This is flat, but not decomposable.

5.4. γ-Divergence in PDn

The γ-divergence was introduced by Fujisawa and Eguchi [24]. It gives a super-robust estimator. It is interesting to generalize it to PDn,

$$D_\gamma[P : Q] = \frac{1}{\gamma(\gamma-1)} \left\{ \log \mathrm{tr}\, P^\gamma + (\gamma-1) \log \mathrm{tr}\, Q^\gamma - \gamma \log \mathrm{tr}\left( P Q^{\gamma-1} \right) \right\}.$$

This is neither flat nor decomposable. It is a projective divergence in the sense that, for any c, c′ > 0,

$$D_\gamma[cP : c'Q] = D_\gamma[P : Q].$$

Therefore, it can be defined in the submanifold of tr P = 1.
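The projective invariance is easy to verify numerically; the sketch below uses the form of D_γ stated above, with γ, the scales, sizes and seed chosen arbitrarily:

```python
import numpy as np

def powm_spd(P, a):
    """P**a for a symmetric positive-definite matrix."""
    lam, U = np.linalg.eigh(P)
    return U @ np.diag(lam ** a) @ U.T

def d_gamma(P, Q, g):
    return float(np.log(np.trace(powm_spd(P, g)))
                 + (g - 1.0) * np.log(np.trace(powm_spd(Q, g)))
                 - g * np.log(np.trace(P @ powm_spd(Q, g - 1.0)))) / (g * (g - 1.0))

rng = np.random.default_rng(4)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)
Q = B @ B.T + 3.0 * np.eye(3)
g = 1.5

# projective invariance: D_gamma[cP : c'Q] = D_gamma[P : Q] for c, c' > 0
print(abs(d_gamma(2.0 * P, 0.3 * Q, g) - d_gamma(P, Q, g)) < 1e-10)
```

Under P → cP, Q → c′Q, the three log-trace terms pick up γ log c, γ(γ − 1) log c′ and γ(log c + (γ − 1) log c′), respectively, and the scale factors cancel exactly.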

6. Concluding Remarks

We have shown that the (ρ, τ)-divergence introduced by Zhang [5] gives a general dually flat decomposable structure of the manifold of positive measures. We then extended it to the manifold of positive-definite matrices, where the criterion of invariance under linear transformations (in particular, under orthogonal transformations) was added. The decomposability is useful from the computational point of view, because the θ-η transformation is tractable. This is the motivation for studying decomposable flat divergences.

When we treat the manifold of probability distributions, it is a submanifold of the manifold of positive measures, where the total sum of measures is restricted to one. This is a nonlinear constraint in the θ- or η-coordinates, so that the manifold is not flat, but curved in general. Hence, our arguments hold in this case only when at least one of the ρ and τ functions is linear. The U-divergence [21] and β-divergence [19] are such cases. However, for clustering, we can take the average of the η-coordinates of member probability distributions in the larger manifold of positive measures and then project it to the manifold of probability distributions. This is called the exterior average, and the projection is simply a normalization of the result. Therefore, the (ρ, τ)-structure is useful in the case of probability distributions, too. The same situation holds in the case of positive-definite matrices.

Quantum information theory deals with positive-definite Hermitian matrices of trace one [25,26]. We need to extend our discussions to the case of complex matrices. The trace one constraint is not linear with respect to θ- or η-coordinates, as is the same in the case of probability distributions. Many interesting divergence functions have been introduced in the manifold of positive-definite Hermitian matrices. It is an interesting future problem to apply our theory to quantum information theory.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Amari, S.; Nagaoka, H. Methods of Information Geometry; American Mathematical Society and Oxford University Press: Rhode Island, RI, USA, 2000.
  2. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer: Berlin/Heidelberg, Germany, 2009.
  3. Naudts, J. Generalized Thermostatistics; Springer: Berlin/Heidelberg, Germany, 2011.
  4. Amari, S.; Ohara, A.; Matsuzoe, H. Geometry of deformed exponential families: Invariant, dually-flat and conformal geometries. Physica A 2012, 391, 4308–4319.
  5. Zhang, J. Divergence function, duality, and convex analysis. Neural Comput 2004, 16, 159–195.
  6. Zhang, J. Nonparametric information geometry: From divergence function to referential-representational biduality on statistical manifolds. Entropy 2013, 15, 5384–5418.
  7. Cichocki, A.; Amari, S. Families of alpha- beta- and gamma-divergences: Flexible and robust measures of similarities. Entropy 2010, 12, 1532–1568.
  8. Cichocki, A.; Cruces, S.; Amari, S. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy 2011, 13, 134–170.
  9. Minami, M.; Eguchi, S. Robust blind source separation by beta-divergence. Neural Comput 2002, 14, 1859–1886.
  10. Banerjee, A.; Merugu, S.; Dhillon, I.; Ghosh, J. Clustering with Bregman Divergences. J. Mach. Learn. Res 2005, 6, 1705–1749.
  11. Liu, M.; Vemuri, B.C.; Amari, S.; Nielsen, F. Shape retrieval using hierarchical total Bregman soft clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 24, 3192–3212.
  12. Dhillon, I.S.; Tropp, J.A. Matrix nearness problems with Bregman divergences. SIAM J. Matrix Anal. Appl 2007, 29, 1120–1146.
  13. Vemuri, B.C.; Liu, M.; Amari, S.; Nielsen, F. Total Bregman divergence and its applications to DTI analysis. IEEE Trans. Med. Imaging 2011, 30, 475–483.
  14. Ohara, A.; Suda, N.; Amari, S. Dualistic differential geometry of positive definite matrices and its applications to related problems. Linear Algebra Appl 1996, 247, 31–53.
  15. Ohara, A.; Eguchi, S. Group invariance of information geometry on q-Gaussian distributions induced by beta-divergence. Entropy 2013, 15, 4732–4747.
  16. Ohara, A.; Eguchi, S. Geometry on positive definite matrices induced from V -potential functions. In Geometric Science of Information; Nielsen, F., Barbaresco, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 621–629.
  17. Chebbi, Z.; Moakher, M. Means of Hermitian positive-definite matrices based on the log-determinant alpha-divergence function. Linear Algebra Appl 2012, 436, 1872–1889.
  18. Tsuda, K.; Ratsch, G.; Warmuth, M.K. Matrix exponentiated gradient updates for on-line learning and Bregman projection. J. Mach. Learn. Res 2005, 6, 995–1018.
  19. Nock, R.; Magdalou, B.; Briys, E.; Nielsen, F. Mining matrix data with Bregman matrix divergences for portfolio selection. In Matrix Information Geometry; Nielsen, F., Bhatia, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume Chapter 15, pp. 373–402.
  20. Matrix Information Geometry; Nielsen, F., Bhatia, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2013.
  21. Eguchi, S. Information geometry and statistical pattern recognition. Sugaku Expo 2006, 19, 197–216.
  22. Amari, S. α-divergence is unique, belonging to both f-divergence and Bregman divergence classes. IEEE Trans. Inf. Theory 2009, 55, 4925–4931.
  23. Nock, R.; Nielsen, F.; Amari, S. On conformal divergences and their population minimizers. IEEE Trans. Inf. Theory 2014. submitted for publication.
  24. Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal 2008, 99, 2053–2081.
  25. Petz, D. Monotone metrics on matrix spaces. Linear Algebra Appl. 1996, 244, 81–96.
  26. Hasegawa, H. α-divergence of the non-commutative information geometry. Rep. Math. Phys 1993, 33, 87–93.