Proceeding Paper

Graphical Gaussian Models Associated to a Homogeneous Graph with Permutation Symmetries †

Piotr Graczyk 1, Hideyuki Ishi 2 and Bartosz Kołodziejek 3
1 CNRS, LAREMA, SFR MATHSTIC, Université d’Angers, 49100 Angers, France
2 Department of Mathematics, School of Science, Osaka Metropolitan University, Osaka-shi 530-0001, Japan
3 Faculty of Mathematics and Information Science, Warsaw University of Technology, 00-662 Warsaw, Poland
* Author to whom correspondence should be addressed.
Presented at the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Paris, France, 18–22 July 2022.
Phys. Sci. Forum 2022, 5(1), 20; https://doi.org/10.3390/psf2022005020
Published: 7 December 2022

Abstract

We consider multivariate centered Gaussian models for the random vector (Z_1, …, Z_p), whose conditional structure is described by a homogeneous graph and which are invariant under the action of a permutation subgroup. This paper is concerned with model selection within colored graphical Gaussian models when the underlying conditional dependency graph is known. We derive an analytic expression for the normalizing constant of the Diaconis–Ylvisaker conjugate prior for the precision parameter and perform Bayesian model selection in the class of graphical Gaussian models invariant under the action of a permutation subgroup. We illustrate our results with a toy example of dimension 5.

1. Introduction

In a graphical Gaussian model, conditional independencies among the components of a random vector Z = (Z_1, Z_2, …, Z_p) obeying the multivariate centered Gaussian law N(0, Σ), with an unknown covariance matrix Σ ∈ Sym^+(p, R), are encoded by a simple undirected graph G = (V, E), where the set V of vertices is enumerated as V = {1, 2, …, p}. Namely, if the vertices i and j are disconnected in the graph G, then Z_i and Z_j are conditionally independent given the other components Z_k, k ≠ i, j. This property is equivalent to the (i, j) entry of the precision matrix K := Σ^{-1} being equal to 0. Following Højsgaard and Lauritzen [1], we impose invariance of such a statistical model under the natural action of a permutation subgroup Γ ⊂ S_p preserving the conditional independence structure of the model, which means that Γ is a subgroup of the automorphism group Aut(G) := { σ ∈ S_p ; σ(i) ∼ σ(j) if and only if i ∼ j } of the graph G, where i ∼ j means that there exists an edge between the vertices i and j. Such models are called RCOP graphical models. It is proved that, when the graph G is homogeneous (i.e., decomposable and A_4-free, see [2]), the parameter set P_G^Γ of precision matrices K of our invariant model forms a homogeneous cone. Therefore, we can apply our previous results [3] about Wishart laws on homogeneous cones to this situation. In particular, we obtain an exact analytic formula for the normalizing constant of the Diaconis–Ylvisaker conjugate prior for the precision matrix. In order to demonstrate our results, we work on the data set of the examination marks of 88 students in 5 different mathematical subjects reported in Mardia et al. [4], following Højsgaard and Lauritzen [1]. As discussed in Whittaker [5] and Edwards [6], the data fit the graphical Gaussian model of Figure 1.
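The correspondence between missing edges and zero precision entries can be illustrated numerically. The following sketch (a hypothetical precision matrix with the zero pattern of the graph of Figure 1; NumPy assumed) checks that a vanishing entry of K is exactly a vanishing partial correlation, while the corresponding covariance entry need not vanish.

```python
import numpy as np

# Hypothetical precision matrix K for the 5-vertex graph of Figure 1:
# vertices {1,2} and {4,5} are joined only through vertex 3, so the
# entries K_14, K_15, K_24, K_25 (0-based: K[0,3], K[0,4], ...) are zero.
K = np.array([
    [2.0, 0.5, 0.4, 0.0, 0.0],
    [0.5, 2.0, 0.3, 0.0, 0.0],
    [0.4, 0.3, 2.0, 0.6, 0.2],
    [0.0, 0.0, 0.6, 2.0, 0.5],
    [0.0, 0.0, 0.2, 0.5, 2.0],
])
Sigma = np.linalg.inv(K)  # covariance matrix of the model

# Partial correlation of Z_i, Z_j given the rest: -K_ij / sqrt(K_ii K_jj)
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)

# Z_1 and Z_4 are conditionally independent given the other components ...
assert abs(partial_corr[0, 3]) < 1e-12
# ... although they are marginally correlated (Sigma has no zero there).
assert abs(Sigma[0, 3]) > 1e-6
```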
We carry out Bayesian model selection in the spirit of Graczyk et al. [7], looking for the group Γ with the highest posterior probability among the ten possible groups preserving the graph above. We note that in [7], only the complete graph G was allowed. Thanks to the new formulas for the normalizing constant of the Diaconis–Ylvisaker conjugate prior, we are able to generalize the results of [7] to homogeneous graphs.
The authors are grateful to anonymous referees for their careful reading and valuable comments.

2. Main Results

Let us describe our results in more detail. Let Z_G^Γ be the linear space consisting of symmetric matrices K ∈ Sym(p, R) such that K_{σ(i)σ(j)} = K_{ij} for all σ ∈ Γ and i, j ∈ V, and K_{ij} = 0 if i ≠ j and i ≁ j. Then the cone P_G^Γ equals Z_G^Γ ∩ Sym^+(p, R), so that our statistical model is the family of N(0, Σ) with Σ^{-1} ∈ P_G^Γ. The Diaconis–Ylvisaker conjugate prior for K = Σ^{-1} ∈ P_G^Γ is given by
f(K; δ, D) := I_G^Γ(δ, D)^{-1} e^{-tr(K D)/2} (det K)^{(δ-2)/2} 1_{P_G^Γ}(K)    (1)
for hyperparameters δ > 2 and D ∈ Sym^+(p, R), where
I_G^Γ(δ, D) := ∫_{P_G^Γ} e^{-tr(K D)/2} (det K)^{(δ-2)/2} dK
is the normalizing constant. As already stated, the cone P_G^Γ is homogeneous, which means that there exists a linear group H ⊂ GL(Z_G^Γ) acting transitively on P_G^Γ. Then, making use of our integral formula (2) over P_G^Γ, we can compute the normalizing constant I_G^Γ(δ, D).
In order to state the integral formula, we introduce some functions. Let Z be a linear subspace of Sym(p, R) such that P_Z := Z ∩ Sym^+(p, R) is non-empty. Let π_Z : Sym(p, R) → Z denote the orthogonal projection with respect to the trace inner product ⟨x, y⟩ := tr(x y), x, y ∈ Sym(p, R), that is,
⟨x, y⟩ = ⟨x, π_Z(y)⟩,  x ∈ Z, y ∈ Sym(p, R).
Let P_Z^* be the dual cone of P_Z, that is, P_Z^* := { y ∈ Z ; ⟨x, y⟩ > 0 for all x ∈ cl(P_Z) \ {0} }. It is easy to see that, if D ∈ Sym^+(p, R), then π_Z(D) ∈ P_Z^*. One can show (see the proof of Proposition V.8 in [8]) that, for each y ∈ P_Z^*, there exists a unique ψ_Z(y) ∈ P_Z such that the function P_Z ∋ x ↦ e^{-⟨x, y⟩} det x attains its maximum value at x = ψ_Z(y), and that the map ψ_Z : P_Z^* → P_Z equals the inverse of the map P_Z ∋ x ↦ π_Z(x^{-1}) ∈ P_Z^*. For y ∈ P_Z^*, define δ_Z(y) := (det ψ_Z(y))^{-1}, and let S_Z(y) : Z → Z be the linear operator defined in such a way that
⟨S_Z(y) u, v⟩ = −(∂²/∂s ∂t) log δ_Z(y + s u + t v) |_{s=t=0},  u, v ∈ Z.
Namely, S_Z(y) is the Hessian operator of the strictly convex function −log δ_Z(y). Put φ_Z(y) := (det S_Z(y))^{1/2} for y ∈ P_Z^*. Finally, define γ_Z(α) := ∫_{P_Z} e^{-tr x} (det x)^α dx for α ≥ 0, where dx denotes the Lebesgue measure on Z normalized by the trace inner product; namely, dx = ∏_{i=1}^{dim Z} dx_i, where (x_1, …, x_{dim Z}) is the standard coordinate system associated with an orthonormal basis of Z with respect to the trace inner product.
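As a minimal sanity check of these definitions (an illustration only): for p = 1 one has Z = Sym(1, R) = R and P_Z = (0, ∞), so γ_Z(α) reduces to the Euler integral Γ(α + 1), which can be confirmed numerically (NumPy assumed):

```python
import numpy as np
from math import gamma

# Simplest case Z = Sym(1, R) = R, so P_Z = (0, inf) and
#   gamma_Z(alpha) = \int_0^inf e^{-x} x^alpha dx = Gamma(alpha + 1).
alpha = 1.5
x = np.linspace(1e-8, 60.0, 2_000_000)
numeric = float(np.sum(np.exp(-x) * x**alpha) * (x[1] - x[0]))  # Riemann sum
assert abs(numeric - gamma(alpha + 1)) < 1e-3
```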
Theorem 1.
If Z = Z G Γ , then one has
∫_{P_Z} e^{-⟨x, y⟩} (det x)^α dx = γ_Z(α) φ_Z(y) δ_Z(y)^{-α},  y ∈ P_Z^*, α ≥ 0.    (2)
We shall prove Theorem 1 by using the homogeneity of P_G^Γ = P_{Z_G^Γ} in our case; we note, however, that Formula (2) is also valid in some non-homogeneous cases, e.g., for the cone P_Z arising from uncolored decomposable graphical models [9].
Let γ_G^Γ, δ_G^Γ and φ_G^Γ denote the functions γ_Z, δ_Z and φ_Z, respectively, with Z = Z_G^Γ. Then, we have

I_G^Γ(δ, D) = γ_G^Γ((δ−2)/2) φ_G^Γ(π_Z(D)/2) δ_G^Γ(π_Z(D)/2)^{−(δ−2)/2}.
In our Bayesian model selection setting [7], for a fixed graph G, we suppose that a group Γ is distributed uniformly over all the possible subgroups of the automorphism group of the graph G. Given samples Z_1, Z_2, …, Z_n, that is, independent and identically distributed random vectors obeying N(0, Σ) with Σ^{-1} ∈ P_G^Γ, we see that the posterior probability P(Γ | Z_1, …, Z_n) is proportional to I_G^Γ(δ + n, D + Σ_{i=1}^n Z_i Z_i^⊤) / I_G^Γ(δ, D); see [7] (Equation (30)).
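In code, the selection step only needs the log-normalizing constant of each candidate group as a black box. The sketch below (a hypothetical helper, NumPy assumed) turns the ratios I_G^Γ(δ + n, D + Σ Z_i Z_i^⊤) / I_G^Γ(δ, D) into posterior probabilities, using a log-sum-exp for numerical stability:

```python
import numpy as np

def posterior_over_groups(log_I_by_group, delta, D, Z):
    """Posterior P(Gamma | Z_1, ..., Z_n) under a uniform prior on groups.

    log_I_by_group: dict mapping a group label to a callable
        log_I(delta, D) returning log I_G^Gamma(delta, D); these callables
        are assumed given (e.g., via the formulas of this section).
    Z: (n, p) array of centered samples.
    """
    n = Z.shape[0]
    S = Z.T @ Z                                   # sum of Z_i Z_i^T
    log_ratio = {g: log_I(delta + n, D + S) - log_I(delta, D)
                 for g, log_I in log_I_by_group.items()}
    m = max(log_ratio.values())                   # log-sum-exp trick
    w = {g: np.exp(v - m) for g, v in log_ratio.items()}
    total = sum(w.values())
    return {g: v / total for g, v in w.items()}
```

With identical normalizing constants the posterior is uniform, which gives a quick consistency check of the implementation.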

3. Matrix Realization of Homogeneous Cones

It is known that any homogeneous cone is linearly isomorphic to some P_Z, where Z ⊂ Sym(p, R) is a linear subspace consisting of real symmetric matrices admitting certain specific block decompositions described below [10,11]. Let n_1, n_2, …, n_r be positive integers such that p = n_1 + n_2 + … + n_r. Let V = {V_lk}_{1≤k<l≤r} be a family of linear spaces V_lk ⊂ Mat(n_l, n_k; R) satisfying the following axioms:
  • (V1) A ∈ V_lk ⇒ A A^⊤ ∈ R I_{n_l} (1 ≤ k < l ≤ r),
  • (V2) A ∈ V_lj, B ∈ V_kj ⇒ A B^⊤ ∈ V_lk (1 ≤ j < k < l ≤ r),
  • (V3) A ∈ V_lk, B ∈ V_kj ⇒ A B ∈ V_lj (1 ≤ j < k < l ≤ r).
Let Z_V be the linear space consisting of x ∈ Sym(p, R) of the following form:

x = [ X_11   X_21^⊤  …  X_r1^⊤
      X_21   X_22    …  X_r2^⊤
      ⋮       ⋮           ⋮
      X_r1   X_r2    …  X_rr  ],   X_kk = x_kk I_{n_k}, x_kk ∈ R (k = 1, …, r),  X_lk ∈ V_lk (1 ≤ k < l ≤ r).    (3)
Let H_V be the set of p × p lower triangular matrices T of the following form:

T = [ T_11
      T_21  T_22
      ⋮      ⋮    ⋱
      T_r1  T_r2  …  T_rr ],   T_kk = t_kk I_{n_k}, t_kk > 0 (k = 1, …, r),  T_lk ∈ V_lk (1 ≤ k < l ≤ r).    (4)
Then, H V forms a Lie group by (V3), and acts on Z V linearly by ρ ( T ) x : = T x T , T H V , x Z V . Moreover, the group H V acts on the cone P V : = P Z V simply transitively by ρ . We write ρ * ( T ) , T H V , for the adjoint of the linear operator ρ ( T ) on Z V with respect to the trace inner product, which means that ρ ( T ) x , y = x , ρ * ( T ) y for x , y Z V . Then, we see that ρ * ( T ) y = π Z ( T y T ) for y Z V . Moreover, if y P V * : = P Z V * , we have ψ Z V ( ρ * ( T ) y ) = ρ ( T 1 ) ψ Z V ( y ) . Since ψ Z ( y ) P V for y P V * , we can take a unique T y H V for which ψ Z ( y ) = ρ ( T y 1 ) I p . Then, we have y = ρ * ( T y ) I p because ρ ( T y 1 ) I p = ρ ( T y 1 ) ψ Z ( I p ) = ψ Z ( ρ * ( T y ) I p ) .
Lemma 1.
For y ∈ P_{Z_V}^*, one has δ_{Z_V}(y) = (det T_y)² and φ_{Z_V}(y) = (det ρ(T_y))^{-1}.
Using Lemma 1 and the general theory of relatively invariant functions on a homogeneous cone (see, e.g., [10] (Section IV)), we can compute δ_{Z_V} and φ_{Z_V} explicitly.
Put q_k := Σ_{l>k} dim V_lk for k = 1, …, r, and N := dim Z_V.
Theorem 2.
(i) One has
γ_{Z_V}(α) = (2π)^{(N−r)/2} ∏_{k=1}^{r} n_k^{−n_k α − (q_k+1)/2} Γ(n_k α + q_k/2 + 1).
(ii) The equality (2) holds for Z = Z V .
We give a sketch of the proof of Theorem 2. We denote by (A|B) the trace inner product tr(A B^⊤) of A, B ∈ V_lk. Then, for an element x ∈ Z_V as in (3), we have

⟨x, x⟩ = Σ_{k=1}^{r} n_k x_kk² + 2 Σ_{1≤k<l≤r} (X_lk | X_lk),
so that dx = ∏_{k=1}^{r} (n_k^{1/2} dx_kk) ∏_{1≤k<l≤r} (2^{dim V_lk / 2} dX_lk), where dX_lk stands for the Lebesgue measure on V_lk normalized by (·|·). By the change of variable x = ρ(T) I_p with T ∈ H_V as in (4), we get

dx = ∏_{k=1}^{r} (n_k^{1/2} 2 t_kk^{1+q_k} dt_kk) ∏_{1≤k<l≤r} 2^{dim V_lk / 2} dT_lk,
so that γ_{Z_V}(α) equals

∏_{k=1}^{r} ∫_0^∞ e^{−n_k t_kk²} n_k^{1/2} 2 t_kk^{2 n_k α + 1 + q_k} dt_kk · ∏_{1≤k<l≤r} 2^{dim V_lk / 2} ∫_{V_lk} e^{−(T_lk | T_lk)} dT_lk.
As for (ii), we observe that ⟨x, y⟩ = ⟨x, ρ^*(T_y) I_p⟩ = ⟨ρ(T_y) x, I_p⟩ = tr(ρ(T_y) x). By the change of variable x′ = ρ(T_y) x, we have

∫_{P_V} e^{−⟨x, y⟩} (det x)^α dx = ∫_{P_V} e^{−tr x′} { (det x′) (det T_y)^{−2} }^α (det ρ(T_y))^{−1} dx′ = (det T_y)^{−2α} (det ρ(T_y))^{−1} γ_{Z_V}(α),

so that (2) follows from Lemma 1.
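Theorem 2 (i) can be spot-checked numerically in the smallest case r = 1, where Z_V = R I_p, N = 1 and q_1 = 0, so the formula reduces to γ_{Z_V}(α) = p^{−pα−1/2} Γ(pα + 1). The direct integral uses the normalized coordinate, which contributes a factor √p (a sketch, NumPy assumed):

```python
import numpy as np
from math import gamma

# r = 1, Z_V = R I_p: det(x I_p) = x^p, tr(x I_p) = p x, and the
# Lebesgue measure normalized by the trace inner product is sqrt(p) dx.
p, alpha = 3, 0.7
x = np.linspace(1e-9, 40.0, 1_000_000)
numeric = float(np.sum(np.exp(-p * x) * x ** (p * alpha)) * (x[1] - x[0])) * np.sqrt(p)
formula = p ** (-p * alpha - 0.5) * gamma(p * alpha + 1)
assert abs(numeric - formula) / formula < 1e-3
```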
Theorem 3.
For a homogeneous graph G = (V, E) and a subgroup Γ of the automorphism group of G, there exists an orthogonal matrix U ∈ O(p) such that U^⊤ Z_G^Γ U = Z_V with some V = {V_lk}_{1≤k<l≤r}.
The proof, which we omit, is based on representation theory, similarly to [7], and uses a proper ordering of the vertices. Theorem 3 together with Theorem 2 yields Theorem 1. Theorem 3 is very important from the practical point of view: knowledge of the orthogonal matrix U allows us to identify all parameters of the space Z_V (see (3)). However, the problem of finding a suitable matrix U is in general very complicated. In the next section, we consider the butterfly model of Figure 1 and present the exact forms of the matrices U for all subgroups of Aut(G).

4. Toy Example

In what follows, let G be the five-vertex graph of Figure 1. We use the cyclic representation of permutations on V = {1, …, 5}. The group Aut(G) is generated by σ_1 := (1 2), σ_2 := (4 5), and σ_3 := (1 4)(2 5). Put τ := σ_2 σ_3 = (1 5 2 4). Then σ_2 = σ_1 τ² and σ_3 = σ_1 τ³, so that Aut(G) is generated by σ_1 and τ. Moreover, since the orders of σ_1 and τ are 2 and 4, respectively, with σ_1 τ σ_1^{-1} = τ³, the group Aut(G) equals the dihedral group ⟨σ_1, τ⟩ of order 8. All the subgroups of Aut(G) are listed as:
Γ_1 := {e},  Γ_2 := ⟨σ_1⟩,  Γ_3 := ⟨σ_1 τ⟩,  Γ_4 := ⟨σ_1 τ²⟩,  Γ_5 := ⟨σ_1 τ³⟩,  Γ_6 := ⟨τ²⟩,  Γ_7 := ⟨τ⟩,  Γ_8 := ⟨σ_1, τ²⟩,  Γ_9 := ⟨σ_1 τ, τ²⟩,  Γ_10 := Aut(G).
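The count of ten subgroups can be verified by brute force (an illustration; vertices are renumbered 0–4 and the butterfly edges of Figure 1 are hard-coded):

```python
from itertools import permutations, product

# Edges of the butterfly graph on vertices 0..4 (the paper's 1..5)
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)}

def preserves(sigma):
    """sigma (a tuple: i -> sigma[i]) maps edges to edges."""
    return all(tuple(sorted((sigma[i], sigma[j]))) in edges for i, j in edges)

aut = [s for s in permutations(range(5)) if preserves(s)]
assert len(aut) == 8                       # dihedral group of order 8

def compose(p, q):                         # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

identity = tuple(range(5))
subgroups = set()
for mask in range(1 << 8):                 # every subset of Aut(G)
    subset = frozenset(aut[i] for i in range(8) if mask >> i & 1)
    if identity in subset and all(compose(s, t) in subset
                                  for s, t in product(subset, repeat=2)):
        subgroups.add(subset)
assert len(subgroups) == 10
```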
(i) When Γ = Γ_1, then x ∈ Z_G^Γ is of the form

x = [ x_11 x_21 x_31  0    0
      x_21 x_22 x_32  0    0
      x_31 x_32 x_33 x_43 x_53
       0    0   x_43 x_44 x_54
       0    0   x_53 x_54 x_55 ].
We give an orthogonal matrix U so that U^⊤ x U takes the form of the matrix realization of the previous section (see Theorem 3):

U := [ 1 0 0 0 0
       0 1 0 0 0
       0 0 0 0 1
       0 0 1 0 0
       0 0 0 1 0 ],

U^⊤ x U = [ x_11 x_21  0    0   x_31
            x_21 x_22  0    0   x_32
             0    0   x_44 x_54 x_43
             0    0   x_54 x_55 x_53
            x_31 x_32 x_43 x_53 x_33 ].
In this case, N = 11, r = 5, n_1 = … = n_5 = 1, and each V_lk (1 ≤ k < l ≤ 5) is R or {0}. Since γ_{U^⊤ Z U}(α) = γ_Z(α) in general, we see from Theorem 2 (i) that

γ_G^Γ(α) = (2π)³ Γ(α+1) Γ(α+3/2)² Γ(α+2)².
Moreover, the functions δ_G^Γ(x) and φ_G^Γ(x) are expressed, respectively, as

δ_G^Γ(x) = x_33^{-1} · det A(x) · det B(x),   φ_G^Γ(x) = x_33 · (det A(x))^{-2} (det B(x))^{-2},

where

A(x) = [ x_11 x_21 x_31
         x_21 x_22 x_32
         x_31 x_32 x_33 ],   B(x) = [ x_33 x_43 x_53
                                      x_43 x_44 x_54
                                      x_53 x_54 x_55 ].
For i = 2, …, 10, clearly P_G^{Γ_i} is a subset of P_G^{Γ_1} = P_G^Γ, and one can show that δ_G^{Γ_i} equals the restriction of the above δ_G^Γ to P_G^{Γ_i}.
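For the uncolored case Γ_1, the function δ_G^Γ admits the classical decomposable-model interpretation: ψ_Z(y)^{-1} is the clique-wise completion of y, and δ_G^Γ(y) is its determinant. A numerical sketch (NumPy assumed; the cliques {1,2,3}, {3,4,5} and separator {3} of Figure 1 are hard-coded, 0-based):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5.0 * np.eye(5)              # a point y in the dual cone
C1, C2, sep = [0, 1, 2], [2, 3, 4], [2]

# Clique-wise completion: K = [y_C1]^{-1} + [y_C2]^{-1} - [y_sep]^{-1},
# each summand embedded into a 5x5 matrix.
K = np.zeros((5, 5))
K[np.ix_(C1, C1)] += np.linalg.inv(S[np.ix_(C1, C1)])
K[np.ix_(C2, C2)] += np.linalg.inv(S[np.ix_(C2, C2)])
K[np.ix_(sep, sep)] -= np.linalg.inv(S[np.ix_(sep, sep)])
Sigma_hat = np.linalg.inv(K)

# K has the zero pattern of the graph, and Sigma_hat matches y on cliques
for i, j in [(0, 3), (0, 4), (1, 3), (1, 4)]:
    assert abs(K[i, j]) < 1e-10
assert np.allclose(Sigma_hat[np.ix_(C1, C1)], S[np.ix_(C1, C1)])

# delta(y) = det(Sigma_hat) = det(y_{123}) det(y_{345}) / y_33
delta = np.linalg.det(Sigma_hat)
formula = (np.linalg.det(S[np.ix_(C1, C1)])
           * np.linalg.det(S[np.ix_(C2, C2)]) / S[2, 2])
assert abs(delta - formula) < 1e-8 * abs(formula)
```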
(ii) When Γ = Γ_2, we describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a b c 0 0
      b a c 0 0
      c c d e f
      0 0 e g h
      0 0 f h i ],

U = [  1/√2  1/√2  0  0  0
      −1/√2  1/√2  0  0  0
       0     0     0  0  1
       0     0     1  0  0
       0     0     0  1  0 ],

U^⊤ x U = [ a−b  0    0  0   0
            0    a+b  0  0   √2 c
            0    0    g  h   e
            0    0    h  i   f
            0    √2 c e  f   d ].
We have γ_G^Γ(α) = (2π)² Γ(α+1)² Γ(α+3/2)² Γ(α+2), and

φ_G^Γ(x) = d (a−b)^{-1} · det[ a+b, √2 c ; √2 c, d ]^{-3/2} · det[ d, e, f ; e, g, h ; f, h, i ]^{-2}.
(iii) When Γ = Γ_3, we describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a d e 0 0
      d b f 0 0
      e f c f e
      0 0 f b d
      0 0 e d a ],

U = [ 1 0 0 0 0
      0 0 1 0 0
      0 0 0 0 1
      0 0 0 1 0
      0 1 0 0 0 ],

U^⊤ x U = [ a 0 d 0 e
            0 a 0 d e
            d 0 b 0 f
            0 d 0 b f
            e e f f c ].
Then γ_G^Γ(α) = (2π)^{3/2} 2^{−4α−5/2} Γ(2α+2) Γ(2α+3/2) Γ(α+1) and φ_G^Γ(x) = det[ a, d, e ; d, b, f ; e, f, c ]^{-2}.
(iv) When Γ = Γ_4, we describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a d e 0 0
      d b f 0 0
      e f c g g
      0 0 g h i
      0 0 g i h ],

U = [ 1 0  0     0    0
      0 1  0     0    0
      0 0  0     0    1
      0 0  1/√2  1/√2 0
      0 0 −1/√2  1/√2 0 ],

U^⊤ x U = [ a d 0    0    e
            d b 0    0    f
            0 0 h−i  0    0
            0 0 0    h+i  √2 g
            e f 0    √2 g c ].
Then, γ_G^Γ(α) = γ_G^{Γ_2}(α) = (2π)² Γ(α+1)² Γ(α+3/2)² Γ(α+2), while φ_G^Γ(x) is equal to

c (h−i)^{-1} · det[ h+i, √2 g ; √2 g, c ]^{-3/2} · det[ a, d, e ; d, b, f ; e, f, c ]^{-2}.
(v) When Γ = Γ_5, we describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a d e 0 0
      d b f 0 0
      e f c e f
      0 0 e a d
      0 0 f d b ],

U = [ 1 0 0 0 0
      0 0 1 0 0
      0 0 0 0 1
      0 1 0 0 0
      0 0 0 1 0 ],

U^⊤ x U = [ a 0 d 0 e
            0 a 0 d e
            d 0 b 0 f
            0 d 0 b f
            e e f f c ].
Then, γ_G^Γ(α) = γ_G^{Γ_3}(α) = (2π)^{3/2} 2^{−4α−5/2} Γ(2α+2) Γ(2α+3/2) Γ(α+1) and φ_G^Γ(x) = det[ a, d, e ; d, b, f ; e, f, c ]^{-2}.
(vi) We have Z_G^{Γ_6} = Z_G^{Γ_8}. When Γ = Γ_6 or Γ_8, we describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a b c 0 0
      b a c 0 0
      c c d e e
      0 0 e f g
      0 0 e g f ],

U = [  1/√2  1/√2  0     0    0
      −1/√2  1/√2  0     0    0
       0     0     0     0    1
       0     0     1/√2  1/√2 0
       0     0    −1/√2  1/√2 0 ],

U^⊤ x U = [ a−b 0    0   0    0
            0   a+b  0   0    √2 c
            0   0    f−g 0    0
            0   0    0   f+g  √2 e
            0   √2 c 0   √2 e d ].
Then, γ_G^Γ(α) = 2π Γ(α+1)³ Γ(α+3/2)², and

φ_G^Γ(x) = d (a−b)^{-1} (f−g)^{-1} · det[ a+b, √2 c ; √2 c, d ]^{-3/2} · det[ f+g, √2 e ; √2 e, d ]^{-3/2}.
(vii) For Γ = Γ_7, Γ_9 or Γ_10, the linear space Z_G^Γ is the same. We describe, respectively, x ∈ Z_G^Γ, an orthogonal matrix U, and U^⊤ x U as

x = [ a c d 0 0
      c a d 0 0
      d d b d d
      0 0 d a c
      0 0 d c a ],

U = [  1/√2  0     1/√2 0    0
      −1/√2  0     1/√2 0    0
       0     0     0    0    1
       0     1/√2  0    1/√2 0
       0    −1/√2  0    1/√2 0 ],

U^⊤ x U = [ a−c 0   0    0    0
            0   a−c 0    0    0
            0   0   a+c  0    √2 d
            0   0   0    a+c  √2 d
            0   0   √2 d √2 d b ].
Then, γ_G^Γ(α) = (2π)^{1/2} 2^{−4α−3/2} Γ(2α+1) Γ(2α+3/2) Γ(α+1) and φ_G^Γ(x) = (a−c)^{-1} · det[ a+c, √2 d ; √2 d, b ]^{-3/2}.
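The claims of case (vii) are easy to verify numerically for arbitrary parameter values (a sketch, NumPy assumed; the sign pattern of U is one admissible choice):

```python
import numpy as np

# Case (vii): check that U is orthogonal and that U^T x U acquires the
# stated block structure, for arbitrary (hypothetical) parameter values.
a, b, c, d = 3.0, 4.0, 0.7, 0.5
x = np.array([
    [a, c, d, 0, 0],
    [c, a, d, 0, 0],
    [d, d, b, d, d],
    [0, 0, d, a, c],
    [0, 0, d, c, a],
])
s = 1 / np.sqrt(2)
U = np.array([
    [ s, 0, s, 0, 0],
    [-s, 0, s, 0, 0],
    [ 0, 0, 0, 0, 1],
    [ 0, s, 0, s, 0],
    [ 0,-s, 0, s, 0],
])
assert np.allclose(U @ U.T, np.eye(5))     # U is orthogonal

expected = np.array([
    [a - c, 0, 0, 0, 0],
    [0, a - c, 0, 0, 0],
    [0, 0, a + c, 0, np.sqrt(2) * d],
    [0, 0, 0, a + c, np.sqrt(2) * d],
    [0, 0, np.sqrt(2) * d, np.sqrt(2) * d, b],
])
assert np.allclose(U.T @ x @ U, expected)  # the claimed block form
```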

5. Numerical Example

We carry out Bayesian model selection in the spirit of Graczyk et al. [7]. For a fixed graph G, we suppose that a group Γ is distributed uniformly over all possible subgroups of Aut(G). Given a sample Z_1, …, Z_n from N(0, Σ) with Σ^{-1} ∈ P_G^Γ, where K = Σ^{-1} follows the Diaconis–Ylvisaker conjugate prior (1) with hyperparameters (δ, D), the posterior probability P(Γ | Z_1, …, Z_n) is proportional to I_G^Γ(δ + n, D + Σ_{i=1}^n Z_i Z_i^⊤) / I_G^Γ(δ, D). In order to demonstrate our results, we work on the data set of the examination marks of n = 88 students in p = 5 different mathematical subjects. As reported in [5,6], the data demonstrate an excellent fit to the graphical Gaussian model displayed in Figure 1. Since the groups Γ_6 and Γ_8 (similarly Γ_7, Γ_9 and Γ_10) impose the same symmetries on Z_G^Γ, we consider Bayesian model selection within 7 different models: Γ_i for i = 1, 2, …, 7. The five mathematical subjects are enumerated as in the graph presented in Figure 1. As our method applies only to a centered normal sample, we center the marks as usual and consider a correction of the degrees of freedom, n* = 88 − 1 = 87. Then,
Σ_{i=1}^n Z_i Z_i^⊤ = [ 26601.82 11068.36  8837.41  9245.73 10214.23
                        11068.36 15037.27  7408.68  8236.55  8614.05
                         8837.41  7408.68  9821.08  9753.86 10602.74
                         9245.73  8236.55  9753.86 19173.09 13531.59
                        10214.23  8614.05 10602.74 13531.59 25904.72 ].
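The centering and degrees-of-freedom bookkeeping can be sketched as follows (with synthetic stand-in marks, since the Mardia et al. data are not reproduced here; NumPy assumed):

```python
import numpy as np

# Hypothetical stand-in for the 88 x 5 matrix of examination marks
rng = np.random.default_rng(1)
marks = rng.normal(50.0, 10.0, size=(88, 5))

Z = marks - marks.mean(axis=0)     # center each subject (column)
S = Z.T @ Z                        # the matrix sum of Z_i Z_i^T above
n_star = marks.shape[0] - 1        # corrected degrees of freedom: 87

assert np.allclose(Z.mean(axis=0), 0.0)
assert S.shape == (5, 5) and np.allclose(S, S.T)
assert n_star == 87
```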
We take the usual hyperparameter δ = 3 and D = d · I_5 for d ∈ {1, 10², 10⁴}. Below, we present the subgroup with the highest posterior probability p for each choice of d:

d = 1: Γ_7 (p = 1);   d = 100: Γ_3 (p = 0.8);   d = 10 000: Γ_1 (p = 0.75).
Depending on the value of the hyperparameter D, the model with the highest posterior probability is
  • Γ_7, which corresponds to full symmetry, as Z_G^{Γ_7} = Z_G^{Aut(G)},
  • Γ_3 = ⟨(1 5)(2 4)⟩, which corresponds to invariance of the model under the interchange (Mechanics, Vectors) ↔ (Statistics, Analysis),
  • Γ_1 = {e}, which corresponds to no additional symmetry.
The hyperparameter δ has much less impact on model selection.
We note that the same example was considered in [1], where colored graphical models were introduced for the first time. The authors of [1], using the BIC criterion, point out that the model Γ_3 (see [1] (Figure 8)) represents an excellent fit.
Fitted concentrations × 10³ for the examination marks under the model Γ_3 are presented in Table 1. (The corresponding table in [1] (Table 6) contains erroneous entries.)

Author Contributions

Conceptualization, H.I.; methodology, H.I.; numerical experiments, B.K.; validation, P.G., B.K. and H.I.; formal analysis, P.G., B.K. and H.I.; investigation, P.G., B.K. and H.I.; writing—original draft preparation, H.I.; writing—review and editing, P.G., B.K. and H.I. All authors have read and agreed to the published version of the manuscript.

Funding

Research of P. Graczyk was supported by JST PRESTO. Research of H. Ishi was supported by JST PRESTO, KAKENHI 20K03657, and Osaka Central Advanced Mathematical Institute (MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849). Research of B. Kołodziejek was funded by POB Cybersecurity and Data Science of Warsaw University of Technology within the Excellence Initiative: Research University (IDUB) programme.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data we used were published in the book [5] and are publicly available.

Acknowledgments

We thank the Editors of the Phys. Sci. Forum for their help in improving manuscript tables.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Højsgaard, S.; Lauritzen, S.L. Graphical Gaussian models with edge and vertex symmetries. J. R. Stat. Soc. Ser. B Stat. Methodol. 2008, 70, 1005–1027. [Google Scholar] [CrossRef]
  2. Letac, G.; Massam, H. Wishart distributions for decomposable graphs. Ann. Statist. 2007, 35, 1278–1323. [Google Scholar] [CrossRef]
  3. Graczyk, P.; Ishi, H.; Kołodziejek, B. Wishart laws and variance function on homogeneous cones. Probab. Math. Statist. 2019, 39, 337–360. [Google Scholar] [CrossRef]
  4. Mardia, K.V.; Kent, J.T.; Bibby, J.M. Multivariate Analysis; Probability and Mathematical Statistics: A Series of Monographs and Textbooks; Academic Press: New York, NY, USA; Harcourt Brace Jovanovich: Boston, MA, USA, 1979; p. xv+521. [Google Scholar]
  5. Whittaker, J. Graphical Models in Applied Multivariate Statistics; Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics; John Wiley & Sons, Ltd.: Chichester, UK, 1990; p. xiv+448. [Google Scholar]
  6. Edwards, D. Introduction to Graphical Modelling, 2nd ed.; Springer Texts in Statistics; Springer: New York, NY, USA, 2000; p. xvi+333. [Google Scholar]
  7. Graczyk, P.; Ishi, H.; Kołodziejek, B.; Massam, H. Model selection in the space of Gaussian models invariant by symmetry. Ann. Statist. 2022, 50, 1747–1774. [Google Scholar] [CrossRef]
  8. Letac, G. Decomposable Graphs. In Modern Methods of Multivariate Statistics; Hermann: Paris, France, 2014; Volume 82, pp. 155–196. [Google Scholar]
  9. Roverato, A. Cholesky decomposition of a hyper inverse Wishart matrix. Biometrika 2000, 87, 99–112. [Google Scholar] [CrossRef]
  10. Ishi, H. Homogeneous cones and their applications to statistics. In Modern Methods of Multivariate Statistics; Hermann: Paris, France, 2014; Volume 82, pp. 135–154. [Google Scholar]
  11. Ishi, H. Matrix realization of a homogeneous cone. In Geometric Science of Information; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9389, pp. 248–256. [Google Scholar]
Figure 1. Conditional independence structure of examination marks.
Table 1. Fitted concentration matrix × 10 3 .
Subject      Mechanics  Vectors  Algebra  Analysis  Statistics
Mechanics     5.85      −2.23    −3.72     0         0
Vectors      −2.23      10.15    −5.88     0         0
Algebra      −3.72      −5.88    26.95    −5.88     −3.72
Analysis      0          0       −5.88    10.15     −2.23
Statistics    0          0       −3.72    −2.23      5.85

Share and Cite

MDPI and ACS Style

Graczyk, P.; Ishi, H.; Kołodziejek, B. Graphical Gaussian Models Associated to a Homogeneous Graph with Permutation Symmetries. Phys. Sci. Forum 2022, 5, 20. https://doi.org/10.3390/psf2022005020
