# Undirected Structural Markov Property for Bayesian Model Determination


## Abstract


## 1. Introduction

## 2. Preliminaries

#### 2.1. Graphical Terminology and Notation

**Definition 1.** Let $G=(V,E)$ be an undirected graph. G is said to be reducible if it contains a clique separator; otherwise, G is said to be prime. For example, G is prime if it is a clique, whereas G is reducible if it is disconnected. An induced subgraph ${G}_{U}$ is a maximal prime subgraph of G if it satisfies

- (i) ${G}_{U}$ is prime, and
- (ii) for all $W\subseteq V\left(G\right)$ with $U\subset W$, ${G}_{W}$ is reducible.
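Definition 1 can be made concrete computationally. The following is a minimal sketch (the names `is_clique`, `is_reducible`, `is_prime` are ours, not from the paper) that decides by brute force over vertex subsets whether a small graph contains a clique separator, i.e., whether it is reducible or prime:

```python
from itertools import combinations

def is_clique(adj, S):
    # S is a clique iff every pair of vertices in S is adjacent.
    return all(u in adj[w] for u, w in combinations(S, 2))

def is_connected(adj, verts):
    # Depth-first search over the subgraph induced by `verts`.
    verts = set(verts)
    if not verts:
        return True
    start = next(iter(verts))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v] & verts:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def is_reducible(adj):
    # G is reducible iff some clique S separates it: S is complete
    # and the graph induced by V \ S is disconnected.
    V = set(adj)
    for k in range(len(V) - 1):
        for S in combinations(V, k):
            if is_clique(adj, S) and not is_connected(adj, V - set(S)):
                return True
    return False

def is_prime(adj):
    return not is_reducible(adj)
```

For instance, a chordless 4-cycle is prime (no clique separates it), while a path on three vertices is reducible: its middle vertex is a clique separator. A disconnected graph is reducible via the empty clique.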

**Definition 2.** A proper decomposition $(A,B,S)$ of an undirected graph G is said to form a prime decomposition if ${G}_{A\cup S}$ and ${G}_{B\cup S}$ are prime, or if they can be recursively decomposed into pairwise different maximal prime subgraphs of G.

#### 2.2. Independence Model and Collapsibility

An independence model $\mathcal{I}\left(G\right)$ is a collection of conditional independence triples $\langle A,B|C\rangle$ over subsets of $V\left(G\right)$ satisfying the following semi-graphoid properties:

- for all $A,B\subseteq V\left(G\right)$: $\langle A,B|A\rangle \in \mathcal{I}(G)$, $\langle A,B|B\rangle \in \mathcal{I}(G)$ and $\langle A,B|A\cap B\rangle \in \mathcal{I}(G)$;
- if $\langle A,B|C\rangle \in \mathcal{I}(G)$, then $\langle B,A|C\rangle \in \mathcal{I}(G)$;
- if $\langle A,B|C\rangle \in \mathcal{I}(G)$, and $U\subseteq A$, then $\langle U,B|C\rangle \in \mathcal{I}(G)$;
- if $\langle A,B|C\rangle \in \mathcal{I}(G)$, and $U\subseteq A$, then $\langle A,B|C\cup U\rangle \in \mathcal{I}(G)$;
- if $\langle A,B|C\rangle \in \mathcal{I}(G)$, and $\langle A,W|B\cup C\rangle \in \mathcal{I}(G)$, then $\langle A,B\cup W|C\rangle \in \mathcal{I}(G)$.
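To illustrate, the separation model of an undirected graph — where $\langle A,B|C\rangle$ holds when every path from A to B intersects C — satisfies the properties above. A minimal brute-force sketch (the function name `separated` is ours):

```python
def separated(adj, A, B, C):
    # <A,B|C>: no vertex of B \ C is reachable from A \ C in G - C.
    A, B, C = set(A), set(B), set(C)
    frontier = list(A - C)
    seen = set(frontier)
    while frontier:
        v = frontier.pop()
        if v in B:
            return False
        for w in adj[v]:
            if w not in seen and w not in C:
                seen.add(w)
                frontier.append(w)
    return True

# A chain 1 - 2 - 3 - 4.
chain = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

assert separated(chain, {1}, {3, 4}, {2})   # <1,{3,4}|2>
assert separated(chain, {1}, {4}, {2})      # decomposition: <1,4|2>
assert separated(chain, {1}, {3}, {2, 4})   # weak union: enlarge C
assert not separated(chain, {1}, {3}, {4})  # 4 does not block 1 - 2 - 3
```

Contraction can be checked the same way: $\langle 1,3|2\rangle$ and $\langle 1,4|\{2,3\}\rangle$ hold, and so does $\langle 1,\{3,4\}|2\rangle$.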

**Definition 3**(CI-collapsibility).

**Definition 4**(M-collapsibility).

**Theorem 1.**

1. G is graphically collapsible onto D;
2. $\mathcal{I}\left(G\right)$ is CI-collapsible onto D;
3. $\mathcal{P}\left(G\right)$ is M-collapsible onto D.

**Proof.**

**Proposition 1.**

1. G is graphically collapsible onto ${H}_{j}$;
2. $\mathcal{I}\left({G}_{{H}_{j}}\right)=\mathcal{I}{\left(G\right)}_{{H}_{j}}$;
3. $\mathcal{P}\left({G}_{{H}_{j}}\right)=\mathcal{P}{\left(G\right)}_{{H}_{j}}$.

**Proof.**

## 3. Structural Markov Graph Laws for Full Bayesian Inference

#### 3.1. Basic Concepts and Properties

**Definition 5.**

**Proposition 2.**

**Proof.**

**Proposition 3.**

1. ${G}_{A\cup S}\otimes {G}_{B\cup S}^{\prime}\in \mathfrak{U}(A,B,S)$ and ${G}_{A\cup S}^{\prime}\otimes {G}_{B\cup S}\in \mathfrak{U}(A,B,S)$;
2. if $\mathfrak{G}$ is structural Markov on $\mathfrak{U}$, then $$\pi \left(G\right)\pi \left({G}^{\prime}\right)=\pi ({G}_{A\cup S}\otimes {G}_{B\cup S}^{\prime})\pi ({G}_{A\cup S}^{\prime}\otimes {G}_{B\cup S}).$$
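The identity in Proposition 3 can be checked numerically for a toy law. Below we take an edge-weight product law $\pi(G)\propto \prod_{e\in E(G)} w_e$ (one clique-exponential choice that is structurally Markov) on graphs in $\mathfrak{U}(A,B,S)$ and verify $\pi(G)\pi(G')=\pi(G_{A\cup S}\otimes G'_{B\cup S})\,\pi(G'_{A\cup S}\otimes G_{B\cup S})$. All names are ours; normalizing constants cancel on both sides, so unnormalized weights suffice.

```python
def weight(edges, w):
    # Unnormalized edge-product law: pi(G) proportional to the product
    # of the weights of the edges present in G.
    p = 1.0
    for e in edges:
        p *= w[e]
    return p

def hybrid(G, Gp, A, S, B):
    # G_{A∪S} ⊗ G'_{B∪S}: edges of G inside A∪S plus edges of G' inside B∪S.
    AS, BS = A | S, B | S
    return frozenset({e for e in G if set(e) <= AS}
                     | {e for e in Gp if set(e) <= BS})

# Decomposition (A, B, S) on V = {1,2,3,4}: S = {2,3} is complete in both
# graphs and there is no edge between A = {1} and B = {4}.
A, S, B = {1}, {2, 3}, {4}
w = {(1, 2): 0.7, (1, 3): 1.3, (2, 3): 0.9, (2, 4): 1.1, (3, 4): 0.5}

G  = frozenset({(1, 2), (2, 3), (2, 4), (3, 4)})   # in U(A,B,S)
Gp = frozenset({(1, 2), (1, 3), (2, 3), (2, 4)})   # in U(A,B,S)

lhs = weight(G, w) * weight(Gp, w)
H1 = hybrid(G, Gp, A, S, B)
H2 = hybrid(Gp, G, A, S, B)
rhs = weight(H1, w) * weight(H2, w)
assert abs(lhs - rhs) < 1e-12
```

The check works because, for graphs in $\mathfrak{U}(A,B,S)$, the edge set splits into an $A\cup S$ part and a $B\cup S$ part that share only the (complete) edges within S, so swapping the two halves merely reshuffles the same product of edge weights.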

**Proof.**

**Proposition 4.**

**Proof.**

#### 3.2. Joint Distribution Law

**Definition 6.** Suppose that G is a fixed graph in $\mathfrak{U}(A,B,S)$ and $\theta \in \mathcal{P}\left(G\right)$. Let $\mathcal{L}\left(\theta \right)$ be a law of θ. We say that $\mathcal{L}\left(\theta \right)$ is weak hyper Markov over G if $${\theta}_{A\cup S}\coprod {\theta}_{B\cup S}|{\theta}_{S}\phantom{\rule{3.33333pt}{0ex}}[\mathcal{L}].$$

**Proposition 5.**

1. if $\mathcal{L}\left(\theta \right)$ is weak hyper Markov with respect to G, then $$({X}_{A\cup S},{\theta}_{A\cup S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S})|({X}_{S},{\theta}_{S})\phantom{\rule{3.33333pt}{0ex}}[P,\mathcal{L}];$$
2. if $\mathcal{L}\left(\theta \right)$ is strong hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S|S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S})|{X}_{S}\phantom{\rule{3.33333pt}{0ex}}[P,\mathcal{L}].$$

**Proof.**

**Definition 7**(Hyper compatibility).

**Proposition 6.**

**Proof.**

**Theorem 2.**

1. if $\mathcal{L}\left(\theta \right)$ is weak hyper Markov, then $$({\theta}_{A\cup S},{G}_{A\cup S})\coprod ({\theta}_{B\cup S},{G}_{B\cup S})|({\theta}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathfrak{L},\mathfrak{G}];$$
2. if $\mathcal{L}\left(\theta \right)$ is strong hyper Markov, then $$({\theta}_{A\cup S},{G}_{A\cup S})\coprod ({\theta}_{B\cup S|S},{G}_{B\cup S})|\{G\in \mathfrak{U}(A,B,S)\}\phantom{\rule{3.33333pt}{0ex}}[\mathfrak{L},\mathfrak{G}].$$

**Proof.**

**Proposition 7.**

1. if $\mathcal{L}\left(\theta \right)$ is weak hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S})\coprod {G}_{B\cup S}|({X}_{S},{\theta}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}];$$
2. if $\mathcal{L}\left(\theta \right)$ is strong hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S|S})\coprod {G}_{B\cup S}|({X}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}].$$

**Proof.**

**Theorem 3.**

1. if $\mathcal{L}\left(\theta \right)$ is weak hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S},{G}_{A\cup S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S},{G}_{B\cup S})|({X}_{S},{\theta}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}];$$
2. if $\mathcal{L}\left(\theta \right)$ is strong hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S},{G}_{A\cup S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S|S},{G}_{B\cup S})|({X}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}].$$

**Proof.**

**Corollary 1.**

1. if $\mathcal{L}\left(\theta \right)$ is weak hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S})|({X}_{S},{\theta}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}];$$
2. if $\mathcal{L}\left(\theta \right)$ is strong hyper Markov, then $$({X}_{A\cup S},{\theta}_{A\cup S})\coprod ({X}_{B\cup S},{\theta}_{B\cup S|S})|({X}_{S},\{G\in \mathfrak{U}(A,B,S)\})\phantom{\rule{3.33333pt}{0ex}}[\mathsf{\Theta},\mathfrak{L},\mathfrak{G}].$$

**Proof.**

#### 3.3. Posterior Updating for Graph Law

**Proposition 8.**

**Proof.**

**Proposition 9.**

1. The posterior graph law obtained by conditioning on data ${X}^{\left(n\right)}={x}^{\left(n\right)}$ is structural Markov with respect to $\mathfrak{U}$;
2. The marginal data distribution of ${X}^{\left(n\right)}$ is Markov with respect to $\mathfrak{U}$;
3. The posterior law of θ conditional on ${X}^{\left(n\right)}={x}^{\left(n\right)}$ is Markov with respect to $\mathfrak{U}$.

**Proof.**

## 4. Two Special Cases

#### 4.1. Graphical Gaussian Models and the Inverse Wishart Law

#### 4.2. Multinomial Models and the Dirichlet Law

#### 4.3. An Example on Simulated Data

#### 4.3.1. Dataset Description

- **inc:** The income of the respondents.
- **deg:** Respondents’ highest educational degree.
- **chi:** The number of children of the respondents.
- **pin:** The income of the respondents’ parents.
- **pde:** The highest educational degree of respondents’ parents.
- **pch:** The number of children of respondents’ parents.
- **age:** Respondents’ age in years.

#### 4.3.2. Experiments and Results

- $X\sim {N}_{p}(0,\mathsf{\Sigma})$;
- $\mathsf{\Sigma}|X,\mathsf{\Phi}\sim IW(n+\delta -1,\mathbf{S}+\mathsf{\Phi})$.
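The conjugate update above can be sketched in a few lines: given n i.i.d. rows $X\sim N_p(0,\Sigma)$, the posterior hyperparameters are read off the displayed result $\Sigma\,|\,X,\Phi \sim IW(n+\delta-1,\,\mathbf{S}+\Phi)$ with $\mathbf{S}=X^{\top}X$. A minimal numpy sketch (the function name `iw_posterior` is ours; we only form the hyperparameters, not samples, since inverse Wishart conventions vary across references):

```python
import numpy as np

def iw_posterior(delta, Phi, X):
    # Posterior hyperparameters for Sigma | X, Phi ~ IW(n + delta - 1, S + Phi),
    # where S = X^T X and X has n rows (observations) and p columns (variables).
    # Degrees-of-freedom convention follows the paper's display verbatim.
    n, p = X.shape
    S = X.T @ X
    return n + delta - 1, S + Phi

rng = np.random.default_rng(0)
p, n, delta = 3, 50, 4
Phi = np.eye(p)                       # prior scale matrix (illustrative choice)
X = rng.standard_normal((n, p))       # synthetic data
df, scale = iw_posterior(delta, Phi, X)
```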

## 5. Computations

#### 5.1. Ratio for Graph Law

**Proposition 10.**

1. if u and v are contained in exactly one maximal prime subgraph ${U}_{j}$ of G, then $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\eta}_{{U}_{j}^{\prime}}}{{\eta}_{{U}_{j}}},\quad j=1,2,\dots ,k;$$
2. if u and v are both contained in two neighboring maximal prime subgraphs ${U}_{j}$ and ${U}_{j+1}$ of G, then $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\eta}_{{W}^{\prime}}}{{\eta}_{W}},$$

**Proof.**

**Proposition 11.**

1. if u and v are contained in exactly one incomplete prime subgraph ${U}_{h}$, then $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\eta}_{{U}_{h}^{\prime}}}{{\eta}_{{U}_{h}}};$$
2. if ${U}_{i}\ni u$ and ${U}_{j}\ni v$ are two distinct maximal prime subgraphs of G, then there exist prime components ${U}_{i}={U}_{{h}_{1}},{U}_{{h}_{2}},\dots ,{U}_{{h}_{m}}={U}_{j}$ such that $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\eta}_{{T}^{\prime}}}{{\eta}_{T}},$$

**Proof.**

**Lemma 1.** Let G be a decomposable graph in $\mathfrak{U}$ with a perfect sequence of cliques $({U}_{1},{U}_{2},\dots ,{U}_{k})$. Suppose that ${G}^{\prime}$ is decomposable and is obtained from G by removing or adding a single edge $(u,v)$. Then,

1. If ${G}^{\prime}$ is obtained from G by removing the edge $(u,v)$, then u and v must belong to a clique ${U}_{j}$ of G;
2. If ${G}^{\prime}$ is obtained from G by adding the edge $(u,v)$, then there exist two different cliques ${U}_{i}\ni u$ and ${U}_{j}\ni v$ such that $S={U}_{i}\cap {U}_{j}$ is complete and separates ${U}_{i}$ and ${U}_{j}$.

**Corollary 2.**

1. If ${G}^{\prime}$ is obtained from G by removing the edge $(u,v)$ within ${U}_{j}$, then $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\eta}_{{U}_{u}}{\eta}_{{U}_{v}}}{{\eta}_{{U}_{0}}{\eta}_{U}},$$
2. If ${G}^{\prime}$ is obtained from G by adding the edge $(u,v)$ with $u\in {U}_{i}$ and $v\in {U}_{j}$, then the ratio $\mathsf{\Lambda}({G}^{\prime}:G)$ is $$\mathsf{\Lambda}({G}^{\prime}:G)=\frac{{\zeta}_{S}{\zeta}_{{S}_{0}}}{{\zeta}_{{S}_{u}}{\zeta}_{{S}_{v}}},$$

**Proof.**

#### 5.2. Sampling Decomposable Graphs from Structural Markov Graph Laws

**Algorithm 1** A Metropolis–Hastings algorithm for sampling decomposable graphs from a structural Markov graph law.

**Input:** An ER random graph $G\in \mathfrak{U}$.
**Output:** A set of decomposable graphs from $\mathfrak{U}$.

Set ${G}^{\left(0\right)}=G$. For $t=0,1,2,\dots$ do:

- if $(u,v)\in E\left({G}^{\left(t\right)}\right)$ and ${G}^{-(u,v)}\in {\mathfrak{U}}^{\ast}$, set ${G}^{(t+1)}={G}^{-(u,v)}$ with probability $\min\left(\frac{{\eta}_{{U}_{u}}{\eta}_{{U}_{v}}}{{\eta}_{{U}_{0}}{\eta}_{U}},1\right)$;
- else if $(u,v)\notin E\left({G}^{\left(t\right)}\right)$ and ${G}^{+(u,v)}\in {\mathfrak{U}}^{\ast}$, set ${G}^{(t+1)}={G}^{+(u,v)}$ with probability $\min\left(\frac{{\zeta}_{S}{\zeta}_{{S}_{0}}}{{\zeta}_{{S}_{u}}{\zeta}_{{S}_{v}}},1\right)$;
- otherwise, set ${G}^{(t+1)}={G}^{\left(t\right)}$.

Return the set of sampled decomposable graphs.
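Algorithm 1 can be prototyped directly. The sketch below (all names are ours) runs the single-edge Metropolis–Hastings walk over decomposable graphs, using a maximum-cardinality-search chordality test for the decomposability check and, for simplicity, starting from the empty graph with a uniform graph law, so every decomposability-preserving move is accepted (the clique-weight ratios reduce to 1 when all η are equal); supplying a `ratio` function recovers the general acceptance step.

```python
import random
from itertools import combinations

def is_decomposable(adj):
    # Chordality test: build a maximum cardinality search (MCS) order,
    # then verify that its reverse is a perfect elimination ordering,
    # i.e., each vertex's earlier-ordered neighbors form a clique.
    V = set(adj)
    score = {v: 0 for v in V}
    order, numbered = [], set()
    while len(numbered) < len(V):
        v = max(V - numbered, key=lambda u: score[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v] - numbered:
            score[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = {w for w in adj[v] if pos[w] < pos[v]}
        if any(b not in adj[a] for a, b in combinations(earlier, 2)):
            return False
    return True

def mh_decomposable(V, steps, ratio=lambda G_new, G_old: 1.0, seed=0):
    # Single-edge Metropolis-Hastings walk over decomposable graphs.
    rng = random.Random(seed)
    adj = {v: set() for v in V}          # empty graph: trivially decomposable
    pairs = list(combinations(sorted(V), 2))
    samples = []
    for _ in range(steps):
        u, v = rng.choice(pairs)
        # Propose flipping the edge (u, v).
        prop = {x: set(nb) for x, nb in adj.items()}
        if v in adj[u]:
            prop[u].discard(v); prop[v].discard(u)
        else:
            prop[u].add(v); prop[v].add(u)
        # Accept only decomposability-preserving moves, with prob min(ratio, 1).
        if is_decomposable(prop) and rng.random() < min(1.0, ratio(prop, adj)):
            adj = prop
        samples.append({x: frozenset(nb) for x, nb in adj.items()})
    return samples

samples = mh_decomposable({1, 2, 3, 4}, 200)
```

Every graph visited by the chain is decomposable by construction; with a non-uniform law, `ratio` would return the appropriate $\mathsf{\Lambda}({G}^{\prime}:G)$ from the single-edge perturbation results above.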

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Appendix A. Proofs of Some Main Theorems and Propositions

**Proof of Theorem 1.**

**Proof of Proposition 3.**

**Proof of Proposition 4.**

**Proof of Proposition 5.**

**Proof of Theorem 2.**

**Proof of Proposition 7.**

**Proof of Theorem 3.**

**Proof of Proposition 10.**

**Proof of Proposition 11.**


**Figure 3.** A representation of the structural Markov property for non-decomposable undirected graphs: $A\cap B$ is complete and separates A from B.

**Figure 5.** The left panel shows the estimated posterior probabilities of the graph sizes; the right panel shows the estimated posterior probabilities of all visited graphs.


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kang, X.; Hu, Y.; Sun, Y.
Undirected Structural Markov Property for Bayesian Model Determination. *Mathematics* **2023**, *11*, 1590.
https://doi.org/10.3390/math11071590
