Article

Consistent Markov Edge Processes and Random Graphs

by
Donatas Surgailis
Faculty of Mathematics and Informatics, Vilnius University, Naugarduko 24, 03225 Vilnius, Lithuania
Mathematics 2025, 13(21), 3368; https://doi.org/10.3390/math13213368
Submission received: 20 September 2025 / Revised: 15 October 2025 / Accepted: 20 October 2025 / Published: 22 October 2025
(This article belongs to the Special Issue Modeling and Data Analysis of Complex Networks)

Abstract

We discuss Markov edge processes {Y_e; e ∈ E} defined on the edges of a directed acyclic graph (V, E) with the consistency property P_E(Y_e; e ∈ E′) = P_{E′}(Y_e; e ∈ E′) for a large class of subgraphs (V′, E′) of (V, E) obtained through a mesh dismantling algorithm. The probability distribution P_E of such an edge process is a discrete version of consistent polygonal Markov graphs. The class of Markov edge processes is related to the class of Bayesian networks and may be of interest to causal inference and decision theory. On regular ν-dimensional lattices, consistent Markov edge processes have properties similar to those of Pickard random fields on Z², representing a far-reaching extension of the latter class. A particular case of a binary consistent edge process on Z³ was disclosed by Arak in a private communication. We prove that the symmetric binary Pickard model generates the Arak model on Z² as a contour model.

1. Introduction

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph [Wikipedia]. Bayesian networks are a fundamental concept in learning and artificial intelligence, see [1,2,3]. Formally, a Bayesian network is a random field X_V = {X_v; v ∈ V} indexed by the sites v ∈ V of a directed acyclic graph (DAG) G = (V, E) whose probability distribution is written as a product of conditional probabilities of X_v given its parent variables X_u, u ∈ pa(v), where pa(v) ⊂ V is the set of parents of v.
A Bayesian network is a special case of Markov random field on a (generally undirected) graph G = ( V , E ) . Markov random fields include nearest-neighbor Gibbs random fields and play an important role in many applied sciences including statistical physics and image analysis [4,5,6,7].
As noted in [8,9,10] and elsewhere, manipulating the marginals (e.g., computing the mean or the covariance function) of a Gibbs random field is generally very hard when V is large. A class of Markov random fields on Z 2 which avoids this difficulty was proposed by Pickard [10,11] (the main idea of their construction belongs to Verhagen [12], see [11] (p. 718)). Pickard and related unilateral random fields have found useful applications in image analysis, coding, information theory, crystallography and other scientific areas, not least because they allow for efficient simulation procedures [8,9,13,14,15,16].
A Pickard model X_V (rigorously defined in Section 2) is a family of Bayesian networks on rectangular graphs G = (V, E), V ⊂ Z², enjoying the important (Kolmogorov) consistency property: for any rectangles V′ ⊂ V we have that
P_V|_{V′} = P_{V′},   (1)
where P_V is the distribution of X_V and P_V|_{V′} is the restriction (marginal) of X_V on V′ ⊂ V. Property (1) implies that the Pickard model extends to a stationary random field on Z² whose marginals coincide with P_V and form Markov chains on each horizontal or vertical line.
Arak et al. [17,18] introduced a class of polygonal Markov graphs and random fields X_T = {X_t; t ∈ T} indexed by a continuous argument t ∈ R² satisfying a similar consistency property: for any bounded convex domains T′ ⊂ T, the restriction of X_T to T′ coincides in distribution with X_{T′}. The above models allow a Gibbs representation but are constructed via the equilibrium evolution of a one-dimensional particle system with random piecewise constant Markov velocities, with birth, death and branching. Particles move independently and interact only at collisions. The similarity between polygonal and Pickard models was noted in [19]. Several papers [20,21,22] used polygonal graphs in landscape modeling and random tessellation problems.
The present paper discusses an extension of polygonal graphs and the Pickard model, satisfying a consistency property similar to (1) but not directly related to lattices or rectangles. The original idea of the construction (communicated to the author by Taivo Arak in 1992) referred to a particle evolution on the three-dimensional lattice Z³. In this paper, the above Arak model is extended to any dimension and discussed in Section 4 in detail. It is a special case of a Markov edge process Y_E = {Y_e; e ∈ E} with probability distribution P_E introduced in Section 3 and defined on the edges (arcs) e ∈ E of a directed acyclic graph (DAG) G = (V, E) as the product over v ∈ V of the conditional clique probabilities π_v(y_{E_out(v)} | y_{E_in(v)}), somewhat similarly to a Bayesian network, although the conditional probabilities refer to the collection y_{E_out(v)} = {y_e; e ∈ E_out(v)} of 'outgoing' clique variables and not to a single 'child' x_v as in a Bayesian network. Markov random fields indexed by edges discussed in the literature [5,6,7] are usually defined on undirected graphs through a Gibbs representation and present the same difficulty of computing marginal distributions as Gibbsian site models. On the other hand, 'directed' Markov edge processes might be useful in causal analysis, where the edges of the DAG G = (V, E) represent 'decisions' and Y_e can be interpreted as the (random) 'cost' of 'decision' e ∈ E.
The main result of this work is Theorem 2, providing sufficient conditions for consistency
P_E|_{E′} = P_{E′},   G′ ⪯ G,   (2)
of a family {P_{E′}; G′ = (V′, E′) ∈ S} of Markov edge processes defined on DAGs G′, with the partial order G′ = (V′, E′) ⪯ G = (V, E) given by V′ ⊂ V, E′ ⊂ E. The sufficient conditions for (2) are expressed in terms of the clique distributions π_v, v ∈ V, and essentially reduce to the marginal independence of the incoming and outgoing clique variables (i.e., the collections y_{E_in(v)} and y_{E_out(v)}); see (33) and (34).
The class S of DAGs G′ satisfying (2) is of special interest. In Theorem 2, S = S_MDA(G) is the class of all sub-DAGs obtained from a given DAG G = (V, E) by a mesh dismantling algorithm (MDA). The algorithm starts by erasing any edge that leads to a sink v_+ ∈ V or comes from a source v_− ∈ V, and proceeds in the same way and in an arbitrary order; see Definition 4. The class S_MDA(G) contains several interesting classes of 'one-dimensional' and 'multi-dimensional' sub-DAGs G′ on which the restriction in (2) is identified as a Markov chain or a sequence of independent r.v.s (Corollary 2).
Section 4 discusses consistent Markov edge processes on ‘rectangular’ subgraphs of Z ν endowed with nearest-neighbor edges directed in the lexicographic order. We discuss properties of such edge processes and present several examples of ‘generic’ (common) clique distributions, with particular attention paid to the binary case (referred to as the Arak model). In dimension ν = 2 we provide a detailed description of the Arak model in terms of a particle system moving along the edges of Z 2 . Finally, Section 5 establishes a relation between the Pickard and Arak models, the latter model identified in Theorem 3 as a contour model of the former model under an additional symmetry assumption.
We expect that the present work can be extended in several directions. Similarly to [8,9] and some other related studies, the discussion is limited to discrete probability distributions, although continuous distributions (e.g., Gaussian edge processes) are also of interest. A major challenge is the application of Markov edge processes to causal analysis and Bayesian inference, bearing in mind the extensive research in the case of Bayesian networks [1,2]. Consistent Markov edge processes on regular lattices may be of interest to pattern recognition and information theory. Some open problems are mentioned in Remark 7.

2. Bayesian Networks and Pickard Random Fields

Let G = (V, E) be a given DAG. A directed edge from v_1 to v_2 (v_1 ≠ v_2) is denoted as e = (v_1 → v_2) ∈ E. With any vertex v ∈ V we can associate three sets of edges
E_in(v) := {e = (v′ → v) ∈ E, v′ ∈ V},   E_out(v) := {e = (v → v′) ∈ E, v′ ∈ V},   E(v) := E_in(v) ∪ E_out(v),
representing the incoming edges, the outgoing edges and all edges incident to v. A vertex v ∈ V is called a source or a sink if E_in(v) = ∅ or E_out(v) = ∅, respectively. The partial order (reachability relation) v_1 ⪯ v_2 on G means that there is a directed path on this DAG from v_1 to v_2. We write v_1 ≺ v_2 if v_1 ⪯ v_2, v_1 ≠ v_2. The above partial order carries over to the edges of G, namely, e_1 = (v_1 → v′_1) ⪯ e_2 = (v_2 → v′_2) whenever v′_1 ⪯ v_2. Following [2], the set of vertices pa(v) := {v′ ≺ v : (v′ → v) ∈ E} ⊂ V is called the parents of v, whereas F(v) := {v} ∪ pa(v) ⊂ V is called the family of v.
Definition 1.
Let G = (V, E) be a given DAG. We call a family distribution at v ∈ V any discrete probability distribution ϕ_v = ϕ_v(x_{F(v)}), x_{F(v)} = (x_{v′}; v′ ∈ F(v)), that is, a collection of positive numbers ϕ_v(x_{F(v)}) > 0 summing up to 1:
Σ_{x_{F(v)}} ϕ_v(x_{F(v)}) = 1.   (3)
The set of all family distributions ϕ_v at v ∈ V is denoted by Φ_v(G).
For v ∈ V, ϕ_v ∈ Φ_v(G), the conditional probability of x_v given the parent configuration x_{pa(v)} = (x_{v′}; v′ ∈ pa(v)) is written as
ϕ_v(x_v | x_{pa(v)}) = ϕ_v(x_{F(v)}) / ϕ_v(x_{pa(v)})   if ϕ_v(x_{F(v)}) > 0,   (4)
with ϕ_v(x_v | x_{pa(v)}) := 0 if ϕ_v(x_{F(v)}) = 0, and ϕ_v(x_v | x_{pa(v)}) := ϕ_v(x_v) if pa(v) = ∅.
Definition 2.
Let Φ_V = {ϕ_v; v ∈ V} be a set of family distributions on the DAG G = (V, E). We call a Bayesian network on G corresponding to Φ_V a random process X_V = {X_v; v ∈ V} indexed by the vertices (sites) of G and such that for any configuration x_V = (x_v; v ∈ V)
P_V(x_V) = P(X_v = x_v; v ∈ V) = ∏_{v ∈ V} ϕ_v(x_v | x_{pa(v)}).   (5)
We remark that the terminology 'family distribution' is not commonplace, whereas the definition of a Bayesian network varies in the literature [1,2,23], being equivalent to (5) under the positivity condition P_V(x_V) > 0. In the simplest case of a chain graph G = (V, E), V = {0, 1, …, n}, E = {(0 → 1), …, (n−1 → n)} (Example 2 below), a Bayesian network X_V is a Markov chain with transition probabilities ϕ_v(x_v | x_{v−1}); in particular, any non-Markovian site process X_V, V = {0, 1, …, n}, on the above G is not a Bayesian network. In the general case G = (V, E), the conditional (family) distributions ϕ_v(x_v | x_{pa(v)}) in (5) are most intuitive and are often used instead of ϕ_v(x_{F(v)}). A Bayesian network satisfies several Markov conditions [2], in particular the ordered Markov condition:
P_V(x_v | x_{v′}, v′ ≺ v) = P_V(x_v | x_{pa(v)}) = ϕ_v(x_v | x_{pa(v)}),   v ∈ V.
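As a concrete illustration (ours, not part of the paper), the following minimal Python sketch evaluates the factorization (5) on a hypothetical three-vertex chain with made-up conditional tables:

```python
# Minimal sketch of the Bayesian-network factorization P_V(x_V) = prod_v phi_v(x_v | x_pa(v));
# the chain DAG 0 -> 1 -> 2 and the binary conditional tables below are illustrative only.
parents = {0: [], 1: [0], 2: [1]}                       # pa(v) for each vertex v
phi = {
    0: {(): {0: 0.5, 1: 0.5}},                           # phi_0(x_0)
    1: {(0,): {0: 0.8, 1: 0.2}, (1,): {0: 0.3, 1: 0.7}}, # phi_1(x_1 | x_0)
    2: {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.4, 1: 0.6}}, # phi_2(x_2 | x_1)
}

def joint_probability(x):
    """Probability of the configuration x = {v: x_v} under the product formula."""
    p = 1.0
    for v, pa in parents.items():
        pa_values = tuple(x[u] for u in pa)
        p *= phi[v][pa_values][x[v]]
    return p

print(joint_probability({0: 1, 1: 1, 2: 0}))             # 0.5 * 0.7 * 0.4 = 0.14
```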
The Markov properties of site models on directed (not necessarily acyclic) graphs are discussed in Lauritzen et al. [24] and other works.
In the rest of this subsection, G = (V, E) is a directed subgraph of the infinite DAG (Z², E(Z²)) with the lexicographic partial order on v = (t, s), v′ = (t′, s′) (t, s, t′, s′ ∈ Z),
v ⪯ v′ ⟺ s ≤ s′, t ≤ t′,
and edge set
E(Z²) := {(v → v′); v, v′ ∈ Z², v′ = v + (0, 1), v + (1, 0), v + (1, 1)}.
The class of 'rectangular' sub-DAGs G = (V, E) with
V = {v ∈ Z² : v_− ⪯ v ⪯ v_+},   E = {e = (v → v′) ∈ E(Z²), v, v′ ∈ V}   (7)
(v_± ∈ Z², v_− ⪯ v_+) will be denoted G_rec(Z²). Given two 'rectangular' DAGs G_i = (V_i, E_i) ∈ G_rec(Z²), i = 1, 2, we write G_1 ⪯ G_2 if V_1 ⊂ V_2. For the generic family in Figure 1, right, the family distribution is indexed by the elements of the table
C D
A B   (8)
and written as the joint distribution ϕ(A, B, C, D) of four r.v.s A, B, C, D with x_{v−(1,1)} = A, x_{v−(0,1)} = B, x_{v−(1,0)} = C, x_v = D. The marginal distributions of these r.v.s are designated by the corresponding subsets of the table (8); e.g., ϕ(A) is the distribution of the r.v. A. The conditional distributions of these r.v.s are denoted as in (4); viz., ϕ(B | A) = ϕ(A, B)/ϕ(A) is the conditional distribution of B given a value of A.
Definition 3.
We call a Pickard model a Bayesian network P_V on a DAG G = (V, E) ∈ G_rec(Z²) with family distribution ϕ_v ≡ ϕ independent of v ∈ V and satisfying the following conditions:
ϕ(A) = ϕ(B) = ϕ(C) = ϕ(D),   ϕ(A, B) = ϕ(C, D),   ϕ(A, C) = ϕ(B, D)   (9)
(stationarity) and
ϕ(B, C | A) = ϕ(B | A) ϕ(C | A),   ϕ(B, C | D) = ϕ(B | D) ϕ(C | D)   (10)
(conditional independence).
In a Pickard model, the family F(v) of v ∈ V consists of four points, see Figure 1, except when v belongs to the left lower boundary {v = (t, s) ∈ V : t = t_− or s = s_−} of the rectangle V in (7), where v_− = (t_−, s_−) and |F(v)| ≤ 2; for v = v_− the family consists of the single point F(v_−) = {v_−}. Accordingly, the family distributions ϕ_v(x_{F(v)}) = ϕ(x_{F(v)}) of boundary vertices are marginals of the generic distribution ϕ in Definition 3. We note that the Pickard model (also called the Pickard random field) is defined in [8,9,10,16] in somewhat different ways, which are equivalent to Definition 3. Let V_rec(Z²) be the class of all rectangles V in (7).
Theorem 1
([10]). The family {P_V; V ∈ V_rec(Z²)} of Pickard models on DAGs G = (V, E) ∈ G_rec(Z²) corresponding to the same generic family distribution ϕ in (9) and (10) is consistent; in other words, for any G′ = (V′, E′) ⪯ G = (V, E), G, G′ ∈ G_rec(Z²), we have that
P_V|_{V′}(x_{V′}) = P_{V′}(x_{V′}).   (11)
Remark 1.
The consistency property in (11) implies that for G = (V, E) in (7), v_± = (t_±, s_±), and any s_− ≤ s ≤ s_+, the restriction of a Pickard model P_V to the horizontal interval H_s := {(t_−, s), …, (t_+, s)} ⊂ V is a Markov chain
P_V(x_{H_s}) = ϕ(x_{(t_−, s)}) ϕ(x_{(t_−+1, s)} | x_{(t_−, s)}) ⋯ ϕ(x_{(t_+, s)} | x_{(t_+−1, s)})   (12)
with initial distribution ϕ(A) and transition probabilities ϕ(B | A). Indeed, H_s = {v ∈ Z² : (t_−, s) ⪯ v ⪯ (t_+, s)} is the set of vertices of a subgraph G_s ⪯ G, and the corresponding 'one-dimensional' Pickard model on G_s is defined by (12). In a similar way, the restriction of a Pickard model to a vertical interval V_t := {(t, s_−), …, (t, s_+)} ⊂ V is a Markov chain
P_V(x_{V_t}) = ϕ(x_{(t, s_−)}) ϕ(x_{(t, s_−+1)} | x_{(t, s_−)}) ⋯ ϕ(x_{(t, s_+)} | x_{(t, s_+−1)})   (13)
with initial distribution ϕ(A) and transition probabilities ϕ(C | A).
Ref. [10] (Thm. 3) extends the Markov chain identifications in (12) and (13) to any non-increasing undirected path in V. This fact and the discussion in Section 3 and Section 5 suggest that the Pickard model and the consistency property in (11) can be extended to non-rectangular index sets V ⊂ Z².
Remark 2.
In the binary case X_v ∈ {0, 1}, or A, B, C, D ∈ {0, 1}, the distribution ϕ is completely determined by the probabilities
ϕ_A := P(A = 1), …, ϕ_ABCD := P(A = B = C = D = 1).   (14)
In terms of (14), conditions (9) and (10) translate to
ϕ_A = ϕ_B = ϕ_C = ϕ_D,   ϕ_AB = ϕ_CD,   ϕ_AC = ϕ_BD,   ϕ_ABC = ϕ_DBC,   ϕ_ABC ϕ_A = ϕ_AB ϕ_AC,   (ϕ_A − ϕ_AB)(ϕ_A − ϕ_AC) = (ϕ_BC − ϕ_ABC)(1 − ϕ_A),   (15)
see [10] (p. 666). The special cases ϕ_A = 0, 1, ϕ_A = ϕ_AB and ϕ_A = ϕ_AC correspond to a degenerate Pickard model, the latter two implying A = B and A = C, respectively, and leading to a random field which takes constant values on each horizontal (or vertical) line in Z². Hence, a binary Pickard model can be parametrized by seven parameters,
ϕ_A =: a,   ϕ_AB =: b,   ϕ_AC =: c,   ϕ_AD,   ϕ_ACD,   ϕ_ABD,   ϕ_ABCD,   (16)
as ϕ_ABC, ϕ_BC in the non-degenerate case can be found from a, b, c:
ϕ_ABC = bc/a,   ϕ_BC = bc/a + (a − b)(a − c)/(1 − a),   a ≠ 0, 1.   (17)
The parameters in (16) satisfy natural constraints resulting from their definition as probabilities, in particular 1 ≥ a ≥ b, c.
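For illustration, a small numerical check (ours, with hypothetical parameter values a, b, c) of the relations in (17):

```python
# Sketch: recover phi_ABC and phi_BC from a = phi_A, b = phi_AB, c = phi_AC via (17)
# for a non-degenerate binary Pickard model; the numeric values are hypothetical.
a, b, c = 0.5, 0.3, 0.2          # must satisfy 1 >= a >= b, c and a != 0, 1, a != b, a != c

phi_ABC = b * c / a
phi_BC = b * c / a + (a - b) * (a - c) / (1 - a)

print(phi_ABC, phi_BC)
# Sanity checks of the two relations behind (17):
assert abs(phi_ABC * a - b * c) < 1e-12
assert abs((a - b) * (a - c) - (phi_BC - phi_ABC) * (1 - a)) < 1e-12
```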

3. Markov Edge Process: General Properties and Consistency

It is convenient to allow G to have isolated vertices, which of course can be removed w.l.g., our interest being primarily focused on edges. The directed line graph G^E = (V(G^E), E(G^E)) of the DAG G = (V, E) is defined as the graph whose vertex set V(G^E) := E is the same as that of the undirected line graph and whose set of (directed) edges is
E(G^E) := {(e_1 → e_2) : e_1 ∈ E_in(v), e_2 ∈ E_out(v) (v ∈ V)}.
Note that G E is acyclic, and hence a DAG.
Given a DAG G = ( V , E ) , we denote as S ( G ) the class of all non-empty sub-DAGs (subgraphs) G = ( V , E ) with V V , E E .
Definition 4.
A transformation G →_+ G′ ∈ S(G) is said to be a top mesh dismantling algorithm (top MDA) if it erases an edge e ∈ E_in(v_+) leading to a sink v_+ ∈ V; in other words, if G′ = (V′, E′) with
V′ = V,   E′ = E ∖ {e},   e ∈ E_in(v_+),   E_out(v_+) = ∅.   (18)
Similarly, a transformation G →_− G′ ∈ S(G) is said to be a bottom mesh dismantling algorithm (bottom MDA) if it erases an edge e ∈ E_out(v_−) starting from a source v_− ∈ V; in other words, if G′ = (V′, E′) with
V′ = V,   E′ = E ∖ {e},   e ∈ E_out(v_−),   E_in(v_−) = ∅.
We denote by S_MDA^+(G), S_MDA^−(G) and S_MDA(G) the classes of DAGs containing G and the sub-DAGs G′ ∈ S(G) which can be obtained from G by successively applying a top MDA, a bottom MDA, or both types of MDA in an arbitrary order:
S_MDA^+(G) := {G} ∪ {G′ = G_k, k ≥ 1},   G →_+ G_1 →_+ G_2 →_+ ⋯ →_+ G_k;
S_MDA^−(G) := {G} ∪ {G′ = G_k, k ≥ 1},   G →_− G_1 →_− G_2 →_− ⋯ →_− G_k;
S_MDA(G) := {G} ∪ {G′ = G_k, k ≥ 1},   G →_± G_1 →_± G_2 →_± ⋯ →_± G_k.
Note that the above transformations may lead to a sub-DAG G′ containing isolated vertices, which can be removed from G′ w.l.g. Figure 2 illustrates that S_MDA(G) ⊊ S(G) in general.
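A minimal Python sketch (ours, not the paper's) of the top and bottom mesh dismantling steps of Definition 4, representing a DAG by its vertex set and edge list:

```python
# Sketch of one top-MDA / bottom-MDA step: erase an edge leading to a sink (top)
# or leaving a source (bottom).  Edges are pairs (u, w) meaning u -> w.
def sinks(vertices, edges):
    return {v for v in vertices if not any(u == v for (u, w) in edges)}

def sources(vertices, edges):
    return {v for v in vertices if not any(w == v for (u, w) in edges)}

def top_mda_step(vertices, edges, edge):
    u, w = edge
    assert edge in edges and w in sinks(vertices, edges), "edge must lead to a sink"
    return vertices, [e for e in edges if e != edge]

def bottom_mda_step(vertices, edges, edge):
    u, w = edge
    assert edge in edges and u in sources(vertices, edges), "edge must leave a source"
    return vertices, [e for e in edges if e != edge]

# Toy example: the triangle DAG of Example 1 below (1 -> 2, 1 -> 3, 2 -> 3).
V, E = {1, 2, 3}, [(1, 2), (1, 3), (2, 3)]
print(top_mda_step(V, E, (1, 3)))      # erases an edge into the sink 3
print(bottom_mda_step(V, E, (1, 2)))   # erases an edge out of the source 1
```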
Definition 5.
A subDAG G = ( V , E ) S ( G ) is said to be the following:
(i) 
An interval DAG if
V′ = {v ∈ V : v_1 ⪯ v ⪯ v_2},   E′ = {e = (v → v′) ∈ E : v, v′ ∈ V′}   (19)
for some v_1 ⪯ v_2, v_i ∈ V, i = 1, 2;
(ii) 
A chain DAG if V′ = {v_1 = v′_1 → v′_2 → ⋯ → v′_k = v_2}, E′ = {(v′_1 → v′_2), …, (v′_{k−1} → v′_k)} (k ≥ 1), for some v_1 ⪯ v_2, v_i ∈ V, i = 1, 2;
(iii) 
A source-to-sink DAG if any edge e = (v → v′) ∈ E′ connects a source v of G′ to a sink v′ of G′.
The corresponding classes of subDAGs G = ( V , E ) in Definition 5 (i)–(iii) will be denoted by S int ( G ) , S chain ( G ) and S sts ( G ) .
Proposition 1.
For any DAG G = ( V , E ) we have that
S_int(G) ⊂ S_MDA^+(G),   S_chain(G) ⊂ S_MDA(G),   S_sts(G) ⊂ S_MDA(G).   (20)
Proof. 
The proof of each inclusion in (20) proceeds by induction on the number |E| of edges of G. Clearly, the proposition holds for |E| = 1. Assume it holds for |E| = n − 1; we will show that it holds for |E| = n. The induction step n − 1 → n for the three relations in (20) is proved as follows.
(i)
Fix v 1 v 2 and G = ( V , E ) S ( G ) to be as in (19). Let { v + , 1 , , v + , m } be the set of sinks of G. If v 2 is a sink and m = 1 (or v 2 = v + , 1 ), then G = G belongs to S MDA + ( G ) by definition of the last class. If v 2 is a sink and m 2 , we can dismantle an edge e E in ( v + . 2 ) , and the remaining graph G = ( V , E ) in (18) has n 1 edges and contains G = ( V , E ) , meaning the inductive assumption applies, proving G S MDA + ( G ) S MDA + ( G ) S MDA ( G ) .
Next, let v 2 { v + , 1 , , v + , m } . Then there is a path in G from v 2 to at least one of these sinks. If v 2 v + = v + , 1 , we apply a top MDA to v + and see that G in (18) has the number of edges | E | = n 1 and contains the interval graph G in (19), so that the inductive assumption applies to G and consequently to G as well, as before, proving the induction step in case (i).
(ii)
Let G be a chain between v 1 v 2 and G G , since G = G belongs to S MDA + ( G ) by definition. Let { v + , 1 , , v + , m } be the set of sinks of G. If v 2 is a sink and m = 1 then G contains an edge e = ( v v + , 1 ) which does not belong to G . Then, by removing e from G by top MDA we see that G = ( V , E ) in (18) contains the chain G , viz., G S ( G ) , and therefore G S MDA + ( G ) by the inductive assumption. If v 2 is a sink and m 2 , we remove from G any edge leading to v + , 2 and arrive at the same conclusion. If v 2 { v + , 1 , , v + , m } is not a sink, we remove any edge leading to a sink and apply the inductive assumption to the remaining graph G having n 1 edges. This proves the induction step in case (ii).
(iii)
Let G G , G S sts ( G ) , V = V V + , where V = { v , 1 , , v , k } and V + = { v + , 1 , , v + , } are the sets of sources and sinks of G , respectively. Let V and V + be the sets of sources and sinks of G = ( V , E ) . If V = V V + , i.e., G is a source-to-sink DAG, we can remove from it an edge e E , e E and the remaining graph G in (18) contains G and satisfies the inductive assumption. If G is not a source-to-sink DAG, it contains a sink v + V + or a source v V . In the first case, there is e = ( v , v + ) E , e E , which can be removed from G, and the remaining graph G contains G and satisfies the inductive assumption. The second case follows from the first one by DAG reversion. This proves the induction step n 1 n in case (iii), hence the proposition.
An edge process on a DAG G = (V, E) is a family Y_E = {Y_e; e ∈ E} of discrete r.v.s indexed by the edges of G. It is identified with a (discrete) probability distribution P_E(y_E), y_E = (y_e; e ∈ E).
Definition 6.
Let G = (V, E) be a given DAG. A clique distribution at v ∈ V is any discrete probability distribution π_v = π_v(y_{E(v)}), y_{E(v)} = (y_e; e ∈ E(v)), that is, a family of positive numbers π_v(y_{E(v)}) > 0 summing up to 1:
Σ_{y_{E(v)}} π_v(y_{E(v)}) = 1.
The set of all clique distributions π_v at v ∈ V is denoted by Π_v(G).
Given π_v ∈ Π_v(G), the conditional probability of the out-configuration y_{E_out(v)} = (y_e; e ∈ E_out(v)) given the in-configuration y_{E_in(v)} = (y_e; e ∈ E_in(v)) is written as
π_v(y_{E_out(v)} | y_{E_in(v)}) = π_v(y_{E(v)}) / π_v(y_{E_in(v)}),
with y_{E(v)} = (y_e; e ∈ E_out(v) ∪ E_in(v)), and π_v(y_{E_out(v)} | y_{E_in(v)}) := 1 for E_out(v) = ∅, := π_v(y_{E_out(v)}) for E_in(v) = ∅.
Definition 7.
A Markov edge process on a DAG G = (V, E) corresponding to a given family {π_v; v ∈ V} of clique distributions π_v ∈ Π_v(G) is a random process Y_E = {Y_e; e ∈ E} indexed by the edges of G and such that for any configuration y_E = (y_e; e ∈ E)
P_E(y_E) = P(Y_e = y_e; e ∈ E) := ∏_{v ∈ V} π_v(y_{E_out(v)} | y_{E_in(v)}).   (21)
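As an illustration (ours, with a hypothetical chain DAG and toy clique distributions), a minimal sketch of the product formula (21):

```python
from itertools import product

# Sketch of formula (21): P_E(y_E) = prod_v pi_v(y_Eout(v) | y_Ein(v)).
# Edges are pairs (u, w); clique distributions pi_v are dictionaries over the binary
# configurations of the edges incident to v (toy numbers, hypothetical).
edges = [("a", "b"), ("b", "c")]                 # chain a -> b -> c
def E_in(v):  return [e for e in edges if e[1] == v]
def E_out(v): return [e for e in edges if e[0] == v]
vertices = {"a", "b", "c"}

pi = {
    "a": {(0,): 0.6, (1,): 0.4},                                  # over E(a) = {(a,b)}
    "b": {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3},    # over E(b) = {(a,b), (b,c)}
    "c": {(0,): 0.5, (1,): 0.5},                                  # over E(c) = {(b,c)}
}

def marginal(v, sub_edges, y):
    """Marginal of pi_v over sub_edges, summing the remaining incident edges out."""
    clique = E_in(v) + E_out(v)
    keep = [clique.index(e) for e in sub_edges]
    total = 0.0
    for conf, p in pi[v].items():
        if all(conf[i] == y[e] for i, e in zip(keep, sub_edges)):
            total += p
    return total

def P_E(y):
    p = 1.0
    for v in vertices:
        if not E_out(v):
            continue                                   # conditional equals 1 at a sink
        num = marginal(v, E_in(v) + E_out(v), y)
        den = marginal(v, E_in(v), y) if E_in(v) else 1.0
        p *= num / den
    return p

# total probability over all binary edge configurations should be 1
print(sum(P_E({e: c for e, c in zip(edges, conf)}) for conf in product([0, 1], repeat=2)))
```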
An edge process Y_E = {Y_e; e ∈ E} on a DAG G = (V, E) can be viewed as a site process on the line graph G^E = (V(G^E), E(G^E)) of G, with V(G^E) := E and E(G^E) := {(e_1 → e_2) : e_1 ∈ E_in(v), e_2 ∈ E_out(v) (v ∈ V)}.
Corollary 1.
A Markov edge process P_E in (21) is a Bayesian network on the line DAG G^E = (V(G^E), E(G^E)) if
π_v(y_{E_out(v)} | y_{E_in(v)}) = ∏_{e ∈ E_out(v)} π_v(y_e | y_{E_in(v)}),   v ∈ V.   (22)
Condition (22) can be rephrased as the statement that the 'outgoing' variables Y_e, e ∈ E_out(v), are conditionally independent given the 'incoming' variables Y_e, e ∈ E_in(v), for each node v ∈ V. Analogously, P_E in (21) is a Bayesian network on the reversed line graph G^E under the symmetric condition that the 'incoming' variables Y_e, e ∈ E_in(v), are conditionally independent given the 'outgoing' variables Y_e, e ∈ E_out(v), for each node v ∈ V.
Definition 8.
Let G = (V, E) be a DAG and S′(G) ⊂ S(G) a family of sub-DAGs of G. A family of edge processes {P_{E′}; G′ = (V′, E′) ∈ S′(G)} is said to be consistent if
P_E(y_{E′}) = P_{E′}(y_{E′}),   G′ = (V′, E′) ∈ S′(G).   (23)
Given the edge process P_E in (21), we define its restriction to a sub-DAG G′ ∈ S(G) as
P_E|_{E′}(y_{E′}) := P_E(y_{E′}) = P(Y_e = y_e; e ∈ E′),   y_{E′} = (y_e; e ∈ E′).   (24)
In general, P E | E is not a Markov edge process on G , as shown in the following example.
Example 1.
Let G = (V, E), V = {1, 2, 3}, E = {e_1 = (1 → 2), e_2 = (1 → 3), e_3 = (2 → 3)}, and E′ = {e_2, e_3}. Then P_E(y_{e_1}, y_{e_2}, y_{e_3}) = π_1(y_{e_1}, y_{e_2}) π_2(y_{e_3} | y_{e_1}) and
P_E|_{E′}(y_{e_2}, y_{e_3}) = Σ_{y_{e_1}} π_1(y_{e_1}, y_{e_2}) π_2(y_{e_3} | y_{e_1}).   (25)
The sub-DAG G′ = (V, E′) is composed of two edges going from the different sources 1 and 2 to the same sink 3. By definition, a Markov edge process P_{E′} on G′ corresponds to independent Y_{e_2}, Y_{e_3}, viz.,
P_{E′}(y_{e_2}, y_{e_3}) = π′_1(y_{e_2}) π′_2(y_{e_3}).   (26)
It is easy to see that the two probabilities in (25) and (26) are generally different (however, they are equal if π_1(y_{e_1}, y_{e_2}) = π_1(y_{e_1}) π_1(y_{e_2}) is a product distribution and π_1(y_{e_1}) = π_2(y_{e_1}), in which case π′_1(y_{e_2}) = π_1(y_{e_2}) and π′_2(y_{e_3}) = π_2(y_{e_3})).
In Example 1, the restriction P_E|_{E″} to E″ = {e_1, e_2} is a Markov edge process with P_E|_{E″}(y_{e_1}, y_{e_2}) = π_1(y_{e_1}, y_{e_2}). The following proposition shows that a similar fact holds in the general case for subgraphs G′ obtained from G by applying a top MDA.
Proposition 2.
Let P_E be a Markov edge process on a DAG G = (V, E). Then for any G′ = (V′, E′) ∈ S_MDA^+(G), the restriction P_E|_{E′} is a Markov edge process on G′ with clique distributions π′_v ∈ Π_v(G′), v ∈ V′, given by
π′_v(y_{E′(v)}) := π_v(y_{E′(v)}),   v ∈ V′,   (27)
which is the restriction of the clique distribution π_v to the (sub-)clique E′(v) = E(v) ∩ E′.
Proof. 
It suffices to prove the proposition for a one-step top MDA, i.e., for G′ in (18). Let E_in(v_+) := {e_1 = (v_1 → v_+), …, e_k = (v_k → v_+)}, k ≥ 2, and e = e_k. From the definitions in (21) and (27),
P_E|_{E′}(y_{E′}) = ∏_{v ∈ V∖{v_k}} π_v(y_{E_out(v)} | y_{E_in(v)}) Σ_{y_{e_k}} π_{v_k}(y_{E_out(v_k)} | y_{E_in(v_k)}) = ∏_{v ∈ V∖{v_k}} π_v(y_{E_out(v)} | y_{E_in(v)}) · π_{v_k}(y_{E′_out(v_k)} | y_{E_in(v_k)}) = ∏_{v ∈ V} π′_v(y_{E′_out(v)} | y_{E′_in(v)}).
Example 2
(Markov chain). Let G = (V, E), V = {0, 1, …, n}, E = {e_1 = (0 → 1), e_2 = (1 → 2), …, e_n = (n−1 → n)} be a chain from 0 to n. The classes S_MDA^+(G), S_MDA^−(G) and S_MDA(G) consist, respectively, of all chains from 0 to j ∈ {1, …, n}, from i ∈ {0, …, n−1} to n, and from i to j (0 ≤ i < j ≤ n). A Markov edge process on the above graph is a Markov chain Y_E = {Y_{e_1}, …, Y_{e_n}} with the probability distribution
P_E(y_{e_1}, y_{e_2}, …, y_{e_n}) = π_0(y_{e_1}) π_1(y_{e_2} | y_{e_1}) ⋯ π_{n−1}(y_{e_n} | y_{e_{n−1}}),
where π_0 is a (discrete) univariate and π_k, 1 ≤ k ≤ n−1, are bivariate probability distributions; π_k(y_{e_{k+1}} | y_{e_k}) = π_k(y_{e_k}, y_{e_{k+1}}) / Σ_y π_k(y_{e_k}, y) are the conditional or transition probabilities. It is clear that the restriction P_E|_{E′} to E′ = {e_1, …, e_j} is a Markov chain with π′_0 = π_0, π′_k = π_k, 1 ≤ k < j; in other words, it satisfies Proposition 2 and (27). However, the restriction P_E|_{E′} to E′ = {e_i, …, e_n}, i.e., to G′ = (V′, E′) ∈ S_MDA^−(G), is a Markov chain with the initial distribution
π′_i(y_{e_i}) = Σ_{ỹ_{e_1}, …, ỹ_{e_{i−1}}} π_0(ỹ_{e_1}) ∏_{1 ≤ k < i} π_k(ỹ_{e_k}, ỹ_{e_{k+1}}) / π_k(ỹ_{e_k}),   ỹ_{e_i} = y_{e_i},
which is generally different from π_i(y_{e_i}). We conclude that Proposition 2 ((27) in particular) fails for sub-DAGs G′ of G obtained by a bottom MDA. On the other hand, π′_i = π_i, 1 ≤ i < n, holds for the above G, G′ provided the π_k satisfy the additional compatibility condition:
π_k(y_{e_k}) = π_{k−1}(y_{e_k}),   e_k = (k−1 → k),   k = 1, …, n−1.
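As a small numerical illustration (ours, with hypothetical numbers), the following sketch compares the restricted initial distribution π′_i above with the corresponding marginal of the clique distribution for a two-edge chain, with and without the compatibility condition:

```python
# Restriction of the chain edge process to E' = {e2}: the initial distribution becomes
# pi'(y) = sum_x pi_0(x) * pi_1(x, y) / pi_1(x), which equals the second-coordinate marginal
# of pi_1 when the compatibility condition pi_0 = (first-coordinate marginal of pi_1) holds,
# and generally differs otherwise.  Toy numbers.
def restricted_initial(pi0, pi1):
    marg_first = {x: sum(pi1[(x, y)] for y in (0, 1)) for x in (0, 1)}
    return {y: sum(pi0[x] * pi1[(x, y)] / marg_first[x] for x in (0, 1)) for y in (0, 1)}

pi1 = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
marg_second = {y: pi1[(0, y)] + pi1[(1, y)] for y in (0, 1)}

print(restricted_initial({0: 0.6, 1: 0.4}, pi1), marg_second)  # compatible: equal
print(restricted_initial({0: 0.9, 1: 0.1}, pi1), marg_second)  # incompatible: differ
```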
Proposition 3.
Let P_E be a Markov edge process on a DAG G = (V, E) as in (21). Then for any edge e = (v_1 → v_2) ∈ E,
P_E(y_e | y_{e′}, e′ ≠ e, e′ ∈ E) = P_E(y_e | y_{e′}, e′ ∈ E_out(v_2) ∪ E_in(v_1)) = π_{v_1}(y_{E_out(v_1)} | y_{E_in(v_1)}) π_{v_2}(y_{E_out(v_2)} | y_{E_in(v_2)}) / Σ_{ỹ_e} π_{v_1}(ỹ_{E_out(v_1)} | ỹ_{E_in(v_1)}) π_{v_2}(ỹ_{E_out(v_2)} | ỹ_{E_in(v_2)})   (28)
and
P_E(y_e | y_{e′}, e′ ≺ e) = P_E(y_e | y_{e′}, e′ ∈ E_in(v_1)) = π_{v_1}(y_e | y_{E_in(v_1)}),   (29)
with ỹ_E = (ỹ_{e′}; e′ ∈ E) satisfying ỹ_{e′} = y_{e′}, e′ ≠ e.
Proof. 
Relation (28) follows from
P_E(y_e | y_{e′}, e′ ≠ e, e′ ∈ E) = P_E(y_E) / Σ_{ỹ_e} P_E(ỹ_E)   (30)
and the definition in (21), since the products over v ≠ v_1, v_2 cancel in the numerator and the denominator of (30).
Consider (29). We use Propositions 1 and 2, according to which the interval DAGs G i = ( V i , E i ) , V i : = { v V : v v i } , i = 1 , 2 belong to S MDA + ( G ) . The ‘intermediate’ DAG G ˜ = ( V ˜ , E ˜ ) , constructed from G 1 by adding the single edge e = ( v 1 v 2 ) , viz., V ˜ : = V 1 { v 2 } , E ˜ : = E 1 { e } , also belongs to S MDA + ( G ) since it can be attained from G 2 by dismantling all edges with the exception of E ˜ . Note that { e } { e E : e } = E ˜ . Therefore, by Proposition 2, P E | E ˜ ( y E ˜ ) = P E ˜ ( y E ˜ ) where P E ˜ is a Markov edge process on G ˜ with clique distributions
π ˜ v ( y E ˜ ( v ) ) : = π v ( y E ˜ ( v ) ) , v V ˜ .
The expression in (29) follows from (31) and (28) with E replaced by E ˜ by noting that v 2 is a sink in G ˜ ; hence, π ˜ v 2 ( y E ˜ out ( v 2 ) | y E ˜ in ( v 2 ) ) = 1 , whereas π ˜ v 1 ( y E ˜ out ( v 1 ) | y E ˜ in ( v 1 ) ) = π v 2 ( y e | y E in ( v 1 ) ) . □
Remark 3.
Note that the edges e′ ∈ E_out(v_2) ∪ E_in(v_1) appearing in the conditional probability on the r.h.s. of (28) are the nearest neighbors of e in the line graph (DAG) G^E = (V(G^E), E(G^E)).
Theorem 2.
Let G = (V, E) be a given DAG and P_E a Markov edge process as in (21) with clique distributions π_v ∈ Π_v(G), v ∈ V, satisfying the compatibility condition
π_{v_1}(y_e) = π_{v_2}(y_e),   e = (v_1 → v_2) ∈ E,   (32)
and the two marginal independence conditions:
π_v(y_{E_out(v)}) = ∏_{e ∈ E_out(v)} π_v(y_e)   (33)
and
π_v(y_{E_in(v)}) = ∏_{e ∈ E_in(v)} π_v(y_e).   (34)
Then the family of Markov edge processes P_{E′}, G′ = (V′, E′) ∈ S_MDA(G), with clique distributions π′_v ∈ Π_v(G′), v ∈ V′, given in (27) is consistent, viz.,
P_E|_{E′} = P_{E′},   G′ = (V′, E′) ∈ S_MDA(G).   (35)
Remark 4.
(i) 
Conditions (33) and (34) do not imply the mutual independence of the in- and out-clique variables y E in ( v ) and y E out ( v ) under π v .
(ii) 
Conditions (33) and (34) are automatically satisfied if card(E_in(v)), card(E_out(v)) ∈ {0, 1} (Markov chain case).
(iii) 
Conditions (33) and (34) are symmetric with regard to DAG reversion (all directions reversed).
(iv) 
In the binary case ( Y e = 0 , 1 ), the value Y e = 1 can be interpreted as the presence of a ‘particle’ and Y e = 0 as its absence on edge e E . ‘Particles’ ‘move’ on a DAG ( V , E ) in the direction of arrows. ‘Particles’ ‘collide’, ‘annihilate’ or ‘branch’ at nodes v V , with probabilities determined by clique distribution π v . See Section 4 for a detailed description of particle evolution for the Arak model on Z 2 .
Proof of Theorem 2. 
It suffices to prove (35) for a one-step MDA:
E →_− E_− = E ∖ {e_−}   and   E →_+ E_+ = E ∖ {e_+},
which remove a single edge e_− = (v_− → v′) coming from a source v_− ∈ V and a single edge e_+ = (v → v_+) going to a sink v_+ ∈ V, respectively. Moreover, it suffices to consider E_− only: the proof for E_+ follows from Proposition 2 and does not require (32)–(34). (It also follows from the case of E_− by DAG reversion.) Then, by the marginal independence of π_v, v = v_−, v′, and π_{v_−}(y_{e_−}) = π_{v′}(y_{e_−}),
P_E(y_E) = π_{v_−}(y_{e_−}) ∏_{e ∈ E_out(v_−), e ≠ e_−} π_{v_−}(y_e) · π_{v′}(y_{E_in(v′) ∪ E_out(v′)}) / (π_{v′}(y_{e_−}) π_{v′}(y_{E_in(v′)∖{e_−}})) · ∏_{v ∈ V, v ≠ v_−, v′} π_v(y_{E_out(v)} | y_{E_in(v)}) = ∏_{e ∈ E_out(v_−), e ≠ e_−} π_{v_−}(y_e) · π_{v′}(y_{E_in(v′) ∪ E_out(v′)}) / π_{v′}(y_{E_in(v′)∖{e_−}}) · ∏_{v ∈ V, v ≠ v_−, v′} π_v(y_{E_out(v)} | y_{E_in(v)}).
From the definition of π′_v in (27), we have that ∏_{e ∈ E_out(v_−), e ≠ e_−} π_{v_−}(y_e) = π′_{v_−}(y_{E_−,out(v_−)}), π_{v′}(y_{E_in(v′)∖{e_−}}) = π′_{v′}(y_{E_−,in(v′)}) and π_v(y_{E_out(v)} | y_{E_in(v)}) = π′_v(y_{E_−,out(v)} | y_{E_−,in(v)}), v ∈ V, v ≠ v_−, v′, where E_−,in(v), E_−,out(v) denote the in- and out-edge sets of v in G_− = (V, E_−); therefore
P_E(y_E) = π′_{v_−}(y_{E_−,out(v_−)}) · π_{v′}(y_{E_in(v′) ∪ E_out(v′)}) / π′_{v′}(y_{E_−,in(v′)}) · ∏_{v ∈ V, v ≠ v_−, v′} π′_v(y_{E_−,out(v)} | y_{E_−,in(v)}),
leading to
P_E|_{E_−}(y_{E_−}) = Σ_{y_{e_−}} P_E(y_E) = π′_{v_−}(y_{E_−,out(v_−)}) · π′_{v′}(y_{E_−,in(v′) ∪ E_−,out(v′)}) / π′_{v′}(y_{E_−,in(v′)}) · ∏_{v ∈ V, v ≠ v_−, v′} π′_v(y_{E_−,out(v)} | y_{E_−,in(v)}) = ∏_{v ∈ V} π′_v(y_{E_−,out(v)} | y_{E_−,in(v)}) = P_{E_−}(y_{E_−}),
proving (35) for E′ = E_− defined above, that is, the statement of the theorem for a one-step MDA E →_− E_−. □
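As an illustration (ours, not from the paper), the following sketch checks the one-step consistency (35) numerically on a toy DAG whose clique distributions satisfy (32)–(34); all numeric values are hypothetical:

```python
from itertools import product

# Numerical check of Theorem 2 on a toy DAG 1 -> 3 <- 2, 3 -> 4: removing the edge
# b = (2 -> 3) by a bottom MDA should leave the marginal of P_E over {a, c} equal to the
# Markov edge process P_{E'} built from the restricted cliques (27).
p_a = {0: 0.5, 1: 0.5}                      # pi_1 = marginal of pi_3 on edge a (compatibility (32))
p_b = {0: 0.6, 1: 0.4}                      # pi_2 = marginal of pi_3 on edge b
q_c = {(0, 0): 0.2, (1, 0): 0.5, (0, 1): 0.6, (1, 1): 0.9}   # P(c = 1 | a, b)

def pi3(a, b, c):
    """Clique distribution at vertex 3; the incoming edges a, b are independent, so (34) holds."""
    pc1 = q_c[(a, b)]
    return p_a[a] * p_b[b] * (pc1 if c == 1 else 1 - pc1)

def P_E(a, b, c):           # product formula (21) on the full DAG
    return p_a[a] * p_b[b] * pi3(a, b, c) / (p_a[a] * p_b[b])

def P_E_prime(a, c):        # Markov edge process on the chain 1 -> 3 -> 4 with cliques (27)
    marg_ac = sum(pi3(a, b, c) for b in (0, 1))
    marg_a = sum(pi3(a, b, cc) for b in (0, 1) for cc in (0, 1))
    return p_a[a] * marg_ac / marg_a

for a, c in product((0, 1), repeat=2):
    restricted = sum(P_E(a, b, c) for b in (0, 1))
    assert abs(restricted - P_E_prime(a, c)) < 1e-12
print("consistency (35) holds on this example")
```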
Corollary 2.
Let P E be a Markov edge process on DAG G = ( V , E ) satisfying the conditions of Theorem 2:
(i) 
The restriction P_E|_{E′} of P_E to a chain DAG G′ = (V′, E′) ∈ S_chain(G), V′ = {v_0, v_1, …, v_k}, E′ = {e_1, …, e_k}, e_i = (v_{i−1} → v_i), is a Markov chain, viz.,
P_E|_{E′}(y_{e_1}, …, y_{e_k}) = π′_{v_0}(y_{e_1}) π′_{v_1}(y_{e_2} | y_{e_1}) ⋯ π′_{v_{k−1}}(y_{e_k} | y_{e_{k−1}}),   (36)
where
π′_{v_0}(y_{e_1}) = π_{v_0}(y_{e_1}),   π′_{v_i}(y_{e_i}, y_{e_{i+1}}) = π_{v_i}(y_{e_i}, y_{e_{i+1}}),   i = 1, …, k − 1.
(ii) 
The restriction P_E|_{E′} of P_E to a source-to-sink DAG G′ = (V′, E′) ∈ S_sts(G) is a sequence of independent r.v.s, viz.,
P_E|_{E′}(y_e, e ∈ E′) = ∏_{v ∈ V′_−} ∏_{e ∈ E′_out(v)} π′_v(y_e),   (37)
where V′_− ⊂ V′ is the set of sources of G′ and E′_out(v) := E_out(v) ∩ E′ is the set of edges coming from a source v ∈ V′_− and ending in a sink of G′, with π′_v(y_e) := π_v(y_e), e ∈ E′_out(v).
Proof. 
(i)
By Proposition 1, G belongs to S MDA ( G ) so that Theorem 2 applies with π v in (27) given by (31), resulting in (36).
(ii)
By Proposition 1, G belongs to S MDA ( G ) so that Theorem 2 applies with π v in (27) given by
π′_v(y_{E′_out(v)}) = π_v(y_{E′_out(v)}) = ∏_{e ∈ E′_out(v)} π_v(y_e) = ∏_{e ∈ E′_out(v)} π′_v(y_e),   v ∈ V′_−,
(the second equality holds by (33)), resulting in (37).
Remark 5.
A natural generalization of the chain and source-to-sink DAGs is a source–chain–sink DAG G′ = (V′, E′) ∈ S(G), with the property that any sink v_+ ∈ V′ is reachable from a source v_− ∈ V′ by a single chain (directed path). We conjecture that for a source–chain–sink DAG G′, the restriction P_E|_{E′} of P_E in Theorem 2 is a product of independent Markov chains on the disjoint directed paths of G′, in agreement with representations (36) and (37) of Corollary 2.
Gibbsian representation of Markov edge processes. The Gibbsian representation is fundamental in the study of Markov random fields [4,6]. The Gibbsian representation of Pickard random fields was discussed in [8,9,11,12]. The following Corollary 3 provides a Gibbsian representation of the consistent Markov edge process in Theorem 2. Accordingly, the set of vertices of the DAG G = (V, E) is written as V = V⁰ ∪ ∂V, where the boundary ∂V := {v ∈ V : E_in(v) = ∅ or E_out(v) = ∅} consists of the sinks and sources of G, and the interior V⁰ := V ∖ ∂V consists of the remaining sites.
Corollary 3.
Let P E be a Markov edge process on DAG G = ( V , E ) satisfying the conditions of Theorem 2 and the positivity condition P E ( y E ) > 0 . Then
P E ( y E ) = exp v V 0 Q v ( y E ( v ) ) + v V Q v V ( y E ( v ) ) ,
where the inner and boundary potentials Q v , Q v V are given by
Q v ( y E ( v ) ) : = log π v ( y E ( v ) ) e E ( v ) π v ( y e ) , Q v V ( y E ( v ) ) : = log e E ( v ) π v ( y e ) .
Formula (38) follows by writing (21) as P_E(y_E) = exp{Σ_{v ∈ V} [log π_v(y_{E(v)}) − log ∏_{e ∈ E_in(v)} π_v(y_e)]} and rearranging the terms in the exponent using (32)–(34). Note that (38) is invariant with regard to graph reversal (the direction of all edges reversed). Formally, the inner potentials Q_v in (39) do not depend on the orientation of G, raising the question of the necessity of conditions (33) and (34) in Theorem 2. An interesting perspective seems to be the study of a Markov evolution {Y_E(t); t ≥ 0} of Markov edge processes on the DAG G = (V, E) with invariant Gibbs distribution (38).

4. Consistent Markov Edge Process on Z^ν and Arak Model

Let (Z^ν, E(Z^ν)) be the infinite DAG whose vertices are the points of the regular lattice Z^ν and whose edges are the pairs e = (v → v′), |v′ − v| = 1, v ≺ v′, directed according to the lexicographic order: v = (t_1, …, t_ν) ⪯ v′ = (t′_1, …, t′_ν) if and only if t_i ≤ t′_i, i = 1, …, ν. We write v ≺ v′ if v ⪯ v′, v ≠ v′. A ν̃-dimensional hyperplane (1 ≤ ν̃ < ν)
H_ν̃ = {(t_1, …, t_ν̃, s_{ν̃+1}, …, s_ν) ∈ Z^ν : (t_1, …, t_ν̃) ∈ Z^ν̃}   (40)
can be identified with Z^ν̃ for any fixed (s_{ν̃+1}, …, s_ν) ∈ Z^{ν−ν̃}.
Let G_rec(Z^ν) denote the class of all finite 'rectangular' subgraphs of (Z^ν, E(Z^ν)); viz., (V, E) ∈ G_rec(Z^ν) if V = {v ∈ Z^ν : v_− ⪯ v ⪯ v_+} for some v_− ⪯ v_+, v_± ∈ Z^ν, and E = {e = (v → v′) ∈ E(Z^ν), v, v′ ∈ V}. We denote by
G_rec(V) = {G_ν̃, 1 ≤ ν̃ < ν}   (41)
the class of all subgraphs G_ν̃ = (V_ν̃, E_ν̃) of (V, E) ∈ G_rec(Z^ν) formed by the intersection V_ν̃ = V ∩ H_ν̃, E_ν̃ = {e = (v → v′) ∈ E, v, v′ ∈ V_ν̃} with a ν̃-dimensional hyperplane H_ν̃ ⊂ Z^ν, H_ν̃ ∩ V ≠ ∅, that can be identified with an element of G_rec(Z^ν̃). In particular, for ν = 3 and any (s_2, s_3) ∈ Z²,
H_1 = {(t_1, s_2, s_3) : t_1 ∈ Z},   H_1 ∩ V ≠ ∅,   (42)
and the subgraph G_1 = (V_1, E_1) can be identified with a chain
E_1 = {v_1^− → v_1^− + 1 → ⋯ → v_1^+},   v_1^+ > v_1^−,   (43)
of length n = v_1^+ − v_1^− ≥ 1. Similarly, for any s_3 ∈ Z,
H_2 = {(t_1, t_2, s_3) : (t_1, t_2) ∈ Z²},   H_2 ∩ V ≠ ∅,   (44)
and the subgraph G_2 = (V_2, E_2) of G = (V, E) ∈ G_rec(Z³) is a planar 'rectangular' graph belonging to G_rec(Z²). Note that any subgraph G_ν̃ ∈ G_rec(V) is an interval sub-DAG of G = (V, E) in the sense of Definition 5 (i). From Proposition 1 we obtain the following corollary.
Corollary 4.
Let G = (V, E) ∈ G_rec(Z^ν). Any subgraph G_ν̃ ∈ G_rec(V), 1 ≤ ν̃ < ν, can be obtained by applying an MDA to G.
We designate Y_1, Y_3, …, Y_{2ν−1} the 'outgoing' and Y_2, Y_4, …, Y_{2ν} the 'incoming' clique variables (see Figure 3). Let π be a generic clique distribution of the random vector (Y_1, Y_2, …, Y_{2ν−1}, Y_{2ν}) on the edges incident to v ∈ Z^ν. We use the notation π_A(y_A) = π(Y_i = y_i, i ∈ A), y_A = (y_i, i ∈ A), A ⊂ A_ν := {1, …, 2ν}, A ≠ ∅. Here A_ν = A_ν^out ∪ A_ν^in, A_ν^out := {1, 3, …, 2ν−1}, A_ν^in := {2, 4, …, 2ν}.
The compatibility and consistency relations in Theorem 2 read as follows in the present setting: for any y,
π_i(y) = π_{i+1}(y),   i = 1, 3, …, 2ν−1,   (45)
and
π_{A_ν^out}(y_{A_ν^out}) = ∏_{i ∈ A_ν^out} π_i(y_i),   π_{A_ν^in}(y_{A_ν^in}) = ∏_{i ∈ A_ν^in} π_i(y_i).   (46)
Let Π_ν be the class of all discrete probability distributions π on R^{2ν} satisfying (45) and (46).
Corollary 5.
Let Y_E = {Y_e; e ∈ E} be a (consistent) Markov edge process on a DAG G = (V, E) ∈ G_rec(Z^ν) with clique distribution π ∈ Π_ν. The restriction
Y_{Ẽ} := {Y_e; e ∈ Ẽ},   Ẽ = E ∩ H_ν̃,   (47)
of Y_E to any ν̃-dimensional hyperplane H_ν̃ in (40) is a consistent Markov edge process on a DAG G_ν̃ = (Ṽ, Ẽ) ∈ G_rec(Z^ν̃) with generic clique distribution π̃(y_{A_ν̃}) := π(y_{A_ν̃}), π̃ ∈ Π_ν̃. In particular, the restriction
Y_{E_1} := {Y_e; e ∈ E_1},   E_1 = E ∩ H_1,   (48)
of Y_E to a one-dimensional hyperplane H_1 in (40) is a reversible Markov chain with the probability distribution
P(Y_{e_1} = y_1, …, Y_{e_n} = y_n) = π_1(y_1) π_1(y_2 | y_1) ⋯ π_1(y_n | y_{n−1}),   (49)
where π_1(y_1, y_2) := π_{12}(y_1, y_2) and π_1(y_1 | y_2) = π_1(y_1, y_2)/π_1(y_2), π_1(y_2) > 0, are the conditional probabilities.
Example 3.
Broken line process. Let ν = 2 and let Y_i, i = 1, 2, 3, 4, take integer values Y_i ∈ N = {0, 1, …} with joint distribution
π(y_1, y_2, y_3, y_4) = (1 − λ²)(1 − λ)² λ^{y_1+y_2+y_3+y_4−|y_3−y_1|} 1(y_1 − y_3 = y_4 − y_2),   (50)
y_i ∈ N, i = 1, 2, 3, 4, where 0 < λ < 1 is a parameter. Note that for y_1 ≥ y_3,
Σ_{y_2, y_4 ∈ N} π(y_1, y_2, y_3, y_4) = (1 − λ²)(1 − λ)² λ^{2y_3} Σ_{y_2, y_4 ∈ N} λ^{y_2+y_4} 1(y_1 − y_3 = y_4 − y_2) = (1 − λ)² λ^{y_1+y_3},
and the last equality is valid for y_1 < y_3, too. Therefore, π in (50) is a probability distribution on N⁴ whose marginals π_13 and π_24 are written as products of geometric distributions with parameter λ. We see that (50) belongs to Π_2 and satisfies (45) and (46) of Corollary 5. The corresponding edge process, called the discrete broken line process, was studied by Rollo et al. [25] and Sidoravicius et al. [26] in connection with planar Bernoulli first passage percolation. The discrete broken line process can be described as an evolution of particles moving (horizontally or vertically) with constant velocity until collision with another particle and dying upon collision, with independent immigration of pairs of particles. An interesting and challenging open problem is the extension of the broken line process to higher dimensions, particularly to ν = 3 or G ∈ G_rec(Z³).
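As a quick numerical sanity check (ours), the following sketch verifies, by truncating the sums over N, that the marginal π_13 of (50) factors into geometric terms as required by (46); the check for π_24 is analogous:

```python
# Truncated numerical check of the product form of the (y1, y3)-marginal of the
# broken-line clique distribution (50); lam is the parameter lambda, N truncates the sums.
lam, N = 0.4, 60

def pi(y1, y2, y3, y4):
    if y1 - y3 != y4 - y2:
        return 0.0
    return (1 - lam**2) * (1 - lam)**2 * lam**(y1 + y2 + y3 + y4) * lam**(-abs(y3 - y1))

def geom(y):                           # geometric marginal (1 - lam) * lam**y
    return (1 - lam) * lam**y

for y1, y3 in [(0, 0), (1, 3), (2, 1)]:
    marg13 = sum(pi(y1, y2, y3, y4) for y2 in range(N) for y4 in range(N))
    print(y1, y3, marg13, geom(y1) * geom(y3))   # the two numbers should (nearly) agree
```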
Definition 9.
We call an Arak model a binary Markov edge process Y_E = {Y_e; e ∈ E} on a DAG G = (V, E) ∈ G_rec(Z^ν) with clique distribution π ∈ Π_ν.
We also use the same terminology for the restriction of an Arak model to a sub-DAG G′ ∈ S(G), G ∈ G_rec(Z^ν), and speak about an Arak model on Z^ν, since Y_E extends to the infinite DAG (Z^ν, E(Z^ν)); the extension Y = {Y_e; e ∈ E(Z^ν)} is a stationary binary random field whose restriction coincides with Y_E. Note that when (Y_1, …, Y_{2ν}) ∈ {0, 1}^{2ν}, the clique distribution π is determined by the 2^{2ν} − 1 probabilities
p_A := P(Y_i = 1, i ∈ A),   A ⊂ A_ν, A ≠ ∅.   (51)
In particular, for ν = 3 the compatibility and consistency relations in (45) and (46) read as
p_1 = p_2,   p_3 = p_4,   p_5 = p_6,   p_135 = p_1 p_3 p_5,   p_13 = p_1 p_3,   p_15 = p_1 p_5,   p_35 = p_3 p_5,   p_246 = p_2 p_4 p_6,   p_24 = p_2 p_4,   p_26 = p_2 p_6,   p_46 = p_4 p_6.
The resulting consistent Markov edge process on Z³ depends on 63 − 3 − 8 = 52 parameters. In the lattice-isotropic case it depends on nine parameters p_1, p_13, p_14, p_135, p_123, p_1234, p_1345, p_12345, p_123456 satisfying two consistency equations, p_135 = p_1³ and p_13 = p_1², resulting in a seven-parameter isotropic binary edge process communicated to the author by T. Arak.
Denote by v_i := (0, …, 0, 1, 0, …, 0) ∈ R^ν (with the 1 in the i-th coordinate), 1 ≤ i ≤ ν, the unit vectors in Z^ν, and by e_i = (v → v + v_i), τ_n e_i := (v + n v_i → v + (n+1) v_i) the edges parallel to the vectors v_i, 1 ≤ i ≤ ν. The following corollary is a consequence of Corollary 5 and the formula for the transition probabilities of a binary Markov chain in Feller [27] (Ch. 16.2).
Corollary 6.
The covariance function of an Arak model. Let Y_E be an Arak model on a DAG G = (V, E) ∈ G_rec(Z^ν). Then for any 1 ≤ i ≤ ν, n ∈ N, e_i ∈ E, τ_n e_i ∈ E,
E[Y_{e_i} Y_{τ_n e_i}] = p_{2i}² + p_{2i}(1 − p_{2i}) ((p_{2i−1,2i} − p_{2i}²)/(p_{2i}(1 − p_{2i})))ⁿ,
where p_{2i} = P(Y_{e_i} = 1) = P(Y_{2i−1} = 1) and p_{2i−1,2i} = P(Y_{e_i} = Y_{τ_1 e_i} = 1) = P(Y_{2i−1} = Y_{2i} = 1) are as in (51).
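A short numerical check (ours; the values of p and p_pair are hypothetical) of the covariance formula in Corollary 6, comparing it with the n-step transition probabilities of the corresponding two-state Markov chain:

```python
import numpy as np

# Check of the covariance formula in Corollary 6: E[Y_0 Y_n] for a stationary binary
# Markov chain with P(Y = 1) = p and P(Y_0 = Y_1 = 1) = p_pair (hypothetical values).
p, p_pair, n = 0.3, 0.15, 5

# transition matrix of the reversible two-state chain (states 0, 1)
P11 = p_pair / p
P01 = (p - p_pair) / (1 - p)
T = np.array([[1 - P01, P01], [1 - P11, P11]])

direct = p * np.linalg.matrix_power(T, n)[1, 1]            # P(Y_0 = 1) * P(Y_n = 1 | Y_0 = 1)
rho = (p_pair - p**2) / (p * (1 - p))
formula = p**2 + p * (1 - p) * rho**n                      # right-hand side of the formula
print(direct, formula)                                     # the two values agree
```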
Remark 6.
The conditional probabilities in Corollary 1 (22) for an Arak model can be expressed through p A in (51). Particularly, an Arak model in dimension ν = 2 is a Bayesian network on the line graph of ( Z 2 , E ( Z 2 ) ) if
p 12 2 = p 1234 , p 123 2 = p 1 2 p 1234 .
In the two-dimensional case ν = 2, the Arak model on a DAG G = (V, E) ∈ G_rec(Z²) is determined by the clique distribution π of (Y_1, Y_2, Y_3, Y_4) in Figure 3, middle, with probabilities p_1 = P(Y_1 = 1), …, p_1234 = P(Y_1 = ⋯ = Y_4 = 1) satisfying the four conditions:
p_1 = p_2,   p_3 = p_4,   p_13 = p_1 p_3,   p_24 = p_2 p_4.   (54)
The resulting edge process depends on 11 = (15 − 4) parameters. In the lattice-isotropic case, we have five parameters p_1, p_12, p_13, p_123, p_1234 satisfying a single condition p_13 = p_1², leading to a four-parameter consistent isotropic binary Markov edge process on Z². Some special cases of the parameters in (54) are discussed in Examples 4–6 below.
The evolution of the particle system. Below, we describe the binary edge process Y_E introduced above on a DAG G = (V, E) ∈ G_rec(Z²), V = {(t, s) : 1 ≤ t ≤ m, 1 ≤ s ≤ n}, in terms of a particle system evolution. The description becomes somewhat simpler by embedding G into Ḡ = (V̄, Ē), where
V̄ = V ∪ ∂_1^0V ∪ ∂_2^0V ∪ ∂_1^1V ∪ ∂_2^1V,
where ∂_1^0V = {(1, 0), …, (m, 0)}, ∂_2^0V = {(0, 1), …, (0, n)}, ∂_1^1V = {(1, n+1), …, (m, n+1)} and ∂_2^1V = {(m+1, 1), …, (m+1, n)} are boundary sites (sources or sinks of the extended graph Ḡ), as shown in Figure 4. The edge process Y_E is obtained from Y_Ē = {Ȳ_e : e ∈ Ē} as Y_E = {Ȳ_e : e ∈ E}.
The presence of a particle on an edge e = (v → v′) ∈ Ē (in the sense explained in Section 3) is identified with Ȳ_e = 1. A particle moving on a horizontal or vertical edge is termed horizontal or vertical, respectively. The evolution of particles is described by the following rules:
(p0)
Particles enter the boundary edges ∂_1^0E = {((1, 0) → (1, 1)), …, ((m, 0) → (m, 1))} (vertically) and ∂_2^0E = {((0, 1) → (1, 1)), …, ((0, n) → (1, n))} (horizontally), independently of each other, with respective probabilities p_3 and p_1.
(p1)
Particles move along the directed edges of Ḡ independently of each other until collision with another moving particle. A vertical particle entering an empty site v ∈ V undergoes one of the following transformations:
(i)
It leaves v as a vertical particle, with probability (p_34 − p_134 − p_234 + p_1234)/p_3(1 − p_1);
(ii)
It changes direction at v to horizontal, with probability (p_14 − p_124 − p_134 + p_1234)/p_3(1 − p_1);
(iii)
It branches at v into two particles moving in different directions, with probability (p_134 − p_1234)/p_3(1 − p_1);
(iv)
It dies at v, with probability (p_4 − p_14 − p_24 − p_34 + p_124 + p_134 + p_234 − p_1234)/p_3(1 − p_1).
Similarly, a horizontal particle entering an empty site v ∈ V undergoes the transformations (i)–(iv) with respective probabilities (p_12 − p_123 − p_124 + p_1234)/p_1(1 − p_3), (p_23 − p_123 − p_234 + p_1234)/p_1(1 − p_3), (p_123 − p_1234)/p_1(1 − p_3) and (p_2 − p_12 − p_23 − p_24 + p_123 + p_124 + p_234 − p_1234)/p_1(1 − p_3).
(p2)
Two particles (one horizontal and one vertical) entering v ∈ V:
(i)
both die, with probability (p_24 − p_124 − p_234 + p_1234)/p_1 p_3; or
(ii)
the horizontal one survives and the vertical one dies, with probability (p_124 − p_1234)/p_1 p_3; or
(iii)
the vertical one survives and the horizontal one dies, with probability (p_234 − p_1234)/p_1 p_3; or
(iv)
both particles survive, with probability p_1234/p_1 p_3.
(p3)
At an empty site v ∈ V (no particle enters v), one of the following events occurs:
(i)
A single horizontal particle is born, with probability (p_1 − p_12 − p_13 − p_14 + p_123 + p_124 + p_134 − p_1234)/(1 − p_1)(1 − p_3);
(ii)
A single vertical particle is born, with probability (p_3 − p_13 − p_23 − p_34 + p_123 + p_134 + p_234 − p_1234)/(1 − p_1)(1 − p_3);
(iii)
No particle is born, with probability (1 − p_1 − p_2 − p_3 − p_4 + p_12 + p_13 + p_14 + p_23 + p_24 + p_34 − p_123 − p_124 − p_134 − p_234 + p_1234)/(1 − p_1)(1 − p_3);
(iv)
Two particles (one horizontal and one vertical) are born, with probability (p_13 − p_123 − p_134 + p_1234)/(1 − p_1)(1 − p_3).
The above list provides a complete description of the particle transformations and the transition probabilities of the edge process in terms of the parameters p_1, …, p_1234.
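To make the particle description concrete, here is a small simulation sketch (ours, not from the paper). The clique distribution π over {0, 1}⁴ used below is a hypothetical member of Π_2 built so that (54) holds: with probability α the outgoing pair copies the incoming one, and otherwise it is resampled independently; outgoing edges are then sampled site by site in lexicographic order given the incoming ones, in the spirit of rules (p0)–(p3).

```python
import random
from itertools import product

# Sketch of an Arak-type edge process on a rectangle of Z^2 (illustration only).
# (Y1, Y2, Y3, Y4) = (out-horizontal, in-horizontal, out-vertical, in-vertical); with
# probability alpha the incoming configuration is copied through the site, otherwise the
# outgoing edges are resampled independently.  This keeps p1 = p2, p3 = p4, p13 = p1*p3,
# p24 = p2*p4, i.e., condition (54).
p1, p3, alpha = 0.4, 0.3, 0.6
bern = {1: {1: p1, 0: 1 - p1}, 3: {1: p3, 0: 1 - p3}}

def pi(y1, y2, y3, y4):
    copy = 1.0 if (y1, y3) == (y2, y4) else 0.0
    return bern[1][y2] * bern[3][y4] * (alpha * copy + (1 - alpha) * bern[1][y1] * bern[3][y3])

def sample_out(y2, y4):
    """Sample the outgoing pair (Y1, Y3) from pi( . , . | y2, y4)."""
    weights = {(y1, y3): pi(y1, y2, y3, y4) for y1, y3 in product((0, 1), repeat=2)}
    r = random.random() * sum(weights.values())
    for out, w in weights.items():
        r -= w
        if r <= 0:
            return out
    return out

def simulate(m, n, seed=0):
    random.seed(seed)
    H = {}  # H[(t, s)] = value on the horizontal edge entering site (t, s) from the left
    V = {}  # V[(t, s)] = value on the vertical edge entering site (t, s) from below
    for t in range(1, m + 1):
        V[(t, 1)] = int(random.random() < p3)        # rule (p0), bottom boundary
    for s in range(1, n + 1):
        H[(1, s)] = int(random.random() < p1)        # rule (p0), left boundary
    for s in range(1, n + 1):                        # lexicographic sweep over sites
        for t in range(1, m + 1):
            y1, y3 = sample_out(H[(t, s)], V[(t, s)])
            H[(t + 1, s)] = y1                       # outgoing horizontal edge
            V[(t, s + 1)] = y3                       # outgoing vertical edge
    return H, V

H, V = simulate(6, 4)
print(sum(H.values()), sum(V.values()))  # occupied horizontal/vertical edges (incl. outgoing boundary)
```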
Example 4.
Here (Y_1, Y_2, Y_3, Y_4) are independent r.v.s with P(Y_1 = 1) = P(Y_2 = 1) = p_1, P(Y_3 = 1) = P(Y_4 = 1) = p_3, so that p_12 = p_1², p_34 = p_3², …, p_1234 = p_1² p_3². Accordingly, the Y_e, e ∈ E, take independent values on the edges of a 'rectangular' graph G = (V, E), with generally different probabilities p_1 and p_3 for horizontal and vertical edges. In terms of the particle evolution, this means that the 'outgoing' particles at each site v ∈ V are independent of each other and of the 'incoming' ones.
Example 5.
Here p_12 = p_1, implying P(Y_1 ≠ Y_2) = 0, i.e., the incoming and outgoing horizontal edge variables at each site coincide. The corresponding edge process Y_E in this case takes constant values on each horizontal line of the rectangle V. Similarly, the case p_34 = p_3 leads to Y_E taking constant values on each vertical line of V.
Example 6.
Here P(Y_i ≠ Y_j, j ≠ i, j = 1, 2, 3, 4) = 0, i = 1, 2, 3, 4, meaning that none of the four binary edge variables can differ from the remaining three. This implies E[Y_1 Y_2 Y_3(1 − Y_4)] = ⋯ = E[(1 − Y_1)Y_2 Y_3 Y_4] = 0, hence E[Y_1 Y_2 Y_3] = ⋯ = E[Y_2 Y_3 Y_4] = E[Y_1 Y_2 Y_3 Y_4], or
p_123 = p_124 = p_134 = p_234 = p_1234,   (55)
and E[(1 − Y_1)(1 − Y_2)(1 − Y_3)Y_4] = E[(1 − Y_1)(1 − Y_2)(1 − Y_4)Y_3] = E[(1 − Y_1)(1 − Y_3)(1 − Y_4)Y_2] = E[(1 − Y_2)(1 − Y_3)(1 − Y_4)Y_1] = 0, or
p_14 = p_23,   p_12 = p_1 − p_13 − p_14 + 2p_1234,   p_34 = p_3 − p_13 − p_23 + 2p_1234.   (56)
The resulting four-parameter model is determined by p_1, p_3, p_23, p_1234. In the isotropic case we have two parameters p_1, p_1234, since p_3 = p_1, p_23 = p_1². This case does not allow for the death, birth or branching of a single particle; particles move independently until collision with another moving particle, upon which the two colliding particles either both die or cross each other, with probabilities defined in (p1)–(p3) above.

5. Contour Edge Process Induced by Pickard Model

Consider a binary site model X_V = {X_v; v ∈ V} on the rectangular graph G = (V, E),
V = {v ∈ Z² : v_− ⪯ v ⪯ v_+},   E = {(v → v′); v, v′ ∈ V, v′ = v + (0, 1), v + (1, 0), v + (1, 1)}.   (57)
Let Z ˜ 2 : = Z 2 + ( 1 / 2 , 1 / 2 ) be the shifted lattice, v ˜ = v + ( 1 / 2 , 1 / 2 ) , v ˜ + = v + ( 1 / 2 , 1 / 2 ) , v ˜ ± Z ˜ 2 and G ˜ = ( V ˜ , E ˜ ) G rec ( Z 2 ) be the rectangular graph with V ˜ = { v ˜ Z ˜ 2 : v ˜ v ˜ v ˜ + } , E ˜ = { e ˜ = ( v ˜ v ˜ ) E ( Z ˜ 2 ) , v ˜ , v ˜ V ˜ } . For any horizontal or vertical edge e = ( v v ) E E ( Z 2 ) we designate
e ˜ ( e ) = ( v ˜ v ˜ ) E ˜
the edge of G ˜ , which ‘perpendicularly crosses e at the middle’ and is defined formally by
v ˜ : = v + ( 1 / 2 , 1 / 2 ) , v = v + ( 1 , 0 ) , v + ( 1 / 2 , 1 / 2 ) , v = v + ( 0 , 1 ) , v ˜ : = v ˜ + ( 0 , 1 ) , v = v + ( 1 , 0 ) , v ˜ + ( 1 , 0 ) , v = v + ( 0 , 1 ) .
A Boolean function β = β(i, j), i, j ∈ {0, 1}, takes two values, 0 or 1. The class of such Boolean functions has 2⁴ = 16 elements.
Definition 10.
Let β and β′ be two Boolean functions. A contour edge process induced by a binary site model X_V on G = (V, E) in (57) and corresponding to β, β′ is a binary edge process on G̃ = (Ṽ, Ẽ) defined by
Y_{ẽ(e)} := β(X_v, X_{v′}) if v′ = v + (1, 0),   and   Y_{ẽ(e)} := β′(X_v, X_{v′}) if v′ = v + (0, 1).   (59)
Probably the most natural contour process occurs in the case of the Boolean functions
β(i, j) = β′(i, j) := 1(i ≠ j),   i, j = 0, 1,   (60)
visualized by 'drawing an edge' ẽ ∈ Ẽ across neighboring 'occupied' and 'empty' sites v, v′ in the site process. The introduced 'edges' form 'contours' between connected components of the random set {v ∈ V : X_v = 1}. Contour edge processes are usually defined for site models on undirected lattices and are well known in statistical physics [28] (e.g., for the Ising model), where they represent boundaries between ±1 'spins' and are very helpful in the rigorous study of phase transitions. In our paper, a contour process is viewed as a bridge between binary site and edge processes on the DAG G = (V, E) in (57), in particular between the Pickard and Arak models. Even in this special case, our results are limited to the Boolean functions in (60), raising many open questions for future work. Some of these questions are mentioned at the end of the paper.
Theorem 3.
The contour edge process Y_{Ẽ} in (59) and (60) induced by a non-degenerate Pickard model coincides with the Arak model on G̃ if and only if X_V is symmetric with respect to X_v ↦ 1 − X_v. The latter condition is equivalent to
ϕ_A = 1/2,   ϕ_ABD = ϕ_ACD,   ϕ_BC − 2ϕ_ABC = ϕ_AD − 2ϕ_ABD.   (61)
Moreover, Y_{Ẽ} coincides with the Arak model in Example 6 with
p_1 = p_2 = 1 − 2ϕ_AC,   p_3 = p_4 = 1 − 2ϕ_AB,   p_23 = 1/2 − ϕ_AB − ϕ_AC + ϕ_AD,   p_1234 = 1 − 2ϕ_AB − 2ϕ_AC + 2ϕ_ABCD.   (62)
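For illustration, a small helper (ours) implementing the parameter map (62); the Pickard moments below are hypothetical placeholders:

```python
# Parameters of the induced Arak contour model via (62), computed from the moments of a
# symmetric binary Pickard model; the numeric inputs below are hypothetical.
def arak_parameters(phi_AB, phi_AC, phi_AD, phi_ABCD):
    p1 = 1 - 2 * phi_AC          # = p2
    p3 = 1 - 2 * phi_AB          # = p4
    p23 = 0.5 - phi_AB - phi_AC + phi_AD
    p1234 = 1 - 2 * phi_AB - 2 * phi_AC + 2 * phi_ABCD
    return p1, p3, p23, p1234

print(arak_parameters(phi_AB=0.35, phi_AC=0.30, phi_AD=0.28, phi_ABCD=0.20))
```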
Proof. 
Necessity. Let Y_{Ẽ} in (59) and (60) agree with the Arak model. Accordingly, the generic clique distribution of (Ỹ_1, Ỹ_2, Ỹ_3, Ỹ_4) is determined by the generic family distribution of (A, B, C, D) as
Ỹ_1 = 1(B ≠ D),   Ỹ_2 = 1(A ≠ C),   Ỹ_3 = 1(C ≠ D),   Ỹ_4 = 1(A ≠ B),   (63)
see Figure 3. From (15), we find that
p 1 = P ( Y ˜ 1 = 1 ) = 2 ( ϕ B ϕ B D ) = 2 ( ϕ A ϕ A C ) = P ( Y ˜ 2 = 1 ) = p 2 , p 3 = P ( Y ˜ 3 = 1 ) = 2 ( ϕ A ϕ A B ) = P ( Y ˜ 4 = 1 ) = p 4 , P ( Y ˜ 2 = Y ˜ 4 = 1 ) = p 24 = P ( A = 0 , B = C = 1 ) + P ( A = 1 , B = C = 0 ) = ϕ B C + ϕ A ϕ A B ϕ A C = P ( Y ˜ 1 = Y ˜ 3 = 1 ) .
The two first equations in (54) are satisfied by (64). Let us show that (64) implies the last two equations in (54), viz., P ( Y ˜ 2 = Y ˜ 4 = 1 ) = P ( Y ˜ 2 = 1 ) P ( Y ˜ 4 = 1 ) . Using (10), (64) and the same notation as in (17), we see that p 24 = p 2 p 4 is equivalent to the equation
4 ( a b ) ( a c ) = a b c + b c a + ( a b ) ( a c ) 1 a ,
which factorizes as ( 2 a 1 ) 2 ( a b ) ( a c ) = 0 . Hence, a = ϕ A = 1 / 2 , since a = b and a = c are excluded by the non-degeneracy of X V .
Let us show the necessity of the two other conditions in (61). Let V = { A , B , C , D , A , C } be a 2 × 1 rectangle as in Figure 4 and ( Y ˜ 1 , , Y ˜ 7 ) be the corresponding edge process, where Y ˜ i , i = 1 , 2 , 3 , 4 are as in (63) and
Y ˜ 5 = 1 ( B A ) , Y ˜ 6 = 1 ( D C ) , Y ˜ 7 = 1 ( A C ) .
The last fact implies that ( Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) and ( Y ˜ 5 , Y ˜ 6 , Y ˜ 7 ) are conditionally independent given Y ˜ 1 = 0 , 1 . Particularly,
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 1 ) P ( Y ˜ 1 = 1 ) = P ( Y ˜ 1 = Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 1 ) P ( Y ˜ 1 = Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 1 )
Figure 4. Contour edge variables in Theorem 3.
Figure 4. Contour edge variables in Theorem 3.
Mathematics 13 03368 g004
and
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 0 ) P ( Y ˜ 1 = 1 ) = P ( Y ˜ 1 = 1 , Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 0 ) P ( Y ˜ 1 = 1 , Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 0 ) .
From ϕ A = 1 / 2 and (15) we find that P ( Y ˜ 1 = 1 ) = P ( A C ) = 1 2 ϕ A C ,
P ( Y ˜ 1 = Y ˜ 2 = = Y ˜ 7 = 1 ) = P ( A = D = A = 1 , B = C = C = 0 ) + P ( A = D = A = 0 , B = C = C = 1 ) = ϕ ( A = D = 1 , B = C = 0 ) ϕ ( A = 1 | B = 0 ) ϕ ( C = 0 | A = D = 1 , B = 0 ) + ϕ ( A = D = 0 , B = C = 1 ) ϕ ( A = 0 | B = 1 ) ϕ ( C = 1 | A = D = 0 , B = 1 ) = 4 ( ϕ B C 2 ϕ A B C + ϕ A B C D ) ( ϕ A D ϕ A C D ϕ A B D + ϕ A B C D ) / ( 1 2 ϕ A C ) ,
and
P ( Y ˜ 1 = Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 1 ) = P ( Y ˜ 1 = Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 1 ) = P ( A = D = 1 , B = C = 0 ) + P ( A = D = 0 , B = C = 1 ) = ( ϕ A D ϕ A B D ϕ A C D + ϕ A B C D ) + ( ϕ B C 2 ϕ A B C + ϕ A B C D )
Hence, (65) for x : = ϕ B C 2 ϕ A B C + ϕ A B C D , y : = ϕ A D ϕ A C D ϕ A B D + ϕ A B C D leads to 4 x y = ( x + y ) 2 , or
ϕ B C 2 ϕ A B C = ϕ A D ϕ A C D ϕ A B D .
Next, consider (66). We have
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 0 ) = P ( A = C = D = 1 , B = A = C = 0 ) + P ( A = C = D = 0 , B = A = C = 1 ) = ϕ ( A = C = D = 1 , B = 0 ) ϕ ( A = 0 | B = 0 ) ϕ ( C = 0 | A = B = 0 , D = 1 ) + ϕ ( A = C = D = 0 , B = 1 ) ϕ ( A = 1 | B = 1 ) ϕ ( C = 1 | A = B = 1 , D = 0 ) = 2 ( ϕ A B D ϕ A B C D ) 2 + ( ϕ A C D ϕ A B C D ) 2 / ( 1 2 ϕ A C ) , P ( Y ˜ 1 = 1 , Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 0 ) = P ( Y ˜ 1 = 1 , Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 0 ) = ϕ A B D + ϕ A C D 2 ϕ A B C D .
Hence, (66) for x : = ϕ A B D ϕ A B C D , y : = ϕ A C D ϕ A B C D writes as 2 ( x 2 + y 2 ) = ( x + y ) 2 yielding x = y and
ϕ A B D = ϕ A C D .
Equations (67) and (68) prove (61).
Finally, let us show that (61) implies symmetry of family distribution ϕ and Pickard model X V . Indeed, ϕ ( A , B , C , D ) = ϕ ( 1 A , 1 B , 1 C , 1 D ) is equivalent to the equality of moment functions:
E A = E ( 1 A ) , , E A B C D = E ( 1 A ) ( 1 B ) ( 1 C ) ( 1 D ) .
The relation E A = 1 / 2 implies the coincidence of moment functions up to order 2: E A B = E ( 1 A ) ( 1 B ) , E A C = E ( 1 A ) ( 1 C ) , E A D = E ( 1 A ) ( 1 D ) , E B C = E ( 1 B ) ( 1 C ) so that (69) reduces to
E A B C = E ( 1 A ) ( 1 B ) ( 1 C ) , E A B D = E ( 1 A ) ( 1 B ) ( 1 D ) , E A C D = E ( 1 A ) ( 1 B ) ( 1 C ) , E A B C D = E ( 1 A ) ( 1 B ) ( 1 C ) ( 1 D ) .
The relation E ( 1 A ) ( 1 B ) ( 1 C ) = 1 3 / 2 + ϕ A B + ϕ A C + ϕ B C ϕ A B C = ϕ A B C = E A B C follows from (15), whereas the remaining three relations in (70) use (15) and (61).
It remains to show the symmetry of Pickard model X V . Write x ^ V = { x ^ v ; v V } , x ^ v : = 1 x v { 0 , 1 } for configuration of the transformed Pickard model X ^ v : = 1 X v . Then by the definition of Bayesian network in (5),
P V ( x ^ V ) = v V ϕ v ( x ^ v | x ^ F ( v ) ) .
where ϕ v ( x ^ v | x ^ F ( v ) ) = ϕ v ( x v | x F ( v ) ) provided ϕ v ( x ^ F ( v ) ) = ϕ v ( x F ( v ) ) have the symmetry property. The latter property is valid for our family distribution ϕ , implying P V ( x ^ V ) = P V ( x V ) and ending the proof of the necessity part of Theorem 3.
Sufficiency. As shown above, the symmetry P V ( x ^ V ) = P V ( x V ) implies (61) and (54) for the distribution π ˜ of the quadruple ( Y ˜ 1 , Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) in (63). We need to show that Y E ˜ in (59) and (60) is a Markov edge process with clique distribution π ˜ following Definition 7.
Recall that $(1,1)$ is the left bottom point of $V$ in (57), and let
$$Y_{\tilde E} := \Bigl\{\, y_{\tilde E} = \{y_{\tilde e(e)}\} \in \{0,1\}^{\tilde E} :\ \exists\, x_V = \{x_v;\, v \in V\} \in \{0,1\}^V \ \text{s.t.}\ y_{\tilde e(e)} = \begin{cases} \beta(x_{v'}, x_v), & e = (v' \to v),\ v' = v - (1,0), \\ \beta'(x_{v'}, x_v), & e = (v' \to v),\ v' = v - (0,1) \end{cases} \Bigr\} \tag{72}$$
be the set of all configurations of the contour model. It is clear that any $y_{\tilde E} = \{y_{\tilde e};\, \tilde e \in \tilde E\} \in Y_{\tilde E}$ determines $x_V = \{x_v;\, v \in V\} \in \{0,1\}^V$ in (72) uniquely up to the symmetry transformation: there exist two and only two configurations, $x_V = \{x_v;\, v \in V\}$ and $\hat x_V = \{\hat x_v;\, v \in V\}$ with $\hat x_v = 1 - x_v$, $x_{(1,1)} = 1$, $\hat x_{(1,1)} = 0$, satisfying (72).
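To illustrate the two-to-one correspondence behind (72), here is a minimal reconstruction sketch. It assumes, purely for illustration, that $\beta$ and $\beta'$ are both the disagreement indicator $\beta(i,j) = \beta'(i,j) = \mathbf{1}\{i \ne j\}$ (consistent with $P(\tilde Y_1 = 1) = P(A \ne C)$ above, but not necessarily identical to the Boolean functions fixed in (60)); the array layout is likewise illustrative.

```python
import numpy as np

def reconstruct_sites(y_h, y_v, x11):
    """Recover a site configuration x on an m x n grid from contour edge values,
    assuming beta(i, j) = beta'(i, j) = 1{i != j} (illustrative assumption).

    y_h[i, j] -- edge value between sites (i, j) and (i, j+1) (horizontal step),
    y_v[i, j] -- edge value between sites (i, j) and (i+1, j) (vertical step),
    x11       -- value assigned to the left bottom site (index (0, 0) here).
    """
    m, n = y_v.shape[0] + 1, y_h.shape[1] + 1
    x = np.zeros((m, n), dtype=int)
    x[0, 0] = x11
    # First row: a site flips its left neighbour's value iff the horizontal
    # edge between them is a contour edge.
    for j in range(1, n):
        x[0, j] = x[0, j - 1] ^ int(y_h[0, j - 1])
    # Remaining rows: propagate upwards along the vertical edges.
    for i in range(1, m):
        x[i, :] = x[i - 1, :] ^ y_v[i - 1, :].astype(int)
    return x
```

Starting from $x11 = 0$ instead flips every site, which is the only other site configuration compatible with the same edge values; the edge configurations for which the remaining (unused) edges are also consistent with the reconstructed sites are exactly those in $Y_{\tilde E}$.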
Then, by the definition of the Pickard model and the symmetry of $\phi$,
$$P\bigl(Y_{\tilde e} = y_{\tilde e};\ \tilde e \in \tilde E\bigr) = P_V(x_V) + P_V(\hat x_V) = 2 \prod_{v \in V} \phi\bigl(x_v \mid x_{\mathrm{pa}(v)}\bigr), \tag{73}$$
where
$$\phi\bigl(x_v \mid x_{\mathrm{pa}(v)}\bigr) = \frac{\phi(A, B, C, D)}{\phi(A, B, C)} \tag{74}$$
for $x_v = D$, $x_{v-(1,1)} = A$, $x_{v-(1,0)} = C$, $x_{v-(0,1)} = B$, at sites $v \in V$ whose family $F(v)$ is the four-point family of Figure 1, right. From the definitions in (63), we see that
$$\begin{aligned}
P(\tilde Y_1, \tilde Y_2, \tilde Y_3, \tilde Y_4) &= \phi(A, B, C, D) + \phi(1-A, 1-B, 1-C, 1-D) = 2\phi(A, B, C, D), \\
P(\tilde Y_2, \tilde Y_4) &= \phi(A, B, C) + \phi(1-A, 1-B, 1-C) = 2\phi(A, B, C), \\
P(\tilde Y_4) &= \phi(A, B) + \phi(1-A, 1-B) = 2\phi(A, B), \\
P(\tilde Y_2) &= \phi(A, C) + \phi(1-A, 1-C) = 2\phi(A, C).
\end{aligned}$$
This and (74) yield (the conditional probability below being the ratio $P(\tilde Y_1, \tilde Y_2, \tilde Y_3, \tilde Y_4)/P(\tilde Y_2, \tilde Y_4) = \phi(A, B, C, D)/\phi(A, B, C)$)
$$\phi\bigl(x_v \mid x_{\mathrm{pa}(v)}\bigr) = \tilde\pi\bigl(y_{\tilde e_1(v)}, y_{\tilde e_3(v)} \mid y_{\tilde e_2(v)}, y_{\tilde e_4(v)}\bigr) \quad \text{for every such } v \in V, \tag{75}$$
where $\tilde e_1(v) = (v - (1/2, 1/2) \to v + (1/2, -1/2))$ and $\tilde e_3(v) = (v - (1/2, 1/2) \to v + (-1/2, 1/2))$ are out-edges, and $\tilde e_2(v) = (v - (3/2, 1/2) \to v - (1/2, 1/2))$, $\tilde e_4(v) = (v - (1/2, 3/2) \to v - (1/2, 1/2))$ are in-edges of $\tilde{\mathbb{Z}}^2$. An analogous relation to (75) holds for the remaining sites $v \ne (1,1)$ of $V$, whereas for $v = (1,1)$ we have $\phi(x_v \mid x_{\mathrm{pa}(v)}) = \phi(x_v) = 1/2$, cancelling the factor 2 on the r.h.s. of (73). The above argument leads to the desired expression
$$P\bigl(Y_{\tilde e} = y_{\tilde e};\ \tilde e \in \tilde E\bigr) = \prod_{\tilde v \in \tilde V} \tilde\pi\bigl(y_{E_{\mathrm{out}}(\tilde v)} \mid y_{E_{\mathrm{in}}(\tilde v)}\bigr)
of the contour model as the Arak model with clique distribution $\tilde\pi$ given by the distribution of $(\tilde Y_1, \tilde Y_2, \tilde Y_3, \tilde Y_4)$ in (63).
Let us show that the latter model coincides with Example 6 and its parameters satisfy (62), (55) and (56). Indeed, (63) implies $P(\tilde Y_i \ne \tilde Y_j,\ j \ne i,\ j = 1, 2, 3, 4) = 0$, $i = 1, 2, 3, 4$ (going around the four sides of a unit cell, the number of disagreements of the site values is even, so no single clique variable can differ from the other three), and hence (55), since $p_{23} = p_{14} = \phi_{AD} - \phi_{ABD}$ follows from the second equation in (61). Finally, (62) is a consequence of (17). Theorem 3 is proved. □
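As a computational aside, the product of clique conditionals that defines a Markov edge process, as in the last display of the proof, is straightforward to evaluate once the conditionals are given; a minimal sketch with illustrative names and data layout:

```python
def edge_process_probability(cliques, pi_tilde, y):
    """Probability of an edge configuration y under a Markov edge process.

    cliques  -- dict: dual vertex -> (tuple of in-edges, tuple of out-edges)
    pi_tilde -- dict: dual vertex -> function (y_out, y_in) -> conditional prob.
    y        -- dict: edge -> value in {0, 1}
    """
    prob = 1.0
    for v, (in_edges, out_edges) in cliques.items():
        y_in = tuple(y[e] for e in in_edges)
        y_out = tuple(y[e] for e in out_edges)
        prob *= pi_tilde[v](y_out, y_in)
    return prob
```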
Remark 7.
(i) 
The Boolean functions $\beta(i,j)$, $\beta'(i,j)$ in (60) are invariant under the symmetry $(i,j) \mapsto (1-i, 1-j)$, $i, j = 0, 1$. This fact seems to be related to the symmetry of the Pickard model in Theorem 3. It is of interest to extend Theorem 3 to non-symmetric Boolean functions, in an attempt to completely clarify the relation between the Pickard and Arak models in dimension 2.
(ii) 
A contour model in dimension $\nu \ge 3$ is usually formed by drawing a $(\nu-1)$-dimensional plaquette perpendicular to the edge between neighboring sites $v, v'$ with $|v - v'| = 1$, $X_v \ne X_{v'}$, of a site model $X$ in $\mathbb{Z}^\nu$ [28,29]. It is possible that a natural extension of the Pickard model in higher dimensions is a plaquette model (i.e., a random field indexed by plaquettes rather than sites in $\mathbb{Z}^\nu$) which satisfies a consistency property similar to (11) and is related to the Arak model in $\mathbb{Z}^\nu$. Plaquette models may be a useful and realistic alternative to site models in crystallography [30].
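To make the construction in (ii) concrete, here is a minimal sketch that lists, for a 0-1 site configuration on a finite grid, the plaquettes separating disagreeing nearest neighbours; each plaquette is recorded here by the lattice edge it crosses (the names and data layout are illustrative, not the paper's notation):

```python
import numpy as np
from itertools import product

def contour_plaquettes(x):
    """Return the dual (nu-1)-dimensional plaquettes of a 0-1 site array x,
    one per pair of nearest neighbours v, v' with x[v] != x[v'];
    a plaquette is identified with the edge (v, v') it is dual to."""
    plaquettes = []
    for v in product(*(range(n) for n in x.shape)):
        for axis in range(x.ndim):
            w = list(v)
            w[axis] += 1
            if w[axis] < x.shape[axis] and x[v] != x[tuple(w)]:
                plaquettes.append((v, tuple(w)))
    return plaquettes

# Example: a single disagreeing site in a 3 x 3 x 3 block of Z^3 is separated
# from its neighbours by the 6 faces of the unit cube around it.
x = np.zeros((3, 3, 3), dtype=int)
x[1, 1, 1] = 1
print(len(contour_plaquettes(x)))  # 6
```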

6. Conclusions

We introduce and systematically explore a new class of processes: consistent Markov edge processes defined on directed acyclic graphs (DAGs). The work generalizes and combines previously known models, such as Pickard's Markov random fields and Arak's probabilistic graph models, offering a unified approach to their study. The main result, Theorem 2, which establishes sufficient conditions for consistency over the family of subgraphs obtained by the mesh dismantling algorithm (MDA), is a key contribution and opens the way to applications in causal inference and decision theory.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

I am grateful to the anonymous reviewers for their useful comments and suggestions. This work was inspired by collaboration and personal communication with Taivo Arak (1946–2007). I also thank Mindaugas Bloznelis for his interest and encouragement.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Neapolitan, R.E. Learning Bayesian Networks; Prentice Hall: Upper Saddle River, NJ, USA, 2004.
2. Pearl, J. Causality: Models, Reasoning, and Inference; Cambridge University Press: Cambridge, UK, 2000.
3. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010.
4. Besag, J. Spatial interaction and the statistical analysis of lattice systems (with Discussion). J. R. Stat. Soc. Ser. B (Methodol.) 1974, 36, 192–236.
5. Grimmett, G. Probability on Graphs, 2nd ed.; Cambridge University Press: Cambridge, UK, 2018.
6. Kindermann, R.; Snell, J.L. Markov Random Fields and Their Applications; Contemporary Mathematics, Vol. 1; American Mathematical Society: Providence, RI, USA, 1980.
7. Lauritzen, S. Graphical Models; Oxford University Press: Oxford, UK, 1996.
8. Champagnat, F.; Idier, J.; Goussard, Y. Stationary Markov random fields on a finite rectangular lattice. IEEE Trans. Inf. Theory 1998, 44, 2901–2916.
9. Goutsias, J. Mutually compatible Gibbs random fields. IEEE Trans. Inf. Theory 1989, 35, 1233–1249.
10. Pickard, D.K. Unilateral Markov fields. Adv. Appl. Probab. 1980, 12, 655–671.
11. Pickard, D.K. A curious binary lattice process. J. Appl. Probab. 1977, 14, 717–731.
12. Verhagen, A.M.V. A three parameter isotropic distribution of atoms and the hard-core square lattice gas. J. Chem. Phys. 1977, 67, 5060–5065.
13. Davidson, J.; Talukder, A.; Cressie, N. Texture analysis using partially ordered Markov models. In Proceedings of the 1st International Conference on Image Processing, ICIP-94, Austin, TX, USA, 13–16 November 1994; IEEE Computer Society Press: Los Alamitos, CA, USA, 1994; pp. 402–406.
14. Gray, A.J.; Kay, J.W.; Titterington, D.M. An empirical study of the simulation of various models used for images. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 507–513.
15. Forchhammer, S.; Justesen, J. Block Pickard models for two-dimensional constraints. IEEE Trans. Inf. Theory 2009, 55, 4626–4634.
16. Justesen, J. Fields from Markov chains. IEEE Trans. Inf. Theory 2005, 51, 4358–4362.
17. Arak, T.; Clifford, P.; Surgailis, D. Point-based polygonal models for random graphs. Adv. Appl. Probab. 1993, 25, 348–372.
18. Arak, T.; Surgailis, D. Markov fields with polygonal realizations. Probab. Theory Relat. Fields 1989, 80, 543–579.
19. Surgailis, D. The thermodynamic limit of polygonal models. Acta Appl. Math. 1991, 22, 77–102.
20. Kahn, J. How many T-tessellations on k lines? Existence of associated Gibbs measures on bounded convex domains. Random Struct. Algorithms 2015, 47, 561–587.
21. Kiêu, K.; Adamczyk-Chauvat, K.; Monod, H.; Stoica, R. A completely random T-tessellation model and Gibbsian extensions. Spat. Stat. 2013, 6, 118–138.
22. Thäle, C. Arak-Clifford-Surgailis tessellations. Basic properties and variance of the total edge length. J. Stat. Phys. 2011, 144, 1329–1339.
23. Ben-Gal, I. Bayesian networks. In Encyclopedia of Statistics in Quality and Reliability; Ruggeri, F., Faltin, F., Kenett, R., Eds.; Wiley: New York, NY, USA, 2007; pp. 1–6.
24. Lauritzen, S.; Dawid, A.P.; Larsen, B.; Leimer, H. Independence properties of directed Markov fields. Networks 1990, 20, 491–505.
25. Rollo, L.T.; Sidoravicius, V.; Surgailis, D.; Vares, M.E. The discrete and continuum broken line process. Markov Process. Relat. Fields 2010, 16, 79–116.
26. Sidoravicius, V.; Surgailis, D.; Vares, M.E. Poisson broken lines' process and its application to Bernoulli first passage percolation. Acta Appl. Math. 1999, 58, 311–325.
27. Feller, W. An Introduction to Probability Theory and Its Applications; Wiley: New York, NY, USA, 1950; Volume 1.
28. Sinai, Y.G. Theory of Phase Transitions: Rigorous Results; Pergamon Press: Oxford, UK, 1982.
29. Grimmett, G. The Random Cluster Model; Springer: New York, NY, USA, 2006.
30. Enting, I.G. Crystal growth models and Ising models: Disorder points. J. Phys. C Solid State Phys. 1977, 10, 1379–1388.
Figure 1. Left: Rectangular DAG in (7); right: generic family F(v) = {v} ∪ pa(v), where pa(v) consists of the three parents of v.
Figure 2. SubDAG G′ (right) cannot be obtained from DAG G (left) by MDA.
Figure 3. Generic clique variables of Markov edge processes on Z^ν, ν = 3, 2, 1.