Linking and Cutting Spanning Trees

We consider the problem of uniformly generating a spanning tree of a connected undirected graph. This process is useful for computing statistics, namely for phylogenetic trees. We describe a Markov chain for producing these trees. For cycle graphs we prove that this approach significantly outperforms existing algorithms. For general graphs we obtain no analytical bounds, but experimental results show that the chain still converges quickly. This yields an efficient algorithm, also due to the use of fast data structures. To bound the mixing time of the chain we describe a coupling, which we analyse for cycle graphs and simulate for other graphs.


Introduction
A spanning tree A of an undirected connected graph G is a tree, i.e., a connected set of edges without cycles, that spans every vertex of G: every vertex of G occurs in some edge of A. Figure 1 shows an example. The vertexes of the graph are represented by circles; the set of vertexes is denoted by V. The edges of G are represented by dashed lines; the set of edges is denoted by E.
The edges of the spanning tree A are represented by thick grey lines. We also use V and E to mean, respectively, the size of the set V and the size of the set E, i.e., the number of vertexes and the number of edges; whenever an expression can be interpreted either as a set or as a number, the intended meaning should be clear from context. A simple procedure to build A consists in using a Union-Find data structure (Galler and Fisher, 1964) to guarantee that A does not contain a cycle. Note that these structures are strictly incremental, meaning that they can be used to detect cycles but cannot be used to remove an edge from the cycle. Therefore the only possible action is to discard the edge that creates the cycle.
Let us analyse a concrete example of the resulting distribution of spanning trees; we shall show that this distribution is not uniform. First generate a permutation p of E and then process the edges in this order. Each edge that does not produce a cycle is added to A; edges that would otherwise produce cycles are discarded and the procedure continues with the next edge in the permutation. Consider the complete graph on 4 vertexes, K4, and focus on the probability of generating a star graph, centered at the vertex labeled 1. Figure 2 illustrates the star graph. The K4 graph has 6 edges, hence there are 6! = 720 different permutations. To produce the star graph from one such permutation it is necessary that the edges (1, 2) and (1, 3) are selected before the edge (2, 3) appears; in general the edges (1, u) and (1, v) must occur before (u, v). One permutation that generates the star graph is (1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4). Now (2, 3) can be moved to the right to any of 3 different locations, so we know 4 sequences that generate the star graph. The same reasoning can be applied to (2, 4), which can be moved once to the right. In total we counted 8 different sequences that generate the star graph centered at 1.
For each of these sequences it is possible to permute the vertexes 2, 3, 4 amongst themselves, hence multiplying the previous count by 3! = 6. In total we counted 48 = 8 × 6 sequences that generate the star graph, therefore the total probability of obtaining this star graph is 48/6! = 1/15. According to Cayley's formula K4 has 4^(4−2) = 16 spanning trees, so the probability of obtaining the star graph centered at 1 should be 1/16.
Hence too many sequences are generating the star graph centered at 1.
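This count can be checked exhaustively. The sketch below (illustrative, standard library only) enumerates all 720 edge permutations of K4, applies the Union-Find procedure described above, and counts how often the star centered at 1 is produced.

```python
from itertools import permutations

def kruskal_tree(order):
    """Add each edge, in the given order, unless it closes a cycle
    (detected with a tiny Union-Find without path compression)."""
    parent = {v: v for v in range(1, 5)}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    tree = []
    for (u, v) in order:
        ru, rv = find(u), find(v)
        if ru != rv:              # the edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return set(tree)

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
star = {(1, 2), (1, 3), (1, 4)}
perms = list(permutations(edges))
hits = sum(kruskal_tree(p) == star for p in perms)
print(hits, len(perms))           # 48 of the 720 permutations: 1/15 > 1/16
```

The exhaustive count matches the 48 sequences derived above.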
In the next section we fix this bias by discarding some edge in the potential cycle, not necessarily the edge that creates it.

Main Idea
To generate a uniform spanning tree start by generating an arbitrary spanning tree A. One way to obtain this tree is to compute a depth first search in G, in which case the necessary time is O(V +E).
In general we wish the mixing time of our chain to be much smaller than O(E), especially for dense graphs. This initial tree is generated only once; subsequent trees are obtained by the randomizing process. To randomize A, repeat the following step several times. Choose an edge (u, v) from E, uniformly at random, and consider the set A ∪ {(u, v)}. If (u, v) already belongs to A the step stops; otherwise A ∪ {(u, v)} contains a cycle C. To complete the step choose an edge (u′, v′) uniformly from C \ {(u, v)} and remove it. Hence at each step the set A is transformed into the set (A ∪ {(u, v)}) \ {(u′, v′)}. An illustration of this process is shown in Figure 3. A direct way to identify the cycle C is to traverse the tree, which would then need to be represented with the adjacency list data structure. For our purposes this approach is inefficient, as it requires O(V) time per step. This computation is central to our algorithm and its complexity becomes a factor in the overall performance. Hence we will now explain how to perform this operation in only O(log V) amortized time with the link cut tree data structure.
The link cut tree (LCT) is a data structure that can be used to represent a forest of rooted trees. The representation is dynamic, so that edges can be removed and added. Whenever an edge is removed the original tree is cut in two; adding an edge between two trees links them. This structure was proposed by Sleator and Tarjan (1985).
Both the link and cut operations can be computed in O(log V ) amortized time.
Algorithm 1 shows the pseudo-code of the edge swapping procedure, EdgeSwap(A), where A is an LCT representation of the current spanning tree. The LCT can only represent trees, therefore the edge swap procedure must first cut the edge (u′, v′) and afterwards insert the edge (u, v) with the Link operation. The randomizing process needs to identify C and select (u′, v′) from it. The LCT can also compute this process in O(log V) amortized time. The LCT works by partitioning the represented tree into disjoint paths. Each path is stored in an auxiliary data structure, so that any of its edges can be accessed efficiently, in O(log V) amortized time. To compute this process we force the path D = C \ {(u, v)} to become a disjoint path. This means that D will be completely stored in one auxiliary data structure, hence it is possible to efficiently select an edge from it. Moreover the size of D can also be computed efficiently. The exact process to force D into an auxiliary structure is to make u the root of the represented tree and then access v. We can confirm, by inspection, that this procedure runs within the O(log V) amortized time bound that is crucial for our main result. In Section 4.1 we prove that the process we described is indeed an ergodic Markov chain, thus establishing the result. We finish this section by pointing out a detail in Algorithm 1. In the comment of line 3 we point out that the property (u, v) ∉ A must be checked in at most O(log V) time. This can be achieved in O(1) time by keeping an array of booleans indexed by E. Moreover it can also be achieved in O(log V) amortized time by using the LCT data structure, essentially by delaying the verification until D is determined and checking whether |D| = 1.
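As an illustration of the randomizing step, the following sketch implements one edge swap on a naive set-of-edges representation, using an O(V) search to find the cycle C. The paper's LCT implementation replaces this search to reach the O(log V) bound; the names and representation here are illustrative, not the actual code.

```python
import random
from collections import defaultdict

def edge_swap(tree, graph_edges, rng):
    """One step of the randomizing process on a naive representation:
    `tree` is a set of frozenset edges, `graph_edges` the edge list of G."""
    u, v = rng.choice(graph_edges)
    if frozenset((u, v)) in tree:
        return tree                     # (u, v) already in A: self-loop
    adj = defaultdict(list)             # adjacency lists of the tree
    for e in tree:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    parent = {u: None}                  # DFS from u to find the path to v
    stack = [u]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path = []                           # tree path u..v, i.e. C \ {(u, v)}
    x = v
    while parent[x] is not None:
        path.append(frozenset((x, parent[x])))
        x = parent[x]
    removed = rng.choice(path)          # uniform choice from C \ {(u, v)}
    return (tree - {removed}) | {frozenset((u, v))}
```

Starting from any spanning tree, repeated calls preserve the invariant that the result is again a spanning tree.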

Ergodic Analysis
In this section, we analyse the Markov chain M t induced by the edge swapping process. It should be clear that this process has the Markov property because the probability of reaching a state depends only on the previous state. In other words the next spanning tree depends only on the current tree.
To prove that our procedure is correct we must show that the stationary distribution is uniform for all states. Let us first establish that such a stationary distribution exists. Note that, for a given finite graph G, the number of spanning trees is also finite. More precisely, for complete graphs Cayley's formula yields V^(V−2) spanning trees. This value is an upper bound for other graphs, as all spanning trees of a given graph are also spanning trees of the complete graph with the same number of vertexes. Therefore the chain is finite. If we show that it is irreducible and aperiodic, it follows that it is ergodic (Mitzenmacher and Upfal, 2005, Corollary 7.6) and therefore it has a stationary distribution (Mitzenmacher and Upfal, 2005, Theorem 7.7).
The chain is aperiodic because self-loops may occur, i.e., transitions where the underlying state does not change. Such transitions occur when (u, v) is already in A, therefore their probability is at least (V − 1)/E, because there are V − 1 edges in a spanning tree A.
To establish that the chain is irreducible it is enough to show that for any pair of states i and j there is a non-zero probability path from i to j. First note that the probability of any transition of the chain is at least 1/(E(V − 1)) ≥ 1/(EV): the edge (u, v) is chosen with probability 1/E and the edge (u′, v′) with probability at least 1/(V − 1), since the cycle C contains at most V edges. To obtain a path from i to j let A i and A j represent the respective trees. If A i ≠ A j, choose an edge (u, v) ∈ A j \ A i and insert it into A i, thus creating a cycle C; then remove some edge (u′, v′) ∈ C \ A j. Such an edge exists, for if no such (u′, v′) existed then C ⊆ A j, which is a contradiction because A j is a tree and C is a cycle. As mentioned above, the probability of this transition is at least 1/(EV). After this step the resulting tree A′ = (A i ∪ {(u, v)}) \ {(u′, v′)} is not necessarily A j, but it is closer to that tree: the set A j \ A′ is smaller than the original A j \ A i. Its size decreases by 1 because the edge (u, v) exists in the second set but not in the first, and the removed edge (u′, v′) belongs to neither. Therefore this process can be iterated until the difference set is empty and the resulting tree coincides with A j. The maximal size of A j \ A i is V − 1, because the size of A j is at most V − 1; this value occurs when A i and A j share no edges. Multiplying all the probabilities in the process of transforming A i into A j we obtain a total probability of at least (1/(EV))^(V−1) > 0.

Now that the stationary distribution is guaranteed to exist, we will show that it coincides with the uniform distribution by proving that the chain is time reversible (Mitzenmacher and Upfal, 2005, Theorem 7.10). We prove that for any pair of states i and j, with j ≠ i, for which there exists a transition from i to j, with probability P i,j, there exists, necessarily, a transition from j to i with probability P j,i = P i,j. If the transition from i to j exists it means that there are edges (u, v) and (u′, v′) such that A j = (A i ∪ {(u, v)}) \ {(u′, v′)}, which means that the tree A i can be obtained from the tree A j by adding the edge (u′, v′) and removing the edge (u, v). In other words, the process in Figure 3 works both top down and bottom up. This reverse step is a valid transition in the edge-swap chain, where the cycle C is the same in both transitions, i.e., C is the unique cycle contained both in A i ∪ {(u, v)} and in A j ∪ {(u′, v′)}. Now we obtain our result by observing that P i,j = 1/(E(C − 1)) = P j,i, where C also denotes the size of the cycle. In the transition from i to j the factor 1/E comes from the choice of (u, v) and the factor 1/(C − 1) from the choice of (u′, v′). In the transition from j to i, the factor 1/E comes from the choice of (u′, v′) and the factor 1/(C − 1) from the choice of (u, v). Hence we established that the algorithm we propose correctly generates spanning trees uniformly, provided we can sample from the stationary distribution. Hence, we need to determine the mixing time of the chain, i.e., the number of edge swap operations that need to be performed on an initial tree until the distribution of the resulting trees is close enough to the stationary distribution.
Before analysing the mixing time of this chain we point out that it is possible to use a faster version of the chain, by choosing (u, v) uniformly from E \ A instead of from E. This makes the chain faster, but proving that it is aperiodic is trickier. In this chain we have that Pr(M t+1 = i | M t = i) = 0, for any state i. We will now prove that Pr(M t+s = i | M t = i) > 0, for any state i and s > 1. It is enough to show this for s = 2 and s = 3; all other values follow from the fact that the greatest common divisor of 2 and 3 is 1. For the case of s = 2 we use the time reversibility property and the following deduction: Pr(M t+2 = i | M t = i) ≥ P i,j P j,i ≥ (1/(EV))^2 > 0. For the case of s = 3 we observe that the cycle C must contain at least 3 edges (u, v), (u′, v′) and (u″, v″). To obtain A j we insert (u, v) and remove (u′, v′); we then move from this state to a state A k by inserting (u″, v″) and removing (u, v); finally we move back to A i by inserting (u′, v′) and removing (u″, v″). Hence, for this case, Pr(M t+3 = i | M t = i) ≥ (1/(EV))^3 > 0.
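The reversibility argument can be checked by brute force on a small example. The sketch below (an illustration, not part of the described algorithm) builds the full transition matrix of the slow chain over the 16 spanning trees of K4 and confirms that it is symmetric, hence doubly stochastic, so the uniform distribution is stationary.

```python
from itertools import combinations

VERTS = (1, 2, 3, 4)
EDGES = [frozenset(e) for e in combinations(VERTS, 2)]

def is_tree(edge_set):
    """True when the given 3 edges are acyclic (3 acyclic edges on 4
    vertexes necessarily form a spanning tree)."""
    parent = {v: v for v in VERTS}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in edge_set:
        a, b = tuple(e)
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

trees = [frozenset(t) for t in combinations(EDGES, 3) if is_tree(t)]
idx = {t: i for i, t in enumerate(trees)}
n = len(trees)                      # Cayley: 4^(4-2) = 16 spanning trees

def cycle(tree, e):
    """Edges of the unique cycle in tree + {e}: e itself plus every tree
    edge f whose removal is repaired by adding e."""
    return [e] + [f for f in tree if is_tree((tree - {f}) | {e})]

P = [[0.0] * n for _ in range(n)]
for t in trees:
    i = idx[t]
    for e in EDGES:
        if e in t:                  # self-loop: (u, v) already in A
            P[i][i] += 1 / len(EDGES)
        else:
            C = cycle(t, e)
            for f in C:
                if f != e:          # remove f uniformly from C \ {e}
                    P[i][idx[(t - {f}) | {e}]] += 1 / len(EDGES) / (len(C) - 1)
```

Each row sums to 1 and P[i][j] = P[j][i], which is exactly the time reversibility used above.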

A Coupling
In this section, we focus on bounding the mixing time. We did not obtain general analytical bounds from existing analysis techniques, such as couplings (Levin and Peres, 2017; Mitzenmacher and Upfal, 2005), strong stationary times (Levin and Peres, 2017) and canonical paths (Sinclair, 1992). The coupling technique yielded a bound only for cycle graphs; moreover, a simulation of the resulting coupling converges for ladder graphs.
Before diving into the reasoning of this section, we first need a finer understanding of the cycles generated in our process. We consider a closed walk to be a sequence of vertexes v 0, . . . , v n = v 0, starting and ending at the same vertex, and such that any two consecutive vertexes v i and v i+1 are adjacent, which in our case means that (v i, v i+1) ∈ E. The cycles we consider are simple, in the sense that they consist of a set of edges for which a closed walk can be formed that traverses all the edges in the cycle and contains no vertex repetitions, except for the vertex v 0, which is repeated only at the end. Formally: if 0 ≤ i, j < n and i ≠ j then v i ≠ v j.
The cycles that occur in our randomizing process are even more regular. A chordless cycle in a graph is a cycle such that no two vertexes of the cycle are connected by an edge that does not itself belong to the cycle. The cycles we produce also have this property, relative to the graph A ∪ {(u, v)}: if such a chord existed then, together with part of C, it would form a cycle contained in our tree A, which is a contradiction. In fact a spanning tree of a graph can alternatively be defined as a set of edges such that for any pair of vertexes v and v′ there is exactly one path linking v to v′.
A coupling is an association between two copies of the same Markov chain X t and Y t , in our case the edge swapping chain. The goal of a coupling is to make the two chains meet as fast as possible, i.e., obtain X τ = Y τ , for a small value of τ . At this point we say that the chains have coalesced.
The two chains may share information and cooperate towards this goal. However, when analysed in isolation, each chain must be indistinguishable from the original chain M t . Obtaining X τ = Y τ with a high probability implies that at time τ the chain is well mixed. Precise statements of these claims are given in Section 5.
We use the random variable X t to represent the state of the first chain, at time t. The variable Y t represents the state of the second chain. We consider the chain X t in state x and the chain Y t in state y. In one step the chain X t will transition to the state x ′ = X t+1 and the chain Y t will transition to state y ′ = Y t+1 .
The set A x \ A y contains the edges that are exclusive to A x and likewise the set A y \ A x contains the edges that are exclusive to A y . The number of such edges provides a distance d(x, y) = |A x \ A y | = |A y \ A x |, that measures how far apart are the two states. We refer to this distance as the edge distance. We define a coupling when d(x, y) ≤ 1, which can be extended for states x and y that are farther apart, by using the path coupling technique (Bubley and Dyer, 1997).
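In code, the edge distance is immediate if trees are kept as sets of edges; a minimal sketch, with an illustrative representation:

```python
def edge_distance(a_x, a_y):
    """d(x, y) = |A_x \\ A_y| for two spanning trees of the same graph,
    given as sets of frozenset edges.  Both trees have V - 1 edges, so
    |A_x \\ A_y| = |A_y \\ A_x| and the distance is symmetric."""
    assert len(a_x) == len(a_y)
    return len(a_x - a_y)
```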
To use the path coupling technique we cannot alter the behavior of the chain X t as, in general, it is determined by the previous element in the path. We denote by i x the edge that gets added to A x , and by o x the edge that gets removed from the corresponding cycle C x ⊆ A x ∪ {i x }, in case such a cycle exists. Likewise, i y represents the edge that is inserted into A y and o y the edge that gets removed from the corresponding cycle C y ⊆ A y ∪ {i y }, in case such a cycle exists. The edge i x is chosen uniformly at random from E and o x is chosen uniformly at random from C x \ {i x }.
The edges i y and o y will be obtained by trying to mimic the chain X t , but still exhibiting the same behavior as M t . In this sense the information flows from X t to Y t . Let us now analyse d(x, y).

d(x, y) = 0
If d(x, y) = 0 then x = y, which means that the corresponding trees are also equal, A y = A x. In this case Y t uses the same transition as X t, by inserting i x, i.e., setting i y = i x, and removing o x, i.e., setting o y = o x, so the chains remain equal.

d(x, y) = 1

If d(x, y) = 1 then the edges e x ∈ A x \ A y and e y ∈ A y \ A x exist and are distinct. We also need the following sets: I = C x ∩ C y, E x = C x \ I and E y = C y \ I. The set I represents the edges that are common to C x and C y. The set E x represents the edges that are exclusive to C x, from the cycle point of view; this should not be confused with e x, which represents the edge that is exclusive to A x, i.e., from a tree point of view. Likewise, E y represents the edges that are exclusive to C y. Also we consider the cycle C e to be the cycle contained in A x ∪ {e y}, which necessarily contains e x. The following Lemma describes the precise structure of these sets.
Lemma 1. The sets E x and E y form simple paths and the corresponding structural properties hold. Notice that in particular this means that, in the non-trivial case, E x and E y partition C e. A schematic representation of this Lemma is shown in Figure 4.
We have several different cases, described below. Aside from the lucky cases 2 and 3, we will usually choose i y = i x, as Y t tries to copy X t. Likewise, if possible, we would like to set o y = o x. When this is not possible we must choose o y ∈ E y; ideally we would choose o y = e y, but we must be extra careful with this process to avoid losing the behaviour of M t. Since X t provides no information on this type of edges, we use o y = s y, chosen uniformly from E y \ {e y}, i.e., selected from C y but excluding e y and the edges that also belong to C x. There is a final twist to this choice, which makes the coupling non-Markovian, i.e., it does not verify the conditions in Equations (1a) and (1b). We can choose o y = e y more often than would otherwise be permissible, by keeping track of how e y was determined. If e y was obtained deterministically, for example by the initial selection of x and y, then this is not possible. In general e y might be determined by the changes in X t, in which case we want to take advantage of the underlying randomness. Therefore, we keep track of the random processes that occur. The exact information we store is a set of edges U y ⊆ C e \ {e x} such that e y ∈ U y and, moreover, this set contains the edges that are equally likely to be e y. This information can be used to set o y = e y when s y ∈ U y; however, after such an action the information in U y must be purged.
To illustrate the possible cases we use Figures 5 to 15, where the edges drawn with double lines represent a generic path, that may contain several edges, or none at all. The precise cases are the following: 1. If the chain X t loops (x ′ = x), because i x ∈ A x then Y t also loops and therefore y ′ = y. The set U y does not change, i.e., set U y ′ = U y .
2. If i x = e y and o x = e x then set i y = e x and o y = e y . In this case the chains do not coalesce, they swap states because x ′ = y and y ′ = x, see Figure 5.
3. If i x = e y and o x ≠ e x then set i y = e x and o y = o x. In this case the chains coalesce, i.e., x′ = y′, see Figure 6. When the chains coalesce the edges e x′ and e y′ no longer exist and the set U y′ is no longer relevant.

4.
If i x ≠ e y, set i y = i x. We now have 3 sub-cases, which are further sub-divided. The case a), when |C x| = |C y|, is simpler and establishes the basic situations. When |C x| < |C y| or |C x| > |C y| we use some Bernoulli random variables to balance out probabilities and, whenever possible, reduce to the cases considered for |C x| = |C y|. When this is not possible we present the corresponding new situation. a) When |C x| = |C y|: i) When o x ∉ C e the set C e′ remains equal to C e and likewise U y′ remains equal to U y, see Figure 8. ii) Otherwise, when o x ∈ C e, the set C e′ is different from C e and we assign U y′ accordingly, see Figure 9.
iii) If o x ∈ E x \ {e x} and s y ∈ U y, set o y = e y, see Figure 10. In this case set U y′ = E x \ {e x}. The alternative, when s y ∉ U y, is considered in the next case (4.a.iv).
Figure 10: Case 4.a.iii, case 4.b.iv and case 4.c.iv, when B′ is true.
iv) If o x ∈ E x \ {e x} and s y ∉ U y, set o y = s y. This case is shown in Figure 11. In this case the distance of the coupled states increases, i.e., d(x′, y′) = 2. Therefore we include a new state z′, in between x′ and y′, and define e z′ to be the corresponding differing edge; the set U z′ contains the edges that provide alternatives to e z′. b) When |C x| < |C y| we use a Bernoulli random variable B with a success probability p defined as follows: p = (C x − 1)/(C y − 1). In Lemma 2 we prove that p properly balances the necessary probabilities; for now note that when |C x| = |C y| the expression for p yields p = 1. This is coherent with the following cases, because when B yields true we use the choices defined for |C x| = |C y|.
The alternative, when s y / ∈ U y is considered in the next case (4.b.iii).
iii) If o x ∈ I \ {i x} and B fails and s y ∉ U y, we have a new situation: set o y = s y.
The distance increases, d(x′, y′) = 2, see Figure 14. iv) If s y ∈ U y, use case 4.a.iii (Figure 10); otherwise, when s y ∉ U y, use case 4.a.iv (Figure 11). c) When |C x| > |C y| we use a Bernoulli random variable B* with a success probability p* defined as follows: p* = (I − 1)(1/(C y − 1) − 1/(C x − 1))(C x − 1)/(E x − 1). In Lemma 2 we prove that B* properly balances the necessary probabilities. For now, note that when |C x| = |C y| the expression for p* yields p* = 0, because 1/(C y − 1) − 1/(C x − 1) becomes 0. This is coherent because when B* returns false we will use the choices defined for |C x| = |C y|. The case when B* fails is considered in the next case (4.c.iv).
If B* is successful we have a new situation: set o y = s i, where s i is chosen uniformly from I \ {i y}, see Figure 15. We have e y′ = e y and U y′ = U y. When B* fails we use another Bernoulli random variable B′ with a success probability p′ defined as follows: p′ = (C x − C y)/((C y − 1)(E x − 1)(1 − p*)). In Lemma 2 we prove that B′ properly balances the necessary probabilities. In case B′ yields true, use case 4.a.iii and set o y = e y, see Figure 10. Otherwise, if s y ∈ U y, use case 4.a.iii (Figure 10) or, if s y ∉ U y, use case 4.a.iv (Figure 11).
Notice that the case 4.a.ii applies when C x = C y , thus solving this situation as a particular case.
This case is shown in Figure 8. It may even be the case that C x = C y and C x and C e are disjoint, i.e., C x ∩ C e = ∅. This case is not drawn.
Formally, a coupling is Markovian when Equations (1a) and (1b) hold, where Z t is the coupling, which is defined as a pair of chains (X t , Y t ). The chain M t represents the original chain.
To establish vital insight into the coupling structure we will start by studying it when it is Markovian.
Lemma 2. When U y = {e y } the process we described is a Markovian coupling.
Proof. The coupling verifies Equation (1a), because we do not alter the behavior of the chain X t .
Hence the main part of the proof focuses on Equation (1b).
First let us prove that for any edge i ∈ E the probability that i y = i is 1/E, i.e., Pr(i y = i) = 1/E.
The possibilities for i y are the following: • i ∈ A y: this occurs only in case 1, when i x ∈ A x. It may be that i = e y; this occurs when i x = e x, in which case i y = e y = i, and this is the only case where i y = e y. In either situation Pr(i y = i) = 1/E. • i = e x: this occurs in cases 2 and 3, i.e., when i x = e y, which is the decisive condition for this choice. Therefore Pr(i y = i) = Pr(i x = e y) = 1/E.
• i ∈ E \ A y: this occurs in case 4. In this case i y = i x, so again we have that Pr(i y = i) = Pr(i x = i) = 1/E. Before focusing on o y we will prove that the Bernoulli random variables are well defined, i.e., that the expressions in the denominators are not 0 and that the values of p, p* and p′ are between 0 and 1.
• Analysis of B. We need C y − 1 ≠ 0 for p to be well defined. Any cycle must contain at least 3 edges, therefore 3 ≤ C y and hence 0 < 2 ≤ C y − 1. This guarantees that the denominator is not 0. The same argument proves that 0 < C x − 1, thus implying that 0 < p, as both expressions are positive. We also establish that p < 1 because the hypothesis of case 4.b guarantees C x < C y and therefore C x − 1 < C y − 1.
• Analysis of B*. As seen in the analysis of B, we have that 0 < C y − 1 and 0 < C x − 1, therefore those denominators are not 0. Moreover we also need to prove that E x − 1 ≠ 0. In general we have that 1 ≤ E y, because e y ∈ E y. Moreover, the hypothesis of case 4.c.iii is that C y < C x and therefore E y < E x, obtained by removing I from both sides. This implies that 1 < E x and therefore 0 < E x − 1, thus establishing that the last denominator is also not 0.
Let us now establish that 0 ≤ p* and p* < 1. Note that p* can be simplified to the expression (I − 1)(C x − C y)/((C y − 1)(E x − 1)), where all the expressions in parenthesis are non-negative, so 0 ≤ p*. For the second property we use the new expression for p* and simplify further. The deduction is straightforward using the equality C x − C y = E x − E y, which is obtained by removing I from the left side. The properties E x − E y ≤ E x − 1 and I − 1 < C y − 1 establish the desired result.
• Analysis of B′. We established, in the analysis of B, that C y − 1 is non-zero. In the analysis of B* we also established that E x − 1 is non-zero; note that case 4.c.iv also assumes the hypothesis that C y < C x. Moreover, in the analysis of B* we also established that p* < 1, which implies that 0 < 1 − p* and therefore the last denominator is also non-zero.
Let us also establish that 0 ≤ p′ and p′ ≤ 1. For the second property we instead prove an equivalent inequality in which all of the expressions in parenthesis are non-negative. To establish that 0 ≤ p′ we use a deduction of equivalent inequalities, whose last inequality is part of the hypothesis of case 4.c.iv. Now let us focus on the edge o y. We wish to establish that for any o ∈ C y \ {i y} we have that Pr(o y = o) = 1/(C y − 1). We analyse this edge according to the following cases: 1. When the cycles are equal, C x = C y. This involves cases 2 and 3.
• o = e y: this occurs only in case 2 and it is determined by the fact that o x = e x, therefore Pr(o y = o) = Pr(o x = e x) = 1/(C x − 1) = 1/(C y − 1). • o ∈ E y \ {e y}: this occurs only in case 4.a.iv. This case is determined by the fact that o x ∈ E x \ {e x} and moreover sets o y = s y, which was uniformly selected from E y \ {e y}.
We have the following deduction, where we use the fact that the events are independent and that |C x| = |C y| implies |E x| = |E y|: Pr(o y = o) = Pr(o x ∈ E x \ {e x}) Pr(s y = o) = ((E x − 1)/(C x − 1))(1/(E y − 1)) = 1/(C y − 1). 3. When C x < C y: this involves case 4.b; the cases for o follow by similar deductions, using B to balance the probabilities. 4. When C x > C y: this concerns case 4.c. The cases for o are the following: • o = e y: this occurs in case 4.c.i and in case 4.c.iv when B′ is true. We use the following deduction: Pr(o y = o) = Pr(4.c.i or (4.c.iv and B′ = true)) = Pr(4.c.i) + Pr(4.c.iv and B′ = true) = 1/(C x − 1) + ((E x − 1)/(C x − 1))(1 − p*)p′ = 1/(C x − 1) + (C x − C y)/((C x − 1)(C y − 1)) = 1/(C y − 1). • o ∈ I \ {i y}: this occurs in case 4.c.ii and in case 4.c.iii when B* is true; the deduction is analogous.
Proof. In the context of a Markovian coupling we analysed the transition from y to y′ given the information about x. In the non-Markovian case we will use less information about x. We assume that e y is a random variable and that x provides only e x and U y; we know only that e y ∈ U y and, moreover, that Pr(e y = e) = 1/|U y| for any e ∈ U y, and Pr(e y = e) = 0 otherwise. Then the chain X t makes its move and provides information about i x and o x. Let us consider only the cases where Y t then chooses i y = i x, because nothing changes in the cases where this does not happen. Now i x can be used to define C x and C y and, therefore, E x and E y. We focus our attention on E y ∩ U y because, except for the trivial cases, we must have e y ∈ (E y ∩ U y). Hence, we instead alter our condition to Pr(e y = e) = 1/|E y ∩ U y| for any e ∈ (E y ∩ U y), and 0 otherwise.
Note that this is a reasonable process because we established in Lemma 1 that, in the non-trivial case, E x and E y partition C e and, therefore, because U y ⊆ C e \ {e x}, we have that E x and E y also partition U y. This means that U y ∩ I = ∅, and so we are not losing any part of U y in this process, we are only dividing it into cases. This process is also the reason why we define U y′ in this way even when s y ∉ U y. Now the cases considered in Lemma 2 must be changed. Substitute the original cases of o = e y by o ∈ U y ∩ E y. Also, substitute the case o ∈ E y \ {e y} by o ∈ E y \ U y. The other cases remain unaltered. Except for the first case, the previous deductions still apply. We will exemplify how the deduction changes for o ∈ U y ∩ E y. We consider only the situation when |C x| = |C y|. For the remaining situations, C x < C y and C x > C y, we use a general argument. Hence our precise assumptions are: o ∈ U y ∩ E y and |C x| = |C y|.
This is correct according to our assumption.
The general argument follows the above derivation. Whenever the Markovian coupling would produce o y = e y, we obtain a probability of 1/(C y − 1). Moreover, for every edge o y ∈ E y \ {e y} produced by the Markovian coupling we obtain another 1/(C y − 1) of probability. This totals |U y ∩ E y|/(C y − 1), as desired. The same occurs in the cases C x < C y and C x > C y; only the derivations become more cumbersome.
Finally, we argue why the property Pr(e y = e) = 1/|U y|, for e ∈ U y, is maintained. The set U y is initialised to contain only the edge e y, i.e., U y = {e y}. As the coupling proceeds, U y′ is chosen to represent the edges from which e y′ was chosen, or is simply restricted if e y does not change. More precisely: in case 4.a.iii we chose e y′ = o x; in case 4.b.iv we chose e z′ = o x; in case 4.b.ii we chose e y′ = o x.
We obtained no general bounds for the coupling we presented; it may even be that such bounds are exponential even when the Markov chain has polynomial mixing time. In fact Kumar and Ramesh (1999) proved that this is the case for Markovian couplings of the Jerrum-Sinclair chain (Jerrum and Sinclair, 1989). Note that, according to the classification of Kumar and Ramesh (1999), the coupling we present is time-variant Markovian. Hence their result applies to the type of coupling we are using, albeit for different chains, so it is not immediate that there exist no polynomial Markovian couplings for the chain we presented. Cycle graphs are the only class of graphs for which we establish polynomial bounds, see Figure 18. Proof. For any two trees A x and A y we have a maximum distance of 1 edge, i.e., d(A x, A y) ≤ 1.
Hence our coupling applies directly.
For the fast version, case 1 does not occur, because i x is chosen from E \ A x. Hence, the only cases that might apply are cases 2 and 3. In the first case the chains preserve their distance and in the last case the distance is reduced to 0. Hence, E[d(X 1, Y 1)] = 1/(V − 1), which corresponds to the probability of case 2. Each step of the coupling is independent, which means we can use the previous result and Markov's inequality to obtain Pr(d(X τ, Y τ) ≥ 1) ≤ 1/(V − 1)^τ. Then, we use this probability in the coupling Lemma 4 to obtain a variation distance of 1/4, by solving the equation 1/(V − 1)^τ = 1/4. Moreover, when a graph is a connected set of cycles, linked by bridges or articulation points, we can also establish a similar result. Figure 16 shows one such graph. The mixing time for the slow version is obtained by using |E| instead of n in the previous expression.
Proof. To obtain this result we use a path coupling argument. For two chains at distance 1, the expected distance after one step is at most 1 − (m − 2)/(n(m − 1)), where m is the size of the largest cycle and n is the number of edges outside the spanning tree. We assume that the differing edge occurs in the largest cycle. In general the edges inserted and deleted do not alter this situation, hence the term 1. However, with probability 1/n the chain X t inserts the edge that creates the cycle where the difference occurs. In that case, with probability (m − 2)/(m − 1), the chains coalesce. Hence, applying path coupling, the mixing time τ must verify (1 − (m − 2)/(n(m − 1)))^τ ≤ 1/4. For the slow version the chain chooses the correct edge with probability 1/E instead of 1/n.
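For a cycle graph the coupling can be checked exhaustively, since a spanning tree is identified by its single missing edge. The sketch below (illustrative names; it mirrors cases 2 and 3 of the coupling for the fast chain) confirms that, from two trees at distance 1, exactly one of the V − 1 choices of o x swaps the states and every other choice coalesces the chains.

```python
def coupled_step(mx, my, ox):
    """One step of the coupling on a cycle graph, fast chain.  A spanning
    tree of the cycle C_n is identified by its single missing edge, so the
    states are mx (for X) and my (for Y).  X reinserts mx, the cycle C is
    the whole graph, and X removes ox != mx; Y mirrors coupling cases 2/3."""
    nx = ox                    # x' now misses the edge X removed
    if ox == my:               # case 2: o_x = e_x, the states swap
        ny = mx
    else:                      # case 3: Y copies o_x and the chains coalesce
        ny = ox
    return nx, ny

n = 10                         # cycle graph with 10 vertexes and 10 edges
mx, my = 0, 1                  # two trees at edge distance 1
outcomes = [coupled_step(mx, my, ox) for ox in range(n) if ox != mx]
coalesced = sum(nx == ny for nx, ny in outcomes)
print(coalesced, len(outcomes))   # 8 of the 9 possible choices coalesce
```

This matches E[d(X 1, Y 1)] = 1/(V − 1): the single non-coalescing choice is the state swap of case 2.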

Convergence Testing
Before looking at the performance of the algorithm we started by testing the convergence of the edge swap chain. We estimate the variation distance after a varying number of iterations. The results are shown in Figures 19, 20, 21, 22, 23, 24, and 25. We now describe the structure of these figures; consider for example Figure 21. The structure is the following:
• The bottom left plot shows the graph properties, with the number of vertexes V on the x axis and the number of edges E on the y axis. For the dense case, graph 0 has 10 vertexes and 45 edges, while graph 6 has 40 vertexes and 780 edges. These graph indexes are used in the remaining plots.
• The top left plot shows the number of iterations t of the chain on the x axis and the estimated variation distance on the y axis, for all the different graphs.
• The top right plot is similar to the top left, but the x axis contains the number of iterations divided by V^1.3 + E. Besides the data, this plot also shows the curve ln(1/ε) for reference.
• The bottom right plot is the same as the top right plot, using a logarithmic scale on the y axis.
To keep the plots from becoming excessively dense, we do not plot points for all experimental values; instead we plot one point out of every 3. However, the lines pass through all experimental points, even those that are not shown explicitly.
The variation distance between two distributions D_1 and D_2 on a countable state space S is given by ||D_1 − D_2|| = (1/2) Σ_{x∈S} |D_1(x) − D_2(x)|. This is the real value of ε. However, the size of S quickly becomes larger than we can compute. Instead, we compute a simpler variation distance ||D_1 − D_2||_d, where S is reduced from the set of all spanning trees of G to the set of integers from 0 to V − 1, which corresponds to the edge distance, defined in Section 4.2, of the generated tree A to a fixed random spanning tree R. More precisely, we generate 20 random trees, using the random walk algorithm described in Section 5. For each of these trees, we compute ||π − M_t||_d, i.e., the simpler distance between the stationary distribution π and the distribution M_t obtained by computing t steps of the edge swapping chain. To obtain M_t, we start from a fixed initial tree A_0 and execute our chain t times. This process is repeated several times to obtain estimates for the corresponding probabilities.
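The variation distance over the reduced space can be computed directly from the two distributions. The sketch below is only an illustration of this computation; the distributions over edge distances are hypothetical stand-ins for π and M_t, and the function names are our own.

```python
from collections import Counter

def variation_distance(d1, d2):
    """Total variation distance: half the L1 distance between two
    probability distributions given as {outcome: probability} dicts."""
    support = set(d1) | set(d2)
    return sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in support) / 2

def empirical(samples):
    """Turn a list of observed outcomes into an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return {x: c / n for x, c in counts.items()}

# Toy example: distributions over edge distances 0..3.
pi = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
m_t = empirical([0, 0, 0, 1, 1, 2, 2, 3])  # 8 sampled distances
print(variation_distance(pi, m_t))  # → 0.125
```

In the actual estimation, the samples would be the edge distances of generated trees to the fixed reference tree R.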
We keep two sets of estimates, M_t and M'_t, and stop when ||M_t − M'_t||_d < 0.05. Moreover, we only estimate values where ||π − M_t||_d ≥ 0.1. We use the same criterion to estimate π, but in this case the trees are again generated by the random walk algorithm. The final value ε̂ is obtained as the maximum value over the 20 trees.
We generated dense graphs, sparse graphs and graphs in between. The sparse graphs are ladder graphs; an illustration of these graphs is shown in Figure 17. The cycle graphs consist of a single cycle, as shown in Figure 18. The dense graphs are the complete graphs K_V. We also generated other dense graphs, labelled biK, which consist of two complete graphs connected by two edges. Finally, we generated graphs based on the duplication model, labelled dmP.
Let G_0 be an undirected and unweighted graph. Given 0 ≤ p ≤ 1, the partial duplication model builds a graph G = (V, E) by partial duplication as follows (Chung et al., 2003): start with G = G_0 at time t = 1 and, at time t > 1, perform a duplication step:
1. Uniformly select a random vertex u of G.
2. Add a new vertex v to G.
3. For each neighbor w of u, add an edge (v, w) with probability p.
The different values at the end of dmP, namely in Figures 23, 24 and 25, correspond to the choices of p.
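The duplication step can be sketched as follows. This is a minimal illustration of the model, not the generator used in our experiments; the function name and interface are our own.

```python
import random

def partial_duplication(g0_edges, n_vertices, p, seed=None):
    """Grow a graph by partial duplication (Chung et al., 2003):
    repeatedly pick a random vertex u, add a new vertex v, and link
    v to each neighbor of u independently with probability p."""
    rng = random.Random(seed)
    adj = {}  # adjacency sets, vertices labelled 0, 1, 2, ...
    for a, b in g0_edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    while len(adj) < n_vertices:
        u = rng.choice(sorted(adj))   # 1. uniformly select a vertex u
        v = len(adj)                  # 2. add a new vertex v
        adj[v] = set()
        for w in list(adj[u]):        # 3. copy each edge with probability p
            if rng.random() < p:
                adj[v].add(w)
                adj[w].add(v)
    return adj

g = partial_duplication([(0, 1), (1, 2), (2, 0)], 10, 0.5, seed=1)
print(len(g))  # → 10
```

Note that a duplication step may produce an isolated vertex, so the resulting graph is not guaranteed to be connected; for spanning tree experiments one would keep only connected instances.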
These graphs show the convergence of the Markov chain, and moreover V^1.3 + E seems to be a reasonable bound for τ. Still, these results are not entirely conclusive. On the one hand, the estimation of the variation distance groups several spanning trees into the same distance, which means that within a group the distribution might not be uniform, even if the global statistics are good. So the actual variation distance may be larger and the convergence might be slower. On the other hand, we chose the exponent 1.3 experimentally, by trying to force the data of the different graphs to converge at the same point. The actual value may be smaller or larger.

Coupling Simulation
As mentioned before, we obtained no general bounds on the coupling we presented. In fact, experimental simulation of the coupling does not converge for all classes of graphs. We obtained experimental convergence for cycle graphs, as expected from Theorem 2, and for ladder graphs.
For the remaining graphs we used an optimistic version of the coupling, which always assumes that s_y ∈ U_y and that B* fails. With these assumptions, all the cases which increase the distance between states are eliminated and the coupling always converges. Note that this approach does not yield a sound coupling, but in practice we verified that this procedure obtains a good experimental variation distance. Moreover, the variation distance estimation for these tests is not the simpler distance but the actual experimental variation distance, obtained by generating enough sample trees that, on average, each possible tree is obtained 100 times.
The simulation of the path coupling proceeds by generating a path with e ln V steps, essentially selecting two trees at distance e ln V from each other. This path is obtained by computing e ln V steps of the fast chain. Recall that our implementation and all simulations use the fast version of the chain. The simulation ends once this path contracts to size ln V. Let t' be the number of steps in this process. Once this point is reached, our estimate for the mixing time is τ̂ = t' ln V. In general, we wish to obtain τ̂ such that the probability that the two chains X_t and Y_t coalesce is at least 75%. Hence, we repeat this process 4 times and choose the second largest value of τ̂ as our estimate. The edge swapping method is faster for biK, cycle and sparse (ladder) graphs. As expected, it is less competitive for dense graphs. Hence, experimental results seem to indicate that the edge swapping method is more competitive in practice for those instances that are harder for random walk based methods, namely biK and cycle graphs. The results for biK and dmP are of particular interest, as most real networks seem to include these kinds of topologies, i.e., they include communities and they are scale-free (Chung and Lu, 2006).

Related Work
For a detailed exposition on probability on trees and networks see Lyons and Peres (2016, Chapter 4). As far as we know, the initial work on generating uniform spanning trees was by Aldous (1990b) and Broder (1989), who obtained spanning trees by performing a random walk on the underlying graph. Aldous further studied the properties of such random trees (Aldous, 1990a), namely giving general closed formulas for the counting argument we presented in Section 2. In the random walk process, a vertex v of G is chosen and at each step this vertex is swapped with an adjacent vertex, where all neighboring vertexes are selected with equal probability. Each time a vertex is visited for the first time, the corresponding edge is added to the growing spanning tree. The process ends when all vertexes of G have been visited at least once. This number of steps is known as the cover time of G.
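This random walk procedure can be sketched as follows; the adjacency-list representation and function name are our own, and this is an illustration rather than the implementation we benchmarked.

```python
import random

def aldous_broder(adj, seed=None):
    """Uniform spanning tree via the Aldous-Broder random walk:
    walk on the graph; the first entering edge of every vertex
    joins the tree.  `adj` maps each vertex to a list of neighbors."""
    rng = random.Random(seed)
    vertices = list(adj)
    current = rng.choice(vertices)
    visited = {current}
    tree = set()
    while len(visited) < len(vertices):  # runs until cover time
        nxt = rng.choice(adj[current])
        if nxt not in visited:           # first visit: keep the entering edge
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))
        current = nxt
    return tree

# Example: the complete graph K4
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
t = aldous_broder(k4, seed=7)
print(len(t))  # → 3, since a spanning tree has V - 1 edges
```

The number of steps the loop performs is exactly the cover time of the walk, which is what makes this method slow on graphs such as biK.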
To obtain an algorithm that is faster than the cover time, Wilson (1996) proposed a different approach. A vertex r of G is initially chosen uniformly and the goal is to hit this specific vertex r from a second vertex, also chosen uniformly from G. This process is again a random walk, but with a loop erasure feature. Whenever the path from the second vertex intersects itself, all the edges in the corresponding loop are removed from the path. When the path eventually reaches r it becomes part of the spanning tree. The process then continues by choosing a third vertex and also computing a loop erasure path from it, but this time it is not necessary to hit r precisely; it is enough to hit any vertex on the branch that is already linked to r. The process continues by choosing random vertexes and computing loop erasure paths that hit the part of the spanning tree that is already computed.
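Wilson's loop-erased walks can be sketched as follows, using the standard trick of recording only the latest successor of each visited vertex, so that revisiting a vertex implicitly erases the loop; the representation and names are our own.

```python
import random

def wilson(adj, seed=None):
    """Uniform spanning tree via Wilson's loop-erased random walks.
    `adj` maps each vertex to a list of neighbors."""
    rng = random.Random(seed)
    vertices = list(adj)
    root = rng.choice(vertices)
    in_tree = {root}
    tree = set()
    for start in vertices:
        if start in in_tree:
            continue
        # Walk from `start` until hitting the tree; overwriting
        # successor[u] on a revisit erases the loop through u.
        successor = {}
        u = start
        while u not in in_tree:
            successor[u] = rng.choice(adj[u])
            u = successor[u]
        # Retrace the loop-erased path and graft it onto the tree.
        u = start
        while u not in in_tree:
            in_tree.add(u)
            tree.add(frozenset((u, successor[u])))
            u = successor[u]
    return tree

k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
t = wilson(k4, seed=3)
print(len(t))  # → 3 edges for a spanning tree of K4
```

Each iteration of the outer loop grows the branch already linked to r, exactly as described above.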
We implemented the above algorithms, as they are accessible. Although several theoretical results were obtained in recent years, we are not aware of implementations of the corresponding algorithms. We now survey these results.
Another approach to this problem relies on Kirchhoff's theorem (Kirchhoff, 1847), which counts the number of spanning trees of a graph G by computing the determinant of a certain matrix related to G. This relation was explored by Guénoche (1983) and Kulkarni (1990), yielding an O(EV^3) time algorithm. This result was improved to O(V^2.373) by Colbourn, Day, and Nel (1989) and Colbourn, Myrvold, and Neufeld (1996), where the exponent corresponds to the fastest algorithm for matrix multiplication. Improvements on the random walk approach were obtained by Kelner and Mądry (2009) and Mądry (2011), culminating in an Õ(E^(4/3+o(1))) time algorithm by Mądry, Straszak, and Tarnawski (2015), which relies on insight provided by the effective resistance metric.
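For concreteness, the counting side of this approach can be sketched as follows: by the matrix-tree theorem, the number of spanning trees equals any cofactor of the graph Laplacian. This is only an illustration with a plain O(V^3) exact determinant, not one of the cited algorithms.

```python
from fractions import Fraction

def count_spanning_trees(adj):
    """Matrix-tree theorem: delete one row and column of the
    Laplacian L = D - A and return the determinant (exact
    arithmetic via Fraction).  `adj` maps vertex -> neighbors."""
    vs = sorted(adj)
    idx = {v: i for i, v in enumerate(vs)}
    n = len(vs)
    m = [[Fraction(0)] * (n - 1) for _ in range(n - 1)]
    for v in vs[:-1]:                     # drop the last row/column
        i = idx[v]
        for w in adj[v]:
            m[i][i] += 1                  # degree on the diagonal
            if idx[w] < n - 1:
                m[i][idx[w]] -= 1         # -1 for each edge
    # Determinant by Gaussian elimination with partial pivoting.
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if m[r][col] != 0), None)
        if pivot is None:
            return 0                      # singular: graph disconnected
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n - 1):
            f = m[r][col] / m[col][col]
            for c in range(col, n - 1):
                m[r][c] -= f * m[col][c]
    return int(det)

k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(count_spanning_trees(k4))  # → 16, matching Cayley's formula 4^(4-2)
```

The counting-based generators cited above sample trees by repeatedly conditioning such determinant computations on edges being present or absent.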
Interestingly, the initial work by Broder (1989) contains a reference to the edge swapping chain we presented in this paper (Section 5, named the swap chain). The author mentions that the mixing time of this chain is E^O(1), albeit the details are omitted. As far as we can tell, this natural approach to the problem did not receive much attention precisely due to the lack of an efficient implementation. Even though link cut trees were known at the time (Sleator and Tarjan, 1981, 1985), their application to this problem was not established prior to this work. Their initial application was to network flows (Goldberg and Tarjan, 1989). We also found another reference to the edge swap chain in the work of Sinclair (1992). In the proposal of the canonical path technique, the author mentions this particular chain as a motivating application, but the details are omitted and we were not able to obtain such an analysis.
We considered the LCT version where the auxiliary trees are implemented with splay trees (Sleator and Tarjan, 1985), i.e., the auxiliary data structures we mentioned in Section 3 are splay trees. This means that in step 5 of Algorithm 1 all the vertexes involved in the path C \ {(u, v)} get stored in a splay tree. This path oriented approach of link cut trees makes them suitable for our goals, as opposed to other dynamic connectivity data structures such as Euler tour trees (Henzinger and King, 1995).
Splay trees are self-adjusting binary search trees; the vertexes are ordered in such a way that the inorder traversal of the tree coincides with the sequence of vertexes obtained by traversing C \ {(u, v)} from u to v. This also explains why the size of this set can be obtained in O(log V) amortized time: each node simply stores the size of its sub-tree, and these values are efficiently updated during the splay process, which consists of a sequence of rotations.
Moreover, these values can also be used to select an edge from the path. By starting at the root and comparing the sub-tree sizes to i, we can determine whether the first vertex of the desired edge is in the left sub-tree, at the root, or in the right sub-tree. Likewise, we can do the same for the second vertex of the edge in question. These operations splay the vertexes that they obtain, and therefore the total time depends on the Splay operation. The precise total time of τ Splay operations is O((V + τ) log V); the V log V term does not accumulate over successive operations, thus yielding the bound of O((V + τ) log V) in Theorem 1. In general the V log V term should not be a bottleneck, because for most graphs we should have τ > V. This is not always the case: if G consists of a single cycle then τ = 1, but V may be large. Figure 18 shows an example of such a graph.
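The size-guided descent can be sketched as follows. For clarity this sketch uses a static tree and omits the splay rotations entirely; it only shows how stored sub-tree sizes let us find the i-th vertex of the inorder sequence, the core of the selection step above.

```python
class Node:
    """Binary tree node augmented with its sub-tree size; the inorder
    traversal corresponds to the sequence of vertexes along the path."""
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right
        self.size = 1 + size(left) + size(right)

def size(node):
    return node.size if node else 0

def select(node, i):
    """Return the label of the i-th vertex (0-based) of the inorder
    sequence, descending by comparing i against left sub-tree sizes."""
    while node:
        ls = size(node.left)
        if i < ls:
            node = node.left          # the i-th vertex is on the left
        elif i == ls:
            return node.label         # it is this node itself
        else:
            i -= ls + 1               # skip left sub-tree and root
            node = node.right
    raise IndexError("rank out of range")

# path a-b-c-d-e stored as a balanced tree keyed by position
tree = Node("c", Node("b", Node("a")), Node("e", Node("d")))
print([select(tree, i) for i in range(5)])  # → ['a', 'b', 'c', 'd', 'e']
```

In the real structure each such access would additionally splay the node it finds, which is what yields the amortized O(log V) bound.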
We finish this section by reviewing the formal definitions of variation distance and mixing time τ (Mitzenmacher and Upfal, 2005).
Definition 1. The variation distance between two distributions D_1 and D_2 on a countable space S is given by

||D_1 − D_2|| = (1/2) Σ_{x∈S} |D_1(x) − D_2(x)|.

Definition 2. Let π be the stationary distribution of a Markov chain with state space S. Let p_x^t represent the distribution of the state of the chain starting at state x after t steps. We define

Δ_x(t) = ||p_x^t − π||    and    Δ(t) = max_{x∈S} Δ_x(t).

That is, Δ_x(t) is the variation distance between the stationary distribution and p_x^t, and Δ(t) is the maximum of these values over all states x. We also define

τ_x(ε) = min{t : Δ_x(t) ≤ ε}    and    τ(ε) = max_{x∈S} τ_x(ε).    (5)

When we refer only to the mixing time we mean τ(1/4). Finally, the coupling Lemma justifies the coupling approach:

Lemma 4. Let Z_t = (X_t, Y_t) be a coupling for a Markov chain M on a state space S. Suppose that there exists a T such that, for every x, y ∈ S, Pr(X_T ≠ Y_T | X_0 = x, Y_0 = y) ≤ ε. Then τ(ε) ≤ T. That is, for any initial state, the variation distance between the distribution of the state of the chain after T steps and the stationary distribution is at most ε.
If there is a distance d defined on S, then the event X_T ≠ Y_T can be expressed as d(X_T, Y_T) ≥ 1, and for this condition we can use Markov's inequality: Pr(d(X_T, Y_T) ≥ 1) ≤ E[d(X_T, Y_T)]. The path coupling technique (Bubley and Dyer, 1997) constructs a coupling by chaining several copies of the chain, such that the distance between consecutive copies is 1. Therefore we obtain d(X_T, Y_T) ≤ d(X_T^0, X_T^1) + d(X_T^1, X_T^2) + . . . + d(X_T^{D−1}, X_T^D), where X_T = X_T^0 and Y_T = X_T^D, and initially each consecutive pair is at distance 1, so that d(X_0, Y_0) = 1 + 1 + . . . + 1 = D.
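These definitions can be checked numerically on a tiny chain. The sketch below is our own toy example, not one of the chains studied in this paper: it computes Δ(t) by brute force for a lazy random walk on a 6-cycle and reports τ(1/4).

```python
def step(dist, P):
    """One step of the chain: multiply the distribution row vector by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def mixing_time(P, pi, eps=0.25, max_t=1000):
    """tau(eps) = min t such that Delta(t) <= eps, where Delta(t) is
    the maximum variation distance to pi over all starting states."""
    n = len(P)
    dists = [[1.0 if j == x else 0.0 for j in range(n)] for x in range(n)]
    for t in range(1, max_t + 1):
        dists = [step(d, P) for d in dists]
        delta = max(sum(abs(d[j] - pi[j]) for j in range(n)) / 2
                    for d in dists)
        if delta <= eps:
            return t
    return None

# Lazy random walk on a 6-cycle: stay w.p. 1/2, move to either
# neighbor w.p. 1/4; the stationary distribution is uniform.
P = [[0.5 if i == j else 0.25 if (i - j) % 6 in (1, 5) else 0.0
      for j in range(6)] for i in range(6)]
pi = [1 / 6] * 6
print(mixing_time(P, pi))  # → 4
```

This brute-force computation is only feasible when S is small; for spanning tree chains, S is exponentially large, which is precisely why the paper resorts to couplings and to the simpler distance of the previous section.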

Conclusions and Future Work
In this paper we studied a new algorithm to obtain the spanning trees of a graph uniformly. The underlying Markov chain was initially sketched by Broder (1989) in the early study of this problem. We extended this work by proving the necessary Markov chain properties and by using the link cut tree data structure. This allows for a much faster implementation than repeating the DFS procedure, which may actually be the reason why this approach has gone largely unnoticed during this time.
The main shortcoming of our approach is the lack of a general theoretical bound on the mixing time. Such a bound might be possible using new approaches, such as the insight into the resistance metric (Mądry, Straszak, and Tarnawski, 2015). Although a general analysis would be valuable, we addressed this problem by simulating a coupling, both sound and optimistic. The lack of analysis is not a shortcoming of the algorithm itself, which is both practical and efficient. We implemented it and compared it against existing alternatives; the experimental results show that it is very competitive. A theoretical bound would still be valuable, especially if it turns out that this algorithm is more efficient than the alternatives.
On the one hand, computing the mixing time of the underlying chain is complex, time consuming and hard to analyse in theory. On the other hand, the user of this process can fix a certain number of steps to execute. This is a very useful parameter, as it can be used to trade randomness for time. Depending on the type of application, the user may sacrifice the randomness of the underlying trees to obtain faster results or, on the contrary, spend some extra time to guarantee randomness.
Existing algorithms do not provide such a possibility.
As a final note, we point out that our approach can be generalized by assigning weights to the edges of the graph. The edge to be inserted can then be selected with probability proportional to its weight, i.e., its weight divided by the global sum of weights. Likewise, the edge to remove from the cycle should be selected according to its weight, with probability equal to its weight divided by the sum of the cycle weights. The ergodic analysis of Section 4.1 generalizes easily to this case, so this chain also generates spanning trees, now with probabilities determined by the edge weights, albeit the analysis of the coupling of Section 4.2 might need some adjustments. A proper weight selection might obtain a faster mixing time, possibly something similar to the effective resistance of the edge.
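The weighted selection can be sketched as follows. This helper is an illustrative stand-in of our own; in an actual weighted implementation, the cycle weights would be maintained inside the link cut tree.

```python
import random

def weighted_choice(edges, weights, rng=random):
    """Select an edge with probability proportional to its weight,
    i.e. w(e) divided by the sum of all candidate weights."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for e, w in zip(edges, weights):
        acc += w
        if r <= acc:
            return e
    return edges[-1]  # guard against floating-point round-off

# The middle edge carries twice the weight, so it should be
# selected roughly twice as often as either of the others.
edges = [("u", "v"), ("v", "w"), ("w", "u")]
weights = [1.0, 2.0, 1.0]
counts = {e: 0 for e in edges}
rng = random.Random(0)
for _ in range(10000):
    counts[weighted_choice(edges, weights, rng)] += 1
print(counts[("v", "w")] > counts[("u", "v")])  # → True
```

With a prefix-sum structure over the auxiliary splay tree, the same selection over the cycle edges would take O(log V) amortized time, mirroring the unweighted Select operation.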