Cover Time in Edge-Uniform Stochastically-Evolving Graphs

We define a new general model of stochastically evolving graphs, namely the \emph{Edge-Uniform Stochastically-Evolving Graphs}. In this model, each possible edge of an underlying general static graph evolves independently being either alive or dead at each discrete time step of evolution following a (Markovian) stochastic rule. The stochastic rule is identical for each possible edge and may depend on the previous $k \ge 0$ observations of the edge's state. We examine two kinds of random walks for a single agent taking place in such a dynamic graph: (i) The \emph{Random Walk with a Delay} (\emph{RWD}), where at each step the agent chooses (uniformly at random) an incident possible edge (i.e. an incident edge in the underlying static graph) and then it waits till the edge becomes alive to traverse it. (ii) The more natural \emph{Random Walk on what is Available} (\emph{RWA}) where the agent only looks at alive incident edges at each time step and traverses one of them uniformly at random. Our study is on bounding the \emph{cover time}, i.e. the expected time until each node is visited at least once by the agent. For \emph{RWD}, we provide the first upper bounds for the cases $k = 0, 1$ by correlating \emph{RWD} with a simple random walk on a static graph. Moreover, we present a modified electrical network theory capturing the $k = 0$ case and a mixing-time argument toward an upper bound for the case $k = 1$. For \emph{RWA}, we derive the first upper bounds for the cases $k = 0, 1$, too, by reducing \emph{RWA} to an \emph{RWD}-equivalent walk with a modified delay. Finally, for the case $k = 1$, we prove that when the underlying graph is complete, then the cover time is $\mathcal{O}(n\log n)$ (i.e. it matches the cover time on the static complete graph) under only a mild condition on the edge-existence probabilities determined by the stochastic rule.


Introduction
In the modern era of the Internet, modifications in a network topology can occur extremely frequently and in a disorderly way. Communication links may fail from time to time, while connections amongst terminals may appear or disappear intermittently. Thus, classical (static) network theory fails to capture such ever-changing processes. In an attempt to fill this void, different research communities have given rise to a variety of theories on dynamic networks. In the context of algorithms and distributed computing, such networks are usually referred to as temporal graphs [16]. A temporal graph is represented by a (possibly infinite) sequence of subgraphs of the same static graph. That is, the graph is evolving over a set of (discrete) time steps under a certain group of deterministic or stochastic rules of evolution. Such a rule can be edge- or graph-specific and may take as input some graph instances observed in previous time steps.
In this paper, we focus on stochastically evolving temporal graphs. We define a new model of evolution where there exists a single stochastic rule which is applied independently to each edge. Furthermore, our model is general in the sense that the underlying static graph is allowed to be a general connected graph, i.e. with no further constraints on its topology, and the stochastic rule can include any finite number of past observations. Assume now that a single mobile agent is placed on an arbitrary node of a temporal graph evolving under the aforementioned model. Next, the agent performs a simple random walk: at each time step, after the graph instance is fixed according to the model, the agent chooses uniformly at random a node amongst the neighbors of its current node and visits it. The cover time of such a walk is the expected number of time steps until the agent has visited each node at least once. Herein, we prove some first bounds on the cover time for a simple random walk as defined above, mostly via the use of Markovian theory.
Random walks constitute a very important primitive in terms of distributed computing. Examples include their use in information dissemination [1] and random network structure [3]; also, see the short survey in [7]. In this work, we consider a single random walk as a fundamental building block for other, more distributed, scenarios to follow.

Related Work
A paper closely related to ours is that of Clementi et al. [9], who consider the flooding time in Edge-Markovian dynamic graphs. In such graphs, each edge independently follows a one-step Markovian rule, and their model appears as a special case of ours (it matches our case k = 1). Further work under this Edge-Markovian paradigm includes [4,10].
Another work related to our paper is that of Avin et al. [2], who define the notion of a Markovian Evolving Graph, i.e. a temporal graph evolving over a set of graphs G_1, G_2, ..., where the process transits from G_i to G_j with probability p_{ij}, and consider random walk cover times. Note that their approach becomes intractable if applied to our case: each possible edge evolves independently, thence causing the state space to be of size 2^m, where m is the number of possible edges in our model.
Clementi et al. [11] study the broadcast problem when at each time step the graph is selected according to the well-known G_{n,p} model. Also, Yamauchi et al. [21] study the rendezvous problem for two agents on a ring when each edge of the ring independently appears at every time step with some fixed probability p. Lastly, there exist a few papers considering random walks on different models of stochastic graphs, e.g. [15,18,19], but without considering the cover time.
In the analysis to follow, we employ several seminal results around the theory of random walks and Markov chains. For random walks, we base our analysis on the seminal work in [1] and the electrical network theory presented in [8,12], while for results regarding the mixing time of a Markov chain we cite the textbooks [14,17].

Our Results
We define a new general model for stochastically evolving graphs where each possible edge evolves independently, but all of them evolve according to the same stochastic rule. Furthermore, the stochastic rule may take into account the last k states of a given edge. Special cases of our model have appeared in previous literature, e.g. in [11,21] for k = 0 and in the line of work starting from [9] for k = 1; however, they only consider special graph topologies (like the ring and the clique). On the other hand, the model we define is general in the sense that no assumptions, aside from connectivity, are made on the topology of the underlying graph, and any amount of history is allowed into the stochastic rule. Thence, we believe it can serve as a basis for more general results to follow, capturing search or communication tasks in such dynamic graphs.
We hereby provide the first known upper bounds on the cover time of a simple random walk taking place in such stochastically evolving graphs for k = 0 and k = 1. To do so, we make use of a simple, yet fairly useful, modified random walk, namely the Random Walk with a Delay (RWD), where at each time step the agent chooses uniformly at random amongst the incident edges of the static underlying graph and then waits for the chosen edge to become alive in order to traverse it. Moreover, we consider the natural random walk on such graphs, namely the Random Walk on What's Available (RWA), where at each time step the agent only considers the currently alive incident edges and chooses to traverse one of them uniformly at random.
For the case k = 0, that is, when each edge appears at each round with a fixed probability p, we prove that the cover time for RWD is upper bounded by 2m(n − 1)/p, where n (respectively m) is the number of vertices (respectively edges) of the underlying graph. The result can be obtained both by a careful mapping of the RWD walk to its corresponding simple random walk on the static graph and by generalizing the standard electrical network theory literature in [8,12]. Later, we proceed to prove that the cover time for RWA is upper bounded by 2m(n − 1)/(1 − (1 − p)^δ), where δ is the min degree of the underlying graph. The main idea here is to reduce RWA to an RWD walk where the per-step traversal probability of each edge is lower bounded by 1 − (1 − p)^δ.
For k = 1, the stochastic rule takes into account the previous state of the edge (one time step ago). If an edge was not present, then it becomes alive with probability p, whereas if it was alive, then it dies with probability q. Let τ_mix stand for the mixing time of this process. We prove that the RWD cover time is upper bounded by τ_mix + 2m(n − 1)(p^2 + q)/(p^2 + pq) by carefully computing the expected traversal delay at each step after mixing is attained. Moreover, we show another 2m(n − 1)/ξ_min bound by considering the minimum probability guarantee of existence at each round, i.e. ξ_min = min{p, 1 − q}, and we discuss the trade-off between these two bounds. As far as RWA is concerned, we upper bound its cover time by 2m(n − 1)/(1 − (1 − ξ_min)^δ), again by a reduction to an RWD-equivalent walk. Finally, we obtain a quite important result in the context of complete underlying graphs, where we prove an upper bound of O(n log n) (which matches the cover time for complete static graphs) under the soft restriction ξ_min ∈ Ω(log n/n) via some cautious coupon-collector-type arguments.

Outline
In Section 2, we provide preliminary definitions and results regarding important concepts and tools that we use in later sections. Then, in Section 3, we define our model of stochastically evolving graphs in a more rigorous fashion. Afterwards, in Sections 4 and 5, we provide the analysis of our cover time upper bounds when the current state of an edge is determined by its last 0 and 1 states, respectively. Finally, in Section 6, we cite some concluding remarks.

Preliminaries
Let us hereby define a few standard notions related to a simple random walk performed by a single agent on a simple connected graph G = (V, E). By d(v), we denote the degree (i.e. the number of neighbors) of a node v ∈ V. A simple random walk is a Markov chain where, for v, u ∈ V, we set p_{vu} = 1/d(v), if (v, u) ∈ E, and p_{vu} = 0, otherwise. That is, an agent performing the walk chooses the next node to visit uniformly at random amongst the set of neighbors of its current node. Given two nodes v, u, the expected time for a random walk starting from v to arrive at u is called the hitting time from v to u and is denoted by H_{vu}. The cover time of a random walk is the expected time until the agent has visited each node of the graph at least once. Let P stand for the stochastic matrix describing the transition probabilities for a random walk (or, in general, a discrete-time Markov chain), where p_{ij} denotes the probability of transition from node i to node j, p_{ij} ≥ 0 for all i, j, and Σ_j p_{ij} = 1 for all i. Then, the matrix P^t consists of the transition probabilities to move from one node to another after t time steps and we denote the corresponding entries as p^{(t)}_{ij}. Asymptotically, lim_{t→∞} P^t is referred to as the limiting distribution of P. A stationary distribution for P is a row vector π such that πP = π and Σ_i π_i = 1. That is, π is not altered after an application of P. If every state can be reached from any other in a finite number of steps (i.e. P is irreducible) and the transition probabilities do not exhibit periodic behavior with respect to time, i.e.
gcd{t : p^{(t)}_{ij} > 0} = 1, then the stationary distribution is unique and matches the limiting distribution; this result is often referred to as the Fundamental Theorem of Markov chains. The mixing time is the expected number of time steps until a Markov chain approaches its stationary distribution. Below, let p^{(t)}_i stand for the i-th row of P^t and tvd(t) = (1/2) max_i Σ_j |p^{(t)}_{ij} − π_j| stand for the total variation distance between p^{(t)}_i and π. We say that a Markov chain is ǫ-near to its stationary distribution at time t if tvd(t) ≤ ǫ. Then, we denote by τ(ǫ) the mixing time: the minimum value of t until a Markov chain is ǫ-near to its stationary distribution. A coupling (X_t, Y_t) is a joint stochastic process defined in a way such that X_t and Y_t are copies of the same Markov chain P when viewed marginally, and, once X_t = Y_t for some t, then X_{t′} = Y_{t′} for any t′ ≥ t. Also, let T_{xy} stand for the minimum expected time until the two copies meet, i.e. until X_t = Y_t for the first time, when starting from the initial states X_0 = x and Y_0 = y. We can now state the following Coupling Lemma correlating the coupling meeting time to the mixing time:

Lemma 1 (Lemma 4.4 [14]). Given any coupling (X_t, Y_t), if Pr(T_{xy} > t) ≤ ǫ for every pair of initial states (x, y), then τ(ǫ) ≤ t.

Furthermore, asymptotically, we need not care about the exact value of the total variation distance since, for any ǫ > 0, we can force the chain to be ǫ-near to its stationary distribution after a multiplicative time of log ǫ^{−1} steps due to the submultiplicativity of the total variation distance. Formally, it holds that tvd(kt) ≤ (2 · tvd(t))^k.

Fact 1. Suppose τ(ǫ_0) ≤ t for some Markov chain P and a constant 0 < ǫ_0 < 1. Then, for any 0 < ǫ < ǫ_0, it holds that τ(ǫ) ≤ t log ǫ^{−1}.
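To make the total variation distance and mixing time concrete, here is a minimal numerical sketch in Python; the 2-state chain below is an illustrative example, not one taken from the paper.

```python
import numpy as np

P = np.array([[0.9, 0.1],            # an illustrative 2-state chain
              [0.4, 0.6]])
# Stationary distribution: solve pi (P - I) = 0 together with sum(pi) = 1.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)

def tvd(t):
    # tvd(t) = (1/2) * max_i sum_j |P^t[i, j] - pi[j]|
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * max(np.abs(Pt[i] - pi).sum() for i in range(len(P)))

eps = 1e-3
tau = next(t for t in range(1, 1000) if tvd(t) <= eps)   # mixing time tau(eps)
```

For this chain, π = [0.8, 0.2] and tvd(t) decays geometrically, so τ(ǫ) is finite for every ǫ > 0, exactly as Fact 1 exploits.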

The Edge-Uniform Evolution Model
Let us define a novel model of a dynamically evolving graph. Let G = (V, E) stand for a simple, connected graph, from now on referred to as the underlying graph of our model. The number of nodes is given by n = |V|, while the number of edges is denoted by m = |E|. For a node v ∈ V, let N(v) = {u : (v, u) ∈ E} stand for the open neighborhood of v and d(v) = |N(v)| for the (static) degree of v. Note that we make no assumptions regarding the topology of G besides connectedness. We refer to the edges of G as the possible edges of our model. We consider evolution over a sequence of discrete time steps (namely 0, 1, 2, ...) and denote by G = (G_0, G_1, G_2, ...) the infinite sequence of graphs G_t = (V_t, E_t), where V_t = V and E_t ⊆ E. That is, G_t is the graph appearing at time step t and each edge e ∈ E is either alive (if e ∈ E_t) or dead (if e ∉ E_t) at time step t. Let R stand for a stochastic rule dictating the probability that a given possible edge is alive at any time step. We apply R at each time step and at each edge independently to determine the set of currently alive edges, i.e. the rule is uniform with regard to the edges. In other words, let e_t stand for a random variable where e_t = 1, if e is alive at time step t, or e_t = 0, otherwise. Then R determines the value of Pr(e_t = 1|H_t), where H_t is also determined by R and denotes the history (i.e. the values of e_{t−1}, e_{t−2}, ...) considered when deciding the existence of an edge at time step t. For instance, H_t = ∅ means no history is taken into account, while H_t = {e_{t−1}} means the previous state of e is taken into account when deciding its current state.
Overall, the aforementioned Edge-Uniform Evolution model (shortly EUE) is defined by the parameters G and R. In the following sections, we consider some special cases for R and provide first bounds for the cover time of G under this model. Each time step of evolution consists of two stages: in the first stage, the graph G_t is fixed for time step t following R, while in the second stage, the agent moves to a node in the closed neighborhood of its current position (possibly remaining at the same node). Notice that, since G is connected, the cover time under EUE is finite, since R essentially models edge-specific delays.
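One evolution step of the first stage can be sketched as follows; the function name, the edge encoding, and the particular one-step rule (a dead edge is born with probability p, an alive edge dies with probability q) are illustrative assumptions.

```python
import random

def evolve(possible_edges, alive, p, q, rng):
    # Apply the same stochastic rule R independently to every possible edge,
    # given its previous state (a history of length one in this sketch).
    new_alive = set()
    for e in possible_edges:
        if e in alive:
            if rng.random() >= q:     # alive edge survives w.p. 1 - q
                new_alive.add(e)
        else:
            if rng.random() < p:      # dead edge is born w.p. p
                new_alive.add(e)
    return new_alive

rng = random.Random(2)
E = [(0, 1), (1, 2), (2, 0)]          # possible edges of a toy triangle
alive = set()
p, q = 0.6, 0.3
hits, T = 0, 30000
for _ in range(T):
    alive = evolve(E, alive, p, q, rng)
    if (0, 1) in alive:
        hits += 1
freq = hits / T                       # long-run alive frequency of one edge
```

In the long run the alive frequency of each edge approaches p/(p + q), foreshadowing the stationary analysis of Section 5.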

Cover Time with Zero-Step History
We hereby analyze the cover time of G under EUE in the special case when no history is taken into consideration for computing the probability that a given edge is alive at the current time step. Intuitively, each edge appears with a fixed probability p at every time step, independently of the others. More formally, for all e ∈ E and time steps t, Pr(e_t = 1) = p ∈ [0, 1].

Random Walk with a Delay
A first approach toward covering G with a single agent is the following: the agent randomly walks on G as if all edges were present and, when an edge is not present, it just waits for it to appear in a following time step. More formally, suppose the agent arrives at a node v ∈ V with (static) degree d(v) at the second stage of time step t. Then, after the graph is fixed for time step t + 1, the agent selects a neighbor of v, say u ∈ N(v), uniformly at random, i.e. with probability 1/d(v). If (v, u) ∈ E_{t+1}, then the agent moves to u and repeats the above procedure. Otherwise, it remains on v until the first time step t′ > t + 1 such that (v, u) ∈ E_{t′} and then moves to u. This way, p acts as a delay probability, since the agent follows the same random walk it would on a static graph, but with an expected delay of 1/p time steps per edge traversal. Notice that, in order for such a strategy to be feasible, each node must maintain knowledge about its neighbors in the underlying graph, not just the currently alive ones. From now on, we refer to this strategy for the agent as the Random Walk with a Delay (shortly RWD). Now, let us upper bound the cover time of RWD by exploiting its strong correlation to a simple random walk on the underlying graph G. Below, let C_G stand for the cover time of a simple random walk on the static graph G.
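The waiting behavior of RWD can be sketched with a small Monte Carlo simulation; the 4-cycle instance, the value p = 0.5, and the function name rwd_cover_time are illustrative assumptions.

```python
import random

def rwd_cover_time(adj, p, rng):
    # One RWD run: pick a static neighbor uniformly, then spend one time
    # step per existence trial until the chosen edge is alive (each trial
    # succeeds independently with probability p), and traverse it.
    n = len(adj)
    v = 0
    visited = {v}
    steps = 0
    while len(visited) < n:
        u = rng.choice(adj[v])            # uniform choice over static neighbors
        steps += 1                        # first existence trial
        while rng.random() >= p:          # keep waiting while the edge is dead
            steps += 1
        v = u
        visited.add(v)
    return steps

rng = random.Random(0)
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
p = 0.5
avg = sum(rwd_cover_time(cycle4, p, rng) for _ in range(2000)) / 2000
# Theorem 2 predicts C_G / p; for the static 4-cycle C_G = 6, so about 12,
# well within the general bound 2m(n - 1)/p = 48.
```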
Theorem 2. For any connected underlying graph G, the cover time under RWD is C_G/p.
Proof. Consider a simple random walk, shortly SRW, and an RWD (under the EUE model) taking place on a given connected graph G. Given that RWD decides on the next node to visit uniformly at random based on the underlying graph, that is, in exactly the same way SRW does, we use a coupling argument to enforce RWD and SRW to follow the exact same trajectory (i.e. sequence of visited nodes) in G.
Then, let the trajectory end when each node in G has been visited at least once and denote by T the total number of node transitions made by the agent. Such a trajectory under SRW will cover all nodes in expectedly E[T] = C_G time steps. On the other hand, in the RWD case, for each transition we have to take into account the delay experienced until the chosen edge becomes available. Let D_i ≥ 1, where 1 ≤ i ≤ T, be a random variable standing for the actual delay corresponding to node transition i in the trajectory. Then, the expected number of time steps till the trajectory is realized is given by E[D_1 + … + D_T]. Since the random variables D_i are independent and identically distributed (by the edge-uniformity of our model), T is a stopping time for them, and all of them have finite expectations, we can apply Wald's Equation [20] to get E[D_1 + … + D_T] = E[T] · E[D_1] = C_G/p, since each D_i is geometrically distributed with success probability p. For an explicit general bound on RWD, it suffices to use C_G ≤ 2m(n − 1), proved by Aleliunas et al. in [1].
A Modified Electrical Network. Another way to analyze the above procedure is to make use of a modified version of the standard literature approach of electrical networks and random walks [8,12]. This point of view additionally gives us expressions for the hitting time between any two nodes of the underlying graph. That is, we hereby (in Lemmata 3, 4 and Theorem 5) provide a generalization of the results given in [8,12], thus correlating the hitting and commute times of RWD to an electrical network analog and reaching a conclusion for the cover time similar to the one of Theorem 2.
In particular, given the underlying graph G, we design an electrical network, N(G), with the same edges as G, but where each edge has a resistance of r = 1/p ohms. Let H_{u,v} stand for the hitting time from node u to node v in G, i.e. the expected number of time steps until the agent reaches v after starting from u and following RWD. Furthermore, let φ_{u,v} declare the electrical potential difference between nodes u and v in N(G) when, for each w ∈ V, we inject d(w) amperes of current into w and withdraw 2m amperes of current from a single node v. We now upper-bound the cover time of G under RWD by correlating H_{u,v} to φ_{u,v}.

Lemma 3. For all u, v ∈ V, it holds that H_{u,v} = φ_{u,v}.
Proof. Let us denote by C_{uw} the current flowing between two neighboring nodes u and w. Then, d(u) = Σ_{(u,w)∈E} C_{uw}, since at each node the total inward current must match the total outward current (Kirchhoff's first law). Moving forward, C_{uw} = φ_{uw}/r = φ_{uw}/(1/p) = p · φ_{uw} by Ohm's law. Finally, φ_{uw} = φ_{uv} − φ_{wv}, since the sum of electrical potential differences forming a path is equal to the total electrical potential difference of the path (Kirchhoff's second law). Overall, we get d(u) = Σ_{(u,w)∈E} p(φ_{u,v} − φ_{w,v}), and hence φ_{u,v} = 1/p + (1/d(u)) Σ_{(u,w)∈E} φ_{w,v}. As far as the hitting time from u to v is concerned, we rewrite it based on the first step: H_{u,v} = 1/p + (1/d(u)) Σ_{(u,w)∈E} H_{w,v}, since the first addend represents the expected number of steps for the selected edge to appear due to RWD, and the second addend stands for the expected time for the rest of the walk. Wrapping it up, since both formulas above hold for each u ∈ V \ {v}, therefore inducing two identical linear systems of n equations and n variables, it follows that there exists a unique solution to both of them and H_{u,v} = φ_{u,v}.
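The linear system in the proof can be checked numerically; the sketch below (the path graph and the value of p are assumed toy choices) solves H_{u,v} = 1/p + (1/d(u)) Σ_{(u,w)∈E} H_{w,v} and confirms the resistance-scaling intuition that RWD hitting times equal the static ones divided by p.

```python
import numpy as np

def rwd_hitting_times(adj, p, v):
    # Solve the linear system H[u] = 1/p + (1/d(u)) * sum_{w in N(u)} H[w],
    # with H[v] = 0, for every u != v.
    n = len(adj)
    others = [u for u in range(n) if u != v]
    idx = {u: i for i, u in enumerate(others)}
    A = np.eye(n - 1)
    b = np.full(n - 1, 1.0 / p)
    for u in others:
        for w in adj[u]:
            if w != v:
                A[idx[u], idx[w]] -= 1.0 / len(adj[u])
    h = np.linalg.solve(A, b)
    return {u: h[idx[u]] for u in others}

path3 = {0: [1], 1: [0, 2], 2: [1]}
p = 0.25
H = rwd_hitting_times(path3, p, v=2)           # RWD hitting times toward node 2
H_static = rwd_hitting_times(path3, 1.0, v=2)  # p = 1 recovers the static walk
# Resistance scaling: every RWD hitting time is the static one divided by p.
```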
In the lemma below, let R_{u,v} stand for the effective resistance between u and v, i.e. the electrical potential difference induced when flowing a current of one ampere from u to v.

Lemma 4 ([8,12]). For all u, v ∈ V, H_{u,v} + H_{v,u} = 2m R_{u,v} holds.
Proof. Similarly to the definition of φ_{u,v} above, one can define φ_{v,u} as the electrical potential difference between v and u when d(w) amperes of current are injected into each node w and 2m of them are withdrawn from node u. Next, note that changing all currents' signs leads to a new network where, for the electrical potential difference, namely φ′, it holds that φ′_{u,v} = φ_{v,u}. We can now apply the Superposition Theorem (see Section 13.3 in [5]) and linearly superpose the two networks implied from φ_{u,v} and φ′_{u,v}, creating a new one where 2m amperes are injected into u, 2m amperes are withdrawn from v, and no current is injected or withdrawn at any other node. Let φ″_{u,v} stand for the electrical potential difference between u and v in this last network. By the superposition argument, we get φ″_{u,v} = φ_{u,v} + φ′_{u,v} = φ_{u,v} + φ_{v,u}, while, by the definition of effective resistance applied to the 2m amperes flowing from u to v, we get φ″_{u,v} = 2m R_{u,v}. The proof concludes by merging these two observations and applying Lemma 3.
Theorem 5. For any connected underlying graph G, the cover time under RWD is at most 2m(n − 1)/p.
Proof. Consider a spanning tree T of G. An agent, starting from any node, can visit all nodes by performing an Eulerian tour on the edges of T (crossing each edge twice). This is a feasible way to cover G and thus the expected time for an agent to finish the above task provides an upper bound on the cover time. The expected time to cover each edge of T twice is given by Σ_{(u,v)∈T} (H_{u,v} + H_{v,u}) = Σ_{(u,v)∈T} 2m R_{u,v} ≤ 2m(n − 1)/p, since T has n − 1 edges and, for any two neighboring nodes u and v, the effective resistance is at most the resistance of the direct edge between them, i.e. R_{u,v} ≤ 1/p.

Random Walk on what's Available
Random Walk with a Delay does provide a nice connection to electrical network theory. However, depending on p, there could be long periods of time where the agent is simply standing still on the same node. Since the walk is random anyway, waiting for an edge to appear may not sound very wise. Hence, we now analyze the strategy of a Random Walk on what's Available (shortly RWA). That is, suppose the agent has just arrived at a node v after the second stage at time step t and then E_{t+1} is fixed after the first stage at time step t + 1. Now, the agent picks uniformly at random only amongst the alive edges at time step t + 1, i.e. with probability 1/d_{t+1}(v), where d_{t+1}(v) stands for the degree of node v in G_{t+1}. The agent then follows the selected edge to complete the second stage of time step t + 1 and repeats the strategy. In a nutshell, the agent keeps moving randomly on available edges and only remains on the same node if no edge is alive at the current time step. Below, let δ = min_{v∈V} d(v) and ∆ = max_{v∈V} d(v).

Theorem 6. For any connected underlying graph G with min-degree δ, the cover time for RWA is at most 2m(n − 1)/(1 − (1 − p)^δ).
Proof. Suppose the agent follows RWA and has reached node u ∈ V after time step t. Then, G_{t+1} becomes fixed and the agent selects uniformly at random a neighboring edge to move to. Let M_{uv} (where v ∈ {w ∈ V : (u, w) ∈ E}) stand for a random variable taking value 1 if the agent moves to node v and 0 otherwise. For k = 1, 2, ..., d(u) = d, let A_k stand for the event that d_{t+1}(u) = k. Therefore, Pr(A_k) = C(d, k) p^k (1 − p)^{d−k}, where C(d, k) denotes the binomial coefficient, is exactly the probability that k out of the d edges exist, since each edge exists independently with probability p. Now, let us consider the probability Pr(M_{uv} = 1|A_k): the probability that v will be reached given that k neighbors are present. This is exactly the product of the probability that v is indeed in the chosen k-tuple (say p_1) and the probability that v is then chosen uniformly at random (say p_2) from the k-tuple. Indeed, p_1 = C(d − 1, k − 1)/C(d, k) = k/d and p_2 = 1/k, hence Pr(M_{uv} = 1|A_k) = 1/d and, summing over all k ≥ 1, Pr(M_{uv} = 1) = (1 − (1 − p)^d)/d.
To conclude, let us reduce RWA to RWD. Indeed, in RWD the equivalent transition probability is Pr(M_{uv} = 1) = (1/d) · p, accounting both for the uniform choice and the delay p. Therefore, the RWA transition probability can be viewed as (1/d) · p′ where p′ = 1 − (1 − p)^d. To achieve edge-uniformity, we set p′ = 1 − (1 − p)^δ, which lower bounds the per-step traversal probability of each edge, and finally we can apply the same RWD analysis by substituting p′ for p. Applying Theorem 5 completes the proof.
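The reduction hinges on the identity Pr(M_{uv} = 1) = (1 − (1 − p)^d)/d, which the following sketch verifies by brute-force enumeration; the degree d and probability p are assumed sample values.

```python
from itertools import combinations

def rwa_transition_prob(d, p):
    # Enumerate every nonempty subset S of the d incident edges; S is the
    # alive set with probability p^|S| (1-p)^(d-|S|), and the fixed target
    # neighbor (labeled 0) is reached with probability 1/|S| when 0 is in S.
    total = 0.0
    for k in range(1, d + 1):
        for S in combinations(range(d), k):
            if 0 in S:
                total += p**k * (1 - p)**(d - k) / k
    return total

d, p = 4, 0.3
exact = rwa_transition_prob(d, p)
closed_form = (1 - (1 - p)**d) / d
```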
The value of δ used to lower-bound the transition probability may be a harsh estimate for general graphs. However, it becomes considerably more accurate in the special case of a d-regular underlying graph, where δ = ∆ = d.

Cover Time with One-Step History
We now turn our attention to the case where the current state of an edge affects its next state. That is, we take into account a history of length one when computing the probability of existence for each edge independently. A Markovian model for this case was introduced in [9]; see Table 1. The left side of the table accounts for the current state of an edge, while the top for the next one. The respective table box provides us with the probability of transition from one state to the other. Intuitively, another way to refer to this model is as the Birth-Death model: a dead edge becomes alive with probability p, while an alive edge dies with probability q.
          dead     alive
dead      1 − p    p
alive     q        1 − q

Table 1: Birth-Death chain for a single edge [9]

Let us now consider an underlying graph G evolving under the EUE model where each possible edge independently follows the aforementioned stochastic rule of evolution. In order to bound the RWD cover time, we apply a two-step analysis. First, we bound the mixing time of the Markov chain defined by Table 1, initially for a single edge and then for the whole graph by considering all m independent edge processes evolving together. Lastly, we estimate the cover time for a single agent after each edge has reached the stationary state of Birth-Death.
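As a quick sanity check of the chain in Table 1, the sketch below (with assumed sample values for p and q) solves πP = π numerically and recovers the stationary distribution [q/(p+q), p/(p+q)] used in the analysis of Lemma 10.

```python
import numpy as np

p, q = 0.2, 0.5                      # assumed sample birth/death parameters
P = np.array([[1 - p, p],            # row/column order: state 0 = dead, 1 = alive
              [q, 1 - q]])
# Solve pi P = pi together with the normalization sum(pi) = 1.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
# pi should equal [q/(p+q), p/(p+q)].
```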
On the other hand, for RWA, we make use of the "being alive" probabilities ξ_min = min{p, 1 − q} and ξ_max = max{p, 1 − q} in order to bound the cover time by following a similar argument to the one of Theorem 6 (starting again from an RWD analysis). In the special case of a complete underlying graph, we employ a coupon-collector-like argument to achieve an improved upper bound.

RWD for General (p, q)-Graphs via Mixing
As a first step, let us prove the following upper-bound inequality, which helps us break our analysis to follow into two separate phases.
Lemma 7. Let τ(ǫ) stand for the mixing time for the whole-graph chain up to some total variation distance ǫ > 0, C_{τ(ǫ)} for the expected time to cover all nodes after time step τ(ǫ), and C for the cover time of G under RWD. Then, C ≤ τ(ǫ) + C_{τ(ǫ)} holds.
Proof. The upper bound is easy to see since RWD covers a subset V_0 ⊆ V until mixing occurs and then, after the mixing time τ(ǫ), we require RWD to cover the whole node-set V, including the already visited nodes of V_0. That is, we discard any progress made by the walk during the first τ(ǫ) time steps and require a full cover to occur afterwards.
The above upper bound discards some walk progress; however, intuitively, this loss may be negligible in some cases: if the mixing is rapid, then the cover time C_{τ(ǫ)} dominates the sum, whereas, if the mixing is slow, this may mean that edges appear rarely and thence little progress can be made anyway.
Phase I: Mixing Time. Let P stand for the Birth-Death Markov chain given in Table 1. It is easy to see that P is irreducible and aperiodic, and therefore its limiting distribution matches its stationary distribution and is unique. We hereby provide a coupling argument to upper-bound the mixing time of the Birth-Death chain for a single edge. Let X_t, Y_t stand for two copies of the Birth-Death chain given in Table 1, where X_t = 1 if the edge is alive at time step t and X_t = 0 otherwise. We need only consider the initial case X_0 ≠ Y_0. For any t ≥ 1, given that the copies have not yet met, they meet at time step t exactly when the dead edge turns alive while the alive edge remains alive, or the alive edge dies while the dead edge remains dead; this happens with probability p(1 − q) + q(1 − p).

Definition 1. Let p_0 = p(1 − q) + q(1 − p) denote the meeting probability under the above Birth-Death coupling for a single time step.
We now bound the mixing time of Birth-Death for a single edge.

Lemma 8. The mixing time of Birth-Death for a single edge is O(p_0^{−1}).

Proof. Let T_{xy} denote the meeting time of X_t and Y_t, i.e. the first occurrence of a time step t such that X_t = Y_t. We now compute the probability that the two chains meet at a specific time step t ≥ 1: Pr(T_{xy} = t) = (1 − p_0)^{t−1} p_0, where we make use of the total probability law and the one-step Markovian evolution. Finally, we accumulate and then bound the probability that the meeting time is greater than some time-value t: Pr(T_{xy} > t) = (1 − p_0)^t ≤ e^{−p_0 t}, and the claim follows by applying Lemma 1.

The above result analyzes the mixing time for a single edge of the underlying graph G. In order to be mathematically accurate, let us extend this to the Markovian process accounting for the whole graph G. Let G_t, H_t stand for two copies of the Markov chain consisting of m independent Birth-Death chains, one per edge. Initially, we define a graph G* = (V*, E*) such that V* = V and E* ⊆ E; any graph with these properties is fine. We set G_0 = G* and H_0 = (V, E \ E*), which is a worst-case starting point since each pair of respective G, H edges has exactly one alive and one dead edge. To complete the description of our coupling, we enforce that when a pair of respective edges meets, i.e. when the coupling for a single edge as described in the proof of Lemma 8 becomes successful, then both edges stop applying the Birth-Death rule and remain at their current state. Similarly to before, let T_{G,H} stand for the meeting time of the two above defined copies, that is, the time until all pairs of respective edges have met. Furthermore, let T^e_{x,y} stand for the meeting time associated with edge e ∈ E.

Lemma 9. The mixing time for any underlying graph G where each edge independently applies the Birth-Death rule is O(p_0^{−1} log m).
Proof. To start with, we calculate the probability that the meeting time is bounded by some value t: Pr[T_{G,H} ≤ t] = Π_{e∈E} Pr[T^e_{x,y} ≤ t] ≥ (1 − e^{−p_0 t})^m ≥ 1 − m e^{−p_0 t}, where we successively applied the fact that the edges are independent, Bernoulli's inequality stating (1 + x)^r ≥ 1 + rx for every r ≥ 1 and any x ≥ −1, and the already seen inequality 1 − x ≤ e^{−x}. Moving forward, Pr[T_{G,H} > t] ≤ m e^{−p_0 t} and, after setting t = α p_0^{−1} log m for some α ≥ 2, we derive that Pr[T_{G,H} > α p_0^{−1} log m] ≤ m^{1−α}. Applying Lemma 1 gives τ(m^{1−α}) ≤ α p_0^{−1} log m.
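The coupling behind Lemmata 8 and 9 can be illustrated empirically: two independent copies of the single-edge chain, started in different states, first agree after a geometrically distributed number of steps with mean 1/p_0. The values of p and q below are assumed for illustration.

```python
import random

def step(state, p, q, rng):
    # One Birth-Death transition for a single edge (0 = dead, 1 = alive).
    if state == 0:
        return 1 if rng.random() < p else 0
    return 0 if rng.random() < q else 1

def meeting_time(p, q, rng):
    x, y = 0, 1                       # worst case: the copies start apart
    t = 0
    while x != y:
        x, y = step(x, p, q, rng), step(y, p, q, rng)
        t += 1
    return t

rng = random.Random(1)
p, q = 0.3, 0.4                       # assumed sample values
p0 = p * (1 - q) + q * (1 - p)        # per-step meeting probability
avg = sum(meeting_time(p, q, rng) for _ in range(20000)) / 20000
# avg should be close to 1/p0, the mean of a geometric distribution.
```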
Phase II: Cover Time After Mixing. We can now proceed to apply Lemma 7 by computing the expected time for RWD to cover G after mixing is attained. As before, we use the notation C_{τ(ǫ)} to denote the cover time after the whole-graph process has mixed to some distance ǫ > 0 from its stationary state in time τ(ǫ). The following remark is key in our motivation toward the use of stationarity.
Fact 2. Let D be a random variable capturing the number of time steps until a possible edge becomes alive under RWD once the agent selects it for traversal. For any time step t ≥ τ(ǫ), the expected delay for any single edge traversal e under RWD is the same and equals E[D|e_t = 1] Pr(e_t = 1) + E[D|e_t = 0] Pr(e_t = 0).
That is, due to the uniformity of our model, all edges behave similarly. Furthermore, after convergence to stationarity has been achieved, when an agent picks a possible edge for traversal under RWD, the probability Pr(e_t = 1) that the edge is alive at any time step t ≥ τ(ǫ) is given by the stationary distribution in a simpler formula and can be regarded independently of the edge's previous state(s).
Lemma 10. For any constant $0 < \epsilon < 1$ and $\epsilon' = \epsilon \cdot \frac{\min\{p,q\}}{p+q}$, it holds that $C_{\tau(\epsilon')} \le 2m(n-1)(1+2\epsilon)\frac{p^2+q}{p^2+pq}$.

Proof. We compute the stationary distribution $\pi$ of the Birth-Death chain $P$ by solving the system $\pi P = \pi$, which gives $\pi = \left[\frac{q}{p+q}, \frac{p}{p+q}\right]$. From now on, we only consider time steps $t \ge \tau(\epsilon')$, i.e. after the chain has mixed, for $\epsilon' = \epsilon \cdot \frac{\min\{p,q\}}{p+q} \in (0,1)$. We have $tvd(t) = \frac{1}{2}\max_i \sum_j |p^{(t)}_{ij} - \pi_j| \le \epsilon'$, implying that for any edge $e$ we get $\Pr(e_t = 1) \le (1+2\epsilon)\frac{p}{p+q}$ and, similarly, $\Pr(e_t = 0) \le (1+2\epsilon)\frac{q}{p+q}$. Let us now estimate the expected delay until the possible edge chosen by RWD at some time step $t$ becomes alive. If the selected possible edge is alive, then the agent moves along it with no delay (i.e. we count 1 step). Otherwise, if the selected possible edge is currently dead, then the agent waits until the edge becomes alive, which takes $1/p$ time steps in expectation due to the Birth-Death chain rule. Overall, conditioning on these two cases, the expected delay is at most $(1+2\epsilon)\left(\frac{p}{p+q} \cdot 1 + \frac{q}{p+q} \cdot \frac{1}{p}\right) = (1+2\epsilon)\frac{p^2+q}{p^2+pq}$.
Since for any time $t \ge \tau(\epsilon')$ and any edge $e$ the expected traversal delay is the same, we can bound the cover time by considering an electrical network where each resistance equals $(1+2\epsilon)\frac{p^2+q}{p^2+pq}$. Applying Theorem 5 completes the proof.
The following theorem is proven directly by plugging the bounds computed in Lemmata 9 and 10 into the inequality of Lemma 7.
Theorem 11. For any connected underlying graph $G$ and the Birth-Death rule, the cover time of RWD is $\mathcal{O}\left(p_0^{-1}\log m + mn \cdot \frac{p^2+q}{p^2+pq}\right)$.
5.2 RWD and RWA for General (p, q)-Graphs via Min-Max

In the previous subsection, we employed a mixing-time argument in order to reduce the final part of the proof to the zero-step history case. Let us now derive another upper bound for the cover time of RWD (and then extend it to RWA) via a min-max approach. The idea is to use the "being alive" probabilities to prove lower and upper bounds for the cover time parameterized by $\xi_{min} = \min\{p, 1-q\}$ and $\xi_{max} = \max\{p, 1-q\}$.
Let us consider an RWD walk on a general connected graph $G$ evolving under EUE with a zero-step history rule dictating $\Pr(e_t = 1) = \xi_{min}$ for any edge $e$ and any time step $t$. We refer to this walk as the Upper Walk with a Delay, shortly UWD. Below, we make use of UWD in order to bound the cover time of RWD and RWA on general $(p,q)$-graphs.

Lemma 12. For any connected underlying graph $G$ and the Birth-Death rule, the cover time of RWD is at most $2m(n-1)/\xi_{min}$.
Proof. Regarding UWD, one can design a corresponding electrical network where each edge has a resistance of $1/\xi_{min}$, capturing the expected delay until any possible edge becomes alive. Applying Theorem 5 gives an upper bound of $2m(n-1)/\xi_{min}$ on the UWD cover time.
Let $C'$ stand for the UWD cover time and $C$ for the cover time of RWD under the Birth-Death rule. It now suffices to show $C \le C'$ to conclude the lemma.
Under Birth-Death, the expected delay before each edge traversal is either $1/p$ (in case the possible edge is dead) or $1/(1-q)$ (in case the possible edge is alive). In both cases, the expected delay is upper-bounded by the $1/\xi_{min}$ delay of UWD, and therefore $C \le C'$ follows, since any trajectory under RWD takes at most as much time as the same trajectory under UWD.
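The domination argument above amounts to two elementary inequalities. A small sketch (parameter values chosen arbitrarily) that checks them and evaluates the resulting Lemma 12 bound:

```python
def uwd_cover_bound(m, n, p, q):
    """Lemma 12 bound 2m(n-1)/xi_min, where xi_min = min{p, 1-q}."""
    xi_min = min(p, 1 - q)
    # UWD's per-edge delay dominates both Birth-Death cases:
    assert 1 / xi_min >= 1 / p        # delay when the possible edge is dead
    assert 1 / xi_min >= 1 / (1 - q)  # delay when it is alive
    return 2 * m * (n - 1) / xi_min

print(uwd_cover_bound(m=10, n=5, p=0.5, q=0.25))  # xi_min = 0.5 here
```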
Notice that the above upper bound improves over the one in Theorem 11 in a wide range of cases, especially when $q$ is very small. For example, when $q = \Theta(m^{-k})$ for some $k \ge 2$ and $p = \Theta(1)$, Lemma 12 gives $\mathcal{O}(mn)$, whereas Theorem 11 gives $\mathcal{O}(m^k)$ since the mixing time dominates the sum. On the other hand, for relatively large values of $p$ and $q$, e.g. both in $\Omega(1/m)$, mixing is rapid and the upper bound of Theorem 11 proves better.
Let us now turn our attention to the RWA case with the subsequent theorem.

Theorem 13. For any connected underlying graph $G$ with minimum degree $\delta$ and the Birth-Death rule, the cover time of RWA is at most $2m(n-1)/\left(1-(1-\xi_{min})^{\delta}\right)$.
Proof. Suppose the agent follows RWA with some stochastic rule $R$ of the form $\Pr(e_t = 1 \mid H_t)$, which incorporates some history $H_t$ when making a decision about an edge at time step $t$. Let us proceed in a fashion similar to the proof of Theorem 6. Assume the agent follows RWA and has reached node $u \in V$ after time step $t$. Then $G_{t+1}$ becomes fixed and the agent selects uniformly at random an alive neighboring node to move to. Let $M_{uv}$ (where $v$ is a neighbor of $u$) stand for a random variable taking value 1 if the agent moves to $v$ at time step $t+1$ and 0 otherwise. For $k = 0, 1, 2, \ldots, d(u) = d$, let $A_k(H_t)$ stand for the event that $d_{t+1} = k$ given some history $H_t$ about all incident possible edges of $u$. We compute $\Pr(M_{uv} = 1) = \sum_{k=1}^{d} \Pr(M_{uv} = 1 \mid A_k(H_t)) \Pr(A_k(H_t))$. Similarly to the proof of Theorem 6, $\Pr(M_{uv} = 1 \mid A_k(H_t)) = p_1 \cdot p_2 = 1/d$, where $p_1$ is the probability that $v$ is indeed in the chosen $k$-tuple (since the model is edge-uniform, we can fix $v$ and choose any of the $\binom{d-1}{k-1}$ $k$-tuples containing $v$ out of the $\binom{d}{k}$ total ones, i.e. $p_1 = k/d$) and $p_2$ is the probability that it is then chosen uniformly at random from the $k$-tuple, i.e. $p_2 = 1/k$. Thus, we get $\Pr(M_{uv} = 1) = \frac{1}{d}\left(1 - \Pr(A_0(H_t))\right)$, where $A_0$ is the event that no incident edge becomes alive at this time step. Moving forward, by definition, UWD depicts a zero-step-history RWD walk; let us denote by UWA its corresponding RWA walk. Furthermore, let $P_U$ be the probability $\Pr(M_{uv} = 1)$ under the UWA walk. Then, substituting $p$ by $\xi_{min}$ in Theorem 6, we get $P_U = \frac{1}{d}\left(1 - (1-\xi_{min})^{d}\right)$. In the Birth-Death model, we know $\Pr(A_0(H_t)) \le (1-\xi_{min})^{d}$, since each possible edge becomes alive with probability at least $\xi_{min}$. Thus, it follows that $P_U \le \Pr(M_{uv} = 1)$.
To wrap up, UWA can be viewed as an RWD walk with delay probability $1 - (1-\xi_{min})^{d}$, which lower-bounds the probability $1 - \Pr(A_0(H_t))$ associated with RWA. Inverting the inequality to account for the delays, we have $C \le C_U$ for the respective cover times. Finally, Theorem 6 gives $C_U \le 2m(n-1)/\left(1-(1-\xi_{min})^{\delta}\right)$.
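The key identity used in the proof, $\Pr(M_{uv} = 1) = \frac{1}{d}(1 - \Pr(A_0))$ in the zero-step-history case, can be validated by simulation. A minimal sketch with an arbitrary degree $d$ and per-edge alive probability:

```python
import random

rng = random.Random(1)
d, xi = 5, 0.4           # degree and per-edge alive probability (zero-step history)
trials = 200_000
moves = [0] * d          # how often each neighbor is chosen
stuck = 0                # steps where no incident edge is alive (event A_0)

for _ in range(trials):
    alive = [v for v in range(d) if rng.random() < xi]
    if alive:
        moves[rng.choice(alive)] += 1   # uniform choice among alive edges
    else:
        stuck += 1

p_a0 = (1 - xi) ** d              # P(A_0): no incident edge alive
per_neighbor = (1 - p_a0) / d     # claimed P(M_uv = 1), identical for all v
assert abs(stuck / trials - p_a0) < 0.01
for count in moves:
    assert abs(count / trials - per_neighbor) < 0.01
```

Each neighbor is hit with the same empirical frequency, illustrating why the walk behaves like an RWD with delay probability $1 - \Pr(A_0)$.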

RWA for Complete (p, q)-Graphs
We now proceed towards providing an upper bound for the cover time in the special case where the underlying graph $G$ is complete, i.e. between any two nodes there exists a possible edge for our model. We utilize the special topology of $G$ to come up with a different analytical approach and derive a better upper bound than the one given in Theorem 13. In this case, let $|V| = n+1$ to make the calculations to follow more presentable; in other words, each node has $n$ possible neighbors. Below, again, let $\xi_{min} = \min\{p, 1-q\}$ and $\xi_{max} = \max\{p, 1-q\}$. Also, let $d_t(v)$ stand for a random variable, depending on the Birth-Death process, denoting the actual degree of $v \in V$ at time step $t$. Since all nodes have the same static degree, we simplify the notation to $d_t$.

Lemma 14. For some constants $\beta \in (0,1)$ and $\alpha \ge 3/\beta^2$, if $\xi_{min} \ge \alpha \frac{\log n}{n}$, then it holds with high probability that $d_t \in [(1-\beta)\xi_{min} n, (1+\beta)\xi_{max} n]$.
Proof. We provide a lower and upper bound for the expected value of $d_t$ and determine the necessary condition under which $d_t$ remains near its expected value. Given $d_{t-1}$, we get the expression $E[d_t \mid d_{t-1}] = p(n - d_{t-1}) + (1-q)d_{t-1}$. Then, it follows that $\xi_{min} n \le E[d_t \mid d_{t-1}] \le \xi_{max} n$ and, by applying the expectation again, we get $E[\xi_{min} n] \le E[E[d_t \mid d_{t-1}]] \le E[\xi_{max} n]$, which is the same as $\xi_{min} n \le E[d_t] \le \xi_{max} n$. We now bound the probability that $d_t$ deviates from its expected value by using the Chernoff bounds $\Pr[X \ge (1+\beta)\mu] \le e^{-\beta^2\mu/3}$ and $\Pr[X \le (1-\beta)\mu] \le e^{-\beta^2\mu/2}$, where $X$ is a sum of independent indicator random variables with mean $\mu$ (here, one indicator per incident possible edge).

Building on Lemma 14, the cover time of RWA on the complete graph is $\mathcal{O}(n \log n)$ whenever $\xi_{min} \ge \alpha \frac{\log n}{n}$. Notice that the latter bound matches exactly the cover time upper bound for a simple random walk on the complete static graph. Intuitively, the condition $\xi_{min} \in \Omega(\log n / n)$ indicates that the graph instance $G_t$ is almost surely connected at each time step $t$, given that each graph instance can be viewed as "lower-bounded" by a $G(n, \xi_{min})$ Erdős-Rényi graph. In other words, an expected degree of $\Omega(\log n)$ alive edges at each time step suffices to explore the complete graph in asymptotically the same time as when all $n$ of them are available.

Further Work

Our results can directly be extended to any history length considered by the stochastic rule. Of course, if we wish to take into account the last $k$ states of a possible edge, then we need to consider $2^k$ possible states, thus making some tasks computationally intractable for large $k$. On the other hand, the min-max guarantee is easier to deal with for any value of $k$. Finally, it remains open whether the $\mathcal{O}(n \log n)$ bound can be extended to a wider family of underlying graphs, thus making progress over the general bound stated in Theorem 13.

Our model seems to be on the opposite end of the Markovian evolving graph model introduced in [2]. There, the evolution of possible edges directly depends on the family of graphs selected as possible instances. Thus, a new research direction we suggest is to devise another model of partial edge-dependency.
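As a closing sanity check, the degree concentration claimed in Lemma 14 is easy to observe numerically. A minimal sketch, with parameters chosen arbitrarily so that $\xi_{min} \ge \alpha \log n / n$ holds, tracking one node's $n$ possible edges under the Birth-Death rule:

```python
import random

rng = random.Random(2)
n, p, q, beta = 2000, 0.05, 0.5, 0.5
xi_min, xi_max = min(p, 1 - q), max(p, 1 - q)

# One node's n possible edges, started at the stationary alive-probability.
edges = [rng.random() < p / (p + q) for _ in range(n)]
inside = 0
steps = 500
for _ in range(steps):
    # Birth-Death update: alive stays alive w.p. 1-q, dead turns alive w.p. p.
    edges = [rng.random() < (1 - q) if e else rng.random() < p for e in edges]
    d_t = sum(edges)
    if (1 - beta) * xi_min * n <= d_t <= (1 + beta) * xi_max * n:
        inside += 1

print(inside / steps)  # fraction of steps with d_t inside the Lemma 14 interval
```

With these values the interval is $[100, 1500]$ while $d_t$ concentrates around its stationary mean of roughly $n \cdot p/(p+q) \approx 182$, so essentially every step lands inside it.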