1. Introduction
In the modern era of the Internet, changes in a network's topology can occur extremely frequently and in a disorderly way. Communication links may fail from time to time, while connections amongst terminals may appear or disappear intermittently. Thus, classical (static) network theory fails to capture such ever-changing processes. In an attempt to fill this void, different research communities have given rise to a variety of theories on dynamic networks. In the context of algorithms and distributed computing, such networks are usually referred to as temporal graphs [1]. A temporal graph is represented by a (possibly infinite) sequence of subgraphs of the same static graph. That is, the graph evolves over a series of (discrete) time steps under a set of deterministic or stochastic rules of evolution. Such a rule can be edge- or graph-specific and may take as input graph instances observed in previous time steps.
In this paper, we focus on stochastically-evolving temporal graphs. We define a model of evolution, where there exists a single stochastic rule, which is applied independently to each edge. Furthermore, our model is general in the sense that the underlying static graph is allowed to be a general connected graph, i.e., with no further constraints on its topology, and the stochastic rule can include any finite number of past observations.
Assume now that a single mobile agent is placed on an arbitrary node of a temporal graph evolving under the aforementioned model. Next, the agent performs a simple random walk; at each time step, after the graph instance is fixed according to the model, the agent chooses uniformly at random a node amongst the neighbours of its current node and visits it. The cover time of such a walk is defined as the expected number of time steps until the agent has visited each node at least once. Herein, we prove some first bounds on the cover time for a simple random walk as defined above, mostly via the use of Markovian theory.
Random walks constitute a very important primitive in terms of distributed computing. Examples include their use in information dissemination [2] and random network structure [3]; also, see the short survey in [4]. In this work, we consider a single random walk as a fundamental building block for other, more distributed, scenarios to follow.
1.1. Related Work
A paper very relevant to ours is that of Clementi, Macci, Monti, Pasquale and Silvestri [5], who considered the flooding time in edge-Markovian dynamic graphs. In such graphs, each edge independently follows a one-step Markovian rule, and their model appears as a special case of ours (it matches our case $k = 1$). Further work under this edge-Markovian paradigm includes [6,7].
Another work related to our paper is the one of Avin, Koucký and Lotker [8], who defined the notion of a Markovian evolving graph, i.e., a temporal graph evolving over a set of graphs $\{G_1, G_2, \ldots\}$, where the process transits from $G_i$ to $G_j$ with probability $p_{ij}$, and considered random walk cover times. Note that their approach becomes computationally intractable if applied to our case; each of the possible edges evolves independently, hence causing the state space to be of size $2^m$, where $m$ is the number of possible edges in our model.
Clementi, Monti, Pasquale and Silvestri [9] studied the broadcast problem, when at each time step, the graph is selected according to the well-known $\mathcal{G}_{n,p}$ model. Furthermore, Yamauchi, Izumi and Kamei [10] studied the rendezvous problem for two agents on a ring, when each edge of the ring independently appears at every time step with some fixed probability $p$.
Moving to a more general scope, research in temporal networks is of interdisciplinary interest, since such networks are able to capture a wide variety of systems in physics, biology, social interactions and technology. For a view of the big picture, see the review in [11]. Several papers consider, mostly continuous-time, random walks on different models of temporal networks: in [12], the authors considered a walker navigating randomly on some specific empirical networks. Rocha and Masuda [13] studied a lazy version of a random walk, where the walker remains at its current node according to some sojourn probability. In [14], the authors studied the behaviour of a continuous-time random walk on a stationary and ergodic time-varying dynamic graph. Lastly, random walks with arbitrary waiting times were studied in [15], while random walks on stochastic temporal networks were surveyed in [16].
In the analysis to follow, we employ several classical results from the theory of random walks and Markov chains. For random walks, we base our analysis on the seminal work in [2] and the electrical network theory presented in [17,18]. For results on Markov chains, we cite the textbooks [19,20].
1.2. Our Results
We define a general model of stochastically-evolving graphs, where each possible edge evolves independently, but all of them evolve following the same stochastic rule. Furthermore, the stochastic rule may take into account the last $k$ states of a given edge. The motivation for such a model lies in several practical examples from networking, where the existence of an edge in the recent past makes its existence in the near future more likely, e.g., for telephone or Internet links. In some other cases, existence may mean that an edge has “served its purpose” and is now unlikely to appear in the near future, e.g., due to a high maintenance cost. The model is a discrete-time one, following previous work in the computer science literature. Moreover, as a starting point and for mathematical convenience, it is formalized as a synchronous system, where all possible edges evolve concurrently in distinct rounds (each round corresponding to a discrete time step).
Special cases of our model have appeared in previous literature, e.g., in [9,10] for $k = 0$ and in the line of work starting from [5] for $k = 1$; however, these works only consider special graph topologies (like the ring and the clique). On the other hand, the model we define is general in the sense that no assumptions, aside from connectivity, are made on the topology of the underlying graph, and any amount of history is allowed in the stochastic rule. Hence, we believe it can serve as a basis for more general results to follow, capturing search or communication tasks in such dynamic graphs.
We hereby provide the first known bounds on the cover time of a simple random walk taking place in such stochastically-evolving graphs for $k \in \{0, 1\}$. To do so, we make use of a simple, yet fairly useful, modified random walk, namely the Random Walk with a Delay (RWD), where at each time step, the agent chooses uniformly at random from the incident edges of the static underlying graph and then waits for the chosen edge to become alive in order to traverse it. Although this strategy may not appear naturally motivated, it can act as a handy tool when studying other, more natural, random walk models, as in the case of this paper. Indeed, we study the natural random walk on such graphs, namely the Random Walk on what is Available (RWA), where at each time step, the agent only considers the currently alive incident edges and chooses to traverse one of them uniformly at random.
For the case $k = 0$, that is, when each edge appears at each round with a fixed probability $p$ regardless of history, we prove that the cover time for RWD is upper bounded by $C_G/p$, where $C_G$ is the cover time of a simple random walk on the (static) underlying graph $G$. The result can be obtained both by a careful mapping of the RWD walk to its corresponding simple random walk on the static graph and by generalizing the standard electrical network theory literature in [17,18]. Later, we proceed to prove that the cover time for RWA is between $C_G/(1-(1-p)^{\Delta})$ and $C_G/(1-(1-p)^{\delta})$, where $\delta$, respectively $\Delta$, is the minimum, respectively maximum, degree of the underlying graph. The main idea here is to reduce RWA to an RWD walk, where at each step, the traversal delay is lower, respectively upper, bounded by $(1-(1-p)^{\Delta})^{-1}$, respectively $(1-(1-p)^{\delta})^{-1}$.
For $k = 1$, the stochastic rule takes into account the previous, one-time-step-ago, state of the edge. If an edge was not present, then it becomes alive with probability $p$, whereas if it was alive, then it dies with probability $q$. For RWD, we show a $C_G/p_{min}$ upper bound by considering the minimum probability guarantee of existence at each round, i.e., $p_{min} = \min\{p, 1-q\}$. Similarly, we show a $C_G/p_{max}$ lower bound, where $p_{max} = \max\{p, 1-q\}$.
Furthermore, we demonstrate an exact, albeit exponential-time, approach to determine the precise cover time value for a general setting of stochastically-evolving graphs, including the edge-independent model considered in this paper.
Finally, we conduct a series of experiments on calculating the cover time of RWA (the $k = 0$ case) on various underlying graphs. We compare our experimental results with the achieved theoretical bounds.
1.3. Outline
In Section 2, we provide preliminary definitions and results regarding important concepts and tools that we use in later sections. Then, in Section 3, we define our model of stochastically-evolving graphs in a more rigorous fashion. Afterwards, in Section 4 and Section 5, we provide the analysis of our cover time bounds when the current state of an edge depends on its last zero and one states, respectively. In Section 6, we demonstrate an exact approach for determining the cover time for general stochastically-evolving graphs. Then, in Section 7, we present some experimental results on the zero-step history RWA cover time and compare them to the corresponding theoretical bounds of Section 4. Finally, in Section 8, we offer some concluding remarks.
2. Preliminaries
Let us hereby define a few standard notions related to a simple random walk performed by a single agent on a simple connected graph $G = (V, E)$. By $d(v)$, we denote the degree, i.e., the number of neighbours, of a node $v \in V$. A simple random walk is a Markov chain where, for $v, u \in V$, we set $p_{vu} = 1/d(v)$, if $u$ is a neighbour of $v$, and $p_{vu} = 0$, otherwise. That is, an agent performing the walk chooses the next node to visit uniformly at random amongst the set of neighbours of its current node. Given two nodes $v, u$, the expected time for a random walk starting from $v$ to arrive at $u$ is called the hitting time from $v$ to $u$ and is denoted by $H_{vu}$. The cover time of a random walk is the expected time until the agent has visited each node of the graph at least once. Let $P$ stand for the stochastic matrix describing the transition probabilities for a random walk (or, in general, a discrete-time Markov chain), where $p_{ij}$ denotes the probability of transition from node $i$ to node $j$, $p_{ij} \ge 0$ for all $i, j$ and $\sum_j p_{ij} = 1$ for all $i$. Then, the matrix $P^t$ consists of the transition probabilities to move from one node to another after $t$ time steps, and we denote the corresponding entries as $p^{(t)}_{ij}$. Asymptotically, $\lim_{t \to \infty} P^t$ is referred to as the limiting distribution of $P$. A stationary distribution for $P$ is a row vector $\pi$ such that $\pi P = \pi$ and $\sum_i \pi_i = 1$. That is, $\pi$ is not altered after an application of $P$. If every state can be reached from any other in a finite number of steps, i.e., $P$ is irreducible, and the transition probabilities do not exhibit periodic behaviour with respect to time, i.e., $\gcd\{t : p^{(t)}_{ii} > 0\} = 1$, then the stationary distribution is unique, and it matches the limiting distribution (fundamental theorem of Markov chains). The mixing time is the expected number of time steps until a Markov chain approaches its stationary distribution.
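To make the above definitions concrete, the following minimal Python sketch (our own illustration; the helper and graph names are not from the paper) estimates the cover time of a simple random walk by direct simulation on a small example graph:

```python
import random

def srw_cover_time(adj, start, trials=2000, rng=random.Random(1)):
    """Estimate the cover time of a simple random walk on a connected
    graph given as an adjacency-list dict {node: [neighbours]}."""
    total = 0
    for _ in range(trials):
        v, covered, steps = start, {start}, 0
        while len(covered) < len(adj):
            v = rng.choice(adj[v])   # uniform move to a neighbour
            covered.add(v)
            steps += 1
        total += steps
    return total / trials

# 4-cycle: the cover time of the cycle on n nodes is n(n-1)/2, i.e., 6 here.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

For the cycle on $n$ nodes, the cover time is known to be $n(n-1)/2$ from every start node, so the estimate for the 4-cycle should concentrate around 6.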
In order to derive lower bounds for RWA, we use the following graph family, commonly known as lollipop graphs, which captures the maximum cover time for a simple random walk, e.g., see [21,22].

Definition 1. A lollipop graph consists of a clique on $k$ nodes and a path on $n - k$ nodes connected with a cut-edge, i.e., an edge whose deletion makes the graph disconnected.
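A lollipop graph is easy to construct programmatically. The sketch below (a hypothetical helper of our own, assuming nodes $0, \ldots, k-1$ form the clique) builds its adjacency list:

```python
def lollipop(k, n):
    """Adjacency list of a lollipop graph on n nodes: a clique on nodes
    0..k-1 and a path on nodes k..n-1, joined by the cut-edge (k-1, k)."""
    adj = {v: [] for v in range(n)}
    for u in range(k):              # clique part
        for w in range(u + 1, k):
            adj[u].append(w)
            adj[w].append(u)
    for u in range(k - 1, n - 1):   # path part, attached at node k-1
        adj[u].append(u + 1)
        adj[u + 1].append(u)
    return adj
```

For instance, `lollipop(3, 6)` yields a triangle on nodes 0, 1, 2 with a path 2–3–4–5 hanging off node 2, which has degree $k$.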
3. The Edge-Uniform Evolution Model
Let us define a general model of a dynamically-evolving graph. Let $G = (V, E)$ stand for a simple, connected graph, from now on referred to as the underlying graph of our model. The number of nodes is given by $n = |V|$, while the number of edges is denoted by $m = |E|$. For a node $v \in V$, let $N(v) = \{u \in V : (v, u) \in E\}$ stand for the open neighbourhood of $v$ and $d(v) = |N(v)|$ for the (static) degree of $v$. Note that we make no assumptions regarding the topology of $G$, besides connectedness. We refer to the edges of $G$ as the possible edges of our model. We consider evolution over a sequence of discrete time steps (namely $t = 0, 1, 2, \ldots$) and denote by $\mathcal{G} = (G_0, G_1, G_2, \ldots)$ the infinite sequence of graphs $G_t = (V, E_t)$, where $E_t \subseteq E$ and $t \ge 0$. That is, $G_t$ is the graph appearing at time step $t$, and each possible edge $e \in E$ is either alive (if $e \in E_t$) or dead (if $e \notin E_t$) at time step $t$.
Let $R$ stand for a stochastic rule dictating the probability that a given possible edge is alive at any time step. We apply $R$ at each time step and at each edge independently to determine the set of currently alive edges, i.e., the rule is uniform with regard to the edges. In other words, let $X_t(e)$ stand for a random variable where $X_t(e) = 1$, if $e$ is alive at time step $t$, or $X_t(e) = 0$, otherwise. Then, $R$ determines the probability $\Pr[X_t(e) = 1 \mid X_{t-1}(e), X_{t-2}(e), \ldots, X_{t-k}(e)]$, where $k \ge 0$ is also determined by $R$ and denotes the history length, i.e., the number of past states of $e$ considered when deciding for the existence of an edge at time step $t$. For instance, $k = 0$ means no history is taken into account, while $k = 1$ means the previous state of $e$ is taken into account when deciding its current state.
Overall, the aforementioned Edge-Uniform Evolution model (EUE) is defined by the parameters $G$, $R$ and some initial input instance $G_0$. In the following sections, we consider some special cases for $R$ and provide some first bounds for the cover time of $G$ under this model. Each time step of evolution consists of two stages: in the first stage, the graph instance $G_t$ is fixed for time step $t$ following $R$, while in the second stage, the agent moves to a node in the closed neighbourhood of its current node in $G_t$. Notice that, since $G$ is connected, the cover time under EUE is finite, because $R$ essentially models edge-specific delays.
4. Cover Time with Zero-Step History
We hereby analyse the cover time of $G$ under EUE in the special case when no history is taken into consideration for computing the probability that a given edge is alive at the current time step. Intuitively, each edge appears with a fixed probability $p$ at every time step, independently of the others. More formally, for each possible edge $e \in E$ and all time steps $t$, $\Pr[X_t(e) = 1] = p$.
4.1. Random Walk with a Delay
A first approach toward covering $G$ with a single agent is the following: the agent randomly walks $G$ as if all edges were present, and when an edge is not present, it just waits for it to appear in a following time step. More formally, suppose the agent arrives on a node $v$ with (static) degree $d(v)$ at the second stage of time step $t$. Then, after the graph instance is fixed for time step $t + 1$, the agent selects a neighbour of $v$, say $u \in N(v)$, uniformly at random, i.e., with probability $1/d(v)$. If $(v, u) \in E_{t+1}$, then the agent moves to $u$ and repeats the above procedure. Otherwise, it remains on $v$ until the first time step $t' > t + 1$ such that $(v, u) \in E_{t'}$ and then moves to $u$. This way, $p$ acts as a delay probability, since the agent follows the same random walk it would on a static graph, but with an expected delay of $1/p$ time steps at each node. Notice that, in order for such a strategy to be feasible, each node must maintain knowledge about its neighbours in the underlying graph; not just the currently alive ones. From now on, we refer to this strategy for the agent as the Random Walk with a Delay (RWD).
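The delay mechanism can be made concrete with a short simulation (our own sketch; the function and graph names are illustrative). Each transition incurs a geometric waiting time with mean $1/p$ on top of the underlying simple random walk:

```python
import random

def rwd_cover_time(adj, p, start, trials=2000, rng=random.Random(2)):
    """Estimate the RWD cover time under the zero-step-history rule:
    every possible edge is alive at each step independently w.p. p."""
    total = 0
    for _ in range(trials):
        v, covered, steps = start, {start}, 0
        while len(covered) < len(adj):
            u = rng.choice(adj[v])      # uniform choice on the underlying graph
            steps += 1                  # first round the chosen edge may appear
            while rng.random() >= p:    # wait until the chosen edge is alive
                steps += 1
            v = u
            covered.add(u)
        total += steps
    return total / trials

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

On the 4-cycle with $p = 1/2$, the estimate should be close to twice the static cover time of 6, i.e., around 12.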
Now, let us upper bound the cover time of RWD by exploiting its strong correlation to a simple random walk on the underlying graph $G$ via Wald's equation (Theorem 1). Below, let $C_G$ stand for the cover time of a simple random walk on the static graph $G$.
Theorem 1 ([23]). Let $(X_i)_{i \in \mathbb{N}}$ be a sequence of real-valued, independent and identically distributed random variables and $N$ a nonnegative integer random variable independent of the sequence (in other words, a stopping time for the sequence). If each $X_i$ and $N$ have finite expectations, then it holds: $E[X_1 + X_2 + \cdots + X_N] = E[N] \cdot E[X_1]$.

Theorem 2. For any connected underlying graph $G$ evolving under the zero-step history EUE, the cover time for RWD is expectedly $C_G/p$.
Proof. Consider a Simple Random Walk (SRW) and an RWD (under the EUE model) taking place on a given connected graph G. Given that RWD decides on the next node to visit uniformly at random based on the underlying graph, that is in exactly the same way SRW does, we use a coupling argument to enforce RWD and SRW to follow the exact same trajectory, i.e., sequence of visited nodes.
Then, let the trajectory end when each node in $G$ has been visited at least once, and denote by $T$ the total number of node transitions made by the agent. Such a trajectory under SRW will cover all nodes in expectedly $C_G$ time steps. On the other hand, in the RWD case, for each transition, we have to take into account the delay experienced until the chosen edge becomes available. Let $D_i$ be a random variable standing for the actual delay corresponding to node transition $i$ in the trajectory. Then, the expected number of time steps till the trajectory is realized is given by $E[\sum_{i=1}^{T} D_i]$. Since the random variables $D_i$ are independent and identically distributed (each geometric with parameter $p$) by the edge-uniformity of our model, $T$ is a stopping time for them and all of them have finite expectations, then by Theorem 1, we get: $E[\sum_{i=1}^{T} D_i] = E[T] \cdot E[D_1] = C_G \cdot \frac{1}{p}$. ☐
For an explicit general bound on RWD, it suffices to use $C_G \le 2m(n-1)$, proven in [2].
A Modified Electrical Network
Another way to analyse the above procedure is to make use of a modified version of the standard literature approach of electrical networks and random walks [17,18]. This point of view gives us expressions for the hitting time between any two nodes of the underlying graph. That is, we hereby (in Lemmata 1 and 2 and Theorem 3) provide a generalization of the results given in [17,18], thus correlating the hitting and commute times of RWD with an electrical network analogue and reaching a conclusion for the cover time similar to the one of Theorem 2.
In particular, given the underlying graph $G$, we design an electrical network, $N(G)$, with the same edges as $G$, but where each edge has a resistance of $1/p$ ohms. Let $H_{uv}$ stand for the hitting time from node $u$ to node $v$ in $G$, i.e., the expected number of time steps until the agent reaches $v$ after starting from $u$ and following RWD. Furthermore, let $\phi_{uv}$ declare the electrical potential difference between nodes $u$ and $v$ in $N(G)$ when, for each $w \in V$, we inject $d(w)$ amperes of current into $w$ and withdraw $2m$ amperes of current from a single node $v$. We now upper-bound the cover time of $G$ under RWD by correlating $H_{uv}$ to $\phi_{uv}$.
Lemma 1. For all $u, v \in V$, $H_{uv} = \phi_{uv}$ holds.
Proof. Let us denote by $I_{uw}$ the current flowing between two neighbouring nodes $u$ and $w$. Then, $\sum_{w \in N(u)} I_{uw} = d(u)$, since at each node, the total inward current must match the total outward current (Kirchhoff's first law). Moving forward, $I_{uw} = p \cdot \phi_{uw}$ by Ohm's law, since each edge has resistance $1/p$. Finally, $\phi_{uw} = \phi_{uv} - \phi_{wv}$, since the sum of electrical potential differences along a path is equal to the total electrical potential difference of the path (Kirchhoff's second law). Overall, we can rewrite $d(u) = p \sum_{w \in N(u)} (\phi_{uv} - \phi_{wv})$. Rearranging gives:

$$\phi_{uv} = \frac{1}{p} + \frac{1}{d(u)} \sum_{w \in N(u)} \phi_{wv}.$$

Regarding the hitting time from $u$ to $v$, we rewrite it based on the first step:

$$H_{uv} = \frac{1}{p} + \frac{1}{d(u)} \sum_{w \in N(u)} H_{wv},$$

since the first addend represents the expected number of steps for the selected edge to appear due to RWD and the second addend stands for the expected time for the rest of the walk. Wrapping it up, since both formulas above hold for each $u \ne v$, therefore inducing two identical linear systems, it follows that there exists a unique solution to both of them, and $H_{uv} = \phi_{uv}$. ☐
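The first-step recurrence can be checked numerically. The sketch below (our own illustration, with hypothetical helper names) solves the system $H_{uv} = 1/p + \frac{1}{d(u)}\sum_{w \in N(u)} H_{wv}$ exactly over rationals and verifies, on a small path graph, that the commute time between adjacent nodes scales like $(2m/p)$ times the static effective resistance, as the electrical analogy suggests:

```python
from fractions import Fraction

def rwd_hitting_times(adj, p, target):
    """Exact RWD hitting times H[u -> target] from the recurrence
    H[u] = 1/p + (1/d(u)) * sum over neighbours w of H[w], with
    H[target] = 0, solved by Gauss-Jordan elimination over rationals."""
    nodes = [u for u in adj if u != target]
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    A = [[Fraction(0)] * (n + 1) for _ in range(n)]   # augmented matrix
    for u in nodes:
        i = idx[u]
        A[i][i] += 1
        for w in adj[u]:
            if w != target:
                A[i][idx[w]] -= Fraction(1, len(adj[u]))
        A[i][n] = 1 / Fraction(p)
    for c in range(n):                                # elimination
        r0 = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[r0] = A[r0], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                fac = A[r][c]
                A[r] = [x - fac * y for x, y in zip(A[r], A[c])]
    return {u: A[idx[u]][n] for u in nodes}

# Path 0-1-2 (m = 2 edges), p = 1/2: the static effective resistance between
# the adjacent nodes 0 and 1 is 1, so the commute time should be 2m/p = 8.
path = {0: [1], 1: [0, 2], 2: [1]}
p = Fraction(1, 2)
commute = rwd_hitting_times(path, p, 1)[0] + rwd_hitting_times(path, p, 0)[1]
```

Exact rational arithmetic makes the check an equality rather than a floating-point approximation.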
In the lemma below, let $R_{uv}$ stand for the effective resistance between $u$ and $v$, i.e., the electrical potential difference induced when flowing a current of one ampere from $u$ to $v$.
Lemma 2. For all $u, v \in V$, $H_{uv} + H_{vu} = 2m R_{uv}$ holds.
Proof. Similar to the definition of $\phi_{uv}$ above, one can define $\phi'_{vu}$ as the electrical potential difference between $v$ and $u$ when $d(w)$ amperes of current are injected into each node $w \in V$ and $2m$ of them are withdrawn from node $u$. Next, note that changing all currents' signs leads to a new network where for the electrical potential difference, namely $\phi''_{uv}$, it holds $\phi''_{uv} = \phi'_{vu}$. We can now apply the superposition theorem (see Section 13.3 in [24]) and linearly superpose the two networks implied from $\phi_{uv}$ and $\phi''_{uv}$, creating a new one where $2m$ amperes are injected into $u$, $2m$ amperes are withdrawn from $v$ and no current is injected or withdrawn at any other node. Let $\phi^{*}_{uv}$ stand for the electrical potential difference between $u$ and $v$ in this last network. By the superposition argument, we get $\phi^{*}_{uv} = \phi_{uv} + \phi'_{vu}$, while from Ohm's law, we get $\phi^{*}_{uv} = 2m R_{uv}$. The proof concludes by combining these two observations and applying Lemma 1. ☐
Theorem 3. For any connected underlying graph $G$ evolving under the zero-step history EUE, the cover time for RWD is at most $2m(n-1)/p$.
Proof. Consider a spanning tree $T$ of $G$. An agent, starting from any node, can visit all nodes by performing an Eulerian tour on the edges of $T$ (crossing each edge twice). This is a feasible way to cover $G$, and thus, the expected time for an agent to finish the above task provides an upper bound on the cover time. The expected time to cover each edge twice is given by $\sum_{(u,v) \in E_T} (H_{uv} + H_{vu})$, where $E_T$ is the edge-set of $T$ with $|E_T| = n - 1$. By Lemma 2, this is equal to $2m \sum_{(u,v) \in E_T} R_{uv} \le 2m(n-1)/p$, since the effective resistance between the endpoints of an edge is at most the resistance $1/p$ of the edge itself. ☐
4.2. Random Walk on What Is Available
Random walk with a delay does provide a nice connection to electrical network theory. However, depending on $p$, there could be long periods of time where the agent is simply standing still at the same node. Since the walk is random anyway, waiting for an edge to appear may not sound very wise. Hence, we now analyse the strategy of a Random Walk on what is Available (shortly RWA). That is, suppose the agent has just arrived at a node $v$ after the second stage at time step $t$, and then, $G_{t+1}$ is fixed after the first stage of time step $t + 1$. Now, the agent picks uniformly at random only amongst the alive incident edges at time step $t + 1$. Let $d_{t+1}(v)$ stand for the degree of node $v$ in $G_{t+1}$. If $d_{t+1}(v) = 0$, then the agent does not move at time step $t + 1$. Otherwise, if $d_{t+1}(v) > 0$, the agent selects an alive incident edge, each with probability $1/d_{t+1}(v)$. The agent then follows the selected edge to complete the second stage of time step $t + 1$ and repeats the strategy. In a nutshell, the agent keeps moving randomly on available edges and only remains on the same node if no edge is alive at the current time step. Below, let $\tilde{p}_{min} = 1 - (1-p)^{\delta}$ and $\tilde{p}_{max} = 1 - (1-p)^{\Delta}$, where $\delta$, respectively $\Delta$, stands for the minimum, respectively maximum, degree of $G$.
Theorem 4. For any connected underlying graph $G$ with min-degree $\delta$ and max-degree $\Delta$ evolving under the zero-step history EUE, the cover time for RWA is at least $C_G/(1-(1-p)^{\Delta})$ and at most $C_G/(1-(1-p)^{\delta})$.
Proof. Suppose the agent follows RWA and has reached node $u$ after time step $t$. Then, $G_{t+1}$ becomes fixed, and the agent selects uniformly at random a neighbouring edge to which to move. Let $X_v$ (for $v \in N(u)$) stand for a random variable taking value one if the agent moves to node $v$ and zero otherwise, and let $d = d(u)$. For $k \in \{0, 1, \ldots, d\}$, let $A_k$ stand for the event that $d_{t+1}(u) = k$. Therefore, $\Pr[A_k] = \binom{d}{k} p^k (1-p)^{d-k}$ is exactly the probability that $k$ out of the $d$ edges exist, since each edge exists independently with probability $p$. Now, let us consider the probability $\Pr[X_v = 1 \mid A_k]$: the probability that $v$ will be reached given that $k$ neighbours are present. This is exactly the product of the probability that $v$ is indeed in the chosen $k$-tuple (say $P_1$) and the probability that then $v$ is chosen uniformly at random (say $P_2$) from the $k$-tuple. $P_1 = \binom{d-1}{k-1}/\binom{d}{k} = k/d$, since the model is edge-uniform, and we can fix $v$ and choose any of the $k$-tuples with $v$ in them out of the $\binom{d}{k}$ total ones. On the other hand, $P_2 = 1/k$ by uniformity. Overall, we get $\Pr[X_v = 1 \mid A_k] = P_1 \cdot P_2 = 1/d$. We can now apply the total probability law to calculate:

$$\Pr[X_v = 1] = \sum_{k=1}^{d} \Pr[X_v = 1 \mid A_k] \Pr[A_k] = \frac{1}{d} \sum_{k=1}^{d} \binom{d}{k} p^k (1-p)^{d-k} = \frac{1 - (1-p)^d}{d}.$$

To conclude, let us reduce RWA to RWD. Indeed, in RWD, the equivalent transition probability is $\frac{p}{d}$, accounting both for the uniform choice and the delay $p$. Therefore, the RWA probability above can be viewed as $\frac{\tilde{p}}{d}$, where $\tilde{p} = 1 - (1-p)^d$. To achieve edge-uniformity, we set $\tilde{p}_{min} = 1 - (1-p)^{\delta}$, which lower bounds the traversal probability (i.e., upper bounds the delay) of each edge, and finally, we can apply the same RWD analysis by substituting $p$ with $\tilde{p}_{min}$. Similarly, we can set the upper bound $\tilde{p}_{max} = 1 - (1-p)^{\Delta}$ to lower bound the cover time. Applying Theorem 2 completes the proof. ☐
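The closed form $(1-(1-p)^d)/d$ for the per-neighbour transition probability can be verified by brute force over all alive/dead patterns of the $d$ incident edges (our own sketch; `move_prob` is an illustrative name, not from the paper):

```python
from itertools import product

def move_prob(d, p):
    """Exact probability that RWA moves to one fixed neighbour v when the
    current node has d possible incident edges, each alive independently
    with probability p. Pattern bit 0 is the edge leading to v; given k
    alive edges containing it, v is chosen with probability 1/k."""
    total = 0.0
    for pattern in product([0, 1], repeat=d):
        weight = 1.0
        for bit in pattern:
            weight *= p if bit else 1 - p
        if pattern[0] == 1:                 # edge to v is alive
            total += weight / sum(pattern)  # chosen among the alive edges
    return total
```

Summing over all $d$ neighbours plus the staying probability $(1-p)^d$ should give exactly one, which offers a second consistency check.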
The value of $\tilde{p}_{min} = 1 - (1-p)^{\delta}$ used to lower-bound the transition probability may be a harsh estimate for general graphs. However, it becomes quite more accurate in the special case of a $d$-regular underlying graph, where $\delta = \Delta = d$. To conclude this section, we provide a worst-case lower bound on the cover time based on similar techniques as above.
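In the $d$-regular case, the per-transition delay is exactly geometric with success probability $1-(1-p)^d$, so the RWA cover time should be close to $C_G/(1-(1-p)^d)$. A small simulation sketch (our own, on an illustrative 4-cycle, for which $C_G = 6$):

```python
import random

def rwa_cover_time(adj, p, start, trials=3000, rng=random.Random(3)):
    """Estimate the RWA cover time (zero-step history): at every step each
    possible edge is alive independently w.p. p, and the agent moves
    uniformly among alive incident edges, staying put if none is alive."""
    total = 0
    for _ in range(trials):
        v, covered, steps = start, {start}, 0
        while len(covered) < len(adj):
            alive = [u for u in adj[v] if rng.random() < p]
            if alive:
                v = rng.choice(alive)
                covered.add(v)
            steps += 1
        total += steps
    return total / trials

# 4-cycle: d = 2, C_G = 6, p = 1/2, so the predicted value is 6/0.75 = 8.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

The estimate should concentrate around 8, matching the regular-case prediction.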
Lemma 3. There exists an underlying graph $G$ evolving under the zero-step history EUE such that the RWA cover time is at least $\Omega\big(n^3 \cdot (1-(1-p)^{\Delta})^{-1}\big)$.
Proof. We consider the lollipop graph, which is known to attain a cover time of $\Theta(n^3)$ for a simple random walk [21,22]. Applying the lower bound from Theorem 4 completes the proof. ☐
5. Cover Time with One-Step History
We now turn our attention to the case where the current state of an edge affects its next state. That is, we take into account a history of length one when computing the probability of existence for each edge independently. A Markovian model for this case was introduced in [5]; see Table 1. The left side of the table accounts for the current state of an edge, while the top row for the next one. The respective table entry provides us with the probability of transition from one state to the other. Intuitively, another way to refer to this model is as the birth-death model: a dead edge becomes alive with probability $p$, while an alive edge dies with probability $q$.
Let us now consider an underlying graph G evolving under the EUE model where each possible edge independently follows the aforementioned stochastic rule of evolution.
RWD for General $(p, q)$-Graphs
Let us hereby derive some first bounds for the cover time of RWD via a min-max approach. The idea here is to make use of the “being alive” probabilities to prove lower and upper bounds for the cover time parameterized by $p_{min} = \min\{p, 1-q\}$ and $p_{max} = \max\{p, 1-q\}$.
Let us consider an RWD walk on a general connected graph $G$ evolving under EUE with a zero-step history rule dictating $\Pr[X_t(e) = 1] = p_{min}$ for any edge $e$ and time step $t$. We refer to this walk as the Upper Walk with a Delay (UWD). Respectively, we consider an RWD walk when the stochastic rule of evolution is given by $\Pr[X_t(e) = 1] = p_{max}$. We refer to this specific walk as the Lower Walk with a Delay (LWD). Below, we make use of UWD and LWD in order to bound the cover time of RWD in general $(p, q)$-graphs.
Theorem 5. For any connected underlying graph $G$ and the birth-death rule, the cover time of RWD is at least $C_G/p_{max}$ and at most $C_G/p_{min}$, where $p_{min} = \min\{p, 1-q\}$ and $p_{max} = \max\{p, 1-q\}$.
Proof. Regarding UWD, one can design a corresponding electrical network where each edge has a resistance of $1/p_{min}$ ohms, capturing the expected delay till any possible edge becomes alive. Applying Theorem 2 gives an upper bound of $C_G/p_{min}$ for the UWD cover time.

Let $C_U$ stand for the UWD cover time and $C$ stand for the cover time of RWD under the birth-death rule. It now suffices to show $C \le C_U$ to conclude.

In birth-death, the per-round probability that the chosen possible edge exists is either $p$, in case the possible edge is currently dead, or $1-q$, in case the possible edge is currently alive. In both cases, this probability is at least $p_{min}$, so the expected delay before each edge traversal is upper-bounded by the expected delay $1/p_{min}$ of UWD; therefore, $C \le C_U$ follows, since any trajectory under RWD will take at most as much time as the same trajectory under UWD.

In a similar manner, the cover time of LWD lower bounds the cover time of RWD, and by applying Theorem 2, we derive a lower bound of $C_G/p_{max}$. ☐
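The sandwich argument rests on a one-line observation about the birth-death rule, sketched below (our own helper names, purely illustrative):

```python
def alive_next_prob(p, q, alive_now):
    """One-round existence probability under the birth-death rule:
    a dead edge is born w.p. p; an alive edge survives w.p. 1 - q."""
    return 1 - q if alive_now else p

def p_min(p, q):
    return min(p, 1 - q)   # worst-case per-round existence guarantee

def p_max(p, q):
    return max(p, 1 - q)   # best-case per-round existence guarantee
```

Regardless of the edge's current state, its chance to be alive in the next round lies in $[p_{min}, p_{max}]$, which is exactly what the UWD/LWD sandwich exploits.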
6. An Exact Approach
So far, we have established upper and lower bounds for the cover time of edge-uniform stochastically-evolving graphs. Our bounds are based on combining extended results from simple random walk theory and careful delay estimations. In this section, we describe an approach to determine the exact cover time for temporal graphs evolving under any stochastic model. Then, we apply this approach to the already seen zero-step history and one-step history cases of RWA.
The key component of our approach is a Markov chain capturing both phases of evolution: the graph dynamics and the walk trajectory. In that case, calculating the cover time reduces to calculating the hitting time to a particular subset of Markov states. Although computationally intractable for large graphs, such an approach provides the exact cover time value and is hence practical for smaller graphs.
Suppose we are given an underlying graph $G = (V, E)$ and a set of stochastic rules $R$ capturing the evolution dynamics of $G$. That is, $R$ can be seen as a collection of probabilities of transition from one graph instance to another. We denote by $k$ the (longest) history length taken into account by the stochastic rules. Like before, let $n = |V|$ stand for the number of nodes and $m = |E|$ for the number of possible edges of $G$. We define a Markov chain $M$ with states of the form $S = (H, v, Z)$, where:
- $H = (H_1, H_2, \ldots, H_k)$ is a $k$-tuple of temporal graph instances; that is, for each $i \in \{1, 2, \ldots, k\}$, $H_i$ is the graph instance present $i - 1$ time steps before the current one (which is $H_1$)
- $v \in V$ is the current position of the agent
- $Z \subseteq V$ is the set of already covered nodes, i.e., the set of nodes that have been visited at least once by the agent
As described earlier for our edge-uniform model, we assume evolution happens in two phases. First, the new graph instance is determined according to the rule-set $R$. Second, the new agent position is determined based on a random walk on what is available. In this respect, consider a state $S = (H, v, Z)$ and another state $S' = (H', v', Z')$ of the described Markov chain $M$. Let $\Pr[S \to S']$ denote the transition probability from $S$ to $S'$. We seek to express this probability as a product of the probabilities for the two phases of evolution. The latter is possible, since, in our model, the random walk strategy is independent of the graph evolution.
For the graph dynamics, let $\Pr[H \to H']$ stand for the probability to move from a history-tuple $H$ to another history-tuple $H'$ under the rules of evolution in $R$. Note that, for $i \in \{1, \ldots, k-1\}$, it must hold $H'_{i+1} = H_i$ in order to properly maintain history; otherwise, the probability becomes zero. On the other hand, for valid transitions, the probability reduces to $\Pr[H'_1 \mid H]$, which is exactly the probability that $H'_1$ becomes the new instance given the history of past instances (and any such probability is either given directly or implied by $R$).
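The consistency condition on history-tuples can be expressed directly (an illustrative sketch of our own; instances are represented by opaque labels, and `new_instance_prob` is a caller-supplied stand-in for the rule-set R):

```python
def history_transition_prob(H, H_next, new_instance_prob):
    """Graph-dynamics phase between history tuples H = (H_1, ..., H_k) and
    H_next, with H_1 the current instance. The transition is valid only if
    the old instances shift by one slot (H_next[i+1] == H[i]); then the
    probability is that of the fresh instance H_next[0] given history H."""
    if tuple(H_next[1:]) != tuple(H[:-1]):
        return 0
    return new_instance_prob(H_next[0], H)

# k = 2, with a dummy rule assigning probability 1/4 to every new instance:
rule = lambda g_new, hist: 0.25
```

Only shifts of the history window receive non-zero probability; everything else is pruned before the rule-set is even consulted.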
For the second phase, i.e., the random walk on what is available, we denote by $\Pr[v \to v'; G']$ the probability of moving from $v$ to $v'$ on some graph instance $G'$. Since the random walk strategy is only based on the current instance, we can derive a general expression for this probability, which is independent of the graph dynamics $R$. Below, let $N_{G'}(v)$ stand for the set of neighbours of $v$ in graph instance $G'$. If $(v, v') \notin E$, that is, if there is no possible edge between $v$ and $v'$, then for any temporal graph instance $G'$, it holds $\Pr[v \to v'; G'] = 0$. The probability is also zero for all graph instances $G'$ where the possible edge is not alive, i.e., $v' \notin N_{G'}(v)$. In contrast, if $v' \in N_{G'}(v)$, then $\Pr[v \to v'; G'] = 1/|N_{G'}(v)|$, since the agent chooses a destination uniformly at random out of the currently alive ones. Finally, if $v' = v$, then the agent remains still, with probability one, only if there exist no alive incident edges. We summarize the above facts in the following equation:

$$\Pr[v \to v'; G'] = \begin{cases} 1/|N_{G'}(v)|, & \text{if } v' \in N_{G'}(v) \\ 1, & \text{if } v' = v \text{ and } N_{G'}(v) = \emptyset \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$
Overall, we combine the two phases in $M$ and introduce the following transition probabilities: for valid transitions, $\Pr[S \to S'] = \Pr[H \to H'] \cdot \Pr[v \to v'; H'_1]$.
For $Z'$, notice that only two cases may have a non-zero probability with respect to the growth of $Z$. If the newly visited node $v'$ is already covered, then $Z'$ must be identical to $Z$, since no new nodes are covered during this transition. Further, if the newly visited node $v'$ is not yet covered, then $Z'$ is updated to include it, as well as all the covered nodes in $Z$, i.e., $Z' = Z \cup \{v'\}$.
For $Z = V$, the idea is that once such a state has been reached, all nodes are covered, and there is no need for further exploration. Therefore, such a state can be made absorbing. In this respect, let us denote the set of these states as $C_M = \{(H, v, Z) : Z = V\}$.
Definition 2. Let COVER($G$, $R$) be the problem of determining the exact value of the cover time for an RWA on a graph $G$ stochastically evolving under rule-set $R$.
Theorem 6. Assume all probabilities of the form $\Pr[H'_1 \mid H]$ used in $M$ are exact reals and known a priori. Then, for any underlying graph $G$ and stochastic rule-set $R$, COVER($G$, $R$) can be solved in time exponential in $n$, $m$ and $k$.
Proof. For each temporal graph instance, $H_i$, in the worst case, there exist $2^m$ possibilities, since each of the $m$ possible edges is either alive or dead at a graph instance. For the whole history $H$, the number of possibilities becomes $2^{mk}$ by taking the product of $k$ such terms. There are $n$ possibilities for the walker's position $v$. Finally, we only allow states such that $v \in Z$. Therefore, since we fix $v$, there are up to $n - 1$ nodes to be included or not in $Z$, leading to a total of $2^{n-1}$ possibilities for $Z$. Taking everything into account, $M$ has a total of $2^{mk} \cdot n \cdot 2^{n-1}$ states.
Let $h(s)$ stand for the hitting time of the absorbing set $A$ when starting from a state $s$. Assuming exact real arithmetic, we can compute all such hitting times by solving the following system (Theorem 1.3.5 [20]):

$$h(s) = 0, \text{ for } s \in A, \qquad h(s) = 1 + \sum_{s'} \Pr[s \rightarrow s'] \, h(s'), \text{ for } s \notin A.$$
Let C stand for the cover time of an RWA on G evolving under R. By definition, the cover time is the expected time till all nodes are covered, regardless of the position of the walker at that time. Consider the set $S_0$ of possible start states for the agent as depicted in M. Then, it follows that $C = \max_{s \in S_0} h(s)$, where we take the worst-case hitting time to a state in $A$ over any starting position of the agent. In terms of time complexity, computing C requires computing all values $h(s)$, for every state s. To do so, one must solve the above linear system, whose size equals the number of states of M, which can be done in time exponential in n, m and k. ☐
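The hitting-time system in the proof can indeed be solved under exact arithmetic, e.g., over the rationals. The following is a minimal sketch of our own (not the paper's implementation) for a toy absorbing chain:

```python
from fractions import Fraction

def hitting_times(P, absorbing):
    """Exact expected hitting times of the absorbing set: h(s) = 0 for
    absorbing s, and h(s) = 1 + sum_t P[s][t] * h(t) otherwise, solved
    by Gauss-Jordan elimination over the rationals."""
    trans = [s for s in range(len(P)) if s not in absorbing]
    idx = {s: i for i, s in enumerate(trans)}
    # Build (I - Q) h = 1, restricted to the transient states.
    A = [[Fraction(int(i == j)) - Fraction(P[s][t])
          for j, t in enumerate(trans)]
         for i, s in enumerate(trans)]
    b = [Fraction(1) for _ in trans]
    for col in range(len(trans)):
        piv = next(r for r in range(col, len(trans)) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(len(trans)):          # eliminate column `col`
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    h = {s: b[idx[s]] / A[idx[s]][idx[s]] for s in trans}
    h.update({s: Fraction(0) for s in absorbing})
    return h

# Toy example: walk on a path 0-1-2 where state 2 absorbs.
P = [[0, 1, 0],
     [Fraction(1, 2), 0, Fraction(1, 2)],
     [0, 0, 1]]
print(hitting_times(P, {2}))  # hitting times 4, 3 and 0, respectively
```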
It is worth remarking that this approach is general in the sense that there are no assumptions on the graph evolution rule-set
R besides it being stochastic, i.e., describing the probability of transition from each graph instance to another given some history of length
k. In this regard, Theorem 6 captures both the case of Markovian evolving graphs [
8] and the case of edge-uniform graphs considered in this paper. We now proceed and show how the aforementioned general approach applies to the zero-step and one-step history cases of edge-uniform graphs. To do so, we calculate the corresponding graph-dynamics probabilities. The random walk probabilities are given in Equation (
1).
6.1. RWA on Edge-Uniform Graphs (Zero-Step History)
Based on the general model, we rewrite the transition probabilities for the special case when RWA takes place on an edge-uniform graph without taking into account any memory, i.e., the same case as in
Section 4. Notice that, since past instances are not considered in this case, the history-tuple reduces to a single graph instance
H. We rewrite the transition probabilities, for the non-absorbing states, as follows: let $\alpha$ stand for the number of edges alive in the next instance $H'$. Since there is no dependence on history and each edge appears independently with probability p, we get $\Pr[H \rightarrow H'] = p^{\alpha} (1-p)^{m-\alpha}$.
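For illustration, this instance probability can be computed directly; the sketch below (our own naming) also checks that the probabilities over all instances on $m = 2$ possible edges sum to one:

```python
from fractions import Fraction

def zero_step_prob(m, alive, p):
    """Probability of a specific instance in which `alive` of the m
    possible edges are present, when each edge is alive independently
    with probability p (no history dependence)."""
    return p ** alive * (1 - p) ** (m - alive)

p = Fraction(1, 3)
# With m = 2 there are 1, 2 and 1 instances with 0, 1 and 2 alive edges.
total = sum(zero_step_prob(2, a, p) * [1, 2, 1][a] for a in range(3))
print(total)  # 1
```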
6.2. RWA on Edge-Uniform Graphs (One-Step History)
We hereby rewrite the transition probabilities for a Markov chain capturing an RWA taking place on an edge-uniform graph where, at each time step, the current graph instance is taken into account to generate the next one. This case is related to the results in
Section 5. Due to the history inclusion, the transition probabilities become more involved than those seen for the zero-history case. Again, we consider the non-absorbing states, where the covered set does not yet contain all nodes.
If the new history $H'$ is not consistent with $H$, i.e., if it does not hold that, for each possible edge, the instances shared by the two histories agree on whether that edge is alive, then $\Pr[H \rightarrow H'] = 0$, since otherwise the history is not properly maintained. On the other hand, if $H'$ is consistent with $H$, then $\Pr[H \rightarrow H'] > 0$. To derive an expression for the latter, we need to consider all edge (mis)matches between $H$ and $H'$ and properly apply the birth-death rule (Table 1). Below, we denote by $D(H)$ the set of possible edges of G which are dead at instance H, and by $A(H)$ the set of those alive. Let $d_0 = |D(H) \cap D(H')|$, $d_1 = |D(H) \cap A(H')|$, $a_0 = |A(H) \cap D(H')|$ and $a_1 = |A(H) \cap A(H')|$. Each of the $d_0$ edges was dead in $H$ and remained dead in $H'$, with probability $1-p$. Similarly, each of the $d_1$ edges was dead in $H$ and became alive in $H'$, with probability p. Furthermore, each of the $a_0$ edges was alive in $H$ and died in $H'$, with probability q. Finally, each of the $a_1$ edges was alive in $H$ and remained alive in $H'$, with probability $1-q$. Overall, due to the edge-independence of the model, we get

$$\Pr[H \rightarrow H'] = (1-p)^{d_0} \, p^{d_1} \, q^{a_0} \, (1-q)^{a_1}.$$
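This product can be computed directly from the two alive-edge sets; a short illustrative sketch (the function name and edge representation are our own):

```python
from fractions import Fraction

def birth_death_prob(all_edges, alive_H, alive_H2, p, q):
    """Probability that instance H (alive set alive_H) evolves into
    H' (alive set alive_H2), when each dead edge is born with
    probability p and each alive edge dies with probability q,
    independently per edge."""
    dead_H = all_edges - alive_H
    d0 = len(dead_H - alive_H2)      # dead  -> dead : factor 1 - p
    d1 = len(dead_H & alive_H2)      # dead  -> alive: factor p
    a0 = len(alive_H - alive_H2)     # alive -> dead : factor q
    a1 = len(alive_H & alive_H2)     # alive -> alive: factor 1 - q
    return (1 - p) ** d0 * p ** d1 * q ** a0 * (1 - q) ** a1

edges = {(0, 1), (1, 2), (0, 2)}
p, q = Fraction(1, 4), Fraction(1, 2)
# (0,1) stays alive, (1,2) dies, (0,2) is born: p * q * (1-q) = 1/16.
print(birth_death_prob(edges, {(0, 1), (1, 2)}, {(0, 1), (0, 2)}, p, q))  # 1/16
```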
7. Experimental Results
In this section, we discuss some experimental results that complement our previously-established theoretical bounds. We simulated an RWA taking place on graphs evolving under the zero-step history model and experimentally estimated the value of the cover time for such a walk. To do so, for each specific graph and value of p considered, we repeated the experiment a large number of times (at least 1000). In the first experiment, we started from a graph instance with no alive edges. At each step, after the graph evolved, the walker picked uniformly at random an incident alive edge to traverse. The process continued till all nodes were visited at least once. Each subsequent experiment commenced with the last graph instance of the previous experiment as its first instance.
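A minimal version of this Monte-Carlo estimation can be sketched as follows (our own code, not the paper's implementation; graph and parameters are illustrative):

```python
import random

def estimate_cover_time(n, edges, p, trials=1000):
    """Monte-Carlo estimate of the RWA cover time under the zero-step
    history model: every possible edge is alive independently with
    probability p at each step."""
    total = 0
    for _ in range(trials):
        v, covered, steps = 0, {0}, 0
        while len(covered) < n:
            alive = [e for e in edges if random.random() < p]
            nbrs = [u for (x, u) in alive if x == v] + \
                   [x for (x, u) in alive if u == v]
            if nbrs:                      # stay put if v is isolated
                v = random.choice(nbrs)
                covered.add(v)
            steps += 1
        total += steps
    return total / trials

random.seed(42)
path = [(i, i + 1) for i in range(3)]     # a path on 4 nodes
print(estimate_cover_time(4, path, 0.5))
```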
We constructed underlying graphs in the following fashion: given a natural number n, we initially constructed a path on n nodes. Afterwards, for each two distinct nodes, we added an edge with probability equal to a parameter. Setting this parameter to 0 means the graph remains a path, while setting it to 1 makes the graph a clique.
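This construction can be sketched as follows, where `gamma` is our placeholder name for the edge-addition parameter (the paper's symbol is not shown in the text):

```python
import random

def build_graph(n, gamma, seed=None):
    """Path v_0, ..., v_{n-1} plus each chord added independently with
    probability gamma: gamma = 0 keeps the path, gamma = 1 yields the
    clique on n nodes."""
    rng = random.Random(seed)
    edges = {(i, i + 1) for i in range(n - 1)}
    for i in range(n):
        for j in range(i + 2, n):        # non-consecutive pairs only
            if rng.random() < gamma:
                edges.add((i, j))
    return edges

print(len(build_graph(5, 0)))   # 4 edges: the path
print(len(build_graph(5, 1)))   # 10 edges: the clique
```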
In Table 2, Table 3 and Table 4, we display the average cover time, rounded to the nearest natural number, computed in some indicative experiments for three respective parameter settings. Consequently, we provide estimates for a lower and an upper bound on the temporal cover time. In this respect, we experimentally compute a value for the cover time of a simple random walk in the underlying graph, i.e., the static cover time. Then, we plug in this value in place of the static cover time term to apply the bounds given in Theorem 4. Overall, the temporal cover times computed appear to be within their corresponding lower and upper bounds.
8. Conclusions
We defined the general edge-uniform evolution model for a stochastically-evolving graph, where a single stochastic rule is applied, but to each edge independently, and provided lower and upper bounds for the cover time of two random walks taking place on such a graph (in the cases of zero-step and one-step history). Moreover, we provided a general framework to compute the exact cover time of a broad family of stochastically-evolving graphs in exponential time.
An immediate open question is how to obtain good lower and upper bounds for the cover time of an RWA in the general birth-death model. In this case, the problem becomes considerably more complex than the zero-step history case. Depending on the values of p and q, the walk may be heavily biased, positively or negatively, toward possible edges incident to the walker’s position which were used in the recent past.