Article

Foremost Walks and Paths in Interval Temporal Graphs †

1 Adobe Systems Inc., Lehi, UT 84043, USA
2 Department of Computer and Information Sciences and Engineering, University of Florida, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the International Conference on Contemporary Computing (IC3-2022), ACM ICSPS.
‡ These authors contributed equally to this work.
Algorithms 2022, 15(10), 361; https://doi.org/10.3390/a15100361
Submission received: 16 August 2022 / Revised: 22 September 2022 / Accepted: 22 September 2022 / Published: 29 September 2022

Abstract: The min-wait foremost, min-hop foremost, and min-cost foremost paths and walks problems in interval temporal graphs are considered. We prove that finding min-wait foremost and min-cost foremost walks and paths in interval temporal graphs is NP-hard. We develop a polynomial time algorithm for the single-source all-destinations min-hop foremost paths problem and a pseudopolynomial time algorithm for the single-source all-destinations min-wait foremost walks problem in interval temporal graphs. We benchmark our algorithms against algorithms presented by Bentert et al. for contact sequence graphs and show, experimentally, that our algorithms perform up to 207.5 times faster for finding min-hop foremost paths and up to 23.3 times faster for finding min-wait foremost walks.

1. Introduction

Temporal graphs are graphs in which the edges connecting vertices or the characteristics of these edges may change with time. The applications of temporal graphs include the spread of viral diseases, information dissemination by means of physical/virtual contact between people, understanding behavior in online social networks, modeling data transmission in phone networks, modeling traffic flow in road networks, and studying biological networks at the molecular level [1,2,3,4,5,6,7].
Two popular categories of temporal or dynamic graphs are contact-sequence (temporal) graphs and interval-temporal graphs. In a contact-sequence graph, each directed edge (u, v) has the label (departure_time, travel_duration), where departure_time is the time at which one can leave vertex u along edge (u, v) and travel_duration is the time it takes to traverse the edge. Therefore, vertex v is reached at time departure_time + travel_duration. Note that a contact-sequence graph may have many edges from vertex u to vertex v; each edge has a different departure_time. In an interval-temporal graph, each directed edge (u, v) has a label comprised of one or more tuples of the form (start_time, end_time, travel_duration), where [start_time, end_time] defines an interval of times at which one can depart vertex u. If one departs u at time t, start_time ≤ t ≤ end_time, one reaches v at time t + travel_duration. The intervals associated with the possibly many tuples that comprise the label of an edge (u, v) must be disjoint. Figure 1 and Figure 2 are examples of contact-sequence and interval-temporal graphs, respectively. A (time-respecting) walk in a temporal graph is a sequence of edges with the property that the end vertex of one edge is the start vertex of the next edge (if any) on the walk, each edge is labeled by the departure time from its start vertex, this departure time is a valid departure time for that edge, and it is greater than or equal to the arrival time (if any) at the edge's start vertex (a more formal definition is provided in Section 2). A (time-respecting) path is a walk in which no vertex is repeated (i.e., there is no cycle).
<S, 1, A, 2, B> and <S, 0, B, 5, C> are example walks in the temporal graphs of Figure 1 and Figure 2, respectively. Both walks are also paths.
An application that can be modeled with a contact-sequence graph is a flight network. Each flight has a departure time at which it leaves the originating airport and a certain travel duration before it reaches the destination airport. On the other hand, if we consider the example of a road network, there is no single instant of time at which one must depart on a given street. There may be different time windows (or time intervals) during which a street is open for travel. Further, we may define time windows with different travel durations from point A to point B on a given street, reflecting different traffic conditions at different times of the day (such as office hours). Such networks cannot be modeled using contact-sequence graphs, as a given time window represents infinitely many possible departure times and, hence, infinitely many contact-sequence edges. It is easy to see that every contact-sequence graph can be modeled as an interval-temporal graph. Further, when time can be discretized, every interval-temporal graph can be modeled as a contact-sequence graph (with a potential explosion in the number of edges).
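To make the two models concrete, here is a minimal Python sketch (our own illustrative encoding, not code from the paper) that stores an interval-temporal graph as a map from edges to disjoint departure intervals and folds a contact-sequence edge list into that form, each contact becoming a degenerate interval:

```python
from collections import defaultdict

def contact_sequence_to_interval(cs_edges):
    """cs_edges: list of (u, v, departure_time, travel_duration) contacts.
    Returns {(u, v): time-ordered list of (start, end, travel_duration)}:
    each single departure time t becomes the degenerate interval [t, t]."""
    itg = defaultdict(list)
    for u, v, t, dur in cs_edges:
        itg[(u, v)].append((t, t, dur))
    for key in itg:
        itg[key].sort()                  # an edge's intervals must be disjoint
    return dict(itg)
```

Two contacts on the same edge fold into one labeled edge with two degenerate intervals; the reverse direction is not possible in general, since a non-degenerate interval stands for infinitely many departure times.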
The authors of [8,9,10,11] focus on finding optimal paths and walks. Optimization criteria such as foremost (arrive at the destination at the earliest possible time), min-hop (use the fewest number of hops when going from the source to the destination vertex), shortest (the time taken to go from the source to the destination is minimized), and so on, are considered. Gheibi et al. [12] present a new data structure for contact-sequence graphs that results in faster algorithms for many of the path problems studied in [8]. While [9] focuses on interval-temporal graphs, refs. [8,10] use the contact-sequence model. Ref. [10] presents an algorithm for finding walks that optimize any linear combination of eight different optimization criteria (foremost, reverse_foremost, fastest, shortest, cheapest, most_likely, min_hop, min_wait). In our earlier paper [13], we developed algorithms to find optimal foremost and min-hop paths in interval-temporal graphs. These algorithms were demonstrated experimentally to run faster than earlier algorithms for these problems.
A temporal graph may have many walks/paths that optimize a criterion such as foremost. In some applications, it is required to find a walk/path that optimizes a secondary criterion from among all walks/paths that optimize a primary criterion. For example, we may desire a min-wait foremost path (i.e., a foremost path in which the sum of the wait times at intermediate vertices is minimized) or a min-hop foremost path (a foremost path that goes through the fewest number of edges). When selecting flights to go from A to B, for instance, one may wish to use a min-wait foremost path (a route that minimizes the total wait time at intermediate airports while getting to the destination at the earliest possible time) or a min-hop foremost path (a route that involves the fewest number of connections while guaranteeing the earliest possible arrival time). In this paper, we examine the min-wait, min-hop, and min-cost (each edge or edge interval has an additional attribute, its cost) foremost walks/paths problems.
As discussed in Section 6, the algorithm by Bentert et al. can be tuned, by carefully choosing the coefficients for the different optimization criteria, to find min-hop foremost paths and min-wait foremost walks in contact-sequence temporal graphs. We use this method to benchmark our algorithms against the algorithm by Bentert et al. and show that ours perform up to 207.5 times faster for finding min-hop foremost paths and up to 23.3 times faster for finding min-wait foremost walks. Further, we solve these problems on interval-temporal graphs, which represent a wider problem space for temporal graphs than contact-sequence temporal graphs.
Our main contributions in this paper are:
  • We show that the problems of finding min-cost foremost and min-wait foremost paths and walks in interval temporal graphs are NP-hard.
  • We develop a polynomial time algorithm for the single-source all-destinations min-hop foremost paths problem in interval-temporal graphs.
  • We develop a pseudopolynomial time algorithm for the single-source all-destinations min-wait foremost walks problem in interval-temporal graphs.
  • We show that the problem of finding min-hop foremost paths and min-wait foremost walks can be modeled using the linear combination formulation employed in Section 6 when the optimization criteria are discrete (e.g., time is discrete). Our modeling methodology readily extends to other primary and secondary criteria as well as to multiple levels of secondary criteria (e.g., shortest min-hop foremost path).
  • We benchmark our algorithms against the algorithm of Bentert et al. [10] using datasets for which the preceding modeling can be used. On these datasets, our algorithm is up to 207.5 times faster for finding min-hop foremost paths and up to 23.3 times faster for finding min-wait foremost walks.
The roadmap of this paper is as follows. In Section 2, we describe the problems of finding min-hop foremost paths (mhf paths), min-wait foremost walks (mwf walks), and min-cost foremost paths (mcf paths). In Section 3, we show that the problems of finding mwf paths and walks and the problem of finding mcf paths in interval temporal graphs are NP-hard. In Section 4, we present theorems that describe the properties of min-hop foremost paths in temporal graphs. We review the data structures used to represent interval-temporal graphs along with some fundamental functions needed by the algorithm for finding mhf paths. Finally, we present the algorithm for finding mhf paths in interval-temporal graphs along with the proof of its correctness and its computational complexity. In Section 5, we present theorems that describe the properties of mwf walks in interval-temporal graphs. We introduce additional data structures required by the algorithm we propose for finding mwf walks in interval-temporal graphs. Finally, we present the algorithm for finding mwf walks in interval-temporal graphs along with a proof of its correctness and a complexity analysis. In Section 6, we show how the problems of finding foremost paths and walks with a secondary optimization criterion can be modeled as optimal-walk problems with a linear combination of multiple optimization criteria. We use this modeling to find mhf paths and mwf walks in contact-sequence graphs using the algorithm of Bentert et al. [10], which optimizes a linear combination of criteria. In Section 7, we compare our algorithms for finding mhf paths and mwf walks with the algorithm of Bentert et al. [10] by transforming contact-sequence graphs to interval-temporal graphs and vice versa. Finally, we conclude in Section 8.

2. Foremost Walks and Paths

A walk (equivalently, valid walk, temporal walk, or time-respecting walk) in a temporal graph is an alternating sequence of vertices and departure times u_{t_0}, t_0, u_{t_1}, t_1, ..., u_{t_k} where (a) t_i is a permissible departure time from u_{t_i} to u_{t_{i+1}} and (b) for 0 ≤ i < k − 1, t_i + λ_i ≤ t_{i+1}. Here, λ_i is the travel duration when departing u_{t_i} at t_i using the edge (u_{t_i}, u_{t_{i+1}}) (i.e., t_i + λ_i is the arrival time at u_{t_{i+1}}). For this walk, u_{t_0} is the source vertex and u_{t_k} the destination. Note that every walk in which a vertex is repeated contains one or more cycles. A walk that has no cycle (equivalently, no vertex repeats) is a path.
W1 = <S, 0, B, 5, C> is a walk from S to C in the temporal graph of Figure 2. W2 = <S, 0, A, 1, B, 3, C, 5, A> is another walk in the temporal graph of Figure 2. W1 does not contain any cycles, as none of its vertices repeat from source to destination. Hence, W1 is also a path. However, vertex A repeats in W2; therefore, it is not a path. We are interested in foremost paths and walks in a temporal graph that start at a vertex s at a time t_start and end at another vertex v. As defined in [8,9,10], a foremost walk is a walk from a source vertex s to a destination vertex v that starts at or after a given time t_start and has an arrival time t_f that is the earliest possible arrival time at the destination v among all possible walks from s to v.
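The two walk conditions — each departure must fall in a permissible interval of its edge, and no departure may precede the arrival at that edge's start vertex — can be sketched as a validity check. This uses a hypothetical encoding (our own, not the paper's): a walk is an alternating vertex/departure-time list and labels maps each edge to its (start, end, duration) intervals:

```python
def is_time_respecting(walk, labels):
    """walk: alternating list [v0, t0, v1, t1, ..., vk] of vertices and
    departure times; labels: {(u, v): [(start, end, dur), ...]}.
    Checks (a) each t falls in a permissible departure interval of its edge
    and (b) no departure precedes the arrival at the edge's start vertex."""
    arrival = None                        # arrival time at the current vertex
    for i in range(0, len(walk) - 2, 2):
        u, t, v = walk[i], walk[i + 1], walk[i + 2]
        match = [dur for (s, e, dur) in labels.get((u, v), []) if s <= t <= e]
        if not match:
            return False                  # (a) violated: no such departure
        if arrival is not None and t < arrival:
            return False                  # (b) violated: left before arriving
        arrival = t + match[0]            # arrival time at v
    return True
```

With toy labels where S→B departs in [0, 3] taking 5 units and B→C departs in [4, 6] taking 1 unit, the walk <S, 0, B, 5, C> is valid, while departing B at 4 (before the arrival at time 5) is not.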

2.1. Min-Hop Foremost

A min-hop foremost (mhf) walk is a foremost walk from a source vertex s to another vertex v that uses the fewest number of hops (edges). We observe that every mhf walk is an mhf path, as cycles may be removed from any s-to-v walk to obtain a path with fewer hops and the same arrival time at v. For this reason, we refer to mhf walks as mhf paths in the rest of this paper.
As an example, consider the interval-temporal graph of Figure 3. Every walk in this graph is also a path. The paths P1 = <a, 0, b, 1, c, 7, d> and P2 = <a, 0, c, 7, d> arrive at d at time 8 and are foremost paths from a to d. P2 is a 2-hop foremost path while P1 is a 3-hop foremost path. P2 is the only min-hop foremost path, or mhf path, from a to d in Figure 3.

2.2. Min-Wait Foremost

A min-wait foremost, or mwf, walk is a foremost walk from a source vertex s to any other vertex v that accumulates the minimum total wait time at the vertices visited by the walk. The wait time at each vertex u is departure_time(u) − arrival_time(u). The total wait time accumulated by the walk is the sum of the wait times at the visited vertices. Therefore, an mwf walk is a foremost walk from s to v that minimizes Σ (departure_time(u_{t_i}) − arrival_time(u_{t_i})), where u_{t_i} ∈ vertices(mwf(s, v)) and u_{t_i} ≠ s, u_{t_i} ≠ v, as no wait time is accumulated at the source vertex or the destination vertex.
mwf walks can have cycles, as is evident from the example of Figure 4. The mwf walk from source vertex s to destination vertex b is mwf(s, b) = <s, 0, a, 2, c, 4, d, 5, a, 7, b>, arriving at b at time 8 with a total wait time of 1, accumulated at vertex a. Alternate foremost paths from s to b are P1 = <s, 0, a, 4, b>, arriving at time 8 with a wait time of 3, and P2 = <s, 0, a, 7, b>, also arriving at 8 with a wait time of 6. Therefore, to reduce the total wait time along the walk, we may need to go through cycles instead of waiting for a long time at a given vertex.
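The accumulated wait can be computed directly from the definition. The sketch below uses a hypothetical encoding (a walk as an alternating vertex/time list, labels mapping each edge to (start, end, duration) intervals); the toy labels only mimic the wait values quoted for paths P1 and P2, not the actual Figure 4 graph:

```python
def total_wait(walk, labels):
    """Sum of departure_time(u) - arrival_time(u) over the intermediate
    vertices of a time-respecting walk [v0, t0, v1, t1, ..., vk]."""
    wait, arrival = 0, None
    for i in range(0, len(walk) - 2, 2):
        u, t, v = walk[i], walk[i + 1], walk[i + 2]
        if arrival is not None:           # no wait is charged at the source
            wait += t - arrival
        dur = next(d for (s, e, d) in labels[(u, v)] if s <= t <= e)
        arrival = t + dur                 # arrival time at v
    return wait
```

With s→a departing at 0 with duration 1 and a→b departing anywhere in [4, 7] with duration 4, departing a at 4 yields a wait of 3 and departing at 7 a wait of 6, matching P1 and P2 above.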

2.3. Min-Cost Foremost

For this problem, we assume there is a non-negative cost associated with every edge traveled along the walk from a source vertex s to a destination vertex v. The cost of an edge may depend on the departure time from the edge's start vertex. A min-cost foremost, or mcf, walk is a foremost walk from a source vertex s to any other vertex v that incurs the minimum cost along the walk. The cost incurred on a time arc from a vertex u_{t_i} to u_{t_{i+1}}, departing at time t_i, is denoted by a function c(u_{t_i}, u_{t_{i+1}}). Therefore, the objective of the mcf walk problem is to find a walk from a given source vertex s to a destination vertex v with the arrival time t_f such that, among all walks from s to v starting at or after t_start and arriving at t_f, we choose one that accumulates minimum cost along the way, i.e., one that minimizes Σ c(u_{t_i}, u_{t_{i+1}}). If there are cycles in a walk from s to v, we can eliminate them and arrive at the destination at the same time, or sooner, at a cost that is the same, or less. Therefore, every mcf walk can be reduced to an mcf path with the same arrival time and cost. For this reason, we refer to mcf walks as mcf paths in the rest of this paper.

3. NP-Hard Foremost Path and Walk Problems in Interval Temporal Graphs

Several problems are known to be NP-hard for contact-sequence temporal graphs. For example, Bhadra et al. [14] show that computing several types of strongly connected components is NP-hard; Casteigts et al. [15] show that determining the existence of a no-wait path (in a no-wait path, the arrival and departure times at each intermediate vertex are the same) between two vertices is NP-hard; and Zschoche et al. [16] show that computing several types of separators is NP-hard. Additional complexity results for contact-sequence temporal graphs appear in [15]. Since contact-sequence temporal graphs are a special case of interval-temporal graphs, as discussed in Section 1, every problem that is NP-hard for the contact-sequence model remains NP-hard in the interval model. However, the reverse may not be true, as the transformation from the interval model to the contact-sequence model entails a possible explosion in the instance size. In this section, we demonstrate that the mwf and mcf path and walk problems are NP-hard in the interval model but polynomially solvable in the contact-sequence model. In fact, in [13], we show that finding a no-wait path from a given source vertex u to a destination vertex v in an interval-temporal graph whose underlying static graph (defined below) is acyclic is NP-hard and that this problem is polynomially solvable for contact-sequence graphs whose underlying static graph is acyclic. We remark in [13] that our proof is easily extended to show that finding foremost, fastest, min-hop, and shortest no-wait paths in interval-temporal graphs with an acyclic underlying static graph is NP-hard, while these problems are polynomial for contact-sequence temporal graphs whose underlying static graph is acyclic.
The underlying static graph for any contact-sequence temporal graph is the graph that results when each edge (u, v, t, λ) is replaced by the edge (u, v) and then multiple occurrences of the same edge (u, v) are replaced by a single edge (u, v). For an interval-temporal graph, the underlying static graph is obtained by replacing each edge (u, v, intvls) by the edge (u, v). Figure 5 shows the underlying static graphs for the temporal graphs of Figure 1 and Figure 2. We show below that finding mwf paths and walks and mcf paths is NP-hard for the interval model but polynomially solvable for the contact-sequence model (for acyclic graphs).
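This flattening is straightforward to express in code; the sketch below uses our own edge-map and contact-list encodings of the two temporal-graph models:

```python
def underlying_static_graph_itg(itg):
    """itg: {(u, v): interval list}; keep one static edge per labeled edge."""
    return set(itg)

def underlying_static_graph_cs(cs_edges):
    """cs_edges: (u, v, t, lam) contacts; duplicates of (u, v) collapse
    into a single static edge."""
    return {(u, v) for (u, v, t, lam) in cs_edges}
```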
Theorem 1.
The mwf path and walk problems are NP-hard for interval-temporal graphs.
Proof. 
For the NP-hardness proof, we use the sum of subsets problem, which is known to be NP-hard. In this problem, we are given n non-negative integers S = {s_1, s_2, ..., s_n} and another non-negative integer M. We are to determine if there is a subset of S that sums to M. For any instance of the sum of subsets problem, we can construct, in polynomial time, the interval-temporal graph shown in Figure 6. For all edges other than (u_n, v), the permissible departure times are from 0 through M (i.e., their associated interval is [0, M]) and the edge (u_n, v) has the single permissible departure time M (equivalently, its associated interval is [M, M], or simply [M]). The travel time for edge (u_i, u_{i+1}) is s_i, that for (u_n, v) is 1, and that for the remaining edges is 0. For every subset of S, there is a no-wait path from u_0 to u_n that arrives at u_n at a time equal to the sum of the s_i's in that subset. Further, every no-wait path from u_0 to v must reach u_n at time M. Hence, there is a no-wait path from u_0 to v if and only if S has a subset whose sum is M. In addition, every time-respecting path from u_0 to v in the temporal graph of Figure 6 is a foremost path, as no path from u_0 to v can arrive at v either before or after time M + 1. Hence, a min-wait foremost path from u_0 to v has a total wait of 0 if and only if there is a subset of S that sums to M; this path gets to v at time M + 1. Hence, the min-wait foremost path problem is NP-hard. The same construction shows that the min-wait foremost walk problem is also NP-hard for interval temporal graphs, as every walk in the graph of Figure 6 is a path.    □
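The reduction can be sketched programmatically. The code below is our reconstruction of the Figure 6 construction (the bypass vertices x_i follow the naming used in the proof of Theorem 2; the exact figure layout is assumed): edge (u_{i-1}, u_i) "includes" s_i in the sum, while the zero-duration bypass u_{i-1}→x_i→u_i "skips" it, and the final edge forces arrival at u_n at exactly time M:

```python
def subset_sum_reduction(S, M):
    """Builds an interval-temporal graph (as {edge: [(start, end, dur)]})
    for a sum of subsets instance (S, M). Vertices: u_0..u_n, x_1..x_n, v."""
    n = len(S)
    g = {}
    for i, s in enumerate(S, start=1):
        g[(f"u{i-1}", f"u{i}")] = [(0, M, s)]   # include s_i in the sum
        g[(f"u{i-1}", f"x{i}")] = [(0, M, 0)]   # skip s_i via the bypass...
        g[(f"x{i}", f"u{i}")] = [(0, M, 0)]     # ...at zero travel time
    g[(f"u{n}", "v")] = [(M, M, 1)]             # departs only at time M
    return g
```

For S = {1, 2} and M = 3, the subset {1, 2} yields a no-wait path reaching u_2 at exactly 3 and v at 4.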
We note that the construction used in the above proof is easily modified so that every edge has a travel time > 0.
Theorem 2.
The mcf path problem is NP-hard for interval-temporal graphs.
Proof. 
For this proof, we use the partition problem: given S = {s_1, ..., s_n} with s_1 + s_2 + ... + s_n = 2M, we are to determine whether there is a subset whose sum is M; the s_i's and M are non-negative integers. We use the same graph as in Figure 6. Each edge (u_i, x_i) has the cost s_i; all other edges have a cost of 0. As in the proof of Theorem 1, every path from u_0 to v corresponds to a subset of S; this subset consists of the s_i's on the included edges of the form (u_i, u_{i+1}). A path from u_0 to v is feasible (time-respecting) iff its length from u_0 to u_n is P ≤ M. Every feasible path from u_0 to v gets to v at M + 1 and is a foremost path. Feasible paths have the property that the sum, P, of the s_i's in the associated subset is ≤ M. Also, for every such subset of S, there is a corresponding feasible path. The cost of such a path is 2M − P ≥ M (as P ≤ M). This cost takes on the minimum value M iff the sum of the s_i's on the path is also M; i.e., iff S has a partition (i.e., a subset whose sum P is M). Hence, the mcf path problem is NP-hard for interval temporal graphs.    □

4. Min-Hop Foremost Paths

Before we develop the algorithm for finding mhf paths in interval-temporal graphs, we present the following theorems about mhf paths in temporal graphs.
Theorem 3.
There exist interval-temporal graphs in which every mhf path from a source vertex s to a destination vertex v has a prefix path ending at a prefix vertex u such that the prefix path from s to u is not a min-hop path.
Proof. 
This can be seen from Figure 7. The only mhf path from a to d is <a, 0, b, 1, c, 2, d>. However, the prefix path <a, 0, b, 1, c> is not a min-hop path from a to c. The min-hop path from a to c is <a, 8, c>, which is not a prefix of the only mhf path from a to d, even though this mhf path goes via c.    □
Theorem 4.
There exist interval-temporal graphs in which every mhf path from a source vertex s to a destination vertex v has a prefix path ending at a prefix vertex u such that the prefix path from s to u is not a foremost path.
Proof. 
This can also be seen from Figure 7. The only mhf path from a to f is <a, 0, b, 7, d, 8, f>. However, the prefix path <a, 0, b, 7, d> is not a foremost path from a to d. There are two foremost paths from a to d in the interval-temporal graph of Figure 7, namely, <a, 0, b, 1, c, 2, d> and <a, 0, b, 1, c, 2, e, 3, d>. Neither of these foremost paths from a to d is a prefix of the only mhf path from a to f, even though this mhf path goes via d.    □
Theorem 5.
There exist interval-temporal graphs in which every mhf path from a source vertex s to a destination vertex v has a prefix path ending at a prefix vertex u such that the prefix path from s to u is not an mhf path.
Proof. 
This can also be seen from Figure 7. The only mhf path from a to f is <a, 0, b, 7, d, 8, f>. However, the prefix path <a, 0, b, 7, d> is not an mhf path from a to d. Of the two foremost paths from a to d in the interval-temporal graph of Figure 7, namely, <a, 0, b, 1, c, 2, d> and <a, 0, b, 1, c, 2, e, 3, d>, the former has fewer hops. Therefore, it is the mhf path from a to d. However, it is not a prefix of the only mhf path from a to f, even though the only mhf path from a to f goes via d.    □
Theorem 6.
Consider an interval-temporal graph G that has a path from s to v. G has an mhf path P from s to v with the property that every prefix Q of P is an h-hop foremost path to u, where h is the number of hops in Q and u is the last vertex of Q.
Proof. 
Since G has a path from s to v, it has a min-hop foremost path R from s to v. Let S be the longest prefix of R that is not an h-hop foremost path from s to u, where h is the number of hops in S and u is the last vertex in S. If there is no such S, then the theorem is proved. Assume S exists. Replace S in R by an h-hop foremost path from s to u, say S′. The resulting path R′ is an mhf path from s to v with a prefix S′ from s to u that has an earlier arrival time at u but the same number of hops. Repeating this replacement strategy a finite number of times, we obtain a min-hop foremost path P from s to v that satisfies the theorem.    □

4.1. Algorithm to Find mhf Paths in Interval-Temporal Graphs

As noted in [13], some intervals of departure times on an edge may be redundant for the purposes of finding optimal foremost, min-hop, shortest, and fastest paths, as they are dominated by other intervals of the edge. These intervals are also redundant from the perspective of mhf paths (Theorem 6), and we assume that they have been removed in a preprocessing step.
Our algorithm employs the function f(u, v, t) [9,13], which determines the earliest permissible departure time ≥ t using the edge (u, v).
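Since the departure intervals of an edge are time-ordered and disjoint, f(u, v, t) can be realized with a binary search. The sketch below operates on one edge's interval list; the return convention — a (departure_time, interval_index) pair, with (∞, −1) when no departure at or after t exists — is our assumption, chosen to match how the algorithm later uses the result:

```python
from bisect import bisect_left

INF = float("inf")

def earliest_departure(intervals, t):
    """Sketch of f(u, v, t) restricted to one edge: intervals is the edge's
    time-ordered, disjoint list of (start, end, dur) tuples. Returns the
    earliest departure time >= t together with that interval's index, or
    (INF, -1) if no departure at or after t exists."""
    ends = [end for (_, end, _) in intervals]
    i = bisect_left(ends, t)        # first interval not ending before t
    if i == len(intervals):
        return INF, -1
    start = intervals[i][0]
    return max(t, start), i         # wait until the interval opens, if needed
```

The binary search over at most δ intervals per edge is the source of the log δ factor in the complexity analysis later in this section.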

4.1.1. Data Structures Used by the mhf Algorithm (Algorithm 1)

  • We use the same data structure to represent the interval-temporal graph as in our earlier paper [13]. The data structure comprises a (say) C++ vector with one slot for each vertex in the graph. The slot for any vertex u contains a vector of the vertices adjacent from u. Associated with each vertex v adjacent from u is a vector of time-ordered tuples for the edge (u, v).
  • incSt is a structure that keeps track of the vertices discovered in every hop. The fields of this structure are as follows:
    (a) curVtxId is the current vertex.
    (b) arrTm is the time of arrival at the current vertex.
    (c) refPrvIncSt is a reference to the previous incSt, which stores similar information about the previous vertex on this path.
  • allHopPaths—an array of lists that stores the vertices discovered at every hop. This array has, at most, H lists, where H is the maximum number of hops in min-hop paths from the source vertex, s, to any of the vertices v ∈ V. Every element of a list is an instance of the structure incSt.
  • tEKA—an array that stores, for each vertex v, a tuple of:
    (a) the earliest known arrival time t_f.
    (b) the number of hops h_f at which this earliest arrival time was found.
    (c) an index, indx, into allHopPaths, locating the structure incSt in the list at h_f.
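A minimal Python rendering of these structures (field names taken from the list above; the concrete types are our assumptions) might look like:

```python
from dataclasses import dataclass
from typing import Optional

INF = float("inf")

@dataclass
class IncSt:
    """One vertex discovered in a given hop."""
    curVtxId: int                       # the current vertex
    arrTm: float                        # arrival time at the current vertex
    refPrvIncSt: Optional["IncSt"]      # previous IncSt on this path (None at source)

@dataclass
class TEKAEntry:
    """Per-vertex record in the tEKA array."""
    t_f: float = INF                    # earliest known arrival time
    h_f: int = -1                       # hop count at which t_f was found
    indx: int = -1                      # index into allHopPaths[h_f]
```

Following the refPrvIncSt chain backward from any IncSt recovers the actual mhf path, hop by hop.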

4.1.2. Algorithm Description

  • INPUT:
    - Temporal graph represented by the data structure described in Section 4.1.1, item 1
    - Source vertex s
  • OUTPUT:
    - Array tEKA as described in Section 4.1.1, item 4
As an example, consider the interval-temporal graph of Figure 8. Let the source vertex be S and t_start = 0. In the first round (hopCnt = 1), the neighbors A, B, and C are identified as one-hop neighbors of S with one-hop path arrival times of 1, 5, and 10, respectively. Therefore, the earliest known arrival times of these neighbors are recorded in tEKA with a hopCnt of 1. In the next round (hopCnt = 2), these one-hop paths are expanded to two-hop paths to vertices B (S, A, B) and C (S, B, C). The arrival times of these paths are 2 and 6, which are earlier than the current arrival times. Therefore, the earliest known arrival times (foremost arrival times) of B and C are updated. In the third round (hopCnt = 3), the earlier-arriving 2-hop paths to B and C are expanded. While the 2-hop path to C cannot be expanded any further, the 2-hop path to B is expanded to obtain a 3-hop path to C that gets to C at 4, which is earlier than its current arrival time. Therefore, tEKA is again updated for this vertex. This path is expanded in the next round (hopCnt = 4) and the 4-hop path to D (S, A, B, C, D) is discovered, which also yields the foremost arrival time at D. This path arrives at D at 5. The algorithm now terminates, as hopCnt = 4 = |V| − 1.
Algorithm 1 Min-hop foremost path algorithm
1: Create startSt(s, t_start, null) as an instance of incSt.
2: Initialize tEKA[s] ← t_start; ∀v ∈ V, v ≠ s, tEKA[v] ← ∞; initialize allHopPaths[0] ← {startSt}
3: hopCnt = 0; newVsInHop = 1; totVsRchd = 1
4: while (hopCnt < |V| − 1) and (newVsInHop ≠ 0) do
5:     hopCnt++
6:     newVsInHop ← 0
7:     for each (refIncSt ∈ allHopPaths[hopCnt − 1]) do
8:         vert = refIncSt.curVtxId
9:         tvertArr = refIncSt.arrTm
10:        for each (nbr ∈ V[vert].nbrs) do
11:            (depTm, intvlId) = f((vert, nbr), tvertArr)
12:            if depTm = ∞ then
13:                continue                ▹ start next loop iteration
14:            end if
15:            newArrnbr = depTm + λ_intvlId
16:            if (newArrnbr < tEKA[nbr].t_f) then
17:                if (hopCnt > tEKA[nbr].h_f) then
18:                    Create an instance of incSt as nis
19:                    nis ← (nbr, newArrnbr, refIncSt)
20:                    tEKA[nbr].h_f = hopCnt
21:                    allHopPaths[hopCnt].append(nis)
22:                    tEKA[nbr].indx = newVsInHop++
23:                else                    ▹ arrival at an earlier time in the same hop from a different previous node
24:                    Update allHopPaths[hopCnt][tEKA[nbr].indx]
25:                end if
26:                if tEKA[nbr].t_f = ∞ then
27:                    totVsRchd++
28:                end if
29:                tEKA[nbr].t_f = newArrnbr
30:            end if
31:        end for
32:    end for
33: end while
Theorem 7.
Algorithm 1 finds mhf paths from the source vertex, s, to all reachable vertices v ∈ V in the interval-temporal graph G = (V, E).
Proof. 
The proof follows from Theorem 6 and the observation that the algorithm constructs mhf paths first with 1 hop, then with 2 hops, and so on. From Theorem 6, it follows that, in each round, it is sufficient to examine only 1-hop extensions of the paths constructed in the previous round.    □
The asymptotic complexity of Algorithm 1 is O(N · M_itg · log δ), where N and M_itg are the number of vertices and edges, respectively, in the interval temporal graph and δ is the maximum number of departure intervals on an edge among the M_itg edges. This is the same as that of the min-hop algorithm in our earlier paper [13].
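For illustration, the hop-by-hop scheme of Algorithm 1 can be sketched in Python. This is a structure-level sketch, not the paper's implementation: it keeps only (vertex, arrival) pairs per hop rather than the incSt/allHopPaths bookkeeping (so it returns arrival times and hop counts, not the paths themselves), and it commits tEKA updates at the end of each hop, which preserves the per-hop semantics:

```python
from bisect import bisect_left

INF = float("inf")

def earliest_departure(intervals, t):
    """Earliest departure >= t from an edge's disjoint (start, end, dur) intervals."""
    ends = [end for (_, end, _) in intervals]
    i = bisect_left(ends, t)
    if i == len(intervals):
        return INF, -1
    return max(t, intervals[i][0]), i

def mhf_arrivals(graph, s, t_start=0):
    """Single-source min-hop foremost arrival times.
    graph: {u: {v: [(start, end, dur), ...]}} with every vertex present as a key.
    Returns {v: (t_f, h_f)} = (earliest known arrival, hop at which found)."""
    n = len(graph)
    tEKA = {v: (INF, -1) for v in graph}
    tEKA[s] = (t_start, 0)
    frontier = [(s, t_start)]            # paths discovered in the previous hop
    hop = 0
    while frontier and hop < n - 1:      # min-hop paths have at most n-1 hops
        hop += 1
        best = {}                        # best new arrival found in this hop
        for u, arr in frontier:
            for v, intervals in graph[u].items():
                dep, idx = earliest_departure(intervals, arr)
                if dep == INF:
                    continue             # no permissible departure after arr
                new_arr = dep + intervals[idx][2]
                if new_arr < tEKA[v][0] and new_arr < best.get(v, INF):
                    best[v] = new_arr
        for v, arr in best.items():      # commit this hop's improvements
            tEKA[v] = (arr, hop)
        frontier = list(best.items())
    return tEKA
```

On a toy graph the sketch reproduces the behavior described for Figure 8: a later hop may improve the arrival time recorded for a vertex at an earlier hop, and the improved path is the one extended in the next round.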

5. Min-Wait Foremost Walks in Interval Temporal Graphs

5.1. Properties of mwf Walks

In this section, we describe properties of mwf walks that are used later in the section to develop the single-source all-destinations mwf walks algorithm and to prove the correctness of our mwf algorithm. Some of the terminology we use is given below.
  • For a walk X, t_X denotes its arrival time at its terminating vertex v and w_X denotes the total wait time accumulated by X along its way from s to v.
  • Comparing any two walks X and X′ w.r.t. mwf arrival at a vertex v, X is said to be the same or better than X′ if t_X ≤ t_{X′} and w_X ≤ w_{X′}.
  • A walk from s to v via some vertex u is denoted by W(s, u, v).
  • A walk X from s to u that is extended further to obtain a walk W(s, u, v) is a prefix walk, denoted by X = Pre(W(s, u, v)).
Definition 1.
Walk dominance—for any two walks A and A′ from s to u, A is said to dominate A′ if, for any walk W(s, u, v) where A′ = Pre(W(s, u, v)), A′ can always be replaced by A to produce a same-or-better walk W′(s, u, v) w.r.t. mwf arrival at v.
Theorem 8.
If there are two walks, A(t_A, w_A) and A′(t_A′, w_A′), that arrive at a vertex u such that t_A ≤ t_A′, then if ( t_A′ − t_A + w_A ) ≤ w_A′, A dominates A′.
Proof. 
We have:
( t_A′ − t_A + w_A ) ≤ w_A′        (1)
Let the departure time of A′ for any onward walk from u be t_dep ≥ t_A′. Since t_A ≤ t_A′, t_dep ≥ t_A; therefore, A can also depart vertex u at t_dep. If A departs at t_dep, the wait time accumulated by the time of departure is w = w_A + t_dep − t_A. If A′ departs at t_dep, the wait time accumulated by the time of departure is w′ = w_A′ + t_dep − t_A′. Due to Equation (1), w ≤ w′. Hence, if A′ is replaced by A, then any possible extension of A′ can still depart at the same time and would have accumulated an equal or smaller wait time by the time it departs from u. Therefore, A′ can always be replaced by A in any possible extension of A′ to obtain a same-or-better walk.    □
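The dominance criterion of Equation (1) can be captured in a few lines of code. The following is a minimal sketch; the struct and function names are our own and not taken from the paper's implementation:

```cpp
#include <cassert>

// A walk summarized by its arrival time t and accumulated wait time w
// at the vertex where it terminates.
struct Walk {
    long t;  // arrival time
    long w;  // total wait time
};

// Returns true if walk a dominates walk b per Equation (1):
// a arrives no later than b and (t_b - t_a + w_a) <= w_b, i.e., any
// extension of b can be replaced by the same extension of a without
// increasing either the arrival time or the accumulated wait time.
bool dominates(const Walk& a, const Walk& b) {
    return a.t <= b.t && (b.t - a.t + a.w) <= b.w;
}
```

For example, a walk arriving at time 5 with zero wait dominates one arriving at 7 with wait 3, since 7 − 5 + 0 = 2 ≤ 3; a later-arriving walk can never dominate an earlier one.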
Theorem 9.
For identifying non-dominated walks among all the walks arriving at a vertex u, all walks may be arranged in an increasing order of arrival time at vertex u and only adjacent walks need be compared for dominance.
Proof. 
If there are three walks (A, B, C) terminating at u with arrival times t_A ≤ t_B ≤ t_C, we need to show that it is sufficient to compare only adjacent walks to find and retain the dominant walks terminating at u. This follows from the following two conditions:
  • If B survives A (is not dominated by A), then comparing C with B should have the same result as comparing C with A. In other words
    (a)
    If C survives B, it also survives A.
    (b)
    If C is dominated by B, it is also dominated by A.
  • If A dominates B and B dominates C, then A also dominates C. Therefore, it does not matter in which order the walks are compared. Both B and C will be eliminated.
  • We are given that B is not dominated by A, i.e., B survives A. Based on the dominance criterion of Equation (1), we obtain Equation (2) for B surviving A:
    ( t_B − t_A + w_A ) > w_B        (2)
    (a)
    When C survives B, we have
    ( t_C − t_B + w_B ) > w_C        (3)
    We are to prove that C also survives A
    ( t_C − t_A + w_A ) > w_C        (4)
    Adding Equations (2) and (3) yields (4)
    (b)
    When C is dominated by B, Equation (1) gives
    ( t_C − t_B + w_B ) ≤ w_C        (5)
    Replacing w B from Equation (2) in Equation (5), we obtain
    ( t_C − t_A + w_A ) ≤ w_C        (6)
    Therefore, A also dominates C.
    This proves that, when B is not dominated by A, then comparing C with B has the same result as comparing C with A.
  • Here, we prove that if A dominates B and B dominates C, then A also dominates C. Since A dominates B we have
    ( t_B − t_A + w_A ) ≤ w_B        (7)
    Since B dominates C, we have Equation (5). Equations (5) and (7) can be added together to obtain Equation (6), which states that A dominates C. Therefore, when A dominates B and B dominates C, regardless of the order in which the adjacent walks are compared, both B and C will be eliminated.
Hence, for any three walks A, B, and C with t_A ≤ t_B ≤ t_C, only adjacent walks need to be compared to remove dominated walks and retain non-dominated walks. Applying this transitively to all walks arriving at u, we see that if all the arriving walks are arranged in non-decreasing order of their arrival times, only adjacent walks need to be compared to eliminate the dominated walks and retain the non-dominated ones.    □
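The pruning procedure justified by Theorem 9 can be sketched as a single pass over the arrival-time-sorted walk list; the names below are ours, and walks are assumed to be kept as (arrival time, wait time) pairs:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Walks at a vertex, as (arrival time, wait time) pairs sorted by arrival.
using WalkList = std::vector<std::pair<long, long>>;

// Equation (1): an earlier-arriving walk a dominates a later walk b
// iff (t_b - t_a + w_a) <= w_b.
static bool dominatesPair(const std::pair<long, long>& a,
                          const std::pair<long, long>& b) {
    return (b.first - a.first + a.second) <= b.second;
}

// Remove dominated walks by comparing each arriving walk only with the
// last surviving walk, which is sufficient by Theorem 9.
WalkList pruneDominated(const WalkList& sorted) {
    WalkList kept;
    for (const auto& w : sorted) {
        if (kept.empty() || !dominatesPair(kept.back(), w))
            kept.push_back(w);
    }
    return kept;
}
```

For instance, with walks (0, 0), (5, 3), (6, 4): (5, 3) survives (0, 0) since 5 > 3, but (6, 4) is dominated by (5, 3) since 6 − 5 + 3 = 4 ≤ 4, so only the first two are retained.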
Theorem 10.
For any two walks A and B arriving at a vertex u, where t_B > t_A, if B is not dominated by A as per the dominance criterion (1), then for any departure time t available on an outgoing edge (u, v) such that t ≥ t_B, A need not be considered for expansion.
Proof. 
Let A′ be the walk obtained by extending A from u to a neighbor v at time t ≥ t_B. Similarly, B′ is obtained by extending B on (u, v) at t ≥ t_B. With λ as the travel time when departing at t on the edge (u, v), we have the following:
t_A′ = t + λ;  w_A′ = w_A + (t − t_A)        (8)
t_B′ = t + λ;  w_B′ = w_B + (t − t_B)        (9)
Therefore, we have t_A′ = t_B′. In addition, since B is not dominated by A, we have
( t_B − t_A + w_A ) > w_B        (10)
From Equations (8)–(10), we have w_B′ < w_A′. This means that extending A at t ≥ t_B is not beneficial, as the result will be dominated by B′.    □
Theorem 11.
In order to find mwf walks, if walk A arriving at u is extended from u to v using an edge (u, v) at time t_dep ≥ t_A in a departure interval I(s, e, λ), then there is no benefit to extending A using the same edge (u, v) in the same interval I(s, e, λ) at a time t where t > t_dep.
Proof. 
Let the extension of A from u to v obtained by departing at t_dep be A1 and the extension obtained by departing at t be A2. We have the following two equations for the arrival and wait times of A1 and A2, respectively:
t_A1 = t_dep + λ;  w_A1 = w_A + (t_dep − t_A)        (11)
t_A2 = t + λ;  w_A2 = w_A + (t − t_A)        (12)
Clearly, t_A1 < t_A2, as t_dep < t. Further, substituting t_A2, t_A1, and w_A1 from Equations (11) and (12) into the expression t_A2 − t_A1 + w_A1, we obtain
t_A2 − t_A1 + w_A1 = w_A2        (13)
From the dominance criterion of Equation (1), we conclude that A1 dominates A2. This means that if we extend A at t_dep and at t > t_dep in the departure interval I(s, e, λ) using the edge (u, v) to obtain A1 and A2, respectively, A2 will be dominated at v by A1. Therefore, there is no benefit to extending A at t.    □
Theorem 12.
If the walk A arriving at u is extended from u to v using the edge (u, v) at time t_dep ≥ t_A in a departure interval I1(s1, e1, λ1), then A may need to be extended again using the edge (u, v) in a subsequent interval I2(s2, e2, λ2), where s2 > e1, if one of the following is true:
  • s2 + λ2 < t_dep + λ1
  • λ2 > λ1
Proof. 
t_A ≤ e1, as A can be extended in I1. Therefore, the earliest time at which A can be extended in I2 is s2. It need not be extended at any other time in I2 due to Theorem 11. Let the extension of A obtained by extending in I1 be A1 and that obtained by extending in I2 be A2.
  • If s2 + λ2 < t_dep + λ1, then t_A2 < t_A1; therefore, A will need to be extended at s2 so that we do not miss any opportunity of further expansion in departure intervals available at v at a time t such that s2 + λ2 ≤ t < t_dep + λ1.
  • We have the following two equations for the arrival and wait times of A1 and A2, respectively:
    t_A1 = t_dep + λ1;  w_A1 = w_A + (t_dep − t_A)        (14)
    t_A2 = s2 + λ2;  w_A2 = w_A + (s2 − t_A)        (15)
    Comparing A1 and A2 at v, we see that s2 + λ2 ≥ t_dep + λ1, as otherwise condition 1 would be true. Therefore, we have:
    t_A2 − t_A1 = s2 + λ2 − t_dep − λ1        (16)
    Let e = t_A2 − t_A1 + w_A1. Evaluating e using Equation (16) and substituting for w_A1 from Equation (14), we obtain
    e = w_A + s2 − t_A + (λ2 − λ1)        (17)
    Substituting w_A2 from Equation (15) into Equation (17), we obtain:
    e = w_A2 + (λ2 − λ1)        (18)
    We know that for A2 to survive being dominated by A1, we need Equation (19) to be true:
    e > w_A2, i.e., w_A2 + (λ2 − λ1) > w_A2        (19)
    Therefore, for Equation (19) to hold, we need λ2 > λ1; otherwise, A2 will be dominated by A1.
Therefore, unless condition 1 or 2 is true, there is no benefit to extending in interval I2.
This can also be seen from the example of Figure 9. There are two intervals on the edge (u, v): I1(2, 6, 12) and I2(10, 20, 5). If A departs at 2, it does not need to depart in I2(10, 20, 5), as neither condition of Theorem 12 is satisfied. If A departs at 4 in I1(2, 6, 12), it needs to depart in I2(10, 20, 5) again, as condition 1 is satisfied.    □
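The two conditions of Theorem 12 reduce to a one-line test; below is a sketch (the struct and function names are ours) that reproduces the Figure 9 example:

```cpp
#include <cassert>

// A departure interval I(s, e, lambda): departures are allowed at any
// time t in [s, e], with travel time lambda.
struct Intvl {
    long s, e, lambda;
};

// Theorem 12: after extending a walk at time tDep in interval i1, a
// re-extension in a later interval i2 (with i2.s > i1.e) is needed only if
//   (1) departing at i2.s arrives earlier: i2.s + i2.lambda < tDep + i1.lambda, or
//   (2) i2's travel time is larger: i2.lambda > i1.lambda.
bool needsReExtension(long tDep, const Intvl& i1, const Intvl& i2) {
    return (i2.s + i2.lambda < tDep + i1.lambda) || (i2.lambda > i1.lambda);
}
```

With I1(2, 6, 12) and I2(10, 20, 5): departing at 2 arrives at 14, and 10 + 5 = 15 is not earlier, so I2 is skipped; departing at 4 arrives at 16, so condition 1 holds and I2 must be used again.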
Theorem 13.
If walk A arriving at u is extended from u to v using the edge (u, v), it should always be extended at (t_dep, intvlId) = f(u, v, t_A), where f(u, v, t) is the next function referred to in Section 4.1 and described in [9,13].
Proof. 
From Theorem 11, a given walk departing in a departure interval I on an edge (u, v) should always depart at the earliest possible departure time in I. Further, per the conditions stated in Theorem 12, for any two available departure intervals I1(s1, e1, λ1) and I2(s2, e2, λ2) with e1 ≤ s2 on an edge (u, v), a walk A terminating at u such that t_A ≤ e1 should always depart in I1; whether or not it also departs in I2 is determined by the conditions of Theorem 12.    □

5.2. Departure from Source Vertex s

All walks that start from a source vertex s have a wait time of 0, as no wait is accumulated at the source vertex. Therefore, for any available departure interval on an edge (s, x), we should extend at every possible instant of time in this departure interval. Theorems 11 and 12 do not apply to walks departing from the source vertex s, as these theorems assume that extra wait is accumulated if a walk is extended at a later time from the departing vertex u. This is not true when u = s, as the wait time at s is always 0. Consequently, we may assume that mwf walks do not have a cycle that involves s, as such cycles can be removed from the walk without increasing either the total wait time or the arrival time at the destination. From the source vertex s, walks may depart at every available departure instant to find mwf walks. To account for this, we introduce the concept of a walk class.
Definition 2.
A walk class (w_s, w_e, u) is a set of walks that arrive at the vertex u with a wait time of 0. The first walk in this set arrives at w_s and the last one arrives at w_e, where w_e > w_s. There is a walk in the walk class at every instant of time in the range [w_s, w_e], and each of these walks has the same total wait time, which is 0. This can be seen in Figure 10.
For every departure interval I(start, end, λ) with end > start on an edge (s, x), we need to generate a walk class ranging from start to end, with travel time λ and a wait time of 0. Once the walks in the walk class arrive at the neighbor x, Theorems 11 and 12 apply to each walk in the class.
Theorem 14.
Each walk in a walk class survives the walk, if any, that arrives before it.
Proof. 
This is easy to see from Equation (1), as these walks have the same wait time but different arrival times.    □
Theorem 15.
If two walk classes W C 1 and W C 2 arrive at a vertex u, there is no dominance to be considered.
Proof. 
This follows from Equation (1), as the wait times of all walks are 0. When the classes overlap, one of them may be discarded. When they do not overlap, both can be retained.    □
When a walk class WC(w_s, w_e) terminates at a vertex u, if some portion of it overlaps with a departure interval I(s, e, λ) from u to a neighbor v, the set of walk instances x in the walk class WC with s ≤ t_x ≤ e can be immediately extended to the neighbor v, without any additional wait at u, into a new walk class WC′(w_s′, w_e′) arriving at v. Therefore, each walk in WC′(w_s′, w_e′) also has a wait time of 0. If there is no overlap with a departure interval from u to v, then only the walk instance at w_e needs to be extended to v, due to Theorems 10 and 14.
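Extending the overlapping portion of a walk class can be sketched as computing the intersection of the class's arrival range with the departure interval and shifting it by the travel time. This is our reading of the rule above; the names and the explicit range formula are ours, not from the paper's implementation:

```cpp
#include <algorithm>
#include <cassert>

// A walk class: walks with zero wait arriving at every instant in [ws, we].
struct WClass {
    long ws, we;
};

// If walk class wc at u overlaps departure interval [s, e] (travel time
// lambda) on edge (u, v), the overlapping walk instances extend
// immediately, with zero wait, into a new walk class at v; returns false
// when there is no overlap (in which case only the walk at we is
// extended, per Theorems 10 and 14).
bool extendClass(const WClass& wc, long s, long e, long lambda, WClass& out) {
    long lo = std::max(wc.ws, s);   // earliest instance that can depart
    long hi = std::min(wc.we, e);   // latest instance that can depart
    if (lo > hi) return false;      // no overlap
    out = {lo + lambda, hi + lambda};  // arrival range at v, wait still 0
    return true;
}
```

For example, a class arriving over [3, 10] extended through a departure interval [5, 8] with λ = 2 yields a new class arriving over [7, 10] at the neighbor.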
For the mwf problem, intervals that are redundant, as described in [13], for the foremost, min-hop, and fastest problems are not redundant and need to be retained. Such intervals are, however, redundant for the mhf problem, as noted in Section 4.1.
This is illustrated with the example of Figure 11. The min-wait foremost (mwf) walk from s to c benefits by departing from the vertex s at 0, instead of waiting for the next, faster interval that becomes available for departure at 2 and reaches a at 4. Departing at 0 from s reaches a at 5. In this case, the mwf walk is W_mwf = (s, 0, a, 7, b, 1, c), arriving at vertex c at time 9 with a wait time of 0.

5.3. Additional Data Structures

For finding mwf walks, we introduce an additional data structure: a sorted list of all departure intervals in the interval temporal graph, sorted by the arrival time when departing at the start of the interval. Each interval in this list is represented as I(s, e, λ) and the sort key for the list is s + λ. This is illustrated in Figure 12 for the interval temporal graph of Figure 11.
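Building the sorted interval list of Section 5.3 amounts to one sort on the key s + λ. A minimal sketch (struct and function names are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A departure interval on some edge: depart at any t in [s, e],
// arriving at t + lambda.
struct DepIntvl {
    long s, e, lambda;
};

// Build the interval list of Section 5.3: all departure intervals of the
// graph, sorted by the arrival time when departing at the start of the
// interval, i.e., by the key s + lambda.
std::vector<DepIntvl> buildIntvlList(std::vector<DepIntvl> intvls) {
    std::sort(intvls.begin(), intvls.end(),
              [](const DepIntvl& a, const DepIntvl& b) {
                  return a.s + a.lambda < b.s + b.lambda;
              });
    return intvls;
}
```

For instance, intervals (10, 20, 5), (2, 6, 12), and (0, 4, 3) have keys 15, 14, and 3, so the sorted order is (0, 4, 3), (2, 6, 12), (10, 20, 5).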

5.4. Algorithm to Find mwf Walks

5.4.1. Data Structures Used by mwf Algorithm 2

  • Input graph G as in our earlier paper [13] and briefly described in Section 4.1.
  • i n t v l s I n f o is a structure that describes a departure interval.
    (a)
    u is the start vertex.
    (b)
    v is the end vertex.
    (c)
    s t —start time of the interval
    (d)
    e n d —end time of the interval.
    (e)
    λ is the travel time on this intvl.
    (f)
    i n t v l I d is the Id of this intvl on connection ( u , v ) in the input graph
  • i n t v l L i s t —array of i n t v l I n f o that contains all the departure intervals from the input graph, as described in Section 5.3
  • P Q of ad-hoc intervals—Priority queue of items of type i n t v l s I n f o . The priority key is ( i n t v l S t r t + λ ).
  • m w f W C l a s s is a structure that describes a walk class
    (a)
    s t r t T m —arrival time of first walk instance in this class
    (b)
    e n d T m —arrival time of last walk instance
    (c)
    w t T m —total wait time accumulated on the walk.
    (d)
    lastExpAt—an array of the λs of the departure intervals in which this walk was last expanded to each of the nbrs of its terminating vertex. The number of items in this array is the number of nbrs of the vertex at which this walk terminates.
  • l i s t W C l a s s e s —This is an array of lists of walk classes. The size of the array is the number of vertices in the graph. Each list in the array is the list of walk classes terminating at the vertex corresponding to the array index. This list is always sorted in the increasing order of a r r v T m S t r t of the walk classes.
  • m w f W a l k s —An array of walk classes. Array has an item for each vertex in the graph. For each vertex, it has the m w f walk to that vertex.

5.4.2. Algorithm Description

  • INPUT:
    • Temporal graph represented by data structure described in Section 5.4.1, item 1
    • Source vertex s
  • OUTPUT:
    -
    Array m w f W a l k s as described in Section 5.4.1, item 7
For simplicity, we assume that all the edge travel times are > 0. Our algorithm for solving the mwf problem (Algorithm 2) can easily be extended to the case when some travel times are 0; later, we show how to carry out this extension. Algorithm 2 examines all possible departure intervals in non-decreasing order of their arrival times when departing at the start of the interval (intvl.strtTm + intvl.λ). In Step 8, it saves the arrival time of the latest interval being considered in the variable curTm. In the subsequent while loop, it keeps fetching new intervals as long as they have the same arrival time as curTm when departing at the start of the interval. For each fetched interval newIntvl, at the originating vertex u of newIntvl, it finds the walk class prevW with the latest start time prevW.strtTm such that prevW.strtTm ≤ newIntvl.st. No other walk arriving at u needs to be considered for expansion on this interval, due to Theorem 10. Using this previous walk at the originating vertex u, a new walk is created from u to v at the start of newIntvl using the function createNewW. This function creates a new walk from the previous walk departing at the start of newIntvl only if the second condition of Theorem 12 is satisfied. The first condition of Theorem 12 is already taken care of because the intervals are examined in non-decreasing order of intvl.st + intvl.λ.
Algorithm 2 Min-wait foremost walks algorithm
1: Create a new newWClass(tstart, ∞, 0, nbrs_s[0]) as an instance of mwfWClass. Values are assigned to the fields arrivTmStrt, arrivTmEnd, wtTm, lastExpAt, respectively. The last field is an array of size equal to the out-degree of vertex s.
2: Initialize mwfWalks[s] ← newWClass; listWClasses[s].push_back(newWClass); ∀v ∈ V, v ≠ s, mwfWalks[v] ← null
3: nodesRchd ← 1; staticIntvls ← intvlList.size; indxStaticIntvls ← 0
4: setupNewWClass(s, newWClass.arrivTmStrt)
5: reachableNodes ← getReachableNodes(G, s)
6: newIntvl ← removeMinIntvl()
7: while ((indxStaticIntvls < staticIntvls) or !PQ.empty()) and (nodesRchd < reachableNodes) do
8:     curTm ← newIntvl.st + newIntvl.λ
9:     u = newIntvl.u; v = newIntvl.v
10:     newVs ← 0
11:     while (curTm == newIntvl.st + newIntvl.λ) do
12:         if (newIntvl.v ≠ s) then
13:             prevWIndx ← getPrevW(newIntvl)
14:             if prevWIndx ≠ −1 then
15:                 newW ← createNewW(listWClasses[u][prevWIndx], newIntvl)
16:                 if newW ≠ null then
17:                     ins ← insNewW(newW, v)    ▹ Inserts newW in the list of walks at v. If newW is the best mwf walk, it is recorded in mwfWalks[v]
18:                     if ins == NEW then
19:                         newVs++
20:                     end if
21:                     if ins ≠ 0 then
22:                         setupNewWClass(v, newW.strtTm)
23:                     end if
24:                 end if
25:             end if
26:         end if
27:         newIntvl ← removeMinIntvl()
28:     end while
29:     nodesRchd += newVs
30: end while
After a new walk is created on an edge (u, v), it is appended to the list of walks at v. To qualify for appending to the list of walks at v, it needs to be compared for dominance with only the last walk in the list at v, due to Theorem 9. The walk is appended to the list at v only if it is not dominated. The insert function also checks whether this is the first walk arriving at v. If so, it updates mwfWalks for vertex v and increments the count newVs. If this is not the first walk to arrive at v but it is better w.r.t. mwf arrival than the best known walk at v, mwfWalks is updated for vertex v. Note that the new walk cannot have an earlier arrival time than the best known walk at v, as the intervals are examined in non-decreasing order of arrival times, but it may have a smaller wait time.
After a walk has been appended to the list of walks at v, the function setupNewWClass examines all the neighbors of v. If the start time of the last arriving walk class falls in the middle of a departure interval intvl, given in the input graph G on an edge (v, w), a new departure sub-interval of intvl, say subIntvl, is created. The start time of subIntvl is the arrival time of the walk and its travel time is that of intvl (intvl.λ). This way, every walk originates at the start of an interval given in graph G or at the start of a subIntvl that is a sub-interval of one of the intervals given in the input graph G. This new subIntvl is inserted into the priority queue PQ. The removeMinIntvl function returns the interval with the minimum arrival time (when departing at the start of the interval) from among intvlList and the top of PQ.
Theorem 16.
Algorithm 2 finds m w f walks from s to all other vertices in an interval temporal graph when all edge travel times are > 0.
Proof. 
The algorithm examines all the departure intervals in non-decreasing order of arrival time when departing at the start of the interval. For every interval newIntvl in that order, it considers the best walk eligible for extension in this interval, as per Theorem 10, by obtaining the index of the previous walk (prevWIndx) in the list of walks at the departure vertex u, in Step 13. It extends this previous walk with departure in newIntvl only if the extension is beneficial as per Theorem 12. Once the walk is extended, it updates lastExpAt for the previous walk for the edge (u, v), so that subsequent departure intervals on (u, v) are used for expanding this walk only if the resultant walks may be beneficial, as per Theorem 12.
The extended walk is appended to the list of walks at v only if it is not dominated as per the dominance criterion of Equation (1). In addition, m w f walks at v are updated if the arriving walk is better than the previous m w f walk, w.r.t m w f arrival at v or if it is the first walk to arrive at v.
Finally, the arriving walk at v, newW, is examined w.r.t. all outgoing edges (v, w). If, for a departure interval I(s, e, λ) to any neighbor w, we have I.s < newW.strtTm ≤ I.e, a new sub-interval, subIntvl, is created with a start time of newW.strtTm and inserted into PQ. When such a subIntvl is the minimum interval among all the intervals from the input graph G and the subIntvls created during the course of Algorithm 2, newW gets the opportunity to expand at the start time of subIntvl.
Therefore, this algorithm starts at the start vertex and generates walks at all feasible departure instances w.r.t m w f walks criteria. Feasible departure intervals are examined in non-decreasing order of arrival times; therefore, every new walk generated has an arrival time ≥ arrival time of any previously generated walk. No feasible non-dominated walk is missed due to Theorems 8–13.
Every non-dominated walk generated is preserved and expanded as per Theorems 11–13. When a walk terminating at a vertex v is discovered that is the best among the walks arriving at v w.r.t m w f arrival, it is recorded as the m w f walk at v. The algorithm terminates only when one of the following is true
  • All intervals including the s u b i n t e r v a l s created during the course of Algorithm 2 have been examined.
  • mwf walks to all reachable vertices have been found and the arrival time when departing at the start of the next interval to be examined is greater than the maximum arrival time of the mwf walks found. Therefore, examining any more intervals cannot give a better mwf walk.
Therefore, when the algorithm terminates, m w f walks to all reachable vertices have been found. □
For handling the case when some travel times are 0, in the while loop of line 11, we first need to obtain all the edges with 0 travel time arriving at curTm and form their transitive closure. Then, we can process every edge arriving at the same curTm over this transitive closure.
Theorem 17.
When time is an integer, Algorithm 2 has pseudopolynomial complexity.
Proof. 
Let T be the maximum possible arrival time at any vertex. Since curTm is an integer that increases in each iteration of the outer while loop, this outer loop is iterated O(T) times. For each value of curTm, the while loop of line 11 is iterated O(M_itg · T) times, where M_itg is the number of edges in the input interval temporal graph, as the number of walks that can arrive at a vertex v at time curTm is O(indegree(v) · T). The components of the time required for each iteration of the while loop of line 11 are
  • log(walks(v)) in Step 13 to find the previous walk that can be extended at the current departure instant (t_arr). walks(v), the number of walks at v at time t_arr, can be at most t_arr, as each preserved walk at a given vertex v has a unique arrival time. Therefore, the time taken by this step is log(t_arr), or at most log(T).
  • Constant time for creating a new walk n e w W in step 15.
  • Constant time for inserting n e w W , in the list of walks l at v in step 17. l at every vertex v maintains a list of walks that has arrived at v in non-decreasing order of their arrival times. n e w W only needs to be compared with the last walk in l.
  • At most d · log(δ) in Step 22, where d is the out-degree of the vertex v at which newW terminates and δ is the maximum number of departure intervals on an outgoing edge (v, w) over all outgoing edges from v. Each of the departure intervals on outgoing edges from v may need to be added to PQ.
    The size of PQ can be at most the maximum number of departure instants in the graph G, which is M_itg · T. Therefore, the maximum time taken in this step is d · log(δ) · log(M_itg · T).
  • Finally, in Step 27, the departure instant with the minimum arrival time is retrieved. The maximum size of PQ can be M_itg · T. Therefore, the maximum time taken by this step is log(M_itg · T).
Therefore, the complexity of the algorithm is O(T · (M_itg · T · (log(T) + d · log(δ) · log(M_itg · T) + log(M_itg · T)))), which is O(M_itg · T² · log(T)). Hence, the time complexity of Algorithm 2 is pseudopolynomial. □

6. Linear Combination of Optimization Criteria

Bentert et al. [10] present a polynomial time algorithm that optimizes any linear combination of the eight optimization criteria foremost, reverse_foremost, fastest, shortest, min_hop, cheapest, most_likely, and min_wait in a contact-sequence temporal graph. In this section, we show that, when time is discrete, we can use the algorithm of Bentert et al. [10] to solve the mhf, mwf, and mcf problems in polynomial time for contact-sequence graphs. This algorithm may also be used to find mhf, mwf, and mcf paths in interval temporal graphs by first transforming the interval-temporal graph into an equivalent contact sequence graph, as described in Section 1. Since this transformation may increase the number of edges exponentially, this approach does not result in a polynomial time algorithm for interval-temporal graphs. However, it does result in a pseudopolynomial time algorithm, as shown in Theorem 17.
To solve m h f , for example, for contact-sequence graphs, we perform the following:
  • Set the coefficient for every criterion other than m i n _ h o p and f o r e m o s t to 0.
  • The coefficient for the foremost criterion may be set to any integer that is greater than or equal to the number of vertices, n, in the contact-sequence graph.
  • Set the coefficient for the min-hop criterion to 1.
With these settings, the algorithm of Bentert et al. [10] finds walks that minimize h(s, v) = c_f · t_a(v) + hops(v), where c_f is the coefficient for the foremost criterion, t_a(v) is the time at which the walk from s arrives at v, and hops(v) is the number of hops in the walk. Since a min-hop walk is necessarily a min-hop path, it has no more than n − 1 hops. If we examine the digits of h(s, v), which is a non-negative integer, using the radix c_f, the least significant digit is the number of hops and the remaining digits give t_a. Hence, h(s, v) is minimized by min-hop foremost paths to v and not by any other path or walk.
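The radix argument above can be made concrete with a few lines of code; this is our own illustrative sketch of the encoding, not code from either paper:

```cpp
#include <cassert>

// Linear-combination trick of Section 6: with coefficient cf greater
// than the maximum possible hop count, cf * arrivalTime + hops is
// minimized exactly by min-hop foremost paths, and the two criteria can
// be read back as the digits of the combined value in radix cf.
long combine(long cf, long arrivalTime, long hops) {
    return cf * arrivalTime + hops;
}

// The least significant radix-cf digit is the hop count.
long decodeHops(long cf, long combined) { return combined % cf; }

// The remaining digits give the arrival time.
long decodeArrival(long cf, long combined) { return combined / cf; }
```

For instance, with cf = 1000, an arrival at time 5 with 9 hops encodes to 5009, which is smaller than 6001 (arrival 6, 1 hop): the foremost criterion always wins, and hops break ties.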
The strategy for mwf and mcf is similar. In the case of mwf, the function to optimize is mwf(s, v) = c_f · t_a(v) + wait(s, v), and for mcf, the optimization function is mcf(s, v) = c_f · t_a(v) + cost(s, v), where wait(s, v) is the total wait time on the walk from s to v and cost(s, v) is its cost. For mwf(s, v), c_f must be larger than the maximum total wait time on an optimal foremost walk from s (a simple bound is the maximum arrival time of a foremost path to reachable vertices), and for mcf(s, v), c_f must be larger than the maximum cost of an optimal foremost path (a simple bound is the sum of the maximum cost of each edge). For simplicity, we use c_f = 2^32 in our experiments, as this is large enough for our datasets.
It is easy to see how the above modeling strategy may be used to find, say, min-cost foremost paths with the fewest number of hops.

7. Experimental Results

In this section, we compare the performance of our mhf path and mwf walk algorithms to the algorithm of Bentert et al. [10]. Since the latter algorithm works only on contact-sequence graphs, we first transform our interval-temporal graphs into equivalent contact-sequence graphs (as noted in Section 1, this can be performed when time is discrete) and then use the strategy discussed in Section 6 to formulate an appropriate optimization function for the algorithm of Bentert et al. [10]. Since the optimization function constructed in Section 6 has values larger than what can be handled by the datatype int, we modified the code of Bentert et al. [10] to use the datatype long for the relevant variables.
Our experimental platform was an Intel Core i9-7900X CPU @ 3.30 GHz with 64 GB RAM. The C++ code for finding the optimal linear combination of multiple optimization criteria for contact-sequence temporal graphs was obtained from the authors of [10]. All other algorithms were coded by us in C++. The codes were compiled using the g++ ver. 7.5.0 compiler with option -O2. For test data, we used the contact-sequence graph datasets used in [8,10]. We transformed these contact-sequence graphs to interval-temporal graphs to run our algorithms on them. We also generated some synthetic datasets as interval-temporal graphs, which we transformed to contact-sequence models so the programs of Bentert et al. [10] could be run on them.

7.1. Datasets

We used 13 of the 14 real-world contact-sequence graphs that were used in [8,10,12,13]. The 14th dataset, dblp, was not used, as it had a few negative timestamps. The statistics for the remaining 13 real-world datasets are given in Table 1. In this table, |V| is the number of vertices, |E_s| is the number of edges in the underlying static graph, and cs_edges is the number of edges in the contact-sequence temporal graph. The ratio cs_edges/|E_s| is the temporal activity of the graph. Note that the number of edges in the interval-temporal graph is also |E_s|. The datasets have a wide range of sizes in terms of the number of vertices and edges. Consistent with [8,10,12,13], the travel time, λ, on all edges was set to 1. The temporal activity on these datasets is low, ranging from 1 to 3.67.
In [13], we generated five synthetic datasets with higher activity and variable λ s by starting with the social network graphs of youtube, flickr, livejournal shared by the authors of [17] at http://socialnetworks.mpi-sws.org/data-imc2007.html (accessed on 16 August 2022). These graphs represent user-to-user interactions.
Table 2 shows the statistics for the five synthetic temporal graphs generated by us.
Table 3 gives the time (in seconds) required to read each dataset from the disk as well as the disk memory required by each dataset. The columns labeled [10] are for the case when the dataset is stored in the input format used by the algorithm of Bentert et al. [10] and those labeled Ours are for the input format required by our algorithm. As can be seen, the disk reading time and disk space required by the interval-temporal graph representation are less than those required by the representation of Bentert et al. [10]. Further, this difference increases as the temporal activity increases. In fact, for four of the instances, the input creation code of [10] failed to create the required input format for lack of sufficient memory (out-of-mem).
As an example, for the large synthetic youtube graph with (μI = 4, μD = 8, μT = 3), which has an activity factor of 32.1, the reading time of the interval-temporal graph is 3.53 s, while the program by Bentert et al. [10] takes about 102.5 s to read the corresponding graph in Bentert's transformed format. The size of the interval-temporal graph for the same dataset on disk is 272.3 MB, as compared to 6.7 GB for Bentert's transformed graph derived from the corresponding contact-sequence graph.

7.2. Run Times

As in [8,10,12,13], we assume that the graph is resident in memory (i.e., the read time from disk is not accounted for). This is a valid assumption in applications where the graph is input once and queried many times. To use the algorithm of Bentert et al. [10], we use the following steps:
  • Transform the interval-temporal graph to an equivalent contact-sequence graph (csg) as described in Section 1.
  • Use Bentert’s graph transformation program [10] to convert the csg to the input format used by Bentert’s linear combination algorithm.
  • Run Bentert’s linear combination program on the transformed graph from Step 2 using the coefficients given in Section 6.
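Step 1 above can be sketched as follows for a single edge; as before, the Interval record and per-integer-time departures are assumptions made for illustration. The expansion makes the potential size blowup concrete: an interval spanning d integer departure times becomes d separate contact-sequence edges, even though the interval itself is stored in constant space.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative interval record (assumption): departures allowed at every
// integer time in [start, end], each with travel duration lambda.
struct Interval { long start, end, lambda; };

// Expand one departure interval into explicit (departureTime, lambda)
// contacts, one per integer departure time in the interval.
std::vector<std::pair<long, long>> intervalToContacts(const Interval& iv) {
  std::vector<std::pair<long, long>> contacts;
  for (long t = iv.start; t <= iv.end; ++t)
    contacts.push_back({t, iv.lambda});
  return contacts;
}
```

An interval [0, 999] with λ = 1 thus yields 1000 contact edges; since interval endpoints can be encoded in a number of bits logarithmic in their value, the transformed graph can be exponentially larger than the interval representation.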
The Step 2 transformation was unsuccessful on the Koblenz dataset wiki-en-edit as well as on all our synthetic datasets other than the youtube graphs with (μI = 4, μD = 5 or 8, μT = 3), as indicated in Table 3. The failure of this step resulted from the unavailability of sufficient memory to run the Step 2 transformation code of [10] on these instances. We measured the average of the runtimes for 100 randomly selected source vertices for each dataset except the delicious dataset from the Koblenz collection and the synthetic datasets, where we used only five randomly selected source vertices. This reduction in the number of source vertices was necessary because of the excessive runtime taken by the algorithm of [10] on these datasets. The average runtimes (in seconds) for the Koblenz and synthetic datasets are given in Table 4. These times do not include the time required by Steps 1 and 2. The speedups (time taken by the algorithm of [10] divided by the time taken by our algorithm) are also shown visually in Figure 13, Figure 14, Figure 15 and Figure 16.
The speedups obtained by our mhf algorithm over [10] range from 3 to 207.5 for the Koblenz datasets and from 451.3 to 679.5 for the synthetic datasets. Our mhf algorithm outperforms that of [10] on all datasets. The speedups obtained by our mwf algorithm over [10] range from a low of 0.045 to a high of 23.3. The algorithm of [10] outperforms our mwf algorithm when the graph has very low connectivity: their algorithm quickly discovers that no more vertices are reachable from the source vertex and, so, terminates sooner than our mwf algorithm does. However, when there are many reachable vertices, our algorithm outperforms that of Bentert et al. [10]. We also note that our algorithm works on interval-temporal graphs, where time intervals may be continuous or have large durations, while the algorithm of Bentert et al. [10] works only on contact-sequence graphs, which are a subset of interval-temporal graphs. As noted earlier, the algorithm of [10] could not be run on some datasets because the Step 2 transformation ran out of memory. On the four datasets where the algorithm of [10] could not be run, the average runtimes of our mhf algorithm were 1, 1.1, 1.3, and 22.2 s, respectively; the runtimes for our mwf algorithm were 39.1, 368.4, 449 and 1317.6 s, respectively.

8. Conclusions

We have shown that finding mwf paths and walks as well as mcf paths in interval-temporal graphs is NP-hard. For the mhf single-source all-destinations problem, a polynomial time algorithm was developed and, for the mwf single-source all-destinations problem, a pseudopolynomial time algorithm was developed. We also showed that the mhf and mwf problems for interval-temporal graphs can be solved using the linear combination algorithm of Bentert et al. [10] for those interval-temporal graphs that can be modeled as contact-sequence graphs. While the algorithm of [10] is polynomial in the size of the contact-sequence graph, the modeling of an interval-temporal graph by a contact-sequence graph, when possible, may increase the graph size by an exponential amount and, so, this approach does not result in a polynomial time algorithm for interval-temporal graphs. In fact, even though all of our datasets could be modeled as contact-sequence graphs, we were unable to use the algorithm of [10] on one of our Koblenz datasets and three of our synthetic datasets, as the code of [10] that transforms the contact-sequence graph into the required input format failed due to insufficient memory. Our mhf paths algorithm outperformed the linear combination algorithm of [10] on all datasets on which the algorithm of [10] could be run. A speedup of up to 207.5 was obtained on the Koblenz datasets and up to 679.5 on the synthetic datasets. For the mwf single-source all-destinations problem, which is NP-hard, the linear combination algorithm of [10] outperformed our algorithm on three of the Koblenz datasets and tied on a fourth. On all remaining datasets, our algorithm outperformed that of [10]. The datasets on which the algorithm of [10] did well had the property that few vertices were reachable from the source vertex. This enabled the algorithm of [10] to terminate early. In these cases, the speedup obtained by our algorithm ranged from 0.045 to 0.56.
On the remaining Koblenz datasets, our mwf algorithm obtained a speedup of up to 23.3. On both synthetic datasets on which it was possible to run the algorithm of [10], our algorithm was faster, delivering a speedup of up to 2.8. On the four datasets for which the transformation to the input format required by the algorithm of [10] failed due to lack of memory, our mhf algorithm ran in 1, 1.1, 1.3, and 22.2 s, respectively, and our mwf algorithm ran in 39.1, 368.4, 449 and 1317.6 s, respectively.

Author Contributions

Conceptualization, A.J. and S.S.; Data curation, A.J. and S.S.; Formal analysis, A.J. and S.S.; Investigation, A.J. and S.S.; Methodology, A.J. and S.S.; Project administration, S.S.; Resources, A.J. and S.S.; Software, A.J.; Supervision, S.S.; Validation, A.J. and S.S.; Visualization, A.J. and S.S.; Writing—original draft, A.J. and S.S.; Writing—review & editing, A.J. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Koblenz datasets used for benchmarking are available in the KONECT collection [18]. The social network graphs of youtube, flickr, and livejournal are available at http://socialnetworks.mpi-sws.org/data-imc2007.html (accessed on 16 August 2022); these were shared by the authors of [17].

Acknowledgments

We acknowledge Wu et al. [8] and Bentert et al. [10] for sharing their implementations with us to benchmark against.

Conflicts of Interest

The authors declare that they have no competing interests.

Abbreviations

The following abbreviations are used in this manuscript:
NAPP  No-wait acyclic path problem
V     Number of vertices in a graph
N     Used interchangeably with V
Es    Number of edges in the underlying static graph
mwf   Min-wait foremost
mhf   Min-hop foremost
mcf   Min-cost foremost
Mitg  Number of edges in the interval-temporal graph (same as Es)
Mcsg  Number of contact-sequence edges
λ     Travel duration on an edge at a given departure time
δ     Maximum number of departure intervals on an edge

References

  1. Scheideler, C. Models and Techniques for Communication in Dynamic Networks. In Proceedings of the 19th Annual Symposium on Theoretical Aspects of Computer Science (STACS 2002), Antibes-Juan les Pins, France, 14–16 March 2002; Volume 2285, pp. 27–49.
  2. Stojmenović, I. Location Updates for Efficient Routing in Ad Hoc Networks. In Handbook of Wireless Networks and Mobile Computing; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2002; Chapter 21; pp. 451–471.
  3. Holme, P.; Saramäki, J. Temporal networks. Phys. Rep. 2012, 519, 97–125.
  4. Michail, O. An Introduction to Temporal Graphs: An Algorithmic Perspective. arXiv 2015, arXiv:1503.00278.
  5. Santoro, N.; Quattrociocchi, W.; Flocchini, P.; Casteigts, A.; Amblard, F. Time-Varying Graphs and Social Network Analysis: Temporal Indicators and Metrics. arXiv 2011, arXiv:1102.0629.
  6. Kuhn, F.; Oshman, R. Dynamic Networks: Models and Algorithms. SIGACT News 2011, 42, 82–96.
  7. Bhadra, S.; Ferreira, A. Computing multicast trees in dynamic networks and the complexity of connected components in evolving graphs. J. Internet Serv. Appl. 2012, 3, 269–275.
  8. Wu, H.; Cheng, J.; Ke, Y.; Huang, S.; Huang, Y.; Wu, H. Efficient Algorithms for Temporal Path Computation. IEEE Trans. Knowl. Data Eng. 2016, 28, 2927–2942.
  9. Bui-Xuan, B.M.; Ferreira, A.; Jarry, A. Evolving graphs and least cost journeys in dynamic networks. In Proceedings of WiOpt’03: Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, Sophia Antipolis, France, 3–5 March 2003.
  10. Bentert, M.; Himmel, A.S.; Nichterlein, A.; Niedermeier, R. Efficient computation of optimal temporal walks under waiting-time constraints. Appl. Netw. Sci. 2020, 5, 73.
  11. Guo, F.; Zhang, D.; Dong, Y.; Guo, Z. Urban link travel speed dataset from a megacity road network. Sci. Data 2019, 6, 61.
  12. Gheibi, S.; Banerjee, T.; Ranka, S.; Sahni, S. An Effective Data Structure for Contact Sequence Temporal Graphs. In Proceedings of the 2021 IEEE Symposium on Computers and Communications (ISCC), Athens, Greece, 5–8 September 2021; pp. 1–8.
  13. Jain, A.; Sahni, S. Min Hop and Foremost Paths in Interval Temporal Graphs. In Proceedings of the 2021 IEEE Symposium on Computers and Communications (ISCC), Athens, Greece, 5–8 September 2021; pp. 1–7.
  14. Bhadra, S.; Ferreira, A. Complexity of Connected Components in Evolving Graphs and the Computation of Multicast Trees in Dynamic Networks. In Proceedings of Ad-Hoc, Mobile, and Wireless Networks, Montreal, QC, Canada, 8–10 October 2003; Pierre, S., Barbeau, M., Kranakis, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 259–270.
  15. Casteigts, A.; Himmel, A.; Molter, H.; Zschoche, P. The Computational Complexity of Finding Temporal Paths under Waiting Time Constraints. arXiv 2019, arXiv:1909.06437.
  16. Zschoche, P.; Fluschnik, T.; Molter, H.; Niedermeier, R. The Complexity of Finding Small Separators in Temporal Graphs. arXiv 2018, arXiv:1711.00963.
  17. Mislove, A.; Marcon, M.; Gummadi, K.P.; Druschel, P.; Bhattacharjee, B. Measurement and Analysis of Online Social Networks. In Proceedings of the 5th ACM/Usenix Internet Measurement Conference (IMC’07), San Diego, CA, USA, 24–26 October 2007.
  18. Kunegis, J. KONECT: The Koblenz Network Collection. In Proceedings of the 22nd International Conference on World Wide Web, WWW ’13 Companion, Rio de Janeiro, Brazil, 13–17 May 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 1343–1350.
Figure 1. Contact-sequence temporal graph.
Figure 2. Interval-temporal graph.
Figure 3. Example of mhf paths from source vertex a.
Figure 4. Example of mwf walks from source vertex s.
Figure 5. Underlying static graph for temporal graphs of Figure 1 and Figure 2.
Figure 6. Interval-temporal graph for NP-hard proofs.
Figure 7. Example mhf paths from vertex a.
Figure 8. mhf paths in interval-temporal graphs.
Figure 9. Example of departure in next intervals.
Figure 10. Example of a walk class.
Figure 11. Interval-temporal graph with some slow intervals.
Figure 12. Interval list in non-decreasing order of arrival.
Figure 13. mhf speedups on Koblenz datasets.
Figure 14. mhf speedups on synthetic datasets.
Figure 15. mwf speedups on Koblenz datasets.
Figure 16. mwf speedups on synthetic datasets.
Table 1. Koblenz collection graph statistics.

Dataset        |V|        |Es|        cs edges    Activity
epin           131.8 K    840.8 K     841.3 K     1
elec           7119       103.6 K     103.6 K     1
fb             63.7 K     817 K       817 K       1
flickr         2302.9 K   33,140 K    33,140 K    1
growth         1870.7 K   39,953 K    39,953 K    1
youtube        3223 K     9375 K      9375 K      1
digg           30.3 K     85.2 K      87.6 K      1.02
slash          51 K       130.3 K     140.7 K     1.07
conflict       118 K      2027.8 K    2917.7 K    1.43
arxiv          28 K       3148 K      4596 K      1.45
wiki-en-edit   42,640 K   255,709 K   572,591 K   2.23
enron          87,274     320.1 K     1148 K      3.58
delicious      4512 K     81,988 K    301,186 K   3.67
Table 2. Synthetic graphs statistics.

Graphs with μI = 4, μD = 5, μT = 3
Dataset       |V|        |Es|         cs edges      Edge Activity
youtube       1157.8 K   4945 K       105,039 K     21.2
flickr        1861 K     22,613.9 K   480,172 K     21.24
livejournal   5284 K     77,402.6 K   1,643,438 K   21.3

Graphs with μI = 4, μD = 8, μT = 3
youtube       1157.8 K   4945 K       159,103.7 K   32.1
flickr        1861 K     22,613.9 K   727,405.9 K   32.1
Table 3. Reading times and sizes.

Koblenz Collection (reading times in s, sizes in MB)
Dataset        [10] Read    Ours Read   [10] Graph Size   Ours Size
epin           0.46         0.34        31.1              19.3
elec           0.13         0.07        4.5               3.3
fb             0.58         0.34        32.9              21.5
flickr         19.3         14.2        1492.3            911
growth         27           19.9        2188.3            1485.4
youtube        6.1          4.2         456.1             300.8
digg           0.11         0.06        3.6               2.5
slash          0.14         0.07        7                 5
conflict       1.44         0.82        98.8              43.2
arxiv          2.5          1.4         193               104
wiki-en-edit   out-of-mem   227         out-of-mem        20,115
enron          0.74         0.36        55.9              29.3
delicious      209          78          16,249            7552

Synthetic Datasets with μI = 4, μD = 5, μT = 3
youtube        67           4           4400              272.2
flickr         out-of-mem   17.8        out-of-mem        1248
livejournal    out-of-mem   56.1        out-of-mem        4411

Synthetic Datasets with μI = 4, μD = 8, μT = 3
youtube        102          3           6775              272
flickr         out-of-mem   18          out-of-mem        1249
Table 4. Runtimes in seconds.

Koblenz Datasets
Dataset     mhf-[10]     mhf-Ours     mhf-[10]/Ours   mwf-[10]   mwf-Ours   mwf-[10]/Ours
epin        0.04         6.6 × 10⁻³   6.106           0.04       0.02       2
elec        1.3 × 10⁻²   3.6 × 10⁻⁴   36              0.01       0.003      3.33
fb          1.8 × 10⁻²   1.1 × 10⁻³   15.9            0.017      0.03       0.56
flickr      5.57         0.45         12.34           5.65       1.37       4.12
growth      9.56         1.67         5.72            9.95       4.2        2.36
youtube     0.32         1.2 × 10⁻²   25.8            0.33       0.18       1.83
digg        2.1 × 10⁻³   9.1 × 10⁻⁵   23.66           0.002      0.002      1
slash       0.01         1.3 × 10⁻³   11.17           0.01       0.005      2
conflict    9 × 10⁻⁵     3 × 10⁻⁵     3               0.0009     0.02       0.045
arxiv       5 × 10⁻²     9.3 × 10⁻³   6.25            0.06       0.15       0.4
enron       0.18         2.8 × 10⁻³   64.11           0.7        0.03       23.3
delicious   222          1.07         207.5           763        41         18.6

Synthetic datasets with μI = 4, μD = 5, μT = 3
youtube     120.8        0.17         679.5           186.8      71.3       2.6

Synthetic datasets with μI = 4, μD = 8, μT = 3
youtube     132.7        0.29         451.3           275        95.5       2.8
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Jain, A.; Sahni, S. Foremost Walks and Paths in Interval Temporal Graphs. Algorithms 2022, 15, 361. https://doi.org/10.3390/a15100361