Article

Floyd–Warshall Algorithm for Sparse Graphs

1 Faculty of Computer and Information Science, University of Ljubljana, 1000 Ljubljana, Slovenia
2 Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, 6000 Koper, Slovenia
3 Andrej Marušič Institute, University of Primorska, 6000 Koper, Slovenia
4 Institute of Mathematics, Physics and Mechanics, 1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(12), 766; https://doi.org/10.3390/a18120766
Submission received: 28 October 2025 / Revised: 25 November 2025 / Accepted: 28 November 2025 / Published: 4 December 2025
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

Abstract

The Floyd–Warshall algorithm, which uses a classic dynamic programming approach, provides a solution to the all-pairs shortest paths problem. However, for sparse graphs, iteratively applying Dijkstra's algorithm, or a similar algorithm, from each node often proves to be more efficient. We introduce a novel technique based on a structural decomposition of the input graph into strongly connected components, which allows us to exploit the disconnectedness of the graph by avoiding redundant relaxation attempts on nodes that are not reachable from the source component. Using an empirical evaluation, in which execution time is measured, we demonstrate that our approach outperforms existing alternatives on disconnected graphs.

1. Introduction

Real-world problems in fields such as bioinformatics, logistics, and telecommunications are often modeled using graphs and solved by searching for the shortest paths between pairs of nodes of the graph [1]. The shortest path problem has three main variations: finding the shortest path between two specific nodes, computing shortest paths from a single node to all other nodes (single-source shortest path, SSSP), and determining shortest paths between all node pairs (all-pairs shortest path, APSP). There are two fundamentally different approaches to solving the APSP problem. In the first one, we solve the SSSP problem for each node in the graph and combine the results, while in the second approach, we simultaneously construct shortest paths for all node pairs of the graph. In the first approach, usually a Dijkstra-like algorithm is used for graphs with non-negative edge weights. In this paper, we restrict ourselves to graphs with non-negative edge weights, leaving the treatment of negative weights for future work.
Probably the best-known representative of the second approach is the Floyd–Warshall algorithm, which employs the dynamic programming technique. In both approaches, however, the key operation is a relaxation, and the time complexity of the solution turns out to be linear in the number of relaxation attempts. Consequently, the first approach proves to be more appropriate for sparse graphs and the second for dense ones.
In this paper, we study the feasibility of the second approach on sparse graphs, particularly disconnected directed graphs. In our approach, we propose first identifying strongly connected components of the graph, on each of which we then apply the “APSP algorithm” independently. This is reasonable, as within each strongly connected component, each pair of nodes is mutually reachable. To handle shortest paths between pairs of nodes in different components, we rely on a simple observation: if there is no path from an arbitrarily chosen node in the source component to an arbitrarily chosen node in the destination component, then relaxations can be safely skipped for all node pairs where the first node belongs to the source component and the second to the destination component.
The objective of this paper is first to present a new approach to solve the APSP problem and then empirically evaluate and compare it with the existing algorithms. Empirical evaluation is performed on artificially generated graphs (Erdős–Rényi method [2,3] and the Barabási–Albert method [4]) and real-world graphs derived from practical datasets.
Section 2 reviews related work on all-pairs shortest path algorithms and is followed by basic definitions in Section 3. Our solution is presented in Section 4 and evaluated in Section 5. Section 6 summarizes the results and research insights.

2. Related Work

The Floyd–Warshall algorithm [5,6] is a classic APSP solution using dynamic programming, performing O(n^3) relaxation attempts on a graph with n nodes. However, [7] shows that many relaxations are unnecessary, and for complete directed graphs with uniformly distributed weights, their modified version runs in expected O(n^2 log^2 n) time. Similarly, SmartForce [8] achieves significant speedups by avoiding all but a small fraction of relaxation attempts. For graphs that are not complete or are sparse, the Rectangular Algorithm [9] improves performance by blocking the distance matrix and skipping inactive regions. The Improved Floyd–Warshall algorithm [10] further optimizes sparse graphs by dynamically maintaining reachability lists and prioritizing nodes with low in-/out-degree products, reducing iterations when the number of nodes in a graph exceeds the number of edges. All variants, however, retain the worst-case O(n^3) time bound.
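For reference, the relaxation structure shared by all of these Floyd–Warshall variants can be sketched as follows (a minimal C++ illustration written for this overview, not taken from any of the cited implementations; INF marks the absence of an edge):

#include <cstddef>
#include <limits>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
const double INF = std::numeric_limits<double>::infinity();

// Classic Floyd–Warshall. D is the n x n matrix of direct edge weights
// (0 on the diagonal, INF where there is no edge); on return, D[s][d]
// holds the shortest-path distance. Exactly n^3 relaxation attempts are made.
void floyd_warshall(Matrix& D) {
    const std::size_t n = D.size();
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t s = 0; s < n; ++s)
            for (std::size_t d = 0; d < n; ++d)
                if (D[s][k] + D[k][d] < D[s][d])   // one relaxation attempt
                    D[s][d] = D[s][k] + D[k][d];
}

Each execution of the innermost comparison is one relaxation attempt, which is exactly the quantity the variants discussed above try to reduce.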
Another approach to solving the APSP problem is to solve the SSSP problem for each node in the graph (cf. [11]). For graphs with m edges, all of which have non-negative weights, Dijkstra's algorithm [12] solves the SSSP problem in O(m + n log n) time when a priority queue with an amortized O(log n) delete-min operation and O(1) other operations is used (e.g., [13]). However, in practice, Dijkstra with pairing heaps [14], despite weaker theoretical bounds, performs better. Returning to the APSP problem, the SSSP-based approach gives an O(nm + n^2 log n) solution. Typically, on sparse graphs (e.g., m = o(n^2)), this approach is more efficient than Floyd–Warshall [15]; however, for m = Θ(n^2), it matches Floyd–Warshall's O(n^3). Finally, for complete graphs, the Hidden Paths algorithm [16] and the Uniform Paths algorithm [17,18] give an expected O(n^2 log n) time bound for weights drawn uniformly from [0, 1].
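The SSSP-based approach can likewise be sketched in a few lines. The following illustrative C++ uses a binary heap (std::priority_queue) rather than the Fibonacci or pairing heaps discussed above, and the adjacency-list layout is our own assumption:

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

const double INF = std::numeric_limits<double>::infinity();
using Adj = std::vector<std::vector<std::pair<int, double>>>;   // adj[u] = {(v, w(u,v)), ...}

// Single-source Dijkstra with a binary heap; returns distances from source.
std::vector<double> dijkstra(const Adj& adj, int source) {
    std::vector<double> dist(adj.size(), INF);
    using Item = std::pair<double, int>;                         // (tentative distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[source] = 0.0;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [du, u] = pq.top();
        pq.pop();
        if (du > dist[u]) continue;                              // stale queue entry
        for (auto [v, w] : adj[u])
            if (du + w < dist[v]) {                              // relaxation
                dist[v] = du + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

// APSP by solving the SSSP problem from every node.
std::vector<std::vector<double>> apsp_dijkstra(const Adj& adj) {
    std::vector<std::vector<double>> D;
    for (int s = 0; s < static_cast<int>(adj.size()); ++s)
        D.push_back(dijkstra(adj, s));
    return D;
}

With a binary heap this gives O(n(m + n) log n) overall; the O(nm + n^2 log n) bound quoted above requires a heap with O(1) amortized decrease-key, such as a Fibonacci heap.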

3. Preliminaries

A directed graph G is an ordered pair (V, E), where V is a finite, non-empty set of nodes, and E ⊆ V × V is a set of directed edges. We assume that V = {v_1, v_2, …, v_n} for some integer n.
Each directed edge e = (u, v) ∈ E connects two nodes, called its end nodes, where u is referred to as the initial node, denoted by ini(e) = u, and v is referred to as the terminal node, denoted by ter(e) = v.
A path of length k in a directed graph G = (V, E) is a sequence of pairwise distinct nodes (v_1, v_2, …, v_k) such that for each i with 1 ≤ i < k, the directed edge (v_i, v_{i+1}) ∈ E.
A directed graph G′ = (V′, E′) is a sub-graph of a directed graph G = (V, E) if V′ ⊆ V and E′ ⊆ E.
A node v is reachable from another node u in a directed graph G = (V, E) if there exists a path from u to v. That is, there is a sequence of nodes (v_1, v_2, …, v_k) such that v_1 = u, v_k = v, and for each i with 1 ≤ i < k, the directed edge (v_i, v_{i+1}) belongs to E.
A strongly connected component (SCC) of a directed graph G = (V, E) is a maximal sub-graph G′ = (V′, E′) (maximal with respect to the number of nodes) such that for every pair of nodes u, v ∈ V′, there exists a path from u to v and a path from v to u, meaning the nodes are mutually reachable.
Given an SCC C ⊆ V, the set of outgoing edges from C is defined as
Out(C) = { e ∈ E : ini(e) ∈ C and ter(e) ∉ C }.
A directed graph G = (V, E) is a directed acyclic graph (DAG) if it contains no directed cycles. That is, there does not exist a sequence of distinct nodes v_1, v_2, …, v_k such that (v_1, v_2), (v_2, v_3), …, (v_{k−1}, v_k), (v_k, v_1) ∈ E.
A topological ordering of a DAG G = ( V , E ) is a linear ordering of its nodes such that for every directed edge ( u , v ) E , node u appears before v in the ordering.
A weighted digraph is a digraph G = (V, E) together with a weight function w : E → ℝ that assigns to each directed edge e ∈ E a weight w(e). A weight function w is extended to a path P = (v_1, v_2, …, v_k) by w(P) = Σ_{i=1}^{k−1} w(v_i, v_{i+1}). A shortest path from s to d is a path in G whose weight is the infimum among all paths from s to d. The distance between two nodes s and d, denoted by D_{s,d}, is the weight of a shortest path from s to d in G.
For simplicity, we will refer to directed graphs simply as graphs throughout the rest of this paper.

4. Our Solution

We propose a shortest-path algorithm that exploits the structure of graphs that are not strongly connected. The key idea is to decompose the graph into SCCs and process them in topological order, computing distances within each component and propagating paths across components only via outgoing edges.

4.1. Formal Definition of Shortest Path via SCCs

Let G be a directed graph partitioned into t strongly connected components C_1, C_2, …, C_t, ordered topologically so that there are no edges from C_j to C_i for j > i. Further, we denote the union C_i ∪ C_{i+1} ∪ ⋯ ∪ C_t by C^{(i…t)}. Furthermore, for nodes s, d ∈ C^{(i…t)}, let D_{s,d}^{(i…t)} denote the length of a shortest path from s to d containing only nodes from C^{(i…t)}. We shorten the notation D_{s,d}^{(i…i)} to D_{s,d}^{(i)}.
Corollary 1.
The length D_{s,d} of the shortest path in G is D_{s,d} = D_{s,d}^{(1…t)}.
The following lemmata establish several structural properties of shortest paths with respect to the SCC decomposition, which will allow us to derive a recursive formulation for D_{s,d}^{(i…t)}.
Lemma 1.
Let C_i be an SCC and let s, d ∈ C_i. Then the shortest path from s to d containing only nodes from C^{(i…t)} lies entirely within C_i.
Proof. 
Assume, for contradiction, that the shortest path from s to d exits C_i, passing through a node v ∉ C_i. Since s can reach v, v can reach d, and d can reach s inside C_i, by transitivity the nodes s, v, and d are mutually reachable, so v is part of the same SCC as s and d. This contradicts the maximality of C_i. Therefore, the shortest path must remain inside C_i.    □
Lemma 2.
Let s ∈ C_i and let d ∈ C^{(i+1…t)}. Then the shortest path P_{s,d}^{(i…t)} from s to d contains only nodes from C^{(i…t)} and is of the form
P_{s,d}^{(i…t)} = P_{s,ini(e)}^{(i)} + e + P_{ter(e),d}^{(i+1…t)},
where P_{s,ini(e)}^{(i)} is a shortest path containing only nodes from C_i, the edge e is contained in Out(C_i), and P_{ter(e),d}^{(i+1…t)} is a shortest path containing only nodes from C^{(i+1…t)}.
Proof. 
First, since the SCCs are topologically ordered, for an edge e ∈ Out(C_i) we have that ini(e) ∈ C_i and ter(e) ∈ C_j for some j > i. The first consequence of this observation is that the shortest path P_{s,d}^{(i…t)} from s to d contains only nodes from C^{(i…t)}. The second consequence is that the only way to reach a later component C_j from C_i is to traverse an edge e ∈ Out(C_i), and hence P_{s,d}^{(i…t)} traverses an edge e ∈ Out(C_i). Let P_{s,ini(e)}^{(i)} denote the sub-path of P_{s,d}^{(i…t)} from s to ini(e), and let P_{ter(e),d}^{(i+1…t)} denote the sub-path of P_{s,d}^{(i…t)} from ter(e) to d. Since a sub-path of a shortest path is also a shortest path, both P_{s,ini(e)}^{(i)} and P_{ter(e),d}^{(i+1…t)} are shortest paths. Next, since P_{s,ini(e)}^{(i)} starts and ends in C_i, by Lemma 1 we have that P_{s,ini(e)}^{(i)} contains only nodes from C_i. Moreover, since P_{ter(e),d}^{(i+1…t)} starts outside the component C_i, by Lemma 4 we have that P_{ter(e),d}^{(i+1…t)} lies entirely within C^{(i+1…t)}. This completes the proof.    □
Remark 1.
Lemma 2 defines the structure of a shortest path P_{s,d}^{(i…t)}, which permits us to perform relaxations only over Out(C_i).
Lemma 3.
Let d ∈ C_i and let s ∈ C^{(i+1…t)}. Then D_{s,d}^{(i…t)} = ∞.
Proof. 
Since s ∈ C_j for some j > i, and the SCCs are topologically ordered, no node in C_i is reachable from s. Hence, there is no path from s to d, and so D_{s,d}^{(i…t)} = ∞.    □
Lemma 4.
Let s and d be nodes in C^{(i+1…t)}. Then the shortest path from s to d containing only nodes from C^{(i…t)} lies entirely within C^{(i+1…t)}.
Proof. 
Since s ∈ C_j for some j > i, and the SCCs are topologically ordered, no node in C_i is reachable from s. Hence, there is no path from s that enters the component C_i. Consequently, the shortest path from s to d lies entirely within C^{(i+1…t)}.    □
We now summarize these observations into the following theorem.
Theorem 1.
Let s and d be nodes in C^{(i…t)}. Then, using the above assumptions and notation, the length of a shortest path P_{s,d}^{(i…t)} can be computed by the recurrence

D_{s,d}^{(i…t)} =
    D_{s,d}^{(i)}                                                              if s ∈ C_i and d ∈ C_i,
    min_{e ∈ Out(C_i)} ( D_{s,ini(e)}^{(i)} + w(e) + D_{ter(e),d}^{(i+1…t)} )  if s ∈ C_i and d ∉ C_i,
    ∞                                                                          if s ∉ C_i and d ∈ C_i,
    D_{s,d}^{(i+1…t)}                                                          otherwise.        (1)

4.2. Algorithm SCC

The algorithm SCC, defined by Equation (1), is a classical dynamic programming algorithm, which is also reflected in its structure. A naïve implementation, given in Algorithm 1, starts with a call to the function SCC_Out on the input graph G, which returns a topologically sorted list of SCCs together with their corresponding sets of outgoing edges (line 1). Next, we compute the base cases defined in the first line of Equation (1) (lines 2–4). The remaining distances are computed using the second and third lines of Equation (1) (lines 6–16). The last line of Equation (1) is reflected in line 6 of Algorithm 1. Note that the outer iteration loop goes over all destination nodes and the inner over all source nodes, which is counter-intuitive but will prove useful later.
Algorithm 1 Naïve APSP algorithm on input graph G using SCC decomposition
1:  [(C_1, Out(C_1)), …, (C_t, Out(C_t))] ← SCC_Out(G)
2:  for i = 1 to t do
3:      D_{s,d}^{(i)} ← APSP(C_i)  {Base case for each component}
4:  end for
5:  {Process in reverse topological order}
6:  for i = t − 1 down to 1 do
7:      for each node d ∈ C^{(i+1…t)} do
8:          for each node s ∈ C_i do
9:              D_{s,d}^{(i…t)} ← ∞  {Initialize}
10:             for each edge e ∈ Out(C_i) do
11:                 D_{s,d}^{(i…t)} ← min( D_{s,d}^{(i…t)}, D_{s,ini(e)}^{(i)} + w(e) + D_{ter(e),d}^{(i+1…t)} )  {Relaxation}
12:             end for
13:             D_{d,s}^{(i…t)} ← ∞  {Third line of (1)}
14:         end for
15:     end for
16: end for
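To make the pseudocode concrete, the cross-component phase (lines 6–16 of Algorithm 1) can be sketched in C++ as follows; the containers comps and out_edges and the dense matrix D are illustrative assumptions of ours, with the intra-component distances assumed to be already filled in by the base case of lines 2–4:

#include <algorithm>
#include <limits>
#include <vector>

const double INF = std::numeric_limits<double>::infinity();
struct Edge { int u, v; double w; };                  // directed edge u -> v with weight w

// Cross-component phase of Algorithm 1 (lines 6-16), 0-indexed.
// comps[i]     : nodes of the (i+1)-th component, components in topological order
// out_edges[i] : the set Out(C_{i+1})
// D            : n x n matrix, INF everywhere except the intra-component
//                distances already computed by the base case (lines 2-4)
void naive_cross_component(const std::vector<std::vector<int>>& comps,
                           const std::vector<std::vector<Edge>>& out_edges,
                           std::vector<std::vector<double>>& D) {
    const int t = static_cast<int>(comps.size());
    for (int i = t - 2; i >= 0; --i)                  // reverse topological order
        for (int j = i + 1; j < t; ++j)               // destinations in C^{(i+1...t)}
            for (int d : comps[j])
                for (int s : comps[i]) {
                    D[s][d] = INF;                    // line 9: initialize
                    for (const Edge& e : out_edges[i])
                        D[s][d] = std::min(D[s][d],   // line 11: relaxation
                                           D[s][e.u] + e.w + D[e.v][d]);
                    D[d][s] = INF;                    // line 13: third line of (1)
                }
}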
The time complexity of Algorithm 1 depends on the number of relaxations performed in line 11. Consequently, we want to avoid unnecessary relaxation attempts, which occur, in particular, when the destination node d is not reachable from the source node s.
Such a situation occurs in Figure 1, where destination nodes in components C_2 and C_3 are not reachable from source nodes in component C_1. Furthermore, destination nodes in component C_4 are reachable from source nodes in C_3 only through one edge in Out(C_3). We observe that if one node of a component C_j is not reachable via an edge e ∈ Out(C_i), that is, not reachable from ter(e), then no node in C_j is reachable via e. This brings us to the final version of our algorithm, presented in Algorithm 2. To iterate only over those Out(C_i)-edges via which the nodes in C_j are reachable, the set Out_j(C_i) is built in lines 8–14. The relaxation in line 19 remains the same as in Algorithm 1.
We conclude the section with an analysis of the space complexity. First, if s ∉ C^{(i…t)} or d ∉ C^{(i…t)}, we extend the definition of D_{s,d}^{(i…t)} by setting D_{s,d}^{(i…t)} = ∞. Second, if both s, d ∈ C^{(i…t)}, then by Lemma 4 and Corollary 1 we have D_{s,d}^{(i…t)} = D_{s,d}.
Algorithm 2 APSP algorithm SCC on input graph G avoiding unnecessary relaxations
1:  [(C_1, Out(C_1)), …, (C_t, Out(C_t))] ← SCC_Out(G)
2:  for i = 1 to t do
3:      D_{s,d}^{(i)} ← APSP(C_i)  {Base case for each component}
4:  end for
5:  {Process in reverse topological order}
6:  for i = t − 1 down to 1 do
7:      for j = i + 1 to t do
8:          Out_j(C_i) ← {}  {Collect only edges via which C_j is reachable}
9:          d ← an arbitrary node in C_j
10:         for each edge e ∈ Out(C_i) do
11:             if D_{ter(e),d}^{(i+1…t)} ≠ ∞ then
12:                 Out_j(C_i) ← Out_j(C_i) ∪ {e}  {All nodes in C_j are reachable from ter(e)}
13:             end if
14:         end for  {Out_j(C_i) consists of the Out(C_i)-edges via which nodes in C_j are reachable}
15:         for each node d ∈ C_j do
16:             for each node s ∈ C_i do
17:                 D_{s,d}^{(i…t)} ← ∞  {Initialize}
18:                 for each edge e ∈ Out_j(C_i) do
19:                     D_{s,d}^{(i…t)} ← min( D_{s,d}^{(i…t)}, D_{s,ini(e)}^{(i)} + w(e) + D_{ter(e),d}^{(i+1…t)} )  {Relaxation}
20:                 end for
21:                 D_{d,s}^{(i…t)} ← ∞  {Third line of (1)}
22:             end for
23:         end for
24:     end for
25: end for
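The only new ingredient of Algorithm 2 is the filter of lines 7–14, which tests a single representative of C_j against ter(e) for every outgoing edge. A sketch under the same illustrative data layout as the previous one:

#include <limits>
#include <vector>

struct Edge { int u, v; double w; };                  // as in the previous sketch
const double INF = std::numeric_limits<double>::infinity();

// Lines 7-14 of Algorithm 2: keep only the edges of Out(C_i) through which the
// nodes of C_j can be reached. Since all nodes of an SCC are mutually reachable,
// testing a single arbitrary representative d of C_j suffices.
std::vector<Edge> reachable_out_edges(const std::vector<Edge>& out_ci,
                                      const std::vector<int>& comp_j,
                                      const std::vector<std::vector<double>>& D) {
    std::vector<Edge> out_j;
    const int d = comp_j.front();                     // an arbitrary node of C_j
    for (const Edge& e : out_ci)
        if (D[e.v][d] != INF)                         // D_{ter(e),d}^{(i+1...t)} is finite,
            out_j.push_back(e);                       // so every node of C_j is reachable via e
    return out_j;
}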
Consequently, we can drop the exponent notation, yielding only D_{s,d}, which in turn means that our algorithm needs to store only a single n × n matrix of shortest path values. For this purpose, we can use a 2D array D[s,d].
Notably, in the case of sparse graphs, the majority of entries in the distance matrix D_{s,d} correspond to pairs where a node d is unreachable from a node s. To avoid storing these redundant entries, we, inspired by [19], adopt a more space-efficient representation of the matrix D_{s,d}: instead of a 2D array D[,] we use a hashmap implementation of a dictionary. The two operations on the matrix D_{s,d}, assigning a value and reading a value, are implemented in a straightforward way. The assignment is implemented as a combination of insertion and update, while reading uses the find operation, which, in case of failure, returns ∞. Consequently, if the time complexity of our solution with a 2D array was O(f(n)) in the worst case, it stays the same but in the expected case. Moreover, we observe that since the value D_{s,d} is never increased, it is assigned the value ∞ only once. However, this assignment is already done implicitly at the beginning, since D_{s,d} = ∞ is simply not inserted into the dictionary and hence takes no action and no time.
However, the space complexity changes from O(n^2) to Θ(m*), where m* ≤ n(n − 1) is the number of pairs of nodes s and d such that d is reachable from s. This is optimal.
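One possible realization of this dictionary-based distance matrix is sketched below (illustrative C++; packing the node pair into a 64-bit key and the class name are our own choices, not details taken from the paper's implementation):

#include <cstdint>
#include <limits>
#include <unordered_map>

// Sparse distance matrix: only finite entries D[s][d] are stored.
// Reading a missing entry returns infinity, so unreachable pairs cost no space.
class SparseDistances {
    std::unordered_map<std::uint64_t, double> table_;
    static std::uint64_t key(std::uint32_t s, std::uint32_t d) {
        return (static_cast<std::uint64_t>(s) << 32) | d;   // pack the pair (s, d)
    }
public:
    double get(std::uint32_t s, std::uint32_t d) const {
        auto it = table_.find(key(s, d));
        return it == table_.end() ? std::numeric_limits<double>::infinity()
                                  : it->second;
    }
    void set(std::uint32_t s, std::uint32_t d, double value) {
        table_[key(s, d)] = value;                           // insert or update
    }
};

With this representation, reading an entry for an unreachable pair returns ∞ without that entry ever being stored, which is exactly the behaviour exploited above.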

5. Evaluation

We empirically evaluated the SCC algorithm against several other solutions that the literature recognizes to be among the most efficient ones. This section first introduces the graphs on which the evaluation was performed. The description of the test cases is followed by a brief description of the implementation, and the section concludes with the results and a brief discussion.

5.1. Test Cases

The tests were performed on generated graphs and on graphs taken from real cases. Furthermore, we used two different kinds of generated graphs. The first ones were generated using the Erdős–Rényi random graph model [3]. The generation program takes two parameters: n, the number of nodes, and 0 ≤ ϵ ≤ 2, a measure of graph density. The generated graph has nodes V = {v_1, v_2, …, v_n} and m = n^ϵ edges. In the generation process, a list of all possible edges [(v_i, v_j) : 1 ≤ i, j ≤ n, i ≠ j] is formed and randomly permuted. The first m edges are then taken and assigned weights drawn uniformly from the interval [0, 1], and they become the edges of the generated graph. In our study, we focused on ϵ ∈ [0.5, 1.1], since for larger ϵ the number of SCCs quickly decreases to 1.
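A sketch of the described generator (illustrative C++; the seeding and the rounding of n^ϵ to an integer are our assumptions):

#include <algorithm>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Edge { int u, v; double w; };

// Erdős–Rényi-style generator as described above: list all ordered pairs,
// permute them at random, keep the first m = n^epsilon and assign weights in [0, 1].
std::vector<Edge> generate_er(int n, double epsilon, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> weight(0.0, 1.0);
    std::vector<std::pair<int, int>> pairs;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j) pairs.push_back({i, j});
    std::shuffle(pairs.begin(), pairs.end(), rng);
    const std::size_t m = static_cast<std::size_t>(std::pow(n, epsilon));   // assumed rounding
    std::vector<Edge> edges;
    for (std::size_t k = 0; k < m && k < pairs.size(); ++k)
        edges.push_back({pairs[k].first, pairs[k].second, weight(rng)});
    return edges;
}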
For the second kind of generated graphs, we used the Barabási–Albert approach with preferential attachment [4]. This approach was designed to simulate real-world networks with heavy-tailed degree distributions and a hub-like structure. The generation procedure starts with a small initial complete undirected graph of n_0 nodes. New nodes are added one by one, and each added node is connected to exactly n_0 existing nodes. The probability of an existing node being connected to the added node is proportional to its current degree. This results in a scale-free undirected graph with n nodes and n_0(n_0 − 1)/2 + n_0(n − n_0) edges. In the second step, we produce a directed graph from the generated undirected graph by replacing each undirected edge {u, v} randomly with either (u, v) or (v, u). In the final step, each edge is assigned a weight drawn uniformly from the interval [0, 1].
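The two-step construction can be sketched as follows (illustrative C++; sampling proportionally to degree via a list of repeated edge endpoints is a standard implementation trick and an assumption on our part, as is the seeding):

#include <algorithm>
#include <random>
#include <utility>
#include <vector>

struct Edge { int u, v; double w; };

// Barabási–Albert-style generator as described above: start from a complete
// graph on n0 nodes, attach each new node to n0 existing nodes chosen with
// probability proportional to their degree, then orient and weight each edge.
std::vector<Edge> generate_ba(int n, int n0, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> weight(0.0, 1.0);
    std::vector<std::pair<int, int>> undirected;
    std::vector<int> endpoints;                       // node repeated once per incident edge
    for (int i = 0; i < n0; ++i)                      // initial complete graph on n0 nodes
        for (int j = i + 1; j < n0; ++j) {
            undirected.push_back({i, j});
            endpoints.push_back(i);
            endpoints.push_back(j);
        }
    for (int u = n0; u < n; ++u) {                    // preferential attachment
        std::vector<int> targets;
        while (static_cast<int>(targets.size()) < n0) {
            std::uniform_int_distribution<std::size_t> pick(0, endpoints.size() - 1);
            int v = endpoints[pick(rng)];             // chosen with probability ~ degree of v
            if (std::find(targets.begin(), targets.end(), v) == targets.end())
                targets.push_back(v);
        }
        for (int v : targets) {
            undirected.push_back({u, v});
            endpoints.push_back(u);
            endpoints.push_back(v);
        }
    }
    std::vector<Edge> edges;                          // orient each edge at random
    std::bernoulli_distribution flip(0.5);
    for (auto [a, b] : undirected)
        edges.push_back(flip(rng) ? Edge{a, b, weight(rng)} : Edge{b, a, weight(rng)});
    return edges;
}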
For the real-world graphs, we used the Stanford Large Network Dataset Collection (SNAP) [20]. In particular, we concentrated on the Internet peer-to-peer networks collection of graphs modeling the Gnutella file-sharing system [21]. The nodes in these graphs represent the hosts of the Gnutella network, while the edges represent connections between the hosts: a directed edge from node u to node v indicates that host u reports host v as a neighbor. The graphs are directed and sparse, with several thousand nodes and edges. We used seven out of the nine graphs in the collection as test cases (see Table 1). The two omitted graphs were too big to be processed.
The final step is similar to before and assigns each edge a weight drawn uniformly at random from the interval [0, 1]. These real-world graphs allow us to evaluate the behavior of our algorithm under realistic network structures that exhibit non-random, small-world, and scale-free properties.

5.2. Algorithm Implementation and Testing System

We implemented the function SCC_Out by slightly adapting Tarjan's algorithm for finding SCCs [22,23]. Briefly, the original algorithm applies a depth-first search to find cycles in a graph, as they are the bases of the components C_i. In turn, an edge that is part of a cycle becomes part of the appropriate C_i, while each of the remaining edges belongs to one of the sets Out(C_i). Consequently, the adapted algorithm returns, in linear time, a topologically sorted list of pairs [(C_i, Out(C_i)) : i = 1, …, t].
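The following recursive sketch illustrates how such an SCC_Out function can be obtained from Tarjan's algorithm (illustrative C++; the paper's adaptation classifies the edges during the search itself, while here, for brevity, the edges are bucketed in a separate linear pass afterwards, and the recursion would have to be made iterative for very deep graphs):

#include <algorithm>
#include <vector>

struct Edge { int u, v; double w; };

struct SCCOut {
    std::vector<std::vector<int>>  comp_nodes;   // C_1, ..., C_t in topological order
    std::vector<std::vector<Edge>> comp_edges;   // edges inside each component
    std::vector<std::vector<Edge>> out_edges;    // Out(C_1), ..., Out(C_t)
};

// Tarjan's algorithm (recursive sketch) followed by a linear edge-bucketing pass.
class SccOutBuilder {
    const std::vector<Edge>& edges_;
    std::vector<std::vector<int>> adj_;          // adjacency lists of edge indices
    std::vector<int> index_, low_, comp_, stack_;
    std::vector<bool> on_stack_;
    int counter_ = 0, ncomp_ = 0;

    void dfs(int u) {
        index_[u] = low_[u] = counter_++;
        stack_.push_back(u); on_stack_[u] = true;
        for (int ei : adj_[u]) {
            int v = edges_[ei].v;
            if (index_[v] < 0) { dfs(v); low_[u] = std::min(low_[u], low_[v]); }
            else if (on_stack_[v]) low_[u] = std::min(low_[u], index_[v]);
        }
        if (low_[u] == index_[u]) {              // u is the root of an SCC
            int v;
            do { v = stack_.back(); stack_.pop_back();
                 on_stack_[v] = false; comp_[v] = ncomp_; } while (v != u);
            ++ncomp_;                            // SCCs completed in reverse topological order
        }
    }
public:
    SccOutBuilder(int n, const std::vector<Edge>& edges)
        : edges_(edges), adj_(n), index_(n, -1), low_(n, 0), comp_(n, -1), on_stack_(n, false) {
        for (int i = 0; i < static_cast<int>(edges.size()); ++i) adj_[edges[i].u].push_back(i);
    }
    SCCOut run() {
        const int n = static_cast<int>(adj_.size());
        for (int u = 0; u < n; ++u) if (index_[u] < 0) dfs(u);
        SCCOut r;
        r.comp_nodes.resize(ncomp_); r.comp_edges.resize(ncomp_); r.out_edges.resize(ncomp_);
        auto topo = [&](int c) { return ncomp_ - 1 - c; };   // reverse to topological order
        for (int u = 0; u < n; ++u) r.comp_nodes[topo(comp_[u])].push_back(u);
        for (const Edge& e : edges_) {
            int cu = topo(comp_[e.u]), cv = topo(comp_[e.v]);
            (cu == cv ? r.comp_edges[cu] : r.out_edges[cu]).push_back(e);
        }
        return r;
    }
};

Tarjan's algorithm emits the components in reverse topological order, so reversing the component indices yields the ordering C_1, …, C_t consumed by line 1 of Algorithms 1 and 2; given the edge list of G and the number of nodes n, the call SccOutBuilder(n, edges).run() produces that list.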
To compute APSP distances within each component C i we decided to use the Tree algorithm presented in [7].
The evaluated algorithms and the graph generation programs were implemented in C/C++ and compiled using GCC version 13.1.0 with no compilation switches. Experimental evaluations were conducted on a computer with an Apple M2 processor and 8 GB of RAM, running macOS Sequoia 15.3.1. For each combination of parameters, we generated three independent graph instances and reported the average execution time, measured as elapsed real time (in seconds). Detailed execution times for all algorithms and structural properties of all graph instances can be found in Appendix A.

5.3. Results

This section presents the empirical results of the APSP algorithm evaluations across three distinct graph types: Erdős–Rényi, Barabási–Albert and Gnutella. We compared our SCC algorithm with Floyd–Warshall (FW) [5,6], Dijkstra [12], Hidden Paths (Hidden) [16], Uniform Paths (Uniform) [17,18], Tree [7] and Improved Floyd–Warshall algorithm (Toroslu) [10].

Erdős–Rényi Graphs

On Erdős–Rényi graphs, the SCC algorithm consistently outperformed all other algorithms by a significant margin across both tested graph sizes (n = 2048 and n = 4096). The Toroslu algorithm demonstrated competitive performance for lower edge densities (ϵ ∈ [0.5, 1.0]), but its scalability deteriorated noticeably when ϵ = 1.1, where execution times increased sharply. In contrast, SCC maintained stable and efficient runtimes even as both the graph size and density increased. The results on Erdős–Rényi graphs are presented in Figure 2.

Barabási–Albert Graphs

In the case of Barabási–Albert graphs, Tree and SCC outperformed all other algorithms. SCC demonstrated particularly strong performance in the sparser configurations with n_0 = 2 and n_0 = 3, where the graph naturally fragments into smaller strongly connected components that can be processed independently and efficiently. However, as n_0 increases to n_0 = 4 and n_0 = 5, the largest strongly connected component begins to dominate the graph, often encompassing most of its nodes. In these denser configurations, the benefits of component decomposition diminish, and the overhead introduced by SCC becomes more pronounced. Consequently, Tree outperforms SCC in such cases. The results are summarized in Figure 3, where the algorithms with excessive runtimes (Hidden and Uniform) are omitted for clarity.

Gnutella Graphs

On the Gnutella peer-to-peer network graphs, the SCC algorithm achieved the best performance on all but one graph, on which the Tree algorithm outperformed it. The results are presented in Figure 4. For clarity, algorithms with significantly higher runtimes are omitted from the figure (Hidden, Uniform, FW).

6. Conclusions

In this paper, we proposed a new APSP algorithm specifically designed for scenarios with disconnected graphs, where classical approaches are often inefficient. Our solution combines well-established shortest path techniques with a decomposition of the graph into SCCs and applies selective inter-component relaxations. Consequently, our approach significantly reduces redundant computations and leverages the graph structure to improve runtime. A key feature of our approach is that relaxation is only performed between node pairs for which reachability is established, that is, only when the intermediate distances are finite. This selective processing significantly reduces computational overhead, particularly in sparse topologies where many node pairs are disconnected. As a result, the algorithm eliminates redundant updates on unreachable paths, leading to substantial performance gains.
We note that the algorithm does not improve the theoretical worst-case bound O(n^3). In the ill-formed case where the decomposition produces two strongly connected components, C_1 and C_2, and every vertex in C_1 has an outgoing edge to every vertex in C_2, the number of outgoing edges becomes |Out(C_1)| = n_1(n − n_1), where n_1 denotes the number of vertices in C_1. Under this configuration, the number of relaxation operations can approach O(n^4). However, by applying a simple heuristic that detects when the number of outgoing edges |Out(C_i)| becomes too large, the overall complexity can be reduced to O(n^3). Thus, while the algorithm performs efficiently on sparse and disconnected graphs, its asymptotic upper bound in the worst case remains the same as that of the classical Floyd–Warshall algorithm.
The impact of this work lies in offering a practical and efficient alternative for APSP computation in domains where sparse, modular, or disconnected graphs naturally arise—such as social networks, dependency graphs, and biological networks. Future directions of research include formalizing the expected complexity in various graph models, validating the approach on real-world datasets.

Author Contributions

All authors (D.Z., R.P., and A.B.) contributed equally to the methodology, software, validation, formal analysis, data curation, and writing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by ARIS grants: D.Z. and A.B. by P2-0359, and R.P. by P1-0285, N1-0159, J1-2451, N1-0209, and J5-4596.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Detailed Results

The appendix contains complete per-instance results for all algorithms across the three kinds of graphs used in the evaluation: Table A1 for Erdős–Rényi graphs, Table A2 for Barabási–Albert graphs, and Table A3 for the real-world Gnutella graphs. Each table has two groups of columns: the first group, up to the vertical bar, lists the properties of the test-case graph, and it is followed by the runtimes of the compared algorithms (for their description, see Section 5.3) on that graph.
The runtimes are in seconds, while the graph properties vary depending on the kind of graph. In all cases, however, n and m give the number of nodes and edges in the graph, respectively, while sccV and sccE give the number of nodes and edges in the largest strongly connected component of the graph. Besides the described properties, each graph has one additional parameter, as described in Section 5.1: Erdős–Rényi graphs have ϵ, Barabási–Albert graphs have n_0, and the real-world Gnutella graphs have an id.
Table A1. Performance of compared algorithms on Erdős–Rényi graphs.
Graph properties | Runtimes in seconds
n  m  ϵ  sccV  sccE | SCC  Tree  FW  Dijkstra  Toroslu  Uniform  Hidden
2048  45  0.5  1  0 | 0.015  0.056  0.027  0.049  0.018  0.070  0.042
2048  45  0.5  1  0 | 0.015  0.049  0.022  0.049  0.019  0.066  0.040
2048  45  0.5  1  0 | 0.015  0.049  0.021  0.048  0.019  0.064  0.040
2048  97  0.6  1  0 | 0.015  0.049  0.022  0.049  0.020  0.065  0.040
2048  97  0.6  1  0 | 0.015  0.048  0.022  0.049  0.019  0.065  0.039
2048  97  0.6  1  0 | 0.015  0.048  0.022  0.048  0.019  0.065  0.039
2048  208  0.7  1  0 | 0.015  0.048  0.023  0.049  0.020  0.069  0.039
2048  208  0.7  1  0 | 0.015  0.047  0.021  0.048  0.020  0.066  0.040
2048  208  0.7  1  0 | 0.015  0.049  0.022  0.049  0.020  0.066  0.039
2048  446  0.8  1  0 | 0.015  0.048  0.023  0.050  0.021  0.068  0.040
2048  446  0.8  1  0 | 0.015  0.050  0.030  0.052  0.021  0.069  0.041
2048  446  0.8  1  0 | 0.015  0.048  0.023  0.050  0.021  0.067  0.040
2048  955  0.9  1  0 | 0.016  0.048  0.027  0.051  0.022  0.072  0.044
2048  955  0.9  1  0 | 0.016  0.049  0.026  0.049  0.021  0.074  0.046
2048  955  0.9  2  2 | 0.017  0.051  0.027  0.051  0.022  0.076  0.046
2048  2048  1.0  11  11 | 0.018  0.050  0.059  0.087  0.023  0.268  0.194
2048  2048  1.0  33  34 | 0.020  0.049  0.050  0.091  0.023  0.294  0.214
2048  2048  1.0  27  28 | 0.019  0.050  0.058  0.094  0.024  0.329  0.237
2048  4390  1.1  1433  3077 | 0.148  0.201  3.522  6.733  1.212  20.785  16.592
2048  4390  1.1  1458  3169 | 0.139  0.187  3.542  6.801  1.187  20.346  16.895
2048  4390  1.1  1443  3067 | 0.135  0.177  3.457  6.662  1.204  19.937  16.421
4096  64  0.5  1  0 | 0.062  0.312  0.157  0.193  0.074  0.290  0.157
4096  64  0.5  1  0 | 0.059  0.312  0.156  0.193  0.074  0.279  0.160
4096  64  0.5  1  0 | 0.057  0.299  0.157  0.193  0.073  0.271  0.161
4096  147  0.6  1  0 | 0.058  0.301  0.158  0.192  0.075  0.269  0.154
4096  147  0.6  1  0 | 0.058  0.296  0.156  0.193  0.073  0.264  0.154
4096  147  0.6  1  0 | 0.058  0.285  0.155  0.193  0.073  0.271  0.154
4096  338  0.7  1  0 | 0.058  0.281  0.156  0.193  0.075  0.267  0.156
4096  338  0.7  1  0 | 0.058  0.281  0.152  0.192  0.074  0.272  0.155
4096  338  0.7  1  0 | 0.058  0.273  0.155  0.192  0.075  0.271  0.156
4096  776  0.8  1  0 | 0.059  0.267  0.160  0.193  0.076  0.271  0.157
4096  776  0.8  1  0 | 0.059  0.280  0.160  0.194  0.076  0.269  0.156
4096  776  0.8  1  0 | 0.058  0.258  0.157  0.192  0.076  0.272  0.155
4096  1783  0.9  1  0 | 0.060  0.258  0.167  0.193  0.081  0.275  0.163
4096  1783  0.9  1  0 | 0.061  0.256  0.163  0.193  0.081  0.281  0.163
4096  1783  0.9  1  0 | 0.061  0.257  0.171  0.196  0.081  0.281  0.161
4096  4096  1.0  39  42 | 0.066  0.261  0.274  0.245  0.091  0.588  0.396
4096  4096  1.0  5  5 | 0.067  0.289  0.252  0.232  0.090  0.529  0.345
4096  4096  1.0  4  4 | 0.067  0.282  0.273  0.245  0.090  0.596  0.401
4096  9410  1.1  3026  6936 | 0.879  1.562  30.272  32.649  13.761  100.574  85.391
4096  9410  1.1  3083  7094 | 0.981  1.562  32.037  33.126  13.389  100.732  86.582
4096  9410  1.1  3024  6922 | 0.942  1.550  31.013  32.423  13.046  98.598  84.568
Table A2. Performance of compared algorithms on Barabási–Albert graphs.
Graph properties | Runtimes in seconds
n  m  n_0  sccV  sccE | SCC  Tree  FW  Dijkstra  Toroslu  Uniform  Hidden
2048  4093  2  1144  2253 | 0.082  0.158  7.192  5.905  0.454  16.950  13.669
2048  4093  2  1121  2194 | 0.075  0.143  6.934  5.788  0.413  16.359  13.453
2048  4093  2  1120  2192 | 0.074  0.146  7.065  5.761  0.428  16.692  13.486
2048  6138  3  1707  5046 | 0.203  0.217  11.325  8.948  1.664  24.725  20.806
2048  6138  3  1700  5014 | 0.205  0.213  11.133  8.956  1.641  24.386  20.691
2048  6138  3  1725  5089 | 0.200  0.214  11.325  9.062  1.655  25.460  22.626
2048  8182  4  1903  7550 | 0.368  0.274  13.437  10.644  3.040  27.894  24.112
2048  8182  4  1929  7657 | 0.356  0.270  13.344  10.568  3.198  28.173  24.035
2048  8182  4  1922  7634 | 0.355  0.266  13.403  10.631  3.076  28.050  23.913
2048  10,225  5  1988  9894 | 0.528  0.307  14.483  11.210  4.564  29.119  25.014
2048  10,225  5  1991  9922 | 0.514  0.338  14.561  11.273  4.446  29.179  24.991
2048  10,225  5  1990  9915 | 0.511  0.315  14.392  11.164  4.781  29.089  24.937
4096  8189  2  2198  4296 | 1.207  0.951  55.184  25.283  3.081  74.106  62.453
4096  8189  2  2365  4649 | 0.417  1.026  57.543  26.791  3.330  78.775  66.696
4096  8189  2  2283  4476 | 0.414  0.947  56.270  26.018  3.306  76.573  64.510
4096  12,282  3  3418  10,103 | 1.263  1.609  89.241  39.914  12.991  113.444  98.560
4096  12,282  3  3421  10,091 | 1.262  1.539  88.777  39.959  13.825  113.321  98.209
4096  12,282  3  3427  10,106 | 1.219  1.491  88.712  39.779  13.151  112.487  97.259
4096  16,374  4  3836  15,253 | 2.233  2.099  107.303  46.302  25.277  127.680  112.433
4096  16,374  4  3846  15,268 | 2.345  2.049  106.901  46.564  26.668  128.111  112.220
4096  16,374  4  3809  15,130 | 2.282  2.135  106.328  46.060  25.244  127.084  112.210
4096  20,465  5  3967  19,758 | 3.309  2.501  115.260  49.536  38.982  133.113  118.907
4096  20,465  5  4009  19,992 | 3.684  2.557  117.030  49.960  38.333  134.054  118.007
4096  20,465  5  3981  19,839 | 3.588  2.464  115.550  49.561  38.969  133.583  117.695
Table A3. Performance of compared algorithms on Gnutella graphs.
Graph properties | Runtimes in seconds
n  m  id  sccV  sccE | SCC  Tree  FW  Dijkstra  Toroslu  Uniform  Hidden
8717  31,525  4  3226  13,589 | 3.289  4.925  349.074  84.532  47.398  242.711  209.696
10,876  39,991  5  4317  18,742 | 9.852  8.474  771.895  144.166  111.391  836.076  366.017
8846  31,839  6  3234  13,453 | 3.535  5.159  345.323  85.201  43.457  245.265  210.372
6301  20,777  8  2068  9313 | 0.956  1.694  112.333  36.539  9.781  101.662  86.746
8114  26,013  9  2624  10,776 | 1.842  3.292  241.695  61.178  22.908  173.528  148.001
22,687  54,705  24  5153  17,695 | 12.123  26.063  3397.836  391.162  319.434  8082.304  1105.421
26,518  65,369  25  6352  22,928 | 18.556  38.241  5773.323  573.398  601.366  10,692.474  1734.372

References

  1. Ahuja, R.K.; Magnanti, T.L.; Orlin, J.B. Network Flows: Theory, Algorithms and Applications; Prentice-Hall, Inc.: Wilmington, DE, USA, 1993. [Google Scholar]
  2. Durrett, R. Random Graph Dynamics. In Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 2006; Volume 5, pp. 27–69. [Google Scholar]
  3. Erdős, P.; Rényi, A. On random graphs I. Publ. Math. 1959, 6, 290–297. [Google Scholar] [CrossRef]
  4. Barabási, A.; Albert, R. Emergence of Scaling in Random Networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef] [PubMed]
  5. Floyd, R.W. Algorithm 97: Shortest path. Commun. ACM 1962, 5, 345. [Google Scholar] [CrossRef]
  6. Warshall, S. A theorem on boolean matrices. J. ACM 1962, 9, 11–12. [Google Scholar] [CrossRef]
  7. Brodnik, A.; Grgurovič, M.; Požar, R. Modifications of the Floyd–Warshall algorithm with nearly quadratic expected-time. Ars Math. Contemp. 2021, 22, 1–22. [Google Scholar] [CrossRef]
  8. Lancia, G.; Dalpasso, M. Speeding Up Floyd–Warshall’s Algorithm to Compute All-Pairs Shortest Paths and the Transitive Closure of a Graph. Algorithms 2025, 18, 560. [Google Scholar] [CrossRef]
  9. Aini, A.; Salehipour, A. Speeding up the Floyd–Warshall algorithm for the cycled shortest path problem. Appl. Math. Lett. 2012, 25, 1–5. [Google Scholar] [CrossRef]
  10. Toroslu, I.H. The Floyd–Warshall all-pairs shortest paths algorithm for disconnected and very sparse graphs. Software: Pract. Exp. 2023, 53, 1287–1303. [Google Scholar] [CrossRef]
  11. Sapundzhi, F.; Danev, K.; Ivanova, A.; Popstoilov, M.; Georgiev, S. A Performance Comparison of Shortest Path Algorithms in Directed Graphs. Eng. Proc. 2025, 100, 31. [Google Scholar]
  12. Dijkstra, E.W. A note on two problems in connexion with graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef]
  13. Fredman, M.L.; Tarjan, R. Fibonacci heaps and their uses in improved network optimization algorithms. J. ACM 1987, 34, 596–615. [Google Scholar] [CrossRef]
  14. Fredman, M.L.; Sedgewick, R.; Sleator, D.D.; Tarjan, R.E. The pairing heap: A new form of self-adjusting heap. Algorithmica 1986, 1, 111–129. [Google Scholar] [CrossRef]
  15. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  16. Karger, D.; Koller, D.; Phillips, S.J. Finding the hidden path: Time bounds for all-pairs shortest paths. SIAM J. Comput. 1993, 22, 1199–1217. [Google Scholar] [CrossRef]
  17. Demetrescu, C.; Italiano, G.F. A new approach to dynamic all pairs shortest paths. J. ACM 2004, 51, 968–992. [Google Scholar] [CrossRef]
  18. Demetrescu, C.; Italiano, G.F. Experimental analysis of dynamic all pairs shortest path algorithms. ACM Trans. Algorithms 2006, 2, 578–601. [Google Scholar] [CrossRef]
  19. Willard, D.E. Log-Logarithmic Worst-Case Range Queries are Possible in Space 𝒪(n). Inf. Process. Lett. 1983, 17, 81–84. [Google Scholar] [CrossRef]
  20. Leskovec, J.; Krevl, A. SNAP Datasets: Stanford Large Network Dataset Collection. 2014. Available online: https://snap.stanford.edu/data/#p2p (accessed on 13 May 2025).
  21. Ripeanu, M.; Foster, I.; Iamnitchi, A. Mapping the Gnutella network: Properties of large-scale peer-to-peer systems and implications for system design. IEEE Internet Comput. 2002, 6, 50–57. [Google Scholar]
  22. Tarjan, R.E. Depth-first search and linear graph algorithms. SIAM J. Comput. 1972, 1, 146–160. [Google Scholar] [CrossRef]
  23. Tarjan, R.E.; Zwick, U. Finding strong components using depth-first search. Eur. J. Comb. 2024, 119, 103815. [Google Scholar] [CrossRef]
Figure 1. SCCs of a directed graph in topological order with outgoing edges.
Figure 2. Results on Erdős–Rényi graphs.
Figure 3. Results on Barabási–Albert graphs.
Figure 4. Results on Gnutella graphs.
Table 1. Graphs chosen from SNAP Internet peer-to-peer networks for test cases and their sizes.
id  n  m | id  n  m
4  10,876  39,994 | 9  8114  26,013
5  8846  31,839 | 24  26,518  65,369
6  8717  31,525 | 25  22,687  54,705
8  6301  20,777

Cite as: Zugan, D.; Požar, R.; Brodnik, A. Floyd–Warshall Algorithm for Sparse Graphs. Algorithms 2025, 18, 766. https://doi.org/10.3390/a18120766
