1. Introduction
Multi-hop networks operate by transmitting data from a source to a destination through a series of intermediate nodes. This communication paradigm extends the effective coverage area and improves scalability and resilience, making it suitable for wireless sensor networks, vehicular ad hoc systems, and emergency communication scenarios. Each node acts both as a data source and as a relay, forwarding packets toward their destination. Such architectures reduce the need for high transmission power and can significantly increase the operational lifetime of the network. However, they also introduce challenges. Multi-hop communication can increase latency and create complex routing requirements. The distributed nature of these networks makes them more vulnerable to cascading failures, and maintaining stable connectivity under dynamic conditions becomes more difficult. For these reasons, understanding the structural principles of multi-hop networks and identifying critical links is essential for enhancing their overall reliability and performance.
Preserving strong connections between all nodes in multi-hop networks can enhance the resilience of these networks against potential failures or congestion. Identifying the weak links in the network can assist in detecting vulnerable areas and addressing them to improve overall network robustness. A multi-hop network can be modeled as a weighted graph $G = (V, E, w)$, where $V$ represents the set of nodes, $E$ denotes the set of edges between the nodes, and $w: E \to \mathbb{R}_{\geq 0}$ is a non-negative weight function that assigns a weight to each edge. The weight of each edge may represent factors such as bandwidth, reliability index, or signal strength in wireless networks. Edges with smaller weights can be considered unreliable links or bottlenecks. For example, Figure 1a shows a sample multi-hop network and its corresponding weighted graph, where the weight of the edges reflects the wireless signal strength between nodes.
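To make the graph model concrete, the following minimal sketch stores a weighted graph $G = (V, E, w)$ as a symmetric adjacency dictionary. The node IDs and weights are invented for illustration and are not the actual values from Figure 1a:

```python
# A minimal sketch of the weighted-graph model G = (V, E, w), stored as a
# symmetric adjacency dictionary. Node IDs and weights are illustrative only.
V = {0, 1, 2, 3, 4}
E = {
    (0, 1): 7,   # w(0,1) = 7, e.g., a strong wireless link
    (1, 2): 3,
    (2, 3): 5,
    (3, 4): 2,   # a light edge: a potential bottleneck
    (0, 2): 4,
}

def adjacency(V, E):
    """Build an undirected adjacency map: node -> {neighbor: weight}."""
    adj = {v: {} for v in V}
    for (u, v), w in E.items():
        adj[u][v] = w
        adj[v][u] = w
    return adj

adj = adjacency(V, E)
print(adj[3])  # {2: 5, 4: 2} -> node 4 is reachable only through the weight-2 edge
```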
Several graph-theoretic problems can help identify weak links in the network topology. For instance, in the cut edge problem, the objective is to identify edges whose removal separates certain nodes from others. In Figure 1a, the edge attached to node 4 is a cut edge, whose removal disconnects node 4 from the other nodes. While cut edges reveal the edges that may disconnect certain nodes, they are not necessarily critical in the broader network because they may only disconnect a small set of nodes.
Another relevant problem is the minimum cut problem, which finds a set of edges with the minimum total weight that separates the graph into disconnected components. Formally, a minimum cut set of $G$ is a set $C \subseteq E$ such that $G - C$ is disconnected and the sum of weights in $C$, $w(C) = \sum_{e \in C} w(e)$, is minimized. A graph may have multiple minimum cut sets with the same minimum total weight. For example, the network in Figure 1a has several minimum cut sets, all with a total weight of 5. However, the minimum cut set may not necessarily reveal the most critical edges with the greatest impact on the network, as the cut might disconnect only a small set of nodes.
This study focuses on identifying the smallest set of critical edges whose removal would disconnect the maximum number of nodes from each other. These edges have the most significant impact on network connectivity, and their failure may sever paths between a large number of nodes. We use pairwise connectivity among nodes to measure overall network connectivity robustness and aim to identify the smallest set of edges that minimizes general pairwise connectivity across the network. The main difference between critical edges and minimum cut sets or cut edges is that critical edges may not necessarily be cut edges or members of minimum cuts. However, when removed as a group, critical edges sever communication paths between a large number of nodes. In other words, the cut edge and minimum cut problems do not take into account the number of disconnected nodes, whereas the critical edge problem focuses on identifying the edges whose removal separates the maximum number of nodes.
The problem of finding the critical edge set of a given graph is well known and extensively studied in graph theory, with applications across various domains, such as wireless multi-hop networks [1], road networks [2], VLSI design [3], electric power networks [4], and other optimization problems [5,6]. For example, in a wireless ad hoc network, the critical edge set represents a subset of critical links with minimum weight, whose failure would terminate all communication paths to a subset of active nodes. Similarly, in a road network, the critical edge set highlights the set of critical streets that may serve as bottlenecks in traffic, and their closure may halt all traffic between certain parts of the city. Identifying such a set allows for reinforcing the critical sections of the network, thereby enhancing its fault tolerance or bandwidth. In general, determining the critical edge set of networks provides valuable insights into their weak points, bottlenecks, clusters, reliability, lifetime, and fault tolerance.
The mentioned problems may impose constraints on the total weight of the selected edges. Given a weighted graph $G$ and an integer value $k$, the weight-constrained cut edge detection problem aims to find a subset of edges whose total weight does not exceed $k$ and whose removal maximizes the number of disconnected components [7]. Similarly, the minimum cut problem seeks to determine the smallest possible edge set with maximum total weight $k$. Most of the proposed methods for the unconstrained problems can be adapted to estimate the edge sets with a total weight not exceeding $k$ through minor modifications.
In this paper, we propose a novel algorithm for identifying critical edges in a given network that integrates well-known max-flow computations with a node merging strategy. The merging operation is fundamentally inspired by the principles of the Stoer–Wagner algorithm [8] for computing global Min-Cuts, which iteratively contracts non-critical edges until the graph is reduced to a single node. The core rationale behind merging non-critical edges is based on the following observations:
Edges that do not belong to a minimal cut separating nodes s and t cannot participate in any smaller cut discovered in subsequent iterations. Therefore, merging their endpoints does not eliminate any potential smaller cut that could later emerge as a candidate for the critical edge set.
By systematically maintaining and updating the smallest cuts identified throughout the iterations, we ensure that no critical edge is inadvertently removed from consideration.
Consequently, this merging process effectively contracts regions of the graph that are internally well connected with respect to the current threshold. This approach preserves the correctness of the critical edge identification process and guarantees that no true minimal cut edges are overlooked.
After computing the max-flow between two neighboring nodes, we eliminate the edges (by merging their endpoints) that carry a flow exceeding a predefined threshold and therefore cannot be part of any critical edge set. In each iteration, we retain the smallest detected cuts up to that point and continue the contraction process until all nodes have been merged into a group.
The main contributions of this paper are as follows:
A novel pairwise connectivity-based critical edge detection algorithm is introduced to identify the edges that have the highest impact on pairwise connectivity.
The proposed method is formally defined, and its computational complexity is analyzed.
A comprehensive comparative analysis is conducted with state-of-the-art algorithms, including Greedy, Degree-based, and Min-Cut approaches, across multiple networks.
2. Related Work
The identification of critical nodes or critical edges that significantly impact the connectivity of networks has been extensively studied in various research works [9,10]. While some studies focus on identifying simple cut edges [11,12] or cut vertices [13,14], other works aim to find the group of critical nodes whose failures sever all communication paths between a large number of nodes [15,16,17].
Finding cut edges (bridges) is a classic problem in graph theory that asks for the edges whose removal partitions the graph. Tarjan has introduced the first linear-time depth-first search (DFS) algorithm to detect all bridges in an undirected graph [18]. Hopcroft and Tarjan had earlier devised an efficient DFS-based method for finding articulation points and biconnected components, which can also identify bridges [19]. Schmidt has developed a simplified linear-time method that simultaneously finds all cut edges and cut vertices in a graph [20]. More recently, Cairo has proposed a streamlined single-traversal algorithm to compute all bridges (and articulation points) in an undirected graph [21].
Min-Cut detection is another problem that finds a set of edges with minimum weight whose removal partitions the graph. Generally, network flow and edge contraction are the main approaches for the Min-Cut problem, and they have been used in different ways in several algorithms. The size of the minimum cut between two arbitrary nodes $s$ and $t$ equals the maximum flow between them [22]. Therefore, max-flow algorithms can be used to find a Min-Cut between arbitrary nodes $s$ and $t$ in any graph [23]. Max-flow is a well-studied problem with different polynomial-time algorithms [24]. Ford and Fulkerson have proposed a basic algorithm for finding the maximum flow between arbitrary nodes in a given graph with $O(|E| \cdot f)$ time complexity [25], where $f$ is the maximum possible flow between the nodes. This approach repeatedly finds augmenting paths in a residual graph, updates the weight of the edges in the detected augmenting path, and continues this process until no path is available between the nodes. A path from the source to the target with available capacity on all of its edges is called an augmenting path; while such a path exists, the algorithm sends a flow along it and then searches for another one.
To find the minimum cuts between pairs of nodes, the Edmonds–Karp algorithm uses the Breadth-First Search (BFS) method to find the augmenting paths, which leads to $O(|V||E|^2)$ time complexity [26]. Dinic has proposed another algorithm based on the Ford–Fulkerson method. It uses a combination of a level graph and BFS for path finding, which bounds the time complexity of the algorithm to $O(|V|^2|E|)$ [27,28]. Goldberg and Tarjan have proposed a push–relabel algorithm that finds the maximum flow with $O(|V|^2|E|)$ time complexity [29]. The algorithm has push and relabel operations and maintains a flow excess value in each node, and it runs until the graph has no node with a positive excess. The push operation transfers flow from a node to its neighbors over a residual edge, and the relabel operation determines the edges for pushing the flow. Orlin has proposed an algorithm that solves the max-flow problem as a sequence of improvement phases with $O(|V||E|)$ time complexity [30]. To find the minimum cuts of a graph using max-flow algorithms, we need to repeat the max-flow algorithm between a source and all other nodes in the graph, which increases the time complexity of the algorithms by a factor of $|V|$. Hao and Orlin have proposed an algorithm that uses the idea of a push–relabel maximum-flow algorithm to find the minimum cut of a given weighted graph. The algorithm finds a Min-Cut of $G$ as a sequence of at most $2|V| - 2$ maximum-flow problems whose total running time equals the running time of a single maximum-flow problem. The resulting algorithm has $O(|V||E| \log(|V|^2/|E|))$ time complexity. Generally, using the max-flow-based algorithms results in only one of the Min-Cuts in the graph.
Nagamochi and Ibaraki have proposed an algorithm based on the edge contraction method [31]. Their algorithm constructs spanning forests and iteratively contracts edges with high weights, which leads to $O(|V||E| + |V|^2 \log |V|)$ time complexity on undirected graphs with non-negative weights. Stoer and Wagner have proposed another edge contraction-based approach, which detects the minimum cut of a given weighted graph with $O(|V||E| + |V|^2 \log |V|)$ time complexity [8]. Starting from an arbitrary node, the algorithm visits the node most tightly connected to the current visited set and continues this process until all nodes have been visited. The total weight of the edges between the last visited node and its neighbors is considered a candidate minimum cut, or cut-of-the-phase. The algorithm stores the cut-of-the-phase in a variable and merges the last visited node with its previously visited neighbor. The newly merged node may have multiple edges to the other nodes. For example, if we merge nodes $a$ and $b$, where both have an edge to another node $c$, then node $c$ will have two edges to the merged node $ab$. In this case, the edges are merged and the weight of the new edge is set to the total weight of the existing edges. After merging the nodes, the algorithm repeats the same process, starting from an arbitrary node and visiting the most tightly connected neighbor of the visited set. The last two visited nodes are merged, and the detected minimum cut size is updated if the total weight of the edges between the last visited node and its neighbors is smaller than the previously detected cut size. This process continues until all nodes are merged into a single node, at which point the smallest cut-of-the-phase gives the minimum cut size and its corresponding cut set. Brinkmeier has improved the Stoer–Wagner algorithm by contracting more than one pair of nodes (if possible), which reduces the worst-case time complexity of the algorithm as a function of the minimum edge weight and the minimum weighted degree of the nodes [32].
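The merging step described above can be captured in a few lines. The sketch below, using the plain adjacency-dictionary representation from the earlier example rather than the cited papers' data structures, contracts node b into node a and sums the weights of parallel edges, as in the Stoer–Wagner merge:

```python
def contract(adj, a, b):
    """Merge node b into node a in an undirected weighted adjacency map
    (node -> {neighbor: weight}), summing parallel edge weights."""
    for c, w in adj[b].items():
        if c == a:
            continue                      # drop the (a, b) edge itself
        adj[a][c] = adj[a].get(c, 0) + w  # parallel edges: weights add up
        adj[c][a] = adj[a][c]
        del adj[c][b]                     # c no longer points to b
    adj[a].pop(b, None)
    del adj[b]

adj = {                       # a triangle: a-b (2), a-c (3), b-c (4)
    "a": {"b": 2, "c": 3},
    "b": {"a": 2, "c": 4},
    "c": {"a": 3, "b": 4},
}
contract(adj, "a", "b")
print(adj)  # {'a': {'c': 7}, 'c': {'a': 7}} -> w(ab, c) = 3 + 4
```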
Besides the exact algorithms, there are several approximate and randomized algorithms that can find the minimum or near-minimum cut of weighted graphs with high probability [33,34,35]. A practical method for finding all minimum cuts of an undirected weighted graph has been proposed in [36]. This algorithm uses different local- and connectivity-based edge contraction heuristics to contract and eliminate the edges that cannot be a member of any minimum cut. After contracting the edges and reducing the size of the graph, the algorithm proposed in [31] is used to find the minimum cuts of the remaining graph [37]. The proposed parallel algorithms for the Min-Cut problem perform the contraction [38] or path detection [39] tasks in parallel, which allows the available processors to reduce the time complexity of the algorithms. The distributed algorithms assume that the graph of the entire network is not available and use message passing to find the disjoint paths or maximum flow between the nodes [1,40,41,42,43].
Another similar problem is determining the edge connectivity of a graph, i.e., the smallest number of edges whose removal disconnects the graph. Gabow has introduced an efficient algorithm that computes the edge connectivity of undirected graphs [44]. Karger has presented a randomized contraction algorithm achieving near-linear expected time [45]. Nagamochi and Ibaraki have offered a deterministic approach that avoids explicit flow computations and reduces the number of edge contractions [46]. More recent work by Kawarabayashi and Thorup achieves near-linear time for finding minimum cuts and $k$-edge connectivity [47].
Table 1 summarizes similar problems and existing algorithms related to this study.
3. Problem Formulation
Maintaining a strong connection between the nodes in a multi-hop network is a vital requirement for most applications. Finding critical edges may help to identify the weak connections that have a higher impact on network connectivity than the other edges. The problem of critical edges can be formally defined as follows:
Critical Edge Problem: Given a weighted network $G = (V, E, w)$ and a positive integer $c$, find a subset of edges $F \subseteq E$ such that $\sum_{e \in F} w(e) \leq c$ and the pairwise connectivity of $G - F$ is minimal.
In this study, we consider the pairwise connectivity ($PC$) of a network as the total number of nodes that can be reached from each node. So, the pairwise connectivity of a connected network $G = (V, E)$ will be
$$PC(G) = |V| \cdot (|V| - 1).$$
In a disconnected graph with a set of separated components, the pairwise connectivity can be calculated as follows:
$$PC(G) = \sum_{s \in S} n_s \cdot (n_s - 1),$$
where $S$ is the set of components in $G$ and $n_s$ is the number of nodes in component $s$. Since we primarily focus on undirected, connected networks without accounting for multi-path robustness metrics (such as the number of alternative paths), we define the pairwise connectivity ($PC$) as the maximum theoretical number of connected node pairs in a connected network. At each iteration, we compute the actual $PC$ of the residual network by enumerating all reachable node pairs after edge removals. This metric provides a lower bound on the available connectivity between nodes, as it disregards redundant paths and edge weights.
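The metric is straightforward to compute by enumerating connected components. The following minimal sketch implements $PC(G) = \sum_{s \in S} n_s (n_s - 1)$ with a plain BFS over the adjacency-map representation used earlier:

```python
# A minimal sketch of the pairwise connectivity metric PC(G) defined above:
# PC = sum over components of n_s * (n_s - 1). Plain BFS, no external libraries.
from collections import deque

def pairwise_connectivity(adj):
    """adj: node -> {neighbor: weight}. Returns sum of n_s*(n_s-1) over components."""
    seen, pc = set(), 0
    for start in adj:
        if start in seen:
            continue
        # BFS to collect one connected component
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        pc += comp * (comp - 1)
    return pc

# A connected 8-node path graph -> PC = 8 * 7 = 56, as in Figure 2a.
path = {i: {} for i in range(8)}
for i in range(7):
    path[i][i + 1] = 1
    path[i + 1][i] = 1
print(pairwise_connectivity(path))  # 56
```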
Figure 2 shows a sample network and its pairwise connectivity after removing different edges. In Figure 2a, we have a connected network with eight nodes. So, each node has a path to seven other nodes, which leads to $PC = 56$. If we remove the edge with the smallest weight, node 4 is separated from the other nodes, which reduces $PC$ to 42 (Figure 2b). Removing the three edges with the smallest weights disconnects nodes 1 and 4 from the other nodes and reduces the $PC$ further (Figure 2c). Removing the two edges with a total weight of 17, as shown in Figure 2d, generates a network with two separated components, each with $PC = 12$. The aim of the proposed algorithm in this paper is to find a subset of edges $F \subseteq E$ with a total weight of at most $c$ such that the $PC$ of $G - F$ is minimum.
As a Greedy strategy, we may remove the edges with the smallest weights one by one until the total weight of the removed edges reaches $c$. However, this approach can produce highly inefficient solutions that are far from optimal. For example, Figure 3 illustrates the steps of the Greedy algorithm on a sample network for $c = 20$. In the first step, the edge with the lowest weight among all the edges is removed (Figure 3a). In the subsequent steps, the next three lightest edges are respectively removed from the network (Figure 3b–d). After removing the fourth edge, the total weight of the removed edges reaches 16, and selecting any additional edge would exceed the limit of 20. The pairwise connectivity of the resulting network in Figure 3e is 30. In contrast, the most critical edges in this network with a total weight under 20 are the three edges highlighted in the figure; removing them reduces the pairwise connectivity to 18, which is substantially lower than 30.
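The Greedy baseline is simple to state in code. The sketch below assumes the adjacency-map representation from the earlier examples and integer node IDs (so edges can be deduplicated with u < v); it removes the lightest remaining edge while the budget c allows:

```python
# A sketch of the Greedy baseline described above: repeatedly remove the
# lightest remaining edge while the weight budget c is not exceeded.
def greedy_removal(adj, c):
    removed, spent = [], 0
    while True:
        # find the lightest remaining edge (u < v avoids listing edges twice)
        candidates = [(w, u, v) for u in adj for v, w in adj[u].items() if u < v]
        candidates.sort()
        if not candidates or spent + candidates[0][0] > c:
            break                      # budget exhausted or no edges left
        w, u, v = candidates[0]
        del adj[u][v]
        del adj[v][u]
        removed.append((u, v))
        spent += w
    return removed, spent

net = {0: {1: 4, 2: 9}, 1: {0: 4, 2: 7}, 2: {0: 9, 1: 7}}
print(greedy_removal(net, c=12))   # ([(0, 1), (1, 2)], 11)
```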
Another possible solution is removing the minimum cut set of the network repeatedly until the total weight of the removed edges reaches $c$. This strategy may also generate an inaccurate answer. Figure 4, for example, shows the steps of a Min-Cut-based solution for $c = 20$. In the first step, the minimum cut set of the network is removed (Figure 4b). In the second step, by removing the next minimum cut set, the total weight of the removed edges becomes 9. The weight of the next minimum cut set is 12, which exceeds the $c = 20$ limit. Therefore, the minimum cut set-based approach leads to a network with $PC = 30$, which is much higher than the optimal answer of 18. In the next section, we propose a max-flow-based approach that generates accurate solutions close to the optimal answer.
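A minimal sketch of this Min-Cut baseline is shown below, using the Stoer–Wagner implementation shipped with NetworkX. As a simplification (an assumption of this sketch, not a statement of the paper's implementation), after each removal it keeps cutting inside the largest remaining component:

```python
# A sketch of the Min-Cut baseline: repeatedly remove a global minimum cut
# (Stoer-Wagner [8], via NetworkX) until the next cut would exceed budget c.
import networkx as nx

def mincut_removal(G, c):
    removed, spent = [], 0
    while G.number_of_nodes() > 1 and nx.is_connected(G):
        cut_value, (part1, part2) = nx.stoer_wagner(G)   # weight attr: 'weight'
        if spent + cut_value > c:
            break
        cut_edges = [(u, v) for u, v in G.edges()
                     if (u in part1) != (v in part1)]
        G.remove_edges_from(cut_edges)
        removed += cut_edges
        spent += cut_value
        # simplification: keep cutting inside the largest remaining component
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    return removed, spent

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 3), (1, 2, 4), (2, 3, 2), (3, 0, 5), (0, 2, 1)])
print(mincut_removal(G, c=6))   # removes the weight-6 global minimum cut
```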
4. Proposed Algorithm
The main idea of the proposed algorithm is to find the edges that minimize the total maximum flow between the nodes. Removing the edges that minimize the total flow may decrease the number of paths between the nodes and minimize the pairwise connectivity.
The proposed algorithm integrates max-flow-based Min-Cut detection with an iterative edge contraction strategy to progressively simplify the network while preserving the essential connectivity properties needed for identifying critical edges. The main steps and the order of operations in the proposed algorithm are as follows:
Min-Cut Search: Before any contraction, the algorithm identifies a minimal cut separating the currently visited set (denoted by B) from an unvisited candidate node v. This is accomplished by executing a max-flow procedure on the current state of the graph. However, if the total weight of the direct edges between v and B, combined with the flows passed from v in previous iterations, exceeds the threshold c, the max-flow search is skipped.
Selection of Candidate Cuts: If the total weight of the minimum cut between B and v is less than the specified threshold c, the corresponding cut set is stored along with its resulting pairwise connectivity value. This guarantees that potential critical edge sets are collected before any modification of the graph topology through merging.
Edge Contraction: Node v is added to the set B, effectively contracting the edges between v and B. This merging step simplifies subsequent iterations by reducing the number of nodes and aggregating local connectivity information.
Iterative Refinement: The process is then repeated and the algorithm selects a new unvisited node v adjacent to B, computes Min-Cuts between B and v on the contracted graph, records candidate cut sets that yield low pairwise connectivity, and continues merging nodes until B includes all nodes.
Figure 5 shows the general steps of the proposed algorithm on a sample network for $c = 20$. The algorithm starts from an arbitrary node, e.g., node $v$, and adds it to the visited node set (Figure 5a). The green nodes in Figure 5 are visited nodes. If the total weight of the edges connected to $v$ is less than $c$, the edges are added to a potential critical edge set denoted as $Z$. In Figure 5a, we add [42,(0,3),(2,3)] to $Z$, where 42 is the $PC$ of the network after removing the (0,3) and (2,3) edges. In each step, a minimum cut between a neighbor of the visited group and the visited group is detected to determine how strongly that neighbor is connected to the visited nodes. In Figure 5b, for example, if we select node 0, the cut size between node 0 and the visited group will be 8 and the cut set will include the same (0,3) and (2,3) edges.
In the next step, we may select node 2 and detect a minimum cut with size 24, which is higher than 20. So, we add node 2 to the visited group without updating $Z$ (Figure 5c). As for the next step, we may find a cut between node 6 and the visited group, which will contain the edges (0,6) and (2,5) with a weight of 12, which is smaller than 20. So, we add [26,(0,6),(2,5)] as potential critical edges to $Z$, where 26 is the $PC$ of the network after removing the selected edges. In the next step, we select node 5 and find a minimum cut between this node and the visited group (Figure 5d). This cut will contain the edges (2,5) and (4,6) with size 13. Since $13 < 20$, we add [13,(2,5),(4,6)] to $Z$. In the next step, we select node 1 and find a minimum cut between this node and the visited group (Figure 5e). This cut will contain the edges (1,5) and (4,6) with size 17. Since $17 < 20$, we add [17,(1,5),(4,6)] to $Z$. In the next step, we select node 4 and find a minimum cut between this node and the visited group (Figure 5f). This cut will contain the edges (1,4) and (4,6) with size 22. Since $22 > 20$, we continue to the next step without updating $Z$, select node 7, and find a minimum cut between this node and the visited group (Figure 5g). This cut will contain the edge (4,7) with size 2. Since $2 < 20$, we add [42,(4,7)] to $Z$. After visiting and merging all nodes, we sort the elements in $Z$ based on the $PC$ value, which leads to the following set:
$$Z = \{[13,(2,5),(4,6)],\ [17,(1,5),(4,6)],\ [26,(0,6),(2,5)],\ [42,(0,3),(2,3)],\ [42,(4,7)]\}.$$
We then select a combination of candidate sets from $Z$ that yields the lowest $PC$ while ensuring that the total weight of the selected edges remains below $c$. After sorting the sets in $Z$ by their $PC$ values, the algorithm iteratively selects the sets with the lowest $PC$ until the total weight reaches $c$. During this process, the edges included in the final selected set are removed from the remaining sets in $Z$ to ensure that the weight of each edge is counted only once. In cases where multiple candidate sets have the same $PC$ value, we prioritize the set with the smallest total edge weight. If these weights are also identical, we apply a consistent tie-breaking rule by selecting the set that was added first to $Z$, based on the insertion order. This strategy maintains consistency across runs.
In the above example, the set [13,(2,5),(4,6)] is selected first as the critical edges, with a total weight of 11. We then remove these edges from $Z$, which results in the following updated set:
$$Z = \{[17,(1,5)],\ [26,(0,6)],\ [42,(0,3),(2,3)],\ [42,(4,7)]\}.$$
At this point, we have two candidates with the same $PC$, so we select the set with the lowest total weight. This set can be included since its weight is only 6, keeping the cumulative weight within the allowed limit. After selecting this set, no further candidates can be added without exceeding the constraint imposed by $c$. Figure 5i illustrates the resulting union set, which includes the edges (2,5) and (4,6) together with the edge chosen in the second step, leading to a final pairwise connectivity of 18.
Figure 6 shows the steps involved in selecting the final edges from the candidate set Z.
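One plausible reading of this selection step is sketched below. Each candidate in Z is a (pc, edges) pair; the edge weights here are invented for illustration, and the code skips a candidate whose remaining (not yet counted) edges would exceed the budget rather than stopping outright:

```python
# A sketch of the final selection over the candidate list Z (cf. Figure 6).
# Ties on pc prefer lower total weight, then earlier insertion order
# (Python's stable sort preserves insertion order for equal keys).
def select_critical(Z, weights, c):
    F, spent, chosen = [], 0, set()
    order = sorted(range(len(Z)),
                   key=lambda i: (Z[i][0], sum(weights[e] for e in Z[i][1]), i))
    for i in order:
        # count each edge only once across the selected sets
        new_edges = [e for e in Z[i][1] if e not in chosen]
        cost = sum(weights[e] for e in new_edges)
        if new_edges and spent + cost <= c:
            F += new_edges
            chosen.update(new_edges)
            spent += cost
    return F, spent

weights = {(2, 5): 2, (4, 6): 9, (1, 5): 8, (4, 7): 2,
           (0, 3): 3, (2, 3): 3, (0, 6): 10}          # illustrative weights
Z = [(42, [(0, 3), (2, 3)]), (26, [(0, 6), (2, 5)]),
     (13, [(2, 5), (4, 6)]), (17, [(1, 5), (4, 6)]), (42, [(4, 7)])]
print(select_critical(Z, weights, c=20))
```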
To find the Min-Cut between the visited and unvisited nodes, we may use a max-flow algorithm [48]. However, starting the max-flow algorithm for each merging process may significantly increase the time complexity of the algorithm. The proposed algorithm uses the previously calculated flows and local connections to minimize the overall flow detection procedure. The main idea of the algorithm is to contract the non-critical edges as fast as possible. We try to use local information (i.e., edge weight or previously passed flow) to decide whether an edge can be critical or not. If we cannot decide using local information, we find the flow between the endpoints of the edge. At the end of the contraction process, we select the best possible set of critical edges.
Let $w(e)$ denote the weight of edge $e = (u, v)$. An edge $e$ cannot be critical if $w(e) > c$, as its selection would immediately violate the constraint limit. Similarly, an edge is not critical if $w(e) + f(u, v) > c$, where $f(u, v)$ represents the passing flow between nodes $u$ and $v$. In this case, $u$ and $v$ are strongly connected through other paths in the network, implying that there is no minimum cut with a total weight less than $c$ that both includes edge $e$ and separates $u$ from $v$. Since any minimum cut that contains $e$ must necessarily disconnect $u$ and $v$, the condition $w(e) + f(u, v) > c$ ensures that such a cut cannot exist, and thus $e$ cannot be critical.
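The two local tests above are conservative and cheap to evaluate. A tiny sketch (function and parameter names are ours, for illustration):

```python
# A sketch of the local criticality test above. Passing either check
# proves that the edge cannot be part of any critical edge set.
def cannot_be_critical(w_e, f_uv, c):
    """w_e: weight of edge (u, v); f_uv: flow already known to pass between
    u and v over other paths; c: the weight budget."""
    if w_e > c:               # the edge alone would exceed the budget
        return True
    if w_e + f_uv > c:        # any cut containing the edge weighs more than c
        return True
    return False

print(cannot_be_critical(w_e=9, f_uv=0, c=7))   # True: 9 > 7
print(cannot_be_critical(w_e=3, f_uv=2, c=7))   # False: 5 <= 7, flow search needed
```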
In the proposed algorithm, we add the merged nodes to a set named $B$. Initially, $B$ is empty and the algorithm terminates when all nodes are added to $B$. The algorithm starts from an arbitrary node, e.g., $v_0$, and adds it to $B$. We select a random neighbor of $v_0$, e.g., $v_1$, which is not in $B$, and contract the $(v_0, v_1)$ edge by merging them. Obviously,
$$mincut(v_1, B) \leq \sum_{u \in N(v_1)} w(v_1, u). \quad (1)$$
Since $v_0$ and $v_1$ are neighbors, the value of $w(v_0, v_1)$ is a part of $mincut(v_1, B)$. More precisely, we have
$$mincut(v_1, B) = w(v_0, v_1) + maxflow_{G - (v_0, v_1)}(v_1, B), \quad (2)$$
where $maxflow_{G - (v_0, v_1)}(v_1, B)$ is the maximum flow between $v_1$ and $B$ after excluding the direct edge. Let $N(v_i)$ be the set of neighbors of node $v_i$, and let
$$direct(v_i) = \sum_{u \in N(v_i) \cap B} w(v_i, u) \quad (3)$$
be the total weight of the direct edges between $v_i$ and $B$. We may write
$$mincut(v_i, B) = direct(v_i) + maxflow_{G'}(v_i, B), \quad (4)$$
where $G'$ is the graph without the direct edges between $v_i$ and $B$. We may directly add a node $v_i$ to $B$ if
$$direct(v_i) > c.$$
We have to find some flow from $v_i$ to $B$ based on relation (4) if
$$direct(v_i) \leq c.$$
If $direct(v_i) + flow(v_i) > c$, then the connection between $v_i$ and $B$ is not critical and $v_i$ can be added to the $B$ set. Otherwise, the total weight of the cut between $v_i$ and $B$ is not larger than $c$ and this cut can be a candidate for the critical edge set. In this case, we add the newly detected cut and its related $PC$ to the candidate critical edge set. This process continues until all nodes are added to the $B$ set. When we find the flow between $v_i$ and $B$, every node on the detected flow paths will have at least two different paths to $B$ after merging with node $v_i$. Formally, if we find a flow path $v_i \to x_1 \to x_2 \to \dots \to b$ with $b \in B$, we will have
$$flow(x_j) > 0 \quad \text{for every intermediate node } x_j. \quad (5)$$
So, we may keep the passing flows from the nodes to use them in future contraction operations and decrease the number of flow detections. Using the direct edges to the $B$ set and the flows detected in the previous search results, we may merge most of the nodes without starting a new max-flow procedure.
Algorithm 1 shows the general steps of the proposed method. Initially, we set $Z$ as an empty set ($Z$ is the set of possible critical edges) and add a random node from $V$ to $B$ (lines 1–2). While there are unvisited nodes in the network, we select a node $v$ from $V - B$ that has at least one neighbor in $B$ (lines 3 and 4). If the total weight of the edges between $v$ and $B$ is smaller than $c$, we find the flow between $v$ and the nodes in $B$ using a BFS tree (lines 5–7). The flow detection is stopped when we have no more flow or the flow detected thus far, $f$, exceeds $c$. If $f \leq c$, then we have found a new cut, which is the set of edges between the visited and unvisited nodes in the last BFS tree (lines 8 and 9). We then find the pairwise connectivity of $G - cut$; this value, along with the corresponding cut, is used to update the possible critical edge set $Z$ (lines 10 and 11). Before repeating the loop for the next node, we add $v$ to $B$ (line 12). Also, any node $u$ is added to $B$ if $u$ has a neighbor in $B$ and the total weight of its edges to $B$ plus the existing flow on its other edges is bigger than $c$ (lines 13–15). After adding all nodes to $B$, we select a subset of cut edges in $Z$ with the lowest $PC$ values such that the total weight of the selected edges does not exceed $c$, and return the selected edges as the critical edges (lines 16 and 17).
Figure 7 shows the flowchart and Figure 8 shows the idea of the proposed algorithm on a sample network for $c = 7$. We assume that the algorithm starts from node 0 and adds it to the $B$ set. Let node 3 be the next selected node for merging. Since the total weight of the edges between node 3 and $B$ is 9 and $9 > 7$, we merge nodes 0 and 3 (Figure 8b). Let node 2 be the next selected node for merging. Since $direct(2)$ is 7 (not bigger than $c$), we find some flow between node 2 and $B$. The maximum flow between node 2 and $B$ over the indirect paths is 4 (Figure 8c). So, the total flow between node 2 and set $B$ is $7 + 4 = 11$, which is bigger than $c$. Therefore, there are no critical edges between node 2 and $B$. Also, the sum of the passing flow over the edges connected to node 6 is 8, which is bigger than $c$ (Figure 8c). So, we can add both nodes 2 and 6 to $B$ at the same time (Figure 8d).
Algorithm 1: Critical Edges (general)
Input: $G = (V, E, w)$, $c$. /* a graph G and the upper limit c */
Output: $F$. /* a set of critical edges */
1: $Z \leftarrow \emptyset$.
2: Set $B \leftarrow \{v\}$, where $v \in V$ is an arbitrary node. /* add an arbitrary node to B */
3: while $B \neq V$ do
4:   select a random node $v \in V - B$ with at least one neighbor in B.
5:   $f \leftarrow$ total weight of edges between v and B, and remove these edges from G.
6:   while $f \leq c$ and some flow is available from v to B do
7:     using a BFS tree, find some flow from v to B and add it to f.
8:   if $f \leq c$ then /* a candidate critical edge set detected */
9:     $cut \leftarrow \{(a, b) \in E \mid a$ is reachable and $b$ is unreachable in the last BFS tree$\}$. /* the edges between reachable and unreachable nodes in the BFS tree form a cut */
10:    $pc \leftarrow$ pairwise-connect($G - cut$).
11:    $Z \leftarrow Z \cup \{[pc, cut]\}$.
12:   $B \leftarrow B \cup \{v\}$.
13:   for each node $u \in V - B$ with a neighbor in B do
14:     if total weight of edges between u and B + the flow on the other edges of u > c then
15:       $B \leftarrow B \cup \{u\}$.
16: select $F$ from the cuts in $Z$ with the lowest $pc$ values s.t. $w(F) \leq c$.
17: return F.
In the next iteration, we select node 1. Since $flow(1) + direct(1)$ is smaller than $c$, we find some flow between node 1 and $B$. After ignoring the direct edge between node 1 and $B$, we may find a flow of 1 over an indirect path by building a BFS tree in the network. So, the total flow from node 1 to $B$ will be 5. Since $5 \leq 7$, we have a critical edge set that separates node 1 from the merged nodes. At this stage, any edge between the visited and unvisited nodes in the last BFS tree is a critical edge candidate, and the detected cut, together with the pairwise connectivity of the network after its removal, is recorded. After adding this cut to $Z$ and node 1 to $B$, we select node 5 for merging (Figure 8e). The weight of the direct edge between node 5 and $B$ is 4, and this node already has 1 unit of passing flow (over node 4) to $B$. So, the current cut size between node 5 and $B$ is 5, which is not bigger than $c$. Therefore, we try to find some new flow between node 5 and $B$, which will be unsuccessful because the whole capacity of the connecting edge is already occupied. Considering the edges between the visited and unvisited nodes in the last BFS tree, the second candidate critical edge set is recorded with its pairwise connectivity. We add node 5 to $B$ and the detected cut to $Z$. Since the cut size between node 4 and $B$ does not exceed $c$, we also obtain a new critical edge set separating node 4 from the other nodes, with a pairwise connectivity of 30. So, the algorithm adds this cut to $Z$ and node 4 to $B$ and finishes the main loop. Now, we have a set of three candidate critical edge sets in $Z$. We start from the lowest pairwise connectivity, which is 18, and add the corresponding edges to the final critical edge set $F$. The total weight of the selected edges in $F$ is then 5, and no more edges can be added to $F$ because adding any further edge would increase the total edge weight to more than $c$. Therefore, the algorithm selects these edges as the critical edges, whose removal decreases the pairwise connectivity of the network to 18.
More detailed steps of the proposed algorithm are provided in Algorithm 2. The algorithm accepts the adjacency matrix of the network $G$ and outputs the detected critical edges. We assume that the adjacency matrix holds the weight of the edges between the nodes. For example, if the weight of the edge between nodes $u$ and $v$ is 3, then $G[u,v]$ and $G[v,u]$ will both be 3. To find some flow between a white node and the black set of nodes, we use the BFS tree. The parent array keeps the parent node ID of each node in the established BFS tree. The color array keeps the color of each node, which is initially white for all nodes. The direct array keeps the total weight of the edges between a white node and its black neighbors. If node $i$ has no black neighbor, direct[i] will be 0. As another example, if node $i$ has two black neighbors with edge weights 2 and 5, direct[i] will be 7. The merge(v) procedure adds node $v$ to the black node set and updates the direct value of its neighbors (lines 1–3). In this procedure, we change the color of node $v$ to black, reset direct[v] to zero, and, for any white neighbor $i$ of $v$, add $(G[v,i] + G[i,v])/2$ to direct[i]. Note that during the flow detection, we may change $G[v,i]$ and $G[i,v]$, but their total value is always constant and equals twice the initial $G[v,i]$. The flow(v) procedure sums up the passing flows (up to this point) over the edges to the white neighbors of node $v$ (lines 4–7). The passing flow over each edge comes from the flow searches for the previously (before $v$) merged nodes.
The BFS(s) procedure performs a breadth-first search starting from node $s$ until it finds a black node (lines 8–18). The parent of each node in the established BFS tree is stored in the parent array. The function $N^{+}(u)$ returns the set of neighbors of $u$ that have an edge with positive remaining weight, so we can pass some flow to each neighbor $i \in N^{+}(u)$. The procedure returns the ID of the first detected black node. If the procedure cannot find any black node, it returns the value −1 to indicate that the search was unsuccessful.
The findFlow(s) procedure (line 19) finds some flow from the selected white node $s$ to a black node based on the Ford–Fulkerson algorithm. This procedure calls the BFS procedure to find a path to a black node, e.g., node $t$ (line 20). The algorithm returns 0 if the BFS procedure cannot find a black node (line 21). Otherwise, we find the minimum capacity of the edges, e.g., $f_{min}$, in the detected $s$-$t$ path (lines 22–25). Then we reduce $f_{min}$ from the capacity of all edges on the detected path from $t$ to $s$ and add the same value to the capacity of the edges in the reverse direction (lines 26–31). Finally, the sent flow over the path is returned from the function (line 32).
Algorithm 2: Critical Edges (detailed)
Input: $G$ (adjacency matrix). Output: Detected Critical Edges.
parent ← array[1..n]. /* parent of nodes in the detected path */
color ← array[1..n], initially white. /* color of nodes (white or black) */
direct ← array[1..n], initially 0. /* the weight of edges to the black neighbors of each node */
1: procedure merge(v):
2:   color[v] ← black, direct[v] ← 0.
3:   foreach white neighbor i of v do direct[i] ← direct[i] + (G[v,i] + G[i,v])/2.
4: procedure flow(v):
5:   f ← 0.
6:   foreach white neighbor i of v do f ← f + |(G[v,i] + G[i,v])/2 − G[v,i]|.
7:   return f.
8: procedure BFS(s):
9:   parent[1..n] ← −1, parent[s] ← s, queue.clear().
10:  queue.push(s).
11:  while queue.isEmpty() = false do
12:    u ← queue.pop().
13:    foreach i in $N^{+}(u)$ do
14:      if parent[i] = −1 then
15:        parent[i] ← u.
16:        if color[i] = black then return i.
17:        queue.push(i).
18:  return −1.
19: procedure findFlow(s):
20:   t ← BFS(s).
21:   if t = −1 then return 0.
22:   $f_{min}$ ← Inf, v ← t.
23:   while v ≠ s do
24:     $f_{min}$ ← min($f_{min}$, G[parent[v], v]).
25:     v ← parent[v].
26:   v ← t.
27:   while v ≠ s do
28:     u ← parent[v].
29:     G[u,v] ← G[u,v] − $f_{min}$.
30:     G[v,u] ← G[v,u] + $f_{min}$.
31:     v ← parent[v].
32:   return $f_{min}$.
33: procedure CriticalEdges(s):
34:   cutset ← ∅, c ← the minimum weighted degree of G. /* initial cut bound */
35:   merge(s).
36:   while a white node with a black neighbor exists do
37:     select such a white node v randomly; f ← flow(v) + direct[v].
38:     foreach black neighbor b of v do G[v,b] ← 0, G[b,v] ← 0. /* remove direct edges; already counted in f */
39:     while f ≤ c and some flow is available do f ← f + findFlow(v).
40:     if f ≤ c then
41:       cut ← {(u,w) : color[u] = white, parent[u] ≠ −1, parent[w] = −1, G[u,w] > 0}. /* edges leaving the last BFS tree */
42:       if f = c then cutset ← cutset ∪ {cut}.
43:       else c ← f, cutset ← {cut}. /* a smaller cut is found: update c and restart the cutset */
44:     merge(v).
45:     foreach white node u with flow(u) + direct[u] > c do merge(u).
46: return the cuts in cutset and the cut size c.
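A compact Python sketch of the BFS and findFlow procedures of Algorithm 2 is given below; it operates on a residual adjacency matrix and mirrors the pseudocode names, but it is a reconstruction under the stated assumptions rather than the authors' code:

```python
# A sketch of BFS/findFlow from Algorithm 2, on a residual adjacency matrix G
# (G[u][v] = remaining capacity); 'color' marks merged (black) nodes.
from collections import deque

def bfs_to_black(G, color, s):
    """Return (t, parent): t is the first black node reached from s over
    positive-capacity edges, or -1 if none is reachable."""
    n = len(G)
    parent = [-1] * n
    parent[s] = s
    q = deque([s])
    while q:
        u = q.popleft()
        for i in range(n):
            if G[u][i] > 0 and parent[i] == -1:
                parent[i] = u
                if color[i] == "black":
                    return i, parent
                q.append(i)
    return -1, parent

def find_flow(G, color, s):
    """Push one augmenting flow from white node s to any black node
    (one Ford-Fulkerson step); returns the amount pushed."""
    t, parent = bfs_to_black(G, color, s)
    if t == -1:
        return 0
    # bottleneck capacity along the detected s-t path
    f_min, v = float("inf"), t
    while v != s:
        f_min = min(f_min, G[parent[v]][v])
        v = parent[v]
    # update residual capacities in both directions
    v = t
    while v != s:
        u = parent[v]
        G[u][v] -= f_min
        G[v][u] += f_min
        v = u
    return f_min
```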
The CriticalEdges(s) procedure is the main procedure of the proposed algorithm and uses the previous procedures to find all critical edges. Node $s$ is the starting point of the algorithm and is selected randomly from the node set. In the first step, we add node $s$ to the black node set (lines 34 and 35). While the network has some white nodes with a black neighbor, we select a random white node, e.g., node $v$, and remove its direct edges to the black nodes because the weight of these edges has already been added to the direct variable (lines 36–38). If the total weight of the direct edges to the black nodes plus the passing flow over $v$ is smaller than $c$, we repeatedly find some flow from $v$ to the black nodes until the total passing flow exceeds the so-far-detected cut size $c$ or no more flow is available in the network (line 39). If the detected flow is smaller than or equal to the so-far-detected cut size $c$, a new cut is generated by selecting the edges between the visited and unvisited nodes in the last BFS tree (lines 40 and 41). If the detected flow equals $c$, then we add the new cut to the cutset (line 42). If the flow is smaller than $c$, then we update $c$ with the new flow, clear the existing cuts in the cutset, and add the new cut to this set (line 43). Finally, we add node $v$, along with any white neighbors of black nodes that have more than $c$ passing flow, to the black node set and repeat the same process for the next white node (lines 44 and 45).
Figure 9 shows the steps of the proposed algorithm on a sample network. The minimum weighted degree of this network is 8, so the initial value of $c$ is 8. We assume that the algorithm starts from node 0 and converts the color of this node to black (Figure 9a). Let node 1, with a direct edge weight of 6 to the black nodes, be the selected node for merging. Since $6 \leq 8$, we find some flow between nodes 1 and 0 (ignoring the direct edges between 0 and 1) using the BFS and Ford–Fulkerson algorithms. Figure 9b shows the direction of the updated edges and the occupied capacity after running the first BFS, which finds a flow of size 5. After finding the first flow path, the sum of the passing flow and the direct edge weight between nodes 0 and 1 will be 11, which is bigger than 8. Also, the sum of the passing flow over the edges of node 2 will be 10, which is bigger than 8. So, we can convert the color of both nodes 1 and 2 to black at the same time without updating the cut size (Figure 9c). In the next iteration, node 3 is immediately converted to black because the total weight of the direct edges between node 3 and the black nodes is bigger than 8 (Figure 9d). In the next iteration, assume that node 6 is selected for merging. Since its direct edge weight of 3 is smaller than 8, we start the flow detection procedure, which finds a maximum flow of 2 (ignoring the direct edge between 6 and 1) from node 6 to the black nodes (Figure 9e). So, the sum of the max-flow and the weight of the direct edges to the black nodes for node 6 will be 5.
The nodes with the green border in Figure 9f show the discovered nodes in the last BFS tree. Since 5 is smaller than 8, we update $c = 5$ and set the critical edges to the edges between the discovered nodes {4,5,6,7,8} and the undiscovered nodes {0,1,2,3} in the last BFS. Therefore, the critical edge set obtained so far will be {(1,6),(3,4)}. After converting node 6 to black, node 7 is selected for merging (Figure 9g). The weight of the direct edge between nodes 6 and 7 is 5, which is not bigger than $c$. So, we start the flow detection process and find a flow of 2 after the first BFS, which increases the cut size of node 7 to 7 (Figure 9h). Since 7 is bigger than $c$, we do not need to find more flow, and we convert node 7 to black. Also, after finding this flow, the sum of the passing flow over the edges connected to node 5 plus the weight of the direct edge between node 5 and the black nodes will be 6, which is bigger than $c$. Therefore, we convert nodes 5 and 7 to black at the same time (Figure 9i). In the next iteration (Figure 9i), node 4 (with a total direct weight of 7) is selected for merging. Since 7 is bigger than $c$, we convert node 4 to black immediately (Figure 9j). Finally, node 8 is converted to black without flow detection because the total weight of its direct edges to the black nodes is bigger than $c$ (Figure 9k). So, the algorithm detects the critical edges as {(1,6),(3,4)} with size $c = 5$.
6. Performance Evaluation
To evaluate the performance of the proposed algorithm, we implemented the proposed and existing algorithms in Python 3.12 and ran them on a set of random networks with different edge and node counts. We set the number of nodes in the random networks from 50 up to 500 (in steps of 50) and added random edges between the nodes. To ensure that the resulting network was connected, we added an edge between each pair of successive node IDs. The maximum number of edges connected to each node (the degree of the node) was selected from 2 up to 10 (in steps of 1). For each specific node count and maximum degree value, we generated 5 different instances of random networks, which led to 450 networks in total. The weight of each edge was selected randomly between 1 and 100 (both 1 and 100 included).
The random graph generation model used in this study is a hybrid of the Erdős–Rényi (ER) [48] and Barabási–Albert (BA) [49] models. In this model, we begin by selecting a number of nodes and creating random edges between them, similar to the ER model, where edges are added randomly between pairs of nodes. However, to ensure the graph remains connected, we enforce a rule that adds edges between successive node IDs, which guarantees a continuous path between the nodes. Additionally, inspired by the BA model, the degree of each node is constrained, meaning that each node has a predefined maximum number of edges connected to it, ranging from 2 to 10.
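A minimal sketch of such a generator is shown below. Parameter names are ours, and the degree cap is enforced only for the randomly added edges (the connectivity chain always exists), which is one possible reading of the described procedure:

```python
# A sketch of the described generator: ER-style random edges plus a chain
# edge between successive node IDs for connectivity, with a degree cap.
import random

def random_network(n, max_degree, extra_edges):
    adj = {i: {} for i in range(n)}
    def add(u, v):
        if u != v and v not in adj[u] \
           and len(adj[u]) < max_degree and len(adj[v]) < max_degree:
            w = random.randint(1, 100)      # weights in [1, 100]
            adj[u][v] = adj[v][u] = w
    for i in range(n - 1):                  # chain keeps the graph connected
        adj[i][i + 1] = adj[i + 1][i] = random.randint(1, 100)
    for _ in range(extra_edges):            # ER-style random additions
        add(random.randrange(n), random.randrange(n))
    return adj

net = random_network(n=50, max_degree=5, extra_edges=100)
print(sum(len(nb) for nb in net.values()) // 2, "edges")
```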
We implemented the proposed, Min-Cut, Greedy, and Degree-based approaches. In the Min-Cut approach, the algorithm successively removes minimum cut sets from the network until the total weight of the removed edges reaches the threshold $c$. At each iteration, a global minimum cut is computed, and the corresponding edges are eliminated, thereby progressively reducing the network's connectivity. To identify these minimum cuts, we employed the algorithm proposed by Stoer and Wagner [8].
The Greedy strategy, on the other hand, iteratively removes edges with the smallest weights until the cumulative weight of the removed edges meets the threshold c. The underlying rationale is that eliminating lighter edges allows for the removal of a greater number of edges within the same budget, which may partition the network into more disconnected components. In the Degree-based method, nodes with the lowest total degree (the smallest total weights of their incident edges) are selected, and all edges connected to these nodes are removed. This ensures that nodes with inherently weak connections are detached from the network early in the process, potentially accelerating the overall reduction in connectivity.
Figure 10a and Figure 10b, respectively, show the heat maps of the number of flow paths detected by the proposed and max-flow-based algorithms as a function of the node count and maximum node degree. In graphs with 50 nodes and a maximum degree of 2, the proposed algorithm finds only 3 flow paths on average, while this value for the max-flow-based algorithm is 98. In networks with 500 nodes and a maximum degree of 10, the proposed algorithm finds 333 flow paths, while this value for the max-flow-based algorithm is 14,756.
Figure 11 shows how the number of flow paths detected by the proposed (blue) and max-flow-based (orange) algorithms grows as more nodes are added to the network. In this figure, the thick lines show the average number of detected paths over all degree values. Figure 11 clearly shows that, in contrast to the max-flow-based algorithm, the number of paths detected by the proposed algorithm is almost negligible for all node counts and degree values.
Figure 12a shows the pairwise connectivity of the network after removing the detected critical edges. After removing the edges detected by the proposed algorithm in the networks with 50 nodes, the pairwise connectivity of the network decreases from 2450 (in the original network) to 1607, indicating a 34.4% decrease. In the networks with 500 nodes, the pairwise connectivity decreases from 249,500 to 160,492, resulting in a 35.7% decrease. In the networks with 50 nodes, the Greedy algorithm reduces the pairwise connectivity from 2450 to 1909, which corresponds to a 22.1% decrease. For the networks with 500 nodes, the connectivity decreases from 249,500 to 212,690, representing a 14.8% reduction. In the 50-node networks, applying the Min-Cut-based method reduces the pairwise connectivity from 2450 to 1708, a 30.3% decrease. For the 500-node networks, the value drops from 249,500 to 191,591, resulting in a 23.2% reduction. Using the Degree-based edge removal strategy, the pairwise connectivity in 50-node networks drops from 2450 to 2109, indicating a 13.9% reduction. For 500-node networks, the value decreases from 249,500 to 233,364, showing a 6.5% decrease.
Figure 12b shows how the pairwise connectivity changes when we increase the critical edge weight limit $c$. The original network structure remains unchanged for all values of $c$ and maintains a constant pairwise connectivity of 95,975. For the proposed algorithm, pairwise connectivity decreases as $c$ grows. At the lowest tested value of $c$, connectivity drops to 81,995, which indicates a 14.6% reduction. When $c$ reaches 0.35, the pairwise connectivity is reduced to 52,371, which indicates a 45.4% decrease. In comparison, the Greedy algorithm reduces connectivity to 84,811 at the lowest $c$ value (an 11.6% reduction) and to 61,569 at $c = 0.35$ (a 35.9% reduction). The Min-Cut method follows a similar pattern: it reduces pairwise connectivity to 83,803 (12.7%) at the lowest $c$ value and to 59,970 (37.5%) at $c = 0.35$. Among all approaches, the Degree-based method has the lowest performance. It reduces connectivity to just 88,801 at the lowest $c$ value (a 7.5% drop) and to 70,315 at $c = 0.35$, corresponding to a 26.7% reduction.
Figure 13a shows the total number of critical edges identified by each algorithm against the number of nodes in the network. The number of selected edges increases with the network size for all algorithms. The proposed algorithm selects the fewest critical edges across all network sizes. For instance, it identifies only 20 critical edges in networks with 50 nodes and 125 edges in networks with 500 nodes. The Greedy algorithm selects significantly more edges, with 63 edges at 50 nodes and 590 edges at 500 nodes, showing a steep increase as the network grows. The Min-Cut-based method also identifies a high number of edges (53 edges at 50 nodes and 403 edges at 500 nodes). However, it tends to be slightly more conservative than the Greedy approach in larger networks. The Degree-based algorithm shows similar behavior to the Greedy method, selecting 60 edges at 50 nodes and 523 edges at 500 nodes. These findings highlight the effectiveness of the proposed method in identifying a minimal yet important set of critical edges, which is crucial for maintaining network functionality.
Figure 13b presents the number of critical edges selected by each algorithm against the critical edge weight limit $c$. As $c$ increases (allowing a greater total weight for edge removals), all algorithms select more edges. However, the rate and volume of selection vary between the methods. The proposed algorithm selects the smallest number of critical edges for all $c$ values. For example, at the lowest $c$ value, it identifies only 26 edges, and even at the highest bound, $c = 0.35$, it selects just 64 edges. The Greedy algorithm selects more edges, starting from 178 at the lowest $c$ value and reaching 480 edges at $c = 0.35$. This indicates a more aggressive but less efficient selection approach. The Min-Cut-based algorithm lies between the proposed and Greedy methods in terms of edge count. It selects 105 edges at the lowest $c$ value and 330 edges at $c = 0.35$, indicating a moderate edge removal strategy. The Degree-based algorithm generally selects more edges than the Min-Cut method, with 145 edges at the lowest $c$ value and 450 at $c = 0.35$, which indicates a relatively lower efficiency.
Figure 14a shows the pairwise connectivity loss percentage in all networks after removing the edges selected by the algorithms. Removing the edges selected by the proposed algorithm results in the highest percentage of connectivity loss, ranging between 34% and 38%. This highlights its effectiveness in identifying critical edges that maximize disconnection between nodes. The Greedy algorithm produces moderate loss percentages, from 22% to 24%, which indicates that while it selects many edges, their impact on overall connectivity is low. The Min-Cut method shows slightly higher loss percentages than the Greedy approach, particularly in smaller networks, varying between 24% and 32%. The Degree-based algorithm has the lowest connectivity loss, with values decreasing from 19% in small networks to 12% in larger networks. This implies that its selected edges are less significant in terms of connectivity.
Figure 14b shows the pairwise connectivity loss percentage against $c$. As $c$ increases, the algorithms select more edges, and a greater loss in connectivity is observed. The proposed algorithm has the highest connectivity loss for all $c$ values. Its loss percentage ranges from 17% at the lowest $c$ value to 56% at $c = 0.35$. Despite selecting a lower number of edges, the proposed method leads to the highest disconnection in the network. The Greedy algorithm shows a more gradual increase in loss, ranging from 11% to 36%, while removing a larger number of edges. The Min-Cut algorithm performs similarly, with loss values increasing from 13% to 39% as $c$ grows. The Degree-based method causes the least disruption, with connectivity loss percentages increasing only from 10% to 23%, indicating that most of the edges removed by this strategy are not critical.
Figure 15 shows the pairwise connectivity loss percentage of each algorithm for different network sizes and $c$ values. The x-axis shows the number of nodes in the network, the y-axis shows the $c$ value, and the z-axis indicates the loss in pairwise connectivity. For all algorithms, the loss percentage increases as either the number of nodes or the $c$ value grows, which is expected due to the larger opportunity for topological disconnection. The proposed algorithm yields the highest connectivity loss for most configurations. This behavior is consistent across all $c$ values and node counts, which indicates its efficiency in identifying vital edges. The Greedy algorithm shows moderate performance: its loss percentages increase steadily but remain below those of the proposed method, especially at higher $c$ values. The Min-Cut algorithm performs better than the Greedy method in medium-sized networks and at higher $c$ values, and it tends to be more sensitive to $c$ than to the node count. The Degree-based algorithm has the lowest connectivity loss, which indicates that the edges removed by this algorithm are less critical. For instance, in the networks with 500 nodes and the highest $c$ value, the pairwise connectivity loss of the proposed algorithm reaches 53%, whereas for all other algorithms, this value remains below 37%. In networks with 50 nodes, the pairwise connectivity loss of the proposed algorithm increases to approximately 60%, while for the Greedy- and Degree-based methods, it stays under 35%.
Figure 16 shows contour plots of the number of selected critical edges against the number of nodes and $c$. For the proposed algorithm (Figure 16a), even in networks with 500 nodes and the highest $c$ value ($c = 0.35$), the number of selected edges remains lower than for the other methods. In contrast, the Greedy algorithm (Figure 16d) selects a large number of edges, ranging from 32 (at 50 nodes and the lowest $c$ value) up to 875 (at 500 nodes and $c = 0.35$), reflecting its focus on local optimization without considering overall efficiency. The Min-Cut method (Figure 16b) selects a moderate number of edges, adapting well to the increased $c$ value. This method is less aggressive than the Greedy algorithm, but it still selects more edges than the proposed method, particularly in larger networks and at higher $c$ values. The Degree-based approach (Figure 16c) shows similar behavior to Min-Cut, but with slightly more variance across networks. It often selects a high number of edges (over 600 in larger networks), most of which have minimal effect on connectivity.
Figure 17 shows the pairwise connectivity of the network after applying the proposed algorithm, against the average node degree and the number of nodes. For a fixed $c$ value, pairwise connectivity increases with both the average degree and the network size, due to the higher number of alternative paths. For the smallest $c$ value (Figure 17a), in the networks with 50 nodes, the pairwise connectivity ranges from approximately 860 to 2450 as the average degree increases from 2 to 10. In contrast, for larger networks (500 nodes), the pairwise connectivity spans from about 9340 to nearly 249,300 over the same range of average degrees.
For a larger $c$ value (Figure 17b), in smaller networks (50 nodes), the pairwise connectivity ranges from 840 to 2430 as the average degree increases from 2 to 10. For larger networks (500 nodes), the pairwise connectivity spans from around 92,350 to 249,500 over the same range of average degrees. For a still larger $c$ value (Figure 17c), in smaller networks (50 nodes), the pairwise connectivity ranges from 552 to 2410 as the average degree increases from 2 to 10. In contrast, for larger networks (500 nodes), the pairwise connectivity spans from around 53,700 to 249,300 across the same range of average degrees. For the largest $c$ value (Figure 17d), in smaller networks (50 nodes), the pairwise connectivity ranges from 465 to 2410 as the average degree increases from 2 to 10. In larger networks (500 nodes), the pairwise connectivity spans from about 50,400 to 249,300 across the same degree range.
At low $c$ values, the proposed algorithm removes only a limited number of critical edges, resulting in higher residual connectivity in all networks. At moderate to high $c$ values, the algorithms can remove more edges, leading to a more noticeable connectivity loss, particularly in sparse (low-degree) or small networks. For instance, with the highest $c$ value and an average node degree under 10, the pairwise connectivity across all networks drops to less than 199,533 after the removal of the selected edges. For larger average degrees, even under high $c$ values, the pairwise connectivity remains relatively robust because, when the average degree in a network is high, each node is connected to more nodes on average, which leads to a greater number of alternative paths between node pairs.