Article

Detection of Critical Links for Improving Network Resilience

1 International Computer Institute, Ege University, Izmir 35100, Turkey
2 Department of Computer Engineering, Izmir Bakircay University, Izmir 35665, Turkey
3 Department of Computer Engineering, Ege University, Izmir 35100, Turkey
* Author to whom correspondence should be addressed.
All authors contributed equally to this work.
Electronics 2025, 14(14), 2904; https://doi.org/10.3390/electronics14142904
Submission received: 5 June 2025 / Revised: 10 July 2025 / Accepted: 17 July 2025 / Published: 20 July 2025
(This article belongs to the Special Issue Network and Information Security)

Abstract

Identifying and eliminating critical links in multi-hop networks is essential for enhancing overall network resilience. In this study, we propose a novel algorithm to detect links that significantly impact the pairwise connectivity of multi-hop networks. We formulate the critical link detection problem as minimizing pairwise connectivity subject to a total edge weight constraint c. The proposed method first computes the maximum flow between neighboring nodes to evaluate strong connections, and then progressively contracts these nodes to expose weaker connections. Throughout this iterative process, the algorithm records previously identified flows to minimize redundant flow computations. At each step, it also keeps track of the cut sets that reduce the network’s pairwise connectivity. Ultimately, it selects the subset of these cut sets whose removal minimizes pairwise connectivity while satisfying the total weight constraint c. This approach consistently identifies fewer yet more impactful critical edges than traditional Min-Cut or Greedy strategies. We evaluate the performance of our method against existing algorithms across various network sizes and node degrees. Experimental results show that the proposed method consistently discovers more influential edges and achieves a 34–38% reduction in pairwise connectivity, outperforming Greedy (22–24%), Min-Cut (24–32%), and Degree-based (12–19%) methods.

1. Introduction

Multi-hop networks operate by transmitting data from a source to a destination through a series of intermediate nodes. This communication paradigm extends the effective coverage area and improves scalability and resilience, making it suitable for wireless sensor networks, vehicular ad hoc systems, and emergency communication scenarios. Each node acts both as a data source and as a relay, forwarding packets toward their destination. Such architectures reduce the need for high transmission power and can significantly increase the operational lifetime of the network. However, they also introduce challenges. Multi-hop communication can increase latency and create complex routing requirements. The distributed nature of these networks makes them more vulnerable to cascading failures, and maintaining stable connectivity under dynamic conditions becomes more difficult. For these reasons, understanding the structural principles of multi-hop networks and identifying critical links is essential for enhancing their overall reliability and performance.
Preserving strong connections between all nodes in multi-hop networks can enhance the resilience of these networks against potential failures or congestion. Identifying the weak links in the network can assist in detecting vulnerable areas and addressing them to improve overall network robustness. A multi-hop network can be modeled as a weighted graph G(V, E, w), where V = {v_1, v_2, …, v_n} represents the set of nodes, E ⊆ V × V denotes the set of edges between the nodes, and w : E → ℕ is a non-negative weight function that assigns a weight to each edge. The weight of each edge may represent factors such as bandwidth, reliability index, or signal strength in wireless networks. Edges with smaller weights can be considered unreliable links or bottlenecks. For example, Figure 1a shows a sample multi-hop network and its corresponding weighted graph, where the weight of the edges reflects the wireless signal strength between nodes.
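To make the model concrete, a weighted graph of this kind can be kept as a simple adjacency map. The node labels and link weights below are hypothetical illustrations, not the values in Figure 1a:

```python
# Undirected weighted graph as an adjacency map: adj[u][v] = w(u, v).
def build_graph(weighted_edges):
    adj = {}
    for u, v, w in weighted_edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    return adj

# Hypothetical link weights (e.g., signal strength between sensor nodes)
g = build_graph([(1, 2, 2), (1, 3, 3), (1, 4, 1), (2, 3, 4)])
assert g[1][4] == 1   # the lightest link is a potential bottleneck
```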
Several graph-theoretic problems can help identify weak links in the network topology. For instance, in the cut edge problem, the objective is to identify edges whose removal separates certain nodes from others. In Figure 1a, the edge (1, 4) is a cut edge, whose removal disconnects node 4 from the other nodes. While cut edges reveal the edges that may disconnect certain nodes, they are not necessarily critical in the broader network because they may only disconnect a small set of nodes.
Another relevant problem is the minimum cut problem, which finds a set of edges with the minimum total weight that separates the graph into disconnected components. Formally, a minimum cut set of G(V, E, w) is a set C ⊆ E such that G(V, E/C, w) is disconnected, and the sum of weights in C, Σ_{e ∈ C} w(e), is minimized. A graph may have multiple minimum cut sets with the same minimum total weight. For example, the network in Figure 1a has several minimum cut sets, including C_1 = {(3, 6), (5, 8)}, C_2 = {(3, 6), (2, 7)}, and C_3 = {(1, 2), (1, 3)}, all with a total weight of 5. However, the minimum cut set may not necessarily reveal the most critical edges with the greatest impact on the network, as the cut might disconnect only a small set of nodes.
This study focuses on identifying the smallest set of critical edges whose removal would disconnect the maximum number of nodes from each other. These edges have the most significant impact on network connectivity, and their failure may sever paths between a large number of nodes. We use pairwise connectivity among nodes to measure overall network connectivity robustness and aim to identify the smallest set of edges that minimizes overall pairwise connectivity across the network. The main difference between critical edges and minimum cut sets or cut edges is that critical edges may not necessarily be cut edges or members of minimum cuts. However, when removed as a group, critical edges sever communication paths between a large number of nodes. In other words, the cut edge and minimum cut problems do not take into account the number of disconnected nodes, whereas the critical edge problem focuses on identifying the edges whose removal separates the maximum number of nodes.
The problem of finding the critical edge set of a given graph is well known and extensively studied in graph theory, with applications across various domains, such as wireless multi-hop networks [1], road networks [2], VLSI design [3], electric power networks [4], and other optimization problems [5,6]. For example, in a wireless ad hoc network, the critical edge set represents a subset of critical links with minimum weight, whose failure would terminate all communication paths to a subset of active nodes. Similarly, in a road network, the critical edge set highlights the set of critical streets that may serve as bottlenecks in traffic, and their closure may halt all traffic between certain parts of the city. Identifying such a set allows for reinforcing the critical sections of the network, thereby enhancing its fault tolerance or bandwidth. In general, determining the critical edge set of networks provides valuable insights into their weak points, bottlenecks, clusters, reliability, lifetime, and fault tolerance.
The mentioned problems may impose constraints on the total weight of the selected edges. Given a weighted graph G and an integer value k, the weight-constrained cut edge detection problem aims to find a subset of edges whose total weight does not exceed k and whose removal maximizes the number of disconnected components [7]. Similarly, the weight-constrained minimum cut problem seeks the smallest possible edge set whose total weight does not exceed k. Most of the proposed methods for the unconstrained problems can be adapted, through minor modifications, to estimate edge sets with a total weight not exceeding k.
In this paper, we propose a novel algorithm for identifying critical edges in a given network that integrates the well-known max-flow computations with a node merging strategy. The merging operation is fundamentally inspired by the principles proposed in the Stoer–Wagner algorithm [8] for computing global Min-Cuts, which iteratively contracts non-critical edges until the graph is reduced to a single node. The core rationale behind merging non-critical edges is based on the following observations:
  • Edges that do not belong to a minimal cut separating nodes s and t cannot participate in any smaller cut discovered in subsequent iterations. Therefore, merging their endpoints does not eliminate any potential smaller cut that could later emerge as a candidate for the critical edge set.
  • By systematically maintaining and updating the smallest cuts identified throughout the iterations, we ensure that no critical edge is inadvertently removed from consideration.
Consequently, this merging process effectively contracts regions of the graph that are internally well connected with respect to the current threshold. This approach preserves the correctness of the critical edge identification process and guarantees that no true minimal cut edges are overlooked.
After computing the max-flow between two neighboring nodes, we eliminate the edges (by merging their endpoints) that carry a flow exceeding a predefined threshold and therefore cannot be part of any critical edge set. In each iteration, we retain the smallest detected cuts up to that point and continue the contraction process until all nodes have been merged into a group.
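The contraction step described above can be sketched as follows. The function name and adjacency-map layout are illustrative, not the paper's implementation; parallel edges created by a merge are combined by summing their weights:

```python
def contract(g, u, v):
    """Merge node v into node u in an adjacency map g[a][b] = weight.
    Parallel edges created by the merge are combined by summing weights."""
    for x, w in list(g[v].items()):
        if x != u:
            g[u][x] = g[u].get(x, 0) + w
            g[x][u] = g[u][x]
        g[x].pop(v, None)   # drop all references to the absorbed node
    del g[v]
    g[u].pop(v, None)
    return g

# Triangle a-b (2), a-c (3), b-c (4): contracting (a, b) leaves a single
# edge a-c whose weight is the sum 3 + 4 = 7.
g = {'a': {'b': 2, 'c': 3}, 'b': {'a': 2, 'c': 4}, 'c': {'a': 3, 'b': 4}}
contract(g, 'a', 'b')
```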
The main contributions of this paper are as follows:
  • A novel pairwise connectivity-based node removal algorithm is introduced to identify the most critical edges that have the highest impact on pairwise connectivity.
  • The proposed method is formally defined, and its computational complexity is analyzed.
  • A comprehensive comparative analysis is conducted with state-of-the-art algorithms, including Greedy, Degree-based, and Min-Cut approaches, across multiple networks.

2. Related Work

The identification of critical nodes or critical edges that significantly impact the connectivity of networks has been extensively studied in various research works [9,10]. While some studies focus on identifying simple cut edges [11,12] or cut vertices [13,14], other works aim to find the group of critical nodes whose failures sever all communication paths between a large number of nodes [15,16,17].
Finding cut edges (bridges) is a classic problem in graph theory that asks for the edges whose removal partitions the graph. Tarjan has introduced the first linear-time depth-first search (DFS) algorithm to detect all bridges in an undirected graph [18]. Hopcroft and Tarjan had earlier devised an efficient DFS-based method for finding articulation points and biconnected components, which can also identify bridges [19]. Schmidt has developed a simplified linear-time method that simultaneously finds all cut edges and cut vertices in a graph [20]. More recently, Cairo has proposed a streamlined single-traversal algorithm to compute all bridges (and articulation points) in an undirected graph [21].
Min-Cut detection is another problem that finds a set of edges with minimum total weight whose removal partitions the graph. Generally, network flow and edge contraction are the main approaches to the Min-Cut problem, and they have been used in different ways in several algorithms. The size of the minimum cut between arbitrary nodes s, t ∈ V equals the maximum flow between them [22]. Therefore, max-flow algorithms can be used to find a Min-Cut between arbitrary nodes s and t in any graph [23]. Max-flow is a well-studied problem with different polynomial-time algorithms [24]. Ford and Fulkerson have proposed a basic algorithm for finding the maximum flow between arbitrary nodes in a given graph with O(f · |E|) time complexity [25], where f is the maximum possible flow between arbitrary nodes. This approach repeatedly finds an augmenting path, i.e., a path from the source to the target with available capacity on all of its edges, in a residual graph, sends a flow along that path, and updates the weights of the edges in the detected path; the process continues until no augmenting path remains between the nodes.
To find the minimum cuts between all pairs of nodes, the Edmonds–Karp algorithm uses the Breadth-First Search (BFS) method to find the augmenting paths, which leads to O(|V| · |E|²) time complexity [26]. Dinic has proposed another algorithm based on the Ford–Fulkerson method. It uses a combination of a level graph and BFS for path finding, which bounds the time complexity of the algorithm to O(|V|² · |E|) [27,28]. Goldberg and Tarjan have proposed a push–relabel algorithm that finds the maximum flow with O(|V|³) time complexity [29]. The algorithm has push and relabel operations and maintains a flow excess value in each node. It runs until the graph has no node with a positive excess. The push operation transfers flow from a node to its neighbors over a residual edge; the relabel operation determines the edges for pushing the flow. Orlin has proposed an algorithm that solves the max-flow problem as a sequence of improvement phases with O(|V| · |E|) time complexity [30]. To find the minimum cuts of a graph using max-flow algorithms, we need to repeat the max-flow algorithm between a source and all other nodes in the graph, which increases the time complexity of the algorithms by a factor of O(|V|). Hao and Orlin have proposed an algorithm that uses the idea of the push–relabel maximum-flow algorithm to find the minimum cut of a given weighted graph. The algorithm finds a Min-Cut of G as a sequence of at most 2|V| − 2 maximum-flow problems whose combined running time equals the running time of a single maximum-flow problem. The resulting algorithm has O(|V| · |E| · log(|V|²/|E|)) time complexity. Generally, using the max-flow-based algorithms yields only one of the Min-Cuts in the graph.
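As a concrete reference point for the BFS-based approach described above, here is a minimal Edmonds–Karp sketch. The adjacency-map representation and function name are our own; capacities are assumed symmetric for an undirected graph:

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow via shortest augmenting paths found with BFS.
    cap: adjacency map cap[u][v] = capacity (symmetric for undirected)."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in cap:                     # ensure reverse residual edges exist
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:  # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:           # no augmenting path left
            return flow
        path, v = [], t
        while parent[v] is not None:  # walk back from t to s
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:             # push the bottleneck along the path
            res[u][v] -= b
            res[v][u] += b
        flow += b

# Small undirected example: max flow (= min cut weight) between 0 and 3
cap = {0: {1: 3, 2: 2}, 1: {0: 3, 2: 1, 3: 2},
       2: {0: 2, 1: 1, 3: 3}, 3: {1: 2, 2: 3}}
```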
Nagamochi and Ibaraki have proposed an algorithm based on the edge contraction method [31]. Their algorithm constructs spanning forests and iteratively contracts edges with high weights, which leads to O(|V| · |E| + |V|² · log|V|) time complexity on undirected graphs with non-negative weights. Stoer and Wagner have proposed another edge contraction-based approach, which detects the minimum cut of a given weighted graph with O(|V| · |E| + |V|² · log|V|) time complexity [8]. Starting from an arbitrary node, the algorithm visits the node most tightly connected to the current visited set and continues this process until all nodes have been visited. The total weight of the edges between the last visited node and its neighbors is considered a candidate minimum cut, or cut-of-the-phase. The algorithm stores the cut-of-the-phase in a variable and merges the last visited node with its previously visited neighbor. The newly merged node may have multiple edges to the other nodes. For example, if we merge nodes a and b, where both have an edge to another node c, then node c will have two edges to the merged node ab. In this case, the edges are merged and the weight of the new edge is set to the total weight of the existing edges. After merging the nodes, the algorithm repeats the same process, starting from an arbitrary node and visiting the most tightly connected neighbor of each visited node. The last two visited nodes are merged, and the detected minimum cut size is updated if the total weight of edges between the last visited node and its neighbors is smaller than the previously detected cut size. This process continues until all nodes are merged into a single node, at which point the smallest cut-of-the-phase gives the minimum cut size and its corresponding cut set.
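The phase structure described above can be condensed into a short Python sketch of the Stoer–Wagner algorithm. This is an illustrative reimplementation, not the authors' code: it uses a naive O(|V|³) quadratic scan instead of the priority queue that yields the stated complexity:

```python
def stoer_wagner(weights):
    """Global minimum cut of an undirected weighted graph (naive sketch).
    weights: symmetric adjacency map weights[u][v] = w(u, v)."""
    g = {u: dict(nbrs) for u, nbrs in weights.items()}
    groups = {u: {u} for u in g}          # original nodes merged into u
    best_cut, best_side = float('inf'), set()
    while len(g) > 1:
        nodes = list(g)
        a = [nodes[0]]                    # visited order of this phase
        rest = set(nodes[1:])
        while rest:
            # most tightly connected node to the visited set
            nxt = max(rest, key=lambda v: sum(g[v].get(u, 0) for u in a))
            a.append(nxt)
            rest.remove(nxt)
        t, s = a[-1], a[-2]
        cut_of_phase = sum(g[t].values()) # cut separating t's group
        if cut_of_phase < best_cut:
            best_cut, best_side = cut_of_phase, set(groups[t])
        for v, w in list(g[t].items()):   # merge t into s, summing edges
            if v != s:
                g[s][v] = g[s].get(v, 0) + w
                g[v][s] = g[s][v]
            g[v].pop(t, None)
        del g[t]
        groups[s] |= groups[t]
    return best_cut, best_side

# Two dense triangles joined by one light edge: the min cut is that edge.
g = {0: {1: 3, 2: 3}, 1: {0: 3, 2: 3}, 2: {0: 3, 1: 3, 3: 1},
     3: {2: 1, 4: 3, 5: 3}, 4: {3: 3, 5: 3}, 5: {3: 3, 4: 3}}
```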
Brinkmeier has improved the Stoer–Wagner algorithm by contracting more than one pair of nodes (if possible), which reduces the worst-case time complexity of the algorithm to O(max(log|V|, min(|E|/|V|, δ/ε)) · |V|²), where ε is the minimum edge weight and δ is the minimum weighted degree of nodes [32].
Besides the exact algorithms, there are several approximated and randomized algorithms that can find the minimum or near-minimum cut of weighted graphs with high probability [33,34,35]. A practical method for finding all minimum cuts of an undirected weighted graph has been proposed in [36]. This algorithm uses different local- and connectivity-based edge contraction heuristics to contract and eliminate the edges that cannot be a member of any minimum cut. After contracting the edges and reducing the size of the graph, the proposed algorithm in [31] is used to find the minimum cuts of the remaining graph [37]. The proposed parallel algorithms for the Min-Cut problem perform the contraction [38] or path detection [39] tasks in parallel, which allows the available processors to reduce the time complexity of the algorithms. The distributed algorithms assume that the graph of the entire network is not available and use message passing to find the disjoint paths or maximum flow between the nodes [1,40,41,42,43].
Another similar problem is determining the edge connectivity of a graph, i.e., the smallest number of edges whose removal disconnects the graph. Gabow has introduced an efficient algorithm that computes the edge connectivity of undirected graphs in O(n³) time [44]. Karger has presented a randomized contraction algorithm achieving near-linear expected time [45]. Nagamochi and Ibaraki have offered a deterministic approach that avoids explicit flow computations and reduces the number of edge contractions [46]. More recent work by Kawarabayashi and Thorup achieves near-linear time for finding minimum cuts and k-edge connectivity [47]. Table 1 summarizes similar problems and existing algorithms related to this study.

3. Problem Formulation

Maintaining a strong connection between the nodes in a multi-hop network is a vital requirement for most applications. Finding critical edges may help to identify the weak connections that have a higher impact on network connectivity than the other edges. The problem of critical edges can be formally defined as follows:
Critical Edge Problem: Given a weighted network G(V, E, w) and a positive integer c, find a subset of edges F ⊆ E such that Σ_{e ∈ F} w(e) ≤ c and the pairwise connectivity of G(V, E/F) is minimal.
In this study, we consider the pairwise connectivity (PC) of a network as the total number of nodes that can be reached from each node. So, the pairwise connectivity of a connected network G(V, E, w) will be
PC(G) = |V| × (|V| − 1)
In a disconnected graph with a set of separated components, the pairwise connectivity can be calculated as follows:
PC(G) = Σ_{s ∈ Comp(G)} |V_s| × (|V_s| − 1)
where C o m p ( G ) is the set of components in G and | V s | is the number of nodes in component s. Since we primarily focus on undirected, connected networks without accounting for multi-path robustness metrics (such as the number of alternative paths), we define the pairwise connectivity ( P C ) as the maximum theoretical number of connected node pairs in a connected network. At each iteration, we compute the actual P C of the residual network by enumerating all reachable node pairs after edge removals. This metric provides a lower bound on the available connectivity between nodes, as it disregards redundant paths and edge weights.
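Under these definitions, PC can be computed by finding connected components and summing |V_s| × (|V_s| − 1). A minimal union-find sketch (our own helper, not part of the paper):

```python
from collections import defaultdict

def pairwise_connectivity(n, edges):
    """PC(G) = sum over components s of |V_s| * (|V_s| - 1),
    computed with a small union-find over nodes 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = defaultdict(int)
    for v in range(n):
        sizes[find(v)] += 1
    return sum(s * (s - 1) for s in sizes.values())

# A connected 8-node graph has PC = 8 * 7 = 56; cutting it into two
# components of 4 nodes each drops PC to 2 * (4 * 3) = 24.
path = [(i, i + 1) for i in range(7)]
```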
Figure 2 shows a sample network and its pairwise connectivity after removing different edges. In Figure 2a, we have a connected network with eight nodes, so each node has a path to seven other nodes, which leads to PC = 56. If we remove the edge with the smallest weight, which is edge (4, 6), node 4 is separated from the other nodes, which reduces PC to 42 (Figure 2b). Removing the three edges with the smallest weights, namely (1, 2), (1, 3), and (4, 6), disconnects nodes 1 and 4 from the other nodes and reduces PC to 30 (Figure 2c). Removing the edges (5, 7) and (3, 6) with a total weight of 17, as shown in Figure 2d, generates a network with PC = 24 (two separated components, each with PC = 12). The aim of the proposed algorithm in this paper is to find a subset of edges F ⊆ E with total weight at most c such that the PC of G(V, E/F) is minimized.
As a Greedy strategy, we may remove the edges with the smallest weights one by one until the total weight of the removed edges reaches c. However, this approach can produce highly inefficient solutions that are far from the optimal. For example, Figure 3 illustrates the steps of the Greedy algorithm on a sample network for c = 20 . In the first step, edge ( 4 , 7 ) is removed because it has the lowest weight among all the edges (Figure 3a). In the subsequent steps, edges ( 0 , 3 ) , ( 2 , 3 ) , and ( 0 , 6 ) are respectively removed from the network (Figure 3b–d). After removing edge ( 0 , 6 ) , the total weight of the removed edges reaches 16, and selecting any additional edge would exceed the limit of 20. The pairwise connectivity of the resulting network in Figure 3e is 30. In contrast, the most critical edges in this network with a total weight under 20 are ( 0 , 6 ) , ( 2 , 5 ) , and ( 4 , 6 ) . Removing these edges reduces the pairwise connectivity to P C = 18 , which is substantially lower than 30.
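The Greedy baseline described above amounts to sorting edges by weight and removing them until the next removal would exceed the budget. A small sketch with hypothetical weights (the actual weights of Figure 3 are not reproduced here):

```python
def greedy_edges(weighted_edges, c):
    """Greedy baseline: remove edges in increasing order of weight and
    stop when the next (lightest remaining) edge would exceed budget c."""
    removed, total = [], 0
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        if total + w > c:
            break
        removed.append((u, v))
        total += w
    return removed, total

# Hypothetical weights loosely following the Figure 3 walkthrough:
edges = [(4, 7, 2), (0, 3, 3), (2, 3, 5), (0, 6, 6), (2, 5, 7), (4, 6, 9)]
removed, total = greedy_edges(edges, 20)
```

As the walkthrough notes, this strategy stops once the cumulative weight of the lightest edges approaches c, regardless of how little the removals affect pairwise connectivity.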
Another possible solution is removing the minimum cut set of the network repeatedly until the total weight of the removed edges reaches c. This strategy may also produce an inaccurate answer. Figure 4, for example, shows the steps of the Min-Cut-based solution for c = 20. In the first step, the minimum cut set {(4, 6)} is removed from the network (Figure 4b). In the second step, by removing the minimum cut set {(0, 3), (2, 3)}, the total weight of the removed edges becomes 9. The weight of the next minimum cut set is 12, which exceeds the c = 20 limit. Therefore, the minimum cut set-based approach leads to a network with PC = 30, which is much higher than the optimal answer of 18. In the next section, we propose a max-flow-based approach that generates accurate solutions close to the optimal answer.

4. Proposed Algorithm

The main idea of the proposed algorithm is to find the edges that minimize the total maximum flow between the nodes. Removing the edges that minimize the total flow may decrease the number of paths between the nodes and minimize the pairwise connectivity.
The proposed algorithm integrates max-flow-based Min-Cut detection with an iterative edge contraction strategy to progressively simplify the network while preserving essential connectivity properties needed for identifying critical edges. The main steps and order of the operations in the proposed algorithms are as follows:
  • Min-Cut Search: Before any contraction, the algorithm identifies a minimal cut separating the currently visited set (denoted by B) from an unvisited candidate node v. This is accomplished by executing a max-flow procedure on the current state of the graph. However, if the total weight of the direct edges between v and B, combined with the flows passed from v in previous iterations, exceeds the threshold c, the max-flow search is skipped.
  • Selection of Candidate Cuts: If the total weight of the minimum cut between B and v is less than the specified threshold c, the corresponding cut set is stored along with its resulting pairwise connectivity value. This guarantees that potential critical edge sets are collected before any modification of the graph topology through merging.
  • Edge Contraction: Node v is added to the set B, effectively contracting the edges between v and B. This merging step simplifies subsequent iterations by reducing the number of nodes and aggregating local connectivity information.
  • Iterative Refinement: The process is then repeated and the algorithm selects a new unvisited node v adjacent to B, computes Min-Cuts between B and v on the contracted graph, records candidate cut sets that yield low pairwise connectivity, and continues merging nodes until B includes all nodes.
Figure 5 shows the general steps of the proposed algorithm on a sample network for c = 20. The algorithm starts from an arbitrary node, e.g., node v, and adds it to the visited node set (Figure 5a). The green nodes in Figure 5 are visited nodes. If the total weight of the edges connected to v is less than c, the edges are added to a potential critical edge set denoted as Z. In Figure 5a, we add [42, (0,3), (2,3)] to Z, where 42 is the PC of the network after removing the (0,3) and (2,3) edges. In each step, a minimum cut between a neighbor of the visited group and the visited group is detected to determine how strongly that neighbor is connected to the visited nodes. In Figure 5b, for example, if we select node 0, the minimum cut between node 0 and the visited group will have size 8, and its cut set will include the same (0,3) and (2,3) edges.
In the next step, we may select node 2 and detect a minimum cut with size 24, which is higher than 20. So, we add node 2 to the visited group without updating Z (Figure 5c). In the next step, we may find a cut between node 6 and the visited group, which will contain the edges (0,6) and (2,5) with a weight of 12, which is smaller than 20. So, we add [26, (0,6), (2,5)] as potential critical edges to Z, where 26 is the PC of the network after removing the selected edges. In the next step, we select node 5 and find a minimum cut between this node and the visited group (Figure 5d). This cut will contain the edges (2,5) and (4,6) with size 13. Since 13 < 20, we add [24, (2,5), (4,6)] to Z. In the next step, we select node 1 and find a minimum cut between this node and the visited group (Figure 5e). This cut will contain the edges (1,5) and (4,6) with size 17. Since 17 < 20, we add [26, (1,5), (4,6)] to Z. In the next step, we select node 4 and find a minimum cut between this node and the visited group (Figure 5f). This cut will contain the edges (1,4) and (4,6) with size 22. Since 22 > 20, we continue to the next step without updating Z, select node 7, and find a minimum cut between this node and the visited group (Figure 5g). This cut will contain the edge (4,7) with size 2. Since 2 < 20, we add [42, (4,7)] to Z. After visiting and merging all nodes, we sort the elements in Z by their PC values, which leads to the following set:
Z = { [24, (2,5), (4,6)], [26, (1,5), (4,6)], [26, (0,6), (2,5)], [42, (0,3), (2,3)], [42, (4,7)] }
We then select a combination of candidate sets from Z that yields the lowest P C while ensuring that the total weight of the selected edges remains below c. After sorting the sets in Z by their P C values, the algorithm iteratively selects the sets with the lowest P C until the total weight reaches c. During this process, the edges included in the final selected set are removed from the remaining sets in Z to ensure that the weight of each edge is counted only once. In cases where multiple candidate sets have the same P C value, we prioritize the set with the smallest total edge weight. If these weights are also identical, we apply a consistent tie-breaking rule by selecting the set that was added first to Z, based on the insertion order. This strategy maintains consistency across runs.
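The selection rule above (order by PC, break ties by weight and then by insertion order, count each edge's weight once) can be sketched as follows. The `Z` entries, the `weight` map, and all edge labels are illustrative, and the tie-breaking weights are computed once up front as a simplification:

```python
def select_from_Z(Z, weight, c):
    """Combine candidate cut sets: order by (PC, set weight, insertion
    index) and greedily take sets while the distinct selected edges
    stay within budget c; each edge's weight is counted only once."""
    def wsum(edges):
        return sum(weight[e] for e in edges)
    order = sorted(range(len(Z)), key=lambda i: (Z[i][0], wsum(Z[i][1]), i))
    chosen, total = set(), 0
    for i in order:
        new = [e for e in Z[i][1] if e not in chosen]
        w = sum(weight[e] for e in new)
        if new and total + w <= c:
            chosen.update(new)
            total += w
    return chosen, total

# Illustrative candidates: (PC after removal, cut edges)
weight = {'e1': 5, 'e2': 6, 'e3': 4, 'e4': 2}
Z = [(24, ['e1', 'e2']), (26, ['e3', 'e2']), (26, ['e4'])]
chosen, total = select_from_Z(Z, weight, 13)
```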
In the above example, the set {(2,5), (4,6)} is selected first as the critical edges, with a total weight of 11. We then remove these edges from the remaining sets in Z, which results in the following updated set:
Z = { [26, (1,5)], [26, (0,6)], [42, (0,3), (2,3)], [42, (4,7)] }
At this point, we have two candidates with the same PC, so we select the set with the lowest total weight, which is {(0,6)}. This set can be included since its weight is only 6, keeping the cumulative weight within the allowed limit. After selecting this set, no further candidates can be added without exceeding the constraint imposed by c. Figure 5i illustrates the resulting union set, which includes the edges (2,5), (4,6), and (0,6), leading to a final pairwise connectivity of PC = 18. Figure 6 shows the steps involved in selecting the final edges from the candidate set Z.
To find the Min-Cut between visited and unvisited nodes, we may use the max-flow algorithm [48]. However, starting the max-flow algorithm for each merging process may significantly increase the time complexity of the algorithm. The proposed algorithm uses the previously calculated flows and local connections to minimize the number of flow computations. The main idea of the algorithm is to contract the non-critical edges as fast as possible. We try to use local information (i.e., edge weight or previously passed flow) to decide whether an edge can be critical or not. If we cannot decide using local information, we find the flow between the endpoints of the edge. At the end of the contraction process, we select the best possible set of critical edges.
Let w(e) denote the weight of edge e ∈ E. An edge e cannot be critical if w(e) > c, as its selection would immediately violate the constraint limit. Similarly, an edge e = (u, v) is not critical if F[u, v] > c, where F[u, v] represents the flow passing between nodes u and v. In this case, u and v are strongly connected through other paths in the network, implying that there is no minimum cut with a total weight less than c that both includes edge (u, v) and separates u from v. Since any minimum cut that contains e = (u, v) must necessarily disconnect u and v, the condition F[u, v] > c ensures that such a cut cannot exist, and thus e cannot be critical.
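This pruning rule reduces to a one-line test; the names `w` (edge weights) and `F` (recorded pairwise flows) are our own:

```python
def edge_may_be_critical(u, v, w, F, c):
    """An edge can only be critical if neither its own weight nor the
    flow already recorded between its endpoints exceeds the budget c."""
    return w[(u, v)] <= c and F.get((u, v), 0) <= c

# Weight 5 fits the budget 7, but a recorded flow of 9 rules the edge out.
assert edge_may_be_critical(1, 2, {(1, 2): 5}, {(1, 2): 9}, 7) is False
```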
In the proposed algorithm, we add the merged nodes to a set named B. Initially, B is empty and the algorithm terminates when all nodes are added to B. The algorithm starts from an arbitrary node, e.g., v_0, and adds it to B. We select a random neighbor of v_i ∈ B, e.g., v_j, which is not in B, and contract the (v_i, v_j) edge by merging them. Obviously,
(v_i, v_j) ∈ E is not critical if w(v_i, v_j) > c or F[v_i, v_j] > c
Since v_i and v_j are neighbors, the value of w(v_i, v_j) is a part of F[v_i, v_j]. More precisely, we have
F[v_i, v_j] = w(v_i, v_j) + (F[v_i, v_j] in G/(v_i, v_j))
Let N(v) be the set of neighbors of node v ∈ V. We may write
F[v_i, B] = (Σ_{u ∈ B ∩ N(v_i)} w(v_i, u)) + F[v_i, B] in G/{(v_i, u) : u ∈ B ∩ N(v_i)}
We may directly add a node v_i ∉ B to B if
Σ_{u ∈ B ∩ N(v_i)} w(v_i, u) > c
We have to find some flow from v_i to B based on relation (4) if
Σ_{u ∈ B ∩ N(v_i)} w(v_i, u) ≤ c
If F[v_i, B] > c, then v_i is not critical and can be added to the set B. Otherwise, the total weight of the cut between v_i and B is not larger than c, and this cut is a candidate for the critical edge set. In this case, we add the newly detected cut and its related PC to the candidate critical edge set. This process continues until all nodes are added to the set B. When we find the flow between v_i and B, every node on the detected flow paths will have at least two different paths to B after merging with node v_i. Formally, if we find a flow path as (v, u_1, u_2, …, u_i ∈ B), we will have
∀ 1 ≤ j < i : p_1 = (v ∈ B, u_1, …, u_j) and p_2 = (u_j, …, u_i ∈ B)
Therefore, we keep the passing flows through the nodes to use them in future contraction operations and decrease the number of flow detections. Using the direct edges to the set B, together with the flows detected in previous searches, we can merge most of the nodes without starting a new max-flow procedure.
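The contraction step can be rendered as a small bookkeeping routine. The sketch below is our own simplified version (B as a Python set, direct as a per-node dictionary; it omits the residual-flow tracking of the full algorithm): merging a node into B increases each remaining neighbor's total direct weight toward B, and any node whose direct weight already exceeds c can be merged without a flow search.

```python
def merge(v, black, direct, adj):
    """Contract node v into the black set B and update the total direct
    edge weight of each remaining white neighbor toward B."""
    black.add(v)
    direct[v] = 0
    for u, w in adj[v].items():
        if u not in black:
            direct[u] = direct.get(u, 0) + w

def mergeable_without_flow(direct, black, c):
    """White nodes whose direct weight to B already exceeds c; these can
    be merged immediately, with no max-flow call."""
    return [u for u, d in direct.items() if u not in black and d > c]
```

After merging node 0, any neighbor whose accumulated direct weight into B exceeds c is merged for free on the next pass.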
Algorithm 1 shows the general steps of the proposed method. Initially, we set Z as an empty set (Z is the set of possible critical edges) and add a random node from V to B (lines 1–2). While there are unvisited nodes in the network, we select a node v from V∖B that has at least one neighbor in B (lines 3 and 4). If the total weight of the edges between v and B is smaller than c, we find the flow between v and any node in B using a BFS tree (lines 5–7). The flow detection stops when there is no more flow or the flow detected thus far, f, exceeds c. If f ≤ c, then we have found a new cut, namely the set of edges between the visited and unvisited nodes in the last BFS tree (lines 8 and 9). In that case, we find the pairwise connectivity of G(V, E∖cut); this value, along with the corresponding cut, is used to obtain the possible critical edge set Z (lines 10 and 11). Before repeating the loop for the next node, we add v to B (line 12). Also, any node u ∈ V∖B is added to B if u has a neighbor in B and the total weight of its edges to B plus the existing flow on its other edges is bigger than c (lines 13–15). After adding all nodes to B, we select the subset of cut edges in Z with the lowest pc values such that the total weight of the selected edges does not exceed c, and return the selected edges as the critical edges.
Figure 7 shows the flowchart and Figure 8 illustrates the idea of the proposed algorithm on a sample network for c = 7. We assume that the algorithm starts from node 0 and adds it to the set B. Let node 3 be the next selected node for merging. Since w(0, 3) = 9 > 7, the edge (0, 3) cannot be critical, and we merge nodes 0 and 3 (Figure 8b). Let node 2 be the next selected node for merging. Since w(B, 2) is 7 (not bigger than c), we find some flow between node 2 and B. The maximum flow between node 2 and B in G/(2, B) is 4 (Figure 8c). So the total flow between node 2 and set B is 4 + 7 = 11, which is bigger than c. Therefore, there are no critical edges between node 2 and B. Also, the sum of passing flow over the edges connected to node 6 is 8, which is bigger than c (Figure 8c). So, we can add both nodes 2 and 6 to B at the same time (Figure 8d).
Algorithm 1: Critical Edges (general)
  • Input: G(V, E), c.  /* a graph G and the upper limit c */
  • Output: F ⊆ E.  /* a set of critical edges */
  1: Z ← ∅.
  2: Set B ← {v} where v ∈ V is an arbitrary node.  /* add an arbitrary node to B */
  3: while V ≠ B do
  4:    select a random node v ∈ V∖B with at least one neighbor in B.
  5:    set f ← total weight of edges between v and B, and remove these edges from G.
  6:    while f < c and some flow is available from v to B do
  7:       using a BFS tree, find some flow from v to B and add it to f.
  8:    if f ≤ c then  /* a candidate critical edge set detected */
  9:       cut ← {(u, w) ∈ E : u ∈ BFS tree and w ∉ BFS tree}.  /* the edges between reachable and unreachable nodes in the BFS tree form a cut */
  10:      pc ← pairwise-connect(G(V, E∖cut)).
  11:      Z ← Z ∪ (pc, cut).
  12:   B ← B ∪ {v}.
  13:   for each node u ∉ B with a neighbor in B do
  14:      if total weight of edges between u and B + the flow on the other edges of u > c then
  15:         B ← B ∪ {u}.
  16: select F ⊆ Z with the lowest pc s.t. Σ_{w_i ∈ F} w_i ≤ c.
  17: return F.
In the next iteration, we select node 1. Since w(1, B) is smaller than c, we find some flow between 1 and B. After ignoring the direct edge between 1 and B, we may find a flow of 1 over the path 1, 5, 4, B by building a BFS tree in the network. So, the total flow from 1 to B will be 5. Since 5 < c, we have a critical edge set which separates node 1 from nodes {0, 2, 3, 6}. At this stage, the edges between the visited and unvisited nodes in the last BFS tree form the candidate critical edge set. So, we have cut_{1,B} = {(1, 2), (1, 6), (4, 6)}. The pairwise connectivity of G(V, E∖cut_{1,B}) will be (4 × 3) + (3 × 2) = 18. After adding [18, {(1, 2), (1, 6), (4, 6)}] to Z and node 1 to B, we select node 5 for merging (Figure 8e). The weight of the direct edge between 5 and B is 4, and this node already has 1 unit of passing flow (over node 4) to B. So the current cut size between 5 and B is 5, which is not bigger than c. Therefore, we try to find some new flow between 5 and B, which will be unsuccessful because the whole capacity of the edge (4, B) is already occupied. Considering the edges between the visited and unvisited nodes in the last BFS tree, the second candidate critical edge set will be cut_{5,B} = {(1, 5), (4, 5)} with pairwise connectivity (5 × 4) + (2 × 1) = 22. We add node 5 to B and [22, {(1, 5), (4, 5)}] to Z. Finally, node 4 is selected; the total weight of its edges to B plus its passing flow is 7. Since 7 = c, we have a new critical edge set between 4 and the other nodes, cut_{4,B} = {(4, 6), (4, 5)}, with pairwise connectivity 30. So, the algorithm adds [30, {(4, 6), (4, 5)}] to Z and node 4 to B and finishes the while loop. Now, we have three candidate critical edge sets in Z. We start from the lowest pairwise connectivity, which is 18, and add the edges {(1, 2), (1, 6), (4, 6)} to the final critical edge set F.
So, the total weight of the selected edges in F will be 5, and no more edges would be added to F because adding an edge would increase the total edge weight to more than c. Therefore, the algorithm selects the edges { ( 1 , 2 ) , ( 1 , 6 ) , ( 4 , 6 ) } as critical edges, whose removal would decrease the pairwise connectivity of the network to 18.
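The pairwise connectivity used throughout this example counts the ordered pairs of distinct nodes that remain connected, i.e., Σ |C| (|C| − 1) over the connected components C of the residual graph. A minimal, standard-library sketch (the names are ours):

```python
from collections import Counter

def pairwise_connectivity(n, edges):
    """Sum |C|*(|C|-1) over connected components of the graph on nodes
    0..n-1 with the given edge list, via union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = Counter(find(x) for x in range(n)).values()
    return sum(s * (s - 1) for s in sizes)
```

Components of sizes 4 and 3 give 4·3 + 3·2 = 18, matching the value obtained after removing {(1, 2), (1, 6), (4, 6)} in the example above.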
More detailed steps of the proposed algorithm are provided in Algorithm 2. The algorithm accepts the adjacency matrix of the network G and outputs the detected critical edges. We assume that the adjacency matrix holds the weights of the edges between the nodes. For example, if the weight of the edge between nodes u and v is 3, then G[u, v] and G[v, u] will be 3. To find some flow between a white node and the black node set, we use the BFS tree. The parent array keeps the parent node ID of each node in the established BFS tree. The color array keeps the color of each node, which is initially white for all nodes. The direct array keeps the total weight of the edges between a white node and its black neighbors. If node i has no black neighbor, direct[i] will be 0. As another example, if node i has two black neighbors with edge weights 2 and 5, direct[i] will be 7. The merge(v) procedure adds node v to the black node set and updates the direct value of its neighbors (lines 1–3). In this procedure, we change the color of node v to black, reset direct[v] to zero, and, for any i ∈ N_w(v) (the white neighbors of v), add w(i, v) to direct[i]. Note that during the flow detection, we may change w(i, v) and w(v, i), but their sum is always constant and equals twice the initial w(i, v). The flow(v) procedure sums up the passing flows (up to this point) over the edges to the white neighbors of node v (lines 4–7). The passing flow on each edge comes from the flow searches performed for the previously (before v) merged nodes.
The BFS(s) procedure performs a breadth-first search starting from node s until it finds a black node (lines 8–18). The parent of each node in the established BFS tree is stored in the parent array. N(u) returns the set of neighbors of u that have an edge with positive weight, so some flow can be passed to each neighbor i ∈ N(u). The procedure returns the ID of the first detected black node. If the procedure cannot find any black node, it returns −1 to indicate that the search was unsuccessful.
The findFlow(s) procedure (line 19) finds some flow from the selected white node s to a black node based on the Ford–Fulkerson algorithm. This procedure calls the BFS procedure to find a path to a black node, e.g., node t (line 20). The algorithm returns 0 if the BFS procedure cannot find a black node (line 21). Otherwise, we find the minimum capacity of the edges, f_p, on the detected s–t path (lines 22–25). Then we subtract f_p from the capacity of all edges on the detected path from t to s and add the same value to the capacities of the edges in the reverse direction (lines 26–31). Finally, the flow sent over the path is returned by the function (line 32).
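A compact sketch of the BFS/findFlow pair follows (our own simplified rendering, with residual capacities in a nested dict and node colors in a dict): one call performs a single breadth-first search from the white node s, stops at the first black node reached, and pushes the bottleneck amount of flow along that path, updating residual capacities in both directions.

```python
from collections import deque

def find_flow_step(cap, color, s):
    """One Ford-Fulkerson augmentation from white node s to the nearest
    black node; returns the pushed flow (0 if no black node is reachable)."""
    parent, q, t = {s: s}, deque([s]), None
    while q and t is None:
        u = q.popleft()
        for v, c_uv in cap[u].items():
            if c_uv > 0 and v not in parent:
                parent[v] = u
                if color[v] == "black":
                    t = v            # stop at the first black node
                    break
                q.append(v)
    if t is None:
        return 0
    # bottleneck along the s-t path, then residual update in both directions
    path, v = [], t
    while v != s:
        path.append((parent[v], v))
        v = parent[v]
    fp = min(cap[u][v] for u, v in path)
    for u, v in path:
        cap[u][v] -= fp
        cap[v][u] += fp
    return fp
```

Repeated calls accumulate flow until either the running total exceeds c or a call returns 0, at which point the reached/unreached frontier of the last search yields a candidate cut.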
Algorithm 2: Critical Edges (detailed)
  • Input: G, c.
  • Output: Detected Critical Edges
  •  parent ← array [1…n] {−1}.  /* parent of nodes in the detected path */
  •  color ← array [1…n] {white}.  /* color of nodes (white or black) */
  •  direct ← array [1…n] {0}.  /* the weight of edges to the black neighbors of each node */
  1: procedure merge(v):
  2:    color[v] ← black,  direct[v] ← 0.
  3:    foreach i in N_w(v) do direct[i] ← direct[i] + (G[i, v] + G[v, i])/2.
  4: procedure flow(v):
  5:    f ← 0.
  6:    foreach i in N_w(v) do f ← f + |(G[i, v] + G[v, i])/2 − G[v, i]|.
  7:    return f.
  8: procedure BFS(s):
  9:    parent[1…n] ← {−1},  parent[s] ← s,  queue.clear().
  10:   queue.push(s).
  11:   while queue.isEmpty() = False do
  12:      u ← queue.pop().
  13:      foreach i in N(u) do
  14:         if parent[i] = −1 then
  15:            parent[i] ← u.
  16:            if color[i] = black then return i.
  17:            queue.push(i).
  18:   return −1.
  19: procedure findFlow(s):
  20:    t ← BFS(s).
  21:    if t = −1 then return 0.
  22:    f_p ← Inf,  v ← t.
  23:    while v ≠ s do
  24:       f_p ← min(f_p, G[parent[v], v]).
  25:       v ← parent[v].
  26:    v ← t.
  27:    while v ≠ s do
  28:       u ← parent[v].
  29:       G[u, v] ← G[u, v] − f_p.
  30:       G[v, u] ← G[v, u] + f_p.
  31:       v ← parent[v].
  32:    return f_p.
  33: procedure CriticalEdges(s):
  34:    cand ← ∅.
  35:    merge(s).
  36:    while ∃ v ∈ N_w(B) do
  37:       f ← flow(v) + direct[v],  f_n ← 1,  ct ← {}.
  38:       foreach u ∈ N_b(v) do G[v, u] ← 0.
  39:       while f ≤ c and f_n > 0 do f_n ← findFlow(v),  f ← f + f_n.
  40:       if f ≤ c then
  41:          foreach u ∈ V with color[u] = white and parent[u] ≠ −1 do
  42:             ct ← ct ∪ {(u, w) ∈ E : w ∈ N(u) and parent[w] = −1}.
  43:          cand ← cand ∪ ct.
  44:       merge(v).
  45:       foreach u ∈ N_w(B) with flow(u) + direct[u] > c do merge(u).
  46:    return F ⊆ cand with the lowest pc s.t. Σ_{(u,v) ∈ F} G[u, v] ≤ c.
The CriticalEdges(s) procedure is the main procedure of the proposed algorithm and uses the previous procedures to find all critical edges. Node s is the starting point of the algorithm and is selected randomly from the node set. In the first step, we add node s to the black node set (line 35). While the network has some white node with a black neighbor, we select a random white node, e.g., node v, and remove its direct edges to the black nodes because the weight of these edges has already been added to the direct variable (lines 36–38). If the total weight of the direct edges to the black nodes plus the passing flow over v is smaller than c, we repeatedly find some flow from v to the black nodes until the total passing flow exceeds the so-far-detected cut size c or no flow is available in the network (line 39). If the detected flow is smaller than or equal to the so-far-detected cut size c, a new cut is generated by selecting the edges between the visited and unvisited nodes in the last BFS tree (lines 40–42). If the detected flow equals c, we add the new cut to the cut set (line 43). If the flow is smaller than c, we update c with the new flow, clear the existing cuts in the cut set, and add the new cut to this set. Finally, we add node v, together with any white neighbors of black nodes whose direct weight plus passing flow exceeds c, to the black node set and repeat the same process for the next white node (lines 44 and 45).
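The cut extraction in lines 41 and 42 can be sketched as follows (our own minimal version): after the last unsuccessful BFS, the nodes that received a parent form the reached side of the frontier, and every edge leaving that side belongs to the candidate cut.

```python
def extract_cut(adj, parent):
    """Candidate cut: edges between nodes reached in the last BFS
    (keys of parent) and unreached nodes (parent = -1 in the paper)."""
    cut = set()
    for u in parent:                  # reached side of the frontier
        for v in adj[u]:
            if v not in parent:       # unreached endpoint
                cut.add((min(u, v), max(u, v)))
    return cut
```

On a path 0–1–2–3 where only nodes 0 and 1 were reached, the extracted cut is the single edge (1, 2).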
Figure 9 shows the steps of the proposed algorithm on a sample network. The minimum weighted degree of this network is δ = 8 . We assume that the algorithm starts from node 0 and converts the color of this node to black (Figure 9a). Let node 1 with w ( 0 , 1 ) = 6 be the selected node for merging. Since 6 < 8 , we find some flow between 1 and 0 (ignoring the direct edges between 0 and 1) using BFS and Ford–Fulkerson algorithms. Figure 9b shows the direction of the updated edges and occupied capacity after running the first BFS, which finds a flow of size 5. After finding the first flow path, the sum of passing flow and direct edge weight between nodes 0 and 1 will be 11, which is bigger than 8. Also, the sum of passing flow over the edges of node 2 will be 10, which is bigger than 8. So, we can convert the color of both nodes 2 and 1 to black at the same time without updating the cut size (Figure 9c). In the next iteration, node 3 is immediately converted to black because the total weight of direct edges between 3 and the black nodes is bigger than 8 (Figure 9d). In the next iteration, assume that node 6 is selected for merging. Since 3 is smaller than 8, we start the flow detection procedure, which finds a maximum flow of 2 (ignoring the direct edge between 6 and 1) from 6 to the black nodes (Figure 9e). So, the sum of the max-flow and the weight of direct edges to the black nodes in node 6 will be 5.
The nodes with the green border in Figure 9f show the discovered nodes in the last BFS tree. Since 5 is smaller than 8, we update c = 5 and set the critical edges to the edges between the discovered nodes {4, 5, 6, 7, 8} and the undiscovered nodes {0, 1, 2, 3} in the last BFS. Therefore, the critical edge set obtained so far will be {(1, 6), (3, 4)}. After converting node 6 to black, node 7 is selected for merging (Figure 9g). The weight of the direct edge between nodes 6 and 7 is 5, which is not bigger than c. So, we start the flow detection process and find a flow of 2 after the first BFS, which increases the cut size of node 7 to 7 (Figure 9h). Since 7 is bigger than c, we do not need to find more flow and convert node 7 to black. Also, after finding this flow, the sum of the passing flow over the edges connected to node 5 and the weight of the direct edge between node 5 and the black nodes will be 6, which is bigger than c. Therefore, we convert nodes 5 and 7 to black at the same time (Figure 9i). In the next iteration (Figure 9i), node 4 (with total direct weight 7) is selected for merging. Since 7 is bigger than c, we convert node 4 to black immediately (Figure 9j). Finally, node 8 is converted to black without flow detection because the total weight of its direct edges to the black nodes is bigger than c (Figure 9k). So the algorithm detects the critical edges {(1, 6), (3, 4)} with cut size c = 5.

5. Complexity Analysis

The proposed algorithm employs the BFS algorithm, which has a time complexity of O(|V| + |E|) ⊆ O(|V|²) in dense graphs, to compute flows between nodes. In the following theorem, we prove that the proposed algorithm invokes the BFS procedure at most c × |V| times.
Theorem 1. 
The proposed algorithm calls the BFS procedure at most c × | V | times.
Proof. 
Let u denote a black node (already merged into the set B) and let v be the candidate node selected for merging. The algorithm computes a flow of at most c from v to u by repeatedly calling the BFS procedure. In the worst-case scenario, each BFS increases the flow by only one unit, resulting in at most c BFS calls to achieve the desired flow.
The flow from v to u may pass through a single intermediate node t, or more generally, through a set of white nodes T V { u , v } . If all c units of flow pass through a single node t, then in the subsequent iteration, this node can be immediately merged into the black node set without requiring any additional BFS calls.
If we let f t denote the amount of flow that has already passed through node t during earlier iterations, then for each node, the number of remaining BFS calls required before merging is reduced by this previously established flow. Formally, the total number of BFS calls will be
Total BFS calls = Σ_{t ∈ V} (c − f_t) = c × |V| − Σ_{t ∈ V} f_t.

Since the cumulative flow Σ_{t ∈ V} f_t is non-negative, we have

Total BFS calls ≤ c × |V|. □
Combining Theorem 1 with the O ( | V | 2 ) complexity of each BFS procedure yields an overall time complexity of O ( c × | V | 3 ) for the algorithm.

6. Performance Evaluation

To evaluate the performance of the proposed algorithm, we implemented the proposed and existing algorithms in Python 3.12 and ran them on a set of random networks with different edge and node counts. We set the number of nodes in the random networks from 50 up to 500 (in steps of 50) and added random edges between the nodes. To ensure that the resulting network was connected, we added an edge between each pair of successive node IDs. The maximum number of edges connected to each node (the degree of the node) was selected from 2 up to 10 (in steps of 1). For each specific node count and maximum degree value, we generated 5 different instances of random networks, leading to 450 networks in total. The weight of each edge was selected randomly between 1 and 100 (both inclusive).
The random graph generation model used in this study is a hybrid of the Erdős–Rényi (ER) [48] and Barabási–Albert (BA) [49] models. In this model, we begin by selecting a number of nodes and creating random edges between them, similar to the ER model, where edges are added randomly between pairs of nodes. However, to ensure the graph remains connected, we enforce a rule that adds edges between successive node IDs, which guarantees a continuous path between nodes. Additionally, inspired by the BA model, the degree of each node is constrained, meaning that each node has a predefined maximum number of edges connected to it, which ranges from 2 to 10.
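The generation procedure described above can be sketched with the standard library alone. Everything below is our own rendering of the stated rules (backbone edges between successive IDs, a per-node degree cap, and uniform weights in [1, 100]); the retry count for the extra edges is an assumption, not a parameter from the paper.

```python
import random

def random_connected_graph(n, max_deg, seed=None):
    """Weighted connected random graph on nodes 0..n-1: a path over
    successive IDs guarantees connectivity, then extra random edges are
    added while respecting the per-node degree cap."""
    rng = random.Random(seed)
    deg = [0] * n
    edges = {}  # (u, v) with u < v -> weight in 1..100

    def add(u, v):
        edges[(min(u, v), max(u, v))] = rng.randint(1, 100)
        deg[u] += 1
        deg[v] += 1

    for i in range(n - 1):       # backbone: successive node IDs
        add(i, i + 1)
    for _ in range(3 * n):       # extra random edges (attempt count is ours)
        u, v = rng.randrange(n), rng.randrange(n)
        if (u != v and (min(u, v), max(u, v)) not in edges
                and deg[u] < max_deg and deg[v] < max_deg):
            add(u, v)
    return edges
```

The degree cap is checked before each extra edge is added, so no node exceeds max_deg (the backbone contributes at most degree 2, matching the paper's minimum cap).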
We implemented the proposed, Min-Cut, Greedy, and Degree-based approaches. In the Min-Cut approach, the algorithm successively removes minimum cut sets from the network until the total weight of the removed edges reaches the threshold c. At each iteration, a global minimum cut is computed, and the corresponding edges are eliminated, thereby progressively reducing the network’s connectivity. To identify these minimum cuts, we employed the algorithm proposed by Stoer and Wagner [8].
The Greedy strategy, on the other hand, iteratively removes edges with the smallest weights until the cumulative weight of the removed edges meets the threshold c. The underlying rationale is that eliminating lighter edges allows for the removal of a greater number of edges within the same budget, which may partition the network into more disconnected components. In the Degree-based method, nodes with the lowest total degree (the smallest total weights of their incident edges) are selected, and all edges connected to these nodes are removed. This ensures that nodes with inherently weak connections are detached from the network early in the process, potentially accelerating the overall reduction in connectivity.
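The two simplest baselines can be sketched in a few lines each. These are our own minimal versions of the stated rules (the Min-Cut baseline additionally needs a Stoer–Wagner implementation and is omitted here):

```python
def greedy_edges(edges, c):
    """Greedy baseline: remove the lightest edges while the cumulative
    removed weight stays within the budget c."""
    removed, total = [], 0
    for e, w in sorted(edges.items(), key=lambda kv: kv[1]):
        if total + w > c:
            break
        removed.append(e)
        total += w
    return removed

def degree_based_edges(edges, n, c):
    """Degree-based baseline: detach nodes with the smallest total
    incident weight, removing all their edges, within the budget c."""
    strength = [0] * n
    for (u, v), w in edges.items():
        strength[u] += w
        strength[v] += w
    removed, total, gone = [], 0, set()
    for node in sorted(range(n), key=lambda x: strength[x]):
        incident = [(e, w) for e, w in edges.items()
                    if node in e and e not in gone]
        cost = sum(w for _, w in incident)
        if total + cost > c:
            break
        for e, _ in incident:
            gone.add(e)
            removed.append(e)
        total += cost
    return removed
```

Both return the removed edge set, whose effect can then be measured by recomputing the pairwise connectivity of the residual graph.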
Figure 10a and Figure 10b, respectively, show the heat maps of detected paths by the proposed and max-flow-based algorithms based on the node count and maximum node degree. In the graphs with 50 nodes and a maximum degree of 2, the proposed algorithm finds only 3 flow paths on average, while this value for the max-flow-based algorithm is 98. In the networks with 500 nodes and a maximum degree of 10, the proposed algorithm finds 333 flow paths, while this value for the max-flow-based algorithm is 14,756. Figure 11 shows how the number of the detected flow paths by the proposed (blue) and max-flow-based (orange) algorithms grows when we add more nodes to the network. In this figure, the thick lines show the average of detected paths in all degree values. Figure 11 clearly shows that, in contrast to the max-flow-based algorithm, the number of the detected paths by the proposed algorithm for all node counts and degree values is almost negligible.
Figure 12a shows the pairwise connectivity of the network after removing the detected critical edges. After removing the detected edges by the proposed algorithm in the networks with 50 nodes, the pairwise connectivity of the network decreases from 2450 (in the original network) to 1607, indicating a 34.4 % decrease. In the networks with 500 nodes, the pairwise connectivity decreases from 249,500 to 160,492, resulting in a 35.7 % decrease. In the networks with 50 nodes, the Greedy algorithm reduces the pairwise connectivity from 2450 to 1909, which corresponds to a 22.1 % decrease. For the networks with 500 nodes, the connectivity decreases from 249,500 to 212,690, representing a 14.7 % reduction. In the 50-node networks, applying the Min-Cut-based method reduces the pairwise connectivity from 2450 to 1708, a 30.3% decrease. For the 500-node networks, the value drops from 249,500 to 191,591, resulting in a 23.2% reduction. Using the Degree-based edge removal strategy, the pairwise connectivity in 50-node networks drops from 2450 to 2109, indicating a 13.9% reduction. For 500-node networks, the value decreases from 249,500 to 233,364, showing a 6.5% decrease.
Figure 12b shows how pairwise connectivity changes when we increase the critical edge weight limit c. The original network structure remains unchanged for all values of c and maintains a constant pairwise connectivity of 95,975. For the proposed algorithm, pairwise connectivity decreases as c grows. At c = 0.05, connectivity drops to 81,995, which indicates a 14.6% reduction. When c reaches 0.35, the pairwise connectivity is reduced to 52,371, which indicates a 45.4% decrease. In comparison, the Greedy algorithm reduces connectivity to 84,811 at c = 0.05 (11.6% reduction), and to 61,569 at c = 0.35 (35.9% reduction). The Min-Cut method follows a similar pattern. It reduces pairwise connectivity to 83,803 (12.7%) at c = 0.05 and 59,970 (37.5%) at c = 0.35. Among all approaches, the Degree-based method has the lowest performance. It reduces connectivity to just 88,801 at c = 0.05 (7.5% drop), and to 70,315 at c = 0.35, corresponding to a 26.7% reduction.
Figure 13a shows the total number of critical edges identified by each algorithm against the number of nodes in the network. The number of selected edges increases with the network size for all algorithms. The proposed algorithm selects the fewest number of critical edges in all network sizes. For instance, it identifies only 20 critical edges in networks with 50 nodes and 125 edges in networks with 500 nodes. The Greedy algorithm selects significantly more edges, with 63 edges at 50 nodes and 590 edges at 500 nodes, showing a steep increase as the network grows. The Min-Cut-based method also identifies a high number of edges (53 edges at 50 nodes and 403 edges at 500 nodes). However, it tends to be slightly more conservative than the Greedy approach in larger networks. The Degree-based algorithm shows almost similar behavior to the Greedy method, selecting 60 edges at 50 nodes and 523 edges at 500 nodes. These findings highlight the effectiveness of the proposed method in identifying a minimal yet important set of critical edges, which is crucial for maintaining network functionality.
Figure 13b presents the number of critical edges selected by each algorithm against the critical edge weight limit c. As c increases (a larger total weight budget for edge removals), all algorithms select more edges. However, the rate and volume of selection vary between the methods. The proposed algorithm selects the smallest number of critical edges for all c values. For example, at c = 0.05, it identifies only 26 edges, and even at the highest bound c = 0.35, it selects just 64 edges. The Greedy algorithm selects more edges, starting from 178 at c = 0.05 and reaching 480 edges at c = 0.35. This indicates a more aggressive but less efficient selection approach. The Min-Cut-based algorithm lies between the proposed and Greedy methods in terms of edge count. It selects 105 edges at c = 0.05 and 330 edges at c = 0.35, indicating a moderate edge removal strategy. The Degree-based algorithm generally selects more edges than the Min-Cut method, with 145 edges at c = 0.05 and 450 at c = 0.35, which indicates a relatively lower efficiency.
Figure 14a shows the pairwise connectivity loss percentage in all networks after removing the selected edges by the algorithms. Removing the selected edges by the proposed algorithm results in the highest percentage of connectivity loss, ranging between 34% and 38%. This highlights its effectiveness in identifying critical edges that maximize disconnection between nodes. The Greedy algorithm produces moderate loss percentages, from 22% to 24%, which indicates that while it selects many edges, their impact on overall connectivity is low. The Min-Cut method shows slightly higher loss percentages than the Greedy approach, particularly in smaller networks, which vary between 24% up to 32%. The Degree-based algorithm has the lowest connectivity loss, with values decreasing from 19% in small networks to 12% in larger networks. This implies that its selected edges are less significant in terms of connectivity.
Figure 14b shows the pairwise connectivity loss percentage against c. As c increases, the algorithms select more edges, and a greater loss in connectivity is observed. The proposed algorithm has the highest connectivity loss for all c values. The loss percentages of the proposed algorithm ranges from 17% at c = 0.05 to 56% at c = 0.35 . Despite selecting a lower number of edges, the proposed method leads to the highest disconnection in the network. The Greedy algorithm shows a more gradual increase in loss, ranging from 11% to 36%, by removing a larger number of edges. The Min-Cut algorithm performs similarly, with loss values increasing from 13% to 39% as c grows. The Degree-based method causes the least disruption, with connectivity loss percentages increasing only from 10% to 23%, indicating that most of the removed edges with this strategy are not critical.
Figure 15 shows the pairwise connectivity loss percentage of each algorithm for different network sizes and c values. The x-axis shows the number of nodes in the network, the y-axis shows the c value, and the z-axis indicates the loss in pairwise connectivity. For all algorithms, the loss percentage increases as either the number of nodes or the c value grows, which is expected due to the larger opportunity for topological disconnection. The proposed algorithm yields the highest connectivity loss for most configurations, and this behavior is consistent across all c values and node counts, which indicates its efficiency in identifying vital edges. For instance, in the networks with 500 nodes and c = 0.35, the pairwise connectivity loss of the proposed algorithm reaches 53%, whereas for all other algorithms this value remains below 37%; in networks with 50 nodes, its loss increases to approximately 60%, while for the Greedy and Degree-based methods it stays under 35%. The Greedy algorithm shows moderate performance. Its loss percentages increase steadily but remain below those of the proposed method, especially at higher c values. The Min-Cut algorithm performs better than the Greedy method in medium-sized networks and at higher c values; it tends to be more sensitive to c than to the node count. The Degree-based algorithm has the lowest connectivity loss, which indicates that the edges removed by this algorithm are less critical.
Figure 16 shows contour plots of the number of selected critical edges against the number of nodes and c. The proposed algorithm selects the fewest edges: even in networks with 500 nodes and the highest c value (0.35), the number of selected edges remains lower than that of the other methods. In contrast, the Greedy algorithm (Figure 16d) selects a large number of edges, ranging from 32 (at 50 nodes, c = 0.05) up to 875 (at 500 nodes, c = 0.35), reflecting its focus on local optimization without considering overall efficiency. The Min-Cut method (Figure 16b) selects a moderate number of edges, adapting well to the increased c value. This method is less aggressive than the Greedy algorithm, but it still selects more edges than the proposed method, particularly in larger networks and at higher c values. The Degree-based approach (Figure 16c) shows similar behavior to Min-Cut, but with slightly more variance across networks. It often selects a high number of edges (over 600 in larger networks), most of which have minimal effect on connectivity.
Figure 17 shows the pairwise connectivity of the network after applying the proposed algorithm, against the average node degrees and number of nodes. For a fixed c value, pairwise connectivity increases with both the average degree and the network size, due to the higher number of alternative paths. For c = 0.05 (Figure 17a), in the networks with 50 nodes, the pairwise connectivity ranges from approximately 860 to 2450 as the average degree increases from 2 to 10. In contrast, for larger networks (500 nodes), the pairwise connectivity spans from about 9340 to nearly 249,300 over the same range of average degrees.
For c = 0.1 (Figure 17b), in smaller networks (50 nodes), the pairwise connectivity ranges from 840 to 2430 as the average degree increases from 2 to 10. For larger networks (500 nodes), the pairwise connectivity spans from around 92,350 to 249,500 over the same range of average degrees. For c = 0.2 (Figure 17c), in smaller networks (50 nodes), the pairwise connectivity ranges from 552 to 2410 as the average degree increases from 2 to 10. In contrast, for larger networks (500 nodes), the pairwise connectivity spans from around 53,700 to 249,300 across the same range of average degrees. For c = 0.3 (Figure 17d), in smaller networks (50 nodes), the pairwise connectivity ranges from 465 to 2410 as the average degree increases from 2 to 10. In larger networks (500 nodes), the pairwise connectivity spans from about 50,400 to 249,300 across the same degree range.
At low c values (e.g., c = 0.05 and c = 0.10 ), the proposed algorithm removes only a limited number of critical edges, resulting in higher connectivity in all networks. At moderate to high c values ( c = 0.20 and c = 0.30 ), the algorithms can remove more edges, leading to more noticeable connectivity loss, particularly in sparse (low-degree) or small networks. For instance, with c = 0.30 and an average node degree under 10, the pairwise connectivity across all networks drops to less than 199,533 after the removal of the selected edges. For larger average degrees, even under high c values, the pairwise connectivity remains relatively robust because when the average degree in a network is high, each node is connected to more nodes on average, which leads to a greater number of alternative paths between node pairs.

7. Conclusions

Finding the critical edges that have the highest impact on network connectivity can help to identify the weak segments and increase the robustness of the network. This paper introduced a novel approach for identifying critical edges in multi-hop networks with the goal of finding the edges that have the highest impact on pairwise connectivity. Through extensive simulations on networks with varying node counts, average degrees, and edge weight constraints c, we evaluated the performance of the proposed algorithm in comparison with three well-known baselines: Greedy, Min-Cut, and Degree-based methods.
Experimental evaluations across networks of varying sizes (50–500 nodes), average degrees (2–10), and c = 0.05 to 0.35 demonstrate that the proposed algorithm consistently outperforms Greedy, Min-Cut, and Degree-based methods. For instance, at c = 0.35 in a 500-node network, removing the detected edges by the proposed method achieves a 56% pairwise connectivity loss with only 64 critical edges removed. In comparison, the Greedy algorithm detects 875 edges to reach 36% loss, the Min-Cut algorithm finds 450 edges to achieve 39%, and the Degree-based method results in only 23% loss with 573 edges.
On average, removing the edges detected by the proposed method results in a connectivity loss of 34–38% across all networks, while Greedy and Min-Cut methods achieve 22–24% and 24–32%, respectively. Furthermore, our algorithm maintains robustness across different average node degrees. These findings have practical implications for network vulnerability assessment and resilience measurement, especially in wireless multi-hop networks and wireless communication infrastructures.
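As a point of reference for the comparison above, the Greedy baseline can be sketched as selecting edges in descending weight order until a budget derived from c is exhausted. The sketch below assumes c is interpreted as a fraction of the total edge weight; this interpretation, the function name, and the toy graph are illustrative assumptions, not the paper's exact experimental setup:

```python
def greedy_critical_edges(weighted_edges, c_fraction):
    """Select edges in descending weight order until the removed weight
    reaches c_fraction of the total edge weight (assumed reading of c)."""
    total = sum(w for _, _, w in weighted_edges)
    budget = c_fraction * total
    selected, spent = [], 0.0
    for u, v, w in sorted(weighted_edges, key=lambda e: -e[2]):
        if spent + w > budget:
            continue  # edge too heavy for the remaining budget
        selected.append((u, v))
        spent += w
    return selected

# Toy 4-node cycle: budget is 0.5 * 12 = 6, so the greedy pass picks
# the weight-5 edge, skips the weight-4 and weight-2 edges, and
# finishes with the weight-1 edge.
edges = [(0, 1, 5.0), (1, 2, 1.0), (2, 3, 4.0), (3, 0, 2.0)]
print(greedy_critical_edges(edges, 0.5))  # [(0, 1), (1, 2)]
```

Because this heuristic looks only at weights and never at topology, it tends to spend the budget on many edges that do not separate the network, which is consistent with its lower connectivity loss in the experiments.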
In this study, we use the pairwise connectivity metric to evaluate the impact of edges on the overall network connectivity. This metric, as applied here, does not account for redundant paths or edge weights. As part of future work, more sophisticated connectivity metrics that take alternative paths and edge weights into account can be defined to measure the network connectivity in a more comprehensive manner. Moreover, the analysis can be extended by incorporating additional structural indicators, such as the clustering coefficient, centrality measures, and network diameter. Investigating how these topological properties interact with pairwise connectivity under varying constraints would enable more robust statistical interpretations and further enhance the applicability of our findings across different networks.

Author Contributions

Conceptualization, N.A. and İ.K.; methodology, N.A. and O.U.; software, N.A. and O.U.; validation, N.A., İ.K. and O.D.; formal analysis, N.A. and O.U.; investigation, N.A. and O.U.; resources, O.D.; data curation, N.A. and İ.K.; writing—original draft preparation, N.A. and O.U.; writing—review and editing, İ.K. and O.D.; visualization, N.A. and O.U.; supervision, O.D.; project administration, O.D.; funding acquisition, İ.K. and O.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific and Technological Research Council of Turkey (TUBITAK), project number 121E500.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors thank the Scientific and Technological Research Council of Turkey (TUBITAK) for their support under Grant Number 121E500. N. Akram is thankful to TUBITAK for their scholarship support under the BIDEB 2211-C program.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Akram, V.K. An Asynchronous Distributed Algorithm for Minimum s–t Cut Detection in Wireless Multi-hop Networks. Ad Hoc Netw. 2020, 101, 102092. [Google Scholar] [CrossRef]
  2. Schild, A.; Sommer, C. On balanced separators in road networks. In International Symposium on Experimental Algorithms; Springer: Cham, Switzerland, 2015; pp. 286–297. [Google Scholar]
  3. Krishnamurthy, B. An improved min-cut algorithm for partitioning VLSI networks. IEEE Trans. Comput. 1984, 33, 438–446. [Google Scholar] [CrossRef]
  4. Sen, A.; Ghosh, P.; Vittal, V.; Yang, B. A new min-cut problem with application to electric power network partitioning. Eur. Trans. Electr. Power 2009, 19, 778–797. [Google Scholar] [CrossRef]
  5. Karlin, A.R.; Klein, N.; Gharan, S.O. An improved approximation algorithm for TSP in the half integral case. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, Chicago, IL, USA, 22–26 June 2020; pp. 28–39. [Google Scholar]
  6. Johnson, E.L.; Mehrotra, A.; Nemhauser, G.L. Min-cut clustering. Math. Program. 1993, 62, 133–151. [Google Scholar] [CrossRef]
  7. Walteros, J.L.; Pardalos, P.M. Selected topics in critical element detection. In Applications of Mathematics and Informatics in Military Science; Springer: New York, NY, USA, 2012; pp. 9–26. [Google Scholar]
  8. Stoer, M.; Wagner, F. A simple min-cut algorithm. J. ACM 1997, 44, 585–591. [Google Scholar] [CrossRef]
  9. Yang, H.; An, S. Critical nodes identification in complex networks. Symmetry 2020, 12, 123. [Google Scholar] [CrossRef]
  10. Yu, E.Y.; Chen, D.B.; Zhao, J.Y. Identifying critical edges in complex networks. Sci. Rep. 2018, 8, 14469. [Google Scholar] [CrossRef] [PubMed]
  11. Milic, B.; Malek, M. Adaptation of the breadth first search algorithm for cut-edge detection in wireless multihop networks. In Proceedings of the 10th ACM Symposium on Modeling, Analysis, and Simulation of Wireless and Mobile Systems, Chania, Crete Island, Greece, 22–26 October 2007; pp. 377–386. [Google Scholar]
  12. Akram, V.K.; Dagdeviren, O. Breadth-first search-based single-phase algorithms for bridge detection in wireless sensor networks. Sensors 2013, 13, 8786–8813. [Google Scholar] [CrossRef] [PubMed]
  13. Dagdeviren, O.; Akram, V.K. An energy-efficient distributed cut vertex detection algorithm for wireless sensor networks. Comput. J. 2014, 57, 1852–1869. [Google Scholar] [CrossRef]
  14. Liu, X.; Xiao, L.; Kreling, A. A fully distributed method to detect and reduce cut vertices in large-scale overlay networks. IEEE Trans. Comput. 2011, 61, 969–985. [Google Scholar] [CrossRef]
  15. Lalou, M.; Tahraoui, M.A.; Kheddouci, H. The critical node detection problem in networks: A survey. Comput. Sci. Rev. 2018, 28, 92–117. [Google Scholar] [CrossRef]
  16. Ugurlu, O. Comparative analysis of centrality measures for identifying critical nodes in complex networks. J. Comput. Sci. 2022, 62, 101738. [Google Scholar] [CrossRef]
  17. Akram, V.K.; Ugurlu, O. Detecting the most vital articulation points in wireless multi-hop networks. IEEE/ACM Trans. Netw. 2023, 31, 2389–2402. [Google Scholar] [CrossRef]
  18. Tarjan, R.E. A note on finding the bridges of a graph. Inf. Process. Lett. 1974, 2, 160–161. [Google Scholar] [CrossRef]
  19. Hopcroft, J.; Tarjan, R. Algorithm 447: Efficient algorithms for graph manipulation. Commun. ACM 1973, 16, 372–378. [Google Scholar] [CrossRef]
  20. Schmidt, J.M. A simple test on 2-vertex-and 2-edge-connectivity. Inf. Process. Lett. 2013, 113, 241–244. [Google Scholar] [CrossRef]
  21. Cairo, M.; Khan, S.; Rizzi, R.; Schmidt, S.; Tomescu, A.I.; Zirondelli, E.C. A simplified algorithm computing all s–t bridges and articulation points. Discret. Appl. Math. 2021, 305, 103–108. [Google Scholar] [CrossRef]
  22. Dantzig, G.; Fulkerson, D.R. On the max flow min cut theorem of networks. Linear Inequalities Relat. Syst. 2003, 38, 225–231. [Google Scholar]
  23. Curet, N.D.; DeVinney, J.; Gaston, M.E. An efficient network flow code for finding all minimum cost s–t cutsets. Comput. Oper. Res. 2002, 29, 205–219. [Google Scholar] [CrossRef]
  24. Gu, T.; Xu, Z. The symbolic algorithms for maximum flow in networks. Comput. Oper. Res. 2007, 34, 799–816. [Google Scholar] [CrossRef]
  25. Ford, L.; Fulkerson, D. Maximal flow through a network. Can. J. Math. 1956, 8, 399–404. [Google Scholar] [CrossRef]
  26. Edmonds, J.; Karp, R.M. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM 1972, 19, 248–264. [Google Scholar] [CrossRef]
  27. Dinic, E.A. Algorithm for solution of a problem of maximum flow in networks with power estimation. Soviet Math. Dokl. 1970, 11, 1277–1280. [Google Scholar]
  28. Dinitz, Y. Dinitz’ algorithm: The original version and Even’s version. In Theoretical Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 218–240. [Google Scholar]
  29. Goldberg, A.V.; Tarjan, R.E. A new approach to the maximum-flow problem. J. ACM 1988, 35, 921–940. [Google Scholar] [CrossRef]
  30. Orlin, J.B. Max flows in O(nm) time, or better. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, Palo Alto, CA, USA, 1–4 June 2013; pp. 765–774. [Google Scholar]
  31. Nagamochi, H.; Ibaraki, T. Computing edge-connectivity in multigraphs and capacitated graphs. SIAM J. Discret. Math. 1992, 5, 54–66. [Google Scholar] [CrossRef]
  32. Brinkmeier, M. A simple and fast min-cut algorithm. Theory Comput. Syst. 2007, 41, 369–380. [Google Scholar] [CrossRef]
  33. Karger, D.R. Minimum cuts in near-linear time. J. ACM (JACM) 2000, 47, 46–76. [Google Scholar] [CrossRef]
  34. Karger, D.R.; Stein, C. A new approach to the minimum cut problem. J. ACM (JACM) 1996, 43, 601–640. [Google Scholar] [CrossRef]
  35. Henzinger, M.; Noe, A.; Schulz, C.; Strash, D. Practical minimum cut algorithms. J. Exp. Algorithmics (JEA) 2018, 23, 1–22. [Google Scholar] [CrossRef]
  36. Henzinger, M.; Noe, A.; Schulz, C.; Strash, D. Finding all global minimum cuts in practice. arXiv 2020, arXiv:2002.06948. [Google Scholar] [CrossRef]
  37. Nagamochi, H.; Ono, T.; Ibaraki, T. Implementing an efficient minimum capacity cut algorithm. Math. Program. 1994, 67, 325–341. [Google Scholar] [CrossRef]
  38. Ghaffari, M.; Nowicki, K. Massively Parallel Algorithms for Minimum Cut. In Proceedings of the 39th Symposium on Principles of Distributed Computing, Virtual, 3–7 August 2020; pp. 119–128. [Google Scholar]
  39. Geissmann, B.; Gianinazzi, L. Parallel minimum cuts in near-linear work and low depth. In Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures, Vienna, Austria, 16–18 July 2018; pp. 1–11. [Google Scholar]
  40. Ghaffari, M.; Kuhn, F. Distributed minimum cut approximation. In International Symposium on Distributed Computing; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–15. [Google Scholar]
  41. Nanongkai, D.; Su, H.H. Almost-tight distributed minimum cut algorithms. In International Symposium on Distributed Computing; Springer: Berlin/Heidelberg, Germany, 2014; pp. 439–453. [Google Scholar]
  42. Su, H.H. A Distributed Minimum Cut Approximation Scheme. arXiv 2014, arXiv:1401.5316. [Google Scholar] [CrossRef]
  43. Akram, V.K. Distributed Detection of Minimum Cuts in Wireless Multi-hop Networks. IEEE Trans. Comput. 2021, 71, 919–932. [Google Scholar] [CrossRef]
  44. Gabow, H.N. A matroid approach to finding edge connectivity and packing arborescences. In Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, New Orleans, LA, USA, 5–8 May 1991; pp. 112–122. [Google Scholar]
  45. Karger, D.R. Random sampling in cut, flow, and network design problems. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, Montréal, QC, Canada, 23–25 May 1994; pp. 648–657. [Google Scholar]
  46. Nagamochi, H.; Nishimura, K.; Ibaraki, T. Computing all small cuts in an undirected network. SIAM J. Discret. Math. 1997, 10, 469–481. [Google Scholar] [CrossRef]
  47. Kawarabayashi, K.i.; Thorup, M. Deterministic global minimum cut of a simple graph in near-linear time. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, Portland, OR, USA, 14–17 June 2015; pp. 665–674. [Google Scholar]
  48. Ahuja, R.K.; Kodialam, M.; Mishra, A.K.; Orlin, J.B. Computational investigations of maximum flow algorithms. Eur. J. Oper. Res. 1997, 97, 509–542. [Google Scholar] [CrossRef]
  49. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A sample wireless multi-hop network and its network model.
Figure 2. Pairwise connectivity of a network after removing some edges.
Figure 3. Removing edges based on the weight for c = 20 (Greedy algorithm).
Figure 4. Removing minimum cut sets until reaching c = 20.
Figure 5. Steps of the proposed algorithm for c = 20.
Figure 6. The steps of selecting final edges from the candidate set Z.
Figure 7. The flowchart of the proposed algorithm.
Figure 8. The steps of the proposed algorithm on a sample network.
Figure 9. The steps of the proposed algorithm on a sample graph.
Figure 10. Number of detected flow paths in the proposed and traditional max-flow-based algorithms.
Figure 11. Number of detected flow paths in the proposed and traditional max-flow-based algorithms.
Figure 12. Pairwise connectivity of the network after removing the detected critical edges.
Figure 13. Number of detected critical edges against the number of nodes and the weight limit c.
Figure 14. Pairwise connectivity loss percentage after removing the detected critical edges.
Figure 15. Pairwise connectivity loss percentage in (a) proposed, (b) Greedy, (c) Degree-based, and (d) Min-Cut algorithms against the number of nodes and c values.
Figure 16. Number of detected critical edges in (a) proposed, (b) Min-Cut, (c) Degree-based, and (d) Greedy algorithms against the number of nodes and c values.
Figure 17. Pairwise connectivity of the proposed algorithm against the average node degree for (a) c = 0.05, (b) c = 0.10, (c) c = 0.20, and (d) c = 0.30.
Table 1. A comparison of existing studies on the identification of edges that influence network connectivity.

Problem | Algorithms | Purpose | Pros | Cons
Cut Edge (Bridge) | [18,19,20,21] | Detect individual edges whose removal increases the number of connected components | Straightforward to compute; low computational overhead | Often identifies edges that disconnect only minor portions of the graph, missing more impactful edges
Minimum s–t Cut | [22,23,24,25] | Identify a minimum-weight set of edges whose removal disconnects nodes s and t | Computationally efficient and relatively simple | May overlook more critical edges that significantly impact the overall topology
Minimum Cut | [8,26,27,28,29,30,31,32,33,34,35,36] | Identify a minimum-weight edge set whose removal partitions the graph | Efficient and computationally simple; well-established methods exist | May identify edges that are not critical in practice, as they could disconnect only a few nodes
k-Edge Connectivity | [44,45,46,47] | Determine the maximum k such that the graph remains connected after removal of any k−1 edges | Measures global resilience; provides the minimum cut size; polynomial-time algorithms exist | Computationally expensive on large graphs and may not reveal the critical edges
Critical Edges | [9,10,15], this study | Find the edges that have a critical impact on the connectivity of the graph | Reveals the most critical and vulnerable links in the graph | Selecting an appropriate metric is challenging and becomes computationally expensive on large graphs
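Cut-edge (bridge) detection, the first problem in the comparison above, is the simplest of these and runs in linear time with a single DFS over discovery times and low-links, in the spirit of Tarjan [18]. A minimal, self-contained sketch (the graph is a toy example; for large graphs an iterative DFS or a raised recursion limit would be needed):

```python
def find_bridges(n, edges):
    """Return the bridges (cut edges) of an undirected graph using a
    Tarjan-style DFS over discovery times and low-links."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    disc = [0] * n   # discovery time, 0 = unvisited
    low = [0] * n    # lowest discovery time reachable from the subtree
    timer = 1
    bridges = []

    def dfs(u, parent_edge):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v, i in adj[u]:
            if i == parent_edge:
                continue  # do not reuse the tree edge to the parent
            if disc[v]:
                low[u] = min(low[u], disc[v])  # back edge
            else:
                dfs(v, i)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:  # subtree cannot reach above u
                    bridges.append(edges[i])

    for s in range(n):
        if not disc[s]:
            dfs(s, -1)
    return bridges

# Triangle {0, 1, 2} with a pendant node 3: only (2, 3) is a bridge.
print(find_bridges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # [(2, 3)]
```

As the table notes, such bridges may disconnect only minor portions of the graph, which is why the critical-edge formulation studied in this paper optimizes pairwise connectivity instead.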
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Akram, N.; Ugurlu, O.; Kocabaş, İ.; Dagdeviren, O. Detection of Critical Links for Improving Network Resilience. Electronics 2025, 14, 2904. https://doi.org/10.3390/electronics14142904


