Analyzing the Robustness of Complex Networks with Attack Success Rate

Analyzing the robustness of networks against random failures or malicious attacks is a critical research issue in network science, as it contributes to enhancing the robustness of beneficial networks and effectively dismantling harmful ones. Most studies commonly neglect the impact of the attack success rate (ASR) and assume that attacks on the network will always be successful. However, in real-world scenarios, attacks may not always succeed. This paper proposes a novel robustness measure called Robustness-ASR (RASR), which utilizes mathematical expectations to assess network robustness when considering the ASR of each node. To efficiently compute the RASR for large-scale networks, a parallel algorithm named PRQMC is presented, which leverages randomized quasi-Monte Carlo integration to approximate the RASR with a faster convergence rate. Additionally, a new attack strategy named HBnnsAGP is introduced to better assess the lower bound of network RASR. Finally, the experimental results on six representative real-world complex networks demonstrate the effectiveness of the proposed methods compared with the state-of-the-art baselines.


Introduction
Complex networks are powerful representations of various real-world systems, including the Internet, social networks, and power grids. Most networks provide benefits and yield positive effects. However, some networks can also produce negative effects, with the most important examples being terrorism [1] and disease transmission networks [2]. Whether beneficial or harmful, these networks substantially influence the functioning and development of our society. In recent decades, the study of diverse complex networks has gained significant attention from researchers across various fields, such as computer science, statistical physics, systems engineering, and applied mathematics [3-7]. One hot topic in these studies is the error and attack tolerance of complex networks [8-15], a concept referred to as robustness within the context of this paper.
The robustness of a network refers to its ability to keep functioning when some of its components, such as nodes or edges, malfunction due to random failures or malicious attacks [12,16,17]. The study of network robustness is valuable from two primary perspectives. First, the failure of components can lead to the breakdown of beneficial networks and result in significant economic losses; a typical example is the Northeast blackout of 2003 [18,19]. Analyzing network robustness aids in developing methods to enhance it. Second, for harmful networks, such as terrorist networks [1] or COVID-19 transmission networks [20], analyzing their robustness assists in developing effective attack strategies to dismantle them. Therefore, analyzing network robustness is of great importance.
To assess the robustness of the network, it is crucial to select an appropriate metric. Since almost all network applications are typically designed to operate in a connected environment [21], network connectivity is selected as the primary indicator to assess network robustness in this study.
The robustness of a network depends not only on its structural features but also on the mechanisms of random failures or malicious attacks. In random failures, nodes or edges are attacked with equal probability. In contrast, malicious attacks carefully select nodes or edges in the network for removal in order to maximally disrupt network functionality. Typically, random failures are less severe than malicious attacks [22]. Therefore, this paper primarily focuses on the latter. Evaluating the impacts of node or edge removal using various malicious attack strategies is a crucial approach to analyzing network robustness. Determining the lower bound of network robustness is critical, as it allows for analysis of network robustness under worst-case scenarios, identification of the most vulnerable components, and development of robustness improvement methods. An effective approach to addressing this issue involves identifying an optimal attack strategy that inflicts maximum damage on the network [23].
Extensive research has been conducted on the robustness of complex networks. Albert et al. [8] studied the robustness of scale-free networks and found that, while these networks are robust to random failures, they are extremely vulnerable to malicious attacks. Iyer et al. [9] conducted a systematic examination of the robustness of complex networks by employing simultaneous and sequential targeted attacks based on various centrality measures such as degree, betweenness, closeness, and eigenvector centrality. Fan et al. [10] proposed a deep reinforcement learning algorithm, FINDER, to effectively identify critical network nodes. Wang et al. [11] introduced region centrality and proposed an efficient network disintegration strategy based on this concept, which combines topological properties and geographic structure in complex networks. Ma et al. [12] conducted a study on the robustness of complex networks under incomplete information, employing link prediction methods to restore missing network topology information and identify critical nodes. Lou et al. [14] introduced LFR-CNN, a CNN-based approach that utilizes learning feature representation for predicting network robustness and exhibits excellent predictive performance, including notably smaller prediction errors.
However, the aforementioned research generally assumes that attacks on the network will always be successful, neglecting the important factor of the attack success rate (ASR). In fact, attacks may not succeed in real-world scenarios. For example, even if enemy forces launch an attack on a target within a military communication network, there is no guarantee of successfully destroying it. Figure 1 illustrates the main process of network disintegration under varying ASRs. Moreover, selecting an optimal attack strategy that leads to maximal destructiveness to the network is challenging due to the NP-hard nature of this problem [10]. Existing methods often struggle to achieve a desirable balance between effectiveness and computational efficiency.
Therefore, the purpose of this paper is to analyze network robustness when considering ASR under an optimal attack strategy. To achieve this purpose, a novel robustness measure called Robustness-ASR (RASR) is introduced, which utilizes mathematical expectations to evaluate network robustness when considering ASR. In addition, an efficient algorithm called PRQMC is proposed to calculate the RASR for large-scale networks. Furthermore, to assess the lower bound of network RASR, a new attack strategy called HBnnsAGP is proposed. The main contributions of this study are as follows:

•
We introduce and define a novel robustness measure called RASR, which utilizes mathematical expectations to assess network robustness when considering the ASR of each node.

•
To efficiently calculate the RASR for large-scale networks, we propose the PRQMC algorithm. PRQMC leverages randomized quasi-Monte Carlo (QMC) integration to approximate the RASR with a faster convergence rate and utilizes parallelization to speed up the calculation.

•
To assess the lower bound of network RASR, we present a new attack strategy called HBnnsAGP. In HBnnsAGP, a novel centrality measure called BCnns is proposed to quantify the importance of a node.

•
The experimental results for six representative real-world networks demonstrate the effectiveness of the proposed methods compared with the baselines.
The rest of this paper is organized as follows. Section 2 provides an introduction to the preliminaries, including classical centrality measures, traditional network robustness measures, and the principles of Monte Carlo (MC) and QMC integration. Section 3 presents the proposed methods for analyzing network robustness when considering ASR, including the RASR, the PRQMC algorithm, and the HBnnsAGP attack strategy. The experiments and results are demonstrated in Section 4. Finally, Section 5 concludes the paper.

Preliminaries
A complex network can be modeled as an unweighted, undirected graph G = (V, E), where V (|V| = N) and E (|E| = M) represent the set of nodes and the set of edges in network G, respectively. Network G can also be represented by its adjacency matrix A = (a_ij)_{N×N}: if node i and node j are connected, a_ij = 1; otherwise, a_ij = 0.

Centrality Measures
The concept of a centrality measure attempts to quantify how important a node is [24]. Here we introduce two classical centrality measures: degree centrality and betweenness centrality.

Degree Centrality (DC)
DC is the simplest measure of centrality. The DC of a node is defined by its degree, that is, its number of edges. The DC is formally defined as follows.
Definition 1. Given a network G = (V, E) with adjacency matrix A = (a_ij)_{N×N}, the DC of node i is defined as:

DC(i) = Σ_{j=1}^{N} a_ij.

The DC is frequently a reliable and effective measure of a node's importance. A higher DC value typically signifies a more critical node.

Betweenness Centrality (BC)
BC quantifies the number of shortest paths passing through a particular node in a network [25]. BC characterizes the extent to which a node acts as a mediator among all other nodes in a network. Nodes that lie on numerous shortest paths are likely to play a crucial role in information transmission, exhibiting higher BC values. The BC is defined as follows.
Definition 2. Given a network G = (V, E), the BC of node v in G is defined as:

BC(v) = Σ_{s ≠ v ≠ t} σ(s, t | v) / σ(s, t),

where v ∈ V, σ(s, t) is the total number of shortest paths from node s to node t, and σ(s, t | v) is the number of those paths that pass through node v.

Accumulated Normalized Connectivity
Traditionally, network robustness has been evaluated by calculating the size of the giant connected component (GCC) after the network has endured attacks. The accumulated normalized connectivity (ANC), also known as R, is a well-known measure of network robustness for node attacks [10,16,26]. The ANC is defined as follows.
Definition 3. For a network G = (V, E) with |V| = N, given an attack sequence of nodes (v_1, v_2, ..., v_N), where v_i ∈ V indicates the ith node to be attacked, the ANC of G under this attack sequence is defined as:

ANC = (1/N) Σ_{k=1}^{N} σ_gcc(G\{v_1, v_2, ..., v_k}) / σ_gcc(G),

where σ_gcc(G\{v_1, v_2, ..., v_k}) is the size of the GCC of the residual network after the sequential removal of the nodes {v_1, v_2, ..., v_k} from G, and σ_gcc(G) is the initial size of the GCC of G before any nodes are removed. The normalization factor 1/N ensures that the robustness of networks with different sizes can be compared.
A larger ANC value indicates a higher level of network robustness against attacks. Additionally, the ANC can be used to assess the destructiveness of attacks, as lower ANC values correspond to more destructive attack strategies. The ANC value can be viewed as an estimate of the area beneath the ANC curve, which is plotted with the horizontal axis as k/N and the vertical axis as σ_gcc(G\{v_1, v_2, ..., v_k})/σ_gcc(G).
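As an illustration, the ANC of Definition 3 can be computed directly by simulating the removal sequence and measuring the GCC of each residual network. The sketch below is our own plain-Python encoding (adjacency dicts and a BFS-based GCC routine; the function names are not from the paper). On a 4-node path graph, attacking the central node first gives ANC = 0.25, lower (more destructive) than the 0.375 obtained by attacking the endpoints first:

```python
from collections import deque

def gcc_size(adj, removed):
    # size of the giant connected component, ignoring removed nodes
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def anc(adj, seq):
    # Definition 3: average of GCC ratios over the sequential removals
    n = len(adj)
    g0 = gcc_size(adj, set())
    removed, total = set(), 0.0
    for v in seq:
        removed.add(v)
        total += gcc_size(adj, removed) / g0
    return total / n

# path graph 0-1-2-3: middle-first attack is more destructive
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
middle_first = anc(path, [1, 2, 0, 3])   # 0.25
ends_first = anc(path, [0, 3, 1, 2])     # 0.375
```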

Monte Carlo Integration
Monte Carlo (MC) integration is a numerical technique that is particularly useful for higher-dimensional integrals [27]; Caflisch [28] provides a comprehensive review of the method. The integral of a Lebesgue integrable function f(X) can be expressed as the average, or expectation, of the function evaluated at random locations. Considering X as a random variable uniformly distributed on the one-dimensional unit interval [0, 1], the integration of f(X) over this interval can be represented as follows:

I[f] = ∫_0^1 f(x) dP(x),

in which P(X) is the probability measure of X on the interval [0, 1]. Since X is uniformly distributed, dP(x) = dx, and therefore

I[f] = ∫_0^1 f(x) dx = E[f(X)].

Similarly, for an integral on the unit hypercube [0, 1]^N in N dimensions,

I[f] = ∫_{[0,1]^N} f(X) dX = E[f(X)],

in which X = (x_1, x_2, ..., x_N) is a uniformly distributed vector in [0, 1]^N, where x_i ∈ [0, 1], i ∈ {1, 2, ..., N}. Given that the hyper-volume of [0, 1]^N is equal to 1, [0, 1]^N can be viewed as the total probability space.
The MC integration method approximates definite integrals utilizing random sampling. It draws K uniform samples from [0, 1]^N, in turn generating the point set {X_1, X_2, ..., X_K}. The empirical approximation of the integral I[f] is then procured by computing the mean of the K sample outcomes f(X_i), which can be expressed as follows:

I_K[f] = (1/K) Σ_{i=1}^{K} f(X_i).

According to the Strong Law of Large Numbers [29], this approximation is convergent with probability 1; that is,

lim_{K→∞} I_K[f] = I[f].

Figure 2 illustrates the application of the MC integration method in approximating definite integrals over a one-dimensional unit interval. As shown in Figure 2a, MC integration approximates the area under the curve of the integral by summing the areas of the bars corresponding to the sampled points. The bars are rearranged sequentially to avoid overlap on the X-axis, as shown in Figure 2b.
An example of the MC integration method for approximating a definite integral over a one-dimensional unit interval. (a) illustrates the approximation of the integral by summing the areas of bars that correspond to the sampled points. Each bar's height represents the value of f(X) at X_i, and its width is 1/K, where K denotes the total number of samples. (b) demonstrates the sequential rearrangement of the bars to prevent overlapping on the X-axis, ensuring a clear visualization of the areas.
The error of MC integration is ε_K = I[f] − I_K[f]. By the Central Limit Theorem [29], for any a, b where a < b, we have:

lim_{K→∞} P(a ≤ (√K / σ) ε_K ≤ b) = P(a ≤ ν ≤ b),

where ν is a standard normal random variable, and σ is the square root of the variance of f, given by

σ² = ∫_{[0,1]^N} (f(X) − I[f])² dX.

When K is sufficiently large, we have ε_K ≈ σ K^{-1/2} ν. This implies that the order of the error convergence rate of MC integration is O(K^{-1/2}) [30], which means that the integration error decreases at a rate proportional to the square root of the total number of samples K. That is, "an additional factor of 4 increase in computational effort only provides an additional factor of 2 improvements in accuracy" [28].
In practical applications, the MC integration method draws K uniform samples from an N-dimensional pseudo-random sequence (PRS) generated by a computer to obtain the point set {X_1, X_2, ..., X_K}.
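In practice the MC estimator is a one-liner. The sketch below (our own toy example, using Python's built-in pseudo-random generator as the PRS) estimates ∫_0^1 x² dx = 1/3:

```python
import random

def mc_integrate(f, k, seed=0):
    # average of f at k pseudo-random points in [0, 1]
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(k)) / k

# with K = 100,000 samples the error is on the order of K^(-1/2) ~ 0.003
est = mc_integrate(lambda x: x * x, 100_000)
```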

Quasi-Monte Carlo Integration
The quasi-Monte Carlo (QMC) integration is a method of numerical integration that operates in the same way as MC integration, but instead uses a deterministic low-discrepancy sequence (LDS) [31] to approximate the integral. The advantage of using an LDS is a faster rate of convergence. QMC integration has a rate of convergence close to O(K^{-1}), which is much faster than the rate for MC integration, O(K^{-1/2}) [32].
Using the QMC integration method for approximating definite integrals is similar to the MC integration method. This can be expressed as:

I[f] ≈ (1/K) Σ_{i=1}^{K} f(Y_i),

where {Y_1, Y_2, ..., Y_K} is the point set consisting of the first K points of an N-dimensional LDS. The error order of the QMC integration can be determined by the Koksma-Hlawka inequality [33,34]; that is,

|I[f] − (1/K) Σ_{i=1}^{K} f(Y_i)| ≤ V(f) D*_K,

where V(f) is the Hardy-Krause variation of the function f, and D*_K is the star discrepancy of {Y_1, Y_2, ..., Y_K}, defined as:

D*_K = sup_{B ∈ J*} | #{i : Y_i ∈ B} / K − λ(B) |,

where J* is the family of axis-aligned boxes of the form Π_{j=1}^{N} [0, b_j) contained in [0, 1]^N and λ(B) is the Lebesgue measure of B. For more detailed information, please refer to [28].
For an N-dimensional LDS comprising K points, the star discrepancy of the sequence is O(K^{-1} (log K)^N). Consequently, for a function F with V(F) < ∞, a QMC approximation based on this sequence yields a worst-case error bound, via the Koksma-Hlawka inequality, converging at a rate of O(K^{-1} (log K)^N) [35]. Since (log K)^N grows far more slowly than K, the QMC integration convergence rate approaches O(K^{-1}) for low-dimensional cases [30], which is asymptotically superior to MC.
Figure 3 illustrates the clear differences between the MC and QMC integration methods. The subfigures provide a visual representation of their respective point distributions and demonstrate their application for approximating definite integrals over a one-dimensional unit interval. The points generated from an LDS exhibit greater uniformity than the points generated by a PRS. Consequently, with the same number of sampling points, an LDS fills the integration space more uniformly, resulting in a faster convergence rate. Panels (b,c) depict the MC integration for approximating a definite integral over a one-dimensional unit interval, while panels (e,f) present the corresponding QMC integration.
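One way to reproduce this comparison is with scipy's Sobol generator (one common LDS implementation; the helper below is our own sketch, not the paper's code). On the same toy integrand as before, 1024 Sobol points already give an error well below the MC stochastic error at the same K:

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, k, dim=1):
    # unscrambled Sobol points are deterministic; k should be a power of
    # two to preserve the sequence's balance properties
    sampler = qmc.Sobol(d=dim, scramble=False)
    pts = sampler.random(k)                  # k points in [0, 1)^dim
    return float(np.mean(f(pts)))

# toy integrand: integral of x^2 over [0, 1] equals 1/3
est = qmc_integrate(lambda x: x[:, 0] ** 2, 1024)
```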

Methods
In this section, we first introduce the major problem we focus on in this paper. Then, we give the details of the proposed methods for analyzing network robustness when considering ASR, including the RASR, the PRQMC algorithm, and the HBnnsAGP attack strategy.

Problem Formalization
Typically, it is assumed that removing a node will also remove all of its connected edges. Therefore, in this paper, we only consider node attack strategies.
For a network G = (V, E) with |V| = N, a node attack strategy can be represented as a sequence Seq = (v_1, v_2, ..., v_N), where v_i ∈ V indicates the ith node to be attacked. Given a predefined metric Φ(Seq) to measure network robustness against attacks, the primary goal is to evaluate the lower bound of network robustness. Therefore, the objective is to minimize Φ(Seq), as presented below:

Seq* = argmin_{Seq} Φ(Seq).

To achieve this objective, it is crucial to determine the optimal node attack strategy that minimizes Φ(Seq).

The Proposed Robustness Measure RASR
The ANC, as defined in Definition 3, does not consider the ASR; equivalently, it covers only the special case in which the ASR of each node is 100%. To this end, the proposed robustness measure RASR utilizes mathematical expectations to assess network robustness when considering ASR.
Before introducing the RASR, we first present a weighted ANC (called ANCw), which takes into account both the state of the attack sequence and the associated attack cost.
For a network G = (V, E) with N nodes, Seq = (v_1, v_2, ..., v_N) is an attack sequence, where v_i ∈ V. The state of Seq is denoted as a random variable S = (s_{v_1}, s_{v_2}, ..., s_{v_N}), where s_{v_i} = 1 if the attack on node v_i succeeds and s_{v_i} = 0 otherwise. Then, the ANCw is defined as follows.
Definition 4. The ANCw of G under an attack sequence Seq with state S is defined as:

ANCw(Seq, S) = (1/N) Σ_{k=1}^{N} ϕ(v_k) · σ_gcc(G \ A_k) / σ_gcc(G),

where A_k = {v_i | i ≤ k, s_{v_i} = 1} is the set of nodes successfully attacked among the first k, σ_gcc is the same as defined in Definition 3, and A_0 = ∅ indicates that no nodes have been attacked. ϕ(v_k) is a weighted function reflecting the attack cost:

ϕ(v_k) = 0 if v_k is an isolated node in the residual network when it is attacked, and ϕ(v_k) = 1 otherwise.

There are two main reasons for using the weighted function ϕ(v_k). First, it is important for an attacker to choose an optimal attack strategy at a minimum attack cost to efficiently disintegrate the network [11,23]. Second, as illustrated in Figure 1, as more nodes are removed, the network eventually fragments into isolated nodes, thereby losing its functionality as a network. Therefore, this paper sets the attack cost of an isolated node to 0.
Let P_v = (p_{v_1}, p_{v_2}, ..., p_{v_N}) represent the ASR of each node corresponding to Seq, where p_{v_i} represents the ASR of node v_i. Assuming that attacks on different nodes are independent, the probability of S is

P(S) = Π_{i=1}^{N} ρ_i,

where ρ_i = p_{v_i} if s_{v_i} = 1 and ρ_i = 1 − p_{v_i} if s_{v_i} = 0. Based on the above formulas, the proposed RASR can be defined as follows.
Definition 5. Considering the ASR of each node, the robustness of a network G against an attack sequence Seq can be quantified by the RASR, which is defined as:

RASR(Seq) = E(ANCw(Seq, S)) = Σ_{S ∈ Ω} P(S) · ANCw(Seq, S),

where S is a random variable representing the state of Seq, Ω is the sample space of S, and E(ANCw(Seq, S)) is the expectation of the ANCw.
In theory, the value of RASR can be calculated exactly from the expression in Definition 5 once all the samples of S in the sample space Ω are enumerated. However, this confronts "the curse of dimensionality" [36] when applied to networks with a large number of nodes: the size of Ω grows exponentially as 2^N. As a result, the analytical approach becomes infeasible when N is significantly large.
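For very small networks, the expectation in Definition 5 can still be enumerated exactly, which is useful as a ground truth when checking sampling-based estimates. The sketch below is our own encoding: it uses the plain ANC of Definition 3 as the per-state measure in place of the paper's cost-weighted ANCw, and treats state s_i = 1 as a successful attack that removes node seq[i]:

```python
from collections import deque
from itertools import product

def gcc_size(adj, removed):
    # size of the giant connected component, ignoring removed nodes
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def anc_state(adj, seq, state):
    # one realization: only successful attacks (state[i] == 1) remove nodes
    n, g0 = len(adj), gcc_size(adj, set())
    removed, total = set(), 0.0
    for v, s in zip(seq, state):
        if s:
            removed.add(v)
        total += gcc_size(adj, removed) / g0
    return total / n

def exact_rasr(adj, seq, p):
    # enumerate all 2^N states, weight each by its probability (Definition 5)
    total = 0.0
    for state in product((0, 1), repeat=len(seq)):
        prob = 1.0
        for s, pi in zip(state, p):
            prob *= pi if s else 1 - pi
        total += prob * anc_state(adj, seq, state)
    return total

path3 = {0: [1], 1: [0, 2], 2: [1]}
r_half = exact_rasr(path3, [1, 0, 2], [0.5, 0.5, 0.5])
```

When every ASR is 1, the measure reduces to the deterministic ANC of the sequence; when every ASR is 0, no node is ever removed and the value is 1.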

The Proposed PRQMC Algorithm
To efficiently calculate the RASR for large-scale networks, the PRQMC algorithm is proposed, which leverages randomized QMC integration to approximate the RASR with a faster convergence rate and utilizes parallelization techniques to speed up the calculation. In the following, we first introduce the RASR calculation model based on QMC integration and then give the PRQMC algorithm.

RASR Calculation Model Based on QMC Integration
The RASR of a network G, as defined in Definition 5, can be expressed as a Lebesgue integral, in line with the principle of MC integration (see Section 2); that is,

RASR(Seq) = ∫_Ω ANCw(Seq, S) dP(S),

where S = (s_{v_1}, s_{v_2}, ..., s_{v_N}) denotes a random variable representing the state of an attack sequence Seq, Ω is the sample space of S, and P(S) is the probability measure of S.
Let P_v = (p_{v_1}, p_{v_2}, ..., p_{v_N}) represent the ASR of each node corresponding to Seq, and let X = (x_1, x_2, ..., x_N) be a uniformly distributed vector in [0, 1]^N, where x_i ∈ [0, 1], i ∈ {1, 2, ..., N}. Then, S = (s_{v_1}, s_{v_2}, ..., s_{v_N}) can be represented as follows:

s_{v_i} = 1 if x_i < p_{v_i}, and s_{v_i} = 0 otherwise.

When Seq is determined, ANCw(Seq, S) can be represented as a function of X; that is,

F(X) = ANCw(Seq, S(X)).

Substituting this function into the integral above and transforming the integral space from Ω to [0, 1]^N, we obtain the following expression for RASR:

RASR(Seq) = ∫_{[0,1]^N} F(X) dX.

This equation represents the integration of F(X) with respect to the probability measure P(S) over the N-dimensional unit hypercube [0, 1]^N. For the given network G, the sample space Ω has a size of 2^N. Let the states of Seq be S_i, where i ∈ {1, 2, ..., 2^N}. Based on P_v, the unit hypercube [0, 1]^N can be divided into 2^N regions denoted by Q_i, where region Q_i corresponds to state S_i. Figure 4 illustrates this partition for the case N = 2. The integral can then be decomposed as:

RASR(Seq) = Σ_{i=1}^{2^N} ∫_{Q_i} F(X^{(i)}) dX^{(i)},

where X^{(i)} is a vector uniformly distributed within region Q_i. The Lebesgue measure of region Q_i in [0, 1]^N, denoted by λ_N(Q_i), is equivalent to the probability measure of S_i, denoted P(S_i). Based on the principle of MC integration, we have:

∫_{Q_i} F(X^{(i)}) dX^{(i)} = λ_N(Q_i) · ANCw(Seq, S_i) = P(S_i) · ANCw(Seq, S_i).

Combining the above, we obtain:

RASR(Seq) = Σ_{i=1}^{2^N} P(S_i) · ANCw(Seq, S_i),

which recovers the expectation in Definition 5. The RASR of a network can therefore be approximated using the QMC integration method. The approximation of RASR, denoted by R, is defined as:

R = (1/K) Σ_{i=1}^{K} F(Y_i),

where {Y_1, Y_2, ..., Y_K} represents a set of points obtained from an N-dimensional LDS, K is the total number of samples, and the function F(X) is as defined above.
The error bound of the QMC integral is determined by the star discrepancy of the chosen LDS, making the selection of the LDS important for improving the accuracy of approximations. Two frequently used LDSs are the Halton sequence and the Sobol sequence [37]. In this research, the Sobol sequence is adopted, as it demonstrates better performance in higher dimensions compared to the Halton sequence [38].

Parallel Randomized QMC (PRQMC) Algorithm
Despite the faster convergence rate of the QMC integration method compared to MC integration, it still requires a large number of samples to calculate the average value. Furthermore, the calculation of the function ANCw(Seq, S), typically done through attack simulations, demands considerable computational resources, especially for large-scale networks [39]. Consequently, the computational process of obtaining R for large-scale networks remains time-consuming. Additionally, due to the deterministic nature of the LDS, the QMC integration method can be seen as a deterministic algorithm, which presents challenges in assessing the reliability of numerical integration results and can potentially lead to being stuck in local optima. In light of these issues, the PRQMC algorithm capitalizes on the benefits of the randomized QMC (RQMC) method and parallelization.
The PRQMC algorithm improves computational efficiency through parallelization. The computational cost of sampling the attack sequence's state S is significantly lower than that of computing the function ANCw(Seq, S). Therefore, by first sampling a sufficient number of states S, it is possible to calculate R by parallelizing the computation of the function ANCw(Seq, S) over the samples. This approach effectively accelerates the calculation process by distributing the task across multiple processors or computing nodes.
Additionally, the PRQMC algorithm enhances randomness by randomly sampling points from the LDS, providing unbiased estimation and improved variance reduction capabilities. This is particularly advantageous in high-dimensional problems, where RQMC often outperforms QMC in terms of accuracy and efficiency [40].
The procedure of the PRQMC algorithm is presented in Algorithm 1, which consists of two main stages: a "sampling stage" and a "paralleling stage". In the sampling stage, we first randomly sample K points {Y_1, Y_2, ..., Y_K} from an N-dimensional Sobol sequence and then determine K states of the attack sequence, {S_1, S_2, ..., S_K}, by comparing the value of each dimension of the sampled points with the ASR of the corresponding node. In the paralleling stage, we parallelize the computation of the function ANCw(Seq, S_i) and then obtain R by averaging the values of ANCw(Seq, S_i).
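The two stages above can be sketched as follows. This is a structural sketch, not the paper's implementation: it uses scipy's scrambled Sobol generator for the randomized LDS, a thread pool standing in for the multi-process parallelization, and a trivial placeholder in place of the ANCw attack simulation (the placeholder's expectation is 1 − mean(p), which makes the estimate easy to check):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.stats import qmc

def states_from_sobol(p, k, seed=0):
    # Sampling stage: draw k scrambled Sobol points and map them to
    # attack-success states, s_i = 1 iff y_i < p_i (p_i = ASR of node i)
    sampler = qmc.Sobol(d=len(p), scramble=True, seed=seed)
    pts = sampler.random(k)
    return (pts < np.asarray(p)).astype(int)

def eval_state(state):
    # stand-in for the ANCw(Seq, S) attack simulation: the fraction of
    # failed attacks, whose expectation is 1 - mean(p)
    return 1.0 - float(state.mean())

def prqmc(p, k=256, workers=4):
    states = states_from_sobol(p, k)
    # Paralleling stage: evaluate each sampled state concurrently
    with ThreadPoolExecutor(max_workers=workers) as ex:
        vals = list(ex.map(eval_state, states))
    return sum(vals) / k

r = prqmc([0.2, 0.8, 0.5], k=256)
```

In a real run, `eval_state` would be the expensive attack simulation and the pool would be a process pool, since that evaluation, not the sampling, dominates the cost.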

The Proposed HBnnsAGP Attack Strategy
To assess the lower bound of network RASR, a new attack strategy called High BCnns Adaptive GCC-Priority (HBnnsAGP) is presented. In HBnnsAGP, a novel centrality measure called BCnns is proposed to quantify the significance of a node, and the GCC-priority attack strategy is utilized to improve attack effectiveness. Algorithm 2 describes the procedure of HBnnsAGP, which contains two steps: "obtaining the first part of Seq" and "obtaining the second part of Seq". In the first step, the algorithm obtains the first part of the attack sequence by iteratively removing the node with the highest BCnns in the GCC and recalculating BCnns for the remaining nodes until only isolated nodes remain in the residual network. In the second step, the algorithm arranges these isolated nodes in descending order of their DC values in the initial network to obtain the second part of the attack sequence. This step is aimed at improving the effectiveness of attacks when the ASR is below 100%. It is important to note that nodes that are isolated when the ASR is 100% may no longer be isolated when the ASR is lower, as shown in Figure 1. Additionally, previous research has shown that there is minimal difference in destructiveness between simultaneous attacks and sequential attacks based on DC [9]. Therefore, by sorting these isolated nodes in descending order of their DC values from the initial network (similar to the approach used in simultaneous attacks), the second step further improves the effectiveness of attacks when the ASR is less than 100%.
Algorithm 2 (fragment). Step 1: obtaining the first part of Seq — while non-isolated nodes remain, select attack_node as the node with the highest BCnns in the GCC, append it to Seq, and update G ← G \ {attack_node}. Step 2: obtaining the second part of Seq — V_r ← V_0 \ Seq; sort the nodes of V_r in descending order of their DC values from G_0; Seq ← Seq + V_r; return Seq.
In the following, we first introduce the BCnns and then give the GCC-priority attack strategy.

Non-Central Node Sampling Betweenness Centrality (BCnns)
In contrast to BC (see Definition 2), which evaluates a node's role as a mediator based on the number of shortest paths through it over all node pairs, BCnns quantifies the importance of nodes acting as bridges between different network communities by counting the number of shortest paths that pass through a node for specific pairs of non-central nodes (nodes located on the periphery of the network and of lower importance). These bridge nodes typically serve as mediators for non-central nodes across different communities. The BCnns is defined as follows.
Definition 7. For a network G = (V, E) with N nodes, the BCnns of node v in network G is:

BCnns(v) = Σ_{s ∈ S, t ∈ T} σ(s, t | v) / σ(s, t),

where S, T ⊂ V_nns, V_nns is the set of non-central nodes sampled from V, V_nns ⊂ V, and S ∩ T = ∅. σ(s, t) and σ(s, t | v) have the same meaning as in Definition 2.
By selecting the appropriate pairs of non-central nodes, BCnns can more effectively measure the significance of nodes as bridges between different communities in a network. While these bridge nodes may not have the highest BC value, they are crucial for maintaining overall network connectivity and could potentially have the highest BCnns value.
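If networkx is available, a BCnns-style score can be obtained from its subset betweenness, which restricts the shortest-path count to given source and target sets. This is our own shorthand, not the paper's implementation; on a barbell graph (two 4-cliques joined through a single path node), the bridge node scores at the top when S and T are drawn from opposite cliques:

```python
import networkx as nx

def bcnns(G, S, T):
    # betweenness restricted to source set S and target set T, i.e. the
    # sum of sigma(s, t | v) / sigma(s, t) over s in S, t in T
    return nx.betweenness_centrality_subset(G, sources=list(S), targets=list(T))

# two 4-cliques (nodes 0-3 and 5-8) joined through node 4
G = nx.barbell_graph(4, 1)
scores = bcnns(G, S=[0, 1], T=[7, 8])
```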
The definition of BCnns highlights the importance of selecting suitable nodes for the sets S and T. Thus, we propose an algorithm called SelectionST for node selection. Algorithm 3 describes the procedure of SelectionST. Initially, the nodes are sorted in ascending order of their DC values, and the first N_1 nodes with the lowest DC values are selected to create the non-central node set V_nns, since nodes with lower DC values typically have lower centrality and can be considered non-central. Next, to achieve a more balanced sampling, V_nns is divided into two subsets: V_nns^odd, containing the nodes at odd indices, and V_nns^even, containing the nodes at even indices. Lastly, N_2 nodes are randomly sampled from V_nns^odd to create the set S, and N_2 nodes are similarly sampled from V_nns^even to form the set T. The values of N_1 and N_2 are chosen based on the size of the network and the node degree distribution. Typically, both N_1 and N_2 are much smaller than the total number of nodes N; therefore, BCnns has higher computational efficiency than BC, especially for large-scale networks.
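The SelectionST procedure can be sketched in a few lines (our own function names; whether "odd/even indices" is 0- or 1-based is not specified in the text, so the split below is simply by alternating positions in the DC-sorted list):

```python
import random

def selection_st(degrees, n1, n2, seed=0):
    # degrees: dict mapping node -> DC value in the initial network
    rng = random.Random(seed)
    ranked = sorted(degrees, key=degrees.get)     # ascending by DC
    v_nns = ranked[:n1]                           # non-central node set
    v_odd, v_even = v_nns[0::2], v_nns[1::2]      # alternating split for balance
    S = rng.sample(v_odd, n2)                     # sources from one half
    T = rng.sample(v_even, n2)                    # targets from the other
    return S, T

# toy example: node i has degree i, so nodes 0-5 are the non-central set
degrees = {i: i for i in range(10)}
S, T = selection_st(degrees, n1=6, n2=2)
```

By construction S and T are disjoint, since they are sampled from the two disjoint halves of V_nns.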
Figure 5 demonstrates the differences between BC and BCnns. Specifically, Figure 5a identifies the non-central nodes in red, Figure 5b scales node sizes by BC values, and Figure 5c scales node sizes by BCnns values. Notably, node 14 plays a critical bridging role between two communities, a role that BCnns captures more accurately than BC.

GCC-Priority Attack Strategy
As the attack progresses, the network fragments into connected components of varying sizes. The importance of these components varies within the residual network. The GCC refers to the largest connected component, containing the most nodes. The destruction of the GCC accelerates the collapse of the network. The GCC-priority attack strategy enhances the attack's effectiveness by targeting nodes within the GCC at each stage of the attack process.

Experimental Studies
In this section, we present a series of experiments to verify the effectiveness of our proposed methods. First, we introduce the experimental settings, including network datasets and baselines. Next, we compare the proposed PRQMC method with the baselines. Additionally, we demonstrate the effectiveness of the proposed HBnnsAGP attack strategy. Finally, we present further discussions of network robustness when considering the ASR. In our experiments, we selected six classic real-world complex networks of different scales: Karate [41], Krebs [10], Airport [42], Crime [42], Power [42], and Oregon1 [43].

1.
Karate: This is a network depicting relationships in a karate club recorded by Zachary.
Nodes represent club members, and each edge connects two members of the club.

2.
Krebs: The network is associated with the 9/11 attack. The nodes represent the terrorists involved in the network, while the edges depict their communication patterns.

3.
Airport: This is a network consisting of direct air routes between American airports in 1997. Each node represents an airport, and the edges represent connections between airports.

4.
Crime: This network represents a criminal network that is derived from a bipartite network of individuals and criminal activities. In this network, each node represents an individual, and an edge connects two individuals involved in the same criminal activity.

5.
Power: This network represents the high-voltage power grid in the western United States. The nodes represent transformers, substations, and generators, while the edges represent high-voltage transmission lines.

6.
Oregon1: This network showcases peering information of Autonomous Systems (ASs) inferred from Oregon route-views. Each AS is represented by a node, and the edges depict the relationships between the ASs.
Table 1 provides a detailed summary of these networks, with N and M representing the number of nodes and edges, respectively; <k> and MaxDeg denote the network's average degree and maximal degree, respectively; and C denotes the average clustering coefficient. The topologies of these networks are shown in Figure 6.

Comparison Methods
To show the effectiveness of the proposed PRQMC algorithm, we compare it with the MC and QMC methods.
• MC: This calculates the estimated value of R using original MC integration, with a set of points generated from a PRS.
• QMC: This calculates the estimated value of R using original QMC integration, with a set of points generated from an LDS.
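The PRS-versus-LDS distinction can be seen in a toy integration problem. This sketch (not the paper's code) assumes SciPy's `scipy.stats.qmc` module and estimates a definite integral whose exact value is known, using the same number of points for both sequences.

```python
import numpy as np
from scipy.stats import qmc

# Estimate the integral of f(x, y) = x * y over the unit square
# (exact value 0.25) with a PRS (MC) and a scrambled Sobol LDS (QMC).
f = lambda pts: pts[:, 0] * pts[:, 1]
n = 1024  # a power of two, as Sobol sequences prefer

prs = np.random.default_rng(0).random((n, 2))          # pseudo-random set
lds = qmc.Sobol(d=2, scramble=True, seed=0).random(n)  # low-discrepancy set

mc_est = f(prs).mean()
qmc_est = f(lds).mean()
print(abs(mc_est - 0.25), abs(qmc_est - 0.25))
```

With the same budget of points, the low-discrepancy set covers the domain more evenly, which is the source of QMC's faster convergence rate.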
To show the effectiveness of the proposed HBnnsAGP attack strategy, we compare it with random failures and three representative baseline attack strategies, including HDA [10], HBA [25], and FINDER [10].

• Random Failures (RF): Nodes are removed from the network in a random order.
• High Degree Adaptive (HDA): HDA is an adaptive version of the high-degree method that ranks nodes by their DC and sequentially removes the node with the highest DC. It recomputes the DC of the remaining nodes after each removal and is recognized for its superior computational efficiency.
• High Betweenness Adaptive (HBA): HBA is an adaptive version of the high-betweenness method. It iteratively removes the node with the highest BC and recomputes the BC of the remaining nodes. HBA has long been considered the most effective strategy for the network dismantling problem in the node-unweighted scenario [44]; however, its high computational cost prohibits its use on medium- and large-scale networks.
• FINDER: FINDER is an algorithm based on deep reinforcement learning that achieves superior performance in terms of both effectiveness and efficiency.
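The adaptive step in HDA, i.e., recomputing degrees after every removal, can be sketched in a few lines. This is a minimal illustration assuming a NetworkX graph; tie-breaking and other implementation details of the baseline may differ.

```python
import networkx as nx

def hda_attack(G):
    """High Degree Adaptive (HDA): repeatedly remove the node with the
    highest degree, recomputing degrees after every removal."""
    G = G.copy()
    seq = []
    while G.number_of_nodes() > 0:
        v = max(G.degree, key=lambda kv: kv[1])[0]  # current highest-degree node
        seq.append(v)
        G.remove_node(v)
    return seq

G = nx.karate_club_graph()
seq = hda_attack(G)
print(seq[0])  # the first node removed is the degree-17 hub
```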
We implemented the proposed algorithms and the baselines in Python. All experiments were performed on a server with an AMD EPYC 7742 64-core processor @ 2.25 GHz and 1024 GB of RAM, running the Ubuntu 11.10 Linux operating system.

Comparison of the PRQMC with Baselines
This subsection presents comparison results that demonstrate the effectiveness of the proposed PRQMC algorithm on six real-world complex networks. Specifically, we compare PRQMC with two baselines: MC and QMC. All experiments use the same attack strategy, and the ASR of each node is randomly generated.
We first compare PRQMC with the baselines on the two small-scale networks (Karate and Krebs), because the precise RASR values of small-scale networks can be calculated analytically. For the large-scale networks (Airport, Crime, Power, and Oregon1), where the analytical method is not applicable, we instead use the standard deviation curve as the convergence criterion. Figures 7 and 8 compare the convergence and error of PRQMC and the baselines. The figures clearly illustrate that PRQMC achieves faster convergence and better accuracy with fewer samples than the baselines. Additionally, Table 2 compares the computational efficiency of PRQMC and the baselines, each with 5000 sampling iterations. In the PRQMC method, the number of parallel computing processes is set according to the network size: 25 processes for Karate and Krebs, and 100 processes for the other networks. The results in Table 2 indicate that PRQMC clearly outperforms the baselines in computational efficiency; in particular, it runs nearly 50 times faster than the QMC and MC methods on Oregon1.
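The standard-deviation convergence criterion used for the larger networks can be sketched as follows. This is a hedged illustration: the uniform samples stand in for per-sample estimates of R, and the tolerance value is a placeholder, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random(5000)  # placeholder per-sample estimates of R

n = np.arange(1, samples.size + 1)
running_mean = np.cumsum(samples) / n
running_sq = np.cumsum(samples ** 2) / n
running_var = np.maximum(running_sq - running_mean ** 2, 0.0)
stderr = np.sqrt(running_var / n)  # standard error of the running mean

# The standard-error curve shrinks roughly like 1/sqrt(n); convergence is
# declared once it falls below a chosen tolerance.
print(round(float(stderr[99]), 4), round(float(stderr[-1]), 4))
```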

Comparison of the HBnnsAGP with Baselines
In this subsection, we demonstrate the effectiveness and efficiency of the proposed HBnnsAGP attack strategy. Specifically, we compare HBnnsAGP with HDA, HBA, FINDER, and RF on six real-world complex networks under different ASR conditions. We first employ the various strategies to generate their corresponding attack sequences and then use the PRQMC method to calculate the R value under the following ASR distribution scenarios.
7. ASR = 50% for the first 30% of nodes: In the attack sequence generated by each attack strategy, the ASR of the first 30% of nodes is set to 50%.
8. Random ASR: The ASR of each node is randomly set between 50% and 100%. To obtain more reliable results, we report the average of 10 experimental runs.
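The per-node ASR vectors for these two scenario types can be constructed as in the following sketch, where the network size and attack sequence are placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 100                    # placeholder network size
attack_seq = np.arange(n_nodes)  # placeholder attack sequence from some strategy

# Scenario: ASR = 50% for the first 30% of the attack sequence, 100% after.
asr_first30 = np.ones(n_nodes)
asr_first30[: int(0.3 * n_nodes)] = 0.5

# Scenario: random ASR, each node drawn uniformly from [50%, 100%].
asr_random = rng.uniform(0.5, 1.0, size=n_nodes)

print(asr_first30[0], asr_first30[-1])
```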
The sample numbers (N1 and N2) for the different networks used in HBnnsAGP are presented in Table 3. Table 4 presents the R values of the networks in the four specified scenarios. The data demonstrate that HBnnsAGP outperforms the other attack strategies in terms of destructiveness in the majority of cases. On average, the destructiveness of HBnnsAGP increases by 7.01%, 4.05%, 7.62%, and 40.51% compared to FINDER, HBA, HDA, and RF, respectively. Table 5 compares the computation times of HBnnsAGP and the baselines. As the network size increases, the computation time of the HBA method becomes excessively long. In contrast, HBnnsAGP maintains commendable computational efficiency even for larger-scale networks; for the Oregon1 network, HBnnsAGP is approximately 28 times faster than HBA. While the computational efficiency of HBnnsAGP slightly lags behind that of FINDER and HDA on larger-scale networks, it surpasses them in attack destructiveness. Figure 9 shows the ANCw curves of the networks under the various attack strategies when the ASR of each node is set to 100%; in this scenario, the state of the attack sequence is unique. The figure shows that HBnnsAGP excels at identifying critical nodes, disrupting the network structure more effectively than the other methods. Hence, the effectiveness of the proposed HBnnsAGP attack strategy is verified.

Further Discussions about Network Robustness Considering ASR
The role of the ASR in determining network robustness is a complex yet critical aspect to consider when assessing the effectiveness of an attack strategy. A higher ASR implies a more successful attack, leading to a greater extent of network disruption. Conversely, a lower ASR indicates a more robust network that can resist the attack without significant damage.
Our analysis, as evidenced by the data presented in Table 4, clearly indicates that a decrease in ASR corresponds to an increase in network robustness. This is because nodes with a lower ASR are less likely to be destroyed, thereby enhancing the network's resilience. This trend is consistent across all strategies, including HBnnsAGP, FINDER, HBA, HDA, and RF. Specifically, for every 10% decrease in ASR, the average R value for the HBnnsAGP strategy increases by approximately 7-8, indicating a significant improvement in network robustness. This suggests that enhancing node protection to reduce the ASR can effectively bolster the robustness of a network, an insight that is crucial for designing more robust networks.
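This trend can be reproduced with a plain Monte Carlo sketch (not the paper's PRQMC implementation): attack the Karate network in static high-degree order and estimate an R-style robustness value under two uniform ASR levels. The estimator below, where R accumulates the normalized giant-component size after each attack, is a simplified stand-in for the paper's definition.

```python
import numpy as np
import networkx as nx

def estimate_R(G, seq, asr, n_samples=200, seed=0):
    """Plain MC estimate of robustness under per-node attack success
    rates: each attack succeeds with its node's probability, and R
    averages the normalized giant-component size over the attack process."""
    rng = np.random.default_rng(seed)
    N = G.number_of_nodes()
    total = 0.0
    for _ in range(n_samples):
        H = G.copy()
        acc = 0.0
        for v, p in zip(seq, asr):
            if rng.random() < p:       # attack on v succeeds
                H.remove_node(v)
            gcc = max((len(c) for c in nx.connected_components(H)), default=0)
            acc += gcc / N
        total += acc / N
    return total / n_samples

G = nx.karate_club_graph()
seq = sorted(G.nodes, key=G.degree, reverse=True)  # static high-degree order
R_full = estimate_R(G, seq, [1.0] * len(seq))      # every attack succeeds
R_half = estimate_R(G, seq, [0.5] * len(seq))      # attacks succeed half the time
print(R_full < R_half)  # lower ASR -> larger R, i.e., a more robust network
```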
Interestingly, we found that enhancing the protection of a small subset of critical nodes, and thereby reducing their ASR, can effectively enhance network robustness. This is demonstrated by Scenarios 6 and 7 in Table 4: merely reducing the ASR of the first 30% of nodes in the attack sequence (Scenario 7) yields approximately 78.25% of the robustness improvement observed in Scenario 6. This highlights the importance of identifying and protecting key nodes within a network. By allocating resources to enhance the protection of these crucial nodes, the robustness of the network can be significantly improved. This strategy is particularly beneficial when resources for network protection are limited and must be allocated with priority.
However, while the ASR is a valuable metric for evaluating network robustness, the factors governing robustness can vary significantly across domains, so the ASR should be considered alongside other network characteristics to achieve a comprehensive evaluation. For example, in a power grid, the failure of a single power station can trigger load redistribution, causing overloads and subsequent failures in other parts of the system; such cascading failures can lead to widespread power outages. Thus, improving the protection of individual power stations without considering the overall system dynamics may not substantially enhance the system's robustness.
Therefore, future research could focus on developing more sophisticated metrics that account for the complexity and specific characteristics of different domains, thereby achieving a more accurate and detailed evaluation of network robustness. It would also be interesting to explore how to optimally allocate limited protection resources to enhance the resilience of critical nodes.

Conclusions
In this paper, we analyzed the robustness of networks when considering the ASR. First, we introduced a novel metric called RASR to assess network robustness in this scenario. Then, we proposed the PRQMC algorithm to efficiently calculate the RASR for large-scale networks; PRQMC utilizes RQMC integration to approximate the RASR with a faster convergence rate and employs parallelization to speed up the calculation. Next, we proposed a new attack strategy called HBnnsAGP to evaluate the lower bound of the network RASR. In HBnnsAGP, we quantify the significance of a node using BCnns and enhance the destructiveness of the attack with the GCC-priority attack strategy. Experimental results on six representative real-world networks demonstrate the effectiveness of the proposed methods. Furthermore, our work shows that reinforcing the protection of a small subset of critical nodes significantly improves network robustness. These findings offer valuable insights for devising more robust networks, especially when resources for network protection are limited. The efficiency of the proposed methods can be further enhanced, particularly for analyzing ultra-large-scale networks. In future research, we aim to explore efficient algorithms for enhancing the network RASR and to devise promising methods for analyzing ultra-large-scale networks.

Figure 1 .
Figure 1. An example of network disintegration processes under different ASRs. Gray nodes indicate successful attacks, green nodes represent unsuccessful attacks, and blue nodes denote unattacked nodes. (a-f) represent scenarios where 5.9%, 20.6%, and 35.3% of network nodes are attacked, with ASRs of 100% and 60%, respectively.

Figure 3 .
Figure 3. A comparison of MC and QMC integration methods. (a,d) show the two-dimensional projections of a PRS and an LDS (a Sobol sequence), respectively. (b,c) depict the MC integration for approximating a definite integral over a one-dimensional unit interval, while (e,f) present the QMC integration for approximating a definite integral over a one-dimensional unit interval.

Figure 5 .
Figure 5. An illustrative example of non-central nodes and a comparison of BC and BCnns. In this figure, (a) highlights non-central nodes in red, (b) showcases node sizes based on BC, and (c) showcases node sizes based on BCnns.

Figure 6 .
Figure 6. The topologies of six real-world networks. The size of each node is proportional to its degree. (a) Karate, (b) Krebs, (c) Airport, (d) Crime, (e) Power, (f) Oregon1.

Figure 7 .
Figure 7. Comparison of the convergence and error of the PRQMC, QMC, and MC methods in assessing robustness for two smaller-scale networks. Convergence and error curves of Karate (a,b) and Krebs (c,d).

Figure 8 .
Figure 8. Comparison of the convergence and standard deviation of the PRQMC, QMC, and MC methods in assessing robustness for four larger-scale networks. Convergence and standard deviation curves of Airport (a,e), Crime (b,f), Power (c,g), and Oregon1 (d,h).
Consider a network G = (V, E) with N nodes. Suppose a sequence of nodes Seq = (v1, v2, ..., vN) is targeted for attack, and Pv = (pv1, pv2, ..., pvN) signifies the ASR of each node, where each region corresponds to a state of Seq, denoted by S1, S2, S3, S4. The RASR of the network G can be approximated by R, as defined in Definition 6.

Table 1 .
Basic information for six real-world networks.N and M represent the number of nodes and edges, respectively; <k> and MaxDeg denote the network's average degree and maximal degree, respectively; and C is the average clustering coefficient.

Table 2 .
Computational time comparison of the PRQMC, QMC, and MC methods (s). Smaller values are better (best in bold).

Table 3 .
The sample numbers (N1 and N2) for the different networks used in HBnnsAGP.

Table 4 .
The robustness of networks under different ASRs. All R values are multiplied by 100. Smaller values indicate more destructive attack strategies (best in bold).

Table 5 .
The computation time of different attack strategies (ms). Smaller values are better (best in bold).