Percolation Theories for Quantum Networks

Quantum networks have advanced rapidly in both theory and experiment over the last decade, making it increasingly important to understand their large-scale features from the viewpoint of statistical physics. This review discusses a fundamental question: how can entanglement be distributed effectively and indirectly (e.g., through intermediate nodes) between distant nodes in an imperfect quantum network, where the connections are only partially entangled and subject to quantum noise? We survey recent studies addressing this issue by drawing exact or approximate mappings to percolation theory, a branch of statistical physics centered on network connectivity. Notably, we show that the classical percolation frameworks do not uniquely define the network's indirect connectivity. This realization leads to the emergence of an alternative theory called "concurrence percolation," which uncovers a previously unrecognized quantum advantage that emerges at large scales. This advantage suggests that quantum networks are more resilient than initially assumed within classical percolation contexts, offering fresh insights into future quantum network design.


I. INTRODUCTION
Quantum information [1] is a fast-developing field that has transcended its roots in quantum mechanics and information theory, reaching into other areas like condensed matter physics [2], statistical physics [3][4][5], and network science [6,7]. At the core of quantum information lies the quantum bit, or qubit, the basic quantum information carrier. Two qubits can be brought into a relationship, called entanglement, which is an essential quantum resource [8] for quantum computing. Yet, entanglement is notoriously fragile, especially when qubits are spatially distant. Fortunately, by routing paths and adding intermediate sites for relaying, entanglement between remote qubits may eventually be established in an indirect way. Such an action, called entanglement distribution [9], is a fundamental benefit of quantum networks (QN).
In general, a QN is a network representation of different parties (nodes) that share entanglement (links) as connections. A significant part of our interest lies in distributing entanglement between two arbitrary nodes in the network, a process we refer to as "entanglement transmission." Entanglement across different parties is essentially transmitted through quantum communication protocols. Successful demonstrations of quantum communication protocols have already been made on small-scale QN using diamond nitrogen-vacancy centers [31][32][33] and ion traps [34,35]. However, the big question that looms is how to scale this to much larger networks. A large-scale, practical QN would offer significant advantages for many industrial and scientific applications. For example, financial institutions and governments would benefit from quantum cybersecurity, which provides an unprecedented level of secure communication. Researchers could also use networked quantum computers to dramatically increase the simulation speed of the physical and chemical processes of many interacting particles. Yet, if the individual channels (links) along the routed path are too noisy, the entanglement transmission may fail. Studying how this "indirect" transmission ability depends on the noise level of individual links requires tools from statistical physics and complex network theories.
One such theory that has proven useful is percolation theory [36][37][38][39]. Percolation theory offers a mathematical framework for understanding how networks behave when subjected to random processes (which can be treated as a form of noise), such as how water percolates through soil or how diseases spread through populations. In the context of QN, percolation could provide valuable insights into the robustness and efficiency of entanglement distribution. By applying percolation theory, we can model and analyze the network structure directly and identify the most effective ways to maintain and distribute quantum entanglement across it. This lays the groundwork for examining QNs through the lens of statistical physics and opens up new avenues for understanding the upper limits of entanglement distribution in these networks.

FIG. 1: A pure-state quantum network (QN) consists of nodes (purple) and links (gray lines). Each node comprises a collection of qubits (gray dots) that are entangled with qubits belonging to other nodes, and each link represents a bipartite entangled pure state |ϕ⟩ connecting the two qubits at its endpoints. This QN model can be extended to d-dimensional qudits (bottom left), which allow higher bandwidth for transmitting information, or to tensor networks (bottom middle) by employing linear transformations T_i at each node i. Moreover, the QN can be adapted to higher-order graphs (bottom right), where each link manifests as a hyperedge, denoting a multipartite entangled pure state.
In this work, we will explore and summarize the developments of the QN framework and how a mapping to percolation offers unique tools for dissecting the problem of entanglement transmission. Specifically, we will show that the mapping to percolation theories, together with a definition of how pairwise links combine into indirect connectivity, is, indeed, not unique. A new, alternative percolation-like theory termed concurrence percolation [40] emerges, and it underlies an unexpected "quantum advantage," revealing that QNs are more robust than initially thought within the classical percolation framework. Moreover, the finding is scalable with network size and adaptable to different network topologies, suggesting a macroscopic improvement over classical considerations from a statistical physics perspective.
This paper focuses on the comparison between classical percolation and concurrence percolation when mapped onto QN. It is structured as follows: In Section II, we define the QN theoretical framework as well as its possible generalizations to other QN-based structures (e.g., hypergraphs). In Section III, we briefly review the concept and definition of percolation theory and, in particular, how it relates to network connectivity at large scales. In Sections IV and V, we focus on the discovery that the new concurrence percolation theory surpasses the traditional percolation theory (which we refer to as "classical percolation" for comparison). In Section VI, we delve into the algorithms developed for calculating concurrence percolation. Finally, in Section VII, we discuss the open questions and practical implications of the findings, both theoretically and for real-world communications.
Furthermore, local operations (LO) also allow Alice or Bob to locally measure their qubits, resulting in the random collapse of |ϕ⟩ to one of its eigenstates. However, Alice and Bob cannot transform their state globally to obtain a singlet, |ϕ⟩ → |ϕ⊥⟩ = (1/√2)|00⟩ + (1/√2)|11⟩. This is not counted as LO.
On top of LO, Alice and Bob are also free to communicate classical information, i.e., classical communication (CC), sharing their results of quantum measurements. Together, this set of operations is called local operations and classical communication (LOCC). LOCC defines a set of strategies to share and manipulate quantum information under the locality constraint. One of the most powerful theorems in quantum information states that the average entanglement between two parties can never be increased if only LOCC is allowed for the two parties. This establishes the role of entanglement as a quantum resource, given that LOCC constitutes the "free operations" of the system [8].
Therefore, qubits belonging to the same node are not constrained by LOCC. They can be freely entangled or disentangled as needed, but the entanglement is not viewed as a resource for communication. Only the entanglement of qubits belonging to different nodes matters. In other words, quantum networks are an effective representation of the fundamental constraints of locality, manifested by assigning qubits to different local compartments and entanglement to inter-compartment connections.

A. Qudit-based quantum networks
A natural extension of QN is to use more general d-dimensional "qudits" (qutrits, ququarts, etc.) instead of qubits (Fig. 1). Each link, as a bipartite pure state of qudits, can be written as

|λ⟩ = Σ_{j=1}^d √(λ_j) |jj⟩.    (1)

Here, in this generalization, the weight of each link is no longer a single number but a list of non-negative numbers known as the Schmidt numbers, denoted as λ = (λ_1, λ_2, · · · , λ_d) and normalized such that Σ_j λ_j = 1. When d = 2, the bipartite pure state reduces to that of qubits, where λ_1 = cos²θ and λ_2 = sin²θ.
The consideration of qudit-based QN offers both theoretical and practical advantages. Theoretically, a d-dimensional qudit inherently carries log₂ d times more information than a qubit. Therefore, as the value of d increases, a single carrier can transmit more information, increasing the bandwidth. This enhanced capability is also evident in the robustness of entanglement for entangled states of qudits. Indeed, even when some coefficients are erased (λ_j → 0) from λ in the presence of noise, the pure state |λ⟩ can still remain entangled, as long as the two largest Schmidt numbers λ_1 and λ_2 remain positive.
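As a concrete illustration (the function name and example values below are ours, not from the text), the Schmidt numbers of a bipartite qudit state can be read off from a singular value decomposition of its reshaped amplitude vector:

```python
import numpy as np

def schmidt_numbers(psi, d):
    """Schmidt numbers of a bipartite pure state of two d-dimensional qudits.

    Reshaping the length-d*d amplitude vector into a d x d matrix and
    squaring its singular values sigma_j gives lambda_j = sigma_j**2.
    """
    sigma = np.linalg.svd(psi.reshape(d, d), compute_uv=False)
    return sigma**2

# Qubit case (d = 2): |phi> = cos(theta)|00> + sin(theta)|11>
theta = 0.3
phi = np.zeros(4)
phi[0] = np.cos(theta)   # amplitude of |00>
phi[3] = np.sin(theta)   # amplitude of |11>
lam = schmidt_numbers(phi, 2)
# lambda_1 = cos^2(theta), lambda_2 = sin^2(theta), as in the text
print(np.allclose(sorted(lam, reverse=True),
                  [np.cos(theta)**2, np.sin(theta)**2]))

# A qutrit link (d = 3) stays entangled after one Schmidt number is
# erased, as long as the two largest remain positive.
lam3 = np.array([0.6, 0.4, 0.0])          # lambda_3 erased by noise
entangled = np.count_nonzero(lam3 > 1e-12) >= 2
print(entangled)
```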
In experiments, qubit systems are commonly realized using two-level atoms or superconducting states. However, isolating these two levels from other nearby levels can be challenging. By including nearby levels and increasing the potential dimension d, the experimental design may become more feasible. In fact, several experiments have employed qudits to achieve better performance, including applications in quantum scrambling [43] and superdense coding [44].

B. Quantum networks are the basis of tensor networks
There is also an interesting and deep connection between QN and tensor networks (Fig. 1), the latter being a familiar and powerful tool in condensed matter physics, mostly used to facilitate computations and simulations in quantum physics and materials science. To be specific, tensor networks are designed to efficiently represent many-body quantum states [45]. These quantum states, which are essentially large, high-dimensional tensors in mathematical terms, can be factorized into smaller tensors using tensor networks. In particular, tensor networks are useful for representing the ground states of quantum systems, which typically exhibit strong ordering compared to excited states. This strong ordering often means that entanglement does not grow very fast with the length scale, which, in turn, allows for easier and more efficient factorization of the corresponding ground state.
To delve deeper into the concept, note that a general N-body quantum state reads

|Ψ⟩ = Σ_{x_1,···,x_N} T_{x_1 x_2 ··· x_N} |x_1⟩ ⊗ |x_2⟩ ⊗ ··· ⊗ |x_N⟩,    (2)

which lives in a D^N-dimensional Hilbert space, the tensor product of N "single-body" Hilbert spaces, each spanned by D basis states |x_i⟩. The complex tensor T that stores the coefficients is exponentially large (∼ D^N), effectively preventing direct computations of the quantum state |Ψ⟩'s characteristics. However, there may be a significant level of redundancy in the coefficients stored in the tensor. Consider an example where every entry in T can be fully factorized, such that T_{x_1 x_2 x_3 ···} = a_{x_1} b_{x_2} c_{x_3} ···. In this case, it becomes unnecessary to store the entire tensor or perform calculations on it. Rather, it suffices to simply store the vectors a, b, c, · · ·, of which the total size is DN. This, indeed, is how a tensor network works: by leveraging different ways of factorization that can be depicted through different graphical network structures [46]. Among various tensor networks, the matrix product state (MPS) is among the most researched [47]. In computer science, it is often called the tensor-train network [48]. The MPS is commonly utilized to represent one-dimensional many-body quantum states. When extended to higher dimensions, this becomes what is known as the projected entangled pair state (PEPS) [49]. More involved tensor network structures, such as the multiscale entanglement renormalization ansatz (MERA) [50], are also routinely used to study critical quantum systems.
For example, an MPS representation of Eq. (2) can be written as

T_{x_1 x_2 ··· x_N} = Tr(A^(1)_{x_1} A^(2)_{x_2} ··· A^(N)_{x_N}),    (3)

where for each single-body i there is a set of D different d × d matrices A^(i)_{x_i}, and d is called the bond dimension. Thus, the total number of parameters is N D d², which is linear in N. For a sufficiently large d, the MPS has enough degrees of freedom to exactly represent any tensor T. However, it is frequently observed that a small d can approximately, if not perfectly, reproduce T. This occurs when the information stored in T scales linearly with N, a condition often found in the ground states of one-dimensional noncritical quantum systems.
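The factorization idea can be sketched with a short tensor-train construction (a minimal numpy sketch; the function names are ours, and for simplicity it builds an open-boundary MPS by sequential SVDs rather than the periodic, trace form of Eq. (3)):

```python
import numpy as np

def tensor_to_mps(T):
    """Factorize an N-index tensor (each index of size D) into an
    open-boundary matrix product state via sequential SVDs (the
    tensor-train construction). Returns cores of shape (chi_l, D, chi_r)."""
    dims = T.shape
    cores, chi = [], 1
    M = np.asarray(T)
    for D in dims[:-1]:
        M = M.reshape(chi * D, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        cores.append(U.reshape(chi, D, -1))
        chi = U.shape[1]
        M = np.diag(s) @ Vt          # push the remainder to the right
    cores.append(M.reshape(chi, dims[-1], 1))
    return cores

def mps_to_tensor(cores):
    """Contract the cores back into the full tensor."""
    M = cores[0]
    for A in cores[1:]:
        M = np.tensordot(M, A, axes=([-1], [0]))
    return M.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 2, 2, 2))    # a small 4-body tensor, D = 2
mps = tensor_to_mps(T)
ok = np.allclose(mps_to_tensor(mps), T)   # exact when all SVs are kept
print(ok)
```

Truncating the singular values in each SVD is what yields the approximate, small-d representations discussed above.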
Intriguingly, the MPS offers a new physical perspective: the valence-bond picture [49]. To be specific, we map each single-body Hilbert space (spanned by |x_i⟩) to a physical site and assume that there are two d-dimensional qudits located at each site. For every two neighboring sites (1 ↔ 2, 2 ↔ 3, · · ·, N ↔ 1), two qudits from each site are fully entangled, forming a "valence bond" that can be written as an unnormalized maximally entangled state,

|ψ⟩ = Σ_{j=1}^d |jj⟩.

Here, the state is also represented (matricized) into the matrix form ψ. Combining this with Eq. (3), we obtain

|Ψ⟩ = T_1 ⊗ T_2 ⊗ ··· ⊗ T_N |ψ⟩^⊗N,    (4)

where

T_i = Σ_{x_i} Σ_{j,j′} (A^(i)_{x_i})_{jj′} |x_i⟩⟨jj′|

represents a linear transformation acting on the two qudits (labeled by j and j′) on-site i. The valence-bond picture is now evident: In this picture, the many-body state is not the primary entity. Instead, it is built upon something more fundamental, a network of qudits and "valence bonds." The linear transformations T_i are then employed on top of it to form the tensor network.
Note that this fundamental network |ψ⟩^⊗N is a one-dimensional (periodic) quantum network consisting of maximally entangled states, making it remarkably suitable to be generalized to partially entangled states [Eq. (1)]. This can be achieved by replacing ψ in Eq. (4) by the matricized form of the partially entangled state, ψ_λ = diag(√λ_1, · · ·, √λ_d). The physical meaning of inserting such a partially entangled state is that, since LO cannot increase entanglement, the entanglement between neighboring sites will be upper bounded by the amount of entanglement in ψ_λ.
The valence-bond picture is not limited to MPS but can be generalized to arbitrary tensor networks. Indeed, suppose that at site i we have not a set of matrices but a set of tensors (A^(i)_{x_i})_{j_1 j_2 ··· j_k}. Each subscript j denotes a qudit on-site i. The site has k qudits in total, indicating that the corresponding node i has degree k (i.e., k incident links) in the QN. The linear transformation then becomes

T_i = Σ_{x_i} Σ_{j_1,···,j_k} (A^(i)_{x_i})_{j_1 ··· j_k} |x_i⟩⟨j_1 j_2 ··· j_k|,

and the many-body state is expressed as

|Ψ⟩ = T_1 ⊗ T_2 ⊗ ··· ⊗ T_N |QN⟩,

where |QN⟩ represents the entire QN considered as a huge pure state (Fig. 1).

C. Multipartite quantum networks
Our attention has so far been restricted to bipartite entanglement. However, a complete QN framework should take multipartite entangled states into account. This is because multipartite entangled states have a specialized and unique role in certain quantum communication applications, such as secret sharing [51]. Although multipartite entanglement has been widely explored, we still lack a unified, clear method to precisely detect, measure, and define it. For example, even with an entangled state of just three qubits, there exist two non-equivalent forms of genuine tripartite entanglement. The first is known as the GHZ class, characterized by five real parameters, α, β, γ, δ, and θ, and can be expressed as [52]

|ϕ_GHZ⟩ ∝ cos δ |000⟩ + sin δ e^(iθ) (cos α|0⟩ + sin α|1⟩)(cos β|0⟩ + sin β|1⟩)(cos γ|0⟩ + sin γ|1⟩).    (9)

The second form, called the W class, has a general representation as [52]

|ϕ_W⟩ = √a |001⟩ + √b |010⟩ + √c |100⟩ + √d |000⟩,    (10)

with the real parameters a, b, c > 0 and d = 1 − a − b − c ≥ 0. Both the GHZ and W classes represent a level of correlation that goes beyond just pairwise interactions, meaning that a measurement on any single qubit among the three will instantaneously affect the outcomes of the other two. Despite this, states within one class cannot be converted to those in the other class using LOCC. As a result, we cannot directly compare the degree of entanglement of states belonging to different classes. This represents a fundamentally challenging quantum "three-body problem" that complicates the practical applications of multipartite entanglement. For example, a W state may outperform in certain applications, while in others, a GHZ state may be more effective. Note that states belonging to the W class are characterized by only three real degrees of freedom (a, b, and c), whereas the GHZ class requires five. Hence, a generic tripartite state typically belongs to the GHZ class.
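As a small numerical illustration (the helper function and the specific states below are ours), one can check that in both classes each single qubit is genuinely entangled with the other two: every single-qubit reduced density matrix is mixed, i.e., has purity Tr(ρ²) < 1:

```python
import numpy as np

def reduced_purity(psi, keep):
    """Purity Tr(rho^2) of one qubit's reduced state in a 3-qubit pure state."""
    psi = psi.reshape(2, 2, 2)
    axes = [a for a in range(3) if a != keep]
    # Partial trace over the other two qubits
    rho = np.tensordot(psi, psi.conj(), axes=(axes, axes))
    return np.trace(rho @ rho).real

# Standard GHZ state: (|000> + |111>)/sqrt(2)  (delta = pi/4)
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
# Standard W state: (|001> + |010> + |100>)/sqrt(3)  (a = b = c = 1/3)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)

# Every single qubit is mixed: each is entangled with the remaining two.
ghz_purities = [reduced_purity(ghz, q) for q in range(3)]
w_purities = [reduced_purity(w, q) for q in range(3)]
print(ghz_purities)  # each 0.5
print(w_purities)    # each 5/9
```

Purity alone does not distinguish the two classes, which is consistent with the text: the inequivalence of GHZ and W is a statement about LOCC convertibility, not about any single scalar measure.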
Traditionally, each link in a network is also "bipartite," connecting exactly two nodes. As a result, to study a QN consisting of multipartite entangled states, it is essential to go beyond "bipartite" network theory and consider multipartite entangled states as higher-order interactions [53,54]. These can be mathematically represented as "hyperedges" of hypergraphs [55]. Here, Fig. 1 shows an example of a hypergraph-based QN consisting of hyperedges in the form of

cos δ |00···0⟩ + sin δ |11···1⟩,

which represents a special case of the GHZ class [Eq. (9)]. Of course, this is only one specific form of multipartite entangled states, characterized by the sole parameter δ. Yet it illustrates the necessity of representing these states as hypergraphs and studying them through higher-order network theories, such as higher-order percolation theories (see Section III C for a brief review).

III. PERCOLATION OF COMPLEX NETWORK
Percolation theory, serving as a foundational model for investigating disordered systems [36,37,56,57], is mainly concerned with understanding the geometric connectivity of random media. Constructing a percolation model is straightforward: Take, for example, a square lattice (or a lattice of any shape) in which each link is randomly either present with a probability p or absent with a probability 1 − p. In a real-world application, one could consider the present links as electrical conductors and the absent ones as insulators [39]. The electrical current would then flow solely through the conductor links. When p is small, almost no paths exist that connect the lattice's two distant boundaries (e.g., the left and right boundaries in the square lattice). However, as p grows, various conduction pathways begin to emerge. A phase transition [58] is eventually triggered when p crosses a critical threshold, labeled p_th, effectively changing the composite material from an insulating to a conducting state. At this point, the probability of a path connecting the two distant boundaries becomes greater than zero. (This specific probability of connecting distant boundaries is termed the "sponge-crossing probability," which we will discuss in more detail in Section V C.)
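A minimal Monte Carlo sketch of this setup (the 30 × 30 lattice size, trial counts, and union-find helper are our own choices, not from the text) estimates the left-right crossing probability as p varies:

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def crosses(L, p, rng):
    """One bond-percolation sample on an L x L square lattice.
    Returns True if an open-bond path joins the left and right boundaries."""
    uf = UnionFind(L * L + 2)
    LEFT, RIGHT = L * L, L * L + 1          # two virtual boundary nodes
    for r in range(L):
        uf.union(LEFT, r * L)               # left column wired to LEFT
        uf.union(RIGHT, r * L + L - 1)      # right column wired to RIGHT
        for c in range(L):
            i = r * L + c
            if c + 1 < L and rng.random() < p:   # horizontal bond open
                uf.union(i, i + 1)
            if r + 1 < L and rng.random() < p:   # vertical bond open
                uf.union(i, i + L)
    return uf.find(LEFT) == uf.find(RIGHT)

rng = random.Random(42)
for p in (0.3, 0.5, 0.7):
    freq = sum(crosses(30, p, rng) for _ in range(200)) / 200
    print(p, freq)
```

The crossing frequency rises sharply near p = 1/2, the exact bond-percolation threshold of the square lattice.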
This phenomenon of a phase transition between two phases of different connectivity is prevalent in real-life scenarios. An illustrative example from biology is the spread of epidemics [59][60][61][62][63][64][65][66]. In its most basic manifestation, an epidemic commences with an infected individual who, with a probability denoted as p, can transmit the infection to its nearest neighbors over time until it propagates extensively. A comparable methodology can be applied to model forest fires, where the probability of a burning tree igniting its nearest neighbor tree in the subsequent time step replaces the infection probability [67,68]. Another notable application of this concept can be found in polymerization processes within chemistry, where the activation of bonds between small branched molecules leads to the formation of larger molecules [69,70]. This transformation is known as a gelation transition. An illustrative example of this gelation process can be observed when boiling eggs. Percolation theory has a wide range of other applications as well, spanning fields such as quantum systems [71][72][73][74][75][76][77], material science [78][79][80], geophysics [81][82][83][84][85], social dynamics [86][87][88][89], and infrastructures [90][91][92][93][94].

A. Percolation of single-layer networks
Percolation theory is closely associated with a wide range of concepts of critical phenomena, including scaling laws, fractals, self-organized criticality, and renormalization, holding significance across diverse statistical physics disciplines [37]. The traditional characterization of the phase transition in percolation hinges on the statistical properties of clusters near p_th. For p < p_th, only finite clusters exist. As p > p_th, a unique, infinite cluster emerges. A crucial parameter is P∞, signifying the relative size of the infinite cluster, which exhibits a power law near p_th [39]:

P∞ ∼ (p − p_th)^β.    (11)

The parameter P∞ serves as a measure of order within the percolation system and can be identified as the order parameter.
If we exclude the infinite cluster (if it exists), then the rest of the finite clusters follow a distribution

n_s ∼ s^(−τ) e^(−s/s*).    (12)

Here, s is the cluster size, and n_s is the number of clusters of size s. At criticality, the characteristic size s* diverges,

s* ∼ |p − p_th|^(−1/σ).    (13)

Consequently, the tail of the distribution n_s becomes a power law, n_s ∼ s^(−τ). The mean cluster size, i.e., how large a finite cluster is on average, also diverges,

⟨s⟩ ∼ |p − p_th|^(−γ),    (14)

with the same exponent γ above and below p_th. Finally, the correlation length ξ, defined as the average distance between two sites on the same finite cluster, also diverges,

ξ ∼ |p − p_th|^(−ν),    (15)

again with the same exponent ν above and below p_th. These exponents, namely β, τ, σ, γ, and ν, encapsulate the critical behavior of key quantities associated with the percolation transition and are collectively referred to as the critical exponents. Notably, they satisfy the scaling relations

β = (τ − 2)/σ,    γ = (3 − τ)/σ.    (16)

It is worth emphasizing that these exponents exhibit universality, meaning they remain invariant irrespective of the specific structural attributes of the lattice (e.g., square or triangular) or the type of percolation (site or bond). Instead, their values are solely determined by the dimensionality of the lattice. Besides, at the critical point, ξ and s* also follow a relation,

s* ∼ ξ^(d_f).    (17)

The exponent d_f is often called the fractal dimension [39], characterizing the structure of the infinite cluster at the critical point. Assuming the dimension of the system is d, there is another relation between critical exponents, called the hyperscaling relation,

d_f = d − β/ν.    (18)

TABLE I: The critical exponents for the classical percolation transition in scale-free networks [100].
Thus, the fractal dimension of the infinite cluster at p th is not a new independent exponent but depends on β, ν and d.
Finally, for complex network structures, similar critical exponents following Eqs. (11)–(15) can also be identified. For example, in scale-free networks [6,15,95–97], which are characterized by a power-law distribution P(k) ∼ k^(−λ) of the degree k, the values of the critical exponents depend on the power-law exponent λ, as outlined in Table I. As an essential process inherently associated with the notion of connectivity in networked systems, percolation has been generalized to models that go beyond undirected networks, with studies dedicated to directed networks [98], temporal networks [99], and, as we discuss in more detail in the next sections, networks of networks and hypergraphs.

B. Percolation of networks of networks
In many real-world systems, an individual network is one component within a much larger complex network of interdependent networks [101][102][103][104]. In interdependent networks, the failure of nodes in one network leads to the failure of dependent nodes in other networks, which may cause further damage to the first network, leading to cascading failures and possibly catastrophic consequences. In 2010, Buldyrev et al. studied the percolation of two fully interdependent networks subject to cascading failures based on a generating function formalism. They found a surprising first-order discontinuous phase transition, dramatically different from the second-order continuous phase transition in single-layer networks [105], as shown in Fig. 2. Later, Parshani et al. studied two partially interdependent networks and found that the percolation transition changes from first to second order as the coupling strength decreases [106]. Considering a malicious attack, Huang et al. developed a mathematical framework for understanding the percolation of two interdependent networks under targeted attack, later extended to targeted attacks on partially interdependent networks [107]. Each node in one network may depend on multiple nodes in another network. Therefore, Shao et al. proposed a theoretical framework for understanding the percolation of interdependent networks with various support and dependence relationships [108]. The study of interdependence between networks also led researchers to realize that other types of interactions are important. One example closely related to interdependence is antagonistic interactions [109]. Here, for a node to be active, the antagonistic node in another network has to be inactive, as can happen if each pair of nodes competes for some limited resource. Considering that more than two networks may depend on one another, Gao et al. developed the analytical framework to study the percolation of a network formed by n interdependent networks [110], which was later extended to the study of targeted attacks on high-degree nodes [111,112]. Baxter et al. studied the percolation of multiplex networks, which can be considered as the percolation of a tree-like network of networks, in Ref. [113]. Liu et al. developed a theoretical framework based on generating functions and percolation theory to understand the percolation of interdependent directed networks [114]. In the past decade, we have witnessed fruitful results and discoveries related to the percolation of networks of networks [115][116][117][118][119][120][121][122][123][124][125][126][127], as well as multilayer networks and interconnected networks.
In general, the percolation of networks of networks extends that of single-layer networks. For example, when the n interdependent Erdős–Rényi (ER) networks form a tree-like topology and have the same average degree k̄, i.e., k̄_i = k̄ (i = 1, 2, ..., n), the giant connected component in each layer, P∞, as a function of k̄, p, and n follows [101]:

P∞ = p [1 − exp(−k̄ P∞)]^n.    (19)

Note that for Eq. (19), the particular case n = 1 recovers the known second-order percolation law of a single-layer ER network [128][129][130]. When n ≥ 2, the system shows a first-order phase transition. Using the generating function formalism, we obtain the threshold

p_th = −w^n / [k̄ (w + 1/n)^(n−1)],    (20)

and the giant component at the threshold,

P∞(p_th) = −(w + 1/n) / k̄,    (21)

FIG. 2: Schematic demonstration of first- and second-order percolation transitions. In the second-order case, the giant component approaches zero continuously at the percolation threshold p = p_th. In the first-order case, the giant component approaches zero discontinuously.
where w is given by w = W_−(−(1/n) exp(−1/n)), and W_−(x) is the smaller of the two real roots of the Lambert equation W_− exp(W_−) = x. For n = 1 we recover the known ER results p_th = 1/k̄ and P∞(p_th) = 0; for n = 2, Eqs. (20) and (21) reproduce the exact results derived by Buldyrev et al. [105].
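As a quick numerical cross-check (the function names and tolerances are ours), one can iterate the fixed-point equation P∞ = p[1 − exp(−k̄P∞)]^n downward from P∞ = 1 and bisect for the point where the nonzero solution disappears:

```python
import math

def giant_component(p, kbar, n, iters=5000):
    """Iterate P = p * (1 - exp(-kbar * P))**n starting from P = 1.
    The iteration converges to the largest fixed point, i.e., the
    physical giant-component size P_inf for n interdependent ER
    networks with average degree kbar."""
    P = 1.0
    for _ in range(iters):
        P = p * (1.0 - math.exp(-kbar * P)) ** n
    return P

def threshold(kbar, n, tol=1e-7):
    """Bisect on p for the point where the nonzero solution disappears."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if giant_component(mid, kbar, n) > 1e-6:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

kbar = 4.0
pth2 = threshold(kbar, 2)   # two fully interdependent ER networks
pth1 = threshold(kbar, 1)   # a single ER network
print(pth2 * kbar)          # close to 2.4554
print(pth1 * kbar)          # close to 1
```

For n = 2 this reproduces p_th ≈ 2.4554/k̄, the exact value of Buldyrev et al. [105], and for n = 1 the single-network ER result p_th = 1/k̄.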

C. Percolation of hypergraphs
Hypergraphs generalize graphs by allowing interactions, the hyperedges, to connect an arbitrary number of vertices [131]. Hypergraphs and, to a certain extent, simplicial complexes offer more flexibility to model interacting systems, and they have become popular models of many real-world networks over recent years [132,133]. For example, more than two molecules can participate in some reactions [134,135], and group interactions also frequently occur in collaborations on scientific papers [136,137]. It has been shown that higher-order interactions may significantly change the physical properties of dynamical processes compared with those on ordinary networks with only pairwise connections [53,132,138,139]. However, there are only a few works exploring the robustness, or the percolation, of hypergraphs [140][141][142][143][144][145]. Specifically, Coutinho et al. introduced two generalizations of core percolation to hypergraphs and offered analytical solutions for certain types of random hypergraphs accordingly [140]. Sun and Bianconi later proposed a general framework for assessing hypergraph robustness and further characterized the critical properties of simple and higher-order percolation processes [141]. Sun et al. also considered a paradigmatic type of higher-order interaction, triadic interactions, where a node regulates the interaction between two other nodes, and provided a general theory that accurately predicts the full phase diagram on random graphs [142]. More recently, Bianconi and Dorogovtsev have further developed a theory for hyperedge and node percolation on hypergraphs and showed that, in contrast to ordinary networks, the node and hyperedge percolation problems for hypergraphs strongly differ from each other [146].

IV. CLASSICAL PERCOLATION IN QUANTUM NETWORKS
Why is percolation theory useful in the study of QN? The roots of this interest can be traced back to a 2007 paper [10]. In this seminal work, the authors first proposed a mapping between percolation theory and a particular entanglement transmission scheme, which they discovered and accordingly termed the classical entanglement percolation (CEP) scheme. Within this context, an entanglement transmission scheme refers to a (possibly infinite) series of quantum communication protocols that may be applied collectively to a QN for distributing entanglement between two nodes. This pioneering discovery has opened up a new approach to studying QN from a statistical physics perspective, with a focus on understanding the large-scale, collective characteristics of the entanglement transmission task and how they are influenced by the topology of the QN.

A. Classical entanglement percolation (CEP)
As previously noted, LOCC cannot increase the average entanglement. However, this does not mean that one cannot use LOCC as a form of "gambling": to enhance the entanglement with a certain probability p, even though it might reduce the entanglement with probability 1 − p. This principle forms the foundation of the CEP scheme. To be specific, the CEP scheme involves two steps [10]. First, we "gamble" to enhance the entanglement of each link, aiming to obtain a singlet (maximally entangled state) with a probability of p. The optimal probability for this is referred to as the singlet conversion probability, given by p = 2 sin²θ. Second, if a path of links connecting the source (s) and target (t) has all been converted to singlets, then a specific protocol known as entanglement swapping can be applied. This protocol converts every two singlet links sharing a common node (Relay, R) into a single singlet linking the two end nodes. For example, if there is a singlet between Alice and the Relay, and another between the Relay and Bob, the entanglement swapping protocol can merge the two into one singlet between Alice and Bob. By applying this protocol recursively along the singlet path connecting s and t, we arrive at a final singlet between s and t, fulfilling the transmission task.
Equipped with these concepts, the mapping between CEP and (classical) percolation theory is straightforward. The probability p = 2 sin²θ represents the probability for links to be present or absent. The CEP scheme succeeds if s and t are connected after the random percolation process is applied. Furthermore, this connection implies a nontrivial critical threshold for the CEP scheme on infinitely large QN. Specifically, when 2 sin²θ falls below the percolation threshold p_th, s and t are almost certainly disconnected if they are infinitely far apart. Hence, p_th, which solely depends on the network topology, serves as a metric of the overall capacity of the QN in the context of CEP.
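The mapping is easy to simulate on a toy topology (the 4 × 4 grid and the helper below are illustrative choices, not from the text): each link is independently converted to a singlet with probability p = 2 sin²θ, and success is a breadth-first-search check that s and t lie in the same cluster:

```python
import math
import random
from collections import deque

def cep_success_prob(edges, s, t, theta, trials, rng):
    """Estimate the CEP success probability: each link independently
    becomes a singlet with p = 2 sin^2(theta) (theta <= pi/4); success
    means s and t end up connected by a path of singlets."""
    p = 2 * math.sin(theta) ** 2
    hits = 0
    for _ in range(trials):
        kept = {}
        for u, v in edges:
            if rng.random() < p:          # link converted to a singlet
                kept.setdefault(u, []).append(v)
                kept.setdefault(v, []).append(u)
        seen, queue = {s}, deque([s])      # breadth-first search from s
        while queue:
            u = queue.popleft()
            for v in kept.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        hits += t in seen
    return hits / trials

# A toy 4 x 4 grid QN; s and t sit at opposite corners.
L = 4
edges = [((r, c), (r, c + 1)) for r in range(L) for c in range(L - 1)]
edges += [((r, c), (r + 1, c)) for r in range(L - 1) for c in range(L)]
rng = random.Random(7)
for theta in (math.pi / 16, math.pi / 4):   # weakly vs maximally entangled
    print(theta, cep_success_prob(edges, (0, 0), (L - 1, L - 1),
                                  theta, 2000, rng))
```

At θ = π/4 every link is already a singlet (p = 1) and transmission always succeeds; for small θ the success probability collapses toward zero, mirroring the percolation threshold picture.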
The CEP scheme represents a great simplification of the QN entanglement transmission task to a pure percolation problem. Nevertheless, CEP is not necessarily the optimal scheme. Indeed, even when 2 sin²θ ≤ p_th, there might still be other schemes that can fulfill the transmission task, as we will explore in the following sections.

B. Quantum entanglement percolation (QEP)
Obtaining a singlet from a partially entangled state is expensive, given its "gambling" nature. Even worse, the swapping protocol spends all the singlets along a path and converts them into just one singlet. This waste of singlets causes the inefficiency of the CEP scheme and naturally raises the question: is it necessary to convert every link into a singlet? As we will see, the answer is negative, paving the way for the QEP scheme [10].
The QEP scheme is based on the discovery that, given two partially entangled states among three parties, Alice-Relay-Bob, there exists a LOCC protocol that yields a higher probability of obtaining a singlet between Alice and Bob than first obtaining two singlets (Alice-Relay, Relay-Bob) individually and then applying the swapping protocol. Indeed, the optimal probability is found to be min{2 sin²θ_AR, 2 sin²θ_RB} [147], outperforming the probability 2 sin²θ_AR · 2 sin²θ_RB achieved by CEP.
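The two probabilities can be compared numerically; a small sketch (the helper names are ours):

```python
import math

def cep_two_link_prob(theta_ar, theta_rb):
    """CEP on an A-R-B chain: convert each link to a singlet independently
    (probability 2 sin^2(theta) each), then swap; both must succeed."""
    return (2 * math.sin(theta_ar) ** 2) * (2 * math.sin(theta_rb) ** 2)

def optimal_two_link_prob(theta_ar, theta_rb):
    """Optimal LOCC conversion probability for the same chain [147]:
    min{2 sin^2(theta_AR), 2 sin^2(theta_RB)}."""
    return min(2 * math.sin(theta_ar) ** 2, 2 * math.sin(theta_rb) ** 2)
```

For θ_AR = θ_RB = π/8, for instance, the optimal protocol succeeds with probability ≈ 0.293, versus ≈ 0.086 for CEP; since each conversion probability is at most 1, the minimum always dominates the product.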
What about three partially entangled states among four parties (Alice-Relay1-Relay2-Bob)? Unfortunately, the optimal conversion probability does not simplify to min{2 sin²θ_AR1, 2 sin²θ_R1R2, 2 sin²θ_R2B}, but takes a much more complicated form. This prevents us from generalizing the optimal result to larger scales. For readers who wish to delve deeper, further details can be found in Ref. [148].
Even though we cannot generalize the optimal result, we can still extend the improvement by bypassing one relay every other step. This gives rise to the QEP scheme, which avoids the need to create singlets for every link, bypassing half of the relays. But this approach is not without trade-offs, especially at large scale. Since the relays are bypassed, the QN misses out on the potential connectivity to other paths through those relays. Thus, it remains a question whether QEP can achieve a lower critical threshold than CEP and fulfill the entanglement transmission task at infinite scales. The authors of the 2007 paper [10] demonstrated that this is indeed achievable for specific topologies, such as a "double"-honeycomb topology, where there are two links between every two adjacent nodes of a hexagonal network. The QEP scheme is equivalent to adding a preprocessing step that modifies the network into a triangular structure, thereby reducing the percolation threshold.
Note that despite being referred to as QEP, the mapping of the scheme still aligns with classical percolation theory from a statistical physics point of view. The quantum aspect of the process is confined primarily to the preprocessing step, which is executed only at the local scale. Additionally, QEP does not yield the optimal result either [148], which leaves open the question of whether a more effective entanglement transmission scheme might exist. It would be intriguing if this new scheme were guided by a fundamentally different statistical physics theory, distinct from classical percolation. We will show that such a theory does exist.

FIG. 3: Entanglement transmission on quantum networks can be understood from two different mappings. In classical percolation (bottom left), each link is present or absent with probability p or 1 − p, respectively. Only the paths in which all links are fully present contribute to the connectivity. This forms the basis of the classical/quantum entanglement percolation (CEP/QEP) schemes [10], whose goal is to secure a singlet between source s and target t through a "gambling" approach. In concurrence percolation (bottom right), every path contributes to the connectivity. This mapping forms the basis of the deterministic entanglement transmission (DET) scheme [149], where the aim is not to obtain a singlet probabilistically but to establish a partially entangled state deterministically.

A. No need to establish singlets
The mapping of CEP/QEP to classical percolation essentially rests on the requirement that two nodes be connected by at least one path of singlets. However, we have seen that in the QEP scheme it is not mandatory for all links along the path to be converted to singlets. By bypassing some of the links, a more efficient scheme might be realized.
This observation leads to a natural question: why not stop establishing singlets altogether? In other words, we would bypass not just some but all links, resulting in a final state between the source s and target t that remains only partially, rather than maximally, entangled. This scenario is, of course, approachable, considering that one can always "downgrade" a singlet to a partially entangled state at no cost [150]. What we truly seek, however, is a trade-off: achieving a much higher probability of obtaining such a partially entangled state instead of a singlet. By carefully weighing the compromise (having only partial entanglement) against the benefit (a significantly higher conversion probability), we might discover a more advantageous scheme for entanglement transmission overall. This revised approach challenges the conventional thinking in terms of classical percolation and could open new opportunities for developing new schemes on QN.

B. Deterministic entanglement transmission (DET)
Based on the above ideas, a new scheme named the deterministic entanglement transmission (DET) scheme was introduced [149]. The DET approaches the entanglement transmission task from a completely distinct perspective: the scheme demands that the conversion probability throughout the process always equal one. In other words, rather than "gambling" to increase the links' entanglement, we operate directly on partially entangled states in a deterministic fashion. The aim of DET is to maximize the final (partial) entanglement under the constraint of determinacy, contrasting with CEP/QEP's objective of always acquiring a singlet with high (but not unit) probability.
The DET involves two quantum communication protocols. The first is a continuation of the swapping protocol [147,151], except that here the swapping protocol operates directly on partially entangled states. It can be shown that, given entanglements θ_AR and θ_RB in the A-R-B configuration, one can tune the swapping protocol such that it deterministically yields a final state between A and B with a new entanglement θ_AB satisfying sin 2θ_AB = sin 2θ_AR sin 2θ_RB [149]. The second protocol is the entanglement concentration protocol [152,153]. This protocol takes two links between A and B (with entanglements θ₁ and θ₂, respectively) as input. At the expense of the two links, a new link with higher entanglement θ is produced between A and B, where cos θ = max{√(1/2), cos θ₁ cos θ₂}.
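The two DET rules can be written down directly; a minimal sketch (assuming the rules exactly as stated above, with θ ∈ [0, π/4] and the function names ours):

```python
import math

def det_series(theta1, theta2):
    """Swapping on partially entangled links (series rule):
    sin(2 theta) = sin(2 theta1) * sin(2 theta2)."""
    return 0.5 * math.asin(math.sin(2 * theta1) * math.sin(2 * theta2))

def det_parallel(theta1, theta2):
    """Entanglement concentration (parallel rule):
    cos(theta) = max{sqrt(1/2), cos(theta1) * cos(theta2)}."""
    return math.acos(max(math.sqrt(0.5), math.cos(theta1) * math.cos(theta2)))
```

Swapping degrades entanglement (det_series(0.3, 0.3) < 0.3), concentration improves it (det_parallel(0.3, 0.3) > 0.3), and two singlets (θ = π/4) remain a singlet under either rule.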
The DET scheme is founded on the generalization of these two protocols to global scales. This is possible since the swapping and concentration protocols are fully equivalent to the series and parallel rules, respectively, as often employed in circuit network analysis [149]. Therefore, the DET scheme becomes applicable if the network topology between the source s and target t is series-parallel [154]; in other words, if the network can be fully reduced to a single link between s and t using only series and parallel reductions. Note that fast network algorithms already exist for detecting series-parallel networks and determining their feasible reductions, which will be explored further in the next section.
One of the most intriguing characteristics of the DET scheme is that, when applied to infinitely large series-parallel networks, a threshold similar to the CEP threshold is observed [40], below which DET can never produce nonzero entanglement between s and t. This threshold, however, is lower than the CEP threshold, demonstrating a "quantum advantage" over the classical scheme. The existence of such a threshold suggests that DET may be globally governed by a statistical physics theory. In subsequent sections, we explore this statistical theory in detail.

C. Concurrence percolation theory
To establish the statistical theory, recall that in classical percolation theory, given a regular lattice, one can define a "sponge-crossing" quantity P_SC as the probability that there is an open path connecting two far-apart boundaries [155]. When the lattice becomes infinitely large (the number of nodes N → ∞), it is known that a minimum value of p exists below which P_SC becomes zero in the thermodynamic limit:

p_th = min{p : lim_{N→∞} P_SC(p) > 0}. (22)

This minimum value coincides with the percolation threshold p_th traditionally defined in Section III from the size of the infinite cluster. As a result, Eq. (22) offers an alternative definition of p_th. In the special case of two-dimensional square lattices, Kesten proved that the sponge-crossing probability P_SC of connecting the left and right boundaries is strictly zero for p < 1/2; thus p_th = 1/2 [155]. Moreover, the existence of such a critical threshold is not limited to regular lattices but can also be observed for complex network topologies. All we need to do is generalize P_SC from being defined between two apparent boundaries to two arbitrary sets of nodes, denoted S and T. It is reasonable to believe that a nontrivial threshold p_th still exists for this generalized P_SC, as long as the minimum length of all paths connecting S and T increases with the network size N. We contract the two sets S and T into two "mega" nodes, which amounts to erasing the internal network topologies of S and T, and then calculate the sponge-crossing probability P_SC between them. This provides a definitive way of calculating P_SC for arbitrary network topology and inferring p_th from Eq. (22).
How do we derive p_th? First, we need to know how P_SC manifests as a function of p. The exact expression of P_SC can be calculated using the basic addition and multiplication rules of probability measures [37]. In general, P_SC takes the form of a ratio of two large polynomials in p (i.e., it is meromorphic in p), which quickly becomes complex as the number of links grows. Nevertheless, when the network topology between S and T is series-parallel, P_SC can be decomposed into the iteration of two connectivity rules, namely the series and parallel rules (Table II); these rules establish the probability that at least one path connects the two ends.
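As a sketch, the two classical rules and the resulting P_SC of a toy series-parallel network (two disjoint two-link paths from s to t; function names are ours):

```python
def p_series(p1, p2):
    """Classical series rule: the path is open only if both links are."""
    return p1 * p2

def p_parallel(p1, p2):
    """Classical parallel rule: open if at least one branch is."""
    return 1 - (1 - p1) * (1 - p2)

def p_sc_two_paths(p):
    """P_SC for a toy series-parallel network: two disjoint two-link paths
    from s to t, reduced by one series step and one parallel step."""
    return p_parallel(p_series(p, p), p_series(p, p))
```

For example, p_sc_two_paths(0.5) = 1 − (1 − 0.25)² = 0.4375, a polynomial in p as stated above.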
When the network topology between S and T is not series-parallel (such as a bridge circuit [154]), higher-order connectivity rules are needed. Unlike the series/parallel rules, these higher-order rules do not follow a general form. That said, there is a way to approximate these higher-order rules using only the series/parallel rules (see Section VI for a detailed discussion of the nature of this approximation). This technique is known as the star-mesh transform, originating from the study of circuit analysis. We include more details on this technique in the Algorithms section (Section VI).
It becomes clear that the series/parallel rules play a very special role in defining percolation connectivity. Given that the DET scheme is also founded on series/parallel rules, a natural and intriguing question arises: can we define a new statistical theory in reverse, starting directly from the exact forms of the series/parallel rules? Such a definition may not be complete, since we do not know the exact forms of the higher-order rules (which may not even have a closed mathematical form). Yet, using the star-mesh transform technique, it may be possible to approximate these higher-order rules using the series/parallel rules that we do know. This is the motivation of concurrence percolation theory. Recall that the DET series/parallel rules are given by sin 2θ = sin 2θ₁ sin 2θ₂ and cos θ = max{√(1/2), cos θ₁ cos θ₂}, respectively. Under the change of variable c ≡ sin 2θ, the rules can be rewritten in the form presented in Table II. Note that the new series rule in terms of c bears the same nominal form as the classical series rule in terms of p. The variable c, indeed, has a physical meaning: it is the concurrence, a specific measure of bipartite entanglement [156]. This inspires the new theory to be termed "concurrence percolation." In comparison to classical percolation, where the variable of interest is the probability p, in concurrence percolation the appropriate variable of interest is c.

[Table II, note on higher-order rules: these can be approximated by the star-mesh transform, which reduces an N-graph to an (N − 1)-graph and is solvable by applying the series and parallel rules recursively through a group of N(N − 1)/2 coupled equations (see Section VI for details).]
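Under the change of variable c = sin 2θ, the two rules can be sketched as follows (the parallel rule is evaluated through θ; function names are ours). Note how the parallel rule saturates at c = 1 once cos θ₁ cos θ₂ ≤ √(1/2):

```python
import math

def conc_series(c1, c2):
    """Concurrence series rule: c = c1 * c2 (same nominal form as the
    classical series rule in terms of p)."""
    return c1 * c2

def conc_parallel(c1, c2):
    """Concurrence parallel rule via theta_i = arcsin(c_i)/2 and
    cos(theta) = max{sqrt(1/2), cos(theta1) cos(theta2)}."""
    t1, t2 = 0.5 * math.asin(c1), 0.5 * math.asin(c2)
    x = max(math.sqrt(0.5), math.cos(t1) * math.cos(t2))
    return 2.0 * x * math.sqrt(1.0 - x * x)   # c = sin(2 theta) = sin(2 arccos x)
```

Two strong enough links in parallel yield c = 1 exactly (the saturation discussed below), whereas the series rule in c is identical in form to the classical one.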
After fixing all connectivity rules (series + parallel + star-mesh), an analogous quantity C_SC, referred to as the sponge-crossing concurrence, can be defined as the weighted sum of all paths in terms of this new weight c [40]. It is believed that a nontrivial threshold on c also exists: c_th is the minimum value of the concurrence c per link below which C_SC becomes zero when S and T become infinitely distant.

D. Results
DET outperforms CEP.-Utilizing the framework of concurrence percolation, we can derive an essential and powerful result: the DET scheme always outperforms the CEP scheme on any series-parallel QN. To rigorously demonstrate this comparative superiority, we rewrite both the classical and concurrence series/parallel rules in terms of the entanglement variable θ (Table III). These rules correspond to the entanglement transmission rules for CEP and DET, respectively (as illustrated in Fig. 4).
For the series rule, the comparison reduces to an inequality supported by the subadditivity of f(x) = ln(2 − e^{−x}) [i.e., f(x₁ + x₂) ≤ f(x₁) + f(x₂)] for x = −ln(2 sin²θ) ≥ 0. This shows that the θ obtained from the CEP series rule (under the change of variable p = 2 sin²θ) is never greater than the θ obtained from the DET series rule (under the change of variable c = sin 2θ). For the parallel rule, the inequality is similarly supported by the subadditivity of f(x) = −ln(1/2 + e^{−x}/2) for x = −ln(1 − 2 sin²θ) ≥ 0, which again shows that the θ obtained from the CEP parallel rule is never greater than the θ obtained from the DET parallel rule. Together, these results establish that the DET rules consistently yield superior outcomes to the CEP rules, in both series and parallel configurations. This underlines the potential of DET as a valuable tool in the ongoing development of large-scale QN and adds a new dimension to our understanding of quantum connectivity.

Concurrence percolation threshold.-In infinite-size QNs, both a classical percolation threshold p_th and a concurrence percolation threshold c_th exist. This leads to the second insightful finding within the realm of concurrence percolation: the prediction of a lower threshold than was known from earlier classical-percolation-theory-based schemes, including CEP and its variants (such as QEP). What makes this result particularly interesting is that the improvement persists across various network topologies. Table IV details these findings, with tests conducted on different topologies, including the Bethe lattice (Fig. 5) as well as other regular lattices such as the square, honeycomb, and triangular lattices (Fig. 6). This consistency across multiple configurations underscores the robustness of the concurrence percolation method, demonstrating its potential to redefine our understanding of entanglement transmission within large-scale QNs.
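The two orderings can be spot-checked numerically; a sketch that evaluates both sets of rules in the θ variable (helper names are ours):

```python
import math

def theta_cep_series(t1, t2):
    """CEP series rule in theta (p = 2 sin^2 theta, series p = p1 * p2)."""
    p = (2 * math.sin(t1) ** 2) * (2 * math.sin(t2) ** 2)
    return math.asin(math.sqrt(p / 2))

def theta_cep_parallel(t1, t2):
    """CEP parallel rule in theta (p = 1 - (1 - p1)(1 - p2))."""
    p = 1 - (1 - 2 * math.sin(t1) ** 2) * (1 - 2 * math.sin(t2) ** 2)
    return math.asin(math.sqrt(p / 2))

def theta_det_series(t1, t2):
    """DET series rule: sin(2 theta) = sin(2 t1) sin(2 t2)."""
    return 0.5 * math.asin(math.sin(2 * t1) * math.sin(2 * t2))

def theta_det_parallel(t1, t2):
    """DET parallel rule: cos(theta) = max{sqrt(1/2), cos(t1) cos(t2)}."""
    return math.acos(max(math.sqrt(0.5), math.cos(t1) * math.cos(t2)))
```

Sweeping θ₁, θ₂ over (0, π/4] confirms θ_DET ≥ θ_CEP for both rules, in line with the subadditivity argument above.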
On series-parallel QN, this predicted concurrence percolation threshold is readily achievable using the DET scheme. On general network topologies, however, it is unknown whether the higher-order connectivity rules produced by the star-mesh transform are realizable by LOCC; they are only approximations of the true LOCC-allowed rules. The study of the higher-order rules of concurrence percolation remains a difficult task that might be handled by multipartite strategies [157], QN routing [22,158], or QN coding [159].
Saturation.-Concurrence percolation also differs from classical percolation in the existence of a saturation point c_sat. Whenever c ≥ c_sat, the sponge-crossing concurrence C_SC consistently equals one [Fig. 5(b)]. For example, basic calculations give the exact value of the saturation point for a Bethe lattice of degree k [40]. In contrast, in classical percolation, P_SC equals one if and only if p = 1. This phenomenon originates from the anomaly that the parallel rule (Table II) is not a smooth function. The presence of this saturation point unveils a new "quantum advantage" originating only from concurrence percolation: one can deterministically establish a singlet as long as the entanglement in each link surpasses the saturation point. This advantage stands in contrast to schemes based on classical percolation, where a singlet can only be established with certainty if each link is perfectly entangled.

Critical exponents.-Lastly, similar to classical percolation, concurrence percolation also shows critical phenomena, marked by a set of dependent or independent critical exponents [3]. However, it is important to note that concurrence percolation is defined based on connectivity rules (Table II), not clusters. As a result, one cannot simply carry over cluster-based definitions.

TABLE IV: The concurrence percolation threshold is the lowest threshold compared to earlier known classical-percolation-theory-based schemes, including CEP and its variants. Particularly, for the Bethe lattice, one has P_swap(k) = 2x − x², where x(k) is the solution of 2x + x^k(kx − x − k − 1) − (1 − x)/(k − 1) = 0 by the q-swapping strategy [160], and P_GHZ(k) = y, where y(k) is the solution of an implicit equation involving the floor function ⌊·⌋ [157]. For the square and triangular lattices, QEP yields the same thresholds as CEP.
In the absence of a cluster definition, the sole critical exponent that can be determined is the dynamic thermal exponent, ν_dyn = zν. This exponent is tied solely to length dimensions, reflecting how the system's correlation length diverges as c approaches c_th. Note that length in this context refers to the chemical length, not the conventional Euclidean length; the two length scales are related by the dynamic critical exponent z [38]. This is why zν, rather than ν, is used here.
Importantly, the dynamic thermal exponent zν can be retrieved from finite-size analysis [3]. The idea is that the correlation length ξ can be replaced by the system's finite length scale l when |c − c_th| → 0. Therefore, near the critical threshold, all dependence on ξ can be deduced using l. The finite-size analysis results for both classical and concurrence percolation on the Bethe lattice are shown in Fig. 7. For concurrence percolation, it is found that C_SC follows a power law with an exponential cutoff as a function of the number of layers l, C_SC ∼ l^{-1/2} exp(−l/l*). Here, l* diverges as a power law as c approaches c_th [Fig. 7(f)]. The numerical value zν = 1.082(95) is obtained by fitting near |c − c_th| ∼ 10^{-5} [Fig. 7(g)].
Alternatively, zν can also be determined from the finite-size critical threshold c_th(l), defined as the turning point of C_SC, which deviates from c_th as a power law with respect to l [Fig. 7(h)]. Again, the numerical value 1/(zν) = 0.99(5) is obtained near l ∼ 10^4 [Fig. 7(i)].
For general k, different c_th and c_sat are also presented [Fig. 7(j)], revealing two universal (i.e., k-independent) power laws of C_SC near c → c_th and c → c_sat, respectively, supported by numerical results (dots) on a finite Bethe lattice with l = 500. In particular, the power-law relation C_SC ∼ |c − c_th|^{1/2} is reminiscent of the critical exponent β in classical percolation, which follows P_inf ∼ |p − p_th|^β [Eq. (11)]. Yet, as previously discussed, C_SC cannot be uniquely equated with a cluster-based order parameter. Thus, it would be premature to assert that β = 1/2 without accounting for certain nuances.
Note that the above results also have their counterparts in classical percolation [Figs. 7(a)-7(e)], except for the saturation point c_sat. It is found that the critical exponent zν is the same for both classical and concurrence percolation on the Bethe lattice. It thus remains unknown whether the classical and concurrence percolation theories belong to the same universality class.

In conclusion, the concept of concurrence percolation establishes a new theory that governs the behavior of entanglement transmission across large-scale QN. This novel theory brings several unique characteristics that distinguish it from classical percolation, thereby providing a refreshing and rich perspective on QN. We believe that the theoretical framework set by concurrence percolation may also open doors to practical applications, such as more efficient entanglement transmission schemes or novel protocols for quantum communication and computation. In essence, concurrence percolation not only enriches our comprehension of the inherent complexity of QN but also signifies a leap towards a more refined and versatile understanding of the statistical physics that governs QN.

VI. ALGORITHMS
This section is dedicated to a comprehensive exploration of the fundamental algorithms that have played a pivotal role in the investigation of concurrence percolation theory. Not only are these algorithms instrumental in the analysis and understanding of QN, but they also serve as essential tools for modeling and simulating the complex behaviors within the network.

A. Identification of series-parallel networks
Series-parallel networks were introduced by Duffin [154] as a mathematical model of electrical networks, and a general version was introduced later by Lawler [161] and by Monma and Sidney [162] as a model for scheduling problems. The classification of a network as series-parallel depends on the choice of two specific nodes of interest [154]. Given source and target nodes S and T, the network topology can be grouped into different categories (Fig. 8). All topologies between S and T given in Figs. 8(a)-8(e) are series-parallel; the exception is Fig. 8(f), owing to a "bridge" added on top of the topology of Fig. 8(e). Importantly, it is worth highlighting that many realistic complex networks can be approximated as series-parallel. This is because, in infinite-dimensional systems, cycles can typically be ignored through the Bethe approximation [163].

FIG. 9: Decomposition of a series-parallel network to the final base graph (from i to viii). At each step, the links to which the series rule and the parallel rule are applied are highlighted in orange and cyan, respectively.
It is known that when a "decomposition tree" (Fig. 9) for a series-parallel graph is given, many problems, including some that are NP-hard for arbitrary graphs [164][165][166][167], can be solved in linear time. While series-parallel networks continue to play an important role in various applications, they have been extensively studied both in their own right and in relation to other optimization problems (cf. [168][169][170][171]). We also refer to [172] for further results on series-parallel graphs.
Series-parallel networks enjoy nice algorithmic properties. There is a fast algorithm that determines whether any given network is series-parallel and, if it is, returns the decomposition tree suitable for the subsequent applications [173]. Following this work, researchers have further developed parallelized algorithms for recognizing the important class of series-parallel networks [174][175][176][177].
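The reduction view of this definition can be sketched directly: a two-terminal multigraph is series-parallel iff repeated parallel merges (duplicate edges) and series contractions (internal degree-2 nodes) reduce it to the single edge (s, t). Below is a minimal pure-Python check (not the linear-time algorithm of Ref. [173]; names are ours):

```python
from collections import Counter

def is_series_parallel(edges, s, t):
    """True iff the multigraph `edges` is two-terminal series-parallel
    between s and t, tested by exhaustive series/parallel reduction."""
    cnt = Counter(tuple(sorted(e)) for e in edges)   # edge multiset
    changed = True
    while changed:
        changed = False
        for e in list(cnt):                 # parallel merge: collapse duplicates
            if cnt[e] > 1:
                cnt[e] = 1
                changed = True
        deg = Counter()                     # recompute degrees
        for (u, v), m in cnt.items():
            deg[u] += m
            deg[v] += m
        for node in list(deg):              # series contraction
            if node in (s, t) or deg[node] != 2:
                continue
            nbrs = [e for e in cnt if node in e]
            if len(nbrs) != 2:
                continue                    # double edge here; merge first
            (a, b), (cc, d) = nbrs
            u = a if b == node else b
            w = cc if d == node else d
            if u == w:
                continue                    # would create a self-loop
            del cnt[nbrs[0]], cnt[nbrs[1]]
            cnt[tuple(sorted((u, w)))] += 1
            changed = True
            break                           # degrees changed; restart scan
    st = tuple(sorted((s, t)))
    return set(cnt) == {st} and cnt[st] == 1
```

A diamond (two disjoint two-link paths) reduces fully, whereas the Wheatstone-bridge topology of Fig. 8(f) does not, since its internal nodes have degree 3.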

B. Star-mesh transform
The star-mesh transform (also known as Kron reduction [178]) was originally developed as a circuit-analysis technique for calculating effective resistances in resistor networks. It replaces a local star topology by a mesh topology (a complete graph) such that the equivalent resistance between each pair of nodes remains the same before and after the transformation. Here, we generalize this idea to an approximation method for percolation on networks. The approach bears similarity to the real-space renormalization-group (RG) methods used in percolation theory; however, the star-mesh transform is more versatile, being applicable to various types of networks beyond regular lattices.
A star-mesh transform [179] can be built from only the series and parallel rules (no higher-order rules), mapping a star graph to a complete graph on its leaves and establishing a local equivalence (in terms of connectivity) between the two graphs. Mathematically, denote by G(N) a star graph with one root vertex and N leaf vertices, where the weights of the N edges are θ₁ to θ_N. The N-node complete graph transformed from G(N) is denoted G′(N), with the weights of its N(N − 1)/2 edges being (θ₁₂, θ₁₃, . . .). The equivalence conditions between G(N) and G′(N) form N(N − 1)/2 independent equations:

seri(θ_i, θ_j) = c(i, j; G′(N)), 1 ≤ i < j ≤ N, (31)

in which seri(θ_i, θ_j) is the series-sum of θ_i and θ_j based on the series rule, and c(i, j; G′(N)) is the net weight between vertices i and j of the complete graph G′(N).
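These coupled equations can be made concrete in the simplest nontrivial case: a 3-leaf star under classical percolation, where seri(p_i, p_j) = p_i p_j and the net weight between two triangle vertices combines the direct edge with the two-hop detour. The sketch below (our own illustrative code, using a naive fixed-point iteration in place of the Broyden solver used in the text) solves the three coupled equations:

```python
def star_mesh_3(p1, p2, p3, iters=200):
    """Classical-percolation star-mesh transform for a 3-leaf star.

    The star joins a root to leaves 1, 2, 3 by links of probability
    p1, p2, p3 (each < 1).  We seek triangle edge probabilities
    (q12, q13, q23) preserving the net connectivity of every leaf pair:
        q_ij (+) (q_ik * q_jk) = p_i * p_j,  with x (+) y = 1-(1-x)(1-y),
    solved here by naive Gauss-Seidel fixed-point iteration."""
    a, b, c = p1 * p2, p1 * p3, p2 * p3      # target pairwise connectivities
    q12, q13, q23 = a, b, c                  # initial guess: ignore detours
    for _ in range(iters):
        q12 = 1 - (1 - a) / (1 - q13 * q23)  # invert q12 (+) q13*q23 = a
        q13 = 1 - (1 - b) / (1 - q12 * q23)
        q23 = 1 - (1 - c) / (1 - q12 * q13)
    return q12, q13, q23
```

After convergence, the triangle reproduces each pairwise connectivity exactly; note how q12 comes out smaller than p1·p2, since part of the connectivity is carried by the two-hop detour.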
To calculate the sponge-crossing percolation between the source S and target T in a given network, we approximately obtain an equivalent simplified network by consecutively applying the star-mesh transform, removing one node per application. Specifically, we arbitrarily choose a vertex of G′(N) (w.l.o.g. the last one, N) to be the new root of a sub-star-graph of G′(N), constructed from the N − 1 edges that connect this root to the other N − 1 vertices. Next, we transform this sub-star-graph (subG′)(N − 1) into an (N − 1)-complete graph, denoted (subG′)′(N − 1), and combine it with what is left untransformed, G′(N) \ (subG′)(N − 1). The combined graph, denoted Comb(G_α, G_β), is derived by setting each edge weight to θ_ij = para(α_ij, β_ij), the parallel-sum of α_ij ∈ G_α and β_ij ∈ G_β based on the parallel rule. Note that, because of the lack of a closed-form solution for concurrence percolation, we use Broyden's root-finding algorithm to numerically solve for the N(N − 1)/2 weights θ_ij that satisfy Eq. (31). In a word, we can calculate c(i, j; G′(N)) by first solving an (N − 1)-complete graph [Eq. (32)]. By applying Eq. (32) one after another on all but the boundary nodes, the network is eventually reduced to two nodes and one link between them (Table II), whose final weight θ is approximately equivalent to the percolation of the initial network. For demonstration, we show how the star-mesh transform works for both classical percolation (Fig. 10) and concurrence percolation (Fig. 11) on a small square lattice. Since c(i, j; G′(N)) is calculable through recursion and Eq. (32) involves an (N − 1)-level star-mesh transform, the entire procedure is a double recursion, with a cost growing faster than exponentially. But by practically carrying out the recursive computation using symbolic expressions in Mathematica and other numerical techniques, the solutions can be found within a sufficiently small error range [40].

FIG. 12: Approximations for calculating the sponge-crossing concurrence C_SC. In the S_m approximation, one defines S_m as the set containing up to the m-th shortest paths (i.e., the shortest paths, the 2nd shortest paths, and so on up to the m-th shortest paths) between s and t for all s ∈ S and t ∈ T. In the parallel approximation, one treats all paths in S_m as parallel and non-overlapping. Thus, the network topology reduces to series-then-parallel, and the sponge-crossing concurrence can be calculated using the series/parallel rules (Table II).
Note that the star-mesh transform functions as an approximation rather than an exact representation of higher-order connectivity rules. To see this, consider the example of classical percolation in Fig. 10. Under the change of variable p ≡ 2 sin²θ, the actual higher-order connectivity, i.e., the probability that at least one path connects nodes 1 and 6, can be computed directly, where p_ij ≡ 2 sin²θ_ij ≈ 0.304 represents the singlet conversion probability for each link i ↔ j. The final probability (≈ 0.0799) translates to a final entanglement θ ≈ 0.256 π/4, which is very close to the star-mesh approximation result of θ ≈ 0.25 π/4 (Fig. 10). The closeness of these values supports our confidence that the star-mesh transform offers a reasonably accurate approximation. Also, note that the sequence of star-mesh reductions of a network is not unique. For example, in the fourth step of Fig. 10, one could instead transform the star graph (3 ↔ 1, 3 ↔ 4, 3 ↔ 6). Different reduction sequences may lead to slightly different approximate results, but they tend to stay close to each other and to the exact value [40].

C. Fast numerical approximation for concurrence percolation
The heuristic approximation (star-mesh transform) used for higher-order connectivity rules can be quite demanding in terms of computational resources. To address this challenge, in this section we discuss a more efficient approach to calculating the sponge-crossing concurrence C_SC [180]. The acceleration is achieved through two key simplifying approximations: the parallel approximation and the S_m approximation (Fig. 12).

Parallel approximation
First, we introduce the parallel approximation: we treat all paths connecting the nodes of interest as parallel, i.e., as if they have no overlap. For an arbitrary network with N nodes and uniform concurrence c per link, the parallel approximation C′_SC of the true sponge-crossing concurrence C_SC between two sets of nodes, S and T, is given by the parallel combination of all self-avoiding paths, each path of length l contributing a series concurrence c^l:

C′_SC = sin 2θ′, where cos θ′ = max{√(1/2), Π_l (cos θ_l)^{N_l}} and sin 2θ_l ≡ c^l, (34)

where N_l is the total number of self-avoiding paths of length l that connect the source/target nodes s and t for all s ∈ S and t ∈ T. Equation (34) is the mathematical statement of the parallel approximation: each of the N_l paths is taken to be parallel and non-overlapping (Fig. 12). We now show that on series-parallel networks [154] the concurrence calculated under the parallel approximation forms an upper bound on the true concurrence. First, consider the case where the network is essentially parallel, i.e., it can be expressed as the parallel combination of k subnetworks, each with concurrence c_i. In this case, the parallel approximation is exact. The more interesting case is that of an essentially series network, i.e., a network that can be decomposed as a combination of subnetworks in series. Consider an exemplary network that splits into k branches, each with concurrence c_pi, where the segment before branching has concurrence c_s. Following the series and parallel rules (Table II), the sponge-crossing concurrence from the left of this network segment to the right can be written in terms of a function f(c_p0, . . . , c_pk) of the branch concurrences, built from the parallel rule applied to per-branch factors g (see Ref. [180] for the explicit forms).
Under the parallel approximation, the network is transformed so that the concurrence of the same segment is instead f evaluated at the path concurrences c_s c_pi. Since c_s c_pi ≤ c_pi, it follows that g(c_s c_pi) ≥ g(c_pi) and thus f(c_s c_p0, . . . , c_s c_pk) ≥ f(c_p0, . . . , c_pk). After some calculation [180] one can show that C′_SC ≥ C_SC. Taken together, since every series-parallel network can be decomposed into essentially series or parallel configurations, we have shown that C′_SC is an upper bound for C_SC on series-parallel networks. Interestingly, as we will see, this upper bound seemingly becomes tighter as the network grows larger. We hence expect that a new concurrence threshold on C′_SC emerges, which should numerically approach the true c_th from below and match c_th in the thermodynamic limit N → ∞.
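The parallel-approximation prescription can be sketched as follows: each length-l path carries series concurrence c^l, and all paths are combined with the parallel rule, working with log Π cos θ to handle large path counts (the function name and dictionary interface are ours):

```python
import math

def parallel_approx(path_counts, c):
    """Parallel approximation C'_SC: every self-avoiding path of length l
    (N_l of them, from path_counts[l]) carries series concurrence c**l;
    all paths are then combined with the parallel rule
    cos(theta) = max{sqrt(1/2), product of cos(theta_path)}."""
    log_x = 0.0                                    # log of running cos product
    for l, n_l in path_counts.items():
        theta_path = 0.5 * math.asin(c ** l)       # sin(2 theta_path) = c**l
        log_x += n_l * math.log(math.cos(theta_path))
        if log_x <= 0.5 * math.log(0.5):           # saturation: C'_SC = 1
            return 1.0
    x = math.exp(log_x)
    return 2.0 * x * math.sqrt(1.0 - x * x)        # C'_SC = sin(2 theta')
```

For example, a depth-10 Bethe lattice with k = 3 has N_10 = 3·2⁹ = 1536 equal-length root-to-boundary paths: parallel_approx({10: 1536}, 0.9) saturates at 1, while c = 0.3 (well below c_th = 1/√2) gives a tiny value. For very deep networks, the plain math.log(math.cos(...)) loses precision and a log1p-based form should be used instead.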

S_m approximation
For most regular lattices and complex networks, however, determining the number N_l of paths of length l is a nontrivial task. For arbitrary networks, the calculation of the sponge-crossing concurrence is essentially a path-counting problem, which may itself require approximation.
Although the literature on path counting on graphs is rich and well studied, we are not aware of any closed-form solutions for the enumeration of self-avoiding walks of arbitrary length, even for the simplest networks (such as 2D lattices) [181]. While approximate path enumerations exist for both 2D lattices [182] and random networks [183], we find them impractical here, since the concurrence calculation is very sensitive to N_l for small l. We therefore define S_m as the set containing up to the m-th shortest paths (i.e., the shortest paths, the 2nd shortest paths, and so on, up to the m-th shortest paths) between s and t for all s ∈ S and t ∈ T; the sponge-crossing concurrence between S and T can then be approximated using only these paths.
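A brute-force sketch of the S_m construction, feasible only for small networks: enumerate self-avoiding paths by depth-first search and keep those whose lengths fall among the m shortest length classes. The adjacency-list format and function names are illustrative assumptions, not the algorithm of [180].

```python
def self_avoiding_paths(adj, s, t, max_links):
    # Depth-first enumeration of self-avoiding paths (no repeated nodes)
    # from s to t with at most max_links links.
    out = []
    def dfs(node, path):
        if node == t:
            out.append(tuple(path))
            return
        if len(path) - 1 == max_links:
            return
        for nxt in adj[node]:
            if nxt not in path:
                path.append(nxt)
                dfs(nxt, path)
                path.pop()
    dfs(s, [s])
    return out

def keep_S_m(paths, m):
    # S_m approximation: retain only paths whose length is among the
    # m shortest path-length classes.
    lengths = sorted({len(p) - 1 for p in paths})[:m]
    return [p for p in paths if len(p) - 1 in lengths]

# A 4-cycle: nodes 0-3, with exactly two shortest paths (length 2) from 0 to 3.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
paths = self_avoiding_paths(adj, 0, 3, max_links=3)
print(len(paths), keep_S_m(paths, 1))
```

The exponential cost of the depth-first search is exactly why the main text resorts to combinatorial counting of deviations on larger lattices.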

Results
In this section, using the S_m and parallel approximations, we present numerical results for calculating C_SC in networks of large size N. From these, we can numerically estimate the finite-size concurrence percolation threshold c_th ≡ sin 2θ_th, determining its position on the critical curve by matching the corresponding sponge-crossing concurrence at the half point, namely, C_SC(c_th) = 1/2. Next, we show how to apply our approach to the Bethe lattice and 2D square lattices.
Bethe lattice.-Given a finite Bethe lattice (i.e., a Cayley tree) [Fig. 5(a)] with L layers and coordination number k [38,39], all paths from the root node to any one of the boundary nodes have the same length, L. Since only one path exists from the root node to any node on the boundary, the number of paths of length L equals the number of boundary nodes, N_L = k(k-1)^(L-1). There is no need to employ the S_m approximation, since all paths are exactly known. Only the parallel approximation C'_SC of the sponge-crossing concurrence C_SC is to be calculated, following Eq. (34). To solve for c_th, near C'_SC = 0 we let C'_SC = ϵ for an arbitrarily small positive ϵ; solving for c then yields c_th = 1/√(k-1) in the limit of large L. The result is identical to the exact concurrence percolation threshold for Bethe lattices (Table IV). The numerical results are shown in Fig. 13. We see that as L increases, the threshold c_th approaches 1/√(k-1) from below, consistent with our theoretical result. Hence, it is strongly suggested that the parallel approximation correctly estimates the true concurrence percolation threshold c_th in the limit N → ∞.
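This Bethe-lattice calculation can be sketched numerically: combine the N_L = k(k-1)^(L-1) parallel root-to-boundary paths, each carrying concurrence c^L by the series rule, and locate the half point C'_SC = 1/2 by bisection. As above, the concrete parallel-rule form with g(c) = (1 + sqrt(1 - c^2))/2 is an assumption taken from Meng et al. [40].

```python
import math

def g(x):
    # Auxiliary function of the parallel rule; form assumed from [40].
    return (1.0 + math.sqrt(1.0 - x * x)) / 2.0

def C_sc_bethe(c, k, L):
    # Parallel approximation on a Bethe lattice with L layers:
    # N_L = k (k-1)^(L-1) non-overlapping paths of concurrence c^L each.
    N_L = k * (k - 1) ** (L - 1)
    prod = max(math.exp(N_L * math.log(g(c ** L))), 0.5)  # capped at g(1)
    return math.sqrt(1.0 - (2.0 * prod - 1.0) ** 2)

def c_th(k, L, tol=1e-9):
    # Bisection for the half point C'_SC(c_th) = 1/2 (C'_SC increases with c).
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if C_sc_bethe(mid, k, L) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 3
exact = 1.0 / math.sqrt(k - 1)        # analytical threshold 1/sqrt(k-1)
ths = [c_th(k, L) for L in (5, 10, 20)]
print(ths, exact)                     # thresholds approach exact from below
```

Under these assumptions the finite-L thresholds increase monotonically toward 1/√(k-1) without crossing it, matching the approach-from-below behavior described in the text.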
It is known that a saturation point c_sat < 1 also exists in the Bethe lattice [40]: before c reaches unity, C_SC already reaches unity at c = c_sat. Clearly, c_sat ≥ c_th, given the monotonicity of the series and parallel rules (Table II). To see whether we can recover c_sat using the parallel approximation too, we set C'_SC = 1 and solve for c. We find that the saturation point calculated under the parallel approximation is equal to c_th but different from the exact value (cf. Fig. 5).
FIG. 15: The speed-up obtained by the approximations over the star-mesh transform. The figure shows the computing time (in seconds) needed to calculate the sponge-crossing concurrence between two nodes s and t on 2D square lattices with N nodes, using the S_1 and S_2 approximations. We can see that the approximations speed up the calculation by two orders of magnitude compared with the star-mesh transform approach.
2D square lattice.-Now, let S and T denote the left (x_s = 1) and right (x_t = √N) boundaries, and let s = (1, y_s) ∈ S and t = (√N, y_t) ∈ T. Under the S_m approximation, the total number of self-avoiding paths of length l between S and T is obtained by summing the pairwise counts N_lm(s → t) over all pairs (s, t), with a Kronecker delta δ_ij selecting the paths of length l. This approximation of N_l is then substituted into the parallel approximation [Eq. (34)] to calculate C_SC between S and T. For m ≤ 2, it is possible to directly enumerate the 1st and 2nd shortest self-avoiding paths between every pair of s and t. For m > 2, however, it becomes difficult to write down a closed-form combinatorial expression for N_lm(s → t), and a path-enumeration algorithm is needed. We treat paths of length l_m with m > 2 as deviations from the 1st and 2nd shortest paths. For a given m, these deviations can only take a finite number of shapes. Once we have identified these primitive deviations, we next identify the positions in the lattice where the deviations can be placed. Finally, we count the total number of paths by counting the number of shortest paths between deviations [180].
For example, for every pair of source and target nodes s and t, all 3rd-shortest paths (m = 3) have either two single-step deviations or one double-step deviation from the shortest path (m = 1). For the case of two single-step deviations, we first identify two sets of points, D_1 and D_2, where the first and second deviations can occur, respectively. Then we calculate N_{s,D1} (the number of shortest paths from s to every point in D_1), N_{D1,D2} (the number of shortest paths from every point in D_1 to every point in D_2), and N_{D2,t} (the number of shortest paths from every point in D_2 to t). The total number of 3rd-shortest paths of this type is then given by N_l3(s → t) = N_{s,D1} N_{D1,D2} N_{D2,t}.
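The shortest-path counts entering these products are simple binomial coefficients on the square lattice. A toy sketch, with a single hypothetical intermediate site d standing in for the full deviation sets D_1 and D_2:

```python
from math import comb

def n_shortest(a, b):
    # Number of shortest (monotone) paths between two points a = (x1, y1)
    # and b = (x2, y2) on the 2D square lattice: choose when to step in x.
    dx, dy = abs(b[0] - a[0]), abs(b[1] - a[1])
    return comb(dx + dy, dx)

# Counts compose multiplicatively through an intermediate site, in the same
# spirit as N_l3(s -> t) = N_{s,D1} N_{D1,D2} N_{D2,t} in the text
# (the real algorithm additionally sums over all deviation placements [180]).
s, d, t = (0, 0), (2, 1), (4, 2)
total_through_d = n_shortest(s, d) * n_shortest(d, t)
print(n_shortest(s, t), total_through_d)
```

Here 9 of the 15 shortest s-to-t paths pass through d, illustrating how restricting paths to pass through prescribed intermediate sites partitions the counting problem into products of binomial factors.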
The final numerical results for the sponge-crossing concurrence C_SC are shown in Fig. 14. We see that the transition in the value of C_SC becomes sharper as the network size N increases. Moreover, for higher-order approximations S_m and/or larger N, the numerical threshold θ_th levels out at constant values that are very close to those calculated using the star-mesh transform. For example, for N = 8^2, the numerical approximation yields θ_th ≈ 0.4, closely mirroring the θ_th ≈ 0.416 obtained from the star-mesh transform technique. This evidence underscores the viability of the new approach for accurately approximating the concurrence percolation threshold.
One major benefit of this path-counting approach is its speed. As shown in Fig. 15, the algorithm is over a hundred times faster than the heuristic star-mesh transform. This substantial increase in computational speed facilitates the extension of concurrence percolation threshold calculations to more complex network topologies, such as random networks.
Random networks.-For random networks, the sponge-crossing concurrence is simply defined as the concurrence between S = {s} and T = {t}, each containing only one source (target) node s (t), picked such that the shortest path between s and t is equal to the diameter of the network. By randomly generating and averaging 10^2 network realizations of given sizes and degree distributions, the concurrence percolation threshold θ_th is obtained. The outcomes of this numerical approach, applied across different topologies including the Erdős-Rényi (ER) [129] and the Barabási-Albert (BA) [95] random networks at large scales (N ∼ 10^4), are summarized in Table V [180]. These findings shed light on the inherent capacities of large-scale complex QNs, opening new avenues for exploration.
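The source/target selection described above can be sketched in a self-contained way: generate one ER realization and pick a pair (s, t) whose separation equals the diameter of the largest connected component. The parameters (N = 60, p = 0.08) and function names are illustrative only; the actual study uses N ∼ 10^4 and averages over realizations.

```python
import random
from collections import deque

def erdos_renyi(n, p, seed=1):
    # One Erdos-Renyi G(n, p) realization as an adjacency list.
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def bfs_dist(adj, s):
    # Hop distances from s within its connected component.
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter_pair(adj):
    # Return (diameter, s, t) with dist(s, t) equal to the diameter of the
    # largest connected component (brute force over all source nodes).
    comp = max((list(bfs_dist(adj, v)) for v in adj), key=len)
    best = (0, None, None)
    for s in comp:
        d = bfs_dist(adj, s)
        t = max(d, key=d.get)
        if d[t] > best[0]:
            best = (d[t], s, t)
    return best

adj = erdos_renyi(60, 0.08)
diam, s, t = diameter_pair(adj)
print(diam, s, t)
```

The resulting (s, t) pair is then fed to the S_m and parallel approximations exactly as in the lattice case; only the path enumeration between s and t changes.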

VII. DISCUSSION AND CONCLUSIONS
Distributing entanglement throughout a quantum network (QN) is a critical and complex problem at the heart of quantum communications that has attracted significant attention. This field has been further enriched from a statistical physics point of view by the discovery of two percolation theories (classical versus concurrence) that, at first glance, appear to be closely related but are, in fact, fundamentally distinct (Fig. 3). These theories have not only deepened our understanding but also raised many new questions for further exploration and potentially groundbreaking research. In the following, we outline and discuss some of the open questions that have been brought to light by these developments:
• Optimality. Does there exist an optimal scheme for entanglement transmission? In the context of classical percolation, both classical entanglement percolation (CEP) and quantum entanglement percolation (QEP) fall short of yielding the optimal singlet conversion probability, especially as the network size scales up [10]. Deterministic entanglement transmission (DET) [149], on the other hand, focuses on improving not the singlet conversion probability but the entanglement that can be deterministically obtained. It has been found that DET optimizes the average concurrence on either series [Fig. 8(a)] or parallel topologies [Fig. 8(b)]: even when we relax the requirement of determinacy and consider the average entanglement of general probabilistic outcomes, the DET results remain optimal on series or parallel topologies [149]. However, this result does not generalize to arbitrary series-parallel topologies [Fig. 8(e)], where DET may not always offer the optimal average concurrence. This prompts us to ask how effective DET actually is across various QN topologies. Answering this question could substantially deepen our comprehension of the maximum entanglement capacity of QNs.
• Universality. As a statistical physics theory, concurrence percolation also exhibits a second-order phase transition near the threshold c_th, similar to classical percolation near p_th. So, what can be said about the universality of this phase transition? It has been found that the thermal critical exponent ν remains the same on different 2D lattices (square versus honeycomb versus triangular), suggesting that universality is likely at play near the critical threshold. Yet, the current definition of percolation based on connectivity rules (Table II) does not clearly define an order parameter [40]. This lack of an order parameter hints at a missing degree of freedom in the connectivity-based model and makes it challenging to determine critical exponents other than ν (or its dynamic counterpart, zν). Additionally, the existing data on ν (or zν) do not allow us to distinguish between the universality classes of concurrence percolation and classical percolation [40].
Thus, an open question remains: are concurrence percolation and classical percolation simply two facets of a single underlying statistical theory if we overlook short-range details, or are they genuinely distinct theories at a macroscopic level?
• Experimental implementation. One of the greatest challenges of quantum information experiments is to achieve high-efficiency multi-body operations. For instance, two-body quantum gates like CNOT are considerably more challenging and less efficient to implement than single-body gates such as the rotation gates RX, RY, and RZ. In fact, the number of two-body gates often serves as a benchmark for gauging the computational complexity of quantum algorithms. Interestingly, a practical QN offers an easier path to scalability than universal quantum computers, because in a QN only local operations are applied to the qubits within each node, eliminating the need for complex gates like CNOT between qubits from different nodes. This design constraint substantially simplifies QN implementation and boosts its scalability. Recently, the experimental feasibility of the series/parallel rules of the DET scheme (Table III) has also been demonstrated on IBM's quantum computation platform, Qiskit [184]. The series and parallel rules typically perform with fidelities of 92.4% and 78.2%, respectively [149]. These rates are expected to improve, given ongoing advancements in two-qubit gate fidelity [185,186]. Compared to the CEP/QEP schemes, the DET scheme has its advantages and drawbacks. On the upside, the DET inputs/outputs are only partially entangled, which generally makes them easier to produce and results in higher fidelity. On the flip side, the circuit parameters are input-dependent, requiring precise initial-state estimation through techniques like heralding [32] or tomography [187] before deployment. This brings us to a crucial question: to what extent can we experimentally scale the DET scheme to larger QNs? More importantly, given that the current CEP/QEP/DET schemes focus solely on pure states, there is an urgent need to extend these results to mixed states that are affected by noise, a vital step for the practical implementation of QNs.
• Other network-based tasks enhanced by entanglement. The feasibility of establishing entanglement over network structures also opens up new possibilities for using entanglement to enhance more general, nontrivial network-based tasks. For example, in Refs. [188,189], researchers studied the application of entanglement to enhance quantum games on both regular lattices and complex network structures, demonstrating that entanglement is a crucial resource for achieving favorable outcomes in quantum game theory. Similar improvements have been noted elsewhere, such as in a quantum adaptation of the card game bridge [190]. The discussion on networks of networks in Section III B also provides an alternative perspective on entanglement. Rather than regarding it solely as a resource, we can view entanglement as a control parameter that regulates the interdependency between multiple network layers, potentially giving rise to novel critical or multicritical behaviors. Indeed, recent theoretical advances in quantum phase transitions [191][192][193] suggest that long-range entanglement among quantum spins at near-absolute-zero temperatures can trigger a shift from a second-order to a first-order phase transition. We hypothesize that this long-range entanglement may operate similarly to the introduction of interdependency among nodes across multiple layers, akin to classical networks-of-networks models. Yet it is worth noting that the underlying physics governing this interdependency stems from entirely distinct principles within the quantum realm.

FIG. 5 :
FIG. 5: Classical percolation and concurrence percolation on the Bethe lattice. (a) The Bethe lattice (k = 3). (b) The sponge-crossing probability P_SC (brown) between sets S (the root) and T (the collection of all leaf nodes) as a function of θ. Driven by classical percolation, a transition threshold is found at θ = π/6, or p_th = 2 sin^2 θ = 1/2. As a comparison, the sponge-crossing concurrence C_SC (red), driven by concurrence percolation, shows a similar but lower threshold at θ = π/8, or c_th = sin 2θ = 1/√2. Moreover, a saturation point at θ ≈ 0.633 (in units of π/4), or c_sat ≈ 0.838, also exists, beyond which we already have C_SC = 1. This saturating feature has no counterpart in classical percolation. (The pink dashed line represents another nonphysical solution.)

FIG. 6 :
FIG. 6: Classical percolation and concurrence percolation on (a)-(b) the square lattice, (c)-(d) the honeycomb lattice, and (e)-(f) the triangular lattice. The sponge-crossing probability P_SC (brown) and the sponge-crossing concurrence C_SC (red) are defined between sets S (the collection of nodes on the left boundary) and T (the collection of nodes on the right boundary), as functions of θ. The brown and red vertical lines denote the finite-size thresholds p_th and c_th, respectively.

FIG. 12: The two approximations: the S_m approximation (determine the set of m-shortest paths), followed by the parallel approximation (treat all paths as non-overlapping).

FIG. 13 :
FIG. 13: The sponge-crossing concurrence C_SC for the Bethe lattice under the parallel approximation. Results are shown for coordination numbers (a) k = 3 and (b) k = 4. As the number of layers L in the network becomes larger, the numerical concurrence percolation threshold approaches the analytical value θ_th = (2/π) sin^{-1}(1/√(k-1)). The solid black lines represent the exact C_SC for the Bethe lattice (cf. Fig. 5).

FIG. 14 :
FIG. 14: The sponge-crossing concurrence C_SC for 2D square lattices under the S_m and parallel approximations. (a) C_SC as a function of the link entanglement θ under the S_3 approximation. The results for S_1 and S_2 are nearly identical to S_3 and are not plotted. (b) Numerical concurrence percolation threshold θ_th under the S_m approximation. As the approximation order m increases, θ_th approaches a constant value. (c) θ_th for different sizes N. (d) Same as (c) but for larger N. The results for S_3 are not shown because they become too computationally intensive to calculate for N > 20^2. As N increases, θ_th also approaches a constant value.

TABLE II :
Connectivity rules that define the classical/concurrence percolation theories.

TABLE III :
Entanglement transmission rules for the CEP and DET schemes, derived from the classical and concurrence percolation rules (Table II), respectively.
FIG. 4: The DET series/parallel rules outperform the CEP series/parallel rules.

TABLE V :
Numerical concurrence percolation thresholds θ_th (≡ 2^{-1} arcsin c_th) of different network topologies, obtained by the S_m and parallel approximations [180]. The results are compared to those provided in Meng et al. [40] for the Bethe lattice and 2D square lattice. Results for Erdős-Rényi (ER) and Barabási-Albert (BA) networks are also reported.