Large independent sets on random $d$-regular graphs with fixed degree $d$

This paper presents a linear-time prioritized local algorithm that computes large independent sets on random $d$-regular graphs with small, fixed degree $d$. We study experimentally the independence ratio obtained by the algorithm for $d \in [3,100]$. For all $d \in [5,100]$, our results exceed the best known lower bounds, thus providing improved estimates of these bounds.

Finding a maximum independent set (MIS) of a graph G(N, E), i.e., the largest subset of vertices I ⊆ N such that for every two vertices i, j ∈ I there is no edge connecting the two, i.e., (i, j) ∉ E, needs a time which is super-polynomial unless P = NP. For example, the first nontrivial exact algorithm for the MIS was Tarjan and Trojanowski's O(2^{N/3}) ∼ O(1.2599^N) algorithm in 1977 [2]. Since then, many improvements have been obtained. Today, the best exact algorithm for the MIS needs time O(1.1996^N) [3]. These results are worst-case bounds. We direct the interested reader to [3], and references therein, for a complete discussion of exact algorithms.
The MIS is important for applications in Computer Science, Operations Research, and Engineering, such as graph coloring, assigning channels to radio stations, register allocation in compilers, etc.
Besides having several direct applications [4], the MIS is closely related to another well-known optimization problem, the maximum clique problem [5], [6]. To find the maximum clique (the largest complete subgraph) of a graph G(N, E), it suffices to search for the maximum independent set of the complement of G(N, E).
The MIS has been studied on many different random structures, like Erdős-Rényi (ER) graphs and random d-regular graphs (RRG). A random d-regular graph is a graph selected uniformly from the set of all d-regular graphs on N vertices, with Nd even. A regular graph is defined as a graph where each vertex has the same number of neighbors, i.e., d. Random d-regular graphs represent a subset of the Erdős-Rényi distribution with connection probability p = ⟨d⟩/N.
For the Erdős-Rényi class G_ER(N, p), where p is the probability that two distinct vertices are connected, known local-search algorithms can find solutions only up to half the maximum independent set present, which is ∼ 2 log_{1/(1−p)} N [7] in the limit N → ∞.
This behavior also appears for random d-regular graphs G_d(N). In this case, for example, Gamarnik and Sudan [8] showed that, for sufficiently large d, local algorithms cannot approximate the size of the largest independent set in a d-regular graph of large girth within an arbitrarily small multiplicative error.
The result of Gamarnik and Sudan [8] was subsequently improved by Rahman and Virág [9], who analyzed the intersection densities of many independent sets in random d-regular graphs. They proved that, for any ε > 0, local algorithms cannot find independent sets in random d-regular graphs with an independence ratio larger than (1 + ε) ln(d)/d if d is sufficiently large. The independence ratio is defined as the density of the independent set, i.e., α = |I|/|N|. Recently, the exact value of the independence ratio for all sufficiently large d was given by Ding et al. [10].
However, these results say nothing about small, fixed d. When d is small and fixed, e.g., d = 3 or d = 30, only lower and upper bounds, expressed in terms of the independence ratio, are known.
Lower bounds on the independent-set size identify sets that an efficient algorithm can find, while upper bounds constrain the actual maximum independent set, not just the size an algorithm can find.
The first upper bound for such a problem was given in 1981 by Bollobás [11]. He showed that the supremum of the independence ratio of 3-regular graphs with large girth is less than 6/13 ≈ 0.461538, in the limit N → ∞.
In 1987, McKay improved and generalized this result to d-regular graphs with large girth [12], using the same technique and a much more careful calculation. For example, for cubic graphs (3-regular graphs), he pushed Bollobás's upper bound down to 0.455370. Since then, the upper bound has been improved only for cubic graphs, by Balogh et al. [13], namely to 0.454. Replica methods suggest slightly lower upper bounds, and thus a smaller gap at small values of d [14]. For example, the upper bound given in [14] for d = 3 is 0.4509, while for d = 4 it is 0.4112. A recent paper shows that this approach can be made rigorous, but again only for large d [10].
Remarkable results for lower bounds were first obtained by Wormald in 1995 [15]. He considered processes in which random graphs are labeled as they are generated, and derived conditions under which parameters of the process concentrate around the values of real variables that solve an associated system of differential equations. By solving the differential equations he computed, for any fixed d, the lower bounds achieved by a prioritized algorithm, improving the bounds given by Shearer [16]. The algorithm is called prioritized because there is a priority in choosing the vertices added to the independent set [17]. It chooses the vertices of the independent set I one by one, with the condition that the next vertex is chosen randomly from those with the maximum number of neighbors adjacent to vertices already in I. After each new vertex of I is chosen (and labeled with an I), we must complete all of its remaining connections and label the neighbors so identified as members of the set V (for vertex cover). Although each vertex in I can be chosen according to its priority, the covering vertices that complete its unfilled connections must be chosen at random among the remaining connections, to satisfy Bollobás's configuration model [15]. Following this priority is a simple way to minimize the number of covered vertices and maximize the number of sites remaining as candidates for the set I.
More precisely, we are given a random d-regular graph G_d(N), and we randomly choose a site i from the set of vertices N. We put i into I, and we put all the vertices neighboring i into a set V. We label the elements of I with the letter I and the elements of V with the letter V. Then, from the subset of vertices in N that are neighbors of vertices in V but are not yet labeled I or V, we randomly choose an element k that has the maximum number of connections with sites in V, and we put it into I. The vertices neighboring k which are not in V are added to the set V. This rule is repeated until |I| + |V| = N. Along with this algorithm, one can consider an associated algorithm that simultaneously generates the random d-regular graph G_d(N) and labels vertices with the letter I or V. This associated algorithm, which will be described in detail in the next sections, allowed Wormald to build the system of differential equations used for computing lower bounds for the MIS.
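The prioritized selection just described can be sketched in a few lines of Python. The sketch below is a simplified stand-in, not Wormald's actual procedure: it operates on a pre-built adjacency map instead of generating the graph concurrently, and the function name and deterministic tie-breaking rule are our own assumptions.

```python
import random

def prioritized_independent_set(adj, seed=0):
    """Simplified prioritized greedy: repeatedly add to I the unlabeled
    vertex with the most neighbors already labeled V, then move its own
    neighbors into V. `adj` maps vertex -> set of neighbors."""
    rng = random.Random(seed)
    I, V = set(), set()
    unlabeled = set(adj)
    start = rng.choice(sorted(unlabeled))      # random first site
    I.add(start)
    unlabeled.discard(start)
    for n in adj[start]:
        V.add(n)
        unlabeled.discard(n)
    while unlabeled:
        # priority: maximum number of connections into V (ties: lowest id)
        k = max(unlabeled, key=lambda u: (len(adj[u] & V), -u))
        I.add(k)
        unlabeled.discard(k)
        for n in adj[k]:
            if n not in I:                     # neighbors of k get covered
                V.add(n)
                unlabeled.discard(n)
    return I

# 6-cycle: the maximum independent set has size 3, and the sketch finds it
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(len(prioritized_independent_set(cycle6)))  # 3
```

On this toy instance any starting vertex leads to an optimal set; on random d-regular graphs the priority rule only yields the lower bounds discussed above.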
Improvements over this algorithm were achieved by Duckworth et al. [18]. These improvements were obtained by observing, broadly speaking, that the size of the structure produced by the algorithm is almost the same for d-regular graphs of very large girth as for a random d-regular graph. Since then, however, new lower bounds have been achieved only at small values of d, e.g., d = 3 and d = 4. Interesting results at d = 3 have been achieved by Csóka, Gerencsér, Harangi and Virág [19], who were able to find an independent set of cardinality up to 0.436194N using invariant Gaussian processes on the infinite d-regular tree. This result was again improved by Csóka [20] alone, who increased the cardinality of the independent set on large-girth 3-regular graphs up to 0.445327N and on large-girth 4-regular graphs up to 0.404070N, by solving numerically the associated system of differential equations.
These improvements were obtained by deferring the decision of whether a site i ∈ N must be labeled with the letter I or V. More precisely, the sites for which a decision is deferred require additional (temporary) labels. This means that tracking the evolution of their populations, either through a differential equation or by experiment, becomes more complicated.
Csóka [20] improved the lower bounds only for d = 3 and d = 4. This paper aims to generalize his method to any d ≥ 5, using an experimental approach. We recall in Tab. 1 the best upper and lower bounds¹ for d ∈ [3,100], in the first and second columns respectively.
In this paper, as stated above, we present experimental results of a greedy algorithm, built upon existing heuristic strategies [15], [21], [18], which leads to improvements on the known lower bounds for large independent sets in random d-regular graphs ∀d ∈ [5,100].
This new algorithm runs in linear time O(N) and melds Wormald's, Duckworth and Zito's, and Csóka's ideas of prioritized algorithms [15], [18], [21], [20]. The results obtained here are conjectured new lower bounds for large independent sets in random d-regular graphs.
They are obtained by inferring the asymptotic values that our algorithm can reach when N → ∞ and by averaging over sufficiently many simulations to achieve confidence intervals at the 99% level. These results improve the known lower bounds ∀d ∈ [5,100] and, as far as we know, are not matched by any other greedy algorithm. Although a gap with the upper bounds remains, these improvements may lead to new rigorous results on finding a large independent set in random d-regular graphs.
The paper is structured as follows: in Sec. 2 we define our deferred decision algorithm and introduce a site labelling which identifies the sites for which we defer the I/V labelling decision. In Sec. 3 we present the deferred decision algorithm for d = 3, and we introduce experimental results obtained on random 3-regular graphs of sizes² up to 10^9. In Sec. 4 we present our deferred decision algorithm for d > 3, and the experimental results associated with it, obtained by extrapolation on random d-regular graphs with sizes up to 10^9 (see the fourth column of Tab. 1).

2 Notation and the general operations of the deferred decision algorithm
In this section, we define the notation used throughout this manuscript, and we define all the operations needed to understand the deferred decision algorithm.
We start by recalling that we deal with random d-regular graphs G_d(N), where d is the degree of each vertex i ∈ N, N is the set of vertices, and N = |N|. All vertices i ∈ N are initially unlabeled.

¹ Recently, a Monte Carlo method has been presented in [22] that can experimentally outperform any algorithm in finding a large independent set in random d-regular graphs, in a (using the words of the authors) "running time growing more than linearly in N" [22]. Its authors conjectured lower-bound improvements only for d = 20 and d = 100, with experimental results obtained on random d-regular graphs of order N = 5 · 10^4. In this work, however, we are interested in comparing our results with those given by the family of prioritized algorithms, because we believe that a rigorous analysis of the computational complexity can be performed on these algorithms.
Fig. 1: The figure shows the equivalence of a C−P−C structure with a virtual site ṽ of anti-degree ∆̄_ṽ = 4. The site s, labeled P, is connected to a site i ∈ V and to two random sites, j and k, labeled C. The resulting structure is a virtual site ṽ ∈ A.
To build a random d-regular graph we used the method described in [15] and introduced in [11].
Definition 1 (Generator of random d-regular graph Algorithm) We take dN points, with dN even, and distribute them in N urns labeled 1, 2, ..., N, with d points in each urn. We choose a random pairing P = p_1, ..., p_{dN/2} of the points such that |p_i| = 2 ∀i. Each urn identifies a site in N. Each point is in exactly one pair p_i, and no pair contains two points from the same urn. No two pairs contain four points from just two urns. To build a d-regular graph G_d(N), we then connect two distinct vertices i and j if some pair has a point in urn i and a point in urn j. The conditions on the pairing prevent the formation of loops and multiple edges.
The pairing must be chosen uniformly at random, subject to the constraints given. This can be done by repeatedly choosing an unpaired point and then choosing a partner for it to create a new pair. As long as the partner is chosen uniformly at random from the remaining unpaired points, and the process is restarted whenever a loop or multiple edge is created, the result is a random pairing of the required type [15].
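The pairing scheme of Definition 1, with the restart rule just described, can be sketched as follows. The function name and the shuffle-then-pair implementation are our own assumptions; the rejection of loops and multiple edges follows the description above.

```python
import random

def random_regular_graph(n, d, seed=0):
    """Pairing (configuration) model sketch: d*n labeled points in n urns,
    paired uniformly at random; restart whenever a self-loop or a
    multiple edge appears, as in the rejection scheme described above."""
    assert (n * d) % 2 == 0, "n*d must be even"
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]  # d points per urn
        rng.shuffle(stubs)                 # a uniform pairing of the points
        edges = set()
        ok = True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                ok = False                 # loop or multiple edge: restart
                break
            edges.add((min(u, v), max(u, v)))
        if ok:
            return edges

g = random_regular_graph(12, 3, seed=1)
deg = {}
for u, v in g:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(all(deg[v] == 3 for v in range(12)))  # True: every vertex has degree 3
```

Rejection keeps the distribution over simple d-regular graphs uniform, at the cost of restarts whose probability is bounded away from 1 for fixed d.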
In this paper, we use the method described above so that the random d-regular graph G_d(N) is generated concurrently with our labelling process, labelling sites as we identify new links.
The graphs built using the Generator of random d-regular graph Algorithm avoid loops and multiple edges without introducing bias in the distribution from which we sample the graphs. We perform our analysis only on such graphs. If the last pair of vertices forms a loop, we set those sites not to be in the independent set.
We define two separate sets I and V for independent and vertex-cover sites. I identifies the set of graph nodes with the property that no two of them are adjacent, and V is its complement. A site i ∈ I is labeled with the letter I, while a site j ∈ V is labeled with the letter V.
We define ∆_i to be the degree of a vertex i, i.e., the number of links that connect to i, and ∆̄_i to be the anti-degree of vertex i, i.e., the number of free connections that i still needs to complete during the graph-building process. Of course, the constraint ∆_i + ∆̄_i = d is always preserved ∀i ∈ N. At the beginning of the graph-building process all i ∈ N have ∆_i = 0 and ∆̄_i = d; at the end, all graph nodes have ∆_i = d and ∆̄_i = 0. We define ∂i to be the set that contains all neighbors of i.
For the sake of clarity, we define a simple subroutine of the Generator of random d-regular graph Algorithm acting on a single site i ∈ N (Subroutine GA(i, ∆̄_i)), which will be useful for understanding the algorithm presented in the next sections. Subroutine GA(i, ∆̄_i) generates the remaining ∆̄_i connections of site i and keeps the supporting data up to date with the evolution of the network growth.
Algorithm 1: Subroutine GA(i, ∆̄_i)

The sites that we choose following some priority, either the one we describe or any other scheme, will be called P sites. The sites which are found by following links from the P sites (or by randomly generating connections from the P sites) are called C sites. More precisely, each site j ∈ N not yet labeled with any letter (C or P), such that ∆̄_j ≤ 2 and the random connection(s) already present on j lead to site(s) in V, is a P site. The set P contains the P sites and is kept in ascending order with respect to the anti-degree ∆̄_i of each site i ∈ P.
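Since the listing of Subroutine GA is compact, a rough Python sketch may help. It is our own simplified reading: sites are a dict of anti-degrees, partners are drawn uniformly among the free stubs of the other sites, and the loop/multi-edge rejection of Definition 1 is omitted.

```python
import random

def GA(i, anti_deg, rng):
    """Sketch of Subroutine GA(i, anti_deg[i]): complete the remaining free
    connections of site i by pairing each of its free stubs with a stub
    chosen uniformly among the free stubs of the other sites.
    `anti_deg` maps site -> number of still-unmatched stubs.
    Returns the new neighbours. (Rejection of loops/multi-edges omitted.)"""
    new_neighbours = []
    while anti_deg[i] > 0:
        # stub-weighted pool: site v appears once per free stub
        pool = [v for v, a in anti_deg.items() if v != i for _ in range(a)]
        j = rng.choice(pool)
        anti_deg[i] -= 1                  # one stub of i is consumed...
        anti_deg[j] -= 1                  # ...together with one stub of j
        new_neighbours.append(j)
    return new_neighbours

rng = random.Random(3)
anti = {0: 2, 1: 3, 2: 3, 3: 3, 4: 3}    # d = 3; site 0 already has one edge
nbrs = GA(0, anti, rng)
print(anti[0], len(nbrs))  # 0 2 : site 0 is now fully connected
```

The bookkeeping mirrors the ∆_i + ∆̄_i = d constraint: every drawn pair decreases two anti-degrees by one.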
In general, a site i labeled P will be surrounded by two sites labeled C. Because the labeling of those sites is deferred, we call these C−P−C structures virtual sites. A virtual site ṽ has an anti-degree ∆̄_ṽ equal to the sum of the anti-degrees of the sites l that compose ṽ, i.e., ∆̄_ṽ = Σ_{l∈ṽ} ∆̄_l. The number of sites l ∈ ṽ is the cardinality |ṽ|, and the degree of ṽ is ∆_ṽ = d|ṽ| − ∆̄_ṽ. As an example, we show in Fig. 1 how a virtual site is created from a site s ∈ P with ∆̄_s = 2, ∆_s = 1, and two sites j, k ∈ N with ∆̄_j = ∆̄_k = 3 and ∆_j = ∆_k = 0. Assume that a site s ∈ N with ∆̄_s = 2 exists; this is possible because a site l ∈ V is connected to it. This means that s must be labeled P and put into P. We run Subroutine GA(s, ∆̄_s) on s, and assume that s connects with two neighbors j, k ∈ N. Being connected to a P site, j and k are labeled C. We then define ṽ = {s, j, k}. This set is a virtual node ṽ with ∆̄_ṽ = 4.

Fig. 2: The figure shows how the virtual site ṽ_1 ∈ A is expanded.
We define A to be the set of virtual sites. The set A is kept in ascending order with respect to the anti-degree ∆̄ of each virtual site ṽ ∈ A. Virtual sites can be created, as described above, expanded, or merged together (creating a new virtual site θ = ∪_i ṽ_i). Two examples are shown in Figs. 2 and 3.
Fig. 2 shows how to expand a virtual site ṽ_1. Imagine that a site m ∈ P with anti-degree ∆̄_m = 2 is chosen, and run Subroutine GA(m, ∆̄_m) on m. Assume that m connects with a site n ∈ N (∆̄_n = 2) and with ṽ_1 ∈ A (∆̄_ṽ1 = 4). In this case ṽ_1 expands, swallowing sites m and n, and acquires ∆̄_ṽ1 = 5.
Fig. 3 shows how two virtual sites merge. Imagine that a site p ∈ P with anti-degree ∆̄_p = 2 is chosen during the graph-building process, and run Subroutine GA(p, ∆̄_p) on p. Assume that p connects with two virtual sites ṽ_1, ṽ_2 ∈ A with ∆̄_ṽ1 = ∆̄_ṽ2 = 4. The new structure is a virtual site θ ∈ A with ∆̄_θ = 6.
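The anti-degree bookkeeping of these create/expand/merge moves can be checked with a few lines of arithmetic. The helper below is our own illustration: it sums the members' free connections, subtracts the stubs consumed by the internal links, and applies the relation ∆_ṽ = d|ṽ| − ∆̄_ṽ from above.

```python
def merge_virtual_sites(d, parts, links_used):
    """Anti-degree bookkeeping sketch for virtual sites.
    `parts`: anti-degrees of the components absorbed into the new site;
    `links_used`: stubs consumed by the links created inside the structure.
    Returns (number of components, anti-degree, degree per the paper's
    formula degree = d*|v| - anti_degree)."""
    size = len(parts)
    anti = sum(parts) - links_used
    return size, anti, d * size - anti

# Fig. 1 example (d = 3): s with 2 free stubs joins fresh j, k (3 each);
# the two links s-j and s-k consume 4 stubs, leaving anti-degree 4.
print(merge_virtual_sites(3, [2, 3, 3], 4))  # (3, 4, 5)

# Fig. 3 merge (anti-degree only): v~1 (4), v~2 (4) and p (2); p's two
# links consume 4 stubs in total, leaving anti-degree 6.
_, anti_theta, _ = merge_virtual_sites(3, [4, 4, 2], 4)
print(anti_theta)  # 6
```

Both outputs reproduce the worked examples of Figs. 1 and 3.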
We define in the following a list of operations that will be useful for understanding the algorithm presented in the next sections.
Definition 2 (OP^i_move(i, X, Y)) Let X and Y be two sets, with i ∈ X and i ∉ Y. We define OP^i_move(i, X, Y) to be the operation that moves the site i from the set X to the set Y, i.e., afterwards i ∈ Y and i ∉ X.
For example, OP^i_move(i, N, V) moves i ∈ N from the set N to V, i.e., i ∈ V and i ∉ N. Instead, the operation OP^i_move(i, N, I) moves i ∈ N from the set N to I, i.e., i ∈ I and i ∉ N. We recall that when a site is put into I it is labeled with I, while when a site is put into V it is labeled with V.

Fig. 3: The figure shows how a virtual site θ ∈ A with ∆̄_θ = 6 is created.

Definition 3 (OP^ṽ_del(ṽ, A)) Let A be the set that contains the virtual nodes ṽ. We define OP^ṽ_del(ṽ, A) to be the operation that deletes the site ṽ ∈ A from the set A, i.e., ṽ ∉ A anymore, and applies the operation OP^i_move(i, X, Y) to each site i ∈ ṽ, following the rule: if i ∈ ṽ is labeled with the letter P then X = N and Y = I; if i ∈ ṽ is labeled with the letter C then X = N and Y = V.

Definition 4 (SWAP-OP(ṽ)) Let ṽ ∈ A. We define SWAP-OP(ṽ) to be the operation such that ∀i ∈ ṽ: if i is labeled P then the label P swaps to C; if i is labeled C then the label C swaps to P. Fig. 4 shows how SWAP-OP(ṽ) acts on a virtual site ṽ.

Fig. 4: The figure shows how SWAP-OP(ṽ) works on a virtual site ṽ with ∆̄_ṽ = 0. SWAP-OP(ṽ) swaps all P sites in ṽ into C sites and all C sites in ṽ into P sites.
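Definitions 2-4 translate almost literally into set manipulations. The sketch below is our own illustration (labels kept in a dict, virtual sites as frozensets), not the authors' implementation.

```python
def op_move(i, X, Y):
    """OP_move(i, X, Y): relocate site i from set X to set Y."""
    X.discard(i)
    Y.add(i)

def swap_op(labels, vtilde):
    """SWAP-OP(v~): exchange the P and C labels inside a virtual site."""
    for i in vtilde:
        labels[i] = 'C' if labels[i] == 'P' else 'P'

def op_del(vtilde, A, labels, N, I, V):
    """OP_del(v~, A): dissolve a virtual site; its P members go to I,
    its C members go to V."""
    A.discard(frozenset(vtilde))
    for i in vtilde:
        op_move(i, N, I if labels[i] == 'P' else V)

labels = {0: 'C', 1: 'P', 2: 'C'}          # a C-P-C virtual site
vt = {0, 1, 2}
N, I, V, A = {0, 1, 2}, set(), set(), {frozenset(vt)}
swap_op(labels, vt)                         # now P-C-P
op_del(vt, A, labels, N, I, V)
print(sorted(I), sorted(V))  # [0, 2] [1]
```

The example reproduces the gain behind SWAP-OP: the dissolved C−P−C structure contributes two sites to I instead of one.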
3 The deferred decision algorithm for d = 3

In this section, we present our algorithm for determining large independent sets in random d-regular graphs with d = 3, i.e., G_3(N); it is simpler than, and slightly different from, the one in [20], but based on the same idea. It will also be the core of the algorithm developed in Sec. 4. As mentioned above, the algorithm discussed in this paper is basically a prioritized algorithm, i.e., an algorithm that makes local choices with a priority in selecting certain sites.
We start the discussion of the local algorithm for d = 3 by giving its pseudo-code in Algorithm 2.
The algorithm starts by randomly taking a site i from the set N and completing its connections at random, following the method described in Algorithm 1. Once all its connections are completed, site i has ∆_i = 3 and ∆̄_i = 0. It is labeled with the letter V, erased from N, and put into V; in other words, the operation OP^i_move(i, N, V) is applied to it. Each neighbor j ∈ ∂i of i has degree ∆_j = 1 and anti-degree ∆̄_j = 2; therefore, the neighbors are put into P and labeled P.
The algorithm then picks from P a site k with the minimum number of remaining connections. In general, if k has ∆̄_k > 0, the algorithm completes all its connections and removes it from P. Each site connected to a P site is automatically labeled with the letter C. If a site k ∈ P connects to another site j ∈ P, with j ≠ k, then j is removed from P and labeled C.
If k ∈ P has ∆̄_k = 0, the site k is put into I and removed from N and P, i.e., the algorithm applies the operation OP^k_move(k, N, I). As defined in Sec. 2, a C−P−C structure is equivalent to a single virtual site ṽ with anti-degree ∆̄_ṽ. Each virtual site created with ∆̄_ṽ > 2 is inserted into the set A.
Once the set P is empty, the algorithm selects a site ṽ ∈ A with the largest anti-degree ∆̄_ṽ and applies the operation OP^ṽ_del(ṽ, A), after having completed all the connections of every i ∈ ṽ with ∆̄_i ≠ 0, using Algorithm 1 on each such site.
We apply the operation OP^ṽ_del(ṽ, A) to virtual sites ṽ ∈ A with the largest anti-degree because we hope that the random connections outgoing from those sites will reduce the anti-degrees of the existing virtual sites in A, in such a way that the probability of having virtual nodes with anti-degree ∆̄_ṽ ≤ 2 increases. In other words, we want to create islands of virtual sites surrounded by a sea of V sites, so that we can apply SWAP-OP(ṽ) to those nodes. This protocol, indeed, allows us to increase the cardinality of the independent set and decrease that of the vertex-cover set. For this reason, if virtual nodes with anti-degree ∆̄_ṽ ≤ 2 exist in A, those sites have the highest priority of selection. More precisely, the algorithm follows the priority rule:
1. ∀ṽ ∈ A s.t. ∆̄_ṽ = 0, the algorithm applies sequentially the operation SWAP-OP(ṽ) and then the operation OP^ṽ_del(ṽ, A).
2. If no virtual sites ṽ ∈ A with ∆̄_ṽ = 0 are present, the algorithm looks for those with ∆̄_ṽ = 1. ∀ṽ ∈ A s.t. ∆̄_ṽ = 1, it applies the operation SWAP-OP(ṽ), completes the last connection of the site i ∈ ṽ with ∆̄_i = 1, applies OP^j_move(j, N, V) to the last neighbor j added to i, and then applies OP^ṽ_del(ṽ, A).
3. If no virtual sites ṽ ∈ A with ∆̄_ṽ = 0 or ∆̄_ṽ = 1 are present, the algorithm looks for those with ∆̄_ṽ = 2. ∀ṽ ∈ A s.t. ∆̄_ṽ = 2, it applies the operation SWAP-OP(ṽ), completes the last connections of the sites i ∈ ṽ with ∆̄_i ≠ 0, labels the newly added sites with the letter C, and updates the degree and anti-degree of the virtual node ṽ.
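The priority rule above, including the fallback to the largest anti-degree, can be condensed into a small selection function. This is our own sketch of the dispatch logic only, with a deterministic tie-break in place of the random choices; it does not perform the SWAP-OP/OP_del actions themselves.

```python
def pick_virtual_site(A_anti):
    """Priority sketch for the d = 3 rules above: virtual sites with
    anti-degree 0 come first, then 1, then 2; otherwise fall back to
    the virtual site with the *largest* anti-degree.
    `A_anti` maps virtual-site id -> anti-degree.
    Returns (chosen site, its anti-degree)."""
    for target in (0, 1, 2):
        hits = [v for v, a in A_anti.items() if a == target]
        if hits:
            return min(hits), target       # deterministic tie-break
    v = max(A_anti, key=lambda u: (A_anti[u], u))
    return v, A_anti[v]

print(pick_virtual_site({'a': 4, 'b': 2, 'c': 5}))  # ('b', 2)
print(pick_virtual_site({'a': 4, 'c': 5}))          # ('c', 5)
```

Anti-degree ≤ 2 always wins over larger anti-degrees, matching the "islands surrounded by a sea of V sites" strategy.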
The algorithm proceeds, selecting virtual nodes and creating sites labeled P, until N = ∅. Once N = ∅, it returns the set I of independent sites. The code of the algorithm can be downloaded at [23].
Our numerical results for the independence ratio agree with the theoretical ones at least up to the 5th digit. To obtain them, we performed an accurate analysis on random 3-regular graphs of order from 10^6 up to 5 · 10^8, aimed at computing the sample mean of the independence ratio α(N) output by our algorithm. Each average is obtained in the following manner: for graphs of order N = 10^6 we averaged over a sample of 10^4 graphs; for N = 2.5 · 10^6 over 7.5 · 10^3 graphs; for N = 5 · 10^6 over 5 · 10^3 graphs; for N = 10^7 over 10^3 graphs; for N = 2.5 · 10^7 over 7.5 · 10^2 graphs; for N = 5 · 10^7 over 5 · 10^2 graphs; for N = 10^8 over 10^2 graphs; for N = 2.5 · 10^8 over 50 graphs; and for N = 5 · 10^8 over 10 graphs. The mean and the standard deviation of each sample are reported in Tab. 2. Observing that the sample means of the independence ratio approach an asymptotic value, we perform a linear regression with the model f(N) = (a/ln N) + α∞ to estimate the parameter α∞ (blue line in Fig. 5). When N → ∞ the first term of the regression, (a/ln N), goes to 0, leaving the value α∞ that describes the asymptotic independence ratio our algorithm can reach. Using the numerical standard errors obtained on each sample, we apply a Generalized Least Squares (GLS) method [24] to infer the parameter α∞, averaging enough simulations to achieve a 99% confidence interval on it. The value of α∞ is the most important one because it is the asymptotic value that our algorithm can reach when N → ∞. From the theory of GLS, we know that the estimator of α∞ is unbiased, consistent, and efficient, and that a confidence interval on this parameter is justified. The analysis, performed on the data reported in Tab. 2, shows that the independence ratio reaches the asymptotic value α∞ = 0.445330(3). This value agrees with the theoretical value proposed in [20].
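The extrapolation step can be reproduced with a weighted least-squares fit of f(N) = a/ln(N) + α∞. The sketch below is a simplified stand-in for the full GLS machinery (diagonal weight matrix 1/σ², no correlations), run here on noiseless synthetic data generated from the fitted values quoted above rather than on the real samples of Tab. 2.

```python
import numpy as np

def fit_alpha_inf(Ns, means, std_errs):
    """Weighted least-squares fit of f(N) = a/ln(N) + alpha_inf.
    Weights are 1/sigma^2 (a diagonal-GLS simplification).
    Returns (a, alpha_inf)."""
    x = 1.0 / np.log(np.asarray(Ns, dtype=float))
    y = np.asarray(means, dtype=float)
    X = np.column_stack([x, np.ones_like(x)])  # design matrix [1/lnN, 1]
    W = np.diag(1.0 / np.asarray(std_errs, dtype=float) ** 2)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]

# synthetic data from a = -0.00033, alpha_inf = 0.44533 (values quoted above)
Ns = [1e6, 1e7, 1e8, 5e8]
true_a, true_alpha = -0.00033, 0.44533
means = [true_a / np.log(n) + true_alpha for n in Ns]
a, alpha = fit_alpha_inf(Ns, means, [1e-6] * 4)
print(round(alpha, 5))  # 0.44533
```

On noiseless input the fit recovers the generating parameters exactly; on real samples the per-sample standard errors would propagate into a confidence interval on α∞.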
4 The deferred decision algorithm for d > 3

In this section, we present how to generalize the prioritized algorithm to all d > 3. Like the one described in Sec. 3, it builds the random regular graph and, at the same time, tries to maximize the independent-set cardinality |I|. The main idea is to merge two existing algorithms, namely the one in [15] and the one described above, into a new prioritized algorithm able to maximize the independent-set cardinality, providing improved estimates of the lower bounds. The new conjectured lower bounds come from extrapolation on random d-regular graphs of size up to 10^9. Before introducing the algorithm, we present a new operation that will simplify the discussion.
Definition 5 (OP^i_build-del(i, N, I, V)) Let i ∈ N. We define OP^i_build-del(i, N, I, V) to be the operation that connects i to ∆̄_i sites following the rules of Algorithm 1, applies OP^i_move(i, N, I), and, ∀j ∈ ∂i, sequentially runs Algorithm 1 and applies the operation OP^j_move(j, N, V).
The pseudo-code of the last operation is described in Algorithm 3.
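Definition 5 can also be sketched in Python. The sketch is our own hedged reading: it operates on a pre-built adjacency map, so the concurrent edge generation of Algorithm 1 is omitted; only the set moves are shown.

```python
def op_build_del(i, adj, N, I, V):
    """Sketch of OP_build-del(i, N, I, V): put i into the independent
    set and all of its neighbours into the cover set. The concurrent
    edge generation of Algorithm 1 is omitted: `adj` is a pre-built
    adjacency map (vertex -> set of neighbours)."""
    N.discard(i)
    I.add(i)
    for j in adj[i]:
        if j in N:                 # only still-unlabeled neighbours move
            N.discard(j)
            V.add(j)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}   # a triangle plus site 3
N, I, V = {0, 1, 2, 3}, set(), set()
op_build_del(0, adj, N, I, V)
print(sorted(I), sorted(V), sorted(N))  # [0] [1, 2] [3]
```

This is exactly the Wormald-style step used until a site with anti-degree at most two appears.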
Algorithm 3: Subroutine OP^i_build-del(i, N, I, V)

We start the discussion of the local algorithm for d > 3 by giving its pseudo-code in Algorithm 4.
The algorithm starts by randomly selecting a site z from the set of all nodes N, i.e., z ∈ N. It then applies OP^z_build-del(z, N, I, V) to the site z (see Fig. 6). This operation creates nodes with different degrees and anti-degrees. The algorithm proceeds by choosing the node m with minimum ∆̄_m. If the node m has ∆̄_m > 2, the algorithm applies the operation OP^m_build-del(m, N, I, V) to site m. In other words, we use the algorithm developed in [15] until a site m ∈ N with ∆̄_m ≤ 2 pops up. When such a case appears, we label it as a P site and move it into the set P.
As before, once P is not empty, the sites in P have the highest priority in being processed for creating virtual nodes.
In principle, more complex virtual nodes could be created, for instance by defining a P site as a site i ∈ N not yet labeled with any letter (C or P) such that ∆̄_i ≤ d − 1, with ∆_i random connections to sites already in V. Although we see no logical impediment to creating more complex virtual nodes, we confine ourselves to the case where the anti-degree of a bare site in N is at most two, for any value of d, because it is much easier to handle and explain.
As long as the set P is not empty, the algorithm builds virtual sites, which are put into A.
Once the set P is empty, the highest priority is placed on the virtual sites contained in A. The rules that the algorithm follows are:
1. ∀ṽ ∈ A s.t. ∆̄_ṽ = 0 ∨ ∆̄_ṽ = 1, the algorithm applies sequentially the operation SWAP-OP(ṽ) and the operation OP^ṽ_del(ṽ, A). (In the case ∆̄_ṽ = 1, the algorithm first completes the missing connection of the site i ∈ ṽ s.t. ∆̄_i = 1 and applies the operation OP^j_move(j, N, V) to the last added site j ∈ ∂i; then it applies OP^ṽ_del(ṽ, A) to the virtual site ṽ.)
2. If ∃ṽ ∈ A s.t. ∆̄_ṽ = 2 and no q ∈ A s.t. ∆̄_q = 0 ∨ ∆̄_q = 1 exists, the algorithm chooses with the highest priority the site with ∆̄_ṽ = 2. It then applies the operation SWAP-OP(ṽ) to ṽ, runs Subroutine GA(i, ∆̄_i) ∀i ∈ ṽ with ∆̄_i ≠ 0, and labels each new neighbor of i with the letter C.
3. If ∃ṽ ∈ A s.t. ∆̄_ṽ > 2 and no p ∈ A s.t. ∆̄_p ≤ 2 exists, the algorithm chooses a site ṽ ∈ A with the maximum ∆̄_ṽ and applies OP^ṽ_del(ṽ, A) to it, after having run Subroutine GA(i, ∆̄_i) on each i ∈ ṽ such that ∆̄_i ≠ 0.
4. If P = ∅ ∧ A = ∅ ∧ N ≠ ∅, the algorithm takes a site t ∈ N with minimum ∆̄_t and applies the operation OP^t_build-del(t, N, I, V).
The algorithm runs until the following condition holds: N = ∅ ∧ P = ∅ ∧ A = ∅. Then it checks that all sites in I are covered only by sites in V and that no site in I connects to any other site in I.
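The final consistency check mentioned above (every I site covered only by V sites, no edge inside I) is easy to state as code. The sketch below is our own illustration on a pre-built adjacency map.

```python
def is_valid_partition(adj, I, V):
    """Final check sketch: I and V partition the vertices, no two members
    of I are adjacent, and every neighbour of an I vertex lies in V."""
    if I & V or I | V != set(adj):
        return False                     # not a partition of the vertices
    for i in I:
        if any(j in I for j in adj[i]):
            return False                 # two independent vertices adjacent
        if not all(j in V for j in adj[i]):
            return False                 # an I vertex not fully covered by V
    return True

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}    # path graph 0-1-2-3
print(is_valid_partition(path, {0, 2}, {1, 3}))  # True
print(is_valid_partition(path, {0, 1}, {2, 3}))  # False
```

A check of this kind runs in O(N d) time and therefore does not change the linear complexity of the algorithm.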
The results obtained by the algorithm at different values of d and different orders N are presented in Tabs. 3, 4, and 5. The confidence intervals of the asymptotic independence-ratio values, obtained by the extrapolation described in the previous section, are presented in Tab. 1. In other words, for each value of d we performed simulations, computing the sample mean and the standard error of the independence ratio at several values of N; then we used the GLS method to extrapolate the value of α∞ and build its confidence interval. From our analysis, we observe that ∀d > 4 our results, as far as we know, exceed the best theoretical lower bounds given by greedy algorithms. These improvements are obtained because we allow the virtual nodes to increase and decrease their anti-degrees. In other words, this process transforms the random d-regular graph into a sparse random graph, where it is much easier to make local rearrangements (our SWAP-OP(·) move) that enlarge the independent set.
More precisely, the creation of virtual nodes that increase or decrease their anti-degrees allows us to deal with a graph that is no longer d-regular but has average connectivity ⟨d⟩.
However, this improvement decreases as d becomes large, as ∼ 1/d, and disappears when d → ∞ (see Fig. 7, bottom panel). Indeed, the number of P-labeled sites decreases during the graph-building process (see Fig. 7, top panel), preventing the creation of the virtual nodes that are the core of our algorithm. This means that for d → ∞ our algorithm will reach the same asymptotic independence-ratio values obtained by the algorithm in [15].
In conclusion, for any small, fixed d, the two algorithms are distinct, and our algorithm produces better results without increasing the computational complexity.

Conclusion
This manuscript presents a new local prioritized algorithm for finding large independent sets in random d-regular graphs at fixed connectivity. The algorithm makes deferred decisions in choosing which sites must be put into the independent set or into the vertex-cover set. This deferred strategy can be seen as a depth-first search delayed in time, without backtracking. It works well and shows very interesting results. For all d ∈ [5, 100] we conjecture new lower bounds for this problem, all of which improve upon the best previous bounds. All of them have been obtained by extrapolation on samples of random d-regular graphs of sizes up to 10^9. For random 3-regular graphs, our algorithm reaches, when N → ∞, the asymptotic value presented in [20]. For 4-regular graphs, we are not able to improve the existing lower bound. This discrepancy may be explained by the fact that our algorithm is general and is not tailored to a single value of d with an ad hoc strategy.
The improvements upon the best bounds are due to reducing the density of the graph, introducing regions in which virtual sites replace multiple original nodes and optimal labellings can be identified. The creation of virtual sites allows us to group together nodes of the graph and label them at a later instant than their creation. These blobs of nodes transform the random d-regular graph into a sparse graph, where the search for a large independent set is simpler.
Undoubtedly, more complex virtual nodes can be defined and additional optimizations can be identified. These will be addressed in a future manuscript.

Fig. 5 :
Fig. 5: The figure shows the extrapolation of the independent-set ratio as a function of the graph order, i.e., |N| = N, for d = 3. The error bars are the standard errors multiplied by the quantile of the t distribution, z_99% = 3.35. The asymptotic value α∞ = 0.445330(3), extrapolated by fitting the data with the function f(N) = (a/ln N) + α∞ (blue line), where a = −0.00033(6), identifies the value that our algorithm reaches when N → ∞. The 99% confidence interval of α∞ (in Tab. 1) agrees with the theoretical value α_LB = 0.44533.
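The extrapolation in the caption can be reproduced with an ordinary least-squares fit, since f(N) = a/ln N + α∞ is linear in x = 1/ln N. The snippet below is an illustration on synthetic data generated from the fitted values quoted in the caption, not a refit of the paper's measurements.

```python
import numpy as np

def extrapolate_alpha(Ns, alphas):
    """Fit alpha(N) = a/ln(N) + alpha_inf by least squares on x = 1/ln N.

    Returns (a, alpha_inf); np.polyfit gives [slope, intercept]."""
    x = 1.0 / np.log(np.asarray(Ns, dtype=float))
    a, alpha_inf = np.polyfit(x, np.asarray(alphas, dtype=float), 1)
    return a, alpha_inf

# Synthetic data built from the caption's fitted values (illustration only).
true_a, true_alpha = -0.00033, 0.445330
Ns = [10**k for k in range(4, 10)]
alphas = [true_a / np.log(N) + true_alpha for N in Ns]

a_fit, alpha_fit = extrapolate_alpha(Ns, alphas)
print(alpha_fit)  # ≈ 0.445330
```

On real samples the error bars would enter through a weighted fit, but the linear-in-1/ln N structure is the same.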

Algorithm 2 :
local algorithm for d = 3
Input: N, d = 3; Output: I;
1 Build the set of sites N with |N| = N;
2

Fig. 6 :
Fig. 6: The figure shows how a P site appears, i.e., a site b with ∆_b ≤ 2, in a random d-regular graph G_d(N, E) with degree d = 5. In the event that a P site has not been created, the algorithm picks a site m ∈ N with minimum ∆_m and applies the operation OP^m_build-del(m, N, I, V) to it.

Fig. 7 :
Fig. 7: The figure shows the dynamics of the fraction of P sites during the graph building process, as a function of the number of links inserted (top panel), and the total fraction of P sites as a function of d (bottom panel). The fraction of P sites decreases like ∼ 1/d as d becomes large. As stated in the main text, this behavior shows that for d → ∞ our algorithm matches the one in [15].

Algorithm 4 :
local algorithm for d > 3
Input: N, d; Output: I;
1 Build the set of sites N with |N| = N;
2 I = ∅;
3 V = ∅;
4 Pick a random site i ∈ N;
5 Apply OP^i_build-del(i, N, I, V);
6 while N ≠ ∅ do
7 while ∃i ∈ N s.t.

Table 1 :
The table shows the values of the upper and lower bounds on the independence ratio for random d-regular graphs with small and fixed degree d, where d is the degree of the random d-regular graph. The α_UB column reports the upper bound computed by McKay in

Table 2 :
The table shows the sample average and standard deviation of the independent-set ratio α(N) for random regular graphs of order N and d = 3.

Table 3 :
The table shows the sample average and standard deviation of the independent-set ratio α(N) for random regular graphs of order N and degree d = 4, 5, 6, 7.

Table 4 :
The table shows the sample average and standard deviation of the independent-set ratio α(N) for random regular graphs of order N and degree d = 8, 9, 10.

Table 5 :
The table shows the sample average and standard deviation values of the independent set ratio α(N ) for random regular graphs of order N and degree d = 20, 50, 100.
∆_i ≤ 2 ∧ i ∉ P ∧ i is not labeled C do
8 Label i with the letter P and insert it into P;
9 if P ≠ ∅ then
10 while P ≠ ∅ do
11 Pick the first l ∈ P;
12 if ∆_l = 0 then
13 Apply OP^l_move(l, N, I);
14 Remove l from P;
15 else
16 Run Subroutine GA(l, ∆_l);
17 If a neighbour j of l, i.e., j ∈ ∂l, is in P, remove j from P;
18 ∀j ∈ ∂l, label each j with the letter C;
19 Build or update the virtual node ṽ and, if it is not present, insert it into A;
20 Remove l from P;
21 else if A ≠ ∅ then
22 while ∃ṽ ∈ A s.t. ∆_ṽ = 0 do
23 Apply SWAP-OP(ṽ);
24 Apply OP^ṽ_move(ṽ, A);
25 while ∃ṽ ∈ A s.t. ∆_ṽ = 1 do
26 Apply SWAP-OP(ṽ);
27 For i ∈ ṽ labeled P s.t. ∆_i = 1, run Subroutine GA(i, ∆_i);
28 Pick j ∈ ∂i, with j the last neighbour of i added;
29 Run Subroutine GA(j, ∆_j);
30 Apply OP^j_move(j, N, V);
31 Apply OP^ṽ_del(ṽ, A);
32 while ∃ṽ ∈ A s.t. ∆_ṽ = 2 do
33 Apply SWAP-OP(ṽ);
34 ∀i ∈ ṽ labeled P s.t. ∆_i ≤ 2, run Subroutine GA(i, ∆_i) and label the neighbour(s) of i with the letter C;
35 Update the virtual node ṽ;
36 Pick ṽ s.t. max_{ṽ∈A} ∆_ṽ;
37 ∀i ∈ ṽ s.t. ∆_i = 0 and labeled C, run Subroutine GA(i, ∆_i);
38 else
39 Pick a random site m ∈ N s.t. min_{m∈N} ∆_m (m not labeled P or C);
40 Apply OP^m_build-del(m, N, I, V);
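The full prioritized scheme above, with its P sites and virtual nodes, is not reproduced here. As a rough, runnable point of comparison, the sketch below implements only the simplest ingredient it shares with the fallback step: repeatedly moving a minimum-residual-degree site into the independent set and removing it together with its neighbours, on a configuration-model random d-regular graph. All function names are ours, and the graph builder drops self-loops and parallel edges rather than rejecting them, so a few vertices may end up with degree below d.

```python
import random

def random_regular_adjacency(n, d, rng):
    """Configuration model: pair up n*d half-edges uniformly at random.

    Self-loops and parallel edges are simply discarded (an approximation)."""
    stubs = [v for v in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [set() for _ in range(n)]
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:          # discard self-loops
            adj[u].add(v)   # sets discard parallel edges
            adj[v].add(u)
    return adj

def greedy_min_degree_is(adj):
    """Repeatedly move a min-residual-degree site into I; delete it and its
    neighbours. deg[v] tracks the number of still-alive neighbours of v."""
    alive = set(range(len(adj)))
    deg = {v: len(adj[v]) for v in alive}
    indep = []
    while alive:
        v = min(alive, key=deg.get)
        indep.append(v)
        removed = {v} | (adj[v] & alive)
        alive -= removed
        for u in removed:
            for w in adj[u]:
                if w in alive:
                    deg[w] -= 1
    return indep

rng = random.Random(0)
adj = random_regular_adjacency(2000, 3, rng)
indep = greedy_min_degree_is(adj)
print(len(indep) / len(adj))  # for d = 3, typically around 0.43
```

This baseline stops well short of the bounds reported in Tab. 1 for larger d; the deferred labelling of P sites and virtual nodes is precisely what the full algorithm adds on top of it.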