Exact Recovery of Stochastic Block Model by Ising Model

In this paper, we study the phase transition property of an Ising model defined on a special random graph—the stochastic block model (SBM). Based on the Ising model, we propose a stochastic estimator to achieve exact recovery for the SBM. The stochastic algorithm can be transformed into an optimization problem, which includes maximum likelihood and maximum modularity as special cases. Additionally, we give an unbiased convergent estimator for the model parameters of the SBM, which can be computed in constant time. Finally, we use Metropolis sampling to realize the stochastic estimator and verify the phase transition phenomenon through experiments.


Introduction
In network analysis, community detection consists of inferring the groups of vertices that are more densely connected in a graph [1]. It has been used in many domains, such as recommendation systems [2], task allocation in distributed computing [3], gene expressions [4], and so on. The stochastic block model (SBM) is one of the most commonly used statistical models for community detection problems [5,6]. It provides a benchmark artificial dataset to evaluate different community detection algorithms and inspires the design of many algorithms for community detection tasks. These algorithms, such as semidefinite relaxation, spectral clustering, and label propagation, not only have theoretical guarantees when applied to the SBM, but also perform well on datasets without the SBM assumption. The study of the theoretical guarantee of the SBM model can be divided between the problem of exact recovery and that of partial recovery. Exact recovery requires that the estimated community should be exactly the same as the underlying community structure of the SBM, whereas partial recovery expects the ratio of misclassified nodes to be as small as possible. For both cases, the asymptotic behavior of the detection error is analyzed as the scale of the graph tends to infinity. There are already some well-known results for the exact recovery problem on the SBM. To name but a few, Abbe and Mossel established the exact recovery region for a special sparse SBM with two communities [7,8]. Later on, the result was extended to a general SBM with multiple communities [9].
Parameter inference in the SBM is often considered alongside the exact recovery problem. Previous inference methods require the joint estimation of node labels and model parameters [10], which have high complexity since the recovery and inference tasks are done simultaneously. In this article, we will decouple the inference and recovery problems, and propose an unbiased convergent estimator for SBM parameters when the number of communities is known. Once the estimator is obtained, the recovery condition can be checked to determine whether it is possible to recover the labels exactly. Additionally, the estimated parameter will guide the choice of parameters for our proposed stochastic algorithm.
In this article, the exact recovery of the SBM is analyzed by considering the Ising model, which is a probability distribution of node states [11]. We use the terms node states and node labels interchangeably throughout this paper; both refer to membership in the underlying community. The Ising model was originally proposed in statistical mechanics to model the ferromagnetism phenomenon but has wide applications in neuroscience, information theory, and social networks. Among different variants of Ising models, the phase transition property is shared. A phase transition can be generally formulated as some information quantity changing sharply in a small neighborhood of the parameters. Based on the random graph generated by an SBM with two underlying communities, the connection between the SBM and the Ising model was first studied by [12]. Our work extends the existing result to the multiple-community case, establishes the phase transition property, and gives an upper bound on the recovery error. The error bounds decay at a polynomially fast rate in different phases. Then we propose an alternative approach to estimate the labels by finding the Ising state with maximal probability. Compared with sampling from the Ising model directly, we show that the optimization approach has a sharper error upper bound. Solving the optimization problem is a generalization of maximum likelihood and also has a connection with maximum modularity. Additionally, searching for the state with maximal probability could also be done within all balanced partitions. We show that this constrained search is equivalent to the graph minimum cut problem, and the detection error upper bound for the constrained maximization will also be given.
The exact solution to maximize the probability function or exact sampling from the Ising model is NP-hard. Many polynomial time algorithms have been proposed for approximation purposes. Among these algorithms, simulated annealing performs well and produces a solution that is very close to the true maximal value [13]. On the other hand, in the original Ising model, metropolis sequential sampling is used to generate samples for the Ising model [14]. Simulated annealing can be regarded as metropolis sampling with decreasing temperature. In this article, we will use the metropolis sampling technique to sample from the Ising model defined on the SBM. This approximation enables us to verify the phase transition property of our Ising model numerically.
This paper is organized as follows. Firstly, in Section 3 we introduce the SBM and give an estimator for its parameters. Then, in Section 4, our specific Ising model is given and its phase transition property is obtained. Derived from the Ising model, in Section 5, the energy minimization method is introduced, and we establish its connection with the maximum likelihood and modularity maximization algorithms. Furthermore, in Section 6, we realize the Ising model using the Metropolis algorithm to generate samples. Numerical experiments and the conclusion are given at the end of the paper.

Related Works
The classical Ising model is defined on a lattice and confined to two states {±1}. This definition can be extended to a general graph and the multiple-state case [15]. In [16], Liu considered the Ising model defined on a graph generated by a sparse SBM; his focus was to compute the log partition function, averaged over all random graphs. In [17], an Ising model with a repelling interaction was considered on a fixed graph structure, and the phase transition condition was established, which involves both the attracting and repelling parameters. Our Ising model derives from the work of [12], but we extend their results by considering the error upper bound and the multiple-community case.
The exact recovery condition for the SBM can be derived as a special case of many generalized models, such as pairwise measurements [18], minimax rates [19], and side information [20]. The Ising model in this paper provides another way to extend the SBM model and derive the recovery condition. Additionally, the error upper bound for exact recovery of the two-community SBM by constrained maximum likelihood has been obtained in [7]. Compared with previous results, we establish a sharper upper bound for the multiple-community case in this paper.
The connection between maximum modularity and maximum likelihood was investigated in [21]. To approximately maximize the modularity, simulated annealing was exploited [22], which proceeds by using the partition approach, while the Metropolis sampling used in this paper estimates the node membership directly.

Stochastic Block Model and Parameter Estimation
In this paper, we consider a special symmetric stochastic block model (SSBM), which is defined as follows: Definition 1 (SSBM). Let W = {ω_0, . . . , ω_{k−1}} denote the set of k community labels and X = (X_1, . . . , X_n) ∈ W^n. X satisfies the constraint that |{v ∈ [n] : X_v = u}| = n/k for each u ∈ W. The random graph G is generated under SSBM(n, k, p, q) if the following two conditions are satisfied.

1.
There is an edge of G between the vertices i and j with probability p if X_i = X_j and with probability q if X_i ≠ X_j.

2.
The existences of all edges are mutually independent.
To explain the SSBM in more detail, we define the random variable Z_ij, which equals 1 if there is an edge between nodes i and j and 0 otherwise; that is, Z_ij is the indicator function of the existence of an edge between nodes i and j. Given the node labels X, Z_ij follows a Bernoulli distribution, whose expectation is E[Z_ij] = p if X_i = X_j and E[Z_ij] = q if X_i ≠ X_j. Then the random graph G with n nodes is completely specified by Z := {Z_ij, 1 ≤ i < j ≤ n}, in which all Z_ij are jointly independent. The probability distribution for the SSBM can be written as P_G(G) = ∏_{1≤i<j≤n} P(Z_ij = z_ij). We will use the notation G_n to represent the set containing all graphs with n nodes. By the normalization property, P_G(G_n) = ∑_{G∈G_n} P_G(G) = 1.
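The generative process just described is simple to implement directly. The following sketch (our own illustration; the function name and interface are not from the paper) samples labels and edges from SSBM(n, k, p, q):

```python
import random

def sample_ssbm(n, k, p, q, seed=0):
    """Sample a graph from SSBM(n, k, p, q).

    Nodes are split into k equal-size communities (n must be divisible
    by k). Each within-community pair is joined independently with
    probability p and each cross-community pair with probability q.
    Returns (labels, edges) with edges as a set of pairs (i, j), i < j.
    """
    assert n % k == 0
    rng = random.Random(seed)
    labels = [v * k // n for v in range(n)]  # n/k consecutive nodes per community
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if labels[i] == labels[j] else q
            if rng.random() < prob:
                edges.add((i, j))
    return labels, edges
```

For the sparse regime studied in this paper, one would take p = a log n / n and q = b log n / n.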
In Definition 1, we have supposed that the node label X is fixed instead of a uniformly distributed random variable. Since the maximum posterior estimator is equivalent to the maximum likelihood when the prior is uniform, these two definitions are equivalent. Although the random variable definition is more commonly used in previous literature [6], fixing X makes our formal analysis more concise.
Given the SBM, the exact recovery problem can be formally defined as follows: Definition 2 (Exact recovery in SBM). Given X, the random graph G is drawn under SSBM(n, k, p, q). We say that exact recovery is solvable for SSBM(n, k, p, q) if there exists an algorithm that takes G as input and outputs X̂ such that P_a(X̂) := P(X̂ ∈ S_k(X)) → 1 as n → ∞. In the above definition, P_a(X̂) is called the probability of accuracy for the estimator X̂. Let P_e(X̂) = 1 − P_a(X̂) represent the probability of error. Definition 2 can also be formulated as P_e(X̂) → 0 as n → ∞. The notation X̂ ∈ S_k(X) means that we can only expect recovery up to a global permutation of the ground truth label vector X. This is common in unsupervised learning, as no anchor exists to assign labels to different communities. Additionally, given a graph G, the algorithm can be either deterministic or stochastic. Generally speaking, the probability of X̂ ∈ S_k(X) should be understood as ∑_{G∈G_n} P_G(G) P_{X̂|G}(X̂ ∈ S_k(X)), which reduces to P_G(X̂ ∈ S_k(X)) for a deterministic algorithm.
For constants p, q that do not depend on the graph size n, we can always find algorithms to recover X such that the detection error decreases exponentially fast as n increases; that is to say, the task with a dense graph is relatively easy to handle. Within this paper, we consider a sparse case where p = a log n / n and q = b log n / n. This case corresponds to the sparsest graphs for which exact recovery of the SBM is possible. Under this condition, a well-known result [9] states that exact recovery is possible if and only if √a − √b > √k (Equation (3)). Before diving into the exact recovery problem, we first consider the inference problem for the SBM. Suppose k is known, and we want to estimate a, b from the graph G. We offer a simple method by counting the number of edges T_1 and the number of triangles T_2 of G; the estimators â, b̂ are obtained by solving the equation system, Equations (4) and (5), which matches T_1 and T_2 with their expected values. The theoretical guarantee for the solution is given by the following theorem: Theorem 1. When n is large enough, the equation system of Equations (4) and (5) has a unique solution (â, b̂), which are unbiased consistent estimators of (a, b). That is, E[â] = a, E[b̂] = b, and â and b̂ converge to a and b in probability, respectively.
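The two statistics that feed the estimator of Theorem 1 are easy to extract from a sparse graph. Below is an illustrative sketch (our own code) that counts T_1 and T_2; solving Equations (4) and (5) for (â, b̂) is then a two-dimensional root-finding step, which we omit since the explicit equations are not reproduced above.

```python
def count_edges_triangles(n, edges):
    """Count T1 = number of edges and T2 = number of triangles.

    `edges` is an iterable of pairs (i, j) with i < j. Triangles are
    counted by intersecting the neighbor sets of each edge's endpoints,
    which is fast on sparse graphs such as SSBM samples.
    """
    adj = [set() for _ in range(n)]
    t1 = 0
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
        t1 += 1
    t2 = 0
    for i, j in edges:
        # only count common neighbors u > j so each triangle is seen once
        t2 += sum(1 for u in adj[i] & adj[j] if u > j)
    return t1, t2
```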
Given a graph generated by the SBM, we can use Theorem 1 to obtain the estimates of a, b and determine via Equation (3) whether exact recovery of the labels X is possible. Additionally, Theorem 1 provides good estimates of a, b to initialize the parameters of recovery algorithms such as maximum likelihood or our proposed Metropolis sampling in Section 6.

Ising Model for Community Detection
In the previous section, we defined the SBM and its exact recovery problem. While the SBM is regarded as obtaining the graph observation G from the node labels X, the Ising model provides a way to generate estimators of X from G by a stochastic procedure. The definition of such an Ising model is given as follows: Definition 3 (Ising model with k states). Given a graph G sampled from SSBM(n, k, a log n / n, b log n / n), the Ising model with parameters γ, β > 0 is a probability distribution of the state vector σ ∈ W^n whose probability mass function is P_{σ|G}(σ = σ̄) = exp(−β H(σ̄)) / Z_G(γ, β), where H(σ̄) is the Hamiltonian energy given in Equation (7). The subscript in P_{σ|G} indicates that the distribution depends on G, and Z_G(γ, β) is the normalizing constant for this distribution.
In physics, β refers to the inverse temperature and Z_G(γ, β) is called the partition function. The Hamiltonian energy H(σ) consists of two terms: a repelling interaction between nodes without an edge connection and an attracting interaction between nodes with an edge connection. The parameter γ indicates the ratio of the strengths of these two interactions. The factor log n / n is added to balance the two interactions because there are only O(log n) connecting edges per node. The probability of each state is proportional to exp(−βH(σ)), and the state with the largest probability corresponds to that with the lowest energy.
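Since the display form of Equation (7) is not legible in this version, the sketch below uses a hypothetical reconstruction that matches the verbal description: equal labels on an edge lower the energy by 1 (attracting), and equal labels on a non-edge raise it by γ log n / n (repelling). The function name and exact normalization are our assumptions.

```python
import math

def hamiltonian(sigma, edges, gamma):
    """Energy of a state vector `sigma` (labels 0..k-1).

    Hypothetical reconstruction of H: edges between equal-label nodes
    contribute -1 each; non-edges between equal-label nodes contribute
    +gamma * log(n) / n each. `edges` is a set of pairs (i, j), i < j.
    """
    n = len(sigma)
    edge_set = set(edges)
    repel = gamma * math.log(n) / n
    h = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if sigma[i] == sigma[j]:
                h += -1.0 if (i, j) in edge_set else repel
    return h
```

Under this form, the lowest-energy states place equal labels on as many edges and as few non-edges as possible, matching the intuition discussed above.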
The classical definition of the Ising model is specified by the distribution P(σ) ∝ exp(β ∑_{{i,j}∈E(G)} σ_i σ_j) with σ ∈ {±1}^n. There are two main differences between Definition 3 and the classical one. Firstly, we add a repelling term between nodes without an edge connection. This gives such nodes a larger probability of taking different labels. Secondly, we allow the state at each node to take k values from W instead of the two values ±1. When γ = 0 and k = 2, Definition 3 reduces to the classical definition of the Ising model up to a scaling factor. Definition 3 gives a stochastic estimator X̂* for X: X̂* is one sample generated from the Ising model, which is denoted as X̂* ∼ Ising(γ, β). The exact recovery error probability for X̂* can be written as P_e(X̂*) := ∑_{G∈G_n} P_G(G) P_{σ|G}(σ ∈ S_k^c(X)). From this expression we can see that the error probability is determined by the two parameters (γ, β). When these parameters take proper values, P_e(X̂*) → 0, and exact recovery of the SBM is achievable. On the contrary, P_e(X̂*) → 1 when (γ, β) takes other values. These two cases are summarized in the following theorem: Theorem 2. Define the functions g(β) and g̃(β) as follows, and let β* be defined by Equation (10), which is the solution to the equation g(β) = 0 and satisfies β* < β̃. Then, depending on how (γ, β) take values, for any given ε > 0 and X̂* ∼ Ising(γ, β), when n is sufficiently large, we have:
By simple calculus, g(β) < 0 for β > β* and g(β) > 0 for β < β*; g̃(β) < 0 follows from Equation (3). An illustration of g(β) and g̃(β) is shown in Figure 1a. Therefore, for sufficiently small ε and as n → ∞, the upper bounds in Theorem 2 all converge to 0 at least at polynomial speed. Hence, Theorem 2 establishes the sharp phase transition property of the Ising model, which is illustrated in Figure 1b.
Figure 1. (a) Illustration of g(β) and g̃(β). (b) Phase transition region in the (β, γ) plane; exact recovery of the SSBM is solvable only in Region I.
Theorem 2 can also be understood from the marginal distribution of σ: P_σ(σ = σ̄) = ∑_{G∈G_n} P_G(G) P_{σ|G}(σ = σ̄). Let D(σ, σ') be the event that σ is closest to σ' among all its permutations. Then Theorem 2 can be stated with respect to the marginal distribution P_σ: when β < β*, P_σ(σ = X | D(σ, X)) = o(1).
Below we outline the proof ideas of Theorem 2. The insight is obtained from the analysis of the one-flip energy difference. This useful result is summarized in the following lemma: Lemma 1. Suppose σ̄' differs from σ̄ only at position r, with σ̄'_r = ω_s · σ̄_r. Then the change of energy is given by Equation (12). Lemma 1 gives an explicit way to compare the probabilities of two neighboring states through the equality P_{σ|G}(σ = σ̄') / P_{σ|G}(σ = σ̄) = exp(−β(H(σ̄') − H(σ̄))). Additionally, since the graph is sparse and every node has O(log n) neighbors, from Equation (12) the computational cost (time complexity) of the energy difference is also O(log n).
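The O(log n) evaluation promised by Lemma 1 can be sketched as follows. Assuming the Hamiltonian form described in this section (equal-label edges contribute −1, equal-label non-edges contribute γ log n / n; this reconstruction is ours, since Equation (12) is not reproduced here), the change of energy from relabeling one node depends only on that node's neighbors and on two label counts:

```python
import math

def delta_h(sigma, adj, r, new_label, gamma):
    """Energy change from setting sigma[r] = new_label.

    Assumes H = -#{equal-label edges} + gamma*log(n)/n * #{equal-label
    non-edges} (a hypothetical reconstruction). Apart from the two
    global label counts, which can be cached incrementally, only the
    neighbors of r are inspected, so the cost is O(deg(r)).
    """
    n = len(sigma)
    old_label = sigma[r]
    if new_label == old_label:
        return 0.0
    m_old = sum(1 for j in adj[r] if sigma[j] == old_label)
    m_new = sum(1 for j in adj[r] if sigma[j] == new_label)
    # label counts excluding r (recomputed here for clarity)
    n_old = sum(1 for v in range(n) if v != r and sigma[v] == old_label)
    n_new = sum(1 for v in range(n) if v != r and sigma[v] == new_label)
    edge_term = m_old - m_new
    nonedge_term = gamma * math.log(n) / n * ((n_new - m_new) - (n_old - m_old))
    return edge_term + nonedge_term
```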

Community Detection via Energy Minimization
Since β* does not depend on n, when γ > b we can choose a sufficiently large β such that β > β*; then, by Theorem 2, σ ∈ S_k(X) almost surely, which implies that P_{σ|G}(σ = X) is the largest probability for almost all graphs G sampled from the SBM. Therefore, instead of sampling from the Ising model, we can directly maximize the conditional probability to find the state with the largest probability. Equivalently, we can proceed by minimizing the energy term in Equation (7): X̂ := arg min_{σ∈W^n} H(σ). In (14), we allow σ to take values from W^n. Since we know X has equal community sizes |{v ∈ [n] : X_v = u}| = n/k for each label u, another formulation is to restrict the search to the space W* of balanced partitions, for which the minimal value is the minimum cut between different detected communities; we denote this constrained estimator by X̂'. When X̂' ≠ X, we must have dist(X̂', X) ≥ 2 to satisfy the constraint X̂' ∈ W*. Additionally, the estimator X̂' is parameter-free, whereas X̂ depends on γ. The extra parameter γ in the expression of X̂ can be regarded as a kind of Lagrange multiplier for this integer programming problem. Thus, the optimization problem for X̂ is a relaxation of that for X̂': it introduces a penalty term and enlarges the search space from W* to W^n.
When β > β̃, g̃(β) becomes a constant value. Therefore, we can get n^{g(β)/2} as the tightest error upper bound for the Ising estimator X̂* from Theorem 2. For the estimators X̂ and X̂', we can obtain sharper error upper bounds, which are summarized in Theorem 3. As g(β) < 0 and n^{2g(β)} < n^{g(β)} < n^{g(β)/2}, Theorem 3 implies that P_e(X̂') has the sharpest upper bound among the three estimators. This can be intuitively understood as the result of the smaller search space. The proof technique of Theorem 3 is to consider the probability of the events H(X) > H(σ) for dist(σ, X) ≥ 1; by the union bound, these error probabilities can be summed up. We note that a looser bound n^{g(β)/4} was obtained in [7] for the constrained estimator in the two-community case. Theorem 3 implies that exact recovery is possible using X̂ as long as √a − √b > √k is satisfied. The estimator X̂ has one parameter, γ. When γ takes different values, X̂ is asymptotically equivalent to maximum likelihood or maximum modularity. The following analysis shows their relationship intuitively.
The maximum likelihood estimator is obtained by maximizing the log-likelihood function. From (2), this function can be written as a linear function of the energy H(σ) plus a constant C that does not depend on σ. When n is sufficiently large, we have γ → γ_ML := (a − b)/log(a/b). That is, the maximum likelihood estimator is asymptotically equivalent to X̂ with γ = γ_ML.
The maximum modularity estimator is obtained by maximizing the modularity of a graph [23], which is defined by Q = (1/(2|E|)) ∑_{i,j} (A_ij − d_i d_j/(2|E|)) δ(C_i, C_j). For the i-th node, d_i is its degree and C_i is the community it belongs to; A is the adjacency matrix. Up to a scaling factor, the modularity Q can be rewritten using the label vector σ, as in Equation (17). From (17), we can see that Q(σ) → −H(σ) with γ = γ_MQ = (a + b)/2 as n → ∞. Indeed, we have d_i ∼ (a + b) log n / 2 and |E| ∼ n d_i / 2. That is, the maximum modularity estimator is asymptotically equivalent to X̂ with γ = γ_MQ.
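Newman's definition of modularity, restated above, can be checked directly against a small implementation (our own sketch; `edges` is a set of pairs (i, j) with i < j):

```python
def modularity(n, edges, labels):
    """Newman modularity Q = (1/2m) * sum_{i,j} (A_ij - d_i*d_j/(2m)) * delta(C_i, C_j),
    where the sum runs over ordered pairs (i, j) and 2m is the sum of degrees.
    """
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    two_m = sum(deg)
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                a_ij = 1.0 if i != j and (min(i, j), max(i, j)) in edges else 0.0
                q += a_ij - deg[i] * deg[j] / two_m
    return q / two_m
```

For example, two disjoint edges split into their own communities give Q = 1/2, the well-known maximum for that toy graph.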
Using a > b and the inequality x − 1 > log x > 2(x − 1)/(x + 1) for x > 1, we can verify that γ_MQ > γ_ML > b. That is, both the maximum likelihood and the maximum modularity estimators satisfy the exact recovery condition γ > b in Theorem 3.
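The ordering γ_MQ > γ_ML > b can also be spot-checked numerically (a sketch with our own function names):

```python
import math

def gamma_ml(a, b):
    # asymptotic strength ratio of maximum likelihood: (a - b) / log(a / b)
    return (a - b) / math.log(a / b)

def gamma_mq(a, b):
    # asymptotic strength ratio of maximum modularity: (a + b) / 2
    return (a + b) / 2.0

# verify gamma_MQ > gamma_ML > b on a grid of values with a > b > 0
for a in (2.0, 5.0, 10.0):
    for b in (0.5, 1.0, 1.9):
        assert gamma_mq(a, b) > gamma_ml(a, b) > b
```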

Community Detection Based on Metropolis Sampling
From Theorem 2, if we could sample from the Ising model, then with large probability the sample would be aligned with X. However, exact sampling is difficult when n is very large, since the cardinality of the state space grows at the rate k^n. Therefore, some approximation is necessary, and the most common way to generate an Ising sample is Metropolis sampling [14]. Starting from a random state, the Metropolis algorithm updates the state by randomly selecting one position and proposing to flip its state at each iteration step. Then, after some initial burn-in time, the generated samples can be regarded as samples from the Ising model.
The theoretical guarantee of Metropolis sampling is based on Markov chain theory. Under some general conditions, the Metropolis samples converge to the stationary distribution of the Markov chain, which is exactly the probability distribution to be approximated. For the Ising model, many previous works have shown the convergence of Metropolis sampling [24].
For our specific Ising model and the energy term in Equation (7), the pseudocode of our algorithm is summarized in Algorithm 1. This algorithm requires that the number of communities k is known and that the strength ratio parameter γ is given. We should choose γ > b, where b is estimated by b̂ from Theorem 1. The number of iterations N should also be specified in advance.
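A compact implementation of such a sampler might look as follows. We stress that the Hamiltonian inside is a hypothetical reconstruction from the verbal description of Equation (7) (equal-label edges contribute −1, equal-label non-edges contribute γ log n / n), and the function name is our own:

```python
import math
import random

def metropolis_sbm(n, k, adj, gamma, beta, n_iter, seed=0):
    """Metropolis sampler for the Ising estimator (sketch).

    `adj` is a list of neighbor sets. Each step proposes a new label
    for one random node and accepts it with probability
    min(1, exp(-beta * delta_H)), where delta_H is computed from the
    node's neighbors and cached per-label counts in O(deg) time.
    """
    rng = random.Random(seed)
    sigma = [rng.randrange(k) for _ in range(n)]
    counts = [0] * k  # number of nodes per label, maintained incrementally
    for s in sigma:
        counts[s] += 1
    repel = gamma * math.log(n) / n
    for _ in range(n_iter):
        r = rng.randrange(n)
        new = rng.randrange(k)
        old = sigma[r]
        if new == old:
            continue
        m_old = sum(1 for j in adj[r] if sigma[j] == old)
        m_new = sum(1 for j in adj[r] if sigma[j] == new)
        # edge (attracting) term plus non-edge (repelling) term
        dh = (m_old - m_new) + repel * (
            (counts[new] - m_new) - (counts[old] - 1 - m_old))
        if dh < 0 or rng.random() < math.exp(-beta * dh):
            sigma[r] = new
            counts[old] -= 1
            counts[new] += 1
    return sigma
```

In practice, one would first estimate b̂ via Theorem 1 and choose γ > b̂, as discussed above.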

Algorithm 1 Metropolis sampling algorithm for SBM.
Inputs: the graph G, inverse temperature β, the strength ratio parameter γ
Output: X̂ = σ̄
1: randomly initialize σ̄ ∈ W^n
2: for i = 1, 2, . . . , N do
3:   propose a new state σ̄' according to Lemma 1, where s, r are randomly chosen
4:   compute ∆H(r, s) = H(σ̄') − H(σ̄) using (12)
5:   if ∆H(r, s) < 0 then
6:     σ̄_r ← ω_s · σ̄_r
7:   else
8:     with probability exp(−β∆H(r, s)), set σ̄_r ← ω_s · σ̄_r

In the remaining part of this section, we present experiments conducted to verify our theoretical results. Firstly, we considered several combinations of (a, b, k) and obtained the estimators (â, b̂) from Theorem 1. Using the empirical mean squared error (MSE) as the criterion and choosing m = 1000 repetitions, the result is shown in Figure 2a. As we can see, the MSE decreases polynomially fast as n increases. Therefore, the convergence of â → a and b̂ → b is verified.
Secondly, using Metropolis sampling, we conducted a moderate-scale simulation to verify Theorem 2 for the case γ > b. We chose n = 9000 and k = 2, and the empirical error rate P̄_e was computed by averaging over repeated trials. In this formula, m_1 is the number of times a random graph was generated by the SBM, whereas m_2 is the number of consecutive samples generated by Algorithm 1 for a given graph. We chose m_1 = 2100 and m_2 = 6000, which is fairly large and achieves a good approximation of P_e(X̂*) by the law of large numbers. The result is shown in Figure 2b. The vertical red line (β = β* = 0.198), computed from (10), represents the phase transition threshold. The point (0.199, 1/2) in the figure can be regarded as the empirical phase transition threshold, whose first coordinate is close to β*. The green line (β, n^{g(β)/2}) is the theoretical lower bound on accuracy for β > β*, and the purple line (β, n^{−g(β)}) is the theoretical upper bound on accuracy for β < β*. It can be expected that as n becomes larger, the empirical accuracy curve (the blue line in the figure) will approach the step function, which jumps from 0 to 1 at β = β*.

Conclusions
In this paper, we presented a convergent estimator (in Theorem 1) to infer the parameters of the SBM and analyzed three label estimators to detect communities of the SBM. We gave exact recovery error upper bounds for all label estimators (in Theorems 2 and 3) and studied their relationships. By introducing the Ising model, our work opens a new path for studying the exact recovery problem for the SBM. More theoretical and empirical work remains for the future, such as convergence analysis of the modularity (in Equation (17)) and of the necessary number of iterations (in Algorithm 1) for Metropolis sampling.

Proof of Theorem 1
Lemma 2. Consider an Erdős–Rényi random graph G with n nodes, in which edges are placed independently with probability p [26]. Suppose p = a log n / n; the number of edges is denoted by |E| while the number of triangles is denoted by T. Since |E| is a sum of independent Bernoulli random variables, E[|E|/(n log n)] = a(n − 1)/(2n) and Var(|E|/(n log n)) < a(n − 1)/(2n² log n). For a given ε > 0, when n is sufficiently large, Chebyshev's inequality gives P(| |E|/(n log n) − a/2 | > ε) ≤ a(n − 1)/(2ε² n² log n) → 0. Therefore, by the definition of convergence in probability, we have |E|/(n log n) → a/2 as n → ∞.
Let X_ijk represent the Bernoulli random variable, with parameter p³, indicating that nodes i, j, k form a triangle. Then T = ∑_{i<j<k} X_ijk, and it is easy to compute that E[T] = C(n, 3) p³. Since the X_ijk are not independent, the variance of T needs careful calculation; from [27] we know its formula. Therefore, by Chebyshev's inequality, T/log³ n converges in probability. The convergence of |E| in the Erdős–Rényi graph can be extended directly to the SBM, since the existence of each edge is independent. However, for T it is a little tricky, since the triangle indicators are mutually dependent. The following two lemmas give formulas for the variance of the number of inter-community triangles in the SBM. Lemma 3. Consider a two-community SBM(2n, p, q) and count the number of triangles T that have one node in S_1 and an edge in S_2. Then the variance of T is given by Equation (18). Lemma 4. Consider a three-community SBM(3n, p, q) and count the number of triangles T that have one node in S_1, one node in S_2, and one node in S_3. Then the variance of T can be computed in closed form. The proofs of the above two lemmas use counting techniques similar to those in [27], and we omit them here. Proof. We split T into three parts: the first is the number of triangles within community i, denoted T_i; there are k terms of T_i. The second is the number of triangles that have one node in community i and one edge in community j, denoted T_ij; there are k(k − 1) terms of T_ij. The third is the number of triangles that have one node in community i, one node in community j, and one node in community k.
We only need to show the convergence of each part. The convergence of T_i/log³ n comes from Lemma 2. For T_ij we use the conclusion of Lemma 3, replacing n with n/k, p with a log n / n, and q with b log n / n in Equation (18).
Therefore, we can get a unique solution y within (0, 2e^{−1}). Since (a, b) is a solution of the equation system, the conclusion follows.
By taking expectations on both sides of Equations (4) and (5), we can show E[â] = a and E[b̂] = b. By the continuity of g(y), b̂ → b and â → a follow similarly.

Proof of Theorem 2
Proof of Lemma 1. First we rewrite the energy term in (7) in a node-centered form; calculating the energy difference term by term then yields the lemma. Before diving into the technical proof of Theorem 2, we need to introduce some extra notation. When σ̄ differs from X only at position r, taking σ̄ = X in Lemma 1, we obtain an expression in terms of A_r^s, defined as A_r^s = |{j ∈ [n]\{r} : {j, r} ∈ E(G), X_j = ω_s · X_r}|. Since the existence of each edge in G is independent, A_r^s ∼ Binom(n/k, b log n / n) for s ≠ 0 and A_r^0 ∼ Binom(n/k − 1, a log n / n). For the general case, we can write H(σ̄) − H(X) in terms of A_σ̄, B_σ̄, and N_σ̄, in which we use A_σ̄ or B_σ̄ to represent binomial random variables with edge probability a log n / n or b log n / n, respectively, and N_σ̄ is a deterministic positive number depending on σ̄ but not on the graph structure. The following lemma gives the expressions of A_σ̄, B_σ̄, and N_σ̄: Lemma 6. For SSBM(n, k, p, q), we assume σ̄ differs from the ground truth label vector X in |I| := dist(σ̄, X) coordinates. Let I_ij = |{r ∈ [n] : X_r = ω_i, σ̄_r = ω_j}| for i ≠ j and I_ii = 0. We further denote the row sums by I_i = ∑_{j=0}^{k−1} I_ij and the column sums by I'_i = ∑_{j=0}^{k−1} I_ji. The proof of Lemma 6 is mainly composed of careful counting techniques, and we omit it here. When |I| is small compared to n, we have the following lemma, which is an extension of Proposition 6 in [12].
Corresponding to the three cases of Theorem 2, we use three non-trivial lemmas to establish the properties of the Ising model.
Proof. We denote the event P_{σ|G}(σ = σ̄) > exp(−Cn) P_{σ|G}(σ = X) by D(σ̄, C). By Equation (24), D(σ̄, C) can be rewritten as an inequality between the binomial terms. We claim that σ̄ must satisfy at least one of the following two conditions. If neither of the two conditions holds, then from condition 1 we have a bound on I_ij. Let X' be the vector that exchanges the values ω_i and ω_j in X; comparing dist(σ̄, X') with dist(σ̄, X) contradicts the fact that σ̄ is nearest to X. Therefore, we should have I_ji < n/(k(k − 1)√log n). Now the (i, j) pair satisfies condition 2, which contradicts the fact that σ̄ satisfies neither of the two conditions. Under condition 1, we can get a lower bound on |A_σ̄| from Equation (27). Let I'_ij = I_ij for i ≠ j and I'_ii = n/k − I_i; then we can simplify |A_σ̄|. Under condition 2, we can get a lower bound on N_σ̄: since dist(σ̄, X') − dist(σ̄, X) ≥ 0, from (30) we obtain a bound on I_ij + I_ji, and from (25) the bound on N_σ̄ follows. Now we use the Chernoff inequality to bound Equation (29). Using |B_σ̄| = N_σ̄ + |A_σ̄|, we can further simplify the exponential term. Now we investigate the functions g_1(s) = b(e^s − 1) + a(e^{−s} − 1) and g_2(s). Both functions take the value zero at s = 0, and g_1'(s) = be^s − ae^{−s}, g_2'(s) = be^s − γ. Therefore, g_1'(0) = b − a < 0 and g_2'(0) = b − γ < 0, and we can choose s* > 0 such that g_1(s*) < 0 and g_2(s*) < 0. To compensate for the influence of the term sCn/β, we only need to make sure that the order of (log n / n) min{|A_σ̄|, N_σ̄} is larger than n. This requirement is satisfied by the lower bounds on |A_σ̄| and N_σ̄ obtained above.
Using the Chernoff inequality we have: Since s > 0 and a > b, we further have: Proof of Theorem 2.

Proof of Theorem 3
Lemma 13 (Lemma 8 of [7]). Let m be a positive number larger than n. When Z_1, . . . , Z_m are i.i.d. Bernoulli(b log n / n) and W_1, . . . , W_m are i.i.d. Bernoulli(a log n / n), independent of Z_1, . . . , Z_m, then: Proof of Theorem 3. Let P_F^(r) denote the probability that there exists σ satisfying dist(σ, X) = r and H(σ) < H(X).
Author Contributions: The work of this paper was conceptualized by M.Y., who also developed the mathematical analysis methodology for the proof of the main theorem of this paper. All simulation code was written by F.Z., who also extended the ideas of M.Y., completed the proofs, and wrote this paper. The work was supervised and funded by S.