Evaluating the Gilbert–Varshamov Bound for Constrained Systems

We revisit the well-known Gilbert–Varshamov (GV) bound for constrained systems. In 1991, Kolesnik and Krachkovsky showed that the GV bound can be determined via the solution of an optimization problem. Later, in 1992, Marcus and Roth modified the optimization problem and improved the GV bound in many instances. In this work, we provide explicit numerical procedures to solve these two optimization problems and, hence, compute the bounds. We then show that the procedures can be further simplified when we plot the respective curves. In the case where the graph presentation comprises a single state, we provide explicit formulas for both bounds.


I. INTRODUCTION
From early applications in magnetic recording systems to recent applications in DNA-based data storage [2]-[7] and energy harvesting [8]-[13], constrained codes play a central role in enhancing reliability in many data storage and communications systems (see also [14] for a survey). Specifically, for most data storage systems, certain substrings are more prone to errors than others. Thus, by forbidding the appearance of such strings, that is, by imposing constraints on the codewords, the user is able to reduce the likelihood of error. We refer to the collection of words that satisfy the constraints as the constrained space S. Now, to further reduce the error probability, one can impose certain distance constraints on the codebook. In this work, we focus on the Hamming metric and consider the maximum size of a codebook whose words belong to the constrained space S and whose pairwise distances are at least a certain value d. Specifically, we study one of the most well-known and fundamental lower bounds on this quantity: the Gilbert-Varshamov (GV) bound.
To determine the GV bound, one requires two quantities: the size of the constrained space, |S|, and the ball volume, that is, the number of words at distance at most d − 1 from a "center" word. In the case where the space is unconstrained, i.e. S = {0, 1}^n, the ball volume does not depend on the center. Then the GV bound is simply |S|/V, where V is the ball volume for some center. However, for most constrained systems, the ball volume varies with the center. Nevertheless, Kolesnik and Krachkovsky showed that the GV lower bound can be generalized to |S|/4V, where V is the average ball volume [15]. This was further improved by Gu and Fuja to |S|/V in [16] (see [14, pp. 242-243] for additional details). In the same paper [15], they showed that the asymptotic rate of the average ball volume can be computed via an optimization problem. Later, Marcus and Roth modified the optimization problem by including an additional constraint and variable [17], and the resulting bound, which we refer to as the GV-MR bound, improves the usual GV bound. Furthermore, in most cases, the improvement is strictly positive.

The paper was presented in part at the 2022 IEEE International Symposium on Information Theory (ISIT) [1].

arXiv:2402.18869v1 [cs.IT] 29 Feb 2024
However, in the three decades since, very few works have evaluated these bounds for specific constrained systems. To the best of our knowledge, in all works that numerically computed the GV bound and/or the GV-MR bound, the constrained systems of interest have at most eight states [18]. In [18], the authors wrote that "evaluation of the bound required considerable computation", referring to the GV-MR bound.
In this paper, we revisit the optimization problems defined by Kolesnik and Krachkovsky [15] and Marcus and Roth [17] and develop a suite of explicit numerical procedures that solve these problems. In particular, to demonstrate the feasibility of our methods, we evaluate and plot the GV and GV-MR bounds for a constrained system involving 120 states in Fig. 1(b).
We provide a high-level description of our approach. For both optimization problems, we first characterize the optimal solutions as roots of certain equations. Then, using the celebrated Newton-Raphson iterative procedure, we proceed to find the roots of these equations. However, as the latter equations involve the largest eigenvalues of certain matrices, each Newton-Raphson iteration requires the (partial) derivatives of these eigenvalues (in some variables). To resolve this, we modify another celebrated iterative procedure, the power iteration method, and the resulting procedures compute the GV and GV-MR bounds efficiently for a specific relative distance δ. Interestingly, if we are plotting the bounds for 0 ≤ δ ≤ 1, the numerical procedure can be further simplified. Specifically, by exploiting certain properties of the optimal solutions, we provide procedures that use fewer Newton-Raphson iterations.
In the next section, we provide the formal definitions and state the optimization problems that compute the GV bound.

II. PRELIMINARIES
Let Σ = {0, 1} be the binary alphabet and let Σ^n denote the set of all words of length n over Σ. A labelled graph G = (V, E, L) is a finite directed graph with states V, edges E ⊆ V × V, and an edge labelling L : E → Σ^s for some s ≥ 1. Here, we write v_i −σ→ v_j to mean that there is an edge from v_i to v_j with label σ. The labelled graph G is deterministic if, for each state, the outgoing edges have distinct labels.
A constrained system S is then the set of all words obtained by reading the labels of paths in a labelled graph G. We say that G is a graph presentation of S. We further denote the set of all length-n words of S by S_n. Alternatively, S_n is the set of all words obtained by reading the labels of length-(n/s) paths in G. Then the capacity of S, denoted by Cap(S), is given by Cap(S) ≜ lim sup_{n→∞} log |S_n|/n. It is well known that Cap(S) corresponds to the logarithm of the largest eigenvalue of the adjacency matrix A_G (see, for example, [14]). Here, A_G is a (|V| × |V|)-matrix whose rows and columns are indexed by V. For each pair (u, v) ∈ V × V, we set the corresponding entry to one if (u, v) is an edge, and to zero otherwise.
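Since Cap(S) is the base-2 logarithm of the dominant eigenvalue of A_G, it can be estimated with a few lines of power iteration. The sketch below is ours (not the paper's Power Iteration I), and the example matrix, presenting the illustrative "no two consecutive ones" constraint, is an assumed input:

```python
import numpy as np

# Minimal sketch: estimate Cap(S) = log2(dominant eigenvalue of A_G)
# by plain power iteration. The example matrix presents the constraint
# "no two consecutive ones"; its capacity is log2 of the golden ratio.
def capacity(adj, iters=500):
    q = np.ones(adj.shape[0])
    for _ in range(iters):
        q = adj @ q
        q /= np.linalg.norm(q)
    # After convergence, ||A_G q|| approximates the dominant eigenvalue.
    return np.log2(np.linalg.norm(adj @ q))

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])  # states: "last bit 0", "last bit 1"
print(round(capacity(A), 4))  # ≈ 0.6942
```

For a primitive (irreducible and aperiodic) nonnegative matrix, which is what the paper assumes throughout, this iteration is guaranteed to converge.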
Every constrained system can be presented by a deterministic graph G. Furthermore, any deterministic graph can be transformed into a primitive deterministic graph H such that the capacity of G is the same as the capacity of the constrained system presented by some irreducible component of H (see, for example, Marcus et al. [14]). Therefore, we henceforth assume that our graphs are deterministic and primitive. When |V| = 1, we call this a single-state graph presentation; we study these graphs in Section V. In terms of asymptotic rates, we fix 0 ≤ δ ≤ 1 and our task is to find the highest attainable rate, denoted by R(δ; S), which is given by R(δ; S) ≜ lim sup_{n→∞} log A(n, ⌊δn⌋; S)/n.

A. Review of Gilbert-Varshamov Bound
To define the GV bound, we need to determine the total ball size. Specifically, for x ∈ S_n and 0 ≤ r ≤ n, we define V(x, r; S) ≜ |{y ∈ S_n : d_H(x, y) ≤ r}|. We further define T(n, d; S) ≜ ∑_{x∈S_n} V(x, d − 1; S). Then the GV bound as given by Gu and Fuja [16], [19] states that there exists an (n, d; S)-code of size at least |S_n|²/T(n, d; S). In terms of asymptotic rates, there exists a family of (n, ⌊δn⌋; S)-codes such that their rates approach
R_GV(δ) ≜ 2Cap(S) − ∼T(δ), (1)
where ∼T(δ) ≜ lim sup_{n→∞} log T(n, ⌊δn⌋; S)/n. In this paper, our main task is to determine R_GV(δ) efficiently. Observe that since Cap(S) = ∼T(0), it suffices to find efficient ways of determining ∼T(δ). It turns out that ∼T(δ) can be found via the solution of some convex optimization problem. Specifically, given a labelled graph G = (V, E, L), we define its product graph with state set V′ ≜ V × V and edge set E′, where ((u, u′), (v, v′)) ∈ E′ whenever (u, v) ∈ E and (u′, v′) ∈ E. Then we label the edges in E′ with the function D : E′ → {0, 1, . . . , s} that assigns to each edge the Hamming distance between the labels of its two component edges. Then ∼T(δ) can be obtained by solving the following optimization problem [15], [17].
To this end, we consider the dual problem of (2). Specifically, we define a (|V|² × |V|²)-distance matrix T_{G×G}(y) whose rows and columns are indexed by V′. For each entry indexed by e ∈ V′ × V′, we set the entry to zero if e ∉ E′, and to y^{D(e)} if e ∈ E′. Then the dual problem can be stated in terms of the dominant eigenvalue of the matrix T_{G×G}(y). By applying the reduction techniques from [17], we can reduce the problem size by a factor of two. Formally, in the case of s = 1, we define a ((|V|+1 choose 2) × (|V|+1 choose 2)) reduced distance matrix B_{G×G}(y) whose rows and columns are indexed by V^(2) ≜ {(v_i, v_j) : 1 ≤ i ≤ j ≤ |V|} using the following procedure.
Two states of the product graph of the form s_1 = (v_i, v_j) and s_2 = (v_j, v_i) are called equivalent. The matrix B_{G×G}(y) is then obtained by merging all pairs of equivalent states s_1 and s_2. That is, we add the column indexed by s_2 to the column indexed by s_1 and then remove the row and column indexed by s_2. Note that it may be possible to reduce the size of the matrix B_{G×G}(y) further. However, for ease of exposition, we do not consider this case in this work.
TABLE I: The ((v_i, v_j), (v_k, v_ℓ))-entry of the matrix B_{G×G}(y) according to the subgraph induced by the states v_i, v_j, v_k, and v_ℓ. Here, σ̄ denotes the complement of σ.
Following this procedure, we observe that the entries of the matrix B_{G×G}(y) can be described by the rules in TABLE I. Moreover, the dominant eigenvalue of B_{G×G}(y) is the same as that of T_{G×G}(y). Then by strong duality, computing (2) is equivalent to solving the following dual problem [20], [21] (see also [22]):
∼T(δ) = min_{0<y≤1} { −δ log y + log Λ(B_{G×G}(y)) }. (3)
Here, we use Λ(M) to denote the dominant eigenvalue of a matrix M. To simplify notation further, we write Λ(y; B) ≜ Λ(B_{G×G}(y)).
Since the objective function in (3) is convex, it follows from standard calculus that any local minimum solution y* in the interval [0, 1] is also a global minimum solution. Furthermore, y* is a zero of the first derivative of the objective function. If we consider the numerator of this derivative, then y* is a root of the function
F(y) ≜ yΛ′(y; B) − δΛ(y; B). (4)
In Corollary 6, we show that there is only one y* such that F(y*) = 0 and that F′(y) is strictly positive for all values of y. Therefore, to evaluate the GV bound for a fixed δ, it suffices to determine y*.
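To illustrate the root-finding step, here is a minimal Newton-Raphson sketch for F(y) = yΛ′(y; B) − δΛ(y; B) (our reading of the function above), with the derivatives of Λ estimated by finite differences rather than the paper's modified power iteration. The spectral function Λ(y) = 1 + y is a made-up toy, chosen because the root then has the closed form y* = δ/(1 − δ):

```python
# Sketch (our assumption): Newton-Raphson on F(y) = y*Lam'(y) - delta*Lam(y),
# with Lam' and Lam'' estimated by central finite differences. For the toy
# spectral function Lam(y) = 1 + y, F(y) = y - delta*(1 + y) vanishes at
# y* = delta / (1 - delta).
def newton_root(Lam, delta, y0=0.5, h=1e-6, tol=1e-10, max_iter=100):
    y = y0
    for _ in range(max_iter):
        lam = Lam(y)
        d1 = (Lam(y + h) - Lam(y - h)) / (2 * h)          # Lam'(y)
        d2 = (Lam(y + h) - 2 * lam + Lam(y - h)) / h**2   # Lam''(y)
        F = y * d1 - delta * lam
        Fp = y * d2 + (1 - delta) * d1  # F'(y) = y*Lam'' + (1-delta)*Lam'
        y_next = y - F / Fp
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

print(round(newton_root(lambda y: 1 + y, delta=0.2), 6))  # → 0.25
```

In the paper's setting Λ(y; B) is of course a matrix eigenvalue, and the finite differences are replaced by the modified power iteration of Appendix A.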
Later, Marcus and Roth [17] improved the GV bound (1) by considering certain subsets of the constrained space S. This entails the inclusion of an additional constraint in the optimization problem (2) and, correspondingly, an additional variable in the dual problem (3). Specifically, they considered certain subsets S(p) ⊆ S where each symbol in the words of S(p) appears with a certain frequency dependent on the parameter p. We describe this in more detail in Section IV.

B. Our Contributions
(A) In Section III, we develop numerical procedures to compute ∼T(δ) for a fixed δ and hence determine the GV bound (1). Our procedure modifies the well-known power iteration method to compute the derivatives of Λ(y; B). After that, using these derivatives, we apply the classical Newton-Raphson method to determine the root of (4). In the same section, we also study procedures to plot the GV curve, that is, the set {(δ, R_GV(δ)) : 0 ≤ δ ≤ 1}. Here, we demonstrate that the GV curve can be plotted without any Newton-Raphson iteration.
(B) In Section IV, we then develop similar power iteration methods and numerical procedures to compute the GV-MR bound. Similar to the GV curve, we also provide a plotting procedure that uses significantly fewer Newton-Raphson iterations.

(C) In Section V, we provide explicit formulas for the computation of the GV bound and GV-MR bound for graph presentations that have exactly one state but multiple parallel edges.

(D) In Section VI, we validate our methods by computing the GV and GV-MR bounds for some specific constrained systems. For comparison purposes, we also plot a simple lower bound that is obtained by using an upper estimate of the ball size. From the plots in Figures 1, 2 and 3, it is clear that the GV and GV-MR bounds are significantly better. We also observe that the GV bound and GV-MR bound for subblock energy constrained codes (SECC) obtained through our procedures improve the GV-type bound given by Tandon et al. [24, Proposition 12].

III. EVALUATING THE GILBERT-VARSHAMOV BOUND
In this section, we first describe a numerical procedure that solves (3) and hence determines R_GV(δ) for fixed values of δ. Then we show that the procedure can be simplified when we compute the GV curve, that is, the set of points {(δ, R_GV(δ)) : δ ∈ [0, 1]}. Here, we abuse notation and use [a, b] to denote the interval {x : a ≤ x ≤ b} if a < b, and the interval {x : b ≤ x ≤ a} otherwise.
Below, we provide a formal description of our procedure to obtain the GV bound for a fixed relative distance δ.
Procedure 1 (GV bound for fixed relative distance).
INPUT: Adjacency matrix A_G, reduced distance matrix B_{G×G}(y), and relative minimum distance δ
OUTPUT: GV bound, that is, R_GV(δ) as defined in (1)
(1) Apply the Newton-Raphson method to obtain y* such that F(y*) is approximately zero. Starting from an initial guess y_0, repeat the following until |y_{t+1} − y_t| falls below a prescribed tolerance.
• Compute the next guess y_{t+1} = y_t − F(y_t)/F′(y_t), where F′(y) = yΛ″(y; B) + (1 − δ)Λ′(y; B).
• In this step, we apply the power iteration method to compute Λ(y_t; B), Λ′(y_t; B), and Λ″(y_t; B).
• Increment t by one.
• Set y* ← y_t.
(2) Determine R_GV(δ) using y*. Specifically, we compute ∼T(δ) = −δ log y* + log Λ(y*; B) and R_GV(δ) = 2Cap(S) − ∼T(δ).

Throughout Sections III and IV, we illustrate our numerical procedures via a running example using the class of sliding window constrained codes (SWCC). Formally, we fix a window length L and window weight w, and say that a binary word satisfies the (L, w)-sliding window weight-constraint if the number of ones in every consecutive L bits is at least w. We refer to the collection of words that meet this constraint as an (L, w)-SWCC constrained system. The class of SWCCs was introduced by Tandon et al. for the application of simultaneous energy and information transfer [10], [13]. Later, Immink and Cai [11], [12] studied encoders for this constrained system and provided a simple graph presentation that uses only (L choose w) states. In the next example, we illustrate how the numerical procedure can be used to compute the GV bound when δ = 0.1. The corresponding adjacency and reduced distance matrices are as follows. To determine the GV bound at δ = 0.1, we first approximate the optimal point y* at which −δ log y + log Λ(y; B) is minimized.
We apply the Newton-Raphson method to find a zero of the function F(y). With the initial guess y_0 = 0.3, we apply the power iteration method to determine Λ(y_0; B), Λ′(y_0; B), and Λ″(y_0; B). Then we compute that y_1 ≈ 0.238. Repeating the computations, we have that y_2 ≈ 0.238. Since |y_2 − y_1| is less than the tolerance value 10^{−5}, we set y* = 0.238. Hence, we have that ∼T(0.1) = 0.9. Applying the power iteration method to either A_G or B_{G×G}(0), we compute the capacity of the (3, 2)-SWCC constrained system to be Cap(S) = 0.551. Then the GV bound is given by R_GV(0.1) = 2(0.551) − 0.9 = 0.202.
We discuss the convergence issues arising from Procedure 1. Observe that there are two different iterative processes in Step 1: namely, (a) the power iteration method that computes the values Λ(y_t; B), Λ′(y_t; B), and Λ″(y_t; B); and (b) the Newton-Raphson method that determines the zero of F(y).
(a) Recall that Λ(y; B) is the largest eigenvalue of the reduced distance matrix B_{G×G}(y). If we apply naive methods to compute this dominant eigenvalue, the computational complexity increases very rapidly with the matrix size. Specifically, if G has M states, then the reduced distance matrix has dimension Θ(M²) × Θ(M²), and finding its characteristic polynomial takes O(M⁶) time. Even then, determining the exact roots of characteristic polynomials of degree at least five is generally impossible. Therefore, we turn to numerical procedures such as the ubiquitous power iteration method [23]. However, the standard power iteration method only computes the dominant eigenvalue Λ(y; B). Nevertheless, we can modify the power iteration method to compute Λ(y; B) together with its higher-order derivatives.
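The block below sketches one standard alternative way to obtain such derivatives: not the paper's modified power iteration from Appendix A, but the classical eigenvalue-perturbation identity Λ′(y) = uᵀB′(y)v / (uᵀv), where u and v are the left and right dominant eigenvectors obtained by power iteration. The symmetric test family B(y) is a made-up example whose dominant eigenvalue 1 + y has derivative exactly 1:

```python
import numpy as np

# Sketch: derivative of the dominant eigenvalue via the perturbation
# identity Lam'(y) = u^T B'(y) v / (u^T v), with u, v the left/right
# Perron vectors of B(y) computed by power iteration.
def perron_pair(M, iters=500):
    n = M.shape[0]
    v = np.ones(n)
    u = np.ones(n)
    for _ in range(iters):
        v = M @ v; v /= np.linalg.norm(v)      # right eigenvector
        u = M.T @ u; u /= np.linalg.norm(u)    # left eigenvector
    lam = u @ M @ v / (u @ v)                  # Rayleigh-like quotient
    return lam, u, v

def eig_derivative(B, dB, y):
    """Lam'(y) for a matrix family B(y) with entrywise derivative dB(y)."""
    lam, u, v = perron_pair(B(y))
    return u @ dB(y) @ v / (u @ v)

# Toy family B(y) = [[1, y], [y, 1]]: eigenvalues 1 ± y, so the dominant
# eigenvalue is 1 + y and its derivative is exactly 1 for every y.
B = lambda y: np.array([[1.0, y], [y, 1.0]])
dB = lambda y: np.array([[0.0, 1.0], [1.0, 0.0]])
print(eig_derivative(B, dB, 0.3))  # ≈ 1.0
```

This identity needs one extra power iteration (for the left eigenvector) per evaluation point; the paper's Appendix A instead folds the derivative computation into the iteration itself.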
In Appendix A, we demonstrate that, under certain mild assumptions, the modified power iteration method always converges. Moreover, using the sparsity of the reduced distance matrix, each iteration can be completed in O(M²) time. (b) Next, we discuss whether we can guarantee that y_t converges to y* as t approaches infinity. Even though the Newton-Raphson method converges in all our numerical experiments, we are unable to demonstrate that it always converges for F(y). Nevertheless, we can circumvent this issue if we are interested in plotting the GV curve. Specifically, if our objective is to determine the curve {(δ, R_GV(δ)) : δ ∈ [0, 1]}, it turns out that we need not implement the Newton-Raphson iterations at all, and we discuss this next.
Fix some constrained system S and define its corresponding GV curve to be the set of points GV(S) ≜ {(δ, R_GV(δ)) : δ ∈ [0, 1]}. Here, we demonstrate that the GV curve can be plotted without any Newton-Raphson iteration.
Theorem 2. Let G be the graph presentation for the constrained system S. If we define the functions
δ(y) ≜ yΛ′(y; B)/Λ(y; B), (5)
ρ(y) ≜ 2Cap(S) − (−δ(y) log y + log Λ(y; B)), (6)
then the corresponding GV curve is given by GV(S) = {(δ(y), ρ(y)) : y ∈ [0, 1]}.

Before we prove Theorem 2, we discuss its implications. Note that to compute δ(y) and ρ(y), it suffices to determine Λ(y; B) and Λ′(y; B) using the modified power iteration methods described in Appendix A. In other words, no Newton-Raphson iterations are required. We also obtain additional computational savings, as we need not apply the power iteration method to compute the second derivative Λ″(y; B).

Example 3. We continue our example and plot the GV curve for the (3, 2)-SWCC constrained system in Fig. 1(a).
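As a sanity check on this parametric plotting idea, under our reading of (5) and (6), namely δ(y) = yΛ′(y; B)/Λ(y; B) and ρ(y) = 2Cap(S) − (−δ(y) log y + log Λ(y; B)), the sketch below sweeps y for the unconstrained system S = {0, 1}^n. Its 1×1 reduced distance matrix is B(y) = [2 + 2y] and its capacity is 1, so the curve should collapse to the classical binary GV bound 1 − H(δ):

```python
import numpy as np

# Parametric GV curve (our reading of Theorem 2) for the unconstrained
# binary system: Lam(y; B) = 2 + 2y, Lam'(y; B) = 2, Cap(S) = 1.
# Along the sweep, rho(y) should equal 1 - H(delta(y)) exactly.
def H(d):
    """Binary entropy in bits, for 0 < d < 1."""
    return -d * np.log2(d) - (1 - d) * np.log2(1 - d)

cap = 1.0
for y in [0.1, 0.25, 0.5, 0.9]:
    lam, dlam = 2 + 2 * y, 2.0
    delta = y * dlam / lam
    rho = 2 * cap - (-delta * np.log2(y) + np.log2(lam))
    assert abs(rho - (1 - H(delta))) < 1e-12
    print(f"delta = {delta:.3f}, R_GV = {rho:.4f}")
```

A short calculation shows why: with y = δ/(1 − δ), the objective −δ log y + log(2 + 2y) simplifies to 1 + H(δ), so ρ = 2 − (1 + H(δ)) = 1 − H(δ).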
Next, we compute a set of 100 points on the GV curve. If we apply Procedure 1 to compute R_GV(δ) for 100 values of δ in the interval [0, δ_max], we require 275 Newton-Raphson iterations and 6900 power iterations to find these points. In contrast, applying Theorem 2, we compute (δ(y), ρ(y)) for 100 values of y in the interval [0, 1]. This does not require any Newton-Raphson iterations and involves only 2530 power iterations.
To prove Theorem 2, we demonstrate the following lemmas. Our first lemma is immediate from the definitions of R_GV, δ, and ρ in (1), (5), and (6), respectively.
The next lemma studies the behaviour of both δ and ρ as functions in y.
Next, we show that ρ is monotone decreasing. Recall that −δ(y) log y + log Λ(y; B) yields the asymptotic rate of the total ball size. Hence, as y increases, δ(y) increases, and so ∼T(δ) increases.
Theorem 2 is then immediate from Lemmas 4 and 5. We also have the following corollary, which follows immediately from Lemma 5. This corollary then implies that y* yields the global minimum for the optimization problem.

IV. EVALUATING MARCUS AND ROTH'S IMPROVEMENT OF THE GILBERT-VARSHAMOV BOUND
In [17], Marcus and Roth improved the GV lower bound for most constrained systems by considering subsets S(p) of S, where p is some parameter. Here, we focus on the case s = 1 and set p to be the normalized frequency of edges whose labels correspond to one. Specifically, we set S(p) ≜ {x ∈ S : wt(x) = ⌊p|x|⌋}.
Next, let S_n(p) be the set of all words/paths of length n in S(p), and define S(p) ≜ lim sup_{n→∞} log |S_n(p)|/n. Similar to before, we define ∼T(p, δ) ≜ lim sup_{n→∞} (1/n) log T(n, ⌊δn⌋; S(p)). Since S_n(p) is a subset of S_n, it follows from the usual GV argument that there exists a family of (n, ⌊δn⌋; S)-codes whose rates approach 2S(p) − ∼T(p, δ) for all 0 ≤ p ≤ 1. Therefore, we have the following lower bound on the asymptotically achievable code rates:
R_MR(δ) ≜ max_{0≤p≤1} { 2S(p) − ∼T(p, δ) }. (9)
Now, a key result from [17] is that both S(p) and ∼T(p, δ) can be obtained via two different convex optimization problems. For succinctness, we state the dual formulations of these optimization problems. First, S(p) can be obtained from the following problem.
Here, C_G(z) is a (|V| × |V|)-matrix whose rows and columns are indexed by V. For each entry indexed by e, we set (C_G(z))_e to zero if e ∉ E, and to z^{L(e)} if e ∈ E.
As before, we simplify notation by writing Λ(z; C) ≜ Λ(C_G(z)). Again, by the convexity of (10), we are interested in finding the zero of the following function:
G_1(z) ≜ zΛ′(z; C) − pΛ(z; C).
Next, ∼T(p, δ) can be obtained via the following optimization. Here, D_{G×G}(x, y) is a ((|V|+1 choose 2) × (|V|+1 choose 2)) reduced distance matrix indexed by V^(2). To define the entry of the matrix D_{G×G}(x, y) indexed by ((v_i, v_j), (v_k, v_ℓ)), we look at the vertices v_i, v_j, v_k and v_ℓ and follow the rules given in TABLE II.
Again, we write Λ(x, y; D) ≜ Λ(D_{G×G}(x, y)). Also, by the convexity of (12), if the optimal solution is attained at x and y, then x and y are zeros of certain functions G_2 and G_3, defined analogously to (4). To this end, we consider the function ∆(x) = Λ_y(x, 1; D)/Λ(x, 1; D) for x > 0 and set δ_max = sup{∆(x) : x > 0}. As in the previous section, we develop a numerical procedure to solve the optimization problem (9).
To this end, we have the following critical observation.
Proof. Let λ_1, λ_2, λ_3 be real-valued variables and define L(p, x, y, z, λ_1, λ_2, λ_3) ≜ G(p, x, y, z)

TABLE II: The ((v_i, v_j), (v_k, v_ℓ))-entry of the matrix D_{G×G}(x, y) according to the subgraph induced by the states v_i, v_j, v_k, and v_ℓ.
Therefore, to determine R_MR(δ) for any fixed δ, it suffices to find x, y, z, p such that G_1(z) = G_2(x, y) = G_3(x, y) = 0 and x = z. Now, the optimization in Theorem 7 does not constrain the values of p. Furthermore, for certain constrained systems, there are instances where p falls outside the interval [0, 1]. In this case, instead of solving the optimization problem (9), we set p to be either zero or one, and we solve the corresponding optimization problems (10) and (12). Specifically, if p* < 0, then we set p* = 0 and x* = 0, while if p* > 1, then we set p* = 1 and x* = ∞. Hence, the resulting rates we obtain are a lower bound on the GV-MR bound.

(1) Apply the Newton-Raphson method to obtain p*, x*, y* such that G_1(x*), G_2(x*, y*) and G_3(x*, y*) are approximately zero. Specifically, we do the following.
• Compute the next guesses p_{t+1}, x_{t+1}, y_{t+1}.
• Here, we apply the power iteration method to compute Λ(x_t; C), Λ′(x_t; C), Λ″(x_t; C), Λ(x_t, y_t; D), Λ_x(x_t, y_t; D), Λ_y(x_t, y_t; D), Λ_xx(x_t, y_t; D), Λ_yy(x_t, y_t; D), and Λ_xy(x_t, y_t; D).
• Increment t by one.

As before, we develop a plotting procedure that minimizes the use of Newton-Raphson iterations. Note that we have three scenarios for ∆(x). If ∆(x) is monotone decreasing, then δ_max = lim_{x→0} ∆(x) and we set x^# = 0. If ∆(x) is monotone increasing, then δ_max = lim_{x→∞} ∆(x) and we set x^# = ∞. Otherwise, ∆(x) is maximized at some positive value and we set x^# to be this value. Next, to obtain the GV-MR curve (see Remark 11), we iterate over x ∈ [1, x^#]. Note that if y(x^#) < 1, or equivalently δ(x^#) < δ_max, we obtain a lower bound on the GV-MR curve by iterating over y ∈ [y(x^#), 1]. Similar to Theorem 2, we define functions δ(x) and ρ_MR(x). Finally, we state the following analogue of Theorem 2.
Example 10. We continue our example and evaluate the GV-MR bound for the (3, 2)-SWCC constrained system.
In this case, the matrices of interest are as follows.
Here, we observe that ∆(x) is a monotone decreasing function, and so we set x^# = 0.01 and δ_max = lim_{x→0} ∆(x) ≈ 0.426. If we apply Procedure 2 to compute R_MR(δ) for 100 points in [0, δ_max], we require 437 Newton-Raphson iterations and 85500 power iterations. In contrast, we use Theorem 9 to compute (δ(x), ρ_MR(x)) for 100 values of x in the interval [1, x^#]. This requires 323 Newton-Raphson iterations and involves 22296 power iterations. The resulting GV-MR curve is given in Fig. 1(a).
Remark 11. Strictly speaking, the GV-MR curve described by (17) may not be equal to the curve defined by the optimization problem (15). Nevertheless, the curve provides a lower bound on the optimal asymptotic code rates, and we conjecture that the GV-MR curve described by (17) is a lower bound on the curve defined by the optimization problem (15).

V. SINGLE-STATE GRAPH PRESENTATION
In this section, we focus on graph presentations that have exactly one state. Here, we allow these single-state graph presentations to contain parallel edges and allow their labels to be binary strings of length possibly greater than one. For these constrained systems, the procedures to evaluate the GV bound and its MR improvement can be greatly simplified. This is because the matrices B_{G×G}(y), C_G(z), and D_{G×G}(x, y) are all of dimension one by one. Therefore, determining their respective dominant eigenvalues is straightforward and does not require the power iteration method. The results in this section follow directly from previous sections, and our objective is to provide explicit formulas whenever possible.
Formally, let S be the constrained system with graph presentation G = (V, E, L) such that |V| = 1 and L : E → Σ^s with s ≥ 1. We further define α_t ≜ #{(x, y) ∈ L(E)² : d_H(x, y) = t} for 0 ≤ t ≤ s. Then the corresponding adjacency and reduced distance matrices are the one-by-one matrices A_G = (|E|) and B_{G×G}(y) = (∑_{t=0}^{s} α_t y^t). Then we compute the capacity directly from its definition as Cap(S) = (log |E|)/s.
To compute ∼ T (δ), we consider the following extension of the optimization problem (3) for the case s ≥ 1.
As before, by the convexity of the objective function in (18), the optimal y is the zero (in the interval [0, 1]) of the function
F(y) ≜ yB′(y) − sδB(y), where B(y) ≜ ∑_{t=0}^{s} α_t y^t. (19)
So, for fixed values of δ, we can use the Newton-Raphson procedure to compute the root y of (19) and hence evaluate R_GV(δ). Note that the power iteration method is not required in this case.
On the other hand, to plot the GV curve, we have the following corollary of Theorem 2.
Corollary 12. Let G be the single-state graph presentation for a constrained system S. Then the corresponding GV curve is given by (20).

We illustrate this evaluation procedure via an example from the class of subblock energy constrained codes (SECC). Formally, we fix a subblock length L and energy constraint w. A binary word x of length mL is said to satisfy the (L, w)-subblock energy constraint if, when we partition x into m subblocks of length L, the number of ones in every subblock is at least w. We refer to the collection of words that meet this constraint as an (L, w)-SECC constrained system. The class of SECCs was introduced by Tandon et al. for the application of simultaneous energy and information transfer [10]. Later, in [24], a GV-type bound was introduced (see [24, Proposition 12] and also (28)), and we make comparisons with the GV bound (20) in the following example.
Example 13. Let L = 3 and w = 2, and consider the (3, 2)-SECC constrained system. It is straightforward to verify that the graph presentation is as follows, with the single state x. Here, s = L = 3.
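The edge statistics for this example can be tabulated mechanically; the short sketch below (our own check, not code from the paper) enumerates the label set L(E) and the ordered-pair counts α_t:

```python
from itertools import product

# Edge labels of the single-state presentation of the (3, 2)-SECC system:
# all length-3 binary words of weight >= 2. We tabulate
# alpha_t = #{(x, y) in L(E)^2 : d_H(x, y) = t} for t = 0, ..., 3.
labels = [w for w in product([0, 1], repeat=3) if sum(w) >= 2]
alpha = [0] * 4
for x in labels:
    for y in labels:
        t = sum(a != b for a, b in zip(x, y))  # Hamming distance
        alpha[t] += 1

print(len(labels), alpha)  # → 4 [4, 6, 6, 0]
```

So |E| = 4 (giving Cap(S) = (log 4)/3 = 2/3) and (α_0, α_1, α_2, α_3) = (4, 6, 6, 0); note the counts sum to |E|² = 16, as they must for ordered pairs.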
In contrast, the GV-type lower bound given by [24, Proposition 12] is zero. Hence, the evaluation of the GV bound yields a significantly better lower bound.
To plot the GV curve, we observe that δ_max = 3/8, and thus the curve is given by (20) with B(y) = 4 + 6y + 6y².
We plot the curve in Section VI.
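Under the single-state specialization suggested by Corollary 12 — our assumption here is that δ(y) = yB′(y)/(sB(y)) with B(y) = ∑_t α_t y^t — the value δ_max = 3/8 quoted above can be checked in a few lines:

```python
# Hypothetical single-state specialization (our reading of Corollary 12):
# with B(y) = sum_t alpha_t * y^t, the relative distance along the curve
# is delta(y) = y * B'(y) / (s * B(y)). For the (3, 2)-SECC system we
# have s = 3 and (alpha_0, ..., alpha_3) = (4, 6, 6, 0), so delta(1)
# should reproduce delta_max = 3/8 quoted in the text.
alpha = [4, 6, 6, 0]
s = 3

def delta(y):
    B = sum(a * y**t for t, a in enumerate(alpha))
    dB = sum(t * a * y**(t - 1) for t, a in enumerate(alpha) if t > 0)
    return y * dB / (s * B)

print(delta(1.0))  # → 0.375
```

Indeed δ(1) = (6 + 12)/(3 · 16) = 18/48 = 3/8, and δ(y) shrinks toward 0 as y → 0, matching the monotonicity used in the plotting procedure.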
Next, we evaluate the GV-MR bound. To this end, we consider some proper subset P ⊂ E and define the quantities α_t, β_t and γ_t. Then we consider the following matrices. Setting p to be the normalized frequency of edges in P, we obtain S(p) by solving the optimization problem (10).
Specifically, S(p) admits a closed-form expression, attained at an explicit value of z. To compute ∼T(p, δ), we consider the following extension of the optimization problem (12) to the case s ≥ 1.
As before, by the convexity of the objective function in (23), the optimal x and y are the zeros (in the interval [0, 1]) of the functions in (24). So, for fixed values of p and δ, we can use the Newton-Raphson procedure to compute the roots x and y of (24) and hence evaluate R_GV(p, δ). Note that the power iteration method is not required in this case. We find x^# as defined in Section IV and set ρ_MR as in (25). Furthermore, if y(x^#) < 1, we set ρ_LB as in (26). Next, to plot the GV-MR curve, we have the following corollary of Theorem 9.
Corollary 14. Let G be the single-state graph presentation for a constrained system S. For x ∈ [1, x^#], we set δ(x) accordingly, where y(x) is the smallest root of the corresponding equation. Then the corresponding GV-MR curve is given as follows, where ρ_MR and ρ_LB are defined in (25) and (26), respectively.
Example 15. We continue our example and evaluate the GV-MR bound for the (3, 2)-SECC constrained system.

We have the following single-state graph presentation.
Now, observe that we have y(x^#) = 1/2. Since we can still increase y to 1, we apply the GV bound with p = 1 and x = z = x^# once we reach the boundary p = 1. Hence, at the boundary, we solve the following problem.

VI. NUMERICAL PLOTS
In this section, we apply our numerical procedures to compute the GV and GV-MR bounds for some specific constrained systems. In particular, we consider the (L, w)-SWCC constrained systems defined in Section III, the ubiquitous (d, k)-runlength-limited systems (see, for example, [14, p. 3]), and the (L, w)-subblock energy constrained codes recently introduced in [10]. In addition to the GV and GV-MR curves, we also plot a simple lower bound. For each δ ∈ [0, 1/2], any ball of radius δn has size at most 2^{H(δ)n}. So, for any constrained system S, we have that ∼T(δ) ≤ Cap(S) + H(δ). Therefore, we have that
R_GV(δ) ≥ Cap(S) − H(δ). (28)
From the plots in Figures 1, 2 and 3, it is clear that the computations of (7) and (17) yield significantly better lower bounds.

Fig. 1: Lower bounds for optimal asymptotic code rates R(δ; S) for the class of sliding-window constrained codes. (a) Lower bounds for R(δ; S) where S is the class of (3, 2)-SWCC. (b) Lower bounds for R(δ; S) where S is the class of (10, 7)-SWCC.
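For reference, the simple benchmark bound can be sketched as follows; the clipping at zero and the inequality R_GV(δ) ≥ Cap(S) − H(δ) are our reading of the elided display:

```python
import numpy as np

# Sketch of the simple benchmark bound: since every ball of radius
# delta*n (with delta <= 1/2) has size at most 2^(n*H(delta)), we get
# (our reading of the elided inequality) R_GV(delta) >= Cap(S) - H(delta),
# clipped at zero.
def H(d):
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    if d in (0.0, 1.0):
        return 0.0
    return -d * np.log2(d) - (1 - d) * np.log2(1 - d)

def simple_bound(cap, d):
    return max(cap - H(d), 0.0)

# For the unconstrained space (Cap = 1) this recovers the shape of the
# classical GV guarantee; it vanishes at delta = 1/2.
print(simple_bound(1.0, 0.5))  # → 0.0
```

Plotting this alongside the GV and GV-MR curves makes the gap visible, which is exactly the comparison drawn in Figures 1, 2 and 3.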

A. (L, w)-Sliding Window Constrained Codes
Fix L and w. Recall from Section III that a binary word satisfies the (L, w)-sliding window weight-constraint if the number of ones in every consecutive L bits is at least w, and that the (L, w)-SWCC constrained system refers to the collection of words that meet this constraint. From [11], [12], we have a simple graph presentation that uses only (L choose w) states. To validate our methods, we choose (L, w) ∈ {(3, 2), (10, 7)}; the corresponding graph presentations have 3 and 120 states, respectively. Applying the plotting procedures described in Theorems 2 and 9, we obtain Figure 1.

B. (d, k)-Runlength Limited Codes
Next, we revisit the ubiquitous runlength constraint. Fix d and k. We say that a binary word satisfies the (d, k)-RLL constraint if each run of zeroes in the word has length at least d and at most k. Here, the first and last runs of zeroes are allowed to have length less than d. We refer to the collection of words that meet this constraint as a (d, k)-RLL constrained system. It is well known that the (d, k)-RLL constrained system has a graph presentation with k + 1 states (see, for example, [14]). Here, we choose (d, k) ∈ {(1, 3), (3, 7)} to validate our methods and apply Theorems 2 and 9 to obtain Figure 2. For (d, k) = (3, 7), we corroborate our results with those derived in [18]. Specifically, Winick and Yang determined the GV bound (1) for the (3, 7)-RLL constraint and remarked that the "evaluation of the (GV-MR) bound required considerable computation" for "a small improvement". In the following table, we verify this statement.
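As a quick consistency check (our own sketch; state conventions for RLL presentations vary across texts), one can build the (k + 1)-state presentation and recover Shannon's classical capacity value for the (1, 3)-RLL system, approximately 0.5515:

```python
import numpy as np

# A (d, k)-RLL presentation with k + 1 states, where state i records the
# length of the current run of zeroes: emitting 0 moves i -> i + 1 while
# i < k, and emitting 1 moves i -> 0 once i >= d.
def rll_adjacency(d, k):
    A = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        if i < k:
            A[i, i + 1] = 1  # emit 0, extend the run of zeroes
        if i >= d:
            A[i, 0] = 1      # emit 1, close the run
    return A

def capacity(A, iters=1000):
    q = np.ones(A.shape[0])
    for _ in range(iters):
        q = A @ q
        q /= np.linalg.norm(q)
    return np.log2(np.linalg.norm(A @ q))

print(round(capacity(rll_adjacency(1, 3)), 4))  # ≈ 0.5515
```

The dominant eigenvalue here is the largest root of x⁴ = x² + x + 1 (run lengths 2, 3 and 4, counting the closing one), whose base-2 logarithm is the familiar value 0.5515.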

[Table: comparison of the GV-MR bound (15) and the GV bound of [18] at selected values of δ.]

C. (L, w)-Subblock Energy Constrained Codes

Fix L and w. Recall from Section V that a binary word satisfies the (L, w)-subblock energy constraint if each subblock of length L has weight at least w, and that the (L, w)-SECC constrained system refers to the collection of words that meet this constraint. Then the corresponding graph presentation has a single state x with ∑_{i=w}^{L} (L choose i) edges, where each edge is labelled by a word of length L and weight at least w. We apply the methods in Section V to determine the GV and GV-MR bounds.
For the GV bound, we provide the explicit formula for α_t and proceed as in Example 13. Similarly, for the GV-MR bound, we provide the explicit formulas for α_t, β_t and γ_t and proceed as in Example 15.
In Figure 3, we plot the GV and GV-MR bounds. We remark that the simple lower bound (28) corresponds to [24, Proposition 12].

APPENDIX A

Power Iteration I:
– Set
– Increment k by one.
Theorem 16. If A is an irreducible, nonnegative, diagonalizable matrix and q^(0) has positive components with unit norm, then as k → ∞, we have λ^(k) → λ_1 and q^(k) → e_1 (with analogous limits for μ^(k) and r^(k)). Here, q^(k) → e_1 means that ∥q^(k) − e_1∥ → 0 as k → ∞.
Before we present the proof of Theorem 16, we remark that the usual power iteration method computes only λ^(k) and q^(k). It is well known (see, for example, [23]) that λ^(k) and q^(k) tend to λ_1 and e_1, respectively. Now, since the e_i's span R^n, we can write q^(0) = ∑_{i=1}^{n} α_i e_i for any initial vector q^(0). The next technical lemma provides closed formulae for λ^(k), q^(k), μ^(k) and r^(k) in terms of the λ_i's, e_i's and α_i's.
Finally, we are ready to demonstrate the correctness of Power Iteration I.
Note that since λ_1^{−k}∥Φ_k∥ tends to a finite limit, λ_1^{−k}∥Φ_k∥ is bounded above by some constant. In other words, we have the following. Next, we show the following inequality. Using (36), we obtain the bound below. Again, to reduce clutter, we introduce the following abbreviations. Thus, we can rewrite (38) accordingly. Next, we bound each of the summands on the right-hand side. Specifically, we show the following inequalities. To demonstrate (46), we consider ∥α_i e′_i + α′_i e_i∥ |λ_i − λ^(k)|.
We use (45) to bound the first summand by some constant multiple of ϵ^{k−1}. On the other hand, we have In other words, the

For x, y ∈ S_n, the Hamming distance between x and y is denoted by d_H(x, y). Fix 1 ≤ d ≤ n; a fundamental problem in coding theory is to find the largest subset C of S_n such that d_H(x, y) ≥ d for all distinct x, y ∈ C. We let A(n, d; S) denote the size of the largest such subset C.

Fig. 2: Lower bounds for optimal asymptotic code rates R(δ; S) for the class of runlength limited codes.