
Error Exponents of LDPC Codes under Low-Complexity Decoding

Pavel Rybin, Kirill Andreev and Victor Zyablov

1 Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
2 Sirius University of Science and Technology, 1 Olympic Ave, 354340 Sochi, Russia
3 Laboratory №3—Transmission, Protection and Analysis of Information, Institute for Information Transmission Problems, Russian Academy of Sciences, 119991 Moscow, Russia
* Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 253; https://doi.org/10.3390/e23020253
Submission received: 17 December 2020 / Revised: 18 February 2021 / Accepted: 19 February 2021 / Published: 22 February 2021
(This article belongs to the Special Issue Information Theory for Channel Coding)

Abstract: This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents for these codes transmitted over the memoryless binary symmetric channel (BSC), both for the well-known maximum-likelihood (ML) decoding and for the proposed low-complexity decoding algorithm. We prove that there exist LDPC codes of this construction for which the probability of erroneous decoding decreases exponentially with the growth of the code length at coding rates below the corresponding channel capacity. We also show that the obtained lower bound on the error exponent under ML decoding almost coincides with the error exponents of good linear codes.

1. Introduction

Low-density parity-check (LDPC) codes [1] are known for their very efficient low-complexity decoding algorithms. This paper's central question is: are there LDPC codes that asymptotically achieve the capacity of the binary symmetric channel (BSC) under a low-complexity decoding algorithm? The following results help us construct an LDPC code with a specific structure and develop a decoding algorithm that answers this question affirmatively. Zyablov and Pinsker showed in [2] that the ensemble of LDPC codes proposed by Gallager (G-LDPC codes) includes codes that can correct a number of errors growing linearly with the code length $n$ while the decoding complexity remains $O(n\log n)$. Later, the lower bound on this fraction of errors was improved in [3,4,5]. The main idea of the LDPC code construction and decoding algorithm considered in this paper is thus as follows. We introduce into the G-LDPC construction some "good" codes that reduce the number of errors from the channel to the point where the low-complexity majority decoding can correct the remaining errors. As the "good" codes, we select codes attaining the error exponent of good codes under ML decoding [6]. To introduce these "good" codes into the construction, we compose an additional parity-check matrix layer from the parity-check matrices of the "good" codes. To meet the $O(n\log n)$ complexity requirement on the decoding algorithm, we impose restrictions on the length of the "good" codes: it must be small (of order $\log_2\log_2 n$) compared to the length of the whole construction. To show that the proposed construction asymptotically achieves the BSC capacity, we estimate the error exponent under the proposed low-complexity decoding algorithm.
It is worth mentioning that papers [7,8] introduce expander codes that achieve the BSC capacity under an iterative decoding algorithm with low complexity. In this paper, however, we are interested in an LDPC code construction and a corresponding decoding algorithm.
To show that the proposed construction of LDPC code is good, we also estimate the error exponent under ML decoding and compare it with the error exponent under the proposed low-complexity decoding algorithm. Previously, the authors of [9,10] derived upper and lower bounds on the error exponent of G-LDPC codes under the ML decoding assumption. Moreover, one can conclude from [10] that the lower bound on the error exponent of G-LDPC codes under ML decoding almost coincides with the lower bound obtained for good linear codes (from [6]) under ML decoding.
Some parts of this paper were previously presented (with almost all of the proofs omitted) in the conference paper [11]. The low-complexity decoding algorithm that we use for our analysis was proposed in [12,13]. Unlike in the previous papers, Corollary 1 is significantly strengthened and proved in detail here. Moreover, the results for the error-exponent bound under ML decoding and the corresponding proofs are added. We compare the obtained lower bounds on the error exponents under the low-complexity decoding and under ML decoding, and we evaluate the error exponents numerically for different code parameters.

2. LDPC Code Construction

Let us briefly consider the LDPC code construction from [11,12]. First, let us consider the G-LDPC code parity-check matrix $H_2$ of size $\ell b_0 \times b_0 n_0$ from [1]:

$$H_2 = \begin{pmatrix} \pi_1(H_{b_0}) \\ \pi_2(H_{b_0}) \\ \vdots \\ \pi_\ell(H_{b_0}) \end{pmatrix}.$$

Here we denote by $\pi_l(H_{b_0})$, $l = 1, \dots, \ell$, a random column permutation of $H_{b_0}$, which is given by

$$H_{b_0} = \begin{pmatrix} H_0 & 0 & \cdots & 0 \\ 0 & H_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & H_0 \end{pmatrix} \quad (b_0 \text{ diagonal blocks}),$$

where $H_0$ is the parity-check matrix of the constituent single parity-check (SPC) code of length $n_0$.

The elements of Gallager's LDPC code ensemble $\mathcal{E}_G(\ell, n_0, b_0)$ are obtained by independently selecting the equiprobable permutations $\pi_l$, $l = 1, 2, \dots, \ell$.

One can write a lower bound on the rate of a G-LDPC code from $\mathcal{E}_G(\ell, n_0, b_0)$ as

$$R_2 \ge 1 - \ell\,(1 - R_0), \qquad (1)$$

where $R_0 = (n_0 - 1)/n_0$ is the SPC code rate.

Equality is achieved if and only if the matrix $H_2$ has full rank.
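To make the layered structure concrete, here is a small numerical sketch (our illustration, not the authors' code; all function names are ours) that builds the block-diagonal SPC matrix $H_{b_0}$ and stacks $\ell$ independently column-permuted copies into $H_2$:

```python
import numpy as np

def spc_layer(n0, b0):
    """Block-diagonal H_{b0}: b0 copies of the 1 x n0 single-parity-check
    matrix H_0 = (1 1 ... 1) along the diagonal."""
    H = np.zeros((b0, b0 * n0), dtype=np.uint8)
    for i in range(b0):
        H[i, i * n0:(i + 1) * n0] = 1
    return H

def gldpc_parity_check(n0, b0, ell, rng):
    """H_2: ell independently column-permuted copies of H_{b0}, stacked,
    giving an (ell*b0) x (b0*n0) G-LDPC parity-check matrix."""
    base = spc_layer(n0, b0)
    return np.vstack([base[:, rng.permutation(b0 * n0)] for _ in range(ell)])

H2 = gldpc_parity_check(n0=8, b0=4, ell=3, rng=np.random.default_rng(0))
```

Every row has weight $n_0$ and every column has weight $\ell$; counting the $\ell b_0$ check rows immediately gives the rate bound (1).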
Consider a G-LDPC parity-check matrix with an additional layer composed of linear codes (an LG-LDPC code). Let us denote this matrix by $H$:

$$H = \begin{pmatrix} \pi_1(H_{b_0}) \\ \vdots \\ \pi_\ell(H_{b_0}) \\ \pi_{\ell+1}(H_{b_1}) \end{pmatrix},$$

where $H_{b_1}$ is given by

$$H_{b_1} = \begin{pmatrix} H_1 & 0 & \cdots & 0 \\ 0 & H_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & H_1 \end{pmatrix} \quad (b_1 \text{ diagonal blocks}),$$

and $b_1$ is such that $b_1 n_1 = b_0 n_0$. Since the first $\ell$ layers of the matrix $H$ form the G-LDPC parity-check matrix, we can write $H$ as

$$H = \begin{pmatrix} H_2 \\ \pi_{\ell+1}(H_{b_1}) \end{pmatrix}.$$

For a given SPC code with code length $n_0$ and parity-check matrix $H_0$, and for a given linear code with code length $n_1$ and parity-check matrix $H_1$, the elements of the LG-LDPC code ensemble $\mathcal{E}_{LG}(\ell, n_0, b_0, R_1, n_1, b_1)$ are obtained by independently selecting the equiprobable permutations $\pi_l$, $l = 1, 2, \dots, \ell + 1$.

The length of the constructed LG-LDPC code is $n = b_0 n_0 = b_1 n_1$, and the code rate $R$ is lower bounded by

$$R \ge R_1 - \ell\,(1 - R_0).$$

According to (1),

$$R \ge R_1 + R_2 - 1. \qquad (2)$$

3. Decoding Algorithms

In this paper, we consider two decoding algorithms for the proposed construction. The first is the well-known maximum-likelihood decoding algorithm $\mathcal{A}_{ML}$. Under the second decoding algorithm, $\mathcal{A}_C$, the LG-LDPC code is decoded as a concatenated code: in the first step, we decode the received sequence using the linear codes with parity-check matrix $H_1$ from the $(\ell+1)$-th layer of $H$; in the second step, we decode the sequence obtained in the previous step using the G-LDPC code with parity-check matrix $H_2$. Thus, the algorithm $\mathcal{A}_C$ consists of the following two steps:
  • The received sequence is decoded separately, with the well-known maximum-likelihood algorithm, by the $b_1$ linear codes with parity-check matrix $H_1$ from the $(\ell+1)$-th layer of $H$.
  • The tentative sequence is decoded with the well-known bit-flipping (majority) decoding algorithm $\mathcal{A}_M$ by the G-LDPC code with parity-check matrix $H_2$.
An important note here is that $\mathcal{A}_C$ is a two-step decoding algorithm, and each step is performed only once: it first decodes the received sequence by ML decoding of the linear codes $H_1$, and then applies the iterative bit-flipping (majority) algorithm $\mathcal{A}_M$ of the G-LDPC code to the tentative sequence.
It is also worth noting that the complexity of the proposed decoding algorithm $\mathcal{A}_C$ is $O(n\log n)$ under some restrictions on the length of the linear codes with parity-check matrix $H_1$ (see Lemma 3). At the same time, the complexity of ML decoding is exponential.
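The two steps above can be sketched as follows (an illustrative toy, not the authors' implementation: the exhaustive inner ML search and the greedy flip rule are our simplifications, and the (7,4) Hamming check matrix is used only as a stand-in constituent code):

```python
import numpy as np

def ml_decode_block(r, codewords):
    """Step 1: exhaustive ML decoding of one short inner block
    (closest codeword in Hamming distance)."""
    return min(codewords, key=lambda c: int(np.count_nonzero(r != c)))

def bit_flip(H, y, max_iter=100):
    """Step 2: greedy bit-flipping (majority) decoding against the
    G-LDPC checks: flip the bit removing the most unsatisfied checks."""
    y = y.copy()
    for _ in range(max_iter):
        unsat = (H @ y) % 2                      # unsatisfied checks
        if not unsat.any():
            break                                # valid codeword reached
        # gain of flipping bit j: unsatisfied minus satisfied checks on j
        gains = H.T @ (2 * unsat.astype(int) - 1)
        j = int(np.argmax(gains))
        if gains[j] <= 0:
            break                                # no improving flip exists
        y[j] ^= 1
    return y

# toy stand-in: (7,4) Hamming checks, all-zero codeword, one channel error
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]], dtype=np.uint8)
y = np.zeros(7, dtype=np.uint8)
y[0] = 1                                         # single bit error
decoded = bit_flip(H, y)
```

In the paper's setting, step 1 runs over the $b_1$ short blocks of length $n_1$ and step 2 runs once on the concatenation of the tentative blocks.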

4. Main Results

Consider a BSC with bit error probability $p$. Let the decoding error probability $P$ be the probability of the union of the decoding-denial and erroneous-decoding events. In this paper, we consider the decoding error probability $P$ in the form

$$P \le \exp\{-n\,E(\cdot)\},$$

with $E(\cdot)$ being the required error exponent.
Let us define two error exponents, $E_C(\cdot)$ and $E_{ML}(\cdot)$, corresponding to the $\mathcal{A}_C$ decoding algorithm (of complexity $O(n\log n)$) and the $\mathcal{A}_{ML}$ decoding algorithm (of exponential complexity), respectively. Let us first consider the error exponent $E_C(\cdot)$.
Theorem 1.
Let there exist in the ensemble $\mathcal{E}_G(\ell, n_0, b_0)$ of G-LDPC codes a code with code rate $R_2$ that can correct any error pattern of weight up to $\omega_t n$ under the bit-flipping (majority) algorithm $\mathcal{A}_M$.
Let there exist a linear code with code length $n_1$, code rate $R_1$, and an error exponent under maximum-likelihood decoding lower bounded by $E_0(R_1, p)$.
Then, in the ensemble $\mathcal{E}_{LG}(\ell, n_0, b_0, R_1, n_1, b_1)$ of LG-LDPC codes, there exists a code with code length $n$,

$$n = n_0 b_0 = n_1 b_1,$$

code rate $R$,

$$R \ge R_1 + R_2 - 1,$$

and an error exponent over the memoryless BSC with BER $p$ under the decoding algorithm $\mathcal{A}_C$ of complexity $O(n\log n)$ lower bounded by $E_C(\cdot)$:

$$E_C(R_1, n_1, \omega_t, p) = \min_{\omega_t \le \beta \le \beta_0}\left[\beta\,E_0(R_1, p) + E_2(\beta, \omega_t, p) - \frac{1}{n_1}H(\beta)\right],$$

where $\beta_0 = \min\!\left(\frac{\omega_t}{2p},\, 1\right)$, $H(\beta) = -\beta\ln\beta - (1-\beta)\ln(1-\beta)$ is the entropy function, and $E_2(\beta, \omega_t, p)$ is given by

$$E_2(\beta, \omega_t, p) = \frac{1}{2}\left[\omega_t\ln\frac{\omega_t}{p} + (2\beta - \omega_t)\ln\frac{2\beta - \omega_t}{1-p}\right] - \beta\ln(2\beta),$$

where $n_1$ satisfies the following condition:

$$-\frac{\ln\beta_0}{E_0(R_1, p)} \le n_1 \le \frac{1}{R_1}\log_2\log_2 n. \qquad (3)$$
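The bound of Theorem 1 is easy to evaluate numerically. The sketch below is our reading of the formulas; the value $E_0 = 0.1$ is an illustrative placeholder, not a computed $E_0(R_1, p)$:

```python
import math

def Hn(b):
    """Natural-log entropy H(beta) = -b*ln(b) - (1-b)*ln(1-b)."""
    if b in (0.0, 1.0):
        return 0.0
    return -b * math.log(b) - (1 - b) * math.log(1 - b)

def E2(beta, wt, p):
    """E_2(beta, omega_t, p); algebraically equal to beta * D(wt/(2*beta) || p),
    hence nonnegative and vanishing at beta = wt/(2*p)."""
    return 0.5 * (wt * math.log(wt / p)
                  + (2 * beta - wt) * math.log((2 * beta - wt) / (1 - p))) \
           - beta * math.log(2 * beta)

def EC(E0, wt, p, n1, grid=10000):
    """E_C = min over wt <= beta <= beta_0 of beta*E0 + E2 - H(beta)/n1."""
    b0 = min(wt / (2 * p), 1.0)
    betas = [wt + (b0 - wt) * k / grid for k in range(grid + 1)]
    return min(b * E0 + E2(b, wt, p) - Hn(b) / n1 for b in betas)

# E0 = 0.1 is an illustrative placeholder for E_0(R_1, p), not a computed value
val = EC(E0=0.1, wt=0.01, p=1e-3, n1=2000)
```

With these (hypothetical) parameters the minimum stays strictly positive, illustrating Corollary 1's claim that the exponent is positive when $E_0(R_1, p) > 0$ and $n_1$ is large enough.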
Corollary 1.
$E_C(\cdot) > 0$ for $R < C$, where $C$ is the capacity of a memoryless BSC with error probability $p$, for parameters such that $R_1 < C$ and $R_2 < 1$.
Thus, according to Corollary 1, there exists an LG-LDPC code such that the error probability of the low-complexity decoding algorithm $\mathcal{A}_C$ decreases exponentially with the code length for all code rates below the channel capacity $C$.
Remark 1.
We obtained the lower bound on $E_C(R_1, n_1, \omega_t, p)$ assuming $n \to \infty$ with $n_0 = \mathrm{const}$, $n_1 = \mathrm{const}$, $b_0 \to \infty$, and $b_1 \to \infty$. As a result, the complexity of the $\mathcal{A}_C$ algorithm equals $O(n\log n)$, and the right-hand inequality of condition (3) holds for $n_1$ as $n$ grows.
Theorem 1 was obtained in [12]. The main idea of the proof is based on the following results. Our previous results [3,4] show that, in the ensemble $\mathcal{E}_G(\ell, n_0, b_0)$ of G-LDPC codes, there exists a code that can correct any error pattern of weight up to $\omega_t n$ under the algorithm $\mathcal{A}_M$ with complexity $O(n\log n)$. In [6], it was shown that there exists a linear code whose error exponent under ML decoding is lower bounded by $E_0(R, p)$, where $E_0(R, p) > 0$ for $R < C$.
Let us now consider the lower bound on the error exponent $E_{ML}(\cdot)$.
Theorem 2.
In the ensemble $\mathcal{E}_{LG}(\ell, n_0, b_0, R_1, n_1, b_1)$, there exists an LG-LDPC code such that the error exponent of this code over the memoryless BSC with BER $p$ under the decoding algorithm $\mathcal{A}_{ML}$ is lower bounded by

$$E_{ML}(p) = \max_{\omega_0 \le \omega_c \le 1}\min\left\{E_\delta(\omega_c, p),\; E_{\omega_c}(\omega_c, p)\right\},$$

where $\omega_0 = \max(\delta, p)$, $\delta$ is the relative code distance of this code, and $E_\delta(\omega_c, p)$ is given by

$$E_\delta(\omega_c, p) = -\max_{\delta \le \omega \le \omega_c}\left[\nu(\omega) + \omega\ln\!\left(2\sqrt{p(1-p)}\right)\right],$$

where $\nu(\omega)$ is the asymptotic spectrum of the LG-LDPC code:

$$\nu(\omega) = \lim_{n\to\infty}\frac{\ln \bar N(\omega n)}{n},$$

with $\bar N(\omega n)$ the average number of codewords of weight $\omega n$, and $E_{\omega_c}(\omega_c, p)$ is given by

$$E_{\omega_c}(\omega_c, p) = (1 - \omega_c)\ln\frac{1-\omega_c}{1-p} + \omega_c\ln\frac{\omega_c}{p}.$$
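The outer max–min in Theorem 2 is mechanical once $\nu(\omega)$ is available. The sketch below (our illustration) uses a crude placeholder spectrum, not the bound of Lemma 1, purely to show how $E_\delta$, $E_{\omega_c}$, and the max–min combine:

```python
import math

def D(a, p):
    """Binary KL divergence; E_{omega_c}(omega_c, p) = D(omega_c || p)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def E_delta(nu, delta, wc, p, grid=2000):
    """E_delta(omega_c, p) = -max over delta <= omega <= omega_c of
    [nu(omega) + omega * ln(2*sqrt(p*(1-p)))]."""
    c = math.log(2 * math.sqrt(p * (1 - p)))
    oms = [delta + (wc - delta) * k / grid for k in range(grid + 1)]
    return -max(nu(om) + om * c for om in oms)

def E_ML(nu, delta, p, grid=200):
    """Max over omega_0 <= omega_c <= 1 of min(E_delta, E_{omega_c})."""
    w0 = max(delta, p)
    wcs = [w0 + (1 - w0) * k / grid for k in range(1, grid)]
    return max(min(E_delta(nu, delta, wc, p), D(wc, p)) for wc in wcs)

# placeholder spectrum for illustration only (vanishes at the "distance" delta)
delta, p = 0.05, 1e-3
nu = lambda om: max(0.0, om - delta) * 0.5
val = E_ML(nu, delta, p)
```

Note that $E_{\omega_c}$ is exactly the binomial-tail (Chernoff) exponent $D(\omega_c\,\|\,p)$, which is zero at $\omega_c = p$ and grows as the critical error fraction moves away from $p$.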
We obtained Theorem 2 using the methods developed in [14] for estimating the error exponent under ML decoding of codes with a given spectrum. We adopted the ideas of [1] for G-LDPC codes to construct the upper bound on the code spectrum and the lower bound on the code distance of the proposed LDPC construction (see Lemmas 1 and 2).
Lemma 1.
The value of $\nu(\omega)$ for codes from the ensemble $\mathcal{E}_{LG}(\ell, n_0, b_0, R_1, n_1, b_1)$ of LG-LDPC codes is upper bounded by

$$\nu(\omega) \le \nu_0(\omega) = (1-\ell)\,H(\omega) + \min_{s>0}\left[\frac{\ell-1}{n_0}\ln g_0(s, n_0) + \frac{1}{n_1}\ln g_1(s, R_1, n_1) - \ell\,\omega\ln s\right],$$

where $g_0(s, n_0)$ is the spectrum function of the constituent SPC code of length $n_0$,

$$g_0(s, n_0) = \sum_{i\ \mathrm{even}}\binom{n_0}{i}s^i = \frac{(1+s)^{n_0} + (1-s)^{n_0}}{2},$$

and $g_1(s, R_1, n_1)$ is the spectrum function of the constituent linear code with a good spectrum, code rate $R_1$, and length $n_1$, obtained in [14]:

$$g_1(s, R_1, n_1) \le 1 + n_1^2\, 2^{-(1-R_1)n_1}\sum_{i=\delta_{VG} n_1}^{n_1}\binom{n_1}{i}s^i,$$

where $\delta_{VG}$ is given by the Varshamov–Gilbert bound.
Lemma 2.
Let the following equation have a positive root $\delta_0$:

$$\nu_0(\delta_0) = 0.$$

Then, a code with relative minimum code distance $\delta \ge \delta_0$ exists in the ensemble $\mathcal{E}_{LG}(\ell, n_0, b_0, R_1, n_1, b_1)$.
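The ingredients of Lemmas 1 and 2 can be checked numerically. In the sketch below, the closed form of $g_0$ is verified against the even-weight enumerator, and a candidate root $\delta_0$ is located by scanning our reconstruction of $\nu_0(\omega)$ for a sign change; for simplicity, the last layer is taken to be another SPC layer (a stand-in for $g_1$), which reduces the bound to the classical Gallager-ensemble spectrum. The exact normalization of $\nu_0$ follows our reading of the source and should be treated as an assumption:

```python
import math

def g0(s, n0):
    """SPC spectrum function: sum over even i of C(n0, i) * s**i."""
    return ((1 + s) ** n0 + (1 - s) ** n0) / 2

def g0_direct(s, n0):
    """Direct even-weight enumerator, to cross-check the closed form."""
    return sum(math.comb(n0, i) * s ** i for i in range(0, n0 + 1, 2))

def Hn(w):
    """Natural-log binary entropy."""
    return 0.0 if w in (0.0, 1.0) else -w * math.log(w) - (1 - w) * math.log(1 - w)

def nu0(w, ell, n0, n1, ln_g1, s_grid):
    """Reconstructed spectrum bound: (1-ell)*H(w) + min over s of
    [(ell-1)/n0 * ln g0 + 1/n1 * ln g1 - ell*w*ln s]."""
    return (1 - ell) * Hn(w) + min(
        (ell - 1) / n0 * math.log(g0(s, n0)) + ln_g1(s) / n1 - ell * w * math.log(s)
        for s in s_grid)

ell, n0 = 4, 8
s_grid = [10 ** (k / 50 - 3) for k in range(151)]        # s in [1e-3, 1]
ln_g1 = lambda s: math.log(g0(s, n0))                    # stand-in SPC layer
ws = [k / 200 for k in range(1, 100)]                    # omega in (0, 0.5)
vals = [nu0(w, ell, n0, n0, ln_g1, s_grid) for w in ws]
delta0 = next((w for w, v in zip(ws, vals) if v >= 0), None)
```

The scan exhibits the behavior Lemma 2 relies on: $\nu_0(\omega) < 0$ for small $\omega$ (exponentially few low-weight codewords on average) and a sign change at some positive $\delta_0$.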

5. Numerical Results

One can see from Theorems 1 and 2 that the obtained lower bounds $E_C(\cdot)$ and $E_{ML}(\cdot)$ depend on a set of parameters: the error probability $p$ of the BSC; the code rate $R_1$ and length $n_1$ of the linear code from the added layer; and the code rate $R_2$ and constituent code length $n_0$ of the G-LDPC code (the value of $\omega_t$ used in the $E_C(\cdot)$ bound depends on these parameters). The code rate $R$ of the whole construction depends on $R_1$ and $R_2$.
Thus, to simplify the analysis, let us first fix the parameters $R_1 = 0.85$, $n_1 = 2000$, $R = 0.5$, and $p = 10^{-3}$ and examine how $E_C(\cdot)$ and $E_{ML}(\cdot)$ depend on the SPC code length $n_0$ (see Figure 1). Then, let us consider the dependence on $R_1$ of $E_{ML}(\cdot)$ and $E_C(\cdot)$ maximized over the values of $n_0$ (see Figure 2).
We can explain the different behaviors of $E_{ML}(\cdot)$ and $E_C(\cdot)$ shown in Figures 1 and 2 as follows: the value of $E_{ML}(\cdot)$ depends significantly on the code distance $\delta$ of the LG-LDPC code, while the value of $E_C(\cdot)$ depends on the error fraction $\omega_t$ guaranteed to be corrected by the G-LDPC code. It is known that, for a fixed code rate $R$, the code distance of an LDPC code increases with the growth of the constituent code length $n_0$, whereas the guaranteed corrected error fraction $\omega_t$ attains its maximum at certain parameters $n_0$ and $\ell$.
In Figure 3, we compare, as functions of $R$ for fixed $p = 10^{-3}$, the obtained lower bound $E_{ML}(\cdot)$, maximized over the values of $n_0$ and $R_1$, and the lower bound $E_0(\cdot)$.
Figure 4 shows the dependence on $R$ of the maximum values of $E_{ML}(\cdot)$ and $E_C(\cdot)$ for fixed $p = 10^{-3}$ (the maximization was performed over the values of $n_0$ and $R_1$).
As observed from Figure 4, $E_C(\cdot)$ is approximately two orders of magnitude smaller than $E_{ML}(\cdot)$, which almost reaches the lower bound $E_0(\cdot)$ on the error exponent of the good linear code (see Figure 3). However, it is important to note that $E_{ML}(\cdot)$ comes at the cost of exponential decoding complexity, whereas $E_C(\cdot)$ requires only $O(n\log n)$ decoding complexity.

6. Conclusions

The main result of this paper is the proof (see Corollary 1) that there exists an LDPC code of the proposed specific construction such that the probability of erroneous decoding under the low-complexity ($O(n\log n)$) algorithm decreases exponentially with the growth of the code length for all code rates below the BSC capacity. We also obtained a lower bound on the error exponent under ML decoding for the proposed construction (see Theorem 2) and showed numerically that this lower bound almost coincides with the error exponent of good linear codes for certain parameters.
As future work to improve the lower bound for the low-complexity decoder, we plan to consider error-reducing codes instead of good linear codes and to generalize our results to channels with reliability information (e.g., channels with additive white Gaussian noise (AWGN) and "soft" reception).

7. Proofs of the Main Results

7.1. Error Exponent for Decoding Algorithm A C

Theorem 1 was proved in [12]. Here, for the reader's convenience, we provide the proof in more detail, especially for the essential Corollary 1.
Let us first consider the complexity of the decoding algorithm $\mathcal{A}_C$ of an LG-LDPC code.
Lemma 3.
The complexity of the decoding algorithm $\mathcal{A}_C$ of an LG-LDPC code of length $n$ is of order $O(n\log n)$ if the length of the linear code satisfies the inequality $n_1 \le \frac{1}{R_1}\log_2\log_2(n)$.
Proof. 
Since the length of the linear code is $n_1$ and its code rate is $R_1$, the complexity of the maximum-likelihood decoding algorithm for a single code is of order $O(2^{R_1 n_1})$. The total number of codes is $b_1$, which is proportional to $n$; hence, the complexity of decoding all of the codes is of order $O(n\,2^{R_1 n_1})$.
In [4], it was shown that the complexity of the bit-flipping decoding algorithm of LDPC codes is $O(n\log n)$.
Therefore, the complexity of the decoding algorithm $\mathcal{A}_C$ is of order $O(n\log_2 n)$ if the following condition is satisfied:

$$n\,2^{R_1 n_1} \le n\log_2(n). \qquad (4)$$

From this, we obtain the condition on $n_1$:

$$n_1 \le \frac{1}{R_1}\log_2\log_2(n).$$
 □
Let us now consider the proof of Theorem 1.
Proof. 
Assume that, in the first step of the decoding algorithm $\mathcal{A}_C$ of the LG-LDPC code, decoding errors occurred in exactly $i$ linear codes. Since each code contains no more than $n_1$ errors, the total number of errors $W$ after the first step of decoding is no greater than $i n_1$. Let $i = \beta b_1$, where $\beta$ is the fraction of linear codes in which a decoding failure occurred; then,

$$W \le \beta b_1 n_1 = \beta n.$$

According to [4], the G-LDPC code is capable of correcting any error pattern of weight $W$ less than $W_0$, that is,

$$W < W_0 = \omega_t n,$$

where $\omega_t$ is the fraction of errors guaranteed to be corrected by the G-LDPC code [4] (Theorem 1). Consequently, for $\beta < \omega_t$, the decoding error probability $P$ of the LG-LDPC code under the decoding algorithm $\mathcal{A}_C$ equals 0:

$$P = 0, \quad \beta < \omega_t.$$
At $\beta \ge \omega_t$, the decoding error probability is given by

$$P = \sum_{i=\omega_t b_1}^{b_1}\binom{b_1}{i}\,P_2(W \ge W_0 \mid i)\,P_1^i\,(1-P_1)^{b_1-i}, \qquad (5)$$

where $P_1$ is the decoding error probability of a linear code,

$$P_1 \le \exp\{-n_1 E_0(R_1, p)\},$$

and $P_2(W \ge W_0 \mid i)$ is the probability that the number of errors after the first step of the decoding algorithm $\mathcal{A}_C$ is not less than $W_0$, given that decoding errors occurred in exactly $i$ linear codes.
Since the number of errors in a block at most doubles when the maximum-likelihood decoder errs, there must be more than $W_0/2$ errors in the $i$ erroneous blocks before the first step in order to have more than $W_0$ errors after the first step of the decoding algorithm $\mathcal{A}_C$. Then, we can write $P_2(W \ge W_0 \mid i)$ as

$$P_2(W \ge W_0 \mid i) = \sum_{j=\omega_t n/2}^{i n_1}\binom{i n_1}{j}p^j(1-p)^{i n_1 - j}.$$
Using the Chernoff bound, we obtain

$$P_2(W \ge W_0 \mid i) \le \exp\{-n\,E_2(\beta, \omega_t, p)\},$$

where

$$E_2(\beta, \omega_t, p) = \begin{cases}\dfrac{1}{2}\left[\omega_t\ln\dfrac{\omega_t}{p} + (2\beta-\omega_t)\ln\dfrac{2\beta-\omega_t}{1-p}\right] - \beta\ln(2\beta), & \beta < \beta_0,\\[4pt] 0, & \beta \ge \beta_0.\end{cases} \qquad (6)$$

Here $\beta = i/b_1 > \omega_t$, and

$$\beta_0 = \min\!\left(\frac{\omega_t}{2p},\, 1\right)$$

because $\beta > 1$ makes no sense.
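The Chernoff step used here is the standard binomial-tail bound $P\{\mathrm{Bin}(m, p) \ge k\} \le \exp\{-m\,D(k/m \,\|\, p)\}$ for $k/m > p$; with $m = \beta n$ trials and threshold $k = \omega_t n/2$, the exponent $m\,D(k/m\,\|\,p)$ equals $n\,E_2(\beta, \omega_t, p)$. A quick numerical check of the inequality (our sketch, with arbitrary small parameters):

```python
import math

def log_binom_tail(m, k, p):
    """ln P(Bin(m, p) >= k), computed stably via lgamma and log-sum-exp."""
    terms = [math.lgamma(m + 1) - math.lgamma(j + 1) - math.lgamma(m - j + 1)
             + j * math.log(p) + (m - j) * math.log(1 - p)
             for j in range(k, m + 1)]
    mx = max(terms)
    return mx + math.log(sum(math.exp(t - mx) for t in terms))

def kl(a, p):
    """Binary relative entropy D(a || p) in nats."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

m, k, p = 400, 40, 0.01                 # threshold k/m = 0.1 lies above p
lhs = log_binom_tail(m, k, p)           # exact log-tail
rhs = -m * kl(k / m, p)                 # Chernoff exponent
```

The gap between the exact tail and the Chernoff bound is only polynomial in $m$ (at most $\ln(m+1)$ in the log domain, by the method of types), which is why the bound is exponentially tight.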
In accordance with (6), the probability $P_2(W \ge W_0 \mid i)$ can be replaced by the trivial estimate $P_2(W \ge W_0 \mid i) \le 1$ for $i \ge \beta_0 b_1$, and then sum (5) is upper bounded as follows:

$$P \le \sum_{i=\omega_t b_1}^{\beta_0 b_1}\binom{b_1}{i}P_2(W\ge W_0\mid i)\,P_1^i(1-P_1)^{b_1-i} + \sum_{i=\beta_0 b_1}^{b_1}\binom{b_1}{i}P_1^i(1-P_1)^{b_1-i}.$$
Let $P_{II}$ denote the first sum on the right-hand side of this inequality and $P_I$ the second sum. Let us consider each sum separately.
The sum $P_I$ is easily estimated as a tail of the binomial distribution with probability $P_1$ using the Chernoff bound:

$$P_I \le \exp\{-n\,E_I(R_1, n_1, p)\},$$

where

$$E_I(R_1, n_1, p) = \beta_0 E_0(R_1, p) - \frac{1}{n_1}H(\beta_0),$$

and $P_1$ satisfies the condition

$$P_1(n_1, R_1, p) \le \beta_0. \qquad (7)$$

Thus,

$$n_1 \ge -\frac{\ln\beta_0}{E_0(R_1, p)}.$$
Let us now consider the sum $P_{II}$:

$$P_{II} \le (\beta_0 - \omega_t)\,b_1 \times \max_{\omega_t\le\beta\le\beta_0}\binom{b_1}{\beta b_1}P_2(W\ge W_0\mid \beta b_1)\,P_1^{\beta b_1}(1-P_1)^{(1-\beta)b_1}.$$

Hence, as $n\to\infty$ ($b_1\to\infty$ and $b_0\to\infty$), we obtain

$$E_{II}(R_1, n_1, \omega_t, p) = \min_{\omega_t\le\beta\le\beta_0}\left[E_2(\beta, \omega_t, p) + \beta E_0(R_1, p) - \frac{1}{n_1}H(\beta)\right]. \qquad (8)$$
Let us note that if the minimum on the right-hand side of (8) is achieved at $\beta_0$, then, according to (6), we obtain $E_{II} = E_I$. Consequently, $E_{II} \le E_I$.
It is easy to see that, as $n\to\infty$, the following inequality is satisfied:

$$P \le \exp\{-n\,E(R_1, n_1, \omega_t, p)\},$$

where $E(R_1, n_1, \omega_t, p) = \min\{E_{II}, E_I\} = E_{II}$.
According to the lemma proved above, the complexity of the decoding algorithm $\mathcal{A}_C$ is of order $O(n\log n)$ if condition (4) is satisfied; for the obtained estimate, condition (7) must also be satisfied. Thus,

$$-\frac{\ln\beta_0}{E_0(R_1, p)} \le n_1 \le \frac{1}{R_1}\log_2\log_2 n.$$
This completes the proof.  □
Before proving Corollary 1, we need to consider the behavior of the lower bound on the error fraction $\omega_t$ guaranteed to be corrected by the G-LDPC code. In [4], a new estimate of the error fraction guaranteed to be corrected by a generalized LDPC code with a given constituent code was obtained. Let us formulate this result for G-LDPC codes:
Theorem 3.
Let the following equation have a root $\omega_0$:

$$h(\omega_0) - F_e(\omega_0, n_0) = 0, \qquad (9)$$

where $h(\cdot)$ is the binary (base-2) entropy function and $F_e(\omega_0, n_0)$ is given by

$$F_e(\omega_0, n_0) = -h(\omega_0) + \max_{s>0,\,0<v<1}\left[-\frac{\ell\,\omega_0}{2}\log_2(sv) - \frac{\ell}{n_0}\log_2\big(g_e(s, v, n_0) + g_0(s, n_0)\big)\right],$$

where $g_0(s, n_0)$ and $g_e(s, v, n_0)$ have the following forms:

$$g_0(s, n_0) = \frac{(1+s)^{n_0} + (1-s)^{n_0}}{2}, \qquad g_e(s, v, n_0) = g_d(s v^2, n_0),$$

where

$$g_d(s, n_0) = (1+s)^{n_0} - g_0(s, n_0).$$

For the found value $\omega_0$, let the following equation have a root $\alpha_0$:

$$h(\omega_0) - F_s(\alpha, \omega_0, n_0, \ell) = 0,$$

where $F_s(\alpha, \omega_0, n_0, \ell)$ is given by

$$F_s(\alpha, \omega_0, n_0, \ell) = -h(\omega_0) + \max_{s>0,\,0<v<1}\left[-\frac{\ell\,\omega_0}{2}\log_2 s + \frac{1-\alpha}{\alpha}\log_2 v - \frac{\ell}{n_0}\log_2\big(g_d(s, n_0)\,v + g_0(s, n_0)\big)\right].$$

Then, there exists a code (with probability $p_n$: $\lim_{n\to\infty} p_n = 1$) in the ensemble $\mathcal{E}_G(\ell, n_0, b_0)$ of G-LDPC codes that can correct any error pattern of weight less than $\omega_t n$, where $\omega_t = \alpha_0\,\omega_0$, with decoding complexity $O(n\log n)$.
From Theorem 3, we obtain the following:
Corollary 2.
For a given code rate $R < 1$, there exists a G-LDPC code in the ensemble $\mathcal{E}_G(\ell, n_0, b_0)$ with $\ell > 2$ such that Equation (9) has a positive root and the guaranteed corrected error fraction satisfies $\omega_t > 0$.
The proof of Theorem 3 was given in a more general form in [4]. Here, we consider only the proof of Corollary 2. For this purpose, let us formulate some useful facts proved in [4].
First, let us formulate the condition for the existence of a symbol whose inversion reduces the number of unsatisfied checks:
Lemma 4.
At least one symbol that will be inverted during one iteration of the decoding algorithm $\mathcal{A}_M$ for a G-LDPC code exists if the following condition is satisfied:

$$E(W) = 2\sum_{j=1}^{W} e^{0}_{A_1}(i_j) + \sum_{j=1}^{W} e^{1}_{A_1}(i_j) > W\ell, \qquad (11)$$

where $W$ is the number of errors in the received sequence; $i_1, i_2, \dots, i_W$ are the indices of the erroneous symbols; $e^{0}_{A_1}(i)$ is the number of edges emanating from the $i$-th variable node to the set of check nodes whose checks become satisfied after the inversion of this symbol; and $e^{1}_{A_1}(i)$ is the number of edges emanating from the $i$-th variable node to the set of check nodes whose checks remain unsatisfied after the inversion of this symbol.
Now, let us consider the estimation of the probability that the above condition is not satisfied:
Lemma 5.
The probability $P_W\{E(W) \le W\ell\}$ that condition (11) is not satisfied for a fixed error pattern of weight $W$, i.e., that $E(W) \le W\ell$, is upper bounded as follows:

$$P_W\{E(W) \le W\ell\} \le 2^{-n F_e(\omega, n_0) + o(n)}, \quad \omega = \frac{W}{n}.$$
Now, let us consider the proof of Corollary 2.
Proof. 
Let us select an arbitrarily small value $\varepsilon$ and write the following condition:

$$\lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{-n\left[F_e\left(\frac{W}{n},\,n_0\right) - h\left(\frac{W}{n}\right)\right]} < 1.$$

The left-hand side of this inequality is an upper bound on the probability that condition (11) is not satisfied for some error pattern of weight at most $\varepsilon n$ (the factor $2^{n h(W/n)}$ upper bounds the number of such patterns).
Let us introduce the following function $G(\omega)$:

$$G(\omega) = F_e(\omega, n_0) - h(\omega) = -2h(\omega) + \max_{s>0,\,0<v<1}\left[-\frac{\ell\,\omega}{2}\log_2(sv) - \frac{\ell}{n_0}\log_2\big(g_e(s, v, n_0) + g_0(s, n_0)\big)\right].$$

Since the variables $s$ and $v$ are free parameters, they can take any values satisfying the conditions $s > 0$ and $0 < v < 1$. Let us set $s = v = \omega/4$ (this choice is justified by the structure of the parity-check matrix, for which the condition $\ell > 2$ should be satisfied):

$$G^*(\omega) = -2h(\omega) - \ell\,\omega\log_2\frac{\omega}{4} - \frac{\ell}{n_0}\log_2\left(g_e\!\left(\frac{\omega}{4}, \frac{\omega}{4}, n_0\right) + g_0\!\left(\frac{\omega}{4}, n_0\right)\right).$$

Let us transform $G^*(\omega)$ as follows:

$$G^*(\omega) = -2\left(\frac{\ell}{2} - 1\right)\omega\log_2\omega + 2(1-\omega)\log_2(1-\omega) + 2\ell\,\omega - \frac{\ell}{n_0}\log_2\left(g_e\!\left(\frac{\omega}{4}, \frac{\omega}{4}, n_0\right) + g_0\!\left(\frac{\omega}{4}, n_0\right)\right).$$
It is easy to show that $g_e(s, v, n_0) + g_0(s, n_0) \le (1+s)^{n_0}$ for $0 < s < 1$ and $0 < v < 1$. Then, we obtain

$$G^*(\omega) = -2\left(\frac{\ell}{2} - 1\right)\omega\log_2\omega + O(\omega).$$
It is easily seen that $G(\omega) \ge G^*(\omega)$ implies

$$\lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{-n G\left(\frac{W}{n}\right)} \le \lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{-n G^*\left(\frac{W}{n}\right)}.$$

Since the LDPC code construction requires $\ell > 2$, we have $\frac{\ell}{2} - 1 > 0$, and consequently,

$$G(\omega) \ge -c_1\,\omega\log_2\omega - c_2\,\omega + o(\omega), \quad c_1 > 0.$$

Then

$$\lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{-n G\left(\frac{W}{n}\right)} \le \lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{\,n c_1\frac{W}{n}\log_2\frac{W}{n} + n c_2\frac{W}{n}} = \lim_{n\to\infty}\sum_{W=1}^{\varepsilon n}\left(\frac{W}{n}\right)^{c_1 W} 2^{\,c_2 W} \le \lim_{n\to\infty}\sum_{W=1}^{\varepsilon n}\left(\varepsilon^{c_1}\,2^{c_2}\right)^W = \frac{\varepsilon^{c_1}\,2^{c_2}}{1 - \varepsilon^{c_1}\,2^{c_2}} = \varepsilon'.$$
It should be noted that the sign of $c_2$ is not important, because $\varepsilon'$ can be made arbitrarily small by a suitable choice of $\varepsilon$.
Thus,

$$\lim_{n\to\infty}\sum_{W=1}^{\varepsilon n} 2^{-n\left[F_e\left(\frac{W}{n},\,n_0\right) - h\left(\frac{W}{n}\right)\right]} \le \varepsilon' < 1.$$
Consequently, a code for which condition (11) is satisfied for all error fractions $\omega_t < \varepsilon$ exists with non-zero probability in the ensemble of G-LDPC codes. □
Finally, let us consider the proof of Corollary 1.
Proof. 
The correctness of the corollary is easy to see once we note that $E_0(\cdot) > 0$ for $R_1 < C$ [6] and $E_2(\cdot) \ge 0$, which follows from (6), and that we can always select $n_1$ such that $\frac{1}{n_1}H(\beta) < \beta E_0(\cdot) + E_2(\cdot)$, because $n_1$ can be arbitrarily large according to condition (3). Moreover, according to Corollary 2, a G-LDPC construction with $\omega_t > 0$ exists for any code rate $R_2 < 1$, which allows us to omit this condition in the formulation of the corollary (unlike the formulation of the similar corollary in [12]). □

7.2. Error Exponent for Decoding Algorithm A M L

Let us consider the proof of Theorem 2.
Proof. 
To simplify the proof without loss of generality, let us consider the transmission of the all-zero codeword over the BSC with BER $p$. Let $P_\delta(w)$ be the probability that, under algorithm $\mathcal{A}_{ML}$, the zero codeword is decoded into a given codeword of weight $w$, and let $N_w$ be the number of codewords of weight $w$. Moreover, let $w_c$ be a critical number of errors that leads to erroneous decoding with algorithm $\mathcal{A}_{ML}$. Then, we can write

$$P_{ML} \le \sum_{w=d}^{w_c} N_w P_\delta(w) + P\{w \ge w_c\},$$

where $d$ is the code distance of the LG-LDPC code.
To obtain the upper bound, it is sufficient to consider the fact that the zero codeword can turn into a word of weight $w$ only if more than $w/2$ errors occurred among its $w$ nonzero positions:

$$P_\delta(w) = \sum_{i=\lceil w/2\rceil}^{w}\binom{w}{i}p^i(1-p)^{w-i} \le 2^w p^{w/2}(1-p)^{w/2},$$

where $p \le \frac{1}{2}$.
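The inequality above holds because, for $p \le 1/2$ and $i \ge w/2$, each term satisfies $p^i(1-p)^{w-i} \le (p(1-p))^{w/2}$, and the binomial coefficients sum to at most $2^w$. A quick numerical check (our sketch):

```python
import math

def p_delta_exact(w, p):
    """Exact probability of at least ceil(w/2) errors among w positions."""
    return sum(math.comb(w, i) * p ** i * (1 - p) ** (w - i)
               for i in range((w + 1) // 2, w + 1))

def p_delta_bound(w, p):
    """Upper bound 2**w * (p*(1-p))**(w/2) = (2*sqrt(p*(1-p)))**w."""
    return (2 * math.sqrt(p * (1 - p))) ** w

cases = [(w, p) for w in (3, 8, 15) for p in (0.001, 0.05, 0.4)]
ok = all(p_delta_exact(w, p) <= p_delta_bound(w, p) for w, p in cases)
```

Since $2\sqrt{p(1-p)} < 1$ for $p < 1/2$, the bound decays exponentially in $w$, which is what makes the exponent $E_\delta$ below positive.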
From this inequality, we easily obtain

$$E_\delta(\omega_c, p) = -\max_{\delta\le\omega\le\omega_c}\left[\nu(\omega) + \omega\ln\!\left(2\sqrt{p(1-p)}\right)\right],$$

where $\nu(\omega)$ is the asymptotic spectrum of the LG-LDPC code given by Lemma 1 and $\delta$ is the relative code distance of the LG-LDPC code given by Lemma 2.
With the help of the Chernoff bound, we obtain the exponent of the probability that more than $w_c$ errors have occurred:

$$P\{w \ge w_c\} \le \exp\{-n\,E_{\omega_c}(\omega_c, p)\},$$

$$E_{\omega_c}(\omega_c, p) = (1-\omega_c)\ln\frac{1-\omega_c}{1-p} + \omega_c\ln\frac{\omega_c}{p}, \quad \omega_c \ge p.$$

Consequently,

$$E_{ML}(p) = \max_{\omega_0\le\omega_c\le 1}\min\{E_\delta(\omega_c, p),\,E_{\omega_c}(\omega_c, p)\}, \qquad \omega_0 = \max(\delta, p).$$
 □
The estimates given in Lemmas 1 and 2 were obtained by a slightly modified version of Gallager's classical method [1]; thus, in this paper, we give only a sketch of the proofs.
Let us first consider the proof of Lemma 1.
Proof. 
Let us consider a fixed word of weight $W$ and find the probability that it is a codeword of a code drawn from the LG-LDPC ensemble. For this purpose, let us consider the first layer of the parity-check matrix of some LG-LDPC code from the ensemble, composed of the parity-check matrices of the single parity-check code. We can write the probability that the considered word satisfies this layer as follows:

$$P_W^{(1)} = \frac{N_1(W)}{\binom{n}{W}},$$

where $N_1(W)$ is the number of words of weight $W$ that satisfy all checks of the layer.
We estimate $N_1(W)$ as

$$N_1(W) \le \min_{s>0}\; g_0^{\,b_0}(s, n_0)\,s^{-W},$$

where $g_0(s, n_0)$ is the spectrum function of the SPC code.
Thus,

$$P_W^{(1)} \le \binom{n}{W}^{-1}\min_{s>0}\; g_0^{\,b_0}(s, n_0)\,s^{-W}.$$
It is clear that the obtained estimate is the same for all $\ell - 1$ SPC layers:

$$P_W^{(i)} \le \binom{n}{W}^{-1}\min_{s>0}\; g_0^{\,b_0}(s, n_0)\,s^{-W}, \quad i = 1, \dots, \ell - 1.$$
Similarly, we can write the probability that the considered word of weight $W$ satisfies the $\ell$-th layer of the parity-check matrix, composed of "optimal" linear codes:

$$P_W^{(\ell)} \le \binom{n}{W}^{-1}\min_{s>0}\; g_1^{\,b_1}(s, R_1, n_1)\,s^{-W},$$

where $g_1(s, R_1, n_1)$ is the spectrum function of the code with a good spectrum.
where g 1 s , R 1 , n 1 is a spectrum of the code with a good spectrum.
Since the layer permutations are independent, we can write the probability that the given word of weight $W$ is a codeword of the whole code construction as

$$P_W = \prod_{i=1}^{\ell} P_W^{(i)} \le \binom{n}{W}^{-\ell}\min_{s>0}\; g_0^{\,b_0(\ell-1)}(s, n_0)\,g_1^{\,b_1}(s, R_1, n_1)\,s^{-\ell W}.$$

Consequently, the average number of codewords of weight $W$ is given by

$$\bar N(W) = \binom{n}{W}P_W \le \binom{n}{W}^{1-\ell}\min_{s>0}\; g_0^{\,b_0(\ell-1)}(s, n_0)\,g_1^{\,b_1}(s, R_1, n_1)\,s^{-\ell W}.$$
For $W = \omega n$, we obtain

$$\nu(\omega) = \lim_{n\to\infty}\frac{\ln\bar N(\omega n)}{n} \le \nu_0(\omega).$$
 □
Now, let us consider the proof of Lemma 2.
Proof. 
If the average number of codewords $\bar N(W)$ in the ensemble of LG-LDPC codes satisfies the condition

$$\sum_{W=1}^{d_0}\bar N(W) \le 1,$$

then a code with code distance $d \ge d_0$ exists in this ensemble.
It is easy to show that the sum on the left-hand side of this inequality can be estimated by its last term. Therefore, using the estimate obtained in the previous lemma, we can write

$$\nu_0(\delta) \le 0,$$

where $\delta = d/n$ is the relative code distance.
Thus, we can obtain the maximum value $\delta_0$ such that the above condition is satisfied for all smaller values $0 \le \delta \le \delta_0$ as the smallest positive root of the following equation:

$$\nu_0(\delta_0) = 0.$$
 □

Author Contributions

Conceptualization, V.Z.; Investigation, P.R. and K.A.; Writing—original draft, P.R.; writing—review and editing, K.A. All authors have read and agreed to the published version of the manuscript.

Funding

The reported study was funded by RFBR, project number 19-37-51036.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gallager, R.G. Low-Density Parity-Check Codes; MIT Press: Cambridge, MA, USA, 1963. [Google Scholar]
  2. Zyablov, V.; Pinsker, M. Estimation of the error-correction complexity for Gallager low-density codes. Probl. Inf. Transm. 1975, 11, 18–28. [Google Scholar]
  3. Rybin, P.; Zyablov, V. Asymptotic estimation of error fraction corrected by binary LDPC code. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 31 July–5 August 2011; pp. 351–355. [Google Scholar] [CrossRef]
  4. Zyablov, V.; Rybin, P. Analysis of the relation between properties of LDPC codes and the Tanner graph. Probl. Inf. Transm. 2012, 48, 297–323. [Google Scholar] [CrossRef]
  5. Rybin, P. On the error-correcting capabilities of low-complexity decoded irregular LDPC codes. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 3165–3169. [Google Scholar] [CrossRef]
  6. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons, Inc.: New York, NY, USA, 1968. [Google Scholar]
  7. Barg, A.; Zemor, G. Error exponents of expander codes. IEEE Trans. Inf. Theory 2002, 48, 1725–1729. [Google Scholar] [CrossRef]
  8. Barg, A.; Zémor, G. Error Exponents of Expander Codes Under Linear-Complexity Decoding. SIAM J. Discret. Math. 2004, 17, 426–445. [Google Scholar] [CrossRef]
  9. Burshtein, D.; Barak, O. Upper bounds on the error exponents of LDPC code ensembles. In Proceedings of the 2006 IEEE International Symposium on Information Theory, Seattle, WA, USA, 9–14 July 2006; pp. 401–405. [Google Scholar] [CrossRef]
  10. Barak, O.; Burshtein, D. Lower bounds on the error rate of LDPC code ensembles. IEEE Trans. Inf. Theory 2007, 53, 4225–4236. [Google Scholar] [CrossRef]
  11. Rybin, P.; Frolov, A. On the Error Exponents of Capacity Approaching Construction of LDPC code. In Proceedings of the 2018 10th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), Moscow, Russia, 5–9 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  12. Zyablov, V.; Rybin, P. Estimation of the exponent of the decoding error probability for a special generalized LDPC code. J. Commun. Technol. Electron. 2012, 57, 946–952. [Google Scholar] [CrossRef]
  13. Rybin, P.S.; Zyablov, V.V. Asymptotic bounds on the decoding error probability for two ensembles of LDPC codes. Probl. Inf. Transm. 2015, 51, 205–216. [Google Scholar] [CrossRef]
  14. Blokh, E.; Zyablov, V. Linear Concatenated Codes; Nauka: Moscow, Russia, 1982. [Google Scholar]
Figure 1. Comparison of the dependence on $n_0$ of $E_C(\cdot)$, $E_{ML}(\cdot)$, and $E_0(\cdot)$ for fixed $R_1 = 0.85$, $n_1 = 2000$, $R = 0.5$, and $p = 10^{-3}$.
Figure 2. Comparison of the dependences on $R_1$ of $E_C(\cdot)$, $E_{ML}(\cdot)$, and $E_0(\cdot)$ for fixed $n_1 = 2000$, $R = 0.5$, and $p = 10^{-3}$.
Figure 3. Comparison of the dependencies on $R$ for fixed $p = 10^{-3}$ of $E_{ML}(\cdot)$, maximized over the values of $n_0$ and $R_1$ for fixed $n_1 = 2000$, and of $E_0(\cdot)$.
Figure 4. Comparison of the dependencies on $R$ of $E_C(\cdot)$ and $E_{ML}(\cdot)$, maximized over the values of $n_0$ and $R_1$ for fixed $n_1 = 2000$ and $p = 10^{-3}$.
Cite as: Rybin, P.; Andreev, K.; Zyablov, V. Error Exponents of LDPC Codes under Low-Complexity Decoding. Entropy 2021, 23, 253. https://doi.org/10.3390/e23020253