Article

Role of the Product λ′(0)ρ′(1) in Determining LDPC Code Performance

Department of Engineering and Architecture (DIA), University of Trieste, 34127 Trieste, Italy
* Author to whom correspondence should be addressed.
Current address: Via A. Valerio, 10, I-34127 Trieste, Italy.
Electronics 2019, 8(12), 1515; https://doi.org/10.3390/electronics8121515
Submission received: 31 October 2019 / Revised: 4 December 2019 / Accepted: 6 December 2019 / Published: 10 December 2019
(This article belongs to the Section Computer Science & Engineering)

Abstract
The objective of this work is to analyze the importance of the product λ′(0)ρ′(1) in determining low-density parity-check (LDPC) code performance, as regards its influence both on the weight distribution function and on the decoding thresholds. The analysis is based on the 2006 paper by Di et al. for what concerns the weight distribution function, and on the 2018 paper by Vatta et al. for what concerns the LDPC decoding thresholds. In particular, in the first paper Di et al. analyzed the relation between the above mentioned product and the minimum codeword weight of an ensemble of random LDPC codes, whereas in the second some analytical upper bounds on the LDPC decoding thresholds were determined. In the present work, besides analyzing the performance of an ensemble of LDPC codes through the outcomes of Di et al.'s 2006 paper, we give the relation between one of the upper bounds found in Vatta et al.'s 2018 paper and the above mentioned product λ′(0)ρ′(1), thus showing its role in determining an upper bound to the LDPC decoding thresholds as well.

1. Introduction

Low-Density Parity-Check (LDPC) codes, belonging to the class of block channel codes, were introduced for the first time in the 1960s in Robert Gallager's doctoral thesis [1]. They are now counted among the most promising and cutting-edge techniques in contemporary channel coding. Due to the technological impediments of the time in which they were introduced and to the complexity of their decoder, these codes were hardly considered for about 30 years, with the exception of Tanner's graphical description, introduced in [2] and subsequently known as the Tanner graph. Working independently of Gallager's results, the authors of [3,4] re-invented LDPC codes in the mid 1990s. Afterwards, because of their astonishing performance approaching the Shannon limit, together with Turbo codes [5], they were rapidly incorporated into current communication standards [6,7,8].
The asymptotic performance of a block channel code, such as an LDPC code, is determined by its minimum distance, i.e., by its minimum weight codewords and by their multiplicities. These determine its correction capability and its asymptotic performance, expressed in terms of bit error rate (BER) and frame error rate (FER) vs. the signal-to-noise ratio (SNR) E_b/N_0, from moderate to high SNRs, i.e., away from capacity. Moreover, as far as LDPC codes are concerned, their iterative decoding exhibits the well-known "threshold phenomenon", first observed in [1]. In other words, the noise threshold defines a channel noise upper bound below which the probability of lost information can be made as small as desired.
Since it is extremely difficult to study the performance of an LDPC code ensemble analytically, the determination of its weight enumerating function being a complicated task, in this work we try to give an insight into this problem by analyzing the importance of the product λ′(0)ρ′(1) in influencing not only this function, which, as said above, determines the asymptotic BER and FER performance away from capacity, but also the decoding thresholds, i.e., the performance next to the Shannon limit. Namely, on the basis of the results of [9,10], respectively, we are interested in determining, on one side, the relationship between the above mentioned product and the minimum distance of an LDPC code ensemble, and, on the other side, the relationship between this product and the noise threshold of an LDPC code ensemble. The first, namely the minimum distance, is related to the LDPC codewords' weight distribution, i.e., to their distance spectrum. The second, namely the noise threshold, is related to the convergence behavior of their iterative decoding algorithm, which is particularly important because LDPC codes, as well as turbo-like codes, are "capacity-achieving" codes.
As far as the first of the above cited papers, namely [9], is concerned, its main result asserts that the minimum distance growth rate of irregular LDPC code ensembles is determined only by the value of the product λ′(0)ρ′(1), a number that is related to the degree distribution polynomial couple λ(x) and ρ(x). In particular, if λ′(0)ρ′(1) > 1, the minimum distance grows sublinearly with the block length; otherwise, it grows linearly with the block length.
As far as the second of the above cited papers, namely Vatta et al.'s 2018 paper [10], is concerned, in that work we derived three computationally light upper bounds on LDPC belief-propagation decoding thresholds, usually determined "exactly" through density evolution. The derivation assumes, as in [11], the validity of the Gaussian Approximation (GA), whose employment is very convenient when an effective and computationally light method is required (see [10,12,13,14,15]), and exploits the approach of [16] to determine the asymptotic analytical behavior of Δ(s,t), defined, e.g., in [6]. Indeed, the clarifying vision and interpretation of [11] allows one to determine the threshold as the ultimate value guaranteeing the convergence of the recurrent sequence defined therein, but the authors of [11] did not provide any mathematical method to calculate it. Thus, in [16] we presented a mathematical method, based on the theory of quadratic degeneracy, allowing the evaluation of noise thresholds by converting a convergence problem into a mathematical analysis problem. Using the algorithm given in [16], a computationally light approximation of LDPC belief-propagation decoding thresholds can be obtained, assuming, as in [11], the validity of the GA.
In Section 2 we recall the graphical representation of LDPC codes and, in Section 3, the Gaussian Approximation approach. In Section 4 we review the mathematical method derived in [16], and in Section 5 we explain how the upper bounds of [10] were obtained. Furthermore, to analyze the relationship between the product λ′(0)ρ′(1) and the minimum distance of an LDPC code ensemble, in Section 6 the results of [9] are recalled, and, to analyze the relationship between this product and the noise threshold of an LDPC code ensemble, in Section 7 we present a Lemma specifying the relationship between the third noise threshold upper bound found in [10] and the product λ′(0)ρ′(1). In Section 8, the numeric results of these two analyses are reported. These results are discussed in Section 9, which also reports some simulation results to support the discussion. Finally, Section 10 summarizes the conclusions.

2. Graphical Representation of LDPC Codes

LDPC codes can be represented graphically through a bipartite graph, commonly known as the Tanner graph [2]. As the trellis diagram gives an efficient graphical representation of the behavior of a convolutional code, so does the Tanner graph in providing an efficient graphical representation of how an LDPC encoder and decoder work.
The nodes in a Tanner graph are classified into variable nodes and check nodes. In the drawing of the Tanner graph, the following rule must be respected: an edge connects a variable node i to a check node j when the corresponding element h_ij in the parity-check matrix H is a 1. From this it may be deduced that there is a check node for each of the m = n − k check equations, and that there is a variable node for each of the n code bits c_i.
As far as irregular LDPC codes are concerned [4], the parameters d_l and d_r specify the maximum degrees of the variable and check node distributions in their Tanner graphs, defined, e.g., in [17]. In particular, the polynomial λ(x) (respectively, ρ(x)) defines the variable (check) node edge-perspective degree distribution:
\lambda(x) = \sum_{i=2}^{d_l} \lambda_i x^{i-1}    (1)
\rho(x) = \sum_{j=2}^{d_r} \rho_j x^{j-1}    (2)
The rate of an LDPC code can be expressed as [18]:
r(\lambda,\rho) = 1 - \frac{\int_0^1 \rho(x)\,dx}{\int_0^1 \lambda(x)\,dx} = 1 - \frac{\sum_{j=2}^{d_r} \rho_j / j}{\sum_{j=2}^{d_l} \lambda_j / j}    (3)
The edge-perspective degree distributions λ(x) and ρ(x) are related to the node-perspective degree distributions L(x) and R(x) as follows [9]:
L(x) = \sum_{i=2}^{d_l} L_i x^i    (4)
R(x) = \sum_{j=2}^{d_r} R_j x^j    (5)
where
L_j = \frac{\lambda_j / j}{\int_0^1 \lambda(x)\,dx} = \frac{\lambda_j / j}{\sum_{i=2}^{d_l} \lambda_i / i}    (6)
R_j = \frac{\rho_j / j}{\int_0^1 \rho(x)\,dx} = \frac{\rho_j / j}{\sum_{i=2}^{d_r} \rho_i / i}.    (7)
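As a small illustration (ours, not taken from the paper's software), the quantities defined above can be computed directly from a (λ, ρ) pair; the function names below are ours, and the example coefficients are the d_l = 4 pair of Table 1:

    def design_rate(lam, rho):
        """Equation (3): r = 1 - (sum_j rho_j/j) / (sum_i lambda_i/i)."""
        return 1.0 - sum(r / j for j, r in rho.items()) / sum(l / i for i, l in lam.items())

    def L2(lam):
        """Node-perspective fraction of degree-2 variable nodes, Equation (6)."""
        return (lam.get(2, 0.0) / 2.0) / sum(l / i for i, l in lam.items())

    def lambda_prime_0(lam):
        """lambda'(0) = lambda_2, the derivative used throughout the paper."""
        return lam.get(2, 0.0)

    def rho_prime_1(rho):
        """rho'(1) = sum_j (j - 1) rho_j."""
        return sum((j - 1) * r for j, r in rho.items())

    # Example: the d_l = 4 degree-distribution pair of Table 1 (from Table I of [18]).
    lam = {2: 0.38354, 3: 0.04237, 4: 0.57409}
    rho = {5: 0.24123, 6: 0.75877}
    print(design_rate(lam, rho))                   # ~0.5
    print(L2(lam))                                 # ~0.54883, as in Table 1
    print(lambda_prime_0(lam) * rho_prime_1(rho))  # ~1.82518, as in Table 1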

3. Gaussian Approximation

Following the assumptions made in [11], the distributions of the messages involved in the LDPC iterative decoding process may, under appropriate hypotheses, be approximated as Gaussians. Since a Gaussian is completely specified by its mean and variance, during the iterative decoding process it is sufficient to track only the means and variances of a generic check node output message, u, and of a generic variable node output message, v. Furthermore, since in [11], assuming the validity of the symmetry condition, the variance σ² was shown to be related to the mean m by σ² = 2m, only the means need to be kept in the computation. The means of u and v at the l-th iteration are denoted by m_u^(l) and m_v^(l), respectively. Furthermore, the log-likelihood ratio (LLR) message u_0 from the channel can be taken as Gaussian with mean m_{u_0} = 2/σ_n² and variance 4/σ_n², where σ_n² = N_0/2 is the variance of the channel noise.
The mean of the output of a degree-i variable node at the l-th iteration is:
m_{v,i}^{(l)} = m_{u_0} + (i-1)\, m_u^{(l-1)}    (8)
where m_{u_0} is the mean of u_0 and m_u^{(l-1)} is the mean of u at the (l−1)-th iteration.
Defining ϕ ( x ) as in Definition 1 in [11], the update rule for an irregular code becomes:
m_{u,j}^{(l)} = \phi^{-1}\!\left( 1 - \left[ 1 - \sum_{i=2}^{d_l} \lambda_i\, \phi\big(m_{v,i}^{(l)}\big) \right]^{j-1} \right).    (9)
The output of a check node is specified by its mean m_u^{(l)}, which can be calculated as a linear combination of the means m_{u,j}^{(l)}:
m_u^{(l)} = \sum_{j=2}^{d_r} \rho_j\, \phi^{-1}\!\left( 1 - \left[ 1 - \sum_{i=2}^{d_l} \lambda_i\, \phi\big(m_{u_0} + (i-1)\, m_u^{(l-1)}\big) \right]^{j-1} \right).    (10)
Defining s = m_{u_0} and t_l = m_u^{(l)}, Equation (10) may be rewritten as:
t_l = f(s, t_{l-1})    (11)
where the function f(s,t) is expressed as:
f_j(s,t) := \phi^{-1}\!\left( 1 - \left[ 1 - \sum_{i=2}^{d_l} \lambda_i\, \phi\big(s + (i-1)t\big) \right]^{j-1} \right)    (12)
f(s,t) := \sum_{j=2}^{d_r} \rho_j\, f_j(s,t).    (13)
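To make the recursion concrete, the following rough Python sketch (ours, not the authors' code) iterates Equation (11) for one of the degree-distribution pairs of Table 1. The function φ(x) is replaced by a piecewise approximation taken as an assumption from the GA literature around [11] (an exponential form for small arguments and a √(π/x)·e^(−x/4) form for large ones), its inverse is obtained by bisection, and the divergence test on t_l is heuristic, so the output is only indicative:

    import math
    from scipy.optimize import brentq

    def phi(x):
        # Piecewise approximation of phi(x); branches and constants are assumptions
        # borrowed from the GA literature, not reproduced from this paper.
        if x < 10.0:
            return math.exp(-0.4527 * x ** 0.86 + 0.0218)
        return math.sqrt(math.pi / x) * math.exp(-x / 4.0) * (1.0 - 10.0 / (7.0 * x))

    def phi_inv(y):
        # phi decreases from ~1 towards 0, so a wide bracket and bisection suffice.
        return brentq(lambda x: phi(x) - y, 1e-9, 800.0)

    def f(s, t, lam, rho):
        """One update of the check-node output mean, Equations (12)-(13)."""
        u = sum(l_i * phi(s + (i - 1) * t) for i, l_i in lam.items())
        return sum(r_j * phi_inv(1.0 - (1.0 - u) ** (j - 1)) for j, r_j in rho.items())

    # d_l = 4 degree-distribution pair of Table 1; sigma_n well below its threshold 0.9114.
    lam = {2: 0.38354, 3: 0.04237, 4: 0.57409}
    rho = {5: 0.24123, 6: 0.75877}
    sigma_n = 0.85
    s = 2.0 / sigma_n ** 2            # s = m_u0 = 2 / sigma_n^2

    t = 0.0
    for l in range(1000):             # Equation (11): t_l = f(s, t_{l-1})
        t = f(s, t, lam, rho)
        if t > 50.0:                  # crude test for t_l -> infinity (successful decoding)
            print("t_l grows without bound (iteration %d): sigma_n below threshold" % (l + 1))
            break
    else:
        print("t_l stalled near %.3f: sigma_n appears to be above the threshold" % t)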

4. Mathematical Method of [16]

As noted in [16], the problem of determining the convergence of Equation (10) may be solved by converting it into a problem of quadratic degeneracy, whose solution can be entrusted to commercial software. Provided that the second partial derivative of f(s,t) with respect to t, f_tt(s,t), is different from zero, the problem of determining the convergence of Equation (10) becomes the search for the solution of the following system:
f(s,t) = t,\qquad f_t(s,t) = 1    (14)
where f_t(s,t) is the first partial derivative of f(s,t) with respect to t. Its solution is the value s* = m*_{u_0}, which is the minimum s = m_{u_0} guaranteeing the convergence of Equation (10).
Defining, as in [6],
\Delta(s,t) := f(s,t) - t    (15)
and
\Delta_t(s,t) = f_t(s,t) - 1    (16)
the system in Equation (14) becomes:
\Delta(s,t) = 0,\qquad \Delta_t(s,t) = 0.    (17)
Its solution (s*, t*) gives an estimate of the belief-propagation decoding threshold σ* := √(2/s*) that may be determined exactly using density evolution. To solve Equation (17), an invertible approximation of the function φ(x) is needed (see, e.g., [11,19,20]).
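The following heuristic sketch (again ours, and not the analytical method of [16]) mimics this threshold characterization numerically: it bisects on s for the smallest value at which Δ(s,t) = f(s,t) − t stays nonnegative over a finite grid of t values, which is the tangency condition expressed by Equations (14) and (17). The φ(x) approximation, the t-grid, and the bisection bracket are all assumptions:

    import math
    from scipy.optimize import brentq

    def phi(x):                       # same assumed piecewise approximation as in the previous sketch
        if x < 10.0:
            return math.exp(-0.4527 * x ** 0.86 + 0.0218)
        return math.sqrt(math.pi / x) * math.exp(-x / 4.0) * (1.0 - 10.0 / (7.0 * x))

    def phi_inv(y):
        return brentq(lambda x: phi(x) - y, 1e-9, 800.0)

    lam = {2: 0.38354, 3: 0.04237, 4: 0.57409}   # d_l = 4 pair of Table 1
    rho = {5: 0.24123, 6: 0.75877}

    def f(s, t):
        u = sum(l_i * phi(s + (i - 1) * t) for i, l_i in lam.items())
        return sum(r_j * phi_inv(1.0 - (1.0 - u) ** (j - 1)) for j, r_j in rho.items())

    def delta_min(s, t_grid):
        return min(f(s, t) - t for t in t_grid)   # min over t of Delta(s,t), Equation (15)

    t_grid = [0.05 + 0.1 * k for k in range(600)]
    lo, hi = 1.8, 3.2                 # bracket for s* (sigma roughly between 1.05 and 0.79)
    for _ in range(30):               # bisection on s
        mid = 0.5 * (lo + hi)
        if delta_min(mid, t_grid) >= 0.0:
            hi = mid                  # Delta >= 0 on the whole grid: s can be reduced
        else:
            lo = mid
    print(math.sqrt(2.0 / hi))        # rough estimate of sigma*; Table 1 reports 0.9114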

5. Upper Bounds on LDPC Codes Decoding Thresholds

The upper bounds on the thresholds were determined in [10] from the asymptotic behavior of Equation (17). The first bound, called s*_bound in [10], was obtained by determining an approximation of the function φ(x) valid for large arguments. To obtain the second upper bound, called s*_approx in [10], we used the approximation in Equation (16) of [10] (which was implicitly used in [11]). Finally, invoking Jensen's inequality to manipulate the second upper bound, the third bound, called s*_Jensen in [10], was determined.

5.1. Upper Bound on LDPC Codes Decoding Thresholds Holding for i_1 ≥ 2

As far as the above mentioned first upper bound on LDPC code thresholds (called s*_bound in [10]) is concerned, in [20] the following lemma was proved.
Lemma: Given λ(x) of minimum degree i_1 ≥ 2, and defining z(s,t) as z(s,t) := (s + (i_1 − 1)t)/2 and A_j as A_j := 1/[(j−1)^2 λ_{i_1}^2], where W(·) is the Lambert W function, the following asymptotic approximation holds:
f(s,t) = 2 \sum_{j=2}^{d_r} \rho_j\, W\!\big(A_j\, z(s,t)\, e^{z(s,t)}\big) + O(t^{-1}).    (18)
Recalling that dW(x)/dx = 1/(x + e^{W(x)}):
f_t(s,t) = 2 \sum_{j=2}^{d_r} \rho_j\, \frac{z_t(s,t)\, e^{z(s,t)}\,\big(1 + z(s,t)\big)}{z(s,t)\, e^{z(s,t)} + e^{W(A_j z(s,t) e^{z(s,t)}) - \log A_j}}.    (19)
Applying Equation (14) to Equations (18) and (19) we get:
2 \sum_{j=2}^{d_r} \rho_j\, W\!\big(A_j z(s,t) e^{z(s,t)}\big) = t,\qquad 2 \sum_{j=2}^{d_r} \rho_j\, \frac{z_t(s,t) e^{z(s,t)} (1 + z(s,t))}{z(s,t) e^{z(s,t)} + e^{W(A_j z(s,t) e^{z(s,t)}) - \log A_j}} = 1,    (20)
and Equation (17) can be rewritten as:
2 \sum_{j=2}^{d_r} \rho_j\, W\!\big(A_j z(s,t) e^{z(s,t)}\big) - t = 0,\qquad 2 \sum_{j=2}^{d_r} \rho_j\, \frac{z_t(s,t) e^{z(s,t)} (1 + z(s,t))}{z(s,t) e^{z(s,t)} + e^{W(A_j z(s,t) e^{z(s,t)}) - \log A_j}} - 1 = 0.    (21)
Its solution (s*_bound, t*_bound) determines the bound σ*_bound = √(2/s*_bound), which is valid for any i_1 ≥ 2, unlike the other two bounds (s*_approx and s*_Jensen) reported in [10], which both hold for i_1 = 2 only.

5.2. Further Upper Bounds on LDPC Codes Decoding Thresholds Holding for i_1 = 2

With a_j := \log A_j = -2 \log\big((j-1)\lambda_{i_1}\big), Equation (18) can be rewritten as:
f(s,t) = 2 \sum_{j=2}^{d_r} \rho_j\, W\!\big(z(s,t)\, e^{z(s,t) + a_j}\big) + O(t^{-1}).    (22)
Assuming that the following simplified approximation holds:
W\!\big(z(s,t)\, e^{z(s,t) + a_j}\big) \approx W\!\big((z(s,t) + a_j)\, e^{z(s,t) + a_j}\big),    (23)
calling x := z(s,t) + a_j, and since W(x e^x) = x for x > 0, we find a much simpler asymptotic expression for f(s,t) than that of Equation (18):
f(s,t) \approx 2 \sum_{j=2}^{d_r} \rho_j\, W\!\big((z(s,t) + a_j)\, e^{z(s,t) + a_j}\big) = 2 \sum_{j=2}^{d_r} \rho_j\, \big(z(s,t) + a_j\big) = \sum_{j=2}^{d_r} \rho_j\, \big(s + (i_1 - 1)t - 4\log((j-1)\lambda_{i_1})\big).    (24)
Thus, the asymptotic behavior of f(s,t) is given by:
f(s,t) = s + (i_1 - 1)t - 4\log \lambda_{i_1} - 4 \sum_{j=2}^{d_r} \rho_j \log(j-1) + O(t^{-1}).    (25)
This is the same asymptotic expression obtained, following a different derivation, in [11], where the approximation in Equation (23) was implicitly used in the proof of Lemma 3.
Thus, ignoring the O(t^{-1}) term, for large t Equation (17) can be rewritten as:
s + (i_1 - 2)t - 4\log \lambda_{i_1} - 4 \sum_{j=2}^{d_r} \rho_j \log(j-1) = 0,\qquad i_1 - 2 = 0.    (26)
The solution of Equation (26) is:
s^*_{\mathrm{approx}} = 4\log \lambda_{i_1} + 4 \sum_{j=2}^{d_r} \rho_j \log(j-1)    (27)
and σ*_approx = √(2/s*_approx) gives the second upper bound of [10].
Applying Jensen's inequality:
\prod_{j=2}^{d_r} (j-1)^{\rho_j} \le \sum_{j=2}^{d_r} \rho_j (j-1)    (28)
and ignoring the O(t^{-1}) term, for large t Equation (17) can be rewritten as:
s + (i_1 - 2)t - 4\log \lambda_{i_1} - 4\log \sum_{j=2}^{d_r} (j-1)\rho_j = 0,\qquad i_1 - 2 = 0.    (29)
The solution of Equation (29) is:
s^*_{\mathrm{Jensen}} = 4\log \lambda_{i_1} + 4\log \sum_{j=2}^{d_r} (j-1)\rho_j.    (30)
Taking σ*_Jensen = √(2/s*_Jensen) we obtain the third upper bound of [10]. Since, by Jensen's inequality as in Equation (28):
\sum_{j=2}^{d_r} \rho_j \log(j-1) \le \log \sum_{j=2}^{d_r} (j-1)\rho_j    (31)
it results that s*_approx ≤ s*_Jensen and thus σ*_Jensen ≤ σ*_approx, i.e., σ*_Jensen gives a tighter upper bound.
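As a numerical illustration (our sketch, not the authors' script), the two closed-form bounds of Equations (27) and (30) can be evaluated directly from a degree-distribution pair; natural logarithms are used, consistently with the numerical values reported in Table 1:

    import math

    def threshold_bounds(lam2, rho):
        """sigma*_approx and sigma*_Jensen from Equations (27) and (30), with sigma* = sqrt(2/s*)."""
        s_approx = 4.0 * math.log(lam2) + 4.0 * sum(r * math.log(j - 1) for j, r in rho.items())
        s_jensen = 4.0 * math.log(lam2) + 4.0 * math.log(sum((j - 1) * r for j, r in rho.items()))
        return math.sqrt(2.0 / s_approx), math.sqrt(2.0 / s_jensen)

    # d_l = 4 pair of Table 1: lambda_2 = 0.38354, rho_5 = 0.24123, rho_6 = 0.75877.
    sigma_approx, sigma_jensen = threshold_bounds(0.38354, {5: 0.24123, 6: 0.75877})
    print(sigma_approx, sigma_jensen)   # sigma*_Jensen ~ 0.9116 (cf. Table 1) and <= sigma*_approx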

6. Role of the Product λ′(0)ρ′(1) in Determining the Weight Distribution of LDPC Codes

In Reference [9] it was demonstrated that λ′(0) = λ_2 occupies a key position in determining LDPC code performance, both theoretically and in practice. In particular, the minimum distance growth rate, namely, whether it is linear or not, was shown to depend only on the product λ′(0)ρ′(1): if λ′(0)ρ′(1) > 1, the minimum distance grows sublinearly with the block length, otherwise, i.e., if λ′(0)ρ′(1) < 1, it grows linearly with the block length.
Moreover, in Reference [9] it was demonstrated that, when λ′(0)ρ′(1) > 1 (the condition examined in this paper), for any integer l such that:
l \le \min\{ L_2 n,\ (1-r)n \}    (32)
where n is the codeword length, the probability that a randomly chosen code G ∈ C(n, λ, ρ) has codewords of weight l is:
1 - \Pr(X_l = 0) = 1 - e^{-\frac{(\lambda'(0)\rho'(1))^l}{2l}} + O(n^{-1/3}).    (33)
From Equations (32) and (33), it may be concluded that both the parameter L_2 and the product λ′(0)ρ′(1) affect the weight distribution of an LDPC code.
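For illustration (our sketch, not from [9]), Equation (33) can be evaluated, neglecting the O(n^{-1/3}) term, to see how two values of the product translate into the probability of low-weight codewords; the products used below are the d_l = 4 and d_l = 8 entries of Table 1:

    import math

    def prob_weight_l(product, l):
        """1 - Pr(X_l = 0) ~ 1 - exp(-(lambda'(0) rho'(1))^l / (2 l)), neglecting O(n^(-1/3))."""
        return 1.0 - math.exp(-product ** l / (2.0 * l))

    # Products of Table 1: d_l = 4 gives 1.82518, d_l = 8 gives 1.73199.
    for product in (1.82518, 1.73199):
        print([round(prob_weight_l(product, l), 4) for l in (2, 4, 8, 16)])
    # The smaller product yields smaller probabilities at every l, which is the
    # ranking argument used for the asymptotic BER in Section 9.1.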

7. Role of the Product λ′(0)ρ′(1) in Determining the Decoding Threshold of LDPC Codes

The bound of Equation (30) is strictly related to the product λ′(0)ρ′(1) investigated in this paper, but this relationship was not made explicit in [10]. In particular, extending the result of [10], the following Lemma can be easily proven.
Lemma 1.
The above mentioned third bound on the threshold, given in Equation (30), may be rewritten as:
s^*_{\mathrm{Jensen}} = 4 \log\big(\lambda'(0)\rho'(1)\big).    (34)
Proof of Lemma 1.
Recalling the expressions of λ(x) and ρ(x) in (1) and (2), respectively, λ′(0) may be expressed as
\lambda'(0) = \left.\sum_{i=2}^{d_l} (i-1)\lambda_i x^{i-2}\right|_{x=0} = \lambda_2    (35)
and ρ′(1) as
\rho'(1) = \left.\sum_{j=2}^{d_r} (j-1)\rho_j x^{j-2}\right|_{x=1} = \sum_{j=2}^{d_r} (j-1)\rho_j.    (36)
Thus, recalling the expression of s*_Jensen found in Equation (30), which holds for i_1 = 2 only, this can be rewritten as:
s^*_{\mathrm{Jensen}} = 4\log \lambda_2 + 4\log \sum_{j=2}^{d_r} (j-1)\rho_j    (37)
and thus, given the above mentioned expressions found for λ′(0) and ρ′(1), also as:
s^*_{\mathrm{Jensen}} = 4\log \lambda'(0) + 4\log \rho'(1)    (38)
from which we get the result. □
Taking σ*_Jensen = √(2/s*_Jensen), the third upper bound of [10] may be rewritten as:
\sigma^*_{\mathrm{Jensen}} = \frac{1}{\sqrt{2\log\big(\lambda'(0)\rho'(1)\big)}}.    (39)
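As a quick check of Equation (39) (our arithmetic), consider the d_l = 4 column of Table 1:

\lambda'(0)\rho'(1) = 0.38354 \times 4.75877 \simeq 1.82518, \qquad \sigma^*_{\mathrm{Jensen}} = \frac{1}{\sqrt{2\,\ln(1.82518)}} \simeq 0.9117,

in agreement, up to rounding, with the value σ*_Jensen = 0.91160 reported in Table 1.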

8. Numeric Results

Since λ′(0) and ρ′(1) are given by Equations (35) and (36), respectively, we obtain:
\lambda'(0)\rho'(1) = \lambda_2 \sum_{j=2}^{d_r} (j-1)\rho_j.    (40)
In Table 1, Table 2, and Table 3 we report λ(x) and ρ(x) for the irregular LDPC codes of Tables I and II in [18]. For each pair of degree distributions, we also report the values of the product λ′(0)ρ′(1) and of the parameter L_2. Moreover, we give the σ* values obtained in [18] by applying the density evolution analysis, together with the upper bound σ*_Jensen = √(2/s*_Jensen) obtained from Equation (30).
Besides these results, we also report the SNR_gap and SNR_gap,spline values in dB, given by:
\mathrm{SNR}_{\mathrm{gap}}(d_l) := \frac{1/\big(2(\sigma^*_{\mathrm{Jensen}}(d_l))^2\big)}{C_1^{-1}(1/2)}    (41)
\mathrm{SNR}_{\mathrm{gap,spline}}(d_l) := \frac{1/\big(2(\sigma^*_{\mathrm{Jensen}}(d_l))^2\big)}{C_{\mathrm{spline}}^{-1}(1/2)}    (42)
which quantify the gap of the bound σ*_Jensen from the Shannon limit. In Equations (41) and (42), C_1^{-1}(1/2) and C_spline^{-1}(1/2) were evaluated using an analytical approximation and a spline interpolation of C^{-1}(r), respectively, both recalled in Appendix A. Applying the analytical approximation we get the Shannon limit σ_Shannon,1 = 0.977813, in good agreement with [21]; applying the spline interpolation we find σ_Shannon,spline = 0.978696, in good agreement with [18]. In terms of noise power, the Shannon limit is given by C_1^{-1}(1/2) = (2σ_Shannon,1^2)^{-1} = 0.522948 and C_spline^{-1}(1/2) = (2σ_Shannon,spline^2)^{-1} = 0.522005.
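As a numerical check of Equations (41) and (42) (our arithmetic, not the authors' script), the SNR gap of the d_l = 4 column of Table 1 can be recomputed from σ*_Jensen and the Shannon-limit SNR values quoted above:

    import math

    sigma_jensen = 0.91160                    # Table 1, d_l = 4
    snr_bound = 1.0 / (2.0 * sigma_jensen ** 2)
    for shannon_snr in (0.522948, 0.522005):  # C_1^{-1}(1/2) and C_spline^{-1}(1/2)
        print(round(10.0 * math.log10(snr_bound / shannon_snr), 5))
    # prints ~0.60903 and ~0.61687 dB, matching the SNR gap rows of Table 1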

9. Discussion and Simulation Results

Given the pairs λ(x) and ρ(x) of Table 1, Table 2, and Table 3, for each of them an ensemble of random rate-1/2 LDPC codes has been generated, and their performance simulated using customized software built on the basis of [22], assuming a memoryless binary-input additive white Gaussian noise (BI-AWGN) channel.
Since l in Equation (33) is upper bounded by Equation (32), the discussion of the results may be conducted separately for the following two conditions:
  • L_2 n > (1 − r) n and
  • L_2 n < (1 − r) n.

9.1. Case 1: L_2 n > (1 − r) n

The rate r of the codes whose pairs λ(x) and ρ(x) are listed in Table 1, Table 2, and Table 3 is 1/2. It follows that, for the examples considered in the paper, 1 − r = 1/2. Thus, the degree distributions fulfilling the condition L_2 n > (1 − r) n are those with L_2 n > n/2, i.e., L_2 > 1/2. This condition is fulfilled by the distributions having d_l = 4, 6, and 8 of Table 1. This implies that l ≤ n/2 for all these codes, i.e., that the upper bound on l is fixed. Given this fixed bound on l, the randomly selected LDPC codes having the degree distributions given in the second, third, and last column of Table 1 are characterized by an asymptotic BER performance that depends on the probability of Equation (33) only, i.e., only on the value of the product λ′(0)ρ′(1). Among these three degree distributions, the randomly selected LDPC code having a pair λ(x) and ρ(x) that minimizes Equation (33), and thus is characterized by the best asymptotic BER performance, is the one for which the product λ′(0)ρ′(1) is minimum. This is the case of the code ensemble with d_l = 8 in Table 1. On the other hand, the randomly selected LDPC code having a pair λ(x) and ρ(x) that maximizes Equation (33) is the one for which the product λ′(0)ρ′(1) is maximum. This is the case of the code ensemble with d_l = 4 in Table 1. Moreover, the minimization of the product λ′(0)ρ′(1) implies, as expected from Equation (39), the maximization of σ*_Jensen and, thus, a minimization of SNR_gap and of SNR_gap,spline.
Figure 1 illustrates the BER performance of two randomly generated codes with pairs λ(x) and ρ(x) having d_l = 4 and 8, respectively maximizing and minimizing the product λ′(0)ρ′(1).
As shown in the figure, the simulated performance achieved by the code having d_l = 8 is better than that achieved by the code having d_l = 4. For instance, at E_b/N_0 = 5 dB, the BER obtained with d_l = 4 is ∼3 × 10^{-7}, whereas the BER obtained with d_l = 8 is ∼5 × 10^{-8}.

9.2. Case 2: L_2 n < (1 − r) n

Since, as said above, for the examples considered in the paper 1 − r = 1/2, it follows that the degree distributions fulfilling the condition L_2 n < (1 − r) n are those with L_2 n < n/2, i.e., L_2 < 1/2. This condition is fulfilled by the code with d_l = 7 of Table 1, and by all the other codes of Table 2 and Table 3. This implies that l ≤ L_2 n for all these codes, i.e., that, given the block length n, the upper bound on l varies with L_2. In this case, we expect that a higher upper bound of Equation (32) on the codeword weight l will in general provide a better BER performance. Thus, the dependence of this performance on the product λ′(0)ρ′(1) must be evaluated case by case. For instance, consider Figure 2, Figure 3, and Figure 4, where the performances of three randomly generated codes with pairs λ(x) and ρ(x) having d_l = 7, 9, and 10 are shown, respectively. The BER curve obtained with d_l = 7 (Figure 2) is the worst of the three (at E_b/N_0 = 5 dB the BER is ∼2 × 10^{-7}), the one obtained with d_l = 9 (Figure 3) is the best of the three (at E_b/N_0 = 5 dB the BER is ∼2 × 10^{-8}), and the one obtained with d_l = 10 (Figure 4) is intermediate between the two (at E_b/N_0 = 5 dB the BER is ∼4 × 10^{-8}). This is because the code with d_l = 7 presents the lowest L_2 value (L_2 = 0.43927), the one with d_l = 9 presents the highest L_2 value (L_2 = 0.49128), and the one with d_l = 10 presents an intermediate L_2 value (L_2 = 0.46020) with respect to the other two. As far as the role of the product λ′(0)ρ′(1) is concerned, in this case it may be seen that, even if the code with d_l = 9 presents a higher value of this product with respect to the code with d_l = 10 (1.69321 vs. 1.59749), its performance is better (at E_b/N_0 = 5 dB the BER is ∼2 × 10^{-8}) than that of the code with d_l = 10 (presenting, at E_b/N_0 = 5 dB, a BER of ∼4 × 10^{-8}) because it presents a higher value of L_2.
Thus, in conclusion, the randomly generated code having a degree distribution with d_l = 9 (Figure 3) is the best, since it has the highest value of L_2 (0.49128), even if it presents an intermediate value of the product λ′(0)ρ′(1) (1.69321). Moreover, the code with the lowest value of the product λ′(0)ρ′(1) (1.59749), i.e., the one with d_l = 10, also presents, as expected from (39), the highest value of σ*_Jensen (1.03314).

10. Conclusions

The objective of this work was the analysis of the importance of the product λ′(0)ρ′(1) in determining LDPC code performance, as far as both the weight distribution function and the decoding thresholds are concerned. This analysis was based on [9] for what concerns the weight distribution function, and on [10] for what concerns the LDPC decoding thresholds. The analysis was conducted for two main conditions, i.e., the one for which L_2 > (1 − r) (where r is the code rate) and the one for which L_2 < (1 − r). In the first case, the product alone was fundamental in determining the performance. In the second, the parameter L_2 was also important, together with the product. The best case was the one presenting the highest value of L_2 together with the lowest value of the product λ′(0)ρ′(1) > 1. Moreover, a lower value of the product λ′(0)ρ′(1) implied, as expected from (39), a higher value of the upper bound on the decoding threshold σ*_Jensen and thus, in terms of noise power, a smaller gap from the Shannon limit.

Author Contributions

Conceptualization, F.V. and F.B.; methodology, F.V.; software, F.E. and M.N.; validation, F.V., G.B., and M.C.; formal analysis, F.V.; investigation, F.V. and F.B.; resources, F.B. and M.C.; data curation, F.E. and M.N.; writing—original draft preparation, F.V.; writing—review and editing, F.V. and F.B.; visualization, F.V.; supervision, M.C.; project administration, M.C.; funding acquisition, F.V., G.B., and M.C.

Funding

This research was partly funded by the Italian Ministry of University and Research (MIUR) within the project FRA 2018 (University of Trieste, Italy), entitled “UBER-5G: Cubesat 5G networks—Access layer analysis and antenna system development.”

Acknowledgments

The authors wish to thank Alessandro Soranzo for his kind support in the formal analysis.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Approximation of the Functions C ( γ ) and C 1 ( r )

We approximated the function C(γ), reported in Equation (8) of [16], with Mathematica®, using the numerical integration specified therein.
Here we report 66 (approximate) values (×10^4) of C(γ), corresponding to γ = 0.1, 0.15, ..., 3.35 with steps of 0.05, listed in order of increasing γ (so the first value is C(0.1) ≈ 0.1314, the second is C(0.15) ≈ 0.1889, and so on):
1314 1889 2417 2905 3356 3774 4162 4523 4859 5173 5465
5738 5993 6231 6454 6663 6859 7042 7215 7376 7528 7670
7804 7929 8047 8158 8263 8361 8453 8540 8622 8699 8772
8841 8905 8966 9023 9077 9128 9176 9222 9264 9305 9343
9379 9413 9445 9475 9504 9530 9556 9580 9603 9624 9645
9664 9682 9699 9715 9730 9745 9759 9771 9784 9795 9806
Figure A1 reports a graph of C ( γ ) obtained with the same software.
Figure A1. Graph of the function C(γ).
To implement an approximation of the inverse C^{-1}(r) we used a spline interpolation of the approximate values (γ_i, C(γ_i)) = (C^{-1}(r_i), r_i) reported above, with 0.1314 < r_i < 0.9806:
  • s[y_] = Interpolation[v][y];
where v is the array of the 66 pairs of values (C^{-1}(r_i), r_i).
Moreover, in [16] we also reported a pair of functions, one the inverse of the other, approximating C(γ) and its inverse C^{-1}(r), with forms, respectively,
C_1(\gamma) = 1 - e^{-u\gamma^w + v},\qquad C_1^{-1}(r) = \left( \frac{-\log(1-r) + v}{u} \right)^{1/w}
with
u = 1.286,\quad v = 0.01022,\quad w = 0.9308.
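For completeness, the closed-form pair reported above can be evaluated in a few lines; the sketch below is ours (the paper uses Mathematica) and simply checks the Shannon-limit figures quoted in Section 8:

    import math

    U, V, W = 1.286, 0.01022, 0.9308          # constants reported above

    def C1(gamma):
        return 1.0 - math.exp(-U * gamma ** W + V)

    def C1_inv(r):
        return ((-math.log(1.0 - r) + V) / U) ** (1.0 / W)

    snr = C1_inv(0.5)                         # Shannon-limit SNR for rate 1/2
    print(snr)                                # ~0.522948
    print(math.sqrt(1.0 / (2.0 * snr)))       # sigma_Shannon,1 ~ 0.977813
    print(C1(snr))                            # ~0.5, consistency check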

References

  1. Gallager, R.G. Low-Density Parity-Check Codes; MIT Press: Cambridge, MA, USA, 1963.
  2. Tanner, R.M. A recursive approach to low complexity codes. IEEE Trans. Inf. Theory 1981, 5, 533–547.
  3. Mackay, D.J.C. Good error correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory 1999, 2, 399–431.
  4. Luby, M.G.; Mitzenmacher, M.; Shokrollahi, M.A.; Spielman, D.A. Improved low-density parity-check codes using irregular graphs. IEEE Trans. Inf. Theory 2001, 2, 585–598.
  5. Babich, F.; Vatta, F. Turbo Codes Construction for Robust Hybrid Multitransmission Schemes. J. Commun. Softw. Syst. (JCOMSS) 2011, 4, 128–135.
  6. Vatta, F.; Soranzo, A.; Comisso, M.; Buttazzoni, G.; Babich, F. Performance Study of a Class of Irregular LDPC Codes through Low Complexity Bounds on Their Belief-Propagation Decoding Thresholds. In Proceedings of the 2019 AEIT International Annual Conference, AEIT'19, Florence, Italy, 18–20 September 2019.
  7. Babich, F.; Comisso, M. Multi-Packet Communication in Heterogeneous Wireless Networks Adopting Spatial Reuse: Capture Analysis. IEEE Trans. Wirel. Commun. 2013, 10, 5346–5359.
  8. Babich, F.; D'Orlando, M.; Vatta, F. Distortion Estimation Algorithms for Real-Time Video Streaming: An Application Scenario. In Proceedings of the 2011 International Conference on Software, Telecommunications and Computer Networks, SoftCOM'11, Split, Croatia, 15–17 September 2011.
  9. Di, C.; Richardson, T.J.; Urbanke, R. Weight distribution of low-density parity-check codes. IEEE Trans. Inf. Theory 2006, 11, 4839–4855.
  10. Vatta, F.; Soranzo, A.; Babich, F. Low-Complexity bound on irregular LDPC belief-propagation decoding thresholds using a Gaussian approximation. Electron. Lett. 2018, 17, 1038–1040.
  11. Chung, S.-Y.; Richardson, T.J.; Urbanke, R. Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation. IEEE Trans. Inf. Theory 2001, 2, 657–670.
  12. Babich, F.; Noschese, M.; Soranzo, A.; Vatta, F. Low complexity rate compatible puncturing patterns design for LDPC codes. In Proceedings of the 2017 International Conference on Software, Telecommunications and Computer Networks, SoftCOM'17, Split, Croatia, 21–23 September 2017.
  13. Babich, F.; Noschese, M.; Vatta, F. Analysis and design of rate compatible LDPC codes. In Proceedings of the 27th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC'16, Valencia, Spain, 4–8 September 2016.
  14. Tan, B.S.; Li, K.H.; Teh, K.C. Bit-error rate analysis of low-density parity-check codes with generalised selection combining over a Rayleigh-fading channel using Gaussian approximation. IET Commun. 2012, 1, 90–96.
  15. Chen, X.; Lau, F.C.M. Optimization of LDPC codes with deterministic UEP properties. IET Commun. 2011, 11, 1560–1565.
  16. Babich, F.; Soranzo, A.; Vatta, F. Useful mathematical tools for capacity approaching codes design. IEEE Commun. Lett. 2017, 9, 1949–1952.
  17. Vatta, F.; Babich, F.; Ellero, F.; Noschese, M.; Buttazzoni, G.; Comisso, M. Performance study of a class of irregular LDPC codes based on their weight distribution analysis. In Proceedings of the 2019 International Conference on Software, Telecommunications and Computer Networks, SoftCOM'19, Split, Croatia, 19–21 September 2019.
  18. Richardson, T.J.; Shokrollahi, A.; Urbanke, R. Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inf. Theory 2001, 2, 619–637.
  19. Vatta, F.; Soranzo, A.; Comisso, M.; Buttazzoni, G.; Babich, F. New explicitly invertible approximation of the function involved in LDPC codes density evolution analysis using a Gaussian approximation. Electron. Lett. 2019, 22, 1183–1186.
  20. Vatta, F.; Soranzo, A.; Babich, F. More accurate analysis of sum-product decoding of LDPC codes using a Gaussian approximation. IEEE Commun. Lett. 2019, 2, 230–233.
  21. Chung, S.-Y. On the Construction of Some Capacity-Approaching Coding Schemes. Ph.D. Thesis, MIT, Cambridge, MA, USA, 2000. Available online: http://dspace.mit.edu/handle/1721.1/8981 (accessed on 9 December 2019).
  22. Boscolo, A.; Vatta, F.; Armani, F.; Viviani, E.; Salvalaggio, D. Physical AWGN channel emulator for Bit Error Rate test of digital baseband communication. Appl. Mech. Mater. 2013, 241–244, 2491–2495.
Figure 1. BER with respect to E_b/N_0 in dB with d_l = 4 and d_l = 8, n = 1000, and number of iterations I = 50.
Figure 2. BER with respect to E_b/N_0 in dB with d_l = 7, n = 1000, and number of iterations I = 30.
Figure 3. BER with respect to E_b/N_0 in dB with d_l = 9, n = 1000, and number of iterations I = 30.
Figure 4. BER with respect to E_b/N_0 in dB with d_l = 10, n = 1000, and number of iterations I = 30.
Table 1. Computed parameters and decoding threshold bound regarding rate-1/2 codes deduced from Table I of [18], with d_l = 4, 6, 7, and 8.

d_l                   4        6        7        8
λ_2                   0.38354  0.33241  0.31570  0.30013
λ_3                   0.04237  0.24632  0.41672  0.28395
λ_4                   0.57409  0.11014  -        -
λ_6                   -        0.31112  -        -
λ_7                   -        -        0.43810  -
λ_8                   -        -        -        0.41592
ρ_5                   0.24123  -        -        -
ρ_6                   0.75877  0.76611  0.43810  0.22919
ρ_7                   -        0.23389  0.56190  0.77081
λ′(0)                 0.38354  0.33241  0.31570  0.30013
ρ′(1)                 4.75877  5.23389  5.56190  5.77081
λ′(0)ρ′(1)            1.82518  1.73980  1.75589  1.73199
∫_0^1 λ(x)dx          0.34942  0.32770  0.35934  0.29671
L_2                   0.54883  0.50719  0.43927  0.50577
σ*                    0.9114   0.9304   0.9424   0.9497
σ*_Jensen             0.91160  0.95021  0.94241  0.95409
SNR_gap [dB]          0.60903  0.24872  0.32032  0.21333
SNR_gap,spline [dB]   0.61687  0.25656  0.32815  0.22117
Table 2. Computed parameters and decoding threshold bound regarding rate-1/2 codes deduced from Table I of [18], with d_l = 9, 10, 11, and 12.

d_l                   9        10       11       12
λ_2                   0.27684  0.25105  0.23882  0.24426
λ_3                   0.28342  0.30938  0.29515  0.25907
λ_4                   -        0.00104  0.03261  0.01054
λ_5                   -        -        -        0.05510
λ_8                   -        -        -        0.01455
λ_9                   0.43974  -        -        -
λ_10                  -        0.43853  -        0.01275
λ_11                  -        -        0.43342  -
λ_12                  -        -        -        0.40373
ρ_6                   0.01568  -        -        -
ρ_7                   0.85244  0.63676  0.43011  0.25475
ρ_8                   0.13188  0.36324  0.56989  0.73438
ρ_9                   -        -        -        0.01087
λ′(0)                 0.27684  0.25105  0.23882  0.24426
ρ′(1)                 6.11620  6.36324  6.56989  6.75612
λ′(0)ρ′(1)            1.69321  1.59749  1.56902  1.65025
∫_0^1 λ(x)dx          0.28175  0.27276  0.26535  0.25888
L_2                   0.49128  0.46020  0.45001  0.47176
σ*                    0.9540   0.9558   0.9572   0.9580
σ*_Jensen             0.97439  1.03314  1.05356  0.99908
SNR_gap [dB]          0.03046  -0.47807 -0.64807 -0.18689
SNR_gap,spline [dB]   0.03830  -0.47023 -0.64023 -0.17905
Table 3. Computed parameters and decoding threshold bound regarding rate-1/2 codes deduced from Table II of [18], with d_l = 15, 20, 30, and 50.

d_l                   15       20       30       50
λ_2                   0.23802  0.21991  0.19606  0.17120
λ_3                   0.20997  0.23328  0.24039  0.21053
λ_4                   0.03492  0.02058  -        0.00273
λ_5                   0.12015  -        -        -
λ_6                   -        0.08543  0.00228  -
λ_7                   0.01587  0.06540  0.05516  0.00009
λ_8                   -        0.04767  0.16602  0.15269
λ_9                   -        0.01912  0.04088  0.09227
λ_10                  -        -        0.01064  0.02802
λ_14                  0.00480  -        -        -
λ_15                  0.37627  -        -        0.01206
λ_19                  -        0.08064  -        -
λ_20                  -        0.22798  -        -
λ_28                  -        -        0.00221  -
λ_30                  -        -        0.28636  0.07212
λ_50                  -        -        -        0.25830
ρ_8                   0.98013  0.64854  0.00749  -
ρ_9                   0.01987  0.34747  0.99101  0.33620
ρ_10                  -        0.00399  0.00150  0.08883
ρ_11                  -        -        -        0.57497
λ′(0)                 0.23802  0.21991  0.19606  0.17120
ρ′(1)                 7.01987  7.35545  7.99401  9.23877
λ′(0)ρ′(1)            1.67087  1.61754  1.56731  1.58168
∫_0^1 λ(x)dx          0.24945  0.24017  0.22240  0.19699
L_2                   0.47708  0.45783  0.44078  0.43454
σ*                    0.9622   0.9649   0.9690   0.9718
σ*_Jensen             0.98692  1.01966  1.05485  1.04429
SNR_gap [dB]          -0.08052 -0.36399 -0.65870 -0.57131
SNR_gap,spline [dB]   -0.07268 -0.35615 -0.65086 -0.56347
