Article

Optimized Generalized LDPC Convolutional Codes
by Li Deng, Kai Tao, Zhiping Shi, You Zhang, Yinlong Shi, Jian Wang, Tian Liu and Yongben Wang

1 Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313000, China
2 National Key Laboratory on Communications, University of Electronic Science and Technology of China, Chengdu 611731, China
3 ZTE Corporation, Shenzhen 518057, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(9), 930; https://doi.org/10.3390/e27090930
Submission received: 1 July 2025 / Revised: 30 August 2025 / Accepted: 2 September 2025 / Published: 4 September 2025
(This article belongs to the Special Issue LDPC Codes for Communication Systems)

Abstract

In this paper, optimized encoding and decoding schemes are proposed for generalized LDPC convolutional codes (GLDPC–CCs). In terms of the encoding scheme, a flexible doping method is proposed that replaces multiple single parity check (SPC) nodes with one generalized check (GC) node. Different types of BCH codes can be selected as the GC node by adjusting the number of SPC nodes to be replaced. Moreover, by fine-tuning the truncated bits and the extended parity check bits, or by reasonably adjusting the GC node distribution, the performance of GLDPC–CCs can be further improved. In terms of the decoding scheme, a hybrid layered normalized min-sum (HLNMS) decoding algorithm is proposed, where layered normalized min-sum (LNMS) decoding is used for SPC nodes and Chase–Pyndiah decoding is adopted for GC nodes. Based on an analysis of the decoding convergence of GC nodes and SPC nodes, an adaptive weight factor that changes with the decoding iterations is designed for GC nodes, aiming to further improve the decoding performance. In addition, an early stopping decoding strategy based on a minimum-amplitude threshold of the mutual information is proposed to reduce the decoding complexity. The simulation results verify the superiority of the proposed schemes for GLDPC–CCs over the prior art, which shows great application potential in optical communication systems.

1. Introduction

The evolution of forward error correction (FEC) in optical communication systems has generally progressed through three generations, with different standards recommending distinct FEC codes over time. The first-generation FEC employed hard-decision codes, notably the RS ( 255 , 239 ) code specified in standards like ITU-T G.975 and G.709, which provided approximately 6 dB of net coding gain (NCG) and was primarily used in synchronous digital hierarchy (SDH) and early wavelength division multiplexing (WDM) systems. As single-wavelength 10G/40G WDM systems demanded higher error correction capabilities, second-generation FEC utilized hard-decision concatenated coding techniques (such as the concatenated codes defined in ITU-T G.975.1), achieving NCG above 8 dB through interleaving and iterative decoding. With the advent of single-wavelength 100G and beyond-100G coherent systems, the development of coherent technology and high-speed integrated circuits facilitated the adoption of third-generation soft-decision FEC (SD-FEC), where LDPC codes (low-density parity check codes) and turbo product codes (TPC) became mainstream, offering approximately 11 dB of NCG and significantly enhancing long-haul transmission performance. Owing to their near-Shannon-limit performance, low decoding complexity, and high throughput potential, LDPC codes have been widely incorporated into optical communication standards, such as 100G/400G/800G coherent transmission systems (e.g., OIF 400ZR/800ZR and ITU-T G.709.3 FlexO interfaces), which utilize LDPC-based codes.
Generalized low-density parity check (GLDPC) codes are a class of channel codes with broad code rate selection and performance approaching the Shannon limit. Compared to conventional low-density parity check (LDPC) codes, GLDPC codes replace single parity check (SPC) nodes with generalized check (GC) nodes of stronger error-correcting capability, theoretically offering potential advantages such as larger minimum distances, greater flexibility in component code selection for check nodes, lower error rates at short-to-medium block lengths, faster decoding convergence, and reduced error floors. Moreover, they can achieve higher NCG than RS codes at the same code rate and can reduce encoding complexity and memory overhead through structured designs (e.g., QC-LDPC), making them suitable for hardware implementation. Their application potential lies primarily in scenarios with extremely high reliability and low latency requirements, such as beyond-100G optical transmission systems (e.g., 1.6T), visible light communication (VLC) systems, massive machine-type communications (mMTCs), and integrated optical-wireless networks (e.g., 5G/6G fronthaul and midhaul). These scenarios require resilience against nonlinear noise and complex channel impairments, where the strong error correction capability and flexibility of GLDPC codes can significantly enhance system robustness. At present, research on applying GLDPC codes to optical communications is still exploratory, and these codes have not yet been directly adopted in mainstream optical communication standards (e.g., ITU-T or OIF specifications). However, as optical communications evolve toward single-wavelength 800G/1.6T and leverage technologies such as C+L band expansion, GLDPC codes are expected to be one of the key technologies for beyond-400G standards.
Figure 1 illustrates the development timeline of GLDPC codes. Tanner first proposed that check nodes need not be limited to SPC nodes but could instead adopt any generalized parity check structure, thereby introducing the concept of GLDPC codes. However, Tanner did not provide concrete GLDPC code constructions [1]. In 1999, Lentmaier et al. systematically expanded LDPC codes by employing Hamming codes as component codes for check nodes, marking the beginning of structured GLDPC research [2]. Since then, various GLDPC architectures have emerged, including non-binary GLDPC codes [3], quasi-cyclic GLDPC codes [4], doubly generalized GLDPC codes [5], and convolutional GLDPC codes [6,7,8]. For component codes, scholars have explored diverse candidates such as BCH codes [9], Reed–Solomon (RS) codes [9], Hadamard codes [10], Reed–Muller (RM) codes [11], Zigzag codes [12], Hamming codes [2,13], and Quadratic Residue (QR) codes [14]—wherein Hamming codes [15,16,17], BCH codes, and RM codes [11] have become common choices in optical communication systems due to their good balance between bit error performance and implementation complexity. In 2025, authors in [18] proposed a novel GLDPC code using polar codes as component codes (termed GLDPC-PC codes), where soft information from polar components is efficiently extracted from the soft-output successive cancellation list (SO-SCL) decoder. GLDPC-PC codes exhibit outstanding block error rate (BLER) performance, particularly in ultralow error regimes, aligning with 6G communication requirements for higher data rates and ultrareliable transmission.
Particularly, the generalized low-density parity check convolutional code (GLDPC–CC) is a class of GLDPC codes. The convolution structure brings about better waterfall performance, which makes GLDPC–CC a very promising solution for optical communication. In [16], a kind of GLDPC–CC is proposed with Hamming ( 15 , 11 ) code as the component code. The layered decoding algorithm is used in the decoding implementation, where the SPC nodes adopt the revised min-sum algorithm, and GC nodes use the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm [17]. However, the authors found that the proposed GLDPC–CC has no obvious advantages over LDPC convolutional code (LDPC–CC) [19] in terms of error correction performance, delay, and power consumption.
In this paper, we focus on GLDPC–CCs that are suitable for high-reliability and low-latency optical communication scenarios. In order to further improve the error correction performance while reducing the decoding delay and power consumption, we conducted a series of studies on various aspects of GLDPC–CCs, including the code construction, the decoding algorithm, and low-power strategies. The contributions of this paper can be summarized in the following two aspects:
(1)
Firstly, we re-design GLDPC–CCs from the perspective of code structure, aiming to enhance the error-correction performance.
  • Considering that the existing GLDPC–CCs adopt Hamming codes as the component code, whose error-correction capability is weak and whose construction is inflexible, a novel doping method that replaces multiple SPC nodes with one GC node is proposed, in which a joint optimization strategy over GC node types and GC node ratios is adopted. This method enables BCH codes of various lengths to be used as component codes and allows GLDPC–CCs with different code rates to be constructed flexibly.
  • Moreover, the influence of GC node distribution on the coupling structure is discussed, which indicates that the nonuniform intra-layer GC node distribution can lead to better error correction performance compared to the uniform GC node distribution.
(2)
Secondly, in terms of the decoding algorithm, we note that the BCJR algorithm is only applicable to simple code types such as Hamming codes; for the BCH code family, with longer code lengths and stronger error correction capabilities, the decoding complexity of the BCJR algorithm increases sharply and it is no longer applicable. Therefore, we develop a decoding algorithm and its optimized versions suitable for GLDPC–CCs whose GC nodes are BCH or RS codes.
  • A hybrid layered normalized minimum sum (HLNMS) decoding scheme of GLDPC–CCs is proposed in this paper, where the layered normalized minimum sum (LNMS) decoding is adopted for the single parity check (SPC) node, and the Chase–Pyndiah decoding is adopted for the generalized check (GC) node, which has lower decoding delay and storage complexity.
  • Moreover, in order to further improve the error correction performance of proposed HLNMS algorithm, an optimized adaptive weighting factor of the Chase–Pyndiah algorithm for GC nodes is proposed based on the analysis of the decoding convergence characteristics of GC nodes and SPC nodes.
  • On the other hand, in terms of low-power consumption strategies, a decoding early stopping strategy is also proposed for the HLNMS algorithm to reduce the decoding complexity while maintaining comparable error correction performance.
Among the above mentioned works, the proposed HLNMS algorithm is an optimized scheme of the decoding algorithm of [16,17], where the Chase–Pyndiah decoding for GC nodes of BCH code is different from the BCJR decoding for GC nodes of Hamming code in [16,17]. The other proposed encoding and decoding schemes are all original.
The remainder of this paper is organized as follows. The background of generalized LDPC codes is briefly introduced in Section 2. Specific descriptions of the proposed encoding and decoding schemes are provided in Section 3 and Section 4. The numerical analyses and simulation results are presented in Section 5, followed by the conclusions in Section 6.

2. Preliminaries

In this section, we briefly introduce the basic concepts, characteristics, and common construction methods of GLDPC codes, as well as the prior-art construction method of GLDPC–CCs.

2.1. Basic Definitions of GLDPC Codes

GLDPC codes were first proposed by Tanner in 1981 [1]. Unlike traditional LDPC codes, which use single parity check (SPC) codes as check nodes and can only detect a single error, GLDPC codes employ component codes with certain error correction capabilities as check nodes. From the perspective of composition structure, GLDPC codes are an extension of standard LDPC codes. Therefore, they are also a kind of linear block code whose parity check matrix satisfies the sparsity property and conforms to all the corresponding characteristics of linear block codes. That is, GLDPC codes have a generator matrix G and a corresponding parity check matrix H that satisfy G · H^T = 0. In the parity check matrix of a standard LDPC code, each row corresponds to a parity check constraint equation, and this constraint conforms to the parity check relationship of an SPC code; each column represents the information of a corresponding parity check variable. In form, the parity check matrix of a GLDPC code shows no difference from that of a standard LDPC code, and the two can even be used interchangeably. However, there is an essential difference in their physical meanings: the parity check constraints that a GLDPC code needs to satisfy can be selected from many codes besides SPC codes, such as Hamming codes, BCH codes, etc. Therefore, in its parity check equations, it no longer satisfies one SPC relationship per row; instead, multiple rows jointly represent one parity check relationship, satisfying the constraint conditions of the corresponding component codes, and these are called generalized check (GC) nodes.
Let R be the code rate of a GLDPC code, let the variable node degree be d_v, and let the check node degree be n. The component code is a linear block code C(n, k) with code rate r = k/n. Then the code rate R satisfies the following relationship:
$$R \geq 1 - (1 - r)\, d_v. \qquad (1)$$
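As an illustrative instance (the numbers here are ours, not taken from the paper): for a Hamming (15, 11) component code, r = 11/15, and a variable node degree of d_v = 2, the bound in (1) gives
$$R \;\geq\; 1 - \left(1 - \tfrac{11}{15}\right)\cdot 2 \;=\; \tfrac{7}{15} \;\approx\; 0.467.$$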
Tanner derived the relationship between the lower bound of the minimum Hamming distance and the parameters of the component code in [1]. Let d be the minimum Hamming distance of the component code, D be the minimum Hamming distance of the GLDPC code, d_v be the degree of the bit nodes, and g be the girth of the Tanner graph. Then the following relationship holds:
$$
D \geq
\begin{cases}
d \cdot \dfrac{[(d-1)(d_v-1)]^{\frac{g-2}{4}} - 1}{(d-1)(d_v-1) - 1} + d\, d_v\, [(d-1)(d_v-1)]^{\frac{g-2}{4}}, & g/2 \ \text{odd};\\[2mm]
d \cdot \dfrac{[(d-1)(d_v-1)]^{\frac{g}{4}} - 1}{(d-1)(d_v-1) - 1}, & g/2 \ \text{even}.
\end{cases}
\qquad (2)
$$
Suppose the code length of the component code is n and the code length of the GLDPC code is N. Then the following relationship holds:
$$
N \geq
\begin{cases}
n \cdot \dfrac{[(n-1)(d_v-1)]^{\frac{g-2}{4}} - 1}{(n-1)(d_v-1) - 1} + n\, d_v\, [(n-1)(d_v-1)]^{\frac{g-2}{4}}, & g/2 \ \text{odd};\\[2mm]
n \cdot \dfrac{[(n-1)(d_v-1)]^{\frac{g}{4}} - 1}{(n-1)(d_v-1) - 1}, & g/2 \ \text{even}.
\end{cases}
\qquad (3)
$$
Lentmaier proved in [2] that the minimum Hamming distance of GLDPC codes increases linearly with the code length, and provided the following lower bound:
$$D \geq N\, \delta(d_v, n) + o(N), \qquad (4)$$
where δ(d_v, n) is a coefficient related to the variable node degree d_v and the code length n of the component code.

2.2. Construction Methods of GLDPC Codes

In terms of the parity check matrix construction of GLDPC codes, there are two classic construction methods.
The first construction method, proposed by Boutros in [20], extends the structure of Gallager's LDPC codes by replacing each parity check equation of the (N, K) LDPC code with the small parity check matrix of an (n, k) linear component code. Firstly, a parity check matrix H of a regular LDPC code is constructed using Gallager's construction method. Secondly, each row of the H matrix is replaced with the parity check matrix H_0 of the (n, k) component code, thereby generating a new parity check matrix H_1. Suppose the parity check matrix H_g of the GLDPC code has J submatrices. Then H_1 is set as its first submatrix, and the other J − 1 submatrices are generated by column permutations of H_1.
The second construction method, proposed by Lentmaier in [2], is basically the same as Boutros's method, in that it also extends the initial LDPC matrix H using the parity check matrix H_0 of the (n, k) component code. The difference is that the non-zero elements in each row of H are replaced with n distinct column vectors of H_0, while the zero elements in each row of H are replaced with the all-zero column vector of length n − k, yielding the parity check matrix H_g of the GLDPC code.
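To make the row-replacement idea concrete, the following is a minimal sketch of the Lentmaier-style expansion (our own illustration in Python with NumPy; the function names and the assumption that every row of H has weight equal to the component code length n are ours, and the columns of H_0 are assigned in natural order, whereas a permutation could equally be used):

```python
import numpy as np

def expand_row(h_row, H0):
    # Replace one SPC row of H (row weight must equal n) by the n-k rows of a
    # generalized check: each 1-position receives a distinct column of H0,
    # and each 0-position receives the all-zero column of length n-k.
    n_minus_k, n = H0.shape
    assert int(h_row.sum()) == n, "row weight must equal the component code length n"
    expanded = np.zeros((n_minus_k, h_row.size), dtype=int)
    next_col = iter(range(n))   # natural column order; a permutation could be applied
    for pos, bit in enumerate(h_row):
        if bit:
            expanded[:, pos] = H0[:, next(next_col)]
    return expanded

def lentmaier_expand(H, H0):
    # Stack the expanded rows to obtain the GLDPC parity check matrix H_g.
    return np.vstack([expand_row(row, H0) for row in H])
```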

2.3. Construction Method of GLDPC–CCs

The GLDPC–CC in [16] is a generalized extension of the quasi-cyclic (QC) LDPC–CC. The quasi-cyclic structure is adopted to enhance hardware feasibility. The specific construction process of the parity check matrix of GLDPC–CCs is shown in Figure 2, where H_q is a quasi-cyclic matrix expanded from the base matrix H_b, with row weight d_c and column weight d_v, according to the cyclic shift factor matrix H_shift and the expansion factor Z. The base matrix H_b is an all-ones matrix.
As shown in Figure 2a, each row of the H_q matrix is evenly divided into d_v sub-matrices, denoted as H_q^{i,j} (i, j = 1, 2, …, d_v). Each H_q^{i,j} contains d_c/d_v cyclic blocks. Note that d_v is set as 3 and d_c is set as 15 in this paper. This partitioning strategy is used for the subsequent convolution operations on H_q. In Figure 2a, the H_q matrix is cut along the diagonal to obtain the upper-right part H_q2 and the lower-left part H_q1. Subsequently, H_q2 and H_q1 are spliced left and right to obtain H_q3, as shown in Figure 2b. The cutting and splicing operations prepare for the subsequent convolution operations. As shown in Figure 2c, by repeatedly copying and pasting H_q3, the parity check matrix H_c of the QC–LDPC–CC code is formed. The period of the QC–LDPC–CC is T = d_v = 3. Finally, by replacing certain rows of H_c with the parity check matrix H_0 of the component code, the parity check matrix H_g of GLDPC–CCs can be easily constructed.

3. Optimized Encoding Scheme of GLDPC–CCs

This section focuses on the optimized encoding schemes of GLDPC–CCs. Firstly, the matched selection of the generalized component code type and the GC node ratio is discussed. Then, the effect of the GC node distribution on the degree distribution characteristics of GLDPC–CCs is analyzed.

3.1. Joint Optimization of GC Node Proportion and GC Component Code Type

In the encoding scheme described in [16], one single SPC node is replaced with a GC node of the Hamming (15, 11) code. This results in a uniform and fixed GC component code type, which prevents any further improvement in error correction performance. Accordingly, an optimized doping method is proposed that replaces multiple SPC nodes with a single GC node. In this way, for the same code rate, different combinations of component code types and GC node proportions can be obtained, and the combination with the best error correction performance can be selected.
Figure 3 illustrates the specific encoding process of GLDPC–CCs. As shown in Figure 3, the length of each row of the encoding memory is (d_c/d_v)Z, where Z is the cyclic shift expansion factor. All d_v rows of the encoding memory are coupled together to generate a long code of d_c·Z bits. The specific encoding process is as follows.
Step 1: Initialization. Load the codewords in the (i−2)-th and (i−1)-th rows of the encoding memory as the first 2(d_c/d_v)Z bits of the codeword Code_i, and initialize the last (d_c/d_v)Z bits to zeros, that is, Code_i = [BL_{i−2}, BL_{i−1}, zeros(1, (d_c/d_v)Z)]. In particular, when i = 1, Code_1 = [BL_{00}, BL_{01}, zeros(1, (d_c/d_v)Z)]; and when i = 2, Code_2 = [BL_{01}, BL_1, zeros(1, (d_c/d_v)Z)]. Here, BL_{00} and BL_{01} are all-zero sequences.
Step 2: Determine the sub-matrix of H_shift involved in encoding. For the i-th row of the encoding memory, the K-th row of the cyclic shift matrix, H_shift(K, :), is involved in encoding, where K = mod(i, d_v).
Step 3: Encode the codeword Code_i at the i-th row of the encoding memory.
(1)
SPC node encoding
For the j-th (d_c − 1, d_c) SPC node, the first 2(d_c/d_v) bits are first retrieved according to H_shift(K, 1 : 2(d_c/d_v)), and d_c − 2(d_c/d_v) − 1 new information bits are randomly generated. Then one check bit is generated through addition over GF(2), and the newly generated d_c − 2(d_c/d_v) − 1 information bits and the check bit are placed at the corresponding positions of Code_i according to H_shift(K, 2(d_c/d_v)+1 : d_c).
(2)
GC node encoding by replacing W (W ≥ 1) SPC nodes with a single GC node.
For j ∈ [1, Z], replace W SPC rows with a GC node every W/P rows. First, retrieve the corresponding first 2(d_c/d_v) bits of each of the W SPC rows according to H_shift(K, 1 : 2(d_c/d_v)), and merge them sequentially into the first 2(d_c/d_v)W information bits. Then, generate k − 2(d_c/d_v)W new information bits, and add J zeros (where J is the number of truncated bits) at the beginning of the information sequence, merging them into k + J information bits. Next, encode the n − k check bits using the BCH (n + J, k + J) code. Finally, place the newly generated k − 2(d_c/d_v)W information bits and the n − k check bits at the corresponding positions of Code_i according to H_shift(K, 2(d_c/d_v)+1 : d_c). Table 1 provides different combinations of component code types and GC node proportions with the same code rate of 0.775, where the code rate can be calculated as follows:
$$R = 1 - \frac{d_v}{d_c}\cdot\left[\,1 + \left(\frac{n-k}{W} - 1\right)\cdot P\,\right], \qquad (5)$$
where W is the number of SPC nodes replaced by one GC node; P is the GC node proportion; d_v and d_c are the column weight and row weight of the base matrix H_b, respectively; and n and k are the codeword length and information length of the BCH code, respectively.
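As a quick sanity check of Equation (5), the following short Python sketch (ours; the function name is an assumption) reproduces the code rate of several Table 1 combinations with d_v = 3 and d_c = 15:

```python
def gldpc_cc_rate(d_v, d_c, n, k, W, P):
    # Equation (5): R = 1 - (d_v/d_c) * (1 + ((n - k)/W - 1) * P)
    return 1 - (d_v / d_c) * (1 + ((n - k) / W - 1) * P)

# A few of the Table 1 combinations (d_v = 3, d_c = 15):
print(round(gldpc_cc_rate(3, 15, 15, 11, W=1, P=1/24), 4))   # Hamming (15, 11) -> 0.775
print(round(gldpc_cc_rate(3, 15, 60, 53, W=4, P=1/6), 4))    # BCH (60, 53)     -> 0.775
print(round(gldpc_cc_rate(3, 15, 120, 106, W=8, P=1/6), 4))  # BCH (120, 106)   -> 0.775
```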
In particular, the component code type BCH (60, 53) in Table 1 is an expanded-truncated BCH code with one extended parity check bit and one more truncated bit compared with the BCH (60, 54) code type. Moreover, in order to keep the same code rate of 0.775, the GC node proportion is reduced to 1/6. The extended parity check bit is added to obtain better error correction performance.
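For the component-code parameters in Table 1, the shortening/extension bookkeeping can be illustrated as follows (a minimal sketch; the helper name is ours):

```python
def shorten_and_extend_bch(n0, k0, J, extended_parity=0):
    # Shorten (truncate) the (n0, k0) mother BCH code by J information bits,
    # then append `extended_parity` overall parity check bits.
    return n0 - J + extended_parity, k0 - J

print(shorten_and_extend_bch(63, 57, J=3))                     # -> (60, 54)
print(shorten_and_extend_bch(63, 57, J=4, extended_parity=1))  # -> (60, 53)
```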

3.2. Analysis of the GC Node Distribution

In this section, the effect of the intra-layer GC node distribution on the inter-layer GC node distribution is discussed. The inter-layer GC node distribution refers to how many GC nodes a given bit is checked by. As seen in Figure 2c, the coupling length of the considered GLDPC–CCs is d_v, which means that each bit is checked by d_v check nodes. Among these d_v check nodes, the higher the proportion of GC nodes, the greater the probability that an erroneous bit will be corrected. Therefore, we statistically analyzed the situations in which two or more of the d_v check nodes corresponding to the coding structure of Figure 2c are GC nodes. As shown in Table 2, there are six such types in total. Take Type 1 as an example: it indicates that when a bit node is encoded using the first and second layers of the cyclic shift factor matrix H_shift, it is checked by a GC node, while when this bit node is encoded by the third layer of H_shift, it is checked by an SPC node. Therefore, in the case of Type 1, one bit node is checked by a total of two GC nodes and one SPC node.
Given a certain cyclic shift factor matrix H_shift and cyclic shift expansion factor Z, different intra-layer GC node distributions greatly affect the proportions of the inter-layer distribution types listed in Table 2. Here we compare two cases: the uniform distribution and the nonuniform distribution. For the uniform intra-layer distribution, the codeword is encoded by uniformly spaced GC and SPC nodes, while for the nonuniform distribution, the codeword is first encoded by GC nodes and then by SPC nodes.
Take the GC node of the BCH (60, 53) code as an example, where the code length of the GLDPC–CC is set as 19,800 bits. The numbers of bit nodes corresponding to the six types of inter-layer GC node distribution are denoted as N_GC = [N_1, N_2, N_3, N_4, N_5, N_6]. For the uniform intra-layer distribution, N_GC = [165, 0, 110, 0, 110, 0], which contains no Type 2, 4, or 6 (three GC nodes). For the nonuniform intra-layer distribution, N_GC = [522, 151, 1215, 95, 288, 0], which contains more types with three GC nodes and, in total, approximately six times as many such bit nodes as the uniform distribution. That is to say, the nonuniform intra-layer GC node distribution leads to more inter-layer distributions in which a bit is checked by two or more GC nodes.
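The counts reported above can be tallied directly (a one-line check in Python, using the N_GC vectors quoted above):

```python
uniform    = [165, 0, 110, 0, 110, 0]      # Types 1-6, uniform intra-layer distribution
nonuniform = [522, 151, 1215, 95, 288, 0]  # Types 1-6, nonuniform intra-layer distribution
print(sum(uniform), sum(nonuniform), round(sum(nonuniform) / sum(uniform), 2))
# 385 vs. 2271 bit nodes checked by two or more GC nodes (about 5.9x)
```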

4. Decoding Schemes of GLDPC–CCs

This section discusses the decoding scheme of GLDPC–CCs, including the parallel hybrid decoding architecture, the hybrid layered normalized minimum sum (HLNMS) decoding scheme of GLDPC–CCs, and two kinds of optimized HLNMS decoding schemes.

4.1. Parallel Hybrid Decoding Architecture of GLDPC–CCs

The parallel hybrid decoding architecture of GLDPC–CCs is shown in Figure 4. The decoding window contains B blocks (B = 60 in Figure 4), and every three blocks form one group (represented with different colors), i.e., a complete codeword of d_c·Z bits. Thus, there are 20 codewords in a decoding window. The mutual information (MI) of the i-th codeword (i ∈ [1, 20]) is loaded as L_i = [L_{3i−2}, L_{3i−1}, L_{3i}]. These groups can be decoded simultaneously, which significantly improves the decoding efficiency and throughput. In the decoding window, the cyclic shift sub-matrix corresponding to each group is consistent. For ease of processing, we can uniformly represent it using the corresponding sub-matrix of the third row or the B-th row, denoted as H_shift(K, :), where K = 1, 2, 3. After decoding is completed, the block at the bottom of the decoding window is output, and a new block enters from the top of the decoding window. As shown by the arrows in Figure 4, this operation can be understood as the decoding window sliding down, thereby initiating a new round of decoding. Through this continuous window-sliding mechanism, continuous decoding of the data stream is achieved. When the window slides three times, one block has been decoded by all three sub-matrices of H_shift, and thus one decoding iteration is complete.

4.2. Hybrid Layered Normalized Minimum Sum (HLNMS) Decoding of GLDPC–CCs

The proposed hybrid layered normalized minimum sum (HLNMS) decoding algorithm consists of two parts. The SPC nodes adopt the layered normalized minimum sum (LNMS) decoding algorithm, and the GC nodes adopt the Chase–Pyndiah decoding algorithm [21,22]. These two types of check nodes fuse their mutual information under the framework of LNMS decoding, which can be described as follows:
$$
\begin{aligned}
L^{(l,k)}(q_{j,i}) &= L^{(l,k-1)}(Q_j) - L^{(l-1,k)}(r_{i,j}),\\
L^{(l,k)}(r_{i,j}) &= \alpha \cdot \prod_{j' \in N(i)\setminus j} \operatorname{sgn}\!\left(L^{(l,k)}(q_{j',i})\right) \cdot \min_{j' \in N(i)\setminus j} \left|L^{(l,k)}(q_{j',i})\right|,\\
L^{(l,k)}(Q_j) &= L^{(l,k)}(q_{j,i}) + L^{(l,k)}(r_{i,j}),
\end{aligned}
\qquad (6)
$$
where l is the decoding iteration index; k is the decoding layer index; L^{(l,k)}(q_{j,i}) is the variable-to-check mutual information; L^{(l,k)}(r_{i,j}) is the check-to-variable mutual information; L^{(l,k)}(Q_j) is the posterior information of the variable node; and α is the normalization factor, which is set to 0.75. The specific decoding process is as follows.
Step 1: Initialization. The initial value of L^{(l,k)}(Q_j) is set as the channel information L_ch; the initial value of L^{(l,k)}(r_{i,j}) is 0.
Step 2: If the j-th row (j ∈ [1, Z]) of the i-th codeword (i ∈ [1, 20]) is an SPC node, retrieve the corresponding LLRs according to H_shift(K, :), then update them with (6).
Step 3: If the j-th row (j ∈ [1, Z]) of the i-th codeword (i ∈ [1, 20]) is a GC node, retrieve the corresponding MI according to H_shift(K, :), then decode it with the Chase–Pyndiah decoder. The input of the decoder is the updated L^{(l,k)}(q_{j,i}), and the output is the updated L^{(l,k)}(r_{i,j}).
Step 4: When the mutual information of all Z nodes has been updated, aggregate the MI of the GC nodes and SPC nodes, and update the posterior probability L^{(l,k)}(Q_j).
Step 5: After updating the posterior information L^{(l,k)}(Q_j) of all blocks, perform a hard decision directly on L^{(l,k)}(Q_j)(1, :) and output the result from the bottom of the decoding window. Meanwhile, the MI of a new block enters from the top of the decoding window. Then, return to Step 2 to start a new round of decoding.
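For reference, a minimal Python sketch (ours, not the authors' implementation) of the SPC check-node update in Equation (6), where q is a NumPy array holding the variable-to-check LLRs of one check row and no LLR is exactly zero:

```python
import numpy as np

def lnms_check_update(q, alpha=0.75):
    # Normalized min-sum update for one SPC row: for each edge j, the
    # check-to-variable message is alpha * (product of the signs of the other
    # edges) * (minimum magnitude over the other edges).
    signs = np.sign(q)
    total_sign = np.prod(signs)
    mags = np.abs(q)
    i_min = int(np.argmin(mags))
    m1 = mags[i_min]                       # smallest magnitude
    m2 = np.min(np.delete(mags, i_min))    # second-smallest magnitude
    r = np.empty_like(q, dtype=float)
    for j in range(q.size):
        r[j] = alpha * total_sign * signs[j] * (m2 if j == i_min else m1)
    return r
```

In the layered schedule, this update is applied row by row, and the returned messages are added back into the posterior LLRs as in the third line of (6).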

4.3. Complexity Analysis of HLNMS Decoding Scheme

In this subsection, we provide a quantitative analysis of the computational complexity, decoding latency, throughput, and memory usage for the proposed HLNMS decoding scheme. The parameters involved are hereby restated as follows:
- Z: Cyclic shift expansion factor;
- d_c: Row weight of the base matrix;
- d_v: Column weight of the base matrix;
- P: Proportion of GC nodes among the check nodes;
- W: Number of SPC nodes replaced together by a GC node;
- n: Length of the component code used in a GC node;
- t: Number of test patterns in the Chase–Pyndiah algorithm for GC nodes;
- Max_iter: Maximum number of decoding iterations;
- B: Decoding window size.
(1)
Computational Complexity
The total computational complexity for decoding one codeword is
$$C_{total} = Max_{iter} \cdot d_v \cdot Z \cdot \left[\,(1-P)\cdot O(d_c) + P \cdot W \cdot O(2^{t}\cdot n)\,\right], \qquad (7)$$
where O(d_c) is the complexity per SPC node (LNMS update), and O(2^t · n) is the complexity per GC node (Chase–Pyndiah decoding), which is the dominant factor in Equation (7).
(2)
Decoding latency
The end-to-end latency for decoding one codeword is
$$D = Max_{iter} \cdot d_v \cdot \tau_{GC}, \qquad \tau_{GC} \propto 2^{t}\cdot n. \qquad (8)$$
The decoding latency scales linearly with Max_iter, d_v, and τ_GC, where Max_iter · d_v is the total number of window slides and τ_GC is the processing delay per GC node. Note that τ_GC is a hardware-dependent constant whose actual value depends on the implementation (ASIC/FPGA parallelism, quantization, etc.).
(3)
Throughput
The throughput (decoded bits per unit time) is
$$Throughput = \frac{d_c \cdot Z}{d_v \cdot \tau_{slide}}, \qquad \tau_{slide} \propto \tau_{GC}, \qquad (9)$$
where d_c · Z is the codeword length in bits, τ_slide is the time per window slide (driven by τ_GC), and d_v · τ_slide is the time to process one full iteration.
(4)
Memory usage
The total memory required for the sliding-window decoder is as follows:
$$M_{total} = B \cdot \frac{d_c}{d_v} Z \cdot (m_v + m_c) + P \cdot W \cdot B \cdot Z \cdot O(2^{t}\cdot n). \qquad (10)$$
The first term of Equation (10) is the block memory, where m_v and m_c are the memory per variable node and per check node, respectively. The second term is the GC-node-specific memory, which stores the test patterns and metrics for Chase–Pyndiah decoding.
To sum up, due to the fully parallel decoding structure, the computational complexity, decoding delay, and throughput of the proposed HLNMS decoding algorithm mainly depend on the decoding of a single GC node. Furthermore, since the Chase–Pyndiah algorithm adopted for the GC nodes is simple to compute, fast to implement, and performs well, the proposed HLNMS decoding algorithm features low computational complexity, small decoding latency, and high throughput, which is very conducive to hardware implementation.

4.4. Optimized HLNMS Decoding Schemes for GLDPC–CCs

In this section, two optimization schemes are proposed for the HLNMS decoding algorithm described in Section 4.2. The first aims to improve the decoding performance, and the other reduces the decoding complexity while maintaining the error correction performance.
Scheme I: Adaptive weighting factor for GC node decoding
As shown in Figure 4, a code block (such as block #60) needs to go through 60 window slides, i.e., 20 decoding iterations, from entering to sliding out of the decoding window. Its mutual information becomes more reliable during the iterative process. Therefore, within the same decoding window, the reliabilities of the 60 blocks are different, gradually increasing from the entrance to the exit of the decoding window.
Based on this observation, we propose an adaptive weighting factor strategy, named Scheme I, which assigns different weighting factors to GC nodes with different levels of reliability. The reliability of the mutual information can be divided simply according to the position of the GC node within the decoding window. Equivalently, the same GC node in a code block is assigned different weighting factors in different decoding iterations. The specific weighting factor values are derived empirically from simulations. To simplify the hardware implementation, these weighting factor values can be represented directly in binary form as follows:
$$
\alpha_{GC} =
\begin{cases}
0.75, & 1 \le i < 0.2B\\
0.71875, & 0.2B \le i < 0.4B\\
0.6875, & 0.4B \le i < 0.6B\\
0.65625, & 0.6B \le i < 0.8B\\
0.5625, & 0.8B \le i \le B,
\end{cases}
\qquad (11)
$$
where B is the window size and i is the block index within the decoding window. As shown in (11), the blocks close to the exit are assigned larger weighting factors.
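Equation (11) maps directly to a small lookup, sketched here in Python (ours; the function name is an assumption):

```python
def adaptive_alpha_gc(i, B):
    # Adaptive GC-node weighting factor of Equation (11); i is the block index
    # within the decoding window (i = 1 is the output side), B is the window size.
    if i < 0.2 * B:
        return 0.75
    elif i < 0.4 * B:
        return 0.71875
    elif i < 0.6 * B:
        return 0.6875
    elif i < 0.8 * B:
        return 0.65625
    else:
        return 0.5625
```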
Scheme II: Early stopping strategy for GC node decoding
It is found that the mutual information of the GC node increases significantly faster with the decoding iteration than that of the SPC node, and thus the GC nodes could converge more quickly. Accordingly, a low-power decoding strategy is proposed as follows.
If the minimum amplitude of the MI is greater than a certain threshold Th, the reliability of this GC node is already very high or the node has converged, so decoding this GC node is no longer necessary; skipping it reduces the decoding complexity and power consumption. If the threshold is selected appropriately, the decoding complexity and power consumption can be reduced without any loss of decoding performance. The threshold can be obtained through offline analysis of the decoding convergence characteristics of GC nodes and SPC nodes, as detailed below.
Step 1: Offline, analyze the trend of the minimum MI amplitude of the GC and SPC nodes over the decoding iterations in the high-SNR range, and find the maximum value Th0 of the minimum MI amplitude of the GC nodes at convergence.
Step 2: The initial value of Th for the minimum MI amplitude is set as Th0.
Step 3: Before decoding at the GC node, a judgment is made: if the minimum MI amplitude of the GC node is greater than Th, then the decoding of this GC node will not be carried out; otherwise, the decoding will continue.
Step 4: After decoding, if the post-FEC BER exceeds the post-FEC BER obtained without the low-power control by more than the preset tolerance BER_Gap, then increase Th and return to Step 3; otherwise, terminate the search and output Th.
Note that the tolerance BER_Gap is a parameter used to balance the BER performance and the power consumption. The smaller the BER_Gap, the less the BER performance deteriorates and the less the power consumption is reduced.
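A minimal sketch of Scheme II in Python (ours; `simulate_post_fec_ber` is a hypothetical offline-simulation hook, and the search step size is an assumption):

```python
import numpy as np

def skip_gc_decoding(q, th):
    # Scheme II test for one GC node: skip the Chase-Pyndiah update when the
    # minimum MI magnitude already exceeds the threshold Th.
    return float(np.min(np.abs(q))) > th

def search_threshold(th0, ber_gap, baseline_ber, simulate_post_fec_ber, step=0.25):
    # Offline search of Steps 2-4: raise Th (so that fewer GC nodes are skipped)
    # until the post-FEC BER degradation relative to the baseline stays within BER_Gap.
    th = th0
    while simulate_post_fec_ber(th) > baseline_ber + ber_gap:
        th += step
    return th
```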

5. Simulation and Analysis

In this section, we present a performance comparison and analysis of different combinations of GC node types and proportions, a convergence analysis of GC nodes and SPC nodes, a performance analysis of the two optimized HLNMS decoding algorithms, and finally a performance comparison between the proposed algorithm and those in the related literature.
Given that this work focuses on GLDPC codes suitable for optical communication scenarios that simultaneously demand high reliability and low processing delay, we select a set of parameter configurations with a code rate lower than 0.8 and a code length of approximately 20,000 bits in the simulations. The code length of approximately 20,000 bits follows references [23,24,25]; the code rate and overhead settings follow references [16,17]; and the code type selection follows [11]. The specific coding parameters are set as follows. The cyclic shift expansion factor is Z = 1320, the block length is d_c·Z = 19,800 bits, and the code rate is 0.775. The normalization factor of LNMS decoding for SPC nodes is α = 0.75. Note that the specific parameter settings of the proposed algorithm are based on the actual engineering of optical communication systems. For the sake of generality, we present the simulation results in two ways: one uses the bit error rate before error correction (pre-FEC BER) as the horizontal axis, and the other uses the signal-to-noise ratio (SNR) as the horizontal axis.

5.1. Performance Comparison of Combinations of Component Code Types and GC Node Proportions

Figure 5 presents the performance comparison of all combinations of code types and GC node proportions listed in Table 1, with a fixed code rate of 0.775. The fixed weighting factor of Chase–Pyndiah decoding for GC nodes is set as α_GC = 1. The x-axis of Figure 5a is the bit error rate before error correction (pre-FEC BER), and the y-axis is the bit error rate after error correction (post-FEC BER). The x-axis of Figure 5b is the signal-to-noise ratio (SNR).
Impact of W on performance: W is the number of SPC nodes to be replaced with a GC node. When the code rate remains at 0.775, as the value of W gradually increases to 4, the performance of GLDPC–CCs continuously improves. However, as the value of W further increases, the performance shows a downward trend. Therefore, the combination with W = 4 has the best performance. This indicates that there is an optimal range for the value of W, within which the performance advantages can be fully exploited.
Impact of the GC node proportion P on performance: for the combinations with the same parameters of W and code rate, it can be found that the larger the GC node proportion P, the better the error performance.
Impact of the extended parity check node on performance: as seen from the two red curves in Figure 5, the BCH (60, 53) code (red solid line) is constructed from the BCH (60, 54) code (red dashed line) by adding one parity check node and one truncated bit, while also decreasing the GC node proportion P to keep the same code rate. The results show that the performance improvement brought by this optimization strategy is quite significant. To sum up, we obtained a set of optimal parameters with the best error correction performance, i.e., W = 4, P = 1/6, n = 60, k = 53, J = 4, R = 0.775. The subsequent analyses and optimized encoding and decoding schemes are therefore based on these settings.

5.2. Analysis of the Impact of Window Size on Decoding Performance

In this subsection, the effect of the window size B on the decoding performance is analyzed. The decoding window size is set as 60, 51, and 45, corresponding to maximum iteration numbers of 20, 17, and 15, respectively. As shown in Figure 6, the larger the window size B (or the maximum iteration number Max_iter, with B = Max_iter · d_v), the better the error correction performance. However, as shown in Equations (7) and (8), a larger B or Max_iter also leads to higher computational complexity and decoding latency. An appropriate decoding window size balances these two aspects.

5.3. Analysis of Intra-Layer GC Node Distribution on the Error Correction Performance

In this section, the effect of the intra-layer GC node distribution on the error correction performance is evaluated. We set the coding parameters as W = 4, P = 1/6, n = 60, k = 53, J = 4, and R = 0.775. As analyzed in Section 3.2, for the GC node code type BCH (60, 53) with W = 4 and P = 1/6, the uniform intra-layer GC node distribution gives N_GC = [165, 0, 110, 0, 110, 0], while the nonuniform intra-layer distribution gives N_GC = [522, 151, 1215, 95, 288, 0], which contains more distribution types with three GC nodes and, in total, approximately six times as many such bit nodes as the uniform distribution. Figure 7 shows the BER performance of the two kinds of intra-layer GC node distribution. It can be seen in Figure 7 that, with more bit nodes checked by two or more GC nodes, the nonuniform intra-layer GC node distribution leads to better BER performance over the whole SNR region and exhibits an earlier waterfall region.

5.4. Analysis of Decoding Convergence Characteristics of GC Nodes and SPC Nodes

In this section, we discuss the decoding convergence characteristics of GC nodes and SPC nodes over the decoding iterations at a given signal-to-noise ratio (SNR). As shown in Figure 8, the convergence speed of GC node decoding is significantly faster than that of the SPC nodes. The error bit rate of the GC nodes is about 0.18 times that of the SPC nodes. The average MI amplitude of the GC nodes is approximately 2.9 times that of the SPC nodes. The minimum MI amplitude of the GC nodes is larger than that of the SPC nodes in the middle stage of decoding and gradually converges after a certain number of iterations. In the last stage of decoding, the minimum MI of the GC nodes stops increasing and becomes smaller than that of the SPC nodes.
On the other hand, we compare the differences in decoding convergence between the fixed α_GC and adaptive α_GC schemes, where the fixed α_GC is 0.75 and the adaptive α_GC is given in (11). It can be seen that the adaptive weighting factor scheme has a faster decoding convergence speed and higher mutual information amplitudes.

5.5. Performance of Optimized HLNMS Decoding Schemes

Scheme I: Figure 8 shows the analysis of the decoding convergence characteristics of GC and SPC nodes at SNR = 2.45 dB. As shown in Figure 8a, the adaptive weighting factor scheme has a faster convergence rate than the fixed weighting factor scheme. For the adaptive weighting factor scheme, i.e., Scheme I, both the GC nodes and the SPC nodes have converged before the 13th iteration. Accordingly, in Figure 9, the decoding window size is set as B = 39, i.e., the maximum iteration number is Max_iter = 13. It can be clearly observed that the adaptive α_GC scheme achieves a lower post-FEC BER at the same pre-FEC BER.
Scheme II: As described in the early stopping strategy for GC node decoding in Scheme II, the selection of the threshold Th is important for reducing the decoding complexity while maintaining comparable BER performance. Based on the search method in Section 4.4, an appropriate threshold Th can be obtained. As shown in Figure 8c, the minimum MI amplitude of the GC nodes converges at the 16th iteration for the fixed α_GC scheme. Accordingly, the initial value of Th can be set as Th0 = 0.75. When the tolerance BER_Gap is set as 1.0 × 10^−5, the threshold is found to be Th = 2.5. As shown in Figure 10a,b, with the threshold Th = 2.5, the error performance of Scheme II does not deteriorate and is comparable to that without the early stopping strategy. Moreover, both the MI update reduction ratio and the decoding latency reduction ratio for GC nodes are shown in Figure 10b; both increase rapidly with the SNR and exceed 13% in the high-SNR region.

5.6. Performance Comparisons with Prior-Arts

Figure 11 shows performance comparisons with the prior art under the same coding parameters: the code rate is 0.775, the code length is 19,800 bits, and the maximum number of iterations is 20. Moreover, the compared schemes adopt the same uniform intra-layer GC node distribution and the same fixed α_GC of 0.75. In Figure 11, two kinds of GC nodes are adopted for the proposed scheme, i.e., the Hamming (15, 11) code with a GC node proportion of P = 1/24 and the BCH (60, 53) code with a GC node proportion of P = 1/6, where the GC node of the Hamming (15, 11) code is the same as the GC node type of [16]. Therefore, the performance gain of about 0.1 dB comes from the proposed HLNMS decoding algorithm, in which Chase–Pyndiah decoding is adopted for the GC nodes. Moreover, since the GC node of the BCH (60, 53) code has a longer code length and a higher GC node proportion of P = 1/6, a further error correction performance gain of about 0.2 dB is obtained. Compared with the LDPC–CC in [19], the proposed GLDPC–CC schemes demonstrate a more significant performance improvement than the scheme in [16].

6. Conclusions

In this paper, we have proposed optimized encoding and decoding schemes for generalized LDPC convolutional codes (GLDPC–CCs). On the encoding side, we have provided a joint optimization method over the GC node proportion and the GC component code type, together with an optimization of the intra-layer GC node distribution. Both optimizations lead to better error correction performance. On the decoding side, we have provided two optimized decoding schemes, targeting better performance and lower power consumption, respectively. The simulation results have verified the superiority of the proposed schemes for GLDPC–CCs over the prior art. However, the research in this paper stems mainly from specific engineering applications. The main contributions of this work lie in the innovation of the code construction and decoding algorithms, while the theoretical analysis is relatively limited. Accordingly, in the next stage of our research, we plan to conduct an in-depth theoretical analysis of the existing results, including the analysis of the minimum distance, the decoding threshold, and the error floor of the proposed codes, as well as the complexity analysis of the proposed decoding algorithms.

Author Contributions

Conceptualization, K.T. and Z.S.; methodology, Y.S., L.D. and Y.Z.; software, L.D., Y.Z., Y.S., and J.W.; validation, J.W. and T.L.; formal analysis, Y.W. and T.L.; investigation, T.L. and L.D.; resources, K.T. and Z.S.; data curation, Y.Z. and J.W.; writing—original draft preparation, L.D., Z.S. and Y.Z.; writing—review and editing, K.T., T.L., Y.S., J.W., and Y.W.; visualization, Y.Z., Y.S. and Y.W.; supervision, K.T. and Y.W.; project administration, Z.S., K.T. and Y.W.; funding acquisition, Z.S. and Y.W.; All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program under Grant 2022YFC3005702; the National Natural Science Foundation of China under Grant 62371101 and 62461017; the ZTE Industry-University-Institute Cooperation Funds under Grant IA20240000001; the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01609; the Natural Science Foundation of Guangxi under Grant 2022GXNSFBA035508 and 2023GXNSFAA026018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank the anonymous reviewers and the Editor for their helpful comments and review of this article.

Conflicts of Interest

Authors Kai Tao, Yinlong Shi, Jian Wang, Tian Liu and Yongben Wang are employed by the “ZTE Corporation”. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Tanner, R. A recursive approach to low complexity codes. IEEE Trans. Inf. Theory 1981, 27, 533–547. [Google Scholar] [CrossRef]
  2. Lentmaier, M.; Zigangirov, K.S. On generalized low-density parity-check codes based on Hamming component codes. IEEE Commun. Lett. 1999, 3, 248–250. [Google Scholar] [CrossRef]
  3. Kuo, F.; Hanzo, L. Symbol-flipping based decoding of generalized low-density parity-check codes over GF(q). In Proceedings of the IEEE Wireless Communications and Networking Conference, 2006, WCNC 2006, Las Vegas, NV, USA, 3–6 April 2006; Volume 3, pp. 1207–1211. [Google Scholar]
  4. Liva, G.; Ryan, W.E.; Chiani, M. Quasi-cyclic generalized LDPC codes with low error floors. IEEE Trans. Commun. 2008, 56, 49–57. [Google Scholar] [CrossRef]
  5. Wang, Y.; Fossorier, M. Doubly generalized LDPC codes over the AWGN channel. IEEE Trans. Commun. 2009, 57, 1312–1319. [Google Scholar] [CrossRef]
  6. Farooq, M.U.; Moloudi, S.; Lentmaier, M. Generalized LDPC codes with convolutional code constraints. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 479–484. [Google Scholar]
  7. Li, H.W.; Bi, N.J.; Sha, J. Design of raptor-like LDPC codes and high throughput decoder towards 100 Gbit/s throughput. ZTE Commun. 2023, 21, 86–92. [Google Scholar] [CrossRef]
  8. Shi, X.Y.; Han, T.Z.; Tian, H.; Zhao, D.F. Design of Raptor-Like Rate Compatible SC-LDPC Codes. ZTE Commun. 2022, 20, 16–21. [Google Scholar] [CrossRef]
  9. Miladinovic, N.; Fossorier, M. Generalized LDPC codes with Reed-Solomon and BCH codes as component codes for binary channels. In Proceedings of the GLOBECOM ’05, IEEE Global Telecommunications Conference, 2005, St. Louis, MO, USA, 28 November–2 December 2005. [Google Scholar]
  10. Yue, G.; Ping, L.; Wang, X. Generalized low-density parity-check codes based on Hadamard constraints. IEEE Trans. Inf. Theory 2007, 53, 1058–1079. [Google Scholar] [CrossRef]
  11. Djordjevic, I.B.; Xu, L.; Wang, T.; Cvijetic, M. GLDPC codes with Reed-Muller component codes suitable for optical communications. IEEE Commun. Lett. 2008, 12, 684–686. [Google Scholar] [CrossRef]
  12. Li, J.D.; Guo, K.; Chen, Y.H. Systematic GLDPC Codes with Zigzag Component Codes. Acta Electronica Sin. 2011, 39, 178. [Google Scholar]
  13. Guan, R.; Zhang, L. Hybrid Hamming GLDPC codes over the binary erasure channel. In Proceedings of the 2017 11th IEEE International Conference on Anti-Counterfeiting, Security, and Identification (ASID), Xiamen, China, 27–29 October 2017; pp. 130–133. [Google Scholar]
  14. Li, Y.; Huang, X.; Zhou, C.; Luo, Z.; Wu, H. A Construction Method of Doped Low-Density Parity-Check Codes Based on Quadratic Residue Codes. In Proceedings of the 2021 15th International Symposium on Medical Information and Communication Technology (ISMICT), Xiamen, China, 14–16 April 2021; pp. 221–226. [Google Scholar]
  15. Djordjevic, I.B.; Milenkovic, O.; Vasic, B. Generalized low-density parity-check codes for optical communication systems. J. Light. Technol. 2005, 23, 1939–1946. [Google Scholar] [CrossRef]
  16. Shi, Y.; Tao, K.; Wei, Z.; Wang, J. FPGA Verification of Generalized LDPC Convolutional Codes with Hamming Code. In CLEO 2024; Technical Digest Series; paper JTu2A.82; Optica Publishing Group: Washington, DC, USA, 2024. [Google Scholar]
  17. Zou, D.; Djordjevic, I. FPGA-Based Rate-Compatible LDPC Codes for the Next Generation of Optical Transmission Systems. IEEE Photonic J. 2016, 8, 1–8. [Google Scholar] [CrossRef]
  18. Shen, L.; Wu, Y.; Xu, Y.; You, X.; Gao, X.; Zhang, W. GLDPC-PC Codes: Channel Coding Towards 6G Communications. IEEE Commun. Mag. 2025; accepted. [Google Scholar] [CrossRef]
  19. Cai, Y.; Wang, W.; Tao, K.; Lei, M.; Yin, J.; Zhou, X.; Zhang, S.; Chien, H. FPGA assisted design of concatenated LDPC convolutional and BCH codes for optical fiber communications. In Proceedings of the 45th European Conference on Optical Communication (ECOC 2019), Dublin, Ireland, 22–26 September 2019; pp. 1–4. [Google Scholar]
  20. Boutros, J.; Pothier, O.; Zemor, G. Generalized low density (Tanner) codes. In Proceedings of the 1999 IEEE International Conference on Communications (Cat. No. 99CH36311), Vancouver, BC, Canada, 6–10 June 1999; pp. 441–445. [Google Scholar]
  21. Chase, D. Class of algorithms for decoding block codes with channel measurement information. IEEE Trans. Inf. Theory 1972, 18, 170–182. [Google Scholar] [CrossRef]
  22. Pyndiah, R.M. Near-optimum decoding of product codes block turbo codes. IEEE Trans. Commun. 1998, 46, 1003–1010. [Google Scholar] [CrossRef]
  23. Chang, D.; Yu, F.; Xiao, Z.; Stojanovic, N.; Hauske, F.N.; Cai, Y.; Xie, C.; Li, L.; Xu, X.; Xiong, Q. LDPC convolutional codes using layered decoding algorithm for high speed coherent optical transmission. In Proceedings of the OFC/NFOEC 2012, OW1H.4, Los Angeles, CA, USA, 4–8 March 2012. [Google Scholar]
  24. Zhang, L.; Tao, K.; Qian, W.; Wang, W.; Liang, J.; Cai, Y.; Feng, Z. Real-Time FPGA Investigation of Interplay Between Probabilistic Shaping and Forward Error Correction. J. Light. Technol. 2022, 40, 1339–1345. [Google Scholar] [CrossRef]
  25. Zou, D.; Djordjevic, I. FPGA implementation of concatenated non-binary QC-LDPC codes for high-speed optical transport. Opt. Express 2015, 23, 14501–14509. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The research progress of GLDPC codes [1,2,3,4,5,6,9,10,11,12,13,14,18].
Figure 2. Construction process of the parity check matrix of GLDPC–CCs: (a) Cutting operation; (b) Splicing operation; (c) Copying and pasting operations.
Figure 3. Encoding process of GLDPC–CCs.
Figure 4. Parallel decoding architecture diagram of GLDPC–CCs.
Figure 5. Performance comparison of combinations of GC node types and GC node proportions: α_GC = 1, α = 0.75, B = 60, Max_iter = 20: (a) Post-FEC BER versus Pre-FEC BER; (b) Post-FEC BER versus SNR.
Figure 6. Effect of window size on the decoding performance: W = 4, P = 1/6, n = 60, k = 53, J = 4, R = 0.775, α_GC = 0.75, α = 0.75: (a) Post-FEC BER versus Pre-FEC BER; (b) Post-FEC BER versus SNR.
Figure 7. Performance comparison of different intra-layer GC node distributions: W = 4, P = 1/6, n = 60, k = 53, J = 4, R = 0.775, α_GC = 0.75, α = 0.75, B = 60, Max_iter = 20: (a) Post-FEC BER versus Pre-FEC BER; (b) Post-FEC BER versus SNR.
Figure 8. Analysis of decoding convergence characteristics of GC/SPC nodes at SNR = 2.45 dB: (a) Average BER performance; (b) Average mutual information; (c) Minimum mutual information.
Figure 9. Performance comparison of the fixed weighting factor of 0.75 and the adaptive weighting factor (Scheme I): (a) Post-FEC BER versus Pre-FEC BER; (b) Post-FEC BER versus SNR.
Figure 10. Performance of Scheme II: (a) Bit error rate after error correction; (b) Complexity reduction ratio of GC nodes; B = 60, Max_iter = 20, α = 0.75, α_GC = 0.75.
Figure 11. Performance comparison with the prior art: (a) Post-FEC BER versus Pre-FEC BER; (b) Post-FEC BER versus SNR [16,19].
Table 1. Component code type and parameter comparison.

W   n     k     P       BCH Code      Truncated Number J   Original BCH Code   Rate
1   15    11    1/24    (15, 11)      0                    (15, 11)            0.775
2   30    25    1/12    (30, 25)      1                    (31, 26)            0.775
2   30    20    1/32    (30, 20)      1                    (31, 21)            0.775
4   60    54    1/4     (60, 54)      3                    (63, 57)            0.775
4   60    53    1/6     (60, 53)      4                    (63, 57)            0.775
4   60    42    1/28    (60, 42)      3                    (63, 45)            0.775
8   120   106   1/6     (120, 106)    7                    (127, 113)          0.775
8   255   231   17/56   (255, 231)    0                    (255, 231)          0.775
Table 2. The inter-layer distribution types of GC nodes.

Type   Layer of GC Nodes               Number of GC Nodes   Number of SPC Nodes
1      Layer 1 + Layer 2               2                    1
2      Layer 1 + Layer 2 + Layer 3     3                    0
3      Layer 2 + Layer 3               2                    1
4      Layer 2 + Layer 3 + Layer 1     3                    0
5      Layer 3 + Layer 1               2                    1
6      Layer 3 + Layer 1 + Layer 2     3                    0
