Article

Low-Density Parity-Check Decoding Algorithm Based on Symmetric Alternating Direction Method of Multipliers

1
School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471000, China
2
Intelligent System Science and Technology Innovation Center, Longmen Laboratory, Luoyang 471000, China
3
College of Physics and Telecommunication Engineering, Zhoukou Normal University, Zhoukou 466001, China
4
School of Computer, Henan University of Engineering, Zhengzhou 451191, China
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(4), 404; https://doi.org/10.3390/e27040404
Submission received: 18 March 2025 / Revised: 5 April 2025 / Accepted: 7 April 2025 / Published: 9 April 2025
(This article belongs to the Special Issue Advances in Information and Coding Theory, the Third Edition)

Abstract

The Alternating Direction Method of Multipliers (ADMM) has proven to be an efficient approach for implementing linear programming (LP) decoding of low-density parity-check (LDPC) codes. By introducing penalty terms into the LP decoding model's objective function, ADMM-based variable-node penalized decoding effectively mitigates non-integral solutions, thereby improving frame error rate (FER) performance, especially in the low signal-to-noise ratio (SNR) region. In this paper, we leverage the ADMM framework to derive explicit iterative steps for solving the LP decoding problem for LDPC codes with penalty functions. To further enhance decoding efficiency and accuracy, we propose an LDPC decoding algorithm based on the symmetric ADMM (S-ADMM). We also establish contraction properties satisfied by the iterative sequence of the algorithm. Through simulation experiments, we evaluate the proposed S-ADMM decoder on three standard LDPC codes and three representative fifth-generation (5G) codes. The results show that the S-ADMM decoder consistently outperforms conventional ADMM penalized decoders, offering significant improvements in decoding performance.

1. Introduction

The linear programming (LP) decoder, initially proposed by Feldman et al. [1,2,3], serves as an approximation to maximum-likelihood (ML) decoding and is widely recognized as a key technique for decoding binary linear codes. Despite its theoretical strength, traditional LP decoding, when solved using generic solvers, is computationally demanding and can lead to considerable delays. This makes it less practical for real-time applications, where decoding speed is critical. To overcome this limitation, Barman et al. [4] introduced a more efficient and tractable decoding framework by applying the Alternating Direction Method of Multipliers (ADMM) [5]. This approach significantly improves computational efficiency while retaining the benefits of LP decoding. The ADMM-based LP decoder offers a decoding performance that is comparable to that of belief-propagation (BP) decoders [6], making it a more viable option for practical decoding in real-world systems. Additionally, the ADMM-LP decoder provides a scalable and faster alternative, especially in scenarios where low latency and high throughput are essential.
Currently, research on ADMM decoding can be broadly classified into several key areas. One direction focuses on accelerating the decoding process. Several studies aim to reduce the complexity of ADMM-LP decoding [7,8,9,10,11,12]. In [11], Wei et al. proposed an iterative projection algorithm with linear worst-case complexity. Xia et al. proposed a line segment projection algorithm [12] that requires neither sorting nor iterative operations, and conducted further research to improve it. The line segment projection algorithm is a low-complexity approximate projection algorithm that saves more projection time than cut-search algorithms; however, its decoding performance is somewhat inferior to that of more precise projection algorithms. In addition to improving projection techniques, other studies have taken different approaches to enhance the efficiency of ADMM decoding. Bai et al. proposed a check-polytope-free ADMM framework in [13], which bypasses the need for complex check polytope computations, further reducing the overall computational burden. Yang et al. demonstrated in [14] that linear complexity offers a more scalable solution to decoding as the code length increases. In [15], Xia et al. introduced an efficient hybrid projection algorithm based on the even vertex projection algorithm (EVA), which combines different projection techniques to minimize decoding time while maintaining performance. In [16], the authors proposed a sparse affine projection algorithm that projects vectors onto the affine hull of vertices, reducing the complexity of each iteration. Hierarchical ADMM decoding [17] and node scheduling [18] have successfully reduced the required decoding iterations by changing the update strategy during the decoding process.
Another critical aspect of LDPC codes is their decoding performance. One of the challenges in LDPC decoding arises from the fact that the LP model relaxes the binary constraints typically imposed on the variables, allowing them to take continuous values instead of being strictly binary. As a result, the solution can sometimes be fractional, leading to suboptimal results. To mitigate this issue, researchers have explored various techniques to refine the decoding process and ensure that the decoder outputs integral solutions. In [6], the authors introduced a penalty term designed to assign smaller values to integral points, making them more favorable than fractional solutions. By doing so, the penalty encourages the algorithm to converge toward integer solutions that correspond to valid codewords, thereby improving decoding accuracy and performance. Building on this idea, Jiao et al. [19] assigned distinct penalty parameters based on node degree; this allows more flexibility in how penalties are applied, leading to better decoding performance, particularly for irregular LDPC codes, and helps balance the penalties across the network, further increasing the likelihood that the decoder outputs a valid codeword. The authors also explored and compared a variety of penalty functions to further optimize decoding performance, considering different strategies for balancing the penalties between valid and invalid solutions. Wang et al. [20] improved performance by allowing a more nuanced penalization of invalid solutions, thus enhancing the likelihood of finding the correct codeword. In another study, ref. [21] introduced a penalty term that penalizes check nodes rather than variable nodes; this targets the constraints more directly, leading to a more effective decoding strategy, particularly for certain types of error patterns.
Additionally, some researchers have proposed new constraints that perform functions similar to penalty terms but with distinct mathematical formulations. For example, Wu et al. [22] introduced LP-box constraints, which impose bounds on the values that the variables can take, providing a new way of controlling the solution space and improving decoding efficiency. In another work [23], the authors further refined the solution space and enhanced the accuracy of the decoding process. Moreover, ref. [24] proposed a restartable ADMM decoding framework, which allows for periodic resets of the algorithm to avoid local minima and improve overall decoding performance.
In ADMM penalized decoding, the objective function is non-convex, and the Lagrange multipliers must be updated in each iteration to handle the constraints. The ADMM converges slowly, significantly increasing the number of iterations; at low signal-to-noise ratio (SNR) in particular, the decoding performance degrades further, failing to meet the low-latency requirements of 5G communication scenarios.
In this paper, unlike ADMM non-convex quadratic penalized decoders, we propose an LDPC decoding algorithm based on the symmetric ADMM (S-ADMM). The key difference lies in updating the multiplier immediately after minimizing the first variable, then minimizing the second variable and updating the multiplier again; theoretically, this approach should therefore yield superior numerical results. Additionally, to ensure global convergence of the iterative sequence, we introduce a contraction factor in an intermediate update of the Lagrange multipliers. We refer to the proposed decoding method as the S-ADMM penalized decoder. In this approach, the penalty terms applied to all valid codewords are uniform and smaller than those assigned to invalid solutions. The penalty for valid codewords is carefully chosen to encourage solutions that satisfy the desired decoding conditions. Invalid solutions, such as binary solutions that fail to satisfy all parity-check equations or non-binary solutions, are assigned significantly larger penalty terms. This differential penalization heavily discourages invalid solutions, effectively reducing their likelihood of attaining the minimum objective function value. Consequently, the decoder is more likely to converge to a valid codeword, as the penalties imposed on invalid solutions make them increasingly unfavorable in the optimization process.
The main contributions of this paper are summarized as follows:
  • We adopt a split optimization strategy to speed up the convergence for the ADMM decoding algorithm in handling non-convex quadratic models. By ensuring the relative independence and stability of each update, we solve the problem of unstable convergence of the ADMM in non-convex problems;
  • We propose the S-ADMM decoding algorithm based on penalty terms and derive the algorithm process for the S-ADMM decoding model;
  • We establish some contraction properties satisfied by the iterative sequence of the S-ADMM algorithm;
  • Simulations demonstrate that the S-ADMM decoding algorithm outperforms the ADMM penalized decoders.
The remainder of this paper is organized as follows. In Section 2, we provide a comprehensive overview of the LP decoding model and introduce the ADMM penalized decoding approach, discussing its foundational principles and applications in decoding. Section 3 is dedicated to the presentation of the S-ADMM decoding model. We outline the specific formulation of the ADMM algorithm tailored to solve this model, and describe the overall decoding framework that integrates the proposed method. In Section 4, we thoroughly analyze the contraction properties of the S-ADMM decoder. Section 5 presents extensive simulation results, where we compare the performance of the S-ADMM decoder against existing methods, demonstrating its superior decoding efficiency and accuracy in various scenarios. Section 6 offers concluding remarks and summarizes the contributions of this work.

2. Preliminaries

Assume we are given an LDPC code of length $n$ and dimension $k$ specified by its $(n-k) \times n$ parity-check matrix $H$. Let $I = \{1, 2, \dots, n\}$ and $J = \{1, 2, \dots, m\}$ denote the sets of all variable nodes and check nodes, respectively, where $m = n - k$. $N_v(i)$ is the set of all check nodes connected to variable node $v_i$, and $d_{v_i}$ denotes the degree of $v_i$; similarly, $N_c(j)$ is the set of all variable nodes connected to check node $c_j$, and $d_{c_j}$ denotes the degree of $c_j$. For regular LDPC codes, all check nodes have the same degree $d_{c_j}$ and all variable nodes have the same degree $d_{v_i}$.
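As a concrete illustration of this notation, the following Python sketch builds the neighborhoods $N_v(i)$ and $N_c(j)$ and the degree vectors from a parity-check matrix. The (7,4) Hamming code used here is a small stand-in example, not one of the codes studied later, and the helper name `tanner_graph` is ours:

```python
import numpy as np

def tanner_graph(H):
    """Build neighborhood sets and degrees from a parity-check matrix H.

    Returns (N_v, N_c, d_v, d_c), where N_v[i] lists the check nodes
    attached to variable node i and N_c[j] the variable nodes attached
    to check node j, matching the notation of this section.
    """
    H = np.asarray(H)
    m, n = H.shape                              # m = n - k checks, n variables
    N_v = [np.flatnonzero(H[:, i]) for i in range(n)]
    N_c = [np.flatnonzero(H[j, :]) for j in range(m)]
    d_v = np.array([len(s) for s in N_v])       # variable-node degrees
    d_c = np.array([len(s) for s in N_c])       # check-node degrees
    return N_v, N_c, d_v, d_c

# (7,4) Hamming code as a toy example:
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
N_v, N_c, d_v, d_c = tanner_graph(H)
print(d_v)   # [3 2 2 2 1 1 1]
print(d_c)   # [4 4 4]
```

Since this code has unequal variable-node degrees, it is irregular in the sense defined above.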

2.1. LDPC Decoding Algorithm Based on ADMM-LP

Assume that the vector $x = (x_1, x_2, \dots, x_n)$ is the codeword to be transmitted, where $x \in \mathcal{C}$ and $\mathcal{C}$ is the code set, and that the received word is $w = (w_1, w_2, \dots, w_n)$. Maximum-likelihood decoding can then be relaxed into the optimization problem
$$\min_x \ \gamma^T x, \quad \text{s.t.} \ T_j x \in \mathbb{PP}_{d_{c_j}}, \ \forall j \in J, \tag{1}$$
where $\gamma$ denotes the log-likelihood ratio (LLR) vector of the codeword after passing through the channel, with $i$-th entry $\gamma_i = \log \frac{\Pr(w_i \mid x_i = 0)}{\Pr(w_i \mid x_i = 1)}$. Each row of the check matrix corresponds to a check node, i.e., to one check equation. The selection matrix $T_j$, of size $d_{c_j} \times n$, picks out the variable nodes connected to check node $c_j$. For example, for the check-matrix row $h_j = (1, 0, 0, 1, 1, 0, 0, 0, 1)$, $T_j$ is
$$T_j = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
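The construction of $T_j$ can be sketched in a few lines of Python; the helper name `selection_matrix` is ours, not from the paper:

```python
import numpy as np

def selection_matrix(h_j):
    """Selection matrix T_j for one row h_j of the parity-check matrix:
    a d_cj x n binary matrix such that T_j @ x picks out the variables
    participating in check j."""
    h_j = np.asarray(h_j)
    idx = np.flatnonzero(h_j)                       # participating variable nodes
    T = np.zeros((len(idx), len(h_j)), dtype=int)
    T[np.arange(len(idx)), idx] = 1                 # one unit row per neighbor
    return T

# The row used in the text: h_j = (1,0,0,1,1,0,0,0,1)
h_j = [1, 0, 0, 1, 1, 0, 0, 0, 1]
T_j = selection_matrix(h_j)
print(T_j.shape)        # (4, 9)
x = np.arange(1, 10)
print(T_j @ x)          # [1 4 5 9] -> entries 1, 4, 5, and 9 of x
```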
$\mathbb{PP}_{d_{c_j}}$ denotes the parity polytope corresponding to the $j$-th check node. We then transform Equation (1) into the ADMM solving framework:
$$\min_x \ \gamma^T x, \quad \text{s.t.} \ T_j x = z_j, \ z_j \in \mathbb{PP}_{d_{c_j}}, \ \forall j \in J, \tag{2}$$
where $z = (z_1, z_2, \dots, z_m)$ collects the auxiliary variables. The augmented Lagrangian function of the constrained optimization problem (2) is
$$L_\beta(x, z, \lambda) = \gamma^T x + \sum_j \lambda_j^T (T_j x - z_j) + \frac{\beta}{2} \sum_j \| T_j x - z_j \|_2^2, \tag{3}$$
where $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_m)$ is the Lagrangian multiplier and $\beta > 0$ is the penalty parameter. The ADMM iterates are
$$\begin{aligned} x^{k+1} &= \arg\min_x L_\beta(x, z^k, \lambda_j^k), \\ z_j^{k+1} &= \arg\min_z L_\beta(x^{k+1}, z_j, \lambda_j^k), \\ \lambda_j^{k+1} &= \lambda_j^k + \beta (T_j x^{k+1} - z_j^{k+1}). \end{aligned} \tag{4}$$
The update rules for each iteration are shown in Equations (5a)–(5c).
$$x_i^{k+1} = \Pi_{[0,1]}\!\left[ \frac{1}{d_{v_i}} \left( \sum_{j \in N_v(i)} \left( (z_j^{(i)})^k - \frac{(\lambda_j^{(i)})^k}{\beta} \right) - \frac{\gamma_i}{\beta} \right) \right], \tag{5a}$$
$$z_j^{k+1} = \Pi_{\mathbb{PP}_{d_{c_j}}}\!\left( T_j x^{k+1} + \frac{\lambda_j^k}{\beta} \right), \tag{5b}$$
$$\lambda_j^{k+1} = \lambda_j^k + \beta (T_j x^{k+1} - z_j^{k+1}). \tag{5c}$$
In the update of $x_i^{k+1}$, we use the identity $\big(\sum_{j \in J} T_j^T T_j\big)_{i,l} = d_{v_i}\, \delta_{i,l}$. For simplicity, $z_j^{(i)}$ denotes the $i$-th component of $T_j^T z_j$, and similarly $\lambda_j^{(i)}$ denotes the $i$-th component of $T_j^T \lambda_j$. The operation $\Pi_{[0,1]}(q)$ projects the vector $q$ componentwise onto the interval $[0,1]$. In the update of $z_j^{k+1}$, $\Pi_{\mathbb{PP}_{d_{c_j}}}(q)$ denotes the Euclidean projection of the vector $q$ onto the check polytope $\mathbb{PP}_{d_{c_j}}$. The projection itself is not the focus of this paper; we directly adopt the projection algorithm proposed in the literature, to which we refer the reader for details [25].
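A minimal sketch of one iteration (5a)–(5c) in Python follows. The exact check-polytope projection of [25] is out of scope here, so the routine accepts it as a callable; in the usage below, a simple box clipping stands in for it, which is only a crude placeholder and not the algorithm of [25]:

```python
import numpy as np

def admm_lp_iteration(gamma, Ts, z, lam, beta, project_pp):
    """One ADMM-LP sweep implementing updates (5a)-(5c).

    Ts is the list of selection matrices T_j; project_pp(v) stands for
    the Euclidean projection onto the check polytope, whose exact
    algorithm [25] is outside the scope of this sketch.
    """
    d_v = sum(T.T @ T for T in Ts).diagonal().astype(float)  # variable-node degrees

    # (5a): variable-node update, then componentwise projection onto [0,1]
    t = sum(T.T @ (zj - lj / beta) for T, zj, lj in zip(Ts, z, lam))
    x = np.clip((t - gamma / beta) / d_v, 0.0, 1.0)

    # (5b): check-node update by projection onto the check polytope
    z_new = [project_pp(T @ x + lj / beta) for T, lj in zip(Ts, lam)]

    # (5c): multiplier (dual) update
    lam_new = [lj + beta * (T @ x - zj) for T, lj, zj in zip(Ts, lam, z_new)]
    return x, z_new, lam_new

# One sweep on a toy (7,4) Hamming code; np.clip onto the box is only a
# crude placeholder for the true check-polytope projection:
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
Ts = [np.eye(H.shape[1], dtype=int)[np.flatnonzero(row)] for row in H]
gamma = np.ones(7)
z0 = [np.full(T.shape[0], 0.5) for T in Ts]
lam0 = [np.zeros(T.shape[0]) for T in Ts]
x1, z1, lam1 = admm_lp_iteration(gamma, Ts, z0, lam0, beta=1.0,
                                 project_pp=lambda v: np.clip(v, 0.0, 1.0))
```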

2.2. ADMM-LP Decoding Algorithm with Penalty Term

Although the ADMM-LP decoding algorithm overcomes the error-floor problem of message passing (MP) decoding algorithms, experiments have shown that under low-SNR conditions the error-correction performance of the ADMM-LP algorithm is often below that of the MP algorithm under the same conditions. To overcome this drawback, Liu et al. [6] proposed the ADMM algorithm with a penalty term. The core of this algorithm is to add a penalty term to the objective function of Equation (2), making it $\gamma^T x + \sum_i g(x_i)$; for the ADMM penalized decoder (ADMM-PD), the mathematical expression is
$$\min_x \ \gamma^T x + \sum_i g(x_i), \quad \text{s.t.} \ T_j x = z_j, \ z_j \in \mathbb{PP}_{d_{c_j}}, \ \forall j \in J. \tag{6}$$
For Equation (6), after substituting into the ADMM template, the updates of $\lambda$ and $z$ remain unchanged; only the update of $x$ changes, from Equation (5a) to Equation (7):
$$x_i^{k+1} = \Pi_{[0,1]}\!\left[ \frac{1}{d_{v_i}} \left( \sum_{j \in N_v(i)} \left( (z_j^{(i)})^k - \frac{(\lambda_j^{(i)})^k}{\beta} \right) - \frac{\gamma_i}{\beta} - \frac{1}{\beta} (\nabla_x g)_i \right) \right]. \tag{7}$$
Regarding the selection of the penalty function $g(x)$, Liu proposed several different penalty functions. The penalty function $g(x) = -\alpha |x - 0.5|$ is referred to as the $L_1$ penalty; for $g(x) = -\alpha \|x - 0.5\|_2^2$, the decoder is denoted ADMM-$L_2$. For the $L_2$ penalty, Equation (7) becomes Equation (8):
$$x_i = \Pi_{[0,1]}\!\left( \frac{t_i - \alpha/\beta}{d_{v_i} - 2\alpha/\beta} \right), \tag{8}$$
where $t_i = \sum_{j \in N_v(i)} \left( z_j^{(i)} - \frac{1}{\beta} \lambda_j^{(i)} \right) - \frac{1}{\beta} \gamma_i$. The parameter $\alpha$ of the penalty function $g(x)$ is a positive constant; when $\alpha = 0$, the ADMM-PD decoding algorithm degenerates into the ADMM-LP decoding algorithm. In the ADMM decoding process, the over-relaxation (OR) technique can be used to make the algorithm converge to the optimal solution faster. The OR acceleration strategy replaces $T_j x$ in Formulas (5b) and (5c) with
$$\rho T_j x + (1 - \rho) z_j, \tag{9}$$
where $\rho$ is the relaxation coefficient; for $\rho > 1$, the technique is called over-relaxation. The main idea is to form a weighted combination of the latest value of the current iteration and the value of the previous iteration so as to obtain the best convergence speed.
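The $L_2$-penalized update (8) and the over-relaxation mix (9) are easy to state in code. This is an illustrative sketch with our own helper names, assuming $d_{v_i} > 2\alpha/\beta$ so the denominator stays positive:

```python
import numpy as np

def l2_penalized_x(t, d_v, alpha, beta):
    """Variable-node update (8) for the L2 penalty g(x) = -alpha*||x - 0.5||^2:
    x_i = Pi_[0,1]((t_i - alpha/beta) / (d_vi - 2*alpha/beta)).
    Assumes d_vi > 2*alpha/beta so the denominator stays positive."""
    return np.clip((t - alpha / beta) / (d_v - 2.0 * alpha / beta), 0.0, 1.0)

def over_relax(Tx, z_prev, rho):
    """Over-relaxation (9): rho*T_j x + (1 - rho)*z_j replaces T_j x
    in updates (5b) and (5c); rho > 1 gives over-relaxation."""
    return rho * Tx + (1.0 - rho) * z_prev

# With alpha = 0 the update reduces to the plain ADMM-LP step (5a),
# i.e. t_i / d_vi clipped componentwise to [0,1]:
t = np.array([1.2, -0.4, 2.5])
d_v = np.array([3.0, 3.0, 3.0])
x_plain = l2_penalized_x(t, d_v, alpha=0.0, beta=1.0)
```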

3. S-ADMM Decoder

As is well known, sparsity means that each variable node is constrained by only a few check nodes, resulting in weak global coupling. The single dual-variable update of the traditional ADMM may not fully coordinate local conflicts, slowing convergence. In addition, in sparse structures the dependency chains between variable nodes are relatively short, and a single dual-variable update may not propagate information in a timely manner, delaying information transmission. In response, we adopt a staged strategy that coordinates local conflicts based on the current primal variables and the historical auxiliary variables: we first resolve conflicts among some check nodes based on the current residuals, and then make further adjustments based on the updated values. On the basis of the ADMM, we modify the update steps by introducing a new relaxation factor, so that the admissible range of parameters becomes wider.
Consider the model with the $L_2$ penalty function:
$$\min_x \ \gamma^T x - \alpha \|x - 0.5\|_2^2, \quad \text{s.t.} \ T_j x = z_j, \ \forall j \in J. \tag{10}$$
Due to the $L_2$ penalty function we use, the model is non-convex. In this paper, we propose an S-ADMM for solving the possibly non-convex optimization problem (10), whose iterative scheme is
$$\begin{aligned} x^{k+1} &= \arg\min_x L_\beta(x, z^k, \lambda_j^k), \\ \lambda_j^{k+\frac{1}{2}} &= \lambda_j^k + \eta \beta (T_j x^{k+1} - z_j^k), \\ z_j^{k+1} &= \arg\min_z L_\beta(x^{k+1}, z_j, \lambda_j^{k+\frac{1}{2}}), \\ \lambda_j^{k+1} &= \lambda_j^{k+\frac{1}{2}} + \beta (T_j x^{k+1} - z_j^{k+1}). \end{aligned} \tag{11}$$
In the S-ADMM iterative scheme (11), we add an intermediate update of the Lagrange multiplier (i.e., $\lambda^{k+\frac{1}{2}}$). At the same time, to improve the numerical performance of the algorithm, we introduce a parameter $\eta$ for this term; for the different codes in the simulations, the value of $\eta$ is tuned individually. Here $\eta \in (-1, 1)$. Note that for $\eta = 0$, the iterative scheme (11) reduces to the classic ADMM (4). As we will demonstrate, introducing this additional relaxation factor into the S-ADMM scheme (11) plays a crucial role in ensuring that the sequence generated by the algorithm exhibits strictly contractive behavior relative to the solution set of (10). This contraction property allows us to establish rigorous worst-case convergence rates for the S-ADMM method. Importantly, this analysis can be conducted without further assumptions or modifications to the underlying model (10), highlighting the robustness of the scheme's convergence guarantees.
For model (10), the specific iteration rules for each variable are as follows:
$$x^{k+1} = \arg\min_x L_\beta(x, z_j^k, \lambda_j^k), \quad \text{i.e.,} \quad x_i^{k+1} = \Pi_{[0,1]}\!\left( \frac{\sum_{j \in N_v(i)} \left( (z_j^{(i)})^k - \frac{1}{\beta} (\lambda_j^{(i)})^k \right) - \frac{\gamma_i}{\beta} - \frac{\alpha}{\beta}}{d_{v_i} - \frac{2\alpha}{\beta}} \right), \tag{12a}$$
$$\lambda_j^{k+\frac{1}{2}} = \lambda_j^k + \eta \beta (T_j x^{k+1} - z_j^k), \tag{12b}$$
$$z_j^{k+1} = \arg\min_z L_\beta(x^{k+1}, z, \lambda_j^{k+\frac{1}{2}}) = \Pi_{\mathbb{PP}_{d_{c_j}}}\!\left( T_j x^{k+1} + \frac{\lambda_j^{k+\frac{1}{2}}}{\beta} \right), \tag{12c}$$
$$\lambda_j^{k+1} = \lambda_j^{k+\frac{1}{2}} + \beta (T_j x^{k+1} - z_j^{k+1}). \tag{12d}$$
Algorithm 1 provides the specific process of the S-ADMM decoding algorithm. From the specific implementation of the algorithm, it can be seen that the entire decoding algorithm is still an iterative architecture, similar to the BP decoding algorithm, where data are still passed back and forth between variable nodes and check nodes for iterative operations.
Algorithm 1 Decoding Algorithm Based on S-ADMM.
1. Construct the non-negative LLR vector $\gamma$ from the received vector $w$.
2. Construct the selection matrices $T_j$, one for each row of the check matrix.
3. Initialization: $\lambda^0$ is an all-zero vector and $z^0$ is an all-0.5 vector.
4. repeat
5.   iter ← iter + 1
6.   for all $i \in I$ do
7.     $x_i^{k+1} = \Pi_{[0,1]}\!\left( \left( \sum_{j \in N_v(i)} \left( (z_j^{(i)})^k - \frac{1}{\beta} (\lambda_j^{(i)})^k \right) - \frac{\gamma_i}{\beta} - \frac{\alpha}{\beta} \right) \Big/ \left( d_{v_i} - \frac{2\alpha}{\beta} \right) \right)$
8.   end for
9.   for all $j \in J$ do
10.    $\lambda_j^{k+\frac{1}{2}} = \lambda_j^k + \eta \beta (T_j x^{k+1} - z_j^k)$
11.    $z_j^{k+1} = \Pi_{\mathbb{PP}_{d_{c_j}}}\!\left( T_j x^{k+1} + \frac{\lambda_j^{k+\frac{1}{2}}}{\beta} \right)$
12.    $\lambda_j^{k+1} = \lambda_j^{k+\frac{1}{2}} + \beta (T_j x^{k+1} - z_j^{k+1})$
13.  end for
14. until $\sum_j \| T_j x^k - z_j^k \|_2^2 < \epsilon^2$ or iter $\geq I_{max}$
15. return $x$
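The loop of Algorithm 1 can be sketched in Python as follows. The exact projection onto $\mathbb{PP}_{d_{c_j}}$ [25] is again abstracted behind a callable, and the box clipping used in the demo is only a placeholder, so this sketch illustrates the control flow rather than a production decoder:

```python
import numpy as np

def s_admm_decode(gamma, Ts, beta, alpha, eta, project_pp,
                  max_iter=200, eps=1e-5):
    """S-ADMM penalized decoding following the steps of Algorithm 1.

    project_pp stands in for the exact projection onto the check
    polytope [25]; eta in (-1, 1) is the relaxation factor of the
    intermediate multiplier update.
    """
    d_v = sum(T.T @ T for T in Ts).diagonal().astype(float)
    z = [np.full(T.shape[0], 0.5) for T in Ts]     # z^0: all-0.5 vectors
    lam = [np.zeros(T.shape[0]) for T in Ts]       # lambda^0: all-zero vectors

    for _ in range(max_iter):
        # step 7: penalized variable-node update (12a)
        t = sum(T.T @ (zj - lj / beta) for T, zj, lj in zip(Ts, z, lam))
        x = np.clip((t - gamma / beta - alpha / beta)
                    / (d_v - 2.0 * alpha / beta), 0.0, 1.0)
        # steps 10-12: intermediate multiplier, projection, final multiplier
        for j, T in enumerate(Ts):
            lam_half = lam[j] + eta * beta * (T @ x - z[j])
            z[j] = project_pp(T @ x + lam_half / beta)
            lam[j] = lam_half + beta * (T @ x - z[j])
        # step 14: stop once the primal residual is small enough
        if sum(np.sum((T @ x - zj) ** 2) for T, zj in zip(Ts, z)) < eps ** 2:
            break
    return np.round(x).astype(int)

# Demo on a (7,4) Hamming code with strong evidence for the all-zero
# codeword; box clipping again replaces the exact polytope projection:
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
Ts = [np.eye(H.shape[1], dtype=int)[np.flatnonzero(row)] for row in H]
gamma = 2.0 * np.ones(7)
x_hat = s_admm_decode(gamma, Ts, beta=2.0, alpha=0.3, eta=-0.15,
                      project_pp=lambda v: np.clip(v, 0.0, 1.0))
print(x_hat)   # decodes to the all-zero codeword
```

Note that $\alpha$ must be chosen with $2\alpha/\beta$ below the smallest variable-node degree, as in Equation (12a).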
In each iteration of the ADMM algorithm, the computational complexity of updating the variable $x$ is $O(n)$. Similarly, the update of the auxiliary variable $z$ has complexity $O(m)$, and the update of the Lagrange multiplier $\lambda$ also has complexity $O(m)$. The S-ADMM scheme introduces an additional step in which the intermediate multiplier $\lambda^{k+\frac{1}{2}}$ is updated; however, the complexity of this update remains $O(m)$, consistent with the complexity of updating $z$ and $\lambda$ in the ADMM scheme. In Table 1, we provide the number of operations and the complexity of each step of the S-ADMM algorithm, where $D_v$ denotes the maximum column weight of the matrix $H$ and $D_c$ the maximum row weight. Thus, the overall complexity of the S-ADMM method is dominated by the updates of $x$, $z$, and $\lambda$, with the intermediate update of $\lambda$ not significantly affecting the total computational cost. In addition, in Table 2 we list the variable updates, average iteration counts, complexity, and contraction properties of the S-ADMM and A-ADMM-$L_2$ algorithms.
In the following analysis, we turn our attention to comparing the average number of iterations required for the algorithm to converge to the correct codeword, providing insight into the efficiency of the S-ADMM scheme in practical applications.

4. Algorithm and Contraction Analysis

4.1. Variational Reformulation of Equation (10)

The augmented Lagrangian function of the model (10) is as follows:
$$L(x, z, \lambda) = \gamma^T x - \alpha \|x - 0.5\|_2^2 + \sum_j \lambda_j^T (T_j x - z_j) + \frac{\beta}{2} \sum_j \|T_j x - z_j\|_2^2. \tag{13}$$
To solve problem (10), we need to find $y^* = (x^*, z^*, \lambda^*) \in \Omega$ such that [26]
$$\phi(u) - \phi(u^*) + (y - y^*)^T F(y^*) \geq 0, \quad \forall y \in \Omega. \tag{14}$$
Suppose
$$\phi_1(x) = \gamma^T x - \alpha \|x - 0.5\|_2^2, \qquad \phi_2(z) = 0 \cdot z, \qquad \phi(u) = \phi_1(x) + \phi_2(z), \tag{15}$$
where
$$u = \begin{pmatrix} x \\ z \end{pmatrix}, \qquad y = \begin{pmatrix} x \\ z \\ \lambda \end{pmatrix}, \tag{16}$$
$$F(y) = \begin{pmatrix} \sum_j T_j^T \lambda_j \\ -\lambda_j \\ -(T_j x - z_j) \end{pmatrix}. \tag{17}$$
Theorem 1.
Let $\Omega^*$ denote the solution set of the variational inequality $\mathrm{VI}(\Omega, F, \phi)$; then
$$\Omega^* = \bigcap_{y \in \Omega} \left\{ \tilde{y} \in \Omega : \phi(u) - \phi(\tilde{u}) + (y - \tilde{y})^T F(y) \geq 0 \right\}. \tag{18}$$
Its proof can be found in [27]. Theorem 1 states that if $\tilde{y} \in \Omega$ satisfies
$$\sup_{y \in D} \left\{ \phi(\tilde{u}) - \phi(u) + (\tilde{y} - y)^T F(y) \right\} \leq \epsilon, \tag{19}$$
with $D \subset \Omega$, then $\tilde{y}$ is an $\epsilon$-approximate solution of $\mathrm{VI}(\Omega, F, \phi)$.

4.2. Some Notation

For the S-ADMM, when solving for the variable $x^{k+1}$, only $z^k$ and $\lambda^k$ enter the updates in (12), so we define $s^k = (z^k, \lambda^k)$ and
$$\mathcal{S} := \left\{ s = (z, \lambda) \mid y = (x, z, \lambda) \in \Omega \right\}. \tag{20}$$
For convenience, we define the following matrices.
$$M = \begin{pmatrix} I_{d_{c_j}} & 0 \\ -\beta I_{d_{c_j}} & (\eta + 1) I_{d_{c_j}} \end{pmatrix}, \tag{21}$$
and
$$Q_0 = \begin{pmatrix} 0 & 0 \\ \beta I_{d_{c_j}} & -\eta I_{d_{c_j}} \\ -I_{d_{c_j}} & \frac{1}{\beta} I_{d_{c_j}} \end{pmatrix}. \tag{22}$$
We further define
$$Q = \begin{pmatrix} \beta I_{d_{c_j}} & -\eta I_{d_{c_j}} \\ -I_{d_{c_j}} & \frac{1}{\beta} I_{d_{c_j}} \end{pmatrix}, \tag{23}$$
$$A = \frac{1}{\eta + 1} \begin{pmatrix} \beta I_{d_{c_j}} & -\eta I_{d_{c_j}} \\ -\eta I_{d_{c_j}} & \frac{1}{\beta} I_{d_{c_j}} \end{pmatrix}. \tag{24}$$
Notice that
$$A = \frac{1}{\eta + 1} \begin{pmatrix} \sqrt{\beta}\, I_{d_{c_j}} & 0 \\ 0 & \frac{1}{\sqrt{\beta}} I_{d_{c_j}} \end{pmatrix}^{T} \begin{pmatrix} I_{d_{c_j}} & -\eta I_{d_{c_j}} \\ -\eta I_{d_{c_j}} & I_{d_{c_j}} \end{pmatrix} \begin{pmatrix} \sqrt{\beta}\, I_{d_{c_j}} & 0 \\ 0 & \frac{1}{\sqrt{\beta}} I_{d_{c_j}} \end{pmatrix}, \tag{25}$$
and that the matrix
$$\begin{pmatrix} 1 & -\eta \\ -\eta & 1 \end{pmatrix} \tag{26}$$
is positive definite for $\eta \in (-1, 1)$ and positive semidefinite for $\eta = 1$.
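These relations can be checked numerically. The following sketch replaces each block $I_{d_{c_j}}$ by the scalar $1$ and uses arbitrary test values of $\beta$ and $\eta$; it verifies $Q = AM$, the symmetry and positive definiteness of $A$, and the definiteness of the core $2 \times 2$ factor for $\eta \in (-1, 1)$:

```python
import numpy as np

# Scalar "blocks" (I_dcj replaced by 1) so the algebra is easy to inspect;
# beta and eta are arbitrary test values, not parameters from the paper.
beta, eta = 1.5, -0.15

M = np.array([[1.0, 0.0],
              [-beta, eta + 1.0]])
Q = np.array([[beta, -eta],
              [-1.0, 1.0 / beta]])
A = (1.0 / (eta + 1.0)) * np.array([[beta, -eta],
                                    [-eta, 1.0 / beta]])

assert np.allclose(Q, A @ M)                      # Q = A M
assert np.allclose(A, A.T)                        # A is symmetric ...
assert np.all(np.linalg.eigvalsh(A) > 0)          # ... and positive definite

# The core 2x2 factor [[1, -eta], [-eta, 1]] is positive definite
# throughout eta in (-1, 1):
for eta_test in (-0.99, 0.0, 0.99):
    core = np.array([[1.0, -eta_test], [-eta_test, 1.0]])
    assert np.all(np.linalg.eigvalsh(core) > 0)
print("matrix identities verified")
```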
The convergence rate analysis of the iterative process defined in Equation (11) relies heavily on this contraction property. By examining the contraction behavior of the sequence, we can establish the conditions under which the sequence $\{s^k\}$ converges to the optimal solution and derive the associated convergence rate. This contraction analysis for Equation (11) is crucial because it provides a rigorous mathematical framework to quantify how quickly the iterates $s^k$ approach the optimal set $\mathcal{S}^*$. More specifically, we investigate the rate of decay of the distance between successive iterates and the optimal solution set, ensuring that each step of the S-ADMM method brings the sequence closer to convergence. This analysis forms the foundation for deriving the convergence rate, as it characterizes the speed at which the algorithm converges under appropriate conditions. We focus here solely on the contraction properties of Equation (11), as these properties directly influence the overall efficiency and convergence speed of the S-ADMM scheme.

4.3. Contraction Analysis

In this section, we establish the contraction property of the sequence $\{s^k\}$ generated by (11) with respect to the set $\mathcal{S}$.
Here we define $\tilde{y}^k$ as
$$\tilde{y}^k = \begin{pmatrix} \tilde{x}^k \\ \tilde{z}_j^k \\ \tilde{\lambda}_j^k \end{pmatrix} = \begin{pmatrix} x^{k+1} \\ z_j^{k+1} \\ \lambda_j^k + \beta (T_j x^{k+1} - z_j^k) \end{pmatrix}, \tag{27}$$
where $(x^{k+1}, z_j^{k+1})$ is generated by (11). Note that with the notation $\tilde{y}^k$, we immediately have
$$x^{k+1} = \tilde{x}^k, \qquad z_j^{k+1} = \tilde{z}_j^k, \qquad \lambda_j^{k+\frac{1}{2}} = \lambda_j^k - \eta (\lambda_j^k - \tilde{\lambda}_j^k). \tag{28}$$
Then, based on (11) and (28), we immediately get
$$\begin{aligned} \lambda_j^{k+1} &= \lambda_j^{k+\frac{1}{2}} + \beta (T_j \tilde{x}^k - \tilde{z}_j^k) \\ &= \lambda_j^k - \eta (\lambda_j^k - \tilde{\lambda}_j^k) + \left[ \beta (T_j \tilde{x}^k - z_j^k) + \beta (z_j^k - \tilde{z}_j^k) \right] \\ &= \lambda_j^k - \eta (\lambda_j^k - \tilde{\lambda}_j^k) + \left[ -(\lambda_j^k - \tilde{\lambda}_j^k) + \beta (z_j^k - \tilde{z}_j^k) \right] \\ &= \lambda_j^k - (\eta + 1)(\lambda_j^k - \tilde{\lambda}_j^k) + \beta (z_j^k - \tilde{z}_j^k). \end{aligned} \tag{29}$$
Furthermore, together with $z^{k+1} = \tilde{z}^k$, we have the relationship
$$\begin{pmatrix} z_j^{k+1} \\ \lambda_j^{k+1} \end{pmatrix} = \begin{pmatrix} z_j^k \\ \lambda_j^k \end{pmatrix} - \begin{pmatrix} I_{d_{c_j}} & 0 \\ -\beta I_{d_{c_j}} & (\eta + 1) I_{d_{c_j}} \end{pmatrix} \begin{pmatrix} z_j^k - \tilde{z}_j^k \\ \lambda_j^k - \tilde{\lambda}_j^k \end{pmatrix}, \tag{30}$$
which can be rewritten compactly as
$$s^{k+1} = s^k - M (s^k - \tilde{s}^k). \tag{31}$$
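The algebra in (29), and hence the compact recursion (31), can be sanity-checked with random vectors standing in for the iterates; all names below are local to this check:

```python
import numpy as np

# Starting from random iterates, the two-step multiplier update of (11)
# must coincide with the contracted form derived in (29).
rng = np.random.default_rng(0)
d, beta, eta = 4, 1.5, -0.15      # arbitrary dimensions and test parameters

Tx = rng.normal(size=d)           # stands for T_j x^{k+1}
z_k = rng.normal(size=d)          # stands for z_j^k
lam_k = rng.normal(size=d)        # stands for lambda_j^k
z_next = rng.normal(size=d)       # stands for z_j^{k+1} = z~_j^k

# Definitions (27) and (28):
lam_tilde = lam_k + beta * (Tx - z_k)
lam_half = lam_k + eta * beta * (Tx - z_k)

# The actual S-ADMM multiplier update of (11) ...
lam_next = lam_half + beta * (Tx - z_next)
# ... and the contracted form from (29):
lam_compact = lam_k - (eta + 1.0) * (lam_k - lam_tilde) + beta * (z_k - z_next)

assert np.allclose(lam_next, lam_compact)
print("recursion (29) verified")
```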
Next, we show that the algorithm (11) is convergent.
Lemma 1.
For given $s^k$, let $y^{k+1}$ be generated by the S-ADMM scheme. Then, for all $y \in \Omega$,
$$\phi(u) - \phi(\tilde{u}^k) + (y - \tilde{y}^k)^T F(\tilde{y}^k) \geq (s - \tilde{s}^k)^T Q (s^k - \tilde{s}^k), \tag{32}$$
and $\tilde{y}^k$ is a solution of $\mathrm{VI}(\Omega, F, \phi)$ if $\| s^k - s^{k+1} \|_A^2 = 0$.
Proof. 
Since $x^{k+1} = \tilde{x}^k$, the optimality condition for the $x$-update in Equation (11) gives
$$\phi_1(x) - \phi_1(\tilde{x}^k) + (x - \tilde{x}^k)^T \sum_j T_j^T \left[ \lambda_j^k + \beta (T_j \tilde{x}^k - z_j^k) \right] \geq 0. \tag{33}$$
According to the definition (27), we have
$$\tilde{\lambda}_j^k = \lambda_j^k + \beta (T_j x^{k+1} - z_j^k). \tag{34}$$
Using (34), the inequality (33) becomes
$$\phi_1(x) - \phi_1(\tilde{x}^k) + (x - \tilde{x}^k)^T \left( \sum_j T_j^T \tilde{\lambda}_j^k \right) \geq 0. \tag{35}$$
Similarly, the optimality condition for the $z$-update in Equation (11) gives
$$(z - \tilde{z}^k)^T \sum_j \left\{ -\left[ \lambda_j^k - \eta (\lambda_j^k - \tilde{\lambda}_j^k) \right] - \beta (T_j \tilde{x}^k - \tilde{z}_j^k) \right\} \geq 0. \tag{36}$$
Reusing (34), we have
$$\begin{aligned} \left[ \lambda_j^k - \eta (\lambda_j^k - \tilde{\lambda}_j^k) \right] + \beta (T_j \tilde{x}^k - \tilde{z}_j^k) &= \lambda_j^k + \eta (\tilde{\lambda}_j^k - \lambda_j^k) - \beta (\tilde{z}_j^k - z_j^k) + (\tilde{\lambda}_j^k - \lambda_j^k) \\ &= \tilde{\lambda}_j^k + \eta (\tilde{\lambda}_j^k - \lambda_j^k) - \beta (\tilde{z}_j^k - z_j^k). \end{aligned} \tag{37}$$
Consequently, it follows from (36) that
$$(z - \tilde{z}^k)^T \sum_j \left[ -\tilde{\lambda}_j^k - \eta (\tilde{\lambda}_j^k - \lambda_j^k) + \beta (\tilde{z}_j^k - z_j^k) \right] \geq 0. \tag{38}$$
Meanwhile,
$$(T_j \tilde{x}^k - \tilde{z}_j^k) + (\tilde{z}_j^k - z_j^k) - \frac{1}{\beta} (\tilde{\lambda}_j^k - \lambda_j^k) = 0. \tag{39}$$
Combining (35), (38), and (39), we get
$$\phi(u) - \phi(\tilde{u}^k) + \begin{pmatrix} x - \tilde{x}^k \\ z - \tilde{z}^k \\ \lambda - \tilde{\lambda}^k \end{pmatrix}^{T} \left\{ \begin{pmatrix} \sum_j T_j^T \tilde{\lambda}_j^k \\ -\tilde{\lambda}_j^k \\ -(T_j \tilde{x}^k - \tilde{z}_j^k) \end{pmatrix} + \begin{pmatrix} 0 \\ \beta (\tilde{z}^k - z^k) - \eta (\tilde{\lambda}^k - \lambda^k) \\ -(\tilde{z}^k - z^k) + \frac{1}{\beta} (\tilde{\lambda}^k - \lambda^k) \end{pmatrix} \right\} \geq 0. \tag{40}$$
Inequality (32) is simply a rearranged form of the above inequality. Finally, we get
$$\phi(u) - \phi(\tilde{u}^k) + (y - \tilde{y}^k)^T F(\tilde{y}^k) \geq (s - \tilde{s}^k)^T Q (s^k - \tilde{s}^k). \tag{41}$$
Based on this, from $Q = A M$ and $s^{k+1} = s^k - M (s^k - \tilde{s}^k)$, we obtain
$$(s - \tilde{s}^k)^T Q (s^k - \tilde{s}^k) = (s - \tilde{s}^k)^T A (s^k - s^{k+1}). \tag{42}$$
Applying this equation to (32) yields
$$\phi(u) - \phi(\tilde{u}^k) + (y - \tilde{y}^k)^T F(\tilde{y}^k) \geq (s - \tilde{s}^k)^T A (s^k - s^{k+1}). \tag{43}$$
Note that $A$ is a symmetric positive definite matrix. From (32) and $\| s^k - s^{k+1} \|_A^2 = 0$, it can be inferred that $A (s^k - s^{k+1}) = 0$, and thus $\phi(u) - \phi(\tilde{u}^k) + (y - \tilde{y}^k)^T F(\tilde{y}^k) \geq 0$. According to (19), $\tilde{y}^k$ is the solution. The proof is completed. □

5. Simulation Results

5.1. Parameter Selection

In this section, we select several different codes for experimentation and compare the reliability and complexity of the designed S-ADMM decoding algorithm with those of the ADMM-$L_2$ and A-ADMM-$L_2$ algorithms, where A-ADMM-$L_2$ denotes the optimization-based ADMM-$L_2$ algorithm that uses the differential evolution (DE) algorithm. The test codes are $C_1$ (WIMAX (576,288), rate $\frac{1}{2}$), $C_2$ (802.11n (648,216), rate $\frac{3}{4}$), $C_3$ ((1152,288), rate $\frac{3}{4}$), and three types of 5G LDPC codes with information length 320 but different rates. In the 5G standard, the performance requirements for the design of 5G LDPC codes are higher. In Table 3, we list the parameter information for the six codes.
Since the codes are all irregular, we assign a different penalty parameter $\alpha$ to each variable node in the simulations. For code $C_1$, the ADMM-$L_2$ parameters are taken directly from reference [20]; in this paper, $\mu = 4.0$ and $\alpha = 2.2$. Here, we provide the FER corresponding to different parameter values for the $C_1$, $C_2$, and $C_4$ codes, and similar methods are used for the other codes. The SNR is 2.0 dB for code $C_1$, 2.8 dB for code $C_2$, and 2.8 dB for code $C_4$. The FER performance of the three codes is evaluated over the additive white Gaussian noise channel (AWGNC) with binary phase-shift keying (BPSK) modulation. The corresponding results are reported in Table 4, Table 5 and Table 6, respectively. These tables illustrate the impact of different parameter values on the error performance, providing a comparative analysis of the effectiveness of each code in mitigating frame errors under the given channel conditions.
In Figure 1, the SNR and penalty parameters are fixed, and the FER is plotted for different values of $\eta$. At least 200 errors were collected for each data point at three fixed SNRs (SNR = 2 dB, 2.2 dB, 2.4 dB). The maximum number of iterations in Figure 1 is set to 1000. The figure shows that when $\eta$ is less than $-0.3$, the FER curve of the S-ADMM decoding algorithm decreases as $\eta$ increases; for $\eta$ between $-0.3$ and $0.4$, the FER reaches its lowest values; and for larger $\eta$, the FER curve tends to rise again. We find that for $\eta$ between $-0.3$ and $0.2$, the S-ADMM decoder has better error-correction performance, so we choose the relatively stable value $-0.15$. Similarly, for the $C_2$ code in Figure 2 we find a stable value of $-0.3$, and likewise for $C_4$ in Figure 3. In the following performance analysis, we conduct experiments using these optimized parameters.

5.2. Performance Analysis

Figure 4 compares the FER performance of several decoding algorithms for different codes. From Figure 4a, it can be seen that for code $C_1$, compared to the ordinary ADMM-$L_2$ and A-ADMM-$L_2$ decoding algorithms, the S-ADMM decoding algorithm achieves a decoding performance improvement of nearly 0.1 dB. For example, at a frame error rate of $10^{-3}$, the SNR required by the ADMM-$L_2$ decoder is approximately 3 dB, that of the A-ADMM-$L_2$ decoder approximately 2.5 dB, and that of the S-ADMM decoder about 2.4 dB. Similarly, Figure 4b shows that for code $C_2$ the S-ADMM decoding algorithm again has better decoding performance than the other decoders, and the same holds for the high-rate $C_3$ code in Figure 4c. On the other hand, for the 5G LDPC codes in Figure 4d, comparing the S-ADMM and BP decoding algorithms at high SNR, the BP algorithm clearly exhibits an error floor, while the S-ADMM decoding algorithm maintains good decoding performance. At a frame error rate of $10^{-2}$, the S-ADMM decoding algorithm achieves a gain of nearly 1.5 dB over BP.

5.3. Average Number of Decoding Iterations

Figure 5 illustrates the average number of decoding iterations required by the ADMM-L2, A-ADMM-L2, and S-ADMM decoding algorithms for different codes. Specifically, at an SNR of 2.8 dB, the ADMM-L2 decoder requires an average of 11.8 and 22.6 iterations for codes C1 (Figure 5a) and C2 (Figure 5b), respectively. For the A-ADMM-L2 decoder, the average number of iterations decreases to 10.4 and 21.4, respectively. The S-ADMM decoding algorithm achieves even faster convergence, with average iteration counts of 9.2 and 18.2 for the same codes, a reduction of approximately 22.0% and 19.5%, respectively, compared to the ADMM-L2 decoder. For the high-rate code C3, the S-ADMM algorithm also reduces the average number of decoding iterations (Figure 5c). Figure 5d further compares the S-ADMM and A-ADMM-L2 decoding algorithms on 5G LDPC codes; the results show that the S-ADMM decoding algorithm converges significantly faster than A-ADMM-L2 decoding, underscoring its efficiency in iterative decoding scenarios. These findings highlight the superior performance of the S-ADMM decoding approach, both in decoding speed and in reduced computational complexity.
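The quoted reductions follow directly from the average iteration counts reported at SNR = 2.8 dB, as a quick arithmetic check shows:

```python
# Average iteration counts at SNR = 2.8 dB, as reported in the text.
admm_l2 = {"C1": 11.8, "C2": 22.6}
s_admm  = {"C1":  9.2, "C2": 18.2}

# Relative reduction (in %) in average iterations achieved by S-ADMM.
reduction = {code: 100.0 * (admm_l2[code] - s_admm[code]) / admm_l2[code]
             for code in admm_l2}
# reduction["C1"] is about 22.0, reduction["C2"] about 19.5
```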

6. Conclusions

In this paper, we addressed the slow convergence of the ADMM when applied to LP decoding models with penalty functions. Specifically, we proposed an enhanced ADMM decoding algorithm that introduces an intermediate update of the multipliers within the ADMM iteration scheme to accelerate convergence and improve performance.
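As a minimal sketch (not the LDPC decoder itself), the symmetric iteration with its intermediate multiplier update can be illustrated on a toy one-dimensional splitting problem; here r and s denote the two dual step-size factors, and all update formulas are the closed-form solutions of the quadratic subproblems:

```python
def symmetric_admm(a, b, rho=1.0, r=0.5, s=0.5, iters=100):
    """Symmetric ADMM for min (x-a)^2/2 + (z-b)^2/2  s.t.  x = z.
    The multiplier is updated twice per iteration: once between the
    x- and z-subproblems (the intermediate update) and once after."""
    x = z = lam = 0.0
    for _ in range(iters):
        x = (a - lam + rho * z) / (1 + rho)   # x-subproblem (closed form)
        lam += r * rho * (x - z)              # intermediate dual update
        z = (b + lam + rho * x) / (1 + rho)   # z-subproblem (closed form)
        lam += s * rho * (x - z)              # final dual update
    return x, z, lam
```

For this toy problem the iterates converge to the consensus solution x = z = (a + b)/2 with multiplier λ = (a − b)/2; the intermediate update lets the z-subproblem see a fresher multiplier, which is the mechanism the S-ADMM decoder exploits to damp oscillation.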
Based on a variational-inequality proof framework, we studied the contraction properties of the algorithm, analyzed its complexity and optimal parameter values, and verified its feasibility. Additionally, we presented a brief performance analysis of LDPC codes over the AWGN channel, highlighting the advantages of our approach over existing methods.
Furthermore, we described the detailed algorithmic procedure for implementing the S-ADMM algorithm, outlining its key steps and computational considerations. Numerical results demonstrated that the S-ADMM algorithm significantly outperforms traditional ADMM penalized decoders. These findings suggest that our approach offers a promising solution for improving the efficiency of ADMM decoding algorithms, particularly for LP models with penalty functions.
The symmetric ADMM algorithm presented in this paper provides an efficient and stable new method for decoding LDPC codes, although its practical deployment still requires further progress in theoretical completeness, hardware optimization, and integration with related technologies. Future research can focus on innovation across the full chain from theory and algorithms to systems and application scenarios, promoting the deeper application of the ADMM framework in next-generation communication systems.

Author Contributions

Conceptualization, methodology, formal analysis, and investigation: J.Z. and A.C.; writing—original draft preparation: A.C., B.J. and Y.Z.; writing—review and editing: H.L. and H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Science and Technology Research Project of Henan Province of China (Grant No. 222102210053); the Key Scientific Research Project in Colleges and Universities of Henan Province of China (Grant No. 21A510003); the Major Science and Technology Projects of Longmen Laboratory (Grant Nos. 231100220400 and 231100220300); and the Open Fund of the Intelligent Group System Engineering Research Center of the Ministry of Education (Grant No. ZZU-CISS-2024003).

Institutional Review Board Statement

The study does not involve humans or animals; therefore, ethical review and approval are not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Feldman, J.; Wainwright, M.; Karger, D. Using Linear Programming to Decode Binary Linear Codes. IEEE Trans. Inf. Theory 2005, 51, 954–972. [Google Scholar] [CrossRef]
  2. Li, X.; Liu, M.; Dang, S.; Luong, N.C.; Yuen, C.; Nallanathan, A.; Niyato, D. Covert Communications with Enhanced Physical Layer Security in RIS-Assisted Cooperative Networks. IEEE Trans. Wirel. Commun. 2025. [Google Scholar] [CrossRef]
  3. Li, X.; Zhao, J.; Chen, G.; Hao, W.; Da Costa, D.B.; Nallanathan, A.; Shin, H.; Yuen, C. STAR-RIS Assisted Covert Wireless Communications with Randomly Distributed Blockages. IEEE Trans. Wirel. Commun. 2025. [Google Scholar] [CrossRef]
  4. Barman, S.; Liu, X.; Draper, S.C.; Recht, B. Decomposition Methods for Large Scale LP Decoding. IEEE Trans. Inf. Theory 2013, 59, 7870–7886. [Google Scholar] [CrossRef]
  5. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  6. Liu, X.; Draper, S.C. The ADMM Penalized Decoder for LDPC Codes. IEEE Trans. Inf. Theory 2016, 62, 2966–2984. [Google Scholar] [CrossRef]
  7. Jiao, X.; Mu, J.; He, Y.C.; Chen, C. Efficient ADMM Decoding of LDPC Codes Using Lookup Tables. IEEE Trans. Commun. 2017, 65, 1425–1437. [Google Scholar] [CrossRef]
  8. Wei, H.; Jiao, X.; Mu, J. Reduced-Complexity Linear Programming Decoding Based on ADMM for Codes. IEEE Commun. Lett. 2015, 19, 909–912. [Google Scholar] [CrossRef]
  9. Jiao, X.; He, Y.C.; Mu, J. Memory-Reduced Look-Up Tables for Efficient ADMM Decoding of LDPC Codes. IEEE Signal Process. Lett. 2018, 25, 110–114. [Google Scholar] [CrossRef]
  10. Gensheimer, F.; Dietz, T.; Ruzika, S.; Kraft, K.; Wehn, N. A Low-Complexity Projection Algorithm for ADMM-Based LP Decoding. In Proceedings of the 2018 IEEE 10th International Symposium on Turbo Codes Iterative Information Processing (ISTC), Hong Kong, China, 3–7 December 2018; pp. 1–5. [Google Scholar] [CrossRef]
  11. Wei, H.; Banihashemi, A.H. An Iterative Check Polytope Projection Algorithm for ADMM-Based LP Decoding of LDPC Codes. IEEE Commun. Lett. 2018, 22, 29–32. [Google Scholar] [CrossRef]
  12. Xia, Q.; Lin, Y.; Tang, S.; Zhang, Q. A Fast Approximate Check Polytope Projection Algorithm for ADMM Decoding of LDPC Codes. IEEE Commun. Lett. 2019, 23, 1520–1523. [Google Scholar] [CrossRef]
  13. Bai, J.; Wang, Y.; Shi, Q. Efficient QP-ADMM Decoder for Binary LDPC Codes and Its Performance Analysis. IEEE Trans. Signal Process. 2020, 68, 503–518. [Google Scholar] [CrossRef]
  14. Yang, K.; Wang, X.; Feldman, J. A New Linear Programming Approach to Decoding Linear Block Codes. IEEE Trans. Inf. Theory 2008, 54, 1061–1072. [Google Scholar] [CrossRef]
  15. Xia, Q.; Wang, X.; Liu, H.; Zhang, Q.L. A Hybrid Check Polytope Projection Algorithm for ADMM Decoding of LDPC Codes. IEEE Commun. Lett. 2021, 25, 108–112. [Google Scholar] [CrossRef]
  16. Asadzadeh, A.; Barakatain, M.; Draper, S.C.; Mitra, J. SAPA: Sparse Affine Projection Algorithm in ADMM-LP Decoding of LDPC Codes. In Proceedings of the 2022 17th Canadian Workshop on Information Theory (CWIT), Ottawa, ON, Canada, 5–8 June 2022; pp. 27–32. [Google Scholar] [CrossRef]
  17. Debbabi, I.; Gal, B.L.; Khouja, N.; Tlili, F.; Jego, C. Fast Converging ADMM-Penalized Algorithm for LDPC Decoding. IEEE Commun. Lett. 2016, 20, 648–651. [Google Scholar] [CrossRef]
  18. Xia, Q.; He, P.; Wang, X.; Liu, H.; Zhang, Q. Node-Wise Scheduling Algorithm of ADMM Decoding Based on Line Segment Projection. IEEE Commun. Lett. 2022, 26, 738–742. [Google Scholar] [CrossRef]
  19. Jiao, X.; Wei, H.; Mu, J.; Chen, C. Improved ADMM Penalized Decoder for Irregular Low-Density Parity-Check Codes. IEEE Commun. Lett. 2015, 19, 913–916. [Google Scholar] [CrossRef]
  20. Wang, B.; Mu, J.; Jiao, X.; Wang, Z. Improved Penalty Functions of ADMM Penalized Decoder for LDPC Codes. IEEE Commun. Lett. 2017, 21, 234–237. [Google Scholar] [CrossRef]
  21. Wei, H.; Banihashemi, A.H. ADMM Check Node Penalized Decoders for LDPC Codes. IEEE Trans. Commun. 2021, 69, 3528–3540. [Google Scholar] [CrossRef]
  22. Wu, Q.; Zhang, F.; Wang, H.; Lin, J.; Liu, Y. Parameter-Free ℓp-Box Decoding of LDPC Codes. IEEE Commun. Lett. 2018, 22, 1318–1321. [Google Scholar] [CrossRef]
  23. Wei, Y.; Zhao, M.M.; Zhao, M.J.; Lei, M. A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection. IEEE Wirel. Commun. Lett. 2020, 9, 1715–1719. [Google Scholar] [CrossRef]
  24. Chen, Y.; Wang, R.; Zhu, J.; Wen, Z. Decoding LDPC Codes by Using Negative Proximal Regularization. IEEE Trans. Commun. 2023, 71, 3835–3846. [Google Scholar] [CrossRef]
  25. Wasson, M.; Draper, S.C. Hardware based projection onto the parity polytope and probability simplex. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 1015–1020. [Google Scholar] [CrossRef]
  26. He, B.; Liu, H.; Wang, Z.; Yuan, X. A Strictly Contractive Peaceman–Rachford Splitting Method for Convex Programming. SIAM J. Optim. 2014, 24, 1011–1040. [Google Scholar] [CrossRef]
  27. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
Figure 1. The FER of C1 for the S-ADMM decoder under different η.
Figure 2. The FER of C2 for the S-ADMM decoder under different η.
Figure 3. The block error rate (BLER) of C4 for the S-ADMM decoder under different η.
Figure 4. Decoding performance for six types of codes: (a) FER of the S-ADMM, ADMM-L2, and A-ADMM-L2 decoders for the C1 code; (b) FER of the same decoders for the C2 code; (c) FER of the same decoders for the C3 code; (d) BLER of the S-ADMM, BP, and A-ADMM-L2 decoders for three types of 5G codes.
Figure 5. The average number of decoding iterations for six types of codes: (a) code C1; (b) code C2; (c) code C3; (d) the 5G LDPC codes.
Table 1. The number of operations and the complexity of the algorithm execution.

Step | Number of Operations | Complexity
1    | n                    | -
2    | m                    | -
3    | d_cj                 | -
6    | d_vi                 | O(n·D_v)
9    | d_cj                 | O(m·D_c)
10   | d_cj                 | O(m·D_c)
11   | d_cj                 | O(m·D_c)
Table 2. Comparison of complexity between the S-ADMM algorithm and the A-ADMM-L2 algorithm.

Item | S-ADMM | A-ADMM-L2 | Comparison of Complexity
Update of x | O(n·D_v) | O(n·D_v) | Same; both have linear complexity.
Update of z | O(m·D_c) | O(m·D_c) | Both use the projection algorithm of [25], so the complexity is the same.
Update of λ | 2 × O(m) | 1 × O(m) | S-ADMM has an additional dual update, but it is only a linear operation whose actual cost is negligible.
Average number of iterations | Fewer: the two-stage balance adjustment suppresses oscillation. | More: the single update direction is prone to oscillation. | The symmetric design balances the adjustment direction of the dual variables and accelerates convergence.
Convergence stability | Theorem 1 guarantees strictly monotonic convergence. | Relies on the convergence of traditional ADMM and may exhibit local oscillations. | S-ADMM suppresses oscillations and reduces ineffective iterations.
Table 3. Six types of LDPC codes.

Code       | Symbol | Rate | Column Weight Distribution
(576,288)  | C1     | 1/2  | {2,3,6}
(648,216)  | C2     | 3/4  | {2,3,4,6,8}
(1152,288) | C3     | 3/4  | {2,3,6}
320        | C4     | 1/2  | {1,2,3,4,5,7,8}
320        | C5     | 2/5  | {1,2,3,4,5,7,8,10,11}
320        | C6     | 2/3  | {1,2,3,4,5}
Table 4. Parameter values of the improved penalty function and the S-ADMM decoding algorithm for the C1 code.

Parameter | A-ADMM-L2 | S-ADMM
α1        | 5.11368   | 0.00001
α2        | 1.00586   | 1.90024
α3        | 0.30138   | 5.42336
β         | 3.29866   | 4.15607
Table 5. Parameter values of the improved penalty function and the S-ADMM decoding algorithm for the C2 code.

Parameter | A-ADMM-L2 | S-ADMM
α1        | 0.12794   | 0.06876
α2        | 0.76466   | 1.13234
α3        | 1.90017   | 1.60122
α4        | 2.94728   | 5.29668
α5        | 6.16048   | 6.44272
β         | 3.60323   | 3.17501
Table 6. Parameter values of the improved penalty function and the S-ADMM decoding algorithm for the C4 code.

Parameter | A-ADMM-L2 | S-ADMM
α1        | 0.00001   | 0.00290
α2        | 0.00001   | 0.00001
α3        | 1.45949   | 1.53340
α4        | 0.00001   | 4.09405
α5        | 2.56477   | 2.17151
α6        | 6.49895   | 3.05173
α7        | 10.14689  | 7.49879
β         | 9.48708   | 5.93976
