Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

In order to improve the performance of hard decision decoding for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This also supports the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision algorithm (HDA) and uses the soft information from the channel to calculate reliability, with the sum of the variable nodes' (VN) magnitudes included when computing the reliability of the parity checks. At the same time, the reliability information of the variable node itself is considered and a loop update detection algorithm is introduced. The bits corresponding to an erroneous code word are flipped multiple times, searching in order of decreasing error probability until the correct code word is found. Simulation results show that, at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm by about 2.2 dB and 2.35 dB for the two field orders considered, respectively. Furthermore, the average number of decoding iterations is significantly reduced.


Introduction
On 14 October 2016, the 3rd generation partnership project radio layer 1 (3GPP RAN1) conference adopted the low-density parity-check (LDPC) code as the long-code block coding scheme for the data channel of the 5G enhanced mobile broadband (eMBB) service [1,2]. 5G includes both people-centric and machine-centric communications [3,4], and these two types of scenarios have different needs. People-centric communications pursue high performance and high-speed communications: corresponding end-user data rates should reach 10 Gbps and base station data rates should reach 1 Tbps. Machine-centric communication pursues low power consumption, with corresponding sensor rates of only 10-100 Bps, while industrial control applications require a particularly low delay of 10−4 s [5].
Gallager proposed the LDPC code [6]. He also created the bit-flipping hard decision decoding algorithm based on check sums and statistics, as well as the soft decision decoding algorithm based on a posteriori probability (APP). The decoding algorithms for LDPC codes fall into two categories according to the decision method [7]: the soft decision algorithm (SDA) and the hard decision algorithm (HDA). The representative SDA is the belief propagation (BP) decoding algorithm [8,9]. The error correction performance of the BP algorithm is excellent, but its computational complexity is high, with a large amount of computation and a large amount of cached data that must be stored in memory during decoding iterations, which is not conducive to engineering implementation [10]. In the channel model used throughout, the received vector is r = x + n, where n is an independent white Gaussian noise vector with noise power σ2 = (2R_c E_b/N_0)−1; E_b/N_0 is the signal-to-noise ratio per information bit; and R_c = K/N is the code rate.
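A minimal sketch of this channel model, assuming BPSK mapping 0 → −1, 1 → +1 (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def awgn_channel(bits, rate, ebn0_db, rng=None):
    """BPSK-modulate a bit sequence and add white Gaussian noise
    with power sigma^2 = (2 * Rc * Eb/N0)^-1, as in the text."""
    rng = np.random.default_rng(rng)
    ebn0 = 10.0 ** (ebn0_db / 10.0)          # Eb/N0 from dB to linear
    sigma2 = 1.0 / (2.0 * rate * ebn0)       # noise power
    x = 2.0 * np.asarray(bits, float) - 1.0  # map 0 -> -1, 1 -> +1
    n = rng.normal(0.0, np.sqrt(sigma2), x.shape)
    return x + n
```

At high Eb/N0 the noise power shrinks and hard decisions on the output recover the transmitted bits.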

Sum of the Magnitude of Weighted Symbol Flipping Decoding Algorithm
The WSF decoding algorithm uses information derived from the channel to calculate reliability [15,20]. The core idea is to compute a measure for each symbol according to the reliability of the check equations; according to the size of the measured values, the most unreliable symbol is selected and flipped. It is the simplest hard-decision-based SF decoding algorithm. The flip function E(k)n,α of the MWSF and IMWSF algorithms takes into account the information carried by the symbol itself, but still neglects the reliability information of a large number of variable nodes when calculating the reliability w_n,m,α of the check equation [19,25]. However, the variable nodes involved in each check equation form a set, and the reliability calculation of the check equation should also contain the reliability information of each variable node in that set. Therefore, in this paper, the reliability and the flip function are redefined. Based on hard decision decoding, some soft information from the channel is used to calculate reliability: the sum of the amplitudes of the variable nodes connected to a check node is used as the reliability of that check equation, so the influence of all symbols is added into the reliability of the check equation, and a weighting factor is introduced. Using |L_n,α| as part of the flip function, we obtain the WSF algorithm based on the sum of the magnitude (SMWSF). The specific process of the SMWSF decoding algorithm is as follows.
1. Calculate the initial hard decision output sequence and use it as the output of decoding iteration k = 0, denoted y(0). Firstly, the hard decision sequence x = [x_0, x_1, · · · , x_Nb−1] is obtained by a hard decision on the binary sequence r of the channel output. The decision rule is as follows: if r_i ≥ 0, x_i is set to 1; if r_i < 0, x_i is set to 0 (0 ≤ i < Nb). Since the hard decision output is a binary sequence, it is converted, according to the binary to q-ary conversion principle, into the length-N symbol sequence y = [y_0, y_1, · · · , y_N−1], which is the output sequence.
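This step can be sketched as follows, assuming b bits per symbol and MSB-first packing of bits into symbols (an assumption, since the paper does not fix the bit order):

```python
import numpy as np

def hard_decision_symbols(r, b):
    """Step 1 sketch: hard-decide each received value (r_i >= 0 -> 1,
    else 0) and pack every b bits into one q-ary symbol, q = 2^b.
    MSB-first packing inside a symbol is an assumption."""
    x = (np.asarray(r) >= 0).astype(int)     # binary hard decisions
    assert len(x) % b == 0
    weights = 1 << np.arange(b - 1, -1, -1)  # MSB-first bit weights
    return x.reshape(-1, b) @ weights        # length-N symbol sequence
```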

3. Calculate the probability vector L_n(α). After the hard decision, it is necessary to calculate the initial likelihood probability of each symbol and to prepare the average likelihood probability of the check equations adjacent to all variable nodes for the subsequent iterative decoding process. Define GF_0(q) = {α_1, · · · , α_q−1} as the Galois field GF(q) with the zero element removed, so that GF_0(q) contains no zero element. The reliability vector over α at the nth symbol is given by: L_n = [L_n,α_1, · · · , L_n,α, · · · , L_n,α_q−1] (2), where the symbol probability log likelihood is given by Equation (3). Obviously, the log likelihood ratio of the symbol probability is directly proportional to ∑ j:φ(α)_j =+1 r_nb+j. In this study, we ignored the coefficient 2/σ2 related to the channel noise [29] and used ∑ j:φ(α)_j =+1 r_nb+j as an approximation of the symbol probability log likelihood ratio, with no effect on performance.

4. Compute the reliability w_n,m,α of the external information of the received symbol y_n. We define L_n,α = ∑ j:φ(α)_j =+1 r_nb+j (0 ≤ j ≤ b − 1) and use Equation (3) to calculate the probability matrix L_n = [L_n,α], 0 ≤ n < N, α_1 ≤ α ≤ α_q−1. Here, w_n,m,α represents the average probability information of symbol α over all the variable nodes adjacent to the mth check equation. The formula for w_n,m,α is shown in Equation (4).
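As a concrete reading of these two steps, the sketch below computes both tables. Equations (3) and (4) are not reproduced in this excerpt, so the averaging choice (mean of |L| over the check's adjacent variable nodes) is an assumption rather than the paper's exact formula:

```python
import numpy as np

def reliability_tables(r, H, b):
    """Sketch of Equations (3)-(4) under stated assumptions:
    L[n, a-1] approximates the log likelihood of symbol value a at
    variable node n as the sum of the r values at bit positions
    where the binary image of a is 1; w[m, a-1] averages |L| over
    the variable nodes adjacent to check node m."""
    r = np.asarray(r, float)
    N = H.shape[1]
    q = 1 << b
    L = np.zeros((N, q - 1))
    for n in range(N):
        seg = r[n * b:(n + 1) * b]
        for a in range(1, q):                  # nonzero field elements
            bits = [(a >> (b - 1 - j)) & 1 for j in range(b)]
            L[n, a - 1] = sum(seg[j] for j in range(b) if bits[j])
    w = np.zeros((H.shape[0], q - 1))
    for m in range(H.shape[0]):
        adj = np.flatnonzero(H[m])             # VNs in check m
        w[m] = np.abs(L[adj]).mean(axis=0)     # average magnitude
    return L, w
```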

• Step 2: Calculate the check equations s. We assume that the symbol vector before the kth (k ≥ 1) iteration is y(k−1) and compute the corresponding check equations s_h. If s_h = 0, the decoding iteration is stopped and decoding is declared successful.
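Over GF(2^m), addition is bitwise XOR, so the syndrome check can be sketched for GF(4) with a hand-built multiplication table (the small H and codeword below are illustrative, not from the paper):

```python
# GF(4) multiplication table for elements {0, 1, a, a+1} coded as
# {0, 1, 2, 3}, built from a^2 = a + 1; addition is bitwise XOR.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def syndrome(H, y):
    """Step 2 sketch: s_m = sum_n H[m][n] * y[n] over GF(4);
    decoding stops when every s_m is zero."""
    s = []
    for row in H:
        acc = 0
        for h, sym in zip(row, y):
            acc ^= GF4_MUL[h][sym]   # GF(4) multiply, then XOR-add
        s.append(acc)
    return s

H = [[1, 1, 0],
     [0, 1, 1]]
print(syndrome(H, [1, 1, 1]))  # [0, 0]: valid codeword, stop
print(syndrome(H, [1, 2, 1]))  # [3, 3]: nonzero, keep iterating
```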

• Step 3: k ← k + 1. If k > k_max (k_max is the maximum number of iterations set by the user), decoding is declared to have failed and stops.
Completing a symbol flip requires determining two parameters: (1) the position of the flip; and (2) the value or amplitude of the flip.

• Step 4: Determine the position of the flip symbol. We calculate the measured value E(k)_n, 0 ≤ n < N, for each variable node at the kth iteration. The core of the WSF algorithm is to flip the symbols that do not satisfy the check equations.
To calculate E(k)_n, we first need to calculate the reliability of variable node n with respect to each of α_1, · · · , α_q−1.
In Equation (5), β (β ≥ 0) is the weighting factor. When β = 0, the MSMWSF algorithm reduces to the SMWSF algorithm. For given non-binary LDPC codes, the optimal value of the weighting factor can be obtained by Monte Carlo simulation under the specified number of iterations [30].
After this, we update the check equations to obtain the adjacency values of the hard decision, where m is the check node (CN). If s(k)_h = 0, a valid code word has been obtained and the search stops. If s(k)_h ≠ 0, the flip function E(k+1)_n, 0 ≤ n < N, is calculated and the loop continues. In order to estimate the reliability measure of a symbol, the measure of each symbol must be calculated, as in Equation (7). We then select the location n(k), 0 ≤ n(k) < N, of the symbol to be flipped. The summation in Equation (7) is a weighted method of measuring the likely location of flipping symbols; weights summing to 1 sharpen the measure of the symbol position and provide a more accurate judgment standard. The symbol corresponding to the maximum value of E(k)_n gives the position of the flip symbol, as in Equation (8).
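A hypothetical sketch of this flip-position search follows. Since Equations (5)-(8) are not reproduced in this excerpt, the exact combination of the check reliabilities w and the channel term β|L_n| is an assumption:

```python
import numpy as np

def flip_position(s, w, H, L_abs, beta):
    """Hypothetical flip function: each variable node n accumulates
    the reliabilities w[m] of its unsatisfied checks (s[m] != 0) and
    subtracts beta * |L_n|, the node's own channel reliability; the
    symbol with the largest measure E_n is chosen for flipping."""
    N = H.shape[1]
    E = np.empty(N)
    for n in range(N):
        checks = np.flatnonzero(H[:, n])   # checks adjacent to VN n
        E[n] = sum(w[m] for m in checks if s[m] != 0) - beta * L_abs[n]
    return int(np.argmax(E)), E
```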
• Step 5: Determine the value or magnitude of the flip. For the chosen symbol y(k)_n, flip the bit corresponding to the minimum value of |r_ni|, where 0 ≤ i < b, and update y(k−1) accordingly.

• Step 6: Decode: according to the result of the flip, obtain a new hard decision decoding sequence. Assign the value of y(k−1) to y(k), i.e. y(k) ← y(k−1), and return to Step 2.

Loop Update Detection Algorithm
Due to the possibility of an infinite loop in the process of symbol flipping, the symbol after the current flip may still be in error, so the flip fails to correct the symbol. Therefore, this paper proposes a loop update detection algorithm to further improve decoding performance and speed up convergence.
Firstly, the output symbol sequence matrix and the infinite loop detection matrix are defined, and each symbol is traversed sequentially. After sorting the symbols from smallest to largest, we determine the location of the flip symbol. Starting from the second smallest symbol, the same erroneous symbol is re-flipped after being converted to bit values: the wrong bits are sorted from smallest to largest, flipped in order of increasing index value, and then converted back into a symbol value that replaces the previous decoded symbol. The symbol position corresponding to the maximum value is placed in the exclusion sequence; after that position is excluded, the maximum-value error symbol is searched for again. This process is repeated throughout the decoding iterations. The specific decoding process is as follows.

• Step 1: Initialization. The exclusion sequence A is initialized to the empty set; it stores the positions of symbols that do not satisfy the flipping function. The bit flipping identifier F is defined and initialized to 1 and is used as a counter; its maximum value is b (1 ≤ F ≤ b), and the value of F determines the number of bits to be flipped in the flip symbol.

• Step 2: Determine the magnitude of symbol flipping. The bit positions to be flipped within symbol position n(k) are determined by the binary sequence r = [r_0, r_1, · · · , r_Nb−1] received after transmission over the AWGN channel. The symbol at position n(k) can be converted into b bits, which correspond to r_nb, r_nb+1, · · · , r_(n+1)b−1 in r. We sort |r_i| (nb ≤ i ≤ (n + 1)b − 1), with a smaller |r_i| indicating lower reliability of the corresponding bit. Therefore, according to the bit flipping identifier F, the F bit positions with the smallest absolute values among r_nb, r_nb+1, · · · , r_(n+1)b−1 are selected and the bits at the corresponding positions are inverted to obtain a new symbol sequence y(k) = [y_0, y_1, · · · , y_N−1].
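A sketch of this bit-selection rule, again assuming MSB-first packing of bits into symbols (the function name and interface are illustrative):

```python
import numpy as np

def flip_f_bits(r, n, b, F, y):
    """Step 2 sketch: within symbol position n, find the F bit
    positions with the smallest |r_i| among r[n*b : (n+1)*b] (the
    least reliable bits) and flip them in the symbol value y[n].
    MSB-first bit packing inside a symbol is an assumption."""
    seg = np.abs(np.asarray(r[n * b:(n + 1) * b], float))
    least = np.argsort(seg)[:F]      # F least reliable bit indices
    y = list(y)
    for j in least:
        y[n] ^= 1 << (b - 1 - j)     # flip bit j (MSB first)
    return y
```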

• Step 3: Detect whether there is an infinite loop. The loop update detection (LUD) algorithm is as follows. First, the output symbol sequence matrix Y_(k+1)×N is defined, as shown in Equation (9).
Following this, an infinite loop detection matrix E k×N is defined, as shown in Equation (10).
If all the elements of any row of the infinite loop detection matrix are 0, an infinite loop is detected; otherwise, no infinite loop is detected.
(1) If an infinite loop is detected and F has not reached the maximum b, we increase the value of F by 1, then return to Step 2 to re-determine the specific bit positions to be flipped in the symbol and flip the F bits at the corresponding positions.
(2) If an infinite loop is detected and F has reached the maximum b, the currently selected flip symbol position is stored in the exclusion symbol sequence A, F is reset to 1 and the flip symbol position is re-determined.
(3) If no infinite loop is detected, the exclusion symbol sequence A is reset to the empty set, the bit flipping identifier F is set to 1 and the check equations are recalculated.
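The detection rule above can be sketched by keeping the history of output sequences (the rows of Y) and checking whether the newest sequence exactly repeats an earlier one; an all-zero row of the difference matrix E is exactly such a repeat:

```python
def detect_loop(history, y):
    """LUD sketch: the rows of the output matrix Y are the symbol
    sequences produced so far; the detection matrix E compares the
    newest sequence against each earlier one (XOR difference).  A
    row of all zeros, i.e. an exact repeat of an earlier sequence,
    signals an infinite loop."""
    E = [[yi ^ hi for yi, hi in zip(y, row)] for row in history]
    looping = any(not any(row) for row in E)   # some all-zero row?
    history.append(list(y))
    return looping

hist = []
detect_loop(hist, [1, 0, 2])   # False: first sequence
detect_loop(hist, [1, 1, 2])   # False: new sequence
detect_loop(hist, [1, 0, 2])   # True: repeats iteration 0
```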
Combining the loop update detection algorithm with the weighted symbol flipping algorithm based on the sum of the magnitude, a modified sum of the magnitude for the weighted symbol flipping decoding algorithm based on loop update detection (LUDMSMWSF) is proposed. On the basis of the MSMWSF algorithm, the algorithm repeatedly flips the bits corresponding to the erroneous code word and searches for them in order of decreasing error probability, finally finding the correct code word. If, after traversing all the symbols, there is still an infinite loop, the current symbol is not an error symbol; the algorithm then adds the current symbol to the excluded symbol set to ensure that the same position will not be found repeatedly. The specific decoding algorithm flow chart is shown in Figure 1.


Complexity Analysis
In this section, the conditions used to analyze the complexity of the decoding algorithms are: (1) ignoring the small number of binary calculations and multiplication operations in the algorithms, with the assumption that a comparison operation is equivalent to an addition operation; and (2) taking regular non-binary LDPC codes as an example to compare the average number of real addition operations in each algorithm. In the past, the computational complexity of the initialization stage before the first iteration was often neglected when analyzing computational complexity [31]. However, the simulation analysis of the decoding algorithms in the next section shows that at the highest signal-to-noise ratios (SNR), frames converge after only a few decoding iterations, so the initialization phase accounts for a high proportion of the decoding complexity, and neglecting it leads to a significant error. Therefore, the computational complexity of the initialization stage before the first iteration is taken into account. At the same time, the description in the previous sections shows that all WSF algorithms perform symbol flipping in the final stage of each iteration, i.e. they are serial decoding algorithms: after the symbol flip is completed, the check equations are updated and then the flip function is updated. Each iteration involves three steps. In Steps (1) and (2), the adjoining vector of each check node m is calculated, so the total number of operations required is d_c · d_v. In Step (3), the variable node position corresponding to the maximum flip function E(k+1)_n is found among the N variable nodes, requiring N − 1 further comparison operations. Therefore, the computation of one iteration of the standard WSF algorithm is N − 1 + d_c · d_v. The specific update process is as follows: first, we calculate s(k+1)_h,m = 1 − s(k)_h,m, m ∈ M(n(k)), and then update the flip function according to its update formula to obtain E(k+1).
The average numbers of iterations of the six decoding algorithms are denoted AI_1-AI_6, while d_c represents the row weight and d_v the column weight. Table 1 gives the calculation of the total number of real operations for each decoding algorithm. From Table 1, it can be concluded that the complexity of the LUDWSF, LUDSMWSF and LUDMSMWSF algorithms is lower than that of the WSF, SMWSF and MSMWSF algorithms. Taking one algorithm as an example, the computation of the WSF algorithm and the LUDWSF algorithm is the same for each iteration; the difference lies in the average number of iterations, and the average iterations of the LUDWSF algorithm are much fewer than those of the WSF algorithm. From Table 1, the LUDMSMWSF algorithm only requires real additions, with complexity Mq(2d_c − 1), whereas the complexity of the Fast Fourier Transform-based belief propagation decoding algorithm (FFT-BP) [32,33] includes real additions, multiplications and divisions. Real multiplications and divisions consume more resources than real additions. Although the proposed WSF algorithm with flipping pattern requires more iterations for decoding than FFT-BP, it needs fewer real additions than FFT-BP and no multiplications; even after multiplying the number of iterations by the computational requirement of each iteration, the total computational requirement of the WSF algorithm is still lower than that of FFT-BP. Therefore, the computational requirement of the WSF algorithm is much lower than that of FFT-BP.

Simulation Results and Statistical Analysis
The simulation parameters used in this section are as follows: a (384, 192) LDPC code with a code rate of 0.5 and a column weight of three. The parity-check matrix is generated by the progressive edge growth (PEG) algorithm [34,35], simulated as 4-ary (Code 1) and 16-ary (Code 2), with a maximum of 100 iterations. Under AWGN channel conditions with BPSK modulation, at least 1000 error bits are collected at each SNR. The link-level simulation block diagram is shown in Figure 2.
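A minimal skeleton of this setup, with an all-zero codeword and a plain hard decision standing in for the decoder under test (both placeholders, since the PEG matrices and the SF decoders are not reproduced here):

```python
import numpy as np

def ber_point(ebn0_db, rate=0.5, min_errors=1000, frame_bits=384, rng=0):
    """Skeleton of the simulation: BPSK over AWGN, collecting at
    least 1000 bit errors per SNR point.  The all-zero codeword and
    the identity 'decoder' (plain hard decision) are placeholders
    for the PEG-constructed code and the SF decoders under test."""
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (ebn0_db / 10.0)))
    errors = bits = 0
    while errors < min_errors:
        tx = np.zeros(frame_bits)                  # all-zero codeword
        r = (2 * tx - 1) + rng.normal(0, sigma, frame_bits)
        rx = (r >= 0).astype(int)                  # hard-decision stand-in
        errors += int(rx.sum())                    # errors vs all-zero
        bits += frame_bits
    return errors / bits
```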

Weighted Factor Test
For non-binary LDPC codes with given column weights, the decoding performance differs under the same signal-to-noise ratio for different weighting factors. When a constant, optimal weighting factor is chosen, the performance loss is negligible and the implementation complexity is decreased [36,37]. The optimal value of the weighting factor is generally related to the non-binary LDPC code and the specific code structure. Figures 3 and 4 show the bit error rate performance of Code 1 and Code 2 with different weighting factors when using the MSMWSF algorithm under different SNR conditions. Based on the definition of the optimal value of the weighting factor, it can be seen from Figure 3 that the influence of the weighting factor on the bit error rate (BER) is not obvious at lower SNR, as the optimal value of the weighting factor varies little with an increase in SNR. As shown in Figures 3 and 4, the optimal value of the weighting factor of the MSMWSF algorithm for Code 1 is 1.8, while for Code 2 it is 1.


Comparison of Algorithm Performance and Average Iteration Numbers
The performance comparison between Code 1 and Code 2 under five different decoding algorithms with optimal parameters is shown in Figures 5 and 6, respectively. With an increase in SNR, the coding gain of the MSMWSF algorithm gradually increases compared with the other existing algorithms; at low SNR, the performance of the MSMWSF algorithm is almost the same as the other algorithms. According to the simulation diagrams of references [34-36], a higher field order leads to poorer performance of the WSF algorithm. From Figures 5 and 6, we can see that the performance for Code 1 is better than that for Code 2, which confirms the effectiveness of the algorithm proposed in this paper.

As shown in Figures 9 and 10, the average number of iterations and the average number of addition operations of the two algorithms proposed in this paper are significantly lower than those of the traditional algorithms.
Because the proposed algorithm has a more efficient and accurate symbol flipping function, it improves performance and reduces the complexity of the algorithm to a certain extent; the MSMWSF algorithm achieves fast convergence and low complexity. We set the SNR to 3.5 dB and simulated 10,000 frames under the conditions of Code 1, then collected decoding-failure statistics for the five algorithms, as shown in Table 2. Table 2 shows that the WSF algorithm fails on nearly 99% of frames, whereas the two decoding algorithms proposed in this paper fail on about 62% and 42% of frames, fewer than the traditional algorithms. It can also be seen that the algorithms proposed in this paper do not increase the implementation complexity, which is in fact reduced to a certain extent. The total computation required for decoding this LDPC code with three different algorithms at 4.5 dB is shown in Table 3. From Table 3, the FFT-BP algorithm requires nearly six times the computation of the MSMWSF algorithm in real additions alone. Therefore, the computational requirement of the MSMWSF algorithm is much lower than that of FFT-BP, with no real multiplications or divisions. The MSMWSF algorithm has the lowest complexity and does not consume hardware multiplier and divider resources or the corresponding software overhead.


Conclusions
This paper proposes a sum of the magnitude for hard decision decoding algorithm based on loop update detection. The algorithm combines the magnitudes of the variable nodes adjacent to a check node into a sum and uses it as the reliability information. At the same time, the reliability information of the variable node itself is taken into account and a more effective flip function is obtained, which improves the flipping efficiency of the symbols and thus the decoding performance of the algorithm. The loop update detection algorithm is introduced to improve the accuracy of symbol flipping and to further accelerate the convergence rate of decoding. Simulation results show that, compared with the WSF algorithm, the proposed LUDMSMWSF algorithm gains about 1.3 dB and 1.8 dB for the two codes, respectively, while the decoding complexity is greatly reduced. Therefore, the algorithm proposed in this paper can be well applied to 5G mobile communication systems and meets the requirements of their decoding algorithms. It is a good candidate decoding algorithm for high-speed communication devices.