A Novel Flip-List-Enabled Belief Propagation Decoder for Polar Codes

Abstract: Due to its parallel processing design principle, belief propagation (BP) decoding is attractive and provides good error-correction performance compared with successive cancellation (SC) decoding. However, its error-correction performance is still inferior to that of successive cancellation list (SCL) decoding. Consequently, this paper proposes a novel flip-list (FL)-enabled belief propagation (BP) method to improve the error-correction performance of BP decoding for polar codes with low computational complexity. The proposed technique identifies the vulnerable channel log-likelihood ratios (LLRs) that deteriorate the BP decoding result. The FL is utilized to efficiently identify the erroneous channel LLRs and correct them for the next BP decoding attempt. Preprocessing the channel LLRs through the FL improves the error-correction performance with minimal flipping attempts and reduces the computational complexity. The proposed technique was compared with state-of-the-art algorithms, i.e., BP bit-flip (BP-BF), generalized BP-flip (GBPF), cyclic redundancy check (CRC)-aided SCL (CA-SCL) decoding, and ordered statistic decoding (OSD). Simulation results showed that the FL-BP achieved a block error rate (BLER) performance gain of up to 0.7 dB compared with the BP, BP-BF, and GBPF decoders. Moreover, the computational complexity was reduced considerably in the high signal-to-noise ratio (SNR) regime compared with the BP-BF and GBPF decoding methods.


Introduction
Erdal Arikan proved that polar codes can achieve the Shannon capacity of a binary-input discrete memoryless channel [1] at infinite blocklength (N) using an SC decoder with computational complexity O(N log N) [2]. However, the BLER performance of polar codes under SC decoding is not competitive with low-density parity-check (LDPC) and Turbo codes at finite blocklengths [3,4]. Successive cancellation list (SCL) [5] and successive cancellation stack (SCS) [6] decoding algorithms were proposed to improve the BLER performance of SC decoding, with computational complexities of O(LN log N) and O(DN log N), where L is the list size in the SCL decoder and D is the maximal depth of the stack in the SCS decoder. In the literature, multiple variants of SCL and SCS have been introduced to improve their performance; in particular, the CA-SCL decoding algorithm [7] significantly improves the BLER performance of SC decoding, yet it suffers from high decoding latency and implementation complexity compared with the conventional SC decoder. To alleviate these issues, a successive cancellation flip (SC-flip) decoding algorithm was proposed in [8]. SC-flip identifies the first incorrectly estimated bit that occurs during SC decoding and flips it to restrict the error propagation in the next SC decoding attempt. Several refinements of SC-flip, with average computational complexity close to that of the standard SC decoder in the moderate-to-high SNR regime, have been proposed [9][10][11][12][13][14][15]. However, they still rely on the SC decoding mechanism, which results in high decoding latency and low throughput due to its sequential processing nature.
Belief propagation (BP) is an alternative decoding mechanism whose intrinsically parallel structure makes it attractive [16]. It has low latency and considerably outperforms SC decoding in terms of BLER performance, but it cannot compete with CA-SCL decoding [3,4]. This shortfall implies a need for improvement if BP is to approach the BLER performance of CA-SCL. Zhang et al. in [17] proposed a modified belief propagation method in which the nodes' messages are rectified by multiplying them with the messages of specified check nodes, improving the reliability of the propagated messages. A hybrid BP-SC scheme coupling the SC and BP decoders was introduced in [18,19]. In the hybrid BP-SC method, the BP and SC decoders are executed sequentially such that either BP or SC may determine the valid output: BP is executed first, and if it fails, the valid codeword is estimated by assigning the denoised LLRs to the SC decoder. A postprocessing algorithm classifying errors into converged, false converged, and oscillation errors was introduced in [20]. Elkelesh et al. in [21] proposed BP-list decoding based on the permuted factor graphs introduced in [22] to improve BLER performance. Their BP-list method applies permuted versions of the standard factor graph to correct the result when the standard graph fails to estimate the valid decoding result. Wang et al. in [23] proposed a bit-strengthening algorithm, which reinforces the prior messages of stable information bits via a hard decision.
Recently, a bit-flipping strategy for the BP decoder that significantly improves its BLER performance was proposed in [24]. Yu et al. in [24] utilized the critical set (CS) approach of [25] to identify the error-prone bit and flip its a priori knowledge when BP decoding fails to estimate a valid result; this was the first bit-flip method introduced for the BP decoder. Inspired by the BP bit-flip method, Yuyu et al. in [26] merged BP-list decoding with bit-flipping and artificial noise. The artificial noise was added to the CRC-aided belief propagation list (CA-BP-list) decoder as a postdecoding process, and a flip bit set (FBS) was constructed if all the BP-list decoders failed. The work in [27] used multiple stopping criteria to determine the undetected errors that contribute to the error floor and proposed multiple bit-flipping sets (BFSs) to improve the error-correction performance of the BP decoder. Most recently, GBPF and enhanced BP-flip (EBPF) decoders were proposed to better exploit the BP bit-flip concept [28]. The GBPF and EBPF decoders construct the flip list through the smallest absolute decision LLRs and the reliability sequence of the unfrozen bit indices, respectively. The smallest absolute decision LLRs were initially used as a metric for identifying the first error occurring during SC decoding caused by channel noise, which may help slow down or stop the error propagation phenomenon in SC decoding [8]. However, it was later proven in [9] that this metric is suboptimal for identifying the error-prone bit and incurs a higher sorting complexity. Moreover, BP decoding is free of the error propagation phenomenon, and all errors that occur are due to channel noise. Thus, flipping the a priori knowledge at the indices of the lowest absolute decision LLRs or of the least reliable nonfrozen bit channels cannot guarantee an improvement at finite blocklengths.
Besides, all of the aforementioned flipping algorithms focus on nonfrozen bit channels and their soft decision LLRs. On the contrary, the BP decoding algorithm initializes the messages by the channel LLRs. However, there exist erroneous channel LLRs that lead to an invalid BP decoding output. Consequently, the BP decoding messages should be initialized at a high accuracy to achieve a high BLER performance of BP decoding [29].
To the best of our knowledge, none of the previous works utilized the channel LLRs for BP decoder performance improvement, even though the channel LLRs play an important role and have the potential to improve the BLER performance of the BP decoder. In the sequel, this work classifies the channel LLRs into the corresponding frozen and nonfrozen bit indices by Monte Carlo simulation and considers the frozen bit indices in conjunction with the distribution of erroneous channel LLRs to improve the BLER performance of the BP decoder. Therefore, a novel flip-list-enabled belief propagation (FL-BP) decoder for polar codes is proposed. The FL-BP technique utilizes the corresponding frozen bit indices for the correction of erroneous channel LLRs. Simulation results show high BLER gains with the FL-BP compared with BP, BP bit-flip, GBPF decoding, and CA-SCL while retaining an average computational complexity close to that of standard BP decoding in the medium-to-high SNR regions.
Our contribution in this work is threefold:
• The functional behavior of the channel LLRs was analyzed by designing a preprocessing genie-aided BP decoder with foreknowledge of the transmitted information block (u_1^N) and codeword (x_1^N). This analysis enabled us to identify the erroneous channel LLRs causing invalid BP decoding results. Consequently, the correction of a single channel LLR improved the BLER performance significantly;
• We classified the channel LLRs into frozen and nonfrozen bit indices by Monte Carlo simulation. Accordingly, it was observed that about half of the erroneous channel LLRs corresponded to the frozen bit indices; these form the FL. Furthermore, we developed a method for identifying and correcting the erroneous channel LLRs using the FL to improve the BLER performance of the BP decoder for polar codes;
• We propose a novel FL-BP decoding algorithm that draws the erring channel indices from the FL and performs correction through the preprocessed erroneous channel LLRs whenever the initial BP decoding fails to produce the correct output.
The simulation results showed a significant SNR gain on conventional BP, BP bit-flip, and GBPF, and it considerably outperformed CA-SCL-8 with the average complexity close to the conventional BP decoding in medium-to-high SNR regions.
The remainder of this paper is organized as follows: Section 2 briefly explains the preliminaries of the polar code, BP decoder, and BP-BF decoder. Section 3 provides a comprehensive analysis of the erroneous channel LLRs and the flipping behavior of the erroneous channel LLR for incorrect BP decoding results. In Section 4, we present the details of our proposed FL-BP algorithm and describe the pseudocode. The simulation results and discussion are presented in Section 5, and finally, the paper is concluded in Section 6 with the possible future direction.

Polar Codes
The polar codes split the N channels into extremely reliable (K) channels for carrying the information and unreliable (N − K) channels for the frozen bits. A polar code of blocklength N = 2^n is a function P(N, K, A) having dimension K with rate R = K/N, where A ⊂ {1, · · · , N} is the set of indices for the information bits. The term A^c is the complement of A, representing the set of indices for the frozen bits such that A ∪ A^c = {1, · · · , N}, with A^c known to both the transmitter and receiver. The encoding process of polar codes is represented by Equation (1):

x_1^N = u_1^N G^{⊗n}, (1)

where x_1^N is the encoded codeword, u_1^N the input message vector, and G^{⊗n} the n-th Kronecker power (⊗) of G = [1 0; 1 1], known as the Arikan kernel.
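As an illustration, Equation (1) can be evaluated directly by forming the n-th Kronecker power of the Arikan kernel. The sketch below is our own (the function name and the explicit O(N^2) matrix product are illustrative choices; practical encoders use the butterfly structure instead):

```python
import numpy as np

def polar_encode(u):
    """Encode a length-N vector u (frozen bits already placed) via
    Equation (1): x = u * G^{(x)n} over GF(2), G the Arikan kernel."""
    G = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    n = int(np.log2(len(u)))
    Gn = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        Gn = np.kron(Gn, G)  # build the n-th Kronecker power
    return (np.asarray(u, dtype=np.uint8) @ Gn) % 2
```

For N = 4, the rows of G^{⊗2} are [1000], [1100], [1010], [1111], so encoding u = (0, 0, 0, 1) yields the all-ones codeword.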
In this work, the polar codes were constructed through the Gaussian approximation (GA) method by assuming that the all-zero codeword was transmitted over the AWGN channel [30]. The error probability of each polarized channel is then computed according to Equation (2):

P_e(u_i) = Q( sqrt( E[L_i^N] / 2 ) ), (2)

where P_e(u_i) is the error probability of each subchannel and E[L_i^N] is the expectation of the LLR of each subchannel.
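The GA recursion itself requires the φ-function approximation of [30]. As a simpler, self-contained illustration of how the synthetic channels polarize and how the index sets A and A^c are chosen, the sketch below substitutes the Bhattacharyya-parameter recursion (exact for the BEC) for the full GA construction; the function name and the z0 default are our own assumptions, and this is not the construction used in the paper's simulations:

```python
def bhattacharyya_construction(N, K, z0=0.5):
    """Rank the N synthetic channels via the Bhattacharyya recursion
    z- = 2z - z^2 (degraded split), z+ = z^2 (upgraded split),
    then pick the K most reliable indices as A. Returns (A, Ac), 0-based."""
    n = N.bit_length() - 1
    z = [z0]
    for _ in range(n):
        # each channel splits into a worse and a better synthetic channel
        z = [v for a in z for v in (2 * a - a * a, a * a)]
    order = sorted(range(N), key=lambda i: z[i])  # most reliable first
    A = sorted(order[:K])    # information set
    Ac = sorted(order[K:])   # frozen set
    return A, Ac
```

For P(8, 4) this recovers the familiar information set {3, 5, 6, 7} (0-based indexing).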

Belief Propagation Decoding
Belief propagation is an iterative decoding mechanism proposed for the decoding of polar codes in [16]. Due to its parallel structured graph, it performs all the processing concurrently. A polar code P(N, K) can be represented through a factor graph with n stages in total, such that each stage consists of N/2 processing elements (PEs) having two input and two output nodes, where n ∈ {1, 2, 3, . . .} and N = 2^n.
Following the polar codes' standard definition, Figure 1a illustrates the factor graph of the polar code P(8, 4), where the pair (i, j) denotes the row and column number of the factor graph. Moreover, as depicted in Figure 1b, a single PE consists of four nodes such that each node carries two types of LLR messages (i.e., right-to-left and left-to-right), denoted by L_{i,j} and R_{i,j}, respectively. These messages are stored in the L and R matrices, each of dimension N × (n + 1). Prior to the BP decoding process, the two matrices L_{N×(n+1)} and R_{N×(n+1)} are initialized according to Equations (3) and (4):

L_{i,n+1} = L(y_i), (3)

R_{i,1} = +∞ if i ∈ A^c, and R_{i,1} = 0 otherwise, (4)

where L(y_i) is the LLR of the i-th received bit channel, computed using Equation (5) and assigned to the right-most column of the L matrix:

L(y_i) = 2y_i/σ², (5)

where σ² is the noise variance. The left-most column of the R matrix represents the a priori knowledge of the information sequence, with +∞ expressing the certainty of the frozen bits. In each iteration, the LLR messages of the L and R matrices are updated and propagated through each PE according to Equation (6), written here for a single PE with right-side input messages L_1, L_2, left-side input messages R_1, R_2, and output messages on the opposite sides:

L_{out,1} = g(L_1, L_2 + R_2),
L_{out,2} = g(R_1, L_1) + L_2,
R_{out,1} = g(R_1, L_2 + R_2),
R_{out,2} = g(R_1, L_1) + R_2, (6)

The function g(a, b) = 0.9375 sign(a) sign(b) min(|a|, |b|) is the min-sum approximation proposed for polar coding in [31], where 0.9375 is the scaling factor suggested in [32] for balancing the approximation error. The decisions for the estimated information bit û_i and the codeword x̂_i are made upon the completion of each iteration, according to Equations (7) and (8), respectively:

û_i = 0 if L_{i,1} + R_{i,1} ≥ 0, and û_i = 1 otherwise, (7)

x̂_i = 0 if L_{i,n+1} + R_{i,n+1} ≥ 0, and x̂_i = 1 otherwise. (8)
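To make the update schedule concrete, the following is a compact, loop-based sketch of a min-sum BP decoder in the spirit of the equations above. It is not the paper's implementation: the stage pairing assumes natural-order (non-bit-reversed) encoding, the function names are our own, and the flooding schedule (a full right-to-left pass, then a full left-to-right pass per iteration) is one common choice among several:

```python
import numpy as np

ALPHA = 0.9375  # min-sum scaling factor suggested in [32]

def g(a, b):
    """Scaled min-sum approximation of the boxplus operation."""
    return ALPHA * np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def bp_decode(chan_llr, frozen, max_iter=60):
    """Min-sum BP over the polar factor graph.

    chan_llr : length-N channel LLRs, assigned to the right-most L column
    frozen   : boolean mask, True at the frozen (A^c) indices
    Returns the hard-decision estimate of u_1^N.
    """
    frozen = np.asarray(frozen, dtype=bool)
    N = len(chan_llr)
    n = N.bit_length() - 1
    L = np.zeros((N, n + 1))
    R = np.zeros((N, n + 1))
    L[:, n] = chan_llr        # channel LLR initialization
    R[frozen, 0] = np.inf     # a priori knowledge of the frozen bits

    for _ in range(max_iter):
        # right-to-left pass: update the L matrix column by column
        for s in range(n - 1, -1, -1):
            half = N >> (s + 1)
            for q in range(0, N, N >> s):
                for a in range(q, q + half):
                    b = a + half  # PE partner row at this stage
                    L[a, s] = g(L[a, s + 1], L[b, s + 1] + R[b, s])
                    L[b, s] = g(R[a, s], L[a, s + 1]) + L[b, s + 1]
        # left-to-right pass: update the R matrix column by column
        for s in range(n):
            half = N >> (s + 1)
            for q in range(0, N, N >> s):
                for a in range(q, q + half):
                    b = a + half
                    R[a, s + 1] = g(R[a, s], L[b, s + 1] + R[b, s])
                    R[b, s + 1] = g(R[a, s], L[a, s + 1]) + R[b, s]
    # hard decision on the u-domain column
    return (L[:, 0] + R[:, 0] < 0).astype(int)
```

On a noiseless P(4, 2) input the sketch recovers the transmitted bits; a production decoder would add an early-termination check such as the CRC criterion used later in the paper.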

Belief Propagation Bit Flip Decoding
Recently, Yu et al. in [24] proposed a more advanced BP method with the introduction of a bit-flip mechanism having a maximum flipping order of length ω. They incorporated the idea of the critical set (CS), originally defined for SC decoding [25], to identify the error-prone bit while retaining the underlying mechanism of the conventional BP decoder. In their algorithm, standard BP decoding is used first to estimate the output û_1^N, and then an m-bit CRC is performed over the obtained output to verify the correctness of the received information block. If the CRC indicates that the estimated information block contains at least one error, the BP-BF decoder obtains the error-prone bit u_i from the CS and sets its a priori knowledge according to Equation (9) for the next BP decoding attempt:

R_{i,1} = +∞ or −∞, (9)

i.e., the a priori LLR of the selected bit is forced toward the decision 0 or 1. The BP-BF sets the a priori knowledge of ω indices to +∞ and −∞ consecutively for ω > 1 and thus requires additional decoding attempts to obtain the desired BP decoding results. Furthermore, additional attempts are performed for all the Rate-1 nodes until the correct result is obtained, which leads to growth in the size of the CS and, hence, results in high computational complexity and decoding latency.
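To make the ω-order flipping schedule concrete, the sketch below enumerates BP-BF flip attempts: every subset of up to ω critical-set indices, with each selected index forced to +∞ or −∞ in turn. This is our reading of the schedule described above, not code from [24]; the name bf_attempts, its arguments, and the dictionary output format are illustrative:

```python
import itertools

def bf_attempts(critical_set, omega):
    """Enumerate BP-BF flip attempts: for each size-w subset of the
    critical set (1 <= w <= omega), try every +inf/-inf assignment of
    the a priori LLRs of the chosen indices."""
    for w in range(1, omega + 1):
        for idx in itertools.combinations(critical_set, w):
            for signs in itertools.product((float('inf'), float('-inf')),
                                           repeat=w):
                # one attempt = a mapping {bit index: forced a priori LLR}
                yield dict(zip(idx, signs))
```

The enumeration makes the cost visible: a critical set of size |CS| generates sum over w of C(|CS|, w) * 2^w attempts, which grows quickly with ω, matching the complexity concern raised above.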

Analysis of the Channel LLRs and the Flipping Behavior for the Erroneous LLR
To investigate the erroneous behavior of the channel LLRs, we designed a preprocessing genie-aided decoder for the channel LLRs (L(y_1^N)). An adaptive observation method was applied, and the erroneous channel LLRs were classified into the corresponding frozen and nonfrozen bit indices. Then, through the preprocessing genie-aided decoder, the improvement of the BLER performance obtained by flipping the erroneous channel LLRs was analyzed. Details of the overall mechanism are presented in the subsequent sections.

Analysis of the Channel LLRs
The designed preprocessing genie-aided BP decoder maintains foreknowledge of the transmitted information block (u_1^N) and codeword (x_1^N). The preprocessing genie-aided decoder corrects the single erroneous channel LLR once it detects the failure of the BP decoder in estimating the correct information block. In our adaptive method, we analyzed the behavior of the channel LLRs by running Monte Carlo simulations for 10^7 blocks over the additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) modulation for different SNRs with a code rate of R = 0.5. The parameters of our Monte Carlo simulation are given in Table 1. Before starting BP decoding, the preprocessing genie-aided BP decoder obtains the received codeword (z_1^N) from a hard decision on the channel LLRs according to Equation (10) and records the indices at which the transmitted (x_1^N) and received (z_1^N) codewords disagree (x_i ≠ z_i) following Equation (11). This enables the preprocessing genie-aided BP decoder to segregate the frozen bit indices I_ef and nonfrozen bit indices I_enf according to Equations (12) and (13):

z_i = 0 if L(y_i) ≥ 0, and z_i = 1 otherwise, (10)

I_et = { i : x_i ≠ z_i, 1 ≤ i ≤ N }, (11)

I_ef = I_et ∩ A^c, (12)

I_enf = I_et ∩ A, (13)

where I_et represents the set of all erroneous bit indices. The output of the simulation concerning the expected cardinality of the set I_et, known as the event gamma (γ), against different SNRs is summarized in Table 2. It can be observed that the event gamma (γ) increases linearly with the block size while decreasing with increasing SNR. For instance, given SNR = 3.5 dB, the resultant event is 17, 30, 63, and 133 for the polar codes P with block sizes (256, 128), (512, 256), (1024, 512), and (2048, 1024), respectively. We further analyzed the behavior of the channel LLRs by taking a percentage measurement of the erroneous bit channels caused by the noise over the AWGN channel, plotted as a histogram for the polar code P(256, 128) in Figure 2.
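Under the stated genie assumptions (known codeword x_1^N and frozen set A^c), the classification described above amounts to a few lines of bookkeeping: hard-decide every channel LLR, compare against the known codeword, and split the mismatching indices by frozen/nonfrozen position. The sketch below is illustrative, with hypothetical function and variable names:

```python
import numpy as np

def classify_llr_errors(chan_llr, x, frozen):
    """Genie-aided bookkeeping: hard-decide each channel LLR, compare
    with the known codeword x, and split the erroneous indices into
    frozen (I_ef) and nonfrozen (I_enf) sets."""
    z = (np.asarray(chan_llr) < 0).astype(int)       # hard decision z_i
    I_et = np.flatnonzero(z != np.asarray(x))        # indices with x_i != z_i
    I_ef = [int(i) for i in I_et if frozen[i]]       # frozen-position errors
    I_enf = [int(i) for i in I_et if not frozen[i]]  # nonfrozen-position errors
    return I_ef, I_enf
```

Running this inside a Monte Carlo loop over noisy received vectors yields the per-SNR statistics of |I_et|, |I_ef|, and |I_enf| reported in Table 2 and Figure 2.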
It can be observed that in the low SNR regions, the percentage of I_ef is higher than that of I_enf, and the two become closer as the SNR increases. Such behavior follows from the inverse relationship between the SNR and the erroneous channel LLRs, i.e., the number of erroneous LLRs decreases as the SNR increases.

Flipping Behavior of the Erroneous Channel LLR
Considering the foregoing analysis in Section 3.1, one pending question remains for the given erroneous channel LLRs: What is the impact of correcting a single erroneous channel LLR, and would such rectification result in correctly BP-decoded information bits (u_1^N)? We answer this question as follows: whenever the BP decoder fails to estimate the valid output, the preprocessing genie-aided BP decoder flips the single erroneous channel LLR corresponding to an index drawn sequentially from the list I_et before starting the next BP decoding process, as given in Equation (15):

L_{i_e,n+1} = −L(y_{i_e}), (15)

where L_{i_e,n+1} represents the erroneous channel LLR at the i_e-th index, as discussed in Equation (3). Once the LLR is flipped, the rest of the decoding follows the standard BP decoding procedure, and the decoding process terminates with a valid BP decoding result. The BLER performance of this preprocessing genie-aided BP decoder is presented in Figure 3. It can be noticed that with the correction of a single erroneous channel LLR, BP decoding converges to the correct output while improving the average BLER performance by 1.5 dB throughout the simulated SNR region. We analyzed the simulation results of the 10^7 blocks through the frequency of the event alpha (α) by considering different code-lengths under various SNR regimes, and the results are summarized in Table 3, where the event α is the maximum number of flipping attempts needed to produce a valid result. We recorded the event α by setting a maximum limit of 16 attempts, and it is clear from Table 3 that within this limit, the results' accuracy for P(256, 128) was about 90% at all the SNR values and remained the same for P(512, 256) at E_b/N_0 = 3.5 dB. Due to the proportionality between the block size and the number of erroneous channel LLRs, we can see a decreasing frequency of the event α for the large blocklength P(1024, 512).
The frequency of the event α is trivial for large blocks, while it is adequate for small blocklengths, which implies the suitability of bit flipping for medium-blocklength codes. Consequently, the correction of a single erroneous channel LLR improves the BLER performance significantly.

The Proposed FL-Enabled BP Algorithm
In this section, we first present the design mechanism for the flip list to determine the erroneously received LLRs and then propose the BP flip algorithm based on the FL.

Construction of the Flip List
Given the preceding analysis (Section 3.1), about half of the erroneous channel LLRs were observed to belong to I_ef at all SNR points, along with the higher frequency of α (i.e., 90% in Table 3) for P(256, 128). Accordingly, the frozen bit indices were deemed an adequate basis for the flip list and suitable for identifying the erroneous channel LLRs.
Suppose a codeword (x_1^N) generated through Equation (1) is transmitted with BPSK modulation over the AWGN channel. The BPSK modulation maps x_1^N into the transmission symbols r_1^N = {r_1, r_2, · · · , r_N} with r_i = (−1)^{x_i} ∈ {±1}. At the receiver end, the received signal y_1^N = {y_1, y_2, · · · , y_N} with y_i = r_i + n_i is the output of the sampler in the demodulator, where 1 ≤ i ≤ N and n_i is independent Gaussian noise with zero mean and variance N_0/2. The received vector (y_1^N) is further employed to compute the channel LLR vector (L(y_1^N)) using Equation (5). If a hard decision is performed on each i-th channel LLR L(y_i) independently, the natural choice for the measure of reliability is abs(L(y_i)) [33]. Consequently, the proposed method sorts the channel LLRs in ascending order such that |L(y_1)| < |L(y_2)| < · · · < |L(y_N)|. In our work, each time an error occurred, the flip list was used for the identification of the erroneous channel LLR, as its indices were arranged according to the smallest absolute channel LLRs, as presented in Equation (16):

FL = (k_1, k_2, · · ·), k_j ∈ A^c, |L(y_{k_1})| ≤ |L(y_{k_2})| ≤ · · · , (16)

where k represents the index of the channel LLR.
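Since the genie set I_ef is unavailable at the receiver, the practical FL can only draw on the frozen index set A^c itself. The sketch below reflects our reading of the construction above, sorting the frozen indices by ascending channel LLR magnitude; the name build_flip_list is our own:

```python
def build_flip_list(chan_llr, frozen):
    """Construct the FL: frozen-bit indices sorted by ascending |L(y_k)|,
    so the least reliable frozen-position channel LLRs are tried first."""
    frozen_idx = [i for i in range(len(chan_llr)) if frozen[i]]
    return sorted(frozen_idx, key=lambda i: abs(chan_llr[i]))
```

The sort costs O(|A^c| log |A^c|) once per received block and is reused across all flipping attempts for that block.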

Pseudocode of the Proposed FL-BP Algorithm
The underlying mechanism of our proposed FL-BP is to identify the erroneous channel LLRs by using the FL. FL-BP initializes the decoding process of the received codeword by exploiting the service of the standard BP decoder. BP decoding requires an early termination (ET) criterion to stop the decoding process on a correct output, which helps eliminate unnecessary iterations. This work considers the CRC termination mechanism due to its better performance in error identification. The decoding process stops either upon reaching the maximum number of iterations or when the CRC traces out the correct output. Once the BP decoder fails to obtain a sound output, the proposed FL-BP activates the erroneous channel LLR flip event and records the j-th index from the FL to identify the erroneous channel LLR using Equation (17):

i_e = FL(j), 1 ≤ j ≤ T, (17)

The algorithm flips the erroneous channel LLR L(y_{i_e}) corresponding to the j-th index using Equation (15) and initializes the L matrix with the new channel LLRs in the upcoming BP decoding, as discussed in Equation (3). The process continues until the end of the FL, provided that the CRC does not detect a valid result.
The pseudocodes of the main algorithm and subalgorithm of the proposed FL-BP are provided in Algorithms 1 and 2. The main algorithm takes several inputs and utilizes the services of the subalgorithm to identify and correct the received codewords. A detailed description of the main steps is presented as follows:
1. Initialize the local and global parameters such as the loop control variables i, j, and flag and the maximum number of iterations (I_max);
2. Arrange the right-to-left propagation matrix (L) and left-to-right propagation matrix (R) by iterating through the size of the codeword (N) in Lines 2 to 9;
3. Obtain the estimated information block (û_1^k) by utilizing the conventional BP decoding method and pass it through the CRC procedure. Check the remainder (r) to validate whether the BP has succeeded in achieving the correct output. If the BP succeeded, set the flag value and break the loop; otherwise, continue with the next iteration. Repeat this process from Lines 10 to 18;
4. Check the flag value; if the BP leads to a valid output, return the results, otherwise call Algorithm 2;
5. Initialize the local variables such as the maximum number of flipping attempts (T), the erroneous channel LLR (e), and the loop control variables in Line 1 (Algorithm 2). Pick the index of the error-prone channel LLR from the FL and perform the flipping in Lines 3 and 4. Call the BP decoding method with the rectified preprocessed LLRs (Ppllr_1^N) and verify the validity of the result through the CRC procedure in Lines 5 to 9. Finally, return the result to the main calling algorithm.
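Putting the pieces together, the outer FL-BP loop of Algorithms 1 and 2 can be sketched as follows. The inner BP decoder and the CRC check are passed in as callables because their details live in the algorithms themselves; the names bp_decode and crc_ok and the (estimate, attempts) return pair are our own conventions:

```python
def fl_bp_decode(chan_llr, flip_list, bp_decode, crc_ok, T):
    """Sketch of the FL-BP outer loop: run standard BP first; on CRC
    failure, flip one FL-indexed channel LLR per attempt (sign negation)
    and re-decode. Returns (estimate, number of flip attempts used)."""
    u_hat = bp_decode(chan_llr)
    if crc_ok(u_hat):
        return u_hat, 0                  # standard BP already succeeded
    for j in range(min(T, len(flip_list))):
        trial = list(chan_llr)
        trial[flip_list[j]] = -trial[flip_list[j]]  # flip one channel LLR
        u_hat = bp_decode(trial)
        if crc_ok(u_hat):
            return u_hat, j + 1
    return u_hat, min(T, len(flip_list))  # declared a decoding failure
```

Because each attempt flips exactly one channel LLR of the original received vector, the worst case is T extra BP runs, while a success at attempt j stops the loop early, which is the source of the average-complexity savings reported in Section 5.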

Simulation Results
The simulations were conducted under the GA construction with a design SNR of 2.5 dB and BPSK modulation over the AWGN channel. The performance of FL-BP was validated in terms of the BLER and average computational complexity and compared against the BP, BP-BF, GBPF, and CA-SCL decoders for P(256, 128) and P(512, 256).

BLER Performance
The BLER performance of the proposed FL-BP for the polar code P(256, 128) with an m-bit CRC (m = 24) is demonstrated in Figure 4. It compares the BLER of the proposed FL-BP with the conventional BP, BP-BF(CS-3) [24], and GBPF [28] decoders for P(256, 128 + 24), the CA-SCL [7] decoder for P(256, 128 + 8), and OSD [34] for P(256, 128 + 8), where Y_th represents the threshold and σ is the variance of the AWGN. It can be observed that the FL-BP had a remarkable BLER performance compared with the state-of-the-art flip decoders. The results reveal that the BLER performance of FL-BP had a higher gain in the low-to-moderate SNR regions compared with the conventional BP decoder. For instance, at SNR = 2 dB, FL-BP (T = 100) had up to a 0.7 dB and 0.4 dB gain over BP and CA-SCL-4, respectively. Nevertheless, due to the error floor of BP decoding, its performance gradually deteriorated in the high SNR regime, but it still competed considerably with BP-BF(CS-3) and GBPF. The BLER performance improvement underlines the importance of an early correction of a single LLR before initiating the decoding process.
A comparison of the BLER performance of the FL-BP, conventional BP [16], BP-BF(CS-3) [24], and GBPF [28] decoders for P(512, 256 + 24), the CA-SCL [7] decoder for P(512, 256 + 16), and OSD [34] for P(512, 256 + 8) is visualized in Figure 5. The results indicate a higher gain with the proposed FL-BP for P(512, 256) against the conventional BP for SNR ≤ 3 dB. In more detail, at SNR = 2.5 dB, FL-BP (T = 100) had up to a 0.7 dB and 0.2 dB gain over BP and CA-SCL-4, respectively. As discussed earlier (Section 3), the FL size is proportional to the code-length and requires a higher number of flipping attempts to improve the BLER performance of the BP decoder. Consequently, the proposed FL-BP algorithm is a more feasible solution for short-to-medium code-lengths.

Average Computational Complexity
The computational complexity of BP-BF decoding of polar codes is measured in terms of the average number of iterations I_avg at SNR = δ (i.e., a given SNR value), in conjunction with the overall processing of the error-prone list. Consequently, the same criteria were adopted to evaluate the computational complexity of the proposed FL-BP. The proposed FL-BP eases the computational burden of BP decoding through the introduction of the preprocessed FL and thereby lowers the computational complexity. The FL-BP has a worst-case number of flipping attempts T_max = |FL| and an average-case computational complexity of I_avg ≤ T_max × I, where I represents the number of BP decoding iterations, with I = 100. FL-BP's performance was measured in terms of the average computational complexity and compared with the BP-BF and GBPF decoders.
The average computational complexity of BP, BP-BF, GBPF, and FL-BP is visualized in Figure 6. It shows a rational number of iterations in the moderate SNR range (i.e., 1-2 dBs) with FL-BP compared with the underlying BP flip decoding methods while reaching close to the conventional BP decoders at higher SNRs.
In addition, the computational complexity of FL-BP was further analyzed in terms of decoding clock cycles and evaluated for P(256, 128 + 24) concerning BP-BF, GBPF, and the proposed FL-BP, as tabulated in Table 4. It can be observed that, due to the preprocessed FL, FL-BP achieved an early correction of the codeword and, thus, required substantially fewer decoding cycles compared with the other BP-BF methods. The decoding clock cycles showed an inverse relationship with the SNR and decreased with increasing SNR for all the BP flip algorithms. In contrast to the other BP flip algorithms, FL-BP was more feasible for medium blocklengths of the polar codes, while remaining comparable to the conventional BP algorithm at high SNRs.

Conclusions
In this paper, we highlighted the consequences of single-channel LLR correction on BP performance and presented a novel FL-BP algorithm based on the FL mechanism. The proposed FL-BP identified the erroneous channel LLRs and analyzed their functional behavior through the preprocessing genie-aided BP decoder to improve the BLER performance. Based on the Monte Carlo simulation analysis, we considered a list of the bit indices (i.e., the FL) that were more susceptible to errors. FL-BP employed the FL to identify the erroneous channel LLRs and corrected them for the next BP decoding attempt. The preprocessed FL improved the error-correction performance with minimal flipping attempts and thus lowered the decoding clock cycles. The simulation results showed a better BLER performance with minimal computational complexity for the proposed FL-BP compared with the BP flipping algorithms. In more detail, FL-BP had a gain of up to 0.7 dB and 0.4 dB at low-to-moderate SNRs (i.e., 1 to 2 dB) compared with the conventional BP and CA-SCL-4 algorithms, respectively. The computational complexity of FL-BP was competitive with that of conventional BP decoding at high SNRs and more sensible at SNR = 1 and 2 dB compared with the rest of the BP flip decoding methods. In the future, this work can be extended to the identification of multiple erroneous channel LLRs.