Article

Threshold Successive Cancellation Flip Decoding Algorithm for Polar Codes: Design and Performance

1 School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
2 University of Chinese Academy of Sciences, Academy of Mathematics and Systems Science, CAS, Beijing 100190, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(6), 626; https://doi.org/10.3390/e27060626
Submission received: 11 May 2025 / Revised: 5 June 2025 / Accepted: 8 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Network Information Theory and Its Applications)

Abstract

In this paper, we propose the threshold successive cancellation flip (Th-SCF) decoding algorithm for polar codes, which enhances the performance of the SC decoder while maintaining low complexity. Theoretical analysis reveals that Th-SCF asymptotically delays the first error position (FEP, the first position at which the SC decoder fails) with probability 1, ensuring high decoding performance. Simulation results show that the Th-SCF algorithm achieves performance comparable to the dynamic SC flip (D-SCF) algorithm, but with reduced complexity, since it eliminates the need for sorting operations. A key contribution of this work is the rigorous theoretical framework supporting the Th-SCF algorithm, distinguishing it from existing SC flip (SCF) decoding methods. This theoretical foundation not only explains the performance improvements but also provides insights into the underlying mechanisms of flipping. The proposed Th-SCF algorithm demonstrates strong performance across a wide range of code lengths and rates, and its performance remains stable within a certain threshold range, indicating its practical applicability in real-world communication systems. These results offer valuable perspectives for the design of efficient flip decoding strategies in 5G and future networks.

1. Introduction

Polar codes, introduced by Arıkan in [1], are groundbreaking capacity-achieving codes that have fundamentally reshaped the field of coding theory. Their achievement of channel capacity through explicit construction has led to widespread practical applications, with one prominent example being their adoption in 5G control channels [2], which has spurred advancements in decoding techniques.
Among the various decoding methods for polar codes, successive cancellation (SC) decoding, initially proposed by Arıkan [1], is known for its simplicity and effectiveness. However, the performance of the SC decoder is limited for finite code lengths. To improve this, several enhancements have been introduced, such as log-likelihood ratio-based successive cancellation list (LLR-based SCL) decoding and cyclic redundancy check-aided SCL (CA-SCL) decoding [3,4,5], which improve performance by integrating additional information during the decoding process.
An alternative to standard SCL decoders is the SC flip (SCF) algorithm, initially introduced in Figure 5 of [6] and later enhanced in [7]. In [7], the authors significantly reduced the search space of unreliable bits by introducing the concept of the critical set, which consists of the first information bits of the rate-1 subblocks (R1 nodes) obtained from decomposing the complete code tree of a polar code. The authors demonstrated that incorrectly decoded bits are highly likely to be found within the critical set, providing a solid foundation for the design of the algorithm. While the critical set SCF algorithm (Algorithm 2 of ref. [7]) also allows multiple bits to be flipped per iteration, it employs a progressive, multi-level bit-flipping strategy, setting it apart from the D-SCF algorithm [8], which flips multiple potentially erroneous bits simultaneously in one iteration.
The SCF decoder improves upon the basic SC decoder but still falls short compared to SCL decoders with moderate list sizes [8]. To bridge this gap, the D-SCF algorithm [9] extends SCF by combining bit reliability with its position in the information set. While the multi-flip strategy in the D-SCF algorithm enhances performance by flipping multiple bits per iteration, its increased complexity limits its practical applicability [9].
The algorithm proposed in [10] also employs a threshold-based approach similar to ours, but the key innovation of our proposed method lies in its solid theoretical foundation. While previous works primarily relied on experimental observations—such as the significant difference in LLR magnitudes between erroneous and correct bits—our approach provides a rigorous theoretical explanation for the effectiveness of the threshold-based SCF algorithm in correcting errors in SC decoding. This theoretical framework offers a comprehensive evaluation of the performance of the Th-SCF algorithm, distinguishing it from earlier strategies.
Given that point-to-point communication forms the theoretical foundation of network information theory [11,12,13,14,15], enhancing classical decoding techniques (such as successive cancellation (SC) decoding for polar codes) with solid theoretical guarantees is essential. Although SCF decoding algorithms have shown promising empirical results, their development has been hindered by the absence of a rigorous theoretical foundation, as most approaches rely heavily on simulations. Several key questions remain unanswered, and resolving them will provide crucial insights into the underlying mechanisms of the flip algorithm. Specifically, how does the flipping technique correct errors in SC decoding, and how effective is it in doing so? This work seeks to address these questions, which are essential for the design of practical and efficient flip decoding algorithms.
In this work, we propose a threshold SCF (Th-SCF) decoding algorithm with provable theoretical properties. The key contributions of this work are as follows:
  • We prove that, asymptotically, the Th-SCF algorithm delays the first error position (FEP) with probability 1 (Theorem 1). This result demonstrates that the Th-SCF algorithm effectively improves the SC decoding performance by delaying the FEP, leading to a substantial enhancement in error correction efficiency.
  • We propose a novel flip algorithm called the Th-SCF algorithm (Algorithm 1), based on our theoretical analysis (Theorem 1), which achieves performance comparable to the D-SCF algorithm using a single bit-flip for CRC-aided polar codes. This approach not only enhances error-correction performance with provable theoretical guarantees but also ensures practical implementation feasibility.
The remainder of the paper is structured as follows: Section 2 provides an overview of polar codes and the SC/SCL decoding algorithms. In Section 3, we derive the delay probability for the first error position (FEP) in the Th-SCF algorithm. Section 4 presents simulation results that demonstrate the performance of the proposed algorithm. Finally, Section 5 concludes the paper.
Notation Conventions: This paper defines the probability density function (PDF) of $X$ as $P_X(x)$, with $P(\cdot)$ the probability measure, $Z(W)$ and $I(W)$ representing the Bhattacharyya parameter and symmetric capacity of channel $W$, respectively, and $\operatorname{sgn}(\cdot)$ denoting the sign function. The Gaussian Q-function is defined as $Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-t^2/2}\,dt$, its inverse is denoted by $Q^{-1}(y) \triangleq \{x : Q(x) = y\}$, and the cumulative distribution function (CDF) of the standard normal distribution is expressed as $\Phi(x) = 1 - Q(x)$. Logarithms are expressed in base 2 as $\log_2(\cdot)$, unless otherwise specified. The natural logarithm is denoted by $\ln(\cdot)$. Matrices and vectors are denoted in boldface.

2. Brief Review of the Encoding and Decoding Algorithms of Polar Codes

In this section, we provide a comprehensive overview of the theoretical foundations necessary for understanding and analyzing polar codes and their decoding algorithms. In Section 2.1, we introduce the basic principles and mathematical formulation of polar codes. Section 2.2 reviews the successive cancellation (SC) and successive cancellation list (SCL) decoding algorithms, with a focus on the computation of log-likelihood ratio (LLR) sequences. Finally, we examine key concepts related to SCF decoding, including the min-LLR SCF, D-SCF and simplified D-SCF algorithms, along with their respective implementation strategies in Section 2.3. Together, these discussions establish a rigorous theoretical basis for the performance analysis and algorithmic development presented in subsequent sections.

2.1. Generating Polar Codes—An Overview

Polar codes of length $N = 2^n$ are constructed through the following encoding process:
$$\mathbf{x}_{0:N} = \mathbf{u}_{0:N}\,\mathbf{G}_N, \qquad (1)$$
where $\mathbf{x}_{0:N} = [x_0, \ldots, x_{N-1}]$ represents the encoded bits, $\mathbf{u}_{0:N} = [u_0, \ldots, u_{N-1}]$ denotes the source bits, and $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}^{\otimes n}$ is the generator matrix. Here, $\mathbf{F}^{\otimes n}$ represents the $n$-th Kronecker power of the matrix $\mathbf{F} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, which is the fundamental building block of polar codes, and $\mathbf{B}_N$ is the bit-reversal permutation matrix as described in Equation (70) of ref. [1].
A CRC-aided polar code is an extended version of the polar code [6], constructed using a similar approach to that of standard polar codes.
In Figure 1, $\mathbf{s}_{0:K} = \{s_0, \ldots, s_{K-1}\}$ (the blue-colored sequence) represents the $K$ source bits, while $\mathbf{c}_{0:r} = \{c_0, \ldots, c_{r-1}\}$ (the red-colored sequence) are the $r$ cyclic redundancy check (CRC) bits. These bits are then combined as $[\mathbf{s}_{0:K}, \mathbf{c}_{0:r}] = [s_0, \ldots, s_{K-1}, c_0, \ldots, c_{r-1}]$ and placed into the information bit set $\mathcal{A}$. The resulting sequence $\mathbf{u}_{0:N}$ (the brown-colored sequence) is then encoded using the method shown in Equation (1) to generate the encoded codeword $\mathbf{x}_{0:N}$ (the sequence in purple). The notation $(N, K + r)$ is used in this work to indicate a polar code of length $N$, with $K$ information bits and $r$ additional CRC bits.
Channel polarization is a process by which the channel is transformed into a set of channels that are either noiseless or fully noisy. To achieve reliable communication, the $K$ most reliable bit positions (out of the $N$ total positions, indexed from 0 to $N-1$) are selected to carry information bits. The set of information bits is denoted by $\mathcal{A}$, while the remaining $N - K$ indices are reserved for frozen bits, which are set to fixed values (typically zero). The rate of the code is given by $R = K/N$, where $K$ is the number of information bits and $N$ is the total code length.
The encoded bits $\{x_i\}_{i=0}^{N-1}$ are then mapped to binary phase shift keying (BPSK) symbols and transmitted over an additive white Gaussian noise (AWGN) channel, i.e., a binary-input AWGN (BI-AWGN) channel denoted by $W$. The received symbols are given by:
$$y_i = 1 - 2x_i + n_i,$$
where $n_i \sim \mathcal{N}(0, \sigma^2)$ represents the Gaussian noise with variance $\sigma^2$. The received signal vector is $\mathbf{y}_{0:N} = [y_0, \ldots, y_{N-1}]$, which is used for decoding at the receiver.
This framework for polar code construction ensures that the code’s performance approaches the channel capacity as the code length $N$ grows, especially when combined with efficient decoding algorithms such as the SC algorithm and its list-based variants.
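To make the encoding chain above concrete, the following sketch builds the transform $\mathbf{x}_{0:N} = \mathbf{u}_{0:N}\mathbf{G}_N$ with $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}^{\otimes n}$ and passes the codeword through the BI-AWGN channel described above. It is a minimal illustrative implementation under our own naming (`polar_transform`, `bpsk_awgn`) with a toy frozen-set choice; it is not the construction used in the paper's simulations.

```python
import numpy as np

def polar_transform(u):
    """Compute x = u * G_N over GF(2), with G_N = B_N * F^{(x) n}.

    The bit-reversal reordering accounts for B_N; the butterfly loop below
    is equivalent to multiplication by the Kronecker power F^{(x) n}.
    """
    N = len(u)
    n = int(np.log2(N))
    rev = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    x = u[rev].copy()
    step = 1
    while step < N:
        for start in range(0, N, 2 * step):
            for k in range(start, start + step):
                x[k] ^= x[k + step]   # upper branch carries the XOR of the pair
        step *= 2
    return x

def bpsk_awgn(x, sigma):
    """BPSK mapping y_i = 1 - 2*x_i + n_i with n_i ~ N(0, sigma^2)."""
    return 1.0 - 2.0 * x + sigma * np.random.randn(len(x))

# Toy example: N = 8, four information bits placed on assumed-reliable positions.
N, sigma = 8, 0.8
info_set = [3, 5, 6, 7]                       # illustrative choice only
u = np.zeros(N, dtype=int)
u[info_set] = np.random.randint(0, 2, len(info_set))
y = bpsk_awgn(polar_transform(u), sigma)
channel_llrs = 2.0 * y / sigma ** 2           # L(y_i), the SC decoder's channel input
```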

2.2. SC/SCL-Based Decoding for Polar Codes

In this section, we provide an overview of the SC decoding process, which is a key decoding method for polar codes. We begin by explaining how the LLRs are computed. Next, we describe the recursive f- and g-functions used in SC decoding to efficiently recover the codeword. We then introduce SCL decoding, an enhancement to SC decoding that maintains multiple candidate paths, improving error correction performance, especially for finite block lengths. Finally, we present the workflow of the CRC method, which increases the probability of finding the correct decoding result, thus boosting the finite length performance of SC decoding.
  • LLR Calculation and the f-function
    Denote the LLR for a received symbol $y_i$ as $L(y_i) = 2y_i/\sigma^2$, with $\sigma^2$ the noise variance. For a length-2 polar code, the bits $u_{0:2} = [u_0, u_1]$ are decoded as follows:
    To compute the LLR of $u_0$, we use the min-sum approximation of the f-function, as given in Equation (10) of ref. [16]:
    $$L(u_0) = \operatorname{sgn}(L(y_0)) \cdot \operatorname{sgn}(L(y_1)) \cdot \min\{|L(y_0)|, |L(y_1)|\}.$$
    Then a hard decision on $L(u_0)$ is made to obtain the estimate $\hat{u}_0$.
  • The g-function and Update of $L(u_1)$
    Next, the g-function updates the LLR for the second bit $u_1$:
    $$L(u_1) = (1 - 2\hat{u}_0)\, L(y_0) + L(y_1),$$
    where $\hat{u}_0$ is the decision made for $u_0$ in the previous step. After updating $L(u_1)$, a hard decision is made to obtain the estimate $\hat{u}_1$.
  • Recursive f- and g-Functions in SC Decoding
    The successive cancellation (SC) decoder recursively applies the f- and g-functions to estimate the information bits of a polar code. For a code of length $N = 2^n$, the decoder produces an output vector $\hat{\mathbf{u}}_{0:N} = [\hat{u}_0, \ldots, \hat{u}_{N-1}]$. Each bit $\hat{u}_i$ is decoded sequentially based on the previously decoded bits $\hat{\mathbf{u}}_{0:i} = [\hat{u}_0, \ldots, \hat{u}_{i-1}]$ according to the following decision rule for all $i \in \{0, \ldots, N-1\}$:
    $$\hat{u}_i = \begin{cases} 0, & \text{if } L(\mathbf{y}_{0:N}, \hat{\mathbf{u}}_{0:i} \mid u_i) \ge 0 \text{ and } i \in \mathcal{A}, \\ 1, & \text{if } L(\mathbf{y}_{0:N}, \hat{\mathbf{u}}_{0:i} \mid u_i) < 0 \text{ and } i \in \mathcal{A}, \\ u_i, & \text{if } i \in \mathcal{A}^c, \end{cases}$$
    where $\mathcal{A}$ denotes the set of information bits, and $\mathcal{A}^c$ represents the set of frozen bits. The function $L(\mathbf{y}_{0:N}, \hat{\mathbf{u}}_{0:i} \mid u_i)$, abbreviated as $L_N^{(i)}$, is calculated as:
    $$L_N^{(i)} = \log \frac{W_N^{(i)}\bigl(\mathbf{y}_{0:N}, \hat{\mathbf{u}}_{0:i} \mid u_i = 0\bigr)}{W_N^{(i)}\bigl(\mathbf{y}_{0:N}, \hat{\mathbf{u}}_{0:i} \mid u_i = 1\bigr)},$$
    where $W_N^{(i)}$ represents the $i$-th bit channel, and $L_N^{(i)}$ is referred to as the LLR for the information bit $u_i$, as defined in Equation (5) of ref. [1]. (A short code sketch of the f- and g-updates is given at the end of this subsection.)
  • Successive Cancellation List (SCL) Decoding
    Building upon the SC decoding framework, successive cancellation list (SCL) decoding enhances performance by maintaining a list of candidate decoding paths. Instead of making a hard decision at each information bit, the decoder explores both possible bit values and retains the L most reliable decoding paths based on their path metrics. This list-based approach allows the decoder to track multiple hypotheses in parallel and defer the final decision until all bits have been processed.
    At each information bit position, the decoder recursively updates the list by expanding each path into two candidates and selecting the L best ones according to a reliability criterion. Each path in the list represents a potential decoded sequence, and the final output is typically chosen as the most likely path, often aided by techniques such as cyclic redundancy check (CRC) for path selection [3,4,5].
    This enhancement significantly improves decoding performance over SC decoding, particularly for moderate to long block lengths. By expanding the solution space through a more thorough exploration of candidate paths based on the path metric (PM), it effectively reduces the probability of early decoding errors as demonstrated in Theorem 1 of ref. [4].
  • Adjustments after Integrating CRC (Section IV-A of ref. [6])
    To incorporate CRC into the decoding process, we assume we are given a polar code with rate $\tilde{R} = K/N$ and an information bit set $\tilde{\mathcal{A}}$. A CRC of length $r$ is added to the code to tell us, with high probability, whether an estimate $\hat{\mathbf{u}}_{0:N} = [\hat{u}_0, \hat{u}_1, \ldots, \hat{u}_{N-1}]$ obtained from the SC decoder is a valid codeword or not.
    To account for the added CRC, the rate of the polar code is effectively increased to $R = \tilde{R} + r/N = (K + r)/N$. This ensures that the overall information rate remains unchanged. In practice, the set of information bits $\tilde{\mathcal{A}}$ is extended by adding the $r$ most reliable channel indices from the complementary set $\tilde{\mathcal{A}}^c$, denoted as $\tilde{\mathcal{A}}^c_{r\text{-most}}$. Thus, the new set of information bits becomes $\mathcal{A} = \tilde{\mathcal{A}} \cup \tilde{\mathcal{A}}^c_{r\text{-most}}$.
    This approach ensures that the CRC is incorporated efficiently into the decoding process, improving error detection and providing an effective way to identify valid codeword estimates.
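As referenced above, the min-sum f-update, the g-update, and the per-bit hard decision can be written compactly as below. This is an illustrative single-stage sketch with hypothetical helper names, not the full recursive SC (or SCL) decoder.

```python
import numpy as np

def f_minsum(l_a, l_b):
    """f-function (min-sum form): LLR of the XOR of the two inputs."""
    return np.sign(l_a) * np.sign(l_b) * np.minimum(np.abs(l_a), np.abs(l_b))

def g_update(l_a, l_b, u_hat_a):
    """g-function: LLR of the second bit once the first decision u_hat_a is known."""
    return (1 - 2 * u_hat_a) * l_a + l_b

# Length-2 example with channel LLRs L(y_0), L(y_1); frozen-bit handling omitted.
l_y0, l_y1 = 1.7, -0.4
l_u0 = f_minsum(l_y0, l_y1)
u0_hat = 0 if l_u0 >= 0 else 1
l_u1 = g_update(l_y0, l_y1, u0_hat)
u1_hat = 0 if l_u1 >= 0 else 1
```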

2.3. Min-LLR SCF and D-SCF Decoding of Polar Codes

The min-LLR SCF algorithm (Figure 5 of ref. [6]) improves the performance of SC decoding by flipping the bit corresponding to the smallest absolute LLR value, $|L_N^{(i)}|$. This bit-flipping strategy targets the least reliable bits, which can help reduce errors and improve the overall decoding performance.
In comparison, the D-SCF algorithm (see Algorithm 2 of [9]) utilizes a more sophisticated metric, $M_\alpha(i)$, which combines the reliability of the bit with its position in the information set. This metric is designed to balance decoding performance with computational complexity, improving the decision process by considering both the magnitude of the LLR and the ordering of bits. The metric $M_\alpha(i)$ is computed as follows:
$$M_\alpha(i) = |L_N^{(i)}| + \frac{1}{\alpha} \sum_{j < i,\, j \in \mathcal{A}} \log\Bigl(1 + \exp\bigl(-\alpha\, |L_N^{(j)}|\bigr)\Bigr), \qquad (2)$$
where $\alpha$ is a parameter that controls the influence of the prior bits in the summation, and the sum runs over all previous information bits $j \in \mathcal{A}$ (those that carry information). The parameter $\alpha$ is optimized using Equation (23) of [9].
Some researchers have noted that the metric in the D-SCF algorithm can be simplified without significantly compromising performance [17]. Through numerical experiments, they found that replacing the original metric, i.e., Equation (2), with a simplified version yields performance comparable to that of the original D-SCF decoding algorithm. The simplified metric is expressed as follows:
$$M_\alpha(i) = |L_N^{(i)}| + \sum_{j \in \mathcal{A},\, j < i} J\bigl(|L_N^{(j)}|\bigr), \qquad (3)$$
where $J(\cdot)$ can be approximated according to [17]:
$$J\bigl(|L_N^{(j)}|\bigr) = \begin{cases} 1.5, & \text{if } |L_N^{(j)}| \le 5.0, \\ 0, & \text{otherwise}. \end{cases}$$
To strike a balance between performance improvement and computational complexity, we employ a single-bit flipping strategy per trial in the studies presented in this paper. This approach ensures that the algorithm remains efficient while achieving significant performance gains.
The simplified D-SCF algorithm with the M α ( i ) metric (3) serves as a performance benchmark for evaluating the effectiveness of the new algorithms introduced in the subsequent sections of this work. This comparison allows us to quantify the improvements and assess the trade-offs in decoding performance and complexity.
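For concreteness, the snippet below evaluates the D-SCF metric (2) and its simplified variant (3) for a vector of decision-LLR magnitudes. The function names and the value of α are illustrative placeholders (the paper optimizes α via Equation (23) of [9]); the 1.5/5.0 step follows the definition of J above.

```python
import numpy as np

def dscf_metric(abs_llrs, i, info_mask, alpha=0.3):
    """Original D-SCF metric M_alpha(i): |L_N^(i)| plus a penalty over earlier info bits."""
    prev = abs_llrs[:i][info_mask[:i]]
    return abs_llrs[i] + np.log1p(np.exp(-alpha * prev)).sum() / alpha

def simplified_dscf_metric(abs_llrs, i, info_mask, thr=5.0, step=1.5):
    """Simplified metric: the log-exp penalty is replaced by the step function J."""
    prev = abs_llrs[:i][info_mask[:i]]
    return abs_llrs[i] + step * np.count_nonzero(prev <= thr)

# Toy usage: 8 positions, the odd-indexed ones carrying information.
abs_llrs = np.array([6.1, 0.9, 7.3, 2.4, 5.8, 1.1, 9.0, 3.3])
info_mask = np.array([False, True, False, True, False, True, False, True])
candidates = np.flatnonzero(info_mask)
metrics = [simplified_dscf_metric(abs_llrs, i, info_mask) for i in candidates]
flip_order = candidates[np.argsort(metrics)]   # bits are tried in increasing metric order
```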

3. Th-SCF Decoder and Its Analysis

In this section, we propose the Th-SCF algorithm (Algorithm 1) in Section 3.1, followed by a comparative analysis with existing SCF algorithms in Section 3.2. In Section 3.3, we examine the distribution of the first error position (FEP) and compare the ability of various SCF algorithms to identify the true FEP. Section 3.4 provides a detailed comparison of the complexity across different SCF algorithms. Furthermore, we emphasize the key innovation of the proposed algorithm: it is built on a solid theoretical framework, which not only ensures performance comparable to previous threshold-based methods [10], but also offers the benefit of rigorous theoretical support for the flipping strategy. In Section 3.5, we prove that the Th-SCF algorithm delays the FEP with probability 1 (Theorem 1), providing a theoretical foundation for the performance improvements observed empirically.
For analytical tractability, we adopt the Gaussian approximation (GA) assumption from Section III of [18], which provides a valid approximation of the LLR distribution. Furthermore, we prove Proposition 1, which guarantees high average reliability for all information bits in the asymptotic regime.
Let $\mu_N^{(i)}$ denote the mean of the LLR $L_N^{(i)}$, with $\mu_1^{(1)} = 2/\sigma^2$. The LLR sequence is defined as $\mathbf{L}_{0:N} = [L_N^{(0)}, \ldots, L_N^{(N-1)}]$. We assume that the all-zero codeword $\mathbf{0}_N$ is transmitted, as adopted in Section III of [18], with $\mathbf{0}$ representing an all-zero vector, and adopt an information set $\mathcal{A}$ of size $K$ whose indices $i \in \mathcal{A}$ satisfy $Z(W_N^{(i)}) < 2^{-N^{\beta}}$ for $\beta \in (0, \tfrac{1}{2})$, as introduced in Proposition 18 of [1].
To begin with, we reformulate the GA assumption (see Section III of [18]) into a more convenient form, as summarized in Assumption 1. This assumption provides an effective and analytically tractable approximation of the LLR distribution for information bits, thereby facilitating the subsequent theoretical analysis.
Assumption 1.
Assuming the LLRs of each subchannel follow a Gaussian distribution with mean equal to half the variance, as introduced in Section III of [18], and assuming that $\hat{\mathbf{u}}_{0:i} = \mathbf{u}_{0:i}$ (i.e., $\hat{u}_0 = u_0, \ldots, \hat{u}_{i-1} = u_{i-1}$) when decoding $u_i$, we have:
$$\bigl\{L_N^{(i)} \mid \hat{\mathbf{u}}_{0:i} = \mathbf{u}_{0:i}\bigr\} \sim \mathcal{N}\bigl(\mu_N^{(i)},\, 2\mu_N^{(i)}\bigr),$$
where $\mu_N^{(i)}$ is recursively calculated by:
$$\mu_N^{(i)} = \begin{cases} \phi^{-1}\Bigl(1 - \bigl(1 - \phi\bigl(\mu_{N/2}^{(i/2)}\bigr)\bigr)^2\Bigr), & \text{if } i \text{ is even}, \\ 2\,\mu_{N/2}^{((i-1)/2)}, & \text{if } i \text{ is odd}, \end{cases}$$
with $\phi(x)$ given by:
$$\phi(x) \triangleq 1 - \frac{1}{\sqrt{4\pi x}} \int_{-\infty}^{\infty} \tanh\frac{t}{2}\, e^{-\frac{(t - x)^2}{4x}}\, dt.$$
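For readers who want to reproduce the recursion in Assumption 1 numerically, the sketch below evaluates φ(x) by direct quadrature and inverts it by bisection. Practical code constructions usually rely on closed-form approximations of φ instead, so this is only an illustrative implementation under our own naming and toy parameter ranges.

```python
import numpy as np

def phi(x, num=20001):
    """phi(x) = 1 - (4*pi*x)^(-1/2) * integral of tanh(t/2) * exp(-(t-x)^2/(4x)) dt."""
    half_width = 10.0 * np.sqrt(2.0 * x) + 10.0     # cover the Gaussian mass around t = x
    t = np.linspace(x - half_width, x + half_width, num)
    integrand = np.tanh(t / 2.0) * np.exp(-((t - x) ** 2) / (4.0 * x))
    integral = np.sum(integrand) * (t[1] - t[0])    # simple Riemann sum suffices here
    return 1.0 - integral / np.sqrt(4.0 * np.pi * x)

def phi_inv(y, lo=1e-3, hi=500.0, iters=80):
    """Invert phi by bisection (phi decreases from 1 to 0 as x grows); toy range only."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ga_means(n, sigma2):
    """Means mu_N^(i) of the decision LLRs for N = 2^n under the GA of Assumption 1."""
    mu = np.array([2.0 / sigma2])                   # mean LLR of the underlying channel
    for _ in range(n):
        even = np.array([phi_inv(1.0 - (1.0 - phi(m)) ** 2) for m in mu])
        odd = 2.0 * mu
        mu = np.ravel(np.column_stack((even, odd))) # interleave even/odd bit indices
    return mu

mu_8 = ga_means(3, 0.5)                             # example: N = 8, sigma^2 = 0.5
```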
Proposition 1 supports the Th-SCF framework by ensuring asymptotically high average LLR magnitudes for all information bits. According to the statistical properties of the Gaussian distribution, bits with LLR magnitudes below $N^{\beta}$ are unreliable and likely correspond to the true FEP.
Proposition 1.
For any $i \in \mathcal{A}$ and sufficiently large $N$, we obtain for any $\beta \in (0, \tfrac{1}{2})$ and any rate $R < I(W)$ that:
$$\mu_N^{(i)} \ge N^{\beta}. \qquad (4)$$
Proof. 
Using Assumption 1 and the definition of the bit error rate $P_e(W_N^{(i)})$ from Equation (17) of ref. [18], it follows from Proposition 18 of ref. [1] that, for any $i \in \mathcal{A}$ and any rate $R < I(W)$, we have:
$$P_e\bigl(W_N^{(i)}\bigr) = Q\Bigl(\sqrt{\mu_N^{(i)}/2}\Bigr) \le Z\bigl(W_N^{(i)}\bigr) \le 2^{-N^{\beta}},$$
which implies:
$$\mu_N^{(i)} \ge 2\Bigl(Q^{-1}\bigl(2^{-N^{\beta}}\bigr)\Bigr)^2.$$
To prove (4), it suffices to verify the following condition:
$$Q^{-1}\bigl(2^{-N^{\beta}}\bigr) \ge \sqrt{\ln 2^{N^{\beta}}} = \sqrt{N^{\beta} \ln 2}. \qquad (5)$$
This inequality shows that the LLR magnitude of the information bits grows asymptotically as $N^{\beta}$, guaranteeing high reliability for the information bits as $N$ increases.
We now begin the analysis of inequality (5). Starting from the definition of the Q-function and applying the method of integration by parts, we obtain:
$$\frac{1}{x^2}\int_x^{+\infty} e^{-\frac{t^2}{2}}\,dt \ge \int_x^{+\infty} \frac{1}{t^2}\, e^{-\frac{t^2}{2}}\,dt = \frac{1}{x}\,e^{-\frac{x^2}{2}} - \int_x^{+\infty} e^{-\frac{t^2}{2}}\,dt.$$
This leads to:
$$\frac{1}{x}\, e^{-\frac{x^2}{2}} \le \Bigl(1 + \frac{1}{x^2}\Bigr)\int_x^{+\infty} e^{-\frac{t^2}{2}}\,dt,$$
which further implies the following lower bound on the Q-function:
$$Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{+\infty} e^{-\frac{t^2}{2}}\,dt \ge \frac{x}{\sqrt{2\pi}\,(1 + x^2)}\, e^{-\frac{x^2}{2}}.$$
The above bound also appears in Equation (2.1.b) of ref. [19].
For sufficiently large $x$, this bound can be further relaxed as:
$$Q(x) \ge \frac{x}{\sqrt{2\pi}\,(1 + x^2)}\, e^{-\frac{x^2}{2}} \ge \frac{1}{x^2}\, e^{-\frac{x^2}{2}} = e^{-\frac{x^2}{2} - 2\ln x} \ge e^{-x^2},$$
where the middle inequality holds for sufficiently large $x$ (since then $x^3 \ge \sqrt{2\pi}(1 + x^2)$), and the last inequality follows from the fact that, for sufficiently large $x$, we have $\frac{x^2}{2} \ge 2\ln x$.
Given that $y = Q(x)$, we now invert the relationship to express $x$ as $x = Q^{-1}(y)$. For sufficiently small $y$, which implies that $x$ is sufficiently large, we have the following inequality:
$$-\ln y \le x^2 = \bigl(Q^{-1}(y)\bigr)^2.$$
Therefore, for sufficiently small $y$, we can conclude:
$$Q^{-1}(y) \ge \sqrt{-\ln y}.$$
Substituting $y = 2^{-N^{\beta}}$, we obtain, for sufficiently large $N$:
$$Q^{-1}\bigl(2^{-N^{\beta}}\bigr) \ge \sqrt{\ln 2^{N^{\beta}}} = \sqrt{N^{\beta} \ln 2}.$$
Hence, we derive that for any $i \in \mathcal{A}$ and sufficiently large $N$, the following holds:
$$\mu_N^{(i)} \ge 2\Bigl(Q^{-1}\bigl(2^{-N^{\beta}}\bigr)\Bigr)^2 \ge 2 N^{\beta} \ln 2,$$
which implies that
$$\mu_N^{(i)} \ge (2 \ln 2)\, N^{\beta} > N^{\beta}.$$
Thus, the result is established. □
Proposition 1 serves as the foundation for our subsequent analysis by highlighting a key asymptotic property: the LLRs of all information bits in a polar code become increasingly large, indicating high reliability as the block length grows. This insight allows us to establish a practical criterion—if the LLR magnitude of a particular information bit falls below N β , it can be considered unreliable in the asymptotic sense. Consequently, such a bit is more likely to correspond to the true location of a first error position (FEP), and flipping its hard decision result has the potential to significantly enhance decoding performance.
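As an informal numerical check of the chain of inequalities above (not a substitute for the proof), one can compare $Q^{-1}(2^{-N^{\beta}})$ with $\sqrt{N^{\beta}\ln 2}$ and the resulting bound on $\mu_N^{(i)}$ with $N^{\beta}$ for moderate parameters; the values of $N$ and $\beta$ below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

N, beta = 1024, 0.4
t = N ** beta                                  # N^beta = 16 for these values
q_inv = norm.isf(2.0 ** (-t))                  # Q^{-1}(2^{-N^beta}), the inverse survival function

print(q_inv >= np.sqrt(t * np.log(2)))         # inequality (5): True (about 4.17 vs 3.33)
print(2 * q_inv ** 2 >= 2 * np.log(2) * t)     # lower bound on mu_N^(i): True
print(2 * np.log(2) * t > t)                   # (2 ln 2) N^beta > N^beta: True
```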

3.1. Proposed Th-SCF Decoding Algorithm

This section presents the Threshold SCF (Th-SCF) algorithm (Algorithm 1), which leverages the result from Proposition 1, ensuring asymptotically high reliability for all information bits i A .
In Algorithm 1, the function SCDecoder ( y 0 : N , A , k ) executes the standard successive cancellation (SC) decoding process. When k = 0 , it performs regular SC decoding without modification. When k > 0 , the decoder proceeds as usual but intentionally flips the decoding result of the information bit u k , simulating a correction at a potentially erroneous position.
Algorithm 1 Th-SCF Decoding Algorithm
Input: Original received symbols $\mathbf{y}_{0:N}$, information set $\mathcal{A}$, the maximum number of attempts $T$, predefined threshold $T_f$
Output: The decoded codeword $\hat{\mathbf{u}}_{0:N}$
  1: $(\hat{\mathbf{u}}_{0:N}, \mathbf{L}_{0:N})$ ← SCDecoder($\mathbf{y}_{0:N}$, $\mathcal{A}$, 0);
  2: if CRCCheck($\hat{\mathbf{u}}_{0:N}$) = fail then
  3:     $\mathcal{S}$ ← FlipSelector($\mathbf{L}_{0:N}$, $T_f$, $\mathcal{A}$);
  4:     for $t = 1$ to $T$ do
  5:         $\hat{\mathbf{u}}_{0:N}$ ← SCDecoder($\mathbf{y}_{0:N}$, $\mathcal{A}$, $i_t$);
  6:         if CRCCheck($\hat{\mathbf{u}}_{0:N}$) = success then
  7:             return $\hat{\mathbf{u}}_{0:N}$;
  8:         end if
  9:     end for
 10: end if
The function FlipSelector($\mathbf{L}_{0:N}$, $T_f$, $\mathcal{A}$) is responsible for identifying a candidate set of error-prone bits based on their reliability metrics. Specifically, it selects the first $T$ indices $\mathcal{S} = \{i_1 < i_2 < \cdots < i_T\} \subseteq \mathcal{A}$ such that the magnitude of the LLR, $|L_N^{(i_t)}|$, for each index is less than or equal to a predefined threshold $T_f$. This threshold is chosen to satisfy the condition $T_f \le C N^{\beta}$, where $C \in (0, 1)$ is a constant and $\beta \in (0, \tfrac{1}{2})$ is an empirically optimized parameter, typically determined through offline Monte Carlo simulations. By quantifying bit reliability in this way, the algorithm effectively isolates those information bits that are most susceptible to decoding errors, thereby guiding the SC flip decoding process toward more promising correction candidates.
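The selection rule of FlipSelector and the outer loop of Algorithm 1 can be sketched as follows. The SC decoder and the CRC check are left abstract here (sc_decode and crc_ok are placeholder callables, with sc_decode(y, info_set, flip_idx) assumed to return the decisions and their LLRs); only the threshold-based candidate selection mirrors the description above.

```python
def flip_selector(llrs, info_set, t_f, t_max):
    """Return the first t_max information indices whose |LLR| is at most T_f.

    Candidates are taken in natural decoding order, so no sorting is needed.
    """
    return [i for i in info_set if abs(llrs[i]) <= t_f][:t_max]

def th_scf_decode(y, info_set, t_f, t_max, sc_decode, crc_ok):
    """Th-SCF outer loop (Algorithm 1); flip_idx=None means plain SC decoding."""
    u_hat, llrs = sc_decode(y, info_set, None)
    if crc_ok(u_hat):
        return u_hat
    for idx in flip_selector(llrs, info_set, t_f, t_max):
        u_hat, _ = sc_decode(y, info_set, idx)   # re-run SC, flipping the decision at idx
        if crc_ok(u_hat):
            return u_hat
    return u_hat                                 # all attempts failed; report the last one
```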

3.2. Comparative Analysis with Existing SCF Algorithms

In this section, we provide an in-depth analysis of the threshold SCF (Th-SCF) algorithm (Algorithm 1) and compare it with the min-LLR SCF algorithm (see Figure 5 of ref. [6]), the simplified D-SCF algorithm [17], and the improved SCF algorithm proposed in Section III of ref. [10].
The min-LLR SCF algorithm selects the T candidate indices with the smallest LLR magnitudes as described in Figure 5 of ref. [6], but it often struggles to accurately identify the true first error position (FEP) bits due to the sequential nature of the SC decoding process as observed in Section III of ref. [10]. The simplified D-SCF algorithm [17] enhances the min-LLR SCF algorithm by incorporating both the reliability of the bits and their positions in the information set A . This improvement leads to a significant performance boost while keeping the computational complexity on par with the min-LLR SCF algorithm [17]. Specifically, this algorithm selects the bit indices corresponding to the T smallest values of M α ( i ) , as defined in (3), to construct the set of bits to flip.
Compared to the min-LLR SCF and simplified D-SCF decoding algorithms, the Th-SCF algorithm offers a more straightforward approach to determining the bit positions that need to be flipped. As outlined in Algorithm 1, the core idea of the Th-SCF algorithm can be summarized as follows: Proposition 1 shows that the average LLR magnitudes of all information bits are greater than $N^{\beta}$. Based on this insight, we define a threshold criterion: when the LLR magnitude of a bit falls below a predefined threshold $T_f$, this reliably signals that the bit is unreliable and should be flipped. This approach eliminates the need for complex sorting operations, which are typically computationally expensive in traditional SCF methods.
The key contribution of our approach, compared to existing SCF decoding algorithms, lies in the rigorous theoretical framework underpinning the Th-SCF algorithm. While similar threshold-based techniques were proposed in [10], they mainly relied on experimental observations to select flipped bits and set the flipping threshold. In contrast, we rigorously prove that our algorithm can asymptotically identify the first error position (FEP) with probability 1 after a single threshold flip, providing a robust theoretical guarantee that was previously missing. This solid theoretical foundation significantly enhances the potential for improving SC decoding performance.
The critical set SCF algorithm [7] is another approach that determines the critical set offline and stores it for future use. While effective, this method increases storage requirements. In contrast, our threshold-based SCF algorithm selects the flip set by comparing the LLR magnitude of each information bit to a predefined threshold T f . A significant advantage of our approach lies in its rigorous theoretical foundation, which provides a clear explanation for its performance improvements. Unlike the critical set method [7] and other experimental flip-based approaches [10], our algorithm not only enhances SC decoding performance but also offers valuable insights into designing more effective decoding strategies backed by solid theoretical guarantees.
For the first time, we provide a rigorous analysis of the LLR magnitude difference between correctly and incorrectly decoded bits. As demonstrated in Proposition 1, asymptotically, the average LLR magnitude of each information bit is high. Under the Gaussian approximation (GA) assumption (Assumption 1), the LLR of each information bit follows a Gaussian distribution, where the mean is equal to half the variance. This insight leads us to conclude that, when decoding errors occur, the LLR magnitudes of the erroneous bits are small, which is consistent with the observations in previous work (see Figure 10 of ref. [10]).

3.3. FEP Distribution and the Capability of SCF Algorithms to Identify the True FEP

To more clearly demonstrate the advantages of our proposed algorithm over existing SCF algorithms, such as the min-LLR SCF algorithm (Figure 5 of ref. [6]), we have plotted the distribution of the first error position (FEP) in the SC decoder across different signal-to-noise ratios (SNRs). We also compare the probability of correctly identifying the FEP for both the proposed algorithm and the min-LLR SCF algorithm, with the maximum number of flips limited to $T = 10$. This comparison further reinforces the benefits of our proposed algorithm.
As an example, we consider a $(1024, 512 + 12)$ polar code with the check polynomial $g(x) = x^{13} + x^{11} + x^3 + x^2 + 1$. The information bits are generated using the GA method, with the design SNR set to 2.5 dB [18]. We then statistically analyze the probability distribution of the first error position (FEP) of the SC decoder across different SNR values; the decoding process continues until 400 errors are detected at each SNR setting.
The results from Figure 2 indicate that as the SNR increases, the distribution of the FEP across the information set $\mathcal{A}$ becomes more uniform, a trend also observed in Figure 13 of ref. [10], making it more difficult to accurately identify the FEP.
To better highlight the superiority of the proposed algorithm, we compare its ability to identify the true FEP with that of the min-LLR SCF algorithm across different code lengths and SNRs. A higher probability of correctly identifying the true FEP signifies a greater potential for performance improvement.
To gain a clearer understanding of the ability of different SCF algorithms to identify the true FEP, we formally define the flip sets generated by each SCF algorithm, as well as the corresponding probabilities of correctly identifying the true FEP.
For the min-LLR SCF algorithm, the flip set $\mathcal{F}_{\min\text{-}\mathrm{LLR}}$ is the set of indices $i \in \mathcal{A}$ corresponding to the $T$ smallest LLR magnitudes $|L_N^{(i)}|$, formally defined as:
$$\mathcal{F}_{\min\text{-}\mathrm{LLR}} = \Bigl\{ i_1, \ldots, i_T \;\Big|\; |L_N^{(i_1)}| \le \cdots \le |L_N^{(i_T)}| \le |L_N^{(j)}|,\ \forall j \in \mathcal{A} \setminus \{i_1, \ldots, i_T\} \Bigr\}. \qquad (7)$$
For the proposed Th-SCF algorithm (Algorithm 1), the flip set $\mathcal{F}_{\mathrm{Th\text{-}SCF}}$ consists of the first $T$ indices $\{i_1 < i_2 < \cdots < i_T\} \subseteq \mathcal{A}$ such that, for all $k \in \{1, 2, \ldots, T\}$, the condition $|L_N^{(i_k)}| \le T_f$ is satisfied. This can be expressed as:
$$\mathcal{F}_{\mathrm{Th\text{-}SCF}} = \Bigl\{ i_1, i_2, \ldots, i_T \;\Big|\; i_k \in \mathcal{A},\ \text{the first } T \text{ indices satisfying } |L_N^{(i_k)}| \le T_f \Bigr\}. \qquad (8)$$
In these definitions, A represents the information set, and T f denotes the predefined threshold used in the Th-SCF algorithm. The main distinction between the two algorithms lies in their selection criteria: while the min-LLR SCF algorithm chooses the indices corresponding to the T smallest LLR magnitudes, the Th-SCF algorithm selects the first T indices for which the LLR magnitudes fall below a predefined threshold T f .
We define the effectiveness of different SCF algorithms in identifying the true first error position (FEP) as the probability that the true FEP is included in the flip set. This can be formally expressed as:
$$P_{\mathrm{find}} = P\bigl(\mathrm{idx}_{\mathrm{FEP}} \in \mathcal{F}_{\mathrm{SCF}}\bigr), \qquad (9)$$
where F SCF represents the flip set generated by the SCF algorithm, either F min - LLR from Equation (7) or F Th - SCF from Equation (8), and idx FEP refers to the index of the true first error position in the SC decoder. This probability quantifies the effectiveness of the SCF algorithm in successfully identifying the true FEP after the flipping process.
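The difference between the flip sets (7) and (8), and the way $P_{\mathrm{find}}$ in (9) would be estimated, can be made explicit as follows (hypothetical helper names; the true FEP index for each failed frame would come from a genie-aided SC run in a Monte Carlo estimate).

```python
import numpy as np

def flip_set_min_llr(abs_llrs, info_set, t_max):
    """F_min-LLR in (7): the T information indices with the smallest |L_N^(i)| (requires sorting)."""
    info = np.asarray(info_set)
    return set(info[np.argsort(abs_llrs[info])[:t_max]])

def flip_set_threshold(abs_llrs, info_set, t_f, t_max):
    """F_Th-SCF in (8): the first T information indices with |L_N^(i)| <= T_f (a single scan)."""
    return set([i for i in info_set if abs_llrs[i] <= t_f][:t_max])

# P_find in (9) is then estimated over many failed SC frames as
#   p_find = (number of frames with idx_fep in the flip set) / (number of failed frames).
```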
To better highlight the advantages of the proposed algorithm, we focus on two code lengths, $N = 1024$ and $N = 4096$, with a CRC length of 12, a check polynomial $g(x) = x^{13} + x^{11} + x^3 + x^2 + 1$, and a code rate of $1/2$, while the maximum number of flipping attempts is $T_{\max} = 10$. The information set $\mathcal{A}$ is generated using the GA method [18], and the design SNR for each code length is selected to ensure that the SC decoder achieves optimal performance at a block error rate (BLER) of $10^{-2}$, as recommended in [6]. Specifically, the design SNRs for the GA method are set to 2.5 dB for $N = 1024$ and 2.15 dB for $N = 4096$.
Figure 3 visualizes the probability defined in Equation (9). The two solid lines illustrate the variation in the probability of the Th-SCF algorithm correctly identifying the true FEP with SNR for different code lengths, whereas the two dashed lines represent the corresponding probabilities for the min-LLR SCF algorithm.
From Figure 3, we can observe that, for various code lengths and SNRs, the proposed Th-SCF algorithm outperforms the min-LLR SCF algorithm in terms of identifying the true FEP. Moreover, as the code length N increases, the advantage of the proposed algorithm over the min-LLR SCF algorithm becomes more significant. This trend helps explain the improved performance observed in the experimental results.

3.4. Complexity Analysis of the Proposed Th-SCF Algorithm and the Existing SCF Algorithms

In this section, we derive the worst-case and average-case complexities of the SCF algorithm [6], simplified D-SCF algorithm [17], as well as the proposed Th-SCF algorithm.
To provide a clearer understanding of how worst-case and average complexities are calculated, we restate the conclusions from Propositions 1 and 2 of ref. [6] as follows:
Proposition 2
(Worst-case complexity of the min-LLR SCF algorithm). The worst-case computational complexity of the min-LLR SCF algorithm (see Figure 5 of ref. [6]) is $O(T N \log_2 N)$, where $T$ represents the maximum number of flipping attempts.
Proposition 3
(Average complexity of the min-LLR SCF algorithm). Let $P_e(R, \mathrm{SNR})$ denote the block error rate of a polar code of rate $R$ at a given SNR. The average computational complexity of the min-LLR SCF algorithm is $O\bigl(N \log_2 N\, (1 + T \cdot P_e(R, \mathrm{SNR}))\bigr)$, where $R = (K + r)/N$, with $r$ being the CRC length.
As noted in [17], the performance of the simplified D-SCF algorithm is essentially comparable to that of the D-SCF algorithm in various scenarios. Therefore, we use the simplified D-SCF algorithm as the focus of our subsequent experiments. Since the computation method of the simplified D-SCF algorithm closely matches that of the min-LLR SCF algorithm [17], their computational complexities are almost the same. Moreover, based on the operational principles of the Th-SCF algorithm (Algorithm 1), we observe that its worst-case complexity is also similar to that of the min-LLR SCF algorithm.
Although the proposed algorithm does not exhibit a significant advantage in worst-case complexity compared to the min-LLR SCF and simplified D-SCF algorithms, it demonstrates notable improvements in terms of average complexity. Specifically, we find that the average computational complexity of the SCF algorithm is closely related to the average number of iterations. For both the min-LLR SCF and simplified D-SCF algorithms, each iteration involves SC decoding, CRC decoding, and an additional step of selecting and sorting the flipping indices based on a specific metric, such as the LLR magnitudes or the metric M α ( i ) , as defined in Equation (3), when CRC decoding fails. In contrast, an iteration of the proposed Th-SCF algorithm consists of SC decoding, CRC decoding, and a simpler comparison step to check if the LLR magnitude falls below the predefined threshold T f . This simplified procedure reduces the computational complexity, making the proposed algorithm more efficient in terms of average performance.
As a result, the complexity of our Th-SCF algorithm, which encompasses the computational and sorting complexities, is lower than that of existing SCF algorithms. It is worth noting that previous studies have shown that threshold-based strategies can achieve lower average complexity compared to min-LLR SCF decoders as observed in Figures 19 and 20 of ref. [10]. A more detailed comparison of the complexity of various SCF algorithms will be provided in Section 4.2, offering a more intuitive illustration of the advantages of our proposed algorithm over other flip approaches.
It should also be noted that the flipping method provided in [10] shares similarities with our approach. However, our proposed Th-SCF decoding algorithm is distinguished by its rigorous theoretical foundation, as established in Proposition 1 and Theorem 1. In Section 3.5, we further demonstrate that, in the asymptotic case, a single threshold flip can identify the true first error position (FEP) of the SC decoder with probability 1, thereby improving the performance of the decoder with high probability. This marks the first theoretical guarantee for the effectiveness of an SCF decoding algorithm, providing a solid and principled basis for its practical deployment.

3.5. Theoretical Analysis of the Th-SCF Algorithm

In this section, we analyze the probability of delaying the first error position (FEP) after one threshold flip in the Th-SCF algorithm. Our analysis shows that, asymptotically, the FEP is delayed with probability 1 (Theorem 1), thus paving the way for the development of more efficient flip algorithms.
Let $\hat{\mathbf{u}}_{0:N}^{(0)}$ and $\hat{\mathbf{u}}_{0:N}^{(1)}$ denote the SC and Th-SCF decoding results, respectively. Let $\tau_0$ and $\tau_1$ represent their corresponding FEPs. We define the error events as follows:
$$\{\tau_0 = i\} = \bigl\{\hat{u}_i^{(0)} = 1,\ \hat{\mathbf{u}}_{0:i}^{(0)} = \mathbf{0}\bigr\}, \qquad \{\tau_1 = i\} = \bigl\{\hat{u}_i^{(1)} = 1,\ \hat{\mathbf{u}}_{0:i}^{(1)} = \mathbf{0}\bigr\}.$$
This section focuses on the asymptotic probabilities of the FEP being delayed (10), unchanged (11), or advanced (12) following an SC failure event { τ 0 < N } . Specifically, we are interested in the following probabilities:
$$P(\tau_1 > \tau_0 \mid \tau_0 < N), \qquad (10)$$
$$P(\tau_1 = \tau_0 \mid \tau_0 < N), \qquad (11)$$
$$P(\tau_1 < \tau_0 \mid \tau_0 < N). \qquad (12)$$
The events in question correspond to the scenarios where the FEP is delayed, unchanged, or advanced after a threshold flip, respectively.
By applying the law of total probability, we derive the following for $T_f \le C N^{\beta}$ (with $0 < C < 1$ a constant) and any Borel set $B$, where $i, j \in \mathcal{A}$ and $j < i$:
$$P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0\bigr) = P\bigl(L_N^{(i)} \in B,\ L_N^{(j)} > T_f \mid \mathbf{L}_{0:i} > 0\bigr) + P\bigl(L_N^{(i)} \in B,\ 0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr),$$
where $\mathbf{L}_{0:i} > 0$ indicates that $L_N^{(0)} > 0, \ldots, L_N^{(i-1)} > 0$.
By splitting this probability into the two cases $L_N^{(j)} > T_f$ and $0 < L_N^{(j)} \le T_f$, and applying the conditional probability formula, we derive the following result:
$$\begin{aligned} P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0\bigr) &= P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr)\, P\bigl(L_N^{(j)} > T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &\quad + P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ 0 < L_N^{(j)} \le T_f\bigr)\, P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &\le P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr) + P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr). \end{aligned}$$
Similarly, we can derive a lower bound:
$$\begin{aligned} P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0\bigr) &\ge P\bigl(L_N^{(i)} \in B,\ L_N^{(j)} > T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &= P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr)\, P\bigl(L_N^{(j)} > T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &= P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr)\, \bigl(1 - P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr)\bigr) \\ &= P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr) \\ &\quad - P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr)\, P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &\ge P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr) - P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr). \end{aligned}$$
Thus, we obtain for sufficiently large $N$ and any Borel set $B$, with $T_f \le C N^{\beta}$, that:
$$\begin{aligned} \bigl| P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0\bigr) - P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr) \bigr| &\le P\bigl(0 < L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr) \\ &\le P\bigl(L_N^{(j)} \le T_f \mid \mathbf{L}_{0:i} > 0\bigr) = \Phi\!\left(\frac{T_f - \mu_N^{(j)}}{\sqrt{2\mu_N^{(j)}}}\right) \\ &= Q\!\left(\frac{\mu_N^{(j)} - T_f}{\sqrt{2\mu_N^{(j)}}}\right) \overset{(a)}{\le} Q\!\left(\frac{(1 - C)\, N^{\beta/2}}{\sqrt{2}}\right) \to 0, \end{aligned}$$
where
$$\frac{\mu_N^{(j)} - T_f}{\sqrt{2\mu_N^{(j)}}} = \sqrt{\frac{\mu_N^{(j)}}{2}} - \frac{T_f}{\sqrt{2\mu_N^{(j)}}} \ge \frac{N^{\beta/2}}{\sqrt{2}} - \frac{C\, N^{\beta/2}}{\sqrt{2}},$$
and the condition $T_f \le C N^{\beta}$, together with Proposition 1, leads to $(a)$.
This allows us to conclude that for any $i, j \in \mathcal{A}$ with $j < i$ and any Borel set $B$, for sufficiently large $N$ and any $T_f \le C N^{\beta}$ with $0 < C < 1$, the following result holds:
$$P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr) \approx P\bigl(L_N^{(i)} \in B \mid \mathbf{L}_{0:i} > 0\bigr). \qquad (13)$$
This approximation simplifies our analysis, as it implies that when $N$ is sufficiently large and the threshold condition $T_f \le C N^{\beta}$ is met, the correctly decoded preceding bits have a negligible effect on the decoding probability of the current bit.
The main result of this paper is summarized in the following theorem, which shows that, asymptotically, the Th-SCF algorithm delays the FEP with probability 1, thereby improving the performance of the SC decoder.
Theorem 1.
For sufficiently large N, some β ( 0 , 1 2 ) , we obtain for any T f C N β with 0 < C < 1 that:
lim N P ( τ 1 > τ 0 τ 0 < N ) = 1 .
Proof. 
To improve the clarity and structure of our proof, we define two types of errors: Type-I errors and Type-II errors. A Type-I error occurs when bit $i$ is not flipped even though the FEP is at $i$. In contrast, a Type-II error occurs when some $j \in \mathcal{A}$ with $j < i$ is incorrectly flipped while the FEP is at $i$.
Under the hard-decision framework of SC decoding, we establish a relationship between the FEP $\tau_0$ and the LLRs at the hard-decision side ($L_N^{(i)}$ for all $i \in \mathcal{A}$), allowing for efficient error localization without the need for exhaustive search:
$$\{\tau_0 = i\} = \bigl\{\hat{u}_i^{(0)} = 1,\ \hat{\mathbf{u}}_{0:i}^{(0)} = \mathbf{0}\bigr\} = \bigl\{L_N^{(i)} < 0,\ \mathbf{L}_{0:i} > 0\bigr\}.$$
Let $X$ denote the index of the first bit that is flipped. To evaluate the probability that the first error position (FEP) is delayed after a threshold flip, it suffices to bound the complementary probability:
$$\begin{aligned} P(\tau_1 \le \tau_0 \mid \tau_0 < N) &= \frac{P(\tau_1 \le \tau_0,\ \tau_0 < N)}{P(\tau_0 < N)} = \sum_{i \in \mathcal{A}} \frac{P(\tau_1 \le \tau_0,\ \tau_0 = i)}{P(\tau_0 < N)} \\ &= \sum_{i \in \mathcal{A}} \frac{P(\tau_1 \le \tau_0 \mid \tau_0 = i)\, P(\tau_0 = i)}{P(\tau_0 < N)} \overset{(i)}{\le} \sum_{i \in \mathcal{A}} \frac{P(X \ne i \mid \tau_0 = i)\, P(\tau_0 = i)}{P(\tau_0 < N)}, \end{aligned}$$
where
$$\begin{aligned} P(X \ne i \mid \tau_0 = i) &= P\Bigl(\bigl\{\exists\, j < i,\ j \in \mathcal{A},\ \text{s.t.}\ |L_N^{(j)}| < T_f\bigr\} \cup \bigl\{|L_N^{(i)}| \ge T_f\bigr\} \;\Big|\; \tau_0 = i\Bigr) \\ &\le \underbrace{\sum_{j \in \mathcal{A},\, j < i} P\bigl(|L_N^{(j)}| < T_f \mid \tau_0 = i\bigr)}_{\text{Type-II error}} + \underbrace{P\bigl(|L_N^{(i)}| \ge T_f \mid \tau_0 = i\bigr)}_{\text{Type-I error}}, \end{aligned}$$
with the last inequality following from the union bound; $(i)$ holds because, when the FEP is at bit $u_i$ (i.e., $\tau_0 = i$) and the index of the first flipped bit is also $i$ (equivalently, $X = i$), the FEP must be delayed after performing one threshold flip.
If there exists a $j \in \mathcal{A}$ with $j < i$ such that $|L_N^{(j)}| < T_f$, which corresponds to a Type-II error, then, by the law of total probability, we decompose $\mathbf{L}_{0:i}$ into $\mathbf{L}_{0:j}$, $L_N^{(j)}$, and $\mathbf{L}_{j+1:i}$, resulting in:
$$\begin{aligned} P\bigl(|L_N^{(j)}| < T_f \mid \tau_0 = i\bigr) &= 1 - P\bigl(L_N^{(j)} > T_f \mid \tau_0 = i\bigr) = 1 - \frac{P\bigl(L_N^{(j)} > T_f,\ \mathbf{L}_{0:i} > 0,\ L_N^{(i)} < 0\bigr)}{P\bigl(\mathbf{L}_{0:i} > 0,\ L_N^{(i)} < 0\bigr)} \\ &\overset{(b)}{=} 1 - \frac{P\bigl(L_N^{(j)} > T_f \mid \mathbf{L}_{0:j} > 0\bigr)\, P\bigl(L_N^{(i)} < 0 \mid \mathbf{L}_{0:i} > 0,\ L_N^{(j)} > T_f\bigr)}{P\bigl(L_N^{(j)} > 0 \mid \mathbf{L}_{0:j} > 0\bigr)\, P\bigl(L_N^{(i)} < 0 \mid \mathbf{L}_{0:i} > 0\bigr)} \\ &\qquad \times \frac{\prod_{k=j+1}^{i-1} P\bigl(L_N^{(k)} > 0 \mid \mathbf{L}_{0:k} > 0,\ L_N^{(j)} > T_f\bigr)}{\prod_{k=j+1}^{i-1} P\bigl(L_N^{(k)} > 0 \mid \mathbf{L}_{0:k} > 0\bigr)} \\ &\overset{(c)}{\le} 1 - \frac{P\bigl(L_N^{(j)} > T_f \mid \mathbf{L}_{0:j} > 0\bigr)}{P\bigl(L_N^{(j)} > 0 \mid \mathbf{L}_{0:j} > 0\bigr)} \le 1 - P\bigl(L_N^{(j)} > T_f \mid \mathbf{L}_{0:j} > 0\bigr) \\ &= 1 - Q\!\left(\frac{T_f - \mu_N^{(j)}}{\sqrt{2\mu_N^{(j)}}}\right) = Q\!\left(\frac{\mu_N^{(j)} - T_f}{\sqrt{2\mu_N^{(j)}}}\right), \qquad (15) \end{aligned}$$
where $(b)$ follows from the chain rule of conditional probability, while the fact that
$$P\bigl(L_N^{(k)} > 0 \mid \mathbf{L}_{0:k} > 0,\ L_N^{(j)} > T_f\bigr) \approx P\bigl(L_N^{(k)} > 0 \mid \mathbf{L}_{0:k} > 0\bigr)$$
for any $k, j \in \mathcal{A}$ with $j < k$, together with (13), yields $(c)$.
In situations where bit $i$ does not meet the threshold flip criterion, i.e., a Type-I error occurs, it follows that:
$$\begin{aligned} P\bigl(|L_N^{(i)}| \ge T_f \mid \tau_0 = i\bigr) &= \frac{P\bigl(\mathbf{L}_{0:i} > 0,\ L_N^{(i)} < 0,\ L_N^{(i)} \le -T_f\bigr)}{P\bigl(\mathbf{L}_{0:i} > 0,\ L_N^{(i)} < 0\bigr)} = \frac{P\bigl(L_N^{(i)} \le -T_f \mid \mathbf{L}_{0:i} > 0\bigr)\, P\bigl(\mathbf{L}_{0:i} > 0\bigr)}{P\bigl(L_N^{(i)} < 0 \mid \mathbf{L}_{0:i} > 0\bigr)\, P\bigl(\mathbf{L}_{0:i} > 0\bigr)} \\ &= \frac{\Phi\!\left(-\frac{\mu_N^{(i)} + T_f}{\sqrt{2\mu_N^{(i)}}}\right)}{\Phi\!\left(-\sqrt{\frac{\mu_N^{(i)}}{2}}\right)} = \frac{Q\!\left(\frac{\mu_N^{(i)} + T_f}{\sqrt{2\mu_N^{(i)}}}\right)}{Q\!\left(\sqrt{\frac{\mu_N^{(i)}}{2}}\right)}. \qquad (16) \end{aligned}$$
Noting that $T_f \le C N^{\beta}$ and applying Proposition 1, we obtain for all $i, j \in \mathcal{A}$ with $j < i$ that:
$$\frac{\mu_N^{(j)} - T_f}{\sqrt{2\mu_N^{(j)}}} = \sqrt{\frac{\mu_N^{(j)}}{2}} - \frac{T_f}{\sqrt{2\mu_N^{(j)}}} \ge \frac{N^{\beta/2}}{\sqrt{2}} - \frac{C\, N^{\beta/2}}{\sqrt{2}}.$$
By combining the bounds previously established in Equations (15) and (16), we conclude that there exists a constant $0 < b < \beta$ such that the following result holds:
$$\begin{aligned} P(X \ne i \mid \tau_0 = i) &\le \underbrace{\sum_{j \in \mathcal{A},\, j < i} P\bigl(|L_N^{(j)}| < T_f \mid \tau_0 = i\bigr)}_{\text{Type-II error}} + \underbrace{P\bigl(|L_N^{(i)}| \ge T_f \mid \tau_0 = i\bigr)}_{\text{Type-I error}} \\ &\le N\, Q\!\left(\frac{\mu_N^{(j)} - T_f}{\sqrt{2\mu_N^{(j)}}}\right) + \frac{Q\!\left(\frac{\mu_N^{(i)} + T_f}{\sqrt{2\mu_N^{(i)}}}\right)}{Q\!\left(\sqrt{\frac{\mu_N^{(i)}}{2}}\right)} \\ &\overset{(d)}{\le} N\, Q\!\left(\frac{N^{\beta} - C N^{\beta}}{\sqrt{2 N^{\beta}}}\right) + \frac{e^{-\frac{(\mu_N^{(i)} + T_f)^2}{4\mu_N^{(i)}}}}{e^{-\frac{\mu_N^{(i)}}{4}}} = N\, Q\!\left(\frac{N^{\beta} - C N^{\beta}}{\sqrt{2 N^{\beta}}}\right) + e^{-\frac{T_f}{2} - \frac{T_f^2}{4\mu_N^{(i)}}} \\ &\overset{(e)}{\le} N\, e^{-\frac{(1 - C)^2}{4} N^{\beta}} + e^{-\left(\frac{C}{2} + \frac{C^2}{4}\right) N^{\beta}} \le e^{-N^{b}}, \end{aligned}$$
where the approximation $Q(x) \approx \frac{1}{2}\exp\bigl(-\frac{x^2}{2}\bigr)$, valid for sufficiently large $x$ (Equation (9) of ref. [20]), justifies step $(d)$. Step $(e)$ then follows by taking $T_f = C N^{\beta}$ for some constant $0 < C < 1$, so that for all $i \in \mathcal{A}$:
$$\frac{T_f}{2} = \frac{C}{2}\, N^{\beta}, \qquad \frac{T_f^2}{\mu_N^{(i)}} \le \frac{C^2 N^{2\beta}}{N^{\beta}} = C^2 N^{\beta}.$$
With the use of the conditional probability formula, we can infer that for some $0 < b < \beta$ and sufficiently large $N$:
$$P(X \ne \tau_0 \mid \tau_0 < N) = \sum_{i \in \mathcal{A}} \frac{P(X \ne i \mid \tau_0 = i)\, P(\tau_0 = i)}{P(\tau_0 < N)} \le e^{-N^{b}} \sum_{i \in \mathcal{A}} \frac{P(\tau_0 = i)}{P(\tau_0 < N)} = e^{-N^{b}},$$
which implies that, asymptotically, the Th-SCF algorithm flips the true FEP with probability 1 and therefore delays the FEP with probability 1.
Accordingly, the proof of Theorem 1 is established. □
Theorem 1 confirms that the Th-SCF algorithm asymptotically delays the FEP with probability 1, ensuring improved SC decoder performance. This provides a strong theoretical foundation for designing effective SCF strategies tailored to the SC decoder.

4. Performance Analysis of the Th-SCF Algorithm

This section presents simulation results for a binary-input additive white Gaussian noise (BI-AWGN) channel, where the information bit set $\mathcal{A}$ is generated using the Gaussian approximation (GA) method [18] to ensure that the SNR required for the SC algorithm to achieve a BLER of $10^{-2}$ is minimized for different code lengths. Decoding continues until 400 errors are detected for each code length and code rate. To evaluate the performance of the proposed algorithm, we use a 12-bit CRC, as suggested in [10], for all SCF decoding algorithms, including the min-LLR SCF algorithm [6], the simplified D-SCF algorithm [17], the improved SCF algorithm [10], and the proposed Th-SCF algorithm, with the check polynomial $g(x) = x^{13} + x^{11} + x^3 + x^2 + 1$. For the SCL decoding algorithm, we use a CRC length of 8 bits, as recommended in [10], with the check polynomial $g(x) = x^8 + x^6 + x^3 + x^2 + 1$.
To assess the upper limit of error correction for the proposed algorithm, we also compare the BLER with that of SC Oracle-1 (SCO-1) decoding, as proposed in Section III-C of ref. [6], using a 12-bit CRC; the oracle is allowed to intervene only once during the decoding process to correct the first erroneous bit decision (i.e., the first error position, or FEP, as discussed earlier), as suggested in Section IV of [10].
To enhance the practicality of the proposed algorithm, the threshold T f is determined through a two-step process:
(a).
It is computed through offline Monte Carlo simulations to ensure that the Th-SCF algorithm achieves at least 75% reliability in identifying the true FEP, i.e.,
$$P(i \in \mathcal{S} \mid \tau_0 = i) \ge 0.75. \qquad (17)$$
(b).
The simulation result (17) is then fitted to the following parametric form:
$$T_f = N^{a(T)} \cdot 2^{-b(N)},$$
where $a(T)$ and $b(N)$ are fitting coefficients. The following approximation, satisfying condition (17), achieves a relative error of less than 5% compared to the simulated $T_f$ (a short numerical sketch follows this list):
$$T_f \approx N^{\frac{1}{3}\left(\frac{T}{45} + \frac{97}{90}\right)} \cdot 2^{-\left(\frac{\log_2 N}{2} - 4\right)}.$$
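The fitted expression can be evaluated directly; for example, the sketch below (with a hypothetical helper name) reproduces the value $T_f = 1024^{1.3/3}/2 \approx 10.1$ used later for $N = 1024$ and $T = 10$.

```python
import numpy as np

def fitted_threshold(N, T):
    """Fitted flip threshold T_f = N^{(1/3)(T/45 + 97/90)} * 2^{-(log2(N)/2 - 4)}."""
    a = (T / 45.0 + 97.0 / 90.0) / 3.0     # exponent a(T)
    b = np.log2(N) / 2.0 - 4.0             # exponent b(N)
    return N ** a * 2.0 ** (-b)

print(fitted_threshold(1024, 10))          # 1024**(1.3/3) / 2, approximately 10.08
```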
To better validate the results presented in Theorem 1, we visualize the probability described in the theorem. To maintain consistency with the theoretical setting, we exclude the influence of the CRC, focusing solely on the ability of the threshold flipping strategy to accurately identify and flip the true FEP (the first error bit where the SC decoder fails). Additionally, we employ the GA method [18] to generate the information bit set $\mathcal{A}$ and ensure that the SNR required for the SC algorithm to achieve a BLER of $10^{-2}$ is minimized for different code lengths. Specifically, for code lengths $N \in \{1024, 2048, 4096, 8192, 16384\}$, we set the corresponding design SNRs for the GA method to $\{2.75, 2.5, 2.25, 2, 1.85\}$ dB, respectively.
As shown in Figure 4, the delay probability of the first error position (FEP) for the threshold SCF (Th-SCF) algorithm (represented by the black star-shaped curve) increases to 1, while the probabilities for advancement and no change (represented by the blue square-shaped curve and red circle-shaped curve, respectively) decrease to 0 as N increases. This indicates a significant performance improvement for long polar codes.

4.1. Error Correction Performance of the Proposed Th-SCF Algorithm

In this subsection, we evaluate the performance of the proposed algorithm under different parameter settings. Specifically, we compare the performance of our algorithm with existing SCF algorithms across various code lengths N, flipping attempts T, code rates R, and flipping thresholds T f , highlighting the advantages of the proposed approach under these varying conditions.
Firstly, we compare the BLER performance of the proposed algorithm with several existing SCF algorithms. As shown in Figure 5, as the SNR increases, our proposed algorithm achieves comparable performance with the simplified D-SCF [17] and improved SCF [10] algorithms. Furthermore, when the maximum number of flips is set to $T_{\max} = 10$, our algorithm surpasses CA-SCL with list size $L = 2$ [5], with performance approaching that of the SCO-1 decoder (Section III-C of ref. [6]). These results provide compelling evidence for the effectiveness of the proposed algorithm. While the proposed algorithm exhibits strong decoding performance, a comparison with finite-length achievability bounds, such as the Normal Approximation (NA) bound, the Random-Coding Union (RCU) bound, and the refined RCU bound [21,22], indicates that there is still considerable room for improvement. This observation highlights the potential for further research aimed at closing the gap to the theoretical limits at finite block lengths.
Figure 6 demonstrates that the proposed Th-SCF algorithm (Algorithm 1) achieves comparable performance to the simplified dynamic SCF (D-SCF) decoding algorithm across different code rates. Moreover, the Th-SCF algorithm exhibits almost the same performance within a threshold range of approximately $0.9\,T_f$ to $1.1\,T_f$, where $T_f = 1024^{1.3/3}/2$, which is visually supported by the three red dashed lines in the figure. Additionally, the proposed Th-SCF algorithm outperforms CA-SCL with list size $L = 2$ after 10 flips across all tested code rates, further confirming the effectiveness and practicality of our approach.
As illustrated in Figure 7, the proposed Th-SCF algorithm demonstrates performance comparable to the simplified D-SCF algorithm across various flipping attempts T and code lengths N. Additionally, for all code lengths N, it is evident that when the number of flips reaches 10, the performance of the proposed algorithm surpasses that of CA-SCL with list size L = 2 , thereby validating the effectiveness of our approach.

4.2. Complexity Comparison with Existing SCF Algorithms and Performance Evaluation in Non-Ideal Channels

In this section, we compare the average complexity of the proposed Th-SCF algorithm with that of existing SCF algorithms. Additionally, we highlight the advantages of our proposed algorithm in non-ideal channels, such as Rayleigh fading channels, to demonstrate its practical applicability in real-world scenarios.
In Figure 8, we compare the average complexity of the proposed Th-SCF algorithm with that of the min-LLR SCF and simplified D-SCF algorithms. For this comparison, we set the code length to $N = 1024$ and the code rate to $1/2$. The information bit set $\mathcal{A}$ is generated using the GA method, with the design SNR set to 2.5 dB. The maximum number of flips for the min-LLR SCF algorithm is fixed at 10. To ensure a fair comparison, the maximum number of flips for the Th-SCF algorithm is adjusted so that its error-correction performance matches that of the min-LLR SCF algorithm. Specifically, when the code rate is $R = 1/2$, both the Th-SCF and simplified D-SCF algorithms achieve performance comparable to that of the min-LLR SCF algorithm with 10 flips when the maximum number of flips is $T_{\max} = 5$.
As shown in Figure 8, in the high SNR regime, the average complexity of all algorithms converges to that of the SC decoding algorithm. However, in the low SNR regime, the proposed algorithm significantly reduces the average complexity compared to the min-LLR SCF algorithm. Additionally, when the maximum number of flips T max is fixed, the complexity of the proposed algorithm is lower than that of the simplified D-SCF algorithm. This demonstrates the efficiency of the proposed algorithm and highlights its advantage in terms of reduced average complexity.
Next, we explore the practicality of the proposed algorithm in real-world applications. Since the Gaussian approximation (GA) assumption (Assumption 1) may not always hold in practical scenarios, it is essential to evaluate the performance of the proposed algorithm under non-ideal channel conditions. To this end, we consider the Rayleigh fading channel as a representative case.
Figure 9 compares the performance of the proposed Th-SCF decoder with the min-LLR SCF decoder (as described in Figure 5 of ref. [6]) over a Rayleigh fading channel with fading coefficient $h \sim \mathrm{Rayleigh}(1)$, evaluated within the framework of Trifonov as summarized in Figure 4 of ref. [23]. The proposed decoder achieves an approximate 0.27 dB performance gain at a BLER of $10^{-2}$, highlighting two key advantages:
  • Practicality: The method retains strong error correction performance even when the Gaussian approximation (GA) assumption (Assumption 1) is no longer valid, demonstrating its reliability in practical, non-ideal fading environments.
  • Versatility: Despite offering improved performance, it retains the efficient O ( N log N ) computational complexity, making it applicable to a wider range of communication channels beyond the Gaussian case.
These results affirm the practical value and generalizability of the proposed Th-SCF decoding approach. Therefore, Assumption 1 could provide a simple (though not exclusive) threshold design, with alternative methods to be explored in future work.

4.3. Discussion

In this subsection, we explore potential approaches for integrating the proposed algorithm with recent techniques, such as the generalized restart mechanism (GRM) [24] and the fast decoding method [25], to further reduce its computational complexity. Both of these methods aim to enhance decoding efficiency by reducing overall complexity.
The core mechanism of the generalized restart mechanism (GRM) lies in bypassing the partial traversal of the decoding tree through strategic design, while leveraging previously stored information to estimate the bits decoded in earlier stages. This approach can be integrated with the proposed Th-SCF algorithm.
Specifically, during each iteration, we can record the index of the bit where the LLR magnitude first fails to exceed the threshold T f (e.g., when the bit u i does not meet the threshold criterion). If decoding errors remain after applying threshold-based flipping, the subsequent Th-SCF decoding iteration can preserve the decisions for bits u 0 to u i 1 , which are assumed to be correct. The decoder can then resume the SC decoding process from bit u i + 1 , following the restart path defined in Definition 2 of ref. [24]. This approach effectively avoids redundant recomputation for earlier bits whose decoding outcomes remain unchanged across iterations. This is just an initial conceptual framework and more detailed design strategies will be presented in our future work.
The integration of the Th-SCF algorithm with fast decoding techniques relies on establishing appropriate threshold selection rules tailored to the various node types, such as R0, R1, REP, and SPC nodes.
For R0 nodes, which consist entirely of frozen bits [25], no bit flipping is necessary. In R1 nodes, composed exclusively of information bits [25], the bit with the smallest LLR magnitude at the top node is considered for flipping, provided that magnitude does not exceed a predefined threshold T_f.
For REP nodes, where the LLR of the single non-frozen bit is obtained by summing the LLRs of its constituent top-node bits [25], the entire REP node is subjected to flipping evaluation if this aggregated LLR magnitude falls below T_f.
Threshold determination for SPC nodes is more involved, owing to their hybrid structure of a frozen bit alongside several information bits [25]. A viable approach is to adopt the parity-based flipping criteria proposed in Equations (17a)–(18b) of ref. [26], which consider the three smallest LLR magnitudes when making the decision.
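The node-type rules outlined above can be collected into a single candidate-selection routine. The following is a conceptual illustration only: it assumes the top-node LLRs of each node are already available from a fast-SSC traversal, and the node typing, the exact SPC criteria of ref. [26], and the flipping itself are deliberately simplified placeholders rather than the paper's design.

```python
# Conceptual sketch of threshold-based flip-candidate selection per node type
# for a fast-SSC-style Th-SCF decoder (cf. refs. [25,26]); not the paper's design.
import numpy as np

def flip_candidates(node_type, alpha, t_f):
    """Return indices (within the node) of bits to consider flipping.

    node_type: one of 'R0', 'R1', 'REP', 'SPC'
    alpha:     top-node LLRs of the node
    t_f:       flip threshold T_f
    """
    mag = np.abs(np.asarray(alpha, dtype=float))

    if node_type == 'R0':                      # all frozen bits: nothing to flip
        return []

    if node_type == 'R1':                      # all information bits: flip the
        i = int(np.argmin(mag))                # least reliable one if it is weak
        return [i] if mag[i] <= t_f else []

    if node_type == 'REP':                     # single non-frozen bit; its LLR is
        agg = float(np.sum(alpha))             # the sum of the top-node LLRs
        return list(range(len(mag))) if abs(agg) <= t_f else []

    if node_type == 'SPC':
        # Assumption: examine the three smallest LLR magnitudes, in the spirit
        # of Eqs. (17a)-(18b) of ref. [26] (parity details omitted here).
        order = np.argsort(mag)[:3]
        return [int(i) for i in order if mag[i] <= t_f]

    raise ValueError(f"unknown node type: {node_type}")

# Example: an R1 node whose weakest bit falls below the threshold.
print(flip_candidates('R1', [3.2, -0.4, 5.1, 2.7], t_f=1.0))   # -> [1]
```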
Note that the above is only a preliminary discussion; a more detailed treatment will be provided in future work. We will further explore efficient strategies for implementing a fast threshold-based SCF algorithm, investigate additional node types such as REP-SPC nodes [26], and refine the threshold selection techniques for optimal performance.
It is also important to emphasize that the theoretical analysis presented in this work relies on the validity of the Gaussian approximation (GA) assumption (Assumption 1). Moreover, the performance of the proposed algorithm may be affected by the choice of code construction method. As highlighted in Theorem 1, the effectiveness of the Th-SCF algorithm is influenced by several critical factors, including code length, SNR, and the specific method used to construct the information set. In particular, adopting alternative construction techniques—such as Reed–Muller (RM)-based schemes [27]—may lead to different performance outcomes. While a detailed investigation of this aspect is not provided in the current work, we acknowledge that understanding the impact of various construction methods is essential for the practical deployment of the proposed algorithm. We plan to explore this topic more thoroughly in our future work. These dependencies not only delineate the boundaries of the current study but also point to promising avenues for further research.

5. Conclusions

In this paper, we introduce the threshold successive cancellation flip (Th-SCF) decoding algorithm for polar codes. The Th-SCF algorithm delivers performance comparable to the dynamic SCF (D-SCF) decoder while significantly reducing computational complexity by eliminating the need for expensive sorting operations. Through rigorous theoretical analysis and extensive simulations, we show that Th-SCF can asymptotically delay the first error position (FEP) with probability 1, ensuring high decoding performance. Additionally, the proposed method demonstrates reliable performance under non-ideal conditions, such as Rayleigh fading channels, highlighting its practical applicability in real-world communication systems. Looking ahead, future work will focus on developing adaptive threshold selection strategies for non-Gaussian channels, particularly for emerging 5G and 6G networks. Furthermore, we plan to explore the integration of Th-SCF with other advanced techniques, such as the generalized restart mechanism (GRM), fast decoding methods, and iterative decoding frameworks, with the goal of further reducing computational complexity and enhancing system flexibility.

Author Contributions

Conceptualization, Z.L., L.Y., S.Y., Z.M. and Y.L.; data curation, Z.L., S.Y. and Y.L.; formal analysis, Z.L., L.Y. and S.Y.; funding acquisition, G.Y. and Z.M.; investigation, Z.L., L.Y. and S.Y.; methodology, Z.L., L.Y., S.Y. and Z.M.; project administration, G.Y. and Z.M.; resources, Z.L. and Y.L.; software, Z.L. and S.Y.; supervision, G.Y., Z.M. and Y.L.; validation, Z.L. and S.Y.; visualization, Z.L.; writing—original draft, Z.L.; writing—review and editing, Z.L., L.Y., S.Y. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China under Grant 2023YFA1009601.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this research are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SC: successive cancellation
SCL: successive cancellation list
SCF: successive cancellation flip
Th-SCF: threshold successive cancellation flip
LLR: log-likelihood ratio
PDF: probability density function
CDF: cumulative distribution function
FEP: first error position
BPSK: binary phase shift keying
BI-AWGN: binary-input additive white Gaussian noise
CRC: cyclic redundancy check
D-SCF: dynamic successive cancellation flip
GRM: generalized restart mechanism
RCU: random-coding union
NA: normal approximation

References

1. Arıkan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
2. 3GPP TSG RAN WG1, R1-167703, Channel Coding Scheme for URLLC, mMTC and Control Channels; Intel Corporation: Santa Clara, CA, USA, 2016.
3. Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226.
4. Balatsoukas-Stimming, A.; Parizi, M.B.; Burg, A. LLR-based successive cancellation list decoding of polar codes. IEEE Trans. Signal Process. 2015, 63, 5165–5179.
5. Niu, K.; Chen, K. CRC-Aided Decoding of Polar Codes. IEEE Commun. Lett. 2012, 16, 1668–1671.
6. Afisiadis, O.; Balatsoukas-Stimming, A.; Burg, A. A low-complexity improved successive cancellation decoder for polar codes. In Proceedings of the 2014 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014.
7. Zhang, Z.; Qin, K.; Zhang, L.; Zhang, H.; Chen, G.T. Progressive bit-flipping decoding of polar codes over layered critical sets. In Proceedings of the GLOBECOM 2017 – 2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6.
8. Giard, P.; Balatsoukas-Stimming, A.; Müller, T.C.; Bonetti, A.; Thibeault, C.; Gross, W.J.; Flatresse, P.; Burg, A. PolarBear: A 28-nm FD-SOI ASIC for decoding of polar codes. IEEE J. Emerg. Sel. Top. Circuits Syst. 2017, 7, 616–629.
9. Chandesris, L.; Savin, V.; Declercq, D. Dynamic-SCFlip decoding of polar codes. IEEE Trans. Commun. 2018, 66, 2333–2345.
10. Ercan, F.; Condo, C.; Gross, W.J. Improved bit-flipping algorithm for successive cancellation decoding of polar codes. IEEE Trans. Commun. 2019, 67, 61–72.
11. Ahlswede, R.; Cai, N.; Li, S.-Y.R.; Yeung, R.W. Network information flow. IEEE Trans. Inf. Theory 2000, 46, 1204–1216.
12. El Gamal, A.; Kim, Y.-H. Network Information Theory; Cambridge University Press: New York, NY, USA, 2011.
13. Farooqi, M.Z.; Tabassum, S.M.; Rehmani, M.H.; Saleem, Y. A survey on network coding: From traditional wireless networks to emerging cognitive radio networks. J. Netw. Comput. Appl. 2014, 46, 166–181.
14. Bassoli, R.; Marques, H.; Rodriguez, J.; Shum, K.W.; Tafazolli, R. Network coding theory: A survey. IEEE Commun. Surv. Tutorials 2013, 15, 1950–1978.
15. Yeung, R.W.; Li, S.-Y.R.; Cai, N.; Zhang, Z. Network Coding Theory Part I: Single Source. Found. Trends Commun. Inf. Theory 2006, 2, 241–329.
16. Leroux, C.; Tal, I.; Vardy, A.; Gross, W.J. Hardware architectures for successive cancellation decoding of polar codes. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 1665–1668.
17. Ercan, F.; Tonnellier, T.; Doan, N.; Gross, W.J. Simplified dynamic SC-flip polar decoding. In Proceedings of the ICASSP 2020 – 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1733–1737.
18. Trifonov, P. Efficient design and decoding of polar codes. IEEE Trans. Commun. 2012, 60, 3221–3227.
19. Lin, Z.; Bai, Z. Probability Inequalities; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2002.
20. Shirinabadi, P.A.; Abbasi, A. On approximation of Gaussian Q-function and its applications. In Proceedings of the 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 10–12 October 2019; pp. 0883–0887.
21. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359.
22. Martinez, A.; Fàbregas, A.G.I. Saddlepoint approximation of random-coding bounds. In Proceedings of the 2011 Information Theory and Applications Workshop, La Jolla, CA, USA, 6–11 February 2011; pp. 1–6.
23. Trifonov, P. Design of polar codes for Rayleigh fading channel. In Proceedings of the 2015 International Symposium on Wireless Communication Systems (ISWCS), Brussels, Belgium, 25–28 August 2015; pp. 331–335.
24. Sagitov, I.; Pillet, C.; Balatsoukas-Stimming, A.; Giard, P. Generalized Restart Mechanism for Successive-Cancellation Flip Decoding of Polar Codes. J. Signal Process. Syst. 2025, 97, 11–29.
25. Giard, P.; Burg, A. Fast-SSC-flip decoding of polar codes. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), Barcelona, Spain, 15–18 April 2018; pp. 73–77.
26. Ercan, F.; Tonnellier, T.; Gross, W.J. Energy-efficient hardware architectures for fast polar decoders. IEEE Trans. Circuits Syst. I Reg. Pap. 2020, 67, 322–335.
27. Li, B.; Shen, H.; Tse, D. RM-Polar Codes. arXiv 2014, arXiv:1407.5483.
Figure 1. Construction flowchart of CRC-aided polar codes.
Figure 2. Distribution of the first error position (FEP) in the SC decoder across codeword indices at different SNRs, with N = 1024, R = 1/2, and a 12-bit CRC.
Figure 3. Probability of correctly identifying the true FEP by the proposed algorithm and the min-LLR SCF algorithm [6] across different code lengths and SNRs.
Figure 4. Probabilities of delay, unchanged, and advance for various code lengths N ∈ {1024, 2048, 4096, 8192, 16384} at a fixed code rate R = 1/2 and design SNRs {2.75, 2.5, 2.25, 2, 1.85} dB, with a target BLER of 10^{-2}, using the Th-SCF algorithm (Algorithm 1) with T = 1 and T_f = N^{1.1/3}/2 - log(N^2)/4.
Figure 5. BLER performance comparison between the proposed Th-SCF algorithm and several existing SCF algorithms, including the min-LLR SCF algorithm [6], the SCO algorithm [6], the improved SCF algorithm [10], the simplified D-SCF algorithm [17], the normal approximation (NA) bound [21], the random-coding union (RCU) bound [21], and the refined RCU bound [22]. The comparison is conducted with a maximum of 10 flipping attempts (T_max = 10), a threshold value of T_f = 1024^{1.3/3}/2, and a 12-bit CRC using the generator polynomial g(x) = x^13 + x^11 + x^3 + x^2 + 1. The information bit set A is constructed using the GA method, with a design SNR of 2.5 dB.
Figure 6. BLER performance comparison of min-LLR SCF ([6], Figure 5), simplified D-SCF [17], SC-List decoding [5] with list size L = 2, and the proposed Th-SCF algorithm (this work) for polar codes of length N = 1024 and code rates R ∈ {1/4, 1/2, 3/4}. All SCF-based decoders are configured with a CRC length of 12, while the SC-List decoder uses a CRC length of 8, as suggested in [10]. The maximum number of flipping attempts is set to T_max = 10, and the threshold for Th-SCF is T_f = 1024^{1.3/3}/2.
Figure 7. BLER performance comparison of the min-LLR SCF ([6], Figure 5), simplified D-SCF [17], SC-List [5] with L = 2, and Th-SCF (this work) algorithms for code lengths N ∈ {1024, 4096, 16384}, code rate R = 1/2, and corresponding design SNRs {2.5, 2.15, 1.9} dB. The maximum number of flipping attempts T is set to 1 and 10, while T_f is estimated using (18). All SCF decoders use a CRC length of 12, while the SC-List decoder uses a CRC length of 8, as recommended in [10].
Figure 8. Average complexity comparison of the min-LLR SCF [6], simplified D-SCF [17], and Th-SCF (Algorithm 1) algorithms, normalized to the complexity of SC decoding, for a polar code with length N = 1024, code rate R = 1/2, and a 12-bit CRC. The information bit set A is generated using the GA method, with the design SNR chosen as 2.5 dB.
Figure 9. BLER performance comparison of the min-LLR SCF [6] and Th-SCF (Algorithm 1) algorithms for code length N = 1024 and code rate R = 1/2 under Rayleigh fading, with T = 1, where h ∼ Rayleigh(σ_h) and σ_h = 1. The information set A is constructed using the GA method with a design SNR of 5 dB [23].