Article

A Note on the Mixing Factor of Polar Codes

School of Mathematics, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(11), 1498; https://doi.org/10.3390/e25111498
Submission received: 28 September 2023 / Revised: 27 October 2023 / Accepted: 27 October 2023 / Published: 30 October 2023
(This article belongs to the Special Issue Advances in Information and Coding Theory II)

Abstract

Over binary-input memoryless symmetric (BMS) channels, the performance of polar codes under successive cancellation list (SCL) decoding can approach that of the maximum likelihood (ML) algorithm when the list size $L$ is greater than or equal to $2^{MF}$, where $MF$, known as the mixing factor of the code, represents the number of information bits before the last frozen bit. Recently, Yao et al. showed the upper bound of the mixing factor of decreasing monomial codes with length $n = 2^m$ and rate $R \le \frac{1}{2}$ when $m$ is an odd number; moreover, this bound is reachable. Herein, we obtain an achievable upper bound in the case of an even number. Further, we propose a new decoding hard-decision rule beyond the last frozen bit of polar codes under BMS channels.

1. Introduction

Polar codes [1], a channel coding scheme that provably achieves the Shannon limit, were proposed by Arıkan based on the channel polarization phenomenon. The encoding of polar codes has a clear and explicit structure: the codeword set is the linear space spanned by selected rows of the Kronecker power of a second-order polarization matrix. Arıkan also adopted a successive cancellation (SC) decoding scheme [1] with low complexity. In practical applications, an efficient decoding algorithm is one of the decisive factors in judging the ability of a coding system. SC decoding of polar codes has low error probability at long code lengths but poor performance at finite code lengths; therefore, Tal and Vardy proposed the SCL decoding algorithm [2] to attain better performance. From the perspective of improving finite-length polar codes, a series of concatenated codes have been utilized to ameliorate the code spectrum and approach the ML bound, such as cyclic redundancy check (CRC) polar codes [2], parity check (PC) polar codes [3], and polarization-adjusted convolutional (PAC) codes [4].
At present, most researchers focus on SCL decoding with good performance, since its error probability coincides with that of ML decoding as the list size $L$ tends to infinity. In [5], it was proved theoretically that, under binary erasure channels (BEC), SCL decoding of $(n,k)$ polar codes with $L = 2^{MF}$ can be close to optimal maximum a posteriori (MAP) performance, and numerical experiments indicated that this conclusion also holds for general channels such as binary-input additive white Gaussian noise (BI-AWGN) channels. Fazeli et al. further proved that SCL decoding of polar codes with length $n$ achieves the performance of the ML algorithm under BMS channels when $L \ge 2^{MF}$, and designed a hybrid decoder combining SCL with the nearest-coset algorithm; hence, the upper bound on ML decoding complexity for polar codes is $O(2^{MF} \, n \log n)$ [6]. Simulation experiments in [6,7] showed that if a Reed–Muller (RM) code and a polar code have the same length and dimension, the MF value of the RM code is larger; this implies that, compared with general polar codes, PAC codes with RM rate profiling need a larger list size $L$ to approach ML performance under SCL decoding. Aiming to obtain the entire weight distribution of polar codes, [8] used a recursive algorithm to calculate the weight enumeration function (WEF) of a polar coset with quadratic polynomial complexity, since polar codes can be regarded as the union of polar cosets. Yao et al. also showed the upper bound of the mixing factor of decreasing monomial codes with length $n = 2^m$ and dimension $k \le \frac{n}{2}$ when $m$ is an odd number, so as to bound the complexity of the total algorithm loosely by $O(2^{MF} \, n^2)$ [8].
In this paper, we first show the upper bound of the mixing factor of decreasing monomial codes with code length $n = 2^m$ and code rate $R \le \frac{1}{2}$ when $m$ is an even number. Meanwhile, we certify that this bound is reachable, though the achievability condition differs with code length. Further, we propose a new decoding hard-decision rule, based on the Hamming distance between cosets and a given vector, for the bits beyond the last frozen bit of polar codes over BMS channels.

2. Preliminary

In this section, we review some basic knowledge about polar codes and decreasing monomial codes.

2.1. Polar Codes

Let the $(n,k)$ code represent a polar code with length $n$, dimension $k$, and rate $R = \frac{k}{n}$; the information set used to transmit messages and the frozen set used to convey fixed bits are denoted by $\mathcal{I}$ and $\mathcal{F}$, respectively, where $\mathcal{I}, \mathcal{F} \subseteq \{0, 1, \ldots, n-1\}$. The information set $\mathcal{I}$ contains the indices of the $k$ most reliable subchannels $W_n^{(i)}$ ($i \in \{0, 1, \ldots, n-1\}$), obtained by density evolution (DE) [9], the Tal–Vardy algorithm [10], Gaussian approximation (GA) [11], polarization weight (PW) construction [12], and so on. Here, we take the frozen bits to be zero. The codeword set of a polar code is generated by the product of the information sequence and the Kronecker power of $G_2 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, i.e., $x_0^{n-1} = u_0^{n-1} G_2^{\otimes m}$, where $n = 2^m$, $a_i^j = (a_i, a_{i+1}, \ldots, a_j) \in \{0,1\}^{j-i+1}$, and $\otimes$ is the Kronecker product. RM codes with length $n = 2^m$ and order $r \le m$, denoted by $RM(r, m)$, are also generated by $G_2^{\otimes m}$. In contrast to polar codes, which are constructed from subchannel reliabilities, the information set of RM codes is selected by the row weight of the generator matrix $G_2^{\otimes m}$, which is determined by the binary expansion of the row index.
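As a concrete illustration of the encoding equation $x_0^{n-1} = u_0^{n-1} G_2^{\otimes m}$, the following sketch encodes an $(8,4)$ polar code; the information set $\{3,5,6,7\}$ is an illustrative reliability-based choice (it coincides with $RM(1,3)$), not one mandated by the text.

```python
import numpy as np

def polar_encode(u, m):
    """Encode u (length 2^m) as x = u * G_2^{(x)m} over GF(2)."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)        # build the m-fold Kronecker power
    return (np.asarray(u, dtype=np.uint8) @ G) % 2

# An (8, 4) polar code: frozen bits (indices 0, 1, 2, 4) set to zero,
# information bits on the more reliable subchannels {3, 5, 6, 7}.
u = np.zeros(8, dtype=np.uint8)
info_set = [3, 5, 6, 7]
u[info_set] = [1, 0, 1, 1]
x = polar_encode(u, 3)
```

Note that encoding a unit vector returns the corresponding row of $G_2^{\otimes m}$, which is a convenient sanity check on the Kronecker-product ordering.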
SC decoding is a classic decoding scheme for polar codes with length $n = 2^m$, which uses the $n$ received signals $y_0^{n-1}$ and the $i$ previously decoded bits $\hat{u}_0^{i-1}$ to estimate the $i$-th bit $u_i$. The following SC hard-decision rule is based on the subchannel transition probabilities $W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i)$, $i \in \{0, 1, \ldots, n-1\}$:
$$\hat{u}_i = \begin{cases} u_i, & i \in \mathcal{F}, \\[2pt] 0, & i \in \mathcal{I} \ \text{and} \ \dfrac{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 0)}{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 1)} \ge 1, \\[6pt] 1, & i \in \mathcal{I} \ \text{and} \ \dfrac{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 0)}{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 1)} < 1. \end{cases} \quad (1)$$
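Rule (1) can be checked by brute force at toy lengths, computing $W_n^{(i)}$ directly as a sum over all completions $u_{i+1}^{n-1}$. The sketch below assumes a BSC($p$) (an assumption for illustration; any BMS channel would do) and is exponential in $n - i$, so it is meant only as a sanity check, not a practical decoder.

```python
import itertools
import numpy as np

def kron_power(m):
    """G_2^{(x)m} built by repeated Kronecker products."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)
    return G

def sc_decision(y, u_hat, i, m, p=0.1):
    """Hard decision on u_i following rule (1), with W_n^{(i)} computed by
    brute force over all completions u_{i+1}^{n-1}.  Channel: BSC(p), so
    W(y_j | x_j) = 1 - p if y_j == x_j, else p."""
    n = 1 << m
    G = kron_power(m)
    y = np.asarray(y, dtype=np.uint8)
    probs = [0.0, 0.0]
    for ui in (0, 1):
        for tail in itertools.product((0, 1), repeat=n - i - 1):
            u = np.array(list(u_hat) + [ui] + list(tail), dtype=np.uint8)
            x = u @ G % 2
            agree = int(np.sum(x == y))
            probs[ui] += (1 - p) ** agree * p ** (n - agree)
    return 0 if probs[0] >= probs[1] else 1
```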
The SCL algorithm is an aggregation of SC decoders that preserves the $L$ decoding paths with the smallest path metric values; combined with CRC aiding, it can even exceed the ML performance of the underlying polar code [13]. In addition, decoding schemes such as belief propagation (BP) [14] and successive cancellation stack (SCS) [15] also improve the performance of polar codes to a certain extent.

2.2. Decreasing Monomial Codes

Let $\mathcal{M}_m = \{x_0^{a_0} x_1^{a_1} \cdots x_{m-1}^{a_{m-1}} \mid a_i \in \{0,1\}, \ i \in \{0,1,\ldots,m-1\}\}$ denote the monomial set in $m$ variables, and let $ind(f)$ denote the set of all variables appearing in the monomial $f \in \mathcal{M}_m$. Owing to the particular structure of $G_2^{\otimes m}$, each row $g_i$ of $G_2^{\otimes m} = (g_0^T, g_1^T, \ldots, g_{2^m-1}^T)^T$ can be represented by a specific monomial, where $T$ denotes the transpose. Let $bin(i) = i_{m-1} i_{m-2} \cdots i_1 i_0$ be the binary expansion of $i \in \{0, 1, \ldots, 2^m - 1\}$, where $i = \sum_{k=0}^{m-1} i_k 2^k$. Then each row index $i$ of $G_2^{\otimes m}$ uniquely corresponds to a monomial $f$; for simplicity, we write $i_f$ to link row index and monomial, i.e., $f = x_0^{1 - i_{f,0}} \cdots x_{m-1}^{1 - i_{f,m-1}}$, where $bin(i_f) = i_{f,m-1} \cdots i_{f,1} i_{f,0}$. The following are the row indices and corresponding monomial expressions of $G_2^{\otimes 3}$.
$$\begin{array}{r@{\;\leftrightarrow\;}l@{\qquad}c}
0 & x_0 x_1 x_2 & 1\,0\,0\,0\,0\,0\,0\,0 \\
1 & x_1 x_2     & 1\,1\,0\,0\,0\,0\,0\,0 \\
2 & x_0 x_2     & 1\,0\,1\,0\,0\,0\,0\,0 \\
3 & x_2         & 1\,1\,1\,1\,0\,0\,0\,0 \\
4 & x_0 x_1     & 1\,0\,0\,0\,1\,0\,0\,0 \\
5 & x_1         & 1\,1\,0\,0\,1\,1\,0\,0 \\
6 & x_0         & 1\,0\,1\,0\,1\,0\,1\,0 \\
7 & 1           & 1\,1\,1\,1\,1\,1\,1\,1
\end{array}$$
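The index-to-monomial correspondence can be sketched in a few lines; printing the loop below reproduces the $m = 3$ list above.

```python
def index_to_monomial(i, m):
    """Map row index i of G_2^{(x)m} to its monomial
    f = prod_k x_k^(1 - i_k), where bin(i) = i_{m-1}...i_1 i_0."""
    vars_in_f = [k for k in range(m) if not (i >> k) & 1]
    return "".join(f"x{k}" for k in vars_in_f) or "1"

for i in range(8):
    print(i, "<->", index_to_monomial(i, 3))
```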
It is easy to see that both polar codes and RM codes are spanned by monomials; hence, we call codes of this form monomial codes. Further, Bardet et al. [16] first defined the partial order $\preceq$, proposed the concept of decreasing monomial codes together with their main properties, and verified that polar codes and RM codes are decreasing monomial codes.
Definition 1 
(Decreasing Monomial Codes [16]). A set $\mathcal{I} \subseteq \mathcal{M}_m$ is decreasing if and only if ($f \in \mathcal{I}$ and $g \preceq f$) implies $g \in \mathcal{I}$. When $\mathcal{I} \subseteq \mathcal{M}_m$ is a decreasing set, $\mathcal{C}(\mathcal{I})$ is called a decreasing monomial code.
In this paper, we directly use the symbols, definitions, and conclusions from [16]. Note that we do not distinguish between an information set $\mathcal{I}$ containing row indices and one containing the corresponding monomials.

3. The Upper Bound of the Mixing Factor

In [6], the concept of the mixing factor was proposed based on the index of the last frozen bit. As stated in Theorem 1, Yao et al. proved that the upper bound of the mixing factor of decreasing monomial codes with length $2^m$ and dimension $k \le 2^{m-1}$ is achievable when $m$ is an odd number; however, they only conjectured the case of even $m$ [8]. Therefore, in this section we mainly prove Theorem 2 to complete the picture.
Definition 2 
(Mixing Factor). Let $\mathcal{C}$ be a decreasing monomial code with length $n$ and information set $\mathcal{I}$, and let $\tau_{\mathcal{C}}$ denote the index of the last frozen bit; then the number of information bits before $\tau_{\mathcal{C}}$ is called the mixing factor of $\mathcal{C}$, denoted by $MF(\mathcal{C})$. Obviously, $\tau_{\mathcal{C}}$ and $MF(\mathcal{C})$ satisfy the following equation:
$$MF(\mathcal{C}) = \tau_{\mathcal{C}} - (n - |\mathcal{I}|) + 1. \quad (3)$$
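The definition and identity (3) can be checked mechanically. The helper below is a minimal sketch, using $RM(1,3)$ (information set $\{3,5,6,7\}$ at length 8) as a toy input.

```python
def mixing_factor(n, info_set):
    """MF(C): number of information indices below the last frozen index
    tau_C; the internal assert checks identity (3)."""
    frozen = sorted(set(range(n)) - set(info_set))
    tau = frozen[-1]                              # index of the last frozen bit
    mf = sum(1 for i in info_set if i < tau)
    assert mf == tau - (n - len(info_set)) + 1    # identity (3)
    return mf

# RM(1,3) as an (8, 4) decreasing monomial code: I = {3, 5, 6, 7}
print(mixing_factor(8, [3, 5, 6, 7]))   # prints 1
```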
Theorem 1 
([8]). Let $\mathcal{C}$ be an $(n,k)$ polar code with length $n = 2^m$, $m = 2t+1$, and dimension $k \le \frac{n}{2}$; then
$$MF(\mathcal{C}) \le 2^{2t} - 2^{t+1} + 1. \quad (4)$$
Moreover, the equality holds only when C is the self-dual Reed–Muller code.
Theorem 2. 
Let $\mathcal{C}$ be a decreasing monomial code with length $n = 2^m$, $m = 2t$, and rate $R \le \frac{1}{2}$; then
$$MF(\mathcal{C}) \le 2^{2t-1} - 2^{t+1} + 2. \quad (5)$$
Moreover, for $t \ge 3$ the equality holds if and only if $\mathcal{C}$ is the subcode of the $RM(t, 2t)$ code with information set $\mathcal{I} = \{f \in \mathcal{M}_m \mid \deg(f) \le t-1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}$. In the cases $t = 1$ and $t = 2$, the equality holds if and only if $\mathcal{C}$ is the $RM(t-1, 2t)$ code or $\mathcal{C}$ is the subcode of the $RM(t, 2t)$ code with the above information set $\mathcal{I}$.
As shown in Table 1, we list some of the largest MF values according to inequality (5). In the following, we analyze certain special cases with $t \in \{1, 2, 3, 4, 5\}$.
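The bounds of Theorems 1 and 2 are easy to evaluate side by side; the even-$m$ branch of the sketch below reproduces the values listed in Table 1.

```python
def max_mf(m):
    """Largest mixing factor for length 2^m at rate <= 1/2:
    Theorem 1 for odd m = 2t + 1, Theorem 2 for even m = 2t."""
    if m % 2:
        t = (m - 1) // 2
        return 2 ** (2 * t) - 2 ** (t + 1) + 1
    t = m // 2
    return 2 ** (2 * t - 1) - 2 ** (t + 1) + 2

for t in range(1, 8):                 # reproduces Table 1 (m = 2t)
    print(t, 4 ** t, max_mf(2 * t))
```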
If $t = 1$ or $t = 2$, we consider the decreasing monomial codes with $\tau(\mathcal{C}) \ge 2^{2t-1} - 1$, since the code rate satisfies $R \le \frac{1}{2}$. By exhaustive search, we obtain the conclusions of Theorem 2. For example, let $t = 2$ and $\tau(\mathcal{C}) = 9$; the monomial corresponding to row index 9 is $f = x_1 x_2$. To count the number of information bits above $\tau(\mathcal{C})$, we first enumerate the frozen bits. Besides $f$ itself, there are 7 monomials $g_i$ satisfying $f \preceq g_i$, $i \in \{1, 2, \ldots, 7\}$, corresponding respectively to the indices $0 \leftrightarrow x_0 x_1 x_2 x_3$, $1 \leftrightarrow x_1 x_2 x_3$, $2 \leftrightarrow x_0 x_2 x_3$, $3 \leftrightarrow x_2 x_3$, $4 \leftrightarrow x_0 x_1 x_3$, $5 \leftrightarrow x_1 x_3$, and $8 \leftrightarrow x_0 x_1 x_2$. Therefore, by the property of decreasing monomial codes there are at least 8 frozen bits, and $MF(\mathcal{C}) \le 2$ from (3); moreover, $MF(\mathcal{C}) = 2$ exactly when $\mathcal{C}$ is the subcode of the $RM(2,4)$ code whose information set consists of $6 \leftrightarrow x_0 x_3$, $7 \leftrightarrow x_3$, $10 \leftrightarrow x_0 x_2$, $11 \leftrightarrow x_2$, $12 \leftrightarrow x_0 x_1$, $13 \leftrightarrow x_1$, $14 \leftrightarrow x_0$, $15 \leftrightarrow 1$. Besides, if the decreasing monomial code is the $RM(1,4)$ code, then $\tau(\mathcal{C}) = 12$ and $MF(\mathcal{C}) = 2$.
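The count of seven monomials above $f = x_1 x_2$ can be verified programmatically. The sketch below uses one equivalent formulation of the order $\preceq$ of [16] (same-degree coordinatewise domination, extended to lower degrees via the largest-index sub-monomial); monomials are represented as sorted tuples of variable indices.

```python
from itertools import combinations

def leq(f, g):
    """f <= g in the partial order of [16], for monomials given as sorted
    index tuples f = (i_1 < ... < i_s), g = (j_1 < ... < j_r) with s <= r:
    f <= g iff i_k <= j_{r-s+k} for all k."""
    s, r = len(f), len(g)
    if s > r:
        return False
    return all(f[k] <= g[r - s + k] for k in range(s))

m, f = 4, (1, 2)                  # f = x1*x2, row index 9 of G_2^{(x)4}
above = [g for d in range(m + 1)
         for g in combinations(range(m), d)
         if g != f and leq(f, g)]
print(above)                      # the 7 monomials g with f <= g, g != f
```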
If $3 \le t \le 5$, we likewise consider the cases with $\tau(\mathcal{C}) \ge 2^{2t-1} - 1$ and obtain the conclusion of Theorem 2 by exhaustive search. Compared with the previous case $t \in \{1, 2\}$, there is only one necessary and sufficient condition for attaining the largest mixing factor value here.
In the following, we prove via Lemmas 1 and 5 that the upper bound of the mixing factor of decreasing monomial codes is achievable when $t \ge 5$.
Remark 1. 
By observing Table 2 (in which we omit the subscript $f$ of $i_f$), every monomial listed below the monomial $h$ has degree no larger than $t$; in particular, if the degree is exactly $t$, then the monomial contains the variable $x_0$.
Lemma 1. 
Let $n = 2^m$, $m = 2t$, and let $\mathcal{C}$ be the subcode of the $RM(t, 2t)$ code with information set $\mathcal{I} = \{f \in \mathcal{M}_m \mid \deg(f) \le t-1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}$; then $MF(\mathcal{C}) = 2^{2t-1} - 2^{t+1} + 2$.
Proof. 
First, we enumerate the information set $\mathcal{I}$:
$$|\mathcal{I}| = |\{f \in \mathcal{M}_m \mid \deg(f) \le t-1\}| + |\{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}| = \sum_{k=0}^{t-1}\binom{2t}{k} + \frac{1}{2}\binom{2t}{t} = 2^{2t-1}. \quad (6)$$
Thus, the dimension of the code $\mathcal{C}$ is $\frac{n}{2}$. Next, according to Table 2, every $p \in \mathcal{M}_m$ with $i_p > i_h$ satisfies $0 \le \deg(p) \le t$, and $x_0 \in ind(p)$ whenever $\deg(p) = t$; hence $p \in \mathcal{I}$. Since $h$ itself has degree $t$ with $x_0 \notin ind(h)$, the bit $i_h$ is frozen. Therefore $\tau(\mathcal{C}) = i_h$, and $MF(\mathcal{C}) = 2^{2t-1} - 2^{t+1} + 2$ from (3). □
Lemma 2. 
Let $n = 2^m$, $m = 2t$, and let $\mathcal{C}$ be a decreasing monomial code with length $n$ and frozen set $\mathcal{F}$. If $\tau(\mathcal{C}) \ge i_h$, where $h = x_1 x_2 \cdots x_t$ as in Table 2, then $i_h \in \mathcal{F}$.
Proof. 
By observation of Table 2, every $p \in \mathcal{M}_m$ satisfying $i_p \ge i_h$ obeys $p \preceq h$. Thus, taking $i_p = \tau(\mathcal{C}) \ge i_h$, we have $p \preceq h$; since $p$ is frozen, $i_h \in \mathcal{F}$ by the property of decreasing monomial codes. □
Lemma 3. 
Let $n = 2^m$, $m = 2t$, and let $\mathcal{C}$ be a decreasing monomial code with length $n$ and frozen set $\mathcal{F}$. If $\tau(\mathcal{C}) \ge i_{h'}$, where $h' = x_0 x_2 \cdots x_t$ as in Table 2, then $i_{h'} \in \mathcal{F}$.
Proof. 
Similarly to the proof of Lemma 2, $p \preceq h'$ holds for any $p \in \mathcal{M}_m$ satisfying $i_p \ge i_{h'}$. Taking $i_p = \tau(\mathcal{C}) \ge i_{h'}$, we have $p \preceq h'$, and hence $i_{h'} \in \mathcal{F}$ by the property of decreasing monomial codes. □
Lemma 4. 
Let $\mathcal{C}$ be a decreasing monomial code with length $n = 2^m$, $m = 2t$, rate $R \le \frac{1}{2}$, and frozen set $\mathcal{F}$. If $MF(\mathcal{C}) \ge 2^{2t-1} - 2^{t+1} + 2$ and $\tau(\mathcal{C}) = i_h$, then $\mathcal{C}$ is the subcode of the $RM(t, 2t)$ code with information set $\mathcal{I} = \{f \in \mathcal{M}_m \mid \deg(f) \le t-1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}$.
Proof. 
Obviously, every $p \in \mathcal{M}_m$ with $\deg(p) \ge t+1$ satisfies $h \preceq p$; thus $p \in \mathcal{F}$ because $\mathcal{C}$ is decreasing and $h$ is frozen. This implies that $\mathcal{C}$ is a subcode of the $RM(t, 2t)$ code, and the dimension of this subcode is no larger than $\sum_{k=0}^{t}\binom{2t}{k}$.
On the other hand, for any $p \in \{f \mid \deg(f) \ge t+1\} \cup \{g \mid \deg(g) = t, \ x_0 \notin ind(g)\}$, we have $h \preceq p$ and hence $p \in \mathcal{F}$. Thus we can bound the size of the frozen set of the code $\mathcal{C}$:
$$|\mathcal{F}| \ge \binom{2t-1}{t} + \sum_{k=t+1}^{2t}\binom{2t}{k} = 2^{2t-1}, \quad (7)$$
that is, $|\mathcal{I}| \le \frac{n}{2}$. By applying (3) with $MF(\mathcal{C}) \ge 2^{2t-1} - 2^{t+1} + 2$ and $\tau(\mathcal{C}) = i_h = 2^{2t} - 2^{t+1} + 1$, we obtain
$$|\mathcal{I}| = MF(\mathcal{C}) - \tau(\mathcal{C}) + n - 1 \ge \frac{n}{2}. \quad (8)$$
This implies $|\mathcal{I}| = \frac{n}{2}$ from (7) and (8); hence we deduce that the information set of the code $\mathcal{C}$ is $\{f \in \mathcal{M}_m \mid \deg(f) \le t-1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}$. □
Lemma 5. 
Let $t \ge 5$, and let $\mathcal{C}$ be a decreasing monomial code with length $n = 2^m$, $m = 2t$, rate $R \le \frac{1}{2}$, and frozen set $\mathcal{F}$. If $MF(\mathcal{C}) \ge 2^{2t-1} - 2^{t+1} + 2$, then $\mathcal{C}$ is precisely the subcode of the $RM(t, 2t)$ code with information set $\mathcal{I} = \{f \in \mathcal{M}_m \mid \deg(f) \le t-1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \in ind(g)\}$.
Proof. 
First, by applying (3) together with $|\mathcal{I}| \le \frac{n}{2}$, we have $\tau(\mathcal{C}) = MF(\mathcal{C}) + (n - |\mathcal{I}|) - 1 \ge 2^{2t} - 2^{t+1} + 1 = i_h$.
Next, we argue by contradiction to verify $\tau(\mathcal{C}) = i_h$; suppose $\tau(\mathcal{C}) > i_h$. From Lemmas 2 and 3 we have $i_h, i_{h'} \in \mathcal{F}$, and we now enumerate the elements of the frozen set $\mathcal{F}$. Since $i_h \in \mathcal{F}$, for any $p \in \{f \in \mathcal{M}_m \mid \deg(f) \ge t+1\} \cup \{g \in \mathcal{M}_m \mid \deg(g) = t, \ x_0 \notin ind(g)\}$ we have $h \preceq p$ and thus $i_p \in \mathcal{F}$; this implies
$$|\mathcal{F}| \ge \sum_{k=t+1}^{2t}\binom{2t}{k} + \binom{2t-1}{t}. \quad (9)$$
Then, since $i_{h'} \in \mathcal{F}$, for any $p \in \{f \in \mathcal{M}_m \mid \deg(f) = t, \ x_0 \in ind(f), \ x_1 \notin ind(f)\}$ we have $h' \preceq p$. The number of monomials satisfying these conditions is $\binom{2t-2}{t-1}$, and they are disjoint from those already counted.
Therefore, we have
$$|\mathcal{F}| \ge \sum_{k=t+1}^{2t}\binom{2t}{k} + \binom{2t-1}{t} + \binom{2t-2}{t-1} = 2^{2t-1} + \binom{2t-2}{t-1}, \quad (10)$$
and then
$$|\mathcal{I}| = n - |\mathcal{F}| \le 2^{2t-1} - \binom{2t-2}{t-1}. \quad (11)$$
If $t \ge 5$, then $\binom{2t-2}{t-1} > 2^{t+1} - 2$ (e.g., $\binom{8}{4} = 70 > 62$), so we reach the contradiction
$$|\mathcal{I}| < 2^{2t-1} - 2^{t+1} + 2 \le MF(\mathcal{C}) \le |\mathcal{I}|. \quad (12)$$
Hence $\tau(\mathcal{C}) = i_h$, and Lemma 4 finishes the proof. □
Finally, Lemmas 1 and 5 together yield Theorem 2; the complexity bound of the ML decoding algorithm for polar codes then follows immediately.
Next, we adopt the PW method [12] to obtain the information sets of $(n,k)$ polar codes with $k = \frac{n}{4}$ and $k = \frac{n}{2}$, respectively, and then compare their mixing factors with the largest MF values from (5) at various lengths, as shown in Table 3.
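A sketch of the PW ($\beta$-expansion) construction and the resulting MF values follows. The exponent base $\beta = 2^{1/4}$ is the one proposed in [12]; note that the exact information sets, and hence the MF values of Table 3, may depend on bit-ordering and tie-breaking conventions not spelled out here, so this is an illustration rather than a reproduction of the table.

```python
def pw_reliability(n, beta=2 ** 0.25):
    """Polarization-weight metric of [12]: w(i) = sum_k i_k * beta^k,
    where i_k are the binary digits of subchannel index i."""
    m = n.bit_length() - 1
    return [sum(((i >> k) & 1) * beta ** k for k in range(m)) for i in range(n)]

def pw_info_set(n, k):
    """Pick the k indices of largest PW reliability as the information set."""
    w = pw_reliability(n)
    return set(sorted(range(n), key=lambda i: w[i], reverse=True)[:k])

def mf(n, info):
    """Mixing factor: information indices below the last frozen index."""
    tau = max(set(range(n)) - info)
    return sum(1 for i in info if i < tau)

for t in (2, 3, 4):
    n = 4 ** t
    print(n, mf(n, pw_info_set(n, n // 4)), mf(n, pw_info_set(n, n // 2)))
```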

4. Decoding Beyond the Last Frozen Bit

Fazeli et al. presented a hybrid algorithm combining SCL with $L = 2^{MF}$ and nearest-coset decoding in [6], which coincides with ML performance. Herein, we propose a new decoding hard-decision rule after the last frozen bit over BMS channels, which can also come close to ML performance.
Definition 3 
(Polar Coset). Let $\mathcal{C}$ be a polar code with length $n = 2^m$. Given a binary information sequence $u_0^{i-1}$, where $i \in \{0, 1, \ldots, n-1\}$, we denote by $C_n^{(i)}(u_0^{i-1})$ the polar coset
$$C_n^{(i)}(u_0^{i-1}) = \sum_{j=0}^{i-1} u_j G_{2^m}[j] + \mathrm{span}\big(G_{2^m}[i:n-1]\big), \quad (13)$$
and the polar coset is the space spanned by all rows of $G_{2^m}$ if $i = 0$. Denote by $C_n(u_0^{i-1} \mid u_i = 0)$ and $C_n(u_0^{i-1} \mid u_i = 1)$ the zero-coset and one-coset, respectively:
$$C_n(u_0^{i-1} \mid u_i = 0) = \sum_{j=0}^{i-1} u_j G_{2^m}[j] + \mathrm{span}\big(G_{2^m}[i+1:n-1]\big), \quad (14)$$
$$C_n(u_0^{i-1} \mid u_i = 1) = G_{2^m}[i] + C_n(u_0^{i-1} \mid u_i = 0), \quad (15)$$
where $G_{2^m}[i]$ represents the $(i+1)$-th row of $G_{2^m}$, and $G_{2^m}[i:j]$ represents the $(i+1)$-th through the $(j+1)$-th rows of $G_{2^m}$.
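The cosets of Definition 3 can be enumerated by brute force for small $n$; this sketch builds $C_n^{(i)}(u_0^{i-1})$ and the zero-/one-cosets directly from the rows of $G_{2^m}$, and is exponential in $n - i$.

```python
import numpy as np
from itertools import product

def kron_power(m):
    """G_2^{(x)m} built by repeated Kronecker products."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)
    return G

def polar_coset(u_prefix, m, ui=None):
    """Enumerate the polar coset C_n^{(i)}(u_0^{i-1}) (rows i..n-1), or the
    zero-/one-coset (rows i+1..n-1, offset by ui * row i) when ui is given."""
    n, i = 1 << m, len(u_prefix)
    G = kron_power(m)
    offset = np.zeros(n, dtype=np.uint8)
    for j, u in enumerate(u_prefix):          # fixed prefix contribution
        if u:
            offset ^= G[j]
    rows = list(G[i:]) if ui is None else list(G[i + 1:])
    if ui == 1:
        offset = offset ^ G[i]
    coset = set()
    for coeffs in product((0, 1), repeat=len(rows)):
        v = offset.copy()
        for c, r in zip(coeffs, rows):
            if c:
                v ^= r
        coset.add(tuple(int(b) for b in v))
    return coset
```

As a check, the zero- and one-cosets partition $C_n^{(i)}(u_0^{i-1})$, and for $i = 0$ the coset is the whole space.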
According to the definitions in [1], the identity permutation is denoted by $\pi_0$. For a symmetric channel $W$, there exists a permutation $\pi_1$ on the output alphabet $\mathcal{Y}$ such that $\pi_1 = \pi_1^{-1}$ and $W(y \mid 1) = W(\pi_1(y) \mid 0)$ for all $y \in \mathcal{Y}$. Let $x_0^{n-1} \cdot y_0^{n-1} = (x_0 \cdot y_0, x_1 \cdot y_1, \ldots, x_{n-1} \cdot y_{n-1})$, where $x_i \cdot y_i$ denotes $\pi_{x_i}(y_i)$.
Proposition 1. 
Over a BMS channel $W$, let $y_0^{n-1}$ and $\hat{u}_0^{i-1}$ be, respectively, the received vector and the decoded vector in SC decoding. Then the transition probabilities of the subchannels $W_n^{(i)}$ satisfy
$$W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = q) = W_n^{(i)}(\tilde{y}_0^{n-1}, 0_0^{i-1} \mid u_i = q), \quad (16)$$
where $q \in \{0,1\}$, $i \in \{0, 1, \ldots, n-1\}$, and $\tilde{y}_0^{n-1} = \big(\hat{u}_0^{i-1} G_{2^m}[0:i-1]\big) \cdot y_0^{n-1}$.
Proof. 
Applying the symmetry property of Proposition 13 in [1], for any $a_0^{n-1} \in \{0,1\}^n$ we have
$$W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i) = W_n^{(i)}\big((a_0^{n-1} G_{2^m}) \cdot y_0^{n-1}, \ \hat{u}_0^{i-1} \oplus a_0^{i-1} \mid u_i \oplus a_i\big). \quad (17)$$
Let $a_0^{i-1} = \hat{u}_0^{i-1}$, $a_i = 0$, $a_{i+1}^{n-1} = 0$, and $\tilde{y}_0^{n-1} = (a_0^{n-1} G_{2^m}) \cdot y_0^{n-1}$; then $\hat{u}_0^{i-1} \oplus a_0^{i-1} = 0$ and $\tilde{y}_0^{n-1} = \big(\hat{u}_0^{i-1} G_{2^m}[0:i-1]\big) \cdot y_0^{n-1}$. Hence $W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = q) = W_n^{(i)}(\tilde{y}_0^{n-1}, 0_0^{i-1} \mid u_i = q)$ for $q \in \{0,1\}$. □
Due to the relationship between the transition probabilities of the subchannels and the synthesized channels in the polarization process,
$$W_n^{(i)}(y_0^{n-1}, u_0^{i-1} \mid u_i) = \sum_{u_{i+1}^{n-1} \in \{0,1\}^{n-i-1}} \frac{1}{2^{n-1}} W^n\big(y_0^{n-1} \mid u_0^{n-1} G_{2^m}\big), \quad (18)$$
we can rewrite the decision condition of (1) as (19), using Proposition 1 and (18):
$$\frac{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 0)}{W_n^{(i)}(y_0^{n-1}, \hat{u}_0^{i-1} \mid u_i = 1)} = \frac{W_n^{(i)}(\tilde{y}_0^{n-1}, 0_0^{i-1} \mid u_i = 0)}{W_n^{(i)}(\tilde{y}_0^{n-1}, 0_0^{i-1} \mid u_i = 1)} = \frac{\sum_{x_0^{n-1} \in C_n(0_0^{i-1} \mid u_i = 0)} W^n(\tilde{y}_0^{n-1} \mid x_0^{n-1})}{\sum_{x_0^{n-1} \in C_n(0_0^{i-1} \mid u_i = 1)} W^n(\tilde{y}_0^{n-1} \mid x_0^{n-1})}. \quad (19)$$
In [6], to achieve the ML bound, the hard-decision rule after the last frozen bit under SC decoding is replaced by one based on Hamming distance rather than channel transition probabilities. Accordingly, we replace the original rule of polar codes by comparing the minimum Hamming distances from $\tilde{y}_0^{n-1}$ to the two cosets $C_n(0_0^{i-1} \mid u_i = 0)$ and $C_n(0_0^{i-1} \mid u_i = 1)$; that is, the likelihood-ratio test in (19) is replaced by the comparison
$$\min_{x_0^{n-1} \in C_n(0_0^{i-1} \mid u_i = 0)} d(\tilde{y}_0^{n-1}, x_0^{n-1}) \;\gtrless\; \min_{x_0^{n-1} \in C_n(0_0^{i-1} \mid u_i = 1)} d(\tilde{y}_0^{n-1}, x_0^{n-1}), \quad (20)$$
deciding $\hat{u}_i = 0$ when the left-hand side is no larger than the right-hand side and $\hat{u}_i = 1$ otherwise, where $d(\cdot, \cdot)$ denotes the Hamming distance.
Under SCL decoding, to ensure performance close to the ML algorithm, we preserve all possible paths with $L = 2^{MF}$ up to the last frozen bit of the polar code, and then employ (20) to compare the distances to the two cosets when decoding the remaining information bits.
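The distance rule (20) can be sketched by brute-force enumeration of the two cosets; this is exponential in $n - i$ and serves only to illustrate the decision itself, not the practical recursive algorithm that the paper leaves open.

```python
import numpy as np
from itertools import product

def kron_power(m):
    """G_2^{(x)m} built by repeated Kronecker products."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, G2)
    return G

def min_dist_to_coset(y, i, b, m):
    """Minimum Hamming distance from y to C_n(0_0^{i-1} | u_i = b):
    brute force over span of rows i+1..n-1 of G_{2^m}, offset by b*row i."""
    G, n = kron_power(m), 1 << m
    y = np.asarray(y, dtype=np.uint8)
    best = n + 1
    for coeffs in product((0, 1), repeat=n - i - 1):
        x = (b * G[i]).astype(np.uint8)
        for c, r in zip(coeffs, G[i + 1:]):
            if c:
                x = x ^ r
        best = min(best, int(np.sum(x != y)))
    return best

def decision_beyond_last_frozen(y_tilde, i, m):
    """Hard-decision rule (20): pick the bit whose coset lies closer."""
    d0 = min_dist_to_coset(y_tilde, i, 0, m)
    d1 = min_dist_to_coset(y_tilde, i, 1, m)
    return 0 if d0 <= d1 else 1
```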
In Section 3, we completely characterized the upper bound of the mixing factor of polar codes with $R \le \frac{1}{2}$, attained by RM-related codes at $R = \frac{1}{2}$; herein, we consider the situation $R < \frac{1}{2}$, that is, $\tau(\mathcal{C}) \ge \frac{n}{2}$. By the structure of $G_{2^m} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{\otimes m}$ and the subcode duality $C_n^{(n-i)}(0_0^{n-i-1}) = [C_n^{(i)}(0_0^{i-1})]^{\perp}$ ($\frac{n}{2} \le i \le n-1$) of Theorem 8 in [17], we deduce the following expression for the polar coset $C_n^{(i)}(0_0^{i-1})$:
$$C_n^{(i)}(0_0^{i-1}) = \begin{cases} \big[C_n^{(n-i)}(0_0^{n-i-1})\big]^{\perp}, & 0 \le i \le \frac{n}{2}-1, \\[4pt] \bigcup_{u_i \in \{0,1\}} \big\{(c, c) \mid c \in C_{n/2}^{(i-\frac{n}{2}+1)}\big(0_0^{i-\frac{n}{2}-1}, u_{i-\frac{n}{2}} = u_i\big)\big\}, & \frac{n}{2} \le i \le n-1, \end{cases} \quad (21)$$
where $\perp$ denotes the dual code. Therefore, we can recursively compute the polar coset $C_n^{(i)}(0_0^{i-1})$ using (21) in certain cases, so as to obtain the decoding results after the last frozen bit from (20). In fact, we merely provide a simple decision idea for decoding beyond the last frozen bit; unfortunately, we do not yet give an appropriate and practical algorithm to recursively compute the minimum Hamming distance.

5. Conclusions

In this paper, aiming to prove the conjecture in [8], we show the upper bound of the mixing factor of decreasing monomial codes with code length $n = 2^m$ and code rate $R \le \frac{1}{2}$ when $m$ is an even number. Meanwhile, we certify that this bound is reachable: two situations attain the largest value for $t = 1$ and $t = 2$, and only one case does for $t \ge 3$. Further, we propose a new decoding hard-decision rule, based on the Hamming distance between cosets and the given vector, for the bits beyond the last frozen bit of polar codes over BMS channels; however, we do not yet give an effective and complete algorithm to calculate the minimum distance, which provides the purpose and motivation for continuing this work.

Author Contributions

Conceptualization, K.W. and W.Y.; methodology, K.W.; software, K.W.; validation, K.W., X.J. and W.Y.; formal analysis, K.W.; investigation, K.W.; resources, K.W. and W.Y.; data curation, K.W.; writing—original draft preparation, K.W.; writing—review and editing, K.W. and X.J.; visualization, K.W.; supervision, K.W.; project administration, K.W., X.J. and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BMS: Binary-input Memoryless Symmetric
SCL: Successive Cancellation List
ML: Maximum Likelihood
MF: Mixing Factor
SC: Successive Cancellation
CRC: Cyclic Redundancy Check
PC: Parity Check
PAC: Polarization-Adjusted Convolutional
BEC: Binary Erasure Channel
MAP: Maximum A Posteriori
BI-AWGN: Binary-Input Additive White Gaussian Noise
RM: Reed–Muller
WEF: Weight Enumeration Function
DE: Density Evolution
GA: Gaussian Approximation
PW: Polarization Weight
BP: Belief Propagation
SCS: Successive Cancellation Stack

References

1. Arıkan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
2. Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226.
3. Wang, T.; Qu, D.; Jiang, T. Parity-Check-Concatenated Polar Codes. IEEE Commun. Lett. 2016, 20, 2342–2345.
4. Arıkan, E. From Sequential Decoding to Channel Polarization and Back Again. arXiv 2019, arXiv:1908.09594.
5. Hashemi, S.A.; Mondelli, M.; Hassani, S.H.; Urbanke, R.; Gross, W.J. Partitioned List Decoding of Polar Codes: Analysis and Improvement of Finite Length Performance. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–7.
6. Fazeli, A.; Vardy, A.; Yao, H. List Decoding of Polar Codes: How Large Should the List Be to Achieve ML Decoding? In Proceedings of the 2021 IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia, 12–20 July 2021; pp. 1594–1599.
7. Yao, H.; Fazeli, A.; Vardy, A. List Decoding of Arıkan’s PAC Codes. Entropy 2021, 23, 841.
8. Yao, H.; Fazeli, A.; Vardy, A. A Deterministic Algorithm for Computing the Weight Distribution of Polar Code. IEEE Trans. Inf. Theory 2023.
9. Mori, R.; Tanaka, T. Performance of Polar Codes with the Construction using Density Evolution. IEEE Commun. Lett. 2009, 13, 519–521.
10. Tal, I.; Vardy, A. How to Construct Polar Codes. IEEE Trans. Inf. Theory 2013, 59, 6562–6582.
11. Trifonov, P. Efficient Design and Decoding of Polar Codes. IEEE Trans. Commun. 2012, 60, 3221–3227.
12. He, G.; Belfiore, J.C.; Land, I.; Yang, G.; Liu, X.; Chen, Y.; Li, R.; Wang, J.; Ge, Y.; Zhang, R.; et al. β-Expansion: A Theoretical Framework for Fast and Recursive Construction of Polar Codes. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6.
13. Niu, K.; Chen, K. CRC-Aided Decoding of Polar Codes. IEEE Commun. Lett. 2012, 16, 1668–1671.
14. Hussami, N.; Korada, S.B.; Urbanke, R. Performance of Polar Codes for Channel and Source Coding. In Proceedings of the 2009 IEEE International Symposium on Information Theory (ISIT), Seoul, Republic of Korea, 28 June–3 July 2009; pp. 1488–1492.
15. Niu, K.; Chen, K. Stack Decoding of Polar Codes. Electron. Lett. 2012, 48, 695–697.
16. Bardet, M.; Dragoi, V.; Otmani, A.; Tillich, J.P. Algebraic Properties of Polar Codes from a New Polynomial Formalism. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 230–234.
17. Niu, K.; Li, Y.; Wu, W. Polar Codes: Analysis and Construction Based on Polar Spectrum. arXiv 2019, arXiv:1908.05889.
Table 1. Some of the largest mixing factor values of decreasing monomial codes $\mathcal{C}$ with length $n = 2^m$, $m = 2t$.

t    n = 2^{2t}    max MF(C)
1    4             0
2    16            2
3    64            18
4    256           98
5    1024          450
6    4096          1922
7    16,384        7938
Table 2. Row indices of $G_2^{\otimes m}$ and the corresponding monomials.

$i_f$ | $bin(i_f)$ | $f$ | $\deg(f)$
$0$ | $00\cdots00$ | $x_0 x_1 \cdots x_{2t-1}$ | $2t$
$1$ | $00\cdots01$ | $x_1 x_2 \cdots x_{2t-1}$ | $2t-1$
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
$2^{2t} - 2^{t+1}$ | $\underbrace{1\cdots1}_{t-1}\underbrace{0\cdots00}_{t+1}$ | $x_0 x_1 \cdots x_t$ | $t+1$
$2^{2t} - 2^{t+1} + 1$ | $\underbrace{1\cdots1}_{t-1}\underbrace{0\cdots01}_{t+1}$ | $h = x_1 x_2 \cdots x_t$ | $t$
$2^{2t} - 2^{t+1} + 2$ | $\underbrace{1\cdots1}_{t-1}\underbrace{0\cdots10}_{t+1}$ | $h' = x_0 x_2 \cdots x_t$ | $t$
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
$2^{2t} - 1$ | $\underbrace{1\cdots1}_{t-1}\underbrace{1\cdots11}_{t+1}$ | $1$ | $0$
Table 3. The MF values of polar codes $\mathcal{C}$ based on the PW construction with length $n = 2^m$, $m = 2t$.

t    n       max MF(C)    MF(C) with R = 1/4    MF(C) with R = 1/2
1    4       0            0                     0
2    16      2            1                     2
3    64      18           9                     17
4    256     98           37                    73
5    1024    450          193                   385
Wei, K.; Jin, X.; Yang, W. A Note on the Mixing Factor of Polar Codes. Entropy 2023, 25, 1498. https://doi.org/10.3390/e25111498