Entropies of weighted sums in cyclic groups and an application to polar codes

In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for $\mathbb{Z}/3\mathbb{Z}$, the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (MSTD) sets in additive combinatorics. We also investigate polar codes for $q$-ary input channels using non-canonical kernels to construct the generator matrix, and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction.


Introduction
For a discrete random variable X supported on a countable set A, its Shannon entropy H(X) is defined by

H(X) = - sum_{x in A} P(X = x) log P(X = x). (1)

The Shannon entropy can be thought of as the logarithm of the effective cardinality of the support of X; the justification for this interpretation comes from the fact that when the alphabet A is finite, H(X) <= log |A|, with equality if and only if X is uniformly distributed on A. This suggests an informal parallelism between entropy inequalities and set cardinality inequalities that has been extensively explored for projections of subsets of Cartesian product sets (see, e.g., [20] for a review of these and their applications to combinatorics), and more recently for sums of subsets of a group that are of great interest in the area of additive combinatorics [38]. For two finite subsets A, B of an additive group, the sumset A + B and difference set A - B are defined by

A + B = {a + b : a in A, b in B},   A - B = {a - b : a in A, b in B}.

In the trivial bound max{|A|, |B|} <= |A ± B| <= |A||B|, replacing the sets A, B by independent discrete random variables X, Y and replacing the log-cardinality of each set by the Shannon entropy, one obtains the entropy analogue

max{H(X), H(Y)} <= H(X ± Y) <= H(X) + H(Y). (2)

This is, of course, an analogy but not a proof; however, the inequality (2) can be seen to be true from elementary properties of entropy. First identified by Ruzsa [33], this connection between entropy inequalities and cardinality inequalities in additive combinatorics has been studied extensively in the last few years. Useful tools in additive combinatorics have been developed in the entropy setting, such as Plünnecke-Ruzsa inequalities by Madiman, Marcus and Tetali [19], and Freiman-Ruzsa and Balog-Szemerédi-Gowers theorems by Tao [37].
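These bounds are easy to check numerically. The following short Python sketch (our own illustration, not part of the original development; the three-atom distribution is arbitrary) computes H(X), H(X + Y) and H(X - Y) for i.i.d. X, Y and verifies the entropy analogue of the trivial cardinality bound:

```python
import itertools
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {value: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def op_dist(dist_x, dist_y, op):
    """Distribution of op(X, Y) for independent X ~ dist_x, Y ~ dist_y."""
    out = {}
    for (x, px), (y, py) in itertools.product(dist_x.items(), dist_y.items()):
        z = op(x, y)
        out[z] = out.get(z, 0.0) + px * py
    return out

# X, Y i.i.d. on {0, 1, 3} with (arbitrary) probabilities (0.5, 0.3, 0.2)
p = {0: 0.5, 1: 0.3, 3: 0.2}
hx = entropy(p)
hsum = entropy(op_dist(p, p, lambda a, b: a + b))
hdiff = entropy(op_dist(p, p, lambda a, b: a - b))

# trivial bounds: max{H(X), H(Y)} <= H(X ± Y) <= H(X) + H(Y)
assert hx <= hsum <= 2 * hx
assert hx <= hdiff <= 2 * hx
```

The same helper functions are reused in the later numerical checks.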
Much more work has also recently emerged on related topics, such as efforts towards an entropy version of the Cauchy-Davenport inequality [9,13,39,40], an entropy analogue of the doubling-difference inequality [17], extensions from discrete groups to locally compact abelian groups [14,18], and applications of additive combinatorics in information theory [15,16,6,8,41].
In an abelian group, since addition is commutative while subtraction is not, two generic elements generate one sum but two differences. Likely motivated by this observation, the following conjecture is contained in H. T. Croft's Research Problems, 1967: "Let A = {a_1, a_2, . . . , a_N} be a finite set of integers, and define A + A = {a_i + a_j : 1 <= i, j <= N} and A - A = {a_i - a_j : 1 <= i, j <= N}. Prove that A - A always has more members than A + A, unless A is symmetric about 0." However, that is not always the case. In 1969, I. J. Marica [21] showed that the conjecture (which he attributed to J. H. Conway) is false by exhibiting the set A = {1, 2, 3, 5, 8, 9, 13, 15, 16}, for which A + A has 30 elements and A - A has 29 elements. According to Nathanson [25], Conway himself had already found the MSTD set {0, 2, 3, 4, 7, 11, 12, 14} in the 1960's, thus disproving the conjecture attributed to him. Subsequently, S. Stein [36] showed that one can construct sets A for which the ratio |A - A|/|A + A| is as close to 0 or as large as we please; apart from his own proof, he observed that such constructions also follow by adapting arguments in an earlier work of S. Piccard [28] that focused on the Lebesgue measure of A + A and A - A for subsets A of R. A stream of recent papers aims to quantify how rare or frequent MSTD sets are (see, e.g., [22, 11] for work on the integers, and [43] for finite abelian groups more generally), or tries to provide denser constructions of infinite families of MSTD sets (see, e.g., [24, 42]); however, these are not directions we will explore in this note.
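Both classical examples can be verified in a few lines (an illustrative check of ours; the cardinalities asserted are the ones quoted above):

```python
def sum_and_diff_sizes(A):
    """Return (|A + A|, |A - A|) for a finite set of integers A."""
    plus = {a + b for a in A for b in A}
    minus = {a - b for a in A for b in A}
    return len(plus), len(minus)

conway = {0, 2, 3, 4, 7, 11, 12, 14}     # Conway's MSTD set (1960s)
marica = {1, 2, 3, 5, 8, 9, 13, 15, 16}  # Marica's set (1969)

s_c, d_c = sum_and_diff_sizes(conway)
s_m, d_m = sum_and_diff_sizes(marica)
assert (s_c, d_c) == (26, 25)  # more sums than differences
assert (s_m, d_m) == (30, 29)  # |A + A| = 30 > 29 = |A - A|
```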
Since convolutions of uniforms are always distributed on the sumset of the supports, but are typically not uniform distributions, it is not immediately obvious from the Conway and Marica constructions whether there exist i.i.d. random variables X and Y such that H(X + Y) > H(X - Y). The purpose of this note is to explore this and related questions. For example, one natural related question is to describe the coefficient λ in {1, . . . , |G| - 1} that maximizes H(X + λY) for X, Y drawn i.i.d. from some distribution on a cyclic group G; restricting the choice of coefficients to {+1, -1} would correspond to the sum-difference question. This question is motivated by applications to polar codes, a very promising class of codes that has attracted much recent attention in information and coding theory. Specifically, we show that over F_q, the "spread" of the polar martingale can be significantly enlarged by using optimized kernels rather than the original kernel [1 0; 1 1]. In some cases, this leads to significant improvements in the error probability of polar codes, even at low block lengths such as 1024. We also consider additive noise channels, and show that the improvement is particularly significant when the noise distribution is concentrated on "small" support.
This note is organized as follows. In Section 2.1, we show that entropies of sums (of i.i.d. random variables) are never greater than entropies of differences for random variables taking values in the cyclic group Z/3Z; however, this fails for larger groups, and in particular we show that there always exist distributions on finite cyclic groups of order at least 21 such that H(X + Y) > H(X - Y). In Section 2.2 and Section 2.3, we explore more quantitative questions; that is, we ask not only what the ordering of H(X + Y) and H(X - Y) may be, but how different these can be in either direction. The finding here is that on Z, these can differ by arbitrarily large amounts additively, but not too much multiplicatively. Finally, in Section 3, we explore the question about entropies of weighted sums mentioned at the end of the previous paragraph, and describe the applications to polar codes as well.
2 Comparing entropies of sums and differences

2.1 Basic examples
We start by considering the smallest group in which the sum and difference are distinct, namely Z/3Z. Let p = (p_0, p_1, p_2) be a probability distribution on Z/3Z, and let H(p) be its Shannon entropy. We denote by ||p - U||_2 the Euclidean distance between p and the uniform distribution U = (1/3, 1/3, 1/3). For any fixed 0 <= t <= log 3, the following lemma verifies the "triangular" shape of the entropy circle H(p) = t.

Lemma 1. Let p be a probability distribution on the entropy circle H(p) = t such that p_0 >= p_1 >= p_2. Then the distance ||p - U||_2 is an increasing function of p_0.
Proof. Using identities (3), (5) and (6), one obtains an expression for ||p - U||_2 along the entropy circle whose monotonicity in p_0 follows from the assumption p_0 >= p_1 >= p_2 and the concavity of the logarithmic function.

Now we can show that the entropy of the sum of two i.i.d. random variables taking values in Z/3Z can never exceed the entropy of their difference. We use basic facts about the Fourier transform on finite groups, which can be found, e.g., in [35].

Theorem 1. Let X, Y be i.i.d. random variables taking values in Z/3Z. Then H(X + Y) <= H(X - Y).
Proof. Let p = (p_0, p_1, p_2) be the distribution of X. Since Y is an independent copy of X, the random variable -Y has distribution q = (p_0, p_2, p_1). The distributions of X + Y and X - Y can then be written as p * p and p * q, respectively, where '*' is the convolution operation. Let p^ = (p^_0, p^_1, p^_2) be the Fourier transform of p, with Fourier coefficients defined by

p^_j = sum_{k=0}^{2} p_k e^{-i 2π j k / 3},   j = 0, 1, 2.
One basic property of the Fourier transform asserts that

(p * p)^_j = (p^_j)^2. (8)

We also have q^_j = conj(p^_j), where conj(p^_j) is the complex conjugate of p^_j, so that

(p * q)^_j = |p^_j|^2, (9)

which holds for general distributions q obtained by reflecting p. The Parseval-Plancherel identity says

sum_j |p^_j|^2 = 3 sum_k p_k^2. (10)

Using the identities (8), (9) and (10), we have

||p * p - U||_2 = ||p * q - U||_2,

since the Fourier coefficients of p * p and p * q have the same moduli. It is not hard to see that X - Y is symmetric, with (p * q)_0 >= (p * q)_1 = (p * q)_2. Using Lemma 1, we see that the entropy circle passing through p * q lies inside the Euclidean circle centered at U with radius ||p * q - U||_2. Thus the distribution p * p lies on an entropy circle with entropy not greater than H(p * q), which gives the desired statement.
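Theorem 1 can also be probed numerically. The sketch below (our own illustration) samples random distributions on Z/3Z, forms the distributions of X + Y and X - Y by cyclic convolution, and checks the inequality:

```python
import random
from math import log

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def cyclic_conv(p, q, m):
    """Distribution of X + Y mod m for independent X ~ p, Y ~ q on Z/mZ."""
    return [sum(p[j] * q[(k - j) % m] for j in range(m)) for k in range(m)]

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    p = [x / sum(w) for x in w]
    q = [p[0], p[2], p[1]]                  # distribution of -Y
    h_sum = entropy(cyclic_conv(p, p, 3))   # H(X + Y)
    h_diff = entropy(cyclic_conv(p, q, 3))  # H(X - Y)
    assert h_sum <= h_diff + 1e-12          # Theorem 1
```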
The property in Theorem 1 fails to hold for larger cyclic groups; we demonstrate this by discussing three specific examples of i.i.d. random variables X, Y such that the entropy of their sum is larger than the entropy of their difference.

2. The second example is based on a set that is not an MSTD set; it nevertheless yields H(X + Y) > H(X - Y) (see Remark 1 below).

3. The group Z/12Z is the smallest cyclic group that contains an MSTD set. Let A = {0, 1, 2, 4, 5, 9}. It is easy to check that A is an MSTD set, since A + A = Z/12Z and A - A = (Z/12Z) \ {6}. We let X, Y be independent random variables uniformly distributed on A. Then we have H(X + Y) > H(X - Y).

Remark 1. Applying linear transformations, we can get infinitely many MSTD subsets of Z from Conway's MSTD set. Correspondingly, one can get as many "MSTD" random variables as one pleases. Thus MSTD sets are useful in the construction of "MSTD" random variables; however, we can also construct "MSTD" random variables supported on non-MSTD sets, as shown by the second example.
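The claims about the third example are easy to verify directly (an illustrative Python check of ours; entropies are in nats):

```python
from math import log

A = [0, 1, 2, 4, 5, 9]   # MSTD set in Z/12Z
m = 12

plus = sorted({(a + b) % m for a in A for b in A})
minus = sorted({(a - b) % m for a in A for b in A})
assert plus == list(range(m))                        # A + A = Z/12Z
assert minus == [0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11]  # A - A misses 6

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def dist(op):
    """Distribution of op(X, Y) mod m for X, Y i.i.d. uniform on A."""
    counts = [0] * m
    for a in A:
        for b in A:
            counts[op(a, b) % m] += 1
    return [c / len(A) ** 2 for c in counts]

h_sum = entropy(dist(lambda a, b: a + b))
h_diff = entropy(dist(lambda a, b: a - b))
assert h_sum > h_diff   # "MSTD" random variables: H(X+Y) > H(X-Y)
```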
Remark 2. Hegarty [12] proved that there is no MSTD set in Z of size 7 and, up to linear transformations, Conway's set is the unique MSTD set in Z of size 8. We do not know the smallest support of "MSTD" random variables taking values in Z, although 8 is clearly an upper bound.
Remark 3. We also do not know the smallest m such that there exist "MSTD" random variables taking values in Z/mZ; however, the third example shows that this m cannot be greater than 12.

2.2 Achievable differences
We first briefly introduce the construction of Stein [36] of finite subsets A_k of Z such that the ratio |A_k - A_k|/|A_k + A_k| can be made arbitrarily large or small when k is large. Using this construction, we will give an alternate proof of the result of Lapidoth and Pete [15] (Theorem 2), which asserts that H(X - Y) can exceed H(X + Y) by an arbitrarily large amount. Let A, B be two finite subsets of Z, and suppose that the gap between any two consecutive elements of B is sufficiently large. For any b in B, the set b + A represents a relatively small fluctuation around b, and large gaps between the elements of B imply that (b + A) and (b' + A) are disjoint for distinct b, b' in B; hence |A + B| = |A||B|. For m in Z large, this argument implies that |A + m · A| = |A|^2, where m · A := {ma : a in A}. Moreover, the following equations hold simultaneously for sufficiently large m_0 in Z (which depends on A, A - A and A + A):

|A + m_0 · A| = |A|^2,  |(A + m_0 · A) + (A + m_0 · A)| = |A + A|^2,  |(A + m_0 · A) - (A + m_0 · A)| = |A - A|^2.

Repeating this argument, we can get a sequence of sets A_k, defined by

A_k = A_{k-1} + m_{k-1} · A_{k-1}, (11)

where A_0 = A and m_{k-1} in Z is sufficiently large, with the following properties:

|A_k| = |A|^{2^k},  |A_k + A_k| = |A + A|^{2^k},  |A_k - A_k| = |A - A|^{2^k}. (12)

Now we are ready to reprove the result of Lapidoth and Pete [15].
Proof. Recall the following basic property of Shannon entropy: a random variable Z supported on a finite set S satisfies

log(1 / max_z P(Z = z)) <= H(Z) <= log |S|. (13)

We let X_k, Y_k be independent random variables uniformly distributed on the set A_k obtained by the iteration equation (11). Using the right-hand side of (13) and the properties given by (12), we have

H(X_k + Y_k) <= log |A_k + A_k| = 2^k log |A + A|. (14)

Since X_k, Y_k are independent and uniform on A_k, for all x in A_k - A_k we have 1/|A_k|^2 <= P(X_k - Y_k = x) <= 1/|A_k|. Note that -t log t is increasing over (0, 1/e); when k is large enough, this yields

H(X_k - Y_k) >= |A_k - A_k| · (2 / |A_k|^2) log |A_k|. (15)

For any k in Z_+, we can always find a set A of Z with k^2 elements (e.g., a Sidon set) such that the set A - A achieves the maximal possible cardinality,

|A - A| = |A|^2 - |A| + 1. (16)

Combining (14), (16) and the trivial bound |A + A| <= |A|(|A| + 1)/2, we obtain for k large an upper bound on H(X_k + Y_k); combining (15) and (16), we obtain a lower bound on H(X_k - Y_k) that exceeds it by an amount growing with k. Therefore H(X_k - Y_k) - H(X_k + Y_k) can be made as large as desired, and the statement follows since k can be arbitrarily large.
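Stein's iteration (11) and the squaring properties (12) can be checked directly on a small seed set (our own sketch; the seed {0, 1, 3} and the gap choice m = 10(max A + 1) are illustrative choices, chosen only to make the gaps "sufficiently large"):

```python
def sizes(A):
    """Return (|A + A|, |A - A|) for a finite set of integers A."""
    return (len({a + b for a in A for b in A}),
            len({a - b for a in A for b in A}))

A0 = {0, 1, 3}         # arbitrary seed set
s0, d0 = sizes(A0)     # here |A0 + A0| = 6, |A0 - A0| = 7
A = A0
for k in (1, 2):
    m = 10 * (max(A) + 1)                  # a sufficiently large gap
    A = {a + m * b for a in A for b in A}  # A_k = A_{k-1} + m * A_{k-1}
    s, d = sizes(A)
    assert len(A) == len(A0) ** (2 ** k)
    assert s == s0 ** (2 ** k) and d == d0 ** (2 ** k)   # property (12)
```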
We observe that the following complementary result (Theorem 3), in which the roles of the sum and the difference are interchanged, is also true.
Remark 4. The previous argument cannot be used to prove this result: if we carry out the same argument, the lower bound on H(X_k + Y_k) analogous to (15) will be very poor. The reason is that |A_k + A_k| / |A_k|^2 = (|A + A| / |A|^2)^{2^k} → 0 exponentially fast. Both Theorems 2 and 3 can be proved using a probabilistic construction of Ruzsa [32] on the existence of large additive sets A with |A - A| very close to the maximal value |A|^2, but |A + A| <= |A|^{2-c} for some explicit absolute constant c > 0; and similarly with the roles of A - A and A + A reversed.
In fact, we have the following stronger result.
Theorem 4. For any t in R, there exist i.i.d. Z-valued random variables X, Y with finite entropy such that H(X + Y) - H(X - Y) = t.

Proof. For i.i.d. random variables X, Y supported on a fixed finite set of n integers, H(X + Y) - H(X - Y) is a continuous function of the probability mass function of X, which consists of n real variables. We can assume that n is large enough if necessary. From the discussion in Section 2.1, we know that this function can take both positive and negative values. (For instance, Theorem 1 implies that a suitable binary distribution gives a negative difference, and the uniform distribution on Conway's MSTD set yields a positive difference.) Since the function is continuous, the intermediate value theorem implies that its range must contain an open interval (a, b) with a < 0 < b. Let X_1, . . . , X_k be k independent copies of X, define X' = (X_1, . . . , X_k), and let Y' be an independent copy of X'. Then we have

H(X' + Y') - H(X' - Y') = k (H(X + Y) - H(X - Y)),

so the range of H(X' + Y') - H(X' - Y') contains (ka, kb). Since k can be arbitrarily large, this difference can take any real value. The random variables X', Y' take finitely many values in Z^k. Using the linear transformation (x_1, . . . , x_k) → x_1 + d x_2 + · · · + d^{k-1} x_k, we can map X', Y' to Z-valued random variables; for d large enough, this map is injective on the relevant sumsets and difference sets and hence preserves the entropies. So these Z-valued random variables have the desired property.
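The tensorization step used above, namely H(X' + Y') - H(X' - Y') = k (H(X + Y) - H(X - Y)) for coordinatewise operations on Z^k, can be verified numerically (an illustrative sketch; the base distribution is arbitrary):

```python
from math import log

def entropy(d):
    return -sum(p * log(p) for p in d.values() if p > 0)

def op_dist(dist, op):
    """Distribution of op(X, Y) for X, Y i.i.d. with distribution `dist`."""
    out = {}
    for x, px in dist.items():
        for y, py in dist.items():
            z = op(x, y)
            out[z] = out.get(z, 0.0) + px * py
    return out

p = {0: 0.6, 1: 0.3, 2: 0.1}  # arbitrary base distribution on Z
gap = (entropy(op_dist(p, lambda a, b: a + b))
       - entropy(op_dist(p, lambda a, b: a - b)))

# product distribution of k i.i.d. coordinates, added/subtracted in Z^k
k = 3
pk = {(): 1.0}
for _ in range(k):
    pk = {t + (x,): q * px for t, q in pk.items() for x, px in p.items()}

vec = lambda op: (lambda a, b: tuple(op(u, v) for u, v in zip(a, b)))
gap_k = (entropy(op_dist(pk, vec(lambda u, v: u + v)))
         - entropy(op_dist(pk, vec(lambda u, v: u - v))))
assert abs(gap_k - k * gap) < 1e-9
```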
Recall that, for a real-valued random variable X with density function f, the differential entropy h(X) is defined by

h(X) = - ∫ f(x) log f(x) dx.

Theorem 5. For any M in R, there exist i.i.d. real-valued random variables X, Y with finite differential entropy such that h(X + Y) - h(X - Y) = M.

Proof. From Theorem 4, we know that there exist i.i.d. Z-valued random variables X', Y' with H(X' + Y') - H(X' - Y') = M. Let U, V be independent random variables uniformly distributed on (-1/4, 1/4), which are also independent of (X', Y'). Then we define X = X' + U and Y = Y' + V. Elementary calculations show that

h(X + Y) = H(X' + Y') + h(U + V)  and  h(X - Y) = H(X' - Y') + h(U - V).

Since U, V are symmetric, U + V and U - V have the same distribution, so h(U + V) = h(U - V). Therefore h(X + Y) - h(X - Y) = H(X' + Y') - H(X' - Y') = M, and the theorem follows.

Remark 5. It was proved in [22] that for any k in Z there exists a finite set A of Z such that |A + A| - |A - A| = k; this was also independently obtained by Hegarty [12].

2.3 Entropy analogue of the Freiman-Pigarev inequality
We proved that the entropies of the sum and difference of two i.i.d. random variables can differ by an arbitrary amount additively. However, we will show that they do not differ too much multiplicatively. In additive combinatorics, for a finite additive set A, the doubling constant σ[A] is defined as

σ[A] = |A + A| / |A|,

and the difference constant δ[A] is defined by

δ[A] = |A - A| / |A|.

It was first observed by Ruzsa [31] that these two constants are bounded by powers of each other; in particular δ[A]^{1/2} <= σ[A], and the upper bound on σ[A] can be improved down to δ[A]^2 using Plünnecke inequalities. Thus a finite additive set has small doubling constant if and only if its difference constant is also small. In the entropy setting, we have, for i.i.d. random variables X, Y,

(1/2) (H(X - Y) - H(X)) <= H(X + Y) - H(X) <= 2 (H(X - Y) - H(X)). (22)

The upper bound was proved by Madiman [16] and the lower bound was proved independently by Ruzsa [33] and Tao [37]. The inequalities also hold for differential entropy [17, 14], and in fact for entropy with respect to the Haar measure on any locally compact abelian group [18]. In other words, after subtraction of H(X), the entropies of the sum and the difference of two i.i.d. random variables are not too different. We observe that the entropy version (22) of the doubling-difference inequality implies the entropy analogue of the following result proved by Freiman and Pigarev [29].

Theorem 6. Let X, Y be i.i.d. discrete random variables with finite entropy. Then

1/2 <= H(X + Y) / H(X - Y) <= 2.

Proof. The basic fact (2) implies that H(X + Y) = 0 if and only if H(X) = 0, which holds if and only if H(X - Y) = 0. In this case the above theorem is true if we define 0/0 = 1. So we assume that neither H(X + Y) nor H(X - Y) is 0. For the upper bound, we have

H(X + Y) <= 2 H(X - Y) - H(X) <= 2 H(X - Y).

The first step follows from the upper bound in (22), and the second from the fact that Shannon entropy is non-negative; moreover, in the i.i.d. case, equality in the second step happens only when X takes on a single value, i.e., H(X) = 0, which has been excluded. The lower bound can be proved in a similar way.

Remark 6. It is unknown if the inequality (22) is best possible. Suppose that, for some α in (1, 2), we had

H(X + Y) - H(X) <= α (H(X - Y) - H(X))

for all i.i.d. X, Y.
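The two-sided bound of Theorem 6 can be probed numerically. The sketch below (our own illustration; the random supports {0, . . . , n - 1} are arbitrary) checks that the ratio stays strictly inside (1/2, 2):

```python
import random
from math import log

def entropy(d):
    return -sum(p * log(p) for p in d.values() if p > 0)

def op_dist(p, op):
    """Distribution of op(X, Y) for X, Y i.i.d. with pmf list p on {0,..,n-1}."""
    out = {}
    for i, pi in enumerate(p):
        for j, pj in enumerate(p):
            z = op(i, j)
            out[z] = out.get(z, 0.0) + pi * pj
    return out

random.seed(1)
ratios = []
for _ in range(500):
    n = random.randint(2, 6)
    w = [random.random() for _ in range(n)]
    p = [x / sum(w) for x in w]
    hs = entropy(op_dist(p, lambda a, b: a + b))
    hd = entropy(op_dist(p, lambda a, b: a - b))
    ratios.append(hs / hd)

# Theorem 6; the sharper bounds 3/4 and 4/3 are not asserted here
assert 0.5 < min(ratios) and max(ratios) < 2.0
```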
Using the same argument, the above theorem could then be improved to

1/α <= H(X + Y) / H(X - Y) <= α.

Remark 7. The above theorem does not hold for continuous random variables. For example, let X be an exponential random variable with parameter λ, and let Y be an independent copy of X. Then X + Y has the Gamma distribution Γ(2, λ^{-1}), with differential entropy

h(X + Y) = 1 + γ - log λ,

where γ is Euler's constant. On the other hand, X - Y has the Laplace distribution Laplace(0, λ^{-1}), with differential entropy

h(X - Y) = 1 + log 2 - log λ.

We can see that, as λ varies, the ratio h(X + Y)/h(X - Y) escapes any fixed bounded interval: for instance, as λ approaches 2e, h(X - Y) tends to 0 while h(X + Y) tends to γ - log 2 ≠ 0.

3 Weighted sums and polar codes

3.1 Polar codes: introduction
Polar codes, invented by Arıkan [3] in 2009, achieve the capacity of arbitrary binary-input symmetric discrete memoryless channels. Moreover, they have low encoding and decoding complexity and an explicit construction. Consequently they have attracted a great deal of attention in recent years. In order to discuss polar codes more precisely, we now recall some standard terminology from information and coding theory.
As is standard practice in information theory, we use U^k to denote (U_1, . . . , U_k), and I(X; Y|Z) to denote the conditional mutual information between X and Y given Z, which is defined by I(X; Y|Z) = H(X|Z) - H(X|Y, Z). Here the conditional entropy H(X|Y), defined as the mean under the distribution of Y of H(X|Y = y), satisfies the "chain rule" H(Y) + H(X|Y) = H(X, Y). The mutual information between X and Y, namely I(X; Y) = H(X) - H(X|Y), emerges in the case where there is no conditioning. One has I(X; Y|Z) = 0 if and only if X and Y are conditionally independent given Z. Furthermore, one also has the chain rule for mutual information, which states that I(X; Y, Z) = I(X; Z) + I(X; Y|Z).
A major goal in coding theory is to obtain efficient codes that achieve the Shannon capacity of a discrete memoryless channel. A memoryless channel is defined first by a "one-shot" channel W, which is a stochastic kernel from an input alphabet X to an output alphabet Y (i.e., for each x in X, W(·|x) is a probability distribution on Y), and the memoryless extension of W to length-n vectors is defined by

W^(n)(y^n | x^n) = ∏_{i=1}^{n} W(y_i | x_i).

To simplify the notation, one often makes a slight abuse of notation, writing W^(n) as W. A linear code of block length n on an alphabet X = F (which must be a field) is a subspace of F^n; the vectors in the subspace are often called the codewords. A linear code is equivalently defined by a generator matrix, i.e., a matrix with entries in the field whose rows form a basis for the code. If the dimension of the code is k, and if G is a k × n generator matrix for the linear code, the codewords are given by the span of the rows of G, i.e., all products uG where u is a 1 × k vector over the field. We refer to [7, 30] for a more detailed introduction to information and coding theory.
In polar codes, the generator matrix of block length n is obtained by deleting some rows of the matrix G_n = [1 0; 1 1]^{⊗ log_2 n}. Which rows to delete depends on the channel and the targeted error probability (or rate). For a symmetric discrete memoryless channel W, the rows to be deleted are indexed by a set B_{ε,n} of indices i in [n] whose synthesized channels U_i → (Y^n, U^{i-1}) (defined below) carry little mutual information, where ε is a parameter governing the error probability, the vector U^n has i.i.d. components which are uniform on the input alphabet, X^n = U^n G_n, and Y^n is the output of n independent uses of W when X^n is the input. To see the purpose of the transform G_n, consider the case n = 2 first. Applying G_2 to the vector (U_1, U_2) yields

(X_1, X_2) = (U_1, U_2) G_2 = (U_1 + U_2, U_2).

Transmitting X_1 and X_2 on two independent uses of a binary-input channel W leads to two output variables Y_1 and Y_2; recall that this means that Y_1 (resp. Y_2) is a random variable whose distribution is given by W(·|x), where x is the realization of X_1 (resp. X_2). If we look at the mutual information between the vectors X^2 = (X_1, X_2) and Y^2 = (Y_1, Y_2), since the pairs of components (X_1, Y_1) and (X_2, Y_2) are mutually independent, the chain rule yields

I(X^2; Y^2) = 2 I(W), (27)

where I(W) is defined as the mutual information of the one-shot channel W with a uniformly distributed input. Further, since the transformation G_2 is one-to-one, and since the mutual information is clearly invariant under one-to-one transformations of its arguments (the mutual information depends only on the joint distribution of its arguments), we have that

I(U^2; Y^2) = I(X^2; Y^2). (28)

If we now apply the chain rule to the left-hand side of the previous equality, the dependencies among the components of U^2 obtained by mixing X^2 with G_2 lead this time to two different terms, namely

I(U^2; Y^2) = I(U_1; Y^2) + I(U_2; Y^2 | U_1) = I(U_1; Y^2) + I(U_2; Y^2, U_1), (29)

where the last equality uses the independence of U_1 and U_2. Putting (27), (28) and (29) together, we have that

2 I(W) = I(U_1; Y^2) + I(U_2; Y^2, U_1). (30)

Now, the above is interesting because the two terms on the right-hand side are precisely not equal.
In fact, I(U_2; Y^2, U_1) must be greater than its counterpart without the mixing of G_2, i.e., I(U_2; Y^2, U_1) >= I(X_2; Y_2) = I(W). To see this, note that

I(U_2; Y^2, U_1) = H(U_2) - H(U_2 | Y^2, U_1) >= H(U_2) - H(U_2 | Y_2) = I(U_2; Y_2) = I(W),

where the inequality uses the fact that conditioning can only reduce entropy, hence dropping the variables Y_1 and U_1 from the conditioning can only increase the entropy; the last equality holds since U_2 = X_2 is uniform. Further, one can check that, besides the degenerate cases where W is deterministic or fully noisy (i.e., making input and output independent), I(U_2; Y^2, U_1) is strictly larger than I(X_2; Y_2). Thus, the two terms on the right-hand side of (30) are respectively smaller and larger than I(W), but they average out to the original amount I(W). In summary, out of two independent copies of the channel W, the transform G_2 allows us to create two new synthetic channels, W^- : U_1 → Y^2 and W^+ : U_2 → (Y^2, U_1), that have respectively a worse and a better mutual information while overall preserving the total amount of mutual information. The key use of the above phenomenon is that, if one wants to transmit only one bit (uniformly drawn), using W^+ rather than W leads to a lower error probability, since the channel W^+ carries more information. One can then iterate this argument several times and hope to obtain a subset of channels of very high mutual information, on which bits can be reliably transmitted. After log_2 n iterations, one obtains the synthesized channels U_i → (Y^n, U^{i-1}). Thus, for a given number of information bits to be transmitted (i.e., for a given rate), one can select the channels with the largest mutual information to minimize the error probability. As explained in the next section, the phenomenon of polarization happens in the sense that, as n tends to infinity, the synthesized channels have mutual information tending to either 0 or 1 (besides a vanishing fraction of exceptions).
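The balance equation (30) and the strict ordering I(W^-) < I(W) < I(W^+) can be verified exactly for a small example, say a binary symmetric channel (an illustrative computation of ours; the flip probability p = 0.11 is arbitrary):

```python
import itertools
from math import log2

def H(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    out = {}
    for t, p in joint.items():
        key = tuple(t[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, a_idx, b_idx):
    """I(A; B) from a joint distribution over tuples."""
    return (H(marginal(joint, a_idx)) + H(marginal(joint, b_idx))
            - H(marginal(joint, a_idx + b_idx)))

p = 0.11    # BSC flip probability
joint = {}  # joint law of (U1, U2, Y1, Y2)
for u1, u2, e1, e2 in itertools.product((0, 1), repeat=4):
    x1, x2 = (u1 + u2) % 2, u2  # (X1, X2) = (U1, U2) G_2
    y1, y2 = (x1 + e1) % 2, (x2 + e2) % 2
    pr = 0.25 * (p if e1 else 1 - p) * (p if e2 else 1 - p)
    key = (u1, u2, y1, y2)
    joint[key] = joint.get(key, 0.0) + pr

i_w = 1 - (-p * log2(p) - (1 - p) * log2(1 - p))  # I(W) for BSC(p)
i_minus = mutual_info(joint, (0,), (2, 3))        # I(U1; Y1, Y2)
i_plus = mutual_info(joint, (1,), (2, 3, 0))      # I(U2; Y1, Y2, U1)
assert abs(i_minus + i_plus - 2 * i_w) < 1e-9     # balance equation (30)
assert i_minus < i_w < i_plus                     # strict polarization
```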
Hence, sending information bits through the high mutual information channels (equivalently, deleting the rows of G_n corresponding to low mutual information channels) allows one to achieve communication rates as large as the mutual information of the original binary-input channel. The construction extends to q-ary input alphabets when q is prime, using the same matrix G_n = [1 0; 1 1]^{⊗ log_2 n} while carrying out the operations over F_q. It is tempting to investigate what happens if one keeps the Kronecker structure of the generator matrix but modifies the kernel [1 0; 1 1]. For binary input alphabets, there is no other interesting choice (up to equivalent permutations). In Mori and Tanaka [23], the error probability of non-binary polar codes constructed on the basis of Reed-Solomon matrices is calculated using numerical simulations on q-ary erasure channels; it is confirmed there that 4-ary polar codes can have significantly better performance than binary polar codes. Our goal here is to investigate potential improvements at finite block length using modified kernels over F_q. We propose to pick kernels not by optimizing the polar code exponent as in [23], but by maximizing the spread of the polar martingale; this connects to the object of study of this paper, as explained next. The resulting improvements are illustrated with numerical simulations.

3.2 Polar martingale
In order to see that polarization happens, it is helpful to rely on a random process that places uniform measure on the possible realizations of I(U_i; Y^n, U^{i-1}); counting the number of such mutual informations in (ε, 1 - ε) then amounts to evaluating the probability that the process lies in this interval. The process is defined by taking {B_n}_{n>=1} to be i.i.d. random variables uniform on {-, +}, and defining the binary (or q-ary with q prime) random input channels {W_n, n >= 0} by

W_0 = W,   W_{n+1} = W_n^{B_{n+1}}.

Then the polarization result can be expressed as

P(I(W_n) in (ε, 1 - ε)) → 0 as n → ∞, for every fixed ε in (0, 1/2).

The process I(W_n) is particularly handy, as it is a bounded martingale with respect to the filtration generated by {B_n}; this is a consequence of the balance equation derived in (30). Therefore, I(W_n) converges almost surely, which means that, almost surely, for any ε > 0 and n large enough, |I(W_{n+1}) - I(W_n)| < ε, and in particular I(W_n^+) - I(W_n) < ε along the steps where B_{n+1} = +. Since, for q-ary input channels (q prime), the only channels for which I(W^+) - I(W) is arbitrarily small are those with I(W) arbitrarily close to 0 or 1, the conclusion of polarization follows. The key point is that the martingale I(W_n) is a random walk in [0, 1] that is 'unstable' at any point with I(W_n) in (0, 1), as it must move by at least I(W_n^+) - I(W_n) > 0 in this range. The plot of I(W^+) - I(W) for different values of I(W) is provided in Figure 1.
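For the binary erasure channel the recursion for I(W_n) is explicit: I(W^-) = I^2 and I(W^+) = 2I - I^2 (a standard fact for the BEC, not derived in this note), so the polarization statement can be simulated directly. The following sketch uses arbitrary parameters of our own choosing:

```python
import random

def bec_step(i, b):
    """One polarization step for the BEC: I^- = I^2, I^+ = 2I - I^2."""
    return i * i if b == "-" else 2 * i - i * i

random.seed(0)
steps, trials, eps = 20, 5000, 0.35
polarized = 0
for _ in range(trials):
    i = 1 - eps  # I(W) for BEC(eps)
    for _ in range(steps):
        i = bec_step(i, random.choice("-+"))
    if i < 0.01 or i > 0.99:
        polarized += 1

# after 20 steps, the vast majority of synthesized channels are extremal
assert polarized / trials > 0.8
```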
Thus, the larger the spread I(W^+) - I(W), the more unstable the martingale is at non-extremal points, and the faster it should converge to the extremes (i.e., polarized channels). To see why this is connected to the object of study of this paper, we need one more aspect of polar codes.
When considering channels that are 'additive noise' channels, polarization can be understood in terms of the noise process rather than the actual channels W_n. Consider for example the binary symmetric channel. When transmitting a codeword c^n on this channel, the output is Y^n = c^n + Z^n, where Z^n has i.i.d. Bernoulli components. The polar transform can then be carried over the noise Z^n, since the mutual information of the polarized channels is directly obtained from the conditional entropies of the polarized noise vector G_n Z^n. The counterpart of this polarization phenomenon is called source polarization [4]; it is extended in [1] to multiple correlated sources. For n = 2, the spread of the two conditional entropies is exactly given by H(Z + Z') - H(Z), where Z, Z' are i.i.d. under the noise distribution. In Arıkan and Telatar [5], the rate of convergence of the polar martingale is studied as a function of the block length. Our goal here is to investigate the performance at finite block length, motivated by maximizing the spread at the first polarization step (block length n = 2). When considering non-binary polar codes, that spread is governed by the entropy of a linear combination of i.i.d. variables. Preliminary results on this approach were presented in [2], while the error exponent and scaling law of polar codes have been studied in particular in [10] and references therein.

3.3 Kernels with maximal spread
Being interested in the performance of polar codes at finite block length, we start with the optimization of the kernel matrix over F_q at block length n = 2. Namely, we investigate the following optimization problem:

K* in argmax_K I(W^+(W, K)),

where the maximum is over 2 × 2 invertible matrices K over F_q, W^+(W, K) is the channel u_2 → (Y_1, Y_2, u_1), and (Y_1, Y_2) are the outputs of two independent uses of W when (x_1, x_2) = (u_1, u_2) K are the inputs. We call K* the 2-optimal kernel for W. A general kernel is a 2 × 2 invertible matrix over F_q. Let K = [a b; c d] be such a matrix, and let (U_1, U_2) be i.i.d. under µ over F_q and (X_1, X_2) = (U_1, U_2) K. Since K is invertible, we have

H(X_1) + H(X_2 | X_1) = H(U_1, U_2) = 2 H(U_1),

and

H(X_1) - H(U_1) = H(U_1) - H(X_2 | X_1),

which is the entropy spread gained by using the transformation K. To maximize the spread, one may maximize H(X_1) = H(a U_1 + c U_2) over the choice of a and c, or simply H(U_1 + c U_2) over the choice of c (multiplying by a^{-1} does not change the entropy). Hence, the maximization problem depends only on the variable c (a can be set to 1, and b, d only need to ensure that K is invertible), which leads to a kernel

Figure 2: For an additive noise channel over F_3 with noise distribution {0.7, 0.3, 0}, the block error probability (in log_10 scale) of a polar code with block length n = 1024 is plotted against the rate of the code. The red curve (lower curve) is for the polar code using the 2-optimal kernel, whereas the blue curve is for the polar code using the original kernel.
of the form K = [1 0; c 1]. Note that to maximize the spread, one may alternatively minimize H(X_2 | X_1) = H(U_2 | U_1 + c U_2).
We consider in particular channels that are 'additive noise' channels, in which case one can equivalently study the 'source' version of this problem, as follows:

λ*(µ) in argmax_{λ in F_q \ {0}} H(U_1 + λ U_2),

where U_1, U_2 are i.i.d. under µ. As discussed above, this is related to the previous problem by choosing c = λ*(µ), where µ is the distribution of the noise of the channel W.
Our first observation about the optimal coefficients λ*(µ) is in the context of F_3, and follows immediately from Theorem 1: for every distribution µ on F_3, we have H(U_1 + 2U_2) = H(U_1 - U_2) >= H(U_1 + U_2), so one can always take λ*(µ) = 2. When µ is over F_q with q >= 5, λ*(µ) varies with µ. For example, one can check numerically that for the distribution {0.8, 0.1, 0.1, 0, 0} we have λ* = 4, whereas for the distribution {0.7, 0.2, 0.1, 0, 0} we have λ* = {2, 3}. Thus, finding a solution to the problem of determining λ*(µ) for general probability distributions µ on F_q seems not so easy. Nonetheless, for a certain class of probability distributions µ, we can identify λ*(µ) explicitly using the following observation.

Proposition 1. Let µ be a probability distribution over F_q with support S_µ. If there exists γ in F_q such that

|S_µ + γ · S_µ| = |S_µ|^2, (39)

then

H(U_2 | U_1 + γ U_2) = 0, (40)

where U_1, U_2 are i.i.d. under µ.
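The numerical claims about λ*(µ) over F_5 can be reproduced by brute force over the nonzero coefficients (an illustrative sketch of ours; entropies are in nats):

```python
from math import log

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def h_weighted_sum(mu, lam, q):
    """H(U1 + lam*U2 mod q) for U1, U2 i.i.d. with distribution mu."""
    p = [0.0] * q
    for u1 in range(q):
        for u2 in range(q):
            p[(u1 + lam * u2) % q] += mu[u1] * mu[u2]
    return entropy(p)

def best_lambdas(mu, q):
    """The set of maximizers lam in {1, ..., q-1} of H(U1 + lam*U2)."""
    vals = {lam: h_weighted_sum(mu, lam, q) for lam in range(1, q)}
    best = max(vals.values())
    return {lam for lam, v in vals.items() if abs(v - best) < 1e-12}

assert best_lambdas([0.8, 0.1, 0.1, 0.0, 0.0], 5) == {4}
assert best_lambdas([0.7, 0.2, 0.1, 0.0, 0.0], 5) == {2, 3}
```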
Remark 8. The condition on the support could be simplified, but as such it makes the conclusion of Proposition 1 immediate: when (39) holds, the map (u_1, u_2) → u_1 + γ u_2 is injective on S_µ × S_µ, so U_2 is determined by U_1 + γ U_2. Also note that a γ such that H(U_2 | U_1 + γ U_2) = 0 is clearly optimal for maximizing the spread, i.e., it maximizes H(U_1 + γ U_2). Let us consider some examples of distributions satisfying (39):

1. Let µ over F_5 be such that S_µ = {0, 1}. Picking γ = 2, one obtains γ S_µ = {0, 2} and S_µ + 2 S_µ = {0, 1, 2, 3}, and (39) is verified. In this case, using γ = 1 can only provide a strictly smaller spread, since it does not set H(U_2 | U_1 + γ U_2) = 0. It is hence better to use the 2-optimal kernel [1 0; 2 1] rather than the original kernel [1 0; 1 1]. As illustrated in Figure 3, this leads to significant improvements in the error probability at finite block length. Also note that a channel with noise µ satisfying (39) has positive zero-error capacity, which is captured by the 2-optimal kernel, as shown by the rapid drop of the error probability (it is 0 at low enough rates, since half of the synthesized channels have noise entropy exactly zero). If µ is close to a distribution satisfying (39), the error probability can also be significantly improved with respect to the original kernel. Note that the value of γ for which (39) holds depends on the alphabet size; therefore, the choice of γ varies with respect to q.
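Proposition 1 and the F_5 example above can be checked directly; the code below (our own sketch) computes H(U_2 | U_1 + γ U_2) for a µ supported on {0, 1}:

```python
from math import log

q = 5
mu = [0.5, 0.5, 0.0, 0.0, 0.0]  # support S_mu = {0, 1}

def cond_entropy(gamma):
    """H(U2 | U1 + gamma*U2) for U1, U2 i.i.d. under mu (in nats)."""
    joint = {}  # (s, u2) -> prob, with s = u1 + gamma*u2 mod q
    for u1 in range(q):
        for u2 in range(q):
            pr = mu[u1] * mu[u2]
            if pr > 0:
                key = ((u1 + gamma * u2) % q, u2)
                joint[key] = joint.get(key, 0.0) + pr
    marg = {}
    for (s, _), pr in joint.items():
        marg[s] = marg.get(s, 0.0) + pr
    return -sum(pr * log(pr / marg[s]) for (s, _), pr in joint.items())

# gamma = 2: the map (u1, u2) -> u1 + 2*u2 is injective on S x S,
# so U2 is determined by U1 + 2*U2 and the conditional entropy vanishes
assert abs(cond_entropy(2)) < 1e-12
# gamma = 1 leaves residual uncertainty about U2
assert cond_entropy(1) > 0.1
```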
In conclusion, we have shown that over F_q the martingale spread can be significantly enlarged by using 2-optimal kernels rather than the original kernel [1 0; 1 1]. Moreover, we have observed that this can lead to significant improvements in the error probability of polar codes, even at low block length (n = 1024). For additive noise channels, the improvement is significant when the noise distribution is concentrated on "small" support, but it may not be as significant for distributions that are more spread out.

Figure 3: For an additive noise channel over F_5 with noise distribution {0.5, 0.5, 0, 0, 0} (i.e., taking any symbol of F_5 to itself with probability 1/2 and shifting it circularly by one with probability 1/2), the block error probability (in log_10 scale) of a polar code with block length n = 1024 is plotted against the rate of the code. The red curve (lower curve) is for the polar code using the 2-optimal kernel, whereas the blue curve is for the polar code using the original kernel.