Article

Double Polar Codes for Joint Source and Channel Coding

Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(21), 3557; https://doi.org/10.3390/electronics11213557
Submission received: 15 September 2022 / Revised: 26 October 2022 / Accepted: 27 October 2022 / Published: 31 October 2022
(This article belongs to the Special Issue Multirate and Multicarrier Communication)

Abstract:
In this paper, we design a joint source and channel coding (JSCC) framework combining source polar coding and channel polar coding. The source is first compressed using a polar code (PC), and source check decoding is employed to construct an error set containing the indices of all source decoding errors. Then, the proposed JSCC system employs another PC or systematic PC (SPC) to protect the compressed source and the error set against noise; the resulting schemes are called double PC (D-PC) and systematic double PC (SD-PC), respectively. For the D-PC JSCC system, we prove a necessary condition for the optimal mapping between the source PC and the channel PC. On the receiver side, by introducing joint factor graph representations of the D-PC and SD-PC, we propose two joint source and channel decoders: a joint belief propagation (J-BP) decoder and a systematic joint belief propagation (SJ-BP) decoder. In addition, a biased extrinsic information transfer (B-EXIT) chart is developed as a theoretical performance evaluation tool for the various decoders. Both B-EXIT and simulation results show that the proposed JSCC scheme exhibits no error floor and outperforms the turbo-like BP decoder.

1. Introduction

The source-channel coding theorem [1] states that a source can be reliably transmitted over a channel as long as its entropy is less than the channel capacity, assuming that latency, complexity, and block length are not constrained. This theorem suggests that we can design source and channel coding separately to achieve optimal results. However, in practical implementations, separate source and channel coding (SSCC) is suboptimal due to residual redundancy and finite block lengths. As a potential remedy, joint source and channel coding (JSCC) has been proposed to provide improvements by exploiting residual redundancy and avoiding capacity loss, which can at most double the error exponent vis-à-vis tandem coding [2]. A theoretical study of finite-blocklength bounds on the best achievable lossy JSCC rate demonstrates that JSCC designs bring considerable performance advantages over SSCC in the nonasymptotic regime [3].
One of the main methods of JSCC is the joint decoding of a given source compression format (e.g., JPEG) with a channel decoder. For example, rate-compatible low-density parity-check (RC-LDPC) codes can provide unequal error protection for JPEG2000 compressed images [4]. If the JPEG2000 performs source coding with certain error-resilience (ER) modes, the source decoder can use the ER mode to identify corrupt sections of the codestream for the channel decoder [5]. Iterative decoding of variable-length codes (VLCs) in tandem with channel code can provide remarkable error correction performance [6]. The low-density parity-check (LDPC) codes can be combined with Huffman coded sources to exploit the a priori bit-source probability [7].
Another type of JSCC approach is to assume a Markov model of the source to decode jointly with the factor graph of the channel coding. Joint source-channel Turbo coding for binary Markov sources was investigated in [8]. The drawback of such methods is that the standard VLCs are not suitable for forming a single graph model structure with channel codes (e.g., LDPC codes), as they require long sequences to achieve the entropy of the source. Therefore, a double LDPC (D-LDPC) code for JSCC has been proposed; one LDPC code is used to compress the source first, followed by another LDPC code to protect the compressed source against noise [9]. D-LDPC codes can be represented as a single bipartite graph, allowing the belief propagation (BP) algorithm to decode it on the receiver side. Double protograph LDPC (DP-LDPC) codes are a variant of the D-LDPC codes that introduce a P-LDPC code to replace the conventional LDPC code [10]. Optimization of the base matrix of the DP-LDPC codes can effectively improve the performance of the DP-LDPC JSCC system [11,12,13].
Polar codes, proposed by Arıkan, provably achieve the capacity of any symmetric binary-input discrete memoryless channel (B-DMC) with efficient encoding and decoding algorithms [14]. The successive cancellation algorithm (SC) and belief propagation algorithm (BP) [15] are two commonly used decoding approaches. With the aid of list-decoding, polar codes can outperform LDPC codes at short codelength [16,17,18,19]. Arıkan presented the source polarization as a channel polarization complement [20], which is the theoretical basis for applying polar codes to source compression. Polar codes can achieve the rate-distortion bound for a symmetric binary source for lossy source coding [21]. Moreover, polar codes can asymptotically achieve the optimal compression rate for lossless source coding [22]. Meanwhile, polar code is a good candidate to take advantage of the benefits of source redundancy in the JSCC system [23]. The JSCC system using polar codes can significantly improve the decoding performance for language-based sources and distributed sources [24,25]. A quasi-uniform systematic polar code (qu-SPC) is constructed for the JSCC system with side information by introducing an additional bit-swap coding to modify the original polar coding [26,27]. Our previous work proposes a JSCC scheme with double polar codes by combining source polarization and channel polarization, which shows performance improvement under short codelength [28].
Although polar codes have been studied in the fields of channel coding and source coding, there is a lack of systematic research on the framework of JSCC based on polar codes. In our previous work [28], we introduced the basic double polar code structure of cascading source polar code and channel SPC. However, double polar codes have a high error floor under short blocklength, and lack performance analysis tools in the presence of biased sources.
In this paper, we establish a systematic framework for a JSCC with double polar codes and provide a complete theoretical analysis.
The contributions of this paper are summarized as follows:
  • Double Polar Coding Framework: Guided by source polarization and channel polarization, we propose a JSCC framework based on polar codes, composed of source polar coding and channel polar coding. First, a group of source bits is transformed into a series of polarized bit sources by source polarization, where the polarized sources with higher entropy, namely, high-entropy bits, are regarded as the compressed source, while the other parts are treated as redundancy. After the source is polarized, a source check decoding is performed to construct an error set containing the indices of all erroneous decisions that occur in the source decoding. Second, another polar code (PC) or SPC is used to protect the compressed source bits and the bits in the error set against the noise. Moreover, a mapping is inserted between the source polar code and the channel PC, where the mapping design affects the performance of the JSCC system. Depending on the channel code, the proposed JSCC scheme is called double polar code (D-PC) or systematic D-PC (SD-PC), respectively. The proposed JSCC framework facilitates the exploitation of residual redundancy in the source code on the decoder side.
  • Joint Belief Propagation Decoding: To represent the D-PC and SD-PC, we introduce two factor graphs: the joint factor graph (J-FG) and the systematic joint factor graph (SJ-FG). Correspondingly, the proposed joint source and channel decoders can be divided into two types: the J-FG-based joint belief propagation (J-BP) decoder and the SJ-FG-based systematic joint belief propagation (SJ-BP) decoder. On the one hand, for the J-FG, the mapping within the D-PC determines the connection between the factor graph of the source code and that of the channel code. The J-BP decoder operates iteratively over the J-FG, where the source decoder and the channel decoder exchange soft information through the mapping. On the other hand, for the SJ-FG, because the systematic bits directly carry the high-entropy bits, the factor graph of the source code and that of the channel code are combined into a single factor graph. The SD-PC can be decoded as a single factor graph using the SJ-BP decoder.
  • B-EXIT Evaluation: Regarding the proposed JSCC system, we present a biased extrinsic information transfer (B-EXIT) convergence analysis for the J-BP decoder and SJ-BP decoder. The EXIT method is extended to consider the non-uniform source. We design a method to calculate the distribution of each high entropy bit, which permits the B-EXIT algorithm to track mutual information transfer for each high entropy bit. This approach reveals the superiority of the proposed JSCC system from a theoretical perspective.
The rest of this paper is organized as follows. Section 2 briefly introduces the background of polar codes and BP decoding. Section 3 presents the proposed JSCC framework. Section 4 describes the J-BP decoder for D-PC, and Section 5 provides the SJ-BP decoder for SD-PC. Section 6 introduces the B-EXIT chart. Section 7 shows the performance evaluation results with B-EXIT and simulations. Finally, Section 8 concludes the paper.

2. Preliminaries

2.1. Notational Conventions

In this paper, we use calligraphic characters, such as $\mathcal{X}$, to denote sets. For any finite set of integers $\mathcal{X}$, $|\mathcal{X}|$ denotes its cardinality. We denote random variables (RVs) by upper case letters, such as $X, Y$, and their realizations by the corresponding lower case letters, such as $x, y$. For an RV $X$, $P_X$ denotes the probability assignment on $X$. We use the notation $\mathbf{a}$ as shorthand for a row vector $(a_1, \ldots, a_N)$. We use bold letters (e.g., $\mathbf{X}$) to denote matrices. Given $\mathbf{a}$ and $\mathcal{A} \subseteq \{1, \ldots, N\}$, we write $\mathbf{a}_{\mathcal{A}}$ to denote the subvector $(a_i : i \in \mathcal{A})$.

2.2. Polar Codes

A polar code can be identified by a parameter vector $(N, K, \mathcal{A}, u_{\mathcal{A}^c})$, where $N = 2^n$ is the code length, $K$ is the code dimension and specifies the size of the information set $\mathcal{A}$, and $u_{\mathcal{A}^c}$ denotes the frozen bits. The frozen bits $u_{\mathcal{A}^c}$ are known to both the transmitter and the receiver, and are usually set to all zeros. Let $\mathbf{x}$ denote the codeword of a polar code; then, the encoding process can be expressed as $\mathbf{x} = \mathbf{u}\mathbf{G}$, where $\mathbf{u}$ consists of the information bits $u_{\mathcal{A}}$ and frozen bits $u_{\mathcal{A}^c}$, while $\mathbf{G} = \mathbf{F}_2^{\otimes n}$ is called the generator matrix of size $N$ and is the $n$-th Kronecker power of the polarizing kernel $\mathbf{F}_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. Then, the codeword $\mathbf{x}$ is transmitted over an AWGN channel. The modulation is binary phase shift keying (BPSK). The signals received at the destination are given by $\mathbf{y} = (1 - 2\mathbf{x}) + \mathbf{z}$, where $\mathbf{z}$ is an i.i.d. Gaussian noise sequence in which each sample has zero mean and variance $\sigma^2$. Furthermore, the LLR of $y_i$ can be written as $L(y_i) = \frac{2}{\sigma^2} y_i$.
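As a concrete illustration, the encoding and channel model above can be sketched in a few lines of Python. This is a minimal sketch with our own function names; no frozen-set design or rate allocation is shown.

```python
import random

F2 = [[1, 0], [1, 1]]  # polarizing kernel

def kron(A, B):
    # Kronecker product of two 0/1 matrices
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def generator(n):
    # G = F2 to the n-th Kronecker power
    G = [[1]]
    for _ in range(n):
        G = kron(G, F2)
    return G

def polar_encode(u, G):
    # x = u G over GF(2)
    N = len(u)
    return [sum(u[i] * G[i][j] for i in range(N)) % 2 for j in range(N)]

def channel_llr(x, sigma):
    # BPSK: y = (1 - 2x) + z with z ~ N(0, sigma^2); LLR(y_i) = 2 y_i / sigma^2
    y = [(1 - 2 * xi) + random.gauss(0.0, sigma) for xi in x]
    return [2.0 * yi / sigma ** 2 for yi in y]
```

For example, `generator(2)` yields the 4x4 matrix $\mathbf{F}_2^{\otimes 2}$ used throughout Section 2.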

2.3. BP Decoding

An $(N, K)$ polar code can be represented by an $n$-stage factor graph containing $(n+1)N$ nodes. Each $(i,j)$-indexed node is associated with left-to-right and right-to-left likelihood messages. Let $R_{i,j}^t$ and $L_{i,j}^t$ denote the logarithmic likelihood ratio (LLR)-based left-to-right message and right-to-left message in the $t$-th iteration. The BP decoder iteratively updates and propagates these soft messages between neighboring nodes over the factor graph. Before decoding starts, $R_{i,0}^t$ is initialized to $0$ for $i \in \mathcal{A}$ and to $+\infty$ otherwise, while $L_{i,n}^t$ is initialized to the channel received value. The soft messages at the remaining nodes are initialized to $0$. Then, the BP decoder updates the soft message of each node over the whole factor graph, terminating when the number of iterations reaches the preset maximum. After the iterations finish, the estimate of each information bit can be obtained as $\hat{u}_i = D(L_{i,0}^t + R_{i,0}^t)$, where the hard-decision function $D(y) = 1$ when $y < 0$ and is otherwise $0$.

3. Joint Source and Channel Coding

In this section, we present details of the proposed JSCC framework based on the polar code. First, we provide a brief description of the proposed JSCC framework. Then, the coding process of the D-PC scheme and the SD-PC scheme is elaborated.

3.1. Joint Source and Channel Coding Framework

As shown in Figure 1, the proposed JSCC framework mainly comprises two phases: the source polar coding and the channel polar coding. The source polar coding phase involves the source polarization and the check decoding. The source vector s is first transformed into a polarized source vector c by source polarization. Then, the polarized source vector c , which is composed of the high-entropy vector c H and the low-entropy vector c H c , is fed into a source polar decoder for check decoding. The check decoding generates an error set E by collecting the indices of the source decoding errors produced during source decoding. This error set E is converted into a binary sequence e and appended to the high-entropy vector c H to form the vector v .
In the channel polar coding phase, there are two alternative schemes. Let π ( · ) represent a mapping function; for the D-PC scheme, the vector v is first mapped to the vector π ( v ) , which is then protected against noise by a PC. For the SD-PC scheme, we directly adopt an SPC to protect the vector v . From an overall perspective, in the transmitter, the source vector s is coded into the codeword x and then transmitted to the receiver through a channel. In the receiver, the received signal y is decoded by a joint source and channel decoder to obtain the source estimation s ^ .

3.2. Joint Source and Channel Coding for D-PC

The D-PC scheme in the proposed JSCC framework can be implemented by the three procedures described below.

3.2.1. Source Polar Coding

First, the source vector is compressed using a PC. This paper considers an independent and identically distributed (i.i.d.) binary Bernoulli source $S$ with $P_S(1) = p$. The $N_{sc} = 2^m$ source bits from $S$ are represented by the vector $\mathbf{s} = (s_1, s_2, \ldots, s_{N_{sc}})$. The source polarization of the source vector $\mathbf{s}$ is performed as follows:
$$\mathbf{c} = \mathbf{s}\,\mathbf{G}_{sc},$$
where $\mathbf{G}_{sc} = \mathbf{F}_2^{\otimes m}$ is the $m$-th Kronecker power of $\mathbf{F}_2$. Let $\mathcal{H}$ denote the high-entropy set with $|\mathcal{H}| = K_{sc}$. The high-entropy vector is designated as $c_{\mathcal{H}} = \{c_i \mid i \in \mathcal{H}\}$. To avoid errors in recovering $c_{\mathcal{H}^c}$ at the receiver, we perform a source check decoding to find all indices for which the decision of the corresponding compressed source bit $c_i$ would be incorrect. Let $\hat{c}_{\mathcal{H}^c}$ denote the estimate of $c_{\mathcal{H}^c}$. Then, a set of indices $\mathcal{E}$ can be defined as
$$\mathcal{E} = \{\, i : \hat{c}_i \neq c_i,\; i \in \mathcal{H}^c \,\}.$$
The final output of the source polar encoding is given by $(c_{\mathcal{H}}, \mathcal{E})$. Because the size of $\mathcal{E}$ is at most $N_{sc}$, each element of $\mathcal{E}$ can be represented by an $m$-bit binary sequence. Let $\mathbf{e}$ denote the binary vector in which all elements of $\mathcal{E}$ are concatenated in binary form, and let $\mathbf{v} = [c_{\mathcal{H}}, \mathbf{e}]$. The length of the binary vector $\mathbf{v}$ is $K_{cc} = |\mathcal{H}| + |\mathcal{E}| \cdot m$.
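To make the encoder output $(c_{\mathcal{H}}, \mathcal{E})$ concrete, the sketch below polarizes a source block with the in-place butterfly transform and collects the error set by comparing a check decoder's decisions against the true low-entropy bits. In the actual system, the check decoder is the SC- or BP-based source decoder; here we substitute a trivial all-zeros predictor purely for illustration.

```python
def polar_transform(s):
    # c = s * F2^(kron m) over GF(2), computed with the butterfly recursion
    c = list(s)
    N, step = len(s), 1
    while step < N:
        for i in range(0, N, 2 * step):
            for j in range(i, i + step):
                c[j] ^= c[j + step]
        step *= 2
    return c

def source_polar_encode(s, H, check_decode):
    # Compress s to its high-entropy bits c_H and record the error set E:
    # the low-entropy indices the check decoder would get wrong.
    c = polar_transform(s)
    cH = [c[i] for i in H]
    c_hat = check_decode(len(c))  # stand-in for SC/BP check decoding
    E = [i for i in range(len(c)) if i not in H and c_hat[i] != c[i]]
    return cH, E

# stand-in check decoder: guess the typical (all-zero) value for every bit
all_zeros = lambda N: [0] * N
```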
The construction of the set $\mathcal{E}$ based on the SC decoding algorithm was introduced in [22]. Here, we consider the construction of $\mathcal{E}$ based on the BP decoding algorithm. In a BP decoder, the estimate $\hat{c}_{\mathcal{H}^c}$ is updated in each iteration. Let $\hat{c}_i^t$ denote the estimate of $c_i$ in the $t$-th iteration, and let $T$ denote a preset positive integer. If $\hat{c}_i^t$ satisfies the convergence condition
$$\sum_{j=t-T}^{t} \left( \hat{c}_i^{\,j-1} \oplus \hat{c}_i^{\,j} \right) = 0,$$
we can assume that the converged bit is $\hat{c}_i = \hat{c}_i^t$ [29]. For $i \in \mathcal{H}^c$, if the converged bit $\hat{c}_i$ is not equal to $c_i$, then the index of $c_i$ belongs to $\mathcal{E}$. However, if $\hat{c}_i^t$ keeps changing without converging, i.e., it remains oscillating, the source BP decoder cannot obtain a converged $\hat{c}_i$. To tackle this issue, we propose an oscillation check criterion that takes the $\hat{c}_i^t$ corresponding to the maximum LLR amplitude recorded during the oscillation as the converged $\hat{c}_i$. Let $L(\hat{c}_i^t)$ denote the LLR corresponding to the estimate $\hat{c}_i^t$, and let $O_{max}$ represent the $L(\hat{c}_i^t)$ with maximum $|L(\hat{c}_i^t)|$ during the oscillation. Moreover, the variable $O_{cnt}$ counts the iteration number of the oscillation, and $T_{osc}$ is the preset maximum number of oscillations. When $O_{cnt} \geq T_{osc}$, $O_{max}$ is given by
$$O_{max} = \mathrm{sign}\big(L(\hat{c}_i^t)\big) \cdot \max_{1 \leq O_{cnt} \leq T_{osc}} \big|L(\hat{c}_i^t)\big|,$$
where $\mathrm{sign}(x) = 1$ when $x > 0$ and is otherwise $-1$; the set $\mathcal{E}$ then includes the index $i$ if $c_i \neq D(O_{max})$.
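A minimal sketch of this oscillation check follows; the function name is our own, and the LLR history would come from the source BP decoder.

```python
def resolve_oscillation(llr_history, T_osc):
    """Implement Eq. (4): among the last T_osc oscillating iterations,
    keep the LLR with the largest magnitude and hard-decide on its sign."""
    window = llr_history[-T_osc:]
    O_max = max(window, key=abs)
    # hard decision D(y): 1 when y < 0, otherwise 0
    return 1 if O_max < 0 else 0
```

For instance, if the recorded LLRs oscillate as [0.5, -2.3, 1.1], the strongest evidence (-2.3) wins and the converged decision is 1.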

3.2.2. Optimal Mapping

For the D-PC JSCC system, the $K_{cc}$ polarized sub-channels with the highest capacity carry the vector $\mathbf{v} = [c_{\mathcal{H}}, \mathbf{e}]$, where the high-entropy vector $c_{\mathcal{H}}$ consists of $K_{sc}$ polarized source bits. Because the block length is finite, the polarization of both the source and the channel is insufficient. When a polarized source bit is transmitted over a polarized channel, if the entropy of the former is greater than the mutual information of the latter, the transmission is unreliable. Therefore, we need to optimize the mapping between the source and channel codes to improve system performance. From the perspective of mutual information maximization, we provide Theorem 1, which states a necessary condition for the optimal mapping.
Theorem 1.
For a double polar code identified by a parameter vector $(N_{sc}, K_{sc}, N_{cc}, K_{cc}, \mathcal{H}, \mathcal{A}, u_{\mathcal{A}^c})$, there exists an optimal mapping $\pi$ between the source polar code and the channel polar code that maximizes the mutual information $I(c_{\mathcal{H}}; \hat{c}_{\mathcal{H}})$. The optimal mapping must satisfy
$$H(c_i \mid c_1^{i-1}) \leq I\big(W_{N_{cc}}^{(\pi(i))}\big)$$
for all $i \in \mathcal{H}$ and $\pi(i) \in \mathcal{A}$.
Proof of Theorem 1.
Consider a mapping $\pi$. In the coding process, the high-entropy bits $c_{\mathcal{H}}$ are carried by the information bits $u_{\mathcal{A}}$; then, $\hat{c}_i = \hat{u}_{\pi(i)}$ for $i \in \mathcal{H}$ in the decoding process. The mutual information $I(c_{\mathcal{H}}; \hat{c}_{\mathcal{H}})$ is calculated as follows:
$$I(c_{\mathcal{H}}; \hat{c}_{\mathcal{H}}) = I(u_{\pi(\mathcal{H})}; \hat{u}_{\pi(\mathcal{H})}) = H(u_{\pi(\mathcal{H})}) - \left[ H(u_{\pi(\mathcal{H})}) - I(u_{\pi(\mathcal{H})}; \hat{u}_{\pi(\mathcal{H})}) \right] = H(c_{\mathcal{H}}) - \left[ H(c_{\mathcal{H}}) - I(u_{\pi(\mathcal{H})}; \hat{u}_{\pi(\mathcal{H})}) \right].$$
The information bit $u_{\pi(i)}$ is transmitted over the polarized channel $W_{N_{cc}}^{(\pi(i))}$. The mutual information $I(u_{\pi(i)}; \hat{u}_{\pi(i)})$ cannot exceed the channel capacity $I(W_{N_{cc}}^{(\pi(i))})$. The vector $u_{\pi(\mathcal{H})}$ is transmitted over the polarized channels $\{W_{N_{cc}}^{(\pi(i))} \mid \pi(i) \in \mathcal{A}\}$. Let $h_j$ denote the $j$-th element of the set $\mathcal{H}$. Based on the chain rule for entropy, we have
$$H(c_{\mathcal{H}}) = \sum_{h_j \in \mathcal{H}} H(c_{h_j} \mid c_{h_1}, c_{h_2}, \ldots, c_{h_{j-1}}).$$
If the index $i \in \mathcal{H}$ corresponds to the $j$-th element $h_j$ of the set $\mathcal{H}$, then $H(c_i \mid c_1^{i-1})$ can be rewritten as
$$H(c_{h_j} \mid c_1^{h_j - 1}) = H(c_{h_j} \mid c_1, c_2, \ldots, c_{h_j - 1}).$$
Because $\{c_{h_1}, c_{h_2}, \ldots, c_{h_{j-1}}\} \subseteq \{c_1, c_2, \ldots, c_{h_j - 1}\}$ and conditioning reduces entropy, we obtain
$$H(c_{h_j} \mid c_1, c_2, \ldots, c_{h_j - 1}) \leq H(c_{h_j} \mid c_{h_1}, c_{h_2}, \ldots, c_{h_{j-1}}).$$
Thus, we have the following relationships:
$$\sum_{i \in \mathcal{H}} H(c_i \mid c_1^{i-1}) \leq H(c_{\mathcal{H}}), \qquad I(u_{\pi(\mathcal{H})}; \hat{u}_{\pi(\mathcal{H})}) \leq \sum_{\pi(i) \in \mathcal{A}} I\big(W_{N_{cc}}^{(\pi(i))}\big).$$
Then, we can obtain the following inequality:
$$I(c_{\mathcal{H}}; \hat{c}_{\mathcal{H}}) \leq H(c_{\mathcal{H}}) - \left[ \sum_{i \in \mathcal{H}} H(c_i \mid c_1^{i-1}) - \sum_{\pi(i) \in \mathcal{A}} I\big(W_{N_{cc}}^{(\pi(i))}\big) \right].$$
The compressed bit $c_i$ is assigned to $u_{\pi(i)}$ by the mapping. Accordingly, (11) can be rewritten as follows:
$$I(c_{\mathcal{H}}; \hat{c}_{\mathcal{H}}) \leq H(c_{\mathcal{H}}) - \sum_{i \in \mathcal{H}} \left[ H(c_i \mid c_1^{i-1}) - I\big(W_{N_{cc}}^{(\pi(i))}\big) \right].$$
From (12), I ( c H ; c ^ H ) can reach H ( c H ) when (5) holds.    □
Based on Theorem 1, we propose Algorithm 1 to construct an optimal mapping. Let $a_i$ and $h_i$ denote the $i$-th elements of $\mathcal{A}$ and $\mathcal{H}$, respectively. We use a sequence $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_{K_{cc}}]$ to denote $\{a_i \mid 1 \leq i \leq K_{cc}\}$ sorted in descending order of reliability. Moreover, a sequence $\beta = [\beta_1, \beta_2, \ldots, \beta_{K_{sc}}]$ denotes $\{h_i \mid 1 \leq i \leq K_{sc}\}$ sorted in descending order of entropy. Let a sequence $\gamma$ denote the mapping $\pi$, where $\gamma_i$ indicates that $c_i$ is mapped to $u_{\gamma_i}$, i.e., $c_i$ is transmitted over the sub-channel $W_{N_{cc}}^{(\gamma_i)}$. In Algorithm 1, the most reliable polarized sub-channel transmits the polarized source bit with the highest entropy, while the least reliable polarized sub-channel transmits the polarized source bit with the lowest entropy.
Algorithm 1: Construct the mapping $\gamma$.
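The published algorithm listing is an image; based on the description above, a faithful sketch of Algorithm 1 is the following (function and variable names are our own):

```python
def construct_mapping(A, H, reliability, entropy):
    """Pair the highest-entropy source bits with the most reliable channel
    sub-channels, following Theorem 1's necessary condition."""
    alpha = sorted(A, key=lambda a: reliability[a], reverse=True)  # by reliability
    beta = sorted(H, key=lambda h: entropy[h], reverse=True)       # by entropy
    # gamma[h] = a means c_h is transmitted over sub-channel W^(a)
    return {h: a for h, a in zip(beta, alpha)}
```

The remaining entries of `alpha` beyond the first $K_{sc}$ would carry the error-set bits $\mathbf{e}$.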

3.2.3. Channel Polar Coding

Finally, we protect the compressed source bits and the bits in the error set with another PC. Let $\mathcal{A}$ denote the information set of the channel PC with $|\mathcal{A}| = K_{cc}$. The encoding of the PC has been described in Section 2.2. We can write the encoding process as
$$\mathbf{x} = u_{\mathcal{A}}\,\mathbf{G}_{cc}(\mathcal{A}) \oplus u_{\mathcal{A}^c}\,\mathbf{G}_{cc}(\mathcal{A}^c).$$
Because $u_{\mathcal{A}} = \pi(\mathbf{v})$, (13) can be rewritten as
$$\mathbf{x} = \pi(\mathbf{v})\,\mathbf{G}_{cc}(\mathcal{A}) \oplus u_{\mathcal{A}^c}\,\mathbf{G}_{cc}(\mathcal{A}^c).$$

3.3. Joint Source and Channel Coding for SD-PC

The difference between SD-PC and D-PC is that SD-PC employs an SPC to protect the vector $\mathbf{v}$. We can split the codeword into two parts by writing $\mathbf{x} = (x_{\mathcal{B}}, x_{\mathcal{B}^c})$, where $N_{cc} = 2^n$ is the codeword length, $\mathcal{B} = \mathcal{A}$, and the systematic bits $x_{\mathcal{B}}$ are assigned to the vector $\mathbf{v}$. For the SPC, the parity bits $x_{\mathcal{B}^c}$ are given by
$$x_{\mathcal{B}^c} = u_{\mathcal{A}}\,\mathbf{G}_{cc}(\mathcal{A}, \mathcal{B}^c) \oplus u_{\mathcal{A}^c}\,\mathbf{G}_{cc}(\mathcal{A}^c, \mathcal{B}^c),$$
where $u_{\mathcal{A}^c}$ is an all-zero subvector, $\mathbf{G}_{cc}(\mathcal{A}, \mathcal{B}^c)$ denotes the submatrix of $\mathbf{G}_{cc}$ consisting of the elements $(G(i,j))$ with $i \in \mathcal{A}$ and $j \in \mathcal{B}^c$, and the submatrix $\mathbf{G}_{cc}(\mathcal{A}^c, \mathcal{B}^c)$ is defined similarly. Then, $u_{\mathcal{A}}$ is calculated as follows:
$$u_{\mathcal{A}} = \big(x_{\mathcal{B}} \oplus u_{\mathcal{A}^c}\,\mathbf{G}_{cc}(\mathcal{A}^c, \mathcal{B})\big)\big(\mathbf{G}_{cc}(\mathcal{A}, \mathcal{B})\big)^{-1} = \big(\mathbf{v} \oplus u_{\mathcal{A}^c}\,\mathbf{G}_{cc}(\mathcal{A}^c, \mathcal{B})\big)\big(\mathbf{G}_{cc}(\mathcal{A}, \mathcal{B})\big)^{-1}.$$
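For small block lengths, the systematic encoding in (16)-(17) can be checked directly by inverting $\mathbf{G}_{cc}(\mathcal{A},\mathcal{B})$ over GF(2). The sketch below assumes all-zero frozen bits and uses our own helper names.

```python
def gf2_inv(M):
    # invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination
    n = len(M)
    A = [list(M[i]) + [int(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def gf2_vecmat(v, M):
    # row vector times matrix over GF(2)
    return [sum(v[i] * M[i][j] for i in range(len(v))) % 2
            for j in range(len(M[0]))]

def systematic_encode(v, G, A, B):
    # u_A = v (G(A,B))^{-1} with u_{A^c} = 0, then x = u G; x_B equals v
    GAB = [[G[i][j] for j in B] for i in A]
    uA = gf2_vecmat(v, gf2_inv(GAB))
    u = [0] * len(G)
    for k, i in enumerate(A):
        u[i] = uA[k]
    return gf2_vecmat(u, G)
```

For $N_{cc} = 4$ with $\mathcal{A} = \mathcal{B} = \{1, 3\}$ (0-indexed), the systematic positions of the resulting codeword reproduce $\mathbf{v}$ exactly.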
In summary, the source polar coding and channel polar coding can form a JSCC framework. For the D-PC JSCC system, the optimal mapping can maximize the mutual information between the compressed source bits c H and the corresponding estimation c ^ H .

4. Joint Source and Channel Decoding of D-PC

In this section, we first extend the factor graph representation to the D-PC, namely, J-FG. Second, we propose a joint source and channel decoding algorithm based on the J-FG, i.e., the J-BP decoder.

4.1. Joint Tanner Graph Representation

The polar codes can be represented by a factor graph, which can be generalized to the D-PC; that is to say, the source polar code with code length N s c and the channel polar code with code length N c c can be described by a factor graph with m stages and a graph with n stages, respectively. Therefore, the D-PC can be illustrated by a joint factor graph (J-FG) with n J stages, where n J = m + n . In the J-FG, the part depicting the source code is called the source factor graph (SFG), and the part that represents the channel code is called the channel factor graph (CFG).
The J-FG of the D-PC is shown in Figure 2. For the D-PC, the high-entropy bits of the source polar code are assigned to the information bits of the channel polar code, which is the basis for constructing the J-FG. The m stages on the left side of the J-FG are the SFG, and the n stages on the right side of the J-FG are the CFG. In this J-FG, the variable nodes in the leftmost column correspond to the source vector s and the variable nodes in the rightmost column correspond to the codeword x . The channel polar code does not directly carry the vector v with the information bits u A in the D-PC scheme; rather, it carries π ( v ) after mapping. Accordingly, the variable node corresponding to the high-entropy bits c H in SFG is connected to the variable node corresponding to u A in CFG through a mapping. The vector e contained in v can be ignored, as the bits in the vector e do not have corresponding variable nodes in the SFG. Thus, we can obtain the J-FG of the D-PC scheme.

4.2. Joint Belief Propagation Decoding

The Turbo-like BP (TL-BP) decoder [28] consists of a channel BP decoder and a source BP decoder; extrinsic information interaction between the two independent decoders implements joint decoding in [28]. Unlike TL-BP decoding, the J-BP decoder iteratively operates over the entire J-FG representing the D-PC, and no longer distinguishes between the source BP decoder and the channel BP decoder. A high-level description of the J-BP decoding algorithm is provided in Algorithm 2. The J-FG has $n_J$ stages and $n_J + 2$ columns, which from left to right are the 0th to the $(n_J + 1)$th columns. Each node $b_{i,j}$ is associated with the LLR messages $R_{i,j}^t$ and $L_{i,j}^t$. These messages are updated and propagated among adjacent nodes during the whole iteration process.
Algorithm 2: A High-Level Description of the Joint Source and Channel Decoder.

4.2.1. Initialization

We first initialize the J-BP decoder. This paper considers a binary Bernoulli source $S$ with $P_S(1) = p$. Thus, the initialization of $R_{i,0}^t$ is given by
$$R_{i,0}^t = \ln\frac{P_S(s=0)}{P_S(s=1)} = \ln\frac{1-p}{p}.$$
As shown in Figure 2, the variable nodes in the rightmost column correspond to the codeword $\mathbf{x}$; hence, $L_{i,n_J+1}^t = L(y_i)$ for $1 \leq i \leq N_{cc}$. Furthermore, we know that the frozen bits are '0'. Thus, the initialization of $R_{i,m+1}^t$ can be described by
$$R_{i,m+1}^t = \begin{cases} +\infty, & \text{if } i \in \mathcal{A}^c, \\ 0, & \text{otherwise.} \end{cases}$$
The messages in the remaining variable nodes of the J-FG are all initialized to zeros.
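The initialization in (18)-(19) amounts to filling three LLR columns of the J-FG; a brief sketch with assumed variable names:

```python
import math

def init_jbp(N_sc, N_cc, p, chan_llrs, frozen):
    # Eq. (18): every source node starts from the prior LLR ln((1-p)/p)
    R_left = [math.log((1 - p) / p)] * N_sc
    # rightmost column takes the channel observations: L_{i, nJ+1} = L(y_i)
    L_right = list(chan_llrs)
    # Eq. (19): frozen positions are known zeros (+inf LLR), others neutral
    R_mid = [math.inf if i in frozen else 0.0 for i in range(N_cc)]
    return R_left, L_right, R_mid
```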

4.2.2. Iteration Decoding

After initialization is complete, the J-BP decoder is activated, as carried out on Lines 2–9 of Algorithm 2. The maximum number of iterations of the J-BP decoder is preset and denoted as $I_{max}$. One iteration of the J-BP decoder consists of left-to-right and right-to-left message propagation. In the left-to-right propagation, the J-BP decoder updates $R_{i,j}^t$ serially in the order $j = 1, 2, \ldots, n_J + 1$. Then, the J-BP decoder updates $L_{i,j}^t$ from right to left in the order $j = n_J, n_J - 1, \ldots, 0$. The message update rule employed by the J-BP decoder is the same as that of the conventional BP decoder.
The message update rule of a processing element is described by
$$\begin{aligned} L_{i,j}^t &= f\big(R_{i+2^j,j}^t + L_{i+2^j,j+1}^t,\; L_{i,j+1}^t\big), \\ L_{i+2^j,j}^t &= f\big(R_{i,j}^t,\; L_{i,j+1}^t\big) + L_{i+2^j,j+1}^t, \\ R_{i,j+1}^t &= f\big(L_{i+2^j,j+1}^{t-1} + R_{i+2^j,j}^t,\; R_{i,j}^t\big), \\ R_{i+2^j,j+1}^t &= f\big(L_{i,j+1}^{t-1},\; R_{i,j}^t\big) + R_{i+2^j,j}^t, \end{aligned}$$
where $f(x, y)$ is defined as
$$f(x, y) \triangleq \log\frac{1 + e^{x+y}}{e^x + e^y}.$$
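The kernel function (21) is the familiar LLR "boxplus" operation. Below is a numerically direct sketch, together with the min-sum approximation that is often substituted in hardware; the approximation is our addition, not part of the paper.

```python
import math

def f_exact(x, y):
    # f(x, y) = log((1 + e^{x+y}) / (e^x + e^y))
    return math.log(1.0 + math.exp(x + y)) - math.log(math.exp(x) + math.exp(y))

def f_minsum(x, y):
    # widely used approximation: sign(x) * sign(y) * min(|x|, |y|)
    s = (1 if x >= 0 else -1) * (1 if y >= 0 else -1)
    return s * min(abs(x), abs(y))
```

Note that `f_exact` coincides with the equivalent form $2\,\mathrm{artanh}(\tanh(x/2)\tanh(y/2))$.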
Recall the optimal mapping between the source code and the channel code, as shown in Figure 2; because soft message exchange between SFG and CFG involves the mapping, we describe this in greater detail below.
In the J-BP decoder of the D-PC scheme, soft messages $R_{i,m}^t$ are transferred to the CFG through a mapping. In contrast, soft messages $L_{i,m+1}^t$ are transferred to the SFG through an inverse mapping. Let $\pi^{-1}$ denote the inverse of the mapping $\pi$. Then, the soft message exchange between the SFG and CFG can be expressed by
$$R_{\pi(i),m+1}^t = R_{i,m}^t, \qquad L_{\pi^{-1}(j),m}^t = L_{j,m+1}^t,$$
for $i \in \mathcal{H}$ and $j \in \mathcal{A}$.
To avoid redundant iterations, we apply the early stopping criterion for the J-BP decoder. For the early stopping criterion of the BP decoder for the polar codes, refer to [30]. If the early stopping criterion holds for both the source PC and channel PC, the J-BP decoder terminates and outputs the estimation of the source. The estimation of the source is denoted as s ^ , which can be obtained by
$$\hat{s}_i = D(L_{i,0}^t + R_{i,0}^t), \quad 1 \leq i \leq N_{sc}.$$
Otherwise, the lossless BP decoding is activated if the early stopping criterion holds only for the channel polar code (Lines 7–9, Algorithm 2).

4.2.3. Lossless Source BP Decoding

The description of the lossless source BP decoding is provided in Algorithm 3. The channel polar code can be regarded as having been decoded successfully if the early stopping criterion holds for the channel polar code, which means we can obtain the estimation v ^ of v . For the J-BP decoder, the estimation v ^ can be calculated by
$$\hat{v}_{\pi^{-1}(i)} = D(L_{i,m+1}^t + R_{i,m+1}^t),$$
where $i \in \mathcal{A}$. The estimate $\hat{\mathbf{v}}$ can be divided into two parts, $\hat{c}_{\mathcal{H}}$ and $\hat{\mathbf{e}}$. Therefore, the SFG of the J-BP decoder can be reinitialized based on $\hat{c}_{\mathcal{H}}$ as follows:
$$L_{i,m}^0 = \begin{cases} (1 - 2\hat{c}_i) \cdot \infty, & \text{if } i \in \mathcal{H}, \\ 0, & \text{otherwise.} \end{cases}$$
Meanwhile, we can use the vector $\hat{\mathbf{e}}$ to recover the error set $\mathcal{E}$, as shown in Line 2 of Algorithm 3. One element of the set $\mathcal{E}$ is represented by a binary vector of $m$ bits. The cardinality of $\mathcal{E}$ can be obtained as $(K_{cc} - K_{sc})/m$, i.e., the number of errors that occurred in the source BP decoding is known. If the cardinality of $\mathcal{E}$ is not zero, we can convert every $m$ bits of the binary vector $\hat{\mathbf{e}}$ into a decimal number to obtain an element of $\mathcal{E}$. After recovering the error set $\mathcal{E}$, the source decoding errors can be corrected, as we know in advance the indices of the errors that will occur during the source decoding process.
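The recovery of $\mathcal{E}$ from $\hat{\mathbf{e}}$ and the subsequent correction can be sketched as follows (function names are ours):

```python
def recover_error_set(e_bits, m):
    # every m-bit group of e encodes one index of E in binary form
    assert len(e_bits) % m == 0
    return [int("".join(map(str, e_bits[k:k + m])), 2)
            for k in range(0, len(e_bits), m)]

def apply_error_set(c_hat, E):
    # flip the converged decisions at the indices flagged by the encoder
    c = list(c_hat)
    for i in E:
        c[i] ^= 1
    return c
```

For example, with $m = 3$, the bit string 011 100 decodes to the index set {3, 4}.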
Algorithm 3: Lossless BP Source Decoding.
The estimate $\hat{\mathbf{c}}$ can be updated by executing BP decoding over the SFG. If $\hat{c}_i^t$ satisfies the convergence condition (3), the estimate $\hat{c}_i^t$ can be assumed to have converged. However, some $\hat{c}_i^t$ keep changing constantly during the iterations; this is called an oscillation error, in which case the convergence condition (3) is invalid. To resolve such oscillation errors, we take the decision $D(L_{i,m}^t)$ corresponding to the largest $|L_{i,m}^t|$ recorded during the oscillation as the converged $\hat{c}_i$, recalling (4). If $i \notin \mathcal{E}$, the converged $\hat{c}_i$ is regarded as a correct estimate; furthermore, we can obtain the correct estimate by flipping the converged $\hat{c}_i$ if $i \in \mathcal{E}$. The lossless source BP decoding terminates when the iteration number reaches the preset maximum $I_{sc}$ or the estimate $\hat{\mathbf{c}}$ converges. In the end, the source estimate $\hat{\mathbf{s}}$ can be obtained by re-encoding $\hat{\mathbf{c}}$.

5. Joint Source and Channel Decoding of SD-PC

In this section, we first extend the factor graph representation to the SD-PC, which we call the SJ-FG. Second, we propose a joint source and channel decoding algorithm based on the SJ-FG, i.e., the SJ-BP decoder.

5.1. Systematic Joint Factor Graph Representation

The SD-PC can be represented by an n J -stages SJ-FG, where n J = m + n . For the SD-PC scheme, the high-entropy bits of the source PC are directly transmitted by the systematic bits of the channel SPC. Therefore, the variable node corresponding to the high-entropy bit can be connected with the variable node corresponding to the systematic bit by a check node. Figure 3 shows an SJ-FG of the SD-PC scheme. In this SJ-FG, the m stages on the left side represent the source PC and the n stages on the right side represent the channel SPC. The check nodes connect the source high-entropy nodes and the channel systematic nodes. The variable nodes in the leftmost column correspond to the source vector s , and the variable nodes in the rightmost column correspond to the vector u .

5.2. Systematic Joint Source and Channel Decoding

The SJ-BP decoder iteratively operates over the SJ-FG representing the SD-PC. The high-level description of the SJ-BP decoding algorithm has been provided in Algorithm 2. The details of the SJ-BP decoder are presented as follows.

5.2.1. Initialization

The initialization of R i , 0 t is provided in (18). For i ∈ B c , the message R i , m + 1 t associated with the parity check node is initialized as R i , m + 1 t = L ( y i ) . The variable nodes corresponding to the frozen bits are located in the rightmost column. The message L i , n J + 1 t is initialized by
L i , n J + 1 t = + ∞ , if i ∈ A c , 0 , otherwise .
Moreover, the messages in the remaining variable nodes in the SJ-FG are all initialized to zeros.
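The rightmost-column initialization above can be sketched as follows; the function name and the set representation of the information set A are illustrative assumptions.

```python
import numpy as np

def init_rightmost_llrs(N, info_set_A):
    """Initialize L_{i, nJ+1} for the rightmost column of the SJ-FG.

    Frozen positions (i in A^c) are known zeros, so their LLRs are set
    to +infinity; information positions start at 0 (no prior knowledge).
    """
    llrs = np.zeros(N)
    frozen = [i for i in range(N) if i not in info_set_A]  # A^c
    llrs[frozen] = np.inf
    return llrs
```

All remaining variable-node messages would simply start at zero, matching the text.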

5.2.2. Iteration Decoding

The SJ-BP decoder is activated after initialization is complete. The message passing schedule of the SJ-BP decoder is the same as that of the J-BP decoder. The message update rule employed by the SJ-BP decoder is described by (20). The soft message exchange between the SFG and the CFG can be expressed as follows:
R a i , m + 1 t = R h i , m t + L ( y a i ) , L h i , m t = L a i , m + 1 t + L ( y a i ) ,
where a i is the ith element of the set A and h i is the ith element of the set H . The SJ-BP decoding is terminated when the source PC and channel SPC satisfy the early stopping criterion or the iteration number reaches the preset maximum, I m a x . If the early stopping criterion holds only for the channel SPC, the lossless source BP decoding is activated.
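A minimal sketch of this exchange, assuming the three input sequences are aligned over i = 1 , … , | H | (function and variable names are hypothetical):

```python
def exchange_messages(R_h, L_a, channel_llrs_a):
    """Soft message exchange across the check nodes linking the source
    high-entropy nodes (indices h_i in H) to the channel systematic
    nodes (indices a_i in A), per the update rule in the text:
        R_{a_i, m+1} = R_{h_i, m}   + L(y_{a_i})
        L_{h_i, m}   = L_{a_i, m+1} + L(y_{a_i})
    """
    R_a = [r + ly for r, ly in zip(R_h, channel_llrs_a)]
    L_h = [l + ly for l, ly in zip(L_a, channel_llrs_a)]
    return R_a, L_h
```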

5.2.3. Lossless Source BP Decoding

When the early stopping criterion holds for the channel SPC, the estimation v ^ can be calculated as
v ^ i = D ( L i , m + 1 t + R i , m + 1 t ) ,
where i ∈ B . Then, we split the vector v ^ into the estimations c ^ H and e ^ , where e ^ is further recovered as the set E . Because source polar coding is common to the D-PC and SD-PC schemes, the lossless source BP decoding is the same in the J-BP decoder and the SJ-BP decoder, as provided in Algorithm 3.
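The split step can be sketched as follows. Note that the paper does not specify how the error set E is serialized into e ^ , so the fixed-width index encoding below is purely an illustrative assumption, as are the function name and parameters.

```python
def split_estimate(v_hat, n_compressed, width=4):
    """Split the recovered vector v_hat into the compressed source
    estimate c_H (first n_compressed bits) and the error-set bits
    e_hat, then deserialize e_hat into the index set E.

    The fixed `width`-bit index encoding (index 0 reserved as padding)
    is a hypothetical serialization chosen only for this sketch.
    """
    c_H = v_hat[:n_compressed]
    e_hat = v_hat[n_compressed:]
    E = set()
    for k in range(0, len(e_hat) - width + 1, width):
        idx = int("".join(map(str, e_hat[k:k + width])), 2)
        if idx:                      # skip the all-zero padding word
            E.add(idx)
    return c_H, E
```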

6. B-EXIT Convergence Analysis

The EXIT chart [31] is an efficient tool for analyzing the convergence of iterative decoders. It provides an excellent visual representation of the iterative decoder by tracking the mutual information (MI) at each iteration. In this section, we perform a biased-EXIT (B-EXIT) convergence analysis for the proposed JSCC system with a binary Bernoulli source. To evaluate the convergence performance of the J-BP decoder and the SJ-BP decoder under a binary Bernoulli source, we track the average MI (AMI) transfer process between the SFG and the CFG, which provides a visual representation of the iterative decoding process. The EXIT method is named B-EXIT due to the biased source; it is shown in Algorithm 4.
Algorithm 4: B-EXIT analysis.
For a uniform source, it can be assumed without loss of generality that the codeword sent is all zeros, which is not reasonable for a biased source. Therefore, as shown in lines 2–3 of Algorithm 4, we actually generate a codeword x for the D-PC or SD-PC with a biased source P S ( 1 ) = p and record the compressed bits c H . Then, the receiver obtains the signal y by sending the codeword x over the binary phase shift keying (BPSK) modulated additive white Gaussian noise (AWGN) channel. Based on the received signal y and the source distribution P S ( 1 ) = p , the joint source-channel decoder, i.e., the J-BP decoder and SJ-BP decoder, can be initialized.
For the t-th iteration of the J-BP decoder, let I A , C C ( t ) ( I A , S C ( t ) ) denote the a priori AMI between the compressed bits and the input LLRs passed from the SFG to the CFG at iteration t − 1 (from the CFG to the SFG at iteration t). Similarly, I E , C C ( t ) ( I E , S C ( t ) ) denotes the extrinsic AMI between the compressed bits and the output LLRs passed from the CFG to the SFG (from the SFG to the CFG) at iteration t. For the J-BP decoder and the SJ-BP decoder, the input LLRs of the CFG (SFG) are R i , m t ( L i , m t ), and the output LLRs of the CFG (SFG) are L i , m ( t ) ( R i , m ( t ) ). To track the decoding trajectory, we collect all the R i , m ( t ) and L i , m ( t ) at each iteration using a number of simulated code blocks (Line 6, Algorithm 4). For i ∈ H , let SC ( t ) ( i | 0 ) ( SC ( t ) ( i | 1 ) ) collect R i , m t when c i = 0 ( c i = 1 ), and let CC ( t ) ( i | 0 ) ( CC ( t ) ( i | 1 ) ) collect L i , m t when c i = 0 ( c i = 1 ). In Line 7 of Algorithm 4, for t = 1 , 2 , … , I , we use SC ( t ) ( i | 0 ) ( SC ( t ) ( i | 1 ) ) to estimate the PDFs P E , SC ( · | 0 ) ( P E , SC ( · | 1 ) ) with the histogram method; likewise, CC ( t ) ( i | 0 ) ( CC ( t ) ( i | 1 ) ) is used to estimate the PDFs P E , CC ( · | 0 ) ( P E , CC ( · | 1 ) ).
After the probability density functions (PDFs) of these LLRs have been estimated with the histogram method, we can calculate the AMI I E , C C ( t ) ( I E , S C ( t ) ) . The compressed bits c H are not uniform because the source s is biased. Therefore, we first need to calculate the mutual information between each compressed bit c i and the corresponding LLR independently. Let I E , SC ( t ) ( i | · ) ( I E , CC ( t ) ( i | · ) ) denote the mutual information between the compressed bit c i and R i , m t ( L i , m t ) ; we can obtain P Y ( y | x ) by the histogram method. Here, P X ( x ) refers to the probability distribution of the compressed bits { c i | i ∈ H } . The probability distribution of each bit in c H is different. The compressed bit c i is obtained by multiplying the source sequence s and the ith column of the generator matrix. Let the set W denote the indices of the ones in the ith column of the generator matrix and let d i denote the Hamming weight of that column. Then, we have
c i = s w 1 ⊕ s w 2 ⊕ ⋯ ⊕ s w d i
for i ∈ H . Here, P c ( c i = 1 ) is the probability that the number of ones in the sequence s W is odd. Thus, we can obtain the distribution of the compressed bit c i as follows [32]:
P c ( c i = 0 ) = 1 2 + 1 2 ( 1 − 2 p ) d i , P c ( c i = 1 ) = 1 2 − 1 2 ( 1 − 2 p ) d i ,
where p is P s ( s i = 1 ) of the Bernoulli source S. After obtaining the probability distribution of the compressed bits { c i | i ∈ H } , we can calculate I E , SC ( t ) ( i | · ) ( I E , CC ( t ) ( i | · ) ) . Then, the AMIs I E , S C ( t ) and I E , C C ( t ) are provided by
I E , S C ( t ) = 1 | H | ∑ i ∈ H I E , SC ( t ) ( i | · ) , I E , C C ( t ) = 1 | H | ∑ i ∈ H I E , CC ( t ) ( i | · ) .
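The per-bit prior from the distribution formula and the per-bit MI computed from histogram-based conditional PDFs can be sketched as follows. The function names are hypothetical, and the PDF inputs are assumed to be normalized histograms sampled on a common bin grid; the AMI is then the average of the per-bit MI over i ∈ H.

```python
import numpy as np

def compressed_bit_prior(p, d_i):
    """Distribution of compressed bit c_i, the XOR of d_i i.i.d.
    Bernoulli(p) source bits (Gallager's lemma [32]):
        P(c_i = 1) = 1/2 - 1/2 * (1 - 2p)**d_i
    Returns (P(c_i = 0), P(c_i = 1)).
    """
    p1 = 0.5 - 0.5 * (1.0 - 2.0 * p) ** d_i
    return 1.0 - p1, p1

def bitwise_mi(pdf0, pdf1, p0, p1, bin_width):
    """Mutual information I(c_i; L) for a biased bit, where
    pdf0 = P(L | c_i = 0) and pdf1 = P(L | c_i = 1) are histogram
    estimates on a common grid with spacing bin_width."""
    mi = 0.0
    for f0, f1 in zip(pdf0, pdf1):
        mix = p0 * f0 + p1 * f1          # marginal PDF of L
        if f0 > 0 and mix > 0:
            mi += p0 * f0 * np.log2(f0 / mix) * bin_width
        if f1 > 0 and mix > 0:
            mi += p1 * f1 * np.log2(f1 / mix) * bin_width
    return mi
```

For an unbiased bit with perfectly separated conditional PDFs, the per-bit MI evaluates to 1 bit, which is a useful sanity check on the histogram grid.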
The computed AMIs can be plotted as a zigzag path reflecting the decoding trajectory. On the EXIT plane, the 2 I AMI pairs can be written as coordinate points as follows:
I A , CC ( t ) , I E , CC ( t ) = I E , SC ( t 1 ) , I E , CC ( t ) , I E , SC ( t ) , I A , SC ( t ) = I E , SC ( t ) , I E , CC ( t ) ,
where t = 1 , 2 , , I . By connecting the vertices at both ends of the zigzag path separately, we can obtain two curves, corresponding to T c c ( · ) and T s c 1 ( · ) . We now connect the adjacent coordinate points in the form
I A , CC ( t ) , I E , CC ( t ) I E , SC ( t ) , I A , SC ( t ) ,
where t = 1 , 2 , … , I . Then, we obtain the extrinsic AMI transfer trajectory, which reflects the convergence speed of the proposed joint source-channel decoder with finite code length and a limited number of iterations.
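The construction of the zigzag trajectory from the per-iteration extrinsic AMIs can be sketched as follows, assuming I E , S C ( 0 ) = 0 as the starting a priori knowledge (the function name is hypothetical).

```python
def exit_trajectory(I_E_SC, I_E_CC):
    """Build the zigzag decoding trajectory from per-iteration
    extrinsic AMIs, pairing the coordinate points as in the text:
        (I_A_CC^(t), I_E_CC^(t)) = (I_E_SC^(t-1), I_E_CC^(t))
        (I_E_SC^(t), I_A_SC^(t)) = (I_E_SC^(t),   I_E_CC^(t))
    """
    points = []
    prev_sc = 0.0                      # I_E_SC^(0) assumed zero
    for e_sc, e_cc in zip(I_E_SC, I_E_CC):
        points.append((prev_sc, e_cc))  # (I_A_CC^(t), I_E_CC^(t))
        points.append((e_sc, e_cc))     # (I_E_SC^(t), I_A_SC^(t))
        prev_sc = e_sc
    return points
```

Connecting consecutive points reproduces the staircase path between the T c c ( · ) and T s c − 1 ( · ) curves described above.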

7. Performance Evaluation

In this section, we illustrate the advantages of the proposed JSCC system. We consider a binary Bernoulli source S with P S ( 1 ) = p . Let R s c = K s c / N s c denote the compression rate and R t = N s c / N c c denote the overall rate of the JSCC system. We simulate the bit error rate (BER) performance of the proposed JSCC system over the binary phase shift keying (BPSK) modulated additive white Gaussian noise (AWGN) channel. For all results, the signal-to-noise ratio is represented by E b / N 0 , where E b refers to the energy per source bit and N 0 denotes the single-sided noise power spectral density. The employed polar codes are constructed by the Gaussian approximation method [33].
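The simulation setup above (BPSK over AWGN, with E b defined per source bit) can be sketched as follows. The scaling of the per-coded-bit SNR by the overall rate R t and the function name are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def bpsk_awgn_llrs(bits, ebn0_db, rate, rng=None):
    """Transmit bits with BPSK over AWGN and return channel LLRs.

    Since Eb is energy per *source* bit, the per-coded-bit SNR is
    assumed here to scale with the overall rate R_t = N_sc / N_cc.
    """
    rng = rng or np.random.default_rng(0)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma2 = 1.0 / (2.0 * rate * ebn0)       # noise variance per dimension
    x = 1.0 - 2.0 * np.asarray(bits)         # BPSK mapping: 0 -> +1, 1 -> -1
    y = x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)
    return 2.0 * y / sigma2                  # L(y_i) = 2 y_i / sigma^2
```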

7.1. B-EXIT Analysis

The B-EXIT analysis of the proposed JSCC system for a binary Bernoulli source with p = 0.04 is provided in this part. Two EXIT charts are shown in Figure 4, where the iteration number is I = 28 . In Figure 4a with E b / N 0 = 1.75 dB, we can observe that the extrinsic AMI increment step of the J-BP decoder sees an improvement if the optimal mapping is introduced in the D-PC. In the region with I A , C C / I E , S C < 0.8 , the convergence of J-BP is faster than that of J-BP (no π ), where “no π ” means that the optimal mapper is removed in the D-PC scheme. When I A , C C / I E , S C is larger than 0.8, the decoding trajectories of J-BP and J-BP (no π ) gradually approach each other. Figure 4b shows that the extrinsic AMI increment step of the SD-PC is superior to the D-PC with E b / N 0 = 2.5 dB. In the region where I A , C C / I E , S C is less than 0.6, the SJ-BP decoder converges significantly faster than the J-BP decoder. The convergence of the J-BP decoder is accelerated when I A , C C / I E , S C is greater than 0.6, although the final convergence coordinate of the J-BP decoder remains significantly lower than that of the SJ-BP decoder.

7.2. Simulation Results

The BER results of the J-BP decoder and the SJ-BP decoder are provided in Figure 5 with p = 0.07 , N s c = 512 , and the overall rate R t = 1 / 2 . The source compression rate R s c is chosen from { 0.4 , 0.5 , 0.6 } . For comparison, the BER curves of the TL-BP decoder [28] are provided as benchmarks in Figure 5a. For a fair comparison with the TL-BP decoder, we set the maximum iteration number I m a x = 1000 . Meanwhile, to reduce complexity, the G-matrix [30] and cyclic redundancy check-aided (CA) [34] early stopping criteria are applied to the source polar code and the channel polar code, respectively.
In Figure 5a, we observe that the J-BP decoder shows no significant error floor compared with the TL-BP decoder [28]. The error floor of the TL-BP decoder is at the 10 − 4 level with the compression rate R s c = 0.6 , and deteriorates further to the 10 − 2 level as R s c is set to 0.5. In contrast, no error floor appears in the performance curves of the J-BP decoder or the SJ-BP decoder. The proposed JSCC system transmits the indices of the erroneous decisions that occur in the source decoding along with the compressed source through the channel. Therefore, as long as the channel decoding is correct, the indices of the erroneous decisions are known and the source decoding can be corrected, which is why the error floor disappears.
In addition, Figure 5a shows that the SJ-BP decoder outperforms the J-BP decoder. For R s c = 0.6 and R s c = 0.5 , the SJ-BP decoder yields gains of 0.30 dB and 0.28 dB, respectively, over the J-BP decoder at BER = 10 − 4 . For the SD-PC, the SJ-BP decoder obtains a 0.41 dB gain when R s c is reduced from 0.6 to 0.5 at BER = 10 − 4 . With a constant overall rate R t , a lower compression rate R s c implies a lower channel coding rate R c c , which is the reason for the additional gain. However, if R s c is further reduced to 0.4, the performance of the J-BP decoder deteriorates instead. A lower R s c leads to an increase in source decoding errors, and therefore the actual compression rate rises due to the need to transmit a larger error set E . In Table 1, we provide the average actual compression rate of the proposed JSCC system with various p. It can be observed that the actual compression rate increases only slightly for R s c = 0.6 and R s c = 0.5 , while for R s c = 0.4 the actual compression rate rises to 0.4559. The excessive rise in the average actual compression rate causes a significant loss in the performance of the JSCC system. In Figure 5a, a slight performance loss can be observed in the low E b / N 0 region for the SJ-BP decoder compared to the TL-BP decoder, as the actual compression rate is slightly larger than the preset compression rate R s c . For R s c = 0.6 , the J-BP decoder has a 0.18 dB performance loss compared with the TL-BP decoder at BER = 10 − 3 , which is due to the superior BER performance of SPC over PC [27].
Figure 5b demonstrates the effect of the optimal mapping for the D-PC scheme. In the low E b / N 0 region, the effect of the optimal mapping on the performance of the D-PC scheme is not obvious. The gain from the optimal mapping rises with E b / N 0 . When R s c = 0.6 and R s c = 0.5 , the optimal mapping brings gains of 1.67 dB and 0.51 dB, respectively, to the D-PC scheme at BER = 10 − 5 . Removing the optimal mapping from the D-PC scheme causes significant performance loss in the high E b / N 0 region. For R s c = 0.4 , the gain from the optimal mapping drops to 0.16 dB at BER = 10 − 4 , which is again due to the excessive increase in the actual compression rate.
Figure 6 provides a comparison of the SD-PC employing SPC with the DP-LDPC codes. To facilitate the comparison with the latest DP-LDPC scheme [13], the source code and the channel code are both designed with rate 1/2, so that the overall rate of the JSCC system is 1. From Figure 6, it can be observed that the SD-PC achieves significant performance gains in the high E b / N 0 region due to the absence of an error floor. For the source block length N s c = 512 with p = 0.07 , the SD-PC shows a 0.7 dB performance gap to DP-LDPC at BER = 10 − 3 ; as p decreases to 0.04, the gap reaches 0.9 dB. The DP-LDPC codes perform better than the double polar codes at low E b / N 0 . However, the DP-LDPC codes suffer from a severe error floor at short code lengths, a weakness that the SD-PC does not have. Because the SD-PC is not subject to an error floor, we can further reduce the compression rate R s c to obtain greater gain without increasing the error rate. As shown in Figure 6, the performance of the SD-PC in the low E b / N 0 region is close to that of the DP-LDPC codes when R s c = 0.35 , at which point the performance gap is reduced to 0.1 dB. Reducing R s c to obtain additional decoding gain is not suitable for the DP-LDPC codes due to their already high error floor.

7.3. Complexity Analysis

For both the D-PC and SD-PC, the encoding complexity is O ( N s c log 2 ( N s c ) + N c c log 2 ( N c c ) ) , and the joint source-channel decoding involves both source decoding and channel decoding. Let I denote the iteration number; then, the complexity of J-BP and SJ-BP decoding is O ( I N s c log 2 ( N s c ) + I N c c log 2 ( N c c ) ) . When the number of iterations is large, the decoding complexity limits the application of double polar codes. Figure 7 shows the average iteration number of the J-BP decoder and the SJ-BP decoder. The SJ-BP decoder has the lowest average iteration number. In Figure 7, all curves fall rapidly as E b / N 0 increases. Overall, the curve of the average iteration number shifts left as the compression rate decreases from R s c = 0.6 to R s c = 0.5 . However, for R s c = 0.4 the number of iterations increases as the compression rate decreases, for the same reasons as the performance loss discussed in Section 7.2. For the D-PC, the presence of the optimal mapping leads to an increase in the iteration number in the low E b / N 0 region and a decrease in the high E b / N 0 region. Thus, the optimal mapping provides an advantage in reducing the decoder complexity in the high E b / N 0 region.
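The operation-count estimate implied by this complexity order can be sketched as follows (constants ignored; the function name is hypothetical).

```python
import math

def jbp_complexity(N_sc, N_cc, iters):
    """Order-of-magnitude operation count for J-BP/SJ-BP decoding,
    O(I * (N_sc log2 N_sc + N_cc log2 N_cc)); constant factors are
    ignored, so this is only useful for relative comparisons."""
    return iters * (N_sc * math.log2(N_sc) + N_cc * math.log2(N_cc))
```

For example, with N s c = 512 and N c c = 1024 , each iteration costs on the order of 512 × 9 + 1024 × 10 node updates, so the average iteration number in Figure 7 directly scales the decoding cost.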

8. Conclusions

In this paper, a JSCC framework has been proposed that combines source polar coding and channel polar coding. The source is first compressed by a PC, then protected by another PC or SPC. To avoid an error floor, the source polar coding employs source check decoding to construct the error set, which collects the indices of source decoding errors. The construction and transmission of the error set ensure that source decoding succeeds as long as the channel decoding is correct. For the D-PC JSCC system, we prove a necessary condition for the optimal mapping to maximize the MI between the compressed source bits and their estimates. The D-PC and SD-PC can be represented by the J-FG and SJ-FG, respectively. On the receiver side, the J-BP decoder and the SJ-BP decoder are proposed based on the J-FG and SJ-FG. By calculating the probability distribution of the compressed bits, the B-EXIT chart is developed to analyze the decoding trajectory of the proposed joint source-channel decoders with a binary Bernoulli source. The simulation results show that the proposed JSCC system does not suffer from an error floor. Moreover, theoretical analysis and simulation results show that the SD-PC scheme achieves the best performance, while the optimal mapping improves the performance of the D-PC scheme. Note that we only consider AWGN channels in this paper; the optimal design of double polar codes in fading channels remains an open problem.

Author Contributions

This work was mainly performed by Y.D. (planning of the work, conceptualisation, investigation, methodology, data curation, resources, software, visualisation, and original draft preparation) and was completed with key contributions from K.N. (planning of the work, conceptualisation, supervision, manuscript review and editing, and funding acquisition). All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 92067202, Grant 62071058, and Grant 62001049.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Zhong, Y.; Alajaji, F.; Campbell, L. On the joint source-channel coding error exponent for discrete memoryless systems. IEEE Trans. Inf. Theory 2006, 52, 1450–1468.
  3. Kostina, V.; Verdu, S. Lossy Joint Source-Channel Coding in the Finite Blocklength Regime. IEEE Trans. Inf. Theory 2013, 59, 2545–2575.
  4. Pan, X.; Cuhadar, A.; Banihashemi, A. Combined source and channel coding with JPEG2000 and rate-compatible low-density Parity-check codes. IEEE Trans. Signal Process. 2006, 54, 1160–1164.
  5. Pu, L.; Wu, Z.; Bilgin, A.; Marcellin, M.W.; Vasic, B. LDPC-Based Iterative Joint Source-Channel Decoding for JPEG2000. IEEE Trans. Image Process. 2007, 16, 577–581.
  6. Zribi, A.; Pyndiah, R.; Zaibi, S.; Guilloud, F.; Bouallegue, A. Low-Complexity Soft Decoding of Huffman Codes and Iterative Joint Source Channel Decoding. IEEE Trans. Commun. 2012, 60, 1669–1679.
  7. Mei, Z.; Wu, L. Joint Source-Channel Decoding of Huffman codes with LDPC codes. J. Electron. 2006, 23, 806–809.
  8. Zhu, G.C.; Alajaji, F. Joint source-channel turbo coding for binary Markov sources. IEEE Trans. Wirel. Commun. 2006, 5, 1065–1075.
  9. Fresia, M.; Perez-Cruz, F.; Poor, H.V.; Verdu, S. Joint Source and Channel Coding. IEEE Signal Process. Mag. 2010, 27, 104–113.
  10. He, J.; Wang, L.; Chen, P. A joint source and channel coding scheme base on simple protograph structured codes. In Proceedings of the 2012 International Symposium on Communications and Information Technologies (ISCIT), Gold Coast, Australia, 2–5 October 2012; pp. 65–69.
  11. Chen, C.; Wang, L.; Lau, F.C.M. Joint Optimization of Protograph LDPC Code Pair for Joint Source and Channel Coding. IEEE Trans. Commun. 2018, 66, 3255–3267.
  12. Chen, Q.; Wang, L.; Hong, S.; Chen, Y. Integrated Design of JSCC Scheme Based on Double Protograph LDPC Codes System. IEEE Commun. Lett. 2019, 23, 218–221.
  13. Liu, S.; Wang, L.; Chen, J.; Hong, S. Joint Component Design for the JSCC System Based on DP-LDPC Codes. IEEE Trans. Commun. 2020, 68, 5808–5818.
  14. Arıkan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
  15. Arikan, E. A performance comparison of polar codes and Reed-Muller codes. IEEE Commun. Lett. 2008, 12, 447–449.
  16. Niu, K.; Chen, K. CRC-Aided Decoding of Polar Codes. IEEE Commun. Lett. 2012, 16, 1668–1671.
  17. Chen, K.; Niu, K.; Lin, J. Improved Successive Cancellation Decoding of Polar Codes. IEEE Trans. Commun. 2013, 61, 3100–3107.
  18. Niu, K.; Chen, K.; Lin, J.; Zhang, Q.T. Polar codes: Primary concepts and practical decoding algorithms. IEEE Commun. Mag. 2014, 52, 192–203.
  19. Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226.
  20. Arıkan, E. Source polarization. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 899–903.
  21. Korada, S.B.; Urbanke, R. Polar codes are optimal for lossy source coding. In Proceedings of the 2009 IEEE Information Theory Workshop, Volos, Greece, 10–13 June 2009; pp. 149–153.
  22. Cronie, H.S.; Korada, S.B. Lossless source coding with polar codes. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 904–908.
  23. Wang, Y.; Narayanan, K.R.; Jiang, A.A. Exploiting source redundancy to improve the rate of polar codes. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 864–868.
  24. Wang, Y.; Qin, M.; Narayanan, K.R.; Jiang, A.; Bandic, Z. Joint Source-Channel Decoding of Polar Codes for Language-Based Sources. In Proceedings of the 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, USA, 4–8 December 2016; pp. 1–6.
  25. Jin, L.; Yang, P.; Yang, H. Distributed Joint Source-Channel Decoding Using Systematic Polar Codes. IEEE Commun. Lett. 2018, 22, 49–52.
  26. Jin, L.; Yang, H. Joint Source-Channel Polarization With Side Information. IEEE Access 2018, 6, 7340–7349.
  27. Arikan, E. Systematic Polar Coding. IEEE Commun. Lett. 2011, 15, 860–862.
  28. Dong, Y.; Niu, K.; Dai, J.; Wang, S.; Yuan, Y. Joint Source and Channel Coding Using Double Polar Codes. IEEE Commun. Lett. 2021, 25, 2810–2814.
  29. Simsek, C.; Turk, K. Simplified Early Stopping Criterion for Belief-Propagation Polar Code Decoders. IEEE Commun. Lett. 2016, 20, 1515–1518.
  30. Yuan, B.; Parhi, K.K. Early Stopping Criteria for Energy-Efficient Low-Latency Belief-Propagation Polar Code Decoders. IEEE Trans. Signal Process. 2014, 62, 6496–6506.
  31. Ten Brink, S. Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 2001, 49, 1727–1737.
  32. Gallager, R. Low-density parity-check codes. IRE Trans. Inf. Theory 1962, 8, 21–28.
  33. Trifonov, P. Efficient Design and Decoding of Polar Codes. IEEE Trans. Commun. 2012, 60, 3221–3227.
  34. Ren, Y.; Zhang, C.; Liu, X.; You, X. Efficient early termination schemes for belief-propagation decoding of polar codes. In Proceedings of the 2015 IEEE 11th International Conference on ASIC (ASICON), Chengdu, China, 3–6 November 2015; pp. 1–4.
Figure 1. The JSCC framework based on the polar code.
Figure 2. The joint factor graph of the D-PC scheme.
Figure 3. The systematic joint factor graph of the SD-PC scheme.
Figure 4. Two B-EXIT charts with I = 28 iterations: (a) impact of the optimal mapping for the D-PC scheme with E b / N 0 = 1.75 dB and (b) performance comparison between D-PC and SD-PC with E b / N 0 = 2.5 dB.
Figure 5. BER performance comparison: (a) J-BP decoding, SJ-BP decoding, and TL-BP decoding; (b) J-BP decoding vs. J-BP (no π ) decoding [28].
Figure 6. BER performance of the SD-PC JSCC system versus the DP-LDPC JSCC system with N s c = 512 [13].
Figure 7. Average iteration number of J-BP decoder with p = 0.07 .
Table 1. Actual compression rate of the source polar coding.
R s c 	0.6	0.5	0.4
p = 0.07	0.6002	0.5078	0.4559
p = 0.04	0.6	0.5	0.4017
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Dong, Y.; Niu, K. Double Polar Codes for Joint Source and Channel Coding. Electronics 2022, 11, 3557. https://doi.org/10.3390/electronics11213557