Distributed Joint Source-Channel Coding Using Quasi-Uniform Systematic Polar Codes

This paper proposes a distributed joint source-channel coding (DJSCC) scheme using polar-like codes. In the proposed scheme, each distributed source encodes its message with a quasi-uniform systematic polar code (QSPC) or a punctured QSPC, and transmits only the parity bits over its independent channel. These systematic codes play the role of both source compression and error protection. For infinite code-length, we show that the proposed scheme approaches the information-theoretical limit by the technique of joint source-channel polarization with side information. For finite code-length, simulation results verify that the proposed scheme outperforms the distributed separate source-channel coding (DSSCC) scheme using polar codes and the DJSCC scheme using classic systematic polar codes.


Introduction
Polar codes, invented by Arikan [1] using a technique called channel polarization, are capable of achieving the symmetric capacity of any binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. Afterwards, the concept of source polarization was introduced in [2] as the complement of channel polarization. One immediate application of source polarization is the design of polar codes for source coding. Since the polarization phenomenon exists on both the source and channel sides, it is of natural interest to integrate channel polarization and source polarization for joint source-channel coding (JSCC). From the perspective of information theory, this interest is motivated by the fact that the error exponent of JSCC is better than that of separate source-channel coding (SSCC) [3], which may indicate a performance loss of SSCC under finite code-length. From a practical point of view, two main shortcomings of SSCC resulting in unsatisfactory performance are summarized as follows:

• With finite code-length, channel decoding errors are unavoidable, and such errors may be disastrous for the source decoder.

• With finite code-length, the output of source coding may contain residual redundancy, and such redundancy is not exploited by the channel decoder to further improve performance.

Therefore, the JSCC scheme, which takes both source redundancy and channel errors into consideration jointly, may have certain advantages over SSCC.
In recent years, JSCC has attracted increasing attention among researchers. JSCC schemes based on turbo codes, LDPC codes, and polar codes can be found in [4][5][6][7] and [8][9][10], respectively. In particular, the authors of [11] proposed a class of systematic polar codes (SPCs), called quasi-uniform SPCs (QSPCs), and showed that QSPCs are optimal for the problem of JSCC with side information and outperform classic SPCs [12].
For the distributed source coding (DSC) problem, the Slepian-Wolf theorem [13] states that for two or more correlated sources, the lossless compression rates of joint encoding can be achieved with separate encoding if a joint decoder is used at the receiver. This theorem has been known for a long time, but a practical DSC scheme was proposed only much later by Pradhan and Ramchandran using syndromes [14]. Inspired by the syndrome approach, a large number of DSC schemes have sprung up based on powerful channel codes (CCs) such as turbo codes, LDPC codes, and polar codes [15][16][17]. However, to obtain good performance, these syndrome-based schemes usually need a considerably long code-length (10^4 to 10^5) and are sensitive to channel errors. Alternatively, the design of a DSC scheme can be based on the parity approach, where a systematic code is used to encode the source and the source is recovered from the parity bits. Such schemes can be extended to distributed JSCC (DJSCC) [18,19]. Our work falls into this category. Specifically, we focus on the design of parity-based DJSCC using polar/polar-like codes, where only parity bits are transmitted over the noisy channels. Each distributed source has one encoder that encodes the source message for both compression and error protection.
The main contributions of this paper are summarized as follows. We propose a new class of polar-like codes, called punctured QSPCs, and show their asymptotic optimality. We then propose a DJSCC scheme using the two polar-like codes and show that it approaches the achievable rate region.
The rest of this paper is organized as follows. The system model is given in Section 2. Section 3 introduces QSPCs and punctured QSPCs. In Section 4, we propose a DJSCC scheme based on the two polar-like codes and analyze its performance limit. Section 5 presents simulation results verifying the advantage of the proposed scheme, and Section 6 concludes this paper.

System Model
Consider the problem of transmitting distributed sources V_1, V_2, ..., V_s over independent symmetric B-DMCs W_1, W_2, ..., W_s to a common destination. The message from source V_i, i = 1, 2, ..., s, is denoted by a binary vector v_i of length K. Let R̃_i = Ñ_i / K be the rate, measured in the number of channel-uses per source-symbol, where Ñ_i is the number of channel uses of source V_i for transmitting v_i. Let C_i denote the channel capacity of W_i. Since the source-channel separation theorem still holds for this problem [20], reliable transmission is possible if R̃ = (R̃_1, R̃_2, ..., R̃_s) falls into the achievable rate region R_rate, which is defined as

R_rate = {R̃ : R_SW ∩ R_c(R̃) ≠ ∅},

where R_SW is the Slepian-Wolf rate region of the sources and R_c(R̃) is the set of all points (R_1, R_2, ..., R_s) satisfying R_i ≤ R̃_i C_i for i = 1, 2, ..., s. Given R̃^(1), R̃^(2) ∈ R_rate and 0 ≤ λ ≤ 1, by the definition of convex sets, we can readily prove that λR̃^(1) + (1 − λ)R̃^(2) ∈ R_rate as follows. There obviously exist R*_1 ∈ R_SW ∩ R_c(R̃^(1)) and R*_2 ∈ R_SW ∩ R_c(R̃^(2)). Due to the convexity of R_SW, λR*_1 + (1 − λ)R*_2 ∈ R_SW. Besides, it can be observed that λR*_1 + (1 − λ)R*_2 ∈ R_c(λR̃^(1) + (1 − λ)R̃^(2)), since each component satisfies λR*_{1,i} + (1 − λ)R*_{2,i} ≤ (λR̃^(1)_i + (1 − λ)R̃^(2)_i) C_i. This indicates that at least one point exists in R_SW ∩ R_c(λR̃^(1) + (1 − λ)R̃^(2)), i.e., R_SW ∩ R_c(λR̃^(1) + (1 − λ)R̃^(2)) ≠ ∅. Therefore, λR̃^(1) + (1 − λ)R̃^(2) ∈ R_rate and the achievable rate region R_rate is convex.
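As an illustration of the region membership test, the following minimal Python sketch (function names are our own, not from the paper) specializes to two uniform binary sources correlated through a BSC(q). A rate pair R̃ lies in R_rate iff the corner point (R̃_1 C_1, R̃_2 C_2) of the deliverable-rate box satisfies the three Slepian-Wolf inequalities:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def achievable(Rt1, Rt2, C1, C2, q):
    """Check whether (Rt1, Rt2), in channel-uses per source-symbol, is
    achievable for two uniform binary sources with BSC(q) correlation:
    some (R1, R2) in the Slepian-Wolf region must satisfy R_i <= Rt_i*C_i,
    which holds iff the corner point obeys all three inequalities."""
    H1_given_2 = h2(q)       # H(V1|V2)
    H2_given_1 = h2(q)       # H(V2|V1)
    H_joint = 1.0 + h2(q)    # H(V1,V2) = H(V1) + H(V2|V1), H(V1) = 1
    return (Rt1 * C1 >= H1_given_2 and
            Rt2 * C2 >= H2_given_1 and
            Rt1 * C1 + Rt2 * C2 >= H_joint)
```

For instance, with q = 0.11 and C_1 = C_2 = 0.5, the pair (2.2, 1.1) used in the simulations of Section 5 passes all three checks.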
To solve this transmission problem, one option is the distributed SSCC (DSSCC) scheme shown in Figure 1. The message from each source is first compressed with DSC. Then each source encodes its compressed bits into channel code bits with a CC and transmits these channel code bits over its channel. At the destination, the received signals are first decoded by the channel decoders, and the decoded bits are then jointly decompressed by a source joint decoder. There is no interaction between the source layer and the channel layer in the sense of encoding and decoding. Such a DSSCC scheme is asymptotically optimal as the code-length goes to infinity. However, with finite code-length, such source-channel separate schemes are generally sensitive to the residual errors of the channel decoders, and are thus ineffective in some scenarios such as wireless sensor networks (WSNs). To address this problem, in this paper we propose a DJSCC scheme using polar-like codes to achieve the corner point of R_rate, i.e.,

R̃_i = H(v^j_i | v^j_1 v^j_2 ... v^j_{i−1}) / C_i (5)

for i = 1, 2, ..., s. As shown in Figure 2, each source is equipped with only one encoder that encodes the source message for both compression and error protection. Note that the order of the chain rule v^j_1 v^j_2 ... v^j_s in (5) has no effect on the code construction and overall performance; hence the ascending order is considered in this paper. It is also obvious that multiple points in R_rate can be achieved with time-sharing operations, provided that DJSCC schemes for the corner points have been well constructed.

QSPC and Punctured QSPC
In this section, we briefly review QSPCs and propose punctured QSPCs. First, two index sets and their properties are presented. Consider a binary sequence L of length N = 2^n. The coordinates of the ones in L are quasi-uniformly distributed and are denoted by the index set Q(N, K) or Q̄(N, K) (refer to [11] for the formal definition). The property of this sequence is given as Proposition 1.


QSPC
A class of SPCs, called QSPCs, was introduced for the problem of JSCC with side information. For an (MN, MK) QSPC (M = 2^m, N = 2^n, K ≤ N), systematic bits x_B are first encoded by

ũ_Ã = x_B (G̃_MN(Ã, B))^{−1},

where G̃_MN(Ã, B) denotes the sub-matrix of G̃_MN with the rows indexed by Ã and the columns indexed by B. Then the codeword is obtained by

x = ũ_Ã G̃_MN(Ã) + ũ_{Ã^c} G̃_MN(Ã^c),

where G̃_MN(Ã) and G̃_MN(Ã^c) denote the rows of G̃_MN indexed by Ã and Ã^c, respectively. The generator matrix is given by

G̃_MN = D G_MN = D B_MN F^{⊗(m+n)},

where B_MN is a bit-reversal matrix, F = [1 0; 1 1], and D is an invertible bit-swap coding matrix (refer to Section IV.A of [11] for details). It can be seen that x = ũ G̃_MN = u G_MN and u = ũ D. For this code, the typical decoder of polar codes attempts to decode u; the codeword x is then obtained by x = u G_MN. The entropies/BERs (bit error rates) of u are determined by the underlying channels and the sources. With the aid of density evolution, we can locate those bits u_A which have low entropies/BERs. However, the invertibility of G_MN(A, B) cannot be guaranteed, i.e., it is impossible to construct QSPCs via original polar coding. The bit-swap coding D is then introduced to change the information bits from u_A to ũ_Ã and to modify the generator matrix to G̃_MN = D G_MN. This operation ensures that G̃_MN(Ã, B) is always invertible. The block error rate (BLER) of x under the SC decoder can be upper-bounded by the sum of P_e(u_i) over the information positions, where P_e(u_i) is the BER of u_i. Consider a source message v = [v_1, v_2, ..., v_MK] with side information w = [w_1, w_2, ..., w_MK]. When v is encoded by an (MN, MK) QSPC for JSCC, only parity bits x_{B^c} are transmitted over the noisy channel W(y|x). Due to B = Q(MN, MK), the codeword structure can be depicted as in Figure 3, and Theorem 1 can be obtained: P_BLER of the QSPC tends to 0 as M goes to infinity, provided that H(v|w)/C ≤ (N − K)/K.
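The relations x = u G_MN and the decoder's re-encoding step rest on the standard O(N log N) polar butterfly. Since F^2 = I over GF(2), the transform is its own inverse, so applying it twice recovers u. The sketch below implements only the plain polar transform F^{⊗n} (no bit-reversal B_MN and no bit-swap D, which are specific to the QSPC construction in [11]):

```python
def polar_transform(bits):
    """Apply the polar transform G_N = F^{(x)n}, F = [[1,0],[1,1]],
    over GF(2) with the O(N log N) butterfly (natural bit order)."""
    x = list(bits)
    n = len(x)
    assert n & (n - 1) == 0 and n > 0, "length must be a power of two"
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # upper branch: u1 + u2 (mod 2)
        step *= 2
    return x
```

Because G_N is involutory over GF(2), `polar_transform(polar_transform(u)) == u`, which is exactly why the decoder can recover x from its estimate of u by one more transform.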

Punctured QSPC
By puncturing MP parity bits from an (MN, MK) QSPC, we can construct a punctured QSPC with a lower rate R̃_i (in channel-uses per source-symbol). This punctured QSPC is specified by the parameters (MN, MK, MP). As depicted in Figure 4, there are three kinds of bits (systematic bits, parity bits, and punctured bits) in the codeword, and the coordinates of the punctured bits are Q̄(MN, MP). At the stage of code construction, the punctured bits are regarded as inputs of a zero-capacity channel when calculating the BERs of u and performing the bit-swap coding. When this punctured QSPC is used to encode a source message v = [v_1, v_2, ..., v_MK] with side information w = [w_1, w_2, ..., w_MK], Theorem 2 is obtained similarly.
Proof of Theorem 2. The codeword structure of Figure 4 ensures that Z(u_i | y u_{1:i−1}) and H(u_i | y u_{1:i−1}) converge to either 0 or 1 as M goes to infinity, similarly to the QSPC case. The total entropy can be calculated by

(1/MN) Σ_{i=1}^{MN} H(u_i | y u_{1:i−1}) = [K H(v|w) + P + (N − K − P) H(x|y)] / N.

Therefore, the fraction of the high-entropy part (H ≈ 1) is [K H(v|w) + P + (N − K − P) H(x|y)] / N, and the fraction of the low-entropy part (H ≈ 0) is 1 − [K H(v|w) + P + (N − K − P) H(x|y)] / N. This theorem also shows that P_BLER of the punctured QSPC tends to 0 as M goes to infinity, if H(v|w)/(1 − H(x|y)) = H(v|w)/C ≤ (N − K − P)/K. Since punctured bits contribute zero channel-uses, the punctured QSPC incurs no performance loss. After establishing a supermartingale for Z(u_i | y u_{1:i−1}), we also have P_BLER ≤ 2^{−M^β} → 0 for any β ∈ (0, 1/2).
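The condition H(v|w)/C ≤ (N − K − P)/K can be illustrated numerically. The sketch below (parameter names are ours) computes the largest asymptotically admissible puncture count P per length-N segment:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_puncture(N, K, H_vw, C):
    """Largest integer P with H(v|w)/C <= (N - K - P)/K,
    i.e. P <= N - K - K * H(v|w) / C (asymptotic condition only)."""
    return math.floor(N - K - K * H_vw / C)

# With the segment parameters of Section 5 (N = 32, K = 10,
# H(v|w) = h2(0.11)) and a channel capacity of C = 0.5, the bound
# allows P <= 12; the simulated code punctures P = 11 per segment,
# leaving some margin for finite-length loss.
```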

Proposed DJSCC Scheme and Performance Limit
In this section, we propose a DJSCC scheme based on the two polar-like codes constructed above, and prove its asymptotic optimality.

Proposed DJSCC Scheme
To achieve the corner point (5), we turn this problem into a problem of JSCC with side information at the receiver, and solve it with the QSPC and the punctured QSPC. The proposed DJSCC scheme consists of s encoders and one joint decoder, as shown in Figure 2. Define R̃_max = max_{i∈{1,2,...,s}} R̃_i and i_max = arg max_{i∈{1,2,...,s}} R̃_i. For source V_i, the previous sources V_1, V_2, ..., V_{i−1} are regarded entirely as side information when constructing the QSPC and the punctured QSPCs. For source V_{i_max}, its message v_{i_max} is encoded by an (MN, MK) QSPC. For the sources V_i, i ≠ i_max, the message v_i is encoded by an (MN, MK, MP_i) punctured QSPC to adapt to the lower rate R̃_i. At the destination, a joint decoder is used to recover the sources, as shown in Figure 5. The joint decoder consists of s conventional polar decoders working successively. The hard decisions v̂_1, v̂_2, ..., v̂_{i−1} from decoders 1, 2, ..., i − 1 are fed to the i-th decoder for calculating the log-likelihood ratios (LLRs) of the systematic bits. For the parity bits, the LLRs are calculated from the channel observations.
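The LLR initialization performed by the i-th constituent decoder can be sketched as follows (a minimal sketch with assumed names: systematic bits get LLRs from the previous sources' hard decisions through a BSC(q) correlation model, while transmitted parity bits get BI-AWGN channel LLRs):

```python
import math

def systematic_llr(prev_hard_decision, q):
    """LLR of a systematic bit given the previous source's hard
    decision, under a BSC(q) correlation model:
    log(P(bit = 0) / P(bit = 1))."""
    mag = math.log((1 - q) / q)
    return mag if prev_hard_decision == 0 else -mag

def parity_llr(y, sigma2):
    """Channel LLR of a transmitted parity bit over BI-AWGN with BPSK
    mapping 0 -> +1, 1 -> -1: LLR = 2 * y / sigma^2."""
    return 2.0 * y / sigma2
```

These per-bit LLRs would then be fed to a conventional SC/SCL polar decoder; punctured positions of a punctured QSPC would typically be initialized with LLR 0, matching their zero-capacity-channel treatment at the code-construction stage.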

Performance Limit
Based on the analysis in Section 3, both the QSPC and the punctured QSPC can achieve the information-theoretical limit H/C in channel-uses per source-symbol. Therefore, the proposed scheme is asymptotically optimal, as long as the target rates can be arbitrarily closely approached. The following lemma shows that the QSPC and the punctured QSPC can approach the rate limits. Proof of Lemma 1. Let K be the integer part of N/(R̃_i + 1); then we have R̃_peak = (N − K)/K ≥ R̃_i. Since lim_{N→∞} (N − K)/K = R̃_i, the gap between R̃_peak and R̃_i can be made arbitrarily small when N is sufficiently large. Besides, the minimum step of the rate adjustable by puncturing, Δ = 1/K, decreases with increasing N. This indicates that the punctured QSPC can also arbitrarily closely approach any rate less than R̃_peak when N is large enough.
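A small numeric sketch of this argument (names are ours): choosing K = ⌊N/(R̃_i + 1)⌋ makes R̃_peak = (N − K)/K an upper approximation of the target rate whose gap shrinks as N grows:

```python
import math

def peak_rate(N, R_target):
    """Pick K = floor(N / (R_target + 1)) and return (K, R_peak),
    where R_peak = (N - K) / K >= R_target by construction."""
    K = math.floor(N / (R_target + 1))
    return K, (N - K) / K

# The rate granularity reachable by puncturing is 1/K channel-uses per
# source-symbol, which also shrinks as N (hence K) grows.
```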

Simulation Results
In the previous section, we have proposed a DJSCC scheme and shown its asymptotic optimality. In this section, we present the simulation results to verify the performance of the proposed scheme under finite code-length.
In the simulation, two sources V_1, V_2 are considered; Pr{v^j_2 | v^j_1} is modeled as a binary symmetric channel (BSC) with crossover probability q = 0.11, and Pr{v^j_1 = 0} = 0.5. The transmission channels W_1, W_2 are BI-AWGN channels with the same signal-to-noise ratio (SNR). The (64 · 32, 64 · 10) QSPC and the (64 · 32, 64 · 10, 64 · 11) punctured QSPC are constructed with density evolution, and are respectively used to encode v_1 and v_2 for DJSCC. For this DJSCC scheme, the rates are R̃ = (R̃_1, R̃_2) = (2.2, 1.1) in channel-uses/source-symbol. The Shannon limits for both sources are approximately -0.4 dB, which are respectively obtained by solving C(SNR) = H(v^j_1)/R̃_1 = 1/2.2 and C(SNR) = H(v^j_2 | v^j_1)/R̃_2 = h(0.11)/1.1, where C(SNR) is the capacity of the BI-AWGN channel and h(·) is the binary entropy function. As a reference, we consider a DSSCC scheme where both the DSC and the CC are designed using polar codes of length N_s = N_c = 1024. The source compression rate (in bits/source-symbol) R_si of the DSC and the channel code rate (in bits/channel-use) R_ci of the CC for source V_i, i = 1, 2, are listed in Table 1. The rate of this DSSCC scheme can be calculated by R̃_i = R_si/R_ci in channel-uses/source-symbol. In the source layer, each N_s bits of the source message are grouped into a block for DSC, and the polar-code-based DSC is designed as follows. After the polar transform, the BERs P_e(u_i) of u are calculated with density evolution. As shown in Figure 6, both u_1 = v_1 G_N and u_2 = v_2 G_N are sorted in descending order according to P_e(u_i), and the bits represented by black boxes are the compressed bits that will be fed into the buffer for CC. At the receiver, the decoder first tries to recover u_1 ⊕ u_2 with LLRs ±log((1 − q)/q); then the bits represented by gray boxes are reconstructed with some modulo-2 additions. Finally, the sources are obtained by v̂_1 = û_1 G_N and v̂_2 = û_2 G_N. In the channel layer, each N_c R_ci compressed bits in the buffer are grouped into a block for the (N_c, N_c R_ci) CC based on polar codes. Besides, we consider the DJSCC based on classic SPCs as a reference.
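The rate targets can be sanity-checked numerically. The Monte Carlo sketch below (our own code, taking SNR as Es/N0) estimates the BI-AWGN capacity and confirms that the two per-source limits coincide, since h(0.11) ≈ 0.5 and therefore 1/2.2 ≈ h(0.11)/1.1 ≈ 0.4545 bits/channel-use:

```python
import math
import random

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def biawgn_capacity(es_n0_db, n=100000, seed=1):
    """Monte Carlo estimate of the BI-AWGN capacity (bits/channel-use):
    C = 1 - E[log2(1 + exp(-2*y/sigma^2))], with BPSK symbol x = +1
    sent, y = x + noise, and noise variance sigma^2 = 1/(2*Es/N0)."""
    snr = 10 ** (es_n0_db / 10)
    sigma = math.sqrt(1 / (2 * snr))
    rng = random.Random(seed)
    acc = sum(math.log2(1 + math.exp(-2 * (1 + rng.gauss(0, sigma)) / sigma ** 2))
              for _ in range(n))
    return 1 - acc / n

# Both sources target the same capacity value:
target_1 = 1 / 2.2           # H(v1) / R1
target_2 = h2(0.11) / 1.1    # H(v2|v1) / R2
```

Sweeping `es_n0_db` until the estimated capacity crosses these targets recovers the common Shannon limit; how that value maps to the dB figure in the text depends on the paper's SNR normalization, which the extraction does not preserve.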
For source V_1, the (64 · 32, 64 · 10) SPC is constructed via Gaussian approximation, where the underlying channel is a BI-AWGN channel with an equivalent capacity. For source V_2, the (64 · 32, 64 · 10) SPC is constructed via Gaussian approximation with another equivalent capacity. Since the conventional puncturing scheme is not applicable in this situation, 64 · 11 parity bits of source V_2 are randomly punctured.

Performance under SC Decoders
First, we investigate the performance of the proposed scheme, the DSSCC schemes, and the DJSCC scheme with classic SPCs under the SC decoder. Figure 7 shows the BER performance versus the SNR of the transmission channels. In this figure, lines labeled "Proposed..." represent BERs of the proposed scheme, lines labeled "DSSCC..." represent BERs of DSSCC schemes with the different values of R_si, R_ci shown in Table 1, and lines labeled "Classic SPC..." represent the DJSCC scheme with classic SPCs. From these simulation results, we can see that the proposed scheme outperforms the DSSCC schemes and the DJSCC scheme with classic SPCs by approximately 0.2∼2 dB. It can be observed that an "error floor" of "DSSCC 1" starts from 3.5 dB, which is caused entirely by the source joint decoder (the channel decoders deliver almost error-free codewords from 3.5 dB to 5 dB). To avoid this error, a higher compression rate R_si is required; accordingly, no error floor is observed for "DSSCC 2" and "DSSCC 3".

Performance under CA-SCL Decoders
For short or moderate code-lengths, the performance of polar codes under the SC decoder is unsatisfactory. To improve the performance, Tal and Vardy proposed the CRC-aided SC list (CA-SCL) decoder for polar codes [22]. Next, we investigate the BER performance of the proposed scheme together with the two reference schemes under the CA-SCL decoder. For QSPCs and punctured QSPCs, the systematic bits contain a 16-bit CRC, i.e., the length of the source message is MK − 16. At the decoding stage, the LLRs of these 16 bits are initialized to 0 and the list size is set to 32. Taking the CRC into consideration, the rates of the DJSCC scheme become R̃ = (2.256, 1.128) in channel-uses/source-symbol. In the DSSCC schemes, the information bits of both the DSC and the CC contain a 16-bit CRC and the list size is also set to 32. Figure 8 shows the BER performance under the CA-SCL decoder. We can see that, compared with the SC decoder, the performance of all schemes improves significantly. Due to its reasonable rate allocation, "DSSCC 1" performs best among the DSSCC schemes and comes very close to the proposed scheme, whereas the other two DSSCC schemes remain unsatisfactory. It can be observed that the proposed scheme outperforms "DSSCC 1" and the DJSCC scheme with classic SPCs by approximately 0.2∼1 dB.

Complexity
For each distributed source, the encoding complexity of both reference schemes is O(N log N). The encoding complexity of the proposed scheme is slightly larger, namely O(N + N log N). Due to the low encoding complexity, these schemes are well-suited for WSNs.
To investigate the decoding complexity, we consider adaptive CA-SCL decoders [23], in which the list size is expanded whenever a decoding error is detected by the CRC. These decoders have O(L̄ N log N) complexity, where L̄ is the average list size. In the simulations, the maximum allowable list size is set to L_max = 32 and the average list size L̄ for decoding both sources is recorded. The complexity (per source-symbol) can be approximated as O((1/MK) · L̄ N log N) for the proposed scheme and the DJSCC scheme with classic SPCs, and O((1/N_s) · L̄ N log N) for the DSSCC schemes. As shown in Figure 9, "DSSCC 1" has lower decoding complexity in the high- and low-SNR regions, while the proposed scheme has lower decoding complexity in the middle-SNR region. In general, the three schemes have similar decoding complexity under adaptive CA-SCL decoders.

Conclusions
In this paper, we have constructed two kinds of polar-like codes (QSPCs and punctured QSPCs), which can be used for JSCC with side information. We have then proposed a DJSCC scheme based on the QSPC and the punctured QSPC. In this scheme, the transmission problem is transformed into a problem of JSCC with side information at the receiver, which is solved by QSPCs and punctured QSPCs. For infinite code-length, we have proved that the proposed scheme is asymptotically optimal. For finite code-length, the simulation results have verified that the proposed scheme outperforms the DSSCC scheme with polar codes and the DJSCC scheme with classic SPCs.