Abstract
This paper proposes a distributed joint source-channel coding (DJSCC) scheme using polar-like codes. In the proposed scheme, each distributed source encodes its message with a quasi-uniform systematic polar code (QSPC) or a punctured QSPC and transmits only the parity bits over its own independent channel. These systematic codes play the role of both source compression and error protection. For the infinite code-length, we show that the proposed scheme approaches the information-theoretic limit by the technique of joint source-channel polarization with side information. For the finite code-length, simulation results verify that the proposed scheme outperforms the distributed separate source-channel coding (DSSCC) scheme using polar codes and the DJSCC scheme using classic systematic polar codes.
1. Introduction
Polar codes, invented by Arikan [1] using a technique called channel polarization, are capable of achieving the symmetric capacity of any binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. Afterwards, the concept of source polarization was introduced in [2] as the complement of channel polarization. One immediate application of source polarization is the design of polar codes for source coding. Since the polarization phenomenon exists on both the source and channel sides, it is of natural interest to integrate channel polarization and source polarization for joint source-channel coding (JSCC). From the perspective of information theory, this interest is motivated by the fact that the error exponent of JSCC is better than that of separate source-channel coding (SSCC) [3], which may indicate a performance loss of SSCC at finite code-length. From a practical point of view, two main shortcomings of SSCC that result in unsatisfactory performance are summarized as follows:
- With finite code-length, channel decoding errors are unavoidable, and such errors may be disastrous for the source decoder.
- With finite code-length, the output of the source encoder may contain residual redundancy, and such redundancy is not exploited by the channel decoder to further improve performance.
Therefore, a JSCC scheme, which jointly takes into consideration both source redundancy and channel errors, may have certain advantages over SSCC.
In recent years, JSCC has attracted increasing attention among researchers. JSCC schemes based on turbo codes, LDPC codes, and polar codes can be found in [4,5], [6,7], and [8,9,10], respectively. In particular, the authors of [11] proposed a class of systematic polar codes (SPCs), called quasi-uniform SPCs (QSPCs), and showed that QSPCs are optimal for the problem of JSCC with side information and outperform classic SPCs [12].
For the distributed source coding (DSC) problem, the Slepian-Wolf theorem [13] states that for two or more correlated sources, the lossless compression rates of joint encoding can be achieved with separate encoding if a joint decoder is used at the receiver. This theorem has been known for a long time, but practical DSC schemes appeared only much later, when Pradhan and Ramchandran proposed a construction using syndromes [14]. Inspired by the syndrome approach, a large number of DSC schemes have sprung up based on powerful channel codes (CCs) such as turbo codes, LDPC codes, and polar codes [15,16,17]. However, to obtain good performance, these syndrome-based schemes usually need a considerably long code-length and are sensitive to channel errors. Alternatively, a DSC scheme can be based on the parity approach, in which a systematic code is used to encode the source and the source is recovered from the parity bits. These kinds of schemes can be extended to distributed JSCC (DJSCC) [18,19]. Our work falls into this category. Specifically, we focus on the design of parity-based DJSCC using polar/polar-like codes, where only parity bits are transmitted over the noisy channels. Each distributed source has a single encoder that performs both compression and error protection.
The main contributions of this paper are summarized as follows. We propose a new class of polar-like codes, called punctured QSPCs, and show their asymptotic optimality. We then propose a DJSCC scheme using the two polar-like codes (QSPCs and punctured QSPCs) and show that the proposed scheme approaches the achievable rates.
The rest of this paper is organized as follows. The system model is given in Section 2. Section 3 introduces QSPCs and punctured QSPCs. In Section 4, we propose a DJSCC scheme based on the two polar-like codes and analyze its performance limit. Section 5 presents simulation results that verify the advantage of the proposed scheme, and Section 6 concludes the paper.
2. System Model
Consider the problem of transmitting $s$ correlated distributed sources over $s$ independent symmetric B-DMCs to a common destination. The message from source $i$ is denoted by a binary vector $X_i$ of length K. We assume that the bit tuples occupying the same position in the $s$ messages are i.i.d. (independent and identically distributed), drawn from a common joint probability distribution.
From the Slepian-Wolf theorem, a compression rate tuple $(r_1, \dots, r_s)$ (in bit/source-symbol) is achievable if
$$\sum_{i \in T} r_i \ge H(X_T \mid X_{T^c})$$
for any subset $T \subseteq \{1, \dots, s\}$, where $H(\cdot)$ is the entropy function, $T^c$ is the complementary set of $T$, and $X_T$ and $X_{T^c}$ denote, respectively, the sub-collections of sources indexed by $T$ and $T^c$. This achievable compression region is also known as the Slepian-Wolf region.
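For the two-source case considered later in the simulations, and writing $r_1$ and $r_2$ for the two compression rates, these conditions specialize to
$$r_1 \ge H(X_1 \mid X_2), \qquad r_2 \ge H(X_2 \mid X_1), \qquad r_1 + r_2 \ge H(X_1, X_2).$$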
Let $R_i = N_i / K$ be the rate of source $i$, measured in the number of channel-uses per source-symbol, where $N_i$ is the number of channel uses of source $i$ for transmitting $X_i$. Let $C_i$ denote the channel capacity of the $i$-th channel. Since the source-channel separation theorem still holds for this problem [20], reliable transmission is possible if the rate tuple $(R_1, \dots, R_s)$ falls into the achievable rate region $\mathcal{R}$, which is the set of all points satisfying
$$\sum_{i \in T} R_i C_i \ge H(X_T \mid X_{T^c})$$
for every subset $T \subseteq \{1, \dots, s\}$.
Given any two points in $\mathcal{R}$, each is achievable by some coding scheme, and any convex combination of the two points can be achieved by time-sharing between the corresponding schemes; by the definition of convex sets, this convex combination therefore also lies in $\mathcal{R}$. Hence the achievable rate region $\mathcal{R}$ is convex.
To solve this transmission problem, one option is the distributed SSCC (DSSCC) scheme shown in Figure 1. The message from each source is first compressed into compressed bits with DSC. Then each source encodes its compressed bits into channel code bits with CCs and transmits these channel code bits over its own channel. At the destination, the received signals are first decoded by the channel decoders, and the decoded bits are then jointly decompressed by a source joint decoder. There is no interaction between the source layer and the channel layer in the sense of encoding and decoding. Such a DSSCC scheme is asymptotically optimal when the code-length goes to infinity. However, with finite code-length, such source-channel separate schemes are generally sensitive to the residual errors of the channel decoders, and thus are ineffective in some scenarios such as wireless sensor networks (WSNs). To address this problem, in this paper we propose a DJSCC scheme using polar-like codes to achieve the corner point of $\mathcal{R}$, i.e.,
$$R_i C_i = H(X_i \mid X_1, \dots, X_{i-1}) \qquad (5)$$
for $i = 1, \dots, s$. As shown in Figure 2, each source is equipped with only one encoder that encodes the source message for both compression and error protection.
Figure 1.
Distributed separate source-channel coding (DSSCC) scheme.
Figure 2.
Distributed joint source-channel coding (DJSCC) scheme.
Note that the order of the chain rule in (5) has no effect on the code construction or the overall performance; hence, the ascending order is considered in this paper. It is also clear that multiple points in $\mathcal{R}$ can be achieved with time-sharing, provided that DJSCC schemes for the corner points have been well constructed.
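As a concrete instance, for the two-source case and under the ascending order adopted above, the targeted corner point is
$$R_1 = \frac{H(X_1)}{C_1}, \qquad R_2 = \frac{H(X_2 \mid X_1)}{C_2},$$
so the first source is protected as if no side information were available, while the second source needs only enough channel uses to convey its entropy conditioned on the first source.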
3. QSPC and Punctured QSPC
In this section, we briefly review the QSPC and propose the punctured QSPC. First, two index sets and their properties are presented. Consider a binary sequence in which the coordinates of the ones are quasi-uniformly distributed; these coordinates are denoted by an index set whose formal definition can be found in [11]. The property of this sequence is given in Proposition 1.
Proposition 1.
The coordinates of the ones in are and if the coordinates of the ones in are and , respectively.
An example for index sets is illustrated in Figure 3 and Figure 4. The coordinates of the ones in and are and , respectively. For and , the coordinates of ones are and , respectively.
Figure 3.
The codeword structure of a quasi-uniform systematic polar code (QSPC).
Figure 4.
The codeword structure of a punctured QSPC.
3.1. QSPC
A class of SPCs, called QSPCs, was introduced in [11] for the problem of JSCC with side information. For a QSPC, the systematic bits are first encoded through a sub-matrix of the generator matrix obtained by keeping a specific set of rows and columns, and the codeword is then obtained from the rows of the generator matrix indexed by the corresponding index sets. The generator matrix combines a bit-reversal matrix, the Kronecker power of the polar kernel, and an invertible bit-swap coding matrix D (refer to Section IV.A of [11] for details).
For this code, the typical polar decoder attempts to decode the transform-domain vector first, and the codeword is then obtained by re-encoding it. The entropies/BERs (bit error rates) of the transform-domain bits are determined by the underlying channels and the sources. With the aid of density evolution, we can locate the bits that have low entropies/BERs. However, the invertibility of the relevant sub-matrix cannot be guaranteed, i.e., it is impossible to construct QSPCs via the original polar coding. The bit-swap coding D was therefore proposed to change the set of information bits and to modify the generator matrix accordingly; this operation ensures that the resulting sub-matrix is always invertible. The block error rate (BLER) under the SC decoder can then be calculated from the BERs of the individual bits.
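To make the systematic encoding step concrete, below is a minimal sketch of generic systematic polar encoding over GF(2), in which the transform-domain bits are obtained by solving a linear system so that the chosen codeword positions carry the message. The quasi-uniform placement of the systematic positions and the bit-swap matrix D of [11] are not reproduced here, and all function and variable names are illustrative.

```python
import numpy as np

def polar_generator_matrix(n):
    """G = F^{(x)n} over GF(2), with kernel F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

def gf2_solve(A, b):
    """Solve u @ A = b over GF(2) for the row vector u (A must be invertible)."""
    k = A.shape[0]
    # Transposed system A^T u^T = b^T, solved by Gauss-Jordan elimination.
    M = np.concatenate([A.T.astype(np.uint8), b.reshape(-1, 1).astype(np.uint8)], axis=1)
    for col in range(k):
        pivot = col + int(np.argmax(M[col:, col]))
        if M[pivot, col] == 0:
            raise ValueError("sub-matrix is not invertible over GF(2)")
        M[[col, pivot]] = M[[pivot, col]]
        for row in range(k):
            if row != col and M[row, col]:
                M[row] ^= M[col]
    return M[:, -1]

def systematic_polar_encode(info_bits, sys_idx, G):
    """Place info_bits at codeword positions sys_idx and solve for the parity bits."""
    N = G.shape[0]
    sys_idx = np.asarray(sys_idx)
    par_idx = np.setdiff1d(np.arange(N), sys_idx)
    u = np.zeros(N, dtype=np.uint8)
    # Only the sys_idx positions of the transform-domain vector u are free;
    # they are chosen so that the codeword carries info_bits at sys_idx.
    u[sys_idx] = gf2_solve(G[np.ix_(sys_idx, sys_idx)],
                           np.asarray(info_bits, dtype=np.uint8))
    x = (u.astype(int) @ G.astype(int)) % 2   # x[sys_idx] == info_bits
    return x.astype(np.uint8), x[par_idx].astype(np.uint8)
```

With a quasi-uniform systematic index set, the plain sub-matrix used here may fail to be invertible; that is precisely the situation the bit-swap coding D is introduced to repair.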
Consider a source message $X$ with side information $Y$ available at the receiver. When $X$ is encoded by a QSPC for JSCC, only the parity bits are transmitted over the noisy channel. The resulting codeword structure is depicted in Figure 3, and Theorem 1 is obtained as follows.
Theorem 1 ([11]).
For any ,
where denotes the cardinality of set, is the source Bhattacharyya parameter [2], and are channel outputs of , and denotes the set of .
Note that the theorem shows that the BLER tends to 0 as M goes to infinity, provided the rate condition is satisfied, because the Bhattacharyya parameter is an upper bound on the corresponding error probability. Further, once the supermartingale property is established for the Bhattacharyya parameters, the BLER decays as $O(2^{-M^{\beta}})$ for any $\beta < 1/2$ [21].
3.2. Punctured QSPC
By puncturing parity bits from a QSPC, we can construct a punctured QSPC with a lower rate (in channel-use per source-symbol). The punctured QSPC is characterized by its puncturing parameter. As depicted in Figure 4, there are three kinds of bits (systematic bits, parity bits, and punctured bits) in the codeword, with the punctured positions determined at code construction. At the stage of code construction, the punctured bits are regarded as the inputs of zero-capacity channels when calculating the BERs and performing the bit-swap coding. When this punctured QSPC is used to encode a source message $X$ with side information $Y$, Theorem 2 is obtained in a similar way.
Theorem 2.
For any ,
where because of puncturing.
Proof of Theorem 2.
The codeword structure of Figure 4 ensures that the conditional entropies of the synthesized bits converge to either 0 or 1 as M goes to infinity, just as for the QSPC. The total entropy can then be computed by summing the contributions of the systematic, parity, and punctured positions. Therefore, the fraction of the high-entropy part follows, and the fraction of the low-entropy part is its complement. □
This theorem likewise shows that the BLER of the punctured QSPC tends to 0 as M goes to infinity, provided the rate condition is satisfied. Since the punctured bits contribute zero channel-uses, the punctured QSPC incurs no performance loss. Once the supermartingale property is established, we again have a BLER decaying as $O(2^{-M^{\beta}})$ for any $\beta < 1/2$.
4. Proposed DJSCC Scheme and Performance Limit
In this section, we propose a DJSCC scheme based on the two polar-like codes constructed above and prove its asymptotic optimality.
4.1. Proposed DJSCC Scheme
To achieve the corner point (5), we turn this problem into a sequence of JSCC problems with side information at the receiver and solve them with the QSPC and the punctured QSPC. The proposed DJSCC scheme consists of $s$ encoders and one joint decoder, as shown in Figure 2.
Define
and
For source $i$, the previously indexed sources are regarded entirely as side information when constructing its QSPC or punctured QSPC. The message of the first source is encoded by a QSPC. For each of the remaining sources, the message is encoded by a punctured QSPC in order to adapt to a lower rate. At the destination, a joint decoder is used to recover the sources, as shown in Figure 5. The joint decoder consists of $s$ conventional polar decoders working successively. The hard decisions from the preceding decoders are fed to the $i$-th decoder for calculating the log-likelihood ratios (LLRs) of the systematic bits; for the parity bits, the LLRs are calculated from the channel observations.
Figure 5.
Joint decoder.
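As an illustration of this LLR initialization at the joint decoder, the following is a minimal sketch for one constituent decoder, assuming BPSK over a BI-AWGN channel for the transmitted parity bits and a BSC-type correlation model between the systematic bits and the already-decoded side information; the function name, argument names, and the correlation model are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def init_llrs(n, sys_idx, par_idx, punct_idx, side_bits, p, y_parity, sigma2):
    """Input LLRs of one constituent polar decoder at the joint decoder.

    sys_idx / par_idx / punct_idx : positions of systematic, transmitted parity,
                                    and punctured bits in the length-n codeword.
    side_bits : hard decisions (0/1) for the systematic positions, taken from the
                previously decoded source(s).
    p         : assumed crossover probability of the BSC correlation model.
    y_parity  : received BPSK samples (bit 0 -> +1, bit 1 -> -1) on parity positions.
    sigma2    : noise variance of the BI-AWGN channel.
    """
    llr = np.zeros(n)
    # Systematic bits: prior derived from the correlated side information.
    llr[sys_idx] = (1 - 2 * np.asarray(side_bits)) * np.log((1 - p) / p)
    # Parity bits: channel LLRs of the BI-AWGN observations.
    llr[par_idx] = 2 * np.asarray(y_parity) / sigma2
    # Punctured bits: nothing was transmitted, so the LLR stays zero.
    llr[punct_idx] = 0.0
    return llr
```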
4.2. Performance Limit
Based on the analysis in Section 3, both the QSPC and the punctured QSPC can achieve the information-theoretic limit in channel-use per source-symbol. Therefore, the proposed scheme is asymptotically optimal, as long as the target rates can be approached arbitrarily closely. The following lemma shows that the QSPC and the punctured QSPC can indeed approach the rate limits.
Lemma 1.
The QSPC asymptotically approaches the rate and the punctured QSPC asymptotically approaches arbitrary rate .
Proof of Lemma 1.
Let K be the integer part of , then we have
and
Since K differs from its target value by less than one, the gap between the achieved rate and the target rate can be made arbitrarily small when N is sufficiently large. Moreover, the minimum step by which the rate can be adjusted through puncturing becomes smaller as N increases. This indicates that the punctured QSPC can also approach any lower rate arbitrarily closely when N is large enough. □
5. Simulation Results
In the previous section, we have proposed a DJSCC scheme and shown its asymptotic optimality. In this section, we present the simulation results to verify the performance of the proposed scheme under finite code-length.
In the simulation, two sources are considered, and the correlation between them is modeled as a binary symmetric channel (BSC) with a fixed crossover probability. The transmission channels are BI-AWGN channels with the same signal-to-noise ratio (SNR). The QSPC and the punctured QSPC are constructed with density evolution and are used to encode the first and the second source, respectively, for DJSCC. The rates of this DJSCC scheme are measured in channel-use/source-symbol. The Shannon limits for the two sources are both approximately -0.4 dB, obtained by equating the required information rate of each source with the capacity of the BI-AWGN channel.
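As an illustration of how such a limit can be evaluated numerically, the sketch below estimates the BI-AWGN capacity by Monte Carlo integration and bisects for the SNR at which a target rate is reached; the target value in the comment is a hypothetical placeholder, and the SNR is interpreted here as Es/N0, which may differ from the convention used in the figures.

```python
import numpy as np

def biawgn_capacity(snr_db, num_samples=200_000, seed=0):
    """Monte Carlo estimate of the BI-AWGN capacity in bits per channel use.
    snr_db is interpreted as Es/N0 in dB for unit-energy BPSK."""
    rng = np.random.default_rng(seed)
    sigma2 = 1.0 / (2.0 * 10.0 ** (snr_db / 10.0))   # noise variance per dimension
    y = 1.0 + np.sqrt(sigma2) * rng.standard_normal(num_samples)  # x = +1 was sent
    llr = 2.0 * y / sigma2
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

def shannon_limit_db(target_rate, lo=-10.0, hi=10.0, iters=50):
    """Bisection for the smallest SNR (dB) whose capacity reaches target_rate."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if biawgn_capacity(mid) < target_rate:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical example: SNR at which the BI-AWGN capacity reaches 0.5 bit/channel-use.
# print(shannon_limit_db(0.5))
```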
As a reference, we consider a DSSCC scheme in which both the DSC and the CC are designed using polar codes. The source compression rate (in bit/source-symbol) of the DSC and the channel code rate (in bit/channel-use) of the CC for each source are listed in Table 1. The rate of this DSSCC scheme, in channel-use/source-symbol, is the compression rate divided by the channel code rate. In the source layer, the source message is divided into blocks for DSC, and the polar-code-based DSC is designed as follows. After the polar transform, the BERs of the transformed bits are calculated with density evolution. As shown in Figure 6, the positions are sorted in descending order of BER, and the bits represented by black boxes are the compressed bits that are fed into the buffer for CC. At the receiver, the decoder first tries to recover the remaining bits using LLRs derived from the side information; the bits represented by gray boxes are then reconstructed with some modulo-2 additions, and the sources are finally obtained by applying the inverse polar transform. In the channel layer, the compressed bits in the buffer are divided into blocks for the polar-code-based CC.
Table 1.
Distributed separate source-channel coding (DSSCC) schemes with different rate allocations.
Figure 6.
Bit selection of distributed source coding (DSC) based on polar codes.
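As an illustration of this bit-selection step, the sketch below applies the polar transform to a source block and keeps only the positions with the highest estimated BER/entropy as compressed bits, assuming the per-position BER estimates were obtained offline (e.g., by density evolution); the block sizes and the exact selection rule of the scheme in Table 1 are not reproduced, and the names are illustrative.

```python
import numpy as np

def polar_transform(bits):
    """u = x F^{(x)n} over GF(2) via the standard butterfly; the transform is
    its own inverse over GF(2), so the same routine also undoes it."""
    u = np.array(bits, dtype=np.uint8)
    assert len(u) & (len(u) - 1) == 0, "block length must be a power of two"
    step = 1
    while step < len(u):
        for i in range(0, len(u), 2 * step):
            u[i:i + step] ^= u[i + step:i + 2 * step]
        step *= 2
    return u

def dsc_compress(source_block, ber_estimates, num_compressed):
    """Polar-transform the block and keep the positions with the highest estimated
    BER/entropy as compressed bits; the remaining positions are meant to be
    recovered at the joint decoder from the side information."""
    u = polar_transform(source_block)
    order = np.argsort(ber_estimates)[::-1]        # descending BER/entropy
    kept_idx = np.sort(order[:num_compressed])
    return kept_idx, u[kept_idx]
```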
In addition, we consider the DJSCC scheme based on classic SPCs as a reference. For the first source, the SPC is constructed via Gaussian approximation, where the underlying channel is a BI-AWGN channel with the corresponding equivalent capacity; the SPC for the second source is constructed in the same way with its own equivalent capacity. Since the conventional puncturing scheme is not applicable in this situation, the parity bits of the second source are randomly punctured.
5.1. Performance under SC Decoders
First, we investigate the performance of the proposed scheme, the DSSCC schemes, and the DJSCC scheme with classic SPCs under the SC decoder. Figure 7 shows the BER performance versus the SNR of the transmission channels. In this figure, lines labeled "Proposed..." represent the BERs of the proposed scheme, lines labeled "DSSCC..." represent the BERs of the DSSCC schemes with the different rate allocations shown in Table 1, and lines labeled "Classic SPC..." represent the DJSCC scheme with classic SPCs. From these simulation results, we can see that the proposed scheme outperforms the DSSCC schemes and the DJSCC scheme with classic SPCs by approximately 0.2∼2 dB. It can also be observed that an "error floor" of "DSSCC 1" starts from 3.5 dB, which is caused entirely by the source joint decoder (the channel decoders deliver almost error-free codewords from 3.5 dB to 5 dB). To avoid this error, a higher compression rate is required; accordingly, no error floor is observed for "DSSCC 2" and "DSSCC 3".
Figure 7.
The bit error rate (BER) performance under the SC decoder.
5.2. Performance under CA-SCL Decoders
For short or moderate code-lengths, the performance of polar codes under the SC decoder is unsatisfactory. To improve the performance, Tal and Vardy proposed the CRC-aided SC list (CA-SCL) decoder for polar codes [22]. Next, we investigate the BER performance of the proposed scheme together with the two reference schemes under the CA-SCL decoder. For the QSPCs and punctured QSPCs, the systematic bits contain a 16-bit CRC, i.e., the length of the source message is reduced by 16 bits. At the decoding stage, the LLRs of these 16 bits are initialized to 0 and the list size is set to 32. Taking the CRC into consideration, the rates of the DJSCC scheme increase slightly in channel-use/source-symbol. In the DSSCC schemes, the information bits of both the DSC and the CC contain a 16-bit CRC, and the list size is also set to 32.
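For completeness, a minimal sketch of appending a 16-bit CRC to the source message is given below; the generator polynomial is an illustrative choice (CRC-16-CCITT), since the paper does not state which 16-bit CRC is used.

```python
def crc16_bits(bits, poly=0x1021, init=0x0000):
    """Bit-serial CRC-16; returns the 16 CRC bits, most significant bit first."""
    reg = init
    for b in bits:
        feedback = ((reg >> 15) & 1) ^ (int(b) & 1)
        reg = (reg << 1) & 0xFFFF
        if feedback:
            reg ^= poly
    return [(reg >> (15 - i)) & 1 for i in range(16)]

def with_crc(message_bits):
    """Systematic payload = source message followed by its 16-bit CRC."""
    message_bits = [int(b) & 1 for b in message_bits]
    return message_bits + crc16_bits(message_bits)
```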
Figure 8 shows the BER performance under the CA-SCL decoder. We can see that, compared with the SC decoder, the performance of all schemes improves significantly. Owing to its reasonable rate allocation, "DSSCC 1" performs best among the DSSCC schemes and comes very close to the proposed scheme, whereas the performance of the other two DSSCC schemes remains unsatisfactory. It can be observed that the proposed scheme outperforms "DSSCC 1" and the DJSCC scheme with classic SPCs by approximately 0.2∼1 dB.
Figure 8.
The BER performance under the CRC-aided SC list (CA-SCL) decoder.
5.3. Complexity
For each distributed source, the encoding complexity of both reference schemes is $O(N \log N)$. The encoding complexity of the proposed scheme is slightly larger. Owing to these low encoding complexities, all of the schemes are well suited for WSNs.
To investigate the decoding complexity, we consider adaptive CA-SCL decoders [23], in which the list size is expanded whenever a decoding error is detected by the CRC. Such decoders have $O(\bar{L} N \log N)$ complexity, where $\bar{L}$ is the average list size. In the simulations, the maximum allowed list size is fixed and the average list size for decoding the two sources is recorded. The per-source-symbol complexity of the proposed scheme, the DJSCC scheme with classic SPCs, and the DSSCC schemes can then be approximated from their respective average list sizes. As shown in Figure 9, "DSSCC 1" has lower decoding complexity in the high-SNR and low-SNR regions, while the proposed scheme has lower decoding complexity in the middle-SNR region. In general, the three schemes have similar decoding complexity under adaptive CA-SCL decoders.
Figure 9.
The decoding complexity (per source-symbol) under adaptive CA-SCL decoders.
6. Conclusions
In this paper, we have constructed two kinds of polar-like codes (QSPCs and punctured QSPCs) that can be used for JSCC with side information, and we have proposed a DJSCC scheme based on them. In this scheme, the distributed transmission problem is transformed into a problem of JSCC with side information at the receiver, which is solved by QSPCs and punctured QSPCs. For the infinite code-length, we have proved that the proposed scheme is asymptotically optimal. For the finite code-length, the simulation results have verified that the proposed scheme outperforms the DSSCC scheme with polar codes and the DJSCC scheme with classic SPCs.
Author Contributions
L.J. and H.Y. initiated the idea and designed the experiments. L.J. analyzed the data and wrote the manuscript. H.Y. made the revisions for the paper. All authors have read and approved the final manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Arikan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
- Arikan, E. Source polarization. In Proceedings of the IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 899–903.
- Zhong, Y.; Alajaji, F.; Campbell, L.L. On the joint source-channel coding error exponent for discrete memoryless systems. IEEE Trans. Inf. Theory 2006, 52, 1450–1468.
- Jeanne, M.; Carlach, J.; Siohan, P. Joint source-channel decoding of variable-length codes for convolutional codes and turbo codes. IEEE Trans. Commun. 2005, 53, 10–15.
- Garcia-Frias, J. Joint source-channel decoding of correlated sources over noisy channels. In Proceedings of the Data Compression Conference (DCC), Snowbird, UT, USA, 27–29 March 2001; pp. 283–292.
- Daneshgaran, F.; Laddomada, M.; Mondin, M. LDPC-based channel coding of correlated sources with iterative joint decoding. IEEE Trans. Commun. 2006, 54, 577–582.
- Fresia, M.; Perez-Cruz, F.; Poor, H.V.; Verdu, S. Joint Source and Channel Coding. IEEE Signal Process. Mag. 2010, 27, 104–113.
- Wang, Y.; Qin, M.; Narayanan, K.R.; Jiang, A.; Bandic, Z. Joint source-channel decoding of polar codes for language-based sources. In Proceedings of the 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, USA, 4–8 December 2016.
- Yaacoub, C.; Sarkis, M. Systematic polar codes for joint source-channel coding in wireless sensor networks and the Internet of Things. Proc. Comput. Sci. 2017, 110, 266–273.
- Jin, L.; Yang, P.; Yang, H. Distributed joint source-channel decoding using systematic polar codes. IEEE Commun. Lett. 2018, 22, 49–52.
- Jin, L.; Yang, H. Joint Source-Channel Polarization With Side Information. IEEE Access 2018, 6, 7340–7349.
- Arikan, E. Systematic Polar Coding. IEEE Commun. Lett. 2011, 15, 860–862.
- Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
- Pradhan, S.S.; Ramchandran, K. Distributed Source Coding Using Syndromes (DISCUS): Design and Construction. IEEE Trans. Inf. Theory 2003, 49, 626–643.
- Aaron, A.; Girod, B. Compression with side information using turbo codes. In Proceedings of the Data Compression Conference (DCC 2002), Snowbird, UT, USA, 2–4 April 2002; pp. 252–261.
- Liveris, A.D.; Xiong, Z.; Georghiades, C.N. Compression of binary sources with side information at the decoder using LDPC codes. IEEE Commun. Lett. 2002, 6, 440–442.
- Onay, S. Polar codes for distributed source coding. Electron. Lett. 2013, 49, 346–348.
- Shahid, I.; Yahampath, P. Distributed Joint Source-Channel Coding Using Unequal Error Protection LDPC Codes. IEEE Trans. Commun. 2013, 61, 3472–3482.
- Cen, F. Distributed Joint Source and Channel Coding with Low-Density Parity-Check Codes. IEEE Commun. Lett. 2013, 17, 2336–2339.
- Gunduz, D.; Erkip, E.; Goldsmith, A.; Poor, H.V. Source and Channel Coding for Correlated Sources Over Multiuser Channels. IEEE Trans. Inf. Theory 2009, 55, 3927–3944.
- Arikan, E.; Telatar, E. On the rate of channel polarization. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Seoul, Korea, 28 June–3 July 2009; pp. 1493–1495.
- Tal, I.; Vardy, A. List Decoding of Polar Codes. IEEE Trans. Inf. Theory 2015, 61, 2213–2226.
- Li, B.; Shen, H.; Tse, D. An Adaptive Successive Cancellation List Decoder for Polar Codes with Cyclic Redundancy Check. IEEE Commun. Lett. 2012, 16, 2044–2047.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).