Article

Secure Polar Coding for the Primitive Relay Wiretap Channel

by Manos Athanasakos 1,* and George Karagiannidis 2
1 Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, 157 72 Athens, Greece
2 Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 442; https://doi.org/10.3390/e23040442
Submission received: 2 February 2021 / Revised: 31 March 2021 / Accepted: 6 April 2021 / Published: 9 April 2021
(This article belongs to the Special Issue Information-Theoretic Approach to Privacy and Security)

Abstract: With the emergence of wireless networks, cooperation for secrecy is recognized as an attractive way to establish secure communications. Departing from cryptographic techniques, secrecy can be provided by exploiting the characteristics of the wireless channel; in particular, certain error-correcting codes have been shown to achieve information-theoretic security in addition to reliability. In this paper, we propose a polar-coding-based technique for the primitive relay wiretap channel and show that this technique is suitable for providing information-theoretic security. Specifically, we integrate at the relay an additional functionality, which allows it to decide whether or not it will cooperate based on the result of its decoding detector. In the case of cooperation, the relay operates in a decode-and-forward mode and assists the communication by transmitting a complementary message so that the destination can correctly decode the initial source's message. Otherwise, the communication is completed with direct transmission from the source to the destination. We first prove that the proposed encoding scheme achieves weak secrecy; then, in order to overcome the obstacle of the misaligned bits, we implement a double-chaining construction, which achieves strong secrecy.

1. Introduction

The wiretap channel, introduced by A. Wyner in his seminal work [1], paved the way for the exploitation of the channel medium characteristics in terms of information-theoretic security. This approach has the major advantage that security does not rely on any shared secret key, i.e., keyless security. In view of the emergence of wireless communication and massive connectivity, this benefit has led to a rich literature, which investigates several channel models towards the design of low-complexity coding schemes for both reliability and secrecy.
After van der Meulen's introduction of the relay channel in [2] and its extension by Cover and El Gamal in [3], cooperative diversity is considered an important advancement in wireless networks, since it can achieve higher rates than direct transmission. In practice, a network may include illegitimate users; cooperation between trusted users has therefore been exploited as a way to establish secure communication. The rate-equivocation region was characterized in [4,5] for a four-terminal relay channel and an eavesdropper under several cooperation protocols. Half-duplex relay channel models are considered in [6], where the authors studied coding techniques for the relay channel with orthogonal components (primitive relay channel). The secrecy capacity for this class of channels was investigated in [7] for the binary-input discrete memoryless channel (B-DMC) and the Gaussian case. Although the importance of cooperation for reliability and security in large networks is well established, the aforementioned works presented bounds on the secrecy capacity while relying on random coding arguments. Undoubtedly, designing codes for these types of channels is of great importance, as the evolution of networks requires security solutions with low consumption and complexity.
Since the pioneering work of Arikan on the polar codes [8], which are capacity-achieving for the symmetric B-DMC, several polar coding schemes have been proposed to fulfill the secrecy requirement. These codes are constructed based on the phenomenon of channel polarization, that is, the channel is split into N “bit-channels”, which tend to be either error-free or fully noisy channels as N grows. This result is the basic tool in designing a polar coding scheme, which satisfies both reliability and secrecy conditions. In [9], a scheme for the degraded wiretap channel that meets the requirement for weak secrecy was proposed; in [10], the authors used a different partition of the index set to develop a scheme for strong secrecy. Under this framework, several coding schemes for multiuser channels have been investigated in [11,12,13]. However, although in the open literature there are some applications of polar codes without security constraints for the relay channel [14,15,16,17,18], the investigation of whether polar codes are suitable for the relay wiretap scenario has drawn little attention. Finally, the authors in [19] proposed a coding scheme capable of achieving weak secrecy for the relay-eavesdropper channel, whereas in [20], the proposed polar coding scheme guarantees strong secrecy for the case of symmetric channels and under the assumption that the eavesdropper’s channel is degraded. Note that the natural nested structure of polar codes and their low encoding/decoding complexity identify them as a promising choice for the practical implementation of merging coding and security into one scheme.

1.1. Related Work and Contributions

Since the introduction of the chaining technique [21] for polar codes, several explicit and efficient coding schemes for different information-theoretic models have been proposed. The work of [10] introduced the polar coding chaining construction in order to provide strong secrecy for the symmetric and degraded point-to-point wiretap channel. Later on, a polar coding technique for asymmetric models was proposed in [22] and, along with the chaining construction, was one of the basic tools used to prove the secrecy capacity achievability of general wiretap channels. Specifically, in [12], the authors combined the chaining technique, used to deal with the nondegraded wiretap channel by artificially constructing the subset property, with polar coding for asymmetric models to prove that polar codes achieve secrecy capacity in general. However, they considered only the weak secrecy case. Concurrently, refs. [11,13] developed polar coding schemes for the general broadcast wiretap channel relying on different approaches. While both works considered the strong secrecy criterion, the first drew parallels between the achievability proof through output statistics of random binning and their proposed encoding scheme and avoided using randomized decisions during the encoding and decoding procedures ([23], Theorem 3), while the latter relied on shared random mappings ([22], Theorem 3) that may require exponential storage complexity. Recently, the authors of [24] extended the scheme of [13] by considering a more general model, in which the transmitter sends common and confidential messages over a broadcast wiretap channel. In their scheme, a new chaining construction is proposed to deal with the common information transmission.
Motivated by the above, in this paper, we consider the primitive relay wiretap channel and propose polar coding schemes based on different coordinate partitions. We prove that these schemes satisfy the weak and strong secrecy requirements, while simultaneously guaranteeing a low probability of error at the legitimate receiver. Specifically, by carefully partitioning the coordinates, we first improve the analysis of [19] for the case of weak secrecy in a single transmission block. Then, we propose a new encoding algorithm under the strong secrecy criterion. In particular, we consider the general primitive relay wiretap channel, without assuming degraded or symmetric channels as in [20]. The scheme utilizes previous polar coding techniques with the minimum rate of shared randomness and relies both on choosing the coordinate partitions properly and on a transmission protocol that divides the communication into multiple blocks. Due to the nature of the channel under consideration, a new double-chaining construction is designed in order to satisfy the stronger secrecy requirement. Finally, an additional functionality is considered at the relay node; using the results from [15], the relay possesses a detector that decides whether the decoding result is erroneous and, based on that, either discards it or cooperates in decode-and-forward (DF) mode.

1.2. Structure

The rest of the paper is organized as follows. In Section 2, some preliminaries on polar codes and the basic concept of the proposed scheme are introduced. The system model and the constraints in designing the coding scheme are presented in Section 3, followed by the main results of this paper; the encoding schemes for weak and strong secrecy and the analysis of reliability and security are presented in Section 4. Finally, Section 5 concludes the paper.

2. Polar Codes and the Relay Channel

2.1. Some Fundamentals on Polar Coding

We consider a binary-input channel with input alphabet $\mathcal{X}$, output alphabet $\mathcal{Y}$, and conditional probability distribution $W_{Y|X}(\cdot|\cdot)$, with capacity $C(W) = \max_{P_X} I(X;Y)$. The symmetric capacity $I(W)$ is the value of the mutual information $I(X;Y)$ when $X$ is uniformly distributed. Moreover, if $W$ is symmetric, then $I(W) = C(W)$.
The Bhattacharyya parameter of the channel $W$ is defined as
$Z(W) = \sum_{y \in \mathcal{Y}} \sqrt{W(y|0)\, W(y|1)}.$   (1)
For length $N = 2^n$ with $n \in \mathbb{N}$, let $G_N = B_N F^{\otimes n}$ be the polarizing matrix, where $B_N$ is the bit-reversal mapping, $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, and $F^{\otimes n}$ denotes the $n$th Kronecker power of $F$. By applying the transformation $G_N$ to $N$ bits $u_1^N$ and sending the result through $N$ independent uses of a B-DMC $W: \mathcal{X} \rightarrow \mathcal{Y}$, an $N$-dimensional channel is created, whose $i$-th bit-channel is defined by
$W_N^{(i)}(y_1^N, u_1^{i-1} | u_i) = \frac{1}{2^{N-1}} \sum_{u_{i+1}^N \in \{0,1\}^{N-i}} W_N(y_1^N | u_1^N),$   (2)
where $W_N^{(i)}$ denotes the $i$-th bit-channel created by synthesizing $N$ uses of the channel $W$. As $N$ grows, $W_N^{(i)}$ approaches either an error-free or a completely noisy channel. The idea is to transmit information only over the "good" channels while keeping the inputs of the "bad" channels fixed and known to all parties. Thus, the $N$ bit-channels are partitioned into "good" channels $\mathcal{G}_N(W)$ and "bad" channels $\mathcal{B}_N(W)$ based on the value of their Bhattacharyya parameter:
$\mathcal{G}_N(W) = \{ i \in [N] : Z(W_N^{(i)}) \le \delta_N \}, \quad \mathcal{B}_N(W) = \{ i \in [N] : Z(W_N^{(i)}) \ge 1 - \delta_N \},$   (3)
where $[N] = \{1, 2, \ldots, N\}$, $\delta_N = 2^{-N^{\beta}}$ with $0 < \beta < 1/2$, and $Z(W_N^{(i)})$ is the Bhattacharyya parameter of channel $W_N^{(i)}$. It has been shown [25,26] that for any symmetric binary-input channel $W$ and for any $\beta < 1/2$,
$\lim_{N \to \infty} \frac{|\mathcal{G}_N(W)|}{N} = C(W), \quad \lim_{N \to \infty} \frac{|\mathcal{B}_N(W)|}{N} = 1 - C(W).$   (4)
Based on the above, we can transmit a message of $k = |\mathcal{G}_N(W)|$ bits, which is written in the bits $u_i$, $i \in \mathcal{G}_N(W)$, while the remaining $N - k$ bits of $u_1^N$ are frozen and set to 0. Thus, a code word $x_1^N = u_1^N G_N$ is sent over the channel. On the other side, the received sequence $y_1^N$ can be decoded by finding an estimate of $u_1^N$, computing the values $\hat{u}_i$, $i \in [N]$, based on the following successive cancellation (SC) rule:
$\hat{u}_i = \begin{cases} 0, & \text{if } \dfrac{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} | 0)}{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} | 1)} \ge 1 \text{ and } i \in \mathcal{G}_N(W) \\ 0, & \text{if } i \in \mathcal{B}_N(W) \\ 1, & \text{otherwise.} \end{cases}$   (5)
Using this decoding rule [8,25], we can upper bound the error probability as
$P_e \le \sum_{i \in \mathcal{G}_N(W)} Z(W_N^{(i)}) \le \delta_N,$   (6)
where $\beta \in (0, 1/2)$.
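To make the partition in (3) and the limits in (4) concrete, the following Python sketch (our illustration, not part of the paper) computes the bit-channel Bhattacharyya parameters for a binary erasure channel, for which the one-step recursion Z⁻ = 2Z − Z² and Z⁺ = Z² is exact; the erasure probability, block length, and β are arbitrary choices.

```python
# Illustration only: exact Bhattacharyya parameters of the N = 2**n bit-channels
# of a BEC(eps), and the good/bad partition of Equation (3).

def bec_bhattacharyya(eps: float, n: int) -> list:
    """Return Z(W_N^(i)) for i = 1..N, with N = 2**n, for a BEC with erasure probability eps."""
    z = [eps]
    for _ in range(n):
        # Each synthesized channel splits into a "minus" (worse) and a "plus" (better) channel.
        z = [v for x in z for v in (2 * x - x * x, x * x)]
    return z

def good_bad_sets(z: list, beta: float = 0.45):
    """Good/bad index sets of Equation (3), with delta_N = 2**(-N**beta)."""
    N = len(z)
    delta = 2.0 ** (-(N ** beta))
    good = {i + 1 for i, v in enumerate(z) if v <= delta}
    bad = {i + 1 for i, v in enumerate(z) if v >= 1 - delta}
    return good, bad

if __name__ == "__main__":
    z = bec_bhattacharyya(eps=0.3, n=10)          # N = 1024 synthesized bit-channels
    good, bad = good_bad_sets(z)
    # The fraction of good channels approaches C(W) = 0.7 as N grows (Equation (4)),
    # although at N = 1024 the polarization is still incomplete.
    print(len(good) / len(z), len(bad) / len(z))
```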

2.2. Bounds and Nested Structure

In this paper, one of the main characteristics of the proposed coding scheme is the nested structure of polar codes. Next, we briefly explain this structure and its usage for binning in multiterminal communication scenarios. In particular, the authors in [14] designed a coding scheme for the three-terminal stochastically degraded relay channel with orthogonal receivers. First, however, let us review the capacity bounds of the relay channel.
It is well-known that a bound on the capacity of the general relay channel is given by the cut-set upper bound [3]
$C \le \max_{P_X, P_{X_R}} \min \{ I(X; Y_{SR}, Y_{SD}), \; I(X; Y_{SD}) + I(X_R; Y_{RD}) \},$   (7)
and for the primitive relay channel, the DF lower bound reduces to [6]
$R_{DF} = \max_{P_X} \min \{ I(X; Y_{SR}), \; I(X; Y_{SD}) + I(X_R; Y_{RD}) \}.$   (8)
Next, we briefly describe the nested structure of polar codes proposed in [14] for DF relaying, which guarantees that, for any rate $R < R_{DF}$, there exists a sequence of polar codes with a vanishing probability of error at the destination. The following lemma from [27] is used to exploit the nested nature of polar codes.
Lemma 1.
Let Q and V be binary-input symmetric memoryless channels such that Q is degraded with respect to V. Further, let $Q_1, \ldots, Q_N$ and $V_1, \ldots, V_N$ denote the N corresponding bit-channels. Then, $Q_i$ is degraded with respect to $V_i$; that is, $I(Q_i) \le I(V_i)$ and $Z(Q_i) \ge Z(V_i)$.
From the above lemma, it follows directly that if the channel Q is degraded with respect to V, the set of good channels for Q is a subset of the set of good channels for V, i.e., for all constants $\beta$, we have $\mathcal{G}_N(Q) \subseteq \mathcal{G}_N(V)$. Degradation formalizes the notion that one channel is statistically worse than another. The primitive relay channel is said to be degraded when the source–destination link is worse than the source–relay link. The encoding process starts at the source, where a rate $R < I(W_{SR})$ is chosen and a capacity-achieving polar code for the channel $W_{SR}$ is used. We define $\mathcal{G}_{SR}$ and $\mathcal{B}_{SR}$ as in (3) for channel $W_{SR}$ as the information and frozen sets, respectively. Let $M$ contain the information and the frozen bits transmitted by the source; the bits $m_i$, $i \in \mathcal{G}_{SR}$, carry the message, and $m_i$, $i \in \mathcal{B}_{SR}$, are the frozen bits, which are known to the relay and the destination prior to transmission. As shown in Figure 1, the destination cannot decode this sequence in its entirety due to its degraded channel. So, we also select a set of indices for direct communication over $W_{SD}$ and define $\mathcal{G}_{SD}$ and $\mathcal{B}_{SD}$ similarly. From Lemma 1, it holds that $\mathcal{G}_{SD} \subseteq \mathcal{G}_{SR}$, as $W_{SD}$ is degraded with respect to $W_{SR}$; hence, the decoder at the destination can recover the symbols $m_i$, $i \in \mathcal{G}_{SD} \subseteq \mathcal{G}_{SR}$, and in order to employ SC decoding, the relay must forward the information $m_i$, $i \in \mathcal{G}_{SR} \cap \mathcal{B}_{SD}$.
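The nesting stated in Lemma 1 can be checked numerically in the same toy setting. The sketch below is our illustration, not the construction of [14]; the erasure probabilities are hypothetical and chosen so that W_SD is degraded with respect to W_SR. It verifies G_SD ⊆ G_SR and extracts the indices in G_SR ∩ B_SD that the relay has to forward.

```python
# Illustration only: nested good sets for a degraded pair of BECs and the
# message indices the relay must deliver to the destination (Section 2.2).

def bec_bhattacharyya(eps, n):
    z = [eps]
    for _ in range(n):
        z = [v for x in z for v in (2 * x - x * x, x * x)]
    return z

def good_bad_sets(z, beta=0.45):
    delta = 2.0 ** (-(len(z) ** beta))
    good = {i + 1 for i, v in enumerate(z) if v <= delta}
    bad = {i + 1 for i, v in enumerate(z) if v >= 1 - delta}
    return good, bad

n = 10
g_sr, b_sr = good_bad_sets(bec_bhattacharyya(0.2, n))  # source-relay link (stronger)
g_sd, b_sd = good_bad_sets(bec_bhattacharyya(0.4, n))  # source-destination link (degraded)

assert g_sd <= g_sr             # Lemma 1: G_SD is a subset of G_SR
relay_forward = g_sr & b_sd     # symbols m_i the relay forwards so the destination can run SC
print(len(g_sr), len(g_sd), len(relay_forward))
```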

2.3. Smart Relaying

It has been shown in [15] that by providing the relay with a simple error detector, we can significantly improve the error performance by letting the decoder at the relay select whether or not it will cooperate. We consider a DF cooperative transmission, where the message is encoded by using the nested polar coding scheme of Section 2.2. The relay carries a detector for erroneous decoding, i.e., if the magnitude of the log-likelihood ratio falls below a threshold, the relay discards the decoded result and does not transmit to the destination; otherwise, the communication takes place under the assistance of the relay node.
We consider a threshold $s$ for the relay decoder and the log-likelihood ratio (LLR) as defined for the successive decoding process [8]:
$L_N^{(i)}(y_1^N, \hat{u}_1^{i-1}) = \log \frac{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} | 0)}{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} | 1)},$   (9)
where the decoder decides based on the rule in (5). A flag $F$ is used to determine whether the decoding result will be discarded according to
$F = \begin{cases} 0, & \text{if } -s \le L_N^{(i)}(y_1^N, \hat{u}_1^{i-1}) \le s \\ 1, & \text{otherwise.} \end{cases}$   (10)
The above procedure does not introduce any additional complexity during the relay's decoding; thus, this functionality improves the error probability of the overall communication essentially for free. In the next section, we combine smart relaying with the exploitation of the nested structure of polar codes to design a scheme that satisfies the secrecy and reliability requirements for the primitive relay channel in the presence of an eavesdropper.
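A minimal sketch of the decision in (9) and (10) follows; it is our illustration and not the implementation of [15]. One natural reading of (10) is that the relay discards the decoded block whenever some information bit was decided with an LLR magnitude of at most s; the threshold and LLR values below are hypothetical.

```python
# Illustration only: the relay's discard/cooperate flag of Equation (10),
# applied to the LLRs produced while SC-decoding the information indices.

def relay_flag(llrs, s: float) -> int:
    """Return F = 0 (discard and fall back to direct transmission) if some LLR
    lies in [-s, s], and F = 1 (cooperate in DF mode) otherwise."""
    return 0 if any(-s <= llr <= s for llr in llrs) else 1

# Hypothetical LLRs for two decoded blocks with threshold s = 4:
print(relay_flag([7.2, -9.1, 11.4, -6.8], s=4.0))  # 1 -> relay re-encodes and forwards
print(relay_flag([7.2, -0.3, 11.4, -6.8], s=4.0))  # 0 -> source-destination link only
```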

3. System Model and Requirements

In this section, we describe the primitive relay wiretap channel and set the goals of our encoding scheme: reliability and secrecy. Then, we present the architecture of the proposed communication model with the smart error detector at the relay.

3.1. The Relay Wiretap Channel

The relay wiretap channel models a multihop transmission scheme, where a relay cooperates with the source to communicate with the destination in the presence of an eavesdropper. We consider a four-terminal B-DMC with orthogonal receiver components, with the transition probability mass function
$p(y, y_{sr}, z | x_s, x_r) = p(y_{sd}, y_{sr}, z_{se} | x_s)\, p(y_{rd}, z_{re} | x_r).$   (11)
In this model, $X_S$ and $X_R$ are the channel inputs from the source and relay, respectively, while $Y$, $Y_{SR}$, and $Z$ are the channel outputs at the destination, relay, and eavesdropper, respectively. The observation vectors at the destination's and eavesdropper's outputs are $\mathbf{Y} = (Y_{SD}, Y_{RD})$ and $\mathbf{Z} = (Z_{SE}, Z_{RE})$, respectively. Figure 2 illustrates the channel, which consists of a source, a relay, the legitimate receiver, and the eavesdropper. The source wishes to reliably communicate a message $M$ to the legitimate receiver under the assistance of a trusted relay while keeping it secret from the eavesdropper. The transmission takes place in two stages as follows:
  • In the first stage, the Source encodes a message $M$ into a code word $X_S$ and broadcasts it to the destination and the relay.
  • In the second stage, the Relay first decodes $Y_{SR}$ and obtains $\hat{M}_R$, then re-encodes it and transmits $X_R$ to the destination.
  • The Destination combines the two observations to produce an estimate $\hat{M}$ of the original message.
  • The Eavesdropper observes $\mathbf{Z} = (Z_{SE}, Z_{RE})$ during both transmissions.

3.2. Coding Requirements

We aim to design a coding scheme that satisfies both reliability and secrecy requirements. The probability of error is used to quantify the reliability of the scheme, where the goal is to satisfy
$\lim_{N \to \infty} \Pr\{ M \ne \hat{M} \} = 0.$   (12)
To measure the statistical independence between the transmitted message and the eavesdropper's observation, we use the following metrics:
$\lim_{N \to \infty} \frac{I(M; \mathbf{Z})}{N} = 0,$   (13)
$\lim_{N \to \infty} I(M; \mathbf{Z}) = 0.$   (14)
In (13), security is measured in terms of the normalized mutual information between the transmitted message $M$ and the vector $\mathbf{Z}$ received by the eavesdropper. The encoding scheme is designed to satisfy this requirement in order to operate with weak secrecy. However, as shown by Maurer in [28], this criterion is too weak for cryptographic applications, as it is possible for the eavesdropper to retrieve a considerable amount of information even if (13) is satisfied. As a solution, we can use a stronger metric; that is, the encoding scheme operates with strong secrecy if (14) is satisfied.

3.3. Architecture

Our model is very similar to that described in Section 3.1, where, in order to enjoy the benefits of smart relaying, we add an error detector at the relay. This allows the relay to select whether or not it is beneficial to cooperate with the source. As illustrated in Figure 3, the relay operates based on the detector result, i.e., if $F = 0$, the relay discards the decoding result and the communication is completed via direct transmission from the source to the destination. In this case, the secure polar coding scheme of [9] or [10] for the classic wiretap channel can be used. On the other hand, if $F = 1$, the communication is assisted by the DF relay, and the design of such a coding scheme for reliability and secrecy is proposed in this paper.

4. Polar Coding for Secrecy

In this section, we present the encoding scheme, which simultaneously satisfies the reliability and weak secrecy constraints for the model of Section 3.2. Then, we identify the difficulties in achieving strong secrecy and introduce a new construction that satisfies this condition by using a double-chaining technique.

4.1. Weak Secrecy

As already mentioned above, polarization results in noiseless and purely noisy bit-channels. If the system needs to meet only the reliability requirement, one can fill the good channels with information bits and keep the values of the bad channels fixed. However, when a third, unauthorized party eavesdrops on the communication, the secrecy constraint must also be satisfied. The idea behind coding for secrecy is to confuse the nonlegitimate user with random messages, so that their observation reveals little about the real message. Utilizing polar codes, we can design a secure coding scheme by properly partitioning the bit-channels. First, in order to achieve reliability, we send the message over channels that are good (low entropy) for the legitimate users and bad (high entropy) for the eavesdropper. To confuse the eavesdropper and secure the transmission, we hide the message by sending random bits over the channels that are reliable for the eavesdropper. That is, the goal is to construct an encoding process that makes the communication reliable and secure simultaneously, i.e., one satisfying conditions (12) and (13). For the relay wiretap channel under the DF protocol, these conditions must be satisfied in both transmission stages.
First, as in (3), we define the following subsets of indices:
$\mathcal{G}_N(W_{kl}) = \{ i \in [N] : Z(W_N^{(i)}) \le \delta_N \}, \quad \mathcal{B}_N(W_{kl}) = \{ i \in [N] : Z(W_N^{(i)}) \ge 1 - \delta_N \},$   (15)
where $k \in \{S, R\}$ and $l \in \{R, D, E\}$, with $k \ne l$. Next, we partition the set $[N]$ based on [9] as follows:
$\mathcal{I}_1 = \mathcal{G}_N(W_{SR}) \cap \mathcal{B}_N(W_{SE}), \quad \mathcal{F}_1 = \mathcal{B}_N(W_{SR}), \quad \mathcal{R}_1 = \mathcal{G}_N(W_{SE}).$   (16)
  • Encoding at the source: We choose a rate $R < I(W_{SR})$ and use the indices in $\mathcal{G}_N(W_{SR})$ to encode the information and broadcast it to the relay and the destination. All parties know the values of the frozen bits $\mathcal{F}_1$. We fill the indices in $\mathcal{R}_1$ with random bits in order to protect the message. The information is stored in the bits of set $\mathcal{I}_1$. However, by Lemma 1, degradation implies that the destination cannot decode the whole information, since $\mathcal{G}_N(W_{SD}) \subseteq \mathcal{G}_N(W_{SR})$; we therefore distribute the message over $\mathcal{I}_1^{SD} = \mathcal{G}_N(W_{SD}) \cap \mathcal{B}_N(W_{SE})$ and $\mathcal{I}_1^{RD} = \mathcal{G}_N(W_{SR}) \cap \mathcal{B}_N(W_{SD})$, with $\mathcal{I}_1 = \mathcal{I}_1^{SD} \cup \mathcal{I}_1^{RD}$ (Figure 4). The message bits in $\mathcal{I}_1^{RD}$ must be provided by the relay during the second transmission using the following partition:
$\mathcal{I}_2 = \mathcal{G}_N(W_{RD}) \cap \mathcal{B}_N(W_{RE}), \quad \mathcal{F}_2 = \mathcal{B}_N(W_{RD}), \quad \mathcal{R}_2 = \mathcal{G}_N(W_{RE}).$   (17)
  • Processing at the relay: Using SC decoding and the knowledge of the frozen bits, the relay decodes the message transmitted by the source, then extracts the information bits with indices in $\mathcal{I}_1^{RD}$, re-encodes them with a capacity-achieving polar code for $W_{RD}$ and partition (17), and forwards them to the destination. To protect this transmission, we again fill the indices in $\mathcal{R}_2$ with random bits; the information bits are placed in the set $\mathcal{I}_2$, and the bits in $\mathcal{F}_2$ are frozen and known (Figure 5). A small sketch of this index bookkeeping follows the list.
  • Decoding at the destination: Having received $Y_{RD}$ and recovered the missing bits from the relay, the destination uses these bits in addition to the first observation $Y_{SD}$ and employs the SC algorithm to recover the source's message.
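The sketch below (our illustration, with hypothetical index sets) shows how the partition in (16) is assembled from the good/bad sets of the legitimate and eavesdropper links; the same function applied to the sets of W_RD and W_RE yields the partition in (17).

```python
# Illustration only: the weak-secrecy index partition of Equation (16).
# good_main/bad_main refer to the legitimate link (W_SR), good_eve/bad_eve to the
# eavesdropper link (W_SE); using W_RD/W_RE instead gives Equation (17).

def wiretap_partition(good_main, bad_main, good_eve, bad_eve):
    info = good_main & bad_eve      # I: reliable for the legitimate user, noisy for Eve
    frozen = bad_main               # F: frozen bits, known to everyone
    randomness = good_eve           # R: uniform random bits occupying Eve's good channels
    return info, frozen, randomness

# Hypothetical sets for N = 8 (illustration only):
g_sr, b_sr = {1, 2, 3, 5, 6, 7}, {4, 8}
g_se, b_se = {1, 5}, {2, 3, 4, 6, 7, 8}
I1, F1, R1 = wiretap_partition(g_sr, b_sr, g_se, b_se)
print(sorted(I1), sorted(F1), sorted(R1))   # [2, 3, 6, 7] [4, 8] [1, 5]
```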
Based on the above encoding and decoding process, we prove that the coding scheme satisfies both requirements (12) and (13).

4.1.1. Reliability Analysis

The reliability follows immediately from the design of the coding scheme and the results developed in [8]. Specifically, the low error probability claim must be satisfied at both the relay and the destination. First, since $\mathcal{I}_1 \cup \mathcal{R}_1 = \mathcal{G}_N(W_{SR})$, the error probability at the relay is upper bounded as
$P_e^{SR} \le \sum_{i \in \mathcal{I}_1 \cup \mathcal{R}_1} Z(W_{SR}^{(i)}) \le 2^{-N^{\beta}}.$   (18)
The probability of error for the relay–destination transmission of the second time-slot, since $\mathcal{I}_2 \cup \mathcal{R}_2 = \mathcal{G}_N(W_{RD})$ by design, is upper bounded as
$P_e^{RD} \le \sum_{i \in \mathcal{I}_2 \cup \mathcal{R}_2} Z(W_{RD}^{(i)}) \le 2^{-N^{\beta}}.$   (19)
The bits from the relay are then decoded by the destination and, together with the source's transmission $Y_{SD}$, the original message is retrieved using the SC algorithm. Consequently, the overall error probability at the destination is upper bounded by
$P_e = O(2^{-N^{\beta}}).$   (20)
Moreover, we obtain the constraints on the transmission rate, i.e., $R < I(W_{SR})$ and $R < I(W_{SD}) + I(W_{RD})$, which yield the symmetric DF rate $R_{DF}$ for relay channels.

4.1.2. Secrecy Analysis

Let us turn to the security analysis. Let $U$ denote the intermediate vector constructed by the encoding process, with $U_{\mathcal{I}} = U_{\mathcal{I}_1 \cup \mathcal{I}_2}$, $U_{\mathcal{R}} = U_{\mathcal{R}_1 \cup \mathcal{R}_2}$, and $U_{\mathcal{F}} = U_{\mathcal{F}_1 \cup \mathcal{F}_2} = 0$. Moreover, the source's message is $M = (M_1, M_2)$, taking values in $\{0,1\}^{|\mathcal{I}|}$, with $M_1$ and $M_2$ denoting the messages transmitted by the source and the relay, respectively. We need to prove that the normalized mutual information in (13) between $M$ and $\mathbf{Z}$ vanishes asymptotically. We evaluate the statistical independence between the message and the eavesdropper's observations using a random frozen vector, i.e., averaging over all possible choices of frozen bits, as follows:
$I(M; \mathbf{Z} | U_{\mathcal{F}}) = I(U_{\mathcal{I}}; \mathbf{Z} | U_{\mathcal{F}})$   (21)
$= I(U_{\mathcal{I}}, U_{\mathcal{R}}; \mathbf{Z} | U_{\mathcal{F}}) - I(U_{\mathcal{R}}; \mathbf{Z} | U_{\mathcal{F}}, U_{\mathcal{I}})$   (22)
$= I(U; \mathbf{Z}) - I(U_{\mathcal{R}}; \mathbf{Z} | U_{\mathcal{F}}, U_{\mathcal{I}})$   (23)
$= I(U; \mathbf{Z}) - H(U_{\mathcal{R}} | U_{\mathcal{F}}, U_{\mathcal{I}}) + H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$   (24)
$= I(U; \mathbf{Z}) - H(U_{\mathcal{R}}) + H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$   (25)
$= I(U; \mathbf{Z}) - |\mathcal{R}| + H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$   (26)
$\le N \left( I(W_{SE}) + I(W_{RE}) \right) - |\mathcal{R}| + H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}),$   (27)
where (22) is derived from the chain rule of mutual information, and (23) is due to $I(U_{\mathcal{I}}, U_{\mathcal{R}}; \mathbf{Z} | U_{\mathcal{F}}) = I(U_{\mathcal{I}}, U_{\mathcal{R}}; \mathbf{Z} | U_{\mathcal{F}}) + I(U_{\mathcal{F}}; \mathbf{Z}) = I(U_{\mathcal{I}}, U_{\mathcal{R}}, U_{\mathcal{F}}; \mathbf{Z}) = I(U; \mathbf{Z})$, since $I(U_{\mathcal{F}}; \mathbf{Z}) = 0$. Consequently, (25) follows from the independence of $U_{\mathcal{R}}$, $U_{\mathcal{F}}$, and $U_{\mathcal{I}}$, while (27) follows from the fact that $I(W_{SE})$ and $I(W_{RE})$ are the capacities of $W_{SE}$ and $W_{RE}$, respectively, together with the data processing inequality.
Examining (27), we observe that, in order to upper bound the mutual information in (21), we need an upper bound for the entropy term $H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$. Hence, we have the following lemma:
Lemma 2.
The conditional entropy is upper bounded as
$H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) \le h_2(2^{-N^{\beta}}) + |\mathcal{R}| \, 2^{-N^{\beta}},$   (28)
where $h_2(\cdot)$ is the binary entropy function and $|\mathcal{R}|$ is the size of the random vector $U_{\mathcal{R}}$.
Proof. 
Let us assume that the eavesdropper has knowledge of $U_{\mathcal{I}}$ in addition to $\mathbf{Z}$ and the frozen bits. Then, the eavesdropper can compute an estimate $\hat{U}_{\mathcal{R}}$ of $U_{\mathcal{R}}$ (since the bits of $\mathcal{R}$ are transmitted via the good channels $\mathcal{G}_N(W_{SE})$ and $\mathcal{G}_N(W_{RE})$) by using the SC algorithm, with
$P_e^{Eve} = \Pr[\hat{U}_{\mathcal{R}} \ne U_{\mathcal{R}}] \le \sum_{i \in \mathcal{R}} \left( Z(W_{SE}^{(i)}) + Z(W_{RE}^{(i)}) \right) \le 2^{-N^{\beta}}.$   (29)
We then introduce a random variable $E$ for the error event as follows:
$E = \begin{cases} 1, & \text{if } \hat{U}_{\mathcal{R}} \ne U_{\mathcal{R}} \\ 0, & \text{if } \hat{U}_{\mathcal{R}} = U_{\mathcal{R}}, \end{cases}$
and we derive the following:
$H(E, U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) = H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + H(E | U_{\mathcal{R}}, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$   (30)
$= H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}),$   (31)
since the second entropy term in (30) equals zero. Moreover, note that
$H(E, U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) = H(U_{\mathcal{R}} | E, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + H(E | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}})$   (32)
$\le H(U_{\mathcal{R}} | E, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + H(E),$   (33)
since the second entropy term in (32) can be upper bounded by $H(E) = h_2(P_e^{Eve})$. Thus, from (31) and (33), we get
$H(U_{\mathcal{R}} | \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) \le H(U_{\mathcal{R}} | E, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + H(E)$   (34)
$= \Pr[E = 0] \, H(U_{\mathcal{R}} | E = 0, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + \Pr[E = 1] \, H(U_{\mathcal{R}} | E = 1, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) + h_2(P_e^{Eve})$   (35)
$\le P_e^{Eve} |\mathcal{R}| + h_2(P_e^{Eve}),$   (36)
where (36) holds because $H(U_{\mathcal{R}} | E = 0, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) = 0$ and $H(U_{\mathcal{R}} | E = 1, \mathbf{Z}, U_{\mathcal{F}}, U_{\mathcal{I}}) \le H(U_{\mathcal{R}}) = |\mathcal{R}|$. Rearranging (36) and using (29), we get the desired upper bound, and the proof is completed. □
Finally, considering Lemma 2 and (27), we get the following upper bound:
$I(M; \mathbf{Z} | U_{\mathcal{F}}) \le N \epsilon_N + h_2(2^{-N^{\beta}}) + |\mathcal{R}| \, 2^{-N^{\beta}},$   (37)
where $\epsilon_N = I(W_{SE}) + I(W_{RE}) - |\mathcal{R}|/N$. Dividing both sides of (37) by $N$, all terms tend to zero as $N \to \infty$, since $\lim_{N \to \infty} \epsilon_N = 0$ (recall that $\mathcal{R}_1 = \mathcal{G}_N(W_{SE})$ and $\mathcal{R}_2 = \mathcal{G}_N(W_{RE})$, so $|\mathcal{R}|/N \to I(W_{SE}) + I(W_{RE})$).
Thus, for large enough $N$, the proposed polar coding scheme satisfies the weak secrecy requirement for all possible choices of the frozen vector,
$\lim_{N \to \infty} \frac{I(M; \mathbf{Z})}{N} = 0,$   (38)
and the achievable rate under this encoding procedure is given by
$R_s^{weak} = R_{DF} - \left( I(W_{SE}) + I(W_{RE}) \right).$   (39)
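For symmetric channels, where I(W) coincides with the channel capacity, (39) can be evaluated directly. The short sketch below is a numeric illustration with hypothetical erasure probabilities, not values taken from the paper.

```python
# Illustration only: the DF rate of Equation (8) with uniform inputs and the
# weak-secrecy rate of Equation (39), using BEC capacities as stand-ins.

def bec_capacity(eps: float) -> float:
    return 1.0 - eps

I_sr, I_sd, I_rd = bec_capacity(0.1), bec_capacity(0.5), bec_capacity(0.3)  # legitimate links
I_se, I_re = bec_capacity(0.7), bec_capacity(0.8)                           # eavesdropper links

R_df = min(I_sr, I_sd + I_rd)        # DF rate: min{ I(W_SR), I(W_SD) + I(W_RD) }
R_weak = R_df - (I_se + I_re)        # achievable weak-secrecy rate of (39)
print(R_df, R_weak)                  # approximately 0.9 and 0.4
```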

4.2. Strong Secrecy

The above scheme can only achieve the weak secrecy requirement, because it implicitly assumes that $\mathcal{B}_N^c(W_{SE}) \subseteq \mathcal{G}_N(W_{SR})$ and $\mathcal{B}_N^c(W_{RE}) \subseteq \mathcal{G}_N(W_{RD})$, which is not true in general. Although the number of coordinates in $\mathcal{G}_N^c(W_{SR}) \cap \mathcal{B}_N^c(W_{SE})$ and $\mathcal{G}_N^c(W_{RD}) \cap \mathcal{B}_N^c(W_{RE})$ is very small, these coordinates constitute the difficulty in obtaining reliability and strong secrecy simultaneously. The authors in [10] proposed a different partition of the coordinates, which resolves the above problem. For the case of the symmetric relay wiretap channel with a degraded eavesdropper link, the strong secrecy claim was proved in [20]. In the following, we provide a solution for a more general case, where we make no assumption on the eavesdropper's channel quality and consider a nonsymmetric channel model.

4.2.1. Asymmetric Channel Coding

In [22], the authors presented a polar coding scheme that achieves the capacity of a B-DMC in the general case of arbitrary input distributions. Let $U^N = X^N G_N$ and define the following sets:
$\mathcal{H}_X = \{ i \in [N] : Z(U_i | U^{i-1}) \ge 1 - \delta_N \}, \quad \mathcal{L}_X = \{ i \in [N] : Z(U_i | U^{i-1}) \le \delta_N \},$
$\mathcal{H}_{X|Y} = \{ i \in [N] : Z(U_i | U^{i-1}, Y^N) \ge 1 - \delta_N \}, \quad \mathcal{L}_{X|Y} = \{ i \in [N] : Z(U_i | U^{i-1}, Y^N) \le \delta_N \},$   (40)
where $Z(X|Y)$ is the Bhattacharyya parameter of a random variable pair $(X, Y)$, defined as
$Z(X|Y) = 2 \sum_{y \in \mathcal{Y}} P_Y(y) \sqrt{ P_{X|Y}(0|y) \, P_{X|Y}(1|y) }.$   (41)
From [29], we have
$\lim_{N \to \infty} \frac{|\mathcal{H}_X|}{N} = H(X),$   (42)
$\lim_{N \to \infty} \frac{|\mathcal{H}_{X|Y}|}{N} = H(X|Y).$   (43)
For a nonsymmetric B-DMC $W: \mathcal{X} \rightarrow \mathcal{Y}$, it is not possible to use all the good bit-channels to transmit information. Hence, in [22], a different partition of the set $[N]$ was proposed:
$\mathcal{I} = \mathcal{H}_X \cap \mathcal{L}_{X|Y}, \quad \mathcal{F}_r = \mathcal{H}_X \cap \mathcal{L}_{X|Y}^c, \quad \mathcal{F}_d = \mathcal{H}_X^c.$   (44)
For $i \in \mathcal{H}_X$, $U_i$ is almost uniformly distributed and independent of the past $U^{i-1}$; therefore, it can carry information. For $i \in \mathcal{L}_{X|Y}$, $U_i$ is almost determined by $U^{i-1}$ and $Y^N$, implying that it can be decoded in a successive manner, and from (42) and (43), we have
$\lim_{N \to \infty} \frac{|\mathcal{I}|}{N} = I(X;Y).$   (45)
The remaining indices are frozen; for $i \in \mathcal{F}_r$, $U_i$ is almost uniformly distributed and independent of the past $U^{i-1}$ but cannot be reliably decoded given $Y^N$; for $i \in \mathcal{F}_d$, $U_i$ is almost determined by $U^{i-1}$. As suggested in [22], the values of the bits in $\{ i \in \mathcal{F}_r \cup \mathcal{F}_d \}$ are assigned by random mappings $\lambda_i : \{0,1\}^{i-1} \rightarrow \{0,1\}$ according to the following probability rule:
$\lambda_i(u^{i-1}) = u \quad \text{w.p.} \; P_{U_i | U^{i-1}}(u | u^{i-1}),$   (46)
which are shared between the encoder and the decoder. However, this operation requires sharing a large amount of randomness, which is often undesirable. As a remedy, simplified schemes that require only a vanishing rate of shared randomness were proposed in [23,30].
Let the bits in $\{ i \in \mathcal{I} \}$ be used to store information as mentioned above, and let the bits in $\{ i \in \mathcal{F}_r \}$ be uniformly distributed random bits shared between the encoder and the decoder. This sequence can be reused over several blocks, making the rate loss negligible. The values of $\{ i \in \mathcal{F}_d \}$ are sampled from the distribution $P_{U_i | U^{i-1}}$, and the bits in $\{ i \in \mathcal{H}_X^c \cap \mathcal{L}_{X|Y}^c \}$ are transmitted to the receiver separately with some reliable code, with negligible rate loss (since $|\mathcal{L}_{X|Y}^c \setminus \mathcal{H}_{X|Y}| = o(N)$ and $\mathcal{H}_{X|Y} \subseteq \mathcal{H}_X$, we have that $|\mathcal{H}_X^c \cap \mathcal{L}_{X|Y}^c| = o(N)$).
After the transmission of $x^N = u^N G_N$, the receiver knows the sequences $\{ i \in \mathcal{F}_r \}$ and $\{ i \in \mathcal{H}_X^c \cap \mathcal{L}_{X|Y}^c \}$ and successively constructs the estimate $\hat{u}$ using the following rule for each bit-channel:
$\hat{u}_i = \begin{cases} \arg\max_{u \in \{0,1\}} P_{U_i | U^{i-1}, Y^N}(u | \hat{u}^{i-1}, y^N), & \text{if } i \in \mathcal{L}_{X|Y} \\ u_i, & \text{if } i \in \mathcal{L}_{X|Y}^c. \end{cases}$   (47)
This scheme's rate approaches $I(X;Y)$, and the error probability can be upper bounded by $P_e \le \sum_{i \in \mathcal{L}_{X|Y}} Z(U_i | U^{i-1}, Y^N) = O(\delta_N)$.
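As a small illustration of the partition in (44) (ours, not code from [22]): given the two lists of conditional Bhattacharyya parameters, which in practice must be estimated (e.g., by Monte Carlo methods), the three index classes follow directly. The numeric values below are hypothetical.

```python
# Illustration only: the index partition of Equation (44) for asymmetric models.

def asymmetric_partition(z_x, z_xy, beta=0.45):
    """z_x[i] stands for Z(U_i | U^{i-1}) and z_xy[i] for Z(U_i | U^{i-1}, Y^N),
    both 0-based lists; returns the index sets (I, F_r, F_d)."""
    N = len(z_x)
    delta = 2.0 ** (-(N ** beta))
    H_x = {i for i in range(N) if z_x[i] >= 1 - delta}   # high entropy given the past
    L_xy = {i for i in range(N) if z_xy[i] <= delta}      # decodable given the past and Y^N
    I = H_x & L_xy                                         # information set
    F_r = H_x - L_xy                                       # shared uniformly random bits
    F_d = set(range(N)) - H_x                              # (almost) deterministic bits
    return I, F_r, F_d

# Hypothetical values for N = 4:
I, F_r, F_d = asymmetric_partition([1.0, 1.0, 1.0, 0.0], [0.9, 0.0, 0.0, 0.0])
print(sorted(I), sorted(F_r), sorted(F_d))   # [1, 2] [0] [3]
```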

4.2.2. Encoding Scheme

In the following, we develop the main contribution of this work, the encoding scheme for the primitive relay wiretap channel, removing the assumptions of degradedness and symmetry. The transmission takes place over $k + 1$ blocks of $N$ bits. Prior to the communication, the trusted parties share a secret seed of random bits $\mathcal{D}$, which is used as a "chain" between transmitted blocks. In particular, encoding is performed so that the bits of $\mathcal{D}$ are passed on to the legitimate receiver (relay or destination) using its reliable and secure indices. The chaining is implemented by sending the bits in $\mathcal{D}(j)$ of block $j$ as part of the message of block $j - 1$ for all $j \in \{1, \ldots, k\}$. This construction allows the legitimate receiver to employ SC decoding for block $j$ and recover these bits reliably, while security is guaranteed.
Let us apply the aforementioned construction to the relay wiretap channel under investigation. We consider the following partition of the index set:
$\mathcal{I}_1 = \mathcal{H}_{X_S} \cap \mathcal{G}_N(W_{SR}) \cap \mathcal{B}_N(W_{SE})$
$\mathcal{F}_1 = \mathcal{H}_{X_S} \cap \mathcal{G}_N^c(W_{SR}) \cap \mathcal{B}_N(W_{SE})$
$\mathcal{R}_1 = \mathcal{H}_{X_S} \cap \mathcal{G}_N(W_{SR}) \cap \mathcal{B}_N^c(W_{SE})$
$\mathcal{D}_1 = \mathcal{H}_{X_S} \cap \mathcal{G}_N^c(W_{SR}) \cap \mathcal{B}_N^c(W_{SE})$
$\mathcal{B}_1 = \mathcal{H}_{X_S}^c,$   (48)
where the set $\mathcal{I}_1$ stores the information bits; $\mathcal{F}_1$ is the set of frozen bits; $\mathcal{R}_1$ contains the randomly chosen bits; $\mathcal{D}_1$ contains the misaligned bits; and $\mathcal{B}_1$ contains the almost deterministic bits. We note that, as in the weak secrecy case of Section 4.1, the information bits in $\mathcal{I}_1$ are distributed over both $\mathcal{I}_1^{SD}$, which can be decoded by the destination, and $\mathcal{I}_1^{RD}$, which is the message that the relay forwards through $W_{RD}$. Thus, for this transmission, the relay uses the following partition:
$\mathcal{I}_2 = \mathcal{H}_{X_R} \cap \mathcal{G}_N(W_{RD}) \cap \mathcal{B}_N(W_{RE})$
$\mathcal{F}_2 = \mathcal{H}_{X_R} \cap \mathcal{G}_N^c(W_{RD}) \cap \mathcal{B}_N(W_{RE})$
$\mathcal{R}_2 = \mathcal{H}_{X_R} \cap \mathcal{G}_N(W_{RD}) \cap \mathcal{B}_N^c(W_{RE})$
$\mathcal{D}_2 = \mathcal{H}_{X_R} \cap \mathcal{G}_N^c(W_{RD}) \cap \mathcal{B}_N^c(W_{RE})$
$\mathcal{B}_2 = \mathcal{H}_{X_R}^c.$   (49)
Before describing the encoding procedure, we define the set $\mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2$, which is used as the secret seed and is shared among the source, relay, and destination. Additionally, we fix two arbitrary sets $\mathcal{E}_1 \subseteq \mathcal{I}_1$ and $\mathcal{E}_2 \subseteq \mathcal{I}_2$ with $|\mathcal{E}_1| = |\mathcal{D}_1|$ and $|\mathcal{E}_2| = |\mathcal{D}_2|$, and set $\mathcal{E} = \mathcal{E}_1 \cup \mathcal{E}_2$ with $|\mathcal{E}| = |\mathcal{D}|$. Consequently, the messages of the two-hop transmission are indexed by the bits in $\tilde{\mathcal{I}}_1 = \mathcal{I}_1 \setminus \mathcal{E}_1$ and $\tilde{\mathcal{I}}_2 = \mathcal{I}_2 \setminus \mathcal{E}_2$, respectively. As a consequence of removing the assumption of a degraded eavesdropper channel, the cardinality of $\mathcal{D}_1$ and $\mathcal{D}_2$ is no longer $o(N)$, and we must ensure that removing the bits of $\mathcal{E}_1$ and $\mathcal{E}_2$ from $\mathcal{I}_1$ and $\mathcal{I}_2$, respectively, causes no loss in rate. The preshared rate $|\mathcal{D}|/(kN)$ can, however, be made very small by choosing a large enough $k$.
Overall, the transmission is performed in two stages. In order to satisfy the strong secrecy requirement while the probability of error vanishes, we handle the misaligned bits in both transmissions by creating a double-chaining structure, i.e., the bits in $\mathcal{D}$ and their links $\mathcal{E}$ from the previous block create a chain for each transmission, as in Figure 6.
Let us describe this double-chaining construction, assuming that the legitimate parties have knowledge of the seed $\mathcal{D}(1)$. With $\mathcal{E}(0)$ transmitted by a separate code, the first chain is formed by setting $\mathcal{D}_1(j) = \mathcal{E}_1(j-1)$ during the source transmission towards the relay and the destination, and the second chain is formed by setting $\mathcal{D}_2(j) = \mathcal{E}_2(j-1)$ when the relay sends the missing bits to the legitimate receiver. After each source block transmission, the first $|\mathcal{D}_1|$ bits of $\mathcal{D}$ are used to create the chain and are replaced block by block. Similarly, the second-hop chain is created after each block transmitted by the relay, using the remaining $|\mathcal{D}_2|$ bits of $\mathcal{D}$.
  • Source encoding: For block $j = 1, \ldots, k$, the set $\tilde{\mathcal{I}}_1$ carries the message bits; the set $\mathcal{R}_1$ is filled with uniformly distributed random bits; the first $|\mathcal{D}_1|$ bits of the set $\mathcal{D}$ are chained with the bits of $\mathcal{E}_1$, i.e., $\mathcal{D}_1(j) = \mathcal{E}_1(j-1)$; the bits in $\mathcal{F}_1$ are fixed, known, and can be reused over blocks; and the bits of $\mathcal{B}_1$ are sampled from $P_{U_{i,S} | U_S^{i-1}}$. Moreover, since the source–destination link is weaker, the bits in $\mathcal{G}(W_{SR}) \cap \mathcal{B}(W_{SD})$ need to be delivered to the destination by the relay during the second-hop transmission. That is, the message bits of $\tilde{\mathcal{I}}_1$ are loaded into $\tilde{\mathcal{I}}_1^{SD} = \mathcal{G}(W_{SD}) \cap \mathcal{B}(W_{SE})$ and $\tilde{\mathcal{I}}_1^{RD} = \mathcal{G}(W_{SR}) \cap \mathcal{B}(W_{SD})$. Finally, as described in Section 4.2.1, let $\Phi_1$ be the vector storing the not completely polarized bit-channels $\{ i \in \mathcal{H}_{X_S}^c \cap \mathcal{G}_N^c(W_{SR}) \}$, which is shared secretly between the legitimate users with some reliable error-correcting code. Figure 7 shows the coding scheme; the lines on $\mathcal{D}_2$ and $\mathcal{E}_2$ indicate the first chain construction.
  • Processing at the relay: The relay decodes message block $j$, knowing $\mathcal{F}_1$, the seed $\mathcal{D}_1(j) = \mathcal{E}_1(j-1)$, and the bits of $\Phi_1$; it then extracts the bits in $\tilde{\mathcal{I}}_1^{RD}$ and forwards them to the destination by using a polar code for the channel $W_{RD}$ with partition (49). Specifically, for block $j = 1, \ldots, k$, the message bits are loaded into the set $\tilde{\mathcal{I}}_2$; random bits into the set $\mathcal{R}_2$; the bits in the set $\mathcal{D}_2$ are chained with those of $\mathcal{E}_2$, i.e., $\mathcal{D}_2(j) = \mathcal{E}_2(j-1)$, as shown in Figure 8; and the bits of $\mathcal{B}_2$ are sampled from $P_{U_{i,R} | U_R^{i-1}}$. Furthermore, let $\Phi_2$ be the vector storing the not completely polarized bit-channels $\{ i \in \mathcal{H}_{X_R}^c \cap \mathcal{G}_N^c(W_{RD}) \}$, which is shared secretly with the destination using some reliable error-correcting code. The frozen set for this transmission is $\mathcal{F}_2$, which is known to the destination.
  • Destination decoding: At the destination, the process starts by decoding the first block of the relay transmission, knowing $\mathcal{F}_2$, $\mathcal{D}_2(j) = \mathcal{E}_2(j-1)$, and the bits of $\Phi_2$. The destination then uses those bits, together with the knowledge of $\mathcal{F}_1$, $\mathcal{D}_1$, and $\Phi_1$, to decode the corresponding message block received from the source transmission in the first stage by employing the SC algorithm. (A small bookkeeping sketch of the chaining across blocks follows this list.)
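The bookkeeping of the two chains can be summarized in a few lines. The sketch below is our simplified illustration; the block count and set sizes are hypothetical, and the actual polar encoding and decoding are omitted.

```python
# Illustration only: the double-chaining rule D_1(j) = E_1(j-1) and D_2(j) = E_2(j-1).
import random

k = 4                        # number of chained blocks
d1_size, d2_size = 3, 2      # |D_1| = |E_1| and |D_2| = |E_2| (hypothetical sizes)

# E(0): the initial link bits, conveyed to the legitimate parties by a separate code.
e1_prev = [random.randint(0, 1) for _ in range(d1_size)]
e2_prev = [random.randint(0, 1) for _ in range(d2_size)]

for j in range(1, k + 1):
    # Source block j: the positions in D_1 carry E_1 of block j-1 (first chain).
    d1_j = list(e1_prev)
    # Relay block j: the positions in D_2 carry E_2 of block j-1 (second chain).
    d2_j = list(e2_prev)

    # Fresh link bits E_1(j), E_2(j) travel inside the current message payload and
    # will seed the D positions of block j+1.
    e1_prev = [random.randint(0, 1) for _ in range(d1_size)]
    e2_prev = [random.randint(0, 1) for _ in range(d2_size)]

    print(f"block {j}: D1(j) = {d1_j}, D2(j) = {d2_j}")
```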
Remark 1.
The encoding scheme above requires a certain amount of shared randomness between the legitimate users. Specifically, $\mathcal{F}_1, \mathcal{F}_2$ are available to all users (including the eavesdropper), while $\mathcal{D}_1, \mathcal{D}_2$ and $\Phi_1, \Phi_2$ are known only to the legitimate users. Note that $\mathcal{F}_i$, for $i = 1, 2$, can be reused over blocks, and since $|\mathcal{F}_1| + |\mathcal{F}_2| = O(N)$, the rate needed can become arbitrarily small by choosing a large $k$. The rate of the secret seed $\mathcal{D}_i$ and the rate of the not completely polarized bit-channels $\Phi_i$, for $i = 1, 2$, can also become negligible by choosing a sufficiently large $k$. In general, the rate loss caused by the shared randomness is minor compared to the overall message rate.
Let us now introduce the random variables needed for the reliability and secrecy analysis. For the transmission in blocks $j = 1, \ldots, k$, denote the source's message bits in $\tilde{\mathcal{I}}_1$ by $M_{1,j}$ and let $M_{2,j}$ be the message transmitted by the relay with bits in $\tilde{\mathcal{I}}_2$; the frozen bits in $\mathcal{F}_1$ and $\mathcal{F}_2$ are denoted by $F_{1,j}$ and $F_{2,j}$, respectively. Further, let $E_{1,j}$ and $E_{2,j}$ correspond to the bits belonging to $\mathcal{E}_1(j)$ and $\mathcal{E}_2(j)$, respectively, for $j = 0, \ldots, k$. To make the analysis compact, we also write $M_j = (M_{1,j}, M_{2,j})$, $F_j = (F_{1,j}, F_{2,j})$, $E_j = (E_{1,j}, E_{2,j})$; the vectors $M_1^k = (M_1, \ldots, M_k)$, $F_1^k = (F_1, \ldots, F_k)$, $E_0^k = (E_0, \ldots, E_k)$; and let $Z_0^k = (Z_0, \ldots, Z_k)$ be the sequence of the eavesdropper's observations $Z = (Z_{SE}, Z_{RE})$ during the block transmissions from the source and the relay.

4.2.3. Total Variation Distance and Reliability Analysis

We now analyze the proposed scheme by first examining, in terms of total variation distance, the closeness of the distribution induced by the encoding process to the target distribution; that is, we would like to upper bound the variation distance between these two distributions. Let $V(P, Q)$ denote the total variation distance and $D(P \| Q)$ the Kullback–Leibler divergence between distributions $P$ and $Q$. Following the analysis of [13], we have
$D(P_{X_S} \| Q_{X_S}) = D(P_{U_S} \| Q_{U_S})$   (50)
$= \sum_{i=1}^{N} D(P_{U_{i,S} | U_S^{i-1}} \| Q_{U_{i,S} | U_S^{i-1}})$   (51)
$= \sum_{i \in \mathcal{H}_{X_S}} \left( 1 - H(U_{i,S} | U_S^{i-1}) \right)$   (52)
$\le \sum_{i \in \mathcal{H}_{X_S}} \left( 1 - Z^2(U_{i,S} | U_S^{i-1}) \right)$   (53)
$\le 2 |\mathcal{H}_{X_S}| \delta_N \le 2 N \delta_N,$   (54)
where (50) is due to the polar transform $X_S = U_S G_N$, (51) holds by the chain rule of KL divergence, (52) holds since the values on $\mathcal{H}_{X_S}$ are chosen uniformly at random, (53) follows from the inequality $Z(X|Y)^2 \le H(X|Y)$ ([29], Proposition 2), and (54) follows by the design of $\mathcal{H}_{X_S}$. Similarly, we have
$D(P_{X_R} \| Q_{X_R}) \le 2 N \delta_N.$   (55)
To obtain the desired bound in terms of the total variation distance between the two joint distributions, we note that
$V(P_{X_S, X_R, Y_{SR}, Y_{SD}, Y_{SE}, Y_{RD}, Y_{RE}}, Q_{X_S, X_R, Y_{SR}, Y_{SD}, Y_{SE}, Y_{RD}, Y_{RE}}) = V(P_{X_S, X_R}, Q_{X_S, X_R})$   (56)
$\le \sqrt{4 \ln 2 \, N \delta_N} \triangleq \delta_N^{(1)},$   (57)
where (56) and (57) follow from [31] (Lemma 17) and Pinsker's inequality, using (54) and (55), respectively. This result indicates that the induced joint distribution is asymptotically indistinguishable from the target one.
We next examine the reliability of the scheme by estimating the error probability for the legitimate parties. First, since the transmission to the relay uses a polar coding sequence with rate $R < I(W_{SR})$, and assuming that $\Pr\{\hat{E}_0 \ne E_0\} \to 0$ — i.e., there is a code with error $\epsilon_N \to 0$ used to convey the seed to the legitimate users — the probability of erroneous decoding at the relay over the $k + 1$ blocks is
$P_e^{SR} \le \epsilon_N + k \, O(2^{-N^{\beta}}),$   (58)
for all $\beta < 1/2$.
Similarly, the destination recovers the relay's message $M_2$ with error probability bounded by
$P_e^{RD} \le k \, O(2^{-N^{\beta}}),$   (59)
since $\tilde{\mathcal{I}}_2 \cup \mathcal{R}_2 \subseteq \mathcal{G}(W_{RD})$ and the destination knows $\mathcal{F}_2$ and $\mathcal{D}_2(j)$. Then, using those bits, it can decode the source's message $M_1$ using the SC algorithm. Overall, the probability of error at the destination after the second transmission is bounded as
$P_e \le \epsilon_N + k \, O(2^{-N^{\beta}}),$   (60)
where $\epsilon_N$ is the vanishing error probability of the code transmitting the seed prior to the communication.

4.2.4. Secrecy Analysis

We will show that the strong secrecy requirement is satisfied by utilizing the double-chaining construction described above. For the proposed encoding scheme, the information leakage to the eavesdropper can be analyzed as follows:
$I(M_1^k; Z_0^k | F_1^k) \le I(M_1^k, E_0^k; Z_0^k | F_1^k)$
$= I(M_1^k, E_0^k; Z_k | F_1^k) + I(M_1^k, E_0^k; Z_0^{k-1} | Z_k, F_1^k)$   (61)
$= I(M_k, E_k; Z_k | F_1^k) + I(M_1^k, E_0^k; Z_0^{k-1} | Z_k, F_1^k)$   (62)
$\le I(M_k, E_k; Z_k | F_1^k) + I(M_1^k, E_0^k, Z_k; Z_0^{k-1} | F_1^k)$
$\le I(M_k, E_k; Z_k | F_1^k) + I(M_1^k, E_0^{k-1}, E_k, Z_k; Z_0^{k-1} | F_1^k)$
$= I(M_k, E_k; Z_k | F_1^k) + I(M_1^{k-1}, E_0^{k-1}; Z_0^{k-1} | F_1^k)$   (63)
$= I(M_k, E_k; Z_k | F_k) + I(M_1^{k-1}, E_0^{k-1}; Z_0^{k-1} | F_1^{k-1}),$   (64)
where (62) and (63) are due to the Markov chains $M_1^{k-1} - (M_k, E_k) - Z_k$ and $(M_k, E_k, Z_k) - (M_1^{k-1}, E_0^{k-1}) - Z_0^{k-1}$, respectively, and (64) is obtained by noticing that, using the chain rule for mutual information, we get
$I(M_k, E_k; Z_k | F_1^k) = I(M_k, E_k; Z_k | F_k) + I(M_k, E_k; F_1^{k-1} | Z_k, F_k) - I(M_k, E_k; F_1^{k-1} | F_k)$   (65)
$= I(M_k, E_k; Z_k | F_k),$   (66)
where the last two terms in (65) equal zero, similar to the second term of (64). Next, from (61), applied recursively, we have the following:
$I(M_1^k; Z_0^k | F_1^k) \le \sum_{j=1}^{k} I(M_j, E_j; Z_j | F_j) + I(E_0; Z_0) \le \sum_{j=1}^{k} I(M_j, E_j; Z_j | F_j) + \epsilon_N,$   (67)
where the last term corresponds to the secret seed shared between the legitimate parties prior to the communication, and we assume that there exists a secure coding scheme for it with $\epsilon_N \to 0$. In order to complete the proof of strong secrecy, it remains to show that the first term on the RHS of (67) vanishes as well. Therefore, we need to bound the information leaked through the eavesdropper's channel induced by our encoding. For this purpose, we prove the following lemma.
Lemma 3.
For any $j = 1, \ldots, k$, we have
$I(M_j, E_j; Z_j | F_j) \le \delta_N^{(3)},$
where $\delta_N^{(3)} \triangleq 2 N \delta_N + \delta_N^{(1)} (N - \log \delta_N^{(1)})$.
Proof. 
Let $I = |\mathcal{I}_1| + |\mathcal{I}_2|$, with the corresponding indices labeled $\{a_1, \ldots, a_I\}$, where $a_1 < \ldots < a_I$; similarly, let $F = |\mathcal{F}_1| + |\mathcal{F}_2|$ and label the indices $\{b_1, \ldots, b_F\}$, with $b_1 < \ldots < b_F$. Moreover, let $S = I + F$ with labels $\{c_1, \ldots, c_S\}$, and assume that $c_1 < \ldots < c_S$. Then,
$I(M_j, E_j; Z_j | F_j) = H(M_j, E_j | F_j) - H(M_j, E_j | Z_j, F_j)$
$= H(M_j, E_j) - H(M_j, E_j, F_j | Z_j) + H(F_j | Z_j)$
$= \sum_{i=1}^{I} H(T_{a_i} | T_{a_1}, \ldots, T_{a_{i-1}}) - \sum_{i=1}^{S} H(T_{c_i} | Z_j, T_{c_1}, \ldots, T_{c_{i-1}}) + \sum_{i=1}^{F} H(T_{b_i} | Z_j, T_{b_1}, \ldots, T_{b_{i-1}})$
$\le \sum_{i=1}^{S} \left( 1 - H(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}}) \right),$   (68)
where (68) is due to the fact that conditioning reduces entropy. Considering the entropy term in (68) and noticing that it is evaluated under the distribution induced by the coding scheme, we would like to bound its distance from the entropy under the target distribution, $\tilde{H}(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}})$, where $\tilde{H}$ denotes the entropy under the target distribution $P_{X^N, Z^N}$. First, using the inequality $Z(X|Y)^2 \le H(X|Y)$ ([29], Proposition 2), we get
$\tilde{H}(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}}) \ge Z(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}})^2 \ge 1 - 2\delta_N.$   (69)
To estimate the distance, we rely on Theorem 17.3.3 of [32], and we have
$| H(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}}) - \tilde{H}(T_{c_i} | Z_j, T_{c_1}^{c_{i-1}}) | \le V(P_{X^N, Z^N}, Q_{X^N, Z^N}) \log \frac{2^N}{V(P_{X^N, Z^N}, Q_{X^N, Z^N})}$   (70)
$\le \delta_N^{(1)} (N - \log \delta_N^{(1)}) \triangleq \delta_N^{(2)},$   (71)
where (71) follows from (57) and the fact that $f(x) = x (N - \log x)$ is increasing for $0 < x < 1$.
Thus, from (68), (69), and (71), we conclude that
$I(M_j, E_j; Z_j | F_j) \le 2 N \delta_N + \delta_N^{(2)} \triangleq \delta_N^{(3)}.$   (72)
□
Finally, combining Lemma 3 and (67), we get the desired result:
$I(M_1^k; Z_0^k | F_1^k) \le I(M_1^k, E_0^k; Z_0^k | F_1^k) \le \epsilon_N + k \, \delta_N^{(3)},$   (73)
where, for fixed $k$ and $N \to \infty$, we observe that $I(M_1^k; Z_0^k | F_1^k)$ vanishes, since we have assumed that $\epsilon_N \to 0$; this completes the secrecy analysis.
Moreover, the achievable rate under this encoding scheme is given by
$R_s^{strong} = \frac{k}{k+1} \left[ R_{DF} - \left( I(W_{SE}) + I(W_{RE}) \right) - 2\Delta \right],$   (74)
as $N$ grows large and by choosing $k$ to be large enough, where $2\Delta$ is a vanishingly small rate penalty induced by the double-chaining structure and the shared seed. Undoubtedly, when designing a coding scheme that must satisfy both reliability and secrecy requirements, a trade-off between the achievable rate and secrecy is introduced. In our scheme (although we are discussing the strong secrecy encoding scheme, the same applies to the scheme of Section 4.1, where the rate loss is lower due to the absence of the double-chaining construction), the random bits and the bits allocated to convey the misaligned indices are the factors that make the achievable secret rate lower than $R_{DF}$, the lower bound without secrecy constraints. To reduce this loss and attain a higher achievable rate, a line of work proposes cross-layer security schemes, which combine information-theoretic security with classical encryption mechanisms [33,34].

5. Conclusions

The construction of practical coding schemes for information-theoretic security is of great significance. In this work, we proposed an efficient coding scheme based on polar codes for the primitive relay wiretap channel, which guarantees a level of information-theoretic security. In our setup, we exploited the nested structure of polar codes for cooperative relaying under a DF strategy. We first presented an encoding scheme that achieves reliability and weak secrecy, but fails to provide strong secrecy. Through a different partition of the coordinates and a chaining construction, we were able to prove that reliability and strong secrecy can be obtained simultaneously for this channel model, without any assumption regarding symmetry or the eavesdropper's channel condition. Moreover, in our strong secrecy scheme, we considered a simplified mechanism that only requires a vanishing rate of shared randomness, instead of sharing random mappings that may heavily increase the storage complexity. Finally, as an extra functionality, a "smart" relay is added, where an error detector at the relay's decoder operates to reduce the error probability while the complexity remains the same.

Author Contributions

Writing—original draft preparation, formal analysis, conceptualization, methodology, M.A.; supervision, writing—review and editing, G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by Greece and the European Union (European Social Fund- ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning” in the context of the project “Strengthening Human Resources Research Potential via Doctorate Research” (MIS-5000432), implemented by the State Scholarships Foundation (IKY).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
  2. Van der Meulen, E.C. Three-terminal communication channels. Adv. Appl. Probab. 1971, 3, 120–154.
  3. Cover, T.; Gamal, A.A.E. Capacity theorems for the relay channel. IEEE Trans. Inf. Theory 1979, 25, 572–584.
  4. Lai, L.; Gamal, H.E. The Relay-Eavesdropper Channel: Cooperation for Secrecy. IEEE Trans. Inf. Theory 2008, 54, 4005–4019.
  5. Yuksel, M.; Erkip, E. The relay channel with a wire-tapper. In Proceedings of the 2007 41st Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 14–16 March 2007; pp. 13–18.
  6. Kim, Y.H. Coding techniques for primitive relay channels. In Proceedings of the 2007 Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 26–28 September 2007; pp. 129–135.
  7. Aggarwal, V.; Sankar, L.; Calderbank, A.R.; Poor, H.V. Secrecy capacity of a class of orthogonal relay eavesdropper channels. In Proceedings of the 2009 Information Theory and Applications Workshop, San Diego, CA, USA, 8–14 February 2009; pp. 295–300.
  8. Arikan, E. Channel Polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
  9. Mahdavifar, H.; Vardy, A. Achieving the secrecy capacity of wiretap channels using polar codes. IEEE Trans. Inf. Theory 2011, 57, 6428–6443.
  10. Sasoglu, E.; Vardy, A. A New Polar Coding Scheme for Strong Security on Wiretap Channels. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 1117–1121.
  11. Gulcu, T.; Barg, A. Achieving secrecy capacity of the wiretap channel and broadcast channel with a confidential component. IEEE Trans. Inf. Theory 2017, 63, 1311–1324.
  12. Wei, Y.P.; Ulukus, S. Polar Coding for the General Wiretap Channel With Extensions to Multiuser Scenarios. IEEE J. Sel. Areas Commun. 2016, 34, 278–291.
  13. Chou, R.A.; Bloch, M. Polar Coding for the Broadcast Channel With Confidential Messages: A Random Binning Analogy. IEEE Trans. Inf. Theory 2016, 62, 2410–2429.
  14. Blasco-Serrano, R.; Thobaben, R.; Andersson, M.; Rathi, V.; Skoglund, M. Polar codes for cooperative relaying. IEEE Trans. Commun. 2012, 60, 3263–3273.
  15. Karas, D.S.; Pappi, K.N.; Karagiannidis, G. Smart Decode-and-Forward Relaying with Polar Codes. IEEE Wirel. Commun. Lett. 2014, 3, 62–65.
  16. Wang, L. Polar Coding for the Relay Channels. In Proceedings of the 2015 IEEE International Symposium on Information Theory, Hong Kong, China, 14–19 June 2015; pp. 1532–1536.
  17. Andersson, M.; Rathi, V.; Thobaben, R.; Kliewer, J.; Skoglund, M. Nested polar codes for wiretap and relay channels. IEEE Commun. Lett. 2010, 14, 752–754.
  18. Mondelli, M.; Hassani, S.H.; Urbanke, R. A New Coding Paradigm for the Primitive Relay Channel. In Proceedings of the 2018 IEEE International Symposium on Information Theory, Vail, CO, USA, 17–22 June 2018; pp. 351–355.
  19. Duo, B.; Wang, P.; Li, Y.; Vucetic, B. Secure Transmission for Relay-Eavesdropper Channels Using Polar Coding. In Proceedings of the 2014 IEEE International Conference on Communications (ICC), Sydney, NSW, Australia, 10–14 June 2014; pp. 2197–2202.
  20. Athanasakos, M.; Karagiannidis, G. Strong Secrecy for Relay Wiretap Channels with Polar Codes and Double-Chaining. In Proceedings of the 2020 IEEE Global Communications Conference (GLOBECOM), Taipei, Taiwan, 7–11 December 2020.
  21. Hassani, S.H.; Urbanke, R. Universal polar codes. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 1451–1455.
  22. Honda, J.; Yamamoto, H. Polar coding without alphabet extension for asymmetric models. IEEE Trans. Inf. Theory 2013, 59, 7829–7838.
  23. Chou, R.; Bloch, M. Using deterministic decisions for low-entropy bits in the encoding and decoding of polar codes. In Proceedings of the 2015 Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–2 October 2015.
  24. Del Olmo Alòs, J.; Fonollosa, J.R. Polar Coding for Confidential Broadcasting. Entropy 2020, 22, 149.
  25. Arikan, E.; Telatar, E. On the rate of Channel Polarization. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 1493–1495.
  26. Korada, S.B.; Urbanke, R.L. Polar codes are optimal for lossy source coding. IEEE Trans. Inf. Theory 2010, 56, 1751–1768.
  27. Korada, S.B. Polar Codes for Channel and Source Coding. Ph.D. Thesis, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2009.
  28. Maurer, U.M.; Wolf, S. Information-theoretic key agreement: From weak to strong secrecy for free. In Advances in Cryptology—EUROCRYPT 2000; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2000; pp. 351–368.
  29. Arikan, E. Source polarization. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 899–903.
  30. Gad, E.E.; Li, Y.; Kliewer, J.; Langberg, M.; Jiang, A.A.; Bruck, J. Asymmetric error correction and flash-memory rewriting using polar codes. IEEE Trans. Inf. Theory 2016, 62, 4024–4038.
  31. Cuff, P.W. Communication in Networks for Coordinating Behavior. Ph.D. Thesis, Stanford University, Stanford, CA, USA, 2009.
  32. Cover, T.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006.
  33. Wu, F.; Xing, C.; Zhao, S.; Gao, F. Encrypted polar codes for wiretap channel. In Proceedings of the 2012 2nd International Conference on Computer Science and Network Technology, Changchun, China, 29–31 December 2012; Volume 1, pp. 579–583.
  34. Zhao, Y.; Zou, X.; Lu, Z.; Liu, Z. Chaotic Encrypted Polar Coding Scheme for General Wiretap Channel. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2017, 25, 3331–3340.
Figure 1. Nested structure of polar codes for the relay channel.
Figure 2. The relay wiretap channel model with orthogonal components.
Figure 3. The relay wiretap channel with the erroneous detector.
Figure 4. Stage I: Index partitioning (16) at the source.
Figure 5. Stage II: Index partitioning (17) at the relay.
Figure 6. The double-chaining construction.
Figure 7. Index partitioning (48) at the source.
Figure 8. Index partitioning (49) at the relay.