Article

Neural-Network-Assisted Polar Code Decoding Schemes

Institute of Information Fusion, Naval Aviation University, Yantai 264001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(24), 12700; https://doi.org/10.3390/app122412700
Submission received: 7 November 2022 / Revised: 8 December 2022 / Accepted: 8 December 2022 / Published: 11 December 2022
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)

Abstract

The traditional fast successive-cancellation (SC) decoding algorithm can effectively reduce the number of decoding steps, but it adopts a sub-optimal algorithm, so it cannot improve the bit error performance. To improve the bit error performance while maintaining a low number of decoding steps, we introduce a neural network subcode that can achieve optimal decoding performance and combine it with the traditional fast SC decoding algorithm. While exploring how to combine the neural network node (NNN) with the R1, R0, single-parity-check (SPC), and Rep nodes, we found that decoding sometimes failed when the NNN was not the last subcode. To solve this problem, we propose two neural-network-assisted decoding schemes: a key-bit-based subcode NN-assisted decoding (KSNNAD) scheme and a last subcode NN-assisted decoding (LSNNAD) scheme. The LSNNAD scheme recognizes the last subcode as an NNN, and the nearly optimal decoding performance of the NNN gives rise to some performance improvement. To further improve performance, the KSNNAD scheme recognizes the subcode containing a key bit as an NNN and changes the training data and labels accordingly. Computer simulation results confirm that the two schemes can effectively reduce the number of decoding steps, and their bit error rates (BERs) are lower than those of the successive-cancellation decoder (SCD).

1. Introduction

To reduce interference in the communication process and enable the system to automatically check and correct errors, thereby improving the reliability of data transmission, channel coding technology is widely used in various wireless communication systems. Currently, commonly used coding techniques include Reed–Solomon (RS) codes [1], low-density parity-check (LDPC) codes [2,3], turbo codes [4,5], Bose–Chaudhuri–Hocquenghem codes [6,7,8], convolutional codes [9], and polar codes [10]. Polar codes have been strictly proven to reach the Shannon limit over the binary discrete memoryless channel and the binary erasure channel when the code length tends to infinity [11], and owing to their excellent short-code performance, they were selected as the control channel coding scheme for 5G enhanced mobile broadband scenarios at the 2016 3GPP RAN1#87 meeting [12]. As the only channel coding scheme that has been proven to reach the Shannon limit, polar codes are very important for 5G communication and future 6G communication. In the field of wireless communication, the decoding of polar codes has received extensive attention globally. Examples include the traditional belief propagation (BP) algorithm [13], the successive-cancellation (SC) algorithm [14], the early-stopping BP algorithm [15], other low-complexity BP algorithms [16], fast SC decoding schemes [17,18,19], and the successive-cancellation stack algorithm [20]. These traditional algorithms and their improvements can only reduce the decoding complexity or improve the decoding performance; they cannot satisfy the requirements of improving communication quality and reducing decoding steps at the same time.
In recent years, the rapid development of deep learning in computer vision and language translation has attracted widespread attention from experts and scholars in the field of communication. Some algorithms directly train deep learning models as polar decoders. Cao, Z. et al. explored a polar code residual network decoder composed of a denoiser and a decoder; the denoiser used a multi-layer perceptron (MLP) [21], a convolutional neural network (CNN) [22], and a recurrent neural network (RNN) [23], while the decoder used a CNN [24]. The decoding performance nearly reached that of the traditional successive-cancellation list (SCL) [25] algorithm when the code length was less than 32. In terms of reducing decoding complexity, Xu, W.H. et al. applied a neural network to polar code decoding, but the application was limited to short code lengths [26]. Hashemi, S. et al. used a deep learning algorithm to reduce the complexity of the SC algorithm [27]. However, none of these approaches can be directly applied to decoding longer polar codes. To apply machine learning to polar code decoding, researchers have therefore combined it with traditional polar decoding algorithms.
Many studies have combined the BP algorithm with machine learning. In terms of reducing complexity, Gruber, T. et al. proposed a neural network BP decoder that can be executed at any code length, which reduced the decoding bit error rate (BER) and saved 50% of the hardware cost [28]. Cammerer, S. et al. proposed a neural-network-based segmentation decoding scheme, which complied with the BP decoding law, divided polar codes into single-parity-check (SPC) and repetition code (RC) nodes, and decoded them using a neural network to achieve the same bit error performance as the SC and BP algorithms with lower complexity [29]. In terms of improving decoding performance, Teng, C.F. et al. used the check relationship between the cyclic redundancy check (CRC) remainder and the decoding result to propose a comprehensive loss function over the polar code frozen bits, which realized unsupervised decoding of polar codes with decoding performance better than that of the traditional BP decoding algorithm [30]. Gao, J. et al. proposed a BP structure similar to the res-net network, which achieved better decoding performance than the standard BP algorithm [31]. In terms of simultaneously improving decoding performance and reducing complexity, Xu, W. et al. used deep learning to improve the decoding performance of the BP algorithm and achieved a bit error performance close to that of the CRC-aided SCL (CA-SCL) algorithm with lower computational complexity [32].
The SC algorithm and machine learning have been combined less frequently. Doan, D. et al. divided the polar code into equal-length nodes and decoded them using neural networks [33], obtaining a lower complexity than that of [29]. However, when the code length is longer, the number of neural network models that must be called for decoding grows, and the increase in the amount of calculation is not negligible. None of the above studies achieved low-complexity, low-bit-error decoding by combining the SC algorithm with machine learning.
To achieve low-complexity, low-bit-error decoding by combining the SC algorithm with machine learning, this paper proposes a decoding method combining neural network nodes (NNNs) with traditional nodes. The proposed scheme avoids the extra complexity caused by frequent calls to the neural network model and can not only reduce the decoding delay but also improve the decoding performance.
The remainder of this paper is organized as follows: Section 2 introduces the knowledge required for polar code decoding, including the SC decoding scheme, traditional special nodes, and the fast SC decoding scheme. Section 3 introduces the system model and the decoding scheme based on the NNN and CNN structure and then analyzes the decoding performance of the model; two NNN recognition strategies are explored. Section 4 presents the simulation results over the additive white Gaussian noise (AWGN) and Rayleigh channels. Section 5 summarizes the paper.

2. Preliminaries

2.1. SC Decoding Algorithm

A polar code $(x_1, x_2, \ldots, x_N)$ with length $N = 2^n$ can be encoded by $(x_1, x_2, \ldots, x_N) = (u_1, u_2, \ldots, u_N) F^{\otimes n}$, where $F = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$ and $\otimes$ denotes the Kronecker power. Consider two independent and identically distributed random variables $u_1$ and $u_2$, which are encoded by polar codes:
$$(x_1, x_2) = (u_1, u_2) F = (u_1, u_2) \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = (u_1 \oplus u_2, u_2). \quad (1)$$
Obviously, in this case, $N = 2$ and $n = 1$. As shown in Figure 1, $(x_1, x_2)$ passes through a binary discrete memoryless channel (BDMC) with transition probability $W(y|x)$. Then, the received signal $(y_1, y_2)$ is obtained, but we always work with its log-likelihood ratios (LLRs) $(L_1, L_2)$. The process of recovering $(u_1, u_2)$ from $(L_1, L_2)$ is the decoding process.
As shown in Figure 1, we solve $u_1$ using the $u_1 = f(L_1, L_2)$ operation and solve $u_2$ using the $u_2 = g(u_1, L_1, L_2)$ operation; the $f$ and $g$ operations are derived from conditional probabilities. Specifically, we decide $u_1 = 0$ when $\ln \frac{\Pr(y_1 y_2 \mid u_1 = 0)}{\Pr(y_1 y_2 \mid u_1 = 1)} \geq 0$; otherwise, $u_1 = 1$.
$$\begin{aligned} \Pr(y_1 y_2 \mid u_1) &= \frac{\Pr(y_1 y_2 u_1)}{\Pr(u_1)} = \sum_{u_2 \in \{0,1\}} \frac{\Pr(y_1 y_2 u_1 u_2)}{\Pr(u_1)} = \sum_{u_2 \in \{0,1\}} \frac{\Pr(y_1 y_2 \mid u_1 u_2)\Pr(u_1 u_2)}{\Pr(u_1)} \\ &\overset{(a)}{=} \frac{1}{2} \sum_{u_2 \in \{0,1\}} \Pr(y_1 y_2 \mid x_1 x_2) \overset{(b)}{=} \frac{1}{2} \sum_{u_2 \in \{0,1\}} \Pr(y_1 \mid x_1)\Pr(y_2 \mid x_2) = \frac{1}{2} \sum_{u_2 \in \{0,1\}} \Pr(y_1 \mid u_1 \oplus u_2)\Pr(y_2 \mid u_2) \end{aligned} \quad (2)$$
where (a) follows from $\Pr(u_1) = 0.5$ and $\Pr(u_1 u_2) = 0.25$, and (b) follows because $W$ is a BDMC. Therefore,
$$\ln \frac{\Pr(y_1 y_2 \mid u_1 = 0)}{\Pr(y_1 y_2 \mid u_1 = 1)} = \ln \frac{\sum_{u_2 \in \{0,1\}} \Pr(y_1 \mid u_2)\Pr(y_2 \mid u_2)}{\sum_{u_2 \in \{0,1\}} \Pr(y_1 \mid 1 \oplus u_2)\Pr(y_2 \mid u_2)} = \ln \frac{\Pr(y_1 \mid 0)\Pr(y_2 \mid 0) + \Pr(y_1 \mid 1)\Pr(y_2 \mid 1)}{\Pr(y_1 \mid 1)\Pr(y_2 \mid 0) + \Pr(y_1 \mid 0)\Pr(y_2 \mid 1)} = \ln \frac{\frac{\Pr(y_1 \mid 0)\Pr(y_2 \mid 0)}{\Pr(y_1 \mid 1)\Pr(y_2 \mid 1)} + 1}{\frac{\Pr(y_2 \mid 0)}{\Pr(y_2 \mid 1)} + \frac{\Pr(y_1 \mid 0)}{\Pr(y_1 \mid 1)}} = \ln \frac{1 + e^{L_1 + L_2}}{e^{L_1} + e^{L_2}} \quad (3)$$
where $L_1 = \ln \frac{\Pr(y_1 \mid 0)}{\Pr(y_1 \mid 1)}$ and $L_2 = \ln \frac{\Pr(y_2 \mid 0)}{\Pr(y_2 \mid 1)}$ are the LLRs of the received signal. Equation (3) is called the $f$ operation, which is commonly approximated as follows:
$$\ln \frac{1 + e^{L_1 + L_2}}{e^{L_1} + e^{L_2}} \approx \operatorname{sign}(L_1)\operatorname{sign}(L_2)\min\{|L_1|, |L_2|\}, \quad (4)$$
where $\operatorname{sign}(a) = 1$ if $a > 0$ and $\operatorname{sign}(a) = -1$ otherwise, and $\min$ indicates taking the minimum value.
Similarly, $u_2 = 0$ if $\ln \frac{\Pr(y_1 y_2 u_1 \mid u_2 = 0)}{\Pr(y_1 y_2 u_1 \mid u_2 = 1)} \geq 0$, where
$$\Pr(y_1 y_2 u_1 \mid u_2) = \frac{\Pr(y_1 y_2 u_1 u_2)}{\Pr(u_2)} = \frac{\Pr(y_1 y_2 \mid u_1 u_2)\Pr(u_1 u_2)}{\Pr(u_2)} = \frac{1}{2}\Pr(y_1 y_2 \mid u_1 u_2) = \frac{1}{2}\Pr(y_1 y_2 \mid x_1 x_2) = \frac{1}{2}\Pr(y_1 \mid x_1)\Pr(y_2 \mid x_2) = \frac{1}{2}\Pr(y_1 \mid u_1 \oplus u_2)\Pr(y_2 \mid u_2) \quad (5)$$
The same can be obtained as follows:
$$\ln \frac{\Pr(y_1 y_2 u_1 \mid u_2 = 0)}{\Pr(y_1 y_2 u_1 \mid u_2 = 1)} = \ln \frac{\Pr(y_1 \mid u_1)\Pr(y_2 \mid 0)}{\Pr(y_1 \mid 1 \oplus u_1)\Pr(y_2 \mid 1)} = \ln \frac{\Pr(y_1 \mid u_1)}{\Pr(y_1 \mid 1 \oplus u_1)} + \ln \frac{\Pr(y_2 \mid 0)}{\Pr(y_2 \mid 1)} = (1 - 2u_1)L_1 + L_2 \quad (6)$$
Equation (6) is called the $g$ operation.
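For concreteness, the $f$ and $g$ operations can be written in a few lines of Python; the following sketch (with illustrative function names not taken from the paper) uses the min-sum approximation of Equation (4) for $f$ and Equation (6) for $g$:

```python
import numpy as np

def f_op(L1, L2):
    """f operation, min-sum approximation of Equation (4)."""
    return np.sign(L1) * np.sign(L2) * np.minimum(np.abs(L1), np.abs(L2))

def g_op(u1, L1, L2):
    """g operation, Equation (6): (1 - 2*u1) * L1 + L2."""
    return (1 - 2 * u1) * L1 + L2

# Decoding the length-2 basic unit of Figure 1 from its channel LLRs:
L1, L2 = 1.3, -0.4
u1 = 0 if f_op(L1, L2) >= 0 else 1
u2 = 0 if g_op(u1, L1, L2) >= 0 else 1
```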
The decoding process of Figure 1 can be represented more intuitively with a tree diagram. As shown in Figure 2, the circles represent nodes; the upper node is the root node, and the lower two nodes are leaf nodes. After the root node receives the LLRs of the data, $u_1$ is obtained using the $u_1 = f(L_1, L_2)$ operation and returned to the root node. Then, $u_2$ is obtained using the $u_2 = g(u_1, L_1, L_2)$ operation and also returned to the root node to finally obtain the decoding result. When the code length is 4, the decoding diagram is shown in Figure 3.
As shown in Figure 3, the polar code of length 4 can be divided into three basic units: the red solid-line block diagram in the lower left corner, the red dotted-line block diagram in the lower right corner, and the blue dot-dash-line block diagram above. According to the algorithm in Figure 2, nodes $u_1$, $u_2$, 1, $u_3$, $u_4$, 3, and 1 are calculated sequentially, and the calculation function is marked next to each line. Any polar code with a code length of $N = 2^n$, where $n$ is a positive integer, can be SC-decoded sequentially according to this rule. However, the frozen bits carry fixed information and do not need to be decoded. Therefore, to reduce the amount of calculation in the actual execution process, the frozen-bit part is not included in the recursive calculation. Let white circles denote nodes that contain only frozen bits, black circles denote nodes that contain only information bits, and gray circles denote nodes with both frozen bits and information bits. Unless otherwise specified, the meanings of the nodes in this paper are the same as above. A polar code with a code length of $N = 32$ and information bit length $K = 16$, constructed through the Gaussian approximation (GA) method at a signal-to-noise ratio of 2.5 dB, is used; the corresponding polar code tree diagram is shown in Figure 4. We usually measure the decoding time complexity by the number of decoding steps, which is calculated by adding the numbers of $f$ operations, $g$ operations, and data returns. As shown in Figure 4, the downward solid arrow with a solid line represents the $f$ operation; the downward hollow arrow with a chain line represents the $g$ operation; and the upward solid arrow with a dotted line represents the direction of data return, which is calculated through (1). The serial numbers in the figure record the counting process of the decoding time steps. The total number of decoding steps is 81. Note that an R0 node does not need a data return. Unless otherwise specified, the meanings of the arrows in this paper are the same as above.

2.2. Decoding of Traditional Special Nodes

This section introduces four special nodes: the R0, R1, Rep, and SPC nodes, which are distinguished by the positions of the frozen bits and information bits. A node whose bits are all frozen bits is an R0 node; a node whose bits are all information bits is an R1 node; a node that has only one information bit, located at the last bit position, is a Rep node; and a node that has only one frozen bit, located at the first bit position, is an SPC node. Assume that a polar code $s_1^{N_S} = (s_1, s_2, \ldots, s_{N_S})$, $s_i \in \{0,1\}$, with length $N_S$ is transmitted. After the BDMC with transition probability $\Pr(y|s)$, $s \in \{0,1\}$, the received signal $y = (y_1, y_2, \ldots, y_{N_S})$ is used to obtain the sequence $(\beta_1, \beta_2, \ldots, \beta_{N_S})$, where
$$\beta_i = \left(1 - \operatorname{sign}\left(\ln \frac{\Pr(y_i \mid 0)}{\Pr(y_i \mid 1)}\right)\right)/2. \quad (7)$$
If the node is R1, $(\beta_1, \beta_2, \ldots, \beta_{N_S})$ is its fast decoding result. If the node is R0, there is no need to decode. The decoding methods of the Rep node and the SPC node are as follows.

2.2.1. Fast Decoding of a Rep Node

The log-likelihood ratio (LLR) sum of a Rep node with a code length of $N_S$ can be calculated as follows:
$$S_q = \sum_{i=1}^{N_S} \ln \frac{\Pr(y_i \mid 0)}{\Pr(y_i \mid 1)}, \quad (8)$$
where $S_q$ is the LLR sum and $\Pr$ denotes probability. The decoding result of the Rep node is 0 when $S_q \geq 0$ and 1 otherwise.

2.2.2. Fast Decoding of an SPC Node

For an SPC node with a code length of $N_S$, if $\bigoplus_{i=1}^{N_S} \beta_i = 0$ (i.e., the parity check is satisfied), then $(\beta_1, \beta_2, \ldots, \beta_{N_S})$ is its decoding result. Otherwise, find the position $p$ whose LLR has the smallest absolute value and flip the corresponding hard decision: $\beta_p = 1 - \beta_p$. Then, $(\beta_1, \beta_2, \ldots, \beta_{N_S})$ is the decoding result.
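For reference, the four fast node decoders described above can be sketched in Python as follows (hard decisions per Equation (7), the Rep rule per Equation (8), and the SPC parity repair); the helper names are illustrative only:

```python
import numpy as np

def hard_decision(llr):
    """Equation (7): beta_i = (1 - sign(LLR_i)) / 2, i.e., 1 for negative LLRs."""
    return (np.asarray(llr) < 0).astype(int)

def decode_r0(llr):
    return np.zeros(len(llr), dtype=int)        # all bits are frozen (0)

def decode_r1(llr):
    return hard_decision(llr)                   # all bits are information bits

def decode_rep(llr):
    bit = 0 if np.sum(llr) >= 0 else 1          # sign of the LLR sum, Equation (8)
    return np.full(len(llr), bit, dtype=int)

def decode_spc(llr):
    beta = hard_decision(llr)
    if np.sum(beta) % 2 != 0:                   # parity check fails
        p = np.argmin(np.abs(llr))              # least reliable position
        beta[p] ^= 1                            # flip its hard decision
    return beta
```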

2.3. Traditional Fast SC Decoding

To recognize a polar code of length $N$ as a string of subcodes, we first judge whether the whole polar code can be recognized as a single subcode. If not, we judge whether the former and the latter $N/2$ halves can each be recognized as subcodes; if a half cannot, we divide it further into $N/4$ parts and continue the recognition, until the original code is completely recognized as subcodes. When decoding the polar code with a code length of 32 discussed in Section 2.1, the polar code is first recognized as special nodes, as shown in Figure 5.
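A minimal Python sketch of this recursive recognition, assuming the frozen/information pattern is given as a 0/1 list as in Section 3.4, might look as follows; the return format is chosen for illustration only:

```python
def recognize(info_pos, start=0):
    """Recursively split a code into R0/R1/Rep/SPC subcodes.

    info_pos: list of 0/1 flags (1 = information bit) for the current segment.
    Returns a list of (node_type, start_position, length) tuples.
    """
    n = len(info_pos)
    k = sum(info_pos)
    if k == 0:
        return [("R0", start, n)]
    if k == n:
        return [("R1", start, n)]
    if k == 1 and info_pos[-1] == 1:
        return [("Rep", start, n)]
    if k == n - 1 and info_pos[0] == 0:
        return [("SPC", start, n)]
    half = n // 2                                # not a special node: split in half
    return (recognize(info_pos[:half], start) +
            recognize(info_pos[half:], start + half))
```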
After node recognition, the nodes are decoded as discussed in Section 2.2. The decoding sequence is the same as that in Section 2.1.
The decoding process is shown in Figure 6. Although traditional fast SC decoding requires only 19 decoding steps, far fewer than the 81 steps of SC decoding, it cannot improve the bit-error performance. Maintaining the low step count of the fast SC algorithm while improving the bit-error performance is the goal of the algorithm in this paper.

3. Neural Network-Assisted Polar Code Decoding Scheme

Considering that the traditional fast SC decoding algorithm can effectively reduce the number of decoding steps, in this paper, we aim to reduce the BER of the fast SC decoding structure. Xu Xiang studied the decoding performance of polar code neural network decoders based on CNNs, RNNs, long short-term memory, and MLPs under different network designs and parameter settings [34]. The comparison showed that a CNN-based decoder can achieve better performance with fewer parameters. Therefore, we use a CNN to improve the decoding performance and call the resulting scheme the neural-network-assisted (NNA) decoding scheme.

3.1. System Model

Figure 7 shows the block diagram of the proposed decoding scheme system, which is composed of a training part and a decoding part. The Training Part is executed first, followed by the Decoding Part. The Training Part is composed of “Polar Encoder1” and “Neural Network Training”; the Decoding Part comprises “Polar Encoder2”, “Mapper”, “Noise”, and “NNA Decoding Scheme”. Note that the Mapper Layer, Noise Layer, and Decoder Layer are all trained in the training part, but only the model of the Decoder Layer is saved and called in NNA decoding. Although the Decoder Layer decodes $\dot{y}_1^{N_S}$ to obtain $\hat{\dot{u}}_1^{N_S}$, the purpose of the training part is to obtain the trained Decoder Layer model, not $\hat{\dot{u}}_1^{N_S}$ itself. $N_S$ and $K_S$ denote the code length and information length of the codewords to be trained, respectively, while $N$ and $K$ denote the code length and information length of the codewords to be decoded, respectively. $u_1^N$ denotes a row vector $(u_1, \ldots, u_N)$. The two parts have different inputs, $\dot{u}_1^{N_S}$ and $u_1^N$, and different diagram structures, but $\dot{u}_1^{N_S} \rightarrow \dot{y}_1^{N_S}$ and $u_1^N \rightarrow y_1^N$ are mapped in the same way. Taking $u_1^N \rightarrow y_1^N$ as an example, consider single-carrier signaling in which $N$ binary bits $u_1^N$ are encoded with a channel encoder that generates a codeword of length $N$. The codeword $x_1^N$ is then mapped to binary phase-shift keying (BPSK) symbols $s_1^N$ by the mapper. Then, $s_1^N$ is transmitted over the noisy channel by the noise block, and finally, the signal $y_1^N$ is received. When the channel noise is additive white Gaussian noise (AWGN), $y_1^N$ is given by $y_1^N = s_1^N + n_w$, $n_w \sim \mathcal{N}(0, \sigma_w^2)$. Similarly, $\dot{y}_1^{N_S}$ can be obtained from $\dot{u}_1^{N_S}$. Note that $\dot{u}_1^{N_S}$ is determined by the Node Recognizer. We can obtain the LLR values through $2 y_1^N / \sigma_w^2$, where $\sigma_w^2$ denotes the Gaussian noise power. In this paper, the channel-side information is assumed to be known at the receiver. The details of the neural network (NN) model and decoding scheme are introduced in the following section.
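The mapper, noise, and LLR computation described above can be sketched as follows (assuming the BPSK convention 0 -> +1 and 1 -> -1, which matches the LLR sign convention of Section 2.1; variable names are illustrative):

```python
import numpy as np

def transmit(codeword, sigma_w, rng=np.random.default_rng()):
    """Map a binary codeword to BPSK, add AWGN, and return the channel LLRs."""
    s = 1.0 - 2.0 * np.asarray(codeword, dtype=float)   # Mapper: 0 -> +1, 1 -> -1
    y = s + rng.normal(0.0, sigma_w, size=s.shape)       # Noise: y = s + n_w
    return 2.0 * y / sigma_w**2                          # LLR = 2y / sigma_w^2
```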

3.2. Neural Network Model and NNN

The NN model is trained by the training part in Figure 7. We write the matrix $\dot{U}$ of size $2^{K_S} \times K_S$ to denote all possible information sequences and the matrix $\dot{X}$ of size $2^{K_S} \times N_S$ to denote all possible codewords, so that $\dot{x}_1^{N_S} \in \dot{X}$. Let $A$ denote the position set of the information bits. The Decoder Layer is trained to find the $\dot{u}_1^{K_S}$ corresponding to $\dot{y}_1^{N_S}$, with $\dot{u}_1^{K_S} \in \dot{U}$. To fully train the NN, additional layers without trainable parameters are added to perform certain actions, namely mapping the codewords to BPSK symbols and adding noise. The NN model therefore maps $\dot{x}_1^{N_S}$ to $\dot{u}_1^{K_S}$: $\dot{X}$ represents the training data in this machine learning problem, and $\dot{U}$ is the label. We can generate as much training data as desired by Monte Carlo simulations. The activation functions used in the Decoder Layer are the sigmoid function and the rectified linear unit (ReLU), which are defined as follows:
$$g_{\mathrm{sigmoid}}(z) = \frac{1}{1 + e^{-z}},$$
$$g_{\mathrm{ReLU}}(z) = \max\{0, z\}.$$
Mini-batch gradient descent is used to train the NN, and in each batch, different data are fed into the NN. Table 1 summarizes the architecture of the proposed NN model. In this scheme, the batch size, which is equal to the size of the label set and that of the codeword set, is $2^{K_S}$. Thus, the training complexity scales exponentially with $K_S$, which makes training the NN decoder for long polar codes infeasible in practice. To solve this problem, we define the short codes ($N_S$ is generally less than 32) decoded by the NN decoder as NNNs and combine them with the traditional special nodes mentioned in Section 2.2 to derive an NNA decoding scheme. Note that an NNN could be one of the R1, R0, Rep, or SPC nodes, or none of them. Arıkan’s systematic polar coding scheme [35] is used in this paper unless otherwise stated.
The neural network structure in this paper consists of a Mapper Layer, a Noise Layer, and a Decoder Layer. All three layers are included in training, but in actual decoding, only the Decoder Layer is needed. The structure of the NN is shown in Table 1. The Mapper Layer and the Noise Layer are both custom-function layers generated with the Keras tools in TensorFlow 2. Unless otherwise stated, white Gaussian noise is used, the polar codes are generated with the Gaussian approximation construction method, and the modulation is BPSK. The input of the network during training is $\dot{X}$, and the label is $\dot{U}$. Each round of training adds different noise realizations, i.e., the data of each round of training differ from each other, which prevents overfitting.
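A minimal Keras sketch of this training model, loosely following the layer sizes of Table 1, is given below; the kernel sizes, noise level, and training hyperparameters are not specified in the paper and are assumed here for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_S, K_S = 16, 4
sigma_w = 0.75  # assumed training noise level

inp = layers.Input(shape=(N_S,))
x = layers.Lambda(lambda c: 1.0 - 2.0 * c)(inp)                       # Mapper Layer (BPSK)
x = layers.Lambda(lambda s: s + tf.random.normal(tf.shape(s),
                                                 stddev=sigma_w))(x)  # Noise Layer
x = layers.Reshape((N_S, 1))(x)
x = layers.Conv1D(128, 3, padding="same", activation="relu")(x)       # NN Layer 1
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)        # NN Layer 2
x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)        # NN Layer 3
x = layers.Flatten()(x)
out = layers.Dense(K_S, activation="sigmoid")(x)                      # Decoder output (K_S bits)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
# X_dot: all 2**K_S codewords, shape (2**K_S, N_S); U_dot: the corresponding labels.
# model.fit(X_dot, U_dot, batch_size=2**K_S, epochs=...)
```

In actual decoding, only the part after the Noise Layer (the Decoder Layer) is kept and called, as described in Section 3.1.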
A trained CNN can classify the received signal, find the corresponding information sequence, and thereby decode the polar code. A well-trained neural network can achieve the performance of MAP decoding [26]. The decoding results of the NNN with $N_S = 16$ and $K_S = 4$ are shown in Figure 8.
As shown in Figure 8, the NNN decoding performance is better than that of the SC algorithm and almost the same as that of the optimal MAP decoding algorithm, whereas traditional fast decoding adopts a suboptimal algorithm whose decoding performance is similar to that of SC. Therefore, using a well-trained CNN to decode subcodes yields better decoding performance than decoding the traditionally classified subcodes.

3.3. Neural Network-Assisted (NNA) Decoding

The NNA decoding scheme is composed of a node recognizer and a decoder. The former recognizes the code as a concatenation of smaller constituent codes, simply named nodes, which include the NNN; the latter then decodes these nodes. Note that the NNN is decoded by the trained model obtained from the neural network model. Since the decoding scheme proposed in this paper concerns only the receiving end, we use the BER as the metric to measure the decoding performance. To avoid the computational complexity of the recursive structure of the successive-cancellation decoder (SCD), a vector of size $2N - 1$ is used to store the intermediate calculation results of the SC decoding algorithm. The decoding error rate decreases as the SNR increases, i.e., when the BER is $6 \times 10^{-7}$ at a lower signal-to-noise ratio (SNR), the BERs at higher SNRs with the same parameters are definitely lower than $6 \times 10^{-7}$. Let $6 \times 10^{-7}$ denote the BER threshold. If a BER is lower than the threshold, it is not displayed in the figures. Therefore, the calculations at higher SNRs with BERs lower than $6 \times 10^{-7}$ can be skipped to further improve the calculation speed.
To determine how to recognize the NNN, we must first determine how it works. The signal processing flow is shown in the training part of Figure 7. When $N = 16$, $K$ ranges from 1 to 16. As shown in Figure 7 and as explained in Section 3.1, “Polar Encoder1” generates $2^{K_S}$ possible codewords of $N_S$ bits in the training part. Considering the multiple possibilities of node recognition, we trained the polar codes with $N_S = 16$, $K_S = 1, \ldots, 16$; with $N_S = 8$, $K_S = 1, \ldots, 8$; and with $N_S = 4$, $K_S = 1, \ldots, 4$ in epochs. The gradient of the loss function is calculated in each epoch using the Adam optimizer [36], and the trained models are finally saved to be used in the decoding part.
When only the four kinds of traditional special nodes need to be identified, we first examine the polar code with the entire length $N$ and judge whether it is one of the above four kinds of nodes. If so, we directly perform maximum likelihood decoding. If not, we continue to identify the first half and the second half, and the above process is repeated until the original code is completely decomposed into nodes. Although the recognition process is recursive, it is performed only once, so the increase in computational complexity is limited. This method of passing half of the original sequence into the recursive operation to identify subcodes is called dichotomy.
Assuming that the length of the NNN is 8, if we execute the dichotomy, a subcode with a code length of 8 that does not belong to any of the four traditional special nodes mentioned before will be recognized as an NNN. In practice, however, the decoding performance is greatly reduced in some cases. To find the reasons and explore a proper division method for the NNN, we recognized and decoded polar codes with a code length of 16 and information bit lengths of 1–16. In this experiment, the codewords were constructed using the GA method. Taking the polar code with $N = 16$ and $K = 3$ as an example, we write R0(8) to denote an R0 node with a code length of 8, SPC(4) to denote an SPC node with a code length of 4, and NNN(8/3) to denote an NNN with $N_S = 8$ and $K_S = 3$. When the code is recognized in the traditional way, the node-type structure is R0(8) + R0(4) + SPC(4); the code can also be recognized as R0(8) + NNN(8/3) by the node recognizer, that is, the NNN structure is R0(4) + SPC(4). Note that when $N_S = 16$, the code can also be recognized as NNN(16/3).
We ran multiple simulations and found that when the NNN is the last node of the original code, the decoding performance is always better than that of the traditional fast SC decoding algorithm. However, when an NNN is decoded as a non-last subcode, decoding sometimes fails. This is because the information bit positions assumed by Polar Encoder1 during training can differ from those of the actual NNN determined by the recognizer, so the decoding model is mismatched and errors propagate in the decoding tree, resulting in decoding failure.
There are two ways to solve the abovementioned problem: one is to use an NNN as the last decoded subcode directly, and the other is to adapt the model and improve the probability that the first error position is decoded correctly, thus preventing decoding error propagation. The two solutions lead to two decoding schemes, which are described below.

3.4. Last Subcode Neural Network-Assisted Decoding (LSNNAD) Scheme

Let $N_S$ denote the code length of the last subcode neural network node (LSNNN). The main idea of this scheme is to identify the last subcode as the LSNNN. Note that the LSNNN may be one of the four traditional subcodes mentioned above, or it may be different from all of them. The type, length, position, and order of the nodes are determined at the moment the codeword is constructed. In this experiment, 0 and 1 are used to represent frozen bits and information bits, respectively, i.e., a sequence $information\_pos \in \{0, 1\}^N$ of code length $N$ is obtained after the codeword is constructed. Positions where $information\_pos$ equals 1 store information bits; the other positions store frozen bits. In this experiment, all frozen bits are 0. The LSNNN identification flowchart is shown in Figure 9.
The input $N$ is the same as in Figure 1. Let $D$ denote the positions of the bits and $A$, an arbitrary subset of $\{1, \ldots, N\}$, denote the positions of the information bits. $I$ denotes the index vector, whose initial content is the range $0$ to $N - 1$; it represents the sequence positions currently being processed. $I[0]$ denotes the first element of the vector $I$ and represents the starting position of the code being processed. $D$, $I$, and $N$ change constantly during the calculation process depicted in the flowchart (Figure 9). $D(0{:}N/2)$ denotes the first half of the vector $D$, $D(N/2{:}N)$ denotes the second half of the vector $D$, and $N_Z = N - N_S$ denotes the starting point of the NNN, which is already fixed once the total code length and the length of the NNN are determined. The output $Re$ is a matrix with five rows, the first to the last rows denoting the node type, the starting point, $K$, $N$, and the information positions of every node, respectively. In the node identification process, the decoding depth of each node is recorded, i.e., the node position in the SC decoding tree is recorded. The decoding tree after dividing the subcodes according to the above rules is shown in Figure 10.
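Reusing the recognize() helper sketched in Section 2.3, the LSNNN rule can be expressed compactly: the last $N_S$ bits are always returned as the NNN, and only segments lying before position $N_Z = N - N_S$ are recognized traditionally. This is a simplified sketch of the flowchart in Figure 9 and omits its bookkeeping of node type, starting point, $K$, $N$, and information positions:

```python
def recognize_lsnnad(info_pos, n_s, start=0, n_total=None):
    """Dichotomy recognition in which the last n_s bits are forced to be the LSNNN."""
    if n_total is None:
        n_total = len(info_pos)
    n = len(info_pos)
    n_z = n_total - n_s                           # fixed starting point of the LSNNN
    if start == n_z and n == n_s:
        return [("NNN", start, n)]                # the last subcode is always the LSNNN
    if start + n <= n_z:
        return recognize(info_pos, start)         # segment before the LSNNN: traditional rule
    half = n // 2                                 # segment contains the LSNNN: keep splitting
    return (recognize_lsnnad(info_pos[:half], n_s, start, n_total) +
            recognize_lsnnad(info_pos[half:], n_s, start + half, n_total))
```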

3.5. Key-Bit-Based Subcode Neural Network-Assisted Decoding (KSNNAD) Scheme

Although the LSNNAD scheme minimizes the decoding errors caused by model mismatch, the performance improvement is limited. Therefore, another scheme is needed to further improve the decoding performance.
Ref. [37] analyzed the SC decoding errors of polar codes and found that the first error is usually caused by channel noise, and the frequency of decoding failures caused by this error increases with the code length. Ref. [38] analyzed and proved that when the polar code is divided into several R1 nodes, the first error in SC decoding is likely to occur at the first information bit of an R1 node. Therefore, the decoding performance of the SC algorithm can be improved as long as the first error is corrected. It can be seen from the above analysis that the first error is likely to occur at the first information bit position. This bit is defined as the key bit, and the node containing this bit is recognized as an NNN. In this scheme, all possible NNNs are recorded and trained, and the models are saved to be called in the decoding part, so that the model-mismatch error is resolved. Note that the label $\dot{U}$ is the decoding result whose key bit is flipped in the decoding process, so the training part of the KSNNAD does not need a polar encoder.
This scheme is called the KSNNAD scheme, and its NNN is called the key-bit-based subcode neural network node (KSNNN). The remaining problem is how to recognize a KSNNN.
The simplest idea for recognizing a KSNNN is to select the key bit as the first bit of the KSNNN. Assuming that its position is $Pos$ and the subcode length is $N_S$, the position of the last bit of the KSNNN is $Pos + N_S$. However, the dichotomy does not ensure that the key bit falls at the first position of the KSNNN; it may fall in the middle. Next, we need to recognize a polar code as a KSNNN together with the other four traditional subcodes mentioned above. Considering the complexity, the code length of the KSNNN is limited to $N_S \leq 16$. The identification flowchart is shown in Figure 11.
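Alongside the flowchart of Figure 11, the key-bit rule can be sketched as follows, under the simplifying assumptions that the key bit is the first information bit of the whole code and that the KSNNN is the aligned $N_S$-bit subtree containing it (so, as noted above, the key bit may fall in the middle of the KSNNN); the remaining segments are recognized with the traditional recognize() helper sketched earlier:

```python
def recognize_ksnnad(info_pos, n_s, start=0, key_start=None):
    """Dichotomy recognition in which the n_s-bit subtree containing the key bit becomes the KSNNN."""
    if key_start is None:
        key_bit = info_pos.index(1)               # assumed key bit: first information bit
        key_start = (key_bit // n_s) * n_s        # aligned n_s-bit subtree containing it
    n = len(info_pos)
    if start == key_start and n == n_s:
        return [("KSNNN", start, n)]
    if start + n <= key_start or start >= key_start + n_s:
        return recognize(info_pos, start)         # segment does not touch the KSNNN
    half = n // 2
    return (recognize_ksnnad(info_pos[:half], n_s, start, key_start) +
            recognize_ksnnad(info_pos[half:], n_s, start + half, key_start))
```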
Taking the polar code with $N = 32$ mentioned above as an example, the recognized result and decoding process are shown in Figure 12.
As shown in Figure 12, the number of decoding steps required at this time is 12, compared with the 19 steps in Figure 6. The structure of the KSNNN is R0(8) + Rep(4) + SPC(4), and the structure of the original polar code is KSNNN(16) + Rep(4) + SPC(4) + R1(8). In the actual execution process, the subcode classification procedure is executed first. All possible forms of the KSNNN are recorded. The key-bit-flip results corresponding to all possible situations of each specific form are trained as labels. Finally, the models are saved and called separately in the decoding part.

4. Numerical Simulation

This section presents simulations over the AWGN channel in Section 4.1, Section 4.2, Section 4.3 and Section 4.4; Section 4.5 presents simulations over a Rayleigh channel. The SNR is expressed as $E_s/N_0$, where $E_s$ is the energy per symbol and $N_0$ is the single-sided noise power spectral density. We used Keras as the front end of the TensorFlow 2 platform with an NVIDIA 3080 graphics card and an Intel(R) Core(TM) i9 CPU to run the NN model, with the configuration given in Table 1. The NNNs in Section 4.2, Section 4.3, Section 4.4 and Section 4.5 have the code length $N_S = 16$. The polar codes were constructed at 2.5 dB unless otherwise stated. Since the decoding scheme in this paper is proposed for the receiving end, the evaluation of decoding performance focuses on the BER.

4.1. Performance with Different N S

When simulated with a code length of 1024, a code rate of 0.5, and $N_S$ values of 4, 8, and 16, the BER performance is shown in Figure 13. The BER curve of KSNNAD with $N_S = 4$ is marked as KSNNAD $N_S$ = 4, that of LSNNAD with $N_S = 4$ as LSNNAD $N_S$ = 4, and so on.
As shown in Figure 13, combining the NNN with the R1, R0, SPC, and Rep nodes can indeed improve the decoding performance, but the improvement differs with $N_S$ and with the scheme. Both KSNNAD and LSNNAD show improved bit error performance as $N_S$ increases, although the incremental improvement becomes smaller as $N_S$ grows. Regardless of the value of $N_S$, KSNNAD outperforms LSNNAD owing to the performance boost at the key bit.

4.2. Performance with Different Code Rates

When simulated with a code length of 1024 and code rates of 0.25, 0.5, and 0.75, the BER performance is shown in Figure 14. The BER curve of KSNNAD with a code rate of 0.25 is marked as KSNNAD K = 0.25, that of LSNNAD with a code rate of 0.25 as LSNNAD K = 0.25, that of the FSC decoding scheme with a code rate of 0.25 as FSCD K = 0.25, and so on.
As shown in Figure 14, both KSNNAD and LSNNAD improve the bit error performance compared with SC, and the lower the code rate, the larger the improvement. When the code rate is 0.25, compared with the SC decoder, LSNNAD needs 0.049 dB less to reach a BER of $10^{-4}$, while KSNNAD needs about 0.078 dB less.

4.3. Performance with Different Code Lengths

This section simulates the BER performance of KSNNAD, LSNNAD, and SC with a code rate of 0.5 and code lengths of 64, 128, 256, 512, and 1024. The results are shown in Figure 15. The BER curve of KSNNAD with a code length of 64 is marked as KSNNAD N = 64, that of LSNNAD with a code length of 64 as LSNNAD N = 64, that of the FSC decoding scheme with a code length of 64 as FSCD N = 64, and so on.
As shown in Figure 15, the decoding performance of KSNNAD and LSNNAD is better than that of the SC decoder. When the code length is 1024, compared with the SC decoder, LSNNAD needs 0.049 dB less to reach a BER of $10^{-4}$, while KSNNAD needs approximately 0.127 dB less.

4.4. Performance Comparison under AWGN Channel

This section compares the BER performance of different algorithms with a code length of $N = 1024$ and a code rate of $R = 0.5$. The BER performance of LSNNAD and KSNNAD is compared with that of the schemes proposed in [17,19,25,33,39]. The simplified SC decoder proposed in [17] is marked as SSCD [17], the SCL decoder proposed in [25] with a list length of 2 is marked as SCLD L = 2 [25], the fast SC flip decoder proposed in [19] with $T_{max} = 2$ is marked as FSCFD $T_{max}$ = 2 [19], the recursive trellis decoding scheme proposed in [39] with $\tau = 4$ is marked as RTD $\tau$ = 4 [39], and so on. The results are shown in Figure 16.
As shown in Figure 16, the BER curves of SCD, SSCD, and SCLD with L = 1 coincide. The BER performance of the LSNNAD scheme is slightly worse than that of NSCD and RTD with $\tau = 6$, whereas the performance of KSNNAD is better than that of FSCFD with $T_{max} = 3$ and worse than that of the SCL decoder with L = 2.

4.5. Performance Comparison under Rayleigh Channel

A transmitted signal $s(t)$ passes through the Rayleigh flat-fading channel, and the received signal can be expressed as follows:
$$r(t) = h(t) e^{j\theta(t)} s(t) + n(t),$$
where $h(t)$ is the fading factor, which obeys the Rayleigh distribution; $\theta(t)$ is the phase; and $n(t)$ is the complex white Gaussian noise. The comparison between the two decoding schemes and the SC algorithm is shown in Figure 17.
As shown in Figure 17, over the Rayleigh channel, the decoding performance ranked from worst to best is as follows: SCD, LSNNAD, and KSNNAD.
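For reference, the Rayleigh flat-fading channel used in this comparison can be simulated as sketched below, assuming BPSK, coherent reception, and perfect channel-side information at the receiver (as stated in Section 3.1); this setup is an assumption about the simulation, not a description taken verbatim from the paper:

```python
import numpy as np

def rayleigh_llr(codeword, sigma_w, rng=np.random.default_rng()):
    """BPSK over a Rayleigh flat-fading channel with known fading amplitude at the receiver."""
    n = len(codeword)
    s = 1.0 - 2.0 * np.asarray(codeword, dtype=float)
    # Rayleigh fading amplitude with E[h^2] = 1 (two i.i.d. Gaussian components):
    h = np.sqrt(rng.normal(0, np.sqrt(0.5), n)**2 + rng.normal(0, np.sqrt(0.5), n)**2)
    y = h * s + rng.normal(0.0, sigma_w, n)       # coherent reception (phase removed)
    return 2.0 * h * y / sigma_w**2               # LLR with known h
```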

4.6. Decoding Complexity

4.6.1. Number of Arithmetic Operations

Compared with FSCD, the differences in the LSNNAD and KSNNAD schemes lie in the node recognition and the training of the NN model. The complexity of NNN recognition and NN training can be ignored because they can be performed offline before decoding. As for the complexity of online node recognition, LSNNAD needs at most $(2 + N)\log_2 N - 18$ comparisons in the worst case and only $2N/3 - 16$ comparisons in the best case, while KSNNAD needs at most $(4 + N)\log_2 N - 18$ comparisons and only $2N/3 - 16$ comparisons in the best case.
Since the subcodes are not of equal length, a specific expression for the number of arithmetic operations cannot be given for every polar code. Let $N_S$ denote the code length of the different special nodes. When decoding a polar code with a code length of 1024, a code rate of 0.5, and a construction SNR of 2.5 dB, the number of special nodes of each length for the different decoding schemes is shown in Table 2, the number of arithmetic operations for the different decoding steps is shown in Table 3, and the total number of arithmetic operations for the different decoding schemes is shown in Table 4. Among them, LSNNAD and KSNNAD were simulated with an NN code length of $N_S = 16$.
A sign operation, an absolute-value operation, or a subtraction is recorded as one addition, and a logarithmic or exponential operation is recorded as two multiplications. The operations of each decoding step are shown in Table 3.
An exclusive-or operation is recorded as an addition, so the CRC check of FSCFD needs at most $N$ additions. Calling the model once introduces $164 K_S^2$ multiplications and $164 K_S^2 + 13 K_S$ additions in LSNNAD and KSNNAD. Now that we have the number of arithmetic operations for the different special nodes and for subcode recognition, as well as the number of special nodes, the total number of arithmetic operations for the different decoding schemes can be calculated; the results are shown in Table 4.
As shown in Table 4, when $T_{max} = 2$, FSCFD and KSNNAD are the closest in complexity, and KSNNAD performs better than FSCFD according to the simulation results in Figure 16. RTD with $\tau = 6$ is much more computationally intensive than KSNNAD but does not perform as well as KSNNAD in Figure 16.
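As a quick sanity check of the per-call NN cost quoted above, the counts follow directly from the stated formulas:

```python
def nn_call_cost(k_s):
    """Multiplications and additions introduced by one NN model call."""
    return 164 * k_s**2, 164 * k_s**2 + 13 * k_s

for k_s in (4, 8, 16):
    muls, adds = nn_call_cost(k_s)
    print(f"K_S = {k_s:2d}: {muls} multiplications, {adds} additions")
```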

4.6.2. Decoding Steps

Since the subcodes are not of equal length, a specific expression for the number of decoding steps cannot be given for every polar code. Table 5 shows the decoding steps and node structures of LSNNAD, KSNNAD, and the algorithms proposed in Refs. [17,33] for a polar code with a code length of 128, a code rate of 0.5, and a construction SNR of 2.5 dB. Among them, the NSCD proposed in Ref. [33], LSNNAD, and KSNNAD were simulated with NN code lengths of $N_S = 16$ and $N_S = 8$ (the two cases separated by “/” in the table). Let None(64) denote 64 bits that have not been recognized as any node; note that these 64 bits are not consecutive but consist of several scattered strings of consecutive bits. Further, 21R1 denotes that 21 R1 nodes are recognized in total, without listing the length of every R1 node.
It can be seen from Table 5 that the decoding steps of Ref. [33] are greatly affected by the length of the NNN. Moreover, that scheme calls the NN model multiple times, and each call gives rise to $164 K_S^2$ multiplications and $164 K_S^2 + 13 K_S$ additions. In contrast, the decoding schemes proposed in this paper are less affected by the length of the NNN because they call the decoding model only once. Compared with the traditional SCD algorithm, they can still effectively reduce the number of decoding steps.

5. Conclusions

This paper proposes two NNA decoding schemes, named LSNNAD and KSNNAD. Both recognize the polar code as a combination of NNN, R1, R0, SPC, and Rep nodes, decode these nodes, and call the trained NN model for the NNN. The two schemes differ in two respects. The first is the recognition method: LSNNAD recognizes the last subcode as the NNN, whereas KSNNAD recognizes the subcode containing the key bit as the NNN. The second is the training label: the LSNNAD scheme's training label is the original information bits, while the KSNNAD scheme's training label is the correct decoding result with the key bit flipped, derived from a perfectly matched model. The simulation results verify the effectiveness of the two schemes. Since KSNNAD can effectively improve the decoding accuracy at the first error location, its BER performance curve is better than that of LSNNAD.

Author Contributions

Methodology, H.L.; software, W.Y.; validation, H.L.; formal analysis, H.L.; investigation, H.L.; resources, H.L.; data curation, H.L.; writing—original draft preparation, H.L.; writing—review and editing, W.Y.; supervision, Q.L.; project administration, L.Z.; funding acquisition, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NATIONAL NATURAL SCIENCE FOUNDATION OF CHINA, grant number 62271499.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are very grateful to P. Trifonov for his help with the simulation comparison in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Helleseth, T.; Kløve, T. Algebraic coding theory. In Coding Theory; John Wiley & Sons: San Francisco, CA, USA, 2007; pp. 13–96. [Google Scholar]
  2. Gallager, R. Low-density parity-check codes. IRE Trans. Inf. Theory. 1962, 8, 21–28. [Google Scholar] [CrossRef] [Green Version]
  3. MacKay, D.J.C. Good codes based on very sparse matrices. IEEE Trans. Inf. Theory 1999, 45, 399–431. [Google Scholar] [CrossRef] [Green Version]
  4. Berrou, C.; Glavieux, A.; Thitimajshima, P. Near Shannon limit error-correcting coding and decoding: Turbo codes. In Proceedings of the IEEE International Conference on Communications, Geneva, Switzerland, 23–26 May 1993; pp. 1064–1070. [Google Scholar]
  5. Berrou, C.; Glavieux, A. Near optimum error-correcting coding and decoding: Turbo codes. IEEE Trans. Commun. 1996, 44, 1261–1271. [Google Scholar] [CrossRef] [Green Version]
  6. Bose, R.C.; Chaudhuri, D.K. On a class of error correcting binary group codes. Inform. Control 1960, 3, 68–79. [Google Scholar] [CrossRef] [Green Version]
  7. Hocquenghem, A. Codes correcteurs d’erreurs. Chiffres 1959, 2, 147–156. [Google Scholar]
  8. Reed, I.S.; Solomon, G. Polynomial codes over certain finite fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304. [Google Scholar] [CrossRef]
  9. Viterbi, A.J. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inf. Theory 1967, 13, 260–269. [Google Scholar] [CrossRef] [Green Version]
  10. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes. In Proceedings of the IEEE International Symposium on Information Theory, Toronto, ON, Canada, 6–11 July 2008; pp. 3051–3073. [Google Scholar]
  11. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, Y.J. Channel Codes: Classical and Modern, 1st ed.; Electronic Industry Press: Beijing, China, 2022; pp. 1–576. [Google Scholar]
  13. Jin, S.; Liu, X. A memory efficient belief propagation decoder for polar codes. China Commun. 2015, 12, 34–41. [Google Scholar]
  14. Arikan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE T. Inform. Theory 2009, 55, 3051–3073. [Google Scholar] [CrossRef]
  15. Choi, S.; Yoo, H. Area-efficient early-termination technique for belief-propagation polar decoders. Electronics 2019, 8, 1001. [Google Scholar] [CrossRef] [Green Version]
  16. Abbas, S.M.; Fan, Y.; Chen, J. Low complexity belief propagation polar code decoder. In Proceedings of the IEEE Workshop on Signal Processing Systems, Hangzhou, China, 14–16 October 2015; pp. 1–6. [Google Scholar]
  17. Alamdar-yazdi, A.; Kschischang, F.R. A simplified successive-cancellation decoder for polar codes. IEEE Commun. Lett. 2011, 15, 1378–1380. [Google Scholar] [CrossRef]
  18. Sarkis, G.; Giard, P.; Vardy, A.; Thibeault, C.; Gross, W.J. Fast polar decoders: Algorithm and implementation. IEEE J. Sel. Areas Commun. 2014, 32, 946–957. [Google Scholar] [CrossRef] [Green Version]
  19. Ardakani, M.H.; Hanif, M.; Ardakani, M.; Tellambura, C. Fast successive-cancellation-based decoders of polar codes. IEEE Trans. Commun. 2019, 67, 2360–2363. [Google Scholar] [CrossRef]
  20. Chen, K.; Niu, K. Stack decoding of polar codes. Electron. Lett. 2012, 48, 695–696. [Google Scholar]
  21. Kruse, R.; Mostaghim, S.; Borgelt, C.; Steinbrecher, M. Multi-layer perceptrons. In Computational Intelligence; Springer: Berlin, Germany, 2022; pp. 53–124. [Google Scholar] [CrossRef]
  22. Strigl, D.; Kofler, K.; Podlipnig, S. Performance and scalability of GPU-based convolutional neural networks. In Proceedings of the 18th Euromicro Conference on Parallel, Distributed and Network-based Processing, Pisa, Italy, 17–19 February 2010; pp. 317–324. [Google Scholar]
  23. Ben, N.M.; Chtourou, M. On the training of recurrent neural networks. In Proceedings of the 8th International Multi-Conference on Systems, Signals & Devices, Sousse, Tunisia, 22–25 March 2011; pp. 1–5. [Google Scholar]
  24. Cao, Z.; Zhu, H.; Zhao, Y.; Li, D. Learning to denoise and decode: A novel residual neural network decoder for polar codes. In Proceedings of the IEEE 92nd Vehicular Technology Conference, Victoria, BC, Canada, 18 November–16 December 2020; pp. 1–6. [Google Scholar]
  25. Tal, I.; Vardy, A. List decoding of polar codes. IEEE Trans. Inf. Theory. 2015, 61, 2213–2226. [Google Scholar] [CrossRef]
  26. Xu, W.H.; Wu, Z.Z.; Ueng, Y.L. Improved polar decoder based on deep learning. In Proceedings of the IEEE International Workshop on Signal Processing Systems, Lorient, France, 3–5 October 2017; pp. 1–6. [Google Scholar]
  27. Hashemi, S.; Doan, N.; Tonnellier, T.; Gross, W.J. Deep-learning-aided successive-cancellation decoding of polar codes. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 03–06 November 2019; pp. 532–536. [Google Scholar]
  28. Gruber, T.; Cammerer, S.; Hoydis, J.; Ten Brink, S. On deep learning-based channel decoding. In Proceedings of the 51st Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 22–24 March 2017. [Google Scholar]
  29. Cammerer, S.; Gruber, T.; Hoydis, J.; Ten Brink, S. Deep learning-based decoding of polar codes via partitioning. In Proceedings of the IEEE Global Communications Conference, Singapore, 4–8 December 2017. [Google Scholar]
  30. Teng, C.F.; Wu, A.Y. Unsupervised learning for neural network-based polar decoder via syndrome loss. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar]
  31. Gao, J.; Zhang, D.X.; Dai, J.C. Resnet-like belief-propagation decoding for polar codes. IEEE Wirel. Commun. Le. 2021, 10, 934–937. [Google Scholar] [CrossRef]
  32. Xu, W.H.; Tan, X.S.; Be’ery, Y.; Ueng, Y.L.; Huang, Y.M.; You, X.H.; Zhang, C. Deep learning-aided belief propagation decoder for polar codes. arXiv 2019, arXiv:Signal Processing. [Google Scholar] [CrossRef]
  33. Doan, D.; Hashemi, S.H.; Gross, W.J. Neural successive cancellation decoding of polar codes. In Proceedings of the IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications, Kalamata, Greece, 25–28 June 2018; pp. 1–5. [Google Scholar]
  34. Xu, X. Study on Decoding Algorithms of Polar Codes Based on Deep Learning; Beijing Jiaotong University: Beijing, China, 2019. [Google Scholar]
  35. Arikan, E. Systematic polar coding. IEEE Commun. Lett. 2011, 15, 860–862. [Google Scholar] [CrossRef] [Green Version]
  36. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. Available online: http://arxiv.org/abs/1412.6980 (accessed on 30 January 2017).
  37. Li, S.B.; Deng, Y.Q.; Gao, X. Generalized segmented bit-flipping scheme for successive cancellation decoding of polar codes with cyclic redundancy check. IEEE Access 2019, 7, 83424–83436. [Google Scholar] [CrossRef]
  38. Zhang, Z.Y.; Qin, K.J.; Zhang, L.; Chen, G.T. Progressive bit-flipping decoding of polar codes: A critical-set based tree search approach. IEEE Access 2018, 6, 57738–57750. [Google Scholar] [CrossRef]
  39. Trifonov, P. Recursive trellis decoding techniques of polar codes. In Proceedings of the IEEE International Symposium on Information Theory, Los Angeles, CA, USA, 21–26 June 2020; pp. 407–412. [Google Scholar]
Figure 1. Basic unit model of a polar code.
Figure 2. Basic unit model of a polar code.
Figure 3. Polar code decoding tree diagram with length 4.
Figure 4. Schematic diagram of the SC decoding time step.
Figure 5. Schematic diagram of subcode classification.
Figure 6. Schematic diagram of the decoding process based on traditional subcodes.
Figure 7. Block diagram of the NNA decoding scheme.
Figure 8. Comparison of the NNN decoding and SC decoding algorithms.
Figure 9. LSNNN identification flowchart.
Figure 10. Schematic diagram of the decoding process based on traditional subcodes combined with LSNNN.
Figure 11. KSNNN identification flowchart.
Figure 12. Schematic diagram of the decoding process based on traditional subcodes combined with KSNNN.
Figure 13. Decoding results of different $N_S$.
Figure 14. Decoding results of different code rates.
Figure 15. Decoding results of different code lengths.
Figure 16. Decoding results of different decoding schemes.
Figure 17. Comparison of decoding results.
Table 1. The structure of the NN.

Layer | Output Dimensions | Convolutional Filters | Activation Function
Input | N | / | /
Mapper (Lambda) | N | / | /
Noise (Lambda) | N | / | /
NN Layer 1 | 16K_S | 128 | ReLU
NN Layer 2 | 8K_S | 64 | ReLU
NN Layer 3 | 4K_S | 32 | ReLU
Output | K_S | / | Sigmoid
Table 2. Number of special nodes for different decoding schemes.

Scheme | N_S | R1 | R0 | Rep | SPC | Type-I | Type-II | Type-III | Type-IV | Type-V | NNN
FSCFD [19] | 8 | 1 | 0 | 6 | 8 | 0 | 2 | 1 | 1 | 9 | 0
FSCFD [19] | 16 | 1 | 0 | 4 | 4 | 0 | 0 | 0 | 0 | 1 | 0
FSCFD [19] | 32 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0
FSCFD [19] | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
FSCFD [19] | 128 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 16 | 4 | 4 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 32 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
FSCD | 128 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (N_S = 16) | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (N_S = 16) | 16 | 5 | 4 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 1
LSNNAD (N_S = 16) | 32 | 2 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (N_S = 16) | 64 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
LSNNAD (N_S = 16) | 128 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (N_S = 16) | 8 | 3 | 3 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (N_S = 16) | 16 | 4 | 5 | 9 | 7 | 0 | 0 | 0 | 0 | 0 | 1
KSNNAD (N_S = 16) | 32 | 1 | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (N_S = 16) | 64 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
KSNNAD (N_S = 16) | 128 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Table 3. Number of arithmetic operations for different decoding steps.

Decoding Step | Arithmetic Operation | FSCFD [19] | FSCD | LSNNAD | KSNNAD
Subcode recognition | Comparison | 5984 | 6128 | 6264 | 6315
R1 | Addition | 9N_S − 7 | 2N_S | 2N_S | 2N_S
R1 | Multiplication | N_S − 1 | 4N_S | 4N_S | 4N_S
Rep | Addition | 8N_S − 6 | N_S | N_S | N_S
Rep | Multiplication | N_S − 1 | 3N_S | 3N_S | 3N_S
SPC | Addition | 11N_S − 7 | N_S | N_S | N_S
SPC | Multiplication | 2N_S − 1 | 0 | 0 | 0
Type-I | Addition | 8N_S − 7 | 0 | 0 | 0
Type-I | Multiplication | N_S − 1 | 0 | 0 | 0
Type-II | Addition | 10N_S − 5 | 0 | 0 | 0
Type-II | Multiplication | N_S | 0 | 0 | 0
Type-III | Addition | 10N_S − 3 | 0 | 0 | 0
Type-III | Multiplication | N_S + 1 | 0 | 0 | 0
Type-IV | Addition | 10N_S − 3 | 0 | 0 | 0
Type-IV | Multiplication | N_S + 1 | 0 | 0 | 0
Type-V | Addition | 41N_S − 7 | 0 | 0 | 0
Type-V | Multiplication | 69N_S − 1 | 0 | 0 | 0
NNN | Addition | 0 | 0 | 164K_S^2 | 164K_S^2
NNN | Multiplication | 0 | 0 | 164K_S^2 | 164K_S^2
Table 4. Total number of arithmetic operations for different decoding schemes.

Arithmetic Operation | FSCFD [19] | FSCD | LSNNAD | KSNNAD | RTD [39] (τ = 4) | RTD [39] (τ = 6)
Comparison | 5984 | 6128 | 6264 | 6315 | 3587 | 60,270
Addition | 15,081 (T_max + 1) | 864 | 43,152 | 43,056 | 5358 | 69,256
Multiplication | 11,653 (T_max + 1) | 1248 | 43,280 | 43,208 | 0 | 0
Table 5. Comparison of decoding steps.

Decoding Algorithm | Decoding Steps | Node Structure
SCD | 317 | None(128)
SSCD [17] | 146 | 21R1 + None(64)
NSCD [33] | 28/60 | 8NNN/16NNN
LSNNAD | 66/70 | 2R0 + 6Rep + 4SPC + 5R1 + NNN / 2R0 + 6Rep + 4SPC + 6R1 + NNN
KSNNAD | 70/74 | 3R0 + 5Rep + 4SPC + 6R1 + NNN / 4R0 + 5Rep + 4SPC + 6R1 + NNN
