Rate-Compatible LDPC Codes for Continuous-Variable Quantum Key Distribution over a Wide Range of SNRs

Long-block-length rate-compatible low-density parity-check (LDPC) codes are designed to address the large variation of quantum channel noise and the extremely low signal-to-noise ratio (SNR) in continuous-variable quantum key distribution (CV-QKD). Existing rate-compatible methods for CV-QKD inevitably consume abundant hardware resources and waste secret key resources. In this paper, we propose a design rule for rate-compatible LDPC codes that covers all potential SNRs with a single check matrix. Based on this long-block-length LDPC code, we achieve high-efficiency CV-QKD information reconciliation with a reconciliation efficiency of 91.80%, together with higher hardware processing efficiency and a lower frame error rate than other schemes. The proposed LDPC code can therefore yield a high practical secret key rate and a long transmission distance over an extremely unstable channel.


Introduction
Cryptosystems based on computational complexity are being challenged by rapidly developing quantum computation. Quantum key distribution (QKD) [1][2][3][4], which supplies keys for the one-time pad, has become one of the best solutions because of its unconditional security. QKD enables two remotely separated parties, named Alice and Bob, to extract a symmetrical string of secret keys using a quantum channel.
Currently, there are mainly two types of QKD protocols, called discrete-variable QKD (DV-QKD) [5] and continuous-variable QKD (CV-QKD) [6,7]. In DV-QKD, the information is encoded on discrete variables of a finite-dimensional Hilbert space, such as the polarization or phase of single-photon states. In CV-QKD, the information is encoded on continuous variables of an infinite-dimensional Hilbert space, such as the quadratures of coherent states. Unlike the single-photon detectors used in DV-QKD, the homodyne or heterodyne detection techniques used to measure the transmitted quantum states have already been applied in classical optical communication. Therefore, CV-QKD has great practical advantages: low cost thanks to this relatively mature technology, and the ability to co-propagate with classical optical signals in common fiber. Furthermore, CV-QKD can achieve higher capacity with a frequency-multiplexed entanglement source [8].
Due to the imperfection of the quantum channel and the potential eavesdropper Eve, the key strings held by Alice and Bob are not identical, so a procedure called post-processing is necessary to reconcile them. The post-processing of CV-QKD mainly includes four steps: base vector comparison, parameter estimation, information reconciliation and privacy amplification. Information reconciliation is the most important part, whose performance directly determines the secret key rate. One of the major factors in information reconciliation is the reconciliation efficiency β, given by β = R/C, where R is the code rate and C = 0.5 log2(1 + SNR) is the channel capacity. The hardware processing efficiency is α = D_out/D_in, where D_in represents the data input to the hardware device (e.g., a field-programmable gate array, FPGA, or graphics processing unit, GPU) during information reconciliation and D_out represents the output data per unit time [9]. I_AB is the mutual information between Alice and Bob, and χ_BE is the Holevo bound, the maximal amount of information available to the eavesdropper. The factors above evaluate the performance within a frame, while the frame error rate (FER) represents the failure probability of frames. Ultimately, the practical secret key rate K is given by

K = α(1 − FER)(β I_AB − χ_BE). (1)

The parameters mentioned above are related to the error-correcting code; among the candidates, the low-density parity-check (LDPC) code is efficient for CV-QKD [10]. An LDPC code obtained from a good degree distribution and a reasonable construction method has good error-correction performance. The crux of designing an LDPC code is to construct a check matrix, which connects check nodes and variable nodes.
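These figures of merit can be computed directly. The sketch below assumes the common CV-QKD form K = α(1 − FER)(β I_AB − χ_BE); all numeric inputs in the docstring examples are hypothetical:

```python
import math

def reconciliation_efficiency(rate, snr):
    """beta = R / C, with C = 0.5 * log2(1 + SNR) for the Gaussian channel."""
    capacity = 0.5 * math.log2(1 + snr)
    return rate / capacity

def practical_secret_key_rate(alpha, fer, beta, i_ab, chi_be):
    """K = alpha * (1 - FER) * (beta * I_AB - chi_BE)."""
    return alpha * (1 - fer) * (beta * i_ab - chi_be)
```

For example, at SNR = 1 the capacity is 0.5 bits per symbol, so a code of rate 0.5 would reach β = 1; any practical code falls below that.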
The degree distributions of the check nodes, ρ(x), and the variable nodes, λ(x), are expressed as

ρ(x) = Σ_{j=2}^{d_c} ρ_j x^{j−1},  λ(x) = Σ_{i=2}^{d_v} λ_i x^{i−1},

where ρ_j/λ_i is the proportion of the edges owned by check/variable nodes of degree j/i to the total number of edges in the Tanner graph, and d_c/d_v is the maximum degree of the check/variable nodes. However, quantum signals are easily disturbed during preparation and transmission. To realize free-space QKD with satellites [11,12], ships [13], unmanned aerial vehicles [14] or links carrying orbital angular momentum, we have to take mode distortion, beam wander, weather, etc. into account. Therefore, the problems of large variation of quantum channel noise and extremely low signal-to-noise ratio (SNR) have to be solved.
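Given such a degree distribution pair, the design rate follows from r = 1 − (∫₀¹ ρ(x) dx)/(∫₀¹ λ(x) dx). A minimal sketch, representing each polynomial as a map from degree to edge fraction:

```python
def design_rate(rho, lam):
    """Design rate r = 1 - (int_0^1 rho) / (int_0^1 lam).
    rho/lam map a degree d to its edge fraction c_d; the integral of
    c_d * x**(d - 1) over [0, 1] is c_d / d."""
    int_rho = sum(c / d for d, c in rho.items())
    int_lam = sum(c / d for d, c in lam.items())
    return 1 - int_rho / int_lam

# Regular (3,6) code: all variable nodes degree 3, all check nodes degree 6
r = design_rate({6: 1.0}, {3: 1.0})  # → 0.5
```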
One of the simplest rate-compatible methods for LDPC codes is to operate on a single matrix using puncturing, shortening and extending. Furthermore, Gao proposed multi-matrix rate-compatible reconciliation in which, in each iteration, multiple matrices produce more useful information to correct errors, so the iteration count falls and the convergence speed increases [15]. However, this inevitably degrades the performance of the original check matrix. Another commonly used approach is to construct several check matrices with different code rates to meet the requirements of different SNRs. However, for CV-QKD, the code length has to be longer than 100,000; base matrices are at least 64,800 long even when we construct spatially coupled (SC)-LDPC codes or quasi-cyclic (QC)-LDPC codes [16]. As one of the most effective decoding platforms for LDPC codes, the FPGA has limited hardware resources. To realize high-efficiency information reconciliation with an FPGA over an extremely unstable channel, it is necessary to construct a single-matrix rate-compatible error-correcting code. A comparison of the existing works with our proposed LDPC code is shown in Table 1.
In this paper, we first obtain the degree distribution with discrete density evolution and the differential evolution algorithm. Then we use random construction, the progressive edge growth (PEG) algorithm and the rate-compatible methods of extending and puncturing to construct a check matrix with a code length of 64,800. Finally, we extend the above LDPC code with quasi-cyclic extension to a code length of 648,000. The results show that the proposed code has a reconciliation efficiency of 91.80%, higher hardware processing efficiency and lower FER than other schemes. Therefore, we can obtain a high practical secret key rate and a longer transmission distance over an extremely unstable channel with a wide range of SNRs. The remainder of this paper is organized as follows. In Section 2, we present some preliminaries of LDPC codes and rate compatibility. In Section 3, we introduce how to construct our rate-compatible (RC)-LDPC code. In Section 4, we present the simulation results and comparisons between the proposed scheme and existing schemes. Finally, the conclusions are drawn in Section 5.

Preliminaries
In this section, we first briefly introduce discrete density evolution and differential evolution, which are used to generate the degree distribution. Then we introduce the constructions: random construction, the PEG algorithm and QC-LDPC extension, with which we can build the check matrix from the degree distribution obtained earlier. We also introduce the rate-compatible methods: puncturing and extending.

Discrete Density Evolution
Compared with continuous density evolution [17] and the Gaussian approximation algorithm [18], discrete density evolution [19] has lower complexity and higher accuracy. Therefore, in this paper, we use discrete density evolution to obtain the optimal degree distribution of LDPC codes. The main steps are as follows:
• First, we define two functions: the quantization function Q and the probability mass function S.
Here ⌊x⌋ is the largest integer not greater than x, and ⌈x⌉ is the smallest integer not less than x. The value range of the decoded messages is [−L, L], evenly divided into m = 2^q intervals; the quantization interval is ∆ = 2L/m.
• The two-input operator R is

R(a, b) = Q(2 tanh⁻¹(tanh(a/2) tanh(b/2))),

where a and b are quantized messages.
• In the check node and variable node updates of discrete density evolution, the operator R is applied repeatedly to the message distributions at the check nodes, and the variable node update uses discrete convolution of the message distributions, where l is the iteration number and the initial value p^(0) is the quantized distribution of the channel messages.
• Finally, we calculate the error rate as the probability mass of the messages with the wrong sign. The procedure ends when the error rate falls below the target error probability or the maximum number of iterations is reached. Otherwise, we continue to update the check nodes and variable nodes.
Discrete density evolution was first proposed to obtain the noise threshold for a given degree distribution pair (ρ_j, λ_i). In our work, we use it to obtain the degree distribution under specific channel noise.
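The quantizer and the two-input operator at the heart of discrete density evolution can be sketched as follows. L, q and the clipping constant are illustrative assumptions, not the paper's exact settings, and R uses the standard check-node combination of log-likelihood messages:

```python
import math

L_MAX = 10.0                     # message range [-L, L] (assumed value)
Q_BITS = 10                      # m = 2**q quantization intervals (assumed)
DELTA = 2 * L_MAX / 2 ** Q_BITS  # quantization interval = 2L / m

def Q(x):
    """Quantize a log-likelihood message to the nearest multiple of DELTA,
    clipping it to [-L, L] first."""
    x = max(-L_MAX, min(L_MAX, x))
    return DELTA * math.floor(x / DELTA + 0.5)

def R(a, b):
    """Two-input check-node operator on quantized messages:
    R(a, b) = Q(2 * atanh(tanh(a/2) * tanh(b/2)))."""
    t = math.tanh(a / 2) * math.tanh(b / 2)
    t = max(-0.999999999, min(0.999999999, t))  # guard against atanh(+/-1)
    return Q(2 * math.atanh(t))
```

A degree-d_c check node applies R iteratively over d_c − 1 incoming message densities; the variable node side convolves densities instead.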

Differential Evolution
Storn first proposed the differential evolution algorithm in 1995 to solve optimization problems [20]. It uses a differential mutation operator and a crossover operator to generate new individuals through survival of the fittest. Based on this method, we can obtain the optimal degree distribution under specific channel noise.
• Set the channel noise threshold σ, the target error probability P_e, the maximum number of iterations l_max, the maximum variable node degree d_v and the number of terms n of the degree distribution polynomial.
• Randomly generate NP vectors P_{i,G}, i = 1, 2, . . . , NP, for the degree distribution of the variable nodes. Run discrete density evolution on each vector to obtain its error probability P_{e_i,G}. The vector with the lowest error probability is marked as the best vector P_{best,G}, with error probability P_{e_best,G}.
• For each i, randomly choose four vectors from the population and form the new candidate vector

v_{i,G+1} = P_{best,G} + F(P_{r1,G} − P_{r2,G} + P_{r3,G} − P_{r4,G}),

then calculate the corresponding error probabilities. The vector with the lowest error probability is marked as the best vector P_{best,G+1}, with error probability P_{e_best,G+1}.
• If the error probability of the best vector satisfies P_{e_best,G+1} > P_e, update the vectors again and return to the previous step. If P_{e_best,G+1} ≤ P_e, then P_{best,G+1} is the desired vector.
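The steps above can be sketched as a minimal differential-evolution loop. NP, F, the generation count and the toy cost are illustrative; a real run would use discrete density evolution as the cost function:

```python
import random

def differential_evolution(cost, dim, NP=20, F=0.5, generations=50, seed=1):
    """Minimize `cost` over probability vectors (degree-distribution fractions).
    Mutation follows the four-vector rule above:
    v = P_best + F * (P_r1 - P_r2 + P_r3 - P_r4), then renormalize."""
    rng = random.Random(seed)

    def normalize(v):
        # Clamp negatives and rescale so the fractions sum to 1
        v = [max(x, 0.0) for x in v]
        s = sum(v) or 1.0
        return [x / s for x in v]

    pop = [normalize([rng.random() for _ in range(dim)]) for _ in range(NP)]
    best = min(pop, key=cost)
    for _ in range(generations):
        r1, r2, r3, r4 = rng.sample(range(NP), 4)
        v = normalize([best[k] + F * (pop[r1][k] - pop[r2][k]
                                      + pop[r3][k] - pop[r4][k])
                       for k in range(dim)])
        if cost(v) < cost(best):  # survival of the fittest
            best = v
    return best

# Toy usage: recover a hypothetical target vector
target = [0.5, 0.3, 0.2]
cost = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
best = differential_evolution(cost, dim=3)
```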

Constructions
In this work, we use random construction, the PEG algorithm and QC extension for their good results in various situations.

Random Construction
Various random constructions have been proposed based on the same core idea: place non-zero elements at random unfilled positions in the check matrix without violating any set constraint. There are two constraint rules: first, row l_i contains X_i ones and column c_i contains Y_i ones according to the degree distributions of the check nodes and variable nodes; second, the number of ones at the same positions in any two rows or columns is at most 1. This means the shortest girth has to be longer than 4.
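A minimal sketch of such a constrained random construction follows. The second constraint is enforced by rejection sampling: any column that would share more than one row with an existing column is redrawn. The parameters are toy values; real CV-QKD matrices are far larger and sparser:

```python
import random
from itertools import combinations

def random_check_matrix(n_rows, n_cols, col_weight, seed=0):
    """Place `col_weight` ones per column at random, rejecting any column
    that shares more than one row with an existing column (forbids 4-cycles,
    so the shortest girth is longer than 4)."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_cols):
        while True:
            cand = set(rng.sample(range(n_rows), col_weight))
            if all(len(cand & c) <= 1 for c in cols):
                cols.append(cand)
                break
    return cols  # column-wise sparse representation: list of row-index sets

H = random_check_matrix(n_rows=12, n_cols=8, col_weight=3)
```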

Progressive-Edge-Growth Algorithm
Before introducing the PEG algorithm, we first introduce a common representation of LDPC codes, the Tanner graph, together with several concepts. As shown in Figure 1a, V_i is a variable node, C_j is a check node and the line between them is called an edge. If two nodes are connected, we say they are adjacent. The girth is defined as the minimum number of edges on a path that leaves a node and returns to it with every intermediate node passed only once. As shown in Figure 1a, the shortest girth is 6.

In the PEG algorithm, new edges are added so as to make the girth of the Tanner graph corresponding to the check matrix as large as possible. As shown in Figure 1b, the steps are as follows:
• Determine the number of check nodes and variable nodes and the degree distribution of the variable nodes.
• Randomly choose a variable node V_i and find the check node C_j with the fewest connected edges in the Tanner graph. Connect V_i and C_j with an edge and take it as the first edge of V_i.
• Take the variable node V_i as the root node and expand the current Tanner graph. When the expansion depth is l, the set of check nodes adjacent to V_i within depth l is recorded as N^l_{V_i}, and its complement with respect to the set of all check nodes is the candidate set. Expand the Tanner graph from the root node, increasing the depth l, until the complement stops shrinking; then connect V_i to a lowest-degree check node in the complement set.
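The PEG steps can be sketched as follows. This is a simplified reading: the breadth-first search is run to exhaustion, so the candidate set is the final complement of the reachable check nodes, and ties are broken by the lowest current check-node degree; the sizes below are toy values:

```python
def peg_construct(n_var, n_chk, var_degrees):
    """Simplified PEG: for each new edge of variable node v, BFS over the
    current Tanner graph from v; prefer check nodes v cannot reach at all
    (maximizing the local girth), breaking ties by lowest check degree."""
    chk_nbrs = [set() for _ in range(n_chk)]   # check node -> variable nodes
    var_nbrs = [set() for _ in range(n_var)]   # variable node -> check nodes
    for v in range(n_var):
        for _ in range(var_degrees[v]):
            # BFS: collect every check node reachable from v
            reached, frontier, seen_var = set(), {v}, {v}
            while frontier:
                nxt = set()
                for u in frontier:
                    for c in var_nbrs[u]:
                        if c not in reached:
                            reached.add(c)
                            nxt |= {w for w in chk_nbrs[c] if w not in seen_var}
                seen_var |= nxt
                frontier = nxt
            candidates = [c for c in range(n_chk) if c not in reached]
            if not candidates:  # everything reachable: pick an unused check
                candidates = [c for c in range(n_chk) if c not in var_nbrs[v]]
            c_star = min(candidates, key=lambda c: len(chk_nbrs[c]))
            chk_nbrs[c_star].add(v)
            var_nbrs[v].add(c_star)
    return var_nbrs

G = peg_construct(n_var=6, n_chk=3, var_degrees=[2] * 6)
```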

QC-LDPC Extension
QC-LDPC extension is uniquely determined by the dimension and shift values of the circulant matrices. Its quasi-cyclic structure makes encoding and decoding more efficient. Compared with randomly constructed LDPC codes, QC-LDPC codes have a lower error floor and are more convenient for storage and hardware implementation. We combine the corresponding positions of the base matrix H_b and the coefficient matrix H_c, and denote this operation by ⊗:

H = H_b ⊗ H_c.

Take a lifting size of 3 as an example: the elements of the base matrix are 0 and 1, and the elements of the coefficient matrix are 1, 2 and 3. The matrix elements are then replaced by cyclic permutation matrices (CPMs): each 0 is replaced by the 3 × 3 zero matrix, and each 1 is replaced by the identity matrix cyclically shifted according to the corresponding coefficient.
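The lifting operation can be sketched as follows. The shift convention (a coefficient k shifts the identity by k − 1 columns) is an assumption for illustration; the paper's exact mapping between coefficients and shifts may differ:

```python
def expand_qc(base, coeff, Z):
    """Lift a base matrix with lifting size Z: a 0 entry becomes the Z x Z
    zero block; a 1 entry with coefficient k becomes the identity matrix
    cyclically shifted by (k - 1) columns (assumed convention)."""
    n_r, n_c = len(base), len(base[0])
    H = [[0] * (n_c * Z) for _ in range(n_r * Z)]
    for i in range(n_r):
        for j in range(n_c):
            if base[i][j]:
                s = (coeff[i][j] - 1) % Z
                for r in range(Z):
                    H[i * Z + r][j * Z + (r + s) % Z] = 1
    return H

# Lifting size 3, as in the example above: one 1-entry with coefficient 2
H = expand_qc(base=[[1, 0]], coeff=[[2, 0]], Z=3)
```

Each lifted block stays a permutation (one 1 per row and column), which is what makes storage and hardware addressing cheap: only the base matrix and the shift values need to be kept.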

Methods of Rate-Compatible
Puncturing is a method that raises the code rate. As shown in Figure 2a, submatrix A holds the information bits and submatrices B and C hold the check bits. The initial code rate is R = L_0/(L_0 + L_1 + L_2). By deleting submatrix C, we obtain an increased code rate of R = L_0/(L_0 + L_1).
On the contrary, extending, as shown in Figure 2b, lowers the code rate. We first construct a check matrix A with the high code rate (N_0 − M_0)/N_0. Then, by appending the submatrices A_1, . . . , A_n, we extend the matrix to make it compatible with lower rates. After n extension steps, each of which appends M_k rows of check bits, the code rate is

R_n = (N_0 − M_0)/(N_0 + Σ_{k=1}^{n} M_k).
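Both rate adjustments amount to simple bookkeeping, sketched below; the extended-rate form assumes each extension step A_k appends M_k rows of check bits, which is our reading of the figure:

```python
def punctured_rate(L0, L1, L2, drop_C=False):
    """R = L0 / (L0 + L1 + L2); deleting submatrix C raises it to
    L0 / (L0 + L1)."""
    return L0 / (L0 + L1) if drop_C else L0 / (L0 + L1 + L2)

def extended_rate(N0, M0, added_rows):
    """Assumed bookkeeping: each extension appends M_k rows of check bits,
    so R_n = (N0 - M0) / (N0 + sum(M_k))."""
    return (N0 - M0) / (N0 + sum(added_rows))
```

For example, starting from a rate-1/2 matrix (N_0 = 4, M_0 = 2) and appending 4 extension rows halves the rate to 1/4.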

Proposed Check Matrix for RC-LDPC Codes with Wide Range of SNRs Regime
From Equation (1) we can see that, for a given SNR, high hardware processing efficiency and high reconciliation efficiency result in a good final secret key rate. A proper degree distribution and a reasonable construction method lead to good error-correction performance.

Obtaining Degree Distribution
We first obtain the initial optimal degree distribution using discrete density evolution and differential evolution, as described in Sections 2.1.1 and 2.1.2. The maximum degree of the variable nodes and the number of terms of the degree distribution polynomial are set to 10 and 4, respectively.
From the initial optimal degree distributions, we find that the degree distribution pairs cluster near λ_3 and λ_7, apart from λ_2 and λ_10. Therefore, we calculate the average values of λ_3 and λ_7 for rates from 0.3 to 1, i.e., SNRs from 0.1 to 3 (the degree distribution also suits SNRs larger than 3, but the maximum rate of 1 corresponds to an SNR of 3). The initial values are the average λ_3 and λ_7, and the maximum variable node degree and the number of polynomial terms are still set to 10 and 4. The difference is that the variable node degree distribution is restricted to λ_2, λ_3, λ_7 and λ_10 instead of a random distribution. We then repeat the above operations to obtain the optimal degree distribution under these conditions.
Through the above operations, we obtain the degree, the maximum degree of the variable node, and the number of terms of the degree distribution polynomial. Ultimately, we calculate the optimal degree distribution for proposing our LDPC code with Algorithm 1.

Algorithm 1
Obtaining the ultimate variable degree distribution with density evolution and differential evolution
Input: target error probability P_e, maximum number of iterations l_max, population size NP = 50, number of terms of the variable node degree distribution polynomial l = 5, the highest power of the variable node degree distribution, λ_3 = 0.0047, λ_7 = 0.5072
Output: error rate P_e_best, vector P_best
1: for i = 1 to NP do
2:   generate vector P_i with λ_2, λ_3, λ_7, λ_8 and λ_10, subject to λ_2 + λ_8 + λ_10 = 0.4881 (cf. Section 2.1.1);
3:   calculate the error probability P_e_i;
4:   if P_e_best > P_e_i then
5:     P_e_best ← P_e_i; P_best ← P_i;
6:   end if
7: end for
8: for j = 1 to l_max do
9:   randomly choose four numbers r_1, r_2, r_3, r_4 from 1 to NP;
10:  v_j = P_best + 0.5(P_{r1} − P_{r2} + P_{r3} − P_{r4});
11:  calculate the error probability P_e_j;
12:  if P_e_best < P_e then
13:    output v_j;
14:  end if
15:  if P_e_best > P_e_j then
16:    P_e_best ← P_e_j;
17:  end if
18: end for

Table 2 shows the result of Algorithm 1, whose input signal X ∼ N(0, 1) and additive white Gaussian noise Z ∼ N(0, σ²) are mutually independent Gaussian random variables. The channel satisfies SNR = 1/σ², where σ represents the maximum allowed noise of the additive white Gaussian noise channel. With the normalization ρ(1) = λ(1) = 1, the check node degree distribution is determined by the constraint r = 1 − (∫₀¹ ρ(x)dx)/(∫₀¹ λ(x)dx). The degree distribution in our scheme notably decreases the difficulty of constructing the check matrix. ¹ The practical rate at 1 is close to, but lower than, 1.
In order to maximize the use of limited key resources, we still need to consider rates lower than 0.1. Obviously, the secret key rate is low there because the mutual information I_AB is low. Therefore, to simplify our work, the degree distribution pairs chosen for rates lower than 0.1 are taken directly from Appendix A [21,22].

Constructing Check Matrix for RC-LDPC Code
With the degree distribution obtained above, we construct a single-matrix RC-LDPC code using the random construction, the PEG algorithm and the QC-LDPC extension mentioned in Section 2. The structure of the check matrix, shown in Figure 3, combines parts A, B and C.
Part A is shared across the rates from 0.1 to 1 and is constructed with λ_3 and λ_7. This structure has the advantage of reducing computational complexity and saving storage resources. Previous work showed that the PEG algorithm performs better at SNR ≈ 3 [23], while random construction performs better at SNR ≈ 1 [24]. Therefore, we construct submatrix A with the PEG algorithm.
Part B is constructed with the rest of the degree distribution to realize the rate-compatible method of puncturing. To further improve the performance of our LDPC code, we construct the check matrix with puncturing in mind. More specifically, we divide submatrix B_n into two parts and construct one part each time R decreases by 0.05; for rates from 0.3 down to 0.1, the step is 0.1. We use the PEG algorithm to construct B_1 to B_5 and random construction for the remaining part. Moreover, part B is a lower triangular matrix, which can be encoded directly.
Multi-edge-type (MET) LDPC codes are employed at low SNRs due to their good error-correction performance, more amenable decoding complexity and rate compatibility at low rates [25]. Based on the check matrix above, we construct part C with degree distributions of MET-LDPC codes from Appendix A for rates from 0.01 to 0.1.

Simulation Experiment
In this section, we summarize the implementation results of the proposed LDPC codes over an unstable channel. Our purpose is to construct an RC-LDPC code with a single matrix that can adapt to SNRs from 0.01 to 15. We show the performance in terms of the reconciliation efficiency β, the hardware processing efficiency α and the FER, all of which are influenced by changes in the SNR. The decoding algorithm is a modified min-sum algorithm.
The reconciliation efficiency comes from β = R/C. Referring to the construction in Section 3.2, we change the check matrix when R falls by a certain amount: when R is between 0.3 and 1, the rate switches at integer multiples of 0.1; when R is between 0.01 and 0.3, at integer multiples of 0.05. In Figure 4, assuming the channel noise is uniformly distributed, the proposed LDPC code has an average reconciliation efficiency β of 91.80%, and for the higher rates from 0.3 to 1 this number is 96.13%. Because the data with rates lower than 0.3 contribute little to the reconciliation efficiency, the practical reconciliation efficiency is close to 96.13%. Compared with existing schemes, the proposed LDPC code has a relatively high reconciliation efficiency.

From Equation (1), the secret key rate is also related to the hardware processing efficiency α, which equals the ratio of D_out to D_in. More specifically, suppose the times used to load the check matrix, load data and decode data are t_lm, t_ld and t_dd, respectively; the check matrix has to be reloaded n times, and m data blocks have to be processed. If the raw key rate that the optical system can provide is M, the number of data blocks is m = M/L. The hardware processing efficiency is then

α = m(t_ld + t_dd) / (n t_lm + m(t_ld + t_dd)).

Because of finite-size effects, the block length in the privacy amplification procedure is at least 10^7, which also takes up abundant hardware resources [26,27], so not all check matrices can be stored in advance. The reconciliation efficiency drops quickly even if the SNR changes over a very small range. Therefore, other schemes have to reload the appropriate check matrix before loading and decoding data whenever the rate exceeds the channel capacity. With our proposed LDPC code, we save the time of reloading the check matrix.
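The trade-off between matrix reloads and throughput can be illustrated with a small model. The formula below is our reading of the α definition (useful load-and-decode time over total time including n reloads), with the timing figures taken from the measurements reported below and m chosen as a hypothetical workload:

```python
def hardware_efficiency(m, n, t_lm, t_ld, t_dd):
    """alpha = m*(t_ld + t_dd) / (n*t_lm + m*(t_ld + t_dd)): time spent
    loading and decoding m data blocks over total time including n
    check-matrix reloads (a model reconstructed from the text)."""
    useful = m * (t_ld + t_dd)
    return useful / (n * t_lm + useful)

# Timings in ms: load data 13.0, decode 211.2, load check matrix 11.1
ours  = hardware_efficiency(m=100, n=1,  t_lm=11.1, t_ld=13.0, t_dd=211.2)
other = hardware_efficiency(m=100, n=50, t_lm=11.1, t_ld=13.0, t_dd=211.2)
```

The single-matrix scheme keeps n near zero, so α stays close to 1, while a multi-matrix scheme pays n·t_lm of reload overhead every time the SNR drifts.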
For a block length of 648,000, the times used to load data and decode data, measured on an Arria 10 FPGA, are 13.0 ms and 211.2 ms, respectively. Furthermore, the average time used to load a check matrix of the ATSC 3.0 LDPC codes is 11.1 ms. From Figure 5, we can see that our work keeps a high hardware processing efficiency α as the number of check matrix changes n increases. Meanwhile, the gap in hardware processing efficiency between our proposed LDPC code and the ATSC 3.0 LDPC code also increases.
The frame error rate is the probability that a data block fails to decode. It is mainly caused by two factors: defects of the error-correcting code and decoding algorithm, and an unsuitable check matrix caused by the changing SNR. The FER caused by the former can be reduced to 3.25 × 10⁻³, which is far lower than the FER caused by the latter [28]. Therefore, we only take the latter into account. It can be seen from Figure 6 that, as the number of check matrix changes increases, our proposed LDPC code has a lower FER than the other scheme.

Given the excess noise, the efficiency of the receiver's detector and the electronic noise at Bob's side, we can calculate the practical secret key rate [29]. Figure 7 compares the practical secret key rates of the proposed LDPC code and the ATSC 3.0 LDPC codes. As can be seen in the graph, our scheme performs better for the same number of check matrix changes N and suffers a smaller performance reduction as N increases. This results from the combined action of the reconciliation efficiency β, the hardware processing efficiency α and the FER.

Conclusions
In this study, we design a rule for constructing a single-matrix RC-LDPC code for SNRs between 0.01 and 15, solving the problems of large variation of quantum channel noise and extremely low SNR. First, we use the discrete density evolution algorithm and differential evolution to acquire good node degree distribution pairs for LDPC codes. Then, with construction methods including the PEG algorithm, random construction and quasi-cyclic extension, together with the rate-compatible methods of extending and puncturing, we propose a convenient and efficient construction method for designing an RC-LDPC code. Taking into account the number of check matrix changes caused by the variation of the SNR, the results show that we achieve a reconciliation efficiency of 91.80%, higher hardware processing efficiency and lower FER. The code performs especially well over an extremely unstable channel.