Article

Information-Dispersion-Entropy-Based Blind Recognition of Binary BCH Codes in Soft Decision Situations

Jing Zhou *, Zhiping Huang, Chunwu Liu, Shaojing Su and Yimeng Zhang
School of Mechatronics Engineering and Automation, National University of Defense Technology, Deya Road, Changsha 410073, Hunan Province, China
* Author to whom correspondence should be addressed.
Entropy 2013, 15(5), 1705-1725; https://doi.org/10.3390/e15051705
Submission received: 18 March 2013 / Revised: 24 April 2013 / Accepted: 1 May 2013 / Published: 8 May 2013

Abstract: A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver that can blindly recognize the coding parameters of the received data stream. The only prior knowledge is that the stream is encoded with a binary BCH code, while the coding parameters are unknown. The problem arises in the context of non-cooperative communications and of adaptive coding and modulation (ACM) for cognitive radio networks. The recognition process includes two major procedures: code length estimation and generator polynomial reconstruction. A hard decision method has been proposed in previous literature. In this paper we propose a recognition approach for soft decision situations with Binary Phase Shift Keying (BPSK) modulation and Additive White Gaussian Noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function, and then we search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and simulations show the efficiency of the proposed algorithm.


1. Introduction

Channel coding, applied to reduce errors in transmission, is an important part of digital communications [1]. Binary Bose-Chaudhuri-Hocquenghem (BCH) codes are widely used for their powerful error-correcting capability [2], convenient encoder design [3], and efficient decoding algorithms [4,5]. In traditional communication systems, the coding parameters are known by both the transmitter and the receiver, but with the development of cognitive radios and intelligent techniques, blind recognition of the channel coding parameters is becoming feasible. Cognitive radio (CR) was introduced in [6] as a smart spectrum sharing technology, and it has become a hot research topic [7,8,9,10]. The adaptive coding and modulation (ACM) technique [11,12,13,14] is an important part of CR for adapting to the channel. In an ACM system, the transmitter chooses optimized coding and modulation parameters according to the channel quality; thus, at reception, the receiver needs to recognize those parameters before demodulation and decoding. Another application field of the blind recognition technique is non-cooperative communications [15,16]. In this case, a non-cooperative receiver does not know the modulation and coding parameters before recognizing them. In future communications, terminals are required to be as intelligent as possible, adapting themselves to a specific context and blindly estimating the transmitter parameters for self-reconfiguration purposes, with knowledge only of the received data stream [17].
To the best of our knowledge, most blind recognition algorithms proposed in the literature focus on convolutional codes. In [18], a Euclidean-algorithm-based method was proposed to identify a 1/2 rate convolutional encoder in noiseless cases; however, it is not suitable for noisy channels. In [19], another approach was presented to identify a 1/n rate convolutional encoder in noisy cases based on the Expectation Maximization algorithm. The authors of [20,21] developed methods for blind recovery of convolutional encoders in turbo code configurations. In [13] and [17], dual code methods for blind identification of k/n rate convolutional codes were proposed for cognitive radio receivers.
In this paper we consider the problem of blind recognition of the coding parameters for a cognitive receiver. The main focus is on the widely used BCH codes. Previous literature [22,23,24] proposed and developed recognition algorithms for BCH codes in hard decision situations. In [22] and [23], the authors proposed a blind recognition algorithm for BCH codes based on Root Information Dispersion Entropy and Root Statistic (RIDERS). This algorithm can achieve correct recognition at a bit error rate (BER) of 10^-2, but it is computationally intensive, especially when the code length is large. The authors of [24] improved the algorithm proposed in [22,23] to reduce the computational complexity, which made the recognition procedure faster.
However, the previous works on blind recognition of BCH codes all address hard decision situations and rely on the algebraic properties of the codes in Galois Fields (GF). To achieve acceptable recognition results, large amounts of training data are always required. With the development of sampling and signal processing techniques, the soft output of the channel has become available. Many soft-decision-based decoding algorithms have been applied to error-correcting codes [25,26,27,28,29,30] and yield better decoding performance than hard decision algorithms. Some blind frame synchronization techniques for error correcting codes also utilize the soft output of the channel to improve the synchronization performance [31,32,33,34]. Inspired by the soft decision decoding algorithms, in this paper we extend the RIDERS algorithm introduced in [22,23,24] to soft decision situations. As an example, we mainly discuss the case of soft decisions with BPSK modulation on AWGN channels.
The remainder of this paper is organized as follows: Section 2 presents the code length and coding starting position estimation approach. Section 3 gives the code roots recognition method. Section 4 discusses the recognition of the primitive polynomial. In Section 5, we compare the computational complexity of the hard decision and soft decision situations. Finally, the simulation results and conclusions are given in Section 6 and Section 7.

2. Code Length Estimation and Blind Synchronization

2.1. Introduction of the Recognition Algorithm in Hard Decision Situations

In cyclic coding theories, the algebraic model of the encoding operation can be expressed as follows:
$c(x) = m(x) \times g(x)$  (1)
or in systematic form [35]:
$c(x) = m(x) \times x^{n-k} + \left( (m(x) \times x^{n-k}) \bmod g(x) \right)$  (2)
Here g(x) is the generator polynomial, m(x) is the input information polynomial and c(x) is the codeword polynomial; all of them are defined over an extension field GF(2^m) (m ≥ 1). In Equation (2), n and k are the lengths of the codeword and of the input information, respectively, and k/n is the code rate. Obviously, in Equations (1) and (2), the roots of g(x) are also roots of c(x). In this paper we define the code roots to be the roots of the generator polynomial g(x). The number of nonzero elements of an extension field GF(2^m) is 2^m − 1; we define the set of these elements as Gm. For a binary BCH code defined in GF(2^m), the possible code roots are limited and form a root set A, which is a subset of Gm. We consider a sequence of M valid codewords C1, C2, …, CM and let cj(x) (1 ≤ j ≤ M) be the codeword polynomial of Cj. Initialize an integer vector [N1, N2, …, N_{2^m−1}] to zeros and find all the roots of each codeword polynomial cj(x) as j increases from 1 to M. If the element α^i (1 ≤ i ≤ 2^m − 1) of GF(2^m) is a root of cj(x), we let Ni = Ni + 1, where α is a primitive element of GF(2^m). Finally, the value of Ni (1 ≤ i ≤ 2^m − 1) reflects the probability of the element α^i being a code root. Note that not all the roots of a valid codeword polynomial c(x) are code roots, but all the code roots must be roots of c(x). The elements that are roots of c(x) but not code roots appear randomly in different codewords, because the information polynomials differ between encoding operations. Accordingly, the probability of being a codeword root is larger for an element in A than for an element in Ā, the complement of A in Gm, so the values {Ni | α^i ∈ A} are generally larger than those corresponding to the elements in Ā. However, this property only holds when the considered data blocks are valid codewords, i.e., when the code length is correct and codeword synchronization is achieved. When the code length or the coding starting positions are not correctly estimated, the property described above disappears and the bits in the assumed codewords can be considered to appear randomly. For this case, the authors of [22,23] proposed the following hypothesis:
Hypothesis 1: When the coding parameters are not correct, the probabilities of the elements in Gm being codeword roots can be assumed to be uniform, i.e., the values of Ni are uniform regardless of whether α^i belongs to A or Ā.
Therefore, in the correct-parameter case, the entropy of the distribution of the roots should be lower than in the incorrect-parameter cases. Based on this fact, the authors of [22,23] introduced a root information dispersion entropy function (RIDEF) to describe the imbalance of the Ni over the different α^i, and took the coding parameters that maximize the RIDEF as the correct ones. The RIDEF is defined as the difference between the entropy of the uniform distribution and the entropy of the distribution of the code roots over all the elements in Gm:
$\Delta H = -\sum_{i=1}^{n} \frac{1}{n} \log \frac{1}{n} - \left( -\sum_{i=1}^{n} p_i \log p_i \right) = \sum_{i=1}^{n} p_i \log p_i + \log n$  (3)
where n = 2^m − 1 is the number of elements in Gm and pi is the probability of α^i being a root of the code, calculated as follows:
$p_i = \frac{N_i}{\sum_{i=1}^{2^m-1} N_i}, \quad 1 \le i \le 2^m - 1$  (4)
In the recognition procedure, we traverse all the possible values of the code length and coding starting positions, and take the values that maximize ΔH as the estimates of the code length and synchronization positions.
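For concreteness, the following Python sketch implements the hard decision root counting of Equations (3) and (4). It is only an illustration under stated assumptions: GF(2^m) elements are represented as integers, the primitive polynomial is supplied as a bit mask, and the helper names (gf_exp_table, gf_mul, ridef_hard) are ours rather than the original authors'.

```python
import math

def gf_exp_table(m, prim_poly):
    """Power table of GF(2^m): exp[i] = alpha^i as an m-bit integer.
    prim_poly is the primitive polynomial as a bit mask (e.g. 0b1011 for x^3 + x + 1)."""
    n = (1 << m) - 1
    exp = [1] * n
    for i in range(1, n):
        v = exp[i - 1] << 1                   # multiply the previous power by alpha
        if v & (1 << m):                      # reduce modulo the primitive polynomial
            v ^= prim_poly
        exp[i] = v
    return exp

def gf_mul(a, b, m, prim_poly):
    """Carry-less multiplication in GF(2^m)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim_poly
    return r

def ridef_hard(codewords, m, prim_poly):
    """Hard decision RIDEF of Equations (3)-(4): count how often each alpha^i is a root
    of the codeword polynomials, then compare the root distribution with the uniform one."""
    n = (1 << m) - 1
    exp = gf_exp_table(m, prim_poly)
    counts = [0] * (n + 1)                    # counts[i] = N_i, 1 <= i <= 2^m - 1
    for cw in codewords:                      # cw: list of bits, highest degree first
        for i in range(1, n + 1):
            a = exp[i % n]                    # alpha^i (alpha^n = 1)
            acc = 0
            for bit in cw:                    # Horner evaluation of c(alpha^i)
                acc = gf_mul(acc, a, m, prim_poly) ^ bit
            if acc == 0:
                counts[i] += 1
    total = sum(counts[1:])
    if total == 0:
        return 0.0
    p = [c / total for c in counts[1:]]       # p_i of Equation (4)
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(n) - entropy              # Delta H of Equation (3)
```

When a window of candidate codewords is sliced with the correct length and starting position, the returned ΔH should be larger than for incorrect parameters.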
The RIDERS algorithm performs well [22,23,24] in the parameter recognition of BCH codes. However, it still has some problems. Firstly, it is restricted to hard decision situations, which limits the recognition performance. Secondly, Hypothesis 1, on which the algorithm is based, is not always true. In the following sections we propose recognition methods inspired by the RIDERS algorithm for soft decision situations; in the Appendix, we prove that Hypothesis 1 does not always hold.

2.2. Principles of the Proposed Recognition Algorithm in Soft Decision Situations

2.2.1. Calculation of pi in Soft Decision Situations

To adapt this algorithm to soft decision cases, we modify Equation (4) to utilize the soft output of the channel. To calculate pi of Equation (4) in soft decision situations, we define the minimal parity check matrix (MPCM) H_min(α^i) corresponding to the element α^i in GF(2^m) as follows:
$H_{\min}(\alpha^i) = \left( (\alpha^i)^{l-1}, (\alpha^i)^{l-2}, \ldots, (\alpha^i)^1, (\alpha^i)^0 \right)$  (5)
where l is the code length. According to coding theory, if α^i is a root of a codeword Cj (1 ≤ j ≤ M), we have:
$H_{\min}(\alpha^i) \times C_j = 0$  (6)
Soft outputs of the channel provide additional information about the reliability of each decision symbol. Instead of verifying whether α^i is a root of the codeword polynomial cj(x), we calculate p_{j,i} (1 ≤ i ≤ 2^m − 1, 1 ≤ j ≤ M), the probability of α^i being a root of cj(x), and compute pi of Equation (4) as follows:
$p_i = \frac{\sum_{j=1}^{M} p_{j,i}}{\sum_{i=1}^{2^m-1} \sum_{j=1}^{M} p_{j,i}}$  (7)
To calculate p_{j,i}, we transform the MPCM defined in Equation (5) into its binary form by replacing the symbol elements of H_min(α^i) with their binary column vector patterns. For example, H_min(α^3), the MPCM corresponding to the element α^3 in GF(2^3) for a BCH code with code length l = 7, is as follows:
$H_{\min}(\alpha^3) = \left( \alpha^{18}, \alpha^{15}, \alpha^{12}, \alpha^{9}, \alpha^{6}, \alpha^{3}, 1 \right)$  (8)
Based on the primitive polynomial p(x) = x^3 + x + 1, since the symbol α is a root of p(x), we have α^3 + α + 1 = 0 and therefore α^3 = α + 1. The other symbols are processed similarly. In this way we can express all the symbols of H_min(α^3) and obtain the binary vector patterns listed in Table 1. Replacing the symbols of H_min(α^3) with these binary patterns, the MPCM can be written in binary form as follows:
$H_{b\min}(\alpha^3) = \begin{pmatrix} 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix}$  (9)
Table 1. Symbols in H_min(α^3) and their binary vector patterns.

Symbols | Polynomial expressions | Vector form
1 | 1 | 001
α | α | 010
α^3 | α + 1 | 011
α^6 | α^2 + 1 | 101
α^9 | α^2 | 100
α^12 | α^2 + α + 1 | 111
α^15 | α | 010
α^18 | α^2 + α | 110
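The binary patterns of Table 1 and the binary MPCM of Equation (9) can be reproduced with the short sketch below. It assumes the bit ordering used in Table 1 (the α^2 coefficient first and the constant term last); the helper names are ours.

```python
def gf8_powers():
    """Powers alpha^0 ... alpha^6 of GF(2^3) with p(x) = x^3 + x + 1, as 3-bit integers
    (bit 2 = coefficient of alpha^2, bit 0 = constant term, matching Table 1)."""
    exp = [1]
    for _ in range(6):
        v = exp[-1] << 1
        if v & 0b1000:                 # use alpha^3 = alpha + 1 to reduce
            v ^= 0b1011
        exp.append(v)
    return exp

def binary_mpcm(i, l, m, exp):
    """Binary MPCM of alpha^i for code length l: column u is the m-bit pattern of
    (alpha^i)^(l-1-u), written top to bottom from the alpha^(m-1) coefficient."""
    n = len(exp)                       # 2^m - 1
    cols = [exp[(i * (l - 1 - u)) % n] for u in range(l)]
    return [[(c >> (m - 1 - row)) & 1 for c in cols] for row in range(m)]

exp = gf8_powers()
for k in (0, 1, 3, 6, 9, 12, 15, 18):
    print(f"alpha^{k} -> {exp[k % 7]:03b}")    # reproduces the vector forms of Table 1
for row in binary_mpcm(3, 7, 3, exp):
    print(row)                                 # binary form of H_min(alpha^3), cf. Equation (9)
```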
Note that the number of rows of the binary MPCM H_bmin(α^i) equals m, the degree of GF(2^m). The syndromes corresponding to the MPCM H_bmin(α^i) and a codeword Cj (1 ≤ j ≤ M) are then calculated in binary form as follows:
$S_{j,i} = [S_{j,1}, S_{j,2}, \ldots, S_{j,m}]^T = H_{b\min}(\alpha^i) \times C_j$  (10)
Let cj(x) be the codeword polynomial of Cj. If the element α^i corresponding to H_bmin(α^i) in Equation (10) is a root of cj(x), then S_{j,i} = 0. The probability of S_{j,i} = 0 can therefore be taken as p_{j,i} in Equation (7) and calculated as the mean of the probabilities of S_{j,k} = 0 (1 ≤ k ≤ m):
$p_{j,i} = \Pr(S_{j,i} = 0) = \frac{1}{m} \sum_{k=1}^{m} \Pr(S_{j,k} = 0)$  (11)
In Equation (11) and in the remainder of the paper, we use Pr(Ψ) to denote the probability of the event Ψ. We assume that the transmitter sends a binary sequence of codewords using BPSK modulation, i.e., +1 and −1 are the modulated symbols for 0 and 1, respectively. The modulation operation from the coded bit c_u to the modulated symbol s_u can be written as:
$s_u = 1 - 2 c_u, \quad u = 1, 2, 3, \ldots$  (12)
We assume that the propagation channel is a binary symmetric channel (BSC) corrupted by AWGN with variance σ_n^2 = N_0/2. For each configuration, the information symbols in the codes are randomly chosen. A soft decision symbol r_u at the reception can be expressed as:
$r_u = s_u + w_u, \quad u = 1, 2, 3, \ldots$  (13)
where (w_1, w_2, w_3, …) is an AWGN sequence. According to [30], it is easy to prove that the probability Pr(S_{j,k} = 0) (1 ≤ k ≤ m) can be calculated as follows:
$\Pr(S_{j,k} = 0) = \frac{1}{2} + \frac{1}{2} \prod_{u=1}^{n_e} \tanh(r_u / \sigma^2), \quad 1 \le k \le m$  (14)
where n_e is the number of ones in the k-th row of the binary MPCM H_bmin(α^i) (1 ≤ i ≤ 2^m − 1), and the product runs over the soft values r_u at the positions of those ones.
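A minimal sketch of Equations (14) and (11) follows, assuming the BPSK mapping of Equation (12) (bit 0 → +1, bit 1 → −1) and taking each MPCM row as a plain list of 0/1 entries; the function names are ours.

```python
import math

def prob_check_zero(soft_bits, row, sigma2):
    """Equation (14): probability that the parity check defined by one row of the binary
    MPCM is satisfied, given soft decisions r_u and noise variance sigma^2.
    `row` is a list of 0/1 entries; the product runs over the positions where the row is 1."""
    prod = 1.0
    for r, h in zip(soft_bits, row):
        if h:
            prod *= math.tanh(r / sigma2)
    return 0.5 + 0.5 * prod

def prob_root(soft_bits, h_rows, sigma2):
    """Equation (11): p_{j,i}, the mean of the per-row check probabilities for the binary
    MPCM of alpha^i applied to one received word."""
    probs = [prob_check_zero(soft_bits, row, sigma2) for row in h_rows]
    return sum(probs) / len(probs)
```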

2.2.2. Adaptive Processing of MPCM

Note that the matrix H_bmin(α^i) is not sparse, so an erroneous decision symbol negatively influences many syndromes. Considering that unreliable decision bits have higher probabilities of being erroneous than reliable decision bits, the authors of [28] proposed an adaptive processing algorithm for the binary parity check matrix to reduce the influence of the unreliable decision bits when decoding RS codes with the belief propagation (BP) algorithm. In this paper, we adopt that idea for the calculation of Pr(S_{j,i} = 0). The detailed adaptive processing steps for a given binary MPCM H_bmin(α^i) and a codeword Cj with code length l are listed below:
Step 1: Combine H_bmin(α^i) and C_j^T to form a new matrix H*(α^i) as follows:
$H^*(\alpha^i) = \begin{bmatrix} r_1 & r_2 & \cdots & r_l \\ h_{11} & h_{12} & \cdots & h_{1l} \\ h_{21} & h_{22} & \cdots & h_{2l} \\ \vdots & \vdots & & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{ml} \end{bmatrix}$  (15)
where r_1, r_2, …, r_l are the soft decision bits of the codeword Cj and {h_{ij} | 1 ≤ i ≤ m, 1 ≤ j ≤ l} are the elements of H_bmin(α^i) in GF(2).
Step 2: Replace each r_u (1 ≤ u ≤ l) in H*(α^i) with its absolute value to form a matrix H_r*(α^i), permute the columns of H_r*(α^i) so that its first row is sorted in ascending order, and record the column indexes. The absolute values of {r_u | 1 ≤ u ≤ l} denote the reliabilities of the received soft decision bits. As shown in Equation (16), |r_{i_1}| ≤ |r_{i_2}| ≤ … ≤ |r_{i_l}|, and i_1, i_2, …, i_l are the column indexes of r_{i_1}, r_{i_2}, …, r_{i_l} in H*(α^i).
$H_r^*(\alpha^i) = \begin{bmatrix} |r_{i_1}| & |r_{i_2}| & \cdots & |r_{i_l}| \\ h_{1 i_1} & h_{1 i_2} & \cdots & h_{1 i_l} \\ h_{2 i_1} & h_{2 i_2} & \cdots & h_{2 i_l} \\ \vdots & \vdots & & \vdots \\ h_{m i_1} & h_{m i_2} & \cdots & h_{m i_l} \end{bmatrix}$  (16)
Step 3: Transform H_r*(α^i) by elementary row operations so that the last m elements of its first column contain a single "1", located at the top, as shown in Equation (17). The first row does not take part in the elementary transformation.
$H_r^*(\alpha^i) = \begin{bmatrix} |r_{i_1}| & |r_{i_2}| & \cdots & |r_{i_l}| \\ 1 & x & \cdots & x \\ 0 & x & \cdots & x \\ \vdots & \vdots & & \vdots \\ 0 & x & \cdots & x \end{bmatrix}$  (17)
This transformation limits the influence of the most unreliable decision bit to a single syndrome element, namely S_{j,1} in Equation (10). We then continue the elementary transformation of H_r*(α^i) to limit the number of "1"s in each of the following columns to one (excluding the first row), as shown in Equation (18):
$H_r^*(\alpha^i) = \begin{bmatrix} |r_{i_1}| & |r_{i_2}| & |r_{i_3}| & |r_{i_4}| & \cdots & |r_{i_l}| \\ 1 & 0 & 0 & x & \cdots & x \\ 0 & 1 & 0 & x & \cdots & x \\ 0 & 0 & 1 & x & \cdots & x \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & x & \cdots & x \end{bmatrix}$  (18)
When no further elementary transformation is possible, we stop; the number of available transformation steps equals the rank of H_bmin(α^i). The last m rows of H_r*(α^i) then form a new matrix; we restore its original column order and call the result H_bmin_a(α^i). Because the transformations are elementary, the hard decision relationship H_bmin_a(α^i) × Cj = 0 still holds if Cj is a valid codeword, so we can calculate the probability Pr(S_{j,k} = 0) defined in Equation (14) from H_bmin_a(α^i). This replacement reduces the influence of the unreliable decision bits.
If rank(H_bmin_a(α^i)) = m, the left m × m block of H_bmin_a(α^i) forms an identity matrix. However, if rank(H_bmin_a(α^i)) < m, H_bmin_a(α^i) takes the following form after the elementary transformations:
$H_{b\min\_a}(\alpha^i) = \begin{bmatrix} 1 & 0 & \cdots & 0 & x & \cdots & x \\ 0 & 1 & \cdots & 0 & x & \cdots & x \\ \vdots & & \ddots & & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & x & \cdots & x \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & & & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \end{bmatrix}$  (19)
In this case, only the first p rows are non-zero, where p = rank(H_bmin_a(α^i)). The last m − p rows contribute nothing to the calculation of Pr(S_{j,i} = 0) in Equation (11), so we modify Equation (11) to Equation (20):
$p_{j,i} = \Pr(S_{j,i} = 0) = \frac{1}{\operatorname{rank}(H_{b\min\_a}(\alpha^i))} \sum_{k=1}^{\operatorname{rank}(H_{b\min\_a}(\alpha^i))} \Pr(S_{j,k} = 0)$  (20)
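The adaptive processing of Steps 1–3 can be sketched as follows, under the assumption that the elementary operations are realized as ordinary Gaussian elimination over GF(2) applied to the reliability-ordered columns; the function name adapt_mpcm is ours and the code is illustrative rather than the authors' implementation.

```python
def adapt_mpcm(h_rows, soft_bits):
    """Adaptive processing of Steps 1-3: order the columns by reliability |r_u| (least
    reliable first), reduce the binary MPCM by elementary row operations over GF(2) so
    that each of the first rank(H) unreliable columns contains a single one, then restore
    the original column order.  Returns the adapted rows and the rank."""
    m, l = len(h_rows), len(soft_bits)
    order = sorted(range(l), key=lambda u: abs(soft_bits[u]))   # least reliable first
    rows = [[h_rows[k][u] for u in order] for k in range(m)]    # column-permuted copy
    pivot_row = 0
    for col in range(l):
        if pivot_row == m:
            break
        pivot = next((k for k in range(pivot_row, m) if rows[k][col]), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for k in range(m):                                      # clear the column elsewhere
            if k != pivot_row and rows[k][col]:
                rows[k] = [a ^ b for a, b in zip(rows[k], rows[pivot_row])]
        pivot_row += 1                                          # ends at rank(H_bmin)
    adapted = [[0] * l for _ in range(m)]
    for k in range(m):                                          # undo the column permutation
        for pos, u in enumerate(order):
            adapted[k][u] = rows[k][pos]
    return adapted, pivot_row
```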
Now, based on Equation (20), Equation (14) and Equation (7), we can calculate the RIDEF defined in Equation (3). However, one problem remains. The element α^{2^m−1} = 1 is a root of a codeword polynomial if and only if the codeword weight is even, which severely affects the assumed uniformity of pi. As shown in Figure 1, in the recognition of BCH(63, 51), when we assume the code length is 31, the value of p_31 is obviously larger than the other p values, even though the real code length is not 31 and α^31 = 1 is not a root of the code.
Figure 1. The problem of p_31.
To avoid this, we omit the calculation of the probability Pr(S_{j,i} = 0) for the element α^{2^m−1} = 1 when recognizing the code length, and modify Equation (3) and Equation (7) to Equation (21) and Equation (22), respectively:
$\Delta H = \sum_{i=1}^{2^m-2} p_i \log p_i + \log(2^m - 2)$  (21)
$p_i = \frac{\sum_{j=1}^{M} p_{j,i}}{\sum_{i=1}^{2^m-2} \sum_{j=1}^{M} p_{j,i}}$  (22)
Furthermore, not all rows of the adapted parity check matrix H_bmin_a(α^i) have the same value of n_e in Equation (14), which affects the comparability of Pr(S_{j,k} = 0) among different rows. For normalization, we therefore modify Equation (14) to:
$\Pr[S_{j,k} = 0] = \frac{1}{2} + \frac{1}{2} \operatorname{sign}\!\left[ \prod_{u=1}^{n_e} \tanh(r_u/\sigma^2) \right] \left| \prod_{u=1}^{n_e} \tanh(r_u/\sigma^2) \right|^{\frac{1}{n_e}} = \frac{1}{2} + \frac{1}{2} \prod_{u=1}^{n_e} \operatorname{sign}(r_u) \times \left| \prod_{u=1}^{n_e} \tanh(r_u/\sigma^2) \right|^{\frac{1}{n_e}}$  (23)
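A sketch of Equation (23) follows; compared with Equation (14), the magnitude is replaced by the geometric mean of the |tanh| terms so that rows with different n_e remain comparable. The small constant added inside the logarithm is only a numerical guard and is our addition.

```python
import math

def prob_check_zero_normalized(soft_bits, row, sigma2):
    """Equation (23): the same check probability as Equation (14), but with the magnitude
    replaced by the geometric mean of the |tanh| terms so that rows with different numbers
    of ones n_e remain comparable; the sign is the product of the signs of the r_u."""
    sign, log_mag, n_e = 1.0, 0.0, 0
    for r, h in zip(soft_bits, row):
        if h:
            n_e += 1
            sign *= 1.0 if r >= 0 else -1.0
            log_mag += math.log(abs(math.tanh(r / sigma2)) + 1e-300)  # numerical guard
    if n_e == 0:
        return 0.5
    return 0.5 + 0.5 * sign * math.exp(log_mag / n_e)
```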

2.2.3. Summary of the Recognition Steps

Based on Equations (20)–(23), we organize the code length estimation and blind synchronization into the following steps, which are convenient for automatic processing by a computer program:
Step 1: According to prior information, set the search range of the degree m, i.e., set the minimal and maximal degrees m_min and m_max.
Step 2: Design an observation window W whose length L is at least 5 × (2^{m_max} − 1), i.e., M ≥ 5 in Equation (22).
Step 3: Fill the window W with the received soft decision bits.
Step 4: Set the initial degree m = m_min.
Step 5: Set the initial code length l = 2^{m−1}.
Step 6: Set the initial synchronization position t to 0, the starting position of W.
Step 7: Assume the code length is l and the synchronization position is t, and calculate ΔH. Since the window W contains more than one assumed codeword, we calculate ΔH using all the assumed codewords and take the mean as ΔH(l, t).
Step 8: If t < l, then let t = t + 1 and go back to Step 7; if t = l, then jump to Step 9.
Step 9: If l < 2^m − 1, then let l = l + 1 and go back to Step 6; if l = 2^m − 1, then jump to Step 10.
Step 10: If m < m_max, then let m = m + 1 and go back to Step 5; if m = m_max, then jump to Step 11.
Step 11: Compare all the calculated ΔH(l, t), select the maximum, and take the corresponding values of l, t and m as the estimated code length, synchronization position and degree of the Galois field of the code being recognized, respectively.
By traversing all possible l and t, we finally obtain the parameter pair (l̂, t̂) that maximizes the RIDEF ΔH; l̂ is the estimated code length and t̂ + k·l̂ (k ∈ ℕ) are the estimated synchronization positions.
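A sketch of the resulting search loop is given below. It assumes that, for each degree m, the candidate lengths run from 2^{m−1} to 2^m − 1 (consistent with Steps 5 and 9) and that a callable ridef implementing Equations (21) and (22) is available; the function name search_parameters is ours.

```python
import math

def search_parameters(soft_bits, m_min, m_max, ridef):
    """Exhaustive search of Steps 4-11 over the field degree m, the code length l and the
    synchronization position t, given a window of soft decision bits (Steps 1-3).
    `ridef` is a callable implementing Delta H of Equations (21)-(22) for a list of
    assumed codewords."""
    best_score, best_params = -math.inf, None
    for m in range(m_min, m_max + 1):
        for l in range(2 ** (m - 1), 2 ** m):          # candidate lengths for degree m
            for t in range(l):                         # candidate starting positions
                # slice the window into assumed codewords of length l starting at t
                words = [soft_bits[s:s + l]
                         for s in range(t, len(soft_bits) - l + 1, l)]
                if len(words) < 5:                     # M >= 5, as suggested in Step 2
                    continue
                score = ridef(words, m)
                if score > best_score:
                    best_score, best_params = score, (m, l, t)
    return best_params                                 # estimated (m, l, t)
```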

3. Code Roots Recognition and Generator Polynomial Reconstruction

3.1. Principles of the Code Roots Recognition

As mentioned in Section 2, for a given valid codeword Cj, the elements in A have higher probabilities of being roots of Cj than the elements in Ā, where A is the set of roots of the generator polynomial. After the code length and synchronization position have been estimated, we can therefore compare the Log-Likelihood Ratios (LLR) of Pr(S_i = 0) for the elements of GF(2^m) from α^1 to α^{2^m−2} and choose the elements whose LLRs are obviously higher as the estimated code roots. Finally, we propose a method to judge whether the element α^{2^m−1} = 1 is a root of the code.
In Equation (24), we define the LLR of Pr(S_i = 0), also written as L(α^i), as follows:
$L(\alpha^i) = L[\Pr(S_i = 0)] = \sum_{j=1}^{M} L[\Pr(S_{j,i} = 0)] = \sum_{j=1}^{M} \left[ \frac{1}{\operatorname{rank}(H_{b\min}(\alpha^i))} \sum_{k=1}^{\operatorname{rank}(H_{b\min}(\alpha^i))} L(S_{j,k} = 0) \right]$  (24)
where:
$L(S_{j,k} = 0) = \log \frac{\Pr(S_{j,k} = 0)}{\Pr(S_{j,k} \ne 0)} = 2 \operatorname{artanh}\!\left[ \prod_{u=1}^{n_e} \operatorname{sign}(r_u) \times \left| \prod_{u=1}^{n_e} \tanh(r_u/\sigma^2) \right|^{\frac{1}{n_e}} \right]$  (25)
However, "obviously higher" is not a criterion a computer can apply directly. To realize automatic recognition of the code roots, we propose a procedure consisting of the following steps (a code sketch of the procedure follows the list):
Step 1: Let l be the estimated code length and calculate the LLRs to form a vector L_l = [L(α), L(α^2), …, L(α^{2^m−2})].
Step 2: Sort the vector L_l in ascending order to form a new vector L_l^R and record the indexes.
Step 3: Calculate dL, the difference sequence of L_l^R: dL(i) = L_l^R(i + 1) − L_l^R(i) (1 ≤ i ≤ l − 1).
Step 4: Find the maximum of dL and record the corresponding value of i.
Step 5: Select the (i + 1)-th to the l-th elements of L_l^R, obtain their positions in the vector L_l, and take the corresponding GF elements {α^{j_1}, α^{j_2}, …} as the estimated roots.
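The following sketch illustrates Steps 1–5, assuming the LLRs L(α^1), …, L(α^{2^m−2}) have already been computed with Equation (24); the function name select_roots is ours.

```python
def select_roots(llrs):
    """Steps 1-5: sort the LLRs L(alpha^1) ... L(alpha^(2^m-2)), locate the largest jump
    in the sorted sequence, and keep every element above it.  `llrs[i]` holds
    L(alpha^(i+1)); the function returns the exponents of the estimated code roots."""
    order = sorted(range(len(llrs)), key=lambda i: llrs[i])       # ascending LLR order
    gaps = [llrs[order[k + 1]] - llrs[order[k]] for k in range(len(order) - 1)]
    cut = max(range(len(gaps)), key=lambda k: gaps[k])            # index of the largest gap
    return sorted(order[k] + 1 for k in range(cut + 1, len(order)))
```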
As an example, we consider a BCH(63, 51) code corrupted by AWGN with SNR Es/N0 = 5 dB; the corresponding hard decision BER is 10^-2.19. The recognition procedure is shown in Figure 2.
Figure 2. Code roots recognition of BCH(63, 51). (a) Original LLRs. (b) The vector L_l sorted to form L_l^R. (c) dL: the difference sequence of L_l^R. (d) L(α^{2^m−1}) inserted into L_l.
Figure 2a shows the original LLRs calculated in Step 1. Sorting the original LLRs in ascending order according to Step 2, we obtain L_l^R as shown in Figure 2b. The change between L_l^R(50) and L_l^R(51) is the largest, i.e., the difference dL(50) at i = 50 in Figure 2c is the largest. Thus, we take L_l^R(i) (i > 50) as the LLRs of the generator polynomial roots, which correspond to α^1, α^3 and their conjugate roots.

3.2. Discussion of the Element α^{2^m−1}

Up to now, we have estimated the code roots from the set {α^1, α^2, …, α^{2^m−2}}, while the element α^{2^m−1} has been ignored. To verify whether the element α^{2^m−1} = 1 is a root of the code, one method is to calculate the LLR L(α^{2^m−1}) from its MPCM:
$H_{\min}(\alpha^{2^m-1}) = \left( (\alpha^{2^m-1})^{l-1}, (\alpha^{2^m-1})^{l-2}, \ldots, (\alpha^{2^m-1})^1, 1 \right)$  (26)
But obviously H_min(α^{2^m−1}) = H_bmin(α^{2^m−1}) = H_bmin_a(α^{2^m−1}) = (1 1 ⋯ 1), because α^{2^m−1} = 1. This all-ones MPCM provides very little information for the calculation of the LLR. We therefore propose to create a new MPCM for the element α^{2^m−1} simply by taking the NOT of the minimal check matrix of one of the estimated roots. To obtain high reliability, we choose the root with the highest LLR.
The parity check matrix of a code that has k + 1 roots, including α^{2^m−1}, has the form shown in Equation (27):
$H = \begin{pmatrix} \alpha_1^{n-1} & \alpha_1^{n-2} & \cdots & \alpha_1 & 1 \\ \alpha_2^{n-1} & \alpha_2^{n-2} & \cdots & \alpha_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ \alpha_k^{n-1} & \alpha_k^{n-2} & \cdots & \alpha_k & 1 \\ 1 & 1 & \cdots & 1 & 1 \end{pmatrix}$  (27)
We choose one of its first k rows together with its last row to form a new matrix H′ as follows:
$H' = \begin{pmatrix} \alpha_i^{n-1} & \alpha_i^{n-2} & \cdots & \alpha_i & 1 \\ 1 & 1 & \cdots & 1 & 1 \end{pmatrix}$  (28)
Then we transform H′ into its binary pattern:
$H'_{bin} = \begin{pmatrix} x & x & \cdots & x \\ \vdots & \vdots & & \vdots \\ x & x & \cdots & x \\ 1 & 1 & \cdots & 1 \end{pmatrix}$  (29)
We define H′_bin_f to be any single row of H′_bin except the last row, and H′_bin_l to be the last row. For each valid codeword C, we have H′_bin × C = 0; thus (H′_bin_f ⊕ H′_bin_l) × C = 0. Because all the elements of the last row of H′_bin are 1, the XOR with the last row is equivalent to the NOT operation. Therefore, NOT(H′_bin_f) × C = 0. As a result, the NOT of any row of the MPCM of one of the estimated roots is still a valid MPCM when the code has α^{2^m−1} as a root.
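This argument can be checked numerically with the toy example below; the codeword and the row h_f are hypothetical values chosen only so that the stated conditions (even codeword weight, h_f × C = 0) hold.

```python
def is_check(row, codeword):
    """True if the GF(2) inner product of `row` and `codeword` is zero."""
    return sum(r & c for r, c in zip(row, codeword)) % 2 == 0

# hypothetical toy values: an even-weight codeword (so the all-ones row is a valid check)
# and a row h_f that also checks it
codeword = [1, 0, 1, 1, 0, 1, 0]
h_f      = [1, 1, 0, 1, 0, 0, 0]
h_ones   = [1] * 7
assert is_check(h_f, codeword) and is_check(h_ones, codeword)
h_not = [1 - b for b in h_f]               # NOT(h_f) = h_f XOR all-ones
assert is_check(h_not, codeword)           # still a valid check, as argued above
```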
Based on the new MPCM, we calculate the LLR of α^{2^m−1} and insert it into the vector L_l defined in Step 1. We then re-sort the LLRs and estimate the code roots according to the previous steps. Finally, we can write the generator polynomial from all the estimated code roots as:
$g(x) = (x - \alpha_1)(x - \alpha_2) \cdots (x - \alpha_p)$  (30)
where α_1, α_2, …, α_p are the estimated code roots. In the example of the recognition of the BCH(63, 51) code shown in Figure 2, we insert L(α^{2^m−1}) into L_l and plot the stems of the resulting LLRs in Figure 2d. Re-executing the code roots recognition steps, it is easy to verify that the element α^{2^m−1} is not a root of the code, so the situation of Figure 1 does not arise in this algorithm. The generator polynomial is thus:
$g(x) = (x - \alpha)(x - \alpha^2)(x - \alpha^4)(x - \alpha^8)(x - \alpha^{16})(x - \alpha^{32})(x - \alpha^3)(x - \alpha^6)(x - \alpha^{12})(x - \alpha^{24})(x - \alpha^{48})(x - \alpha^{33}) = x^{12} + x^{10} + x^8 + x^5 + x^4 + x^3 + 1$  (31)
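To cross-check the expanded form of Equation (31), the sketch below rebuilds the two minimal polynomials from the conjugate classes of α and α^3 over GF(2^6) with p_1(x) = x^6 + x + 1 and multiplies them over GF(2); the helper names are ours and the expected output is the exponent set {0, 3, 4, 5, 8, 10, 12}.

```python
def gf_mul(a, b, m=6, prim=0b1000011):          # GF(2^6) with p1(x) = x^6 + x + 1
    """Carry-less multiplication in GF(2^m) with reduction by the primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def min_poly(beta):
    """Minimal polynomial of beta over GF(2): the product of (x + c) over the conjugates
    c = beta, beta^2, beta^4, ...; returned as GF(2) coefficients, lowest degree first."""
    conj, b = [], beta
    while b not in conj:
        conj.append(b)
        b = gf_mul(b, b)
    poly = [1]
    for c in conj:                              # multiply poly by (x + c) over GF(2^6)
        poly = [(poly[i - 1] if i > 0 else 0) ^
                gf_mul(c, poly[i] if i < len(poly) else 0)
                for i in range(len(poly) + 1)]
    assert all(coef in (0, 1) for coef in poly) # the coefficients must lie in GF(2)
    return poly

def poly_mul_gf2(a, b):
    """Product of two GF(2) polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

alpha = 0b000010                                # alpha = x is a root of p1(x)
m1 = min_poly(alpha)                            # minimal polynomial of alpha
m3 = min_poly(gf_mul(gf_mul(alpha, alpha), alpha))   # minimal polynomial of alpha^3
g = poly_mul_gf2(m1, m3)
print([d for d, coef in enumerate(g) if coef])  # -> [0, 3, 4, 5, 8, 10, 12]
```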

4. Primitive Polynomial Recognition

In Section 2 and Section 3, the primitive polynomial of the code being recognized is not considered. In fact, the corresponding primitive polynomial must be specified when working in an extension field GF(2^m), because there is more than one primitive polynomial for GF(2^m) and the calculation rules based on different primitive polynomials are not the same. However, we propose a theorem stating that a binary cyclic codeword encoded with one primitive polynomial is also a valid binary cyclic codeword under any other primitive polynomial. Accordingly, we can choose any primitive polynomial p(x) and estimate the code length and code roots based on p(x), and then recognize the actual primitive polynomial from the root properties of the BCH codes.
Theorem 1: A binary cyclic codeword C_r, encoded based on a primitive polynomial p_1(x) over GF(2^m), is also a valid binary cyclic codeword based on any other primitive polynomial p_2(x) over GF(2^m), with the same number of code roots.
Proof. The coefficients of the generator polynomial g(x) of a binary cyclic code are in GF(2), and g(x) can be factored into a product of minimal polynomials over GF(2):
$g(x) = m_1(x) m_2(x) \cdots m_p(x)$  (32)
Let GF_1(2^m) and GF_2(2^m) be the extension fields based on two different primitive polynomials p_1(x) and p_2(x), respectively; then GF_1(2^m) and GF_2(2^m) have the same structure. Each minimal polynomial m_i(x) (1 ≤ i ≤ p) can be factored in both extension fields GF_1(2^m) and GF_2(2^m). We define α and β to be roots of p_1(x) and p_2(x), respectively. According to Theorem 2.18 in [36], let e_1 and e_2 be the smallest integers such that α^{2^{e_1}} = α and β^{2^{e_2}} = β, respectively; then we have:
$m_i(x) = \prod_{j=0}^{e_1-1} (x + \alpha^{2^j}) = \prod_{j=0}^{e_2-1} (x + \beta^{2^j})$  (33)
Since a minimal polynomial has only distinct roots, Equation (33) implies e_1 = e_2, i.e., each minimal polynomial has the same number of conjugate roots even when it is factored over extension fields built on different primitive polynomials. Consequently, a codeword based on the primitive polynomial p_1(x) is also a valid codeword based on any other primitive polynomial p_2(x), and the number of code roots and the error-correcting capability are the same. Therefore, when recognizing the parameters of a binary cyclic code, we can provisionally choose any primitive polynomial. To reduce the computational complexity, we recommend choosing the primitive polynomial with the smallest number of terms.
According to the basic character of BCH codes, a generator polynomial has 2t roots with consecutive exponents (these 2t roots do not constitute all the code roots, but the remaining code roots are their conjugates), where t is the error-correction capability of the code. In other words, if α is a primitive element of GF(2^m), the generator polynomial g(x) of a BCH code correcting t errors has α, α^2, α^3, …, α^{2t} among its roots [3]:
$g(\alpha^i) = 0, \quad i = 1, 2, \ldots, 2t$  (34)
After estimating the code roots based on a randomly chosen primitive polynomial p(x) and the estimated degree m, we can obtain the number of roots and the correction capability t. We then traverse all the primitive polynomials over GF(2^m) and take the one under which the code roots accord with the above character of BCH codes as the primitive polynomial of the code being recognized.
As an example, we again consider the BCH(63, 51) code of Section 3. The codewords are encoded based on the primitive polynomial p_1(x) = x^6 + x + 1, and the code roots are α, α^2, α^4, α^8, α^16, α^32, α^3, α^6, α^12, α^24, α^48 and α^33; the first six and the last six form two groups of conjugate roots. Here α is a primitive element of GF(2^6) based on the primitive polynomial p_1(x), i.e., p_1(α) = 0. In the recognition procedure, however, we perform the calculations over GF(2^6) with another primitive polynomial p_2(x) = x^6 + x^5 + 1 and let the symbol β be a root of p_2(x). As a result, after the recognition we obtain the code roots β^15, β^30, β^60, β^57, β^51, β^39, β^31, β^62, β^61, β^59, β^55 and β^47, as shown in Figure 3. It is easy to verify that the estimated code roots also form two groups of conjugate roots, so the error-correction capability t equals two. Now we traverse all the other primitive polynomials and find the one under which the code roots accord with the characters of BCH codes.
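One concrete way to carry out this traversal, sketched below as our own interpretation rather than the authors' implementation, works with root exponents instead of rebuilding every field: each candidate primitive element is a power of the provisional one, so the traversal amounts to multiplying the estimated exponents by a unit modulo 2^m − 1 and checking the consecutive-root property of Equation (34). Several multipliers are equivalent; any of them exposes the same consecutive-root structure.

```python
from math import gcd

def bch_consistency(exponents, n):
    """Every primitive element of the field is a power beta^u of the provisional one with
    gcd(u, n) = 1, and re-expressing the estimated roots beta^e in terms of beta^u
    multiplies their exponents by w = u^{-1} mod n.  We search for the multiplier w under
    which the remapped exponent set contains the consecutive run 1, 2, ..., 2t required
    by Equation (34), for the largest t."""
    best_t, best_w = 0, None
    for w in range(1, n):
        if gcd(w, n) != 1:
            continue
        remapped = {(w * e) % n for e in exponents}
        t = 0
        while (2 * t + 1) in remapped and (2 * t + 2) in remapped:
            t += 1
        if t > best_t:
            best_t, best_w = t, w
    return best_t, best_w

# exponents of the roots found in Figure 3 (with respect to the provisional element beta)
exps = [15, 30, 60, 57, 51, 39, 31, 62, 61, 59, 55, 47]
print(bch_consistency(exps, 63))   # -> (2, 31): the consecutive roots 1, 2, 3, 4 are recovered
```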
Figure 3. Recognizing the BCH(63, 51) codes using a primitive polynomial different from that of the encoder.

5. Computational Complexity

In the proposed soft decision recognition algorithm, the most complex computation is the calculation of pi. The major computational cost lies in Equation (23), which involves n_e-th root calculations, multiplications and tanh evaluations over the real numbers, while the hard decision algorithm only requires multiplications and additions over GF(2). However, our proposed algorithm utilizes the soft decision outputs of the channel, which provide more information about the reliability of the decision bits, so far fewer evaluations of p_{j,i} are required than in the hard decision case, which also reduces the total computational complexity of the recognition procedure.

6. Simulations

This section presents the simulation results of the proposed blind recognition algorithm. In the simulations, we assume that the propagation channel is a binary symmetric channel (BSC) corrupted by AWGN with variance σ_n^2 = N_0/2. For each configuration, the information symbols in the codes are randomly chosen and the modulation is BPSK. All the simulations use the same observation window of length L = 3,000 bits, and the search range of the code length is 7–127.
When applying the algorithm to BCH(63, 45) codes, the simulation results of the code length estimation are shown in Figure 4, Figure 5 and Figure 6. The signal is corrupted by AWGN with SNR Es/N0 = 5 dB. In Figure 4, we plot the stems of pi for the different elements α^i of GF(2^m) when the code length and synchronization positions are correct. Figure 4 shows that the values of pi on the code roots are obviously higher than those on the other elements. We also plot the stems for an incorrect code length and synchronization position in Figure 5.
Figure 4. Values of pi on correct code length and synchronization positions.
Compared with Figure 4, the values of pi in Figure 5 are nearly uniform, so the information entropy of pi for the correct coding parameters is lower and the corresponding dispersion entropy function is higher. In Figure 6, we plot the stems of the RIDEF for different code lengths l and coding starting positions t when the start position t = 0 of the observation window falls at the fortieth bit of a codeword. In the figure, the value of the RIDEF for l = 63 and t = 23 is the highest, so we take l = 63 and t = 23 + 63k (k ∈ ℕ) as the estimated code length and synchronization positions. The result is in accordance with the simulation settings.
Figure 5. Values of pi on incorrect code length and synchronization positions.
Figure 6. Code length and synchronization positions estimation of BCH(63, 45) codes.
The performance of the algorithm is affected by the channel quality. In Figure 7, we show the performance of the proposed algorithm for code length and synchronization position recognition of several binary BCH codes, including shortened codes. The curves depict the false recognition probabilities (FRP) of the code length and coding starting position estimation at different SNRs. We also compare the performance of our proposed recognition algorithm with the hard-decision-based RIDERS algorithm proposed in [22,23,24]. The FRP of our proposed algorithm falls rapidly as the SNR increases, and it is much lower than that of the previous hard decision algorithms at every SNR value.
After the code length and synchronization position estimation, the generator polynomial can be recognized by searching for the code roots according to the steps proposed in Section 3. Figure 8 shows the performance of the proposed generator polynomial recognition algorithm applied to several binary BCH codes, namely BCH(63, 51), BCH(63, 39) and BCH(30, 20). The curves show the false recognition probabilities at different noise levels. As Es/N0 rises, the curves fall rapidly. When Es/N0 is above 5 dB, no false recognition occurred in our 200,000 simulation runs. We also compare our proposed algorithm with the previous hard-decision-based recognition algorithms of [22,23,24] in the figure. The recognition performance is obviously improved in soft decision situations; a gap of 1–2 dB exists between the two groups of curves.
Figure 7. Performances of code length estimations for some binary BCH codes.
Figure 8. Performances of code roots estimations for some binary BCH codes.

7. Conclusions

A soft-decision-based blind recognition method for binary BCH codes with BPSK modulation on AWGN channels is proposed, aimed at non-cooperative communications and ACM techniques. Code length estimation and block synchronization are achieved by checking the minimal parity check matrices. After that, the code rate and the generator polynomial are reconstructed by searching for the code roots. To the best of our knowledge, this paper is the first in the literature to introduce an approach for completely blind recognition of binary BCH codes in soft decision situations. Simulations show that the proposed blind recognition algorithm yields better performance than the previous hard-decision-based ones.

Appendix: Proof of the Faultiness of Hypothesis 1

In this Appendix, we show that Hypothesis 1, proposed in [22,23,24], is not always correct. The proof is given below:
Proof. Consider a codeword C of length n and the binary pattern of one of the MPCMs, H_bmin(α^i), where α^i (1 ≤ i ≤ 2^m − 1) is an element of Gm. Let c(x) be the codeword polynomial of C. If α^i is a root of c(x), then c(α^i) = 0 and H_bmin(α^i) × C = 0. There are m rows in H_bmin(α^i); we define h_j (1 ≤ j ≤ m) to be the j-th row of H_bmin(α^i). The equation H_bmin(α^i) × C = 0 then means that the products of all the rows with the codeword C are zero, as shown in Equation (35):
$H_{b\min}(\alpha^i) \times C = 0 \iff \begin{cases} h_1 \times C = 0 \\ h_2 \times C = 0 \\ \vdots \\ h_m \times C = 0 \end{cases}$  (35)
So we can calculate the probability of α^i being a root of c(x), i.e., the probability of H_bmin(α^i) × C = 0, as follows:
$\Pr[H_{b\min}(\alpha^i) \times C = 0] = \Pr(h_1 \times C = 0,\ h_2 \times C = 0,\ \ldots,\ h_m \times C = 0)$  (36)
Let h_{jl} (1 ≤ l ≤ n) and C_l be the l-th elements of the vectors h_j and C, respectively, and define the checking index set S_j of h_j as follows:
$S_j = \{ C_l \mid h_{jl} = 1 \}$  (37)
Obviously, when the number of nonzero elements in S_j is even, we have:
$h_j \times C = 0$  (38)
and when the number of nonzero elements in S_j is odd, we have:
$h_j \times C = 1$  (39)
When the code length and synchronization positions are not estimated correctly, the parity restrictions among the elements of C do not exist, so the elements of the assumed codewords can be considered to appear randomly. In this case, the probabilities of the number of nonzero elements in S_j being odd and being even are both about 0.5. When H_bmin(α^i) has full rank, its rows are linearly independent, so Equation (36) can be calculated as follows:
$\Pr[H_{b\min}(\alpha^i) \times C = 0] = \prod_{j=1}^{m} \Pr(h_j \times C = 0) = (0.5)^m$  (40)
But if H_bmin(α^i) is not of full rank, this calculation of Equation (36) is no longer correct. We define the maximum linearly independent vector group MI of the row vector set H = {h_j | 1 ≤ j ≤ m} as follows:
MI is a subset of H that meets the following conditions:
(1) the vectors in MI are linearly independent;
(2) any vector in H can be obtained as a linear combination of the vectors in MI.
It is easy to prove that the number of vectors in MI is the rank of H_bmin(α^i).
According to condition (2) of the definition of MI, if all the vectors in {h_j | h_j ∈ MI} satisfy h_j × C = 0, then all the vectors in {h_j | h_j ∈ H} satisfy h_j × C = 0, so the calculation of Equation (36) should be:
$\Pr[H_{b\min}(\alpha^i) \times C = 0] = \prod_{t=1}^{\operatorname{rank}(H_{b\min}(\alpha^i))} \Pr(h_{j_t} \times C = 0) = (0.5)^{\operatorname{rank}(H_{b\min}(\alpha^i))}$  (41)
where {h_{j_t} | 1 ≤ t ≤ rank(H_bmin(α^i))} are the vectors in MI, i.e., a maximum linearly independent vector group of the rows of H_bmin(α^i).
According to Equation (41), Hypothesis 1 is true only if all the matrices H_bmin(α^i), 1 ≤ i ≤ 2^m − 1, have the same rank. Unfortunately, this condition is not always met. For example, for the BCH(63, 51) codes we have:
$\operatorname{rank}(H_{b\min}(\alpha^1)) = 6, \quad \operatorname{rank}(H_{b\min}(\alpha^9)) = 3, \quad \operatorname{rank}(H_{b\min}(\alpha^{63})) = 1$  (42)
Therefore, we have:
$\Pr[H_{b\min}(\alpha^1) \times C = 0] = \left(\tfrac{1}{2}\right)^6, \quad \Pr[H_{b\min}(\alpha^9) \times C = 0] = \left(\tfrac{1}{2}\right)^3, \quad \Pr[H_{b\min}(\alpha^{63}) \times C = 0] = \left(\tfrac{1}{2}\right)^1$  (43)
so we find that Hypothesis 1 is not correct in general.
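The ranks in Equation (42) can be checked numerically with the short sketch below, which builds the binary MPCM for the full length l = 63 under the primitive polynomial p(x) = x^6 + x + 1 and computes its rank over GF(2); the helpers are ours.

```python
def gf_exp_table(m, prim):
    """Power table of GF(2^m): exp[i] = alpha^i as an m-bit integer."""
    n = (1 << m) - 1
    exp = [1] * n
    for i in range(1, n):
        v = exp[i - 1] << 1
        if v & (1 << m):
            v ^= prim
        exp[i] = v
    return exp

def rank_binary_mpcm(i, m, prim):
    """GF(2) rank of the binary MPCM of alpha^i for the full code length l = 2^m - 1.
    Column u holds the m-bit pattern of (alpha^i)^u; each row is packed into an integer."""
    n = (1 << m) - 1
    exp = gf_exp_table(m, prim)
    rows = [0] * m
    for u in range(n):
        sym = exp[(i * u) % n]
        for k in range(m):
            if (sym >> k) & 1:
                rows[k] |= 1 << u
    rank = 0
    for col in range(n):                     # Gaussian elimination over GF(2)
        piv = next((r for r in range(rank, m) if (rows[r] >> col) & 1), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(m):
            if r != rank and (rows[r] >> col) & 1:
                rows[r] ^= rows[rank]
        rank += 1
    return rank

for i in (1, 9, 63):
    print(i, rank_binary_mpcm(i, 6, 0b1000011))   # p(x) = x^6 + x + 1 -> ranks 6, 3, 1
```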

References

  1. Lin, S.; Costello, D.J. Coding for reliable digital transmission and storage. In Error Control Coding: Fundamentals and Applications, 2nd ed.; Horton, J.M., Riccardi, D.W., Eds.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2004; pp. 1–23. [Google Scholar]
  2. Wang, X.; Xiao, G.Z. BCH codes and Goppa codes. In Error correcting coding: principles and methods; Li, J., Ed.; Xidian University Press: Xian, China, 2001; pp. 242–317. [Google Scholar]
  3. Moreira, J.C.; Farrell, P.G. BCH Codes. In Essentials of Error-Control Coding; John Wiley & Sons Ltd.: Chichester, UK, 2006; pp. 97–112. [Google Scholar]
  4. Peterson, W.W. Encoding and error-correction procedures for Bose-Chaudhuri codes. IRE Trans. Inf. Theory 1960, 9, 459–470. [Google Scholar] [CrossRef]
  5. Chien, R.T. Cyclic Decoding procedure for the Bose-Chaudhuri-Hocquenghem Codes. IEEE Trans. Inf. Theory 1964, 10, 357–363. [Google Scholar] [CrossRef]
  6. Mitola, J.; Maguire, G.Q. Cognitive radio: making software radios more personal. IEEE Personal Commun. 1999, 6, 13–18. [Google Scholar] [CrossRef]
  7. Oreay, O.; Ustundag, B. A pattern construction scheme for neural network-based cognitive communication. Entropy 2011, 13, 64–81. [Google Scholar]
  8. Li, M.; Batalama, N.S.; Pados, A.; Melodia, T.; Medley, M.J.; Matyjas, J.D. Cognitive code-division links with blind primary-system identification. IEEE Trans. Wireless Commun. 2011, 11, 3743–3753. [Google Scholar] [CrossRef]
  9. Liu, Y.; Tan, X.; Anghuwo, A.A. Joint power and spectrum allocation algorithm in cognitive radio networks. J. Syst. Eng. Electron. 2011, 4, 691–701. [Google Scholar] [CrossRef]
  10. Jiang, T.; Grace, D.; Mitchell, P.D. Efficient exploration in reinforcement learning-based cognitive radio spectrum sharing. IET Commun. 2011, 10, 1309–1317. [Google Scholar] [CrossRef]
  11. Marazin, M.; Gautier, R.; Burel, G. Algebraic method for blind recovery of punctured convolutional encoders from an erroneous bitstream. IET Signal Process. 2012, 2, 122–131. [Google Scholar] [CrossRef] [Green Version]
  12. Moosavi, R.; Larsson, E.G. A fast scheme for blind identification of channel codes. In Proceedings of 54th GLOBECOM, Houston, TX, USA, 5–9 December 2011; pp. 1–5.
  13. Marazin, M.; Gautier, R.; Burel, G. Dual code method for blind identification of convolutional encoder for cognitive radio receiver design. In Proceedings of IEEE Globecom Workshops, Honolulu, HI, USA, 30 November–4 December 2009; pp. 1–6.
  14. Goldsmith, A.J.; Chua, S.G. Adaptive coded modulation for fading channels. IEEE Trans. Commun. 1998, 5, 595–602. [Google Scholar] [CrossRef]
  15. Burel, G.; Gautier, R. Blind estimation of encoder and interleaver characteristics in a non cooperative context. In Presented at International Conference on Communications, Internet and Information Technology, Scottsdale, AZ, USA, 17–19 November 2003.
  16. Choqueuse, V.; Marazin, M.; Collin, L.; Yao, K.C.; Burel, G. Blind reconstruction of linear space-time block codes: a likelihood-based approach. IEEE Trans. Signal Process. 2010, 3, 1290–1299. [Google Scholar] [CrossRef] [Green Version]
  17. Marazin, M.; Gautier, R.; Burel, G. Blind recovery of k/n rate convolutional encoders in a noisy environment. EURASIP J. Wireless Commun. Netw. 2011, 168, 1–9. [Google Scholar] [CrossRef] [Green Version]
  18. Wang, F.; Huang, Z.; Zhou, Y. A method for blind recognition of convolution code based on Euclidean algorithm. In Proceedings of IEEE WiCom, Shanghai, China, 21–25 September, 2007; pp. 1414–1417.
  19. Dingel, J.; Hagenauer, J. Parameter estimation of a convolutional encoder from noisy observations. In Proceedings of IEEE ISIT, Nice, France, 24–29 June 2007; pp. 1776–1780.
  20. Marazin, M.; Gautier, R.; Burel, G. Blind recovery of the second convolutional encoder of a turbo-code when its systematic outputs are punctured. Military Tech. Acad. Rev. 2009, 2, 213–232. [Google Scholar]
  21. Yongguang, Z. Blind recognition method for the Turbo coding parameters. J. Xidian Univ. 2011, 2, 167–172. [Google Scholar]
  22. Wen, N.; Yang, X. Recognition methods of BCH codes. Electron. Warf. 2010, 6, 30–34. [Google Scholar]
  23. Yang, X.; Wen, N. Recognition method of BCH codes on roots information dispersion entropy and roots statistic. J. Detect. Control. 2010, 3, 69–73. [Google Scholar]
  24. Lv, X.; Huang, Z.; Su, S. Fast recognition method of generator polynomial of BCH codes. J. Xidian Univ. 2011, 6, 187–191. [Google Scholar]
  25. Fossorier, M.; Lin, S. Bit error probability for maximum likelihood decoding of linear block codes and related soft decision decoding methods. IEEE Trans. Inf. Theory. 1998, 11, 3083–3090. [Google Scholar] [CrossRef]
  26. Kaneko, T.; Nishijima, T.; Inazumi, H.; Hirasawa, S. An efficient maximum likelihood decoding of linear block codes with algebraic decoder. IEEE Trans. Inf. Theory 1994, 3, 320–327. [Google Scholar] [CrossRef]
  27. Sankaranarayanan, S.; Vasic, B. Iterative decoding of linear block codes: A parity-check orthogonalization approach. IEEE Trans. Inf. Theory 2005, 51, 3347–3353. [Google Scholar] [CrossRef]
  28. Jiang, J.; Narayanan, K.R. Iterative soft-input-soft-output decoding of Reed-Solomon codes by adapting the parity check matrix. IEEE Trans. Inf. Theory. 2006, 8, 3746–3756. [Google Scholar] [CrossRef]
  29. Ni, L.; Yao, F.; Zhang, L. A rotated quasi-orthogonal space-time block code for asynchronous cooperative discovery. Entropy 2012, 14, 654–664. [Google Scholar] [CrossRef]
  30. Hagenauer, J.; Offer, E.; Papke, L. Iterative decoding of binary block and convolutional codes. IEEE Trans. Inf. Theory 1996, 2, 429–445. [Google Scholar] [CrossRef]
  31. Imad, R.; Sicot, G.; Houcke, S. Blind frame synchronization for error correcting codes having a sparse parity check matrix. IEEE Trans. Commun. 2009, 6, 1574–1577. [Google Scholar] [CrossRef]
  32. Imad, R.; Houcke, S. Theoretical analysis of a MAP based blind frame synchronizer. IEEE Trans. Wireless Commun. 2009, 11, 5472–5476. [Google Scholar] [CrossRef]
  33. Imad, R.; Houcke, S.; Jego, C. Blind frame synchronization of product codes based on the adaptation of the parity check matrix. In Proceedings of IEEE ICC2009, Dresden, Germany, 14–18 June, 2009; pp. 1574–1577.
  34. Imad, R.; Poulliat, C.; Houcke, S.; Gadat, G. Blind frame synchronization of Reed-Solomon codes: Non-binary vs. binary approach. In Proceedings of 2010 IEEE Eleventh International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Marrakech, Morocco, 20–23 June 2010; pp. 1–5.
  35. Moreira, J.C.; Farrell, P.G. Cyclic codes. In Essentials of Error-Control Coding; John Wiley & Sons Ltd.: Chichester, UK, 2006; pp. 81–94. [Google Scholar]
  36. Lin, S.; Costello, D.J. Introduction to algebra. In Error Control Coding: Fundamentals and Applications, 2nd ed.; Horton, J.M., Riccardi, D.W., Eds.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2004; pp. 25–65. [Google Scholar]
