Improved Approach for the Maximum Entropy Deconvolution Problem

The probability density function (pdf) valid for the Gaussian case is often used to describe the convolutional noise pdf in the blind adaptive deconvolution problem, although it is known to be applicable only at the latter stages of the deconvolution process, where the convolutional noise pdf tends to be approximately Gaussian. Recently, the convolutional noise pdf was approximated with the Edgeworth Expansion and with the Maximum Entropy density function for the 16 Quadrature Amplitude Modulation (QAM) input. However, for the hard channel case, the equalization algorithm based on the Maximum Entropy density approximation for the convolutional noise pdf showed no performance improvement over the original Maximum Entropy algorithm, while the Edgeworth Expansion approximation technique required additional predefined parameters in the algorithm. In this paper, the Generalized Gaussian density (GGD) function and the Edgeworth Expansion are applied to approximate the convolutional noise pdf for the 16 QAM input case, with no need for additional predefined parameters in the obtained equalization method. Simulation results indicate that, for the hard channel case, our new proposed equalization method based on the new model for the convolutional noise pdf improves the convergence time by approximately 15,000 symbols compared to the original Maximum Entropy algorithm. By convergence time, we mean the number of symbols required to reach a residual inter-symbol interference (ISI) at which reliable decisions can be made on the equalized output sequence.


Introduction
In this paper, the blind adaptive deconvolution problem (blind adaptive equalizer) is considered, where we observe the output of an unknown linear system (channel) from which we want to recover its input, using an adaptive blind equalizer (adaptive linear filter) [1][2][3][4][5][6]. The linear system (channel) is often modeled as a finite impulse response (FIR) filter. Since the channel coefficients are unknown, the equalizer's coefficients used in the deconvolution process are only approximated values, leading to an error signal that is added to the source signal at the output of the deconvolution process. Throughout the paper, we refer to this error signal as the convolutional noise. The Gaussian pdf is often applied in the literature [1,[7][8][9][10][11] for approximating the convolutional noise pdf in calculating the conditional expectation of the source input given the equalized output sequence, based on Bayes rules. However, according to [8], the convolutional noise pdf tends to be approximately Gaussian only at the latter stages of the iterative deconvolution process, where the equalizer has converged to a relatively low residual ISI (where the convolutional noise is relatively low). In the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the input sequence and the convolutional noise sequence are strongly correlated and the convolutional noise pdf is more uniform than Gaussian [8,12]. Recently [3,4], the convolutional noise pdf was approximated with the Maximum Entropy density approximation technique [1,2,13,14] with Lagrange multipliers up to order four, and with the Edgeworth Expansion series [15,16] up to order six, to obtain the conditional expectation of the source signal (16 QAM input case) given the equalized output via Bayes rules.
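To make the convolutional noise concrete, it can be simulated directly: pass the real part of a 16QAM stream through a short FIR channel and an imperfect equalizer, and inspect the residual error p[n] = z[n] − x[n]. The channel and equalizer taps below are arbitrary illustrative values, not taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-tap FIR channel and an imperfect equalizer (illustrative values).
h = np.array([0.9, 0.3, -0.1])      # unknown channel impulse response
c = np.array([1.1, -0.35, 0.18])    # approximate channel inverse (equalizer taps)

x = rng.choice([-3.0, -1.0, 1.0, 3.0], size=5000)  # real part of a 16QAM stream
y = np.convolve(x, h)[: len(x)]                    # channel output (noiseless)
z = np.convolve(y, c)[: len(x)]                    # equalized output

# With perfect equalization h * c would be a delta and z would equal x;
# the leftover error is the convolutional noise p[n].
p = z - x
print(float(np.var(p)))  # small but nonzero residual power
```

Because the combined response h * c is only approximately a delta, the error p inherits structure from the data sequence, which is exactly why its pdf is not Gaussian in the early, high-ISI stages.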
However, as demonstrated in [3], for the hard channel case (named the channel4 case in [3]), the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation, obtained by approximating the convolutional noise pdf with the Maximum Entropy density approximation technique, achieved the same equalization performance from the residual ISI and convergence time point of view as the original blind adaptive equalization algorithm [1], where the convolutional noise pdf was approximated with the Gaussian pdf. The equalization performance obtained with the Edgeworth Expansion approach [4] was indeed improved compared with the original blind adaptive equalization algorithm [1]. However, the equalization method of [4] needed two additional predefined parameters (additional to the predefined step-size parameter involved in the equalizer's coefficients update mechanism). These two additional predefined parameters were used in the approximation of the fourth and sixth moments of the convolutional noise. Since the convolutional noise is channel dependent, its various moments are also channel dependent, which causes the two additional predefined parameters in [4] to be channel dependent as well. As was already implied earlier, the shape of the convolutional noise pdf changes during the iterative deconvolution process.
Thus, if we had an approximation for the convolutional noise pdf that is close to optimal, we could obtain a closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules that may lead to improved equalization performance, from the residual ISI and convergence time point of view, compared to existing methods based on the closed-form approximated conditional expectation expression [1,3,4]. According to [17][18][19], the GGD provides a flexible and suitable tool for data modeling and simulation. The GGD [17,18] is governed by a shape parameter that changes the pdf: it reduces to a Laplacian (double exponential) distribution, a Gaussian distribution, or a uniform distribution for a shape parameter equal to one, two, and infinity, respectively. The shape of the convolutional noise pdf changes as a function of the residual ISI. Thus, in order to apply the GGD for the convolutional noise pdf approximation task, the shape parameter of the GGD presentation must be a function of the residual ISI. Recently [20], the shape parameter of the GGD presentation [17,18] was given as a function of the residual ISI.
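The shape-parameter behavior described above can be checked numerically. A minimal sketch of the GGD, f(p) = β/(2αΓ(1/β))·exp(−(|p|/α)^β), showing that β = 1 gives the Laplacian, β = 2 the Gaussian, and large β an almost uniform density (all parameter values here are illustrative):

```python
import math

def ggd_pdf(p, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape parameter beta."""
    return beta / (2.0 * alpha * math.gamma(1.0 / beta)) * math.exp(-((abs(p) / alpha) ** beta))

alpha = 1.3  # illustrative scale

# beta = 1: Laplacian, f(0) = 1 / (2 * alpha).
laplace_ok = abs(ggd_pdf(0.0, alpha, 1.0) - 1.0 / (2.0 * alpha)) < 1e-12

# beta = 2: Gaussian, f(p) = exp(-(p/alpha)^2) / (alpha * sqrt(pi)).
gauss_ok = abs(ggd_pdf(0.5, alpha, 2.0)
               - math.exp(-((0.5 / alpha) ** 2)) / (alpha * math.sqrt(math.pi))) < 1e-12

# Large beta: nearly flat (uniform-like) over (-alpha, alpha).
flat_ok = abs(ggd_pdf(0.0, alpha, 50.0) - ggd_pdf(0.9 * alpha, alpha, 50.0)) < 0.05

print(laplace_ok, gauss_ok, flat_ok)  # True True True
```

This single-parameter family is what makes the GGD attractive here: sweeping β tracks the convolutional noise pdf as it evolves from uniform-like (high ISI) toward Gaussian (low ISI).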
In this paper, we deal with the 16QAM input case, where we use the GGD presentation [17,18] with the results obtained in [20] to approximate the convolutional noise pdf involved in the calculation of the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules. Since the shape parameter of the GGD presentation [17,18] may also take fractional values during the iterative deconvolution process, the integral involved in the conditional expectation calculation may not lead to a closed-form approximated expression. Thus, we use in this work the Edgeworth Expansion series [15,16] up to order six for approximating the GGD presentation applicable to the convolutional noise pdf, where the fourth and sixth moments of the convolutional noise sequence are approximated with the GGD technique [17,18]. By applying the GGD [17,18], the Edgeworth Expansion [15,16] and the results from [20] (the relationship between the shape parameter and the residual ISI), a new closed-form approximated expression is proposed for the conditional expectation of the source signal given the equalized output via Bayes rules, with no need for additional predefined parameters in the obtained equalization method, as is the case in [4]. Simulation results indicate that with our new proposed equalization method, based on the new model for the convolutional noise pdf, we have:
• Improved equalization performance from the convergence time point of view for the easy [6] as well as for the hard channel case, compared to the original Maximum Entropy algorithm [1]. The improvement in the convergence time for the hard channel case is approximately 15,000 symbols, while for the easy channel case the improvement is approximately 250 symbols. In both cases, the improvement in the convergence time is approximately one third of the convergence time of the original Maximum Entropy algorithm [1].
• Based on [3], the blind adaptive equalization algorithm with the closed-form approximated expression for the conditional expectation, obtained by approximating the convolutional noise pdf with the Maximum Entropy density approximation technique, achieved for the hard channel case the same equalization performance, from the residual ISI and convergence time point of view, as the original Maximum Entropy algorithm [1]. Thus, the improvement in the convergence time with our new proposed method compared with the algorithm in [3] is also approximately 15,000 symbols for the hard channel case.

• The new proposed equalization method does not need additional predefined parameters (additional to the predefined step-size parameter involved in the equalizer's coefficients update mechanism) in order to achieve improved convergence time compared to the original Maximum Entropy algorithm [1], as does the algorithm in [4], where the convolutional noise pdf was approximated with the Edgeworth Expansion series.

• For the easy channel case and an SNR of 26 dB, the new proposed equalization method has improved equalization performance from the residual ISI and convergence time point of view compared to the recently proposed methods [2,5], which are versions of the original Maximum Entropy algorithm [1]. From the residual ISI point of view, the improvement is approximately 4 dB, while the improvement in the convergence time is approximately one third of the convergence time achieved by the equalization methods presented in [2,5].
The paper is organized as follows: after describing the system under consideration in Section 2, the systematic way of obtaining the closed-form approximated expression for the conditional expectation of the source signal given the equalized output via Bayes rules, based on the GGD and the Edgeworth Expansion series, is given in Section 3. In Section 4, we present our simulation results. Finally, conclusions are given in Section 5.

System Description
In the following (Figure 1), we recall the system under consideration used in [1,3,4], where we apply the same assumptions made in [1,3,4]:

• The unknown channel h[n] is a possibly non-minimum phase linear time-invariant filter whose transfer function has no "deep zeros"; namely, the zeros lie sufficiently far from the unit circle.

• The filter c[n] is a tap-delay line.

• The channel noise w[n] is additive white Gaussian noise.

• The function T[·] is a memoryless nonlinear function that satisfies the additivity condition T[z[n]] = T[z_1[n]] + jT[z_2[n]], where z_1[n] and z_2[n] are the real and imaginary parts of the equalized output, respectively.

The input to the equalizer is given by:

y[n] = x[n] * h[n] + w[n]   (2)

where "*" stands for the convolution operation. Based on (2), the equalized output is obtained via:

z[n] = y[n] * c[n] = x[n] + p[n] + w̃[n]   (3)

where ξ[n] stands for the difference (error) between the ideal and the used value for c[n] following (6), δ is the Kronecker delta function, w̃[n] = w[n] * c[n] and p[n] is the convolutional noise. The combined channel-equalizer response s̃[n] = h[n] * c[n] is given in (4), and the ISI is expressed by:

ISI = (Σ_n |s̃[n]|² − |s̃|²_max) / |s̃|²_max   (5)

where |s̃|_max is the component of s̃, given in (4), having the maximal absolute value. The equalizer is updated according to:

c_l[n+1] = c_l[n] + µ (T[z[n]] − z[n]) y*[n − l]

where (·)* is the conjugate operation on (·), µ is the step-size parameter and c_l[n] is the l-th tap of the equalizer at iteration n.
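The ISI measure above can be computed directly from the combined channel-equalizer response s̃ = h * c. A small sketch (the function name is ours); applied to the hard channel used later in the simulations, with c initialized to a delta, it reproduces the stated initial ISI of 1.402:

```python
import numpy as np

def isi(s):
    """Residual ISI of a combined channel-equalizer response s = h * c:
    the total off-peak power divided by the peak power."""
    power = np.abs(np.asarray(s)) ** 2
    peak = power.max()
    return float((power.sum() - peak) / peak)

# Perfect deconvolution: s is a scaled, delayed delta, so ISI = 0.
print(isi([0.0, 0.8, 0.0]))  # 0.0

# Before equalization c is a delta, so s = h; for the hard channel (Channel2)
# this reproduces the initial ISI quoted in the simulations.
h = [0.2258, 0.5161, 0.6452, 0.5161]
print(round(isi(h), 3))  # 1.402
```

The same function evaluated on h * c during adaptation yields the learning curves plotted in the simulation section (there expressed in dB, i.e. 10·log10(ISI)).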

GGD Based Closed-Form Approximated Expression for the Conditional Expectation
In this section, we present a systematic approach for obtaining the conditional expectation E[x[n]|z[n]] based on approximating the convolutional noise pdf with the GGD [17,18] and Edgeworth Expansion [15,16] presentations. For simplicity, we use in the following x, y, p for x[n], y[n] and p[n], respectively.

Theorem 1. For the noiseless and 16QAM input case, the closed-form approximated expression for the conditional expectation E[x|z] is given by (7), where T is defined in (8), Γ is the Gamma function, β is given as a function of the ISI according to [20] in (9), the ISI is expressed as in (10), and the Lagrange multipliers for k = 2, 4 (λ_2, λ_4) are calculated according to [1], as in (11) and (12).

Proof of Theorem 1. For the two independent quadrature carrier case, of which the 16QAM modulation is a special case, the conditional expectation E[x|z] can be given according to [9] as:

E[x|z] = E[x_1|z_1] + jE[x_2|z_2]   (13)

Thus, the real and imaginary parts of the data are estimated separately on the basis of the real and imaginary parts of the equalizer's output sequence. For the noiseless case, (3) may be written as:

z = x + p   (14)

In the following, we denote p_1 and p_2 as the real and imaginary parts of p. Based on (14), and under the assumption that the blind adaptive equalizer leaves the system with a relatively low residual ISI, for which the input signal x and the convolutional noise p can be considered independent [8], we may write for the 16QAM modulation case z_1 = x_1 + p_1 (15). Based on (3), the variance of the real part of the equalized output signal, σ²_{z_1}, can be written for the noiseless case as σ²_{z_1} = σ²_{x_1} + σ²_{p_1} (16). Next, based on (16), (15) and (5), we may write an expression for σ²_{p_1} (17). Next, we show the systematic approach for calculating the conditional expectation:

E[x_1|z_1] = ∫ x_1 f_{x_1|z_1}(x_1|z_1) dx_1   (18)

where f_{x_1|z_1}(x_1|z_1) is the conditional pdf. Based on Bayes rules we may write:

f_{x_1|z_1}(x_1|z_1) = f_{z_1|x_1}(z_1|x_1) f_{x_1}(x_1) / f_{z_1}(z_1)   (19)

Now, by substituting (19) into (18), we obtain (20). As was already mentioned earlier in this paper, we would like to use the GGD [17,18] presentation for approximating the real part of the convolutional noise pdf.
Thus, based on the GGD [17,18], the real part of the convolutional noise pdf is approximately given by:

f_{p_1}(p_1) ≈ (β / (2αΓ(1/β))) exp(−(|p_1|/α)^β)   (21)

with

α = σ_{p_1} √(Γ(1/β)/Γ(3/β))   (22)

where β is defined as the shape parameter of the pdf presentation. Thus, based on [17,18], (21) and (14), the conditional pdf f_{z_1|x_1}(z_1|x_1) can be expressed by:

f_{z_1|x_1}(z_1|x_1) = f_{p_1}(z_1 − x_1)   (23)

Following [1,3,4], we use the Maximum Entropy density approximation technique [13,14] with Lagrange multipliers up to order four for approximating the pdf of the real part of the input sequence:

f_{x_1}(x_1) ≈ A exp(λ_2 x_1² + λ_4 x_1⁴)   (24)

where λ_2 and λ_4 are the Lagrange multipliers and A is a constant. Next, by substituting (23) and (24) into (20), problems arise in carrying out the integrals involved in (20) to achieve a closed-form approximated expression for the conditional expectation E[x_1|z_1], due to the fact that the shape parameter β changes during the iterative blind deconvolution process and may take non-integer values. Thus, to overcome this problem, we apply the Edgeworth Expansion series [15,16] up to order six for approximating the real part of the convolutional noise pdf, where the higher moments of the convolutional noise sequence are calculated via the GGD [17,18] technique:

E[p_1⁴] = σ⁴_{p_1} Γ(5/β)Γ(1/β)/Γ(3/β)²;   E[p_1⁶] = σ⁶_{p_1} Γ(7/β)Γ(1/β)²/Γ(3/β)³   (25)

Thus, based on the Edgeworth Expansion series technique [15,16] and (25), we obtain the sixth-order approximation (26) for the real part of the convolutional noise pdf, with E[p_1⁶] and E[p_1⁴] given in (25). Now, substituting (26) and (24) into (20) yields (27), where the function g and the parameter ρ = 2σ²_{p_1} are defined in (28). In order to obtain closed-form expressions for the integrals involved in (27), Laplace's method [21] is applied, as was also done in [1,3] and [4]. According to [21], Laplace's method is a general technique for obtaining the asymptotic behavior, as ρ → 0, of integrals in which the large parameter 1/ρ appears in the exponent. The main idea of Laplace's method is: if the continuous function Ψ(x_1) has its minimum at x_0, which lies between minus infinity and infinity, then it is only the immediate neighborhood of x_1 = x_0 that contributes to the full asymptotic expansion of the integral for large 1/ρ.
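The fourth- and sixth-order GGD moments entering the Edgeworth correction follow from the standard GGD moment formula E|p|^r = α^r Γ((r+1)/β)/Γ(1/β) together with the scale-variance relation above. As a sanity check (with an illustrative variance), for β = 2 they must reduce to the Gaussian values 3σ⁴ and 15σ⁶:

```python
import math

def ggd_moment(sigma2, beta, r):
    """r-th absolute moment of a zero-mean GGD with variance sigma2 and shape beta."""
    # Scale alpha chosen so that the GGD variance equals sigma2.
    alpha = math.sqrt(sigma2 * math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    return alpha ** r * math.gamma((r + 1.0) / beta) / math.gamma(1.0 / beta)

sigma2 = 0.7  # illustrative convolutional-noise variance
# beta = 2 is the Gaussian case: E[p^2] = sigma^2, E[p^4] = 3 sigma^4, E[p^6] = 15 sigma^6.
print(abs(ggd_moment(sigma2, 2.0, 2) - sigma2) < 1e-9)          # True
print(abs(ggd_moment(sigma2, 2.0, 4) - 3 * sigma2 ** 2) < 1e-9)  # True
print(abs(ggd_moment(sigma2, 2.0, 6) - 15 * sigma2 ** 3) < 1e-9) # True
```

For β ≠ 2 the same function supplies the non-Gaussian fourth and sixth moments that drive the Edgeworth correction terms.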
Thus, according to [1,3,4,21], the integrals involved in (27) are approximated by (29) and (30), where (·)'' and (·)'''' stand for the second and fourth derivatives of (·), respectively, and O(x) is defined via lim_{x→0} O(x)/x = r_const, where r_const is a constant. The expressions for Ψ''(x_0) and x_0 are given in (31). Now, by substituting (29) and (30) into (27), dividing both the numerator and denominator by the function g(z_1) given in (28) (with z_1 instead of x_1), and using x_0 = z_1, Ψ''(x_0) = 2 from (31), ρ = 2σ²_{p_1} from (28) and σ²_{p_1} from (15), we obtain (32). Next, in order to reduce the computational complexity, we notice that the denominator of (32) (E[x_1|z_1]_down from (33)) can be approximated by (34), where g''(z_1) and g''''(z_1) are the second and fourth derivatives of g(z_1), respectively. Please note that (34) is valid for the Gaussian convolutional noise pdf case. By using (32) with E[x_1|z_1]_down and E[x_1|z_1]_up from (34) and (33), respectively, together with the derivatives given in (35), the expression of u_1 f_1 from (7) is obtained. Now, by using (13), the expression in (7) is obtained.
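Laplace's method, as invoked in the proof, can be sanity-checked on a toy integral with Ψ(x) = (x − 1)² (minimum at x₀ = 1, Ψ''(x₀) = 2) and g(x) = 1 + x²; the leading-order approximation g(x₀)·exp(−Ψ(x₀)/ρ)·√(2πρ/Ψ''(x₀)) should match brute-force integration as ρ → 0. This toy Ψ and g are ours, not the ones from the proof:

```python
import numpy as np

rho = 1e-3  # small parameter; Laplace's method becomes exact as rho -> 0

# Brute-force numerical integration of g(x) * exp(-Psi(x) / rho).
x = np.linspace(-4.0, 6.0, 400001)
dx = x[1] - x[0]
integrand = (1.0 + x ** 2) * np.exp(-((x - 1.0) ** 2) / rho)
integral = float(np.sum(integrand) * dx)

# Leading-order Laplace approximation around the minimum x0 = 1 of Psi:
# g(x0) * exp(-Psi(x0)/rho) * sqrt(2*pi*rho / Psi''(x0)), with Psi''(x0) = 2.
laplace = 2.0 * np.sqrt(2.0 * np.pi * rho / 2.0)

print(abs(integral - laplace) / laplace)  # relative error of order rho
```

For this example the exact value is √(πρ)·(2 + ρ/2), so the relative error of the leading-order term is ρ/4, consistent with the O(·) remainder quoted in the proof.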

Simulation
In this section, we use the 16QAM input case with two different channels to show, via simulation results, the usefulness of our new proposed model for the convolutional noise pdf based on the GGD [17,18] and Edgeworth Expansion [15,16], compared to the Gaussian case. For the equalization performance comparison, we use the MaxEnt algorithm [1], where the conditional expectation is derived by assuming the Gaussian model for the convolutional noise pdf and the source pdf is approximated with the Maximum Entropy density approximation technique [13,14], as is done with our new proposed equalization method. Thus, the difference between the two approximated expressions for the conditional expectation ([1] and (7)) is due only to the different model used for the convolutional noise pdf. In addition, we use for the equalization performance comparison two additional equalization methods [2,5], which we name MaxEnt_BNEW and MaxEnt_ANEW, respectively. These methods [2,5] are versions of the original MaxEnt algorithm [1], where the convolutional noise pdf was also approximated with the Gaussian model.
The equalizer's taps for the Maximum Entropy algorithm (MaxEnt) [1] were updated according to [1], where µ_me is a positive step-size parameter and σ²_{z_s} was estimated recursively with z²_{s_0} > 0. For the MaxEnt_BNEW method [2], β_BNEW and µ_BNEW are positive step-size parameters, and the equalizer's taps were updated only if Ñ_s > ε_1, where ε_1 is a small positive parameter; in addition, gain control was applied. Two different channels were considered:

• Easy channel case, Channel1 (initial ISI = 0.44): the channel parameters were determined according to [6]: h_n = 0 for n < 0; −0.4 for n = 0; 0.84 × 0.4^(n−1) for n > 0.

• Hard channel case, Channel2 (initial ISI = 1.402): the channel parameters were taken according to [22]: h_n = 0.2258, 0.5161, 0.6452, 0.5161.

For Channel1 and Channel2, we used an equalizer with 13 and 21 taps, respectively. In the simulation, the equalizers were initialized by setting the center tap equal to one and all others to zero [1]. The step-size parameters µ, β, µ_me and β_me were chosen for fast convergence with a low steady-state ISI, where the values for µ_me and β_me were taken from [1]. Figure 2 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel1 case for a signal-to-noise ratio (SNR) of 26 dB, according to [1]. Please note that the main purpose of a blind adaptive equalizer is to reach, as fast as possible, a residual ISI that is low enough for sending the equalized output sequence to the decision device in order to get reliable decisions on the input data. Reliable decisions can be made on the equalized output sequence when the equalizer leaves the system with a residual ISI lower than −16 dB. According to Figure 2, the new algorithm (GGD) achieves the residual ISI of −16 dB faster than the MaxEnt algorithm [1].
Thus, the GGD algorithm has a faster convergence rate compared to the MaxEnt [1] method, which means that the equalized output sequence can be sent earlier to the decision device with the GGD algorithm than with the MaxEnt method [1]. Figure 3 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], for the 16QAM input and Channel2 case for an SNR of 30 dB, according to [1]. According to Figure 3, the GGD algorithm reaches the residual ISI of −16 dB approximately 15,000 symbols faster than the MaxEnt [1] algorithm does, while leaving the system with approximately the same residual ISI at convergence as the MaxEnt [1] method.
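Measuring the convergence time used in these comparisons amounts to finding the first iteration at which the ISI curve falls below the −16 dB threshold and stays there. A sketch with synthetic ISI learning curves (the curves are illustrative, not the simulated ones):

```python
import numpy as np

def convergence_time(isi_db, threshold_db=-16.0):
    """First iteration after which the residual ISI stays below the threshold,
    i.e. the point from which reliable decisions can be made."""
    above = np.flatnonzero(np.asarray(isi_db) >= threshold_db)
    if len(above) == len(isi_db):
        return None                 # never converges below the threshold
    return 0 if len(above) == 0 else int(above[-1]) + 1

# Synthetic ISI learning curves in dB (illustrative time constants).
n = np.arange(30000)
fast = 3.0 - 22.0 * (1.0 - np.exp(-n / 4000.0))   # faster-converging equalizer
slow = 3.0 - 22.0 * (1.0 - np.exp(-n / 8000.0))   # slower-converging equalizer

gain = convergence_time(slow) - convergence_time(fast)
print(gain)  # number of symbols saved by the faster algorithm
```

Applied to the averaged learning curves of Figure 3, this is the measurement behind the quoted gain of approximately 15,000 symbols.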
It should be pointed out that the equalization performance obtained with the GGD algorithm is very similar to that obtained in [4], where the Edgeworth Expansion up to order six was used for approximating the convolutional noise pdf. However, [4] needed two additional step-size parameters in the deconvolution process; those step-size parameters are channel dependent and are not needed in the GGD algorithm. Thus, the GGD algorithm is preferable over the algorithm proposed in [4]. The GGD algorithm also has improved equalization performance for the hard channel case (Channel2) compared to the equalization method proposed in [3], where the Maximum Entropy density approximation technique [13,14] was used for approximating the convolutional noise pdf with Lagrange multipliers up to order four. Please note that according to [3], the MaxEnt method [1] and the equalization algorithm proposed in [3] have the same equalization performance for the hard channel case (Channel2). Figure 4 shows the simulated ISI as a function of the iteration number of our new proposed algorithm (GGD), compared to the MaxEnt method [1], the MaxEnt_ANEW method [5] and the MaxEnt_BNEW method [2], for the 16QAM input and Channel1 case for an SNR of 26 dB. According to Figure 4, the GGD algorithm has improved equalization performance from the residual ISI and convergence time point of view compared to the MaxEnt_ANEW [5] and MaxEnt_BNEW [2] methods. From the residual ISI point of view, the improvement is approximately 4 dB, while the improvement in the convergence time is approximately one third of the convergence time achieved by the equalization methods presented in [2,5].

Figure 3. Equalization performance comparison between the GGD and MaxEnt methods for a 16QAM input going through Channel2. The averaged results were obtained over 50 Monte Carlo trials for an SNR of 30 dB. The step-size parameters were set to: µ = 3 × 10⁻⁴, β = 2 × 10⁻⁶, µ_me = 2 × 10⁻⁴, β_me = 2 × 10⁻⁶.
In addition we set: ε = 0.5.
Although the GGD algorithm was derived for the 16QAM constellation input, it can be extended to other two independent quadrature carrier inputs with Lagrange multipliers up to order four by simply using another function for β (9) that fits the new input constellation. In addition, if more than four Lagrange multipliers are needed to properly approximate the input sequence pdf, (7) should be used with k = 2, 4, 6, ..., K, and the Lagrange multipliers should be calculated as given in [1] for the general-order case.

Conclusions
In this paper, the blind adaptive deconvolution problem was considered, where the GGD function and the Edgeworth Expansion up to order six were applied for approximating the convolutional noise pdf for the 16 QAM input case. A new closed-form approximated expression was derived for the conditional expectation, which led to a new blind adaptive equalization method. Unlike the blind adaptive equalization method from the literature based on the conditional expectation expression with the convolutional noise pdf approximated by the Edgeworth Expansion up to order six, the new proposed algorithm does not need additional channel-dependent predefined parameters. Simulation results demonstrated that improved equalization performance is obtained with our new proposed equalization method, based on the new model for the convolutional noise pdf, compared to the original Maximum Entropy algorithm and to the two recently obtained versions of the original Maximum Entropy algorithm, for the easy channel and high SNR case. Since, for the hard channel case, the original Maximum Entropy algorithm has the same equalization performance as the equalization method based on the conditional expectation expression with the convolutional noise pdf approximated by the Maximum Entropy density technique, the new proposed method also has improved equalization performance for the hard channel case compared with that method. This paper demonstrated that improved equalization performance can be obtained if a proper approximation is applied for the convolutional noise pdf in the calculation of the conditional expectation via Bayes rules. The new proposed algorithm is valid only for the high SNR case, since the channel noise was not taken into account in our derivations.
Please note that the original Maximum Entropy algorithm and the two equalization methods based on the conditional expectation via Bayes rules, where the convolutional noise pdf was approximated with the Maximum Entropy density technique and with the Edgeworth Expansion approach, are also valid only for the high SNR case.