New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable for the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNR down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).


Introduction
In this paper, we consider a blind adaptive deconvolution problem in which we observe the output of an unknown, possibly non-minimum phase, linear system from which we want to recover its input using an adjustable linear filter (equalizer) [1]. Blind deconvolution arises in many applications, such as seismology, underwater acoustics, image restoration and digital communication.
In this paper, we consider the application in digital communications where the received symbol sequence has been affected by intersymbol interference (ISI), whereby symbols transmitted before and after a given symbol corrupt the detection of that symbol [2]. ISI causes harmful distortions and presents a major difficulty in the recovery process. Thus, a blind adaptive filter is usually used to remove the convolutive effect of the system to recover the source signal. Equalizers with the sampling rate equal to the symbol rate are referred to as T-spaced equalizers. Equalizers with a sampling rate higher than the symbol rate are referred to as fractionally-spaced equalizers (FSE). A fractionally-spaced adaptive equalizer is a linear equalizer that is similar to a symbol-spaced linear equalizer (T-spaced equalizer, where T denotes the baud, or symbol, duration). However, a fractionally-spaced equalizer receives, say, U input samples before it produces one output sample and updates the weights, where U is an integer. The output sample rate and the input sample rate are 1/T and U/T, respectively. The weight updating occurs at the output rate, which is the slower rate. Since the output of the fractionally-spaced equalizer needs to be calculated only once in every symbol period, the fractionally-spaced equalizer can be modeled as a parallel combination of a number of baud-spaced equalizers (T-spaced equalizers). This parallel combination of baud-spaced equalizers is known as the multi-channel model of the FSE. Among fractionally-spaced constant modulus algorithms, we may find [29] and [30], where [29] shows the robustness of the constant modulus algorithm to noise and [30] shows that the constant modulus criterion is well suited for any subgaussian input distribution, such as all quadrature amplitude modulation (QAM) signals. The main disadvantage of fractionally-spaced equalizers is that all processing before the equalization must be done at a higher rate than would be needed for T-spaced equalizers. This can increase the cost and power consumption of systems based on fractionally-spaced equalizers. However, fractionally-spaced equalizers have several advantages that can often justify this increase in complexity. One advantage of fractionally-spaced equalizers over T-spaced equalizers is their relative immunity to the sampling phase, where the sampling phase refers to the time offset of the sampling instant relative to the symbol clock. As already mentioned, a fractionally-spaced equalizer can be modeled as a single-input-multiple-output (SIMO) system. In the field of communication, SIMO channels appear either when the signal is oversampled at the receiver or from the use of an array of antennas at the receiver [31][32][33]. It should be pointed out that for the SIMO case, the same information is transmitted through different subchannels; all received sequences will be distinctly distorted versions of the same message, which accounts for a certain signal diversity [34]. Therefore, it is reasonable to assume that more information about the transmitted signal will be available at the receiver end [34]. SIMO transmission is widely replacing the single-input-single-output (SISO) approach to enhance performance via diversity combining [35]. Since the SIMO approach consists of a parallel combination of baud-spaced (T-spaced) adaptive equalizers, it is reasonable to expect that if those T-spaced blind adaptive equalizers already lead to good equalization performance in the SISO case, they may also make a major contribution to the equalization performance in the SIMO approach. In this paper, we focus on the SISO case, where a T-spaced blind adaptive equalizer is used to overcome the ISI problem. We do not deal in this paper with techniques that can improve the overall equalization performance, such as the use of multiple receive antennas, the oversampling technique or the use of multiple adaptive T-spaced equalizers connected in series, as was shown in [36].
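The multi-channel view described above can be illustrated numerically: decimating the output of a T/2-spaced convolution to the symbol rate gives exactly the sum of two baud-spaced (polyphase) subchannel convolutions. The following sketch checks this identity for U = 2; the channel and equalizer taps are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of the multi-channel (SIMO) model of a T/2-spaced equalizer.
U = 2                                                   # oversampling factor
h = np.array([0.9, 0.3, -0.2, 0.1, 0.05, -0.02])        # T/2-spaced channel
c = np.array([1.0, -0.1, 0.05, 0.02])                   # T/2-spaced equalizer

# Direct route: fractionally-spaced convolution, then decimate to 1/T.
s_baud_direct = np.convolve(h, c)[::U]

# Parallel route: polyphase (baud-spaced) subchannels and sub-equalizers.
h0, h1 = h[0::U], h[1::U]
c0, c1 = c[0::U], c[1::U]
a = np.convolve(h0, c0)               # even-phase branch
b = np.convolve(h1, c1)               # odd-phase branch, delayed by one T
s_baud_parallel = np.zeros(len(s_baud_direct))
s_baud_parallel[:len(a)] += a
s_baud_parallel[1:1 + len(b)] += b
```

The two routes coincide, which is exactly the "parallel combination of T-spaced equalizers" model of the FSE.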
Blind deconvolution algorithms are essentially adaptive filtering algorithms designed such that they do not require the external supply of a desired response to generate the error signal at the output of the adaptive equalization filter [2,37,38]. The algorithm itself generates an estimate of the desired response by applying a nonlinear transformation to sequences involved in the adaptation process [2,37,38]. In this paper, we consider the Bussgang blind equalization algorithms, where the nonlinear function is applied at the output of the deconvolutional process (equalizer). In the literature, we may find two different approaches for designing this nonlinear function. According to one approach, the nonlinearity is designed to minimize a cost function that is implicitly based on higher order statistics (HOS) and characterizes the ISI [1,[39][40][41][42][43][44][45][46][47][48]. Minimizing this cost function with respect to the equalizer's coefficients reduces the ISI to such a level that the sent symbol can be recovered. According to another approach, the conditional expectation (the expectation of the source input given the equalized or deconvolutional output) is the nonlinear function. Namely, the conditional expectation based on Bayes' rule is derived for estimating the desired response. The relationship between the two approaches will be shown in the next section. In this paper, we consider the approach where the conditional expectation has to be obtained for estimating the desired response. In [38,[49][50][51], the conditional expectation was derived for uniformly distributed source signals. Thus, [38,[49][50][51] cannot cope with a source having a general pdf shape. In [20,52,53], the conditional expectation was given as approximated closed-form expressions suitable for the real or two independent quadrature carrier input case. Those expressions [20,52,53] are suitable for a wider range of source probability density functions compared to [38,[49][50][51]. The input pdf was
approximated in [20,52] with the maximum entropy density approximation technique, while in [53], the input pdf was approximated with the Edgeworth expansion series. The equalized output pdf was calculated in [20] and [53] via the approximated input pdf, while in [52], the equalized output pdf was approximated with the maximum entropy density approximation technique. According to simulation results carried out in [52] for the 16QAM constellation input case, the equalization method based on the conditional expectation from [52] has better equalization performance compared to the maximum entropy equalization technique [20], which was shown to have significant equalization improvement compared to Godard's algorithm [39], the reduced constellation algorithm (RCA) [54], the sign reduced constellation algorithm (SRCA) [55] and others. The equalization performance improvement seen in [52] over the maximum entropy equalization method [20] was in the high SNR environment, as well as in the medium SNR case. However, the blind adaptive deconvolution algorithm proposed in [52] is not applicable for the very noisy case, since the Lagrange multipliers related to the output pdf were set in [52] to those used for the input pdf of the deconvolutional process. In other words, the noise component at the equalized output was ignored in [52]. In this paper, we do not ignore the noise component at the equalized output, as was done in [52]. We derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. We will show via simulation results that the blind adaptive deconvolution algorithm obtained with the new Lagrange multipliers is robust to SNR, while the recently obtained blind adaptive deconvolution algorithm [52] with the same Lagrange multipliers for the input and output pdfs is not. By robust to SNR, we mean that the step size parameter or parameters involved in the update mechanism of the equalizer's taps do not have to be changed for equalization convergence purposes due to changes in the SNR environment. In addition, we will show via simulation results that the blind adaptive deconvolution algorithm with the newly obtained Lagrange multipliers is applicable for the whole range of SNR down to 7 dB. We also obtain in this paper new closed-form approximated expressions for the conditional expectation and MSE. The paper is organized as follows: after having described the system under consideration in Section 2, we introduce in Section 3 our new closed-form approximated expressions for the conditional expectation, MSE and Lagrange multipliers. In Section 4, we present our simulation results, and in Section 5, we present our conclusions.

System Description
The system under consideration is the same system as in [52], illustrated in Figure 1, where we make the following assumptions: 1. The input sequence x[n] can be written as x[n] = x1[n] + jx2[n], where x1[n] and x2[n] are the real and imaginary parts of x[n], respectively. As was described in [52], the sequence x[n] is transmitted through the channel h[n] and is corrupted with channel noise w[n]. The ideal equalized output is expressed in [37] as: where D is a constant delay and θ is a constant phase shift. Therefore, in the ideal case, we could write: where "*" denotes the convolution operation and δ is the Kronecker delta function. In this paper, we assume that D = 0 and θ = 0, since D does not affect the reconstruction of the original input sequence x[n], and θ can be removed by a decision device [37]. According to [56], if the input sequence is stationary and the channel is linear time-invariant, then the observed sequence z[n] is also stationary; its pdf is therefore invariant to the constant delay D. The constant phase shift θ is also of no immediate consequence when the pdf of the input sequence remains symmetric under rotation [56], which is the case in this paper. Thus, according to [56], we may simplify the condition for perfect equalization by requiring that z[n] = x[n], which means D = 0 and θ = 0. Next, convolving c[n] with the received sequence, we obtain: where p[n] is the convolutional noise, arising from the difference between the ideal and the guessed value for c[n], and w̃[n] is the channel noise seen at the equalized output. The intersymbol interference (ISI) is often used as a measure of performance in deconvolution applications, defined by: where |s|max is the component of s, given in (5), having the maximal absolute value.
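The ISI measure can be computed directly from the combined channel-equalizer impulse response. The sketch below uses one common convention, ISI = (Σ|s[n]|² − |s|²max)/|s|²max, evaluated on the truncated impulse response of Channel 1 from the simulation section; note that the paper quotes an initial ISI of 0.44 for that channel, so its exact convention may differ slightly from the one assumed here.

```python
import numpy as np

# One common convention for the ISI measure (the paper's exact
# convention may differ): s is the combined channel-equalizer response.
def isi(s):
    p = np.abs(s) ** 2
    return (p.sum() - p.max()) / p.max()

# Channel 1 from the simulation section, truncated to 60 taps, with a
# single-tap (identity) equalizer, so s equals the channel itself.
n = np.arange(1, 60)
h = np.concatenate(([-0.4], 0.84 * 0.4 ** (n - 1)))
initial_isi = isi(h)        # close to 0.42 under this convention
```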
This error is fed into the adaptive mechanism, which updates the equalizer's taps: where (·)* is the conjugate operation, µ is the step size parameter, c[n] is the equalizer vector and y[n] is the input vector. The operator (·)T denotes the transpose, and N is the equalizer's tap length. As was mentioned earlier in this paper, according to the first approach for designing the nonlinear function, a predefined cost function F[n] that characterizes the ISI is minimized with respect to the equalizer's coefficients. Minimization is performed with the gradient descent algorithm, which searches for an optimal filter tap setting by moving in the direction of the negative gradient −∇cF[n] over the surface of the cost function in the equalizer filter tap space [57]. Thus, the adaptive mechanism that updates the equalizer's taps can be given by: Thus, we may say that according to the first approach (the cost function approach), the conditional expectation is obtained implicitly, while according to the second approach, the conditional expectation is obtained directly via (5). The conditional expectation for the real valued and noiseless case was obtained in [52] via: where fz|x(z|x) was given by: the convolutional noise power was expressed as σ 2 p , and fx(x), fz(z) were denoted as the source and equalized output pdfs, respectively. The source and equalized output pdfs (fx(x), fz(z)) were approximated in [52] with the maximum entropy density approximation technique: where λk and λ̃k (k = 0, 1, 2, ..., K) are the Lagrange multipliers, fx(x), fz(z) are the approximated probability density functions of the source and equalized output, respectively, and K controls the number of Lagrange multipliers in the maximum entropy density approximation technique (10) and plays a role in how successful that approximation will be. For example, the pdf approximation of a non-Gaussian input sequence with zero mean would be less successful with the choice of K = 2 than with the choice of K > 2. As already mentioned earlier in this paper, [52] assumed that λk = λ̃k. Next, the conditional expectation (8) was extended in [52] to the two independent quadrature carrier case. According to [49], the conditional mean estimate of the complex datum can be obtained from the conditional mean estimates of its real and imaginary parts. Therefore, the real and imaginary parts of the data could be estimated separately on the basis of the real and imaginary parts of the equalized output.
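To make the conditional-expectation nonlinearity concrete, here is a toy numerical example, not the paper's derivation: for a binary source x ∈ {−1, +1} observed as z = x + p, with Gaussian convolutional noise p of variance s2, Bayes' rule yields the well-known estimator E[x|z] = tanh(z/s2).

```python
import numpy as np

# Toy illustration of the Bussgang nonlinearity as a Bayesian
# conditional mean, for a two-point prior x in {-1, +1} and z = x + p
# with Gaussian noise p of (illustrative) variance s2.
s2 = 0.5

def cond_mean(z, s2):
    """E[x | z] via Bayes' rule for the equiprobable two-point prior."""
    lik_p = np.exp(-(z - 1.0) ** 2 / (2 * s2))   # f(z | x = +1)
    lik_m = np.exp(-(z + 1.0) ** 2 / (2 * s2))   # f(z | x = -1)
    return (lik_p - lik_m) / (lik_p + lik_m)

# For this prior the estimator reduces to tanh(z / s2).
z = np.linspace(-3.0, 3.0, 7)
```

The closed-form tanh shape shows how the conditional mean softly "slices" the equalized output, which is the role the nonlinearity plays in the Bussgang scheme.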

The New Lagrange Multipliers
In this section, we derive new Lagrange multipliers related to the input and output pdfs, valid for the real valued and noisy case. Namely, we consider (10), but unlike [52], we assume: where εk will tend to zero only when both the convolutional noise p[n] and the channel noise w[n] tend to zero. Please note that we consider only even values of k, since according to Assumption 1 from the previous section, the input sequence has zero mean. In addition, we consider here K = 4, since this value was used in [52] for approximating the input and equalized output pdfs and has been shown to lead to good equalization performance. Thus, K = 4 may not be the optimal value in the pdf approximation of the 16QAM constellation input used in [52], but for equalization purposes, this value of K was sufficient. Since we compare the equalization performance obtained in this paper with [52], the same number of Lagrange multipliers should be taken in the pdf approximations for a fair comparison. This section is divided as follows: In Subsection 3.1, we derive, for the real valued case, closed-form approximated expressions for ε0, ε2 and ε4 as functions of the convolutional noise power, channel noise power and Lagrange multipliers (λ2 and λ4) related to the input pdf. In Subsection 3.2, we derive, for the real valued case, a closed-form approximated expression for the expected MSE of the system as a function of ε0, ε2, ε4, the convolutional noise power, the channel noise power and the Lagrange multipliers (λ2 and λ4) related to the input pdf. In Subsection 3.3, we substitute the closed-form approximated expressions for ε0, ε2 and ε4 obtained in Subsection 3.1 into the MSE expression from Subsection 3.2, thus obtaining a closed-form approximated expression for the MSE depending only on the convolutional noise power, channel noise power and Lagrange multipliers (λ2 and λ4) related to the input pdf. Next, this new expression for the MSE is minimized in Subsection 3.3 with respect to λ2 and λ4, and newly derived expressions for both λ2 and λ4 are obtained, depending only on the source moments. Please note that in [52], only λk (k = 0, 2, 4) were considered in (10). In addition, no closed-form expression was needed for λ0 in [52], as is also the case in this paper, due to the fact that λ0 cancels in (8) when (10) with (11) is used for approximating the input and output pdfs.
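The maximum entropy density form in (10) can be sketched numerically. Below, the pdf is taken as exp(λ0 + λ2 x² + λ4 x⁴) with λ0 absorbed into a numerical normalization; the multiplier values are illustrative only, not the paper's. Setting λ2 = −1/2 and λ4 = 0 recovers the standard Gaussian, which gives a quick sanity check on the second moment.

```python
import numpy as np

# Sketch of the even-order maximum-entropy density, f(x) proportional
# to exp(lam2 * x**2 + lam4 * x**4); lam0 is absorbed by normalization.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def maxent_pdf(lam2, lam4):
    # lam4 < 0 (or lam4 = 0 with lam2 < 0) keeps the density integrable.
    g = np.exp(lam2 * x ** 2 + lam4 * x ** 4)
    return g / (g.sum() * dx)

# Sanity check: lam2 = -1/2, lam4 = 0 is the standard Gaussian, so the
# second moment of the approximation should be close to 1.
f = maxent_pdf(-0.5, 0.0)
m2 = np.sum(x ** 2 * f) * dx
```

A nonzero λ4 < 0 then bends the tails away from the Gaussian, which is how the extra multiplier captures non-Gaussian sources.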
In this section, we use the following additional assumptions, which were also made in [52]: 1. The convolutional noise p[n] is a zero mean, white Gaussian process with variance σ 2 p . 2. The source signal x[n] is an independent non-Gaussian signal with known variance and higher moments. 3. The convolutional noise p[n] and the source signal are independent. 4. The convolutional noise power σ 2 p is sufficiently low.
Assumptions 1 and 3 were also made in [37,38,49,51]. As was already noted in [20,58], the described model for the convolutional noise p[n] is applicable during the latter stages of the process, where the process is close to optimality [38]. According to [38], in the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the data sequence and the convolutional noise are strongly correlated, and the convolutional noise sequence is more uniform than Gaussian [59]. However, satisfying equalization performance was obtained in [51] and elsewhere [20] in spite of the fact that the described model for the convolutional noise p[n] was used. These results [51], [20] may indicate that the described model for the convolutional noise p[n] can be used (maybe not in the optimum way) also in the early stages, where the eye diagram is still closed.
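Assumption 1 can also be motivated numerically: near convergence, p[n] is a weighted sum of many small residual-ISI contributions, so by the central limit theorem its distribution is close to Gaussian even when the source is far from Gaussian. A minimal sketch, with illustrative residual taps and a uniform source:

```python
import numpy as np

# Near convergence, the convolutional noise p[n] is a weighted sum of
# many (here 16) small residual-ISI terms of the non-Gaussian source.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=300_000)       # uniform source samples
residual = 0.05 * np.ones(16)                  # small residual combined taps
p = np.convolve(x, residual, mode="valid")     # convolutional noise samples

# Excess kurtosis: 0 for a Gaussian, -1.2 for a single uniform term;
# for this 16-term sum it should be close to -1.2/16.
m2 = np.mean(p ** 2)
excess_kurt = np.mean(p ** 4) / m2 ** 2 - 3.0
```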

Closed-Form Approximated Expressions for ε0, ε2 and ε4
In this subsection, we derive closed-form approximated expressions for ε0, ε2 and ε4. Theorem 1. ε0, ε2 and ε4 can be expressed as: where: and: Proof of Theorem 1. For simplicity, we use the following notation. The equalized output sequence (3) can be expressed as: By using Assumption 1 and Assumption 4 from this section and the previous section, respectively, we may conclude that p is a white Gaussian process. Furthermore, based on Assumption 3 from this section, p is independent of the source sequence x. Thus, according to [60], if x and p are independent, the equalized output pdf equals: where fx(x) is the source pdf and fp(z − x) is the pdf of p. Now, by using the maximum entropy density approximation technique (10) for approximating the input pdf and the Gaussian pdf for approximating the pdf of p, (16) can be approximately written as: where: Next, we use Laplace's method [61] for solving the integral in (17). According to [20,61], Laplace's method is a general technique for obtaining the asymptotic behavior, as ρ → 0, of integrals in which the large parameter 1/ρ appears in the exponent. The main idea of Laplace's method is the following: if the real continuous function Ψ(x) attains its minimum at a point x0 lying between minus infinity and infinity, then it is only the immediate neighborhood of x = x0 that contributes to the full asymptotic expansion of the integral for large 1/ρ. Therefore, according to [20,61], we may write: where O(x) is defined by lim x→0 O(x)/x = R, with R a constant. The quantities d²Ψ(x)/dx² and x0 are obtained via: Next, by using (18)-(20), the equalized output pdf (17) can be written as: However, according to (10) and (11), the approximated equalized output pdf can be written as: which, with the help of (12) and (22), is: Next, we use a Taylor expansion [62] up to order three and obtain (25), which can also be written as: When the equalizer has converged, the convolutional noise power is considered to be very small. Thus, by comparing (26) to (21) and neglecting the terms of σ u p where u ≥ 4, we obtain: where, by the use of (27) and [62], (28) can be written as: and, with the help of (21), (26), (27) and [62], we have:
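Laplace's method invoked in the proof can be verified numerically on an illustrative Ψ (not the paper's): for small ρ, the integral of exp(−Ψ(x)/ρ) concentrates around the minimizer x0 of Ψ, and the leading-order approximation is exp(−Ψ(x0)/ρ) √(2πρ/Ψ''(x0)).

```python
import numpy as np

# Illustrative check of Laplace's method with Psi(x) = cosh(x) - 1,
# which has its minimum at x0 = 0 with Psi(x0) = 0 and Psi''(x0) = 1.
rho = 0.01
x = np.linspace(-3.0, 3.0, 600_001)
dx = x[1] - x[0]

brute_force = np.sum(np.exp(-(np.cosh(x) - 1.0) / rho)) * dx
laplace = np.sqrt(2.0 * np.pi * rho)       # leading-order approximation
rel_err = abs(brute_force - laplace) / brute_force
```

For ρ = 0.01 the relative error of the leading-order term is well below one percent, consistent with the asymptotic O(ρ) correction.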

Closed-Form Approximated Expression for the MSE
In this subsection, we derive the MSE, which is defined by: Thus, we first have to derive the relevant expression for the conditional expectation E[x|y] with the new Lagrange multipliers for the input and output pdfs.
Our next step is to substitute (32) into (31) to obtain an MSE expression depending on ε0, ε2, ε4, λ2, λ4 and on σ t p for t ≤ 4.

Closed-Form Approximated Expressions for λ2 and λ4
In this subsection, we derive the Lagrange multipliers related to the input pdf. Namely, we derive closed-form approximated expressions for λ2 and λ4.
Theorem 3. The Lagrange multipliers related to the input pdf can be approximately expressed by: where mG is the G-th moment of the real part of the input sequence. Namely, for the real valued input case, we have: Proof of Theorem 3. Let us first substitute the expressions for ε0, ε2 and ε4 given in (13) into (42). Thus, we obtain a new closed-form approximated expression for the MSE depending only on the source moments, λ2, λ4 and σ r p with r ≥ 2. Please note that we are looking for linear closed-form approximated expressions for λ2 and λ4. Thus, we ignore in the MSE expression all those terms containing products of λ2 and λ4, as well as powers of λ2 or λ4 higher than the first. In addition, we ignore terms of σ u p with u > 4, since the MSE is calculated when the equalizer has converged, where σ 2 p is already considered to be very low, thus making σ u p with u > 4 negligible. Considering all of this, the approximated MSE can be written as: Next, we search for those Lagrange multipliers (λ2, λ4) that bring the MSE (45) to its minimum. Thus, we have: Solving (46) for λ2 and λ4 leads to (43).
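The source moments entering the closed-form expressions (43) are straightforward to evaluate; for instance, for the real part of the 16QAM input used in the simulation section (equiprobable levels ±1, ±3):

```python
import numpy as np

# G-th moments m_G of the real part of a 16QAM source with
# equiprobable levels -3, -1, +1, +3.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
m2 = np.mean(levels ** 2)   # (9 + 1 + 1 + 9) / 4 = 5
m4 = np.mean(levels ** 4)   # (81 + 1 + 1 + 81) / 4 = 41
m6 = np.mean(levels ** 6)   # (729 + 1 + 1 + 729) / 4 = 365
```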

Simulation
In this section, we show the usefulness of our newly derived Lagrange multipliers (43) compared to those derived in [52]. Namely, we show via simulation results the robustness to SNR of our new blind adaptive equalization method based on our newly derived Lagrange multipliers (43) compared to [52]. In addition, we also add Godard's algorithm [39] and the maximum entropy algorithm [20] for comparison. Please note that Godard's algorithm [39] is one of the most popular, computationally simple, tested and best performing blind equalization algorithms in the signal processing domain, according to [28]. In addition, please note that according to [20], the maximum entropy algorithm [20] has better equalization performance compared to Godard's algorithm [39] for the high SNR case. The equalizer's taps for [20] were updated according to: with: where µ ENT is a positive step size parameter and: where: and σ 2 x 1 , σ 2 x 2 are the variances of the real and imaginary parts of the source signal, respectively. The variances of the real and imaginary parts of the equalized output are defined as σ 2 z 1 and σ 2 z 2 , respectively, and are estimated by [20]: where the hat stands for the estimated expectation, the recursion is initialized with a positive value, l stands for the l-th tap of the equalizer and β ENT is a positive step size parameter. The Lagrange multipliers λ s k from (50) are, according to [20]: where m s G are the G-th moments of the real and imaginary parts of the source signal, respectively, defined by: According to [20], the equalizer's taps are updated only if N s > ε, where ε is a small positive parameter and N s = 1 + ĝ(z 1 ). The equalizer's taps for Godard's algorithm [39] were updated according to: where µ G is a positive step size parameter. The equalizer's taps for [52] were updated according to: with W given in (48), but with: where s = 1, 2 and ḡ(z s ) with λ s k for k = 2, 4 is given according to [52], and: β NEW and µ NEW are positive step size parameters. The equalizer's taps of our newly derived blind adaptive equalizer with our
newly derived Lagrange multipliers (43) were updated according to: with W given in (48), but with: where: and ε0, ε2, ε4 and λ2, λ4 are given in (13) and (43), respectively. β ANEW and µ ANEW are positive step size parameters. In the following, we denote by "MaxEnt", "MaxEnt NEW", "MaxEnt ANEW" and "Godard" the algorithms given in [20], [52], (60) and [39], respectively. For the "MaxEnt", "MaxEnt NEW" and "MaxEnt ANEW" algorithms, we used [...] for initialization. The following channel was considered: Channel 1 (initial ISI = 0.44), taken according to [1]: h[n] = 0 for n < 0; −0.4 for n = 0; 0.84 × 0.4^(n−1) for n > 0.
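For reference, the Godard (p = 2, constant modulus) baseline used in the comparisons can be sketched as follows; the step size, data length and noiseless setting are illustrative choices, and the dispersion constant for 16QAM is R2 = E|x|⁴/E|x|² = 13.2.

```python
import numpy as np

# Minimal sketch of Godard's (constant modulus, p = 2) tap update on
# Channel 1; step size and data length are illustrative.
rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
x = rng.choice(levels, 4000) + 1j * rng.choice(levels, 4000)   # 16QAM
R2 = np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2)         # about 13.2

h = np.concatenate(([-0.4], 0.84 * 0.4 ** np.arange(0, 12)))   # Channel 1
y = np.convolve(x, h)[: len(x)]                                # noiseless rx

N = 13
c = np.zeros(N, dtype=complex)
c[N // 2] = 1.0                        # center-tap initialization
mu = 1e-5                              # illustrative step size
for n in range(N, len(y)):
    yv = y[n - N:n][::-1]              # regressor, most recent sample first
    z = c @ yv                         # equalized output
    c = c - mu * (np.abs(z) ** 2 - R2) * z * np.conj(yv)       # CMA step
```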
We used an equalizer with 13 taps. The equalizer was initialized by setting the center tap equal to one and all others to zero. Two input sources were considered: a 16QAM source (a modulation using ± {1,3} levels for the in-phase and quadrature components) and another complex input where the real and imaginary parts of the input are random independent processes uniformly distributed within [−1, +1]. Figure 2 shows the equalization performance of our new proposed equalization method ("MaxEnt ANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for the high SNR case (SNR = 30 dB), compared to the equalization performance obtained from the maximum entropy [20,52] and Godard's [39] algorithms. Please note that for Figure 2, the step size parameters of the algorithms were chosen for fast convergence with low steady-state ISI. According to the simulation results (Figure 2), our new proposed algorithm ("MaxEnt ANEW" (60)) has better equalization performance, namely a much lower residual ISI, compared to Godard's algorithm [39], but at the same time, leaves the system with a higher residual ISI compared to the maximum entropy algorithm [20] and the algorithm given in [52]. Figure 3 shows the equalization performance of our new proposed equalization method ("MaxEnt ANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 15 dB, compared to the equalization performance obtained from [52] and Godard's [39] algorithm. Please note that the maximum entropy algorithm [20] is applicable only for the high SNR case. Therefore, we ignore this algorithm [20] for the lower SNR cases. According to Figure 3, the "MaxEnt NEW" algorithm [52] does not converge with the step size parameters used for the case of SNR = 30 dB (Figure 2). However, unlike the "MaxEnt NEW" algorithm [52], our new proposed algorithm ("MaxEnt ANEW" (60)) and "Godard" [39] work very well according to Figure
3. In addition, based on Figure 3, our new proposed algorithm ("MaxEnt ANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 8 dB compared to Godard's method [39]. It is worth noting that if the algorithm does not diverge within 50 trials, this does not mean that the algorithm will not diverge within 100 or 200 trials, for example. Now, by decreasing the step size parameter (β NEW) for the "MaxEnt NEW" algorithm [52], the algorithm converges, as shown in Figure 4. However, as can be seen from Figure 4, our new proposed algorithm ("MaxEnt ANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 8 dB and 7 dB compared to Godard's method [39] and the "MaxEnt NEW" algorithm [52], respectively. Figures 5-7 show the equalization performance of our new proposed equalization method ("MaxEnt ANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 10 dB, compared to the equalization performance obtained from [52] and Godard's [39] algorithm. According to Figures 5-7, the "MaxEnt NEW" algorithm [52] does not converge, despite the fact that the step size parameters were decreased for this algorithm. Please note that the other algorithms ("MaxEnt ANEW" and "Godard") continued working well with the original step size parameters from the case of SNR = 30 dB. Thus, we have seen so far from Figures 2-7 the robustness to SNR of our new proposed algorithm ("MaxEnt ANEW" (60)): once the step size is optimized for the very high SNR case, no changes are needed in the step size parameters for the lower SNR cases. Figures 8 and 9 show the equalization performance of our new proposed equalization method ("MaxEnt ANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 10 dB and SNR = 7 dB, respectively, compared to the equalization performance obtained with Godard's [39] algorithm. According
to Figures 8 and 9, our new proposed algorithm ("MaxEnt ANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 6 dB compared to Godard's method [39] for both cases (SNR = 10 dB and SNR = 7 dB). Now, Godard's method [39] can achieve better equalization performance for SNR = 7 dB if the step size parameter is changed, as is shown in Figure 10. However, even then (Figure 10), Godard's method [39] does not achieve better equalization performance compared to the "MaxEnt ANEW" algorithm. Finally, we turn to check our new proposed algorithm ("MaxEnt ANEW" (60)) with a different input sequence. Figures 11-13 show the equalization performance of our new proposed equalization method ("MaxEnt ANEW" (60)), namely the ISI as a function of the iteration number for a uniformly-distributed input sequence within [−1, +1] sent via Channel 1 for SNR = 30 dB, SNR = 15 dB and SNR = 10 dB, respectively, compared to the equalization performance obtained with Godard's [39] algorithm. According to Figures 11-13, our new proposed algorithm ("MaxEnt ANEW" (60)) has better equalization performance, from the residual ISI point of view, compared to Godard's algorithm [39] (Figures 11 and 12) or has the same equalization performance as "Godard" [39] (Figure 13).
In blind equalization, the desired signal is unknown to the receiver, except for its probabilistic or statistical properties over some known alphabets. As both the channel and its input are unknown, the objective of blind equalization is to recover the unknown input sequence based solely on its probabilistic and statistical properties; see [2]. In this paper, we used the maximum entropy density approximation technique for approximating the input and output pdfs. Thus, the input pdf was a function of the source signal and of some Lagrange multipliers, while the output pdf was a function of the equalized output sequence and of some Lagrange multipliers that were different from those used for the input pdf. The Lagrange multipliers used for the input pdf depended only on the source moments. Namely, the Lagrange multipliers used for the input pdf are based solely on the input sequence's statistical properties. The Lagrange multipliers used for the output pdf depended on the Lagrange multipliers related to the input pdf, on the convolutional noise power and on the channel noise power seen at the equalized output. Obviously, if there is no channel noise and the equalizer has succeeded in reducing the convolutional noise to zero (which means that there is no residual ISI), the Lagrange multipliers related to the input sequence will be the same as the Lagrange multipliers related to the equalized output sequence. The closed-form approximated expression for the conditional expectation was obtained by using the above-mentioned approximations for the input and output pdfs. It depends on the input sequence's statistical properties (on the input variance and some higher moments), which are also needed, for example, in Godard's algorithm [39]. The newly derived equalizer based on the newly derived expression for the conditional expectation does not need to know whether the sent symbol was −1 + 3j or 1 − 3j, for example, as is needed in the non-blind case; nor does it need to know, for example, whether the input sequence belongs to a 16QAM constellation or to a uniformly-distributed input within [−1, +1]. However, the input sequence's statistical properties need to be known, as is also the case in Godard's algorithm [39]. This means that when the input sequence changes from a 16QAM constellation to a uniformly-distributed input within [−1, +1], the algorithm needs the new input sequence's statistical properties, which, again, is also the case in Godard's algorithm [39]. As already mentioned above, the input pdf was derived as a function of the source signal and as a function of some Lagrange multipliers that depend on the input sequence's statistical properties (on the input variance and some higher moments). Thus, for each input sequence with different statistical properties, we obtain different Lagrange multipliers and, therefore, also a different input pdf. Thus, we have not considered in this work a specific input pdf, but rather a successful approximation for the input pdf that can be applied to a very wide range of input signals with different statistical properties (thus having different input pdfs). In this work, we considered only the following Lagrange multipliers for the input pdf: λk for k = 0, 2, 4.
It is reasonable to think that the use of only three Lagrange multipliers (λk for k = 0, 2, 4) in the approximation of the input pdf may approximate the true input pdf less successfully for some input signals than for others. This may explain the simulation results we have seen for the 16QAM input constellation and for the uniformly-distributed input within [−1, +1]. It is also reasonable to think that the use of four Lagrange multipliers (λk for k = 0, 2, 4, 6) or more, instead of only three, would have led to a more successful approximation of the input pdf related to the uniformly-distributed input sequence within [−1, +1] and, thus, to further improved equalization performance. However, using more Lagrange multipliers increases the computational complexity of the algorithm, which is already much higher compared to Godard's algorithm [39].

Conclusions
In this paper, we derived new approximated closed-form expressions for the Lagrange multipliers (λ2, λ4, λ̃2, λ̃4) related to the input and output pdfs. In addition, we obtained new closed-form approximated expressions for the conditional expectation and MSE, inspired by the maximum entropy density approximation technique. Based on the newly derived expression for the conditional expectation, a new blind adaptive equalization method was obtained that is robust to SNR and is applicable for the whole range of SNR down to 7 dB. Simulation results have shown that our newly obtained equalization method also has significant equalization performance improvement, from the residual ISI point of view, compared to Godard's algorithm [39], the maximum entropy method [20] and [52] for 7 dB ≤ SNR ≤ 15 dB.

Figure 2. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained over 50 Monte Carlo trials for SNR = 30 dB.
where x1[n] and x2[n] are the real and imaginary parts of x[n], respectively. We assume that x1[n] and x2[n] are independent.