A New Efficient Expression for the Conditional Expectation of the Blind Adaptive Deconvolution Problem Valid for the Entire Range of Signal-to-Noise Ratio

In the literature, we can find several blind adaptive deconvolution algorithms based on closed-form approximated expressions for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output) that involve the maximum entropy density approximation technique. The main drawback of these algorithms is the heavy computational burden involved in calculating the expression for the conditional expectation. In addition, none of these techniques is applicable for signal-to-noise ratios lower than 7 dB. In this paper, I propose a new closed-form approximated expression for the conditional expectation based on a previously obtained expression in which the equalized output probability density function is calculated via the input probability density function, which itself is approximated with the maximum entropy density approximation technique. This newly proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation based on the maximum entropy approximation technique. The simulation results indicate that the newly proposed algorithm, with the newly proposed Lagrange multipliers, is suitable for signal-to-noise ratio values down to 0 dB and has an improved equalization performance, from the residual inter-symbol-interference point of view, compared with the previously obtained algorithms based on the conditional expectation obtained via the maximum entropy technique.


Introduction
In this paper, the blind adaptive deconvolution problem is addressed, which arises in many applications such as seismology, underwater acoustics, image restoration, and digital communication. In digital communication applications, the problem is often called blind adaptive equalization. Non-blind adaptive equalizers, unlike blind adaptive equalizers, require training symbols to generate the error that is fed into the adaptive mechanism which updates the equalizer's taps. Therefore, blind adaptive equalizers have some important advantages compared with the non-blind version: (1) simplified protocols in point-to-point communications, avoiding the retransmission of training symbols after abrupt changes of the channel; (2) higher bandwidth efficiency in broadcast networks; (3) reduced interoperability problems derived from the use of different training symbols. It is well known that inter-symbol interference (ISI) is a limiting factor in many communication environments, where it causes an irreducible degradation of the bit error rate, thus imposing an upper limit on the data symbol rate [36]. In order to overcome the ISI problem, an equalizer is implemented in those systems [34][35][36][37][38][39][40][41]. In this work, the T-spaced blind adaptive equalizer is considered for the single-input-single-output (SISO) case, where the sampling rate is equal to the symbol rate (thus referred to as T-spaced, where T denotes the baud, or symbol, duration). Please note that a fractionally-spaced equalizer (FSE), where the sampling rate is higher than the symbol rate, can be modeled with a single-input-multiple-output (SIMO) system. In addition, a SIMO system can be modeled with a parallel combination of T-spaced blind adaptive equalizers. Thus, improving the equalization performance of a T-spaced blind adaptive equalizer for the SISO case may also lead to an equalization performance improvement for the SIMO case. As already mentioned above, blind adaptive equalizers do not use any training symbols to generate the error that is fed into the adaptive mechanism which updates the equalizer's taps. Instead of those training symbols, an estimate of the desired response is obtained by applying a nonlinear transformation to sequences involved in the adaptation process. Very often, blind adaptive equalization algorithms are classified according to the location of their nonlinearity in the algorithm chain [42]. According to Reference [42], there are three different types: (1) polyspectral algorithms, (2) Bussgang-type algorithms, and (3) probabilistic algorithms. In the first type, the nonlinearity is located at the output of the channel, right before the equalizer. Thus, the nonlinearity actually estimates the channel, and this estimate is fed into the adaptive mechanism which updates the equalizer's taps. In the second type, the nonlinearity is situated at the output of the equalizer. Here, the nonlinearity can simply be the estimation of the source signal via the use of the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), or the nonlinearity can be a predefined cost function that holds some information about the ISI. Thus, minimizing this predefined cost function with respect to the equalizer's taps may lower the residual ISI and help in the symbol recovery process.
Since Bussgang-type algorithms often have shorter convergence times than polyspectral methods, which need larger amounts of data for an equivalent estimation variance, they are more popular [42]. In the third class of algorithms, directly locating the nonlinearity is more problematic compared to the first two groups, since the nonlinearity is combined with the data detection process. While these algorithms can extract considerable information from relatively little data, this is often accomplished at a huge computational cost [42]. In the following, the Bussgang-type blind equalization algorithms are considered, where the conditional expectation (the expectation of the source input given the equalized or deconvolutional output) is derived for estimating the desired response. In the literature, we can find several approximated expressions for the conditional expectation related to the blind adaptive deconvolutional problem [20,[43][44][45][46][47][48][49]. However, References [43][44][45][46] are valid only for a uniformly distributed source input, and References [20,47,48] were designed only for the noiseless case. Recently [49], a new blind adaptive equalization method was proposed based on Reference [47] that is applicable for signal-to-noise ratio (SNR) values down to 7 dB. However, the computational burden of the method in Reference [49] is relatively high. The closed-form approximated expression proposed in Reference [49] for the conditional expectation with Lagrange multipliers up to order four (thus applicable for the 16QAM input case) is composed of a polynomial function of the equalized output of order twenty-one. Please note that the proposed expression for the conditional expectation with Lagrange multipliers up to order four in Reference [47] is a polynomial function of the equalized output of order thirteen, while the proposed expression for the conditional expectation with Lagrange multipliers up to order four in Reference [20] is a fraction where the numerator and the denominator are polynomial functions of the equalized output of order thirteen and twelve, respectively.
In this paper, I propose a new closed-form approximated expression for the conditional expectation based on Reference [20] with Lagrange multipliers up to order four. This new proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation proposed in References [20,47,49]. The new proposed expression for the conditional expectation is composed of a fraction where the numerator and the denominator are polynomial functions of the equalized output of order seven and six, respectively. Simulation results show that the new proposed algorithm, with the new proposed Lagrange multipliers up to order four, is suitable for SNR values down to 0 dB and has an improved equalization performance from the residual ISI point of view compared with Reference [49].
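For a rough sense of the computational saving, assuming each polynomial is evaluated with Horner's rule, an order twenty-one polynomial costs 21 multiplications and 21 additions per equalized output sample, whereas an order-seven numerator together with an order-six denominator costs 13 multiplications, 13 additions, and one division per sample.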

I consider the system from References [20,47,49], illustrated in Figure 1, where I make the following assumptions, as was done in References [20,47,49]:

1. The source signal x[n] is a real valued signal, or its real and imaginary parts are independent (as is the case for the 16QAM constellation considered in the simulations).
2. The channel noise w[n] = w_r[n] + j w_i[n] is an additive Gaussian noise, where σ²_wr and σ²_wi are the variances of the real and imaginary parts of w[n], respectively.
3. The function T[·] is a memoryless nonlinear function that satisfies the additivity condition.

Figure 1. Block diagram of the system.

As was described in References [20,47,49], the source input x[n] is sent via the channel h[n] and is corrupted with the channel noise w[n]. The ideal equalized output is expressed in Reference [50] as z[n] = x[n − D]e^{jθ}, where D is a constant delay and θ is a constant phase shift. Therefore, in the ideal case, we could write h[n] ∗ c[n] = δ[n − D]e^{jθ}, where "∗" denotes the convolution operation, c[n] is the equalizer, and δ is the Kronecker delta function. In this paper, I assume, as was also done in References [20,47,49], that D and θ are equal to zero (please refer to Reference [49] for the explanation). The equalized output is given by z[n] = x[n] + p̃[n] + w̃[n], where p̃[n] is the convolutional noise and w̃[n] = w[n] ∗ c[n] is the channel noise passed through the equalizer. In this paper, the ISI is used to measure the performance of the deconvolution process, defined by

ISI = (∑_m |s̃[m]|² − |s̃|²_max) / |s̃|²_max,

where s̃[n] = h[n] ∗ c[n] is the combined channel-equalizer impulse response given in (9), and |s̃|_max is the component of s̃ having the maximal absolute value.
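To make the ISI measure concrete, the following short Python sketch computes the residual ISI (in dB) from the combined channel-equalizer response s̃ = h ∗ c. The channel and equalizer taps used here are hypothetical placeholders, not the channels used in the simulations of this paper.

```python
import numpy as np

def residual_isi_db(h, c):
    """Residual ISI of the combined response s_tilde = h * c, defined as
    (sum_m |s_tilde[m]|^2 - |s_tilde|_max^2) / |s_tilde|_max^2, in dB."""
    s = np.convolve(h, c)                  # combined channel-equalizer impulse response
    p = np.abs(s) ** 2
    isi = (p.sum() - p.max()) / p.max()    # energy outside the strongest tap
    return 10.0 * np.log10(isi)

# Hypothetical example: a short channel and a rough inverse used as the equalizer.
h = np.array([1.0, 0.4, 0.2])
c = np.array([1.0, -0.4, -0.04])
print(residual_isi_db(h, c))
```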
The error between the conditional expectation (the estimated source symbol) and the equalized output plays an important role in updating the equalizer's taps via a stochastic-gradient update of the form c[n + 1] = c[n] + µ e[n] y*[n], where (·)* is the conjugate operation on (·), µ is the step size parameter, c[n] is the equalizer tap vector, and the input vector is y[n] = [y[n], y[n − 1], ..., y[n − N + 1]]^T. The operator (·)^T denotes the transpose, and N is the equalizer's tap length. Another way to update the equalizer's taps is to use the cost function approach [51], in which a cost function is minimized with respect to the equalizer's taps, where f_{z|x}(z|x) was given in Reference [20] and the source probability density function (pdf) f_x(x) was approximated by the maximum entropy density approximation technique:

f̂_x(x) = exp( ∑_{k=0}^{K} λ̃_k x^k ),

where f̂_x(x) is the approximated probability density function and λ̃_k (k = 0, 1, 2, ..., K) are the Lagrange multipliers. In the following, for simplicity, I write x and z instead of x[n] and z[n], respectively. By using (13)-(15), the conditional expectation obtained by Reference [20] for the real valued and noiseless case is given in (16) and (17). Please note that for λ̃_k up to order four, ĝ(z), ĝ₁(z), ĝ⁽⁴⁾(z), and ĝ₁⁽⁴⁾(z) are polynomial functions of order seven, six, thirteen, and twelve, respectively. According to Reference [20], the Lagrange multipliers were obtained by minimizing the approximated mean square error (MSE) derived in Reference [20] with respect to the Lagrange multipliers. Namely, the Lagrange multipliers were obtained from (19) by setting the derivative of the MSE with respect to each Lagrange multiplier to zero, where x̂ is the conditional expectation and E[(x̂ − x)²] is the MSE, given in Reference [20]. For the Lagrange multipliers λ̃_k up to order four, based on (19), we obtain in (21) a pair of equations for λ̃₂ and λ̃₄ in terms of the even moments m_k.
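Since the Lagrange multiplier equations are driven by the even moments of the (real part of the) source, the following sketch computes m₂, m₄, and m₆ numerically for a 16QAM constellation. The unit-average-power normalization used here is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

# The real (or imaginary) part of a 16QAM symbol takes the values {-3, -1, 1, 3}*d
# with equal probability; d is chosen here so that the complex constellation has
# unit average power (an assumed normalization for illustration only).
d = np.sqrt(1.0 / 10.0)                 # E|x|^2 = 2 * E[x_r^2] = 2 * 5 * d^2 = 1
levels = d * np.array([-3.0, -1.0, 1.0, 3.0])

m2 = np.mean(levels ** 2)               # second moment of the real part
m4 = np.mean(levels ** 4)               # fourth moment
m6 = np.mean(levels ** 6)               # sixth moment
print(m2, m4, m6)                       # approx. 0.5, 0.41, 0.365 for this normalization
```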

The New Proposed Expression for the Conditional Expectation
In this section, I present my newly proposed approximated closed-form expression for the conditional expectation based on Reference [20]. In the following, I adopt the assumptions made in References [20,47,49]:

1. The convolutional noise p̃[n] is a zero mean, white Gaussian process with variance σ²_p.
2. The source signal x[n] is an independent non-Gaussian signal with known variance and higher moments.
3. The convolutional noise p̃[n] and the source signal are independent.
4. The convolutional noise power σ²_p is sufficiently low.
For justification of the above assumptions, please refer to Reference [49]. In the following, I first consider the real valued case and then turn back to the case where the real and imaginary parts of the input signal are independent (as is the case for the 16QAM source input). According to (19) and (21), we may see that the obtained Lagrange multipliers depend only on the second leading term associated with the denominator of (16). This may imply that we can use a truncated version of (16) for the approximated conditional expectation expression, where the computational burden is automatically reduced compared to (16).

Proposition 1. In this paper, I propose for the real valued and noisy case the expression (24) for the conditional expectation with Lagrange multipliers up to order four, where the Lagrange multipliers are given in (26) as functions of the moments m̃₂, m̃₄, and m̃₆ defined in (33), R is the channel tap length, h_k is the k-th tap of the channel h[n], and the SNR is defined in (27).

Proof of the proposed Lagrange multipliers given in (26). According to Reference [20] (Appendix B), the approximated MSE for the noisy case, valid at the latter stages of the deconvolutional process, may be given as in (28), where x̃ = x + w̃ (29). Thus, according to (19), we obtain in (30) the equations for the Lagrange multipliers. The solution of (30) for λ₂ and λ₄ as a function of m̃₂, m̃₄, and m̃₆ is given in (26). Next, I find closed-form approximated expressions for m̃₂, m̃₄, and m̃₆. When the deconvolutional process has converged and leaves the system with a convolutional noise that can be considered as very low, we may write (32), according to Reference [52]. Since x and w are independent, by using (32) and the expression for the SNR (27), we obtain (33). This completes the proof.
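As a hedged illustration of the kind of moment relations used in (33), the sketch below computes the even moments of x̃ = x + w̃ from the source moments and a given noise variance, using the standard even moments of a zero mean Gaussian (σ², 3σ⁴, 15σ⁶). The exact dependence on the SNR and the channel power appearing in (27) and (33) is not reproduced here; the noise variance is simply an assumed input.

```python
def noisy_even_moments(m2, m4, m6, sigma2):
    """Even moments of x + w for independent x and a zero mean Gaussian w with
    variance sigma2, via the binomial expansion (odd moments of w vanish)."""
    n2 = m2 + sigma2
    n4 = m4 + 6.0 * m2 * sigma2 + 3.0 * sigma2 ** 2
    n6 = m6 + 15.0 * m4 * sigma2 + 45.0 * m2 * sigma2 ** 2 + 15.0 * sigma2 ** 3
    return n2, n4, n6

# Example with the 16QAM real-part moments computed earlier and an assumed noise variance.
print(noisy_even_moments(0.5, 0.41, 0.365, sigma2=0.05))
```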
Next, I turn to the 16QAM source input. For this case, according to Reference [44], the real and imaginary parts of the equalized output may be treated separately, so that the conditional expectation is obtained by applying the real valued expression (24) to the real and imaginary parts of z[n].
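To illustrate how a truncated estimator of this structure is applied in the 16QAM case, the following sketch evaluates a generic order-seven over order-six ratio of polynomials (via Horner's rule) separately on the real and imaginary parts of the equalized output. The coefficient arrays are hypothetical placeholders, not the actual coefficients of (24) or (26).

```python
import numpy as np

def eval_poly(coeffs, u):
    """Horner evaluation; coeffs[k] multiplies u**k."""
    result = np.zeros_like(u)
    for c in reversed(coeffs):
        result = result * u + c
    return result

def conditional_expectation_16qam(z, num_coeffs, den_coeffs):
    """Apply a real valued ratio-of-polynomials estimator separately to the real
    and imaginary parts of the equalized output z (independent quadrature parts)."""
    def t(u):
        return eval_poly(num_coeffs, u) / eval_poly(den_coeffs, u)
    return t(z.real) + 1j * t(z.imag)

# Hypothetical coefficients: numerator of order seven, denominator of order six.
num = np.array([0.0, 1.1, 0.0, -0.05, 0.0, 0.002, 0.0, -1e-4])   # 8 coefficients
den = np.array([1.0, 0.0, 0.03, 0.0, -1e-3, 0.0, 2e-5])          # 7 coefficients
z = np.array([0.9 + 3.1j, -1.2 - 0.8j])
print(conditional_expectation_16qam(z, num, den))
```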

Simulation
In this section, I show via simulation results the efficiency of the truncated expression for the conditional expectation (24) combined with the Lagrange multipliers given in (26). Namely, I show via simulation results the equalization performance, from the residual ISI point of view, of the new proposed algorithm (with (24) and (26)) compared to the simulation results obtained by the maximum entropy [49] method and Godard's [53] algorithm. Godard's [53] algorithm is used for comparison since it is a very efficient algorithm from the equalization performance point of view [28]. In addition, its computational burden is very low [28]. For Godard's algorithm [53], the equalizer's taps were updated according to Reference [53], where µ_G is a positive step size parameter and l stands for the l-th tap of the equalizer.

The update mechanism of the equalizer's taps associated with the recently obtained maximum entropy algorithm [49] was taken from Reference [49], with W given in (37). According to Reference [49], the involved expectations are replaced by their estimated values, z²_{s0} > 0, and β_ANEW and µ_ANEW are positive step size parameters. The parameters ε_{s0}, ε_{s2}, ε_{s4}, λ₂, and λ₄ were set according to Reference [49]. In order to get an equalization gain of one, the gain control given in (45) was used according to Reference [49], where c_l[n] is the vector of taps after the n-th iteration and c_l[0] is some reasonable initial guess.

The update mechanism of the equalizer's taps associated with the new proposed blind equalization method, involving the truncated version of the conditional expectation and the newly derived Lagrange multipliers, is given in (46), with W given in (37) but with the newly proposed expression for the conditional expectation and the newly derived Lagrange multipliers, where σ²_{z_s} was estimated during the deconvolution process, z²_{s0} > 0, and β_BNEW and µ_BNEW are positive step size parameters. The equalizer's taps in (46) were updated only if N_s > ε, where ε is a small positive parameter, and I also used here a gain control according to (45). In the following, I denote "MaxEnt_ANEW", "Godard", and "MaxEnt_BNEW" as the algorithms given in References [49,53] and (46), respectively. The same initialization was used for the "MaxEnt_ANEW" and "MaxEnt_BNEW" algorithms.
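For readers unfamiliar with Godard's algorithm, the following sketch shows one tap update of the standard Godard (p = 2) form, which I assume matches the update used in [53]; the step size, the constant R₂, and the data are placeholders (R₂ = E|x|⁴/E|x|² ≈ 1.32 for a unit-power 16QAM constellation), and the exact values used in this paper's simulations are not asserted here.

```python
import numpy as np

def godard_update(c, y_vec, mu_g, r2):
    """One standard Godard (p = 2) tap update (assumed form):
    c_l[n+1] = c_l[n] - mu_g * z (|z|^2 - R2) y*[n-l]."""
    z = np.dot(c, y_vec)                      # equalized output sample
    e = z * (np.abs(z) ** 2 - r2)             # Godard error term
    return c - mu_g * e * np.conj(y_vec)

# Illustrative call with random data; all parameter values are placeholders.
rng = np.random.default_rng(0)
c = np.zeros(13, dtype=complex)
c[6] = 1.0                                    # center-tap initialization
y_vec = rng.normal(size=13) + 1j * rng.normal(size=13)
c = godard_update(c, y_vec, mu_g=2.5e-5, r2=1.32)
```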
In my simulation, the equalizer's length was set to 13 taps. For initialization purposes, the center tap of the equalizer was set to one while all the others were set to zero. As a source input, I used the 16QAM constellation. The equalization performance comparison between the new proposed equalization method ("MaxEnt_BNEW" (46)), the maximum entropy [49] algorithm, and Godard's [53] algorithm is given in Figures 2-4. The equalization performance comparison was carried out for a 16QAM constellation input sent via Channel 1 with SNR values of 10 dB, 7 dB, and 0 dB, respectively. It should be pointed out that the results in Figures 2 and 3 for "MaxEnt_ANEW" and "Godard" were reproduced from Reference [49]. In addition, please note that according to Reference [49], "MaxEnt_ANEW" is not applicable for SNR = 0 dB. According to Figures 2-4, the new proposed algorithm ("MaxEnt_BNEW" (46)) has a better equalization performance from the residual ISI point of view compared to "MaxEnt_ANEW" and "Godard". Next, I tested the proposed equalization method ("MaxEnt_BNEW" (46)) with Channel 2 and Channel 3, for which ∑_k |h_k|² ≠ 1. Figures 5 and 6 show the equalization performance comparison between the new proposed equalization method ("MaxEnt_BNEW" (46)) and Godard's [53] algorithm for the 16QAM constellation input sent via Channel 2 and Channel 3, respectively, with SNR = 7 dB. According to Figures 5 and 6, the new proposed algorithm ("MaxEnt_BNEW" (46)) has a better equalization performance from the residual ISI point of view compared to "Godard". As a matter of fact, for the case of ∑_k |h_k|² > 1 (Channel 2), the improvement in the residual ISI compared to the results obtained by "Godard" is approximately 5 dB, while the improvement in the residual ISI compared to the results obtained by "Godard" for ∑_k |h_k|² < 1 (Channel 3) is only approximately 2 dB. Thus, we may say that the proposed algorithm ("MaxEnt_BNEW" (46)) has a promising equalization performance from the residual ISI point of view for channels with ∑_k |h_k|² ≥ 1.
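For reproducibility of the setup described above, the sketch below shows one way to generate a 16QAM sequence, pass it through a channel, and add complex Gaussian noise scaled for a target SNR. The channel taps are hypothetical (not Channel 1, 2, or 3 from this paper), and the SNR definition used here (channel-output power over total noise power) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# 16QAM source with unit average power (assumed normalization).
levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)
x = rng.choice(levels, 10_000) + 1j * rng.choice(levels, 10_000)

# Hypothetical channel and its output.
h = np.array([0.8, 0.4 + 0.2j, 0.1])
s = np.convolve(x, h)[: len(x)]

# Additive white Gaussian noise scaled for a target SNR (assumed definition).
snr_db = 7.0
noise_var = np.mean(np.abs(s) ** 2) / 10.0 ** (snr_db / 10.0)
w = np.sqrt(noise_var / 2.0) * (rng.normal(size=len(s)) + 1j * rng.normal(size=len(s)))
y = s + w

# 13-tap equalizer initialized with a unit center tap, as in the simulations.
c = np.zeros(13, dtype=complex)
c[6] = 1.0
```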
Up to now, I have assumed that the SNR as well as the channel power are known. Thus, we could calculate the required Lagrange multipliers via (48). When the SNR and the channel power are unknown, the moments m₂, m₄, and m₆ cannot be calculated anymore via (48). However, on the basis of (33), we may calculate the required m₂, m₄, and m₆ needed for the Lagrange multipliers in (48) from (53), namely m₂ = m̃₂ + σ²_wr and m₄ = 3σ⁴_wr + 6m̃₂σ²_wr + m̃₄, where σ²_wr is the variance of the real part of w[n] and may be estimated via (54). Please note that for the ideal case, when the equalizer has converged, the convolutional noise power tends to zero. Thus, this makes (54) reasonable. In the following, I use (53) and (54) for calculating the Lagrange multipliers related to the "MaxEnt_BNEW" algorithm. Figures 7 and 8 show the equalization performance of the new proposed equalization method ("MaxEnt_BNEW" (46) with (53) and (54)), namely the ISI as a function of the iteration number, for the 16QAM constellation input sent via Channel 1 for SNR values of 10 dB and 7 dB, respectively, compared to the equalization performance obtained from the maximum entropy [49] and Godard's [53] algorithms. Please note that here the results for "MaxEnt_ANEW" and "Godard" were also reproduced from Reference [49]. According to Figures 7 and 8, the new proposed algorithm ("MaxEnt_BNEW" (46) with (53) and (54)) has a better equalization performance from the residual ISI point of view compared to the maximum entropy [49] and Godard's [53] algorithms, even when the SNR and the channel power are unknown. The following parameter values were used in the simulations: µ_G = 2.5 × 10⁻⁵, µ_ANEW = 8 × 10⁻⁵, β_ANEW = 1 × 10⁻⁵, µ_BNEW = 6 × 10⁻⁵, β_BNEW = 6 × 10⁻⁶, and ε = 0.75.

Conclusions
In this paper, I proposed a new closed-form approximated expression for the conditional expectation (with Lagrange multipliers up to order four), which is actually a truncated version of the expression obtained in Reference [20]. This new proposed expression has a reduced computational burden compared with the previously obtained expressions for the conditional expectation proposed in References [20,47,49]. In addition, I derived new approximated closed-form expressions for the Lagrange multipliers (λ₂, λ₄). Simulation results have shown that my newly proposed equalization algorithm, with the newly proposed expression for the conditional expectation and Lagrange multipliers up to order four, is applicable for SNR values down to 0 dB and has an improved equalization performance from the residual ISI point of view compared with References [49,53].
Funding: This research received no external funding.