Article

New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

Department of Electrical and Electronic Engineering, Ariel University, Ariel 40700, Israel
Entropy 2016, 18(3), 65; https://doi.org/10.3390/e18030065
Submission received: 6 September 2015 / Revised: 31 December 2015 / Accepted: 18 February 2016 / Published: 23 February 2016
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output) where the output and input probability density function (pdf) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable for the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable over the whole SNR range down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).

1. Introduction

In this paper, we consider a blind adaptive deconvolution problem in which we observe the output of an unknown, possibly non-minimum phase, linear system from which we want to recover its input using an adjustable linear filter (equalizer) [1]. A blind deconvolution process arises in many applications, such as seismology, underwater acoustics, image restoration and digital communication [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28].
In this paper, we consider the application in digital communications where the received symbol sequence has been affected by intersymbol interference (ISI), whereby symbols transmitted before and after a given symbol corrupt the detection of that symbol [2]. ISI causes harmful distortions and presents a major difficulty in the recovery process. Thus, a blind adaptive filter is usually used to remove the convolutive effect of the system to recover the source signal. Equalizers with the sampling rate equal to the symbol rate are referred to as T-spaced equalizers. Equalizers with the sampling rate higher than the symbol rate are referred to as fractionally-spaced equalizers (FSE). A fractionally-spaced adaptive equalizer is a linear equalizer that is similar to a symbol-spaced linear equalizer (T-spaced equalizer, where T denotes the baud, or symbol, duration). However, a fractionally-spaced equalizer receives, say, U input samples before it produces one output sample and updates the weights, where U is an integer. The output sample rate and the input sample rate are 1/T and U/T, respectively. The weight updating occurs at the output rate, which is the slower rate. Since the output of the fractionally-spaced equalizer needs to be calculated only once in every symbol period, the fractionally-spaced equalizer can be modeled as a parallel combination of a number of baud-spaced equalizers (T-spaced equalizers). This parallel combination of baud-spaced equalizers is known as the multi-channel model of the FSE. Among fractionally-spaced constant modulus algorithms, we may find [29] and [30], where [29] shows the robustness of the constant modulus algorithm to noise and [30] shows that the constant modulus criterion is well suited for any sub-Gaussian input distribution, such as all quadrature amplitude modulation (QAM) signals.
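To make the ISI mechanism concrete, the following sketch passes a 4-PAM sequence (the real part of a 16QAM constellation) through an arbitrary, hypothetical three-tap T-spaced channel and counts the detection errors that arise without any equalizer; the channel taps and sequence length are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-PAM alphabet (the real part of a 16QAM constellation)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
symbols = rng.choice(alphabet, size=1000)

# Hypothetical T-spaced ISI channel: a unit main tap plus a leading and a
# trailing echo; symbols before and after each symbol leak into its sample.
h = np.array([0.1, 1.0, 0.25])
received = np.convolve(symbols, h, mode="full")[1:1 + len(symbols)]  # align to main tap

# Nearest-symbol detection without any equalization
detected = alphabet[np.argmin(np.abs(received[:, None] - alphabet), axis=1)]
symbol_error_rate = np.mean(detected != symbols)
```

With these echo amplitudes the worst-case ISI exceeds half the symbol spacing, so a noticeable fraction of symbols is detected incorrectly even though no channel noise was added.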
The main disadvantage with using fractionally-spaced equalizers is that all processing before the equalization must be done at a higher rate than would be needed for T-spaced equalizers. This can increase the cost and power consumption of systems based on fractionally-spaced equalizers. However, fractionally-spaced equalizers have several advantages that often can justify this increase of complexity. One advantage of fractionally-spaced equalizers over T-spaced equalizers is relative immunity to the sampling phase, where sampling phase refers to the time offset of the sampling instance relative to the symbol clock. As already mentioned earlier, a fractionally-spaced equalizer can be modeled with a single-input-multiple-output (SIMO) system. In the field of communication, SIMO channels appear either when the signal is oversampled at the receiver or from the use of an array of antennas in the receiver [31,32,33]. It should be pointed out that for the SIMO case, the same information is transmitted through different subchannels; all received sequences will be distinctly distorted versions of the same message, which accounts for a certain signal diversity [34]. Therefore, it is reasonable to assume that more information about the transmitted signal will be available at the receiver end [34]. SIMO transmission is widely replacing the single-input-single-output (SISO) approach to enhance the performance via diversity combining [35]. Since the SIMO approach consists of a parallel combination of baud-spaced (T-spaced) adaptive equalizers, it is reasonable to think that if those T-spaced blind adaptive equalizers already lead to good equalization performance for the SISO case, they also may make a major contribution to the equalization performance for the SIMO approach. We focus in this paper on the SISO case where a T-spaced blind adaptive equalizer is used to overcome the ISI problem. 
We do not deal in this paper with techniques that can improve the overall equalization performance, such as the use of multiple receive antennas, the oversampling technique or the use of multiple adaptive T-spaced equalizers connected in series, as was shown in [36].
Blind deconvolution algorithms are essentially adaptive filtering algorithms designed such that they do not require the external supply of a desired response to generate the error signal in the output of the adaptive equalization filter [2,37,38]. The algorithm itself generates an estimate of the desired response by applying a nonlinear transformation to sequences involved in the adaptation process [2,37,38]. In this paper, we consider the Bussgang blind equalization algorithms, where the nonlinear function is applied at the output of the deconvolutional process (equalizer). In the literature, we may find two different approaches for designing this nonlinear function. According to one approach, the nonlinearity is designed to minimize a cost function that is implicitly based on higher order statistics (HOS) and characterizes the ISI [1,39,40,41,42,43,44,45,46,47,48]. Minimizing this cost function with respect to the equalizer's coefficients reduces the ISI to such a level that the sent symbol can be recovered. According to another approach, the conditional expectation (the expectation of the source input given the equalized or deconvolutional output) is the nonlinear function. Namely, the conditional expectation based on Bayes' rule is derived for estimating the desired response. The relationship between the two approaches will be shown in the next section. In this paper, we consider the approach where the conditional expectation has to be obtained for estimating the desired response. In [38,49,50,51], the conditional expectation was derived for uniformly distributed source signals. Thus, [38,49,50,51] cannot cope with a source having a general pdf shape. In [20,52,53], the conditional expectation was given as approximated closed-form expressions suitable for the real or two independent quadrature carrier input case. Those expressions [20,52,53] are suitable for a wider range of source probability density functions compared to [38,49,50,51].
The input pdf was approximated in [20,52] with the maximum entropy density approximation technique, while in [53], the input pdf was approximated with the Edgeworth expansion series. The equalized output pdf was calculated in [20] and [53] via the approximated input pdf, while in [52], the equalized output pdf was approximated with the maximum entropy density approximation technique. According to simulation results carried out in [52] for the 16QAM constellation input case, the equalization method based on the conditional expectation from [52] has better equalization performance compared to the maximum entropy equalization technique [20], which was shown to have significant equalization improvement compared to Godard's [39], the reduced constellation algorithm (RCA) [54], the sign reduced constellation algorithm (SRCA) [55] and others. The equalization performance improvement seen in [52] over the maximum entropy equalization method [20] was in the high SNR environment, as well as in the medium SNR case. However, the blind adaptive deconvolution algorithm proposed in [52] is not applicable for the very noisy case, since the Lagrange multipliers related to the output pdf were set in [52] to those used for the input pdf of the deconvolutional process. In other words, the noise component at the equalized output was ignored in [52]. In this paper, we do not ignore the noise component at the equalized output as was done in [52]. We derive new Lagrange multipliers for the output and input pdfs where the Lagrange multipliers related to the output pdf are a function of the channel noise power. We will show via simulation results that the blind adaptive deconvolution algorithm obtained with the new Lagrange multipliers is robust to SNR, while the recently-obtained blind adaptive deconvolution algorithm [52] with the same Lagrange multipliers for the input and output pdfs is not.
By robust to SNR, we mean that the same step size parameter or parameters involved in the update mechanism of the equalizer’s taps do not have to be changed for equalization convergence purposes due to changes in the SNR environment. In addition, we will show via simulation results that the blind adaptive deconvolution algorithm with the newly obtained Lagrange multipliers is applicable for the whole range of SNR down to 7 dB. We also obtain in this paper new closed-form approximated expressions for the conditional expectation and MSE. The paper is organized as follows: after having described the system under consideration in Section 2, we introduce in Section 3 our new closed-form approximated expressions for the conditional expectation, MSE and Lagrange multipliers. In Section 4, we present our simulation results, and in Section 5, we present our conclusion.

2. System Description

The system under consideration is the same system from [52], illustrated in Figure 1, where we make the following assumptions:
  • The input sequence $x[n]$ can be written as $x[n] = x_1[n] + jx_2[n]$, where $x_1[n]$ and $x_2[n]$ are the real and imaginary parts of $x[n]$, respectively. We assume that $x_1[n]$ and $x_2[n]$ are independent and that $E[x[n]] = 0$, where $E[\cdot]$ stands for the expectation operation.
  • The unknown channel $h[n]$ is a possibly non-minimum phase linear time-invariant filter in which the transfer function has no "deep zeros"; namely, the zeros lie sufficiently far from the unit circle.
  • The filter $c[n]$ is a tap-delay line.
  • The channel noise $w[n]$ is an additive white Gaussian noise.
  • The function $T[\cdot]$ is a memoryless nonlinear function that satisfies the additivity condition: $T[z_1[n] + jz_2[n]] = T[z_1[n]] + jT[z_2[n]]$, where $z_1[n]$, $z_2[n]$ are the real and imaginary parts of the equalized output, respectively.
As was described in [52], the sequence $x[n]$ is transmitted through the channel $h[n]$ and is corrupted with the channel noise $w[n]$. The ideal equalized output is expressed in [37] as:
$$z[n] = x[n-D]e^{j\theta} \qquad (1)$$
where $D$ is a constant delay and $\theta$ is a constant phase shift. Therefore, in the ideal case, we could write:
$$c[n] * h[n] = \delta[n-D]e^{j\theta} \qquad (2)$$
where "$*$" denotes the convolution operation and $\delta$ is the Kronecker delta function. In this paper, we assume that $D = 0$ and $\theta = 0$, since $D$ does not affect the reconstruction of the original input sequence $x[n]$, and $\theta$ can be removed by a decision device [37]. According to [56], if the input sequence is stationary and the channel is linear time-invariant, then the observed sequence $z[n]$ is also stationary; its pdf is therefore invariant to the constant delay $D$. The constant phase shift $\theta$ is also of no immediate consequence when the pdf of the input sequence remains symmetric under rotation [56], which is the case in this paper. Thus, according to [56], we may simplify the condition for perfect equalization by requiring that $z[n] = x[n]$, which means $D = 0$ and $\theta = 0$. Next, convolving $c[n]$ with the received sequence, we obtain:
$$z[n] = x[n] + \tilde{p}[n] + \tilde{w}[n] \qquad (3)$$
where $\tilde{p}[n]$ is the convolutional noise, arising from the difference between the ideal and the guessed value for $c[n]$, and $\tilde{w}[n] = w[n] * c[n]$. The intersymbol interference (ISI) is often used as a measure of performance in deconvolution applications, defined by:
$$ISI = \frac{\sum_{\tilde{m}}\left|\tilde{s}[\tilde{m}]\right|^2 - \left|\tilde{s}\right|_{max}^2}{\left|\tilde{s}\right|_{max}^2} \qquad (4)$$
where $\left|\tilde{s}\right|_{max}$ is the component of $\tilde{s}$, given in (5), having the maximal absolute value.
$$\tilde{s}[n] = c[n] * h[n] = \delta[n] + \xi[n] \qquad (5)$$
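As a quick illustration, the residual ISI measure above can be computed directly from the combined channel-equalizer impulse response; the channel and the truncated equalizer below are hypothetical values chosen only for the example:

```python
import numpy as np

def residual_isi_db(c, h):
    """ISI of the combined response s = c * h, in dB:
    10*log10((sum|s|^2 - |s|_max^2) / |s|_max^2)."""
    s = np.convolve(c, h)
    peak = np.max(np.abs(s)) ** 2
    return 10 * np.log10((np.sum(np.abs(s) ** 2) - peak) / peak)

h = np.array([1.0, 0.4, 0.1])        # hypothetical channel
c = np.array([1.0, -0.4, 0.06])      # rough truncated inverse of h

no_eq = residual_isi_db(np.array([1.0]), h)   # equalizer bypassed
with_eq = residual_isi_db(c, h)
```

A perfect equalizer would drive the measure to minus infinity; even this short truncated inverse lowers the residual ISI by tens of dB relative to the unequalized channel.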
where $\xi[n]$ stands for the difference (error) between the ideal and the guessed value for $c[n]$. The function $d[n]$ is an estimation of the input sequence $x[n]$, which is produced by the function $T[z[n]]$. Thus, the error signal is $\tilde{e}[n] = T[z[n]] - z[n]$. This error is fed into the adaptive mechanism, which updates the equalizer's taps:
$$\underline{c}[n+1] = \underline{c}[n] + \mu\tilde{e}[n]\underline{y}^*[n] \qquad (6)$$
where $*$ is the conjugate operation, $\mu$ is the step size parameter and $\underline{c}[n]$ is the equalizer vector, where the input vector is $\underline{y}[n] = \left[y[n] \ldots y[n-N+1]\right]^T$. The operator $(\cdot)^T$ denotes the transpose, and $N$ is the equalizer's tap length. As was mentioned earlier in this paper, according to the first approach for designing the nonlinear function, a predefined cost function $F[n]$ that characterizes the ISI is minimized with respect to the equalizer's coefficients. Minimization is performed with the gradient descent algorithm that searches for an optimal filter tap setting by moving in the direction of the negative gradient $-\nabla_c F[n]$ over the surface of the cost function in the equalizer filter tap space [57]. Thus, the adaptive mechanism that updates the equalizer's taps can be given by:
$$\underline{c}[n+1] = \underline{c}[n] - \mu\nabla_c F[n] = \underline{c}[n] - \mu\frac{\partial F[n]}{\partial z[n]}\underline{y}^*[n] \qquad (7)$$
Please note that by $\tilde{e}[n] = T[z[n]] - z[n]$, (6) and (7): $T[z[n]] = z[n] - \frac{\partial F[n]}{\partial z[n]}$; thus, choosing the cost function $F[n]$ results in a corresponding choice of $T[z[n]]$ [57]. According to [37], the conditional expectation ($E[x[n]|z[n]]$) is touted as a good estimate of $T[z[n]]$. Thus, we may say that according to the first approach (the cost function approach), the conditional expectation is obtained implicitly, while according to the second approach, it is obtained explicitly. In [52], the conditional expectation was applied for $T[z[n]]$ by using $T[z_1[n] + jz_2[n]] = T[z_1[n]] + jT[z_2[n]]$ (Assumption 5). The conditional expectation for the real valued and noiseless case was obtained in [52] via:
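The Bussgang-type update (6) can be sketched as follows for the real valued case. The nonlinearity used here, T(z) = tanh(z/v) with an assumed noise variance v, is only a stand-in for the conditional-expectation estimators discussed in this paper, and the signal, channel and step-size values are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def bussgang_equalizer(y, num_taps=11, mu=1e-3, v=0.25):
    """Sketch of the blind tap update (6): c <- c + mu * (T(z) - z) * conj(y_vec).

    T(z) = tanh(z / v) plays the role of the memoryless estimator of the
    source symbol; no training sequence is used anywhere.
    """
    c = np.zeros(num_taps)
    c[num_taps // 2] = 1.0                 # center-spike initialization
    z = np.zeros(len(y))
    for n in range(num_taps, len(y)):
        y_vec = y[n - num_taps:n][::-1]    # most recent sample first
        z[n] = c @ y_vec
        err = np.tanh(z[n] / v) - z[n]     # blind error signal e~[n]
        c = c + mu * err * np.conj(y_vec)
    return c, z

# Hypothetical run: BPSK symbols through a mild two-tap ISI channel
x = rng.choice([-1.0, 1.0], size=5000)
y = np.convolve(x, [1.0, 0.35], mode="full")[:len(x)]
c, z = bussgang_equalizer(y)
```

Because the update uses only the equalized output and the received samples, it runs without a desired-response signal; the step size and nonlinearity would need tuning for a real constellation.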
$$E\left[x[n]|z[n]\right] = \frac{\int x[n] f_{z|x}(z|x) f_x(x)\,dx}{f_z(z)} \qquad (8)$$
where $f_{z|x}(z|x)$ was given by:
$$f_{z|x}(z|x) = \frac{1}{\sqrt{2\pi}\sigma_{\tilde{p}}}\exp\left(-\frac{\left(z[n]-x[n]\right)^2}{2\sigma_{\tilde{p}}^2}\right) \qquad (9)$$
the convolutional noise power was expressed as $\sigma_{\tilde{p}}^2$, and $f_x(x)$, $f_z(z)$ were denoted as the source and equalized output pdfs, respectively. The source and equalized output pdfs ($f_x(x)$, $f_z(z)$) were approximated in [52] with the maximum entropy density approximation technique:
$$\hat{f}_x(x) = \exp\left(\sum_{k=0}^{K}\lambda_k x^k[n]\right); \quad \hat{f}_z(z) = \exp\left(\sum_{k=0}^{K}\tilde{\lambda}_k z^k[n]\right) \qquad (10)$$
where $\lambda_k$ and $\tilde{\lambda}_k$ ($k = 0, 1, 2, \ldots, K$) are the Lagrange multipliers, $\hat{f}_x(x)$, $\hat{f}_z(z)$ are the approximated probability density functions of the source and equalized output, respectively, and $K$ controls the number of Lagrange multipliers in the maximum entropy density approximation technique (10) and plays a role in how successful that approximation will be. For example, the pdf approximation of a non-Gaussian input sequence with zero mean would be less successful with the choice of $K = 2$ than with the choice of $K > 2$. As already mentioned earlier in this paper, [52] assumed that $\lambda_k = \tilde{\lambda}_k$. Next, the conditional expectation (8) was extended in [52] to the two independent quadrature carrier case. According to [49], the conditional mean estimate of the complex datum $x[n]$ ($x[n] = x_1[n] + jx_2[n]$) given the complex observation $z[n]$ ($z[n] = z_1[n] + jz_2[n]$) can be written as $E\left[x[n]|z[n]\right] = E\left[x_1[n]|z_1[n]\right] + jE\left[x_2[n]|z_2[n]\right]$. Therefore, the real and imaginary parts of the data could be estimated separately on the basis of the real and imaginary parts of the equalized output.
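For intuition, the estimator (8) with the Gaussian likelihood and a maximum-entropy prior (10) can be evaluated by brute-force numerical integration. In the sketch below the denominator $f_z(z)$ is obtained by integrating the numerator's normalizer rather than from a second maximum-entropy expansion, and the Lagrange multiplier values are illustrative only:

```python
import numpy as np

def cond_expectation(z, lam, sigma_p, grid=None):
    """Numerical E[x|z] from (8)-(9): a Gaussian likelihood centered at z
    combined with the maximum-entropy prior f_x(x) ~ exp(sum_k lambda_k x^k).

    lam maps the exponent k to lambda_k; the grid spacing cancels in the ratio.
    """
    if grid is None:
        grid = np.linspace(-6.0, 6.0, 4001)
    prior = np.exp(sum(l * grid**k for k, l in lam.items()))
    likelihood = np.exp(-((z - grid) ** 2) / (2.0 * sigma_p**2))
    posterior = prior * likelihood
    return np.sum(grid * posterior) / np.sum(posterior)

# Illustrative bimodal prior with modes near +/-1 (lambda_2 > 0 > lambda_4)
lam = {2: 4.0, 4: -2.0}
estimate = cond_expectation(0.8, lam, sigma_p=0.5)
```

For an observation between the two prior modes, the estimator is pulled toward the nearer mode; this soft-decision behavior is exactly what the blind algorithm exploits.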

3. The New Lagrange Multipliers

In this section, we derive new Lagrange multipliers related to the input and output pdfs valid for the real valued and noisy case. Namely, we consider (10), but unlike [52], we assume:
$$\tilde{\lambda}_k = \lambda_k + \tilde{\varepsilon}_k \quad \text{for } k = 0, 2, 4 \qquad (11)$$
where $\tilde{\varepsilon}_k$ will tend to zero only when both the convolutional noise $\tilde{p}[n]$ and the channel noise $\tilde{w}[n]$ tend to zero. Please note that we consider only even values of $k$, since according to Assumption 1 from the previous section, the input sequence has zero mean. In addition, we consider here $K = 4$, since this value was used in [52] for approximating the input and equalized output pdfs and has been shown to lead to good equalization performance. Although $K = 4$ may not be the optimal value in the pdf approximation of a 16QAM constellation input used in [52], for equalization purposes this value for $K$ was sufficient. Since we compare the equalization performance obtained in this paper with [52], the same number of Lagrange multipliers should be taken in the pdf approximations for a fair comparison. This section is divided as follows: In Subsection 3.1, we derive, for the real valued case, closed-form approximated expressions for $\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$ and $\tilde{\varepsilon}_4$ as a function of the convolutional noise power, the channel noise power and the Lagrange multipliers ($\lambda_2$ and $\lambda_4$) related to the input pdf. In Subsection 3.2, we derive, for the real valued case, a closed-form approximated expression for the expected MSE of the system as a function of $\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$, $\tilde{\varepsilon}_4$, the convolutional noise power, the channel noise power and the Lagrange multipliers ($\lambda_2$ and $\lambda_4$) related to the input pdf. In Subsection 3.3, we substitute the closed-form approximated expressions for $\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$ and $\tilde{\varepsilon}_4$ obtained in Subsection 3.1 into the MSE expression obtained in Subsection 3.2, thus obtaining a closed-form approximated expression for the MSE depending only on the convolutional noise power, the channel noise power and the Lagrange multipliers ($\lambda_2$ and $\lambda_4$) related to the input pdf. Next, this new expression for the MSE is minimized in Subsection 3.3 with respect to $\lambda_2$ and $\lambda_4$, and newly derived expressions for both $\lambda_2$ and $\lambda_4$ are obtained, depending only on the source moments.
Please note that in [52], only $\lambda_k$ ($k = 0, 2, 4$) were considered in (10). In addition, no closed-form expression was needed for $\lambda_0$ in [52], as is also the case in this paper, due to the fact that $\lambda_0$ cancels in (8) when using (10) with (11) for approximating the input and output pdfs.
In this section, we use the following additional assumptions, which were also made in [52]:
  • The convolutional noise $\tilde{p}[n]$ is a zero mean, white Gaussian process with variance $\sigma_{\tilde{p}}^2 = E\left[\tilde{p}[n]\tilde{p}^*[n]\right]$ (where $(\cdot)^*$ is the conjugate operation on $(\cdot)$).
  • The source signal $x[n]$ is an independent non-Gaussian signal with known variance and higher moments.
  • The convolutional noise $\tilde{p}[n]$ and the source signal are independent.
  • The convolutional noise power $\sigma_{\tilde{p}}^2$ is sufficiently low.
Assumptions 1 and 3 were also made in [37,38,49,51]. As already noted in [20,58], the described model for the convolutional noise is applicable during the latter stages of the process, where the process is close to optimality [38]. According to [38], in the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the data sequence and the convolutional noise are strongly correlated, and the convolutional noise sequence is more uniform than Gaussian [59]. However, satisfying equalization performance was obtained by [51] and others [20] in spite of the fact that this model for the convolutional noise was used. These results [51,20] may indicate that the described model for the convolutional noise can also be used (maybe not in the optimum way) in the early stages, where the eye diagram is still closed.

3.1. Closed-Form Approximated Expressions for $\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$ and $\tilde{\varepsilon}_4$

In this subsection, we derive closed-form approximated expressions for $\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$ and $\tilde{\varepsilon}_4$.
Theorem 1. 
$\tilde{\varepsilon}_0$, $\tilde{\varepsilon}_2$ and $\tilde{\varepsilon}_4$ can be expressed as:
$$\tilde{\varepsilon}_0 = -\varepsilon_0; \quad \tilde{\varepsilon}_2 = -\varepsilon_2; \quad \tilde{\varepsilon}_4 = -\varepsilon_4 \qquad (12)$$
where:
$$\varepsilon_0 = -2\lambda_2\sigma_p^2; \quad \varepsilon_2 = -\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right); \quad \varepsilon_4 = -16\lambda_2\lambda_4\sigma_p^2 \qquad (13)$$
and:
$$\sigma_p^2 = \sigma_{\tilde{p}}^2 + \sigma_{\tilde{w}}^2; \quad \sigma_{\tilde{w}}^2 = E\left[\tilde{w}[n]\tilde{w}^*[n]\right] \qquad (14)$$
Proof of Theorem 1. 
For simplicity, we use in the following $x = x[n]$, $z = z[n]$ and $p = p[n]$. The equalized output sequence (3) can be expressed as:
$$z = x + p \qquad (15)$$
where $p = \tilde{p}[n] + \tilde{w}[n]$. By using Assumption 1 and Assumption 4 from this section and the previous section, respectively, we may conclude that $p$ is a white Gaussian process. Furthermore, based on Assumption 3 from this section, $p$ is independent of the source sequence $x$. Thus, according to [60], if $x$ and $p$ are independent, the equalized output pdf equals:
$$f_z(z) = \int f_x(x) f_p(z-x)\,dx \qquad (16)$$
where $f_x(x)$ is the source pdf and $f_p(z-x)$ is the pdf of $p$. Now, by using the maximum entropy density approximation technique (10) for approximating the input pdf and the Gaussian pdf for approximating the pdf of $p$, (16) can be approximately written as:
$$f_z(z) \cong \frac{1}{\sqrt{2\pi}\sigma_p}\int g(x)\exp\left(-\frac{\Psi(x)}{\rho}\right)dx \qquad (17)$$
where:
$$g(x) = \exp\left(\sum_{k=0}^{K}\lambda_k x^k\right) \text{ for } k = 0, 2, 4; \quad \Psi(x) = (z-x)^2; \quad \rho = 2\sigma_p^2 \qquad (18)$$
Next, we use Laplace's method [61] for solving the integral in (17). According to [20,61], Laplace's method is a general technique for obtaining the asymptotic behavior as $\rho \to 0$ of integrals in which the large parameter $1/\rho$ appears in the exponent. The main idea of Laplace's method is: if the real continuous function $\Psi(x)$ has its minimum at $x_0$, which is between infinity and minus infinity, then it is only the immediate neighborhood of $x = x_0$ that contributes to the full asymptotic expansion of the integral for large $1/\rho$. Therefore, according to [20,61], we may write:
$$\int g(x)\exp\left(-\frac{\Psi(x)}{\rho}\right)dx \cong \exp\left(-\frac{\Psi(x)}{\rho}\right)\sqrt{\frac{2\pi\rho}{\frac{d^2\Psi(x)}{dx^2}}}\left(g(x) + \frac{1}{2}\frac{d^2 g(x)}{dx^2}\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}} + \frac{1}{8}\frac{d^4 g(x)}{dx^4}\left(\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}}\right)^2 + O\left(\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}}\right)^3\right)\Bigg|_{x=x_0} \qquad (19)$$
where $O(x)$ is defined by $\lim_{x\to 0} O(x)/x = R$, where $R$ is a constant. The functions $\frac{d^2\Psi(x)}{dx^2}$ and $x_0$ are obtained via:
$$\frac{d\Psi(x)}{dx} = -2(z-x); \quad \frac{d^2\Psi(x)}{dx^2} = 2; \quad \frac{d\Psi(x_0)}{dx} = -2(z-x_0) = 0 \Rightarrow x_0 = z \qquad (20)$$
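The quality of this Laplace approximation is easy to probe numerically. The check below compares the integral in (17) against the expansion up to the second-order correction for an illustrative pair of Lagrange multipliers (the constant $\lambda_0$ is omitted since it only rescales both sides); the second derivative of $g$ is taken by central differences:

```python
import numpy as np

lam2, lam4 = 4.0, -2.0                        # illustrative multipliers
g = lambda x: np.exp(lam2 * x**2 + lam4 * x**4)

def d2(f, x, eps=1e-4):
    """Second derivative by central differences."""
    return (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps**2

z, sp = 0.7, 0.05                             # rho = 2*sp^2 is small
x = np.linspace(z - 1.0, z + 1.0, 200001)
dx = x[1] - x[0]
integral = np.sum(g(x) * np.exp(-((z - x) ** 2) / (2 * sp**2))) * dx

leading = np.sqrt(2 * np.pi) * sp * g(z)                       # zeroth-order term
corrected = np.sqrt(2 * np.pi) * sp * (g(z) + 0.5 * sp**2 * d2(g, z))

err_leading = abs(integral - leading) / integral
err_corrected = abs(integral - corrected) / integral
```

The correction term shrinks the relative error by roughly an order of magnitude here, consistent with the asymptotic ordering in powers of $\rho$.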
Next, by using (18)–(20), the equalized output pdf (17) can be written as:
$$\begin{aligned} f_z(z) \cong g(z)\Big[&32\lambda_4^4\sigma_p^4 z^{12} + 64\lambda_2\lambda_4^3\sigma_p^4 z^{10} + \frac{1}{8}\sigma_p^4\left(384\lambda_2^2\lambda_4^2 + 1152\lambda_4^3\right)z^8 \\ &+ \left(16\lambda_4^2\sigma_p^2 + \frac{1}{8}\sigma_p^4\left(128\lambda_2^3\lambda_4 + 1344\lambda_2\lambda_4^2\right)\right)z^6 + \left(\frac{1}{8}\sigma_p^4\left(16\lambda_2^4 + 480\lambda_2^2\lambda_4 + 816\lambda_4^2\right) + 16\lambda_2\lambda_4\sigma_p^2\right)z^4 \\ &+ \left(\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right) + \frac{1}{8}\sigma_p^4\left(48\lambda_2^3 + 336\lambda_4\lambda_2\right)\right)z^2 + \frac{1}{8}\sigma_p^4\left(12\lambda_2^2 + 24\lambda_4\right) + 2\lambda_2\sigma_p^2 + 1\Big] \end{aligned} \qquad (21)$$
where:
$$g(z) = \exp\left(\sum_{k=0}^{K}\lambda_k z^k\right) \text{ for } k = 0, 2, 4 \qquad (22)$$
However, according to (10) and (11), the approximated equalized output pdf can be written as:
$$\hat{f}_z(z) = \exp\left(\sum_{k=0}^{K}\tilde{\lambda}_k z^k\right) = \exp\left(\sum_{k=0}^{K}\lambda_k z^k\right)\exp\left(\sum_{k=0}^{K}\tilde{\varepsilon}_k z^k\right) \text{ for } k = 0, 2, 4 \qquad (23)$$
which with the help of (12) and (22) is:
$$\hat{f}_z(z) = g(z)\exp\left(-\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right)\right) \qquad (24)$$
Next, we use the Taylor expansion [62] up to order three and obtain:
$$\hat{f}_z(z) \cong g(z)\left(1 - \left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right) + \frac{1}{2!}\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right)^2 - \frac{1}{3!}\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right)^3\right) \qquad (25)$$
where (25) can also be written as:
$$\begin{aligned} \hat{f}_z(z) \cong g(z)\Big[&-\frac{1}{6}\varepsilon_4^3 z^{12} - \frac{1}{2}\varepsilon_2\varepsilon_4^2 z^{10} + \left(\frac{1}{2}\varepsilon_4^2 - \frac{1}{6}\varepsilon_4\left(\varepsilon_2^2 + 2\varepsilon_0\varepsilon_4\right) - \frac{1}{6}\varepsilon_0\varepsilon_4^2 - \frac{1}{3}\varepsilon_2^2\varepsilon_4\right)z^8 \\ &+ \left(\varepsilon_2\varepsilon_4 - \frac{1}{6}\varepsilon_2\left(\varepsilon_2^2 + 2\varepsilon_0\varepsilon_4\right) - \frac{2}{3}\varepsilon_0\varepsilon_2\varepsilon_4\right)z^6 \\ &+ \left(\frac{1}{2}\varepsilon_2^2 - \frac{1}{6}\varepsilon_0\left(\varepsilon_2^2 + 2\varepsilon_0\varepsilon_4\right) - \varepsilon_4 + \varepsilon_0\varepsilon_4 - \frac{1}{3}\varepsilon_0\varepsilon_2^2 - \frac{1}{6}\varepsilon_0^2\varepsilon_4\right)z^4 \\ &+ \left(\varepsilon_2\varepsilon_0 - \frac{1}{2}\varepsilon_2\varepsilon_0^2 - \varepsilon_2\right)z^2 + \frac{1}{2}\varepsilon_0^2 - \frac{1}{6}\varepsilon_0^3 - \varepsilon_0 + 1\Big] \end{aligned} \qquad (26)$$
When the equalizer has converged, the convolutional noise power is considered as very small. Thus, by comparing (26) to (21) and neglecting the terms of $\sigma_p^u$ where $u \geq 4$, we obtain:
$$\frac{1}{2}\varepsilon_0^2 - \frac{1}{6}\varepsilon_0^3 - \varepsilon_0 + 1 = \frac{1}{8}\sigma_p^4\left(12\lambda_2^2 + 24\lambda_4\right) + 2\lambda_2\sigma_p^2 + 1 \Rightarrow \varepsilon_0 \cong -2\lambda_2\sigma_p^2 \qquad (27)$$
$$\varepsilon_2\varepsilon_0 - \frac{1}{2}\varepsilon_2\varepsilon_0^2 - \varepsilon_2 = \sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right) + \frac{1}{8}\sigma_p^4\left(48\lambda_2^3 + 336\lambda_4\lambda_2\right) \Rightarrow \varepsilon_2\varepsilon_0 - \varepsilon_2 \cong \sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right) \qquad (28)$$
where by the use of (27) and [62], (28) can be written as:
$$\varepsilon_2\left(\varepsilon_0 - 1\right) \cong \sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right) \Rightarrow \varepsilon_2 \cong -\frac{\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right)}{1 - \varepsilon_0} \cong -\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right)\left(1 + \varepsilon_0\right) \cong -\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right)\left(1 - 2\lambda_2\sigma_p^2\right) \cong -\sigma_p^2\left(4\lambda_2^2 + 12\lambda_4\right) \qquad (29)$$
and with the help of (21), (26), (27) and [62], we have:
$$\frac{1}{2}\varepsilon_2^2 - \frac{1}{6}\varepsilon_0\left(\varepsilon_2^2 + 2\varepsilon_0\varepsilon_4\right) - \varepsilon_4 + \varepsilon_0\varepsilon_4 - \frac{1}{3}\varepsilon_0\varepsilon_2^2 - \frac{1}{6}\varepsilon_0^2\varepsilon_4 = \frac{1}{8}\sigma_p^4\left(16\lambda_2^4 + 480\lambda_2^2\lambda_4 + 816\lambda_4^2\right) + 16\lambda_2\lambda_4\sigma_p^2 \Rightarrow -\varepsilon_4 + \varepsilon_0\varepsilon_4 \cong 16\lambda_2\lambda_4\sigma_p^2 \Rightarrow \varepsilon_4 \cong -\frac{16\lambda_2\lambda_4\sigma_p^2}{1 - \varepsilon_0} \cong -16\lambda_2\lambda_4\sigma_p^2\left(1 + \varepsilon_0\right) \cong -16\lambda_2\lambda_4\sigma_p^2\left(1 - 2\lambda_2\sigma_p^2\right) \cong -16\lambda_2\lambda_4\sigma_p^2 \qquad (30)$$
 ☐

3.2. Closed-Form Approximated Expression for the MSE

In this subsection, we derive the MSE, which is defined by:
$$MSE = E\left[\left(E[x|z] - x\right)^2\right] \qquad (31)$$
Thus, we first have to derive the relevant expression for the conditional expectation $E[x|z]$ with the new Lagrange multipliers for the input and output pdfs.
Theorem 2. 
The conditional expectation can be approximately written as:
$$E[x|z] \cong \left(1 + \left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right) + \frac{1}{2}\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right)^2\right)\left(z + \frac{\sigma_p^2}{2}\frac{g_1''(z)}{g(z)} + \frac{\sigma_p^4}{8}\frac{g_1''''(z)}{g(z)}\right) \qquad (32)$$
where:
$$\frac{g_1''(z)}{g(z)} = 2z\left(8z^6\lambda_4^2 + 8z^4\lambda_2\lambda_4 + 2z^2\lambda_2^2 + 10z^2\lambda_4 + 3\lambda_2\right)$$
$$\frac{g_1''''(z)}{g(z)} = 4z\left(64z^{12}\lambda_4^4 + 128z^{10}\lambda_2\lambda_4^3 + 96z^8\lambda_2^2\lambda_4^2 + 352z^8\lambda_4^3 + 32z^6\lambda_2^3\lambda_4 + 432z^6\lambda_2\lambda_4^2 + 4z^4\lambda_2^4 + 168z^4\lambda_2^2\lambda_4 + 348z^4\lambda_4^2 + 20z^2\lambda_2^3 + 180z^2\lambda_2\lambda_4 + 15\lambda_2^2 + 30\lambda_4\right)$$
Proof of Theorem 2. 
According to Bayes' rule, we have:
$$E[x|z] = \frac{\int x f_{z|x}(z|x) f_x(x)\,dx}{f_z(z)} \qquad (33)$$
where:
$$f_{z|x}(z|x) = \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left(-\frac{(z-x)^2}{2\sigma_p^2}\right) \qquad (34)$$
Next, we use (10) and (23) with $k = 0, 2, 4$ for approximating the equalized output and input pdfs, respectively. Thus, by using (10), (18), (23) and (34) with $k = 0, 2, 4$ in (33), we obtain:
$$E[x|z] \cong \frac{\frac{1}{\sqrt{2\pi}\sigma_p}\int x g(x)\exp\left(-\frac{(z-x)^2}{2\sigma_p^2}\right)dx}{\exp\left(\sum_{k=0}^{K}\lambda_k z^k\right)\exp\left(\sum_{k=0}^{K}\tilde{\varepsilon}_k z^k\right)} \text{ for } k = 0, 2, 4 \qquad (35)$$
which with the help of (12) and (22) leads to:
$$E[x|z] \cong \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left(\sum_{k=0}^{K}\varepsilon_k z^k\right)\frac{\int x g(x)\exp\left(-\frac{(z-x)^2}{2\sigma_p^2}\right)dx}{g(z)} \text{ for } k = 0, 2, 4 \qquad (36)$$
The integral in (36) can be written as:
$$\int x g(x)\exp\left(-\frac{(z-x)^2}{2\sigma_p^2}\right)dx = \int g_1(x)\exp\left(-\frac{\Psi(x)}{\rho}\right)dx \qquad (37)$$
where:
$$g_1(x) = x g(x) \qquad (38)$$
Next, by using Laplace's method [20,61] for solving the integral in (37), we get:
$$\int g_1(x)\exp\left(-\frac{\Psi(x)}{\rho}\right)dx \cong \exp\left(-\frac{\Psi(x)}{\rho}\right)\sqrt{\frac{2\pi\rho}{\frac{d^2\Psi(x)}{dx^2}}}\left(g_1(x) + \frac{1}{2}\frac{d^2 g_1(x)}{dx^2}\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}} + \frac{1}{8}\frac{d^4 g_1(x)}{dx^4}\left(\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}}\right)^2 + O\left(\frac{\rho}{\frac{d^2\Psi(x)}{dx^2}}\right)^3\right)\Bigg|_{x=x_0} \qquad (39)$$
which can be simplified with the help of (18) and (20) as:
$$\int g_1(x)\exp\left(-\frac{\Psi(x)}{\rho}\right)dx \cong \sqrt{2\pi\sigma_p^2}\,g(z)\left(z + \frac{\sigma_p^2}{2}\frac{g_1''(z)}{g(z)} + \frac{\sigma_p^4}{8}\frac{g_1''''(z)}{g(z)}\right), \quad \text{where: } g_1''(z) = \frac{d^2 g_1(x)}{dx^2}\Bigg|_{x=z}; \quad g_1''''(z) = \frac{d^4 g_1(x)}{dx^4}\Bigg|_{x=z} \qquad (40)$$
Next, we use the Taylor expansion [62] in order to get:
$$\exp\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right) \cong 1 + \left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right) + \frac{1}{2}\left(\varepsilon_0 + \varepsilon_2 z^2 + \varepsilon_4 z^4\right)^2 \qquad (41)$$
Now, by using (40) and (41) in (36), the expression for the conditional expectation (32) is obtained. ☐
Our next step is to substitute (32) into (31) to obtain an MSE expression depending on $\varepsilon_0$, $\varepsilon_2$, $\varepsilon_4$, $\lambda_2$, $\lambda_4$ and on $\sigma_p^t$ for $t \leq 4$:
$$\begin{aligned} MSE \cong\ & 4m_4\lambda_2^2\varepsilon_0\sigma_p^2 + 4m_6\lambda_2^2\varepsilon_2\sigma_p^2 + 4m_8\lambda_2^2\varepsilon_4\sigma_p^2 + 12m_2\lambda_2^2\sigma_p^4 + 16m_6\lambda_2\varepsilon_0\lambda_4\sigma_p^2 + 6m_2\lambda_2\varepsilon_0\sigma_p^2 \\ & + 16m_8\lambda_2\lambda_4\varepsilon_2\sigma_p^2 + 16m_{10}\lambda_2\lambda_4\varepsilon_4\sigma_p^2 + 80m_4\lambda_2\lambda_4\sigma_p^4 + 6m_4\lambda_2\varepsilon_2\sigma_p^2 + 6m_6\lambda_2\varepsilon_4\sigma_p^2 + 6\lambda_2\sigma_p^4 \\ & + m_2\varepsilon_0^2 + 16m_8\varepsilon_0\lambda_4^2\sigma_p^2 + 20m_4\varepsilon_0\lambda_4\sigma_p^2 + 2m_4\varepsilon_0\varepsilon_2 + 2m_6\varepsilon_0\varepsilon_4 + 2\varepsilon_0\sigma_p^2 + 16m_{10}\lambda_4^2\varepsilon_2\sigma_p^2 \\ & + 16m_{12}\lambda_4^2\varepsilon_4\sigma_p^2 + 112m_6\lambda_4^2\sigma_p^4 + 20m_6\lambda_4\varepsilon_2\sigma_p^2 + 20m_8\lambda_4\varepsilon_4\sigma_p^2 + 60m_2\lambda_4\sigma_p^4 + m_6\varepsilon_2^2 \\ & + 2m_8\varepsilon_2\varepsilon_4 + 6m_2\varepsilon_2\sigma_p^2 + m_{10}\varepsilon_4^2 + 10m_4\varepsilon_4\sigma_p^2 + \sigma_p^2 \end{aligned} \qquad (42)$$

3.3. Closed-Form Approximated Expressions for $\lambda_2$ and $\lambda_4$

In this subsection, we derive the Lagrange multipliers related to the input pdf. Namely, we derive closed-form approximated expressions for $\lambda_2$ and $\lambda_4$.
Theorem 3. 
The Lagrange multipliers related to the input pdf can be approximately expressed by:
$$\lambda_2 \cong \frac{1}{40m_2}\,\frac{41472m_4^2 + 2560m_2m_6 - 144m_4\left(480m_2^2 + 288m_4\right)}{20736m_4^2 + 1280m_2m_6}; \quad \lambda_4 \cong \frac{480m_2^2 + 288m_4}{20736m_4^2 + 1280m_2m_6} \qquad (43)$$
where $m_G$ is the $G$-th moment of the real part of the input sequence. Namely, for the real valued input case, we have:
$$m_G = E\left[x^G\right] \qquad (44)$$
Proof of Theorem 3. 
Let us first substitute the expressions for $\varepsilon_0$, $\varepsilon_2$ and $\varepsilon_4$ given in (13) into (42). Thus, we obtain a new closed-form approximated expression for the MSE depending only on the source moments, $\lambda_2$, $\lambda_4$ and $\sigma_p^r$ with $r \geq 2$. Please note that we are looking for linear closed-form approximated expressions for $\lambda_2$ and $\lambda_4$. Thus, we ignore in the MSE expression all those terms having $\lambda_2^i\lambda_4^j$ with $i, j \geq 1$, as well as $\lambda_2^l$ and $\lambda_4^l$ with $l > 2$. In addition, we ignore terms of $\sigma_p^u$ with $u > 4$, since the MSE is calculated when the equalizer has converged, where $\sigma_p^2$ is already considered as very low, thus making $\sigma_p^u$ with $u > 4$ negligible. By considering all of this, the approximated MSE can be written as:
$$MSE \cong \sigma_p^2\left(20m_2\lambda_2^2\sigma_p^2 + 144m_4\lambda_2\lambda_4\sigma_p^2 - 2\lambda_2\sigma_p^2 - 16m_6\lambda_4^2\sigma_p^2 + 12m_2\lambda_4\sigma_p^2 + 1\right) \qquad (45)$$
Next, we search for those Lagrange multipliers ($\lambda_2$, $\lambda_4$) that bring the MSE (45) to the minimum. Thus, we have:
$$\frac{dMSE}{d\lambda_2} = 0 \Rightarrow 40\lambda_2 m_2 + 144\lambda_4 m_4 - 2 = 0; \quad \frac{dMSE}{d\lambda_4} = 0 \Rightarrow 12m_2 + 144\lambda_2 m_4 - 32\lambda_4 m_6 = 0 \qquad (46)$$
Solving (46) for $\lambda_2$ and $\lambda_4$ leads to (43). ☐
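As a sanity check on Theorem 3, the closed-form pair (43) can be evaluated for a concrete alphabet, here the 4-PAM levels forming the real part of a 16QAM constellation, and substituted back into the stationarity conditions (46) as reconstructed here:

```python
import numpy as np

def lagrange_multipliers(m2, m4, m6):
    """Closed-form lambda_2, lambda_4 of (43) from the source moments."""
    den = 20736 * m4**2 + 1280 * m2 * m6
    lam4 = (480 * m2**2 + 288 * m4) / den
    lam2 = (41472 * m4**2 + 2560 * m2 * m6
            - 144 * m4 * (480 * m2**2 + 288 * m4)) / (40 * m2 * den)
    return lam2, lam4

# Moments of a uniform 4-PAM alphabet {-3, -1, 1, 3}
levels = np.array([-3.0, -1.0, 1.0, 3.0])
m2, m4, m6 = (float(np.mean(levels**k)) for k in (2, 4, 6))

lam2, lam4 = lagrange_multipliers(m2, m4, m6)

# Residuals of the two stationarity conditions (46); both should vanish
r1 = 40 * lam2 * m2 + 144 * lam4 * m4 - 2
r2 = 12 * m2 + 144 * lam2 * m4 - 32 * lam4 * m6
```

Both residuals vanish up to floating-point precision, i.e., the pair (43) is indeed a stationary point of the approximated MSE (45).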

4. Simulation

In this section, we show the usefulness of our newly-derived Lagrange multipliers (43) compared to those derived in [52]. Namely, we show via simulation results the robustness to SNR of our new blind adaptive equalization method based on our newly-derived Lagrange multipliers (43) compared to [52]. In addition, we also add Godard's [39] and the maximum entropy [20] algorithms for comparison. Please note that Godard's [39] algorithm is one of the most popular, computationally simple, tested and best performing blind equalization algorithms in the signal processing domain according to [28]. In addition, please note that according to [20], the maximum entropy [20] algorithm has better equalization performance compared to Godard's [39] algorithm for the high SNR case. The equalizer's taps for [20] were updated according to:
$$c_l[n+1] = c_l[n] - \mu_{ENT} \, W \, y^*[n-l] \qquad (47)$$
with:
$$W = \frac{E\left[ x_1|z_1 \right] - z_1[n]}{\left\langle z_1^2 \right\rangle[n]} + j\,\frac{E\left[ x_2|z_2 \right] - z_2[n]}{\left\langle z_2^2 \right\rangle[n]} \qquad (48)$$
where $\mu_{ENT}$ is a positive step size parameter and:
$$E\left[ x_1|z_1 \right] = \frac{z_1 + \dfrac{\hat{g}_1''(z_1)}{2\hat{g}(z_1)}\left( \sigma_{x_1}^2 - \sigma_{z_1}^2 \right) + \dfrac{\hat{g}_1^{(4)}(z_1)}{8\hat{g}(z_1)}\left( \sigma_{x_1}^2 - \sigma_{z_1}^2 \right)^2}{1 + \dfrac{\hat{g}''(z_1)}{2\hat{g}(z_1)}\left( \sigma_{x_1}^2 - \sigma_{z_1}^2 \right) + \dfrac{\hat{g}^{(4)}(z_1)}{8\hat{g}(z_1)}\left( \sigma_{x_1}^2 - \sigma_{z_1}^2 \right)^2}$$
$$E\left[ x_2|z_2 \right] = \frac{z_2 + \dfrac{\hat{g}_1''(z_2)}{2\hat{g}(z_2)}\left( \sigma_{x_2}^2 - \sigma_{z_2}^2 \right) + \dfrac{\hat{g}_1^{(4)}(z_2)}{8\hat{g}(z_2)}\left( \sigma_{x_2}^2 - \sigma_{z_2}^2 \right)^2}{1 + \dfrac{\hat{g}''(z_2)}{2\hat{g}(z_2)}\left( \sigma_{x_2}^2 - \sigma_{z_2}^2 \right) + \dfrac{\hat{g}^{(4)}(z_2)}{8\hat{g}(z_2)}\left( \sigma_{x_2}^2 - \sigma_{z_2}^2 \right)^2} \qquad (49)$$
where:
$$s = 1, 2; \quad \hat{g}(z_s) = \exp\left( \sum_{k=2}^{K} \hat{\lambda}_{ks} x_s^k \right)\Bigg|_{x_s = z_s}$$
$$\hat{g}''(z_s) = \frac{d^2}{dx_s^2} \exp\left( \sum_{k=2}^{K} \hat{\lambda}_{ks} x_s^k \right)\Bigg|_{x_s = z_s}; \quad \hat{g}^{(4)}(z_s) = \frac{d^4}{dx_s^4} \exp\left( \sum_{k=2}^{K} \hat{\lambda}_{ks} x_s^k \right)\Bigg|_{x_s = z_s}$$
$$\hat{g}_1''(z_s) = \frac{d^2}{dx_s^2} \left[ x_s \exp\left( \sum_{k=2}^{K} \hat{\lambda}_{ks} x_s^k \right) \right]\Bigg|_{x_s = z_s}; \quad \hat{g}_1^{(4)}(z_s) = \frac{d^4}{dx_s^4} \left[ x_s \exp\left( \sum_{k=2}^{K} \hat{\lambda}_{ks} x_s^k \right) \right]\Bigg|_{x_s = z_s} \qquad (50)$$
and $\sigma_{x_1}^2$, $\sigma_{x_2}^2$ are the variances of the real and imaginary parts of the source signal, respectively. The variances of the real and imaginary parts of the equalized output are defined as $\sigma_{z_1}^2$ and $\sigma_{z_2}^2$, respectively, and are estimated by [20]:
$$\left\langle z_s^2 \right\rangle[n] = \left( 1 - \beta_{ENT} \right) \left\langle z_s^2 \right\rangle[n-1] + \beta_{ENT} \, z_s^2[n] \qquad (51)$$
where $\langle \cdot \rangle$ stands for the estimated expectation, $\left\langle z_s^2 \right\rangle[0] > 0$, $l$ stands for the $l$-th tap of the equalizer and $\beta_{ENT}$ is a positive step size parameter. The Lagrange multipliers $\hat{\lambda}_{ks}$ from (50) are given according to [20] by:
$$k(k-1) \, m_{(k-2)s} + 2 \hat{\lambda}_{ks} k^2 \, m_{(2k-2)s} + 2k \sum_{\substack{L=2 \\ L \neq k}}^{L=K} \hat{\lambda}_{Ls} \, L \, m_{(k+L-2)s} = 0, \qquad k = 2, 4, 6, \ldots, K \qquad (52)$$
where $m_{G1}$, $m_{G2}$ are the $G$-th moments of the real and imaginary parts of the source signal, respectively, defined by:
$$m_{Gs} = E\left[ x_s^G \right] \qquad (53)$$
According to [20], the equalizer's taps are updated only if $\hat{N}_s > \varepsilon$, where $\varepsilon$ is a small positive parameter and $\hat{N}_s = 1 + \frac{\hat{g}''(z_s)}{2\hat{g}(z_s)}\left( \sigma_{x_s}^2 - \sigma_{z_s}^2 \right) + \frac{\hat{g}^{(4)}(z_s)}{8\hat{g}(z_s)}\left( \sigma_{x_s}^2 - \sigma_{z_s}^2 \right)^2$.
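The recursive estimate $\left\langle z_s^2 \right\rangle[n] = (1 - \beta_{ENT})\left\langle z_s^2 \right\rangle[n-1] + \beta_{ENT} \, z_s^2[n]$ used above is plain exponential smoothing of the instantaneous power. A minimal sketch, with an illustrative step size and a synthetic zero-mean input (both assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1e-3   # illustrative step size, not a value from the paper
est = 1.0     # <z_s^2>[0] > 0, as required by the text

# <z^2>[n] = (1 - beta) * <z^2>[n-1] + beta * z[n]^2
for z in rng.normal(0.0, 2.0, 200_000):
    est = (1 - beta) * est + beta * z ** 2

print(est)    # hovers around the true variance, 4.0
```

The step size trades tracking speed against estimation variance, which is one reason the simulations below use different $\beta$ values per algorithm and per SNR.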
The equalizer’s taps for Godard’s algorithm [39] were updated according to:
$$c_l[n+1] = c_l[n] - \mu_G \, z[n] \left( \left| z[n] \right|^2 - \frac{E\left[ |x|^4 \right]}{E\left[ |x|^2 \right]} \right) y^*[n-l] \qquad (54)$$
where $\mu_G$ is a positive step size parameter. The equalizer's taps for [52] were updated according to:
$$c_l[n+1] = c_l[n] - \mu_{NEW} \, W \, y^*[n-l] \qquad (55)$$
with W given in (48), but with:
$$E\left[ x_1|z_1 \right] \simeq z_1[n] + \frac{\bar{g}_1''(z_1)}{2\bar{g}(z_1)}\left( \sigma_{z_1}^2 - \sigma_{x_1}^2 \right) + \frac{\bar{g}_1^{(4)}(z_1)}{8\bar{g}(z_1)}\left( \sigma_{z_1}^2 - \sigma_{x_1}^2 \right)^2$$
$$E\left[ x_2|z_2 \right] \simeq z_2[n] + \frac{\bar{g}_1''(z_2)}{2\bar{g}(z_2)}\left( \sigma_{z_2}^2 - \sigma_{x_2}^2 \right) + \frac{\bar{g}_1^{(4)}(z_2)}{8\bar{g}(z_2)}\left( \sigma_{z_2}^2 - \sigma_{x_2}^2 \right)^2 \qquad (56)$$
where:
$$s = 1, 2; \quad \bar{g}(z_s) = \exp\left( \sum_{k=2}^{K} \bar{\lambda}_{ks} z_s^k \right)$$
$$\bar{g}_1''(z_s) = \frac{d^2}{dx_s^2}\left[ x_s \exp\left( \sum_{k=2}^{K} \bar{\lambda}_{ks} x_s^k \right) \right]\Bigg|_{x_s = z_s}; \quad \bar{g}_1^{(4)}(z_s) = \frac{d^4}{dx_s^4}\left[ x_s \exp\left( \sum_{k=2}^{K} \bar{\lambda}_{ks} x_s^k \right) \right]\Bigg|_{x_s = z_s} \qquad (57)$$
with $\bar{\lambda}_{ks}$ for $k = 2, 4$ given according to [52] by solving the following equations:
$$6 + 24 \bar{\lambda}_{2s} m_{2s} + 80 \bar{\lambda}_{4s} m_{4s} = 0$$
$$60 m_{2s} + 224 \bar{\lambda}_{4s} m_{6s} + 80 \bar{\lambda}_{2s} m_{4s} = 0 \qquad (58)$$
and:
$$\left\langle z_s^2 \right\rangle[n] = \left( 1 - \beta_{NEW} \right) \left\langle z_s^2 \right\rangle[n-1] + \beta_{NEW} \, z_s^2[n] \qquad (59)$$
$\beta_{NEW}$ and $\mu_{NEW}$ are positive step size parameters. The equalizer's taps of our newly-derived blind adaptive equalizer with the newly-derived Lagrange multipliers (43) were updated according to:
$$c_l[n+1] = c_l[n] - \mu_{ANEW} \, W \, y^*[n-l] \qquad (60)$$
with W given in (48), but with:
$$E\left[ x_1|z_1 \right] \simeq \left[ 1 + \left( \varepsilon_0 + \varepsilon_2 z_1^2 + \varepsilon_4 z_1^4 \right) + \frac{1}{2}\left( \varepsilon_0 + \varepsilon_2 z_1^2 + \varepsilon_4 z_1^4 \right)^2 \right] z_1 + \frac{\sigma_{p_1}^2}{2}\,\frac{g_1''(z_1)}{g(z_1)} + \frac{\left( \sigma_{p_1}^2 \right)^2}{8}\,\frac{g_1^{(4)}(z_1)}{g(z_1)}$$
$$E\left[ x_2|z_2 \right] \simeq \left[ 1 + \left( \varepsilon_0 + \varepsilon_2 z_2^2 + \varepsilon_4 z_2^4 \right) + \frac{1}{2}\left( \varepsilon_0 + \varepsilon_2 z_2^2 + \varepsilon_4 z_2^4 \right)^2 \right] z_2 + \frac{\sigma_{p_2}^2}{2}\,\frac{g_1''(z_2)}{g(z_2)} + \frac{\left( \sigma_{p_2}^2 \right)^2}{8}\,\frac{g_1^{(4)}(z_2)}{g(z_2)}$$
where, for $s = 1, 2$:
$$\frac{g_1''(z_s)}{g(z_s)} = 2 z_s \left( 8 z_s^6 \lambda_4^2 + 8 z_s^4 \lambda_2 \lambda_4 + 2 z_s^2 \lambda_2^2 + 10 z_s^2 \lambda_4 + 3 \lambda_2 \right)$$
$$\frac{g_1^{(4)}(z_s)}{g(z_s)} = 4 z_s \left( 64 z_s^{12} \lambda_4^4 + 128 z_s^{10} \lambda_2 \lambda_4^3 + 96 z_s^8 \lambda_2^2 \lambda_4^2 + 352 z_s^8 \lambda_4^3 + 32 z_s^6 \lambda_2^3 \lambda_4 + 432 z_s^6 \lambda_2 \lambda_4^2 + 4 z_s^4 \lambda_2^4 + 168 z_s^4 \lambda_2^2 \lambda_4 + 348 z_s^4 \lambda_4^2 + 20 z_s^2 \lambda_2^3 + 180 z_s^2 \lambda_2 \lambda_4 + 15 \lambda_2^2 + 30 \lambda_4 \right)$$
$$\sigma_{p_s}^2 = \sigma_{z_s}^2 - \sigma_{x_s}^2 \qquad (61)$$
$$\left\langle z_s^2 \right\rangle[n] = \left( 1 - \beta_{ANEW} \right) \left\langle z_s^2 \right\rangle[n-1] + \beta_{ANEW} \, z_s^2[n] \qquad (62)$$
and $\varepsilon_0$, $\varepsilon_2$, $\varepsilon_4$ and $\lambda_2$, $\lambda_4$ are given in (13) and (43), respectively. $\beta_{ANEW}$ and $\mu_{ANEW}$ are positive step size parameters. In the following, we denote by "MaxEnt", "MaxEntNEW", "MaxEntANEW" and "Godard" the algorithms given in [20], [52], (60) and [39], respectively. For the "MaxEnt", "MaxEntNEW" and "MaxEntANEW" algorithms, we used $E[z_s^2] = E[x_s^2]$ for initialization. The following channel was considered: Channel 1 (initial ISI = 0.44), taken according to [1]: $h_n = 0$ for $n < 0$; $-0.4$ for $n = 0$; $0.84 \cdot 0.4^{n-1}$ for $n > 0$.
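The closed-form ratio $g_1''(z_s)/g(z_s)$ appearing in the "MaxEntANEW" conditional expectation above can be checked against a direct numerical second derivative of $g_1(x) = x \exp(\lambda_2 x^2 + \lambda_4 x^4)$. The multiplier values and evaluation point below are illustrative placeholders, not the values obtained from (43):

```python
import numpy as np

# Illustrative (hypothetical) Lagrange multipliers and evaluation point
l2, l4 = -0.05, 0.002
z = 0.7

g  = lambda x: np.exp(l2 * x**2 + l4 * x**4)   # maximum entropy pdf kernel
g1 = lambda x: x * g(x)

def second_derivative(f, x, h=1e-3):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# closed-form polynomial ratio g1''(z)/g(z) from the text
closed = 2 * z * (8 * z**6 * l4**2 + 8 * z**4 * l2 * l4
                  + 2 * z**2 * l2**2 + 10 * z**2 * l4 + 3 * l2)
numeric = second_derivative(g1, z) / g(z)
print(closed, numeric)   # the two values agree to several decimal places
```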
We used an equalizer with 13 taps. The equalizer was initialized by setting the center tap equal to one and all others to zero. Two input sources were considered: a 16QAM source (a modulation using ±{1,3} levels for the in-phase and quadrature components) and another complex input whose real and imaginary parts are independent random processes uniformly distributed within [−1, +1]. Figure 2 shows the equalization performance of our new proposed equalization method ("MaxEntANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for the high SNR case (SNR = 30 dB), compared to the equalization performance obtained from the maximum entropy [20,52] and Godard's [39] algorithms. Please note that for Figure 2, the step size parameters of the algorithms were chosen for fast convergence with low steady-state ISI. According to the simulation results (Figure 2), our new proposed algorithm ("MaxEntANEW" (60)) has better equalization performance, namely a much lower residual ISI, compared to Godard's [39] algorithm, but at the same time leaves the system with a higher residual ISI compared to the maximum entropy algorithm [20] and the algorithm given in [52]. Figure 3 shows the equalization performance of our new proposed equalization method ("MaxEntANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 15 dB, compared to the equalization performance obtained from [52] and Godard's [39] algorithm. Please note that the maximum entropy algorithm [20] is applicable only for the high SNR case. Therefore, we ignore this algorithm [20] for the lower SNR cases. According to Figure 3, the "MaxEntNEW" algorithm [52] does not converge with the step size parameters used for the case of SNR = 30 dB (Figure 2).
However, unlike the "MaxEntNEW" algorithm [52], our new proposed algorithm ("MaxEntANEW" (60)) and "Godard" [39] work very well according to Figure 3. In addition, based on Figure 3, our new proposed algorithm ("MaxEntANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 8 dB compared to "Godard's" [39] method. It is worth noting that if an algorithm does not diverge over 50 trials, this does not mean that it will not diverge over 100 or 200 trials, for example. Now, by decreasing the step size parameter ($\beta_{NEW}$) for the "MaxEntNEW" algorithm [52], the algorithm converges (Figure 4). However, as can be seen from Figure 4, our new proposed algorithm ("MaxEntANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 8 dB and 7 dB compared to "Godard's" [39] method and the "MaxEntNEW" [52] algorithm, respectively. Figure 5, Figure 6 and Figure 7 show the equalization performance of our new proposed equalization method ("MaxEntANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 10 dB, compared to the equalization performance obtained from [52] and Godard's [39] algorithm. According to Figure 5, Figure 6 and Figure 7, the "MaxEntNEW" [52] algorithm does not converge despite the fact that its step size parameters were decreased. Please note that the other algorithms ("MaxEntANEW" and "Godard") continued to work well with the original step size parameters from the case of SNR = 30 dB. Thus, we have seen so far from Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 the robustness to SNR of our new proposed algorithm ("MaxEntANEW" (60)). Once the step size is optimized for the very high SNR case, no changes are needed in the step size parameters for the lower SNR cases.
Figure 8 and Figure 9 show the equalization performance of our new proposed equalization method ("MaxEntANEW" (60)), namely the ISI as a function of the iteration number for the 16QAM constellation input sent via Channel 1 for SNR = 10 dB and SNR = 7 dB, respectively, compared to the equalization performance obtained with Godard's [39] algorithm. According to Figure 8 and Figure 9, our new proposed algorithm ("MaxEntANEW" (60)) shows an equalization improvement, from the residual ISI point of view, of approximately 6 dB compared to "Godard's" [39] method for both cases (SNR = 10 dB and SNR = 7 dB). Now, "Godard's" [39] method can achieve better equalization performance for SNR = 7 dB if the step size parameter is changed, as is shown in Figure 10. However, even then (Figure 10), "Godard's" [39] method does not achieve better equalization performance than the "MaxEntANEW" algorithm. Finally, we turn to check our new proposed algorithm ("MaxEntANEW" (60)) with a different input sequence. Figure 11, Figure 12 and Figure 13 show the equalization performance of our new proposed equalization method ("MaxEntANEW" (60)), namely the ISI as a function of the iteration number for a uniformly-distributed input sequence within [−1, +1] sent via Channel 1 for SNR = 30 dB, SNR = 15 dB and SNR = 10 dB, respectively, compared to the equalization performance obtained with Godard's [39] algorithm. According to Figure 11, Figure 12 and Figure 13, our new proposed algorithm ("MaxEntANEW" (60)) either has better equalization performance, from the residual ISI point of view, compared to Godard's [39] algorithm (Figure 11 and Figure 12) or has the same equalization performance as "Godard" [39] (Figure 13).
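To make the simulation setup concrete, the following minimal sketch runs only Godard's (CMA) tap update over Channel 1 with a 16QAM source and a 13-tap, center-spike-initialized equalizer, and reports the residual ISI of the combined channel-equalizer response. The truncated channel tail, the noiseless channel and the run length are simplifications chosen for illustration, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Channel 1 (truncated tail): h[0] = -0.4, h[n] = 0.84 * 0.4**(n-1) for n > 0
h = np.concatenate(([-0.4], 0.84 * 0.4 ** np.arange(0, 8)))

def isi_db(c, h):
    """Residual ISI (in dB) of the combined channel-equalizer response."""
    p = np.abs(np.convolve(h, c)) ** 2
    return 10 * np.log10((p.sum() - p.max()) / p.max())

# 16QAM source: independent +/-1, +/-3 levels on each component
N = 30000
levels = np.array([-3.0, -1.0, 1.0, 3.0])
x = rng.choice(levels, N) + 1j * rng.choice(levels, N)
y = np.convolve(h, x)[:N]           # noiseless channel output, for brevity

# 13-tap equalizer, center tap set to one, all others zero
L = 13
c = np.zeros(L, dtype=complex)
c[L // 2] = 1.0
isi_start = isi_db(c, h)

# Godard (CMA) tap update: c <- c - mu * z * (|z|^2 - R2) * y*
R2 = np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2)
mu = 7e-5
for n in range(L - 1, N):
    w = y[n - L + 1 : n + 1][::-1]  # [y[n], y[n-1], ..., y[n-L+1]]
    z = c @ w
    c = c - mu * z * (np.abs(z) ** 2 - R2) * np.conj(w)

print(isi_start, isi_db(c, h))      # the residual ISI should drop
```

Adding complex AWGN to y at the desired SNR, and replacing the CMA error term with W from (48), reproduces the structure of the other update rules compared above.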
In blind equalization, the desired signal is unknown to the receiver, except for its probabilistic or statistical properties over some known alphabet. As both the channel and its input are unknown, the objective of blind equalization is to recover the unknown input sequence based solely on its probabilistic and statistical properties; see [2]. In this paper, we used the maximum entropy density approximation technique for approximating the input and output pdfs. Thus, the input pdf was a function of the source signal and of some Lagrange multipliers, while the output pdf was a function of the equalized output sequence and of some Lagrange multipliers different from those used for the input pdf. The Lagrange multipliers used for the input pdf depend only on the source moments; namely, they are based solely on the statistical properties of the input sequence. The Lagrange multipliers used for the output pdf depend on the Lagrange multipliers related to the input pdf, on the convolutional noise power and on the channel noise power seen at the equalized output. Obviously, if there is no channel noise and the equalizer succeeds in reducing the convolutional noise to zero (which means that there is no residual ISI), the Lagrange multipliers related to the input sequence are the same as those related to the equalized output sequence. The closed-form approximated expression for the conditional expectation was obtained by using the above-mentioned approximations for the input and output pdfs. It depends on the statistical properties of the input sequence (the input variance and some higher moments), which are also needed, for example, in Godard's algorithm [39].
The newly derived equalizer based on the newly derived expression for the conditional expectation does not need to know whether the sent symbol was 1 + 3j or 1 − 3j, for example, as is needed in the non-blind case; nor does it need to know whether the input sequence belongs to a 16QAM constellation or to a uniformly-distributed input within [−1, +1]. However, the statistical properties of the input sequence need to be known, as is also the case in Godard's algorithm [39]. This means that when the input sequence changes from a 16QAM constellation to a uniformly-distributed input within [−1, +1], the algorithm needs the new input sequence's statistical properties, which, again, is also the case in Godard's algorithm [39]. As already mentioned above, the input pdf was derived as a function of the source signal and of some Lagrange multipliers that depend on the statistical properties of the input sequence (the input variance and some higher moments). Thus, for each input sequence with different statistical properties, we obtain different Lagrange multipliers and, therefore, also a different input pdf. Thus, we have not considered in this work a specific input pdf, but rather an approximation for the input pdf that can be applied to a very wide range of input signals with different statistical properties (and thus different input pdfs). In this work, we considered only the following Lagrange multipliers for the input pdf: $\lambda_k$ for $k = 0, 2, 4$. It is reasonable to think that the use of only three Lagrange multipliers ($\lambda_k$ for $k = 0, 2, 4$) in the approximation of the input pdf may approximate the true input pdf less accurately for some input signals than for others. This may explain the simulation results we have seen for the 16QAM input constellation and for the uniformly-distributed input within [−1, +1].
It is also reasonable to think that the use of four Lagrange multipliers ($\lambda_k$ for $k = 0, 2, 4, 6$) or more, instead of only three, would have led to a more successful approximation of the input pdf for the uniformly-distributed input sequence within [−1, +1] and, thus, to further improved equalization performance. However, using more Lagrange multipliers increases the computational complexity of the algorithm, which is already much higher than that of Godard's algorithm [39].

5. Conclusions

In this paper, we derived new approximated closed-form expressions for the Lagrange multipliers ($\lambda_2$, $\lambda_4$, $\tilde{\lambda}_2$, $\tilde{\lambda}_4$) related to the input and output pdfs. In addition, we obtained new closed-form approximated expressions for the conditional expectation and MSE inspired by the maximum entropy density approximation technique. Based on the newly-derived expression for the conditional expectation, a new blind adaptive equalization method was obtained that is robust to SNR and is applicable for the whole range of SNR down to 7 dB. Simulation results have shown that our newly-obtained equalization method also achieves a significant equalization performance improvement, from the residual ISI point of view, compared to Godard's algorithm [39], the maximum entropy method [20] and the method of [52] for 7 dB ≤ SNR ≤ 15 dB.

Acknowledgments

We would like to thank the anonymous reviewers for their helpful comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Shalvi, O.; Weinstein, E. New criteria for blind deconvolution of non-minimum phase systems (channels). IEEE Trans. Inf. Theory 1990, 36, 312–321. [Google Scholar] [CrossRef]
  2. Johnson, R.C.; Schniter, P.; Endres, T.J.; Behm, J.D.; Brown, D.R.; Casas, R.A. Blind Equalization Using the Constant Modulus Criterion: A Review. Proc. IEEE 1998, 86, 1927–1950. [Google Scholar] [CrossRef]
  3. Wiggins, R.A. Minimum entropy deconvolution. Geoexploration 1978, 16, 21–35. [Google Scholar] [CrossRef]
  4. Kazemi, N.; Sacchi, M.D. Sparse multichannel blind deconvolution. Geophysics 2014, 79, V143–V152. [Google Scholar] [CrossRef]
  5. Guitton, A.; Claerbout, J. Nonminimum phase deconvolution in the log domain: A sparse inversion approach. Geophysics 2015, 80, WD11–WD18. [Google Scholar] [CrossRef]
  6. Silva, M.T.M.; Arenas-Garcia, J. A Soft-Switching Blind Equalization Scheme via Convex Combination of Adaptive Filters. IEEE Trans. Signal Process. 2013, 61, 1171–1182. [Google Scholar] [CrossRef]
  7. Mitra, R.; Singh, S.; Mishra, A. Improved multi-stage clustering-based blind equalisation. IET Commun. 2011, 5, 1255–1261. [Google Scholar] [CrossRef]
  8. Gul, M.M.U.; Sheikh, S.A. Design and implementation of a blind adaptive equalizer using Frequency Domain Square Contour Algorithm. Digit. Signal Process. 2010, 20, 1697–1710. [Google Scholar]
  9. Sheikh, S.A.; Fan, P. New Blind Equalization techniques based on improved square contour algorithm. Digit. Signal Process. 2008, 18, 680–693. [Google Scholar] [CrossRef]
  10. Thaiupathump, T.; He, L.; Kassam, S.A. Square contour algorithm for blind equalization of QAM signals. Signal Process. 2006, 86, 3357–3370. [Google Scholar] [CrossRef]
  11. Sharma, V.; Raj, V.N. Convergence and performance analysis of Godard family and multimodulus algorithms for blind equalization. IEEE Trans. Signal Process. 2005, 53, 1520–1533. [Google Scholar] [CrossRef]
  12. Yuan, J.T.; Lin, T.C. Equalization and Carrier Phase Recovery of CMA and MMA in BlindAdaptive Receivers. IEEE Trans. Signal Process. 2010, 58, 3206–3217. [Google Scholar] [CrossRef]
  13. Yuan, J.T.; Tsai, K.D. Analysis of the multimodulus blind equalization algorithm in QAM communication systems. IEEE Trans. Commun. 2005, 53, 1427–1431. [Google Scholar] [CrossRef]
  14. Wu, H.C.; Wu, Y.; Principe, J.C.; Wang, X. Robust switching blind equalizer for wireless cognitive receivers. IEEE Trans. Wirel. Commun. 2008, 7, 1461–1465. [Google Scholar]
  15. Kundur, D.; Hatzinakos, D. A novel blind deconvolution scheme for image restoration using recursive filtering. IEEE Trans. Signal Process. 1998, 46, 375–390. [Google Scholar] [CrossRef]
  16. Likas, C.L.; Galatsanos, N.P. A variational approach for Bayesian blind image deconvolution. IEEE Trans. Signal Process. 2004, 52, 2222–2233. [Google Scholar] [CrossRef]
  17. Li, D.; Mersereau, R.M.; Simske, S. Blind Image Deconvolution Through Support Vector Regression. IEEE Trans. Neural Netw. 2007, 18, 931–935. [Google Scholar] [CrossRef] [PubMed]
  18. Amizic, B.; Spinoulas, L.; Molina, R.; Katsaggelos, A.K. Compressive Blind Image Deconvolution. IEEE Trans. Image Process. 2013, 22, 3994–4006. [Google Scholar] [CrossRef] [PubMed]
  19. Tzikas, D.G.; Likas, C.L.; Galatsanos, N.P. Variational Bayesian Sparse Kernel-Based Blind Image Deconvolution With Student’s-t Priors. IEEE Trans. Image Process. 2009, 18, 753–764. [Google Scholar] [CrossRef] [PubMed]
  20. Pinchas, M.; Bobrovsky, B.Z. A Maximum Entropy approach for blind deconvolution. Signal Process. 2006, 86, 2913–2931. [Google Scholar] [CrossRef]
  21. Feng, C.; Chi, C. Performance of cumulant based inverse filters for blind deconvolution. IEEE Trans. Signal Process. 1999, 47, 1922–1935. [Google Scholar] [CrossRef]
  22. Abrar, S.; Nandi, A.S. Blind Equalization of Square-QAM Signals: A Multimodulus Approach. IEEE Trans. Commun. 2010, 58, 1674–1685. [Google Scholar] [CrossRef]
  23. Vanka, R.N.; Murty, S.B.; Mouli, B.C. Performance comparison of supervised and unsupervised/blind equalization algorithms for QAM transmitted constellations. In Proceedings of 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014.
  24. Ram Babu, T.; Kumar, P.R. Blind Channel Equalization Using CMA Algorithm. In Proceedings of 2009 International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom 09), Kottayam, India, 27–28 October 2009.
  25. Qin, Q.; Huahua, L.; Tingyao, J. A new study on VCMA-based blind equalization for underwater acoustic communications. In Proceedings of 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC), Shengyang, China, 20–22 December 2013.
  26. Wang, J.; Huang, H.; Zhang, C.; Guan, J. A Study of the Blind Equalization in the Underwater Communication. In Proceedings of WRI Global Congress on Intelligent Systems, GCIS ’09, Xiamen, China, 19–21 May 2009.
  27. Miranda, M.D.; Silva, M.T.M.; Nascimento, V.H. Avoiding Divergence in the Shalvi Weinstein Algorithm. IEEE Trans. Signal Process. 2008, 56, 5403–5413. [Google Scholar] [CrossRef]
  28. Samarasinghe, P.D.; Kennedy, R.A. Minimum Kurtosis CMA Deconvolution for Blind Image Restoration. In Proceedings of the 4th International Conference on Information and Automation for Sustainability, ICIAFS 2008, Colombo, Sri Lanka, 12–14 December 2008.
  29. Fijalkow, I.; Touzni, A.; Treichler, J.R. Fractionally-Spaced Equalization using CMA: Robustness to Channel Noise and Lack of Disparity. IEEE Trans. Signal Process. 1997, 45, 56–66. [Google Scholar] [CrossRef]
  30. Fijalkow, I.; Manlove, C.E.; Johnson, C.R. Adaptive Fractionally Spaced Blind CMA Equalization: Excess MSE. IEEE Trans. Signal Process. 1998, 46, 227–231. [Google Scholar] [CrossRef]
  31. Moazzen, I.; Doost-Hoseini, A.M.; Omidi, M.J. A novel blind frequency domain equalizer for SIMO systems. In Proceedings of Wireless Communications and Signal Processing, WCSP, Nanjing, China, 13–15 November 2009.
  32. Peng, D.; Xiang, Y.; Yi, Z.; Yu, S. CM-Based Blind Equalization of Time-Varying SIMO-FIR Channel With Single Pulsation Estimation. IEEE Trans. Veh. Technol. 2011, 60, 2410–2415. [Google Scholar] [CrossRef]
  33. Coskun, A.; Kale, I. Blind Multidimensional Matched Filtering Techniques for Single Input Multiple Output Communications. IEEE Trans. Instrum. Meas. 2010, 59, 1056–1064. [Google Scholar] [CrossRef]
  34. Romano, J.M.T.; Attux, R.; Cavalcante, C.C.; Suyama, R. Unsupervised Signal Processing: Channel Equalization and Source Separation; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  35. Chen, S.; Wolfgang, A.; Hanzo, L. Constant Modulus Algorithm Aided Soft Decision Directed Scheme for Blind Space-Time Equalization of SIMO Channels. Signal Process. 2007, 87, 2587–2599. [Google Scholar] [CrossRef]
  36. Pinchas, M. Two Blind Adaptive Equalizers Connected in Series for Equalization Performance Improvement. J. Signal Inf. Process. 2013, 4, 64–71. [Google Scholar] [CrossRef]
  37. Nikias, C.L.; Petropulu, A.P. (Eds.) Higher-Order Spectra Analysis: A Nonlinear Signal Processing Framework; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993; pp. 419–425.
  38. Haykin, S. Adaptive Filter Theory. In Blind Deconvolution; Haykin, S., Ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991. [Google Scholar]
  39. Godard, D.N. Self-recovering equalization and carrier tracking in two-dimensional data communication systems. IEEE Trans. Commun. 1980, 28, 1867–1875. [Google Scholar] [CrossRef]
  40. Lazaro, M.; Santamaria, I.; Erdogmus, D.; Hild, K.E.; Pantaleon, C.; Principe, J.C. Stochastic blind equalization based on pdf fitting using parzen estimator. IEEE Trans. Signal Process. 2005, 53, 696–704. [Google Scholar] [CrossRef]
  41. Sato, Y. A method of self-recovering equalization for multilevel amplitude-modulation systems. IEEE Trans. Commun. 1975, 23, 679–682. [Google Scholar] [CrossRef]
  42. Beasley, A.; Cole-Rhodes, A. Performance of an adaptive blind equalizer for QAM signals. In Proceedings of IEEE Military Communications Conference, MILCOM 2005, Atlantic City, NJ, USA, 17–20 October 2005.
  43. Alaghbari, K.A.A.; Tan, A.W.C.; Lim, H.S. Cost function of blind channel equalization. In Proceedings of the 4th International Conference on Intelligent and Advanced Systems (ICIAS), Kuala Lumpur, Malaysia, 12–14 June 2012.
  44. Daas, A.; Hadef, M.; Weiss, S. Adaptive blind multiuser equalizer based on pdf matching. In Proceedings of the International Conference on Telecommunications, ICT ’09, Marrakech, Morocco, 25–27 May 2009.
  45. Giunta, G.; Benedetto, F. A signal processing algorithm for multi-constant modulus equalization. In Proceedings of the 36th International Conference on Telecommunications and Signal Processing (TSP), Rome, Italy, 2–4 July 2013.
  46. Daas, A.; Weiss, S. Blind adaptive equalizer based on pdf matching for Rayleigh time-varying channels. In Proceedings of the Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 7–10 November 2010.
  47. Abrar, S. A New Cost Function for the Blind Equalization of Cross-QAM Signals. In Proceedings of The 17th International Conference on Microelectronics, ICM 2005, Islamabad, Pakistan, 13–15 December 2005.
  48. Blom, K.C.H.; Gerards, M.E.T.; Kokkeler, A.B.J.; Smit, G.J.M. Nonminimum-phase channel equalization using all-pass CMA. In Proceedings of the IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013.
  49. Bellini, S. Bussgang techniques for blind equalization. In Proceedings of IEEE Global Telecommunication Conference Records, Houston, TX, USA, 1–4 December 1986.
  50. Bellini, S. Blind Equalization. Alta Freq. 1988, 57, 445–450. [Google Scholar]
  51. Fiori, S. A contribution to (neuromorphic) blind deconvolution by flexible approximated Bayesian estimation. Signal Process. 2001, 81, 2131–2153. [Google Scholar] [CrossRef]
  52. Pinchas, M. 16QAM Blind Equalization Method via Maximum Entropy Density Approximation Technique. In Proceedings of the IEEE 2011 International Conference on Signal and Information Processing (CSIP2011), Shanghai, China, 28–30 October 2011.
  53. Pinchas, M.; Bobrovsky, B.Z. A Novel HOS Approach for Blind Channel Equalization. IEEE Trans. Wirel. Commun. 2007, 6, 875–886. [Google Scholar] [CrossRef]
  54. Gitlin, R.D.; Hayes, J.F.; Weinstein, S.B. Automatic and adaptive equalization, Data Communications Principles; Springer: New York, NY, USA, 1992; pp. 587–590. [Google Scholar]
  55. Im, G.-H.; Park, C.J.; Won, H.C. A blind equalization with the sign algorithm for broadband access. IEEE Commun. Lett. 2001, 5, 70–72. [Google Scholar]
  56. Haykin, S. Adaptive Filter Theory. In Blind Deconvolution, 3rd ed.; Haykin, S., Ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 2009; p. 789. [Google Scholar]
  57. Nandi, A.K. (Ed.) Blind Estimation Using Higher-Order Statistics; Kluwer Academic: Boston, MA, USA, 1999.
  58. Pinchas, M. A Closed Approximated Formed Expression for the Achievable Residual Intersymbol Interference Obtained by Blind Equalizers. Signal Process. 2010, 90, 1940–1962. [Google Scholar] [CrossRef]
  59. Godfrey, R.; Rocca, F. Zero memory non-linear deconvolution. Geophys. Prospect. 1981, 29, 189–228. [Google Scholar] [CrossRef]
  60. Papoulis, A. Probability, Random Variables, and Stochastic Processes, International Student ed.; McGraw-Hill: New York, NY, USA, 1965; p. 189. [Google Scholar]
  61. Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers; International Series in Pure and Applied Mathematics; McGraw-Hill: New York, NY, USA, 1978.
  62. Spiegel, M.R. Mathematical Handbook of Formulas and Tables, SCHAUM’S Outline Series; McGraw-Hill: New York, NY, USA, 1968. [Google Scholar]
Figure 1. Block diagram of the system.
Figure 2. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for SNR = 30 dB. $\mu_{NEW} = 0.0001$, $\beta_{NEW} = 1 \times 10^{-4}$, $\mu_{ENT} = 3 \times 10^{-4}$, $\beta_{ENT} = 2 \times 10^{-4}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 3. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 10 Monte Carlo trials for SNR = 15 dB. $\mu_{NEW} = 0.0001$, $\beta_{NEW} = 1 \times 10^{-4}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 4. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for SNR = 15 dB. $\mu_{NEW} = 0.0001$, $\beta_{NEW} = 1 \times 10^{-5}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 5. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The results were obtained in one Monte Carlo trial for SNR = 10 dB. $\mu_{NEW} = 0.0001$, $\beta_{NEW} = 1 \times 10^{-5}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 6. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 10 Monte Carlo trials for SNR = 10 dB. $\mu_{NEW} = 0.00009$, $\beta_{NEW} = 1 \times 10^{-5}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 7. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 10 Monte Carlo trials for SNR = 10 dB. $\mu_{NEW} = 0.00007$, $\beta_{NEW} = 1 \times 10^{-5}$, $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 8. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for SNR = 10 dB. $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 9. Performance comparison between equalization algorithms for a 16QAM source input going through Channel 1. The averaged results were obtained in 50 Monte Carlo trials for SNR = 7 dB. $\mu_G = 7 \times 10^{-5}$, $\mu_{ANEW} = 0.00009$, $\beta_{ANEW} = 1 \times 10^{-5}$.
Figure 10. Performance comparison between equalization algorithms for a 16-QAM source input going through Channel 1. The averaged results were obtained over 50 Monte Carlo trials at SNR = 7 dB. μ_G = 2.5 × 10⁻⁵, μ_ANEW = 0.00008, β_ANEW = 1 × 10⁻⁵.
Figure 11. Performance comparison between equalization algorithms for a uniformly-distributed source input within [−1, +1] going through Channel 1. The averaged results were obtained over 50 Monte Carlo trials at SNR = 30 dB. μ_G = 0.002, μ_ANEW = 0.00055, β_ANEW = 1 × 10⁻⁶.
Figure 12. Performance comparison between equalization algorithms for a uniformly-distributed source input within [−1, +1] going through Channel 1. The averaged results were obtained over 50 Monte Carlo trials at SNR = 15 dB. μ_G = 0.002, μ_ANEW = 0.00055, β_ANEW = 1 × 10⁻⁶.
Figure 13. Performance comparison between equalization algorithms for a uniformly-distributed source input within [−1, +1] going through Channel 1. The averaged results were obtained over 50 Monte Carlo trials at SNR = 10 dB. μ_G = 0.002, μ_ANEW = 0.00055, β_ANEW = 1 × 10⁻⁶.

Pinchas, M. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case. Entropy 2016, 18, 65. https://doi.org/10.3390/e18030065