Article

Generalized Maximum Complex Correntropy Augmented Adaptive IIR Filtering

College of Electronic and Information Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(7), 1008; https://doi.org/10.3390/e24071008
Submission received: 6 June 2022 / Revised: 17 July 2022 / Accepted: 18 July 2022 / Published: 21 July 2022

Abstract

Augmented IIR adaptive filtering algorithms, which are suitable for both proper and improper complex-valued signals, have been considered in many studies. However, most augmented IIR adaptive algorithms are developed under the mean square error (MSE) criterion. MSE is an ideal optimality criterion under Gaussian noise but fails to model the behavior of the non-Gaussian noise found in practice. As a similarity measure for complex random variables, complex correntropy has shown robustness to non-Gaussian noise in the design of adaptive filters. In this paper, we propose a new augmented IIR adaptive filtering algorithm based on the generalized maximum complex correntropy criterion (GMCCC-AIIR), which employs the complex generalized Gaussian density function as the kernel function. Stability analysis provides a bound on the learning rate, and simulation results verify the superiority of the proposed algorithm.

1. Introduction

Complex-valued adaptive filtering algorithms have a wide range of engineering applications in radio systems [1], system identification [2], environmental signal processing [3], and other fields. Generally speaking, a complex-valued adaptive filtering algorithm is an extension of its real-valued counterpart. A complex-valued random variable is second-order circular (or proper) [3] if its first- and second-order statistics are invariant under rotation in the complex plane; for such signals, the covariance matrix $\mathbf{C}_{xx} = E[\mathbf{x}(n)\mathbf{x}^H(n)]$ fully characterizes the second-order statistics, and the performance of a strictly linear adaptive filter is optimal. In most cases, however, complex signals are noncircular (or improper) [3].
In order to suit both proper and improper complex-valued signals, augmented complex statistics have been proposed. Many adaptive filtering algorithms based on augmented complex statistics exist, such as the augmented complex least mean square (ACLMS) [4], the augmented complex adaptive infinite impulse response (IIR) algorithm (ACA-IIR) [5], the diffusion augmented complex adaptive IIR algorithm (DACA-IIR) [6], and the incremental augmented complex adaptive IIR algorithm (IACA-IIR) [7]. These algorithms are based on the mean square error (MSE) criterion, which is mathematically tractable, computationally simple, and optimal under Gaussian assumptions [8]. However, MSE-based algorithms may perform poorly or encounter instability when the signal is disturbed by non-Gaussian noise [9,10]. From a statistical point of view, the mean square error is not sufficient to capture all the information in a non-Gaussian signal. In practical applications, non-Gaussian noise is common; examples of non-Gaussian impulsive noise include non-synchronization in digital recording, motor ignition noise in internal combustion engines, and lightning spikes in natural phenomena [11,12].
Entropy generally describes a measure of uncertainty of a real random variable, and as a functional analysis tool, the entropy of a signal can characterize noise without resorting to a threshold criterion [13]. Correntropy is an extension of entropy: it quantifies how similar two complex random variables are in a neighborhood of the joint space controlled by the kernel bandwidth. Compared with MSE-based algorithms, correntropy-based algorithms are superior under non-Gaussian noise. Correntropy usually employs the Gaussian function as the kernel [14,15], because it is smooth and strictly positive definite. However, the Gaussian kernel is not always an appropriate choice. Recently, He et al. [16] and Chen et al. [17] extended it to more general cases and proposed the generalized maximum correntropy criterion (GMCC) algorithm, which has strong generality and flexibility, and Qian et al. [18] proposed the GMCCC algorithm based on the generalized maximum complex correntropy criterion, which uses the complex generalized Gaussian density (CGGD) function as the kernel of the complex correntropy. It inherits the excellent characteristics of GMCC and can also handle complex signals. These correntropy algorithms are finite impulse response (FIR) widely linear adaptive filtering algorithms, but when an FIR filter needs a large number of coefficients to obtain satisfactory filtering performance, the FIR widely linear model may not be appropriate.
Unlike its FIR counterpart, the memory depth of an IIR filter is independent of the filter order and the number of coefficients [19]. As a result, an IIR filter generally requires considerably fewer coefficients than the corresponding FIR filter to achieve a given level of performance. Thus, IIR adaptive filters are suitable for systems with memory, such as autoregressive moving average (ARMA) models. Navarro-Moreno et al. [20] developed an ARMA widely linear model with fixed coefficients. To derive a recursive algorithm for augmented complex adaptive IIR filtering, Took et al. [5] proposed the ACA-IIR to learn the parameters of a widely linear ARMA model.
Based on the generalized maximum complex correntropy criterion (GMCCC) and widely linear ARMA model, we propose a GMCCC algorithm variant, namely the GMCCC augmented adaptive IIR filtering algorithm (GMCCC-AIIR). We show that the GMCCC-AIIR is very flexible, with ACA-IIR, GMCCC, and ACLMS as its special cases. Stability analysis shows that GMCCC-AIIR always converges when the step-size satisfies the theoretical bound. Simulation results demonstrate the superiority of the GMCCC-AIIR algorithm.
The organization of the paper is as follows: Section 2 introduces and describes the augmented IIR system. Section 3 defines the generalized complex correntropy, derives the GMCCC-AIIR algorithm, and introduces a reduced-complexity version of the proposed algorithm. Section 4 analyzes the bounds on the step-size for convergence. The superiority of the GMCCC-AIIR algorithm is verified by simulations in Section 5, and the conclusion is drawn in Section 6.

2. Augmented IIR System

The signals used in communications are usually complex circular, whereas signals made complex by convenience of representation are more general and often noncircular. For the stochastic modeling of this kind of signal, Picinbono et al. introduced a widely linear moving average (MA) model, given by [21]:
$$ y(n) = \sum_{m=0}^{N} b_m x(n-m) + \sum_{m=0}^{N} h_m x^*(n-m) $$
where $b_m$ and $h_m$ are filter coefficients. Based on this widely linear model, the ACLMS algorithm was proposed [22].
Since the FIR widely linear model is not always an optimal choice, Navarro-Moreno et al. introduced a fixed-coefficient widely linear ARMA model [20]:
$$ y(n) = \sum_{m=1}^{p} a_m y(n-m) + \sum_{m=0}^{q} b_m x(n-m) + \sum_{m=1}^{p} g_m y^*(n-m) + \sum_{m=0}^{q} h_m x^*(n-m) $$
where $a_m$ and $g_m$ are the feedback coefficients acting on the output and its conjugate, and $p$ and $q$ are the orders of the AR and MA parts, respectively. This model provides a theoretical basis for the recursive algorithm proposed for training adaptive IIR filters.
To introduce a recursive algorithm for the augmented complex adaptive IIR filter, Took et al. [5] give the output of the widely linear IIR filter in the following form:
$$ y(n) = \sum_{m=1}^{M} a_m(n) y(n-m) + \sum_{m=0}^{N} b_m(n) x(n-m) + \sum_{m=1}^{M} g_m(n) y^*(n-m) + \sum_{m=0}^{N} h_m(n) x^*(n-m) $$
where $M$ is the order of the feedback and $N$ is the length of the input. This model can be written compactly as:
$$ y(n) = \mathbf{w}^T(n)\mathbf{z}(n) $$
where:
$$ \mathbf{w}(n) = \left[a_1(n), \ldots, a_M(n), g_1(n), \ldots, g_M(n), b_0(n), \ldots, b_N(n), h_0(n), \ldots, h_N(n)\right]^T $$
$$ \mathbf{z}(n) = \left[\mathbf{y}^T(n), \mathbf{x}^T(n)\right]^T $$
in which $\mathbf{y}(n)$ stacks the delayed outputs and their conjugates, and $\mathbf{x}(n)$ stacks the delayed inputs and their conjugates.
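As a concrete illustration, the widely linear IIR recursion above can be sketched in a few lines (a minimal NumPy sketch with zero initial conditions; the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def widely_linear_iir_output(x, a, b, g, h):
    """Run the widely linear IIR recursion over an input sequence x.

    a, g: feedback coefficients on y(n-m) and y*(n-m), m = 1..M.
    b, h: feedforward coefficients on x(n-m) and x*(n-m), m = 0..N.
    Zero initial conditions are assumed for the output.
    """
    M, N = len(a), len(b) - 1
    y = np.zeros(len(x), dtype=complex)
    for n in range(len(x)):
        acc = 0j
        for m in range(1, M + 1):              # feedback terms
            if n - m >= 0:
                acc += a[m - 1] * y[n - m] + g[m - 1] * np.conj(y[n - m])
        for m in range(N + 1):                 # feedforward terms
            if n - m >= 0:
                acc += b[m] * x[n - m] + h[m] * np.conj(x[n - m])
        y[n] = acc
    return y
```

With all feedback coefficients removed, this reduces to the widely linear MA model of the previous subsection.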

3. Generalized Complex Correntropy and GMCCC-AIIR Algorithm

3.1. Generalized Complex Correntropy

For two complex random variables $C_1 = X + jY$ and $C_2 = Z + jS$, the complex correntropy is defined as [23]:
$$ V(C_1, C_2) = E\left[\kappa(C_1 - C_2)\right] $$
where $X, Y, Z, S$ are real random variables and $\kappa(C_1 - C_2)$ is the kernel function.
For the Gaussian kernel in the complex field [23], the kernel function can be expressed as:
$$ \kappa(C_1 - C_2) = G_C(C_1 - C_2) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(C_1 - C_2)(C_1 - C_2)^*}{2\sigma^2}\right) $$
where $\sigma$ is the kernel width.
In this paper, we employ a CGGD function as the kernel function, and its corresponding correntropy is named the generalized complex correntropy [18]:
$$ \kappa(C_1 - C_2) = G^C_{\alpha,\beta}(C_1 - C_2) = \frac{\alpha}{\pi\beta\,\Gamma(1/\alpha)}\exp\left(-\frac{\left[(C_1 - C_2)(C_1 - C_2)^*\right]^{\alpha}}{\beta^{\alpha}}\right) = \gamma_{\alpha,\beta}\exp\left(-\lambda\left[(C_1 - C_2)(C_1 - C_2)^*\right]^{\alpha}\right) $$
where $\alpha$ is the shape parameter, $\beta = 2\sigma^2\Gamma(1/\alpha)/\Gamma(2/\alpha)$ is the kernel width, $\lambda = 1/\beta^{\alpha}$, and $\gamma_{\alpha,\beta} = \alpha/(\pi\beta\,\Gamma(1/\alpha))$.
In this way, the generalized complex correntropy can be written as:
$$ V^C_{\alpha,\beta}(C_1, C_2) = E\left[G^C_{\alpha,\beta}(C_1 - C_2)\right] $$
In practice only finite samples $\{(c_1(i), c_2(i))\}_{i=1}^{N}$ are available, so we estimate the generalized complex correntropy by the sample mean:
$$ \hat{V}^C_{\alpha,\beta}(C_1, C_2) = \frac{1}{N}\sum_{i=1}^{N} G^C_{\alpha,\beta}(c_1(i) - c_2(i)) = \frac{1}{N}\sum_{i=1}^{N} G^C_{\alpha,\beta}(e(i)) $$
where $e(i) = c_1(i) - c_2(i)$.
In data analysis, the correntropic loss is often used instead of the correntropy itself, so we define the generalized complex correntropic loss as:
$$ J^C_{GCloss}(C_1, C_2) = G^C_{\alpha,\beta}(0) - V^C_{\alpha,\beta}(C_1, C_2) $$
Then, for finite samples, the generalized complex correntropic loss can be expressed as:
$$ \hat{J}^C_{GCloss}(C_1, C_2) = \gamma_{\alpha,\beta} - \frac{1}{N}\sum_{i=1}^{N} G^C_{\alpha,\beta}(c_1(i) - c_2(i)) = \gamma_{\alpha,\beta} - \frac{1}{N}\sum_{i=1}^{N} G^C_{\alpha,\beta}(e(i)) $$
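The sample estimators above translate directly into code. The following sketch assumes the normalization $\gamma_{\alpha,\beta} = \alpha/(\pi\beta\,\Gamma(1/\alpha))$ as written above; the helper names are hypothetical:

```python
import numpy as np
from math import gamma as gamma_fn   # Gamma function for the normalization

def generalized_complex_correntropy(e, alpha, beta):
    """Sample estimate of the generalized complex correntropy from the
    errors e(i) = c1(i) - c2(i), using the CGGD kernel
    G(e) = gamma_ab * exp(-lam * (e e*)^alpha) with lam = 1/beta^alpha."""
    lam = 1.0 / beta**alpha
    gamma_ab = alpha / (np.pi * beta * gamma_fn(1.0 / alpha))
    return np.mean(gamma_ab * np.exp(-lam * np.abs(e) ** (2 * alpha)))

def generalized_correntropic_loss(e, alpha, beta):
    """Correntropic loss J = G(0) - V_hat; zero if and only if all e(i) = 0."""
    gamma_ab = alpha / (np.pi * beta * gamma_fn(1.0 / alpha))
    return gamma_ab - generalized_complex_correntropy(e, alpha, beta)
```

Note that $(e\,e^*)^{\alpha} = |e|^{2\alpha}$, which is how the kernel argument is evaluated in the sketch.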
The generalized complex correntropy has the following properties [18].
Property 1.
$V^C_{\alpha,\beta}(C_1, C_2)$ is symmetric, i.e., $V^C_{\alpha,\beta}(C_1, C_2) = V^C_{\alpha,\beta}(C_2, C_1)$.
Property 2.
$V^C_{\alpha,\beta}(C_1, C_2)$ is bounded, $0 \le V^C_{\alpha,\beta}(C_1, C_2) \le \gamma_{\alpha,\beta}$, and achieves its maximum when $C_1 = C_2$.
On the basis of Properties 1 and 2, $J^C_{GCloss}(C_1, C_2)$ is symmetric and achieves its minimum when $C_1 = C_2$.
Property 3.
Given $\mathbf{e} = \left[e(1), e(2), \ldots, e(N)\right]^T$, the following conclusions about $\hat{J}^C_{GCloss}$ hold:
  • When $\alpha \ge 1/2$, $\hat{J}^C_{GCloss}$ is convex at any $\mathbf{e}$ with $|e(n)| \le \left[(2\alpha - 1)/(2\alpha\lambda)\right]^{1/(2\alpha)}$;
  • When $0 < \alpha < 1/2$, $\hat{J}^C_{GCloss}$ is non-convex at any $\mathbf{e}$ with $|e(n)| \neq 0$.

3.2. GMCCC-AIIR Algorithm

Based on the properties of the generalized complex correntropy, we define the cost function of the GMCCC-AIIR algorithm as:
$$ J^C_{GCloss} = G^C_{\alpha,\beta}(0) - E\left[G^C_{\alpha,\beta}(e(n))\right] = \gamma_{\alpha,\beta}\left(1 - E\left[\exp\left(-\lambda\left(e(n)e^*(n)\right)^{\alpha}\right)\right]\right) $$
where $e(n) = d(n) - y(n)$.
We can infer from (4) that $e(n) = d(n) - \mathbf{w}^T(n)\mathbf{z}(n)$, so $e^*(n) = d^*(n) - \mathbf{w}^H(n)\mathbf{z}^*(n)$. Then, we search for the optimal solution by the stochastic gradient descent method, i.e.,
$$ \mathbf{w}(n+1) = \mathbf{w}(n) - \eta\,\nabla_{\mathbf{w}}\left[1 - \exp\left(-\lambda\left(e(n)e^*(n)\right)^{\alpha}\right)\right] = \mathbf{w}(n) - \eta\alpha\lambda\exp\left(-\lambda|e(n)|^{2\alpha}\right)|e(n)|^{2\alpha-2}\,\nabla_{\mathbf{w}}\left[e(n)e^*(n)\right] $$
where the shape parameter $\alpha$ is chosen according to Property 3, so that the cost function is convex and the stochastic gradient descent method reaches the global optimum rather than a local one.
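The scalar factor multiplying the gradient in the update above acts as a data-dependent gain. A small sketch of this weighting (a hypothetical helper; the exponential term shrinks for large errors, which is the source of the algorithm's robustness to impulsive noise):

```python
import numpy as np

def gmccc_gain(e, alpha, lam):
    """Scalar gain multiplying the instantaneous gradient in the GMCCC update:
    f(e) = alpha * lam * exp(-lam * |e|^(2*alpha)) * |e|^(2*alpha - 2).
    The exponential factor decays for large |e|, so impulsive outliers are
    suppressed. For alpha < 1 the factor |e|^(2*alpha - 2) diverges at e = 0,
    so a nonzero error is assumed there."""
    ae = np.abs(e)
    return alpha * lam * np.exp(-lam * ae ** (2 * alpha)) * ae ** (2 * alpha - 2)
```

For $\alpha = 1$ this reduces to $f(e) = \lambda\exp(-\lambda|e|^2)$, the familiar complex-correntropy weighting.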
The gradients can be computed as [24]:
$$ \nabla_{\mathbf{w}}\left[e(n)e^*(n)\right] = -\left(e(n)\frac{\partial y^*(n)}{\partial \mathbf{w}(n)} + \frac{\partial y(n)}{\partial \mathbf{w}(n)}\,e^*(n)\right) = -\left(e(n)\Phi_w(n) + \Psi_w(n)\,e^*(n)\right) $$
where:
$$ \Phi_w(n) = \frac{\partial y^*(n)}{\partial \mathbf{w}_R(n)} + j\,\frac{\partial y^*(n)}{\partial \mathbf{w}_I(n)} $$
$$ \Psi_w(n) = \frac{\partial y(n)}{\partial \mathbf{w}_R(n)} + j\,\frac{\partial y(n)}{\partial \mathbf{w}_I(n)} $$
The gradient vectors (17) and (18) can be written as:
$$ \Phi_w(n) = \left[\Phi_{a_1}(n), \ldots, \Phi_{a_M}(n), \Phi_{g_1}(n), \ldots, \Phi_{g_M}(n), \Phi_{b_0}(n), \ldots, \Phi_{b_N}(n), \Phi_{h_0}(n), \ldots, \Phi_{h_N}(n)\right]^T $$
$$ \Psi_w(n) = \left[\Psi_{a_1}(n), \ldots, \Psi_{a_M}(n), \Psi_{g_1}(n), \ldots, \Psi_{g_M}(n), \Psi_{b_0}(n), \ldots, \Psi_{b_N}(n), \Psi_{h_0}(n), \ldots, \Psi_{h_N}(n)\right]^T $$
where the subscripts $R$ and $I$ denote the real and imaginary parts of complex quantities, respectively, and $j = \sqrt{-1}$. To calculate the gradient in (16), the entries of (17) and (18) must be computed separately, for example:
$$ \frac{\partial y^*(n)}{\partial a_m^R(n)} = y^*(n-m) + \sum_{l=1}^{M} a_l^*(n)\frac{\partial y^*(n-l)}{\partial a_m^R(n)} + \sum_{l=1}^{M} g_l^*(n)\frac{\partial y(n-l)}{\partial a_m^R(n)} $$
$$ \frac{\partial y^*(n)}{\partial a_m^I(n)} = -j\,y^*(n-m) + \sum_{l=1}^{M} a_l^*(n)\frac{\partial y^*(n-l)}{\partial a_m^I(n)} + \sum_{l=1}^{M} g_l^*(n)\frac{\partial y(n-l)}{\partial a_m^I(n)} $$
The feedback in the IIR system leads to the recursions on the right-hand side of (21) and (22); these derivatives of past outputs with respect to present weights are impossible to compute exactly. To circumvent this problem, for a small step-size we can use the approximation:
$$ \mathbf{w}(n) \approx \mathbf{w}(n-1) \approx \cdots \approx \mathbf{w}(n-\tau), \quad \tau = \max\{M, N+1\} $$
Thus, the entries of the gradient vector $\Phi_w(n)$ can be written in the following forms:
$$ \Phi_{a_m}(n) = y^*(n-m) + \sum_{l=1}^{M} a_l^*(n)\,\Phi_{a_m}(n-l) + \sum_{l=1}^{M} g_l^*(n)\,\Psi_{a_m}(n-l) $$
$$ \Phi_{b_m}(n) = x^*(n-m) + \sum_{l=1}^{M} a_l^*(n)\,\Phi_{b_m}(n-l) + \sum_{l=1}^{M} g_l^*(n)\,\Psi_{b_m}(n-l) $$
$$ \Phi_{g_m}(n) = y(n-m) + \sum_{l=1}^{M} a_l^*(n)\,\Phi_{g_m}(n-l) + \sum_{l=1}^{M} g_l^*(n)\,\Psi_{g_m}(n-l) $$
$$ \Phi_{h_m}(n) = x(n-m) + \sum_{l=1}^{M} a_l^*(n)\,\Phi_{h_m}(n-l) + \sum_{l=1}^{M} g_l^*(n)\,\Psi_{h_m}(n-l) $$
and for the gradient vector $\Psi_w(n)$ in (18), similarly, we have:
$$ \Psi_{a_m}(n) = \sum_{l=1}^{M} a_l(n)\,\Psi_{a_m}(n-l) + \sum_{l=1}^{M} g_l(n)\,\Phi_{a_m}(n-l) $$
$$ \Psi_{b_m}(n) = \sum_{l=1}^{M} a_l(n)\,\Psi_{b_m}(n-l) + \sum_{l=1}^{M} g_l(n)\,\Phi_{b_m}(n-l) $$
$$ \Psi_{g_m}(n) = \sum_{l=1}^{M} a_l(n)\,\Psi_{g_m}(n-l) + \sum_{l=1}^{M} g_l(n)\,\Phi_{g_m}(n-l) $$
$$ \Psi_{h_m}(n) = \sum_{l=1}^{M} a_l(n)\,\Psi_{h_m}(n-l) + \sum_{l=1}^{M} g_l(n)\,\Phi_{h_m}(n-l) $$
Thus, the GMCCC-AIIR update can be expressed as:
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\alpha\lambda\exp\left(-\lambda|e(n)|^{2\alpha}\right)|e(n)|^{2\alpha-2}\left[e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right] = \mathbf{w}(n) + \mu\exp\left(-\lambda|e(n)|^{2\alpha}\right)|e(n)|^{2\alpha-2}\left[e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right] $$
where, in the second expression, the constant factor $\alpha\lambda$ has been absorbed into the step-size $\mu$.
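A single weight update of this form can be sketched as follows, given the error and precomputed sensitivity vectors (a hypothetical helper; the constant $\alpha\lambda$ is absorbed into the step-size, as in the text):

```python
import numpy as np

def gmccc_aiir_step(w, phi, psi, e, mu, alpha, lam):
    """One GMCCC-AIIR weight update given the current error e and the
    precomputed sensitivity vectors phi = Phi_w(n), psi = Psi_w(n).
    The constant alpha*lam is assumed absorbed into the step-size mu."""
    gain = mu * np.exp(-lam * np.abs(e) ** (2 * alpha)) * np.abs(e) ** (2 * alpha - 2)
    return w + gain * (e * phi + psi * np.conj(e))
```

Setting `lam = 0` and `alpha = 1` recovers the ACA-IIR update `w + mu * (e * phi + psi * conj(e))`, matching the degeneration discussed in the next subsection.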

3.3. GMCCC-AIIR as a Generalization of ACA-IIR and ACLMS

When $\lambda \to 0^+$ and $\alpha = 1$, the GMCCC-AIIR update (32) degenerates to:
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\left[e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right] $$
i.e., the classical ACA-IIR algorithm. On this basis, when the feedback within the GMCCC-AIIR is removed, the partial-derivative recursions on the right-hand side of (25), (27), (29) and (31) vanish for the widely linear FIR filter, yielding:
$$ \Phi_{b_m}(n) = x^*(n-m), \quad m = 0, \ldots, N $$
$$ \Phi_{h_m}(n) = x(n-m), \quad m = 0, \ldots, N $$
$$ \Psi_{b_m}(n) = \Psi_{h_m}(n) = 0, \quad m = 0, \ldots, N $$
As desired, the GMCCC-AIIR algorithm (32) now simplifies into the ACLMS algorithm for FIR filters, given by [22]:
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,e(n)\,\mathbf{x}^*(n) $$
where $\mathbf{x}(n)$ is the augmented input vector.

3.4. Reducing the Computational Complexity of GMCCC-AIIR

The weight update of GMCCC-AIIR is computationally demanding, since it requires $4 \times (M + N + 1)$ recursions for the sensitivities $\Phi_w(n)$ and $\Psi_w(n)$. However, using the approximation (23), this can be reduced to updating only eight sensitivities. For example,
$$ \Phi_a(n) = \left[\Phi_{a_1}(n), \Phi_{a_2}(n), \ldots, \Phi_{a_M}(n)\right]^T $$
$$ \Phi_{a_2}(n) = y^*(n-2) + \sum_{l=1}^{M} a_l^*(n)\,\Phi_{a_2}(n-l) + \sum_{l=1}^{M} g_l^*(n)\,\Psi_{a_2}(n-l) $$
Further, we define $\Phi_a^F(n)$ as follows:
$$ \Phi_a^F(n) = \left[\Phi_{a_1}^F(n), \Phi_{a_2}^F(n), \ldots, \Phi_{a_M}^F(n)\right]^T $$
$$ \Phi_{a_2}^F(n) = \Phi_{a_1}(n-1) = y^*(n-2) + \sum_{l=1}^{M} a_l^*(n-1)\,\Phi_{a_1}(n-1-l) + \sum_{l=1}^{M} g_l^*(n-1)\,\Psi_{a_1}(n-1-l) $$
For a small step-size, $a_l^*(n-1)$ and $g_l^*(n-1)$ are approximately equal to $a_l^*(n)$ and $g_l^*(n)$, hence $\Phi_{a_2}^F(n) \approx \Phi_{a_2}(n)$, and $\Phi_a(n)$ can be approximated as:
$$ \Phi_a(n) \approx \Phi_a^F(n) = \left[\Phi_{a_1}(n), \Phi_{a_1}(n-1), \Phi_{a_1}(n-2), \ldots, \Phi_{a_1}(n-M+1)\right]^T $$
Thus we only need to update $\Phi_{a_1}(n)$ to obtain the sensitivity vector $\Phi_a^F(n)$. The same approximation applies to all other sensitivities.
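The shift-register structure behind this approximation can be sketched as follows (a hypothetical helper; only $\Phi_{a_1}(n)$ is computed fresh each step, the remaining entries are delayed copies):

```python
import numpy as np

def shift_sensitivity(phi_vec, phi_a1_new):
    """Approximate the sensitivity vector by a shift register:
    Phi_a(n) ~ [Phi_a1(n), Phi_a1(n-1), ..., Phi_a1(n-M+1)]^T.
    Only the first entry is computed fresh; the rest are delayed
    copies of earlier Phi_a1 values carried over from the last call."""
    return np.concatenate(([phi_a1_new], phi_vec[:-1]))
```

This replaces M independent recursions per coefficient group with a single recursion plus a vector shift.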

4. Convergence of GMCCC-AIIR

For convenience, we write the update (32) as:
$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu f(e(n))\left[e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right] $$
where $f(e(n)) = \exp\left(-\lambda|e(n)|^{2\alpha}\right)|e(n)|^{2\alpha-2}$.
Let $\mathbf{w}_0$ denote the unknown system parameter vector, and define the weight error vector $\tilde{\mathbf{w}}(n) = \mathbf{w}_0 - \mathbf{w}(n)$. In this way,
$$ \tilde{\mathbf{w}}(n+1) = \tilde{\mathbf{w}}(n) - \mu f(e(n))\left[e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right] $$
Thus,
$$ E\left[\|\tilde{\mathbf{w}}(n+1)\|^2\right] = E\left[\|\tilde{\mathbf{w}}(n)\|^2\right] - 2\mu E\left\{\operatorname{Re}\left[\tilde{\mathbf{w}}^H(n)\,f(e(n))\left(e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right)\right]\right\} + \mu^2 E\left[\left\|e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right\|^2\left|f(e(n))\right|^2\right] $$
The step-size $\mu$ is a small positive constant. If the system converges as $n \to \infty$, we can approximate $E\left[\|\tilde{\mathbf{w}}(n+1)\|^2\right] \approx E\left[\|\tilde{\mathbf{w}}(n)\|^2\right]$, from which it can be inferred that:
$$ 0 < \mu < \frac{2 E\left\{\operatorname{Re}\left[\tilde{\mathbf{w}}^H(n)\,f(e(n))\left(e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right)\right]\right\}}{E\left[\left\|e(n)\Phi_w(n) + \Psi_w(n)e^*(n)\right\|^2\left|f(e(n))\right|^2\right]} $$

5. Simulation

In this section, we present simulation results to confirm the theoretical conclusions drawn in the previous sections. We demonstrate the superiority of the GMCCC-AIIR algorithm over the ACA-IIR algorithm in non-Gaussian noise. All the system parameters, noise, and input signals are complex-valued. The unknown augmented IIR system is given by:
$$ y(n) = 0.125j\,y(n-1) + 0.25j\,y(n-2) + (0.3+0.7j)\,x(n) + (0.5-0.8j)\,x(n-1) + (0.2+0.5j)\,x(n-2) + 0.25j\,y^*(n-1) + 0.21j\,y^*(n-2) + (0.32+0.21j)\,x^*(n) + (0.3+0.7j)\,x^*(n-1) + (0.5-0.8j)\,x^*(n-2), \quad y(0) = 0 $$
The real and imaginary parts of the input signal $x$ are Gaussian distributed with zero mean and unit variance. One hundred Monte Carlo simulations were run. To evaluate estimation accuracy, the mean square deviation (MSD) is defined as $\mathrm{MSD} = \|\mathbf{w}_0 - \mathbf{w}(n)\|^2$.
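The MSD learning curve used below can be computed as follows (a minimal sketch; the helper name is illustrative, and in the experiments such curves are averaged over the Monte Carlo runs):

```python
import numpy as np

def msd_curve(w0, w_history):
    """MSD(n) = ||w0 - w(n)||^2 for each stored weight vector.
    Averaging such curves over independent runs gives the learning curve."""
    return np.array([np.sum(np.abs(w0 - w) ** 2) for w in w_history])
```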

5.1. Complex Non-Gaussian Noise Models

Unlike Gaussian noise, non-Gaussian noise is a random process whose probability density function (PDF) is not a normal distribution. Generally speaking, non-Gaussian noise distributions fall into two categories: light-tailed (e.g., binary, uniform) and heavy-tailed (e.g., Cauchy, mixed Gaussian, alpha-stable). In the following experiments, four common non-Gaussian noise models (Cauchy noise, mixed Gaussian noise, alpha-stable noise, and Student's t noise) are selected for performance evaluation. The additive complex noise can be written as $v = v_{re} + j v_{im}$, where $v_{re}$ and $v_{im}$ follow different distributions in the different noise models. These non-Gaussian noise models are described below.

5.1.1. Mixed Gaussian Noise

The mixed Gaussian noise model is given by [25]:
$$ (1-\theta)\,N(\lambda_1, v_1^2) + \theta\,N(\lambda_2, v_2^2) $$
where $N(\lambda_i, v_i^2)$ $(i = 1, 2)$ denotes a Gaussian distribution with mean $\lambda_i$ and variance $v_i^2$, and $\theta$ is the mixture parameter. Usually, one sets $\theta$ to a small value and $v_2^2 \gg v_1^2$ to represent impulsive noise. We define the mixed Gaussian noise parameter vector as $V_{mix} = (\lambda_1, \lambda_2, v_1^2, v_2^2, \theta)$.
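A complex mixed Gaussian noise source matching this parameterization can be sketched as follows (a hypothetical helper; real and imaginary parts are drawn independently):

```python
import numpy as np

def mixed_gaussian_noise(n, v_mix, rng=None):
    """Complex mixed Gaussian noise: real and imaginary parts are i.i.d.
    draws from (1 - theta) N(l1, v1_sq) + theta N(l2, v2_sq).
    v_mix = (l1, l2, v1_sq, v2_sq, theta), e.g. (0, 0, 0.01, 100, 0.03)."""
    l1, l2, v1_sq, v2_sq, theta = v_mix
    rng = np.random.default_rng() if rng is None else rng

    def part():
        impulsive = rng.random(n) < theta        # impulsive component w.p. theta
        return np.where(impulsive,
                        rng.normal(l2, np.sqrt(v2_sq), n),
                        rng.normal(l1, np.sqrt(v1_sq), n))

    return part() + 1j * part()
```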

5.1.2. Alpha-Stable Noise

The alpha-stable distribution is often used to model heavy-tailed impulsive noise. It generalizes the Gaussian distribution, which is a special case of the alpha-stable family. It fits many signals in practice, such as noise on telephone lines, atmospheric noise, and backscattering echoes in radar systems, and has even been applied successfully to modeling economic time series. The characteristic function of alpha-stable noise is defined as [26,27]:
$$ \psi(t) = \exp\left\{ j\delta t - \gamma|t|^{\alpha}\left[1 + j\beta\,\operatorname{sgn}(t)\,S(t, \alpha)\right] \right\}, \quad t \neq 0 $$
in which:
$$ S(t, \alpha) = \begin{cases} \tan(\alpha\pi/2) & \text{if } \alpha \neq 1 \\ (2/\pi)\log|t| & \text{if } \alpha = 1 \end{cases} $$
From (49) and (50), one can observe that a stable distribution is completely determined by four parameters: (1) the characteristic exponent $\alpha$; (2) the symmetry parameter $\beta$; (3) the dispersion parameter $\gamma$; and (4) the location parameter $\delta$. Both $v_{re}$ and $v_{im}$ obey the alpha-stable distribution, so we define the alpha-stable noise parameter vector as $V_{alpha} = (\alpha, \beta, \gamma, \delta)$.
It is worth mentioning that for $\alpha = 2$ the alpha-stable distribution coincides with the Gaussian distribution, while $\alpha = 1$ with $\beta = 0$ and $\delta = 0$ gives the Cauchy distribution.
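For the symmetric case $\beta = 0$ used in the simulations below, alpha-stable samples can be generated with the Chambers-Mallows-Stuck method (a sketch under the assumption that the dispersion enters as the scale $\gamma^{1/\alpha}$; the helper name is illustrative):

```python
import numpy as np

def symmetric_alpha_stable(n, alpha, gamma_disp, delta=0.0, rng=None):
    """Chambers-Mallows-Stuck sampler for the symmetric case beta = 0,
    which covers the setting V_alpha = (1.4, 0, 0.3, 0) used below.
    gamma_disp is the dispersion and delta the location parameter."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(-np.pi / 2, np.pi / 2, n)    # V ~ U(-pi/2, pi/2)
    w = rng.exponential(1.0, n)                  # W ~ Exp(1)
    x = (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))
    return delta + gamma_disp ** (1.0 / alpha) * x
```

At $\alpha = 2$ this reduces to a zero-mean Gaussian with variance $2\gamma$, consistent with the remark above.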

5.1.3. Cauchy Noise

The PDF of the Cauchy noise is [28]:
$$ p(v) = \frac{1}{\pi(1+v^2)} $$

5.1.4. Student’s T Noise

The PDF of the Student’s t noise is [29]:
p ( v , n ) = Γ n + 1 2 n π Γ n 2 1 + v 2 n n + 1 2 , < v < +
where n is the degree of freedom, Γ ( · ) denotes the Gamma function.
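Complex Cauchy and Student's t noise can be drawn directly with NumPy's standard samplers (a hypothetical helper):

```python
import numpy as np

def heavy_tailed_noise(n, kind="cauchy", dof=2, rng=None):
    """Complex noise whose real and imaginary parts follow either the
    standard Cauchy PDF 1/(pi(1+v^2)) or a Student's t distribution
    with `dof` degrees of freedom."""
    rng = np.random.default_rng() if rng is None else rng
    if kind == "cauchy":
        draw = lambda: rng.standard_cauchy(n)
    else:
        draw = lambda: rng.standard_t(dof, n)
    return draw() + 1j * draw()
```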

5.2. Augmented Linear System Identification

Figure 1 shows the block diagram of system identification; the length of the adaptive filter equals that of the unknown system impulse response.
First, we demonstrate how the shape parameter $\alpha$ affects the convergence performance of GMCCC-AIIR. Figure 2 shows the convergence curves of GMCCC-AIIR for different $\alpha$, with mixed Gaussian noise and $\mu = 0.013$, $\lambda = 0.3$. Clearly, the choice of $\alpha$ has a significant effect on convergence. In this example, the convergence performance and convergence speed of the proposed algorithm improve as $\alpha$ decreases. Generally speaking, a small $\alpha$ is more robust to impulsive noise if the convergence rate is not considered, and the algorithm performs best when $\alpha = 1$.
Second, the stability of GMCCC-AIIR at different step-sizes is investigated. Figure 3 shows the convergence performance for different step-sizes, again with mixed Gaussian noise and $\alpha = 1$, $\lambda = 0.3$. The simulation results show that for a large step-size such as $\mu = 0.025$ the convergence performance worsens, and the GMCCC-AIIR diverges if the step-size is increased further, which confirms the correctness of the stability analysis in Section 4.
Third, we examine how the parameter $\lambda$ affects the performance of the algorithm. Figure 4 shows the learning curves of GMCCC-AIIR with different $\lambda$, with mixed Gaussian noise and $\alpha = 1$, $\mu = 0.013$. We can see from Figure 4 that as $\lambda$ increases, convergence slows down. However, when $\lambda$ is too small, the GMCCC-AIIR algorithm approximates the ACA-IIR and its convergence performance under non-Gaussian noise is poor. Therefore, $\lambda$ should be chosen appropriately for the situation at hand.
Fourth, we examine how the pairwise parameters $\alpha$, $\mu$, and $\lambda$ affect the steady-state excess MSD (EMSD) and give 3D diagrams of the EMSD. The number of iterations is increased to 15,000 to ensure convergence of the algorithm, and the additive noise is again mixed Gaussian. The EMSD equals the average MSD over the last 1000 iterations. Figure 5 shows that the EMSD of GMCCC-AIIR depends mainly on $\alpha$ and $\mu$: performance worsens as $\mu$ increases, and the algorithm performs best when $\alpha$ approaches 1. Combined with Figure 4 and Figure 5b,c, $\lambda$ mainly affects the convergence speed of the GMCCC. When $\lambda$ approaches 0, robustness to non-Gaussian noise deteriorates, the outliers in the EMSD increase, and the algorithm may even diverge.
Fifth, we compare the performance of GMCCC-AIIR and ACA-IIR under the four noise distributions. In the simulation, the mixed Gaussian and alpha-stable noise parameters are set to $V_{mix} = (0, 0, 0.01, 100, 0.03)$ and $V_{alpha} = (1.4, 0, 0.3, 0)$, respectively, the degrees-of-freedom parameter of the Student's t noise is set to $n = 2$, and the Cauchy noise takes the standard form. The step-sizes are chosen so that both algorithms have almost the same initial convergence speed. The simulation results are shown in Figure 6. As expected, the proposed GMCCC-AIIR algorithm achieves significantly better steady-state performance than ACA-IIR under these non-Gaussian noise models. The ACA-IIR algorithm diverges after encountering an impulse, at which point its MSD approaches infinity; the convergence process cannot be observed well when the ACA-IIR and GMCCC-AIIR learning curves are displayed in the same coordinate system, so we limit the vertical range of all plots.

6. Conclusions

In this paper, we propose an adaptive algorithm for augmented IIR filtering based on the generalized maximum complex correntropy criterion. We study the convergence performance, providing a bound for the step-size. Moreover, the computational complexity is reduced by exploiting redundancy in the state vector of the filter. We also prove that ACA-IIR and ACLMS are special cases of GMCCC-AIIR. The simulation results verify the theoretical conclusions, show how the parameters affect the convergence performance of GMCCC-AIIR, and demonstrate the superiority of the GMCCC-AIIR algorithm over the MSE-based ACA-IIR algorithm when the noise is non-Gaussian.

Author Contributions

Conceptualization, H.Z. and G.Q.; methodology, H.Z.; software, H.Z. and G.Q.; validation, G.Q.; formal analysis, H.Z.; investigation, G.Q.; resources, H.Z. and G.Q.; data curation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, G.Q.; visualization, H.Z. and G.Q.; supervision, H.Z. and G.Q.; project administration, G.Q.; funding acquisition, G.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chongqing Municipal Training Program of Innovation and Entrepreneurship for Undergraduates (grant no. S202210635280).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pascual Campo, P.; Anttila, L.; Korpi, D.; Valkama, M. Cascaded Spline-Based Models for Complex Nonlinear Systems: Methods and Applications. IEEE Trans. Signal Process. 2021, 69, 370–384.
  2. Huang, F.; Zhang, J.; Zhang, S. Complex-valued proportionate affine projection Versoria algorithms and their combined-step-size variants for sparse system identification under impulsive noises. Digit. Signal Process. 2021, 118, 103209.
  3. Mandic, D.P.; Goh, V.S.L. Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models. In Adaptive and Learning Systems for Signal Processing, Communications, and Control; Wiley: Chichester, UK, 2009.
  4. Javidi, S.; Pedzisz, M.; Goh, S.L.; Mandic, D. The Augmented Complex Least Mean Square Algorithm with Application to Adaptive Prediction Problems; Citeseer: Princeton, NJ, USA, 2008; p. 4.
  5. Took, C.C.; Mandic, D.P. Adaptive IIR Filtering of Noncircular Complex Signals. IEEE Trans. Signal Process. 2009, 57, 4111–4118.
  6. Khalili, A. Diffusion augmented complex adaptive IIR algorithm for training widely linear ARMA models. Signal Image Video Process. 2018, 12, 1079–1086.
  7. Khalili, A.; Rastegarnia, A.; Bazzi, W.M.; Rahmati, R.G. Incremental augmented complex adaptive IIR algorithm for training widely linear ARMA model. Signal Image Video Process. 2017, 11, 493–500.
  8. Sayed, A.H. Fundamentals of Adaptive Filtering; IEEE Press Wiley-Interscience: New York, NY, USA, 2003.
  9. Chen, B.; Zhu, Y.; Hu, J.; Príncipe, J.C. System Parameter Identification: Information Criteria and Algorithms, 1st ed.; Elsevier: London, UK; Waltham, MA, USA, 2013.
  10. Principe, J.C.; Xu, D.; Fisher, J.W., III. Information-Theoretic Learning; Springer: Berlin/Heidelberg, Germany, 2010; p. 62.
  11. Plataniotis, K.N.; Androutsos, D.; Venetsanopoulos, A.N. Nonlinear Filtering of Non-Gaussian Noise. J. Intell. Robot. Syst. 1997, 19, 207–231.
  12. Weng, B.; Barner, K. Nonlinear system identification in impulsive environments. IEEE Trans. Signal Process. 2005, 53, 2588–2594.
  13. Schimmack, M.; Mercorelli, P. An on-line orthogonal wavelet denoising algorithm for high-resolution surface scans. J. Frankl. Inst. 2018, 355, 9245–9270.
  14. Liu, W.; Pokharel, P.P.; Principe, J.C. Correntropy: Properties and Applications in Non-Gaussian Signal Processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298.
  15. Liu, X.; Chen, B.; Zhao, H.; Qin, J.; Cao, J. Maximum Correntropy Kalman Filter With State Constraints. IEEE Access 2017, 5, 25846–25853.
  16. He, Y.; Wang, F.; Yang, J.; Rong, H.; Chen, B. Kernel adaptive filtering under generalized Maximum Correntropy Criterion. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 1738–1745.
  17. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Príncipe, J.C. Generalized Correntropy for Robust Adaptive Filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387.
  18. Qian, G.; Wang, S. Generalized Complex Correntropy: Application to Adaptive Filtering of Complex Data. IEEE Access 2018, 6, 19113–19120.
  19. Li, K.; Principe, J.C. Functional Bayesian Filter. IEEE Trans. Signal Process. 2022, 70, 57–71.
  20. Navarro-Moreno, J. ARMA Prediction of Widely Linear Systems by Using the Innovations Algorithm. IEEE Trans. Signal Process. 2008, 56, 3061–3068.
  21. Picinbono, B.; Chevalier, P. Widely linear estimation with complex data. IEEE Trans. Signal Process. 1995, 43, 2030–2033.
  22. Mandic, D.; Javidi, S.; Goh, S.; Kuh, A.; Aihara, K. Complex-valued prediction of wind profile using augmented complex statistics. Renew. Energy 2009, 34, 196–201.
  23. Guimarães, J.P.F.; Fontes, A.I.R.; Rego, J.B.A.; de M. Martins, A.; Príncipe, J.C. Complex Correntropy: Probabilistic Interpretation and Application to Complex-Valued Data. IEEE Signal Process. Lett. 2017, 24, 42–45.
  24. Shynk, J. A complex adaptive algorithm for IIR filtering. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 1342–1344.
  25. Zhao, S.; Chen, B.; Príncipe, J.C. Kernel adaptive filtering with maximum correntropy criterion. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 2012–2017.
  26. Zhang, J.; Qiu, T.; Song, A.; Tang, H. A novel correntropy based DOA estimation algorithm in impulsive noise environments. Signal Process. 2014, 104, 346–357.
  27. Wu, Z.; Peng, S.; Chen, B.; Zhao, H. Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion. Entropy 2015, 17, 7149–7166.
  28. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Príncipe, J.C. Steady-State Mean-Square Error Analysis for Adaptive Filtering under the Maximum Correntropy Criterion. IEEE Signal Process. Lett. 2014, 21, 880–884.
  29. Wang, J.; Dong, P.; Shen, K.; Song, X.; Wang, X. Distributed Consensus Student-t Filter for Sensor Networks With Heavy-Tailed Process and Measurement Noises. IEEE Access 2020, 8, 167865–167874.
Figure 1. System Identification Configuration.
Figure 2. Learning Curve for different α.
Figure 3. Learning Curve for different μ.
Figure 4. Learning Curve for different λ.
Figure 5. EMSD with different pairwise parameters: (a) α and μ (λ = 0.3); (b) α and λ (μ = 0.013); (c) μ and λ (α = 1).
Figure 6. Learning Curve with different noise: (a) alpha-stable noise, (b) mixed Gaussian noise, (c) Cauchy noise, (d) Student's t noise.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite as: Zheng, H.; Qian, G. Generalized Maximum Complex Correntropy Augmented Adaptive IIR Filtering. Entropy 2022, 24, 1008. https://doi.org/10.3390/e24071008
