Convex Regularized Recursive Minimum Error Entropy Algorithm

The recursive least squares (RLS) algorithm is renowned for its rapid convergence and excellent tracking capability. However, its performance degrades significantly when the system to be identified is sparse or when the input signals are contaminated by impulse noise. In this paper, the minimum error entropy (MEE) criterion is therefore introduced into the cost function of the RLS algorithm to counteract the interference from impulse noise. To exploit the sparse characteristics of the system, we regularize the cost function with a general convex function. The resulting algorithm is named the convex regularized recursive minimum error entropy (CR-RMEE) algorithm. Simulation results indicate that the CR-RMEE algorithm outperforms comparable algorithms: it excels in sparse-system scenarios and also demonstrates strong robustness against impulse noise.


Introduction
Sparse adaptive filters have widespread applications in signal processing and are commonly employed in tasks such as acoustic echo cancellation [1,2] and communication channel identification [3]. Traditional adaptive filtering algorithms used for identifying sparse systems (systems whose impulse response contains many near-zero coefficients and only a few large ones) often rely on prior knowledge of sparsity. Many relevant algorithms have been proposed in this context: for the LMS algorithm, scholars have successively improved its cost function using l0-norm and l1-norm regularization techniques [4]. These modifications significantly enhance the algorithm's convergence speed in sparse systems, reduce its steady-state error, and simultaneously improve its robustness. Subsequently, other researchers used a more general convex function to regularize the LMS algorithm [5], making the resulting algorithm more versatile than its predecessors. In [6], the authors applied convex regularization techniques to enhance the RLS algorithm. By incorporating a general regularization term into the RLS cost function, they derived a new sparse adaptive filtering algorithm, referred to as the convex regularized recursive least squares (CR-RLS) algorithm, which allows any convex function to be used for regularization. In recent years, the concept of sparsity has also been applied to kernel adaptive filtering [7-10].
The aforementioned adaptive algorithms, which are based on the mean-square error criterion, perform well in Gaussian noise environments. However, in real-life scenarios, signals may be contaminated by impulsive noise, and the mean-square error criterion may not yield satisfactory results in its presence [11]. With the development of information theoretic learning (ITL), the maximum correntropy criterion (MCC) [12-15] and the minimum error entropy (MEE) criterion [16-18] have gradually been introduced.

Basic Principles
When performing linear adaptive filtering, there is an input vector u, an unknown tap-weight vector w_o, and a desired response d. The desired response d(n) at each regression point can be obtained with the following expression:

d(n) = w_o^T u(n) + v(n),

where v is a zero-mean background noise with variance σ_v^2. In this context, the error signal can be represented as follows:

e(n) = d(n) − w^T(n − 1) u(n),

where w(n − 1) is the estimate of w_o at time n − 1.
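As a minimal illustration, the signal model above can be written directly in code. The 20-tap length, noise level, zero initialization, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 20                           # filter length (illustrative)
w_o = rng.standard_normal(M)     # unknown tap-weight vector w_o
u = rng.standard_normal(M)       # input regressor u(n)
v = 0.1 * rng.standard_normal()  # zero-mean background noise v(n)

# Desired response: d(n) = w_o^T u(n) + v(n)
d = w_o @ u + v

# A priori error with the current estimate w(n-1), here zero-initialized:
# e(n) = d(n) - w^T(n-1) u(n)
w_prev = np.zeros(M)
e = d - w_prev @ u
```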
Additionally, in this study, we make the following assumptions: (a) The input signal is a zero-mean, wide-sense stationary white sequence, and the additive noise is zero-mean.
(b) The input vectors at different time instants are mutually uncorrelated.
(c) The input and the additive noise are uncorrelated.
As a well-known learning criterion in information theoretic learning (ITL), the minimum error entropy criterion builds on the entropy proposed by Alfred Renyi as a generalization of Shannon entropy. The Renyi entropy of order α can be written as

H_α(X) = (1/(1 − α)) log( Σ_i p_i^α ),

where α is the order parameter, whose value can be changed to alter the form of the entropy, and p_i represents the probability of occurrence of the i-th outcome. The argument of the logarithm, V_α(X) = Σ_i p_i^α, is referred to as the information potential (IP). The second-order Renyi entropy, with α = 2, is the form most commonly used:

H_2(X) = −log V_2(X).

Kernel functions primarily describe inner products of random variables in the feature space. The Gaussian kernel κ_σ(·) with bandwidth σ is commonly used, and its expression is as follows:

κ_σ(x) = (1/(√(2π) σ)) exp(−x^2 / (2σ^2)).

Since the probability density function (PDF) of a signal is difficult to obtain accurately in practice, it is common to estimate it, most often by the Parzen method. Assuming the input data are {u_1, u_2, ..., u_N}, the Parzen estimate of the PDF can be represented as follows:

p̂(u) = (1/N) Σ_{i=1}^{N} κ_σ(u − u_i).

Substituting this kernel estimate into the second-order Renyi entropy formula yields the entropy estimate

Ĥ_2(u) = −log( (1/N^2) Σ_{i=1}^{N} Σ_{j=1}^{N} κ_σ(u_i − u_j) ),

where u_i and u_j represent the input signals at time instants i and j. Replacing the input in the kernel function with the error e defines the error entropy Ĥ_2(e) of the system.
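The Parzen estimate of the second-order Renyi entropy can be sketched as follows. Function names are ours, and the kernel width is applied directly to the pairwise differences, as in the formula above:

```python
import numpy as np

def gaussian_kernel(x, sigma):
    # kappa_sigma(x) = exp(-x^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
    return np.exp(-x ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def renyi2_entropy_estimate(samples, sigma=1.0):
    """Parzen estimate of H2: negative log of the information potential,
    (1/N^2) * sum_i sum_j kappa_sigma(x_i - x_j)."""
    diffs = samples[:, None] - samples[None, :]   # all pairwise differences
    ip = gaussian_kernel(diffs, sigma).mean()     # information potential V2_hat
    return -np.log(ip)
```

Concentrated samples yield a larger information potential and hence a lower entropy estimate, which is why minimizing the error entropy concentrates the error distribution.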
From the error entropy formula derived above, it follows that minimizing the error entropy makes the system error signal more concentrated, which implies that the filtering algorithm has converged. Therefore, the cost function for algorithms based on the minimum error entropy criterion can be expressed as follows:

J(w) = Ĥ_2(e) = −log V̂_{2,σ}(e).

Because the logarithm is monotonically increasing, minimizing this cost is equivalent to

max_w V̂_{2,σ}(e),

where w represents the tap weights of the adaptive filter and V̂_{2,σ}(e) is the estimate of the second-order information potential.

Proposed Algorithm
In this section, we first propose a new cost function and derive the novel CR-RMEE algorithm from it. We then introduce the update formula for the regularization factor. Finally, we present a mathematical analysis of the convergence of the new algorithm, demonstrating its theoretical convergence.

CR-RMEE Algorithm
Applying an exponential forgetting factor (0 < λ < 1) to the traditional MEE cost function in (11) leads to the following cost function based on the MEE criterion:

J_MEE(n) = Σ_{i=1}^{n} λ^{n−i} (1/L) Σ_{j=i−L+1}^{i} G_σ(e(i) − e(j)),

where L is the length of the sliding window inside the filter, G_σ(·) is the Gaussian kernel function, and σ is the kernel width. We incorporate a convex function of the weight vector into this cost for regularization, which allows the regularized cost function to leverage prior knowledge about the unknown system, such as sparsity. In this paper, the regularized MEE cost function is as follows:

J(n) = J_MEE(n) − γ_n f(w(n)),

where f(·) is a convex function and γ_n is a variable constant known as the regularization factor. The role of γ_n is to maintain a balance between the MEE cost function and the regularization term: as γ_n increases or decreases, the influence of the regularization term on the adaptive algorithm changes accordingly.
The regularization convex function in the cost function has subgradients; the collection of all subgradients is called the subdifferential, denoted by ∂f(w). We denote a subgradient vector of f(·) by ∇_s f(w) ∈ ∂f(w). Therefore, the gradient of the cost function (13) can be represented as follows: To simplify the computation, at each time point n (the derivations below are all taken at time n) we have Then, we can obtain So, the gradient of the cost function (14) can be rewritten as follows: where Setting the gradient equal to zero, we can solve for w(n). Then, (19) can be rewritten as follows: where R_L is the weighted autocorrelation matrix of the input signal and can be rewritten as r_L is the weighted cross-correlation vector between the input signal and the desired response and can be rewritten as Under assumptions (b) and (c), the expectations have the following properties:
Electronics 2024, 13, 992
So, (22) and (23) can be rewritten as follows: Define a new vector θ_L:

θ_L(n) = r_L(n) − γ_n ∇_s f(w(n)).

By substituting the update (26) into (27), and assuming that γ_L and ∇_s f(w) change only slightly at each time point [6], the vector θ_L can also be updated recursively as follows: Then, (21) can be rewritten as

w(n) = P_L(n) θ_L(n),

where P_L = R_L^{−1}. Using the matrix inversion lemma [20], we obtain the update formula

P_L(n) = λ^{−1} [ P_L(n−1) − k_L(n) u^T(n) P_L(n−1) ],

with the gain vector k_L given by

k_L(n) = P_L(n−1) u(n) / (λ + u^T(n) P_L(n−1) u(n)).

Rewriting (29) recursively, we obtain the final weight update formula.
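To make the recursion concrete, the following sketch implements the convex-regularized RLS-style skeleton of the update (gain vector, inverse-correlation update via the matrix inversion lemma, and a subgradient correction), using an l1 penalty as an example of the convex function f. It deliberately omits the Gaussian-kernel error weighting of the full CR-RMEE algorithm, and all names and parameter values are illustrative assumptions:

```python
import numpy as np

def cr_rls_step(w, P, u, d, lam=0.996, gamma=0.01):
    """One recursion of an RLS-type update with a convex (here l1) regularizer.
    Structural sketch only: CR-RMEE additionally weights the update by
    Gaussian-kernel terms over a sliding window of past errors."""
    # Gain vector via the matrix inversion lemma
    Pu = P @ u
    k = Pu / (lam + u @ Pu)
    # Inverse-correlation update: P(n) = (P(n-1) - k u^T P(n-1)) / lam
    P = (P - np.outer(k, Pu)) / lam
    # A priori error and weight update with a subgradient correction term
    e = d - w @ u
    grad_f = np.sign(w)          # subgradient of f(w) = ||w||_1
    w = w + k * e - gamma * (1 - lam) * (P @ grad_f)
    return w, P

# Usage: identify a sparse 20-tap system from noisy observations
rng = np.random.default_rng(1)
M = 20
w_o = np.zeros(M); w_o[9] = 1.0      # sparse system with S = 1
w = np.zeros(M)
P = 100.0 * np.eye(M)                # large initial P, as in standard RLS
for n in range(2000):
    u = rng.standard_normal(M)
    d = w_o @ u + 0.01 * rng.standard_normal()
    w, P = cr_rls_step(w, P, u, d)
```

The subgradient correction nudges small coefficients toward zero each step, which is where the sparsity prior enters the recursion.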

Selection of the Regularization Factor
Regularization functions are used in adaptive filtering because they can encode prior knowledge about the unknown system. Here, we assume that the prior knowledge is an upper bound on the regularization function, i.e., f(w) ≤ ρ, where ρ is a constant. Let ŵ(n) be the solution obtained from the cost function without regularization, and let ε = w − w_o and ε̂ = ŵ − w_o denote the deviations of w and ŵ from the true weights. Substituting these deviation expressions into (19), we obtain The squared deviation D(n) of ε(n) can be computed as follows: where D(n) denotes the squared deviation of ε(n). From (34), one can obtain the following theorem [6]. Consequently, the inequality can be solved as follows: From Theorem 1, it can be seen that if the method described in (36) is used to adaptively update the regularization factor, the performance of the CR-RMEE algorithm will be close to, or even surpass, that of the standard RMEE algorithm. However, since the true weight vector is unavailable in practice, γ(n) cannot be calculated directly. Inspired by the work in [6], and using the subgradient definition of a convex function, γ(n) can be approximated by the following expression when the input is a white sequence and n is sufficiently large: where the difference term is the difference between the unregularized coefficients and the regularized coefficients, and tr(·) represents the trace operator. According to (38), γ(n) can be automatically updated as γ(n) = max[γ′(n), 0] in each iteration, and the computational complexity of each update is only O(N^2), which effectively saves time in the search for the optimal value.

Convergence Analysis
Whether an algorithm converges can be determined by observing its steady-state behavior, characterized by E{w̃(n)}, where w̃(n) = w_o − w(n). From Equation (32), it can be concluded that Substituting Equation (1) into this expression, we obtain Substituting (30) and (31) into the previous expression yields From (25), the steady-state average value of A can be obtained as follows: Assuming the input signal u(n) is a wide-sense stationary process, and combining assumption (a) with (41), we can obtain where I is the identity matrix of the same dimension as R(n). Summarizing the above expressions, we obtain Although P(n) = R(n)^{−1}, calculating the steady-state average of P(n) is still very challenging; therefore, it is commonly approximated using the following expression [19].
From (37), it can easily be deduced that lim_{n→∞} γ(n) = 0, and under assumption (a), E{v(n)} = 0. Therefore, the steady-state value of w̃(n) can be expressed as Thus, we can demonstrate that the proposed CR-RMEE algorithm is convergent.

Simulation Experiments
To compare the advantages of the proposed algorithm, our simulations were conducted separately in stationary and non-stationary environments, and all simulation results were obtained through 1000 independent Monte Carlo runs. The simulations use the normalized mean squared deviation (NMSD) as the performance metric: a smaller NMSD indicates better performance, and the stability of the resulting curve is also an important indicator of algorithm stability. The NMSD is computed as

NMSD(n) = 10 log_10( ||w(n) − w_o||^2 / ||w_o||^2 ).

In the simulations, the unknown weight vector w_o is a 20 × 1 vector with S non-zero tap weights, and the sliding window width is 8. Unless otherwise stated, the input u is a white Gaussian random sequence, and the additive noise is impulsive with a mixed-Gaussian distribution, v(n) ~ 0.95N(0, 0.01) + 0.05N(0, 25).
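The performance metric and the mixed-Gaussian impulse noise described above can be generated as follows (function names are ours):

```python
import numpy as np

def nmsd_db(w_est, w_o):
    """Normalized mean squared deviation in dB:
    NMSD = 10 log10( ||w_est - w_o||^2 / ||w_o||^2 )."""
    return 10 * np.log10(np.sum((w_est - w_o) ** 2) / np.sum(w_o ** 2))

def impulsive_noise(n_samples, rng, p=0.05, var_bg=0.01, var_imp=25.0):
    """Mixed-Gaussian impulse noise v(n) ~ (1-p) N(0, var_bg) + p N(0, var_imp)."""
    is_impulse = rng.random(n_samples) < p
    std = np.where(is_impulse, np.sqrt(var_imp), np.sqrt(var_bg))
    return std * rng.standard_normal(n_samples)
```

With the default parameters the theoretical noise variance is 0.95·0.01 + 0.05·25 ≈ 1.26, dominated by the rare large impulses.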
For the regularization term, we introduce a sparsity-inducing convex penalty function. Since the l0 norm is non-convex, we use its standard approximation instead:

f(w) ≈ Σ_{i=1}^{M} (1 − e^{−β|w_i|}),

where β is a constant, and the gradient of this expression can be approximated as

[∇f(w)]_i ≈ β sgn(w_i) e^{−β|w_i|}.

Firstly, we present the performance of the CR-RMEE algorithm in a stationary sparse environment. For comparison, we also present the performance of the LMS, RLS, RMC, and original RMEE algorithms. We set the unknown system as a sparse system with S = 1. For the RMC, RMEE, CR-RMC, and proposed CR-RMEE algorithms, the same σ = 1 is chosen, and the step size for the LMS algorithm is 0.01. The forgetting factors for the RLS, RMC, RMEE, CR-RMC, and CR-RMEE algorithms are, respectively, λ = 0.992, λ = 0.942, λ² = 0.996, λ² = 0.996, and λ² = 0.996. As Figure 1 shows, with the initial convergence rates of the algorithms set to be similar, the NMSD of the LMS, RLS, and RMC algorithms is relatively large, and both the LMS and RMC algorithms exhibit significant fluctuations after convergence. Although the RMEE and CR-RMC algorithms both achieve very small NMSD values, the RMEE algorithm converges faster, while the CR-RMC algorithm ultimately achieves a lower steady-state error. Compared with the LMS and RMC algorithms, our proposed CR-RMEE algorithm shows significantly reduced fluctuations within an acceptable range. Combining the fast convergence of the RMEE algorithm with the low steady-state error of the CR-RMC algorithm, the CR-RMEE algorithm achieves the lowest NMSD. This experiment demonstrates that the proposed algorithm exhibits excellent performance in a stationary sparse environment.

Secondly, to demonstrate the convergence and tracking performance of the CR-RMEE algorithm in a non-stationary sparse environment, we identify a three-stage sparse system. To compare the performance of each algorithm under different sparsity levels, we assume the parameter vector of the unknown system is as follows:

w_o = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  n ≤ 2000
w_o = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  2000 < n ≤ 4000
w_o = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  n > 4000

As S increases from 1 to 10 and finally to 20, the sparsity of the system decreases over time. Note that because the system must be re-identified each time the algorithm detects a sudden change, a new convergence process occurs; by changing the system parameters twice during the simulation, the convergence curves exhibit a three-stage progression. For the LMS, RLS, RMC, and RMEE algorithms, the parameters are kept
the same as before, and for the CR-RMC and CR-RMEE algorithms, γ is set to 0.1. From Figure 2, it can be seen that the performance of each algorithm in each sparsity stage follows the same trend as in Figure 1. Because they do not take sparsity into account, the LMS, RLS, RMC, and RMEE algorithms are little affected by the sparsity of the environment, whereas both the CR-RMEE and CR-RMC algorithms are influenced by it. When the sparsity level is high, the CR-RMEE algorithm holds a significant advantage over the other algorithms; although this advantage shrinks slightly as sparsity decreases, the NMSD of the CR-RMEE algorithm remains the lowest regardless of how sparse the system is. The system undergoes two parameter changes in total, and the proposed algorithm responds quickly and converges rapidly to a new steady state, which indicates that it also possesses excellent tracking performance. In summary, compared with the other algorithms, our algorithm exhibits the lowest steady-state error and excellent overall performance.

Thirdly, we vary the parameters (σ, λ) of the CR-RMEE algorithm to test their impact on its performance. All simulation results are averages of 200 independent runs, so only approximate values can be obtained. The experimental results are shown in Table 1. To illustrate the effect of the different parameters more intuitively, a representative set of results is plotted in Figure 3. The parameters for CR-RMEE1, CR-RMEE2, CR-RMEE3, CR-RMEE4, and CR-RMEE5 are [σ = 3, λ² = 0.942], [σ = 2, λ² = 0.970], [σ = 1, λ² = 0.980], [σ = 0.5, λ² = 0.992], and [σ = 0.2, λ² = 0.994]. Combining Table 1 and Figure 3, it can be observed that the NMSD of the algorithm is negatively correlated with the σ value and positively correlated with the λ² value. As the steady-state error decreases, the convergence speed of the algorithm gradually slows down, and the level of post-convergence oscillation gradually increases. This indicates that achieving a lower steady-state error often requires sacrificing convergence speed and stability; in practical applications, the parameters must therefore be chosen judiciously to achieve the best performance. Additionally, Table 1 reports the steady-state error of the CR-RMEE algorithm under the setting of Figure 1.

Then, although the algorithms were tested separately in stationary and non-stationary environments above, demonstrating good tracking and convergence performance under those conditions, the robustness tests remain incomplete. Therefore, we continue to test the proposed and comparative algorithms in four
different noise environments. The four types of noise are Gaussian white noise with distribution v(n) ~ N(0, 0.25), and impulse noise with distributions v(n) ~ 0.95N(0, 0.01) + 0.05N(0, 1), v(n) ~ 0.95N(0, 0.01) + 0.05N(0, 9), and v(n) ~ 0.95N(0, 0.01) + 0.05N(0, 25), i.e., with increasing impulse intensity. These correspond, respectively, to Figure 4a-d. The parameters of each algorithm are selected to keep its performance as close to optimal as possible. We set the unknown system as a sparse system, with the same choice of parameters for the RMC, RMEE, CR-RMC, and proposed CR-RMEE algorithms. The step size for the LMS algorithm is set to 0.01, and the choices for λ are, respectively, λ = 0.992, λ = 0.942, λ² = 0.996, λ² = 0.996, and λ² = 0.996.
The results under the four noise conditions are shown in Figure 4. Firstly, as the intensity of the noise pulses increases, the gap between the algorithms widens (the convergence plots initially cluster together and then disperse). This indicates that the algorithms differ in their sensitivity to noise, with the RMC algorithm being the most affected. While classic algorithms such as LMS and RLS exhibit larger steady-state errors, they demonstrate strong stability. Among the algorithms tested under the different noise conditions, the RMEE, CR-RMC, and proposed CR-RMEE algorithms perform best, exhibiting not only fast convergence but also low steady-state errors. Notably, among these three, the proposed algorithm performs best in all four scenarios, which strongly supports its robustness against noise.
Finally, to demonstrate the performance of the CR-RMEE algorithm in a practical application, we consider echo cancellation, whose basic structure is shown in Figure 5. Acoustic echo arises mainly because sound from room A is transmitted to room B through wired or wireless channels and played through a speaker; after a series of acoustic reflections, the sound, carrying the inherent noise of room B, is picked up by the microphone and transmitted back to room A. Acoustic echo causes speakers in room A to hear their own speech shortly after speaking, significantly degrading communication quality, and echo cancellation aims to eliminate these effects as much as possible. The mathematical model of echo cancellation is as follows: the far-end signal u(i) from room A, when transmitted to room B, generates the echo signal r(i) after multiple reflections. The near-end signal, carrying the echo r(i) and the noise v(i) of room B, combines to produce d(i). The acoustic echo canceller (AEC) module processes the far-end signal u(i) to obtain y(i), an estimate of the echo. Taking the difference between d(i) and y(i) yields the error e(i), which is fed back to the adaptive filter (the AEC module), driving iterative updates of the filter coefficients; a smaller error indicates better echo cancellation.
In this simulation, the input is changed to a real speech signal with a sampling rate of 8 kHz, as shown in Figure 6. The additive noise is impulsive with a mixed-Gaussian distribution, v(n) ~ 0.95N(0, 0.01) + 0.05N(0, 5). The parameters of each algorithm are the same as in Figure 1. From Figure 7, it can be observed that when the input is real speech, the convergence curves of all algorithms exhibit noticeable fluctuations, especially that of the LMS algorithm. Although the RMC and RMEE algorithms also achieve relatively small steady-state errors, their fluctuations remain evident. Our proposed CR-RMEE algorithm not only fluctuates less but also attains the minimum steady-state error, indicating its superior performance.


Conclusions
This paper introduces the convex regularized recursive minimum error entropy (CR-RMEE) algorithm to address the challenges faced by traditional adaptive filtering algorithms in sparse environments, namely poor identification capability and susceptibility to impulse noise. Building upon the RLS algorithm, the CR-RMEE algorithm incorporates a series of improvements tailored to impulse noise and sparse environments. By introducing the minimum error entropy (MEE) criterion into the cost function, the algorithm enhances its resistance to impulse noise. Furthermore, regularization based on general convex functions is applied to the cost function, enabling the algorithm to leverage prior knowledge of sparsity and significantly improving its performance in sparse environments. Simulation results demonstrate that, in sparse system identification, the CR-RMEE algorithm outperforms the original RMEE algorithm; the new algorithm is robust in the presence of impulse noise and can adapt to system sparsity using prior knowledge. In future work, the CR-RMEE algorithm may be extended to complex-valued filtering and even nonlinear kernel filtering.

Figure 2 .
Figure 2. Transient NMSDs (dB) of different algorithms in a non-stationary environment.


Table 1 .
Transient NMSDs of CR-RMEE with different parameters after convergence.
