Article

The Cauchy Conjugate Gradient Algorithm with Random Fourier Features

Xuewei Huang, Shiyuan Wang and Kui Xiong
1 College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
2 Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1323; https://doi.org/10.3390/sym11101323
Submission received: 23 September 2019 / Revised: 10 October 2019 / Accepted: 17 October 2019 / Published: 22 October 2019

Abstract

Random Fourier mapping (RFM) in kernel adaptive filters (KAFs) provides an efficient method to curb the linear growth of the dictionary by projecting the original input data into a finite-dimensional space. The commonly used measure in RFM-based KAFs is the minimum mean square error (MMSE), which causes performance deterioration in the presence of non-Gaussian noises. To address this issue, the minimum Cauchy loss (MCL) criterion has been successfully applied for combating non-Gaussian noises in KAFs. However, KAFs using the well-known stochastic gradient descent (SGD) optimization method may suffer from a slow convergence rate and low filtering accuracy. To this end, we propose a novel robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm using the conjugate gradient (CG) optimization method in this paper. The proposed RFFCCG algorithm with low complexity can achieve better filtering performance than KAFs with sparsification, such as the kernel recursive maximum correntropy algorithm with novelty criterion (KRMC-NC), in stationary and non-stationary environments. Monte Carlo simulations on time-series prediction and nonlinear system identification confirm the superiority of the proposed algorithm.

1. Introduction

Many applications in the real world, such as system identification, regression, and online kernel learning (OKL) [1], require complex nonlinear models. The kernel method using a Mercer kernel has attracted interest for tackling these complex nonlinear applications, as it transforms nonlinear problems into linear ones in the reproducing kernel Hilbert space (RKHS) [2]. Developed in the RKHS, the kernel adaptive filter (KAF) [2] is the most celebrated subfield of OKL algorithms. Using the simple stochastic gradient descent (SGD) method for learning, KAFs including the kernel least mean square (KLMS) algorithm [3], the kernel affine projection algorithm (KAPA) [4], and the kernel recursive least squares (KRLS) algorithm [5] have been proposed.
However, since a new kernel unit is allocated as a radial basis function (RBF) center whenever new data arrive, the linearly growing structure (called the “dictionary” hereafter) increases the computational and memory requirements of KAFs. To curb the growth of the dictionary, two categories of sparsification methods are used. The first category accepts only informative data as new dictionary centers by using a threshold, including the surprise criterion (SC) [6], the coherence criterion (CC) [7], and vector quantization (VQ) [8]. However, these methods cannot fully address the growing problem and still introduce additional time consumption at each iteration. The second category consists of fixed-size methods, including the fixed-budget (FB) method [9], the sliding window (SW) method [10], and kernel approximation methods (e.g., the Nyström method [11] and the random Fourier features (RFFs) method [12]), which are used to bound the growth of the network. However, the FB and SW methods cannot guarantee good performance in specific environments within a limited time [13]. Compared with the Nyström method, RFFs are drawn from a distribution that is independent of the training data. Due to this data-independent vector representation, RFFs can provide a good solution for non-stationary circumstances. On the basis of RFFs, random Fourier mapping (RFM) is proposed by mapping the input data into a finite-dimensional random Fourier features space (RFFS) using a randomized feature kernel's Fourier transform in a fixed network structure. The RFM alleviates the computational and storage burdens of KAFs and ensures satisfactory performance under non-stationary conditions. Examples of KAFs developed with RFM are the random Fourier features kernel least mean square (RFFKLMS) algorithm [13], the random Fourier features maximum correntropy (RFFMC) algorithm [14], and the random Fourier features conjugate gradient (RFFCG) algorithm [15].
For the loss function, due to their simplicity, smoothness, and mathematical tractability, second-order statistical measures (e.g., the minimum mean square error (MMSE) [2] and least squares [16]) are widely utilized in KAFs. However, KAFs based on second-order statistical measures are sensitive to non-Gaussian noises, including sub-Gaussian and super-Gaussian noises, which means that their performance may be seriously degraded if the training data are contaminated by outliers. To handle this issue, robust statistical measures have gained more attention, among which the lower-order error measure [17] and the higher-order error measure [18] are two typical examples. However, the higher-order error measure is not suitable for mixtures of Gaussian and super-Gaussian noises (Laplace, α-stable, etc.) because of its poor stability and convergence, and the lower-order error measure, although usually more desirable in these noise environments, suffers from a slow convergence rate. Recently, information theoretic learning (ITL) [19] similarity measures, such as the maximum correntropy criterion (MCC) [20] and the minimum error entropy criterion (MEE) [19], have been introduced to implement robust KAFs. The ITL similarity measures have been shown to have strong robustness against non-Gaussian noises at the expense of an increased computational burden during training. In addition, by minimizing the logarithmic moments of the error, the logarithmic error measure, including the Cauchy loss (CL) [21] with its low computational complexity, is an appropriate measure of optimality. Using the Cauchy loss to penalize the noise term, algorithms based on the minimum Cauchy loss (MCL) criterion are efficient for combating non-Gaussian noises, especially heavy-tailed α-stable noises.
From the aspect of the optimization method, stochastic gradient descent (SGD)-based algorithms cannot find the minimum using the negative gradient for some loss functions [20,21,22]. To this end, recursive algorithms [23] address these issues at the cost of increased computational cost. In comparison with the SGD and recursive methods, the conjugate gradient (CG) method [24,25,26] and Newton's method, as developments of SGD, have become alternative optimization methods in KAFs. The matrix inversion in Newton's method increases the computation and causes the divergence of the algorithm in some cases [22]. However, the CG method gives a trade-off between convergence rate and computational complexity without the inverse computation, and has been successfully applied in various fields, including compressed sensing [27], neural networks [28], and large-scale optimization [29]. In addition, the kernel conjugate gradient (KCG) method has been proposed [30] for adaptive filtering. KCG, with low computational and space requirements, can produce a better solution than KLMS and has accuracy comparable to KRLS.
In this paper, to reduce the computational complexity, we apply the RFM in the MCL-based KAF to address the problem of linear growth and improve the robustness. Further, the CG optimization method is used to improve the filtering accuracy and convergence rate, yielding a novel robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm. The contributions of this paper are summarized as follows. 1) Inspired by the finite-dimensional RFM and the MCL criterion, a novel RFFCCG algorithm is derived by mapping the original input data into the fixed-dimensional RFFS, which effectively solves the problem of network growth and improves robustness compared to other robust algorithms in the presence of non-Gaussian noises. 2) By applying the CG method, RFFCCG with low computational and space complexities provides good filtering accuracy against non-Gaussian noises. The computational and space complexities of RFFCCG are also discussed. 3) The proposed algorithm can also achieve excellent tracking performance when a system changes suddenly.
The rest of this paper is structured as follows. The MCL criterion and its convexity are described in Section 2, and the online CG algorithm is also briefly reviewed in this section. In Section 3, we present the proposed RFFCCG algorithm and its complexity analysis. Illustrative simulations in the presence of non-Gaussian noises are presented to confirm the effectiveness of the proposed algorithm in Section 4. Finally, Section 5 gives the concluding remarks of this paper.

2. Background

In this section, we first briefly review the minimum Cauchy loss (MCL) criterion. The performance surfaces of Cauchy loss (CL) and mean square error (MSE) are also compared. Then, the conjugate gradient (CG) method and its online algorithm are introduced.

2.1. Minimum Cauchy Loss Criterion

Given two random variables X and Y, the Cauchy loss [21] is defined as:
$$V(X,Y) = \mathrm{E}\!\left[\ln\!\left(1+\frac{(X-Y)^2}{\gamma^2}\right)\right] = \int \ln\!\left(1+\frac{(x-y)^2}{\gamma^2}\right) dF_{XY}(x,y), \tag{1}$$
where $\mathrm{E}[\cdot]$ denotes the mathematical expectation, $F_{XY}(x,y)$ denotes the joint distribution function of $(X,Y)$, and $\gamma>0$ is a constant. Since $F_{XY}(x,y)$ is usually unknown, it is difficult to calculate $V(X,Y)$ directly. In practice, given a finite number of samples $\{x_k, y_k\}_{k=1}^{N}$, (1) can be approximated as follows:
$$\hat{V}(X,Y) = \frac{1}{N}\sum_{k=1}^{N}\ln\!\left(1+\frac{(x_k-y_k)^2}{\gamma^2}\right), \tag{2}$$
which is called the empirical CL. In addition, (2) can also be regarded as a generalized error between two vectors $X=[x_1,x_2,\ldots,x_N]^T$ and $Y=[y_1,y_2,\ldots,y_N]^T$.
Let $\mathbf{e}=X-Y=[e_1,e_2,\ldots,e_N]^T$, where $e_k=x_k-y_k$, $k=1,2,\ldots,N$. From (2), the Hessian matrix of $\hat{V}(X,Y)$ with respect to $\mathbf{e}$ is expressed as:
$$H_{\hat{V}(X,Y)}(\mathbf{e}) = \left[\frac{\partial^2 \hat{V}(X,Y)}{\partial e_k\,\partial e_j}\right] = \mathrm{diag}[\xi_1,\xi_2,\ldots,\xi_N], \tag{3}$$
where
$$\xi_k = \frac{2\left(\gamma^2-e_k^2\right)}{\left(\gamma^2+e_k^2\right)^2}, \quad k=1,2,\ldots,N. \tag{4}$$
We have that $H_{\hat{V}(X,Y)}(\mathbf{e}) \succeq 0$ when $|e_k| \le \gamma$; that is, the empirical CL is convex for $|e_k| \le \gamma$. A larger $\gamma$ results in a larger convex range in general.
The optimal solution to the Cauchy loss function can be obtained by solving the following optimization problem:
$$\min \sum_{k=1}^{N}\ln\!\left(1+\frac{(d_k-y_k)^2}{\gamma^2}\right), \tag{5}$$
which is called the minimum Cauchy loss (MCL) criterion. Figure 1 compares the MSE and CL functions for different $\gamma$. We can observe that: 1) compared with the MSE loss function, the CL function is insensitive to large errors, so adaptive filters using the CL function are robust against large outliers; 2) the value of $\gamma$ controls the shape of the CL function, and a larger $\gamma$ generates a smoother curve for small errors, which means that the CL function provides good smoothness around the steady-state error. In practice, we choose an appropriate $\gamma$ to guarantee both the robustness and the convexity of the CL function.
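As a quick numerical illustration of the robustness just described, the following Python sketch compares the empirical CL of (2) with the MSE on an error vector containing one large outlier. The error values and the choice of $\gamma$ are illustrative and not taken from the paper.

```python
import numpy as np

def cauchy_loss(e, gamma=1.0):
    # Empirical Cauchy loss (2): average of ln(1 + e_k^2 / gamma^2).
    e = np.asarray(e, dtype=float)
    return np.mean(np.log1p((e / gamma) ** 2))

def mse_loss(e):
    # Mean square error over the same error vector.
    e = np.asarray(e, dtype=float)
    return np.mean(e ** 2)

# A single large outlier dominates the MSE but only adds a logarithmic term to the CL.
e_clean = np.array([0.05, -0.02, 0.01, 0.03])
e_outlier = np.append(e_clean, 50.0)
print(mse_loss(e_clean), mse_loss(e_outlier))        # MSE grows by orders of magnitude
print(cauchy_loss(e_clean), cauchy_loss(e_outlier))  # CL changes only moderately
```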

2.2. Conjugate Gradient Algorithm

The typical conjugate direction method is the conjugate gradient (CG) method, which selects the successive direction vectors as conjugate versions of the successive gradients. The CG method was introduced to linear adaptive filtering in [24,26], where it is used to solve the following linear equation:
$$R\,\omega = b, \tag{6}$$
which is equivalent to minimizing the following purely quadratic function:
$$f(\omega) = \frac{1}{2}\omega^T R\,\omega - b^T\omega, \tag{7}$$
where $\omega \in \mathbb{R}^n$ is the weight vector, $b \in \mathbb{R}^n$ is the cross-correlation vector, and $R \in \mathbb{R}^{n\times n}$ is a symmetric positive definite auto-correlation matrix. To find the optimal $\omega$, the CG method described as follows provides an alternative to estimating $R^{-1}$. Beginning with any $\omega_0 \in \mathbb{R}^n$ and direction $p_0 = g_0 = b - R\omega_0$, the global minimum of (7) can be derived by iteratively computing
$$\begin{aligned} \alpha_k &= \frac{g_k^T p_k}{p_k^T R\,p_k}, \\ \omega_{k+1} &= \omega_k + \alpha_k p_k, \\ g_{k+1} &= b - R\,\omega_{k+1}, \\ \beta_k &= \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}, \\ p_{k+1} &= g_{k+1} + \beta_k p_k, \end{aligned} \tag{8}$$
where $(\cdot)^T$ denotes the transpose, $g_k$ is the negative gradient of $f(\omega)$ with respect to $\omega$ at discrete time $k$, the step-size $\alpha_k$ is given by $\arg\min_{\alpha} f(\omega_k + \alpha p_k)$, and the constant $\beta_k$ is selected to provide $R$-conjugacy for the vector $p_k$ with regard to the previous direction vectors $p_{k-1}, p_{k-2}, \ldots, p_0$. In (8), note that the new conjugate direction $p_{k+1}$ is formed as a linear combination of the current negative gradient $g_{k+1}$ and the previous direction vector, with a proper $\beta_k$ chosen to avoid manually resetting the direction vector.
The CG method can be regarded as a trade-off between the stochastic gradient descent (SGD) method and Newton's method in terms of convergence rate and the complexities of computation and storage. In particular, the inversion of the Hessian matrix, which may cause the divergence of the algorithm, is avoided in the CG method. Therefore, the CG method is widely applicable to quadratic optimization problems, and it can also be extended to approximately solve non-quadratic optimization problems.
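A minimal Python sketch of the recursion in (8) is given below, assuming $R$ and $b$ are known exactly; the function name, stopping tolerance, and test system are illustrative rather than part of the paper.

```python
import numpy as np

def conjugate_gradient(R, b, tol=1e-10):
    # CG recursion of (8) for R w = b with R symmetric positive definite.
    n = b.size
    w = np.zeros(n)
    g = b - R @ w                         # negative gradient of f(w)
    p = g.copy()                          # initial direction p_0 = g_0
    for _ in range(n):                    # at most n steps in exact arithmetic
        alpha = (g @ p) / (p @ R @ p)     # exact line search along p
        w = w + alpha * p
        g_new = b - R @ w                 # updated negative gradient
        if np.linalg.norm(g_new) < tol:
            return w
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient of (8)
        p = g_new + beta * p              # new R-conjugate direction
        g = g_new
    return w

# Sanity check on a random symmetric positive definite system.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
R = A @ A.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(conjugate_gradient(R, b), np.linalg.solve(R, b)))  # True
```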

2.3. Online Conjugate Gradient Algorithm

The aforementioned CG method is generally used for offline applications. In this section, the online CG algorithm is given for online learning, where training data arrive sequentially [1,30].
It is necessary to estimate R and b for online applications. Using the exponentially decaying data window [2], the following recursive form to update R and b is given by
$$\begin{aligned} R_{k+1} &= \lambda R_k + x_{k+1} x_{k+1}^T, \\ b_{k+1} &= \lambda b_k + d_{k+1} x_{k+1}, \end{aligned} \tag{9}$$
where the positive constant $\lambda$ ($0<\lambda<1$) is usually close to one, $x_k \in \mathbb{R}^n$ is the input data, and $d_k$ is the desired output at iteration $k$.
Define a residual vector of the normal equations as $s_k = b_k - R_k\omega_k$ at iteration $k$. For clarity, the online CG algorithm to estimate the weight vector is summarized as follows [15]. Given the initial conditions $\omega_0 = 0$ and $p_0 = s_0$, when $\{x_k, d_k\}$ is available, do
$$\begin{aligned} \alpha_k &= \frac{s_k^T p_k}{p_k^T R_{k+1} p_k}, \\ \omega_{k+1} &= \omega_k + \alpha_k p_k, \\ R_{k+1} &= \lambda R_k + x_{k+1} x_{k+1}^T, \\ s_{k+1} &= \lambda s_k - \alpha_k R_{k+1} p_k + e_{k+1} x_{k+1}, \\ \beta_k &= \frac{s_{k+1}^T s_{k+1}}{s_k^T s_k}, \\ p_{k+1} &= s_{k+1} + \beta_k p_k. \end{aligned} \tag{10}$$
The online CG algorithm is efficient only for linear problems and may degrade for nonlinear problems. To address this limitation, the online kernel conjugate gradient (KCG) algorithm based on least squares (LS) was presented in [30]. Because the LS criterion is used in KCG, it may degrade considerably or even diverge in non-Gaussian noise environments. In addition, the linearly growing structure of KCG with each new sample poses both computational and memory issues. To address the complexity issue of KCG, RFFCG was proposed by combining RFFs with the CG method [15]. However, using the LS criterion, RFFCG is only appropriate for Gaussian noises. Therefore, we propose a novel online algorithm to deal with these issues.

3. Proposed Algorithm

In this section, we first describe a fixed dimensional mapping in random Fourier features space (RFFS). Then, the CG method can be applied to the MCL criterion to generate a robust algorithm—that is, the robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm.

3.1. Random Fourier Mapping

Based on an available sequence of training data pairs $\{x_k, d_k\}_{k=1}^{N}$, kernel adaptive filters are used to deal with the following nonlinear problem:
$$d_k = f^*(x_k) + \upsilon_k, \tag{11}$$
where $x_k \in \mathbb{R}^n$ is the input data, $d_k \in \mathbb{R}$ is the desired output at iteration $k$ in the original data space $\mathbb{U}$, $\upsilon_k$ is an additive noise signal, and $f^*(x_k)$ is the optimal estimate of $d_k$ with a nonlinear input–output mapping $f^*: \mathbb{R}^n \to \mathbb{R}$.
To construct the mapping relationship in the form of $f(\cdot) = \sum_{l=1}^{k}\eta_l\,\kappa(\cdot, x_l)$, a kernel method [2] is utilized to map the original input data into a high-dimensional feature space. A continuous, symmetric, and positive definite Mercer kernel is used in the kernel method. Thus, using the kernel method, we can compute the filter output at discrete time $k$ as
$$f(x_k) = \sum_{l=1}^{k-1}\eta_l\,\kappa(x_k, x_l), \tag{12}$$
with coefficients $\eta_l$, $l=1,2,\ldots,k-1$. We observe that the network grows linearly with the number of training data in the filter output, which poses some challenges to the kernel method in practice. However, the random Fourier features (RFFs) present an efficient solution to this problem by transforming the input $x_k$ into $z(x_k)$, that is,
$$z : x_k \in \mathbb{R}^n \mapsto z(x_k) \in \mathbb{R}^m, \tag{13}$$
where $m \gg n$. Considering a shift-invariant and positive definite kernel $\kappa(x,y)=\kappa(x-y)$ on $\mathbb{R}^n$, Bochner's theorem [12] ensures that the Fourier transform of the probability density $\rho(w)$ corresponds to the kernel function, given by
$$\kappa(x-y) = \int_{\mathbb{R}^n} \rho(w)\, e^{j w^T (x-y)}\, dw, \tag{14}$$
where $\rho(\cdot)$ denotes the probability density function (PDF) and $w$ is a Gaussian random vector drawn from $\rho(w) \sim \mathcal{N}(0, \sigma^{-2} I_n)$ with an $n\times n$ identity matrix $I_n$. In (14), the commonly used Mercer kernel is the following Gaussian kernel [2]:
$$\kappa(x,y) = \exp\!\left(-\frac{\|x-y\|_2^2}{2\sigma^2}\right), \tag{15}$$
where $\sigma$ is the kernel bandwidth and $\|\cdot\|_2$ is the $\ell_2$ norm.
Now, define $Z_w(x) = e^{j w^T x}$; then (14) is actually equivalent to the expectation of $Z_w^H(x)\,Z_w(y)$. Thus, we have the following expression for the kernel function:
$$\kappa(x-y) = \mathrm{E}_w\!\left[Z_w^H(x)\, Z_w(y)\right], \tag{16}$$
where $(\cdot)^H$ is the conjugate transpose operation. In practice, the complex exponential is replaced by the real-valued features
$$Z_w(x) = \sqrt{2}\cos(w^T x + b), \qquad Z_w(y) = \sqrt{2}\cos(w^T y + b), \tag{17}$$
with $b$ drawn from the uniform distribution on $[0, 2\pi]$ [12].
Based on $m$ random Fourier features $w_1,\ldots,w_m$ and $m$ uniformly distributed random numbers $b_1,\ldots,b_m$, the approximation of the kernel function in (16) can be rewritten as the empirical average of $m$ random components:
$$Z_{w,b}^T(x)\, Z_{w,b}(y) \approx \frac{1}{m}\sum_{i=1}^{m} Z_{w_i,b_i}(x)\, Z_{w_i,b_i}(y), \tag{18}$$
where
$$Z_{w,b}(x) = \left[Z_{w_1,b_1}(x), \ldots, Z_{w_m,b_m}(x)\right]^T = \sqrt{\frac{2}{m}}\left[\cos(w_1^T x + b_1), \ldots, \cos(w_m^T x + b_m)\right]^T. \tag{19}$$
The random Fourier features method for approximating the Gaussian kernel is similar to the standard Monte Carlo approximation method. Therefore, the transformed input data $z(x_k)$ are
$$z(x_k) = \sqrt{\frac{2}{m}}\left[\cos(w_1^T x_k + b_1), \ldots, \cos(w_m^T x_k + b_m)\right]^T, \tag{20}$$
which is called random Fourier mapping (RFM). The dimension of the subspace to which $z(x_k)$ belongs is finite. Therefore, a linear adaptive filtering structure is obtained based on the transformed input $z(x_k)$.
The Gaussian kernel function in (12) can be represented by the inner product of the transformed input vectors as
$$\kappa(x_k, x_l) = z^T(x_k)\, z(x_l), \tag{21}$$
with a low approximation error $\varepsilon$ provided by $m = O\!\left(n\,\varepsilon^{-2}\log\frac{1}{\varepsilon^{2}}\right)$ [14].
Combining (12) and (21), the filter output can be recomputed as $f(x_k) = (\Omega^r)^T z(x_k)$, where $\Omega^r$ is a finite-dimensional weight vector in RFFS. Thus, an infinite-dimensional implicit feature space is embedded into a relatively low-dimensional explicit feature space. The expression of the filter output is similar to that of the linear least mean square (LMS) algorithm [2]. Therefore, a huge amount of time is saved in the RFM-based algorithms. The fixed-dimensional structure will be used to develop the following algorithm. Denote $z(x_k) = z_k$ for simplicity hereafter.
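The mapping in (20) and the kernel approximation in (21) can be checked with a few lines of Python. This is a minimal sketch under the assumption that $w_i \sim \mathcal{N}(0, \sigma^{-2}I_n)$; the helper name make_rfm and the test vectors are our own.

```python
import numpy as np

def make_rfm(n, m, sigma, rng):
    # Draw the random Fourier mapping z(.) of (20) for a Gaussian kernel of bandwidth sigma:
    # w_i ~ N(0, sigma^-2 I_n) and b_i ~ U[0, 2*pi].
    W = rng.normal(0.0, 1.0 / sigma, size=(m, n))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return lambda x: np.sqrt(2.0 / m) * np.cos(W @ x + b)

rng = np.random.default_rng(1)
n, m, sigma = 7, 500, 1.0
z = make_rfm(n, m, sigma, rng)
x, y = 0.1 * rng.standard_normal(n), 0.1 * rng.standard_normal(n)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))  # Gaussian kernel (15)
approx = z(x) @ z(y)                                      # inner product (21)
print(exact, approx)  # the two values agree closely for moderately large m
```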

3.2. RFFCCG Algorithm

The Cauchy loss function for robust adaptive filtering is presented as follows:
$$J(\Omega^r) = \frac{1}{N}\sum_{k=1}^{N}\ln\!\left(1+\frac{e_k^2}{\gamma^2}\right) = \frac{1}{N}\sum_{k=1}^{N}\ln\!\left(1+\frac{\left(d_k-(\Omega^r)^T z_k\right)^2}{\gamma^2}\right), \tag{22}$$
where $e_k = d_k - (\Omega^r)^T z_k$ is the prediction error with respect to the transformed input $z_k \in \mathbb{R}^m$, $d_k \in \mathbb{R}$ is the desired output, and $\Omega^r \in \mathbb{R}^m$ is the weight vector in RFFS. Compared with (5), $(\Omega^r)^T z_k$ corresponds to $y_k$.
The gradient of the loss function with respect to $\Omega^r$ is given by
$$\nabla J(\Omega^r) = -\frac{1}{N}\sum_{k=1}^{N}\frac{2}{\gamma^2+e_k^2}\left(d_k-(\Omega^r)^T z_k\right) z_k = -\frac{1}{N}\sum_{k=1}^{N}\theta_k d_k z_k + \frac{1}{N}\sum_{k=1}^{N}\theta_k z_k z_k^T\,\Omega^r, \tag{23}$$
where $\theta_k = \frac{2}{\gamma^2+e_k^2}$. Hence, $\theta_k$ tends to 0 as $|e_k| \to \infty$.
Letting $\nabla J(\Omega^r) = 0$ generates
$$\sum_{k=1}^{N}\theta_k z_k z_k^T\,\Omega^r = \sum_{k=1}^{N}\theta_k d_k z_k, \tag{24}$$
where the weighted auto-correlation matrix $R^r$ and the weighted cross-correlation vector $b^r$ are given by
$$R^r = \sum_{k=1}^{N}\theta_k z_k z_k^T \in \mathbb{R}^{m\times m}, \qquad b^r = \sum_{k=1}^{N}\theta_k d_k z_k \in \mathbb{R}^{m}. \tag{25}$$
With the definitions in (25), (24) can be rewritten as
$$R^r\,\Omega^r = b^r, \tag{26}$$
where, in practice, the optimal solution $\Omega^r$ can be obtained by the CG method instead of estimating $(R^r)^{-1}$.
For online learning in RFFS, $R^r$ and $b^r$ are given by using an exponentially decaying data window as follows:
$$\begin{aligned} R^r_{k+1} &= \lambda R^r_k + \theta_{k+1} z_{k+1} z_{k+1}^T, \\ b^r_{k+1} &= \lambda b^r_k + \theta_{k+1} d_{k+1} z_{k+1}, \end{aligned} \tag{27}$$
where the positive forgetting factor $\lambda$, which is very close to but smaller than one (i.e., $\lambda \in (0,1)$), is used to scale down past data. Since $\theta_k$ tends to 0 as $|e_k| \to \infty$, outliers produce almost no change in $R^r_{k+1}$ and $b^r_{k+1}$ in (27). In such a case, according to (26), $\Omega^r$ is hardly updated, which makes the algorithm robust against outliers.
Then, we use an important concept called the search direction vector. A fundamental relation between the current optimal weight and the previous optimal weight in RFFS is obtained [25]. We have the recursion to update $\Omega^r$ as
$$\Omega^r_{k+1} = \Omega^r_k + \alpha_k p^r_k, \tag{28}$$
which suggests that we can estimate the optimal weight $\Omega^r_{k+1}$ of a nonlinear dynamical system by combining the previous $\Omega^r_k$ with the search direction vector $p^r_k$ multiplied by a step-size parameter $\alpha_k$. Furthermore, by conjugate gradient theory [24,25], we can express $\alpha_k$ as
$$\alpha_k = \frac{(s^r_k)^T p^r_k}{(p^r_k)^T R^r_{k+1} p^r_k}. \tag{29}$$
Now, a residual vector of the normal equations $s^r$ is introduced in the kernel space. From (27)–(29), we have the recursive residual vector of RFFCCG as follows:
$$s^r_{k+1} = b^r_{k+1} - R^r_{k+1}\Omega^r_{k+1} = \lambda s^r_k - \alpha_k R^r_{k+1} p^r_k + \theta_{k+1} e_{k+1} z_{k+1}, \tag{30}$$
where $e_{k+1} = d_{k+1} - (\Omega^r_k)^T z_{k+1}$ is the prediction error between the desired output and the real output in RFFS. These residual vectors are orthogonal to each other, that is, $(s^r_k)^T s^r_l = 0$ for $l = 0, \ldots, k-1$.
In [25], the Hessian matrix must be recalculated at each iteration for the Hestenes–Stiefel method, which requires a large amount of computation, and the numerator of the Fletcher–Reeves method may be close to zero, resulting in poor performance. According to the global convergence analysis, the Polak–Ribière method [22], which adopts a degenerated scheme to calculate $\beta_k$, gives the best conjugate gradient performance. Thus, a proper coefficient $\beta_k$ ($k = 1, 2, \ldots$) to update the current search direction $p^r_{k+1}$ automatically is expressed as
$$\beta_k = \frac{(s^r_{k+1})^T (s^r_{k+1} - s^r_k)}{(s^r_k)^T s^r_k}. \tag{31}$$
By using (31) directly, a linear formulation of the new search direction is given by
$$p^r_{k+1} = s^r_{k+1} + \beta_k p^r_k, \tag{32}$$
where $\beta_k$ is set to provide the $R$-conjugacy of the new search direction $p^r_{k+1}$ with regard to the previous direction vectors (i.e., $p^r_1, p^r_2, \ldots, p^r_k$). The search direction is updated at each iteration to ensure the convergence of the algorithm. According to the spanning subspace theorem [30], the residual vector at iteration $k$ is orthogonal to the search directions in the Krylov subspace, that is, $(s^r_k)^T p^r_l = 0$ for $l < k$.
Finally, combining Equations (27)–(32), we summarize the online RFFCCG algorithm in Algorithm 1.
Remark 1.
The proposed RFFCCG algorithm uses an explicit mapping method to transform the original input into RFFS, generating a linear filtering structure. Based on a data-independent vector representation, RFFs can provide good tracking performance for non-stationary circumstances. The network size of RFFCCG only depends on the dimension m, which plays a significant role in the approximation accuracy. Generally, a larger dimension simultaneously results in a higher filtering accuracy with more computational and storage burdens. In practice, to balance the filtering accuracy and complexity, an appropriate dimension can be chosen by trials to obtain the desirable performance. In addition, we can find from Figure 1 that the Cauchy loss function can provide good robustness to large outliers. Hence, the proposed RFFCCG can also be applied to channel equalization and noise cancellation [2] in non-Gaussian noise environments.
Algorithm 1: The robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm.
        Input: Sequential input–output pairs $\{x_k, d_k\}_{k=1}^{N}$, kernel bandwidth $\sigma > 0$, forgetting factor
         $\lambda \in (0,1)$, the dimension of RFF $m > 0$, and constant $\gamma > 0$.
        Draw: i.i.d. $w_i \sim \mathcal{N}(0, \sigma^{-2} I_n)$, $i = 1, \ldots, m$, where $n > 0$ is the dimension of the original data space;
                    i.i.d. $b_i \sim U[0, 2\pi]$, $i = 1, \ldots, m$, where $U$ denotes the uniform distribution.
        Initialization: $z_1 = \sqrt{2/m}\,[\cos(w_1^T x_1 + b_1), \ldots, \cos(w_m^T x_1 + b_m)]^T$,
         $\Omega^r_1 = 0$, $e_1 = d_1$, $\theta_1 = \frac{2}{\gamma^2 + e_1^2}$, $R^r_1 = \theta_1 z_1 z_1^T$, $b^r_1 = \theta_1 d_1 z_1$, $s^r_1 = b^r_1 - R^r_1 \Omega^r_1$, $p^r_1 = s^r_1$.
        Computation:
        while a new pair $\{x_{k+1}, d_{k+1}\}$ is available, do
        1.     $z_{k+1} = \sqrt{2/m}\,[\cos(w_1^T x_{k+1} + b_1), \ldots, \cos(w_m^T x_{k+1} + b_m)]^T$,
        2.     $y_{k+1} = (\Omega^r_k)^T z_{k+1}$,
        3.     $e_{k+1} = d_{k+1} - y_{k+1}$,
        4.     $\theta_{k+1} = \frac{2}{\gamma^2 + e_{k+1}^2}$,
        5.     $R^r_{k+1} = \lambda R^r_k + \theta_{k+1} z_{k+1} z_{k+1}^T$,
        6.     $\alpha_k = \frac{(s^r_k)^T p^r_k}{(p^r_k)^T R^r_{k+1} p^r_k}$,
        7.     $\Omega^r_{k+1} = \Omega^r_k + \alpha_k p^r_k$,
        8.     $s^r_{k+1} = \lambda s^r_k - \alpha_k R^r_{k+1} p^r_k + \theta_{k+1} e_{k+1} z_{k+1}$,
        9.     $\beta_k = \frac{(s^r_{k+1})^T (s^r_{k+1} - s^r_k)}{(s^r_k)^T s^r_k}$,
        10.     $p^r_{k+1} = s^r_{k+1} + \beta_k p^r_k$.
        end while
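For concreteness, a compact Python sketch of Algorithm 1 is given below. It is a minimal reimplementation under the assumptions stated above (real-valued RFM with $w_i \sim \mathcal{N}(0, \sigma^{-2}I_n)$); the function name, data layout, and the toy usage at the end are our own and are only meant for illustration.

```python
import numpy as np

def rffccg(X, d, m=60, sigma=1.0, lam=0.999, gamma=0.3, seed=0):
    # Follows Algorithm 1 step by step; variable names mirror the paper's symbols.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(m, n))           # w_i ~ N(0, sigma^-2 I_n)
    bias = rng.uniform(0.0, 2.0 * np.pi, size=m)             # b_i ~ U[0, 2*pi]
    z = lambda x: np.sqrt(2.0 / m) * np.cos(W @ x + bias)    # RFM of (20)

    # Initialization with the first sample (Omega_1 = 0, so e_1 = d_1).
    z1 = z(X[0])
    e = d[0]
    theta = 2.0 / (gamma ** 2 + e ** 2)
    R = theta * np.outer(z1, z1)
    br = theta * d[0] * z1
    Omega = np.zeros(m)
    s = br - R @ Omega
    p = s.copy()

    errors = [e]
    for k in range(1, len(d)):
        zk = z(X[k])                                          # Step 1
        e = d[k] - Omega @ zk                                 # Steps 2-3: prediction error
        theta = 2.0 / (gamma ** 2 + e ** 2)                   # Step 4: Cauchy weight
        R = lam * R + theta * np.outer(zk, zk)                # Step 5: weighted autocorrelation
        alpha = (s @ p) / (p @ R @ p)                         # Step 6: step size
        Omega = Omega + alpha * p                             # Step 7: weight update
        s_new = lam * s - alpha * (R @ p) + theta * e * zk    # Step 8: residual update
        beta = (s_new @ (s_new - s)) / (s @ s)                # Step 9: Polak-Ribiere coefficient
        p = s_new + beta * p                                  # Step 10: new search direction
        s = s_new
        errors.append(e)
    return Omega, np.array(errors)

# Toy usage: learn d_k = sin(3 x_k) from lightly noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 1))
d = np.sin(3 * X[:, 0]) + 0.01 * rng.standard_normal(2000)
Omega, errors = rffccg(X, d)
print(np.mean(errors[-200:] ** 2))  # steady-state training MSE
```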

3.3. Complexity

In this section, we mainly discuss the complexities of RFFCCG in terms of computation and storage. The computational complexity of RFFCCG at each iteration is summarized as follows. It can be seen from Algorithm 1 that, at each iteration, RFFCCG requires a total of $4m^2+(n+12)m+1$ multiplications from Steps 1, 2, and 5–10; $3m^2+(n+11)m+1$ additions from Steps 1, 2, and 4–10; and 4 divisions from Steps 1, 4, 6, and 9. The computation of $\cos(\cdot)$ is not considered here since its cost is negligible compared with the rest of RFFCCG. We compare the computational complexity of RFFCCG with the kernel least mean square (KLMS) algorithm [3], kernel recursive least squares (KRLS) algorithm [5], kernel conjugate gradient (KCG) algorithm [30], and kernel recursive maximum correntropy (KRMC) algorithm [31] in Table 1, where $k$ is the number of iterations and $n$ and $m$ are the dimensions of the original data space and RFFS, respectively. We clearly see from Table 1 that the proposed RFFCCG algorithm has a fixed computational complexity, while the computational complexities of KLMS, KRLS, KCG, and KRMC increase with the network size. Thus, RFFCCG can significantly reduce the computational requirements.
Further, we discuss the storage complexity of RFFCCG based on the number and size of stored matrices. From Algorithm 1, only an $m \times m$ symmetric matrix $R^r$ needs to be stored. Therefore, the storage complexity of RFFCCG is $O(m^2)$, which is fixed. Table 2 lists the comparison with other algorithms based on the stored matrices. Note that the storage complexity of RFFCCG is fixed, whereas the other algorithms have growing network sizes. In summary, compared with the KAFs without sparsification in Table 1 and Table 2, RFFCCG can efficiently alleviate the computational and storage burdens of KAFs.

4. Simulation

To demonstrate the superior performance of the proposed RFFCCG algorithm, simulations were performed in this section on Mackey–Glass chaotic time series prediction and nonlinear system identification. Due to their modest complexity and excellent performance, representative algorithms (i.e., the random Fourier features kernel least mean square (RFFKLMS) algorithm [13], quantized kernel recursive least squares (QKRLS) algorithm [32], random Fourier features maximum correntropy (RFFMC) algorithm [14], kernel recursive maximum correntropy algorithm with novelty criterion (KRMC-NC) [31], and random Fourier features conjugate gradient (RFFCG) algorithm [15]) were selected for comparison with RFFCCG. Among these algorithms, RFFMC and KRMC-NC are typical robust algorithms, while RFFKLMS, QKRLS, and RFFCG, which are not robust, are also used as filtering performance references. For all simulations, we ran 50 independent Monte Carlo runs to reduce random fluctuations, using Matlab R2016b on Windows 10 on a PC with a 3.30 GHz CPU and 8 GB of RAM.
To evaluate the filtering performance of algorithms, the testing mean-square error (MSE) is defined as:
$$\mathrm{MSE\,(dB)} = 10\log_{10}\!\left(\frac{1}{N}\sum_{k=1}^{N}(d_k-y_k)^2\right), \tag{33}$$
where y k is the prediction of d k and N is the length of testing data.
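For reference, (33) translates directly into a one-line helper; the function name is our own.

```python
import numpy as np

def testing_mse_db(d, y):
    # Testing MSE of (33) in dB over the test set.
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean((d - y) ** 2))
```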
The non-Gaussian noise model considered in this section is impulsive noise [33], modeled as the combination of two mutually independent noise processes. We assumed the mixture noise model $\upsilon_k = (1-a_k)A_k + a_k B_k$, where $a_k \in \{0,1\}$ is a binary variable with occurrence probabilities $P(a_k=1)=c$ and $P(a_k=0)=1-c$ ($0 \le c \le 1$). Unless otherwise mentioned, the parameter $c$ was set to 0.1 and $A_k$ is a zero-mean Gaussian distribution with $\sigma_A^2 = 0.01$. For $B_k$, we mainly considered the $\alpha$-stable noise (heavy-tailed impulsive noise) process with characteristic function [34]:
$$\varphi(t) = \exp\!\left\{ j\varepsilon t - \eta |t|^{\alpha}\left[1 + j\beta\,\mathrm{sgn}(t)\,S(t,\alpha)\right]\right\}, \tag{34}$$
where
$$S(t,\alpha) = \begin{cases} \tan\dfrac{\alpha\pi}{2}, & \alpha \ne 1, \\ \dfrac{2}{\pi}\log|t|, & \alpha = 1, \end{cases} \tag{35}$$
where $\alpha \in (0,2]$ is the characteristic factor measuring the heaviness of the tail (a smaller $\alpha$ means larger impulses), $\eta$ is the dispersion factor that controls the number of impulses, $-\infty < \varepsilon < +\infty$ is the location factor, $\beta \in [-1,1]$ is the symmetry factor, $\mathrm{sgn}(\cdot)$ is the sign function, and $j = \sqrt{-1}$. The parameter vector of the noise model is written as $V_{\alpha\text{-stable}} = [\alpha, \beta, \eta, \varepsilon]$. Here, we chose $V_{\alpha\text{-stable}} = [0.8, 0, 0.1, 0]$ in the following simulations.
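The following Python sketch generates this mixture noise, assuming the standard Chambers–Mantegna–Stuck sampler for the symmetric case ($\beta = 0$, $\varepsilon = 0$) used here; the function names and the default random seed are our own.

```python
import numpy as np

def symmetric_alpha_stable(alpha, eta, size, rng):
    # Chambers-Mantegna-Stuck sampler for a symmetric (beta = 0, epsilon = 0) alpha-stable
    # variable with characteristic function exp(-eta * |t|**alpha); scale = eta**(1/alpha).
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))
    return eta ** (1.0 / alpha) * X

def mixture_noise(size, c=0.1, var_A=0.01, alpha=0.8, eta=0.1, rng=None):
    # Impulsive mixture v_k = (1 - a_k) A_k + a_k B_k with P(a_k = 1) = c,
    # A_k Gaussian with variance var_A and B_k alpha-stable with V = [0.8, 0, 0.1, 0].
    rng = rng if rng is not None else np.random.default_rng(0)
    a = rng.binomial(1, c, size)
    A = rng.normal(0.0, np.sqrt(var_A), size)
    B = symmetric_alpha_stable(alpha, eta, size, rng)
    return (1 - a) * A + a * B
```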

4.1. Mackey–Glass Time Series

Since the Mackey–Glass (MG) chaotic system is a benchmark for nonlinear learning problems, we first considered the MG chaotic time series [2] in the following simulations, which is generated by the following delayed differential equation:
$$\frac{du(t)}{dt} = -b\,u(t) + \frac{a\,u(t-\tau)}{1+u(t-\tau)^{n}}, \tag{36}$$
with $a = 0.2$, $b = 0.1$, $n = 10$, and $\tau = 30$. The time series was discretized at a sampling period of 6 s and corrupted by the noise model mentioned above. We used the previous seven points $[u_{k-7}, u_{k-6}, \ldots, u_{k-1}]^T$ to predict the current value $u_k$. The predictor was trained with 2000 data points and tested with another 200.
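The sketch below generates such a series in Python by Euler integration of (36) and builds the seven-tap input vectors. The integration step, the constant initial history, and the discarded transient are our own choices, since the paper does not specify them.

```python
import numpy as np

def mackey_glass(n_samples, a=0.2, b=0.1, n=10, tau=30, dt=0.1, sample_every=60):
    # Euler integration of (36); with dt = 0.1, sampling every 60 steps gives a 6 s period.
    lag = int(round(tau / dt))
    warmup = 100                      # samples discarded as transient (our choice)
    total_steps = lag + (n_samples + warmup) * sample_every
    u = np.empty(total_steps)
    u[:lag + 1] = 1.2                 # constant initial history (our choice)
    for t in range(lag, total_steps - 1):
        du = -b * u[t] + a * u[t - lag] / (1.0 + u[t - lag] ** n)
        u[t + 1] = u[t] + dt * du
    return u[lag::sample_every][warmup:warmup + n_samples]

def embed(series, order=7):
    # Build inputs [u_{k-7}, ..., u_{k-1}] and targets u_k.
    X = np.array([series[k - order:k] for k in range(order, len(series))])
    d = series[order:]
    return X, d
```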
The parameter $\gamma$ is key in the proposed RFFCCG algorithm. In the first simulation, we discuss the influence of $\gamma$ on the filtering accuracy of RFFCCG against non-Gaussian noises. The parameter was selected from the set $\gamma \in \{0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 2, 4, 6\}$. The influence of $\gamma$ is shown in Figure 2, where the steady-state MSEs were obtained by averaging over the last 200 iterations. For RFFCCG, the Gaussian kernel bandwidth was set to $\sigma = 1$, the forgetting factor to $\lambda = 0.999$, and the dimension of RFF to $m = 100$. It can be seen from Figure 2 that the parameter $\gamma$ has a direct influence on the filtering performance of RFFCCG. The RFFCCG algorithm achieved the highest filtering accuracy at $\gamma = 0.3$; a too large or too small $\gamma$ causes performance degradation, whereas an appropriate $\gamma$ combats impulsive noises efficiently. Therefore, we set $\gamma = 0.3$ for RFFCCG in the following simulations.
In addition, the steady-state MSEs and the average consumed time versus the dimension $m$ are plotted in Figure 3. Here, the simulation environment and kernel bandwidth setting of RFFCCG were the same as those of Figure 2, and the range of $m$ was set to $[1, 100]$. From Figure 3, we observe that: (1) the average consumed time increased linearly with $m$; (2) the filtering accuracy of RFFCCG improved with increasing $m$ up to a point, but remained almost unchanged for $m \ge 60$. A larger dimension $m$ thus yields higher filtering accuracy at the expense of more computational time, so the dimension $m = 60$ was chosen for RFFCCG to provide a trade-off between filtering accuracy and computational time.
In this example, we compared the filtering accuracy and robustness of RFFCCG with those of the other filtering algorithms. The parameters of each algorithm were set to achieve the desired filtering accuracy at the same convergence rate. The bandwidth of the Gaussian kernel was set to 1 for all algorithms; the step size was $\eta = 0.1$ in RFFKLMS and RFFMC; the quantization threshold $\varepsilon = 0.05$ was chosen for QKRLS; the distance threshold and the error threshold were set to $\delta_1 = 0.1$ and $\delta_2 = 0.1$, respectively, and the regularization parameter to $\lambda = 0.1$ for KRMC-NC; for RFFCG and RFFCCG, the forgetting factor was set to 0.999; $\gamma = 0.3$ was chosen for RFFCCG; and the dimension of RFFS was $m = 60$. From Figure 4, we observe that the performance of the quadratic-criterion-based algorithms (i.e., RFFKLMS, QKRLS, and RFFCG) deteriorated in the non-Gaussian noise environment, while RFFMC, KRMC-NC, and RFFCCG always maintained stable and desirable performance when impulsive noise appeared. In particular, the filtering performance of RFFCCG was very close to that of the recursive KRMC-NC algorithm and better than that of the SGD-based RFFMC algorithm. Table 3 lists the detailed simulation results in terms of dictionary size, steady-state MSE, and average consumed time. One can also observe that RFFCCG produced filtering accuracy comparable to KRMC-NC with less consumed time and lower storage requirements. Thus, RFFCCG is the most efficient among the compared algorithms for MG time series prediction.

4.2. Nonlinear System Identification

To further validate the superiority of RFFCCG, we chose the problem of nonlinear system identification, where the nonlinear system is of the form [35]
$$u_k = \left(a_1 - a_2\exp(-u_{k-1}^2)\right)u_{k-1} - \left(a_3 + a_4\exp(-u_{k-1}^2)\right)u_{k-2} + a_5\sin(u_{k-1}\pi), \tag{37}$$
where $u_k$ denotes the output at discrete time $k$, $u_1 = 0.1$ and $u_2 = 0.1$ were taken as the initial values, and $a = [a_1, a_2, a_3, a_4, a_5]^T$ denotes the coefficient vector. The setting of the prediction task is as follows: the previous two values (i.e., $u = [u_{k-1}, u_{k-2}]$) were used as the input vector to predict the current value $u_k$. We considered stationary and non-stationary scenarios in the following simulations. The data were corrupted by the noise model mentioned above, and the Gaussian kernel with bandwidth $\sigma = 1$ was used for all the tested algorithms.
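To make the data-generation step explicit, here is a Python sketch of (37); the function name and the convention of adding the measurement noise afterwards are our own.

```python
import numpy as np

def generate_system(a, n_samples, u1=0.1, u2=0.1):
    # Iterate the nonlinear system (37) with coefficient vector a = [a1, ..., a5]
    # and return input pairs [u_{k-1}, u_{k-2}] with targets u_k (noise added separately).
    u = np.empty(n_samples + 2)
    u[0], u[1] = u1, u2
    for k in range(2, n_samples + 2):
        u[k] = ((a[0] - a[1] * np.exp(-u[k - 1] ** 2)) * u[k - 1]
                - (a[2] + a[3] * np.exp(-u[k - 1] ** 2)) * u[k - 2]
                + a[4] * np.sin(u[k - 1] * np.pi))
    X = np.column_stack([u[1:-1], u[:-2]])   # rows [u_{k-1}, u_{k-2}]
    d = u[2:]                                # targets u_k
    return X, d

# Stationary case: fixed coefficients, 2000 training pairs.
X_train, d_train = generate_system([0.8, 0.5, 0.3, 0.9, 0.1], 2000)
```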
In the stationary case, the coefficient vector was fixed at $a = [0.8, 0.5, 0.3, 0.9, 0.1]^T$. The first 2000 data points were used for training and another 200 for testing. We compared the testing MSE of RFFCCG with those of RFFKLMS, QKRLS, RFFMC, KRMC-NC, and RFFCG due to their modest complexities and excellent performance for the stationary system. The parameters were chosen to obtain the best results as follows: $\eta = 0.1$ for RFFKLMS; $\varepsilon = 0.002$ for QKRLS; $\eta = 0.4$ for RFFMC; $\delta_1 = 0.01$, $\delta_2 = 0.01$, and $\lambda = 0.1$ for KRMC-NC; a forgetting factor of 0.999 for RFFCG; a forgetting factor of 0.999 and $\gamma = 0.3$ for RFFCCG. To balance accuracy and computational time, $m = 50$ was configured as the dimension of RFFS. The learning curves of all the algorithms are shown in Figure 5. In this case, the RFFCCG algorithm still had satisfactory prediction ability, achieving performance comparable to KRMC-NC and better than RFFMC, while the other algorithms exhibited poor performance. This also means that RFFCCG has strong robustness against impulsive noises. Table 4 shows the dictionary size, steady-state MSEs, and average consumed time of all algorithms. As can be clearly seen from Figure 5 and Table 4, RFFCCG consumed less time and achieved a faster convergence rate and higher filtering accuracy than the compared algorithms, including RFFCG.
The tracking performance was evaluated in a non-stationary system where two different coefficient vectors were used for data generation: $a = [0.8, 0.5, 0.3, 0.3, 0.1]^T$ was selected for the first 2000 data, and $a = [0.4, 0.7, 0.6, 0.6, 0.2]^T$ was set for the following 2000 data. We compared the testing MSE of RFFCCG with those of RFFKLMS, RFFMC, and RFFCG due to their modest complexities and excellent performance in the non-stationary system. To compute the convergence curves, a total of 4000 data points were used for training with a sudden change at the 2001st data point. For the test process, 400 data points were used with a sudden change at the 201st data point. With the same criterion for parameter setting, the step sizes $\eta$ of RFFKLMS and RFFMC were chosen as 0.1 and 0.3, respectively, the forgetting factor 0.999 was used in RFFCG and RFFCCG, and the dimension of RFF was set as $m = 50$. The performance comparison is presented in Figure 6. It can be observed that all of the RFF-based algorithms were capable of tracking the change of the system. However, RFFCCG with $\gamma = 0.3$ outperformed all the compared algorithms when the abrupt change occurred. The dictionary size, steady-state MSEs, and consumed time of the tested algorithms, averaged over iterations 1500–2000 and 3500–4000, are summarized in Table 5. As observed in Figure 6 and Table 5, RFFCCG provided good tracking performance for a non-stationary system in non-Gaussian noises.
Therefore, in both stationary and non-stationary circumstances for the nonlinear system identification, the proposed RFFCCG algorithm offers excellent filtering performance in terms of filtering accuracy, convergence rate, robustness, and computational and space complexities.

5. Conclusions

In this paper, the robust random Fourier features Cauchy conjugate gradient (RFFCCG) algorithm was proposed and analyzed by integrating random Fourier mapping (RFM) into the Cauchy loss function with the conjugate gradient (CG) optimization method for nonlinear applications in non-Gaussian noise environments. RFM for a kernel adaptive filter (KAF) generates an effective finite-dimensional sparsification approach to obtain a more accurate and compact network. The developed RFFCCG algorithm with low computational and space complexities significantly improves the filtering performance in comparison with other representative robust filters against non-Gaussian noise interference. We discussed the influence of the free parameters and obtained suitable parameter values for RFFCCG. Simulation results in the presence of non-Gaussian noises validated the superiority of RFFCCG in terms of filtering accuracy, robustness, tracking performance, and time and storage consumption.

Author Contributions

Conceptualization, X.H. and S.W.; methodology, X.H. and K.X.; software, K.X. and S.W.; validation, S.W. and X.H.; formal analysis, X.H. and K.X.; investigation, X.H. and K.X.; resources, S.W.; data curation, S.W. and X.H.; writing—original draft preparation, X.H.; writing—review and editing, S.W. and K.X.; visualization, X.H.; supervision, S.W.; project administration, S.W. and K.X.; funding acquisition, S.W. and K.X.

Funding

This work was supported by the National Natural Science Foundation of China (61671389) and Fundamental Research Funds for the Central Universities (XDJK2019B011).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kivinen, J.; Smola, A.J.; Williamson, R.C. Online learning with kernels. IEEE Trans. Signal Process. 2004, 52, 1540–1547.
2. Liu, W.; Príncipe, J.C.; Haykin, S. Kernel Adaptive Filtering: A Comprehensive Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2010.
3. Liu, W.; Pokharel, P.P.; Príncipe, J.C. The kernel least mean square algorithm. IEEE Trans. Signal Process. 2008, 56, 543–554.
4. Liu, W.; Príncipe, J.C. Kernel affine projection algorithms. EURASIP J. Adv. Signal Process. 2008, 2008, 1–12.
5. Engel, Y.; Mannor, S.; Meir, R. The kernel recursive least squares algorithm. IEEE Trans. Signal Process. 2004, 52, 2275–2285.
6. Liu, W.; Park, I.; Príncipe, J.C. An information theoretic approach of designing sparse kernel adaptive filters. IEEE Trans. Neural Netw. 2009, 20, 1950–1961.
7. Richard, C.; Bermudez, J.C.M.; Honeine, P. Online prediction of time series data with kernels. IEEE Trans. Signal Process. 2009, 57, 1058–1067.
8. Wang, S.; Zheng, Y.; Duan, S.; Wang, L.; Tan, T. Quantized kernel maximum correntropy and its mean square convergence analysis. Dig. Signal Process. 2017, 63, 164–176.
9. Zhao, S.; Chen, B.; Zhu, P.; Príncipe, J.C. Fixed budget quantized kernel least mean square algorithm. Signal Process. 2013, 93, 2759–2770.
10. Vaerenbergh, S.V.; Vía, J.; Santamaría, I. A sliding-window kernel RLS algorithm and its application to nonlinear channel identification. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 14–19 May 2006; pp. 789–792.
11. Wang, S.; Wang, W.; Dang, L.; Jiang, Y. Kernel least mean square based on the Nyström method. Circuits Syst. Signal Process. 2019, 38, 3133–3151.
12. Rahimi, A.; Recht, B. Random features for large-scale kernel machines. In Proceedings of the 21st Annual Conference on Neural Information Processing Systems (ACNIPS), Vancouver, BC, Canada, 3–6 December 2007; pp. 1177–1184.
13. Singh, A.; Ahuja, N.; Moulin, P. Online learning with kernels: Overcoming the growing sum problem. In Proceedings of the 2012 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Santander, Spain, 23–26 September 2012; pp. 1–6.
14. Wang, S.; Dang, L.; Chen, B.; Duan, S.; Wang, L.; Tse, C.K. Random Fourier filters under maximum correntropy criterion. IEEE Trans. Circuits Syst. I Reg. Pap. 2018, 65, 3390–3403.
15. Xiong, K.; Wang, S. The online random Fourier features conjugate gradient algorithm. IEEE Signal Process. Lett. 2019, 26, 740–744.
16. Wu, Q.; Li, Y.; Xue, W. A kernel recursive maximum versoria-like criterion algorithm for nonlinear channel equalization. Symmetry 2019, 11, 1067.
17. Mathews, V.J.; Cho, S.H. Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 450–454.
18. Walach, E.; Widrow, B. The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory 1984, 42, 275–283.
19. Príncipe, J.C. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives; Springer: New York, NY, USA, 2010.
20. Li, Y.; Wang, Y.; Sun, L. A proportionate normalized maximum correntropy criterion algorithm with correntropy induced metric constraint for identifying sparse systems. Symmetry 2018, 10, 683.
21. Gallagher, C.H.; Fisher, T.J.; Shen, J. A Cauchy estimator test for autocorrelation. J. Stat. Comput. Simul. 2015, 85, 1264–1276.
22. Luenberger, D.G. Linear and Nonlinear Programming, 4th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2016.
23. Yang, J.; Ye, F.; Rong, H.J.; Chen, B. Recursive least mean p-power extreme learning machine. Neural Netw. 2017, 91, 22–33.
24. Boray, G.K.; Srinath, M.D. Conjugate gradient techniques for adaptive filtering. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1992, 39, 1–10.
25. Chang, P.S.; Willson, A.N. Analysis of conjugate gradient algorithms for adaptive filtering. IEEE Trans. Signal Process. 2008, 48, 409–418.
26. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436.
27. Dassios, I.; Fountoulakis, K.; Gondzio, J. A preconditioner for a primal-dual Newton conjugate gradients method for compressed sensing problems. SIAM J. Sci. Comput. 2015, 37, 2783–2812.
28. Heravi, A.R.; Hodtani, G.A. A new correntropy-based conjugate gradient backpropagation algorithm for improving training in neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 6252–6263.
29. Caliciotti, A.; Fasano, G.; Roma, M. Preconditioned nonlinear conjugate gradient methods based on a modified secant equation. Appl. Math. Comput. 2018, 318, 196–214.
30. Zhang, M.; Wang, X.; Chen, X.; Zhang, A. The kernel conjugate gradient algorithms. IEEE Trans. Signal Process. 2018, 66, 4377–4387.
31. Wu, Z.; Shi, J.; Zhang, X.; Ma, W.; Chen, B. Kernel recursive maximum correntropy. Signal Process. 2015, 117, 11–16.
32. Chen, B.; Zhao, S.; Zhu, P.; Príncipe, J.C. Quantized kernel recursive least squares algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1484–1491.
33. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Príncipe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387.
34. Weng, B.; Barner, K.E. Nonlinear system identification in impulsive environments. IEEE Trans. Signal Process. 2005, 53, 2588–2594.
35. Chen, S.; Billings, S.A.; Grant, P.M. Recursive hybrid algorithm for non-linear system identification using radial basis function networks. Int. J. Control 1992, 55, 1051–1070.
Figure 1. Comparison of mean square error (MSE) and Cauchy loss (CL) with different γ.
Figure 2. Steady-state MSEs of RFFCCG versus different γ in Mackey–Glass (MG) time series prediction for non-Gaussian noises.
Figure 3. Comparison of steady-state MSEs and average consumed time of RFFCCG versus m in MG time series prediction for non-Gaussian noises.
Figure 4. Learning curves of RFFCCG and different algorithms with m = 60 in MG time series prediction for non-Gaussian noises. KRMC-NC: kernel recursive maximum correntropy algorithm with novelty criterion; QKRLS: quantized kernel recursive least squares; RFFCG: random Fourier features conjugate gradient; RFFKLMS: random Fourier features kernel least mean square; RFFMC: random Fourier features maximum correntropy.
Figure 5. Learning curves of RFFCCG and different algorithms with m = 50 in nonlinear system identification of a stationary environment for non-Gaussian noises.
Figure 6. Learning curves of RFFCCG and different algorithms with m = 50 in nonlinear system identification of a non-stationary environment for non-Gaussian noises.
Table 1. Computational cost of algorithms per iteration. KCG: kernel conjugate gradient; KLMS: kernel least mean square; KRLS: kernel recursive least squares; KRMC: kernel recursive maximum correntropy.

Algorithm | Addition | Multiplication | Division
KLMS [3] | k | k | 0
KRLS [5] | 4k^2 + 4k | 4k^2 + 4k | 1
KRMC [31] | 4k^2 + 4k | 4k^2 + 4k | 2
KCG [30] | 2k^2 + 8k | 2k^2 + 10k | 3
RFFCCG | 3m^2 + (n + 11)m + 1 | 4m^2 + (n + 12)m + 1 | 4
Table 2. Storage cost of algorithms per iteration.

Algorithm | m × m matrix | n × k matrix | k × k matrix
KLMS [3] | 0 | 1 | 0
KRLS [5] | 0 | 1 | 1
KRMC [31] | 0 | 1 | 1
KCG [30] | 0 | 1 | 1
RFFCCG | 1 | 0 | 0
Table 3. Simulation results of RFFKLMS, QKRLS, RFFMC, KRMC-NC, RFFCG, and RFFCCG in MG time series prediction for non-Gaussian noises.

Algorithm | Size | MSE (dB) | Consumed Time (s)
RFFKLMS [13] | 60 | N/A | 2.6305
QKRLS [32] | 182 | N/A | 8.2586
RFFMC [14] | 60 | −22.3870 | 2.6282
KRMC-NC [31] | 500 | −29.6680 | 3.6299
RFFCG [15] | 60 | N/A | 2.6183
RFFCCG | 60 | −30.5060 | 2.6750
Table 4. Simulation results of RFFKLMS, QKRLS, RFFMC, KRMC-NC, RFFCG, and RFFCCG in nonlinear system identification of a stationary environment for non-Gaussian noises.

Algorithm | Size | MSE (dB) | Consumed Time (s)
RFFKLMS [13] | 50 | N/A | 2.2773
QKRLS [32] | 160 | N/A | 7.1883
RFFMC [14] | 50 | −23.6263 | 2.3359
KRMC-NC [31] | 500 | −34.3570 | 2.9277
RFFCG [15] | 50 | N/A | 2.3768
RFFCCG | 50 | −38.9975 | 2.3390
Table 5. Simulation results of RFFKLMS, RFFMC, RFFCG, and RFFCCG in nonlinear system identification of a non-stationary environment for non-Gaussian noises.

Algorithm | Size | MSE (dB) | Consumed Time (s)
RFFKLMS [13] | 50 | N/A | 0.1048
RFFMC [14] | 50 | −24.4093 / −21.5835 | 0.1039
RFFCG [15] | 50 | N/A | 0.1794
RFFCCG | 50 | −39.3061 / −34.0029 | 0.1786
